TITLE: What is the equation of the orthogonal group (as a variety/manifold)? QUESTION [5 upvotes]: I have been studying some elementary Lie theory recently, so I have been thinking about matrix groups as manifolds. Most simple examples of manifolds that we learn about in high school or even college are solutions to algebraic equations (for example, conic sections), i.e. algebraic varieties. This led to the thought (which is also my question): Is the orthogonal group also a manifold which is the solution to some multivariable polynomial equation? If so, what is it? Proposed answer: Let $X$ denote the $n \times n$ real-valued matrix whose $ij$th entry is the monomial/variable $x_{ij}$. Then the orthogonal group is the zero set of the system of polynomial equations (in $n^2$ variables): $$X^T X - I_n = 0$$ Is this really the answer? It seems too dumb to be possible. Then again, I have never really thought of an entire collection of matrices as the set of solutions to a system of polynomial equations before. Check: this implies that the determinants of these matrices are $\pm 1$ by the multiplicative property of determinants. Check: for the case $n=1$ we have $$x^2 -1 = 0 \implies x = \pm 1$$ which is O(1) as necessary/expected. Check: for the case $n=2$ we have $$\begin{bmatrix} x_1 & x_3 \\ x_2 & x_4 \end{bmatrix} \begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix} - \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = 0 \implies \begin{array}{c} x_1^2 + x_3^2 = 1 \\ x_1x_2 + x_3x_4 = 0 \\ x_2 x_1 + x_4 x_3 = 0 \\ x_2^2 + x_4^2 = 1 \end{array}$$ I know that O(2) has two path components, each of which is diffeomorphic to SO(2), which is just the unit circle, and the solution to this system of equations does suggest the disjoint union of two unit circles because of the equations $x_1^2 + x_3^2 =1$ and $x_2^2 + x_4^2=1$; however, I am not sure how to interpret the auxiliary condition that $$x_1 x_2 = -x_3x_4$$ Does this guarantee that the circles are disjoint somehow? Otherwise I am not sure whether this system is giving me the correct answer for the case $n=2$. REPLY [4 votes]: $f(X)=X^TX-I=0$ is an implicit algebraic equation of $O(n)$ (there are $n(n+1)/2$ distinct equations in the $n^2$ real unknowns $(x_{i,j})_{i,j}$). Consequently $O(n)$ is a real algebraic variety of dimension $\geq n^2-n(n+1)/2=n(n-1)/2$. Since $O(n)$ is a group, to obtain its dimension it suffices to consider the kernel of the derivative of $f$ at $I$: $Df_I:H\mapsto H^T+H$, and $\ker(Df_I)=SK_n$ is the set of skew-symmetric matrices, a vector space of dimension $n(n-1)/2$; conclusion: $\dim(O(n))=n(n-1)/2$. From a topological point of view, $O(n)$ is complicated; it has $2$ connected components $SO(n)$ and $O^-(n)$ that are algebraically diffeomorphic ($X\in O^-(n)\mapsto X\operatorname{diag}(-1,1,\cdots,1)\in SO(n)$); note that $SO(n)$ is a group and $O^-(n)$ is not. In the sequel we consider only $SO(n)$. Note that $SO(n)$ is not simply connected: $SO(2)=S^1$ ($S^1$ can be covered by, i.e. uncoiled in the form of, the simply connected $\mathbb{R}$ of dimension $1$); $SO(3)=RP^3$, the real projective space (it can be covered by the simply connected $S^3$ of dimension $3$); and $SO(4)$ can be covered by the simply connected $S^3\times S^3$ of dimension $6$. By removing a part of $SO(n)$, we can trivialize it. Let $U=\{X\in SO(n)\mid X+I \text{ is singular}\}$. Then the Cayley transform (https://en.wikipedia.org/wiki/Cayley_transform) $$K\in SK_n\mapsto (I-K)(I+K)^{-1}\in SO(n)\setminus U$$ is an algebraic parametrization of $SO(n)\setminus U$.
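For concreteness, here is a minimal numerical sketch of this parametrization (a Python/NumPy illustration; the random skew-symmetric $K$ is just an arbitrary test input):

```python
# Quick check of the Cayley parametrization: a random skew-symmetric K
# is mapped to an element of SO(n) avoiding U.
import numpy as np

n = 5
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
K = A - A.T                                         # K^T = -K, so K is in SK_n
Q = (np.eye(n) - K) @ np.linalg.inv(np.eye(n) + K)  # Cayley transform of K

print(np.allclose(Q.T @ Q, np.eye(n)))  # True: Q is orthogonal
print(np.linalg.det(Q))                 # ~ 1.0: Q lies in SO(n)
print(np.linalg.det(Q + np.eye(n)))     # nonzero: Q + I invertible, so Q avoids U
```

All three checks reflect the identities $(I-K)(I+K)=(I+K)(I-K)$ and $\det(I-K)=\det(I+K)$ for skew-symmetric $K$.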
In other words, $SO(n)\setminus U$ is flat. What is complicated is the reattachment of $U$ over $SK_n$.<|endoftext|> TITLE: A combinatorics problem related to Bose-Einstein statistics QUESTION [10 upvotes]: Problem Given $N$ and $E$, how many solutions $(n_0,n_1,...,n_E)$ are there to: $$\sum_{k=0}^En_k = N$$ $$\sum_{k=0}^Ek\cdot n_k = E$$ where everything is a non-negative integer ($N,E,n_k \in \mathbb{Z}_{\geq 0}$)? Context $N$ is the number of indistinguishable particles that can occupy $Q+1$ distinguishable energy levels, with energies $E_k=0,1,2,...,Q$. In this case, $Q=E$, which is the total energy of the system. Here's an example for $N=6$ and $E=9$: Each rectangle represents a solution. There are 26 in total. The solution on the top left, for example, is $(5,0,0,0,0,0,0,0,0,1)$. Source: HyperPhysics. Attempts The equivalent problem for distinguishable particles is trivial if solved by the stars and bars method. The solution is given by: $$\binom{E+N-1}{E}$$ For $N=6$ and $E=9$ it would be: $$\binom{9+6-1}{9} = \frac{14!}{9!\;5!} = 2002$$ I don't know if this is relevant, but I noticed that $2002 = 77\times 26$, or $77$ times the number of solutions to the problem I want to solve. There might not even be a closed formula. Many simple combinatorics problems don't seem to have a simple solution. Maybe the problem can be solved by generating functions, as suggested by the answer to this related question, but I have no idea how that works. REPLY [13 votes]: You're right about there not being a closed form, but there being a generating function solution. In combinatorics, the objects you are counting are called partitions, which are ways of writing one number as a sum of positive integers. Interpret $n_k$ as the number of times the integer $k$ appears in the sum. Then $$ \sum_{k =1}^E k \cdot n_k = E $$ means that the numbers sum up to $E$, so we are interested in partitions of $E$ (note that we can ignore $k=0$ here as it does not affect this sum). The first condition, $\sum_{k=0}^E n_k = N$, or equivalently $\sum_{k=1}^E n_k \le N$, means that we are interested in partitions of $E$ with at most $N$ parts. Now unfortunately there is no closed solution to this problem. There is, however, a generating function. We first make a transformation. We claim that the number of partitions of $E$ into at most $N$ parts is the same as the number of partitions of $E$ into parts of size at most $N$. To see this, let $m_j = \sum_{k=j}^E n_k$ count the number of parts of size at least $j$. Clearly $m_j \le m_1 = \sum_k n_k \le N$ for every $j$, and $$ \sum_j m_j = \sum_j \sum_{k \ge j} n_k = \sum_k \sum_{j \le k} n_k = \sum_k k \cdot n_k, $$ so given our partition $(k \cdot n_k)_k$ [that is, $n_k$ copies of $k$ as $k$ ranges from $1$ to $E$] of $E$ with at most $N$ parts, we formed a partition $(m_j)_j$ of $E$ with parts of size at most $N$. One can check that this is in fact a bijection, so we can count either type of partition. Now for the generating function. A partition represents the (unordered) sum $m_1 + m_2 + ... + m_n = E$, where $n$ is the largest integer for which $m_j \ge 1$. One convenient way of representing such sums is through polynomial multiplication: $x^{m_1} \cdot x^{m_2} \cdot ... \cdot x^{m_n} = x^{E}$. When multiplying out these monomials, we can group them together by their powers. So the parts of size $1$ can be collected into the factor $x^{1 \cdot 0} + x^{1 \cdot 1} + x^{1 \cdot 2} + ... + x^{1 \cdot s_1} + ...$.
Here, in $1 \cdot s_1$, the $1$ indicates we are dealing with parts of size $1$, and the $s_1$ keeps track of how many there are. The cool thing is that this is a geometric series, so we have $$ x^{1 \cdot 0} + x^{1 \cdot 1} + x^{1 \cdot 2} + ... + x^{1 \cdot s_1} + ... = \frac {1}{1-x^1}. $$ We can do this for every other part size, and the factor corresponding to the parts of size $j$ will be $\frac {1}{1-x^j} $. Finally, to get the generating function, we multiply these factors together, because remember that adding the parts corresponds to multiplying the monomials. Since we only want parts of size up to $N$, we take the product over $1 \le j \le N$, giving $$ F_N (x) = \prod_{j=1}^N (1-x^j)^{-1}. $$ This gives a power series, and the coefficient of $x^E$ counts the partitions of $E$ into parts of size at most $N$ (or, equivalently, with at most $N$ parts), which is what you are looking for. Even more finally, what good is a generating function? Well, it gives a systematic way to approach these problems. Note that the function $F_N$ doesn't just encode the solution to your problem, but the solution to the problem for every possible value of $E$, all at once! In practical terms, it gives an algorithm for finding the solution, since the power series can be computed efficiently. We humans can also use analytic tools to extract the asymptotics of the solution, although that is not something I can do off the top of my head, so I'll leave that part to someone else.<|endoftext|> TITLE: What are the higher dimensional analogues of left and right handedness? QUESTION [6 upvotes]: A set of three axes can be arranged in two ways: left- or right-handed. Cartesian co-ordinates are by convention always oriented to comply with the right-hand rule. It would seem this rule can be thought of as a cyclic transformation of order 3 which takes us from one axis to the next. What are the analogues of left- and right-handedness in higher dimensions? Particularly in infinite dimensions? We know that three is geometrically special, arising out of the parallelisability of the three-sphere, and the only higher dimension in which this happens again is $7$; so is there only an analogue in $7$ dimensions, or can the concept be extended to other spaces? In particular (and this is just a bit of background, perhaps not material to the question): I'm interested in a space I'm constructing to study number theory, in which every axis represents a prime number and every point along that axis represents an increment in the power of that prime number; so along the first axis we have 2, 4, 8, ... and on the 2nd axis we have 3, 9, 27, ... By this means every point in the infinite-dimensional space represents a unique natural number given by the product of its co-ordinates. If the points along each axis are then spaced according to the square roots of their logarithms, Pythagoras' theorem guarantees that all points in the infinite-dimensional space are well-ordered by their distance from the origin, which is equal to the square root of the log of the natural number they represent. What I want to bring some understanding to is how the process of counting in this space is described by some translation from any $x$ to $x+1$, and whether there might be some sense to be made of the rotation between axes that takes place with each increment.
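To make the construction concrete, here is a small sketch (Python with SymPy; placing the point for $p^k$ at distance $\sqrt{k\log p}$ along the $p$-axis is my reading of the spacing described above):

```python
# Map N = prod p_j^(k_j) to its exponent vector and verify that the
# Euclidean distance from the origin equals sqrt(log N).
from math import log, sqrt, isclose
from sympy import factorint

def distance(N):
    # coordinate of p^k on the p-axis taken to be sqrt(k * log p); the squared
    # coordinates then sum to sum_j k_j * log(p_j) = log(N)
    return sqrt(sum(k * log(p) for p, k in factorint(N).items()))

for N in range(2, 100):
    assert isclose(distance(N), sqrt(log(N)))
print("distance(N) = sqrt(log N) holds for N = 2, ..., 99")
```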
REPLY [6 votes]: $\newcommand{\Reals}{\mathbf{R}}\newcommand{\Basis}{\mathbf{e}}$"Handedness" has a natural generalization to arbitrary finite-dimensional vector spaces, or to finite-dimensional smooth manifolds; see also chirality and orientability. Martin Gardner's The New Ambidextrous Universe may be worth a look. Let $V$ be a finite-dimensional real vector space, and $B = (\Basis_{j})_{j=1}^{n}$ an ordered basis. Given an ordered basis $B' = (\Basis_{j}')_{j=1}^{n}$, there exists a unique linear isomorphism $T:V \to V$ satisfying $T(\Basis_{j}) = \Basis_{j}'$ for each $j$. The basis $B'$ defines the same orientation as $B$ if $\det T > 0$, and defines the opposite orientation to $B$ if $\det T < 0$. Note carefully that orientation (or handedness, or chirality) is not an intrinsic property of a single ordered basis; instead, it's a relationship between two ordered bases. Often we fix an ordered basis $B$ and declare it to be "positively-oriented", which really means that an arbitrary $B'$ is positively oriented if and only if $B'$ has the same orientation as $B$. (According to Gardner, this issue confused Kant for some time.) This concept generalizes to finite-dimensional vector spaces over an ordered field. If your space is infinite-dimensional, you may have trouble defining the determinant of an isomorphism. If your field of scalars is not ordered, there's no way to compare $\det T$ with $0$. In your situation, you're using the bijection from the positive integers to the set of finitely supported sequences of non-negative integers given by the uniqueness of prime factorization: Letting $p_{j}$ denote the $j$th prime number, and letting $(k_{j})_{j=1}^{\infty}$ denote a sequence of non-negative integers with at most finitely many non-zero, you have $$ N \sim (k_{j})_{j=1}^{\infty} \quad\text{iff}\quad N = \prod_{j=1}^{\infty} p_{j}^{k_{j}}. \tag{1} $$ You can view each such sequence as an element of $\Reals^{\infty}$, the vector space of finitely supported sequences of reals. Your idea of considering the mapping $N \mapsto N + 1$ and the induced action on sequences of exponents in (1) doesn't appear to induce an isomorphism of the ambient vector space $\Reals^{\infty}$, however, so it's not clear whether the preceding definition of orientation will be useful to you.<|endoftext|> TITLE: What is the motivation behind the solution of this olympiad problem? QUESTION [5 upvotes]: Is there a particular reason why solutions in arithmetic progression were sought for? Can you think of an alternate solution? Note: This is a problem from the Indian National Mathematical Olympiad 2001. REPLY [6 votes]: The right-hand side suggests naming the differences. We might as well assume that $x\le y\le z$ and set $a=y-x$ and $b=z-y$. The equation then becomes $$3y^2+a^2+b^2+2(b-a)y=ab(a+b)\;,$$ and at that point it's easy to see that setting $a=b$ would simplify matters greatly. At this point it could still turn out, of course, that the simplification is useless, because the additional requirement kills off too many solutions, but it's so much easier to work with that it should definitely be investigated, and it turns out to be fruitful.<|endoftext|> TITLE: If $f$ is continuous and $f(x+y)=f(x)f(y)$, then $\lim\limits_{x \rightarrow 0} \frac{f(x)-f(0)}{x}$ exists QUESTION [11 upvotes]: I'm solving the functional equation $f(x+y)=f(x)f(y)$ and I know that I have a continuous function $f:[0,\infty\rangle \to \langle 0,\infty\rangle$ s.t. $f(0)=1$.
In one of the steps, I want to show that the limit $$\lim_{x \rightarrow 0} \frac{f(x)-1}{x}$$ exists and is finite. I'm just looking for a hint. REPLY [5 votes]: To get around the fact that you can't differentiate $f$ (yet), we'll integrate it! Define $F$ to be the following primitive of $f$: $$F(x) = \int_0^x f(t) \, dt$$ Integrating $f(x+y) = f(x)f(y)$ with respect to $x$, between $0$ and $1$ (actually you may choose any nonzero number here), we get $$F(y+1) - F(y) = f(y)F(1)$$ Since $f$ is positive, $F(1) \ne 0$, and we get $$f(y) = \frac{F(y+1)-F(y)}{F(1)}$$ Now the right hand side is clearly $\mathcal{C}^1$, so $f$ is $\mathcal{C}^1$ too.<|endoftext|> TITLE: Shorter proof of irrationality of $\sqrt{2}$? QUESTION [8 upvotes]: Euclid's proof of the irrationality of $\sqrt{2}$ via contradiction involves arguments about the evenness or oddness in $a^2 = 2 b^2$, which is then led to a contradiction in a few more steps. I wonder why one needs this line of argument: isn't it immediately obvious from this expression that all prime factors of both $b^2$ and $a^2$ must have even exponents (and that the factor $2^1$ between them makes this impossible)? Is the reason for the "more complicated" structure of the traditional proof that this argument implicitly involves more theory (like the fundamental theorem of arithmetic, maybe)? REPLY [12 votes]: Here is one of my favorite proofs for the irrationality of $\sqrt{2}$. Suppose $\sqrt{2}\in\Bbb Q$. Then there exists an integer $n>0$ such that $n\sqrt{2}\in\Bbb Z$. Now, let $n$ be the smallest one with this property and take $m=n(\sqrt{2}-1)$. We observe that $m$ has the following properties:
$m\in\Bbb Z$ (since $m=n\sqrt{2}-n$ and $n\sqrt{2}\in\Bbb Z$)
$m>0$ (since $\sqrt{2}>1$)
$m\sqrt{2}=2n-n\sqrt{2}\in\Bbb Z$
$m<n$ (since $\sqrt{2}<2$)
So $m$ is a smaller positive integer with the same property, contradicting the minimality of $n$; hence $\sqrt{2}\notin\Bbb Q$.<|endoftext|> TITLE: $G$ is an abelian group of order $n$ with the property that $G$ has at most $d$ elements of order $d$, for any $d$ dividing $n$. Then $G$ is cyclic. QUESTION [5 upvotes]: $G$ is an abelian group of order $n$ with the property that $G$ has at most $d$ elements of order $d$, for any $d$ dividing $n$. Then $G$ is cyclic. I am not getting any clue how to start. Please help. REPLY [2 votes]: Induction on $n$. Suppose the property is true for all $k \lt n$; we'll prove it for $n$. If $n$ is prime then $G$ is cyclic. Suppose $n=pq$ where $p,q>1$ are coprime. Now let $H_1, H_2$ be subgroups of $G$ of orders $p,q$ (such subgroups exist: a finite abelian group has a subgroup of every order dividing its order, and for prime orders this is Cauchy's theorem). Both subgroups inherit the hypothesis, so by the induction hypothesis both are cyclic. Let $x \in H_1, y \in H_2$ be of orders $p, q$. Then $z=xy$ will be of order $pq=n$ (since $x,y$ commute and have coprime orders), so $G$ is cyclic. Now suppose $n=p^k, k \ge 2$, where $p$ is prime. Then $G$ is a $p$-group (a group in which every element's order is a power of $p$). The proof that $G$ is cyclic in this case can be found here. Notice that the property "$G$ has at most $d$ elements of order $d$" implies $G$ has at most $p$ elements of order $p$, hence there is a unique subgroup of order $p$.<|endoftext|> TITLE: Computing $\int_0^\infty\frac{1}{(1+x^{2015})(1+x^2)}$ QUESTION [11 upvotes]: How can I compute $$\int_0^\infty\frac{1}{(1+x^{2015})(1+x^2)}\quad?$$ My attempt: Looking at the limits of integration, I see that we should introduce some $\tan^{-1}(x)$, so if we put in $\infty$ we would get something like $\frac{\pi}{2}$. But I am not sure how to proceed. Partial fractions don't yield a nice integral.
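For what it's worth, a quick numerical check (a sketch using Python's mpmath; arbitrary precision sidesteps float overflow in $x^{2015}$) suggests the value is $\pi/4$:

```python
# Numerical sanity check; splitting the range at x = 1 helps the quadrature
# cope with the sharp transition of 1/(1 + x**2015) there.
from mpmath import mp, quad, inf, pi

mp.dps = 20
f = lambda x: 1 / ((1 + x**2015) * (1 + x**2))
print(quad(f, [0, 1, inf]))  # ~ 0.785398163397448..., i.e. pi/4
print(pi / 4)
```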
REPLY [25 votes]: For any $a > 0$, consider the integral $$\mathcal{I} = \int_0^\infty \frac{1}{1+x^a} \frac{dx}{1+x^2}$$ Changing variables to $y = \frac1x$, we have $$\mathcal{I} = \int_0^\infty \frac{1}{1+y^{-a}}\frac{1}{1+y^{-2}} \frac{dy}{y^2} = \int_0^\infty \frac{y^a}{1+y^a}\frac{dy}{1+y^2}$$ Summing the two expressions for $\mathcal{I}$ and dividing by $2$, we get $$\mathcal{I} = \frac12 \int_0^\infty \frac{dx}{1+x^2} = \frac{\pi}{4}$$ independent of the value of $a$. Random Notes About the question of how one arrives at this answer: the key is symmetry. When one is facing a complicated-looking integral, the first thing one should do is look for symmetry hidden in the integral. For the integral at hand, the integrand has a similar form and the range of integration is invariant under the change of variable from $x$ to $\frac1x$. In this case, there are a few things one should check. Do different parts of the integral cancel each other? Can the original and transformed integrals be combined into something simpler? Will changing variables to $y = x \pm \frac{\text{const}}{x}$ reduce the integrand to a more manageable form? It turns out the second strategy works, and the rest is mostly following your nose. REPLY [11 votes]: HINT: Enforce the substitution $x\to 1/x$ and combine the results. Enforcing the substitution $x\to 1/x$ reveals $$\begin{align}I&=\int_0^\infty \frac{1}{(1+x^{2015})(1+x^2)}\,dx\\\\&=\int_0^\infty \frac{x^{2015}}{(1+x^{2015})(1+x^2)}\,dx\tag 1\end{align}$$Adding the right-hand and left-hand sides of $(1)$ and dividing by $2$ yields $$I=\frac12\int_0^\infty \frac{1}{1+x^2}\,dx=\pi/4$$<|endoftext|> TITLE: $\frac{1^2+2^2+\cdots+n^2}{n}$ is a perfect square QUESTION [9 upvotes]: Prove that there exist infinitely many positive integers $n$ such that $\frac{1^2+2^2+\cdots+n^2}{n}$ is a perfect square. Obviously, $1$ is the least integer having this property. Find the next two least integers with this property. The given condition is equivalent to $(2n+2)(2n+1) = 12p^2$ where $p$ is a positive integer. Then since $\gcd(2n+2,2n+1) = 1$, we have that $2n+2 = 4k_1$ and $2n+1 = k_2$. We must have that $k_1$ is divisible by $3$ or that $k_2$ is divisible by $3$. If $k_1$ is divisible by $3$ and $k_2$ is not, then we must have that $k_1$ is divisible by $9$ and so $2n+2 = 36m$. Then we need $3mk_2$ to be a perfect square where $k_2+1 = 36m$. Thus if $3mk_2 = r^2$, we get $m = \dfrac{1}{72}\left(\sqrt{48r^2+1}+1\right)$. REPLY [5 votes]: This can be solved by using the quadratic formula on $2n^2 + 3n + (1-6p^2)=0$ to convert to the Pell equation $r^2 - 48p^2 = 1$. Standard techniques can be used to generate infinitely many solutions, though we need to be careful to keep only those with $r \equiv 3\pmod{4}$. For example, we can arrive at infinitely many solutions by writing $r+p\sqrt{48} = (7+\sqrt{48})^{2k+1}$, beginning from the primitive solution $r=7, p=1$. Here is a proof that results from that process: If $(n,p)$ is a positive solution to $(n+1)(2n+1)=6p^2$, then $(97n+168p+72,56n+97p+42)$ is a larger solution, which we can see by computing directly: $$((97n+168p+72) + 1)(2(97n+168p+72) + 1) - 6(56n+97p+42)^2$$ $$ = (n+1)(2n+1)-6p^2$$ Beginning from the solution $(n,p)=(1,1)$, this generates infinitely many solutions by induction. REPLY [3 votes]: From the equation $(2n+1)(2n+2) = 12p^2$, we know that there are two possibilities:
$2n+2$ is of the form $4x^2$ and $2n+1$ is of the form $3y^2$;
$2n+2$ is of the form $12x^2$ and $2n+1$ is of the form $y^2$.
This follows from $2n+1, 2n+2$ being relatively prime (note that $2n+1$ is odd, so the factor $4$ must divide $2n+2$). Thus, solutions correspond to integer solutions of the equations $4x^2-3y^2 = 1$ and $12x^2-y^2 = 1$. We claim that the former equation has infinitely many solutions. These correspond to the solutions of $a^2-3b^2 = 1$ where $a$ is even. The theory of Pell equations can be used to show that the solutions of $a^2-3b^2 = 1$ are given by the powers $a+b\sqrt{3} = (2+\sqrt{3})^i$ for $i\ge 0$. Even without this theory in hand, we can check directly that these give solutions: $$ a^2-3b^2 = (a+b\sqrt{3})(a-b\sqrt{3}) = (2+\sqrt{3})^i(2-\sqrt{3})^i = \big[(2+\sqrt{3})(2-\sqrt{3})\big]^i = 1 $$ It only remains to check that infinitely many of these solutions have $a$ even. An easy inductive argument shows that $a$ is even precisely when $i$ is odd, so there are infinitely many solutions. The first two nontrivial solutions are $(2+\sqrt{3})^3 = 26+15\sqrt{3}$ and $(2+\sqrt{3})^5 = 362+209\sqrt{3}$. These values of $a$ correspond to $n=337$ and $n=65521$. To check that these are indeed the smallest nontrivial solutions to $(2n+1)(2n+2)=12p^2$, there are two approaches. The first is to apply the theory of Pell equations to note that there are no solutions to $a^2-3b^2=1$ other than those given above, and that there are no solutions at all to $12x^2-y^2=1$. The second method is to check by brute force that no other values less than $65521$ yield solutions.<|endoftext|> TITLE: Ergodicity of tent map QUESTION [7 upvotes]: The dynamical system $T:[0,1]\to [0,1]$ defined by $$T(x) = \begin{cases} 2x, & \text{for } 0\leq x\leq \frac{1}{2}\\ 2-2x, & \text{for } \frac{1}{2}\leq x\leq 1 \end{cases}$$ is called the tent map. Prove that $T$ is ergodic with respect to Lebesgue measure. My work: A measure-preserving map is ergodic if every $T$-invariant measurable set $A$ satisfies $m(A)=1$ or $m(A)=0$ (by definition). Let's prove that $T$ preserves Lebesgue measure. It is enough to prove for intervals $[a,b]$ that $m(T^{-1}[a,b])=m([a,b])$ holds. Since $T^{-1}[a,b]=[\frac{a}{2},\frac{b}{2}]\cup [\frac{2-b}{2},\frac{2-a}{2}]$, we have $m(T^{-1}[a,b])=m([\frac{a}{2},\frac{b}{2}]\cup [\frac{2-b}{2},\frac{2-a}{2}])=m([\frac{a}{2},\frac{b}{2}])+m([\frac{2-b}{2},\frac{2-a}{2}])=b-a=m([a,b])$. Therefore, $T$ is measure-preserving. Now I'm stuck and don't know how to prove that the map is ergodic. Any help is welcome. Thanks in advance. REPLY [3 votes]: There is a pretty standard way to prove ergodicity for a certain class of maps: use the Lebesgue Density Theorem and the following bounded distortion property \begin{align*} \dfrac{m(T^k(E_1))}{m(T^k(E_2))}=\dfrac{m(E_1)}{m(E_2)} \end{align*} for every pair of measurable subsets $E_1,E_2$ (with $m(E_2)>0$) of any interval $I$ on which $T^k$ is a bijection. If you need further details, do not hesitate to ask.<|endoftext|> TITLE: Is there a planar graph with two inequivalent embeddings on the sphere? QUESTION [5 upvotes]: A planar graph is a graph with an embedding in the plane. This graph can be embedded in the plane in two ways, and they are not related by a homotopy of graphs, a homotopy which preserves the graph's structure (vertices, edges etc.) and does not involve lines crossing each other. However, as subsets of $\mathbb{R}^2$, they are homotopic without having to drag edges across each other. This graph can be embedded in the plane in two ways, and it is not possible to change one into the other using a homotopy that doesn't cross edges over each other.
However, if these embeddings are lifted to the sphere, they become equivalent. It is possible to have two inequivalent embeddings of a graph on the sphere: but these only differ by a reflection. Question: Two embeddings of a graph on the sphere are equivalent if there is a homotopy of topological spaces from one to the other during which the graph does not intersect itself, possibly followed by a reflection. Is there a planar graph with two inequivalent embeddings on the sphere? REPLY [5 votes]: Consider
O-----O          O---O--O
|    /|          |   |
|   O |  versus  |   O
|  /  |          |  /
O-----O          O---O
<|endoftext|> TITLE: What's wrong with my calculation of the knot group of trefoil? QUESTION [9 upvotes]: I know the standard method to compute the knot group of a trefoil $K$, regarding it as a (2,3)-torus knot. But here I find a method giving $\pi_1(\mathbb{R}^3-K)\cong\mathbb{Z}$, which is impossible. The problem is that I cannot find out which step is wrong $\dots$ Here is my calculation: Choose six points $\{A_1,\dots,A_6\} \subset K$ as shown in the picture. We can embed $K$ into $\mathbb{R}^3$ in the way such that: (1) $A_k\in\mathbb{R}^2\times\{0\}$ for all $k$; (2) For $k$ odd, the arcs $A_kA_{k+1}$ without endpoints lie in $\{x_3< 0\}\subset\mathbb{R}^3$; (3) For $k$ even, the arcs $A_kA_{k+1}$ without endpoints lie in $\{x_3>0\}\subset\mathbb{R}^3$. (We agree $A_7=A_1$.) We write $U=(\mathbb{R}^3-K)\cap\{x_3\leq 0\}$, $V=(\mathbb{R}^3-K)\cap\{x_3\geq 0\}$. Then we have: (1) $\pi_1(U)=\pi_1(V)$ is the free group generated by 3 elements, given by loops around the arcs $A_kA_{k+1}$; (2) $U\cap V=\mathbb{R}^2-\{6\ \text{pts}\}$, thus $\pi_1(U\cap V)$ is the free group generated by 6 elements. (Of course we can replace $U,V$ by slightly larger open sets.) Now we apply Van Kampen's theorem. Let $i:U\cap V\to U$, $j:U\cap V\to V$ be the inclusions; then $\pi_1(\mathbb{R}^3-K)$ is isomorphic to the free product $\pi_1(U)*\pi_1(V)$ modulo the relations $\sim$ given by $i_*(x)=j_*(x)$ for each $x\in\pi_1(U\cap V)$. We can write $\pi_1(U)=\langle a_1,a_2,a_3\rangle$, $\pi_1(V)=\langle b_1,b_2,b_3\rangle$, where $a_k$ stands for a loop rotating around $A_{2k-1}A_{2k}$ and $b_k$ corresponds to $A_{2k}A_{2k+1}$. Also write $\pi_1(U\cap V)=\langle c_1,\dots,c_6\rangle$, where $c_k$ corresponds to loops rotating around $A_k$. Considering orientation, we have $$i_*(c_1)=a_1^{\pm 1}=i_*(c_2)^{-1},\ i_*(c_3)=a_2^{\pm 1}=i_*(c_4)^{-1},\ i_*(c_5)=a_3^{\pm 1}=i_*(c_6)^{-1};$$ $$j_*(c_6)=b_3^{\pm 1}=j_*(c_1)^{-1},\ j_*(c_2)=b_1^{\pm 1}=j_*(c_3)^{-1},\ j_*(c_4)=b_2^{\pm 1}=j_*(c_5)^{-1}.$$ Then $\pi_1(U)*\pi_1(V)/\sim$ has 6 generators, any two of which are identical or inverse to each other!!! This immediately gives a group isomorphic to $\mathbb{Z}$. So where did I go wrong? REPLY [2 votes]: Trying to explain my objection to the OP's calculations (see the comments) visually. Here's the trefoil. I made it a thin tube. Ignore its hollow interior that shows when I brutally crop the image. Here's the top part with the 3 arches. This is a fattened version of the OP's set $V$: I include the points with $x_3>-\epsilon$. Observe that this "universe" ends at the level of the bottom of the three arches. Here's the bottom part with the 3 wormholes. This is a fattened version of the OP's set $U$: I include the points with $x_3<\epsilon$. Observe that this "universe" ends at the level of the top of the three wormholes. Here's a top-down view of the (fattened) middle part $U\cap V$ together with a base point (= the black dot) and a loop around the point $A_1$, i.e. a representative of the generator $c_1$ of $\pi_1(U\cap V)$.
Here's what the class of $c_1$ looks like in $\pi_1(U)$. We see that we can homotopically slide the loop along the wormhole $A_1A_2$. This shows, as the OP claimed, that $i_*(c_1)$ and $i_*(c_2)$ are either homotopic or homotopic to each other's inverses, depending on how we orient them. At least if we pick the obvious loop to represent $c_2$. Here's what the class of $c_1$ looks like in the group $\pi_1(V)$. Observe that because in this space $c_1$ goes underneath the arch $A_4A_5$, we cannot simply slide the loop along the arch $A_1A_6$. This means that $j_*(c_1)$ is not homotopic to $b_3^{\pm}$. We need to conjugate it by $b_2$ to bring it above the arch $A_4A_5$. The OP's attempt has other similar problems in the mappings $i_*$ and $j_*$. Working all of them out is too much work - at least for now. Another way of calculating this fundamental group (a different way of using van Kampen's theorem) is given in e.g. Massey's book. Further observe that if we choose the loop $c_1$ to go around the point $A_4$ from the West side, then the OP's claim about the image $j_*(c_1)$ disappears, but this time a similar problem appears in the bottom part, i.e. with $i_*(c_1)$. Anyway, I'm confident that observations like this explain why the OP got the wrong group to emerge, and also show how to fix the calculations. Hope this helps in that task.<|endoftext|> TITLE: An identity about the Dedekind $\eta$ function QUESTION [5 upvotes]: Let $\eta$ be the Dedekind eta function. Show that $\dfrac{\eta(q^9)^3}{\eta(q^3)}=\displaystyle\sum_{(a,b)\in \mathbb{Z}^2}q^{3(a^2+b^2+ab+a+b)+1}$. I'm pretty sure the RHS is equal to $\theta_2(q^3)\psi_6(q^9)+\theta_3(q^3)\psi_3(q^9)$, but I'm not sure how to show this is equal to the LHS. REPLY [2 votes]: The two sides of the equation are equal up to a factor of $3$. That is, the left side is $\, q + q^4 + 2q^7 + \dots, \,$ the generating function of OEIS sequence A033687, and the right side is the generating function of OEIS sequence A005882, which is $3$ times that. It is also the right side of equation $(63)$ on page $111$ of Conway and Sloane "Sphere Packings, Lattices and Groups". On page $103$ is equation $(11)$ with the definition $\, \psi_k(z) = e^{\pi i z/k^2} \, \theta_3(\pi z/k\,|\,z) = \sum_{m=-\infty}^\infty q^{(m+1/k)^2}. \,$ The left side is equal to $$\, \frac{q}3(2\, \psi(q^6)\, f(q^6, q^{12}) + \phi(q^3)\, f(q^3, q^{15}))$$ where $\phi(), \psi()$ are Ramanujan theta functions and $f(, )$ is Ramanujan's general theta function. The right side can be written as $$ \theta_2(0, q^3)\, q^{1/4} f(q^6, q^{12}) + \theta_3(0, q^3)\, qf(q^3, q^{15}). $$ This can be shown by splitting the infinite sum into two infinite sums depending on the parity of $\,a\,$ and then identifying each sum with one of the two products of a theta null function and a Ramanujan general theta function. In more detail, the function $$ e(a,b) := 3 (a^2 + b^2 + a b + a + b) + 1 $$ appearing in the exponent of $q$, regarded as a matrix $\, \{e(i,j)\} \,$, has rank $3$, while the matrices $\, \{e(2i,j-i)\} \,$ and $\, \{e(2i+1,j-i)\} \,$ have rank $2$. Note that the matrix $\, \{q^{e(i,j)}\} \,$ has infinite rank, while the two matrices $\, \{q^{e(2i,j-i)}\} \,$ and $\, \{q^{e(2i+1,j-i)}\} \,$ have rank $1$, and thus the two infinite sums of their entries factor into products of two Ramanujan theta functions each.<|endoftext|> TITLE: What is the nth term of the series?
$0, 0, 1, 3, 6, 10, \dots$ QUESTION [7 upvotes]: I am trying to find the relation between the number of nodes and the number of connections possible. So if there are $0$ nodes, that means $0$ connections are possible; $1$ node still means $0$ connections are possible; $2$ nodes mean $1$ connection is possible; $3$ nodes mean $3$ connections; and so on. How can I find the relation between $n$ nodes and the number of possible connections? REPLY [8 votes]: Let $a_n$ be the number of connections for $n$ nodes. For $n > 0$, we have $a_n = a_{n-1} + n-1$, because among the $a_n$ connections there are $n-1$ that connect to the $n$th node, while the other $a_{n-1}$ don't involve it. Rewrite the above expression as $a_k - a_{k-1} = k-1$. Summing both sides from $k = 1$ to $n$, we get $$a_n - a_0 = \sum_{k=1}^n (k-1) = \sum_{k=1}^n \frac{k(k-1)-(k-1)(k-2)}{2} = \frac{n(n-1)}{2}$$ Together with $a_0 = 0$, we find $a_n = \frac{n(n-1)}{2}$ for $n > 0$. Since this is trivially true at $n = 0$, we have $a_n = \frac{n(n-1)}{2}$ in general.<|endoftext|> TITLE: Prove $\sum_{n=1}^{\infty} \frac{n!}{(2n)!} = \frac{1}{2}e^{1/4} \sqrt{\pi} \text{erf}(\frac{1}{2})$ QUESTION [5 upvotes]: I would like to prove: $$\sum_{n=1}^{\infty} \frac{n!}{(2n)!} = \frac{1}{2}e^{1/4} \sqrt{\pi} \text{erf}(\frac{1}{2})$$ What I did was consider: $$e^{-t^2}=\sum_{n=0}^{\infty} (-1)^n \frac{t^{2n}}{n!}$$ Then integrate term by term from $0$ to $x$ to get: $$\frac{\sqrt{\pi}}{2}\text{erf} (x)=\sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{n!(2n+1)}$$ Then I substituted in $x=\frac{1}{2}$ and tried some manipulations but didn't get anywhere. Can someone help? Thanks. REPLY [5 votes]: We have $\frac{n!}{(2n)!}=\frac{B(n,n+1)}{(n-1)!}$, hence: $$ \sum_{n\geq 1}\frac{n!}{(2n)!} = \int_{0}^{1}\sum_{n\geq 1}\frac{x^{n-1}(1-x)^n}{(n-1)!}\,dx = \int_{0}^{1}(1-x)e^{x(1-x)}\,dx$$ and the result follows by setting $x=t+\frac{1}{2}$ in the last integral, leading to $e^{1/4}\int_{0}^{\frac{1}{2}}e^{-u^2}\,du.$<|endoftext|> TITLE: Connection between the spectra of a family of matrices and a modelization of particles' scattering? QUESTION [16 upvotes]: In the excellent book "Numerical Computing with MATLAB" by Cleve B. Moler (SIAM 2004) [Moler is the "father" of Matlab], one finds, on pages 298-299, the following graphical representation (fig. 10.12; I have reconstructed it with minor changes; Matlab program below) with the following explanations (personal adaptation of the text): Let $n$ be a fixed integer $>1$. Let $t\in (0,1)$; let $A_t$ be the $n \times n$ matrix with entries $$A_{t,i,j}=\dfrac{1}{i-j+t}$$ Consider Fig. 1, gathering the spectra of the matrices $A_t$, for $n=11$ and $0.1 \leq t \leq 0.9$ with steps $\Delta t=0.005$. Figure 1. Interpretation: the 11 rightmost points correspond to the spectrum of $A_t$ for $t=0.1$. There is a striking similarity with particles' scattering by a nucleus situated at the origin, with hyperbolic trajectories; see for example this reference. My question is: How can the matrices $A_t$ be connected to a model of particle scattering? My attempts (mostly unsuccessful):
Consider $A_t$ as a Cauchy matrix ($x_i=i$, $y_j=j-t$) (notations of https://en.wikipedia.org/wiki/Cauchy_matrix) with its different properties, in particular its displacement equation. See as well the answers to this question.
Connect $A_t$ with a quadratic form defined by its moment matrix $\int_C z^{i}\bar{z}^{j}z^{t-1}dz$, with $z^{t-1}$ playing the rôle of a weight function. But for which curve $C$?
Prove that the eigenvalues are situated on hyperbolas.
Make many simulations (see figures below). Matlab program for Fig. 1:
n=11; [I,J]=meshgrid(1:n,1:n); E=[];
for t=0.1:0.005:0.9
    A=1./(I-J+t); E=[E,eig(A)];
end;
plot(E,'.k'); axis equal
Addendum (November 22, 2018): Here is a second figure that provides a "bigger" view, with $n=20$ and $0.005 \leq t \leq 0.995$. The eigenvalues corresponding to $t=0.005$ are grouped into the big red blob on the right; those corresponding to $t=0.995$ are blue filled dots (a quasi-circle with radius $\approx 110$). Figure 2: [enlarged version of Fig. 1; case $n=15$] Everything happens as if a planar wave enters at $t=0$ from the right, is slowed down by the nucleus' repulsion, then scattered as a circular wave... Fig. 3: Two cases are gathered here, both for $t=0.995$: $n=51$ (empty circles) and $n=100$ (stars). One can note that the eigenvalues are very close to the $(n+1)$th roots of unity. For this value of $t$, the radii are given by the experimental formula: $R_n=200(1-12.4/n+212/n^2-3110/n^3)$. I am grateful to @AmbretteOrrisey, who made very interesting remarks, in particular by giving the following polar representation for the trajectory of particles: [citation follows; more details can be found in his/her answer] "The polar equation of the trajectory of a particle being deflected by a point charge is $$r=\frac{2a^2}{\sqrt{b^2+4a^2}\sin\theta -b}\tag{1}$$ where $a$ is the impact parameter, which is the closest approach to the nucleus were the path undeviated; $b$ is the closest approach of a head-on ($a=0$) particle with repulsion operating." Figure 4 displays a reconstruction result that takes into account the fact that the $(n+1)$th roots of unity give asymptotic directions. Figure 4: Case $n=201$. An approximate reconstruction of $21$ among the $n$ trajectories (Matlab program below; the polar equation, adapted from (1), can be seen on line 5). Please note the "ad hoc" coefficient $3.0592$...
n=201;
for k=1:20:n
    d=pi*k/(2*(n+1)); c=cos(d);
    t=-d+0.01:0.01:d-0.01;
    r=3.0592*(1-c)*exp(i*(t-d))./(cos(t)-c);
    plot(r); plot(conj(r));
end;
Slightly related: http://bdpi.usp.br/bitstream/handle/BDPI/35181/wos2012-3198.pdf I mention here a book gathering publications of Cleve Moler [Milestones in Matrix Computation] (https://global.oup.com/academic/product/milestones-in-matrix-computation-9780199206810?cc=fr&lang=en&) and an article mentioning the analyticity of the obtained curves: (https://www.math.upenn.edu/~kazdan/504/eigenv.pdf) REPLY [3 votes]: I don't think the curves would be hyperbolæ anyway if it were a model for classical scattering. The polar equation of the trajectory of a particle being deflected by a point charge is $$r=\frac{2a^2}{\sqrt{b^2+4a^2}\sin\theta -b}$$ where $a$ is the impact parameter, which is the closest approach to the nucleus were the path undeviated; $b$ is the closest approach of a head-on ($a=0$) particle with repulsion operating; and $\theta$ is the angle between the radius vector of the particle (nucleus at origin) and the line joining the nucleus to the point of closest approach with the charge in place (not the axis of approach - i.e. the line through the nucleus that the particle is moving parallel to, & at distance $a$ from, when it is yet an infinite distance away). Is that a polar coördinate representation of a hyperbola? I suppose it ought to be, as it is an inverse-square force! Anyway - it's likely I'll look into it will-I-nill-I, now. Oh ... the diæræsis in "coördinate" & the ligature in "diæræsis" ... I'm hoping you don't mind that archaïck stuff too much - I love it ...
but many would probably go ballistic if I used that kind of idiom more generally on here. They even complain about my italics & emphasis! The diæræsis over the second vowel in a conjunction of vowels was a device used to denote that the conjunction does not form a diphthong. It never was actually a very widely-used convention!<|endoftext|> TITLE: Construction of exponentiation of the integers, rationals, and reals? QUESTION [7 upvotes]: I don't believe I ever learned the set-theoretic construction of exponentiation for any number set other than the naturals. That being said, for the sake of this question, assume all arithmetic operations are well defined over the natural numbers. Now let's define the equivalence relation $\sim_Z$ $$\forall a,b,c,d:(a,b,c,d\in\mathbb{N})[(a,b)\sim_Z(c,d)\leftrightarrow a+d=c+b]$$ Then the equivalence class of an ordered pair of natural numbers is an integer: $$\mathbb{Z}=\mathbb{N}^2/\sim_Z$$ I'm really uncertain as to how I should go about constructing exponentiation over this set: since the equivalence class $[(a,b)]_{\sim_Z}$ represents the integer $a-b$, trying to raise one of these equivalence classes to the power of another seems like trying to simplify a binomial to the power of a binomial, which sounds like a pain (perhaps I'm lacking some knowledge of elementary number theory that would make this quite obvious). Next, knowing addition and multiplication over the integers, we define $\sim_Q$ $$\forall a,b,c,d:(a,b,c,d\in\mathbb{Z}\wedge b,d\neq0)[(a,b)\sim_Q(c,d)\leftrightarrow ad=bc]$$ and the equivalence classes are the rational numbers: $$\mathbb{Q}=\mathbb{Z}\times(\mathbb{Z}-\{0\})/\sim_Q$$ This seems like it would be much simpler, but the problem is that the rationals are not closed under exponentiation. If we construct the reals as Cauchy sequences of rationals, then I would expect that $$\{a_n\}^{\{b_n\}}=\{a_n^{b_n}\}$$ (Assume the above are equivalence classes of sequences, not just sequences.) But the problem is the same: $a_n^{b_n}$ is not necessarily rational. Please help, thank you. REPLY [6 votes]: Note that every $z \in \mathbb Z$ such that $z \neq [(0,0)]_{\sim_{\mathbb Z}}$ can be uniquely expressed as either $z = [(n,0)]_{\sim_{\mathbb Z}}$ (in this case '$z = n$') or $z = [(0,n)]_{\sim_{\mathbb Z}}$ (in this case '$z = -n$') for some $n \in \mathbb N$. Let us say that $z \in \mathbb Z^{+}$ iff $z = [(n,0)]_{\sim_{\mathbb Z}}$ for some $n \in \mathbb N^{+}$. Also for $z = [(n,m)]_{\sim_{\mathbb Z}}$ let us write $-z := [(m,n)]_{\sim_{\mathbb Z}}$. We can now define for $n, m \in \mathbb N$ $$ [(n,0)]_{\sim_{\mathbb Z}}^{[(m,0)]_{\sim_{\mathbb Z}}} := [(n^m,0)]_{\sim_{\mathbb Z}} $$ and $$ [(0,n)]_{\sim_{\mathbb Z}}^{[(m,0)]_{\sim_{\mathbb Z}}} := \begin{cases} [(n^m,0)]_{\sim_{\mathbb Z}} & \text{, if } 2 \mid m \\ [(0,n^m)]_{\sim_{\mathbb Z}} & \text{, if } 2 \not \mid m \end{cases} $$ We can't define $[(0,n)]_{\sim_{\mathbb Z}}^{[(0,m)]_{\sim_{\mathbb Z}}}$ (as elements of $\mathbb Z$ that respect the usual meaning of exponentiation), since $\mathbb Z$ is not closed under exponentiation with negative powers. If you would like to have a total function, maybe just let $$ [(0,n)]_{\sim_{\mathbb Z}}^{[(0,m)]_{\sim_{\mathbb Z}}} := [(0,0)]_{\sim_{\mathbb Z}}^{[(0,m)]_{\sim_{\mathbb Z}}} := [(1,0)]_{\sim_{\mathbb Z}}.
$$ Now, for $x = [(p,q)]_{\sim_{\mathbb Q}} \in \mathbb Q$ and $z \in \mathbb Z$ we define $$ \left( [(p,q)]_{\sim_{\mathbb Q}} \right)^z := \begin{cases} [([(0,0)]_{\sim_{\mathbb Z}},[(1,0)]_{\sim_{\mathbb Z}})]_{\sim_{\mathbb Q}} & \text{, if } p = [(0,0)]_{\sim_{\mathbb Z}} \\ [([(1,0)]_{\sim_{\mathbb Z}},[(1,0)]_{\sim_{\mathbb Z}})]_{\sim_{\mathbb Q}} & \text{, if } z = [(0,0)]_{\sim_{\mathbb Z}} \wedge p \neq [(0,0)]_{\sim_{\mathbb Z}}\\ [(p^z,q^z)]_{\sim_{\mathbb Q}} & \text{, if } z \in \mathbb Z^{+} \wedge p \neq [(0,0)]_{\sim_{\mathbb Z}} \\ [(q^{-z}, p^{-z})]_{\sim_{\mathbb Q}} & \text{, if } z \in \mathbb Z \setminus \mathbb Z^{+} \wedge z \neq [(0,0)]_{\sim_{\mathbb Z}} \wedge p \neq [(0,0)]_{\sim_{\mathbb Z}} \end{cases} $$ This can easily be generalized to those cases where $x^y \in \mathbb Q$ for $x,y \in \mathbb Q$, as should be apparent from the above. Now, defining $x^y$ for $x,y \in \mathbb R$ can be done as follows: First suppose that $x,y \in \mathbb Q$. Then $$ \vec{x^y} := \{ q \in \mathbb Q \mid \exists a,b \in \mathbb Q \colon a \le x \wedge b \le y \wedge a^b \in \mathbb Q \wedge q < a^b \}. $$ Choose (here we need some form of choice in order to do that for every $x,y \in \mathbb Q$ simultaneously) a monotone increasing sequence $(q_0, q_1, \ldots)$ of rationals such that $\lim_{n \to \infty} q_n = \sup \vec{x^y}$ and define $$ x^y := (q_0, q_1, \ldots). $$ Now that you've done that, you may define $x^y$ for $x \in \mathbb R$ and $y \in \mathbb Q$ just like you suggested in your post, and finally, if $y \in \mathbb R^{+}$, fix an increasing sequence $y_n \in \mathbb Q^{+}$ for $n \in \mathbb N$ with limit $y$ and define $$ x^y := \sup \{ x^{y_n} \mid n \in \mathbb N \} $$ and similarly for $x^{-y}, (-x)^{y}, \ldots$. (You may rewrite this using $\vec{x^y}$, if you want to.) This post should teach you two things: You can define exponentiation in a set-theoretical construction of the reals. Abstraction is your friend. All of this could've been significantly simplified by adding a layer of abstraction and referring to already developed tools (in this case, some basic analysis which is used - but kept hidden - in my answer).<|endoftext|> TITLE: Problem 5 of Milnor's "Topology From The Differentiable Viewpoint" QUESTION [5 upvotes]: I'm trying to come up with a solution to the referred problem which, by the way, states the following: If $m$ ...<|endoftext|> TITLE: Important identities that can be obtained by manipulating the function $\frac{x}{e^x-1} = \frac{B_0}{0!} + \frac{B_1}{1!}x + \frac{B_2}{2!}x^2 + ...$? QUESTION [14 upvotes]: Note that $B_n$ denotes the $n$th Bernoulli number. Also note that $\frac{x}{2}\frac{e^x+1}{e^x-1} = \frac{x}{e^x-1} + \frac{x}{2} = 1 + \frac{|B_2|}{2!}x^2 - \frac{|B_4|}{4!}x^4+ \frac{|B_6|}{6!}x^6 - \cdots$ since $B_1 = -\frac{1}{2}$, the Bernoulli numbers of odd index $\geq 3$ are equal to zero, and $|B_{2n}| = (-1)^{n+1}B_{2n}$. Two of the most interesting identities that I've read about thus far in my studies have proofs which rely upon the generating function $\frac{x}{e^x-1} = \frac{B_0}{0!} + \frac{B_1}{1!}x + \frac{B_2}{2!}x^2 + \cdots$ in an essential way. The first is (Jacobi's?) proof of Faulhaber's formula, and the second is (Euler's?) proof that connects the Bernoulli numbers to values of the Riemann Zeta function for positive even integers.
Faulhaber's Formula: $$1^c+2^c+\cdots+n^c = \frac{1}{c+1}\left(\binom{c+1}{0}B_0n^{c+1}-\binom{c+1}{1}B_1n^c + \cdots +(-1)^c\binom{c+1}{c}B_cn\right)$$ Riemann Zeta Function Identity: $$\frac{1}{1^{2n}} + \frac{1}{2^{2n}}+\frac{1}{3^{2n}}+\cdots = (-1)^{n+1}\frac{{(2\pi)}^{2n}B_{2n}}{2(2n)!}$$ Below I give the manipulations which give rise to these identities; you'll notice that the generating function $\frac{x}{e^x-1} = \frac{B_0}{0!} + \frac{B_1}{1!}x + \frac{B_2}{2!}x^2 + \cdots$ is the key piece of equipment that allows the connection with the Bernoulli numbers to be established in both cases. Given the importance of these two identities and their connection with this generating function, I'm left to wonder if there are other important identities that require the manipulation of this generating function. Perhaps someone can provide other examples. Edit-08/09/16 I feel I should clarify. I realize that this function can be manipulated to produce many identities, but Faulhaber's Formula and this Zeta function identity seem somehow special, especially in light of how beautiful their manipulations are. I'm looking for identities that
1) Are considered to be rather important.
2) Can be established by means of the generating function $\frac{x}{e^x-1} = \frac{B_0}{0!} + \frac{B_1}{1!}x + \frac{B_2}{2!}x^2 + \cdots$.
3) Perhaps have some special quality about their manipulations.
Proof of Faulhaber's formula. This proof can be found in sigma notation here: https://en.wikipedia.org/wiki/Faulhaber%27s_formula#Proof $$\{1^0+2^0+\cdots+n^0\} + \{1^1+2^1+\cdots+n^1\}\frac{x}{1!} + \{1^2+2^2+\cdots+n^2\}\frac{x^2}{2!} +\cdots$$ $$= \{{1^0}+\frac{1^1x^1}{1!} + \frac{1^2x^2}{2!} + \cdots\} + \{{2^0}+\frac{2^1x^1}{1!} + \frac{2^2x^2}{2!} + \cdots\} +\cdots+ \{{n^0}+\frac{n^1x^1}{1!} + \frac{n^2x^2}{2!} + \cdots\}$$ $$=e^x + e^{2x} + \cdots+e^{nx} = e^x\frac{1-e^{nx}}{1-e^x} = \frac{1-e^{nx}}{e^{-x}-1} = \frac{1}{x}\{\frac{-x}{e^{-x}-1}\}\{e^{nx}-1 \} = \frac{1}{x}\{\frac{B_0}{0!}-\frac{B_1}{1!}x+\frac{B_2}{2!}x^2 - \cdots \}\{{e^{nx}-1}\} $$$$= \frac{B_0}{0!}\frac{1}{x}\{{e^{nx}-1}\}-\frac{B_1}{1!}\{{e^{nx}-1}\}+\frac{B_2}{2!}x^1\{{e^{nx}-1}\} - \cdots $$ $$\begin{align} & =\frac{B_0}{0!}\{\frac{n^1}{1!}x^0 + \frac{n^2}{2!}x^1 + \frac{n^3}{3!}x^2 + \frac{n^4}{4!}x^3+\cdots\} \\ & -\frac{B_1}{1!}\{\frac{n^1}{1!}x^1 + \frac{n^2}{2!}x^2 + \frac{n^3}{3!}x^3+\frac{n^4}{4!}x^4+ \cdots\} \\ & +\frac{B_2}{2!}\{\frac{n^1}{1!}x^2 + \frac{n^2}{2!}x^3 + \frac{n^3}{3!}x^4 +\frac{n^4}{4!}x^5+ \cdots\}\\ &-\frac{B_3}{3!}\{\frac{n^1}{1!}x^3 + \frac{n^2}{2!}x^4 + \frac{n^3}{3!}x^5 +\frac{n^4}{4!}x^6+ \cdots\}+\cdots \end{align}$$ $$=\frac{B_0}{0!}\frac{n}{1!} + \{\frac{B_0}{0!}\frac{n^2}{2!} - \frac{B_1}{1!}\frac{n}{1!}\}x + \{\frac{B_0}{0!}\frac{n^3}{3!} - \frac{B_1}{1!}\frac{n^2}{2!} + \frac{B_2}{2!}\frac{n}{1!}\}x^2 + \{\frac{B_0}{0!}\frac{n^4}{4!} - \frac{B_1}{1!}\frac{n^3}{3!} + \frac{B_2}{2!}\frac{n^2}{2!} - \frac{B_3}{3!}\frac{n}{1!}\}x^3+ \cdots$$ Equating coefficients, we see that $$\{1^c+2^c+\cdots+n^c\}\frac{1}{c!} = \frac{B_0}{0!}\frac{n^{c+1}}{(c+1)!}-\frac{B_1}{1!}\frac{n^c}{c!} + \cdots+ (-1)^c\frac{B_c}{c!}\frac{n}{1!}$$Thus $$1^c+2^c+\cdots+n^c = \frac{c!}{0!(c+1)!}B_0n^{c+1}-\frac{c!}{1!c!}B_1n^c+\cdots+(-1)^c\frac{c!}{c!1!}B_cn = \frac{1}{c+1}\left(\binom{c+1}{0}B_0n^{c+1}-\binom{c+1}{1}B_1n^c + \cdots +(-1)^c\binom{c+1}{c}B_cn\right)$$ Proof of Zeta Identity for positive even integers: I found this proof on pages 276-277 in Carr's Synopsis, found here:
https://archive.org/details/synopsisofelemen00carrrich. I'm kind of just assuming that this proof traces back to Euler. Carr's Synopsis is rough in places but I would highly recommend reading certain pages that are intimately connected to Ramanujan's intuition. (I first heard about Carr's Synopsis in this video: https://www.youtube.com/watch?v=QUnmAhXe9bg; it mistakenly refers to 1540 instead of 1541, lol.) Note that Carr defines the Bernoulli numbers according to their absolute values. Here is what Carr is saying: $$\sin(x) = x\{1-{\left(\frac{x}{\pi}\right)}^{2}\}\{1-{\left(\frac{x}{2\pi}\right)}^{2}\}\{1-{\left(\frac{x}{3\pi}\right)}^{2}\}\cdots$$ Thus $$\log\{\sin(x)\} = \log(x) + \log\{1-{\frac{x^2}{\pi^2}}\}+\log\{1-{\frac{x^2}{(2\pi)^2}}\}+\log\{1-{\frac{x^2}{(3\pi)^2}}\}+\cdots$$ Thus $$\frac{\cos(x)}{\sin(x)} = \frac{1}{x} + \frac{-2\frac{x}{\pi^2}}{1-\frac{x^2}{\pi^2}} + \frac{-2\frac{x}{(2\pi)^2}}{1-\frac{x^2}{(2\pi)^2}}+ \frac{-2\frac{x}{(3\pi)^2}}{1-\frac{x^2}{(3\pi)^2}}+\cdots$$ Thus $$x\cot(x) = 1-2\frac{x^2}{\pi^2}\{1+\frac{x^2}{\pi^2}+\frac{x^4}{\pi^4}+\cdots\}-2\frac{x^2}{(2\pi)^2}\{1+\frac{x^2}{(2\pi)^2}+\frac{x^4}{(2\pi)^4}+\cdots\}-2\frac{x^2}{(3\pi)^2}\{1+\frac{x^2}{(3\pi)^2}+\frac{x^4}{(3\pi)^4}+\cdots\}-\cdots$$ $$= 1-\frac{2x^2}{\pi^2}\{\frac{1}{1^2}+\frac{1}{2^2}+\frac{1}{3^2}+\cdots\}-\frac{2x^4}{\pi^4}\{\frac{1}{1^4}+\frac{1}{2^4}+\frac{1}{3^4}+\cdots\}-\frac{2x^6}{\pi^6}\{\frac{1}{1^6}+\frac{1}{2^6}+\frac{1}{3^6}+\cdots\}-\cdots$$ Now, if $y = 2ix$, $$x\cot(x) = ix\frac{2\cos(x)}{2i\sin(x)} = ix\frac{\{\cos(x)+i\sin(x)\} + \{\cos(x)-i\sin(x)\}}{\{\cos(x)+i\sin(x)\} - \{\cos(x)-i\sin(x)\}} = ix\frac{e^{ix}+e^{-ix}}{e^{ix}-e^{-ix}} = ix\frac{e^{2ix}+1}{e^{2ix}-1} = \frac{y}{2}\frac{e^y+1}{e^y-1} = \frac{y}{e^{y}-1}+\frac{y}{2}$$ $$= 1+\frac{|B_2|}{2!}y^2-\frac{|B_4|}{4!}y^4+\frac{|B_6|}{6!}y^6-\cdots = 1+\frac{|B_2|}{2!}(2ix)^2-\frac{|B_4|}{4!}(2ix)^4+\frac{|B_6|}{6!}(2ix)^6-\cdots = 1-\frac{|B_2|}{2!}(2x)^2-\frac{|B_4|}{4!}(2x)^4-\frac{|B_6|}{6!}(2x)^6-\cdots$$ Thus by combining both expressions for $x\cot(x)$ we obtain $$1-\frac{2x^2}{\pi^2}\{\frac{1}{1^2}+\frac{1}{2^2}+\frac{1}{3^2}+\cdots\}-\frac{2x^4}{\pi^4}\{\frac{1}{1^4}+\frac{1}{2^4}+\frac{1}{3^4}+\cdots\}-\frac{2x^6}{\pi^6}\{\frac{1}{1^6}+\frac{1}{2^6}+\frac{1}{3^6}+\cdots\}-\cdots = 1-\frac{|B_2|}{2!}(2x)^2-\frac{|B_4|}{4!}(2x)^4-\frac{|B_6|}{6!}(2x)^6-\cdots$$ Equating coefficients, we see that $$\frac{2}{\pi^{2n}}\{\frac{1}{1^{2n}} + \frac{1}{2^{2n}}+\frac{1}{3^{2n}}+\cdots\} = \frac{|B_{2n}|}{(2n)!}2^{2n} $$ Therefore, $$\frac{1}{1^{2n}} + \frac{1}{2^{2n}}+\frac{1}{3^{2n}}+\cdots= (-1)^{n+1} \frac{{(2\pi)}^{2n} B_{2n}}{2(2n)!}$$ Edit 08/09/16 I removed my second question since it seemed to cause confusion. I think that my thoughts in this area need to be focused a bit more. REPLY [3 votes]:
The Euler-Maclaurin summation formula.
Riemann-Roch theorems using the Todd class (Hirzebruch, Grothendieck).
Some forms of the Baker-Campbell-Hausdorff formula.
<|endoftext|> TITLE: Cardinality of power set of $\mathbb N$ is equal to cardinality of $\mathbb R$ QUESTION [5 upvotes]: How can one prove that the cardinality of the power set of $\mathbb N$ is equal to the cardinality of $\mathbb R$? I need a reference or a proof outline; I looked around but didn't find results. Thanks. REPLY [19 votes]: I. There is an injection $f:P(\mathbb N)\to\mathbb R,$ that is, $|P(\mathbb N)|\le|\mathbb R|.$ Proof. For $S\subseteq\mathbb N$ define $f(S)=\sum_{n\in S}10^{-n}.$ II.
There is an injection $g:\mathbb R\to P(\mathbb N),$ that is, $|\mathbb R|\le|P(\mathbb N)|.$ Proof. Fix an enumeration $\mathbb Q=\{q_n:n\in\mathbb N\}$ of the set $\mathbb Q$ of all rational numbers, and define $g(x)=\{n:q_n\lt x\}.$ The equality $|P(\mathbb N)|=|\mathbb R|,$ that is, the existence of a bijection between $P(\mathbb N)$ and $\mathbb R,$ follows from I and II by virtue of the Cantor–Schröder–Bernstein theorem. Since I can't tell from your question if you are familiar with that theorem, I will now give a proof. III. Lemma (Knaster–Tarski Fixed Point Theorem). If a function $\varphi:P(A)\to P(A)$ satisfies the condition $$X\subseteq Y\implies\varphi(X)\subseteq\varphi(Y)$$ for all $X,Y\subseteq A,$ then $\varphi(S)=S$ for some $S\subseteq A.$ Proof. Let $\mathcal F=\{X:X\subseteq\varphi(X)\}$ and let $S=\bigcup\mathcal F.$ First note that, if $X\in\mathcal F,$ then $X\subseteq S$ and $X\subseteq\varphi(X)\subseteq\varphi(S).$ Since every member of $\mathcal F$ is a subset of $\varphi(S),$ it follows that $S=\bigcup\mathcal F\subseteq\varphi(S),$ that is, $S\in\mathcal F.$ From $S\in\mathcal F$ it follows that $\varphi(S)\in\mathcal F$ (since $S\subseteq\varphi(S)$ implies $\varphi(S)\subseteq\varphi(\varphi(S))$ by monotonicity), whence $\varphi(S)\subseteq S,$ and so $\varphi(S)=S.$ IV. Theorem (Cantor–Schröder–Bernstein). If there are injections $f:A\to B$ and $g:B\to A,$ then there is a bijection between $A$ and $B.$ Proof. Define a function $\varphi:P(A)\to P(A)$ by setting $$\varphi(X)=A\setminus g[B\setminus f[X]]$$ for $X\subseteq A,$ and observe that $X\subseteq Y\subseteq A\implies\varphi(X)\subseteq\varphi(Y).$ By the lemma, there is a set $S\subseteq A$ such that $\varphi(S)=S,$ that is, $g[B\setminus f[S]]=A\setminus S.$ Thus, $f$ maps $S$ bijectively onto $f[S],$ and $g$ maps $B\setminus f[S]$ bijectively onto $A\setminus S,$ so there is a bijection between $A$ and $B.$<|endoftext|> TITLE: True or false: If $f(x)$ is continuous on $[0, 2]$ and $f(0)=f(2)$, then there exists a number $c\in [0, 1]$ such that $f(c) = f(c + 1)$. QUESTION [11 upvotes]: I was solving past exams from a calculus course and I encountered a problem which I couldn't figure out how to solve. The question is the following: Determine whether the given statement is true or false: Suppose that $f(x)$ is continuous on $[0, 2]$ and $f(0)=f(2)$. Then there exists a number $c\in [0, 1]$ such that $f(c) = f(c + 1)$. Just from reading the question, you want to say "use the Mean Value Theorem or Rolle's theorem", but we don't know whether $f(x)$ is differentiable on the interval (at least it is not given), so I am totally stuck. I would appreciate any help. REPLY [14 votes]: Define a new function by $g(x)=f(x+1)-f(x)$. Notice that $$g(0) = f(1) - f(0)$$ $$g(1) = f(2) - f(1) = f(0) - f(1)$$ and therefore $g(0) = -g(1)$. Now use the Intermediate Value Theorem. REPLY [5 votes]: Hint: Consider $\phi(x) = f(x+1)-f(x)$ on $[0,1]$.
Note that $\phi(1) = - \phi(0)$.<|endoftext|> TITLE: Logical proposition for "Every positive integer can be written as the sum of 2 squares" QUESTION [7 upvotes]: I am to formulate the logical proposition for Every positive integer can be written as the sum of 2 squares (domain of integers) One of the previous questions was Formulate the logical proposition for the sentence "The number 100 can be written as the sum of 2 squares" (domain of integers) And my answer was $$ \exists x \exists y(x^2 + y^2 = 100)$$ So I thought to take the same approach with Every positive integer can be written as the sum of 2 squares And came up with $$ \forall x \exists a \exists b (a^2 + b^2 = x)$$ But nowhere does this state that $x$ is positive, so would $$ \exists x \exists a \exists b (a^2 + b^2 = x)$$ be correct instead? If someone could tell me where I went wrong or give me a hint, I'd greatly appreciate the help. Thank you very much. REPLY [7 votes]: You can solve this question by using your previous answer, $$P_{100}:= \exists\ x,y\ (x^2 + y^2 = 100)$$ where you replace $100$ by "any positive integer", say $z$: $$\forall z>0\ P_z,$$ giving $$\forall z>0\ \exists\ x,y\ (x^2 + y^2 = z).$$ ($x,y,z\in\mathbb Z$ is left implicit.) REPLY [5 votes]: You can formulate this statement as: $\large\forall_{x\in\mathbb{N}}\exists_{a,b\in\mathbb{N}}:x=a^2+b^2$ If you're worried about $0\in\mathbb{N}$ (which is a matter of definition), then you can use $\mathbb{Z^+}$ instead. BTW, the statement itself is false.<|endoftext|> TITLE: What's the meaning of the inverted Greek letter iota "ι" in Principia Mathematica *14? QUESTION [6 upvotes]: The inverted iota is always followed by a variable, as in: $(\iota x)\phi x$ Some books say the formula should be read "the $x$ such that $x$ is $\phi$". Does it mean the following: $x=\phi$, i.e., assign $\phi$ to the variable $x$? REPLY [10 votes]: This is the descriptor operator. $(\iota x)\varphi x$ is the unique $x$ with the property specified by $\varphi$ (should it be the case that, indeed, there is precisely one such $x$). The Wikipedia entry on Principia has a very decent explanation of their notation.<|endoftext|> TITLE: Contradiction between Taylor's theorem and properties of the Taylor series of $f(x)=e^{-1/x^2}$? QUESTION [5 upvotes]: It seems to me that there is a contradiction between Taylor's theorem and the properties of the Taylor series of the function $f(x)=e^{-1/x^2}$ if $x\ne0$ and $f(x)=0$ if $x=0$. On the one hand, with the above extension $f$ is $C^\infty$ at $x=0$ and the Taylor series $TS[f]_0(x)$ of $f(x)$ at $x=0$ is identically null with a radius of convergence $R_c=\infty$, whereas $f(x)\ne 0$ for $x \ne 0$. As a consequence, $TS[f]_0(x)\ne f(x)$, $\forall x\ne0$. On the other hand, Taylor's theorem states that an expansion at order $n$ gives: $f(x)=f(0)+\frac{f'(0)}{1!}x+\frac{f''(0)}{2!}x^2+...+\frac{f^{(n)}(0)}{n!}x^n+h_n(x)x^n, \qquad$ with $\lim_{x\rightarrow0}h_n(x)=0$. So this means that we should get better approximations of $f(x)$ as $n$ increases, which is not the case in this particular example, as increasing $n$ doesn't make a difference in the polynomial expansion. Now, what is the problem here? Is it that we cannot apply Taylor's theorem in this particular case, and if so, why? Or is it wrong to express the remainder as $h_n(x)x^n$ with $\lim_{x\rightarrow0}h_n(x)=0$? Alternatively, what special properties could the functions $h_n(x)$ have that prevent $h_n(x)x^n$ from becoming smaller as $n$ increases? Or am I missing something else?
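As a quick symbolic check (a sketch assuming SymPy is available), one can watch every Taylor coefficient vanish while the function itself does not:

```python
# Symbolic sketch: every derivative of exp(-1/x^2) has limit 0 at x = 0,
# so every Taylor coefficient of f at 0 vanishes, while f(x) > 0 for x != 0.
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-1 / x**2)
for n in range(5):
    print(n, sp.limit(sp.diff(f, x, n), x, 0))  # prints 0 for n = 0, 1, 2, 3, 4
```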
REPLY [2 votes]: Taylor's theorem is perfectly true, even in this case. Simply, the value of the function is entirely in its remainder term. This is one case where the Taylor series does not converge to the value of the function. $\mathcal C^\infty$ functions whose Taylor series converges to the value of the function are called analytic functions. Thus you see that analytic functions and $\mathcal C^\infty$ functions are different notions, for functions of a real variable. For functions of a complex variable, the situation is quite different: a function which is differentiable is ipso facto $\mathcal C^\infty$ and analytic.<|endoftext|> TITLE: Finding Euler decomposition of a symplectic matrix QUESTION [6 upvotes]: A symplectic matrix is a $2n\times2n$ matrix $S$ with real entries that satisfies the condition $$ S^T \Omega S = \Omega $$ where $\Omega$ is the symplectic form, typically chosen to be $\Omega=\left(\begin{smallmatrix}0 & I_n \\ -I_n & 0\end{smallmatrix}\right)$. Symplectic matrices form the symplectic group $Sp(2n,\mathbb{R})$. Any symplectic matrix $S$ can be decomposed as a product of three matrices as \begin{equation} S = O\begin{pmatrix}D & 0 \\ 0 & D^{-1}\end{pmatrix}O' \quad \quad \forall S \in Sp(2n,\mathbb{R}), \end{equation} where $O, O'$ are orthogonal and symplectic, i.e. in $\operatorname{Sp}(2n,\mathbb{R})\cap \operatorname{O}(2n)$; $D$ is positive definite and diagonal. The form of a matrix that is both symplectic and orthogonal can be given in block form as $O=\left(\begin{smallmatrix}X & Y \\ -Y & X\end{smallmatrix}\right)$, where $XX^T+YY^T=I_n$ and $XY^T-YX^T=0$. The decomposition above is known as Euler decomposition or alternatively as Bloch-Messiah decomposition. How can I find the matrices in the decomposition for a given symplectic matrix? Apparently, the decomposition is closely related to the singular value decomposition and I think the elements of the matrices $D$ and $D^{-1}$ coincide with the singular values of $S$. Also, I have the impression that the case where it can be assumed that $S$ is also symmetric is easier. Any help, tips or pointers would be much appreciated! REPLY [7 votes]: Let $\varDelta$ stand for your diagonal matrix to write the factorization as $S=O\varDelta O'$. Now, rewrite a bit more: $$ S=(O\varDelta O^T)(OO')=\varSigma U, $$ with $\varSigma$ symplectic positive definite and $U$ symplectic and orthogonal. Thus we are looking for a polar decomposition. In fact such a decomposition is unique, and given by $$ \varSigma=(SS^T)^{1/2},\quad U=(SS^T)^{-1/2}S. $$ Here we use that a positive definite (symplectic) matrix has a unique positive definite (symplectic) square root. Now, once we have $\varSigma$, it can be diagonalized with a symplectic orthogonal linear change $O$ (again quite constructive), and we get the data we sought.
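A numerical sketch of this first, polar-decomposition step (Python with NumPy/SciPy; the random symplectic test matrix is my own construction via the exponential of a Hamiltonian matrix):

```python
import numpy as np
from scipy.linalg import expm, sqrtm

n = 2
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])  # the symplectic form Omega

# Random symplectic S: if A is symmetric, J @ A is Hamiltonian and expm of a
# Hamiltonian matrix is symplectic.
rng = np.random.default_rng(0)
A = rng.standard_normal((2 * n, 2 * n))
A = (A + A.T) / 2
S = expm(J @ A)

Sigma = sqrtm(S @ S.T).real        # symplectic positive definite factor
U = np.linalg.solve(Sigma, S)      # = Sigma^{-1} S, symplectic orthogonal factor

print(np.allclose(S, Sigma @ U))              # polar decomposition S = Sigma U
print(np.allclose(U @ U.T, np.eye(2 * n)))    # U is orthogonal
print(np.allclose(U.T @ J @ U, J))            # U is symplectic
```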
For the "basic facts" one can look at Gosson's book, for instance.<|endoftext|> TITLE: Prove or disprove that $\mathbb{R}^2 / \mathfrak{S}$ is homeomorphic to $\mathbb{R}/ \mathfrak{R} \times \mathbb{R} / \mathfrak{R}$ QUESTION [7 upvotes]: In $\mathbb{R}$ with the usual topology is considered the following equivalence relation $$x \mathfrak{R} y \; \text{if and only if}\ x,y\in \mathbb{Q}\ (\text{or}\ x=y)$$ $\mathfrak{S}$ $\mathfrak{R}$ On the other hand, in $\mathbb{R}^2$ with the usual topology, is defined $$(x,y)\mathfrak{S} (x',y')\; \text{if and only if}\ (x,y),(x',y')\in \mathbb{Q}^2 \; ( \text{or}\ (x,y)=(x',y'))$$ Are $\mathbb{R}^2 / \mathfrak{S}$ and $\mathbb{R}/ \mathfrak{R} \times \mathbb{R} / \mathfrak{R}$ homeomorphic? More generally, can $\mathbb{R}^2/\mathfrak{S}$ be homeomorphic to a product space? I've been trying to picture how the quotient spaces look like. All points in $\mathbb{Q}$ are one point in $\mathbb{R}/ \mathfrak{R}$, so if I take $x\in \mathbb{Q}$, then $$[x]=\{ y\in \mathbb{R} \mid x\mathfrak{R} y\}=\{y\in \mathbb{R} \mid y\in \mathbb{Q} \}=\mathbb{Q}$$ and if $x \not \in \mathbb{Q}$, then $$[x]=\{x\}$$ and for the other equivalence relation $\mathfrak{S}$ it should work the same way, but I don't know how to prove or disprove whether the two quotient spaces are homeomorphic or not. Thanks in advance! REPLY [3 votes]: In $ℝ^2/\mathfrak{S}$ the point rational×rational is dense while all the other points are closed. In $(ℝ/\mathfrak{R})^2$ the point rational×rational is dense, the points irrational×irrational are closed, and the points rational×irrational and irrational×rational have nowhere-dense closure of size continuum.<|endoftext|> TITLE: Prove that the equation $12x^2-y^2 = 1$ has no integer solutions QUESTION [10 upvotes]: Prove that the equation $12x^2-y^2 = 1$ has no integer solutions. I thought about taking this modulo some number to find a contradiction, but couldn't get anything. What should I do? REPLY [3 votes]: I'll try to explain the idea of André's answer in a simple manner. Note that if we divide any integer by $3$ there are only $3$ possibilities: We get a remainder of $0$ We get a "remainder" of $1$ or $-2$ We get a "remainder" of $2$ or $-1$ Hence any integer can be represented as one of the following: $$3k+0$$ $$3k+1$$ $$3k+2$$ Where $k$ is an integer As a side note, using this, if we compute: $n^2$ By setting $n=3k+0,3k+1,3k+2$ we will notice that any perfect square can only have remainder $0$ or $1$ when divided by $3$. Now going back to the equation you are interested in: $$12x^2-y^2=1$$ Set $x=3k+z$ and $y=3u+z$ where $z$ is $0$, $1$ or $2$ and $k$ is any integer: $$12(9k^2+6kz+z^2)-(9u^2+6uz+z^2)=1$$ This is equivalently: $$12(9k^2+6kz+z^2)-(9u^2+6uz)-z^2=1$$ Or: $$12(9k^2+6kz+z^2)-(9u^2+6uz)=1+z^2$$ The right hand side must only have remainder $1$ or $2$ when divided by $3$ (this holds for all integers $z$ from our calculation of $n^2$ but for simplicity just take our original $z=0,1,2$). The left hand side on the other hand is divisible by $3$ and thus must have remainder $0$ when divided by $3$. Thus we conclude that they can't possible be equal for integers $x$ and $y$ because they give different remainders when divided by $3$.<|endoftext|> TITLE: Show that geodesic distance on the $n$-sphere respects the triangle inequality QUESTION [7 upvotes]: Let $S^n$ be the standard unit $n$-sphere, embedded in Euclidian space as $S^n = \{ x \in {\mathbb{R}}^{n+1} | \| x \| = 1 \}$. 
Define geodesic distance as $d(x, y) = \arccos (x \cdot y)$, where $\cdot$ is the Euclidean dot product. My lecture notes introduce geodesic distance on page 11, but they're quite hand-wavy about the proof that distance is subadditive. To establish the triangle inequality, let $x, y, z ∈ S^{n}$, $\theta = \arccos(\langle x, y \rangle)$ and $\varphi = \arccos(\langle y, z \rangle)$. Then we can write $x = y \cos \theta + u \sin \theta$ and $z = y \cos \varphi + v \sin \varphi$ where $u$ and $v$ are unit vectors orthogonal to $y$. An easy calculation now gives $\langle x, z \rangle = \cos \theta \cos \varphi + \langle u, v \rangle \sin \theta \sin \varphi$. Concretely, how do I find $u$ and $v$? What is this "easy calculation"? REPLY [5 votes]: You are projecting $x, z$ onto $y$ respectively: Consider the plane spanned by $x, y$ (let's say $x\neq \pm y$). Then one can find a unit vector in this plane which is orthogonal to $y$. Call this $u$. Then one has $$ x = \langle x, y\rangle y + \langle x, u\rangle u.$$ By definition, $\arccos \langle x, y\rangle = \theta$, so $\cos\theta = \langle x, y\rangle$. Since $y, u$ are orthonormal and $x$ is a unit vector in their span, this forces $\langle x, u\rangle = \pm\sin \theta$; replacing $u$ by $-u$ if necessary, we may take $\langle x, u\rangle = \sin \theta$. Thus $$ x = y\cos \theta + u\sin\theta .$$ (Similarly for $z, v$.) (2) is a really easy calculation, starting from $$ \langle x,z\rangle = \langle y \cos\theta + u\sin\theta , y\cos\varphi + v\sin\varphi \rangle$$ and then expanding the right hand side.<|endoftext|> TITLE: If two vector spaces $V$ and $W$ are isomorphic and $V$ is F-D then $W$ is F-D. Furthermore, $\text{dim} V = \text{dim} W$. QUESTION [5 upvotes]: Is the following statement true? Conjecture. If two vector spaces $V$ and $W$ are isomorphic and $V$ is finite dimensional (F-D) then $W$ is finite dimensional. Furthermore, $\text{dim} V = \text{dim} W$. and if YES, how can it be proved? I can prove the following, which is slightly different from the conjecture: Theorem. Two finite dimensional vector spaces $V$ and $W$ are isomorphic if and only if they have the same dimension. but it does not help to deduce the conjecture from it. However, I strongly feel that the conjecture should be true. I would be thankful if you provide a hint. :) Motivation of the Question: While I was reading Linear Algebra Done Right by Sheldon Axler I encountered this theorem (screenshot omitted). However, I was not able to prove the red-underlined part from the references made! It seems that a little point was overlooked by the author! The references are (screenshots omitted), where the notation used is: $\Bbb{F}$ is the field $\Bbb{R}$ or $\Bbb{C}$. $\Bbb{F}^{m,n}$ is the vector space of $m \times n$ matrices. $\mathcal{L}(V,W)$ is the vector space of linear maps from $V$ to $W$. $\mathcal{M}$ is a linear map from $\mathcal{L}(V,W)$ to $\Bbb{F}^{m,n}$ which gives the corresponding matrix of a linear map belonging to $\mathcal{L}(V,W)$. REPLY [5 votes]: Yes. Pick a basis for $V$ and carry it through the isomorphism. Use the fact that it is an isomorphism to show that it is a basis of $W$. Surjectivity gives that it generates, and injectivity gives that it is linearly independent. In more detail, let $\{e_1,\cdots, e_n\}$ be a basis for $V$, and $f$ be the isomorphism. Let's show $\{f(e_1),\cdots, f(e_n)\}$ is a basis. First, let's show it generates. Take $w \in W$. Since $f$ is an isomorphism, it is surjective, and there exists $v \in V$ such that $f(v)=w$. But $v$ can be written as $v=c_1e_1+\cdots+c_ne_n$ for some scalars $c_1,\cdots, c_n$, since $\{e_1,\cdots,e_n\}$ is a basis.
Then, $$w=c_1f(e_1)+\cdots+c_nf(e_n),$$ since $f$ is linear. Let's show it is linearly independent. Let $$k_1f(e_1)+\cdots+k_nf(e_n)=0.$$ Then, $$f(k_1e_1+\cdots+k_ne_n)=0.$$ Therefore, since $f$ is injective, $k_1e_1+\cdots+k_ne_n=0$, which gives that each $k_i$ is zero, since $\{e_1,\cdots,e_n\}$ is a basis.<|endoftext|> TITLE: Is there a matrix of every size with all its submatrices invertible? QUESTION [42 upvotes]: Let's call a real matrix of size $m \times n$ totally invertible if for every $k$ rows and $k$ columns that we choose, we get an invertible matrix. I am curious about the following: Is there a totally invertible matrix for all sizes $m \times n$? By taking the transpose, we can assume $m \leq n$. For $m=1$ we can take any vector in $\Bbb R^n$ without a zero entry. For $m=2$ we can take $\begin{pmatrix} 1 &2 & 3 & ... &n \\ 2 & 3 & 4 & ... & n+1 \end{pmatrix}$. Indeed, $\det\begin{pmatrix} a &b \\ a+1 & b+1 \end{pmatrix}=a-b$ is nonzero when $a \neq b$. This does not generalize, since $a_{ij}=i+j-1$ fabulously fails for all submatrices of size $\geq 3$. I also tried taking the rows to be $m$ of the cyclic shifts of the vector $(1,2, ... ,n)$. This does not work either because I found a $k=3, m=n=5$ counterexample. I do feel that the answer should be positive, however. Is it? REPLY [5 votes]: The condition that any particular submatrix has determinant zero is a polynomial equation — the solution set has measure zero in the space of all matrices. There are finitely many submatrices; the set for which some submatrix has determinant zero is a finite union of measure zero sets, and thus has measure zero. Consequently, almost all matrices are totally invertible.<|endoftext|> TITLE: Is saying a set is nowhere dense the same as saying a set has no interior? QUESTION [8 upvotes]: I have two statements: "A set X is nowhere dense" and "A set X has no interior". Are these equivalent statements? REPLY [16 votes]: For possible future use by others, it might be helpful to dissect these notions a bit more systematically in a metric space setting to see what makes them different. In particular, this will show how being nowhere dense is a super-strong way of having empty interior. $B(x,\epsilon)$ denotes the open ball centered at $x$ with radius $\epsilon,$ where $\epsilon > 0$ is assumed. $X$ has empty interior This is equivalent to each of the following: $X$ has no interior points. For each $x$ in the space, $x$ is not an interior point of $X.$ For each $x$ in the space, there is no nonempty open set containing $x$ that is a subset of $X.$ For each $x$ in the space, there is no ball $B(x,\epsilon)$ that is a subset of $X.$ For each $x$ in the space, each ball $B(x,\epsilon)$ contains at least one point that belongs to the complement of $X.$ $X$ is nowhere dense This is equivalent to each of the following: $X$ is not dense in each nonempty open set.
$X$ is not dense in each ball $B(x,\epsilon).$ For each $x$ in the space, each ball $B(x,\epsilon)$ is NOT densely filled with points from $X.$ For each $x$ in the space, each ball $B(x,\epsilon)$ contains at least one nonempty open set that is a subset of the complement of $X.$ For each $x$ in the space, each ball $B(x,\epsilon)$ contains at least one ball all of whose points belong to the complement of $X.$<|endoftext|> TITLE: A descending sequence of sets with outer measure 1 QUESTION [5 upvotes]: Show that there is a sequence of sets $A_n\subseteq [0,1]$, with outer measure 1 ($\mu^*(A_n)=1$ for every $n$), so that $A_1\supseteq A_2\supseteq A_3\supseteq...$ and $\bigcap_{n=1}^{\infty}A_n=\varnothing$. From these conditions, the sets can't be measurable, so my direction is to use $\mathbb{R}/\mathbb{Q}$ in some way (like the standard construction of a non-measurable set), but I can't figure out how. Any ideas? REPLY [5 votes]: The family $\mathcal F$ of nonempty open subsets of $[0,1]$ of measure $< 1$ has cardinality $c$, so it can be well-ordered so that each member of $\mathcal F$ has fewer than $c$ predecessors. Since the complement in $[0,1]$ of any member of $\mathcal F$ has cardinality $c$, we can choose by transfinite induction, for each $U \in \mathcal F$, a countably infinite set $S(U) =\{s_1(U), s_2(U), \ldots \} \subset [0,1] \backslash U$ such that $S(U)$ is disjoint from $S(V)$ for each predecessor $V$ of $U$. Let $$ A_n = \bigcup_{U \in \mathcal F} \{s_j(U) \; : \; j \ge n\}$$ It has outer measure $1$ because for each $U \in \mathcal F$ it contains points of $[0,1] \backslash U$. By construction, $A_{n+1} \subset A_n$. Each member of $A_1$ is $s_j(U)$ for some unique $U \in \mathcal F$ and some $j$, and thus is not in $A_n$ for $n > j$. That says $\bigcap_n A_n = \emptyset$.<|endoftext|> TITLE: Integration of a cohomology class over a homology class. QUESTION [5 upvotes]: Can anyone explain to me what is said in the following article: http://indico.ictp.it/event/a06114/material/0/0.pdf, page 3, by Aroldo Kaplan? The paragraph says: Stokes: $$ \int_M d \omega = \int_{ \partial M }\omega $$ implies: $$ d \omega = 0 \ \ \Longleftrightarrow \ \ \int_{ \mathrm{boundary} } \omega = 0 \ \ \ \ \ \ \ \ \ \mathrm{and} \ \ \ \ \ \ \ \ \partial M = 0 \ \ \Longleftrightarrow \ \ \int_M \mathrm{exact} = 0 $$ so, defining: $ H^k ( X ) = \dfrac{ \{ \omega \in \bigwedge^k \ : \ d \omega = 0 \} }{ \{ \omega \in \bigwedge^{k} \ : \ \omega = d \phi \} } = \dfrac{ \mathrm{closed} }{ \mathrm{exact} } $ , $ \ H_k ( X ) = \dfrac{ \{ M \subset X \ : \ \partial M = 0 \} }{ \{ M \subset X \ : \ M = \partial N \} } = \dfrac{ \mathrm{cycles} }{ \mathrm{boundaries} } $ the bilinear (function?): $$ H^k ( X ) \times H_k ( X ) \to \mathbb{R}, \ \ \ \ \ \ \ \ \ \ \ \ ( [ \omega ] , [M] ) = \int_M \omega $$ ( = flux of $ \omega $ through $ M $ ) is well defined. So, I still do not understand why: $$ H^k ( X ) \times H_k ( X ) \to \mathbb{R}, \ \ \ \ \ \ \ \ \ \ \ \ ( [ \omega ] , [M] ) = \int_M \omega $$ is well defined. Can you explain that to me, please? Thanks in advance for your help. REPLY [2 votes]: Let's prove that $([\omega],[M])=([\omega+d\phi],[M])$. Indeed, $([\omega+d\phi],[M])-([\omega],[M])=\int_M d\phi=\int_{\partial M}\phi=0$, because $M$ is a cycle. Secondly, let's prove that $([\omega],[M])=([\omega],[M+\partial N])$.
Indeed, $([\omega],[M+\partial N])-([\omega],[M])=\int_{\partial N}\omega=\int_N d\omega=0$, because $\omega$ is closed.<|endoftext|> TITLE: Hartshorne Lemma II.5.3 Proof QUESTION [5 upvotes]: The question I have is from the proof of the lemma above, but it is actually a more general statement about quasi-coherent sheaves on an affine scheme. Suppose $X= \text{Spec }A$ for some ring $A$, and $\mathscr{F}$ is a quasi-coherent sheaf on $X$. Then for some open affine cover of $X$, the restriction sheaf is isomorphic to a sheaf of a module over the corresponding ring. In particular, if $\text{Spec }B$ is in the cover, then $\mathscr{F}|_{\text{Spec } B} \cong \widetilde{M}$ for a $B$-module $M$. This part is by definition. Now $\text{Spec }B$ is covered by distinguished open sets of the form $D(g)$ for $g\in A$, and for any such open set the inclusion $D(g)\subseteq \text{Spec }B$ is induced by the ring map $B\to A_g$. Thus $\mathscr{F}|_{D(g)} \cong (M\otimes_B A_g)^{\tilde{}}$. He deduces the last sentence from a previous proposition that deals with properties of the sheaves of modules. The two properties that seem important for this deduction are the following: For a ring map $A \to B$ inducing the map of spectra $f:\text{Spec }B \to \text{Spec }A$, (1) If $M$ and $N$ are $A$-modules, then $(M\otimes N)^{\tilde{}} \cong \widetilde{M} \otimes_{\mathcal{O}_{\text{Spec }A}} \widetilde{N}$. (2) For any $A$-module $M$, $f^*(\widetilde{M})\cong (M\otimes_{A} B)^{\tilde{}}$. I cannot seem to make the connection. So any help with his last statement would be great. Thanks. REPLY [6 votes]: I am clarifying further the crucial point raised in the comments to @hwong557's answer. Let $(f,f^\#): (X,\mathcal{O}_X) \rightarrow (Y,\mathcal{O}_Y)$ be a morphism of locally ringed spaces. Suppose that $f(X)$ is open in $Y$ and that $f$ induces an isomorphism $\big(X,\mathcal{O}_X\big) \rightarrow \big(f(X),\mathcal{O}_Y|_{f(X)}\big)$ of locally ringed spaces. Hence, we have an isomorphism of sheaves $\mathcal{O}_Y|_{f(X)} \cong f_*\mathcal{O}_X$. Moreover, for every open set $U$ of $X$ we have an isomorphism of rings $\mathcal{O}_Y(f(U)) = \mathcal{O}_Y|_{f(X)}(f(U)) \cong f_*\mathcal{O}_X(f(U)) = \mathcal{O}_X(U)$. Now let $\mathscr{G}$ be an $\mathcal{O}_Y$-module. Let us look at the sections of the presheaf that gives rise to $f^* \mathscr{G}$. Let $U$ be open in $X$. Then the sections of that presheaf over $U$ are $f^{-1}\mathscr{G}(U) \otimes_{f^{-1} \mathcal{O}_Y(U)} \mathcal{O}_X(U)$. But $f^{-1} \mathcal{O}_Y(U) = \lim_{V' \supset f(U)} \mathcal{O}_Y(V') =\mathcal{O}_Y(f(U))$, and we already know that this latter ring is isomorphic to $\mathcal{O}_X(U)$. Hence, $f^{-1}\mathscr{G}(U) \otimes_{f^{-1} \mathcal{O}_Y(U)} \mathcal{O}_X(U) \cong f^{-1}\mathscr{G}(U)$. This now shows that $f^* \mathscr{G}$ is the same as $f^{-1}\mathscr{G}$, and the latter is by definition $\mathscr{G}|_X$.<|endoftext|> TITLE: Examples of problems that are easier in the infinite case than in the finite case. QUESTION [119 upvotes]: I am looking for examples of problems that are easier in the infinite case than in the finite case. I really can't think of any good ones for now, but I'll be sure to add some when I do. REPLY [13 votes]: I can't believe nobody has mentioned applications of the Central Limit Theorem (CLT). For example, calculating this: \begin{align} \int^{n+\sqrt{n}}_0 \frac{t^{n-1}}{(n-1)!}e^{-t}\,dt \end{align} is very easy when $n\to \infty$ using the CLT.
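Indeed, the integral is the CDF of a $\Gamma(n,1)$ random variable evaluated at $n+\sqrt n$, i.e. of a sum of $n$ independent $\mathrm{Exp}(1)$ variables with mean $n$ and standard deviation $\sqrt{n}$, so by the CLT it tends to $\Phi(1)\approx 0.8413$. A quick numerical check (Python with SciPy; my own sketch):

```python
from math import sqrt
from scipy.stats import gamma, norm

for n in [10, 100, 10_000]:
    # gamma.cdf(x, a) with shape a = n and scale 1 equals the integral above
    print(n, gamma.cdf(n + sqrt(n), a=n))
print("CLT limit:", norm.cdf(1))  # Phi(1) ~ 0.8413
```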
And yet another application of the CLT is Wilks' theorem, a very strong statement about the infinite case.<|endoftext|> TITLE: Dif. Eq. General solution is sum of homogeneous and particular solutions QUESTION [10 upvotes]: Why is the general solution to ordinary first order differential equations a sum of homogeneous (setting the inhomogeneous term to 0) and particular (satisfies the differential equation but not necessarily the initial conditions) solutions? $x=x_{h}+x_{p}$ I have tried finding a proof on the Internet but without a result. Can someone give me a not too complicated mathematical proof (if possible) of why this relation is valid? Thank you! REPLY [12 votes]: Let us denote the linear differential operator by $D$. We write its corresponding differential equation as $D(x)=b$, with full solution set $\Omega$. Let $p$ be any particular solution to the equation $D(x)=b$, such that $D(p)=b$. Now consider the set $ S=\{p+h|h \text{ satisfies the homogeneous equation}\}$. We will show that this is equal to the solution set. We will show set inclusion both ways. First we will show that if a general solution exists, it is in this set. Assume that $s\in \Omega$, the full solution set to the differential equation, then consider: $D(s-p)=D(s)-D(p)=b-b=0$. This means that $s-p$ is a homogeneous solution, therefore we know $s=p+(s-p)\in S$. Hence $ \Omega \subseteq S$. Finally, by linearity of differentiation we have for any $p+h \in S$: $D(p+h)=D(p)+D(h)=b+0=b$. So $S\subseteq \Omega$. Thus by mutual inclusion $S=\Omega$ $\square$.<|endoftext|> TITLE: Show that $\mathbb{Q}$ is not locally compact with a characterization of local compactness QUESTION [5 upvotes]: I know this question has been asked to death, but I wish to prove that $\mathbb{Q}$ is not locally compact using a lemma I have not seen used in the other proofs. Lem: Given $(X, \mathfrak{T})$ Hausdorff, then it is locally compact iff every point is contained in an open set with compact closure. My attempt: Since $\mathbb{Q}$ is a subspace of $\mathbb{R}$, it is Hausdorff, given that Hausdorffness is hereditary (it passes to arbitrary subspaces). To show that $\mathbb{Q}$ is not locally compact, we wish to produce a point $x \in \mathbb{Q}$ that is contained in an open set without a compact closure. Let $U = (a,b) \subset \mathbb{R}$. Then $U \cap \mathbb{Q}$ is an open set in the subspace topology. Pick $x \in U \cap \mathbb{Q}$, then $x \in U \cap \mathbb{Q} \subseteq \overline {U \cap \mathbb{Q}}$. (Alarm bells: $\overline {A \cap B}$ might not be equal to $\overline A \cap \overline B$) Now we need to show that $ \overline {U \cap \mathbb{Q}}$ is not compact...however intuitively this set feels like $[a,b] \subset \mathbb{R}$... What! Does anyone see how to resolve this? Thanks so much. Also, there is another answer using a much simpler characterization: Why Q is not locally compact, connected, or path connected? Can we see that the lemma I am using and the one given at https://math.stackexchange.com/a/650270/174904 are equivalent? REPLY [7 votes]: You need to find a point so that none of its open neighborhoods has compact closure. Let's consider $0$. Suppose $U$ is an open neighborhood of $0$ (in $\mathbb{Q}$) that has compact closure. Then $U\supseteq(-1/n,1/n)\cap\mathbb{Q}$, for some integer $n>0$. Therefore the closure of $(-1/n,1/n)\cap\mathbb{Q}$ (in $\mathbb{Q}$) is compact as well. Let $C$ be the closure (in $\mathbb{Q}$) of $(-1/n,1/n)\cap\mathbb{Q}$.
Take your favorite irrational number $r\in(0,1/n)$ and an increasing sequence $(r_n)$ in $(0,1/n)\cap\mathbb{Q}$ that converges to $r$. Then $(r_n)$ is a sequence in $C$, but no subsequence is convergent (in $\mathbb{Q}$), a contradiction.<|endoftext|> TITLE: Greatest possible cardinality of a set of numbers less than 1000 such that the sums avoid powers of 2 QUESTION [8 upvotes]: Let $S = \{1, 2, \dotsc, 1000\}$, $T$ is a subset of $S$. For any $a, b\in T$ (can be the same), $a + b$ is not a power of $2$. Find the greatest possible value of $|T|$. Notice that if $a+a=2^N=2a$, then $a$ must be a power of $2$ as well, namely $2^{N-1}$. Hence we can't include any power of $2$ in the set $T$, as $a$ and $b$ could be the same. That leaves us with $T=\{1,3,5,6,...,513,514,...,1000\}$. Now notice that if $1+b=2^N$, then $b=2^N-1$, which means that either every number one less than a power of $2$ is excluded, or $1$ itself cannot be in $T$. Similarly for $3, 5, 6, \dotsc$: there are many more numbers that are $1, 3, 5, \dotsc$ less than a power of $2$ than the single numbers $1, 3, 5$, etc. Hence I think it would be more clever to exclude the single numbers $1, 3, 5, \dotsc$ (I don't know if that's right). From here on I'm stuck and I have no idea. REPLY [5 votes]: According to the approach described by Henning Makholm, if $S=\{1,2,\dots,n\}$ it seems that the maximum size of such a set, $T(n)$, is related to $[n]_2$ (the binary expansion of $n$). It should be $$|T(n)|=\frac{n-\mbox{number of runs in $[n]_2$}}{2}$$ For $n=1000$ we have $[n]_2=1111101000$ and $$|T(1000)|=\frac{1000-4}{2}=498.$$ Is there any connection between Gray code and this problem?<|endoftext|> TITLE: A possible Property of Euler's totient function: $n$ such that $\varphi(n)$ and $\varphi(n+1)$ are both powers of two QUESTION [13 upvotes]: $n$ is an odd positive integer such that $\varphi(n)$ and $\varphi(n+1)$ are both powers of two. Here, $\varphi(n)$ denotes Euler's totient function. Is it true that $(n+1)$ is either $6$ or a power of $2$? Please help me to prove or disprove this statement. (I was randomly flipping through some values attained by the $\varphi$ function when I observed this pattern) :) REPLY [18 votes]: There are exactly seven odd positive integers $n$ such that $\varphi(n)$ and $\varphi(n+1)$ are both powers of two. These are: $$ n = 1, 3, 5, 15, 255, 65535, 4294967295. $$ Written out more helpfully: \begin{align*} 5&\; \\ 1&= 2^{2^0} - 1 \\ 3&= 2^{2^1} - 1 = 3\\ 15 &= 2^{2^2} - 1 = 3 \cdot 5 \\ 255 &= 2^{2^3} - 1 = 3 \cdot 5 \cdot 17 \\ 65535 &= 2^{2^4} - 1 = 3 \cdot 5 \cdot 17 \cdot 257 \\ 4294967295 &= 2^{2^5} - 1 = 3 \cdot 5 \cdot 17 \cdot 257 \cdot 65537. \end{align*} In particular, your conjecture is true: either $n+1 = 6$, or $n+1$ is a power of two. Specifically, $n+1$ is either $6$ or equal to $2^{2^k}$ for $k \in \{0,1,2,3,4,5\}$. You may recognize $3, 5, 17, 257, 65537$ as the known Fermat primes. The reason there are no larger $n$ is that the next Fermat number, $4294967297$, is famously composite and equal to $641 \cdot 6700417$. (Though Fermat conjectured that all Fermat numbers are prime, other than the first five no other Fermat primes are known, and some have conjectured there are no others. See e.g. this oeis entry.) Proof First, we show that for a positive integer $m$, $\varphi(m)$ is a power of two if and only if $m$ is a power of two times a product of zero or more Fermat primes, i.e. primes of the form $2^{2^a} + 1$. (The same is found in Mathematician 42's answer.)
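Before the proof, a quick brute-force check (Python with SymPy; the bound $10^5$ is my own choice) confirms the list below $10^5$:

```python
from sympy import totient

def is_pow2(m):
    # a positive integer is a power of two iff it has a single binary 1-bit
    return m > 0 and m & (m - 1) == 0

hits = [n for n in range(1, 100_000, 2)
        if is_pow2(int(totient(n))) and is_pow2(int(totient(n + 1)))]
print(hits)  # [1, 3, 5, 15, 255, 65535]
```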
Assume that $\varphi(m)$ is a power of two. If $m$ is divisible by the square of any prime $p$, then $p \mid \varphi(m)$ so $p = 2$. Therefore, $m$ is a power of two times some distinct odd primes $p_1 < p_2 < \cdots < p_i$: $$ m = 2^x p_1 p_2 \cdots p_i $$ Thus, $$ \varphi(m) = 2^{x-1} (p_1 - 1) (p_2 - 1) \cdots (p_i - 1), $$ and it follows that each $p_i$ must be a power of two plus 1. And the only prime numbers of the form $2^k + 1$ are of the form $2^{2^a} + 1$ (Fermat primes); a quick proof can be found in this section of the Wikipedia page. So $p_i$ is a Fermat prime. Conversely, any $m$ of this form has $\varphi(m)$ being a power of two. Back to the problem, then, we are looking for all odd positive integers $n$ such that $n, n+1$ are both a power of two times a product of Fermat primes. As $n$ is odd, $n$ must be a product of Fermat primes, and $n+1$ must be a power of two times such a product. To reduce down to only seven possible $n$, we will exploit the fact that a product of Fermat primes has a very, very structured binary expansion. Let $$ n = \prod_{i=1}^l (2^{2^{a_i}} + 1), $$ where $0 \le a_1 < a_2 < \cdots < a_l$ and each term of the product is prime. Expanding out the product, we get that $$ n = \sum_{S \subseteq \{1,2,3,\ldots, l\}} 2^{\displaystyle \left( \sum_{i \in S} 2^{a_i} \right)} $$ What this scary formula is saying is that: $n$ is the sum of $2^l$ distinct powers of two, and the exponents on those powers of two are all possible sums of $2^{a_1}, 2^{a_2}, \ldots, 2^{a_l}$. Thinking in binary: $n$ has $2^l$ digits which are ones; $n$ is odd, so the last digit is also a $1$. And, importantly, if $l \ge 1$ then $n$ consists of two identical blocks: digits $0$ through $2^{a_l} - 1$ and $2^{a_l}$ through $2^{a_l+1}-1$ are the same. The block may start with some $0$s. (Within each of the two blocks we then have two more identical blocks, and so on, but there may be zeroes padded on at each step. Example: $(2^{2^0} + 1)(2^{2^1}+1)(2^{2^3}+1) = 3 \cdot 5 \cdot 257$ is $111100001111$ in binary. The two identical blocks are $00001111$ and $00001111$. Then each block consists of the two identical blocks $11$ and $11$, although four $0$s are padded on in front. Then each of those consists of two identical blocks $1$ and $1$.) By the same reasoning, $n+1$ must be a power of two times a number that looks exactly the same. We can say that for some $r,m$ and $0 \le b_1 < b_2 < \ldots < b_m$, $n+1$ equals $2^r$ times the product of $2^{2^{b_i}} + 1$, and then: $n+1$ has $2^m$ digits which are ones; $\frac{n+1}{2^r}$ consists of two identical blocks: digits $0$ through $2^{b_m} - 1$ and $2^{b_m}$ through $2^{b_m+1}-1$ are the same. (Within each of the two blocks we then have two more identical blocks, and so on.) To form $n+1$ from $n$, we add $1$ to the binary expansion, so the number of $1$s decreases by $(c-1)$, where $c$ is the number of $1$s at the end of the binary expansion of $n$. As $n$ has $2^l$ ones, $n+1$ has $2^l - (c-1)$ ones. Since this has to be a power of two, we must either have $c = 1$ or $c \ge 2^{l-1}+1$. In the former case, we will show that $n$ must equal $5$. In the latter, that $n$ must equal $2^{2^l} - 1$. Case 1: $c = 1$ Since $c = 1$, $n$ has $01$ at the end of its binary expansion. So $n+1$ has the same binary expansion with $01$ replaced by $10$, and $r = 1$ (number of twos dividing $n+1$).
By looking at the $1$s in $n+1$, we may extract exactly the numbers $b_i$: the $2^i$th 1 from the right, if we start counting from 0, is at position $2^{b_i} + 1$ (where positions start counting from $0$ as well, i.e. the $k$th position is the $2^k$s digit). But it is also at position $2^{a_i}$. It follows that only $a_i = 1$ is possible, and $l = 1, m = 1$, $n = 2^{2^1} + 1 = 5$, $n+1 = 6 = 2 \cdot 3$. Case 2: $c \ge 2^{l-1} + 1$ Here, $n$ must have at least $2^{l-1} + 1$ consecutive ones at the right. But $n$ consists of two identical blocks, each containing $2^{l-1}$ ones. So there must not be any zeroes in each block: $n$ must be a string of $2^{l}$ consecutive ones. That is, exactly, $$ n = 2^{2^l} - 1. $$ Now, this factors as $$ n = (2^{2^0} + 1) (2^{2^1} + 1) (2^{2^2} + 1) \cdots (2^{2^{l-1}} + 1). $$ For $l \le 5$, all of these factors are prime and we have a genuine solution. But for $l \ge 6$, the factor $(2^{2^{5}} + 1)$ is present, and factors into $641 \cdot 6700417$, which makes $\varphi(n)$ divisible by $640$ and thus not a power of two.<|endoftext|> TITLE: What values gives the minimum area of the ellipse? QUESTION [6 upvotes]: If the ellipse $\dfrac{x^2}{A}+\dfrac{y^2}{B}=1$ is to enclose the circle $x^2+y^2=2y$, what values of $A,B>0$ minimize the area of the ellipse? So far I've put the circle equation into the standard form: $x^2+(y-1)^2=1$, and I know that $\sqrt A,\sqrt B$ are the lengths of the semi-major and semi-minor axes. I'm not sure how to make sure the circle is enclosed in the ellipse. REPLY [8 votes]: We require the two conics to touch each other. Eliminating $x^2$ leads to the quadratic equation $$(A-B)y^2+2By-AB=0$$ Assuming $A\neq B$, applying the condition that the discriminant is zero leads to the equation $$A^2-AB+B=0$$ We now need to minimize the area $\Delta=\pi\sqrt{AB}$. Therefore we can differentiate $$\Delta^2=\pi^2AB=\pi^2\frac{A^3}{A-1}$$ Setting the derivative to zero will give $$A=\frac 32, B=\frac 92$$ It is readily seen that this will provide the minimum area since there is no maximum. Therefore the semiaxes are $$\sqrt{A}=\sqrt{\frac 32}, \sqrt{B}=\frac{3}{\sqrt{2}}$$ The minimum ellipse area is then $$\frac{3\sqrt{3}}{2}\pi$$<|endoftext|> TITLE: Retraction of a closed orientable surface onto a simple closed curve QUESTION [6 upvotes]: Let $S$ be a closed, orientable surface and $\gamma$ a simple closed curve on $S$. Show that $S$ retracts onto $\gamma$ if and only if $\gamma$ represents a non-trivial element in homology. Clearly, if $[\gamma]=0$ in $H_1(S)$, the composition $H_1(\gamma) \rightarrow H_1(S) \rightarrow H_1(\gamma)$ cannot be the identity. If the class of $\gamma$ is non-zero, how can we construct such a retraction? REPLY [3 votes]: Use the "change of coordinates principle". It is an easy consequence of the classification of surfaces with boundary; the principle states: If $\alpha$ and $\beta$ are any two nonseparating simple closed curves in a surface $S$, then there is a homeomorphism $\phi : S \to S$ with $\phi(\alpha)=\beta$. Therefore, you may assume that the given curve is a meridian. Then you can construct a "pinch" map from $S$ to a torus such that your curve maps to the meridian of this torus, and then retract the torus to the circle.<|endoftext|> TITLE: The Weierstrass map between a torus and an elliptic curve is biholomorphic QUESTION [7 upvotes]: Let $\Lambda$ be a lattice in $\mathbb C$.
From this lattice we can build the Weierstrass $\wp$-function in the following manner: $$ \wp(z) = \frac{1}{z^2}+\sum\limits_{l\in\Lambda\setminus\{0\}}\left( \frac{1}{(z-l)^2} - \frac{1}{l^2}\right) $$ It can be shown that this function satisfies the following differential equation: $$ \wp'(z)^2 = 4\wp(z)^3 - 60G_4\wp(z) - 140 G_6 $$ This function gives us a map from $\mathbb C/\Lambda$ to the elliptic curve $y^2 = 4x^3 - 60G_4 x - 140G_6$ in such a way: $$ z \mapsto [\wp(z):\wp'(z):1] $$ My question is: why is this map biholomorphic? REPLY [3 votes]: For a fixed $\tau \in \mathbb{C} -\mathbb{R}$ define the curve $$E(\mathbb{C}) = \{ (x,y) \in \mathbb{C}^2, y^2 = 4 x^3 - g_2(\tau)x - g_3(\tau)\}$$ with a complex topology inherited from $\mathbb{C}^2$. Define the lattice $\Lambda = \mathbb{Z}+\tau \mathbb{Z}$ and look at $(\mathbb C- \Lambda) / \Lambda$ which is a complex torus minus one point. With the complex topology inherited from $\mathbb{C}$ it is naturally a non-compact Riemann surface. Define the map $$\varphi : (\mathbb C- \Lambda) / \Lambda \to E(\mathbb{C}), \qquad \varphi(z) = (\wp_\tau(z),\wp_\tau'(z))$$ $\varphi$ is well-defined and bijective because $\wp(z)$ takes all the complex values twice (if not then $\frac{1}{\wp(z)-a}$ would be a bounded entire function) and $(\wp_\tau(-z),\wp_\tau'(-z))=(\wp_\tau(z),-\wp_\tau'(z))$. $\varphi$ is also holomorphic as a map $(\mathbb C- \Lambda) / \Lambda \to \mathbb{C}^2$. Thus $\varphi$ is a bijective holomorphic map $(\mathbb C- \Lambda) / \Lambda \to E(\mathbb{C})$ which means it is biholomorphic and $E(\mathbb{C})$ is now a non-compact Riemann surface. Finally $\mathbb C / \Lambda $ is a compact Riemann surface (and an abelian group) by adding the point $z=0$ to $(\mathbb C- \Lambda) / \Lambda$. Thus $E(\mathbb{C})$ becomes a compact Riemann surface when adding one point : the image of $z=0$ by $\varphi$, that we call $O$, whose $\epsilon$- neighborhood is $\{O\} \cup \{(x,y) \in E(\mathbb{C}), |x| > \frac{1}\epsilon\}$ (the definition of 'holomorphic at $O$' follows from this thanks to the Riemann's theorem on removable singularities) Which is the same as defining $E(\mathbb{C})$ as the projective curve $$E(\mathbb{C})_{proj} = \{ (x:y:w) \in P_2(\mathbb{C}), y^2w = 4 x^3 -g_2(\tau)xw^2 - g_3(\tau)w^3\}$$ and $$\varphi(z) = (\wp_\tau(z):\wp_\tau'(z):1), \qquad\varphi(0) = (0:1:0)$$ is a biholomorphic map $\mathbb{C}/\Lambda \to E(\mathbb{C})_{proj}$<|endoftext|> TITLE: Topology of the space $\mathcal{D}(\Omega)$ of test functions QUESTION [10 upvotes]: Let $\Omega$ be a nonempty open subset of $\mathbb{R}^n$, and $\mathcal{D}(\Omega)$ the set of test functions (infinitely differentiable functions $f:\Omega \rightarrow \mathbb{C}$ with compact support contained in $\Omega$). For every compact $K \subseteq \Omega$, let $\mathcal{D}_K$ be the locally convex topological vector space of infinitely differentiable functions $f:\Omega \rightarrow \mathbb{C}$ whose support lies in $K$, with the topology $\tau_K$ induced by the system of norms ($N=0,1,2,\dots$): \begin{equation} \left| \left| f \right| \right|_{N} = \max \{ \left| D^{\alpha}f(x) \right| : x \in \Omega, | \alpha | =0,1,\dots, N \}, \end{equation} where $\alpha=(\alpha_1,\dots,\alpha_n)$ is a multi-index and $|\alpha|=\alpha_1 + \dots + \alpha_n$.
The usual topology of $\mathcal{D}(\Omega)$ is defined as the strongest topology among all those topologies on $\mathcal{D}(\Omega)$ that (i) make $\mathcal{D}(\Omega)$ a locally convex topological vector space and such that (ii) the inclusion $i_K: \mathcal{D}_K \hookrightarrow \mathcal{D}(\Omega)$ is continuous for every compact $K \subseteq \Omega$. In the language of Bourbaki, $\tau$ is called the "locally convex final topology" of the family of topologies $(\tau_K)$ of the spaces $(\mathcal{D}_K)$ with respect to family of linear maps $(i_K)$. I have two questions. (Q1) Can we find a set $V \subseteq \mathcal{D}(\Omega)$, such that $V \cap \mathcal{D}_K \in \tau_K$ for all compact $K \subseteq \Omega$, but $V \notin \tau$? (Q2) Can we find $V \subseteq \mathcal{D}(\Omega)$, with $0 \in V$, such that $V \cap \mathcal{D}_K \in \tau_K$ for all compact $K \subseteq \Omega$, and there is no $W \subseteq V$, with $0 \in W \in \tau$? Clearly a positive answer to (Q2) implies that also (Q1) has a positive answer. Note that (Q1) is equivalent to ask whether $\tau$ coincides or not with the final topology $\tau'$ on $\mathcal{D}(\Omega)$ with respect to the family of inclusions $i_K: \mathcal{D}_K \hookrightarrow \mathcal{D}(\Omega)$, where $K$ is any compact subset of $\Omega$. So we have $\tau \subseteq \tau'$ and (Q1) can maybe be given a positive indirect answer, by proving that $\tau$ and $\tau'$ do not share the same properties. To give a positive answer to (Q2) seems to be more difficult. REPLY [8 votes]: Finally, I found the answers to my two questions, and they are both positive as I conjectured. Take a sequence of compact sets $(K_m)_{m=0}^{\infty}$ in $\Omega$, each one with nonempty interior, such that: (i) $K_m$ is contained in the interior of $K_{m+1}$ for each $m=0,1,\dots$; (ii) $\cup_{m=0}^{\infty} K_m = \Omega$. Let $(x_m)_{m=0}^{\infty}$ be a sequence in $\Omega$ such that $x_m$ lies in the interior of $K_m$ and $x_m \notin K_{m-1}$ (with $K_{-1}=\emptyset$). Define the set \begin{equation} V = \{ f \in \mathcal{D}(\Omega) : \left| f(x_{|\alpha|}) D^{\alpha} f(x_0) \right| < 1, | \alpha |=0,1,2, \dots \}. \end{equation} Let $K \subseteq \Omega$ be a compact set. Since only finitely many of the $x_m$'s belong to $K$, it is immediate to see that $V \cap \mathcal{D}_K \in \tau_K$. Assume that $V$ contains some $\tau$-open set containing 0. Then, since $\mathcal{D}(\Omega)$ with the topology $\tau$ is a locally convex topological vector space, there would exist a convex balanced set $W \subseteq V$ such that $W \in \tau$. So $W \cap \mathcal{D}_K \in \tau_K$ for each compact $K \subseteq \Omega$. Then for each m, there exists a positive integer $N(m)$ and $\epsilon(m) > 0$ such that the set \begin{equation} U_m = \left \{ f \in \mathcal{D}_{K_m}: \left| \left|f \right| \right|_{N(m)} < \epsilon(m) \right \} \end{equation} is contained in $W \cap \mathcal{D}_{K_m}$. Let $m=N(0)+1$. Then the interior of $K_m$ contains the point $x_{N(0)+1}$, so that there exists $g \in U_m$ such that $|g(x_{N(0)+1})| > 0$. Now note that for any $M > 0$, we can find $f \in U_0$ such that $|D^{\alpha}f(x_0)| > M$ for some multi-index $\alpha$ such that $|\alpha| = N(0)+1$. This in turn implies that for any $c \in (0,1)$, we can find $f \in U_0$ such that $cf+(1-c)g$ does not belong to $V$. So $W$ is not convex, against the hypothesis. This shows that (Q2), and so (Q1), has a positive answer. NOTE (1). 
Actually, this example also shows that $\mathcal{D}(\Omega)$ with the topology $\tau'$ is not even a topological vector space. Indeed, if it were, then we should be able to find $S \in \tau'$ such that $S + S \subseteq V$. Again, we could then find for each $m$ a positive integer $P(m)$ and $\delta(m) > 0$ such that the set \begin{equation} T_m = \left \{ f \in \mathcal{D}_{K_m}: \left| \left|f \right| \right|_{P(m)} < \delta(m) \right \} \end{equation} is contained in $S \cap \mathcal{D}_{K_m}$. Choose $m=P(0)+1$, so that the interior of $K_m$ contains $x_{P(0)+1}$. Then there exists $g \in T_m$ such that $|g(x_{P(0)+1})| > 0$. As before, note that for any $M > 0$, we can find $f \in T_0$ such that $|D^{\alpha}f(x_0)| > M$ for some multi-index $\alpha$ such that $|\alpha| = P(0)+1$. This in turn implies that there exists $f \in T_0$ such that $f+g \notin V$. QED NOTE (2). We can prove in the same way as before that for every $\varphi \in V$, there is no $U \in \tau$ such that $\varphi \in U$ and $U \subseteq V$. Assume one exists. Then, since $\mathcal{D}(\Omega)$ with the topology $\tau$ is a locally convex topological vector space, we can find a convex balanced set $W \in \tau$ such that $\varphi + W \subseteq U$. Again, we could then find for each $m$ a positive integer $N(m)$ and $\epsilon(m) > 0$ such that the set \begin{equation} U_m = \left \{ f \in \mathcal{D}_{K_m}: \left| \left|f \right| \right|_{N(m)} < \epsilon(m) \right \} \end{equation} is contained in $W \cap \mathcal{D}_{K_m}$. Choose then $m=N(0)+1$, so that the interior of $K_m$ contains the point $x_{N(0)+1}$. Then there exists $g \in U_m$ such that $|g(x_{N(0)+1})| > 0$ and $|g(x_{N(0)+1})| < | \varphi(x_{N(0)+1})|$ if $| \varphi(x_{N(0)+1})| > 0$. Now note that for any $M > 0$, we can find $f \in U_0$ such that $|D^{\alpha}f(x_0)| > M$ for some multi-index $\alpha$ such that $|\alpha| = N(0)+1$. This in turn implies that for any $c \in (0,1)$, we can find $f \in U_0$ such that $\varphi + cf+(1-c)g$ does not belong to $V$, which gives a contradiction, since $cf+(1-c)g \in W$.<|endoftext|> TITLE: Solve the diophantine equation $3\,{a}^{3}b-13\,{b}^{3}-26\,a-24\,b=0.$ QUESTION [5 upvotes]: Solve the diophantine equation $3\,{a}^{3}b-13\,{b}^{3}-26\,a-24\,b=0.$ I have found two obvious solutions $a=b=0$ and $a=b=5.$ Are there any other solutions? REPLY [6 votes]: Let us try to find the non-zero solutions of the Diophantine equation. Let $$c=\gcd(a,b),\quad a=cx,\quad b=cy,$$ then $$x(3c^3x^2y-26)=y(13c^2y^2+24),$$ $$y\,|\,(3c^3x^2y-26),\quad y\,|\, 26,$$ $$3c^3x^3-\dfrac{26}yx = 13c^2y^2 + 24,\quad y\in\{\pm1,\pm2,\pm13,\pm26\}.$$ Taking into account that for any value of $y$ the implicit function $x(c)$ has the coordinate axes as asymptotes, and that negative values of $c$ correspond to the opposite values of $y,$ for each value of $y$ we can get all integer solutions of this equation. All values $(x,y)$ that give $c\gtrsim 5,$ obtained using the Mathcad package, are the following: With graphic 1 for $y=\pm1,$ graphic 2 for $y=\pm2,$ graphic 3 for $y=\pm13$ and graphic 4 for $y=\pm26$ we have all possible integer roots for $x,y,c$. Among them only $c(1,1) = 5$ is an integer. Substituting $a=b=5$ into the original equation shows that this is a valid solution. So $$\boxed{(a=0,b=0) \vee (a=5,b=5)}$$ are all the integer solutions of the given Diophantine equation. Note: Of course, in each of these cases, we can show the presence or absence of integer roots using the Rational Root Theorem.
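Alternatively, a short exhaustive search (Python; the search bound is my own arbitrary choice) supports the same conclusion for small values:

```python
# Brute-force search for integer solutions in a small box around the origin.
sols = [(a, b)
        for a in range(-200, 201)
        for b in range(-200, 201)
        if 3*a**3*b - 13*b**3 - 26*a - 24*b == 0]
print(sols)  # expected: [(0, 0), (5, 5)]
```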
However, using the Rational Root Theorem in each case looks very technical and would overload the main logic of the proof.<|endoftext|> TITLE: Christoffel Symbols of Bi-invariant Metric on $SO(3)$ QUESTION [7 upvotes]: I'd like to calculate directly the Christoffel symbols for $SO(3)$. Let us suppose I have the metric on $SO(3)$ given by $$g(X,Y)=1/2 [\operatorname{tr}(X^tY)].$$ I'm aware of Milnor's paper and I know I can get the Christoffel symbols of a bi-invariant metric of a generic Lie group through the Koszul formula, but in this case is there a way to find the Christoffel symbols or the connection forms directly? Taking the standard basis $E_1,E_2,E_3$ of $\text{so}(3)$ where $[E_1,E_2]=E_3$, $[E_2,E_3]=E_1$ and $[E_3,E_1]=E_2$, if I'm not mistaken $g_{ij}$ at the identity is the identity matrix... so where do I go from there? I'm getting quite confused... REPLY [4 votes]: There are no Christoffel symbols when you have only a local (or global) basis, as explained in the answer to your other question. Indeed, the Christoffel symbols are not invariant under a change of coordinates (i.e., they do not form a tensor). If the goal is to calculate the curvature, then one can directly use the (orthonormal) basis $\{E_1, E_2,E_3\}$. The following is true for every Lie group equipped with a bi-invariant metric (which includes all compact Lie groups). Let $\{ E_1, \cdots, E_n\}$ be an orthonormal basis on $\mathfrak g$ with respect to a metric $h$. We identify $\mathfrak g$ with the set of left-invariant vector fields, and the metric $h$ with the bi-invariant metric on $G$. The structural constants are denoted by $\epsilon_{ij}^k$, that is, $$ [E_i, E_j] = \epsilon_{ij}^k E_k.$$ Since $h$ is a bi-invariant metric, the exponential map $\exp : \mathfrak g \to G$ is also the exponential mapping $T_e G \to G$ with respect to the bi-invariant metric $h$ (this is the only place we use that $h$ is bi-invariant, see here for a proof). Lemma 1: $\nabla _XX = 0$ for all $X\in \mathfrak g$. Proof of Lemma 1: By definition of the exponential mapping $\exp(\cdot) = e^{(\cdot)}$, we have $$ e^{tX} e^{sX} = e^{(s+t)X}.$$ If we write $\gamma (t) =e^{tX}$, then $$\gamma'(t) = (e^{tX})_* X.$$ This implies that $\gamma(t)$ is the integral curve of the vector field $X$. Thus $\nabla_X X = 0$ since $\gamma$ is a geodesic. Lemma 2: We have $\nabla_X Y = \frac 12 [X,Y]$. Proof of Lemma 2: For all $X, Y\in \mathfrak g$, from Lemma 1, $$\begin{split} 0 &= \nabla_{X+Y}(X+Y) \\ &= \nabla_X X + \nabla_XY + \nabla_YX + \nabla _YY \\ &= \nabla_XY + \nabla_YX \end{split}$$ Together with the torsion free condition $$\nabla_XY - \nabla _YX = [X,Y],$$ the lemma is proved. Note that Lemma 2 gives much more refined information about the connection than that given by the Koszul formula. Now we can somehow calculate the "Christoffel symbols" of the connection: note that since $\{E_1, \cdots E_n\}$ forms a basis of $T_gG$ for all $g\in G$, in general $$\nabla_{E_i} E_j = T_{ij}^k(g) E_k$$ for some global functions $T_{ij}^k : G\to \mathbb R$. Lemma 2 implies that $$ T_{ij}^k =\frac 12 \epsilon_{ij}^k$$ (In particular, $T_{ij}^k$ is a constant function).
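Before computing the curvature, the setup is easy to sanity-check numerically (Python with NumPy; my own sketch): the standard basis of $\text{so}(3)$ from the question is orthonormal for $g(X,Y)=\frac12\operatorname{tr}(X^tY)$ and satisfies the stated bracket relations.

```python
import numpy as np

E1 = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
E2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)
E3 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)

g = lambda X, Y: 0.5 * np.trace(X.T @ Y)   # the bi-invariant metric at e
br = lambda X, Y: X @ Y - Y @ X            # the Lie bracket

basis = [E1, E2, E3]
# Gram matrix should be the 3x3 identity, i.e. the basis is orthonormal.
print([[round(g(X, Y), 10) for Y in basis] for X in basis])
# The stated structure relations [E1,E2]=E3, [E2,E3]=E1, [E3,E1]=E2.
print(np.allclose(br(E1, E2), E3), np.allclose(br(E2, E3), E1),
      np.allclose(br(E3, E1), E2))
```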
We can go on and calculate the curvature: Lemma 3: The Riemann curvature tensor is given by $$ R(X, Y, Z, W) = \frac 14 h([[X,Y],Z],W)$$ Proof of Lemma 3: Using Lemma 2 and the Jacobi identity, $$\begin{split} R(X, Y, Z, W) &:= h( -\nabla_X \nabla _Y Z + \nabla_Y \nabla _X Z + \nabla_{[X,Y]} Z, W) \\ &= \frac 12 h(- \nabla _X [Y,Z] + \nabla_Y [X,Z] + [[X,Y],Z],W) \\ &= \frac 12 h(-\frac 12 [X,[Y,Z]] + \frac 12 [Y,[X,Z]] + [[X,Y],Z], W) \\ &= \frac 14 h([[X,Y],Z],W). \end{split}$$ Now we can represent $R$ using the structural constants: $$\begin{split} R_{ijkl} &= R(E_i, E_j, E_k, E_l)\\ &= \frac 14 h([[E_i, E_j], E_k], E_l) \\ &= \frac 14 h([\epsilon_{ij}^m E_m, E_k], E_l) \\ &= \frac 14 h(\epsilon_{ij}^m \epsilon_{mk}^n E_n , E_l) \\ &=\frac 14 \epsilon_{ij}^m \epsilon_{mk}^l. \end{split}$$ Going back to your case $G= SO(3)$, we have (by case by case checking) $$ R_{1212} =R_{1313} = R_{2323} = \frac 14,$$ and others are zero (we are assuming that $i < j$ and $k < l$).<|endoftext|> TITLE: Showing a function is differentiable on $\mathbb{R}^+$ QUESTION [5 upvotes]: Question: Let $f \ge 0$ be an integrable function on $\mathbb{R}$. Define $g(t) := \displaystyle \int_\mathbb{R} \cos(tx)\ f(x) \; dx$ for $t \ge 0$. Show $g$ is twice-differentiable $\iff \displaystyle \int_\mathbb{R} x^2 f(x) \; dx <\infty$. Solution: $(\Rightarrow)$ Suppose $g$ is twice-differentiable so that $| g''(0)|<\infty$. Then, \begin{align*} |g''(0)| &= \left\vert \lim \limits_{h \to 0} \dfrac{g(h) -2g(0) +g(-h)}{h^2} \right\vert \\ &= \lim \limits_{h \to 0}\left\vert \dfrac{g(h) -2g(0) +g(-h)}{h^2} \right\vert \\ &= \lim \limits_{h \to 0} \int_\mathbb{R} \left\vert \dfrac{ \cos(hx) -2 + \cos(-hx)}{h^2} f(x) \right\vert \; dx \quad (^* \text{Wrong, see edit}^*)\\ &= \lim \limits_{h \to 0} \int_\mathbb{R} \left\vert \dfrac{2( \cos(hx) -1) }{h^2} \right\vert f(x) \; dx \\ &\ge \int_\mathbb{R} \liminf \limits_{h \to 0} \left\vert \dfrac{2( \cos(hx) -1) }{h^2} \right\vert f(x) \; dx \quad \text{(Fatou)} \\ &= \int_\mathbb{R} x^2f(x) \; dx. \end{align*} $(\Leftarrow) $ This is where I am sort of stuck. I am using a standard technique of showing a partial derivative is bounded so I can push the derivative inside the integral using LDCT (Will moving differentiation from inside, to outside an integral, change the result?) First, note that $g(t)$ is well defined and bounded, since $$ \left\vert \int_\mathbb{R} \cos(tx) f(x) \; dx \right\vert \le \|f\|_{L^1(\mathbb{R})}<\infty $$ Also, $\left\vert \frac{\partial^2 }{\partial t^2} \cos(tx) f(x) \right\vert = |x^2 \cos(tx) f(x)| \le |x^2 f(x)|,$ which is integrable by assumption. So $g''$ exists, assuming $g'$ does. But I can't show $g'$ exists because $\left\vert \frac{\partial }{\partial t} \cos(tx) f(x) \right\vert = |x\sin(tx) f(x)| $, which is $\le$ both $|xf(x)|$ and $|x^2 t f(x)|$, neither of which seems to help . . . If I attack with $g'(t) = \lim \limits_{h \to 0} \dfrac{g(t+h)-g(t)}{h} = \lim \limits_{h \to 0} \displaystyle \int_\mathbb{R} \dfrac{\cos(x(t+h)) - \cos(xt)}{h}f(x) \; dx$, I can't make progress either. Thanks for the help. EDIT: Let me fix the $\Rightarrow$ direction. Basically all I need to do is note that $2 - 2 \cos(hx) \ge 0$, so I can still apply Fatou.
\begin{align*} -g''(0) &= \lim \limits_{h \to 0} -\dfrac{g(h) -2g(0) +g(-h)}{h^2} \\ &= \lim \limits_{h \to 0} \int_\mathbb{R} \dfrac{ -\cos(hx) +2 - \cos(-hx)}{h^2} f(x) \; dx \\ &= \lim \limits_{h \to 0} \int_\mathbb{R} \dfrac{2( 1-\cos(hx)) }{h^2} f(x) \; dx \\ &\ge \int_\mathbb{R} \liminf \limits_{h \to 0} \dfrac{2(1-\cos(hx)) }{h^2} f(x) \; dx \quad \text{(Fatou)} \\ &= \int_\mathbb{R} x^2f(x) \; dx. \end{align*} REPLY [3 votes]: In your proof of $\implies,$ you have a mistake in going from line 2 to line 3: You've moved the absolute values inside the integral, so $=$ should be $\le$. In the other direction, we can use the mean value theorem: $$\tag 1 |\cos ((t+h)x) - \cos (tx)| = |(-\sin c)\cdot (hx)| \le |hx|$$ Now $$ \frac{g(t+h)-g(t)}{h} = \int \frac{\cos ((t+h)x) - \cos (tx)}{h} f(x)\, dx.$$ In absolute value, the integrand on the right is $\le |xf(x)|$ by $(1).$ Because $f, x^2f(x) \in L^1,$ also $xf(x) \in L^1.$ So the dominated convergence theorem gives $$\tag 2 g'(t) = \int (- \sin (tx))x f(x)\,dx.$$ This argument can be repeated in differentiating $(2),$ this time using $x^2f(x)\in L^1$ directly. We get $g''(t) = \int (- \cos (tx))x^2 f(x)\,dx.$<|endoftext|> TITLE: Complexification of a self-adjoint operator is self-adjoint on the complexified space QUESTION [5 upvotes]: In "Linear algebra done right" 9.b.4: Suppose $V$ is a real inner product space and $T \in \mathcal{L}(V)$ is self-adjoint. Show that $T_\mathbb{C}$ is a self-adjoint operator on the inner product space $V_\mathbb{C}$. My way to do it is as follows: Since $T$ is self-adjoint on a real inner product space, by the real spectral theorem, there exists an orthonormal basis $\mathcal{B}$ such that $\mathcal{M}(T,\mathcal{B},\mathcal{B})$ is a diagonal matrix and all $\lambda_i \in \mathbb{R}$ are on the diagonal. W.r.t. the same basis $\mathcal{B}$, $\mathcal{M}(T_\mathbb{C},\mathcal{B},\mathcal{B})$ is the same as $\mathcal{M} (T,\mathcal{B},\mathcal{B})$, and because $\lambda_i$ are all real, we have $\mathcal{M}(T_\mathbb{C}^*,\mathcal{B},\mathcal{B})$=$(\overline{\mathcal{M}(T_\mathbb{C},\mathcal{B},\mathcal{B})})^t$=$\mathcal{M}(T_\mathbb{C},\mathcal{B},\mathcal{B})$, which implies $T_\mathbb{C}$ is self-adjoint. Can someone tell me whether there is anything wrong with this proof? REPLY [5 votes]: Your proof is correct. However, your proof uses the real spectral theorem, which is a deep tool. There is a simpler proof that just uses the definitions. Specifically, you should verify from the definitions that $$ \langle{ T_{\mathbf{C}}(u + iv), x+iy}\rangle = \langle{u + iv, T_{\mathbf{C}}(x + iy)}\rangle, $$ for all $u, v, x, y \in V$, which implies that $T_{\mathbf{C}}$ is self-adjoint.<|endoftext|> TITLE: Find all functions satisfying $\frac{1}{f(a)}+ \frac{1}{f(b)} = \frac{1}{f(c)} $ whenever $\frac{1}{a}+ \frac{1}{b} = \frac{1}{c} $ QUESTION [7 upvotes]: Find all functions $f: \mathbb{N} \rightarrow \mathbb{N} $ satisfying $\frac{1}{f(a)}+ \frac{1}{f(b)} = \frac{1}{f(c)} $ whenever $\frac{1}{a}+ \frac{1}{b} = \frac{1}{c} \ a,b,c \in \mathbb{N} $ After talking through it with a friend, they believe there is a simple solution only a couple of lines long, but I have yet to spot the method. Any ideas? REPLY [5 votes]: Hint: Try to show that $f(ab) = af(b)$ for all $a,b\in\mathbb{N}$. This can be done through induction on $a$: Suppose $f(kb) = kf(b)$ for all $b\in\mathbb{N}$ and $k = 1,\dots,a-1$, and note that $$\frac{1}{ab} + \frac{1}{(a-1)\times ab} = \frac{1}{(a-1)\times b}.
$$ Can you proceed from here?<|endoftext|> TITLE: What exactly is a derivative? QUESTION [6 upvotes]: In calculus courses, we learn the classical derivative: $$f'(x)=\lim_{h\to 0} \frac{f(x+h)-f(x)}{h}$$ And the directional derivative: $$D_{\mathbf{v}}{f}(\mathbf{x}) = \lim_{h \rightarrow 0}{\frac{f(\mathbf{x} + h\mathbf{v}) - f(\mathbf{x})}{h}}$$ The partial derivative is just a slight variation of the previous one. But some days ago, reading Stopple's Primer of Analytic Number Theory, I read something about the difference operator: $$\Delta f(x)=f(x+1)-f(x)$$ which, at least for me, seems quite similar to the previous examples. The only difference is that this is not really a limiting process - another one that lacks the limiting process but is also called a derivative is the arithmetic derivative. But there are other examples in which there is a limiting process, in non-Newtonian calculus, for example, we have the geometric derivative: $$f^{*}(x) = \lim_{h \to 0}{ \left({f(x+h)\over{f(x)}}\right)^{1\over{h}} }$$ And the bigeometric derivative: $$f^{*}(x) = \lim_{h \to 0}{ \left({f((1+h)x)\over{f(x)}}\right)^{1\over{h}} } = \lim_{k \to 1}{ \left({f(kx)\over{f(x)}}\right)^{1\over{\ln(k)}} }$$ And in this site, the authors argue that: "There are infinitely many non-Newtonian calculi. Like the classical calculus, each of them possesses, among other things: a derivative, an integral, a natural average, a special class of functions having a constant derivative, and two Fundamental Theorems which reveal that the derivative and integral are 'inversely' related." I remember from my calculus classes that the classical derivative is a comparison of a certain function with the slope of a straight line. It seems to my limited knowledge that these other derivatives are also comparisons with some other geometrical figures, perhaps? From this book: During the Renaissance many scholars, including Galileo, discussed the following problem: Two estimates, $10$ and $1000$, are proposed as the value of a horse. Which estimate, if any, deviates more from the true value of $100$? The scholars who maintained that the deviations should be measured by the differences concluded that the estimate of $10$ was closer to the actual value. However, Galileo eventually maintained that the deviations should be measured by ratios, and he concluded that the two estimates deviated equally from the true value. Other examples are Fréchet derivatives and Gâteaux derivatives, which I'm not exactly sure what they are. As you can see further in the book, this yields the previously mentioned geometric derivative. So, assuming there is a class of kinds of derivatives, what binds them all? How can I look at anything and decide if it's a derivative or not? I presume that if something is a derivative, then it must have - just as the authors of the mentioned website said - an integral, a natural average, a special class of functions having a constant derivative, and two Fundamental Theorems which reveal that the derivative and integral are 'inversely' related. So, given any expression, is it a derivative if I can come up with all these items? This seems a faint answer to me; I'd like to see if there is a better one. REPLY [2 votes]: This may not answer your question directly, but I'd like to give some hints about the bigger picture. The derivative in its original sense is a geometrical concept.
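As a quick illustration of the geometric derivative mentioned in the question (Python; my own sketch, with an arbitrarily chosen test function): since $f^{*}(x)=\exp(f'(x)/f(x))$, the function $f(x)=e^x$ has constant geometric derivative $e$, mirroring how linear functions have a constant classical derivative.

```python
import math

def geometric_derivative(f, x, h=1e-6):
    # finite-h approximation of lim (f(x+h)/f(x))**(1/h)
    return (f(x + h) / f(x)) ** (1.0 / h)

f = math.exp
for x in [0.0, 1.0, 2.0]:
    print(x, geometric_derivative(f, x))  # ~ e = 2.71828... at every x
```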
In conclusion: If you want to learn more about the concept of a derivative, you need to learn about differentiable manifolds and Riemannian metrics. I mentioned this with a target in mind: Once you have the setup of a Riemannian manifold, there is a technical concept that in some sense generalizes the notion of partial derivatives: so-called connections. Connections in this setup are a more general concept than the pure derivative, and they could be seen as the "class" you asked for, encoding the key idea of what makes a derivative a derivative. Also, the terms parallel transport and covariant derivative are closely related. I do not know how one can relate Fréchet, Gâteaux and/or bigeometric derivatives to this in detail, but I can imagine there are other resources for this question. If you want to get a deeper understanding of the term derivative, exploring differential geometry along the terms mentioned can't be bad advice. edit: (August 10, 2016) It is also worth noting that the Leibniz rule is a central characteristic behavior that every one of the mentioned derivatives satisfies (you may want to look up the term derivation). So if you want, you can see this specific computation rule as the common property that "binds" the class of derivatives. In particular this is valid for the connections I've thrown into the pool above.<|endoftext|> TITLE: Show that, for all $n > 1: \frac{1}{n + 1} < \log(1 + \frac1n) < \frac1n.$ QUESTION [9 upvotes]: I'm learning calculus, specifically derivatives and applications of MVT, and need help with the following exercise: Show that, for all $n > 1$ $$\frac{1}{n + 1} < \log(1 + \frac1n) < \frac1n.$$ I tried to follow the below steps in order to prove the RHS inequality: Proving that $f < g$ on $I$ from $a$ to $b$: Step $1$. Prove that $f' < g'$ on $\operatorname{Int}(I)$. Step $2$. Show that $f(a) \leq g(a)$ or that $f(a^+) \leq g(a^+)$. Following the above steps, let $f(x) = \log(1 + \frac1x)$ and $g(x) = \frac1x$, for all $x > 1$. One has $$f'(x) = -\frac{1}{x^2 + x} \quad \text{and} \quad g'(x) = -\frac{1}{x^2}.$$ We note that, for every $x > 1$, $f'(x) > g'(x)$. Moreover, $f(1^+) = \log(2) < 1 = g(1^+).$ My problem is that I got the wrong inequality sign in Step $1$. Looking at the solution in my textbook, the author suggests using the MVT but I don't know how to apply it in this case. REPLY [2 votes]: For any $x>0$ we have $$\frac{x}{x+1} =\int_{0}^x\frac{dt}{x+1} \le \int_{0}^x\frac{dt}{t+1} =\color{red}{\ln(x+1)}=\int_{0}^x\frac{dt}{t+1} \le \int_{0}^x\frac{dt}{1} = x $$ that is $$\frac{x}{x+1} \le \ln(x+1) \le x $$ and taking $x=\frac1n$ you get $$\frac{1}{n+1} \le \ln(\tfrac1n+1) \le \frac1n $$<|endoftext|> TITLE: Why is $\lim_{x\to 1}\sqrt{1-x}\sum_{n=0}^\infty x^{n^2}=\sqrt{\pi}/2\,\,$? QUESTION [10 upvotes]: It is well-known that $\sum_{n=0}^\infty x^n=1/(1-x)$ for $|x|<1$. However, it seems difficult to find $\sum_{n=0}^\infty x^{n^2}$. A natural problem is to study its behavior around $1$. A problem states that $$\lim_{x\to 1}\sqrt{1-x}\sum_{n=0}^\infty x^{n^2}=\sqrt{\pi}/2.$$ How can we prove this? Abel summation formula? Any others? I have no idea. REPLY [2 votes]: We will use the variable $q$ instead of $x$ so that we can visibly link the problem to theta functions. We have $$\vartheta_{3}(q) = \sum_{n = -\infty}^{\infty}q^{n^{2}} = \sqrt{\frac{2K}{\pi}}$$ where $K$ is the elliptic integral of the first kind and $q = e^{-\pi K'/K}$.
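(As a quick numerical sanity check of the claimed limit before proving it; a throwaway script, with arbitrary cut-offs:)

```python
import math

def S(x):
    # partial sum of sum_{n>=0} x^(n^2), truncated once terms are negligible
    total, n = 0.0, 0
    while True:
        term = x ** (n * n)
        if term < 1e-30:
            return total
        total += term
        n += 1

for x in (0.99, 0.999, 0.9999, 0.99999):
    print(x, math.sqrt(1 - x) * S(x))
print("sqrt(pi)/2 =", math.sqrt(math.pi) / 2)   # 0.88622...
```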
Now we can see that $$\sqrt{1 - q}\sum_{n = 0}^{\infty}q^{n^{2}} = \sqrt{1 - q}\cdot\frac{\vartheta_{3}(q) + 1}{2}$$ and then clearly $$\lim_{q \to 1^{-}}\sqrt{1 - q}\sum_{n = 0}^{\infty}q^{n^{2}} = \frac{1}{2}\lim_{q \to 1^{-}}\sqrt{1 - q}\,\vartheta_{3}(q) = \frac{1}{2}\lim_{q \to 1^{-}}\sqrt{\frac{2K(1 - q)}{\pi}}$$ and from $q = e^{-\pi K'/K}$ we can see that the limit is equal to $$\frac{1}{2}\lim_{k \to 1^{-}}\sqrt{2K'}$$ which is $$\frac{1}{2}\sqrt{\pi}$$ Here we have used the fact that as $q \to 1^{-}$ we have modulus $k \to 1^{-}$, so that $K' \to \pi/2$ and $K \to \infty$, and hence from $q = e^{-\pi K'/K}$ we get $$1 - q = \pi\frac{K'}{K} + o(1/K)$$ A more elementary answer by G. H. Hardy is available here.<|endoftext|> TITLE: Any Implications of Fermat's Last Theorem? QUESTION [5 upvotes]: In our discourse FLT is Fermat's Last Theorem. I am unaware of any theorems or conjectures that begin assuming FLT is true, or otherwise use FLT as a starting point or tool. The small amount of literature review I've done on this question reveals nothing. My question is: Where can I find a work requiring FLT, or some useful implication of FLT? Even an implication of a polynomial inequality, that may not be FLT, may be a good answer to this question, as I'll likely try to use it to find something regarding FLT. The following is not acceptable as an answer to this question: $$ (a^{x_{1}}_{1} + b^{x_{1}}_{1} - c^{x_{1}}_{1}) \ldots (a^{x_{n}}_{n} + b^{x_{n}}_{n} - c^{x_{n}}_{n}) \not= 0 : x_{i} > 2 $$ and its expansions imply (something trivial) Another acceptable answer to this question would be a proof requiring FLT to be false. Thanks and please let me know if I can ask this question in a way more fitting math.se (new user). REPLY [7 votes]: I don't know if this helps. But you can consider this for fun in the meantime while you search for something significant. More of a "nuking the mosquito": to prove $2^\frac{1}{n}$ is an irrational number when $n\ge3$, suppose $$2^\frac{1}{n}=\frac{p}{q}.$$ Then $$p^n=q^n+q^n,$$ which contradicts the $FLT$, therefore proving $2^\frac{1}{n}$ is indeed irrational. On a side note, $FLT$ is not strong enough to prove the irrationality of $2^\frac{1}{n}$ for the case $n=2$.<|endoftext|> TITLE: Can 4 lines intersect in 2 points? QUESTION [6 upvotes]: Four lines can intersect in at most $\frac{4^{2}-4}{2} = 6$ points. And in fact you can find examples of lines intersecting in 0, 1, 3, 4, 5 and 6 points. All but 2. Obviously there isn't any way for four lines to intersect in exactly two points. But how to prove it? REPLY [5 votes]: I prove this by assuming such a construction exists and deriving a contradiction. Let's pick our two points of intersection for the construction and go from there. Since each has at least two lines passing through it, let us consider one of the lines ($L_1$) going through one point and two of the lines ($L_2, L_3$) going through the other. (If one of the lines happens to be passing through both points, we just pick the other three for the observation.) Since neither of $L_2$ and $L_3$ intersects $L_1$, they are both parallel to $L_1$ as well as to each other. But since $L_2$ and $L_3$ pass through a common point, they must coincide, thus contradicting the hypothesis that there are 4 distinct lines.<|endoftext|> TITLE: What is the maximum number of quadrants in $n$ dimensional space that a $k$ dimensional hyperplane can pass through? QUESTION [10 upvotes]: Assume that the hyperplane passes through the origin i.e.
it is the span of some $k$ linearly independent vectors in $n$-space. For example, for $n,k = 2,1,$ we have a line in the 2d plane passing through the origin, so the answer is 2. For $n,k=3,2$, it is 6. In fact, I have proven that $F(n,n-1)=F(n-1,n-2)+2^{n-1}$. This follows by considering the $(n-1)$-subspace for which one of the coordinates is 0. The intersection of this subspace with the hyperplane is an $(n-2)$-dimensional hyperplane which passes through the origin and $F(n-1,n-2)$ quadrants of the subspace. For each quadrant, the original hyperplane will pass through at most two corresponding quadrants of the $n$-space. And for each quadrant it does not pass through, the original can pass through at most one quadrant. The recurrence follows. But I am unable to extend this proof to general $k$, or even $n-2$. Any help is appreciated. REPLY [6 votes]: It is worth noting that your problem is equivalent to the following one: What is the maximal number of cells an arrangement of $n$ hyperplanes can split $\mathbb R^k$ into? Here, I use "hyperplane" as in "codimension $1$ subspace". More usefully, suppose we represent such an arrangement as a number of non-zero maps $f_1,\ldots,f_n$ with $f_i:\mathbb R^k\rightarrow\mathbb R$ and the hyperplanes in the arrangement being the kernel of each map. A cell is then a subset of $\mathbb R^k$ with a specified sign for each $f_i$ - for instance, the set where each $f_i$ is positive is a cell (if non-empty). To get from this reduced problem to yours, define a map $f:\mathbb R^k\rightarrow \mathbb R^n$ as $$f(x)=(f_1(x),f_2(x),\ldots,f_n(x)).$$ We find that $f(\mathbb R^k)$ is a subspace of dimension $k$ in $\mathbb R^n$ such that the image of any cell is exactly the intersection of that subspace with some quadrant and the preimage of a quadrant is exactly a cell. Thus there are equally many cells in the arrangement as there are quadrants intersected by the associated subspace. To get from your problem to the reduced one, let $S$ be the subspace of $\mathbb R^n$ in question and $f:S\rightarrow \mathbb R^n$ be the inclusion of $S$ into the space $\mathbb R^n$. Define $f_i(x)$ to be the $i^{th}$ coordinate of $f(x)$ and note that these are linear maps. Moreover, which quadrant some $x\in S$ is in is determined by the signs of the $f_i$, and the intersection of the hyperplane $x_i=0$ in $\mathbb R^n$ with $S$ is precisely the kernel of $f_i$ by definition. Otherwise said, the intersection of a quadrant with $S$ is precisely a cell in the arrangement of hyperplanes given by $f_1,\ldots,f_n$ in $S$. I think it is easier to get a hold on this modified problem. In particular, suppose we have an arrangement of $n$ hyperplanes $H_1,\ldots,H_n$ in $\mathbb R^k$ and add some hyperplane $H'$ to it and wish to know how many more cells were created by this hyperplane. Well, we get more cells exactly when $H'$ splits an existing cell into two. We can find how many times $H'$ does that by considering the cells induced in $H'$ by the hyperplanes $H_1\cap H',\ldots,H_n\cap H'$. That is, the new cells in $\mathbb R^k$ created by adding the hyperplane $H'$ correspond to the cells in a $k-1$ dimensional arrangement of $n$ hyperplanes. Letting $F(n,k)$ be the maximal number of cells in such an arrangement, we get the relation $$F(n+1,k)\leq F(n,k) + F(n,k-1).$$ One can find, more strongly, that if you choose a set of $n$ hyperplanes such that the intersection of any $k$ of them is a point (i.e.
so that they are in general position) that this bound is obtained (this may be proved by induction), thus $$F(n+1,k)=F(n,k) + F(n,k-1).$$ Then, we have initial conditions $$F(1,k)=2$$ $$F(n,1)=2.$$ These uniquely specify the function. One may use this to prove, by induction, the formula: $$F(n,k)=2\sum_{d=0}^{k-1}{n-1 \choose d}.$$<|endoftext|> TITLE: $f$ entire but not polynomial then $\lim_{n\to\infty}\sup \{|z|:p_n(z)=0\}\to\infty$, where $p_n$ is $n-th$ Taylor series of $f$ QUESTION [5 upvotes]: Let $f(z)=\sum_{k=0}^{\infty}a_kz^k$ be entire and not a polynomial. Let $p_n(z)=\sum_{k=0}^n a_kz^k$ be its $n$-th Taylor polynomial centered at $0$, and let $r_n=\sup \{|z|:p_n(z)=0\}$. Show that $\lim_{n\to\infty}r_n=\infty$. My thought: $\lim_{n\to\infty}p_n(z)=\lim_{n\to\infty}(z-a_1)(z-a_2)...(z-a_n)=f(z)$, so if $\lim_{n\to\infty}r_n\neq \infty$, this means the zeros of $f$ are contained in a compact set, so $\{z: f(z)=0\}$ has a limit point in $\mathbb{C}$, so $f\equiv 0$, which is a contradiction since $f$ is not a polynomial. I feel something is wrong in my proof above, but I cannot tell what exactly is not right. Could someone kindly help me with this? Thank you so much! REPLY [8 votes]: Suppose all the roots of $p_n(z)$ are $z_1,z_2,\ldots,z_n$. Vieta's formula implies $$|z_1 z_2 \ldots z_n|=\left|\frac{a_0}{a_n}\right|$$ So $r_n\geq\sqrt[n]{|z_1 z_2\ldots z_n|}=\sqrt[n]{\left|\frac{a_0}{a_n}\right|}$. Since $f$ is entire, $\limsup\sqrt[n]{|a_n|}=0$. The conclusion follows.<|endoftext|> TITLE: Why use neighborhood to define boundary? Not open ball? QUESTION [10 upvotes]: One way to define a boundary point of a set S is that "every neighborhood of it contains at least one point of S and at least one point outside S". I wonder if it's OK to replace "neighborhood of it" by "open ball centered at it"? What's the difference? A further question is why introduce the concept "neighborhood" in the first place? Its role seems very similar to open balls. REPLY [2 votes]: While the other answers cover the most obvious reason well, let me add a couple of reasons for defining and using the concept of neighborhoods that apply even in metric spaces: Quite often, one needs to discuss sets that contain an open ball about a particular point, but are not open balls themselves. For example, "Let $S$ be a compact neighborhood of $x$" is easier to say repeatedly than "Let $S$ be a compact set that contains an open ball about $x$". If you need to refer to a concept enough, it is useful to have a special name for that concept. (Though I admit that many people restrict "neighborhoods" to be open, so this particular example does not apply to them - but there are other cases.) Abstraction is one of the most important tools in the mathematician's toolbox. By being too specific in your definitions, you sometimes find yourself bogged down in needless details, while missing what is really important. In your example of the definition of the boundary point, the only thing that is important for that definition is that no matter how close you look around the point, you see points that are inside and points that are outside. The particular shape of how you are looking around the point makes no difference. So a definition that does not require a particular shape helps you to see what is important.
The latter reason may not make much sense to you without more experience, but I cannot tell you how many times I've struggled mightily to prove something, only to come back to it a few years later and discover that it is almost trivial from the right point-of-view, and all the details I fought through were unnecessary.<|endoftext|> TITLE: Constructive proof that countably infinite-dimensional normed vector space is incomplete QUESTION [7 upvotes]: I'm familiar with the standard proof that there exists no $\mathbb{N}$-dimensional Banach space based on Baire: Let $\{ v_{k} : k \in \mathbb{N} \}$ be a normalized basis for $V$, and let $W_{\ell} = \mathbb{R} v_{1} + \cdots + \mathbb{R} v_{\ell}$. Then $V = \cup_{\ell \in \mathbb{N}} W_{\ell}$. Moreover, $W_{\ell}$ is closed, so $X_{\ell} = V \setminus W_{\ell}$ is open. Also, $X_{\ell}$ is dense. To show it's dense, let $v = \sum _{k = 1}^{m} r_{k} v_{k} \in V$, where $r_{m} \neq 0$. If $m > \ell$, then $v \in V \setminus W_{\ell}$. If not, let $\epsilon > 0$; then $v + \frac{\epsilon}{2} v_{\ell + 1} \in X_{\ell} \cap N(v, \epsilon)$. Baire tells us that $\cap_{\ell \in \mathbb{N}} X_{\ell}$ is dense, but it is in fact empty, a contradiction. My question is: Can the argument be made constructively? Do we need BCT to show there exists no $\mathbb{N}$-dimensional Banach space? Can we instead make a diagonalization argument, i.e. construct a Cauchy sequence that doesn't converge in $\operatorname{span} \{ v_{k} : k \in \mathbb{N} \}$? Perhaps further, is the result dependent on BCT? That is, does the result imply BCT? Thanks in advance. REPLY [2 votes]: Pick $\epsilon = \frac1{10}$. Pick $w_{k} \in W_{k}/W_{k-1}$ with quotient norm $\|w_k\| = 1$. Then there exists a representative $x_k \in W_k$ such that $x_k + W_{k-1} = w_k$, $\|x_k\| < 1+\epsilon$, and $\|x_k - y\| \ge 1$ for all $y \in W_{k-1}$. Now consider $z = \sum_{k=1}^\infty 10^{-k} x_k$. We can easily show this series is Cauchy, and hence (if $V$ were complete) it converges in $V$. In the quotient norm $V/W_n$, we get $$ \|z + W_n\| \ge 10^{-n-1} \|x_{n+1}+W_n\| - \sum_{k=n+2}^\infty 10^{-k} \|x_k+W_n\| \ge 10^{-n-1} - (1+\epsilon)10^{-n-1}/9 > 0 .$$ Hence $z \notin W_n$ for any $n \in \mathbb N$, and hence $z \notin V$, a contradiction.<|endoftext|> TITLE: Iversen's Cohomology of Sheaves, pull back maps for sheaf cohomology QUESTION [5 upvotes]: In Iversen's Cohomology of Sheaves, the cohomology of a sheaf (of abelian groups, the category of which has enough injectives) is defined as follows: Given a sheaf $\mathcal F$ over $X$, we define cohomology groups $H^\bullet(X,\mathcal F)$ as derived functors $R^\bullet\Gamma(X,\mathcal F)$. If $\mathcal F\to\mathcal I^\bullet$ is an injective resolution, then $H^\bullet(X,\mathcal F)$ is the cohomology of the complex $\Gamma(X,\mathcal I^\bullet)$. Now given a continuous map $f\colon X\to Y$ and a sheaf $\mathcal G$ on $Y$, the author wants to introduce a natural map (or pull back) $f^*\colon H^\bullet(Y,\mathcal G)\to H^\bullet(X,f^*\mathcal G)$. We first choose an injective resolution $\mathcal G\to\mathcal J^\bullet$ on $Y$ and an injective resolution $f^*\mathcal J^\bullet\to\mathcal I^\bullet$ (an injective resolution of a complex is a quasi-isomorphism to a complex of injective objects). Since $f^*$ is exact, $\mathcal I^\bullet$ is an injective resolution of $f^*\mathcal G$. The composite $$\Gamma(Y,\mathcal J^\bullet)\xrightarrow a\Gamma(X,f^*\mathcal J^\bullet)\to\Gamma(X,\mathcal I^\bullet)$$ gives a morphism $f^*$, where $a$ is the natural adjunction map.
One needs to show that this doesn't depend on the choice of resolutions, and moreover one can replace injective resolutions by acyclic resolutions, owing to the following theorem: (Scolium 5.2 of Iversen's Cohomology of Sheaves, page 100) Given a $\Gamma(Y,-)$-acyclic resolution $t\colon\mathcal G\to\mathcal T^\bullet$ and a $\Gamma(X,-)$-acyclic resolution $s\colon f^*\mathcal G\to\mathcal S^\bullet$ and a morphism of complexes $\psi\colon\mathcal T^\bullet\to f_*\mathcal S^\bullet$ such that $\psi\circ t=f_*s\circ a$, where $a\colon\mathcal G\to f_*f^*\mathcal G$ is the adjunction map of adjoint operators $f_*,f^*$. $$\require{AMScd} \begin{CD} \mathcal G@>t>>\mathcal T^\bullet\\ @VVaV@VV\psi V\\ f_*f^*\mathcal G@>f_*s>>f_*\mathcal S^\bullet \end{CD} $$ Then $f^*\colon H^\bullet(Y,\mathcal G)\to H^\bullet(X,f^*\mathcal G)$ is represented by the chain map $$\Gamma(Y,\mathcal T^\bullet)\xrightarrow\psi\Gamma(Y,f_*\mathcal S^\bullet)\to\Gamma(X,\mathcal S^\bullet)$$ In the proof, he starts by choosing arbitrary injective resolutions $\mathcal T^\bullet\xrightarrow{i_1}\mathcal J^\bullet$ and $f^*\mathcal J^\bullet\xrightarrow{i_2}\mathcal I^\bullet$ and he claims that: There exists a quasi-isomorphism $\epsilon\colon\mathcal S^\bullet\to\mathcal I^\bullet$ such that $\epsilon\circ\phi\colon f^*\mathcal T^\bullet\to\mathcal I^\bullet$ is homotopic to $i_2\circ f^*i_1$, where $\phi\colon f^*\mathcal T^\bullet\to\mathcal S^\bullet$ is the image of $\psi\colon\mathcal T^\bullet\to f_*\mathcal S^\bullet$ under the Hom-set isomorphism of adjoint operators, namely, the existence of the following homotopy commutative diagram: $$\require{AMScd} \begin{CD} f^*\mathcal T^\bullet@>\phi>>\mathcal S^\bullet\\ @VVV@V\exists VV\\ f^*\mathcal J^\bullet@>>>\mathcal I^\bullet \end{CD} $$ I don't understand this, since it seems to me that $\phi$ is not generally a quasi-isomorphism (or I cannot see any reason), and the existence is quite unclear to me. It is clear that the existence of the second homotopy commutative diagram should be deduced from the first commutative diagram. However, I don't know how to properly take advantage of the first diagram, translating things from $\phi$ to $\psi$ to deduce anything. I need help on this. Any ideas? Thanks! REPLY [4 votes]: After discussion with others, I realize how to tackle it: We first note the Hom-set adjunction \begin{align*} \operatorname{Hom}(f^*\mathcal G,\mathcal S^\bullet)&\xrightarrow\sim\operatorname{Hom}(\mathcal G,f_*\mathcal S^\bullet)\\ s&\mapsto s'=f_*s\circ a \end{align*} By naturality, we have the following diagram: $$\require{AMScd} \begin{CD} \operatorname{Hom}(\mathcal T^\bullet,f_*\mathcal S^\bullet)@>{t^*}>>\operatorname{Hom}(\mathcal G,f_*\mathcal S^\bullet)\\ @V\mathrm{adj}VV@V\mathrm{adj}VV\\ \operatorname{Hom}(f^*\mathcal T^\bullet,\mathcal S^\bullet)@>{(f^*t)^*}>>\operatorname{Hom}(f^*\mathcal G,\mathcal S^\bullet) \end{CD}$$ Hence $s=\phi\circ f^*t$, and by exactness of $f^*$, $\phi$ is a quasi-isomorphism. We now deduce from Whitehead's theorem the existence of the homotopy commutative diagram.<|endoftext|> TITLE: Let $a,b,c$ positive integers such that $\left(1+\frac{1}{a}\right)\left(1+\frac{1}{b}\right)\left(1+\frac{1}{c}\right) = 3$. Find those triples. QUESTION [8 upvotes]: Full question: Let $a$,$b$,$c$ be three positive integers such that $\left(1+\frac{1}{a}\right)\left(1+\frac{1}{b}\right)\left(1+\frac{1}{c}\right) = 3$. Find those triples.
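A quick exhaustive search in exact rational arithmetic (a throwaway sketch; the bound $100$ is an arbitrary guess, generous in light of the bounds derived in the answer below) finds exactly three triples with $a \le b \le c$:

```python
from fractions import Fraction
from itertools import combinations_with_replacement

solutions = [
    (a, b, c)
    for a, b, c in combinations_with_replacement(range(1, 101), 3)
    if (1 + Fraction(1, a)) * (1 + Fraction(1, b)) * (1 + Fraction(1, c)) == 3
]
print(solutions)   # [(1, 3, 8), (1, 4, 5), (2, 2, 3)]
```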
This is actually a national competition question in Vietnam (Violympic), which I attended (and did poorly at, but had a lot of fun). I understood almost every question asked that day, but this one really makes my head pop, because I haven't learned much about integer-solution equations and how to solve hard cases (like this one). I have shown that $a,b,c$ can't all be the same, because: $$a = b = c$$ $$\implies(1+\frac{1}{a})(1+\frac{1}{b})(1+\frac{1}{c})=(1+\frac{1}{a})^3=3$$ $$⟺1+ \frac{1}{a}=\sqrt[3]{3}$$ $$⟺\frac{1}{a}=\sqrt[3]{3}-1$$ $$⟺a=b=c= \frac{1}{\sqrt[3]{3}-1},$$ which is not an integer. And using wolframalpha.com, I found out that $(a,b,c) \in \{(1,3,8),(1,4,5),(2,2,3)\}$, but I am stuck on how to solve it. Thank you in advance for checking this out. I really appreciate your effort. REPLY [21 votes]: Suppose $a\geq 3$ (noting that we may assume $a\leq b\leq c$); then $$\left(1+\frac{1}{a}\right)\left(1+\frac{1}{b}\right)\left(1+\frac{1}{c}\right)\leq\left(1+\frac{1}{3}\right)^3=\frac{64}{27}<3,$$ a contradiction. Hence $a=1,2$. If $a=1$, it remains to solve $(1+1/b)(1+1/c)=3/2$. The same trick shows $b<5$. Now one may simply list all possible values of $b$. The case $a=2$ can be solved similarly.<|endoftext|> TITLE: Evaluate $\int_\frac12 ^2 \frac1x\tan(x-\frac1x)dx $ QUESTION [5 upvotes]: $$\int\limits_\frac{1}{2} ^2 \frac{1}{x}\tan\left(x-\frac{1}{x}\right)\mathrm{d}x$$ I have tried substitution and integration by parts, and both seem to fail. Can anyone give me some hints? REPLY [2 votes]: Let $$ I = \int_{\frac{1}{2}}^{2}\frac{1}{x}\cdot \tan \left(x-\frac{1}{x}\right)\,dx$$ Now let $\displaystyle\left(x-\frac{1}{x}\right) = t\;;$ then $\displaystyle \left(1+\frac{1}{x^2}\right)\,dx = dt\Rightarrow \left(x+\frac{1}{x}\right)\,dx = x\,dt$, and the limits change accordingly. Now using $$ \left(x+\frac{1}{x}\right)^2 -\left(x-\frac{1}{x} \right)^2 = 4\Rightarrow \left(x+\frac{1}{x}\right)=\sqrt{\left(x-\frac{1}{x}\right)^2+4}=\sqrt{t^2+4}$$ the integral becomes $$ I = \int_{-\frac{3}{2}}^{\frac{3}{2}}\tan t\cdot \frac{1}{\sqrt{t^2+4}}\cdot \frac{x}{x} \, dt = \int_{-\frac{3}{2}}^{\frac{3}{2}}\tan t\cdot \frac{1}{\sqrt{t^2+4}} \, dt$$ So we get $$ I = \int_{-\frac{3}{2}}^{\frac{3}{2}} \underbrace{\frac{\tan t}{\sqrt{t^2+4}}}_{\textbf{odd function}} \, dt = 0$$ Above we have used the formula $$\displaystyle \int_{-a}^a f(x) \, dx = 0\;,$$ if $f(x)$ is an odd function.<|endoftext|> TITLE: Maximizing $|f'(0)|$ of an analytic function with $f(1/2)=0$. QUESTION [8 upvotes]: Let f be an analytic function from the unit disk D to the unit disk D. Assume that $f(1/2)=0$, prove $|f'(0)| \leq 25/32$. I am currently stuck on this problem. I have tried composing with a conformal self-map $g$ of the unit disk so that $(f\circ g)(0)=0$ in order to apply the Schwarz lemma, but this only gives an upper bound for $|f'(1/2)|$. Is there any other approach for this problem? REPLY [4 votes]: Where did you find this problem? My solution is a bit tricky... Write $f(z) = g(z) \cdot \frac{z-\frac12}{\frac12 z - 1}$. Notice $g$ is an analytic function from $D$ (the unit disk) to $D$. You can easily check that $f^\prime(0) = -\frac34 g(0) + \frac12 g^\prime(0)$. By Cauchy's integral formula, we have $$ f^\prime(0) = \frac{1}{2 \pi i} \int_{|z|=r} g(z) \cdot \left( -\frac34 \cdot \frac1z + \frac12 \cdot \frac{1}{z^2} \right) \, \rm{d} z =: \frac{1}{2 \pi i} \int_{|z|=r} g(z) \cdot K(z) \, \rm{d} z, $$ for $0 < r < 1$. Taking absolute value, we get $$ |f^\prime(0)| \le \frac{\| g \|_\infty}{2 \pi} \cdot \int_{|z|=r} |K(z)| \cdot |\rm{d} z|, $$ where $\| g \|_\infty = \sup_{|z|\le 1} |g(z)| \le 1$.
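For a quick numerical sanity check of this bound, the integral can be approximated with a midpoint rule (a rough sketch; the sample count is arbitrary):

```python
import cmath, math

def average_abs_K(samples=100000):
    # midpoint rule for (1/2pi) * integral over |z|=1 of |K(z)| |dz|,
    # with K(z) = -(3/4)/z + (1/2)/z^2 and z = e^{it}
    total = 0.0
    for j in range(samples):
        t = -math.pi + 2 * math.pi * (j + 0.5) / samples
        z = cmath.exp(1j * t)
        total += abs(-0.75 / z + 0.5 / z ** 2)
    return total / samples

print(average_abs_K())   # ~ 0.836
```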
If we take the limit $r \to 1$ and perform the integration, we get $|f^\prime(0)| \le 0.836$. Since $\frac{25}{32} = 0.78125$, we have to improve the method. Notice that we can replace $K(z)$ by any function that has the same principal part (that is, add an analytic function to $K$). The optimal choice turns out to be $$ \kappa(z) = \frac{(4-3z)^2}{32 z^2} = \frac12 \cdot \frac{1}{z^2} - \frac34 \cdot \frac1z + \frac{9}{32}, $$ but this requires more explanation. Repeating the same argument, we get $$ |f^\prime(0)| \le \frac{1}{2 \pi} \cdot \int_{|z|=1} |\kappa(z)| \cdot |\rm{d} z| = \frac{1}{2 \pi} \int_{-\pi}^\pi \frac{1}{32}|4-3 e^{it}|^2 \, \rm{d} t = \frac{1}{2 \pi} \int_{-\pi}^\pi \frac{16 - 24 \cos(t) + 9}{32}\, \rm{d} t = \frac{25}{32}.$$<|endoftext|> TITLE: Are telescoping sums related to the fundamental theorem of calculus? QUESTION [7 upvotes]: I just noticed that $$\sum_{i=1}^n (a_i - a_{i-1})=a_n-a_0$$ and $$\int_a^b f'(x)\mathrm{d}x=f(b)-f(a)$$ look really similar. We can consider $a_i-a_{i-1}$ a discrete analog to the derivative of continuous functions. Is there anything deep between those equations? REPLY [3 votes]: Consider $$f'(x)=\lim_{h\to0}\frac{f(x+h)-f(x)}h$$ So we have $$\int_a^bf'(x)dx=\lim_{h\to0}\int_a^b\frac{f(x+h)-f(x)}hdx$$ $$=\lim_{h\to0}\frac1h\left(\int_a^bf(x+h)-f(x)dx\right)$$ $$=\lim_{h\to0}\frac1h\left(\int_a^bf(x+h)dx-\int_a^bf(x)dx\right)$$ $$=\lim_{h\to0}\frac1h\left(\int_{a+h}^{b+h}f(x)dx-\int_a^bf(x)dx\right)$$ $$=\lim_{h\to0}\frac1h\left(\int_b^{b+h}f(x)dx-\int_a^{a+h}f(x)dx\right)\tag1$$ $$=f(b)-f(a)\tag2$$ You may graph or look at Riemann sums to see the last step above $(2)$. Consider your telescoping sum for $(1)$.<|endoftext|> TITLE: Expected number of visits of a Markov Chain on $\mathbb{Z}$ QUESTION [7 upvotes]: Suppose I have a Markov Chain with state space $\mathbb{Z}$ with $\mathbb{P}(X_n=X_{n-1}+1|X_{n-1})=\lambda$, $\mathbb{P}(X_n=X_{n-1}-1|X_{n-1})=\mu$ where $\lambda+\mu=1$, $\lambda,\mu>0$ and $\lambda\neq\mu$. How do I compute the expected number of visits to a state $n$? REPLY [7 votes]: MC = Markov chain. Let $$f_{jj}=P( \text{the MC returns (in finite time) to state}\;j \mid X_0=j).$$ Notice that, by the Markov property, each time the MC visits state $j$ it either returns to this state later with probability $f_{jj}$ or leaves it forever with probability $1-f_{jj}$. Let $A$ be this last event, with $P(A)=1-f_{jj}$. Let $N_j$ := the number of visits of the MC to the state $j$ $= \sum_{n=0}^{\infty} 1 (X_n=j)$ (so the visit at time $0$ is counted). Then, for $k\ge1$, $$P(N_j=k\mid X_0=j)=[1-P(A)]^{k-1}P(A)=f_{jj}^{k-1}(1-f_{jj})$$ This means that $$N_j\mid X_0=j \sim \text{Geometric}(p=1-f_{jj})$$ The expected number of visits is $E(N_j\mid X_0=j)=\frac{1}{1-f_{jj}}$. This is finite when $f_{jj}<1$. For a non-symmetric random walk the chain abandons state $j$ with positive probability $1-f_{jj}$, so the expectation is finite. In simple terms, a state is recurrent when, beginning at this state, the chain at some point revisits it with probability 1. A simple random walk is recurrent only when $\mu=\lambda=1/2$.<|endoftext|> TITLE: What does the symbol $\sqsubset$ mean? Is it the same as $\subset$? QUESTION [5 upvotes]: I have tried looking for this symbol but I found the definition for the latter only. Hence, I am unsure whether these two are the same. If not, can anyone please let me know the difference? Thanks a lot. REPLY [3 votes]: The symbol $\sqsubset$ is sometimes used as a synonym for $\subset$, but this isn't universally true.
For example Jech uses $\sqsubseteq$ (as in $\mathbb A \sqsubseteq \mathbb B$) to denote subalgebras of Boolean algebras. In your case, as you are interested in the meaning of $\sqsubset$ in the context of this paper, the meaning of $\sqsubset$ is part of Definition 1. In general, context is key when interpreting any kind of mathematical notation. You may (but shouldn't) replace any mathematical symbol with any other without changing the meaning of a given paper - as long as you are consistent about it.<|endoftext|> TITLE: Why is $0^\sharp$ not definable in $ZFC$? QUESTION [10 upvotes]: I have a little question about $0^\sharp$. I'm sure it has a nice and easy answer, but I'm just not seeing it, and I think it'll help my understanding of $L$ quite a bit if I can piece the answer together. Given Godel's constructible universe $L$, we can define $0^\sharp$ as follows: $0^\sharp =_{df} \{ \ulcorner \phi \urcorner | L_{\aleph_\omega} \models \phi [\aleph_1,...,\aleph_n ]\}$ (N.B. There are a bunch of equivalent ways of defining $0^\sharp$, e.g. through an elementary embedding or through Ehrenfeucht-Mostowski sets. We'll see below why this characterisation is interesting.) Now the existence of $0^\sharp$ has some interesting large cardinal consequences: e.g. it implies $V \not = L$ in a dramatic fashion (e.g. Covering fails for $L$ if $0^\sharp$ exists). It would be bad then, if $0^\sharp$ were definable in $ZFC$ (by Godel's Second). However, the above definition of $0^\sharp$ looks like it should be definable in $ZFC$. Each of $\aleph_1,...,\aleph_n$ is $V$-definable in $ZFC$: Godel coding has all been settled on, $L_{\aleph_\omega}$ is definable in $V$, and satisfaction is definable over set-sized models. So what stops us using Separation to obtain $0^\sharp$ from $\omega$? Note I'm not looking for the easy answer: $0^\sharp$ exists $\Rightarrow Con(ZFC)$, and so $0^\sharp$ can't be definable in $ZFC$ (assuming that it's consistent). I am mindful that a bunch of the notions in the definition are either non-absolute or not definable in $L$ (e.g. every definable cardinal is countable in $L$ if there are the relevant indiscernibles). This only suggests that the identity of $0^\sharp$ is not absolute, not that it (whatever it may be) can't be proved to exist in $ZFC$ (exactly like many $\aleph_\alpha$). What I really want to know is where the above argument breaks down. What part of the definition of $0^\sharp$ prevents it being definable in $ZFC$? REPLY [12 votes]: The classical definition of $0^\sharp$ is as (the set of Gödel numbers of) a theory, namely, the unique Ehrenfeucht-Mostowski blueprint satisfying certain properties (coding indiscernibility). This is a perfectly good definition formalizable in $\mathsf{ZFC}$, but $\mathsf{ZFC}$ or even mild extensions of $\mathsf{ZFC}$ are not enough to prove that there are objects that satisfy it. In $L$ there is no EM blueprint with the required properties. It happens that if it exists, then $0^\sharp$ indeed admits the simple description given in line 4 of your question, but (unlike $0^\sharp$) the set in line 4 always exists (that is, $\mathsf{ZFC}$ proves its existence, pretty much along the lines of the sketch you suggest), so it is not appropriate to define $0^\sharp$ that way (for instance, that set is not forcing invariant in general). As you mentioned, there are several equivalent definitions of $0^\sharp$. Some of them are readily formalizable in $\mathsf{ZFC}$, some are not.
For example, we cannot talk of a proper class of $L$-indiscernibles in $\mathsf{ZFC}$ alone, but $0^\sharp$ could be defined as such a class. The modern definition of $0^\sharp$ introduces it not as a theory but rather as a certain mouse, a model of the form $(L_\alpha,U)$ for certain $\alpha$, where $U$ is an (external) $L$-$\kappa$-ultrafilter for some $\kappa$ definable in terms of $\alpha$, with the requirement that the iterates of $L_\alpha$ by $U$ are all well-founded (and some additional technical requirements related to the minimality of this model). This is more in tune with the current approach to inner model theory. Again, this definition is formalizable in $\mathsf{ZFC}$ in a straightforward fashion, but the existence of such a mouse cannot be established in $\mathsf{ZFC}$ alone.<|endoftext|> TITLE: $.99999...=1$: What is the proof that we can multiply an infinite series by something to shift it? QUESTION [5 upvotes]: I wanted to prove to myself that $.9999... = 1$ so I wanted to show that: $N = 9/10 + 9/100 + 9/1000 + 9/10000 + ...$ or more formally, $N = 9S$ where $S = \sum_{k=1}^{\infty} \frac{1}{10^k}$ Then multiplying by $10$ I get $10S = \sum_{k=1}^{\infty} \frac{1}{10^{k-1}} = \sum_{k=0}^{\infty} \frac{1}{10^{k}}$ Subtracting I get $10S - S = \sum_{k=0}^{\infty} \frac{1}{10^k} - \sum_{k=1}^{\infty} \frac{1}{10^{k}} = \frac{1}{10^0} = 1$ So if $9S = 1$, then $S = 1/9$, and $N = 9S = 1$, proof completed. Is this proof correct? What is it exactly that allows me to multiply an infinite series by $10$ and have it be a valid transformation? To my "ignorant intuition" this seems like taking something infinite and making it $10$ times infinity, which I don't know what that really means. What is it that allows me to shift indexes like I did to move $k$ down from $1$ to $0$? Similar to question 2 above. We shift and yet do not have to touch or manipulate the upper bound of infinity at all. I know that "infinity minus one" is sort of a weird notion but I don't understand what it means or why we can default it back to infinity, whereas in any other context when we change bounds we change them on both lower and upper. REPLY [5 votes]: The manipulations you did are valid for convergent series. Therefore your proof is incomplete, because you didn't prove that the series is actually convergent. Fortunately the convergence in this case is easy to show: the sequence of partial sums is strictly increasing (all terms of the series are positive), and each of the terms of that sequence is less than $1$. A strictly increasing sequence that is bounded from above converges. Here's a proof that the partial sums are bounded by $1$ without knowledge of the result, by manipulations analogous to the ones you did with the infinite series: Let $s_n = \sum_{k=1}^n \frac{9}{10^k}$ be the $n$-th partial sum. Clearly, $s_1=\frac{9}{10}<1$. Furthermore, it is easy to check that $s_{n+1} = (s_n + 9)/10$, thus if $s_n<1$ then $s_{n+1} < (1+9)/10 = 1$. Therefore by induction, all $s_n<1$. To see why establishing convergence is important, consider the following series: $$S = 1 + 1 + 1 + 1 + \ldots$$ Obviously, $$S = 0 + S = 0 + 1 + 1 + 1 + \ldots$$ But then, \begin{aligned} 0 &= S - S\\ &= (1-0) + (1-1) + (1-1) + (1-1) + \ldots\\ &= 1 + 0 + 0 + 0 + \ldots\\ &= 1 \end{aligned} So why can you multiply the terms of the series by $10$ (or actually any number $c$) and get $10$ times the result?
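As a quick aside, the partial-sum bound proved above is easy to watch numerically; a minimal sketch in exact rational arithmetic (my own illustration, using the recurrence $s_{n+1}=(s_n+9)/10$):

```python
from fractions import Fraction

s = Fraction(0)
for n in range(1, 9):
    s = (s + 9) / 10   # the recurrence s_{n+1} = (s_n + 9)/10
    print(n, s)        # 9/10, 99/100, 999/1000, ... always below 1
```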
Well, the usual definition of convergence of a series is as follows: $\sum_{k=0}^{\infty} a_k = a$ if for every $\epsilon > 0$ you can find an $N_\epsilon$, so that for each $n>N_\epsilon$ you have $\left|\sum_{k=0}^n a_k - a\right| < \epsilon$. So now to prove that $\sum_{k=0}^{\infty} ca_k = ca$, you have to find an $N'_\epsilon$, so that for each $n>N'_\epsilon$ you have $\left|\sum_{k=0}^n ca_k - ca\right| < \epsilon$. However note that the latter is now a finite sum, and thus you can move the factor $c$ out of the sum. For $c=0$, the statement is obviously true, so we only have to consider $c\ne 0$. But then, we can reformulate the condition as $\left|\sum_{k=0}^n a_k - a\right| < \epsilon/\left|c\right|$, and since $\sum a_k$ converges to $a$, we see that $N'_\epsilon = N_{\epsilon/\left|c\right|}$ works. The case of the shift is even easier: You effectively have replaced the series $$a_0 + a_1 + a_2 + \ldots$$ by the series $$0 + a_0 + a_1 + a_2 + \ldots$$ Adding $0$ obviously doesn't change the finite sum, and then we see that $N'_\epsilon = N_\epsilon+1$ covers the exact same terms for the modified sequence. You've also used a third operation which you didn't specifically mention: You subtracted the two series element-wise. The proof that this is allowed is similar to the previous ones, but slightly more complicated because you've now got two series. But again, you use the fact that you can do the manipulation for finite sums, and the fact that both series individually converge.<|endoftext|> TITLE: Notion of a normal operator QUESTION [6 upvotes]: I understand that a normal operator is an operator such that $$ AA^\dagger = A^\dagger A $$ where $\dagger$ is the conjugate transpose. However, what is the most intuitive way to "characterise" this? For example: $SO(3)$ is the group of rotations in $\mathbb R^3$; a unitary matrix is one that represents an isometry; a Hermitian matrix is a generalization of a symmetric one, and is the "nicest" (diagonalizable, real eigenvalues, etc.) of all matrices over $\mathbb C$. I was hoping for some sort of intuitive explanation of why I would care about normal matrices over $\mathbb C$ (maybe other reasons than the spectral theorem? Something more fundamental / geometric perhaps) REPLY [3 votes]: A square complex matrix $A$ is normal iff it has an orthonormal basis of eigenvectors. That's why you would care about it. Unitary and selfadjoint matrices are special cases. Normal is the most general case.<|endoftext|> TITLE: If $f,g$ are continuous and $g$ is $1$-periodic, $\int_0^1 f(x)g(nx)dx \xrightarrow[n\to\infty]{} \int_0^1 f \int_0^1 g$ QUESTION [7 upvotes]: Let $f,g\in C(\mathbb{R},\mathbb{R})$ where $g(x+1)=g(x) \; \forall x\in \mathbb{R}$. Prove that \begin{equation} \lim_{n\rightarrow\infty} \int_0^1 f(x)g(nx)dx=\int_0^1f(x)dx \int_0^1g(x)dx . \end{equation} REPLY [5 votes]: To complement LeBtz's excellent answer above, let's try a more heuristic way of figuring out what to do. This may not be the shortest, but should (i) give an intuition of what is going on; and (ii) show that it is not "magic" with the proof coming out of the blue, but that there is some process to arrive at the result. Since the assumption on $g$ is $1$-periodicity, we want to make quantities like $g(u+k)$ for $k\in\mathbb{N}$ appear, so that we can simplify.
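(First, a quick numerical experiment to see the claim in action; the choices $f(x)=x^2$ and $g(x)=\cos(2\pi x)+\tfrac12$ are mine, so the claimed limit is $\tfrac13\cdot\tfrac12$:)

```python
import math

def f(x):
    return x * x                              # so int_0^1 f = 1/3

def g(x):
    return math.cos(2 * math.pi * x) + 0.5    # 1-periodic, int_0^1 g = 1/2

def integral(n, steps=100000):
    # midpoint rule for int_0^1 f(x) g(n x) dx
    h = 1.0 / steps
    return h * sum(f((k + 0.5) * h) * g(n * (k + 0.5) * h) for k in range(steps))

for n in (1, 4, 16, 64, 256):
    print(n, integral(n))                     # tends to 1/6 = 0.1666...
```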
So let us break $[0,1]$ into $n$ segments of length $\frac{1}{n}$, so that $g(nx)$ becomes something we want: $$\begin{align} \int_0^1 f(x)g(nx)dx &= \sum_{k=0}^{n-1} \int_{\frac{k}{n}}^{\frac{k+1}{n}} f(x)g(nx)dx = \sum_{k=0}^{n-1} \int_{0}^{\frac{1}{n}} f\left(u+\frac{k}{n}\right)g\left(n\left(u+\frac{k}{n}\right)\right)du \\ &=\sum_{k=0}^{n-1} \int_{0}^{\frac{1}{n}} f\left(u+\frac{k}{n}\right)g\left(nu+k\right)du =\sum_{k=0}^{n-1} \int_{0}^{\frac{1}{n}} f\left(u+\frac{k}{n}\right)g(nu)du \end{align} $$ That's good, we got somewhere. Not quite there, but still... we can get a little more done now by switching integral and sum, and observing that after simplification the term $g(nu)$ does not depend on the summation index $k$ anymore: $$\begin{align} \int_0^1 f(x)g(nx)dx &= \sum_{k=0}^{n-1} \int_{0}^{\frac{1}{n}} f\left(u+\frac{k}{n}\right)g(nu)du = \int_{0}^{\frac{1}{n}} du \sum_{k=0}^{n-1} f\left(u+\frac{k}{n}\right)g(nu)\\ &= \int_{0}^{\frac{1}{n}} du\, g(nu) \sum_{k=0}^{n-1} f\left(u+\frac{k}{n}\right) \end{align} $$ Not too bad. Let's continue — eventually, we want integrals from $0$ to $1$, so let us switch back via a change of variables: $$\begin{align} \int_0^1 f(x)g(nx)dx &= \int_{0}^{\frac{1}{n}} du\, g(nu) \sum_{k=0}^{n-1} f\left(u+\frac{k}{n}\right) = \int_{0}^{1} dx\, g(x) \frac{1}{n}\sum_{k=0}^{n-1} f\left(\frac{x}{n}+\frac{k}{n}\right) \end{align} $$ That looks promising. Why? Well, if the last term had been $\frac{1}{n}\sum_{k=0}^{n-1} f\left(\frac{k}{n}\right)$ instead, we would have had a Riemann sum for $f$, whose limit as $n\to \infty$ is exactly $\int_0^1 f$... this is nice. But there is that extra $\frac{x}{n}$ which is preventing that from directly applying: this is where the (uniform, since on a bounded closed interval) continuity of $f$ should come into play, as $\frac{x}{n}\xrightarrow[n\to\infty]{}0$. By the above, $$\begin{align} \int_{0}^{1} dx\, g(x) \frac{1}{n}\sum_{k=0}^{n-1} f\left(\frac{k}{n}\right) = \frac{1}{n}\sum_{k=0}^{n-1} f\left(\frac{k}{n}\right)\int_{0}^{1} g(x)dx \xrightarrow[n\to\infty]{} \int_{0}^{1} f(x)dx \int_{0}^{1} g(x)dx \end{align} \tag{1} $$ Let us control the difference, call it $\Delta_n$: $$\begin{align} \Delta_n &= \left\lvert \int_{0}^{1} dx\, g(x) \frac{1}{n}\sum_{k=0}^{n-1} f\left(\frac{k}{n}\right) - \int_{0}^{1} dx\, g(x) \frac{1}{n}\sum_{k=0}^{n-1} f\left(\frac{x}{n}+\frac{k}{n}\right) \right\rvert \\&= \left\lvert \int_{0}^{1} dx\, g(x) \frac{1}{n}\sum_{k=0}^{n-1} \left( f\left(\frac{k}{n}\right) - f\left(\frac{x}{n}+\frac{k}{n}\right) \right) \right\rvert \\ &\leq \int_{0}^{1} dx\, \lvert g(x)\rvert \frac{1}{n}\sum_{k=0}^{n-1} \left\lvert f\left(\frac{k}{n}\right) - f\left(\frac{x}{n}+\frac{k}{n}\right) \right\rvert \end{align} $$ For convenience, write $\lVert g\rVert_\infty\stackrel{\rm def}{=} \max_{x\in[0,1]} \lvert g(x)\rvert$ (this exists, as $g$ is continuous). Fix any $\varepsilon > 0$, and — $f$ is uniformly continuous too — let $N_\varepsilon$ be such that, for all $n\geq N_\varepsilon$, $\lvert f(x)-f(y)\rvert \leq \frac{\varepsilon}{\lVert g\rVert_\infty+1}$ whenever $x,y\in[0,1]$ satisfy $\lvert x-y\rvert \leq \frac{1}{n}$. (The $+1$ is just to avoid dividing by zero if $g$ is identically zero.)
Then we are good: for any $n\geq N_\varepsilon$, since $\frac{x}{n} \leq \frac{1}{n}$, we get $$\begin{align} \Delta_n &\leq \int_{0}^{1} dx\, \lvert g(x)\rvert \frac{1}{n}\sum_{k=0}^{n-1} \left\lvert f\left(\frac{k}{n}\right) - f\left(\frac{x}{n}+\frac{k}{n}\right) \right\rvert\\ &\leq \int_{0}^{1} dx\, \lVert g\rVert_\infty \frac{1}{n}\sum_{k=0}^{n-1} \frac{\varepsilon}{\lVert g\rVert_\infty+1} = \varepsilon \frac{\lVert g\rVert_\infty}{\lVert g\rVert_\infty+1}\\ &< \varepsilon \end{align}$$ and since $\varepsilon>0$ was arbitrary, we just showed that $$\Delta_n \xrightarrow[n\to\infty]{} 0. \tag{2} $$ Combining (1) and (2) gives the limit: $$ \int_0^1 f(x)g(nx)dx = \int_{0}^{1} dx\, g(x) \frac{1}{n}\sum_{k=0}^{n-1} f\left(\frac{k}{n}\right) +\Delta_n \xrightarrow[n\to\infty]{} \int_{0}^{1} f(x)dx \int_{0}^{1} g(x)dx+0 $$<|endoftext|> TITLE: Uniformly convergence of $f_{n+1}(x)= \int^x_0 f_n(t)dt$ QUESTION [5 upvotes]: Let $f_0$ be an integrable function on $[0,a]$ and define a sequence of functions by $f_{n+1}(x)= \int^x_0 f_n(t)dt$. Show that $f_n$ converges uniformly to $0$ on $[0,a]$. What I have so far: if $\int_0^xf_0(t)dt =x_0 $ then $\int_0^xx_0dt=xx_0 $ and $f_n(x)=x^nx_0 $, so $f_n(x) \to 0$ for $x\in[0,1) $, but I don't know if the convergence is uniform, and what happens for $a>1$? Thanks ahead. REPLY [6 votes]: Show by induction that $|f_n(x)|\leq \frac{x^n}{n!}\|f_0\|_\infty$ for all $n\in\mathbb N$, $x\in[0,a]$. Then $\|f_n\|_\infty \leq \frac{a^n}{n!}\|f_0\|_\infty$. This sequence should look familiar. Edit: This only works if $\|f_0\|_\infty$ is finite, as pointed out in the comments. This can be remedied by using $f_1$ instead of $f_0$ for the argument and using $\|f_1\|_\infty \leq \|f_0\|_1$.<|endoftext|> TITLE: Calculating Fibonacci numbers QUESTION [5 upvotes]: In my book they calculated the $28$th Fibonacci number and said $F_{28} = 3 \times 13 \times 29 \times 281 = 317811$. This made me wonder if there was an easier way to find the $28$th Fibonacci number than by doing it by hand. REPLY [2 votes]: Unfortunately this requires (sometimes) a calculator to compute. You could potentially use the following: Let $\phi = \frac{1+\sqrt5}{2}$; then $$F_{n}=\frac{\phi^n-(1-\phi)^n}{\sqrt5}.$$<|endoftext|> TITLE: Square as union of two disjoint connected sets containing opposite corners QUESTION [5 upvotes]: Question: Given the unit square $[0,1] \times [0,1]$ Write it as the union of two disjoint connected sets $A, B$, $A$ must contain $(0,0),(1,1)$, $B$ must contain $(0,1), (1,0)$ Not sure if this can actually be done. I went for the obvious solution: Let $A = \{(x,x) | x \in [0,1]\}$, $B = ([0,1] \times [0,1]) \backslash A$. Then $A,B$ contain the desired points. $A$ is connected because it is homeomorphic to $[0,1]$, a connected space. What about the connectedness of $B$? Well, obviously it is not connected. What other possibility could there be? REPLY [4 votes]: Let $f(x)=\sin(1/x)$ for $x\in\mathbb R \setminus \{0\}$ and set $f(0)=0$. Let $G$ be the graph of $f$. It is fairly easy to see that $G$ is connected. Can you prove that $\mathbb R ^2\setminus G$ is also connected? This actually follows from a general result in this dissertation, which reads: Theorem 9. If $X$ is a topological space, $f : X \to \mathbb R$, $\text{Gr}(f)$ is connected, and $(X \times \mathbb R) \setminus \text{Gr}(f)$ is disconnected then $f$ is continuous. As $G$ is connected and $f$ is not continuous, we must have that $\mathbb R ^2\setminus G$ is connected.
For your set $A$, take a curve parametrized by $[0,\infty)$ starting at $(0,0)$ that zig-zags in a $\sin(1/x)$ fashion closer and closer to $(1/2,1/2)$. Now do the same thing starting from $(1,1)$. Let $A$ be these two curves, together with the point $(1/2,1/2)$. Let $B=[0,1]^2\setminus A$. $A$ is the part in blue, and $B$ is everything else. For a slightly simpler example, you could replace the bottom left part of $A$ with the part of the diagonal from $(0,0)$ to $(1/2,1/2)$.<|endoftext|> TITLE: Line bundle on a product QUESTION [6 upvotes]: Let X and T be projective varieties, with $H^1(\mathcal{O}_X)=0$. Take L a line bundle on the product. Prove that for any two points $t,t'$ of T, the pullbacks $L_t, L_{t'}$ to $X\times t, X\times t'$ are isomorphic line bundles on X. I am completely stuck and I don't even understand how to use the hypothesis in cohomology. If someone posts a hint I can try to elaborate. REPLY [3 votes]: Let $X$ be an integral projective scheme over an algebraically closed field $k$, and assume that $H^1(X, \mathcal{O}_X) = 0$. Let $T$ be a connected scheme of finite type over $k$. We want to show that if $\mathscr{L}$ is an invertible sheaf on $X \times T$, then the invertible sheaves $\mathscr{L}_t$ on $X = X \times \{t\}$ are isomorphic, for all closed points $t \in T$. This is the content of Exercise III.12.6(a) of Hartshorne, as mentioned in the comments above. We have a (group) scheme $\text{Pic}(X)$ which essentially parametrizes line bundles on $X$ (and which happens to be an infinite disjoint union of projective schemes, although that's not terribly relevant). $H^1(X, \mathcal{O}_X)$ is the tangent space of $\text{Pic}(X)$ (at every point), so if it is $0$, it follows that $\text{Pic}(X)$ is actually a disjoint union of (reduced) points. Then $\mathscr{L}$ induces a morphism from $T$ to $\text{Pic}(X)$, which must be constant since $T$ is connected and $\text{Pic}(X)$ is a disjoint union of points. Because the morphism from $T$ to $\text{Pic}(X)$ is constant, the fibers $\mathscr{L}_t$ are all isomorphic (we have to be slightly careful — morphisms from $T$ to $\text{Pic}(X)$ are not in bijection with line bundles on $X \times T$, but any two line bundles inducing the same morphism will have isomorphic fibers $\mathscr{L}_t$). For details on this material, look at the book "Néron models" here. Bosch, Siegfried; Lütkebohmert, Werner; Raynaud, Michel. Néron models. Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)], 21. Springer-Verlag, Berlin, 1990. x+325 pp.<|endoftext|> TITLE: Why is the area of a circle not $2 \pi R^2$? QUESTION [6 upvotes]: Let $$ C=\{(x,y,z)\mid x^2+y^2=1, 0 \le z \le 1\} $$ Contrary to the calculus result, my intuition says that the area of a circle with radius $R$ should be $2 \pi \cdot R^2$, because I think that if all radii segments are to be raised perpendicularly to the perimeter of the unit circle (at plane $xy$), then the area is preserved. I know I am wrong but I don't (qualitatively) understand why (note I never learnt measure theory). REPLY [21 votes]: I have created an animation that shows what I think the OP means by "all radii segments are to be raised perpendicularly to the perimeter of the unit circle": If this is indeed the intended interpretation, then let me make a few observations: The radii segments, when "raised up perpendicularly to the unit circle", do indeed seem to form a cylinder (with no top or bottom); the lateral surface area of that cylinder is, as the OP says, $2\pi R^2$.
The problem with this visualization can be best understood by imagining each of the radii segments, when in the "upright" position, as being thickened into a rectangular strip. In the upright position, those strips form a cylindrical surface. But as they fold down, the strips will overlap one another. See animation below: How to prevent the rectangular strips from overlapping one another as they fold down? Cut each one into a triangular shape with its base on the perimeter of the circle, and then they will fit together nicely as they fold down. (See animation below.) Of course, if you cut the rectangular strips into triangles then they will no longer form the cylindrical surface --- each one has had half of its area removed, so the area that remains is just $\pi R^2$. So at an intuitive level, that's where the factor of $1/2$ comes from: the need to prevent the infinitesimal rectangular strips from overlapping each other as they fold downwards.<|endoftext|> TITLE: Triangles on a Torus QUESTION [5 upvotes]: This is a really basic question, which draws as its source two of the pictures from the Wikipedia article about Gaussian curvature. If it is true that the sum of the angles of a triangle on a surface of negative Gaussian curvature is less than 180 degrees (as it says on Wikipedia), and that the sum of the angles of a triangle on a surface of positive Gaussian curvature is more than 180 degrees (which I believe is the case for a sphere, I think the sum is 270 degrees), then: Since the inside of a torus supposedly has negative Gaussian curvature, and the outside supposedly has positive Gaussian curvature, does a triangle inscribed on the inside of a torus have a sum of angles less than 180 degrees while a triangle inscribed on the outside of a torus have a sum of angles greater than 180 degrees? By "inside" I mean "closer to the donut hole" and by "outside" I mean "away from the donut hole". Thus an "ant walking on a torus" could tell precisely when it arrived "at the highest point" of the torus (the circle on the torus where every point has zero Gaussian curvature which divides the above two mentioned regions of positive and negative curvature) when the triangles it is drawing on the ground have a sum of angles of precisely 180 degrees? Also does this mean that the torus essentially has both elliptic and hyperbolic geometries, depending on which side the ant is walking on? I was thinking about this question because I was trying to think of examples of closed surfaces which have negative Gaussian curvature over an extended region, because it is not possible for the surface to be closed and have negative curvature everywhere and be embeddable in $\mathbb{R}^3$, see: existence of closed surface having only negative Gaussian curvature. https://mathoverflow.net/questions/111101/surfaces-in-mathbb-r3-with-negative-curvature-bounded-away-from-zero https://mathoverflow.net/questions/32597/compact-surfaces-of-negative-curvature REPLY [3 votes]: Quantitatively, if $T$ is a geodesic triangle bounding a topological disk in a surface with a Riemannian metric, the sum of the interior angles of $T$ is equal to $\pi$ (radians) plus the integral of the (Gaussian) curvature over the interior of $T$. (Sometimes this fact is called the local Gauss-Bonnet theorem, and the integral of the curvature is called the angular defect.) 
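For the torus of revolution described next, the Gaussian curvature has the well-known closed form $K(\theta)=\cos\theta/(r(R+r\cos\theta))$, where $\theta$ is the angle around the tube ($\theta=0$ on the outer equator). A minimal numerical sketch (Python; the radii $R=2$, $r=1$ are my arbitrary choices) checks the sign pattern and that the total curvature integrates to zero, as Gauss-Bonnet requires for a surface of Euler characteristic $0$:

```python
import math

R, r = 2.0, 1.0   # assumed radii: distance to the tube's center, tube radius

def K(theta):
    # Gaussian curvature of the torus of revolution at tube angle theta
    return math.cos(theta) / (r * (R + r * math.cos(theta)))

print(K(0.0))           # positive on the outer equator
print(K(math.pi))       # negative on the inner equator
print(K(math.pi / 2))   # zero on the circle of parabolic points

# total curvature: integrate K dA, with area element r(R + r cos(theta)) dtheta dphi
steps = 100000
dtheta = 2 * math.pi / steps
total = 0.0
for k in range(steps):
    theta = (k + 0.5) * dtheta
    total += K(theta) * r * (R + r * math.cos(theta)) * dtheta
total *= 2 * math.pi    # the phi integral contributes a factor of 2*pi
print(total)            # ~ 0, matching Gauss-Bonnet for the torus
```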
When you speak of "the torus", you're equipping a particular smooth manifold (a product of circles) with a metric from a particular two-parameter family obtained by embedding (into Euclidean $3$-space) one circle factor as a circle of radius $r$ and revolving about an axis at distance $R > r$ from the center of the circle, sweeping out the other circle factor. Every metric of this type does indeed have negative curvature on "the inside" and positive curvature on "the outside", though the curvature is not constant on any region; it is constant only along the "latitudes". Measurements on the surface of the torus can detect the "parabolic points" (where the Gaussian curvature vanishes) in the following sense: A small geodesic triangle entirely "on the outside" has positive angular defect, while a small geodesic triangle entirely "on the inside" has negative angular defect. A small triangle that contains points of positive curvature and points of negative curvature can have positive, zero, or negative angular defect, depending on its shape. Detecting the precise locations of the parabolic curves therefore requires (i) measuring infinitely many triangles (ii) to arbitrary precision. Because the curvature of such a torus is not constant, calling its geometry "elliptic" or "hyperbolic" would be at least mildly non-standard. While you're correct that no closed (compact, boundaryless) surface embedded in Euclidean $3$-space has a metric of negative curvature, there do exist smooth, complete (non-compact) surfaces of negative curvature (such as the graph $z = x^{2} - y^{2}$), and the hyperbolic plane does embed isometrically into Minkowski $3$-space as one sheet of a hyperboloid of two sheets. (Incidentally, if it matters, a geodesic triangle on a sphere can have arbitrary angular defect between $0$ and $4\pi$, non-inclusive. Angular defect $\pi/2$ occurs for a triangle enclosing one octant, and therefore having three right angles.)<|endoftext|> TITLE: Inequality involving rearrangement: $ \sum_{i=1}^n |x_i - y_{\sigma(i)}| \ge \sum_{i=1}^n |x_i - y_i|. $ QUESTION [7 upvotes]: If $x_1 \ge x_2 \ge \cdots \ge x_n$ and $y_1 \ge y_2 \ge \cdots \ge y_n$ are real numbers, and $\sigma$ is any permutation, then $$ \sum_{i=1}^n |x_i - y_{\sigma(i)}| \ge \sum_{i=1}^n |x_i - y_i|. $$ This must be a known inequality. What is it called, and how is it proven? (Just a reference is OK.) The conditions are similar to the rearrangement inequality. The inequality is a simple statement about minimizing the $\ell^1$ distance between a finite sequence and any rearrangement of another finite sequence. I searched around and clicked through various pages but couldn't find something relevant. If it is true, perhaps a proof could be constructed by decomposing the permutation into a sequence of transpositions. REPLY [5 votes]: For any convex function, such as $f(x) = |x|$, $$ \sum f(x_i - y_{\sigma(i)}) \geq \sum f(x_i - y_i)$$ because $(x_i - y_{\sigma(i)})$ majorizes $(x_i - y_i)$. A reference is the first theorem, 6.A.1, in chapter 6 of Olkin and Marshall's book on majorization, applied to the sequences $x_i$ and $-y_i$. They attribute the result to a 1972 article by Peter W. Day on general forms of the rearrangement inequality, and give a proof for vectors of real numbers. Day's article is about more general situations with ordered abelian groups. The inequality for real vectors must have been known earlier to many people.<|endoftext|> TITLE: What is $\sqrt{-4}\sqrt{-9}$?
QUESTION [6 upvotes]: I assumed that since $a^c \cdot b^c = (ab)^{c}$, then something like $\sqrt{-4} \cdot \sqrt{-9}$ would be $\sqrt{(-4) \cdot (-9)} = \sqrt{36} = \pm 6$, but according to Wolfram Alpha, it's $-6$? REPLY [2 votes]: For real numbers, all numbers are either positive, zero, or negative. And the square of a negative number is positive. Thus only zero and positive numbers have square roots, and positive numbers have two square roots, one positive and one negative, but both equal in magnitude (i.e. absolute value). NONE of this can be said about complex numbers. As a result of these observations about real numbers we can make the following assumptions, none of which we can make for the complex numbers: When we write $\sqrt a$ then by definition $a \ge 0$; for $a >0$ there exists exactly one $q > 0$ such that $q^2=a$, so $\sqrt {a} = \pm q$, and unless we specify otherwise in context we may as well arbitrarily define $\sqrt {a}=q>0$. And therefore $\sqrt {a}\sqrt {b}=\sqrt {ab} $. Even if we allow square roots to be negative this still holds up to the choice of sign, as products of positive and/or negative numbers are again positive or negative. For complex numbers we cannot make these assumptions. But we can assume $|ab|=|a||b|$ and so $|\sqrt {a}||\sqrt {b}|=|\sqrt {ab}|$. So, taking the principal values, $\sqrt {-4}\sqrt {-9} =(2i)(3i) = 6i^2= -6$. But it doesn't equal $\sqrt {(-4)(-9)}=\sqrt {36}=6 $.<|endoftext|> TITLE: Why every positive integer can be written as addition of 1 triangle number, 1 square number, and 1 pentagonal number QUESTION [12 upvotes]: I found this interesting conjecture; maybe I'm not the first to state it. I have tested the conjecture for the first $100{,}000$ positive integers (then my friend at the Brainden forum tested it to 1 trillion), but a test result is not a proof. Can anybody provide a proof, or disprove this conjecture? The conjecture is: Every positive integer can be written as the sum of 1 triangle number, 1 square number, and 1 pentagonal number. Note: Triangle numbers are generated by the formula $T_{n} = \frac{1}{2} n(n+1)$. The first ten triangle numbers are: 0, 1, 3, 6, 10, 15, 21, 28, 36, 45, ... Square numbers are generated by the formula $S_{n}=n^{2}$. The first ten square numbers are: 0, 1, 4, 9, 16, 25, 36, 49, 64, 81, ... Pentagonal numbers are generated by the formula $P_{n}=\frac{1}{2}n(3n-1)$. The first ten pentagonal numbers are: 0, 1, 5, 12, 22, 35, 51, 70, 92, 117, ... The first 10 positive integers, each written as Triangle + Square + Pentagon: 1 = 0 + 0 + 1; 2 = 0 + 1 + 1; 3 = 1 + 1 + 1; 4 = 0 + 4 + 0; 5 = 0 + 0 + 5; 6 = 0 + 1 + 5; 7 = 1 + 1 + 5; 8 = 3 + 0 + 5; 9 = 0 + 4 + 5; 10 = 0 + 9 + 1.
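For what it's worth, here is a minimal brute-force sketch of the kind of test described above (the bound $N$ and all names are my choices; raise $N$ to push the search further):

```python
def upto(gen, limit):
    vals, n = [], 0
    while gen(n) <= limit:
        vals.append(gen(n))
        n += 1
    return vals

N = 10000   # raise to test further
tri = upto(lambda n: n * (n + 1) // 2, N)        # 0, 1, 3, 6, ...
squ = upto(lambda n: n * n, N)                   # 0, 1, 4, 9, ...
pen = upto(lambda n: n * (3 * n - 1) // 2, N)    # 0, 1, 5, 12, ...

# ts[m] is True when m = (triangle number) + (square)
ts = [False] * (N + 1)
for t in tri:
    for s in squ:
        if t + s > N:
            break
        ts[t + s] = True

bad = [n for n in range(1, N + 1) if not any(p <= n and ts[n - p] for p in pen)]
print(bad)   # prints [] : no counterexample up to N
```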
Multiply by $8$ and add $1$ to both sides; we have $$ (2t+1)^2 + 8 s^2 + 12 p^2 - 4p = 8n+1. $$ Multiply by $3$ and add $1$ to both sides; we have $$ 3(2t+1)^2 + 24 s^2 + (6p-1)^2 = 24n+4. $$ One fourth of $24n+4$ is $6n+1.$ Take $$ u^2 + 12 v^2 + 6 w^2 = 6n+1, $$ $$ u^2 + 3 (2 v)^2 + 6 w^2 = 6n+1. $$ Hmmmm. Let me put what I've got next, which I assumed was enough last night: Take $u,v,w \geq 0.$ Note $u$ is odd and prime to $3.$ If $u \equiv 2 \pmod 3,$ $$ (u+6v)^2 + 3 (|u-2v|)^2 + 24 w^2 = 24 n+4. $$ If $u \equiv 1 \pmod 3$ and $u < 6v,$ $$ (6v -u)^2 + 3 (u+2v)^2 + 24 w^2 = 24 n+4. $$ If $u \equiv 1 \pmod 3$ and $u > 6v,$ more work is needed. Hmmm. ==================================================================== Wednesday: Make an answer; this is my area. Your question follows from the fact that, given an integer $n \geq 0,$ we can always write $$ 6n+1 = u^2 + 6 v^2 + 12 w^2 $$ in integers. The ternary form $ u^2 + 6 v^2 + 12 w^2 $ is not regular. Up to a million, there are some ten numbers that are not represented by the form, but instead are represented by the other form in the genus, that being $3 u^2 + 4 v^2 + 6 w^2.$ These sporadic numbers are, however, all multiples of three: $$3,\; 39,\; 51,\; 69,\; 165,\; 219,\; 399,\; 561,\; 771,\; 1941.$$ Taken together, the forms of the genus represent all positive integers except $3k+2$ and $4^k (16m+14).$ Compare the form $1,3,6$ in Dickson's table. The composition of the genus is shown in this excerpt:
B.-I. discr = -288 = -1 * 2^5 * 3^2
1^-1 3^-2 [2^1 4^1 8^1]_3
616: 1 6 12 0 0 0
631: 3 4 6 0 0 0
2^-2_4 16^-1_3
613: 1 3 24 0 0 0
637: 4 4 7 2 4 4
2^-2_2 16^-1_5
632: 3 4 7 4 0 0
2^2_2 16^1_1
619: 1 9 9 6 0 0
from http://www.math.rwth-aachen.de/~Gabriele.Nebe/LATTICES/Brandt_1.html Here is a page with pdfs about positive ternary quadratic forms. Proof of the fact above: Lemma (Burton Wadsworth Jones, 1928 dissertation). If we have a nonzero integer $k$ with $k=r^2 + 2 s^2,$ and $k$ is a multiple of $3,$ there is another representation $k = x^2 + 2 y^2$ with both $x,y$ prime to $3.$ We can also prove this by induction on the power of $3$ dividing $k.$ Theorem: if $m \equiv 1 \pmod 6$ and $n > 0,$ we can always write $m = u^2 + 6 v^2 + 12 w^2 $ in integers. Proof. We may assume that $m$ is represented by the other form of the genus, so we have $m = 3 u^2 + 4 v^2 + 6 w^2.$ It follows that $v \not\equiv 0 \pmod 3.$ We have four similar formulas, $$ 9m = (3u+6w)^2 + 6(-u + 2v+w)^2 + 12 (-u-v+w)^2,$$ $$ 9m = (3u-6w)^2 + 6(-u + 2v-w)^2 + 12 (-u-v-w)^2,$$ $$ 9m = (3u+6w)^2 + 6(u + 2v-w)^2 + 12 (u-v-w)^2,$$ $$ 9m = (3u-6w)^2 + 6(u + 2v+w)^2 + 12 (u-v+w)^2.$$ (Each identity can be checked by expanding; for instance, the first expands to $27u^2+36v^2+54w^2=9(3u^2+4v^2+6w^2)=9m.$) If at least one of $u,w$ is not divisible by $3,$ then at least one of the four formulas has all three quantities being squared divisible by $3.$ Therefore we may divide them by $3$ and pass from $9m$ to $m$ itself. If both $u,w$ are divisible by $3,$ then the Jones Lemma says that we can revise the representation so that both $u,w$ are prime to $3$; as a result, in the revision, at least one of the four formulas is made up of numbers divisible by $3.$<|endoftext|> TITLE: Is the graph of the Conway base 13 function connected? QUESTION [18 upvotes]: IVT Property: If $a < b$ and $y$ lies between $f(a)$ and $f(b)$, then $f(c) = y$ for some $c \in (a,b)$. The Conway base 13 function $f$ has the IVT property but is everywhere discontinuous. Is its graph a connected subset of the plane? REPLY: It is not. I will construct a continuous, strictly decreasing $h\colon[0,1]\to[1,2]$ whose graph does not meet the graph of $f$. Then $$\{(x,y)\in\operatorname{graph}(f) \mid y>h(x)\}$$ is a nontrivial relatively clopen subset of the graph of $f$ (since $f(0)=f(1)=0$), so the graph is not connected.
The construction depends on the fact that the intersection of the graph of $f$ with the rectangle $[0,1]\times[1,2]$ is the union of countably many functions $f_i$ such that: $f_i$'s domain is a subset of $[0,1]$; $f_i$ is non-decreasing; and $f_i$ is surjective onto $[1,2]$. Namely, each $f_i$ handles the inputs where the encoded decimal point is in a particular position, with a particular sequence of tridecimal digits in front of it. Each of them looks a bit like the Cantor function with the interior of each plateau removed, such that there is a horizontal gap instead. The properties of $f_i$ noted above mean that it can be uniquely extended to a non-decreasing $g_i:[0,1]\to[1,2]$. (At points where $f_i(x)$ is not defined, there can only be one possible value of $g_i(x)$ that won't break monotonicity.) We have that $g_i(0)=1$, $g_i(1)=2$ and $g_i$ is continuous. My $h$ will be strictly decreasing, with its values chosen such that it intersects each $g_i$ at an $x$-coordinate where $f_i$ is not defined. This prevents it from intersecting each of the $f_i$s and therefore from intersecting all of $f$. We start by setting $h(0)=2$, $h(1)=1$ and now inductively define a sequence of additional points on $h$. In this phase we maintain the invariants that: $h$ is defined at finitely many points in $[0,1]$; $h(x)$ is (so far) only defined for points where $f(x)=0$; and whenever $h(a)$ and $h(b)$ are defined for distinct $a$ and $b$, the slope $\frac{h(a)-h(b)}{a-b}$ is strictly between $-2$ and $0$. (We easily see that it is enough to enforce this between neighboring points.) Now in step $i$ we will choose an $x_i$ and $h(x_i)$ which prevents $h$ from meeting $f_i$. Consider the piecewise linear function we get from interpolating linearly between the known points of $h$. It is strictly decreasing, so it intersects $g_i$ at exactly one point. If this happens to be at an already known point of $h$, we don't need to add any point in this step. Otherwise, the points near the intersection point where the slope condition above will let us place a new $(x_i,h(x_i))$ form an open parallelogram in $[0,1]\times[1,2]$, which contains the intersection point. The set of $x$ such that $(x,g_i(x))$ is in the parallelogram is open and nonempty, and therefore we can choose an $x_i$ in this set with $f(x_i)=0$. Now set $h(x_i)=g_i(x_i)$. At the end of $\omega$ many steps, we will have defined $h(x)$ for a dense set of $x$s, because each interval on the $x$-axis contains the entire preimage of the interior of $[1,2]$ under at least one of the $g_i$s. This means that $x\mapsto h(x)+x$ is Lipschitz with Lipschitz constant $1$ on a dense subset of $[0,1]$, and it therefore extends uniquely to a function on all of $[0,1]$ with the same Lipschitz constant. Subtracting $x$ back out gives us a strictly decreasing continuous $h$ that avoids all the $f_i$, as desired.<|endoftext|> TITLE: Continuous self-maps on $\mathbb{Q}$ QUESTION [10 upvotes]: 1) Are $\mathbb{Q}$ and $\mathbb{Q}\setminus\{0\}$ homeomorphic? 2) If $S\subseteq \mathbb{Q}$ is a non-empty subset, is there a continuous surjection $f:\mathbb{Q}\to S$? REPLY [3 votes]: Let me give yet another argument for the second question. Note that you can partition $\mathbb{Q}$ into infinitely many nonempty open sets $U_n$ (for instance, let $\alpha_0<\alpha_1<\dots$ be your favorite unbounded increasing sequence of irrational numbers and let $U_n=(\alpha_{n-1},\alpha_n)\cap\mathbb{Q}$ where $\alpha_{-1}=-\infty$).
The surjection $f:\mathbb{Q}\to\mathbb{N}$ sending $U_n$ to $n$ is continuous. Since any map with domain $\mathbb{N}$ is continuous, $\mathbb{Q}$ can continuously surject onto any space that $\mathbb{N}$ can surject onto, i.e. any countable nonempty space.<|endoftext|> TITLE: Subfields of $\mathbb{C}$ isomorphic to $\mathbb{R}$ QUESTION [5 upvotes]: Is there any subfield of $\mathbb{C}$ which is isomorphic to $\mathbb{R}$ but is not $\mathbb{R}$? REPLY [7 votes]: Let $B$ be a transcendence basis of $\mathbb R$ over $\mathbb Q$, so that $\mathbb R$ is an algebraic extension of $\mathbb Q(B)$. The set $B$ is infinite: let $B'$ be the result of removing one element from it. Since $B$ and $B'$ are in bijection, there is an algebraic extension of $\mathbb Q(B')$ contained in $\mathbb C$ which is isomorphic to $\mathbb R$, and it is not $\mathbb R$ because it does not contain the element we removed.<|endoftext|> TITLE: Is Wolfram Alpha linear independence wrong or am I missing something? QUESTION [12 upvotes]: Maybe it's because you can't ask those questions of Wolfram and I should use a matrix instead, but when inputting linear independence {$t$, $t^2+1$, $t^2+1-t$} it says the three functions are linearly independent, when the third one is clearly a linear combination of the other two. How should I input this to get a valid answer? I want to check whether or not my results are correct. REPLY [9 votes]: Another way to check for linear independence in W|A is to compute the Wronskian, say with the input "wronskian(($t$, $t^2+1$, $t^2+1-t$), $t$)", which results in $0$, so the set of functions is indeed linearly dependent.<|endoftext|> TITLE: Singular values in Kabsch algorithm QUESTION [5 upvotes]: I've implemented the Kabsch algorithm to compute an optimal rotation matrix between two sets of points. And while calculating the SVD $P^TQ =U \Sigma V^*$, I've noticed a pattern. Whenever I calculate the rotation between two sets of $3$ points each, I only end up having $2$ singular values. Whenever I calculate the rotation between two sets of $2$ points each, I only get $1$ singular value. And if I compute the rotation of more than $3$ points, I get all $3$ singular values. I understand that $2$ points are too few to compute a unique rotation. The rotation still has one degree of freedom. So it's not a surprise that the middle matrix $\Sigma$ doesn't have full rank. But with $3$ points, the optimal rotation is unique. So why do I only end up with $2$ singular values? Is there an obvious thing that I'm missing? REPLY [3 votes]: You didn't use the original $P$ and $Q$ when calculating the SVD. You first subtracted the mean point from all the points in each set. This action actually reduced the rank of $P$ and $Q$ from 3 to 2 (assuming $P$ and $Q$ were $3\times 3$ matrices). If 3 points are centered at the origin then any one of them is a linear combination of the other two. Let's say that $p_1=(x_1,y_1,z_1)$ and $p_2=(x_2,y_2,z_2)$; then you already know the coordinates of $p_3$. Since the 3 points are centered at the origin, you must have $p_3=(-x_1-x_2,-y_1-y_2,-z_1-z_2)$. Therefore the rank of the points is 2 (the same with $Q$). Therefore the covariance matrix that you sent to SVD was of rank 2, which led to rank 2 in $\Sigma$ (which led to 2 singular values).
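A quick numerical illustration of this rank drop (a sketch, assuming NumPy; the points and the rotation are arbitrary choices of mine, not from the question):

```python
import numpy as np

# Three arbitrary (non-collinear) points as rows of P, and Q = P rotated.
P = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
theta = 0.7  # arbitrary rotation about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
Q = P @ R.T

# Kabsch: subtract the centroid of each set first.
Pc = P - P.mean(axis=0)   # rows now sum to zero, so rank(Pc) <= 2
Qc = Q - Q.mean(axis=0)

H = Pc.T @ Qc             # 3x3 cross-covariance matrix fed to the SVD
print(np.linalg.svd(H, compute_uv=False))  # two nonzero singular values, one ~0
```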
But you still have one unique rotation matrix (except for some special cases), since 3 pairs of points are enough for calculating the 3 angles of rotation.<|endoftext|> TITLE: Closed form of $\iint_{\mathbb{R}^2} \ln \left(\frac{1}{|x-y|}\right) \frac{4}{(1+|y|^2)^2} dy$ QUESTION [6 upvotes]: While reading a paper I stumbled across the integral $$ \iint_{\mathbb{R}^2} \ln \left(\frac{1}{|x-y|}\right) \frac{4}{(1+|y|^2)^2} d^2y\,, $$ where $x \in \mathbb{R}^2$ is fixed. I think the paper implicitly claims that this is equal to $-2 \pi \ln (1+|x|^2)$. I'm not sure if this is a hard integral at all, but considering the amount of proficient integrators on this site, I thought that I should post this question here. To provide some context, the term $\ln \frac{1}{|x-y|}$ comes from the Laplacian's Green function (it is the leading term near $x$) and the term $\frac{4}{(1+|y|^2)^2}$ is of course the "density" of the stereographic projection metric. REPLY [5 votes]: Please note the result $$\int_0^{2\pi}\!d\phi\, \ln(a^2- 2a b \cos(\phi) +b^2) = 4\pi \ln[\max(|a|,|b|)] \tag{1}$$ which can be proven by different methods (e.g., by taking the derivative with respect to $a$ or $b$). Your integral is given by (using polar coordinates $(r,\phi)$) $$I= \iint_{\mathbb{R}^2}\!d^2y \ln\left(\frac{1}{|x-y|}\right) \frac{4}{(1+|y|^2)^2} = -\int_0^\infty \!r dr\int_{0}^{2\pi} \!d\phi \, \ln(|x|^2 - 2 |x| r\cos \phi + r^2) \frac{2}{(1+r^2)^2} .$$ With the result (1), we obtain $$ I = - 4\pi\int_0^\infty\!dr \frac{2 r}{(1+r^2)^2} \ln[\max(|x|,|r|)] = -4\pi \ln |x| \int_0^{|x|}\!dr\,\frac{2r}{(1+r^2)^2} -4\pi \int_{|x|}^\infty\!dr\,\frac{2 r \ln r}{(1+r^2)^2} \\ = -\frac{4\pi |x|^2 \ln |x|}{1+|x|^2} - 2\pi\left[ \frac{2r^2 \ln r}{1+r^2} -\ln (1+r^2) \right]_{r=|x|}^\infty = -2\pi \ln(1+|x|^2)$$<|endoftext|> TITLE: Is a cylinder a differentiable manifold? QUESTION [6 upvotes]: First of all: I mean a cylinder with a circle at the top and a circle at the bottom; not the volume but its surface. Intuition tells me YES. But... how can charts be defined for the points on the edge? What I thought first is that, since a sphere is a differentiable manifold and I could find a bijection projecting the cylinder onto a sphere placed at its center (of symmetry), I could extend the properties of the sphere to the cylinder. Now some doubts arise, since the metric that I would define on the cylinder would be spherical, while I would expect to get standard Euclidean geometry on the flat surfaces. Anyway, no metric is yet defined on differentiable manifolds, and I can imagine defining the metric that best fits afterwards. So: without a metric, does the differentiable manifold structure allow me to distinguish between a cylinder and a sphere? With a metric, can I be sure whether my manifold is a cylinder or a sphere or anything else? Where can I get a handy summary of the structure information given by topology, manifold, and metric when I only know some of the three? Thank you in advance. REPLY [4 votes]: If you want your cylinder to have an "edge" then intuitively, it should not be a differentiable manifold. If you take a point in a differentiable manifold, then there is a regularly parametrized curve through the point in each direction. (Just take a straight line in a chart and transport it onto the manifold.) This means that you can "pass through that point" in each direction on a smooth curve with non-zero velocity.
If you apply this to a point on the edge, this means that you can get from the side of the cylinder to the top of the cylinder by a smooth curve whose speed is non-vanishing at each point. But obviously this should not be possible at the edge, since the speed would have to jump from having non-zero vertical component to being horizontal. (What I am trying to explain in a non-technical way here is that the cylinder has no well defined tangent plane at points of the edge.) The problem with a formal answer to your question is that in order to get a formal answer, you would first have to specify what it means to "make the cylinder into a manifold". As you said, you can choose a bijection to a sphere (which even is a homeomorphism) and use this to carry the manifold structure over to the cylinder. But in this way, you have "removed" or "flattened out" the edge. Certainly, the cylinder is not a submanifold in $\mathbb R^3$, which can be seen by making the above argument with tangent spaces precise.<|endoftext|> TITLE: Conjecture: $\pi(x)\ge \pi\circ\pi(x)+\pi\circ\pi\circ\pi(x)+\cdots$ QUESTION [16 upvotes]: $x\ge 13\implies\pi(x)\ge \pi\circ\pi(x)+\pi\circ\pi\circ\pi(x)+\cdots$ Can this be proved? REPLY [7 votes]: The inequality is trivially true because, by the prime number theorem, the LHS is asymptotic to $\frac{x}{\log x}$ whereas the RHS is asymptotic to $\frac{x}{\log^2 x}$; so clearly this is a very weak inequality and it has to be true for all $x$ greater than some minimum value, which in this case turns out to be 13. What would make the inequality non-trivial is if the LHS and RHS were of the same asymptotic order. I present a stronger form of the above inequality. Let $s(x) = \pi(\pi(x)) + \pi(\pi(\pi(x))) + \ldots $. Pierre Dusart has proved that $$ \pi(x) \ge \frac{x}{\log x - 1}, \ x > 5393 $$ $$ \pi(x) \le \frac{x}{\log x - 1.1}, \ x > 60,184 $$ From these two inequalities and a little algebraic manipulation we obtain $$ \frac{x}{(\log x - 1)(\log x - 2)} < s(x) < \frac{x}{(\log x - \log\log x)(\log x - \log\log x - 1)} $$ or in the slightly weaker but simpler form $$ \frac{x}{(\log x - 1)^2} < s(x) < \frac{x}{(\log x - \log\log x - 1)^2} $$ for all $x > 60,184$. Clearly the RHS is less than $\pi(x)$, so the weaker original inequality follows.<|endoftext|> TITLE: How many nodes in the smallest $k$-dense graph? QUESTION [30 upvotes]: Let's call a directed graph $k$-dense if:
Each node has exactly two children (outgoing neighbors);
Each two nodes have at least three different children (besides themselves);
Each three nodes have at least four different children (besides themselves);
...
Each $k$ nodes have at least $k+1$ different children (besides themselves).
What is the smallest number of nodes required for a $k$-dense graph? Here are some special cases. For $k=1$, the smallest number of nodes is $3$: 1->[2,3], 2->[3,1], 3->[1,2] For $k=2$, the smallest number of nodes is $7$. To see this we can build the graph greedily based on the following constraint: a node's child must be different from its parent(s) and its sibling(s). Why? Because a node and its parent together must have three children besides themselves. $1$ has two children: call them $2$ and $3$. $2$ must have two children different from its parent ($1$) and sibling ($3$): call them $4$ and $5$. $3$ must have two children different from its parent ($1$) and sibling ($2$). The first can be $4$. Now, $3$ and $2$ together have only two children besides themselves ($4$ and $5$), so $3$ must have another different child - call it $6$.
$4$ must have two children different from its parents ($2$ and $3$) and siblings ($5$ and $6$). The first can be $1$ and the second must be new - call it $7$. $5$ must have two children different from its parent ($2$) and sibling ($4$). The first can be $1$. The second cannot be one of $1$'s children ($2$ and $3$) or its sibling ($7$), so it must be $6$. $6$ must have two children different from its parents ($3$ and $5$) and siblings ($4$ and $1$). These must be $2$ and $7$. $7$ must have two children different from its parents ($4$ and $6$) and siblings ($2$ and $1$). These must be $3$ and $5$. All in all, we have the following $2$-dense graph with $n=7$ nodes:
1->[2,3]
2->[4,5]
3->[4,6]
4->[1,7]
5->[1,6]
6->[2,7]
7->[3,5]
For $k=3$, I used a similar greedy algorithm (with more constraints) to construct the following graph:
1->[2,3]
2->[4,5]
3->[6,7]
4->[6,8]
5->[7,9]
6->[10,11]
7->[12,13]
8->[1,9]
9->[10,14]
10->[2,12]
11->[1,13]
12->[8,15]
13->[4,14]
14->[3,15]
15->[5,11]
I used a computer program to check all possibilities with at most $14$ nodes, and found none, so (assuming my program is correct) $n=15$ is the minimum number required for $k=3$. This hints that the minimum number of nodes in a $k$-dense graph should be $2^{k+1}-1$. Is this true? What is the smallest number of nodes required for general $k$? UPDATE 1: I have just learned about vertex expansion. It seems closely related, but I am still not sure how exactly. REPLY [6 votes]: Theorem. A $k$-dense graph has at least $2^{k+1}-1$ vertices. Proof. Let $L$ be a copy of the vertex set $V,$ with elements $l_v$ for $v\in V.$ Let $M$ be the bipartite graph on vertex set $V\cup L,$ with edges $wl_v$ whenever $w=v$ or $w$ is a child of $v.$ The $k$-dense condition implies that for any $L'\subseteq L$ with $|L'|\leq k,$ the set of neighbors $\Gamma(L')=\{v\in V\mid (\exists l_w\in L').vl_w\in E(M)\}$ has order at least $2|L'|+1.$ This implies that $M$ has no cycle $C$ of length $2k$ or less: taking $L'=C\cap L$ would give $|L'|\leq k$ and $|\Gamma(L')|\leq 2|L'|.$ So $M$ has girth at least $2(k+1),$ and both parts of $M$ have average degree $3.$ Hoory's bound [1] (alternate proof in [2]) gives $|V|\geq 1+2+4+\dots+2^k=2^{k+1}-1.$ $\Box$ Remark. I suspect, but haven't checked, that Hoory's bound is only sharp for regular graphs. The Feit-Higman theorem says that the only regular graphs attaining the bound above have girth $5,6,8,$ or $12,$ which means there could only be $k$-dense graphs of order $2^{k+1}-1$ for $k=2,3,5$ (and $k=1$). I believe you can construct a $k$-dense graph from a $(3,2k+2)$-cage (e.g. the Tutte 12-cage) quite easily by taking it as "$M$" and using a perfect matching to choose which vertices in $L$ should be labelled $l_v.$ [1] The Size of Bipartite Graphs with a Given Girth, Shlomo Hoory, https://www.sciencedirect.com/science/article/pii/S0095895602921234 [2] An entropy based proof of the Moore bound for irregular graphs, S. Ajesh Babu, Jaikumar Radhakrishnan, https://arxiv.org/abs/1011.1058<|endoftext|> TITLE: Why does universal generalization work? (the rule of inference) QUESTION [7 upvotes]: This is an excerpt from Discrete Mathematics. Universal generalization is the rule of inference that states that ∀xP(x) is true, given the premise that P(c) is true for all elements c in the domain. Universal generalization is used when we show that ∀xP(x) is true by taking an arbitrary element c from the domain and showing that P(c) is true.
The element c that we select must be an arbitrary, and not a specific, element of the domain. That is, when we assert from ∀xP(x) the existence of an element c in the domain, we have no control over c and cannot make any other assumptions about c other than it comes from the domain. Universal generalization is used implicitly in many proofs in mathematics and is seldom mentioned explicitly. However, the error of adding unwarranted assumptions about the arbitrary element c when universal generalization is used is all too common in incorrect reasoning. I have two questions: 1. What is meant by the second paragraph? 2. How come, just by taking an arbitrary c in the domain, we can conclude that if P(c) is true then so is ∀xP(x)? (There may exist some counterexamples.) REPLY [9 votes]: 1. What is meant by the second paragraph? That is, when we assert from ∀xP(x) the existence of an element c in the domain, we have no control over c and cannot make any other assumptions about c other than it comes from the domain. Universal generalization is used implicitly in many proofs in mathematics and is seldom mentioned explicitly. However, the error of adding unwarranted assumptions about the arbitrary element c when universal generalization is used is all too common in incorrect reasoning. This paragraph is (in my opinion) a little less than clear, but it is trying to give a description of what "$\forall x \; P(x)$" means. It means that, if you give me any $x$, $P(x)$ is true. I am not allowed to request any particular kind of $x$; there is no assumption that $x$ is blue or $x$ is greater than $3$. $P(x)$ is true for all $x$, not just for some particular $x$s which satisfy some particular assumptions. This results in a common error in proving $\forall x \; P(x)$, but it does not come up so much when using $\forall x \; P(x)$. The error is that one proves that $\forall x \; P(x)$ holds by proving it holds for some particular kind of $x$'s, which is not allowed. To prove $\forall x \; P(x)$ holds, you must show that $P(x)$ is true under no assumptions about $x$. 2. How come just by taking arbitrary c in domain, we can conclude that if P(c) is true then so is ∀xP(x). (There may exist some counterexamples). Universal generalization can be stated as the following: If $P(c)$ must be true, and we have assumed nothing about $c$, then $\forall x P(x)$ is true. To get a grasp on this rule (which is one of the more counter-intuitive ones!) you should try to get a grasp on the part "and we have assumed nothing about $c$". What does that mean? It means we really could have proven $P(x)$ for any value of $x$, not just $x = c$; the same proof would apply! So we have actually shown that $\forall x \; P(x)$. You are right that we could not prove $\forall x \; P(x)$ just by proving $P(c)$ for some particular $c$. However, if we prove it and we have made no assumptions about that particular $c$, then that $c$ is in some sense "general"; without loss of generality we may say that $P(x)$ is true for all $x$.<|endoftext|> TITLE: Point that divides a quadrilateral into four quadrilaterals of equal area QUESTION [9 upvotes]: Consider an irregular quadrilateral $ABCD$. Let $E,F,G,H$ be the midpoints of its edges. It seems that there is a point $K$ such that $$ S_{AHKE} = S_{EKFB} = S_{KHDG} = S_{KGCF} \left(= \frac{1}{4} S_{ABCD}\right) $$ I'm curious whether the point $K$ has any other interesting properties.
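Before the proof, here is a numerical sanity check of the claim (a sketch assuming NumPy and a hypothetical quadrilateral; it uses the characterization derived below, namely that $K$ is the reflection of the diagonal intersection about the vertex centroid):

```python
import numpy as np

def area(poly):
    # Shoelace formula for a simple polygon given as an (n, 2) array.
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Hypothetical irregular quadrilateral ABCD, listed counterclockwise.
A, B, C, D = map(np.array, [(0.0, 0.0), (4.0, 0.0), (5.0, 3.0), (1.0, 4.0)])
E, F, G, H = (A + B) / 2, (B + C) / 2, (C + D) / 2, (D + A) / 2  # edge midpoints

# M = intersection of the diagonals AC and BD: A + s(C-A) = B + t(D-B).
s = np.linalg.solve(np.column_stack([C - A, -(D - B)]), B - A)[0]
M = A + s * (C - A)
P = (A + B + C + D) / 4     # centroid of the vertices
K = 2 * P - M               # reflection of M about P

for quad in [(A, H, K, E), (E, K, F, B), (K, H, D, G), (K, G, C, F)]:
    print(area(np.array(quad)))  # all four agree, each = area(ABCD)/4
```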
Here's the proof that this point does exist: Assume that $A,B,C,D,K$ have coordinates $\mathbf p_1, \mathbf p_2, \mathbf p_3, \mathbf p_4, \mathbf p$, respectively. Then $$ \mathbf S_{AHKE} = \frac{1}{2} (\mathbf p - \mathbf p_1) \times \frac{\mathbf p_2 - \mathbf p_4}{2} = \frac{1}{4} (\mathbf p - \mathbf p_1) \times (\mathbf p_2 - \mathbf p_4)\\ \mathbf S_{EKFB} = \frac{1}{4} (\mathbf p_3 - \mathbf p_1) \times (\mathbf p_2 - \mathbf p) = \frac{1}{4} (\mathbf p - \mathbf p_2) \times (\mathbf p_3 - \mathbf p_1)\\ \mathbf S_{KHDG} = \frac{1}{4} (\mathbf p_3 - \mathbf p_1) \times (\mathbf p - \mathbf p_4) = \frac{1}{4} (\mathbf p_4 - \mathbf p) \times (\mathbf p_3 - \mathbf p_1)\\ \mathbf S_{KGCF} = \frac{1}{4} (\mathbf p_3 - \mathbf p) \times (\mathbf p_2 - \mathbf p_4) $$ It is easy to see that $$ \mathbf S_{AHKE} + \mathbf S_{KGCF} = \frac{1}{2} \mathbf S_{ABCD}\\ \mathbf S_{EKFB} + \mathbf S_{KHDG} = \frac{1}{2} \mathbf S_{ABCD} $$ thus there are exactly two linear equations $$ \mathbf S_{AHKE} - \mathbf S_{KGCF} = 0\\ \mathbf S_{EKFB} - \mathbf S_{KHDG} = 0 $$ to determine the two components of $\mathbf p$. And they are $$ (2\mathbf p - \mathbf p_1 - \mathbf p_3) \times (\mathbf p_2 - \mathbf p_4) = 0\\ (2\mathbf p - \mathbf p_2 - \mathbf p_4) \times (\mathbf p_3 - \mathbf p_1) = 0 $$ which is equivalent to $$ \mathbf p = \frac{\mathbf p_1 + \mathbf p_3}{2} + \lambda(\mathbf p_2 - \mathbf p_4) = \frac{\mathbf p_2 + \mathbf p_4}{2} + \mu(\mathbf p_3 - \mathbf p_1), \quad \lambda,\mu \in \mathbb R $$ The geometrical definition of $K$ should be obvious now: the point $K$ is the reflection of the diagonal intersection point $M = AC \cap BD$ about the vertices' centroid $P$. REPLY [2 votes]: From your definition of $K$ it follows that triangles $EBK$ and $GCK$ have the same area, which is the same as saying that triangles $ABK$ and $DCK$ have the same area. In other words: $K$ divides $ABCD$ into two couples of triangles of equal area. Given two segments $AB$ and $CD$ whose extensions meet at $Q$ (see picture below), the locus of points $X$ such that $XAB$ and $XCD$ have the same area, with $X$ belonging to $\angle AQD$, is the line $QR$, where $R$ is the vertex opposite to $Q$ of the parallelogram with sides parallel and congruent to $AB$ and $CD$. A different but obvious construction is needed if $AB$ and $CD$ are parallel. In the same way one can construct the line $ST$, which is the locus of the points forming with $AD$ and $BC$ triangles of equal area. Point $K$ is then the intersection of $QR$ and $ST$.<|endoftext|> TITLE: True or false: Every Field is a UFD. QUESTION [6 upvotes]: I need to determine whether the above statement is true or false, but unfortunately I haven't found any counterexample yet. So if the statement is false, can anyone give me a counterexample, and if it is true, just give me a hint to prove it. I know that $F[x]$ is a PID if and only if $F$ is a field, and every PID is a UFD. Please help me. Thanks. REPLY [14 votes]: Every field $F$ is a UFD because it is an integral domain and it contains no primes -- everything non-zero is a unit -- so the requirement to be checked on factorization is vacuous.<|endoftext|> TITLE: Calculating the scale factor to resize a polygon to a specific size QUESTION [7 upvotes]: For some graphics programming I am doing, I am trying to scale a polygon so it has a specific area.
The reason I am doing this is because I know the shape of something like the outline of a room, but I need to know what the position of each point would be if it were of a certain size in square meters. For example: As you can see, I am scaling the polygon so that it retains the same shape, but it has a different area. I have searched for solutions to this problem, but most results appear to be unrelated; they are usually either about scaling a polygon by a scale factor or calculating the area of a polygon. I don't know if there is a specific name for this process; I know for vectors you often normalize the vector and then multiply it by the desired length, but searching for how to normalize polygons doesn't seem to get results either. I have considered using a scaling method as described in this post to get the resulting polygon. However, while I know the coordinates of every point $p_i$ in the polygon as well as its area $A$, I do not know how to calculate the scale factor $\alpha$ to scale the points with to get the desired area $A'$. How do I calculate the scale factor $\alpha$ that will scale a polygon to the desired area of $A'$? Is there a different approach that will solve my problem? REPLY [3 votes]: When you scale a polygon (or any subset of the plane whose area you can define) by $\alpha$, its area scales by $\alpha^2$. Proving this rigorously takes some work and requires you to have a rigorous definition of "area", but you can see that this makes sense by considering a rectangle. If you scale a rectangle by $\alpha$, you scale each of its sides by $\alpha$, so since the area is the product of the two side lengths, you scale the area by $\alpha^2$. So in your case, you want $\alpha^2A=A'$, or $\alpha=\sqrt{A'/A}$. REPLY [2 votes]: If the ratio of areas is $A:B$, then the ratio of corresponding lengths must be $$\sqrt{A}:\sqrt{B}$$<|endoftext|> TITLE: Lusternik-Schnirelmann category: nullhomotopic inclusion vs. contractible QUESTION [8 upvotes]: The Lusternik-Schnirelmann category of a topological space $X$ is the smallest integer $k$ (if it exists) such that there is an open cover $\{U_0, \dots, U_k\}$ of $X$ such that each inclusion map $U_i \hookrightarrow X$ is nullhomotopic; we denote this by $LS(X) = k$. If no such integer exists, we write $LS(X) = \infty$. It should be noted that some references (e.g. Wikipedia) use a different normalisation which differs from the above by one. The Lusternik-Schnirelmann category is a homotopy invariant which enjoys several nice properties, making it an interesting invariant to study. I wonder, however, whether there is any difference between using an open cover by sets where the inclusions are nullhomotopic (sets which are sometimes called 'contractible in $X$') and using an open cover by contractible sets. Define the alternative Lusternik-Schnirelmann category of a topological space $X$ to be the smallest integer $k$ (if it exists) such that there is an open cover $\{U_0, \dots, U_k\}$ of $X$ such that each $U_i$ is contractible; we denote this by $LS'(X) = k$. If no such integer exists, we write $LS'(X) = \infty$. Question 1: Is $LS'(X)$ a homotopy invariant? If $LS'(X)$ is a homotopy invariant, we can compare the invariants $LS(X)$ and $LS'(X)$. If $U \subseteq X$ is contractible, the inclusion $U \hookrightarrow X$ is nullhomotopic, so it follows that $LS(X) \leq LS'(X)$. Question 2: Is $LS(X) = LS'(X)$? REPLY [3 votes]: $LS'$ is not a homotopy invariant.
It is straightforward to verify that $LS(X \vee Y) = \text{max}(LS(X),LS(Y))$ for reasonable spaces (we need to demand that in our minimal open cover, an open set containing the basepoint has the basepoint component deformation retract onto the basepoint), and $LS'(X \vee Y) = LS'(X) + LS'(Y)$. (I can provide proofs if desired.) In particular, $LS'(\Sigma T^2) = 1$, since we can cover it by the two cones, but this is homotopy equivalent to $S^3 \vee S^2 \vee S^2$, and $LS'(S^3 \vee S^2 \vee S^2) = 3$. The important point is that for $LS$, the open sets in your cover can be disconnected, while this is not the case for $LS'$. Your definition stands more of a chance of survival if you define $LS'$ in terms of covers by open sets with contractible components (but this is no longer so exciting an alternate definition).<|endoftext|> TITLE: Integral $\int_{0}^{1} \frac{\ln^2 x \ln^2 (1+x)\ln^2(1-x)}{x^2}dx$ QUESTION [8 upvotes]: Out of curiosity, and since I have evaluated lower-degree sums like these while this one is too hard to manipulate, I am eager to know: does this have a closed form? I broke it into the series $\displaystyle \sum_{m,n\ge 1}(-1)^{m+n}\frac{{\rm H}_m{\rm H}_n}{(m+1)(n+1)} \frac{2}{(2n-1)^3} $, but does this help? REPLY [6 votes]: See this paper. One has $\scriptsize \int_{0}^{1} \frac{\ln^2 x \ln^2 (1+x)\ln^2(1-x)}{x^2}dx=8 \zeta(\bar5,1)-\frac{8}{3} \pi ^2 \text{Li}_4\left(\frac{1}{2}\right)+8 \text{Li}_4\left(\frac{1}{2}\right)+24 \text{Li}_5\left(\frac{1}{2}\right)+16 \text{Li}_6\left(\frac{1}{2}\right)+8 \text{Li}_4\left(\frac{1}{2}\right) \log ^2(2)+24 \text{Li}_4\left(\frac{1}{2}\right) \log (2)+16 \text{Li}_5\left(\frac{1}{2}\right) \log (2)+\frac{19 \pi ^2 \zeta (3)}{6}-62 \zeta (5)-\frac{227 \zeta (3)^2}{8}-7 \zeta (3) \log ^3(2)-\frac{35}{2} \zeta (3) \log ^2(2)+\frac{35}{6} \pi ^2 \zeta (3) \log (2)-9 \zeta (3) \log (2)-\frac{217}{2} \zeta (5) \log (2)+\frac{73 \pi ^6}{1260}+\frac{\pi ^4}{90}+\frac{2 \log ^6(2)}{9}+\frac{4 \log ^5(2)}{5}-\frac{5}{18} \pi ^2 \log ^4(2)-\frac{5 \log ^4(2)}{3}+\frac{2}{3} \pi ^2 \log ^3(2)+\frac{13}{36} \pi ^4 \log ^2(2)+\pi ^2 \log ^2(2)+\frac{1}{6} \pi ^4 \log (2)$ where $\zeta(\bar5,1)$ is a Multiple Zeta Value. Bonus: $\scriptsize \int_0^1 \frac{\log ^3(1-x) \log ^2(x) \log ^2(x+1)}{x^2} \, dx=-12 \zeta(\bar5,1)+36 \zeta(\bar5,1,1)-6 \zeta(5,\bar1,1)+54 \log (2) \zeta(\bar5,1)+156 \text{Li}_4\left(\frac{1}{2}\right) \zeta (3)-8 \pi ^2 \text{Li}_4\left(\frac{1}{2}\right)-34 \pi ^2 \text{Li}_5\left(\frac{1}{2}\right)-48 \text{Li}_5\left(\frac{1}{2}\right)+144 \text{Li}_6\left(\frac{1}{2}\right)-96 \text{Li}_7\left(\frac{1}{2}\right)-22 \pi ^2 \text{Li}_4\left(\frac{1}{2}\right) \log (2)-51 \zeta (3)^2-\frac{71 \pi ^4 \zeta (3)}{320}+\frac{9 \pi ^2 \zeta (3)}{4}+\frac{4461 \pi ^2 \zeta (5)}{64}-3 \zeta (5)-\frac{1755 \zeta (7)}{4}-\frac{1}{2} \zeta (3) \log ^4(2)-28 \zeta (3) \log ^3(2)+\frac{95}{8} \pi ^2 \zeta (3) \log ^2(2)-24 \zeta (3) \log ^2(2)-\frac{3999}{16} \zeta (5) \log ^2(2)-\frac{1563}{16} \zeta (3)^2 \log (2)+\frac{31}{2} \pi ^2 \zeta (3) \log (2)-\frac{279}{2} \zeta (5) \log (2)+\frac{\pi ^6}{112}+\frac{2 \log ^7(2)}{105}+\frac{\log ^6(2)}{5}-\frac{7}{10} \pi ^2 \log ^5(2)-\frac{6 \log ^5(2)}{5}+\frac{2}{3} \pi ^2 \log ^4(2)+\frac{121}{180} \pi ^4 \log ^3(2)+\frac{4}{3} \pi ^2 \log ^3(2)+\frac{29}{60} \pi ^4 \log ^2(2)+\frac{1321 \pi ^6 \log (2)}{10080}+\frac{1}{15} \pi ^4 \log (2)$ The latter will be extremely difficult via ordinary means. See the second link of my answer here for algorithmic reference.
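Such expressions are best checked numerically. A minimal sketch, assuming mpmath (it evaluates the integral to high precision, which can then be compared against the decimal value of the closed form above):

```python
from mpmath import mp, mpf, log, quad

mp.dps = 30  # working precision

def integrand(x):
    return (log(x)**2 * log(1 + x)**2 * log(1 - x)**2) / x**2

# Split the interval so the quadrature handles the endpoint singularities
# (log(x)^2 blows up slowly at 0, log(1-x)^2 at 1, but both are integrable).
val = quad(integrand, [0, mpf('0.5'), 1])
print(val)
```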
One may verify their correctness by numerical methods, as in the sketch above.<|endoftext|> TITLE: Find $x$ and $y$ where $20!=\overline{24329020081766xy\dots}$ QUESTION [8 upvotes]: Find $x$ and $y$ where $20!=\overline{24329020081766xy\dots}$ (without using a calculator). My attempt: I first find how many trailing zeroes it has: $$\left\lfloor {\frac{20}{5}} \right\rfloor=4.$$ The problem could be solved easily if we knew that there are only three digits after $y$, for then $y$ would have to be one of the trailing zeroes: $$y=0.$$ Then $\overline {6x}$ is divisible by $4$, which gives us: $$x=4\ \ \ \text{ or }x=8.$$ Then if we apply the divisibility rule for $8$, we get that $\overline {66x}$ must be divisible by $8$, which tells us $x$ can only be $4$. Thus $$x=4.$$ But now the biggest problem is that we don't know how many digits there are after $y$, or more generally, how many digits there are in $20!$. Thanks. REPLY [2 votes]: Some judicious pairing of the numbers from $1$ to $20$ leads to the rough estimate $$\begin{align} 20!&=(20\cdot1)(17\cdot3)(19\cdot2)(12\cdot10)(15\cdot4)(18\cdot5)(16\cdot6)(14\cdot7)(13\cdot8)(11\cdot9)\\ &\approx20\cdot50\cdot40\cdot120\cdot60\cdot100\cdot100\cdot100\cdot100\cdot100\\ &\approx1000\cdot5000\cdot60\cdot10^{10}\\ &=10^3\cdot300000\cdot10^{10}\\ &=3\cdot10^{18} \end{align}$$ Some extra work could probably establish the rigorous upper and lower bounds $$2\cdot10^{18}\lt20!\lt3\cdot10^{18}$$ Knowing that $20!$ ends with $4$ $0$'s and counting that there are $14$ digits before the $xy$, we see that $y$ is the first of the trailing $0$'s. Finally, knowing that $9$ divides $20!$, we have $$(2+4+3)+2+(9+0)+2+(0+0+8+1)+7+6+6+x\equiv5+x\equiv0\mod9$$ which implies $x=4$. Added later: Here is a second approach, using the fact that $20!$ is divisible by both $9$ and $11$. We have $$20!\lt20^{10}\cdot10!\lt2^{10}\cdot10^{10}\cdot(4\cdot10^6)=4096\cdot10^{16}\lt5\cdot10^{19}$$ so $20!$ has at most $20$ digits. Since it ends in $4$ $0$'s, everything to the right of the $xy$ is a $0$. Using the digit sum test for divisibility by $9$, we get $$x\equiv4-y\mod9$$ Using the alternating digit sum test for divisibility by $11$, we get $$x\equiv4+y\mod 11$$ The restriction $0\le x,y\le9$ makes it easy to check that $x=4, y=0$ is the only solution.<|endoftext|> TITLE: Cantor Set Geometric Mean QUESTION [5 upvotes]: Find the geometric mean of all reals of the Cantor set lying in $(0,1]$. I've been trying to solve this problem, but keep messing up the sets I construct for higher iterations. Any help would be appreciated. https://en.wikipedia.org/wiki/Cantor_set I'm not sure how to rigorously define the geometric mean here; I am just going by the definition posted on Wikipedia, which is the $n$-th root of the product of $n$ numbers. https://en.wikipedia.org/wiki/Geometric_mean Where the problem originated: http://www.artofproblemsolving.com/community/c7h1288021_am_gm_over_cantor_set_and_01 REPLY [6 votes]: Using Henning Makholm's series, let $$Y = \sum_{n=1}^\infty \dfrac{2}{3^n} X_n = \dfrac{2}{3} X_1 + \dfrac{1}{3} Z $$ where $Z$ has the same distribution as $Y$ and is independent of $X_1$.
Conditioning on $X_1$, $$ \eqalign{ \mathbb E[\log Y] &= \dfrac{1}{2} \mathbb E\left[\log \left(\frac{Z}{3}\right)\right] + \dfrac{1}{2} \mathbb E\left[\log \left(\frac{2}{3} + \frac{Z}{3}\right)\right]\cr &= - \dfrac{\log(3)}{2} + \dfrac{1}{2} \mathbb E[\log Y] + \dfrac{1}{2} \mathbb E\left[\log \left(\frac{2}{3} + \frac{Y}{3}\right)\right]\cr}$$ so that $$ \mathbb E[\log Y] = - \log(3) + \mathbb E\left[\log \left(\frac{2}{3} + \frac{Y}{3}\right)\right]$$ The expectation on the right can be nicely approximated using a few terms of the series. Using $16$ terms, I find that $$ -1.291076932952935 \le \mathbb E[\log Y] \le -1.291076923469643 $$ Your "geometric mean" is the exponential of this, thus between $.2749744944825810$ and $.2749744970902444$.<|endoftext|> TITLE: Combinatorial identity $\prod_{j=1}^n {n\choose j} =\prod_{k=1}^n {k^k\over k!}$ QUESTION [7 upvotes]: How to prove the combinatorial identity $$\prod_{j=1}^n {n\choose j} =\prod_{k=1}^n {k^k\over k!}\;?$$ I took $\ln$ of the left hand side $$\sum_{j=0}^n \ln(j^{n+1}) - \ln((j!)^2)$$ but am not getting anywhere from here. Any help is welcome. REPLY [2 votes]: Here is another algebraic approach. We obtain \begin{align*} \prod_{k=1}^n\binom{n}{k}&=\prod_{k=1}^n\frac{n(n-1)\cdots(n-k+1)}{k!}\\ &=\left(\prod_{k=1}^n\frac{1}{k!}\right)\left(\prod_{k=1}^n\prod_{j=n-k+1}^nj\right)\tag{1}\\ &=\left(\prod_{k=1}^n\frac{1}{k!}\right)\left(\prod_{k=1}^n\prod_{j=k}^nj\right)\tag{2} \\ &=\left(\prod_{k=1}^n\frac{1}{k!}\right)\left(\prod_{j=1}^n\prod_{k=1}^jj\right)\tag{3} \\ &=\left(\prod_{k=1}^n\frac{1}{k!}\right)\left(\prod_{j=1}^nj^j\right)\tag{4} \\ &=\prod_{k=1}^n\frac{k^k}{k!} \end{align*} Comment:
In (1) we separate the products of numerator and denominator.
In (2) we exchange the order of multiplication in the second double product by letting $k \longrightarrow n-k$.
In (3) we exchange the products of the second double product by observing the index range is $1\leq k\leq j\leq n$.
In (4) we use $\prod_{k=1}^jj=j^j$.<|endoftext|> TITLE: If $28a + 30b + 31c = 365$, then what is the value of $a +b +c$? QUESTION [9 upvotes]: Question: For 3 non-negative integers $a, b, c$: if $28a + 30b + 31c = 365$, what is the value of $a +b +c$? How I approached it: I started immediately breaking it into this form on seeing it: $28(a +b +c) +2b +3c = 365 \quad(1)$; $30(a +b +c) -2a +c = 365 \quad(2)$; $31(a +b +c) -b -3a = 365 \quad(3)$. And then I find out that $365 = 28\cdot 13 + 1\quad(1')$; $365 = 30\cdot 12 + 5\quad(2')$; $365 = 31\cdot 11 + 24\quad(3')$. Now, as we see, (1) and (1') as well as (3) and (3'), or even equations (2) and (2'), do not combine quite congruently, so I meet with a dead end here. My issue: how should I approach such problems where we are given no other equations or data? Basically I am asking what are a few ways to get a solution for this problem. REPLY [2 votes]: Your equation (2) should say $$ 30(a+b+c) - 2a + c = 365. $$ Note that this implies $$ c- 2a \equiv 5 \mod 30. $$ I now claim that, in fact, $c-2a = 5$. This would imply that $30(a+b+c)+5=365$, or that $a+b+c = 12$. Since we know that $c-2a\equiv 5\mod 30$, it suffices to show that $-25< c-2a < 35$ to conclude that $c-2a = 5$. The upper bound is easy, since $$31c\le 365\implies c\le 11\implies c-2a\le 11 < 35$$ as $a$ is nonnegative. To get the lower bound, we note that $$ 28a\le 365\implies a\le 13.$$ Furthermore, if $a=13$, then $28\times 13 + 30b+31c = 365\implies 30b+31c = 1$, which is clearly impossible for nonnegative $b$ and $c$.
Thus, $a\le 12$, and hence $$ c-2a\ge 0 - 2(12) = -24 >-25 $$ since $c$ is also nonnegative. Hence, $-25 < c-2a < 35$, and combined with the fact that $c-2a\equiv 5\mod 30$, we conclude that $c-2a = 5$, as desired. Using the fact that $c-2a = 5$, we can also solve for the possible solutions. Substituting $c = 2a+5$ into the original equation yields \begin{align} 28a+30b+31(2a+5) &= 365 \\ \implies 28a + 30b + 62a + 155 &= 365 \\ \implies 90a + 30b &= 210 \\ \implies 3a + b &= 7. \end{align} We thus see that \begin{align} a = 0 &\implies b = 7, c = 5 \\ a = 1 &\implies b = 4, c = 7 \\ a = 2 &\implies b = 1, c = 9. \end{align} If $a\ge 3$, then $b<0$. Hence, we conclude that $(0,7,5)$, $(1,4,7)$, and $(2,1,9)$ are the only solutions to the problem.<|endoftext|> TITLE: Intuition for curvature in Riemannian geometry QUESTION [34 upvotes]: Studying the various notions of curvature, I have not been able to get the intuition and deeper understanding beyond their definitions. Let me first give the definitions I know. Throughout, I will consider an $m$-dimensional Riemannian manifold $(M,g)$ equipped with the Levi-Civita connection $\nabla$. We have defined the Riemannian curvature tensor to be the collection of trilinear maps $$R_p:T_pM \times T_pM \times T_pM \to T_pM, \ (u,v,w)\mapsto R(X,Y)Z(p), \quad p\in M$$ where $X,Y,Z$ are vector fields defined on some neighbourhood of $p$ with $X(p)=u,Y(p)=v,Z(p)=w$, and $R(X,Y)Z:=\nabla_Y\nabla_XZ-\nabla_X\nabla_YZ+\nabla_{[X,Y]}Z.$ This seems to be a purely algebraic definition; I don't see any geometry here. Second is the sectional curvature. For $p \in M$, let us take a two-dimensional subspace $E \leq T_pM.$ Suppose $(u,v)$ is a basis of $E$. We then define the sectional curvature $K(E)$ of $M$ at $p$ with respect to $E$ as $$K(E)=K(u,v):=\frac{\langle R(u,v)v,u\rangle}{\langle u,u\rangle\langle v,v\rangle-\langle u,v\rangle^2}.$$ Third is the Ricci curvature. Let $p\in M$ and $x\in T_pM$ be a unit vector. Let $(z_1,\cdots,z_m)$ be an orthonormal basis of $T_pM$ s.t. $z_m=x.$ The Ricci curvature of $M$ at $p$ with respect to $x$ is $$Ric_p(x):=\frac{1}{m-1}\sum_{i=1}^{m-1}K(x,z_i)$$ where $K(x,z_i)$ is the sectional curvature defined above. Finally, the scalar curvature of $M$ at $p$ is defined to be $$K(p):=\frac{1}{m}\sum_{i=1}^m Ric_p(z_i).$$ My questions are about understanding these four notions beyond the definitions. How should I think of each of them? Are they related to one another, in the sense that one notion of curvature is stronger than another? I am sorry if these questions are too much for one post. REPLY [19 votes]: Here's a brutally terse (and slightly imprecise, though "morally correct") account that I hope conveys the geometric content. In a Riemannian manifold, a tangent vector may be carried uniquely along a piecewise $C^{1}$ path by parallel transport, a geometric notion. If $X$ and $Y$ are tangent vectors at a point, and if a tangent vector $Z$ is carried around "the small parallelogram with sides $tX$ and $tY$", the parallel-translated vector is, to second order, $Z + t^{2} R(X, Y) Z$. Qualitatively, curvature measures the failure of commutativity of the covariant differentiation operators along $X$ and along $Y$; parallel transporting an orthonormal frame $F$ around a small parallelogram causes $F$ to rotate by an amount linear in each of $X$ and $Y$.
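This holonomy can be seen numerically. Below is a sketch on the unit sphere, where $K \equiv 1$, assuming NumPy: by Gauss-Bonnet, parallel transport around a latitude circle rotates vectors by exactly $K$ times the enclosed cap area, and a crude integration of the transport equation reproduces this.

```python
import numpy as np

# Parallel transport around the latitude circle theta = const on the unit
# sphere (K = 1).  In the moving orthonormal frame (e_theta, e_phi) the
# transport equation reads dv/dphi = -cos(theta) J v, with J the 90-degree
# rotation.  The holonomy angle should equal K times the enclosed cap area,
# i.e. 2*pi*(1 - cos(theta)).
theta = 0.4                      # arbitrary polar angle of the latitude
n_steps = 200000
c = np.cos(theta)
J = np.array([[0.0, -1.0], [1.0, 0.0]])
v = np.array([1.0, 0.0])         # start along e_theta
dphi = 2 * np.pi / n_steps
for _ in range(n_steps):
    v = v - dphi * c * (J @ v)   # forward Euler step of the transport ODE
    v /= np.linalg.norm(v)       # re-normalize to control drift
print(np.arctan2(v[1], v[0]) % (2 * np.pi))  # measured holonomy angle
print(2 * np.pi * (1 - np.cos(theta)))       # K * enclosed cap area
```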
If an (ordered) orthonormal pair $(e_{1}, e_{2})$ is parallel transported around a small square with sides $te_{1}$ and $te_{2}$, it rotates through an angle (approximately) equal to $t^{2}K(e_{1}, e_{2})$. Alternatively, if you're happy with Gaussian curvature as an intrinsic quantity, the sectional curvature of a $2$-plane $E$ at $p$ is the Gaussian curvature of the image of $E$ under the exponential map at $p$. The Ricci curvature of a unit vector $u$ at $p$ is the average of the sectional curvatures of all $2$-planes at $p$ containing $u$. The scalar curvature at $p$ is the average of all sectional curvatures of $2$-planes at $p$. (As in the formulas you give, the averages in the Ricci and scalar curvatures are usually defined and computed as finite averages over suitable pairs of vectors from an orthonormal basis, but in fact the averages may be taken continuously, as described above.)<|endoftext|> TITLE: Moment generating function and probability QUESTION [5 upvotes]: Problem: Let $X$ and $Y$ be identically distributed independent random variables such that the moment generating function of $X +Y$ is $$M(t) = 0.09 e^{−2t} + 0.24 e^{−t} + 0.34 + 0.24 e^t + 0.09 e^{2t}$$ for $−\infty < t < \infty$. Calculate $P[X \le 0]$. Answer: Because $X$ and $Y$ are independent and identically distributed, the moment generating function of $X+ Y$ equals $K^2(t)$, where $K(t)$ is the moment generating function common to $X$ and $Y$. Thus, $$K(t) = 0.30e^{-t} + 0.40 + 0.30e^t$$ This is the moment generating function of a discrete random variable that assumes the values $-1$, $0$, and $1$ with respective probabilities $0.30$, $0.40$, and $0.30$. The value we seek is thus $0.70$. My question is how to factor the moment generating function of $X+Y$ into $0.30e^{-t} + 0.40 + 0.30e^{t}$; is there a general formula to use? I'm also sorry for not using MathJax; I'm really having trouble with it. REPLY [6 votes]: The factorization may be easier to detect if we let $z = e^t$ and write $$\begin{align*} M_{X+Y}(t) &= \frac{9}{100} z^{-2} + \frac{24}{100} z^{-1} + \frac{34}{100} + \frac{24}{100} z + \frac{9}{100} z^2 \\ &= \frac{1}{(10z)^2} \left( 9 + 24 z + 34 z^2 + 24 z^3 + 9 z^4 \right) \\ &= \frac{1}{(10z)^2} (3 + 4z + 3z^2)^2, \end{align*}$$ keeping in mind that you are looking for a perfect square. Alternatively, you can posit a form of the common MGF to $X$ and $Y$, observing from the powers of $e^t$ that it needs to be $$M_X(t) = M_Y(t) = az^{-1} + b + cz$$ for suitable constants $a, b, c$, then equating coefficients of $M_X (t)^2$.<|endoftext|> TITLE: Are there topological properties that are finitely productive but not countably productive? QUESTION [6 upvotes]: Let $X_i, i \in I$ be a set of topological spaces with property $P$. We would like to know whether $P$ holds for $\prod_{i \in I} X_i$ for $I$ of different cardinalities. I have looked over a list of topological properties, and it seems only "finiteness" is finitely productive (finite products of finite sets are finite) but not countably productive. In the literature, there seems to be a tendency to divide topological properties into finitely productive, countably productive, and arbitrarily productive. For example, in Munkres, compactness is first proved to be finitely productive, then immediately we jump to the Tychonoff theorem. But I have rarely seen instances where a property $P$ holds under finite products but fails to hold under countable products.
On the other hand, a lot of properties seem to fail to cross the line between countably productive and arbitrarily productive. These include separability, first countable, second countable, Suslin, and metrizability. Are there any interesting or well known topological properties that hold under finite products but not countable products? REPLY [5 votes]: You could argue that the very definition of the product topology gives a kind of answer to your question. Let $(X_i)_{i \in I}$ be a family of topological spaces and let $U_i \subset X_i$ be an open set for each $i \in I$. If the family is finite, then $\prod_{i \in I} U_i$ is an open subset of $\prod_{i \in I} X_i$ in the product topology. This is not (typically) the case when the family is infinite. A finite product of discrete spaces is discrete, but this isn't so for infinite products. A finite product of manifolds is a manifold, but not so for infinite products (according to the most conventional definition of a manifold). See also the answer of Nate Eldredge here. So, for example, $\mathbb{R}^n$ can be given a compatible Banach space norm for each finite $n$, but the product topology on $\mathbb{R}^\mathbb{N}$ is not induced by any complete norm. (Added: actually this is only really true if we think of $\mathbb{R}^\mathbb{N}$ as having a preferred vector space structure.) Here's a pretty good one. Let's restrict attention to Hausdorff spaces so that the right definition of "locally compact" is not a point of contention. The product of two (hence of finitely many) locally compact spaces is locally compact. However, an infinite product of locally compact spaces is locally compact if and only if all but finitely many terms of the product are compact. Thus, for example, $\mathbb{R}^n$ is locally compact for each finite $n$, but $\mathbb{R}^\mathbb{N}$ is not locally compact.<|endoftext|> TITLE: Characterizing differences of squares in $\mathbb Z[x]$ QUESTION [5 upvotes]: It is well known that a natural number can be written as a difference of two squares iff it is not of the form $4k+2$. I'm wondering if there is any characterization of which polynomials $f(x)\in\mathbb Z[x]$ with integer coefficients can be written as a difference $g(x)^2-h(x)^2$, for $g,h\in \mathbb Z[x]$. I tried googling this, but any search involving "difference of squares" and "polynomial" invariably returns a list of pages explaining the simple algebraic rule $a^2-b^2=(a+b)(a-b)$. REPLY [4 votes]: I don't know if this is the type of characterization you are looking for: Lemma. Let $f(x) \in \mathbb Z[x]$ be a polynomial. Then $f$ is a difference of two squares if and only if $f$ can be factored as $$f(x)=u(x)v(x) \mbox{ with } \\ u(x), v(x) \in \mathbb Z[x] \mbox{ and } u(x)-v(x) \in 2 \mathbb Z[x]$$ The proof is trivial: set $u=g-h$ and $v=g+h$. So the problem reduces to the factorization of $f$. If $f$ is primitive and irreducible, for example, then it can be written as a difference of squares if and only if $f+1 \in 2\mathbb Z[x]$. Also if $f$ is non-primitive, and we write $f=ag$ with $a \in \mathbb Z, g \in \mathbb Z[x]$, $g$ primitive, the answer is trivial if $a$ is even: if $a=4k$ then the answer is yes, while if $a=4k+2$ the answer is no.
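For concreteness, a small sympy sketch of the lemma's recipe on a hypothetical factorization (the helper name and the example factors are mine, not from the answer):

```python
import sympy as sp

x = sp.symbols('x')

def as_difference_of_squares(u, v):
    """Given f = u*v with u - v in 2Z[x], return (g, h) with f = g**2 - h**2."""
    # The lemma's condition: every coefficient of u - v must be even.
    assert all(c % 2 == 0 for c in sp.Poly(u - v, x).all_coeffs())
    g = sp.expand((u + v) / 2)
    h = sp.expand((u - v) / 2)   # then g**2 - h**2 = u*v
    return g, h

u = x**2 + 3   # hypothetical factors; u - v = 2 lies in 2Z[x]
v = x**2 + 1
g, h = as_difference_of_squares(u, v)
print(sp.expand(u * v))        # x**4 + 4*x**2 + 3
print(sp.expand(g**2 - h**2))  # the same polynomial
```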
But if $f$ has many factors, there might be many cases to check.<|endoftext|> TITLE: Example of a very simple math statement in old literature which is (verbatim) a pain to understand QUESTION [99 upvotes]: As you know, before symbolic notations were introduced and adopted by the mathematical community, even simple statements were written in a very complicated manner because the writer had (nearly) only words to describe an equation or a result. What is an example of a very simple math statement in old literature which is, verbatim, a pain to understand? REPLY [2 votes]: George Abram Miller (31 July 1863 – 10 February 1951) was an early group theorist whose many papers and texts were considered important by his contemporaries, but are now mostly considered only of historical importance. Much of his work consisted of classifying groups satisfying some condition, such as having a small number of prime divisors or small order or having a small permutation representation, or enumerating the possible finite groups which satisfy given conditions such as: the prime factors which divide the order, the orders of two generating permutations and their product; the types of subgroups; or the degree of a representation as a permutation group. Several papers investigate groups generated by two elements satisfying given conditions. For example he considered groups generated by two elements of order three whose product is of order four or three or six. He also considered permutation groups of small degree, groups having a small number of conjugacy classes, multiply transitive groups, and characteristic subgroups of finite groups. He found the list of all possible groups of order 1909 to 1919 inclusive. His Collected Works were published in five very large volumes. His style, however, was very much aimed at using words rather than symbols and formulas. No wonder that his collected works are so voluminous. He also used old group-theory terminology (an element of a group is called an operator, a normal subgroup is called invariant, etc.). See for instance this:<|endoftext|> TITLE: What is the field of fractions of $\mathbb{Q}[x,y]/(x^2+y^2)$? QUESTION [14 upvotes]: What is the field of fractions of $\mathbb{Q}[x,y]/(x^2+y^2)$? Remarks: (1) I think it is clear that $\mathbb{Q}[x,y]/(x^2+y^2)$ is an integral domain; indeed, $x^2+y^2 \in \mathbb{Q}[x,y]$ is irreducible (by considerations of degrees) hence prime. (2) The field of fractions of $\mathbb{Q}[x,y]/(x^2+y^2-1)$ is isomorphic to $\mathbb{Q}(t)$, see this question and also this question. REPLY [16 votes]: First do the substitution $x'=x/y$. Then the equation $x^2+y^2=0$ transforms to $x'^2+1=0$. Hence we are looking at the (isomorphic) field $\text{Frac}(\mathbb Q[x',y]/(x'^2+1))$. This is just $\mathbb Q(i)(y)$.<|endoftext|> TITLE: $-1$ raised to the $\pi$ QUESTION [6 upvotes]: We know that the following conditions are true: $(-1)^{2n}$ is $1$ where $n$ is an integer; $(-1)^{2n+1}$ is $-1$ where $n$ is an integer. We can extend this reasoning to rational numbers. If we let a number be written in the form $a+\frac{b}{c}$ such that $\gcd(b,c) = 1$ and $a, b, c$ are integers, then to evaluate $(-1)^{a+\frac{b}{c}}$, we can separate $(-1)^{a+\frac{b}{c}}$ into $(-1)^{a}\times\frac{(-1)^b}{(-1)^c}$. The terms $(-1)^{a}$, $(-1)^b$, $(-1)^c$ can be reduced to either of the two cases above, and we solve for $-1$ or $1$. Now what is $(-1)^\pi$? It isn't rational, so there isn't a perfect ratio that can be handled with the method above.
I'm suspecting that roots of unity come into play here, or De Moivre's Theorem, but don't they reduce to whether or not you can solve $r^n$, where $r$ is the radius (modulus) of the complex number and $n$ is the exponent? NOTE: I'm still in high school, so I am looking for an answer that does not involve any advanced math, but I will gladly accept any response to the question for curiosity's sake. Thank you :) REPLY [5 votes]: For any complex numbers $z$ and $a$, the term $z^a$ is defined as $z^a=e^{a\log(z)}$. Then, with $z=-1$ and $a=\pi$ we have $$\begin{align} (-1)^\pi&=e^{\pi\log(-1)} \tag 1\\\\ &=e^{\pi(i\pi +i2n\pi)} \tag 2\\\\ &=e^{i\pi^2 (2n+1)}\\\\ &=\cos(\pi^2 (2n+1))+i\sin(\pi^2 (2n+1)) \end{align}$$ for all integer values of $n$. Note that in going from $(1)$ to $(2)$ we recognized the multivalued nature of the complex logarithm, $\log(z)=\log(|z|)+i(\text{Arg}(z)+2n\pi)$. Here, $|z|=1$ and $\text{Arg}(z)=\pi$.<|endoftext|> TITLE: How to solve this determinant QUESTION [6 upvotes]: Question Statement:- Show that \begin{align*} \begin{vmatrix} (a+b)^2 & ca & bc \\ ca & (b+c)^2 & ab \\ bc & ab & (c+a)^2 \\ \end{vmatrix} =2abc(a+b+c)^3 \end{align*} My Attempt:- $$\begin{aligned} &\begin{vmatrix} (a+b)^2 & ca & bc \\ ca & (b+c)^2 & ab \\ bc & ab & (c+a)^2 \end{vmatrix}\\ =&\begin{vmatrix} a^2+b^2+2ab & ca & bc \\ ca & b^2+c^2+2bc & ab \\ bc & ab & c^2+a^2+2ac \end{vmatrix}\\ =&\dfrac{1}{abc}\begin{vmatrix} ca^2+cb^2+2abc & ca^2 & b^2c \\ ac^2 & ab^2+ac^2+2abc & ab^2 \\ bc^2 & a^2b & bc^2+a^2b+2abc \end{vmatrix}\\ &\qquad (C_1\rightarrow cC_1,\ C_2\rightarrow aC_2,\ C_3\rightarrow bC_3)\\ =&\dfrac{2}{abc}\times\begin{vmatrix} ca^2+cb^2+abc & ca^2 & b^2c \\ ab^2+ac^2+abc & ab^2+ac^2+2abc & ab^2 \\ bc^2+a^2b+abc & a^2b & bc^2+a^2b+2abc \end{vmatrix}\\ &\qquad (C_1\rightarrow C_1+C_2+C_3)\\ =&\dfrac{2abc}{abc}\left(\begin{vmatrix} a^2+b^2 & a^2 & b^2 \\ b^2+c^2 & b^2+c^2+2bc & b^2 \\ c^2+a^2 & a^2 & c^2+a^2+2ac \end{vmatrix}+ \begin{vmatrix} 1 & ca^2 & b^2c \\ 1 & ab^2+ac^2+2abc & ab^2 \\ 1 & a^2b & bc^2+a^2b+2abc \end{vmatrix}\right) \end{aligned}$$ The second determinant in the last step can be simplified ($R_2\rightarrow R_2-R_1$, $R_3\rightarrow R_3-R_1$) to \begin{vmatrix} 1 & ca^2 & b^2c \\ 0 & ab^2+ac^2+2abc-ca^2 & ab^2-b^2c \\ 0 & a^2b-ca^2 & bc^2+a^2b+2abc-b^2c \end{vmatrix} I couldn't proceed further with this, so your help will be appreciated, and if any other simpler way is possible please do post it too. REPLY [10 votes]: One can use the factor theorem to get a simpler solution. If we put $a=0$, we get \begin{align*} \begin{vmatrix} (a+b)^2 & ca & bc \\ ca & (b+c)^2 & ab \\ bc & ab & (c+a)^2 \\ \end{vmatrix} =\begin{vmatrix} b^2 & 0 & bc \\ 0 & (b+c)^2 & 0 \\ bc & 0 & c^2 \\ \end{vmatrix} = 0 \end{align*} Hence $a$ is a factor. Similarly $b, c$ are factors. Again, put $a+b+c=0$; we get \begin{align*} \begin{vmatrix} c^2 & ca & bc \\ ca & a^2 & ab \\ bc & ab & b^2 \\ \end{vmatrix} =abc\begin{vmatrix} c & a & b \\ c & a & b \\ c& a & b \\ \end{vmatrix} \end{align*} Since all rows are identical, $(a+b+c)^2$ is a factor. The determinant is a polynomial of degree 6, and hence the remaining factor is linear; since it is symmetric, the factor must be $k(a+b+c)$. Putting $a=b=c=1$, we obtain \begin{align*} 27k = \begin{vmatrix} 4 & 1 & 1 \\ 1 & 4 & 1 \\ 1 & 1 & 4 \\ \end{vmatrix} =54 \end{align*} and $k=2$.
<|endoftext|> TITLE: How to solve this determinant equation in a simpler way QUESTION [12 upvotes]: Question Statement: Solve the following equation $$\begin{vmatrix} x & 2 & 3 \\ 4 & x & 1 \\ x & 2 & 5 \\ \end{vmatrix}=0$$ My Solution: $$\begin{vmatrix} x & 2 & 3 \\ 4 & x & 1 \\ x & 2 & 5 \\ \end{vmatrix}= \begin{vmatrix} x+5 & 2 & 3 \\ x+5 & x & 1 \\ x+7 & 2 & 5 \\ \end{vmatrix} \tag{$C_1\rightarrow C_1+C_2+C_3$}$$ $$=\begin{vmatrix} 0 & 2 & 3 \\ 0 & x & 1 \\ 2 & 2 & 5 \\ \end{vmatrix}+ (x+5)\begin{vmatrix} 1 & 2 & 3 \\ 1 & x & 1 \\ 1 & 2 & 5 \\ \end{vmatrix}\tag{1}$$ On expanding the first determinant in the last step above we get $2(2-3x)$. On simplifying the second determinant we get $$(x+5)\begin{vmatrix} 1 & 2 & 3 \\ 1 & x & 1 \\ 1 & 2 & 5 \\ \end{vmatrix}=(x+5)\begin{vmatrix} 1 & 2 & 3 \\ 0 & x-2 & -2 \\ 0 & 0 & 2 \\ \end{vmatrix} \quad (R_2\rightarrow R_2-R_1,\ R_3\rightarrow R_3-R_1)$$ $=2(x+5)(x-2)$. Substituting the values obtained above in $(1)$, we get $$=\begin{vmatrix} 0 & 2 & 3 \\ 0 & x & 1 \\ 2 & 2 & 5 \\ \end{vmatrix}+ (x+5)\begin{vmatrix} 1 & 2 & 3 \\ 1 & x & 1 \\ 1 & 2 & 5 \\ \end{vmatrix}=2(2-3x)+2(x+5)(x-2)=2(2-3x+x^2+3x-10)=2(x^2-8)$$ Now, as $\begin{vmatrix} x & 2 & 3 \\ 4 & x & 1 \\ x & 2 & 5 \\ \end{vmatrix}=0$, $\therefore 2(x^2-8)=0\implies x=\pm2\sqrt2$. As you can see there was a lot of work in my solution, so I would appreciate techniques to solve it faster, ideally with less pen and more thinking. REPLY [11 votes]: Just expand the determinant! We want $$ 5x^2+2x+24-2x-40-3x^2=0 $$ which simplifies to $2x^2-16=0$. (In general, finding determinants via row/column relations is faster when a matrix is large. But $3 \times 3$ matrices aren't that large yet...)<|endoftext|> TITLE: Word for category with unique isomorphisms QUESTION [6 upvotes]: Is there a standard word to express the fact that a category has at most one isomorphism between any two objects? REPLY [6 votes]: This is equivalent to being gaunt, a common name for a category with no nontrivial automorphisms. A non-gaunt category certainly has a pair of objects admitting two isomorphisms between them, while if $f,g$ are distinct parallel isomorphisms in any category then $g^{-1}f$ is a non-identity automorphism, by uniqueness of inverses.<|endoftext|> TITLE: Variance of a stochastic process QUESTION [5 upvotes]: Problem Consider the classical Brownian motion $W_t$, for which $\operatorname{var}\left(W_t-W_u\right)=\left|t-u\right|$. Consider the process defined by $$X(t)=e^{-t/\tau}\left(X_0+\sqrt{Q_B}\int_0^t e^{u/\tau}dW_u\right)$$ with $Q_B$ a real positive scaling constant. I want to study $$Y(t,L)=\frac{X(t+L)-X(t)}{L}={\left(e^{-L/\tau}-1\right)X(t)+e^{-L/\tau}\sqrt{Q_B}\int_{t}^{t+L} e^{(u-t)/\tau}dW_u \over L}$$ ideally for any $t$, and at least for $t\to\infty$. (I'll refer to $\lim_{t\to\infty}Y(t,L)$ as $Y_\infty(L)$ from now on.) Since $Y(t,L)$ is Gaussian, with mean ${\left(e^{-L/\tau}-1\right)e^{-t/\tau} \over L}X_0$, we just have to compute its variance.
First approach: Considering that the two terms of the sum are independent (do you agree that $\int_0^t e^{u/\tau}dW_u$ and $\int_{t}^{t+L} e^{(u-t)/\tau}dW_u$ are independent?), I simply sum the variances and get $$var\left(Y(t,L)\right)={\left(e^{-L/\tau}-1\right)^2var(X(t))+\left(1-e^{-2L/\tau}\right)\frac{Q_B\tau}{2} \over L^2}$$ Computing $var(X(t))$: I can either calculate directly $$var(X(t))=Q_Be^{-2t/\tau}\int_0^t e^{2u/\tau}du=\frac{Q_B\tau}{2}\left(1-e^{-2t/\tau}\right)$$ or, at least for $t\to \infty$, use the Laplace transform $\frac{1}{1+\tau s}$ and compute $$Q_B\int_{-\infty}^{+\infty}\left|\frac{1}{1+2i\pi\tau f}\right|^2df=Q_B\int_{-\infty}^{+\infty}\frac{1}{1+4\pi^2\tau^2 f^2}df=\frac{Q_B}{2\tau}$$ First remark: when $t\to\infty$, the first converges to $\frac{Q_B\tau}{2}$, so the two methods don't match. I guess the second one is incorrect, but why? Considering $var(X(t))=\frac{Q_B\tau}{2}\left(1-e^{-2t/\tau}\right)$ (first result), we would get $$var\left(Y(t,L)\right)=\left[\left(e^{-L/\tau}-1\right)^2\left(1-e^{-2t/\tau}\right)+\left(1-e^{-2L/\tau}\right)\right]\frac{Q_B\tau}{2L^2}$$ and $$var\left(Y_\infty(L)\right)=\frac{Q_B\tau}{L^2}\left(1-e^{-L/\tau}\right)$$ First, is this right? Second approach: Another way to compute $var\left(Y_{\infty}(L)\right)$ would be to consider the transfer function $$\frac{Y}{X}=\frac{e^{sL}-1}{L}$$ and evaluate $$var\left(Y_{\infty}(L)\right)=Q_B\int_{-\infty}^{+\infty}\left|\frac{e^{2i \pi Lf}-1}{L(1+2i\pi\tau f)}\right|^2df=Q_B\int_{-\infty}^{+\infty}\frac{4\sin^2(\pi Lf)}{L^2(1+4\pi^2\tau^2 f^2)}df$$ Let $x=\pi L f$; we get $$var\left(Y_{\infty}(L)\right)=\frac{4Q_B}{\pi L}\int_{-\infty}^{+\infty}\frac{\sin^2 x}{L^2+4\tau^2 x^2}dx$$ Since Wolfram gives us $$\int_{-\infty}^{+\infty}\frac{\sin^2 x}{L^2+4\tau^2 x^2}dx=\frac{\pi}{4\tau L}(1-e^{-L/\tau})$$ we conclude $$var\left(Y_{\infty}(L)\right)=\frac{Q_B}{\tau L^2}(1-e^{-L/\tau})$$ Now, this differs from what we got in the first approach. Which one is right (if at least one is right...)? And why is the other (or both) wrong? REPLY [2 votes]: OK, this was me being stupid and making a mistake in solving the system I was studying. The integral expression of $X(t)$ was missing a $\frac{1}{\tau}$, and should have been like this: $$X(t)=e^{-t/\tau}\left(X_0+\frac{\sqrt{Q_B}}{\tau}\int_0^t e^{u/\tau}dW_u\right)$$ which in turn gives $$Y(t,L)=\frac{X(t+L)-X(t)}{L}={\left(e^{-L/\tau}-1\right)X(t)+e^{-L/\tau}\frac{\sqrt{Q_B}}{\tau}\int_{t}^{t+L} e^{(u-t)/\tau}dW_u \over L}$$ and Itô's isometry now returns $$var(X(t))=\frac{Q_B}{2\tau}\left(1-e^{-2t/\tau}\right)$$ and there is no discrepancy anymore with the integration of the PSD I was doing in parallel. Thus we also get $$var\left(Y(t,L)\right)=\left[\left(e^{-L/\tau}-1\right)^2\left(1-e^{-2t/\tau}\right)+\left(1-e^{-2L/\tau}\right)\right]\frac{Q_B}{2\tau L^2}$$ and finally, we do get $$var\left(Y_{\infty}(L)\right)=\frac{Q_B}{\tau L^2}(1-e^{-L/\tau})$$ which also matches the result obtained through direct integration of the PSD, and all's right with the world again. Sorry everyone for the trouble!
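To double-check the corrected result, one can also simulate the corresponding Ornstein–Uhlenbeck dynamics $dX = -\frac{X}{\tau}\,dt + \frac{\sqrt{Q_B}}{\tau}\,dW$ and compare the sample variance of $Y$ with the formula above; a rough numpy sketch (the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
tau, Q_B, L, dt = 1.0, 0.5, 0.7, 1e-3
n_paths = 20000

# Euler-Maruyama for dX = -X/tau dt + (sqrt(Q_B)/tau) dW:
# integrate to t = 10*tau (close to stationarity), then L further.
n_burn, n_L = int(10 * tau / dt), int(L / dt)
X = np.zeros(n_paths)
for _ in range(n_burn):
    X += -X / tau * dt + np.sqrt(Q_B) / tau * np.sqrt(dt) * rng.standard_normal(n_paths)
X_t = X.copy()
for _ in range(n_L):
    X += -X / tau * dt + np.sqrt(Q_B) / tau * np.sqrt(dt) * rng.standard_normal(n_paths)

Y = (X - X_t) / L
print("sample variance:", Y.var())
print("predicted      :", Q_B / (tau * L**2) * (1 - np.exp(-L / tau)))
```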
<|endoftext|> TITLE: Multiplying sparse matrices QUESTION [5 upvotes]: If I have two sparse matrices, $A$ and $B$: let's say $A$ has $k$ non-zero entries and $B$ has $j$ non-zero entries. Let's assume all I know is the number of non-zero entries each matrix has; I don't know where they are or what their values are. The dimensions of the matrices are known and are compatible, so it cannot be assumed that they are always square matrices (although they could be in some cases). So, let's assume $A$ is an $M\times N$ matrix and $B$ is an $N\times P$ matrix, making $AB$ an $M\times P$ matrix. If I multiply these two sparse matrices together ($AB$), what is the maximum number of non-zero values the product of the matrices could have? I have a feeling it must just be $k+j$ but I can't define it mathematically. I'm not after a completely formal proof. REPLY [3 votes]: It cannot be $k+j$, as can be seen in the following counterexample: we take $k = 3$ and $j = 2$ and create the following two matrices, with dimensions that agree for the multiplication as you stated: $A=\left( \begin{array}{cccc} 1 & 0 & \cdots & 0 \\ 1 & 0 & \cdots & 0\\ 1 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots &\ddots &\vdots\\ 0 & 0 & 0 & 0\end{array} \right),B=\left( \begin{array}{ccccc} 1 & 1 & 0 &\cdots & 0 \\ 0 & 0 & 0 &\cdots & 0\\ 0 & 0 & 0 &\cdots & 0 \\ \vdots & \vdots& \vdots &\ddots &\vdots\\ 0 & 0 & 0 & 0 & 0\end{array} \right)$ Multiplying them gives the following matrix: $AB=\left( \begin{array}{ccccc} 1 & 1 & 0 &\cdots & 0 \\ 1 & 1 & 0 &\cdots & 0\\ 1 & 1 & 0 &\cdots & 0\\ 0 & 0 & 0 &\cdots & 0 \\ \vdots & \vdots& \vdots &\ddots &\vdots\\ 0 & 0 & 0 & 0 & 0\end{array} \right) $ As can be seen, the result is a matrix with six nonzero entries, which is more than $k+j=5$. This gives the intuition that the maximum number of nonzero entries is $k\cdot j$.<|endoftext|> TITLE: diameter of Riemannian manifolds QUESTION [6 upvotes]: Let $M$ be a compact Riemannian manifold of dimension $n$. If $M\subset N$ is an embedding in another Riemannian manifold, we can define the diameter $d(M,N)$ of $M$ in $N$ as the longest distance in $N$ of two points in $M$. For example if $S^2$ is a sphere of radius $R$ in ${\mathbb{R}}^3$, we have $$d(S^2,S^2)=\pi R , \qquad d(S^2,{\mathbb{R}}^3)=2R.$$ If $d>0$ is an integer we can then define $$e(M,d)=\inf\big(d(M,N);\quad M\subset N \quad \text{and} \quad \dim(N)=\dim(M)+d\big)$$ Is there a way (a formula) to compute $e(M,d)$? Is it true that $$\lim_{d\to\infty}e(M,d) \ = \ 0 \ ?$$ REPLY [2 votes]: This is just an expanded version of Einar Rødland's comment. Note that it suffices to show that $e(M,1) = 0$. Let $N = M \times \mathbb R$ and identify $M\subseteq N$ with $M \times \{0\}$. For each $\epsilon >0$, let $g = g_\epsilon$ be a metric on $N$ given by $$g_\epsilon = f^2_\epsilon (t) ( g_M + dt^2),$$ where $ f_\epsilon$ is a positive function so that $$f_\epsilon (t) = \begin{cases} 1 & \text{if } t\in (-\epsilon/8,\epsilon/8), \\ \epsilon/(2d(M)) &\text{if } t= \epsilon/4,\\ \le 1 &\text{otherwise,}\end{cases}$$ where $d(M)$ denotes the diameter of $M$. Note that $M \subset N$ is an isometric embedding. Let $x,y\in M$ and $\gamma$ be a minimal geodesic in $M$ joining $x, y$. Then the (piecewise smooth) curve $$ (x,0) \to (x,\epsilon/4) \overset{(\gamma(\cdot), \epsilon/4)}{\to } (y,\epsilon/4) \to (y,0)$$ has length less than $\epsilon/4 + \big(\epsilon/(2d(M))\big) \times d_M(x,y) + \epsilon /4 \le \epsilon$. Since $x, y$ are arbitrary, $e(M,1) \le \epsilon$. Thus $e(M,1) = 0$ as $\epsilon $ is arbitrary.<|endoftext|> TITLE: Why does $\mathsf{HOD}^{V[G]} \subseteq \mathsf{HOD}^V$ hold for weakly homogeneous forcings? QUESTION [5 upvotes]: A forcing $\mathbb{P}$ is said to be weakly homogeneous if for any $p,q \in \mathbb{P}$ there is an automorphism $\pi$ of $\mathbb{P}$ such that $\pi(p)$ and $q$ are compatible.
An important fact about weakly homogeneous forcings $\mathbb{P}$ is that for any formula $\varphi(v_0,\dots,v_{n-1})$ in the forcing language and any $x_0,\dots,x_{n-1} \in V$, every condition of the forcing $\mathbb{P}$ decides $\varphi(\check x_0,\dots, \check x_{n-1})$ in the same way. I want to prove that for any generic filter $G$ of a weakly homogeneous forcing $\mathbb{P}$, we have $\mathsf{HOD}^{V[G]} \subseteq \mathsf{HOD}^V$. My idea is to use the fact that forcing preserves the ordinals, together with the above-mentioned property of weakly homogeneous forcings, to show that no new ordinal definable sets are added to $\mathsf{OD}$. Suppose that there is a $\mathbb{P}$-name $\dot x$, a formula $\varphi$ in the forcing language, ordinals $\alpha_0,\dots,\alpha_{n-1}$ and $p \in \mathbb{P}$ such that $p \Vdash \forall y \ y \in \dot x \leftrightarrow \varphi(y,\check \alpha_0,\dots,\check \alpha_{n-1})$. Then, by the above fact, $\mathbb{1_P} \Vdash \forall y \ y \in \dot x \leftrightarrow \varphi(y,\check \alpha_0,\dots,\check \alpha_{n-1})$. Now, if $G$ is $\mathbb{P}$-generic, then $V[G] \vDash \forall y \ y \in \dot x^G \leftrightarrow \varphi(y,\alpha_0,\dots,\alpha_{n-1})$. Since $\varphi(y,\alpha_0,\dots,\alpha_{n-1})$ is by the above fact decided by all conditions of the forcing in the same way, this should somehow show that $\dot x^G$ is already contained in $\mathsf{OD}^V$, and this would imply the claim. Is this the right approach, and how do I close the gaps? REPLY [5 votes]: There is a slight caveat here. If $\Bbb P$ is a weakly homogeneous forcing, then $\mathrm{HOD}^{V[G]}\subseteq\mathrm{HOD}^V(\Bbb P)$. The reason is that we need $\Bbb P$ as a parameter, as the proof goes through the forcing relation of $\Bbb P$, which itself is definable from $\Bbb P$. Your idea is correct. It is enough to make sure that every set of ordinals in $\mathrm{HOD}^{V[G]}$ is already in $\mathrm{HOD}^V(\Bbb P)$, as these are models of $\sf ZFC$. But now it's easy. If $p\Vdash\varphi(\check\alpha,\check\xi)$, then $1\Vdash\varphi(\check\alpha,\check\xi)$; and the same holds for the negation of $\varphi$. So the set $\{\alpha\mid 1\Vdash\varphi(\check\alpha,\check\xi)\}$ lies in the ground model, and it is exactly the set which will be definable in $V[G]$ from $\varphi$ and $\xi$. Now if $A$ is a set of ordinals which is ordinal-definable in $V[G]$, then taking $\varphi$ to be the definition and $\xi$ the parameter, we get that $A$ lies in $V$ and that it is ordinal-definable there as long as you know the forcing relation of $\Bbb P$.<|endoftext|> TITLE: Submanifolds of oriented manifolds QUESTION [7 upvotes]: Let $M$ be an $n$-dimensional oriented manifold. Let $f:M\to\mathbb{R}$ be a smooth function. Suppose $c$ is a regular value of $f$ with $f^{-1}(c)$ nonempty. Show that $f^{-1}(c)$ is an oriented regular submanifold of $M$. By the constant rank theorem, I know that $f^{-1}(c)$ is an $(n-1)$-dimensional regular submanifold of $M$, but how to show that it is orientable? REPLY [3 votes]: Suppose $S$ is the manifold $f^{-1}(c)$. Prove that $\nabla f$ is normal to $S$ in $M$. Now suppose, for contradiction, that $S$ is non-orientable. Then there is a loop in $S$ along which a continuous choice of ordered bases of $TS$ returns with the opposite orientation. Prepend $\nabla f$ to each of these bases to obtain ordered bases of $TM$ along the loop.
This will show that $M$ is non-orientable, a contradiction.<|endoftext|> TITLE: IMO 2016 Problem 5 QUESTION [5 upvotes]: The equation $$ (x-1)(x-2)(x-3)\dots(x-2016)=(x-1)(x-2)(x-3)\dots(x-2016) $$ is written on a board, with $2016$ linear factors on each side. What is the least possible value of $k$ for which it is possible to erase exactly $k$ of these $4032$ factors so that at least one factor remains on each side and the resulting equation has no real solutions? REPLY [4 votes]: An interpretation of the solution from here (the source is in Chinese). $k\ge 2016$ (see @McFry comment above). For $t=1,2,\ldots,504$, remove the following (exactly $2016$ many) factors $$ \begin{align*} & (x-(4t-2)),\quad (x-(4t-1)) &\qquad \text{ from the LHS},\\ & (x-(4t-3)),\quad (x-4t) &\qquad \text{ from the RHS}. \end{align*} $$ We are to prove that what's left has no real solutions. It would mean that the smallest $k$ is $2016$. We are left with the equation $$ \begin{align*} & (x-1)(x-4)(x-5)(x-8)\ldots(x-2013)(x-2016)=\\ = & (x-2)(x-3)(x-6)(x-7)\ldots (x-2014)(x-2015). \end{align*}\tag{1} $$ Let's prove that for all $x\in\mathbb{R}$ it holds that $$ \begin{align*} (x-1)(x-4)&<(x-2)(x-3),\\ (x-5)(x-8)&<(x-6)(x-7)\qquad \text{etc.}\tag{2} \end{align*} $$ In other words, we prove that for $t=1,\ldots,504$ $$ (x-(4t-3))(x-4t)<(x-(4t-2))(x-(4t-1)).\tag{3} $$ Denote $y=x-4t$. Then "RHS minus LHS" in (3) is $$ (y+2)(y+1)-(y+3)y=y^2+3y+2-y^2-3y=2>0. $$ Hence (2) is proven for all $x\in\mathbb{R}$. Obviously, $x\in \{1,2,\ldots,2016\}$ is not a solution, since then exactly one side of (1) vanishes. If $x<1$, $x>2016$, or $4m<x<4m+1$ for some $1\le m\le 503$, then for every $t$ both sides of (3) are positive, so multiplying the $504$ inequalities in (3) shows that the LHS of (1) is strictly smaller than the RHS; hence there is no solution. If $4m-3<x<4m-2$ or $4m-1<x<4m$ for some $1\le m\le 504$, then on the left side of (1) the product $(x-(4m-3))(x-4m)$ is negative while all the other paired products, and all the paired products on the right side, are positive; hence LHS $<0<$ RHS, and again there is no solution. Finally, suppose $4m-2<x<4m-1$ for some $1\le m\le 504$. Regroup the factors across consecutive quadruples: for all real $x$, $$ \begin{align*} (x-4)(x-5)&>(x-3)(x-6),\\ (x-8)(x-9)&>(x-7)(x-10),\\ &\vdots\\ (x-2012)(x-2013)&>(x-2011)(x-2014), \end{align*} $$ each difference being equal to $2$ as before, and for $x$ in the range under consideration all of these products are positive. Moreover, for $1\le n\le 504$ we have $2\le 4n-2<4n-1\le 2015$, so $2<x<2015$ and $$ \begin{align*} x-1&>x-2>0,\\ -(x-2016)&>-(x-2015)>0, \end{align*} $$ which implies $$\begin{align*} & -(x-1)(x-4)(x-5)(x-8)\ldots(x-2013)(x-2016)>\\ > & -(x-2)(x-3)(x-6)(x-7)\ldots (x-2014)(x-2015), \end{align*} $$ so again (1) is impossible.<|endoftext|> TITLE: Hamel basis for the vector space of real numbers over rational numbers and closedness of the basis under inversion QUESTION [5 upvotes]: Let $\mathcal B$ be a Hamel basis for $\mathbb R$ over $\mathbb Q$. Then is it true that for every $0\ne a \in \mathbb R$, $\exists y \in \mathcal B$ such that $\dfrac a y \notin \mathcal B$? If the answer to the above in general is no, then can we take $a$ to be $1$? That is: does there exist a Hamel basis $\mathcal B$ for $\mathbb R$ over $\mathbb Q$ such that $a\in \mathcal B \implies \dfrac 1a \in \mathcal B$? See Hamel basis for $\mathbb{R}$ over $\mathbb{Q}$ cannot be closed under scalar multiplication by $a \ne 0,1$ REPLY [5 votes]: The answer to Question $2$ is yes, such a basis exists (and hence the answer to Question $1$ is "no"), assuming the continuum hypothesis. Here is an outline. We want to build a Hamel basis $B$ satisfying $x\in B\rightarrow {1\over x}\in B$. This means we have to meet a family of requirements. Let $\mathbb{R}=\{r_\alpha: \alpha<\mathfrak{c}\}$. $I$: $B$ is linearly independent. $D$: $x\in B\rightarrow {1\over x}\in B$. $S_\alpha$: $r_\alpha\in span(B)$. Say that a set $X\subseteq\mathbb{R}$ satisfying $I$ and $D$, with size $<\mathfrak{c}$, is a good set. The transfinite construction of a basis satisfying the above requirements is then provided by the following lemma: Lemma: if $X$ is good and $r$ is a real, there is some good set $Y\supseteq X$ with $r\in span(Y)$. Proof of lemma (via the Baire category theorem). Suppose WLOG that $r\not\in span(X)$.
For a real $s$, let $Y_s=X\cup\{s, {1\over s}, s+r, {1\over s+r}\}$. Clearly $r\in span(Y_s)$ and the only way $Y_s$ can be non-good is if $Y_s$ is linearly dependent. Now, for each "appropriate sequence of scalars" $\sigma$ (that is, $\sigma: X\sqcup 4\rightarrow \mathbb{Q}$ with finite support), let $C_\sigma$ be the set of $s$ such that $\sigma$ witnesses that $Y_s$ is linearly dependent. By CH, we have that $X$ is countable, and hence there are only countably many such $\sigma$. The crucial observation is: Exercise: since $r\not\in span(X)$, each $C_\sigma$ is nowhere dense. By the Baire category theorem, we have an $s\not\in \bigcup C_\sigma$. But then $Y_s$ is good. Note: we can of course weaken CH here to Martin's Axiom MA. However, I don't see how to make this argument work in ZFC alone. Without the continuum hypothesis (or Martin's Axiom), things get very messy. I don't see how to fix the construction above to work in just ZFC. However, I still see no good reason why the answer to question $2$ shouldn't be "yes" in ZFC alone. In particular, it's nontrivial to even show that some Hamel basis has the property that for every nonzero $a$, some element $b$ has ${a\over b}\not\in B$! Below is a ZFC-construction of such a Hamel basis. Order the nonzero reals as $\{r_\alpha: \alpha<\mathfrak{c}\}$. We build a Hamel basis $B$ meeting the following requirements: $H$: $B$ is linearly independent. $S_\alpha$: $r_\alpha$ is in the span of $B$. $D_\alpha$: if $r_\alpha\not=0$, then there is some $s\in B$ such that ${r_\alpha\over s}\not\in B$. Note that the $S_\alpha$s are satisfied by putting elements into $B$, while the $D_\alpha$s and $H$ are satisfied by keeping elements out of $B$. That is, $S_\alpha$s are positive requirements, and $D_\alpha$s and $H$ are negative requirements. The crucial lemmas are the following: Lemma 1. If $X\subseteq\mathbb{R}$ is linearly independent and has size $<\vert\mathbb{R}\vert$, and $r\in\mathbb{R}$ is arbitrary, then there is a finite $F\subseteq\mathbb{R}$ such that $F\cap X=\emptyset$, $r\in span(F)$, and $F\cup X$ is linearly independent. Proof: First, let's ignore the linear independence part. Consider $Z=\{\{x, x+r\}: x\in\mathbb{R}\}$. There is a continuum-sized, pairwise-disjoint subset $Z'$ of $Z$ (restrict to $x\in (0, {r\over 2})$). Since $\vert X\vert<\vert\mathbb{R}\vert$, we have that there is some $P\in Z'$ such that $P\cap X=\emptyset$. To fold in linear independence, suppose $r$ is not already in $span(X)$ (otherwise we're done). Suppose towards contradiction that adding any $P\in Z'$ to $X$ breaks linear independence. Using pigeonhole (there are only countably many finite sequences of scalars), we have $P, Q\in Z'$ with the above property but with $P\cap Q=\emptyset$ and $X\cup P, X\cup Q$ linearly dependent via the same sequence of scalars. But this immediately implies that the coefficients of the elements of $P$ and $Q$ are zero, since $r\not\in span(X)$, so $X$ is linearly dependent; contradiction. Lemma 2. If $X, Y\subseteq\mathbb{R}$ each have size $<\mathfrak{c}$, $X$ is linearly independent, and $r\in\mathbb{R}$, then there is some $s\in\mathbb{R}$ such that $s\not\in Y$, ${r\over s}\not\in X$, and $X\cup\{s\}$ is linearly independent. Proof: Similar to Lemma 1. Now the process is to build $B$ in stages, meeting the $S_\alpha$s and $D_\alpha$s in order.
Specifically: the $\alpha$th stage in our construction will be a pair $(B_\alpha, O_\alpha)$ such that $B_\alpha, O_\alpha\subseteq\mathbb{R}$; $B_\alpha$ is linearly independent; $B_\alpha\cap O_\alpha=\emptyset$; and $\vert B_\alpha\vert, \vert O_\alpha\vert<\mathfrak{c}$. We will also have $B_\alpha\subseteq B_\beta$ and $O_\alpha\subseteq O_\beta$ for $\alpha<\beta<\mathfrak{c}$. We begin with $B_0=O_0=\emptyset$, and let $B_\lambda=\bigcup_{\beta<\lambda} B_\beta$ and $O_\lambda=\bigcup_{\beta<\lambda} O_\beta$ for $\lambda$ limit. Finally, given $(B_\alpha, O_\alpha)$, we define $(B_{\alpha+1}, O_{\alpha+1})$ as follows: First, let $C\supseteq B_\alpha$ have cardinality $<\mathfrak{c}$ such that $C\cap O_\alpha=\emptyset$, $r_\alpha\in span(C)$, and $C$ is linearly independent. Such a $C$ exists by Lemma 1 (indeed we can have $C$ consist of $B_\alpha$ plus at most two elements). Next, pick some $s$ such that $s\not\in O_\alpha$, ${r_\alpha\over s}\not\in B_\alpha$, and $C\cup\{s\}$ is linearly independent; such an $s$ exists by Lemma 2. Finally, we let $B_{\alpha+1}=C\cup\{s\}$ and $O_{\alpha+1}=O_\alpha\cup\{{r_\alpha\over s}\}$. Now let $B=\bigcup_{\alpha<\mathfrak{c}} B_\alpha$. By induction, each requirement $H$, $S_\alpha$ and $D_\alpha$ is satisfied, so $B$ is a Hamel basis with the property mentioned in the first part of the question. REPLY [4 votes]: In fact, Noah Schweber's idea leads to a solution under ZFC. First, let $\mathfrak{c}$ be the least ordinal with the same cardinality as $\mathbb{R}$. Order the nonzero reals as $\{r_{\alpha}:\alpha<\mathfrak{c}\}$ (possible due to the Axiom of Choice). We try transfinite recursion. See also ordinal numbers. We will say that an extension field $F$ over $K$ is involutive if there exists a basis $\mathcal{B}$ of $F$ over $K$ such that $\alpha\in\mathcal{B}$ implies $\frac1{\alpha}\in\mathcal{B}$. Base case: Set $\mathcal{B}_0 = \emptyset$. Successor case: Let $\alpha<\mathfrak{c}$. If $r_{\alpha}\in\mathrm{span}(\mathcal{B}_{\alpha})$, let $\mathcal{B}_{\alpha+1}=\mathcal{B}_{\alpha}$. If $r_{\alpha}\notin\mathrm{span}(\mathcal{B}_{\alpha})$, consider $$ Y_s=\mathcal{B}_{\alpha}\cup \left\{ s, \frac1s, s+r_{\alpha}, \frac1{s+r_{\alpha}}\right\}. $$ Suppose that $Y_s$ is linearly dependent over $\mathbb{Q}$ and that this is witnessed by $\sigma\in\mathcal{B}_{\alpha}^{<\omega}\times\mathbb{Q}^{<\omega}$. Since $r_{\alpha}\notin\mathrm{span}(\mathcal{B}_{\alpha})$, the $\mathbb{Q}$-linear relation provided by $\sigma$ forces $s$ to lie in a set $C_{\sigma}$ of at most four elements. By choosing $s\notin \cup_{\sigma} C_{\sigma}$, we can guarantee that $Y_s$ is linearly independent over $\mathbb{Q}$. This is possible since $$ |\cup_{\sigma}C_{\sigma}|\leq |4\times\mathcal{B}_{\alpha}^{<\omega}\times\mathbb{Q}^{<\omega}|=\max(\aleph_0, |\alpha|)<\mathfrak{c}. $$ Then let $\mathcal{B}_{\alpha+1}=Y_s$. Limit case: For limit ordinals $\lambda$, define $\mathcal{B}_{\lambda}=\cup_{\alpha<\lambda} \mathcal{B}_{\alpha}$. This process terminates at some ordinal $\beta$ at which every $r_{\alpha}$ belongs to $\mathrm{span}(\mathcal{B}_{\beta})$. By the construction, $\mathcal{B}_{\beta}$ is linearly independent over $\mathbb{Q}$ and is an involutive basis for $\mathbb{R}$ over $\mathbb{Q}$.<|endoftext|> TITLE: Does there exist a Hamel basis $\mathcal B$ for $\mathbb R$ over $\mathbb Q$ such that $a,b \in \mathcal B \implies \dfrac ab \in \mathcal B$?
QUESTION [16 upvotes]: This is part of an attempt to understand what multiplicative structure a Hamel basis of the reals over the rationals can have. Does there exist a Hamel basis $\mathcal B$ for $\mathbb R$ over $\mathbb Q$ such that $a,b \in \mathcal B \implies \dfrac ab \in \mathcal B$? Additionally, as proposed by Noah Schweber, if the answer to the above is negative, what if the restriction that $a \neq b$ is imposed? That is: Does there exist a Hamel basis $\mathcal B$ for $\mathbb R$ over $\mathbb Q$ such that $a,b \in \mathcal B$ distinct $\implies \dfrac ab \in \mathcal B$? The following earlier question, showing that such a Hamel basis cannot be closed under multiplication by a (non-trivial) constant, could be helpful: Hamel basis for $\mathbb{R}$ over $\mathbb{Q}$ cannot be closed under scalar multiplication by $a \ne 0,1$ A recent related but distinct question, Hamel basis for the vector space of real numbers over rational numbers and closedness of the basis under inversion, focuses on whether a Hamel basis can be closed under taking inverses. REPLY [4 votes]: This is not an answer. It is an attempt at Noah Schweber's new condition. See his comment under i707107's solution. Let $F$ be a field extension of a field $K$ with $n:=[F:K]\geq 4$. Suppose that $B$ is a Hamel basis of $F$ over $K$ with the divisibility property, namely, for all $a,b\in B$ with $a\neq b$, $\dfrac{a}{b}\in B$. Then, it holds that $ab\in B$ for all $a,b\in B$ with $ab\neq 1$. Fix $a,b\in B$ with $ab\neq 1$. Then, choose $c\in B\setminus\{a\}$ and $d\in B\setminus\left\{c,bc,\dfrac{c}{a}\right\}$. Hence, $\dfrac{c}{a}\in B$ and $\dfrac{ad}{c}=\dfrac{d}{c/a}\in B$. Furthermore, $\dfrac{d}{c}\in B$ and $\dfrac{d}{bc}=\dfrac{d/c}{b}\in B$. Since $ab\neq 1$, we have $\dfrac{ad}{c}\neq \dfrac{d}{bc}$. Consequently, $$ab=\frac{ad/c}{d/bc}\in B\,.$$ Involutive Hamel Bases A subset $S$ of $F\setminus\{0\}$ is said to be involutive if $S$ is invariant under inversion (that is, $\dfrac{1}{s}\in S$ for all $s\in S$). Now, if we can show that every Hamel basis of $\mathbb{R}$ over $\mathbb{Q}$ fails to be involutive, then it follows that a Hamel basis with the divisibility property does not exist. I don't think that an involutive Hamel basis for the extension $\mathbb{R}>\mathbb{Q}$ exists, but I have no idea about a proof. More generally, I would like to know whether there exist a field $K$ and an infinite field extension $F$ of $K$ with a Hamel basis $B$ over $K$ such that $B$ is involutive. As in i707107's solution, involutivity of $B$ is not needed if $[F:K]>|\bar{K}|$, where $\bar{K}$ is the algebraic closure of $K$, but it is an interesting question nonetheless. If $n=[F:K]$ is finite and odd, then $$B=\left\{\bar{x}^{-\left\lfloor\frac{n}{2}\right\rfloor},\bar{x}^{-\big(\left\lfloor\frac{n}{2}\right\rfloor-1\big)},\ldots,\bar{x}^{+\big(\left\lfloor\frac{n}{2}\right\rfloor-1\big)},\bar{x}^{+\left\lfloor\frac{n}{2}\right\rfloor}\right\}$$ is an involutive basis of $F$ over $K$, where $F=K[x]\big/\big(f(x)\big)$ for some irreducible polynomial $f(x)\in K[x]$. If $n$ is even, then there exists an irreducible polynomial $f(x)$ of degree $n$ in $K[x]$ such that the coefficient of the term $x^{n/2}$ is nonzero and that $F=K[x]\big/\big(f(x)\big)$, making $$B=\left\{\bar{x}^{-1},\bar{x}^{-2},\ldots,\bar{x}^{-(n/2)}\right\}\cup\left\{\bar{x}^{+1},\bar{x}^{+2},\ldots,\bar{x}^{+(n/2)}\right\}$$ an involutive Hamel basis of $F$ over $K$. (For example, with $F=\mathbb{C}$ and $K=\mathbb{R}$, we can take $f(x)=(x+1)^2+1$.)
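The $F=\mathbb{C}$, $K=\mathbb{R}$ example is easy to check numerically: with $f(x)=(x+1)^2+1$ we have $\bar{x}=-1+i$, and the claimed basis is $\{\bar{x},\bar{x}^{-1}\}$. A minimal sketch of such a check (not part of the original argument):

```python
import numpy as np

x = -1 + 1j            # a root of f(x) = (x+1)^2 + 1 = x^2 + 2x + 2
B = [x, 1 / x]         # claimed involutive basis of C over R

# Involutive: the inverse of each element is again in B
assert all(any(np.isclose(1 / b, c) for c in B) for b in B)

# Basis of C over R: the 2x2 matrix of real coordinates is invertible
M = np.array([[b.real, b.imag] for b in B])
assert abs(np.linalg.det(M)) > 1e-12
print("B =", B, "is an involutive R-basis of C")
```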
Existence of $B$ with the Divisibility Property For $n=2$, take $K:=\mathbb{F}_2$ and $F:=\mathbb{F}_4=K[x]\big/\left(x^2+x+1\right)$. Then, $B:=\left\{\bar{x},\bar{x}+1\right\}$ satisfies the condition, where $\bar{x}$ is the image of $x$ under the canonical projection $K[x]\to K[x]\big/\left(x^2+x+1\right)$. In fact, if such a Hamel basis $B$ exists for the case $n=2$, then $x^2+x+1$ is an irreducible polynomial in $K[x]$ and $F=K[x]\big/\left(x^2+x+1\right)$, in which case $B=\left\{\bar{x},-\bar{x}-1\right\}$. For $n=3$, it turns out that $B$ cannot exist. Suppose to the contrary that $B=\{a,b,c\}$ exists. Then, we can easily see that $B$ does not contain $1$ (or the extension would be of index at most $2$). Hence, $\dfrac{a}{b}\neq a$ and $\dfrac{a}{c}\neq a$. If $\dfrac{a}{b}=c$, then $c\notin K$, whence $\dfrac{b}{a}=\dfrac{1}{c}\neq c$, so that $\dfrac{b}{a}=a$ is the only possibility. Furthermore, we also have $\dfrac{a}{c}=b$, which leads to $b\notin K$ and $\dfrac{c}{a}=a$. Therefore, $b=a^2=c$, a contradiction. Hence, $\dfrac{a}{b}=b$ and $\dfrac{a}{c}=c$. However, this gives $b^2=a=c^2$, or $b=\pm c$, which is absurd. As i707107 shows, $B$ does not exist if $[F:K]>|\bar{K}|$. Replace $\mathbb{Q}$ in i707107's solution by $K$ and $\mathbb{R}$ by $F$ to get a proof of this claim. We are left to deal with the case where $[F:K]>3$ is finite and the case when $[F:K]$ is infinite but $[F:K]\leq|\bar{K}|$. Case 1: The index $n=[F:K]$ is an odd integer greater than $3$. Suppose $B$ exists. It is evident that $1\notin B$ and $B$ must be involutive. Since $n=|B|$ is odd, $B$ has an involutive element $u\in B$ with $u=\dfrac{1}{u}$. Because $1\notin B$, we have $u=-1$ (whence the characteristic of $K$ cannot be $2$). Thus, for any $a\in B\setminus\{u\}$, the element $\dfrac{u}{a}=-\dfrac{1}{a}$ must lie in $B$. But $B$ is involutive, so $\dfrac{1}{a}\in B$ as well; then $B$ contains the two $K$-linearly dependent elements $\dfrac{1}{a}$ and $-\dfrac{1}{a}$, a contradiction. Case 2: The index $n=[F:K]$ is an even integer greater than $2$. If $B$ exists, then $B$ can be partitioned into $\left\{t^{+j},t^{-j}\right\}$ for some $t\in F$ and $j=1,2,\ldots,\frac{n}{2}$. Ergo, $t^p=1$ for some integer $p>0$. If $p$ is not prime, then we can see that $B$ is not $K$-linearly independent, which is absurd. Hence, we see that $[F:K]=n=p-1$ for some odd prime natural number $p$ and $$B=\left\{t,t^2,\ldots,t^{p-1}\right\}=\bigcup\limits_{j=1}^{\frac{p-1}{2}}\,\left\{t^{+j},t^{-j}\right\}$$ for some primitive $p$-th root of unity $t\in F\setminus K$. This is possible only if $\text{char}(K)\neq p$. In summary, for the case where $[F:K]$ is even, $B$ exists if and only if $F$ is a cyclotomic field extension over $K$ generated by a primitive $p$-th root of unity with $p$ an odd prime. Case 3: The index $n=[F:K]$ is infinite and $n\leq|\bar{K}|$. If $F$ is algebraic over $K$, then we can argue as in the two former cases that such a Hamel basis $B$ would have to contain only primitive $p$-th roots of unity for odd prime natural numbers $p$, but this is absurd, as products of most pairs in $B$ are also in $B$. Hence, $B$ does not exist if $F$ is algebraic over $K$. The subcase where $F$ is not algebraic over $K$ seems to be very difficult.<|endoftext|> TITLE: Verify my understanding of a math joke. QUESTION [12 upvotes]: I am trying to understand this comic: Note that after hovering over the image more text is revealed, namely "...spike in the Fourier Transform at the one month mark where...". Is the joke referring to an implied jump discontinuity where perhaps this 'couple' has an argument, i.e.
no single limit because the one-sided limits were finite but not equal? Any clarity on this comic would be appreciated. Thank you. REPLY [2 votes]: I was browsing the site, reading past articles, and found this one. For me, the comic represents Goodhart's law: https://en.wikipedia.org/wiki/Goodhart%27s_law<|endoftext|> TITLE: Is there another topology on $\mathbb{R}$ that gives the same continuous functions from $\mathbb{R}$ to $\mathbb{R}$? QUESTION [36 upvotes]: If we look at any set $X$ with the trivial topology, then all functions from $X$ to $X$ are continuous. We could also take the discrete topology and get the same result: all functions are continuous. Another example: take the Sierpinski space; all functions from it to itself, except the function that switches $0$ and $1$, are continuous. Again there is another topology that gives the same continuous functions (just take the other singleton as open). Is this true for $\mathbb{R}$ as well? Note that I'm not asking if $\mathbb{R}$ is completely regular, which is a seemingly similar but different property (in that property, the image space is equipped with the Euclidean topology, if I'm not mistaken). I'm interested in $\mathbb{R}$ especially because this would give another way to think about continuous functions on it. High school students learn an epsilon-delta definition, but you can teach them what an open set is and define continuity that way. The usual definition of open is then "union of open intervals", but maybe there is a different choice of opens that gives the same notion of continuous functions. That being said, I'm also interested in other spaces on which the topology can be changed to obtain the same continuous functions to itself, like the examples I gave, or in spaces where you can show that the topology is the only one that gives those continuous maps. Thanks in advance. Edit: Assume $X$ has more than one element. The argument that CarryonSmiling makes can be generalised as follows. Let's call a space that is T1 and connected a Tc space. If two topologies on $X$ have the same continuous maps from $X$ to itself, then they are either both Tc or both not Tc. This is true because Tc is equivalent to the property that CarryonSmiling uses, which is expressed only in terms of continuous maps from $X$ to itself: $$(*)\quad\mbox{No map $h:X\rightarrow X$ whose range has exactly two elements is continuous.}$$ To see this equivalence, note that $$T1 \iff \mbox{closed singletons} \iff \mbox{every subspace with exactly 2 elements is discrete}$$ $$\mbox{connected} \iff \mbox{every continuous map to the discrete space with 2 elements is constant}$$ That means that T1 and connectedness together are exactly $(*)$. Maybe this can help to find a space that is determined by its continuous maps to itself. Please let me know if any part of this reasoning is invalid. REPLY [8 votes]: Carry on Smiling has shown that if $\tau$ is a topology on $\mathbb{R}$ that has the same continuous functions $\mathbb{R}\to\mathbb{R}$ as the usual topology, then $\tau$ contains the usual topology. I will now show that $\tau$ must also be contained in the usual topology, and thus must actually be equal to the usual topology. Suppose $A\subset\mathbb{R}$ is a set that is $\tau$-closed but not closed.
Choose a point $x\in\mathbb{R}\setminus A$ and a sequence of points $(x_n)$ in $A$ which converge to $x$. To simplify notation, let me assume $x=0$ and $x_n=1/n$. In fact, this loses no generality: we can assume the $x_n$ are converging to $x$ monotonically by passing to a subsequence, and then we can apply a homeomorphism of $\mathbb{R}$ to send $x$ to $0$ and $x_n$ to $1/n$. So we may assume $0\not\in A$ but $1/n\in A$ for all positive integers $n$. Now consider the map $f:\mathbb{R}\to\mathbb{R}$ given by $f(x)=0$ for $x\leq 0$, $f(x)=1/n$ if $x\in [1/(2n+1),1/2n]$ for some positive integer $n$, $f(x)=1$ if $x\geq 1/2$, and interpolate linearly for other values of $x$. Then $f$ is continuous. Let $g$ be defined similarly to $f$, except that $g(x)=1/n$ if $x\in [1/2n,1/(2n-1)]$. Let $h(x)=f(-x)$ and $i(x)=g(-x)$. Now observe that $f^{-1}(A)\cup g^{-1}(A)$ contains all positive numbers, and $h^{-1}(A)\cup i^{-1}(A)$ contains all negative numbers. On the other hand, none of these sets contain $0$, since all the functions map $0$ to $0$ and $0\not\in A$. Thus $f^{-1}(A)\cup g^{-1}(A)\cup h^{-1}(A)\cup i^{-1}(A)=\mathbb{R}\setminus\{0\}$. By assumption, $f$, $g$, $h$, and $i$ are all $\tau$-continuous. It follows that $\mathbb{R}\setminus\{0\}$ is $\tau$-closed. For each $r\in\mathbb{R}$, $j_r(x)=x-r$ is also $\tau$-continuous, so in fact $\mathbb{R}\setminus\{r\}$ is $\tau$-closed for all $r$. But this means $\tau$ is the discrete topology, which is impossible since then every function would be $\tau$-continuous. Thus no such set $A$ can exist, and $\tau$ is contained in the usual topology.<|endoftext|> TITLE: How can I construct genus 2 curves QUESTION [8 upvotes]: One of the first big formulas you learn in algebraic geometry is the genus-degree formula, which states that an irreducible homogeneous polynomial $f \in \mathbb{C}[x,y,z]$ of degree $d$ gives a genus $(d-1)(d-2)/2$ curve. Unfortunately, this does not help with constructing genus 2 curves, since we have the following table of degrees and genera for $f$ $$ \begin{matrix} \text{degree} & 1 & 2 & 3 & 4 & 5 & \cdots \\ \text{genus} & 0 & 0 & 1 & 3 & 6 & \cdots \end{matrix} $$ How can I find a generalization of the genus-degree formula so that I can construct curves in some projective space with the desired genus? REPLY [14 votes]: Let $f \in \mathbb{C}[x_0, x_1, y_0, y_1]$ be a polynomial such that $f(\lambda x_0, \lambda x_1, y_0, y_1) = \lambda^af(x_0, x_1, y_0, y_1)$ and $f(x_0, x_1, \mu y_0, \mu y_1) = \mu^bf(x_0, x_1, y_0, y_1)$, then $$X = \{([x_0, x_1], [y_0, y_1]) \in \mathbb{CP}^1\times\mathbb{CP}^1 \mid f(x_0, x_1, y_0, y_1) = 0\}$$ is a curve. If $X$ is smooth, it has genus $(a-1)(b-1)$, so every genus can be realised. As $\mathbb{CP}^1\times\mathbb{CP}^1$ embeds in $\mathbb{CP}^3$ via the Segre embedding, $X$ is a curve in $\mathbb{CP}^3$. Another way of constructing curves in a projective space is via complete intersections. Let $f_1, \dots, f_{n-1} \in \mathbb{C}[x_0, \dots, x_n]$ be homogeneous polynomials of degrees $d_1, \dots, d_{n-1}$ respectively, then $$Y = \{[x_0, \dots, x_n] \in \mathbb{CP}^n \mid f_1(x_0, \dots, x_n) = \dots = f_{n-1}(x_0, \dots, x_n) = 0\}$$ is a curve. If $Y$ is smooth, it has genus $1 - \frac{1}{2}(n + 1 - d_1 - \dots - d_{n-1})d_1\dots d_{n-1}$. This construction gives rise to many genera that don't appear in the degree-genus formula, but not all of them: see this sequence.
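One can enumerate the genera this formula produces for small $n$ and small degrees to see which values occur; a quick sketch (the search ranges are arbitrary):

```python
from itertools import combinations_with_replacement

# Genera of smooth complete intersection curves in CP^n:
# g = 1 - (n + 1 - sum(d)) * prod(d) / 2
genera = set()
for n in range(2, 8):
    for ds in combinations_with_replacement(range(1, 10), n - 1):
        prod = 1
        for d in ds:
            prod *= d
        g2 = 2 - (n + 1 - sum(ds)) * prod  # 2*g, to stay in integers
        if g2 % 2 == 0 and 0 <= g2 // 2 <= 30:
            genera.add(g2 // 2)
print(sorted(genera))  # note which small genera are missing
```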
For example, there is no choice of dimension $n$ and degrees $d_1, \dots, d_{n-1}$ which gives rise to a genus two curve, i.e. a genus two curve is not a complete intersection.<|endoftext|> TITLE: What is $\text{Res}_{\mathbb{F}_p(t)/\mathbb{F}_p(t^p)}(\text{Spec}\,\mathbb{F}_p(t)[x]/(x^p - t))$? QUESTION [7 upvotes]: Let $L = \mathbb{F}_p(t)$ and $k = \mathbb{F}_p(t^p)$. Let $X$ be the $L$-scheme $\text{Spec}\,L[x]/(x^p - t)$. What is $\text{Res}_{L/k}\,X$? Here $\text{Res}$ denotes the Weil restriction of scalars. REPLY [4 votes]: The answer is the empty scheme. We use the basis $\{1, t, \ldots, t^{p - 1}\}$ for $L/k$. To find the restriction of scalars, we substitute$$x = y_0 + ty_1 + \ldots + t^{p - 1}y_{p - 1}$$into the equation $x^p - t = 0$ defining $X$, and rewrite the result as$$F_0 + F_1t + \ldots + F_{p - 1}t^{p - 1} = 0$$with $F_i \in k[y_0, \ldots, y_{p - 1}]$. We get\begin{align*} F_0 & = y_0^p + t^p y_1^p + \ldots + t^{p(p - 1)}y_{p - 1}^p,\\ F_1 & = -1, \\ F_i & = 0 \text{ for }i \ge 2 \end{align*}(note that $t^p, \ldots, t^{p(p-1)}$ all lie in $k$, and that raising to the $p$th power is additive in characteristic $p$). Then $\text{Res}_{L/k}X$ is $\text{Spec}\,k[y_0, \ldots, y_{p - 1}]/(F_0, \ldots, F_{p - 1})$. This is the empty scheme, since $F_1 = -1$ generates the unit ideal.<|endoftext|> TITLE: Solving differential equation $x y^2 y' = x+1$ QUESTION [5 upvotes]: I am asked to solve the following differential equation: $$ x y^2 y' = x+1 $$ My process was $$ \begin{align*} x y^2 y' &= x+1\\ xy^2 \frac{dy}{dx} &= x+1\\ y^2 dy &= \frac{x+1}{x} dx\\ \int y^2 dy &= \int \frac{x+1}{x} dx\\ \int y^2 dy &= \int dx + \int \frac{1}{x} dx\\ \frac{y^3}{3} &= x + \ln |x| + C\\ y &= \sqrt[3]{3 \left( x + \ln |x| + C \right)} \end{align*} $$ but when I was checking my result on Wolfram I noticed that it was given in a different way. Is my result incorrect? What caused the results to be different? Is it the absolute value sign of the $\ln$? Thank you. REPLY [6 votes]: So, every nonzero real number has three cube roots: a real one, and two complex ones. The cube roots of 1, for instance, are $1, -\frac{1}{2}+\frac{\sqrt{3}}{2}i$, and $-\frac{1}{2}-\frac{\sqrt{3}}{2}i$. When you took the cube root in solving for $y$, you only considered the real cube root (which is totally fine if you're only interested in real-valued functions); WolframAlpha was more general, and gave solutions for all three cube roots. Sort of like how when you have $y^2 = x$, this gives two solutions for $y$: $y = \sqrt{x}$ and $y = -\sqrt{x}$. Also, with regards to the absolute value, both $\ln(x)$ and $\ln(|x|)$ are valid antiderivatives of $\frac{1}{x}$, so you're correct there.<|endoftext|> TITLE: What is the formal, precise definition of area (and that of volume) in geometry? QUESTION [14 upvotes]: Note: I am asking two questions. Replace "area" by "volume", with appropriate adjustments, to get the second question. Hello everyone. I am studying mathematics in college and I am trying to find the formal definition of the notion "area" in geometry. What is area? I can understand it when the sides involved are integers: when the side is 3, we have nine 1x1 cells, so the area is 9 if we think of area as the number of "unit cells". Maybe we can also define area for squares with rational sides in the same vein, but this is not a good way of thinking, because how would one define the area of a square whose side is $\pi$, or any other irrational number, for example? Please give me a formal definition of area. I am looking for the most precise definition. I am not finding any results on the internet.
Thanks in advance REPLY [5 votes]: The other answers so far have mentioned basically three kinds of unsigned area: Scissors-congruence to compare areas. Note that this approach does not define area itself as a magnitude, unless you further add the notion of real numbers. It also fails for non-polygons unless you take limits, in which case it becomes equivalent to the next kind of area. Jordan measure (equal to both the limit of the total area of grid squares contained in the figure as the grid size tends to zero and the limit of the total area of grid squares needed to contain the figure as the grid size tends to zero, if these limits are equal). Any bounded object in classical geometry has finite Jordan measure, and this is the simplest intuitive notion that the classical geometers would surely agree with, since the method of exhaustion is essentially based on the same principle. Jordan area so defined is finitely additive, and hence classical geometry works as usual. Lebesgue measure, which assigns a countably-additive area to some sets of the plane, and is compatible with Jordan measure whenever the Jordan measure is defined. All of the above generalize easily to higher dimensions. However, there is also signed area, which can drastically reduce the number of cases in Euclidean plane geometry when it comes to polygons. We define the (signed) area of a polygon using the shoelace algorithm, and then check that $area(XP...QYR...S) = area(XP...QY) + area(YR...SX)$ (the area of a polygon is the sum of the areas of its parts), and that area is preserved when the vertices are cycled but negated when they are reversed. This allows us to handle most geometry problems without unnecessary cases, unlike if we use unsigned area. Nevertheless, it is not so easy to extend signed area to non-polygonal regions. We can define the (signed) area of a region bounded by a (directed) rectifiable closed curve as the limit of the areas of approximating polygons. It turns out that the limit exists, but this is not so easy to prove. (To see that the condition of rectifiability cannot be dropped, note that an Osgood curve is a continuous closed curve with positive Lebesgue measure, and hence its interior is not Jordan measurable, which implies that the limit does not exist.) Also, signed volume is even more troublesome to define, so it is not very useful compared to the Lebesgue measure for general sets in $\mathbb{R}^3$.
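To make the signed-area convention concrete, here is a minimal sketch of the shoelace formula; counterclockwise vertices give positive area, and reversing them negates it:

```python
def signed_area(vertices):
    """Shoelace formula: signed area of the polygon with the given
    (x, y) vertices, positive when they are listed counterclockwise."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return s / 2.0

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(signed_area(square))          # 1.0
print(signed_area(square[::-1]))    # -1.0, orientation reversed
```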
<|endoftext|> TITLE: Finding path-lengths by the power of Adjacency matrix of an undirected graph QUESTION [30 upvotes]: I knew from Mark Newman's book - Networks: An Introduction (Page 137, Eq: 6.31) that, if $A$ is the adjacency matrix of a graph, then the $ij$'th entry of $A^k$ will give me the number of $k$-length paths connecting the vertices $i$ and $j$. This works very well for directed graphs. But does it work for undirected graphs too? For instance, for the undirected network below: if I want to calculate how many $3$-length paths there are from vertex-$2$ to vertex-$1$, then I should find $[A^3]_{12}$. Proceeding in this way, I get, \begin{eqnarray} A^3 = \left( \begin{matrix} 4 && 6 && 1 && 5 && 5 \\ 6 && 2 && 3 && 6 && 2 \\ 1 && 3 && 0 && 1 && 2 \\ 5 && 6 && 1 && 4 && 5 \\ 5 && 2 && 2 && 5 && 2 \end{matrix} \right) \end{eqnarray} And I find the entry of the 1st row and 2nd column = 6 = entry of the 2nd row and 1st column. Does it mean that there are 6 paths of length 3 from vertex-2 to vertex-1? Clearly it is not true, because I have only $1$ path of length $3$ from 2 to 1, namely the sequence: $(2,4,5,1)$. What am I missing? UPDATE: I am attaching a snapshot of Newman's book. He only talks about "paths", but never about a "walk". Is it a mistake? Newman's book snapshot REPLY [23 votes]: The powers of the adjacency matrix don't give you the number of paths but the number of walks between any two vertices. In other words, $[A^k]_{ij}$ also counts walks in which vertices and edges may be repeated (and such walks do exist here). REPLY [5 votes]: You get the number of $k$-length walks. The difference between a path and a walk is that walks can repeat edges and vertices.<|endoftext|> TITLE: Generalizing Variant Proof of Basel Problem QUESTION [7 upvotes]: Recently I have been thinking a lot about variations of the Basel Problem, and methods to solve them. Here I found the following solution to the Basel Problem by Alfredo Z. (I include the entire answer due to its brevity.) Define the following series for $ x > 0 $ $$\frac{\sin x}{x} = 1 - \frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+\cdots\quad.$$ Now substitute $ x = \sqrt{y}\ $ to arrive at $$\frac{\sin \sqrt{y}\ }{\sqrt{y}\ } = 1 - \frac{y}{3!}+\frac{y^2}{5!}-\frac{y^3}{7!}+\cdots\quad.$$ If we find the roots of $\frac{\sin \sqrt{y}\ }{\sqrt{y}\ } = 0 $ we find that $ y = n^2\pi^2\ $ for $ n \neq 0 $ and $ n $ in the integers. With all of this in mind, recall that for a polynomial $ P(x) = a_{n}x^n + a_{n-1}x^{n-1} +\cdots+a_{1}x + a_{0} $ with roots $ r_{1}, r_{2}, \cdots , r_{n} $ $$\frac{1}{r_{1}} + \frac{1}{r_{2}} + \cdots + \frac{1}{r_{n}} = -\frac{a_{1}}{a_{0}}$$ Treating the above series for $ \frac{\sin \sqrt{y}\ }{\sqrt{y}\ } $ as a polynomial, we see that $$\frac{1}{1^2\pi^2} + \frac{1}{2^2\pi^2} + \frac{1}{3^2\pi^2} + \cdots = -\frac{-\frac{1}{3!}}{1}$$ then multiplying both sides by $ \pi^2 $ gives the desired series. $$\frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \cdots = \frac{\pi^2}{6}$$ This solution fascinated me. Although it takes a few things for granted (such as that the Fundamental Theorem of Algebra applies to infinite polynomials), I nevertheless thought the proof was one of the most beautiful I've seen.
In attempting to generalize this I stumbled across the following function and its associated power series: $$\cos\left(\frac{x}{\sqrt{2}}\right)\cosh\left(\frac{x}{\sqrt{2}}\right) = \sum_{k=0}^\infty \frac{x^{4k}}{(4k)!}(-1)^k$$ By a simple substitution we find that $$\cos\left(\frac{x^{1/4}}{\sqrt{2}}\right)\cosh\left(\frac{x^{1/4}}{\sqrt{2}}\right) = \sum_{k=0}^\infty \frac{x^{k}}{(4k)!}(-1)^k$$ which we can treat as a polynomial in $x$; noting that $\cosh$ is nowhere zero (on the real line), we solve for the roots as such: $$\cos\left(\frac{x^{1/4}}{\sqrt{2}}\right) = 0 \implies \frac{x^{1/4}}{\sqrt{2}} = n\pi + \frac{\pi}{2} \implies x = \frac{\pi^4(2n+1)^4}{4}$$ Using the argument for Vieta's formula in the solution above, we find that $$\frac{4}{\pi^4}\sum_{k=0}^\infty \frac{1}{(2k+1)^4} = -\frac{-\frac{1}{4!}}{1}$$ Upon manipulation we find that $$\color{red}{\sum_{k=0}^\infty \frac{1}{(2k+1)^4} = \frac{\pi^4}{96}}$$ From here we can bring this into a form more like that of the Basel Problem by noting that $$\sum_{k=0}^\infty \frac{1}{(2k+2)^4} = \frac{1}{16}\sum_{k=0}^\infty \frac{1}{(k+1)^4} = \frac{1}{16}\sum_{k=1}^\infty \frac{1}{k^4}$$ Noting that this computes the even terms of $\sum_{k=1}^\infty k^{-4}$, we find that the odd terms must equal $\frac{15}{16}\sum_{k=1}^\infty k^{-4}$, which is our series calculated above. Applying this, we find that, as desired, $$\color{red}{\sum_{k=1}^\infty \frac{1}{k^4} = \frac{\pi^4}{90}}$$ However, I have been unable to generalize this approach further. Intuition tells me that the functions desired would have power series similar to those used, and would be a combination of trigonometric functions. Nevertheless, I have only been able to solve this for the case above (which required me to switch the power series from $\sin(x)$ to $\cos(x)$ from Alfredo's proof, as neglecting $\cosh(x)$ was desirable). My question is thus: Can this proof format be applied to solve series of the form $\sum_{k=1}^\infty k^{-n}$ for any $n$ other than $2$ and $4$? Note: I apologize for the length of this post, but I felt that a full presentation of the proof might assist in generalizing it. If anyone has any suggestions concerning this please let me know! Note 2: In case anyone is troubled by this seemingly naive application of Vieta's formula to infinite polynomials, know that this is perfectly valid, and is known as the Root Linear Coefficient Theorem. REPLY [6 votes]: To start, the "factoring" step is known as the Weierstrass factorization theorem, which asserts that you can express some functions as products of their factors. From here, take note that you had $$\frac{\sin(x)}x=\dots\left(1+\frac{x}{2\pi}\right)\left(1+\frac{x}{\pi}\right)\left(1-\frac{x}{\pi}\right)\left(1-\frac{x}{2\pi}\right)\dots$$ which is basically what you noted. From here, I multiply similar terms to get $$f(x):=\frac{\sin(x)}x=\left(1-\frac{x^2}{1^2\pi^2}\right)\left(1-\frac{x^2}{2^2\pi^2}\right)\left(1-\frac{x^2}{3^2\pi^2}\right)\dots$$ and define this as $f(x)$. Using $(1-ue^{\frac{2\pi i}n})(1-ue^{\frac{4\pi i}n})\dots(1-ue^{2\pi i})=1-u^n$, we can then get $$f(xe^{\frac{2\pi i}n})f(xe^{\frac{4\pi i}n})\dots f(xe^{2\pi i})=\left(1-\frac{x^{2n}}{1^{2n}\pi^{2n}}\right)\left(1-\frac{x^{2n}}{2^{2n}\pi^{2n}}\right)\left(1-\frac{x^{2n}}{3^{2n}\pi^{2n}}\right)\dots$$ And since this is not generalizable to $n\notin\mathbb N$, we can only solve for $$\sum_{k=1}^\infty\frac1{k^{2n}}$$ by using this method. As for the odd values, no closed form yet exists.
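The two highlighted identities above are, in any case, easy to sanity-check numerically with partial sums; a quick sketch:

```python
import math

# Partial sums of the two series derived above
odd_quartic = sum(1 / (2 * k + 1)**4 for k in range(200000))
zeta4       = sum(1 / k**4 for k in range(1, 200000))

print(odd_quartic, math.pi**4 / 96)   # both about 1.014678...
print(zeta4,       math.pi**4 / 90)   # both about 1.082323...
```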
<|endoftext|> TITLE: What is the relationship between the following set functions? QUESTION [5 upvotes]: This problem is taken from Section 2.5 of Royden and Fitzpatrick's Real Analysis, Fourth Edition text. For any open interval $I = (a,b)$, let $\ell(I) = b-a$ denote the length of $I$. For any set of real numbers $A$, define the following set functions: \begin{align} m^*(A) &= \inf\left\{\sum_{n=1}^\infty \ell(I_n): \bigcup_{n=1}^\infty I_n \supseteq A, I_n \text{ is an open, bounded interval for all $n$}\right\} \\ m^{**}(A)&= \inf\{m^*(\mathcal{O}): \mathcal{O} \supseteq A, \mathcal{O} \text{ open}\} \\ m^{***}(A)&= \sup\{m^*(F): F \subseteq A, F \text{ closed}\} \end{align} The text asks the reader to determine how these set functions are related. I think I can show that $m^* = m^{**}$ for all subsets of $\mathbf{R}$. However, I am only able to show $m^*(A) = m^{***}(A)$ if $m^{***}(A) = \infty$ or $A$ is measurable. Is this the correct answer? In particular, is it possible to find a set such that $m^{***}(A) < m^*(A)$? REPLY [3 votes]: Your results are correct. $m^*,$ also written $m^o,$ is called Lebesgue outer measure. $m^{***},$ also written $m^i,$ is called Lebesgue inner measure. Some authors take the definition of a measurable set to be any $A$ such that $m^o(A)=m^i(A).$ ...I don't know which definition of measurable set you are using. Some standard examples of a non-measurable set include the Vitali set, and a Bernstein set. A Bernstein set is any $B\subset \mathbb R$ such that $B\cap C \ne \emptyset \ne(\mathbb R$ \ $B)\cap C$ for every closed uncountable $C\subset \mathbb R.$ So any closed subset of a Bernstein set $B,$ or closed subset of $\mathbb R$ \ $B$, is countable, and $m^i(S)=m^o(S)=0$ whenever $S$ is countable. So $m^i(B)=m^i(\mathbb R$ \ $B)=0.$ On the other hand, let $U\supset B$ where $U$ is open. Then $V=\mathbb R$ \ $U$ is a closed subset of $\mathbb R$ \ $B,$ so $V$ is countable. Now for any $S \subset \mathbb R$ and any countable $V\subset \mathbb R$ we have $m^o(S)=m^o(S\cup V).$ So $m^o(U)=m^o(U\cup V)=m^o(\mathbb R)=\infty.$ Hence $m^o(B)=\infty.$<|endoftext|> TITLE: What is the simplest lower bound on prime counting functions proof wise? QUESTION [18 upvotes]: I am trying to understand how a lower bound can exist on the prime counting function, and to begin this process of educating myself I am trying to find a simple, complete proof that does not hinge on another, more complex theory. REPLY [2 votes]: Here is an answer that uses the Fundamental Theorem of Arithmetic. Let $\pi(n)$ be the number of primes less than or equal to $n$. For each natural $k$ less than $n$, the exponent of each prime in the prime factorization of $k$ is at most $\log_2(n)$. Since there are $\pi(n)$ total primes that can be used in the prime factorization of $k$, we have $$(\log_2(n) + 1)^{\pi(n)} \ge n \implies \pi(n) \ge \frac{\log(n)}{\log(\log_2(n) + 1)} \approx \frac{\log(n)}{\log \log(n)}.$$ This is, of course, far weaker than the $n/\log(n)$ growth given by the PNT.
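A short script makes the gap between this bound and the true count visible; a sketch using a naive sieve (the cutoffs are arbitrary):

```python
import math

def primepi(n):
    """Count primes <= n with a simple sieve of Eratosthenes."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if is_prime[p]:
            is_prime[p*p::p] = [False] * len(is_prime[p*p::p])
    return sum(is_prime)

for n in (10**3, 10**4, 10**5):
    bound = math.log(n) / math.log(math.log2(n) + 1)
    print(n, primepi(n), round(bound, 2))  # true count vs. FTA lower bound
```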
<|endoftext|> TITLE: How do basis functions work? QUESTION [7 upvotes]: Hopefully this isn't too broad of a question. I recently had it explained to me that the discrete Fourier transform was really a change in basis (thus the "dot product" like look of the sigma with the multiply in it), using the sine and cosine functions as the basis functions. I understand using the dot product against new basis vectors to change points from one space to another, but I have no idea how this would work for a basis which was a function instead of a constant. Can anyone explain the intuition for how that works? I also have been unable to find the correct terms to search for online, so am a bit lost there. Thanks! REPLY [6 votes]: for a basis which was a function instead of a constant See, there is not really a dichotomy between functions and constants. A function is simply a constant of a different type than a number or a tuple. What's not constant are function values, i.e. the results of evaluating a function for some argument. Unfortunately, the notation many people use completely obscures this distinction. To make it clear: let $f : \mathbb{R} \to \mathbb{R}$ be a function. Then $f(x)$ is not a function. It is just an expression that depends on the free variable $x$. Thus it is not constant, and indeed does not make sense to consider as an element of a vector space. Lots of people will tell you that $f(x)$ is a function. They're wrong. The function is just $f$, when not applied to any argument. If you have a proper definition like $$ \forall x .\ f(x) = \frac1{1+x^2} $$ (note that the universal quantifier $\forall$ – which is usually not written out but assumed implicitly – makes $x$ a bound variable) then $f$ as a whole is a perfectly good function-constant, just like $5$ or $\sqrt{2}$ or $b$ are perfectly good number-constants (if you previously defined it as $b = 17$). As a matter of fact, you might consider vectors $v\in\mathbb{R}^3$ also as functions: $$ v : \{0,1,2\} \to \mathbb{R} $$ or, less precisely, $$ v : \mathbb{N} \to \mathbb{R}. $$ This means $v(i)$ (or, as it's more commonly written, $v_i$) is a number, whereas $v$ as a whole is a vector. Now, with such finite-dimensional vectors a scalar product will generally be a simple sum. $$ \langle v,w\rangle_{_{\mathbb{R}^3}} = \sum_{i=0}^2 v_i\cdot w_i $$ Such a definition is equally possible if the vectors aren't functions of a natural number but of a real one; for instance, you might define $$ \langle f,g\rangle_{_\text{Bogus}} = \sum_{x\in \{1,3,\pi^2\}} f(x) \cdot g(x) $$ Only, that's not a scalar product: it's not positive definite; for instance, if $f$ has nonzero values only for negative $x$ then $\langle f,f\rangle_{_\text{Bogus}} = 0$ even though $f\neq 0$. Actually useful scalar products avoid this† by evaluating the functions everywhere on their domain. This is not really feasible with sums, but you can do it with integrals, like in the classical $L^2$ Hilbert space: $$ \langle f,g\rangle_{_{L^2(\mathbb{R})}} = \int_{\mathbb{R}}\!\mathrm{d}x \ f(x) \cdot g(x). $$ †You may object that this is still not positive definite: if you plug in a function that's zero almost everywhere but e.g. $f(0)=1$, then the $L^2$ norm yields $0$ although $f\neq0$. The elements of the $L^2$ Hilbert space are in fact not general square-integrable functions, but equivalence classes of such functions modulo differences on null sets; hence we can say $f=0$ if it is zero almost everywhere. (Alternatively, you can just say: the $L^2$ norm is a norm for all continuous functions.)
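Coming back to the discrete Fourier transform that motivated the question: from this point of view each DFT coefficient is literally a dot product of the signal with one basis vector, a sampled complex exponential. A small numpy sketch of that reading:

```python
import numpy as np

N = 8
n = np.arange(N)
signal = np.sin(2 * np.pi * n / N) + 0.5 * np.cos(4 * np.pi * n / N)

# The k-th DFT basis vector is the sampled exponential e^{-2*pi*i*k*n/N};
# the k-th coefficient is just the dot product of that vector with the signal.
coeffs = np.array([np.exp(-2j * np.pi * k * n / N) @ signal for k in range(N)])

assert np.allclose(coeffs, np.fft.fft(signal))  # matches the library DFT
print(np.round(coeffs, 3))
```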
Is it true that $\bar S$ is a submanifold with boundary of $N$? Somehow, I am not even sure whether $\bar S$ must be a topological manifold with boundary... I am trying to see if the smoothness of the boundary is the only obstruction for being a submanifold with boundary. (If we do not assume this, we can take $S$ to be the interior of a square.) REPLY [5 votes]: I'm assuming that your first sentence was meant to say "Let $N$ be a smooth manifold, $\dots$ ." Yes, it is always the case that $\overline S$ is a smooth submanifold with boundary in $N$. However, the boundary of $\overline S$ need not be equal to the boundary of $S$. See this post for an example in which it's not. To prove that $\overline S$ is a smooth submanifold with boundary, you just need to show that each point of $\overline S$ has a coordinate neighborhood in $N$ whose image is either an open subset of $\mathbb R^n$ or an open subset of the half-space $\mathbb R^{n-1} \times [0,\infty)$. Certainly each point of $S$ has such a neighborhood. For a point of $\partial S$, you can choose a slice chart $(U,\phi)$ for $\partial S$, such that $\phi(U)$ is a coordinate ball centered at the origin, and $\phi(\partial S\cap U)$ is the subset of $\phi(U)$ where $x^n=0$. Now you have to show that one of the following three possibilities holds: $\phi(\overline S\cap U) = \phi(U) \cap \{x^n\ge 0\}$, $\phi(\overline S\cap U) = \phi(U) \cap \{x^n\le 0\}$, or $\phi(\overline S\cap U) = \phi(U)$. I'll leave it to you to work out the details.<|endoftext|> TITLE: If $A^2=B^2$ and $AB=BA$ then $\det(A+B)=0$ QUESTION [8 upvotes]: Let $A, B \in M_n(\mathbb{C})$ be distinct matrices such that $A^2=B^2$ and $AB=BA$. Prove that $\det(A+B)=0$. From $A^2=B^2$ and $AB=BA$ we get $(A-B)(A+B)=0_n$, hence $\det(A+B)=0$ or $\det(A-B)=0$. Now suppose $A+B$ is invertible. Then ${(A+B)}^2$ is invertible, therefore $A^2 + 2AB + B^2$ is invertible and, from here, $A^2 + AB$ is invertible, hence $A, B$ are invertible. I can't get a contradiction from this supposition. REPLY [14 votes]: If $A+B$ is invertible, then $(A+B)(A-B)=0$ implies $A-B=0$, so that $A=B$, a contradiction. (I updated the answer to take into account the update of the question.)<|endoftext|> TITLE: Why don't these types of arguments fall under the domain of logic? QUESTION [5 upvotes]: I'm currently reading a textbook on symbolic logic and it says the following, which I find quite puzzling: The province of logic as an exact science does not include all types of deductive reasoning, i.e., all cases of reasoning in which conclusions are deduced from premises, or in which inconsistencies or non sequiturs in arguments are exhibited. It is concerned only with instances of deductive reasoning that are correct or erroneous, valid or invalid, by virtue of their form alone, and by virtue of nothing else. I have a few questions about this: Why doesn't logic include "all cases of reasoning in which conclusions are deduced from premises"? I thought this was the very mechanism by which a deductive argument operates, so shouldn't all arguments like this fall under the banner of logic? Why doesn't logic include those arguments in "which inconsistencies or non-sequiturs" are exhibited? Isn't a non-sequitur an invalid argument, i.e. an argument whose conclusion does not follow from its premises, and thus should it not be included in logic? Could the text be saying that deductive logic isn't concerned with actual examples/real instances of arguments, just the form?
(I think this is most likely it, but I need confirmation.) I think the exact science qualifier may have something to do with my misunderstanding. Could someone clarify what this passage means, as it's most likely that something has been lost in translation? Thank you. REPLY [10 votes]: Consider the following argument: Every apple is a piece of fruit. Therefore, if I have two apples then I have two pieces of fruit. This is true "by virtue of form alone". For example, if I replace "apple" with any noun, and "piece of fruit" by any noun, then the argument is still logically correct (although the first sentence might become false). For example, both of the following are logically correct: Every Ford Mustang is a car. Therefore, if I have two Ford Mustangs then I have two cars. Every cat is a piece of cheese. Therefore, if I have two cats then I have two pieces of cheese. These have the same "logical form" as the original argument. I don't need to know what the nouns mean to evaluate the logical correctness of the argument. Logically incorrect arguments Now, here is an argument which is not correct "by virtue of form alone". My horse has a reddish brown coat. Therefore my horse is a chestnut horse. That deductive argument is correct, in a common sense way, but only if I know what the words mean. It is not valid as a "logical argument" because it is not valid by virtue of "form alone". To make it logically correct, we need to add a hidden hypothesis: My horse has a reddish brown coat. Every horse with a reddish brown coat is a chestnut horse. Therefore my horse is a chestnut horse. It is common to distinguish in this way between statements or arguments which are true by virtue of form alone versus statements or arguments that rely on the meanings of their words or on hidden assumptions. Logic, from a common point of view, is about the first kind rather than the second kind of argument. But, of course, people make deductive arguments of both kinds. False hypotheses Of course, in order to know that the conclusion of an argument is true, we need to know something about the words. It is common to say that the conclusion of an argument has to be true if the following both hold: The argument is logically correct, and The hypotheses are true. This is supposed to be a sufficient condition for the conclusion to be true; it's not a necessary condition. From a certain point of view, "logic" is only concerned with the first bullet. The second bullet is also important, but not part of logic.<|endoftext|> TITLE: Making sense of angles in $\mathbb{R^n}$ QUESTION [5 upvotes]: One of the definitions of the dot product of two vectors is the following $$\vec{a} \bullet \vec{b}\ \ \stackrel{\text{def}}{=} \ \ \|\vec{a}\| \,\|\vec{b}\|\cos(\theta)$$ where $\theta$ denotes the angle between the vectors $\vec{a}$ and $\vec{b}$. But for that angle $\theta$ to 'make sense', both $\vec{a}$ and $\vec{b}$ must lie in the same plane, correct? i.e. there must exist some two-dimensional space, e.g. $\mathbb{R^2}$, that both $\vec{a}$ and $\vec{b}$ are an element of. Now we can always find a plane that both $\vec{a}$ and $\vec{b}$ fall in. But now the question arises: are we finding the angle $\theta$ with respect to the plane that they lie in (what I'm trying to say is: is $\theta$ denoting the angle between $\vec{a}$ and $\vec{b}$ in a plane that they both fall in?), or is it with respect to the basis vectors of the original vector space that they lie in initially?
Here's an example that will hopefully get across what I'm asking: take $\vec{a}, \vec{b} \in \mathbb{R^4}$. If we can find a plane that $\vec{a}$ and $\vec{b}$ both lie in, then 'the angle between them', $\theta$, makes sense. If not, then what is essentially being said is that $\theta$ is the angle between two vectors in four-dimensional space, and I'm not sure that an angle in $\mathbb{R^4}$ has any sort of meaning. My question then essentially boils down to the following. Are angles only defined (or do they only have meaning) in $\mathbb{R^2}$? If we have two vectors in $\mathbb{R^n}$, is the only way to find the 'angle between them' by first finding a plane (a two-dimensional vector space) that they both lie in, and then solving for the angle with respect to that plane? REPLY [2 votes]: As long as you are not considering oriented angles (i.e., you don't mind if one is "turning to the left" or "turning to the right", in some sense), there is nothing special about $\Bbb R^2$: angles can be defined in any $\Bbb R^n$, indeed in any real inner product space. This is because the dot product can be defined in a purely algebraic way (sum of all the products of corresponding coordinates), and then your first equation can serve as a definition of the angle between any pair of nonzero vectors (the fact that some real $\theta$ making the equation hold can be found is a direct consequence of the Cauchy-Schwarz inequality). Usually one takes the solution for $\theta$ that lies in the interval $[0,\pi]$ to be the angle; notably, as this method only computes the cosine of the angle, it cannot really tell opposite angles apart, and we choose the angle to be positive by definition. As for computing the angle "in the plane spanned by $\vec a$ and $\vec b$" (assuming they are not scalar multiples of one another), it is not quite clear what you mean by that, but in any case when made precise it should end up giving the same answer as the other method, or possibly the opposite angle. Assuming you know about angles in $\Bbb R^2$, one way to do this is to choose an orthonormal pair of vectors in the span of $\vec a,\vec b$, and use those to identify that span with $\Bbb R^2$. One easily shows that the "usual way" to define angles in $\Bbb R^2$ gives an angle of the same magnitude as using the method above, though depending on what the "usual way" is, it might distinguish positive and negative angles of the same magnitude (since in $\Bbb R^2$ "turning left" and "turning right" are things that can be clearly distinguished, unlike the situation in higher dimensions). Since the dot product of $\vec a,\vec b$ is unchanged under the identification of the span with $\Bbb R^2$, the "cosine" method gives the same result in $\Bbb R^n$ as after restriction to $\Bbb R^2$, and so in magnitude the same angle as taking the "usual angle" in the span of $\vec a,\vec b$. In particular this way of determining the angle does not depend on the choice of orthonormal vectors in the span, except possibly for the sign. And indeed, if one does define an angle with sign in $\Bbb R^2$, then that angle does depend on the choice of orthonormal vectors.
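As a quick numerical check that this "cosine method" agrees with the planar angle computed inside the span, here is a small sketch (assuming NumPy; the sample vectors in $\mathbb R^4$ are arbitrary):

```python
import numpy as np

# Angle between two vectors of R^4 via the cosine formula, compared with the
# usual planar angle computed in an orthonormal basis of their 2-plane.
a = np.array([1.0, 2.0, 0.0, -1.0])
b = np.array([0.5, -1.0, 3.0, 2.0])

theta = np.arccos(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Gram-Schmidt: orthonormal basis (e1, e2) of span{a, b}.
e1 = a / np.linalg.norm(a)
w = b - (b @ e1) * e1
e2 = w / np.linalg.norm(w)

# Coordinates of a and b in that plane, and the angle computed there.
a2 = np.array([a @ e1, a @ e2])
b2 = np.array([b @ e1, b @ e2])
theta_plane = np.arccos(a2 @ b2 / (np.linalg.norm(a2) * np.linalg.norm(b2)))

assert np.isclose(theta, theta_plane)
```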
It is because of this sign dependence that in higher dimensions no "oriented angle" can be defined in a coherent way.<|endoftext|> TITLE: Converse of Fubini's theorem QUESTION [6 upvotes]: Let $f \colon [0,1]^2 \to \mathbb R$ be a Lebesgue measurable function such that for all measurable subsets $A, B \subseteq [0,1]$ the iterated integrals exist and are equal to each other: $$\int_A \int_B f(x,y) \,\textrm{d}y \,\textrm{d}x = \int_B \int_A f(x,y) \,\textrm{d}x \,\textrm{d}y.$$ Is $f$ necessarily $\lambda^2$-integrable over its domain, i.e. $[0,1]^2$? Here is the explanation of why I'm unsure about the answer to the above question. Sierpiński showed that there exists a set $A \subseteq [0,1]^2$ such that $A \not \in \mathfrak L^2$, but every section $A_x$ and $A^y$ is empty or consists of just one point. Therefore the iterated integrals (again, from above) for $f = \chi_A$ exist, but $\chi_A$ isn't $\lambda^2$-integrable. REPLY [3 votes]: It is false that $f$ is necessarily integrable. Consider that if $f$ is non-negative, then Tonelli's Theorem applies, which yields the result of Fubini's Theorem for $f$ (see this Wikipedia page). But $f$ can fail to be integrable. EDIT (to give a counterexample): Write down $$ f(x,y)=\frac1{x^2+y^2}. $$ By Tonelli's Theorem, the result of Fubini's applies. But a calculation gives $$ \int\limits_0^1\int\limits_0^1|f(x,y)|\,dy\,dx\geq c\int\limits_0^{1/2}\frac1{r^2}r\,dr=c\ln r\Big|_0^{1/2}=+\infty, $$ where the first inequality holds since the integral of $f$ over the square $[0,1]^2$ is larger than or equal to the integral of $f$ over the upper quarter disk of radius $1/2$. So $f$ is not integrable.<|endoftext|> TITLE: 'Intuitive' difference between Markov Property and Strong Markov Property QUESTION [22 upvotes]: It seems that similar questions have come up a few times regarding this, but I'm struggling to understand the answers. My question is a bit more basic: can the difference between the strong Markov property and the ordinary Markov property be intuited by saying: "the Markov property implies that a Markov chain restarts after every iteration of the transition matrix. By contrast, the strong Markov property just says that the Markov chain restarts after a certain number of iterations given by a hitting time T"? Moreover, would this imply that with the normal Markov property a single transition matrix will be enough to specify the chain, whereas if we only have the strong property we may need T different transition matrices? Thanks everyone! REPLY [13 votes]: Here's an intuitive explanation of the strong Markov property, without the formalism: If you define a random variable describing some aspect of a Markov chain at a given time, it is possible that your definition encodes information about the future of the chain over and above that specified by the transition matrix and previous values. That is, looking into the future is necessary to determine if your random variable's definition is being met. Random variables with the strong Markov property are those which don't do this. For example, if you have a random walk on the integers, with a bias towards taking positive steps, you can define a random variable as the last time an integer is ever visited by the chain. This encodes information about the future over and above that given by the previous values of the chain and the transition probabilities, namely that you never get back to the integer given in the definition. Such a random variable does not have the strong Markov property.
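This failure is easy to see in a simulation. The following is a minimal sketch (plain Python; the bias, horizon and trial count are illustrative choices of mine): for an upward-biased walk, the step taken at the last visit to $0$ is almost always $+1$, not $+1$ with the unconditional probability.

```python
import random

# Biased walk on the integers with P(step = +1) = 2/3. Let T be the last
# time the walk sits at 0 during the run. T looks into the future, so it is
# not a stopping time, and the step at time T is nearly always +1 (a -1
# step would lead the upward-drifting walk back to 0 with probability 1).
p, steps, trials = 2 / 3, 2000, 2000
random.seed(1)

plus_at_last_visit = 0
for _ in range(trials):
    pos, path, moves = 0, [0], []
    for _ in range(steps):
        step = 1 if random.random() < p else -1
        moves.append(step)
        pos += step
        path.append(pos)
    T = max(t for t, x in enumerate(path[:-1]) if x == 0)  # last visit to 0
    if moves[T] == 1:
        plus_at_last_visit += 1

print(plus_at_last_visit / trials)  # close to 1.0, not 2/3
```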
So with regards to your two questions: 1) The strong Markov property states that after a random variable has been observed which was defined without encoding information about the future, the chain effectively restarts at the observed state. (I'm not sure if this was exactly what you were getting at.) 2) If you define a variable for which the strong Markov property does not hold, and you know its value, you could use this knowledge to get a better estimate of the future state visitations than if you relied on the transition matrix and current observations alone. The chain is still governed by its transition matrix and you don't need more of them to describe it, but you would have more information than if your variable were strongly Markovian.<|endoftext|> TITLE: Equivalence of skew-symmetric matrices QUESTION [10 upvotes]: Let $N=\{1,\dots,n\}$ and let $A,B$ be $n\times n$ skew-symmetric matrices such that it is possible to permute some rows and some columns of $A$ to get $B$. In other words, for some permutations $g,h: N\rightarrow N$, $$A_{i,j}=B_{g(i),h(j)}$$ for all $1\leq i,j\leq n$. Must there exist a permutation $f:N\rightarrow N$ such that $$A_{i,j}=B_{f(i),f(j)}$$ for all $1\leq i,j\leq n$? For example, let $$A=\begin{pmatrix} 0 & 3 \\ -3 & 0 \end{pmatrix} , B=\begin{pmatrix} 0 & -3 \\ 3 & 0 \end{pmatrix} $$ If we switch the rows and also switch the columns, we get from $A$ to $B$. And there exists a permutation $f$ with $f(1)=2,f(2)=1$ such that $A_{i,j}=B_{f(i),f(j)}$ for all $1\leq i,j\leq 2$. There exists an example with $g\neq h$. Let $$A=\begin{pmatrix} 0 & 0 & 2 & -2 \\ 0 & 0 & -2 & 2 \\ -2 & 2 & 0 & 0 \\ 2 & -2 & 0 & 0 \end{pmatrix} , B=\begin{pmatrix} 0 & 0 & -2 & 2 \\ 0 & 0 & 2 & -2 \\ 2 & -2 & 0 & 0 \\ -2 & 2 & 0 & 0 \end{pmatrix} $$ One possibility for $g,h$ is $g(i)=i$ for all $i$, $h(1)=2,h(2)=1,h(3)=4,h(4)=3$. In this case we can let $f(1)=2,f(2)=1,f(3)=3,f(4)=4$. REPLY [3 votes]: Here is a simple explanation of why the matrices $A$ and $B$ in heptagon's answer are not permutation-similar. Suppose the contrary. Then their positive parts must be permutation-similar too, i.e. $$ A_+=\pmatrix{0&I&I\\ 0&0&I\\ 0&0&0} \ \sim\ B_+=\pmatrix{0&J&J\\ 0&0&J\\ 0&0&0} $$ via some permutation. This is impossible, because the directed graph represented by the adjacency matrix $A_+$ consists of the paths $1\to3\to5$, $1\to5$, $2\to4\to6$ and $2\to6$, so that a node can reach at most two other different nodes, while $B_+$ consists of the paths $1\to4\to5$, $1\to6$, $2\to3\to6$ and $2\to5$, so that each of node 1 and node 2 can reach three other different nodes. Alternatively, if $A_+$ is permutation-similar to $B_+$, the directed graphs represented by these two adjacency matrices must be isomorphic. So, if we turn each edge into an undirected one, the two undirected graphs must be isomorphic too. Yet this is impossible, because the undirected graph represented by $A_++A_+^T$ has two disjoint connected components $\{1,3,5\}$ and $\{2,4,6\}$, but $B_++B_+^T$ is a connected graph.<|endoftext|> TITLE: Solve $2\ddot{y}y - 3(\dot{y})^2 + 8x^2 = 0$ QUESTION [8 upvotes]: Solve the differential equation $$2\ddot{y}y - 3(\dot{y})^2 + 8x^2 = 0$$ I know that we have to use some smart substitution here, so that the equation becomes linear. The only thing I came up with is a smartly guessed particular solution: $y = x^2$. If we plug this function in, we get: $$2\cdot2\cdot x^2 - 3(2x)^2 +8x^2 = 4x^2 - 12x^2 + 8x^2= 0$$ I made a mistake.
The coefficients were different in the exam: $$ \begin{cases} 3\ddot{y}y + 3(\dot{y})^2 - 2x^2 = 0, \\ y(0) = 1, \\ \dot{y}(0) = 0. \end{cases} $$ Does it make the solution easier? REPLY [2 votes]: $$ \begin{cases} 3y''y + 3(y')^2 - 2x^2 = 0, \\ y(0) = 1, \\ y'(0) = 0. \end{cases} $$ $$y''y+y'^2=(y'y)'\quad\to\quad 3(y'y)'=2x^2$$ $$3y'y=\frac{2}{3}x^3+c_1$$ $y'(0)=0\quad\to\quad c_1=0$ $$y'y=\frac{2}{9}x^3$$ $$2y'y=(y^2)'=\frac{4}{9}x^3$$ $$y^2=\frac{1}{9}x^4+c_2$$ $y(0)=1\quad\to\quad c_2=1$ $$y=\sqrt{\frac{1}{9}x^4+1}$$<|endoftext|> TITLE: How to obtain a isometric immersion of $T^n$ (the flat torus) into $\mathbb{R}^{2n}?$ QUESTION [5 upvotes]: I am trying to construct an isometric immersion of $T^n := S^1\times\ldots \times S^1$ into $\mathbb{R}^{2n}.$ I am considering the flat torus, i.e., on each $S^1$ I consider the metric induced by $\mathbb{R}^2$, and on $T^n$ the product metric. The way I am trying to obtain this is to define the map: $$f : T^n \to \mathbb{R}^{2n} $$ $$f(z_1,\ldots,z_n) := (a_1,b_1,\ldots,a_n,b_n),$$ where $z_j \equiv (a_j,b_j) \equiv a_j + ib_j$ and $|z_j| = 1.$ Here $i$ is the imaginary unit. Once I show that this map is an immersion, the claim follows by pulling back the metric of $\mathbb{R}^{2n}.$ Is this ok? I am not sure about this argument, since I cannot give a clean proof that $f$ is an immersion without calculating in coordinate charts. REPLY [5 votes]: $\newcommand{\dd}{\partial}$Your $z_{j}$ notation is potentially vexing because the $z_{j}$ are unit complex numbers rather than real numbers. You might instead pick local coordinates $\theta_{j}$ on the torus so that $z_{j} = \exp(i\theta_{j})$ for each $j$, and define $$ f(\theta_{1}, \dots, \theta_{n}) = (\cos\theta_{1}, \sin\theta_{1}, \dots, \cos\theta_{n}, \sin\theta_{n}). $$ It's elementary to show that the (vector-valued) partial derivatives $df(\theta)(\mathbf{e}_{j}) = \dd f/\dd \theta_{j}$ form an orthonormal set at each point, so that $f$ is a local isometry (and, particularly, is an immersion).<|endoftext|> TITLE: What is the proper way to extend the Fibonacci numbers to negative numbers? QUESTION [32 upvotes]: I have a feeling this question is a duplicate, but it's not coming up in "Questions that may already have your answer." We all know very well that $F(0) = 0$, $F(1) = 1$ and $F(n) = F(n - 2) + F(n - 1)$ for all $n > 1$. I'm wondering about $n < 0$. My first thought was $F(-n) = -F(n)$, which is appealing from a multiplicative point of view, as it seems to preserve certain identities, like $F(n)^2 = F(n - 1) F(n + 1) - (-1)^n$. But it doesn't quite make sense from an additive point of view; it doesn't seem to work both "forwards" and "backwards." For example, it would give us $F(-1) + F(0) = -1 \neq F(1)$. How do we extend $F(n)$ to negative $n$ so as to maintain both the related identities and the basic defining identity? REPLY [11 votes]: By now you know very well how to determine the Fibonacci numbers for negative indices, be it by the recursion formula, the Binet formula, or various others. My contribution is to show you what it looks like. The figure below is my crab-claw spiral; it consists of a Fibonacci spiral for $f_{-32}:f_{32}$. The image on the left is the complete spiral, that on the right is a detail at approximately $10^6$-fold zoom. The cusps are due to the alternating signs at the negative indices.<|endoftext|> TITLE: Solving inequality for convex functions.
QUESTION [6 upvotes]: A function $f : I \rightarrow \mathbb R$ on an interval $I$ is convex if $f((1-t)x+ty)\le (1-t)f(x)+tf(y), \forall x,y \in I, t \in [0,1]$. Assume now that $I$ is an open interval. Show that if $f$ is convex then for each $c \in I$ there exists $m \in \mathbb R$ such that $m(x − c) + f(c) \le f(x)$ for all $x \in I$, and if in addition $f$ is differentiable at $c$ then $f'(c)$ is the unique $m$ that works. In general, must $m$ be unique? (We have previously shown that convex functions are continuous.) I think I'm missing the point of the question. Surely, by the mean value theorem we can find some $y\in (x,c)$ such that $f'(y)=\frac{f(x)-f(c)}{x-c}$; setting $m=f'(y)$ we get $m=\frac{f(x)-f(c)}{x-c}$, so $m(x-c)+f(c)=f(x)$, and such an $m$ fits the inequality we're asked to show. I also cannot see why $f'(c)$ would be the unique solution, or why we would ever get a unique solution of the inequality at all, for that matter. Clearly I'm misunderstanding the question, so if someone could point out where, I would appreciate that. Thank you. REPLY [4 votes]: Here is an idea of how you can prove it. Take a point $a\in I$ to the left of $c$ and a point $b\in I$ to the right, i.e. $a<c<b$.<|endoftext|> TITLE: "If $f$ is a linear function of the form $f(x)=mx+b$, then $f(u+v)=f(u)+f(v).$" True or false? QUESTION [9 upvotes]: The rest of the question states the following: "If true, give an explanation as to why. If false, give a counter example." Here is the statement: If $f$ is a linear function of the form $f(x)=mx+b$, then $f(u+v)=f(u)+f(v).$ I know that this statement is false because I arbitrarily chose a function of the form $f(x)=mx+b$ and plugged in random numbers for $u$ and $v$. My actual question is this: is it sufficient to just use variables for counter-examples, or must actual numbers be used? Here is my work: False. Counter-example: Give $m$ and $b$ the values of $1/2$ and $3$, respectively. Therefore, $f(x)=mx+b$ becomes $f(x)=x/2+3$. Give $u$ and $v$ the values of $2$ and $4$, respectively. Now we have: $f(2)+f(4)=(1/2)(2)+3+(1/2)(4)+3=9.$ $f(2+4)=f(6)=(1/2)(6)+3=6.$ $9\neq6$, so the statement is false. REPLY [10 votes]: Yes, your counterexample suffices. However, there is an easier one: Let $b \neq 0$. Then $$ f(0 + 0 ) = b \neq 2b = f(0) + f(0), $$ hence $f(u+v) = f(u) + f(v)$ does not hold for arbitrary $u,v$. On the other hand, if $b = 0$, it's easy to see that $f(u+v) = f(u) + f(v)$ for all $u,v$.<|endoftext|> TITLE: Undergraduate resources for Number Theory and Cryptography QUESTION [5 upvotes]: For the absolute life of me, I cannot seem to wrap my head around the proofs given in my number theory and cryptography class. Maybe it's the teacher, or the textbook, both or neither, but this is causing me great concern for two reasons. The first is obvious: I hate solving math problems without a full, comprehensive understanding of the concepts. Second, I have the feeling I'll be asked to prove concepts in my exams. So how do I go about this? Does anyone have any textbooks or something (or anything) that could help me grasp all this better? To clarify, the main concepts covered in class are GCDs, the Euclidean algorithm, coprimality, the Euler-Fermat theorem, etc. These then bleed into cryptosystems such as RSA and tools such as the Chinese Remainder Theorem. Thanks! REPLY [3 votes]: I took an elementary number theory course this past spring semester. The textbook we used for the course was "Elementary Number Theory" by Gareth A. Jones and J. Mary Jones.
All of the topics you are looking to learn are covered in detail throughout the book. The book is heavily proof-based but also has questions worked out in detail. In my opinion, the best part of this book was the exercise portion of it. Every question in the textbook had a solution with work in the back of the book. I would definitely recommend looking at this book. If you need any additional exercises to do, my professor has put up our problem sets on the course web page. The problem sets mostly follow the order of the textbook, and at the beginning of each problem set it mentions the reading. This should allow you to follow along or get extra practice if you need it. Hope that helps!<|endoftext|> TITLE: How can the stabilizer of a group element have more than one element itself? QUESTION [6 upvotes]: I suspect I've misunderstood the concept of a stabilizer. Take a group $G$ and some $g \in G$. Now consider $s_1, s_2 \in Stab(g)$. The stabilizer is defined such that $s_1 g = s_2 g = g$, but if we operate on the right by $g^{-1}$, we obtain $s_1 = s_2 = e$ where $e$ is the identity. Thus $Stab(g) = \{e\}$. This is clearly not correct, so I suspect I've failed to fully understand the concept. Where did I go wrong? REPLY [5 votes]: The problem is that $g^{-1}g$ is not necessarily $e$! The action of $G$ on itself may be very different from the group law (think about conjugation, for example: in this case, $g^{-1}.g=g^{-1}gg=g\neq e$ in general). REPLY [4 votes]: The stabilizer is associated with a group action. Actions aren't guaranteed to have inverses, so acting on the right by $g^{-1}$ is nonsensical. In general, given some group $G$ acting on a set $X$, the stabilizer is the set of elements of $G$ that act as the identity on $X$. Consider the action given by $h\mapsto ghg^{-1}$ (conjugation by $g$). An element $h$ will be invariant under this action if $ghg^{-1}=h$, or if $gh=hg$, i.e. if $g$ commutes with $h$. It follows that the center of $G$ stabilizes the above action.<|endoftext|> TITLE: How many types of mathematical information are there? QUESTION [7 upvotes]: I am aware of at least two types of mathematical information: Shannon Information, which is the negative of entropy (i.e. a loss of entropy by $n$ bits is precisely a gain of Shannon information of $n$ bits). One also has mutual and conditional information of two random variables, and information theory is named for this type. Fisher information. This type is apparently very commonly used and considered in statistics, to the best of my limited knowledge. It is also supposed to be the source of the name for information geometry (since the Fisher information metric is the Riemannian metric). My question: Are there any other mathematical concepts of information which I am missing? How many types of mathematical information exist? REPLY [2 votes]: Certainly the two you mentioned are important, and perhaps the most widely known. For categorical data there is Sorenson's similarity index. (Roughly, the probability that a randomly selected individual in my population will belong to the same category as I.) See Wikipedia on categorical data for several more. In a sense random data have less structure or 'information' than data with some nonrandom structure. Runs tests and autocorrelation functions are ways of judging randomness.
In his famous assertion that it takes about 7 shuffles to put a deck of cards into something like random order, Persi Diaconis used the concept of rising sequences as a measure of information that might be exploited by a player. (Rising sequences have been used by magicians doing card tricks for over a century.) Google Diaconis shuffle to find a NYT article on this topic and the paper by Bayer and Diaconis with wonderful mathematical details. Digression: I will leave the definition of 'rising sequence' to Bayer and Diaconis. A deck in order from 1 through 52 has one rising sequence. One shuffle (cut and riffle) results in two rising sequences, two shuffles usually in four. A randomly permuted deck averages 26.5 rising sequences. The figure below, based on simulations, shows how the distribution of the number of rising sequences comes closer to the distribution for a random deck (lower right) as the number of shuffles increases. Code makers and breakers have several measures of information content in a string of characters. Some of these can be found in readily available books and papers on cryptology. I would not be surprised if some of the most useful ones have been discovered by mathematicians at government agencies and are not publicly available. There is a sense in which a Markov Chain can contain more information than a sequence of independent random variables. The whole idea of a Markov Chain is that the current observation may be useful in predicting the next one. This is a list of a few kinds of, measures of, and viewpoints on information that are in practical use. I suppose that some of them can be shown to be related to (perhaps even equivalent to) Fisher or Shannon information and that some are not. My purpose is not to give you an exhaustive list, but just to suggest that the list of measures of information is quite large and may be endless. As big data analysis becomes increasingly popular (and one hopes better focused and organized), I suspect that measurement of information content in large datasets will become a substantial field of investigation.<|endoftext|> TITLE: Galois group over $\mathbb C(t)$ QUESTION [5 upvotes]: Let $K=\mathbb C(T)$ be the rational function field over the complex field $\mathbb C$, and $L$ be the splitting field of $f(X)=X^6+2TX^3+1\in K[X]$. (1) Find $\mathrm{Gal}(L/K)$. (2) Find all field extensions of degree $3$ contained in $L/K$. I find $L = K(\alpha)\ (\alpha =\sqrt[3]{T-\sqrt{T^2-1}} )$ and $\mathrm{Gal}(L/K) \cong S_3$ (is this right?), but I can't solve (2). How do I solve (2)? REPLY [3 votes]: Write $E= K(y, z)$, where $y$ and $z$ are the roots of $$ X^2 + 2T X + 1.$$ Then, $E$ is a quadratic extension of $K$, and, as you implicitly point out in the comments, $yz = 1$. Then, as you (explicitly!) point out, the splitting field $L/K$ of the question is the splitting field over $E$ of both $X^3 -y=0$ and $X^3-z=0$: if $u^3=y$ and $uv =1$, then $v^3= z$: $L=K(u,v)=K(u)=K(v)$.
If $\omega$ is a primitive third root of unity, the roots of $f$ (of the question) are $$ u,\ \omega u,\ \omega^2 u,\ v, \omega v,\text { and }\omega^2 v.$$ The Galois group $G$ of $L/K$ is (isomorphic to) $S_3$, and generated by $\sigma$ (of order three) and $\tau$ (of order two): $$\sigma u =\omega u \text { and } \tau u = v.$$ The elements $\sigma$ and $\tau$ are field automorphisms over $K$, and, as $L=K(u)$, are determined by their action on $u$: for instance, $$\tau v = u,\text{ and }\sigma v = \omega^2 v.$$ Calculating, one also finds $$\sigma \tau \sigma^{-1} u = \omega v \text { and } \sigma^2 \tau \sigma^{-2} u = \omega^2 v.$$ By the Galois correspondence [the lattice of sub-groups of $G$ and the lattice of sub-fields of $L/K$ are (anti-)isomorphic, under $H\mapsto L^H$, the fixed field of $H$], the three cubic subfields of $L$ correspond to the fields fixed respectively by (the three sub-groups, each generated by) $\tau$, $\sigma \tau \sigma^{-1}$, and $\sigma^2\tau\sigma^{-2}$. (By the preceding, $\sigma$ and $\tau$ don't commute - hence the three sub-fields.) Explicitly, if $a = u + v \in L$, then $\tau a = a$. Calculating, one finds that if $$b = \sigma a = \omega ( u + \omega v) \text { and }c =\sigma^2 a =\omega^2(u + \omega^2 v),$$ then $$ ( X - a) (X-b)(X-c) = X^3 -3X +2T,$$ and the three cubic sub-extensions are $ K(a)$, $K(b)$, and $K(c)$.<|endoftext|> TITLE: A nice formula for the Thue–Morse sequence QUESTION [25 upvotes]: The Thue–Morse sequence$^{[1]}$$\!^{[2]}$ $t_n$ is an infinite binary sequence constructed by starting with $t_0=0$ and successively appending the binary complement of the sequence obtained so far: $$\begin{array}l 0\\ 0&\color{red}1\\ 0&1&\color{red}1&\color{red}0\\ 0&1&1&0&\color{red}1&\color{red}0&\color{red}0&\color{red}1\\ 0&1&1&0&1&0&0&1&\color{red}1&\color{red}0&\color{red}0&\color{red}1&\color{red}0&\color{red}1&\color{red}1&\color{red}0\\ \hline 0&1&1&0&1&0&0&1&1&0&0&1&0&1&1&0&1&0&0&1&0&1&1&\dots\\ t_0&t_1&t_2&t_3&t_4&\dots\!\!\! \end{array}$$ It has many interesting properties: it is aperiodic, cube-free, shows the parity of the number of $1$'s in the binary representation of a natural number, has connections to the Fabius function, the hypergeometric function, etc. There is a nice formula for this sequence that uses only elementary functions, binomial coefficients and finite summation: $$t_n=\frac43\,\sin^2\left(\frac\pi3\left(n-\sum_{k=1}^n(-1)^{\binom n k}\right)\right)=\operatorname{mod}\left(2n+\sum_{k=1}^n(-1)^{\binom n k},\,3\right).$$ Unfortunately, I could not find a proof of this formula anywhere and could not construct it myself. So, I'm asking for your help with this. REPLY [9 votes]: An alternative proof. For the sake of typesetting I'm going to write $C(n,k)$ instead of $\binom nk$. Since we know $t_k$ is always $0$ or $1$, it suffices to show that $$t_n\equiv 2n+\sum_{k=1}^n(-1)^{C(n,k)}\pmod3\ .\tag{$*$}$$ It is known that the Thue-Morse sequence is defined by $$t_0=0\ ,\quad t_{2n}=t_n\ ,\quad t_{2n+1}=1-t_n\ .$$ It is clear that the RHS of $(*)$ satisfies the initial condition; I shall show that it also satisfies the recurrence. Lemma: $C(2n,2k)\equiv C(n,k)\pmod2$. Proof. Count the number of subsets of size $2k$ in a set of size $2n$ by first choosing $j$ elements of the first $n$. We have $$\eqalign{C(2n,2k) &=\sum_{j=0}^{2k}C(n,j)C(n,2k-j)\cr &=C(n,k)^2+\sum_{j=0}^{k-1}\bigl(C(n,j)C(n,2k-j)+C(n,2k-j)C(n,j)\bigr)\cr &\equiv C(n,k)^2\pmod2\cr &\equiv C(n,k)\pmod2\ .\cr}$$ Lemma: $bC(a,b)=aC(a-1,b-1)$. Proof. Well known.
It follows easily that $$\displaylines{ C(2n,2k-1)\equiv (2k-1)C(2n,2k-1)\equiv2nC(2n-1,2k-2)\equiv0\pmod2\ ;\cr C(2n+1,2k)=C(2n,2k)+C(2n,2k-1)\equiv C(n,k)\pmod2\ ;\cr C(2n+1,2k+1)\equiv(2k+1)C(2n+1,2k+1)=(2n+1)C(2n,2k)\equiv C(n,k)\pmod2\ .\cr}$$ In $(*)$ we now have $$\eqalign{RHS(2n) &=4n+\sum_{j=1}^{2n}(-1)^{C(2n,j)}\cr &=4n+\sum_{k=1}^n(-1)^{C(2n,2k-1)}+\sum_{k=1}^n(-1)^{C(2n,2k)}\cr &=4n+n+\sum_{k=1}^n(-1)^{C(n,k)}\cr &\equiv RHS(n)\pmod3\cr}$$ and $$\eqalign{RHS(2n+1) &=4n+2+\sum_{j=1}^{2n+1}(-1)^{C(2n+1,j)}\cr &=4n+1+\sum_{k=1}^n(-1)^{C(2n+1,2k)}+\sum_{k=1}^n(-1)^{C(2n+1,2k+1)}\cr &=4n+1+2\sum_{k=1}^n(-1)^{C(n,k)}\cr &\equiv1-RHS(n)\pmod3\ .\cr}$$ As explained above, this completes the proof. Observation. Continuing to simplify modulo $3$ we can write the formula as $$\eqalign{t_n\equiv 2n+\sum_{k=1}^n(-1)^{C(n,k)} &\equiv\sum_{k=1}^n\left(-1+(-1)^{C(n,k)}\right)\cr &\equiv\sum_{\textstyle{k=1\atop C(n,k)\ \rm odd}}^n(-2)\cr &\equiv\sum_{\textstyle{k=1\atop C(n,k)\ \rm odd}}^n1\cr &\equiv\#\{k\mid 1\le k\le n\ \hbox{and $C(n,k)$ is odd}\}\ .\cr}$$<|endoftext|> TITLE: Why does the tan inverse integral have a 1/a but not the sin inverse one? QUESTION [6 upvotes]: $$\begin{align*} \int \frac{du}{\sqrt{a^2-u^2}} &= \sin^{-1}\left(\frac{u}{a}\right) + K\\ \int \frac{du}{a^2+u^2} &= \frac{1}{a}\tan^{-1}\left(\frac{u}{a}\right) + K \end{align*}$$ I always wondered why, but I had no idea. REPLY [9 votes]: I think the most intuitive way to think of this is in terms of dimensional analysis. Since $u^2$ and $a^2$ are added, they must have the same dimensions, so $u$ and $a$ must as well. Say, for specificity, that $u$ and $a$ are both lengths. Then the left integrand is dimensionless (as $du$ and $\sqrt{a^2-u^2}$ are both lengths), so it can integrate to a pure number. But the right integrand has units of inverse length (as $a^2+u^2$ is an area). So it must integrate to some quantity which also has units of inverse length; as $a$ is the only constant around with units of length, it's natural to write this as $\frac{1}{a}$ times some function of $u/a$, which is a pure number.<|endoftext|> TITLE: Intuition for pullback operation QUESTION [7 upvotes]: In manifolds and complex geometry there is this thing called the pullback. Usually when I see it, it's going backwards on maps that are going forwards. I've been told that it is just a composition of functions. I just need help understanding this concept. REPLY [7 votes]: I have a sneaking suspicion that your deeper question is something like, "If $f:M \to N$ is a mapping, why do mathematicians focus on pulling back forms from $N$ to $M$ rather than (say) pushing forward vectors from $M$ to $N$? Where does the formal asymmetry come from?" Briefly, the asymmetry comes from the requirement/constraint of single-valuedness in the definition of a mapping. To see why, first consider linear transformations and their effect on vectors and covectors; then consider smooth maps and their differentials. If $T:U \to V$ is a linear transformation, then for each vector $u$ in $U$, the image $T(u)$ is a vector in $V$. Dually, if $\lambda:V \to W$ is linear, the composition $T^{*}(\lambda) := \lambda \circ T:U \to W$ is linear. Particularly, $T$ "naturally" maps the dual space of $V$ to the dual space of $U$. By contrast, there is no "natural" way to send a general linear mapping $\lambda:U \to W$ to a linear mapping $T_{*}(\lambda):V \to W$ (unless $T$ is an isomorphism, in which case there is the pullback $(T^{-1})^{*}$).
The best we can do is define an "induced mapping" $T_{*}(\lambda)$ under the additional hypothesis that $\lambda$ is constant on the level sets of $T$, i.e., that if $T(u_{1}) = T(u_{2})$, then $\lambda(u_{1}) = \lambda(u_{2})$; in this event, we can define $T_{*}(\lambda)(v) = \lambda(u)$, where we pick an arbitrary $u$ with $T(u) = v$. Now suppose $f:M \to N$ is a smooth mapping of smooth manifolds. If $u$ is a tangent vector to $M$ at a point $p$, there is a "push-forward" tangent vector $f_{*}(u) = Df(p)u$ at $f(p)$ in $N$. Does this mean that vector fields push forward under $f$? No, it does not: A single point $q$ in $N$ may be the image of two or more points of $M$, say $f(p_{1}) = f(p_{2}) = q$. If $X$ is a vector field on $M$, there is no guarantee that $$ Df(p_{1})X(p_{1}) = Df(p_{2})X(p_{2}), $$ and so no guarantee that $f_{*}(X)$ is single-valued. (Or, $q$ may not be in the image of $f$ at all, so that $f_{*}(X)(q)$ literally has no value.) Instead, we must view $f_{*}(X)$ as a section of the "pullback bundle" $f^{*}(TN) \to M$, whose fibre over a point $p$ of $M$ is the tangent space $T_{f(p)}N$. This is meaningful and useful, but not as straightforward as we might naively have hoped. By contrast, suppose $\omega$ is a smooth differential form on $N$. There is a well-defined smooth form $f^{*}\omega$ on $M$, defined in naive analogy to the pointwise situation, by $$ f^{*}\omega(p)(u_{1}, \dots, u_{k}) = \omega\bigl(f(p)\bigr)(f_{*}u_{1}, \dots, f_{*}u_{k}). $$ Loosely, "everything in this definition works in our favor". Particularly, $f^{*}\omega$ has the same degree (here $k$) as $\omega$, and obeys the same exterior calculus: $$ f^{*}(\omega \wedge \eta) = f^{*}\omega \wedge f^{*}\eta,\qquad d(f^{*}\omega) = f^{*}(d\omega). $$ If you'll permit (or forgive) a moment of rhapsodic philosophizing, this is one of those fundamental places in geometry where an apparent two-fold symmetry gets broken. In linear algebra, mappings and their dual mappings are, well, dual. There's arguably no reason to prefer one over the other. When we consider families of linear transformations arising as differentials of a single smooth mapping $f$, however, the formal symmetry between a (push-forward) linear transformation $f_{*}:T_{p}M \to T_{f(p)}N$ and its (pullback) dual $f^{*}:T_{f(p)}^{*}N \to T_{p}^{*}M$ breaks, causing us to prefer the pullback, essentially because mappings (i.e., $f$) are single-valued, but not generally bijective (invertible).<|endoftext|> TITLE: Sum of powers of four QUESTION [5 upvotes]: Let $0\leq a_1\leq a_2\leq\dots\leq a_n\leq b$ be integers such that $4^b\leq 4^{a_1}+\dots+4^{a_n}$. Can $4^b$ always be written as a sum of some subset of $4^{a_1},\dots,4^{a_n}$? I think it might be possible to perform some sort of greedy algorithm, where we collect four identical powers of $4$, whenever such exist, into a greater power of $4$. We can perform this process starting from the smallest power of $4$ until it is no longer possible, so we end up with at most three copies of each power of $4$. REPLY [3 votes]: By induction on $b\geq 0$. The base step is trivial. For $b\geq 1$, assume that the statement is true for $b-1$. We have two cases. i) If $a_1\geq 1$, take $A_i=a_i-1$; by the inductive step $4^{b-1}$ can be written as a sum of some subset of $4^{A_1},\dots,4^{A_n}$, and then by multiplying by $4$ we get a representation of $4^{b}$.
ii) Assume that $a_1=\dots=a_r=0$ and $a_{r+1}\geq 1$, and assume that $4^{b-1}$ can not be written as a sum of some subset of $4^{A_{r+1}},\dots,4^{A_n}$ (otherwise we are done as in i)). This means that $$4^{b-1}> 4^{A_{r+1}}+\dots+4^{A_n}\quad \Rightarrow \quad 4^{b}> 4^{a_{r+1}}+\dots+4^{a_n}.$$ But we know that $4^{b}\leq r+4^{a_{r+1}}+\dots+4^{a_n}$. Hence $$4^{b}=4^{a_{1}}+\dots+4^{a_j}+4^{a_{r+1}}+\dots+4^{a_n}$$ where $1\leq j=4^{b}-(4^{a_{r+1}}+\dots+4^{a_n})\leq r$. P.S. This approach also gives you a recursive algorithm which generates such representations.<|endoftext|> TITLE: An analogue of direct sum decomposition of a cofinite subset of integers QUESTION [8 upvotes]: Let $S \subseteq \mathbb Z$ be such that $\mathbb Z \setminus S$ is finite. Is it true that there exist infinite $S_1,S_2 \subseteq \mathbb Z$ such that $S_1+S_2=S$ and for every $s \in S$, there exist unique $s_1\in S_1, s_2 \in S_2$ such that $s=s_1+s_2$? REPLY [2 votes]: Yes, but for a trivial reason. Just enumerate $S=\{c_1,c_2,\dots\}$ and construct the sets $S_1=\{a_1,a_2,\dots\}$ and $S_2=\{b_1,b_2,\dots\}$ inductively as follows. Suppose that $a_j,b_j$ are already defined for $j<n$.<|endoftext|> TITLE: Motivating examples of puzzles that can only be solved by using deep mathematics? QUESTION [7 upvotes]: I need this kind of example to motivate my friend to study mathematics. When he was a high school student, he was really good at math, but since he went to college (he majors in math), he has become unmotivated and lazy. He has always complained that higher mathematics is too abstract and therefore not fun anymore. (This is partly because the math textbooks written in Vietnamese are so boring, with a monotonous theorem-proof-theorem-... pattern and without any interesting examples or problems.) However, my friend is very enthusiastic about solving puzzles and elementary math problems. So, could you give me some examples of puzzles that require some deep math to solve, which illustrate my belief that "clever and quick is not enough, deep is more important"? I hope I can make him stop showing off and start studying math seriously. Some good examples I know are: Instant Insanity puzzle Sam Loyd's 15-puzzle Lights Out puzzle (this puzzle can be solved by trial and error, but knowing which configurations are solvable requires linear algebra). I do not ask for seemingly elementary problems that turn out to have been open for a long time (like Fermat's last theorem). I need examples whose solutions I can explain to him. REPLY [3 votes]: Jordan's theorem is not really a puzzle, but some take it for one when they first learn about it. A Jordan curve is a non-self-intersecting continuous loop in the plane (see the figure). This theorem states that the plane is divided by every Jordan curve into two non-intersecting regions, one inside and the other outside. A proof "without errors" of Jordan's theorem requires a certain non-elementary level of mathematics.<|endoftext|> TITLE: Find all the solutions of this equation: $x+3y=4y^3,\ y+3z=4z^3,\ z+3x=4x^3$ in the reals.
QUESTION [8 upvotes]: Find all the solutions of the system $$x+3y=4y^3, \quad y+3z=4z^3, \quad z+3x=4x^3.$$ My attempt: In the hint of the question it was written that we should show $x,y,z \in [-1,1]$, then add all the equations and conclude that $(x,y,z)=(-1,-1,-1),(0,0,0),(1,1,1)$ are the solutions. I can show that $x,y,z \in [-1,1]$ as below: By symmetry, consider $x \ge y \ge z$; then if $x>1$ we have: $4x^3-4x>0 \Rightarrow 4x^3-3x>x \Rightarrow z>x$, which is wrong. Now consider $x<-1$; we have: $x<-1 \Rightarrow y<-1 \Rightarrow 4y^3-4y<0 \Rightarrow 4y^3-3y<y \Rightarrow x<y$, which is wrong.<|endoftext|> TITLE: Why is "lies between" a primitive notion in Hilbert's Foundations of Geometry? QUESTION [10 upvotes]: I read this question: Hilbert's Foundations of Geometry Axiom II, 1 : Why is this relevant? and its answer by Eric Wofsey. In this answer, it is stated that "lies between" is a primitive notion in Hilbert's Foundations of Geometry. Why is it a primitive notion? Can't we define that a point $B$ is between points $A$ and $C$ if $AB < AC$ and $BC < AC$? Or maybe that a point $B$ is between points $A$ and $C$ if $AC=AB+BC$? REPLY [2 votes]: Why is [the relation "is between"] a primitive notion? Can't we define that a point $B$ is between points $A$ and $C$ if $AB < AC$ and $BC < AC$? Or maybe that a point $B$ is between points $A$ and $C$ if $AC=AB+BC$? This assumes that a distance function is given whose values can be ordered. Without the between-ness axiom the geometry could be coordinatized by a non-orderable field such as the complex numbers, and an ordering of distances would break the correspondence between points on a line segment and distances. And if distances do correspond to points on a line, without an ordering of the line there is no trichotomy, and there does not always exist a permutation of any $3$ points on a line for which one is "between" the others.<|endoftext|> TITLE: Is there a name for the sum of increasing powers? QUESTION [15 upvotes]: I was thinking about the wheat and chessboard story and thinking of the total number of grains of wheat… $$\sum_{n=0}^{63} 2^n$$ And wondered if there is a name for a sum like this? $$2^0 + 2^1 +2^2 + 2^3 + \cdots$$ REPLY [4 votes]: As you are looking for novel names, this is also a simple case of the more generic notion of hypergeometric series $$\sum_{k}r_k \,,$$ with $r_0 = 1$, where the ratio of two consecutive terms is a rational function, a ratio of two polynomials $P$ and $Q$ in the summation index $k$: $$ \frac{r_{k+1}}{r_k}= \frac{P(k)}{Q(k)}\,. $$ In your case, you can choose $P$ and $Q$ such that their ratio is equal to $2$. When the ratio is constant, it is called a geometric series (as answered here). As a reminder, it is a sum of terms in geometric progression (se.math) like $1,r,r^2,r^3,\ldots$, whose name (the geometry part) is illustrated by the following figure. Hypergeometric series are also connected to chess. A rook is a move on a chessboard. Some have developed studies of some types of permutations as placements of a number of rooks on a chessboard-like grid; see for instance Rook theory and hypergeometric series, J. Haglund, 1996.<|endoftext|> TITLE: A Certain Semisimple $k$-algebra is Commutative QUESTION [5 upvotes]: Suppose that $k$ is a field and that $R$ is a semisimple ring that is also a $k$-algebra with $\dim_k R$ at most $3$. Show that $R$ is commutative and that the semisimplicity is required. The case when $R$ has dimension $1$ is simple, as it is routine to verify that then $R \cong k$. As for the remaining two dimensions, I am unsure of how to continue.
It is, however, clear to me that this must make use of the Artin-Wedderburn Theorem, and the reason this fails for higher $n$ is that the larger the $n$, the 'more' non-commutative the matrix rings become. I assume the conditions on $R$ give just enough structure to force commutativity in the small matrix rings. How do I go about proving the result for $\dim_k R=2$ and $\dim_k R=3$? REPLY [4 votes]: You're on the right track but might be overthinking it! You should be able to see that the smallest $k$-dimension you can get from $M_n(K)$ for a finite-dimensional division ring extension $K$ of $k$ and $n>1$ is $4$, attained by $k=K$ and $n=2$. Then what are the sizes of the matrix rings in the decomposition of $R$? After that, reason carefully about the division rings involved. There are three cases, in principle, to sound out. As for seeing that semisimplicity is necessary, think about the ring of $2\times 2$ upper triangular matrices over $k$.<|endoftext|> TITLE: Why should the underlying set of a model be a set? QUESTION [12 upvotes]: In logic (especially in model theory and set theory), one defines a model $\mathcal A$ for a given signature $S$ to be given by an underlying set $A$ and some additional structure (by which I mean: for every constant symbol in $S$ an element of $A$, for every $n$-ary function symbol in $S$ a function $A^n\to A$, and so on). In set theory, one often considers models satisfying some set theory axioms (often one takes ZFC); these models have the form $(V, \in)$, where $V$ is called a universe and $\in$ a binary relation on $V$. I find it philosophically unsatisfying that set-theoretic universes should be sets, since within a set-theoretic universe satisfying ZFC, this universe thinks that it is a proper class, if you know what I mean. Why does one not allow models/universes to be proper classes? Why are the set-theoretic universes sets? REPLY [9 votes]: One of the reasons is that the $\sf ZFC$ axioms are not equipped for handling classes properly. For example $\sf ZF$ proves that all the axioms of $\sf ZFC$ hold in the inner model $L$, which is a proper class. But we cannot define a truth definition for that class, as that would violate Tarski's theorem. And the proof that all the axioms hold in $L$ is not a single proof, but rather a schema of proofs. Sure, we can handle some class models relatively okay (e.g. the surreal numbers, which have a relatively simple theory), but arbitrary ones? We want our foundational theory to have access to the truth definition of a structure. Otherwise, what sort of foundation does the theory provide us? If you notice, almost all the statements about class models of set theory are meta-theoretic statements. Yes, we could use class theories like $\sf KM$ (Kelley-Morse) and $\sf NBG$ (von Neumann-Goedel-Bernays), and this might allow us to extend the reach of truth definitions for various class models; but certainly not for the entire universe, as that would contradict Tarski's theorem about the undefinability of truth. So the universe of set theory will, essentially, always be an internal class.<|endoftext|> TITLE: Attempt to construct a family of cut off functions satisfying certain conditions QUESTION [5 upvotes]: Let $\Omega\subset \mathbb{R}^n$ be an open bounded set of class $C^1$ and let $\Omega'\subset \subset \Omega''\subset \subset \Omega$.
I'm trying to construct a function $\varphi\in C^\infty (\Omega)$ such that: $0\leq\varphi\leq1$; $\varphi\equiv 1$ in $\Omega'$; $\varphi\equiv 0$ in $\Omega \setminus \Omega''$; and $||\nabla \varphi ||_{L^{\infty}(\Omega, \mathbb{R}^n)}\leq \dfrac{2}{{\rm dist}(\Omega',\partial \Omega'')}$. My attempt: My idea is to use the distance function ${\rm dist}$ and mollifiers. Let $\alpha={\rm dist}(\Omega',\partial \Omega'')$. I have in mind this function: $$\varphi(x)=\big [\dfrac {2}{\alpha} {\rm dist}(x,\tilde{\Omega}^c)\wedge 1\big ]*\rho_\epsilon$$ where $\tilde{\Omega}=\big \{ x\in \Omega \;| \;{\rm dist}(x,\Omega')<\dfrac{\alpha}{2} \big \}$, $\rho_\epsilon$ is the standard mollifier (I use the convolution in order to get a $C^\infty$ function) and $\wedge$ is the minimum operator. Is my example correct? My main concern is about the last point. I've constructed this function thinking of the domains as balls, and for them the distance function is linear with respect to the radius. Does my function satisfy the last requirement? Edit: What can I say about the gradient of the function $x \mapsto {\rm dist }(x,\tilde{\Omega}^c)$? Can I say that it is linear, or can I make some estimates? Thanks for the help! REPLY [3 votes]: Since you are actively thinking about this exercise, I will answer your question by giving you just some hints. Tell me if you want to see more details. Your example is almost correct. In order to guarantee that $\varphi\equiv 1$ on $\Omega'$, you should mollify a function which equals $1$ on a neighbourhood of $\overline{\Omega'}$ (not just on $\Omega'$). To accomplish this, it suffices to correct the definition of $\tilde\Omega$ a little: just take $\tilde{\Omega}=\big \{x\in \Omega\mid{\rm dist}(x,\Omega')<\frac{2}{3}\alpha\big\}$ instead. Call $\psi(x):=\frac{2}{\alpha}{\rm dist}(x,\tilde{\Omega}^c)\wedge 1$ the function you are mollifying and think of $\varphi$ and $\psi$ as functions from $\mathbb{R}^n$ to $\mathbb{R}$, extending them by zero outside $\Omega$ (notice that we still have $\varphi=\psi*\rho_\epsilon$ on $\mathbb{R}^n$). $\psi$ is a Lipschitz function: can you estimate its Lipschitz constant? What happens to the Lipschitz constant of a Lipschitz function $f:\mathbb{R}^n\to\mathbb{R}$ when you mollify it? (this one is the crucial step) How is $\|\nabla\varphi\|_{L^\infty(\mathbb{R}^n,\mathbb{R}^n)}$ related to the Lipschitz constant of $\varphi$? Conclude. As a side note, although $\psi$ could be non-differentiable somewhere, one can still give a meaning to the 'gradient' of $\psi$. There are many ways to do that (Sobolev spaces, Clarke's subdifferential, Dini derivatives, ...): you will surely meet some of them if you pursue your studies in analysis.<|endoftext|> TITLE: About a mysterious sequence that seems to follow some patterns QUESTION [24 upvotes]: This question is maybe linked to this other question. We can prove, using the density of some additive subgroups of $\mathbb R$, that we can approximate an integer by a rational of the form $\displaystyle \frac{2^a}{3^b}$. For a given integer $n$, let's consider the approximation of this form with $b \ge 0$ and $a$ minimal: $$\left\lfloor \frac{2^a}{3^b}\right\rfloor=n \qquad b \ge 0 \text{ and } a \text{ is minimal},$$ where $\lfloor.\rfloor$ denotes the floor function.
The term minimal here means that we are interested in a couple $(a,b)$ as above, such that every other such couple $(a',b')$ fails the condition if $a'<a$. Throughout, write $p(n)=a+b$ for the weight of this minimal couple. For $n$ from $1$ to $1000$: From these first two graphs of the weight $p$, we can see some kind of pattern, which seems at the same time very regular and predictable, but also a little random, with holes and extra unexpected points. But let's look at the sequence a little further. For $n$ from $1$ to $2000$: We now see a huge explosion of the weight around $1000$, a pattern getting more and more regular on the bottom, and apparent randomness on the top. And something which looks like a gap just a little before $2000$. We must zoom out to see the big picture. For $n$ from $1$ to $10000$: What a different graph. There is a pattern of waves appearing now above the initial bottom (we can clearly see the $1000$ explosion now). The wave pattern seems to get more regular as $n$ grows. To understand if it goes like this forever, we need to zoom out even more. For $n$ from $1$ to $22000$: Now the wave pattern is disappearing around $15000$ into another kind of pattern. For $n$ from $1$ to $24000$: We have another explosion of the weight a little before $24000$. Let's go further (but don't trust the $x$-axis, it is wrong, my bad... I couldn't display the full picture, the computations starting to get really big at this point). For $n$ from $24000$ to $32000$: I personally have trouble imagining we are looking at the very same sequence at this point. We now have two layers, just like we did right after the first explosion at $1000$: a very regular layer on the bottom, and some beautiful waves on the top, the waves gaining one "wave" each time they appear. We want to look more precisely at the sequence, so we compute its values for $n\in [40000, 40500]$. For $n$ from $40000$ to $40500$: Finally we highlighted the multiples of $2$, $3$, $5$, and $6$, and some interesting things can be observed. We also highlighted the prime numbers as a curiosity, but I found nothing from there. The multiples of $2$: The multiples of $3$: The multiples of $5$: The multiples of $6$: The prime numbers: Well, now that we have had a good look at this amazing sequence, let's try to do some maths. Here is what I've proven so far (not much, I fear, but I can send you proofs if requested). Theorem 1: We have $$\forall n\geq 1,\quad \log_2(n)\le p(n).$$ Which leads to: Corollary 1: We have $$p(n)\xrightarrow[n\to+\infty]{} +\infty.$$ But also: Theorem 2: We have $$\forall k\geq 2,\quad p(k)= \min\{p(2k)-1\ ,\ p(2k-1)-1\}.$$ I haven't actually proven the following, but I do think it's true: Theorem 3: We have $$\forall k\geq 2,\quad p(k)\leq p(3k)+1.$$ I have a lot of questions about this sequence, and I hope you do too. Anything which could help understand this sequence better, and any result about it, would be much appreciated. Edit: As requested, I am going to ask some specific questions, though any result would be great. I think it would be natural to have a formula of the form $$p(n)+p(m)\leq p(nm)$$ which would be valid "most of the time"; I haven't succeeded in producing a proof, though. I would like to know which function gives the best upper bound (it looks like a combination of several logarithms). Finally, is there a reason why the sequence is regularly "exploding" (which it does around $1\,000$ and $23\,000$ for instance)? REPLY [7 votes]: I'm answering a heuristic question, so I won't go out of my way to be too precise.
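For readers who want to reproduce the data, the weight is straightforward to compute by brute force from the definition; here is a minimal sketch (plain Python, exact integer arithmetic; the function name is mine):

```python
from itertools import count

def p(n):
    # Smallest a (and its b >= 0) with floor(2**a / 3**b) == n, i.e. with
    # n * 3**b <= 2**a < (n + 1) * 3**b; returns the weight a + b.
    for a in count():
        t, pow3, b = 2 ** a, 1, 0
        while n * pow3 <= t:
            if t < (n + 1) * pow3:
                return a + b
            pow3, b = 3 * pow3, b + 1

print([p(n) for n in range(1, 11)])
```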
Roughly speaking, the explosions you are looking at have to do with the continued fraction of $\log3 / \log2$. You're looking for the pair $(a,b)$ with minimal $a$ such that $n \leq 2^a / 3^b < n+1$. Taking logs of everything, we want $\log(n) \leq a \log 2 - b \log 3 < \log(n+1).$ There's no particular reason to restrict this to the integers; we can reparametrize with $x = \log n$ and substitute to get the inequality $x \leq a \log 2 - b \log 3 < x + \log(1 + e^{-x})$. Let's write $\varepsilon(x) = \log(1 + e^{-x})$; note that $\varepsilon(x)$ is very close to $e^{-x} = 1/n$. So we're interested in what numbers of the form $a \log2 + b \log3$ fall into the moving and rapidly narrowing interval between $x$ and $x + \varepsilon(x)$. Some of the effects you're observing are due to the fact that the window is moving, some are due to the fact that it's narrowing, and some are due to the fact that you only plot the result when $e^x$ is an integer. If you want to isolate some of these behaviors you could, say, let the window slide continuously without narrowing. The explosions are due to the fact that the window is narrowing. They correspond to rapid increases in the answer to the following question: given $x$, how big do we need to let $a$ be, in order for all gaps between the values of $a\log2 + b\log3$ to be smaller than $\varepsilon(x)$? We care about the size of the biggest gaps in the set $S_A = \{a\log2 + b\log3 ; a, b \in \mathbb Z, 0 \leq a < A\}$. This set is periodic with period $\log 3$, so we might as well mod out by multiples of $\log 3$: Letting $\alpha = \log3 / \log2$, we're looking at a circle of circumference $\log 3$, and the subset $\widetilde S_A$ of that circle consisting of a point, the rotation of that point by $\alpha^{-1}$ of a full rotation, by $2\alpha^{-1}$ of a full rotation, and so on up until $(A-1)\alpha^{-1}$ of a full rotation. Specifically, we care how the size of the biggest gap between points of $\widetilde S_A$ depends on $A$. This displays the sort of 'jumpy' behavior where one thing happens for a while until another thing starts happening, and so on. At first, the size of the biggest gap shrinks down from $1$ by $\alpha^{-1}$ of a full circle each time you increase $A$ by 1. But eventually at step $A_1 = \lfloor \alpha \rfloor$ you wrap around and the smallest gaps begin shrinking by $1-A_1/\alpha$ full rotations instead, and it takes $A_1$ steps to shrink them all. Eventually you suffer another slowdown, and another, and so on. Note that if $\alpha$ were rational then eventually you would end up right back where you started, and the gaps would stop shrinking entirely. You suffer a severe slowdown when you happen to hit a particularly good rational approximation $A/B$ of $\alpha$; a convergent of the continued fraction of $\alpha$. At a time like that, making $A$ rotations by $\alpha^{-1}$ brings you particularly close to an integer number $B$ of full rotations around the circle, and the rate at which our gaps shrink drops to an amount of $A\alpha^{-1} - B$ full rotations every $A$ steps, and that's how big the largest gaps will be at the next slowdown. For example in our case 1054 rotations by $\alpha^{-1}$ brings you unusually close to 665 full rotations around the circle, within $1054\alpha^{-1} - 665$ of the starting point, i.e. a distance of $s = |1054\log2 - 665\log 3| \approx 1/23000$. At this time the big gaps in $\widetilde S_A$ have size corresponding to the previous convergent: $|485\log 2 - 306\log 3| \approx 1/1000$.
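Before continuing, these convergents are easy to check numerically. Here is a minimal sketch (in Python, my own illustration; double precision is accurate enough for the first ten partial quotients of $\log 3/\log 2$):

```python
from math import log

def convergents(x, count):
    """First `count` continued-fraction convergents A/B of the real x,
    via the standard recurrence h_i = a_i*h_{i-1} + h_{i-2}."""
    h0, k0, h1, k1 = 0, 1, 1, 0      # seed values (h_{-2}, k_{-2}), (h_{-1}, k_{-1})
    out, t = [], x
    for _ in range(count):
        a = int(t)                   # next partial quotient
        h0, k0, h1, k1 = h1, k1, a * h1 + h0, a * k1 + k0
        out.append((h1, k1))
        t = 1 / (t - a)              # continued-fraction step
    return out

alpha = log(3) / log(2)
for A, B in convergents(alpha, 10):
    s = abs(A * log(2) - B * log(3))
    print(f"A/B = {A}/{B}: explosion expected near n ~ {1/s:.0f}")
```

The output ends with the convergents $485/306$, $1054/665$ and $24727/15601$ discussed here, with predicted scales $n \approx 1000$, $23000$ and $55000$ respectively.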
To actually cover the circle with gaps as short as $s$, we need to wait until we reach the next good rational approximation of $\alpha$, which is $24727/15601$. So for values of $A$ between 1054 and 24727, the biggest gap in $\widetilde S_A$ shrinks down roughly linearly from around 1/1000 to 1/23000. That means that while $\varepsilon(x)$ remains above 1/1000 or so, the set $S_{1054}$ has elements in every interval of length $\varepsilon(x)$, and in particular, you can always find an $a$ between 0 and 1054 for which $x \leq a \log2 - b\log3 < x + \varepsilon(x)$. But as $\varepsilon(x)$ gets below 1/1000, you fairly quickly start to need larger values of $a$. By the time $\varepsilon(x)$ is as small as 1/2000 or so, $A$ needs to be halfway between 1054 and 24727 to ensure no gaps in $S_A$ of size above $\varepsilon(x)$. So you experience a rapid jump in how big $a$ needs to be when $\varepsilon(x)$ is around 1/1000, i.e. when $n$ is around 1000. When $n$ is smaller, values of $a$ less than 1054 suffice, so you have $a + b = p(n)$ less than around 1700. When $n$ is larger, you rapidly start to need bigger values of $a$, giving you bigger values of $p(n)$. More generally, if $A/B$ is a convergent of the continued fraction of $\log3 / \log2$, then you experience this explosion when $\varepsilon(x)$ gets as small as $|A \log2 - B\log3|$, i.e. around $n = |A \log2 - B \log 3|^{-1}$. The convergent 485/306 leads us to expect an explosion when $n$ is around 1000, and the next convergent 1054/665 leads us to expect one when $n$ is around 23000. The next explosion, corresponding to the convergent 24727/15601, will occur when $n$ is around 55000. Heuristically, the upper bound you're looking for is one where $p$ is a piecewise linear function of $1/n$, interpolating between the explosions. Particularly big explosions will occur when you have particularly good rational approximations of $\alpha$, i.e. when you have particularly large terms in the continued fraction. The limiting case of this is when $\alpha$ turns out to be rational, in which case $p$ won't even be well-defined past a certain point. (Though of course you'd need numbers other than 2 and 3 in the definition to see this happen.)<|endoftext|> TITLE: Studying Difficult New Material: Is It Effective To Skim Through First? QUESTION [11 upvotes]: If I have to study difficult material for the first time (the kind of 100 page books that take you days and days of studying), I am often inclined to just keep on reading whenever I get stuck on something (like a proof, derivation or idea) for too long. My question: is it efficient to do this? That is, is it useful to skim through the material first before 'diving in deeper'? Or should one try to go very slow from the beginning and make sure to understand everything before reading on? I realize that this question might be somewhat subjective and vague, but I am sure that a lot of you recognize what I'm saying, and that there ought to be at least a somewhat general educational scientific answer to this question. REPLY [7 votes]: Yes, I think it's good to learn math and read math textbooks in a "big picture first", coarse-to-fine manner. Before you learn your way around a city, you first look at a map of the earth to decide which city you want to visit. I think usually reading the entire textbook thoroughly may not even be the right goal (unless the book is fundamental to your research area and you really need a deep mastery of it). The ocean of knowledge is infinite.
You can never understand all the drops of water in the ocean, but you can soar over the water like a seagull, occasionally diving down to catch some prey. Here's a description of how the mathematician Peter Scholze (who is said to be revolutionizing arithmetic geometry) learns math: At 16, Scholze learned that a decade earlier Andrew Wiles had proved the famous 17th-century problem known as Fermat's last theorem, which says that the equation $x^n + y^n = z^n$ has no nonzero whole-number solutions if $n$ is greater than two. Scholze was eager to study the proof, but quickly discovered that despite the problem’s simplicity, its solution uses some of the most cutting-edge mathematics around. “I understood nothing, but it was really fascinating,” he said. So Scholze worked backward, figuring out what he needed to learn to make sense of the proof. “To this day, that’s to a large extent how I learn,” he said. “I never really learned the basic things like linear algebra, actually — I only assimilated it through learning some other stuff.” Elon Musk, who has created a grade school called Ad Astra, makes some interesting related comments in this video. Let's say you're trying to teach people about how engines work. A more traditional approach would be to say, we're going to teach all about screw drivers, and wrenches, and you're going to have a course on screw drivers, a course on wrenches, and all these things, and that is a very difficult way to do it. A much better way would be like, here's the engine, now let's take it apart, how are we going to take it apart? Ah, we need a screw driver, that's what the screw driver's for. We need a wrench, that's what the wrench is for. And then a very important thing happens, which is that the relevance of the tools becomes apparent. Richard Feynman mentioned that he quickly skims the whole book to get the big picture and see how the ideas fit together, before digging into the detailed arguments. (I can't remember where Feynman said this.)<|endoftext|> TITLE: What does functor definition in category theory mean? QUESTION [12 upvotes]: According to Wikipedia: Let C and D be categories. A functor F from C to D is a mapping that associates to each object $X$ in C an object $F(X)$ in D, associates to each morphism $f : X \rightarrow Y$ in C a morphism $F(f) : F(X) \rightarrow F(Y)$ in D such that the following two conditions hold: $F(id_X) = id_{F(X)}$ for every object $X$ in C $F(g \circ f)$ = $F(g) \circ F(f)$ for all morphisms $f : X \rightarrow Y$ and $g : Y \rightarrow Z$ in C. That is, functors must preserve identity morphisms and composition of morphisms. Maybe this is because of my programming background, but it's not clear what $F$ is in this definition. It looks like a function, but it always takes different arguments: $F(X)$ – object, $F(f)$ – morphism. It looks like a very smart function that accepts all kinds of arguments and knows what to return in each case. Moreover, in this book there is one more equation that makes things even more complicated: $$F(f : X \rightarrow Y) = F(f) : F(X) \rightarrow F(Y)$$ Does it mean that if we pass a function $f : X \rightarrow Y$ to $F$, we will get a function $F(f) : F(X) \rightarrow F(Y)$? Also, are the $F$ mentioned here: $F(f : X \rightarrow Y)$ and here: $F(X)$ two different functions with different signatures and behavior?
REPLY [4 votes]: If you extract the set$^1$ $Ob(C)$ of objects from the category $C$ and the set $Ar(C)$ of arrows, then given any functor $F : C \to D$, you can indeed construct two functions $$ Ob(C) \to Ob(D) : X \mapsto F(X) $$ $$ Ar(C) \to Ar(D) : f \mapsto F(f) $$ satisfying the stated properties. And conversely, given any two functions satisfying the properties, there exists a corresponding functor. You can think of the functor as being a pair of functions, but it is probably better to instead think of that as a way to represent functors. You want to think of a functor as being a thing in its own right, rather than being shackled to that particular representation. For example, you can do the usual thing of promoting evaluation to a binary operator. In set theory, there is a set $Y^X$ of all functions from $X$ to $Y$, and you can consider evaluation a function $Y^X \times X \to Y$. You can do the same thing with functors; there is a category $D^C$ whose objects are functors from $C$ to $D$ (and whose arrows are natural transformations), and evaluation is itself a functor $D^C \times C \to D$. Another potentially interesting point is that categories don't just have point-shaped and arrow-shaped elements. They have elements in the shape of any diagram: for example, they have commutative-square-shaped elements, and the functor $F$ also maps commutative-square-shaped elements of $C$ to commutative-square-shaped elements of $D$. That a functor preserves composition of morphisms can actually be phrased in terms of the functor acting on the commutative-triangle-shaped elements. (all of the information of a category is in its arrows so we can reduce all various-shaped elements to arrows and equations between them, but we don't have to) 1: Replace "set" with other notions as needed or to taste<|endoftext|> TITLE: Ceiling of logarithm of ceiling of x QUESTION [5 upvotes]: I'm to answer whether the following holds, but I really don't know how to start approaching it at all. $$\forall x\geq1 :\lceil \log\lceil x \rceil \rceil = \lceil \log x \rceil $$ The logarithm is base $10$. I can convert each of the sides into inequalities: $$ \log x \leq \lceil \log x \rceil \leq (\log x) +1$$ $$ \log \lceil x \rceil \leq \lceil \log \lceil x \rceil \rceil \leq (\log \lceil x \rceil) +1$$ But I really don't know how to deal with the ceiling inside the log so I can proceed. How should I think about this problem? What do I need to crack it? REPLY [7 votes]: The trick is to write your inequalities "inside out." Let $k$ be a positive integer, then $$ \begin{aligned} \lceil \log {\lceil x \rceil} \rceil = k &\iff k - 1 < \log {\lceil x \rceil} \leqslant k \\ &\iff 10^{k-1} < \lceil x \rceil \leqslant 10^k \\ &\iff 10^{k-1} < x \leqslant 10^k\\ &\quad\quad\text{[reverse the steps ...]} \\ &\iff \lceil \log x \rceil = k. \end{aligned} $$ This proves the result for $x > 1$; the special case $x = 1$ is obvious. Note that you can drop the inner ceiling only because $10^{k-1}$ and $10^k$ are integers -- this is why the base must be integral.
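The identity can also be sanity-checked with exact rational arithmetic, avoiding floating-point logarithms entirely (a minimal sketch of my own in Python; the helper names are illustrative):

```python
from fractions import Fraction
import random

def ceil_log10(x: Fraction) -> int:
    """For x >= 1, the smallest integer k with 10**k >= x;
    by monotonicity of log, this is exactly ceil(log10(x))."""
    k = 0
    while Fraction(10) ** k < x:
        k += 1
    return k

def ceil_frac(x: Fraction) -> int:
    """Ceiling of a rational number, using only integer arithmetic."""
    return -((-x.numerator) // x.denominator)

random.seed(1)
for _ in range(10_000):
    x = Fraction(random.randint(1, 10**6), random.randint(1, 10**3))
    if x >= 1:
        assert ceil_log10(Fraction(ceil_frac(x))) == ceil_log10(x)
print("identity verified on all sampled rationals x >= 1")
```

Note that the exact comparison $10^k \geq x$ is precisely where the integrality of the powers $10^k$ enters, mirroring the proof above.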
In general, for integer $n$ and real $x$, $$\begin{aligned}n < \lceil x \rceil &\iff n < x;\\ n \leqslant \lfloor x \rfloor &\iff n \leqslant x;\\ n > \lfloor x \rfloor &\iff n > x;\\ n \geqslant \lceil x \rceil &\iff n \geqslant x.\end{aligned}$$ By the way, many similar identities hold, for example: $$\Big\lfloor \sqrt {\lfloor x \rfloor} \Big\rfloor = \big\lfloor\sqrt x \big\rfloor,\quad x \geqslant 0;\\ \bigg\lfloor \frac{x+m}{n} \bigg\rfloor = \bigg\lfloor \frac{\lfloor x\rfloor +m}{n} \bigg\rfloor,\quad\begin{aligned}&\text{integer}\;m,\\ &\text{integer}\;n > 0;\end{aligned}$$ etc. A sufficient (but not necessary) condition for $\lfloor f(\lfloor x \rfloor)\rfloor = \lfloor f(x) \rfloor$ is that $f$ be strictly monotonically increasing and the preimage of each integer be an integer. Same holds for ceilings, and also of course $\lfloor f(\lceil x \rceil)\rfloor = \lfloor f(x) \rfloor$ (and "vice versa") if $f$ is decreasing. Source: Graham, Knuth, and Patashnik - Concrete Mathematics, chapter 3.<|endoftext|> TITLE: What is the Relationship between a Closed Map and a Closed Linear Operator? QUESTION [6 upvotes]: A closed map between topological spaces maps closed sets to closed sets, while a closed linear operator between Banach spaces has a closed graph in the product topology. Is this just a coincidental use of the same "closed" terminology or is there some relationship between the two? REPLY [6 votes]: As Eric Wofsey pointed out, the terms are just coincidence. A closed mapping and a closed linear operator are not equivalent when considering a linear operator between two Banach spaces. To show that they are unrelated, you might want to do the following exercise, which is Exercise 1.74 in Megginson's An Introduction to Banach Space Theory: Let $X,Y$ be normed spaces and let $T : X \to Y$ be a linear operator that is neither injective nor the zero operator. Find a closed subset $F \subset X$ such that $T(F)$ is not closed in $Y$. Find a linear operator $T : X \to Y$ that satisfies the hypotheses of the closed graph theorem even though there is a closed subset $F \subset X$ such that $T(F)$ is not closed in the range of $T$.<|endoftext|> TITLE: A closed form of the double sum $\sum_{m=1}^{\infty}\sum_{n=0}^{m-1}\frac{(-1)^{m-n}}{(m^2-n^2)^2} $ QUESTION [14 upvotes]: I want to evaluate the double sum $\displaystyle \sum_{m=1}^{\infty}\sum_{n=0}^{m-1}\frac{(-1)^{m-n}}{(m^2-n^2)^2} $ where I know that the value is $-\frac{17\pi^4}{1440}$. I have failed to evaluate the inner finite sum cleverly so as to transform it into a reasonable infinite series. REPLY [16 votes]: Hint. One may observe that, for $m^2\neq n^2$, we have $$ \frac1{(m^2-n^2)^2}=\int_0^1\int_0^1x^{m+n-1}y^{m-n-1}\log x \log y\:dxdy \tag1 $$ giving, by using absolute convergence, $$ \sum_{m=1}^{\infty}\sum_{n=0}^{m-1}\frac{(-1)^{m-n}}{(m^2-n^2)^2}=\int_0^1\!\!\int_0^1\frac{-\log x \log y}{(1-x^2)(1+xy)}\:dxdy.
\tag2 $$ Then, by partial fraction decomposition, one may write $$ \begin{align} &\sum_{m=1}^{\infty}\sum_{n=0}^{m-1}\frac{(-1)^{m-n}}{(m^2-n^2)^2} \\\\&=\int_0^1\!\!\int_0^1\frac{-\log x \log y}{(1-x^2)(1+xy)}\:dxdy \\\\&=\int_0^1\!\!\int_0^1\left(\frac{-\log x \log y}{2 (1+x) (1-y)}+\frac{-\log x \log y}{2 (1-x)(1+y)}+\frac{y^2\log x \log y}{(1-y^2) (1+x y)}\right)\!dxdy \tag3 \\\\&=-2\cdot \frac{\pi^4}{144}+\frac{\pi^4}{480} \\\\&=-\frac{17\pi^4}{1440} \end{align} $$ as announced.<|endoftext|> TITLE: Various $p$-adic integrals QUESTION [17 upvotes]: Here are three (possibly different) definitions of $p$-adic integrals that I have encountered during my self-studies. First of all, here is what Vladimirov, Volovich and Zelenov write at the beginning of their book on mathematical physics: As the field $\mathbb Q_p$ is a locally compact commutative group with respect to addition then in $\mathbb Q_p$ there exists the Haar measure, a positive measure $dx$ which is invariant to shifts. We normalize the measure $dx$ such that $\int_{|x|_p \le 1} dx = 1.$ Under such agreement the measure is unique. For any compact $K \subseteq \mathbb Q_p$ the measure $dx$ defines a positive linear continuous functional on $C(K)$ by the formula $\int_K f(x) dx$. A function $f \in \mathcal L^1_{\textrm{loc}}$ is called integrable on $\mathbb Q_p$ if there exists $$\lim_{N \to \infty} \int_{B(N)} f(x)dx = \lim_{N \to \infty} \sum_{\gamma = -N}^\infty \int_{S(-\gamma)} f(x) dx.$$ They denote by $B(N)$ the set $\{x \in \mathbb Q_p : |x|_p \le p^N \}$ and $S(\gamma)$ is defined as $B(\gamma) \setminus B(\gamma-1)$. Next we have the famous Volkenborn integral, described as follows by Robert: We say that $f$ is strictly differentiable at a point $a \in X$ - and denote this property by $f \in \mathcal S^1(a)$ - if the difference quotients $[f(x) - f(y)]/(x-y)$ have a limit as $(x, y) \to (a,a)$ ($x$ and $y$ remaining distinct). By the way, $\mathcal S^1(X) := \bigcap_{a \in X} \mathcal S^1(a)$. The Volkenborn integral of a function $f \in \mathcal S^1(\mathbb Z_p)$ is by definition $$\int_{\mathbb Z_p} f(x) dx = \lim_{n \to \infty} \frac{1}{p^n} \sum_{j=0}^{p^n-1} f(j).$$ Finally a quote from Koblitz's book "$p$-adic numbers, analysis and $\zeta$-functions": Now let $X$ be a compact-open subset of $\mathbb Q_p$. A $p$-adic distribution $\mu$ on $X$ is a $\mathbb Q_p$-linear vector space homomorphism from the $\mathbb Q_p$-vector space of locally constant functions on $X$ to $\mathbb Q_p$. Later he states that for $p$-adic measures (distributions that are bounded on compact-open subsets by some constant) and continuous functions $f$ there is a reasonable way to define $\mu(f) =: \int_X f \mu$. Cassels is confusing me even more as he mentions Shnirelman. So, here are my actual questions: Do these ultrametric integrals have real analogues like: being a limit of Riemann sums or an operation inverse to differentiation? Are they compatible with each other? Is any of them a generalization of the others? What are the positive and negative attributes of the cited definitions? Where can I find an exhaustive table of integrals? I'm mainly interested in something similar to https://www.tug.org/texshowcase/cheat.pdf. Can we somehow imitate the Lebesgue integration theory? REPLY [6 votes]: This is just a comment after @user120527's answer. The Volkenborn integral and $p$-adic measures are special cases of "$p$-adic distributions", which are defined as elements of topological dual spaces of nice functions.
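Before the details, the Volkenborn limit quoted in the question can be illustrated concretely (a small sketch of my own in Python, using exact rationals; for $f(x)=x$ the Volkenborn integral is known to equal $-1/2$):

```python
from fractions import Fraction

def vp(q: Fraction, p: int) -> int:
    """p-adic valuation v_p of a nonzero rational q."""
    v, num, den = 0, q.numerator, q.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

# Volkenborn sums for f(x) = x:  S_n = p^{-n} * (0 + 1 + ... + (p^n - 1)),
# which equals (p^n - 1)/2 and converges p-adically to -1/2.
p = 3
for n in range(1, 8):
    S = Fraction(sum(range(p**n)), p**n)
    err = S - Fraction(-1, 2)        # equals p^n / 2, so v_p(err) = n
    print(f"n={n}: S_n = {S}, |S_n - (-1/2)|_p = {p}^(-{vp(err, p)})")
```

The printed valuations grow linearly in $n$, i.e. the partial sums converge to $-1/2$ in $\mathbb{Q}_p$ even though they blow up archimedeanly.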
Let $K$ be a closed subfield of $\mathbb{C}_p$ (the completion of the algebraic closure of $\mathbb{Q}_p$), and let $$C^0(\mathbb{Z}_p,K)=\{f:\mathbb{Z}_p\to K \text{ continuous}\},$$ $$C^1(\mathbb{Z}_p,K)=\{f:\mathbb{Z}_p\to K \text{ strictly differentiable}\}.$$ Then, the Volkenborn integral $f\mapsto\int_{\mathbb{Z}_p}f(t)dt$ is an element of the topological dual of the space $C^1(\mathbb{Z}_p,K)$. Also, a $p$-adic measure $\mu$ is just an element of the topological dual of $C^0(\mathbb{Z}_p,K)$, by means of $$\mu:f\mapsto \mu(f)=\int_{\mathbb{Z}_p}f(t)\mu(t).$$ In general, a $p$-adic distribution is an element of the dual of the space of locally analytic functions $\mathbb{Z}_p\to K$, which can be extended to a nicer space, such as $$C^r(\mathbb{Z}_p,K)=\{f:\mathbb{Z}_p\to K \text{ $r$ times strictly differentiable}\}.$$ This "$p$-adic dual theory" was developed by Yvette Amice. For a very nice article (with full proofs) on this theory, see "Fonctions d'une variable p-adique" by Pierre Colmez: http://webusers.imj-prg.fr/~pierre.colmez/fonctionsdunevariable.pdf Finally, the Shnirelman integral is not a $p$-adic distribution. One may think of $p$-adic measures as analogues of the Riemann integral, and the Shnirelman integral as an analogue of the complex line integral. Neal Koblitz treats the Shnirelman integral in his book "P-adic Analysis: A Short Course on Recent Work". This is a beautiful theory with many arithmetical applications. Good luck studying it! PS: I don't know too much about complex valued $p$-adic integration, but for the Haar measure case over local fields, the keywords are "Tate's thesis".<|endoftext|> TITLE: Rational Solutions to Trigonometric Equation QUESTION [5 upvotes]: Consider the equation $$ \cos(\pi a) + \cos(\pi b) = \cos(\pi c) + \cos(\pi d), $$ with $$ a,b,c,d \in \mathbb{Q} \cap \left[0,\frac{1}{2}\right]. $$ Clearly, this equation admits some trivial solutions, namely $(a,b) = (c,d)$ or $(a,b) = (d,c)$. Are there any rational solutions other than the trivial ones? REPLY [10 votes]: For nontrivial solutions, $a,b,c,d$ are distinct. WLOG let $a = \max(a,b,c,d)$. Taking a common denominator $N$ and writing $a=A/N$, ..., $d = D/N$, we can write this as $$ \omega^A + \omega^{-A} + \omega^B + \omega^{-B} - \omega^C - \omega^{-C} - \omega^D - \omega^{-D} = 0$$ or equivalently $$ P(\omega) = \omega^{2A} + 1 + \omega^{B+A} + \omega^{-B+A} - \omega^{C+A} - \omega^{-C+A} - \omega^{D+A} - \omega^{-D+A} = 0 $$ where $\omega = \exp(\pi i/N)$. Now the minimal polynomial of $\omega$ over the rationals is the cyclotomic polynomial $C_{2N}(z)$, so $P(z)$ must be divisible by $C_{2N}(z)$. I searched over $A \le 20$, in each case factoring $P(z)$ and looking for cyclotomic factors $C_M(z)$ with $M \ge 4A$, obtaining the following results.
$$ \eqalign{ \cos(\pi/3) + \cos(\pi/15) &= \cos(4 \pi/15) + \cos(\pi/5)\cr \cos(\pi/2) + \cos(\pi/12) &= \cos(5 \pi/12) + \cos(\pi/4)\cr \cos(\pi/2) + \cos(\pi/18) &= \cos(7 \pi/18) + \cos(5 \pi/18)\cr \cos(\pi/2) + \cos(\pi/9) &= \cos(4 \pi/9) + \cos(2 \pi/9)\cr \cos(3 \pi/7) + \cos(\pi/7) &= \cos(\pi/3) + \cos(2 \pi/7)\cr \cos(\pi/2) + \cos(\pi/24) &= \cos(3 \pi/8) + \cos(7 \pi/24)\cr \cos(\pi/2) + \cos(\pi/8) &= \cos(11 \pi/24) + \cos(5 \pi/24)\cr \cos(\pi/2) + \cos(\pi/30) &= \cos(11 \pi/30) + \cos(3 \pi/10)\cr \cos(\pi/2) + \cos(\pi/15) &= \cos(2 \pi/5) + \cos(4 \pi/15)\cr \cos(\pi/2) + \cos(\pi/10) &= \cos(13 \pi/30) + \cos(7 \pi/30)\cr \cos(\pi/2) + \cos(2 \pi/15) &= \cos(7 \pi/15) + \cos(\pi/5)\cr \cos(\pi/2) + \cos(\pi/5) &= \cos(2 \pi/5) + \cos(\pi/3)\cr \cos(\pi/2) + \cos(\pi/36) &= \cos(13 \pi/36) + \cos(11 \pi/36)\cr \cos(\pi/2) + \cos(5 \pi/36) &= \cos(17 \pi/36) + \cos(7 \pi/36)\cr } $$ Those involving $\cos(\pi/2)$ might be considered as somewhat trivial: they are cases of $$\cos(t) = \cos\left(\frac{\pi}{3} + t\right) + \cos\left(\frac{\pi}{3}-t\right)$$ So the "really nontrivial" examples are $$\eqalign{\cos(\pi/3) + \cos(\pi/15) &= \cos(4 \pi/15) + \cos(\pi/5)\cr \cos(3 \pi/7) + \cos(\pi/7) &= \cos(\pi/3) + \cos(2 \pi/7)\cr }$$ Are there more?<|endoftext|> TITLE: how to express the result of triple cross product of two vectors in spherical coordinates (unit vector) QUESTION [5 upvotes]: I have a system of differential equations like below (the real problem is more complex; here is just an example): $$ \frac{\mathrm{d}\textbf{M}_1}{\mathrm{d}t}=\frac{A}{M_{1}M_{2}}[\textbf{M}_1\times(\textbf{M}_1\times\textbf{M}_2)]+\frac{B}{M_1}(\textbf{M}_1\times\textbf{M}_2) \\ \frac{\mathrm{d}\textbf{M}_2}{\mathrm{d}t}=\frac{C}{M_{2}M_{1}}[\textbf{M}_2\times(\textbf{M}_2\times\textbf{M}_1)]+\frac{D}{M_2}(\textbf{M}_2\times\textbf{M}_1) $$ $A$, $B$, $C$, and $D$ are just constants. $\textbf{M}_1$ and $\textbf{M}_2$ are two vectors whose derivatives over time $t$ satisfy these relations. Since the magnitudes of $\textbf{M}_1$ and $\textbf{M}_2$ are unchanged over time, and I am only interested in the change of $\theta$ and $\phi$ of both vectors, I want to express the equations in spherical coordinates and use unit vectors of $\theta$ and $\phi$. Thus, the equations can be separated into 4 equations, each of which deals with only one direction: $\hat{\theta}_1$, $\hat{\phi}_1$, $\hat{\theta}_2$ and $\hat{\phi}_2$. (In my case, $\theta_{1,2}$ is the angle between $\textbf{M}_{1,2}$ and the +z direction, ${\phi}_{1,2}$ is the angle of $\textbf{M}_{1,2}$ with the +x direction in the x-y plane.) The left side of the equations can thus be re-written as: $$ \frac{\mathrm{d}\textbf{M}_1}{\mathrm{d}t}=M_1(\frac{\mathrm{d}\theta_1}{\mathrm{d}t}\hat{\theta}_1+\frac{\mathrm{d}\phi_1}{\mathrm{d}t}\sin\theta_1\hat{\phi}_1) \\ \frac{\mathrm{d}\textbf{M}_2}{\mathrm{d}t}=M_2(\frac{\mathrm{d}\theta_2}{\mathrm{d}t}\hat{\theta}_2+\frac{\mathrm{d}\phi_2}{\mathrm{d}t}\sin\theta_2\hat{\phi}_2) $$ But I can't express the right side of the equations in a similar form with $\hat{\theta}_1$, $\hat{\phi}_1$, $\hat{\theta}_2$ and $\hat{\phi}_2$. So my question is, can we express the terms on the right side in the same form as the left side of the differential equations? I've tried the identity $\textbf{a}\times(\textbf{b}\times\textbf{c})=\textbf{b}(\textbf{a}\cdot\textbf{c})-\textbf{c}(\textbf{a}\cdot{\textbf{b}})$ for the triple cross product.
But it results in an equation in $\textbf{M}_1$ and $\textbf{M}_2$, and it seems that it cannot be expressed further in terms of $\hat{\theta}_1$, $\hat{\phi}_1$, $\hat{\theta}_2$ and $\hat{\phi}_2$. For $\textbf{M}_1\times\textbf{M}_2$, I have no clue... Thanks a lot for your help! Best regards, REPLY [2 votes]: You can write ${\bf M}_{1,2}=M_{1,2}\hat{r}_{1,2}$. Using the relationship between spherical and polar coordinates such as in https://en.wikipedia.org/wiki/Del_in_cylindrical_and_spherical_coordinates#Unit_vector_conversions, the second table, it is trivial to show that $\hat{r}=\hat{\theta}\times\hat{\phi}$, and $$\frac{d\hat{r}}{dt}=\frac{d\theta}{dt}\frac{\partial \hat{r}}{\partial \theta}+\frac{d\phi}{dt}\frac{\partial \hat{r}}{\partial \phi}=\frac{d\theta}{dt}\hat{\theta}+\frac{d\phi}{dt}\sin\theta\hat{\phi}$$ You already used this equation for the left side. The cross product ${\bf M}_1\times{\bf M}_2$ now can be written as $${\bf M}_1\times{\bf M}_2=M_1M_2\hat{r}_1\times\hat{r}_2=M_1M_2(\hat{\theta}_1\times\hat{\phi}_1)\times(\hat{\theta}_2\times\hat{\phi}_2)$$ You can now use the following forms of the quadruple cross product $$(A\times B)\times(C\times D)=(D\cdot(A\times B))C-(C\cdot(A\times B))D\\=(A\cdot(C\times D))B-(B\cdot(C\times D))A$$ to obtain equations in terms of $\hat{\theta}_1$, $\hat{\theta}_2$, $\hat{\phi}_1$, $\hat{\phi}_2$. Note that your equations might not be separable.<|endoftext|> TITLE: The Möbius function is the sum of the primitive $n$th roots of unity. QUESTION [13 upvotes]: Did you know that the Möbius function $\mu$ is the sum of the primitive $n$th roots of unity? I want to know the meaning of this. This statement is expressed as $$\mu(n) = \sum_{\substack{k=1 \\ (k,n)=1}}^n \exp\left(\frac{2\pi ik}{n}\right).$$ REPLY [5 votes]: A slightly different approach goes like this: introduce $$\alpha(n) = \sum_{k=1,\; (k,n)=1}^n \exp(2\pi i k/n).$$ Then we have $$\sum_{k=1}^n \exp(2\pi i k/n) = [[n=1]] = \sum_{d|n} \sum_{(k,n)=d} \exp(2\pi i k/n) \\ = \sum_{d|n} \sum_{(k/d,n/d)=1} \exp(2\pi i (k/d)/(n/d)) = \sum_{d|n} \alpha(n/d).$$ Now from the relation $$\sum_{d|n} \alpha(n/d) = [[n=1]]$$ we may conclude that $\alpha(n) = \mu(n)$ by inspection. If this is not sufficient, use Möbius inversion to get $$\alpha(n) = \sum_{d|n} \mu(d) [[n/d=1]] = \mu(n).$$<|endoftext|> TITLE: If $A$ and $B$ are integral domains, how to make $A\times B$ an integral domain? QUESTION [7 upvotes]: I've been lately reading a bit of ring theory and when I reached the 'Integral Domains' section a question suddenly arose that seems pretty natural to me. It is clear that if $A$ and $B$ are integral domains, then $A\times B$ is not necessarily an integral domain (if the product on $A\times B$ is defined by $(a,b)(a',b')=(aa',bb')$). In fact, if $A$, and $B$ are rings with unit, $A\times B$ is NOT an integral domain as $(1,0')(0,1')=(0,0')$. The question is Is it possible to define a 'canonical' product on $A\times B$ in order to make it into an integral domain? By 'canonical' I mean 'definable from the operations on $A$ and $B$, or from some fixed relation between them' (for example, when the semi-direct product on groups is defined it only depends on a certain homomorphism from a group $G$ to the automorphism group of another group $H$). I would appreciate any ideas. Thanks in advance.
REPLY [2 votes]: If $B=A$ and there exists a polynomial $x^2+ax+b\in A[x]$ which has no zeros in the field of fractions of $A$, then we can define a multiplication on $A\times A$ by mimicking the multiplication on the integral domain $A[x]/\left(x^2+ax+b\right)$. That is, we can define a multiplication on $A\times A$ as $$(p,q)\cdot (r,s):=(pr-bqs,ps+qr-aqs)\,,$$ for all $p,q,r,s\in A$. This is very much how we get $\mathbb{C}$ as $\mathbb{R}\times\mathbb{R}$, $\mathbb{F}_4$ as $\mathbb{F}_2\times\mathbb{F}_2$, $\mathbb{Z}\left[\frac{1+\sqrt5}{2}\right]$ as $\mathbb{Z}\times \mathbb{Z}$, etc.<|endoftext|> TITLE: Inductive proof that every person supports the same team QUESTION [8 upvotes]: I can't find what's wrong with this. Proof: Every person supports the same team. We can observe that in a set with only one person, all people within it support the same team. Now suppose that the statement is true for every set with cardinality $≤ n$. Then if there are $n + 1$ persons in a set, we remove one of them and, by the hypothesis, the remaining $n$ persons support the same team. Now put this person back into the initial set and remove a different one. Again, all the $n$ persons left support the same team. Therefore all $n + 1$ persons support the same team, and for every $k$ in $\mathbb N$, those $k$ people support the same team. REPLY [7 votes]: Now put the person back and remove another. Again those n people support the same team. Yes, but the $n$ people might support a different team than the previous $n$ people. You are assuming: $ set_1 = \{$ $n$ people supporting team $A$ $\}$ $ set_2 = \{$ $n$ people supporting team $B$ $\}$ $ set_{phantom} = set_1 \cap set_2 = \{$ $n-1$ people who support both team $A$ and team $B$ $\}$ (NEVER mentioned but assumed to exist) We assume $set_{phantom}$ isn't empty, and as a person can't support two different teams, we conclude $A$ and $B$ are the same team. Then we have: $set_{magic} = set_1 \cup set_2 = \{ n + 1$ people supporting either team $A$ or team $B$ but they are the same team $\}$ But notice! Our base case was $n=1$. For $n=1$, $set_{phantom}$, with $n-1 = 0$ members, IS empty. So team $A$ and team $B$ may be different. And $set_{magic}$ could be $n+1 =2$ people supporting different teams. So to do this proof right we need to prove it for a case where $set_{phantom}$ will have at least $1$ member. Thus we need to show this for a base case of $n \geq 2$. So we need to show all sets of two people support the same team. It is impossible for us to show this.<|endoftext|> TITLE: How to compute the sum $ 1+a(1+b)+a^2(1+b+b^2)+a^3(1+b+b^2+b^3)+\cdots$ QUESTION [6 upvotes]: Would it be possible to find a closed form for the following series? $$ 1+a(1+b)+a^2(1+b+b^2)+a^3(1+b+b^2+b^3)+\cdots$$ Thanks in advance! REPLY [2 votes]: You have $$\sum_{n=0}^\infty \left(a^n\sum_{m=0}^nb^m\right)\text{.}$$ Before determining what this converges to, it is worth establishing for which values of $(a,b)$ it converges at all.
To that end, consider the Ratio Test: $$\left\lvert\frac{a^{n+1}\sum_{m=0}^{n+1}b^m}{a^n\sum_{m=0}^nb^m}\right\rvert=\left\lvert a\left(1+\frac{b^{n+1}}{\sum_{m=0}^nb^m}\right)\right\rvert\to\begin{cases}\lvert ab\rvert&\lvert b\rvert>1\\\lvert a\rvert&\lvert b\rvert=1\\\lvert a\rvert&\lvert b\rvert<1\\\end{cases}$$ So it converges when: $\lvert ab\rvert<1$ and $\lvert b\rvert>1$ $\lvert a\rvert<1$ and $\lvert b\rvert\leq1$ And diverges when: $\lvert ab\rvert>1$ and $\lvert b\rvert>1$ $\lvert a\rvert>1$ and $\lvert b\rvert\leq1$ This test has been inconclusive when: $\lvert ab\rvert=1$ and $\lvert b\rvert>1$ $\lvert a\rvert=1$ and $\lvert b\rvert\leq1$ Let's look at the last case first. If $\lvert a\rvert=1$, then you have $\sum_{n=0}^\infty {\pm_n\left(\sum_{m=0}^nb^m\right)}\text{.}$ The terms of such a series, $\pm_n\left(\sum_{m=0}^nb^m\right)$, do not approach $0$ no matter what $b$ is. So your series will diverge. What if $\lvert b\rvert>1$ and $\lvert a\rvert=\frac{1}{\lvert b\rvert}$? Since $\lvert b\rvert>1$ we may write $\sum_{m=0}^nb^m=\frac{b^{n+1}-1}{b-1}$, and your series is $\sum_{n=0}^\infty {\pm_n\left(\frac{b^{n+1}-1}{b^n(b-1)}\right)}\text{.}$ Again using $\lvert b\rvert>1$, the terms of this series do not converge to $0$, so this series would be divergent. Now we know the only situations where your series converges are: $\lvert ab\rvert<1$ and $\lvert b\rvert>1$ (which implies $\lvert a\rvert<1$) $\lvert a\rvert<1$ and $\lvert b\rvert\leq1$ (which implies $\lvert ab\rvert<1$) When $b\neq1$, both situations give: $$\begin{align} \sum_{n=0}^\infty \left(a^n\sum_{m=0}^nb^m\right) &=\sum_{n=0}^\infty a^n\frac{1-b^{n+1}}{1-b}\\ &=\frac{1}{1-b}\left(\sum_{n=0}^\infty a^n-b\sum_{n=0}^\infty (ab)^n\right)\\ &=\frac{1}{1-b}\left(\frac{1}{1-a}-\frac{b}{1-ab}\right)\\ &=\frac{1}{(1-a)(1-ab)}\\ \end{align}$$ And when $b=1$ with $\lvert a\rvert<1$, $$\begin{align} \sum_{n=0}^\infty \left(a^n\sum_{m=0}^nb^m\right) &=\sum_{n=0}^\infty (n+1)a^n\\ &=\left.\sum_{n=0}^\infty \frac{d}{dx}x^{n+1}\right|_{x=a}\\ &=\left.\frac{d}{dx}\sum_{n=0}^\infty x^{n+1}\right|_{x=a}\\ &=\left.\frac{d}{dx}\left(\frac{1}{1-x}-1\right)\right|_{x=a}\\ &=\left.\frac{1}{(1-x)^2}\right|_{x=a}\\ &=\frac{1}{(1-a)^2}\\ \end{align}$$ Which conveniently agrees with the formula $\frac{1}{(1-a)(1-ab)}$.<|endoftext|> TITLE: Prove nth root of n! is less than n+1 th root of ((n+1) !): $\sqrt[n]{n!}\lt \sqrt[n+1]{(n+1)!}$? QUESTION [6 upvotes]: Prove the inequality $$\sqrt[n]{n!}\lt \sqrt[n+1]{(n+1)!}$$ REPLY [5 votes]: Start with the pretty easy fact that the average of the logarithms of the first $n$ integers is less than the logarithm of $n+1$: $$ \frac1n\sum_{k=1}^n\log(k)\lt\log(n+1)\tag{1} $$ Divide by $n+1$ $$ \frac1{n(n+1)}\sum_{k=1}^n\log(k)\lt\frac1{n+1}\log(n+1)\tag{2} $$ Add $\frac1{n+1}\sum\limits_{k=1}^n\log(k)$ to both sides $$ \frac1n\sum_{k=1}^n\log(k)\lt\frac1{n+1}\sum_{k=1}^{n+1}\log(k)\tag{3} $$ And $(3)$ is the logarithm of the desired inequality.<|endoftext|> TITLE: Show that the Stone–Čech compactification $\beta \mathbb{Z}$ is not metrizable. QUESTION [10 upvotes]: Show that the Stone–Čech compactification $\beta \mathbb{Z}$ is not metrizable (here $\mathbb{Z}$ denotes the set of integers with the discrete topology). Definition. Let $X$ be a completely regular space.
We say $\beta (X)$ is a *Stone–Čech compactification* of $X$ if it is a compactification of $X$ such that any continuous map $f:X\rightarrow C$ of $X$ into a compact Hausdorff space $C$ extends uniquely to a continuous map $g:\beta (X)\rightarrow C$. We also have a theorem that states if $X$ is metrizable, then $X$ is first countable. So, if we show that $\beta (\mathbb{Z}) $ is not first countable, then we have our conclusion. I've looked online for proofs of this fact and can't seem to find any. REPLY [4 votes]: A compact metric space is separable. Take an uncountable family of subsets of $\mathbb Z$, any two of which have finite intersection. Then their closures intersected with the "remainder" $\beta \mathbb Z \setminus \mathbb Z$ give you an uncountable family of pairwise disjoint open sets in the compact space $\beta \mathbb Z \setminus \mathbb Z$. So $\beta \mathbb Z \setminus \mathbb Z$ is not metrizable, and therefore of course neither is $\beta \mathbb Z$.<|endoftext|> TITLE: Show that $\|A^{-1}\|\leq \|A\|^{n-1}$ QUESTION [7 upvotes]: Suppose that $A$ is a matrix in $SL_n(\mathbb{R})$. Show that $\|A^{-1}\|\leq \|A\|^{n-1}$. By $\|A\|$, I mean the operator norm $\displaystyle\sup_{\|v\|=1} \|Av\|$. REPLY [3 votes]: Here is a proof for the Euclidean norm. This approach is buried in Schäffer, J., "Norms and determinants of linear mappings", Technical report, CMU, Department of Mathematical Sciences, 1970. He does not use the SVD, but the idea is essentially the same. Let $A=U \Sigma V^*$ be a singular value decomposition of $A$, with $\Sigma=\operatorname{diag} (\sigma_1,...,\sigma_n)$, and $\sigma_1\ge ... \ge\sigma_n$. Then $\|A\| = \sigma_1, \|A^{-1}\| = {1 \over \sigma_n}$, and $|\det A| = \sigma_1 \cdots \sigma_n$. Hence $|\det A| \le \sigma_1 \cdots \sigma_{n-1} {1 \over \|A^{-1} \|} \le \|A\|^{n-1} {1 \over \|A^{-1} \|}$, from which we obtain $|\det A| \|A^{-1} \| \le \|A\|^{n-1}$. The result is not true for general operator norms, for example, with $A=\begin{bmatrix} {1 \over 2} & 1 \\ 0 & 2 \end{bmatrix}$, we have $\det A = 1$, $\|A^{-1} \|_\infty = 3, \|A\|_\infty = 2$ and so $3=\|A^{-1} \|_\infty \not< \|A\|_\infty^{2-1} = 2$. As an aside, it is worth noting the related Hadamard's inequality, $|\det A| \le \|Ae_1\| \cdots \|A e_n\|$ (Euclidean norm).<|endoftext|> TITLE: $\int \frac{\sqrt{x}}{x-1}dx $ QUESTION [7 upvotes]: I wished to solve $$\int \frac{\sqrt{x}}{x-1}dx \tag{1}$$ through hyperbolic trig substitution. My work is as follows: Let $\sqrt{x}=\tanh (\theta)$. Then $(1)$ becomes $$\int -2\tanh^2(\theta)\ d\theta=-2\theta+2\tanh(\theta) +C.$$ Subbing back in yields $$-2\text{arctanh}\big(\sqrt{x}\big)+2\sqrt{x} +C. $$ Using $$\text{arctanh}(m)=\frac{1}{2}\ln\Big(\frac{1+m}{1-m} \Big) $$ I arrive at the answer $$2\sqrt{x}+\ln\Big(\frac{1-\sqrt{x}}{1+\sqrt{x}} \Big) +C. $$ However, the answer key and other integration methods suggest the answer $$ 2\sqrt{x}+\ln\Big(\frac{\sqrt{x}-1}{1+\sqrt{x}} \Big) +C. $$ Where did I go wrong? REPLY [5 votes]: According to wolfram alpha both answers appear to work; they must somehow differ by an additive constant. http://www.wolframalpha.com/input/?i=derivative+of+2sqrt(x)%2Bln((sqrt(x)-1)%2F(1%2Bsqrt(x))) http://www.wolframalpha.com/input/?i=derivative+of+2sqrt(x)%2Bln((1-sqrt(x))%2F(1%2Bsqrt(x))) In particular that constant should be $\ln(-1)=i\pi$. In the reals this discrepancy could be explained by understanding the anti-derivative as a piecewise function.
When $\sqrt{x}<1$ we must use one form and when $\sqrt{x}>1$ we must use the other. Your antiderivative should be $$\ln\left| \frac{1-\sqrt{x}}{1+\sqrt{x}}\right|$$ if you want it to hold when $0\leq x<1$ and when $x>1$. REPLY [3 votes]: Strictly speaking, your answer is correct. If $x>0$, then $\ln(-x) = \ln(x)+\pi i$; these two are off by a constant. The difference between the two answers here is that what's in the $\ln$ is off by a factor of $-1$, and so the two functions are off by a constant.<|endoftext|> TITLE: Any commutative ring lying between a Dedekind domain and its fraction field is Dedekind? QUESTION [9 upvotes]: Let $R$ be a commutative ring with unity, $D$ be a Dedekind domain, $K$ be its fraction field such that $D \subseteq R \subseteq K$. Then is it true that $R$ is Dedekind? REPLY [2 votes]: Let $p$ be a prime of $R$, and $q$ its restriction to $D$. We have $D_q\subseteq R_p \subseteq K$ with $D_q$ a DVR. It is immediate to see that there are no proper intermediate rings between a DVR and its fraction field, hence if $p$ is nonzero then $D_q=R_p\neq K$, and if $p=(0)$ then $D_q=R_p= K$. Hence all localizations of $R$ at prime ideals coincide with localizations of $D$; in particular, localizations at maximal ideals are DVRs. To conclude, it's enough to show that $R$ is noetherian, i.e. it satisfies the ascending chain condition on ideals. But this is easy: we already know that $D$ is noetherian, and hence it is enough to show that the map sending an ideal $I\subseteq R$ to $I\cap D\subseteq D$ is injective. But this can be checked at the level of primes: two ideals are equal if and only if their localizations at every prime are equal, and since $D_q=R_p$ for every prime $p\subseteq R$, we conclude.<|endoftext|> TITLE: Multiply $1\times 3$ matrix by corresponding numbers QUESTION [7 upvotes]: For example, I want this to happen: $$\begin{bmatrix}1& 2& 3\end{bmatrix}\times\begin{bmatrix}2& 3& 4\end{bmatrix} = \begin{bmatrix}2& 6& 12\end{bmatrix}$$ It's not exactly matrix multiplication, but I hope you can see what I'm getting at. Is there some notation in linear algebra for this operation? REPLY [10 votes]: I believe what you are describing is called the Hadamard product. You can read about it and see some notation for using it on the Wikipedia page. https://en.wikipedia.org/wiki/Hadamard_product_(matrices) This page uses the notation $\circ$ for the Hadamard product. REPLY [6 votes]: I don't know any standard matrix operation of this kind. You can rewrite the three entries of the $1\times 3$ matrices as diagonal elements of a $3 \times 3$ matrix, then perform the standard matrix multiplication.<|endoftext|> TITLE: Sequence of rotation matrices QUESTION [5 upvotes]: I want a rotation matrix $R$ that transforms the +x axis to +y, +y to +z, and +z to +x. One way of doing it is by a rotation about +x by 90 deg anti-clockwise, followed by a rotation about +y by 90 deg anti-clockwise. The matrices respectively are: $$R_1 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \\ \end{bmatrix}$$ $$R_2 = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ -1 & 0 & 0 \\ \end{bmatrix}$$ Since $R_1$ operates first, followed by $R_2$, the resultant, I think, is $R=R_2R_1$. But when I cross-check by applying $R$ to an arbitrary point, say $(1,2,3)$, I don't get the right transform of $(3,1,2)$. Instead, I get it right when I do $R=R_1R_2$, which doesn't make sense to me. Please help. REPLY [4 votes]: Good attempt Blaze! But it seems that you made a common mistake.
Before we start, let's review a theorem about the standard matrices for linear transformations: Let $ \hspace{0.5mm}T: \mathbb R^n \rightarrow \mathbb R^m $ be a linear transformation. Then there exists a unique $m \times n $ matrix $A$ such that $\hspace{0.8mm} T(\textbf{x})=A\textbf{x} \hspace{0.5mm}$ for all $\hspace{0.5mm} \textbf{x}\hspace{0.5mm} $ in $ \mathbb R^n$. We call this matrix $A$ the standard matrix of the linear transformation $T$ and $\hspace{0.3mm}A = [\hspace{1mm} T(\textbf{e}_1) \hspace{3mm} T(\textbf{e}_2)\hspace{3mm}\cdot\cdot\cdot\hspace{3mm} T(\textbf{e}_n)\hspace{1mm} ] \hspace{0.5mm}$, where $\textbf{e}_j$ is the $j$-th column of the identity matrix in $\mathbb R^n$. This theorem appears in most texts about linear transformations, e.g. Friedberg, Lay, etc. Indeed your matrix can be easily found with this theorem. First we reformulate the problem in the language of the theorem: We are dealing with 3-dimensional space. So we want a linear transformation $T: \mathbb R^3 \rightarrow \mathbb R^3 $. You want to map the x-axis to the y-axis. And we have the 1st column of the identity matrix $I_3$, $\textbf{e}_1=(1,0,0)$, pointing in the x-direction. We also have the 2nd column of $I_3$ pointing in the y-direction. Then we have $\hspace{0.5mm}T(\textbf{e}_1) = \textbf{e}_2$. Similarly, $\hspace{0.5mm}T(\textbf{e}_2) = \textbf{e}_3$ and $\hspace{0.5mm}T(\textbf{e}_3) = \textbf{e}_1$. Then the standard matrix $A$ of our transformation is $$A=[\hspace{1mm} T(\textbf{e}_1) \hspace{3mm} T(\textbf{e}_2)\hspace{3mm} T(\textbf{e}_3)\hspace{1mm} ] = [\hspace{1mm} \textbf{e}_2 \hspace{3mm} \textbf{e}_3\hspace{3mm} \textbf{e}_1\hspace{1mm} ] = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\\ \end{pmatrix}$$ This method shall be cleanest and quickest :) However, what's wrong with your attempt then? You think correctly but you read the matrices wrongly. Let's use the theorem in the box to read your matrices: For $R_1$, the linear transformation $T_1:\mathbb R^3 \rightarrow \mathbb R^3 $ defined by $T_1(\textbf{x})=R_1\textbf{x}$ does no change to the +x-axis ($\textbf{e}_1$). It maps $\textbf{e}_2$ to $\textbf{e}_3$. And it maps $\textbf{e}_3$ to $-\textbf{e}_2$. It is a good exercise for you to read $T_2(\textbf{x})=R_2\textbf{x}$ yourself. Then you will discover that if you do $T_1$ then $T_2$, you are mapping $\textbf{e}_1$ to $-\textbf{e}_3$, which does not make sense. To learn more about this, compute $R=R_2R_1$. Then the first column shall be exactly $-\textbf{e}_3=T_2(T_1(\textbf{e}_1))$. Doing matrix multiplication on two standard matrices of linear transformations corresponds to composition of the two linear transformations! Hope that I, a non-math-major undergrad, can help! I am trying to type as much as possible since I am trying to learn LaTeX :) Good luck! REPLY [2 votes]: The matrix $$ M = \left[\begin{array}{ccc}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array}\right] $$ maps $+x$ to $+y$, maps $+y$ to $+z$ and $+z$ to $+x$. You can see this by direct verification. Because it maps a right-hand triple to a right-hand triple, $M$ is a rotation matrix. (You can also see that $\mbox{det}(M)=1$.)<|endoftext|> TITLE: A function that grows faster than any function in the sequence $e^x, e^{e^x}, e^{e^{e^x}}$... QUESTION [10 upvotes]: Is there a continuous function constructed from elementary functions, or by an integral formula involving only elementary functions (like the Gamma function), that grows faster than any $e^{e^{e...^x}}$ ($e$ appears $n$ times)?
I ask for the answer with a single formula. Gluing continuous functions together is too trivial. The function need not be defined on the whole of $\mathbb{R}$; the domain $(a, \infty)$ is acceptable. REPLY [5 votes]: Here is a proof that there is no such elementary function. Let $A$ be a collection of functions which grow slower than the collection $e^x, e^{e^x} , \dots $. We know that the sum, product, composition and integral of functions which grow slower than $e^x, e^{e^x} , \dots $ also grow slower than $e^x, e^{e^x} , \dots $. Then the set of functions which are finite iterated sums, products, compositions and integrals of functions in $A$ also grows slower than $e^x, e^{e^x} , \dots $ Now, depending on our definition of elementary function, if you take $A= \{1, x, \exp, \log, x^a \}$ you should be done.<|endoftext|> TITLE: What are the measurement units of Fisher information? (Dimensional Analysis) QUESTION [7 upvotes]: I know that there is a strong relationship between Shannon entropy and thermodynamic entropy -- they even have the same units and differ only by a constant factor. This suggests that they both intrinsically describe the same fundamental concept. Wikipedia says that there is a strong relationship between Fisher information and relative entropy (also called Kullback-Leibler divergence), as does an answer to a previous question on Math.SE. However, looking at the relevant formulas, it does not look like Fisher information would be measured with the same units that relative entropy would. This suggests that they are measuring fundamentally distinct, albeit related, physical concepts. The formula for the Shannon entropy can be written as follows: $$\int [ - \log p(x) ]\ p(x) \, dx $$ This is usually measured in bits. What are the units of Fisher information (given that Shannon entropy can be measured in bits)? Fisher information can be written as: $$\int \left(\frac{\partial}{\partial \theta} \log p(x; \theta) \right)^2 p(x;\theta) \, dx $$ My guess, based on comparing the definitions of Shannon entropy and Fisher information, is that the latter would be measured in units something like $$\frac{\text{bit}^2}{\Theta^2} $$ where $\Theta$ is the unit of measurement of the parameter $\theta$ that is to be estimated. I am not quite sure how to account for the effects of the extra partial differentiation compared to the definition of Shannon entropy. Perhaps the expectation operation $\int ( \cdot) p(y) \, dy$ should leave the units unchanged, although I don't know how to give a non-intuitive explanation of this suspicion. Since the Fisher information is the variance of the score, this question might be answered by first deriving the units of the score. This question might be related, although it was unanswered. REPLY [6 votes]: The Cramér–Rao lower bound says $ \operatorname{Var}(T(X))\ge {1\over I(\theta)},$ where $T(X) $ is an unbiased estimator of the parameter $\theta$ and $I(\theta)$ is the Fisher information. Since only quantities with the same units can be compared, and $\operatorname{Var}(T)$ has unit $\Theta^2$, it follows that $I(\theta)$ has unit $\Theta^{-2}$.<|endoftext|> TITLE: Approximating the variance of the integral of a white noise Gaussian process QUESTION [7 upvotes]: Let $X(t)$ be a stationary Gaussian process with mean $\mu$, variance $\sigma^2$ and stationary correlation function $\rho(t_1-t_2)$. If $X(t)$ is a white noise process, the correlation function is given by the Dirac delta function $\rho(t_1-t_2) = \delta(t_1-t_2)$.
The integral of this process is given by: $$I = \int_0^L X(t) \, dt$$ According to this CrossValidated post the variance of $I$ is given by: $$\text{Var}[I] = L\sigma^2$$ However this does not agree with the results I obtained through simulation. The approach is to discretise the white noise Gaussian process into $N$ independent normal variables. The integral can then be approximated through: $$ I = \int_0^L X(t)\, dt \approx \frac{L}{N}\sum_{i=1}^NX_i$$ where $X_i$ are independent random variables $X_i \sim \mathcal{N}(\mu,\sigma^2)$. In simulation I find that as $N$ grows large, $\text{Var}[I] \rightarrow 0$. Why does it not approach $L\sigma^2$? What is the problem with my approximation? REPLY [7 votes]: The disparity arises from the fact that your discretization of the continuous process does not assign the appropriate variance to the $X_i$. Here's the key (for heuristics, see here, Section 3.2): If $\{X(t)\}_{t\in\mathbb{R}}$ is a continuous Gaussian process such that $$\begin{align}&E[X(t)]=0\\ &E[X(t)X(t')]=\sigma^2 \delta(t-t')\end{align} $$ then a discrete sampling of the process, with a uniform sampling interval $\Delta$, viz., $$X(t),X(t+1\Delta),X(t+2\Delta),...$$ is simulated as an i.i.d. sequence of Gaussian$(\text{mean}=0,\text{variance}=\frac{\sigma^2}{\Delta})$ random variables, the quality of the simulation increasing as $\Delta\to 0$. (The power spectral density of the i.i.d. sequence approaches that of the continuous process that it simulates -- flat and equal to the same constant value $\sigma^2$ -- except that for the i.i.d. sequence it is "band limited", i.e. vanishing outside of a finite-width interval.) Thus, in the present case, to simulate $$I = \int_0^L X(t) \, dt \approx \sum_{i=1}^N X(i\Delta)\,\Delta $$ where $\Delta=\frac{L}{N}$, one would use $$\hat{I} = \Delta\sum_{i=1}^N X_i $$ with $X_1,X_2,\ldots$ i.i.d. $\text{Gaussian}(\text{mean}=\mu,\text{variance}=\frac{\sigma^2}{\Delta})$. Then $$\begin{align} \text{Var}[\hat{I}] = \Delta^2\,N\,\frac{\sigma^2}{\Delta} = (N\Delta)\sigma^2 = L\sigma^2. \end{align} $$<|endoftext|> TITLE: The Space $\mathcal{D}(\Omega)$ of Test Functions is Not a Sequential Space QUESTION [6 upvotes]: Let $\Omega \subseteq \mathbb{R}^n$ be a nonempty open set, and $\mathcal{D}(\Omega)$ the space of test functions (that is, infinitely differentiable functions $f:\Omega \rightarrow \mathbb{C}$ with compact support contained in $\Omega$), with the usual topology defined through an inductive limit of Fréchet spaces (see Distribution or any good book which deals with distributions, such as Rudin, Functional Analysis, or Reed and Simon, Methods of Mathematical Physics, Volume I, or the wonderful Schwartz, Théorie des Distributions). Now, let us recall that a topological space $X$ is called a Fréchet-Urysohn space if for every $A \subseteq X$, the closure of $A$ coincides with the sequential closure $[A]_{seq}$ of $A$, which is defined as \begin{equation} [A]_{seq} = \{ x \in X : \exists (a_j)_{j=0}^{\infty} : a_j \rightarrow x \textrm{ and } a_j \in A \textrm{ for } j=0,1,2,\dots \}. \end{equation} A set $A \subseteq X$ for which $A= [A]_{seq}$ is called sequentially closed. Note that every closed set $A$ is sequentially closed. If also the converse is true, that is if every sequentially closed set turns out to be closed, the space $X$ is called a sequential space (see Sequential Space for more details and references about sequential spaces).
Clearly, every Fréchet-Urysohn space is a sequential space, but the converse is not true (see the post Understanding two similar definitions). With this terminology we may ask: is $\mathcal{D}(\Omega)$ a Fréchet-Urysohn space? Is it a sequential space? The answer to these two questions is negative, as I will show in my answer below. I posted the question here only to share this result with the community of math.stackexchange.com since I could not find the answer in any book I consulted. REPLY [3 votes]: I will make use here of the notation introduced in my previous post Topology of the space $\mathcal{D}(\Omega)$ of test functions. We shall show that $\mathcal{D}(\Omega)$ is not a sequential space, so that the answer to both questions is negative, as announced. Take $V$ as in my answer to the post Topology of the space $\mathcal{D}(\Omega)$ of test functions and let $A$ be the complement of $V$. Then the argument given in that answer shows that $0$ is a limit point of $A$, so $A$ is not closed. Anyhow, $A$ is sequentially closed, as we shall now show. Suppose that $f \in V$ and that $(f_j)$ is a sequence in $\mathcal{D}(\Omega)$ converging to $f$. Then by the characterization of converging sequences in $\mathcal{D}(\Omega)$ (see e.g. Theorem (6.5) in Rudin, Functional Analysis, 2nd Edition), we know that: (i) there is a compact set $K$ contained in $\Omega$, such that the support of $f_j$ is contained in $K$ for all $j=0,1,2,\dots$, (ii) for every $\epsilon > 0$ and every nonnegative integer $N$ there exists a nonnegative integer $m$ such that $\left| \left| f_j - f \right| \right|_N < \epsilon$ for all $j \geq m$. Now, since $V \cap \mathcal{D}_K \in \tau_K$, there exists $\epsilon > 0$ and a nonnegative integer $N$ such that the set \begin{equation} B = \{ g \in \mathcal{D}_K : \left| \left| g - f \right| \right|_N < \epsilon \} \end{equation} is contained in $V \cap \mathcal{D}_K$. Then, if $m$ is the nonnegative integer whose existence is stated in (ii), we conclude that $f_j \in V$ for all $j \geq m$. So there is no sequence $(f_j)$ in $A$ converging to $f$. QED NOTE. From NOTE (2) in my answer to the post Topology of the space $\mathcal{D}(\Omega)$ of test functions, we know that $A$ is dense in $\mathcal{D}(\Omega)$, and since $A$ is sequentially closed, we can conclude that $A$ is an example of a dense subset of $\mathcal{D}(\Omega)$ which is not sequentially dense in $\mathcal{D}(\Omega)$.<|endoftext|> TITLE: What is the minimal number of generators of the ideal $(6x, 10x^2, 15x^3)$ in $\Bbb Z[x]$? QUESTION [10 upvotes]: I know that the ideal $J=(6x, 10x^2, 15x^3) \trianglelefteq \Bbb Z[x]$ is not principal – I give the proof below. But can it be generated by two polynomials? I believe that the answer is no. I wrote the corresponding $5$ equations, but it seems quite long, so I would like some help. More generally, if I consider the first $n$ prime numbers $p_1,...,p_n$, then I think that the minimal number of generators of the ideal $I_n = (q_1x, \dots, q_nx^n)\trianglelefteq \Bbb Z[x]$ is $n$, where $q_i = \prod\limits_{j\neq i} p_j$. Do you think this is correct? The ideal $J$ above corresponds to $I_3$ — I'm trying to start with an easy case. I don't really know the notions of dimension, of height, etc. in commutative algebra, but answers introducing these notions would be appreciated.
If $(6x, 10x^2, 15x^3)=(f)$ were principal, then we could write $$f(x)=6xP(x)+10x^2Q(x)+15x^3R(x) \qquad 6x=a(x)f(x)$$ so that $6=a(x) \;[ 6P(x)+10xQ(x)+15x^2R(x) ]$ would yield $a(x)Q(x)=0=a(x)R(x)$ and $P(x)a(x)=1$. Therefore $Q=R=0$ and $f(x)=6xP(x)$, so that $10x^2=b(x)f(x)=6xb(x)P(x)$, which is impossible when evaluated at $1$, since all the polynomials have integer coefficients. REPLY [6 votes]: I have been working hard for a week on this question, and I found that matters around minimal numbers of generators are really delicate. The final answer is that the present case is not correct. It would be enough to show a counterexample, but I previously wrote a wrong positive answer based on localization techniques; since you said you are interested in such notions, I will leave the paragraphs about them. To see how delicate questions on numbers of generators are, you can try to show with local methods that $(12x,10x^2,15x^3)$ cannot be generated by two elements. I introduce you to these techniques because I find them really useful in a variety of cases. Then, I give you the counterexample. I try to be concise even if there is a lot to say (it could be half of a basic course in commutative algebra). Modules. Given a ring $A$, an $A$-module $M$ is an (additive) abelian group, with a linear action of $A$. What do I mean? The linear endomorphisms of $M$ $$End(M) = \{f: M \to M \ :\ f(x+y) = f(x)+f(y) \ \forall \ x,y \in M\}$$ form a ring, where the sum is $(f+g)(x) := f(x)+g(x)$ and the product is the composition. Well, a linear action of $A$ is a ring-homomorphism $\varphi:A \to End(M)$: you can 'multiply' by elements of $A$ by $a \cdot m := \varphi(a)(m)$, and the multiplication is linear in both entries $a,m$. Examples (try to understand why these statements hold): If $k$ is a field, a $k$-module is a $k$-vector space. A $\mathbb{Z}$-module is just an abelian group. An ideal $I \subset A$ is an $A$-module. A quotient $A/I$ of a ring by an ideal is an $A$-module. If $M$ is an $A$-module, $I$ an ideal of $A$, then $IM=\{\sum_k i_k m_k :\ i_k \in I,\ m_k \in M\}$ is an $A$-module. If $M$ is an $A$-module, $M/IM$ is an $A/I$-module (the quotient $M/IM$ is a quotient of abelian groups). Localization. Given a ring $A$ and a prime ideal $P \subset A$, you can form $A_P$, the localized ring at $P$ (see here), in the following way. Take $B=\{a/s: a \in A, s \not \in P\}$ the ring of formal quotients with denominator not in $P$; the operations are given as if they were rational numbers (like $a/b+c/d = (ad+bc)/bd$). Then $A_P=B/\sim$, where $\sim$ is given by $a/b \sim a'/b' \ \ \Leftrightarrow \ \exists \ s \not \in P: s(ab'-a'b) = 0$. This is weird, but note that if $A$ is a domain the condition reduces to the prettier $ab'=a'b$. Check these statements: $\sim$ is an equivalence relation (this is very boring, but you must do it once in your life). There is a morphism $\varphi_P:A \to A_P$ which sends $a \to a/1$ and it is injective when $A$ is a domain. There is a correspondence of ideals between $$ \{\text{Ideals of }A\text{ contained in }P\} \ \leftrightarrow \ \{\text{Ideals of }A_P\} $$ which explicitly sends $I \subseteq A$ to $I_P = I A_P = \{x/s: x \in I\}$. This correspondence preserves inclusions and prime ideals. $A_P$ is a local ring, i.e. it has just one maximal ideal $\mathfrak{m}=PA_P$. In particular, $A_P/PA_P$ is a field. If $A$ is a local ring with unique maximal ideal $\mathfrak{m}$, then $A_{\mathfrak{m}}\simeq A$. If $I$ is an ideal of $A$, $P$ a prime ideal, then $(A/I)_P \simeq A_P/I_P$.
In particular, using the two previous facts, show $A_P/PA_P \simeq k(A/P)$ (its field of fractions). Localized modules. Similarly, given a domain $A$, a module $M$ and a prime $P$, you can form the $A_P$-module $M_P$ in the following way. Take $N=\{m/s: m \in M, s \not \in P\}$ the module of formal quotients with denominator not in $P$. Then $M_P = N/\sim$, where $\sim$ is given by $m/s \sim m'/s'$ iff $\exists \ t \not \in P:\ t(s'm-sm') = 0$. The action of $A_P$ is given by $a/s \cdot m/t := a \cdot m / st$. Check these statements for practice: The action is well defined (both $a/s, m/t$ can be replaced by equivalent elements: the result must be equivalent). If $M_P=0$ for every prime $P$, then $M=0$ (and vice versa). If $I$ is an ideal of $A$, then $I_P$ is exactly what we described before. $M_P/PM_P$ is a vector space (over..?). Dimension. Given a ring $A$, we can form 'chains of primes' $P_0 \subsetneq P_1 \subsetneq \ldots \subsetneq P_n$. We call this a chain of length $n$. Then we define $\dim A = \sup \{ n: \text{there exists a prime chain of length }n\}$. To get an idea of why this is called dimension, you have to look at algebraic geometry. If $P \subset \mathbb{C}[x_1, \ldots, x_n]$ is a prime ideal, you can look at the algebraic variety $V(P)=\{(x_1, \ldots, x_n) \in \mathbb{C}^n: f(x_1, \ldots, x_n)=0 \ \forall f \in P\}$. It turns out that the dimension of $V(P)$ as a geometric locus (line, surface, ...) is exactly the dimension of $A/P$ as a ring. For practice, show that: A product of fields $k_1 \times \ldots \times k_n$ has dimension $0$. If $A$ is a PID, then $\dim A =1$. In particular, $\dim \mathbb{Z} = 1$. Number of generators by local reasoning. And now we approach the initial question. We have a domain $A$ and an $A$-module $M$, and we wonder: at least how many elements do we need to generate $M$? We call this number $\mu(M)$. Localizations give bounds which are easier to calculate. Given $P$ a prime ideal, we can form the field $A_P/PA_P$ and the corresponding vector space $M_P/PM_P$. We know exactly how many generators we need here: the dimension of $M_P/PM_P$ over $A_P/PA_P$. We call this number $\mu_P(M)$. Lower bound. If $m_1, \ldots, m_k$ generate $M$, then $m_1/1, \ldots, m_k/1$ generate $M_P/PM_P$ (why?). So $\mu(M) \ge \mu_P(M)$. We get the lower bound $$\mu(M) \ge \max_P \mu_P(M)$$ Upper bound. This is known as the Forster–Swan theorem, and it states that $$\mu(M) \le \max_P \left( \mu_P(M) + \dim A/P \right)$$ Present case. There is a classification of primes in $\mathbb{Z}[x]$. So I tried all the localizations (with some tricks I can't reveal here), but unfortunately the lower bound is $2$ and the upper bound is $3$ (localizing at $(0)$). If you want to know more about trying 'all' localizations, see primary decomposition (you can verify that $J=(2,x^2) \cap (3,x)$ is its primary decomposition). Let's see the counterexample. Since $\Bbb Z[x]$ is a domain, multiplication by $x$ is a module isomorphism $(6,10x,15x^2) \simeq (6x,10x^2,15x^3) = J$, so generating sets correspond and we may factor out $x$. First step. $(6,10x,15x^2)=(6,2x,x^2)=:I$. Of course $(6,10x,15x^2) \subseteq I$ because every generator of $(6,10x,15x^2)$ is a multiple of a generator of $I$. Conversely, $2x=-10x+2x\cdot6$ and $x^2=-15x^2+10x\cdot x + 6 \cdot x^2$, so $I \subseteq (6,10x,15x^2)$. Second step. $I=(6,2x,x^2) = (6,2x+3x^2)=:K$. Again, it is easy to see that $K \subseteq I$. Conversely, we have $$(2x+3x^2) (3+2x)-6(x+2x^2+x^3) = (6x+4x^2+9x^2+6x^3)-6x-12x^2-6x^3 = x^2 \in K$$ And consequently also $2x= (2x+3x^2)-3 \cdot x^2 \in K$. If you have further questions, do not hesitate to ask.
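(A quick symbolic check of the identities in the two steps above; my addition, a minimal sketch assuming sympy is available:)

```python
# Verify the generator identities above in Z[x] with sympy (my addition).
import sympy as sp

x = sp.symbols('x')
# First step: 2x and x^2 are combinations of the generators 6, 10x, 15x^2.
assert sp.expand(-10*x + (2*x)*6) == 2*x
assert sp.expand(-15*x**2 + 10*x*x + 6*x**2) == x**2
# Second step: x^2, and then 2x, lie in K = (6, 2x + 3x^2).
assert sp.expand((2*x + 3*x**2)*(3 + 2*x) - 6*(x + 2*x**2 + x**3)) == x**2
assert sp.expand((2*x + 3*x**2) - 3*x**2) == 2*x
print("all identities check out")
```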
Edit: Indeed, I found a little refinement of the local inequalities which solves the present case. Here Mohan says that in the case of polynomial rings we need not take into account primes of height $0$ in the Forster–Swan inequality (this should be due to Eisenbud and Evans). In this case, we can exclude $P=(0)$. Let's examine the other cases. (1) If $P \neq (2,x),(3,x)$, we claim that $JA_P = A_P$. Note that $\sqrt{J} = \sqrt{(6,2x,x^2)} = (6,x) = (2,x) \cap (3,x)$. Otherwise, one would have $J \subseteq P$, then $(2,x) \cap (3,x) = \sqrt{J} \subseteq \sqrt{P} = P$ because a prime is radical, then $(2,x) \subseteq P$ or $(3,x) \subseteq P$, then $(2,x)=P$ or $(3,x) =P$ by maximality of $(2,x), (3,x)$. On balance, in this case $\mu_P(J)=1$. (2) If $P$ is $(2,x)$ or $(3,x)$, then $\mu_P(J) = 2$. In fact, getting rid of invertibles among the generators: $P=(2,x)$: $JA_P = (6,2x,x^2)A_P = (2,2x,x^2)A_P=(2,x^2)A_P$. So $\mu_P(J) = \dim_{\mathbb{Z}[x]/(2,x)} \big( (2,x^2)_P/P(2,x^2)_P \big) = \dim_{\mathbb{Z}/2\mathbb{Z}} \big( [(2,x^2)/(4,2x,x^3)]_P \big) = 2$. $P=(3,x)$: $JA_P = (6,2x,x^2)A_P = (3,x,x^2)A_P = (3,x)A_P$. So $\mu_P(J) = \dim_{\mathbb{Z}/3\mathbb{Z}} \big( [(3,x)/(9,3x,x^2)]_P \big) = 2$. (3) Putting it all together, $\mu_P(J) + \dim(A/P)$ equals $1+1=2$ for primes of height $1$, and it is $\le 2+0 = 2$ for primes of height $2$. We can ignore primes of height $0$, so Forster–Swan gives $\mu(J) \le 2$. Of course, we have $\mu(J) \ge \mu_{(3,x)}(J) = 2$, so we conclude that $\mu(J)=2$.<|endoftext|> TITLE: Determinant of symmetric matrix is an irreducible polynomial QUESTION [8 upvotes]: In short: why is the determinant of a symmetric matrix an irreducible polynomial in the upper-triangular elements as variables? The determinant of a square matrix $(x_{ij}) \in \Bbb F^{n \times n}$ is of course a polynomial $p$ of degree $n$ in the $n^2 $ variables $x_{ij}$. This polynomial is irreducible, a fact which is proved nicely here. When looking at symmetric matrices, the determinant is a different polynomial $f$ in $\frac{n(n+1)}{2}$ variables, namely $(x_{ij})_{i\leq j}$. Note that $f$ is a quadratic polynomial in each variable, whereas $p$ is linear in each variable. How do we know that $f$ is an irreducible polynomial too? This post gives a seemingly-simple proof that I don't get: Suppose by contradiction that $f=gh$. Then we claim that $f=q^2$ for some polynomial $q$ in the variables $(x_{ij})_{i\leq j}$. To justify this, we look at the identity $\det (AA^T)=\det(A)^2$, so we know that $p(x_{ij})^2$ is always equal in value to $f(y_{ij})_{i\leq j}$, where $y_{ij}=\sum \limits_k x_{ik}x_{kj}$. Since the variables $y_{ij}, x_{ij}$ are not the same, I don't understand how we can claim from the identity $p(x_{ij})^2=f(y_{ij})$ that the required $q$ exists, and I also don't see where we use our assumption by way of contradiction that $f$ is reducible. I arrived at the question in the book by Shafarevich, where he sadly just comments that it is "easy to see": REPLY [2 votes]: This can also be done in a similar way as described in the post linked in the question for general square matrices. Letting $f\in\mathbb F[x_{ij}]_{1\le i\le j\le n}$ be the determinant, suppose that it decomposes as $f=gh$. We need to show that $g$ or $h$ is constant. This follows from the following properties of the determinant (when expressed as a linear combination of monomials in the $x_{ij}$): (1) For each $i=1,\ldots,n$, $f$ is first order in $x_{ii}$. (2) For each $1\le i < j\le n$, $f$ contains terms containing $x_{ij}$ but does not contain any terms containing $x_{ii}x_{ij}$ or $x_{jj}x_{ij}$.
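(Both properties are easy to read off from the Leibniz expansion; here is also a small machine check for $n=3$, my addition, a sketch assuming sympy, with symbol names of my choosing:)

```python
# Check properties (1) and (2) of the symmetric determinant for n = 3 (my addition).
import sympy as sp
from itertools import combinations_with_replacement

n = 3
xs = {(i, j): sp.Symbol(f'x{i}{j}')
      for i, j in combinations_with_replacement(range(1, n + 1), 2)}
M = sp.Matrix(n, n, lambda i, j: xs[(min(i, j) + 1, max(i, j) + 1)])
f = sp.expand(M.det())

for i in range(1, n + 1):
    assert sp.degree(f, xs[(i, i)]) == 1                        # (1): first order in x_ii
for i in range(1, n + 1):
    for j in range(i + 1, n + 1):
        assert xs[(i, j)] in f.free_symbols                     # (2): f involves x_ij ...
        # ... but no monomial contains x_ii*x_ij or x_jj*x_ij:
        assert sp.expand(sp.diff(f, xs[(i, i)], xs[(i, j)])) == 0
        assert sp.expand(sp.diff(f, xs[(j, j)], xs[(i, j)])) == 0
```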
I will make use of the following simple statements about factorisation in polynomial rings. (i) If a polynomial in $k[x]$ that is first order in $x$ factorizes into a product of two polynomials, then one factor is first order in $x$ and the other is independent of $x$. (ii) If $f=gh$ for $g\in k[x]\setminus k$ and $h\in k[y]\setminus k$, then $f$ contains monomial terms containing $xy$. Now for the argument that if $f=gh$ then either $g$ or $h$ is constant. By property (1) above, for each $i$, either $g$ is first order in $x_{ii}$ and $h$ is independent of $x_{ii}$, or vice-versa. Choose any $1\le i < j\le n$ such that $g$ is linear in $x_{ii}$. Then: $h$ is independent of $x_{ii}$. $h$ is also independent of $x_{ij}$, otherwise $gh$ would contain terms containing $x_{ii}x_{ij}$, contradicting (2). $g$ depends on $x_{ij}$, otherwise $gh$ would be independent of $x_{ij}$, contradicting (2). $g$ is linear in $x_{jj}$, otherwise $h$ would be linear in $x_{jj}$ and $gh$ would contain terms containing $x_{jj}x_{ij}$, contradicting (2). Supposing that $g$ is linear in $x_{11}$ (wlog), the above argument can be applied to each $i < j$ to show that $h$ is independent of all the indeterminates $x_{ii},x_{jj},x_{ij}$ and, hence, is constant.<|endoftext|> TITLE: "Why" is $\mathbb{CP}^2\mathbin{\#}\mathbb{CP}^2$ not homeomorphic to $\mathbb{CP}^2\mathbin{\#}\overline{\mathbb{CP}^2}$? QUESTION [11 upvotes]: As far as I can tell, $\mathbb{CP}^2\mathbin{\#}\mathbb{CP}^2$ is not homeomorphic to $\mathbb{CP}^2\mathbin{\#}\overline{\mathbb{CP}^2}$. This is because $\mathbb{CP}^2\#\mathbb{CP}^2$ has a definite intersection form but the intersection form of $\mathbb{CP}^2\mathbin{\#}\overline{\mathbb{CP}^2}$ is indefinite, and a homeomorphism can only change the signature of the intersection form by a factor of $\pm{1}$, according to whether it preserves or reverses the orientation. (Question 1: Is this argument okay?) Now, on some level I find this completely flummoxing. The manifolds $\mathbb{CP}^2$ and $\overline{\mathbb{CP}^2}$ $\it{are}$ homeomorphic (not as manifolds with orientation however, since $\mathbb{CP}^2$ has no orientation-reversing self-homeomorphisms). So you take two copies of the same manifold and glue them together, but the second time you do the gluing of the second manifold in the mirror (but this image is still homeomorphic to the original!), and you get two different results. Huh? I think I must be missing something subtle which happens in the connected sum operation. Question 2: Does anyone know what I'm missing, and have a way to make this discrepancy between the resulting manifolds seem less unintuitive? REPLY [12 votes]: Let's break down the connected sum operation which, given connected manifolds $X,Y$, produces a manifold $X \# Y$. Choose closed balls $B_X \subset X$, $B_Y \subset Y$ whose boundaries $S_X = \partial B_X$, $S_Y = \partial B_Y$ are smoothly embedded. Consider the manifold-with-boundary $X - \text{interior}(B_X)$, whose boundary is $S_X$. Consider also the manifold-with-boundary $Y - \text{interior}(B_Y)$, whose boundary is $S_Y$. Choose a diffeomorphism $f : S_X \to S_Y$. Let $X \# Y$ be the quotient of the disjoint union of $X - \text{interior}(B_X)$ and $Y - \text{interior}(B_Y)$, where each $x \in S_X$ is identified to the point $f(x) \in S_Y$. The choices made in this construction have been highlighted, and there is a definite not-so-subtle issue here: Is the construction well-defined independent of the choices made? It is possible to prove that the construction is independent of the choices of $B_X$ and $B_Y$.
So let's set that aside. Also, it is possible to prove that the construction is independent of the choice of $f$ up to smooth isotopy. This is a good exercise which you should try, in particular to see exactly how the hypothesis of "smooth isotopy" is used, and to get an inkling of why it is reasonable that the resulting connected sums might actually differ when smooth isotopy fails. In general there is no reason to expect that if $f$, $f'$ are not smoothly isotopic then the $X \# Y$ constructed using $f$ is the "same" as the $X \# Y$ constructed using $f'$. In your particular example, since $\overline{\mathbb{CP}^2}$ and $\mathbb{CP}^2$ have opposite orientations, the two gluing maps are not even homotopic, let alone smoothly isotopic. Added comments in the oriented category: In the case where $X,Y$ are both oriented, there is a natural concept of "oriented connected sum". In this case, each of $X - \text{interior}(B_X)$ and $Y - \text{interior}(B_Y)$ inherits an orientation by restriction, and then each of $S_X,S_Y$ inherits a natural boundary orientation. One then requires that the gluing map $f : S_X \to S_Y$ be orientation reversing. With this requirement, one obtains a natural orientation on $X \# Y$ whose restrictions to $X - \text{interior}(B_X)$ and $Y - \text{interior}(B_Y)$ agree with the orientations restricted from $X$ and $Y$. Now back to the original examples. Each of $\mathbb{CP}^n$ and $\overline{\mathbb{CP}^n}$ is an oriented manifold, and the notations $\mathbb{CP}^n \# \mathbb{CP}^n$ and $\mathbb{CP}^n \# \overline{\mathbb{CP}^n}$ jointly enforce the intention that this connected sum is, indeed, an oriented connected sum (if that were not the intention, why the "overline" symbol on top of one of the $\mathbb{CP}^n$'s?). I myself like to think of $\mathbb{CP}^n$ and $\overline{\mathbb{CP}^n}$ as being as separate from each other as I can stretch my imagination to think, because that reduces my internal mental confusion (which needs plenty of reduction). Perhaps you will decide to think of them as the "same manifold", but if so then you are obliged to remember at all times that they have opposite orientations. I will compromise: I will think of certain manifolds as "copies" of other manifolds. So, let's think of $X$ as one copy of $\mathbb{CP}^n$. You might decide to think of $Y$ as another copy of $\mathbb{CP}^n$ but you have to keep track of orientation: either $Y$ has the same orientation, hence $Y$ is a copy of $\mathbb{CP}^n$ itself; or $Y$ has the opposite orientation, hence $Y$ is a copy of $\overline{\mathbb{CP}^n}$. In the case that $Y$ is a copy of $\mathbb{CP}^n$, even if you decided to think of the removed balls $B_X,B_Y$ as copies of "the same ball", and to think of the spheres $S_X,S_Y$ as copies of "the same sphere", the map $f : S_X \to S_Y$ must reverse the orientation of this sphere. Thus $f$ cannot be chosen to be the identity map. In the other case that $Y$ is a copy of $\overline{\mathbb{CP}^n}$, you might also decide to think of the removed balls $B_X,B_Y$ as copies of "the same ball", however $B_Y$ has opposite orientation from $B_X$. And then you would probably also decide to think of $S_X,S_Y$ as copies of "the same sphere", however $S_Y$ has opposite orientation from $S_X$.
Now when it comes to choosing the homeomorphism $f : S_X \to S_Y$, you are indeed perfectly free to choose the identity map, because that map reverses orientation from $S_X$ to $S_Y$.<|endoftext|> TITLE: Hessian Matrix in Optimization QUESTION [5 upvotes]: Could someone please give an intuition about the usage of the Hessian Matrix in multivariate optimization? I know that it consists of all second order partial derivatives of a multivariate function and that it's used, for example, in the Newton-Raphson method. This method is intuitive for a function with a single variable, but it's confusing to see the inverted Hessian in the expression for multiple variables. REPLY [7 votes]: The Hessian is used both for seeking an extremum (by Newton-Raphson) and to test if an extremum is a min/max (if the Hessian is pos/neg definite). I assume you want to look at the first, say in ${\Bbb R}^n$: Suppose $a=(a_1,\ldots,a_n)$ is a 'guess' for a point where there is an extremum of your function $f(x)$. If it is not exact then $\nabla f(a)$ (column vector) is not exactly zero, so we want to improve. Now, assuming $f$ smooth enough, if we look at the derivative (gradient) at $x=a+h$ then by Taylor expansion: $$ \frac{\partial f (a+h)}{\partial x_i} = \frac{\partial f (a)}{\partial x_i} + \sum_{j=1}^n \frac{\partial^2 f (a)}{\partial x_i \partial x_j} h_j + \ldots $$ or in matrix form: $$ \nabla f(a+h) = \nabla f(a) + H_f(a) h + \ldots$$ As we want the LHS to vanish we find $h$ so that the RHS vanishes. This is just a standard matrix equation, so: $$ h = - \left( H_f(a)\right)^{-1} \nabla f(a)$$ Our new guess is then $a+h$ and we continue from there...<|endoftext|> TITLE: Why aren't all square roots irrational? QUESTION [18 upvotes]: The best-known proof that the square root of $2$ is irrational is by contradiction: we assume it can be expressed as an irreducible fraction and later find that the fraction isn't irreducible after all. But if we run the same argument on, for example, the square root of $9$, we also "find" that its fraction isn't irreducible. So, what's the trouble here? REPLY [2 votes]: See this proof that if $n$ is not a perfect square then $\sqrt{n}$ is irrational: Follow-up Question: Proof of Irrationality of $\sqrt{3}$ The proof starts by saying that if $n$ is not a perfect square then there is a $k$ such that $k^2 < n < (k+1)^2$. The proof breaks down if $k^2 = n$.<|endoftext|> TITLE: Arctan and Log Integration Problem QUESTION [8 upvotes]: I have been trying to evaluate this integral: $\displaystyle \int_0^{\infty} \frac{\arctan^2 x \log^2 (1+x^2)}{x^2}\,dx$, but have failed despite all my attempts. I tried using the trig substitution $x = \tan(a)$, and through a series of steps, was able to simplify the integral to $I(b) = 4\displaystyle \int_0^{\pi/2} \frac{(a\log(\cos a))^b}{(\sin a)^b}\,da$. I then tried to differentiate under the integral sign but failed. What should I do now? Note: $b = 2$ in the latter integral. REPLY [5 votes]: Let's start integrating by parts: $$\int_0^\infty\frac{\arctan^2\left(x\right)\log^2\left(1+x^2\right)}{x^2}dx=$$ $$\underbrace{-\left[\frac{\arctan^2\left(x\right)\log^2\left(1+x^2\right)}{x}\right]^\infty_0}_{0}+\underbrace{4\int_0^\infty\frac{\arctan^2\left(x\right)\log\left(1+x^2\right)}{1+x^2}dx}_{I_1}+\underbrace{2\int_0^\infty\frac{\arctan\left(x\right)\log^2\left(1+x^2\right)}{x(1+x^2)}dx}_{I_2}$$ To solve $I_1$, let $x\rightarrow \frac{y}{\sqrt{1-y^2}}$.
Then, let's rewrite $\arctan(x)$ as $\arcsin\left(\frac{x}{\sqrt{1+x^2}}\right)$ and apply the series expansion of $\arcsin^2(x)$: $$I_1=-4\int_0^1\frac{\arcsin^2\left(y\right)\log\left(1-y^2\right)}{\sqrt{1-y^2}}dy=-2\sum_{n=1}^{\infty}\frac{\left(2^nn!\right)^2}{n^2(2n)!}\int_0^1\frac{y^{2n}\log\left(1-y^2\right)}{\sqrt{1-y^2}}dy$$ Bearing in mind that: $$\frac{\partial}{\partial b}\int_0^1t^{a-1}\left(1-t\right)^{b-1}dt=\frac{\partial}{\partial b}\mathfrak{B}(a,b)$$ $$\int_0^1t^{a-1}\left(1-t\right)^{b-1}\log\left(1-t\right)dt=\mathfrak{B}(a,b)\left[\mathfrak{\psi^{(0)}}(b)-\mathfrak{\psi^{(0)}}(a+b)\right]=\frac{\Gamma\left(a\right)\Gamma\left(b\right)}{\Gamma\left(a+b\right)}\left[\mathfrak{\psi^{(0)}}(b)-\mathfrak{\psi^{(0)}}(a+b)\right]$$ and $$\Gamma\left(z\right)\Gamma\left(z+\frac{1}{2}\right)=2^{1-2z}\sqrt{\pi}\ \Gamma\left(2z\right)$$ $$I_1=-\sum_{n=1}^{\infty}\frac{\left(2^nn!\right)^2}{n^2(2n)!}\frac{\Gamma\left(n+\frac{1}{2}\right)\Gamma\left(\frac{1}{2}\right)}{\Gamma\left(n+1\right)}\left[\mathfrak{\psi^{(0)}}\left(\frac{1}{2}\right)-\mathfrak{\psi^{(0)}}\left(n+1\right)\right]=-\pi\sum_{n=1}^{\infty}\frac{\mathfrak{\psi^{(0)}}\left(\frac{1}{2}\right)-\mathfrak{\psi^{(0)}}\left(n+1\right)}{n^2}$$ $$=-\pi\sum_{n=1}^{\infty}\frac{-2\log(2)-\gamma+\gamma-H_n}{n^2}=2\pi\zeta(2)\log(2)+\pi\sum_{n=1}^{\infty}\int_0^1\frac{dx}{1-x}\frac{1-x^n}{n^2}$$ $$=2\pi\zeta(2)\log(2)+\pi\int_0^1\frac{\zeta(2)-Li_2(x)}{1-x}dx\\=2\pi\zeta(2)\log(2)-\pi\underbrace{\left[\left(\zeta(2)-Li_2(x)\right)\log\left(1-x\right)\right]^1_0}_{0}+\pi\underbrace{\int_{0}^{1}\frac{\log^2\left(1-x\right)}{x}dx}_{2\zeta\left(3\right)}$$ $$\boxed{I_1=2\pi\zeta\left(2\right)\log(2)+2\pi\zeta\left(3\right)}$$ To solve $I_2$, let $x\rightarrow {\frac{1}{x}}$: $$I_2=\underbrace{\int_{0}^{\infty}\frac{2x\arctan\left(\frac{1}{x}\right)\left(\log^2{\left(1+x^2\right)-4\log(x)\log(1+x^2)}\right)}{1+x^2}dx}_{A}+\underbrace{8\int_{0}^{\infty}{\frac{x\arctan\left(\frac{1}{x}\right)\log^2{\left(x\right)}}{1+x^2}dx}}_{B}$$ $$A=\underbrace{\left[\frac{\arctan\left(\frac{1}{x}\right)\log^3\left(1+x^2\right)}{3}-2\arctan\left(\frac{1}{x}\right)\log^2\left(1+x^2\right)\log(x)\right]_{0}^{\infty}}_{0}+\\\frac{1}{3}\int_{0}^{\infty}\frac{\log^3{\left(1+x^2\right)}}{1+x^2}dx-2\int_{0}^{\infty}{\frac{\log^2{\left(1+x^2\right)}\log{\left(x\right)}}{1+x^2}dx}+2\int_{0}^{\infty}{\frac{\arctan\left(\frac{1}{x}\right)\log^2{\left(1+x^2\right)}}{x}dx}$$ Let $x\rightarrow \sqrt\frac{y}{1-y}$: $$A=\int_{0}^{1}\frac{\frac{1}{3}\log^3{\left(1-y\right)}-\frac{1}{2}\log^2{\left(1-y\right)}\log{\left(y\right)}}{\sqrt{y\left(1-y\right)}}dy+\int_{0}^{1}{\frac{\arccos(\sqrt{y})\log^2{\left(1-y\right)}}{y(1-y)}dy}$$ Notice that: $$\int_{0}^{1}\frac{\arccos(\sqrt{y})\log^2{\left(1-y\right)}}{y(1-y)}dy=\int_{0}^{1}\frac{\arccos(\sqrt{y})\log^2{\left(1-y\right)}}{y}dy+\int_{0}^{1}\frac{\arccos(\sqrt{y})\log^2{\left(1-y\right)}}{1-y}dy\\=\frac{\pi}{2}\underbrace{\int_{0}^{1}\frac{\log^2{\left(1-y\right)}}{y}dy}_{2\zeta(3)}-\underbrace{\int_{0}^{1}\frac{\arcsin(\sqrt{y})\log^2{\left(1-y\right)}}{y}dy}_{y\rightarrow \frac{x^2}{1+x^2}}+\int_{0}^{1}\frac{\arccos(\sqrt{y})\log^2{\left(1-y\right)}}{1-y}dy\\=\pi\zeta(3)-\underbrace{2\int_0^\infty\frac{\arctan\left(x\right)\log^2\left(1+x^2\right)}{x(1+x^2)}dx}_{I_2}-\underbrace{\left[\frac{\arccos(\sqrt{y})\log^3\left(1-y\right)}{3}\right]^1_0}_{0}-\frac{1}{6}\int_{0}^{1}\frac{\log^3{\left(1-y\right)}}{\sqrt{y(1-y)}}dy$$ Therefore $$A=\pi\zeta(3)-I_2+\frac{1}{6}\lim_{a\rightarrow 1/2\\ b\rightarrow 
1/2}\frac{\partial^3}{\partial^2a\partial b}\mathfrak{B}(a,b)$$ $$A=\pi\zeta(3)-I_2+\left(-2\pi\zeta(3)-\frac{4\pi}{3}\log^3(2)-2\pi\zeta(2)\log(2)\right)-\left(\pi\zeta(3)-4\pi\log^3(2)\right)$$ $$\boxed{A=-2\pi\zeta(3)-2\pi\zeta(2)\log(2)+\frac{8\pi}{3}\log^3(2)-I_2}$$ $$B=8\int_{0}^{1}dy\int_{0}^{\infty}{\frac{x^2\log^2{\left(x\right)}}{\left(1+x^2\right)\left(x^2+y^2\right)}dx}=8\int_{0}^{1}\frac{dy}{1-y^2}\int_{0}^{\infty}\left(\frac{\log^2{\left(x\right)}}{1+x^2}-\frac{y^2\log^2{\left(x\right)}}{y^2+x^2}\right)dx$$ $$=8\int_{0}^{1}\frac{\frac{3\pi}{4}\zeta\left(2\right)\left(1-y\right)-\frac{\pi}{2}y\log^2\left(y\right)}{1-y^2}dy\\\boxed{B=6\pi\zeta\left(2\right)\log{\left(2\right)}-\pi\zeta\left(3\right)}$$ $$I_2=-2\pi\zeta(3)-2\pi\zeta(2)\log(2)+\frac{8\pi}{3}\log^3(2)-I_2+6\pi\zeta\left(2\right)\log{\left(2\right)}-\pi\zeta\left(3\right)\\ \boxed{I_2=-\frac{3\pi}{2}\zeta\left(3\right)+2\pi\zeta\left(2\right)\log{\left(2\right)}+\frac{4\pi}{3}\log^3{\left(2\right)}}$$ Hence: $$\boxed{\int_0^\infty\frac{\arctan^2\left(x\right)\log^2\left(1+x^2\right)}{x^2}dx=\frac{\pi}{2}\zeta\left(3\right)+4\pi\zeta\left(2\right)\log{\left(2\right)}+\frac{4\pi}{3}\log^3{\left(2\right)}}$$<|endoftext|> TITLE: Can the "radius of analyticity" of a smooth real function be smaller than the radius of convergence of its Taylor series without being zero? QUESTION [7 upvotes]: Does there exist an infinitely differentiable function $f:U\to\mathbb{R}$, where $U$ is an open subset of $\mathbb{R}$, such that: (1) the Taylor series of $f$ at $x=x_0\in U$ has radius of convergence $R>0$; (2) $f$ equals its Taylor series only on the subinterval $(x_0-r,x_0+r)$, where $\color{red}{0<}r<R$? REPLY: Fix $r > 0$. The function $f:\mathbb{R} \to \mathbb{R}$ defined by $$ f(x) = \begin{cases} 0 & |x| \leq r, \\ e^{-1/(|x| - r)^{2}} & |x| > r, \end{cases} $$ has Taylor series at $0$ equal to $0$ (radius $\infty$), but agrees with its Taylor series only on $(-r, r)$. If you want finite radius instead, add your favorite analytic function with radius $R > r$, e.g., $$ g(x) = \frac{1}{x^{2} + R^{2}}. $$<|endoftext|> TITLE: A convergence problem in real non-negative sequence, $\sum_{n=1}^\infty a_n$. QUESTION [5 upvotes]: We now have $a_n\geq 0$, $\forall n=1,2,...,$ and $\sum_{n=1}^\infty a_n <\infty$. Then I guessed that $\lim_{n\to\infty} a_n \cdot n = 0$. But I realized that this is wrong: if we let $a_n = 1/n $ when $n = 2^i$ for some $i=1,2,...,$ and $a_n = 0$ for all other $n$, then $\sum_{n=1}^\infty a_n = 1/2 + 1/4 + 1/8 + \cdots < \infty$, yet $a_n\cdot n$ does not converge to $0$. Now I add another condition, that $a_n$ is non-increasing. Does the result hold this time? I.e., the formal question is as follows: $a_n\geq 0$, $\forall n=1,2,...,$ $a_n$ is non-increasing, and $\sum_{n=1}^\infty a_n <\infty$. Then prove $a_n \cdot n \to 0$, or give a counterexample showing that $a_n\cdot n$ does not necessarily converge to $0$. REPLY [4 votes]: It is true that $a_n \cdot n \rightarrow 0$ under this condition. To prove this, observe that if the series is convergent, the sequence of partial sums is a Cauchy sequence. In particular, for every $\varepsilon > 0$, there is some $N_0 = N_0(\varepsilon)$ such that whenever $n > m > N_0$, $\left| \sum_{i=1}^n a_i - \sum_{i=1}^m a_i \right| < \varepsilon$. Now consider taking $m = N_0 + 1$ and $n \ge 2m$.
Then we have $$ \sum_{i=1}^n a_i - \sum_{i=1}^m a_i = \sum_{i=m+1}^n a_i = a_n \cdot (n-m) + \sum_{i=m+1}^n (a_i - a_n) \ge a_n \cdot (n-m),$$ by the non-increasing property of the sequence $(a_n)_n$. Hence $a_n \cdot (n-m) < \varepsilon$ whenever $n \ge 2(N_0 + 1)$. However, since $n \ge 2m$, $n-m \ge \frac12 n$, and so $a_n \cdot n \le 2 a_n \cdot (n-m) < 2 \varepsilon$. Since $\varepsilon$ can be taken to be arbitrarily small, this, coupled with the non-negativity of $a_n$, proves $a_n \cdot n \rightarrow 0$.<|endoftext|> TITLE: Given a solvable Galois group, to find a formula for the roots? QUESTION [8 upvotes]: Rereading my old abstract algebra textbook, there's a mention that for every polynomial with a solvable Galois group, there's a formula for the roots comparable to the classical formulae for polynomials of order 2, 3, and 4, but it doesn't elaborate. So I was wondering - let's say you know the Galois group, but only have a symbolic representation of the coefficients, how do you come up with such a formula? REPLY [3 votes]: $\def\cK{\mathcal{K}}\def\cF{\mathcal{F}}\def\cL{\mathcal{L}}\def\QQ{\mathbb{Q}}$Here is an almost answer to this question: Given $G$ a solvable subgroup of $S_n$, I'll show how to give a formula for the roots of a generic degree $n$ polynomial whose Galois group is contained in $G$. However, this formula may involve division by $0$ in "unlucky" cases. Throughout this answer, we are going to need to maintain a careful separation between formal polynomials and those polynomials evaluated at particular complex numbers. I'll use capital letters for the former and lower case for the latter. I'll use calligraphic font for fields of formal polynomials and Roman font for subfields of $\mathbb{C}$. Let $\cL = \QQ(X_1, X_2, \ldots,X_n)$, so $S_n$ acts on $\cL$. Let $\cK$ be the fixed field of $S_n$, so $\cK$ is generated by the elementary symmetric functions $E_1$, $E_2$, ..., $E_n$. Let $\cF$ be the intermediate field fixed by $G$. Let $F$ be a primitive element for $\cF$ over $\cK$, with minimal polynomial $P(z)$ over $\cK$. We may assume that $F$ is a polynomial in $\QQ[X_1, \ldots, X_n]$, not just a rational function. Now, $\cL/\cF$ is Galois with Galois group $G$, which we assumed solvable, so it is contained in a radical extension, and there are radical formulas for the $X_j$ in terms of the generators of $\cF$, which is to say, in terms of $E_1$, $E_2$, ..., $E_n$ and $F$. Now, suppose we had a particular polynomial $q(z)$ in $\QQ[z]$ with splitting field $L$. Let $x_1$, $x_2$, ..., $x_n$ be the roots of $q$ in $L$ and, embedding $\mathrm{Gal}(L/\QQ)$ into $S_n$ by the action on the $x_j$, suppose that $\mathrm{Gal}(L/\QQ) \subseteq G$. Let $e_1$, $e_2$, ..., $e_n$ be the elementary symmetric functions of the $x$'s, so the $e$'s lie in $\QQ$. The polynomial $F(X_1,\ldots,X_n)$ is $G$-invariant so the evaluation $F(x_1, \ldots, x_n)$ is fixed by $\mathrm{Gal}(L/\QQ)$ and is thus in $\QQ$. Let $f = F(x_1,\ldots,x_n)$. Now, the coefficients of the minimal polynomial $P(z)$ lie in $\cK = \QQ(E_1, E_2, \ldots, E_n)$. We can evaluate them at $e_1$, $e_2$, ..., $e_n$ and obtain a polynomial $p(z) \in \QQ[z]$, which $f$ will be a root of. Note that the formula for the coefficients of $p(z)$ in terms of $e_1$, $e_2$, ..., $e_n$ is a universal formula, depending only on the group $G$. For any particular $q(z)$, we may thus compute $p(z)$ and use the rational root theorem to test if it has a rational root $f$.
If it does, we may then plug $e_1$, ..., $e_n$, $f$ into the radical formulas for $x_j$ in terms of $E_1$, ..., $E_n$, $F$. If this process does not involve dividing by $0$, and if the correct values of radicals are chosen, it will give a root of $q(z)$. The bold phrase is the rub. The primitive element $F$ only generates $\cF$ over $\cK$ as a field, so there may well be division in expressing the other elements of $\cF$ in terms of it. If I get a chance later, I'll write up some examples. Doing this in practice is pretty labor intensive. Here is a worked example focusing on constructibility rather than radicals. Recall that the roots of a polynomial are constructible if and only if the Galois group is a $2$-group. So the roots of a degree $n$ polynomial are constructible if and only if the Galois group is contained in a Sylow $2$-subgroup of $S_n$. Let's see how this works for $n=4$. An explicit Sylow is the dihedral group $G := \langle (12)(34), (1234) \rangle$. Let $\cL = \QQ(X_1, X_2, X_3, X_4)$, let $\cF$ be the subfield fixed by $G$ and let $\cK = \QQ(X_1, X_2, X_3, X_4)^{S_4} = \QQ(E_1, E_2, E_3, E_4)$. A primitive element for $\cF/\cK$ is $F:=X_1 X_3 + X_2 X_4$. The minimal polynomial of $F$ over $\cK$ is $$(z-X_1 X_2 - X_3 X_4)(z-X_1 X_3 - X_2 X_4)(z-X_1 X_4 - X_2 X_3)$$ $$= z^3 - E_2 z^2 + (E_1 E_3 - 4 E_4) z - (E_3^2 + E_1^2 E_4 - 4 E_2 E_4). \qquad(\ast)$$ Setting $B = X_1+X_3$ and $C = X_1 X_3$, we have $$X_1^2 - B X_1 + C= 0$$ $$B^2 - E_1 B + (E_2 - F)=0$$ $$C^2 - F C + E_4 = 0$$ and thus $$X_1 = \frac{B + \sqrt{B^2-4C}}{2}\quad B = \frac{E_1+\sqrt{E_1^2 - 4 E_2 + 4 F}}{2} \quad C = \frac{F+\sqrt{F^2 - 4 E_4}}{2}. \qquad(\dagger)$$ (Remark: $X_1$ is fixed by $\langle (24) \rangle$, $B$ and $C$ are fixed by $\langle (13), (24) \rangle$ and $F$ is fixed by $G$. Each containment of groups is index $2$, so it makes sense that we take a single square root each time.) Thus, to test whether a quartic polynomial $z^4 - e_1 z^3 + e_2 z^2 - e_3 z + e_4$ has a constructible root, first plug in $e_j$ for $E_j$ in $(\ast)$ and see if the resulting equation has a rational root $f$. If so, plug $e_j$ for $E_j$ and $f$ for $F$ in $(\dagger)$ to get a formula for that root. (Note that all the $2$-Sylows of $S_4$ are conjugate so, if the Galois group is contained in a different $2$-Sylow, one of the other factors in $(\ast)$ will contribute a root and everything will proceed as before.) I remember seeing a similar worked example somewhere for $S_5$ and its $20$ element solvable subgroup; I'll see if I can track it down.<|endoftext|> TITLE: Need help unpacking definitions of $\limsup$, $\liminf$ QUESTION [5 upvotes]: I am reading a real analysis textbook, and need help clarifying/unpacking the definition of $\limsup$ and $\liminf$. Given a sequence $\{a_n\}$ of real numbers,$$\limsup_{n \to \infty} a_n = \inf_n \sup_{m \ge n} a_m, \quad \liminf_{n \to \infty} a_n = \sup_n \inf_{m \ge n} a_m.$$ We use analogous definitions when we take a limit along the real numbers. For example,$$\limsup_{y \to x} f(y) = \inf_{\delta > 0} \sup_{|y - x| < \delta} f(y).$$ Any help would be well-appreciated. Thanks! REPLY [6 votes]: Perhaps an example would be illuminating? Let's figure out $\limsup(a_n)$ for the following sequence: $$a_n = \begin{cases} 1 + 1/n &\mbox{if } n \equiv 0 \\ -1 - 1/n& \mbox{if } n \equiv 1 \end{cases} \pmod{2}$$ For a given $n$, the quantity $\displaystyle \sup_{m \geq n}a_m$ is simply the largest term in the sequence after the first $n$ terms are thrown away.
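(A small numeric illustration of these tail suprema, my addition, in Python:)

```python
# Tail suprema for a_n = 1 + 1/n (n even), -1 - 1/n (n odd); my addition.
def a(n):
    return 1 + 1/n if n % 2 == 0 else -1 - 1/n

# For even m the terms decrease toward 1, so each tail sup is attained at the
# first even index >= n; a short finite scan therefore suffices.
for n in [1, 10, 100, 1000]:
    print(n, max(a(m) for m in range(n, n + 10)))   # -> 1.5, 1.1, 1.01, 1.001
```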
For any $n$, it is easy to see in the sequence above that this value will be the first even-indexed term that was not thrown away. As we let $n$ go to infinity, you can check that the even-indexed terms limit to $1$; in other words, $\displaystyle \sup_{m \geq n} a_m$ limits to $1$, which is to say $\limsup(a_n) = 1$. Put another way, if $\limsup(a_n) = x$, this means that $x$ is the smallest number such that, for any given $\delta > 0$, only a finite number of terms in the sequence are larger than $x + \delta$. Likewise, $\liminf(a_n) = x$ means that $x$ is the largest number such that, for any given $\delta > 0$, only a finite number of terms in the sequence are smaller than $x - \delta$. You can check that, in the example sequence above, $\liminf(a_n) = -1$. Finally, notice that if $\lim(a_n)$ exists, then it will agree with $\liminf(a_n)$ and $\limsup(a_n)$. I don't know if it will be helpful to you, but I have the graphic from the Wikipedia article on limit superior/inferior in mind pretty much every time I think about these concepts.<|endoftext|> TITLE: normalization of the nodal cubic QUESTION [5 upvotes]: Consider the nodal cubic $R = k[x,y]/(y^2 - x^3 + x^2)$, so $x,y$ satisfy $y^2 = x^3 - x^2$. The idea behind normalization is to adjoin "$\frac{y}{x}$", since then in the fraction field $K$ of $R$, one has $\left(\frac{y}{x}\right)^2 = x-1$. It's straightforward to show that the subring $S$ of $K$ generated by $R$ and $\frac{y}{x}$ is normal and integral over $R$, hence $S := R[\frac{y}{x}]$ is the normalization/integral closure of $R$. It's also not hard to show that $S\cong R[t]/(xt - y, t^2-x+1)$. My question is - do you need both $xt -y$ AND $t^2-x+1$? Is $S\cong R[t]/(xt - y)$? Is $S\cong R[t]/(t^2-x+1)$? REPLY [8 votes]: If $xt=y$ and $t^2-x+1=0$, then multiplying the second equation by $x^2$ and using the first one shows that $y^2-x^3+x^2=0$. This means that you can forget about this last equation, and $S$ is $k[x,t,y]/(xt-y,t^2-x+1)$. Now in this quotient, we have $x=t^2+1$ so we can remove the generator $x$, and it is isomorphic to $k[y,t]/((t^2+1)t-y)$. Of course, now we can also remove $y$, and we end up with $S=k[t]$. Using this description of $S$, can you answer your questions? Notice that $R[t]/(xt-y)=k[x,y,t]/(y^2-x^3+x^2,xt-y)$. As $y=xt$ here, we can remove the generator $y$, and $R[t]/(xt-y)$ is isomorphic to $k[x,t]/(x^2t^2-x^3+x^2)$, which is clearly not a domain. On the other hand, $R[t]/(t^2-x+1)$ is $k[x,y,t]/(y^2-x^3+x^2,t^2-x+1)$, which, using $x=t^2+1$ to remove $x$, is isomorphic to $k[y,t]/(y^2-(t^2+1)^3+(t^2+1)^2)$. This has some singular points.
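(A quick symbolic check of these eliminations, my addition, assuming sympy:)

```python
# Verify the substitutions used in the answer above (my addition).
import sympy as sp

t = sp.symbols('t')
x, y = t**2 + 1, (t**2 + 1)*t          # x = t^2 + 1 and y = xt on the normalization
print(sp.expand(y**2 - x**3 + x**2))   # 0: the cubic relation is redundant, so S = k[t]

X, T = sp.symbols('X T')
print(sp.factor(X**2*T**2 - X**3 + X**2))  # X**2*(T**2 - X + 1): R[t]/(xt-y) is not a domain
```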
<|endoftext|> TITLE: Is there a "measure" $\mu$ on $\mathbb{R}$ which encodes Hausdorff measure of every dimension? QUESTION [8 upvotes]: A crazy idea... Question Let $\mathcal{B}$ be the Borel subsets of $\mathbb{R}$, and $X$ the set of continuous functions $[0,\infty) \to \mathbb{R}$. Does there exist a "measure" function $\mu: \mathcal{B} \to X$ satisfying the following conditions? $\mu(\varnothing) = \boldsymbol{0}$. For all $d \in [0,\infty)$, if $A \in \mathcal{B}$ and $\mu_d$ is $d$-dimensional Hausdorff measure, then: $$\lim_{x \to \infty} \frac{\mu(A)(x)}{x^d} = \mu_d(A).$$ (Specifically if $A$ has Hausdorff dimension $d$ and measure $a$ in that dimension, we want it to grow like $ax^d$, if $a \ne 0, \infty$.) $\mu$ is countably additive, in the following sense: let $A_1, A_2, A_3, \ldots$ be disjoint Borel sets with union $A$, $\mu(A_n) = f_n$ and $\mu(A) = f$. If $\sum_{n=1}^\infty f_n$ converges uniformly on compact subsets of $\mathbb{R}$, then $$ \sum_{n=1}^\infty f_n = f. $$ Update: Eric Wofsey has shown below by an elementary argument that if $\mu$ also satisfies: Translation-invariance: $\mu(A + r) = \mu(A)$ for all $A \in \mathcal{B}$ and $r \in \mathbb{R}$, then this is impossible. What about 1-3? Motivation While Lebesgue measure on $\mathbb{R}$ fails to capture any size differences between different measure-zero sets, we can capture such information with the Hausdorff dimension. By identifying a set with its Hausdorff dimension and its Hausdorff measure in that dimension, we get a much richer notion of size. For example, $\varnothing$ (dimension 0, measure 0) is smaller than $\{1,4\}$ (dimension 0, measure $2$), which is smaller than $\mathbb{Q}$ (dimension 0, measure $\infty$), which is smaller than the Cantor set (dimension $\frac{\ln 2}{\ln 3}$, measure $1$), which is smaller than $[0,1]$ (dimension $1$, measure $1$), which is smaller than $\mathbb{R}$ (dimension $1$, measure $\infty$). I have long been curious: can we capture this notion of size in a single "measure" on Borel sets -- not real-valued, but function-valued? Above is a specific attempt. REPLY [5 votes]: This is impossible if you require translation-invariance. Indeed, consider $A=\mathbb{N}$, $B=\{0\}$, and $C=A\setminus B$. Then translation-invariance implies $\mu(A)=\mu(C)$, since $C=A+1$. But axioms (1) and (3) imply that $\mu(-)(x)$ is finitely additive for each $x$, so $\mu(A)(x)=\mu(B)(x)+\mu(C)(x)$ for all $x$. Thus $\mu(B)(x)=0$ for all $x$. This contradicts axiom (2), since $\mu(B)(x)$ must converge to $1$ as $x\to\infty$. (Note that ordinary $0$-dimensional Hausdorff measure avoids this problem by having $\mu(A)$ and $\mu(C)$ be infinite, so you can't subtract and deduce that $\mu(B)=0$.)<|endoftext|> TITLE: Can the long line be embedded in Euclidean space? QUESTION [10 upvotes]: In the definition of a manifold $M$, we have the following conditions: For some fixed $n$, $M$ is locally homeomorphic to $\mathbb{R}^n$. $M$ is connected, second countable, and Hausdorff. Now, with this definition, it is a well-known theorem that $M$ can be embedded in $\mathbb{R}^{2n+1}$. I suspect that if the condition of second countability is relaxed, then this theorem is no longer true, and my attempt at a counterexample is to define $L=\omega_1\times[0,1)$ with the order topology, where $\omega_1$ is the first uncountable ordinal. $L$ can be seen to be a non-second-countable manifold of dimension one; however, I'm struggling to prove that $L$ cannot be embedded in $\mathbb{R}^3$. I have a vague feeling that by using uncountably many points in general position and well-ordering, an embedding can in fact be constructed, though I may be wrong about this. If $L$ is not a counter-example to the hypothesis that non-second-countable manifolds can still be embedded in Euclidean space, then what is? REPLY [6 votes]: No, because of the Poincaré-Volterra theorem. Here's the statement you can find in Bourbaki's General Topology (I.11.7, corollary 3). Theorem. Let $Y$ be a locally compact, locally connected space whose topology has a countable base. Let $X$ be a connected Hausdorff space, and let $p : X \to Y$ be a continuous mapping which has the following property: for each $x \in X$ there is an open neighbourhood $U$ of $x$ in $X$ such that the restriction of $p$ to $U$ is a homeomorphism of $U$ onto an open subspace of $Y$.
Then $X$ is locally compact and locally connected, and the topology of $X$ has a countable base. A classical use of this theorem is to show that Riemann surfaces are automatically second countable, even if you don't make it an explicit hypothesis (the general theory shows that there is a nonconstant holomorphic map $S \to \mathbb P^1(\mathbb C)$, and you apply the Poincaré-Volterra theorem). So, for a manifold, being second-countable is equivalent to being embeddable in some Euclidean space. These are two of the 119 (!) equivalent conditions you can find in Theorem 2.1 of Gauld's Non-metrisable manifolds. REPLY [6 votes]: Recall that every subspace of a second-countable space is also second-countable. (If $\mathcal B$ is a countable base for $X$ then $\{ U \cap Y : U \in \mathcal B \}$ is a countable base for $Y \subseteq X$.) Therefore no non-second-countable space (manifold or otherwise) can embed in any Euclidean space.<|endoftext|> TITLE: Angle between the hour and minute hands at 6:05 QUESTION [15 upvotes]: What is the angle between the hour and minute hands of a clock at 6:05? I have tried this. Hour hand: $12$ hours $= 360^\circ$, so $1$ hour $= 30^\circ$; the total hours on the clock come to $\frac{73}2$. Minute hand: $1$ hour $= 360^\circ$, so $1$ minute $= 6^\circ$; the minutes covered give $6\times 5= 30^\circ$. Hence $\frac{73}{2} \cdot30-30=345^\circ$. Is it correct? REPLY [7 votes]: Your approach is correct, but the hour is $6+\frac{5}{60}=\frac{73}{12}$, not $\frac{73}{2}$. This yields an angle of $\frac{73}{12}\cdot30^\circ$ for the hour hand, and $6^\circ\cdot 5=30^\circ$ for the minute hand. Thus the end result is $\frac{73}{12}\cdot30^\circ-30^\circ=152.5^\circ$.<|endoftext|> TITLE: "The music of the primes" by Sautoy: Is $\sqrt x$ correct? QUESTION [5 upvotes]: I have recently read "The music of the primes" by Marcus du Sautoy (see excerpt here). There he writes: "So how fair are the prime number dice? Mathematicians call a dice "fair" if the difference between the theoretical behaviour of the dice and the actual behaviour after $N$ tosses is within the region of the square root of $N$. The heights of Riemann's harmonics are given by the east-west coordinate of the corresponding point at sea-level. If the east-west coordinate is $c$ then the height of the wave grows like $N^c$. This means the contribution from this harmonic to the error between Gauss's guess and the real number of primes will be $N^c$. So if the Riemann Hypothesis is correct and $c$ is always $1/2$, the error will always be $N^{1/2}$ (which is just another way of writing the square root of $N$). If true, the Riemann Hypothesis means that Nature's prime number dice are fair, never straying more than the square root of $N$ from Gauss's theoretical prime number dice." However, we know from Helge von Koch (1901)[2] that if the Riemann Hypothesis is true, then: $$\pi(x)=Li(x)+\mathcal O(\sqrt x \log x)$$ where $\pi(x)$ is the prime counting function and $Li(x)$ is Gauss's logarithmic integral. My question: Is what Sautoy says correct? Is the error indeed $O(\sqrt x \log x)$, or, as Sautoy states, $\sqrt x$? Or did I misunderstand something? [2]: Von Koch, Helge (1901). "Sur la distribution des nombres premiers" (On the distribution of prime numbers). Acta Mathematica (in French). 24 (1): 159–182. REPLY [6 votes]: The statement of du Sautoy is indeed slightly imprecise, as the error term is known to be slightly worse than $O( \sqrt{x})$. But as you said, it is close to the truth, and the added complexity for a precise statement seems not worth it in that context.
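(For a feel for the sizes involved, here is a small numeric comparison, my addition; a sketch assuming sympy and mpmath are available:)

```python
# Compare pi(x) - Li(x) with sqrt(x) and sqrt(x)*log(x) numerically (my addition).
from sympy import primepi
from mpmath import li, sqrt, log

for x in [10**4, 10**5, 10**6, 10**7]:
    err = int(primepi(x)) - li(x, offset=True)  # offset=True gives Li(x) = li(x) - li(2)
    print(x, float(err), float(err / sqrt(x)), float(err / (sqrt(x) * log(x))))
```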
To be clear, it is known that $$\pi(x)-\operatorname{Li}(x) \ne O(\sqrt{x}).$$ However, du Sautoy only gives a vague description of the situation, and one should not try to transcribe this literally and expect it to be true exactly. But the error term is roughly $\sqrt{x}$, and to say this is the point of the description. (In fact, I believe, but would have to check to be sure, that the deviations of a random walk given by a die are also not literally bounded by the square root.) Note though that the result you recall does not prove that the claim is imprecise. There could still be better error terms under RH (indeed there are better error terms under RH), but they cannot be literally as good as du Sautoy says. Let me add that the classical reference showing that the statement is imprecise is due to Littlewood ("Sur la distribution des nombres premiers." CRAS 1914), who showed that the error term is $\Omega_{\pm}(\sqrt{x} \log \log \log x)$. That is, there is a positive constant $c$ such that the error term is larger than $c\sqrt{x} \log \log \log x$ for arbitrarily large $x$, and a positive constant $c'$ such that it is smaller than $-c'\sqrt{x} \log \log \log x$ for arbitrarily large $x$. That is, the error term oscillates and the amplitude is at least of order $\sqrt{x} \log \log \log x$.<|endoftext|> TITLE: Two retracts $A,B$ of $S^{2}$ with different relative location QUESTION [5 upvotes]: Are there two retract subsets $A,B$ of $S^{2}$ with the following property: $A,B$ are homeomorphic but the two pairs $(S^{2},A)$ and $(S^{2},B)$ are not homeomorphic? The same question for $S^{n}$. REPLY [4 votes]: I will identify $S^2$ with the Riemann sphere, the 1-point compactification of the complex plane. Start with the graph $G\subset {\mathbb C}$, which is the union of the four segments connecting $0$ with the points $\pm 1$ and $\pm i$. Now let $A$ be the union of $G$ with the two small disks centered at the points $\pm 1$. Let $B$ be the union of $G$ with the two small disks centered at the points $1, i$. Then, being contractible CW subcomplexes of $S^2$, both $A, B$ are retracts of $S^2$. On the other hand, there is no homeomorphism $(S^2,A)\to (S^2,B)$. Lastly, $A$ is clearly homeomorphic to $B$.<|endoftext|> TITLE: What is the position of a unit cube so that its projection on the ground has maximal area? QUESTION [8 upvotes]: Consider a unit cube in $\Bbb R^3$. What is a position (up to translation, etc.) of the cube such that its projection on the $Oxy$-plane has a maximal area? Here is a picture: if the sides of the cube are parallel to the three axes, then the projection is the unit square, which has area $1$. But it is possible to project it so that we get a regular hexagon (like the orange one, below). I believe that the regular hexagon could be a local maximum for the area of the projection, but I'm not sure. I don't even know how to compute the corresponding area of the orange hexagon (what should be the length of the orange side of one of the six equilateral triangles in the hexagon?). I solved the problem for a unit disk in space: the area of the projection is $\pi \sin(\theta)$, maximal when the angle $\theta$ is $\pi/2$ ($\theta$ is indicated on the right of the picture). I also tried for a unit square in $2D$, the maximal area being achieved at $\theta=\pi/4$.
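(One can also probe shadow areas numerically; a sketch of my own, assuming numpy and scipy:)

```python
# Shadow area of the unit cube in a given viewing direction (my addition):
# project the 8 vertices onto the plane orthogonal to v and take the hull area.
import numpy as np
from itertools import product
from scipy.spatial import ConvexHull

verts = np.array(list(product([0.0, 1.0], repeat=3)))

def shadow_area(v):
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    basis = np.linalg.svd(v.reshape(1, 3))[2][1:]  # orthonormal basis of the plane orthogonal to v
    pts = verts @ basis.T
    return ConvexHull(pts).volume                  # the "volume" of a 2D hull is its area

print(shadow_area([0, 0, 1]))   # 1.0, face-on
print(shadow_area([1, 1, 1]))   # ~1.732 = sqrt(3), along a space diagonal (regular hexagon)
```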
A similar question could be asked for a cylinder or a cone instead of a cube (the answer is trivial for a sphere, by the way). Thank you for your comments! REPLY [6 votes]: For a convex body, its projected area $P$ in viewing direction $v$ is equal to the surface integral $$P = \oint (n\cdot v)^+\,\mathrm dA,$$ where $n$ and $\mathrm dA$ are the normal and area of the differential surface element, and $$x^+ = \max(x,0) = \begin{cases}x & \text{if $x>0$,} \\ 0 & \text{otherwise}.\end{cases}$$ For a unit cube, $n$ only takes six possible values $(\pm1,0,0)$, $(0,\pm1,0)$, $(0,0,\pm1)$, each over an area of $1$ square unit, so the integral reduces to $$P = |v_x| + |v_y| + |v_z|.$$ Subject to the constraint that $\|v\|=1$, this is maximized at $v = \frac1{\sqrt3}(\pm1,\pm1,\pm1)$, i.e. projection along a space diagonal, which indeed gives the regular hexagon as the projected shape.<|endoftext|> TITLE: How to check if a large number is prime? QUESTION [5 upvotes]: Is $31!+1$ a prime number? Prime numbers (greater than $3$) can be written in the form $6n+1$ or $6n-1$. $31!+1$ can also be written as $6n+1$, but not every number of the form $6n+1$ is prime. How do I proceed efficiently? REPLY [2 votes]: The simplest test is to start with trial division by small primes. Your statement that it is $6n+1$ represents trial division by $2$ and $3$. You can keep going until you get tired. Then try Fermat's little theorem, which says that for $p$ prime and $a$ coprime to $p$, $a^{p-1} \equiv 1 \pmod p$, so see if $2^{8222838654177922817725562880000000} \equiv 1 \pmod {8222838654177922817725562880000001}.$ You can do this rather quickly by computer if you have the right package. If this equivalence fails, you have a composite. If it passes, repeat the test with a few other small prime bases. If it passes them all, you almost certainly have a prime.
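(Here is how that check looks in Python, my addition; the built-in pow performs fast modular exponentiation:)

```python
# Fermat test of 31!+1 in base 2 (my addition).
from math import factorial

n = factorial(31) + 1
print(n)                      # 8222838654177922817725562880000001
print(pow(2, n - 1, n) == 1)  # False here means definitely composite;
                              # True would only mean "probably prime"
```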
<|endoftext|> TITLE: Integral ${\large\int}_0^1\frac{dx}{(1+x^{\sqrt2})^{\sqrt2}}$ QUESTION [14 upvotes]: Mathematica claims that $${\large\int}_0^1\!\!\frac{dx}{(1+x^{\sqrt2})^{\sqrt2}}=\frac{\sqrt\pi}{2^{\sqrt2}\sqrt2}\cdot\frac{\Gamma\left(\frac1{\sqrt2}\right)}{\Gamma\left(\frac12+\frac1{\sqrt2}\right)},\tag{$\diamond$}$$ and it also confirms numerically. How can we prove $(\diamond)$? This result seems interesting, because no such nice answer seems to occur for other algebraic powers, except trivial cases when the antiderivative is an elementary function, e.g. $${\large\int}\frac{dx}{(1+x^\alpha)^\alpha}=\\ \small\frac{x\left(x^\alpha+1\right)^{1-\alpha}}{18}\left[\vphantom{\large|}\left(15(\alpha-1)+4\left(5(\alpha+1)+\left(6\alpha+(\alpha-3)x^{2\alpha}+2(\alpha+2)x^\alpha-3\right)x^\alpha\right)x^\alpha\right)x^\alpha+18\right],$$ where $\alpha=3+\sqrt{10}$. REPLY [11 votes]: Using Euler's integral representation of the Gaussian hypergeometric function, and assuming $a>0$, we get $$ \begin{align} \int_{0}^{1} \frac{dx}{(1+x^{a})^{a}} &= \frac{1}{a} \int_{0}^{1} \frac{u^{1/a-1}}{(1+u)^{a}} \, du \\ &= \frac{1}{a} \int_{0}^{1} u^{1/a-1} (1-u)^{(1+1/a)-1/a-1}(1+u)^{-a} \, du \\ &= \frac{1}{a} \, B \left(\frac{1}{a}, 1 \right) {}_2F_{1} \left(a, \frac{1}{a}; 1+ \frac{1}{a}; -1 \right) \\&= {}_2F_{1} \left(a, \frac{1}{a}; 1+ \frac{1}{a}; -1 \right). \end{align}$$ In the case $a=\sqrt{2}$, we can apply Kummer's theorem since $1+\sqrt{2} - \frac{1}{\sqrt{2}}= 1+ \frac{1}{\sqrt{2}}$. $$\begin{align} \int_{0}^{1} \frac{dx}{(1+x^{\sqrt{2}})^{\sqrt{2}}} &= \frac{\Gamma\left(1+ \frac{1}{\sqrt{2}}\right) \Gamma\left(1+\frac{1}{\sqrt{2}}\right)}{\Gamma\left(1+ \sqrt{2}\right) \Gamma\left(1\right)} \\ &= \frac{\Gamma \left(1+ \frac{1}{\sqrt{2}} \right)^{2}}{\Gamma(1+\sqrt{2})}\\ &= \frac{1}{2 \sqrt{2}} \frac{\Gamma\left(\frac{1}{\sqrt{2}}\right)^{2}}{\Gamma(\sqrt{2})} \\ &\approx 0.6604707226 \end{align}$$ To get it in the form given by Mathematica, apply the duplication formula to $\Gamma \left(\frac{1}{\sqrt{2}} \right)$.<|endoftext|> TITLE: Clarification on Ravi Vakil's AG notes, Exercise 13.3.H QUESTION [7 upvotes]: Let me first write what this exercise is: [...] Suppose $X$ is a quasicompact quasiseparated scheme, $\mathscr{L}$ is an invertible sheaf on $X$ with section $s$, and $\mathscr{F}$ a quasicoherent sheaf on $X$. [...] Let $X_s$ be the open subset of $X$ where $s$ doesn't vanish. [...] Note that $\bigoplus_{n\geq 0}\Gamma(X, \mathscr{L}^{\otimes n})$ is a graded ring, and we interpret $s$ as a degree one element. Note also that $\bigoplus_{n\geq 0} \Gamma(X,\mathscr{F}\otimes_{\mathscr{O}_X}\mathscr{L}^{\otimes n})$ is a graded module over this ring. Describe a natural map $$\left[\left(\bigoplus_{n\geq 0}\Gamma(X,\mathscr{F}\otimes_{\mathscr{O}_X}\mathscr{L}^{\otimes n})\right)_s\:\right]_0\to \Gamma(X_s, \mathscr{F})$$ and show that it is an isomorphism. I'm going to list the things I'm confused about (mostly trying to understand what this question means) and what I think might be the answer to each: "Let $X_s$ be the open subset of $X$ where $s$ doesn't vanish." The only way I can make sense of $X_s$ is as follows: There is an affine open covering of $X$ by sets $U_i$ with sheaf isomorphisms $\phi_i:\mathscr{L}|_{U_i}\simeq \mathscr{O}_{U_i}$. Define $\sigma_i:=\phi_i(U_i)(s|_{U_i})$. Vanishing or non-vanishing of $\sigma_i$ is meaningful, so we define $X_s=\bigcup_{i\in I} D_i(\sigma_i)$ where $D_i(f)\subset U_i:=\mathrm{Spec}\, R_i$ is the distinguished open subset of the affine scheme $U_i$ coming from $f\in R_i$. Is this indeed what $X_s$ means? The second issue is the grading structure on $\bigoplus_{n\geq 0} \Gamma(X,\mathscr{L}^{\otimes n})$ and $\bigoplus_{n\geq 0} \Gamma(X,\mathscr{F}\otimes\mathscr{L}^{\otimes n})$. The only kind of multiplication I can think of is the tensor product. Thankfully if $\mathscr{G}, \mathscr{G}'$ are two $\mathscr{O}_X$-modules, then the sheafification morphism $$ \eta: [\mathscr{G}\otimes \mathscr{G}']_\text{pre}\to \mathscr{G}\otimes \mathscr{G}' $$ induces a natural map $\Gamma(X,\mathscr{G})\otimes_{\Gamma(X,\mathscr{O}_X)}\Gamma(X, \mathscr{G}')\to \Gamma(X, \mathscr{G}\otimes \mathscr{G}')$. So in our case we have maps $$ \begin{aligned} &\Gamma(X,\mathscr{L}^{\otimes n})\otimes_{\Gamma(X,\mathscr{O}_X)}\Gamma(X, \mathscr{L}^{\otimes m})\to \Gamma(X, \mathscr{L}^{\otimes (n+m)})\\ &\Gamma(X,\mathscr{L}^{\otimes n})\otimes_{\Gamma(X,\mathscr{O}_X)}\Gamma(X, \mathscr{F}\otimes \mathscr{L}^{\otimes m})\to \Gamma(X, \mathscr{F}\otimes\mathscr{L}^{\otimes (n+m)}) \end{aligned} $$ This looks promising for a good grading structure... But one problem: Are these maps necessarily injective/inclusions? Of course I haven't even begun yet to actually solve this problem. But this question has already become too long, so I'll leave it at that. REPLY [5 votes]: $1)$ Your definition of $X_s$ is correct; you can see Exercise 13.1.I for the definition of the vanishing scheme of a section of a locally free sheaf, and then take the complement.
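(To make this concrete, here is a toy example of my own, not from the notes: take $X=\mathbb{P}^1_k=\operatorname{Proj}\, k[x_0,x_1]$, $\mathscr{L}=\mathscr{O}(1)$, $s=x_0$ and $\mathscr{F}=\mathscr{O}_X$. Then $X_s=D_+(x_0)$, the graded ring is $\bigoplus_{n\geq 0}\Gamma(X,\mathscr{O}(n))=k[x_0,x_1]$, and the map of the exercise becomes $$\left[\,k[x_0,x_1]_{x_0}\right]_0=k\!\left[\tfrac{x_1}{x_0}\right]\;\xrightarrow{\ \sim\ }\;\Gamma\big(D_+(x_0),\mathscr{O}_X\big),$$ which is exactly the usual coordinate ring of the standard affine chart.)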
You can also use the more canonical point of view: for any sheaf of $\mathcal{O}_X$-modules $\mathscr{F}$ with global section $s$, we say $s$ vanishes at $p$ if $s_p \in \mathfrak{m}_{X,p}\cdot\mathscr{F}_p$, and we can show that the collection of all such points is a closed subset. (Although without the further assumption that $\mathscr{F}$ is locally free, it isn't naturally a closed subscheme.) $2)$ This is indeed the requisite map, and there's no need for it to be an inclusion (for a graded module $M_{\bullet}$, there's no requirement that the maps $M_n \times M_m \rightarrow M_{n+m}$ be inclusions).<|endoftext|> TITLE: What is the chance of rolling a specific number after a certain amount of rolls? QUESTION [8 upvotes]: Say I roll a $6$ sided die and I want to roll a $6$. What is the probability that I will have rolled the number I want after $6$ rolls? I have been using this: $\displaystyle1-\left(1-\frac{1}{x}\right)^y$ where $x$ is the number of sides and $y$ is the amount of rolls, so it would be $\displaystyle1-\left(1-\frac{1}{6}\right)^6$ for rolling a specific number in $6$ rolls, which is $\approx66.5\%$. Is this the correct way of calculating the probability of something like this? If not, what is the proper way? I'm not really sure why that formula works (if it does), so some elaboration on that would be nice. Sorry for the lack of technical language, and thanks in advance. REPLY [2 votes]: I would say that it's easier to calculate the chance that you never roll this number, i.e. that all $6$ rolls are not the number you want. This chance is $(5/6)^6 \approx 33.49\%$. Now, to calculate the chance that you roll your number at least once within $6$ rolls, you can take $1-(5/6)^6\approx 66.51\%$. This is exactly the same as your equation; I just used $5/6$ instead of $1-1/6$. As for the explanation: all the possibilities that exist add up to $100\%$. What we do here is calculate the chance of all the possibilities in which you won't get what you are looking for. If you subtract these, you are left with all the possibilities you are looking for. The reason we sometimes use this method is that the complement is much easier to calculate, like in this situation.
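(A quick Monte Carlo confirmation, my addition:)

```python
# Simulate P(at least one six in 6 rolls) and compare with 1 - (5/6)^6 (my addition).
import random

trials = 10**6
hits = sum(any(random.randint(1, 6) == 6 for _ in range(6)) for _ in range(trials))
print(hits / trials)      # ~0.665
print(1 - (5/6)**6)       # 0.6651020233196159...
```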
<|endoftext|> TITLE: Is every Markov Process a Martingale Process? QUESTION [9 upvotes]: According to the definition (2.3.6) of a Markov Process in Shreve's book titled Stochastic Calculus for Finance II: Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space, let $T$ be a fixed positive number, and let $\mathcal F(t)$, $0\leqslant t\leqslant T$, be a filtration of sub-$\sigma$-algebras of $\mathcal F$. Consider an adapted stochastic process $X(t)$, $0\leqslant t\leqslant T$. Assume that for all $0\leqslant s\leqslant t\leqslant T$ and for every nonnegative, Borel-measurable function $f$, there is another Borel-measurable function $g$ such that $$\mathbb E\left[f(X(t))\mid\mathcal F(s)\right] = g(X(s)). $$ Then we say that $X$ is a Markov process. It seems obvious to me that every Markov Process is a Martingale Process (Definition 2.3.5): Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space, let $T$ be a fixed positive number, and let $\mathcal F(t)$, $0\leqslant t\leqslant T$, be a filtration of sub-$\sigma$-algebras of $\mathcal F$. Consider an adapted stochastic process $M(t)$, $0\leqslant t\leqslant T$. If $$\mathbb E\left[M(t)\mid\mathcal F(s)\right] = M(s)\quad\textrm{for all } 0\leqslant s\leqslant t\leqslant T,$$ we say this process is a martingale. Can someone please tell me if this is correct? Thanks! REPLY [2 votes]: Just to point out the error in your logic more directly: Suppose $X(t)$ is a Markov process. Then if we take $f(x) = x$, it is true that for all $s < t$ there exists a function $g$ such that $E[X(t) \mid \mathcal{F}(s)] = g(X(s))$. For instance if $X(t)$ is an Ornstein-Uhlenbeck process, that function $g$ is something like $g(x) = e^{-(t-s)} x$. However, in order for $X(t)$ to be a martingale, we would need specifically to end up with $g(x) = x$. In general this need not happen. The converse isn't true either. If we know $X(t)$ is a martingale and we try to see whether it is a Markov process, we know that if we take $f(x) = x$ there is a function $g$ that works (namely $g(x)=x$), but if we take $f$ to be some other function, the definition of martingale does not guarantee that we can find a corresponding $g$ at all.<|endoftext|> TITLE: Is $\cos(x) \geq 1 - \frac{x^2}{2}$ for all $x$ in $R$? QUESTION [5 upvotes]: I encountered this question: Is $\cos(x) \geq 1 - \frac{x^2}{2}$ for all $x$ in $\mathbb R$? I drew the graphs of the two functions, but I can't find the right way to answer this type of question. REPLY [5 votes]: Another method is to use the well-known inequality $\sin t \le t$ (valid for $t \ge 0$) and integrate both sides from $0$ to $x \ge 0$: $$\int _{ 0 }^{ x } \sin t \, dt \le \int _{ 0 }^{ x } t \, dt $$ $$\left[ -\cos t \right] _{ 0 }^{ x } \le \left[ \frac { t^{ 2 } }{ 2 } \right] _{ 0 }^{ x }$$ $$-\cos x +1\le \frac { x^{ 2 } }{ 2 } $$ $$ \cos x \ge 1-\frac { x^{ 2 } }{ 2 } $$ Since both sides are even functions of $x$, the inequality holds for all real $x$.<|endoftext|> TITLE: A linear transformation preserving volume QUESTION [7 upvotes]: This is an exercise from Munkres' book: find a linear transformation $h: \mathbb{R}^n \rightarrow \mathbb{R}^n$ that preserves volumes but is not an isometry. It's clear that $n$ should be greater than 1, but even in the case $n=2$ I'm not able to provide an example. REPLY [2 votes]: The condition that $h: \Bbb R^n \to \Bbb R^n$ preserve volumes is $\det h = 1$, whereas the condition that it is an isometry is $h^{\top} h = {\bf 1}$. We can generate simple examples as follows: If we identify linear transformations $\Bbb R^n \to \Bbb R^n$ with their matrices w.r.t. the standard basis, we see, for example, for diagonal linear transformations $$h = \pmatrix{\lambda_1 & \cdot \\ \cdot & \lambda_2} ,$$ the volume-preserving condition $\det h = 1$ is $$\lambda_1 \lambda_2 = 1,$$ and the (non-)isometry condition $h^{\top} h \neq {\bf 1}$ is $$\pmatrix{\lambda_1^2 & \cdot \\ \cdot & \lambda_2^2} \neq \pmatrix{1&\cdot\\\cdot&1}.$$ The volume-preserving condition implies $\lambda_2 = \lambda_1^{-1}$, and then the (non-)isometry condition implies $\lambda_1 \not\in \{\pm 1\},$ leaving exactly the examples $$ h = \pmatrix{\lambda & \cdot \\ \cdot & \lambda^{-1}}, \qquad \lambda \not\in \{\pm 1\} . $$ Note that $\det h = 1$ imposes one polynomial condition on the entries of $h$, whereas $h^{\top} h = {\bf 1}$ imposes $\frac{1}{2} n (n + 1)$ polynomial conditions, which is $> 1$ for $n > 1$. Thus, for $n > 1$ we expect a generic volume-preserving transformation is not an isometry, which turns out to be true in a way that can be made precise.<|endoftext|> TITLE: Optimal strategy for a 3 players infinite game QUESTION [7 upvotes]: I've been asked the following game theory problem: Each one of three players secretly picks a positive integer. The winner of the game, if any, is the player who picked the smallest number that no one else chose.
Is there an optimal strategy for this game? Observations: Nash's theorem about the existence of an equilibrium doesn't apply to this game, because each player has infinitely many possible choices. A way I thought to tackle the problem is the following. I assume the existence of an optimal strategy, and I model it by a function $f: \mathbb{Z}_{>0} \rightarrow \mathbb{R}$ such that $\sum_{n=1}^\infty f(n) =1$, which corresponds to picking $n$ with probability $f(n)$. Then, since the strategy is optimal, every player is playing according to this strategy, and the probability $p_f(n)$ that a player wins playing $n$ is given by $$p_f(n) = f(n) \left(\left(\sum_{m=n+1}^\infty f(m) \right)^2 + \sum_{m=1}^{n-1} f(m)^2 \right).$$ Here $f(n)$ is the probability that she plays $n$, $\left(\sum_{m=n+1}^\infty f(m) \right)^2$ is the probability that both the other players play a number larger than $n$, and $\sum_{m=1}^{n-1} f(m)^2$ is the probability that both the other players play the same number smaller than $n$. Now we want to maximize the probability of a player winning, i.e. $$p_f = \sum_{n=1}^\infty f(n) \left(\left(\sum_{m=n+1}^\infty f(m) \right)^2 + \sum_{m=1}^{n-1} f(m)^2 \right).$$ I'm a bit at a loss on how to proceed here. If we make the reasonable choice $f(n) = 2^{-n}$, then $p_f = \frac{2}{7}$. This doesn't seem too bad, but it's not optimal. Indeed if two players play this strategy and the third plays the pure strategy $N\geq 2$, she increases her chances to win. On the other hand the strategy given by $$f_N(n) =\begin{cases} \frac{1}{N}, & \text{ for }n\leq N \\ 0, & \text{ otherwise} \end{cases} $$ is such that $p_{f_N} \rightarrow \frac{1}{3}$ for $N\rightarrow \infty$. But it is clearly very far from being optimal. I'm grateful for any answer, suggestion or comment. REPLY [5 votes]: In game theory the players are selfish, i.e. the probability of winning that we want to maximize is: $$p(f,g,h) = \sum_{n=1}^{\infty} f(n) \Bigg(\bigg(\sum_{k=n+1}^{\infty} g(k) \bigg)\bigg(\sum_{k=n+1}^{\infty} h(k) \bigg) + \sum_{k=1}^{n-1} g(k)\cdot h(k) \Bigg),\tag{$\spadesuit$}$$ where $f$, $g$ and $h$ are probability distributions for the first, second and third player respectively. In fact what we want is that $f = g = h$, but that is not true when considering individual optimizations from the player perspective (i.e. the conditions for a Nash Equilibrium). Now, fix some arbitrary functions $f_1, g_1, h_1$ and observe that if $\frac{\partial p}{\partial f(i)}(f_1,g_1,h_1) < \frac{\partial p}{\partial f(j)}(f_1,g_1,h_1)$ then there exists a tiny strictly positive amount $\delta$ of probability mass that we could shift from number $i$ to number $j$, i.e. take $$f_2(k) = \begin{cases} f_1(k)-\delta & \text{ when }k = i\\ f_1(k)+\delta & \text{ when }k = j\\ f_1(k) & \text{ otherwise}, \end{cases}$$ and strictly increase the value of $p$, i.e. $p(f_1,g_1,h_1) < p(f_2,g_1,h_1)$. Thus, at a Nash Equilibrium $(f,g,h)$ we must have $\frac{\partial p}{\partial f(i)}(f,g,h) = \frac{\partial p}{\partial f(j)}(f,g,h)$ for any $i$ and $j$. Fortunately $(\spadesuit)$ gives us a nice formula for that.
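(Before continuing, a quick numerical sanity check of $(\spadesuit)$ may be reassuring. The following minimal sketch is mine, not part of the original answer; it assumes Python, truncates the infinite sums at $60$ terms, and uses a made-up helper name `win_prob`. For the question's choice $f(n)=2^{-n}$ it reproduces $p = 2/7 \approx 0.2857$, and for the geometric equilibrium found below it gives $\approx 0.2956$.)

```python
# Sanity check of the winning-probability formula (spadesuit), truncated at N terms.
# List index i corresponds to the chosen number i + 1.
def win_prob(f, g, h):
    N = len(f)
    tail_g = [sum(g[m:]) for m in range(N + 1)]  # tail_g[m] = sum_{k >= m} g[k]
    tail_h = [sum(h[m:]) for m in range(N + 1)]
    p = 0.0
    for n in range(N):
        both_higher = tail_g[n + 1] * tail_h[n + 1]       # both opponents pick larger numbers
        same_lower = sum(g[k] * h[k] for k in range(n))   # both pick the same smaller number
        p += f[n] * (both_higher + same_lower)
    return p

f = [2.0 ** -(n + 1) for n in range(60)]   # f(n) = 2^{-n}
print(win_prob(f, f, f))                   # ~0.285714 = 2/7

d = 0.543689                               # root of d^3 + d^2 + d = 1 (see below)
g = [(1 - d) * d ** n for n in range(60)]  # f(n) = (1 - d) d^{n-1}
print(win_prob(g, g, g))                   # ~0.295598 = (1 - d)/(1 + d)
```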
Adding our requirement for an optimal strategy that $f = g = h$, we get $q_i = q_j$, where (the notation $q_n$ is intentionally kept consistent with the answer of @smcc): \begin{align} q_n &= \frac{\partial p}{\partial f(n)}(f,g,h)\ \Bigg|_{g = h = f} \\ &= \bigg(\sum_{k=n+1}^{\infty} g(k) \bigg)\bigg(\sum_{k=n+1}^{\infty} h(k) \bigg) + \sum_{k=1}^{n-1} g(k)\cdot h(k)\ \Bigg|_{g = h = f} \\ &= \bigg(\sum_{k=n+1}^{\infty} f(k) \bigg)^2 + \sum_{k=1}^{n-1} f^2(k) \\ &= \bigg(1-\sum_{k=1}^{n} f(k) \bigg)^2 + \sum_{k=1}^{n-1} f^2(k). \end{align} I cannot prove that this is the unique equilibrium, but we can show that one particular equilibrium exists, namely $f(n) = (1-d)\cdot d^{n-1}$, where $d$ denotes the probability of not picking number $1$; it happens to be the only real root of $d^3+d^2+d^1 - 1 = 0$. With that hypothesis we have that \begin{align} q_n &= \bigg(1-\sum_{k=1}^{n} (1-d)\cdot d^{k-1} \bigg)^2 + \sum_{k=1}^{n-1} \big((1-d)\cdot d^{k-1}\big)^2 \\ &= d^{2n} + \frac{(1-d)(d^2-d^{2n})}{(1+d)d^2} = \frac{(1-d)d^2+d^{2n} \overbrace{(d^3+d^2+d-1)}^{\text{zero}}}{(1+d)d^2} = \frac{1-d}{1+d} \end{align} Thus, all the $q_n$'s are constant and so $(f,f,f)$ is a Nash Equilibrium. Wolfram Alpha says $$d = \frac13\left(\sqrt[3]{17+3\sqrt{33}}-\sqrt[3]{\frac{2}{17+3 \sqrt{33}}}-1\right) \approx 0.543689.$$ On a final note, the equation $d^3+d^2+d^1 = 1$, or perhaps better $$\big(1-f(1)\big)^3+\big(1-f(1)\big)^2+\big(1-f(1)\big)^1 = 1,$$ suggests that $f$ satisfies $f(n) = f(n+1) + f(n+2) + f(n+3)$ (like a reversed 3-term Fibonacci sequence), but I can't find any probabilistic interpretation for that. Maybe someone can use it to obtain a nice elegant proof? I hope this helps $\ddot\smile$<|endoftext|> TITLE: Two limits involving integrals: $\lim_{\varepsilon\to 0^+}\left(\int_0^{1-\varepsilon}\frac{\ln (1-x)}{x\ln^p x}dx-f_p(\varepsilon)\right)$, $p=1,2$. QUESTION [7 upvotes]: By applying the Taylor series expansion to $\ln x$, as $x \to 1$, one has the Laurent series expansion $$ \frac1{\ln x}=-\frac1{1-x}+\frac{1}{2}+O\left(1-x\right); $$ then clearly $$ \begin{align} &\lim_{\varepsilon \to 0^+} \int_0^{1-\varepsilon} \frac{\ln (1-x)}{x\ln x}\:dx=\infty \\\\&\lim_{\varepsilon \to 0^+} \int_0^{1-\varepsilon} \frac{\ln (1-x)}{x\ln^2 x}\:dx=-\infty. \end{align} $$ Thus I'm designing the following related limits. Question. Find $f_1(\varepsilon)$, $f_2(\varepsilon)$ and find a closed form of $c_1$, $c_2$ such that $$ \begin{align} &\lim_{\varepsilon \to 0^+} \left(\int_0^{1-\varepsilon} \frac{\ln (1-x)}{x\ln x}\:dx-f_1(\varepsilon)\right)=c_1 \tag1 \\\\& \lim_{\varepsilon \to 0^+} \left(\int_0^{1-\varepsilon} \frac{\ln (1-x)}{x\ln^2 x}\:dx-f_2(\varepsilon)\right)=c_2. \tag2 \end{align} $$ Edit. A complete answer is now given below. REPLY [3 votes]: Claim. As $\epsilon \to 0$, $$ \int_{0}^{1-\epsilon} \frac{\log(1-x)}{x \log^2 x} \, \mathrm{d}x = \frac{1+\log\epsilon}{\epsilon} + \frac{\gamma - 1 - \log(2\pi)}{2} + o(1). $$ Step 1. Applying the substitution $t = -\log x$ followed by integration by parts, we have $$ \int_{0}^{1-\epsilon} \frac{\log(1-x)}{x (-\log x)^{s+1}} \, \mathrm{d}x = \frac{\eta^{-s}}{s} \log \epsilon + \frac{1}{s} \int_{\eta}^{\infty} \frac{t^{-s}}{\mathrm{e}^t - 1} \, \mathrm{d}t, \tag{1} $$ where $\eta = -\log(1-\epsilon)$. Now we focus on the latter integral, and define $$ I = I(\eta, s) = \int_{\eta}^{\infty} \frac{t^{-s}}{\mathrm{e}^t - 1} \, \mathrm{d}t. $$ Step 2. Next, we give an explicit formula for $I(\eta, s)$ for small $\eta > 0$. (By small, we mean $\eta \in (0, 2\pi)$.)
This is easily done by mimicking the derivation of the functional equation for $\zeta$ by contour integration. To this end, we first restrict ourselves to the case $\Re(s) \in (1, 2)$ and consider a keyhole-type contour around $[\eta,\infty)$ (the picture from the original post is omitted here): call by $C_{\eta}$ the entire contour and by $\Gamma_{\eta}$ the circular arc. Assuming the branch cut of $\log$ as $[0, \infty)$, this can be computed as \begin{align*} &I - \mathrm{e}^{-2\pi i s} I + \int_{\Gamma_{\eta}} \frac{z^{-s}}{\mathrm{e}^z - 1} \, \mathrm{d}z \\ &\hspace{3em} = \int_{C_{\eta}} \frac{z^{-s}}{\mathrm{e}^z - 1} \, \mathrm{d}z = 2\pi i \sum_{n \in \Bbb{Z}\setminus\{0\}} \underset{z=2\pi i n}{\mathrm{Res}} \; \frac{z^{-s}}{\mathrm{e}^z - 1} \\ &\hspace{6em} = 2i \mathrm{e}^{-i\pi s} (2\pi)^{1-s} \cos\left(\frac{\pi s}{2}\right) \zeta(s). \end{align*} This gives $$ I(\eta, s) = \frac{1}{1 - \mathrm{e}^{-2\pi i s}} \int_{-\Gamma_{\eta}} \frac{z^{-s}}{\mathrm{e}^z - 1} \, \mathrm{d}z + \frac{(2\pi)^{1-s}}{2\sin\left(\frac{\pi s}{2}\right)} \zeta(s) $$ Since both sides define entire functions on all of $\Bbb{C}$, this identity extends to all of $s \in \Bbb{C}$. Now we plug the power series $\frac{z}{\mathrm{e}^z - 1} = \sum_{n=0}^{\infty} \frac{B_n}{n!} z^n$ into the contour integral above to get \begin{align*} I(\eta, s) &= \sum_{n=0}^{\infty} \frac{B_n}{n!} \frac{\eta^{n-s}}{s - n} + \frac{(2\pi)^{1-s}}{2\sin\left(\frac{\pi s}{2}\right)} \zeta(s) \tag{2} \\ &= \sum_{n=0}^{\infty} \frac{B_n}{n!} \frac{\eta^{n-s}}{s - n} + \Gamma(1-s)\zeta(1-s) \tag{3} \end{align*} Step 3. Now we consider the case of interest, i.e., $s = 1$. Taking the limit as $s \to 1$ in $\text{(2)}$, only the first 2 leading terms of the series are significant (as $\eta \to 0$) and we can easily compute $I(\eta, 1)$ as $$ I(\eta, 1) = \frac{1}{\eta} + \frac{1}{2} \log \eta + \frac{1}{2}(\gamma - \log(2\pi)) + \mathcal{O}(\eta). $$ Therefore the claim follows by plugging this into $\text{(1)}$ and utilizing the asymptotics $$ \frac{1+\log \epsilon}{\eta} + \frac{1}{2}\log\eta = \frac{1+\log\epsilon}{\epsilon} - \frac{1}{2} + o(1). $$ Addendum 1. As a by-product, we can compute @Marco Cantarini's integral: $$ \int_{0}^{1} \frac{\log(1-x)}{x} \left( \frac{1}{\log^{2} x} - \frac{1}{(1-x)^2} + \frac{1}{1-x}\right) \, \mathrm{d}x = \frac{\gamma + 1 - \log(2\pi)}{2}. $$ Addendum 2. Another by-product is that, for $N \in \Bbb{Z}$ and $N-1 < \Re(s) < N$ we have $$ \int_{0}^{\infty} x^{s-1} \left( \frac{1}{\mathrm{e}^x - 1} - \sum_{n=-\infty}^{-N} a_n x^{n} \right) \, \mathrm{d}x = \Gamma(s)\zeta(s), $$ where $a_n$ are Laurent coefficients of $\frac{1}{\mathrm{e}^x - 1}$ at $x = 0$: $$ \frac{1}{\mathrm{e}^x - 1} = \sum_{n=-\infty}^{\infty} a_n x^n = \sum_{n=-1}^{\infty} \frac{B_{n+1}}{(n+1)!} x^n. $$<|endoftext|> TITLE: How to prove this combinatorial identity I accidentally found? QUESTION [7 upvotes]: Today, when solving a counting problem using different methods, I found the following (seemingly correct) combinatorial identity, but I can't find it on the Internet and I can't prove its correctness either. But I have verified its correctness for positive integers $n$ within $[0, 1000]$ using a simple computer program. Can anyone give a proof of this identity (or a link to its proof)? $$\frac{(n+1)n}{2} \cdot n! = \sum\limits_{k=0}^{n} (-1)^k \cdot \frac{n!}{k!\cdot(n-k)!} \cdot (n-k)^{n+1}$$ And equivalently if you want, $$\frac{(n+1)n}{2} = \sum\limits_{k=0}^{n} (-1)^k \cdot \frac{1}{k!\cdot(n-k)!} \cdot (n-k)^{n+1}$$ REPLY [4 votes]: Here is another variation of the theme.
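(An aside before the derivation: the computer verification mentioned in the question is easy to reproduce in exact integer arithmetic. The following minimal sketch is mine, not part of the original answer; it assumes Python 3.8+ for `math.comb`.)

```python
# Exact-arithmetic check of the identity for n = 0, 1, ..., 1000.
from math import comb, factorial

def lhs(n):
    return n * (n + 1) // 2 * factorial(n)

def rhs(n):
    return sum((-1) ** k * comb(n, k) * (n - k) ** (n + 1) for k in range(n + 1))

assert all(lhs(n) == rhs(n) for n in range(1001))
print("identity verified for n in [0, 1000]")
```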
It is convenient to use the coefficient of operator $[t^k]$ to denote the coefficient of $t^k$ in a series. This way we can write e.g. \begin{align*} \binom{n}{k}=[t^k](1+t)^n\qquad\text{and}\qquad k^n=n![t^n]e^{kt} \end{align*} The OP's identity can be written as \begin{align*} \frac{n}{2}(n+1)!=\sum_{k=0}^n&(-1)^k\binom{n}{k}(n-k)^{n+1}\quad\qquad n\geq 0 \end{align*} We start with the right-hand side and obtain \begin{align*} \sum_{k=0}^n&(-1)^k\binom{n}{k}(n-k)^{n+1}\\ &=\sum_{k=0}^n(-1)^{n-k}\binom{n}{k}k^{n+1}\tag{1}\\ &=\sum_{k=0}^\infty(-1)^{n-k}[u^k](1+u)^n(n+1)![t^{n+1}]e^{kt}\tag{2}\\ &=(-1)^n(n+1)![t^{n+1}]\sum_{k=0}^\infty(-e^t)^k[u^k](1+u)^n\tag{3}\\ &=(-1)^n(n+1)![t^{n+1}]\left(1-e^t\right)^n\tag{4}\\ &=(n+1)![t^{n+1}]\left(t+\frac{t^2}{2}+\cdots\right)^n\tag{5}\\ &=\frac{n}{2}(n+1)!\tag{6} \end{align*} and the claim follows. Comment: In (1) we change the order of summation: $k\longrightarrow n-k$. In (2) we apply the coefficient of operator twice. We also extend the range of summation to infinity without changing anything, since we are adding zeros only. In (3) we do some rearrangements and use the linearity of the coefficient of operator. In (4) we use the substitution rule with $u=-e^t$: \begin{align*} A(t)=\sum_{k=0}^\infty a_kt^k=\sum_{k=0}^\infty t^k[u^k]A(u)\\ \end{align*} In (5) we factor out $(-1)^n$ and expand the exponential series. In (6) we note that in order to extract the coefficient of $t^{n+1}$ we have $n$ possibilities to select the term $t$ and one possibility to select the term $\frac{t^2}{2}$, giving $\frac{n}{2}$.<|endoftext|> TITLE: Help with this trigonometry problem QUESTION [8 upvotes]: Prove the given identity. $$\left(\sin\frac {9\pi}{70}+ \sin\frac {29\pi}{70} - \sin\frac {31\pi}{70}\right) \left(\sin\frac {\pi}{70}-\sin\frac {11\pi}{70} - \sin\frac {19\pi}{70}\right) =\frac {\sqrt {5} -4}{4}$$ Please help; I could not gather enough ideas, even on how to get to the first step. I thought of using the transformation formula (from sum to product) but did not find a fruitful result. UPDATE 1: $\left(-\frac{1 +\sqrt{5}}{8} + \frac{1}{8} \sqrt{14\left(5 - \sqrt{5} \right)}\right)\left(-\frac{ 1+ \sqrt{5}}{8} -\frac{1}{8} \sqrt{14 \left(5 -\sqrt{5} \right)}\right)$; I noticed that Alexis had got to this point. Can anyone explain to me how he got here, with calculations? Please help. Thanks in advance. REPLY [4 votes]: A method using trigonometric identities, although a very long one. For simplicity, let me put $\dfrac{\pi}{70}=A$. Now, the expression is $(\sin{9A}+\sin{29A}-\sin{31A})(\sin{A}-\sin{11A}-\sin{19A})$. Multiply out both brackets completely and, for each term of the type "$\sin*\sin$", convert it into a sum of two cosines using the formula $2\sin{A}\sin{B}=\cos{(A-B)}-\cos{(A+B)}$, to get the following expression: $\dfrac{1}{2}(\cos{8A}-\cos{10A}-\cos{2A}+\cos{20A}-\cos{10A}+\cos{28A}+\cos{28A}-\cos{30A}-\cos{18A}+\cos{40A}-\cos{10A}+\cos{48A}-\cos{30A}+\cos{32A}+\cos{20A}-\cos{42A}+\cos{12A}-\cos{50A})$ Now, convert all those cosines whose angles are greater than $90$ degrees (i.e. greater than $35A$ here) into cosines of angles less than $90$ degrees (i.e. less than $35A$ here), by using the fact that $\cos{(180^\circ-\theta)}=-\cos{\theta}$, to get the following simplified expression: $\dfrac{1}{2}(\cos{8A}-3\cos{10A}-\cos{2A}+3\cos{20A}+3\cos{28A}-3\cos{30A}-\cos{18A}-\cos{22A}+\cos{32A}+\cos{12A})$, which further becomes $\dfrac{1}{2}((\cos{8A}-\cos{22A})-(\cos{2A}+\cos{18A})+(\cos{32A}+\cos{12A})-3\cos{10A}+3\cos{20A}-3\cos{30A}+3\cos{28A})$ $\dfrac{1}{2}((\cos{8A}-\cos{22A})-2\cos{10A}\cos{8A}+2\cos{22A}\cos{10A}-3\cos{10A}+3\cos{20A}-3\cos{30A}+3\cos{28A})$ $\dfrac{1}{2}((\cos{8A}-\cos{22A})(1-2\cos{10A})-3\cos{10A}+3\cos{20A}-3\cos{30A}+3\cos{28A})$ $\dfrac{1}{2}((2\sin{15A}\sin{7A})(1-2\cos{10A})-3\cos{10A}+3\cos{20A}-3\cos{30A}+3\cos{28A})$ $\dfrac{1}{2}((2\sin{7A})(\sin{15A}-2\sin{15A}\cos{10A})-3\cos{10A}+3\cos{20A}-3\cos{30A}+3\cos{28A})$ $\dfrac{1}{2}((2\sin{7A})(\sin{15A}-\sin{25A}-\sin{5A})-3\cos{10A}+3\cos{20A}-3\cos{30A}+3\cos{28A})$ Now use $\sin{15A}=\cos{20A},\sin{25A}=\cos{10A},\sin{5A}=\cos{30A}$ (complementary angle pairs; check for yourself!) to write the above expression as $\dfrac{1}{2}((2\sin{7A})(\cos{20A}-\cos{10A}-\cos{30A})+3(\cos{20A}-\cos{10A}-\cos{30A})+3\cos{28A})$ $\dfrac{1}{2}((2\sin{7A}+3)(\cos{20A}-\cos{10A}-\cos{30A})+3\cos{28A})$ Now use $\cos{20A}=-\cos{50A}$ (supplementary angle pairs; check for yourself!) and rewrite the above expression as $\dfrac{1}{2}((2\sin{7A}+3)(-\cos{10A}-\cos{30A}-\cos{50A})+3\cos{28A})$ ... (1) $\cos{10A}+\cos{30A}+\cos{50A}=\dfrac{\cos{30A}\times\sin{30A}}{\sin{10A}}=\dfrac{\sin{60A}}{2\sin{10A}}=\dfrac{1}{2}$. Put this in expression (1) to get $\dfrac{1}{2}\left(-\dfrac{1}{2}(2\sin{7A}+3)+3\cos{28A}\right)$ But $\cos{28A}=\sin{7A}=\dfrac{\sqrt{5}-1}{4}$. Finally, put these in to get your answer!<|endoftext|> TITLE: There exists a lattice point $(a,b)$ whose distance from every *visible* point is greater than $n$. QUESTION [5 upvotes]: Question- A lattice point $(x,y)\in\mathbb{Z}^2$ is called visible if $\gcd(x,y)=1$. Prove that given a positive integer $n$, there exists a lattice point $(a,b)$ whose distance from every visible point is greater than $n$. I am totally nowhere near progress on this. I was thinking of trying the pigeonhole principle, but can't find any appropriate candidate pigeons. Please give any hints to start. REPLY [2 votes]: This follows from Theorem 7.3 in the book “Mathematical Journeys,” by Peter D. Schumer. The proof, which uses the Chinese remainder theorem, can be seen on Google Books here, or (summarized) in the paragraph below. Schumer constructs an $n\times n$ square of invisible lattice points as follows. (You can choose $(a,b)$ at the center of such a square of side $2n$.) The proof goes like this: Arrange the first $n^2$ primes in an $n\times n$ array and denote by $m_i$ the product across the $i$-th row and by $M_i$ the product down the $i$-th column. Then consider two systems of congruences: the system $x\equiv -i \mod m_i$ and the system $y\equiv -i\mod M_i$. Modulo $P=\prod m_i=\prod M_i$, these systems have unique solutions $x\equiv_P a$ and $y\equiv_P b$ (by the Chinese remainder theorem). Then the square with lower left corner $(a+1,b+1)$ and upper right corner $(a+n,b+n)$ contains no visible lattice points, because for any integers $i,j$ between $1$ and $n$, both $a+i$ and $b+j$ will be divisible by the prime $p$ in row $i$ and column $j$ of the array of primes.<|endoftext|> TITLE: What is the term given to two functions when their order of composition does not matter?
QUESTION [17 upvotes]: When functions $f$ and $g$ have the property that $f(g(x)) = g(f(x))$ for all $x$ in the domains, I call this property 'commutativity'. (Usually both functions map from $\mathbb{R}$ to $\mathbb{R}$, so the problem of domain/range doesn't matter.) However, commutativity is actually the property that $a*b=b*a$. I use the term knowing that it probably isn't the right one... but I've never found out what I should call it. REPLY [14 votes]: The terminology "$f$ and $g$ commute" is perfectly fine and commonly used. For example, in linear algebra, it is a useful fact that two diagonalizable operators can be simultaneously diagonalized if and only if the operators commute. Another example is given by the Lie bracket of two vector fields, a very important object in geometry that measures the extent to which the flows of the vector fields commute locally.<|endoftext|> TITLE: Unique rational normal curve through d+3 points QUESTION [7 upvotes]: We define a rational normal curve to be the image of a map $$\mathbb P^1\rightarrow \mathbb P^d, [x:y]\mapsto [P_0(x,y):P_1(x,y): \ldots :P_d(x,y)]$$ where $P_0(x,y),P_1(x,y), \ldots P_d(x,y)$ are linearly independent homogeneous degree $d$ polynomials. Prove that through any $d+3$ points in $\mathbb P^d$ in general position (i.e. any $d+1$ of them span $\mathbb P^d$) there exists a unique rational normal curve passing through them. While I am able to prove the existence, I don't know how to prove the uniqueness part. REPLY [9 votes]: Without loss of generality one can assume that the first $d+1$ points are $(1,0,\dots,0)$, $(0,1,\dots,0)$, $\dots$, $(0,0,\dots,1)$. Consider the birational isomorphism $$ (x_0,x_1,\dots,x_d) \mapsto (x_0^{-1},x_1^{-1},\dots,x_d^{-1}). $$ It is easy to see that it takes a rational normal curve through these $(d+1)$ points to a line, and its inverse (which is given by the same formula) takes a line to a rational normal curve through these points. So, the result follows from the fact that there is a unique line through two given points (the images of the last two of the $(d+3)$ points).<|endoftext|> TITLE: Rational approximation using only odd numerators. QUESTION [5 upvotes]: Dirichlet's simultaneous approximation theorem says: Given any $n$ real numbers $\alpha_1,\ldots,\alpha_n$ and for every natural number $N \in \mathbb{N}$, there exist integers $q \leq N$, and $p_1,\ldots,p_n \in \mathbb{Z}$, such that: $$ \Bigg|\alpha_i - \frac{p_i}{q}\Bigg| < \frac{1}{qN^{1/n}} \text{ for } i=1,\ldots,n $$ I would like to prove a similar theorem, but want to insist that the $p_i$ all be odd. It's easy to prove a version which has all the $p_i$ even, because the set of points in $\mathbb{R}^n$ with even coordinates is a lattice. The general version of Minkowski's Theorem says that a symmetric region containing the origin, if its volume is sufficiently large, must contain a nonzero lattice point. But the set of points with odd coordinates is only a coset of the lattice of points with even coordinates. It seems obviously true that one should be able to replace "lattice" with "coset of the lattice" in Minkowski's Theorem, but I sure don't see how to do it. I can get the result I want in one dimension by modifying the continued fraction construction, but that doesn't export to $n$ dimensions. References and insights welcome. REPLY [2 votes]: The main point of the Minkowski theorem is that when you take a geometric projection you know that the volume of a certain set is too large not to have the difference of two points be in a given lattice.
In symbols: if $S$ is any subset of your vector space (more generally, locally compact group) $V$, equipped with a Haar measure $m$, and $\Lambda$ is a lattice (more generally, a co-compact, discrete subgroup), then if $S\subseteq V$ with $m(S)>m(V/\Lambda)$ we have $x\ne y\in S$ so that $$x\equiv y\mod \Lambda.$$ Minkowski's use of this critical lemma is just that when $S$ is convex and centrally symmetric, then as $2\Lambda$ is also a lattice with $$m(V/2\Lambda) = 2^nm(V/\Lambda)$$ (in $\Bbb R^n$ with $m$ as Lebesgue measure), when $m(S) > 2^nm(V/\Lambda)$ we can get points $x\equiv y\mod 2\Lambda$. Convexity and symmetry of $S$ give $${1\over 2} (x) + {1\over 2} (-y)={1\over 2} (x-y)\in \Lambda\cap S.$$ As a result, your insight about the cosets is exactly on point, because groups are so homogeneous in structure. But how, exactly, does this work when we're dealing with a non-lattice? In general the proof shows that there is no real control over what coset of a given sub-lattice you get: it's a very general construction using very general assumptions. However, since the geometry of the situation is relatively simple, we can exploit it for some progress. Note that the situation of simultaneous approximation as it manifests in this problem concerns a parallelepiped, namely $$M^{-1}\bigg(\left[-N-{1\over 2}, N+{1\over 2}\right]\times \left[-N^{-1/n}, N^{-1/n}\right]^{n}\bigg)$$ with $$M = \begin{pmatrix} 1 && 0 && 0 && \ldots && 0 \\ \alpha_1 && -1 && 0 && \ldots && 0 \\ \alpha_2 && 0 && -1 && \ldots && 0 \\ \vdots && \vdots && \vdots && \ddots && \vdots \\ \alpha_n && 0 && \ldots && 0 && -1 \end{pmatrix}.$$ Luckily here you can easily see how it affects the distance on each coordinate. In particular you can see that if you're willing to have a constant (depending on the dimension only) as a fudge factor, you can expand the result pretty easily just by making your intervals a little wider. Note that the key obstruction to naively doing this is the fact that this is an existence theorem, which means you have essentially no control over what the convergent will look like for any fixed $N$ unless you are willing to expand the hypotheses you're willing to use a little (unless you get lucky and you pick an actual lattice, like when you do all evens). However, if you are happy to accept some of the "on average" machinery of statistical number theory, we note that the density of lattice points is inversely proportional to the determinant of the lattice, hence there must be points in proper cosets--in fact most of them are in cosets! If you want something for a specific choice of coset, e.g. all the numerators are odd rather than just some numerator is odd, you can use the fact that you are dealing with a parallelepiped, so that you understand the relative "thickness" in any given direction based on the $\alpha_i$ in question.<|endoftext|> TITLE: Is $\Bbb Q(\sqrt 2, e)$ a simple extension of $\Bbb Q$? QUESTION [10 upvotes]: My general question is to find, if this is possible, two real numbers $a,b$ such that $K=\Bbb Q(a,b)$ is not a simple extension of $\Bbb Q$. $\newcommand{\Q}{\Bbb Q}$ Of course $a$ and $b$ can't both be algebraic, otherwise $K$ would be a separable ($\Q$ has characteristic $0$) and finite extension, which has to be simple. So I tried with $\Q(\sqrt 2, e)$, but any other example would be accepted.
The field $\Q(\sqrt 2, e)$ has transcendence degree $1$ over $\Q$, but I'm not sure if this implies that it is isomorphic to $\Q(a)$ for some transcendental number $a$ (the fact that two fields have the same transcendence degree over another field shouldn't imply that the fields are isomorphic). I'm not sure about the relation between the algebraic independence of $a$ and $b$, and the fact that $\Q(a,b)/\Q$ is a simple extension. Notice that it is probably unknown whether $\Q(\pi, e)$ is a simple extension of $\Q$. Thank you for your help! REPLY [6 votes]: You have to show that $$ \mathbb{Q}(X) \subsetneq \mathbb{Q}(e,\sqrt{2}) $$ for any $X \in \mathbb{Q}(e,\sqrt{2})$. If $X$ is algebraic, then $[\mathbb{Q}(X) : \mathbb{Q}]$ is finite while $[\mathbb{Q}(e,\sqrt{2}): \mathbb{Q}]$ is infinite. If $X$ is not algebraic, then it is transcendental. It suffices to show that $\mathbb{Q}(X)$ does not contain a square root of $2$. Since $\mathbb{Q}(X)$ is isomorphic to the fraction field of the polynomial ring $\mathbb{Q}[X]$, you need to show that there do not exist polynomials $p(X)$, $q(X)$ with rational coefficients such that $$ \left( \frac{p(X)}{q(X)} \right)^2 = 2. $$ Can you take it from here?<|endoftext|> TITLE: Connected, not path-connected subgroup of $\mathbb{T}^2$ QUESTION [6 upvotes]: I read (in Structure and Geometry of Lie Groups by Hilgert and Neeb, I think) that it is possible for a Lie group to admit a connected yet not-path-connected subgroup. Specifically it said $(\mathbb{R/Z})^2$ admits such a subgroup. I did not manage to construct such a subgroup. Can someone help? Obviously such a subgroup can't be closed, or even a Lie subgroup, because connected Lie groups are path-connected (as are all manifolds). The only group I know which is connected but not path-connected is the solenoid, but it is compact, so if it were a subgroup of $\mathbb{T}^2$ it would have been closed, hence a Lie subgroup and therefore path-connected, which is a contradiction. REPLY [4 votes]: Apparently, similar questions have been asked on MathOverflow. I will quote them here in case anyone else is interested in this. New Answer Here Gabriel C. Drummond-Cole gave a very nice answer: the subgroup $\{(x,\pi x)+\mathbb{Z}^2\}\cup\{(x+\frac{1}{2},\pi x)+\mathbb{Z}^2\}$ (for $x\in\mathbb{R}$). A direct computation shows this is a subgroup (using the fact that $\frac12+\frac12=0$ in the torus). It has two path-connected components: clearly $\{(x,\pi x)+\mathbb{Z}^2\}$ and $\{(x+\frac{1}{2},\pi x)+\mathbb{Z}^2\}$ are path-connected (as the images of $\{(x,\pi x)\}$ and $\{(x+\frac{1}{2},\pi x)\}$), and there is no path connecting $(0,0)+\mathbb{Z}^2$ and $(\frac{1}{2},0)+\mathbb{Z}^2$, since such a path would lift to a path from $(0,0)$ to $(\frac{1}{2},0)$ lying in the preimage of this subgroup in $\mathbb{R}^2$, i.e. in $\{(x,\pi x)\}\cup\{(x+\frac{1}{2},\pi x)\} + \mathbb{Z}^2\subseteq\mathbb{R}^2$. Obviously in this set $(0,0)$ and $(\frac{1}{2},0)$ are in different path-connected components, so this is a contradiction. So our subgroup is not path-connected. However, it is connected: since it has precisely two path-components, we just need to show neither of them is clopen. It's enough to show neither is closed, because there are just finitely many (two) of them. But each of them is known to be dense in $(\mathbb{R}/\mathbb{Z})^2$, so you can approach a point of one from points in the other, so they are not closed. This is a very nice and simple answer.
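(For completeness, the "direct computation" mentioned above is just the following one-line check; this expansion is mine, not part of the quoted answer. Working modulo $\mathbb{Z}^2$, $$(x+\tfrac{1}{2},\pi x)+(y+\tfrac{1}{2},\pi y)=(x+y+1,\pi(x+y))\equiv(x+y,\pi(x+y)),$$ while $$(x,\pi x)+(y+\tfrac{1}{2},\pi y)=(x+y+\tfrac{1}{2},\pi(x+y)) \qquad\text{and}\qquad -(x+\tfrac{1}{2},\pi x)\equiv(-x+\tfrac{1}{2},\pi(-x)),$$ so the union of the two sheets is closed under addition and inverses, the sheets combining like the two elements of $\mathbb{Z}/2$.)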
Old Answer This question asks in general about subgroups which are not Lie subgroups, but for nontrivial reasons (i.e. not just because they must have uncountably many connected components). Essentially only Claudio Gorodski answered: I found a very interesting discussion by Shahla Ahdout and Sheldon Rothman in the Australian Mathematical Society Web Site - the Gazette in which they exhibit an example of a connected, locally connected subgroup of the additive group $\mathbf R^2$ which contains no arcs and is dense in $\mathbf R^2$, essentially quoting F. Burton Jones, Connected and disconnected plane sets and the functional equation f(x) +f(y) =f(x+y), Bull. Amer. Math. Soc. 49 (1942), 115-120. Such a subgroup is not a Lie group, indicating how essential the role of arcwise connectivity is. The paper by Jones solves the problem. He proves (Theorem 5, page 118) there is a function $f:\mathbb{R}\to\mathbb{R}$ which is additive (i.e. $f(x+y)=f(x)+f(y)$ for all $x,y\in\mathbb{R}$) and which satisfies the following: (1) $f$ is not continuous; (2) the graph of $f$ is connected; (3) the graph of $f$ is not path-connected (this follows from Property $3$ on page 119, since any path lying inside the graph is a bounded connected subset; it also follows from Property $2$ on the same page, since any nontrivial path is a nondegenerate continuum, i.e. a connected compact subset containing more than one point). To be precise, he constructs an additive function $f:\mathbb{R}\to\mathbb{R}$ which satisfies (1) and (2), and asserts that (3) follows. For this he referred to two articles in German, unfortunately. However, if we believe there is such a function $f$, then its graph is a subgroup of $\mathbb{R}^2$ (by additivity of $f$) which is connected and not path-connected, and hence this solves my question. I am not particularly satisfied with this answer: I was hoping for a more pleasant (and more complete) argument. Maybe there isn't one. In any case, there was a second question on MathOverflow, inspired by the first. It asks whether there are connected non-path-connected subgroups of $\mathbb{R}^2$ by explicitly asking for a construction of $f$ as above. Martin M. W. gave the following answer, which is maybe simpler than Jones's proof (although it also omits the explanation of why (3) is true): I think the answer is, yes, the graph can be connected. By definition, if the graph G is not connected, then we can find disjoint nonempty open sets A and B, such that G is contained in A union B. In particular, this implies no point in G can be contained in the boundary of A. So if we can construct an additive function f whose graph intersects the boundary of any potential separating open set A, we'll have shown the graph is connected. Before constructing this function, note a technical point. Not all open sets are candidates for separating G. If G = A union B for nonempty open sets A,B, then the projections proj(A) and proj(B) onto the x-axis are both open, and must intersect. In turn this implies the projection of the boundary of A contains an interval. Call open sets with this property "candidate sets". To make a function f whose graph intersects the boundary of all candidate sets, consider a basis H for R as a vector space over Q. This set has cardinality of the reals. Now note that the set of all open sets in R^2 also has cardinality of the reals. (http://en.wikipedia.org/wiki/Cardinality_of_the_continuum) Put these two sets (basis H, all open sets) in 1-1 correspondence, so for each h in H, we have an open set O(h).
If O(h) is not a "candidate set," let f(h)=0. Otherwise, using the fact that O(h) is a candidate set, we can always find a nonzero rational q, and a real y, such that (qh,y) is in the boundary of O(h). Define f(qh)=y. Doing this for all elements of H will determine a unique additive function f on the reals. The graph of f, by construction, is connected since it intersects the boundary of every candidate separating open set in R^2. (And it's not continuous--if it were, it would miss the boundaries of a lot of open sets!) There was a second answer, by Andrew Clifford, which linked to two papers: one by Thomas and one by Maehara. Maehara again constructs an additive function $f:\mathbb{R}\to\mathbb{R}$ satisfying (1) and (2) but doesn't prove (3). Thomas supposedly proves exactly what we (I?) want: a Lie group of dimension greater than $1$ always admits a connected subgroup which is not a Lie subgroup (and hence isn't path-connected, since connected Lie subgroups are path-connected). However, Andrew Clifford (who gave the link to this paper) also wrote "Math Reviews says that there is an error in it, but I think the theorem still holds for Abelian groups." Read at your own risk. Briefly, the most satisfying answer is the paper by Thomas, if indeed it is true for Abelian groups. If not, I am still hoping to find a cleaner argument. After all, this was supposed to be an exercise.<|endoftext|> TITLE: Operators with compact resolvent QUESTION [5 upvotes]: This should be a basic, or even stupid, question, but I am really confused, and I cannot find any webpage that addresses my question. From Wikipedia (https://en.wikipedia.org/wiki/Resolvent_formalism), an operator $A$ has compact resolvent iff $(A - zI)^{-1}$ is compact for some $z$. My confusion is that compact operators cannot be invertible if the domain is infinite-dimensional, but clearly $(A - zI)^{-1}$ is invertible by definition. Then this definition would not make sense! REPLY [5 votes]: Indeed, when the resolvent is compact and the space is not finite dimensional, the operator must be unbounded. An example: $x=(x_1,x_2,...)\in \ell^2({\Bbb N})$, $$ A x= (x_1,2x_2,3x_3,...), \ \ \ (\lambda-A)^{-1} x = \left( \frac{1}{\lambda- k} x_k \right)_{k\geq 1}$$ which is readily verified to be compact for all $\lambda\notin {\Bbb N}$. Another example (more interesting) is the Laplacian on a bounded domain $\Omega\subset \Bbb{R}^d$.<|endoftext|> TITLE: Show a C-infinity function is a polynomial QUESTION [9 upvotes]: Suppose $f\in C^\infty(\mathbb{R})$ and for any $x\in\mathbb{R}$, there exists $N\in\mathbb{N}$ such that $f^{(N)}(x)=0$. Show that $f$ is a polynomial. This is from one of the analysis qualifying exam problems. I can show there exists an interval $(a,b)$ on which $f$ is a polynomial by using the Baire category theorem, but I can't extend it to the real line. Any suggestion? I think I have some new ideas about this problem. First, I can find an interval $I=(a,b)$ by using the Baire category theorem (the same idea from the question that Clement C. added) where $f$ coincides with a polynomial $g$ on $I$. Then we consider $f^{(i)}(a), i\leq N_g$ where $N_g$ is the degree of $g$. We must have $f^{(N_g)}(a)=0$ because $f^{(i)}(a^+)=f^{(i)}(a)=g^{(i)}(a), i\in\mathbb{N}$. So if we apply Taylor's theorem at $x=a$, we can extend $I=(a,b)$ to $I_\epsilon=(a-\epsilon,b)$ such that $f=g$ on $I_\epsilon$. Following the same idea, we can show that $$\inf\{a:f=g \text{ on } (a,b) \}=-\infty$$ Similarly we can show $f=g$ on $\mathbb{R}$.
Update: I just found that the idea above might not work after I was trying to write down a rigorous proof. The main problem is that a $C^\infty$ function is not necessarily analytic. (For example, let $f(x)=0$ on $(-\infty,0]$ and $f(x)=e^{-1/x}$ on $(0,\infty)$.) So I can only extend $(a,b)$ to some larger (or equal) interval $(c,d)$ if I only use the $C^\infty$ property. So the claim above $$\inf\{a:f=g \text{ on } (a,b) \}=-\infty$$ might not be true. I really have no idea how to extend this interval to the real line now. REPLY [2 votes]: Check out the answers to this question: https://mathoverflow.net/questions/34059/if-f-is-infinitely-differentiable-then-f-coincides-with-a-polynomial. There the question only asks for a proof of the result for $f \in C^\infty([0,1])$. However, most of the answers also work for $f \in C^\infty(\mathbb{R})$. In particular, the accepted answer also solves the problem when replacing $[0,1]$ by $\mathbb R$.<|endoftext|> TITLE: Fourier transform as exponential of Hermitian operator QUESTION [12 upvotes]: The Fourier transform $$F: L^2\rightarrow L^2 \qquad \hat{f}(\omega) \equiv (Ff)(\omega) \equiv \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(t)e^{-i\omega t}dt$$ can be viewed as a unitary operator with inverse (and hence, adjoint) $$(F^{-1}Ff)(t) = (F^\dagger F f)(t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty}\hat{f}(\omega)e^{i\omega t}d\omega \, .$$ All unitary operators $U$ can be written as the exponential of a Hermitian operator $H$, i.e. $U = \exp(iH)$. What is the Hermitian operator $H$ that satisfies $F = \exp(iH)$, and does it have any physical significance or uses? REPLY [5 votes]: This Hermitian generator is simply the Hamiltonian of the harmonic oscillator, $H=\frac{1}{2}(x^2+p^2)$. This is simple to understand. In the Heisenberg picture, evolution with this Hamiltonian is just rotation in the $(x,p)$ plane. The Fourier transform is the transformation which maps $x \to p$ and $p \to -x$, so it is the particular rotation by angle $\frac{\pi}{2}$. Hence $\mathcal F = \exp (-i \frac{\pi H}{2})$, up to a constant phase: the ground-state energy $\frac12$ contributes an overall factor, and the precise statement (in the convention above) is $\mathcal F = \exp\left(-i \frac{\pi}{2}\left(H - \frac12\right)\right)$.<|endoftext|> TITLE: Question on proof that closed subset of compact metric space is compact QUESTION [7 upvotes]: Proposition. If $K$ is a compact metric space, $F \subset K$, and $F$ is closed, then $F$ is compact. Proof. Suppose $F$ is a closed subset of the compact set $K$. If $\mathcal{G} = \{G_\alpha\}$ is an open cover of $F$, then $\mathcal{G}' = \mathcal{G} \cup \{F^c\}$ will be an open cover of $K$. We are given that $K$ is compact, so let $\mathcal{H}$ be a finite subcover of $\mathcal{G}'$ for $K$. If $F^c$ is one of the open sets in $\mathcal{H}$, omit it. The resulting finite subcover of $\mathcal{G}$ will be an open cover of $F$. I have a question. So I understand that we can throw out $F^c$, as it's disjoint from $F$. But how do we know that a finite subcover of $\mathcal{G}' = \mathcal{G} \cup \{F^c\}$ (which for all intents and purposes, just consider $\mathcal{G}$) for $K$ is actually an open cover of $F$? How do we know we aren't missing any part of $F$, i.e. there's a part of $F$ that hasn't been covered? REPLY [3 votes]: Basically what this proof does is it shows that given an open cover $U$ of $F$, we can extend $U$ to cover all of $K$ by adding the open set $F^c$ (this is possible since $F$ is closed). Then since this new open cover covers all of $K$, it must have a finite subcover by compactness, call it $U' \subseteq U \cup \left\{F^c \right\}$.
Then $U'$ covers $F$ since $F \subseteq K$, and we can toss out $F^c$ to get an open cover of $F$ that is a finite subcover of $U$. Since $U$ was arbitrary, the claim follows. Note that we didn't use any particular properties of metric spaces here; this actually holds in any compact topological space $K$.<|endoftext|> TITLE: "Spreading out" a smooth, connected $\mathbb{C}$-scheme of finite type. QUESTION [10 upvotes]: I have been reading the article "A conjecture in the arithmetic theory of differential equations" of Katz and I have a doubt regarding the "spreading out". My question is about the following paragraph, in section VI, which says Let $X$ be a connected, smooth $\mathbb{C}$-scheme of finite type. It is standard that we can find a subring $R\subseteq\mathbb{C}$ which is finitely generated as a $\mathbb{Z}$-algebra, and a connected smooth $R$-scheme $\mathbb{X}/R$ of finite type, with geometrically connected fibres, from which we recover $X/\mathbb{C}$ by making the extension of scalars $R\hookrightarrow\mathbb{C}$. I don't understand why it holds and wasn't able to find any concrete reference. My attempt was to first try to understand this locally, that is, for the coordinate ring $\mathbb{C}[x_1,\ldots,x_n]/(f_1,\ldots,f_r)$ of an affine variety over $\mathbb{C}$, and I thought that one may take $R$ as the ring obtained by adjoining the coefficients of the $f_i$ to $\mathbb{Z}$. However, in another article, "Nilpotent connections and the monodromy theorem: applications of a result of Turritin", Katz considers the following example For instance, the Legendre family of elliptic curves, given in homogeneous coordinates by $$Y^2 Z - X(X-Z)(X-\lambda Z) \;\;\text{ in }\mathrm{Spec}\left(\mathbb{C}\left[\lambda,\dfrac{1}{\lambda(1-\lambda)}\right]\right)\times\mathbb{P}^2$$ is (projective and) smooth over $\mathrm{Spec}\left(\mathbb{C}\left[\lambda,\dfrac{1}{\lambda(1-\lambda)}\right]\right)=\mathbb{A}^1\smallsetminus\{0,1\}$. A natural thickening is just to keep the previous equation, but replace $\mathbb{C}\left[\lambda,\dfrac{1}{\lambda(1-\lambda)}\right]$ by $\mathbb{Z}\left[\lambda,\dfrac{1}{2\lambda(1-\lambda)}\right]$, and replace $\mathbb{C}$ by $\mathbb{Z}[1/2]$. and in that example he adds more than just the coefficients to $\mathbb{Z}$. Edit: The main part that bugs me in this construction is that I don't see why the fibers are geometrically connected. When I saw the Legendre family example I thought that maybe one needed to add more things to the ring. Any help for how to do this? Thanks in advance for any hints, references or anything that will help me to better understand this. REPLY [4 votes]: For the general case, you need to know some of the constructibility and openness results in EGA. But let me explain two "baby" cases which I hope will make clear what is going on. ${\color{red}{\text{Example 1:}}}$ We are given a monic polynomial $f(X)$ in one variable $X$, with coefficients in $\mathbb{C}$, and with all distinct roots. We want to find a finitely generated subring $R_0$ of $\mathbb{C}$ and a monic polynomial $f_0(X)$ with coefficients in $R_0$ such that after the extension of scalars from $R_0$ back to $\mathbb{C}$, we get our original polynomial $f(X)$, and for every homomorphism from $R_0$ to a field, call it $k$, the polynomial over $k$ we get by applying the homomorphism to the coefficients of $f_0$ has all distinct roots (in an algebraic closure of $k$).
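(Before the two methods below, it may help to keep a toy instance of Example 1 in mind; this illustration is mine, not Katz's. Take $f(X)=X^2-2$, so $R_1=\mathbb{Z}$. The discriminant is $\Delta=8$, so we may take $R_0=\mathbb{Z}[1/8]=\mathbb{Z}[1/2]$, and indeed $$\left(-\tfrac{1}{2}\right)(X^2-2)+\tfrac{X}{4}\,(2X)=1 \qquad\text{in } \mathbb{Z}[\tfrac{1}{2}][X],$$ so over every field receiving $R_0$ the two roots stay distinct, while over $\mathbb{F}_2$, which receives no map from $R_0$, the polynomial degenerates to $X^2$ with a double root. This is the same mechanism by which $2$ gets inverted in the Legendre example above.)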
${\color{blue}{\text{First method:}}}$ Start with $$R_1 := \mathbb{Z}[\text{all the coefficients of }f].$$ The discriminant $\Delta_1$ of $f$ is an element of $R_1$ which is nonzero in $\mathbb{C}$, so certainly nonzero in $R_1$. Take $R_0$ to be $R_1[1/\Delta_1]$. What we have achieved is that the discriminant is now an invertible element of $R_0$. ${\color{blue}{\text{Second method:}}}$ Again start with $R_1$ as above. We know that over $\mathbb{C}$, $f$ and its derivative $f'$ generate the unit ideal in $\mathbb{C}[X]$. Explicitly, there exist complex polynomials $A$ and $B$ such that $Af + Bf' = 1$. Now adjoin to $R_1$ all the coefficients of both $A$ and $B$. Over this $R_0$, $f$ and $f'$ generate the unit ideal in $R_0[X]$. ${\color{red}{\text{Example 2:}}}$ We start with a nonsingular hypersurface in affine $n$-space, defined by one equation $f(X_1, \ldots, X_n) = 0$. The nonsingularity means that in $\mathbb{C}[X_1, \ldots, X_n]$, $f$ and its partial derivatives $df/dX_i$ generate the unit ideal. So there exist polynomials $A$, $B_1$, $B_2$, $\ldots$ , $B_n$ in $\mathbb{C}[X_1, \ldots, X_n]$ such that $$Af + \sum_i B_i {{df}\over{dX_i}} = 1.$$ Okay, start with $$R_1 := \mathbb{Z}[\text{coefficients of }f],$$ then pass to $$R_0 := R_1[\text{all coefficients of }A\text{ and all the }B_i].$$ Once you have a spreading out, say $X_1/R_1$ with structure map $\pi$, which is now smooth and whose $\mathbb{C}$-fiber is geometrically connected (which for a smooth scheme is the same as geometrically irreducible), you can use the fact that for $n := \text{the relative dimension of }X_1/R_1$, and $\ell$ a prime invertible in $R_1$ (if there isn't one, pass to $R_1[1/\ell]$ for your favorite $\ell$), $R^{2n}\pi_!(\mathbb{Z}/\ell\mathbb{Z})$ is constructible, so "locally constant" or lisse, on a dense open set of $\text{Spec}\,R_1$; its rank at each point is the number of geometrically irreducible components of the fiber: as this rank is one at the point corresponding to $\mathbb{C}$ (a point lying over the generic point), this rank is one on an open dense set, so certainly over a set of the form $R_1[1/g]$ for some nonzero element $g$ in $R_1$.<|endoftext|> TITLE: Enriching an adjunction QUESTION [6 upvotes]: I'm studying the notion of a $\mathcal{V}$ category $\underline{\mathscr{A}}$ which is powered or copowered over $\mathcal{V}$. I'm having trouble finding a proof that powering/copowering gives a $\mathcal{V}$ functor $\underline{\mathcal{V}} \otimes \underline{\mathscr{A}} \to \underline{\mathscr{A}}$. (Add "op" as necessary.) In Riehl's Categorical Homotopy Theory, she gives some indication of the proof, but I'm having trouble parsing it. In particular, see these two pages. The claim here appears to be the following: Proposition: [Edit: I believe I have interpreted the text wrong, because this seems to be incorrect] Let $\underline{\mathscr{A}}$ and $\underline{\mathscr{B}}$ be $\mathcal{V}$ categories, with underlying categories $\mathscr{A}$ and $\mathscr{B}$. Let $L:\mathscr{A} \leftrightarrows \mathscr{B}: R$ be an adjunction with unit $\eta: 1_\mathscr{A} \Rightarrow RL$. Let there be a given isomorphism $\underline{\mathscr{A}}(a,Rb) \cong \underline{\mathscr{B}}(La, b)$ whose image in the underlying categories is the natural isomorphism of the adjunction. Let $R$ be the underlying functor of a $\mathcal{V}$ functor $\mathbf{R}:\underline{\mathscr{B}} \to \underline{\mathscr{A}}$.
Then there is a $\mathcal{V}$ functor $\mathbf{F}:\underline{\mathscr{A}} \to \underline{\mathscr{B}}$ agreeing with $L$ on objects (write $F := L$) and with action on hom objects given by $$\mathbf{F}: \underline{\mathscr{A}}(a,a') \xrightarrow{(\eta_{a'})_*} \underline{\mathscr{A}}(a, RLa') \cong \underline{\mathscr{B}}(Fa, Fa')$$ $\mathbf{F}$ is the $\mathcal{V}$-adjunct of $\mathbf{R}$. I can't seem to show functoriality of $\mathbf{F}$. (See this link where I've drawn the diagrams out.) Maybe someone can help. It could be that I'm misunderstanding the claim she's making. Edit: Just considered the possibility that there's something special about $\underline{\mathcal{V}} = \underline{\mathscr{A}}$, which seems to be the case she's dealing with exclusively. Let me see if I can make that work. Edit 2: The statement in Kelly is: When [the forgetful functor] is conservative--faithfulness is not enough--the existence of a left adjoint $S_0$ for $T_0$ implies that of a left adjoint $S$ for $T$. So I think I must be misunderstanding Riehl's text here. Perhaps someone can help me. REPLY [10 votes]: Let me start by stating clearly the unenriched version of the result whose enriched version I was trying to evoke here. This appears as Proposition 4.3.4 in Category Theory in Context which is available here: Category Theory in Context Prop. Consider a functor $G\colon B \to A$ so that for each $a \in A$ there exists an object $Fa \in B$ together with an isomorphism $B(Fa,b) \cong A(a,Gb)$ that is natural in $b \in B$. Then there is a unique way to extend the assignment $F \colon obA \to obB$ to a functor $F \colon A \to B$ so that these isomorphisms are also natural in $A$. The proof is by the Yoneda lemma (see the cited notes). Now I claim that the same result is true for $V$-categories $A$ and $B$, a $V$-functor $G \colon B \to A$, and a family of isomorphisms $B(Fa,b) \cong A(a,Gb)$ in $V$ that are $V$-natural in $b \in B$. The given isomorphisms in $V$ can be used to define a canonical map in $V$ $A(a,a') \to V(B(Fa',b), B(Fa,b))$ for each $b \in B$. Not only is this canonical but it's uniquely determined if you want the iso to be $V$-natural in $A$ in the end. Moreover these maps are (extra-ordinarily) $V$-natural in $B$. What this means is that you actually get a map into the enriched end $A(a,a') \to \int_{b \in B} V(B(Fa',b), B(Fa,b))$. (These are defined in section 7.3.) Now one version of the $V$-Yoneda lemma tells you that $\int_{b \in B} V(B(Fa',b), B(Fa,b)) \cong B(Fa,Fa')$ so this map is the map you seek. In the definition of tensors I should have asked that the isomorphisms $M(v\otimes m,n)\cong V(v, M(m,n))$ were $V$-natural in $n$. (Vladimir's comment is correct.) In any case it was absolutely my intention to define tensored to mean that the hom bifunctor has a left $V$-adjoint. Apologies for the confusion!<|endoftext|> TITLE: Showing a series is convergent QUESTION [5 upvotes]: I am trying to show $$ \sum_{n \geq 0} \frac{ 3^n }{\sqrt{2^n + 10^n}} $$ converges. I try to compare as follows: $$ \frac{ 3^n }{\sqrt{2^n + 10^n}} < \frac{ 3^n }{\sqrt{10^n} } = \frac{3^n}{ \sqrt{10}^n } = \left( \frac{ 3 }{ \sqrt{10} } \right)^n $$ Since $\sqrt{10} > 3$, the series $\sum \left( \frac{ 3 }{ \sqrt{10} } \right)^n $ converges, and thus the result follows by the comparison test. Is there another method we could have used to show convergence? REPLY [4 votes]: There are other ways forward. Let's invoke the ratio test on the summand $a_n=\frac{3^n}{\sqrt{2^n+10^n}}$.
Proceeding we have $$\begin{align} \frac{a_{n+1}}{a_n}&=\frac{3^{n+1}}{\sqrt{2^{n+1}+10^{n+1}}}\,\frac{\sqrt{2^n+10^n}}{3^n}\\\\ &=3\sqrt{\frac{2^n+10^n}{2^{n+1}+10^{n+1}}}\\\\ &=\frac{3}{\sqrt{10}}\sqrt{\frac{1+\left(\frac15\right)^n}{1+\left(\frac15\right)^{n+1}}}\\\\ &\to \frac{3}{\sqrt{10}}\,\,\text{as}\,\,n\to \infty\\\\ &<1 \end{align}$$ Therefore, the ratio test guarantees that the series converges. We can use the root test with similar success. Proceeding here, we have $$\begin{align} \sqrt[n]{|a_n|}&=\frac{3}{\sqrt{10}}\sqrt[n]{\frac{1}{\sqrt{1+\left(\frac15\right)^n}}}\\\\ &\to \frac{3}{\sqrt{10}}\,\,\text{as}\,\,n\to \infty\\\\ &<1 \end{align}$$ Therefore, the root test guarantees that the series converges.<|endoftext|> TITLE: If $ \sum\frac{a_n}{n}$ converges, then $\frac{a_1+\cdots+a_n}{n}$ converges to 0? QUESTION [5 upvotes]: I'm studying the proof of the Strong Law of Large Numbers, and I think the above statement is a non-trivial one, so I want to know whether it is true or not. Thanks for the help and suggestions. REPLY [6 votes]: Let $\sum_{n=1}^\infty \frac{a_n}{n} = a > 0$, and let $\mu$ be a natural number. Then if $\varepsilon > 0$, there exists $N\in\Bbb{N}$ so that $m>N$ implies $|a-\sum_{n=1}^m\frac{a_n}{n}|< \varepsilon/4$. Then if $M > m > N$, you can show $$|\sum_{n=m+1}^M \frac{a_n}{n}| < \varepsilon/2 $$ Now we have, for each $M=\mu m+k$ (where $0\leq k < \mu$ and $m>N$), $$|A_M| = |\frac{a_1+a_2+...+a_M}{M}| \leq |\frac{a_1+a_2+...+a_m}{M}|+|\frac{a_{m+1}+...+a_M}{M}| \leq $$ $$\frac{1}{\mu}|\sum_{n=1}^m\frac{a_n}{n}| + |\sum_{n=m+1}^M \frac{a_n}{n}| < \frac{a+\varepsilon/4}{\mu}+\varepsilon/2 < \frac{a}{\mu}+\varepsilon$$ Since this works for all $\varepsilon>0$, we have that $\limsup_{M\to\infty} |A_M| \leq \frac{a}{\mu}$ for all $\mu \in \Bbb{N}$, so $\lim_{M\to\infty} A_M = 0$. EDIT: This only works if the $a_i$ eventually all have the same sign. See comments.<|endoftext|> TITLE: Torsion in homology of a subset $E \subset \Bbb R^n$ with $n≤2$ QUESTION [5 upvotes]: For $n \in \{1,2\}$, is there a subset $E \subset \Bbb R^n$ such that one of its (singular) homology groups $H_k(E)$ has an element of finite order? According to Is the fundamental group of every subset of $\mathbb{R}^2$ torsion free?, $\pi_1(E)$ is torsion-free, but $H_1(E) = \pi_1(E)/[\pi_1(E),\pi_1(E)]$ could have torsion elements. According to this link on MO (or this one), a theorem of Eda states that it is impossible to find such subsets of $\Bbb R^n$ with $n=2$ (and then also for $n=1$). [By the way, I'm not sure, from this MO thread, whether the answer is completely known for $n=3$.] I don't have a copy of this article, but the title "Fundamental group of subsets of the plane" suggests that it only proves the fact that $\pi_1(E)$ is torsion-free. I don't see how this can answer my question on homology groups. Related questions: (1), (2). Thank you for your comments! REPLY [5 votes]: I don't know about Eda's paper, but Fischer-Zastrow prove that for any subset $X$ of a closed oriented surface, $\pi_1(X)$ is (many things, but in particular) locally free: every finitely generated subgroup is free. But if there were a nonzero element $x$ with $x^n \in [\pi_1, \pi_1]$ (but $x$ not so), we could write $x^n = [g_1, h_1] \cdots [g_n, h_n]$.
This would provide a nontrivial relation on the subgroup generated by $x, [g_i, h_i]$, since $x$ cannot possibly be expressed in terms of the latter elements alone.<|endoftext|> TITLE: Applications of group theory to classical mechanics QUESTION [6 upvotes]: Today, a friend and I solved a classical mechanics problem using group theory. The problem was the following: Around a circle, there are $N$ children evenly spaced. In the center, there is a tire. Each child pulls the tire by a rope with equal force. Is the resultant force always zero? My friend associated each force vector with a complex root of unity, and using the fact that the group of $N$-th roots of unity is cyclic, showed the identity $$1 + \zeta + \zeta^2 + \cdots + \zeta^{N-1} = 0,$$ which is the same as saying that the resultant force is zero. I considered the set $\Omega$ of all permutations of the children, with a group action $\pi\colon \mathbb{Z}/N\mathbb{Z}\times \Omega \to \Omega$ by cyclic permutations. I considered the "force" function $f\colon \Omega \to \mathbb{R}^2$ which associates each system with its resultant force vector. I also considered the group action $\varphi\colon \mathbb{Z}/N\mathbb{Z} \times \mathbb{R}^2 \to \mathbb{R}^2$ by rotations of the plane. Then, I argued the following identities for all $S \in \Omega, x \in \mathbb{Z}/N\mathbb{Z}$: $$f(\pi(x, S)) = \varphi(x, f(S))$$ $$f(\pi(x, S)) = f(S)$$ which implied $f(S)$ is a fixed point of $\varphi$, and thus is always the zero vector. I never expected an application of group theory to classical mechanics (though I know about its uses in crystallography). Are there other well-known examples of this? REPLY [7 votes]: (I would leave a comment, but apparently I'm not allowed to do that yet.) If you ask a physicist what group theory is, his answer will probably come out to be "the study of symmetry". Symmetry plays a huge role in modern physics and also in classical mechanics. The modern treatment of classical mechanics, involving symplectic and Poisson manifolds, makes heavy use of groups. One of the important points where group theory comes in is that constants of motion (observable quantities of a system that do not change with time) correspond to invariance of the equations of motion under the smooth action of a group. This is called Noether's theorem and remains very important beyond classical mechanics. Well-known conservation laws like energy, momentum and angular momentum correspond to the invariance of the system in question under time translation, space translation and rotation. The symmetry group of classical mechanics is called the Galilei group. The symmetry group of relativistic physics is the Poincaré group. Modern physics is full of group theory (Lie groups are used most often): for example, the Wightman axioms for quantum field theory require one to specify an irreducible representation of the Poincaré group. The classification (due to Wigner) of (certain) irreducible unitary representations of the Poincaré group leads physicists to the classification of elementary particles according to their mass and spin. If you want to find out more about applications of group theory to mechanics, I would recommend the books by V. Arnold (Mathematical Methods of Classical Mechanics) or Marsden and Ratiu (Introduction to Mechanics and Symmetry).<|endoftext|> TITLE: Elementary problems that could be solved using homological algebra QUESTION [5 upvotes]: I'm in the rather unusual position that I know a bit of homological algebra, like how to compute $\mathsf{Tor}, \mathsf{Ext}$ or local cohomology, though I barely know more than the basics about groups, rings and modules and almost nothing about topology. I have trouble applying my knowledge to anything at all. Most "examples" of applications of homological algebra are, for me, way too hard to understand in a reasonable amount of time. Are there elementary ($=$ easy to understand) problems that I can answer with a bit of homological algebra? (Even if this is overkill.) REPLY [11 votes]: My favorite example is an application of homological methods in algebraic geometry to a question almost anyone can understand: Question. Given a sequence $A = (a_1 < a_2 < \cdots < a_n)$ of $n$ positive integers such that $\gcd(a_1,a_2,\ldots,a_n) = 1$, what is the largest natural number $g(a_1,\ldots,a_n)$ not expressible as a non-negative integer combination of $a_1,a_2,\ldots,a_n$? This question has been popularized as the "coin problem," the "stamp problem," and even the "Chicken McNugget problem." Definition. Given a sequence $A$ as above, we call the number $g(a_1,\ldots,a_n)$ the Frobenius number for the sequence $A$. Note that the existence of this number is a consequence of Schur's theorem, although we will also prove this below. Sylvester proved that if $n = 2$, we have the explicit formula $$g(a_1,a_2) = a_1 a_2 − a_1 − a_2.$$ You can see a proof here on Math.SE. On the other hand, Curtis showed that for $n > 2$ it is (in some sense) impossible to find an explicit formula, so instead, we can ask Question. Are there bounds for $g(a_1,\ldots,a_n)$? Most bounds, though, rely on some hypotheses on the numbers in $A$, such as symmetry or conditions on pairwise coprimeness, etc. (see the various results in Ramírez). The following bound has the advantage of working in all settings, and it seems to be one of the best upper bounds that people have shown: Theorem (L'vovsky). Let $a_1 < a_2 < \cdots < a_n$ be an increasing sequence of positive integers such that $\gcd(a_1,a_2,\ldots,a_n) = 1$. Put $a_0 = 0$. Then, letting $$\delta = \max_{1 \le i < j \le n}\!\left\{(a_i − a_{i−1}) + (a_j − a_{j−1})\right\},$$ we have $$g(a_1,\ldots,a_n) \le (\delta - 2)a_n.$$ What is novel about this bound is that the proof uses sheaf cohomology, and that there is no known combinatorial proof of this fact. The point is to translate this question about natural numbers into a question about vanishing of sheaf cohomology on monomial curves, and then apply general machinery about when these cohomology groups vanish due to Gruson–Lazarsfeld–Peskine (this is where the heavy homological machinery comes into play). More precisely, L'vovsky's bound for the Frobenius number is a consequence of the bound $\operatorname{reg} X \le \delta$ for the Castelnuovo–Mumford regularity of the monomial curve $$\begin{aligned} \mathbf{P}^1 &\overset{p}{\longrightarrow} \mathbf{P}^n\\ [s:t] &\longmapsto [1:t^{a_1}:t^{a_2}:\cdots:t^{a_n}] \end{aligned}$$ I was going to also post a proof of this Theorem, but it ended up being really long. I can elaborate on the proof if anyone is interested! In the meantime, you can read Ch. 4 in Eisenbud's The Geometry of Syzygies.
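(Everything in this question is very concrete computationally. The following minimal sketch is mine, not part of the original answer; it assumes Python, uses a made-up helper name `frobenius`, and computes $g(a_1,\ldots,a_n)$ by dynamic programming, stopping once $a_1$ consecutive representable integers have been seen, after which all larger integers are representable.)

```python
# Frobenius number g(a_1, ..., a_n) by dynamic programming.
from math import gcd
from functools import reduce

def frobenius(a):
    a = sorted(a)
    assert reduce(gcd, a) == 1, "need gcd(a_1, ..., a_n) = 1"
    representable = [True]  # 0 is the empty combination
    run, i, largest_gap = 0, 0, 0
    while run < a[0]:  # a run of a_1 representable integers ends the search
        i += 1
        ok = any(i >= x and representable[i - x] for x in a)
        representable.append(ok)
        run, largest_gap = (run + 1, largest_gap) if ok else (0, i)
    return largest_gap

print(frobenius([4, 7]))      # 17 = 4*7 - 4 - 7, matching Sylvester's formula
print(frobenius([6, 9, 20]))  # 43, the Chicken McNugget number
# L'vovsky's bound for (6, 9, 20): the gaps are 6, 3, 11, so delta = 11 + 6 = 17
# and g <= (delta - 2) * a_n = 15 * 20 = 300, consistent with the true value 43.
```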
To illustrate this idea, we give a proof of the existence of the Frobenius number using sheaf cohomology, which should be accessible to anyone who has read Hartshorne (I must admit that this is a bit more than just homological algebra!). Proof that $g(a_1,\ldots,a_n) < \infty$. Let $k = \overline{k}$ be an algebraically closed field. Consider the map $p\colon \mathbf{P}^1 \to \mathbf{P}^n$ defined above, with image $X$. Then, $p\colon \mathbf{P}^1 \to X$ is the normalization of $X$, and so we have the short exact sequence $$0 \longrightarrow \mathcal{O}_X \longrightarrow p_*\mathcal{O}_{\mathbf{P}^1} \longrightarrow \mathscr{Q} \longrightarrow 0$$ where $\mathscr{Q}$ is supported on $\{p(0),p(\infty)\}$ (away from these points, $p\colon \mathbf{P}^1 \to X$ is an isomorphism by using that $\gcd(a_1,\ldots,a_n) = 1$) [Hartshorne, Exc. IV.1.8]. Twisting by $\mathcal{O}_{\mathbf{P}^n}(r)$ gives the short exact sequence $$0 \longrightarrow \mathcal{O}_X(r) \longrightarrow p_*(\mathcal{O}_{\mathbf{P}^1}(ra_n)) \longrightarrow \mathscr{Q} \longrightarrow 0$$ after using the projection formula [Hartshorne, Exc. II.5.1(d)]. We can then compute (part of) the long exact sequence on cohomology [Hartshorne, III.1]: $$\require{AMScd}\begin{CD} H^0(\mathbf{P}^1,\mathcal{O}_{\mathbf{P}^1}(ra_n)) @>>> H^0(X,\mathscr{Q}) @>>> H^1(X,\mathcal{O}_X(r))\\ @| @| @|\\ k\langle t^i \mid i \le ra_n\rangle @>>> k\langle t^i \mid i \in \mathbf{N} \setminus S \rangle \oplus \cdots @>>> H^1(X,\mathcal{O}_X(r)) \end{CD}$$ where the bottom row consists of vector spaces spanned by the listed monomials, and $S$ is the subsemigroup of $\mathbf{N}$ generated by $a_1,\ldots,a_n$. The computation of $H^0(\mathbf{P}^1,\mathcal{O}_{\mathbf{P}^1}(ra_n))$ is [Hartshorne, Thm. III.5.1], and you can compute the global sections $H^0(X,\mathscr{Q})$ by looking at stalks and computing in local coordinates. The final ingredient is Serre vanishing [Hartshorne, Thm. III.5.2(b)], which states that there exists an integer $r_0$ such that for all $r \ge r_0$, the cohomology group $H^1(X,\mathcal{O}_X(r))$ vanishes. This implies that $$k\langle t^i \mid i \le ra_n\rangle \longrightarrow k\langle t^i \mid i \in \mathbf{N} \setminus S \rangle$$ is surjective for all $r \ge r_0$, and so the largest number in $\mathbf{N} \setminus S$ is at most $r_0 a_n$, that is, $g(a_1,\ldots,a_n) \le r_0a_n < \infty$. $\blacksquare$ The issue with Serre vanishing is that it gives no effective means of figuring out what this number $r_0$ is. This is why Castelnuovo–Mumford regularity is useful; see this question on MathOverflow, for example. The need for an explicit bound explains why the homological machinery of Gruson–Lazarsfeld–Peskine appears in this context.<|endoftext|> TITLE: How to calculate curvature of Earth per surface kilometer QUESTION [7 upvotes]: I was watching a video regarding Flat Earthers giving curvatures of Earth that sounded way too big, so I decided to calculate how much the surface of Earth would drop with respect to a line perpendicular to the ground per a certain distance from this point. I'll use variables as defined in my awfully drawn diagram. Let $s = 1km$. Given that the radius of Earth is 6371km, we can find $\theta = \frac sr = \frac {1km}{6371 km} = \frac {1}{6371} rad = 1.56*10^{-4} rad$ I think $h$ could be found by $h = 6371\sin(\frac{\pi}{2}) - 6371\sin(\frac{\pi}{2}-1.56*10^{-4}) = 7.84*10^{-5} km$ which converted to meters would result in a curvature of about $0.08\frac{m}{km}$.
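A quick numeric check of this computation (a throwaway sketch, not part of the derivation itself):

```python
import math

R = 6371.0            # radius of the Earth in km
s = 1.0               # distance along the surface in km

theta = s / R                        # angle in radians
h = R - R * math.cos(theta)          # drop below the tangent line, in km
# same as R*sin(pi/2) - R*sin(pi/2 - theta), since sin(pi/2 - x) = cos(x)
print(h * 1000)                      # ~0.0785 m for the first kilometre

# the drop grows quadratically in s, not linearly:
print((R - R * math.cos(2 * s / R)) * 1000)   # ~0.314 m after 2 km (4x, not 2x)
```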
EDIT Note that the formula is not linear, so in order to get the "fall" of Earth for 2 kilometres, multiplying the result by 2 is not enough. The formula is $h = 6371-6371\sin(\frac{\pi}{2} - s/6371)$ where $s$ is the distance to the object in kilometers. Did I make any mistake? This value seems quite small and I'd like to make sure. Thanks in advance. REPLY [7 votes]: Your answer is correct. See, e.g. the approximate formula given in the Wikipedia entry for horizon, which lists $d \approx 3.57 \sqrt{h}$. We see that a horizon of 1 kilometer corresponds approximately to a height of $$ (1 / 3.57)^2 = 0.0785 \text{ meters} $$ Incidentally, I would've used $\cos$ instead of $\sin$ when writing the formula. Then you get $$ h = r (1 - \cos \theta) \approx r \cdot \frac12 \cdot \left(\frac1r\right)^2 $$ where $r$ is the radius of the earth, and $\theta = 1/r$ is the angle in radians. The approximation uses the Taylor expansion $\cos\theta \approx 1 - \frac12 \theta^2 + \ldots$. So you get immediately $$ h \approx \frac{1}{2r} $$ when $h$ and $r$ are given in the same units. Finally, let me give a remark concerning the discrepancy with this comment. The TL;DR is basically that the quantity $h$ is not linear as a function of the distance. Since the approximation is done by approximating the circle by a parabola, we actually have that the approximate height scales like the square of the distance to the horizon. For a quadratic relationship $$ 0.0785 \text{ meters } \times \frac{1.6093^2}{1^2} \approx 0.20 \text{ meters} \approx 8 \text{ inches } $$ we see that the 8 centimeter per kilometer estimate is actually compatible with the 8 inches per mile estimate.<|endoftext|> TITLE: $R ≅ R/I$ prove that for any two-sided ideals $A$ and $B$ we have $A⊆B $ or $B⊆A$ QUESTION [6 upvotes]: Assume that $R$ is a ring such that for any proper two-sided ideal $I$ of $R$ we have $R ≅ R/I$. Prove that for any two-sided ideals $A$ and $B$ we have $A⊆B$ or $B⊆A$. I tried a proof by contradiction, using the property that $A∩B$ is also an ideal, but I could not find anything useful. REPLY [4 votes]: Let me state a more general problem and then explain the solution. Problem. Show that if $A$ is a nontrivial algebraic structure that is isomorphic to each of its nontrivial quotients, then the lattice of congruences on $A$ is a well order. [Here, "$A$ is nontrivial" means $|A|>1$. A congruence on $A$ is a kernel of a homomorphism.] [Before giving the solution, here is the terminology: an algebraic structure is simple if its lattice of congruences is a 2-element chain. It is pseudosimple if it is not simple, but it is isomorphic to each of its nontrivial quotients. So the problem is to show that if an algebra is simple or pseudosimple, then its lattice of congruences is a well order.] Solution. Choose $a\neq b$ in $A$ and let $\alpha$ be a congruence on $A$ maximal for separating these elements. Then $A/\alpha$ is a nontrivial quotient of $A$, since it contains the distinct elements $a/\alpha$ and $b/\alpha$, hence $A\cong A/\alpha$. By the maximality of $\alpha$, $A/\alpha$ has a smallest nonzero congruence, namely the one that identifies $a/\alpha$ and $b/\alpha$. This implies that the zero congruence of $A/\alpha$ is completely meet irreducible. Since $A\cong A/\alpha$, the zero congruence of $A$ is also completely meet irreducible. Since $A\cong A/\theta$ whenever $|A/\theta|>1$, it follows that the zero congruence on any nontrivial quotient of $A$ is completely meet irreducible.
By the Correspondence Theorem, every proper congruence $\theta$ on $A$ is completely meet irreducible. This already implies that the lattice of congruences of $A$ is a well order. First, it must be a linear order, since if it had incomparable elements $\beta$ and $\gamma$, then $\theta=\beta\cap\gamma$ would be a proper congruence that is not completely meet irreducible. Next, it has the DCC, since if $(\delta_n)_{n\in\omega}$ is a strictly decreasing $\omega$-chain in the lattice of congruences, then $\theta=\bigcap_{n\in\omega} \delta_n$ is not completely meet irreducible. A linear order with the DCC is a well order. Let me respond to the questions of Batominovski: (Would $A\cap B$ be the maximal congruence $C$ that separates $A$ and $B$?) No, we are separating elements of $R$, not congruences/ideals of $R$. In ring notation, choose $a\neq 0$ in $R$, and then let $A$ be an ideal of $R$ that is maximal for the property that $a\notin A$. Such $A$ exists by Zorn's Lemma. [Now I am taking $R$ for my algebra, $a$ and $0$ for my two elements to be separated, and congruence modulo $A$ for the separating congruence.] (How do we show that the zero congruence of $R$ is completely meet irreducible?) It suffices to show that $R$ has a nontrivial quotient that has completely meet irreducible zero congruence, since $R$ is isomorphic to any such ring. So take a subdirectly irreducible quotient. This is what is being described in the first 2-3 sentences of the Solution. If $a\neq 0$ in $R$ and $A$ is maximal in $R$ for the property $a\notin A$, then $R/A$ is nontrivial, since $\bar{a}\neq \bar{0}$ in this quotient, yet any nonzero ideal of $R/A$ contains $\bar{a}$ by the maximality of $A$. Thus the complete meet of nonzero ideals of $R/A$ also contains $\bar{a}$, which means this complete meet is not zero. This shows that the zero ideal of $R/A$ is completely meet irreducible. (Do you know any ring $R$ with the required property?) As Tsemo Aristide pointed out, if $R$ has a maximal ideal and the required property, then $R$ must be simple, not pseudosimple. This will happen if $R$ has an identity element, as rschwieb has already mentioned. Hence any simple ring has the required property, and these are the only examples when $R$ has a maximal ideal. As for pseudosimple rings, I do not know if there are any noncommutative examples. But it is a not-too-difficult exercise to show that a commutative pseudosimple ring must be a ring with zero multiplication and pseudosimple underlying additive group, and therefore any commutative pseudosimple ring is isomorphic to a Prüfer group $\mathbb Z_{p^\infty}$ equipped with zero multiplication. The pseudosimple commutative semigroups are classified in Schein, B. M., Pseudosimple commutative semigroups, Monatshefte für Mathematik 91 (1981), no. 1, 77-78, and pseudosimple lattices with congruence lattices of every allowable order type are constructed in Grätzer, G.; Schmidt, E. T., Two notes on lattice-congruences, Ann. Univ. Sci. Budapest. Eötvös Sect. Math. 1 (1958), 83-87.<|endoftext|> TITLE: What is the remainder when the product of the primes between 1 and 100 is divided by 16? QUESTION [12 upvotes]: The product of all the prime numbers between 1 and 100 is equal to $P$. What is the remainder when $P$ is divided by 16? I have no idea how to solve this, any answers? REPLY [10 votes]: If the product of the odd primes is $8k+r$ then the product of all of them is $16k+2r$. So we only need to work $\bmod 8$. We only need to find primes with residues $3,5$ and $7$.
We do it as follows: $$\begin{pmatrix} 3 && 5 && 7\\ 11 && 13 && 15 \\ 19 && 21 && 23\\ 27 && 29 && 31\\ 35 && 37 && 39\\ 43 && 45 && 47\\ 51 && 53 && 55\\ 59 && 61 && 63\\ 67 && 69 && 71\\ 75 && 77 && 79\\ 83 && 85 && 87\\ 91 && 93 && 95\\ 99\\ \end{pmatrix}$$ Then take out non-prime multiples of $3,5,7$ $$ \require{cancel} \begin{pmatrix} 3 && 5 && 7\\ 11 && 13 && \cancel{15} \\ 19 && \cancel{21} && 23\\ \cancel{27} && 29 && 31\\ \cancel{35} && 37 && \cancel{39}\\ 43 && \cancel{45} && 47\\ \cancel{51} && 53 && \cancel{55}\\ 59 && 61 && \cancel{63}\\ 67 && \cancel{69} && 71\\ \cancel{75} && \cancel{77} && 79\\ 83 && \cancel{85} && \cancel{87}\\ \cancel{91} && \cancel{93} && \cancel{95}\\ \cancel{99}\\ \end{pmatrix}$$ The only column with an odd number of remaining elements is the $3\bmod 8$ column, so the product of the odd primes is $\equiv 3 \pmod 8$, and the answer is $2\cdot 3=6$.<|endoftext|> TITLE: Proof (claimed) for Riemann hypothesis on ArXiv QUESTION [49 upvotes]: Has anyone noticed the paper On the zeros of the zeta function and eigenvalue problems by M. R. Pistorius, available on ArXiv? The author claims a proof of RH, and also a growth condition on the zeros. It was posted two weeks ago, and I expected it would have been shot down by now. Has there been any discussion or attempt at verification of this preprint? REPLY [51 votes]: I had a go reading through the paper and I think I found the error. The main argument in the paper can be summarized as follows: The Riemann $\Xi$-function $\Xi(t) = \xi\left(\frac{1}{2} + it\right)$ satisfies $\Xi(t) = \Xi(0)\nu_t(\pi/2)$ where $$\nu_t(x) = \int_0^\infty \cos(t(y+\cos(x)))\Phi(y)\,{\rm d}y$$ and $\Phi$ is related to the Jacobi $\theta$-function. This is a result by Riemann and holds true. The author then notes that when $t$ is such that $\Xi(t) = 0$ then $\nu_t(x)$ satisfies the Sturm–Liouville (SL) problem $$\left(\frac{\nu_t'(x)}{\sin(x)}\right)' + t^2\sin(x)\nu_t(x) = 0,~~~\nu_t'(0) = 0,~~~\nu_t(\pi/2) = 0$$ This is also true. The proof is completed by appealing to a theorem that says that this problem only has real eigenvalues. If this holds then it follows that $\Xi(t) = 0\implies t\in\mathbb{R}$ which is the Riemann hypothesis. The error is in the last step. It is indeed true that a regular SL problem only has real eigenvalues; however, this is not a regular SL problem as $\frac{1}{\sin(x)}$ has a pole at $x=0$. In this case there is no guarantee that the eigenvalues have to be real and we can show this explicitly with a simple counter-example: the function $\nu_t(x) = \sin(t\cos(x))$ satisfies the SL problem above for any complex number $t$. Given how much interest this question has generated, which I take to mean that many people thought (or perhaps just hoped) this was a potentially viable proof, I think it’s useful to talk a bit about why it had to be wrong. Personally I have only been in academia for $\sim 10$ years, but I have already managed to see $\sim 50$ papers$^1$ like this where a major result is proven in a few pages using elementary methods. This is just another one. One soon learns that papers like this are never correct, and the reason is often this: if it could have been solved this way then it would have been solved this way many, many years ago - the techniques used are just too simple. As for some useful pointers for how to judge for yourself if a paper like this has the potential for being correct, I recommend Scott Aaronson's "Ten Signs a Claimed Mathematical Breakthrough is Wrong". $^1$ For some examples see the MSE questions [1], [2], [3].<|endoftext|> TITLE: Is $1+1 = 2$ true in any base?
QUESTION [31 upvotes]: So this may be one of the stupidest questions ever, but is the identity $$1 + 1 = 2$$ valid in any base? It seems so, since it's like, for a given base $b$ $$1 = 000\ldots 01 = (0\cdot b^n) + \cdots + (1\cdot b^0) = b^0 = 1$$ Which means $1 + 1 = 2$ no matter what base we use. Is that right? Or is there something I'm missing? REPLY [5 votes]: There are three forces at play here. First comes our natural understanding of numbers. $2$ is a specific natural, or real, number. We understand it intuitively, and unambiguously. This is the abstraction of the idea of having two apples, two sons, or two cats: examples of collections with two objects. As such, when we think of the number $2$, it is literally defined to be $1+1$. Then comes the representation of a number in a particular base. This is now a question of how we dress the abstract number into a slightly more tangible form. Here $2$ is a relatively "bad" example, as most [natural-]bases are large enough that $2$ is just $2$ again. But think of the number ten. In decimal, this is $10$; in ternary this is $101$; in octal, $12$. All of these are different strings which represent the same number. How is this possible? Well, when we change base, we change how we interpret the string of symbols. And finally comes the purely syntactical evaluation of symbols. This takes the last point and pushes it to $11$.(1) We forget that the symbols even represent the abstract quantity of how many hands a "standard" person should have. We only know that $1$, $2$ and $+$ are symbols in a mathematical language. As such we are in fact allowed to interpret them to mean pretty much anything that we want. You see this with $\pi$, which can denote a homomorphism, a projection, a function, a constant real number; but you see it less often with $2$ or $+$. Nevertheless, we are allowed to interpret those symbols to our liking, and we can easily concoct interpretations where $1+1$ and $2$ are not the same object. In short, if you consider them as abstract numbers, they are independent of their representations, and then $1+1=2$, always. If you are asking about the interpretation of the symbols as numbers, then the answer is a qualified "usually". If you are asking in general about interpreting the symbols $1,2$ and $+$... then this is a resounding "not enough context". (1) This can be taken in any base you prefer. Even in base $\omega$.<|endoftext|> TITLE: Rooks attacked by at most one other rook QUESTION [7 upvotes]: Some rooks are placed on an $n\times n$ board. Each rook is attacked by at most one other rook, and every unoccupied cell is attacked by some rook. What is the smallest $k$ such that no matter how we set up the board, every $k\times k$ subboard contains at least one rook? $k=\lfloor n/2\rfloor$ is too small, because we can take the board with a rook on each square of the main diagonal. In this board the bottom-left (or top-right) $\lfloor n/2\rfloor\times\lfloor n/2\rfloor$ subboard does not contain any rook. Is it true that every $(\lfloor n/2\rfloor+1)\times (\lfloor n/2\rfloor+1)$ subboard must contain some rook? REPLY [4 votes]: $\lfloor n/2 \rfloor + 1$ is not enough. As @bof stated, you can have a $2n/3 \times 2n/3$ subboard without any rooks: take this empty subboard in the bottom left corner. For the $n/3$ rows above the subboard, starting from the top left corner, put $2$ rooks in each row in a staggered fashion, ending above the top right corner of the empty subboard.
Do a similar thing with the $n/3$ columns to the right of the empty subboard. Then every square is attacked by a rook while this large subboard remains empty. However, this turns out to be the worst case. That is, every $(\lfloor 2n/3 \rfloor +1) \times (\lfloor 2n/3 \rfloor +1)$ subboard must contain at least one rook. To prove this, suppose we had an empty $k \times k$ subboard for some $k > 2n/3$. We first show that every row must have at least one rook. If not, then there is a row with no rooks. Every square in this row must be attacked by some rook in the same column. Now consider the squares of this row that are in the same $k$ columns as the empty $k \times k$ subboard. The attacking rooks must therefore be in the $n-k$ rows outside this empty subboard. Since each row can have at most $2$ rooks in it, there are at most $2(n-k) < k$ attacking rooks here, so some square in the empty row is left unattacked. This gives a contradiction. Hence there is at least one rook in every row. By symmetry, there is also at least one rook in every column. Of course, there can be at most $2$ rooks in any column. Let $r$ be the number of rows with $2$ rooks. The total number of rooks is then $n+r$. By double-counting, there are also $r$ columns with two rooks in them. Now these $n+r$ rooks cannot be in the $k \times k$ subboard. Hence they are either in the $n-k$ rows outside the subboard, or the $n-k$ columns outside the subboard. The $n-k$ rows can have at most $\min(n-k, r)$ rows among them with $2$ rooks, and hence contain at most $n-k + \min(n-k, r)$ rooks in total. By symmetry, the same is true of the $n-k$ columns. Hence the total number of rooks is at most $$ 2(n-k) + 2 \min(n-k, r) \le 2(n-k) + (n-k) + r = 3(n-k) + r. $$ (Note that this would count twice any rook that is in both a different row and column from the subboard, but since we are getting an upper bound, it is okay to overcount some rooks.) However, we know there are $n+r$ rooks. Thus $n+r \le 3(n-k)+r$, which simplifies to $k \le 2n/3$, a contradiction. Thus if $k > 2n/3$, every $k \times k$ subboard must contain at least one rook.<|endoftext|> TITLE: Cholesky decomposition when deleting one row and one column. QUESTION [8 upvotes]: I've thought about this problem for days but could not find a good answer. Given the Cholesky decomposition of a symmetric positive semidefinite matrix $A = LL^T$. Now, suppose that we delete the $i$-th row and the $i$-th column of $A$ to obtain $A'$ ($A'$ is also a symmetric positive semidefinite matrix), and the Cholesky decomposition of this new matrix is $A' = L'(L')^T$. Is there any efficient way to obtain $L'$ from $L$? Note: In case we delete the last row and the last column of $A$, the problem becomes simple: we just delete the last row and last column of $L$ to obtain $L'$ (a quick numeric check of this easy case is sketched below). Other cases, for me, are not trivial. Thank you so much.
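For concreteness, a minimal NumPy check of the easy last-row/last-column case, assuming randomly generated positive definite data:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B @ B.T                        # symmetric positive definite (almost surely)
L = np.linalg.cholesky(A)          # lower triangular, A = L L^T

# Deleting the LAST row and column of A corresponds to truncating L:
A_trunc = A[:-1, :-1]
assert np.allclose(np.linalg.cholesky(A_trunc), L[:-1, :-1])
```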
REPLY [5 votes]: Let $$C\equiv \left[\begin{array}{ccc}C_{11} & C_{12} & C_{13} \\ C_{12}^T & C_{22} & C_{23}\\C_{13}^T & C_{23}^T & C_{33} \end{array}\right] = \left[\begin{array}{ccc}L_{11} & 0 & 0 \\ L_{21} & l_{22} & 0\\L_{31} & L_{32} & L_{33} \end{array}\right]\left[\begin{array}{ccc}L_{11} & 0 & 0 \\ L_{21} & l_{22} & 0\\L_{31} & L_{32} & L_{33} \end{array}\right]^T$$ be the Cholesky factorization of the matrix $C$ and let $$\bar{C}\equiv \left[\begin{array}{ccc}C_{11} & 0 & C_{13} \\ 0 & 0 & 0\\C_{13}^T & 0 & C_{33} \end{array}\right] = \left[\begin{array}{ccc}\bar{L}_{11} & 0 & 0 \\ \bar{L}_{21} & \bar{l}_{22} & 0\\\bar{L}_{31} & \bar{L}_{32} & \bar{L}_{33} \end{array}\right]\left[\begin{array}{ccc}\bar{L}_{11} & 0 & 0 \\ \bar{L}_{21} & \bar{l}_{22} & 0\\\bar{L}_{31} & \bar{L}_{32} & \bar{L}_{33} \end{array}\right]^T$$ be the Cholesky factorization of the matrix $\bar{C}$ obtained from the matrix $C$ by zeroing the $i$-th row and column. Then $\bar{L}_{11} = L_{11}$, $\bar{L}_{21} = 0$, $\bar{L}_{31} = L_{31}$, $\bar{l}_{22} = 0$, $\bar{L}_{32} = 0$ and $$\begin{align}C_{33} &= L_{31}L_{31}^T + L_{32}L_{32}^T + L_{33}L_{33}^T \\ C_{33} &= L_{31}L_{31}^T + \bar{L}_{33}\bar{L}_{33}^T\end{align}$$ Thus $$\bar{L}_{33}\bar{L}_{33}^T = L_{33}L_{33}^T + L_{32}L_{32}^T\tag{1}$$ Since $L_{32}$ is a vector, $(1)$ gives a rank-1 update of the Cholesky factorization $L_{33}L_{33}^T$. Now removing the $i$-th row and column of $\bar{C}$ is trivial. The rank-1 update of a Cholesky factorization is a standard procedure and can be done efficiently in $O(k^2)$ operations, where $k$ is the size of the matrix $L_{33}$.<|endoftext|> TITLE: What shape does a "cycloid egg" trace as it rolls on a flat surface? QUESTION [5 upvotes]: A point on a circle rolling on a flat surface traces a cycloid. What curve does a point on a rolling "cycloid egg" trace? I tried to draw what it would trace, but failed. Also, I think the more interesting question here is: what shape with a point on its edge traces a half circle as it rolls on a flat surface? REPLY [5 votes]: Identify the Euclidean plane $\mathbb{R}^2$ with the complex plane $\mathbb{C}$. Given any geometric shape $K$ lying in the upper half plane, touching the real axis at the origin, and nice enough that the following description makes sense. Let $\gamma_K : \mathbb{R} \to \partial K \subset \mathbb{C}$ be a parametrization of $\partial K$, the boundary of $K$, by arc length $s$ subject to the initial condition: $\gamma_K(0) = 0$ and $\gamma_K'(0) = 1$. When we roll $K$ along the real axis in the positive $x$-direction, the roulette, the locus of the point on $\partial K$ originally at the origin, will be given by the formula $$z_K(s) = s - \frac{\gamma_K(s)}{\frac{d\gamma_K(s)}{ds}}\tag{*1}$$ As an example, consider the case where $K$ is the unit disk $D$. We can arc-length parametrize $\partial D$ as $$\gamma_D(\theta) = i(1 - e^{i\theta})$$ The corresponding roulette will be a cycloid given by the formula: $$z_D(\theta) = \theta - \frac{i(1-e^{i\theta})}{e^{i\theta}} = \theta -i( e^{-i\theta} - 1)\tag{*2}$$ When $K$ is a "cycloid egg" $E$ corresponding to a unit circle, with its bottom touching the origin, we can use $(*2)$ to work out the following parametrization of $\partial E$: $$\gamma_E(\theta) = \begin{cases} \theta - i(e^{i\theta} - 1), & \theta \in [-\pi,\pi]\\ 2\pi - \theta + i(e^{i\theta} + 3),&\theta \in [ \pi, 3\pi] \end{cases} $$ For $\theta$ outside $[-\pi,3\pi]$, we can extend this parametrization by periodicity.
Notice $$\frac{d\gamma_E(\theta)}{d\theta} = \begin{cases} +(1 + e^{i\theta}), & \theta \in [-\pi,\pi]\\ -(1+e^{i\theta}), & \theta \in [ \pi,3\pi] \end{cases} = 2\left|\cos\frac{\theta}{2}\right|e^{i\frac{\theta}{2}} $$ The arc-length $s$ for $\partial E$ is $$s(\theta) = \int_0^\theta 2\left|\cos\frac{t}{2}\right| dt = \begin{cases} 4\sin\frac{\theta}{2}, &\theta \in [-\pi,\pi]\\ 8 - 4\sin\frac{\theta}{2},&\theta \in [\pi,3\pi] \end{cases} \quad\text{ and }\quad \frac{d\gamma_E(\theta)}{ds(\theta)} = e^{i\frac{\theta}{2}} $$ Substituting this back into $(*1)$, we get $$z_E(\theta) = \begin{cases} 2\sin\frac{\theta}{2} - \theta e^{-i\frac{\theta}{2}},&\theta \in [-\pi,\pi]\\ 8 - 4\sin\frac{\theta}{2} - (2\pi - \theta + 3i)e^{-i\frac{\theta}{2}} - i e^{i\frac{\theta}{2}},&\theta \in [\pi,3\pi] \end{cases} $$ For $\theta \not\in [-\pi,3\pi]$, pick an integer $N$ such that $\theta_0 = \theta - 4\pi N \in [-\pi,3\pi]$, then periodicity of $\gamma_{E}$ implies $$z(\theta) = z(\theta_0 + 4\pi N) = z(\theta_0) + 16N$$ In terms of real coordinates, we have $$(x(\theta),y(\theta)) = \begin{cases} \left( 2\sin\frac{\theta}{2} - \theta\cos\frac{\theta}{2}, \theta\sin\frac{\theta}{2}\right), &\theta \in [-\pi,\pi]\\ \left( 8 - 6\sin\frac{\theta}{2} + (\theta-2\pi)\cos\frac{\theta}{2}, -4\cos\frac{\theta}{2} - (\theta-2\pi)\sin\frac{\theta}{2} \right), & \theta \in [\pi, 3\pi] \end{cases} $$ and similarly, $$x(\theta) = x(\theta_0) + 16N\quad\text{ and }\quad y(\theta) = y(\theta_0)$$ for $\theta = \theta_0 + 4\pi N \not\in [-\pi, 3\pi]$, $\theta_0 \in [-\pi,3\pi]$ and $N \in \mathbb{Z}$. Following is a picture of the roulette of the "cycloid egg" for $\theta \in [0,4\pi]$ (figure omitted). As one can see, qualitatively it is very similar to the ordinary cycloid except it is flattened at the top. Aside from that, there doesn't seem to be anything special.<|endoftext|> TITLE: Does there exist a real valued function $f$ such that $\lim_{x\rightarrow q} f(x) = \infty$ for all $q\in\mathbb{Q}$ QUESTION [6 upvotes]: Does there exist a real valued function $f$ such that $\lim_{x\rightarrow q} f(x) = \infty$ for all $q\in\mathbb{Q}$? My answer is no. If $f$ exists, then let us define $$g(x) = \left\{ \begin{array}{l l} \frac{1}{1+|f(x)|} & \quad \text{if } x\in \mathbb{Q}^c\\ 0 & \quad \text{if } x\in \mathbb{Q}\\ \end{array} \right. \\$$ and we see $g$ is continuous only on rational numbers, which contradicts the fact that the set of continuity points is a $G_\delta$ set. Is my solution correct? And do you guys know if there is any good direct argument? REPLY [5 votes]: Your argument is fine. Here is a direct way to show that the function does not exist: So suppose the map exists and define $$ A_n=\{ x\in \Bbb{R} : \exists \delta_x>0 : 0<|u-x|<\delta_x \Rightarrow f(u)>n \}.$$ Then $A_n$ is open and dense (since it contains all the rationals). Define now: $$ B_n=\{ x\in \Bbb{R} : \exists \epsilon_x>0 : |u-x|<\epsilon_x \Rightarrow f(u)>n\} .$$ $B_n$ is also open. And in fact also dense. To see this note that when $x\in A_n$ then $(x-\delta_x,x)$ and $(x,x+\delta_x)$ are subsets of $B_n$. So the closure of $B_n$ contains $A_n$ which was dense. Now, by Baire the intersection of $B_n$ is non-empty.
But for every $y\in \cap_n B_n$ we have $f(y)>n$ for all $n$, so $f(y)=+\infty$ and the map was not real-valued in the first place.<|endoftext|> TITLE: Linear Algebra Versus Functional Analysis QUESTION [62 upvotes]: As it is mentioned in the answer by Sheldon Axler in this post, we usually restrict linear algebra to finite dimensional linear spaces and study the infinite dimensional ones in functional analysis. I am wondering what those parts of the theory in linear algebra are that restrict it to finite dimensions. To clarify myself, here is the main question Question. What are the main theorems in linear algebra that are valid just for finite dimensional linear spaces and are propagated through the rest of the theory, being used to prove the other theorems? Please note that I want to know the main theorems that are valid just for finite dimensions, not all of them. By main, I mean the minimum number of theorems of this kind such that all other such theorems can be concluded from these. REPLY [6 votes]: This is mainly a comment on the answer given by avs, but perhaps too long, so I post it as an answer too. I would add that when $V$ is infinite-dimensional and carries a topology, there are two different notions of the dual vector space for it: the algebraic dual of $V$, consisting of all linear functionals on $V$, and the topological dual of $V$, consisting of the bounded (continuous) functionals on $V$. Both of these duals are usually denoted by $V^*$ and you need to distinguish them from the context. The algebraic dual is never isomorphic to $V$ when $V$ is infinite dimensional. Thus we have the following classification: $V$ is finite-dimensional if and only if it is isomorphic to its algebraic dual $V^*$.<|endoftext|> TITLE: Properties of the A-transpose-A matrix QUESTION [11 upvotes]: I believe that $A^TA$ is a key matrix structure because of its connection to variance-covariance matrices. In Professor Strang's linear algebra lectures, "A-transpose-A" - with this nomenclature, as opposed to $X'X$, for example - is the revolving axis. Yet, it is not easy to find on a quick Google search a list of its properties. I presume that part of the reason may be that they are shared by variance-covariance matrices. But I'd like to confirm this (does it have identical properties to a var-cov matrix?), and have the list easily available from now on here at SE-Mathematics. Just to not shy away from the initial effort, here is what I think I have so far: symmetry; positive semidefiniteness; real, nonnegative eigenvalues (all positive exactly when $A$ has full column rank); nonnegative trace (the trace is the sum of the eigenvalues); nonnegative determinant (the determinant is the product of the eigenvalues); nonnegative diagonal entries; orthogonal eigenvectors (**); diagonalizability as $Q\Lambda Q^T$; the possibility of a Cholesky decomposition; the rank of $A^TA$ is the same as the rank of $A$; $\ker(A^TA)=\ker(A)$. (**) The eigenvectors of A-transpose-A form the matrix $V$ in the singular value decomposition (SVD) of $A,$ while the square roots of the eigenvalues of A-transpose-A are the singular values of the SVD. Similarly, the eigenvectors of A-A-transpose $AA^\top$ include the columns in the matrix $U$ of the SVD of $A.$ The importance of this is exemplified in the fact that SVD can be used to solve least squares regression by computing the Moore-Penrose pseudo-inverse $A^\dagger = V\Sigma^\dagger U^*,$ although the QR decomposition is a more expedient computational method. There is a nice post on the topic here.
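Since all of the claims above are finite-dimensional and concrete, they can be spot-checked numerically. A minimal NumPy sketch, assuming a random tall matrix $A$ with full column rank:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((7, 4))      # tall, full column rank almost surely
G = A.T @ A

assert np.allclose(G, G.T)                        # symmetry
w, Q = np.linalg.eigh(G)                          # eigendecomposition of G
assert np.all(w > 0)                              # PSD; positive here since rank A = 4
assert np.isclose(np.trace(G), w.sum())           # trace = sum of eigenvalues
assert np.isclose(np.linalg.det(G), w.prod())     # det = product of eigenvalues
assert np.allclose(Q @ np.diag(w) @ Q.T, G)       # G = Q Lambda Q^T, Q orthogonal
np.linalg.cholesky(G)                             # Cholesky factorization exists
assert np.linalg.matrix_rank(G) == np.linalg.matrix_rank(A)

# SVD connection: singular values of A are square roots of eigenvalues of A^T A
U, s, Vt = np.linalg.svd(A, full_matrices=False)
assert np.allclose(np.sort(s**2), w)
```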
REPLY [5 votes]: Yes, it has all the properties of a covariance matrix because it is one. You can define a multivariate normal distribution for which $A^T A$ is the covariance matrix.<|endoftext|> TITLE: Disintegration of Haar measures QUESTION [5 upvotes]: Suppose I have a locally compact group $G$ and a quotient map $q:G\to G/N$. Is it true that for every Borel-measurable function $f : G → [0, +∞]$, $$\int_{G} f(g) \, \mathrm{d} \mu_G (g) = \int_{G/N} \int_{N} f(gn) \, \mathrm{d} \mu_{N} (n) \mathrm{d} \mu_{G/N} (gN),$$ where $\mathrm{d} \mu_{G}$, $\mathrm{d} \mu_{N}$ and $\mathrm{d} \mu_{G/N}$ are the Haar measures on $G$, $N$ and $G/N$ respectively? In particular, is it true that if a subset $A\subseteq G$ is such that its intersection with $\mathrm{d}\mu_{G/N}$-almost every coset $gN$ is of full measure in $gN$ (w.r.t. the measure on $gN$ obtained by shifting the measure of $N$, which is independent of the choice of $g$), then $A$ is of full measure in $G$? (This is implied by the equation above by taking $f$ to be the characteristic function of $A$.) I wanted to use the disintegration theorem in Wikipedia, but I'm not sure if it applies here. I'm not sure I understand the definition of a Radon space and I don't know which locally compact groups satisfy it. I know of a more specific disintegration result, which appears in many/most introductions to Haar measures (e.g. Raghunathan's book), but it is only stated for continuous $f$ with compact support. I suppose it's not hard to get rid of the continuity assumption by using some Luzin argument (although I am not sure how to do it myself), but the compact support bothers me. In any case, if this is not true for general locally compact groups, for which groups is it true? I have a reference for second-countable compact groups (Halmos's book Measure Theory; his definitions are a bit outdated, but coincide with the modern ones for second-countable compact groups). I don't mind assuming separability. Compactness is not awful either, but I prefer not to assume metrisability. Thanks. REPLY [2 votes]: The identity $$\int_{G} f(g) \, \mathrm{d} \mu_G (g) = \int_{G/N} \int_{N} f(gn) \, \mathrm{d} \mu_{N} (n) \mathrm{d} \mu_{G/N} (gN),$$ does not hold under the conditions you mention. You need some extra conditions. For example, only assuming that $f : G \to [0,\infty]$ is Borel measurable does not imply that $f$ is $\mu_G$-integrable, i.e., it might be that $\int_{G} f(g) \, \mathrm{d} \mu_G (g) = \infty$. Also, the quotient $G / N$ should be locally compact in order to define a Haar measure on it. This leads to the following result: Let $G$ be a locally compact group and let $N \subset G$ be a closed subgroup. Suppose two of the Haar measures on $G$, $N$ and $G / N$ are given. Then there exists a unique third measure such that $$\int_{G} f(g) \, \mathrm{d} \mu_G (g) = \int_{G/N} \int_{N} f(gn) \, \mathrm{d} \mu_{N} (n) \mathrm{d} \mu_{G/N} (gN),$$ for all $f \in L^1 (G)$. This result is well-known, and is often called Weil's formula or the quotient integral formula. It can be looked up in any book on abstract harmonic analysis. There are also some generalisations of this result and related results. In case you want to know more about this, I strongly recommend looking into Reiter's Classical Harmonic Analysis and Locally Compact Groups. In chapter 8 there is an extensive discussion of quasi-invariant quotient measures.<|endoftext|> TITLE: Real Analysis advanced book suggestion QUESTION [8 upvotes]: I want to self-study real analysis.
So far, I have finished the first seven chapters of Baby Rudin (up to and including sequences and series of functions) and now want to proceed to more advanced books. I have a couple of options, including Stein & Shakarchi, Folland, Royden, Rudin's Real & Complex Analysis, and Kolmogorov–Fomin. Among these five (I'm also happy to hear further recommendations), which are more accessible and have a better treatment of the material? I'm especially thinking of the first three, so a comparative answer for the first three books would make me really happy. Any help is appreciated. Thank you! REPLY [3 votes]: I'm studying real analysis in Ukraine, so I use a local set of books. The main one, which I found at home when I was a child and am still hoping to read completely someday, is the fundamental, very detailed and huge (more than two thousand pages in three volumes) book "Differential and Integral Calculus" by Grigorii Fichtenholz. This is a famous book for our students, and, according to Wikipedia, the book was translated, among others, into German, Chinese, and Persian; however, an English translation has still not been made. Fichtenholz's books about analysis are widely used in Middle and Eastern European as well as Chinese universities due to their exceptionally detailed and well-ordered presentation of material about mathematical analysis. For unknown reasons, these books do not have the same fame in universities in other areas of the world. Another book is "Mathematical Analysis" by A.Ya. Dorogovtsev, with a more modern (1993-4) and brief (only two small volumes with a bit more than six hundred pages in total) approach.<|endoftext|> TITLE: When the tensor product of two module elements is nonzero QUESTION [5 upvotes]: What are the main cases in which we can say that $a \otimes b \neq 0 \in A \otimes B$, where $A$ and $B$ are $R$-modules? It works for nonzero elements in free modules over an integral domain. Additional Question: What can we say about when all tensors are elementary? It is true when one of the factors is cyclic... is that the only reliable principle? REPLY [2 votes]: It is necessary and sufficient that there exist a bilinear map $A\times B\to M$, for some other module $M$, under which $a\otimes b$ has nonzero image.<|endoftext|> TITLE: Is the limit of a convergent sequence always a limit point of the sequence or the range of the sequence? QUESTION [5 upvotes]: In this video lecture (Real Analysis, HMC, 2010, by Prof. Su), Professor Francis Su says (around 54:30) that "If a sequence $\{p_n\}$ converges to a point $p$ it does not necessarily mean $p$ is a limit point of the range of $\{p_n\}$." I'm not sure how that can hold (except in the case of a constant sequence). I'm not able to understand the difference between the set $\{p_n\}$ and the range of $\{p_n\}$ (which I understand is the set of all values attained by $p_n$). As per my understanding they're the same (except that $\{p_n\}$ might contain some repeated values which the range of $p_n$ won't, e.g. the range of the sequence {1/2,1/2,1/2,1/2,1/3,1/3,1/3,1/3,1/4,1/4,...} would be {1/2,1/3,1/4,...}, or that of the constant sequence {1,1,1,...} would be {1}). Based on this understanding, my reasoning is as follows: By the definition of a convergent sequence $\{p_n\}$ converging to $p$, for every $\epsilon \gt 0$ we can find an infinite number of terms of $\{p_n\}$ which lie at a distance less than $\epsilon$ from $p$, i.e. within an $\epsilon$-neighborhood of $p$.
Hence every $\epsilon$-neighborhood of $p$ contains an infinite number of points of the set $\{p_n\}$ other than $p$ itself. Hence $p$ is a limit point of $\{p_n\}$. Now this would fail only in the case of a constant sequence, which converges, but where a neighborhood of its limit need not contain any points of the sequence other than the limit itself. Other than this special case, I cannot think of any situation where the limit of a convergent sequence is not also a limit point of the sequence. Can anyone help? Thanks in advance. REPLY [2 votes]: Another [counter]example. Define a sequence in $(\mathbb{R},d)$ such that $a_{n} := 5-n$ for $n \leq 5$ and $a_{n} := 0$ for all $n > 5$. This sequence converges to 0 (zero). The range of this sequence, viewed as a function from the natural numbers to the real numbers, is the following set: $A=\{4,3,2,1,0\}$. 0 (zero) is not a limit point of this set, since any $0 < r < 1$ will produce a neighborhood around 0 (zero) that does not include other points of the set $A$.<|endoftext|> TITLE: Proof of $\sup AB=\max\{\sup A\sup B,\sup A\inf B,\inf A\sup B,\inf A\inf B\}$ QUESTION [7 upvotes]: I want to prove the following result: Theorem $\quad $ If $A$ and $B$ are nonempty bounded sets of real numbers, then \begin{gather*}\tag{$\star$} \sup(A\cdot B)=\max\{\sup A\cdot\sup B, \sup A\cdot\inf B, \inf A\cdot\sup B, \inf A\cdot\inf B\}. \end{gather*} Although some suggestions are given in the thread Show that sup(A⋅B)=max{supA⋅supB,supA⋅infB,infA⋅supB,infA⋅infB}, I have trouble understanding them. So I have tried to prove it as follows. Please check whether it is right or wrong. Note that $AB$ is defined by $$AB=\{z\in\mathbb{R}\mid \exists x\in A, y\in B: z=xy\}.$$ In order to prove this theorem, we need the following lemma. Lemma $\quad $ Let $A$ and $B$ be nonempty sets of nonnegative real numbers. Suppose that $A$ and $B$ are bounded above. Then \begin{gather*} \sup AB=\sup A\cdot\sup B. \end{gather*} Proof. Let $A\subset [0,+\infty)$ and $B\subset [0,+\infty),$ with $A,B$ nonempty and bounded above. Put $a=\sup A, b=\sup B, c=\sup AB.$ Since $A\subset [0,+\infty)$ and $B\subset [0,+\infty),$ we see that $0$ is a lower bound of both $A$ and $B,$ so $a, b, c$ are nonnegative.
Let $z\in AB.$ Then there are $x\in A, y\in B$ such that $z=xy.$ Since $0\leq x\leq a$ and $0\leq y\leq b,$ we have $xy\leq ab.$ So $z\leq ab.$ By arbitrariness of $z,$ we deduce that $ab$ is an upper bound of $AB.$ Hence $c\leq ab.$ Note that $c\geq0.$ Suppose, for contradiction, that $c<ab.$ Then $a>0$ and $b>0.$ It follows that $\frac{c}{b}<a,$ so there exists $x_1\in A$ such that $x_1>\frac{c}{b}\geq 0;$ in particular $x_1>0.$ Then we deduce that $\frac{c}{x_1}<b,$ so there exists $y_1\in B$ such that $y_1>\frac{c}{x_1}.$ Since $x_1>0,$ it follows that $c<x_1y_1,$ contradicting the fact that $c$ is an upper bound of $AB.$ Hence $c=ab.$ $\Box$ Proof of Theorem. Put $a=\sup A,\ a'=\inf A,\ b=\sup B,\ b'=\inf B,\ c=\sup AB.$ We distinguish four cases. (i) $a'\geq 0, b'\geq 0.$ Then $A,B\subset[0,+\infty),$ so the Lemma gives $c=ab.$ Since $0\leq a'b\leq ab,$ $0\leq ab'\leq ab$ and $0\leq a'b'\leq ab,$ we get $\sup AB=c=ab=\max\{ab,a'b,ab',a'b'\}.$ (ii) $a'\geq 0, b'<0.$ In this case $0\leq a'\leq a$ and $b'<0.$ (ii.1) If $b>0,$ then $ab'\leq 0\leq a'b\leq ab$ and $a'b'\leq 0,$ so $\max\{ab,a'b,ab',a'b'\}=ab.$ For any $z=xy\in AB$ we have $z\leq ab$ (if $y\geq 0$ then $xy\leq ab,$ while if $y<0$ then $xy\leq 0\leq ab$), so $c\leq ab.$ On the other hand, for every $0<\epsilon<\min\{a,b\},$ we have $a-\epsilon>0, b-\epsilon>0.$ Hence there exist $x_1\in A, y_1\in B$ such that $x_1>a-\epsilon >0, y_1>b-\epsilon>0,$ which implies that $c\geq x_1y_1>(a-\epsilon)(b-\epsilon)=ab-\epsilon(a+b-\epsilon).$ Since $\lim_{\epsilon\to 0^+}\epsilon(a+b-\epsilon)=0,$ letting $\epsilon\to 0^+$ gives $c\geq ab.$ It follows that $c=ab.$ Hence we have proved, in this case, that $\sup AB=c=ab=\max\{ab,a'b,ab',a'b'\}.$ (ii.2) $b\leq 0.$ Then $a'b'\leq a'b$ and $ab'\leq ab\leq a'b.$ So $\max\{ab,a'b,ab',a'b'\}=a'b.$ Let $z\in AB;$ then there are $x\in A, y\in B$ such that $z=xy\leq xb\leq a'b,$ which implies that $c\leq a'b.$ On the other hand, for every $\epsilon >0,$ there exist $x_2\in A, y_2\in B$ such that $0\leq x_2<a'+\epsilon$ and $y_2>b-\epsilon.$ Since $y_2\leq b\leq 0$ and $a'+\epsilon>0,$ it follows that $x_2y_2\geq (a'+\epsilon)y_2>(a'+\epsilon)(b-\epsilon)=a'b+\epsilon(b-a'-\epsilon),$ so $c>a'b+\epsilon(b-a'-\epsilon)$ for every $\epsilon>0;$ letting $\epsilon\to 0^+$ gives $c\geq a'b.$ As a result, $c=a'b=\max\{ab,a'b,ab',a'b'\}.$ (iii) $a'<0, b'\geq 0.$ Swapping $A$ and $B,$ and using Case (ii), the desired result follows. (iv) $a'<0, b'<0.$ There are four sub-cases. (iv.1) $a> 0, b>0.$ That is, $a'<0<a$ and $b'<0<b.$ Since $a'b<0<ab$ and $ab'<0<a'b',$ we have $\max\{ab,a'b,ab',a'b'\}=\max\{ab,a'b'\}.$ This maximum is an upper bound of $AB$: for $z=xy$ with $x\in A, y\in B,$ if $x\geq 0$ and $y\geq 0$ then $xy\leq ab;$ if $x\leq 0$ and $y\leq 0$ then $xy\leq a'b';$ and if $x,y$ have opposite signs then $xy\leq 0.$ Hence $c\leq\max\{ab,a'b'\}.$ In the case where $\max\{ab, a'b'\}=a'b',$ for sufficiently small $\epsilon>0,$ such that $a'+\epsilon<0$ and $b'+\epsilon<0,$ there exist $x^*\in A$ and $y^*\in B$ for which $x^*<a'+\epsilon$ and $y^*<b'+\epsilon,$ so that $x^*y^*>(a'+\epsilon)(b'+\epsilon)=a'b'+\epsilon(a'+b'+\epsilon).$ Hence $\sup AB=c>a'b'+\epsilon(a'+b'+\epsilon),$ and letting $\epsilon\to 0^+$ gives $c\geq a'b'.$ Therefore $a'b'$ is the least upper bound of $A\cdot B.$ In the case where $\max\{ab, a'b'\}=ab,$ for sufficiently small $\epsilon>0,$ such that $a-\epsilon>0$ and $b-\epsilon>0,$ there exist $x_1\in A$ and $y_1\in B$ such that $x_1>a-\epsilon>0, y_1>b-\epsilon>0.$ This gives $x_1y_1>(a-\epsilon)(b-\epsilon)=ab-a\epsilon-\epsilon(b-\epsilon)=ab-\epsilon\big(a+b-\epsilon\big).$ Since $\lim_{\epsilon\to 0^+}\epsilon\big( a+b-\epsilon\big)=0,$ letting $\epsilon\to 0^+$ gives $c\geq ab,$ and hence $c=ab.$ Therefore we have proved that, in this sub-case, $\sup AB=\max\{ab,ab',a'b,a'b'\}.$ (iv.2) $a>0, b\leq 0.$ Then $ab\leq 0\leq a'b\leq a'b'$ and $ab'\leq 0\leq a'b',$ which gives that $\max\{ab,a'b,ab',a'b'\}=a'b'.$ In this case, for every $y\in B,$ $y\leq 0.$ Let $z\in AB;$ then there exist $x\in A, y\in B$ such that $z=xy.$ If $x\geq 0,$ then since $a'\leq x\leq a,$ we get $z=xy\leq a'y\leq a'b';$ while if $x<0,$ then again $z=xy\leq a'y\leq a'b'.$ Hence $a'b'$ is an upper bound of $AB.$ Since $a'<0$ and $b'<0,$ for sufficiently small $\epsilon>0,$ such that $a'+\epsilon<0$ and $b'+\epsilon<0,$ there exist $x_1\in A, y_1\in B$ such that $x_1<a'+\epsilon$ and $y_1<b'+\epsilon,$ so that $x_1y_1>(a'+\epsilon)(b'+\epsilon)=a'b'+\epsilon(a'+b'+\epsilon)\to a'b'$ as $\epsilon\to 0^+.$ Hence $c\geq a'b'.$ Therefore we have $\sup AB=c=a'b'=\max\{ab,a'b,ab',a'b'\}.$ (iv.3) $a\leq 0, b>0.$ Similar to sub-case (iv.2), or you can argue by simply swapping $A$ and $B.$ (iv.4) $a\leq 0, b\leq 0.$ Because in this case $a,a',b,b'$ are non-positive, $ab\leq a'b\leq a'b',$ and $ab'\leq a'b',$ we deduce that $\max\{ab,a'b,ab',a'b'\}=a'b'.$ For every $x\in A, y\in B,$ we have $a'\leq x\leq a\leq 0,
b'\leq y\leq b\leq 0,$ so $xy\leq a'b'.$ From this it follows that $a'b'$ is an upper bound of $AB.$ Let $\epsilon>0$ be sufficiently small such that $a'+\epsilon<0$ and $b'+\epsilon<0.$ Then there exist $x_1\in A, y_1\in B$ such that $x_1<a'+\epsilon$ and $y_1<b'+\epsilon,$ so that $x_1y_1>(a'+\epsilon)(b'+\epsilon)=a'b'+\epsilon(a'+b'+\epsilon).$ Letting $\epsilon\to 0^+,$ it follows that $c\geq a'b'.$ As a consequence, we have $c=a'b'=\max\{ab,a'b,ab',a'b'\}.$ $\qquad\Box$ REPLY [2 votes]: Given a set $A \subset \mathbb{R}$, define $A^-=A \cap (-\infty, 0]$ and $A^+=A \cap (0, +\infty)$. Of course, $A=A^- \cup A^+$. Observe that $$AB=(A^- \cup A^+) \cdot (B^- \cup B^+)$$ $$=(A^- B^-) \cup (A^- B^+) \cup (A^+ B^-) \cup (A^+ B^+)$$ So $\sup(AB)=\max(\sup(A^- B^-), \ \sup(A^- B^+), \ \sup(A^+ B^-), \ \sup(A^+ B^+))$. Here, I'm considering $\sup \emptyset = -\infty$. Suppose first that $A^-, A^+, B^-$ and $B^+$ are all nonempty. Then $$ \sup A \sup B = \sup A^+ \sup B^+ =^{lemma} \sup A^+ B^+$$ $$ \sup A \inf B = \sup A^+ \inf B^- = -\sup A^+ \sup (-B^-)=^{lemma} - \sup(-A^+ B^-) = \inf(A^+ B^-) \le \sup(A^+ B^-)$$ $$ \sup B \inf A = \sup B^+ \inf A^- = -\sup B^+ \sup (-A^-)=^{lemma} - \sup(-B^+ A^-) = \inf(B^+ A^-) \le \sup(B^+ A^-)$$ $$ \inf A \inf B = \inf A^- \inf B^- = \sup(- A^-) \sup (-B^-) =^{lemma} \sup A^- B^-$$ Consider $S= \max(\sup A \sup B, \ \sup A \inf B, \ \sup B \inf A, \ \inf A \inf B)$. Then $S \le \sup(AB)$. Since it is clear that $S \ge \sup(AB)$, it follows that $S = \sup(AB)$. The other cases can be dealt with similarly.<|endoftext|> TITLE: Is there a permutation on $\mathbb{N}$ that, if repeated often enough, eventually shuffles the whole set? QUESTION [8 upvotes]: Does there exist a computable $\pi : \mathbb{N} \to \mathbb{N}$, bijective† and such that for all $i,j\in\mathbb{N}$ there is a $k\in\mathbb{N}$ with $$ \pi^k(i) > j, $$ and, if yes, what is a natural example? †I'm reasonably sure that at least the inverse will have to be uncomputable, but I would need only $\pi$ itself to be computable. REPLY [3 votes]: A simple example is the permutation $\pi$ given by $\pi(n)=n+2$ if $n$ is even, $\pi(1)=0$, and otherwise $\pi(n)=n-2$. It should be clear that $\pi$ is computable and has the desired property. By the way, regarding the footnote: if a bijection is computable, so is its inverse, so $\pi^{-1}$ is computable as well. In general, given a computable bijection $\sigma$, a simple algorithm to compute $\sigma^{-1}$ is as follows: Given $n$, to compute $\sigma^{-1}(n)$, compute in succession $\sigma(0),\sigma(1),\dots$, until you reach a $k$ such that $\sigma(k)=n$, which must happen since $\sigma$ is a bijection. This tells us that $\sigma^{-1}(n)=k$.<|endoftext|> TITLE: Proof that $\frac {1} {2\pi i} \oint \frac {{\rm d}z} {P(z)} $ over a closed curve is zero. QUESTION [5 upvotes]: Prove that $$\frac {1} {2\pi i} \oint \frac {{\rm d}z} {P(z)} $$ over a closed smooth simple curve that contains all of the roots of the polynomial $P(z)$ is either zero if $n \geq 2$ or equals $\frac{1} {a_0} $ if its degree is $1$.
Proof: $n\geq 2$ $$\frac{1}{P(z)}, \qquad P(z)=(z-z_0)(z-z_1)\cdots(z-z_n).$$ So after partial fraction decomposition I'll have something like $$\frac {A} {z-z_0} +\frac{B} {z-z_1}+\cdots$$ where $A,B \in \mathbb{C}$ (is it true that $A,B$ won't be polynomials but just constants? I can see intuitively why, but cannot write a full proof of it for the decomposition by trying to get a linear system or something), then use Cauchy's theorem and integrate all of these on different circles with their centers at each root, and argue that the integral is the same as the sum of those circle integrals. Now since $\frac {1}{z-z_k}$ will be analytic inside a circle that does not contain $z_k$, some of those integrals will be zero. And now, since my curves are of the form $\gamma(t)=z_k+e^{it}$, $t \in [0,2\pi]$, the rest of the integrals will look like $$\int_0^{2\pi} \frac{A\,ie^{it}} {e^{it}}\,{\rm d}t = 2\pi i A,$$ and, after dividing by $2\pi i$, each of them will contribute the corresponding coefficient. Is my proof correct? The only part I cannot show is that I won't have any problem with decomposing the fraction. And I'm also using the assumption that after the decomposition I'll have simple complex numbers as numerators, not functions of any kind. REPLY [3 votes]: Let $P(z)=\sum_{k=0}^na_kz^k$ be a polynomial of degree $n>1$. Assume that all of the zeros, $z_k$, $k=1,\dots,n$, are distinct. Then, we can write $P(z)=a_n\prod_{k=1}^n(z-z_k)$. Next, using partial fraction expansion, we can write the reciprocal of $P(z)$ as $$\begin{align} \frac{1}{P(z)}&=\sum_{k=1}^n\frac{A_k}{z-z_k} \tag 1 \end{align}$$ where $A_k$ are the residues. Now, multiplying $(1)$ by $a_n\prod_{j=1}^n(z-z_j)$ yields $$\begin{align} \frac{a_n\prod_{j=1}^n(z-z_j)}{P(z)}=a_n\sum_{k=1}^n A_k \frac{\prod_{j=1}^n(z-z_j)}{(z-z_k)} \tag 2 \end{align}$$ The left-hand side of $(2)$ is equal to $1$, whereas the right-hand side is a polynomial of degree $n-1$. Therefore, the coefficients of the polynomial on the right-hand side must be zero for all but the zeroth order term. In particular, the coefficient on the leading term is given by $a_n\sum_{k=1}^n A_k$. Since this must be zero, we find that $$\sum_{k=1}^n A_k=0$$ Inasmuch as $\oint_C \frac{1}{P(z)}\,dz=2\pi i \sum_{k=1}^n A_k $, then $$\frac{1}{2\pi i}\oint_C \frac{1}{P(z)}\,dz=0$$ as was to be shown!<|endoftext|> TITLE: An interesting geometry problem about incenter and ellipses. QUESTION [17 upvotes]: Let $I$ be the incenter of a triangle $ABC$. A point $X$ satisfies the conditions $XA+XB=IA+IB$, $XA+XC=IA+IC$. The points $Y,Z$ are defined similarly. Show that the lines $AX,BY,CZ$ are concurrent or parallel to each other. My friend discovered this problem when he was drawing random ellipses for fun. But we have no idea how to solve such a problem because we literally know nothing about ellipses (except their definition). So I can't post where I'm stuck here. We're just curious to see the solution, whether or not it's elementary. We do not know what kind of tags we should add because we do not know what methods are to be used. Please edit the tagging. REPLY [2 votes]: For the moment, I will just sketch an approach, and develop the calculations later. It will be clear soon that this approach has to work, so we may also leave the non-interesting computation part to a CAS. The key idea is highlighted.
Concurrency and collinearity are straightforward to check through Ceva's theorem, hence we just have to compute the trilinear coordinates of $X,Y,Z$, then check that an associated determinant vanishes; The ellipses $\Gamma_A$ with foci at $B,C$ and $\Gamma_B$ with foci at $A,C$ are two conics meeting at two points. We may write down their trilinear equations and exploit Vieta's theorem to get the trilinear coordinates of $Z$, since the trilinear coordinates of $I$ are just $[1;1;1]$; In order to write down the trilinear equation of our ellipses it is enough to recall that $$ AI^2 = bc-4rR $$ and so on. By Vieta's theorem we do not even need all the coefficients of the trilinear equation, so computations are not that painful.<|endoftext|> TITLE: Axiom of union; union of natural and real numbers QUESTION [5 upvotes]: In set theory we assume the axiom of union to be true for all universes, more formally $\forall x \exists y \forall z\, [z \in y \Leftrightarrow \exists t (t \in x \land z \in t)]$. We call the set $y$ the union of $x$. This is intuitively understood as the set consisting of the elements of the elements of $x$, but I have trouble understanding the scenario where the elements of $x$ aren't generally considered to be sets. Take the natural numbers $\mathbb{N}$ as an example. By the axiom, $\cup \mathbb{N}$ exists and consists of the elements of the natural numbers; however, numbers aren't intrinsically sets. As everything has to be a set in set theory, many different set-theoretic constructions of our number systems have been devised, such as the von Neumann construction of the natural numbers. What I'm wondering is what $\cup \mathbb{N}$ or $\cup \mathbb{R}$ in fact are, especially in the usual set theoretic constructions. Do they have some kind of a deeper property or are they arbitrarily determined by the universe $\mathscr{U}$ and the construction of $\mathbb{N}$ and $\mathbb{R}$ in $\mathscr{U}$? REPLY [3 votes]: In the usual set-theoretic construction of $\mathbb{N}$, in which $0 = \{\}$ and $n+1 = n \cup \{n\}$, you have $$ \mathbb{N}=\{0,1,2,3,\ldots\}=\{\{\},\{0\},\{0,1\},\{0,1,2\},\ldots\}, $$ and so $$ \bigcup\mathbb{N}=\{0,1,2,3,\ldots\}=\mathbb{N}. $$ There's not a comparably simple construction of $\mathbb{R}$ in ZFC, so I think $\bigcup\mathbb{R}$, while it must exist, will depend heavily on the details. For instance, there are at least two different constructions in which the elements of $\mathbb{R}$ are subsets of another set: equivalence classes of Cauchy sequences in one case, and Dedekind cuts in the other. Using the first construction, $\bigcup\mathbb{R}$ is the set of all Cauchy sequences of rationals; using the second, where each real is itself a cut (a set of rationals), $\bigcup\mathbb{R}$ is simply $\mathbb{Q}$. These are clearly quite different. Indeed, even $\bigcup\mathbb{N}=\mathbb{N}$, while elegant, isn't necessary. Other constructions of $\mathbb{N}$ include representing its elements as collections of finite subsets of some other infinite set $A$. (I.e., "3" is the set of all 3-element subsets of $A$, "4" is the set of all 4-element subsets, etc.) In this case, $\bigcup\mathbb{N}$ would be the set of all finite subsets of $A$.<|endoftext|> TITLE: Is it important to understand Linear algebra geometrically? QUESTION [8 upvotes]: I am currently self-studying linear algebra, using Axler's text. And I always find myself trying to picture every theorem and proposition by considering some low dimensional cases, in order to understand why the theorems make sense.
As I get to the chapter on eigenvalues and eigenvectors, things are getting a bit abstract. Specifically, he proved, using induction, that every linear operator on an odd-dimensional real vector space must have an eigenvalue: he considered the case where $U$ is a $2$-dimensional invariant subspace of $V$ and set $W$ such that $U\oplus W=V$. Then he considered the projection of $T\vec w$ onto $W$ along $U$ as an operator in $L(W)$, which must have an eigenvalue $\lambda$ by the induction hypothesis. In the end, he showed that $T-\lambda I$ is not injective, thus $\lambda$ is also an eigenvalue of $T$. I understand everything he's written, but failed to grasp the bigger picture as I can only imagine vector spaces up to $3$ dimensions (in which case $W$ is only one dimensional). The construction is remarkable for me, as two different linear operators have the same eigenvalue, and I have no idea what the picture would look like (e.g., where the eigenvector lies) in higher-dimensional spaces. So my question: Is understanding linear algebra through pictures important? In some sense, I feel like it helps me understand more why some theorems would be true, but the proofs/constructions do not rely on pictures at all as they mostly involve algebraic manipulations. REPLY [6 votes]: All of scientific and mathematical knowledge proceeds from what we directly understand as self-evident to what must logically follow from that. So it is important to understand that that fact about the nature of knowledge applies to any subfield of mathematics. Easily perceived examples in lower dimensions are an important starting point in grasping the truth of propositions applied by logic to higher dimensions. Eigenvectors and eigenvalues depend fundamentally upon the concepts of $\textbf{direction}$ and $\textbf{length}$ which we understand by direct perception in lower dimensions. We can construct lower dimensional examples of linear transformations and directly understand why they leave the directions of certain vectors unchanged, changing perhaps only their length. This is important to understanding the possibility, confirmed by reason, that the same phenomenon can exist in higher dimensions.<|endoftext|> TITLE: Does there exist any surjective group homomorphism from $(\mathbb R^* , .)$ onto $(\mathbb Q^* , .)$? QUESTION [7 upvotes]: Does there exist any surjective group homomorphism from $(\mathbb R^* , .)$ (the multiplicative group of non-zero real numbers) onto $(\mathbb Q^* , .)$ (the multiplicative group of non-zero rational numbers)? REPLY [3 votes]: Let me add a more abstract view to the concrete arguments already given. Recall that a group $(G,\cdot)$ is called divisible if for each $a\in G$ and each positive integer $n$, the equation $X^n = a$ has a solution. (The terminology makes more sense in additive notation where it means $nX=a$ has a solution; that is, one can 'divide the $a$ into $n$ equal parts.') Now observe: (1) the homomorphic image of a divisible group is divisible; (2) the only divisible subgroup of $(\mathbb{Q}^{\ast}, \cdot)$ is the trivial one; (3) the (sub)group $(\mathbb{R}_+^{\ast}, \cdot)$ is divisible. Thus, the image of $(\mathbb{R}_+^{\ast}, \cdot)$ under every homomorphism $\varphi$ from $(\mathbb{R}^{\ast}, \cdot)$ to $(\mathbb{Q}^{\ast}, \cdot)$ must be trivial. Now, for $x$ negative we have that $x^2$ is positive and thus $\varphi(x)^2=\varphi(x^2)= 1$. Thus, $\varphi(x)$ is $\pm 1$. (Also see another answer for this.)
Assume there are two negative numbers $x,y$ such that $\varphi(x) \neq \varphi (y)$; then $1= \varphi(xy) = \varphi(x)\varphi(y) = -1$, a contradiction. Thus either all negative numbers have image $1$ or all negative numbers have image $-1$. It follows that the only two homomorphisms there could be are $x \mapsto 1$ and $x \mapsto \operatorname{sign}(x)$. Both are indeed homomorphisms, yet neither is surjective.<|endoftext|> TITLE: Isomorphism between $\Bbb R$ and $\Bbb R(X)$? QUESTION [12 upvotes]: My questions are: $1.$ Is there a field morphism $\Bbb R(X) \hookrightarrow \Bbb R$ ? $2.$ If the answer to $1.$ is "yes", are $\Bbb R$ and $\Bbb R(X)$ isomorphic as fields? $ $ $ $ For $1.$, here are some comments: This is true for $\Bbb C$, see Can I embed $\Bbb{C}(x)$ into $\Bbb{C}$?. The key point here is the fact that $\Bbb C$ is algebraically closed, of cardinality $2^{\aleph_0}$, and of characteristic $0$. The same argument doesn't work for $\Bbb R$. We can actually find a subfield $F$ of $\Bbb R$ such that $F \cong F(X) \cong F(X,Y) \cong \cdots$, see Is there a subfield $F$ of $\Bbb R$ such that there is an embedding $F(x) \hookrightarrow F$?. Notice that $\Bbb Q(X)$ does not embed in $\Bbb Q$ (as a field). This is equivalent to: Is there an injective ring morphism $\Bbb R[X] \hookrightarrow \Bbb R$ ? (By taking the fraction fields). As for 2.: They are isomorphic as abelian groups, and actually as $\Bbb Q$-vector spaces. They are not isomorphic as $\Bbb R$-algebras, because they don't have the same transcendence degree over $\Bbb R$. Let's try to see how to construct such a field isomorphism. We could try$^{[1]}$ to find a subset $B$ of $\Bbb R$ such that $\Bbb R = \Bbb Q(B)$ and $C \subsetneq B \implies \Bbb R \neq \Bbb Q(C)$. The set $B$ must contain transcendental elements, so pick such an $x_0 \in B$. Then $\Bbb R = \Bbb Q(B) = \Bbb Q(B \setminus \{x_0\})(x_0)$ and we could try$^{[2]}$ to find a field isomorphism $\Bbb Q(B \setminus \{x_0\}) \cong \Bbb Q(B)$, so that $\Bbb Q(B \setminus \{x_0\})(x_0) \cong \Bbb Q(B)(x_0) \cong \Bbb R(X)$. For $^{[1]}$, I tried to use Zorn's lemma on $\mathscr B = \{B \subset \Bbb R \mid \Bbb R = \Bbb Q(B)\}$ and the partial order $B_1 ≤ B_2 \iff B_1 \supset B_2$. But it is not clear to me why this should be an inductive set. For $^{[2]}$, my idea was to define $f : \Bbb Q(B \setminus \{x_0\}) \cong \Bbb Q(B)$ by choosing a bijection $b : B \setminus \{x_0\} \to B$ (these sets should be uncountable), and then trying to extend $f\vert_{B \setminus \{x_0\}} = b$ to a field morphism. Edit for $^{[1]}$: in this article (Minimal generating sets of groups, rings, and fields, by Lorenz Halbeisen, Martin Hamilton, and Pavel Růžička), Theorem 2.4 proves that $\Bbb R$ has no minimal generating set as a field (or as a $\Bbb Q$-algebra, i.e. as a ring). The argument uses the Artin-Schreier theorem. Minimal generating sets of modules are also discussed here. I think that $B$ is a minimal generating set of a subfield $K \subset \Bbb R$ over $\Bbb Q$ if and only if no element of $B$ can be written as a rational function of the other elements of $B$. Under these conditions and using (transfinite) induction, we can probably prove (and this was my first idea) that any function $f: B \to \Bbb C$ such that $b$ and $f(b)$ are conjugate over $\Bbb Q$ (i.e. have the same minimal polynomial) extends to a (unique) field morphism $\sigma : K = \Bbb Q(B) \to \Bbb C$.
Actually, the collection $\mathscr B$ defined above doesn't have to be an inductive set, at least if we replace $\Bbb R$ by $K =\Bbb Q(\sqrt 2, \sqrt[4]{2},\sqrt[8]{2},\dots)$, because the descending chain $\left(E_n = \{\sqrt[2^m]{2} \mid m \geq n\}\right)_{n \geq 1}$ satisfies $K = \Bbb Q(E_n)$, but $K \neq \Bbb Q\left(\bigcap_{n≥1} E_n\right)=\Bbb Q$, so the chain has no upper bound w.r.t. $≤$ in $\mathscr B$. REPLY [15 votes]: There is no homomorphism $\mathbb{R}(X)\to\mathbb{R}$. Indeed, the only homomorphism $\mathbb{R}\to\mathbb{R}$ is the identity, so a homomorphism $\mathbb{R}(X)\to\mathbb{R}$ would have to restrict to the identity on constants, and then there is nowhere $X$ could go (if $X\mapsto t$, then $X-t$, which is invertible in $\mathbb{R}(X)$, would map to $0$, impossible for a field homomorphism). Here's a proof that the only homomorphism $f:\mathbb{R}\to\mathbb{R}$ is the identity. We know $f$ must be the identity on $\mathbb{Q}$. Now note that for any $r\in\mathbb{R}$ and any rational $q<r$ we have $f(q)<f(r)$: indeed, $r-q>0$ is the square of some real number $s$, so $f(r)-f(q)=f(r-q)=f(s)^2>0$ (nonzero because a field homomorphism is injective). Thus $f$ is order-preserving, and since it fixes the dense set $\mathbb{Q}$, squeezing any real $r$ between rationals forces $f(r)=r$.<|endoftext|> TITLE: How to show that this identity holds? QUESTION [5 upvotes]: By comparing the first terms of the Taylor expansion at $0$, it seems that for $|x|<4/27$ the following identity holds: $$\ln\left(\sum_{n=1}^{\infty} \binom{3n}{n}\frac{x^{n-1}}{2n+1}\right)= \sum_{n=1}^{\infty} \binom{3n}{n}\frac{x^n}{n}.$$ I tried by differentiating both sides, but I am completely lost in a terrible mess. I wonder if there is a better strategy to handle it. Any idea? REPLY [3 votes]: Suppose we seek to show that $$\sum_{n\ge 0} {3n+3\choose n+1} \frac{z^n}{2n+3} = \exp\left(\sum_{n\ge 1} {3n\choose n} \frac{z^n}{n}\right).$$ This is the same as showing that $$q_n = [z^n] f(z) = {3n+3\choose n+1} \frac{1}{2n+3} \quad\text{where}\quad f(z) = \exp\left(\sum_{n\ge 1} {3n\choose n} \frac{z^n}{n}\right),$$ and it can be shown by induction. Basic arithmetic verifies it for $q_0 = 1.$ Differentiating we get $$f'(z) = \exp\left(\sum_{n\ge 1} {3n\choose n} \frac{z^n}{n}\right) \left(\sum_{n\ge 1} {3n\choose n} z^{n-1} \right) \\ = f(z) \left(\sum_{n\ge 0} {3n+3\choose n+1} z^{n} \right).$$ Extracting coefficients we obtain the recurrence $$[z^n] f'(z) = (n+1) q_{n+1} = \sum_{k=0}^n q_k {3n+3-3k\choose n+1-k}$$ or $$q_{n+1} = \frac{1}{n+1} \sum_{k=0}^n q_k {3n+3-3k\choose n+1-k}.$$ This means to verify the claim we have to show that $${3n+6\choose n+2} \frac{1}{2n+5} = \frac{1}{n+1} \sum_{k=0}^n {3k+3\choose k+1} \frac{1}{2k+3} {3n+3-3k\choose n+1-k}$$ or $${3n+6\choose n+2} \frac{n+1}{2n+5} = \sum_{k=0}^{n} {3k+3\choose k+1} \frac{1}{2k+3} {3n+3-3k\choose n+1-k}$$ which is $${3n+6\choose n+2} \frac{n+2}{2n+5} = {3n+6\choose n+1} \\ = \sum_{k=0}^{n+1} {3k+3\choose k+1} \frac{1}{2k+3} {3n+3-3k\choose n+1-k}.$$ What we have here is a straightforward convolution of two ordinary generating functions. With some help from the OEIS we learn that we require the ternary unlabeled tree function $T(z)$ with functional equation $$T(z) = 1 + z T(z)^3 \quad\text{or}\quad z = \frac{T(z)-1}{T(z)^3}.$$ We then obtain $$[z^n] T(z) = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{n+1}} T(z) \; dz.$$ Using the functional equation and letting $w=T(z)$ we have $dz = (3-2w)/w^4 \; dw$ and get for the integral $$\frac{1}{2\pi i} \int_{|w-1|=\gamma} \frac{w^{3n+3}}{(w-1)^{n+1}} w \frac{3-2w}{w^4} \; dw \\ = \frac{1}{2\pi i} \int_{|w-1|=\gamma} \frac{3w^{3n}-2w^{3n+1}}{(w-1)^{n+1}} \; dw.$$ Now with $w^q = \sum_{p=0}^q {q\choose p} (w-1)^p$ this will produce $$3\times {3n\choose n} - 2\times {3n+1\choose n} = \left(3 - 2\frac{3n+1}{2n+1}\right) {3n\choose n} = \frac{1}{2n+1} {3n\choose n}.$$ This generates the left term of the convolution.
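Before treating the right term, the closed form for $q_n$ can be machine-checked; here is a minimal sketch in Python with sympy, where the truncation order $N=8$ is an arbitrary choice of mine:

from sympy import binomial, exp, symbols, Rational

z = symbols('z')
N = 8  # truncation order; an arbitrary choice
inner = sum(binomial(3*n, n) * z**n / n for n in range(1, N + 1))
f = exp(inner).series(z, 0, N).removeO().expand()
for n in range(N):
    # compare [z^n] f(z) with the claimed closed form q_n
    assert f.coeff(z, n) == Rational(binomial(3*n + 3, n + 1), 2*n + 3)
print("q_n = C(3n+3, n+1)/(2n+3) confirmed for n <", N)

For this truncation every coefficient agrees, which is reassuring before the remaining residue computation.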
For the right term consider $$[z^n] \frac{1}{1-3zT(z)^2} = [z^n] \frac{1}{1-3(T(z)-1)/T(z)} = [z^n] \frac{T(z)}{3-2T(z)}$$ to get $$\frac{1}{2\pi i} \int_{|w-1|=\gamma} \frac{w^{3n+3}}{(w-1)^{n+1}} \frac{w}{3-2w} \frac{3-2w}{w^4} \; dw \\ = \frac{1}{2\pi i} \int_{|w-1|=\gamma} \frac{w^{3n}}{(w-1)^{n+1}} \; dw = {3n\choose n}.$$ Recall the convolution that we are working with, which now becomes $$\sum_{k=0}^{n+1} [z^{k+1}] T(z) [z^{n+1-k}] \frac{T(z)}{3-2T(z)} \\ = \sum_{k=0}^{n+1} [z^k] \frac{T(z) - 1}{z} [z^{n+1-k}] \frac{T(z)}{3-2T(z)}.$$ We are left with $$[z^{n+1}] \frac{1}{z} \frac{T(z)(T(z)-1)}{3-2T(z)} = [z^{n+2}] \frac{T(z)(T(z)-1)}{3-2T(z)}.$$ Using the same integral as before this becomes $$\frac{1}{2\pi i} \int_{|w-1|=\gamma} \frac{w^{3n+9}}{(w-1)^{n+3}} \frac{w(w-1)}{3-2w} \frac{3-2w}{w^4} \; dw \\ = \frac{1}{2\pi i} \int_{|w-1|=\gamma} \frac{w^{3n+6}}{(w-1)^{n+2}} \; dw \\ = {3n+6\choose n+1}.$$ This concludes the argument. The sequences where this material resides are OEIS A005809 and OEIS A001764. The structure of the initial equation suggests a counting argument involving sets of labeled directed cycles where each item may have one of three colors, and the reader is invited to investigate this further.<|endoftext|> TITLE: Trying to prove that $H=\langle a,b:a^{3}=b^{3}=(ab)^{3}=1\rangle$ is a group of infinite order. QUESTION [12 upvotes]: I'm trying to prove that the following group has infinite order: $$H=\langle a,b\mid a^{3}=b^{3}=(ab)^{3}=1\rangle.$$ Currently I'm checking on some cases using the relations, but my problem is the reducibility for large products. Naively I started to check that $ab$ is different from $1,a,b$ and then $ba$ from $1,a,b,ab$, just to understand $H$ to some extent. Now I'm wondering about some more effective method to prove that $|H|=\infty$; I'm tempted to look for an injection from some group of infinite order into $H$, but I'm still stuck. More than asking for a solution I'd rather appreciate some hints or thoughts about it. Thanks a lot. REPLY [5 votes]: Let $p$ be a prime congruent to $1\ mod\ 3$. Find a homomorphism from your group $H$ to the non-abelian group of order $3p$. Then use the fact that there are infinitely many primes congruent to $1\ mod\ 3$ to show that $H$ is infinite. This is a problem from Dummit and Foote's Abstract Algebra, 3rd ed, 6.3.14, pg 221.<|endoftext|> TITLE: A topological group which is also a (not necessarily smooth) manifold is orientable QUESTION [9 upvotes]: I am trying to show that a topological group which is also a (not necessarily smooth) manifold is automatically orientable. I know of a proof involving transition functions for smooth manifolds, in which case the object in question is a Lie group. I am using Hatcher's definition of orientability: An $n$-manifold $M$ is orientable if it admits a local orientation $\eta_x$ at each $x\in M$ where $\eta_x$ is a generator of $H_n(M\mid x)\cong \mathbb{Z}$, with the following compatibility property: For each $x\in M$, there is an open ball $x\in B_x\cong \mathbb{R}^n$ so that for every $y\in B_x$, the local orientation $\eta_y$ is the isomorphic image (induced by inclusion of pairs) of the same generator $\eta_{B_x}$ of $H_n(M\mid B_x)$. I have a clear candidate for such a local orientation, but I am having trouble showing the compatibility: Let $e$ be the identity of the topological group $M$.
Choose any generator $\eta_e$ of $H_n(M\mid e)$, and for any $g\in M$, let $\eta_g = L^g_*(\eta_e)\in H_n(M\mid g)$ where $L^g:M\to M$ is left multiplication by $g$ ($L^g$ is a homeomorphism, so it certainly induces an isomorphism on homology). To start showing the compatibility condition, given $x\in M$, let $B_x$ be any open neighborhood of $x$ homeomorphic to $\mathbb{R}^n$. We are required to show that the following diagram commutes: $\require{AMScd}$ $$\begin{CD} H_n(M\mid B_x) @>id>\cong> H_n(M\mid B_x) \\ @VV{\cong}V @V{\cong}VV \\ H_n(M\mid x) @>{\cong}>L^{(yx^{-1})}_*> H_n(M\mid y) \end{CD}$$ where the vertical maps are induced by inclusion. Here is where I am stuck. The corresponding diagram on the level of topological spaces certainly does not commute. Any ideas, thoughts, hints, or full solutions are welcome! REPLY [5 votes]: The solution by JHF seems correct to me; here is an alternative using the same core idea of paths in the space $M$ yielding homotopies between multiplication maps. The "top then right" leg of the square is induced by inclusion, which is the same as the map $L^{e}: (M, M - B_{x})\to (M, M - y)$. I will show that this map is homotopic through maps $(M, M-B_{x})\to (M, M-y)$ to $L^{yx^{-1}}$. The ball $B_{x}\cong \mathbb{R}^{n}$ is path-connected, so we may choose a path $\alpha:I\to M$ from $y$ to $x$ so that the image of $\alpha$ is contained in $B_{x}$. The homotopy $L^{y(\alpha(t))^{-1}}$ is then a homotopy from $L^{e}$ to $L^{yx^{-1}}$, and we must show that every stage of this homotopy is a map of pairs $(M, M-B_{x})\to (M, M-y)$. Suppose therefore that $z\notin B_{x}$. If $$y = L^{y(\alpha(t))^{-1}}(z) = y(\alpha(t))^{-1}z,$$ then we must have $$\alpha(t) = z,$$ which is impossible since $z\notin B_{x}$ and $\alpha(t)\in B_{x}$. The "top then right" leg of the square is now simply $L^{yx^{-1}}_{*}$, which certainly causes the square to commute, as desired.<|endoftext|> TITLE: Convergence of $\zeta(s)$ on $\Re(s)> 1$ QUESTION [10 upvotes]: I'm aware there have been several similar questions, and some great answers with hints on how to prove this, for example here and here. But I've never really seen a detailed proof of the absolute convergence of the Riemann zeta function in the half-plane $\Re(s)> 1$. I hope there's interest in a detailed proof of this for reference. Here is my attempt. Please, fill in any detail that might be missing, or point out any mistake! On $\Re(s)=\sigma> 1$, we have $$\sum_{n=1}^{\infty}\bigg|\frac{1}{n^s}\bigg|=\sum_{n=1}^{\infty}\frac{1}{n^\sigma}$$ and $$\frac{1}{n^\sigma}\leq\frac{1}{n^{1+\epsilon}}$$ for any $\epsilon$ with $0<\epsilon\leq\sigma-1$, so by the direct comparison test, if $$\sum_{n=1}^{\infty}\frac{1}{n^{1+\epsilon}}$$ converges absolutely, then so do $\sum_{n=1}^{\infty}|1/n^s|$ and $\zeta(s)$. By the integral test, the convergence of $\sum_{n=1}^{\infty}1/n^{1+\epsilon}$ is equivalent to the finiteness of the integral $$\int_1^\infty \frac{dx}{x^{1+\epsilon}}=\bigg[-\frac{1}{\epsilon x^\epsilon}\bigg]_1^\infty=\frac{1}{\epsilon},$$ which is finite for any $\epsilon > 0$. Any correction or improvement is welcomed! Thanks in advance. REPLY [5 votes]: Here is another approach. Let $s=\sigma +i \omega$.
Then, we can write $$\begin{align} \sum_{n=1}^\infty \frac{1}{n^s}&=\sum_{n=1}^\infty \frac{e^{-i\omega \log(n)}}{n^\sigma }\\\\ &=\sum_{n=1}^\infty \frac{\cos(\omega \log(n))}{n^\sigma }-i\sum_{n=1}^\infty \frac{\sin(\omega \log(n))}{n^\sigma }\\\\ \end{align}$$ Next, we use the Euler–Maclaurin Summation Formula (EMSF) to write for $\sigma \ne 1$, $\sigma >0$ $$\sum_{n=1}^N \frac{\cos(\omega \log(n))}{n^\sigma }=N^{1-\sigma}\left(\frac{(1-\sigma)\cos(\omega \log(N))}{(1-\sigma)^2+\omega^2}+\frac{\omega \sin(\omega \log(N))}{(1-\sigma)^2+\omega^2}\right)+K_1+O\left(N^{-\sigma}\right)$$ $$\sum_{n=1}^N \frac{\sin(\omega \log(n))}{n^\sigma }=N^{1-\sigma}\left(\frac{(1-\sigma)\sin(\omega \log(N))}{(1-\sigma)^2+\omega^2}-\frac{\omega \cos(\omega \log(N))}{(1-\sigma)^2+\omega^2}\right)+K_2+O\left(N^{-\sigma}\right)$$ for some constants $K_1$ and $K_2$ (that $K_1$ and $K_2$ exist can be shown by analyzing the remainder term in the EMSF). Note, in arriving at the series, we used integration by parts twice in applying the EMSF. Obviously, we find that for $0<\sigma<1$, the series diverge (for $\omega=0$, only the cosine series diverges), while for $\sigma >1$, the series converge. Finally, it is easy to carry out the analysis for the case $\sigma =1$ to see that the series diverge for $\sigma =1$ (See THIS ANSWER).<|endoftext|> TITLE: About absolute continuity $\Rightarrow$ null set maps to null set QUESTION [10 upvotes]: I've searched quite a lot in this site and found many questions that mention this result, but not a single proof. I tried proving it myself and failed. So, I ask it here: how can one prove that if $f:[a, b]\rightarrow \mathbb{R}$ is absolutely continuous, then for every set $A \subseteq[a,b]$ with Lebesgue measure $0$, $f(A)$ also has Lebesgue measure $0$? REPLY [17 votes]: Let $N \subset (a,b)$ be a null set and let $\epsilon > 0$. Select $\delta > 0$ corresponding to the definition of absolute continuity and let $O \subset [a,b]$ be an open set with $m(O) < \delta$. There exists a countable family $\{(a_k,b_k)\}$ of disjoint open intervals with $O = \bigcup_k (a_k,b_k)$. Since $f$ is continuous on each $[a_k,b_k]$ there exist points $c_k,d_k \in [a_k,b_k]$ on which $f$ attains its infimum and supremum, respectively. This means that $f([a_k,b_k]) = [f(c_k),f(d_k)]$ so that $$m^*(f((a_k,b_k))) = |f(d_k) - f(c_k)|.$$ Since $\sum_k |c_k - d_k| \le \sum_k |a_k - b_k| = m(O) < \delta$ you have $$\sum_k |f(d_k) - f(c_k)| < \epsilon$$ by absolute continuity. Consequently $$m^*(f(O)) = m^*\Big( f\Big(\bigcup_k (a_k,b_k)\Big)\Big) \le \sum_k m^*\big(f((a_k,b_k))\big) < \epsilon.$$ In particular, if $O$ is an open set with $N \subset O \subset (a,b)$ and $m(O) < \delta$ then $$m^*(f(N)) \le m^*(f(O)) < \epsilon.$$ Since this is valid for any $\epsilon > 0$ it follows that $m^*(f(N)) = 0$ as desired.<|endoftext|> TITLE: Smallest positive integer that gives remainder 5 when divided by 6, remainder 2 when divided by 11, and remainder 31 when divided by 35? QUESTION [5 upvotes]: What is the smallest positive integer that gives the remainder 5 when divided by 6, gives the remainder 2 when divided by 11 and 31 when divided by 35? Also, are there any standard methods to solve problems like these?
REPLY [4 votes]: \begin{align} n \equiv 5 &\pmod{6} \\ n \equiv 2 &\pmod{11} \\ n \equiv 31 &\pmod{35} \end{align} The idea is to find three integers, $A, B,$ and $C$ such that \begin{array}{|l|l|l|} A \equiv 1 \pmod{6} & B \equiv 0 \pmod{6} & C \equiv 0 \pmod{6} \\ A \equiv 0 \pmod{11} & B \equiv 1 \pmod{11} & C \equiv 0 \pmod{11} \\ A \equiv 0 \pmod{35} & B \equiv 0 \pmod{35} & C \equiv 1 \pmod{35} \end{array} Your answer will then be $5A + 2B + 31C \pmod{2310}$ where $2310 = 6 \cdot 11 \cdot 35$. Since $6, 11,$ and $35$ are pairwise coprime, the Chinese remainder theorem guarantees that $A, B,$ and $C$ exist. There is a formula that describes how to find $A, B,$ and $C$, but it isn't that much trouble to figure them out from scratch. The equivalences $A \equiv 0 \pmod{11}$ and $A \equiv 0 \pmod{35}$ imply that $A$ must be a multiple of $385 = 11 \cdot 35.$ So $A = 385x$ for some integer $x$. We solve \begin{align} A &\equiv 1 \pmod 6 \\ 385x &\equiv 1 \pmod 6 \\ x &\equiv 1 \pmod 6 \\ \hline A &= 385(1) = 385 \end{align} The equivalences $B \equiv 0 \pmod{6}$ and $B \equiv 0 \pmod{35}$ imply that $B$ must be a multiple of $210 = 6 \cdot 35.$ So $B = 210x$ for some integer $x$. We solve \begin{align} B &\equiv 1 \pmod{11} \\ 210x &\equiv 1 \pmod{11} \\ x &\equiv 1 \pmod{11} \\ \hline B &= 210(1) = 210 \end{align} The equivalences $C \equiv 0 \pmod{6}$ and $C \equiv 0 \pmod{11}$ imply that $C$ must be a multiple of $66 = 6 \cdot 11.$ So $C = 66x$ for some integer $x$. We solve \begin{align} C &\equiv 1 \pmod{35} \\ 66x &\equiv 1 \pmod{35} \\ -4x &\equiv 1 \pmod{35} \\ x &\equiv -9 \pmod{35} \\ \hline C &= 66(-9) = -594 \end{align} We then compute \begin{align} n & \equiv 5A + 2B + 31C \pmod{2310} \\ &\equiv 5(385) + 2(210) + 31(-594) \pmod{2310} \\ &\equiv 101\pmod{2310} \\ \end{align}<|endoftext|> TITLE: Simplifying $\sum_{i=1}^n{\lfloor \frac{n}{i} \rfloor}$? QUESTION [6 upvotes]: In a few programming contexts, I've come across code along these lines: total = 0 for i from 1 to n total := total + n / i Where the division here is integer division. Mathematically, this boils down to evaluating $$\sum_{i = 1}^{n}\left\lfloor\, n \over i\,\right\rfloor.$$ This is upper-bounded by $n H_{n}$ and lower-bounded by $nH_{n} - n$ using the standard inequalities for $\texttt{floor}$'s, but that upper bound likely has a large gap to the true value. Is there a way to either get an exact value for this summation or find some simpler function it's asymptotically equivalent to? REPLY [8 votes]: For exact computation, a simplification begins with the observation $$ \sum_{i=1}^n \left\lfloor \frac{n}{i} \right\rfloor = \sum_{i,j} [1 \leq i] [1 \leq j] [ij \leq n] 1$$ where $[P]$ is the Iverson bracket: it is $1$ if $P$ is true, and $0$ if false. Anyways, using the symmetry, you can break this down to $$ \sum_{i=1}^n \left\lfloor \frac{n}{i} \right\rfloor = -\lfloor \sqrt{n} \rfloor^2 + 2 \sum_{i=1}^{\lfloor \sqrt{n} \rfloor} \left\lfloor \frac{n}{i} \right\rfloor $$ This form is better for making rough estimates too, since the roundoff error is a much smaller proportion of the summands.<|endoftext|> TITLE: Are there spaces with non-integer Hausdorff dimension that are differentiable? QUESTION [7 upvotes]: I realize that fractal curves usually represent non-differentiable functions in some sense, but would it be possible to have a space that has non-integer Hausdorff dimension, and still have a reasonable notion of differentiation?
Essentially I'm curious about the ability of a space with a (pseudo-)Riemannian metric to have non-integer Hausdorff dimension. If so, can you please provide an example and/or construction? If not, why? REPLY [3 votes]: For standard differentiation on the real line, the answer is (unsurprisingly) no. The following corollary appears in the paper Sets of Fractional Dimensions (V): on Dimensional Numbers of Some Continuous Curves by A.S. Besicovitch and H.D. Ursell: Corollary: The dimensional number of the curve $y=f(x)$, where $f(x)$ has finite derivative at all points, is $1$.<|endoftext|> TITLE: Prove that this family of iterated means gives odd powers of Arithmetic-Geometric Mean in the limit QUESTION [5 upvotes]: We know the Arithmetic-Geometric mean: $$a_0=a, \qquad b_0=b$$ $$a_n=\frac{a_{n-1}+b_{n-1}}{2}, \qquad b_n=\sqrt{a_{n-1} b_{n-1}}$$ $$\text{agm} (a,b)=\lim_{n \to \infty}a_n=\lim_{n \to \infty}b_n$$ Now we introduce a family of modified 'iterated means': $$a_n=\frac{a_{n-1}+b_{n-1}}{2}\left(1-\frac{(a_{n-1}-b_{n-1})^2}{(a_{n-1}+b_{n-1})^2} \right)^p$$ $$b_n=\sqrt{a_{n-1} b_{n-1}}\left(1-\frac{(a_{n-1}-b_{n-1})^2}{(a_{n-1}+b_{n-1})^2} \right)^q$$ $$M_{pq} (a,b)=\lim_{n \to \infty}a_n=\lim_{n \to \infty}b_n$$ I obtained the following numerical results: $$M_{10} (a,b)=\frac{ab}{\text{agm}(a,b)}$$ (This is actually the Harmonic-Geometric mean, and this result is known and easy to prove). $$M_{11} (a,b)=\frac{a^2b^2}{\text{agm}^3(a,b)}$$ $$M_{21} (a,b)=\frac{a^3b^3}{\text{agm}^5(a,b)}$$ $$M_{22} (a,b)=\frac{a^4b^4}{\text{agm}^7(a,b)}$$ $$M_{32} (a,b)=\frac{a^5b^5}{\text{agm}^9(a,b)}$$ I think the pattern is easy to see here. We increase $p$ by one and leave $q$ the same. Then we increase $q$ by one as well. For negative $p,q$ we have a corresponding situation: $$M_{0-1} (a,b)=\frac{\text{agm}^3(a,b)}{ab}$$ $$M_{-1-1} (a,b)=\frac{\text{agm}^5(a,b)}{a^2b^2}$$ And so on. How can we prove this observation? Is there some way to prove it for the general case, because I find it unlikely that we can directly prove something like: $$M_{100,100} (a,b)=\frac{a^{200}b^{200}}{\text{agm}^{399}(a,b)}$$ In particular cases, the 'modification' introduced here turns one mean into another. $$\frac{a+b}{2}\left(1-\frac{(a-b)^2}{(a+b)^2} \right)=\frac{a+b}{2} \frac{4 ab}{(a+b)^2}=\frac{2 ab}{a+b}$$ $$\frac{a+b}{2}\left(1-\frac{(a-b)^2}{(a+b)^2} \right)^{1/2}=\frac{a+b}{2} \frac{2 \sqrt{ab}}{a+b}=\sqrt{ab}$$ $$\sqrt{ab} \left(1-\frac{(a-b)^2}{(a+b)^2} \right)^{1/2}=\frac{2a b }{a+b}$$ $$\sqrt{ab} \left(1-\frac{(a-b)^2}{(a+b)^2} \right)^{-1/2}=\frac{a+b}{2}$$ etc. In the general case it probably doesn't always result in a mean. And the 'iterated means' above certainly aren't means in the general case, since the limits for large positive or negative $p,q$ can go to zero or infinity. REPLY [2 votes]: I'll show (skipping some parts of the proof) $$M_p(a,b):=M_{p,p}(a,b)=(ab)^{2p} \mathrm{agm}(a,b)^{1-4p},$$ in the same way one can show $$M_{p,p-1}(a,b)=(ab)^{2p-1} \mathrm{agm}(a,b)^{3-4p}.$$ Let us write $S(a,b)=\frac12 (a+b)$ and $P(a,b) = \sqrt{ab}$. One can rewrite the "$p$ iterated means" as follows: \begin{align} a_n & = Q_1(a_{n-1},b_{n-1}) := S(a_{n-1},b_{n-1})^{1-2p} P(a_{n-1},b_{n-1})^{2p},\\ b_n & = Q_2(a_{n-1},b_{n-1}) := S(a_{n-1},b_{n-1})^{-2p} P(a_{n-1},b_{n-1})^{2p+1}. \end{align} Let us also define $F_p(a,b) = (ab)^{2p} \mathrm{agm}(a,b)^{1-4p}$; we want to prove $M_p(a,b) = F_p(a,b)$.
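Numerically the claimed closed forms are easy to confirm before proving them; a minimal sketch in Python, where the sample values $a=2$, $b=3$ and the iteration counts are arbitrary choices of mine:

from math import sqrt

def agm(a, b, iters=60):
    # standard arithmetic-geometric mean iteration
    for _ in range(iters):
        a, b = (a + b) / 2, sqrt(a * b)
    return a

def M(p, q, a, b, iters=200):
    # the modified iterated mean defined in the question
    for _ in range(iters):
        r = 1 - ((a - b) / (a + b)) ** 2
        a, b = (a + b) / 2 * r ** p, sqrt(a * b) * r ** q
    return a

a, b = 2.0, 3.0
g = agm(a, b)
print(M(1, 1, a, b), (a * b) ** 2 / g ** 3)  # the two numbers agree
print(M(2, 1, a, b), (a * b) ** 3 / g ** 5)  # likewise for M_{21}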
The key observation is that we have (this follows immediately from the definition) $$\mathrm{agm}(a,b) = \mathrm{agm}(S(a,b),P(a,b)) \label{1}\tag{1}.$$ Suppose we can prove $$ F_p(a,b) = F_p(Q_1(a,b),Q_2(a,b)) = F_p(a_1,b_1) \label{2} \tag{2}.$$ Iterating the same equality (and using the continuity of $F_p$) we find \begin{align} F_p(a,b) & = F_p(a_1,b_1) = F_p(a_2,b_2) = \dots = F_p(a_n,b_n) = \dots = F_p(M_p(a,b),M_p(a,b))\\ & = (M_p(a,b) \cdot M_p(a,b))^{2p} \cdot \mathrm{agm} (M_p(a,b),M_p(a,b))^{1-4p} = M_p(a,b), \end{align} where we used $\mathrm{agm}(\lambda, \lambda) = \lambda$. To prove $\eqref{2}$ we use property $\eqref{1}$. We have \begin{align} F_p(a_1,b_1) & = F_p(Q_1(a,b),Q_2(a,b)) = (Q_1(a,b) \cdot Q_2(a,b))^{2p} \cdot \mathrm{agm}(Q_1(a,b), Q_2(a,b))^{1-4p}\\ & = \left( S(a,b)^{1-2p} P(a,b)^{2p} \cdot S(a,b)^{-2p} P(a,b)^{2p+1} \right)^{2p} \cdot \\ & \quad \mathrm{agm}\left( S(a,b)^{1-2p} P(a,b)^{2p} , S(a,b)^{-2p} P(a,b)^{2p+1} \right)^{1-4p}\\ & = S(a,b)^{2p(1-4p)} P(a,b)^{2p(4p+1)} \cdot \left( S(a,b)^{-2p} P(a,b)^{2p} \mathrm{agm}\left( S(a,b), P(a,b) \right)\right)^{1-4p}\\ & = P(a,b)^{4p} \mathrm{agm}\left( a, b \right)^{1-4p}\\ & = F_p(a,b). \end{align}<|endoftext|> TITLE: Determine all real polynomials with $P(0)=0$ and $P(x^2+1)=(P(x))^2+1$ QUESTION [14 upvotes]: Find all real polynomials with $P(0)=0$ and $P(x^2+1)=(P(x))^2+1$ I'd like a solution for this problem and proof verification: My proof: Let $a_0=0$ and $a_{n+1}=a_n^2+1$, clearly $P(a_0)=a_0$ and if $P(a_n)=a_n$ then $P(a_{n+1})=P(a_n^2+1)=(P(a_n))^2+1=a_n^2+1=a_{n+1}$. So $P(x)=x$ for infinitely many values of $x$ (because $a_n$ is strictly increasing), hence $P$ is the identity polynomial, and it clearly works. REPLY [2 votes]: Let $P(x)=a_1x+a_2x^2+\cdots+a_nx^n$ be a polynomial such that $$P(x^2+1)=(P(x))^2+1$$ and $P(0)=0$ (it is immediate that $P(1)=1$). We have $$P(x^2+1)=\sum a_k(x^2+1)^k=\sum a_k\sum_l\binom{k}{l}(x^2)^{k-l}$$ $$(P(x))^2+1=\sum a_k^2x^{2k}+2\sum_{k<j} a_ka_jx^{k+j}+1$$ It follows that the polynomial $$R(x)= \sum a_k\sum_l\binom{k}{l}(x^2)^{k-l}-\left(\sum a_k^2x^{2k}+2\sum_{k<j} a_ka_jx^{k+j}+1\right)$$ must be identically zero because it has more roots than its degree, so all the coefficients must be zero. It can be seen that even for degree two this is not the case and that the only possibility is $$\color{red}{P(x)=x}$$ $$***$$ We give another solution because the verification about coefficients in the preceding one is not immediate $$***$$ SECOND SOLUTION. It is clear that $P(x)=x$ works. Put $$P(x)=x+Q(x)$$ so that $$ P(x^2+1)=(P(x))^2+1\iff Q(x^2+1)=2xQ(x)+(Q(x))^2\qquad (*) $$ We have from $(*)$, $$ (Q(x))^2+2xQ(x)-Q(x^2+1)=0\iff Q(x)=-x\pm\sqrt{x^2+Q(x^2+1)}\qquad (**)$$ Note that if $Q(x_0)=0$ then $Q(x_0^2+1)=0$. Because of $P(0)=0\Rightarrow P(1)=1\Rightarrow P(2)=2\Rightarrow P(5)=5$, one has $Q(0)=Q(1)=Q(2)=Q(5)=0$. It follows from $(**)$ that we can deduce infinitely many roots for $Q(x)$ (for example $26$, $677$ and $458330$, the first three deduced from $Q(5)=0$). Consequently $Q(x)=0$ for all $x$ and $$P(x)=x$$ is the only solution.<|endoftext|> TITLE: Open sets of integral schemes QUESTION [5 upvotes]: The following is the definition of integral scheme as mentioned here Let $X$ be a scheme. We say $X$ is integral if it is nonempty and for every nonempty affine open $\operatorname{Spec}(R)=U \subset X$ the ring $R$ is an integral domain. How do I show that for any open set $U$, $\mathcal{O}_X(U)$ is an integral domain?
REPLY [2 votes]: You can write $U=\bigcup_{i\in I}U_i$ where each $U_i$ is a nonempty affine open. Let $f,g\in O_X(U)$ be such that $fg=0$. Denote by $f_i$ the restriction of $f$ to $U_i$; then $f_ig_i=0$, which implies that $f_i=0$ or $g_i=0$, since $O_X(U_i)$ is an integral domain. Lemma 27.3.4 of your reference shows that $X$ is irreducible and reduced; this implies that $U_i$ is dense. Suppose $f_i=0$; the vanishing locus of $f$ is a closed subset which contains the dense subset $U_i$, so it is all of $X$, and by reducedness $f=0$.<|endoftext|> TITLE: Largest subset with no arithmetic progressions QUESTION [6 upvotes]: Let $A=\{x_1,x_2,\cdots,x_n\}$ be a set of $n$ distinct real numbers. Show that there exists a set $B\subset A$ such that $$|B|\geq\lfloor\sqrt{2n}+\frac12\rfloor$$ and no $3$ distinct elements of $B$ constitute an arithmetic progression. I have no idea how to approach this problem, with its strange-looking formula. I've already posted it on AoPS but no reply. REPLY [6 votes]: This is an accurate estimate! Perform a greedy algorithm. At the start $B= \emptyset$. Now repeat: take an arbitrary element $a\in A$, delete it from $A$, and if it doesn't make an AP in $B$ then put it in $B$. If $a$ makes an AP with some $x,y\in B$ and $a\ne {x+y\over 2}$ (it is not in the middle of $x,y$), then we discard it; if $a= {x+y\over 2}$ (it is in the middle of $x,y$), then we replace $x$ or $y$ with $a$ in $B$. We finish when $A = \emptyset$. Now we do double counting between unordered pairs in $B$ and elements of $B^C$. A pair $\{x,y\}\subset B$ is connected with $z\in B^C$ iff $x,y,z$ are in AP. Every unordered pair in $B$ is connected with at most $1$ (and not $3$) element of $B^C$. So, if $|B|=m$, then we have: $$\binom{m}{2}\geq n-m\iff m\geq\frac{\sqrt{8n+1}-1}{2}$$<|endoftext|> TITLE: $f>0$ on $[0,1]$ implies $\int_0^1 f >0$ QUESTION [13 upvotes]: Someone made the remark on my old question (second-to-last comment on the answer from here) that an integrable function $f>0$ on $[0,1]$ does not imply $\int_0^1 f >0$ since limits do not preserve strict inequality. But I think it is true and I will try to give a proof. Since $\{f>0\} = \cup_{n=1}^{\infty}\{f>\frac{1}{n}\}$, from continuity of Lebesgue measure $$ 1= m(\{f>0\}) = m\left(\bigcup_{n=1}^{\infty}\Big\{f>\frac{1}{n}\Big\}\right) = \lim_{n\rightarrow \infty} m\left(\Big\{f>\frac{1}{n}\Big\}\right),$$ this means there exists $N$ such that $m\left(\Big\{f>\frac{1}{N}\Big\}\right)\geq 1/2$. Then $\frac{1}{N}\chi_{\{f>\frac{1}{N}\}} \leq f$ and $$\int_0^1 f \geq \int_0^1 \frac{1}{N}\chi_{\{f>\frac{1}{N}\}} \geq \frac{1}{2N} > 0.$$ Is this correct? REPLY [6 votes]: Yes, your proof is correct. The two comments on that old answer are wrong. The first comment says: The above proof is wrong: if $f(x)>C$ then $\int_0^1 f(x)\,dx \geq C$. Notice that the inequality becomes non-strict, because integration is just passing to the limit and limits do not preserve the strictness of inequalities. It is generally a good rule that limits do not preserve strict inequalities. But this user in his or her comment is wrong that this holds for integrals in particular. In fact, for the Lebesgue (or Riemann!) integral, if $f > g$ on a set $A$, and both are integrable, then $\int_A f > \int_A g$. Specifically, let $A \subseteq \mathbb{R}$ be a Lebesgue measurable set, and suppose $f, g$ are measurable, with $f(x) > g(x)$ for all $x \in A$. Then $\int_A f > \int_A g$, with just a few exceptions: If $A$ has measure $0$, this won't hold. If $\int_A f = -\infty$ or if $\int_A g = \infty$, this won't hold.
This covers the Riemann case as well, since Riemann integrable functions are also Lebesgue integrable. (Except for some improper Riemann integrals -- I'm not sure if it holds in the case of improper integrals or not.) This has also been covered on mathSE a lot of times. Some examples: 1, 2, 3, 4, 5.<|endoftext|> TITLE: A conjecture about integer polynomials and primes QUESTION [11 upvotes]: Conjecture: For all sets of non constant polynomials $\{f_1,\dots,f_n\}\subset \mathbb Z[X]$, there exists $m\in\mathbb N$ such that $f_k(m)\notin\mathbb P$, for all $1\leq k\leq n$. Where $\mathbb P$ is the set of primes. Can this be proved? The original conjecture with $\mathbb R[X]$ seems too difficult and may depend on an open conjecture. REPLY [6 votes]: The conjecture is true, even allowing for polynomials in $\Bbb C[x]$. As a first step, we show that if $f(x)\in\Bbb C[x]$ takes infinitely many integer values on inputs $x\in\Bbb N$, then its coefficients must actually be (real) rational numbers. (This reduces the problem from $\Bbb C[x]$ to $\Bbb Q[x]$, since any polynomial in $\Bbb C[x] \setminus \Bbb Q[x]$ only takes finitely many prime values and thus can be deleted from the set of polynomials without changing the conjectured property.) Indeed, let $f(x) \in \Bbb C[x]$ have degree $d\ge1$ (constant polynomials being trivial for this fact), and write it as $f(x) = c_d x^d + c_{d-1}x^{d-1} + \cdots + c_1 x + c_0$. Suppose that $n_0,\dots,n_d\in\Bbb N$ are such that $v_0=f(n_0),\dots,v_d=f(n_d)$ are all integers. The coefficients can be recovered from these outputs, since they satisfy the linear system of equations \begin{align*} n_0^d c_d + n_0^{d-1} c_{d-1} + \cdots + n_0 c_1 &= v_0-c_0 \\ n_1^d c_d + n_1^{d-1} c_{d-1} + \cdots + n_1 c_1 &= v_1-c_0 \\ \vdots& \\ n_d^d c_d + n_d^{d-1} c_{d-1} + \cdots + n_d c_1 &= v_d-c_0. \end{align*} The coefficient matrix on the left-hand side is a Vandermonde matrix, so it is invertible; moreover, all its entries are integers, and so its inverse has rational entries. Therefore the vector of $c_j$s, being the product of this inverse matrix with the vector of $(v_j-c_0)$s, has rational entries. Now let $f_1(x),\dots,f_k(x)\in\Bbb Q[x]$, and choose a huge common denominator $A$ for their coefficients, so that $g_1(x)=Af_1(x),\dots,g_k(x)=Af_k(x)\in\Bbb Z[x]$. We will show that there is a composite number $B$ with the following property: there are infinitely many integers $n$ such that $AB$ divides each $g_j(n)$. In particular, for these infinitely many integer inputs, each $f_j(n)$ is an integer multiple of $B$ and therefore not prime. We use the following fact about polynomials with integer coefficients: if $g(n_0)=v\ne0$, then $g(n_0+mv)$ is a multiple of $v$ for all integers $m$. (Verify by expanding each term with binomial coefficients, or just use congruences modulo $v$.) So, let $n_0$ be an integer with each $|g_j(n_0)|\ge2$ (a nonconstant polynomial takes the values $-1,0,1$ only finitely many times). Let $v_j = g_j(n_0)$ for each $j$, and let $V=v_1v_2\cdots v_k$. Then each $g_j(n_0+mV)$ is a multiple of $v_j$ for all integers $m$. Moreover, with finitely many exceptions, $g_j(n_0+mV)$ is a proper multiple of $v_j$; and since $|v_j|\ge2$, no proper multiple of $v_j$ can be prime.
Therefore all but finitely many inputs of the form $n_0+mV$ yield all composite outputs, which is (stronger than) what we wanted to show.<|endoftext|> TITLE: "Clôture" vs "Fermeture" dans la littérature française QUESTION [6 upvotes]: What is the difference between the two terms "clôture" and "fermeture" in mathematical contexts? For example, in topology, is it more common (or more proper) to say "la clôture de l'ensemble A" or "la fermeture de l'ensemble A"? What about other contexts? To frame the question properly, my questions are: is there any difference between the two terms? which one is more frequently used? does the usage depend on the context? and is there any rule of thumb? REPLY [2 votes]: 1) In a topological space the closure of a subset is almost always called adhérence. I have never seen clôture in place of adhérence and I might remember having seen fermeture, but that would be in fairly old literature. 2) Algebraic closure is clôture algébrique, and similarly we have clôture séparable, radicielle, intégrale,.... 3) The word fermeture is used for a relative situation: For example we might say la fermeture algébrique de $\mathbb R$ dans $\mathbb C(X)$ est $\mathbb C$, meaning that the set of elements of $\mathbb C(X)$ algebraic over $\mathbb R$ is $\mathbb C$. Similarly we may speak of: la fermeture séparable de $k$ dans $K$, la fermeture intégrale de $A$ dans $B$, etc.<|endoftext|> TITLE: The Sums of the Subsets are Distinct QUESTION [9 upvotes]: Let $S$ be a set of $n$ positive integers such that no two subsets have the same sum. Prove that $\sum_{a \in S} \frac{1}{a} < 2$. I have found a paper with a nice proof of the statement by S. J. Benkoski and Paul Erdos http://renyi.hu/~p_erdos/1974-24.pdf but I'd still like to see an elementary proof. REPLY [5 votes]: It is enough to show that, among all sequences satisfying $\sum_{i=1}^m a_i \geq 2^m - 1$ for every $m$, the sequence $1,2,4,\dots$ maximizes the sum of the reciprocals. (Any set with distinct subset sums satisfies this bound: the $2^m$ subsets of its $m$ smallest elements have distinct sums, all between $0$ and $a_1+\cdots+a_m$.) Since $1,2,4,\dots$ also satisfies the distinct subset sum property, we're done. Suppose that $a_1 < a_2 < \dots < a_n$ satisfies $\sum_{i=1}^m a_i \geq 2^m - 1$ for every $m$ and $k$ is the first index where $\sum_{i=1}^k a_i > 2^{k} - 1$ (i.e. $a_k > 2^{k-1}$). Then there are two cases: If there is no $j > k$ such that $\sum_{i=1}^j a_i = 2^{j} - 1$. Then replace $a_k$ with $a_k - 1$. Since $\sum_{i=1}^m a_i > 2^m-1$ for all $m \geq k$, replacing $a_k$ with $a_k-1$ maintains $\sum_{i=1}^m a_i \geq 2^m-1$ for all $m$. Since $\frac{1}{a_k} < \frac{1}{a_k-1}$, the sum increases. If there is a $j > k$ such that $\sum_{i=1}^j a_i = 2^{j} - 1$ (and suppose $j$ is the smallest such index). Then, replace $a_k$ with $a_k-1$ and $a_j$ with $a_j + 1$. Since $\sum_{i=1}^m a_i > 2^m-1$ for all $k \leq m < j$, replacing $a_k$ with $a_k - 1$ and $a_j$ with $a_j + 1$ maintains the bounds. Since $a_k < a_j$, $\frac{1}{a_k} + \frac{1}{a_j} < \frac{1}{a_k-1} +\frac{1}{a_j+1}$, the sum increases. Repeat until the sequence is $1,2,4,\dots$. Since $1 + \frac{1}{2} + \dots + \frac{1}{2^{n-1}} < 2$ and the sum $\frac{1}{a_1} + \dots + \frac{1}{a_n}$ increased at each step, then $\frac{1}{a_1} + \dots + \frac{1}{a_n} < 2$.<|endoftext|> TITLE: What is the algebraic intuition behind Vieta jumping in IMO1988 Problem 6? QUESTION [98 upvotes]: Problem 6 of the 1988 International Mathematical Olympiad notoriously asked: Let $a$ and $b$ be positive integers and $k=\frac{a^2+b^2}{1+ab}$. Show that if $k$ is an integer then $k$ is a perfect square.
The usual way to show this involves a technique called Vieta jumping. See Wikipedia or this MSE post. I can follow the Vieta jumping proof, but it seems a bit strained to me. You play around with equations that magically work out at the end. I don't see how anyone could have come up with that problem using that proof. Is there a natural or canonical way to see the answer to the problem, maybe using (abstract) algebra or more powerful tools? In addition, how can someone come up with a problem like this? REPLY [4 votes]: There is a generalization of this problem too, the one proposed in CRUX, Problem 1420, by Shailesh Shirali: If $a, b, c$ are positive integers such that $$0 < a^2 + b^2 -abc \le c,$$ then $a^2 + b^2 -abc$ is a perfect square.<|endoftext|> TITLE: Does the union of two not disjoint open balls always contain the line connecting the two centers? QUESTION [9 upvotes]: Maybe I have been drawing wrong, but the intuition I got from drawing 2d circles suggests the above statement may be correct. I have yet to come up with a proof in $\mathbb{R}^n$ or maybe normed or metric spaces. I am not sure about the most general settings that allow the above statement to hold; I think I will have to restrict the statement to Hilbert spaces, because it is easy to draw counter-examples in other norms. The crux seems to be that for two not disjoint open balls $B_{r_1}(x)$ and $B_{r_2}(y)$ in norms other than 2, the line $\overline{xy}$ does not need to cross the intersection $B_{r_1}(x) \cap B_{r_2}(y)$. So I think the statement I want to prove is this: Let $H$ be a Hilbert space and $x,y \in H$, $r_1, r_2 > 0$. If $B_{r_1}(x) \cap B_{r_2}(y) \neq \emptyset$ then $$\overline{xy} \cap (B_{r_1}(x) \cap B_{r_2}(y)) \neq \emptyset.$$ Does the above statement maybe characterize a certain type of geometric object? I am thinking of a statement like: $A$ a subset s.t. the union of $A$ with a subset $B$ contains a line connecting their "center of mass". REPLY [17 votes]: This is true in any metric space whatsoever, if by "the line between $x$ and $y$" you mean "the set of points $z$ such that $d(x,z)+d(y,z)=d(x,y)$". This includes the standard lines in ${\bf R}^n$, as well as normed spaces and geodesic spaces (in a non-uniquely geodesic space, e.g. a sphere with the intrinsic metric, this "line" will be the union of geodesics, or at least contain it). Take any two intersecting balls $B_x=B(x,r_x)$ and $B_y=B(y,r_y)$. Take any point $z$ on the line between $x$ and $y$. Then by definition, $d(x,z)+d(y,z)=d(x,y)$. But since the two balls intersect, $r_x+r_y>d(x,y)$, and hence $d(x,z)+d(y,z)<r_x+r_y$. It follows that $d(x,z)<r_x$ or $d(y,z)<r_y$ (otherwise the sum would be at least $r_x+r_y$), that is, $z\in B_x\cup B_y$, as desired.<|endoftext|> TITLE: What is the intuition behind a function being differentiable if its partial derivatives are continuous? QUESTION [8 upvotes]: I have seen the proof of it, but I can't quite intuitively understand why continuous partial derivatives imply that a function is differentiable. Thanks! EDIT: The only reason I can think of is that if the partials exist at a point and are continuous at a neighborhood around that point, then it is only logical that the tangent plane to that point exists because the equation for the tangent plane at that point would be a good approximation of the function around that point.
If the partials exist at the point but are not continuous in a neighborhood around that point, then I can't see how a tangent plane at the point can exist, because it would not be a good approximation of the function near the point, since the function would have a discontinuous slope along certain direction(s) (by the definition of a partial derivative). Is my intuition right? REPLY [6 votes]: I'll use the notation $D_i f$ to denote the $i$th partial derivative of a function $f$. To say that a function $f:\mathbb R^2 \to \mathbb R$ is differentiable at a point $(x,y)$ means that there exists a linear transformation $L:\mathbb R^2 \to \mathbb R$ such that the approximation $$ f(x+\Delta x, y + \Delta y) \approx f(x,y) + L(\Delta x, \Delta y) $$ is good when $\Delta x$ and $\Delta y$ are small. (Exactly what "good" means can be made precise.) So how can we approximate $f(x + \Delta x, y + \Delta y)$? Here's a way that you can visualize if you draw the points $(x,y), (x + \Delta x, y)$, and $(x + \Delta x, y + \Delta y)$: \begin{align} f(x+\Delta x, y + \Delta y) &= f(x,y) + f(x+\Delta x, y) - f(x,y) + f(x+\Delta x, y+\Delta y) - f(x+\Delta x, y) \\ & \approx f(x,y) + D_1 f(x,y) \Delta x + D_2 f(x + \Delta x, y) \Delta y \\ \tag{1}&\approx f(x,y) + D_1 f(x,y) \Delta x + D_2 f(x , y) \Delta y. \end{align} This suggests that we can take $L$ to be the linear transformation defined by $$ L(\Delta x, \Delta y) = D_1 f(x,y) \Delta x + D_2 f(x , y) \Delta y. $$ But here's the crucial point: in equation (1), we assumed that $D_2 f(x,y) \approx D_2 f(x + \Delta x, y)$. In order to guarantee that this approximation is sufficiently accurate (for small values of $\Delta x$), we need to assume that $D_2 f$ is continuous at $(x,y)$.<|endoftext|> TITLE: $(a+2)^3+(b+2)^3+(c+2)^3 \ge 81$ while $a+b+c=3$ QUESTION [6 upvotes]: I need to show that $(a+2)^3+(b+2)^3+(c+2)^3 \ge 81$ while $a+b+c=3$ and $a,b,c > 0$, using means of univariate analysis. It is intuitively clear that $(a+2)^3+(b+2)^3+(c+2)^3$ is at its minimum (when $a+b+c=3$) if $a,b,c$ have "equal weights", i.e. $a=b=c=1$. To show that formally one can firstly fix $0<a<3$ and then consider $b+c=3-a$ with $b,c>0$. REPLY [2 votes]: Alternately, here's a hint: $$(x+2)^3-27 -27(x-1) =(x-1)^2(x+8)\geqslant 0$$<|endoftext|> TITLE: Applying surface integral on the Mobius strip QUESTION [5 upvotes]: I'm trying to apply the surface integral on the Mobius strip. I know the Mobius strip's surface area can be easily calculated by getting the area of a piece of paper before it got twisted, but this is my challenge through mathematics and I really want to complete it. I first tried to apply the surface integral for the scalar-valued case. According to this link, in the 2nd part of the anonymous answer it is said that I need to first split up the Mobius strip so that it becomes orientable. Is this step actually required? Because someone suggested to me in my previous question that I don't actually need to split it, as 1) I need to do the integration from $0$ to $4\pi$, thinking of it like painting, and 2) that gives two intervals with distinct opposite normal vectors. I agree with this idea, but still I'm confused. Is it then more appropriate to interpret the Mobius strip in vector form and apply the surface integral of vector fields? REPLY [2 votes]: In order to compute the surface area of any surface, orientable or not, we need an essentially bijective parametric representation of that surface. Nonorientability of this surface is of no harm if you just want to know its area.
But the flux of some extraneous vector field ${\bf v}$ through such a surface makes no sense, because it is impossible to define in a coherent way whether the flux of ${\bf v}$ through a surface element at some point ${\bf x}\in S$ should be counted positive or negative. Now for the area: Assume that your Moebius band $S\subset{\mathbb R}^3$ of width $2b$ and soul radius $R$ is presented in the form $$S:\quad (\phi,t)\mapsto{\bf x}(\phi,t):=\left(R+t\cos{\phi\over2}\right)(\cos\phi,\sin\phi,0)+t\sin{\phi\over2}(0,0,1)$$ with parameter domain $B:=[{-}\pi,\pi]\times[-b,b]$. In this way the complete set $S$ is produced in a one-one way, except for the following: There is a (vertical) suture segment corresponding to $\phi=\pm\pi$ and $-b\leq t\leq b$ which is covered twice "with opposite direction". In order to compute the area of $S$ we need not bother about the surface normal; we just compute the integral $${\rm area}(S)=\int_B\>\bigl|{\bf x}_\phi\times{\bf x}_t\bigr|\>{\rm d}(\phi,t)\ .\tag{1}$$ I didn't compute this integral. Its value is about $2\pi R\cdot2b$. If you want to paint the Moebius band on both sides you need paint for twice this area, as you would if you paint a rectangle of given area on both sides.<|endoftext|> TITLE: What's the correct categorical definition of a topological submersion? QUESTION [7 upvotes]: Topological embeddings are regular monos in the category of topological spaces. Topological immersions are arrows which are locally embeddings, i.e. it's possible to restrict to an open cover such that each restriction is a topological embedding. What about topological submersions? What's their categorical definition? The symmetric definition would be maps that are locally regular epimorphisms. The regular epimorphisms of the category of topological spaces are quotient maps. This answer notes surjective smooth submersions possess the smooth analogue of the universal property of quotient maps, but are not characterized by it (not all arrows possessing this universal property are surjective smooth submersions). The nlab page lists the general definition as an arrow such that every point of the domain is in the image of a local section. This is discussed in this MO question but I can't make much out of it. Since the tangent-space definitions of smooth immersions and submersions are dual, I was hoping for something along those lines in the topological case. What's the correct definition? REPLY [3 votes]: First, I don't think one can call something the "correct definition". Different definitions will be suitable for different authors in different circumstances. One reasonable definition of a topological submersion (which I pulled from F. Lin, "A Morse-Bott approach to monopole Floer homology and the Triangulation conjecture") is as follows. Consider a pointed topological space $(Q,q_0)$, let $\pi: S \to Q$ be a continuous map and consider $S_0 \subset \pi^{-1}(q_0).$ We say that $\pi$ is a topological submersion along $S_0$ if for every $s_0 \in S_0$ we can find a neighborhood $U \subset S$ and a neighborhood $Q' \subset Q$ of $q_0$ with a homeomorphism $(U \cap S_0) \times Q' \to U$ commuting with $\pi$. (This is quite similar to the suggestion in Tom Goodwillie's answer in the question you linked.) Then one would say that $\pi$ is a topological submersion if it is a topological submersion with respect to any choice of $q_0$ and with $S_0 = \pi^{-1}(q_0)$. This is more or less the statement that "locally in the domain and codomain, $\pi$ is a fiber bundle projection."
This, to me, is the topological essence of what a submersion is. To further back this definition up, a CAT submersion (where CAT = TOP, PL, DIFF) is defined in Kirby-Siebenmann to be a map $f: X \to Y$ such that near every point $x \in X$, there is a neighborhood $U$ with $f(U)$ open and $U \to f(U)$ isomorphic as a map in CAT to the projection map of a product (aka, the map is locally a trivial fiber bundle). This is precisely Lin's definition for all $q_0$ and $S_0 = \pi^{-1}(q_0)$. If you're interested in the notion of transversality, Kirby-Siebenmann have a definition of that, though it's somewhat complicated and uses the notion of microbundles. The definition you give, that through every point $x \in X$ there is a local section of the map with $x$ in its image, is not equivalent (though of course Lin and Kirby-Siebenmann's definitions imply it). The map $f: \Bbb R^2 \to \Bbb R$, $f(x,y) = xy$, satisfies your definition; a section passing through $(x_0, y_0)$ with $y_0 \neq 0$ is given by $s(t) = (t/y_0,y_0)$, and similarly for $x_0 \neq 0$; for $(x_0,y_0) = (0,0)$ take $s(t) = (\operatorname{sgn}(t)\sqrt{|t|},\sqrt{|t|})$, so that $f(s(t)) = t$. But $f$ is not locally a fiber bundle projection near $(0,0)$. Personally, I don't think that example deserves to be a topological submersion. But to even out my discussion, I'll point out that even in my definition of topological submersion, the preimage of a point under a topological submersion of manifolds doesn't need to be a manifold. To prove this, you just need an example of a topological space $X$ that's not a manifold and a manifold $M$ such that $M \times X$ is a topological manifold. These are reasonably abundant; there are examples with $M = \Bbb R$ and $M \times X = \Bbb R^4$.<|endoftext|> TITLE: Measure for Change of Network after adding one edge QUESTION [5 upvotes]: I have an undirected network. I want to have a measure which tells me how much adding a certain edge changes the network. Please have a look at this example: Here, the black edges represent the original network. It is obvious that by adding the green edge, the network structure will not change a lot. However, if I add the red edge, the network changes significantly. Are there known methods to study such changes? I was thinking about a measure $m(N_{orig},N_{new})$ (where $N_{orig}$ and $N_{new}$ stand for the original and modified network, respectively) that takes into account the distance $d$ between all vertices: $$ m(N_{orig},N_{new})=\sum_{\forall (V_i,V_j)} \left(d_{orig}(V_i,V_j) - d_{new}(V_i,V_j) \right) $$ where $V_i$ are the vertices of the network. In the example network above, $m(N_{orig},N_{green})=2$ and $m(N_{orig},N_{red})=90$. It is similar to the difference of the average path length, as indicated by DavidV in the comments. This would be great, but the network distance is expensive to calculate for large networks (roughly $\mathcal{O}(V^2\cdot\log(V)+V\cdot E)$ for Johnson's algorithm or $\mathcal{O}(V^3)$ for the Floyd–Warshall algorithm). My network has $V \approx 5{,}000$ and $E \approx 900{,}000$. As I want to calculate the measure for many added edges, it seems to be too expensive for me. I'm happy about every suggestion! Thanks REPLY [2 votes]: A simple way to measure how different the network is after adding an edge is to find the length of the original shortest path between the two vertices that the new edge connects. In your example, the shortest path between the two vertices connected by the green edge before you added the green edge was two edges long.
Before adding the red edge, the shortest path between the two nodes was seven edges long. After adding the edge, the shortest path between those two vertices becomes one. The basic idea here is that the longer the path between two vertices, the more the graph will change when an edge is added. All you really need is a pathfinding algorithm, so this measurement shouldn't be too slow. Edit: As NicoDean pointed out, this alone does not take neighboring vertices into account. I think I have a way around this. This is going to be a pretty rough way to measure the change, but you should be able to modify it to make it work. I will use his graph as an example, and I will label all the nodes from left to right alphabetically, so the red line will connect nodes $A$ and $J$, and the green line will connect nodes $H$ and $L$. When connecting two nodes that are indirectly connected, you will form a cycle, in which the path length from any node in the cycle to another node will either remain the same or become the length of the original path between the two connected nodes, minus the path length from that node to one of the connected nodes, plus one. In your example, adding the red line creates the cycle $\{A, E, D, F, G, I, H, J\}$. Before adding the red line, the shortest path from $E$ to $J$ was $6$; now it is $7-6+1=2$. You can then calculate the difference between these two values to find the improvement. Do this for every point on the cycle and add it to some running total. That deals with the cycles, and all you really need is the length of the cycle. In your graph, the red edge affected everything, so that messed me up, because all you needed to measure the change was the length of the line. The second thing to note is that adding a line only affects paths that use the line. For instance, the shortest path from $K$ to $H$ would include $IH$, but adding the red edge has no effect on the path length. The last thing you need to consider is which nodes of the cycle the outer nodes are connected to. From there, you should find the length of all the shortest paths from that outer node to any point in the cycle before and after adding the edge. This algorithm should start with the nodes that are directly connected to a node in the cycle. For instance, if you consider $B$, it is connected to both $A$ and $D$. The shortest path lengths from $A$ and $D$ to any node in the cycle after inserting the red edge are listed below.
You then do the same process for all the nodes that have at least one edge connected to the cycle and add that to the running total for the improvement. Then, you do the same process for all the nodes connected to those initial nodes, calculate the improvement, and continue on until you have calculated the total improvement. The greater the improvement, the larger the change. Please comment if anything is unclear to you or you see any errors in my reasoning. This algorithm has the basic idea down, but you should be able to use it as a basis for the algorithm you will use. The only problem I can see with this algorithm is that if you have a preexisting cycle, adding an edge between any two nodes in that cycle will split it up into two cycles. This doesn't really seem to be a problem. Edit: Another minor problem I realized is that if you have two connected vertices that are the same distance from the cycle, the improvement may have a circular dependency. This can be solved by not considering that edge until all the other edges of the two nodes have been considered. Then, you can run the improvement algorithm on both of the edges, making sure to add one to all the other node's edge lengths. Optimization: Any node that does not see any improvement might not need to be considered when dealing with nodes attached to it. This should have a runtime of something like $O(E+V)$ for the simple breadth-first search plus $O((E-L)*L^2)$, where $E$ is the total number of edges and $L$ is the length of the cycle, as you need to find the shortest path length for each node in the cycle, which is where you get the $L^2$, for every single edge outside the cycle, which is where you get the $E-L$. It looks like $L$ should be relatively low because of the ratio of the number of edges to the number of edges you would need to have a complete graph is pretty high. Actual Running of the Algorithm I am going to use my algorithm to calculate what I call the "improvement" on the graph for both the red line and the green line. I will do the red line first, as it forms a singular cycle while the green line forms three new cycles. The first thing we do is find the shortest path between the two nodes and we calculate its length ($7$). This will take $O(V+E)$ time. The next thing we need to do is calculate the path length for every node that will be in the cycle before the edge is added. That step is extremely simple, as you only need to consider the edges in the cycle. The next thing we do is calculate the shortest path length for all nodes in the cycle not within the ceiling of half of the length ($\lceil{\frac7 2}\rceil=4$) of the two connected nodes after adding the edge. We know that we only need to consider these nodes because only these nodes will experience a change in the shortest path. In this example, I will use the notation $(X,Y)=k$ to denote the length of the shortest path between $X$ and $Y$. Before $$(A,I)=5;*(E,I)=4;*(D,I)=3$$ $$(A,H)=6;(E,H)=5;*(D,H)=4$$ $$(A,J)=7;(E,J)=6;(D,J)=5$$ After $$(A,I)=3;*(E,I)=4;*(D,I)=3$$ $$(A,H)=2;(E,H)=3;*(D,H)=4$$ $$(A,J)=1;(E,J)=2;(D,J)=3$$ The starred nodes experience no change and are only there for comparison purposes later in the algorithm. There are tons of patterns in this that you should be able to figure out just from looking at the graph, which you can probably use to optimize. You then add up the total improvement, then double it because the path improvements apply in both directions. The total improvement so far is $2*((3*(5-3))+(2*(6-2))+(1*(7-1))=36$. 
From there, you calculate for the nodes that are not part of the cycle but touch the cycle. Starting with $B$, you compare the shortest paths between all the nodes it connects to on the cycle, which are $A$ and $D$. Before $$(A,I)=5;(D,I)=3\implies(B,I)=4$$ $$(A,H)=6;(D,H)=4\implies(B,H)=5$$ $$(A,J)=7;(D,J)=5\implies(B,J)=6$$ After $$(A,I)=3;(D,I)=3\implies(B,I)=4$$ $$(A,H)=2;(D,H)=4\implies(B,H)=3$$ $$(A,J)=1;(D,J)=3\implies(B,J)=2$$ The improvement for $B$ is therefore $(4-4)+(5-3)+(6-2)=6$. The total improvement is now $36+6=42$. Now we consider $C$, which is connected to $D$ and $F$. Because the only path that improved was the $(D,J)$, we only need to consider the $(C,J)$ path. Before $$(D,J)=5;(F,J)=4\implies(C,J)=5$$ After $$(D,J)=3;(F,J)=4\implies(C,J)=4$$ The improvement of $C$ is one; the total improvement is $42+1=43$. The next node to consider is $K$. This is where things get interesting. Instead of finding $(K,J)$, we start finding $(K,A)$ because $K$ is connected to $I$, which has no $(I,J)$ path that we need to check. Also, we need to consider that $K$ connects to $M$, which is not part of the cycle. To do this, we ignore that edge until we calculate the distance change for both $K$ and $M$ so that we can compare them. For $K$, we only need to consider the $(A,I)$ path before and after. Before $$(A,I)=4\implies(K,A)=5$$ After $$(A,I)=3\implies(K,A)=4$$ $K$ improves by one, so the total improvement is now $44$. Now we consider $M$, which also has an edge that we will ignore for now. Before $$(A,H)=6\implies(M,A)=7$$ $$(E,H)=5\implies(M,E)=6$$ After $$(A,H)=2\implies(M,A)=3$$ $$(E,H)=3\implies(M,E)=4$$ The total improvement is now $50$. We now compare the path lengths for $K$ and $M$ to see if there is a shorter path through either of them. You would redo $K$ considering $M$ as part of the cycle and vice versa. In this case, there is no shorter path, but if $M$ was connected to $J$, you would see an improvement. Finally we consider $L$, which is connected to both $I$ and $J$. Before $$(A,I)=5;(A,J)=7\implies(L,A)=6$$ $$(E,I)=4;(E,J)=6\implies(L,E)=5$$ $$(D,I)=3;(D,J)=5\implies(L,D)=4$$ After $$(A,I)=3;(A,J)=1\implies(L,A)=2$$ $$(E,I)=4;(E,J)=2\implies(L,E)=3$$ $$(D,I)=3;(D,J)=3\implies(L,D)=4$$ This gives us a total improvement of $56$. Finally, we need to consider the $LM$ edge. This edge does not produce an improvement. Our total improvement is $56$. Now, for the green edge, we need to consider the three cycles it forms. Once we do that, we need to see if there is any improvement in any of the cycles. We see that there is no improvement anywhere in the cycles except the nodes $H$ and $L$, which see an improvement of $1$ each, giving us a total improvement of $2$. Since none of the nodes in the cycle that touch nodes outside the cycle see any improvement, we can stop the algorithm here for a final improvement of $2$.<|endoftext|> TITLE: An infinite series $1+\frac{1}{2+\frac{1}{3+\frac{1}{4+.......}}}$ QUESTION [5 upvotes]: How can we find the value to which the following series converges, if it converges to a finite number? If else, how can we prove that it is divergent?$$1+\frac{1}{2+\frac{1}{3+......}}$$ REPLY [2 votes]: We can see that $$ 1 < 1+\frac{1}{2+\frac{1}{3+......}} $$ and $$ \frac{1}{2} > \frac{1}{2+\frac{1}{3+......}} $$ so $$ 1 < 1+\frac{1}{2+\frac{1}{3+......}} < \frac{3}{2} $$<|endoftext|> TITLE: Why are structures interesting? QUESTION [11 upvotes]: Structures are sets together with some constants, relations, and functions on that set. 
They are studied in many areas of mathematics: For example, universal algebra studies algebraic structures (i.e. structures with no relations, only functions and constants) in general. Also, model theory studies the connection between a formal logic and structures. As a last example, let me mention abstract algebra. This field studies more concrete structures, such as groups, fields, rings, monoids, and so on. Okay. Structures are very important in mathematics. I am interested in the history and the motivation of the definition of structure. What makes structures so interesting? Why does the study of structures give us such powerful tools? What is the motivation behind the definition of the term structure? (For example, why can't a structure have more than one carrier set?) How did the term structure historically develop? What were the first motivating examples? Why did one choose the name "structure"? I personally interpret this word ("structure") as being some kind of pattern. What has this to do with the mathematical notion of structures? REPLY [4 votes]: I'll try to answer some of your questions. What makes structures so interesting? Why does the study of structures give us such powerful tools? One very common and useful thing in mathematics is abstraction, a.k.a. generalization. Thinking in the language of structures allows mathematicians to generalize many properties to lots of different objects: for instance, Cauchy's theorem tells us that for every prime divisor of the cardinality of a finite group there is an element whose order is that prime, no matter whether it is a group of permutations or isometries or matrices. Basically it is the maths version of "two birds, one stone"...although "lots of birds, one stone" would be more appropriate, at least in my opinion. Another important thing given by the notion of structure is the notion of (homo)morphism. Via morphisms we are able to relate and compare different objects having similar structure, and this is useful for discovering and proving properties of different objects: for instance, in group theory we can study properties of different groups through their homomorphisms into groups of permutations or groups of matrices (there is an entire field in maths that deals with these objects, called representation theory; it has important applications outside mathematics, for instance in physics). Note that without the notion of structure the notion of morphism cannot be stated. We could go on for very long answering just these two questions, but I'd rather stop here to avoid being too boring (feel free to ask in the comments for additional stuff). What is the motivation behind the definition of the term structure? (For example, why can't a structure have more than one carrier set?) Probably you should be more specific about which notion of structure you are referring to, because there are multi-sorted structures which have families of sets as carriers. Examples of these multi-sorted structures are modules over generic rings (or, if you like a more specific example, vector spaces over different fields). Another example that I cite (because I'm a little biased) is that of a category, which is a multi-sorted structure. How did the term structure historically develop? What were the first motivating examples?
I'm not sure about that, but I think that it came first during the 1800s, when English algebraists observed the similar... structure underlying different kinds of algebras, and realized that through that notion they could prove a lot of interesting properties (like the existence of solutions to equations) for large classes of structures at once by reasoning in terms of these abstract structures, instead of redoing the same work for all the concrete examples. These observations led to what is called modern or abstract algebra. Why did one choose the name "structure"? I personally interpret this word ("structure") as being some kind of pattern. What has this to do with the mathematical notion of structures? This question is itself your answer. The various notions of structure capture different common patterns in large classes of objects. For instance, the notion of a group captures a pattern common to symmetries, isometries and various kinds of number systems: namely, the fact of having a binary operation that is associative, with unit and inverses. Similar arguments apply to other kinds of structures (rings, vector spaces, etc). I hope this (maybe too long) answer helps.<|endoftext|> TITLE: Why don't we use the partial derivative symbol for normal derivatives? QUESTION [10 upvotes]: In Calculus, the derivative w.r.t. $t$ of a single-variable function $f$ is denoted: $$\frac{df}{dt}$$ While the derivative w.r.t. $t$ of a multi-variable function $f$ is denoted: $$\frac{\partial f}{\partial t}$$ The justification being that the latter is distinct from the first in that it is a partial derivative. Why does the latter warrant its own symbol? A single-variable function is just a special case of a function taking $n$ variables, so isn't the derivative of a single-variable function a partial derivative in a general sense? The two operations seem to behave the same; is there any case where mixing them could be problematic? REPLY [10 votes]: This is taught very poorly in calculus courses, and you're confused because the notation is sloppy. The insight you seek is the following1: You take total derivatives of an expression with respect to a variable. You take partial derivatives of a function with respect to its parameters.2 Before I go on, it's critical that you understand the following terminology: A ("formal") parameter is a property of the function description itself. For example, the $a$ and $b$ in the function definition $f(a,b) = a + b$ are parameters. An argument, or "actual" parameter, is a property of an expression that is a call to a function. For example, the $x$ and $y$ in the expression $g(x) + g(y)$ are arguments to $g$. Now here's the kicker: if $h(x) = x^2$ then partial and total derivatives can be different: \begin{align*} \frac{\partial}{\partial x} f(x, h)\ =\ 1\ \color{red}{\neq}\ 2x+1\ =\ \frac{d}{dx}f(x,h) \end{align*} Makes sense? :-) I hope it doesn't, because it was sloppy. The notation above is extremely common, but not really correct.3 Remember I just told you partial derivatives are with respect to parameters whereas total derivatives are with respect to variables. This means that, if we've defined $$f(a,b) = a + b$$ as above, then it's actually incorrect (although very common) to write $$\frac{\partial}{\partial x}f(x,h)$$ for three reasons: $x$ is not a parameter to $f$, but an argument to it. The parameter is $a$. The second argument to $f$ should be a number (like $h(x)$), not a function like $h$. $f(x, h)$ is not a function, but a call to a function. It evaluates to a number.
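(As a quick aside: the distinction is easy to check mechanically. Here is a small Python/sympy sketch of the same example — the use of sympy and these variable names are my own illustration, not part of the original argument:

import sympy as sp

x, a, b = sp.symbols('x a b')
f = a + b    # f(a, b) = a + b; a and b are the (formal) parameters
h = x**2     # h(x) = x^2

# partial derivative: differentiate f with respect to the parameter a,
# then evaluate at the arguments (x, h(x))
partial = sp.diff(f, a).subs({a: x, b: h})    # -> 1

# total derivative: substitute the arguments first, then differentiate
# the resulting expression with respect to the variable x
total = sp.diff(f.subs({a: x, b: h}), x)      # -> 2*x + 1

print(partial, total)

The partial derivative acts on the function through its parameter; the total derivative acts on the expression after substitution.)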
So, to really write the above derivatives correctly, I should have written: \begin{align*} \left.\frac{\partial f}{\partial a}\right|_{\substack{a=x\phantom{(h)}\\b=h(x)}}\ =\ \left.1\right|_{\substack{a=x\phantom{(h)}\\b=h(x)}}\ =\ 1\ \color{red}{\neq}\ 2x + 1\ =\ \frac{d}{d x} f(x, h(x)) \end{align*} at which point it should be obvious the two aren't the same. Makes sense? :) 1 This should be easier to understand if you know a statically typed programming language (like C# or Java). 2 You can define partial derivatives for expressions as well, but it'd just be implicitly assuming you have a function in terms of that variable, which you are differentiating, and then evaluating at a point whose value is also denoted by that variable. 3 Notice the expression wouldn't "type-check" in a statically typed programming language.<|endoftext|> TITLE: What can the fixed point spaces of an involution on the Klein bottle be? QUESTION [7 upvotes]: Suppose that I have a smooth involution acting on a Klein bottle. Then, what are the constraints on its set $F$ of fixed points? The first guess is to use the Lefschetz formula for the Euler characteristic of $F$. However, this formula doesn't detect submanifolds in $F$ of Euler characteristic 0. So, for example, I can ''add'' to $F$ any bunch of disjoint circles, which seems quite counterintuitive. REPLY [8 votes]: $F$ can be either one circle, one circle and two points, two circles, no points, or two points (or obviously the whole Klein bottle). Each of these corresponds to a single isomorphism class of involution. I construct and classify them below. If you're only interested in what the possible fixed point spaces are, we can do that with equivariant cohomology. Smith theory says that for any smooth involution on a compact smooth manifold, $\dim H^*(M;\Bbb Z/2) \geq \dim H^*(F;\Bbb Z/2)$, where here I'm taking the total dimension (sum of Betti numbers). So $F$ is a union of circles and points, restricting us to either two circles, one circle and at most 2 points, or at most 4 fixed points. Of course, this leaves out the cases of 1, 3, and 4 fixed points (which can be dealt with by Lefschetz's fixed point theorem: because $\iota$ is a homeomorphism and the index of each fixed point is 1, there has to be either $0$ or $2$ isolated fixed points) or 1 circle and 1 point, in which case delete the circle; if the complement of a small open neighborhood is connected, it has rational Betti numbers $b_0 = b_1 = 1$ and all others zero, so must have zero or two fixed points by the same logic; if it's disconnected, the involution swaps the components and has no more fixed points. Below is a by-hand classification of the involutions of the Klein bottle which never invokes Smith theory (which involves spectral sequences and equivariant cohomology, so may be more input data than you'd like). I will occasionally say that there are only one or two types of involutions on a certain manifold; if left unjustified, these are not difficult. Suppose $F$ contains a circle. The involution must act on the normal bundle of the circle by negation (or else the whole manifold would be fixed). If the circle's complement were disconnected, then the involution would identify the two components; so the involution must be isomorphic to the involution swapping two components of $\Bbb{RP}^2 \# \Bbb{RP}^2$ (since this is the only connected sum decomposition of the Klein bottle with both components homeomorphic).
This is equivalent to saying that, given the presentation of the Klein bottle as the double of the Möbius band (take two copies and glue them together by the identity along the boundary), your involution must just swap the two bands. Suppose the circle's complement is connected, and that the normal bundle is trivial. Then the involution restricts to an involution of the complement (which we may compactify by adding two different circles along the two different ends); Euler characteristic considerations show that the resulting object is the cylinder $S^1 \times [-1,1]$, and your involution swaps the two boundary circles, with $\iota(z, -1) = (\bar{z},1)$. (It must do this to descend to the Klein bottle.) To extend this to the whole cylinder, set $\iota(z,t) = (\bar{z},-t)$; this has two further fixed points. Indeed, this is up to isomorphism the only extension, as you can see by some fiddling with Lefschetz and equivariant CW decompositions (it cannot fix a circle because if so you could prove that $z$ and $\bar z$ are homotopic, and then it must have two fixed points, etc). (This is what you get when you let the negation involution of $\Bbb R^2/\Bbb Z^2$ descend to the Klein bottle.) Now assume the normal bundle is nontrivial. Deleting a small invariant neighborhood of the circle leaves a Möbius band; so this involution is what we get when we glue two involutions of a Möbius band together. But an involution of the Möbius band (thought of as a $[-1,1]$ bundle over the circle) is necessarily either the identity or negation on the fiber or base (so there are four total). So your example is what you get when you glue two of the latter kind together. So we see that if $F$ has a circle, the involution is either what you get when you swap the components of a connected sum decomposition $\Bbb{RP}^2 \# \Bbb{RP}^2$, in which case the fixed point set is a circle, or acting by a nontrivial involution in each factor of that connected sum (the connected sum circle is invariant but not fixed under the involution). In this case $F$ is two circles. In all cases, if $F$ contains a circle, then it has no isolated fixed points. Now suppose the fixed point set is 0-dimensional. There is a fixed point free involution of the Klein bottle (with quotient the Klein bottle). Presenting the Klein bottle as $S^1 \times \Bbb R/(\theta, t) \sim (\bar{\theta}, t+1)$, with $\theta$ a unit complex number, just send $(\theta, t) \to (-\theta, t)$. Suppose the fixed point set is nonempty. Then the involution acts by negation on the tangent space of each fixed point, and the fixed point index of each is $1$. So we see that the Lefschetz number is the number of fixed points, and hence an involution with isolated fixed points must act by $-1$ on $H_1(K;\Bbb Q)$, and have precisely two fixed points. Deleting small invariant neighborhoods of each point, we get a fixed point free involution on $N_{2,2}$, the genus 2 nonoriented surface with two boundary components, that preserves (aka does not swap) the boundary components. $\chi(N_{2,2}) = -2$, so the quotient must be $N_{1,2}$ (since it will have 2 boundary components). You can classify by hand the connected double covers of $N_{1,2}$: they are precisely $\Sigma_{0,4}$ (the oriented double cover), $N_{1,3}$, and $N_{2,2}$.
So there is actually a fixed point free involution upstairs extending the one we chose on the boundary (unique up to isomorphism) and thus there is a two-fixed-point involution of the Klein bottle.<|endoftext|> TITLE: Galois group of $\Bbb Q(\{\sqrt[2^n]{2}, \zeta_{2^n} \;:\; n \geq 1 \})$ over $\Bbb Q$ QUESTION [6 upvotes]: Let $K_n = \Bbb Q(\sqrt[2^n]{2}, \zeta_{2^n})$ be a Galois extension of $\Bbb Q$ (where $ \zeta_{2^n}=e^{2\pi i / 2^n}$), and let $K$ be the compositum of all the $K_n$'s. It is a Galois extension of $\Bbb Q$. Notice that $m \leq n \implies K_m \subset K_n$. What does the Galois group of $K/\Bbb Q$ look like? According to the proposition $1.1.$ of this document, $\text{Gal}(K/\Bbb Q)$ is the inverse limit of $\text{Gal}(L/\Bbb Q)$ where $L/\Bbb Q$ is a finite Galois extension such that $L \subseteq K$. In particular, we have to consider $L=K_n$, which has Galois group isomorphic to the "affine group" $\Bbb Z/2^n\Bbb Z \rtimes (\Bbb Z/2^n\Bbb Z)^{\times}$ (the holomorph of $\Bbb Z/2^n\Bbb Z$). What other subextensions should I consider? Moreover, I have some trouble as for understanding the inverse limit of all these Galois groups... Some related questions are: (1), (2), (3). Here is a question with a similar infinite extension. Any help would be highly appreciated. Thank you in advance! REPLY [2 votes]: First, it does suffice to take the inverse limit of just your groups $\mathrm{Gal}(K_n/\mathbb Q)$, instead of including all finite Galois subextensions. This is a general fact, only requiring the $K_n$'s to be finite Galois extensions of the base field whose union is all of $K$. The proof is the same as in the notes you linked: an element of $\mathrm{Gal}(K/\mathbb Q)$ restricts to elements of $\mathrm{Gal}(K_n/\mathbb Q)$, giving a map $\mathrm{Gal}(K/\mathbb Q) \to \lim\limits_{\leftarrow n}\mathrm{Gal} (K_n/\mathbb Q)$. Then an element of $\mathrm{Gal}(K/\mathbb Q)$ is determined by its restrictions (giving injectivity), and compatible automorphisms of the $K_n$'s glue to an automorphism of $K$ (giving surjectivity). Second, the inverse limit of your groups is the affine group over the $2$-adic integers, $\mathbb Z_2 \rtimes \mathbb Z_2^{\times}$. Proof. Regard the various affine groups as the groups of linear functions $mx + b$, where $m$ is invertible. (The group law is composition. Explicitly, for $K_n$, $mx + b$ is the automorphism sending $\zeta_{2^n}^x 2^{1/2^n}$ to $\zeta_{2^n}^{mx+b} 2^{1/2^n}$.) (Notice that the restriction maps from $\mathrm{Gal}(K_{n+1}/\mathbb Q)$ to $\mathrm{Gal}(K_n/\mathbb Q)$ are the obvious quotient maps, given by modding $m$ and $b$ out by $2^n$.) Then $\mathbb Z_2 \rtimes \mathbb Z_2^{\times}$ maps (compatibly) to each $\mathbb Z/2^n \mathbb Z \rtimes (\mathbb Z/2^n \mathbb Z)^{\times}$ by taking $m$ and $b$ mod $2^n$, so it maps to the inverse limit. But this map is an isomorphism, because $\mathbb Z_2$ is (by definition) the inverse limit of $\mathbb Z/2^n \mathbb Z$ under the quotient maps, and $(\mathbb Z_2)^{\times}$ is the inverse limit of $(\mathbb Z/2^n \mathbb Z)^{\times}$. (The latter is because an element in either $\mathbb Z/2^n \mathbb Z$ or $\mathbb Z_2$ is invertible if and only if it is $1 \bmod 2$.)<|endoftext|> TITLE: Mathematical research outside academia QUESTION [19 upvotes]: My goal up through grad school had been to spend my life doing academic, research mathematics. I got screwed by my advisor and department, though, so that hasn't worked out.
I would nevertheless still like to do math and somehow contribute. Is there any place outside academia that does serious, professional pure math? Ideally, I'd like to at least publish a few research papers on vaguely-mathematical topics, but it's hard to find jobs that even afford that; in fact, it's hard to find jobs that use math beyond the undergrad level. It seems like my best bet is finding something in security related to theoretical computer science, but that's not a field I have any interest in; besides, I'm not sure how interested the industry would be in someone coming in from a pure math background. So, is there any reasonable equivalent or approximation to research mathematics in industry; and, if not, what's the closest I can actually get to it? REPLY [6 votes]: This has been here for a while, so I'm not sure it is still a current issue for you. I'm sorry an academic career hasn't worked out to be as you hoped. If you are serious about commercial sector employment, you need to look at it from a commercial point of view. Some large commercial companies are getting interested in basic research, but it seems to me there is still a clear commercial aspect to their hiring--at least from a long-term point of view. If they are going to pay you \$100K a year, it will cost them at least \$200K a year to have you around (social security tax, health insurance, admin and facilities costs, and so on). So they are asking themselves what you can do for them that makes it worth \$200K a year over the long run. Many government agencies are similarly task-oriented but in terms that are not always expressed conveniently in dollars (e.g., political objectives, security, etc.). There is much more unadvertised abstract math research in the government sector than you might imagine. The only way to find out is to make applications that mention the full range of your abilities and interests and see how agency recruiters interpret your resume. For example, I can almost guarantee there is a lot of mathematics with security implications for what you call 'theoretical computer science' that does not pop immediately to mind. But publishing results in journals will probably not be involved. I think you need a fresh perspective and I can't recommend specifics that are sure to be helpful. So generalities: Go to meetings, talk with recruiters, have lunch with nonacademic colleagues about their interests, participate on this site with a 'name' and 'profile' that make you traceable by the persistent, put your (real) name on LinkedIn (if you can deal with the email traffic that will generate), and so on. One decent 'thread' out of dozens may be all you need.<|endoftext|> TITLE: Why are the continued fractions of non-square-root numbers ($\sqrt[a]{x}$ where $a>2$) not periodic? QUESTION [7 upvotes]: Ok, so it is quite amazing how the continued fractions for $\sqrt[2]{x}$ are always periodic for all whole numbers $x$ (where $x$ is not a perfect square): Here is a link I suggest looking at: http://mathworld.wolfram.com/PeriodicContinuedFraction.html ... The following is in the simple continued fraction form: $$\sqrt x = a_1+ \cfrac{1}{a_2 + \cfrac{1}{a_3 + \cfrac{1}{a_4 + \cfrac{1}{a_5 + \cfrac{1}{a_6 + \cfrac{1}{a_7 + \cdots}}}}}}$$ For example: The continued fraction of $\sqrt 7$ is as follows: $[2;1,1,1,4,1,1,1,4,1,1,1,4,\ldots]$ the period in this case is $4$...
and the continued fraction can be written as: $$[2;\overline{1,1,1,4}]$$ Some other examples: $\sqrt2 = [1;\overline{2}]$ Period is 1 $\sqrt3 = [1;\overline{1,2}]$ Period is 2 $\sqrt{13} = [3;\overline{1,1,1,1,6}]$ Period is 5 $\sqrt{97} = [9;\overline{1,5,1,1,1,1,1,1,5,1,18}]$ Period is 11 There are many other sources which show this... but why does this not work for others, such as cube roots? I have written a Java program to compute the continued fraction for the nth root of the numbers between $1$ and $100$ (excluding perfect squares, etc.) for the first 100 terms. Here are the results: $\sqrt[2]{x}$: http://pastebin.com/ZcasfRyP $\sqrt[3]{x}$: http://pastebin.com/XG9UF8hR $\sqrt[4]{x}$: http://pastebin.com/Edp307SE $\sqrt[5]{x}$: http://pastebin.com/9SwwPqUa As you can see, no period... So why is it periodic for square roots, but not for others? An extension of the question: https://math.stackexchange.com/questions/1898902/periodic-continued-fractions-of-non-square-root-numbers-sqrtax-where-a Kind Regards Joshua Lochner REPLY [2 votes]: It's because periodicity of a simple continued fraction amounts to a quadratic equation. I will illustrate this by means of an example: $$ 7 + \cfrac 1 {2 + \cfrac 1 {3+\cfrac 1 {9 + \cdots\vphantom{\dfrac 1 1}}}} $$ and assume $2,\,3,\,9$ repeats. We have $$ 7,\ \overbrace{2,3,9,}\ \overbrace{2,3,9,}\ \overbrace{2,3,9,}\ \overbrace{2,3,9,}\ \ldots $$ This is $$ -2 + \left(9 + \cfrac 1 {2 + \cfrac 1 {3+\cfrac 1 {9 + \cdots\vphantom{\dfrac 1 1}}}} \right) \tag 1 $$ so that $9,\ 2,\ 3$ repeats right from the beginning. Let $x$ be the continued fraction in $(1)$, with $9,\ 2,\ 3$ repeating. Then we get $$ x = 9 + \cfrac 1 {2 + \cfrac 1 {3 + \cfrac 1 x}} $$ Then, since $\dfrac 1 {3+\cfrac 1 x} = \dfrac x {3x+1}$, we have $$ x = 9 + \cfrac 1 {2 + \cfrac x {3x+1}}. $$ Now multiply the numerator and denominator of the fraction after $9+\cdots$ by $3x+1$, getting $$ x = 9 + \dfrac{3x+1}{7x+2} $$ so $$ x = \frac{66x+ 19}{7x+2}. $$ Multiply both sides by $7x+2$: $$ x(7x+2) = 66x+19. $$ And there you have a quadratic equation.<|endoftext|> TITLE: Prove this integral $\int_0^\infty \frac{dx}{\sqrt{x^4+a^4}+\sqrt{x^4+b^4}}=\frac{\Gamma(1/4)^2 }{6 \sqrt{\pi}} \frac{a^3-b^3}{a^4-b^4}$ QUESTION [16 upvotes]: Turns out this integral has a very nice closed form: $$\int_0^\infty \frac{dx}{\sqrt{x^4+a^4}+\sqrt{x^4+b^4}}=\frac{\Gamma(1/4)^2 }{6 \sqrt{\pi}} \frac{a^3-b^3}{a^4-b^4}$$ I found it with Mathematica, but I can't figure out how to prove it. The integral seems quite problematic to me. If the limits were finite, I would do this: $$\frac{1}{\sqrt{x^4+a^4}+\sqrt{x^4+b^4}}=\frac{1}{a^4-b^4}(\sqrt{x^4+a^4}-\sqrt{x^4+b^4})$$ Then, for one of the integrals we will have: $$\int_A^B \sqrt{x^4+a^4} dx=a^3 \int_{A/a}^{B/a} \sqrt{1+t^4} dt$$ This integral is complicated, but quite well known. On the other hand $\int_0^\infty \sqrt{1+t^4}dt$ diverges, so I can't consider the two terms separately. But the integral behaves like I can! If we look at the final expression, it seems like $\int_0^\infty \sqrt{1+t^4}dt=\frac{\Gamma(1/4)^2 }{6 \sqrt{\pi}}$ even though it can't be correct. I have to somehow arrive at the Beta function, since we have a squared Gamma as an answer. I'm interested in this integral, since it represents another kind of mean for two numbers $a$ and $b$.
If we scale it appropriately: $$I(a,b)=\frac{8 \sqrt{\pi}}{\Gamma(1/4)^2 } \int_0^\infty \frac{dx}{\sqrt{x^4+a^4}+\sqrt{x^4+b^4}}= \frac{4}{3} \frac{a^2+ab+b^2}{a^3+ab(a+b)+b^3}$$ So, $1/I(a,b)$ is a mean for the two numbers. REPLY [4 votes]: Another way to split the integrals is to replace the square root by a power $p$: $$\sqrt{x^4 + a^4}\longrightarrow \left(x^4 + a^4\right)^{p}$$ The integral of the separate terms will then converge for $p<-\frac{1}{4}$ and can be expressed in terms of the beta-function. You can substitute $p = \frac{1}{2}$ in the final answer, despite the individual integrals not converging, by invoking analytic continuation. To get to the beta-functions, you can substitute $x = a t$ to get $a$ out of the way, then $u = t^4 + 1$ and finally $u = \frac{1}{v}$ will yield the explicit beta-function form.<|endoftext|> TITLE: Norm of self-adjoint member of $C^*$-algebra QUESTION [5 upvotes]: This question arose from the proof of proposition $1.11(e)$ in chapter $8$ of John B. Conway's A Course in Functional Analysis. This portion of the proposition can be stated: Let $\mathscr{A}$ be a $C^*$-algebra, and let $a\in\mathscr{A}$ be given. If $a=a^*$, then $\|a\|=r(a)$. (Here, $r(a)$ denotes the spectral radius of $a$.) The proof, as stated in the book, proceeds as follows: Since $a^*=a$, $\|a^2\|=\|a^*a\|=\|a\|^2$; by induction, $\|a^{2n}\|=\|a\|^{2n}$ for $n\geq1$. That is, $\|a^{2n}\|^{1/2n}=\|a\|$ for $n\geq1$. Hence $r(a)=\lim\|a^{2n}\|^{1/2n}=\|a\|$. Now I was able to show by induction that $$ \|a^{2^n}\|=\|a\|^{2^n} \qquad (n\geq1),$$ from which the result follows, but I could not prove it as it is stated in the book. So my question is: How can we prove (presumably by induction) that $\|a^{2n}\|^{1/2n}=\|a\|$ for $n\geq1$? Is this simply an error in the book, or can it be done? REPLY [3 votes]: This was corrected in a later edition.<|endoftext|> TITLE: When is $L^\infty(E)$ separable? QUESTION [5 upvotes]: I wonder how to exhibit a measurable set $E \subset \mathbb{R}$ for which $L^\infty(E)$ is separable. It's clear that $L^\infty(E)$ is not separable if $E$ contains any nondegenerate interval (to be honest, I am still a little confused about this result). Any idea? REPLY [7 votes]: If the measure you're using has to be Lebesgue measure, then you're out of luck. The only measurable sets $E$ for which $L^\infty(E)$ is separable are the sets of measure zero, for which $L^\infty(E)$ is the zero vector space. If you allow other measures, then you could let $E$ be a finite set, with some measure that gives each point in $E$ a positive measure. Then $L^\infty(E)$ is finite-dimensional and thus separable. But this is essentially all you can do. Here's a proof for the case of Lebesgue measure; it can be adapted to other situations. If $E$ has positive measure, it can be partitioned into a countable infinity of measurable subsets $A_0,A_1,\dots$ that all have positive measure. To each subset $X$ of $\mathbb N$, associate the function $f_X$ that sends all points from $A_n$ to $1$ if $n\in X$ and to $0$ otherwise. Each of these functions $f_X$ is in $L^\infty(E)$, and the $L^\infty$ distance between any two of them is $1$. So we have uncountably many elements of $L^\infty(E)$, all a distance $1$ from each other.
That prevents separability, since no point can be closer than a distance $\frac12$ to more than one of these points $f_X$.<|endoftext|> TITLE: Find large power of a non-diagonalisable matrix QUESTION [14 upvotes]: If $A = \begin{bmatrix}1 & 0 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}$, then find $A^{30}$. The problem here is that it has only two eigenvectors, $\begin{bmatrix}0\\1\\1\end{bmatrix}$ corresponding to eigenvalue $1$ and $\begin{bmatrix}0\\1\\-1\end{bmatrix}$ corresponding to eigenvalue $-1$. So, it is not diagonalizable. Is there any other way to compute the power? REPLY [11 votes]: Given $\mathrm A$, we compute one Jordan decomposition, $\mathrm A = \mathrm P \mathrm J \mathrm P^{-1}$, where $$\mathrm J = \begin{bmatrix} -1 & 0 & 0\\ 0 & 1 & 1\\ 0 & 0 & 1\end{bmatrix}$$ Note that $\mathrm A^{30} = \mathrm P \mathrm J^{30} \mathrm P^{-1}$. Since $(-1)^{30} = 1$ and $\begin{bmatrix} 1 & 1\\ 0 & 1\end{bmatrix}^{30} = \begin{bmatrix} 1 & 30\\ 0 & 1\end{bmatrix}$, we have $$\mathrm J^{30} = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 30\\ 0 & 0 & 1\end{bmatrix}$$ Thus, $$\mathrm A^{30} = \mathrm P \mathrm J^{30} \mathrm P^{-1} = \cdots = \begin{bmatrix} 1 & 0 & 0\\ 15 & 1 & 0\\ 15 & 0 & 1\end{bmatrix}$$<|endoftext|> TITLE: Is this a new method for finding powers? QUESTION [47 upvotes]: Playing with a pencil and paper notebook I noticed the following: $x=1$ $x^3=1$ $x=2$ $x^3=8$ $x=3$ $x^3=27$ $x=4$ $x^3=64$ $64-27 = 37$ $27-8 = 19$ $8-1 = 7$ $19-7=12$ $37-19=18$ $18-12=6$ I noticed a pattern for the first 1..10 exponent values (in the above example I just compute the first 3 exponents), where the difference is always 6 for increasing exponentials. So to compute $x^3$ for $x=5$, instead of $5\times 5\times 5$, use $(18+6)+37+64 = 125$. I doubt I've discovered something new, but is there a name for calculating exponents in this way? Is there a proof that it works for all numbers? There is a similar less complicated pattern for computing $x^2$ values. REPLY [4 votes]: To test your hypothesis you could work out the form of the differences from the first few cases. \begin{align*} 1^{3}-0^{3}&=1\\ 2^{3}-1^{3}&=7\\ 3^{3}-2^{3}&=19\\ 4^{3}-3^{3}&=37 \end{align*} For example rewrite out $(37-19)-(19-7)=18-12=6$ as: \begin{align*} \{(4^{3}-3^{3})-(3^{3}-2^{3})\}&-\{(3^{3}-2^{3})-(2^{3}-1^{3})\}\\ &=(4^{3}-2\cdot3^{3}+2^3)-(3^{3}-2\cdot2^{3}+1^{3})\\ &=4^{3}-3\cdot3^{3}+3\cdot2^3-1^{3}\qquad (\star)\\ &=6 \end{align*} So you have to find the difference of two differences to get to $6$ (this is called a finite difference pattern, and you have to iterate twice to get the result of $6$ for all such differences, any further iteration ending in a $0$). Now check that pattern $(\star)$ holds in general for some integer $k\ge3$: \begin{align*} k^{3}&-3\cdot(k-1)^{3}+3\cdot(k-2)^3-(k-3)^{3}\\ &=k^{3}-3(k^3-3k^2+3k-1) +3(k^3-2\cdot3k^2+2^2\cdot3k-2^3) -(k^3-3\cdot3k^2+3^2\cdot3k-3^3)\\ &=\ \ k^3\\ &\ -3k^3\ +\ 9k^2\ -\ 9k\ +\ 3\\ &\ +3k^3-18k^2+36k-24\\ &\ \ -k^3\ +\ \ 9k^2-27k+27\\ &=6 \end{align*}<|endoftext|> TITLE: Relation between Frobenius norm and trace QUESTION [6 upvotes]: Is the following inequality true? $$\mbox{Tr} \left( \mathrm P \, \mathrm M^T \mathrm M \right) \leq \lambda_{\max}(\mathrm P) \, \|\mathrm M\|_F^2$$ where $\mathrm P$ is a positive definite matrix with appropriate dimension. How about the following?
$$\mbox{Tr}(\mathrm A \mathrm B)\leq \|\mathrm A\|_F \|\mathrm B\|_F$$ REPLY [6 votes]: $$\mbox{tr}(\mathrm A \mathrm B) = \left\langle \mathrm A^\top, \mathrm B \right\rangle = \langle \mbox{vec} (\mathrm A^\top), \mbox{vec} (\mathrm B) \rangle \leq \|\mbox{vec} (\mathrm A^\top)\|_2 \|\mbox{vec} (\mathrm B)\|_2 = \|\mathrm A\|_F \|\mathrm B\|_F$$<|endoftext|> TITLE: $x+y=xy=w \in \mathbb{R}^+$. Is $x^w+y^w$ real? QUESTION [9 upvotes]: Question: For $x,y \in \mathbb{C}$, suppose $x+y=xy=w \in \mathbb{R}^+$. Is $x^w+y^w$ necessarily real? For instance, if $x+y=xy=3$, then one solution is $x = \frac{3 \pm i \sqrt{3}}{2}$, $y = \frac{3 \mp i \sqrt{3}}{2}$, but $x^3 + y^3 = 0$, which is real. I've checked this numerically for many values of $w$ that give complex $x$ and $y$ (namely, $w \in (0,4)$.) REPLY [8 votes]: Yes. Since $x + y \in \mathbb{R}$, $y = \overline{x} + r$ for some $r \in \mathbb{R}$. Then $xy = |x|^2 + xr \in \mathbb{R}$, implying that either $r = 0$ or $x \in \mathbb{R}$. Then we do casework: If $r = 0$, then $y = \overline{x}$; this leads to $$x^w + y^w = x^w + \overline{x^w} \in \mathbb{R}.$$ Warning: for this to work, we had to pick the standard branch of the complex logarithm, specifically, the one undefined on the nonpositive real line, whose imaginary part is between $-\pi$ and $\pi$. Once we define $z^w := e^{w \ln z}$, ${(\overline{x})}^w = \overline{x^w}$ is true for this branch of $\ln$ (as $x$ is not nonpositive real), but might not be true for another branch. This warning does not come into play when $w$ is an integer. But take, for example, $x = 1 + \frac{1 + i}{\sqrt{2}}$, $y = 1 + \frac{1 - i}{\sqrt{2}}$. Then $x + y = xy = 2 + \sqrt{2}$. If we picked a different branch of the complex logarithm, then we could have $x^w + y^w$ not real. On the other hand, if $x \in \mathbb{R}$, then $y \in \mathbb{R}$, so $x^w + y^w \in \mathbb{R}$. Since $x + y = xy > 0$, $x,y$ must both be positive, so we have no trouble with a negative base of the exponent. Note there was nothing special about $w$: we could have reached the stronger conclusion that $x^a + y^a \in \mathbb{R}$ for all $a \in \mathbb{R}$. REPLY [4 votes]: Yes. Write $x=a+bi$, $y=c-di$, then clearly $b=d$ because $x+y$ is real. So $x=a+bi$, $y=c-bi$. Then $xy=ac-abi+cbi+b^2$, so $a=c$ or $b=0$. If $a=c$, then $y = \overline{x}$, i.e. the complex conjugate of $x$. So $x^w+y^w = x^w+(\overline{x})^w = x^w+\overline{x^w}$, which is real. If $b=0$, $x$ and $y$ are real so $x^w+y^w$ is real as $w>0$. REPLY [4 votes]: $x+y$ real implies that $\Im(x)=-\Im(y)$. Thus if $x=a+bi$ then $y=c-bi$. $xy$ real implies that, because $xy=ac+b^2+ib(c-a)$, $b=0\lor c=a$. If $b=0$ the result is trivial as $x,y\in\mathbb{R}$. If $c=a$ then $x=\overline{y}$. But then clearly $$x^w+y^w=x^w+\overline{x}^w=\overline{x^w+\overline{x}^w}=\overline{x^w+y^w},$$ where the middle equality holds by symmetry. But then it must be real, as the only complex numbers satisfying $z=\bar{z}$ are real. Note: for why/when we are allowed to write $x^w=\overline{\bar{x}^w}$ refer to @6005's answer<|endoftext|> TITLE: Pairs of matroids $(M_1, M_2)$ with $\mathcal C(M_1) = \mathcal B(M_2)$ QUESTION [7 upvotes]: Let $M$ be a matroid on (finite) ground set $E$ with $\mathcal B(M)$ as its set of bases and $\mathcal C(M)$ as its set of circuits. If we consider the uniform matroid $U_{m,n}$ for $m,n \in \mathbb N$ with $m < n$, then we see that $\mathcal C(U_{m,n}) = \mathcal B(U_{m+1,n})$. 
Question: Is there any other pair $(M_1, M_2)$ of matroids with $\mathcal C(M_1) = \mathcal B(M_2)$ that are not uniform? REPLY [2 votes]: Claim: Given two loop-free matroids $M_1$ and $M_2$, $\mathcal{B}(M_{1}) =\mathcal{C}(M_2)$ if and only if $M_1$ and $M_2$ are both partition matroids with $M_1 = \bigoplus_{i = 1} ^nU_{r_i,n_i}$ and $M_2 =\bigoplus_{i = 1} ^n U_{r_i-1,n_i}$. Since bases (respectively, circuits) of a direct sum of matroids are just the disjoint union of bases (respectively, circuits) of the summands, it's enough to prove the above claim in the case when $M_2$ is connected. Claim: Let $M_2$ be connected. Then $\mathcal{B}(M_1)=\mathcal{C}(M_2)$ if and only if there exist $1 < r \leq n$ such that $M_1 = U_{r,n}$ and $M_2 = U_{r-1,n}$.<|endoftext|> TITLE: Why is $0.63212$ the probability of a $\frac1n$-probability event happening in $n$ trials? QUESTION [17 upvotes]: I've always assumed on faulty intuition that if you have an event which occurs 1 in n chances, it will be super likely to happen at some point of that event occurring n times. However, given some analysis, it doesn't actually seem to be all that super likely, and seems to converge to a particular value as the value of n rises. That value is about 0.63212. Is this correct? If so, is there a name for this value and is it considered significant within the field of probability? Below is the Python code that I used to arrive at this value.

>>> def p(x, r):
...     return x + r * (1.0 - x)
>>> def p_of_1(r):
...     x = r
...     while True:
...         yield x
...         x = p(x, r)
>>> def p_of_n(n):
...     g = p_of_1(1.0 / n)
...     return [next(g) for x in range(n)]
...
>>> p_of_n(1)
[1.0]
>>> p_of_n(2)
[0.5, 0.75]
>>> p_of_n(3)
[0.3333333333333333, 0.5555555555555556, 0.7037037037037037]
>>> p_of_n(4)
[0.25, 0.4375, 0.578125, 0.68359375]
>>> p_of_n(5)
[0.2, 0.36000000000000004, 0.488, 0.5904, 0.67232]
>>> p_of_n(6)[-1]
0.6651020233196159
>>> p_of_n(10)[-1]
0.6513215599000001
>>> p_of_n(100)[-1]
0.6339676587267709
>>> p_of_n(10000)[-1]
0.6321389535670703
>>> p_of_n(10000000)[-1]
0.6321205772225762

REPLY [36 votes]: It's easier to work backwards. The probability that the event does not occur on a single try is, of course, $1-\frac 1n$. It follows that the probability that it fails to occur in $n$ trials is $p_n=\left(1-\frac 1n\right)^n$. Therefore the probability that it occurs at least once in those $n$ trials is $$1-p_n=1-\left(1-\frac 1n\right)^n$$ If we now recall the limit definition of the exponential: $$e^a=\lim_{n\to \infty}\left(1+\frac an\right)^n$$ We see that, for large $n$, $$1-p_n\sim 1-\frac 1e=0.632120559\dots$$<|endoftext|> TITLE: How to solve this differential equation? QUESTION [6 upvotes]: $y'=\dfrac{1}{x^2+y^2}$ where $y=f(x)$, $x$ lies in $[1,\infty)$, $f(1)=1$, and it is differentiable on that interval. I don't know how to even proceed with this problem. Even the range of $y$ would be sufficient. REPLY [3 votes]: What is wanted isn't clear. Maybe this: $y'=\frac{1}{x^2+y^2}>0$ implies that $y(x)$ is increasing. $y(1)=1$, hence $y(x)>1$. As a consequence: $$y'=\frac{1}{x^2+y^2}<\frac{1}{x^2+1}$$ $$y(x)<1+\int_1^x\frac{dt}{1+t^2}=1+\arctan(x)-\frac{\pi}{4}<1+\frac{\pi}{4}$$ so the range of $y$ is contained in $\left(1,\,1+\frac{\pi}{4}\right)$.<|endoftext|> TITLE: Problem integrating $\int\frac{1}{\sqrt[3]{x}+\sqrt[4]{x}}dx$ QUESTION [8 upvotes]: I am not able to start the following integration, as the powers of $x$ are in fractional form. So it is very difficult for me to do a substitution. $$\int\frac{1}{\sqrt[3]{x}+\sqrt[4]{x}}dx$$ Can anybody please give me a start. REPLY [8 votes]: Hint: take $u=\sqrt[12]{x} $. The integral becomes (why?)
$$\int\frac{1}{\sqrt[3]{x}+\sqrt[4]{x}}dx=12\int\frac{u^{8}}{u+1}du $$ and now doing long division we get $$12\int\frac{u^{8}}{u+1}du=12\int\left(u^{7}-u^{6}+u^{5}-u^{4}+u^{3}-u^{2}+u-1+\frac{1}{u+1}\right)du.$$ REPLY [5 votes]: $$=\int \frac{1}{x^{1/3}+x^{1/4}}dx$$ Factor out a $x^{1/4}$: $$=\int \frac{1}{x^{1/4}(x^{1/12}+1)}dx$$ Let $u=x^{1/12}$. Then $du=\frac{1}{12} x^{-11/12} \,dx$ and $12x^{11/12}\,du=12u^{11}\,du=dx$. Also $x^{1/4}=u^3$. So we have: $$=12 \int \frac{u^{11}}{u^3(u+1)}\,du$$ $$=12 \int \frac{u^8}{u+1}\, du$$ This is standard with another substitution $y=u+1$. The binomial theorem might be of great help.<|endoftext|> TITLE: Can the same vector be an eigenvector of both $A$ and $A^T$? QUESTION [6 upvotes]: It is proven that $A$ and $A^T$ have the same eigenvalues. I want to study what happens for eigenvectors. Let me make a try. Given: $$Ax=\lambda x$$ we know that $x\in C(A)$ for $\lambda \neq 0$. Suppose that for $A^T$ we have the same eigenvectors $x$: $$A^Tx=\lambda x$$ but now we have that $x\in C(A^T)$. Based on this, the eigenvector $x$ belongs to both the column space and the row space, which is impossible. So, $A$ and $A^T$ have different eigenvectors. Am I right about this deduction? In any case, could you please suggest a different way if possible? Thanks. PS: After @G Tony Jacobs comments I made some changes hoping that I have fewer mistakes. REPLY [8 votes]: Let $n\geq 2$ and $Z_n=\{A\in M_n(\mathbb{C}); A,A^T \text{ have at least one common eigenvector}\}$. Proposition. $M_n(\mathbb{C})\setminus Z_n$ is a Zariski open dense subset. That implies (for example) that if you randomly choose the $(a_{j,k})=(\alpha_j+i\beta_k)$ according to a normal law, then $A,A^T$ have no common eigenvector, with probability $1$. Proof. According to Shemesh, cf. Remark 3.1: $A\in Z_n$ IFF $(*)$ the matrix $[[A,A^T]^T,\cdots,[A^k,{A^l}^T]^T,\cdots,[A^{n-1},{A^{n-1}}^T]^T]$ has rank $<n$.<|endoftext|> TITLE: Is the difference of two irrationals which are each contained under a single square root irrational? QUESTION [5 upvotes]: Is $ x^\frac{1}{3} - y^\frac{1}{3}$ irrational, given that both $x$ and $y$ are not perfect cubes, are distinct and are integers (i.e. the two cube roots yield irrational answers)? I understand that the sum/difference of two irrationals can be rational (see this thread: Is the sum and difference of two irrationals always irrational?). However, if my irrationals are contained under one root (so for example $3^\frac{1}{3}$ and not $2^\frac{1}{2} + 1$), can one generalise to show that $ x^\frac{1}{p} - y^\frac{1}{q} $ is irrational, where of course $x$ and $y$ are not perfect $p$th and $q$th powers respectively? REPLY [11 votes]: If $ x $ and $ y $ are not perfect cubes, then the polynomials $ T^3 - x $ and $ T^3 - y $ are irreducible in $ \mathbf Q[T] $. Consider the splitting field $ L $ of this family over $ \mathbf Q $. Let $ G = \textrm{Gal}(L/\mathbf Q) $, and consider the stabilizer subgroups $ G_x, G_y $ of $ x^{1/3}, y^{1/3} $ respectively. By isomorphism extension, the different conjugates of $ x^{1/3} $ are in correspondence with the different cosets of $ G_x $ in $ G $, and likewise for $ y $. Thus, we obtain $$ \textrm{Tr}_{L/\mathbf Q}(x^{1/3}) = \sum_{\sigma \in G} \sigma(x^{1/3}) = |G_x|( x^{1/3} + \zeta x^{1/3} + \zeta^2 x^{1/3} ) = 0 $$ where $ \zeta $ is a primitive third root of unity, and likewise for $ y^{1/3} $. Therefore, $ x^{1/3} - y^{1/3} $ lies in the kernel of the field trace of $ L/\mathbf Q $.
But the trace of a rational number is a nonzero integer multiple of it, therefore the only rational number in the kernel is zero, hence if this number is rational it must be zero, and we must have $ x = y $. The argument can be adapted to the case with prime(!) $ p, q $ using the criterion that $ T^p - x $ is irreducible in $ K[T] $ for a field $ K $ if and only if $ x $ is not a perfect $ p $th power. In fact, a stronger result is true: $ x^{1/n} $ always lies in the kernel of the field trace as long as $ x $ is not a perfect $ n $th power (no primality required.) For this, see Theorem 3 in this article.<|endoftext|> TITLE: Dimension of Range and Null Space of Composition of Two Linear Maps QUESTION [5 upvotes]: Question Suppose $U$, $V$ and $W$ are finite-dimensional vector spaces. Let $\mathcal{L}(U,V)$ and $\mathcal{L}(V,W)$ be the vector spaces of all linear maps from $U$ into $V$ and from $V$ into $W$, respectively. Suppose $S \in \mathcal{L}(V,W)$ and $T \in \mathcal{L}(U,V)$. Then prove that $\begin{align} & 1. \, \text{dim null} S \circ T \le \text{dim null} S + \text{dim null} T \\ & 2. \, \text{dim range} S \circ T \le \text{min} \{ \text{dim range} S , \text{dim range} T \} \end{align}$ My Thought To prove the first one, I guess that writing the fundamental theorem of linear maps for $S \circ T$ may be a good start $$\begin{align} \text{dim null} S \circ T &= \text{dim} U - \text{dim range} S \circ T \\ &= \text{dim null} T + \text{dim range} T - \text{dim range} S \circ T \\ &\le \text{dim null} T + \text{dim} V - \text{dim range} S \circ T \\ &= \text{dim null} T + \text{dim null} S + \text{dim range} S- \text{dim range} S \circ T \end{align}$$ So if I can prove that $$\text{dim range} S- \text{dim range} S \circ T \le 0$$ then I am done but this does not seem to be true because it is easy to see that $\text{range} S \circ T \subseteq \text{range} S$ and hence $\text{dim range} S \circ T \le \text{dim range} S$. So I am stuck! Also, I could observe that $\text{null} T \subseteq \text{null} S \circ T$ and hence $\text{dim null} T \le \text{dim null} S \circ T$ but I don't know how to use this! REPLY [6 votes]: For the first question, I would argue like this: let $K = \mathrm{null}(ST) = T^{-1}(\mathrm{null}(S))$, a subspace of $U$, and let $T|_{K} : K \to V$ be the restriction of $T$ to $K$. Because \begin{align*} \mathrm{null}(T|_{K}) \subset \mathrm{null}(T) && \text{and} &&\mathrm{ran}(T|_{K}) \subset \mathrm{null}(S) \end{align*} we get \begin{align*} \dim(\mathrm{null}(ST)) &= \dim(K) \\ &= \dim(\mathrm{null}(T|_{K})) + \dim(\mathrm{ran}(T|_{K})) \\ &\leq \dim(\mathrm{null}(T)) + \dim(\mathrm{null}(S)) \end{align*} as desired. For your second question, note what you need to do is prove (i) $\dim(\mathrm{ran}(ST))\leq\dim(\mathrm{ran}(S))$ and (ii) $\dim(\mathrm{ran}(ST))\leq\dim(\mathrm{ran}(T))$. I think you can probably see why (i) is true. For (ii), note that $ST$ has the same range as $S|_{\mathrm{ran}(T)}$, and that the range of a linear transformation has equal or lesser dimension than the domain<|endoftext|> TITLE: Intuition/meaning behind quadratic forms QUESTION [23 upvotes]: My professor just covered quadratic forms, but unfortunately did not give any intuition behind their meaning, so I'm hoping to get some of that cleared up. I know that we define a quadratic form as $Q(x) = x^T Ax$, for some symmetric (i.e. orthogonally diagonalizable) matrix $A$.
Is it special in some way? He also spoke about the Principal Axis Theorem. After looking on Wikipedia, it seems that the PAT that he described is wildly different from what most of the internet says. The professor said that the PAT tells us that any quadratic form $Q(x)$ can be "transformed" (what does that even mean???) into the quadratic form $Q(y) = y^T Dy$ with no cross product term (the cross product term is defined as the $x_1\cdot x_2$ term in the quadratic form), where $D$ is a diagonal matrix. His proof used the fact that $Q(x) = x^T Ax = (x^T Q)D(Q^T x) = y^T Dy$, where $y = Q^T x$ for some orthogonal matrix $Q$. What does the "transformed quadratic form" represent? Why is it significant? All my professor did was define these things, and didn't explain any intuition. REPLY [20 votes]: Quadratic forms are great! They are related to some pretty great stuff such as bilinear forms and the Arf invariant. Quadratic forms in general encode the so-called "quadric surfaces" such as ellipses, hyperbolic paraboloids, and so on. The principal axis theorem, also known as the spectral theorem, is one of the most important theorems in linear algebra! It is what allows us to "transform" the quadratic forms your professor mentioned. Take a quadratic form $q: \Bbb R^n \to \Bbb R$ defined by $x \mapsto x^tAx$. Since $A$ is symmetric (or can be made symmetric pretty easily), the principal axis theorem says we may orthogonally diagonalize it! This is what eliminates any of the cross-terms such as $x_1x_2$. Going through with the orthogonal diagonalization, $x^tAx = x^tQDQ^tx = (Q^tx)^tD(Q^tx) = y^tDy$. This matrix $D$ is diagonal, and its diagonal entries are the eigenvalues of $A$. The significance of this "transformed" quadratic form is that it is more meaningful in terms of the information it encodes. Without those pesky cross-terms, we can see exactly what the quadric surface is without the fluff. The easiest surfaces to identify are those of the form $a_1y_1^2 + a_2y_2^2 + a_3y_3^2$ since the signs of $a_1, a_2$ and $a_3$ are how we distinguish between ellipsoid, paraboloid, etc. They are also of great use in physics when we are dealing with the inertia tensor of a rigid body. They are one of the coolest things we learn about in first-year linear algebra! Edit: Check out these notes by Professor Mike Hopkins at Harvard about quadratic and bilinear forms. Professor Hopkins gave a really good lecture at Northwestern this past May in which he discussed some of the more high-level aspects of quadratic forms and how they connect to the Arf invariant. His lecture and these notes are accessible to anyone taking a linear algebra course. These notes in particular should help you to make some "aha!" moments and deeper connections/intuitions about quadratic forms. http://math.harvard.edu/~mjh/northwestern.pdf To add to amd's comment, given a $C^2$, real-valued function $f$ of $n$ variables, and a critical point $x_0$ of the function, we can Taylor expand $f$ to second-order to discern the nature of the critical value. That is, $$ f(x) = f(x_0) + \tfrac{1}{2}x^tHx + o(\Vert{x}\Vert^2), $$ where $H$ is the Hessian of $f$ at $x_0$ (with $x$ measured as a displacement from $x_0$), and it encodes all second-order partials of $f$ at the point $x_0 \in \Bbb R^n$. Since $f$ is $C^2$, the Hessian of $f$ is symmetric, and we may orthogonally diagonalize $H$ (this is "transforming" the quadratic form via a change of variables): $$ f(y) = f(y_0) + \tfrac{1}{2}y^tDy + o(\Vert{y}\Vert^2).
$$ From $D$, we can pick off right away whether $x_0$ (equivalently, $y_0$) is a local max, min, or neither since the entries of $D$ along its diagonal are the eigenvalues of the Hessian. If $D$ has strictly positive eigenvalues, then $x_0$ is a minimum (think concave up), and if $D$ has strictly negative eigenvalues, then $x_0$ is a maximum (think concave down). If $D$ has both positive and negative eigenvalues, $x_0$ is a saddle point. In short, this makes the classification of extrema simpler, thanks to the fact that the second-order term in the Taylor expansion of $f$ about a critical point is itself a quadratic form.<|endoftext|> TITLE: Are two mathematically alike functions equal? QUESTION [27 upvotes]: Consider the functions $f:\mathbb{R}\rightarrow\mathbb{R}$ and $g:\mathbb{R}\rightarrow\mathbb{R}$ defined by the formulas $f(x)=x^2$ and $g(y)=y^2$ $\forall x,y \in \mathbb{R}$. Is it true that $f=g$ as functions? My thoughts so far: Intuitively, yes. Since the two functions are equal at every point where they are defined and are defined on the same points, they are effectively the same function. What concerns me here is the different notation of $x$ and $y$. How does that play into the problem? Are the functions still equivalent? REPLY [2 votes]: A function is defined by two sets, $E$ and $F$, and a rule that assigns to each $x\in E$ a unique element $y\in F$. The notation is not relevant, so your $f$ and $g$ are equal.<|endoftext|> TITLE: Can I Square Root A Series? QUESTION [12 upvotes]: I have a question in Quantum Mechanics where I need to solve a series, and the thing is that I can get the answer to a similar series with the help of the same problem, but I am not sure if I can square root my series to use it in the problem. For example I have $$\sum_{n=1,3,5,7...}^{\infty} \frac{1}{n^4} = \frac{\pi^4}{96}$$. But the summation I need is for $\frac{1}{n^2}$. So is it fine to square root both sides and say $$\sum_{n=1,3,5,7...}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{\sqrt{96}}$$ REPLY [3 votes]: The already given answers are quite nice; here is one more way to think about this: The question as to why the answer to your question is no (or, at least, "not as simply as you might wish") is interesting. But how do you establish, in the first place, that the answer is no? To do so, it will suffice to come up with a single counterexample. Picking counterexamples is a bit of an art; for the question proposed here, I am inclined to try an infinite series for which the sum converges to $1$. The reasoning in my mind is, I can then check whether the corresponding square-rooted series converges to $\sqrt{1} = 1$. (Or is it $\sqrt{1} = -1$? Well, let us investigate.) Consider the classic geometric series: $$\sum_{n = 1}^{\infty} \frac{1}{2^n} = 1/2 + 1/4 + 1/8 + \cdots = 1$$ Next, we consider the square-rooted version: $$\sum_{n = 1}^{\infty} \frac{1}{2^{n/2}} = 1/\sqrt{2} + 1/\sqrt{4} + \cdots $$ The latter series has all positive terms, so its partial sums are monotonically increasing. Yet we add up just the first two terms to find: $$1/\sqrt{2} + 1/\sqrt{4} = 1/\sqrt{2} + 1/2 > 1$$ So, whatever is going on with the latter series (which, incidentally, does converge: to $1 + \sqrt{2}$) we know its computation is not as simple as taking the square root of our first series' limit; that simplistic approach would suggest the square-rooted series would also converge to $1$, but already two terms in, it has surpassed $1$ with no plan to return.
(The accepted answer provides an even more extreme example: An initial series, summing the squares' reciprocals, which converges, but a square-rooted version that yields the harmonic series, which diverges!)<|endoftext|> TITLE: Is a function that maps circles to circles necessarily a Möbius transformation? QUESTION [5 upvotes]: I'm introducing myself to Complex analysis and Möbius transformations and I read that Möbius transformations map circles and lines to circles and lines. Are there any other functions that are not Möbius transformations but they can map circles to circles? If I know that $f(z)$ maps a circle to another circle, can I assume that $f(z)$ is a Möbius transformation? REPLY [2 votes]: I presume you talk about analytic maps. But even then you may take products of Möbius transformations, which also map $S^1=\{|z|=1\}$ to itself (1-1). Such transformations are called Blaschke products. If you do not require 1-1, then you also have maps like $z\mapsto z^p$, and if you require analyticity only in a neighborhood of $S^1$, there are many more. On the other hand, a map that always maps any circle or line to a circle or a line is either a Möbius transformation (whence meromorphic) or a Möbius transformation composed with complex conjugation. Perhaps this is more like what you are after... (and a proof is not that difficult)<|endoftext|> TITLE: Hamel Dimension of Infinite Dimensional Separable Banach Space QUESTION [6 upvotes]: I was trying to show this result. I have seen proofs that show that the dimension is at least $\mathfrak{c}$; however, I'm unable to prove that it is exactly $\mathfrak{c}$. The last line of this article is unclear. How does separability imply that the dimension is $\mathfrak{c}$? REPLY [6 votes]: If $X$ is separable, it has a countable dense subset $A$. Every point of $X$ is a limit of some sequence in $A$, and there are only $|A|^{|\mathbb{N}|}=\mathfrak{c}$ different sequences in $A$. So $|X|\leq\mathfrak{c}$. A Hamel basis for $X$ is a subset of $X$, and so any Hamel basis has at most $\mathfrak{c}$ elements.<|endoftext|> TITLE: The graph of every real function has inner measure zero QUESTION [7 upvotes]: I'm trying to understand the answer to this question: Let $f:\mathbb{R} \rightarrow \mathbb{R}$ be a (general) function. Is there an $N\subset \mathbb{R}^2$ with $\lambda^2(N)=0$, such that $\{(x,f(x)):x\in \mathbb{R}\}$ $\subset N$ ? First line of the answer, found there: "No function can have a graph with positive measure or even positive inner measure, since every function graph has uncountably many disjoint vertical translations, which cover the plane." I don't understand why the fact that "every function graph has uncountably many disjoint vertical translations, which cover the plane" (I agree with that) implies the desired result. Thank you for your help. REPLY [7 votes]: If a function graph $G$ of positive inner measure existed, then choosing $K\subset G$ measurable of positive measure and considering the translates of $K$, you would get uncountably many disjoint measurable sets of positive measure. This is impossible in any $\sigma$-finite measure space. Indeed, let $X=\bigcup X_n$ be a $\sigma$-finite measure space with $\mu(X_n)<\infty$ and let $\mathcal{C}$ be an uncountable collection of disjoint measurable subsets of $X$ of positive measure. Then for each $A\in\mathcal{C}$, there exist $m,n\in\mathbb{N}$ such that $\mu(A\cap X_n)>1/m$.
Thus there must exist some pair $(m,n)\in\mathbb{N}^2$ such that $\mu(A\cap X_n)>1/m$ for uncountably many different elements $A\in \mathcal{C}$. But since the sets $A\cap X_n$ are all disjoint, this would imply $\mu(X_n)=\infty$ (say by choosing a countably infinite collection of such $A$ and using countable additivity).<|endoftext|> TITLE: Calculate Volume of Torus Given Circumferences QUESTION [6 upvotes]: What, if any, is the formula to calculate the volume of a torus given the circumference of the tube and the outer circumference of the ring? REPLY [6 votes]: There are already two answers here, but I went ahead and computed the volume using the parametric equation of the torus, given by \begin{align}&x=(R+r\cos v)\cos u\\&y=(R+r\cos v)\sin u\\&z= r\sin v\end{align} where $R$ is the distance from the origin to the center of the 'tube', $r$ is the radius of the 'tube' itself, and $u$ and $v$ are two parameters corresponding to the central angle and the circular angle inside the 'tube'. The laboriously calculated Jacobian (giving the magnitude of the volume element at every point inside the torus, with $\rho\in[0,r]$ now playing the role of the radial coordinate inside the tube) is simply $(R\rho + \rho^2\cos v)\,d\rho\,dv\,du$, so the volume of the torus is given by the integral $$\int_0^{2\pi}\int_0^{2\pi}\int_0^r (R\rho + \rho^2\cos v)\,d\rho\,du\,dv=2\pi^2r^2R$$ as desired. Edit: Since I'm doubly bored, I went ahead and found the volume using a solid of revolution method. The torus can be considered the solid constructed by rotating a circle around the $z$ axis. The volume is given by the double integral $$2\pi\int_{-r}^r\int_{-\sqrt{r^2-x^2}+R}^{\sqrt{r^2-x^2}+R}y\ dy\,dx=2\pi^2 r^2 R$$ Technically, the variables should be reversed, but the answer is the same.<|endoftext|> TITLE: Determine all $P(X)\in K[X]$ such that $P\big(X^2+1\big)=\big(P(X)\big)^2+1$, for fields $K$ of any characteristic. QUESTION [8 upvotes]: This question is inspired by this thread. However, in this question, I take an arbitrary field instead of $\mathbb{R}$ and drop the assumption that $P(0)$ must be $0$. Let $K$ be a field. Determine all $P(X)\in K[X]$ such that $$P\big(X^2+1\big)=\big(P(X)\big)^2+1\,.$$ In other words, if $Q(X):=X^2+1$, we are to determine all $P(X)\in K[X]$ that commute with $Q(X)$ with respect to polynomial composition: $$P\big(Q(X)\big)=Q\big(P(X)\big)\,.$$ First suppose that the characteristic of $K$ is not equal to $2$. I have observed that some solutions are given by (i) $P(X)=\frac{1+\sqrt{-3}}{2}$ and $P(X)=\frac{1-\sqrt{-3}}{2}$ if $\sqrt{-3}\in K$, and (ii) $P(X)=Q_n(X)$ for $n=0,1,2,3,\ldots$, where $Q_0(X):=X$ and $Q_n(X):=Q\big(Q_{n-1}(X)\big)$ for all $n=1,2,3,\ldots$. Are these all of the solutions? If the characteristic of $K$ is equal to $2$, then the given functional equation is equivalent to $$P\left((X+1)^2\right)=\left(P(X)+1\right)^2\,.$$ If $K=\mathbb{F}_2$, then it follows that $$P(X+1)=P(X)+1\,,$$ whence $P(X)$ must take the form $$P(X)=X^{2^{r_1}}+X^{2^{r_2}}+\ldots+X^{2^{r_k}}\text{ or }P(X)=1+X^{2^{r_1}}+X^{2^{r_2}}+\ldots+X^{2^{r_k}}$$ where $k$ is an odd positive integer and $r_1,r_2,\ldots,r_k$ are pairwise distinct nonnegative integers. Can anybody prove or disprove whether my list is complete when the characteristic of $K$ is not equal to $2$? Bill Dubuque's answer in this thread shows that, if $K$ is a subfield of $\mathbb{C}$, then the list is indeed complete. This should imply that, if $K$ is of characteristic $0$ and the cardinality of $K$ is that of the continuum, then the list is complete (as $K$ can be embedded into $\mathbb{C}$).
<|endoftext|> TITLE: Determine all $P(X)\in K[X]$ such that $P\big(X^2+1\big)=\big(P(X)\big)^2+1$, for fields $K$ of any characteristic. QUESTION [8 upvotes]: This question is inspired by this thread. However, in this question, I take an arbitrary field instead of $\mathbb{R}$ and drop the assumption that $P(0)$ must be $0$. Let $K$ be a field. Determine all $P(X)\in K[X]$ such that $$P\big(X^2+1\big)=\big(P(X)\big)^2+1\,.$$ In other words, if $Q(X):=X^2+1$, we are to determine all $P(X)\in K[X]$ that commute with $Q(X)$ with respect to polynomial composition: $$P\big(Q(X)\big)=Q\big(P(X)\big)\,.$$ First suppose that the characteristic of $K$ is not equal to $2$. I have observed that some solutions are given by (i) $P(X)=\frac{1+\sqrt{-3}}{2}$ and $P(X)=\frac{1-\sqrt{-3}}{2}$ if $\sqrt{-3}\in K$, and (ii) $P(X)=Q_n(X)$ for $n=0,1,2,3,\ldots$, where $Q_0(X):=X$ and $Q_n(X):=Q\big(Q_{n-1}(X)\big)$ for all $n=1,2,3,\ldots$. Are these all of the solutions? If the characteristic of $K$ is equal to $2$, then the given functional equation is equivalent to $$P\left((X+1)^2\right)=\left(P(X)+1\right)^2\,.$$ If $K=\mathbb{F}_2$, then it follows that $$P(X+1)=P(X)+1\,,$$ whence $P(X)$ must take the form $$P(X)=X^{2^{r_1}}+X^{2^{r_2}}+\ldots+X^{2^{r_k}}\text{ or }P(X)=1+X^{2^{r_1}}+X^{2^{r_2}}+\ldots+X^{2^{r_k}}$$ where $k$ is an odd positive integer and $r_1,r_2,\ldots,r_k$ are pairwise distinct nonnegative integers. Can anybody prove or disprove whether my list is complete when the characteristic of $K$ is not equal to $2$? Bill Dubuque's answer in this thread shows that, if $K$ is a subfield of $\mathbb{C}$, then the list is indeed complete. This should imply that, if $K$ is of characteristic $0$ and the cardinality of $K$ is that of the continuum, then the list is complete (as $K$ can be embedded into $\mathbb{C}$). Can someone give a characterization of the solutions when $K$ is an arbitrary field of characteristic $2$? This case seems to have weird polynomial solutions. For example, when $K=\mathbb{F}_4=\mathbb{F}_2[\alpha]$ with $\alpha^2+\alpha+1=0$, we have the following solutions: $P(X)=\alpha$, $P(X)=X$, $P(X)=X+1$, $P(X)=X^2$, $P(X)=X^2+1$, $P(X)=X^2+X+\alpha$, and $P(X)=X^3+\alpha\,X^2+\alpha\,X$. UPDATES: (1) By balancing the coefficients, it can be shown that, in characteristic not equal to $2$, there is at most one solution of degree $d>0$, and this solution lies within $\mathbb{F}_p[X]$, where $\mathbb{F}_p$ is the prime field of $K$ (with $\mathbb{F}_0:=\mathbb{Q}$). Hence, it suffices to show that the list is complete when $K$ is a prime field. Ergo, for characteristic $0$, we are done. (2) In characteristic $2$, via balancing the coefficients, we see that all equations involved are quadratic in the coefficients, whence the solutions must lie within $\mathbb{F}_4[X]$. Consequently, we may take $K$ to be $\mathbb{F}_2$ or $\mathbb{F}_4$. (3) In odd characteristic $p$, the list is indeed incomplete. Note that $P(X)=X^{p^k}$ is a solution for all $k=0,1,2,\ldots$. Let $R_p(X):=X^p$. Is it true that all solutions must take the form $$P(X)=\left(Q^{\circ k}\circ R_p^{\circ l}\right)(X)$$ for some $k,l\in\mathbb{Z}_{\geq 0}$? Here, $\left(F^{\circ 0}\right)(X):=X$ and $$\left(F^{\circ l}\right)(X):=\left(\underbrace{F\circ F\circ \ldots \circ F}_{F\text{ occurs }l\text{ times}}\right)(X)$$ for each $F(X)\in K[X]$ and $l\in\mathbb{Z}_{> 0}$. (4) It turns out that the characterization in (3) is not complete either, at least not in characteristic $3$. The polynomials $$X^{5}+X^3-X\,,\;\;X^7-X^5-X^3-X\,,\text{ and }X^{11}+X^9-X^7+X^5+X^3+X$$ are solutions in characteristic $3$ which do not arise from the formula in (3). The case of characteristic $3$ may be the only special case where (3) does not give a full characterization. (5) Combined with Hurkyl's answer, we know all solutions in characteristics $0$, $2$, and $3$. So far, I have not been able to find any solutions outside (3) in characteristics greater than $3$. REPLY [2 votes]: Odd Characteristic (this section incomplete) Using the proof of dinoboy, $P$ is either an odd or an even function. If $P$ is even and nonconstant, it must be of the form $P = S \circ Q$, where $S$ and $Q$ commute. Thus, the problem reduces to finding the set of all odd possibilities for $P$. (dinoboy's proof also shows that, for all fields of characteristic zero, $P(x) = x$ is the only odd solution.) Note that the possible finiteness of $K$ doesn't matter here, since we can pass to any infinite extension field $L/K$. Characteristic 3 Let $T_n(X)$ be the Chebyshev polynomial of the first kind. These polynomials satisfy $T_m \circ T_n = T_{mn}$. By noting that $Q(X) = -T_2(X)$ in characteristic $3$, this corresponds to one of the special cases of Ritt's theorem (which applies to characteristic zero). Consequently, we get a family of odd solutions $P = T_m$ for odd $m$. Since there are Chebyshev polynomials in each odd degree, the note in the OP proves this is the complete solution. The complete solution including even degrees can be more simply stated as $P = (-1)^{n+1} T_n$ where $n$ can be any integer.
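The characteristic-$3$ examples from update (4) are exactly $T_5$, $T_7$, and $T_{11}$ reduced mod $3$, and the claim that they solve the functional equation is easy to machine-check. A minimal sketch over $\mathbb{Z}/3\mathbb{Z}$ (hand-rolled polynomial helpers; coefficient lists in increasing degree):

```python
MOD = 3  # work over Z/3Z

def padd(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [(a + b) % MOD for a, b in zip(p, q)]

def pmul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = (out[i + j] + a * b) % MOD
    return out

def pcompose(p, q):
    out = [0]                     # evaluate p at q by Horner's scheme
    for c in reversed(p):
        out = padd(pmul(out, q), [c])
    return out

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

Q = [1, 0, 1]                                 # X^2 + 1
examples = [
    [0, -1, 0, 1, 0, 1],                      # X^5 + X^3 - X
    [0, -1, 0, -1, 0, -1, 0, 1],              # X^7 - X^5 - X^3 - X
    [0, 1, 0, 1, 0, 1, 0, -1, 0, 1, 0, 1],    # X^11 + X^9 - X^7 + X^5 + X^3 + X
]
for P in examples:
    P = [c % MOD for c in P]
    # P(X^2 + 1) == P(X)^2 + 1 as polynomials mod 3?
    print(trim(pcompose(P, Q)) == trim(padd(pmul(P, P), [1])))
```

All three comparisons print True, confirming update (4).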
Characteristic 2 Let $\alpha$ satisfy $\alpha = \alpha^2 + 1$ (equivalently, $\alpha^2+\alpha+1=0$) and assume $\alpha \in K$. Let $K_s$ be a copy of $K$, and fix the bijection $K \to K_s : c \mapsto c + \alpha$. If $F$ is a polynomial defined over $K$, then $F_s$ is the polynomial defined over $K_s$ given by $F_s(X) = F(X + \alpha) + \alpha$. In particular, $Q_s(X) = X^2$. The polynomials $P_s$ over $K_s$ that commute with $Q_s$ are precisely the polynomials whose coefficients lie in $\mathbf{F}_2$. Consequently, the polynomials over $K$ that commute with $Q$ are precisely those of the form $P(X) = F(X + \alpha) + \alpha$, where $F \in \mathbf{F}_2[X]$.
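This characterization, and the $\mathbb{F}_4$ solutions listed in the question, can be machine-checked. A minimal sketch with hand-rolled $\mathbb{F}_4$ arithmetic (elements $0,1,\alpha,\alpha+1$ encoded as the integers $0..3$, with bit $1$ the coefficient of $\alpha$; helper names are mine):

```python
def f4_add(a, b):
    return a ^ b                 # addition in characteristic 2 is XOR

def f4_mul(a, b):
    p = 0
    for i in range(2):           # carry-less multiply in GF(2)[alpha]
        if (b >> i) & 1:
            p ^= a << i
    if p & 4:                    # reduce using alpha^2 = alpha + 1
        p ^= 0b111
    return p

def padd(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [f4_add(a, b) for a, b in zip(p, q)]

def pmul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = f4_add(out[i + j], f4_mul(a, b))
    return out

def pcompose(p, q):              # p(q(X)) by Horner's scheme
    out = [0]
    for c in reversed(p):
        out = padd(pmul(out, q), [c])
    return out

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

A = 2                            # the element alpha
Q = [1, 0, 1]                    # X^2 + 1, coefficients in increasing degree
candidates = {
    "alpha":                 [A],
    "X":                     [0, 1],
    "X+1":                   [1, 1],
    "X^2":                   [0, 0, 1],
    "X^2+1":                 [1, 0, 1],
    "X^2+X+alpha":           [A, 1, 1],
    "X^3+alpha*X^2+alpha*X": [0, A, A, 1],
}
for name, P in candidates.items():
    ok = trim(pcompose(P, Q)) == trim(padd(pmul(P, P), [1]))
    print(f"{name}: {ok}")
```

Each candidate prints True; for instance, $X^3+\alpha X^2+\alpha X$ equals $F(X+\alpha)+\alpha$ for $F(Y)=Y^3+Y+1\in\mathbf{F}_2[Y]$, matching the characterization above.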
<|endoftext|> TITLE: Is there a simple function that generates the series; $1,1,2,1,1,2,1,1,2...$ or $-1,-1,1,-1,-1,1...$ QUESTION [66 upvotes]: I'm thinking about this question in the sense that we often have a term $(-1)^n$ for an integer $n$, so that we get a sequence $1,-1,1,-1...$, but I'm trying to find an expression that gives only every 3rd term as positive, so it would read $-1,-1,1,-1,-1,1,-1,-1...$ Alternatively, a sequence yielding $1,1,2,1,1,2,1,1,2...$ could also work, as it could be substituted for $n$ in $(-1)^n$. REPLY [2 votes]: My 2 cents: how about $a_{n+1}=(-1)^{r\,a_n(1-a_n)}$, where $a_0 = 1$ and $r \in \{1,2,3\}$? This is similar to the logistic map. In general, I think that using a one-dimensional chaotic polynomial could help.<|endoftext|> TITLE: Can't understand Second Fundamental Theorem of Calculus QUESTION [14 upvotes]: Sorry if this has been asked before, but I can't seem to find my question in particular. Anyway, the Second FTC says $$F(x)=\int_a^xf(t)\,dt$$ If I understand correctly, this is just the area under the curve. No problem there. Then it says that $$\int_a^bf(x)dx=F(b)-F(a)$$ If I'm thinking about this correctly, it would make sense because in $F(a)=\int_a^af(x)dx$ the area is $0$, so I'm just left with $F(b)$, which is the integral from $a$ to $b$. This is where I think I have to be wrong, because in every example I see, they take the value of $a$ and plug it into $F(x)$. For example, in $$\int_2^5x^2dx$$ with $F(x)=x^3/3$, they take $F(5)$ and $F(2)$; in particular $F(2)=2^3/3$. But shouldn't $F(2)$ always be $0$, because it's basically just $\int_2^2x^2dx$? P.S. Sorry in advance for any mistake I made in formatting or anything else. REPLY [12 votes]: This kind of confusion is very prevalent, and the primary reason behind it is the wrong definition of the definite integral as the area under a curve. It is important to first understand that the definition of the symbol $\int_{a}^{b}f(x)\,dx$ has nothing to do with area as such. The definition has to be based on the numbers $a, b$ and the function $f$ defined on the interval $[a, b]$. One such definition was provided by Bernhard Riemann, and it assumes $f$ to be bounded on $[a, b]$. I will leave the definition of the Riemann integral to the standard textbooks of analysis and focus next on the Fundamental Theorems of Calculus. First Fundamental Theorem of Calculus: If $f$ is bounded on $[a, b]$ and the Riemann integral $\int_{a}^{b}f(x)\,dx$ exists, then the function $F$ defined on $[a, b]$ by $$F(x) = \int_{a}^{x}f(t)\,dt$$ is continuous on $[a, b]$ and $$F'(c) = f(c)$$ for any point $c \in [a, b]$ where $f$ is continuous. Second Fundamental Theorem of Calculus: If $F$ is differentiable on $[a, b]$ and the derivative $F' = f$ (say) is Riemann integrable on $[a, b]$, then $$F(b) - F(a) = \int_{a}^{b}F'(x)\,dx = \int_{a}^{b}f(x)\,dx$$ Note that the first FTC describes a function $F$ based on a given function $f$ which has some nice properties ($F$ is continuous on $[a, b]$ and differentiable at those points where $f$ is continuous). But this $F$ need not be an anti-derivative of $f$ on $[a, b]$, because we are not guaranteed that $F'(x) = f(x)$ for all $x \in [a, b]$. We have $F'(x) = f(x)$ only at those points $x$ at which $f$ is continuous. At points where $f$ is discontinuous, the function $F$ may or may not be differentiable. The second FTC deals with anti-derivatives. It says that if $F$ is an anti-derivative of $f$ on $[a, b]$, i.e. $F'(x) = f(x)$ for all $x \in [a, b]$, and if further $f$ is Riemann integrable on $[a, b]$, then the definite integral $\int_{a}^{b}f(x)\,dx$ can be simply evaluated as the difference between the values of the anti-derivative (i.e. as $F(b) - F(a)$). The function $F$ used in the first FTC has a different role than the function $F$ of the second FTC, and it is useless to think of them as the same. Things change drastically when the function $f$ (which is to be integrated) is guaranteed to be continuous on $[a, b]$. When $f$ is continuous, both FTCs merge into one theorem, which we can simply call the FTC for continuous functions: Fundamental Theorem of Calculus for Continuous Functions: If $f$ is continuous on $[a, b]$, then $\int_{a}^{b}f(x)\,dx$ exists and the function $F$ given by $$F(x) = \int_{a}^{x}f(t)\,dt$$ is an anti-derivative of $f$ on $[a, b]$. Moreover, if $G$ is any anti-derivative of $f$ on $[a, b]$, then $$\int_{a}^{b}f(x)\,dx = G(b) - G(a)$$ And now you see the connection between the word "anti-derivative" and the integral $$\int_{a}^{x}f(t)\,dt = F(x)$$ The function $F$ is an anti-derivative of $f$, and not necessarily the anti-derivative of $f$. You also remember the fact that a function does not have a unique anti-derivative; two anti-derivatives of the same function differ by a constant, and this is the reason for using the constant of integration while calculating indefinite integrals. This is also the reason that I have used a different letter $G$ for a generic anti-derivative in the theorem mentioned above, while the letter $F$ denotes a very specific anti-derivative represented in the form of a definite integral. Thus when you wish to calculate $\int_{2}^{5}x^{2}\,dx$ by the use of the anti-derivative $x^{3}/3$, you are just choosing one of the infinitely many anti-derivatives available, and it will work as expected. You may instead choose the specific anti-derivative $\int_{2}^{x}t^{2}\,dt = (x^{3}/3) - 8/3$, and this will also work fine. You can also think in this manner. The anti-derivative $F(x) = \int_{a}^{x}f(t)\,dt$ is a very specific one and has the property $F(a) = 0$. Other anti-derivatives of $f$ will not have this property of vanishing at $a$. And the fundamental theorem says that the choice of anti-derivative does not matter in evaluating the definite integral.
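As a concrete numeric companion to the $\int_2^5 x^2\,dx$ example, here is a quick sketch showing that any anti-derivative gives the same value, and that the specific anti-derivative $\int_2^x t^2\,dt$ vanishes at $2$ (function names are illustrative; the approximation is a plain midpoint Riemann sum):

```python
def definite_integral(f, a, b, n=100_000):
    """Midpoint Riemann-sum approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

f = lambda x: x**2
G1 = lambda x: x**3 / 3            # one anti-derivative
G2 = lambda x: x**3 / 3 - 8 / 3    # another one; it satisfies G2(2) = 0

print(definite_integral(f, 2, 5))  # ~39.0 (the Riemann-integral value)
print(G1(5) - G1(2))               # 39.0: the constant of integration cancels
print(G2(5) - G2(2))               # 39.0: same value with the other choice
print(G2(2))                       # 0.0, like F(x) = integral from 2 to x, at x = 2
```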
<|endoftext|> TITLE: Solutions to $f'=f$ over the rationals QUESTION [29 upvotes]: The problem is as follows: Let $f: \mathbb{Q} \to \mathbb{Q}$ and consider the differential equation $f' = f$, with the standard definition of differentiation. Do there exist any nontrivial solutions? (Note that of course $f \equiv 0$ is a solution - by "nontrivial solutions", I mean anything else.) Observations: Differentiation and continuity are much weaker concepts on the rationals. For example, $H(x-\sqrt{2}) : \mathbb{Q} \to \mathbb{Q}$ is continuous and everywhere differentiable, where $H$ is the Heaviside step function. If there exists a nontrivial solution $f_0$, then there are uncountably many solutions. For example, $H(x-\alpha)f_0$ is also a solution for any irrational $\alpha$ (which already gives uncountably many solutions), and thus any* linear combination $k_0 f_0 + \sum_{\alpha \in A} H(x-\alpha)k_\alpha f_0$ (with $A \subset \mathbb{R}\setminus\mathbb{Q}$) is also a solution, by linearity of the DE. We can answer in the negative if there is a way to show that any such solution could be extended to a solution to $f' = f$ on $\mathbb{R}$, because those solutions are simply $ke^x$, which takes irrational values over the rationals unless $k = 0$. Unfortunately, the solutions $f : \mathbb{Q} \to \mathbb{Q}$ of a differential equation $\mathcal{L}y = 0$ are not, in general, a subset of the real solutions. E.g. $y' = 0$ has solution $H(x-\sqrt{2})$, but every solution on $\mathbb{R}$ must be constant. If the answer is "yes", maybe we'd hope to be able to construct a solution via some iterative method, but since Cauchy sequences are not in general convergent, we'd need some sort of machinery to guarantee rational limits. *You can either insert the word "finite" here, or stipulate that the $k_\alpha$ are such that the quantity $\sum_{\alpha\in A,\,\alpha<x} k_\alpha$ converges for every $x$. REPLY: Nontrivial solutions do exist; in fact, something much more general holds: for any function $g:\mathbb{Q}^2\to\mathbb{Q}$ there is a function $f:\mathbb{Q}\to\mathbb{Q}$ satisfying $f'(q)=g(q,f(q))$ for all $q$, and we can even ensure the stronger condition that for every $q$ there exists $\delta>0$ such that $\left|\frac{f(q')-f(q)}{q'-q}-g(q,f(q))\right|<(q'-q)^2$ for every $q'$ with $|q'-q|<\delta$. In this spirit, define the following subsets of $\mathbb Q^2$: $$S(x,y,\delta)=\left\{(x',y'):\left|y'-y-g(x,y)(x'-x)\right|<\left|x'-x\right|^3\text{ or }|x'-x|>\delta\right\}\cup \{(x,y)\}.$$ When $|x'-x|\leq \delta$, this is a region bounded by two cubic curves tangent at $(x,y)$ with slope $g(x,y)$ at that point, which is related to the condition we are requiring of $f$. We will define the function by defining it at particular points and choosing, at each stage, a suitable open set $S$ in which we place every further point. This will suffice to ensure differentiability. In particular, let $\{p_n\}_{n=1}^{\infty}$ be an enumeration of the rationals. We will construct a sequence $\{q_n\}_{n=1}^{\infty}$ of rationals such that $f(p_n)=q_n$ defines a suitable function. We will, during the construction, use an auxiliary sequence $U_n$ of open subsets of $\mathbb Q^2$, letting $U_0=\mathbb Q^2$. At each step in the construction, we will demand the following of $U_{n}$ for all $n$: Property 1: $U_{n}=U_{n-1}\cap S(p_n,q_n,\delta)$ for some $\delta>0$. Property 2: $U_{n}\setminus \{(p_1,q_1),(p_2,q_2),\ldots,(p_n,q_n)\}$ is open. Property 3: For all rational $p\in\mathbb Q$ there exists $q\in \mathbb Q$ such that $(p,q)\in U_{n}$. Property 4: For all $n'\leq n$ we have $(p_{n'},q_{n'})\in U_{n}$. Since the graph of $f$ lying inside $S(p,f(p),\delta)$ for every $p$ implies that $f$ satisfies the differential equation, our only remaining business is to show that such a triple of sequences exists. To do so, suppose we are given the first $n-1$ terms of the sequences $\{q_n\}$ and $\{U_n\}$, along with the whole sequence $p_n$, and need to find a suitable $q_n$ and $U_n$ to extend the sequences. By property $3$ of $U_{n-1}$ there exists some $q$ such that $(p_n,q)\in U_{n-1}$. Set $q_n$ to any such $q$ and let $m=|g(p_n,q_n)|+1$. Now, using property 2 of $U_{n-1}$, choose some $\delta\in (0,1)$ such that $(p_n-\delta,p_n+\delta)\times (q_n-m\delta,q_n+m\delta)\subseteq U_{n-1}$. One can see that $$S(p_n,q_n,\delta)\cap(p_n-\delta,p_n+\delta)\times \mathbb R\subseteq(p_n-\delta,p_n+\delta)\times (q_n-m\delta,q_n+m\delta)\subseteq U_{n-1}.$$ Set $U_n=U_{n-1}\cap S(p_n,q_n,\delta)$.
Now we check that we have satisfied the conditions: Property 1: Trivial, from the definition of $U_n$. Property 2: Write $$U_n\setminus \{(p_1,q_1),\ldots,(p_n,q_n)\}=\left(U_{n-1}\setminus\{(p_1,q_1),\ldots,(p_{n-1},q_{n-1})\}\right)\cap \left(S(p_n,q_n,\delta)\setminus \{(p_n,q_n)\}\right).$$ Thus the given set is the intersection of two open sets and thus open. Property 3: Suppose that $|p-p_n|<\delta$. Then there is a $q$ such that $(p,q)$ is in $S(p_n,q_n,\delta)\cap (p_n-\delta,p_n+\delta)\times \mathbb R\subseteq U_n$. If $|p-p_n|\geq \delta$, then $(p,q)\in U_n$ exactly when $(p,q)\in U_{n-1}$, so the property is inherited from $U_{n-1}$. Property 4: Due to property $1$, if there is a $q$ such that $(p_{n'},q)\in U_n$, then $q=q_{n'}$. By property $3$, there is such a $q$, so $(p_{n'},q_{n'})\in U_n$. This shows that we may extend the sequences given any finite prefix of them. It is then easy to check that the resulting function $f(p_n)=q_n$ satisfies the hypotheses.<|endoftext|> TITLE: Prove that $SL(2,\mathbb{R})$ acts transitively on the upper half plane QUESTION [9 upvotes]: I want to prove that $\mathrm{SL}(2,\mathbb{R})$ acts transitively on the upper half plane $\mathbb{U}:=\{z\in \Bbb C\ |\ \mathrm{Im}(z)>0\}$ by $$z\longmapsto \dfrac{az+b}{cz+d}.$$ Is it enough to say that since $\begin{pmatrix} a & b\\c & d \end{pmatrix}\in \text{SL}(2,\mathbb{R})$ is always invertible, for $x,y\in \mathbb{U}$ we can always find a matrix $A\in \text{SL}(2,\mathbb{R})$ such that $Ax =y$? I feel like there should be more to it. REPLY [19 votes]: Let $z=x+iy$ be a given point in the upper half plane. Then the $SL(2,\mathbb{R})$ matrix $$ \begin{pmatrix} \sqrt{y}&x/\sqrt{y} \\ 0&1/\sqrt{y} \end{pmatrix} $$ maps $i$ to $z$: indeed, $$\frac{\sqrt{y}\,i+x/\sqrt{y}}{0\cdot i+1/\sqrt{y}}=x+iy=z.$$ Since $z$ was arbitrary, the orbit of $i$ is the whole upper half plane. Furthermore, any point $w$ can be mapped to $i$ by using the inverse of this matrix, so any $w$ can be mapped to any $z$.
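For completeness, a small numeric sketch of this argument (the helper names and the sample point are illustrative choices):

```python
from math import sqrt

def matrix_for(z):
    """SL(2,R) matrix sending i to z = x + iy with y > 0, under the Moebius action."""
    x, y = z.real, z.imag
    s = sqrt(y)
    return ((s, x / s), (0.0, 1.0 / s))

def moebius(M, z):
    (a, b), (c, d) = M
    return (a * z + b) / (c * z + d)

z = 2.0 + 3.0j                 # arbitrary point in the upper half plane
M = matrix_for(z)
(a, b), (c, d) = M
print(a * d - b * c)           # 1.0, so M is indeed in SL(2,R)
print(moebius(M, 1j))          # (2+3j): M sends i to z
```

Composing such a matrix with the inverse of another then carries any point of the upper half plane to any other, which is exactly the transitivity claim.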