TITLE: Exercise 2.13, I. Martin Isaacs' Character Theory QUESTION [12 upvotes]: I am trying to solve Exercise 2.13 in Isaacs' Character Theory book. However, I have met some difficulties, so let me sketch what I am thinking so that you can give me a hint. Problem 2.13 is stated as follows: Let $|G'|=p$, a prime. Assume that $G'\subseteq Z(G)$. Show that $\chi(1)^2=|G:Z(G)|$ for every nonlinear $\chi\in Irr(G)$. I proceed as follows: Let $\chi$ be a nonlinear character of $G$. Since $G'\subseteq Z(G)=\cap Z(\chi)$, we have $G'\subseteq Z(\chi)$. Therefore $G/Z(\chi)$ is abelian. By Theorem 2.31, we have that $$\chi(1)^2=|G:Z(\chi)| $$ We would like to prove that $Z(G)=Z(\chi)$. We just need to prove the reverse inclusion. Assume that $g\in Z(\chi)$; we need to prove that $g\in Z(G)$, i.e., that it commutes with all $h\in G$. Let $h\in G$; then we need to prove that $[g,h]=1$. Since $[g,h]\in G'\subset Z(\chi)$, then $[g,h]^p=1$ since $|G'|=p$. Moreover, $g\in Z(\chi)$ implies that $[g,h]\in Ker\chi$, and so $\chi([g,h])=\chi(1)>1$, since $\chi$ is nonlinear. But at this stage, I don't know how to proceed to get $[g,h]=1$. Could you give me some hints? Thanks a lot in advance. REPLY [7 votes]: Since $\chi$ is nonlinear, $G'$ is not a subset of the kernel, so $\ker(\chi) \cap G' = 1$ since $|G'|=p$ is so small. Hence $[g,h]=1$ and your argument is complete.<|endoftext|> TITLE: Why is the determinant invariant under row and column operations? QUESTION [8 upvotes]: I know that we may add any row to any other in a determinant and its value remains the same. This is clear enough since the elementary matrices corresponding to such row and column operations have determinant 1. Is there an explanation of this fact in terms of geometric ideas such as volume and linear transformations? REPLY [12 votes]: Think of the area of a rectangle (ABCD in the figure). Now, take one of the sides of the rectangle (DC) and shift it along the line it lies on so that the sides not parallel to the shifting one slant at an ever sharper angle (DC $\to$ FE). This process alters the shape of the figure into a parallelogram, but it does not alter the area of the figure. That's exactly the operation you do when you add one of the rows multiplied by a factor to another row of the determinant. The rows (or columns) correspond to the vectors $\vec{AD}$ and $\vec{AB}$. If you add $\vec{DF}$ to $\vec{AD}$, you obtain the new row $\vec{AF}$. Thus the determinant now represents the area of the parallelogram, which is the same as that of the rectangle. This generalizes to higher dimensions.<|endoftext|> TITLE: Where do Chern classes live? $c_1(L)\in \textrm{?}$ QUESTION [17 upvotes]: If $X$ is a complex manifold, one can define the first Chern class of $L\in \textrm{Pic}\,X$ to be its image in $H^2(X,\textbf Z)$, by using the exponential sequence. So one can write something like $c_1(L)\in H^2(X,\textbf Z)$. But if $X$ is a scheme (say of finite type) over any field, then I saw a definition of the first Chern class $c_1(L)$ just via its action on the Chow group of $X$; namely, on cycles it works as follows: for a $k$-dimensional subvariety $V\subset X$ one defines \begin{equation} c_1(L)\cap [V]=[C], \end{equation} where $L|_V\cong\mathscr O_V(C)$, and $[C]\in A_{k-1}X$ denotes the Weil divisor associated to the Cartier divisor $C\in\textrm{Div}\,V$ (the latter being defined up to linear equivalence). One then shows that this descends to rational equivalence, and we end up with a morphism $c_1(L)\cap -:A_kX\to A_{k-1}X$.
So, my naive questions are: $\textbf{1.}$ Where do Chern classes "live"? (I just saw them defined via their action on $A_\ast X$ so the only thing I can guess is that $c_1(L)\in \textrm{End}\,A_\ast X$ but does that make sense?) $\textbf{2.}$ How to recover the complex definition by using the general one that I gave? $\textbf{3.}$ Are there any references where to learn about Chern classes from the very beginning, possibly with the aid of concrete examples? Thank you! REPLY [5 votes]: Your conceptual questions have been nicely answered by Georges Elencwajg. For your third question, and to learn how all he says and more is developed from scratch, you may find very interesting the following freely available course notes, where the authors develop the whole machinery at an "introductory" level. The second one requires a previous course in algebraic geometry, but the first reference provides you exactly with the needed background before introducing Segre and Chern classes in general and intersection theory up to Hirzebruch-Riemann-Roch theorem: Gathmann, A. - Algebraic Geometry, Notes for a Class at University of Kaiserslautern. Vakil, R. - Topics in Algebraic Geometry: Introduction to Intersection Theory. To get an easier quick glimpse at all the topics covered by the mentioned master monograph by Fulton, look at his own overview: Fulton W. - Introduction to Intersection Theory in Algebraic Geometry (CBMS Regional Conference Series in Mathematics), AMS 1984. Georges Elencwajg has recommended in several of his posts the future book by Eisenbud and Harris, but I have found all the links to be of an old 2010 version, I recommend anybody interested in the evolution of the book to get the latest version available, as it includes refinements, many more pictures and is more complete: Eisenbud; Harris - 3264 & All That, Intersection Theory in Algebraic Geometry (UPDATE: new draft from April 2013).<|endoftext|> TITLE: Binomial series with two binomial coefficents QUESTION [5 upvotes]: My question reads: Does this formula has mathematical meaning at first place? Is it summable? $$\sum^{\infty}_{k=0}{n\choose k}{m\choose k} x^k$$ REPLY [6 votes]: It's quite easy to verify that your sum is equal to the coefficient of $z^m$ in the product: $$ (1+xz)^n\,(1+z)^m.$$ If you set $x=1$ you can find a slight generalization of the Chu-Vandermonde identity: $$ \sum_{k=0}^{+\infty}\binom{m}{k}\binom{n}{k}=\binom{m+n}{n}.$$<|endoftext|> TITLE: Does the derivative of a continuous function goes to zero if the function converges to it? QUESTION [6 upvotes]: Physicist here. I am puzzled by a question: looking at a continuous function $g :\mathbb{R} \rightarrow \mathbb{R}$ that goes to zero at infinity, I am interested in the behavior of its derivative $\beta = g'$. Precisely, does it go to zero too? By writing on paper it looks like: $$ \beta(+ \infty)=\text{lim}_{x\rightarrow \infty}\text{lim}_{h\rightarrow 0} \frac{g(x+h)-g(x)}{h}$$ And if I can invert the two limits, I get what I expect: $\beta(+\infty)=0$, so I was curious about the hypothesis behind this permutation. Is the requirement of continuity enough? Thanks. REPLY [9 votes]: No, a counterexample is $$ \begin{eqnarray} g(x) &=& \frac{\sin(x^2)}{x} \text{,} \\ g'(x) &=& 2\cos(x^2) - \frac{\sin(x^2)}{x^2} \end{eqnarray} $$ $g$ obviously goes to zero as $x \to \infty$ but $g'(x)$ doesn't. In general, you need uniform convergence of the inner limit with respect to the outer one to swap the two. In the case of this counterexample, you don't have that. 
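For a concrete illustration, evaluate both functions along the points $x_k=\sqrt{2\pi k}$, $k=1,2,\dots$: since $x_k^2=2\pi k$, $$ g(x_k)=\frac{\sin(2\pi k)}{x_k}=0, \qquad g'(x_k)=2\cos(2\pi k)-\frac{\sin(2\pi k)}{x_k^2}=2, $$ while along $y_k=\sqrt{2\pi k+\pi}$ one gets $g'(y_k)=-2$. So $g\to 0$ along these points, but $g'$ keeps oscillating between $2$ and $-2$ at arguments tending to infinity, and in particular cannot tend to $0$.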
(Technically, my counterexample is continuous only on $\mathbb{R}\setminus\{0\}$, but since you're interested only in $g$'s asymptotic behaviour, that doesn't really matter. You can obviously make it continuous on the whole real line by adjusting it on some interval around $0$, which won't change the asymptotic behaviour at all) REPLY [5 votes]: No, it isn't enough. Let's consider \[ g(x) = \frac{\sin(e^x)}{1 + x^2} \] which goes to zero at infinity, but it's derivative \[ g'(x) = \frac{e^x\cos(e^x)(1+x^2) - 2x\sin(e^x)}{(1 + x^2)^2} \] doesn't.<|endoftext|> TITLE: Calculate average angle after crossing 360 degrees QUESTION [6 upvotes]: For a piece of code I am writing to smooth out movements I need to calculate the average angle over the past 5 recorded angles given (used to give directionality to an object) This can be achieved quite simply by calculating the median of the 5 previous angles. However, the angles given range from 0 degrees to 360 degrees so we get an immediate issue here. Once you move over 360 degrees the angle is reset back to 0 so if we was to send 0 to the array of previous angles then the following would happen: (355 + 359 + 360 + 360 + 1) / 5 = 287 Now obviously 287 is completely the wrong angle and gives an abnormal movement once we cross this 360 degree border. I've tried checking if the previous values are on the 360 side or the 0 side then adjusting the new value accordingly but we get an issue with 1; the performance (there is a very short update time before it effects the user interface) and 2; when we get to 720 it will have to keep looping around again. I don't have a very good background with maths so I thought I would ask here as my last resort but is there a way/formula I can calculate the average with the 360 to 0 gap in mind and give a result on the correct side of this instead of just giving a false value? Thanks for looking, please let me know if i need to provide any more information to help you :) Liam REPLY [6 votes]: One straightforward way to solve this is to convert the headings (angles) to Cartesian coordinates, average the x and y components separately, then convert back to a heading. So the answer is: AverageAngle = ATAN2(sum_of_sin(theta), sum_of_cos(theta))<|endoftext|> TITLE: Non linear transformation satisfying $T(x+y)=T(x)+T(y)$ QUESTION [5 upvotes]: Given V a vector space with vectors and scalars $\mathbb{C}$, does there exists a non linear transformation $T:V\rightarrow V$ such that $T(x+y)=T(x)+T(y)$ for all $x,y\in V$? I think such a transformation will be 'like' one that satisfies Cauchy's functional equation $f(x+y)=f(x)+f(y)$ without any other conditions, but other than that, I have no idea. REPLY [2 votes]: The function $T \colon V \to V$ is a homomorphism of abelian groups or equivalently $\mathbb{Z}$-modules. If we use Zorn's Lemma we can realise $V$ as a freely generated module over $\mathbb{Z}$ with generators $X$ say. Then $X$ is uncountable. Picking $X$ is like picking a basis of a complex vector space. You can define every $T$ by taking any function $X \to V$ and extending it to the whole of $V$ to be a homomorphism like you extend any map of a basis to be a linear map. Unfortunately I don't think there is any constructive way of finding $X$ so all this shows is there are lots of maps $T$. Might these all be linear ? No as you can arrange them not to be. As $X$ is uncountable you can choose $\dim(V) + 1$ distinct generators from $X$. 
These must be linearly dependent so arrange $T$ to not be linear on them and then extend it to the rest of $X$ arbitrarily. EDIT: Previous comment on automorphisms of $\mathbb{C}$ removed as not really relevant.<|endoftext|> TITLE: Relation between uniform continuity and uniform convergence QUESTION [17 upvotes]: Is there a relationship between uniform continuity and uniform convergence? For example, suppose $\{f_{n}\}$ is a sequence of functions each of which is uniformly continuous on $[a, b]$. Then does it follow that $f_{n}$ converges to $f$ uniformly on $[a, b]$? (Maybe with some additional conditions?) REPLY [12 votes]: No, for example each $f_n$ can be equal to a constant $c_n$, but such that the sequence of real numbers $\{c_n\}$ is not convergent. Even if $\{f_n\}$ converges pointwise, it's not enough (take $f_n(x)=x^n$ on $[0,1]$). However, it's true that a uniform limit on $I$ of uniformly continuous functions on $I$ is uniformly continuous on $I$. To see that, use a $3\varepsilon$-argument: take an integer such that the uniform distance between $f$ and $f_n$ is $\leq\varepsilon$, and use uniform continuity of $f_n$ on $I$ to get the result. There are also cases where the convergence can be uniform, like in the context of Dini's theorem for example.<|endoftext|> TITLE: Logarithm of absolute value of a holomorphic function harmonic? QUESTION [15 upvotes]: Let $f:U\rightarrow\mathbb{C}$ be holomorphic on some open domain $U\subset\hat{\mathbb{C}}=\mathbb{C}\cup\{\infty\}$ and $f(z)\not=0$ for $z\in U$. Is it true that $z\mapsto \log(|f(z)|)$ is harmonic on $U$ ? I guess the answer is yes and if that is true, how can I see that without a long and nasty calculation? REPLY [7 votes]: Locally (but not necessarily globally), $\log f(z)$ is an analytic function because $\log$ is an analytic function. The real part of $\log f(z)$ is $\log |f(z)|$, i.e. the polar representation of the complex number $w$ is $w = r e^{i\theta}$ where $r = |w|$, and $\log w = \log r + i \theta$ so $\text{Re}(\log w) = \log r = \log |w|$. The real part of an analytic function is harmonic.<|endoftext|> TITLE: Generate the smallest $\sigma$-algebra containing a given family of sets QUESTION [7 upvotes]: My teacher gave me an example of performing the subject: Example Let $\Omega = \Bbb R$ and $\mathcal R = \{(-\infty,-1),(1,+\infty)\}$. Then $\sigma(\mathcal R) = \{\emptyset, \Bbb R, (-\infty,-1), (1,+\infty), [-1, \infty), (-\infty,1], (-\infty,-1)\cup(1,+\infty),[-1,1]\}$. There was a different example, where she also generated the smallest $\sigma$-algebra for the family of sets $\mathcal A = \{A,B\} \subset 2^\Omega$ in the same way: $\sigma(\mathcal R) = \{\emptyset, \Omega, A, B, A^C,B^C,A\cup B, (A\cup B)^C\}$. I certainly understand why $\emptyset$, $\Omega $ and a family of sets itself are there, and I certainly know that $\sigma$-algebra is closed under the operations of complement and union. What I don't understand about the other elements of generated $\sigma$-algebras: why we are taking exactly them? Does this work in a general case? REPLY [3 votes]: The sigma needs to contain $\varnothing$, $\Omega$, $A$ and $B$. Since a sigma algebra is closed under countable unions and complements it must contain $A \cup B$, $A^{C}$, and $B^{C}$.Since a sigma algebra is closed under complements it must contain $(A^{C} \cup B^{C})^{C} = A \cap B$. A more general construction of a sigma algebra from $\mathcal{A} \subseteq 2^{X}$ is a follows: The construction is in stages indexed by the ordinals. Stage $0$. 
$\sigma_{0} = \mathcal{A} \cup \{ \varnothing, X\} $. Stage $\alpha + 1$. We suppose we have defined $\sigma_{\alpha}$. Then $\sigma_{\alpha + 1} = \sigma_{\alpha} \cup \{ \cup f(i)\colon f: \mathbb{N} \rightarrow \sigma_{\alpha} \} \cup \{ X \smallsetminus \cup f(i)\colon f: \mathbb{N} \rightarrow \sigma_{\alpha} \}$. Stage $\lambda$. Here $\lambda$ is a limit ordinal. We suppose that we have defined $\sigma_{\alpha}$ for all $\alpha < \lambda$. Then $\sigma_{\lambda} = \cup \{ \sigma_{\alpha} \colon \alpha < \lambda \} $. This step might require the axiom of choice. With this construction $\sigma_{\aleph_{1}}$ is the smallest sigma algebra containing $\mathcal{A}$. Stage $\alpha + 1$ guarantees that if something is in the collection, its complement is also in the collection. Suppose that for each $i \in \mathbb{N}$ we have $A_{i} \in \sigma_{\aleph_{1}}$. For each $i \in \mathbb{N}$ there exists a least countable $\alpha_{i}$ with $A_{i} \in \sigma_{\alpha_{i}}$. Possibly using the axiom of choice, there is a countable ordinal $\alpha^{*}$ greater than any of the $\alpha_{i}$. We will have both $\cup \{A_{i} \colon i \in \mathbb{N} \} \in \sigma_{\alpha^{*}}$ and $X \smallsetminus \cup \{A_{i} \colon i \in \mathbb{N} \} \in \sigma_{\alpha^{*}}$.<|endoftext|> TITLE: Asymptotic growth of $\prod_{k=1}^n \frac{k^\alpha}{\lambda + k^\alpha}$? QUESTION [5 upvotes]: Can you please explain why for $\lambda > 0$ and $0 < \alpha < 1$ $$\prod_{k=1}^n \frac{k^\alpha}{\lambda + k^\alpha} \sim \exp\left(-\frac{\lambda}{1-\alpha}n^{1-\alpha}+o(n^{1-\alpha})\right)$$ holds? I'm stuck at the moment. EDIT: added $o()$ term for correctness. REPLY [4 votes]: Define $u:x\mapsto\log(1+\lambda x^{-\alpha})$ and, for every $n\geqslant1$, $s_n=\sum\limits_{k=1}^nu(k)$. The function $u$ is decreasing, hence $$ \int_{k}^{k+1}u(x)\,\mathrm dx\leqslant u(k)\leqslant\int_{k-1}^ku(x)\,\mathrm dx. $$ Summing these yields, for every $n$, $$ u(n)+\int_1^nu(x)\,\mathrm dx\leqslant s_n\leqslant u(1)+\int_1^nu(x)\,\mathrm dx. $$ For every $z\geqslant0$, $z-\frac12z^2\leqslant\log(1+z)\leqslant z$, hence $$ w_n-v_n\leqslant\int_1^nu(x)\,\mathrm dx\leqslant w_n, $$ where $$ w_n=\lambda\frac{n^{1-\alpha}}{1-\alpha},\qquad v_n=\frac12\lambda^2\int_1^nx^{-2\alpha}\,\mathrm dx\leqslant \lambda^2 C_\alpha\,\max\{n^{1-2\alpha}\log(n+1),1\}. $$ In particular, $v_n\ll n^{1-\alpha}$, $u(1)\ll n^{1-\alpha}$ and $u(n)\sim\lambda n^{-\alpha}\ll n^{1-\alpha}$, hence $s_n=w_n+o(w_n)$ and $$\prod_{k=1}^n \frac{k^\alpha}{\lambda + k^\alpha}=\mathrm e^{-s_n}= \exp\left(-\frac{\lambda}{1-\alpha}n^{1-\alpha}+o(n^{1-\alpha})\right). $$<|endoftext|> TITLE: When does a line bundle have a meromorphic section? QUESTION [7 upvotes]: Let $X$ be a scheme and $D$ be a Cartier divisor on $X$. Then $D$ determines a line bundle $\mathcal{O}(D)$ on $X$. Under which conditions is the converse true? That is, when does a line bundle come from a Cartier divisor? This is equivalent to saying: when does a line bundle have a meromorphic section? I know that when $X$ is a non-projective manifold, line bundles do not have sections in general. REPLY [9 votes]: The map you describe $CaCl(X)\to Pic(X)$ sending the linear equivalence class $[D]$ of a Cartier divisor $D$ to the line bundle $\mathcal O(D)$ is always injective. It is very often surjective: it is the case if $X$ is integral or if $X$ is projective over a field.
However Kleiman has given a complicated example of a complete non-projective 3-dimensional irreducible scheme on which there is a line bundle not having any non-zero rational section and thus not coming from a Cartier divisor. The scheme $X$ is obtained from Hironaka's complete, integral, non-singular, non projective variety of dimension 3 (which is already a strange beast!) by adding nilpotents to the local ring of just one point. The details can be found in Hartshorne's Ample Subvarieties of Algebraic Varieties , Chapter I, Example 1.3, page 9. Here is a picture (in blue) of Hironaka's strange beast . The description is on page 185 of Shafarevich's book. REPLY [2 votes]: The most general assertion I know is: Any invertible sheaf on $X$ is isomorphic to a $O_X(D)$ when $X$ is locally noetherian and if the associated points of $X$ are contained in an affine open subset of $X$ (EGA IV.21.3.4). The condition on the associated points is satisfied if for instance $X$ is quasi-projective over a noetherian ring (then any finite subset of $X$ is contained in an affine open subset), or if $X$ is noetherian and reduced.<|endoftext|> TITLE: Is there an alternate definition for $\{ z \in \mathbb{C} \colon \vert z \vert \leq 1 \} $. QUESTION [5 upvotes]: Is there a method of constructing a subset of a reasonably arbitrary ring so that when the construction is applied the $\mathbb{C}$ the result is $B = \{ z \in \mathbb{C} \colon |z| \leq 1 \} $? My interest is in constructing something like an absolute value for an arbitrary ring. Notice that $\{ zB \colon z\in \mathbb{C} \}$ is, as an ordered set, isomorphic to $[0, \infty)$. The isomorphism is $zB \mapsto |z|$. Suppose we have a ring $R$ and a $G \subseteq R$ satisfying: We have $0, 1, -1 \in G$. For all $x,y \in G$ we have $xy \in G$. For all $r \in R$ there exists an $s \in R$ with $rG = Gs$. Then we can define a map $$\Vert \Vert \colon R \rightarrow \{ \sum_{i = i}^{n}a_{i}G \colon a_{i} \in R \} $$ for all $r \in R$ by the assignment $$r \mapsto rG.$$ The image of this map is partially ordered by inclusion. The smallest element is $\Vert 0 \Vert = \{ 0 \} $. For all $r,s \in R$ we have both $\Vert rs \Vert = \Vert r \Vert \Vert s \Vert$ and$\Vert r +s \Vert \subseteq \Vert r \Vert + \Vert s \Vert$. This summation property is why I used a collection of finite sums for the range of the function. The idea is to use such a function a type of absolute value for a arbitrary ring. It would be nice to have a definition for $G$ so that when the construction was applied to the complex numbers the result yielded a structure isomorphic to the usual absolute value for complex numbers. Perhaps such a construction is impossible. If so it would be nice to know that as well. REPLY [2 votes]: I don't think there's a purely algebraic way to characterize the unit ball in $\mathbb{C}$ as is pointed out by Henning Makholm. However, I think your approach to try and define some notion of "abstract unit ball" is interesting (but I think your conditions are not sufficient, for example you might want to exclude $G = R$). A more standard approach is simply to call an absolute value on a ring $R$ any function $|| : R \to \mathbb{R}$ satisfying the following properties : $|x| = 0$ if and only if $x = 0$ $|xy| = |x| |y|$ $|x + y| \le |x| + |y|$ Conditions 1 and 3 insure $d(x,y) = |x-y|$ is a distance on $R$ for which addition is continuous, and condition 2 insures multiplication and inversion (when defined) is continuous. 
So in a nutshell, it puts a nice topology on such a ring. Two absolute values on $R$ are said to be equivalent if they define the same topology. Well known example of rings with absolute value include $\mathbb{C}$ (or any subring such as $\mathbb{R}$ or $\mathbb{Q}$) with the usual absolute value. Notice that $x \mapsto \sqrt{|x|}$ is also an absolute value, but it is equivalent to $x \mapsto |x|$. From properties 1 and 2, we see that the existence of an absolute value implies the ring is an integral domain, and the absolute value can be naturally extended to its field of fraction. So it's hopeless to expect an absolute value on an arbitrary ring (to include some additional rings, you might relax condition 2 to $|xy| \le |x| |y|$, and the product would still be continuous) and you might as well assume that $R$ is a field. Also a given field can have many inequivalent absolute values (i.e. yielding different topologies). It is the case for $\mathbb{Q}$. Fix a prime $p$ and for any $x \in \mathbb{Q}^*$, denote $v_p(x)$ the exponent of $p$ in the decomposition of $x$ as a product of (possibly negative) powers of primes ($v_p(x)$ is called the $p$-adic valuation of $x$). We define the $p$-adic absolute value by $|x|_p = p^{-v_p(x)}$ (and $|0|_p = 0$) and check that this is an absolute value on $\mathbb{Q}$ (actually, we have a stronger version of property 3 : $|x+y|_p \le \max(|x|_p, |y|_p)$ called the ultrametric inequality). The topology induced on $\mathbb{Q}$ by this $p$-adic absolute value is very different from the one you get with the usual absolute value. The unit ball in $\mathbb{Q}$ is the subring of rational numbers $x$ that can be written $x = \frac{a}{b}$ with $a, b \in \mathbb{Z}$ and $b$ coprime to $p$. So you get a different ball for every prime $p$, and also different from the usual unit ball $\{x \in \mathbb{Q}, |x| \le 1\}$. But all of them are unit balls for some absolute value. So even from this point of view, the unit ball in a field is not unique. Now you might wonder if there are other absolute values on $\mathbb{C}$. The answer is yes : in some sense, we can extend the $p$-adic absolute value to $\mathbb{C}$. The process is quite intricate : first take the completion $\mathbb{Q}$ with regards to $||_p$, call that $\mathbb{Q}_p$, then extend $||_p$ to the algebraic closure $\overline{\mathbb{Q}}_p$ (this step is not obvious, but we can show there a unique way of doing so) and denote $\mathbb{C}_p$ the completion of $\overline{\mathbb{Q}}_p$ with regards to $||_p$. We can show $\mathbb{C}_p$ is algebraically isomorphic to $\mathbb{C}$, so you can "transport" $||_p$ to $\mathbb{C}$ (but it's better to denote $\mathbb{C}_p$ to keep track of the topology we put on it). The unit ball in $\mathbb{C}_p$ is a subring of $\mathbb{C}_p$ that does not contain the number $\frac{1}{p}$, so once again, it's very different from the unit ball in $\mathbb{C}$.<|endoftext|> TITLE: Union of two subspaces versus intersection of two subspaces QUESTION [7 upvotes]: According to the definition, the union of two subspaces is not a subspace. That is easily proved to be true. For instance, Let $U$ contain the general vector $(x,0)$, and $W$ contain the general vector $(0,y).$ Clearly, the union of these two subspaces would not be in either of the subspaces as it will violate closure axioms. As for the intersection of the two subspaces, I believe I understand the concept. 
However, I want to be sure of that, and I believe it comes down to the difference between union and intersection as applied to vector/subspaces. Basically, union - in this context - is being used to indicate that vectors can be taken from both subspaces, but when operated upon they have to be in one or the other subspace. Intersection, on the other hand, also means that vectors from both subspaces can be taken. But, a new subspace is formed by combining both subspaces into one. To explain, I'll use the same subspaces as above. Let $U$ contain the general vector $(x,0)$, and $W$ contain the general vector $(0,y).$ So, the intersection of $U$ and $V$ would contain the general vector $(x,y)$ (I should say based on what I said above). Therefore, the closure axioms are fulfilled. Am I correct in my reasoning? Any feedback is appreciated. REPLY [2 votes]: Let's prove that the intersection of two subspaces is also a subspace. Assume that we have a vector space $V$ which has two subspaces $S$ and $T$ inside. We would like to prove that $S\ \cap\ T$ is also a subspace of $V$. So we can list the assumptions as given below: $S$ is a subspace of $V$. $T$ is a subspace of $V$. $S\ \cap\ T$ is a subset (not subspace) of $V$. we want to prove that that: $S\ \cap\ T$ is a subspace of $V$. Based on assumptions 1 and 2 we can imply that all linear combinations of vectors inside $S$ reside in $S$, and that all linear combinations of vectors inside $T$ reside in $T$. Formally speaking: $$ \alpha x + \beta y \in S $$ $$ \gamma m + \theta n \in T $$ where $x$ and $y$ are two random vectors in $S$ and $m$ and $n$ are two random vectors in $T$. $\alpha$, $\beta$, $\gamma$ and $\theta$ are real numbers. Based on the third assumption we can choose a set of vectors $x$ and $y$ which reside in both sets $S$ and $T$ and we can rewrite the above formalization as: $$ \alpha x + \beta y \in S $$ $$ \gamma x + \theta y \in T $$ Simplifying the above formula, we can rewrite the case in which $\alpha$ is equal to $\gamma$ and $\beta$ is equal to $\theta$. In this case, we'll have: $$ \alpha x + \beta y \in T $$ $$ \alpha x + \beta y \in S $$ This shows that linear combinations of two random vectors $x$ and $y$ reside in both sets $S$ and $T$. So these linear combinations will be definitely inside the intersection of $S$ and $T$ as well. $$ \alpha x + \beta y \in S\ \cap\ T $$ hence $S\ \cap\ T$ is just a cute little subspace.<|endoftext|> TITLE: An inequality in Evans' PDE QUESTION [7 upvotes]: In Section $9.2$ Theorem $5$ of Lawrence Evans' Partial Differential Equations, First Edition the author proves that for a large enough $\lambda$, the equation $$\begin{array}-\Delta u+b(\nabla u)+\lambda u=0\ &\mbox{ in } U\\ u=0&\mbox{on }\partial U\end{array}$$ has a solution in $H_0^1(U)$. On page 507, the author writes $$\int_UC(|\nabla u|+1)|u|dx\leq\frac{1}{2}\int_U|\nabla u|^2dx+C\int_U(|u|^2+1)dx\ \mbox{ for }u\in H_0^1(U).$$ Here $C$ is the Lipschitz constant for the Lipschitz function $b$. My problem is that I cannot show this no matter how much I try. How is the gradient term becoming independent of $C$? Could someone please help! REPLY [6 votes]: Since the OP didn't seem to understand my comment, I'll make it into an answer. The Peter-Paul inequality (one guy big, the other one small) is the simple arithmetic estimate ($ab\in\mathbb{R}$, $\varepsilon>0$) $$ ab \le \varepsilon a^2 + \frac1{4\varepsilon}b^2\,. 
$$ This simple little guy is all you need to establish the OP's desired estimate, once you accept the fact that the $C$ is different on each side. In detail, we obtain the estimates $$ C|u| \le \frac{C^2}2|u|^2 + \frac12\,, $$ and $$ C|\nabla u|\,|u| \le \frac12|\nabla u|^2 + \frac{C^2}2|u|^2\,, $$ which we integrate and add together to conclude $$ \int_UC(|\nabla u|+1)|u|dx \leq\frac{1}{2}\int_U|\nabla u|^2dx + C^2\int_U|u|^2dx + \frac12\int_Udx\,, $$ as desired.<|endoftext|> TITLE: Is a bivariate function that is a polynomial function with respect to each variable necessarily a bivariate polynomial? QUESTION [13 upvotes]: Let $ \mathbb{F} $ be an uncountable field. Suppose that $ f: \mathbb{F}^{2} \rightarrow \mathbb{F} $ satisfies the following two properties: For each $ x \in \mathbb{F} $, the function $ f(x,\cdot): y \mapsto f(x,y) $ is a polynomial function on $ \mathbb{F} $. For each $ y \in \mathbb{F} $, the function $ f(\cdot,y): x \mapsto f(x,y) $ is a polynomial function on $ \mathbb{F} $. Is it necessarily true that $ f $ is a bivariate polynomial function on $ \mathbb{F}^{2} $? What if $ \mathbb{F} $ is merely countably infinite? REPLY [6 votes]: As shown is Gerry Myerson’s answer, the answer is NO when $\mathbb F$ is countably infinite. The answer is YES when $\mathbb F$ is uncountable, however. Sketch of proof : since there are only countably many degrees, the polynomials will share a common degree on an uncountable set. This bound on the degree allows one to use interpolation, and to retrieve the whole of $f$. More detailed proof : Denote by $d(x)$ the degree of the univariate polynomial $f(x,.)$ for $x\in {\mathbb F}$ (recall that the degree of the zero polynomial is $-\infty$), and put $U_d=\lbrace x \in {\mathbb F} | d(x)=d\rbrace$ for $d\in \lbrace -\infty \rbrace \cup {\mathbb N}$. Then the $U_d$ form a countable partition of $\mathbb F$, so at least one of the $U_d$, say $U_{n}$, is uncountable. We may assume that $n>0$, as the cases $n=-\infty$ and $n=0$ are similar and simpler. Let $y_0,y_1, \ldots y_{n}$ be $n+1$ distinct values in $\mathbb F$, this is possible because $\mathbb F$ is uncountable. (if the characteristic of $\mathbb F$ is zero, we can simply take $y_i=i$). Using Lagrange interpolation, let us put $$L_k(y)=\frac{\prod_{j \neq k}{(x-x_j)}}{\prod_{j \neq k}{(x_k-x_j)}}$$ for $0 \leq k \leq n$. Then one has, for any polynomial $P$ of degree $\leq n$ and any $y\in{\mathbb F}$, $$ P(y)=P(y_0)L_0(y)+P(y_1)L_1(y)+ \ldots +P(y_n)L_n(y) $$ In particular, one has for any $(x,y)\in U_n \times {\mathbb F}$, $$ (1) \ f(x,y)=f(x,y_0)L_0(y)+f(x,y_1)L_1(y)+f(x,y_2)L_2(y)+ \ldots +f(x,y_n)L_n(y) $$ The right-hand side is a fixed bivariate polynomial, let us denote it by $Q(x,y)$. Let $y\in {\mathbb F}$. Then the two univariate polynomials $f(.,y)$ and $Q(.,y)$ coincide on the uncountable set $U_n$, so they must coincide everywhere. Finally $f=Q$ everywhere and we are done.<|endoftext|> TITLE: Why is category theory not just another theory? QUESTION [7 upvotes]: Consider category theory as one theory among many others: with a simple signature and some simple axioms. Compare it with - e.g. - group theory as another theory with a simple signature and some simple axioms. Compare it with set theory as still another theory with a simple signature and some (not so simple) axioms. How could you tell in advance that (especially and somehow exclusively) category theory gives rise to (and makes definable) such a fundamental concept like universal property? 
Consider the way universal properties are defined: Why isn't one able to define comparable abstract and useful concepts on top of groups, sets, and so on? Or is one? What makes categories special in this respect - from an abstract point of view? REPLY [6 votes]: I am going to rephrase your questions, the way I understand them. I hope this is a reasonably correct interpretation. 1- question: how can you tell by just looking that category theory (CT) (a theory with a simple signature and some simple axioms, in your words) is more powerful than say group theory or..... Answer: take CT axioms (for ex. Here) and take monoid theory (MT) axioms (for ex. Here). You see almost immediately that MT is a special case of CT where all identities are the same. You also see that groupoid theory (GdT) is a special case of CT when you assume that all arrows are invertible. Since group theory (GT) is a special case of MT or GdT you already see that: CT is more general (thus presumably more powerful) than MT, GdT or GT. That's not bad, considering that all this can be seen almost immediately. If you specialize CT is more complicated ways you can prove that the resulting theory is a model for (a flavour of) Set theory. And so on with Rings, topological spaces, ect. So we can say that the simple axioms of CT can be augmented to obtain many other previously known mathematical structures. CT - as defined above - has been generalized further with higher categories so I would not say that it is unique in any (permanent) sense. Perhaps in the future we will abstract even more with some other theory, who knows. At the moment Category theorIES are at the forefront of generality and abstraction. That's all we can say. 2- question: Universal properties (UP) are logical statements expressible in the language of CT. Can we guess by looking at their formal structure that they are going to be a fundamental concept in CT? Answer: I do not think this is possible at the moment. We can feed a computer with a theory and a statement and ask whether the statement is true or not (a Theorem). But we cannot decide - by just looking at the structure of the statement - whether it is going to be very useful or just moderately useful in the future development of the theory. This can only be decided ex-post. In the case of UP, they are not even theorems, just properties (definitions basically), which may or may not apply to a specific category/functor. They turn out to be fundamental concepts by the fact that they appear to be satisfied by many important categories/functors. Ex-post unfortunately. Samuel defined UP in 1948 and Kan went on with adjoints in 1958. CT was founded in 1942. So UP and adjoints were not obvious things. 3- question:Why isn't one able to define comparable abstract and useful concepts on top of groups, sets, and so on? Answer: even the most abstract construction in group theory is just something that applies to groups (and some derived set or ring, or...) only. It will never be automatically applicable in a non-group setting (topological spaces which are not groups, for example). Conclusion. It seems that much axiomatic mathematics has been developed by "reverse-engineering". You take some nice theorem (Pythagoras' theorem for ex. ) and you work backward to find axioms such that the theorem can be deduced from them. This s apparently what Euclid did. Perhaps UP were invented that way. Basic CT axioms certainly were developed that way. hth<|endoftext|> TITLE: Are there other Identity Matrices? 
QUESTION [7 upvotes]: Is there only one identity matrix $$\begin{pmatrix} 1&0&...&...&0\\0&1&0&...&0\\...&0&1&...&0\\...&...&0&1&0\\...&...&...&0&1\end{pmatrix}$$ etc.. Or are there different identity matrices for other bases? A textbook example asks if $[T]_{\beta} = I$ (the $n\times n$ identity matrix) for some basis $\beta$, is $T$ the identity operator? REPLY [5 votes]: Suppose we want $$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} p & q \\ r & s \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$ to be true regardless of which matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ is, so that $\begin{bmatrix} p & q \\ r & s \end{bmatrix}$ is an identity matrix. Since it's true regardless of which matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ is, it must be true in particular if $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ is $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$, so we have $$ \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} =\begin{bmatrix} p & q \\ r & s \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}. $$ This last equality clearly implies that $\begin{bmatrix} p & q \\ r & s \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$. Conclusion: if $\begin{bmatrix} p & q \\ r & s \end{bmatrix}$ is an identity matrix, then $\begin{bmatrix} p & q \\ r & s \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$. Therefore there is only one $2\times2$ identity matrix. And the same argument works for bigger matrices.<|endoftext|> TITLE: Sum of Square Roots Problem QUESTION [5 upvotes]: Consider these two lists A = {1,25,31,84,87,134,158,182,198} B = {2,18,42,66,113,116,169,175,199} Now for both lists, add 1,000,000 to each of the numbers then take the sum of their square roots. For example, list A gives us $\sqrt{1000001}+\sqrt{1000025}+...\sqrt{1000198} = 9000.44998...$ Now what is interesting to note here is that the number obtained from list A and the number obtained from list B differ first in the 37th decimal place. A gives us the digit 2 and B gives us 5. Any explanation as to how/why does this happen? Or rather how were these numbers chosen? Is there some theory behind this, an algorithm perhaps to compile such lists or is it just some random number theory abomination? REPLY [4 votes]: This is really a comment on Marvis' answer, but too large for one. It shows a bit more detail about how it works. Let's take a simpler set: $A=\{1,6,8\}, B=\{2,4,9\}$. We can see that $1+6+8=15=2+4+9, 1^2+6^2+8^2=101=2^2+4^4+9^2$. Then if we form $\sqrt{1000001}+\sqrt{1000006}+\sqrt{1000008}-\sqrt{1000002}-\sqrt{1000004}-\sqrt{1000009}$ we get about $4.5 \cdot 10^{-15}$. The point is that we have canceled out the leading terms in the Taylor series. As Marvis says, $$\sqrt {1000006}= 1000\sqrt{1+\frac 6{1000000}}\approx 1000\left(1+\frac 12\frac 6{1000000}-\frac 12 \cdot \frac 12 \left(\frac 6{1000000}\right)^2+\frac 12 \cdot \frac 12 \cdot \frac 32 \left(\frac 6{1000000}\right)^3\right)$$ where we have kept the first four terms. 
Because the sum of each set and the sum of squares of each set are the same, the terms linear and quadratic in $\frac 1{1000000}$ will cancel and we will be left with the term in $1000\left(\frac 1{1000000}\right)^3$, which is $10^{-15}$<|endoftext|> TITLE: Proof of an inequality about $\frac{1}{z} + \sum_{n=1}^{\infty}\frac{2z}{z^2 - n^2}$ QUESTION [5 upvotes]: I've encountered an inequality pertaining to the following expression: $\frac{1}{z} + \sum_{n=1}^{\infty}\frac{2z}{z^2 - n^2}$, where $z$ is a complex number. After writing $z$ as $x + iy$ we have the inequality when $y \gt 1$ and $|x| \le \frac{1}{2} $: $|\frac{1}{z} + \sum_{n=1}^{\infty}\frac{2z}{z^2 - n^2}| \le C + C\sum_{n=1}^{\infty}\frac{y}{y^2+n^2}$ The "proof" of the inequality is given as follows: $\frac{1}{z} + \sum_{n=1}^{\infty}\frac{2z}{z^2 - n^2} = \frac{1}{x+iy} +\sum_{n=1}^{\infty}\frac{2(x+iy)}{x^2 - y^2 - n^2 + 2ixy}$ But I fail to see how the inequality follows. REPLY [2 votes]: The following is not the simplest way to solve this question but it may provide additional insight. The key observation is that both sums have a closed-form representation. To see this, we first need to show that for $w = \sigma + it$ $$ |\pi \cot(\pi w)| \le \pi \coth(\pi t).$$ This is because $$ |\pi \cot(\pi w)| = \pi \left| \frac{e^{i\pi\sigma-\pi t}+e^{-i\pi\sigma+\pi t}} {e^{i\pi\sigma-\pi t}-e^{-i\pi\sigma+\pi t}} \right|\le \pi \frac{e^{\pi t}+e^{-\pi t}}{e^{\pi t}-e^{-\pi t}} = \pi \coth(\pi t)$$ for $t>0.$ Similarly, when $t<0$, $$ |\pi \cot(\pi w)| = \pi \left| \frac{e^{i\pi\sigma-\pi t}+e^{-i\pi\sigma+\pi t}} {e^{i\pi\sigma-\pi t}-e^{-i\pi\sigma+\pi t}} \right|\le \pi \frac{e^{\pi t}+e^{-\pi t}}{e^{-\pi t}-e^{\pi t}} = -\pi \coth(\pi t)$$ Now we use a classic technique to evaluate the two sums, introducing the functions $$ f_1(w) = \pi \cot(\pi w) \frac{2z}{z^2-w^2} \quad \text{and} \quad f_2(w) = \pi \cot(\pi w) \frac{z}{z^2+w^2},$$ with the conditions that $z$ not be an integer for $f_1(z)$ and not $i$ times an integer for $f_2(z).$ We choose $\pi \cot(\pi w)$ because it has poles at the integers with residue $1$. The key operation of this technique is to compute the integrals of $f_1(z)$ and $f_2(z)$ along a circle of radius $R$ in the complex plane, where $R$ goes to infinity. Now by the first inequality we certainly have $|\pi\cot(\pi w)| < 2\pi$ for $R$ large enough. The two terms $\frac{2z}{z^2-w^2}$ and $\frac{z}{z^2+w^2}$ are both $\theta(1/R^2)$ so that the integrals are $\theta(1/R)$ and vanish in the limit. (Here we have used the two bounds on $|\pi \cot(\pi z)|$ that we saw earlier.) This means that the sum of the residues at the poles add up to zero. Now let $$ S_1 = \sum_{n=1}^\infty \frac{2z}{z^2-n^2} \quad \text{and} \quad S_2 = \sum_{n=1}^\infty \frac{y}{y^2+n^2}.$$ By the Cauchy Residue Theorem, $$ \frac{2}{z} - 2\pi\cot(\pi z) + 2S_1 = 0 \quad \text{and} \quad \frac{1}{y} - \pi\coth(\pi y) + 2S_2 = 0.$$ Solving these, we obtain $$ S_1 = -\frac{1}{z} + \pi\cot(\pi z) \quad \text{and} \quad S_2 = - \frac{1}{2} \frac{1}{y} + \frac{1}{2} \pi\coth(\pi y).$$ Starting with the left side of the original inequality we finally have $$\left| \frac{1}{z} + S_1\right| < \pi \coth(\pi y) = 2 S_2 + \frac{1}{y} .$$ It follows that $C=2$ is an admissible choice.<|endoftext|> TITLE: Prime, followed by the cube of a prime, followed by the square of a prime. Other examples? QUESTION [5 upvotes]: The numbers 7, 8, 9, apart from being part of a really lame math joke, also have a unique property. 
Consecutively, they are a prime number, followed by the cube of a prime, followed by the square of a prime. Firstly, does this occurence happen with any other triplet of consecutive numbers? More importantly, is there anyway to predict or determine when and where these phenomena will occur, or do we just discover them as we go? REPLY [4 votes]: If you'll settle for a prime, cube of a prime, square of a prime in arithmetic progression (instead of consecutive), you've got $$5,27=3^3,49=7^2\qquad \rm{(common\ difference\ 22)}$$ and $$157,\ 343=7^3,\ 529=23^2 \qquad \rm{(common\ difference\ 186)}$$ and, no doubt, many more where those came from. A bit more exotic is the arithmetic progression $81,\ 125,\ 169$ with common difference 44: the 4th power of the prime 3, the cube of the prime 5, the square of the prime 13. So $$3^4,5^3,13^2$$ is an arithmetic progression of powers of primes, and the exponents are also in arithmetic progression.<|endoftext|> TITLE: Is the pull back of a generated $\sigma$-algebra itself a generated $\sigma$-algebra? QUESTION [6 upvotes]: Let $f : \Omega_{1} \rightarrow \Omega_{2}$ be a map from a measurable space $\Omega_{1}$ to another measurable spacce $(\Omega_{2},\mathcal{B})$, where $\mathcal{B}$ is the generated $\sigma$-algebra of some $\textbf{countable}$ collection of subsets of $\Omega_{2}$: $$\mathcal{B}=\sigma(\mathcal{C}),$$ where $\mathcal{C}$ is countable. Do we have that the pull-back $\sigma$-algebra on $\Omega_{1}$, $f^{-1}(\mathcal{B})$, also a generated-$\sigma$ algebra? I think that the answer should be yes and $f^{-1}(\mathcal{B})$ is generated by $f^{-1}(\mathcal{C})$ but I don't know how to prove this. Could anyone help on this? Thank you so much! REPLY [9 votes]: Yes (might be already explicitly answered somewhere else on the Stack, but I couldn't find it, or maybe embedded in answers to related questions). For any (not necessarily countable) collection ${\cal C}$ of subsets of $\Omega_2$ and any function $f:\Omega_1 \rightarrow \Omega_2$, we have: $$ f^{-1}(\sigma({\cal C})) =\sigma(f^{-1}({\cal C})).$$ The "$\supseteq$" is clear as $f^{-1}({\cal C})\subseteq f^{-1}(\sigma({\cal C}))$ and $f^{-1}(\sigma({\cal C}))$ is a $\sigma$-algebra, hence including $\sigma(f^{-1}({\cal C}))$ (the minimal $\sigma$-algebra including collection $f^{-1}({\cal C})$). The "$\subseteq$" is not immediately obvious. Let us denote by ${\cal G}$ the collection of all sets $G\subseteq \Omega_2$ such that $f^{-1}(G) \in \sigma(f^{-1}({\cal C}))$. We note that ${\cal G}$ is a $\sigma$-algebra and that ${\cal C}\subseteq {\cal G}$, so $\sigma({\cal C})\subseteq {\cal G}$ (again by minimality of generated $\sigma$-algebra). This means $f^{-1}(\sigma({\cal C})) \subseteq \sigma(f^{-1}({\cal C}))$.<|endoftext|> TITLE: How do you compute the normal vector to a hyperplane in $\mathbb{R}^n$ given $n$ representative points? QUESTION [8 upvotes]: Given $n$ points (no two identical, no three colinear, no four coplanar, etc.), I'd like to find a formula for the normal vector to the unique hyperplane that intersects each of these points. In three dimensions, we use a cross product: given $x_1, x_2, x_3$, the normal vector is given by $(x_1 - x_2) \times (x_1 - x_3)$. How does this generalize? 
REPLY [3 votes]: Another way to do this, which I (re)discovered when helping my son learn about vectors, is to generalize the representation of the cross-product of two 3-vectors as a 3 by 3 matrix with the first row being the three unit vectors and the other two rows being the components of the two 3-vectors. In $R^n$, if we have $n-1$ n-vectors, form the n by n matrix with the first row being the n unit vectors, and the next n-1 rows being the components of the vectors. The resulting vector is orthogonal to each of the n-1 vectors.<|endoftext|> TITLE: Demonstrate Cantor set contains points other than interval endpoints. QUESTION [9 upvotes]: I am stumped on a problem in a textbook. This is not homework. I'm a physicist doing some self-study on Lebesgue integrals and Fourier theory. I'm starting with the basics, and reading up on measure theory. The problem is to show that $\frac{1}{4}$ is an element of the Cantor set. My first thought would be to find a ternary expansion consisting of only 0's and 2's. However, what I'm having trouble with is imagining that anything remains following the infinite intersection creating the Cantor set other than the interval endpoints. I imagine that if I pick a real number not lying on some interval endpoint I could find an $N$ large enough that the portion of the real line the number belongs to would be deleted. I'd like to see why this argument breaks down. REPLY [7 votes]: Your first thought is a good one. $(\frac 14)_{10}=0.\overline{02}_3=\sum \frac 2{9^i}=\frac {\frac 29}{1-\frac 19}$. Since each stage deletes the numbers left that have a $1$ at that point in the expansion, it never gets deleted. It isn't the endpoint of an interval, but it is a limit of endpoints of intervals.<|endoftext|> TITLE: Does pullback and d and dbar commute? QUESTION [6 upvotes]: If $M$ is a complex manifold, we can write $d = \partial + \overline{\partial}$. Does pullback commute with either $\partial$ or $\overline{\partial}$? REPLY [4 votes]: If $M$ and $N$ are complex manifolds and $f:M\to N$ is a smooth map, then $f^*$ commutes with $\partial$ and $\bar\partial$ if and only if $f$ is holomorphic. To prove the "only if" implication, write $f$ in local holomorphic coordinates as $(w^1,\dots,w^n) = (f^1(z),\dots,f^n(z))$, and note that the equation $\bar\partial(f^*w^j)= f^*(\bar\partial f^j)$ reduces to $\bar\partial f^j=0$, which is exactly the Cauchy-Riemann equations for $f^j$. The converse is a straightforward computation in local holomorphic coordinates.<|endoftext|> TITLE: Inverse of a Positive Definite QUESTION [37 upvotes]: Let $K$ be a nonsingular symmetric matrix; prove that if $K$ is positive definite, so is $K^{-1}$. My attempt: I have that $K = K^T$ so $x^TKx = x^TK^Tx = (xK)^Tx = (xIK)^Tx$ and then I don't know what to do next. REPLY [13 votes]: $K$ is positive definite, so all its eigenvalues are positive. The eigenvalues of $K^{-1}$ are the inverses of the eigenvalues of $K$, i.e., $\lambda_i (K^{-1}) = \frac{1}{\lambda_i (K)}$, which implies that it is a positive definite matrix.<|endoftext|> TITLE: Prime decomposition of 3-manifolds QUESTION [5 upvotes]: Let $H_g$ be a three-dimensional handlebody bounded by a genus $g$ surface. Let $M_g$ be a manifold obtained by gluing two copies of $H_g$ via an orientation-reversing homeomorphism of the surface of $H_g$. I would like to know what a prime decomposition of the manifold $M_g$ is. When $g=1$, we have that $M_1$ is homeomorphic to $S^2 \times S^1$ and this is a prime decomposition. What's the decomposition of $M_2$?
Is it a connected sum of two $S^2 \times S^1$? I appreciate any help. Thank you in advance. REPLY [3 votes]: Note that in the case $g=1$ you don't always get $S^1 \times S^2$: you may also obtain $S^3$ or the lens spaces, depending on the homeomorphism you choose for the gluing. The point is that the torus has a lot of non-isotopic homeomorphisms. The same is true for higher $g$ as well. What I'm suggesting is that a priori the decomposition will depend on the chosen gluing... I'm not aware of any kind of independence result. As you can see in the $g=1$ case, if your gluing fixes the two generators of $\pi_1 (M)$ then you get $S^1 \times S^2$, which is the prime decomposition of itself (being prime); if your gluing swaps them, then you get $S^3$, which is the prime decomposition of itself (being prime). So you get two different decompositions of two different manifolds. "The manifold obtained by gluing two copies of $H_1$" is an ill-posed term, and so is "the decomposition of the manifold obtained by gluing two copies of $H_1$". In general, you have to specify which gluing $\varphi \in Homeo (\partial H_g)$ you are performing, at least modulo isotopy of $\partial H_g$ (since isotopic homeomorphisms give homeomorphic manifolds $M_g$).<|endoftext|> TITLE: The determinant function is the only one satisfying the conditions QUESTION [11 upvotes]: How can I prove that the determinant function satisfying the following properties is unique: $\det(I)=1$ where $I$ is the identity matrix, the function $\det(A)$ is linear in the rows of the matrix, and if two adjacent rows of a matrix $A$ are equal, then $\det A=0$. This is how Artin has stated the properties. I find Artin's first chapter rough going and would appreciate some help on this one. REPLY [6 votes]: An alternative way is Gaussian elimination: for a given $n\times n$ matrix $A$ with rows $r_1,\dots,r_n$, the following steps are allowed, in order to arrive at the identity matrix or at a matrix with a zero row (by linearity, if $A$ has a zero row, the 'Artinian determinant' has to be zero). Add a scalar multiple of a row $r_j$ to another row $r_i$, i.e.: $i\ne j$ and $$r_i':= r_i+\lambda r_j$$ Multiply a row by a nonzero scalar, i.e.: $\lambda\ne 0$ and $$r_i':=\lambda\cdot r_i$$ Exchange 2 rows (can also be obtained by 1. and 2.) Let's assume we have two 'Artinian determinants': $D$ and $D'$. Using the above-mentioned fact that every matrix can be transformed to the identity or to a matrix with a zero row, we will have $D=D'$, because 1. keeps both $D$ and $D'$ (why?), 2. multiplies both $D$ and $D'$ by $\lambda$, and 3. by $-1$.<|endoftext|> TITLE: There exists a rational sequence that converges to $\sqrt3$ QUESTION [6 upvotes]: I got a proof of this but I am quite sure that it is not what was expected on the exam. Also, this proof seems really kludgy and non-kosher. Because of the density of the rationals in the reals, there exists a $q\in\mathbb{Q}$ such that $\sqrt3-\frac1 n < q < \sqrt3$. For each n, let $a_n = q$. ($\sqrt3-\frac1 n < a_n < \sqrt3$) So for n=1, there exists a $q$ such that $\sqrt3-1 < q < \sqrt3$. Choose this $q$ for $a_1$. For n=2, choose $q$ such that $\sqrt3-\frac1 2 < q < \sqrt3$. Since $\sqrt3-\frac1 n$ converges to $\sqrt3$, $a_n$ must also converge to $\sqrt3$. So we are constructing a sequence out of things that we only know the existence of. Also, does this require the axiom of choice?
Anyway, on the exam there was a hint: "consider $S = \{r\in\mathbb{Q}|r>0 \,\mathrm{and}\,r^2<3\}$". And I am not exactly sure what to make of it other than the fact that $\sup S = \sqrt 3$. Edit: To be specific about my question, is this proof ok? And what is the standard proof (the proof that my prof was hinting at)? REPLY [2 votes]: A simpler sequence to calculate than the one coming from the continued fraction is to let $r_1=1/1$ and $r_2=4/2$, and generally, if $r_n=p/q$, let $r_{n+1}=(p+3q)/(p+q)$. The first several terms: $1/1,4/2,10/6,28/16,76/44,208/120,...$ The terms will be alternately below and above $\sqrt{3}$, and approach it. It's fairly fast; for example, for the sixth term, $208/120-\sqrt{3}=.00128...$ It doesn't converge as quickly as does the sequence arising from the continued fraction. But this approach has the advantage that one can do any square root of a positive (nonsquare) integer $m$ this way. Start with $r_1=1/1$ and if $r_n=p/q$ let $r_{n+1}=(p+mq)/(p+q)$. Then the sequence $r_n$ will approach $\sqrt{m}$, the terms alternately below and above it. As with the continued fraction method, there is a recursive formula going from one fraction in the sequence to the next; the recursion here looks a bit simpler than the continued fraction one, in that one only needs to keep track of the current numerator and denominator.<|endoftext|> TITLE: Riemann-Stieltjes integral, integration by parts (Rudin) QUESTION [10 upvotes]: Problem 17 of Chapter 6 of Rudin's Principles of Mathematical Analysis asks us to prove the following: Suppose $\alpha$ increases monotonically on $[a,b]$, $g$ is continuous, and $g(x)=G'(x)$ for $a \leq x \leq b$. Prove that $$\int_a^b\alpha(x)g(x)\,dx=G(b)\alpha(b)-G(a)\alpha(a)-\int_a^bG\,d\alpha.$$ It seems to me that the continuity of $g$ is not necessary for the result above. It is enough to assume that $g$ is Riemann integrable. Am I right in thinking this? I have thought as follows: $\int_a^bG\,d\alpha$ exists because $G$ is differentiable and hence continuous. $\alpha(x)$ is integrable with respect to $x$ since it is monotonic. If $g(x)$ is also integrable with respect to $x$ then $\int_a^b\alpha(x)g(x)\,dx$ also exists. To prove the given formula, I start from the hint given by Rudin $$\sum_{i=1}^n\alpha(x_i)g(t_i)\Delta x_i=G(b)\alpha(b)-G(a)\alpha(a)-\sum_{i=1}^nG(x_{i-1})\Delta \alpha_i$$ where $g(t_i)\Delta x_i=\Delta G_i$ by the mean value theorem. Now the sum on the right-hand side converges to $\int_a^bG\,d\alpha$. The sum on the left-hand side would have converged to $\int_a^b\alpha(x)g(x)\,dx$ if it had been $$\sum_{i=1}^n \alpha(x_i)g(x_i)\Delta x_i$$ The absolute difference between this and what we have is bounded above by $$\max(|\alpha(a)|,|\alpha(b)|)\sum_{i=1}^n |g(x_i)-g(t_i)|\Delta x_i$$ and this can be made arbitrarily small because $g(x)$ is integrable with respect to $x$. REPLY [9 votes]: Compare with the following theorem. Theorem: Suppose $f$ and $g$ are bounded functions with no common discontinuities on the interval $[a,b]$, and the Riemann-Stieltjes integral of $f$ with respect to $g$ exists. Then the Riemann-Stieltjes integral of $g$ with respect to $f$ exists, and $$\int_{a}^{b} g(x)df(x) = f(b)g(b)-f(a)g(a)-\int_{a}^{b} f(x)dg(x)\,.
$$<|endoftext|> TITLE: open subsets in topological groups QUESTION [5 upvotes]: I'm starting to study topological groups, and I noticed that Every single theorem in topological groups I have to use the following statement: Let $G$ be a topological group and U an open subset of G, if $g\in G$, then $gU$ is an open subset of G. I can't prove it, please anyone can help me please. Thanks REPLY [4 votes]: In fact the map $\varphi_g:G\to G:x\mapsto gx$ is a homeomorphism. It’s clearly a bijection, since $\varphi_g^{-1}=\varphi_{g^{-1}}$. To see that it’s continuous, let $U\subseteq G$ be open. The group operation is continuous, so $V=\{\langle x,y\rangle\in G\times G:xy\in U\}$ is open in $G\times G$. Let $\pi:G\times G\to G:\langle x,y\rangle\mapsto y$, and let $G_g=\{g\}\times G$; $\pi\upharpoonright G_g:G_g\to G$ is a homeomorphism, and $V\cap G_g$ is open in $G_g$, so $\pi[V\cap G_g]$ is open in $G$. But $$\pi[V\cap G_g]=\{x\in G:\langle g,x\rangle\in V\}=\{x\in G:gx\in U\}=\varphi_g^{-1}[U]\;,$$ so $\varphi_g^{-1}[U]$ is open in $G$, and $\varphi_g$ is continuous. Since $g$ was arbitrary, it follows immediately that $\varphi_g^{-1}=\varphi_{g^{-1}}$ is also continuous and hence that $\varphi_g$ is a homeomorphism. In particular, then, $gU=\varphi_g[U]$ is open for every $g\in G$ and open $U\subseteq G$.<|endoftext|> TITLE: Exterior power of a tensor product QUESTION [14 upvotes]: Given 2 vector bundles $E$ and $F$ of ranks $r_1, r_2$, we can define $k$'th exterior power $\wedge^k (E \otimes F)$. Is there some simple way to decompose this into tensor products of various exterior powers of individual bundles? I am interested in the case when $F$ corresponds to the twisted line bundles $\mathcal{O}(k)$. REPLY [3 votes]: There is also a nice description in terms of Schur functor and Young diagrams. A Young diagram $\lambda$ is a picture made by a finite set of cells, left-aligned in rows, such that the length of the rows decreases going down. Any Young diagram can be transposed (exchanging rows and columns) to obtain another Young diagram $\lambda'$. To any Young diagram you can associate a functor $S_\lambda$, called Schur functor, which is an endofunctor of the category of finite vector spaces over a fixed field. The way how the Schur functor is constructed starting from the Young diagram is a rather complicated and you can find the details in this page of ncatlab. To make an example, the Schur functor associated to a diagram made by only one row with $n$ cells is the functor sending any vector space $V$ to the vector space of symmetric $n$-powers $Sym^n(V)$, while the transposed diagram gives the exterior $n$-powers $\Lambda^n(V)$. The product of two Schur functors can be decomposed into the linear combination of other Schur functors thanks to the Littlewood–Richardson rule. The application of this rule in the case of the $n$th exterior power of a tensor product yields the following formula: $\Lambda^n(V \otimes W) = \bigoplus (S_\lambda(V) \otimes S_{\lambda'}(W))$ where $\lambda$ in the direct sum runs over al the Young diagrams with $n$ cells and at most dim($V$) rows and dim($W$) columns. You can find a reference for this formula in Fulton, Harris, Representation Theory, exercise 6.11. [edit: this is an expaned comment based on the hint given in this mathoverflow discussion: mathoverflow.net/questions/126219/exterior-and-symmetric-powers-of-external-tensor-products-of-representations]<|endoftext|> TITLE: Is there a proper subfield $K\subset \mathbb R$ such that $[\mathbb R:K]$ is finite? 
QUESTION [20 upvotes]: Is there a proper subfield $K\subset \mathbb R$ such that $[\mathbb R:K]$ is finite? Here $[\mathbb R:K]$ means the dimension of $\mathbb R$ as a $K$-vector space. What I have tried: If we can find a finite subgroup $G\subset Gal (\mathbb C/\mathbb Q)$ such that $G$ contains the complex conjugation, it will be done by letting $K$ be the fixed field of $G$. But I don't know whether such a group exists. Maybe we can start with finding a suitable subgroup of $Gal(\bar{\mathbb Q}/\mathbb Q)$ and then lift it to $Gal(\mathbb C/\mathbb Q)$, where $\bar{\mathbb Q}$ denotes the algebraic closure of $\mathbb Q$. By isomorphism extension theorem, we can find many automorphisms of $\mathbb C$, none of them carries $\mathbb R$ to itself except for the identity. This is because $Gal(\mathbb R/\mathbb Q)$ is the trivial group. For example, now suppose $\{x_\alpha\}\subset \mathbb R$ is a transcendence basis over $\mathbb Q$. Let $\sigma$ be a permutation of $\{x_\alpha\}$, then by the isomorphism extension theorem, $\sigma$ extends to an automorphism of $\mathbb C$, which we still denote by $\sigma$. Then $L=\sigma(\mathbb R)$ is a copy of $\mathbb R$ and $\mathbb R$ is algebraic over $K=\mathbb R\cap L$. How large can $K$ be? Is it possible that $[\mathbb R:K]$ is finite? Does anyone has some ideas? Thanks! REPLY [17 votes]: According to Pete Clark's answer at https://mathoverflow.net/questions/13769/orders-of-field-automorphisms-of-algebraic-complex-numbers, if $L/K$ is a field extension with $L$ algebraically closed and $[L:K]\lt\infty$, then $[L:K]=1 {\rm\ or\ } 2$. Now, $\bf C$ is algebraically closed, and if $K$ is a subfield of $\bf R$, then $[{\bf C}:K]=2[{\bf R}:K]$, which pretty much settles it.<|endoftext|> TITLE: set of all trace $1$ matrices are connected? QUESTION [5 upvotes]: The heading is the question and here are my two approach, I want to know are they correct or not, if not I need to know the answer: 1) They are path connected as $\gamma(t)=At+(1-t)B, t\in [0,1]$ where $A$ and $B$ are trace one matrices. But I am not sure all matrices in this path are trace $1$? 2) as trace equals $1$, so considering diagonal entries I get a hyperplane $H$ with $x_{11}+\dots+x_{nn}=1$ and remaining other $n^2-n$ entries I can send to $\mathbb{R}^{n^2-n}$ and thus they are homeomorphic to $H\times \mathbb{R}^{n^2-n}$ as this is a product of two connected topological spaces, it is connected. So trace 1 matrices are connected. Well, here I have considered a matrix is just a point in $\mathbb{R}^{n^2}$ Thank you. REPLY [4 votes]: Partial answer for 1) $\text{tr} (\gamma(t))=t\cdot\text{tr}(A)+(1-t)\cdot\text{tr}(B)=t+(1-t)=1, \forall t$<|endoftext|> TITLE: Normalization of a Ring QUESTION [6 upvotes]: What is the exact definition of a normalization of a Ring? I have to show this: normalization of multiplicative subset of domain And the answer already helped, but I don't know what $S^{-1}R'$ is exactly, because I didn't find a good definition for $R'$... Thanks for any help! :) REPLY [3 votes]: I believe that you really should read the definitions and be sure of what you're trying to prove before trying to prove it. Anyhow, the definition of normalization: If A is an integral domain, we say that A is normal if it is integrally closed in its field of fractions. For a domain A , the normalization of A, $\tilde{A}$ is the integral closure of A in its field of fractions. In QiL's answer there, he gives you a straightforward way to do it. 
R' is the integral closure of R in its field of fractions.<|endoftext|> TITLE: Ultra Filter and Axiom of Choice QUESTION [8 upvotes]: Some person said me: "The fact that Ultra Filters exist is equivalent to the Axiom of choice". Is this correct? I nees some good references about the subject, please help me. Thanks REPLY [4 votes]: It is not true. First, there are always principal (or fixed) ultrafilters: if $S$ is any non-empty set, and $s\in S$, $\{U\subseteq S:s\in U\}$ is a principal (or fixed) ultrafilter on $S$. Note that in this case $\bigcap\mathscr{U}=\{s\}$. An ultrafilter $\mathscr{U}$ on a set $S$ is free if $\bigcap\mathscr{U}=\varnothing$. There are no free ultrafilters on any finite set. The existence of free ultrafilters on infinite sets requires some amount of choice, but less than the full axiom of choice. Specifically, the assertion that every filter can be extended to an ultrafilter is equivalent to the Boolean prime ideal theorem, which is independent of ZF but, by a result of Halpern and Levy, strictly weaker than the axiom of choice.<|endoftext|> TITLE: Hölder inequality from Jensen inequality QUESTION [11 upvotes]: I'm taking a course in Analysis in which the following exercise was given. Exercise Let $(\Omega, \mathcal{F}, \mu)$ be a probability space. Let $f\ge 0$ be a measurable function. Using Jensen's inequality, prove that for $1\le p < q$, $$\left(\int f^p\, d\mu\right)^{\frac{1}{p}}\le \left(\int f^q\, d\mu\right)^{\frac{1}{q}}.$$ Deduce Hölder's inequality from Jensen's inequality and discuss the cases of equality. The first part is standard and I had no problems with it. On the contrary, the second part is somewhat unclear. The standard proof of Hölder's inequality uses Young's inequality which may be proved by means of the convexity of the exponential function. So, strictly speaking, this is a way of "deducing Hölder's inequality from Jensen's", but I don't think this is what the examiner had in mind. More likely, one is supposed to look for a proof employing Jensen's inequality in $(\Omega, \mathcal{F}, \mu)$, or perhaps applying directly the first point. But I have no idea on how to do this. Thank you. REPLY [11 votes]: As Mike suggests, take the measure $\nu:=\frac{g^q}{\int g^qd\mu}\cdot \mu$ (a probability measure) and $h:=\frac f{g^{q-1}}$. Then $$\int fgd\mu=\int hg^qd\mu=\int g^qd\mu\cdot\int hd\nu\leqslant \int g^qd\mu \left(\int h^pd\nu\right)^{1/p}=\left(\int g^qd\mu\right)^{1/q}\left(\int f^pd\mu\right)^{1/p}.$$<|endoftext|> TITLE: Deal 4 cards from a deck. What is the probability that we get one card from each suit? QUESTION [8 upvotes]: My simple easy homework question. Just needed some double check :D Deal 4 cards from a deck of 52 cards. What is the probability that we get one card from each suit? My answer First Draw: We can get any card, and the card's suit will be done. $Chance:1$ Second Draw: Now we need to get 1 of the 3 remaining suits. There are 51 cards left. $Chance:\frac{13+13+13}{51}$ Third Draw: Now we need to get 1 of the 2 remaining suits. There are 50 cards left. $Chance:\frac{13+13}{50}$ Fourth Draw: Now we need to get the last remaining suit. There are 49 cards left. $Chance:\frac{13}{49}$ $P($One card from each suit$)=1*\frac{13+13+13}{51}*\frac{13+13}{50}*\frac{13}{49}=0.1055$ My tutor is known for giving not-so straightforward questions, so I'm wondering if I need to consider another way, or I could be wrong. Any alternatives welcome too! REPLY [4 votes]: The following is an (inferior) alternative. 
There are $\dbinom{52}{4}$ ways to choose $4$ cards, all equally likely. There are $\dbinom{13}{1}^4$ ways to choose $1$ card from each suit. Divide.<|endoftext|> TITLE: Is $\ell^1 \subset \ell^2$ meagre? QUESTION [5 upvotes]: Possible Duplicate: Prove $\ell_1$ is first category in $\ell_2$ Consider $\ell^2$ with the topology induced by the usual norm. We can easily prove that $\ell^1 \subset \ell^2$. I am wondering if $\ell^1$ is meagre (i.e. of first category) in $\ell^2$. In other words, I am looking for a countable family $(F_n)_{n \in \mathbb N}$ of $\ell^2$-closed sets whose interiors are empty and such that $$ \ell^1 \subseteq \bigcup_{n\in\mathbb N} F_n . $$ What do you suggest? I tried with $B(0,n)=\{(x_k)_{k \in \mathbb N}: \sum_{k} \vert x_k\vert < n\}$ but I haven't managed to prove - whether it is even true - that these sets are closed and have empty interior... REPLY [5 votes]: $\def\norm#1{\left\|#1\right\|}$Let's take $\bar B_n = \{(x_k) \in \ell^1 \mid \norm x_1 \le n \}$. Let $y \in \ell^2\setminus \ell^1$, e.g. $y = (1/n)_n$, then for each $x \in \bar B_n$ and each $\epsilon > 0$, $x + \epsilon y \not\in \bar B_n \subseteq \ell^1$. So $\bar B_n$ has empty interior. It remains to prove the closedness. So let $x^k \in \bar B_n$ for $k \in \mathbb N$ and $x \in \ell^2$ with $\|x^k - x\|_2 \to 0$. Then, as $\ell^2$-convergence implies pointwise convergence \begin{align*} \norm x_1 &= \sum_i |x_i|\\ &= \lim_I \sum_{i=1}^I |x_i|\\ &= \lim_I \sum_{i=1}^I\lim_k |x^k_i|\\ &= \lim_I \lim_k \sum_{i=1}^I |x^k_i|\\ &\le \limsup_I \limsup_k \norm{x^k}_1\\ &\le n. \end{align*} So $x \in \bar B_n$ and we are done.<|endoftext|> TITLE: Combinatorics: likelihood of a uniform draw QUESTION [6 upvotes]: An urn contains 10 kinds of pebbles, and 100 pebbles of each kind. We draw 100 pebbles (without replacement). What is the probability that we get between 8 and 12 pebbles of each kind? REPLY [3 votes]: The most likely of the admissible combinations is the completely uniform one, with a probability of $$ \frac{\binom{100}{10}^{10}}{\binom{1000}{100}^{\hphantom{10}}}\approx3.8\cdot10^{-8} $$ (computation). Presumably the most unlikely of the admissible combinations is the most non-uniform one with $12$ pebbles of five kinds and $8$ of the others, with a probability of $$ \frac{\binom{100}{12}^5\binom{100}8^5}{\binom{1000}{100}}\approx4.5\cdot10^{-9} $$ (computation). The number of admissible combinations can be calculated using the formula at the bottom of this page as the number of ways of distributing $k=20$ excess pebbles over $m=10$ kinds with a capacity of $R=4$ each, which yields $$ \sum_{t=0}^4(-1)^t\binom{10}t\binom{29-5t}9=856945 $$ (computation). Thus the desired probability $p$ satisfies $$ 0.03\approx 856945\cdot\frac{\binom{100}{10}^{10}}{\binom{1000}{100}^{\hphantom{10}}}\gt p\gt 856945\cdot\frac{\binom{100}{12}^5\binom{100}8^5}{\binom{1000}{100}}\approx0.004\;. $$
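One way to pin down the exact value is to sum the multivariate hypergeometric probabilities over all admissible count vectors. Here is a short Python sketch of that brute-force enumeration (an illustrative re-implementation of the idea, not the code linked below):

```python
from math import comb
from fractions import Fraction
from itertools import product

kinds, per_kind, draws = 10, 100, 100        # 10 kinds, 100 pebbles of each, draw 100
total = comb(kinds * per_kind, draws)        # number of equally likely 100-pebble draws

favorable = 0
admissible = 0
# Enumerate every way to receive between 8 and 12 pebbles of each kind, summing to 100
# (a few million tuples, so this takes several seconds).
for counts in product(range(8, 13), repeat=kinds):
    if sum(counts) != draws:
        continue
    admissible += 1
    ways = 1
    for c in counts:
        ways *= comb(per_kind, c)            # choose c pebbles of this particular kind
    favorable += ways

print(admissible)                   # 856945 admissible combinations
print(Fraction(favorable, total))   # the exact probability
print(favorable / total)            # roughly 0.012
```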
The exact answer is $$\frac{226031412377730730814344253428220298277915460779610728832457924491489212422618433457300376001429754322127222112213012269223936000000000}{18724490300969246403723903560710344364006715745998044509175019419801870086437214647777411833371759499800416660391039940703106220925825529}$$ or about $0.012$, as computed by this code, which enumerates all admissible combinations and checks the result with a simulation (and also checks the number of admissible combinations).<|endoftext|> TITLE: Real-valued 2D Fourier series? QUESTION [17 upvotes]: For a (well-behaved) one-dimensional function $f: [-\pi, \pi] \rightarrow \mathbb{R}$, we can use the Fourier series expansion to write $$ f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos(nx) + b_n\sin(nx) \right)$$ For a function of two variables, Wikipedia lists the formula $$f(x,y) = \sum_{j,k \in \mathbb{Z}} c_{j,k} e^{ijx}e^{iky}$$ In this formula, $f$ is complex-valued. Is there a similar series representation for real-valued functions of two variables? REPLY [20 votes]: The full real-valued 2D Fourier series is: $$ \begin{align} f(x, y) & = \sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\alpha_{n,m}\cos\left(\frac{2\pi n x}{\lambda_x}\right)\cos\left(\frac{2\pi m y}{\lambda_y}\right) \\ & + \sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\beta_{n,m}\cos\left(\frac{2\pi n x}{\lambda_x}\right)\sin\left(\frac{2\pi m y}{\lambda_y}\right) \\ & + \sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\gamma_{n,m}\sin\left(\frac{2\pi n x}{\lambda_x}\right)\cos\left(\frac{2\pi m y}{\lambda_y}\right) \\ & + \sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\delta_{n,m}\sin\left(\frac{2\pi n x}{\lambda_x}\right)\sin\left(\frac{2\pi m y}{\lambda_y}\right) \\ \end{align} $$ The coefficients are found with: $$ \alpha_{n,m} = \frac{\kappa}{\lambda_x \lambda_y}\int_{y_0}^{y_0+\lambda_y}\int_{x_0}^{x_0+\lambda_x}f(x,y)\cos\left(\frac{2\pi n x}{\lambda_x}\right)\cos\left(\frac{2\pi m y}{\lambda_y}\right)dx dy \\ \beta_{n,m} = \frac{\kappa}{\lambda_x \lambda_y}\int_{y_0}^{y_0+\lambda_y}\int_{x_0}^{x_0+\lambda_x}f(x,y)\cos\left(\frac{2\pi n x}{\lambda_x}\right)\sin\left(\frac{2\pi m y}{\lambda_y}\right)dx dy \\ \gamma_{n,m} = \frac{\kappa}{\lambda_x \lambda_y}\int_{y_0}^{y_0+\lambda_y}\int_{x_0}^{x_0+\lambda_x}f(x,y)\sin\left(\frac{2\pi n x}{\lambda_x}\right)\cos\left(\frac{2\pi m y}{\lambda_y}\right)dx dy \\ \delta_{n,m} = \frac{\kappa}{\lambda_x \lambda_y}\int_{y_0}^{y_0+\lambda_y}\int_{x_0}^{x_0+\lambda_x}f(x,y)\sin\left(\frac{2\pi n x}{\lambda_x}\right)\sin\left(\frac{2\pi m y}{\lambda_y}\right)dx dy $$ $$ \begin{align} \text{Where } \kappa & = 1 \text{ if } n = 0 \text{ and } m = 0 \\ & = 2 \text{ if } n = 0 \text{ or } m = 0\\ & = 4 \text{ if } n> 0 \text{ and } m > 0 \end{align} $$
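As a quick numerical sanity check of these formulas, the following sketch approximates the coefficient integrals on a grid and verifies that a truncated series reproduces a band-limited test function (the test function, the periods and the grid size are arbitrary choices):

```python
import numpy as np

# Arbitrary test setup: period lengths, truncation order, integration grid.
lx, ly, N = 2.0, 3.0, 4
nx, ny = 400, 400
x = np.linspace(0.0, lx, nx, endpoint=False)
y = np.linspace(0.0, ly, ny, endpoint=False)
X, Y = np.meshgrid(x, y, indexing="ij")
dA = (lx / nx) * (ly / ny)

# A band-limited test function (all frequencies are at most N in each variable).
f = 0.7 + np.cos(2*np.pi*X/lx) * np.sin(4*np.pi*Y/ly) - 0.3*np.sin(6*np.pi*X/lx)

recon = np.zeros_like(f)
for n in range(N + 1):
    for m in range(N + 1):
        kappa = 1 if (n == 0 and m == 0) else (2 if (n == 0 or m == 0) else 4)
        cx, sx = np.cos(2*np.pi*n*X/lx), np.sin(2*np.pi*n*X/lx)
        cy, sy = np.cos(2*np.pi*m*Y/ly), np.sin(2*np.pi*m*Y/ly)
        scale = kappa / (lx * ly)
        alpha = scale * np.sum(f * cx * cy) * dA   # Riemann-sum approximations
        beta  = scale * np.sum(f * cx * sy) * dA   # of the coefficient integrals
        gamma = scale * np.sum(f * sx * cy) * dA
        delta = scale * np.sum(f * sx * sy) * dA
        recon += alpha*cx*cy + beta*cx*sy + gamma*sx*cy + delta*sx*sy

print(np.max(np.abs(recon - f)))   # max reconstruction error; near machine precision here
```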
<|endoftext|> TITLE: Factoring out universal quantifier in combination with an implication QUESTION [6 upvotes]: I just began studying maths and so far everything made sense after tinkering around with it a little bit (e.g. $ \lnot(\forall x \in M : A(x)) = \exists x \in M : \lnot A(x) $, thinking "not all math students are dumb" means that there is at least one who is not dumb). I have already proved the statement $( \forall x:P(x))\Rightarrow q = \exists x:(P(x)\Rightarrow q)$ (sorry, I'm not a native speaker), but is there any example (like the one given above) for this as well? I just cannot understand why factoring out the universal quantifier makes any difference... (although I know it has to be right) REPLY [2 votes]: Brian & Henning have given excellent, precise answers above. Here is an imprecise attempt at providing some intuitive 'feel'. I think of $x$ as being a representative element of some countable set $\{x_1,x_2,...\}$. I think of $(\forall x P(x))$ as $(P(x_1) \land P(x_2) \land ...)$, and I think of $(\exists x R(x))$ as $(R(x_1) \lor R(x_2) \lor...)$. We have $A \Rightarrow B$ is the same as $\lnot A \lor B$. Then $( \forall x:P(x))\Rightarrow q$ becomes $\lnot (P(x_1) \land P(x_2) \land ...) \lor q = (\lnot P(x_1) \lor \lnot P(x_2) \lor ...) \lor q$. The statement $\exists x:(P(x)\Rightarrow q)$ becomes $((\lnot P(x_1) \lor q) \lor (\lnot P(x_2) \lor q) \lor...)$, which is logically the same as the statement above.<|endoftext|> TITLE: "A proof that algebraic topology can never have a non self-contradictory set of abelian groups" - Dr. Sheldon Cooper QUESTION [30 upvotes]: In the current episode of "The Big Bang Theory", Dr. Sheldon Cooper has a booklet titled "A proof that algebraic topology can never have a non self-contradictory set of abelian groups". I'm still an undergrad in mathematics and I have no idea what an algebraic topology is and why it would never have "a non self-contradictory set of abelian groups". My first reaction of course was to read up the Wikipedia article, but it's really short and doesn't explain a lot. So instead of reading through tons of articles, I wanted to ask whether it is possible to just very shallowly explain what this is about. What is an algebraic topology (I know what a topology is and I'm currently learning algebra)? What is "a non self-contradictory set of abelian groups" (I surely know what abelian groups are)? And how would one prove this (if I had to read up a lot to understand the proof, I guess I'd rather be left without proof)? I hope this is the right place to ask this question, thanks in advance for answers. REPLY [4 votes]: It does seem to be nonsense; the best I could do to make sense of it was the following: There is a reference to "an" algebraic topology as opposed to just algebraic topology (which is actually a field of study, not an object). This might be interpreted as "a functor from the category of topological spaces to some category of algebraic objects" like the category of groups or abelian groups (like homology groups). This resembles Segal's definition of conformal field theory, which is different from the way physicists would define it. Then they say it contains no self-contradictory abelian groups. The only thing I can think of there is that this functor would not send two homeomorphic spaces to non-isomorphic abelian groups, i.e. the abelian group is an invariant of the space (up to homeomorphism). However, the definition of a functor already guarantees this (which it should, since the idea of a functor comes from algebraic topology, from the search for exactly this sort of invariant of topological spaces, as far as I know): it satisfies $$F(Id_X)= Id_{F(X)} \\\text{ and }F(f \circ g)=F(f)\circ F(g) $$ so if $f$ is a homeomorphism then just fill in the inverse of $f$ for $g$ and you see that $F(g)$ is the left and right inverse to $F(f)$. So the algebraic objects are isomorphic.
So even though one might want to show this it is just plain silly (even for a five year old) to then only look at abelian groups since it is a purely categorical matter.<|endoftext|> TITLE: Uniform distribution on the unit circle (in the complex plane) QUESTION [6 upvotes]: I was trying to prove that for a standard complex Gaussian variable $Z$ it holds that $|Z|^2$ is exponentially distributed with parameter 1, $\frac{Z}{|Z|}$ is uniformly distributed on the unit circle $S^1:=\{z\in\mathbb{C} | |z|=1\}$ and that the two are independent. At some point I began asking myself: How does one describe the uniform distribution on the unit circle $S^1$? I resolved to say that it is the complex r.v. $e^{i\theta}$ where $\theta$ is uniformly distributed on $[0,2\pi]$. This seemed to work out fine (c.f. Byron's answer to this question). However, if this is correct then this small argument will go through: Let $f:S^1 \rightarrow \mathbb{R}$ be bounded. Then $$E[f(Z)]=\int_{0}^{2\pi}{f(e^{i\theta})\frac{1}{2\pi}}d\theta=\frac{1}{2\pi i}\int_{S^1}{\frac{f(z)}{z}}dz,$$ where for the last equation $z=e^{i\theta}$ and thus $\frac{dz}{d\theta}=ie^{i\theta}$ i.e. $\frac{dz}{iz}=\frac{dz}{ie^{i\theta}}={d\theta}$. So: Is $\frac{1}{2\pi i z}$ some kind of density for a uniformly distributed random variable on $S^1$? (I write "some kind" as it cannot be one because the unit circle has Lebesgue-measure 0 and hence the induced probability measure cannot be absolutely continuous to it.) Thanks for clearing my lack of clarity. REPLY [2 votes]: I try to reformulate the commentaries of fgp as an answer. First consider a probability space $(\Omega, \mathcal{A}, \mathbb{P}')$ and a random variable uniformly distributed on $[0,2\pi]$: $$X: (\Omega, \mathcal{A}, \mathbb{P}') \rightarrow ([0,2\pi],\mathcal{B}([0,2\pi])).$$ Furthermore consider the parametrization of the unit circle $$p: [0,2\pi] \rightarrow S^1; \quad x \mapsto e^{i\cdot x}$$ which is continuous. Now consider the space $(S^1,\mathcal{B}(S^1))$ where $\mathcal{B}(S^1)$ is the $\sigma$-algebra generated by the open sets of $S^1$. We define $\mathbb{P}$ to be the probability measure induced by the map $$p\circ X: (\Omega, \mathcal{A}, \mathbb{P}') \rightarrow (S^1,\mathcal{B}(S^1)); \quad \omega \mapsto e^{i X(\omega)}.$$ Then we have \begin{equation} P(Z\in A) = \frac{1}{2\pi i}\int_{A}{\frac{1}{z}}dz \qquad \forall A \in \mathcal{B}(S^1) \end{equation} or more generally $$E[f(Z)]=\frac{1}{2\pi i}\int_{S^1}{\frac{f(z)}{z}}dz$$ for any measurable, bounded function $f: S^1 \rightarrow \mathbb{R}$ So we must be careful what the measurable space really is. In the case of the "density" in the question we are considering the probability space $(S^1,\mathcal{B}(S^1), \mathbb{P})$ and on this we get can give the value of $\mathbb{P}$ via the above formula. Hence the above is not a density with respect to the Lebesgue-measure on $\mathbb{C}$, but a way of formulating the value with the help of the parametrization of (or - to be more precise - a contour integral along) the unit circle $S^1$.<|endoftext|> TITLE: uniqueness of induced representation QUESTION [5 upvotes]: I am studying the book "Representation Theory" by Fulton and Harris. And I just can not understand the part where they prove the uniqueness of induced representation. If someone could explain it I'd greatly appreciate it! It's on page 33 and it goes: Choose a representative $g_{\sigma} \in G$ for each coset $\sigma \in G/H,$ with $e$ representing the trivial coset $H$. 
To see the uniqueness, note that each element of $V$ has a unique expression $v = \sum g_{\sigma} w_{\sigma}$ for elements $w_{\sigma}$ in $W$. Given $g \in G$ write $g \centerdot g_{\sigma} = g_{\tau} \centerdot h$ for some $\tau \in G/H$ and $h \in H.$ Then we must have $$g \centerdot (g_{\sigma} w_{\sigma})= g_{\tau}( h w_{\sigma})\;.$$ This proves The uniqueness ... I understand everything until the end, but I just don't understand how this proves the uniqueness... If someone could give me a little more explanation, I would appreciate it! Thanks! REPLY [2 votes]: The question of uniqueness is whether we have any freedom in defining the action of an arbitrary element $g$ on $W$. The equation shows that we don't: The action of $g$ on each summand of $W$, and thus on $W$, is entirely determined by the action of $H$ on $V$. Writing out the action for a linear combination and denoting $g_\tau$ by $g_{\sigma g}$ and $h$ by $h_g$ to mark the dependencies, we have $$ g\sum g_\sigma w_\sigma=\sum g(g_\sigma w_\sigma)=\sum g_{\sigma g}(h_gw_\sigma)\;, $$ and this fully determines the action of $g$ on any element of $V$.<|endoftext|> TITLE: How to test whether two group presentations are isomorphic QUESTION [7 upvotes]: Suppose I have two presentations for groups: $\langle x,y|x^{7} = y^{3} = 1, yx = x^2y\rangle$ and $\langle x,y|x^{7} = y^{3} = 1, yx = x^4y\rangle$ What is the standard approach to deciding whether the presentations are isomorphic? I'm working through an application of Sylow Theory which classifies groups of order $21$. In the text it says that these two presentations above are isomorphic, but I cannot see how to prove it or even suspect it. REPLY [3 votes]: Well, you'd want to find a set of generators of the first group that satisfied the relations of the second group. If we rewrite $yx=x^2y$ as $yxy^{-1}=x^2$ (which turns this somewhat abstract equality into something a bit more concrete), we see immediately that $y^2xy^{-2}=x^4$ and indeed since $y^2$ is of order 3, $x,y^2$ are the generators you're looking for.<|endoftext|> TITLE: A binomial multiplied by a poisson QUESTION [5 upvotes]: What distribution do you obtain when you multiply a poisson distribution and a binomial distribution, and why? I'm assuming you obtain a poisson distribution. REPLY [10 votes]: Here is a physical example: If we interpret $\mu$ and the probability that a photon produces an electron, then for a given number of photons entering the photo detector (say $n$) the probability distribution of electrons coming out is a binomial distribution with n trials and a probability of success of $\mu$. $$ P(m)=\frac{n!}{m!(n-m)!}\mu^m (1-\mu)^{n-m}_{} =\binom{n}{m}\mu^m (1-\mu)^{n-m}_{} $$ The number of photons that go into the binomial distribution is the output of a Poisson distribution. We cannot get more electrons out than photons that went into the photo detector. We can sum up all the possible binomial distributions with a Poisson distribution weighting factor. $$ P(m,\lambda,\mu)=\sum_{j=m}^{\infty} \frac{j!}{(j-m)!m!}\mu^m(1-\mu)^{j-m} \frac{\lambda^j e^{-\lambda}}{j!} $$ Simplify and bring terms that do not depend on j outside of the sum. 
$$ P(m,\lambda,\mu)= \frac{\mu^m e^{-\lambda}}{m!} \sum_{j=m}^{\infty} \frac{\lambda^j(1-\mu)^{j-m}}{(j-m)!} $$ Let $n=j-m$ $$ P(m,\lambda,\mu)= \frac{\mu^m e^{-\lambda}}{m!} \sum_{n=0}^{\infty} \frac{\lambda^{n+m}(1-\mu)^{n}}{n!} $$ $$ P(m,\lambda,\mu)= \frac{(\lambda \mu)^m e^{-\lambda}}{m!} \sum_{n=0}^{\infty} \frac{(\lambda (1-\mu))^{n}}{n!} $$ $$ P(m,\lambda,\mu)= \frac{(\lambda \mu)^m e^{-\lambda \mu}}{m!} $$ This is a Poisson distribution with a mean of $\lambda \mu$! Note: the "!" in the last line of text should be understood as excitement and not a factorial.<|endoftext|> TITLE: Show the distance does not exceed $\sqrt{2}$. QUESTION [6 upvotes]: Choose any ten points from the interior of a square with side length $3$. Show that the distance of some pair of these points does not exceed $\sqrt{2}$. Can someone help me? REPLY [20 votes]: Hint: divide the square $3 \times 3$<|endoftext|> TITLE: Gram Matrices Rank QUESTION [8 upvotes]: Let $A$ be an $m \times n$ matrix. Show that, even though they may be of different sizes, both Gram matrices $K = A^TA$ and $L = AA^T$ have the same rank. My attempt: We have that $K$ and $L$ are Gram matrices so $K = A^TA = (A^TA)^T = AA^T = L$ and by definition we have that $\mathrm{rank}(A) = \mathrm{rank}(A^T)$. REPLY [2 votes]: Excellent answers from @Euyu! I would like to add another way to prove it. Assume $A=U\Sigma V^{T}$ is the Singular Value Decomposition (SVD). So $A^{T}A=V\Sigma ^{2}V^{T}$ and $AA^{T}=U\Sigma ^{2}U^{T}$ (this follows from substituting the SVD). So clearly, the eigenvalues of the symmetric matrices $AA^{T}$ and $A^{T}A$ are the squares of the singular values of $A$. Hence they have the same number of non-zero eigenvalues, and hence the same rank.<|endoftext|> TITLE: How can I use Gauss elimination to solve equations with Modular arithmetics? QUESTION [7 upvotes]: I've been given some equations that look like this. $a_{1,1} x_1 + a_{1,2} x_2 + a_{1,3} x_3 + ... + a_{1,n} x_n\equiv 1 \mod p$ $a_{2,1} x_1 + a_{2,2} x_2 + a_{2,3} x_3 + ... + a_{2,n} x_n\equiv 1\mod p$ $...$ $a_{m,1} x_1 + a_{m,2} x_2 + a_{m,3} x_3 + ... + a_{m,n} x_n\equiv 1\mod p$ ($p$ is prime, I know the values of $a_{1..m, 1..n}$, and I have to find $x_{1..n}$) (all of the values of $a_{1..m, 1..n}, x_{1..n}$ should be non-negative integers) I think I can solve this using Gaussian elimination, but I'm not sure how to apply it here. I appreciate any help or tip. Thank you in advance. :) REPLY [2 votes]: If you are familiar with Gaussian elimination, then you can do this just as easily: all the same operations are valid, and you can reduce $\mod{p}$ any time you want in order to keep the numbers small. If it is the fractions that you are worried about, then do only integer operations. For example, if you have the numbers 2 and 5, first subtract 2*2 from 5 to get 1. The larger numbers may not get to 1 so fast, but the idea is the same. Subtract integer multiples that reduce the numbers, and repeat with the smaller numbers until everything is reduced. It is the same idea; just combine the rows until things are simplified.
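To make this recipe concrete, here is a short Python sketch of Gaussian elimination carried out entirely in modular arithmetic (the helper name and the example system are made up for illustration). Division is replaced by multiplication with the modular inverse, which exists because $p$ is prime:

```python
def solve_mod_p(A, b, p):
    """Solve A x = b (mod p) for p prime; returns one solution, or None if inconsistent.
    Any free variables are simply set to 0."""
    A = [[a % p for a in row] for row in A]    # work on reduced copies
    b = [v % p for v in b]
    m, n = len(A), len(A[0])
    row, pivots = 0, []
    for col in range(n):
        piv = next((r for r in range(row, m) if A[r][col]), None)
        if piv is None:
            continue                            # no pivot in this column
        A[row], A[piv] = A[piv], A[row]
        b[row], b[piv] = b[piv], b[row]
        inv = pow(A[row][col], -1, p)           # modular inverse instead of division
        A[row] = [a * inv % p for a in A[row]]
        b[row] = b[row] * inv % p
        for r in range(m):                      # clear this column in every other row
            if r != row and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * q) % p for a, q in zip(A[r], A[row])]
                b[r] = (b[r] - f * b[row]) % p
        pivots.append((row, col))
        row += 1
    if any(b[r] for r in range(row, m)):        # a row "0 = nonzero" means no solution
        return None
    x = [0] * n
    for r, c in pivots:
        x[c] = b[r]
    return x

# Toy example with 3 unknowns, right-hand side all 1 (as in the question), p = 7.
A = [[2, 3, 1], [1, 1, 4], [5, 0, 2]]
b = [1, 1, 1]
x = solve_mod_p(A, b, 7)
print(x, all((sum(a * v for a, v in zip(row, x)) - 1) % 7 == 0 for row in A))
```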
<|endoftext|> TITLE: Prove that in $\mathbb{R}$, if $|a-b|>\alpha$ for all $a\in A$ and $b\in B$, then outer measure $m^*(A\cup B)=m^*(A)+ m^*(B)$ QUESTION [12 upvotes]: Prove that for sets $A,B$ bounded in $\mathbb{R}$: If there exists $\alpha > 0$ such that $|a-b|>\alpha$ for all $a\in A$ and $b\in B$, then outer measure $m^*(A\cup B)=m^*(A)+m^*(B)$. This comes out of section 2.2 of Royden's Real Analysis. I'm really having trouble with this one for some reason. The only theorem that I can see that might be of some help is that outer measure is preserved under set translation. But I would have to translate each point of one of these sets a different amount, so that seems hopeless. Because I have so few theorems to work with, my hunch is that I need to go back to the very definition of outer measure and do something clever with it, but so far I haven't had any luck. Can anyone help me? Thanks. REPLY [9 votes]: HINT: Let $$U=\bigcup_{a\in A}\left(a-\frac{\alpha}2,a+\frac{\alpha}2\right)$$ and $$V=\bigcup_{b\in B}\left(b-\frac{\alpha}2,b+\frac{\alpha}2\right)\;.$$ Suppose that $x\in U\cap V$; then there are $a\in A$ and $b\in B$ such that $$|x-a|,|x-b|<\frac{\alpha}2\;.$$ Is this actually possible?<|endoftext|> TITLE: Vector Matrix Differentiation (to maximize function) QUESTION [8 upvotes]: How would I calculate the derivative of the following? I want to know the derivative so that I can maximise it. $$ \frac{x^TAx}{x^TBx} $$ Both matrices $A$ and $B$ are symmetric. I know that $\frac{d}{dx}x^TAx = 2Ax$. I haven't been very successful applying the quotient rule to the above though. Appreciate the help. Thanks! EDIT: In response to "What goes wrong when applying the chain rule". We know that: $$ \frac{d}{dx}\frac{u}{v} = \frac{vu' - uv'}{v^2} $$ Which would give me: $$ \frac{2x^TBxAx - 2x^TAxBx}{x^TBx^2} \, or \, \frac{2Axx^TBx - 2Bxx^TAx}{(x^TBx)^2} $$ In the first case the dimensions don't agree. In the second they do, but I don't want to assume that it's correct just because the dimensions agree. If it is correct then please do let me know! REPLY [2 votes]: Use the hypograph/epigraph technique to maximize/minimize the ratio. For example $\min_x \frac{x^T A x}{x^T B x}$ $ \equiv \min_{x,t} t$ $ \text{ subject to }$ $ {x^T A x}\leq t(x^T B x),t>0$. Then form a Lagrangian to solve for the optimum.<|endoftext|> TITLE: Permutation module of $S_n$ QUESTION [14 upvotes]: Let $G=S_n$ and let $V$ be the permutation module of $G$ with basis $\{x_1,\ldots,x_n\}.$ Let $\lambda, \mu \in \mathbb{C}$; this allows one to define a $\mathbb{C}G$-homomorphism $\rho:V \to V$ by $$\rho(x_j):=\lambda x_j+\mu\sum_{i \neq j}x_i.$$ By using the above fact or otherwise, how can we prove that $V$ is the direct sum of two non-isomorphic irreducible $\mathbb{C}G$-submodules? I tried to prove this by construction. A familiar irreducible submodule in this case is the $1$-dimensional space $U:=\operatorname{span}\{x_1+\cdots+x_n\}$. I intend to find another $(n-1)$-dimensional submodule $W$ which makes $V=U\oplus W$ hold, but it's hard to do so. Is there a way to use the fact instead of a random construction? REPLY [15 votes]: I'll try to give a simple solution (not using characters); but my solution is not using the homomorphism $\rho$, which was suggested in your post as a hint. This solution is based on a hint given by Qiaochu Yuan in this comment. We work with the permutation FG-module for $S_n$, i.e. we choose a basis $v_1,\dots,v_n$ for $U$ and the action of $S_n$ is given by $$\left(\sum x_iv_i\right)g = \sum x_iv_{ig}.$$ We denote this FG-module as $U$. The vector $v=v_1+\dots+v_n$ generates a one-dimensional FG-submodule $U_1$. It is relatively easy to find an FG-submodule $U_2$ such that $U=U_1\oplus U_2$. (From Maschke's theorem we know that such a submodule exists.) This submodule is precisely $$U_2=\{\sum x_iv_i; \sum x_i=0\},$$ i.e. it contains precisely the vectors for which the sum of coordinates is zero; $x_1+\dots+x_n=0$.
(It is easy to see, that it is indeed an FG-submodule, its dimension is $n-1$ and $U_1\cap U_2=\{0\}$.) As a basis for $U_2$ we can choose, for example, $v_1-v_2,v_2-v_3,\dots,v_{n-1}-v_n$. $U_2$ is irreducible If $v=x_1v_1+\dots+x_nv_n$ is a non-zero vector from $U_2$, then $x_i\ne x_j$ for some $i$, $j$. (Since $v\notin U_1$.) We can choose a permutation $g$ in a such way, that for $w=vg=y_1v_1+\dots+y_nv_n$ we have $y_1\ne y_2$. Of course, $w\in U_2$. The submodule $U_2$ contains also the vector $w(12)$, which is the same as $w$, only the first two coordinates are swapped. Thus $$w - w(12)=(y_1-y_2)(v_1-v_2),$$ and $y_1-y_2\ne 0$. We can multiply this vector and get $\underline{v_1-v_2\in FG \cdot w}$. By applying the permutation $(12\dots n)$ to the vector $v_1-v_2$ we get all basic vectors $\underline{v_i-v_{i+1}\in FG \cdot w}$. So we have in fact shown that if we have some non-zero submodule $V$ of $U_2$ (i.e., if $V$ contains at least one non-zero vector), then this submodule contains the whole basis of $U_2$, an thus $V=U_2$. This means that $U_2$ is irreducible.<|endoftext|> TITLE: Arcsine law for Brownian motion QUESTION [5 upvotes]: Here is the question: $(B_t,t\ge 0)$ is a standard brwonian motion, starting at $0$. $S_t=\sup_{0\le s\le t} B_s$. $T=\inf\{t\ge 0: B_t=S_1\}$. Show that $T$ follows the arcsinus law with density $g(t)=\frac{1}{\pi\sqrt{t(1-t)}}1_{]0,1[}(t)$. I used Markov property to get the following equality: $P(T TITLE: Graph theory: Prove $k$-regular graph $\#V$ = odd, $\chi'(G)> k$ QUESTION [11 upvotes]: I'm looking to prove that any $k$-regular graph $G$ (i.e. a graph with degree $k$ for all vertices) with an odd number of points has edge-colouring number $>k$ ($\chi'(G) > k$). With Vizing, I see that $\chi'(G) \leq k + 1$, so apparently $\chi'(G)$ will end up equaling $k+1$. Furthermore, as $\#V$ is odd, $k$ must be even for $\#V\cdot k$ to be an even number (required to be even, since $\frac{1}{2}\cdot\#V\cdot k = \#E$. Does anyone have any suggestions on what to try? REPLY [8 votes]: Another way to prove this fact is to notice that in any proper edge coloring, every set of edges that share a color must form a matching. But for any given color, the matching touches an even number of vertices, so there must be one vertex missing that color. Since that vertex has $k$ edges, all of a different color, together there must be at least $k + 1$ colors.<|endoftext|> TITLE: Uniform convergence of the Bergman kernel's orthonormal basis representation on compact subsets QUESTION [5 upvotes]: Consider the Bergman kernel $K_\Omega$ associated to a domain $\Omega \subseteq \mathbb C^n$. By the reproducing property, it is easy to show that $$K_\Omega(z,\zeta) = \sum_{n=1}^\infty \varphi_k(z) \overline{\varphi_k(\zeta)},\qquad(z,\zeta\in\Omega)$$ where $\{\varphi_k\}_{k=1}^\infty$ is any orthonormal basis of the Bergman space $A^2(\Omega)$ of Lebesgue square-integrable holomorphic functions on $\Omega$. This series representation converges at least pointwise, since the Bergman kernel's Fourier series, $K_\Omega(\cdot,\zeta) = \sum_{k=1}^\infty \langle K_\Omega(\cdot,\zeta), \varphi_k \rangle \varphi_k$ with $\langle K_\Omega(\cdot,\zeta), \varphi_k \rangle = \overline{\varphi_k(\zeta)}$ converges in norm which implies uniform convergence in the first argument for fixed $\zeta \in \Omega$. Now in Books such as Function Theory of Several Complex Variables by S. 
Krantz, it is shown that the series is uniformly bounded on compact sets, namely $$ \sum_{k=1}^\infty \big| \varphi_k(z) \overline{\varphi_k(\zeta)} \big| \leq \bigg(\sum_{k=1}^\infty |\varphi_k(z)|^2 \bigg)^{1/2} \bigg(\sum_{k=1}^\infty |\varphi_k(\zeta)|^2 \bigg)^{1/2} \leq C(K)^2,\qquad(z,\zeta \in K)$$ where $C(K)$ is a constant depending only on the compact set $K\subseteq \Omega$. My question is this: Why does this imply uniform convergence on compact sets in $\Omega \times \Omega$? This is claimed in several sources, but just stated and not proven. Am I missing something obvious here? One book which is a bit more specific is Holomorphic Functions and Integral Representations in Several Complex Variables by M. Range. There it is written that uniform convergence on compact subsets of $\Omega \times \Omega$ follows from the uniform bound and a "normality argument", which I take as referring to Montel's theorem. Does anyone know the details on how this argument works? Any help is appreciated. REPLY [2 votes]: The following is a slight variation of a problem posed in the book on functional analysis by Reed & Simon (page 35, problem 33(b)). Consider the same statement as in froggie's answer, with the same assumptions: Theorem: Let $U\subseteq\mathbb{C}^n$ be a domain, and let $f_n\colon U\to \mathbb{C}$ be a sequence of holomorphic functions converging pointwise to a function $f\colon U\to \mathbb{C}$. Suppose that for each compact set $K\subset U$ the family $\{f_n|_K\}$ is uniformly bounded. Then $f_n$ converges uniformly on compact sets to $f$. Proof: Apply Montel's theorem to every sub-sequence of $(f_n)_n$ and get that every subsequence now has a uniformly convergent (on compact subsets) sub-sub-sequence. Since $(f_n)_n$ converges pointwise, these limits must coincide with $f$. But then, the orignal sequence must converge to $f$ too (in the topology of uniform convergence on compact subsets), for assume otherwise, then there exists $\varepsilon > 0$ such that for all $n$ there exists $k(n)\geq n$ such that $d(f_{k(n)},f) > \varepsilon$ (the metric is the one from $H(U)$). This contradicts the fact that $(f_{k(n)})_n$ should have a subsequence that converges to $f$. $\square$ I also seem to have come up with another way to show this, without using Montel's theorem: Proof: Let $K \subseteq U$ be compact and choose $V\subseteq U$ open such that $\overline V$ is compact and $K\subseteq V \subseteq \overline V \subseteq U$. Since $\overline V$ is compact, we have $$ |f_n(z)| \leq c(\overline{V}), \qquad(z \in V) $$ where $c(\overline V)$ is the uniform bound of the $f_n$ on $\overline V$. The constant function $z \mapsto c(\overline V)$ is Lebesgue-integrable on $V$ and dominates $(f_n)_n$ on $V$. Since $f_n \to f$ pointwise, we can apply the dominated convergence theorem to obtain $f_n \to f$ in $L^1(V)$. In particular, $(f_n)_n$ is Cauchy in $L^1(V)$. Now consider the Bergman space $A^1(V)$ of absolutely integrable holomorphic functions on $V$. Since $A^1(V)$ has the same norm as $L^1(V)$ and all $f_n$ are holomorphic, we get that $(f_n)_n$ is a Cauchy sequence in $A^1(V)$. But then, by the fundamental estimate for Bergman spaces (see e.g. the book of Krantz linked in the question; the proof there works exactly the same way for $A^1$ instead of $A^2$ and doesn't need the assumption of connectedness), there is a constant $\tilde C(K)$ such that $$ \sup_{z \in K} |f_n(z) - f_m(z)| \leq \tilde{C}(K) \|f_n - f_m\|_{A^1(V)}.$$ (Such an estimate holds for all compact sets in $V$.) 
Thus, $(f_n)_n$ is also Cauchy with respect to uniform convergence on $K$, hence convergent by completeness. $\square$ So basically, this replaces Montel's theorem with the dominated convergence theorem and some facts about Bergman spaces as the key ingredient. To apply this to my original problem, one would pick $\tilde K \subseteq \Omega \times \Omega$ compact and choose $V$ as above such that $(\mathrm{pr}_1(\tilde K)\cup \mathrm{pr}_2(\tilde K)) \subseteq V \subseteq \overline V \subseteq \Omega$, where $\mathrm{pr}_i : \mathbb C^{2n} \to \mathbb{C}^n$, $i=1,2$ are the projections of the first, respectively second $n$ components to $\mathbb C^n$. Then take $\tilde V := V \times V$. This is necessary since the uniform bound of the Bergman kernel's series representation works only on sets of the form $K \times K$ for $K$ compact in $\Omega$.<|endoftext|> TITLE: Group with order $p^2$ must be abelian. How to prove that? QUESTION [13 upvotes]: Possible Duplicate: Showing non-cyclic group with $p^2$ elements is Abelian I must show that a group with order $p^2$ with $p$ prime must be abelian. I know that $|Z(G)| > 1$ and so $|Z(G)| \in \{p,p^2\}$. If I assume that the order is $p$ I get $|G / Z(G)| = p$ and so each coset of $Z(G)$ has order $p$, which means that each coset is cyclic and in particular $Z(G)$ is cyclic. Can I conclude something from that? REPLY [14 votes]: Use the following theorem, probably the most important and basic in the theory of finite $\,p-\,$groups: Theorem: The center of a finite $\,p-\,$group is non-trivial Proof: Let $\,G\,$ be a finite $\,p-\,$group and make it act on itself by conjugation. Now just observe that: $$(1)\;\;\;\;\;\;|\mathcal Orb(x)|=1\Longleftrightarrow x\in Z(G)$$ $$(2)\;\;\;\;\;\;|\mathcal Orb(x)|=[G:Stab(x)]\Longrightarrow |\mathcal Orb(x)|=1\,\text{ or }\,p\mid|\mathcal Orb(x)|\;\;\;\;\;\square$$ Finally, the following lemma together with the above gives you what you want: Lemma: For any group $\,G\,$ , $\,G/Z(G)\,$ is cyclic iff $\,G\,$ is abelian, or in other words: the quotient $\,G/Z(G)\,$ can never be non-trivial cyclic.<|endoftext|> TITLE: Highest weight of dual representation of $\mathfrak{sl}_3$ QUESTION [9 upvotes]: Suppose I have an irreducible representation $\phi:\mathfrak{sl}_3 \to \mathfrak{gl}(V)$ of the Lie algebra $\mathfrak{sl}_3$. Now I have been asked to express the highest weight of the corresponding dual representation on $\mathfrak{gl}(V^\ast)$ in terms of the highest weight of the representation on $V$. The definition I have of a weight is: A linear function $\mu : \mathfrak{h} \to \Bbb{C}$ is said to be a weight for $\phi$ if there is $v \in V$ such that $$\phi(H)v = \mu(H)v$$ for all $H \in \mathfrak{h}$. $\mathfrak{h}$ is the usual Cartan subalgebra of $\mathfrak{sl}_3$. Now I seem to only be able to calculate explicitly the highest weight of $V^\ast$ in the case that I have a concrete representation, such as the standard representation. Furthermore, what does one mean by "express the highest weight of $V^\ast$" in terms of that for $V$? For example the highest weight of the standard representation of $\mathfrak{sl}_3$ is the linear functional $L_1$ defined by $$L_i \left( diag(a_1,a_2,a_3) \right) = a_i \hspace{2mm} \text{for $i=1,2,3$}.$$ Here $diag(a_1,a_2,a_3)$ is a matrix in the Cartan subalgebra $\mathfrak{h}$. The highest weight of $V^\ast$ here is now $-L_3$. How do I translate this into "expressing" $-L_3$ in terms of $L_1$? I am quite confused as to what I need to show.
REPLY [4 votes]: Your description of the weights of the standard rep is correct, you might be confused by the fact that $L_1+L_2+L_3=0$ on $\mathfrak{sl}_3$, so $L_1 = -L_2 - L_3$. One way to approach this would be to write everything in terms of the two fundamental weights $\varpi_i$ (that is, the basis of $\mathfrak{h}^*$ dual to $h_1=\operatorname{diag}(1,-1,0)$ and $h_2=\operatorname{diag}(0,1,-1)$). What they are probably looking for is some statement of the form "if $V$ has highest weight $a\varpi_1 + b \varpi_2$ then $V^*$ has highest weight $f(a,b)\varpi_1 + g(a,b)\varpi_2$". $v$ in your first displayed equation is called "a weight vector of $V$ with weight $\mu$". You might try proving a result that says that if you take a basis of $V$ consisting of weight vectors and $v$ is an element of this basis with weight $\mu$, then the element $v^*$ of the dual basis of $V^*$ has weight XXX.... Do you know how to draw the irreducible modules on the weight/root lattice? That might help you, since then you can see what the highest/lowest weights look like.<|endoftext|> TITLE: Computing $\operatorname{Ext}^{1}_{\mathbb{Z}}(\mathbb{Q},\mathbb{Z})$ QUESTION [7 upvotes]: I'm trying to find an abelian group $B$ such that $\operatorname{Ext}^{1}_{\mathbb{Z}}(\mathbb{Q},B)$ is non-zero. My first guess was just to choose $B=\mathbb{Z}$. Using the following argument, I deduced that $\operatorname{Ext}^{1}_{\mathbb{Z}}(\mathbb{Q},\mathbb{Z})=\{0\}$, however I then read this question, which says in fact it is non-zero, so I'm assuming there's something wrong with my argument: First, take the injective resolution $0\rightarrow{\mathbb{Z}}\rightarrow{\mathbb{Q}}\rightarrow{\mathbb{Q}/\mathbb{Z}}\rightarrow{0}$ of $\mathbb{Z}$, form the deleted resolution $0\rightarrow{\mathbb{Q}}\rightarrow{\mathbb{Q}/\mathbb{Z}}\rightarrow{0}$ (no longer exact) and apply the $\operatorname{Hom}_{\mathbb{Z}}(\mathbb{Q},\bullet)$ functor to the deleted resolution to obtain the non-exact sequence $$0\rightarrow{\operatorname{Hom}_{\mathbb{Z}}(\mathbb{Q},\mathbb{Q})}\rightarrow{\operatorname{Hom}_{\mathbb{Z}}(\mathbb{Q},\mathbb{Q}/\mathbb{Z})\rightarrow{0}}$$ So, as far as I can see, $\operatorname{Ext}^{1}_{\mathbb{Z}}(\mathbb{Q},\mathbb{Z})$ is the quotient of the kernal of the zero map from $\operatorname{Hom}_{\mathbb{Z}}(\mathbb{Q},\mathbb{Q}/\mathbb{Z})$ by the image of the surjective map displayed in the functored sequence above, hence is zero. I'm guessing I have made a mistake either with the way I'm interpreting the functored sequence or with the definition of $\operatorname{Ext}$. Can anyone help me understand where I have gone wrong? REPLY [6 votes]: There are indeed homomorphisms $\mathbb Q\to\mathbb Q/\mathbb Z$ that are not induced by a homomorphism $\mathbb Q\to\mathbb Q$. For a prime $p$ we have the $p$-adic value on $\mathbb Q$ given by $\left |\pm\frac abp^k\right|_p=p^{-k}$ if $a,b$ are prime to $p$ and $k\in\mathbb Z$. For $n\in\mathbb N$ let $\mathbb Q_n\subset \mathbb Q$ be the set of rationals $x$ with $|x|_p\le \frac1n$ for all $p$. Then $\mathbb Q_n$ is a subgroup of $\mathbb Q$ and is cyclic with generator $g_n = (\prod_p p^{\lfloor \log_pn \rfloor})^{-1}$. A homomorphism $f\colon \mathbb Q\to \mathbb Q/\mathbb Z$ is determined by specifying values $f(g_n)$ such that $f(g_n)=\frac{g_n}{g_{n+1}}\cdot f(g_{n+1})$ always holds. 
Note that $k_n:=\frac{g_n}{g_{n+1}}$ is always an integer and if it is $>2$ (which happens infinitely often), we have several choices for $f(g_{n+1})$, differing by $\frac1{k_n}$. At least one of the possible choices differs by at most $\frac1{2k_n}$ from $\frac12$, i.e. is in $[\frac13,\frac23]$. In the end we obtain a homomorphism $f\colon \mathbb Q\to \mathbb Q/\mathbb Z$ that cannot come from a homomorphism $\tilde f\colon \mathbb Q\to \mathbb Q$. Indeed, there would have to be infinitely many $m\in\mathbb N$ with $\tilde f(\frac1m)\ge\frac13$ or $\tilde f(\frac1m)\le-\frac13$. But then $|\tilde f(1)|=m\cdot|\tilde f(\frac1m)|\ge \frac m3$, which is absurd.<|endoftext|> TITLE: Let $m \in \mathbb Z, m>1$, then $\cos(2 \pi/m) \in \mathbb Q$ if and only if $m \in \{1,2,3,4,6\}$. QUESTION [6 upvotes]: Possible Duplicate: When is $\sin(x)$ rational? Let $m \in \mathbb Z, m\geq1$, then $\cos(2 \pi/m) \in \mathbb Q$ if and only if $m \in \{1,2,3,4,6\}$. Why is this statement true? Why is $\cos(2 \pi/m)$ always irrational for integers $m >6$? Thanks very much. REPLY [4 votes]: One approach that may prove helpful: Lemma: If $\theta =\frac{2\pi}{m}$, then $\cos \theta \in \mathbb Q \iff \cos \theta \in \{ 0, \pm \frac12, \pm1 \}$. Proof: Let $2\cos \theta = \frac ab$ where $a$ and $b$ are co-prime. Then $2\cos 2\theta = {(2\cos \theta)}^2 -2$, so $$ 2\cos 2\theta=\frac {a^2-2b^2}{b^2}.$$ Now $\gcd (a^2-2b^2,b^2)=1$. Proof: Assume $p$ is a prime dividing both the numerator and the denominator. Then $p|b^2$, hence $p|b$, and $p|(a^2-2b^2)$, hence $p|a$, giving us a contradiction. So if $b \neq \pm1$, then in the sequence $2\cos \theta, 2\cos 2\theta,2\cos 2^2\theta,\ldots$ the denominators get bigger and bigger and $\to \infty$. On the other hand, $\cos$ is periodic with period $2\pi$, so this sequence can take at most $m$ different values and must eventually repeat, contradicting the fact that the denominators tend to infinity. Hence $b=\pm 1$ and hence our claim is proved. I suppose it might help because, now that we know all possible rational values of $\cos \theta$, we can check which $m$ correspond to them. Thanks.
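A quick symbolic check of the classification for small $m$ (a sketch using sympy; the bound $m \le 30$ is an arbitrary choice): the minimal polynomial of $\cos(2\pi/m)$ has degree $1$ exactly when the value is rational.

```python
from sympy import cos, pi, minimal_polynomial, degree, Symbol

x = Symbol('x')
# m such that cos(2*pi/m) is rational, detected via a degree-1 minimal polynomial
rational_m = [m for m in range(1, 31)
              if degree(minimal_polynomial(cos(2 * pi / m), x), x) == 1]
print(rational_m)   # expected: [1, 2, 3, 4, 6]
```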
<|endoftext|> TITLE: $G$ is isomorphic to a subgroup of $H$ and vice versa QUESTION [6 upvotes]: Let $G$ and $H$ be two divisible groups, each of which is isomorphic to a subgroup of the other; then $G\cong H$. What I've done is to use the injective property for both groups: $G\cong K\le H$ so we have $G\stackrel{\iota}{\hookrightarrow} H$ and $G\stackrel{id}{\longrightarrow} G$ and then there exists $H \stackrel{\phi}{\longrightarrow} G$ such that $\phi\circ \iota=id|_G$. $H\cong S\le G$ so we have $H\stackrel{\iota}{\hookrightarrow} G$ and $H\stackrel{id}{\longrightarrow} H$ and then there exists $G \stackrel{\psi}{\longrightarrow} H$ such that $\psi\circ \iota=id|_H$. Is my approach right? And may I ask what will happen if we omit the adjective divisible? Thanks. REPLY [3 votes]: The key here is the classification of divisible groups. Every divisible group is a direct sum of copies of $\mathbb{Q}$ and $\mathbb{Z}/{p^\infty}$ for any prime $p$, where $\mathbb{Z}/{p^\infty}$ denotes the group of $p^n$-torsion elements on the unit circle. None of these groups can map nontrivially into each other, except for $\mathbb{Q}$, which cannot inject into a sum of the others because the others are all torsion groups (check this!). Thus, if we have an injection from one divisible group to another, each summand in the domain will have its image inside summands of the codomain that are isomorphic to it. You are now reduced to proving that the cardinality of the collection of summands of a given isomorphism type in the domain is no more than what you have in the codomain. This is easy for $\mathbb{Q}$ since you can view it as a $\mathbb{Q}$-vector space. For each $p$, you can restrict your attention to $p$-torsion elements and apply the same argument over $\mathbb{Z}/p$.<|endoftext|> TITLE: Is there a map from a segment to a triangle? QUESTION [6 upvotes]: Is there a conformal map from a sector of the circle with a $60$ degree angle to an equilateral triangle that maps the points $0$, $1$ and $\exp (\frac{\pi i}{3})$ to the vertices of the triangle? REPLY [3 votes]: I realized that my question is quite easy: we can map the sector onto the unit disc and the triangle onto the unit disc by the Riemann mapping theorem. The property we then need is that the vertices of the triangle map to the same boundary points as the "vertices" of the sector. That can be arranged by composing with an automorphism of the disc that maps the first three boundary images to the second three boundary images. Then we just take the composition. So the shape of the domains in this problem is not important, as only three points on the boundary are involved.<|endoftext|> TITLE: Why is the determinant a natural transformation? QUESTION [9 upvotes]: I am reading a tutorial on categories and it says that the determinant $GL_n \longrightarrow \left( \right)^*$ is a simple example of a natural transformation, but embarrassingly I am a bit confused about it. What does this $\left( \right)^*$ mean, and what are our two categories here? Thanks. REPLY [12 votes]: $\mathrm{GL}_n(-)$ is a functor from the category $\textbf{CRing}$ of commutative rings to the category $\textbf{Grp}$ of groups, mapping a commutative ring $A$ to the group $\mathrm{GL}_n(A)$. Similarly, $(-)^*$ (I prefer to write $(-)^\times$) is a functor $\textbf{CRing} \to \textbf{Grp}$ sending a ring $A$ to its group of units $A^*$. One then checks that the determinant is a natural transformation of these functors. REPLY [8 votes]: $()^*$ is the functor that takes a ring $R$ and returns its group of units $R^*$. It is common to write functors using notation that reflects the way their values are written: since $R^*$ is the group of units of $R$, we write $()^*$ for the corresponding functor. Actually, $(-)^*$ is more common. Other examples you might see are something like $\hom(X, -)$, which is the functor that sends an object $Y$ of your category to the set $\hom(X, Y)$, together with a corresponding action on morphisms $f:Y \to Z$. Anyway, both listed functors are from the category of commutative rings to the category of groups.
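To see the naturality square concretely, here is a small Python sketch (the ring homomorphism $\mathbb Z \to \mathbb Z/5\mathbb Z$ and the sample matrices are arbitrary choices): reducing a matrix entrywise and then taking the determinant gives the same result as taking the determinant over $\mathbb Z$ and then reducing, which is the commutativity of the square $\det \circ \mathrm{GL}_n(f) = f \circ \det$, checked here on integer matrices that happen to be invertible mod $5$.

```python
from sympy import Matrix

p = 5  # the ring homomorphism f: Z -> Z/5Z is reduction mod 5

samples = [
    [[3, 7], [2, 9]],                       # invertible mod 5 (det = 13)
    [[1, 4, 2], [0, 3, 5], [6, 1, 1]],      # invertible mod 5 (det = 82)
]
for A in samples:
    reduced = [[a % p for a in row] for row in A]   # GL_n(f): reduce each entry
    lhs = int(Matrix(reduced).det()) % p            # det taken after applying f
    rhs = int(Matrix(A).det()) % p                  # f applied after taking det
    print(lhs, rhs, lhs == rhs)                     # the two paths agree
```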
<|endoftext|> TITLE: Problem with Morley's Theorem QUESTION [12 upvotes]: Greets. Morley's theorem states that a theory which is categorical for an uncountable cardinal is categorical in all uncountable cardinals. My problem with the theorem is that I haven't found a significant example to which this theorem can be applied and for which no other argument has been found. The only examples I know are $RG$, $ACF_0$, and the theory of vector spaces over a finite field, but these examples are worked out with simple cardinal and ordinal arguments. Maybe this question is useless, because maybe the importance of this theorem is just theoretical; if this is so, I would appreciate a reason why. Thanks REPLY [8 votes]: One of the many contributions of Morley's work was to introduce a very general model-theoretic notion of dimension. A strongly minimal formula $\phi(x)$ has the property that given any model $M\models T$, we can assign a (possibly infinite) dimension $\kappa$ to $\phi(M)$, and $|\phi(M)| \leq \aleph_0 + \kappa$. Morley proved that models for uncountably categorical theories are completely controlled by the dimensions of their strongly minimal sets. The proof of Morley's theorem goes like this: Step 1 (the hard part): If a theory $T$ is categorical in some uncountable cardinal, then there is a strongly minimal formula $\phi(x)$ such that a model $M\models T$ is determined up to isomorphism by the dimension of $\phi(M)$, and $|M| = |\phi(M)|$. Step 2 (an easy corollary): $T$ is $\kappa$-categorical for all uncountable $\kappa$. If $M,N\models T$ and $|M| = |N| = \kappa$, then $|\phi(M)| = |\phi(N)| = \kappa$, so $\phi(M)$ and $\phi(N)$ both have dimension $\kappa$, and $M\cong N$. Now the reason that Morley's Theorem seems to add nothing new in each of the classic example cases you have in mind is that in each of these cases, Step 1 is already done, i.e. the strongly minimal set and the dimension notion are already familiar: linear dimension in the case of vector spaces, transcendence degree in the case of algebraically closed fields, cardinality in the case of the theory of infinite sets... In fact, given any particular uncountably categorical theory, one can prove that it's uncountably categorical without appealing to Morley's theorem by doing Step 1 directly (exhibiting a dimension notion which determines models up to isomorphism) and then giving the argument for Step 2. The value of Morley's theorem, of course, is that it guarantees that such a dimension notion exists. As such it's very important as a theorem of model theory. It increases our understanding of what the classes of models for first-order theories can look like. EDIT: I also want to point something out about your question. Morley's theorem has the structure "for all theories satisfying this property, the following is true". You complain that you can't find any examples of particular theories for which the conclusion can't be checked without appealing to Morley's theorem. This is a bit like complaining that the Pythagorean Theorem ("for all triangles satisfying the property of being right, the following is true") isn't useful, just because given any particular right triangle, you can do the arithmetic and check that $a^2 + b^2 = c^2$.<|endoftext|> TITLE: When do I write $\sin(x)$ and when $\sin x$? QUESTION [7 upvotes]: I sometimes see $\sin x$ and sometimes $\sin(x)$. Are the parentheses needed since the sine is a function, or is it more an operator that can be premultiplied to the variable? Or are people just lazy? REPLY [9 votes]: There's no mathematical difference in when to write parentheses or not, as long as there is no doubt how much of the thing that follows "$\sin$" is part of the argument. Part of the syntactic role of parentheses is to make clear that the thing to the left of them is actually a function rather than something to be multiplied. The need for this is greater when the name of the function is just a letter ("$f$" or "$g$" could also conceivably be used as names of constants, for example), but on the other hand "$\sin$" is so unambiguously a function that we usually don't need parentheses to remind the reader that that's what it is. ...
except in situations like $\sin(t+1)$ where "$\sin t + 1$" would have meant $(\sin t)+1$. Omitting the parentheses in unambiguous cases makes the expression slightly easier to read at a glance then there are many other levels of parentheses around.<|endoftext|> TITLE: The real numbers and the Von Neumann Universe QUESTION [9 upvotes]: So I'm going to prefix this question by saying that I probably don't have a great understanding of what I'm asking. We build the cumulative hierarchy as follows: $V_0=\emptyset$ For every $\alpha$, $V_{\alpha+1}=\mathcal{P}(V_\alpha)$ If $Lim(\lambda)$, then $V_\lambda=\bigcup _{\alpha<\lambda} V_\alpha$ We then define the Von Neumann universe to be the class $V=\bigcup_{\alpha} V_{\alpha}$ Once we have done this we can prove various things about this such as: Every set is in some $V_\alpha$ Intuitively we are supposed to picture this as the ordinal numbers being a vertical line starting at $\emptyset$ and going upwards. Then for each $\alpha$ the collection of sets of that rank are horizontal lines, so we get a sort of V shaped picture. What I am unsure about (and maybe this is a ridiculous question) is where in this construction the real numbers are? We have that the natural numbers are $\omega$- the first transfinite ordinal and that $\omega_1$-the supremum of all countable ordinals but I am unsure where the real numbers come in? Thanks for any help (sorry if the question is nonsense) REPLY [16 votes]: The real numbers show up in $V_{\omega+n}$ for some small finite $n$ whose precise value is sensitive to the exact details of how you choose to construct the reals. Even before you have to choose between Dedekind cuts and Cauchy sequences, the rationals are usually constructed as (infinite) equivalence classes of pairs of integers, and the integers themselves as (infinte) equivalence classes of pairs of naturals. Each of these constructions can only happen after $V_\omega$ and contribute a level to $n$, and then the Kuratowski pairs you use in the next construction take a few additional levels to show up. However, if you tune your constructions specially for the reals to exist early in the Von Neumann hiearchy, you can use canonical representatives rather than equivalence classes to represent integers and rationals. Then every rational is represented by a hereditarily finite set, and then $\mathbb Q$ itself as well as all its subsets will be present already in $V_{\omega+1}$ and you can have $\mathbb R\in V_{\omega+2}$ by Dedekind cuts. Note, however, that for many set theorists "the reals" tend to mean simply $\mathcal P(\omega)$ rather than $\mathbb R$, and $\mathcal P(\omega)$ certainly arises already in $V_{\omega+2}$. In any case, you cannot get $\mathbb R$ earlier than $V_{\omega+2}$, because every member of $V_{\omega+1}$ is at most countable. REPLY [10 votes]: The real numbers are not an intrinsic object to the universe of set theory. We have a good way of constructing them from the natural numbers, but actually every set of size continuum can be made into the real numbers. In particular we have that $V_{\omega+1}$ is of size continuum, so in $V_{\omega+2}$ you already have a set of size continuum which can function as the real numbers (e.g. $\mathcal P(\omega)$ ordered by $A\prec B\iff\min(A\Delta B)\in A$). If you wish to compute another construction of the real numbers, then you can do that manually. Suppose we wish to think of the real numbers as Dedekind-cuts, i.e. 
subsets of $\mathbb Q$, so we need to find when $\mathbb Q$ enters the universe; but again we have the same problem. What is $\mathbb Q$? Well, we can think of it as a quotient of $\mathbb Z\times\mathbb Z$, so again... when does $\mathbb Z$ enters the universe? Well, $\mathbb Z$ is a quotient of $\omega\times2$. Let us consider the following rules: Suppose that $A$ has rank $\alpha$. We know that pairs from $A$, $\langle a,b\rangle=\{\{a\},\{a,b\}\}$, which means that $A\times A\subseteq\mathcal{P P}(A)$, so $A\times A$ has rank of $\underline{\alpha+3}$. Quotients of $A\times A$ are subsets of $A$, though, so they have rank of $\alpha+1$. Now you need to sit and calculate, if $\omega$ has rank $\alpha$ how do we get to $\mathbb R$? Furthermore, if you also want the real numbers alongside with additions and other operations, you need to go higher as well, because those operations are only generated at higher stages. For further reading: Formalising real numbers in set theory In set theory, how are real numbers represented as sets?<|endoftext|> TITLE: $\mathcal{K}(L^2(\mathbb{R}^m \times \mathbb{R}^n)) = \mathcal{K}(L^2(\mathbb{R}^m)) \otimes \mathcal{K}(L^2(\mathbb{R}^n))$? QUESTION [6 upvotes]: QUESTION: Is it true that for the algebra of compact operators: $\mathcal{K}(L^2(\mathbb{R}^m \times \mathbb{R}^n))$ is as a $C^{\ast}$-algebra isomorphic to $\mathcal{K}(L^2(\mathbb{R}^m)) \otimes \mathcal{K}(L^2(\mathbb{R}^n))$? The latter tensor product is any $C^{\ast}$-tensor product (because the compact operators are nuclear it doesn't matter). On $L^2$ we use the ($\sigma$-finite) Lebesgue measure, but of course the algebra $\mathcal{K}(L^2)$ no longer depends on the measure. Clearly, $L^2(\mathbb{R}^{m} \times \mathbb{R}^n)$ can be identified with $L^2(\mathbb{R}^m) \otimes L^2(\mathbb{R}^n)$ by Fubini's theorem. This makes me think that $L^2$ has a better chance than other Hilbert spaces to make $\mathcal{K}(\mathcal{H}_1 \otimes \mathcal{H}_2) = \mathcal{K}(\mathcal{H}_1) \otimes \mathcal{K}(\mathcal{H}_2)$ hold. Thanks for your help. EDIT: Of course $\mathcal{K}(\mathcal{H})$ denotes the compact operators on the Hilbert space $\mathcal{H}$. REPLY [2 votes]: Notation $H$ - some Hilbert space $H^{cc}$ - complex conjugate Hilbert space, i.e. with multiplication on complex conjugate scalars $H_1\otimes H_2$ Hilbert tensor product of Hilbert spaces $H_1$ and $H_2$ $\mathcal{F}(H)$ finite rank operators on $H$ $\mathcal{K}(H)$ compact operators on $H$ $\mathcal{B}(H)$ bounded operators on $H$ $x\bigcirc y$ rank one operator on $H$ well defined by $(x\bigcirc y)(z)=\langle z,y\rangle x$ where $x,y,z\in H$ $a\;\dot{\otimes }\;b$ Hilbert tensor product of operators $a\in\mathcal{B}(H_1)$ and $b\in\mathcal{B}(H_2)$ well defined by $(a\;\dot{\otimes }\;b)(x\otimes y)=a(x)\otimes b(y)$ Facts $\mathcal{F}(H)=\operatorname{span}\{ x\bigcirc y:x\in H,\; y\in H\}$ $\mathcal{K}(H)=\operatorname{cl}_{\mathcal{B}(H)}\mathcal{F}(H)$ The proof given below is valid for all Hilbert spaces. Since $\mathcal{K}(H)$ is a nuclear $C^*$ algebra for any Hilbert space $H$, then we can consider any $C^*$ norm on the algebraic tensor product $\mathcal{K}(H_1)\odot K(H_2)$. 
We will consider the spatial tensor norm, so $$ \mathcal{K}(H_1)\otimes K(H_2)=\operatorname{cl}_{\mathcal{B(H_1\otimes H_2)}}(\operatorname{span}\{ a\;\dot{\otimes}\; b:a\in\mathcal{K}(H_1),\; b\in\mathcal{K}(H_2)\})\tag{1} $$ where $a\;\dot{\otimes}\; b\in\mathcal{B}(H_1\otimes H_2)$ is well defined by equality $(a\;\dot{\otimes}\; b)(x\otimes y)=a(x)\otimes b(y)$. Denote the closed linear subspace in the right hand side of $(1)$ by $E$. Lemma 1. $\mathcal{F}(H_1\otimes H_2)\subset E$. Proof. Since bilinear operator $\bigcirc:H\times H^{cc}\to\mathcal{F}(H)$ is bounded, then $\mathcal{F}(H)=\operatorname{span}\{x\bigcirc y: x,y\in S\}$ for any $S\subset H$ such that $H=\operatorname{cl}_H(\operatorname{span}S)$. For $H=H_1\otimes H_2$ we can take $S=\{x\otimes y:x\in H_1,y\in H_2\}$. Now to prove that $\mathcal{F}(H_1\otimes H_2)\subset E$ it is remains to show that $(x\otimes y)\bigcirc (x'\otimes y')\in E$ for all $x,x'\in H_1$, $y,y'\in H_2$. But this is indeed true because $(x\otimes y)\bigcirc (x'\otimes y')=a'\;\dot{\otimes}\; b'$ for $a'=x\bigcirc x'\in\mathcal{K}(H_1)$ and $b'=y\bigcirc y'\in\mathcal{K}(H_2)$. Lemma 2. $E\subset \mathcal{K}(H_1\otimes H_2)$. Proof. Consider $a'=x\bigcirc x'\in\mathcal{F}(H_1)$ and $b'=y\bigcirc y'\in\mathcal{F}(H_2)$ for some $x,x'\in H_1$ and $y,y'\in H_2$. Recall $a'\;\dot{\otimes}\; b'=(x\otimes y)\bigcirc (x'\otimes y')\in\mathcal{F}(H_1\otimes H_2)$. Since $\dot{\otimes}$ is bilinear operator and $\mathcal{F}(H)=\operatorname{span}\{x\bigcirc y:x,y\in H\}$ for any Hilbert space $H$ then $a'\;\dot{\otimes}\; b'\in\mathcal{F}(H_1\otimes H_2)$. In other words $\dot{\otimes}\;(\mathcal{F}(H_1),\mathcal{F}(H_2))\subset \mathcal{F}(H_1\otimes H_2)$. Since the bilinear operator $\dot{\otimes}:\mathcal{B}(H_1)\times \mathcal{B}(H_2)\to\mathcal{B}(H_1\otimes H_2)$ is bounded, then $$ \dot{\otimes}\;(\mathcal{K}(H_1),\mathcal{K}(H_2)) =\dot{\otimes}\;(\operatorname{cl}_{\mathcal{B}(H_1)}\mathcal{F}(H),\operatorname{cl}_{\mathcal{B}(H)}\mathcal{F}(H_2)) \subset\operatorname{cl}_{\mathcal{B}(H_1\otimes H_2)}\left(\dot{\otimes}\;(\mathcal{F}(H_1),\mathcal{F}(H_2))\right) \subset\operatorname{cl}_{\mathcal{B}(H_1\otimes H_2)}\mathcal{F}(H_1\otimes H_2) =\mathcal{K}(H_1\otimes H_2) $$ So $E=\operatorname{cl}_{\mathcal{B}(H_1\otimes H_2)}\dot{\otimes}\;(\mathcal{K}(H_1),\mathcal{K}(H_2))\subset\mathcal{K}(H_1\otimes H_2)$. Proposition. $E=\mathcal{K}(H_1\otimes H_2)$. Proof. Since $\mathcal{K}(H)=\operatorname{cl}_{\mathcal{B}(H)}\mathcal{F}(H)$ for any Hilbert space $H$ and $E$ is cloed, then it is enough to show that $\mathcal{F}(H_1\otimes H_2)\subset E\subset\mathcal{K}(H_1\otimes H_2)$. Now the result follows from lemma $1$ and lemma $2$.<|endoftext|> TITLE: Prove: If a sequence converges, then every subsequence converges to the same limit. QUESTION [56 upvotes]: I need some help understanding this proof: Prove: If a sequence converges, then every subsequence converges to the same limit. Proof: Let $s_{n_k}$ denote a subsequence of $s_n$. Note that $n_k \geq k$ for all $k$. This easy to prove by induction: in fact, $n_1 \geq 1$ and $n_k \geq k$ implies $n_{k+1} > n_k \geq k$ and hence $n_{k+1} \geq k+1$. Let $\lim s_n = s$ and let $\epsilon > 0$. There exists $N$ so that $n>N$ implies $|s_n - s| < \epsilon$. Now $k > N \implies n_k > N \implies |s_{n_k} - s| < \epsilon$. Therefore: $\lim_{k \to \infty} s_{n_k} = s$. 
What is the intuition for why each subsequence will converge to the same limit? I also do not understand the induction that claims $n_k \geq k$. REPLY [12 votes]: About the indices. $n_1\geq 1$, because $(n_k)$ is defined to be a strictly increasing sequence of indices. Assume $n_k\geq k$ for some $k\in\mathbb{N}$; what follows by definition is that $n_{k+1} > n_k\geq k$. If $m,n\in\mathbb{N}$ such that $m>n$, then $m\geq n+1$. (property of natural numbers) We now conclude that $n_{k+1}\geq n_k +1\geq k+1$.<|endoftext|> TITLE: Why completion of a metric space $X$ is 'unique' up to isometry? QUESTION [5 upvotes]: Let $(X,d)$ be a metric space. Let $({X_1}^*, {d_1}^*)$ and $({X_2}^*, {d_2}^*)$ be completions of $(X,d)$ such that $\phi_1:X\rightarrow {X_1}^*$ and $\phi_2:X\rightarrow {X_2}^*$ are isometries. ($\phi_1[X]$ and $\phi_2[X]$ are dense in ${X_1}^*$ and ${X_2}^*$ respectively) Then, there exists a unique bijective isometry $f:{X_1}^* \rightarrow {X_2}^*$ such that $f\circ \phi_1 = \phi_2$. Here, let $\phi_1=\phi_2$. It doesn't seem to me that ${X_1}^* = {X_2}^*$. What 'uniqueness' is this theorem referring to? REPLY [4 votes]: It means exactly what you've written: that $X_1^*$ and $X_2^*$ are actually pretty much the same thing, including the way $X$ embeds in them. It doesn't mean that $X_1^*=X_2^*$. Even if $\phi_1=\phi_2$ (and even if $\phi_1=\phi_2=\operatorname{id} _X$ are identity), there is no reason for $f$ to be the identity map. Indeed if we take $X=[0,1)$ with the Euclidean metric then you can choose some arbitrary $x_0\notin [0,1]$ and put $X_1^*=[0,1]$, $X_2^*=X\cup\{x_0\}$ with $d_1^*$ and $d_2^*$ the obvious metrics, with $\varphi_1=\varphi_2=\operatorname{id}_X$. Then $f(1)=x_0$, $f(x)=x$ elsewhere is the unique isometry, but not the identity. Furthermore, even if $X_1^*=X_2^*$ as a set, $f$ need not be the identity. For example, consider a minor refinement of the above example with $X=(0,1),X_1^*=X_2^*=[0,1]$, with $X,X_1^*$ with the Euclidean metric, and $X_2^*$ with an almost Euclidean metric, except that it sees $1$ as $0$ and vice versa.<|endoftext|> TITLE: A bijection from the plane to itself that takes a circle to a circle must take a straight line to a straight line. QUESTION [14 upvotes]: Let $ f: \mathbb{R}^{2} \rightarrow \mathbb{R}^{2} $ be a bijective function. If the image of any circle under $ f $ is a circle, prove that the image of any straight line under $ f $ is a straight line. REPLY [6 votes]: This has the result (second page). I hope it's thorough enough to placate your curiosity...<|endoftext|> TITLE: If a maximal subgroup is normal, it has prime index QUESTION [9 upvotes]: I'm having trouble with an exercise in the book "Introduction to the Theory of Groups". This is the problem: Let $M$ be a maximal subgroup of $G$. Prove that if $M$ is a normal subgroup of $G$, then $[G: M]$ is finite and equal to a prime. REPLY [11 votes]: Notation: We denote the normal subgroup by $N$ instead. By the Correspondence Theorem, there exists a bijection from the set of all subgroups $H$ such that $N\subseteq H\subseteq G$ onto the set of all subgroups of $G/N$. Since the only such subgroups are $H=N$ and $H=G$, $G/N$ has only two subgroups, namely $N/N$ and $G/N$. Let $xN$ be a nontrivial element in $G/N$. $\langle xN\rangle$ is a nontrivial subgroup of $G/N$, thus $\langle xN\rangle=G/N$. This means $G/N$ is cyclic. If $|G/N|$ is infinite, then $G/N\cong\mathbb{Z}$, which is a contradiction as $\mathbb{Z}$ has infinitely many subgroups of the form $n\mathbb{Z}$. Therefore $[G:N]=|G/N|$ is finite.
Thus $G/N\cong\mathbb{Z}/n\mathbb{Z}$ for some integer $n$. By the Correspondence Theorem, the subgroups of $\mathbb{Z}/n\mathbb{Z}$ are $m\mathbb{Z}/n\mathbb{Z}$ where $n\mathbb{Z}\subseteq m\mathbb{Z}\subseteq\mathbb{Z}$. This means $m\mid n$. Since $G/N$ has two subgoups, this means $n$ has exactly 2 divisors, so $n$ is a prime. Thus $[G:N]=|\mathbb{Z}/n\mathbb{Z}|=n$ is a prime.<|endoftext|> TITLE: Number of permutations which fixes a certain number of point QUESTION [8 upvotes]: Given the set $N:=\{1,\cdots,n\}$, let $\pi$ be a permutation on $N$. We say $i \in \{1,\cdots,n\}$ is fixed by $g$ iff $\pi(i)=i.$ Denote the set of all permuations on $N$ by $S_n$. Define $f :~N \cup \{0\} \to \mathbb{N}_{\geq 0}$ by $f(m):=$The number of permutations in $S_n$ which has exactly $m$ fixed points. Prove that $$\sum_{m=0}^{n} f(m) m^2=2n!$$ Remark: It seems $f(0)$ plays an prominent role. REPLY [2 votes]: This is another question that can be addressed using the symbolic method, as seen here. While this is not necessarily the simplest solution it does produce explicit forms of all generating functions and makes the problem amenable to automatic combinatorics, a powerful method developed by Chyzak, Salvy and Flajolet. Permutations are sets of cycles, having combinatorial class specification $\mathfrak P(\mathfrak C(\mathfrak Z))$. Hence the corresponding exponential generating function (EGF) is $$ \exp \log \frac{1}{1-z} = \frac{1}{1-z},$$ where $$ \log \frac{1}{1-z} $$ is the EGF of labelled cycles. (These two are easily verified as there are $n!$ permutations and $n!/n$ cycles.) If we want to count the number of fixed points we need to mark each fixed point with a new variable, $u$, which gives the class specification $\mathfrak P(\mathfrak C(\mathcal Z) -\mathcal Z + \mathcal U \mathcal Z )$. and the mixed generating function $$G(z, u) = \exp \left( \log \frac{1}{1-z} -z + uz \right) = \frac{1}{1-z} e^{-z} e^{uz}. $$ A term $u^m z^n/n!$ in $G(z, u)$ represents a permutation of length $n$ with $m$ fixed points. We seek to multiply this term by $m^2$. Hence we differentiate with respect to $u$, multiply by $u$, differentiate by $u$ again and finally multiply by $u$ one more time, obtaining $$ H(z, u) = \frac{1}{1-u} u \left( \frac{d}{du} \left( u \frac{d}{du} G(z, u) \right)\right) = \frac{1}{1-u} \frac{1}{1-z} e^{-z} (uz + u^2z^2) e^{uz}.$$ The factor $\frac{1}{1-u}$, when it occurs in a product with another formal power series in $u$, will produce the series for the sums of the first $n$ elements. It is included here to build a generating function for the sum, so that $$ n! [u^n][z^n] H(z, u) = \sum_{m=0}^n f(m, n) m^2.$$ The differentiate-and-multiply is known as a so-called marking operation in symbolic combinatorics. It remains to extract coefficients. We have $$ \begin{align} & [z^n] \frac{1}{1-u} \frac{1}{1-z} e^{-z} \, uz \, e^{uz} = \frac{u}{1-u} [z^{n-1}] \frac{1}{1-z} e^{(u-1)z} \\ &= \frac{u}{1-u} \sum_{k=0}^{n-1} [z^k] e^{(u-1)z} = \frac{u}{1-u} \sum_{k=0}^{n-1} \frac{(u-1)^k}{k!} \\ &= u \left( \frac{1}{1-u} - \sum_{k=1}^{n-1} \frac{(u-1)^{k-1}}{k!}\right) \end{align}$$ Similarly, $$ \begin{align} & [z^n] \frac{1}{1-u} \frac{1}{1-z} e^{-z} \, u^2z^2 \, e^{uz} = \frac{u^2}{1-u} [z^{n-2}] \frac{1}{1-z} e^{(u-1)z} \\ &= \frac{u^2}{1-u} \sum_{k=0}^{n-2} [z^k] e^{(u-1)z} = \frac{u^2}{1-u} \sum_{k=0}^{n-2} \frac{(u-1)^k}{k!} \\ &= u^2 \left( \frac{1}{1-u} - \sum_{k=1}^{n-2} \frac{(u-1)^{k-1}}{k!}\right) \end{align}$$ It follows that $$ n! H(z, u) = n! 
[u^n] \left( u \left( \frac{1}{1-u} - \sum_{k=1}^{n-1} \frac{(u-1)^{k-1}}{k!} \right) + u^2 \left( \frac{1}{1-u} - \sum_{k=1}^{n-2} \frac{(u-1)^{k-1}}{k!}\right)\right)$$ which yields $$n! \, [u^n] \left( u \frac{1}{1-u} + u^2 \frac{1}{1-u} \right)= 2n!,$$ which was to be shown. The beauty of this method is that it is algorithmic and can be implemented in computer algebra systems as in the package mentioned above. It is actually possible to have a program calculate all the generating functions that we have seen using only the specification of the combinatorial class. There is much more on this method at Wikipedia.<|endoftext|> TITLE: How to construct a bijection from $(0, 1)$ to $[0, 1]$? QUESTION [8 upvotes]: Possible Duplicate: Bijection between an open and a closed interval How do I define a bijection between $(0,1)$ and $(0,1]$? I wonder if I can cut the interval $(0,1)$ into three pieces: $(0, \frac{1}{3})\cup(\frac{1}{3},\frac{2}{3})\cup(\frac{2}{3},1)$, in which I'm able to map the points $\frac{1}{3}$ and $\frac{2}{3}$ to $0$ and $1$ respectively. Now the question that remains is how to build a bijection from those three intervals to $(0,1)$. Or maybe my method just goes in the wrong direction. Any correct approaches? REPLY [6 votes]: The idea you mention, with mild modification, will work. Please note that what is below is a minor variant of the solution given by Patrick da Silva. Your decomposition of $(0,1)$ is not quite complete. We want to write $$(0,1)=\left(0,\frac{1}{3}\right) \cup\left\{\frac{1}{3}\right\} \cup \left(\frac{1}{3},\frac{2}{3}\right)\cup \left\{\frac{2}{3}\right\}\cup \left(\frac{2}{3},1\right),$$ so $5$ "intervals," two of them kind of boring. From now on, we will call $\frac{2}{3}$ by the more cumbersome name $1-\frac{1}{3}$. Use the identity function on the two end intervals $\left(0,\frac{1}{3}\right)$ and $\left(1-\frac{1}{3},1\right)$, and map $\frac{1}{3}$ to $0$, and $1-\frac{1}{3}$ to $1$. This leaves $\left(\frac{1}{3},1-\frac{1}{3}\right)$, which needs to be bijectively mapped to $\left[\frac{1}{3},1-\frac{1}{3}\right]$. Use the same trick on the interval $\left(\frac{1}{3},1-\frac{1}{3}\right)$ that we used on $(0,1)$. So this time the two special points "inside" that will be mapped to $\frac{1}{3}$ and $1-\frac{1}{3}$ respectively are $\frac{1}{3}+\frac{1}{9}$ and $1-\frac{1}{3}-\frac{1}{9}$. That leaves $\left(\frac{1}{3}+\frac{1}{9},1-\frac{1}{3}-\frac{1}{9}\right)$ to be mapped bijectively to $\left[\frac{1}{3}+\frac{1}{9},1-\frac{1}{3}-\frac{1}{9}\right]$. Continue, forever. As pointed out by Patrick da Silva, the point $\frac{1}{2}$ is not dealt with in this process: simply map it to itself. It would be notationally a little simpler to use the same idea to map $(-1,1)$ bijectively to $[-1,1]$, and use linear functions to take $(0,1)$ to $(-1,1)$, and $[-1,1]$ to $[0,1]$, to adjust to our situation. The advantage is that the first "middle" interval is $\left(-\frac{1}{3},\frac{1}{3}\right)$, the second middle interval is $\left(-\frac{1}{9},\frac{1}{9}\right)$, and so on.<|endoftext|> TITLE: Does $x/yz$ mean $x/(yz)$ or $(x/y)z$? QUESTION [7 upvotes]: When people write $x/yz$, do they usually mean $x/(yz)$ or $(x/y)z$? For example, from Wikipedia If $p\geq 1/2$ , then $$ \Pr\left[ X>mp+x \right] \leq \exp(-x^2/2mp(1-p)) . $$ Thanks! REPLY [3 votes]: I was told once by a senior engineer that the last explicit symbol is what counts; it cancels the action of the previous one.
So, for example, xyz/abc means $(x*y*z)/(a*b*c)$ and xyz/ab*c means $(x*y*z)/(a*b)*c$, as expressed by any standard computer language. I was a little upset by that implicit convention, but soon found out that it worked well and was able to understand all old handbooks, where equations were written straight in a single line. I suppose aesthetics is what counts here. I have the feeling that our need for explicit symbols came with a generation which had computer programming lessons at college, so a relatively new thing starting at what, the 60's, the 70's?<|endoftext|> TITLE: What's the probability that the next coin flip is heads? QUESTION [5 upvotes]: I randomly chose between 2 coins. One of the coins has a 0.8 chance of heads and a 0.2 chance of tails. The other is a fair coin that has a 0.5 chance of either heads or tails. I flip this coin twice and get 2 heads. What's the probability that my next flip is heads? I tried using Bayes' Theorem: $$ P(H|HH) = \frac{P(HH|H)P(H)}{P(HH)} $$ But then $P(HH|H)$ is not easy to solve... REPLY [6 votes]: Let $HH$ be the event that we get $2$ heads in a row. Let $H_3$ be the event the third toss is a head. We want $\Pr(H_3|HH)$. Maybe we start from something a little simpler than the Bayes' Theorem that you used, essentially the definition of conditional probability: $$\Pr(H_3|HH)=\frac{\Pr(H_3 \cap HH)}{\Pr(HH)}.$$ We calculate the two probabilities on the right. For $\Pr(HH)$, note that two heads in a row happens with probability $(4/5)^2$ if we use the funny coin, and with probability $(1/2)^2$ if the coin is the ordinary coin. It follows that $$\Pr(HH)=\frac{1}{2}\left(\frac{4}{5}\right)^2+\frac{1}{2}\left(\frac{1}{2}\right)^2.$$ A similar calculation gives us the probability of $HH$, followed by $H_3$: $$\Pr(HH\cap H_3)=\frac{1}{2}\left(\frac{4}{5}\right)^3+\frac{1}{2}\left(\frac{1}{2}\right)^3.$$ Divide.<|endoftext|> TITLE: Integral Inequality $|f''(x)/f(x)|$ QUESTION [6 upvotes]: Let $f$ be a $C^2$ function in $[0,1]$ such that $f(0)=f(1)=0$ and $f(x)\neq 0\,\forall x\in(0,1).$ Prove that $$\int_0^1 \left|\frac{f{''}(x)}{f(x)}\right|dx\ge4$$ REPLY [2 votes]: Proving the Lower Bound Without loss of generality, assume that $f(x)\gt0$ for $x\in(0,1)$. Suppose that $f(x_0)=y_0=\max\limits_{x\in[0,1]}f(x)$. Then, $f'(x_0)=0$. By the Mean Value Theorem, for some $x_1\in(0,x_0)$, $f'(x_1)=\frac{y_0}{x_0}$. Therefore, $$ \int_0^{x_0}|f''(x)|\,\mathrm{d}x\ge\frac{y_0}{x_0} $$ Furthermore, for some $x_2\in(x_0,1)$, $f'(x_2)=-\frac{y_0}{1-x_0}$. Therefore, $$ \int_{x_0}^1|f''(x)|\,\mathrm{d}x\ge\frac{y_0}{1-x_0} $$ Since $f(x)\le y_0$, $$ \begin{align} \int_0^1\left|\,\frac{f''(x)}{f(x)}\,\right|\,\mathrm{d}x &\ge\frac{\frac{y_0}{x_0}+\frac{y_0}{1-x_0}}{y_0}\\ &=\frac1{x_0}+\frac1{1-x_0}\\ &=\frac1{\frac14-\left(x_0-\frac12\right)^2}\\[3pt] &\ge4 \end{align} $$ The Lower Bound is Sharp Let $$ f_a(x)=\sin^{-1}\left(\frac{\sin(\pi x)}{1+a^2\sin^2(\pi x)}\right) $$ then $$ \lim_{a\to0}\int_0^1\left|\,\frac{f_a''(x)}{f_a(x)}\,\right|\,\mathrm{d}x=4 $$ since $f_a''(x)$ is tends to $0$ except near $x=\frac12$, and $\int_0^1f_a''(x)\,\mathrm{d}x$ tends to $-2\pi$, whereas $f_a\!\left(\frac12\right)$ tends to $\frac\pi2$.<|endoftext|> TITLE: Show that an entire function bounded by $|z|^{10/3}$ is cubic QUESTION [8 upvotes]: Question: Let $f$ be an entire function such that $|f(z)|\leq1+2|z|^{10/3}$ for all z. 
Prove that $f$ is a cubic polynomial Thoughts so far: Using a corollary of Liouville's theorem, we know that we want to show that $|f(z)|\leq a+b|z|^3$ and $|f(z)|\geq a+b|z|^3$ for some constants a and b. We know that within the unit circle $|f(z)|\leq 1+2|z|^{10/3} < 1+2|z|^3$ which gives us an upper bound, while outside of the unit circle we know that $-|f(z)|\geq -1-2|z|^{10/3} \implies |f(x)| \geq |-1-2|z|^{10/3}| = |2|z|^{10/3}--1|$ (by triangle inequality) $\geq 2|z|^{10/3}-1 > 2|z|^3-1$, which provides an lower bound of three, which by the corollary of Liouville's theorem implies that f(z) must be cubic. However, this proof makes me quesy because I feel that the upper and lower limits were chosen arbitrarily and could be any such function with a power less than $\frac{10}{3}%$, which makes me feel rather frustrated. Furthermore, this also leads me to believe that this is not a constructive line of thought for this problem. Thank you in advance for any help that you may provide. REPLY [6 votes]: The general form of this classic problem is the following proposition: If $f$ is an entire function satisfying $|f(z)| \le A + B|z|^k$ for some positive constants $A,B$ and some nonnegative integer $k$, then $f$ is a polynomial of degree at most $k$. I'll prove this proposition in a moment, by induction on $k$. Note that if the proposition is true, then it implies the following theorem: If $f$ is an entire function satisfying $|f(z)| \le A + B|z|^\gamma$ for some positive constants $A,B,\gamma$, then $f$ is a polynomial of degree at most $\lfloor\gamma\rfloor$. (Justification: the inequality with $\gamma$ implies the inequality for $\lfloor\gamma\rfloor+1$, possibly with different constants $A,B$. So $f$ is a polynomial of degree at most $\lfloor\gamma\rfloor+1$. But if it truly had degree $\lfloor\gamma\rfloor+1$, then it would grow faster than $|z|^\gamma$, violating the given inequality; so it actually has degree at most $\lfloor\gamma\rfloor$.) The base case $k=0$ of the Proposition is simply Liouville's Theorem: a bounded entire function is constant. Suppose the proposition is true for $k-1$, and let $f$ satisfy $|f(z)| \le A+B|z|^k$. Let $g(z) = (f(z)-f(0))/z$, which is also an entire function (its simgularity at $z=0$ is removable). It's easy to check that $|g(z)| \le C + D|z|^{k-1}$ for some positive constants $C,D$ (check separately on $|z|\le1$ and $|z|\ge1$). By the induction hypothesis, $g$ is a polynomial of degree at most $k-1$; since $f(z) = zg(z)+f(0)$, we see that $f$ is a polynomial of degree at most $k$.<|endoftext|> TITLE: Multivariable Gauss's Lemma QUESTION [5 upvotes]: Gauss's Lemma for polynomials claims that a non-constant polynomial in $\mathbb{Z}[X]$ is irreducible in $\mathbb{Z}[X]$ if and only if it is both irreducible in $\mathbb{Q}[X]$ and primitive in $\mathbb{Z}[X]$. I wonder if this holds for multivariable case. Is it true that a non-constant polynomial in $\mathbb{Z}[X_1,\dots,X_n]$ is irreducible in $\mathbb{Z}[X_1,\dots,X_n]$ if and only if it is both irreducible in $\mathbb{Q}[X_1,\dots,X_n]$ and primitive in $\mathbb{Z}[X_1,\dots,X_n]$? Thank you for your help. REPLY [2 votes]: Let $n \ge 1$ be an integer. We define a total order on $\mathbb{Z}^n$ as follows. Let $(r_1,\dots, r_n)$, $(s_1,\dots, s_n) \in \mathbb{Z}^n$. Let $k = \min \{i; r_i \neq s_i\}$. Then $(r_1,\dots, r_n) > (s_1,\dots, s_n)$ if and only if $r_k > s_k$. Lemma 1 Let $r, s, t \in \mathbb{Z}^n$. Suppose $r > s$. Then $r + t > s + t$. Proof: Clear. 
Lemma 2 Let $r, s, r', s' \in \mathbb{Z}^n$. Suppose $r + s = r' + s'$ and $r > r'$. Then $s' > s$. Proof: $r - r' = s' - s$. By Lemma 1, $r - r' > 0$. Hence $s' - s > 0$. Hence $s' > s$ by Lemma 1. QED Let $A$ be a UFD. Let $p$ be a prime element of $A$. Let $x \in A$. If $x$ is divisible by $p^a$ but not by $p^{a+1}$, we denote this fact by $p^a||x$. Let $f \in A[X_1,\dots, X_n]$. We denote by $C(f)$ the gcd of all the coefficients of $f$. If $(C(f)) = (1)$, $f$ is called primitive. Let $\mathbb{N}$ be the set of integers $\ge 0$. We denote $(r_1,\dots, r_n) \in \mathbb{N}^n$ by $r$. We denote a monomial $X_1^{r_1}\cdots X_n^{r_n}$ by $X^r$. Lemma 3 Let $A$ be a UFD. Let $f, g \in A[X_1,\dots, X_n]$. Then $(C(fg)) = (C(f)C(g))$. Proof: Let $f = \sum_r \lambda_r X^r$. Let $g = \sum_s \mu_s X^s$. Let $fg = \sum_m \gamma_m X^m$. Then $\gamma_m = \sum_{r+s = m} \lambda_r\mu_s$. Let $p$ be a prime element of $A$. Suppose $p^a||C(f)$ and $p^b||C(g)$. It suffices to prove that $p^{a+b}||C(fg)$. It is clear that $p^{a+b}|C(fg)$. Let $h = max \{r \in \mathbb{N}^n\colon \lambda_r$ is not divisible by $p^{a+1}\}$. Let $k = max \{s \in \mathbb{N}^n\colon \mu_s$ is not divisible by $p^{b+1}\}$. Let $r, s \in \mathbb{N}^n$. Suppose $r + s = h + k$. If $r \neq h$, then $\lambda_r\mu_s$ is divisible by $p^{a+b+1}$ by Lemma 2. Since $\gamma_{h+k} = \sum_{r+s = h+k} \lambda_r\mu_s$, $\gamma_{h+k}$ is not divisible by $p^{a+b+1}$. QED Proposition Let $A$ be a UFD, $K$ its field of fractions. Let $f \in A[X_1,\dots, X_n]$ be non-constant. Then $f$ is irreducible if and only if $f$ is primitive and $f$ is irreducible in $K[X_1,\dots, X_n]$. Proof: Suppose $f$ is irreducible. Clearly $f$ is primitive. Suppose $f = g'h'$, where $g'$ and $h'$ are non-constant polynomial in $K[X_1,\dots, X_n]$. It is easy to see that there exist primitive polynomials $g, h \in A[X_1,\dots, X_n]$ and $a, b \in A -\{0\}$ such that $af = bgh$. By Lemma 3, $gh$ is primitive. Hence $b = a\epsilon$, where $\epsilon$ is an invertible element of $A$. Hence $f = \epsilon gh$. This is a contradiction. The converse is clear. QED<|endoftext|> TITLE: Closure, Interior, and Boundary of Jordan Measurable Sets. QUESTION [14 upvotes]: This question has a number of parts. Let $E\subset\mathbb{R}^{d}$ be a bounded subset. (1) Show that $m^{\star,(J)}(E)=m^{\star,(J)}(\bar{E})$ (closure) (2) Show that $m_{\star,(J)}(E)=m_{\star,(J)}(E^{\circ})$ (interior) (3) Show that $E$ is Jordan measurable if and only if $m^{\star}(\partial E)=0$. I originally had different proofs for parts (1) and (2), but I revised them because I wanted a more rigorous $\epsilon$-estimate type proof (I'm trying to practice getting better at this). But, looking at them again, I think there is a flaw. In particular, where I make the justification that we can "fatten up" or "shrink down" the covers in each part (see the proofs). Basically, the reason why I think it is now flawed is because if one replaces $\frac{\epsilon}{N}$ by $\frac{\epsilon}{2^{n}}$, the proof for the outer Jordan measure (part (1)) seems to carry over word-for-word in the case of Lebesgue outer measure, and that's clearly false since $\mu^{\star}([0,1]\cap\mathbb{Q})=0$, yet $\mu^{\star}(\overline{[0,1]\cap\mathbb{Q}})=1$. Anyway, I would appreciate anyone's assistance in helping me correct these proofs. Also, for part (3), I'm stuck on proving the implication $m^{\star,(J)}(\partial E)=0\rightarrow E\in\mathscr{J}(\mathbb{R}^{d})$. 
SOLUTION (1) By definition of $m^{\star,(J)}$, there exists a collection of boxes $\{B_{j}\}_{j=1}^{N}$ such that both \begin{align*} E\subset\bigcup\limits_{j=1}^{N}B_{j} &&\text{and} &&\sum\limits_{j=1}^{N}|B_{j}|\leq m^{\star,(J)}(E)+\epsilon \end{align*} hold for every $\epsilon>0$. Because $E\Delta\bar{E}\subset E'$ (the set of limit points of $E$), by enlarging each box $B_{j}$ (as necessary), we can find a new collection of boxes $\{B'_{j}\}_{j=1}^{N}$ such that each of the conditions \begin{align*} |B_{j}|\leq|B'_{j}|\leq|B_{j}|+\frac{\epsilon}{N}, &&\bigcup_{j=1}^{N}B'_{j}\supset\bar{E}\supset E, &&\text{and} &&\sum\limits_{j=1}^{N}|B'_{j}|\leq m^{\star,(J)}(\bar{E})+\epsilon \end{align*} also holds for the same $\epsilon$. Montonicity and the fact that $E\subset\bar{E}$ then gives \begin{align*} m^{\star,(J)}(E) &\leq m^{\star,(J)}(\bar{E})\\ &\leq\sum\limits_{j=1}^{N}|B'_{j}|\\ &\leq\sum\limits_{j=1}^{N}\left(|B_{j}|+\frac{\epsilon}{N}\right)\\ &\leq m^{\star, (J)}(E)+2\epsilon. \end{align*} In particular, we obtain the estimate $$\Bigg|m^{\star,(J)}(E)-m^{\star,(J)}(\bar{E})\Bigg|\leq2\epsilon,$$ and since $\epsilon$ was arbitrary, we conclude $m^{\star,(J)}(E)=m^{\star,(J)}(\bar{E})$ as required. SOLUTION (2) By definition of $m_{\star, (J)}$ there exists boxes $\{B_{j}\}_{j=1}^{N}$ such that both \begin{align*} \bigcup\limits_{j=1}^{N}B_{j}\subset E &&\text{and} &&\sum\limits_{j=1}^{N}|B_{j}|\geq m_{\star,(J)}(E)-\epsilon \end{align*} hold for every $\epsilon>0$. Because $E\Delta E^{\circ}\subset\partial E\subset E'\cup\{x\in E:N_{\delta}(x)\cap E=\{x\}\forall\delta>0\}$, and isolated points have inner (and outer) measure $0$, by shrinking each box $B_{j}$ (as necessary), we can find a new collection of boxes $\{B'_{j}\}_{j=1}^{N}$ such each of the conditions \begin{align*} |B_{j}|\geq|B'_{j}|\geq|B_{j}|-\frac{\epsilon}{N}, &&\bigcup\limits_{j=1}^{N}B'_{j}\subset E^{\circ}\subset E &&\text{and} &&\sum\limits_{j=1}^{N}|B'_{j}|\geq m_{\star,(J)}(E^{\circ})-\epsilon \end{align*} also holds for the same $\epsilon$. Montonicity and the fact that $E^{\circ}\subset E$ then gives \begin{align*} m_{\star,(J)} &\geq m_{\star,(J)}(E^{\circ})\\ &\geq\sum\limits_{j=1}^{N}|B'_{j}|\\ &\geq\sum\limits_{j=1}^{N}\left(|B_{j}|-\frac{\epsilon}{N}\right)\\ &\geq m_{\star,(J)}(E)-2\epsilon. \end{align*} In particular, we obtain the estimate $$\Bigg|m_{\star,(J)}(E^{\circ})-m_{\star,(J)}(E)\Bigg|\leq2\epsilon,$$ and since $\epsilon>0$ was arbitrary, we conclude $m_{\star,(J)}(E)=m_{\star,(J)}(E^{\circ})$ as required. SOLUTION (3) We have the inequality $$m_{\star,(J)}(E)=m_{\star,(J)}(E^{\circ})\leq m^{\star,(J)}(\bar{E})=m^{\star,(J)}(E).$$ In the case that $E$ is Jordan measurable, this immediately becomes an equality $$m_{\star,(J)}(E)=m_{\star,(J)}(E^{\circ})=m(E)=m^{\star,(J)}(\bar{E})=m^{\star,(J)}(E),$$ and monotonicity then implies that $m^{\star,(J)}(E\Delta\bar{E})=m^{\star,(J)}(E\Delta E^{\circ})=0$, which means that both $\bar{E}$ and $E^{\circ}$ are Jordan measurable with measure $m(E)$. Boolean closure then implies $\partial E$ is Jordan measurable since $\partial E=\bar{E}-E^{\circ}$, and from additivity we have $$m(\partial E)=m(\bar{E})-m(E^{\circ})=0.$$ Now suppose $\partial E=0$. 
Then sub-additivity of $m^{\star,(J)}$ and $m_{\star,(J)}$ implies $$0=m^{\star,(J)}_{\star,(J)}(\partial E)\geq m^{\star,(J)}_{\star,(J)}(\bar{E})-m^{\star,(J)}_{\star,(J)}(E^{\circ})\geq0,$$ so that the original inequality is extended to $$m_{\star,(J)}(E^{\circ})=m_{\star,(J)}(E)=m_{\star,(J)}(\bar{E})\leq m^{\star,(J)}(E^{\circ})=m^{\star,(J)}(E)=m^{\star,(J)}(\bar{E}).$$ COMMENTS (Update 1) I want to look a little closer at my argument in (1). I can't see why it's incorrect, but perhaps a modification will make it more rigorous. For convenience, let's used cubes $Q_{j}$ instead of boxes $B_{j}$ (that the former can be used in the definition of Jordan content is an easy consequence of the fact that any box $B_{j}$ can be arbitrarily approximated by a finite number of cubes by an obvious dissection process). Then I have a collection of cubes $\{Q_{j}\}_{j=1}^{N}$ such that $\bigcup_{j=1}^{N} Q_{j}\supset E$ and $\sum_{j=1}^{N}|Q_{j}|\leq m^{\star,(J)}(E)+\epsilon$. Suppose each cube has some index length $\ell_{j}$, and consider the cubes $\{Q_{j}'\}_{j=1}^{N}$ obtained by enlarging each $\ell_{j}\mapsto\ell_{j}+\frac{epsilon}{N}$. Then this collection covers $\bar{E}$ for the following reasons. If $E$ has any isolated points, then the $Q_{j}$ already covered them, so the $Q'_{j}$ certainly cover them as well. Therefore, the only other points in $\bar{E}\backslash E$ are limit points of the set $E$. But limit points are arbitrarily close to $E$, the set covered by the $Q_{j}$ (they have distance $0$ from the cover), and since by the explicit construction $dist(Q_{j},Q'_{j})=\frac{\epsilon}{N}>0$ and $Q'_{j}$ also cover $E$, we conclude the $Q'_{j}$ also cover $\bar{E}$. (If this is wrong, pleeaaassseee explain to me the logical fault). Then we have \begin{align*} m^{\star,(J)}(\bar{E}) &\leq\sum_{j=1}^{N}|Q'_{j}|\\ &=\sum_{j=1}^{N}\left(\ell_{j}+\frac{\epsilon}{N}\right)^{d}\\ &=\sum_{j=1}^{N}\sum_{k=0}^{d}\binom{d}{k}\left(\ell_{j}^{k}\left(\frac{\epsilon}{N}\right)^{d-k}\right)\\ &=\sum_{j=1}^{N}\left(\ell_{j}^{d}+d\frac{\epsilon}{N}\ell_{j}^{d-1}+\ldots+\frac{\epsilon^{d}}{N^{d}}\right)\\ &=\sum_{j=1}^{N}|Q_{j}|+NO\left(\frac{\epsilon}{N}\right)\\ &\leq m^{\star,(J)}(E)+O(\epsilon) \end{align*} which shows that the outer measure of $\bar{E}$ is less than the outer measure of $E$. I still don't know if this is fully rigorous or not though. It still has the problem that the argument for outer Lebesgue measure carries over by using $\ell_{j}'=\ell_{j}+\frac{\epsilon}{2^{j}}$ as each of the terms in the binomial expansion now are convergent infinite series and the result is still $O(\epsilon)$. Moreover, we would still also have $dist(Q_{j},Q'_{j})=\frac{\epsilon}{2^{j}}>0$. So I guess that basically proves there's something wrong with my logic. I just don't know how to fix it. (Update 2) So I have concluded there is nothing inherently wrong with my argument, just that it is incomplete. For one thing, you cannot conclude that an "$\epsilon$-close" countable covering of a set $E$ can be "$\epsilon$-fattened" to produce a cover of $\bar{E}$. The counter example is $[0,1]\cap\mathbb{Q}$. The reason for this somewhat peculiar fact is illustrated in this question I asked earlier today (Paradox as to Measure of Countable Dense Subsets?). 
But to put it summarily, countable coverings allow you to circumvent density arguments, in the sense that you can still cover $E$ while avoiding $\bar{E}$, and in fact do so on a set of positive outer measure; hence once $\epsilon$ is sufficiently small, the fattened cubes will still fail to cover $\bar{E}$. That this cannot happen in the finite case is made rigorous in the following (completely) redone solutions to (1) and (2). Actually, fattening of the original cover of $E$ is not even necessary, as it is shown that a finite cover of $E$ is necessarily a finite cover of $\bar{E}$, leading to a new characterization of outer Jordan measure involving closed boxes only. Moreover, we also obtain another surprising explanation as to why countable coverings can avoid this conclusion, the main idea being that countable unions of closed sets need not be closed. The same arguments apply in reverse for the case of inner Jordan measure of $E$ and $E^{\circ}$, where it is shown that any containment by $E$ is necessarily a containment by $E^{\circ}$, hence leading to a new definition of inner Jordan measure involving open boxes only. It is interesting to note that countable containments cannot avoid this conclusion (in contrast to the analogous situation for covers), and this stems from the fact that countable unions of open boxes are open. Incidentally, we conclude that nothing is gained by considering countable containments, hence why we do not consider a Lebesgue inner measure. I will formulate this all into a complete answer once I prove (3). Let $E\subset\mathbb{R}^{d}$ be a bounded set. (1) By definition of $m^{\star,(J)}$, there exists a collection of boxes $\{B_{j}\}_{j=1}^{N}$ such that both \begin{align*} E\subset\bigcup\limits_{j=1}^{N}B_{j} &&\text{and} &&\sum\limits_{j=1}^{N}|B_{j}|\leq m^{\star,(J)}(E)+\epsilon \end{align*} hold for every $\epsilon>0$. Now the definition of elementary measure implies $m(B_{j})=|B_{j}|=\bar{B_{j}}|=m(B_{j})$, and because $\bigcup_{j=1}^{N}\bar{B_{j}}\supset E,$ we may assume each $B_{j}$ is closed. But this implies that the cover is itself closed since $N$ is finite, e.g. $\bigcup_{j=1}^{N}B_{j}=\overline{\bigcup_{j=1}^{N}B_{j}}.$ Moreover, as $\bar{E}$ is the \emph{smallest} closed set which contains $E$, we have that $\{B_{j}\}_{j=1}^{N}$ is an ``$\epsilon$-close'' cover of $\bar{E}$ as well. It follows from monotonicity and the previous remarks that $$m^{\star,(J)}(E)\leq m^{\star,(J)}(\bar{E})\leq\sum_{j=1}^{N}|B_{j}|\leq m^{\star,(J)}(E)+\epsilon\leq m^{\star,(J)}(\bar{E})+\epsilon.$$ In particular, we obtain the estimate $$\Bigg|m^{\star,(J)}(E)-m^{\star,(J)}(\bar{E})\Bigg|\leq\epsilon,$$ and since $\epsilon$ was arbitrary, we conclude $m^{\star,(J)}(E)=m^{\star,(J)}(\bar{E})$ as required. (2) By definition of $m_{\star,(J)}$, there exists a collection of boxes $\{B_{j}\}_{j=1}^{N}$ such that both \begin{align*} E\supset\bigcup\limits_{j=1}^{N}B_{j} &&\text{and} &&\sum\limits_{j=1}^{N}|B_{j}|\geq m_{\star,(J)}(E)-\epsilon \end{align*} hold for every $\epsilon>0$. Now the definition of elementary measure implies $m(B_{j})=|B_{j}|=|B_{j}^{\circ}|=m(B_{j}^{\circ})$, and because $E\supset\bigcup_{j=1}^{N}B^{\circ}_{j}$, we may assume each $B_{j}$ is open. But this implies that the contained set is itself open, e.g. 
$\bigcup_{j=1}^{N}B_{j}=\left(\bigcup_{j=1}^{N}B_{j}\right)^{\circ}.$ Moreover, as $E^{\circ}$ is the \emph{largest} open set which is contained in $E$, we have that $\{B_{j}\}_{j=1}^{N}$ is an ``$\epsilon$-close'' set contained in $E^{\circ}$ as well. It follows from monotonicity and the previous remarks that $$m_{\star,(J)}(E)\geq m_{\star,(J)}(E^{\circ})\geq\sum_{j=1}^{N}|B_{j}|\geq m_{\star,(J)}(E)-\epsilon\geq m_{\star,(J)}(E^{\circ})-\epsilon.$$ In particular, we obtain the estimate $$\Bigg|m_{\star,(J)}(E)-m_{\star,(J)}(E^{\circ})\Bigg|\leq\epsilon,$$ and since $\epsilon$ was arbitrary, we conclude $m_{\star,(J)}(E)=m_{\star,(J)}(E^{\circ})$ as required. (Some Remarks) These results allow us to redefine $m^{\star,(J)}$ as the infimal elementary measure of a cover of $E$ by a finite number of closed boxes, and $m_{\star,(J)}$ as the supremal elementary measure of a containment by $E$ consisting of finite numbers of open boxes. These characterizations are are just as valid for countable coverings and containments. However, to obtain the conclusion from (1), finiteness of the cover is essential. The issue is that a countable union of closed boxes need not be closed, and so we can not conclude that the outer measures of $E$ and $\bar{E}$ coincide when countable covers are permitted. The set $A=[0,1]\cap\mathbb{Q}$ is a typical example of how the conclusion of (1) can fail when countable coverings are allowed. The Lebesgue outer measure of $A$ is $0$ since it can be covered by a countable union of degenerate boxes, while the Jordan outer measure of $A$ is $1$ since $\bar{A}=[0,1]$. On the other hand, no problem occurs when finite containments are upgraded to countable containments. This is because a countable union of open boxes is always open, and so the "Lebesgue inner measure" of $E$ must still coincide with the Lebesgue inner measure of $E^{\circ}$, since $E^{\circ}$ is the largest open set contained in $E$. Moreover, every open set $\mathscr{O}$ is the countable union of pairwise disjoint boxes, and by being a union of such boxes (as opposed to an intersection), we see that every finite subcollection of such boxes is contained in $\mathscr{O}$. It is easy to see then that the inner Jordan and Lebesgue measures of $\mathscr{O}$ coincide, and the conclusion from (2) implies that they agree for all sets. Incidentally, this demonstrates our lack of need for an inner Lebesgue measure. In fact, Littlewood's characterization of Lebesgue measurable sets being ``nearly open'' is basically equivalent to showing that the inner and outer Lebesgue measures are equal; in any case, clearly nothing is gained by upgrading finite containments to countable ones. REPLY [9 votes]: The following proof is intuitive enough to be summarized in a brief sketch: (i) A closed elementary set containing $E$ necessarily contains $\bar E$ (ii) An open elementary set contained in $E$ is necessarily contained in $E^\circ$ (iii) On the one hand, find elementary set $A,B$ such that $A\subset E\subset B$ and $B\backslash A$ serves as an elementary set containing $\partial E$; on the other hand, given an elementary set $C$ containing $\partial E$, find the corresponding elementary sets $A,B$. ${\bf Proof:}$ ${\bf (i)}$ By definition of outer measure, we can find an elementary set $B$ containing $E$. $B$ is by definition, the disjoint union of $N$ boxes $B_i$'s. It is apparent that $|B_i|=|B_i^\circ|=|\bar B_i|$. Therefore, $\bar B=\cup_{i=1}^N\bar B_i$ has the same Jordan measure as $B$. Also $\bar E\subset\bar B$. 
Therefore: $$ m^{\star,(J)}(\bar E)\leq m(\bar B)=m(B) $$ Take the infimum over all $B\in\varepsilon(\mathbb R^d)$ and we have: $$ m^{\star,(J)}(\bar E)\leq m^{\star,(J)}(E) $$ On the other hand, since $E\subset\bar E$, we have: $$ m^{\star,(J)}(E)\leq m^{\star,(J)}(\bar E) $$ and therefore $m^{\star,(J)}(E)=m^{\star,(J)}(\bar E)$; ${\bf (ii)}$ Similar to ${\bf (i)}$; ${\bf (iii)}$ If we have $E$ is Jordan measurable, there exists elementary sets $A,B$ such that $A\subset E\subset B$ and $m(B\backslash A)\leq\varepsilon>0$ for arbitrary small positive number $\varepsilon>0$. Recall $A,B$ is the disjoint union of finitely many boxes: $$ A=\cup_i B_{A,i},\qquad B=\cup_j B_{B,j} $$ Obviously: $$ \underbrace{\cup_i B_{A,i}^\circ}_{A^\circ}\subset\cup_i B_{A,i}\subset E\subset\cup_j B_{B,j}\subset\underbrace{\cup_j\bar B_{B,j}}_{\bar B} $$ w.l.o.g, we denote the two new elementary sets by $A^\circ\subset A$ and $\bar B\supset B$ to indicate they are open and closed respectively. It is easy to see that: $$ \bar E\subset\bar B $$ and $$ A^\circ\subset E\quad\Rightarrow\quad E^c\subset (A^\circ)^c\quad\Rightarrow\quad \overline{E^c}\subset (A^\circ)^c $$ since $(A^\circ)^c$ is closed. Therefore, $$ \partial E=\bar E\cap\overline{E^c}\subset \bar B\cap(A^\circ)^c =\bar B\backslash A^\circ $$ $\partial E$ has outer measure zero since $$ m(\bar B\backslash A^\circ)=m(B\backslash A)\leq\varepsilon $$ can be arbitrarily small. Next, assume we already have an elementary set $C$ with arbitrarily small measure and $\partial E\subset C$. Now by Boolean closure, $C^c$ is also elementary, which means $C^c$ is the disjoint union of finitely many boxes $B_k$'s. $$ C^c=\cup_kB_k $$ We claim that: $$ \text{either }B_k\subset E\text{ or }B_k\subset E^c $$ For otherwise, exists $x\in E, y\in E^c$ such that $x,y\in B_k$ for some $B_k$. Since $B_k$ is convex, the line segment $\overline{xy}$ is contained in $B_k$ too. Construct the function: $$ \gamma:[0,1]\to B_k\subset\mathbb R^d, \gamma(0)=x, \gamma(1)=y,\gamma(t)=x+t(y-x) $$ which is obviously continuous. Now look at the preimage of $\bar E$ and $\overline{E^c}$. We have: $$ \gamma^{-1}(\bar E)\cup\gamma^{-1}(\overline{E^c})=\gamma^{-1}(\bar E\cup\overline{E^c})=\gamma^{-1}(\mathbb R^d)=[0,1] $$ and $$ \gamma^{-1}(\bar E)\cap\gamma^{-1}(\overline{E^c})=\gamma^{-1}(\partial E)=\varnothing $$ since $B_k\subset C^c$ and $C^c\cap\partial E=\varnothing$. We have $[0,1]$ as the disjoint union of two non-empty (why?) closed sets $\gamma^{-1}(\bar E)$ and $\gamma^{-1}(\overline{E^c})$, which contradicts the fact that $[0,1]$ is connected. Now we are allowed to pick out all $B_{k}$'s in $C^c$ such that $B_{k}\subset E$ (denoted $B_{i,s}$'s) and simply discard all $B_{k}$'s outside $E$ (denoted $B_{o,s}$'s). Take the union of such boxes and call it $A$. To find $B$, simply take the disjoint union of $A$ with $C$: $B=A\cup C$. It covers $E$ since all the $B_i$'s we discarded are completely contained in $E^c$. Obviously $C=B\backslash A$. And we are done. ${\bf Remark:}$ The proof of ${\bf (iii)}$ is apparently motivated by a pictorial understanding...I am an engineer and this is what I do... Edit: A technical mistake in the proof. One should not take the complement of $C$ w.r.t. $\mathbb R^d$, but w.r.t. a closed and bounded box containing $C$, for otherwise, the jordan measure is not well defined. I could no longer change the gif (damn I lost my .AI)... 
Here is an animated gif I made today for your consideration.<|endoftext|> TITLE: How many strings of 8 English letters are there...? QUESTION [6 upvotes]: 1) That contain at least one vowel, if letters can be repeated? $26^8-21^8$ 2) That contain exactly one vowel, if letters can be repeated? $8\cdot 5\cdot 21^7$ 3) That start with an X and contain at least one vowel, if letters can be repeated? $1\cdot 26^7-1\cdot 21^7$ Assume only upper-cased letters are used. I'm just trying to intuitively understand what's going on here. Can anyone explain in a clear and concise manner? Thank you! REPLY [11 votes]: There are $26$ letters, of which $21$ are consonants and $5$ are vowels. 1) There are $26^8$ words in all, and $21^8$ of them contain only consonants; all others contain at least one vowel. 2) There are $8$ positions for the vowel, $5$ options for the vowel and $21^7$ options for the $7$ consonants. 3) Same as 1), except one letter is fixed so there are only $7$ left.<|endoftext|> TITLE: Solving trigonometric equations of the form $a\sin x + b\cos x = c$ QUESTION [20 upvotes]: Suppose that there is a trigonometric equation of the form $a\sin x + b\cos x = c$, where $a,b,c$ are real and $0 < x < 2\pi$. An example equation would be the following: $\sqrt{3}\sin x + \cos x = 2$ where $0 < x < 2\pi$.<|endoftext|> TITLE: In a free group two elements commute if and only if they are powers of a common element QUESTION [6 upvotes]: In other words, $uv = vu$ in $F_n$ if and only if $u=w^m$ and $v=w^n$ for some $w\in F_n$. I would like to prove this without making use of Nielsen-Schreier (every subgroup of a free group is free). We can always find reduced representations $u=t_1^{\epsilon_1}\cdots t_k^{\epsilon_k}$ and $v=s_1^{\eta_1}\cdots s_l^{\eta_l}$ and the statement $uv = vu$ transforms into $$t_1^{\epsilon_1}\cdots t_k^{\epsilon_k}\cdot s_1^{\eta_1}\cdots s_l^{\eta_l}\cdot t_k^{-\epsilon_k}\cdots t_1^{-\epsilon_1}\cdot s_l^{-\eta_l}\cdots s_1^{-\eta_1}\sim 0$$ where $\sim$ denotes the equivalence relation coming from setting $x\cdot x^{-1}\sim 0$. However, the arbitrariness of $u$ and $v$ makes it hard to go on from here. REPLY [5 votes]: I answer the question just to remove it from the list of unanswered questions. Of course, the simplest solution is probably to use the Nielsen-Schreier theorem: if $a$ and $b$ commute, then $\langle a,b \rangle$ is an abelian subgroup; but the only abelian free group is $\mathbb{Z}$, so $\langle a,b \rangle$ is cyclic. Otherwise, the result can be shown thanks to a combinatorial argument from the classical normal form in free groups. The proof is made by induction on $\mathrm{lg}(a)+\mathrm{lg}(b)$, and the argument follows the hint given by Derek Holt; the same proof can be found in Johnson's book, Presentations of Groups. The case where $\mathrm{lg}(a)$ or $\mathrm{lg}(b)$ belongs to $\{0,1\}$ is obvious, so let us suppose that $\mathrm{lg}(a), \mathrm{lg}(b) \geq 2$. First, we write $a$ and $b$ as reduced words on a free basis $$\left\{ \begin{array}{l} a= x_1 \cdots x_m \\ b= y_1 \cdots y_n \end{array} \right., \ n \leq m.$$ Then, we have the reduced product $$x_1 \cdots x_{m-r} y_{r+1} \cdots y_n =ab =ba = y_1 \cdots y_{n-s} x_{s+1} \cdots x_m.$$ Notice that $$m+n-2r-1= \mathrm{lg}(ab)= \mathrm{lg}(ba)= m+n-2s-1$$ implies $r=s$. Moreover, $0 \leq r \leq n$. Case 1: $r=0$, there is no cancellation in the product.
Then $y_i=x_i$ for $1 \leq i \leq n$ hence $$ a = \underset{=y_1 \cdots y_n=b}{\underbrace{ \left( x_1 \cdots x_n \right) }} \cdot \underset{:=u}{\underbrace{ \left( x_{n+1} \cdots x_m \right) }} = bu.$$ Notice that $u$ and $b$ commute: $$bu=a=b^{-1}ab= ub.$$ Therefore, the induction hypothesis applies, and it is sufficient to conclude. Case 2: $r=n$, the number of cancellations is maximal. Then $y_i=x_{m-i+1}^{-1}$ for $1 \leq i \leq n$ hence $$a^{-1} = \underset{=y_1 \cdots y_n=b}{\underbrace{ \left( x_m^{-1} \cdots x_{m-n+1}^{-1} \right) }} \cdot \underset{:=u}{\underbrace{ \left( x_{m-n+2}^{-1} \cdots x_m^{-1} \right) }} = bu.$$ You conclude that $b$ and $u$ commute and you apply the induction hypothesis. Case 3: $0 < r TITLE: Show that $\sum_{k=0}^n\binom{3n}{3k}=\frac{8^n+2(-1)^n}{3}$ QUESTION [10 upvotes]: The other day a friend of mine showed me this sum: $\sum_{k=0}^n\binom{3n}{3k}$. To find the explicit formula I plugged it into mathematica and got $\frac{8^n+2(-1)^n}{3}$. I am curious as to how one would arrive at this answer. My progress so far has been limited. I have mostly been trying to see if I can somehow relate the sum to $$\sum_{k=0}^{3n}\binom{3n}{k}=8^n$$ but I'm not getting very far. I have also tried to write it out in factorial form, but that hasn't helped me much either. How would I arrive at the explicit formula? REPLY [8 votes]: $\newcommand{\+}{^{\dagger}} \newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack} \newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,} \newcommand{\dd}{{\rm d}} \newcommand{\down}{\downarrow} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,{\rm e}^{#1}\,} \newcommand{\fermi}{\,{\rm f}} \newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{{\rm i}} \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\isdiv}{\,\left.\right\vert\,} \newcommand{\ket}[1]{\left\vert #1\right\rangle} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\pars}[1]{\left(\, #1 \,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\pp}{{\cal P}} \newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,} \newcommand{\sech}{\,{\rm sech}} \newcommand{\sgn}{\,{\rm sgn}} \newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}} \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert} \newcommand{\wt}[1]{\widetilde{#1}}$ $$ \mbox{Note that}\quad\sum_{k = 0}^{n}{3n \choose 3k} =\sum_{k = 0}^{\infty}{3n \choose 3k} $$ \begin{align} &\color{#c00000}{\sum_{k = 0}^{n}{3n \choose 3k}}= \sum_{k = 0}^{\infty}\oint_{\verts{z}\ =\ a\ >\ 1} {\pars{1 + z}^{3n} \over z^{3k + 1}}\,{\dd z \over 2\pi\ic} =\oint_{\verts{z}\ =\ a\ >\ 1}{\pars{1 + z}^{3n} \over z} \sum_{k = 0}^{\infty}\pars{1 \over z^{3}}^{k}\,{\dd z \over 2\pi\ic} \\[5mm]&=\oint_{\verts{z}\ =\ a\ >\ 1}{\pars{1 + z}^{3n} \over z} {1 \over 1 - 1/z^{3}}\,{\dd z \over 2\pi\ic} =\oint_{\verts{z}\ =\ a\ >\ 1} {z^{2}\pars{1 + z}^{3n} \over z^{3} - 1}\,{\dd z \over 2\pi\ic} \end{align} The integrand has three simple poles inside the contour: $\quad\ds{z_{m} \equiv \expo{2m\pi\ic/3}\,,\quad m = -1,0,1}$: \begin{align} &\color{#c00000}{\sum_{k = 0}^{n}{3n \choose 3k}}= \sum_{m = -1}^{1}\lim_{z \to z_{m}} \bracks{\pars{z - z_{m}}\,{z^{2}\pars{1 + z}^{3n} \over z^{3} - 1}} =\sum_{m = -1}^{1}{z_{m}^{2}\pars{1 + z_{m}}^{3n} 
\over 3z_{m}^{2}} \\[5mm]&={1 \over 3}\sum_{m = -1}^{1}\pars{1 + \expo{2m\pi\ic/3}}^{3n} ={1 \over 3}\sum_{m = -1}^{1}\expo{mn\pi\ic} \pars{\expo{-m\pi\ic/3} + \expo{m\pi\ic/3}}^{3n} \\[5mm]&={8^{n} \over 3}\sum_{m = -1}^{1} \pars{-1}^{mn}\cos^{3n}\pars{m\,{\pi \over 3}} \\[5mm]&={8^{n} \over 3}\bracks{\pars{-1}^{-n}\cos^{3n}\pars{-\,{\pi \over 3}} + 1 + \pars{-1}^{n}\cos^{3n}\pars{\pi \over 3}} ={8^{n} \over 3}\bracks{1 + 2\pars{-1}^{n}\pars{\half}^{3n}} \\[5mm]&={8^{n} \over 3}\bracks{1 + {2\pars{-1}^{n} \over 8^{n}}} \end{align} $$ \color{#66f}{\large\sum_{k = 0}^{n}{3n \choose 3k} ={8^{n} + 2\pars{-1}^{n} \over 3}} $$<|endoftext|> TITLE: Preparing for the Putnam QUESTION [6 upvotes]: I was wondering what resources people new to mathematics competitions could use for preparing for the William Lowell Putnam Competition(or just the Putnam). My background in mathematics is limited. I have recently started doing the book Abstract Algebra by Dummit and Foote and I am enjoying that book. I am yet to get a taste of real analysis though I have done single variable calculus rigorously . 1) How much mathematics does one need to learn before sitting for the Putnam? 2) What are the resources/books one could use? I may never even study in the USA but I wish to convince myself that I can solve the Putnam problems too! Thanks in advance! REPLY [6 votes]: You might start with this list of links. You will get a large number of useful hits if you Google Putnam practice problems. I particularly like the problem selection from the Berkeley course H90 that used to be run by Professor William Kahan. There are a number of other courses and training programs with a web presence. While it is not possible to duplicate the atmosphere of a course if you are working by yourself, they are the nearest one can get. One can do well on the Putnam with background that does not go beyond first or second year material (plus considerable problem solving experience). Many first year students have done very well.<|endoftext|> TITLE: Where is axiom of regularity actually used? QUESTION [29 upvotes]: Where is axiom of regularity actually used? Why is it important? Are there some proofs, which are substantially simpler thanks to this axiom? This question was to some extent provoked by Dan Christensen's comment: Would regularity ever be used in a formal development of, say, number theory or real analysis? I can't imagine it. I have to admit that I do not know other use of this axiom than the proof that every set has rank in cumulative hierarchy, and a few easy consequences of this axiom, which are mentioned in Wikipedia article. I remember seeing an introductory book in axiomatic set theory, which did not even mention this axiom. (And that book went through plenty of stuff, such as introducing ordinals, transfinite induction, construction of natural numbers.) Wikipedia article on Non-well-founded set theory links to Metamath page for Axiom of regularity and says: Scroll to the bottom to see how few Metamath theorems invoke this axiom. Based on the above, it seems that quite a lot of stuff can be done without this axiom. Of course, it's quite possible that this axiom becomes important in some areas of set theory which are not familiar to me, such as forcing or working without Axiom of Choice. (It might be difficult to define cardinality without AC and regularity, as mentioned here.) But even if the this axiom is important only for some advanced stuff - which I will probably never get to - I'd be glad to know that. 
REPLY [15 votes]: Here's an (IMO) interesting example. Surely you're familiar with the set-theoretic interpretation of ordered pairs given by $(x,y) = \{ \{ x \}, \{ x, y \} \}$. You may wonder, why not use $(x,y) = \{ x, \{ x, y\}\}$? We can... if we assume some amount of regularity. If there exists a set $S$ satisfying $S = \{ \{S, T\}, U \}$ with $T \neq U$, then we would have $$ (S, T) = \{ S, \{ S, T \} \} = \{ \{ \{ S, T \}, U \}, \{ S, T \} \} = ( \{ S, T \}, U ) $$ which contradicts the property of ordered pairs $(S, T) = (X, Y) \implies S = X \wedge T = Y$, however regularity forbids the existence of such a set $S$.<|endoftext|> TITLE: What is the explanation for similar decimal digits in values of Riemann zeta function with certain arguments close to one? QUESTION [5 upvotes]: In Mathematica I tried these values close to one as arguments for the Riemann zeta function: Zeta[1.000000000000010000000000000000000000000000000] Zeta[1.000000000000020000000000000000000000000000000] Zeta[1.000000000000040000000000000000000000000000000] Zeta[1.000000000000080000000000000000000000000000000] Zeta[1.000000000000160000000000000000000000000000000] Zeta[1.000000000000320000000000000000000000000000000] Zeta[1.000000000000640000000000000000000000000000000] N[EulerGamma, 30] Zeta[1.000000000000010000000000000000000000000000000^-1] Zeta[1.000000000000020000000000000000000000000000000^-1] Zeta[1.000000000000040000000000000000000000000000000^-1] Zeta[1.000000000000080000000000000000000000000000000^-1] Zeta[1.000000000000160000000000000000000000000000000^-1] Zeta[1.000000000000320000000000000000000000000000000^-1] Zeta[1.000000000000640000000000000000000000000000000^-1] N[1 - EulerGamma, 30] And got the output: 1.000000000000005772156649015336*10^14 5.000000000000057721566490153432*10^13 2.5000000000000577215664901535773*10^13 1.2500000000000577215664901538686*10^13 6.2500000000005772156649015445111*10^12 3.12500000000057721566490155616168*10^12 1.56250000000057721566490157946275*10^12 0.577215664901532860606512090082 -1.000000000000004227843350984679*10^14 -5.000000000000042278433509846860*10^13 -2.5000000000000422784335098470052*10^13 -1.2500000000000422784335098472965*10^13 -6.2500000000004227843350984787899*10^12 -3.12500000000042278433509849044046*10^12 -1.56250000000042278433509851374153*10^12 0.422784335098467139393487909918 So in the arguments above there are the powers of two in the decimal digits, and in the output there are the digits of the Euler gamma or Euler Mascheroni constant. What is the explanation for these similar decimal digits? I have looked at the series expansion of the zeta function but I did not understand why. REPLY [6 votes]: There's nothing mysterious going on here, and it has nothing to do with digits or bases or powers of $2$. The Laurent series of $\zeta(z)$ at $z=1$ is $$ \zeta(z)=\frac1{z-1}+\gamma + o(1)\;, $$ and this is exactly what you're seeing. It's not that the digits appear in some strange place; it's $\gamma$ itself, and the digits appear somewhere in a decimal expansion only because the numbers are being displayed in scientific notation. REPLY [4 votes]: Note that the "recurring digits" are actually always at the same position with respect to the real decimal point. So the numerical evidence is simply $$\zeta(1+\varepsilon) \approx \frac{1}{\varepsilon} + \gamma $$ for small $\varepsilon$. The next term seems to be on the order of $\varepsilon$ itself. 
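A quick numerical sketch of the same observation, for anyone without Mathematica at hand (this assumes Python with the third-party mpmath library, which is not part of the original question):

from mpmath import mp, mpf, zeta, euler

mp.dps = 50  # work with 50 significant digits

# zeta(1 + eps) - 1/eps should approach Euler's constant gamma as eps -> 0
for k in range(14, 20):
    eps = mpf(10) ** (-k)
    print(k, zeta(1 + eps) - 1 / eps)

print("gamma =", euler)

For eps = 10^(-k) the printed difference agrees with gamma to roughly k digits, consistent with the next term being of order eps.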
These are indeed the first terms of the Laurent series for $\zeta(s)$ around $s=1$.<|endoftext|> TITLE: Show that norm of matrix $A$ is given by the square root of the largest eigenvalue of $A^tA$ QUESTION [5 upvotes]: The norm is defined as $\|A\|=\sup\{ \|A v \| : \|v\|=1\}$. I want to show it is equal to the square root of the largest eigenvalue of $A^tA$. I do not know why it is an eigenvalue of the product $A^tA$ and not simply an eigenvalue of $A$. How to proceed? REPLY [5 votes]: The singular value decomposition of $A$ gives us orthogonal matrices $U, V$ and a diagonal matrix $S$ such that $$A = U S V^T$$ Since $U$ and $V$ are orthogonal, they preserve the Euclidean norm. Therefore, for every vector $x$ we have $$\|A x\| = \|S V^T x\|,$$ and since $V^T$ maps the unit sphere onto itself, $$\|A\|=\sup_{\|x\|=1}\|Ax\|=\sup_{\|y\|=1}\|Sy\|.$$ Since $S$ is the diagonal matrix containing the singular values of $A$ (which by definition are the square roots of the eigenvalues of $A^T A$), the $y$ which maximizes $\|S y\|$ is the unit vector $e_1 = (1, 0, \dots, 0)$, assuming the singular values in $S$ are sorted in descending order of magnitude. Let $s$ be the largest singular value; then $$\|A\| = \|S e_1\| = \|s e_1\| = s.$$ So the norm of $A$ is indeed the largest singular value $s$ of $A$, which is the square root of the largest eigenvalue of $A^T A$.<|endoftext|> TITLE: Is the maximum function of a continuous function continuous? QUESTION [10 upvotes]: Suppose $f(x)$ is continuous on the closed interval $[a,b]$. Define $m(x)=\max_{a\leq s\leq x}\, f(s)$, $a\leq x\leq b$. Is $m(x)$ necessarily continuous? Thank you. REPLY [6 votes]: Yes it is. HINT: Clearly $m(s)$ is monotonically non-decreasing on $[a,b]$. Thus, if it is discontinuous at some $x\in[a,b]$, it must have a jump discontinuity at $x$: either $\lim\limits_{s\to x^-}m(s)<m(x)$ or $\lim\limits_{s\to x^+}m(s)>m(x)$. Suppose the former, say $\lim\limits_{s\to x^-}m(s)=u<m(x)$; this is possible only if $f(x)=m(x)$. But then $\lim\limits_{s\to x^-}f(s)\le u<f(x)$, contradicting the continuity of $f$ at $x$; a jump from the right is ruled out in the same way. For a direct $\varepsilon$-$\delta$ argument: given $\varepsilon>0$, choose $\delta$ (by uniform continuity) such that $|f(x)-f(y)|\leq\varepsilon$ when $|x-y|\leq \delta$ and let $x\in [a,b]$. We have to show that $|m(x)-m(x+t)|\leq\varepsilon$ if $|t|\leq \delta$. We have for $t>0$ that $$m(x+t)=\max_{a\leq s\leq x+t}f(s)=\max\Big\{m(x),\max_{x\leq s\leq x+t}f(s)\Big\},$$ and for $t<0$ that $$m(x)=\max\Big\{m(x+t),\max_{x+t\leq s\leq x}f(s)\Big\}.$$ In the first case $m(x+t)\ge m(x)$, while $\max_{x\leq s\leq x+t}f(s)\le f(x)+\varepsilon\le m(x)+\varepsilon$, so $m(x+t)\le m(x)+\varepsilon$. In the second case $m(x)\ge m(x+t)$, while every $s\in[x+t,x]$ satisfies $|s-(x+t)|\le\delta$, so $\max_{x+t\leq s\leq x}f(s)\le f(x+t)+\varepsilon\le m(x+t)+\varepsilon$, whence $m(x)\le m(x+t)+\varepsilon$. In both cases $|m(x)-m(x+t)|\leq\varepsilon$.<|endoftext|> TITLE: Calculate the continued fraction of square root QUESTION [6 upvotes]: I was having difficulty understanding the algorithm to calculate the continued fraction expansion of a square root. I know the process is about repeatedly extracting the integer part and maintaining the quadratic irrational $\frac{m_n + \sqrt{S}}{d_n}$. But I don't understand the equation: $d_{n+1} = \frac{S - m_{n+1}^2}{d_n}$ Why is $S - m_{n+1}^2$ divisible by $d_n$? This case for example: $$\ \dfrac {1-\sqrt{5}}2=-1+\dfrac {3-\sqrt{5}}2$$ $$\frac 1{\dfrac {3-\sqrt{5}}2}=\frac 2{3-\sqrt{5}}=\frac {2(3+\sqrt{5})}{(3-\sqrt{5})(3+\sqrt{5})}=\frac {2(3+\sqrt{5})}{9-5}=\frac {3+\sqrt{5}}{2}=2+\frac {\sqrt{5}-1}{2}$$ If $S - m_{n+1}^2$ were not divisible by $d_n$, then a step like $\frac {2(3+\sqrt{5})}{9-5}=\frac {3+\sqrt{5}}{2}$ could instead produce something like $\frac{3 + 3\sqrt{5}}{2}$ and break the algorithm. So why doesn't this happen? REPLY [5 votes]: At the start we have (for $m=0$ and $d=1$): $$\sqrt{S}=\frac{\sqrt{S}+m}d=a+\frac{\sqrt{S}+m-da}d$$ (where $a$, $m$ and $d$ are integers). Suppose that $\ d$ divides $(S-m^2)\ $ (this is true for $d=1$ of course).
The fractional part $\displaystyle \frac{\sqrt{S}+m-da}d$ becomes : $$\frac{\sqrt{S}-da+m}d=\frac{S-(da-m)^2}{d\bigl(\sqrt{S}+da-m\bigr)}$$ The numerator $\ S-(da-m)^2=(S-m^2)+da(2m-da)$ will be divisible by $d$ (from our hypothesis). If we note $\ m':=da-m\ $ then the numerator divided by $d$ becomes $\ d':=\dfrac{S-(da-m)^2}d=\dfrac{S-m'^2}d$ and the next term to examine will be : $$\frac{\sqrt{S}+da-m}{\frac{S-(da-m)^2}d}=\frac{\sqrt{S}+m'}{d'}$$ But the conditions are the same as at the start : $\ d'$ divides $(S-m'^2)\ $ (the fraction is the previous $d$ !) and we may continue our rewriting : $$\frac{\sqrt{S}+m'}{d'}=a'+\frac{\sqrt{S}+m'-d'a'}{d'}$$ This recurrence shows that these conditions will hold at each iteration. (To be complete let's add that at each step $\ a:=\left\lfloor\dfrac{\sqrt{S}+m}d\right\rfloor$)<|endoftext|> TITLE: The usage of commutative diagrams QUESTION [7 upvotes]: I'm starting to understand what commutative diagrams are, but I'm not sure about their purpose, what is their intended use and what kind of problems are solvable with them. By "solvable with a commutative diagram" I mean some fancy graphical reasoning, redrawing etc. For example given only the commutative diagram for the exterior derivative $$ \newcommand{\ra}[1]{\kern-1.5ex\xrightarrow{\ \ #1\ \ }\phantom{}\kern-1.5ex} \newcommand{\ras}[1]{\kern-1.5ex\xrightarrow{\ \ \smash{#1}\ \ }\phantom{}\kern-1.5ex} \newcommand{\da}[1]{\bigg\downarrow\raise.5ex\rlap{\scriptstyle#1}} \begin{array}{c} \Omega^k(N) & \ra{f^*} & \Omega^k(M) \\ \da{d} & & \da{d} \\ \Omega^{k+1}(N) & \ras{f^*} & \Omega^{k+1}(M) \\ \end{array} $$ is it even possible to tell that $d$ is a derivative, that is it is linear and the appropriate Leibniz rule holds? Another example is my own, I may have done it totally wrong. Given two vector spaces $V$ and $W$ (possibly of the same dimension) with scalar products, $f$ being a morphism, is it possible to prove that the following diagram commutes: $$ \newcommand{\ra}[1]{\kern-1.5ex\xrightarrow{\ \ #1\ \ }\phantom{}\kern-1.5ex} \newcommand{\ras}[1]{\kern-1.5ex\xrightarrow{\ \ \smash{#1}\ \ }\phantom{}\kern-1.5ex} \newcommand{\da}[1]{\bigg\downarrow\raise.5ex\rlap{\scriptstyle#1}} \begin{array}{c} V & \ra{f} & W \\ \da{h} & & \da{h} \\ V & \ras{f} & W \\ \end{array} $$ only for $h$ of a certain form, which I guess is $$h(v) = \lambda(v^2) v$$ where $\lambda$ is some arbitrary function? Is it a valid commutative diagram? REPLY [5 votes]: A commutative diagram is a statement. In the case of your last diagram its claims that for all $v \in V$ $$ f(h_V (v)) = h_W (f(v)) \tag{1} $$ where $h_{V} \colon V \to V$ is defined as $$h_V (v) = \lambda ( \langle v,v \rangle_V ) v $$ where $\lambda$ is an arbitrary function $\lambda \colon \Bbb R \to \Bbb R$. Substituting the definition of $h$ into (1) we have $$ f\Big(\lambda( \langle v,v \rangle_V) v\Big) = \lambda (\langle f(v),f(v) \rangle_W) f(v) \tag{2} $$ Assuming that f is an isometry, that is f is linear and satisfies $$\langle f(v),f(v) \rangle_W = \langle v,v \rangle_V $$ so (2) becomes $$ \lambda \langle v,v \rangle_V f(v) = \lambda \langle v,v \rangle_V f(v) $$ Since we require $v$ be an arbitrary element of $V$ we may conclude that $$ f(v) = f(v) $$ for all $v \in V$. Thus, the statement of your second diagram is true. Edit. 
I corrected the above calculation to show that $\lambda$ can be an arbitrary real function, not just multiplication by a scalar, as I had erroneously assumed initially.<|endoftext|> TITLE: Covering $\mathbb{R}^n$ by countably many lower dimensional pieces? QUESTION [6 upvotes]: I would like to know if it is possible to cover $\mathbb{R}^n$ by countably many immersed submanifolds of dimension less than $n$. A similar version is whether it is possible to cover $\mathbb{C}^n$ by countably many analytic subsets of lower dimension. The motivation is that an exercise I am working on involves proving a statement being true for a generic lattice, which seems to invoke statements of the sort above, but I am not sure how I can prove them. Thanks! REPLY [5 votes]: Here's the idea of a proof: (1) embedded submanifolds of lower dimension are nowhere dense; (2) an immersed submanifold is a countable union of embedded submanifolds; (3) the Baire category theorem. Another, similar approach would be to argue that embedded submanifolds of lower dimension have Lebesgue measure zero.<|endoftext|> TITLE: How to evaluate this integral $\int_{-\infty}^{+\infty}\frac{x^2e^x}{(1+e^x)^2}dx$? QUESTION [10 upvotes]: I need to evaluate $$\int_{-\infty}^{+\infty}\frac{x^2e^x}{(1+e^x)^2}dx$$ I think the answer is $\frac{\pi^2}{3}$, but I'm not able to calculate it. REPLY [6 votes]: Slightly more generally, consider $$J(P,R) = \oint_\Gamma \frac{P(z)\; e^z\; dz}{(1+e^z)^2}$$ where $P$ is a polynomial and $\Gamma$ is the positively oriented rectangular contour from $-R$ to $R$ to $R+2\pi i$ to $-R+2\pi i$. Then $J(P,R) = 2 \pi i \text{Res}(P(z)\; e^z/(1+e^z)^2,z=\pi i) = - 2 \pi i P'(\pi i)$. On the other hand, it is easy to see that the contributions to the integral from the vertical sections go to $0$ as $R \to \infty$, and $$ \lim_{R \to \infty} J(P,R) = \int_{-\infty}^\infty \frac{(P(x) - P(x+2\pi i)) e^x}{(1+e^x)^2}\ dx$$ Now $P(x) - P(x + 2 \pi i) = x^2$ for $P(z) = -\dfrac{\pi i}{3} z + \dfrac{1}{2} z^2 + \dfrac{i}{6 \pi} z^3$, which makes $- 2 \pi i P'(\pi i) = \dfrac{\pi^2}{3}$.<|endoftext|> TITLE: Compactness in $C_0(\mathbb{R})$ QUESTION [6 upvotes]: Is there a compact set in $C_0(\mathbb{R})$ (continuous functions vanishing at infinity) that contains the unit sphere of $C_0^1(\mathbb{R})$ (differentiable functions in $C_0(\mathbb{R})$ such that the derivative is also in $C_0(\mathbb{R})$)? The norm in the Banach space $C_0^1(\mathbb{R})$ being defined as $\|f\|_1:=\max(\|f\|,\|f'\|)$. REPLY [3 votes]: Let $\phi$ be a smooth function with support in $[0,1]$, and $\phi=1$ on $(1/4,3/4)$. Consider the sequence $f_n(x):=\frac{\phi(x+n)}{\lVert \phi\rVert+\lVert \phi'\rVert}$. Then $\{f_n\}$ is a sequence which lies in the unit ball of $C^1_0$. But for $m\neq n$ the functions $f_m$ and $f_n$ have essentially disjoint supports, so $\lVert f_m-f_n\rVert_{\infty}=\frac {\lVert \phi\rVert}{\lVert \phi\rVert+\lVert \phi'\rVert}>0$, hence we can't find a compact set $K$ of $C_0(\Bbb R)$ containing the unit ball of $C^1_0(\Bbb R)$.<|endoftext|> TITLE: How to prove the sequence $a_0, a_1, \ldots$ converges iff $a_0, a, a_1, a \ldots$ converges? QUESTION [6 upvotes]: Problem: Prove that the sequence $a_0, a_1, a_2, \ldots$ converges to $a$ if and only if the sequence $a_0, a, a_1, a, a_2, a, a_3, \ldots$ converges. Here is my approach: $\Rightarrow$: Since $a_0, a_1, a_2, \ldots$ converges to $a$, by definition of limit, for every $\epsilon > 0$, $\exists N \in \mathbb{N}$ such that $|a_n - a| < \epsilon$ for all $n > N$.
Now consider the subsequence $$a, a, a, a, \ldots$$ We have that $|a - a| < \epsilon, \, \, \forall \epsilon > 0$, thus $a, a, a, a \ldots$ also converges to $a$. Hence, $a_0, a, a_1, a, a_2, a, a_3, \ldots$ converges. $\Leftarrow$: Suppose that $a_0, a, a_1, a, a_2, a, a_3, \ldots$ converges to $L$, $L \neq \pm \infty$, by definition of limit, for every $\epsilon > 0$, $\exists N \in \mathbb{N}$ such that for all $n > N$, then $|a_n - a| < \epsilon$, thus there must be a sequence $$a_{N+1}, a, a_{N+2}, a, a_{N+3}, a, a_{N+4}, \ldots$$ that is getting closer and closer to $L$. But there is always an alternating $a$ between each $a_i$ and $a_{i+1}$, so $L = a$ otherwise $|a_n - L| < \epsilon$ would make no sense. Therefore $a_0, a_1, a_2, \ldots$ converges to $a$. However I still feel it's not complete because all my reasons were based on the definition of infinite sequence. I think there must be a way to give a strong argument for this problem. I wonder if anyone could give me a hint/suggestion on my solution? Thanks. REPLY [5 votes]: Suppose for every $\epsilon>0$, there is a positive integer $N$ such that for every $n>N, |a_n-a|<\epsilon$. Let $(b_n)$ be the sequence $(a_0,a,a_1,a,\ldots)$. Then for every $n>2N+1$, it is clear that $|b_n-a|<\epsilon$. Conversely, if $(b_n)$ converges, then all subsequences must converge to the same limit. Since $(a,a,a,\ldots)$ converges to $a$, the sequence $(a_n)$ does as well.<|endoftext|> TITLE: Prove: Convergent sequences are bounded QUESTION [34 upvotes]: I don't understand this one part in the proof for convergent sequences are bounded. Proof: Let $s_n$ be a convergent sequence, and let $\lim s_n = s$. Then taking $\epsilon = 1$ we have: $n > N \implies |s_n - s| < 1$ From the triangle inequality we see that: $ n > N \implies|s_n| - |s| < 1 \iff |s_n| < |s| + 1$. Define $M= \max\{|s|+1, |s_1|, |s_2|, ..., |s_N|\}$. Then we have $|s_n| \leq M$ for all $n \in N$. I do not understand the defining $M$ part. Why not just take $|s| + 1$ as the bound, since for $n > N \implies |s_n| < |s| + 1$? REPLY [36 votes]: $|s|+1$ is a bound for $a_n$ when $n > N$. We want a bound that applies to all $n \in \mathbb{N}$. To get this bound, we take the supremum of $|s|+1$ and all terms of $|a_n|$ when $n \le N$. Since the set we're taking the supremum of is finite, we're guaranteed to have a finite bound $M$. REPLY [18 votes]: Because you want to be sure that the bound is large enough to ensure that $|s_n|\le M$ for all $n\in\Bbb N$, not just for all $n>N$. Taking $M\ge|s|+1$ ensures that the only possible exceptions to $|s_n|\le M$ are $s_1,\dots,s_N$, and taking $M\ge\max\{|s_1|,\dots,|s_N|\}$ takes care of these as well.<|endoftext|> TITLE: parallel vectors along a curve QUESTION [5 upvotes]: What does it mean for a vector field X(t) to be parallel along a curve, gamma(t)? and how can we show that if X(t) is parallel along gamma(t), then |X(t)| is constant? Thanks REPLY [8 votes]: On a Riemannian manifold $M$, one has a notion of "parallel transport" defined along any curve $\gamma: [a,b] \to M$: given any tangent vector $X(a)$ based at the point $\gamma(a)$, one obtains a family of tangent vectors $X(t)$ ($t \in [a,b]$) with $X(t)$ based at $\gamma(t)$. Intuitively, the idea is that $X(t)$ is "the same vector" as $X(a)$, but moved from $\gamma(a)$ to $\gamma(b)$. (So $X(t)$ is "parallel" to $X(a)$, whence the name.) The actual definition is somewhat involved, using the concept of the Levi--Civitta connection, as briefly discussed in Berci's answer. 
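If it helps to see the definition in action before worrying about the general machinery, here is a small numerical sketch of my own (not taken from Spivak or the other texts mentioned): it transports a vector along a circle of latitude on the round unit 2-sphere by integrating the parallel-transport equations in spherical coordinates, and prints the length of the vector before and after, previewing the property discussed next.

```python
import math

# Parallel transport along the latitude circle theta = theta0, phi = t on the unit sphere.
# In the coordinate frame (d/dtheta, d/dphi) the transport equations reduce to
#   dX_theta/dphi =  sin(theta0) cos(theta0) * X_phi
#   dX_phi/dphi   = -cot(theta0) * X_theta
theta0 = math.pi / 3
X_theta, X_phi = 1.0, 0.5        # components of the vector being transported
steps = 400000
dphi = 2 * math.pi / steps       # one full trip around the latitude circle

def length(Xt, Xp):
    # Riemannian length of the vector: the round metric is diag(1, sin(theta)^2)
    return math.sqrt(Xt**2 + (math.sin(theta0) * Xp)**2)

print("initial length:", length(X_theta, X_phi))
for _ in range(steps):
    dXt = math.sin(theta0) * math.cos(theta0) * X_phi * dphi
    dXp = -(math.cos(theta0) / math.sin(theta0)) * X_theta * dphi
    X_theta, X_phi = X_theta + dXt, X_phi + dXp
print("final length:  ", length(X_theta, X_phi))   # equal up to the error of this crude Euler scheme
print("final vector:  ", (X_theta, X_phi))         # rotated relative to the start: the holonomy of the loop
```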
One property of parallel transport is that, for any value of $t$, the parallel transport map $X(a) \mapsto X(t)$ from the tangent space at $\gamma(a)$ to the tangent space at $\gamma(t)$ is an isometry of inner product spaces, and in particular preserves lengths. Thus $|X(t)|$ is constant along the curve $\gamma$. If all this is unfamiliar to you, you will need to learn the basics of the Levi--Cevitta connection and related ideas. This is a topic that is notoriously complicated for a beginner to learn, since the presentations often emphasize techincal precision over intuitive clarity (and the wikipedia page on parallel transport linked to in Berci's answer doesn't seem to deviate from this general approach). For myself, I first learned this material from Spivak's differential geometry books (I think the second volume is the relevant one here); they are long, but I think he does a good job of emphasizing the intuitive meaning of things. This answer of mine might also help with the intuition.<|endoftext|> TITLE: Which are "big theorems" of descriptive set theory? QUESTION [7 upvotes]: Question: If one were to fully understand 10 theorems in DST, or 15,20,25,30 theorems, which ones would be the most important to understand in order to work towards an understanding of descriptive set theory (viewed from the boldface side of things)? Specifically, I mean theorems which are in Kechris, Moschovakis, Srivastava. I know that a lot of knowledge goes into understanding the "big theorems" and this will have to be obtained in the journey. My point: I'm working through Kechris and would like some sort of guide posts to help me know where I am in the subject. I would like the theorems to be from these books and not any theorems from current literature. Also, are there any books that can help me in the subject besides the 3 I mentioned above? REPLY [3 votes]: I would like to add on Trevor's list: Gale-Stewart theorem (or determinacy of any simple-enough class). Borel determinacy. $\Sigma^1_1$ determinacy implies $0^\#$. Solovay's construction of a model where all sets are Lebesgue measurable (while from a descriptive point of view it is not the most exciting theorem, it is an important proof). I'm not sure whether or not it's too far, but in a yearly course I'd definitely expect to hear something about it: The relations between Woodin cardinals and Projective Determinacy It is somewhat unclear to me how far do you want to go with this. Do you want to end up knowing? I would consider the projective determinacy equivalence with Woodin cardinals (and the natural extension of AD with infinitely many of those) quite an advanced theorem, but maybe you would like to go beyond that. Maybe you would like to know stationary tower forcing; and theorems about the absoluteness of the theory of $L(\mathbb R)$. That really depends on you. As for references, I would also suggest Miller's book: Descriptive Set Theory and Forcing: how to prove theorems about Borel sets the hard way. Lecture Notes in Logic 4(1995), Springer-Verlag. Which one can find for free on his homepage or on ArXiv.<|endoftext|> TITLE: Is every model of modular arithmetic either even or odd? QUESTION [17 upvotes]: Modular Arithmetic (MA) has the same axioms as first order Peano Axioms (PA) except $\forall x (Sx \ne 0)$ is replaced with $\exists x(Sx = 0)$. (http://en.wikipedia.org/wiki/Peano_axioms#First-order_theory_of_arithmetic) MA has arbitrarily large finite models based on modular arithmetic. 
All finite models of MA have either an even or odd number of elements. I call a model of MA "even" if it satisfies either of these two sentences: E1) $\exists x(x \ne 0 \land x+x = 0)$ E2) $\forall x(x+x \ne S0)$ A model of MA is odd if it satisfies either of: O1) $\forall x(x = 0 \lor x+x \ne 0)$ O2) $\exists x(x+x = S0)$ We can use compactness to prove MA has infinite "even" size models by adding the even definitions above as axioms. We can similarly prove there are infinite "odd" size models of MA. Some infinite sets, like the integers, are both even and odd. The integers are not the basis for a model of MA. For example, the four square theorem (every number is the sum of at most four squares) is a theorem of both MA and PA. The four square theorem is false in the integers. It has been conjectured the complex numbers are a basis for a model of MA. If so, the complex numbers would be an "odd" model of MA. My question is whether every model of MA must be exclusively even or exclusively odd? Are the following statements theorems of MA? $$\exists x(x \ne 0 \land x+x = 0) \ \overline{\vee}\ \exists x(x+x = S0)$$ $$\forall x(x+x \ne S0) \ \overline{\vee}\ x(x = 0 \lor x+x \ne 0)$$ I included the ring theory tag because all of the axioms of ring theory can be derived from the axioms of MA. Every model of MA is a commutative ring with unity. I have found that the 1-element model of MA (the trivial ring) can cause a lot of problems in proofs. I would be happy to prove these statements are true for all models with two or more elements. REPLY [4 votes]: Posted to liberate this question from its Unanswered status. Emil Jeřábek answered this question satisfactorily on MathOverflow.<|endoftext|> TITLE: What are the epimorphisms in the category of Hausdorff spaces? QUESTION [20 upvotes]: It appears to be the case that the epimorphisms in $\text{Haus}$ are precisely the maps with dense image. This is claimed in various places, but a comment on my blog has made me doubt the source I got my proof from (Borceux). Borceux's argument crucially uses the following result: If $A \subset X$ is a closed subspace of a Hausdorff space $X$, then the quotient $X/A$ is Hausdorff. This appears to be false. As far as I can tell, if $X/A$ is Hausdorff, then $A$ and points in $X$ not in $A$ must be separated by open neighborhoods in $X$. But if this is true for every closed subspace $A$ of $X$, then $X$ is necessarily regular, and there are examples of Hausdorff spaces that aren't regular. So: is it still true that the epimorphisms are precisely the maps with dense image? If so, what is a correct proof of this? REPLY [5 votes]: This should be a comment rather than an answer, but I don't have enough rep. HTop is actually the largest subcategory of Top closed under finite limits (as computed in Top) where all maps with dense image are epi: If $X$ is not Hausdorff then the equalizer of the projections $\pi_{1},\pi_{2}:X\times X\rightarrow X$, which is just the diagonal $\delta:X\rightarrow X\times X$, is not closed. (Recall that a space is Hausdorff iff the diagonal is closed.) Let $C$ denote the closure of the diagonal in $X\times X$, let $d$ denote the factorization of $\delta$ through $C$, and let $p_{1}$ and $p_{2}$ denote the restrictions of $\pi_{1}$ and $\pi_{2}$ to $C$ respectively. Then the image of $X$ is dense in $C$ and $p_{1}\circ d = p_{2} \circ d$, but $p_{1}\neq p_{2}$, so $d$ is not epi. 
This shows the fact Andy mentions in the comments, that equalizers are closed subspaces in HTop, is essential.<|endoftext|> TITLE: Relationship between rate of convergence and order of convergence QUESTION [7 upvotes]: What is the difference between rate of convergence and order of convergence? Have they any relationship to each other? For example could i have two sequences with the same rates of convergence but different orders of convergence, and vice versa? REPLY [2 votes]: The order of convergence is one of the primary ways to estimate the actual rate of convergence, the speed at which the errors go to zero. Typically the order of convergence measures the asymptotic behavior of convergence, often up to constants. For example, Newton's method is said to have quadratic convergence, so the method has order 2. However, the true rate of convergence depends on the problem, the initial value taken, etc, and is typically impossible to quantify exactly. The order simply estimates this rate in terms of polynomial behavior, typically. The order of convergence doesn't tell you everything. A numerical integration scheme with step size $h$ could have cubic order of convergence, so the errors go as $O(h^3)$, but the true error could be $100000h^3 + \ldots$, which would mean that for many practical problems the rate of convergence is actually quite slow.<|endoftext|> TITLE: Proving q-binomial identities QUESTION [9 upvotes]: I was wondering if anyone could show me how to prove q-binomial identities? I do not have a single example in my notes, and I can't seem to find any online. For example, consider: ${a + 1 + b \brack b}_q = \sum\limits_{j=0}^{b} q^{(a+1)(b-j)}{a+j \brack j}_q$ I haven't made much progress on this one, but here's one that I have managed to get something out of: ${2n \brack n}_q = \sum\limits_{k=0}^{n} q^{k^{2}}{n \brack k}_q$ Using the q-binomial theorem from my notes, which is as follows: $(1+qx)(1+q^{2}x)...(1+q^{n}x) = \sum\limits_{k=0}^{n} q^{k(k+1)/2}{n \brack k}_q x^{k}$, I have managed to show that the coefficient of $x^{n}$ is equal to: $\sum\limits_{k=0}^{n} q^{(2k^{2} - 2nk + n^{2} + n)/2} {n \brack k}_q {n \brack n-k}_q$, which is when I was working on the right hand side of the identity. In order to get here, I considered the product of $(1+qx)...(1+q^{n}x)(1+qx)...(1+q^{n}x)$, then tried obtaining the coefficient of $x^n$, as one would in the ordinary binomial proof. I've been trying to mimic the proofs of the regular binomial counterparts of these identities but without much luck. Help would be appreciated, as I have a midterm exam coming up soon. Thanks :) REPLY [4 votes]: $\newcommand\gauss[2]{\genfrac[]0{}{#1}{#2}_q}$Here's a straightforward implementation of the method I gave in a hint. The natural combinatorial interpretation of a Gaussian binomial coefficient $\gauss {a+b}b$ is that it counts the lattice paths from the origin to $(a,b)$ (where each step advances one of the two coordinates) with as weight the number of squares above the path, within the rectangle with the origin and $(a,b)$ as diagonally opposite corners. Counting with weight means summing over the indicated set the monomials $q^k$, where $k$ is the weight of the element. (I'm using Cartesian coordinates to tell which side is "above" the path.) Now $\sum_{j=0}^b\binom{a+j}j$ counts the paths from the origin to one of the points $(a,j)$ for $0\leq j\leq b$, while $\binom{a+1+b}b$ counts the paths from the origin to $(a+1,b)$. 
Now it is not hard to see why $$ \binom{a+1+b}b = \sum_{j=0}^b\binom{a+j}j, $$ as for every path from the origin to $(a+1,b)$ there is a unique $j$ for which it contains a step $(a,j)\to(a+1,j)$, and once this $j$ is known, the path is entirely determined by the way it passes from the origin to $(a,j)$, because after arriving at $(a+1,j)$ it has no choice but to go straight up. Now the Gaussian binomial coefficient $\gauss{a+1+b}b$ on the left tells us we want to attach as weight to such paths the number of squares above it, within the rectangle with sides $a+1$ and $b$. On the right the coefficient $\gauss{a+j}j$ counts the squares above the path within the recatangle with sides $a$ and $j$, and since the path makes a horizontal step $(a,j)\to(a+1,j)$, this number is equal to the number of squares above the (extended) path within the rectangle with sides $a+1$ and $j$. But then the path goes straight up for $b-j$ steps, each of which has $a+1$ squares to its left which are also above the path, and within the rectangle with sides $a+1$ and $b$. So we must multiply the terms coming from $j$ by $q^{(a+1)(b-j)}$ to account for the extra squares counted on the left, and this is exactly what your $q$-formula says.<|endoftext|> TITLE: Triangle inequality for subtraction? QUESTION [34 upvotes]: Why is $|a - b| \geq|a| - |b|$? REPLY [10 votes]: The length of any side of a triangle is greater than the absolute difference of the lengths of the other two sides: $$||a|-|b||\leq |a-b|$$ Here is a proof: $$|a+(b-a)|\leq |a|+|b-a|$$ and, (1) $$|a-b|\geq |a|-|b|$$ Interchanging $a$ and $b$, we get also (2) $$|a-b|\geq |b|-|a|$$ Combining (1) and (2) we get our desired result.<|endoftext|> TITLE: Generously Feasible? QUESTION [7 upvotes]: In my machine learning class I have been provided a weight vector that has the property that it is generously feasible ? Formally, what does generously feasible mean? I can't seem to find a definition? REPLY [3 votes]: If the weight vector in the current iteration is in the region between the hyperplane and the magnitude of input vector, i.e. $\vec{w^t_x} \: \epsilon \: [ \langle \vec{w_{x}},\vec{x} \rangle , |\vec{x}| ]$, where $\langle \vec{w_{x}},\vec{x} \rangle$ is the hyperplane, then, since the perceptron adds $\vec{x}$ or $-\vec{x}$ to the weights each iteration, the weight vector will oscillate around the hyperplane. Hence for the algorithm to terminate with a solution, it should be allowed to accept a solution in this feasible space, hence called the "generously feasible" space.<|endoftext|> TITLE: Basics of Haar measure QUESTION [7 upvotes]: Suppose $G$ is a locally compact group. Then $G$ has a left-invariant measure $dg$, say, which means that $$\int f (hg) dg = \int f(g) fg$$ for any test function integrable on $G$. The left-invariant measure is unique up to a positive constant multiple; therefore, $$\int f (hg) dg = \delta(h) \int f(g) fg,$$ where $\delta(h) > 0$ depends only on $h$ because $dgh^{-1}$ is another left-invariant measure. The factor $\delta(h)$ is called the modular function of $G$. Clearly $\delta : G \to \mathbb{R}^+$ is a group homomorphism, and one also shows.... I feel totally confused about the sentence "therefore, ... because $dgh^{-1}$ is another left-invariant measure." What is the reason for "therefore"? Why is $dgh$ a left-invariant measure? (It seems right multiplication...) Also confused about why $dgh^{-1}$ is a left-invariant measure and why because of this fact, $\delta(h)>0$ depends only on $h$. 
Hope someone could explain it in details. Thanks a lot! REPLY [3 votes]: A good place to start is to make sure the definitions of things are clear. A measure $\mu$ on $G$ is left invariant if for every test function $f$ and every $h\in G$ one has $\int f(hg)\,d\mu(g) = \int f(g)\,d\mu(g)$. If $dg$ is the Haar measure on $G$ and $h_0\in G$ is a fixed element, then the measure $dgh_0^{-1}$ is by definition given by $\int f(g)\,dgh_0^{-1} := \int f(gh_0)\,dg$ for all test functions $f$. To show the measure $dgh_0^{-1}$ is left-invariant for any fixed $h_0\in G$, we must check that the condition in definition (1) holds. For any $h\in G$ and any test function $f$ that $$\int f(hg)\,dgh_0^{-1} := \int f(hgh_0)\,dg = \int f(gh_0)\,dg := \int f(g)\,dgh_0^{-1}.$$ The first and third equalities are by definition, and the second is because $dg$ is left invariant. This proves $dgh_0^{-1}$ is left invariant. It is a fact (assumed in the problem) that any left invariant positive measure $\mu$ on $G$ is a multiple of the Haar measure, i.e., $\mu = \delta dg$ for some $\delta>0$ depending on $\mu$. We have shown that $dgh_0^{-1}$ is left invariant for any fixed $h_0\in G$, so there is a $\delta>0$ depending on $h_0$ such that $dgh_0^{-1} = \delta dg$.<|endoftext|> TITLE: Start with a topological group, take the meet of the two uniformities, and take the topology. Is the result again a topological group? QUESTION [5 upvotes]: And what else can be said, if so? In more detail: Say $(G,\mathscr{T})$ is a topological group. It has a left uniformity $\mathscr{L}$ and a right uniformity $\mathscr{R}$. (It also has a two-sided uniformity $\mathscr{U}$, which is the join of the two.) Now, uniformities on a given set form a complete lattice, so we can also consider the meet of the two, $\mathscr{V}$. However, the meet of two uniformities that yield the same topology does not necessarily again yield the same topology, so it's possible that $\mathscr{T}'$, the topology coming from $\mathscr{V}$, is coarser than our original topology $\mathscr{T}$. (Obviously, this does not happen if the group is balanced, i.e. $\mathscr{L}=\mathscr{R}$; it also does not happen if $\mathscr{T}$ is locally compact, since the meet of two uniformities yielding the same locally compact topology does again yield the same topology. I think it also can't happen if $G$ embeds in a locally compact group, but I didn't work out all the details there. Actually, I don't know an actual case where this does happen, so I guess a first question I can ask is, are there any actual examples of this?) So my question is, is $(G,\mathscr{T}')$ again a topological group? Obviously inversion is continuous, since $\mathscr{V}$ makes inversion uniformly continuous, but it's not clear what would happen with multiplication. If it is a topological group, then we can ask things like, how does $\mathscr{V}$ compare to $\mathscr{L}'$, $\mathscr{R}'$, $\mathscr{U}'$, and $\mathscr{V'}$? (Well, obviously it's coarser than the last of these.) And considering $\mathscr{T} \mapsto \mathscr{T}'$ as an operation on group topologies on $G$, what happens when we iterate it? When we iterate it transfinitely? REPLY [3 votes]: This has since been answered over on MathOverflow by Todd Eisworth and Julien Melleray. The meet of these two uniformities has a name, the Roelcke uniformity, and it generates the original topology. It can be described quite simply, as the uniformity generated by the entourages $\{ (x,y): x\in VyV\}$ for $V$ a neighborhood of the origin. 
More information can be found in the book Topological Groups and Related Structures by Arhangel'skii and Tkachenko.<|endoftext|> TITLE: How many ways to merge N companies into one big company: Bell or Catalan? QUESTION [10 upvotes]: There's a famous interview question variously credited to Microsoft, Google and Yahoo: Suppose you have given N companies, and we want to eventually merge them into one big company. How many ways are there to merge them? Assuming you can merge as many companies as you like in a single step, I thought this boils down to "find the number of partitions of a set with N elements", in which case the answer is the Bell number $B_{n}$. This can be computed with this handy recursion cribbed shamelessly from Wikipedia: $B_{n+1}=\sum_{k=0}^{n}{{n \choose k}B_k}$ $1, 1, 2, 5, 15, 52, 203...$ And you have to substract one since you're already starting from one of the possible sets: $B_{2}=2$, but there's only one way to combine A and B into AB. However, there are a lot of sources on the net which claim that the correct solution is the Catalan number: $C_n = \frac{1}{n+1}{2n\choose n} = \frac{(2n)!}{(n+1)!\,n!} = \prod\limits_{k=2}^{n}\frac{n+k}{k} \qquad\mbox{ for }n\ge 0$ $1, 1, 2, 5, 14, 42, 132...$ Which is correct, and why? Or are they both correct depending on the assumptions you make about the somewhat vague problem statement? REPLY [5 votes]: Assume that any number of companies may be merged at a time. We must decide how much to care about timing. For example, do $$ \begin{aligned} &a,b,c,d\rightarrow\{a,b\},\{c,d\}\rightarrow\{a,b,c,d\}\\ &a,b,c,d\rightarrow\{a,b\},c,d\rightarrow\{a,b\},\{c,d\}\rightarrow\{a,b,c,d\}\\ &a,b,c,d\rightarrow a,b,\{c,d\}\rightarrow\{a,b\},\{c,d\}\rightarrow\{a,b,c,d\}\\ \end{aligned} $$ count as one outcome, two outcomes, or three outcomes? If we care only about which groups get merged, but not about the timing, counting these as one outcome makes sense. But if timing is important, counting these as three outcomes makes sense. One might also disallow the first of the three, assuming that two mergers never take place exactly simultaneously, in which case only the last two would count. If we assume that timing is unimportant, so that the scenarios above count as one outcome, then for four companies we have $$ \begin{aligned} &a,b,c,d\rightarrow\{a,b,c,d\}\\ &a,b,c,d\rightarrow\{a,b,c\},d\rightarrow\{a,b,c,d\}\quad\text{(plus three others)}\\ &a,b,c,d\rightarrow\{a,b\},c,d\rightarrow\{a,b,c,d\}\quad\text{(plus five others)}\\ &a,b,c,d\rightarrow\{a,b\},c,d\rightarrow\{a,b,c\},d\rightarrow\{a,b,c,d\}\quad\text{(plus 11 others)}\\ &a,b,c,d\rightarrow\{a,b\},\{c,d\}\rightarrow\{a,b,c,d\}\quad\text{(plus two others),} \end{aligned} $$ which is a total of $26$ outcomes. This is the result in the OEIS link relating to phylogenetic trees in Brian Scott's answer. If we consider timing to be important, then the three outcomes in the last line turn into nine outcomes, and the result is $32$ outcomes total. This is the correct answer for the partition model described, but not actually implemented, in Brian Scott's answer. (For a full discussion of the partition model, see Lengyel's constant; for the sequence, see A005121.) If we forbid simultaneous mergers, then we reduce the total by $3$, leaving $29$ total outcomes, as in Christian Blatter's answer. 
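All three counts for $n=4$ are small enough to confirm by direct computation; the following is my own brute-force Python sketch (not taken from any of the cited answers), which enumerates merger histories under each convention and reproduces $26$, $32$ and $29$:

```python
from itertools import combinations

def set_partitions(items):
    """Yield every partition of the list `items` as a list of blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in set_partitions(rest):
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller

def count_untimed(n):
    """Phylogenetic-tree model: only the nested groupings matter, not the timing."""
    if n == 1:
        return 1
    total = 0
    for part in set_partitions(list(range(n))):
        if len(part) < 2:                      # the root of the tree must have at least two children
            continue
        prod = 1
        for block in part:
            prod *= count_untimed(len(block))
        total += prod
    return total

def count_timed(n, one_merger_per_round):
    """Ordered rounds of mergers, starting from n singleton firms."""
    def rec(firms):                            # firms: tuple of frozensets of original labels
        if len(firms) == 1:
            return 1
        total = 0
        idx = range(len(firms))
        if one_merger_per_round:
            for r in range(2, len(firms) + 1):
                for group in combinations(idx, r):
                    merged = frozenset().union(*(firms[i] for i in group))
                    rest = tuple(firms[i] for i in idx if i not in group)
                    total += rec(rest + (merged,))
        else:
            for part in set_partitions(list(idx)):
                if all(len(b) == 1 for b in part):
                    continue                   # each round performs at least one genuine merger
                total += rec(tuple(frozenset().union(*(firms[i] for i in b)) for b in part))
        return total
    return rec(tuple(frozenset([i]) for i in range(n)))

print(count_untimed(4))          # 26 : timing ignored
print(count_timed(4, False))     # 32 : rounds ordered, simultaneous mergers allowed
print(count_timed(4, True))      # 29 : rounds ordered, one merger per round
```

Running it prints $26$, $32$ and $29$, matching the three conventions described above.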
Added: There was a question in the comments to Brian Scott's answer about the derivation of the recurrence $$ \begin{aligned} M(n+1)=&(n+2)M(n)+2\sum_{k=2}^{n-1}\binom{n}{k}M(k)M(n-k+1)\\ =&nM(n)+2\sum_{k=2}^n\binom{n}{k}M(k)M(n-k+1) \end{aligned} $$ for the number of outcomes in the phylogenetic tree model. The idea is to partition the set of merger histories according to the size, which we will denote by $k+1$, of the first conglomerate to contain firm $n+1$. Note that $k$ ranges from $1$ to $n$. If $k\ge2$ then there are two scenarios, distinguished by the manner in which firm $n+1$ gets merged with the other $k$ firms. In Scenario I, the other $k$ firms first merge into a single conglomerate, and then firm $n+1$ merges with it. In Scenario II, the other $k$ firms first merge into two or more conglomerates, and then firm $n+1$ and these conglomerates all merge in a single step. Either scenario can take place in $M(k)$ ways, for a total of $2M(k)$ ways. Once firm $n+1$ has joined a conglomerate, that conglomerate must then be merged, along with the $n-k$ remaining firms, into a single conglomerate. This can take place in $M(n-k+1)$ ways. As there are $\binom{n}{k}$ ways to choose the $k$ firms with which firm $n+1$ forms its first conglomerate, there are a total of $2\binom{n}{k}M(k)M(n-k+1)$ outcomes for a given $k\ge2$. The only difference when $k=1$ is that Scenario II cannot occur. Hence the coefficient $2$ becomes $1$. The term $(n+2)M(n)$ is a combination of the $k=1$ and $k=n$ terms of the sum. 2nd addition: For completeness, I record the recurrence for the partition model here as well. Let $Z(n)$ be the number of ways in which $n$ firms may be merged. The initial set of $n$ firms may be partitioned into $k$ non-empty parts in $S(n,k)$ ways, where $S(n,k)$ is the Stirling number of the second kind. The set of $k$ parts represent the set of firms present after the first round of mergers. Since at least one merger must occur in each round, we have $1\le k\le n-1$. The new set of firms may be merged in $Z(k)$ ways. From this we get $$ Z(n)=\sum_{k=1}^{n-1}S(n,k)Z(k). $$<|endoftext|> TITLE: Point on the left or right side of a plane in 3D space QUESTION [9 upvotes]: I have an alpha plane determined by 3 points in space. How can I check if another point in space is on the left side of the plane or on the right side of it? For example if the plane is determined by points $A(0,0,0)$, $B(0,1,0)$ and $C(0,0,1)$ then point $X(-1, 0, 0)$ is on the left side of the plane and point $Y(1,0,0)$ is on the right side of the plane. I need a fast solution for plug-in development for a 3D application and I'm not very good at math. REPLY [4 votes]: I'm guessing from your comment that your planes intersect the $x$-axis in exactly one point, which determines the left and right sides. Let $A,B,C$ be the points that determine the plane. Then the cross product $(B-A) \times (C-A)$ gives us a normal ${\bf n}$ to the plane. Now consider a test point $(x,0,0)$ where $x$ is a huge positive number. This should be on the right side of the plane. Now $((x,0,0) - A) \cdot {\bf n}$ is just the first coordinate of ${\bf n}$ times $x$ minus some constant. For large enough $x$ the sign of the dot product is then just the sign of the first coordinate of ${\bf n}$. Thus: If the sign of first coordinate of $n$ is positive, then the right side consists of the points $P$ with $(P - A) \cdot {\bf n} > 0$ and the left side consists of the points with $(P - A) \cdot {\bf n} < 0$. 
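Since the asker wanted something to drop into a plug-in, here is a minimal sketch of this recipe in Python (the function and variable names are my own, and it includes the sign flip described in the next sentence):

```python
def side_of_plane(a, b, c, p):
    # a, b, c: the three points defining the plane; p: query point; all (x, y, z) tuples.
    # As in the answer above, this assumes the plane is not parallel to the x-axis,
    # i.e. the normal has a nonzero first coordinate.
    def sub(u, v): return (u[0] - v[0], u[1] - v[1], u[2] - v[2])
    def cross(u, v): return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
    def dot(u, v): return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

    n = cross(sub(b, a), sub(c, a))        # normal to the plane
    if n[0] < 0:                           # orient the normal toward positive x
        n = (-n[0], -n[1], -n[2])
    s = dot(sub(p, a), n)
    return (s > 0) - (s < 0)               # +1 right, -1 left, 0 on the plane

A, B, C = (0, 0, 0), (0, 1, 0), (0, 0, 1)  # the example from the question
print(side_of_plane(A, B, C, (-1, 0, 0)))  # -1: left
print(side_of_plane(A, B, C, ( 1, 0, 0)))  # +1: right
```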
The inequalities are reversed if the sign of the first coordinate of $n$ is negative.<|endoftext|> TITLE: Intuition behind Jacobian of the SVD QUESTION [7 upvotes]: I'm having a little trouble understanding the meaning behind the Jacobian of an SVD. I understand what the Jacobian is, but I don't see how you can derive a Jacobian from the SVD. To me, the SVD is just USV_transpose - I don't see how a matrix can be differentiated, since they don't really seem to be functions of anything. http://www.ics.forth.gr/_publications/2000_eccv_SVD_jacobian.pdf I've been looking at the above pdf just to get a better understanding, but equation (7) (the differentiation of the singular values) is really where I get lost. REPLY [5 votes]: Suppose $A=USV^T$ is the SVD of $A$. The Jacobian they are talking about is just the sensitivity of the singular vector matrices $U,S,V$ with respect to changes in the input matrix $A$. It answers the question: if you change one element of the input matrix $A$ a little bit, how much will each element of the singular vector matrices change? There is a lack of consistent notation for this sort of thing, because the input space and output space are both higher dimensional spaces of matrices - you can choose to go element by element on the input, or the output, or both, or neither, all leading to different notations. There are different notations even within that. Thus it may be helpful to try reading papers from different sources that use different notation - maybe one will make more sense than others. I personally prefer the notation in the following paper (SVD sensitivity in section 3.2), which tries to avoid element-by-element notation as much as possible: http://people.maths.ox.ac.uk/gilesm/files/NA-08-01.pdf Here are a few other papers/presentations using different notations for the eigenvalue sensitivity problem, which is very similar: Full matrix notation with the matrix as a function of a scalar parameter: http://www.win.tue.nl/casa/meetings/seminar/previous/_abstract051019_files/Presentation.pdf http://alexandria.tue.nl/repository/books/616489.pdf Element-by-element input, full matrix output notation (also includes second derivatives): http://ftp.cs.nyu.edu/cs/faculty/overton/papers/pdffiles/eighess.pdf One of the earlier papers on the subject from 1985 which uses an archaic notation that is very confusing to me but might make mores sense to you: http://janmagnus.nl/papers/JRM011.pdf<|endoftext|> TITLE: Uniform convergence of derivatives, Tao 14.2.7. QUESTION [30 upvotes]: This is ex. 14.2.7. from Terence Tao's Analysis II book. Let $I:=[a,b]$ be an interval and $f_n:I \rightarrow \mathbb R$ differentiable functions with $f_n'$ converges uniform to a function $g:I \rightarrow \mathbb R$. Suppose $\exists x_0 \in I: \lim \limits_{n \rightarrow \infty} f_n(x_0) = L \in \mathbb R$. Then the $f_n$ converge uniformly to a differentiable function $f:I \rightarrow \mathbb R$ with $f' = g$. We are not given that the $f_n'$ are continuous but he gives the hint that $$ d_{\infty}(f_n',f_m') \leq \epsilon \Rightarrow |(f_n(x)-f_m(x))-(f_n(x_0)-f_m(x_0))| \leq \epsilon |x-x_0| $$ This can be shown by the mean value theorem. My question is : How does this help me to prove the theorem ? 
REPLY [29 votes]: Since $\{f_n(x_0)\}$ converges, for each $\epsilon > 0$ and $n, m$ large enough we have $$ \begin{align} \lvert f_n(x) - f_m(x) \rvert &\leq \left\lvert (f_n(x)-f_m(x))-(f_n(x_0)-f_m(x_0)) \right\rvert + \left\lvert f_n(x_0) - f_m(x_0) \right\rvert \\ &\leq \epsilon \left\lvert x - x_0 \right\rvert + \epsilon \\ &\leq \epsilon (b - a) + \epsilon \end{align} $$ Hence $f_n$ converges uniformly on $I$ to a function $f$, moreover for each $\epsilon > 0$ and $m, n$ large enough, the inequality $$ \left\lvert \frac {f_n(y) - f_n(x)} {y - x} - \frac {f_m(y) - f_m(x)} {y - x} \right\rvert \leq \epsilon $$ holds for each $x\neq y\in I$. (It is the same inequality of the hint but now we can assume it holds for generic $y\in I$, because we showed $f_n(y)$ converges for all $y \in I$) The above relation implies that $\frac {f_n(y) - f_n(x)} {y - x}$ converges uniformly to $\frac {f(y) - f(x)} {y - x}$. Now we can write $$ \left\lvert\frac {f(y) - f(x)} {y - x} - g(x) \right\rvert \leq \\ \left\lvert\frac {f(y) - f(x)} {y - x} - \frac {f_n(y) - f_n(x)} {y - x} \right\rvert + \left\lvert \frac {f_n(y) - f_n(x)} {y - x} - f_n'(x)\right\rvert + \left\lvert f_n'(x) - g(x) \right\rvert $$ For each $\epsilon > 0$ and $n$ large enough we get $$ \left\lvert\frac {f(y) - f(x)} {y - x} - g(x) \right\rvert \leq 2\frac \epsilon 3 + \left\lvert \frac {f_n(y) - f_n(x)} {y - x} - f_n'(x)\right\rvert $$ and for $y$ close enough to $x$ $$ \left\lvert\frac {f(y) - f(x)} {y - x} - g(x) \right\rvert \leq \epsilon $$ So $f'(x)$ exists and is equal to $g(x)$. Edit To clarify the point raised by @DavidC.Ullrich. Since ${f'_n}$ converges uniformly, there exists $N \in \mathbb N$ such that $\lVert f'_n - f'_m \rVert_\infty < \epsilon$ for all $n, m > N$, that is $$ |f'_n(x) - f'_m(x)| < \epsilon \qquad \forall m,n > N, \forall x\in I $$ So, by means of the mean value theorem, for each $m,n > N$ and for each $x \neq y\in I$ we can write $$ \left\lvert \frac {f_n(y) - f_n(x)} {y - x} - \frac {f_m(y) - f_m(x)} {y - x} \right\rvert = \\ \left\lvert \frac {f_n(y) - f_m(y)} {y - x} - \frac {f_n(x) - f_m(x)} {y - x} \right\rvert = \\ \left\lvert \frac {(f_n - f_m)(y)- (f_n - f_m)(x)} {y - x}\right\rvert = \\ \lvert (f_n - f_m)'(\xi) \rvert = \\ \lvert f_n'(\xi) - f_m'(\xi)\rvert < \epsilon $$<|endoftext|> TITLE: Representation of $S^{3}$ as the union of two solid tori QUESTION [26 upvotes]: Well, I'm trying to prove that you can express the 3-dimensional sphere $S^{3}$ as the union of two solid tori. I tried first use that a solid tori is homeomorphic to $S^{1}$$\times$$D^{2}$ and use this to obtain some quotient space which would be homeomorphic to $S^{3}$, but I couldn't go any further. Is this the way or I need a little more to prove that? Thanks in advance. REPLY [6 votes]: The easier way is to think of $S^3$ as follows: $S^3$ ={$ (z,w)\in \mathbb{C}: |z|^2 +|w|^2=2$}. Take $V_1$={$ (z,w)\in S^3: |z|\geq |w|$} and $V_2$={$ (z,w)\in S^3: |z|\leq |w|$}. It is not difficult to show that $V_1$ and $V_2$ are both solid tori glued along their common boundary namely $\delta V_1$=$\delta V_2$={$ (z,w)\in S^3: |z|=|w|$}, which is a torus. More generally, There are different ways of gluing two solid tori to get different 3-manifolds. One way is to glue a meridian curve on the ist torus boundary to a longitude curve on the second. If this has to be injective, you have to glue each meridian on the ist to a longitude of the second. This gives you $S^3$. Hopf fibration helps you visualize this: See a short video here movie. 
(However, you might need some skills to notice what is going on. You can ask me of course). If on the other hand you glued a meridian on the ist torus boundary to a meridian on the second, what you get is an $S^1$-worth of 2-spheres i.e $S^2$ $\times$ $S^1$. In the two cases above, we got irreducible 3-manifolds. But in general, to obtain a new 3-manifolds, you will have to glue the meridian of the ist to any non-trivial simple curve on the second. A fancy name for such a curve is $(p,q)$-curve, where $p$ and $q$ denote the number of meridians and number of longitudes respectively. The manifold formed is called a Lens space $L(p,q)$. Something similar to Heegaard splitting discussed before is Dehn surgery on knots. Here you can do the 3 procedures that I have described. It involves removing a regular neighbourhood of a knot in $S^3$ and gluing it back in a different way.<|endoftext|> TITLE: Factorization of $x^7-1$ into irreducible factors over $GF(4)$ QUESTION [5 upvotes]: I need to find cyclotomic cosets depending on $n=7$ and $q=4$ and find the factorization of $x^7-1$ into irreducible factors over $GF(4)$. Thanks for any advice. REPLY [2 votes]: As has been noted by Jack D'Aurizio in his comment, the polynomial $x^{7}-1$ splits into a product of $x-1$ and two different irreducible factors of degree $3$ over $F_{2}.$ This certainly gives the same factorization (but not a priori into irreducible factors) over $F_{4}.$ However $F_{4}$ and $F_{16}$ contain no element of multiplicative order $7,$ so contain no root of $x^{7}-1$ other than $1,$ so the two factors of degree $3$ remain irreducible in $F_{4}[x].$<|endoftext|> TITLE: Norm of integral operator in $L^1$ QUESTION [6 upvotes]: What is the norm of the operator $$ T\colon L^1[0,1] \to L^1[0,1]: f\mapsto \left(t\mapsto \int_0^t f(s)ds\right) $$ ? REPLY [7 votes]: Let $f\in L^1([0,1])$. Then $$\|Tf\|_1=\int_0^1 \left|\int^t_0 f(s) ds\right| dt \le \int_0^1 \int_0^1 |f(s)| ds dt = \|f\|_1$$ This shows $\|T\|\le 1$. Setting $f_n(x)=n\chi_{[0,1/n]}(x)$, we see $||f_n||_1=1$. Note that $$\int^t_0 n\chi_{[0,1/n]}(s) ds=\left\{\begin{array}\,1 & \text{if}\;t\ge1/n\\ nt & \text{if}\;t<1/n\end{array}\right.$$ It follows that $$||Tf_n||_1=\int^1_0\int_0^t n\chi_{[0,1/n]}(s)ds dt=\int_0^{1/n}nt\,dt+\int_{1/n}^1 1\,dt =1-\frac{1}{2n}\rightarrow 1\;\text{as}\;n\rightarrow\infty. $$ Hence $||T||=1$.<|endoftext|> TITLE: Prove that $(X\times Y)\setminus (A\times B)$ is connected QUESTION [9 upvotes]: I'm reading topology of Munkres and I have a problem that stuck me for a while. I'm so greatful if anyone can help me with this. Let $A$ be a proper subset of $X$, and let $B$ is a proper subset of $Y$. If $X$ and $Y$ are connected, show that $$(X\times Y) \setminus (A\times B)$$ is connected. Thanks so much for your consideration ^^ REPLY [15 votes]: We can simplify Davide Giraudo's answer by noting that we only need to show that $(a,b)$ is in the same connected component as every other point. So, start by fixing $a \in X \setminus A$ and $b \in Y \setminus B$ as Davide does, and consider an arbitrary point $(x,y) \in (X \times Y) \setminus (A \times B)$. If $x \notin A$, then $\{x\} \times Y$ is connected and contains both $(x,y)$ and $(x,b)$, while $X \times \{b\}$ is connected and contains both $(x,b)$ and $(a,b)$. Thus, $(\{x\} \times Y) \cup (X \times \{b\})$ is connected and contains both $(x,y)$ and $(a,b)$. Otherwise, $x \in A \implies y \notin B$. 
Thus, analogously, $X \times \{y\}$ is connected and contains both $(x,y)$ and $(a,y)$, while $\{a\} \times Y$ is connected and contains both $(a,y)$ and $(a,b)$, and so $(X \times \{y\}) \cup (\{a\} \times Y)$ is connected and contains both $(x,y)$ and $(a,b)$.<|endoftext|> TITLE: strange metric $d(x,y) = ||x|| + ||y||$ if $x\ne y$, $d(x,y) = 0$ if $x = y$. QUESTION [8 upvotes]: Let $d : \mathbb{R}^n \times \mathbb{R}^n \to [0, \infty]$ be defined by $$ d(x,y) = \left\{ \begin{array}{ll} 0 & : ~ x = y \\ ||x|| + ||y|| & : ~ x \ne y \end{array} \right. $$ where $||\cdot ||$ denotes the usual norm of $\mathbb{R}^n$. Show that $d$ is a metric. Draw the $\varepsilon$-Spheres $B_{\varepsilon}(x_0) := \{ x \in \mathbb{R}^2 ~|~ d(x,x_0) < \varepsilon \}$ for $x_0 = (0,0)$ and $x_0 = (1,1)$ and $\varepsilon = \frac{1}{2}, 1, \frac{3}{2}$. Characterize the open, closed and compact sets with respect to this metric. Is $(\mathbb{R}^n, d)$ complete? Number 1) is simple, for 2) I got: If $x_0 = (0,0)$ then $$ d(x,x_0)= \left\{ \begin{array}{ll} 0 & \textrm{ for } x = (0,0) \\ \sqrt{x^2 + y^2} & \textrm{ otherwise } \end{array} \right. $$ and if $x_0 = (1,1)$ then $$ d(x,x_0) = \left\{ \begin{array}{ll} 0 & \textrm{ for } x = (1,1) \\ \sqrt{2} + \sqrt{x^2 + y^2} & \textrm{ otherwise } \end{array} \right. $$ and the pictures are simple spheres with the point $x_0$ in the sphere ($x_0 = (0,0)$) or isolated outside ($x_0 = (1,1)$). But with 3) I have my problems, i conjecture that $$ B_{\varepsilon}(x) \quad \textrm{ is open iff } \quad ||x|| - \varepsilon > 0 $$ and going on I know that finite intersections of open sets are open, but then had I got all open sets by this construction? And what about the other properties, how can I characterize them, do you have any hints? REPLY [3 votes]: For every $x \ne 0$ we can find $0 < \epsilon < \|x\|$ so that $B_\epsilon(x) = \{x\}$. Therefore, for every $x \ne 0$, $\{x\}$ is open. Also, evidently $B^d_\epsilon (0) = B_\epsilon(0)$; i.e. all Euclidean balls around $0$ are open in the topology induced by $d$ (denote it with $\tau_d$ from now on). Thus, every open in $\tau_d$ is of the form: $$B_\epsilon(0) \cup S$$ with $\epsilon \ge 0, B_\epsilon(0)$ an Euclidean ball around $0$ (remark the empty case $\epsilon = 0$) and $S \subseteq X \setminus \{0\}$. The closed sets are then characterised as $\complement(B_\epsilon(0)) \cap \complement(S)$ with $B_\epsilon(0)$ and $S$ as above. Now $\complement (S)$ contains $0$ but is otherwise an arbitrary subset of $X$. Since $\complement(B_\epsilon(0))$ does not contain $0$ if $\epsilon > 0$, the only requirement remaining for $C \subseteq S$ to be closed in this case, is that it be contained in the complement of some $B_\epsilon(0)$. If $\epsilon = 0$, then since $\complement(B_\epsilon(0)) = X$, the only remaining requirement is that $0 \in C$. This is summarized by saying that $C \subseteq X$ is closed iff: $$0 \in C \quad \text{or}\quad \inf \{\|c\|: c \in C\} =: \operatorname{dist}(C, 0) > 0$$ The compact sets are easily shown to be all finite $S \subseteq X$. Finally $(\Bbb R^n, d)$ is complete since a Cauchy sequence $(x_n)_n$ either takes finitely many values (and trivially converges) or has for $\epsilon >0$ that $d(x_n,x_m) < \epsilon \implies d(x_n,0) <\epsilon$ (since $x_n \ne x_m$ infinitely often), and we conclude $\displaystyle \lim_{n \to \infty} x_n = 0$ in this case. 
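As a small numerical illustration of the first claim (points away from the origin are isolated) and of the $\varepsilon$-balls asked about in part 2), here is a quick sketch of mine in Python:

```python
import math

def d(x, y):
    # the metric from the problem: 0 if x == y, else ||x|| + ||y||
    return 0.0 if x == y else math.hypot(*x) + math.hypot(*y)

def ball(center, eps, points):
    return [p for p in points if d(p, center) < eps]

# sample points on a grid in the plane
grid = [(i / 4, j / 4) for i in range(-8, 9) for j in range(-8, 9)]

for eps in (0.5, 1.0, 1.5):
    print(eps, "around (0,0):", len(ball((0.0, 0.0), eps, grid)), "grid points")
    print(eps, "around (1,1):", ball((1.0, 1.0), eps, grid))
```

For $\varepsilon=\frac12$ and $\varepsilon=1$ only the centre $(1,1)$ itself shows up, while for $\varepsilon=\frac32$ the ball also picks up the grid points of Euclidean norm less than $\frac32-\sqrt2$, i.e. only the origin on this grid; around $(0,0)$ one simply sees the usual Euclidean balls.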
EDIT: Grateful thanks to fgp for pointing out clumsy errors in my analysis of the problem.<|endoftext|> TITLE: How to convert to conjunctive normal form? QUESTION [20 upvotes]: If I have a formula: $((a \wedge b) \vee (q \wedge r )) \vee z$, am I right in thinking the CNF for this formula would be $(a\vee q \vee r \vee z) \wedge (b \vee q \vee r \vee z) $? Or is there some other method I must follow? REPLY [13 votes]: Another possibility is to make a truth table (note: in my semantics $1=T$ and $0=F$); it is longer, but this method is fail-safe. Let $\phi=((a\wedge b)\vee(q \wedge r))\vee z$; then: $$\begin{array}{ccccc|c} a & b & q & r & z & \phi\\\hline 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ \end{array}$$ And so on. For every row in which $\phi=0$ you get a clause by putting a literal in the clause unnegated if it takes the value $0$ in that row, and negated if it takes the value $1$. For example, the clause for the first line is $(a \vee b\vee q \vee r \vee z)$. The clause for the third line is $(a \vee b\vee q \vee \bar r \vee z)$. There is no clause for the second line because $\phi=1$. For the line $\begin{array}{ccccc|c}0&1&0&1&0&0\end{array}$ you get the clause $(a \vee \bar b \vee q \vee \bar r \vee z)$. Finally you put a $\wedge $ between the clauses.<|endoftext|> TITLE: trouble with this integral QUESTION [8 upvotes]: Could anyone help me to do this integral? $$\int_{\,0}^\infty \; \frac{\exp \left( -\frac{1}{x} -x\right)}{\sqrt{x}} \, dx = \sqrt{\pi}e^{-2} $$ I think you start with completing the square in the exponent, but what substitution do you make then? $u=\sqrt{x}$ didn't seem to get me far. REPLY [10 votes]: Substitute first $x=u^2$ in order to obtain: $$ I = \int_{0}^{+\infty}\frac{dx}{\sqrt{x}\exp\left(x+\frac{1}{x}\right)}=2\int_{0}^{+\infty}e^{-\left(x^2+\frac{1}{x^2}\right)}\,dx$$ Now use the substitution $x=\frac{1}{y}$ to get: $$ I = 2\int_{0}^{+\infty}\frac{1}{x^2}e^{-\left(x^2+\frac{1}{x^2}\right)}\,dx,$$ from which, averaging the two expressions for $I$, it follows that: $$ I = \int_{0}^{+\infty}\left(1+\frac{1}{x^2}\right)e^{-\left(x^2+\frac{1}{x^2}\right)}\,dx,$$ and the key substitution is now $u = x-\frac{1}{x}$, from which we have: $$ I = \int_{-\infty}^{+\infty}e^{-u^2-2}\,du = e^{-2}\sqrt{\pi}, $$ QED.<|endoftext|> TITLE: Summing (0,1) uniform random variables up to 1 QUESTION [20 upvotes]: Possible Duplicate: choose a random number between 0 and 1 and record its value. and keep doing it until the sum of the numbers exceeds 1. how many tries? So I'm reading a book about simulation, and in one of the chapters about random number generation I found the following exercise: For uniform $(0,1)$ independent random variables $U_1, U_2, \dots$ define $$ N = \min \bigg \{ n : \sum_{i=1}^n U_i > 1 \bigg \} $$ Give an estimate for the value of $E[N]$. That is: $N$ is equal to the number of random numbers uniformly distributed in $(0,1)$ that must be summed to exceed $1$. What's the expected value of $N$? I wrote some code and I saw that the expected value of $N$ goes to $e = 2.71\dots$ The book does not ask for a formal proof of this fact, but now I'm curious! So I would like to ask for a (possibly) simple (= undergraduate level) analytic proof of this fact, an intuitive explanation for this fact, or both. REPLY [11 votes]: In fact it turns out that $P(N = n) = \frac{n-1}{n!}$ for $n \ge 2$. Let $S_n = \sum_{j=1}^n U_j$, and $f_n(s)$ the probability density function for $S_n$. For $0 < x < 1$ we have $f_1(x) = 1$ and $f_{n+1}(x) = \int_0^x f_n(s) \ ds$.
By induction, we get $f_n(x) = x^{n-1}/(n-1)!$ for $0 < x < 1$, and thus $P(S_n < 1) = \int_0^1 f_n(s)\ ds = \dfrac{1}{n!}$. Now \begin{align*} P(N=n) &= P(S_{n-1} < 1 \le S_n)\\ &= P(S_{n-1} < 1) - P(S_n \le 1)\\ &= \frac{1}{(n-1)!} - \frac{1}{n!} \\ &= \frac{n-1}{n!} \end{align*}<|endoftext|> TITLE: A non-negative matrix has a non-negative inverse. What other properties does it have? QUESTION [12 upvotes]: This is homework for my mathematical optimization class. Here is the exact question: Element-wise nonnegative matrix and inverse. Suppose a matrix $A \in\Bbb R^{n\times n}$ , and its inverse $B$, have all their elements nonnegative, i.e., $A_{ij}\geq 0$, $B_{ij}\geq 0$, for $i,j = 1,\dots,n$. What can you say must be true of $A$ and $B$? Please give your answer first, and then the justification. Your solution (which includes what you can say about $A$ and $B$, as well as your justification) must be short. I have no idea what they are looking for; so far, I've got just the basic facts stemming from the fact that an inverse exists (it's square, the determinant is non-zero etc.). What can I deduce from the "non-negative" property? REPLY [17 votes]: Suppose you have a non-negative matrix $A$ with a non-negative inverse $B$. Since the entries are non-negative, if the $k$th entry of row $i$ is non-zero, i.e. $A_{ik}\neq 0$, then we must have $B_{kj} = 0$ for all $j$ except $j = i$. Otherwise, we would have $$I_{ij} = 0 = \sum_{\ell = 1}^n A_{i\ell}B_{\ell j} \ge A_{ik}B_{kj} > 0$$ Since we cannot have a zero row in an invertible matrix, this in-turn implies that $B_{ki} \neq 0$. Applying a symmetric argument now suggests $A_{ij} = 0$ for all $j$ except $j=k$. Thus each row of the matrix has precisely one non-zero entry. It follows that the matrix is the permutation of a positive diagonal matrix, i.e. there exists diagonal matrix $D > 0$ and permutation matrix $P$ such that $$A = PD$$ Such matrices if you are interested are called monomial matrices. You can easily check that the above condition is in fact necessary and sufficient: Theorem: Let $A$ be a non-negative matrix. Then $A$ has a non-negative inverse if and only if $A$ is a positive monomial matrix (by positive monomial, I mean the non-zero entries are positive).<|endoftext|> TITLE: Let $G$ be any abelian group and $a\in{G}$. Show there exists a homomorphism $f:G\rightarrow{\mathbb{Q}/\mathbb{Z}}$ such that $f(a)\neq{0}$. QUESTION [13 upvotes]: Let $G$ be any abelian group and $a\in{G}$. Show there exists a homomorphism $f:G\rightarrow{\mathbb{Q}/\mathbb{Z}}$ such that $f(a)\neq{0}$. I can prove this question (I think) if I use the fact that $\mathbb{Q}/\mathbb{Z}$ is an injective abelian group: just define $f$ on the cyclic subgroup generated by $a$ and extend to $G$ via injectivity. However, I feel there should be a more 'elementary' way to prove this result, I just can't see one yet. One of my friends suggested using Zorn's Lemma, but I haven't done much with this information yet. REPLY [4 votes]: Let $M$ be an abelian group. Every integer $n$ defines an endomorphism $f_n\colon M \rightarrow M$ such that $f_n(x) = nx$. $M$ is called divisible if $f_n$ is surjective for every nonzero integer $n$. Clearly $\mathbb{Q}/\mathbb{Z}$ is divisible. Lemma 1 Let $M$ be a divisible abelian group. Let $I$ be an ideal of $\mathbb{Z}$. Then the canonical homomorphism $Hom(\mathbb{Z}, M) \rightarrow Hom(I, M)$ induced by the canonical injection $I \rightarrow \mathbb{Z}$ is surjective. Proof: If $I = 0$, the assertion is clear. 
Hence we assume $I \neq 0$. There exists a nonzero integer $n$ such that $I = \mathbb{Z}n$. Let $f \in Hom(I, M)$. Since $M$ is divisible, there exists $a \in M$ such that $f(n) = na$. Let $x \in I$. There exists an integer $m$ such that $x = mn$. $f(x) = f(mn) = mf(n) = mna = xa$. Hence $f$ is in the image of the map$\colon Hom(\mathbb{Z}, M) \rightarrow Hom(I, M)$. QED Lemma 2 Let $T$ be a divisible abelian group. Let $M$ be an abelian group. Let $N$ be a subgroup of $M$. Let $f\colon N \rightarrow T$ be a homomorphism. Let $x \in M - N$. Then there exists a homomorphim $g\colon N + \mathbb{Z}x \rightarrow T$ extending $f$. Proof: Let $I = \{a \in \mathbb{Z}\colon ax \in N\}$. Let $h\colon I \rightarrow T$ be the map defined by $h(a) = f(ax)$. Since $h$ is a homomorphism, by Lemma 1, there exists $z \in T$ such that $h(a) = az$ for all $a \in I$. Suppose $y + ax = y' + bx$, where $y, y' \in N, a, b \in \mathbb{Z}$. $y - y' = (b - a)x$ Hence $b - a \in I$. Hence $h(b - a) = f((b - a)x) = (b - a)z$. Hence $f(y - y') = (b - a)z$. Hence $f(y) + az = f(y') + bz$. Therefore we can define a map $g\colon N + \mathbb{Z}x \rightarrow T$ by $g(y + ax) = f(y) + az$. Clearly $g$ is a homomorophism extending $f$. QED Theorem Let $T$ be a divisible abelian group. Then $T$ is injective. Proof: This follows immediately from Lemma 2 and Zorn's lemma. Corollary $\mathbb{Q}/\mathbb{Z}$ is injective.<|endoftext|> TITLE: Pointed cofibrations between well-pointed spaces QUESTION [9 upvotes]: Recall that we call a map $i: A \rightarrow X$ a cofibration if it has the homotopy extension property. We will say a pointed space $X$ is well-pointed, if the inclusion of the basepoint $\{ * \} \hookrightarrow X$ is a cofibration. A pointed cofibration $i: A \rightarrow X$ is a based map of pointed spaces that has the homotopy extension property with respect to homotopies respecting the basepoint. Note that a cofibration is always a pointed cofibration, but the converse is not true. It is stated in May's "Concise Course in Algebraic Topology", although not proved, that if a map of well-pointed spaces is a pointed cofibartion, then it is already a cofibration. I've been trying to do it on my own using "the box method", but it didn't lead me anywhere. How does one go to prove such statements? Is there any general method, any useful tricks? I assume that all spaces are compactly generated weakly Hausdorff. REPLY [4 votes]: This is lemma 1.3.4 in May's "More concise algebraic topology", it looks very technical.<|endoftext|> TITLE: Example for an open set in $\mathbb R$ QUESTION [5 upvotes]: What would be your example of an open proper subset $S \subsetneq \mathbb R$ such that $\mathbb{Q} \subsetneq S$? REPLY [6 votes]: a) $A_k:=\bigcup_{j=1}^{+\infty}(r_j-2^{-j-k},r_j+2^{-j-k})$ where $\{r_j\}$ is an enumeration of rationals. This show that such that set can be taken as small (in measure) as we want. b) Take $O_n:=(-1/n,1-1/n)$; then $\{O_n,n\in\Bbb N^*\}$ is an open cover of $[0,1)$ which doesn't have any finite subcover.<|endoftext|> TITLE: Trigonometry Inequality QUESTION [8 upvotes]: This is the first time I'm posting here. If you can also tell me how to format this like a pro, I'll be very grateful. 
1st question: Prove the following inequality: $$0^{\circ} < a, b, c < 180^{\circ}$$ $$\sin a \times \sin b \times \sin c \le \sin\left(\frac{a+b}{2}\right) \times \sin\left(\frac{a+c}{2}\right) \times \sin\left(\frac{b+c}{2}\right)$$ 2nd question: Prove the following inequality: $$a + b + c = 90^{\circ}$$ $$\sin a \times \sin b \times \sin c \le \frac{1}{8}$$ REPLY [2 votes]: First inequality By Jensen we have $$\sin\left(\frac{a+b}{2}\right) \geq \frac{\sin(a)+\sin(b)}{2}$$ By AM-GM we get $$\frac{\sin(a)+\sin(b)}{2} \geq \sqrt{ \sin(a) \sin(b)}$$ Combining we get $$\sqrt{\sin(a) \sin(b) } \leq \sin\left(\frac{a+b}{2}\right)$$ Similarly you get $$\sqrt{\sin(a) \sin(c) } \leq \sin\left(\frac{a+c}{2}\right)$$ $$\sqrt{\sin(b) \sin(c) } \leq \sin\left(\frac{b+c}{2}\right)$$ Multiplying them you get the desired inequality. Second Inequality: By AM-GM: $$\sin a \times \sin b \times \sin c \le \left( \frac{\sin(a)+\sin(b)+\sin(c)}{3} \right)^3$$ Now, by Jensen: $$\frac{\sin(a)+\sin(b)+\sin(c)}{3} \leq \sin\left(\frac{a+b+c}{3}\right)=\frac{1}{2}$$ Combining the two yields the desired result.<|endoftext|> TITLE: "Algorithmic" proofs in linear algebra QUESTION [5 upvotes]: Although I am new to linear algebra, I want to study it with as much rigor as possible. After searching around, I picked up Halmos' Finite Dimensional Vector Spaces and Axler's Linear Algebra Done Right. I've noticed that they state theorems which they prove by a method which I would describe as "algorithmic". For example, verbatim from Axler (although Halmos is very similar): Theorem: In a finite-dimensional vector space, the length of every lin. ind. tuple is $\leq$ the length of every spanning tuple of vectors. Proof: Suppose that ($u_1$, ... $u_m$) is lin. ind. in $\mathcal{V}$ and ($w_1$, ... $w_n$) spans $\mathcal{V}$. We need to prove $m \leq n$. We do so through the multi-step process described below.... Step 1: The tuple $(w_1, ... w_n)$ spans $\mathcal{V}$, and thus adjoining any vector produces a linearly dependent tuple. In particular, the tuple $(u_1, w_1, ... w_n)$ is linearly dependent. Thus, by the linear dependence lemma, we can remove one of the $w$'s so that the n-tuple B consisting of $u_1$ and the remaining $w$'s spans $\mathcal{V}$. Step j: The n-tuple B from step $j-1$ spans $\mathcal{V}$, and thus adjoining any vector to it produces a linearly dependent tuple. In particular, the $(n+1)$-tuple obtained by adjoining $u_j$ to B, placing it just after $u_1,...u_{j-1},$ is linearly dependent. By the linear dependence lemma (2.4), one of the vectors in this tuple is in the span of the previous ones.... We can remove that $w$ from $B$ so that the new $n$-tuple $B$ consisting of $u_1, ... u_j$ and the remaining $w$'s spans $\mathcal{V}$. After step $m$, we have added all the $u$'s and the process stops. If at any step we added a $u$ and had no more $w$'s to remove, then we would have a contradiction. Thus there must be at least as many $w$'s as $u$'s. I take issue with the level of rigor of this "algorithmic" proof. Although I think these proofs might be amenable to treatment by induction, I'm not sure how to carry it out. As they stand, although I get the intuition, they don't really convince me. If I had to be precise about what bothers me, I'd say that the actual sets resulting from each step of the operation aren't stated explicitly, and I'm not sure how these sets are being ordered/indexed (they play fast and loose there).
(Disclosure: I am generally not thrilled with ...'s, unless I can see a clear way to come up with an argument which doesn't rely on imagining "what's going on in there", so dealing with about ten of these arguments at one sitting is irritating for me.) Is there a way to make these arguments - in particular, this one - more precise? REPLY [2 votes]: Prove by induction on $k$ that for $0\le k\le m$ there is a tuple $(v_1,\ldots, v_n)$ such that $(v_1,\ldots, v_n)$ spans $\mathcal V$ $k\le n$ $v_i=u_i$ for $1\le i \le k$ $v_i\in\{w_1,\ldots,w_n\}$ for $k< i\le n$.<|endoftext|> TITLE: any idea what fractal algorithm might generate this shape? QUESTION [6 upvotes]: I found this image around, and I'm curious what algorithm generates this kind of shape. In particular, I'm curious how the flow lines are generated, since usually the Mandelbrot iteration just generates different regions depending on the steps required to get into divergence REPLY [4 votes]: The algorithm may be external argument / field lines : http://fraktal.republika.pl/cpp_argphi.html http://linas.org/art-gallery/escape/phase/phase.html See also external ray in wikipedia or triangle inequality http://jussiharkonen.com/?page_id=65 www.ultrafractal.com/help/index.html?/help/coloring/standard/triangleinequalityaverage.html Nice image. Where did you find it?<|endoftext|> TITLE: Are projective modules over an artinian ring free? QUESTION [5 upvotes]: Quoting a comment to this question: By a theorem of Serre, if $R$ is a commutative artinian ring, every projective module [over $R$] is free. (The theorem states that for any commutative noetherian ring $R$ and projective module $P$ [over $R$], if $\operatorname{rank}(P) > \dim(R)$, then there exists a projective [$R$-module] $Q$ with $\operatorname{rank}(Q)=\dim(R)$ such that $P\cong R^k \oplus Q$ where $k=\operatorname{rank}(P)−\dim(R)$.) When $R$ is a PID, this is in Lang's Algebra (Section III.7), and when $R$ is local this is a famous theorem of Kaplansky. But in spite of a reasonable effort, I can't seem to find any other reference to this theorem of Serre. Does anyone know of one? Is there any other way to show that every projective module over an artinian ring is free? REPLY [3 votes]: Sorry to show up late to this party, but you were quoting my comment and somehow I missed it. Yes, a condition was overlooked: we must presume that $P$ has constant rank, alternatively, $\operatorname{Spec}(A)$ is connected, or $A$ has no non-trivial idempotents, etc. This result is Serre's Splitting Theorem which states that a projective $A$-module, $P$, of constant rank $r \geq d+1$ where $d=\dim(A)$ must contain a unimodular element (SST) [not to be left out: $P$ will also be cancellative under this condition (Bass's Cancellation Theorem)]. $P$ containing a unimodular element $p \in P$ is equivalent to a surjection $P \twoheadrightarrow A$, giving us kernel $Q$ (which will also be projective), and because the surjection splits we have $P \simeq A \oplus Q$. Repeat this splitting until the rank of the resulting kernel matches the dimension of $A$. As a result, projective $A$-modules for which $\operatorname{rank}(P)=\dim(A)$ are called projective modules of top-rank. For further reference: see T.Y. Lam's Serre's Problem on Projective Modules (esp. p.291-2)<|endoftext|> TITLE: If $\{a_n\}$ is not summable, neither is $\left\{ {\frac{{{a_n}}}{{1 + {a_n}}}} \right\}$ QUESTION [5 upvotes]: Let $\{a_n\}$ be a sequence.
We say $\{a_n\}$ is summable if the sequence $\{s_n\}$ defined by $$s_1=a_1\\s_{n+1}=s_n+a_{n+1}$$ converges to $\ell\in\Bbb R$, and write $$\sum\limits_{n = 1}^\infty {{a_n}} = \ell $$ $(1)$ Suppose $\{a_n\}$ is of non-negative terms, and that it is not summable, that is $\lim {s_n}$ fails to exist. Show that $$\left\{ {\frac{{{a_n}}}{{1 + {a_n}}}} \right\}$$ is not summable either. Now, since $$0 \leqslant \frac{{{a_n}}}{{1 + {a_n}}} \leqslant {a_n}$$ for each $n$, $\{a_n\}$ being summable implies $\left\{ {\dfrac{{{a_n}}}{{1 + {a_n}}}} \right\}$ also is (monotone convergence). I need to show the converse of this, but I can see no way to prove it. Since $a_n\geq0$, the partial sums $${s_N} = \sum\limits_{n = 1}^N {{a_n}} $$can be made as large as we want. REPLY [14 votes]: Suppose that $a_n\geq 0 (n\in \mathbb{N})$ and $\displaystyle\sum_{n=1}^{\infty}\frac{a_n}{1+a_n} \; \text{is convergent}$. Then $$ \lim_{n\rightarrow\infty}\frac{a_n}{1+a_n}=0. $$ It implies that $\displaystyle\lim_{n\rightarrow\infty}a_n=0$ and $$ \lim_{n\rightarrow\infty}\frac{a_n}{\frac{a_n}{1+a_n}}=\lim_{n\rightarrow\infty}(1+a_n)=1. $$ Since $\displaystyle\sum_{n=1}^{\infty}\frac{a_n}{1+a_n} \; \text{is convergent}$, we have $\displaystyle\sum_{n=1}^{\infty}a_n \; \text{is convergent}$<|endoftext|> TITLE: What hexahedra have faces with areas of exactly 1, 2, 3, 4, 5, and 6 units? QUESTION [12 upvotes]: I tried for a while, not very hard, to construct a polyhedron with exactly six faces, whose areas were respectively 1, 2, 3, 4, 5, and 6 units. I did not meet with any success. Still, it seems that it should exist, because the space of possibilities is so large and so weakly constrained. Perhaps you could make one by chopping off two of the vertices of a tetrahedron. To be more specific, I do not care whether the hexahedron is regular or whether its faces are regular, or the same shape. I would prefer that it be convex. REPLY [16 votes]: Arrange six vectors with lengths $(1,2,3,4,5,6)$ head to tail so that they form a closed loop in $\mathbb{R}^3$. As you say, there is a large space of possibilities here. One method would be to first arrange them in a plane to form a loop, and then kinking a few into 3D. You can form a planar loop by inscribing the chain in a large circle and shrinking the radius of the circle until the chain closes to a loop. One you have these six non-coplanar vectors, apply Minkowski's Theorem: Theorem (Minkowski). Let $A_i$ be positive faces areas and $n_i$ distinct, noncoplanar unit face normals, $i=1,\ldots,n$. Then if $\sum_i A_in_i =0$, there is a closed polyhedron whose faces areas uniquely realize those areas and normals. (See Chap. 7, p. 311: Aleksandr D. Alexandrov. Convex Polyhedra. Springer-Verlag, Berlin, 2005. Monographs in Mathematics. Translation of the 1950 Russian edition by N. S. Dairbekov, S. S. Kutateladze, and A. B. Sossinsky.) I wrote a note on this: "Convex Polyhedra Realizing Given Face Areas," arXiv:1101.0823 [cs.DM], 4Jan11. Here is a suggestive figure from my paper, which hints at one method of arranging the vectors in space:               Some computational aspects of Minkowski's Theorem are discussed in Geometric Folding Algorithms: Linkages, Origami, Polyhedra, p.340. Of course you don't need that generality to solve this specific problem instance.<|endoftext|> TITLE: Any simplification of $\log_e{(1+e^x)}?$ QUESTION [6 upvotes]: Is there any simplification or other interesting transformation of: $$\log_e{(1+e^x)}$$ (where $x \in \mathbb{R}$) ? 
REPLY [4 votes]: Manipulations of this and similar quantities like $ \frac{e^x}{1+e^x}$, or $e^{-x} (1 + e^x)$, and their logarithms, are common in logistic regression in statistics. There is no closed form simplification but you can expand $\log(1 \pm t) = \pm t - \frac{t^2}{2} \pm \frac{t^3}{3} - \cdots$ in a power series for specially chosen $t$ (smaller than 1 in absolute value) to illustrate how $f(x) = \log (1 + e^x)$ is approximately $x$. Possibilities include writing $f(x) = x + \log (1 + e^{-x})$ and using $t = e^{-x}$ to get a series valid for positive $x$, or $t = \frac{1}{1+e^x}$, for which $\log (1 + e^x) = x - \log(1-t) = x + t + \frac{t^2}{2} + \frac{t^3}{3} + \cdots$ is valid for all $x$.<|endoftext|> TITLE: Understanding the cartesian product of complex projective lines. QUESTION [6 upvotes]: I am trying to understand the space obtained by taking the cartesian product $\mathbb{C}\mathbb{P}^1\times \mathbb{C}\mathbb{P}^1$ and identifying some of its points by the rule $(x,y)\sim (y,x)$. Viewing $\mathbb{C}\mathbb{P}^1$ as a CW complex with one 0-cell and one 2-cell I computed the homology of $\mathbb{C}\mathbb{P}^1\times \mathbb{C}\mathbb{P}^1/\sim$ which matches that of $\mathbb{C}\mathbb{P}^2$ but I can't seem to visualize an "obvious" homeomorphism between the two spaces. My question is the following: is $\mathbb{C}\mathbb{P}^1\times \mathbb{C}\mathbb{P}^1/\sim$ homeomorphic to $\mathbb{C}\mathbb{P}^2$ and, if so, how? REPLY [7 votes]: It turns out that even more is true: the $n-$fold symmetric product of $\mathbb{C}\mathbb{P}^1$ is homeomorphic to $\mathbb{C}\mathbb{P}^n$! To see this in the $2-$fold case: consider homogeneous polynomials of degree two $\mathbb{C}[x,y]^{(2)}$ whose elements are of the form $ax^2+bxy+cy^2$ and notice that for $\lambda\in\mathbb{C}^\times$, $$\lambda[ax_0^2+bx_0y_0+cy_0^2]=0\iff ax_0^2+bx_0y_0+cy_0^2=0.$$ This allows us to identify points of $\mathbb{C}\mathbb{P}^2$ with elements of $\mathbb{C}[x,y]^{(2)}/\sim$, where $\sim$ identifies polynomials having the same roots. The map from $\mathbb{C}\mathbb{P}^2$ to the symmetric product of two copies of $\mathbb{C}\mathbb{P}^1$ is then given by $$(a:b:c)\mapsto ax^2+bxy+cy^2=(\alpha x+\beta y)(\alpha'x+\beta'y)\mapsto [(\alpha:\beta),(\alpha':\beta')]$$ where the equality comes from the fundamental theorem of algebra.<|endoftext|> TITLE: Prove that $\sum_{k=1}^n \frac{2k+1}{a_1+a_2+...+a_k}<4\sum_{k=1}^n\frac1{a_k}.$ QUESTION [9 upvotes]: Prove that for $a_k>0,k=1,2,\dots,n$, $$\sum_{k=1}^n \frac{2k+1}{a_1+a_2+\ldots+a_k}<4\sum_{k=1}^n\frac1{a_k}\;.$$ REPLY [4 votes]: I must confess this problem took me a very long time! Step1. If $a_1,a_2,\alpha,\beta,\gamma$ are positive real numbers and $\gamma=\alpha+\beta$ holds, $$\frac{\gamma^2}{a_1+a_2}\leq \frac{\alpha^2}{a_1}+\frac{\beta^2}{a_2}$$ holds too, since it is equivalent to $(\alpha a_2-\beta a_1)^2\geq 0$. Step2. If $a_1,a_2,a_3,\alpha,\beta,\gamma,\delta$ are positive real numbers and $\delta=\alpha+\beta+\gamma$ holds, $$\frac{\delta^2}{a_1+(a_2+a_3)}\leq \frac{\alpha^2}{a_1}+\frac{(\beta+\gamma)^2}{a_2+a_3}\leq\frac{\alpha^2}{a_1}+\frac{\beta^2}{a_2}+\frac{\gamma^2}{a_3}$$ holds too, in virtue of Step1. By induction, it is easy to prove the analogous statement for $k$ variables $a_1,\ldots,a_k$. In fact, this is useless to the proof, but quite interesting in itself :) Step3.
By Step1, $$\sum_{k=1}^{n}\frac{2k+1}{a_1+\ldots+a_k}-\frac{4}{a_n}\leq \sum_{k=1}^{n-1}\frac{2k+1}{a_1+\ldots+a_k}+\frac{(\sqrt{2n+1}-2)^2}{a_1+\ldots+a_{n-1}}\leq \sum_{k=1}^{n-2}\frac{2k+1}{a_1+\ldots+a_k}+\frac{n^2}{a_1+\ldots+a_{n-1}}$$ Step4. By Step3, $$\sum_{k=1}^{n}\frac{2k+1}{a_1+\ldots+a_k}-\left(\frac{4}{a_n}+\frac{4}{a_{n-1}}\right)\leq \sum_{k=1}^{n-2}\frac{2k+1}{a_1+\ldots+a_k}+\frac{(n-2)^2}{a_1+\ldots+a_{n-2}}\leq \sum_{k=1}^{n-3}\frac{2k+1}{a_1+\ldots+a_k}+\frac{(n-1)^2}{a_1+\ldots+a_{n-2}}. $$ Step5. By Step3, Step4, induction and Step1 again: $$\sum_{k=1}^{n}\frac{2k+1}{a_1+\ldots+a_k}\leq \frac{3}{a_1}+\frac{9}{a_1+a_2}+\sum_{j=3}^{n}\frac{4}{a_j}\leq \sum_{j=1}^{n}\frac{4}{a_j}.$$<|endoftext|> TITLE: If $f$ is a positive, monotone decreasing function, prove that $\int_0^1xf(x)^2dx \int_0^1f(x)dx\le \int_0^1f(x)^2dx \int_0^1xf(x)dx$ QUESTION [10 upvotes]: If $f$ is a positive, monotone decreasing function, prove that $\int_0^1xf(x)^2dx \int_0^1f(x)dx\le \int_0^1f(x)^2dx \int_0^1xf(x)dx$ REPLY [11 votes]: Consider the function $g$ defined on $[0,1]^2$ by $$g(x,y)=\tfrac12(x-y)(f(x)-f(y))f(x)f(y)$$ Then, on the one hand, $g\leqslant0$ on $[0,1]^2$ (why?). On the other hand, expanding $g(x,y)$ into the sum of four terms like $xf(x)^2f(y)$ or $xf(x)f(y)^2$, one gets the identity $$ \iint_{[0,1]^2} g(x,y)\mathrm dx\mathrm dy=\int_0^1xf(x)^2\mathrm dx\cdot\int_0^1f(x)\mathrm dx-\int_0^1xf(x)\mathrm dx\cdot\int_0^1f(x)^2\mathrm dx. $$ Since the LHS is $\leqslant0$, this proves the desired inequality.<|endoftext|> TITLE: fundamental group of $GL^{+}_n(\mathbb{R})$ QUESTION [8 upvotes]: I would like to know whether $GL^{+}_n(\mathbb{R})$, the set of all invertible matrices with positive determinant, is simply connected or not. I guess it is not simply connected but that is just a guess only, I do not know how to prove that, i.e. how to show that its fundamental group is non-trivial. Well, I can rigorously prove that this is connected and hence path connected as it is a Lie group. Could anyone rigorously tell me how to approach this kind of problem and solve it from the basic knowledge of fundamental groups or some other way? So basically I need some result or tools by which I can compute fundamental groups of all known classical matrix Lie groups. REPLY [23 votes]: The Gram-Schmidt process shows that $\text{GL}_n^{+}(\mathbb{R})$ deformation retracts onto $\text{SO}(n)$. There is a natural fiber bundle $$\text{SO}(n-1) \to \text{SO}(n) \to S^{n-1}$$ given by considering the action of $\text{SO}(n)$ on the unit sphere in $\mathbb{R}^n$, and the corresponding long exact sequence in homotopy shows that $\pi_1(\text{SO}(n)) \cong \pi_1(\text{SO}(3))$ for $n \ge 3$. But $\text{SO}(3) \cong \mathbb{RP}^3$ has fundamental group $\mathbb{Z}/2\mathbb{Z}$ (or more explicitly its double cover is $\text{SU}(2) \cong S^3$, which is simply connected), hence so does $\text{SO}(n)$ for $n \ge 3$, hence so does $\text{GL}_n^{+}(\mathbb{R})$ for $n \ge 3$. The cases $n = 1, 2$ are straightforward. The corresponding double covers of $\text{SO}(n), n \ge 3$ are the spin groups.<|endoftext|> TITLE: What does "spherical convex function" mean QUESTION [6 upvotes]: Let $M$ be a Riemannian manifold. A function $f: M \to \mathbb{R} $ is called spherical convex, if \begin{equation} \sin(\lvert xz \rvert) f(y) \leq \sin(\lvert xy\rvert) f(z) + \sin( \lvert yz\rvert ) f(x) \end{equation} for every two points $x,z$ and every third point $y$ lying on a shortest path between $x$ and $z$.
For a regular convex function one can say: A function $f$ is convex iff its graph is below the straight line connecting 2 points on the graph of $f$. Is there a similar way to describe spherical convex functions? REPLY [2 votes]: For your characterisation of convex functions, you are using the fact that the straight line is an optimiser of the inequality. More precisely, we try to solve $$ |xz| f(y) = |xy| f(z) + |yz|f(x) $$ where $x,y,z$ are collinear. Without loss of generality, we can assume that $x,y,z\in \mathbb{R}$. Write $z = y + \delta y$ and $x = y - \delta y$ and doing a second order Taylor expansion we get that $$ f'' = 0 $$ and so a necessary condition is that $f$ is linear. (We are making assumption of differentiability etc.) We then check that all linear functions $f$ satisfy this condition. And we can use it as an upper envelope of "convexity" between two points. Now let us try to do the same with spherical convexity. We have $$ \sin( 2 \delta y) f(y) = \sin (\delta y) f(y + \delta y) + \sin (\delta y) f(y - \delta y) $$ Taking the Taylor expansion to $O(\delta y^3)$ on both sides we get $$ \left[ 2 \delta y - \frac{8}{6} (\delta y)^3\right] f(y) =_{O(\delta y^3)} \left[ \delta y - \frac{1}{6} (\delta y)^3\right] \left[ 2 f(y) + f''(y) (\delta y)^2\right] $$ which simplifies to $$ -f(y) = f''(y) $$ or that $f(y) = A \sin (y + B)$. We check that these functions indeed verify the hypothesis: assuming $z \geq y \geq x$, and $A = 1$ since the expression is scale invariant, $$ \sin( z - x) \sin (y + B) \overset{?}{=} \sin( y - x) \sin (z + B) + \sin(z - y) \sin(x + B) $$ or $$ \left( \sin z \cos x - \cos z \sin x \right) \left(\sin y \cos B + \cos y \sin B\right) \overset{?}{=} \left( \sin y \cos x - \cos y \sin x\right) \left( \sin z \cos B + \cos z \sin B\right) + \left( \sin z \cos y - \cos z \sin y\right) \left(\sin x \cos B + \cos x \sin B\right) $$ which one can simply check to hold. Therefore, the interpretation of "spherical convex" that is analogous to the characterisation of convex function is exactly like the standard convex case, except where the comparison is made with the straightline through $(x,f(x))$ and $(z,f(z))$, we compare against the unique function $g(s) = A \sin(s + B)$ defined over the geodesic connection $x$ and $z$, with $s$ an arclength parametrisation, such that $g(s(x)) = f(x)$ and $g(s(z)) = f(z)$.<|endoftext|> TITLE: How to prove that the closed convex hull of a compact subset of a Banach space is compact? QUESTION [6 upvotes]: Can anyone help me with this problem? Prove that if $K$ is a compact subset of a Banach space $X$, then the closed convex hull of $K$ (that is, the closure of the set of all elements of the form $\lambda_1 x_1+ \dots + \lambda_n x_n$, where $n \geq 1, x_i \in K, \lambda_i \geq 0, \sum_i \lambda_i = 1$) is compact. Any help appreciated! REPLY [6 votes]: Since $X$ is complete it is enough to show that $\mathrm{hull}(K)$ is completely bounded. The proof of this fact you can find in theorem 3.24 in Rudin's Functional analysis. This proof follows the same steps proposed by Harald Hanche-Olsen.<|endoftext|> TITLE: Fourier series for $\sin x$ is zero? QUESTION [6 upvotes]: I have no practical reason for wanting to do this, but I was wondering why the Fourier series for $\sin x$ is the identical zero function. I'm probably doing something wrong or missing some important condition. Could someone help me see? 
REPLY [13 votes]: Wolfram gives the following: $$ \frac2{\pi}\int_{0}^{\pi} \sin x \sin (nx)\ dx = -\frac{2\sin(n\pi)}{\pi(n^2-1)} $$ You are almost correct in that this is zero for all $n$ because $\sin(n\pi) = 0$ for every integer. But when $n=1$, the formula doesn't work, because the $n^2-1$ in the denominator becomes zero too. You need to consider that as a special case: $$ \frac2{\pi}\int_{0}^{\pi} \sin x \sin (1x)\ dx = \frac2{\pi}\int_0^{\pi} \sin^2 x \ dx = \frac2{\pi} \frac{\pi}{2} = 1. $$<|endoftext|> TITLE: Fulton and Harris A.23 QUESTION [9 upvotes]: I am reading the appendix of Fulton and Harris pg. 459 and am trying to understand the following setup. Suppose $\lambda : \lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_k \geq 0$ is a partition of a positive integer $d$ into at most $k$ parts. Suppose $P$ is a symmetric polynomial of degree $d$ in $k$ variables. Write $$\omega_\lambda(P) = [\Delta \cdot P]_l$$ where $\Delta$ is the discriminant $\prod_{1 \leq i < j \leq k} (x_i - x_j)$ and $[\Delta\cdot P]_l$ denotes the coefficient of the monomial $X^l = x_1^{l_1}x_2^{l_2} \ldots x_k^{l_k}$ in $\Delta\cdot P$. The tuple $l = (l_1,\ldots,l_k)$ has entries $$l_1 = \lambda_1 + k-1, \hspace{2mm} l_2 =\lambda_2 + k -2,\hspace{2mm}, \ldots \hspace{2mm}, l_k = \lambda_k.$$ Now Fulton and Harris claim the identity $$P = \sum_{\lambda \hspace{2mm} \text{a partition of $d$ into at most $k$ parts}} \omega_\lambda(P) s_\lambda$$ where $s_\lambda$ is the Schur polynomial $$s_\lambda = \frac{ \left|\begin{array}{cccc}x_1^{l_1} & x_2^{l_1} &\ldots & x_k^{l_1} \\ x_1^{l_2} & x_2^{l_2} &\ldots & x_k^{l_2} \\ &&\vdots\\ x_1^{l_k} & x_2^{l_k} & \ldots & x_k^{l_k}\end{array}\right|}{\Delta}.$$ Why should this be the case? They claim it is easy to see but I have been starring at this for several days now and don't understand how this claim comes. Am I missing something or is this another proof in Fulton and Harris that they just glance over? Thanks. REPLY [3 votes]: As the Wikipedia article you link to states, the Schur polynomials of degree $d$ in $k$ variables form a basis of the space of symmetric polynomials of degree $d$ in $k$ variables. Thus we can write $P=\sum_\lambda c_\lambda s_\lambda$. Then multiplying by $\Delta$ yields $\Delta\cdot P=\sum_\lambda c_\lambda \Delta\cdot s_\lambda=\sum_\lambda c_\lambda d_\lambda$, where $d_\lambda$ is the determinant in your last equation. For every $\lambda$, only the corresponding determinant contains the corresponding monomial $X^l$, and the coefficient of $X^l$ in it is $+1$, so $[\Delta\cdot P]_l=c_\lambda$. (This would be clearer if the connection between $\lambda$ and $l$ were reflected in the notation.)<|endoftext|> TITLE: deformation retract of $GL_n^{+}(\mathbb{R})$ QUESTION [15 upvotes]: Well, I need a deformation retract from $GL_n^{+}(\mathbb{R})$ to $SO(n)$ Here is what I tried, let $A\in GL_n^{+}(\mathbb{R})$ $A=(A_1,\dots,A_n)$ where $A_i$'s are column vectors, Recall that the Gram-Schmidt algorithm turns A into an orthogonal matrix by the following sequence of steps. First normalise $A_1$ (i.e. make it unit length) $A_1\mapsto \frac{A_1}{|A_1|}$ next I make $A_2$ orthogonal to $A_1$ like $A_2\mapsto A_2-\langle A_1,A_2\rangle A_1$ and normalize $A_2\mapsto \frac{A_2}{|A_2|}$ like this up to $A_n$ But I am not getting an explicit homotopy which gives me a deformation retract $GL_n^{+}(\mathbb{R})$ to $SO(n)$ REPLY [41 votes]: Here is a geometric way to see this. 
To any ordered basis $(v_1,v_2,\ldots,v_n)$ of your vector space $V$ associate the "flag" of subspaces $V_0=\{0\}$, $V_1=\langle v_1\rangle$, $V_2=\langle v_1,v_2\rangle$, ... $V_n=\langle v_1,v_2,\ldots,v_n\rangle=V$. The Gram-Schmidt algorithm turns any such basis into an orthonormal basis $(b_1,\ldots,b_n)$ that gives rise to the same flag of subspaces. It is moreover the unique such basis (orthonomal and with the same flag) for which in addition each $b_i$, inside $V_i$, is on the same side of the hyperplane $V_{i-1}$ as the original basis vector $v_i$. Now taking $V=\Bbb R^n$ we can identify $GL_n^+(\Bbb R)$ with the set of ordered bases $(v_1,v_2,\ldots,v_n)$ with $\det(v_1,v_2,\ldots,v_n)>0$, and $SO(n)$ with the set of ordered orthonormal bases $(b_1,b_2,\ldots,b_n)$ with $\det(b_1,b_2,\ldots,b_n)>0$. Now for such a basis $(v_1,v_2,\ldots,v_n)$ let $(b_1,\ldots,b_n)$ be the orthonormal basis associated to it under Gram-Schmidt, and simultaneously (or successively if you prefer) deform every $v_i$ linearly to $b_i$, as $t\mapsto (1-t)v_i+tb_i$. The intermediate vectors stay inside $V_i$, and since $b_i$ is on the same side as $v_i$, they never enter $V_{i-1}$. This means the deformed vectors stay linearly independent at all times, so the deformation takes place inside $GL_n(\Bbb R)$. As the determinant cannot vanish anywhere we have $\det(v_1,v_2,\ldots,v_n)>0\implies \det(b_1,b_2,\ldots,b_n)>0$ and we have a deformation retract of $GL_n^+(\Bbb R)$ to $SO(n)$. It is in fact a strong deformation retract: elements of $SO(n)$ remain fixed.<|endoftext|> TITLE: What is the main difference between a free tree and a rooted tree? QUESTION [6 upvotes]: In graph theory what is the difference between a rooted tree and a free tree ? What is normally meant when just the plain "tree" is used ? REPLY [7 votes]: A rooted tree comes with one of its vertices specially designated to be the "root" node, such that there's an implicit notion of "towards the root" and "away from the root" for each edge. In a free tree there's no designated root vertex. You can make a free tree into a rooted one by choosing any of its vertices to be the root. REPLY [6 votes]: In graph theory, the basic definition of a tree is that it is a connected graph without cycles. This definition does not use any specific node as a root for the tree. A rooted tree introduces a parent — child relationship between the nodes and the notion of depth in the tree. Roughly and visually, adding a root to a tree just corresponds to grabbing one of the nodes of the tree and hanging the tree with this node. Once the tree is hanged, each node has a depth related to its height, a parent to which it is suspended and several children which hang from this node.<|endoftext|> TITLE: How to prove gcd of consecutive Fibonacci numbers is 1? QUESTION [6 upvotes]: Possible Duplicate: Prove that two any consecutive terms of Fibonacci sequence are relatively prime How to prove it ? Can you help me ? Let $f_n$ be Fibonacci Sequence. $$(f_{n},f_{n+1})=1,\quad \forall\,n\in\mathbb{N}.$$ Here $(a,b)$ is the greatest common divisor. REPLY [3 votes]: Hint $\ $ Put $\rm\:a_n = 1 = b_n\:$ in the much more general result below. Theorem $\ $ If $\rm\:(\color{#c00}{b_n,\,f_n) = 1}\:$ and $\rm\:f_{n+1} = a_n f_n + b_n f_{n-1}\:$ then $\rm\:(f_{n+1},\,f_n) = (f_1,\,f_0).$ Proof $\ $ Clear if $\rm\:n = 0.\:$ Else by Euclid and induction we have $$\rm (f_{n+1},\,f_n) = (a_n f_n\! +\! 
\color{#c00}{b_n} f_{n-1},\,f_n) = (\color{#c00}{b_n} f_{n-1},\, \color{#c00}{f_n}) = (f_{n-1},\,f_n) = (f_1,\,f_0)\qquad$$ Remark $ $ Similarly we can prove much more generally that the Fibonacci numbers $\rm\:f_n\:$ comprise a strong divisibility sequence: $\rm\, (f_m,f_n) = f_{(m,n)},\:$ i.e. $\rm\:gcd(f_m,f_n) = f_{\gcd(m,n)}.$ OP is case $\rm\,m=n\!+\!1.$<|endoftext|> TITLE: $2\mathbb Z$ is not a definable set in the structure $(\mathbb Z, 0, S,<)$ QUESTION [6 upvotes]: This is (a translation of) an excerpt from a model theory textbook that shows that $2\mathbb Z$ is not a definable set in the structure $(\mathbb Z, 0, S, <)$, where $S$ is the successor function. Suppose $2\mathbb Z$ is defined by a formula $\phi(x)$. Then let $\mathcal M \succ (\mathbb Z, 0, S, <)$ be a proper elementary extension and $D$ be the set defined by $\phi(x)$ in $\mathcal M$. In $\mathbb Z$, odd numbers and even numbers alternate. A similar property holds in $\mathcal M$, thus (1): $a\in D \Rightarrow S(a) \not \in D$. Let $\sigma : \mathcal M \rightarrow \mathcal M$ be a map defined by $a \mapsto a\ (a \in \mathbb Z)$ and $a \mapsto S(a)\ (a \not \in \mathbb Z)$. Then $\sigma$ is an isomorphism on $\mathcal M$. By (1), $\sigma$ does not preserve $D$. Thus $2\mathbb Z$ is not definable in $\mathbb Z$. This uses the following proposition: Suppose $A\subset |\mathcal M|^n$ is definable. Then for every isomorphism $\sigma$ on $\mathcal M$, $\sigma(A) = A $. What I don't understand is the argument that $\sigma$ does not preserve $D$. I suppose this argument assumes $D\setminus\mathbb Z\neq\emptyset$. This intuitively holds, because the language is not expressive enough to exclude non-integers from $D$ (I'm thinking of $\mathbb R$ as $\mathcal M$). But I am unable to show this. How can you prove that $D\setminus\mathbb Z\neq\emptyset$? REPLY [6 votes]: Note that if $\phi (x)$ defines $2 \mathbb{Z}$ in $\mathcal{Z} = ( \mathbb{Z} , 0 , S , < )$ then as $\mathcal{Z} \models ( \forall x ) ( \phi (x) \vee \phi ( S(x) ) )$, every elementary extension of $\mathcal{Z}$ must also satisfy this sentence. From this it follows that $D \setminus \mathbb{Z}$ is nonempty. Added for clarity: If $\phi (x)$ defines $2 \mathbb{Z}$ in the model $\mathcal{Z}$, then it must be that $$\mathcal{Z} \models ( \forall x ) ( \phi (x) \leftrightarrow \neg \phi ( S(x) ) )$$ (and this is probably much better than my original). As an elementary extension, $\mathcal{M}$ must also satisfy this sentence. From here we may conclude that there is an $a \in M \setminus \mathbb{Z}$ (where $M$ denotes the universe of $\mathcal{M}$) such that $\mathcal{M} \models \phi ( a )$. (Taking any $a \in M \setminus \mathbb{Z}$, either it or $S^{\mathcal{M}} (a)$ will be as required.) Using the automorphism $\sigma$ of $\mathcal{M}$ given in the question, and taking any $a \in M \setminus \mathbb{Z}$ such that $\mathcal{M} \models \phi (a)$, we then have the following: As $\mathcal{M} \models ( \forall x ) ( \phi (x) \leftrightarrow \neg \phi ( S(x) ) )$ it follows that $\mathcal{M} \models \neg \phi ( S(a) )$. As $\sigma$ is an automorphism it follows that $\mathcal{M} \models \phi ( \sigma(a) )$, but as $\sigma (a) = S^{\mathcal{M}}(a)$ we get $\mathcal{M} \models \phi ( S(a) )$. These conclusions are clearly contradictory!<|endoftext|> TITLE: Homological algebra in PDE QUESTION [13 upvotes]: I have been fascinated by the power and wide applicability of homological methods in algebra and topology.
Because I am also interested in PDE, there arises a natural question for me. What is known about applications of methods from homological algebra to the analysis of solutions of PDE on domains in $\mathbb{R}^n$? REPLY [5 votes]: Homological techniques are very often seen in the literature on PDEs treated from a differential-geometric point of view. An extensive overview can be found in: Homological methods in equations of mathematical physics. Also I recommend one of my favorite books here: The Geometry of Physics: An Introduction by Theodore Frankel. Some simple google-fu gives me a recent book also: Cohomological Analysis of Partial Differential Equations and Secondary Calculus. I am working on computational physics, and the methods arising from de Rham cohomology have been used extensively in the construction of finite element spaces for equations in electromagnetism: Finite element exterior calculus, homological techniques, and applications. Mostly, people in my field are interested in solving the Hodge Laplacian acting on a $k$-form (magnetic flux or electric field).<|endoftext|> TITLE: Derivative of the sine function when the argument is measured in degrees QUESTION [7 upvotes]: I'm trying to show that the derivative of $\sin\theta$ is equal to $\pi/180 \cos\theta$ if $\theta$ is measured in degrees. The main idea is that we need to convert $\theta$ to radians to be able to apply the identity $d/dx \sin x = \cos x $. So we need to express $ \sin \theta$ as $$ \sin_{deg} \theta = \sin(\pi \theta /180), $$ where $\sin_{deg}$ is the $\sin$ function that takes degrees as input. Then applying the chain rule yields $$ d/d\theta [ \sin(\pi\theta/180)] = \cos(\pi \theta/180) \pi/180 = \frac{\pi}{180}\cos_{deg}\theta. $$ Is this derivation formally correct? REPLY [10 votes]: This annoyed me when I revisited this material years later and had to teach this material to someone else, so I'm posting an answer here. (The other answer is perfectly fine, but I wanted to give a more complete exposition.) Now here's the thing: you're told to find the derivative of $\sin(\theta)$ when $\theta$ is in degrees. At first glance, this seems simple: it should just be $\cos(\theta)$. However, this answer is wrong, because you found that $\sin(\theta)$ has derivative $\cos(\theta)$ under the assumption that $\theta$ is measured in radians, and not in degrees. Here's how you should approach the problem. Notice that $\sin(\theta)$, when $\theta$ is in degrees or when $\theta$ is in radians, gives two different values. So, in fact, for this problem, writing $\sin(\theta)$ is itself ambiguous, because it isn't clear if $\theta$ is in degrees or radians. (However, for the rest of your studies, you'll likely assume the radian form.) For the moment, suppose that $\sin_d(\theta)$ denotes $\sin(\theta)$ when $\theta$ is measured in degrees, and $\sin_r(\theta)$ denotes $\sin(\theta)$ when $\theta$ is measured in radians. We use the analogous notation for $\cos_d$ and $\cos_r$. The motivation for this seems confusing at first glance: why do we need to do this? It's because when we're calculating $\sin(\theta)$ depending on whether $\theta$ is in degrees or radians, we are fundamentally working with two different functions. Assuming we only care about the standard unit circle, when $\theta$ is in degrees, the domain is $[0, 360)$. When $\theta$ is in radians, the domain is $[0, 2\pi)$. Hence we use $\sin_d$ and $\sin_r$ for these two different cases.
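As a quick numerical check of the $\pi/180$ factor discussed here (a minimal sketch of my own, assuming Python is acceptable; the helper names sin_deg and cos_deg are mine, not from the thread), a central finite difference of the degree-based sine should match $\frac{\pi}{180}\cos_d(\theta)$:

```python
import math

def sin_deg(theta_deg):
    # sine of an angle given in degrees: sin_d(theta) = sin(pi * theta / 180)
    return math.sin(math.pi * theta_deg / 180.0)

def cos_deg(theta_deg):
    return math.cos(math.pi * theta_deg / 180.0)

h = 1e-6  # small step, in degrees
for theta in (0.0, 30.0, 45.0, 120.0):
    numeric = (sin_deg(theta + h) - sin_deg(theta - h)) / (2 * h)
    claimed = (math.pi / 180.0) * cos_deg(theta)
    print(f"theta={theta:6.1f}  finite difference={numeric:.8f}  (pi/180)cos_d={claimed:.8f}")
```

The two columns agree to within the finite-difference error, consistent with the chain-rule computation in the question.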
Now, back to the problem: what you are asked to find is $$\dfrac{\text{d}}{\text{d}\theta}\sin_d(\theta)\text{.}$$ You know for a fact that $$\dfrac{\text{d}}{\text{d}\theta}\sin_r(\theta) = \cos_r(\theta)\text{.}$$ So now, the problem boils down to this: how can we write $\sin_d$ in terms of $\sin_r$ so that we can apply the chain rule to find $\dfrac{\text{d}}{\text{d}\theta}\sin_d(\theta)$? Recall that $\pi$ radians is $180$ degrees. So, if we have an angle which is $\theta$ degrees, it follows that the equivalent radian angle is $\dfrac{\theta}{180}\pi$. It follows that $$\sin_d(\theta)=\sin_r\left(\dfrac{\theta}{180}\pi \right)\text{.}$$ Hence, $$\dfrac{\text{d}}{\text{d}\theta}\sin_d(\theta) = \dfrac{\text{d}}{\text{d}\theta}\sin_r\left(\dfrac{\theta}{180}\pi \right) = \dfrac{\pi}{180}\cos_r\left(\dfrac{\pi}{180}\theta \right)$$ from an application of the chain rule. Lastly, note that the angle $\dfrac{\pi}{180}\theta$ is in radians. The equivalent degree measure would be $$\dfrac{\pi}{180}\theta \cdot \dfrac{180}{\pi} = \theta$$ hence, $$\dfrac{\pi}{180}\cos_r\left(\dfrac{\pi}{180}\theta \right) = \dfrac{\pi}{180}\cos_d(\theta)$$ and then we obtain $$\dfrac{\text{d}}{\text{d}\theta}\sin_d(\theta) = \dfrac{\pi}{180}\cos_d(\theta)$$ as desired. Remark: This is an exercise in Stewart's text. I wouldn't expect a typical Calculus I student to be able to do this exercise, given that very little time (in my experience) is spent on functions and given the strange notation (this would look strange to a Calc. I student - remember, most of these people haven't even read proofs) involved. As you might guess, although this problem is interesting, given the way this exercise is usually written and implemented in calculus courses, I'm not a fan. More time should be dedicated to thinking about functions than usually done for a Calc. I course should this problem be assigned as an exercise.<|endoftext|> TITLE: Uniqueness of symmetric positive definite matrix decomposition QUESTION [6 upvotes]: We know that any symmetric positive semi-definite matrix $K$ can be written as $K= AA^T$, where $A$ has real components. One way to get to $A$ is to compute eigen value decomposition of $K= P^T DP$ and define $A= P^T \sqrt{D}$, where $\sqrt{D}$ simply computes the square roots of diagonal elements. Now, I wonder to what extent such a decomposition is unique. Of course if $AA^T=K$ then $-A$ also works. My questions are: Up to what transformation the above matrix decomposition is unique. Is positive definiteness (PD) and positive-semi definiteness (PSD) of $K$ makes difference in uniqueness of this decomposition? To have a unique solution, do we need to fix the number of columns of $A$ (for a PSD or PD matrix)? Is the decomposition unique only if we are given this dimension? $A$ is different from square root of $K$, right? Because square root does not have to be symmetric?! Answering any part will be useful for me. Specially part 2. REPLY [2 votes]: The Cholesky decomposition $K=AA^T$ of a positive definite matrix $K$ is unique when $A$ has positive diagonal entries. In general, Cholesky decomposition of positive semi-definite matrix $K$ is not unique. I don't understand question 3. Does't $A$ have the same size as $K$? The square root of a positive (semi-)definite matrix $K$ is defined as a Hermitian matrix $B$, such that $K=BB$, so in general, $A \neq B$.<|endoftext|> TITLE: Does the Riemann integral come from a measure? QUESTION [14 upvotes]: Can we approach the Riemann integral with measure theory? 
That is: can we find a measure $\mu$ defined on a $\sigma$-algebra of $\mathbb{R}$ such that a function is $\mu$-integrable if and only if it is Riemann integrable, and that the integral $\int f d\mu$ is equal to the corresponding Riemann integral. If so can we extend this to improper Riemann integrals? What about Riemann integration in $\mathbb{R}^n$? REPLY [16 votes]: No. Singletons would have to be measurable in this $\sigma$-algebra since their characteristic functions are Riemann integrable, alternatively because $\{x\} = \bigcap_{n=1}^\infty \left[x-\frac{1}{n},x+\frac{1}{n}\right]$ shows that they are a countable intersection of measurable sets. Therefore the characteristic function of every countable set would have to be Riemann integrable, but $\chi_{[0,1] \cap \mathbb{Q}}$ provides a counterexample.<|endoftext|> TITLE: Is there a formal definition of "Greater Than" QUESTION [5 upvotes]: Intuitively, one can say that $S(n) > n$. But how do we prove it using the Peano Axioms? It seems like I need a formal statement as to what $>$ means. REPLY [4 votes]: Here is a definition that uses the order axiom: For an ordered field $\mathbb{F}$, there is a unique subset $P$ satisfying the following conditions. if $a,b\in{P}$ then $a+b,ab\in{P}$. for all $a$ in $\mathbb{F}$, one and only one of the following is true: $a\in{P}$, $-a\in{P}$, or $a=0$. We say that $p \in \mathbb{F}$ is positive iff $p \in P$. The subset $P$ is the positive numbers of the field. From here, the relation of 'strictly greater than' can be defined as follows. $$a>b\iff{a-b}\in{P}$$<|endoftext|> TITLE: A new kind of fractal? QUESTION [51 upvotes]: http://www.gibney.de/does_anybody_know_this_fractal Is this some known kind of fractal? Update: This one got a lot of great feedback from around the net. I summarized it in the section labeled "Update 24.10.2012". REPLY [2 votes]: The intensity as defined is $0$ for almost all values of $z$, so it is unclear how the visualization relates to the definition. If you post the code that generated the images you will have a better chance of getting a satisfactory answer, and that would also address @doetoe's observation of what are apparently artifacts.<|endoftext|> TITLE: Uncountability of basis of $\mathbb R^{\mathbb N}$ QUESTION [5 upvotes]: Given vector space $V$ over $\mathbb R$ such that the elements of $V$ are infinite-tuples. How to show that any basis of it is uncountable? REPLY [3 votes]: I like the following solution (a colleague of mine told me about it): For any $a\in\mathbb R$ we have a sequence $\widehat a=(1,a,a^2,\dots,a^k,\dots)$. The set $\{\widehat a; a\in\mathbb R\}$ is linearly independent. (To see this just notice that if you choose $n$ sequences from this set then the first $n$ coordinates of these sequences form a Vandermonde matrix.) Thus we have a linearly independent set of cardinality $\mathfrak c$. Hence the cardinality of any Hamel basis is at least $\mathfrak c$. At the same time, the cardinality of the whole space is $|\mathbb R^{\mathbb N}|=\mathfrak c^{\aleph_0}=\mathfrak c$, so the basis cannot have more than $\mathfrak c$ elements. Thus the Hamel dimension of this space is $\mathfrak c$.<|endoftext|> TITLE: Normal Subgroups in a p-group QUESTION [5 upvotes]: How can one prove the following claim: Elementary abelian $p$-groups of order $p^n$ have the maximal number of normal subgroups among all $p$-groups of the same order. Is it indeed true?
Thanks in advance REPLY [6 votes]: I'm going to prove a stronger result: An elementary abelian p-group has strictly more subgroups than any other group of the same size. Since every subgroup of an abelian group is normal this answers the question. Let $G$ be a $p$-group of size $p^n$ and fix some $0 \leq k \leq n$. We're going to bound the number of subgroups $H \subset G$ with rank $k$. Any such subgroup is generated by $k$ elements, but can be so generated in many different ways. In fact, by the Burnside basis theorem, a $k$-element subset generates $H$ if and only if it is a basis for the $\mathbb{F}_p$ vector space $G/\Phi(G)$. So for each $H$ there are always at least $(p^k-1)(p^k-p)\ldots(p^k-p^{k-1})$ different choices giving the same $H$. Thus the total number of $H$ of rank $k$ is at most $$\frac{(p^n-1)(p^n-p)\ldots(p^n-p^{k-1})}{(p^k-1)(p^k-p)\ldots(p^k-p^{k-1})}.$$ But that formula gives exactly the number of subgroups of rank $k$ of an elementary abelian group of size $p^n$, so the maximum possible number of subgroups of a $p$-group of given size is realized by an elementary abelian subgroup. Now I haven't yet proved that elementary abelian p-groups are the only p-groups which achieve the maximum. To that end consider the case $k=n$. To achieve the maximum there must be one subgroup of G with rank n, but this implies that G is elementary abelian.<|endoftext|> TITLE: Motivation For Biology Students QUESTION [5 upvotes]: Can someone give me ideas for specific examples that might motivate biology/chemistry students to learn basic calculus ( limits, derivatives and basic integrals and theorems such as Lagrange's, Rolle's ,etc... ) Thanks in advance ! REPLY [2 votes]: Even the least mathematical areas of biology, a good understanding of basic statistics is essential. Such an understanding is not really possible without some calculus background.<|endoftext|> TITLE: Direct limit of $\mathbb{Z}$-homomorphisms QUESTION [7 upvotes]: What is the direct limit of the following sequence of $\mathbb{Z}$-homomorphisms (as groups)? $$ \mathbb{Z} \xrightarrow{2} \mathbb{Z} \xrightarrow{3} \mathbb{Z}\xrightarrow{5} \mathbb{Z}\xrightarrow{7} \mathbb{Z}\xrightarrow{11} \mathbb{Z}\xrightarrow{13}\cdots $$ Here the label $p$ indicates multiplication by $p$, and the sequence is just the sequence of primes. Is there an easy description of this limit? Would we get a different limit if we take a subsequence of the primes, or the sequence of primes squared? REPLY [10 votes]: Let $(a_k)_{k\in\Bbb N}$ be a sequence of natural numbers. Consider the direct limit of the system $$\Bbb Z\xrightarrow{a_1}\Bbb Z\xrightarrow{a_2}\Bbb Z\xrightarrow{a_3}\Bbb Z\xrightarrow{a_4}\cdots.$$ We may rewrite the above as an isomorphic system comprised of inclusions of subgroups of $\Bbb Q$: $$\Bbb Z\hookrightarrow \frac{1}{a_1}\Bbb Z\hookrightarrow\frac{1}{a_1a_2}\Bbb Z\hookrightarrow \frac{1}{a_1a_2a_3}\Bbb Z\hookrightarrow\cdots.$$ Note the commutative diagrams that show this system is isomorphic. $\hskip 1.5in$ (Here $\alpha$ is division by $a_1\cdots a_{n-1}$ and $\beta$ is division by $a_1\cdots a_n$.) At this point the direct limit is just the union of ascending subgroups, which will give the subgroup of rationals with denominator dividing into some product of $a_i$'s.<|endoftext|> TITLE: Why this topological space is not a topological manifold? QUESTION [7 upvotes]: I'm having troubles to prove that the following space is not a topological manifold: Let $r:S^1\to S^1$ be a rotation of $\frac{2\pi}{3}$, i. 
e., $r(\cos\theta,\sin\theta)=\left(\cos\left(\theta+\frac{2\pi}{3}\right),\sin\left(\theta+\frac{2\pi}{3}\right)\right)$, such that $X=B^2/\sim$, where the partition is given by $P=\{x\}$ if $|x|\lt1$ and $P=\{x,r(x),r^2(x)\}$ if $|x|=1$. So this is my attempt: Let $x\in X$ be a point in the boundary of $B^2$, the neighborhood of $x$ in $B^2$ is an open set $U$ of $B^2$ with $r(U)$ and $r^2 (U)$. My strategy is to prove that this neighborhood is not homeomorphic to an open subset of $\mathbb R^n$; can anyone help me with this part? Thanks REPLY [5 votes]: All points of $X$ that come from $S^1$ ‘look the same’, so you might as well pick a particular one; the natural choice is to pick $x=q\big(\langle 1,0\rangle\big)$, where $q:B^2\to X$ is the quotient map. A set $U\subseteq X$ is an open nbhd of $x$ iff $q^{-1}[U]$ is an open nbhd of $\left\{\langle 1,0\rangle,\left\langle-\frac12,\frac12\sqrt3\right\rangle,\left\langle-\frac12,-\frac12\sqrt3\right\rangle\right\}$ in $B^2$. In the sketch below, the three red points show $q^{-1}[\{x\}]$, and the blue shows what a fairly typical $q^{-1}[U]$ should look like. $U$ looks rather like a book with three pages: each of the sets $q[A],q[B]$, and $q[C]$ is one page, and $q$ takes the three bits of $S^1$ attached to $A,B$, and $C$ to a single interval homeomorphic to $(0,1)$ that forms the spine of this little book. If there were just two pages, they’d fit together perfectly to make something homeomorphic to an open disk in the plane, but there are three. Thus, if you remove the spine of the book, you’re left with three disconnected pages; removing a set homeomorphic to $(0,1)$ from an open disk in $\Bbb R^2$ leaves at most two disconnected pieces. Alternatively, note that the yellow curve(s) in the picture correspond to a simple closed curve in $X$. However, this simple closed curve doesn’t split $U$ into an inside, containing $x$, and an outside: you can travel in $q[C]$ from $x$ to a point on $q[S^1]$ ‘outside’ of the yellow curve and from there into the part of $A$, say, on the opposite side of the yellow curve from $\langle 1,0\rangle$. A simple closed curve in the plane, however, does split the plane into two regions; this is the Jordan curve theorem.<|endoftext|> TITLE: What is the difference between totally bounded and uniformly bounded? QUESTION [13 upvotes]: Can somebody please explain to me what the difference is between totally bounded and uniformly bounded functions? REPLY [16 votes]: To illustrate the concepts, I consider real functions in one real variable in the following. Of course this carries over to arbitrarily general contexts (domains in $\mathbb{R}^n$, metric spaces, Banach spaces, whatever). A single function $f:\mathbb{R}\rightarrow\mathbb{R}$ is bounded, if there exists a constant $C\ge 0$ such that $|f(x)|\le C$ for all $x\in\mathbb{R}$. The term uniformly bounded only makes sense if you are considering an object that depends on at least one additional parameter, e.g. a sequence of functions $(f_k)_k$ ($f_k(x)$ depends on the index $k$ and on $x$). A sequence $(f_k:\mathbb{R}\rightarrow\mathbb{R})_k$ of functions is uniformly bounded if there exists a constant $C\ge 0$ s.t. for all $k$ we have $|f_k(x)|\le C$ for all $x\in\mathbb{R}$. The important thing here is that $C$ does not depend on $x$. This is what the word uniformly means. In contrast, such a sequence is (pointwise) bounded, if for all $x\in\mathbb{R}$ there exists a constant $C=C(x)\ge 0$ such that $|f_k(x)|\le C$ for all $k$.
Here, $C$ depends on $x$.<|endoftext|> TITLE: How to define a "metric" whose range is not the reals? QUESTION [6 upvotes]: This may sound a very stupid question. Why do we need to restrict a metric from a general set $X$ to map to the positive real numbers? I try to be clearer. We are given a set $X$ and a totally ordered set ($Y,\succeq $) with least element $0$ and "an addition-like operation on it" denoted by $+$. A metric $d$ is a function $d:X\times X \rightarrow Y $ satisfying the following axioms $\forall x,y,z\in X$: (1) $d(x,y)\succeq 0$ if $x\neq y$ and $d(x,y)=0$ if $x=y$; (2) $d(x,y)=d(y,x)$; (3) $d(x,z)\preceq d(x,y)+d(y,z)$. Does this definition make any sense? If yes, has work been done on this subject? Thank you REPLY [2 votes]: A very general answer in the non symmetric case is given by F.W. Lawvere in the paper available for download here entitled "Metric spaces, generalized logic and closed categories".<|endoftext|> TITLE: Subset of Power Set of Natural Numbers QUESTION [5 upvotes]: I was given a question today which asks to find a subset of the power set of natural numbers having cardinality $2^{\aleph_0}$ and in which any two subsets have finitely many elements in their intersection. My solution was to consider the sequences of rationals converging to a real number, for every real, and then to take the bijection of those sequences of rationals to sequences of naturals. Can one prove this more directly? That is, show an explicit subset of the power set? REPLY [6 votes]: Let ${^\Bbb N}\{0,1\}$ be the set of infinite sequences of $0$’s and $1$’s, and let $\Sigma$ be the set of finite sequences of $0$’s and $1$’s. For each $\sigma\in{^\Bbb N}\{0,1\}$ let $$S_\sigma=\{s\in\Sigma:s\text{ is an initial segment of }\sigma\}\;;$$ it’s not hard to show that if $\sigma,\tau\in{^\Bbb N}\{0,1\}$, and $\sigma\ne\tau$, then $S_\sigma\cap S_\tau$ is finite. Finally, $\Sigma$ is countably infinite, so it admits a bijection with $\Bbb N$. With a bit of work one can produce an explicit bijection $h:\Sigma\to\Bbb N$, and then the collection $\big\{h[S_\sigma]:\sigma\in{^\Bbb N}\{0,1\}\big\}$ has the desired properties. Added: Here’s a pretty nice bijection $h$. If $s=\langle b_0,\dots,b_{n-1}\rangle\in\Sigma$, let $$h(s)=2^n+\sum_{k=0}^{n-1}b_k2^k\;.$$ As $s$ runs over all sequences in $\Sigma$ of length $n$, $\sum_{k=0}^{n-1}b_k2^k$ runs over all non-negative integers in $\{0,\dots,2^n-1\}$, and $h(s)$ runs over $\{2^n,\dots,2\cdot2^n-1\}=\{2^n,\dots,2^{n+1}-1\}$, so $h$ is clearly a bijection from $\Sigma$ to $\Bbb N$. Indeed, given $n\in\Bbb N$, we can calculate $h^{-1}(n)$ as follows. First let $k=\lfloor \lg n\rfloor$, where $\lg x$ is the binary log; then $2^k\le n<2^{k+1}$. Let $m=n-2^k$; then $0\le m<2^k$, so $m$ has a unique binary representation $$m=\sum_{i=0}^{k-1}b_i2^i\;,$$ where each $b_i\in\{0,1\}$, and $h^{-1}(n)=\langle b_0,\dots,b_{k-1}\rangle$.<|endoftext|> TITLE: 1-separated sequences of unit vectors in Banach spaces QUESTION [5 upvotes]: Given an infinite-dimensional Banach space $X$, I would like to construct a sequence of linearly independent unit vectors such that $\|u_k-u_l\|\geqslant 1$ whenever $k\neq l$. Any ideas on how to realize this? REPLY [2 votes]: It can be done fairly easily in any infinite dimensional normed space $X$. For a proof, see Lemma 1.4.22 in Robert E. Megginson's An Introduction to Banach Space Theory.<|endoftext|> TITLE: Give an example of a function $f: \mathbb{R} \to \mathbb{R}$ which is continuous only at $0$. 
QUESTION [6 upvotes]: I do not know an example. I will ask questions if in doubt about the proofs provided. Thank you!! REPLY [4 votes]: $$ f(x) = \left\{ \begin{array}{rcl} x,& \mbox{if} & x \in \mathbb{Q}\\ -x , & \mbox{if} & x \notin \mathbb{Q} \\ \end{array} \right. $$ $|x-0| < \varepsilon \Rightarrow |f(x) - f(0)| = |x| <\varepsilon$, so $f$ is continuous at $0$. If $x_0 \neq 0$ and $x_0 \in \mathbb{Q}$, there are $x \notin \mathbb{Q}$ arbitrarily near $x_0$ with $f(x) = -x$ near $-x_0$, which is not near $f(x_0) = x_0$; if $x_0 \notin \mathbb{Q}$, use rational $x$ near $x_0$ instead. Hence $f$ is not continuous at any $x_0 \neq 0$.<|endoftext|> TITLE: Rank of product of a matrix and its transpose QUESTION [34 upvotes]: How do we prove that $\operatorname{rank}(A) = \operatorname{rank}(AA^T) = \operatorname{rank}(A^TA)$ ? Is it always true? REPLY [24 votes]: Here is a common proof. All matrices in this note are real. Think of a vector $X$ as an $m\!\times\!1$ matrix. Let $A$ be an $m\!\times\!n$ matrix. We will prove that $A A^T X = 0$ if and only if $A^T X = 0$. It is clear that $A^T X = 0$ implies $AA^T X = 0$. Assume that $AA^T X = 0$ and set $Y = A^T\!X$. Then $X^T\!A\, Y = 0$, and thus $(A^T\!X)^T Y = 0$. That is $Y^T Y = 0$. Setting $Y = [y_1 \cdots y_n]^\top$ we obtain $0 = Y^T Y = y_1^2 + \cdots + y_n^2$. Since the entries of $Y$ are real, for all $k \in \{1,\ldots,n\}$ we have $y_k^2 \geq 0$. Therefore, $Y^T Y = 0$ yields $y_k = 0$ for all $k \in \{1,\ldots,n\}$. Thus $Y = A^T X = 0$. We just proved that the $m\!\times\!m$ matrix $AA^T$ and the $n\!\times\!m$ matrix $A^T$ have the same null space. Consequently, they have the same nullity. The nullity-rank theorem states that $$ \operatorname{Nul} AA^T + \operatorname{Rank} AA^T = m = \operatorname{Nul} A^T + \operatorname{Rank} A^T. $$ Hence $\operatorname{Rank} AA^T = \operatorname{Rank} A^T$.<|endoftext|> TITLE: Why would we expect the pushforward to encode the total derivative of a smooth map? QUESTION [7 upvotes]: According to Lee, the pushforward was invented to give a coordinate independent definition of the total derivative of a smooth function between two smooth manifolds. To each smooth map $F:M \to N$ and each point $p \in M$ we associate a linear map $F_*:T_pM\to T_{F(p)}N$ defined by $F_*X(f) = X(f \circ F)$ where $X$ is any derivation in $T_pM$ and $f:N \to \mathbb{R}$ is any smooth function. Given a smooth chart $\phi:M \to \mathbb{R}^m$ near $p$ we have a basis for $T_pM$ given by $\frac{\partial}{\partial x^i}|_{p} = ({\phi ^{-1}}_*)\frac{\partial}{\partial x^i}|_{\phi(p)}$. Similarly given a smooth chart $\psi$ near $F(p)$ we have a basis for $T_{F(p)}N$ given by $\frac{\partial}{\partial y^i}|_{F(p)} = ({\psi ^{-1}}_*)\frac{\partial}{\partial y^i}|_{\psi(p)}$. A calculation in Lee shows that the matrix representation of $F_*$ with respect to these bases is the total derivative of the coordinate representation $\hat{F} = \psi \circ F \circ \phi ^{-1}$ evaluated at $\phi(p)$. My question is, is there some intuitive reason why we would expect this to be true? This all seems very abstract to me. I can't tell if it is supposed to be obvious that this definition should be a coordinate independent way of encoding the total derivative of $F$ and I am just missing something, or if it is just difficult to understand. How should I think about the pushforward? REPLY [6 votes]: As I understand your question, you want to know why the definition $F_*X(f) := X(f \circ F)$ is an appropriate generalization of the total derivative. In other words, knowing only the definition of the total derivative, how would one come to this definition of pushforward?
Qiaochu's comment is the key: it comes down to the way directional derivatives relate to derivations. Let's flesh out this idea by recalling some multivariable calculus. Let $F\colon \mathbb{R}^m \to \mathbb{R}^n$ be smooth, and let $D_pF\colon \mathbb{R}^m \to \mathbb{R}^n$ denote the total derivative at $p \in \mathbb{R}^m$. To each vector $w \in \mathbb{R}^n$ (based at $F(p)$), we associate the derivation at $F(p) \in \mathbb{R}^n$ via: $$w \in \mathbb{R}^n \mapsto w^j \left.\frac{\partial}{\partial x^j}\right|_{F(p)}.$$ In particular, for $v \in \mathbb{R}^m$ (based at $p$), $$D_pF(v) \in \mathbb{R}^n \mapsto D_pF(v)^j \left.\frac{\partial}{\partial x^j}\right|_{F(p)}.$$ And in fact, this derivation on the right-hand side is none other than $$\left.v^i\frac{\partial}{\partial x^i}\right|_p(-\circ F).$$ To see this, we just use the chain rule: $$\begin{align*} v^i \left.\frac{\partial}{\partial x^i}\right|_p(-\circ F) & = v^i \left.\frac{\partial F^j}{\partial x^i}\right|_p \left.\frac{\partial}{\partial x^j}\right|_{F(p)} \\ & = v^i D_pF(e_i)^j \left.\frac{\partial}{\partial x^j}\right|_{F(p)} \\ & = D_pF(v)^j \left.\frac{\partial}{\partial x^j}\right|_{F(p)} \end{align*}$$ Alternatively, I believe it also suffices to note that both derivations give the same value when applied to the coordinate function $x^k$: $$D_pF(v)^j\frac{\partial x^k}{\partial x^j} = D_pF(v)^k = v^i D_pF(e_i)^k = \left.v^i\frac{\partial F^k}{\partial x^i}\right|_p = v^i\left.\frac{\partial}{\partial x^i}\right|_p(x^k \circ F).$$ Point: The derivation at $F(p) \in \mathbb{R}^n$ given by $$v^i\partial_i|_{p}(-\circ F)$$ is exactly $$D_pF(v)^j \left.\frac{\partial}{\partial x^j}\right|_{F(p)}$$<|endoftext|> TITLE: Meaning of mathematical operator that consists of square brackets with a plus sign as a subscript QUESTION [13 upvotes]: I was reading a paper on tomographic reconstruction, and I found an operator that is not explained: $[expression]_+$ The operator was used to compute a surrogate for the log-likelihood cost function. I do not know what that operator means. I've seen brackets without that plus sign before that were used to represent the rounding operation. Thanks! Edit: I've been looking at http://www.latexsearch.com and I found some results where $[x]_+:=\text{max}\{0,x\}$, and I think this agrees with how the paper uses it. REPLY [24 votes]: Looking at other papers I found through Latex search, I found that the bracket operator is defined as: $[x]_+=\max\{0,x\}$<|endoftext|> TITLE: Showing a function of two variables is measurable QUESTION [5 upvotes]: Let f(x,y) be a function defined on the unit square $0\leq x\leq1$, $0\leq y\leq1$ that is continuous on each variable separately. Is f a measurable function of (x,y)? I think I need to look at the pre-images of f, and I need to use the fact that it is continuous. Maybe I can use the epsilon-delta definition of continuous functions? REPLY [7 votes]: Theorem: Let $f(x,y)$ be a function defined on the unit square $0\le x\le 1,0\le y\le 1$ which is continuous in each variable separately. Show that $f$ is a measurable function of $(x,y).$ Proof: If we have a sequence of measurable functions $f_n$ such that $f_n \to f$, then $f$ is measurable. Define $f_n(x, y) = f_n\bigg(x, \frac{k}{n} \bigg)$ where $\frac{k}{n} \leq y < \frac{k + 1}{n}$. 
We are partitioning the $y$-axis unit interval into $n$ equal partitions and then looking at the largest endpoint of one of those partitions such that $y$ is greater than it, e.g., say $n$ = 3, then we have the following partition points $\{0, 1/3, 2/3, 1\}$, if we chose $y = 1/2$, then the largest endpoint such that $y$ is greater than it would be $1/3$. By taking $n$ to be arbitrarily large, we are squeezing $y$ into to rational numbers and looking at the lower one. Note that $f_n(x, y) \to f(x, y)$ since for $(x_0, y_0) \in X \times Y$, $\vert f(x_0, y_0) - f_n(x_0, y_0) \vert = \vert f(x_0, y_0) - f(x_0, k/n) \vert$, but as $n \to \infty$, $k/n \to y_0$ and since $x_0$ is fixed in this case, we can use the continuity of $f$ in the $y$ variable to show that $f(x_0, k/n) \to f(x_0, y_0)$. We must show $f_n$ is measurable for all $n$, but that means for all finite $\alpha$ and $n \in \mathbb N$, $\{f_n > \alpha\}$ is measurable. Observe that $\{f_n > \alpha\} = \{(x, y) \in [0, 1] \times [0, 1] : f_n(x, y) > \alpha\} = \bigcup_{k = 0}^{n - 1} \Bigg\{ \{x \in [0, 1] : f\bigg( x, \frac{k}{n}\bigg) > \alpha\} \times \bigg[\frac{k}{n}, \frac{k + 1}{n} \bigg) \Bigg\}$ but since $f$ is continuous in $y$, $\{x \in [0, 1] : f\bigg( x, \frac{k}{n}\bigg) > \alpha\} = \{f^{-1} \big( (\alpha, \infty) \big)$ is open and thus measurable and $\bigg[\frac{k}{n}, \frac{k + 1}{n} \bigg)$ is an interval, so it is trivially measurable. Recall that the Cartesian product of two measurable sets is measurable. We are taking a finite union of measurable sets which is measurable, so we conclude $f_n$ is measurable for $n \in \mathbb N$. The big question here, is: why is $\{f_n > \alpha\} = \bigcup_{k = 0}^{n - 1} \Bigg\{ \{x \in [0, 1] : f\bigg( x, \frac{k}{n}\bigg) > \alpha\} \times \bigg[\frac{k}{n}, \frac{k + 1}{n} \bigg) \Bigg\}$ true? Observe the following $(\subseteq)$ If $(x_0, y_0) \in \{f_n > \alpha\}$, then $f_n(x_0, y_0) = f_n\bigg(x_0, \frac{k}{n} \bigg) > \alpha$ where $\frac{k}{n} \leq y_0 < \frac{k + 1}{n}$ for some $k \in \{0, 1, \ldots, n - 1\}$. But then $x_0 \in \{x \in [0, 1] : f\bigg( x, \frac{k}{n} > \alpha\}$ and $y \in \bigg[ \frac{k}{n}, \frac{k + 1}{n} \bigg)$. so $(x_0, y_0) \in \bigcup_{k = 0}^{n - 1} \Bigg\{ \{x \in [0, 1] : f\bigg( x, \frac{k}{n}\bigg) > \alpha\} \times \bigg[\frac{k}{n}, \frac{k + 1}{n} \bigg) \Bigg\}$. $(\supseteq)$ If $(x_0, y_0) \in \bigcup_{k = 0}^{n - 1} \Bigg\{ \{x \in [0, 1] : f\bigg( x, \frac{k}{n}\bigg) > \alpha\} \times \bigg[\frac{k}{n}, \frac{k + 1}{n} \bigg) \Bigg\}$, then there exists $k \in \{0, 1, \ldots, n - 1\}$ such that $(x_0, y_0) \in \{x \in [0, 1] : f\bigg( x, \frac{k}{n}\bigg) > \alpha\} \times \bigg[\frac{k}{n}, \frac{k + 1}{n} \bigg)$. But then $x_0 \in \{x \in [0, 1] : f\bigg( x, \frac{k}{n}\bigg) > \alpha\}$ where $\frac{k}{n} \leq y_0 < \frac{k + 1}{n}$. But $x_0 \in \{x \in [0, 1] : f\bigg( x, \frac{k}{n}\bigg) > \alpha\} = \{ f_n > \alpha\}$.<|endoftext|> TITLE: Reconciling several different definitions of Radon measures QUESTION [18 upvotes]: Upon reviewing some basic real analysis I have encountered two different definitions for Radon measure. Let the underlying space $X$ be locally compact and Hausdorff. Folland's Real Analysis gives the definition A Radon measure is a Borel measure that is finite on all compact sets, outer regular on Borel sets, and inner regular on open sets. 
Folland goes on to prove that a Radon measure is inner regular on $\sigma$-finite sets, and remarks that full inner regularity is too much to ask for, especially in the context of the Riesz representation theorem for positive linear functionals on $C_c(X)$. Folland's approach seems to match the approach taken by Rudin, if I recall. However, I've heard from others, as well as Wikipedia, that a Radon measure is defined as a Borel measure that is locally finite (which means finite on compact sets for LCH spaces) and inner regular, and no mention of outer regularity. Neither definition seems to connect well with Bourbaki's approach of defining Radon measures as positive linear functionals on $C_c(X)$, because, at least according to Wikipedia's article on the Riesz representation theorem, a positive linear functional on $C_c(X)$ uniquely corresponds to a regular Borel measure, which is stronger than Radon in either of the two definitions given above. Sadly I do not have any more advanced analysis treatises to compare against, so I was hoping somebody could clear up this discrepancy. REPLY [9 votes]: One standard example is the reals numbers times the reals with the discrete topology: $X = \mathbb{R} \times \mathbb{R}_d$. This is a locally compact metrizable space. The compact subsets intersect only finitely many horizontal lines and each of those non-empty intersections must be compact. A Borel set $E\subset X$ intersects each horizontal slice $E_y$ in a Borel set. Consider the following Borel measure where $\lambda$ is Lebesgue measure on $\mathbb{R}$: $$ \mu(E) = \sum_{y} \lambda(E_y). $$ This is easily checked to define an inner regular Borel measure and its null sets are precisely those Borel sets that intersect each horizontal line in a null set. In particular, the diagonal $\Delta = \{(x,x) : x \in \mathbb{R}\}$ is a null set. However, every open set containing $\Delta$ must intersect each horizontal line in a set of positive measure, so it must have infinite measure and hence $\mu$ is not outer regular. Now define $\nu$ by the same formula as $\mu$ if $E$ intersects only countably many horizontal lines, and set $\nu(E) = \infty$ if $E$ intersects uncountably many horizontal lines. Now this measure $\nu$ is inner regular on open sets and outer regular on Borel sets. Finally, you can check that $\mu$ and $\nu$ assign the same integral to compactly supported continuous functions in $X$.<|endoftext|> TITLE: Why is $\Gamma\left(\frac{1}{2}\right)=\sqrt{\pi}$? QUESTION [75 upvotes]: It seems as if no one has asked this here before, unless I don't know how to search. The Gamma function is $$ \Gamma(\alpha)=\int_0^\infty x^{\alpha-1} e^{-x}\,dx. $$ Why is $$ \Gamma\left(\frac{1}{2}\right)=\sqrt{\pi}\text{ ?} $$ (I'll post my own answer, but I know there are many ways to show this, so post your own!) REPLY [3 votes]: By the Bohr-Mollerup theorem, $$\Gamma(1/2)=\lim_{n\to\infty}\frac{\sqrt n(n!)}{(1/2)(-1/2)\dots(1/2-n)}=\lim_{n\to\infty}\frac{\sqrt n4^{n+1}(n!)^2}{(2n)!(n+\frac12)}$$ Apply the Stirling approximation and watch almost everything simplify! $$\Gamma(1/2)=\sqrt\pi\lim_{n\to\infty}\frac n{n+\frac12}=\sqrt\pi$$<|endoftext|> TITLE: Prove that for all non-negative integers $m,n$, $\frac{(2m)!(2n)!}{m!n!(m + n)!}$ is an integer. QUESTION [14 upvotes]: Prove that for all non-negative integers $m,n$, $\frac{(2m)!(2n)!}{m!n!(m + n)!}$ is an integer. I'm not familiar to factorial and I don't have much idea, can someone show me how to prove this? Thank you. 
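A quick numerical sanity check, before any proof: the ratio can be computed with exact integer arithmetic, and it does come out to a whole number for every small pair $(m,n)$ tried. (A minimal Python sketch, for illustration only; the range $0\le m,n<10$ is an arbitrary choice, and of course no finite check is a proof.)

```python
from math import factorial

def super_ballot(m, n):
    # numerator and denominator of (2m)! (2n)! / (m! n! (m+n)!)
    num = factorial(2 * m) * factorial(2 * n)
    den = factorial(m) * factorial(n) * factorial(m + n)
    return num, den

for m in range(10):
    for n in range(10):
        num, den = super_ballot(m, n)
        # exact integer arithmetic: remainder must be 0 if the claim holds
        assert num % den == 0, (m, n)

print("the ratio is an integer for all 0 <= m, n < 10")
```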
REPLY [9 votes]: Denote $\frac{(2m)!(2n)!}{m!n!(m + n)!}$ by $S(n,m)$. It satisfies relation $S(n+1,m)+S(n,m+1)=4S(n,m)$ (cf. Pascal's triangle) — or, equivalently, $S(n+1,m)=4S(n,m)-S(n,m+1)$. Since $S(0,m)=\binom{2m}m$ is an integer, this relation implies that all $S(n,m)$ are integers. Ref.: I. Gessel. Super Ballot Numbers (via MO)<|endoftext|> TITLE: If $\dfrac{4x^2-1}{4x^2-y^2}$ is an integer, then it is $1$ QUESTION [13 upvotes]: The problem is the following: If $x$ and $y$ are integers such that $\dfrac{4x^2-1}{4x^2-y^2}=k$ is also an integer, does it implies that $k=1$? This equation is equivalent to $ky^2+(1-k)4x^2=1$ or to $(k-1)4x^2-ky^2=-1$. The first equation is a pell equation (if $k$ is a perfect square) and the second is a pell type equation (if $k-1$ is a perfect square). I've tried setting several values of $k$ to get some solutions but i got nothing. I'm starting to think that $k$ must be $1$. REPLY [8 votes]: This is a fun problem! Where did you find it? There are no solutions except for $k=1$. Assume from now on that $k \neq 1$. Since $k$ is clearly odd, it is also not $0$ and we deduce that $k(k-1)>0$. It is convenient to set $M = k(k-1)$. Also, we may assume WLOG that $x$ and $y\geq0$. As you did, rewrite the equation to $$ky^2 - 4 (k-1) x^2 = 1$$ or $$(ky)^2 - k(k-1) (2x)^2 = k.$$ Set $Y=ky$ and $X=2x$ so the equation is $$Y^2 - M X^2 = k. \quad (\ast)$$ We will study the equation $(\ast)$. In the end, we will see that there are no solutions with $X$ even and $Y$ divisible by $k$. The rest of this proof works inside the ring $R:=\mathbb{Z}[\sqrt{M}]$. Note that $M=k(k-1)$ is not square, so this is an integral domain. For an element $\alpha = a+b \sqrt{M} \in R$, set $\bar{\alpha} = a - b \sqrt{M}$. Set $\epsilon = (2k-1) + 2 \sqrt{M}$. Note that $\epsilon \bar{\epsilon} = (2k-1)^2 - 4 k(k-1) = 1$, so $\epsilon$ is a unit of $R$. Set $\delta = Y+X \sqrt{M}$. Since $\delta$ is a positive real, and $\epsilon>1$, there is some integer $n$ such that $\epsilon^n \leq \delta < \epsilon^{n+1}$. Write $\delta = \gamma \epsilon^n$. Since $\epsilon$ is a unit, $\gamma$ is in the ring $R$ and, by construction, $$1 \leq \gamma < \epsilon.$$ We have $$\epsilon = 2k-1 + 2 \sqrt{k(k-1)} < 2k+ 2k = 4k.$$ So $$1 \leq \gamma < 4k.$$ But, also $\gamma \bar{\gamma} = \delta \bar{\delta} =k$. So $$\frac{1}{4} \leq \bar{\gamma} \leq k.$$ Write $\gamma = U + V \sqrt{M}$. So $$\begin{matrix} 1 & \leq & U+V\sqrt{M} & < & 4k \\ \frac{1}{4} & \leq & U-V\sqrt{M} & < & k \\ \end{matrix}.$$ Solving for $V$, we have $$\frac{-k}{2 \sqrt{M}} < V < \frac{2k}{\sqrt{M}}.$$ The LHS is $\approx -1/2$, and the right hand side is slightly larger than $2$. So $V$ is $0$, $1$ or $2$. We break into cases: $\bullet$ If $V=0$, then the equation $\gamma \bar{\gamma} =k$ gives $U^2 =k$. So $k$ is a square, say $k=m^2$ and $M=m^2(m^2-1)$. We have $$Y+X \sqrt{M} = m ((2k-1)+2 \sqrt{M})^n = m (2m^2-1 + 2 m \sqrt{m^2-1})^n.$$ An easy induction on $n$ shows that $Y \equiv (-1)^n m \mod m^2$ so, except when $m=1$, we do not have $Y$ divisible by $k=m^2$. Of course, the case $m=1$ corresponds to $k=1$. $\bullet$ If $V=1$ then $U^2 - k(k-1) = k$ so $U=k$. We have $$Y+ X \sqrt{M} = (k+\sqrt{M}) \cdot ((2k-1)+2 \sqrt{M})^n.$$ An easy induction on $n$ shows that $X$ is odd. $\bullet$ If $V=2$ then $U^2 - 4k(k-1) = k$ so $4k^2-3k = U^2$. This gives $64k^2 - 48 k + 9= 64 U^2+9$ or $(8k-3)^2 - (8U)^2 = 9$. The only ways to write $9$ as a difference of squares are $3^2-0^2$ and $5^2-4^2$. 
The former gives $k=0$, but we saw that $k$ must be odd; the latter gives no solution since $8$ does not divide $4$.<|endoftext|> TITLE: Meaning and example(s) of Qiaochu's quote. QUESTION [9 upvotes]: I happen to come across this page http://math.uchicago.edu/~chonoles/quotations.html which contains some beautiful quotes by various mathematicians and I came across Qiaochu's quote as claimed by the site which seemed intriguing. "I believe that in mathematics nothing is a trick if seen from a sufficiently high level." - Qiaochu Yuan Now I was wondering if anyone could interpret (maybe even Qiaochu himself) and give examples in mathematics that would convey the meaning of this quote. NB: Hopefully this question isn't too off topic? Can a moderator turn this into a wiki if deemed appropriate? Also, any appropriate tags? Edit: Since my question wasn't clear as I would have liked, I'd prefer this question to be example based. So I'd like as much examples from different areas of matheamtics as possible. Since a lot of users on this site are at different levels in terms of the amount of mathematics one has learned, maybe anyone can contribute by giving examples say at a high school level, undergraduate level, graduate level, or research level etc. REPLY [14 votes]: One example of such a "trick" is rationalizing the denominator. You are taught a pattern to follow, long before you understand the general theory. For example, $$ \frac{1+\sqrt{3}}{5-2\sqrt{3}} = \frac{1+\sqrt{3}}{5-2\sqrt{3}} \left( \frac{5+2\sqrt{3}}{5+2\sqrt{3}} \right) = \frac{(1+\sqrt{3})(5+2\sqrt{3})}{25-12} = \frac{11+7\sqrt{3}}{13}.$$ If you have a denominator of $a+b\sqrt{c}$, you can multiply the top and bottom by $a-b\sqrt{c}$ to get rid of the square root in the denominator. If you learn Galois theory, you can explain exactly why this works, and you know how to get rid of algebraic numbers in the denominator for any example, like $$ \frac{1}{1+\sqrt{2}+\sqrt{3}}, \qquad \frac{1}{3+\sqrt[3]{5}}, \quad \text{or} \quad \frac{1}{1+\alpha} $$ where $\alpha$ is the real root of $x^5+x+1=0$. We need to multiply the denominator by all of its Galois conjugates. That means that for the first example, we multiply the top and bottom by $$(1+\sqrt{2}-\sqrt{3})(1-\sqrt{2}+\sqrt{3})(1-\sqrt{2}-\sqrt{3}).$$ In the second example, we multiply the top and bottom by $$\left(3+\sqrt[3]{5}\left(\frac{-1+i\sqrt{3}}{2}\right)\right) \left(3+\sqrt[3]{5}\left(\frac{-1-i\sqrt{3}}{2}\right)\right).$$ And in the final example, we multiply the top and bottom by $$(1+\alpha_2)(1+\alpha_3)(1+\alpha_4)(1+\alpha_5),$$ where the $\alpha_i$ are the other 4 roots of $x^5+x+1=0$. Further, we can do this without introducing any new algebraic numbers. For example, $(1+\alpha_2)(1+\alpha_3)(1+\alpha_4)(1+\alpha_5)=2-\alpha+\alpha^2-\alpha^3+\alpha^4.$ Pretty cool! Once you understand the symmetries of algebraic numbers, the high school algebra "trick" of rationalizing the denominator isn't much of a trick. It's just a routine use of the basic ideas of Galois theory!<|endoftext|> TITLE: What is the line bundle $\mathcal{O}_{X}(k)$ intuitively? QUESTION [14 upvotes]: I am always confused about how to understand the line bundle $\mathcal{O}_{X}(k)$ on a projective scheme $X=\mathrm{Proj}(\oplus_{n=0}^\infty A_{n})$. Of course this is by definition the $\mathcal{O}$-module associated to $\oplus_{n=0}^\infty A_{n}(k)$. But I think this definition is not really geometric. My questions are following; How should I understand $\mathcal{O}_{X}(k)$ intuitively? 
Maybe for $k\ge0$ this is relatively easy as global sections are given by the graded piece $A_{k}$, but I still don't know any geometric picture of this. Let $X=\mathbb{P}^n$, then is it easy to tell why the universal bundle is given by $\mathcal{O}_{\mathbb{P}^n}(-1)$ (without computing the transition function)? I would appreciate it if you could provide me with your favorite ways to see these line bundles. REPLY [14 votes]: Here is a rather category-theoretic explanation where these Serre twists come from. When the graded ring $A$ is finitely generated by $A_1$ over $A_0$, there is a well-known universal property of $\mathrm{Proj}(A)$. Namely, morphisms $Y \to \mathrm{Proj}(A)$ over $A_0$ correspond bijectively to line bundles $\mathcal{L}$ on $Y$ together with an $A_0$-linear epimorphism $A_1 \to \Gamma(Y,\mathcal{L})$. This describes the functor of points $\hom(-,\mathrm{Proj}(A))$. Therefore, actually this may serve as a definition of the Proj construction. Now the universal element of this representable functor is a line bundle $\mathcal{O}(1)$ on $\mathrm{Proj}(A)$ together with a epimorphism $A_1 \to \Gamma(\mathrm{Proj}(A),\mathcal{O}(1))$. More generally, $\mathcal{O}(k) := \mathcal{O}(1)^{\otimes k}$ for $k \in \mathbb{Z}$. Intuitively, Serre twists make it possible to "shift to the affine case". If $\mathcal{F}$ is a coherent sheaf on an affine scheme, there is an epimorphism $\mathcal{O}^n \twoheadrightarrow \mathcal{F}$ (global generators). This is not true in the projective case. However, for every coherent sheaf $\mathcal{F}$ on a projective scheme $X$ with a choosen ample sheaf $\mathcal{O}(1)$ there is an epimorphism $\mathcal{O}^n \twoheadrightarrow \mathcal{F}(k) := \mathcal{F} \otimes \mathcal{O}(k)$ for $k$ large enough. Intuitively, we are just clearing denominators here in order to reduce to the affine situation. It follows that there is an exact sequence $\mathcal{O}(-k_2)^{n_2} \to \mathcal{O}(-k_1)^{n_1} \to \mathcal{F} \to 0$ In the affine case, we have $k_1=k_2=0$ and this would be a description by generators and relations. Here, this is something really similar, we just have added degrees to the generators. This also shows that the category of coherent sheaves is generated by the Serre twists. Similarily, cohomology vanishes after shifting high enough, etc.<|endoftext|> TITLE: How do you find a basis for the set of all $3 \times 3$ matrices whose rows and columns add up to zero? QUESTION [8 upvotes]: If $W$ is the set of all $3 \times 3$ matrices whose rows and columns add up to zero, how would you find a basis for this? There seem to be so many scenarios we'd need to cover, I can't find a way to succinctly find the answer / represent it well. On a related note, how would you extend the basis to be a basis for $M_3(R)$? (I feel like I might be able to get this one, though, once I can clearly find a basis to begin with.) Thanks in advance for the advice. REPLY [7 votes]: Douglas' observation that the upper left $2\times2$ submatrix determines the remaining values suggests another solution: We can take the canonical basis for $2\times2$ matrices and fill in the third row and column for each of its elements; the result is $$ \pmatrix{ 1&0&-1\\ 0&0&0\\ -1&0&1}, \pmatrix{ 0&1&-1\\ 0&0&0\\ 0&-1&1}, \pmatrix{ 0&0&0\\ 1&0&-1\\ -1&0&1}, \pmatrix{ 0&0&0\\ 0&1&-1\\ 0&-1&1}\;. 
$$<|endoftext|> TITLE: Function that sends $1,2,3,4$ to $0,1,1,0$ respectively QUESTION [5 upvotes]: I already got tired trying to think of a function $f:\{1,2,3,4\}\rightarrow \{0,1,1,0\}$ in other words: $$f(1)=0\\f(2)=1\\f(3)=1\\f(4)=0$$ Don't suggest division in integers; it will not pass for me. Are there ways to implement it with modulo, absolute value, and so on, without conditions? REPLY [5 votes]: Since you tagged "binary" in your question, you might also want to recall that Karnaugh map is a standard way to map inputs to outputs with just complement, AND and OR gates. (Or "~", "$\&$" and "|" bit-wise operators in C) For example, you can define $a,b,c$ to be bits at position 2,1,0 here to use the map. If you draw out the map, this is what it looks like: $$\begin{array}{c|c|c|c|c|c|} & &bc &bc &bc &bc\\ \hline & & 00 & 01 & 11 & 10\\ \hline a& 0 & \text{X} & 0 & 1 & 1 \\ \hline a& 1 & 0 & \text{X} & \text{X} & \text{X} \\ \hline \end{array}$$ Explanation: X denotes values that cannot occur (normally called "Don't care" I think). We want to focus on representing the "1"s, which is represented by the entries $\bar abc$ and $\bar a b\bar c$. (Notice that you only get "1" for one of the variables, they cannot occur together.) They can be combined: $\bar a bc + \bar a b \bar c=\bar a b (c+\bar c)=\bar a b$. Getting rid of the $\bar a$ is possible when you noticed that its alternative row has no entries. (i.e. the 2 entries below are "X"s) Using this idea you can construct any function for any bigger variable. It is probably not going to be the most efficient implementation, but you can get the solution fast. From there, you may do some reduction using logical operations and the final result should be decent.<|endoftext|> TITLE: if convolution of $f$ with itself remains same, then $f=0$ a.e? QUESTION [5 upvotes]: I'm trying to answer the question above.. But I'm not certain in either way. I tried to prove it by giving counter examples.. But it always failed.. Then i also tried to draw contradictions But that's not successful as well. Please give me some suggestion or ideas! p.s I forgot the condition that $f$ is in $L^1(\Bbb R)$. REPLY [8 votes]: Using the property $\widehat{f\star g}=\widehat f\widehat g$ for $f$ and $g$ integrable, we get $(\widehat f)²=\widehat f$, hence for all $x$, $\widehat f\in\{0,1\}$. By the dominated convergence theorem, $\widehat f$ is continuous, so either $\widehat f=1$ or $\widehat f=0$. By Riemann-Lebesgue lemma, $\widehat f(x)\to 0$ as $x\to +\infty$, so $f=0$ almost everywhere.<|endoftext|> TITLE: Two questions about weakly convergent series related to $\sin(n^2)$ and Weyl's inequality QUESTION [54 upvotes]: By using partial summation and Weyl's inequality, it is not hard to show that the series $\sum_{n\geq 1}\frac{\sin(n^2)}{n}$ is convergent. Is is true that $$\frac{1}{2}=\inf\left\{\alpha\in\mathbb{R}^+:\sum_{n\geq 1}\frac{\sin(n^2)}{n^\alpha}\mbox{ is convergent}\right\}?$$ In the case of a positive answer to the previous question, what is $$\inf\left\{\beta\in\mathbb{R}^+:\sum_{n\geq 1}\frac{\sin(n^2)}{\sqrt{n}(\log n)^\beta}\mbox{ is convergent}\right\}?$$ REPLY [3 votes]: I recall a generalization of partial summation formula: Suppose that $\lambda_1,\lambda_2,\ldots$ is a nondecreasing sequence of real numbers with limit infinity, that $c_1,c_2,\ldots$ is an arbitrary sequence of real or complex numbers, and that $f(x)$ has a continuous derivative for $x\geq \lambda_1$. 
Put $$ C(x)=\sum_{\lambda_n\leq x}c_n, $$ where the summation is over all $n$ for which $\lambda_n\leq x$. Then for $x\geq\lambda_1$, $$ \sum_{\lambda_n\leq x}c_nf(\lambda_n)=C(x)f(x)-\int^{x}_{\lambda_1}C(t)f'(t)dt.\tag 1 $$ Now we can write if $y=x^2$ and $\lambda_n=n^2$ and $C(t)=[\sqrt{t}]$ (integer part of $\sqrt{t}$): $$ S=\sum_{1\leq n\leq x}\frac{\sin(n^2)}{n^a}=\sum_{\lambda_n\leq y}\frac{\sin(\lambda_n)}{\lambda_n^{a/2}}= $$ $$ =[\sqrt{y}]\frac{\sin(y)}{y^{a/2}}-\int^{y}_{1}[\sqrt{t}]\frac{d}{dt}\left(\frac{\sin(t)}{t^{a/2}}\right)dt. $$ But it is $[\sqrt{t}]=\sqrt{t}-\{\sqrt{t}\}$, where $\{\sqrt{t}\}$ is the fractional part of $\sqrt{t}$. Hence $$ S=-\frac{1}{2}Re\left[iy^{1/2-a/2}E\left(\frac{1+a}{2},iy\right)\right]+\frac{1}{2}Re\left[iE\left(\frac{1+a}{2},i\right)\right]+\sin(1)-\{\sqrt{y}\}\frac{\sin(y)}{y^{a/2}}+ $$ $$ +\int^{y}_{1}\{\sqrt{t}\}\frac{d}{dt}\left(\frac{\sin(t)}{t^{a/2}}\right)dt, $$ where $$ E(a,z)=\int^{\infty}_{1}\frac{e^{-tz}}{t^a}dt $$ But when $a>0$ and $y\rightarrow+\infty$ we have $$ \lim_{y\rightarrow+\infty}\left\{-\frac{1}{2}Re\left[iy^{1/2-a/2}E\left(\frac{1+a}{2},iy\right)\right]+\frac{1}{2}Re\left[iE\left(\frac{1+a}{2},i\right)\right]\right\}+\sin(1)= $$ $$ =\frac{1}{2}Re\left[iE\left(\frac{1+a}{2},i\right)\right]+\sin(1) $$ Also $x$ is positive integer and $\{\sqrt{y}\}=0$. Hence when $a>0$, then $$ \lim_{x\rightarrow\infty}\sum^{x}_{n=1}\frac{\sin(n^2)}{n^a}=\frac{1}{2}Re\left[iE\left(\frac{1+a}{2},i\right)\right]+\sin(1)+\lim_{y\rightarrow\infty}\int^{y}_{1}\{\sqrt{t}\}\frac{d}{dt}\left(\frac{\sin(t)}{t^{a/2}}\right)dt $$ But $$ \int^{y}_{1}\{\sqrt{t}\}\frac{d}{dt}\left(\frac{\sin(t)}{t^{a/2}}\right)dt=\int^{y}_{1}\{\sqrt{t}\}\frac{\cos(t)t^{a/2}-a/2\sin(t)t^{a/2-1}}{t^a}dt= $$ $$ \int^{y}_{1}\{\sqrt{t}\}\left(\cos(t)t^{-a/2}-a/2\sin(t)t^{-a/2-1}\right)dt=\int^{y}_{1}\{\sqrt{t}\}\frac{\cos(t)}{t^{a/2}}dt-\frac{a}{2}\int^{y}_{1}\frac{\sin(t)}{t^{a/2+1}}\{\sqrt{t}\}dt. $$ Clearly when $a$ is positive and constant $$ \lim_{y\rightarrow+\infty}\int^{y}_{1}\frac{\sin(t)}{t^{a/2+1}}\{\sqrt{t}\}dt=2\lim_{x\rightarrow\infty}\int^{x}_{1}\frac{\sin(t^2)}{t^{a+1}}\{t\}dt<\infty, $$ since $0\leq\{t\}<1$ and $-1\leq\sin(t^2)\leq 1$, for all $t>0$. Hence it remains to find under what condition on $a>0$ we have $$ \int^{\infty}_{1}\{\sqrt{t}\}\frac{\cos(t)}{t^{a/2}}dt=2\int^{\infty}_{1}\cos(t^2)t^{1-a}\{t\}dt<\infty, $$ knowinig already that for all $00$, then $$ I(x)=\int^{x}_{1}\frac{1}{t^{a/2}}\cos(t)\{\sqrt{t}\}dt=\int^{x}_{1}\frac{F'(t)}{t^{a/2}}dt=\frac{F(x)}{x^{a/2}}+\frac{a}{2}\int^{x}_{1}\frac{F(t)}{t^{a/2+1}}dt=S_1+S_2,\tag 3 $$ where $$ S_1=\frac{1}{x^{a/2}}S(\sqrt{x})+\frac{\{\sqrt{x}\}\sin x}{x^{a/2}}+\frac{\sqrt{\pi/2}}{x^{a/2}}\left(\textrm{Fs}\left(\sqrt{\frac{2}{\pi}}\right)-\textrm{Fs}\left(\sqrt{\frac{2x}{\pi}}\right)\right) $$ and $$ S_2=\frac{a}{2}\int^{x}_{1}\frac{1}{t^{1/4-\epsilon/2+1}}\{\frac{\sqrt{2\pi}}{2}\textrm{Fs}\left(\sqrt{\frac{2}{\pi}}\right)-\frac{\sqrt{2\pi}}{2}\textrm{Fs}\left(\sqrt{\frac{2t}{\pi}}\right)+ $$ $$ +\{\sqrt{t}\}\sin(t) +\sum_{2\leq k\leq \sqrt{t}}\sin(k^2)\}dt. 
$$ But it is known that there exists constant $C$ such that for infinite values of $x\in\textbf{N}$ holds $$ \sum_{2\leq k\leq x}\sin(k^2)>Cx^{1/2}.\tag 4 $$ Hence for infinite values of $x$ we will have (easily) $$ S_1>C_1x^{\epsilon/2}.\tag 5 $$ Moreover if we assume that $$ \left|\sum_{2\leq k\leq x}\sin(k^2)\right|=O\left(x^{c+\delta}\right)\textrm{, }\forall \delta>0\textrm{ and }x\rightarrow+\infty,\tag 6 $$ then in view of (4) it must be $c\geq 1/2$. Also $$ S_2=C_{\epsilon}(x)+\int^{x}_{1}t^{-1/4+\epsilon/2-1}\left(\sum_{2\leq k\leq \sqrt{t}}\sin(k^2)\right)dt. $$ Hence $$ \left|S_2\right|=\left|C_{\epsilon}(x)+\int^{x}_{1}t^{-1/4+\epsilon/2-1}\left(\sum_{2\leq k\leq \sqrt{t}}\sin(k^2)\right)dt\right|\leq $$ $$ \leq \left|\left|C_{\epsilon}(x)\right|+\left|\int^{x}_{1}t^{-1/4+\epsilon/2-1}\left(\sum_{2\leq k\leq \sqrt{t}}\sin(k^2)\right)dt\right|\right|\leq $$ $$ |C_{\epsilon}(x)|+\int^{x}_{1}\left|t^{-1/4+\epsilon/2-1}\left(\sum_{2\leq k\leq \sqrt{t}}\sin(k^2)\right)\right|dt= $$ $$ =\left|C_{\epsilon}(x)\right|+C_2\int^{x}_{1}t^{-1/4+\epsilon/2-1}\left|\sum_{2\leq k\leq \sqrt{t}}\sin(k^2)\right|dt\leq $$ $$ \leq\left|C_{\epsilon}(x)\right|+C_2\int^{x}_1t^{-1-1/4+\epsilon/2}t^{1/4+\delta/2}dt= $$ $$ =\left|C_{\epsilon}(x)\right|+C_2\int^{x}_{1}t^{-1+\epsilon/2+\delta/2}dt=\left|C_{\epsilon}(x)\right|+\frac{2}{\delta+\epsilon}\left(x^{(\delta+\epsilon)/2}-1\right)< $$ $$ <|C_{0}|+\log x+C_3d\log^2 x,\tag 7 $$ where $\epsilon>0$ and $\delta>0$ so small as we please and $d=\frac{\epsilon+\delta}{2}>0$, $C_3>0$ constant. It is also easy to see someone that $\left|C_{\epsilon}(x)\right|$ are bounded by a constant $C_0>0$. Hence from $(3)$ and $(5),(7)$ we have if $a=1/2-\epsilon$, that $$ \left|\int^{x}_{1}\frac{\cos (t)\{\sqrt{t}\}}{t^{a/2}}dt\right|=|S_1+S_2|\geq |S_1|-|S_2|>C_1x^{\epsilon/2}-|C_0|-\log x-C_3d\log^2 x, $$ For infinite values of $x\in\textbf{N}$. Hence $$ \lim_{x\rightarrow+\infty}\int^{x}_{1}\frac{\cos(t)\{\sqrt{t}\}}{t^{a/2}}dt=+\infty $$ and we conclude that if (6) holds, then $\textrm{inf}\geq1/2$. I will argue now about the the case $a=\frac{1}{2}+2\epsilon$, $\epsilon>0$ i.e the case when $a$ is not $1/2$ but rather a limiting case and doesnot cover the case $a=1/2$. Both results $\textrm{inf}\geq1/2$ and $\textrm{inf}=1/2+2\epsilon$, clearly show us that for $1/20$ and for $x>>1$, we chose $\delta>0$ such that $$ S\left(\sqrt{x}\right)\leq C_1x^{1/4+\delta}, $$ we get $$ \left|I(x)\right|=|S_1+S_2|\leq \left|C_1\frac{S\left(\sqrt{x}\right)}{x^{a/2}}+C_2\frac{a}{2}\int^{x}_{1}\frac{S\left(\sqrt{t}\right)}{t^{a/2+1}}dt\right|\leq $$ $$ \leq\left|C_1'\frac{x^{1/4+\delta}}{x^{1/4+\epsilon}}+C_2'\left(\frac{1}{4}+\epsilon\right)\int^{x}_{1}\frac{t^{1/4+\delta}}{t^{1+1/4+\epsilon}}dt\right|= $$ $$ =\left|C_1'x^{-(\epsilon-\delta)}+C_2'\left(\frac{1}{4}+\epsilon\right)\int^{x}_{1}\frac{dt}{t^{1+\epsilon-\delta}}\right|. 
$$ For $\delta=\epsilon/2$ we get $$ |I(x)|\leq \left|C_1'x^{-\epsilon/2}-\frac{2C_2'}{\epsilon}\left(\frac{1}{4}+\epsilon\right)\left(x^{-\epsilon/2}-1\right)\right|= $$ $$ =\left|C_1'x^{-\epsilon/2}+\frac{C_2'}{2\epsilon}\left(1-x^{-\epsilon/2}\right)+2C_2'\left(1-x^{-\epsilon/2}\right)+H-H\right|\leq $$ $$ \leq\left|C_1'x^{-\epsilon/2}+\frac{C_2'}{2\epsilon}\left(1-x^{-\epsilon/2}\right)+2C_2'\left(1-x^{-\epsilon/2}\right)+H\right|+\left|H\right|.\tag 8 $$ Now we set $$ X=C_1'x^{-\epsilon/2}+\frac{C_2'}{2\epsilon}\left(1-x^{-\epsilon/2}\right)>0 $$ and $$ Y=2C_2'\left(1-x^{-\epsilon/2}\right)+H>0 $$ and I use the inequality $$ \left|X+Y\right|\leq \left|X-\frac{Y}{4\epsilon}\right|,\tag 9 $$ which is is true for small $\epsilon$ and $x>1$ since we can write equivalent $$ \left|X+Y\right|^2\leq \left|X-\frac{Y}{4\epsilon }\right|^2\Leftrightarrow X^2+Y^2+2XY\leq X^2+\frac{Y^2}{16\epsilon^2}-\frac{XY}{2\epsilon }\Leftrightarrow $$ $$ Y^2+2XY\leq\frac{Y^2}{16\epsilon^2}-\frac{XY}{2\epsilon}\Leftrightarrow Y+2X\leq \frac{Y}{16\epsilon^2}-\frac{X}{2\epsilon}\Leftrightarrow $$ $$ \left(\frac{1}{16\epsilon^2}-1\right)Y\geq X\left(2+\frac{1}{2\epsilon}\right) $$ This last inequality holds for all small $\epsilon>0$ and $x>>1$ since it can be writen equivalently as $$ (1-16\epsilon^2)Y-X(32\epsilon^2+8\epsilon)\geq0\Leftrightarrow $$ $$ 2\epsilon(1+2\epsilon)\left(C_2'-4\epsilon C_1'+4C_2'\epsilon\right)x^{-\epsilon/2}\geq 0, $$ where we have used the value $$ H=\frac{2(C_2'+4C_2\epsilon)}{1-4\epsilon} $$ Hence (9) is true and we can extract from relation (8) the conclusion $$ \left|I(x)\right|\leq \left|C_1'x^{-\epsilon/2}+\frac{C_2'}{2\epsilon}\left(1-x^{-\epsilon/2}\right)+2C_2'\left(1-x^{-\epsilon/2}\right)+H\right|+\left|H\right|= $$ $$ =\left|X+Y\right|+\left|H\right|\leq $$ $$ \leq\left|C_1'x^{-\epsilon/2}+\frac{C_2'}{2\epsilon}\left(1-x^{-\epsilon/2}\right)-\frac{C_2'}{2\epsilon}\left(1-x^{-\epsilon/2}\right)-\frac{H}{4\epsilon}\right|+|H|= $$ $$ =\left|C_1'x^{-\epsilon/2}-\frac{H}{4\epsilon}\right|+|H|. $$ Hence $$ \epsilon |I(x)|\leq C_1'x^{-\epsilon/2}\epsilon+H/4+|H|\epsilon. $$ Hence we conclude that $$ \epsilon \left|I(x)\right|=O(1) $$ is bounded. Hence for $\epsilon>0$ small but constant the $I(x)$ are bounded.<|endoftext|> TITLE: $f(x+y) + f( f(x) + f(y) ) = f( f( x+f(y) ) + f( y+f(x) ) )$ QUESTION [9 upvotes]: Suppose $f\colon \mathbb R\to\mathbb R$ is a strictly decreasing function which satisfies the relation $$f(x+y) + f( f(x) + f(y) ) = f( f( x+f(y) ) + f( y+f(x) ) ) , \quad \forall x , y \in\mathbb R $$ Find a relation between $f( f(x))$ and $x$. REPLY [13 votes]: I think I have got hold of something , putting y=x in the functional equation we get $$ f(2x) + f(2f(x)) = f(2f(x+f(x))) $$ Changing $x$ to $f(x)$ we also get $$ f(2f(x)) + f(2f(f(x))) = f(2f(f(x)+f(f(x)))) $$ Subtracting the former from the later equation we get $$ f(2f(f(x))) - f(2x) = f(2f(f(x)+f(f(x)))) - f(2f(x+f(x))) $$ Now since $f(x)$ is strictly decreasing $x>y$ if and only if $f(x) < f(y)$ . Assume that $f(f(x)) > x$ , for some $x$, then $$ \begin{align} 2f(f(x)) > 2x&\Longleftrightarrow f(2f(f(x))) < f(2x)\\ &\Longleftrightarrow f(2f(f(x)+f(f(x))))< f(2f(x+f(x)))\\ &\Longleftrightarrow 2f(f(x)+f(f(x))) > 2f(x+f(x))\\ &\Longleftrightarrow f(f(x)+f(f(x))) > f(x+f(x))\\ &\Longleftrightarrow f(x)+f(f(x)) < x+f(x)\\ &\Longleftrightarrow f(f(x)) < x \end{align} $$ contradiction! 
So , $f(f(x)) = x$ , for all real $x$.<|endoftext|> TITLE: Endormorphism ring of quaternions isomorphic to the quaternion ring QUESTION [5 upvotes]: I found the quoted question in this post interesting: Confusion regarding what kind of isomorphism is intended. I don't have commenting privileges just yet, and since the question already has an accepted answer and it's purpose was not to actually receive an answer to the quoted question, I've decided to post here. I've been able to do the first part there by using Maschke's theorem and showing that the existence of a two-sided inverse for the projection map from $V$ to a submodule forming part of a decomposition of $V$ leads to a contradiction. The second part however, I am stumped, could someone kindly provide some guidance? REPLY [2 votes]: Hint: Consider the map $\theta:\mathrm{End}_{\mathbb{R}Q}(\mathbb{H})\rightarrow \mathbb{H^{op}}$ given by $\theta(f)=f(1)$. Verify that this is a ring homomorphism onto $\mathbb{H^{op}}$. The kernel is therefore an ideal of $\mathrm{End}_{\mathbb{R}Q}(\mathbb{H})$. I can't justify being cryptic about the fact that $\mathbb{H}^{op}\cong\mathbb{H}$, so I have to give it away :( Composing that isomoprhism with $\theta$, we now have a homomorphism of $\mathrm{End}_{\mathbb{R}Q}(\mathbb{H})\rightarrow \mathbb{H}$ which is onto. However, the comment at the beginning of the question you linked shows that $\mathrm{End}_{\mathbb{R}Q}(\mathbb{H})$ is a division ring, so... (more hints needed?)<|endoftext|> TITLE: How does one show that these two expressions are the same? QUESTION [5 upvotes]: I tried to compute the value of $\sin 75^\circ$ using the sine of standard values $(30^\circ, 45^\circ...)$ and did it by two ways. One, by expanding $\sin (45^\circ+30^\circ)$ and the other by computing the half of $\sin 150^\circ$ using basic identities. It gave me these two answers respectively: $$\frac{\sqrt{3}+1}{2\sqrt{2}}\ ,\ \ \ \frac{\sqrt{2+\sqrt{3}}} {2}$$ When I first did it, I was worried that I had got one of them wrong as I couldn't think of a way to show them equal to each other. I evaluated them in the calculator and indeed, they are equal (and to $\sin 75^\circ$) which made me think of how does one show expressions like these to be equal. So is there any way one could show that these two expressions are equal to each other? Thanks. REPLY [2 votes]: $$ \frac{\sqrt{2+\sqrt{3}}}{2}=\frac{\sqrt{4+2\sqrt{3}}}{2\sqrt{2}}=\frac{\sqrt{(\sqrt{3})^2+2\sqrt{3}+1}}{2\sqrt{2}}=\frac{\sqrt{(\sqrt{3}+1)^2}}{2\sqrt{2}}=\frac{\sqrt{3}+1}{2\sqrt{2}} $$<|endoftext|> TITLE: The area form of a Riemannian surface QUESTION [6 upvotes]: Let $(M,g)$ be an oriented Riemannian surface. Then globally $(M,g)$ has a canonical area-$2$ form $\mathrm{d}M$ defined by $$\mathrm{d}M=\sqrt{|g|} \mathrm{d}u^1 \wedge \mathrm{d}u^2$$ with respect to a positively oriented chart $(u_{\alpha}, M_{\alpha})$ where $|g|=\mathrm{det}(g_{ij})$ is the determinant of the Riemannian metric in the coordinate frame for $u_{\alpha}$. Let $u^{i}=\Phi^{i}(v^1,v^2)$ be a change of variables (so $\Phi: V \to U$ is the diffeomorphism of the coordinate change). Calculate the effect on $\sqrt{|g|}$ and $\mathrm{d}u^1 \wedge \mathrm{d}u^2$ to prove $\mathrm{d}M$ is independent of the choice of positively oriented coordinates. Remark: I know $g$ is invariant under an orientation preserving change of variables ($\mathrm{det}(\Phi)>0$), but how to compute explicitly the effect of $\Phi$ on $\mathrm{d}u^1 \wedge \mathrm{d}u^2$? 
I want to use this as an example to learn the exterior calculus. REPLY [4 votes]: Followed by Willie's hint. Write $u^1=u^1(v^1,v^2)$ and $u^2=u^2(v^1,v^2)$ to get $$\mathrm{d}u^1 \wedge \mathrm{d}u^2=\mathrm{det}(A)\mathrm{d}v^1 \wedge \mathrm{d}v^2$$ where $A:=\begin{vmatrix}\frac{\partial u^1}{\partial v^1}&\frac{\partial u^1}{\partial v^2}\\ \frac{\partial u^2}{\partial v^1}&\frac{\partial u^2}{\partial v^2}\end{vmatrix}$ is the Jaobian for $\Phi$. Denote the metric $g$ under coordinates $v^1,v^2$ by $g'$ and apply chain rule to get $$g'_{ij}=g_{kl}\frac{\partial u^k}{\partial v^i}\frac{\partial u^l}{\partial v^j}.$$ Which is equivalent to $(g'_{ij})=(A)(g_{ij})(A)^t$, take determinant on both sides yield $|g'|=|A|^2|g|$. Now $$\mathrm{d}M=\sqrt|g'| \mathrm{d}v^1 \wedge \mathrm{d}v^2=|A|\sqrt|g|\frac{1}{|A|}\mathrm{d}u^1\wedge\mathrm{d}u^2=\sqrt|g|\mathrm{d}u^1\wedge\mathrm{d}u^2$$<|endoftext|> TITLE: Topological structure of a quotient of ${\rm{SU}}(2)\times{\rm{SU}}(2)$ QUESTION [7 upvotes]: I'm trying to understand the topology of the product of two three dimensional spheres $\mathbb{S}^3\times \mathbb{S}^3$ quotiented by the action of $\pm 1$ sending a pair of points $(x,y)$ to the corresponding pair of antipodal points $(-x,-y)$. My hunch is that this may be better understood by identifying $\mathbb{S}^3$ with the special unitary group $SU(2)$ via the well known homeomorphism and understanding the quotient of the topological group $SU(2)\times SU(2)$ by the group $H=\{I_2\times I_2,-I_2\times -I_2\}$ as a topological space but I haven't yet been successful with that either. I'd like to know if this approach is worthwhile or if I should be looking in another direction to recognize the space $\mathbb{S}^3\times \mathbb{S}^3/\pm1$. REPLY [5 votes]: The quotient $S^3\times S^3/\{\pm (1,1)\}$ is diffeomorphic (but not Lie isomorphic) to $S^3\times \mathbb{R}P^3$ (Here, $\mathbb{R}P^3$ is naturally diffeomorphic to $SO(3)$, so we give it the Lie group structure $SO(3)$ has). An explicit map is given as follows. Identifying $S^3$ as the unit quaternions, define $\tilde{f}:S^3\times S^3\rightarrow S^3\times \mathbb{R}P^3$ by $\tilde{f}(p,q) = (pq, [q])$ where $[q]$ denotes the class of $q$ in $\mathbb{R}P^3$. Notice that $$\tilde{f}(-p,-q) = \left((-p)(-q), [-q]\right) = (pq,[q]) = \tilde{f}(p,q)$$ so $\tilde{f}$ descends to a map on $S^3\times S^3/\pm(1,1)$, which I'l call $f$. The map $\tilde{f}$ is clearly smooth, so $f$ is as well. Further, $f$ has an inverse given by $f^{-1}(p,[q]) = [pq^{-1},q]$. This is also clearly smooth, so we have that $f$ is a diffeomorphism. Why aren't they isomorphic as Lie groups? I claim that $S^3\times \mathbb{R}P^3$ has a normal subgroup isomorphic to $SO(3)$ while $S^3\times S^3/\pm(1,1)$ doesn't. First, since $\mathbb{R}P^3$ is isomorphic to $SO(3)$, the subgroup $\{e\}\times \mathbb{R}P^3$ is isomorphic to $SO(3)$ and is normal in $S^3\times\mathbb{R}P^3$. Second, to see there are no normal subgroups isomorphic to $SO(3)$ in $S^3\times S^3/\pm(1,1)$ note that the Lie algebra of $S^3\times S^3/\pm(1,1)$ is $\mathfrak{so}(3)\oplus\mathfrak{so}(3)$ and $\mathfrak{so}(3)$ is simple, so the only nontrivial ideals are $\mathfrak{so}(3)\oplus 0$ and $0\oplus \mathfrak{so}(3)$. In $S^3\times S^3$, these exponentiate to the two different $S^3$ factors and only can easily check that under the projection $\pi:S^3\times S^3\rightarrow S^3\times S^3/\pm(1,1)$, $\pi$ is injective on each factor. 
It follows that the two ideals in $\mathfrak{so}(3)\oplus\mathfrak{so}(3)$ exponentitate to $S^3$s in $S^3\times S^3/\pm(1,1)$, so, in particular, they are not isomorphic to $SO(3)$.<|endoftext|> TITLE: Character of $\text{Hom}_{\mathbb{C}}(\mathbb{C}G,\mathbb{C})$ QUESTION [5 upvotes]: I've been given the following two questions, and for both I'm really unsure what to do (I'm a beginner with the theory of characters). Let $G$ be a finite group, $\mathbb{C}G$ its group algebra over $\mathbb{C}$. $\text{Hom}_{\mathbb{C}}(\mathbb{C}G,\mathbb{C})$ forms a $\mathbb{C}G$-module with the action defined, for $a,b \in \mathbb{C}G$, $f \in \text{Hom}_{\mathbb{C}}(\mathbb{C}G,\mathbb{C})$, by $(af)(b) = f(ba)$. With this module structure defined on $\text{Hom}_{\mathbb{C}}(\mathbb{C}G,\mathbb{C})$, firstly I want to find $\chi_{\text{Hom}_{\mathbb{C}}(\mathbb{C}G,\mathbb{C})}(g)$ for all $g \in G$. Secondly, in the case that $G = S_3$, I want to express $\chi_{\text{Hom}_{\mathbb{C}}(\mathbb{C}G,\mathbb{C})}(g)$ in terms of simple characters. Could someone please clearly describe how to do this. REPLY [2 votes]: The character is just the character $$ \rho(g) = \begin{cases} 0 & g\neq1\\ |G| & g=1\end{cases}$$ In other words, the character of the regular representation. Two ways to see this: Pick the basis for $Hom_\mathbb{C}(\mathbb{C}G,\mathbb{C})$ which consists of the functions $f_g$, where for $h\in G$ $$ f_g(h) = \begin{cases} 1 & g=h\\ 0 & g\neq h\end{cases}$$ Now to compute the trace $\chi(k)$ with $k\in G$, you need to know when $f_g(hk)=f_g(h)$, and that's easy. $\mathbb{C}G$ has character $\rho$, the regular representation. $\mathbb{C}$ has character $\psi$, the trivial representation. We also have the isomorphism $Hom_\mathbb{C}(\mathbb{C}G,\mathbb{C})\cong \mathbb{C}G^\ast\otimes\mathbb{C}$, with associate character $\bar{\rho}\cdot\psi=\rho$.<|endoftext|> TITLE: $\sqrt{x}$$\sqrt{x}$ = $x$ but $\sqrt{x^2}$ = $|x|$. Why? QUESTION [9 upvotes]: $\sqrt{x}$$\sqrt{x}$ = $x$ but $\sqrt{x^2}$ = $|x|$. Why is this? I'm just learning algebra again after many years and I can't seem to figure out why this is. I'm sure this is trivial but if someone could explain it it would help me a lot. Thanks! REPLY [17 votes]: By definition $\sqrt x$ is the unique non-negative real number $y$ such that $y^2=x$, so $\sqrt x\cdot\sqrt x=x$ is true by definition. Now apply the definition to $\sqrt{x^2}$: $\sqrt{x^2}$ is the unique non-negative real number $y$ such that $y^2=x^2$. If $x=0$, the only real number whose square is $x^2$ is $0$, so of course $\sqrt{x^2}=0=|0|$. If $x\ne 0$, there are always two real numbers $y$ such that $y^2=x^2$: one of them is $x$, and the other is $-x$. Exactly one of these two is positive. Since we don’t know whether $x$ is positive or not, we don’t know which of them is positive, but we know that whichever it is, it’s $|x|$. Therefore $|x|$ is the unique positive real number such that $|x|^2=x^2$, and by definition $\sqrt{x^2}=|x|$. As an example, suppose that $x=-3$. Then $x^2=9$, and the two real numbers whose squares are $9$ are $-3$ (i.e., $x$) and $3$ (i.e., $-x$). The non-negative one is $3=-(-3)=|-3|$. Had we started with $x=3$, $x^2$ would still have been $9$, and we’d still have wanted the positive one of $x$ and $-x$, but this time that would be $x$, not $-x$. It’s still true, however, that $|x|=|3|=3$, the one that we want.<|endoftext|> TITLE: How do you prove that proof by induction is a proof? 
QUESTION [13 upvotes]: Are proofs by induction limited to cases where there is an explicit dependence on an integer, like sums? I cannot grasp the idea of induction being a proof in less explicit cases. What if you have a function that suddenly changes behavior? If a function is positive up to some limit couldn't I prove by induction that it is always positive by starting my proof on the left of the limit? I feel like I am not getting a grasp on something fundamental and I hope you can help me with this. Thank you. REPLY [3 votes]: Induction is normally defined for the case when you have an explicit dependence on an integer (or an ordinal for transfinite induction, but I'm not going to talk about that). I'm not sure what you're talking about with the function, but since induction requires you to prove that $P(k)\Longrightarrow P(k+1)$ for all $k$, you wouldn't be able to 'prove' that the function was everywhere positive, because the statement $P(k)\Longrightarrow P(k+1)$ would fail at the point where we cross the 'limit point'. You can prove that proof by induction is a proof as follows: Suppose we have that $P(1)$ is true, and $P(k)\Longrightarrow P(k+1)$ for all $k\ge 1$. Then suppose for a contradiction that there exists some $m$ such that $P(m)$ is false. Let $S=\{n\in \mathbb{N} : P(n) \text{ is false}\}$. $S$ is non-empty (since it contains $m$), so it has a least element $s$ (The statement that every non-empty set of natural numbers has a least element is known as the Well Ordering Principle). Now, since $P(1)$ is true, $s\neq 1$. So $s-1$ is a natural number. Now, if $P(s-1)$ were true, then $P(k)\Longrightarrow P(k+1)$ would imply that $P(s)$ were true (setting $k=s-1$). Since $P(s)$ is not true, $P(s-1)$ must not be true either. But then $s-1\in S$ and $s-1<s$, contradicting the fact that $s$ was the least element of $S$. So no such $m$ exists, and $P(n)$ is true for every $n$. Notice that this argument rests on the Well Ordering Principle, and the real numbers do not satisfy it: for example, there is no least element of the set $\{x\in\mathbb{R} : x>3\}$. However, the real numbers do satisfy what is known as the Greatest Lower Bound property. This means that every non-empty set of real numbers which is bounded below has a greatest lower bound. For example, the set of real numbers that are greater than $3$ has no least element: if you take some number greater than $3$ - $3.001$, say - I can tell you another number that is greater than $3$ but still less than your number (like $3.0001$). But it does have a greatest lower bound, which is $3$. The set $\{x\in\mathbb{R} : x>0, x^2 > 2\}$ has no least element, but it has a greatest lower bound, which is $\sqrt{2}$. Exercise: what is the greatest lower bound for the set $\{1,0.1,0.01,0.001,\dots\}$? In general, a lower bound for a set is a number $b$ that is less than or equal to every element in the set. The Greatest Lower Bound Property is the statement that whenever a non-empty set has a lower bound, it has a greatest lower bound. We denote the greatest lower bound of a set $S$ by $\inf S$ ($\inf$ is short for infimum, another word for greatest lower bound). How can we use the Greatest Lower Bound Property to form an analogue of induction? I'll give you the statement of continuous induction first, and then prove it. Theorem Let $P(x)$ be a statement about an arbitrary real number $x$. Suppose we know the following two facts: $P(x)$ is always true if $x<0$; and whenever $P(x)$ is true for all $x<c$, there is some $\varepsilon>0$ such that $P(x)$ is true up to $c+\varepsilon$ (that is, whenever $x<c+\varepsilon$). Then $P(x)$ is true for all real numbers $x$. Proof: suppose not, and let $A=\{x\in\mathbb{R} : P(x)\text{ is false}\}$. Then $A$ is non-empty, and it is bounded below (by $0$, since $P(x)$ holds for every $x<0$), so by the Greatest Lower Bound Property it has a greatest lower bound $a$. Since $a$ is a lower bound for $A$, $P(x)$ is true for all $x<a$. By the second fact there is then some $\varepsilon>0$ such that $P(x)$ is true for all $x<a+\varepsilon$. But then $a+\varepsilon$ is a lower bound for $A$ which is greater than $a$, and we had supposed $a$ to be the greatest lower bound for $A$. So we have a contradiction. So $P(x)$ is true for all real numbers $x$. This is really the same thing as proof by induction for integers, but using the Greatest Lower Bound Property instead of the Well Ordering Principle.
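To make the continuous principle feel less abstract, here is one small illustration of how it gets used (a standard "creeping along" argument, included purely as an example). Claim: if $f:\mathbb{R}\to\mathbb{R}$ is continuous, $f(x)>0$ for every $x<0$, and $f$ has no zeros, then $f(x)>0$ for every real $x$. Take $P(x)$ to be the statement "$f(x)>0$". The first fact holds by assumption. For the second, suppose $P(x)$ holds for all $x<c$. Then $f(c)=\lim_{x\to c^-}f(x)\ge 0$, and since $f(c)\neq 0$ we get $f(c)>0$; by continuity $f>0$ on some interval $(c-\delta,c+\varepsilon)$, so $P(x)$ holds for all $x<c+\varepsilon$. Continuous induction now gives $f>0$ everywhere.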
It is worth mentioning that I have never had occasion to use continuous induction over real numbers, and it's not a standard technique: mathematicians tend to prove statements about the real numbers straight from the Greatest Lower Bound Principle, and not use any fancy tricks (just as you can turn any proof by induction into a proof where you try to find a 'minimal counterexample', as I did when I proved that induction works above). But it's quite fun, and it shows that you can generalize things like the induction principle on to more general sets, as long as you have some statement similar to the Well Ordering Principle.<|endoftext|> TITLE: conformally equivalent flat tori QUESTION [7 upvotes]: The interiors of any two rectangles are conformally equivalent, by the Riemann mapping theorem. Suppose with each rectangle, we glue opposite sides together, and the metric on the quotient space, which is a torus, is that the distance between two points is the length of the shortest arc connecting them. Are the two quotient spaces still conformally equivalent, when the shapes of the rectangles differ? For which pairs of shapes is the answer "yes"? REPLY [12 votes]: In general the answer is no, even in a slightly more general case. The tori you describe can be represented by $\mathbb{C} / \Gamma_\tau$, where $\Gamma_\tau$ is the lattice generated by $1$ and $\tau$ with $\Im \tau > 0$. (Rectangles correspond to $\tau$ being purely imaginary.) If you have a conformal map between two such tori, you can lift it to their universal covers, and so you get an affine map $T(z)=az+b$ in the plane. Composing with a translation (which projects to a conformal self-map of any such torus), you can assume that $T(0)=0$, so $T(z) = az$. This map has to map the lattice $\Gamma_{\tau_1}$ to $\Gamma_{\tau_2}$, so the question is which of these lattices are equivalent under multiplication with some complex scalar $a$. Obviously $\tau \Gamma_{-1/\tau} = \Gamma_\tau$, and $\Gamma_\tau = \Gamma_{\tau+1}$. The maps $\tau \mapsto -1/\tau$ and $\tau\mapsto \tau+1$ generate $PSL(2,\mathbb{Z}) = \left\{ \tau \mapsto \frac{a\tau+b}{c\tau+d}: a,b,c,d \in \mathbb{Z}, \, ad-bc=1 \right\}$, and it can be shown that $\tau_1$ and $\tau_2$ generate equivalent lattices iff they are in the same orbit of $PSL(2,\mathbb{Z})$ acting on the upper halfplane $\Im \tau > 0$. For the special case of rectangles, we get that $\tau_1 = i t_1$ and $\tau_2 = i t_2$ generate equivalent lattices iff $t_1=t_2$ or $t_1 = 1/t_2$, i.e., iff the rectangles are similar.<|endoftext|> TITLE: Has error correction been "solved"? QUESTION [20 upvotes]: I recently came across Dan Piponi's blog post An End to Coding Theory and it left me very confused. The relevant portion is: But in the sixties Robert Gallager looked at generating random sparse syndrome matrices and found that the resulting codes, called Low Density Parity Check (LDPC) codes, were good in the sense that they allowed messages to be transmitted at rates near the optimum rate found by Shannon - the so-called Shannon limit. Unfortunately the computers of the day weren't up to the task of finding the most likely element of M from a given element of C. But now they are. We now have near-optimal error correcting codes and the design of these codes is ridiculously simply. There was no need to use exotic mathematics, random matrices are as good as almost anything else. The past forty years of coding theory has been, more or less, a useless excursion. 
Any further research in the area can only yield tiny improvements. In summary, he states that the rate of LDPC codes is very near channel capacity—so near that further improvements would not be worth the while. So, my question is: What does modern research in error-correcting codes entail? I noticed that Dan did not mention what channel the rate of LDPC codes approach the capacity of, so maybe there exist channels that LDPC codes don't work well on? What other directions does modern research in the field explore? REPLY [2 votes]: Polar Codes, designed by Arikan, have definitely beaten LDPC codes, in terms of achieving cutoff rate at shorter blocklengths and comparable complexity. See Arikan's 2019 IEEE Information Theory society Shannon Lecture (video, as well as paper on arxiv) in July 2019 in Paris [plot at 40:10 of the video], after he was awarded the Shannon award. He presented "hot off the press" results in the Shannon lecture, which indicated that the new ensemble of "polarization-adjusted convolutional" codes which are a kind of concatenated codes (section VII of the paper) approach the dispersion lower bound for the cutoff rate, beating LDPC codes at a length of 128 bits [to my knowledge LDPC codes need a few thousand bits to do this].<|endoftext|> TITLE: Asymptotic behavior of $\sum\limits_{k=2}^{m}\frac{1}{\ln(k!)}$ QUESTION [6 upvotes]: The task is to find asymptotic behavior of sum: $$\sum\limits_{k=2}^{m}\frac{1}{\ln(k!)}$$ when $m\to\infty$. Any help with solving this one? REPLY [3 votes]: Using Stirling's approximation: $$\ln(n!)\sim n\ln(n)+O(n)$$ Next we approximate sum with integral: $$\sum\limits_{k=2}^{m}\frac{1}{k\ln(k)}\sim\int_{2}^{m}\frac{dx}{x\ln(x)}=\ln\ln(m)-\ln\ln(2)$$ Found asymptotic behavior — $\ln \ln(n)$.<|endoftext|> TITLE: "Real" cardinality, say, $\aleph_\pi$? QUESTION [6 upvotes]: Is there any meaningful definition to afford for $\aleph_r$ (as in cardinality) where $r\in\mathbb{R}^+$? $r\in\mathbb{C}$? What about $\aleph_{\aleph_0}$? Can we iterate this? $\aleph_{\aleph_{\aleph_{\cdots}}}$ I may be throwing in bunch of rather naive/basic questions, for I haven't learnt much about infinite cardinalities. If I am referring to bunch of stuff abundantly dealt in established areas, please kindly point out. REPLY [8 votes]: The $\aleph$ numbers are well-ordered. This means that every non-empty set has a minimal element. Furthermore, they are linearly ordered. This means that any indexing imposed on them should at least have these two properties. The real numbers are not well-ordered (consider the subset $(0,1)$, or even $\mathbb R$ itself) and the complex numbers are not even ordered in any natural sense. The idea behind having a well-ordering is to say what is the next cardinality. Given a set, we can easily tell what is the least $\aleph$ which is larger. In the natural numbers (and their generalization, the ordinals) we have a successor function which does that, so it is a good ground to use when indexing cardinalities. We don't have a nice successor function for the real numbers, or for any dense ordering for a matter of fact. It is possible to have $\aleph_{\aleph_0}$, but there is a minor problem here. $\aleph_0$ is the notation discussing size, whereas $\aleph_\alpha$ is the cardinal that the cardinals below it have order type $\alpha$. So we write $\aleph_\omega$, where $\omega$ is the least infinite ordinal. This is a limit cardinal, which means that it is not a successor of any cardinal -- but there are smaller $\aleph$'s nonetheless. 
This of course can be reiterated, but we need to use the ordinal form, rather the cardinal form. Namely, every $\aleph$ number is also an ordinal. $\aleph_\alpha$ is the actually the ordinal $\omega_\alpha$, where these ordinals are defined recursively as ordinals which are $(1)$ infinite; and $(2)$ do not have an injection into any smaller ordinal. The least is $\aleph_0$ and it is the cardinality of the natural numbers, which is the ordinal $\omega$. So if we wish to iterate, $\aleph_0\to\aleph_{\aleph_0}\to\aleph_{\aleph_{\aleph_0}}\to\dots$ we actually need to do it as following: $$\aleph_0\to\aleph_{\omega}\to\aleph_{\omega_\omega}\to\ldots$$ That been said, without the axiom of choice it is consistent to have sets whose cardinality is not an $\aleph$ number, namely sets which cannot be well-ordered. It is consistent that there is a collection of sets which is ordered (by inclusion) like the real numbers, and no two sets have the same cardinality (there is no bijection between two distinct sets). Some threads which may interest the reader: "Homomorphism" from set of sequences to cardinals? Non-aleph infinite cardinals What are Aleph numbers intuitively?<|endoftext|> TITLE: There is no difference between a metrizable space and a metric space (proof included). QUESTION [12 upvotes]: Willard says, "Whenever $(X,\tau)$ is a topological space whose topology $\tau$ is the metric topology $\tau_{\rho}$ for some metric $\rho$ on $X$, we call $(X,\tau)$ a metrizable topological space." I think giving a proof is the best way to illustrate how I think of these concepts and illustrate the exact points that I am not understanding. Theorem: Metric space iff metrizable space. (->) Let $(X,\rho)$ be a metric space. Consider the topology generated by this metric, $\tau_{\rho}$. Then $(X,\tau_{\rho})$ is a topological space whose topology is the metric topology for some metric, so by definition, metrizable. (<-) Let $(X,\tau)$ be a metrizable space. There $\exists \rho$ a metric such that $\tau$ is the metric topology given by $\rho$. And so $(X,\rho)$ is a metric space. REPLY [15 votes]: Really formally: A topological space is a pair $(X, \mathcal{T})$ of a set $X$ and a subset $\mathcal{T}$ of the power set of $X$ satisfying the appropriate axioms for open sets. A metric space is a pair $(X, d)$ of a set $X$ and a map $d: X \times X \to \mathbb{R}^{\ge 0}$ satisfying the appropriate axioms for a distance function. It does not make sense to say a topological space "is" a metric space, or vice-versa, because there is an ontological status problem - how can $(X, \mathcal{T}) = (X, d)$ when $X$ and $\mathcal{T}$ are completely different objects? (See important caveat at bottom.) If I give you a metric space $(X, d)$ then there is a canonical topological space $(X, \mathcal{T})$ with the same underlying set $X$, called the induced topological space, whose topology is generated by the $\epsilon$-balls. If I give you a topological space $(X, \mathcal{T})$, there may or may not be some metric space $(X, d)$ whose induced topological space is $(X, \mathcal{T})$. If there is one, then we say that $(X, \mathcal{T})$ is metrizable. If $(X, \mathcal{T})$ is metrizable, then there is some metric $d$ such that $(X, d)$ is a metric space whose induced topological space is $(X, \mathcal{T}$, but note that for instance $(X, \frac{1}{2}d)$ also has this property, as does $(X, 7d)$, so there is absolutely nothing canonical about $d$, i.e. you can't recover $d$ from $\mathcal{T}$. 
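To see this non-uniqueness concretely, here is a small numerical illustration (a Python sketch; the choice of the Euclidean and the maximum metric on $\mathbb{R}^2$ is only for the example). The two metrics disagree as functions, but they satisfy two-sided bounds, so every ball of one contains a concentric ball of the other, and hence they generate exactly the same open sets:

```python
import math
import random

def d_euclid(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_max(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

random.seed(0)
for _ in range(100_000):
    p = (random.uniform(-5, 5), random.uniform(-5, 5))
    q = (random.uniform(-5, 5), random.uniform(-5, 5))
    a, b = d_max(p, q), d_euclid(p, q)
    # The metrics differ as functions of (p, q) ...
    # ... but d_max <= d_euclid <= sqrt(2) * d_max always holds, which is
    # exactly what makes each ball for one metric contain a (smaller) ball
    # for the other, so the induced topologies coincide.
    assert a <= b + 1e-12 and b <= math.sqrt(2) * a + 1e-12

print("two-sided bounds hold on all sampled pairs")
```

The same comparison applies to $d$ versus $\frac{1}{2}d$ or $7d$ above: different metrics, identical topology.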
Caveat: Keeping $d$ and $\mathcal{T}$ around is horribly clunky and pretentious and nobody actually does it, and people say that a metric space "is a topological space" all the time, or vice-versa for metrizable topological spaces, e.g., I might say "the topological space $\mathbb{R}^2$ is actually a metric space" when I mean "there exists a metric-space structure which induces the usual topology on $\mathbb{R}^2$." Being super-precise with ordered pairs is only useful when sorting out confusion like this one; it's not how to think about these things in practice.<|endoftext|> TITLE: Integration of a differential form along a curve QUESTION [6 upvotes]: Given the differential form $\alpha = x dy - \frac{1}{2}(s^2+y^2) dt$, I'd like to evaluate $\int_\gamma \alpha$ where $\gamma(s)=(\cos s,\sin s, s)$ and $0\leq s\leq \frac{\pi}{4}$. When attempting to evaluate this in a way similar to a line integral, first I need to find the derivative of each component of $\gamma$ and then take the square root of the sum of their squares: $$ dt = \sqrt{x'(s)^2+y'(s)^2+t'(s)^2} = \sqrt{2} ds $$ But when I'm trying to evaluate $$ \int_\gamma x dy- \frac{1}{2}(s^2+y^2) dt $$ I'm not sure how to handle the different differentials. The first statement I made about $dt$ doesn't seem to make sense in this case because I don't know how $dy$ and $dt$ compare. I honestly have no idea how to approach this and would appreciate a step-by-step guide to evaluating integrals of this type. REPLY [13 votes]: Differential forms exist to eat vector fields. The only vector field in sight is the velocity field on $\gamma$, so we may as well feed it to $\alpha$: \begin{eqnarray*}\alpha(\gamma'(s)) &= \cos(s)dy(-\sin(s),\cos(s),1) - \frac{1}{2}(s^2 + \sin^2(s))dt(-\sin(s),\cos(s),1) \\ &= \cos^2(s) - \frac{1}{2}(s^2 + \sin^2(s)).\end{eqnarray*} This gives us a nice function on $[0,\frac{\pi}{4}]$ which we can integrate. That's the integral of $\alpha$ on $\gamma$: $$\int_\gamma\alpha = \int_0^\frac{\pi}{4} \alpha(\gamma'(s))ds = \int_0^\frac{\pi}{4} \cos^2(s) - \frac{1}{2}(s^2 + \sin^2(s))\ ds.$$ I trust you can evaluate this. As a general definition, to integrate a differential $k$-form $\omega$ on some oriented smoothly embedded $k$-dimensional submanifold $\Phi: U\hookrightarrow \mathbb{R}^n$, pull the differential form back via $\Phi$ and integrate over $U$: $$\int_U\Phi^*\omega.$$ The pullback form $\Phi^*\omega$ is defined by $\Phi^*\omega(V_1,V_2,\ldots,V_k) = \omega(d\Phi V_1,d\Phi V_2,\ldots, d\Phi V_k)$. (You'll notice that's exactly what we did with $\gamma$ and $\alpha$, since $\gamma'(s) = d\gamma_s(\frac{\partial}{\partial s})$.) Since the pullback is a $k$-form on a $k$-dimensional domain, it's a multiple of the volume form and so it can be integrated as an ordinary function. In practice, for one-forms, you'll compute the velocity field, feed it to the one-form to get a function, and then integrate that function with respect to the curve parameter. Here's an exercise to tie integrating one-forms to the more familiar line integrals of vector fields. If $\gamma$ is a smooth curve on (say) $[0,1]$ into $\mathbb{R}^3$ and $V(p) = (V_x(p),V_y(p),V_z(p))$ is a smooth vector field on $\mathbb{R}^3$, you have defined $$\int_\gamma V = \int_0^1 \langle V(\gamma(s)), \gamma'(s)\rangle ds.$$ On $\mathbb{R}^3$, to any vector field $V$, we associate the one-form $\vartheta$ defined by $\vartheta(X) = \langle V,X\rangle$. In terms of differentials, $\vartheta = V_xdx + V_ydy + V_zdz$.
Now look at the integrand of $\int_\gamma V$: taking the inner product of $V$ and $\gamma'$ is the same as feeding $\gamma'(s)$ to $\vartheta$. The integral of $V$ over $\gamma$ is nothing but $$\int_\gamma \vartheta.$$<|endoftext|> TITLE: How many 0's are at the end of 20! QUESTION [5 upvotes]: I'm not exactly sure how to answer this question, any help would be appreciated. After reading this I'm still not sure. Cheers REPLY [2 votes]: General formula (for the interested) about the number of zeroes in n! in any base (b). First consider all prime factors of b, then consider the biggest one (p). Then use this formula. $\lfloor n/p \rfloor$ + $\lfloor n/p^2 \rfloor$ + $\lfloor n/p^3 \rfloor$ + .... This and using the fact that, the floor becomes zero after some exponent, you can calculate the number of zeroes in any base. http://maths-on-line.blogspot.in/<|endoftext|> TITLE: 1/f "Pink Noise" for the Math-Disabled QUESTION [6 upvotes]: I have very little skill when it comes to math beyond all the elementary level operations (addition, subtraction, multiplication, division, mean, mode, etc) and a vague grasp of statistics, what a graph is, and algebra. I'm a writer at heart, and noticed a few discussions online about 1/f wave patterns or "pink noise." As much as I tried to comprehend the maths, the subject is over my head What I want to do is structure the beats of my text to produce a more pleasant experience that helps hold reader attention using natural mathematical rhythms as a guidelines. Would someone write up a set of steps, or pseudo-code explaining how the relations of intervals between points on a number line produce 1/f "pink noise"? REPLY [2 votes]: Apologies for bumping the old thread, but... I think what you're looking for is related to Zipf's Law (http://en.wikipedia.org/wiki/Zipf%27s_law). A special case of this is Benford's Law, which is what forensic accountants use to analyse record books for irregularities. The idea is that the most common numbers will appear in a 1/f frequency, i.e. Pink Noise. 1: .30 2: .18 3: .12 4: .10 5: .08 6: .07 7: .06 8: .05 9: .04 So for your purposes I would suggest writing a piece, like a chapter, and then analyzing your beats for pacing/length. The shortest segments (fast action) should account for about 30% of the whole, and the longest segments (deep dialog) about 4%... Naturally if you want more or less granularity, you'll have to run the math on it to get the appropriate values, but this should give you a start. Another thing to keep in mind, is that this is fractal. So your beats should follow this pattern, your word lengths, paragraph lengths, chapter lengths... The more scales you apply it to, the more you should see your work start to flow. Hope this helps!<|endoftext|> TITLE: Understanding matrices whose powers have positive entries QUESTION [7 upvotes]: A regular matrix $A$ is described as a square matrix that for all positive integer $n$, is such that $A^n$ has positive entries. How then would I prove something is regular? I mean I can prove something is irregular if $A^2$ has some 0 or negative entries; but I cant prove regularity since I cant solve $A^n$ for all integers $n$. My thoughts are that if a matrix $A$ is diagonalisable as $A=PD^{-1}P$ then it is 'regular,' since then all $A^k$ exist; but does this also imply all entries of $A^k$ are positive? Any hints? REPLY [8 votes]: If $A$ has an entry that is $0$ or negative, then $A$ is not regular. 
If, on the other hand, every entry in $A$ is positive, can $A^2$ have a negative or zero entry? Can $A^3$? There’s an easy proof by induction waiting here for you to find it. Note that diagonalizability has nothing to do with the matter: if $A$ is square, $A^n$ exists for all $n\ge 0$ whether or not $A$ is diagonalizable. Diagonalizability of $A$ merely makes it easy to calculate the powers of $A$. However, that’s not the usual definition of regular matrix. The usual definition is that a square matrix $A$ is regular if it is stochastic and there is some $n\ge 1$ such that all of the entries of $A^n$ are positive.<|endoftext|> TITLE: Uniformly Continuous Like Property of the Integration on Measure Space QUESTION [6 upvotes]: This is Exercise 1.12 of Rudin's Real and Complex Analysis: Suppose $f\in L^1(\mu)$. Prove that to each $\epsilon>0$ there exists a $\delta>0$ such that $\int_{E}|f|d\mu<\epsilon$ whenever $\mu(E)<\delta$. This problem resembles the uniform continuity property. I tried to prove it by contradiction, but I can't figure it out. Thanks! REPLY [2 votes]: Here is another solution. Define $A_j=\{x:|f(x)|\geq j \},\forall j\in \mathbb N$. The given statement is clearly true if $|f|$ is bounded. So if we can show the integrals over the defined sets tend to zero, we are done. Now, $ \chi_{_{A_j}} $ is a monotonically decreasing sequence of functions with limit $0$. The same is true for $ |f| \chi _{_{A_j}} $, with $|f|\chi _{_{A_1}}\in L^1(\mu)$. So apply the Dominated Convergence Theorem to get $\lim_{j \to \infty}\int_{A_j}|f| d\mu=0 $.<|endoftext|> TITLE: Why $(1-\zeta)$ unit where $\zeta$ is a primitive nth and n divisible by two primes QUESTION [5 upvotes]: From Chapter VII of Lang's Algebra. The question asks: if $n\geq 6$ and $n$ is divisible by at least two primes, show that $1-\zeta$ is a unit in the ring $\mathbb{Z}[\zeta]$. I am having a hard time understanding why this is true. This is in the integral dependence chapter, but that has not given me any inspiration. I have also tried using cyclotomic polynomials, to no avail. Thanks for any direction. REPLY [3 votes]: Hurkyl's advice in the comments is sensible. Here is a more theoretical way to think about it; I've never read Lang's book, so I don't know how well it fits with the material of the chapter (but it is a standard argument in number theory): Write $n = p^k m$ with $p \not\mid m$. Note $(1-\zeta)^{p^k} \equiv 1 - \zeta^{p^k} \bmod p.$ Now $\zeta^{p^k}$ is a primitive $m$th root of $1$, where $p \not \mid m$. Assuming $m \neq 1$, can you use this to prove that $1 - \zeta^{p^k}$ is a unit mod $p$? And hence that $1 - \zeta$ is a unit mod $p$? Now find another prime $q$ so that $1 - \zeta$ is also a unit mod $q$. Once you've done this, you're done. Do you see why?<|endoftext|> TITLE: Formally prove that every finite language is regular QUESTION [9 upvotes]: I know how to prove this informally, but don't know what the formal proof should look like. REPLY [4 votes]: Another definition of a regular language is one generated by a regular grammar. If $L = \{v_1, v_2, \dots , v_n\}$ then $$S \to v_1 \\ S \to v_2 \\ \vdots \\S \to v_n$$ is a regular grammar of $L$.<|endoftext|> TITLE: Euler and infinity QUESTION [6 upvotes]: What do people mean when they say that Euler treated infinity differently? I read in various books that, today, mathematicians would not approve of Euler's methods and that his proofs lacked rigor. Can anyone elaborate?
Edit: If I remember correctly Euler's original solution to the Basel problem is as follows. Using Taylor series for $\sin (s)/s$ we write $$\sin (s)/s = 1 - {s^2}/3! + {s^4}/5! - \cdots $$ but $\sin (s)/s$ vanishes at $\pm \pi$, $\pm 2\pi$, etc. hence $$\frac{{\sin s}}{s} = {\left( {1 - \frac{s}{\pi }} \right)}{\left( {1 + \frac{s}{\pi }} \right)}{\left( {1 - \frac{s}{{2\pi }}} \right)}{\left( {1 + \frac{s}{{2\pi }}} \right)}{\left( {1 - \frac{s}{{3\pi }}} \right)}{\left( {1 + \frac{s}{{3\pi }}} \right)} \cdots$$ or $$\frac{{\sin s}}{s} = {\left( {1 - \frac{{{s^2}}}{{{1^2}\pi^2}}} \right)}{\left( {1 - \frac{{{s^2}}}{{{2^2}{\pi ^2}}}} \right)}{\left( {1 - \frac{{{s^2}}}{{{3^2}{\pi ^2}}}} \right)} \cdots$$ which is $$\frac{{\sin s}}{s} = 1 - \frac{{{s^2}}}{{{\pi ^2}}}{\left( {\frac{1}{{{1^2}}} + \frac{1}{{{2^2}}} + \frac{1}{{{3^2}}} + \cdots } \right)} + \cdots.$$ Equating coefficients yields $$\zeta (2) = \frac{{{\pi ^2}}}{6}.$$ But $\pm \pi$, $\pm 2\pi$, etc. are also roots of ${e^s}\sin (s)/s$, correct? So equating coefficients does not give ${\pi ^2}/6$. REPLY [3 votes]: To respond to your question concerning Euler's treatment of infinity, note that Euler did indeed "treat infinity differently" from the way it is viewed today. Our conceptual framework today is dominated by the work of Cantor, Dedekind, and Weierstrass, who sought to eliminate infinitesimals and replace them by epsilon, delta procedures within the context of an Archimedean continuum devoid of infinitesimals. Euler, on the other hand, worked with infinitesimals galore, and used infinite numbers freely. Thus he viewed an infinite series as a polynomial of infinite order. In the terminology of the historian Detlef Laugwitz, his arguments contained some "hidden lemmas" that require further justification, which can indeed be provided in light of modern theories. Other than that, Euler's techniques and procedural moves are closely mirrored by techniques and principles developed in the context of a hyperreal extension $\mathbb{R}\subset{}^{\ast}\mathbb{R}$, and his "infinite numbers" admit of proxies in the hyperreal approach, namely hyperreal integers in $^\ast\mathbb{N}\setminus\mathbb{N}$. Thus, an infinite series is approximated (up to infinitesimal error) by a polynomial of infinite hyperfinite degree. These can be manipulated like ordinary polynomials by the transfer principle. Euler obviously did not have the semantic foundational frameworks developed from 1870 onward such as ZFC, but his syntactic procedures are successfully and faithfully mirrored in the hyperreal approach. Historians critical of Euler's techniques are generally ignorant of hyperreal techniques and therefore hostile toward them. A number of articles in the literature successfully interpret Euler's procedures in terms of modern infinitesimals (with the syntactic/semantic proviso stated above), including the work of Kanovei, Laugwitz, McKinzie, Tuckey, and others. Note that Euler did not jump from the equality of zeros to the equality of sine to the infinite product. He gave an elaborate, and essentially correct, argument in favor of such equality. More specifically, Euler provided an elaborate 7-step argument in favor of the decomposition. 
In fact his argument can be formalized, step-by-rigorous-step, in the framework of modern theories, as discussed in this recent article.<|endoftext|> TITLE: $\frac{a+b}{b+c} + \frac{c+d}{d+a} ≤ 4(\frac{a+c}{b+d}) ; a , b , c , d ∈ [1 , 2]$ QUESTION [6 upvotes]: Is it true that if all the real numbers $a , b , c , d$ are from the closed interval $[1 , 2]$ then we always have the inequality $$ \frac{a+b}{b+c} + \frac{c+d}{d+a} ≤ 4\Big(\frac{a+c}{b+d}\Big) $$ REPLY [3 votes]: Since $a, c \geq 1$, we have $$ a - \frac {a + b} {b + c} = \frac{ab + ac - a - b} {b + c} = \frac {b(a - 1) + a(c - 1)} {b + c} \geq 0 $$ A similar argument leads to $$ c - \frac{c + d} {d + a} \geq 0 $$ Finally $b + d\leq 4$ implies $$ \frac {a + b} {b + c} + \frac{c + d} {d + a} \leq a + c \leq \frac 4 {b + d} (a + c) $$<|endoftext|> TITLE: What do Greek Mathematicians use when they use our equivalent Greek letters in formulas and equations? QUESTION [19 upvotes]: Like for example, it's common to use the Greek letter $\theta$ to represent an angle right? So what would a Greek person doing math use to represent an angle? Would they also use $\theta$? Or is there another notation that they would use in order for them to use their letters like we do? Such as if we say $A\geq B$, would a Greek student, mathematician, or whoever say: $\alpha \geq \beta$ or is there something else they say? It just seems like the Greek letters from a non-Greek point of view have so much meaning to us, but then how do they percieve their letters used in mathematics? REPLY [8 votes]: I am Greek. We use all the letters, Greek or Latin. But we pronounce some of them in a different way. For example the letter $\mu$ is pronounced "me". The letter $\beta$ as "vita". The angles are almost always named as $\theta, \phi, \omega$. If somebody writes $A\ge B$ we read the same as $\alpha\ge \beta$. The only problem is when you want a student to understand that $\chi \psi$ denotes $\chi\cdot \psi$ but $\ln a$ has nothing to do with $l\cdot n \cdot a$. Finally, I wanted to add that actually $\pi$ is not pronounced like the food - for example (apple-pie) - but like "me" with the letter p at the beginning (that is, "pe").<|endoftext|> TITLE: Integrating each side of an equation w.r.t. to a different variable? QUESTION [10 upvotes]: Say I have $\frac{dy}{dt} = a$, for some constant $a$ So $dy = a dt$ $\int_{y0}^y dy = \int_{t0}^t a dt$ $y - y_0 = at - at_0$ How come I am allowed to integrate each side with respect to a different variable? If I had an equation $y = 5x$ and I differentiated the LHS w.r.t. to y, and the RHS w.r.t. x I would get $1 = 5$...so differentiating both sides w.r.t. to different variables doesn't work. Yet integrating does? REPLY [8 votes]: This is often a bit subtle to people beginning differential equations, but in reality you are just applying integration by substitution. Recall that for integration by substitution we have $$\int_{x_0}^{x_1} f(u(x))u'(x)\ dx = \int_{u(x_0)}^{u(x_1)} f(u)\ du$$ We can write this in the more familiar Leibniz notation, abbreviating $u(x_i)$ as $u_i$, in the following form $$\int_{x_0}^{x_1} f(u(x))\frac{du}{dx}\ dx = \int_{u_0}^{u_1} f(u)\ du$$ If we apply this to the differential equation $$\frac{dy}{dt} = a$$ then we integrate both sides with respect to $t$ $$\int_{t_0}^{t_1}1 \frac{dy}{dt}\ dt = \int_{t_0}^{t_1} a\ dt$$ You can recognize the left-hand side from the integration by subsitution equation with $f(y) = 1$. 
We therefore have $$\int_{y(t_0)}^{y(t_1)}1\ dy=\int_{y_0}^{y_1}dy$$ And this gives the illusion of integration with respect to different variables. In the end, it's just a clever mnemonic. If you keep this process in mind, it really is more intuitive to just manipulate the differentials however, so there's really nothing wrong with what you did (and I'm sure we all do it anyways). Just make sure you understand the underlying mechanics behind the separation of variables. As James S. Cook kindly points out, all of these results are pretty much the chain rule in disguise. Many results in calculus involving change of variables or reparametrization can be traced back in origin to the chain rule. If you are not aware of the connections between the chain rule and integration by substitution, then I would say you have quite a bit of exploring to do. The wikipedia page linked above should start you off quite nicely.<|endoftext|> TITLE: In a Venn diagram, where are other number sets located? QUESTION [8 upvotes]: I remember of this image I've learned at school: I've heard about other number (which I'm not really sure if they belong to a new set) such as quaternions, p-adic numbers. Then I got three questions: Are these numbers on a new set? If yes, where are these sets located in the Venn diagram? Is there a master Venn diagram where I can visualize all sets known until today? Note: I wasn't sure on how to tag it. REPLY [8 votes]: This Venn diagram is quite misleading actually. For example, the irrationals and the rationals are disjoint and their union is the entire real numbers. The diagram makes it plausible that there are real numbers which are neither rational nor irrational. One could also talk about algebraic numbers, which is a subfield of $\mathbb C$, which meets the irrationals as well. As for other number systems, let us overview a couple of the common ones: Ordinals, extend the natural numbers but they completely avoid $\mathbb{Z,Q,R,C}$ otherwise. $p$-adic numbers extend the rationals, in some sense we can think of them as subset of the complex numbers, but that is a deep understanding in field theory. Even if we let them be on their own accord, there are some irrational numbers (real numbers) which have $p$-adic representation, but that depends on your $p$. You can extend the complex numbers to the Quaternions (and you can even extend those a little bit). You could talk about hyperreal numbers, but that construction does not have a canonical model, so one cannot really point out where it "sits" because it has many faces and forms. And ultimately, there are the surreal numbers. Those numbers extend the ordinals, but they also include $\mathbb R$. Now, note that this diagram is not very... formal. It is clear it did not appear in any respectable mathematical journal. It is a reasonable diagram for high-school students, who learned about rationals and irrationals, and complex numbers. I would never burden [generic] high-school kids with talks about those number systems above.<|endoftext|> TITLE: Is there a quicker, nicer way to show that the union of compact sets is not necessarily compact? QUESTION [5 upvotes]: Working on this real analysis problem set, and the problem is: Is every union of compact sets compact? I decided to try to construct a counterexample, but it seems convoluted and hard to follow (maybe not). 
Here's my answer, which I intend to use as my answer, but I'm wondering if there is a way I can answer this using the properties of compactness rather than constructing an explicit counterexample. Let $S_n=[n,n+1]$. Then for any $i\in\mathbb{N}$, $S_i$ is compact (this is a theorem so I'm good here). Now suppose $S=\bigcup_{i=1}^\infty S_i$ is compact and let ${G_n}$ be the open interval $(0,2^n)$. Then $\bigcup_{i=1}^{\infty}G_i$ is an open cover of $S$ (is this obvious? Should I try to show this explicitly?), so there exist finitely many indices $\{\alpha_1,...,\alpha_n\}$ such that $G=\bigcup_{i=1}^nG_{\alpha_i}\supseteq S$. Since there are finitely many indices $\{\alpha _i\}$, there exists some index $\alpha_i^*$ with the greatest least upper bound $2^{\alpha_i^*}$. Now let $x=2^{\alpha_i^*}+1$. Then $x\in S_x\subset S$ but $x\notin G$. But under the hypothesis that $S$ is compact, $G\supseteq S$. Hence $S$ is not compact. $\square$ So yeah, it seems convoluted and I'm not sure if it's even airtight—there might be some blanks that I need to fill in. Is there a faster way? REPLY [5 votes]: The union of the compact intervals $[\frac{1}{n},1]$ is the non-compact interval $(0,1]$.<|endoftext|> TITLE: In a monoid, does $x \cdot y=e$ imply $y \cdot x=e$? QUESTION [6 upvotes]: A monoid is a set $S$ together with a binary operation $\cdot:S \times S \rightarrow S$ such that: The binary operation $\cdot$ is associative, that is, $(a\cdot b) \cdot c=a\cdot (b \cdot c)$ for all $a,b,c \in S$. There is an identity element $e \in S$, that is, there exists $e \in S$ such that $e \cdot a=a$ and $a \cdot e=a$ for all $a \in S$. Question: Suppose, $x,y \in S$ such that $x \cdot y=e$. Does $y \cdot x=e$? This question was motivated by the question here, where the author attempts to prove a special case of the above in the context of matrix multiplication. It was subsequently proved, but the proofs require the properties of the matrix. I attempted to use Prover9 to prove the statement. Here's the input: formulas(assumptions). % associativity (x * y) * z = x * (y * z). % identity element a x * a = x. a * x = x. end_of_list. formulas(goals). x * y = a -> y * x = a. end_of_list. and it returned sos_empty, which, I guess, implies that no proof of the above statement is possible from the axioms of monoids alone. I ran Mace4 on the same input, and found no counter-examples for monoids of sizes $1,2,\ldots,82$. A comment by Martin Brandenburg here regarding K-algebras might also apply here. For example, the property might be true for finite monoids, but not all infinite monoids. A counter-example would (obviously) need to be non-commutative. REPLY [21 votes]: Let $M$ be the monoid of all functions from $\mathbb N$ to $\mathbb N$, with function composition as the operation. Let $y(n)=n+1$ for all $n$, while $x(n)=n-1$ for $n\ge 1$, $x(0)=0$. Then $xy$ is the identity function, while $yx$ is not. As you conjecture, the statement is true for finite monoids. Suppose $xy=e$. If we have a $z$ such that $zx=e$, then $zxy=ey$, so $ze=ey$, so $z=y$. Thus it's enough to show $x$ has a left inverse $z$. To do this, consider the function $f$ from $M$ to $M$ given by $f(a)=ax$. If $ax=bx$, then $axy=bxy$, hence $a=b$. Thus $f$ is injective, and so (since $M$ is finite) $f$ must be surjective, so in particular $e$ is in its image, and we're done. In short: every monoid $M$ is isomorphic to a set of functions from $M$ to $M$ under composition, and $fg=\mathrm{id} \rightarrow gf=\mathrm{id}$ only holds on finite sets. 
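A quick computational illustration of the infinite counterexample above (this snippet is my own addition, written in Python; it only sanity-checks the maps $x$ and $y$ described in the answer):

# The monoid of functions N -> N under composition: y is "add one",
# x is "subtract one, clamped at 0".
def y(n):
    return n + 1

def x(n):
    return n - 1 if n >= 1 else 0

# x composed with y is the identity on every value we test ...
assert all(x(y(n)) == n for n in range(100))

# ... but y composed with x is not: it sends 0 to 1.
print([y(x(n)) for n in range(5)])   # prints [1, 1, 2, 3, 4]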
REPLY [8 votes]: If a left inverse and a right inverse exist in a monoid, they are equal. However, the existence of a left inverse need not imply the existence of a right inverse, and vice versa. REPLY [4 votes]: Let $S$ be the set of maps $\mathbb{C} \rightarrow \mathbb{C}$. Let $f \in S$ be the map defined by $f(x) = x^2$. Since $f$ is surjective, there exists $g \in S$ such that $f\circ g = 1$. Since $f$ is not bijective, $g\circ f \neq 1$.<|endoftext|> TITLE: Sequences of simple functions converging to $f$ QUESTION [9 upvotes]: Proposition Let $f$ be a bounded measurable function on $E$. Show that there are $\{\phi_n(x)\}$ and $\{\psi_n(x)\}$ - sequences of simple functions on $E$ such that $\{\phi_n(x)\}$ is increasing and $\{\psi_n(x)\}$ is decreasing and each of these converge to $f$ uniformly on $E$. By the simple approximation lemma I know that both of the mentioned sequences exists such that $\phi_n(x) \leq f \leq \psi_n(x)$ and $\psi_n(x)-\phi_n(x) < \frac{1}{n}$ for each $n \in \mathbb{N}$. Using uniform continuity: for some $\varepsilon > 0$ there exists an $n \geq N$ such that $|\phi_n(x)-f| < \frac{1}{n}$. Since simple functions take on a finite number of values, it is reasonable to take a $\max$. So I let $\phi(x)=\max\{\phi_n(x)\}$. I guess I'm not sure how to pull together the lemma and the definition of simple function. REPLY [4 votes]: For every nonnegative integer $n$, define $A_n:\mathbb R\to\mathbb R$ by $A_n(t)=k2^{-n}$ for every $t$ such that $k\leqslant2^nt\lt k+1$, for some integer $k$ (in other words, $2^nA_n(t)$ is the integer part of $2^nt$). Let $\phi_n=A_n(f)$ and $\psi_n=-A_n(-f)$. Then $\phi_n$ and $\psi_n$ are step functions, $\phi_n\leqslant f\leqslant \psi_n$, and $\psi_n-\phi_n\leqslant2^{-n}$ for every $n$. Furthermore, the sequence $(\phi_n)_n$ is nondecreasing and the sequence $(\psi_n)_n$ is nonincreasing.<|endoftext|> TITLE: Necessity and Sufficiency QUESTION [6 upvotes]: I'm learning to write mathematical proofs. When the statement to be proven is in the form "$p$ if and only if $q$", the proof is often broken into two parts: necessity and sufficiency. I wonder whether I should organize my proof like: Necessity: $p \Rightarrow q$ Sufficiency: $ q \Rightarrow p$ ... or vice versa? Since $p \Leftrightarrow q$ is is equivalent to $q \Leftrightarrow p$, does it really matter? Is there any accepted practise to put $p \Rightarrow q$ in necessity or sufficiency, depending on the order in which the statements are presented? REPLY [7 votes]: Strictly speaking, there is no difference, but it is common to put the "subject" first. An example will make more sense. A subset of $\mathbb{R}^n$ (with the usual topology) is compact if and only if it is closed and bounded. vs. A subset of $\mathbb{R}^n$ (with the usual topology) is closed and bounded if and only if it is compact. They are both the same statement, but the purpose of the theorem is to characterize compactness, not to characterize (closed and bounded)-ness. For this reason, it is more pleasing (I think) to mention compactness first and also to use phrases like "necessary for compactness" and "sufficient for compactness".<|endoftext|> TITLE: Unique solution of system of differential equation QUESTION [5 upvotes]: Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be a continuous function and $g:\mathbb{R}\rightarrow\mathbb{R}$ be a Lipschitz function. 
Would you help me to prove that the system of differential equations $$ x'=g(x)$$ $$y'=f(x)y $$ with initial values $x(t_0)=x_0$ and $y(t_0)=y_0$ has a unique solution? Could I prove the uniqueness of the solution of $x'=g(x)$, $x(t_0)=x_0$ by Gronwall's inequality first and then use the result to prove the second? REPLY [5 votes]: Your reasoning is correct. Since $g$ is Lipschitz and the first equation of the system involves only $x$, there is a unique solution $x(t)$ such that $x(t_0)=x_0$. The second equation becomes $$ y'=f(x(t))\,y,\quad y(t_0)=y_0. $$ It is a linear equation and has a unique solution, given by $$ y(t)=y_0\,e^{\int_{t_0}^t f(x(s))\,ds}. $$<|endoftext|> TITLE: Transcendental extensions of $\mathbb{Q}$ containing algebraic elements. QUESTION [5 upvotes]: Suppose that $v$ is transcendental over $\mathbb{Q}$ and $a,b\notin\mathbb{Q}$ are algebraic over $\mathbb{Q}$. When does $\mathbb{Q}(v,av+b)$ contain an element algebraic over $\mathbb{Q}$ but not in $\mathbb{Q}$? Is it possible that the answer is always? REPLY [7 votes]: Yes, the answer is "always". (1) Let $f\in\mathbb{Q}[x]$ be an irreducible polynomial, let $F$ be some extension field of $\mathbb{Q}$ and assume that $f$ becomes reducible over $F$. Then $F$ contains an element $a\not\in\mathbb{Q}$ that is algebraic over $\mathbb{Q}$. Proof: the coefficients of the irreducible factors of $f$ over $F$ are algebraic over $\mathbb{Q}$ and at least one of them is not in $\mathbb{Q}$. (2) The rational function field $\mathbb{Q}(v)$ contains no elements $a\not\in\mathbb{Q}$ algebraic over $\mathbb{Q}$. This is well-known and has already been discussed on this site several times. (3) Let $a,b\not\in\mathbb{Q}$ be algebraic over $\mathbb{Q}$ and consider the field $F:=\mathbb{Q}(v,av+b)$, where $v$ is transcendental over $\mathbb{Q}$. Then $F$ contains elements $c\not\in\mathbb{Q}$ algebraic over $\mathbb{Q}$. Proof: let $f\in\mathbb{Q}[x]$ be the minimal polynomial of a primitive element of the algebraic extension $\mathbb{Q}\subseteq\mathbb{Q}(a,b)=:K$. Then by (2) $f$ remains irreducible over $\mathbb{Q}(v)$. Hence $[K:\mathbb{Q}]=[K(v):\mathbb{Q}(v)]$. By construction we have $FK=K(v)$, since $a=v^{-1}(av), b=v^{-1}(bv)$ in $FK$. Moreover $[FK:F]=[K(v):F]\leq [K(v):\mathbb{Q}(v)]$. Assume now that $F$ contains no $c\not\in\mathbb{Q}$ algebraic over $\mathbb{Q}$. Then by (1) $[FK:F]=[K(v):\mathbb{Q}(v)]$. Since the degree of field extensions is multiplicative, we have the equation $[FK:\mathbb{Q}(v)]=[FK:F][F:\mathbb{Q}(v)]=[FK:K(v)][K(v):\mathbb{Q}(v)]$, from which we get $[F:\mathbb{Q}(v)]=[FK:K(v)]=[K(v):K(v)]=1$, hence the contradiction $a,b\in\mathbb{Q}$.<|endoftext|> TITLE: $n$-sheeted branched covering QUESTION [5 upvotes]: From Michael Artin's Algebra: let $f(x,y)$ be an irreducible polynomial in $\mathbb{C}[x,y]$ which has degree $ n>0$ in the variable $y$. The Riemann surface of $f(x,y)$ is an $n$-sheeted branched covering of the plane. The description in the textbook of the $n$-sheeted branched covering is too complex for me to understand. Can someone explain this corollary in a less abstract way to me, and give me some hints for understanding these concepts? REPLY [14 votes]: I assume that by "the Riemann surface of $f$" you mean the set $V = \{(x,y)\in \mathbb{C}^2 : f(x,y) = 0\}$. This set $V$ is not exactly a Riemann surface, since it will probably have some singularities, but it will be a one (complex) dimensional analytic space. Let $\pi\colon V\to \mathbb{C}$ be the projection $\pi(x,y) = x$.
This will be the $n$-fold branched covering map. Intuitively, you want to think of this as saying that the preimage $\pi^{-1}(x_0)$ of most points $x_0\in \mathbb{C}$ consists of $n$ points in $V$. Suppose I write out $f$ as $$ f(x,y) = p_n(x)y^n + p_{n-1}(x)y^{n-1} + \cdots + p_0(x),$$ where the $p_i(x)$ are polynomials in $x$, and, by assumption, $p_n(x)\not\equiv 0$. Then, if I've fixed $x_0\in \mathbb{C}$, the preimage $\pi^{-1}(x_0)$ consists of those points $(x_0,y)\in \mathbb{C}^2$ for which $p_n(x_0)y^n + \cdots + p_0(x_0) = 0$. If $p_n(x_0)\neq 0$, the degree of this polynomial is $n$, and hence there are exactly $n$ such values of $y$, at least when counted with multiplicities. For most values of $x_0$, there will even be exactly $n$ such values of $y$. In fact, you can show the following: There exists a finite set $S\subset\mathbb{C}$ such that for every $x_0\in \mathbb{C}\smallsetminus S$, the preimage $\pi^{-1}(x_0)$ consists of exactly $n$ points, and moreover, the map $\pi\colon V\smallsetminus \pi^{-1}(S)\to \mathbb{C}\smallsetminus S$ is a degree $n$ covering map. Intuitively, the term "branched cover" refers to a map that is an actual covering map except over some bad points that will have fewer than $n$ preimage points. It's easier to picture than to define, so here's a picture I've taken from wikipedia. This is a schematic picture of a degree $3$ branched cover. You should think of $V$ as being $X$, and $\mathbb{C}$ as being $Y$, and the map $\pi$ as being the map $f$ from the picture (the projection downward). This map is an actual degree $3$ covering map except at the points marked with blue dots; these would be the set $S$. I should also say something about the irreducibility assumption on your polynomial $f$. If you didn't have this assumption, it would be possible for, say, $x$ to divide $f(x,y)$. If this were the case, then $\pi^{-1}(0)$ would be the set $\{(0,y) : y\in \mathbb{C}\}$, an infinite set. If this were to happen, then $\pi\colon V\to \mathbb{C}$ would not be considered a branched cover: the preimage $\pi^{-1}(x_0)$ of a point $x_0\in \mathbb{C}$ is allowed to have fewer than $n$ elements, but not more. The irreducibility hypothesis is there to rule out things like this. I hope that sheds some light on your question.<|endoftext|> TITLE: $\ell_p$ is Hilbert if and only if $p=2$ QUESTION [28 upvotes]: Can anybody please help me to prove this: Let $p$ be greater than or equal to $1$. Show that for the space $\ell_p=\{(u_n):\sum_{n=1}^\infty |u_n|^p<\infty\}$ of all $p$-summable sequences (with norm $||u||_p=\sqrt[p]{\sum_{n=1}^\infty |u_n|^p}\ )$, there is an inner product $\langle\,\_\,|\,\_\,\rangle$ s.t. $||u||^2=\langle u\,|\,u\rangle$ if and only if $p=2$. REPLY [35 votes]: Assuming we are working with the usual norm (as OP said in comments), suppose $\ell_{p}$ is a Hilbert space. Then it must satisfy, for all $u,v$: $$2\|u\|_{p}^2 + 2\|v\|_{p}^2 = \|u + v\|_{p}^2 + \|u - v\|_{p}^2.$$ As suggested by martini, take $u=e_{1}=(1,0,...,0,...)$ and $v=e_{2}=(0,1,0,...,0,...)$. Hence, by the last equality, we have $$4=2^{\frac{2}{p}}+2^{\frac{2}{p}}$$ Now you can solve the last equation and verify that $p=2$. On the other hand, if $p=2$, you can easily check that $\ell_{2}$ is a Hilbert space.<|endoftext|> TITLE: Is $f(x) = x/x$ the same as $f(x) = 1$? QUESTION [8 upvotes]: Let $f(x) = \frac{x}{x}$. Is it correct to say that $f(x) \ne 1$, since $f(x)$ has a discontinuity at $x=0$? REPLY [5 votes]: A given function always comes with a domain.
For two functions to be equal (in a formal sense) the domains need to be the same. For example consider the functions: $$ f: \mathbb{R} \to [0,\infty) $$ given by $$ f(x) = x^2 $$ and the function $$ g: [0,\infty) \to [0,\infty) $$ given by $$ g(x) = x^2. $$ These two functions are not the same even though they are given by the same expression. Note for example that $g$ is one-to-one (injective), while $f$ is not. (If the functions were the same, then one would think that they should have exactly the same properties.) Now, often we don't specify the domain. We just talk about the function defined by (say) $h(x) = \frac{1}{x}$. In such a case we usually assume that the domain is the set of all $x$ ]for which the expression makes sense. Hence the domain of $h$ is $\mathbb{R}\setminus \{0\}$. Now then, when you write $f(x) = \frac{x}{x}$, then I would say that the domain is $\mathbb{R}\setminus\{0\}$, because I cannot evaluate the expression at $0$. The domain of the function $g(x) = 1$ is all real numbers. So we would not say that the functions are equal. Note also that to cancel $x$ in the expression $\frac{x}{x}$, $x$ would have to be non-zero. Edit: About the discontinuity argument, I notice that in the comments to your questions people are saying that "discontinuous" doesn't mean "not continuous". I am not sure that this is a universally agreed upon notion. According to Wikipedia: "If a function is not continuous at a point in its domain, one says that it has a discontinuity there." So Wikipedia seems to agree with the comments. According to Wolfram MathWorld: discontinuous means not continuous. In any case, you are right in noting that the functions are not equal because the function $f(x) = \frac{x}{x}$ is not continuous at $0$.<|endoftext|> TITLE: Are Sobolev Spaces Uniformly Convex? QUESTION [6 upvotes]: Let $W^{m,p}(\Omega)$ be a Sobolev space, where $\Omega\subset\mathbb{R}^{n}$ is a open set. So the question is: For what $p$ and $m$ the space $W^{m,p}(\Omega)$ is Uniformly Convex. Some reference or the answer would be appreciated. REPLY [9 votes]: Note, that the map ($N := |\{\alpha \in \mathbb N_{\ge 0}^n \mid |\alpha|\le m\}|$) \begin{align*} T\colon W^{m,p}(\Omega) &\to L^p(\Omega, \mathbb R^{N})\\ x &\mapsto (D^\alpha x)_{|\alpha| \le m} \end{align*} is a closed and isometric embedding. As $L^p(\Omega, \mathbb R^N)$ is uniformly convex for $1 < p < \infty$, so are its closed subspaces, and hence as $W^{m,p}(\Omega)$ is isometric to its image under $T$, so is $W^{m,p}(\Omega)$ for these $p$. On the other side, uniformly convex spaces are reflexive and $W^{m,1}(\Omega)$ and $W^{m,\infty}(\Omega)$ aren't.<|endoftext|> TITLE: Property of sum $\sum_{k=1}^{+\infty}\frac{(2k+1)^{4n+1}}{1+\exp{((2k+1)\pi)}}$ QUESTION [15 upvotes]: Is it true that for all $n\in\mathbb{N}$, \begin{align}f(n)=\sum_{k=1}^{+\infty}\frac{(2k+1)^{4n+1}}{1+\exp{((2k+1)\pi)}}\end{align} is always rational. I have calculated via Mathematica, which says \begin{align}f(0)=\frac{1}{24},f(1)=\frac{31}{504},f(2)=\frac{511}{264},f(3)=\frac{8191}{24}\end{align} But I couldn't find the pattern or formula behind these numbers, Thanks for your help! REPLY [18 votes]: Here is an approach using Mellin transforms to enrich the collection of solutions. We seek to evaluate (assuming that we start the original series at $k=0$ as observed above) $$f(n) = \sum_{k\ge 1} \frac{(2k-1)^{4n+1}}{1+\exp((2k-1)\pi)},$$ this one started at $k=1$ which corresponds to $k=0$ in the problem statement. 
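(Before the derivation, a quick numerical sanity check; this snippet is my own addition and assumes the mpmath library is available. It simply sums the series directly and compares against the rational values quoted in the question.)

from mpmath import mp, exp, pi

mp.dps = 50   # work with 50 decimal digits

def f(n, terms=200):
    # direct partial sum of (2k-1)^(4n+1) / (1 + exp((2k-1) pi))
    return sum((2*k - 1)**(4*n + 1) / (1 + exp((2*k - 1)*pi))
               for k in range(1, terms + 1))

print(f(0))   # approximately 0.041666... = 1/24
print(f(1))   # approximately 0.061507... = 31/504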
There is a harmonic sum here which we now evaluate by Mellin transform inversion. Introduce $$S(x) = \sum_{k\ge 1} \frac{((2k-1)x)^{4n+1}}{1+\exp((2k-1)\pi x)}$$ so that we are interested in $S(1).$ Recall the harmonic sum identity $$\mathfrak{M}\left(\sum_{k\ge 1} \lambda_k g(\mu_k x);s\right) = \left(\sum_{k\ge 1} \frac{\lambda_k}{\mu_k^s} \right) g^*(s)$$ where $g^*(s)$ is the Mellin transform of $g(x).$ In the present case we have that $$\lambda_k = 1, \quad \mu_k = 2k-1 \quad \text{and} \quad g(x) = \frac{1}{1+\exp(\pi x)}.$$ We need the Mellin transform $g^*(s)$ of $g(x)$ which is $$\int_0^\infty \frac{1}{1+\exp(\pi x)} x^{s-1} dx = \int_0^\infty \frac{\exp(-\pi x)}{1+\exp(-\pi x)} x^{s-1} dx \\= \int_0^\infty \left(\sum_{q\ge 1} (-1)^{q-1} e^{-\pi q x} \right) x^{s-1} dx = \sum_{q\ge 1} (-1)^{q-1} \int_0^\infty e^{-\pi q x} x^{s-1} dx \\= \frac{1}{\pi^s} \Gamma(s) \sum_{q\ge 1} \frac{(-1)^{q-1}}{q^s} = \frac{1}{\pi^s} \left(1 - \frac{2}{2^s}\right)\Gamma(s) \zeta(s).$$ The series that we have used here converges absolutely for $x$ in the integration limits. It follows that the Mellin transform $Q(s)$ of the harmonic sum $S(x)$ is given by $$Q(s) = \frac{1}{\pi^{s+4n+1}} \left(1 - \frac{2}{2^{s+4n+1}}\right)\Gamma(s+4n+1) \zeta(s+4n+1) \left(1 - \frac{1}{2^s} \right) \zeta(s) \\ \text{because}\quad \sum_{k\ge 1} \frac{\lambda_k}{\mu_k^s} = \sum_{k\ge 1} \frac{1}{(2k-1)^s} = \left(1 - \frac{1}{2^s} \right) \zeta(s)$$ for $\Re(s) > 1.$ To see this note that the base function of the sum is $$\frac{x^{4n+1}}{1+\exp(\pi x)} .$$ The Mellin inversion integral for $Q(s)$ is $$\frac{1}{2\pi i} \int_{3/2-i\infty}^{3/2+i\infty} Q(s)/x^s ds$$ which we evaluate by shifting it to the left for an expansion about zero. As it turns out we only need the contribution from the pole at $s=1.$ We have that $$\mathrm{Res}\left(Q(s)/x^s; s=1\right) = \frac{1}{\pi^{4n+2}} \left(1-\frac{2}{2^{4n+2}}\right) \times (4n+1)! \times \zeta(4n+2)\times \frac{1}{2} \times \frac{1}{x} \\= \frac{1}{\pi^{4n+2}} \frac{2^{4n+2}-2}{2^{4n+2}} \times (4n+1)! 
\times \frac{(-1)^{(2n+1)+1} B_{4n+2} (2\pi)^{4n+2}}{2\times (4n+2)!} \times \frac{1}{2} \times \frac{1}{x} \\ = (2^{4n+1}-1)\frac{B_{4n+2}}{8n+4} \times \frac{1}{x}.$$ This almost concludes the evaluation the result being the residue we just computed because we can show that $Q(s)/x^s$ with $x=1$ is odd on the line $\Re(s) = -2n$ so that it vanishes and if we stop after shifting to that line the Mellin inversion integral that we started with is equal to the contribution from the pole at $s=1.$ To see this put $s=-2n+it$ to obtain $$\frac{1}{\pi^{2n+1+it}} \left(1 - \frac{2}{2^{2n+1+it}}\right)\Gamma(2n+1+it) \zeta(2n+1+it) \left(1 - \frac{1}{2^{-2n+it}} \right) \zeta(-2n+it)$$ which is $$\frac{1}{\pi^{2n+1+it}} \frac{2^{2n+1+it}-2}{2^{2n+1+it}} \Gamma(2n+1+it) \zeta(2n+1+it) \frac{2^{-2n+it}-1}{2^{-2n+it}} \zeta(-2n+it)$$ Now use the functional equation of the Riemann Zeta function in the following form: $$\zeta(1-s) = \frac{2}{2^s\pi^s} \cos\left(\frac{\pi s}{2}\right) \Gamma(s) \zeta(s)$$ to transform the above into $$(2^{2n+1+it}-2) \times \frac{\zeta(1-(2n+1+it))}{2\cos(\pi(2n+1+it)/2)} \frac{2^{-2n+it}-1}{2^{-2n+it}} \zeta(-2n+it)$$ which is $$(2^{2n+it}-1) \times \frac{\zeta(-2n-it))}{\cos(\pi it + \pi(2n+1)/2)} (1-2^{2n-it}) \zeta(-2n+it)$$ which we finally rewrite as $$(2^{2n+it}-1) (1-2^{2n-it}) \frac{(-1)^{n+1}}{\sin(\pi it/2)} \zeta(-2n+it)\zeta(-2n-it)$$ or $$(1-2^{2n+it}) (1-2^{2n-it}) \frac{(-1)^n}{\sin(\pi it/2)} \zeta(-2n+it)\zeta(-2n-it).$$ Among this product of five terms the first two taken together are even as are the last two zeta function terms. The middle sine term is odd in $t$, so the entire product is odd in $t$ and we are done, having obtained the answer $$(2^{4n+1}-1)\frac{B_{4n+2}}{8n+4}.$$<|endoftext|> TITLE: Does $\pi$ contain all possible number combinations? QUESTION [728 upvotes]: $\pi$ Pi Pi is an infinite, nonrepeating $($sic$)$ decimal - meaning that every possible number combination exists somewhere in pi. Converted into ASCII text, somewhere in that infinite string of digits is the name of every person you will ever love, the date, time and manner of your death, and the answers to all the great questions of the universe. Is this true? Does it make any sense ? REPLY [17 votes]: That image contains a number of factual errors, but the most important one is the misleading assertion that irrationality implies disjunctiveness. One can easily construct an non-disjunctive, irrational number. Let $ r = \sum\limits_{n = 0}^\infty 2^{-n} \begin{cases} 1 & \text{if } 2 | n \\ s_n & \text{else} \end{cases} $ for any non-periodic sequence $ s_n \in \{0,1\} $. It is not known whether $ \pi $ is, in fact, disjunctive (or even normal).<|endoftext|> TITLE: Plancherel formula for compact groups from Peter-Weyl Theorem QUESTION [12 upvotes]: I'm trying to derive the following Plancherel formula: $$\|f\|^{2}=\sum_{\xi\in\widehat{G}}{\dim(V_{\xi})\|\widehat{f}(\xi)\|^{2}}$$ from the statement of the Peter-Weyl Theorem as given by Terence Tao here: Let $G$ be a compact group. Then the regular representation $\tau\colon G\rightarrow U(L^{2}(G))$ is isomorphic to the direct sum of irreducible representations. In fact, one has $\tau\cong\bigoplus_{\xi\in\widehat{G}}{\rho_{\xi}^{\bigoplus \dim(V_{\xi})}}$, where $(\rho_{\xi})_{\xi\in\widehat{G}}$ is an enumeration of the irreducible finite-dimensional unitary representations $\rho_{\xi}\colon G\rightarrow U(V_{\xi})$, up to isomorphism. 
I managed to prove Fourier inversion from this without any difficulty at all, but I'm really struggling to see how the Plancherel formula follows from it. I'm pretty sure that the fact that we have $\|\operatorname{Proj}_{\xi}f\|=\dim(V_{\xi})^{1/2}\|\widehat{f}(\xi)\|$ is the main ingredient of the proof (here, I've used $\operatorname{Proj}_{\xi}f$ to denote the orthogonal projection of $f$ to $L^{2}(G)_{\xi}$), and most of my attempts have boiled down to trying to show the equality by proving the inequality in both directions - showing $\sum_{\xi\in\widehat{G}}{\dim(V_{\xi})\|\widehat{f}(\xi)\|^{2}}\le \|f\|^{2}$ is rather trivial, although if this is the correct approach, I simply can't make the inequalities work in the other direction, despite having tried pretty much every possible way of looking at it - my main issue is that I always end up at the point where I could prove the result if I could prove that a square of a sum of norms is bounded by the sum of the squares of norms, which clearly isn't true in general (i.e. if all of the norms are 1 and the sum is nontrivial), and I can see no reason why they should be equal in this case. Any help or suggestions would be much appreciated! Edit: Alternatively, I can see how this follows from Tao's observation that we may write $L^{2}(G)\cong\bigoplus_{\xi\in\widehat{G}}{\dim(V_{\xi})\cdot HS(V_{\xi})}$; from this it is immediate - $\|f\|^{2}=\langle f,f\rangle=\sum_{\xi\in\widehat{G}}{\dim(V_{\xi})\cdot\langle\widehat{f},\widehat{f}\rangle}=\sum_{\xi\in\widehat{G}}{\dim(V_{\xi})\|\widehat{f}(\xi)\|^{2}}$, so I'd be fine with an explanation of how one obtains this isomorphism (if it can be obtained by any means other than combining the statements of Fourier inversion and the Plancherel formula). REPLY [2 votes]: Answered in the comments: ``I don't have the book at hand but I once saw this proof in Folland's A Course in Abstract Harmonic Analysis. It should be in chapter 4. ''<|endoftext|> TITLE: $AB \neq 0$ but $BA=0$ QUESTION [10 upvotes]: Do there exist two matrices or objects such that $AB \neq 0$ but $BA=0$? Another way to ask this question is whether there exist objects or matrices $A$ and $B$ such that... $[A,B]=AB$ where $[ \, , \, ]$ is the commutator $[A,B]=AB-BA.$ If such matrices do not exist, what does that imply about the algebra that the elements are in? REPLY [4 votes]: As the other answers show: there are uncountably many pairs of matrices $(A,B)$ such that $AB = 0$ while $BA \neq 0$. If you think of square matrices as linear transformations then it is obvious why this should be so: in $AB$, we can think of the product as recording the image of each of the columns of $B$ under the linear transformation $A$, likewise with $BA$. A nice question to ask is the following: Given a fixed matrix $A \in \text{Mat}_n\mathbb{R},$ what is the following: $$\widetilde{A} := \{ X \in \text{Mat}_n\mathbb{R} : AX = 0 \ \wedge \ XA \neq 0\} \, ?$$ Clearly, if $X \in \widetilde{A}$ then $\lambda X \in \widetilde{A}$ for all $\lambda \neq 0.$ Interestingly, this space is not a vector space because the zero matrix $0 \notin \widetilde{A}$ and $X,Y \in \widetilde{A}$ does not imply that $X+Y \in \widetilde{A}.$<|endoftext|> TITLE: Universal cover via paths vs. ad hoc constructions QUESTION [24 upvotes]: I'm looking for some intuition regarding universal covers of topological spaces.
$\textbf{Setup:}$ For a topological space $X$ with sufficient adjectives we can construct a/the simply connected covering space of it by looking at equivalence classes of paths at a given base point. We then can put a topology in the standard way done by Hatcher - an open set around an equivalence class of paths, say $[\gamma]$, is the set of $[\gamma\cdot\eta]$ where $\eta$ is a path starting at $\gamma(1)$ contained in $U$ open in $X$. Here are my questions: Q: I find this topological space, as constructed above, non-intuitive. Certainly I don't know how I would manipulate it and make topological arguments in it. What is the 'right' way of thinking about the topology here? Or is this construction useful solely for proving the existence of simply connected covers? Q: Often times it is tractable to construct a simply connected covering by ad-hoc methods (fancy guessing). The projective plane, torus, etc. all spring to mind. By universality I know that the covering space obtained by any ad-hoc method is $\textit{the}$ universal covering space obtained by the above method, so there is an isomorphism of these two. Is there a standard way to see this isomorphism? Being really concrete, say in the cases of $\mathbb RP^2$, or $S^1\times S^1$. In simple terms: how can I 'see' what the universal cover looks like from the general construction? REPLY [32 votes]: Suppose $p:\overline{C}\to C$ is a universal covering. By definition, around every point in $C$ is an open set that lifts up to $\overline{C}$. So, locally, $\overline{C}$ looks just like $C$. Suppose one wanted to cut up $C$ into small (contractible) patches and then stitch them together again to form $\overline{C}$ - the problem is that $\overline{C}$ is to be simply connected, so if (say) we started stitching patches around a nontrivial loop in $C$, then when we wrap back around to the basepoint we can't stitch that last patch back to the original patch; instead we have to create a copy of the original patch and continue on from there. Consider the space $C=\Bbb C\setminus\{0\}$. If one takes a counterclockwise loop from $-1$ around $0$ back to itself, then the last patch cannot be stitched to the first, so we should make a copy of the original patch to stitch it to. In the picture below, we've literally lifted the copy above the original: [figure: the patch of $C$ around $-1$ with a lifted copy stacked above it] If we continue this process indefinitely, then there will be lots of copies of pieces of $C$ that are being stitched together. Given a point in $C$ in a patch, there will be many copies of that patch in our quilt, and so many lifts of that point - what allows us to distinguish between lifts of the same point is how we got to it from the original basepoint. Thus, we can interpret points in $\overline{C}$ as points in the original space $C$ but with a "memory" of how we got there from a basepoint. This inspires us to formalize our construction by letting elements of $\overline{C}$ be paths in $C$, modulo endpoint-fixing homotopy. Points in $\overline{C}$ are specified by points in $C$ with a memory of how we got there from the basepoint, so if we got to $x\in C$ via a path $\gamma$ in $C$ and $U$ is any basic nbhd of $x\in C$, then the lift $\overline{U}$ of that nbhd is comprised of points $\overline{u}$, and to specify these $\overline{u}$ we must say which points of $C$ they are (done: they lie above $U$) and how we got to them. We got to these points in $\overline{U}$ by first travelling along $\gamma$ from the basepoint to $x$ and then wiggling around within $U$ itself.
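(To connect this picture with a familiar ad hoc construction - this remark is my own addition, not part of the original answer: for $C=\mathbb{C}\setminus\{0\}$ the universal cover can be taken to be $\exp\colon\mathbb{C}\to\mathbb{C}\setminus\{0\}$, and the stacked copies of the patch around $-1$ in the picture are precisely the sheets of $\exp$ lying over that patch. The class of the path that starts at $-1$ and winds counterclockwise around $0$ a total of $n$ times before returning to $-1$ corresponds to the point $i\pi+2\pi i n$, one point in each copy.)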
As for your other question, try lifting the paths. Say that $D\to C$ is a covering, where $D$ is a familiar space you know well, and in particular you know $D$ is simply connected. Our construction of $\overline{C}$ is comprised of paths emanating from (say) $x\in C$. To see what the corresponding point of $D$ is, just lift the path from $C$ to $D$ and look at its endpoint! This is the isomorphism.<|endoftext|> TITLE: Prove that a convex function on $\mathbb{R}^n$ is continuous QUESTION [6 upvotes]: Let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be a convex function on $\mathbb{R}^n$. How to prove that $f$ is continuous? REPLY [6 votes]: $\def\R{\mathbb R}\def\conv{\operatorname{conv}}$Note first that a convex function is locally bounded: Let $x_0 \in \R^n$, and define $U := \conv\{x_0 \pm e_i \mid 1 \le i \le n\}$; then $U$ is a neighbourhood of $x_0$. Given $x \in U$, there are $\lambda_{i,\pm}\ge 0$ with $\sum_{i=1}^n \lambda_{i,+} + \lambda_{i,-} = 1$ and $x = \sum_i \lambda_{i,+}(x_0+e_i) + \lambda_{i,-}(x_0 -e_i)$, so convexity of $f$ gives \begin{align*} f(x) &\le \sum_i \lambda_{i,+}f(x_0 + e_i) + \lambda_{i,-}f(x_0 - e_i)\\ &\le \max\{f(x_0 \pm e_i)\mid 1 \le i \le n\}. \end{align*} Hence $f$ is locally bounded above. Now let $x \in U$, then $2x_0 - x = x_0 - (x-x_0)\in U$ and \[ f(x_0) \le \frac 12 f(x) + \frac 12 f(2x_0 - x) \iff f(x) \ge 2f(x_0) - f(2x_0 - x) \] Hence, if $M$ denotes the upper bound of $f$ on $U$, $2f(x_0) - M$ is a lower bound. So $f$ is locally bounded; now we will show that it is locally Lipschitz (and hence continuous): Let $x_0 \in \mathbb R^n$. By the above, there is an $\epsilon > 0$ such that $|f| \le M$ on $U_{\epsilon}(x_0)$. We will show that $f$ is Lipschitz on $U_{\epsilon/2}(x_0)$. Suppose not; then there would be $x_1, x_2 \in U_{\epsilon/2}(x_0)$ with \[ \frac{|f(x_2) - f(x_1)|}{\|x_2 - x_1\|} > \frac {4M}\epsilon \] Swapping $x_1$ and $x_2$ if necessary, we may assume $f(x_2) \ge f(x_1)$, so the absolute value may be dropped. Let $x_3 := x_2 + \frac \epsilon2\frac{x_2 - x_1}{\|x_2 - x_1\|}$, then $x_3 \in U_\epsilon(x_0)$ and $x_1$, $x_2$, $x_3$ are on one line, and $\|x_2 -x_3\| = \frac \epsilon 2$. Hence \[ \frac{f(x_3) - f(x_2)}{\|x_3 - x_2\|} \ge \frac{f(x_2) - f(x_1)}{\|x_2 - x_1\|} > \frac {4M}{\epsilon} \] So $f(x_3) - f(x_2) > 2M$, contradicting $|f| \le M$.<|endoftext|> TITLE: Convergence in measure implies convergence in $L^p$ under the hypothesis of domination QUESTION [17 upvotes]: Given a sequence $f_n \in L^p$ and $g \in L^p$, with $|f_n| \leq g$, I am trying to show that $f_n \to f$ in measure implies $f_n \to f$ in $L^p$. Firstly, I know that if $f_n \to f$ in measure, then there is a subsequence $f_{n_i}$ such that $f_{n_i} \to f$ almost everywhere. Then I can use the dominated convergence theorem to show that $\lVert f_{n_i} - f\rVert_p \to 0$. Now I am trying to show that $\lVert f_n - f\rVert_p \to 0$. My idea is to assume that $\lVert f_n - f\rVert_p \nrightarrow 0$ and then show that this contradicts the fact that $\lVert f_{n_i} - f\rVert_p \to 0$, but I am not sure of the details. Can anyone help me finish the argument? REPLY [17 votes]: If $\lVert f_n - f\rVert_p \nrightarrow 0$, there exists a subsequence $ f_{n_i} $ such that $ \| f_{n_i} - f\| _p \ge \varepsilon $ for some $ \varepsilon >0 $. But $ f_{n_i} $ still converges in measure to $ f $. So, again, there is a subsequence $ f_{n_{i_j}} $ of the $ f_{n_i} $ such that $ \|f_{n_{i_j}} - f\|_p \rightarrow 0$, and this is a contradiction.<|endoftext|> TITLE: Why isn't there interest in nontrivial, nondiscrete topologies on finite groups?
QUESTION [26 upvotes]: A topology on a group is required to be compatible with the group structure (multiplication must be a continuous map $G\times G\to G$ and inversion must be continuous). I've only ever seen the discrete topology referenced on finite groups, however (as e.g. subgroups of continuous matrix groups). Why isn't there any interest in nontrivial, nondiscrete topologies on finite groups? Is it because they are easy to classify, their study reduces to other areas of mathematics like graph theory, or that they haven't seen any external purpose elsewhere? Surely these topologies exist. For a subgroup $H\le G$ we can set all cosets of $H$ to be a base. We can speak of generating (group-compatible) topologies from subsets $X\subseteq G$; let the topology $\tau_G$ be the collection of all left and right translates of a given collection of subsets, as well as their unions and intersections (which will obviously be finite), and then their translates, etc. (this process will surely terminate because it acts like a monotone function being applied to the double power set $\mathcal{P}(\mathcal{P}(G))$.) Then we can ask for sufficient and necessary conditions for a family of subsets to generate the discrete topology. Other potentially interesting questions and answers: for a given family or handful of finite groups, we can ask for classification of their admissible topologies. So I for one think there are potentially interesting questions about nontrivial nondiscrete finite group topologies, but there does not seem to be any theory about it $-$ is there something I'm missing? REPLY [19 votes]: Evidently I was missing some things. All is well now, though. In response to Qiaochu's answer I wanted to flesh out the details. Let $(G,\tau_G)$ be a finite topological group, $1:=\{1_G\}$ the trivial subgroup, $\mathrm{cl}(\cdot)$ the topological closure operator, and $\mathrm{ncr}(\cdot)$ the group-theoretic normal core operator. So $\mathrm{cl}(A)=\bigcap \{F~\textrm{closed}:A\subseteq F\}$ and $\mathrm{ncr}(A)=\bigcap_{g\in G}g^{-1}Ag$. Lemma 1. Let $(X,\tau_X)$ be a topological space. The pairs $A,\,\mathrm{cl}_X(A)$ determine $\tau_X$ and vice-versa. Proof. The vice-versa direction is clear by $\mathrm{cl}$'s intersection formula. Conversely, a subset $A\subseteq X$ is closed if and only if $\mathrm{cl}(A)=A$ (if $A$ is closed then $A\subseteq \bigcap F\subseteq A\implies \mathrm{cl}(A)=A$, and conversely if equality holds then $A$ is an intersection of closed sets hence is closed). Hence we can determine the open sets of $X$ precisely in terms of closure: $A\subseteq X$ is open iff $X\setminus A=\mathrm{cl}(X\setminus A)$. Lemma 2. The topology $\tau_G$ on $G$ is determined by $S:=\mathrm{cl}(1)$. Proof. Let $X\subseteq G$ be a subset. Any closed set $F$ containing $X$ contains any singleton subset of $X$, so also contains the closure $\mathrm{cl}(x)$ for all $x\in X$, hence will contain the finite union $Y:=\cup_{x\in X}\mathrm{cl}(x)$; since the union is finite, $Y$ is also closed. Note $X\subseteq Y$ since $x\in\mathrm{cl}(x)$ always. So we conclude that $\mathrm{cl}(X)=Y$ because $Y$ exhibits the universal property of the closure of $X$. 
Moreover, $$\begin{array}{cl} \mathrm{cl}(x) & = \bigcap \{F~\textrm{closed}:x\in F\} \\ & = \bigcap x\{x^{-1}F~\textrm{closed}:1_G\in x^{-1}F\} \\ & =x\bigcap\{E~\textrm{closed}:1_G\in E\} \\ & = x\,\mathrm{cl}(1)=xS \end{array}$$ $$\therefore~~ \mathrm{cl}(X)=\bigcup_{x\in X}\mathrm{cl}(x)=\bigcup_{x\in X}xS=XS.$$ All closures are therefore uniquely determined by $S$, and hence so with $\tau_G$ by Lemma 1. Lemma 3. $S=\mathrm{cl}(1)\trianglelefteq G$ is a normal subgroup. Proof of normality. Since left and right translation are continuous and $S$ is closed containing $1_G$, any conjugate $g^{-1}Sg$ must be closed and contain $1_G$ hence $S\subseteq g^{-1}Sg=S^g$ for all $g$. Therefore $$S\subseteq \bigcap_{g\in G}S^g=\mathrm{ncr}(S) \subseteq S\implies S=\mathrm{ncr}(S).$$ Since the group-theoretic normal core is normal, $S$ is a normal subset of $G$. Proof of subgroup. Note that $S$ is a closed set containing $x$ for any $x\in S$, hence $xS\subseteq\mathrm{cl}(x)\subseteq S$, and since $x^{-1}S\subseteq S\implies S\subseteq xS$ by left-multiplication, $xS=S$ for all $x\in S$, which gives closure under multiplication as well as inverses (since $1_G\in S=xS\implies 1_G=xy$ for some $y\in S$). Theorem. The only nontrivial topologies on a finite group are lifted from discrete topologies on factor groups. That is, a topology on $G$ must have as base the coset space $G/N$ for a $N\trianglelefteq G$. Proof. Let $\tau_G$ be a topology on $G$ and let $N=S=\mathrm{cl}_G(1)$. A subset $X\subseteq G$ is open iff $G\setminus X$ is closed iff $G\setminus X=\mathrm{cl}(G\setminus X)=(G\setminus X)S=\cup_{y\in G\setminus X}yN$ is a union of left cosets of $N$ (and indeed since $N$ is closed and translation is continuous, any union of left cosets of $N$ is closed). But since cosets partition $G$, if $G\setminus X$ is a union of cosets, so is $X$. Hence $G/N$ is a base for $\tau_G$. Remark. Suppose $(G,\tau_G)$ is Hausdorff. Then for each nonidentity element $g\in G$, there are disjoint open sets $1_G\in U_g$ and $g\in V_g$. Then $1_G\in\cap_{g\ne 1_G}U_g$ is open and cannot contain a non-identity element hence $\{1_G\}$ is open, and subsequently by continuity any singleton and by union any subset is open, so $\tau_G=\mathcal{P}(G)$ is in fact the discrete topology.<|endoftext|> TITLE: Assuming $AB=I$ prove $BA=I$ QUESTION [9 upvotes]: Possible Duplicate: If $AB = I$ then $BA = I$ Most introductory linear algebra texts define the inverse of a square matrix $A$ as such: Inverse of $A$, if it exists, is a matrix $B$ such that $AB=BA=I$. That definition, in my opinion, is problematic. A few books (in my sample less than 20%) give a different definition: Inverse of $A$, if it exists, is a matrix $B$ such that $AB=I$. Then they go and prove that $BA=I$. Do you know of a proof other than defining inverse through determinants or through using rref? Is there a general setting in algebra under which $ab=e$ leads to $ba=e$ where $e$ is the identity? REPLY [21 votes]: Multiply both sides of $AB-I=0$ on the left by $B$ to get $$ (BA-I)B=0\tag{1} $$ Let $\{e_j\}$ be the standard basis for $\mathbb{R}^n$. Note that $\{Be_j\}$ are linearly independent: suppose that $$ \sum_{j=1}^n a_jBe_j=0\tag{2} $$ then, multiplying $(2)$ on the left by $A$ gives $$ \sum_{j=1}^n a_je_j=0\tag{3} $$ which implies that $a_j=0$ since $\{e_j\}$ is a basis. Thus, $\{Be_j\}$ is also a basis for $\mathbb{R}^n$. Multiplying $(1)$ on the right by $e_j$ yields $$ (BA-I)Be_j=0\tag{4} $$ for each basis vector $Be_j$. Therefore, $BA=I$. 
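Before turning to the infinite-dimensional failure below, here is a quick numerical sanity check of the finite-dimensional statement, as a minimal Python/NumPy sketch (the matrix size, random seed, and tolerances are arbitrary choices, not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))       # a random square matrix; almost surely invertible
B = np.linalg.solve(A, np.eye(n))     # B is built only as a right inverse: solves A @ B = I

assert np.allclose(A @ B, np.eye(n))  # AB = I by construction
assert np.allclose(B @ A, np.eye(n))  # ...and BA = I comes for free, as the proof predicts
```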
Failure in an Infinite Dimension Let $A$ and $B$ be operators on infinite sequences. $B$ shifts the sequence right by one, filling in the first element with $0$. $A$ shifts the sequence left, dropping the first element. $AB=I$, but $BA$ sets the first element to $0$. Arguments that assume $A^{-1}$ or $B^{-1}$ exists, and that make no reference to the finite dimensionality of the vector space, usually founder on this counterexample.<|endoftext|> TITLE: Landau Notation Properties QUESTION [6 upvotes]: I would like to take the limit $x\to 0$ and prove or disprove the following statements concerning Landau notation. (a) $O(x^{3/2})\subset o(x)\subset O(x^{1/2})$ (b) $(1+O(x))^2=1+O(x^2)$ I know the definition of the O-notation and also what it means, but I have no idea how to show these kinds of statements. I would appreciate it if somebody can help me. REPLY [2 votes]: The expressions "$f(x)=O(x)$" or "$f(x) = o(x)$" in Landau notation really mean that the function $f$ belongs to a class of functions, so strictly speaking it would be more correct to write $f \in O(x)$ and $f\in o(x)$. However, the traditional notation is the standard, so we all have to live with it. Proving these, you have to remember that you are proving set inclusions or equalities, so you have to proceed like for those. (a) If $f(x) = O(x^{3/2})$, then $|f(x)| \le M|x|^{3/2} = (M|x|^{1/2}) |x|$ near $0$, and since $M|x|^{1/2} \to 0$ as $x\to 0$, this shows $f \in o(x)$. If you want to show strict inclusion, see that $x^{4/3} = o(x)$, but $x^{4/3} \neq O(x^{3/2})$. Now if $f(x) = o(x)$, then $|f(x)| \le \epsilon(x) |x| = (\epsilon(x) |x|^{1/2}) |x|^{1/2}$ with $\lim\limits_{x \to 0} \epsilon(x) = 0$, so $\lim\limits_{x\to 0} \epsilon(x) |x|^{1/2} = 0$, implying that $f(x) = o(x^{1/2})$. Since $o(x^{1/2}) \subset O(x^{1/2})$ (every convergent function is locally bounded), this implies that $f(x) \in O(x^{1/2})$. Again for the strict inclusion pick a function $x^{\alpha}$ with $1/2 < \alpha < 1$. (b) This is not true, as you can see from the simple example $f(x) = (1+x)^2 = 1+2x+x^2$. (Always test these first, i.e., just replace the $O(g(x))$ by $g(x)$ and see if the statement is true.) Obviously $(1+x)^2 = (1+O(x))^2$, but if we had $f(x) = 1+O(x^2)$, then $f(x)-1 = O(x^2)$, so $\frac{(1+x)^2 - 1}{x^2} = \frac{2x+x^2}{x^2} = \frac{2}{x}+1$ would have to be bounded in a neighborhood of $0$, which it is not.<|endoftext|> TITLE: Can every recursive formula be expressed explicitly? QUESTION [8 upvotes]: I'm not sure if my wording is entirely correct, but I was just wondering if every recursive formula can be turned into an explicit formula. I am asking this because various sources online give me opposite answers. Although, one thing I have noticed is that every source likes to use different words other than "formula", like "expression" and such. According to wiki, "Although not all recursive functions have an explicit solution". So I guess another part of my question is: What difference is there when people say recursive function, expression, formula, etc. (if there is any)? But yeah, I have seen a stackoverflow post saying that every recursion can be turned into an iteration, and doesn't this also mean everything can be explicitly defined? REPLY [2 votes]: This question is not "well-formed". Recursion is a way (actually several ways, as there are many formalisms containing different recursion schemes) to describe a function (or a collection of functions or some other structures).
So, what you are actually asking, is whether or not it is possible to describe functions that can be described "by means of recursion" within some other formalism. You fail to specify the "other formalism" as well as the "recursion-formalism" (of which recursion will be only a part). Some formalisms that do not contain recursion can express exactly the same functions as some other "recursive formalisms" eg.: LOOP computable functions vs. primitive recursive functions WHILE computable functions vs. $\mu$-recursive functions However, if you aim at asking whether or not every formalism (that describes a family of functions) can be replaced by an equivalent "non-recursive" formalism, then we have to: Define what we mean by a formalism What we mean by equivalent And maybe ask ourselves why we would want to get rid of recursion in the first place ;) I think that in most cases (given proper definitions of the above) the answer would be 'yes' but at the cost of some other complexity of the formalism (like additional quantifiers or other advanced constructs)<|endoftext|> TITLE: No $\Delta-$System on a subset of a singular cardinal. QUESTION [5 upvotes]: I've been making my way through the new Kunen and I've come across an exercise that I can't work out. The question is this: Let $\kappa$ be a singular cardinal. Show that there is a collection $A$ of $\kappa$ many two-element subsets of $\kappa$ such that no element of $[A]^\kappa$ forms a $\Delta-$ system. Where $[A]^\kappa$ is the set of subsets of A of size $\kappa$. Any help would be appreciated (i.e. hints welcome). REPLY [3 votes]: $\newcommand{\cf}{\operatorname{cf}}$Let $\cf\kappa=\lambda<\kappa$, and let $\langle\alpha_\xi:\xi<\lambda\rangle$ be a strictly increasing sequence cofinal in $\kappa$ such that $\alpha_0=0$. For $\xi<\lambda$ let $K_\xi=[\alpha_\xi,\alpha_{\xi+1})$; then $\kappa=\bigcup_{\xi<\lambda}K_\xi$, and $|K_\xi|<\kappa$ for each $\xi<\lambda$. 
Let $$A=\bigcup_{\xi<\lambda}[K_\xi]^2\;;$$ clearly $|A|=\kappa$, and I leave to you the straightforward verification that $A$ has no $\Delta$-system of power $\kappa$.<|endoftext|> TITLE: Find the value of : $\lim_{n\to\infty}\prod_{k=1}^n\cos\left(\frac{ka}{n\sqrt{n}}\right)$ QUESTION [12 upvotes]: Find the limit (where a is a constant) $\lim_{n\to\infty}\prod_{k=1}^n\cos\left(\frac{ka}{n\sqrt{n}}\right)$ I think the answer is $1-a^2/6$ REPLY [14 votes]: Let $$f(n) = \prod_{k=1}^n \cos \left(\dfrac{ka}{n \sqrt{n}}\right)$$ $$g(n) = \log (f(n)) = \sum_{k=1}^{n} \log \left(\cos \left(\dfrac{ka}{n \sqrt{n}}\right) \right) = \sum_{k=1}^{n} \log \left(1 - \dfrac{\left(\dfrac{ka}{n \sqrt{n}}\right)^2}2 + \mathcal{O} \left( \dfrac{k^4}{n^6}\right)\right)$$ $$\log \left(1 - \dfrac{\left(\dfrac{ka}{n \sqrt{n}}\right)^2}2 + \mathcal{O} \left( \dfrac{k^4}{n^6}\right)\right) = -\left(\dfrac{\left(\dfrac{ka}{n \sqrt{n}}\right)^2}2 + \mathcal{O} \left( \dfrac{k^4}{n^6}\right)\right) + \mathcal{O} \left(\dfrac{k^4}{n^6} \right)$$ $$\sum_{k=1}^{n} \log \left(1 - \dfrac{\left(\dfrac{ka}{n \sqrt{n}}\right)^2}2 + \mathcal{O} \left( \dfrac{k^4}{n^6}\right)\right) = \sum_{k=1}^{n} \left( -\dfrac{\left(\dfrac{ka}{n \sqrt{n}}\right)^2}2 + \mathcal{O} \left( \dfrac{k^4}{n^6}\right)\right)\\ = -\dfrac{a^2}{2n^3} \dfrac{n(n+1)(2n+1)}{6} + \mathcal{O}(1/n)$$ $$\lim_{n \to \infty }\sum_{k=1}^{n} \log \left(1 - \dfrac{\left(\dfrac{ka}{n \sqrt{n}}\right)^2}2 + \mathcal{O} \left( \dfrac{k^4}{n^6}\right)\right) = -\dfrac{a^2}{6}$$ Hence, $$\prod_{k=1}^{\infty} \cos \left(\dfrac{ka}{n \sqrt{n}}\right) = \exp(-a^2/6)$$ The solution you have $1-a^2/6$ is a first order approximation to $\exp(-a^2/6)$ since $$\exp(x) = 1 + x + \mathcal{O}(x^2)$$<|endoftext|> TITLE: Inner product for vector space over arbitrary field QUESTION [7 upvotes]: The definition of an inner product in Linear Algebra Done Right by Sheldon Axler assumes that the vector space is over either the real or complex field. PlanetMath makes the same assumption. Is there a definition of an inner product over, for example, finite fields? I sometimes find finite fields easier to reason about, so it would be nice to have a definition of an inner product for vector spaces over them. REPLY [6 votes]: You could define an inner product on an ordered field, as you need to satisfy the positive-definiteness axiom. However, without a suitable order on the field, this axiom is meaningless. In order to generalise the conjugation in the definition of a hermitian inner product, you can introduce an involution on a field. As long as you can introduce an order and an involution, you should be able to generalise the definition easily enough. As an aside, it's worth noting that when you do away with the positive-definiteness axiom, what you have is a symmetric bilinear form, which you can define on most (all? I'm not 100%!) fields.<|endoftext|> TITLE: How to see $\operatorname{Spec} k[x]$ for non necessarily algebraic closed field $k$? QUESTION [10 upvotes]: I know that $\operatorname{Spec} \mathbb{C}[x]$ can be identified with the set $\mathbb{C}\cup *$, where $*$ is a generic point via the correspondence $$ \prod_{i}(x-a_i) \leftrightarrow \{a_i\}_i , \ \ \ \ (0)\leftrightarrow *. $$ This correspondence holds for any $\operatorname{Spec} k[x]$ with algebraic closed field $k$. When $k$ is not algebraic closed, how should one understand $\operatorname{Spec} k[x]$? REPLY [2 votes]: Let $\bar k$ be an algebraic closure of $k$. Let $G = Aut(\bar k/k)$. 
$\bar k/G$ be the set of $G$-orbits. We will show that there exists a canonical bijection $Spec$ $k[x] \rightarrow \bar k/G\cup \{*\}$. Let $Y =$ $Spec$ $k[x] - \{(0)\}$. It suffices to prove that there exists a canonical bijection $\psi\colon Y \rightarrow \bar k/G$. Let $p \in Y$. There exists a unique monic irreducible polynomial $f(x) \in k[x]$ such that $p = (f(x))$. Let $S$ be the set of roots of $f(x)$ in $\bar k$. It is clear that $S$ is $G$-stable. Let $\alpha, \beta \in S$. There exists a $k$-isomorphism $k(\alpha) \rightarrow k(\beta)$ transforming $\alpha$ to $\beta$. It can be extended to a $k$-automorphism of $\bar k$. Hence there exists $\sigma \in G$ such that $\sigma(\alpha) = \beta$. Hence $S \in \bar k/G$. Hence we get a map $\psi\colon Y \rightarrow \bar k/G$ such that $\psi(p) = S$. Clearly $\psi$ is injective. Let $T \in \bar k/G$. Let $\alpha \in T$. Let $f(x)$ be the minimal polynomial of $\alpha$ over $k$. Clearly $\psi((f(x)) = T$. Hence $\psi$ is surjective.<|endoftext|> TITLE: How are $\operatorname{Spec} \mathbb{Q}, \operatorname{Spec}\mathbb{R}, \operatorname{Spec}\mathbb{C}$ etc different? QUESTION [17 upvotes]: By definition $\operatorname{Spec}k$ is a point for any field $k$. So $\operatorname{Spec}\mathbb{Q}, \operatorname{Spec}\mathbb{R}, \operatorname{Spec}\mathbb{C}$ etc are all the same as topological spaces. But according to the natural inclusion map $$ \mathbb{Q} \rightarrow \mathbb{R} \rightarrow \mathbb{C} $$ there exist natural morphisms $$ \operatorname{Spec}\mathbb{Q} \leftarrow \operatorname{Spec}\mathbb{R} \leftarrow \operatorname{Spec}\mathbb{C}, $$ but not the other direction. So $\{\operatorname{Spec}k\}_k$ must carries more information than merely one point topological space. I would appreciate it if someone could kindly explain what is going on. REPLY [18 votes]: The extra information that's carried along is the scheme structure. I.e., these are all locally ringed spaces with a single closed point, but with different sheaves of regular functions corresponding to the rings $\Bbb Q,\Bbb R,\Bbb C.$ The functions you describe carry along this sheaf information via pushforward along a trivial (set-theoretic/topological) map. Note that $\operatorname{Spec}(\Bbb C[t]/t^n)$ is another one-pointed space with a different scheme structure from the rest. And there are many more examples, in fact you can take the spectrum of any local artinian ring. PS - Don't worry, this makes algebraic geometry very rich! In a certain sense, the scheme structure "remembers" information that the topological space forgets, for example in degenerating families. Eisenbud-Harris and Hartshorne have nice examples, in chapter II and chapter II.9 respectively, if I remember correctly.<|endoftext|> TITLE: $\sum_{k=1}^n a_k^3 = \left(\sum_{k=1}^n a_k \right)^2$ QUESTION [7 upvotes]: We know that $\sum_{k=1}^n k^3 = \left(\sum_{k=1}^n k \right)^2$. Interestingly, $1^3+2^3+2^3+4^3=(1+2+2+4)^2$. Are there other non-consecutive numbers $a_1, a_2, \ldots, a_k$ such that $$\sum_{k=1}^n a_k^3 = \left(\sum_{k=1}^n a_k \right)^2?$$ REPLY [9 votes]: Here is a surprising result: $$ \sum_{d\mid n} \tau(d)^3 = \big(\sum_{d\mid n} \tau(d)\big)^2 $$ where $\tau$ counts the number of divisors of an integer. It's exercise 12 in chapter 2 of Apostol's Introduction to Analytic Number Theory. If you take $n=2^N$, you get the classic result for consecutive numbers. If you take $n=pq$, a product of two primes, you get your example. I don't think all examples come from the result above, though. 
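If you want to see the divisor-sum identity in action before chasing the reference that follows, here is a small brute-force check in plain Python (the range of $n$ tested is an arbitrary choice):

```python
def tau(m):
    """Number of divisors of m."""
    return sum(1 for d in range(1, m + 1) if m % d == 0)

for n in range(1, 2001):
    divs = [d for d in range(1, n + 1) if n % d == 0]
    # sum of tau(d)^3 over divisors d of n equals the square of the sum of tau(d)
    assert sum(tau(d) ** 3 for d in divs) == sum(tau(d) for d in divs) ** 2

print("identity verified for n = 1, ..., 2000")
```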
See John Mason, Generalising 'Sums of Cubes Equal to Squares of Sums', The Mathematical Gazette, Vol. 85, No. 502 (Mar., 2001), pp. 50-58.<|endoftext|> TITLE: $f$ is measurable if and only if for each Borel set A, $f^{-1}(A)$ is measurable. QUESTION [8 upvotes]: Let $f$ be defined on a measurable set $E$. Show that $f$ is measurable if and only if for every Borel set $A$, $f^{-1}(A)$ is measurable. Hint: The collection of sets $A$ that have the property that $f^{-1}(A)$ is measurable is a $\sigma$-algebra. I want to know if the following proof is correct: $\mathcal{A}$ is the collection in question, $\mathcal{B}$ is the set of Borel sets, and $\mathcal{M}$ is the set of measurable sets. Let $c \in \mathbb{R}$ arbitrarily chosen. Then $(c, \infty)$ is open so it is in $\mathcal{B}$ and $f^{-1}((c,\infty)) \in \mathcal{M}$. Also since $f^{-1}(\{\infty\}) \in \mathcal{M}$ so $\{\infty\} \in \mathcal{A}$. By both of the above this shows that $(c, \infty] \in \mathcal{A}$. Let $a,b \in \mathbb{R}$. Since $f$ is measurable $f^{-1}([-\infty,b))$ and $f^{-1}((a,\infty])$ are both in $\mathcal{M}$ so $[-\infty,b)$, $(a,\infty]$ are both in $\mathcal{A}$ and $f^{-1}([-\infty,b)) \cap f^{-1}((a,\infty])=f^{-1}((a,b)) \in \mathcal{M}$. So $(a,b)\in A$. This shows that all open intervals are in $\mathcal{A}$. For $A \in \mathcal{B}$, $f^{-1}(A) \in \mathcal{M}$ so $A \in \mathcal{A}$. $f^{-1}(\emptyset) \in \mathcal{M}$, so $\emptyset \in \mathcal{A}$. I've gotten $\LaTeX$ fatigue, but I think all that is left to show is that countable unions and complements are in $\mathcal{A}$ to complete the definition of $\sigma$-algebra. REPLY [8 votes]: For your proof, I'm not quite sure I understand where you are going with your list of bullet points. In the beggining it seems like you are assuming $f$ is measurable, and trying to prove that $f^{-1}(B)$ is measurable for any Borel set $B$, but your second to last bullet claims that $f^{-1}(B)$ is measurable for any borel set $B$, which then makes it seem like you are assuming this and trying to prove measurability. (But if you are assuming this, then all the other bullet points are trivial). Whichever it is you're trying to do you should make clear at the beginning of the proof. If the first is the case (which I think it probably is) what you want to do is show that the sets form a $\sigma$-algebra which contains the open sets, as this is the reason we can say that $f^{-1}(B)$ is measurable for every Borel set. Here is a sample proof highlighting what I've said above, since you said in the comments you would want an explicit answer. ($\Leftarrow $) if $f^{-1}(A)$ is measurable for every Borel set, then in particular $f^{-1}(A)$ is measurable for every open set, so $f$ is measurable. ($\Rightarrow$) if $f$ is measurable then we know that $f^{-1}(A)$ is measurable for every open set. Moreover, Let $\mathcal{A}$ be the collection of sets $A$ satisfying $f^{-1}(A)$ is measurable. In particular we know that $\mathcal{A}$ contains all open sets by above. Then let $A_1, A_2, \cdots \in \mathcal{A}$ Then $f^{-1}(\bigcup_1^{\infty} A_n) = \bigcup_1^{\infty} f^{-1}(A_n)$ is measurable since it is the countable union of measurable sets, so $\bigcup_1^{\infty} A_n \in \mathcal{A}$. Similarly, $f^{-1}(A_1^{c}) = f^{-1}(A_1)^c$ is measurable since it is the compliment of a measurable set, so $A_1^{c} \in\mathcal{A}$. 
Thus $\mathcal{A}$ is a $\sigma$-algebra which contains the open sets, so $\mathcal{A}$ contains all Borel sets and we're done.<|endoftext|> TITLE: How to show $x_1,x_2, \dots ,x_n \geq 0 $ and $ x_1 + x_2 + \dots + x_n \leq \frac{1}{2} \implies (1-x_1)(1-x_2) \cdots (1-x_n) \geq \frac{1}{2}$ QUESTION [6 upvotes]: How to show $x_1,x_2, \dots ,x_n \geq 0 $ and $ x_1 + x_2 + \dots + x_n \leq \frac{1}{2} \implies (1-x_1)(1-x_2) \cdots (1-x_n) \geq \frac{1}{2}$ REPLY [4 votes]: It is easy to see that: $$(1-a)(1-b) \geq 1-(a+b)$$ Then, you can use induction to prove that: $$(1-x_1)(1-x_2)...(1-x_n) \geq 1-(x_1+x_2+...+x_n)$$ The inductive step is: $$(1-x_1)(1-x_2)...(1-x_n)(1-x_{n+1}) \geq \left[ 1-(x_1+x_2+...+x_n) \right] (1-x_{n+1}) \geq 1-(x_1+x_2+...+x_n+x_{n+1})$$ For this to work you only need that all $1-x_i \geq 0$... Of course you need $x_1+..+x_n \leq \frac{1}{2}$ to get the desired inequality.<|endoftext|> TITLE: Vandermonde-like identities QUESTION [6 upvotes]: Vandermonde's identity gives $$\sum_{k=0}^r \binom{m}{k}\binom{n}{r-k}=\binom{m+n}{r}.$$ Here is an example of a Vandermonde-like identity: For all $0 \le m \le n$, $$\sum_{k=0}^{2m} \binom{\left\lfloor\frac{n+k}{2}\right\rfloor}{k}\binom{m+\left\lfloor\frac{n-k}{2}\right\rfloor}{2m-k}=\binom{m+n}{2m}$$ (Note that $\left\lfloor\frac{n+k}{2}\right\rfloor+\left(m+\left\lfloor\frac{n-k}{2}\right\rfloor\right)$ is either $m+n$ or $m+n \pm 1$) I wonder if there are some similar identities where $m(k)$ and $n(k)$ are functions of $k$ and $m(k)+n(k)$ is 'almost' constant, say $m+n$, and the identity looks like $$\sum_{k=0}^r \binom{m(k)}{k}\binom{n(k)}{r-k}=\binom{m+n}{r}?$$ REPLY [2 votes]: Here are some almost "Vandermonde-like" identities that may be of interest. They're not exactly what you're asking for, but they are pretty close. $$\begin{align*} \sum_{k=0}^n \binom{p+k}{k} \binom{q+n-k}{n-k} &= \binom{n+p+q+1}{n} \\ 2 \sum_{k=0}^r \binom{n}{2k} \binom{n}{2r+1-2k} &= \binom{2n}{2r+1} \\ 2 \sum_{k=0}^r \binom{n}{2k} \binom{n}{2r-2k} &= \binom{2n}{2r} + (-1)^r \binom{n}{r} \\ 2 \sum_{k=0}^{r-1} \binom{n}{2k+1} \binom{n}{2r-2k-1} &= \binom{2n}{2r} - (-1)^r \binom{n}{r} \end{align*}$$ The first one is on p. 148 of Riordan's Combinatorial Identities, and the last three are on p. 144. There may be more in Riordan's book; I just flipped through until I found a few.<|endoftext|> TITLE: Continuity of a convex function QUESTION [9 upvotes]: I'm trying to solve the following problem: Let $f:K\rightarrow \mathbb{R} $, $f$ convex and $K \subseteq \mathbb{R}^n$ convex. Then $f$ is continuous on $K$. I have proved only the case $n=1$, but what about arbitrary $n$? REPLY [2 votes]: A hypercube in dimension $n$ has $2n$ hyperfaces, not $2^n$. Apparently Bacon does not like this to be corrected (https://math.stackexchange.com/review/suggested-edits/827072). I do not have enough reputation for a proper comment, I'll delete this as soon as someone puts it correctly.<|endoftext|> TITLE: Dimension analysis of an integral QUESTION [5 upvotes]: I'm reading Street-Fighting Mathematics and not sure if I understand integral dimension analysis. The idea is to "guess" integrals without explicit calculation, by just looking at their dimensions. It's been a good decade since I last touched integrals so please bear with me!
My answer to the problem is that dimension of $\int_{a}^{b}f(x)dx$ depends on the dimension of the multiplication $f(x)*dx$, so while the integral sign is indeed dimensionless, if we have $f(x)$ being "length per second" and $x$ being "second" then the resulting integral will have "length" dimension. But the author also tells that $e^{-\alpha x^2}$ is dimensionless and thus derives dimension formula for $\alpha$: I don't understand why the $-\alpha x^2$ should be dimensionless. What if there's no $\alpha$ (or $\alpha=1$), in which case the exponent becomes $e^{-x^2}$ - will it have a dimension now? And if it won't, then why dimension of $\alpha$ should depend on the dimension of $x$ at all? Also, a final question - if we accept that the only dimension which affects the integral in question is the dimension of $x$, then setting it to "length" means that the integral dimension is "length". But integrals compute areas, don't they? I think that in this case the integral $\int_{a}^{b}f(x)dx$ will have a dimension of "length x 1" which is still area. Is this correct, is there a better explanation? REPLY [3 votes]: You can obtain the results of dimensional analysis using the substitution $y=\sqrt{\alpha} x$, it is easy to see that $$\int e^{-\alpha x^2}\,dx = \frac{1}{\sqrt{\alpha}} \int e^{-y^2}\,dy.$$ As the remaining integral is independent of $\alpha$, we have that the original integral $\propto \alpha^{-1/2}$. (dimensional analysis usually claims additionally that the remaining integral is of order 1) REPLY [3 votes]: To extend Henry's answer a bit: The exponential function can be written as $$\mathrm e^{-\alpha x^2} = \exp(-\alpha x^2) = \sum_{n=0}^{\infty} \frac{(-\alpha x^2)^n}{n!},$$ so with the same argument on sums as above used to derive the dimension of an integral you see that you must have $[-\alpha x^2] = 1$ to evaluate the exponential. This is true for all(?) non-monomial functions (i.e. all functions that are not of the form $x^n$) with dimension-less coefficients.<|endoftext|> TITLE: Is every sufficiently large semigroup $S$ a subsemigroup of the transformation semigroup on $S$? QUESTION [7 upvotes]: In this question, all sets are finite. A semigroup is a set equipped with an associative binary operation, which I will call $\circ$. The tranformation semigroup on a set $X$ is the semigroup of maps $X\to X$ with function composition as the semigroup operation. The degree of a semigroup $S$ is the smallest integer $n$ such that $S$ is a subsemigroup (a subset closed under the semigroup operation) of the transformation semigroup on a set of order $n$. The question is whether there exists $n$ such that for all semigroups of order at least $n$, the degree of $S$ is at most $|S|$. Every semigroup $S$ can be extended to a semigroup with identity (also called a monoid) $M_S=S\cup\{1\}$, where the semigroup operation $\circ$ is extended by setting $1\circ s=s\circ 1=s$ for all $s\in S$. A semigroup $M$ with identity has degree at most $|M|$, because it is isomorphic to its "regular representation", the semigroup of functions $f_m\colon M\to M$ where $f_m(m')=m\circ m'$. See http://en.wikipedia.org/wiki/Transformation_semigroup#Cayley_representation Restricting to $S$, this shows that the degree of $S$ is at most $|S|+1$. This cannot be improved to $|S|$. Let $S=\{a,b\}$ and $s\circ s'=a$ for all $s,s'\in S$. Suppose that $S$ was a subsemigroup of the four functions $\{0,1\}\to \{0,1\}$ under composition. 
Since $S$ does not have an identity elements, we can rule out using the identity function or the swapping function $t\mapsto 1-t$. That leaves only the constant maps. But the constant maps give a different semigroup operation, $s\circ s'=s$. Clearly a counterexample cannot have an identity element. It is known that there are groups $G$ (and therefore semigroups) of degree $|G|$, for example cyclic groups of order $p^n$ where $p$ prime. (More is known.) REPLY [2 votes]: The answer is negative. Let $\mathcal{T}_n$ be the transformation semigroup on $n$ elements and let $S_n = \{s, s^2, \ldots, s^n\}$ be the semigroup defined by $s^{n+1} = s^n$. I will use the following consequence of Green's lemma: Lemma 1. Let $S$ be a finite semigroup and let $s$ and $t$ be two $\mathcal{J}$-equivalent elements of $S$. If $ts = t$, then $s$ is idempotent. Proposition. Let $n > 1$. Then the semigroup $S_n$ is not a subsemigroup of $\mathcal{T}_n$. Proof. Suppose by contradiction there is an element $s \in \mathcal{T}_n$ such that $s^{n+1} = s^n$ but $s^i \not= s^j$ for $1 \leqslant i < j \leqslant n$. I claim that $s^n <_\mathcal{J} s^{n-1} <_\mathcal{J} \cdots <_\mathcal{J} s$. It is clear that $s^{i+1} \leqslant_\mathcal{J} s^i$ for $1 \leqslant i \leqslant n-1$. Moreover, if $s^{i+1} \mathrel{\mathcal{J}} s^i$, then by Lemma 1 applied to $t = s^i$, $s$ is idempotent, a contradiction. However, $\mathcal{T}_n$ has exactly $n$ $\mathcal{J}$-classes: $J_1 <_\mathcal{J} J_2 <_\mathcal{J} \cdots <_\mathcal{J} J_n$, where each $J_k$ is the set of elements of rank $k$. Thus $s$ necessarily belongs to $J_n$, the group of units of $\mathcal{T}_n$. It follows that $s^n = 1$ and $s^{n+1} = s$, a contradiction.<|endoftext|> TITLE: Combinatorial identity related to the volume of a ball in $\mathbb{R}^{2k+1}$ QUESTION [6 upvotes]: Calculating the volume of a ball in $\mathbb{R}^{2k+1}$ in two different ways gives us the following formula: $$\sum_{i=0}^k {k \choose i} \frac{(-1)^i}{2i+1} = \frac{(k!)^2 2^{2k}}{(2k+1)!}$$ Is there a more direct way to prove this identity? I'm interested if there is a more combinatorial or algebraic way to prove this. Given the sum on the left side, how would you find out the formula for it? Added: This is how I found the identity. The volume of an ball of radius $r$ in $\mathbb{R}^{2k+1}$ is given by the formula $$\mathscr{L}^{2k+1}(B(0,r)) = \frac{\pi^k k! 2^{2k+1}}{(2k+1)!}r^{2k+1}$$ and in $\mathbb{R}^{2k}$ by the formula $$\mathscr{L}^{2k}(B(0,r)) = \frac{\pi^k}{k!}r^{2k}$$ where $\mathscr{L}$ denotes Lebesgue measure. I was wondering if I could prove the formula for $\mathbb{R}^{2k+1}$ using the formula for $\mathbb{R}^{2k}$. 
With the formula for even dimension we can calculate \begin{align*} \mathscr{L}^{2k+1}(B(0,r)) &= (\mathscr{L}^{2k} \times \mathscr{L})(B(0,r)) \\ &= \int_{[-r,r]} \mathscr{L}^{2k}(B(0,\sqrt{r^2 - y^2}))d \mathscr{L}(y) \\ &= \frac{\pi^k}{k!} 2 \int_0^r (r^2 - y^2)^k dy \\ &= \frac{\pi^k}{k!} 2r^{2k+1} \sum_{i=0}^k {k \choose i}\frac{(-1)^i}{2i+1} \end{align*} Now equating the two formulas for $\mathscr{L}^{2k+1}(B(0,r))$ gives $$\sum_{i=0}^k {k \choose i} \frac{(-1)^i}{2i+1} = \frac{(k!)^2 2^{2k}}{(2k+1)!}$$ REPLY [4 votes]: Actually, we can derive the recurrence relation for the sum without going through integrals: $$ \begin{align} 2k(I_k-I_{k-1}) &= 2k\sum_{i=0}^k \left(\binom ki-\binom{k-1}i\right)\frac{(-1)^i}{2i+1} \\ &= 2k\sum_{i=0}^k \binom ki\left(1-\frac{k-i}k\right)\frac{(-1)^i}{2i+1} \\ &= 2k\sum_{i=0}^k \binom ki\frac ik\frac{(-1)^i}{2i+1} \\ &= \sum_{i=0}^k \binom ki(-1)^i\frac{2i}{2i+1} \\ &= \sum_{i=0}^k \binom ki(-1)^i\left(1-\frac{1}{2i+1}\right) \\ &= -\sum_{i=0}^k \binom ki(-1)^i\frac{1}{2i+1} \\ &= -I_k\;, \end{align} $$ where I made use of the fact that the alternating sum of $\binom ki$ over $i$ vanishes.<|endoftext|> TITLE: Why is ${\aleph_\omega}^{\aleph_1} = {\aleph_\omega}^{\aleph_0} \cdot {2}^{\aleph_1}$? QUESTION [6 upvotes]: I am supposed to prove that ${\aleph_\omega}^{\aleph_1} = {\aleph_\omega}^{\aleph_0} \cdot {2}^{\aleph_1}$ , but I really have no idea how to start or what to do. I thought I could use the following fact: ${2}^{\aleph_1}= {\aleph_1}^{\aleph_1}$, because of the infiniteness of ${\aleph_1}$. I hope someone will show me how this works. Thanks in advance! REPLY [4 votes]: We have, since $\aleph_\omega$ is a limit, that \[ \aleph_\omega^{\aleph_1} = (\sup_n \aleph_n^{\aleph_1})^{\operatorname{cf}\aleph_\omega} \] By Hausdorff and induction \[ \aleph_{n+1}^{\aleph_1} = \aleph_n^{\aleph_1} \cdot \aleph_{n+1} = \aleph_1^{\aleph_1}\cdot \aleph_{n+1} = 2^{\aleph_1} \cdot \aleph_{n+1} \] Hence \[ \sup_n \aleph_n^{\aleph_1} = 2^{\aleph_1} \cdot \aleph_{\omega} \] As $\operatorname{cf}\aleph_\omega = \aleph_0$, finally \[ \aleph_\omega^{\aleph_1} = (2^{\aleph_1} \cdot \aleph_\omega)^{\aleph_0} = 2^{\aleph_1} \cdot \aleph_\omega^{\aleph_0}. \]<|endoftext|> TITLE: $\mathbb R^3$ is not a field QUESTION [16 upvotes]: I'm trying to prove that $\mathbb R^3$ is not a field with component-wise multiplication and sum defined. I think it's weird, because every properties of a field are inherit from $\mathbb R$. Anyone can help? Thanks REPLY [16 votes]: Suppose that we want to put a field structure on the $\mathbb{R}$-vector space $\mathbb{R}^3$. Suppose $K$ is isomorphic to $\mathbb{R}^3$ as an $\mathbb{R}$-vector space and is a field. Then $K$ contains a copy of $\mathbb{R}$ as a subfield. Thus, $\mathbb{R}^3$ is an algebraic extension of $\mathbb{R}$ of degree $3$. But all algebraic extensions of $\mathbb{R}$ are either or degree $1$ or $2$ because all algebraic field extensions of $\mathbb{R}$ can be embedded into $\mathbb{C}$ and $\mathbb{C}$ has dimension $2$ as an $\mathbb{R}$ vector space. Thus, $\mathbb{R}^3$ can not be equipt with a field structure. In this answer I have assumed that you are viewing $\mathbb{R}^3$ as an $\mathbb{R}$-vector space. If you only want to view it as an abelian group then this argument won't work (I don't think).<|endoftext|> TITLE: Why is it that I cannot imagine a tesseract? QUESTION [9 upvotes]: I try hard to "visualise" (say "imagine") a tesseract but I can't. Why is it that I can't? 
This may be a question for a scholar of some other discipline and not for a mathematician, e.g. psychology (topic: cognition?), anthropology, etc., but I am sure it is well defined and answerable as a question. It could be answered with a definition of what I can imagine or the definition of what I can't imagine and why, for example. There may be some fundamental property of our geometry that limits what we can represent so the question may be interpreted mathematically... anyway I think it is not a question to bounce without any thought. Specifically: what is missing for me to be able to imagine a tesseract? Understanding? A different kind of brain, that processes information in a different way? Can a top mathematician visualise a tesseract? I am not inviting a discussion, which would be off-topic. I am soliciting a thoughtful and articulate answer, if possible. Note: I already saw this question: In what sense is a tesseract (shown) 4-dimensional? and this video: http://www.youtube.com/watch?NR=1&feature=endscreen&v=uP_d14zi8jk already read this link: http://en.wikipedia.org/wiki/Tesseract and I have studied calculus-level maths, etc. and I found no difficulty in reasoning about imaginary numbers, infinite quantities and/or series, demonstrations ad absurdum, etc. I would be really disappointed if this question were marked as "not constructive", or anything to that effect. I can accept it may be "off topic" because it may relate to how our brain visualises and not some mathematical property that prevents visualisation, but it really should not be considered as "not-constructive". It would actually help me so much to understand this conundrum... REPLY [2 votes]: The key to the answer to this question is already in the word "visualize": It contains "visual", which refers to seeing. Indeed, when you visualize something, you quite literally activate the same structures in your brain that you would also activate when seeing it. You are actually, in a quite direct sense, putting it in front of your "inner eye". Note that also in "imagine" there's the word "image", which also refers to seeing. This can be seen(!) in a few self-experiments. First, visualize a common item. Let's say a teapot. When you put this teapot in front of your inner eye, you see it the same way you would if you were looking at it. In particular, you'll see the outside, but not the inside (unless you mentally go inside the imagined teapot, in which case you see the inside, but not the outside). Also, you see the side facing you, but not the opposite side. Now imagine the teapot getting smaller and smaller. For a real shrinking teapot, as it reaches the boundary of your vision, it will lose structure until it ends up as a little point. And you'll notice that the same will happen with your imagined teapot. Note that I'm assuming that you really imagine it shrinking; you can of course instead keep its size in your imagination and put a "mental label" on it "like this, but much smaller". But you cannot imagine, in the literal sense, a microscopically small teapot with all its structure. Or in other words, strictly speaking you cannot even visualize Euclidean space. Now the answer to your question is obvious: You cannot visualize the tesseract because you cannot see it. Our eyes do projections of the three-dimensional space into the two-dimensional spaces of our retinas, and our brain then interprets those two images. The tesseract doesn't live in that three-dimensional space projected onto our retinas. 
So does this mean we're out of luck? Well, not completely. While in nature there are no tesseracts to see, and no extended four-dimensional space to see in which they could be put, we can do a mathematical projection of the tesseract into three-dimensional space, and then use computers and our visual apparatus (and, after some experience, also our imagination) to make mental images of those projections, while using our abstract understanding together with our three-dimensional intuition/experience on how projections from 3D to 2D behave, to interpret those projections. Using this method, it is indeed possible to develop a certain feeling for some aspects of four-dimensional objects, like the tesseract. However note that it will never be as natural as the three-dimensional experience which our brain explicitly evolved for (because our ancestors would not have survived without a sufficient understanding of three-dimensional space).<|endoftext|> TITLE: Why is $i^i$ real? QUESTION [12 upvotes]: Possible Duplicate: How to raise a complex number to the power of another complex number? My calculator (as well as WolframAlpha) gives me the approximation: $$0.2078795763507619085469...$$ But I don't understand how exponentiating two purely imaginary constructs yields a real (albeit irrational) number. When I do $i^{i+1}$ it gives me an imaginary number as well as $(i+1)^i$. So why does $i^i$ fall into that precise point where it is real and no longer imaginary? What is happening? I understand that exponentiation is not repeated multiplication, and it wouldn't make sense to multiply $i$ by itself $i$ times (because it would only yield $i$, $-i$, $1$, or $-1$). So what are we doing behind the scenes to get such a number? REPLY [19 votes]: Using Euler's formula: $$ i = e^{i\pi / 2} $$ So: $$ i^i = (e^{i\pi / 2})^i = e^{i^2\pi/2} = e^{-\pi/2} = 0.207... $$<|endoftext|> TITLE: Usage of finite fields or Galois fields in real world QUESTION [5 upvotes]: I'm currently studying the theory of Galois fields. And I have a question, what practical usage of this finite fields? As stated in Wikipedia: Finite fields are important in number theory, algebraic geometry, Galois theory, cryptography, coding theory and Quantum error correction. And what the other usage? I heard it used extensively in the image processing and recognition. REPLY [7 votes]: Finite fields are extensively used in design of experiments, an active research area in statistics that began around 1920 with the work of Ronald Fisher. Fisher was a major pioneer in the theory of statistics and one of the three major founders of population genetics. I've heard of the use of finite fields in scheduling tournaments. Problems in that area may be the same mathematical problems as some of those that occur in design of experiments.<|endoftext|> TITLE: Name of a set that allows repetition QUESTION [6 upvotes]: If a set cannot contain repetition, what would be the proper term for a group of items that allowed repetition? REPLY [14 votes]: Multisets are sets which allow repetitions, but the order does not matter. If you wish to allow repetitions and the order does matter, you are looking for a sequence.<|endoftext|> TITLE: What is algebraic geometry? QUESTION [28 upvotes]: I am a second year physics undergrad, loooking to explore some areas of pure mathematics. A word that often pops up on the internet is algebraic geometry. What is this algebraic geometry exactly? Please could you give a less technical answer to describe the what this field does and how? 
I have done linear algebra, some group and representation theory, and some basic point set topology, all from mathematical physics textbooks. Could you also give a brief overview of the prerequisites to study and do research in the field? I know that commutative algebra and topology are used, but in exactly what way, and how are they inter-connected? How exactly do you mix algebra with topology? Thanks! REPLY [14 votes]: Suppose you look at all polynomials in two variables, $\mathbb C[x,y]$. Any element $p$ in that ring can be evaluated at any element of $\mathbb C^2$ - if we have two complex values, $a,b$ we can compute $p(a,b)$. And if $p,q$ evaluate to the same value at every point of $\mathbb C^2$ then they are actually the same polynomial. Now, look at how $\mathbb C[x,y]$ acts when it is evaluated only on some subset of $\mathbb C^2$, say the set of points $X=\{(a,b): b = a^2\}$. Then $\mathbb C[x,y]$ evaluation is no longer "faithful," but rather, two different polynomials, such as $p_1(x,y)=x^3y$ and $p_2(x,y)=xy^2$, can evaluate as the same on $X$. On $X$, then, the ring of evaluation functions is actually $R=\mathbb C[x,y]/\left\langle y-x^2\right\rangle\cong \mathbb C[x]$. That's a VERY simple case, but Algebraic Geometry is founded on the idea that we can learn something about the geometry of the solutions to a set of equations (in this case, $b=a^2$) by looking at the ring of evaluation functions on that set. Now, given any point $(a,b)\in X$, there is a natural ring homomorphism $\phi:R\rightarrow \mathbb C$ which corresponds to evaluation. Since this map is onto, this means that the point $(a,b)$ corresponds to some maximal ideal of $R$. That correspondence between points of the graph of your equations and maximal ideals of the "evaluation ring" associated with your equations, is the big starting point of algebraic geometry. I will skip the topology part of the question, because it is probably too confusing to try to mix in, except to say that the types of topology used in Algebraic Geometry are very different from the typical starting topologies used in Algebraic Topology.<|endoftext|> TITLE: A high-powered explanation for $\exp U(n)=2\iff n\mid24$? QUESTION [7 upvotes]: In What's so special about the divisors of $24$? it is noted that the exponent of the group of units modulo $n$, that is the highest order of an element of $U(n):=(\Bbb Z/n\Bbb Z)^\times$, is precisely $2$ if and only if the integer $n$ divides $24$. An elementary argument is given (see also answers here for a one-line proof), as well as some analytic machinery, but I recall (perhaps not quite accurately) that the number $24$ shows up a lot in high-powered math related to number theory, like lattices, moonshine, modular forms, string theory etc: suspicious. If I were a believer in the magical and skeptical of coincidences, I might want to know if there is a high-powered explanation of this fact from the cited theoretical areas (not including asymptotic or statistical heuristics from analytic number theory). Or is it merely a collision of small numbers? REPLY [4 votes]: I believe I've tracked down what Qiaochu was referencing in Gannon's Moonshine beyond the Monster, pp. 168-169, §2.5 (alas, five pages too late). I can't claim to understand any of it.<|endoftext|> TITLE: Abelian groups: proving $\prod\limits_{g\in G}g=\prod\limits_{\substack{g\in G\\g^2=1}}g$ QUESTION [9 upvotes]: Let $G$ be a finite abelian group.
Why is $$\prod_{g\in G}g=\prod_{\substack{g\in G\\g^2=1}} g \ ?$$ I've tried to consider each element together with its inverse, but I don't see why the elements that are different from their inverses can be paired up so that they cancel in the product. Any help? REPLY [6 votes]: We have $$\prod_{g\in G}g = \prod_{\substack{g\in G\\g^2=1}}g \cdot\prod_{\substack{g\in G\\g^2\neq1}}g $$ In the product $\displaystyle\prod_{\substack{g\in G\\g^2\neq1}}g$ every element cancels with its inverse.<|endoftext|> TITLE: Why do mathematicians use this symbol $\mathbb R$ to represent the real numbers? QUESTION [8 upvotes]: So, I'm wondering why mathematicians use symbols like $\mathbb R$, $\mathbb Z$, etc. to represent the real numbers and the integers, for instance. I thought it was because these sets are special ones. The problem is that I've already seen letters like $\mathbb K$ used to represent a field in some books, just to give an example. So, does someone know why we use these kinds of symbols? Thanks REPLY [2 votes]: See the section "Letters for the sets of rational and real numbers" at http://jeff560.tripod.com/nth.html<|endoftext|> TITLE: Old versus New enunciation of Taylor's Theorem. QUESTION [10 upvotes]: I am studying from Spivak's Calculus, and he states Taylor's Theorem as follows: THEOREM Let $f',\cdots,f^{(n+1)}$ be defined on $[a,x]$ and let $R_{n,a}(x)$ be defined by $$R_{n,a}(x)=f(x)-\sum_{k=0}^n \frac{f^{(k)}(a)}{k!}(x-a)^k$$ Then, for some $t\in (a,x)$ $$\eqalign{ & {R_{n,a}}(x) = \frac{{{f^{\left( {n + 1} \right)}}\left( t \right)}}{{n!}}{\left( {x - t} \right)^n}\left( {x - a} \right) \cr & {R_{n,a}}(x) = \frac{{{f^{\left( {n + 1} \right)}}\left( t \right)}}{{\left( {n + 1} \right)!}}{\left( {x - a} \right)^{n + 1}} \cr} $$ Moreover, if $f^{(n+1)}$ is integrable over $[a,x]$, then $${R_{n,a}}(x) = \int\limits_a^x {\frac{{{f^{\left( {n + 1} \right)}}\left( t \right)}}{{n!}}{{\left( {x - t} \right)}^n}} dt$$ On the other hand, Landau's older textbook states: THEOREM Let $h>0$. Suppose $f^{(n)}$ is continuous for $0\leq x\leq h$ and differentiable on $0<x<h$. (He then states the remainder formula, and repeats the statement for $f^{(n)}$ continuous for $\mu \leq x\leq \mu+h$ and differentiable on $\mu<x<\mu+h$.) The difference that puzzles me: Landau fixes $h>0$ and then gives a formula for this fixed $h$ and the $t$ (he actually names this $x$, but I found it a little conflicting), while Spivak gives the remainder as a function of $x$ and a fixed $t$. Maybe it is just to make his (Landau's) proof simpler? I see how to go from Spivak's result to Landau's, but not the other way around. REPLY [2 votes]: Landau's version of Taylor's theorem is eq. 3 of Spivak's version. They are actually defined in the same way, up to a translation to make the segment $[a,x]$ start from the origin. You can jump from Spivak's version to Landau's using the coordinate transformation $$s(p)=\frac{(p-a)h}{x-a},$$ where $p$ is the point in the Spivak coordinate system you're considering (in particular if $p=t_{\text{Spivak}}$ then $s(p)=t_{\text{Landau}}$). I don't think you can conclude Spivak's stronger result without requiring integrability.<|endoftext|> TITLE: Determine $\lim_{x \to 0}{\frac{x-\sin{x}}{x^3}}=\frac{1}{6}$, without L'Hospital or Taylor QUESTION [11 upvotes]: How can I prove that $$\lim_{x \to 0}{\frac{x-\sin{x}}{x^3}}=\frac{1}{6}$$ without using L'Hospital or Taylor series? thanks :) REPLY [2 votes]: Reproduced from this answer, in which this answer is cited to show that $\lim\limits_{x\to0}\frac{\sin(x)}x=1$ and that $0\le x\lt\frac\pi2\implies0\le\sin(x)\le x\le\tan(x)$.
Pre-Calculus Proof that $\boldsymbol{\lim\limits_{x\to0}\frac{x-\sin(x)}{x^3}=\frac16}$ Assume that $0\lt x\le\frac\pi3$. Then, $\cos(x)\ge\frac12$ and $0\le\sin(x)\le x\le\tan(x)$. Therefore, $$ \begin{align} \frac{x-\sin(x)}{x^3} &\le\frac{\tan(x)-\sin(x)}{x^3}\tag{1a}\\ &=\frac{\tan(x)}{x}\frac{1-\cos(x)}{x^2}\tag{1b}\\ &=\frac1{\cos(x)}\frac{\sin(x)}{x}\frac{2\sin^2(x/2)}{4\,(x/2)^2}\tag{1c}\\[6pt] &\le1\tag{1d} \end{align} $$ Furthermore, $$ \begin{align} &\frac{x-\sin(x)}{x^3}-\frac14\frac{x/2-\sin(x/2)}{(x/2)^3}\tag{2a}\\ &=\frac{2(x/2)-2\sin(x/2)\cos(x/2)}{8(x/2)^3}-\frac{2(x/2)-2\sin(x/2)}{8(x/2)^3}\tag{2b}\\ &=\frac{2\sin(x/2)(1-\cos(x/2))}{8(x/2)^3}\tag{2c}\\ &=\frac{2\sin(x/2)\,2\sin^2(x/4)}{8(x/2)^3}\tag{2d} \end{align} $$ Since $\lim\limits_{x\to0}\frac{\sin(x)}x=1$, $(2)$ shows that $$ \lim_{x\to0}\left(\frac{x-\sin(x)}{x^3}-\frac14\frac{x/2-\sin(x/2)}{(x/2)^3}\right)=\frac18\tag3 $$ For any $n$, adding $\frac1{4^k}$ times $(3)$ with $x\mapsto x/2^k$ for $k$ from $0$ to $n-1$ gives $$ \begin{align} \lim_{x\to0}\left(\frac{x-\sin(x)}{x^3}-\frac1{4^n}\frac{x/2^n-\sin\left(x/2^n\right)}{\left(x/2^n\right)^3}\right) &=\frac18\frac{1-(1/4)^n}{1-1/4}\tag{4a}\\ &=\frac16-\frac16\frac1{4^n}\tag{4b} \end{align} $$ Thus, for any $\epsilon\gt0$, choose $n$ large enough so that $\frac1{4^n}\le\frac\epsilon2$. Then, $(4)$ says that we can choose a $\delta\gt0$ so that if $0\lt x\le\delta$, $$ \frac{x-\sin(x)}{x^3}-\overbrace{\frac1{4^n}\frac{x/2^n-\sin\left(x/2^n\right)}{\left(x/2^n\right)^3}}^{\frac12[0,\epsilon]_\#} =\frac16-\!\overbrace{\ \ \ \frac16\frac1{4^n}\ \ \ }^{\frac1{12}[0,\epsilon]_\#}\!+\frac12[-\epsilon,\epsilon]_\#\tag5 $$ where $[a,b]_\#$ represents a number between $a$ and $b$. The bounds above the braces follow from $(1)$ and the choice of $n$. Equation $(5)$ says that for $0\lt x\le\delta$, $$ \frac{x-\sin(x)}{x^3}=\frac16+[-\epsilon,\epsilon]_\#\tag6 $$ Since $\frac{x-\sin(x)}{x^3}$ is even, we can say that $(6)$ is true for $0\lt|x|\le\delta$, which means that $$ \lim_{x\to0}\frac{x-\sin(x)}{x^3}=\frac16\tag7 $$<|endoftext|> TITLE: Can we make $\mathbb{R}^{2}$ an ordered field? QUESTION [10 upvotes]: We know that if we give $\mathbb{R}^{2}$ the complex field structure, we cannot make it an ordered field. Is there any field structure that we can put on $\mathbb{R}^{2}$ that makes this ordered field? I don't think there is, but I don't know how to start my argument. REPLY [2 votes]: Whenever we say the term "ordered field", this means the order on the field has something to do with the field structure. Otherwise as a set you can put an order anyway. In $\Bbb{R}^2$ we have the well-known dictionary ordering which becomes total ordering. But what I was talking about is, the total order has to be compatible with the field operations in order to be an ordered field. A total order "$<$" on a field $F$ is said to be compatible with $F$ if the following holds true for all $a,b,c\in F$. $a\leq b$ implies $a+c\leq b+c$. $a\leq b$ and $c>0$ implies $ac\leq bc$. Next we see how the fact $i^2=-1$ in $\Bbb{C}$ ensures that there is no total order on $\Bbb{C}$ which is compatible with the field operations of $\Bbb{C}$. Let $<$ be any arbitrary total ordering on $\Bbb{C}$. Then $i\neq 0$ gives, either $i<0$ or $i>0$. But we will show none of them holds true. If $i>0$ then from the condition $(2)$ we get, $i\cdot i>i \cdot0\implies -1>0$. Now some may think that, we have arrived at a contradiction but unfortunately no. Since $<$ is an arbitrary ordering so this may happen. 
But apply condition $(2)$ again and we get, $(-1)\cdot i>0\cdot i\implies -i>0$. Now using condition $(1)$, $i>0$ and $-i>0\implies i+(-i)>0+0\implies 0>0$. Which is a contradiction. Similarly, if we put $i<0$, then condition $(1)$ gives $-i>0$, and condition $(2)$ then gives $(-i)\cdot (-i)>0\cdot (-i)\implies -1>0$. Then again apply condition $(2)$ on $i<0$ (now multiplying by $-1>0$) to get $i\cdot (-1)<0\cdot (-1)\implies -i<0$. Again using condition $(1)$, $i<0$ and $-i<0\implies i+(-i)<0+0\implies 0<0$. Which is a contradiction. Hence $i$ and $0$ are not comparable, so there is no total order on $\Bbb{C}$ which makes it an ordered field.<|endoftext|> TITLE: Brownian motion: changing the order of expectation and integration in $E \left( \int_s^t B_x dx \mid F_s \right)$ QUESTION [5 upvotes]: Let $B$ be a standard Brownian motion with induced filtration $F$. Is it true that, for $s<t$, $$E \left( \int_s^t B_x \, dx \mid F_s \right) = \int_s^t E \left( B_x \mid F_s \right) dx\,?$$<|endoftext|> TITLE: Roots of unity and function $\mu$ QUESTION [6 upvotes]: I need to prove that for each positive integer $n$ the sum of the primitive $n$th roots of unity in $\mathbb{C}$ is $\mu(n)$, where $\mu$ is the Möbius function. REPLY [6 votes]: Do you know $$\sum_{d\mid m}\mu(d)=1{\rm\ if\ }m=1,\,\,=0{\rm\ else}$$ The sum of the primitive $n$th roots of unity is $$\sum_{\gcd(k,n)=1}e^{2\pi ik/n}=\sum_1^n\sum_{d\mid\gcd(k,n)}\mu(d)e^{2\pi ik/n}=\sum_{d\mid n}\mu(d)\sum_0^{(n/d)-1}e^{2\pi idk/n}$$ The inner sum is the sum of all the $m$th roots of unity where $m=n/d$, so it's zero except for $d=n$ when it's $1$. So, the original sum evaluates to $\mu(n)$.<|endoftext|> TITLE: If the action of a group $G$ on $\mathbb{R}$ is properly discontinuous then G is isomorphic to $\mathbb{Z}$? QUESTION [5 upvotes]: Let $G$ be a topological group acting on a topological space $X$, such that the map $f: G \times X \rightarrow X:(g,x)\mapsto g*x$ is continuous. We say that this action is $properly\;discontinuous$ if for every element $x\in X$ there exists an open neighbourhood $U_{x}$ of $x$ such that $gU_{x}\cap U_{x}\neq \emptyset$ with $g\in G$ implies that $g=1_{G}$. I am trying to show that if the action of a group $G$ on $\mathbb{R}$ is properly discontinuous then $G$ is isomorphic to $\mathbb{Z}$. best regards REPLY [4 votes]: It suffices to show the orbit $G(0)$ of $0$ is a discrete set, and any $x\in\mathbb{R}$ is "sandwiched" by two representatives of $G(0)$. Then the space $X/G$ is homeomorphic to $\mathbb{S}^1$ (the action is continuous), and so $G\approx\pi_1(X/G)=\pi_1(S^1)=\mathbb{Z}$. The discreteness is natural. Suppose $G(0)$ is bounded above. Let $s\equiv\sup G(0)$; since $G\neq\{e\}$ and acts properly discontinuously, there is some $s'=gs\neq s$ with $g\neq e$. If $s'<s$, replace $g$ by $g^{-1}$: every $g\neq e$ acts by a fixed-point-free, hence increasing, homeomorphism of $\mathbb{R}$, so then $g^{-1}s>s$. So assume $s'>s$. Then for an interval $I$ containing $s$ such that $gI\cap I=\emptyset$ (such an interval exists because $G$ acts properly discontinuously), all elements of $gI$ are larger than $s$ and so $gI$ contains no representative of $G(0)$. This is however a contradiction since any interval $I$ containing $s=\sup G(0)$ (and so $gI$) must contain some representative of $G(0)$. An analogous argument using $i\equiv\inf G(0)$ shows $G(0)$ is also not bounded below.<|endoftext|> TITLE: How can a Markov chain be written as a measure-preserving dynamic system QUESTION [6 upvotes]: From http://masi.cscs.lsa.umich.edu/~crshalizi/notabene/ergodic-theory.html irreducible Markov chains with finite state spaces are ergodic processes, since they have a unique invariant distribution over the states. (In the Markov chain case, each of the ergodic components corresponds to an irreducible sub-space.)
By "ergodic processes", I understand it to be the same as "ergodic measure-preserving dynamic system", if I am correct. As far as I know an ergodic measure-preserving dynamic system is a mapping $\Phi: T \times S \to S$ that satisfies a couple of properties, where $S$ is the state space, and $T$ is the time space. Sometimes there is a measure preserving mapping on $S$ that can generate the system by repeating itself. So I wonder how a Markov chain can be written as a mapping $\Phi: T \times S \to S$, and what the measure preserving mapping that generates the Markov chain is? Thanks! REPLY [7 votes]: The article talks about a (stationary) Markov chain ${(X_n)}_{n \in \mathbb{Z}}$ in discrete time with each $X_n$ taking its values in a finite set $E$. The canonical space of the Markov chain is the product set $E^{\mathbb{Z}}$. The trajectory $X=(\ldots, X_{-1}, X_{0}, X_1, \ldots)$ of the Markov chain is a random variable taking its values in $E^{\mathbb{Z}}$. Denoting by $\mu$ its distribution (which could be termed as the law of the Markov process) then $\mu$ is invariant under the classical shift operator $T \colon E^{\mathbb{Z}} \to E^{\mathbb{Z}}$. Then the Markov chain can be considered as the dynamical system $(T,\mu)$. In fact here we only use the fact that ${(X_n)}_{n \in \mathbb{Z}}$ is a stationary process. In the Markov case we can say in addition that the ergodicity of $T$ is equivalent to the irreducibility of ${(X_n)}_{n \in \mathbb{Z}}$.<|endoftext|> TITLE: Supremum and Infimum of Infinite Sets QUESTION [9 upvotes]: I just read in a textbook that "An infinite set may not have a maximum or minimum, but it will always have a supremum and infimum." Is this true? What, for example, is the supremum of the real numbers, or the infimum of the real numbers? I can imagine that any bounded infinite set has a supremum/infimum, but if a set is unbounded (e.g. $\mathbb{R}$), then how can it have a greatest lower bound or least upper bound? REPLY [7 votes]: Your idea is exactly right, but we often have a convention that a set with no upper bound, such as the positive real numbers, has a supremum of "$\infty$", and a set with no lower bound has an infimum of "$-\infty$". In this sense every set has a supremum and an infimum, although it may not have a minimum or a maximum.<|endoftext|> TITLE: About problem in complex integrals QUESTION [5 upvotes]: I solved this problem in complex integrals. Is my answer a correct ? Here $z$ is a complex value: $$ C:|z-1|=1 \ \ \ \ \ \mbox{integral path} $$ $$ \int_C\ \frac{2z^2-5z+1}{z-1}\ dz $$ My answer $$ z=1+e^{i\theta} \ \ \ \ \frac{dz}{d\theta}=ie^{i\theta} $$ $$ \int_{0}^{2\pi}\ \frac{-e^{i\theta}+2e^{2i\theta}-2}{e^{i\theta}} \cdot\ ie^{i\theta} d\theta $$ $$ =\left[ -e^{i\theta}+ e^{2i\theta} -2i\theta \right]^{2\pi}_0=-4\pi i $$ REPLY [4 votes]: If you are allowed to use the Cauchy's integral formula, then $$\int_{C} \frac{2 z^2 - 5x + 1}{z-1} dz = {2\pi i} \big( 2 z^2 - 5z + 1)_{z=1} = -4 \pi i,$$ showing that you did a great job.<|endoftext|> TITLE: $\pi$, Dedekind cuts, trigonometric functions, area of a circle QUESTION [12 upvotes]: (I should say at the outset that this question is broad, and may need splitting up. Although I ask several questions, I present them as one because they are not independent of one another, and I am seeking a unified answer.) My questions are: How can we establish that the circumference $C$ and area $A$ of a circle of radius $r$ satisfy $C = 2\pi r$ and $A = \pi r^2$ for some constant, $\pi$? 
How can we prove that $\pi$ is an element of the real field (e.g., a Dedekind cut)? How can we prove (perhaps trivial, if the above are satisfied) that there are real functions $\sin(x)$ and $\cos(x)$, which have the usual analytic properties, and also satisfy the usual geometric intuition? It seems like most calculus textbooks sort of weasel out on these questions. Usually, they ignore the first two questions pretty much completely, and their derivation on the third point is a filling-out of the following outline: (1) Define $\sin(x)$ as height of triangle of central angle $x$ inscribed in the unit circle, where $x$ is in radians, and $\cos(x)$ as the length of its base. Assume the usual values for these functions at $k(\frac{\pi}{2}), k \in \mathbb{N}$. (2) Notice, by assuming (a) that $A = \pi r^2$ and $C = 2\pi r$, and (b) that the area of a sector is proportionate to the length of arc subtended by the angle on the circumference, that the area $S$ of a sector of angle $x$ in a unit circle is $\frac{x}{2}$, since $$\frac{S}{\pi r^2} = \frac{x}{2 \pi r}, r = 1 \implies S = \frac{x}{2}$$ (Apparently, assumption (b) is Euclid VI 33. I haven't studied the proof, though.) (3) Prove, using a geometric argument, that $\frac{\sin(x)}{2} < \frac{x}{2} < \frac{\tan(x)}{2}$ for $x \in [0,\frac{\pi}{2})$. Deal similarly (not identically) with $(\frac{-\pi}{2}, 0]$. Prove that we always have $1 > \frac{\sin(x)}{x} > \cos(x)$ on $(\frac{-\pi}{2}, \frac{\pi}{2})$. (4) Derive (by a geometric argument, as done here) the usual angle addition formulas. (5) Noting that $|\sin(x)| < |x|$ (geometrically), conclude that $\lim_{x \to 0} \sin(x) = 0$. Use the identity - derived from (4) - that $$\cos(x) = 1 - 2\sin^2(\frac{x}{2}) = (1 - \sqrt{2}\sin(\frac{x}{2}))(1 + \sqrt{2}\sin(\frac{x}{2}))$$ and the product theorem for limits to conclude that $\lim_{x \to 0} \cos(x) = 1$. (6) Use the "Squeeze Theorem" and (3) to prove that $\lim_{x \to 0} \frac{\sin(x)}{x} = 1$, $\lim_{x \to 0} \frac{\cos(x) - 1}{x} = 0$. Use (4) and (5) to establish continuity at all other values - e.g., $$\lim_{h \to 0} \sin(x_0 + h) = \lim_{h \to 0}\sin(x_0)\cos(h) + \sin(h)\cos(x_0) = \sin(x_0)$$ (7) Now prove that $\sin(x), \cos(x)$ are differentiable. A notably different approach is that of Spivak's Calculus. Spivak tacitly assumes that the first question has been answered, and notices that in that case, $$ \pi = 2\int_{-1}^{1} \sqrt{1-x^2} dx $$ which resolves the second question, albeit not too directly. Also assuming that the area of a sector of an angle of $x$ radians is $\frac{x}{2}$, he defines $$ A(x) = \frac{x \sqrt{1- x^2}}{2} + \int_{x}^{1} \sqrt{1-t^2} dt$$ The area function is a function of the $x$-coordinate, not an angle $x$; it tends from $\frac{\pi}{2}$ to $0$ as $x$ goes from $-1$ to $1$. However, $\forall x \in [0, \pi]$, we have $\exists !y \in [-1, 1]: A(y) = \frac{x}{2}$; this $y$ we set as $\cos(x)$, and we define $\sin(x) = \sqrt{1 - \cos^2(x)}$. (The uniqueness of the value of $\cos(x)$ is guaranteed by the fact that $A(x)$ is decreasing and continuous.) The remainder of Spivak's derivation is about extending these functions (by symmetry) to the rest of $\mathbb{R}$. Although I am familiar with these derivations, they are rather prominently silent about the first question; I've never seen any answer to that question which impressed me as rigorous. 
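(Numerically, the integral formula quoted above does pin down the familiar value; here is a quick Python sketch using scipy.integrate, purely as a sanity check of the formula and not, of course, a construction of the real number $\pi$.)

```python
from scipy import integrate
import numpy as np

# pi = 2 * integral_{-1}^{1} sqrt(1 - x^2) dx  (area of the unit disk)
val, err = integrate.quad(lambda x: np.sqrt(1.0 - x * x), -1.0, 1.0)
print(2 * val)                       # ~3.141592653589793
print(abs(2 * val - np.pi) < 1e-9)   # True
```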
I am not at all sure that there is an actual cut which one can write down (i.e., in set-builder notation, as an explicit subset of $\mathbb{Q}$ in the usual way) for $\pi$. REPLY [13 votes]: The following answers your first question. We do not mention sine or cosine. We will use some basic integration techniques, but of course we will not use trigonometric substitution! Define the circle of radius $r$ with centre the origin by the equation $x^2+y^2=r^2$, and the disk by $x^2+y^2\le r^2$. We first show that the area of a disk of radius $r$ is a constant times $r^2$. By symmetry the area is $$4\int_0^r\sqrt{r^2-x^2}\,dx.$$ Make the change of variable $x=rt$. Our area is $$r^2\left(4\int_0^1 \sqrt{1-t^2}\,dt\right).$$ Thus $4\int_0^1 \sqrt{1-t^2}\,dt$ is the desired constant. We could call it $\pi$. But let us call it $4k$. For the circumference, the usual arclength formula, after some simplification, gives that the circumference is $$4\int_0^r \frac{r\,dx}{\sqrt{r^2-x^2}}.$$ The change of variable $x=rt$ transforms this to $$r\left(4\int_0^1 \frac{dt}{\sqrt{1-t^2}}\right).$$ So we have $r$ times a constant. Which constant? We will evaluate $\int_0^1\sqrt{1-t^2}\,dt$ (yes!) in a funny way, by parts. Let $u=\sqrt{1-t^2}$ and $dv=dt$. Then $du=-\frac{t}{\sqrt{1-t^2}}\,dt$ and $v=t$. After a little while we find that $$\int_0^1\sqrt{1-t^2}\,dt=\int_0^1 \frac{t^2\,dt}{\sqrt{1-t^2}}.$$ But the numerator on the right is $1-(1-t^2)$. And $\dfrac{1-t^2}{\sqrt{1-t^2}}=\sqrt{1-t^2}$. Thus $$\int_0^1\sqrt{1-t^2}\,dt=\int_0^1 \frac{dt}{\sqrt{1-t^2}}-\int_0^1\sqrt{1-t^2}\,dt.$$ We conclude that $$\int_0^1 \frac{dt}{\sqrt{1-t^2}}=2\int_0^1\sqrt{1-t^2}\,dt=2k.$$ It follows that the circumference of a circle of radius $r$ is $8kr$. Remark: One can introduce the trigonometric functions via integrals, as mentioned in the OP. Then their basic properties, such as the addition laws, are not difficult to derive. It is, however, mildly tedious.<|endoftext|> TITLE: An exercise in the Riemannian geometry book QUESTION [5 upvotes]: If $M$ is a smooth closed $n$-dimensional Riemannian manifold which is Riemannian embedded in $\mathbb R^{n+1}$, then there exists a point $p \in M$ such that the sectional curvatures at $p$ are all positive. Can any one give me a hint for this problem? I was considering the maximum $p$ of function $|x|^2$ on $M$, then near $p$, $M$ is "wrapped" by some $S^n$ and has the same tangent space as $S^n$. But I am stuck there. I have made some progress: We consider the functions $L_q(x)=|x-q|^2$. Then we have a maximum $p$ of $L_q$ and we fix the unit vector $v=\frac{p-q}{|p-q|}$ throughout, so $v$ is the normal vector at $p$. Now if we set $q(t)=p+tv$, then when $t\leq-|p-q|$, $p$ is always the maximum of function $L_{q(t)}$ (this is true if we draw a ball at $q(t)$ with radius $|p-q(t)|$, then all $M$ is contained in this ball). Therefore when $t$ sufficiently tends to $-\infty$, the Hessian $L_{q(t)}$ is always semi-positive definite. Now if we fix a coordinate neighborhood aroud $p$, then Hessian matrix $H$ of $L_{q(t)}$ at $p$ is given by $H=2(F-tS)$ where $F,S$ are the first and second fundamental forms of $M$ at $p$. So we conclude that $S$ has to be semi-positive definite. But how can we move further to say $S$ is positive definite? REPLY [2 votes]: There exists a linear function $L:\mathbb R^{n+1}\to\mathbb R$ such that $L|_M$ has non-degenerate critical points (by Sard's theorem). 
If you choose a maximum of $L$ on $M$ then the second quadratic form at that point is positively definite, hence all the sectional curvatures at that point are positive. edit (why positive 2nd fund. form implies positive curvature): if $K$ is the 2nd fundamental form then the curvature tensor is given by $$(R(u,v)u,v)=K(u,u)K(v,v)-K(u,v)^2$$ and if $u,v$ are linearly independent and $K$ is positively definite then the RHS is positive. Sectional curvatures are the LHS if $(u,u)=(v,v)=1$ and $(u,v)=0$.<|endoftext|> TITLE: Reference for a tangent squared sum identity QUESTION [12 upvotes]: Can anyone help me find a formal reference for the following identity about the summation of squared tangent function: $$ \sum_{k=1}^m\tan^2\frac{k\pi}{2m+1} = 2m^2+m,\quad m\in\mathbb{N}^+. $$ I have proved it, however, the proof is too long to be included in a paper. So I just want to refer to some books or published articles. I also found it to be a special case of the following identity, $$ \sum_{k=1}^{\lfloor\frac{n-1}{2}\rfloor}\tan^2\frac{k\pi}{n} = \frac16(n-1)(-(-1)^n (n + 1) + 2 n - 1),\quad n\in\mathbb{N}^+ $$ which is provided by Wolfram. Thank you very much! REPLY [3 votes]: Jolley, Summation of Series, formula 445 is $$\sum_{k=0}^{n-1}\tan^2\left(\theta+{k\pi\over n}\right)=n^2\cot^2\left({n\pi\over2}+n\theta\right)+n(n-1)$$ Let $\displaystyle\theta={\pi\over2m+1}$, $n=2m+1$ and we almost have your sum; we have twice your sum, since the angles here go from just over zero to just under $\pi$, while in your sum they go from just over zero to just under $\pi/2$, and $\tan^2\theta=\tan^2(\pi-\theta)$. Jolley's reference is to page 73 of S L Loney, Plane Trigonometry, Cambridge University Press, 1900. This book is best known from its part in Ramanujan's early education.<|endoftext|> TITLE: What is the importance of definite and semidefinite matrices? QUESTION [14 upvotes]: I would like to know some of the most important definitions and theorems of definite and semidefinite matrices and their importance in linear algebra. Thanks for your help REPLY [4 votes]: Positive definite matrices have applications in various domains like physics, chemistry etc. In CS, optimization problems are often treated as quadratic equations of the form $Ax=b$ where $x$ can be any higher degree polynomial. To solve such equations, we need to calculate $A^{-1}$ which is then used to find out $x=A^{-1}.b$. Computing $A^{-1}$ is time consuming for complex higher rank matrices. Instead we use Cholesky’s decomposition: $A = L.L^T$, if the matrix $A$ is symmetric, positive definite then we can decompose it into lower triangular matrix $L$. Finding the inverse of this lower triangular matrix $L$ and its transpose $L^T$ is computationally efficient process. Hence we get huge performance gain on $x = (L^T)^{-1}.(L^{-1}).b$<|endoftext|> TITLE: Apparently cannot be solved using logarithms QUESTION [9 upvotes]: This equation clearly cannot be solved using logarithms. $$3 + x = 2 (1.01^x)$$ Now it can be solved using a graphing calculator or a computer and the answer is $x = -1.0202$ and $x=568.2993$. But is there any way to solve it algebraically/algorithmically? REPLY [15 votes]: I have solved a question similar to this before. 
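(Before the general Lambert-W formula in the next paragraph, here is a quick numerical confirmation of the two roots quoted in the question — a minimal Python sketch; the bracketing intervals passed to scipy's root finder were chosen by hand for illustration.)

```python
from scipy.optimize import brentq

f = lambda x: 3 + x - 2 * (1.01 ** x)

# Each bracket contains exactly one sign change of f.
r1 = brentq(f, -2, 0)      # ~ -1.0202
r2 = brentq(f, 500, 600)   # ~ 568.2993
print(r1, r2)
```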
In general, you can have a solution of the equation $$ a^x=bx+c $$ in terms of the Lambert W-function $$ -\frac{1}{\ln(a)}W_k \left( -\frac{1}{b}\ln(a) {{\rm e}^{-{\frac {c\ln(a) }{b}}}} \right)-{\frac {c}{b}} \,.$$ Substituting $ a=1.01 \,,b=\frac{1}{2}\,,c=\frac{3}{2}$ and considering the values $k=0$ and $k=-1$, we get the zeroes $$x_1= -1.020199952\,, x_2=568.2993002 \,. $$<|endoftext|> TITLE: $\int_{0}^{\infty} \frac{\cos x - e^{-x}}{x} \mathrm dx$ Evaluate Integral QUESTION [11 upvotes]: Evaluate $$\int_{0}^{\infty} \frac{\cos x - e^{-x}}{x} \ dx$$ REPLY [3 votes]: $$\int_0^\infty {{{\cos x - {e^{ - x}}} \over x}dx} = \int_0^\infty {\left( {{s \over {1 + {s^2}}} - {1 \over {1 + s}}} \right)ds} = \mathop {\lim }\limits_{s \to \infty } \log {{\sqrt {{s^2} + 1} } \over {s + 1}} = 0$$<|endoftext|> TITLE: Question about the N/C theorem QUESTION [5 upvotes]: Let $H \leq G$. Define a map $f: N(H) \rightarrow Aut(H)$ given by $f(g) = \phi_g$, where $\phi_g$ is the inner automorphism of H induced by $g$: $\phi_g(h) = ghg^{-1} \forall h \in H$. That this map $f$ is a homomorphism is clear, but I have trouble trying to see why $Kerf = C(H)$. Can someone explain this to me? $N(H)$ is the normalizer of $H$ in $G$ and $C(H)$ the centralizer. REPLY [7 votes]: So, we have $g \in \ker f$ iff $\phi_g = \mathrm{id}_H$, that is iff $\phi_g(h) = h$ for all $h \in H$, so $ghg^{-1} = h$ for all $h$, which means $gh = hg$ for all $h \in H$. This holds exactly iff $g \in C(H)$ by definition of the centralizer.<|endoftext|> TITLE: How do you distinguish the difference between a negative sign and a minus sign in Algebra QUESTION [8 upvotes]: Both the negative sign symbol and subtraction's minus symbol look the same, so how does anyone tell them apart? REPLY [3 votes]: I beg to differ with the wikipedia description, which seems to be written for the feeble of mind, but does not describe the use of the sign "$-$" in mathematics, or in computer programming, correctly. Notably there is no need at all to distinguish the use of a "negative indicator" (case 2., as in $-5$) from the use as a (unary) "negation operator" (case 3. ,as in $-x$, or I suppose for instance in $-\pi$ or in $-i$ when $i$ designates the imaginary unit). One can of course decree that case 2. applies if and only if the sign "$-$" has no left operand and a right operand that is an explicit numeric constant (a sequence of digits, possible fractional part starting with decimal point); one then needs to remove that case from case 3. to avoid ambiguity. But the point is that case 3. perfectly well covers cases like $-5$ or $-5.272023$: the value denoted is the opposite of that of the (right) operand of the minus sign. Saying that case 2. is a "negative sign" has no added value; it just happens that explicit numeric constants as described above always designate non-negative numbers, so that particular case always describes a non-positive real number. Making the distinction just raises useless questions like whether there is a negative indicator in $-\frac15$, and in $-1/5$? Or in $-0.00$, which is no more negative than $0$ is positive. Therefore I would say There is no such thing as a negative sign in mathematics. To expresss the fact that the value of an expression $E$ is negative, one writes $E<0$, and this does not involve the sign "$-$" at all. The basic meaning of "$-$" is as a binary operator, where $x-y$ describes the unique value $z$ such that $z+y=x$. 
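(To see the operator/operand distinction being described here in a concrete setting, a small Python sketch using the standard ast module; Python is just one example language, in line with the remark about programming languages made below.)

```python
import ast

# At the AST level (before any constant folding), "-5" is the unary minus
# applied to the literal 5, not a single negative constant.
print(ast.dump(ast.parse("-5", mode="eval").body))
# -> UnaryOp(op=USub(), operand=Constant(value=5))   (exact dump text varies by version)

# "x - 5" has a left operand, so it is the binary operator.
print(ast.dump(ast.parse("x - 5", mode="eval").body))
# -> BinOp(left=Name(id='x'), op=Sub(), right=Constant(value=5))
```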
When used without a left operand (so as a unary operator) the value $0$ is implicitly taken as left operand. So $-x$ means $0-x$. And $-5$ means $0-5$, the unique value $z$ such that $z+5=0$, which might seem roundabout, but is correct. The fact that $-5$ is not an explicit constant expression like $24$, but the resultof applying a unary operator to "$5$", and that the value denoted by "$-5$" does not have any direct numeric representation should not be shocking; neither do $\pi$ or $\frac17$ (because decimals have to stop somewhere) or $3+4i$ have any numeric constant designating them, without using operators. As far as programming goes, I hardly know any language whose syntax for numeric constants allows for $-5$; most if not all would take that to be the unary minus applied to the constant $5$. (However if scientific notation with exponent-of-$10$ is included, the possible minus sign for the exponent will be part of the constant syntax, as an operator makes no sense in that position.) So the only relevant question is whether "$-$" is used in a specific case as unary or binary operator, which is determined merely by the absence or presence of an applicable left operand; which is the case is quite clear in practice. I might add that it makes sense in the points above to reverse the order in discussing unary and binary use, making $-x$ the more basic case (the unique value $z$ with $z+x=0$), and considering $x-y$ and abbreviation for $x+-y$; in the end for the meaning of each operator it makes no difference which approach is used.<|endoftext|> TITLE: Why is the Borel Algebra on R not equal the powerset? QUESTION [8 upvotes]: The borel algebra on the topological space R is defined as the σ-algebra generated by the open sets (or, equivalently, by the closed sets). Logically, I thought that since this includes all the open sets (a,b) where a and b are real numbers, then, this would be equivalent to the power set. For example, the set (0.001, 0.0231) would be included as well as (-12, 19029) correct? I can't think of any set that would not be included. However, I have read that the Borel σ-algebra is not, in general, the whole power set. Can anyone give a gentle explanation as to why this is the case? REPLY [14 votes]: You can show that there are $\mathfrak{c} = 2^{\aleph_0}$ Borel subsets of the real line, and so by Cantor's Theorem ($|X| < | \mathcal{P} (X)|$) it follows that there are non-Borel subsets of $\mathbb{R}$. To see that there are $\mathfrak{c}$-many Borel subsets of $\mathbb{R}$, we can proceed as follows: define $\Sigma_1^0$ to be the family of all open subsets of $\mathbb{R}$; for $0 < \alpha < \omega_1$ define $\Pi_\alpha^0$ to be the family of all complements of sets in $\Sigma_\alpha^0$ (so that $\Pi_1^0$ consists of all closed subsets of $\mathbb{R}$); for $1 < \alpha < \omega_1$ define $\Sigma_\alpha^0$ to be the family of all countable unions of sets in $\bigcup_{\xi < \alpha} \Pi_\xi^0$. Then you can show that $B = \bigcup_{\alpha < \omega_1} \Sigma_\alpha^0 = \bigcup_{\alpha < \omega_1} \Pi_\alpha^0$ is the family of all Borel subsets of $\mathbb{R}$. Furthermore, transfinite induction will show that $| \Sigma_\alpha^0 | = \mathfrak{c}$ for all $\alpha < \omega_1$, which implies that $\mathfrak{c} \leq | B | \leq \aleph_1 \cdot \mathfrak{c} = \mathfrak{c}$. Specific examples of non-Borel sets are in general difficult to describe. 
Perhaps the easiest to describe is a Vitali set, obtained by taking a representative from each equivalence class of the relation $x \sim y \Leftrightarrow x -y \in \mathbb{Q}$. Such a set is not Lebesgue measurable, and hence not Borel. Another example, due to Lusin, is given in Wikipedia.<|endoftext|> TITLE: How to find the radix (base) of a number given its representation in another radix (base)? QUESTION [9 upvotes]: What's the method to find the base of any given number? E.g. find $r$ such that $(121)_r=(144)_8$, where $r$ and $8$ are the bases. So how do I find the value of $r$? REPLY [5 votes]: remember the bold letters are the powers (121)r=(144)8 Now r and 8 is the base so, $1\cdot r^2+2 \cdot r^1+1\cdot r^0 = 1 \cdot 8^2+4\cdot 8^1+4 \cdot 8^0$ $r^2+2r+1=64+32+4$ $r^2+2r+1=100$ $(r+1)² =100$ $r+1=10$ $r= 10-1$ $r=9$<|endoftext|> TITLE: A metric on $\mathbb{R}^n$ such that $d(\lambda x, \lambda y)=|\lambda| d(x,y)$ which is not induced by a norm QUESTION [6 upvotes]: Let $V=\mathbb{R}^n$. Let $d:V \times V\rightarrow \mathbb{R}$ a metric on $\mathbb{R}^n$. Assume that for any $x,y\in V$ and $\lambda \in \mathbb{R}$, we have $d(\lambda x, \lambda y) = |\lambda|d(x,y)$. Is $d$ necessarily induced by a norm? Motivation: I've been thinking of $\pi$ and thought about why the ratio between a circles's circumference and its radius is constant. The proof is easy and is applicable to any norm. I think the "positive homogeneity" condition I posed on the metric above is enough for this ratio to be constant. REPLY [5 votes]: The answer is no. You need translational invariance as well; then it's a pretty well-known theorem (see e.g. here). As a counterexample when leaving out the translational invariance, consider: $$d: \Bbb R^n \times \Bbb R^n \to \Bbb R_{\ge 0}: d (x,y)=\begin{cases} \|x\|+\|y\| & \text{if $x \ne y$}\\ 0 & \text{otherwise.} \end{cases}$$ This metric is sometimes referred to as the "metric of the French railway system", although there are similar metrics with the same name (cf. the comments).<|endoftext|> TITLE: Compact subsets in $l_\infty$ (converse of my last question) QUESTION [6 upvotes]: (Converse of my last question) If $A \subseteq \ell_\infty$, and $A=\{l\in \ell_\infty: |l_n| \le b_n \}$, where $b_n$ is a sequence of real, non-negative numbers, then if $\lim (b_n) = 0$ it must mean that $A$ is compact subset of $X$. Take any sequence of sequences $(x_n)$, our goal is to construct a subsequence, $(x_{n_k})$ which will converge. Suppose for $n \geq N$ we have $|b_n|<\epsilon$ (by convergence of $(b_n)$). For the first $N-1$ points, however, I thought we could use Bolzanno-Weirstrass on each of the $N-1$ first terms (since $|x_n| \le b_n \forall n$) in the following way: apply Bolzano Weirstrass on all the first terms, then for the second terms apply it on the subsequence we got from the first terms, and so on... and claiming this subsequence will converge to $\{x_1,x_2,...,x_{N},...\}$ - edited, where $x_i$ is the limit of the $i_{th}$ subsequence. However, the subsequence created could have a few cases where we end up with no terms at the end of this inductive argument. My teacher told me to use Cantor Diagonalization to avoid this, but I don't see how this would work. REPLY [3 votes]: With diagonal argument: the sequence $\{x_n^{(1)}\}$ is bounded, hence we can find $A_1$, an infinite subset of $\Bbb N$, such that $\{x_n^{(1)}\}_{n\in A_1}$ is convergent. 
Then construct by induction a decreasing sequence $\{A_k\}$ of infinite subsets of $\Bbb N$ such that $\{x_n^{(k)}\}_{n\in A_k}$ converges to some $x^{(k)}$. Then denote $j_k$ the $k$-th element of $A_k$, and consider the sequence $x_{j_k}$. It's a subsequence of $\{x_n\}$, and for each $j$, $x_{n_k}^{(j)}\to x^{(j)}$. The fact that $b_n\to 0$ gives convergence in $\ell_{\infty}$. Using pre-compactness: fix $\varepsilon>0$, and $N$ such that $|b_n|<\varepsilon$ if $n\geq N$. Then we use the fact that $\prod_{j=1}^N[-b_j,b_j]\subset \Bbb R^n$ is precompact to get $v_1,\dots,v_l$ such that each element of $\prod_{j=1}^N[-b_j,b_j]$ is in some $B(v_j,\varepsilon)$. Finally, define $w_j:=(v_j,0,\dots,0)$. As $A$ is closed and precompact in a Banach space, it's a compact set.<|endoftext|> TITLE: the elliptic curves with j-invariant zero QUESTION [5 upvotes]: Let $B\in K^\ast$, where $K$ is a number field. Let $y^2=X^3+B$ be the Weierstrass equation for an elliptic curve $E_B$ over $K$. Note that the $j$-invariant of $E$ is zero. When is $E_B$ isomorphic to $E_{B^\prime}$ over $K$? (Here $B^\prime \in K^\ast$.) Does this happen if and only if $B=B^\prime$? Or does it also happen if $B^\prime = u B$, where $u$ is a unit in $O_K^\ast$? How do I determine the semi-stable reduction of $E_B$? That is, we know that $E_B$ has potential good reduction. How do I determine $L/K$ such that the elliptic curve $E_{B}\otimes_K L$ has good reduction over $O_L$? If $B$ is a unit, then $E_B$ has good reduction. So we may and do assume $B$ is not a unit, i.e., $B$ is a prime. I have a feeling that we must choose $L$ such that its ramification over the primes dividing $B$ is of the "right" type. But how do I see what the necessary type is? REPLY [5 votes]: Your first question is answered in Silverman AEC I Prop X.5.4.iii (I have the old edition, don't know if this matters). Namely, the elliptic curves corresponding to $B$ and $B'$ are isomorphic if and only if $B$ and $B'$ differ by a 6th power of an element of $K$. In fact, more is true, namely, the set of isomorphism classes of elliptic curves with $j$-invariant $0$ is a torsor for $$ \frac{K^\times}{{(K^\times)}^6} $$ under the action $d*E_a= E_{ad}$ where $E_c$ denotes the elliptic curve with Weierstrass equation $y^2 = x^3 + c$. This explicit description also helps tackle your second question. Namely, if we take a field extension that makes $A$ a 6th power (i.e. we adjoin a 6th root of $A$), then there's a model with everywhere good reduction, at least if we ignore primes above 2 and 3. Dealing with those is going to be a pain - note that your model has bad reduction at $2$ and $3$ even if $A$ is a unit, for instance - but you can solve it by chasing denominators in the formula for the discriminant. I'm not sure if you're going to wind up needing to adjoin a root of $2$ or of $3$ or not.<|endoftext|> TITLE: prove that a function is an inner product QUESTION [11 upvotes]: I would appreciate some assistance in answering the following problems. We are moving so quickly through our advanced linear algebra material, I can't wrap my head around the key concepts. Thank you. Let $V$ be the space of all continuously differentiable real valued functions on $[a, b]$. (i) Define $$\langle f,g\rangle = \int_a^bf(t)g(t) \, dt + \int_a^bf'(t)g'(t) \, dt.$$ Prove that $\langle , \rangle$ is an inner product on $V$. (ii) Define that $||f|| = \int_a^b|f(t)| \, dt + \int_a^b|f'(t)| \, dt$. Prove that this defines a norm on V. 
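(Before the proof below, a quick numerical sanity check of the two definitions on sample functions — a minimal Python sketch; the interval $[a,b]=[0,1]$ and the functions $t^2$ and $\sin t$ are arbitrary choices for illustration, and the crude Riemann sum is only meant as a check.)

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 20001)            # [a, b] = [0, 1], chosen for illustration
dx = xs[1] - xs[0]
integral = lambda v: float(np.sum(v) * dx)   # crude Riemann sum, fine for a sanity check

def ip(f, fp, g, gp):    # <f, g> = int f g + int f' g'
    return integral(f(xs) * g(xs)) + integral(fp(xs) * gp(xs))

def norm(f, fp):         # ||f|| = int |f| + int |f'|
    return integral(np.abs(f(xs))) + integral(np.abs(fp(xs)))

f, fp = (lambda x: x**2), (lambda x: 2 * x)
g, gp = np.sin, np.cos

print(np.isclose(ip(f, fp, g, gp), ip(g, gp, f, fp)))      # symmetry
print(ip(f, fp, f, fp) > 0)                                 # positivity on a nonzero f
print(norm(lambda x: f(x) + g(x), lambda x: fp(x) + gp(x))
      <= norm(f, fp) + norm(g, gp))                         # triangle inequality
```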
REPLY [13 votes]: Part (i) If you ever want to show something is an inner product, you need to show three things for all $f, g \in V$ and $\alpha \in \mathbb{R}$: Symmetry: $\newcommand{\inp}[1]{\left\langle #1 \right\rangle}$ $\inp{f, g} = \inp{g, f}$ (or, if the field is the complex numbers, $\inp{f,g} = \overline{\inp{g,f}}$, i.e. "conjugate symmetry"). Linearity: $\inp{f + g, h} = \inp{f, h} + \inp{g, h}$ for all $h \in V$, and $\inp{\alpha f, g} = \alpha \inp{f, g}$. Notice the latter also implies $\inp{f, \alpha g} = \alpha \inp{f, g}$ ($\overline{\alpha}$ in the complex case) by symmetry. Positive-definite: $\inp{f, f} \ge 0$ with equality if and only if $f = 0$, the zero function. The first two properties follow directly from the definition of an integral. For the third property, you have $$ \int_a^b f^2 + \int_a^b (f')^2 \ge 0 $$ Now when is this equal to $0$? Well, recall that if a continuous nonnegative function is positive anywhere, then its integral is positive. Both integrals are nonnegative, so each must vanish; since $f^2$ is continuous and nonnegative, this means $f = 0$ everywhere, and then $f' = 0$ everywhere also. Conversely, if $f = 0$ then both integrals vanish, so equality holds exactly when $f = 0$. Part (ii) If you ever want to show something is a norm, you need to show three things for all $f, g \in V$ and $\alpha \in \mathbb{R}$: Scales in absolute value: $\newcommand{\norm}[1]{\left\| #1 \right\|}$ $\norm{\alpha f} = |\alpha| \norm{f}$. Triangle Inequality: $\norm{f + g} \le \norm{f} + \norm{g}$. Separates Points: $\norm{f} = 0$ if and only if $f = 0$. Again the first two properties follow directly from the definition of the norm, which is an integral. For the third property, we use the same property of integrals we used before: if a continuous nonnegative function is positive anywhere, its integral is positive. Since $|f|$ is continuous when $f$ is, this means $\int |f| = 0$ if and only if $f = 0$, which in turn implies $\int |f'| = 0$ as well. You will notice parts (i) and (ii) seem very similar. It's almost the case that you can use part (i) for part (ii), but not quite. The problem is that $\inp{f,f}$ is not equal to $\norm{f}^2$ as expected.<|endoftext|> TITLE: Find the number of values of $a$? QUESTION [5 upvotes]: Consider a quadratic equation; $$ x^2 + 7x - 14(a^2 + 1) = 0, $$ (where $a$ is an integer) For how many different values of $a$ will the equation have at least one integer root? I found its discriminant; its square root comes out to be $$ (49 + 56(a^2+1))^{1/2}. $$ This should be a perfect square and also odd so that at least one root is an integer. But I am unable to get the values. How can I achieve this? Thanks in advance. REPLY [3 votes]: If there is an integer solution, both (conceivably equal) solutions are integers, since their sum is $-7$. Moreover, since $7$ divides the last two terms, it must divide any integer solution $x$. So let the solutions be $7p$ and $7q$. The product of the solutions is $49pq$. But it is also $-14(a^2+1)$. Now use Mark Bennet's calculation that shows that $a^2+1$ cannot be divisible by $7$.<|endoftext|> TITLE: Supremum of a union of bounded sets QUESTION [9 upvotes]: Given $A$, $B$ are bounded subsets of $\Bbb R$. Prove $A\cup B$ is bounded. $\sup(A \cup B) =\sup\{\sup A, \sup B\}$. Can anyone help with this proof? REPLY [15 votes]: Without loss of generality assume that $\sup A\le\sup B$, so that $\sup\{\sup A,\sup B\}=\sup B$, and you simply want to show that $\sup(A\cup B)=\sup B$. Clearly $\sup(A\cup B)\ge\sup B$, so it suffices to show that $\sup(A\cup B)\le\sup B$. To show that $\sup(A\cup B)\le\sup B$, just prove that $\sup B$ is an upper bound for $A\cup B$, i.e., that $x\le\sup B$ for every $x\in A\cup B$.
This isn’t hard if you remember that we assumed at the start that $\sup A\le\sup B$.<|endoftext|> TITLE: Let $X$ be an infinite dimensional Banach space. Prove that every Hamel basis of X is uncountable. QUESTION [70 upvotes]: Let $X$ be an infinite dimensional Banach space. Prove that every basis of $X$ is uncountable. Can anyone help how can I solve the above problem? REPLY [97 votes]: It seems that the proof using the Baire category theorem can be found in several places on this site, but none of those questions is an exact duplicate of this one. Therefore I'm posting a CW-answer, so that this question is not left unanswered. We assume that a Banach space $X$ has a countable basis $\{v_n; n\in\mathbb N\}$. Let us denote $X_n=[v_1,\dots,v_n]$. Then we have: $X=\bigcup\limits_{n=1}^\infty X_n$ $X_n$ is a finite-dimensional subspace of $X$, hence it is closed. (Every finite-dimensional normed space is complete, see PlanetMath. A complete subspace of a normed space is closed. See also: Finite-dimensional subspace normed vector space is closed) $X_n$ is a proper subspace of $X$, so it has empty interior. See Every proper subspace of a normed vector space has empty interior So we see that $\operatorname{Int} \overline{X_n} = \operatorname{Int} X_n=\emptyset$, which means that $X_n$ is nowhere dense. So $X$ is a countable union of nowhere dense subsets, which contradicts the Baire category theorem. Some further references: Other questions and answers on MSE Is there an easy example of a vector space which can not be endowed with the structure of a Banach space Your favourite application of the Baire Category Theorem Two problems: When a countinuous bijection is a homeomorphism? Possible cardinalities of Hamel bases? In the question Cardinality of a Hamel basis of $\ell_1(\mathbb{R})$ you can learn even more - that the cardinality of the Hamel basis is at least $\mathfrak c=2^{\aleph_0}$. Online Banach spaces of infinite dimension do not have a countable Hamel basis at PlanetMath uncountable Hamel basis - post by Henno Bradsma from Ask an Analyst (Wayback Machine) Blog post Uncountability of Hamel Basis for Banach Space II from Matt Rosenzweig's Blog Books Corollary 5.23 in Infinite Dimensional Analysis: A Hitchhiker's Guide by Charalambos D. Aliprantis, Kim C. Border. A Short Course on Banach Space Theory By N. L. Carothers, p.25 Exercise 1.81 in Banach Space Theory: The Basis for Linear and Nonlinear Analysis by Marián Fabian, Petr Habala, Petr Hájek, Vicente Montesinos, Václav Zizler<|endoftext|> TITLE: What are the eigenvalues of matrix that have all elements equal 1? QUESTION [15 upvotes]: As in subject: given a matrix $A$ of size $n$ with all elements equal exactly 1. What are the eigenvalues of that matrix ? 
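(A quick numerical experiment — a minimal NumPy sketch, added purely for illustration — suggests the answer that the reply below then proves: the eigenvalue $n$ once, and the eigenvalue $0$ with multiplicity $n-1$.)

```python
import numpy as np

n = 5
A = np.ones((n, n))
vals = np.linalg.eigvalsh(A)   # A is symmetric, so eigvalsh is appropriate
print(np.round(vals, 10))      # [0. 0. 0. 0. 5.]: eigenvalue n once, 0 with multiplicity n-1
```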
REPLY [20 votes]: Suppose $\,\begin{pmatrix}x_1\\x_2\\...\\x_n\end{pmatrix}\,$ is an eigenvector of such a matrix corresponding to an eigenvalue $\,\lambda\,$, then $$\begin{pmatrix}1&1&...&1\\1&1&...&1\\...&...&...&...\\1&1&...&1\end{pmatrix}\begin{pmatrix}x_1\\x_2\\...\\x_n\end{pmatrix}=\begin{pmatrix}x_1+x_2+...+x_n\\x_1+x_2+...+x_n\\.................\\x_1+x_2+...+x_n\end{pmatrix}=\begin{pmatrix}\lambda x_1\\\lambda x_2\\..\\\lambda x_n\end{pmatrix}$$ One obvious solution to the above is $$W:=\left\{\begin{pmatrix}x_1\\x_2\\..\\x_n\end{pmatrix}\;;\;x_1+...+x_n=0\right\}\,\,\,,\,\,\lambda=0$$ For sure, $\,\dim W=n-1\,$ (no need to be a wizard to "see" this solution since the matrix is singular and thus one of its eigenvalues must be zero) Other solution, perhaps not as trivial as the above but also pretty simple, imo, is $$U:=\left\{\begin{pmatrix}x_1\\x_2\\..\\x_n\end{pmatrix}\;;\;x_1=x_2=...=x_n\right\}\,\,\,,\,\,\lambda=n$$ Again, it's easy to check that $\,\dim U=1\,$ . Now, just pay attention to the fact that $\,W\cap U=\{0\}\,$ unless the dimension of the vector space $\,V\,$ we're working on is divided by the definition field's characteristic (if you're used to real/complex vector spaces and you aren't sure about what the characteristic of a field is disregard the last comment) Thus, assuming this is the case, we get $\,\dim(W+U)=n=\dim V\Longrightarrow V=W\oplus U\,$ and we've thus found all the possible eigenvalues there are. BTW, as as side effect of the above, we get our matrix is diagonalizable.<|endoftext|> TITLE: The dual of the direct sum QUESTION [7 upvotes]: Let $X$, $Y$, $Z$ normed spaces If X$\cong Y\oplus Z$ why is $X^*\cong Y^*\oplus Z^*$? where $X^*$ is the dual of $X$. For example ${\ell^\infty}^*\cong\ell^1\oplus\mathrm{Null}\;C_0$ so if we take the double dual we find the ${\ell^\infty}^{**}\cong{\ell^1}^*\oplus (\mathrm{Null}\; C_0)^*$. I am not sure I understand why these equalities should hold? REPLY [12 votes]: Let $X \cong Y \oplus Z$, that is $Y$ and $Z$ can be regarded as closed, complemented subspaces of $X$. Now define $T \colon X^* \to Y^* \oplus Z^*$ by $Tx^* = (x^*|_Y, x^*|_Z)$. Then $T$ is bounded, as \begin{align*} \|Tx^*\| &= \|(x^*|_Y, x^*|_Z)\|\\ &\le C \bigl(\|x^*|_Y\| + \|x^*|_Z\|\bigr)\\ &\le 2C\|x^*\|. \end{align*} $T$ is one-to-one as for $x^*$ with $Tx^* = 0$ we have for $x \in X$, written as $x = y+z$, that $x^*(x) = x^*|_Y(y) + x^*|_Z(z) = 0+0=0$, so $x^* = 0$. $T$ is onto: Let $y^* \in Y$, $z^* \in Z$. Define $x^*\colon X \cong Y \oplus Z \to \mathbb K$ by $x^*(y+z) = y^*(y) + z^*(z)$, then $x^*$ is bounded as the projections onto $Y$ and $Z$ are bounded and $y^*$ and $z^*$ are, so $x^* \in X^*$ and obviously $Tx^*= (y^*,z^*)$. $T^{-1}$ is bounded: $T^{-1}(y^*, z^*) = y^*P_Y + z^*P_Z$ ($P_Y$, $P_Z$ denoting the projections) is bounded, as the projections are. So $T$ is an isomorphism.<|endoftext|> TITLE: What does an outer automorphism look like? QUESTION [18 upvotes]: I am working on a project in my group theory class to find an outer automorphism of $S_6$, which has already been addressed at length on this site and others. I have a prescription for how to go about finding this guy, but I have a larger conceptual question - what does an outer automorphism really look like? Is there an intuitive way to understand the difference between an inner and an outer automorphism? Inner automorphisms have always seemed easier for me to understand since we have an explicit representation $ghg^{-1}$ for members of the group. 
I can also understand this representation in terms of the Rubik's cube - rotate an edge, rotate a perpendicular edge, and the rotate the other edge back (is not the same as just rotating the perpendicular edge). What does an "outer automorphism" look like? REPLY [3 votes]: Another rather accessible example of an outer automorphism can be found on matrix groups: Look at the inverse transpose map $$\operatorname{GL}(n,F) \to \operatorname {GL}(n,F), \quad A \mapsto (A^{-1})^\top.$$ It is an automorphism, and except from all cases with $n=1$ and $\operatorname{GL}(2,\mathbb F_2)$, it is is outer.<|endoftext|> TITLE: Exercise 11.5 from Atiyah-MacDonald: Hilbert-Serre theorem and Grothendieck group QUESTION [12 upvotes]: I don't understand Exercise 11.5 of Atiyah & MacDonald, which demands one elaborate upon or rephrase the Hilbert–Serre Theorem (11.1) in terms of the Grothendieck group $K(A_0)$. Here's the set-up in more detail. $A$ is a commutative Noetherian graded ring, finitely generated as an algebra over its degree-$0$ summand $A_0$ by finitely many elements $x_j$ of degrees $k_j > 0$. $\lambda$ is some additive function from the class of finitely generated $A_0$-modules to the integers $\mathbb{Z}$ — no field is presumed involved, so $\lambda$ is not assumed to be dimension. For a finitely generated graded $A$-module $M$, the Poincaré series of $M$ with respect to $\lambda$ is the power series $$P_\lambda(M,t) := \sum \lambda(M_n) t^n \in \mathbb{Z}[[t]].$$ If we write $q(t) = \prod (1 - t^{k_j})$ and compute the reciprocal $q^{-1}$ of $q$ in the power series ring $\mathbb{Z}[[t]]$, then the theorem is that $$P_\lambda(M,t) \in q^{-1} \mathbb{Z}[t] \subset \mathbb{Z}[[t]];$$ that is, the Poincaré series is actually just a polynomial times the reciprocal of $q$. Under the book's definition, the Grothendieck group $K(A_0)$ is the quotient of the free abelian group on the isomorphism classes of finitely generated $A_0$-modules by the subgroup generated by all elements $[N] - [M] + [P]$ for short exact sequences $0 \to N \to M \to P \to 0$ of such modules. No grading is invoked in this definition, and all finitely generated modules are used as generators, not merely projective or flat ones. The question, again, is how to reformulate the result about $P_\lambda(M,t)$ in terms of $K(A_0)$. My attempts to say something meaningful, which have turned out to be rather inadequate, follow. First, if we define $K_{\mathrm{gr}}(A)$ to be the graded Grothendieck group of $A$, meaning we only admit isomorphism classes of graded $A$-modules as generators in the preceding definition and degree-preserving $A$-module homomorphisms in the exact sequences we use to generate the subgroup we quotient by, then it seems clear, using additivity of $\lambda$ on the summands $M_i$, that $M \mapsto P_\lambda(M,t)$ induces a homomorphism of additive groups $K_{\mathrm{gr}}(A) \to \mathbb{Z}[[t]]$ with values in the same subgroup $q^{-1} \mathbb{Z}[t]$ as before. Further, one can pass from additive $\mathbb{Z}$-valued functions $\lambda$ on the class of finitely generated $A_0$-modules to group homomorphisms $l\colon K(A_0) \to \mathbb{Z}$, and say that for any such $l$, one can define an analogous homomorphism $Q_l(-,t)\colon K_{\mathrm{gr}}(A) \to \mathbb{Z}[[t]]$, with image as before. This reformulation hardly seems enlightening, though. 
If we make $l$ a variable in this expression too, we get a function $$Q\colon \mathrm{Hom}\big(K(A_0),\mathbb{Z}\big) \times K_{\mathrm{gr}}(A) \to q^{-1} \mathbb{Z}[t] \subset \mathbb{Z}[[t]].$$ This looks a little different than the original, but is not particularly more interesting. Finally, another thing we can do is to factor all the $P_\lambda$ or $Q_l$ through the additive group $K(A_0)[[t]]$ to get a sort of "universal" Poincaré series, and note that for all $l \in \mathrm{Hom}\big(K(A_0),\mathbb{Z}\big)$, the image is contained in (well, I believe it is) $q^{-1} \mathbb{Z}[t]$. I would like to be able to lift the result about the image up to $K(A_0)[[t]]$, but since $K(A_0)$ doesn't usually have a ring structure, multiplication and hence $q^{-1}$ are not apparently defined in $K(A_0)[[t]]$. We can also make $l$ a variable as before. I have trouble believing that the intended answer could be something so insubstantial. Now, looking in Eisenbud, I did find that in the case where $A$ is a polynomial ring over a field $k$ and $\lambda = \dim_k$, the graded Poincaré series, if I am rephrasing correctly, gives an isomorphism from $K_{\mathrm{gr}}(A)$ to $q^{-1} \mathbb{Z}[t]$ (and hence implies an isomorphism with $\mathbb{Z}[t]$). That seems substantive enough. However, I don't see how hypotheses on $A$ and $\lambda$ as broad as ours could yield anything similarly interesting. So what do you suppose they are looking for here? REPLY [2 votes]: This is not a true answer, but I think it could be useful: the paper of W. Smoke Dimension and Multiplicity for Graded Algebras treats the case where $A_0$ is a field and obtains a homomorphism $\chi_R$ of $\mathbb{Z}[t]$-modules from $K_{\text{gr}}(A)$ to $\mathbb{Z}[[t]]$. Furthermore, if any finitely generated $A$-graded module have a finite graded free resolution, then $\chi_R$ is an isomorphism from $K_{\text{gr}}(A)$ to $\mathbb{Z}[t]$.<|endoftext|> TITLE: How to solve non-linear recurrence relation in general? QUESTION [14 upvotes]: For linear recurrence, we can use generating function. So is there a general technique to solve non-linear recurrence or it depends on a specific sequence? For example, $$a_{n+1} = \dfrac{a_n(a_n - 3)}{4}$$ for $a_0 = a$ REPLY [5 votes]: As already mentioned, currently for non-linear recurrence relations, there are no known general techniques for obtaining a closed form solution. For the very few non-linear recurrences that are actually solvable, the techniques used to solve them are strongly dependent on the specific recurrence that your dealing with.<|endoftext|> TITLE: What does $\binom{-n}{k}$ mean? QUESTION [14 upvotes]: For positive integers $n$ and $k$, what is the meaning of $\binom{-n}{k}$? Specifically, are there any combinatorial interpretations for it? Edit: I just came across Daniel Loeb, Sets with a negative number of elements, Advances in Mathematics. 91 (1992), 64-74, which includes a combinatorial interpretation for $\binom{n}{k}$ for any $n,k \in \mathbb{Z}$ in theorem 5.2. REPLY [3 votes]: For this answer it would probably be best if you're familiar with the Binomial distribution. I like the interpretation through probability & statistics via the negative binomial distribution. Let's say we're watching a sequence of cointosses (let's call tossing a "heads" a success and "tails" a failure) where the probability of a success is independently $p$. Then the negative binomial distribution is a distribution over the waiting times till the $r$th success. 
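(As a concrete check of the pmf derived in the next few sentences, here is a minimal Python/SciPy sketch; the values $r=3$, $p=0.4$ are arbitrary, and note that scipy.stats.nbinom is parameterized by the number of failures $k$ before the $r$th success, which matches the $p(k\mid r,p)$ form below.)

```python
from math import comb
from scipy.stats import nbinom

r, p = 3, 0.4
for k in range(6):
    by_hand = comb(k + r - 1, k) * p**r * (1 - p)**k
    print(k, by_hand, nbinom.pmf(k, r, p))   # the two columns agree
```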
In other words, what's the probability we'll have to wait $t$ tosses/trials to see $r$ successes, $p(t|r,p)$? Equivalently, by the $(t-1)$th trial we must have seen $r-1$ successes and $k=t-r$ failures. But then $t=k+r$, and since the $t$th trial must itself be a success, there are $\binom{k+r-1}{k}$ ways to place the $k$ failures among the first $t-1=k+r-1$ trials. Since $r$ and $k$ completely specify $t$, we can re-parameterize and write the probability that the $r$th success occurs on the $(r+k)$th trial as \begin{equation} p(k|r, p) = \binom{k+r-1}{k} p^r (1-p)^k \end{equation} Why the name negative binomial (and why is this an answer to your question)? Well, because \begin{equation} \binom{k+r-1}{k} = \frac{(k+r-1)_k}{k!} = \frac{(k+r-1)(k+r-2)\dots r}{k!} = (-1)^k \frac{(-r)(-r-1)\dots(-r-k+1)}{k!} = (-1)^k \binom{-r}{k} \end{equation} Note that for integer $r$ the negative binomial distribution may be called the Pascal distribution.<|endoftext|> TITLE: How random is my deck of cards? QUESTION [6 upvotes]: I play a few card games, and I thought it would be fun to write a card shuffling program, to see how many shuffles it takes to randomize the deck. The algorithm is: Cut the deck in the middle ± a random offset. While one hand is still full, place a small but random number of cards into the second hand at the front/back/both. Repeat until random. The question is, how do I check for randomness? I've considered that this might be a 52-spin Ising model, and I can make some function that 'costs energy' when cards are ordered (i.e. the ace of clubs followed by the two of clubs 'costs' more than being next to, say, the seven of hearts), but this might be overkill... Is there a mathematical test for randomness I could use here, to check the order of my cards? REPLY [2 votes]: Valid or useful or interesting notions of "random-ness" need more than basic probability notions, since a run of 100 "heads" in flipping a fair coin has the same probability as any other specific sequence of outcomes, but is arguably implausible as a "random" outcome. To my mind, the "Kolmogorov-Solomonoff-Chaitin" notion of "complexity" is the apt notion. This is discussed wonderfully and at length in the first part of Li-Vitanyi's book on the subject. A crude approximation of the idea is that a "thing" is "random" if it admits no simpler description than itself (!). Yes, of course, this depends on the language/descriptive apparatus, but has provable sense when suitably qualified. Given that most card games refer to discernible "patterns" (things with compressible descriptions), a "random" hand would be one lacking two-of-a-kind, and so on. A "random" distribution in a deck would, in particular, have no more pattern in it than might be "expected". The question of whether there is a notion of "too-violent-to-be-random" non-pattern-forming in a given context seems to be ambiguous: while long runs of all heads or all tails are suspicious, lack of them is also suspicious. This kind of example suggests that a configuration of a deck of cards such that no one has a playable hand might also be suspicious... depending on the context. The operationally significant question of whether or not an innocent-seeming "mixing" can produce "randomness" with relatively few iterations is slightly different. However, from the viewpoint of "complexity", surely the answer is "no", since the hands-of-cards which arise in this way immediately admit a much simpler description than themselves.
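(For the practical side of the question — empirically testing the output of a shuffling program — one standard check is a chi-squared test on where a fixed card ends up over many runs; here is a minimal Python sketch with a stand-in riffle shuffle rather than the asker's exact algorithm, and with trial counts chosen only for illustration. Under uniformity the statistic should be near the degrees of freedom, here $51$; values far above that indicate the deck is not yet well mixed.)

```python
import random
from collections import Counter

def riffle(deck):
    """Stand-in shuffle: cut near the middle, then interleave randomly."""
    cut = len(deck) // 2 + random.randint(-3, 3)
    left, right, out = deck[:cut], deck[cut:], []
    while left or right:
        take_left = left and (not right or random.random() < len(left) / (len(left) + len(right)))
        out.append(left.pop(0) if take_left else right.pop(0))
    return out

def position_counts(n_shuffles, trials=5000, deck_size=52):
    counts = Counter()
    for _ in range(trials):
        deck = list(range(deck_size))
        for _ in range(n_shuffles):
            deck = riffle(deck)
        counts[deck.index(0)] += 1          # where does card 0 end up?
    return counts

def chi_square(counts, trials=5000, deck_size=52):
    expected = trials / deck_size
    return sum((counts[i] - expected) ** 2 / expected for i in range(deck_size))

for k in (1, 3, 7):
    print(k, round(chi_square(position_counts(k)), 1))  # drops toward ~51 as k grows
```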
Nevertheless, or perhaps because of this observation, we can decide to declare a merely relative notion of randomness for a deck of cards, in terms of a small proper subset of "genuine" tests of "randomness/compressibility". Of course, if the only "deals" of hands of cards that were allowed were "random" in any strong sense, the probability would be very low that anyone would have a playable hand...<|endoftext|> TITLE: Proving that $f(x)$ divides $x^{p^n} - x$ iff $\deg f(x)$ divides $n$ QUESTION [7 upvotes]: Prove that $f(x)$ divides $x^{p^n} - x$ if and only if $d := \deg f(x)$ divides $n$. I believe that I have the backward direction covered: Let $d \mid n$ say $n = dq$ for some $q$ in $\mathbb{F}_p[x]$. Consider the field $\mathbb{F}_p[x]/(f(x))$ which has $p^d$ elements. Take an element $x+I$ from the field (here $I = (f(x))$) so we have: $(x+I)^{p^n} = (x+I)^{p^{dq}}$. As long as you keep factoring out $(x+I)$ with the $p^d$ power you will get $(x+I)$ so $x^{p^n} - x \in (f(x))$. I am having trouble getting to the other direction. REPLY [4 votes]: This question is very old, but there is a more direct solution worth noting. Proof. Suppose $f(x)$ divides $h(x):=x^{p^n} -x$. Then since $h(x)$ splits over $\mathbb{F}_{p^n}$, so does $f(x)$. Let $\alpha \in \mathbb{F}_{p^n}$ be a root of $f(x)$. Then $\mathbb{F}_{p}(\alpha)\subset \mathbb{F}_{p^n}$, and $[\mathbb{F}_p(\alpha): \mathbb{F}_p] = d$. Finally, we have that $n = [\mathbb{F}_{p^n}: \mathbb{F}_p]= [\mathbb{F}_{p^n}: \mathbb{F}_p(\alpha)][\mathbb{F}_p(\alpha): \mathbb{F}_p]$, completing the proof that $d | n$.<|endoftext|> TITLE: GRE Math Subject Test QUESTION [18 upvotes]: I am studying for GRE Math. I am looking for specific tips. What types of questions usually come up? Does anyone know any tricks (e.g. integration tricks) that might be helpful? Which theorems are absolutely essential? Apparently, most of the test is calculus and probability theory. What types of calculus and probability questions come up? Overall, how to score high on the GRE Math? Please be specific. Edit: Please do not state the obvious. I know I need to study and take the practice tests. I am looking for specific tips and tricks that might help answer some types of questions faster. REPLY [9 votes]: Consensus from people I know is that the current tests are generally a bit harder than the practice tests which are available. If you want to score about the 85th percentile on test day, you should be able to finish a practice test to about the 90th percentile in a little over 2 hours having never seen that particular exam before. Another thing is that it helps to have a familiarity with the sort of calculus questions that are asked on the test, and a good strategy can be the following: grade for advanced calculus or beginning real analysis classes at your undergraduate institution. I found that having been a grader for my university's honors calculus class and a first quarter real analysis course helped because I had kept the knowledge in my head and I could quickly go through and do these sorts of problems. Other friends of mine who took the test who had graded calculus and analysis before reported similar statements. Finally, it helps to keep in mind that doing well on this test does not correlate strongly with going to good graduate programs. Most reasonable places look at the math subject GRE and expect you to not fail it- it is not so important except as a basic hurdle to get over. 
What I've heard from admissions committees is that most important things are good letters of recommendation from professors.<|endoftext|> TITLE: When can we interchange the derivative with an expectation? QUESTION [49 upvotes]: Let $ (X_t) $ be a stochastic process, and define a new stochastic process by $ Y_t = \int_0^t f(X_s) ds $. Is it true in general that $ \frac{d} {dt} \mathbb{E}(Y_t) = \mathbb{E}(f(X_t)) $? If not, under what conditions would we be allowed to interchange the derivative operator with the expectation operator? REPLY [4 votes]: The lemma which is stated in jochen's answer is quite useful. However, there are cases in which the integrand is not differentiable with respect to the parameter. Here, there is a discussion about some results which can be made in a more general setup. Let $\left(\mathbf{X},\mathcal{X},\mu\right)$ be a general measure space (e.g., a probability space) and let $\xi:\mathbf{X}\times[0,\infty)\rightarrow\mathbb{R}$ be such that: (a) For every $s\geq0$, $x\mapsto\xi(x,s)$ is $\mathcal{X}$-measurable. (b) For every $x\in\mathbf{X}$, $s\mapsto\xi(x,s)$ is right-continuous (This assumption can be weakened by letting it be valid just $\mu$-a.s. but then $\left(\mathbf{X},\mathcal{X},\mu\right)$ has to be a complete). In particular, notice that (a) and the right-continuity assumption which is listed in (b) imply that $\xi\in\mathcal{X}\otimes\mathcal{B}[0,\infty)$ where $\mathcal{B}[0,\infty)$ is the Borel $\sigma$-field which is generated by $[0,\infty)$. For details see, e.g., Remark 1.4 on p. 5 of I. Karatzas, S.E. Shreve, Brownian Motion and Stochastic Calculus, Springer, 1988. Then, for every $(x,t)\in\mathbf{X}\times[0,\infty)$ define $g(x,t)=\int_0^t\xi(x,s)ds$ and note that $t\mapsto g(x,t)$ has a right-derivative which equals to $s\mapsto\xi(x,s)$. In addition, for every $t\geq0$ let $$\varphi(t)\equiv\int_{\mathbf{X}}g(x,t)\mu(dx)=\int_{\mathbf{X}}\int_0^t\xi(x,s)ds\mu(dx)\,.$$ To make $\varphi(\cdot)$ be well-defined, let $m$ be Lebesgue measure on $[0,\infty)$ and assume that the pre-conditions of Fubini's theorem are satisfied, e.g., $\xi(x,s)$ is nonnegative (This assumption can be weakened by letting it be valid just $\mu$-a.s. but then $\left(\mathbf{X},\mathcal{X},\mu\right)$ has to be a complete) or integrable with respect to $\mu\otimes m$. Then, deduce that $$\varphi(t)=\int_0^t\zeta(s)ds\ \ , \ \ \forall t\geq0$$ such that for every $t\geq0$, $\zeta(t)\equiv\int_{\mathbf{X}}\xi(x,t)\mu(dx)$. This means that if there is a right-continuous version of $\zeta(\cdot)$, then it equals to the right-derivative of $\varphi(\cdot)$. Moreover, if this version is continuous, then the fundamental theorem of calculus implies that it is the derivative of $\varphi(\cdot)$. In particular, if some convergence theorem can be used in order to show that the right-continuity of $s\mapsto\xi(x,s)$ for every $x\in\mathbf{X}$ leads to a right-continuity of $\zeta(\cdot)$, then $$\partial_+\varphi(t)=\zeta(t)\ \ ,\ \ \forall t\geq0$$ where $\partial_+$ is a notation for a right-derivative. For example, this happens when $$|\xi(x,s)|\leq \psi(x) \ \ , \ \ \mu\text{-a.s.}$$ for some $\psi\in L_1(\mu)$.<|endoftext|> TITLE: Uniform Convergence of Difference Quotients to Partial Derivative QUESTION [11 upvotes]: I'm currently reading Evans' PDE book. 
In it he claims that for $f \in C^2_c(\mathbb{R}^n)$ $$\frac{f(x + he_i) - f(x)}{h} \to \frac{\partial}{\partial x_i}f(x)$$ and $$\frac{\frac{\partial}{\partial x_i}f(x + he_j) - \frac{\partial}{\partial x_i}f(x)}{h} \to \frac{\partial^2}{\partial x_jx_i}f(x)$$ uniformly as $h \to 0$. My question is why must the convergence be uniform? Thanks in advance. REPLY [7 votes]: I know that this, is an old question but one can improve the answer to the case mentioned in Evans book. In particular, one can show the uniform convergence of the difference quotients to the derivative for $C_c^1(\mathbb{R}^n)$ functions. The step for the second derivative follows from this case: The key idea is to use that for $f\in C_c^1(\mathbb{R}^n)$ we find that $\nabla f$ (and also $f$) is uniformly continuous. Hence, we receive with the mean value theoren: $$| \frac{f(x+he_i)-f(x)}{h} -\partial_i f(x)| = |\partial_i f(y) -\partial_i f(x)| $$ for some $y$ in the line from $x$ to $x+he_i$ and consequently for some $y\in B_h(x)$. Now let $\varepsilon >0$ be given and $x\in \mathbb{R}^n$ be arbitrary. Then, by the uniform continuity there is a $\delta >0$ (independent of $x$) such that for all $y\in B_\delta(x)$ we find $|\partial_i f(x) -\partial_i f(y)|< \varepsilon$. Choosing $h\leq\delta$ proves the statement.<|endoftext|> TITLE: How to calculate the cost of Cholesky decomposition? QUESTION [7 upvotes]: The cost of Cholesky decomposition is $n^3/3$ flops (A is a $n \times n$ matrix). Could anyone show me some steps to get this number? Thank you very much. REPLY [2 votes]: I found Section 1.6 ("Serial complexity of the algorithm") of the following webpage to be useful for this topic: https://algowiki-project.org/en/Cholesky_decomposition Edit: Here's the info from that page. 1.6 Serial complexity of the algorithm The following number of operations should be performed to decompose a matrix of order $n$ using a serial version of the Cholesky algorithm: $n$ square roots $\frac{n(n-1)}{2}$ divisions $\frac{n^3-n}{6}$ multiplications and $\frac{n^3-n}{6}$ additions (subtractions): the main amount of computational work. In the accumulation mode, the multiplication and subtraction operations should be made in double precision (or by using the corresponding function, like the DPROD function in Fortran), which increases the overall computation time of the Cholesky algorithm. Thus, a serial version of the Cholesky algorithm is of cubic complexity.<|endoftext|> TITLE: How do I show language of prime number of xs not context-free? QUESTION [8 upvotes]: I have a hunch that the language $L = \{ x^n : n \text{ is prime.} \}$ is not context-free. I am trying to show that by contradiction with the Pumping Lemma: First assume that $L$ is context-free. That means for any string in $L$ of a certain pumping length $p$ or greater, that string can be broken into $s = uvxyz$ where $|vxy| \le p$, $|vy| > 0$, and $uv^ixy^iz$ is in $L$ where $i$ can be any natural number including 0. I first tried letting $s = x^P$. However, I'm not quite sure how to divide this value up into $uxvyz$ to show that it cannot be pumped. Any advice? This is not homework. I am practicing on my own. Thanks! REPLY [18 votes]: Let $v=x^q$ and $y=x^t$, noting the pumping lemma requires $q+t>0$. Let $r=|uxz|=p-q-t$. Then $$|uv^rxy^rz|=r+rq+rt=r(1+q+t)$$ is divisible by both $r$ and $1+q+t>1$ and thus is not prime as long as $r>1$. Then there are two unsettled cases: if $r=0,$ $$|uv^2xy^2z|=|v^2y^2|=2p$$ is not prime. 
Finally, if $r=1,$ $$|uv^{p+1}xy^{p+1}z|=1+(p+1)q+(p+1)t=1+(p+1)(q+t)=1+(p+1)(p-1)=p^2$$ isn't prime.<|endoftext|> TITLE: Prove that $\frac{n-1}{n}>\frac{2a_0a_2}{a_1^2}$ QUESTION [6 upvotes]: Given that the following equation $$p(x)=a_0x^n+a_1x^{n-1}+...+a_{n-1}x+a_n=0$$ has $n$ distinct real roots. Prove that $$\frac{n-1}{n}>\frac{2a_0a_2}{a_1^2}$$ REPLY [2 votes]: Hint: proceed by induction on $n$ beginning with the case $n = 2$, which basically reduces to the condition for when a quadratic polynomial has two distinct real roots. Then if $f(x) = a_0x^n + a_1x^{n-1} + \dots + a_n$ has $n$ distinct real roots, the derivative \begin{equation} f'(x) = na_0x^{n-1} + (n-1)a_1x^{n-2} + (n-2)a_2x^{n-3} + \dots + a_{n-1} \end{equation} will have $n-1$ distinct real roots by Rolle's theorem. Think about this: in each interval of consecutive roots of $f(x)$ there will be a point where the tangent line is horizontal, and there are $n-1$ intervals if you arrange the roots of $f(x)$ in increasing order. Apply the induction hypothesis to $f'(x)$, and magically everything will pop out after some elementary algebra. Let me know if this is too cryptic.<|endoftext|> TITLE: Difference between Gilbert Strang's "Introduction to Linear Algebra" and his "Linear Algebra and Its Applications"? QUESTION [20 upvotes]: Could someone please explain the difference between Gilbert Strang's "Introduction to Linear Algebra" and his "Linear Algebra and Its Applications"? Thank you. REPLY [9 votes]: Quoting an answer on Quora: Introduction to Linear Algebra is a more introductory book, whereas Linear Algebra and Its Applications assumes that the reader is already familiar with the basics of matrices and vectors. Introduction to Linear Algebra also seems to have some material introducing the abstract view of linear algebra, whereas Linear Algebra and Its Applications looks like it's mostly focusing on material that's relevant for engineering applications.<|endoftext|> TITLE: $H$ normal in $G$. Need $G$ contain a subgroup isomorphic to $G/H$ QUESTION [6 upvotes]: If $H \trianglelefteq G$, need $G$ contain a subgroup isomorphic to $G/H$? I worked out the isomorphism types of the quotient groups of $S_3, D_8, Q_8$. For $S_3$: $S_3/\{1\} \cong S_3$, $S_3/\langle (1\ 2\ 3)\rangle \cong \mathbb Z_2$, $S_3/S_3 \cong \{1\}$. For $D_8$: $D_8/\{1\} \cong D_8$, $D_8/\langle r\rangle \cong \mathbb Z_2$, $D_8/\langle s, r^2\rangle \cong \mathbb Z_2$, $D_8/\langle sr^3, r^2\rangle \cong \mathbb Z_2$, $D_8/\langle r^2\rangle \cong V_4$, $D_8/D_8 \cong \{1\}$. For $Q_8$: $Q_8/\{1\} \cong Q_8$, $Q_8/\{1, -1\} \cong V_4$, $Q_8/\langle i \rangle \cong \mathbb Z_2$, $Q_8/\langle j \rangle \cong \mathbb Z_2$, $Q_8/\langle k \rangle \cong \mathbb Z_2$, $Q_8/Q_8 \cong \{1\}$. So I'm guessing that the statement is true, but I don't know how to prove it. And if it's not true, I haven't found a counterexample. Can someone give me a proof or counterexample? Or a HINT :D EDIT: Ahhh. I feel stupid now. Given $\{1, -1\}$ normal in $Q_8$ there is no subgroup of $Q_8$ isomorphic to $V_4$. Correct? So the statement is false? REPLY [2 votes]: The statement is not true in general, as Robert showed. But it does hold for finite abelian groups (since all subgroups of finite abelian groups are normal, we can drop the normal condition). You should be able to convince yourself of this by showing it for $\mathbb{Z}_n$ and then noting that any finite abelian group is a direct sum of cyclic groups.
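(As a quick computational sanity check of the $Q_8$ counterexample referred to above — $Q_8/\{1,-1\}\cong V_4$, yet $Q_8$ has no subgroup isomorphic to $V_4$ — here is a small, self-contained Python sketch, added for illustration, that represents $Q_8$ by $2\times 2$ complex matrices and inspects element orders; the remark about infinite groups continues below.)

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
i = np.array([[1j, 0], [0, -1j]])
j = np.array([[0, 1], [-1, 0]], dtype=complex)
k = i @ j

# The eight elements of Q_8 as matrices.
Q8 = [s * g for s in (1, -1) for g in (I2, i, j, k)]

def order(g):
    m, n = g.copy(), 1
    while not np.allclose(m, I2):
        m, n = m @ g, n + 1
    return n

print(sorted(order(g) for g in Q8))
# [1, 2, 4, 4, 4, 4, 4, 4]: only one element of order 2, so no subgroup of Q_8
# can be isomorphic to V_4 (which needs three elements of order 2),
# even though the quotient Q_8/{1,-1} is V_4.
```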
To see that the word "finite" is necessary, just consider $2\mathbb{Z}\leq\mathbb{Z}$.<|endoftext|> TITLE: Limit of a Recurrence Sequence QUESTION [9 upvotes]: $a_0=c$ where $c$ is positive, with $a_n=\log{(1+a_{n-1})}$,Find \begin{align}\lim_{n\to\infty}\frac{n(na_n-2)}{\log{n}}\end{align} I'have tried Taylor expansion, but I can't find the way to crack this limit. Thanks alot for your attention! REPLY [6 votes]: You may find an asymptotic formula for $a_n$ by improving the accuracy in an adaptive manner. Step 1. Since $$0 < a_{n+1} = \log (1+a_n) < a_n,$$ it is a monotone decreasing sequence which is bounded. Thus it must converge to some limit, say $\alpha$. Then $\alpha = \log(1 + \alpha)$, which is true precisely when $\alpha = 0$. Therefore it follows that $$a_n = o(1). \tag{1}$$ Before going to the next step, we make a simple observation: it is easy to observe that the function $$\frac{x}{\log(1+x)}$$ is of class $C^3$. In particular, whenever $|x| \leq \frac{1}{2}$, we have $$ \frac{x}{\log (1+x)} = 1+\frac{x}{2}-\frac{x^2}{12}+O(x^3). $$ This can be rephrased as $$ \frac{1}{\log(1+x)} = \frac{1}{x}+\frac{1}{2}-\frac{x}{12}+O(x^2). \tag{2}$$ Here, we note that the bound, say $C > 0$, for the Big-Oh notation does not depend on $x$ whenever $|x| \leq \frac{1}{2}$. Step 2. By noting $(1)$, we fix a positive integer $N$ such that whenever $n \geq N$, we have $|a_n| \leq \frac{1}{2}$. Then by $(2)$, $$ \frac{1}{a_{n+1}} - \frac{1}{a_n} = \frac{1}{2} + O(a_n), $$ where the bound for Big-Oh notation depends only on $N$. Indeed, we may explicitly choose a bounding constant as $$C'=\frac{1}{12} + \frac{1}{2}C,$$ where $C$ is the same as in $(2)$. Thus if $n > m > N$, we then have $$ \begin{align*} \frac{1}{a_n} &= \frac{1}{a_{m}} + \sum_{k=m}^{n-1} \left( \frac{1}{a_{k+1}} - \frac{1}{a_k} \right) \\ &= \frac{1}{a_{m}} + \sum_{k=m}^{n-1} \left( \frac{1}{2} + O(a_k) \right) \\ &= \frac{1}{a_{m}} + \frac{n-m}{2} + O((n-m)a_m). \end{align*} $$ Thus we have $$ \left|\frac{1}{n a_n} - \frac{1}{2}\right| \leq \frac{1}{n}\left(\frac{1}{a_m} + \frac{m}{2} + C'm a_m \right) + C' a_m.$$ Taking limsup as $n\to\infty$, we have $$ \limsup_{n\to\infty}\left|\frac{1}{n a_n} - \frac{1}{2}\right| \leq C' a_m. $$ Since now $m$ is arbitrary, the right-hand side can be made as small as we wish. Thus the left-hand side must vanish, yielding $$ \frac{1}{n a_n} = \frac{1}{2} + o(1),$$ or equivalently $$ n a_n = 2 + o(1). \tag{3} $$ Step 3. Let $N$ be as in the previous step. Then $(3)$ suggests that it is natural to consider $$ \left( \frac{1}{a_{n+1}} - \frac{n+1}{2} \right) - \left( \frac{1}{a_{n}} - \frac{n}{2} \right) = -\frac{a_n}{12} + O(a_n^2). $$ Now from $(3)$, we have $$ a_n = \frac{2}{n} + o\left(\frac{1}{n}\right) = 2(\log(n+1) - \log n) + o\left(\frac{1}{n}\right) = O\left(\frac{1}{n}\right).$$ Plugging this to the equation above, we have $$ \left( \frac{1}{a_{n+1}} - \frac{n+1}{2} \right) - \left( \frac{1}{a_{n}} - \frac{n}{2} \right) = -\frac{1}{6}\left( \log(n+1) - \log n \right) + o\left(\frac{1}{n}\right). $$ Now for each $\epsilon > 0$, choose $m > N$ such that whenever $n > m$, the Small-Oh term is bounded by $\epsilon / n$. Then for such $n$ we have $$ \left| \left( \frac{1}{a_{n+1}} - \frac{n+1}{2} + \frac{1}{6}\log(n+1) \right) - \left( \frac{1}{a_{n}} - \frac{n}{2} + \frac{1}{6}\log n \right) \right| \leq \frac{\epsilon}{n}. 
$$ Thus summing up from $m$ to $n-1$, we have $$ \left| \frac{1}{a_{n}} - \frac{n}{2} + \frac{1}{6}\log n \right| \leq \left| \frac{1}{a_{m}} - \frac{m}{2} + \frac{1}{6}\log m \right| + \epsilon (\log n - \log m). $$ Dividing both sides by $\log n$ and taking $n \to \infty$, we have $$ \limsup_{n\to\infty} \frac{1}{\log n} \left| \frac{1}{a_{n}} - \frac{n}{2} + \frac{1}{6}\log n \right| \leq \epsilon. $$ Since this is true for every $\epsilon > 0$, it must vanish. Therefore we have $$ \frac{1}{a_{n}} = \frac{n}{2} - \frac{1}{6}\log n + o(\log n). $$ In particular, $$ \begin{align*}a_n &= \left( \frac{n}{2} - \frac{1}{6}\log n + o(\log n) \right)^{-1} \\ &= \frac{2}{n} \left( 1 - \frac{1}{3n}\log n + o\left( \frac{\log n}{n} \right) \right)^{-1} \\ &= \frac{2}{n} + \frac{2}{3n^2} \log n + o\left( \frac{\log n}{n^2} \right). \end{align*} $$ Therefore $$ \frac{n(na_n - 2)}{\log n} = \frac{2}{3} + o(1)$$ and it follows that the limit is $$\lim_{n\to\infty} \frac{n(na_n - 2)}{\log n} = \frac{2}{3}.$$ Further discussions. In fact, we can show that $$ a_n = \frac{2}{n} + \frac{2}{3n^2} \log n + O\left( \frac{1}{n^2} \right). $$ More generally, we have the following proposition. Proposition. Suppose $(a_n)$ is a sequence of positive real numbers converging to 0 and satisfying the recurrence relation $a_{n+1} = f(a_n)$. If $f(x) = x \left( 1 - (a + o(1)) x^m \right)$ for some real $a \neq 0$ and integer $m \geq 1$, then $$ a_n = \frac{1}{\sqrt[m]{man}}(1 + o(1)). $$ If $f(x) = x \left( 1 - a x^m + (b+o(1)) x^{2m} \right)$ for some some reals $a \neq 0$ and $b$, and integer $m \geq 1$, then $$ a_n = \frac{1}{\sqrt[m]{man}} \left( 1 - \frac{(m+1) a^2 - 2 b}{2m^2a^2} \frac{\log n}{n} + o \left( \frac{\log n}{n} \right) \right). $$ If $f(x) = x \left( 1 - a x^m + b x^{2m} + O(x^{3m}) \right)$ for some reals $a \neq 0$ and $b$, and integer $m \geq 1$, then $$ a_n = \frac{1}{\sqrt[m]{man}} \left( 1 - \frac{(m+1) a^2 - 2 b}{2m^2a^2} \frac{\log n}{n} + O \left( \frac{1}{n} \right) \right). $$<|endoftext|> TITLE: What is reductive group intuitively? QUESTION [31 upvotes]: I am studying Geometric invariant theory and wonder how I should understand linearly reductive algebraic group. We say that an affine algebraic group $G$ is linearly reductive if all finite dimensional $G$-modules are semi-simple. I am not sure if linearly reductive groups are the same as reductive groups, which are defined as algebraic groups $G$ over algebraically closed field such that the unipotent radical of $G$ is trivial. But this definition is still beyond my intuition. Are there any good way to understand (linearly) reductive groups? It would especially be nice if reductive (Lie) groups can be characterized in geometry. REPLY [29 votes]: "Linearly reductive" and "reductive" are equivalent when the base field is of characteristic zero, but for prime characteristic they are different -- in fact, in prime characteristic, the only connected linearly reductive groups are algebraic tori, though for example $\operatorname{GL}_n$ is reductive. There is a nice characterization of reductive groups over $\mathbb{C}$: they are precisely the complexifications of compact connected Lie groups. More precisely: every compact connected Lie group is actually a real algebraic group, and every smooth homomorphism of compact Lie groups is algebraic. We can then look at the points of this group over $\mathbb{C}$, and thereby obtain a complex algebraic group. 
For example, the complexification of $U(n)$ is $\operatorname{GL}_n(\mathbb{C})$ -- this can be seen at the level of Lie algebras from $\mathfrak{gl}_n(\mathbb{C}) = \mathfrak{u}(n) \oplus i\mathfrak{u}(n)$. It turns out that in fact every complex reductive group arises in this way from a maximal compact subgroup, and this gives an equivalence between the category of compact connected Lie groups (with smooth homomorphisms) and the category of complex reductive groups (with algebraic homomorphisms).<|endoftext|> TITLE: Constructing Riemann surfaces using the covering spaces QUESTION [5 upvotes]: In the paper "On the dynamics of polynomial-like mappings" of Adrien Douady and John Hamal Hubbard, there is a way of constructing Riemann surfaces. I recite it as follow: A polynomail-like map of degree d is a triple $(U,U',f)$ where $U$ and $U'$ are open subsets of $\mathbb{C}$ isomorphic to discs, with $U'$ relatively compact in $U$, and $f: U'\rightarrow U $ a $\mathbb{C}$-analytic mapping, proper of degree $d$. Let $L \subset U' $be a compact connect subset containing $f^{-1}\left(\overline{U'}\right)$ and the critical points of $f$, and such that $X_0=U-L$ is connected. Let $X_n$ be a covering space of $X_0$ of degree $d^n$, $\rho_n:X_{n+1}\rightarrow X_n$ and $\pi_n:X_n\rightarrow X_0$ be the projections and let $X$ be the disjoint union of the $X_n$. For each $n$ choose a lifting $$\widetilde{f}_n\colon \pi_n^{-1}(U'-L)\rightarrow X_{n+1},$$ of $f$. Then $T$ is the quotient of $X$ by the equivalence relation identifying $x$ to $\widetilde{f}_n(x)$ for all $x\in \pi_n^{-1}(U'-L)$ and all $n=0,1,2,\ldots$. The open set $T'$ is the union of the images of the $X_n, n=1,2,\ldots$, and $F:T'\rightarrow T$ is induced by the $\rho_n$. Why $T$ is a Riemann surface and isomorphic to an annulus of finit modulus? Is there anything special about the $\pi_n,\rho_n$? What kind of background do I need? REPLY [2 votes]: I will give an informal answer. Your covering space is just a collection of holed spaces right? The equivalence relation just projects the holed space down onto a space isomorphic to $U'-L$. It does this by pasting the image and the preimage of $f$ together. (In an informal sense with each iteration the covering space gets bigger. To see this consider the map $z \mapsto z^2$ on the punctured disk (disk without the origin) with radius 1 from the origin. With 1 iteration you cover the disk twice. Applying it another time on the double covering of the disk you cover it four times etc. Quotienting by all these iterations you get the punctured disk.) Hence you get something isomorphic to a space with an annulus of finite modulus. The latter space is just a classical (Parabolic) Riemann surface (Google it!).<|endoftext|> TITLE: Generator for homology of surface of genus $g$ - Hatcher 2.2.29 QUESTION [9 upvotes]: Consider the surface $M_g$ of genus $g$, embedded in $\Bbb{R}^3$ in the standard way. It bounds some compact region $R$. Two copies of $R$ are glued together by the identity map between their boundary surfaces, which forms a closed 3-manifold $X$. I am asked to compute the homology groups of $X$. Now the computation of this for $n = 3$ of $H_n(X)$ is straightforward from Mayer - Vietoris. However now for $n=2$, I run into trouble: I am looking at the following part of the LES from Mayer - Vietoris. Put $A$ = one copy of $R$, $B$ = the other copy, their intersection $A \cap B = M_g$. $i,j$ are the inclusion maps of $A \cap B$ into $A$ and $B$ respectively. 
The lower end of Mayer - Vietoris looks like $$0 \rightarrow \tilde{H}_2(X) \rightarrow \tilde{H}_1(A \cap B) \stackrel{(i_\ast,j_\ast)}{\longrightarrow} \tilde{H}_1(A) \oplus \tilde{H}_1(B) \rightarrow \tilde{H_1}(C) \rightarrow 0 \rightarrow 0 \rightarrow 0 \rightarrow 0.$$ Now I know that $\tilde{H_1}(A \cap B) = \Bbb{Z}^{2g}$ and the same for $ \tilde{H}_1(A) \oplus \tilde{H_1}(B)$, but this does not imply that $(i_\ast,j_\ast)$ is an isomorphism. The first fact on the homology of $A \cap B$ comes from the CW - structure of $M_g$ having $2g$ one cells, the second from the fact that $A$ and $B$ respectively can be thought of the wedge sum of $g$ tori, which is homotopy equivalent to a wedge of $g$ circles. Now the complete the problem I need to know the kernel of the map $\Phi$. To do this, I need to know what are 1. Generators for the homology of $M_g$. 2. Generators for the homology of $A$ and $B$. How do I go about finding these? I would say my main problem in general is making connections between algebraic things and generators for homologies that comes from topology. REPLY [6 votes]: I have to think about it and don't have the time, but I would say the following: I would start with $g =1$. That's the standard trick. :-) In this case, I would remember that your morphism $\Phi$ is what I called in my answer $(i_*, j_*)$, the one induced by the inclusions $A \leftarrow A \cap B \rightarrow B$. Then, we can take the generators for the $H_1$ of $A \cap B = M_1 = \mathbb{T}^2$ to be two $S^1$: one "meridian" and one "parallel". Take the equatorial inner one for the later, for instance. What happens with these generators inside of $A = B= R$? (That is, once we apply $i_*$ and $j_*$ to them?) Well, the first one, the meridian, goes to zero and the second one, the parallel, survives as himself. Doesn't them? So, you've got your morphism $\Phi = (i_*, j_*)$ and you can pursue your computations, I think. EDIT. You've probably already guessed it, but for $g=2$ , you have four generators: two meridians and two parallels too. Meridians go to zero through $(i_*, j_*)$. Parallels remain the same. And so on: in general, you'll have $2g$ generators...<|endoftext|> TITLE: Understanding matrices as linear transformations & Relationship with Gaussian Elimination and Bézout's Identity QUESTION [7 upvotes]: I am currently taking a intro course to abstract algebra and am revisiting ideas from linear algebra so that I can better understand examples. When i was in undergraduate learning L.A., I thought of matrix manipulations as ways of solving $n \times n$ systems of equations. Recently i was exposed to the idea of a matrix being a linear transformation, and matrix multiplication being composition of linear transformations. Im trying to understand this in a more intuitive way and was hoping for some insight... I was thinking of a basic $2\times2$ example and how it affects a point $(x,y)$. We could have a matrix : \begin{bmatrix} a & b \\ c & d \end{bmatrix} When we 'apply' or multiply this to a point $(x,y)$ using matrix multiplication we get new $x' = ax + by$ and $y' = cx + dy$. So if $b,c = 0$, then I can see that what we are doing is 'scaling' both $x \;\& \;y$. I'm guessing that if $b,c \neq 0$, then this becomes some sort of rotation or reflection, but how do you understand this on a fundamental level? How do these operations relate to Gaussian elimination when we are trying to solve systems of equations? Or are are these two seperate applications of matrices? 
Another observation is that when multiplying a matrix such as this one with a point, we get two equations which remind me of Bézout's identity. Am I overanalyzing this or can I draw connections between these two concepts? Thanks for any input! REPLY [2 votes]: First, matrices can represent linear transformations as in $ x \mapsto Ax$, but they can also represent bilinear forms: $ = x^T A y$. But here we will stick to the linear transformation case. What you should do is search for some insight by looking fos some canonical examples. Rotation matrices: these are implementing rotations. For the simplest case, the plane, we have $R=\left(\begin{smallmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{smallmatrix}\right)$. You should try some examples and plot them! Then we have reflection matrices: You can find a derivation of a 2D reflection matrix here: http://www.scibuff.com/2009/06/22/reflection-matrix/ Together, rotation and reflection matrices are called orthogonal matrices. Rotations have determinant $1$, and reflections have determinant $-1$. They satisfy $Q^T Q= Q Q^T = I$ where $I$ is the identity matrix. To see they have determinant $\pm 1$, calculate $1=\det(I)=\det(Q^T Q) =\det(Q)^2$ and solve! A very simple example is in $\mathbb R^3$, a rotation with rotation axes equal to the vertical ($z$) axes, and rotating an angle $\theta$ in the $xy$-plane: $$ \begin{pmatrix} \cos(\theta) & -\sin(\theta) & 0 \\ \sin(\theta) & \cos(\theta) & 0 \\ 0 & 0 & 1 \end{pmatrix} $$ Again, you should calculate an draw some examples! In mechanics and engineering diciplines the shear matrices are important: Let $$ S = \begin{pmatrix} 1 & 0.5 \\ 0 & 1 \end{pmatrix} $$ This effects a shear: You should compute some examples, and you will see what it means. More examples to come ...<|endoftext|> TITLE: The number of curves of given genus over a field QUESTION [8 upvotes]: Let $k$ be a field. Let $g\geq 0$ be an integer. I have an elementary question. Let $N$ be the "number" of $k$-isomorphism classes of smooth projective geometrically connected curves over $k$ of genus $g$. (Note that $N$ can also be $\infty$.) Is $N$ finite if $k$ is finite? When is $N$ finite in general? I'm looking for the "most elementary" answer to this question. REPLY [7 votes]: Yes, this number is finite when the field $k$ is finite. Use some version of the canonical embedding to show that your curve can be realized in some projective space by equations bounded by some degree, and then observe that there are only finitely many polynomials with a given number of variables and of given degree over $k$. For an infinite field the number of isomorphism classes is infinite as soon as $g \geq 1$: one can consider a hyperelliptic curve defined by a choice of $2g+2$ points on $\mathbf P^1$ (considered up to the action of $\mathrm{Aut}(\mathbf P^1)$), and there are infinitely many such choices when the field is infinite. A more sophisticated approach (when $g \geq 2$) uses the existence of a moduli space of curves of given genus. If $M_g$ is the coarse moduli space of curves of genus $g$ and $k$ is a finite field, then one has the formula $$ \# M_g(k) = \sum_{[C]} \frac{1}{\# \mathrm{Aut}_k(C)}$$ where the sum ranges over $k$-isomorphism classes of curves $C$, and clearly $\# M_g(k)$ is finite. 
Unfortunately the only proof of this formula that I know uses the Grothendieck-Lefschetz trace formula on the moduli stack...<|endoftext|> TITLE: Relations between p norms QUESTION [85 upvotes]: The $p$-norm on $\mathbb R^n$ is given by $\|x\|_{p}=\big(\sum_{k=1}^n |x_{k}|^p\big)^{1/p}$. For $0 < p < q$ it can be shown that $\|x\|_p\geq\|x\|_q$ (1, 2). It appears that in $\mathbb{R}^n$ a number of opposite inequalities can also be obtained. In fact, since all norms in a finite-dimensional vector space are equivalent, this must be the case. So far, I only found the following: $\|x\|_{1} \leq\sqrt n\,\|x\|_{2}$(3), $\|x\|_{2} \leq \sqrt n\,\|x\|_\infty$ (4). Geometrically, it is easy to see that opposite inequalities must hold in $\mathbb R^n$. For instance, for $n=2$ and $n=3$ one can see that for $0 < p < q$, the spheres with radius $\sqrt n$ with $\|\cdot\|_p$ inscribe spheres with radius $1$ with $\|\cdot\|_q$. It is not hard to prove the inequality (4). According to Wikipedia, inequality (3) follows directly from Cauchy-Schwarz, but I don't see how. For $n=2$ it is easily proven (see below), but not for $n>2$. So my questions are: How can relation (3) be proven for arbitrary $n\,$? Can this be generalized into something of the form $\|x\|_{p} \leq C \|x\|_{q}$ for arbitrary $0
<p<q$? REPLY: For $q>p$, apply Hölder's inequality with the exponents $\frac{q}{p}$ and $\frac{q}{q-p}$, noting that $\frac{q}{p}>
1$ $$ \sum\limits_{i=1}^n |x_i|^p= \sum\limits_{i=1}^n |x_i|^p\cdot 1\leq \left(\sum\limits_{i=1}^n (|x_i|^p)^{\frac{q}{p}}\right)^{\frac{p}{q}} \left(\sum\limits_{i=1}^n 1^{\frac{q}{q-p}}\right)^{1-\frac{p}{q}}= \left(\sum\limits_{i=1}^n |x_i|^q\right)^{\frac{p}{q}} n^{1-\frac{p}{q}} $$ Then $$ \Vert x\Vert_p= \left(\sum\limits_{i=1}^n |x_i|^p\right)^{1/p}\leq \left(\left(\sum\limits_{i=1}^n |x_i|^q\right)^{\frac{p}{q}} n^{1-\frac{p}{q}}\right)^{1/p}= \left(\sum\limits_{i=1}^n |x_i|^q\right)^{\frac{1}{q}} n^{\frac{1}{p}-\frac{1}{q}}=\\= n^{1/p-1/q}\Vert x\Vert_q $$ In fact $C=n^{1/p-1/q}$ is the best possible constant. For infinite dimensional case such inequality doesn't hold. For explanation see this answer.<|endoftext|> TITLE: Prove: If $A \subseteq C$ and $B \subseteq D$, then $A \cap B \subseteq C \cap D$ QUESTION [8 upvotes]: Is the form and correctness of my elementwise proof of this correct? I don't have any other way of getting feedback for my proofs and I want to improve. Proof. Suppose $A, B, C, D$ are sets such that $A \subseteq C$ and $B \subseteq D$ and let $x \in A \cap B$. It has to be shown that $x \in C \cap D$. $x \in A \cap B$ means that $x \in A$ and $x\in B$. Because $A \subseteq C$, $x \in C$ and because $B \subseteq D$, $x \in D$. Thus, $x \in C \cap D$. Thus, if $A \subseteq C$ and $B \subseteq D$, then $A \cap B \subseteq C \cap D$. REPLY [6 votes]: This is a very well written proof. You state your assumptions and what you wish to prove, then you use the definitions to prove that. There is nothing more to add, and nothing to reduce. Incidentally today I had the first class of the semester and this is exactly what I tried to teach my students. If they all write such proofs by the end of the month, I should be proud of my work.<|endoftext|> TITLE: Every Hilbert space has an orthonomal basis - using Zorn's Lemma QUESTION [14 upvotes]: The problem is to prove that every Hilbert space has a orthonormal basis. We are given Zorn's Lemma, which is taken as an axiom of set theory: Lemma If X is a nonempty partially ordered set with the property that every totally ordered subset of X has an upper bound in X, then X has a maximal element. Given a orthonormal set $E$ in a Hilbert space $H$, it is apparently possible to show that $H$ has an orthonormal basis containing $E$. I tried to reason as follows: Suppose $E$ is a finite set of $n$ elements. Then one can number the elements of $E$ to create a totally ordered set of orthonormal elements. Then the span $$ can be identified with $R^n$, where each element $v = v_1 e_1 + v_2 e_2 + ... + v_n e_n$ is identified with the vector $(v_1, v_2, ..., v_n)$. On $R^n$ we have a total order, namely the "lexigraphical order" $(x_1,x_2,...,x_n) \leq (y_1,y_2,...,y_n)$ if $x_1 < y_1$ or if $x_1 = y_1$ and $x_2 < y_2$ or if $x_1 = y_1$, $x_2 = y_2$ and $x_3 < y_3$ and so on. Hence $E$ is a totally ordered subset of $H$ and $H$ is a partially ordered set. However, this set doesn't seem to have an upper bound. The set $E$ does have an upper bound. If we define a total order on $E$ only, then X is a partially ordered set satisfying the criteria so X has a maximal element? This is as far as I got, and I am not sure the entire argument is correct. I don't see what kind of maximal element I am seeking, since the orthonormal basis of a Hilbert space can have countably infinity number of elements. REPLY [18 votes]: First I should remark that there is absolutely no need to appeal to Zorn's lemma in the case of a finite dimensional vector space. 
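As an aside, in the finite-dimensional case the orthonormal basis can be produced completely explicitly, for instance by the Gram–Schmidt process. The following NumPy sketch of that construction is only meant to illustrate this remark (the example vectors are arbitrary); it plays no role in the Zorn's lemma argument below.

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    """Return an orthonormal basis (as rows) for the span of the given real vectors."""
    basis = []
    for v in vectors:
        w = np.asarray(v, dtype=float)
        for b in basis:              # remove the components along the basis built so far
            w = w - np.dot(b, w) * b
        norm = np.linalg.norm(w)
        if norm > tol:               # keep only genuinely new directions
            basis.append(w / norm)
    return np.array(basis)

V = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [2.0, 1.0, 1.0]]   # spans a 2-dimensional subspace of R^3
Q = gram_schmidt(V)
print(np.allclose(Q @ Q.T, np.eye(len(Q))))               # True: the rows are orthonormal
```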
I should also add that when talking about a basis for a Hilbert space one has to distinguish between an algebraic basis, which is a linearly independent set whose linear span is everything, and a topological basis, whose linear span is dense in the space. The algebraic basis is known as a Hamel basis whereas the topological one is known as a Schauder basis. The use of Zorn's lemma here is quite standard, but I suppose that you have yet to see many uses of Zorn's lemma, which is why you find this to be a difficult task. Zorn's lemma asserts that every partially ordered set in which every chain has an upper bound has a maximal element. So to use it we need to come up with a partially ordered set that has the wanted property. In our case, this should be sets of orthonormal vectors, ordered by inclusion. If we show that the increasing union of such sets is a set of orthonormal vectors then we have shown that every chain is bounded. The key in doing such things is to note that if $\{E_i\mid i\in I\}$ is a chain (namely for $i,j\in I$ either $E_i\subseteq E_j$ or $E_j\subseteq E_i$), then the union $\bigcup_{i\in I}E_i$ is again an orthonormal set, and it is an upper bound for the chain.<|endoftext|> TITLE: Manifold with different differential structure but diffeomorphic QUESTION [22 upvotes]: I'm new to differential geometry and reading Lee's book Manifold and Differential Geometry. In the first chapter, he mentioned the following two maps on $\mathbb{R}^n$: (1) $id: (x_1,x_2\cdots x_n) \rightarrow (x_1,x_2\cdots x_n)$ (2) $\varphi: (x_1,x_2\cdots x_n) \rightarrow (x_1^3,x_2\cdots x_n)$ Then, $\mathcal{A}_1$= { $(\mathbb{R}^n,id)$ } and $\mathcal{A}_2$= { $(\mathbb{R}^n,\varphi)$ } are two differential structures on $\mathbb{R}^n$, and $\mathcal{M}_1=(\mathbb{R}^n, \mathcal{A}_1)$, $\mathcal{M}_2=(\mathbb{R}^n,\mathcal{A}_2)$ are two manifolds. It is easy to verify that $\mathcal{M}_1$ and $\mathcal{M}_2$ have the same induced topology, the standard topology. $\mathcal{A}_1$ and $\mathcal{A}_2$ are not compatible, for $id\circ \varphi^{-1}:(x_1,x_2\cdots x_n) \rightarrow (x_1^{\frac{1}{3}},x_2\cdots x_n)$ is not differentiable at the origin. Therefore, $\mathcal{M}_1$ and $\mathcal{M}_2$ have different differential structures. My question is: are they diffeomorphic? According to Lee, the author, they are diffeomorphic through $\varphi$ (page 27). But I don't think $\varphi$ is a diffeomorphism between them because $\varphi^{-1}$ is not differentiable at the origin. So are they not diffeomorphic? But according to the result of Donaldson and Freedman, each $\mathbb{R}^n$ except $n=4$ (with the standard topology) has only one diffeomorphism class, so for any $\mathbb{R}^n$ except $\mathbb{R}^4$, $\mathcal{M}_1$ and $\mathcal{M}_2$ are diffeomorphic. But why? REPLY [25 votes]: In order not to confuse the diffeomorphism with the chart, define $$ u : (\mathbb{R}^n, \mathcal{A}_1) \rightarrow (\mathbb{R}^n, \mathcal{A}_2)$$ by $u(x_1, ..., x_n) = (x_1^3, x_2, ..., x_n)$. It is a homeomorphism (why?). To check that it is a diffeomorphism, you also need to check that $u$ and $u^{-1}$ are smooth. A map is smooth by definition if its local representations in charts are smooth. Here, we have two global charts. To check that $u$ is smooth, we need to check that $\varphi^{-1} \circ u \circ id$ is smooth as a regular map $\mathbb{R}^n \rightarrow \mathbb{R}^n$. And indeed, $$ (\varphi^{-1} \circ u \circ id) (x_1, ..., x_n) = (\varphi^{-1} \circ u)(x_1, ..., x_n) = \varphi^{-1} (x_1^3, x_2, ..., x_n) = (x_1, ..., x_n) $$ and this is a smooth map. To check that $u^{-1}$ is smooth, we need to check that $id^{-1} \circ u^{-1} \circ \varphi$ is smooth.
Similarly, $$ (id^{-1} \circ u^{-1} \circ \varphi)(x_1, ..., x_n) = (x_1, ..., x_n). $$ Note that it doesn't matter that $u^{-1}(x) = (x_1^{\frac{1}{3}}, x_2, ..., x_n)$ is not smooth as a map $\mathbb{R}^n \rightarrow \mathbb{R}^n$, because you treat $u^{-1}$ as a map between the manifolds $(\mathbb{R}^n, \mathcal{A}_2) \rightarrow (\mathbb{R}^n, \mathcal{A}_1)$, and to check whether it is smooth as a map between the manifolds, you need to compose it with the charts and check. The map $u^{-1}$ is not smooth as a "regular" map or as a map $(\mathbb{R}^n, \mathcal{A}_1) \rightarrow (\mathbb{R}^n, \mathcal{A}_1)$, but is smooth as a map $(\mathbb{R}^n, \mathcal{A}_2) \rightarrow (\mathbb{R}^n, \mathcal{A}_1)$. REPLY [3 votes]: They are diffeomorphic through $\varphi$, by definition of the structure on $\mathcal{M}_2$. The inverse $\varphi^{-1}$ is not differentiable with respect to the standard structure, i.e., as a map from $\mathcal{M}_1$ to $\mathcal{M}_1$. However, to test whether a map is differentiable as a map from $\mathcal{M}_1$ to $\mathcal{M}_2$, you have to test it in the given charts, in which both $\varphi$ and $\varphi^{-1}$ become the identity. (The same argument would be true if you replace $\varphi$ by any homeomorphism from $\mathbb{R}^n$ to itself.)<|endoftext|> TITLE: Why decimal expansion of $e$ has two copies of $1828$ QUESTION [8 upvotes]: Is there any explanation why the block $1828$ occurs twice in the decimal expansion of the transcendental $e$, $2.718281828459\ldots$, but is not recurring? REPLY [7 votes]: I don't believe this question has a good answer, as I don't believe this repetition is very significant. Similarly, the $762^{\text {nd}}$ digit of $\pi$ begins the Feynman point, a sequence of six $9s$ (Feynman stated he wanted to memorize until this point, so he could recite the digits, ending with "nine nine nine nine nine nine, and so on"). This sequence of numbers in $\pi$ is similarly strange, however, it seems like this is simply a string of numbers that happened to be arranged this way in base $10$ and is a rather insignificant coincidence.<|endoftext|> TITLE: Showing $\mathbb{Z}_6$ is an injective module over itself QUESTION [6 upvotes]: I want to show that $\mathbb{Z_{6}}$ is an injective module over itself. I was thinking in using Baer's criterion but not sure how to apply it. So it suffices to look at non-trivial ideals, the non-trivial ideals of $\mathbb{Z_{6}}$ are: (1) $I=\{0,3\}$ (2) $J=\{0,2,4\}$ So take a $\mathbb{Z_{6}}$-map $f: I \rightarrow \mathbb{Z_{6}}$. Since $f$ is a group homomorphism it must map generators to generators right? so $3 \mapsto 1$ and $0 \rightarrow 0$. Now can we say suppose $f(1)=k$ then define $g: \mathbb{Z_{6}} \rightarrow \mathbb{Z_{6}}$ by sending the remaining elements, (those distinct from 0 and 3), say n, to $nk$? REPLY [4 votes]: I found a solution for the 'general' case: Let I be a ideal of $\mathbb{Z}/n\mathbb{Z}$ then we know that $I=\langle \overline{k} \rangle$ for some $k$ such that $k\mid n$. If $f:I\rightarrow \mathbb{Z}/n \mathbb{Z}$ is a $\mathbb{Z}/n \mathbb{Z}$-morphism then $im f\subset I$. To show this we note that if $\overline{x}\in im f$ then there exist $\overline{c}=\overline{lk}$ with $\overline{l}\in\mathbb{Z}/n \mathbb{Z}$ s.t. $f(\overline{c})=\overline{x}$, but $n=ks$ for some $s$, so $\overline{0}=f(\overline{ln})=\overline{s}\cdot f(\overline{lk})=\overline{sx}$, then $sx=nt$ for some $t$, but again since $n=ks$, we get $x=tk$. 
In particular for $\overline{k}\in I$ we have $f(\overline{k})=b\overline{k}$ for some $b$. So for any $x\in I$ we have $f(x)=bx$, and we just take the extension of $f$ to be the map $\mathbb{Z}/n\mathbb{Z}\rightarrow \mathbb{Z}/n\mathbb{Z}$ where $x\mapsto bx$.<|endoftext|> TITLE: Proof of Pitt's theorem QUESTION [7 upvotes]: I'm reading the book Topics in Banach Space Theory by F. Albiac and N. J. Kalton. I got stuck at the proof of Pitt's theorem. In the second paragraph the authors try to prove, ad absurdum, that for a weakly null sequence $\lim\limits_{n\to\infty}\Vert T(x_n)\Vert=0$. They say that without loss of generality one may suppose that $\{x_n\}_{n=1}^\infty$ is a weakly null sequence with $\Vert x_n\Vert=1$ and $\Vert T(x_n)\Vert>\delta$ for all $n\in\mathbb{N}$. I think they normalized the original sequence $\{x_n\}_{n=1}^\infty$ and claim that it is also weakly null. Why does a weakly null sequence remain weakly null after normalization? Another place I got stuck is the place where the authors claim that passing to a subsequence in $\{T(x_n)\}_{n=1}^\infty$ gives a subsequence equivalent to the natural basis of $\ell_p$. And they also assume that after passing to a subsequence $\{x_n\}_{n=1}^\infty$ remains equivalent to the natural basis of $\ell_p$. Why does $\{x_n\}_{n=1}^\infty$ remain equivalent to the natural basis of $\ell_p$? Thank you. REPLY [7 votes]: We assume that $\{x_n\}$ doesn't converge strongly to $0$. We show that each subsequence of $\{Tx_n\}$ has a convergent subsequence. So here, we can find a constant $C>0$ and $A$ infinite such that $\lVert x_n\rVert\geq C$ for all $n\in A$. Let $y_n:=\frac 1{\lVert x_n\rVert}x_n$ for $n\in A$. Then $\{y_n\}_{n\in A}$ converges weakly to $0$, as for $f\in (\ell^r)^*$ and $n\in A$, we have $$|f(y_n)|=|f(x_n)|\frac 1{\lVert x_n\rVert}\leq \frac{|f(x_n)|}C.$$ By definition of equivalence, $\{u_n\}$ and $\{v_n\}$ are equivalent if for every sequence $\{a_n\}$ of scalars, $\sum_{n=1}^{+\infty}a_nu_n$ is convergent if and only if so is $\sum_{n=1}^{+\infty}a_nv_n$. So $\{x_n\}_{n\in A}$ is equivalent to $\{e_n\}_{n\in A}$ (not to the whole sequence). But it's enough to conclude, as we would have boundedness from an infinite subspace of $\ell^r$ to $\ell^p$. REPLY [6 votes]: I think that for your first question the answer is: If $\|x_{n}\|\rightarrow 0$, then $x_{n}\rightarrow 0$, which implies that $T(x_n)\rightarrow 0$. So you can suppose that $\|x_{n}\|>\delta$ for some $\delta>0$. By hypothesis you have $$\langle y,x_{n}\rangle\rightarrow 0,\ \forall\ y\in X^{\star}$$ hence $$\Big|\Big\langle y,\frac{x_{n}}{\|x_{n}\|}\Big\rangle\Big|\leq \frac{1}{\delta}|\langle y,x_{n}\rangle|,\ \forall\ y\in X^{\star} $$ From the last inequality you can conclude that $\frac{x_{n}}{\|x_{n}\|}$ is a weakly null sequence.<|endoftext|> TITLE: How to get the floor function as a Mellin inverse of the Hadamard product of the Riemann zeta function? QUESTION [5 upvotes]: The floor function is given - by Perron's formula - as a Mellin inverse of the zeta function, namely: $$\left \lfloor x \right \rfloor=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\zeta(s)\frac{x^{s}}{s}ds\;\;\;(c>1)$$ This is easily proven using the Dirichlet series rep. of the zeta function: $\zeta(s)=\sum_{n=1}^{\infty}n^{-s}$. I was wondering if one can obtain the same result using the Hadamard product rep.: $$\zeta(s)=\pi^{s/2}\frac{\Pi_{\rho}\left(1-\frac{s}{\rho}\right)}{2(s-1)\Gamma\left(1+\frac{s}{2}\right)}$$ REPLY [3 votes]: Well, you can, somewhat indirectly.
Riemann originally developed the intermediary step function $J(x)$, with steps of $1/j$ at each prime power $p^j$, to establish the prime number theorem. The function $J(x)$ is directly derived from $\ln \zeta(s)$, where he decomposed the Hadamard product into a sum under the natural logarithm (see "Riemann's Zeta Function," H.M. Edwards; I have the Dover edition, 2001). In that context, the whole number staircase function (sometimes called the integer floor function, or natural number counting function) may be expressed as, $$ N(w) = \int_{r=1^-}^{w} \exp^*(dJ(e^v))[\ln r] dr, $$ where $\ln r$ is the independent variable of the convolution exponential, and the argument of the c.e. is just Riemann's $J(x)$: $$ dJ(e^u) = \left( \frac{1}{u} - \sum_{\rho} \frac{e^{(\rho-1)u}}{u} - \frac{1}{e^{u}(e^{2u}-1)u} \right), $$ where the $\rho$ are the non-trivial zeros of the Riemann zeta function. To show it: $$ N(w) = \frac{1}{2 \pi i} \int_{x-i\infty}^{x+i\infty} \zeta(z) \frac{w^z}{z} dz \Rightarrow N(e^u) = \frac{1}{2 \pi i} \int_{x-i\infty}^{x+i\infty} \zeta(z) \frac{e^{uz}}{z} dz. $$ We'll drop the $1/(2\pi i)$ factor in some of what follows. Then rewrite, $$ N(e^u) = \int_{x-i\infty}^{x+i\infty} e^{\ln \zeta(z)} \frac{e^{uz}}{z} dz = \int_{x-i\infty}^{x+i\infty} \sum_{k=0}^{\infty} \frac{(\ln \zeta(z))^k}{k!} \frac{e^{uz}}{z} dz. $$ If we can integrate termwise (*), we've $$ = \sum_{k=0}^{\infty} \int_{x-i\infty}^{x+i\infty} \frac{(\ln \zeta(z))^k}{k!} \frac{e^{uz}}{z} dz. $$ Note, each integral is just Perron's formula applied to the Dirichlet series, $$ D^k = (\ln \zeta(z))^k = \left( \sum_i p_i^{-z} + (1/2)p_i^{-2z} + (1/3)p_i^{-3z} + ... \right)^k $$ where $i$ indexes the primes. We can take the derivative of the Perron step function to obtain a distribution of point masses of weight $c/(j_1\cdots j_k)$ at each $p_{i_1}^{j_1} \cdots p_{i_k}^{j_k}$. We can apply the Laplace transform convolution formula (in the sense of distributions): $$ \int_{0^-}^{u} \int_{x-i\infty}^{x+i\infty} \frac{D^k}{k!} e^{tz} dz dt = \int_{0^-}^{u} (1/k!) (\mathcal{L}^{-1}(D))^{*k} [t] dt, $$ where $t$ is the post-convolution independent variable. The whole series can then be expressed as a convolution exponential, $$ N(e^u) = \int_{0^-}^{u} \exp^*(\mathcal{L}^{-1}(D))[t] dt, $$ where the $u$-integral swap with the exponential series sum is justified by noting $\exp^*(.)$ is just a point measure on $[0,\infty)$, with finitely many delta masses in $[0,u]$. The argument of the convolution exponential has a common form, $$ \frac{1}{2\pi i} \int_{x-i\infty}^{x+i\infty} \ln(\zeta(z)) e^{uz} dz = dJ(e^u)e^u, $$ where $J(x)$ is from Edwards. The outer $e^u$ term can be passed through the convolution exponential, and we can change variables back, $u=\ln w$, and let $t=\ln r$ for the variable of integration, $$ N(w) = \int_{t=0^-}^{\ln w} \exp^*(e^v dJ(e^v))[t] dt = \int_{r=1^-}^{w} r \exp^*(dJ(e^v))[\ln r] \frac{dr}{r}. $$ The convolution exponential seems a bit underserved in the literature. I suppose it's assumed things can be done in the frequency domain. It has some of the expected properties, like for functions $f$ and $g$ under convergence conditions, $\exp^*(f+g) = \exp^*(f)*\exp^*(g)$, and if $01$), so is $\ln \zeta(z)$. 
Since on finite vertical contours $[x-ih,x+ih]$ the partial sums of the functions converge uniformly, we can pass this "middle" portion of the contour inside the infinite sum--we need only be concerned with the "tails" of the contours: $$ \int_{x+ih}^{x+i\infty} \frac{(\ln \zeta(z))^k}{k!} \frac{e^{wz}}{z} dz, $$ (likewise for $\rightarrow x-i\infty$). These actually behave nicely. In fact, from Edwards (Section 3.5), $$ \left| \int_{x+ih}^{x+i\infty} \left( \frac{w}{n} \right)^z \frac{dz}{z} \right| \le c \frac{1}{n^x h} $$ We apply this termwise to the terms in the $(\ln \zeta(z))^k$ Dirichlet series (again, by standard Perron formula results, this swap is OK to do), treating $e^u$ as $w$ to get $$ \left| \int_{x+ih}^{x+i\infty} \frac{(\ln \zeta(z))^k}{k!} \frac{e^{uz}}{z} dz \right| \le \frac{c_2^k}{h k!}, $$ where, $$ \left| \sum_i p_i^{-z} + (1/2)p_i^{-2z} + (1/3)p_i^{-3z} + ... \right| \le \sum_i p_i^{-x} + (1/2)p_i^{-2x} + (1/3)p_i^{-3x} + ... \equiv c_2 $$ since $x>1$ and everything is absolutely convergent. Since the partial sum of the tails converge $\mathcal{O}(1/h$), this justifies the swap.<|endoftext|> TITLE: How to round 0.45? Is it 0 or 1? QUESTION [5 upvotes]: This question is inspired by How to round 0.4999... ? Is it 0 or 1? I didn't quite understand the logic of the answer. It seems you round every decimal place no matter how far back it goes? In the case of 0.49999, you're rounding up the 9 increases which causes the .4 to be .5 making it round up to 1 (rather than down). So 0.45 will rounds to 1? Would .444444444444444444444444444449 also round to 1? REPLY [2 votes]: It is definitely 0, since it is less than 0.5. But in these cases i would not round to 0, since it might make the solution meaningless. It all depends from the problem's scope and you instructor (or the environment).<|endoftext|> TITLE: A conjugacy class $C$ is rational iff $c^n\in C$ whenever $c\in C$ and $n$ is coprime to $|c|$. QUESTION [7 upvotes]: Let $C$ be a conjugacy class of the finite group $G$. Say that $C$ is rational if for each character $\chi: G \rightarrow \mathbb C$ of $G$, for each $c\in C$, we have $\chi(c) \in \mathbb Q$. I am trying to show that $C$ is rational if and only if whenever $c\in C$ and $n$ is relatively prime to $|c|$, we have $c^n \in C$. Any suggestions? REPLY [5 votes]: Let $\mathbb{Q}(\chi)$ be a finite normal extension of $\mathbb{Q}$ containing the coefficients of $\chi'(g)$ for all $g \in G$, where $\chi'$ is the representation associated with $\chi$. Let $m=|G|$ and $\epsilon$ be a primitive $m$th root of unity. Let $E$ be a finite normal extension of $\mathbb{Q}$ containing $\mathbb{Q}(\chi)$ and $\mathbb{Q}(\epsilon)$. Then there is an injective homomorphism $\psi:\mathbb{Z}_m^*\rightarrow G(E,\mathbb{Q})$ defined by $\psi(a)=\sigma_a$, where $\sigma_a(\epsilon)=\epsilon^a$. Now define $\chi_a' = \tau_a\circ \chi'$ (where $\tau_a$ is the automorphism of $GL_m(E)$ induced by applying $\sigma_a$ to the coefficients of each matrix element) and let $\chi_a$ be its character. Then $\chi_a(g)=\chi(g^a)$ (why?). In one direction, $c$ is conjugate to $c^n$ for $\left(n,m\right)=1$, so $\chi(c)=\chi(c^n)$. Since $\chi(c^n)=\chi_n(c)$, we have that $\chi(c)$ is fixed under the action of $\sigma_n$. If this is true for every $n$ relatively prime to $m$, then $\chi(c)$ is fixed under the image of $G(\mathbb{Q}(\epsilon),\mathbb{Q})$. Thus $\chi(c)$ is a member of the fixed field of $G(\mathbb{Q}(\epsilon),\mathbb{Q})$, otherwise known as $\mathbb{Q}$. 
On the other hand, suppose $C$ is rational and that for some character $\chi$ there is a $c\in C$ and an $n$ relatively prime to $m$ for which $\chi(c)\not= \chi(c^n)$. Then $\chi(c)$ is not fixed under $\sigma_n$, so it is not a member of the fixed field of $G(\mathbb{Q}(\epsilon),\mathbb{Q})$, a contradiction. So $\chi(c)=\chi(c^n)$ for every character $\chi$ of $G$, whence $$\sum_i \chi_i(c)\overline{\chi_i(c^n)}=\sum_i\left|\chi_i(c)\right|^2= |C_G(c)|$$ where the summation runs over all irreducible characters, which implies that $c$ is conjugate to $c^n$.<|endoftext|> TITLE: sufficient condition for KKT problems QUESTION [11 upvotes]: For the Karush-Kuhn-Tucker optimisation problem, Wikipedia notes that: "The necessary conditions are sufficient for optimality if the objective function f and the inequality constraints g_j are continuously differentiable convex functions and the equality constraints h_i are affine functions." link Could someone please show me how this result is derived? That is, given a convex objective function, convex inequality constraints and affine equality constraints, how can we show that any point in the feasible set that satisfies the KKT conditions must be a minimizer of the function over the feasible set? REPLY [8 votes]: The primal problem is \begin{align} \operatorname{minimize}_x & \quad f_0(x) \\ \text{subject to} & \quad f_i(x) \leq 0 \quad \text{for } i = 1,\ldots, m\\ & \quad Ax = b. \end{align} The functions $f_i, i = 0,\ldots,m$ are differentiable and convex. Assume $x^*$ is feasible for the primal problem (so $f_i(x^*) \leq 0$ for $i = 1,\ldots,m$ and $A x^* = b$) and that there exist vectors $\lambda \geq 0$ and $\eta$ such that $$ \tag{$\spadesuit$}\nabla f_0(x^*) + \sum_{i=1}^m \lambda_i \nabla f_i(x^*) + A^T \eta = 0 $$ and $$ \lambda_i f_i(x^*) = 0 \quad \text{for } i = 1,\ldots,m. $$ Because the functions $f_i$ are convex, equation ($\spadesuit$) implies that $x^*$ is a minimizer (with respect to $x$) of the Lagrangian $$ L(x,\lambda,\eta) = f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \eta^T(Ax - b). $$ Thus, if $x$ is feasible for the primal problem, then \begin{align*} f_0(x) & \geq f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \eta^T(Ax - b) \\ & \geq f_0(x^*) + \sum_{i=1}^m \lambda_i f_i(x^*) + \eta^T(Ax^* - b) \\ & = f_0(x^*). \end{align*} This shows that $x^*$ is a minimizer for the primal problem.<|endoftext|> TITLE: Why are Mersenne primes easier to find? QUESTION [7 upvotes]: 9 out of the 10 biggest known prime numbers are Mersenne numbers. Are they easier to find?

rank  prime          digits     who   when  reference
1     2^43112609-1   12978189   G10   2008  Mersenne 47??
2     2^42643801-1   12837064   G12   2009  Mersenne 46??
3     2^37156667-1   11185272   G11   2008  Mersenne 45?

(Source: http://primes.utm.edu/largest.html) REPLY [9 votes]: "Easier to find" is not quite the right thing to say. The question is this: you want to pick an extremely large number to test for primality. What kind of large numbers should you test? A "random" large number is a bad choice: such a number will have a probability of approximately $\frac{1}{2}$ of being even, a probability of approximately $\frac{1}{3}$ of being divisible by $3$, and so forth. Mersenne numbers, on the other hand, are both extremely large and substantially more likely than "random" large numbers to be prime. Indeed, if $q$ divides a Mersenne number $2^p - 1$, then $q \equiv 1 \bmod p$ by Lagrange's theorem. This is a far smaller list of possible prime divisors, and in particular no prime less than or equal to $p$ can be a divisor.
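As a quick numerical illustration of this congruence, here is a minimal Python sketch; the exponents are just a few small primes for which $2^p-1$ happens to be composite, and the factoring is naive trial division.

```python
def prime_factors(n):
    """Naive trial-division factorization; fine for the small examples below."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

for p in (11, 23, 29, 37):                     # primes with 2**p - 1 composite
    qs = prime_factors(2**p - 1)
    print(p, qs, all(q % p == 1 for q in qs))  # every factor is congruent to 1 mod p
```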
There is also a specialized primality test, the Lucas-Lehmer test, which is specific to Mersenne numbers.<|endoftext|> TITLE: Why does the column space of a linear transformation equal its image? QUESTION [8 upvotes]: I'm having trouble understanding this. Why does the column space of the matrix of a linear transformation equal the image of the linear transformation? REPLY [26 votes]: Look at a simple concrete example, say the matrix $$A=\begin{bmatrix}2&5&-1\\3&-1&2\end{bmatrix}\;.$$ The column space of $A$ is by definition the set of all linear combinations of the columns of $A$, i.e., the set of all vectors of the form $$\alpha{2\brack 3}+\beta{5\brack -1}+\gamma{-1\brack 2}$$ for real numbers $\alpha,\beta$, and $\gamma$. Now, what vectors are in the image of $A$? The image of $A$ consists of all vectors of the form $Av$, where $v$ is a $3\times 1$ column vector. A typical $3\times 1$ vector is $$v=\begin{bmatrix}\alpha\\\beta\\\gamma\end{bmatrix}\;,$$ and $$Av=\begin{bmatrix}2&5&-1\\3&-1&2\end{bmatrix}\begin{bmatrix}\alpha\\\beta\\\gamma\end{bmatrix}=\begin{bmatrix}2\alpha+5\beta-\gamma\\3\alpha-\beta+2\gamma\end{bmatrix}=\alpha{2\brack3}+\beta{5\brack-1}+\gamma{-1\brack2}\;.$$ Thus, both the column space of $A$ and the image of $A$ consist of all vectors of the form $$\alpha{2\brack3}+\beta{5\brack-1}+\gamma{-1\brack2}\;,$$ so they’re the same. To see that this always happens, you have to recognize that a product $Av$ always simply forms a linear combination of the columns of $A$. If the columns of $A$ are $A_1,\dots,A_n$, and $$v=\begin{bmatrix}v_1\\\vdots\\v_n\end{bmatrix}\;,$$ then $$Av=v_1A_1+v_2A_2+\ldots+v_nA_n\;.$$ When you calculate the $k$-th entry in $Av$, for instance, you get $v_1a_{k1}+v_2a_{k2}+\ldots v_na_{kn}$, where $a_{ki}$ is the $k$-th entry in $A_i$, so $$Av=\begin{bmatrix}v_1a_{11}+v_2a_{12}+\ldots v_na_{1n}\\v_1a_{21}+v_2a_{22}+\ldots v_na_{2n}\\v_1a_{31}+v_2a_{32}+\ldots v_na_{3n}\\\vdots\\v_1a_{m1}+v_2a_{m2}+\ldots v_na_{mn}\end{bmatrix}\;,\tag{1}$$ assuming that $A$ is $m\times n$. But $(1)$ is just $$Av=v_1\begin{bmatrix}a_{11}\\a_{21}\\a_{31}\\\vdots\\a_{m1}\end{bmatrix}+v_2\begin{bmatrix}a_{12}\\a_{22}\\a_{32}\\\vdots\\a_{m2}\end{bmatrix}+\ldots+\begin{bmatrix}a_{1n}\\a_{2n}\\a_{3n}\\\vdots\\a_{mn}\end{bmatrix}=v_1A_1+v_2A_2+\ldots+v_nA_n\;.$$ In short, every $Av$ in the image of $A$ is a linear combination of the columns of $A$ and vice versa.<|endoftext|> TITLE: Generalizing Artin's theorem on independence of characters QUESTION [5 upvotes]: Artin's theorem says that for any field $K$ and any (semi) group $G$, the set of homomorphisms from $G$ into the multiplicative group $K^*$ is linearly independent over $K$. Can this theorem be generalized into higher dimensions? That is, are there any simple restrictions to put on a set of (finite-dimensional) representations of a given (semi?) group $G$ over a fixed (algebraically closed?) field $K$ so as to assure that their characters are linearly independent? It is natural to assume that the representations are irreducible (otherwise, obviously the character of $\pi$ and $\pi\oplus \pi$ are linearly dependent, and the latter would even turn out to be $0$ if $\operatorname{char} K=2$), and in case of $K=\bf C$ and finite group $G$ I suppose irreducibility is enough by Schur's orthogonality (so I guess this is also true for algebraically closed $K$ of characteristic $0$ or large enough for a given $G$ by some model-theoretical argument). 
This question arose out of curiosity about the theorem as stated in a commutative algebra course, and I have little to no idea about modular representation theory, or even any non-$\bf C$ representation theory. Summing it all up, is there a known theorem that generalizes Artin's theorem, and if not, is there any reason that there isn't (perhaps the reason being that it is trivial from some viewpoint?)? REPLY [4 votes]: Bourbaki's generalization of Artin's theorem is as follows: Let $L/K$ be a field extension and $A$ be a $K$-algebra. Then the set $Alg_K(A,L)$ of $K$-algebra morphisms $A\to L$ is linearly independant in the $L$-vector space $\mathcal L_{K-lin}(A,L)$ of $K$-linear maps $A\to L$ (And, yes, these $K$-linear maps $\mathcal L_{K-lin}(A,L)$ form an $L$-vector space, even though $A$ is not an $L$-vector space: this is a bit confusing!) Artin's theorem is obtained by choosing for $A$ the group algebra $K[G]$, taking $L=K$ and remembering the isomorphism of $K$-vector spaces $$\mathcal L_{K-lin}(K[G],K)\xrightarrow \cong K^G:u\mapsto (u(g))_{g\in G}$$ sending $Alg_K(K[G],K)$ to $Hom_{groups}(G,K^*)=Char (G)$<|endoftext|> TITLE: What are the practical applications of the Taylor Series? QUESTION [84 upvotes]: I started learning about the Taylor Series in my calculus class, and although I understand the material well enough, I'm not really sure what actual applications there are for the series. Question: What are the practical applications of the Taylor Series? Whether it's in a mathematical context, or in real world examples. REPLY [3 votes]: Someone already mentioned the usefulness of Taylor's series in relativity, I would like to spent a few words to further explore this point because relativity is a good arena to test the very important role of Taylor series in solving practical problems in physics. Let's consider relativistic kinetic energy formula \begin{equation} E_K= mc^2 \left( \frac{1}{\sqrt{1-\frac{v^2}{c^2}}} -1 \right) \end{equation} Taylor series says that for $v \ll c$ the kinetic energy is about \begin{equation} E_K \approx \frac{mv^2}{2}+\frac{3m v^4}{8 c^2} \end{equation} and this allow you to evaluate the relativistic value when you are in classical regime, and then to get an idea of how relativistic corrections are far from our daily experience. At first glance you can say "who care about $E_K \approx$ blah blah...? We are in XXI century, I don't need approximate formulae to simplify pen & paper calculations, I simply can take my computer and insert numbers to see what happens". Well, things are not so simple. Let's consider this exercise I took from a Taha Sochi's book: we are evaluating kinetic energy of a 1 kg body moving at 100 m/s. Classical mechanics says 5000 J but what is the relativistic answer? The book's answer is completely wrong and it is very instructive to see what happened here. I think the author used $3.33\cdot10^{-7}$ in place of $\frac{v}{c}$, he took his computer, he entered the value, and he found an absurd 4996 J: a relativistic energy lower than classical one! You could think this is a problem related to the very bad habit to do big roundings in intermediate steps. You could say: "I can correct easily this naive mistake: let's use some more digits in $\frac{v}{c}$ value!". The idea of using so many digits until the result stabilizes seem reasonable. You can do calculations by exploiting Spreadsheets, WolframAlpha, or simply Google search cell, probably you will find (try!) 
5009 J (or 5016 J if you use the approximate value $3\cdot 10^8$ for $c$ used by the author). You may feel satisfied and feel that the result is right, after all it is just a bit greater than classical. But wait a minute! Is it plausible that for a 1 kg ball moving at the ridiculously low speed of a fast car or slow airplane, the relativistic correction is of some joule? This would be decidedly huge: the second answer too is completely wrong. The problem is that computers usually works with a very limitate number of digits, and from something like $1,0000000$(...small)$ - 1$ you can get zero or any other strange results! The only way to solve this problem, as far as I know, is using Taylor formula (unless you know how to force computer using more digits, it is possible do that with some programming language, but probably this would be a more complicated and less sure way to solve the problem). Using Taylor formula written before (adding more Taylor terms the change is negligible) you simply got the correct relativistic kinetic energy of our moving ball: about $5000.000000000417$ J ($\frac{3mv^4}{8c^2} \approx 4.17 \cdot 10^{-10}$ J). So in this case classical and relativistic results differ of about $0.00000000001\%$. All this show that Taylor series are not only illuminating and useful, but sometimes practically indispensable.<|endoftext|> TITLE: Proving a necessary and sufficient condition for compactness of a subset of $\ell^p$ QUESTION [10 upvotes]: Let $A \subset \ell^p$, where $1 \le p \lt \infty$. Suppose the following conditions are true: 1) $A$ is closed and bounded 2) $\forall \epsilon \gt 0, \: \exists \: N \in \mathbb{N}$ such that $\forall x \in A$, we have $\sum_{n \ge N}|x_{n}|^{p} \lt \epsilon$. Then show that $A$ is compact. Also, show that the converse is true. REPLY [9 votes]: The properties 1) and 2) are quite obvious when $A$ is a finite set. So we see that a compact set "almost behaves as a finite set". Here is a formal proof. As $\ell^p$ is complete, $A$ is complete whenever it's closed. So we just have to show pre-compactness. Fix $\varepsilon>0$, and apply 2) with $\varepsilon/2$. This gives an integer $N$ such that for all $x\in A$, $\sum_{j\geq N}|x_j|^p<\varepsilon/2$. As $A$ is bounded, we can find $M>0$ such that $\lVert x\rVert\leq M$ for all $x\in A$. Since $[-M,M]^N$ is pre-compact, we can find an integer $K$ and sequences $x^{(0)},\dots,x^{(K)}$ such that for all $v\in [-M,M]^n$, there exists $i\in \{0,\dots,J\}$ such that $\sum_{j=0}^{N-1}|v_j-x^{(i)}_j|^p\leq \frac{\varepsilon^p}{2^p}$. Define $y^{(j)}:=(y^{(j)}_0,\dots,y^{(j)}_{N_1},0,\ldots,0)\in \ell^p$ to see that $A$ is pre-compact. Conversely, we assume $A$ compact. A compact subset of a Hausdorff space is closed. It's bounded, as we can extract from the open cover $\{B(x,1)\}_{x\in A}$ a finite subcover $\{B(x,1)\}_{x\in F}$, where $F$ is finite. Then for all $x\in A$, $\lVert x\rVert\leq 1+\max_{y\in F}\lVert y\rVert$. Fix $\varepsilon>0$, then by pre-compactness, we can find an integer $K$ and $x^{(1)},\dots,x^{(K)}$ such that $\bigcup_{j=1}^KB(x^{(j)},\varepsilon/2)\supset A$. For each $j\leq K$, take $N_j$ such that $\sum_{i\geq N_j}|x_i^{(j)}|^p<\frac{\varepsilon^p}{2^p}$. Then take $N:=\max_{1\leq j\leq K}N_k$.<|endoftext|> TITLE: Show that a function that is locally increasing is increasing? 
QUESTION [6 upvotes]: A function $f : \mathbb{R} \to \mathbb{R}$ is locally increasing at a point $x$ if there is a $\delta > 0$ such that $f(s) < f(x) < f(t)$ whenever $x-\delta < s < x < t < x+\delta$. Show that a function that is locally increasing at every point in $\mathbb{R}$ must be increasing, i.e., $f(x) < f(y)$ for all $x < y$. REPLY [10 votes]: HINT: Suppose that $f$ is locally increasing at every point of $\Bbb R$, but $f$ is not an increasing function; then there are $a,b\in\Bbb R$ such that $aa$. Then $f(x)>f(x_0)$ for each $x\in(a,x_0)$. Use the fact that $f$ is locally increasing at (what point?) to get a contradiction. Alternatively, if you’re familiar with the open cover definition of compactness, you can let $[a,b]$ be any closed interval and use the hypothesis that $f$ is locally increasing at every point to get a cover $\mathscr{U}$ of $[a,b]$ by open intervals on each which $f$ is increasing, then use compactness of $[a,b]$ to get a finite subcover and show directly from the existence of this finite subcover that $f$ must be increasing on $[a,b]$. Since $a\le b$ are arbitrary, this shows that $f$ is increasing.<|endoftext|> TITLE: Ways $S_3$ can act on a set of 4 elements. QUESTION [6 upvotes]: Describe all ways in which $S_3$ can operate on a set of four elements. My approach: This question can be broken down into: How many homomorphisms exist from $S_3$ to $S_4$. Say $\varphi : S_3 \to S_4$ is a homomorphism. Then we have three possibilities for $\text{ker }\varphi$: $\{1\}, \{1, (1\ 2\ 3), (1\ 3\ 2)\}$, and $S_3$. The case in which $\text{ker }\varphi = S_3$ is the trivial homomorphism that maps everything to the identity. Now, the case in which $\text{ker }\varphi = \{1\}$ is the same as saying that the mappings are injective. This comes down to picking three of the four elements and permutating them and leaving the fourth one fixed. There are $\binom43 = 4$ ways of doing this. Say $\text{ker }\varphi = \{1, (1\ 2\ 3), (1\ 3\ 2)\}$. This means that $\varphi((1)) = \varphi((1\ 2\ 3)) = \varphi((1\ 3\ 2)) = (1)$. Morover we can observe the following two properties immediately: $\varphi((1\ 2\ 3)) = \varphi((1\ 3))\varphi((1\ 2)) = (1)$. Equivalently $\varphi((1\ 2)) = \varphi((1\ 3))$. $\varphi((1\ 3\ 2)) = \varphi((1\ 3))\varphi((2\ 3)) = (1)$. Equivalently $\varphi((1\ 3)) = \varphi((2\ 3))$. and thus $\varphi((1\ 2)) = \varphi((1\ 3)) = \varphi((2\ 3))$. But we know by properties of homomorphisms that $\vert \varphi((1\ 3)) \vert \mid \vert (1\ 3) \vert = 2$. So $\vert \varphi((1\ 3)) \vert$ is 1 or 2. But if the order of $\varphi((1\ 3))$ were 1 it would be in the kernel, which would be a contradiction to the kernel we chose, so it must be 2. We can map $(1\ 3)$ to any 2-cycle in $S_4$, of which there are 6, as well as any product of disjoint 2-cycles, of which there are 3. Hence we have 9 possible homomorphic mappings given this kernel. Adding up all the possible homomorphisms from $S_3$ to $S_4$ that we counted, we get 14 different ways in which $S_3$ can act on four elements, as described above. Is this correct? Are there 14 homomorphisms from $S_3$ to $S_4$? Is my reasoning correct? Or are there any hidden assumptions I made that I shouldn't have made? REPLY [2 votes]: We work instead in terms of the corresponding permutations representations (homomorphisms) $\phi: S_3\to S_4$. To avoid confusion we work with $S_3(\{1,2,3\})$ and $S_4(\{a,b,c,d\})$. 
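Before the hand classification below, it may reassure you that any such count can be checked by brute force. The following Python sketch (with the four points labelled $0,\dots,3$ rather than $a,\dots,d$) enumerates images of the two generators of $S_3$ and keeps the pairs satisfying the defining relations $x^2=y^3=(xy)^2=1$.

```python
from itertools import permutations
from collections import Counter

S4 = list(permutations(range(4)))        # permutations of {0,1,2,3} as tuples
E = (0, 1, 2, 3)

def comp(p, q):                          # composition: (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(4))

def generated(gens):                     # subgroup generated by gens (finite group,
    S, frontier = {E}, {E}               # so positive words suffice)
    while frontier:
        frontier = {comp(g, s) for g in gens for s in S} - S
        S |= frontier
    return S

# A homomorphism S_3 -> S_4 is determined by the images X, Y of the generators
# x, y of S_3 = <x, y | x^2 = y^3 = (xy)^2 = 1> satisfying the same relations.
homs = [(X, Y) for X in S4 if comp(X, X) == E
               for Y in S4 if comp(Y, comp(Y, Y)) == E
               and comp(comp(X, Y), comp(X, Y)) == E]

print(len(homs), "homomorphisms in total")
print(Counter(len(generated([X, Y])) for X, Y in homs))
# image sizes 1, 2, 6 correspond to kernel S_3, A_3 and {1} respectively
```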
Let $x = (1,2)$ and $y = (1,2,3)$, so that $$x^2=y^3=1,\text{ }y^2 = (1,3,2),\text{ }xy = (2,3) = yx^2,\text{ }xy^2 = (1,3) = yx.$$ By the First Isomorphism Theorem, $\text{Ker}\,\phi$ is normal in $S_3$ and $\text{Im}\,\phi \cong S_3/\text{Ker}\,\phi$, with isomorphism $\phi(g\text{Ker}\,\phi) = \phi(g)$. But$$yxy^{-1} = xy^2y^{-1} = xy\ne 1,\,x,$$so $S_3$ has no normal subgroups of order $2$, and $\text{Ker}\,\phi$ must be one of $\{1\}$, $A_3$, $S_3$. If $\text{Ker}\,\phi = S_3$, then$$\text{Im}\,\phi\cong S_3/S_3\cong S_1,$$so $\phi$ is the trivial homomorphism, i.e. it always maps to $(a)(b)(c)(d)$. If $\text{Ker}\,\phi = A_3$, then$$\text{Im}\,\phi\cong S_3/A_3\cong S_2,$$ $$\phi(1)=\phi(y)=\phi(y^2) = (a)(b)(c)(d),$$while$$\phi(x)=\phi(xy)=\phi(xy^2)$$ has order $2$ and takes the form $(p,q)$ or $(p,q)(r,s)$, for distinct $p,q,r,s\in\{a,b,c,d\}$. If $\text{Ker}\,\phi = \{1\}$, then$$\text{Im}\,\phi\cong S_3/\{1\} \cong S_3.$$So $\phi(x)$ takes the form $(p,q)$ or $(p,q)(r,s)$, and $\phi(xy)$ takes the form $(p',q')$ or $(p',q')(r',s')$. Without loss of generality, $p' = p$. If $\phi(x) = (p,q)$ and $\phi(xy) = (p,q')$, then $q\ne q'$, or else$$\phi(y) = \phi(x)\phi(xy) = (p,q)(p,q')$$becomes the identity, so $\phi(y) = (q,p,q')$. Because $\phi(x) = (q,p)$, $\phi$ simply maps $\pi\in S_3(\{1,2,3\})$ to the corresponding permutation with $1$, $2$, $3$ replaced by $q$, $p$, $q'$, respectively. If $\phi(x) = (p,q)$ and $\phi(xy) = (p,q')(r',s')$ (or the other way around; these two cases are interchangeable by "switching" $1$ and $3$, since $x$, $xy$ are both transpositions), then $q\ne q'$, or else$$\phi(y) = \phi(x)\phi(xy) = (p,q)(p,q')(r',s')$$becomes a transposition and has order $2$ instead of $3$, so without loss of generality, assume $r' = q$. But then$$\phi(y) = (q,p,q')(q,s') = (q,s',p,q')$$has order $4$ instead of $3$, so this case is impossible. If $\phi(x) = (p,q)(r,s)$ and $\phi(xy) = (p,q')(r',s')$, then $q\ne q'$, or else $\{r,s\} = \{r',s'\}$ and $\phi(x)=\phi(xy)$ forces $\phi(y)$ to be the identity. Without loss of generality, assume $q' = r$, so $\{r',s'\} = \{q,s\}$ means $\phi(xy) = (p,r)(q,s)$. Thus$$\phi(y) = (p,q)(r,s)(p,r)(q,s) = (p,s)(q,r)$$ has order $2$ instead of $3$, so this case is also impossible. Remark. The only "unexpected" permutation here is $\phi$ taking transpositions to $(a, b)(c, d)$ (or $(a, c)(b, d)$ or $(a, d)(b, c)$) and other elements to the identity $(a)(b)(c)(d)$.<|endoftext|> TITLE: Why the spectral theorem is named "spectral theorem"? QUESTION [10 upvotes]: "If $V$ is a complex inner product space and $T\in \mathcal{L}(V)$. Then $V$ has an orthonormal basis Consisting of eigenvectors of T if and only if $T$ is normal".   I know that the set of orthonormal vectors is called the "spectrum" and I guess that's where the name of the theorem. But what is the reason for naming it? REPLY [14 votes]: The name is provided by Hilbert in a paper published sometime in 1900-1910 investigating integral equations in infinite-dimensional spaces. Since the theory is about eigenvalues of linear operators, and Heisenberg and other physicists related the spectral lines seen with prisms or gratings to eigenvalues of certain linear operators in quantum mechanics, it seems logical to explain the name as inspired by relevance of the theory in atomic physics. Not so; it is merely a fortunate coincidence. Recommended reading: "Highlights in the History of Spectral Theory" by L. A. 
Steen, American Mathematical Monthly 80 (1973) pp350-381<|endoftext|> TITLE: Is $\{g \in G : |g| < \infty\}$ always subgroup of a group $G$? QUESTION [7 upvotes]: Possible Duplicate: $T(G)$ may not be a subgroup? Let $G$ be a group, and consider $H = \{g \in G : |g| < \infty\}$. Question: Must $H$ necessarily be a subgroup of $G$? Here, $|g|$ denotes the order of the element $g$. REPLY [14 votes]: In general it is false that the subset of elements of a group $G$ of finite order is a subgroup. I think that the simplest, in some sense, case is that of $GL_2(\Bbb R)$. Let $s_1$ and $s_2$ be symmetries with respect to lines $\ell_1$ and $\ell_2$ through the origin. Then $s_1$ and $s_2$ have finite order (equal in fact to $2$) but the product $s_1s_2$ is a rotation whose order is finite if and only if the lines $\ell_1$ and $\ell_2$ form an angle which is a rational multiple of $2\pi$ (which is obviously not always the case). However, the claim is true when the group $G$ is commutative. This follows immediately from the observation that if $ab=ba$ then the order of $ab$ divides the least common multiple of the orders of $a$ and $b$.<|endoftext|> TITLE: Is it possible to write a number in a base of less than 1? QUESTION [18 upvotes]: Following on from this question: https://math.stackexchange.com/a/217112/45127 If we take base 10 as an example, the granularity is 1. I.e. we increment the digits in an increment of 1 until we reach 9 and then start a new column. If the base is less than 1, then would any number we write require an infinite number of columns or is there a way of writing a number in a base between 0 and 1? REPLY [3 votes]: Yipes! I just tried base -10. $$12345.678=\begin{aligned} 2&\times\;^{_-}10^{4}\\ +\;8&\times\;^{_-}10^{3}\\ +\;4&\times\;^{_-}10^{2}\\ +\;6&\times\;^{_-}10^{1}\\ +\;6&\times\;^{_-}10^{0}\\ +\;4&\times\;^{_-}10^{^{_-}1}\\ +\;8&\times\;^{_-}10^{^{_-}2}\\ +\;2&\times\;^{_-}10^{^{_-}3} \end{aligned}$$ $$12345.678_{10} = 28466.482_{^{_-}10}$$ Although, that's using positive digits with a neative base. You may need to use negative digits and negate the whole number.<|endoftext|> TITLE: Understanding the assumptions in the Reverse Fatou's Lemma QUESTION [23 upvotes]: Fatou's Lemma says the following: If $(f_n)$ is a sequence of extended real-valued, nonnegative, measurable functions defined on a measure space $\left(\mathbf{X},\mathcal{X},\mu\right)$, then $$ \int\lim\inf f_n d\mu \leq \lim\inf \int f_n d\mu. $$ In the statement of the Reverse Fatou's Lemma there's an addtional requirement that the given sequence be dominated by an integrable function. I'm interested in understanding what breaks down if this condition is not satisfied. For the sake of clarity and notation, here's the statement of the Reverse Fatou's Lemma: Let $(f_n)$ be a sequence of extended real-valued functions defined on a measure space $\left(\mathbf{X},\mathcal{X},\mu\right)$. If there exists an integrable function $g$ on $\mathbf{X}$ such that $f_n \leq g$ for all $n$, then $$ \lim\sup\int f_n d\mu \leq \int\lim\sup f_n d\mu. $$ Again, I'm curious to know what happens if this additional condition that the sequence be dominated is not satisfied. In the proofs that I've seen of the Reverse Fatou's Lemma they've all taken advantage of the fact that the functions are dominated, but I just don't see why there can't be a proof of the inequality that doesn't use this assumption. 
My interest was further piqued by the following problem I came across in Bartle's Elements of Integration and Lebesgue Measure: Let $(f_n)$ be a sequence of extended real-valued, nonnegative functions defined on $\left(\mathbf{X},\mathcal{X},\mu\right)$, $f_n \to f$, and let $\int f d\mu =\lim \int f_n d\mu < \infty.$ Show that for any $E \in \mathcal{X},$ $$\int_E f d\mu =\lim \int_E f_n d\mu.$$ I was able to prove this through two applications of Fatou's Lemma and use of the nice identity $\lim\sup(-f_n) =-\lim\inf(f_n)$. But there was another proof I abandoned after I failed to prove that the Reverse Fatou's Lemma held with the given hypotheses. Any insight is much appreciated. REPLY [19 votes]: For a counter-example to the reverse Fatou lemma without the domination hypothesis, take $f_n:=\chi_{(n,n+1)}$, with $X$ the real line, Borel $\sigma$-algebra and Lebesgue measure. We have $\limsup_{n\to +\infty}f_n(x)=0$ for all $x$ but $\int f_nd\mu=1$.<|endoftext|> TITLE: How Strong is an Egg? QUESTION [9 upvotes]: You have two identical eggs. Standing in front of a 100 floor building, you wonder what is the maximum number of floors from which the egg can be dropped without breaking it. What is the minimum number of tries needed to find out the solution? REPLY [3 votes]: The critical strip of eggs is the interval $C:=[k,k+1]$ such that, when an egg is dropped from a height $\leq k$ it survives, and when it is dropped from a height $\geq k+1$ it breaks. The numbers $$h_r(n)\qquad(r\geq0,\ n\geq0)$$ ("allowed length for $r$ eggs and $n$ trials") are defined as follows: If we know that $C$ lies in a certain interval of length $\ell\leq h_r(n)$ we can locate $C$ with $r$ eggs in $n$ trials, but if we know only that $C$ lies in a certain interval of length $\ell>h_r(n)$ then there is no deterministic algorithm that allows us to locate $C$ with $r$ eggs in $n$ trials for sure. Obviously $$h_0(n)=1\quad(n\geq0)\ ,\qquad h_r(0)=1\quad(r\geq0)\ .$$ The numbers $h_r(n)$ satisfy the recursion $$h_r(n)=h_{r-1}(n-1)+h_r(n-1)\qquad(r\geq1,\ n\geq1)\ .\qquad(*)$$ Proof. Assume $C$ lies in a certain interval $I$ of length $\ell:=h_{r-1}(n-1)+h_r(n-1)$. We may as well assume that the lower end of $I$ is at level zero. Drop the first egg at height $h_{r-1}(n-1)$. If it breaks then $C$ is contained in the interval $[0,h_{r-1}(n-1)]$ and can be located with the remaining $r-1$ eggs in $n-1$ trials. If it survives then $C$ is contained in the interval $[h_{r-1}(n-1),\ell]$ of length $h_r(n-1)$ and can be located with the $r$ eggs in $n-1$ trials. This proves $h_r(n)\geq\ell$. Conversely, assume that we know only that $C$ lies in a certain interval of length $\ell'>\ell$ and that there is an algorithm that locates $C$ with $r$ eggs in $n$ trials. This algorithm would tell us the height $k$ at which we should drop the first egg. If $k>h_{r-1}(n-1)$ and the egg breaks or if $k\leq h_{r-1}(n-1)$ and the egg survives it would be impossible to finish the task, as the remaining interval that contains $C$ is larger than allowed for the remaining resources. It follows that $h_r(n)\leq\ell$. From $(*)$ we obtain $$h_1(n)=n+1\ ,\quad h_2(n)={1\over2}(n^2+n+2),\quad h_3(n)={1\over6}(n^3+5n+6)\qquad(n\geq0)\ .$$ As $h_2(13)=92<100\leq106=h_2(14)$, thirteen trials are not enough for two eggs and $100$ floors, but fourteen are: the answer is $14$.<|endoftext|> TITLE: How to prove figure eight is not a manifold? QUESTION [9 upvotes]: Possible Duplicate: A wedge sum of circles without the gluing point is not path connected I know that figure eight is not a manifold because its center has no neighborhood homeomorphic to $\mathbb{R}^n$. But how can one prove this rigorously?
REPLY [12 votes]: Suppose that there was a neighborhood $U$ of the center point $P$ that was homeomorphic to $\mathbb{R}^n$. Consider $U \setminus \{P\}$. How many connected components does it have? How many connected components are there in $\mathbb{R}^n \setminus \{\text{point}\}$? [Be careful to note that the answer is different for $n=1$ than for $n > 1$, but that doesn't ultimately cause any trouble.]<|endoftext|> TITLE: Probabilty of picking an irrational number QUESTION [6 upvotes]: I've started to learn some probabilty and it made think about this question: let us assume we randomize virtually any number between 0 and 1. What is the probability for this number to be irrational? REPLY [7 votes]: Modern probability uses measure theory. In particular the Lebesgue measure. This means that any countable set has probability zero to be chosen from. In particular this means that the probability for choosing an irrational number is $1$. In fact not just irrational, but also transcendental and normal, and any other property which occurs outside a countable set.<|endoftext|> TITLE: Global dimension of quasi Frobenius ring QUESTION [9 upvotes]: Let $R$ be a quasi-Frobenius ring (so $R$ is self-injective and left and right noetherian). I want to prove that $lD(R)=0$ or $\infty$, where $lD(R)$ denotes the left global dimension. I'm unsure about how to go about proving this; the only thing I can think of is to somehow show that if we assume that some $R$ module $A$ has a finite projective resolution then it is in fact projective and hence has global dimension $0$. I tried to do this using the fact that a module over a quasi Frobenius ring is projective if and only if it is injective, but I didn't get far. REPLY [8 votes]: Suppose you have a module $M$ with finite projective dimension, say $n$: $$0\to P_n\to \dots\to P_0\to M\to 0.$$ Look at the injection $0\to P_n\to P_{n-1}$. You have that $P_n$ is injective, hence $P_n$ is a direct summand of $P_{n-1}$. But then you could leave out the summand $P_n$ in both $P_n$ and $P_{n-1}$, contradiction to projective dimension $n$.<|endoftext|> TITLE: Does tensoring by a flat module preserve pullbacks of pairs of monos? QUESTION [6 upvotes]: Let $k$ be a commutative ring and let $C$ be a flat module over $k$. Let $M$ be a module and let $A,B \subseteq M$ be two submodules. We get a pullback diagram: where $s, i, j, t$ are inclusions. If we tensor by $C$ we get the diagram: However is this a pullback diagram? I cannot work out how to define the unique morphism. Sorry about the size of the pictures. REPLY [5 votes]: Yes. Consider the following conversion of pullback into kernel: $0\to A\cap B\to A\oplus B\stackrel{(i,-j)}{\to} Im(i,-j)\to 0$ is exact iff $A\cap B$ is the pullback of $i$ and $j$ (it satisfies the same universal property). Since $C$ is flat the following sequence is also exact: $0\to (A\cap B)\otimes C\to (A\otimes C)\oplus (B\otimes C)\stackrel{(i\otimes C,-j\otimes C)}{\to} Im(i,-j)\otimes C\to 0$ Hence by the same argument as above $(A\cap B)\otimes C$ is the pullback of the two given maps, hence $(A\cap B)\otimes C\cong (A\otimes C)\cap (B\otimes C)$.<|endoftext|> TITLE: Prove the trigonometric identity $(35)$ QUESTION [12 upvotes]: Prove that \begin{equation} \prod_{k=1}^{\lfloor (n-1)/2 \rfloor}\tan \left(\frac{k \pi}{n}\right)= \left\{ \begin{aligned} \sqrt{n} \space \space \text{for $n$ odd}\\ \\ \ 1 \space \space \text{for $n$ even}\\ \end{aligned} \right. \end{equation} I found this identity here at$(35)$. 
At the moment I don't know where I should start from. Thanks! REPLY [3 votes]: I forgot that I had posted an answer to this question, and answered a duplicate question recently. Since this answer to the odd case is significantly different from the other answers, I have moved it here. Note that $$ \tan^2(\theta/2)=-\left(\frac{e^{i\theta}-1}{e^{i\theta}+1}\right)^2 $$ Therefore, for odd $n$, $$ \begin{align} \prod_{k=1}^{(n-1)/2}\tan^2(k\pi/n) &=\prod_{k=1}^{(n-1)/2}(-1)\left(\frac{e^{2\pi ik/n}-1}{e^{2\pi ik/n}+1}\right)^2\\ &=\prod_{k=1}^{(n-1)/2}\left(\frac{e^{2\pi ik/n}-1}{e^{2\pi ik/n}+1}\right)\left(\frac{e^{-2\pi ik/n}-1}{e^{-2\pi ik/n}+1}\right)\\ &=\prod_{k=1}^{(n-1)/2}\left(\frac{e^{2\pi ik/n}-1}{e^{2\pi ik/n}+1}\right)\left(\frac{e^{2\pi i(n-k)/n}-1}{e^{2\pi i(n-k)/n}+1}\right)\\ &=\prod_{k=1}^{n-1}\frac{e^{2\pi ik/n}-1}{e^{2\pi ik/n}+1}\\ &=\prod_{k=1}^{n-1}\frac{1-e^{2\pi ik/n}}{1+e^{2\pi ik/n}}\\ &=\lim_{z\to1}\prod_{k=1}^{n-1}\frac{z-e^{2\pi ik/n}}{z+e^{2\pi ik/n}}\\ &=\lim_{z\to1}\frac{z^n-1}{z-1}\frac{z+1}{z^n+1}\\[12pt] &=n \end{align} $$ Since tangent is positive in the first quadrant, $$ \prod_{k=1}^{(n-1)/2}\tan(k\pi/n)=\sqrt{n} $$<|endoftext|> TITLE: Relation of Function Field of a scheme to the Local Ring of its Prime Divisor QUESTION [5 upvotes]: Refer to p. 130 in Hartshorne: Let $X$ be a noetherian, integral separated scheme, regular in codimension 1, and let $Y$ be a prime divisor of $X$, with generic point $\eta$. Let $\xi$ be the generic point of $X$ and $K=\mathcal{O}_{X,\xi}$ is the function field of $X$. I can see that $\mathcal{O}_{X,\eta}$ is an integral domain and that it can be injected into $K$. But why is $K$ the quotient field of $\mathcal{O}_{X,\eta}$? REPLY [5 votes]: Pick an affine open $U=\mathrm{Spec}(A)$ containing $\eta$. Then $\mathscr{O}_{X,\eta}$ is the localization of $A$ at the prime ideal $\mathfrak{p}\in\mathrm{Spec}(A)$ corresponding to $\eta$. Also, since the generic point $\xi$ is in $U$, and necessarily corresponds to the generic point of $\mathrm{Spec}(A)$, i.e., the zero ideal, the local ring at the generic point is $A$ localized at the zero ideal, i.e., the field of fractions of $A$. Now you just have to prove that any localization of $A$ has the same field of fractions as $A$. Or more precisely, the canonical map $S^{-1}A\rightarrow\mathrm{Frac}(A)$ identifies the target as the field of fractions of the source for any multiplicative set $S\subseteq A$. In general, since local rings can always be computed in affine opens containing the relevant point, for an integral scheme, the function field can be computed on any affine open. So my answer above does not use regularity in codimension one or the Noetherian hypothesis, or separatedness. It only uses integrality.<|endoftext|> TITLE: When does a Square Matrix have an LU Decomposition? QUESTION [28 upvotes]: When can we split a square matrix (rows = columns) into it’s LU decomposition? The LUP (LU Decomposition with pivoting) always exists; however, a true LU decomposition does not always exist. How do we tell if it does/doesn't exist? (Note: decomposition and factorization are equivalent in this article) From the Wikipedia article on LU decompositions: Any square matrix $A$ admits an LUP factorization. If $A$ is invertible, then it admits an LU (or LDU) factorization if and only if all its leading principal minors are non-zero. 
If $A$ is a singular matrix of rank $k$, then it admits an LU factorization if the first $k$ leading principal minors are non-zero, although the converse is not true. This implies that for a square matrix: LUP always exists (We can use this to quickly figure out the determinant). If the matrix is invertible (the determinant is not 0), then a pure LU decomposition exists only if the leading principal minors are not 0. If the matrix is not invertible (the determinant is 0), then we can't know if there is a pure LU decomposition. The problem is this third statement here. “If $A$ is a singular matrix of rank $k$, then it admits an LU factorization if the first $k$ leading principal minors are non-zero”, gives us a way to find out if LU decomposition exists for a singular (non-invertible) matrix. However, it then says, “although the converse is not true”, implying that even if a leading principal minor is 0, that we could still have a valid LU decomposition that we can't detect. This leads us back to the question: is there a way of truly knowing whether a matrix has an LU decomposition? REPLY [8 votes]: Main point from above mentioned article: Matrix A (k-by-k) has LU factorization if: $$\mathsf Rank(A_{11})+k\geq Rank(\begin{bmatrix} A_{11} & A_{12} \end{bmatrix}) +Rank(\begin{bmatrix} A_{11} \\ A_{21} \end{bmatrix})$$<|endoftext|> TITLE: An intuitive vision of fiber bundles QUESTION [24 upvotes]: In my mind it is clear the formal definition of a fiber bundle but I can not have a geometric image of it. Roughly speaking, given three topological spaces $X, B, F$ with a continuous surjection $\pi: X\rightarrow B$, we "attach" to every point $b$ of $B$ a closed set $\pi^{-1}(b)$ such that it is homeomorphic to $F$ and so $X$ results a disjoint union of closed sets and each of them is homeomorphic to $F$. We also ask that this collection of closed subset of $X$ varies with continuity depending on $b\in B$, but I don't understand why this request is formalized using the conditions of local triviality. REPLY [2 votes]: Maybe it is helpful to take a map $f\colon\mathbb R\to\mathbb R$ and consider its graph $\Gamma(f):=\{(x,f(x)) |x\in \mathbb R\}$. We get a continuous map $\pi\colon \Gamma(f)\to \mathbb R$, $(x,f(x)) \mapsto x$. For every $p\in \mathbb R$, the preimage $\pi^{-1}(p)$ is again a single point, so this has a chance to be a fiber bundle. However, it is a fiber bundle, if and only if $f$ is continuous: If $\pi$ is a fiber bundle near $x$, then there is a local trivialization, i.e. a homeomorphism $\pi^{-1}(U)\to U$ over $U$ for $U\subset \mathbb R$ a neighborhood of $x$. This implies that this homeomorphism is precisely given by $\pi$ and so the assignment $x\mapsto (x,f(x))\mapsto f(x)$ (which is the inverse of the homeomorphism composed with the projection) is continuous near $x$ and hence $f$ is continuous near $x$. On the other hand, if $f$ is continuous, the map $\pi$ itself is already a homeomorphism (an inverse is given by $x\mapsto (x,f(x))$) and hence a fiber bundle. So, local triviality in this case means that the map $f$ is locally continuous and the fiber which is essentially $f(x)$, varies continuously with the base point. Also, it is not hard to see that for the sign-function (as an example of a non continuous function), the non continuity at $0$ ruins the local triviality. I hope this is a little helpful.<|endoftext|> TITLE: The boundary of union is the union of boundaries when the sets have disjoint closures QUESTION [12 upvotes]: Assume $\bar A\cap\bar B=\emptyset$. 
Is $\partial (A \cup B)=\partial A\cup\partial B$, where $\partial A$ and $\bar A$ mean the boundary set and closure of set $A$? I can prove that $\partial (A \cup B)\subset \partial A\cup\partial B$ but for proving $\partial A\cup\partial B\subset \partial (A \cup B)$ it seems not trivial. I tried to show that for $x\in \partial A\cup\partial B$ WLOG, $x\in \partial A$ so $B(x)\cap A$ and $B(x)\cap A^c$ not equal to $\emptyset$ but it seems not enough to show the result. REPLY [2 votes]: Here is a proof using only set operations, to complement the other answers. Proposition. Suppose $\overline{A} \cap B = A \cap \overline{B} = \varnothing$. Then $\mathrm{Int}(A \cup B) = \mathrm{Int}(A) \cup \mathrm{Int}(B)$. Proof. Since $\mathrm{Int}(A) \cup \mathrm{Int}(B)$ is an open set contained in $A \cup B$, we have $$ \mathrm{Int}(A) \cup \mathrm{Int}(B) \subseteq \mathrm{Int}(A \cup B). $$ For the converse, note that $A \subseteq \overline{B}^c = \mathrm{Int}(B^c)$ and $B \subseteq \overline{A}^c = \mathrm{Int}(A^c)$, hence $\mathrm{Int}(A \cup B) \subseteq A \cup B \subseteq \mathrm{Int}(B^c) \cup \mathrm{Int}(A^c)$. Therefore $$ \mathrm{Int}(A \cup B) = \big(\mathrm{Int}(A \cup B) \cap \mathrm{Int}(B^c)\big) \cup \big(\mathrm{Int}(A \cup B) \cap \mathrm{Int}(A^c)\big). \tag*{$(1)$} $$ Note that $\mathrm{Int}(A \cup B) \cap \mathrm{Int}(B^c) \subseteq (A \cup B) \cap B^c = A \setminus B = A$. Since the LHS is an open set contained in $A$, it follows that $\mathrm{Int}(A \cup B) \cap \mathrm{Int}(B^c) \subseteq \mathrm{Int}(A)$. Analogously, $\mathrm{Int}(A \cup B) \cap \mathrm{Int}(A^c) \subseteq \mathrm{Int}(B)$, so it follows from $(1)$ that $\mathrm{Int}(A \cup B) \subseteq \mathrm{Int}(A) \cup \mathrm{Int}(B)$.$\quad\Box$ To prove the same for the boundary, recall that $\partial S = \overline{S} \setminus \mathrm{Int}(S)$. We get the following: Corollary. Suppose $\overline{A} \cap B = A \cap \overline{B} = \varnothing$. Then $\partial(A \cup B) = \partial A \cup \partial B$. Proof. Recall that $\overline{A \cup B} = \overline{A} \cup \overline{B}$, even for arbitrary sets. It follows that \begin{align*} \partial(A \cup B) &= \overline{A \cup B} \setminus \mathrm{Int}(A \cup B) \\[1ex] &= (\overline{A} \cup \overline{B}) \setminus (\mathrm{Int}(A) \cup \mathrm{Int}(B)) \\[1ex] &= \big(\overline{A} \setminus (\mathrm{Int}(A) \cup \mathrm{Int}(B))\big) \cup \big(\overline{B} \setminus (\mathrm{Int}(A) \cup \mathrm{Int}(B))\big) \\[1ex] &= (\overline{A} \setminus \mathrm{Int}(A)) \cup (\overline{B} \setminus \mathrm{Int}(B)) \tag*{$(2)$}\\[1ex] &= \partial A \cup \partial B, \end{align*} where to deduce $(2)$ we use that $\overline{A} \cap \mathrm{Int}(B) \subseteq \overline{A} \cap B = \varnothing$ and $\overline{B} \cap \mathrm{Int}(A) \subseteq \overline{B} \cap A = \varnothing$.$\quad\Box$ Under OP's stronger assumption that $\overline{A} \cap \overline{B} = \varnothing$, we have the following stronger result. Theorem. Suppose $\overline{A} \cap \overline{B} = \varnothing$. Then $\partial(A \cup B)$ is the topological disjoint union of $\partial A$ and $\partial B$. Proof. Since $\partial A \cap \partial B \subseteq \overline{A} \cap \overline{B} = \varnothing$, it follows from the preceding Corollary that $\partial(A \cup B)$ is the disjoint set union of $\partial A$ and $\partial B$. 
To see that it is also a topological disjoint union, note that $\partial A$ and $\partial B$ are closed sets, so both are clopen in $\partial(A \cup B)$.$\quad\Box$<|endoftext|> TITLE: How to compute the values of this function ? ( Fabius function ) QUESTION [14 upvotes]: How to compute the values of this function ? ( Fabius function ) It is said not to be analytic but $C^\infty$ everywhere. But I do not even know how to compute its values. Im confused. Here is the link : http://www.math.osu.edu/~edgar.2/selfdiff/ REPLY [7 votes]: The Fabius function assumes rational values at dyadic rational arguments. We will give an explicit formula for those values. Because dyadic rationals form a dense subset of the reals, and the Fabius function is continuous, its value at any real point can be found as a limit of its values on a sequence of dyadic rationals converging to that point. The following is based on the book Rvachev V. L., Rvachev V. A., "Non-classical methods of the approximation theory in boundary value problems", Naukova Dumka, Kiev (1979), pp. 116-126$^{[1]}$. The book is in Russian, and as far as I know, no English translation exists. But, fortunately, it mostly consists of math formulae. Let's define $t_n$ by the recurrence $$t_0 = 1, \quad t_n = (-1)^n \, t_{\lfloor n/2\rfloor}\tag1$$ It is easy to see that $|t_n|=1$, and the signs follow the same pattern as the Thue–Morse sequence. The same sequence is given by a non-recurrent, but rather clumsy formula $$t_n = 1-\frac83 \, \sin^2\left(\frac\pi3 \cdot \sum_{m=1}^n \left[2 + (-1)^{\binom n m}\right]\right)\tag2$$ Let's define $c_n$ by the recurrence $$c_0 = 1, \quad c_n = \frac1{(4^n - 1)(2n+1)} \, \sum_{m\ge1} \binom{2n+1}{2m+1} \, c_{n-m},\tag3$$ so we have $$c_1 = \frac19, \quad c_2 = \frac{19}{675}, \quad c_3 = \frac{583}{59535}, \quad c_4 = \frac{132809}{32531625}, \quad \text{etc.}\tag4$$ Note that only finite number of terms in the sum are non-zero. I am not aware of any non-recurrent formula for this sequence, but it would be very nice to find one. The values of the Fabius function at dyadic rationals are given by $$F\!\left(\tfrac{s}{2^n}\right) = \frac1{n! \, 2^{\binom{n+1}2}} \, \sum_{m\ge0}\binom n {2m} \, c_m \sum_{1 \le r \le s}(2r-1)^{n-2m} \, t_{s-r}.\tag5$$ Again, note that only finite number of terms in each sum are non-zero.<|endoftext|> TITLE: short exact sequence of holomorphic vector bundles splits but not holomorphically, only $C^{\infty}$ QUESTION [9 upvotes]: If there is a short exact sequence of holomorphic vector bundles, $$0 \overset{a_1}{\to} W \overset{a_2}{\to} V \overset{a_3}{\to} F \overset{a_4}{\to} 0,$$ then one can expect a $C^{\infty}$ splitting $$V \cong W \oplus F$$ rather than a holomorphic splitting. I know that a s.e.s. needs consecutive maps to equal $1$, and that for exactness that $im(a_i) = ker(a_{i+1})$. I also know that a vector bundle is just a manifold with the fiber as a vector space (complex here). For a shorthand of notation of a vector bundle, I use $\pi: E \to B$ where $B \times V$ is the product space and $\pi$ is the fiber bundle. Written like a s.e.s., this is $$V \to E \overset{\pi}{\to} B.$$ Also $a_2$ is injective and $a_3$ is surjective. So is the reason why the splitting is only $C^{\infty}$, and not holomorphic, because the maps, either $a_2^{-1}$ or $a_3^{-1}$ are not injective? 
REPLY [15 votes]: On a (paracompact) complex manifold all short exact sequences of $C^{\infty}$ vector bundles are $C^{\infty}$ split, so it is enough to exhibit an exact sequence of holomorphic vector bundles that doesn't holomorphically split. The simplest example is the exact sequence on $\mathbb P_\mathbb C^1$: $$0\to \mathcal O(-2) \to \mathcal O(-1) \oplus \mathcal O(-1)\to \mathcal O\to 0$$ It does not split because the bundles $\mathcal O(-1) \oplus \mathcal O(-1)$ and $\mathcal O(-2) \oplus \mathcal O$ are not isomorphic: the second has nonzero holomorphic sections but the first doesn't.<|endoftext|> TITLE: How many numbers in a given range are coprime to $N$? QUESTION [12 upvotes]: Is there a good algorithm for counting the numbers $x$ between $A$ and $B$ with $x$ and $N$ coprime? This is just like this question except for the range. The factorization of $N$ is known. I actually need to solve the problem for fixed $N$ and many ranges, so I think I can mark all multiples of factors of $N$ in a BitSet and simply count what remains. But is there a nicer solution (or one for the case I need the answer for a single range only)? REPLY [6 votes]: For a proof of André Nicolas's formula using multiplicative number theory, see Section 8.3 in I. Niven, H. S. Zuckerman, H. L. Montgomery, An Introduction to the Theory of Numbers, 5th ed., Wiley (New York), 1991, which gives the succinct expressions \begin{align} f(C) & = \sum_{d|N} \mu(d) \left\lfloor \frac{C}{d} \right\rfloor\\ & = \frac{C \varphi(N)}{N} - \sum_{d|N} \mu(d) \text{ frac} \left( \frac{C}{d} \right) \end{align} where the sums are taken over the positive square-free integers $d$ that divide $N$ because $\mu(d)$, the Möbius function, is $0$ when $d$ is not square-free, and $\text{frac}$ is the fractional part function. We see that $C\varphi(N)/N$ approximates $f(C)$ and that the approximation improves as $C$ increases or as the number of prime factors of $N$ decreases.<|endoftext|> TITLE: How do you sum PDF's of random variables? QUESTION [5 upvotes]: I have a question asking me to determine the PDF of $L=X+Y+W$, where $X$, $Y$ and $W$ are all independent. $X$ is a Bernoulli random variable with parameter $p$, $Y \sim \mathrm{Binomial}(10, 0.6)$ and $W$ is a Gaussian random variable with zero mean and unit variance (meaning is is a standard normal random variable). I know the PDF's of $X$, $Y$ and $W$ (sort of hard to type out, but I know them). Could I get some sort of hint as to how these are added together? REPLY [3 votes]: Let me get you started by doing $X + Y$. Let $X \sim Ber(p)$ and $Y \sim Bin(10, 0.6)$. Let $\mu$ be the law of $X$ and $\nu$ be the law of $Y$. The law of $X + Y$ is given by the convolution \begin{equation} (\nu * \mu)(H) = \int_{\mathbb{R}} \nu(H-x)\mu(dx), \qquad H \subseteq \mathbb{R}. \end{equation} $X$ and $Y$ are discrete and $X + Y$ can take values $0, 1, \ldots, 11$. Thus we specify $(\mu * \nu)(k)$ for $k = 0, 1, \ldots 11$. \begin{equation} (\nu * \mu)(k) = \int_{\mathbb{R}} \nu(k-x)\mu(dx) = \nu(k)(1-p) + \nu(k-1)p. \qquad k = 0, 1, \ldots, 11. \end{equation} To get the first equality substitute $k$ for $H$ into the convolution formula. The second equality uses the fact that we know $\mu$ because $X$ is $Ber(p)$. We also know $\nu$ and so we can write \begin{equation} (\nu * \mu)(k) = {10 \choose k} 0.6^k 0.4^{n-k}(1-p) + {10 \choose k-1} 0.6^{k-1} 0.4^{n-(k-1)}p \end{equation} You can check that the convolution has given a sensible formula for the distribution of $X + Y$. 
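One quick way to check this formula is numerically: simulate $X+Y$ and compare the empirical frequencies with the closed form. Here is a minimal Python sketch (the value $p=0.3$ is an arbitrary choice made only for the check):

```python
import random
from math import comb

p, n, trials = 0.3, 10, 200_000   # p is a hypothetical parameter value

def pmf(k):
    """Closed form for P(X + Y = k) derived above."""
    def nu(j):  # law of Y ~ Binomial(10, 0.6)
        return comb(n, j) * 0.6**j * 0.4**(n - j) if 0 <= j <= n else 0.0
    return nu(k) * (1 - p) + nu(k - 1) * p

counts = [0] * (n + 2)
for _ in range(trials):
    x = 1 if random.random() < p else 0                     # X ~ Ber(p)
    y = sum(1 for _ in range(n) if random.random() < 0.6)   # Y ~ Bin(10, 0.6)
    counts[x + y] += 1

for k in range(n + 2):
    print(k, counts[k] / trials, round(pmf(k), 4))   # the two columns should agree
```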
For example if $k = 10$ then \begin{equation} (\nu * \mu)(10) = 0.6^{10} (1-p) + 10 \cdot 0.6^{9} p. \end{equation} $X + Y = 10$ if $X = 0$ and $Y = 10$ or if $X = 1$ and $Y = 9$ and because $X$ and $Y$ are independent these formulas can be verified.<|endoftext|> TITLE: Show uncountable set of real numbers has a point of accumulation QUESTION [5 upvotes]: Show that every uncountable set of real numbers has a point of accumulation. REPLY [16 votes]: Hint: If $A$ is an uncountable set of real numbers then there exists $k\in\mathbb Z$ such that $A\cap[k,k+1]$ is infinite. Use the definition of compactness, and the fact $[k,k+1]$ is a closed and bounded interval.<|endoftext|> TITLE: What is the cardinality of the set of all sequences in $\mathbb{R}$ converging to a real number? QUESTION [9 upvotes]: Let $a$ be an real number and let $S$ be the set of all sequences in $\mathbb{R}$ converging to $a$. What is the Cardinality of $S$? Thanks REPLY [17 votes]: First note that there are only $2^{\aleph_0}$ sequences of real numbers. This is true because a sequence is a function from $\mathbb N$ to $\mathbb R$ and we have $$\left|\mathbb{R^N}\right|=\left(2^{\aleph_0}\right)^{\aleph_0}=2^{\aleph_0\cdot\aleph_0}=2^{\aleph_0}$$ Now note that take any injective sequence which converges to $a$, then it has $2^{\aleph_0}$ subsequences. All are convergent and they all converge to $a$. Therefore we have at least $2^{\aleph_0}$ sequences converging to $a$, but not more than $2^{\aleph_0}$ sequences over all, so we have exactly $2^{\aleph_0}$ sequences.<|endoftext|> TITLE: Trace of an integral QUESTION [11 upvotes]: Given appropriate matrices $A$ and $B_x$, is $\,\,tr\left(\int A B_x dx\right) = \int tr\left(A B_x\right) dx\,\,?$ If so, is it true by the argument that it transfers from (discrete) sums? REPLY [7 votes]: Suppose $A$ is $n \times m$ and $B$ is $m\times n$. Then $$\text{tr}\left(\int A B_x dx\right) = \sum_{i=1}^n\left(\int A B_x dx\right)_{ii} = \sum_{i=1}^n\left(\int \sum_{j=1}^nA_{ij}(B_x)_{ji} dx\right) $$ $$=\int \sum_{i=1}^n\sum_{j=1}^nA_{ij}(B_x)_{ji} dx = \int \sum_{i=1}^n (AB_x)_{ii}dx = \int \text{tr}(AB_x) dx,$$ where the only step which is not a definition is the commutation of the integral and the finite sum.<|endoftext|> TITLE: Application of the Intermediate Value theorem. QUESTION [5 upvotes]: Suppose $f$ is continuous on $[0,2]$ and that $f(0) = f(2)$. Then $\exists$ $x,$ $y \in [0,2]$ such that $|y - x| = 1$ and that $f(x) = f(y)$. Let $g(x) = f(x+1) - f(x)$ on $[0,1]$. Then $g$ is continuous on $[0,1]$, and hence $g$ enjoys the intermediate value property! Now notice $$g(0) = f(1) - f(0)$$ $$g(1) = f(2) - f(1)$$ Therefore $$ g(0)g(1) = -(f(0) - f(1))^2 < 0$$ since $f(0) = f(2)$. Therefore, there exists a point $x$ in $[0,1]$ such that $g(x) = 0$ by the intermediate value theorem. Now, if we pick $y = x + 1$, i think the problem is solved. I would like to ask you guys for feedback. Is this solution correct? Is there a better way to solve this problem? REPLY [2 votes]: Very good! It is enough to write $g(1)=-g(0)$.. And, as Hagen commented, you only forgot to mention the case when already $f(0)=f(1)$, but then it is readily done.<|endoftext|> TITLE: XOR properties in set of numbers QUESTION [9 upvotes]: Say I have n positive numbers A1,A2...An and Ak is minimum among them. And d is a number such that (A1-d) ⊕ (A2-d) ⊕ .......(An-1-d) ⊕ (An-d) = 0 where0<=d <= Ak. I want to know how many d are there . 
I know I can iterate over all possible values of d and take the XOR of the n numbers every time, but the complexity in this case is O(Ak∗n), which is O(n^2) in the worst case. Is there any property of XOR which can help us find the number of d with lower complexity than this? Edit: E.g., if n=4 and the numbers are 4 6 8 10, then d can be {0,3}, as 4 ⊕ 6 ⊕ 8 ⊕ 10 = 0 and 1 ⊕ 3 ⊕ 5 ⊕ 7 = 0 REPLY [8 votes]: You can figure out the possible values of d one bit at a time. Consider the equation mod 2: (A1-d) ⊕ (A2-d) ⊕ ... ⊕ (A{n-1}-d) ⊕ (An-d) = 0 mod 2 Try 0 and 1 for d, see if either works. Say d==1 is the only assignment that works. Then try the values 1 and 3 in the equation: (A1-d) ⊕ (A2-d) ⊕ ... ⊕ (A{n-1}-d) ⊕ (An-d) = 0 mod 4 and so on, taking all valid ds mod 2^i and trying both d and d+2^i mod 2^{i+1}. The reason why this works is that both subtraction and xor do not propagate any information from higher bits to lower bits, so if you find a solution to the mod 2 equation, it remains a solution no matter what the higher-order bits of d are set to. The running time should be something like O(n * log Ak * #of solutions). Example: A = {9,12,23,30} try d=0,1 mod 2. Both work. try d=0,1,2,3 mod 4. All work. try d=[0-7] mod 8. 1,3,5,7 work. try d=[1,9,3,5,7] mod 16. 3,7 work. (no need to check if d>Ak) now we've tried up to Ak, check the remaining solutions for the remaining high-order bits. 3 and 7 work. Example: A = {4,6,8,10} try d=0,1 mod 2. Both work. try d=0,1,2,3 mod 4. All work. try d=0,4,1,2,3 mod 8. All work. at this point you've tried all d up to Ak. Do a final check and 0,3,4 work.<|endoftext|> TITLE: Showing that if a subgroup is normal, its surjective homomorphic image is normal QUESTION [6 upvotes]: $\alpha : G \to H$ is a surjective homomorphism. And $U \subset G$ is a subgroup of $G$. Verify the claim - The image of $U$, i.e. $\alpha(U)$, is a subgroup of $H$, and if $U$ is normal in $G$, then $\alpha(U)$ is normal in $H$. Answer: Firstly, do I have to show $\alpha(U)$ is a subgroup of $H$, or is that statement just a statement of fact as part of the question? Anyway, here is what I have done, taking it as a given that $\alpha(U)$ is a subgroup of $H$ - As $U$ is normal we have $U = gUg^{-1}$ $\alpha(U) = \alpha(gUg^{-1}) =$ {applying the homomorphic mapping into $H$} = $\alpha(g)\alpha(U)\alpha(g^{-1})$ Is that correct? I have a feeling I should take the $-1$ exponent outside the bracket as an extra final step or is that superfluous? REPLY [2 votes]: To show that $\alpha(U)$ is a normal subgroup, you need to prove that $\alpha(U) = x \alpha(U) x^{-1}$ for all $x \in H$. But any $x \in H$ can be written in the form $\alpha(g)$ for $g \in G$ since $\alpha$ is surjective. Thus, you need only prove that $\alpha(U) = \alpha(g) \alpha(U) \alpha(g)^{-1}$ for all $g \in G$. Note the $-1$ exponent is "on the outside", so you really do need to take that last step as you suspected. And yes, as written it is meant that you should prove that $\alpha(U)$ is a subgroup. The proof requires you to carefully work through the definition/criterion for being a subgroup, but nothing beyond that. Also, +1 for showing your reasoning and clearly indicating what you are unsure of.<|endoftext|> TITLE: integral of exponential divided by polynomial QUESTION [7 upvotes]: I would like to solve the integral $$A\int_{-\infty}^\infty\frac{e^{-ipx/h}}{x^2+a^2}dx$$ where $h$ and $a$ are positive constants. Mathematica gives the solution as $\frac\pi{a}e^{-|p|a/h}$, but I have been trying to reduce my reliance on Mathematica.
I have no idea what methods I would use to solve it. Is there a good (preferably online) resource where I could look up methods for integrals like this fairly easily? REPLY [3 votes]: $\mathbf{Method\;1: }$ Integral Fourier Transform Consider the function $f(t)=e^{-a|t|}$, then the Fourier transform of $f(t)$ is given by $$ \begin{align} F(\omega)=\mathcal{F}[f(t)]&=\int_{-\infty}^{\infty}f(t)e^{-i\omega t}\,dt\\ &=\int_{-\infty}^{\infty}e^{-a|t|}e^{-i\omega t}\,dt\\ &=\int_{-\infty}^{0}e^{at}e^{-i\omega t}\,dt+\int_{0}^{\infty}e^{-at}e^{-i\omega t}\,dt\\ &=\lim_{u\to-\infty}\left. \frac{e^{(a-i\omega)t}}{a-i\omega} \right|_{t=u}^0-\lim_{v\to\infty}\left. \frac{e^{-(a+i\omega)t}}{a+i\omega} \right|_{t=0}^v\\ &=\frac{1}{a-i\omega}+\frac{1}{a+i\omega}\\ &=\frac{2a}{\omega^2+a^2}. \end{align} $$ Next, the inverse Fourier transform of $F(\omega)$ is $$ \begin{align} f(t)=\mathcal{F}^{-1}[F(\omega)]&=\frac{1}{2\pi}\int_{-\infty}^{\infty}F(\omega)e^{i\omega t}\,d\omega\\ e^{-a|t|}&=\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{2a}{\omega^2+a^2}e^{i\omega t}\,d\omega\\ \frac{\pi e^{-a|t|}}{a}&=\int_{-\infty}^{\infty}\frac{e^{i\omega t}}{\omega^2+a^2}\,d\omega. \end{align} $$ Comparing the last integral to the problem yields $t=-\frac{p}{h}$. Thus, $$ \int_{-\infty}^{\infty}\frac{e^{-\frac{ipx}{h}}}{x^2+a^2}\,dx=\frac{\pi e^{-a\left|\frac{p}{h}\right|}}{a}. $$ $\mathbf{Method\;2: }$ Note that: $$ \int_{y=0}^\infty e^{-(x^2+a^2)y}\,dy=\frac{1}{x^2+a^2}, $$ therefore $$ \int_{x=0}^\infty\int_{y=0}^\infty e^{-(x^2+a^2)y}\;e^{-\frac{ipx}{h}}\,dy\,dx=\int_{x=0}^{\infty}\frac{e^{-\frac{ipx}{h}}}{x^2+a^2}\,dx $$ Rewrite $$ \begin{align} \int_{x=0}^{\infty}\frac{e^{-\frac{ipx}{h}}}{x^2+a^2}\,dx&=\int_{y=0}^\infty\int_{x=0}^\infty e^{-(yx^2+\frac{ip}{h}x+a^2y)}\,dx\,dy\\ &=\int_{y=0}^\infty e^{-a^2y} \int_{x=0}^\infty e^{-\left(yx^2+\frac{ip}{h}x\right)}\,dx\,dy. \end{align} $$ In general $$ \begin{align} \int_{x=0}^\infty e^{-(ax^2+bx)}\,dx&=\int_{x=0}^\infty \exp\left(-a\left(\left(x+\frac{b}{2a}\right)^2-\frac{b^2}{4a^2}\right)\right)\,dx\\ &=\exp\left(\frac{b^2}{4a}\right)\int_{x=0}^\infty \exp\left(-a\left(x+\frac{b}{2a}\right)^2\right)\,dx\\ \end{align} $$ Let $u=x+\frac{b}{2a}\;\rightarrow\;du=dx$, then $$ \begin{align} \int_{x=0}^\infty e^{-(ax^2+bx)}\,dx&=\exp\left(\frac{b^2}{4a}\right)\int_{x=0}^\infty \exp\left(-a\left(x+\frac{b}{2a}\right)^2\right)\,dx\\ &=\exp\left(\frac{b^2}{4a}\right)\int_{u=0}^\infty e^{-au^2}\,du.\\ \end{align} $$ The last form integral is Gaussian integral that equals to $\frac{1}{2}\sqrt{\frac{\pi}{a}}$. Hence $$ \int_{x=0}^\infty e^{-(ax^2+bx)}\,dx=\frac{1}{2}\sqrt{\frac{\pi}{a}}\exp\left(\frac{b^2}{4a}\right). $$ Thus $$ \int_{x=0}^\infty e^{-(yx^2+\frac{ip}{h}x)}\,dx=\frac{1}{2}\sqrt{\frac{\pi}{y}}\exp\left(\frac{\left(\frac{ip}{h}\right)^2}{4y}\right)=\frac{1}{2}\sqrt{\frac{\pi}{y}}\exp\left(-\frac{p^2}{4h^2y}\right). $$ Next $$ \int_{x=0}^{\infty}\frac{e^{-\frac{ipx}{h}}}{x^2+a^2}\,dx=\frac{\sqrt{\pi}}{2}\int_{y=0}^\infty \frac{\exp\left(-a^2y-\frac{p^2}{4h^2y}\right)}{\sqrt{y}}\,dy. 
$$ In general $$ \begin{align} \int_{y=0}^\infty \frac{\exp\left(-ay-\frac{b}{y}\right)}{\sqrt{y}}\,dy&=2\int_{v=0}^\infty \exp\left(-av^2-\frac{b}{v^2}\right)\,dv\\ &=2\int_{v=0}^\infty \exp\left(-a\left(v^2+\frac{b}{av^2}\right)\right)\,dv\\ &=2\int_{v=0}^\infty \exp\left(-a\left(v^2-2\sqrt{\frac{b}{a}}+\frac{b}{av^2}+2\sqrt{\frac{b}{a}}\right)\right)\,dv\\ &=2\int_{v=0}^\infty \exp\left(-a\left(v-\frac{1}{v}\sqrt{\frac{b}{a}}\right)^2-2\sqrt{ab}\right)\,dv\\ &=2\exp(-2\sqrt{ab})\int_{v=0}^\infty \exp\left(-a\left(v-\frac{1}{v}\sqrt{\frac{b}{a}}\right)^2\right)\,dv\\ \end{align} $$ The trick to solve the last integral is by setting $$ I=\int_{v=0}^\infty \exp\left(-a\left(v-\frac{1}{v}\sqrt{\frac{b}{a}}\right)^2\right)\,dv. $$ Let $t=-\frac{1}{v}\sqrt{\frac{b}{a}}\;\rightarrow\;v=-\frac{1}{t}\sqrt{\frac{b}{a}}\;\rightarrow\;dv=\frac{1}{t^2}\sqrt{\frac{b}{a}}\,dt$, then $$ I_t=\sqrt{\frac{b}{a}}\int_{t=0}^\infty \frac{\exp\left(-a\left(-\frac{1}{t}\sqrt{\frac{b}{a}}+t\right)^2\right)}{t^2}\,dt. $$ Let $t=v\;\rightarrow\;dt=dv$, then $$ I_t=\int_{t=0}^\infty \exp\left(-a\left(t-\frac{1}{t}\sqrt{\frac{b}{a}}\right)^2\right)\,dt. $$ Adding the two $I_t$s yields $$ 2I=I_t+I_t=\int_{t=0}^\infty\left(1+\frac{1}{t^2}\sqrt{\frac{b}{a}}\right)\exp\left(-a\left(t-\frac{1}{t}\sqrt{\frac{b}{a}}\right)^2\right)\,dt. $$ Let $s=t-\frac{1}{t}\sqrt{\frac{b}{a}}\;\rightarrow\;ds=\left(1+\frac{1}{t^2}\sqrt{\frac{b}{a}}\right)dt$; as $t$ runs over $(0,\infty)$ the new variable $s$ increases monotonically from $-\infty$ to $\infty$, so $$ 2I=\int_{s=-\infty}^{\infty}e^{-as^2}\,ds=\sqrt{\frac{\pi}{a}}\quad\Longrightarrow\quad I=\frac{1}{2}\sqrt{\frac{\pi}{a}}. $$ Hence $$ \int_{y=0}^\infty \frac{\exp\left(-ay-\frac{b}{y}\right)}{\sqrt{y}}\,dy=2e^{-2\sqrt{ab}}\,I=\sqrt{\frac{\pi}{a}}\,e^{-2\sqrt{ab}}. $$ Applying this with $a$ replaced by $a^2$ and $b=\frac{p^2}{4h^2}$ gives $$ \int_{x=0}^{\infty}\frac{e^{-\frac{ipx}{h}}}{x^2+a^2}\,dx=\frac{\sqrt{\pi}}{2}\cdot\frac{\sqrt{\pi}}{a}\,e^{-\frac{a|p|}{h}}=\frac{\pi}{2a}\,e^{-\frac{a|p|}{h}}, $$ and the same computation with $p$ replaced by $-p$ handles the integral over $(-\infty,0]$, so $$ \int_{-\infty}^{\infty}\frac{e^{-\frac{ipx}{h}}}{x^2+a^2}\,dx=\frac{\pi}{a}\,e^{-\frac{a|p|}{h}}, $$ in agreement with Method 1.<|endoftext|> TITLE: Convergence of an infinite product $\prod_{k=1}^{\infty }(1-\frac1{2^k})$? QUESTION [5 upvotes]: Problem: I want to prove that the infinite product $\prod_{k=1}^{\infty }(1-\frac{1}{2^{k}})$ does not converge to zero. It is not necessary to find the value to which this product converges, but I am still curious to know if anybody is able (if possible of course) to find the value to which this infinite product converges. I appreciate any help. I tried the following trick: $\prod_{k=1}^{n}(1+a_{k})\geq 1+\sum_{k=1}^{n}a_{k}$ which can be easily proven by induction, where $a_{k}>-1$ and the $a_{k}$ all have the same sign. In this case, $a_{k}=-\frac{1}{2^{k}}$, but all I get is that the infinite product is greater than or equal to zero. REPLY [3 votes]: Suppose $\prod_{k=1}^n (1-x^k) \ge a + x^{n+1}$ where $0 < x < 1$ and $0 < a < 1$. Then $\prod_{k=1}^{n+1} (1-x^k) \ge (a + x^{n+1})(1-x^{n+1}) = a + x^{n+1}(1-a) - x^{2n+2}$. To make this $\ge a + x^{n+2}$, we want $1-a \ge x + x^{n+1}$. For $x = 1/2$, $a = 1/4$ will work for every $n\ge1$, and the base case $1-x=\frac12\ge a+x^{2}$ holds. So, this argument gives a basis for choosing values of $a$ that make this inequality work for this inductive proof.<|endoftext|> TITLE: Non-Isomorphic Group Extensions QUESTION [12 upvotes]: This is a question from a problem set on group cohomology, a subject I've just begun to learn. Let $B$ be a finite group and $A$ be abelian. I am looking for two groups $G_1$ and $G_2$ such that $G_1$ and $G_2$ are isomorphic as groups but $$1\rightarrow A\rightarrow G_1\rightarrow B\rightarrow 1$$ and $$1\rightarrow A\rightarrow G_2\rightarrow B\rightarrow 1$$ are not isomorphic as extensions. It has been suggested that I use $A=C_3^2$ and $B=C_2$. However, since the orders of $A$ and $B$ are relatively prime in this case, doesn't the Schur-Zassenhaus Lemma guarantee that the sequence splits so that there is only one extension? If this is the case, then how could we produce two non-isomorphic extensions? If someone could point out where I'm confused, I'd be very grateful. Thanks.
REPLY [4 votes]: I think the "smallest" counter-example is the following : Here I denote $\mathbb{Z}_k$ the group $\frac{\mathbb{Z}}{k\mathbb{Z}}$. Take $A:=\mathbb{Z}_2$, $G_1=G_2=G=\mathbb{Z}_4\times\mathbb{Z}_2$ and $B:=\mathbb{Z}_2\times \mathbb{Z}_2$ as abelian groups. You have two injections : $$\alpha: A\rightarrow G $$ $$a\mapsto (0,a) $$ and : $$\alpha': A\rightarrow G $$ $$a\mapsto (2a,0) $$ Those maps are clearly monomorphisms of groups. Furthermore it is not hard to see that $G/\alpha(A)=G/\alpha'(A)=B$. Hence we indeed get two extensions of $B$ by $A$ : $$0 \rightarrow A \overset{\alpha}{\rightarrow} G \overset{\beta}{\rightarrow} B \rightarrow 0\text{ and }0 \rightarrow A \overset{\alpha'}{\rightarrow} G \overset{\beta'}{\rightarrow} B \rightarrow 0$$ Clearly $G_1$ and $G_2$ are isomorphic as abelian groups but the extensions cannot be isomorphic, if they were isomorphic then there would be an automorphism $\Phi$ of $G$ such that : $$\Phi\circ \alpha =\alpha'$$ In particular : $$\Phi(0,1)=(2,0)$$ But this impossible because if we define $a:=\Phi^{-1}(1,0)$ then : $$\Phi(a+a)=(1,0)+(1,0)=(2,0) $$ Hence $a+a=(0,1)$ but if $a=(a_1,a_2)$ then $a+a=(2a_1,0)\neq (0,1)$. Hence such a $\Phi$ cannot exist, hence the extensions are not equivalent.<|endoftext|> TITLE: How many limit points in $\{\sin(2^n)\}$? How many can there be in a general sequence? QUESTION [6 upvotes]: Analysis question - given a sequence $\{a_n\}_{n=1}^\infty$, how many limit points can $\{a_n\}$ have? Initially I thought only $\aleph_0$, or countably many, because there are only countably many terms in such a sequence. But then I thought about the sequence where $a_n = \sin(2^n)$, which looks like this for the first 1000 terms. This sequence has no repeating terms, or else $\pi$ is rational. However I do not think every real number in $[0,1]$ appears in this sequence, as then the reals are countable, an absurd conclusion. It would not surprise me if this sequence has limit points, but how many? I do not immediately see any reason why $\sin(2^n)\notin (r-\epsilon,r+\epsilon)\subset[0,1]$, for $n$ large enough, because $r$ is arbitrary. This seems to imply that this sequence could have every point in $[0,1]\subset \mathbb R$ as a limit point, which slightly frightens me. So, How many limit points does $\{\sin(2^n)\}_{n=1}^\infty$ have? And in general, How many limit points can a sequence with countably many terms have? What is an example of a sequence with the maximum number of limit points? I would really like to see a proof of the answer to this last statement. I can easily come up with a sequence with $\aleph_0$ limit points, but I can't prove that is the upper bound. Thanks for any help and tips on this topic. REPLY [2 votes]: The positive rationals are countable. meaning there is a sequence $x_n$ that includes every one. The set of limit points of this sequence is the entire positive real line. I'll see if I can dig up a picture for the ordering, it is this standard diagonal back and forth thing. Oh, well. It is like the Cantor pairing function, except each pair $(x,y)$ is regarded as the rational number $\frac{x}{y}$ when $y \neq 0,$ plus you ignore any pair where $\gcd (x,y) > 1$ because you have already represented that fraction in lowest terms.<|endoftext|> TITLE: Prove that $\int_0^1|f''(x)|dx\ge4.$ QUESTION [12 upvotes]: Let $f$ be a $C^2$ function on $[0,1]$. $f(0)=f(1)=f'(0)=0,f'(1)=1.$ Prove that $$\int_0^1|f''(x)| \, dx\ge4.$$ Also determine all possible $f$ when equality occurs. 
REPLY [14 votes]: The inequality does not hold. Take $$ f(x) = x^3 - x^2 $$ we have $$ f'(x) = 3x^2 - 2x \\ f''(x) = 6x - 2 \\ f(0) = f(1) = f'(0) = 0 \\ f'(1) = 1 $$ But $$ \int_0^1 \lvert f''(x)\rvert dx = \int_0^{1/3} (2 - 6x) dx + \int_{1/3}^1 (6x - 2)dx = 1/3 + 4/3 = 5/3 < 4 $$ Then, what can we say about the infimum of that integral? $$ \int_0^1 \lvert f''(x) \rvert dx \geq \int_0^1 f''(x) dx = f'(1) - f'(0) = 1 $$ To show that $1$ is the best lower bound, choose a $C^2$ function $h$ defined on $[0, 1/2]$ such that $$ h(0) = h'(0) = h'(1/2) = h''(1/2) = 0 \\ h(1/2) = -1 $$ Using the above function we can construct the map $$ f(x) = \begin{cases} kh(x) & \text{if }0\leq x \leq 1/2 \\ -k & \text{if }1/2 < x \leq 1 - 3k \\ \frac {(x - 1 +3k)^3}{27k^2} - k & \text{if }1 - 3k < x \leq 1 \end{cases} $$ where $k$ is a positive constant lesser than $1/6$. $f$ satisfies all the constraints and $$ \int_0^1 \lvert f''(x) \rvert dx = 1 + k\int_0^{1/2} \lvert h''(x) \rvert dx $$ Choosing $k$ small enough we can make the integral as close to $1$ as we want.<|endoftext|> TITLE: A curious identity on sums of secants QUESTION [6 upvotes]: I was working on proving a variant of Markov's inequality, and in doing so I managed to come across an interesting (conjectured) identity for any $n\in\mathbb{N}$: $$\sum_{m=0}^{n-1} \sec^2\left(\dfrac{(2m+1)\pi}{4n}\right)=2n^2.$$ I tried to prove this via induction, averaging arguments, trig identities, etc., but to no avail. Are there any suggestions on where this identity may be proven or how I should proceed? REPLY [2 votes]: I checked your sum and it is indeed correct. There is another one you can do too $$\sum_{m=0}^{n-1} \sec^2\left(\frac{m\pi}{2n}\right) = \sum_{m=0}^{n-1} \sec^2\left(\frac{2m\pi}{4n}\right) = \frac{1}{3}(n^2+1)$$ Combining the two, you will obtain $$\sum_{m=0}^{2n-1} \sec^2\left(\frac{m\pi}{4n}\right) = \frac{1}{3}(8n^2+1)$$ More specifically $$\sum_{m=0}^{n-1} \sec^2\left(\frac{m\pi}{2n}\right) = \frac{1}{3}(2n^2+1)$$ These and others can be done with the methods from this question Prove that $\sum\limits_{k=1}^{n-1}\tan^{2}\frac{k \pi}{2n} = \frac{(n-1)(2n-1)}{3}$<|endoftext|> TITLE: Injective map on coordinate ring implies surjective? QUESTION [8 upvotes]: Suppose that $f:X\rightarrow Y$ is a morphism between two affine varieties over an algebraically closed field $K$. I believe that if the corresponding morphism of $K-$algebras, $f^\ast:K[Y]\rightarrow K[X]$ is injective, it is not necessarily true that $f:X\rightarrow Y$ must be surjective but I have yet to come up with a counterexample. Is there such a counterexample? REPLY [7 votes]: Given a morphism of rings $\phi:A\to B$ and the corresponding morphism of affine schemes $\sideset {^a}{} \phi=f:Spec (B)\to Spec(A)$, we have the equivalence: $$ f (Spec(B))\: \text {is dense in}\: Spec(A)\iff \text {Ker } (\phi) \subset \text {Nil} (A)$$ From this it is very easy to find injective morphisms of rings $\phi:A\to B$ with associated non surjective morphisms $ f:Spec (B)\to Spec(A)$. For example if $A$ is a domain and $0\neq a\in A$, then the inclusion morphism $\phi:A\to A_a=A[\frac {1}{a}]$ yields the inclusion $\sideset {^a}{} \phi=f:Spec (A_a)\to Spec(A)$, which is not surjective as soon as $a$ is not an invertible element of $A$. (Qiaochu's example is of that type)<|endoftext|> TITLE: Number of ring homomorphisms from $\mathbb Z_{12}$ to $\mathbb Z_{28}$. QUESTION [9 upvotes]: Question: Find the number of non trivial ring homomorphisms from $\mathbb Z_{12}$ to $\mathbb Z_{28}$. 
($f$ is not necessarily unitary, i.e., $f(1)$ need not be $1$.) Suppose $f$ is a ring homomorphism from $\mathbb Z_{12}$ to $\mathbb Z_{28}$. Consider $f$ as an additive group homomorphism. Let $k= |\ker f|$ and $ t = |\operatorname{im}(f)|$. Then $k\mid 12$ and $t\mid 28$ and $kt=12$, by the first isomorphism theorem for groups. Excluding the trivial case, there are two possibilities: $k=3$, $t=4$ and $k=6$, $t=2$. For the first case $f$ should map $1$ to an element of the subgroup generated by $7$ as there is a unique subgroup of $\mathbb Z_{28}$ of order $4$, generated by $7$. For the second case $1$ has to map to $14$, for the same reasoning. So there are at most two non-trivial ring homomorphisms from $\mathbb Z_{12}$ to $\mathbb Z_{28}$. The question is how to check which of the possible maps are ring homomorphisms. Thanks. REPLY [9 votes]: The number of ring homomorphisms from $\mathbb Z_m$ into $\mathbb Z_n$ is $2^{w(n)-w(n/\gcd(m,n))}$, where $w(n)$ denotes the number of distinct prime divisors of the positive integer $n$. From this formula we get that the number of ring homomorphisms from $\mathbb Z_{12}$ to $\mathbb Z_{28}$ is $2$.<|endoftext|> TITLE: Can two planes intersect in a point? QUESTION [8 upvotes]: Is it true that two planes may intersect in a point? Or, if they intersect, do they always meet in a straight line? I have some doubt; please explain. REPLY [20 votes]: In $\Bbb R^3$ two distinct planes either intersect in a line or are parallel, in which case they have empty intersection; they cannot intersect in a single point. In $\Bbb R^n$ for $n>3$, however, two planes can intersect in a point. In $\Bbb R^4$, for instance, let $$P_1=\big\{\langle x,y,0,0\rangle:x,y\in\Bbb R\big\}$$ and $$P_2=\big\{\langle 0,0,x,y\rangle:x,y\in\Bbb R\big\}\;;$$ $P_1$ and $P_2$ are $2$-dimensional subspaces of $\Bbb R^4$, so they are planes, and their intersection $$P_1\cap P_2=\big\{\langle 0,0,0,0\rangle\big\}$$ consists of a single point, the origin in $\Bbb R^4$. Similar examples can easily be constructed in any $\Bbb R^n$ with $n>3$.<|endoftext|> TITLE: The role of dual space of a normed space in functional analysis QUESTION [9 upvotes]: We know that the dual space of a normed space is very important in functional analysis. I would like to ask two questions related to the dual space of a normed space: What is the motivation for constructing the dual space of a normed space? What is the main role of the dual space of a normed space in functional analysis? Thank you for all your contributions and comments. REPLY [3 votes]: Continuous linear functionals are "measurement devices" specifically designed to observe different aspects of vectors in your vector space. Looking at a dual space is a way of studying the original space by looking at all possible measurements you could make on it. The functionals (measurements) are restricted to continuous linear functions since they are sufficient to completely determine vectors in your space. Adding in nonlinear or discontinuous functionals would be redundant and complicate things.
Furthermore, linear functionals are those measurement devices where the measured quantity can be ascribed "units" (length, mass, time, whatever) consistent with the units of the original space, and continuous functionals are measurement devices wherein small changes to the quantity produce small changes in the measurement.<|endoftext|> TITLE: Wave equation with variable speed coefficient QUESTION [9 upvotes]: Consider the wave equation initial value problem in $\mathbb R^3$ with spatially variable wave speed, denoted by \begin{align*} \frac{\partial^2}{\partial t^2}u(x,t)-c^2(x)\Delta u(x,t)&=0\hspace{.2in}\text{in }\mathbb R^3\times(0,\infty),\\ u(x,0)&=f(x),\\\frac{\partial}{\partial t}u(x,0)&=g(x). \end{align*} My question: Is there a way to explicitly write down a solution similar to Kirchhoff's formula for the case of constant speed $c(x)=c_0$? Intuitivly, i would expect the solution $u(x_0,t_0)$ for fixed space and time to be dependent on values of spherical integrals of $f$ and $g$ as in the constant speed case, but in a sound-speed dependent metric. This is related to a similar question concerning a wave equation in 1 dimension: Wave propagation with variable wave speed REPLY [10 votes]: There is an extensive literature on wave equations with variable coefficients, which are equivalent to wave equations on curved spacetime - as in the monograph by Friendlander. Depending on what you are willing to accept as 'explicit', the answer is that yes, there is an explicit generalisation of the Kirchhoff integral. However, it is very difficult to calculate the coefficients of the data $f,g$ in this integral, which are determined by the retarded Green function $G_R$ of the wave operator: $$\square = \partial_t^2 - c(x)\Delta.$$ This Green function is intimately related to the geometry of the spacetime with line element $ds^2=dt^2-c(x)d\vec{x}\cdot d\vec{x}$. In order to calculate $G_R$, you essentially need to fully solve the geodesic equations obtained by extremising the action $\int ds$. The retarded Green's function satisfies $$ \square G_R(t,x;t',x')=-4\pi\delta_4(t,x;t',x'),$$ where $\delta_4$ is the 4-dimensional Dirac distribution. Then the generalized Kirchhoff formula allows us to explicitly write down the value of a solution $\Psi$ of the wave equation at a point $(t,x)$ to the future of an initial data hypersurface $\Sigma^\prime=\{(t',x'):x'\in\mathbb{R}^3\}$: $$ \Psi(t,x)=-\frac{1}{4\pi}\int_{\Sigma^\prime}(G_R(t,x;t',x')g(x')-f(x')\partial_{t'}G_R(t,x;t',x'))d\Sigma^\prime,$$ where $d\Sigma^\prime=c^{3/2}(x')d_3x'$ is the volume element on $\Sigma^\prime$. These things (Green's function, waves in curved spacetime) are of great interest in General Relativity, and the GR literature is a good place to look for more details. In particular, I recommend starting with Poisson's Living Review article which covers the geometrical background in a very readable way. You'll find details here (and in the citations) on how to calculate $G_R$, which is required for practical applications of the Kirchhoff formula.<|endoftext|> TITLE: What are the irreducible representations of the cyclic group $C_n$ over a real vector space $V$? QUESTION [10 upvotes]: It suffices just to consider a linear transformation $f$ such that $f^n=id$ and require $V$ to have no proper subspace invariant under $f$. But I still don't have a picture of what's going on. REPLY [19 votes]: Let $\rho : C_n \to \mathrm{GL}(V)$ be a representation. 
Equivalently, let $f : V \to V$ be an automorphism such that $f^n = \operatorname{id}_V$. Then $f$ is a root of the polynomial $X^n - 1$. This polynomial splits as a product of linear terms over $\mathbb{C}$. Therefore $f$ is diagonalizable over $\mathbb{C}$. It follows that $V$ splits as a direct sum of the sub-representations (over $\mathbb{C}$): $$V \otimes_{\mathbb{R}} \mathbb{C} = \bigoplus_{i=1}^k \ker(f - \lambda_i \operatorname{id}_V),$$ where the $\lambda_i$ are the complex eigenvalues of $f$. These are $n$th roots of unity since $f^n = \operatorname{id}$. The root $\lambda_i$ is either: Real, in which case $\ker(f - \lambda_i \operatorname{id})$ is also a sub-representation. It splits as a direct sum of one-dimensional representations (simply choose a basis). Or it comes in a pair of complex conjugate numbers. Indeed, let $A \in M_d(\mathbb{R})$ be the matrix associated to $f$. Then $Av = \lambda_i v$ for a nonzero $v$ in the eigenspace, and therefore: $$\overline{Av} = \overline{\lambda_i v} \implies A \bar{v} = \bar{\lambda}_i \bar{v},$$ and therefore $\bar{\lambda}_i$ is also an eigenvalue of $A$. Now you can pair up $\lambda = e^{2ik\pi/n}$ and $\bar{\lambda} = e^{-2ik\pi/n}$. Let $\theta = 2k\pi/n$, such that $n \theta \equiv 0 \pmod{2\pi}$. The two following matrices are similar: $$\begin{pmatrix} e^{i \theta} & 0 \\ 0 & e^{-i\theta} \end{pmatrix} \sim \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$ Conclusion: The irreducible real representations of $C_n$ are either 1-dimensional, with matrix a real $n$th root of unity; 2-dimensional, with matrix $\left(\begin{smallmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{smallmatrix}\right)$ where $n\theta \equiv 0 \pmod{2\pi}$ but $\sin\theta \neq 0$.<|endoftext|> TITLE: Continuous extension of a real function QUESTION [5 upvotes]: Related; Open set in $\mathbb{R}$ is a union of at most countable collection of disjoint segments This is the theorem i need to prove; "Let $E(\subset \mathbb{R})$ be closed subset and $f:E\rightarrow \mathbb{R}$ be a contiuous function. Then there exists a continuous function $g:\mathbb{R} \rightarrow \mathbb{R}$ such that $g(x)=f(x), \forall x\in E$." I have tried hours to prove this, but couldn't. I found some solutions, but ridiculously all are wrong. Every solution states that "If $x\in E$ and $x$ is not an interior point of $E$, then $x$ is an endpoint of a segment of at most countable collection of disjoint segments.". However, this is indeed false! (Check Arthur's argument in the link above) Wrong solution Q4.5; http://www.math.ust.hk/~majhu/Math203/Rudin/Homework15.pdf Just like the argument in this solution, i can see that $g$ is continuous on $E^c$ and $Int(E)$. But how do i show that $g$ is continuous on $E$? REPLY [6 votes]: A constructive and explicit proof proceeds as follows. Since $E$ is closed, $U=\mathbb{R}\setminus E$ is a countable union of disjoint open intervals, say, $U=\bigcup (a_n,b_n)$. Necessarily, we must have that $a_n,b_n\in E$. Define $f(x)$ as follows. $$ f(x) = \begin{cases} g(x) &\text{if }x\in E \\ \frac{x-a_n}{b_n-a_n}g(b_n)+\frac{b_n-x}{b_n-a_n}g(a_n) & \text{if }x\in[a_n,b_n] \end{cases} $$ Notice first that $f(x)$ is well-defined and also, for all $x\in(a_n,b_n)$, either $g(a_n)\le f(x)\le g(b_n)$ or $g(b_n)\le f(x)\le g(a_n)$ depending on whether $g(a_n)\le g(b_n)$ or otherwise. Clearly, $f$ is continuous on $U$. Now suppose that $x\in E$ and $\epsilon>0$. Then there are a few cases. 
Case 1: Suppose that for every $\eta>0$, $(x-\eta,x)\cap E\not=\emptyset$ and $(x,x+\eta)\cap E\not=\emptyset$. Then since $f\vert_E=g$, there is some $\delta>0$ such that if $y\in E$ and $\vert x-y\vert<\delta$ then $\vert f(x)-f(y)\vert<\epsilon$. Because of the condition we have for Case 1, we may choose some $x_1,x_2\in E$ with $x-\delta<x_1<x<x_2<x+\delta$. Set $\delta''=\min\{x-x_1,x_2-x\}>0$ and let $\vert x-y\vert<\delta''$. If $y\in E$ then $\vert f(x)-f(y)\vert<\epsilon$ directly. If $y\notin E$ then $y$ lies in one of the intervals $(a_n,b_n)$; since $x_1,x_2\in E$ and $x_1<y<x_2$, we must have $x_1\le a_n<y<b_n\le x_2$, so $a_n$ and $b_n$ are points of $E$ within $\delta$ of $x$, giving $\vert f(a_n)-f(x)\vert<\epsilon$ and $\vert f(b_n)-f(x)\vert<\epsilon$; as $f(y)$ lies between $f(a_n)$ and $f(b_n)$, also $\vert f(y)-f(x)\vert<\epsilon$. So $f$ is continuous at $x$ in this case. Case 2: Suppose instead that there is some $\eta>0$ for which $(x-\eta,x)\cap E=\emptyset$ or $(x,x+\eta)\cap E=\emptyset$. In this case, $x$ is an endpoint of one of the intervals of $U$. Thus $f$ is linear on either $[x,x+\eta)$ or $(x-\eta,x]$ (maybe both). Certainly, we can get a $\delta>0$ corresponding to $\epsilon$ on this side of $x$. For the other side of $x$, use the argument from Case 1 to get some $\delta'$. Choosing $\delta''=\min\{\delta,\delta'\}$ proves the result.<|endoftext|> TITLE: Some equation with complex numbers QUESTION [6 upvotes]: Given $a,b \in \mathbb{C}$ such that $a^2+b^2=1$, it is clear that $x:=a\bar{a}+b\bar{b}$ is a real number and that $yi:=a\bar{b}-\bar{a}b$ is imaginary (i.e $y$ is real). Moreover, a direct computation shows that $x,y$ satisfy $x^2-y^2=1$. Now, the question is whether the converse holds as well. Namely, given $x,y\in \mathbb {R}$ such that $x^2-y^2=1$, are there $a,b\in \mathbb{C}$ with $a^2+b^2=1$ and such that $x=a\bar{a}+b\bar{b}$ and $yi=a\bar{b}-\bar{a}b$? Unfortunately, the motivation for this is a bit difficult to explain, so I will not try to. REPLY [2 votes]: You certainly need to assume $x\ge0$ (which then implies $x\ge1$), since you want $x=|a|^2+|b|^2$, which is always $\ge 0$. Once you have that, you can get your $a$ and $b$ with the idea that $a=\cos w$ and $b=\sin w$ with some $w=u+iv\in\mathbb{C}$. A bit of calculation then shows $x=\cosh 2v$ and $y=-\sinh 2v$. (I think I got the sign right, but you better check...) So you have to choose $v=-\frac12\sinh^{-1} y$, choose any $u\in \mathbb{R}$, and then $a=\cos w$ and $b=\sin w$ with $w=u+iv$ will do. The particular choice $u=0$ gives you $a=\cos( \frac{i}2\sinh^{-1} y) = \cosh(\frac12\sinh^{-1}y)$ and $b=-\sin(\frac{i}2\sinh^{-1} y) = -i\sinh(\frac12 \sinh^{-1} y)$. (These can undoubtedly be simplified.)<|endoftext|> TITLE: Combinatorial properties of permutation groups QUESTION [5 upvotes]: Let $P_n$ denote the set of pairs $(x,y)$ of permutations in $S_{2n}$, where each permutation is a product of $n$ disjoint cycles of length two. Let $i$ and $j$ be two fixed elements of the set $\{1,2, \cdots,2n\}$. Select an element $(x,y)$ of $P_n$. What is the probability that the product $xy$ contains $i$ and $j$ in the same cycle? REPLY [3 votes]: Since every element other than $i$ has a $1$ in $2n-1$ chance of being $j$, the desired probability is $(a_n-1)/(2n-1)$, where $a_n$ is the expected length of the cycle containing $i$. Let $k=y(i)$. There is a $1$ in $2n-1$ chance that $x(k)=i$, in which case the cycle length is $1$. Otherwise swap $i$ and $x(k)$ in the cycle representation of $y$. Now both permutations have a two-cycle with $k$ and $x(k)$, the remaining cycles form two admissible permutations in $S_{2(n-1)}$, and the length of the cycle containing $i$ in their product is one less than the length of the cycle containing $i$ in the original product. This yields the recurrence $$ a_n=1+\frac{2n-2}{2n-1}a_{n-1} $$ with the initial condition $a_1=1$, and it is readily checked that this is solved by $a_n=\frac13(2n+1)$. Thus the desired probability is $$ \frac{\frac13(2n+1)-1}{2n-1}=\frac23\frac{n-1}{2n-1}\;. $$ It's interesting that the product can only contain cycles up to length $n$.
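As a quick sanity check of this formula (an added aside, not part of the original argument), one can sample uniformly random pairs of fixed-point-free involutions on $\{1,\dots,2n\}$ and test how often two fixed elements land in the same cycle of their product. A minimal Python sketch (all function names are mine) for $i=1$, $j=2$:

```python
import random

def random_matching(n2):
    """Uniformly random product of n2/2 disjoint transpositions on {0,...,n2-1}."""
    elems = list(range(n2))
    random.shuffle(elems)            # a uniformly random pairing
    perm = list(range(n2))
    for a, b in zip(elems[::2], elems[1::2]):
        perm[a], perm[b] = b, a
    return perm

def same_cycle(perm, i, j):
    """True if i and j lie in the same cycle of perm."""
    k = perm[i]
    while k != i:
        if k == j:
            return True
        k = perm[k]
    return i == j

def estimate(n, trials=100_000):
    hits = 0
    for _ in range(trials):
        x = random_matching(2 * n)
        y = random_matching(2 * n)
        xy = [x[y[t]] for t in range(2 * n)]   # the product xy
        hits += same_cycle(xy, 0, 1)           # the elements "1" and "2", zero-based
    return hits / trials

n = 5
print(estimate(n), 2 * (n - 1) / (3 * (2 * n - 1)))   # simulation vs. (2/3)(n-1)/(2n-1)
```

For $n=5$ the empirical frequency should hover around $8/27\approx0.296$.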
For large $n$, the average length of the cycle containing $i$ is approximately $2n/3$, so the probability for $j$ to be in it is approximately $1/3$.<|endoftext|> TITLE: A naturally occurring non-locally small category QUESTION [6 upvotes]: Let $\mathcal{C}$ be a category. We say that $\mathcal{C}$ is locally small if $\mathrm{Hom}_{\mathcal{C}}(A,B)$ is a set for all $A$, $B$ in $\mathcal{C}$. I can't think of any natural examples of non-locally small categories which are 'obviously' not locally small. We can take $\mathcal{C}$ to have one object, and a morphism for every $x \in V$ (say in ZFC), with composition of morphisms given by the union of two sets, but I can't think when this would ever come up 'naturally'. Are there any natural examples of non-locally small categories which are obviously not locally small? REPLY [3 votes]: Simple examples are given by "large monoids," for example the large monoid of sets under Cartesian product, or the large monoid of vector spaces under tensor product. If you're a fan of ordinals, the large monoid of ordinals under ordinal sum is another example. More generally you can take isomorphism classes of objects in any monoidal category under the monoidal product, e.g. a category with finite products or coproducts. (Recall that monoids are small categories with one object. This identification is perfectly natural if one is willing to think of categories both as settings for studying other mathematical objects and as mathematical objects in their own right.) As Lord_Farin says in the comments, functor categories are also not locally small in general. These arise quite naturally.<|endoftext|> TITLE: Probability that a sequence repeats itself QUESTION [9 upvotes]: Given an infinite sequence $a_n$ of uniformly random integers $0$ to $9$, what is the probability there exists an integer $m$ such that the sequence $a_1$ to $a_m$ is equal to that from $a_{m+1}$ to $a_{2m}$? What if we restrict to two symbols, or $k$ symbols? REPLY [5 votes]: I can't give an exact answer, but an interesting connection and some numerical results. Some dabbling with short examples shows that the events for different integers $m$ are nearly but not exactly independent. For instance, some sequences that don't repeat at $m=3$ repeat at $m=2$ but not at $m=1$, whereas for sequences that do repeat at $m=3$ repetition at $m=2$ implies repetition at $m=1$. However, the effect from the dependence appears quite small, so $$ \prod_{m=1}^\infty(1-k^{-m}) $$ should be a good approximation to the probability that there is no repetition. Incidentally, if $k$ is a prime power, this is the probability that a large random square matrix over $\mathbb F_k$ is invertible; see Probability that a random binary matrix is invertible. In that answer, I computed the product for $k=2$ to be approximately $0.288788$, which would make the probability for a binary sequence to exhibit repetition approximately $0.711212$. Here are numerical results for $k=2$ for repetitions up to $m=17$.
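(For small $m$ these counts can be reproduced by direct enumeration. The following brute-force Python sketch is my own stand-in for the script linked in the answer; it checks, for every string of length $2m$ over $k$ symbols, whether some prefix of length $j\le m$ is immediately repeated.)

```python
from itertools import product

def repeats(s):
    """True if a_1...a_j == a_{j+1}...a_{2j} for some j <= len(s)//2."""
    return any(s[:j] == s[j:2 * j] for j in range(1, len(s) // 2 + 1))

def counts(m, k=2):
    """Return (#repeating, #non-repeating) among all k**(2m) strings of length 2m."""
    rep = sum(1 for s in product(range(k), repeat=2 * m) if repeats(s))
    return rep, k ** (2 * m) - rep

for m in range(1, 8):
    print(m, *counts(m))   # should match the first rows of the table below
```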
Each row contains the number of strings of length of $2m$ that repeat, the number of strings that don't repeat, the proportion $p_m$ of strings that repeat, and a value $\alpha_m=-\log_2(1-(1-p_m)/(1-p_{m-1}))$ that would be $m$ if repetition at $m$ were independent of previous repetitions (here's the code): $$ \begin{array}{rrrlr} m&\text{repeating}&\text{non-repeating}&\text{proportion}&\alpha_m\\\hline 1 & 2 & 2 & 0.5 & 1.00000\\ 2 & 10 & 6 & 0.625 & 2.00000\\ 3 & 44 & 20 & 0.6875 & 2.58496\\ 4 & 182 & 74 & 0.7109375 & 3.73697\\ 5 & 738 & 286 & 0.720703125 & 4.88753\\ 6 & 2972 & 1124 & 0.7255859375 & 5.83794\\ 7 & 11924 & 4460 & 0.7277832031 & 6.96450\\ 8 & 47768 & 17768 & 0.7288818359 & 7.95290\\ 9 & 191214 & 70930 & 0.7294235229 & 8.96725\\ 10 & 765136 & 283440 & 0.7296905518 & 9.98483\\ 11 & 3061104 & 1133200 & 0.7298240662 & 10.98340\\ 12 & 12245530 & 4531686 & 0.7298904657 & 11.99044\\ 13 & 48984342 & 18124522 & 0.7299235761 & 12.99397\\ 14 & 195941804 & 72493652 & 0.7299401015 & 13.99640\\ 15 & 783776080 & 289965744 & 0.7299483567 & 14.99761\\ 16 & 3135122038 & 1159845258 & 0.7299524820 & 15.99838\\ 17 & 12540523572 & 4639345612 & 0.7299545438 & 16.99901\\ \end{array} $$ The agreement with $0.711212$ is OK but not marvelous. As suggested by the $m=3$ example, the dependence slightly increases the probability of repetition because a repetition implies correlations between the conditions of repetition at lower values of $m$; but this effect is strongest at $m=3$ and becomes negligible at higher values of $m$, where $\alpha_m\approx m$ shows that almost exactly the expected proportion of sequences repeat for the first time. Thus we can get an accurate estimate of the limit probability from $p\approx p_{17}+(p_{17}-p_{16})\approx0.7299566056$, which should be accurate to eight or nine digits. I checked OEIS for the sequences of both the repeating and the non-repeating counts; no hit. P.S.: Note that as t.b. pointed out in a comment under the answer linked to above, by the pentagonal number theorem the above product is given by $$ \begin{align} \prod_{m=1}^\infty(1-k^{-m})&=\sum_{j=-\infty}^\infty(-1)^jk^{-j(3j-1)/2)}\\&=1-k^{-1}-k^{-2}+k^{-5}+k^{-7}-k^{-12}-k^{-15}+k^{-22}+k^{-26}-\dotso\;, \end{align} $$ which leads to an interesting pattern of digits in its representation in base $k$: $$ \begin{align} \prod_{m=1}^\infty(1-2^{-m})&=0.01001001111011100000010000111111110111110000000000100\ldots_2\;, \\ \prod_{m=1}^\infty(1-10^{-m})&=0.89001009999899900000010000999999998999990000000000100\ldots_{10}\;. \end{align} $$<|endoftext|> TITLE: Combinations of i.i.d Inverse Chi-Square RVs and their characteristic functions QUESTION [5 upvotes]: I am working on a few self-study problems in probability/measure theory and am stuck on characteristic functions. I have the following problem: Given: $X_1,\ldots,X_n$ are iid inverse chi-square(1) random variables with PDF: $f(x;\nu)=\frac{2^{-\nu/2}}{\Gamma(\nu/2)}x^{-\nu/2-1}e^{-1/(2x)}$ What is the characteristic function for $\frac14(X_1-X_2)$ What is the characteristic function for $\frac1{n^2}(X_1+\cdots+X_n)$ Is the second example related to the normal distribution? Lastly, how do I verify that $E(X_1^r)<\infty$ if and only if $r<\frac12$ Ideas/attempts I found the CF of $X_1$ to be $\frac{2}{\Gamma\frac{\phi}{2}}\left(\frac{-it}{2}\right)^{\frac\phi4}K_{\frac\phi2}\left(\sqrt{-2it}\right)$ just by searching around, but I do not know/understand what $K$ represents. 
I am having trouble seeing what the sum or difference of two RVs with the above CF signify, and similarly a sum of them. For the third part, the "if" is fairly straightforward, but how do I approach showing/proving the "only if"? More thoughts I have found the following results for combinations of CFs: $\phi_{aX+b}(t)=e^{ibt}\phi_X(at),\forall a,b,t\in\mathbb{R}$ If $X_1,...,X_n$ are independent, then $\phi_{X_1+...+X_n}(t)=\prod_{k=1}^n\phi_{X_k}(t)$ If $X_1,X_2$ are independent and have the same distribution, then $\phi_{X_1-X_2}(t)=|\phi_{X_1}(t)|^2$ These facts help get things started, but I'm at a loss of how to continue. Many thanks! REPLY [2 votes]: Let us use this as PDF for the inverse chi-square distribution $$f(x; \nu) = \frac{2^{-\nu/2}}{\Gamma(\nu/2)}\,x^{-\nu/2-1} e^{-1/(2 x)}$$ The characteristic function is $$\phi_X(t)=\int_0^{+\infty} f(x; \nu) e^{itx}\,\mathrm{d}x$$ $$=\int_0^{+\infty} \frac{2^{-\nu/2}}{\Gamma(\nu/2)}\,x^{-\nu/2-1} e^{-1/(2 x)} e^{itx}\,\mathrm{d}x$$ Not an easy integral. The function $K$ in the formula you found is the modified Bessel function of the second kind. When you calculate the integral using WA you get something with a hypergeometric function, which can be converted into that Bessel function. To answer your second question, use $\phi_{aX}(t)=\phi_X(at)$ and independence: $$\phi_{\frac{1}{4}(X_1-X_2)}(t)=\phi_{X_1}\left(\tfrac{t}{4}\right)\phi_{X_2}\left(-\tfrac{t}{4}\right)$$ To answer your third question, $$E(X^r)=\int_0^{\infty}x^r \cdot f(x; \nu)\mathrm{d}x$$ converges exactly when $r < \nu/2$: for large $x$ the integrand behaves like $x^{r-\nu/2-1}$, which is integrable at infinity if and only if $r<\nu/2$, while near $0$ the factor $e^{-1/(2x)}$ keeps the integral finite.<|endoftext|> TITLE: Show that binary < "less than" relation is not definable on set of Natural Numbers with successor function QUESTION [12 upvotes]: After reading about the question, I've come to believe that it would suffice to exhibit an automorphism that is not order preserving. However, I'm unsure of how to construct such an automorphism, and am a bit uncertain if my approach is sound. Any guidance would be appreciated. The exact question reads: Let $\mathcal{N} = ⟨\Bbb{N},S,0⟩$ where $\Bbb{N} = \{0,1,2,\cdots\}$ and $S$ is the successor function. Show that the binary relation $\{⟨a, b⟩:a, b ∈ \Bbb{N} \land a < b\}$ is not definable over $\Bbb{N}$.
It is quite easy to show that $\sigma$ is an automorphism of $\mathcal{M}$, and therefore $$\mathcal{M} \models \varphi ( \sigma(a_0) , \sigma(a_1) ).$$ But as $\sigma ( a_0 ) = a_1$ and $\sigma ( a_1 ) = a_0$ we have that $$\mathcal{M} \models \varphi ( a_1 , a_0 ),$$ contradicting the fact that $\varphi$ defines a strict total order on $M$!<|endoftext|> TITLE: Computational Complexity of Modular Exponentiation QUESTION [8 upvotes]: The following was posted from a lecture: "($a^n \bmod N$) has a runtime complexity of $\mathcal{O}(n*|a|*|N|)$ using the brute force method. $Z_1 = a \bmod N$ $Z_2 = (aZ_1) \bmod N$ $Z_3 = (aZ_2) \bmod N$ . . . $Z_n = (aZ_{n-1}) \bmod N$ Taking |a| = |N|, the runtime complexity of ($a^n \bmod N$) is $\mathcal{O}(n*|N|^2)$ The usual approach to computing $a^n \bmod N$ is inefficient as it is exponential in $n$." How is $\mathcal{O}(n*|N|^2)$ exponential in $n$? It appears polynomial in $n$ to me. Can someone explain? Thanks. REPLY [8 votes]: $O(n \cdot |N|^2)$ is linear in $n$, but exponential in the length of $n$, which is $\Theta(\log(n))$. Since the input is presumably written in binary (or any base other than unary), this means that the runtime is exponential in the size of the input.<|endoftext|> TITLE: How many ways can the letters of the word TOMORROW be arranged if the Os can't be together? QUESTION [6 upvotes]: How many ways can the letters of the word TOMORROW be arranged if the Os cant be together? I know TOMORROW can be arranged in $\frac{8!}{3!2!} = 3360$ ways. But how many ways can it be arranged if the Os can't be together? And what is the intuition behind this process? REPLY [11 votes]: We interpret the question as saying we cannot have two (or three) O's together. Think of the slots occupied for the remaining $5$ letters. There are $6$ spaces "between" these slots for the O's to be squeezed into, no more than one O per space. Here the number of spaces is $6$ because I am counting the two endspaces. We choose $3$ of these $6$ spaces for the O's. This can be done in $\dbinom{6}{3}$ ways. For each of these ways, the T can be placed in $5$ ways, then the M in $4$ ways, then the W in $3$ ways. Now it is all done, the R's take the remaining two slots. So our count is $$\binom{6}{3}(5)(4)(3).$$ Remark: There are many other ways of counting. The advantage of this one is that it generalizes smoothly to a situation where the length of the word, and the number of O's, is much larger. The idea can be adapted for similar problems. A standard one is to ask how many ways can we line up $9$ adults and $5$ children in a row if no two childen can be next to each other.<|endoftext|> TITLE: Any countable $A \subseteq \mathbb{R}$ satisfies $(x+A) \cap A = \emptyset$ for some $x$ QUESTION [7 upvotes]: I have to prove that if $A \subseteq \mathbb{R}$ is countable, then $\exists x \in \mathbb{R}\, (x+A) \cap A = \emptyset, $ where $x+A$ denotes the set $\{x + a \mid a \in A\}$. I can see why this is true for some specific subsets (like the set of rationals or the set of algebraic numbers), but the general approach eludes me. Any hints would be appreciated. REPLY [11 votes]: Note that $x+A$ and $A$ meet if and only if there are $y,z\in A$ with $x+y=z$, so $x=z-y$. This means that $(x+A)\cap A\ne\emptyset$ iff $x\in A-A$, which is a countable set, as it is the image of $A\times A$ under the map $(y,z)\mapsto y-z$. So, since $\mathbb R$ is uncountable, we can find values of $x$ not in $A-A$, and any such $x$ works. 
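As a concrete illustration of this argument (an added example): take $A=\mathbb{Q}$. Then $A-A=\mathbb{Q}$, so any irrational $x$ works; for instance $x=\sqrt{2}$ gives $(\sqrt{2}+\mathbb{Q})\cap\mathbb{Q}=\emptyset$, because $\sqrt{2}+q\in\mathbb{Q}$ for some rational $q$ would force $\sqrt{2}\in\mathbb{Q}$.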
REPLY [2 votes]: Suppose the conclusion does not hold: i.e. for every $x$ in $\mathbb{R}$, there are elements $a_x$ and $b_x$ of $A$ such that $x+a_x=b_x$. This defines an injection from $\mathbb{R}$ to $A^2$: just map $x$ to $(a_x,b_x)$. Thus $|\mathbb{R}|\le |A^2|$, so $|A|\ge |\mathbb{R}|$, and in particular $A$ is uncountable.<|endoftext|> TITLE: Volterra Operator is compact but has no eigenvalue QUESTION [12 upvotes]: The Volterra operator is defined as the operator $V:L^2[0,1]\rightarrow L^2[0,1]$ given by \begin{eqnarray} (Vf)(x)=\int_0^xf(y)\,dy. \end{eqnarray} Would you help me to prove that this operator is compact but has no eigenvalues? REPLY [36 votes]: Note that $$ Vf(x)=\int_0^1f(t)k(x,t)\,dt, $$ where $k(x,t)=1_{[0,x]}(t)$. It is a general fact that such an operator is Hilbert-Schmidt (and in particular compact) if and only if $k\in L^2([0,1]^2)$. Or one can show that the measurable function $k$ is a uniform limit of simple functions, and these simple functions can be used as kernels to define operators that approximate $V$. As these operators are finite-rank, $V$ is compact. As for the eigenvalues, if $\lambda\ne0$ and $Vf=\lambda f$, then we get $$\tag{1} f(x)=\frac1\lambda\,\int_0^xf(t)\,dt. $$ Using that $f$ is in $L^2[0,1]\subset L^1[0,1]$, the right-hand side of $(1)$ is an absolutely continuous function of $x$, so $f$ agrees a.e. with a continuous function; feeding this back into $(1)$ shows that $f$ is in fact $C^1$ with $f'=\frac1\lambda f$ and $f(0)=0$, which forces $f\equiv0$. Thus no $\lambda\ne0$ is an eigenvalue, and $\lambda=0$ is not an eigenvalue either, since $Vf=0$ implies $\int_0^xf(t)\,dt=0$ for all $x$ and hence $f=0$ a.e.<|endoftext|> TITLE: Open subsets of a complete metric space. QUESTION [5 upvotes]: I've been going over previous exams, and I came across a question that I missed. It is as follows: Let $X$ be a complete metric space. Show that every open subset of $X$ is homeomorphic to a complete metric space. I am having difficulty showing this. Any help would be greatly appreciated. REPLY [8 votes]: Here’s an outline to get you started. Let $U$ be a proper open subset of $X$, and let $d$ be the given metric on $X$. Define $$f:U\to\Bbb R:x\mapsto\frac1{d(x,X\setminus U)}\;,$$ and show that $f$ is continuous. Now define $$\rho:U\times U\to\Bbb R:\langle x,y\rangle\mapsto d(x,y)+|f(x)-f(y)|\;,$$ and show that $\rho$ is a metric on $U$ that generates the same topology as $d$. Then show that $\langle U,\rho\rangle$ is complete. The intuitive idea is that we want to kill off Cauchy sequences in $U$ whose limits are not in $U$; $f(x)$ is large when $x$ is close to the boundary of $U$, so adding the extra $|f(x)-f(y)|$ term stretches the metric and keeps sequences converging to points outside of $U$ from being $\rho$-Cauchy.<|endoftext|> TITLE: Determinant of rank-one perturbations of (invertible) matrices QUESTION [18 upvotes]: I read something that suggests that if $I$ is the $n$-by-$n$ identity matrix, $v$ is an $n$-dimensional real column vector with $\|v\| = 1$ (standard Euclidean norm), and $t > 0$, then $$\det(I + t v v^T) = 1 + t$$ Can anyone prove this or provide a reference? More generally, is there also an (easy) formula for calculating $\det(A + wv^T)$ for $v,w \in \mathbb{K}^{d \times 1}$ and some (invertible) matrix $A \in \Bbb{K}^{d \times d}$? REPLY [5 votes]: Here's another proof (cf. Sherman–Morrison formula): The non-zero eigenvalues of $AB$ and $BA$ are the same. This is straightforward to prove. Hence the non-zero eigenvalues of $ab^T$ and $b^Ta$ are the same (that is, exactly one non-zero eigenvalue). Hence the eigenvalues of $I+ ab^T$ are $1+b^Ta, 1,...,1$, and since the determinant is the product of eigenvalues, we have $\det(I+ab^T) = 1+b^Ta$. In this particular example, $a=tv$, $b=v$, and $\|v\| = 1$, hence $b^Ta = t$, and so $\det(I+t v v^T) = 1+t$.<|endoftext|> TITLE: What is the gender of $K(\pi,n)$ in French?
QUESTION [11 upvotes]: This is a kind of silly question, but I don't know where else to ask. Suppose I wanted to say "Ceci n'est pas une pipe" but with $K(\pi,n)$ substituted for "pipe." Would the article be "un" or "une"? In case there are additional complications in the translation: What would be the French for "This is not a $K(\pi,n)$," where "this" refers to a picture over/under/beside the text? REPLY [5 votes]: You would use the same gender that the object's name has. For example, one could say: Ceci n'est pas un $\triangle$ beside the picture of a square, but one could say "Voici la $\mathscr{L}(cos(t))$" besides the expression of $\cos(t)'s$ Laplace transform.<|endoftext|> TITLE: Showing that $f = 0 $ a.e. if for any measurable set $E$, $\int_E f = 0$ QUESTION [22 upvotes]: Let $(X, \mathcal{B}, \mu)$ be a measure space and $f$ a measurable function on $X$ and suppose that $\forall E \in \mathcal{B}$ we have that $\int_E f = 0$. Then I want to show that $f = 0$ almost everywhere (a.e.). Suppose for sake of contradiction that $f \ne 0$ a.e. Then $\not\exists E \in \mathcal{B}$ s.t. $\mu(E) = 0$ and $f(x) = 0$, $\forall x \in X - E$ Then $\{x : f(x) \ne 0\} = A$ is not measure zero so that either $\mu(A) > 0$ or $A \notin \mathcal{B}$. Now if $\mu(A) > 0$ then it is easy to see that $\int_A f \ne 0$ so that we have a contradiction of our original hypothesis that $\forall E \in \mathcal{B}, \int_E f = 0$. But if on the other hand $A \notin \mathcal{B}$, I cannot no longer appeal to $\int_A f \ne 0$ since $\int_A f$ is non-sense. So I'm having trouble with this part of the argument. REPLY [26 votes]: Arguing by contradiction definitely works, here's the idea. Let $\mu(\{ x : f(x) \neq 0 \}) > 0$. Then we have $\{x : f(x) \neq 0\} = \{x : f(x) > 0\} \cup \{x : f(x) < 0\}$, so we must have one of these two sets have positive measure. Let's say its the first one (the argument for the second is analogous). Then $\{x : f(x) > 0\} = \bigcup \{ x : f(x) \geq \frac{1}{n}\} = \bigcup E_n$ so again one of these must have positive measure. So say $E_k$ has positive measure, then $f$ dominates $\frac{1}{k}$ on $E_k$ so $$\int_{E_k} f \geq \int_{E_k} \frac{1}{k} = \mu(E_k)\frac{1}{k} > 0.$$ So we have a contradiction. EDIT: also, to comment on your proof: as you see here we don't need to deal with whether or not $A$ is measurable, it definitely is. And also, for step $4$ of your proof $\mu(A) > 0$ does not imply that $\int_A f \neq 0$ for example if $f = 1_{(0,1)} - 1_{(-1,0)}$<|endoftext|> TITLE: Is every complex number the root of a polynomial? (Converse to fundamental theorem of algebra.) QUESTION [7 upvotes]: For every polynomial with complex coefficients, the fundamental theorem of algebra guarantees the existence of complex numbers which happen to be roots of it. But is this everything? i.e. is the converse true? Suppose you give me a complex number: can I find a polynomial whose roots include it? Trivially, if you allow for complex coefficients, the answer is yes: given a set of {w_n} in C, just build the polynomial up out of the factors (z - w_n). But what if I restrict my coefficients to smaller sets? If I restrict to integer coefficients, the answer is already no, even on just the real line: real transcendental numbers exist. What about other sets? What if I have rationals coefficients? Gaussian integers? How do I go about investigating these sorts of questions? REPLY [11 votes]: Suppose you give me a complex number: can I find a polynomial whose roots include it? 
As you noted, the answer is trivial for complex coefficients: if you want a polynomial with the root $w$, then $P(z) = z-w$ will do. What about a polynomial with real coefficients? There too, it's relatively straightforward; the polynomial $P(x) = (x-w)(x-\bar{w}) = x^2-2\operatorname{Re}(w)x+\left|w\right|^2$ has real coefficients and has roots $w$ and $\bar{w}$. What if I have rational coefficients? The answer here is no, for the exact same reason as for integer coefficients; the two cases are equivalent. To see why, consider as an example the polynomial $P(x) = \frac{1}{5}x^3-\frac{7}{3}x^2+\frac{5}{4}x-\frac{17}{8}$. Then $P(x)$ has the same roots as the polynomial $Q(x) = 120P(x) = 24x^3-280x^2+150x-255$ obtained by multiplying $P$ by the LCM of the denominators of all its coefficients. Gaussian integers? Again no, by two separate results: one relatively straightforward, but one more subtle but much more far-reaching. The straightforward way of seeing this is again by using the complex conjugate: if $P(x)$ is a polynomial with Gaussian integers for coefficients, then $Q(x) = P(x)\bar{P}(x)$ has all of $P$'s roots among its roots, but has (real) integer coefficients; for instance, if $P(x) = x^2+(2+i)x+3i$, then $\bar{P}(x) = x^2+(2-i)x-3i$ and $P(x)\bar{P}(x) = x^4+(2+i)x^3+(2-i)x^3+3ix^2-3ix^2+(2+i)(2-i)x^2 + 3i(2-i)x -3i(2+i)x +3i(-3i) = x^4+4x^3+5x^2+6x+9$. The more abstract reason, though, is that the algebraic numbers (defined as 'roots of polynomials with integer coefficients') are algebraically closed: any number that's the root of a polynomial with algebraic numbers for coefficients (which includes all the cases you mentioned above, as well as many more) is algebraic itself (and thus already the root of a polynomial with integer coefficients). This can be shown by, essentially, clearing coefficients: for an example of how this works, suppose we want to find an integer polynomial whose roots include the roots of the polynomial $P(x) = x^2-\sqrt{2}x+1$. Well, first rewrite the equation $P(x) = 0$ by moving the $\sqrt{2}$ term to the left, giving $\sqrt{2} x = x^2+1$; now by squaring both sides (which may introduce new roots, but won't cost the roots we have) we get $2x^2 = \left(x^2+1\right)^2 = x^4+2x^2+1$, or $x^4+1=0$; thus, any root of $P(x) = x^2-\sqrt{2}x+1$ is also a root of $Q(x) = x^4+1$. A (much!) more complicated but similar procedure will allow all algebraic coefficients to be 'cleared' from a polynomial, giving a polynomial with only integer coefficients and roots a superset of the original polynomial's roots. Taking the contrapositive of this indicates that no transcendental number can be the root of any polynomial with algebraic coefficients, whether real or complex. In fact, it's possible to go one step farther still: we can say with confidence that, for instance, not every number is the root of a polynomial with coefficients any algebraic expression of whole numbers, $e$ and $\pi$! This time the argument is more abstract still: there are only countably many '$\pi e$-algebraic' expressions, so there are only countably many polynomials with $\pi e$-algebraic expressions for coefficients; and since each polynomial has finitely many roots, there are only countably many such roots. But because there are uncountably many real numbers, our set of roots must exclude some (in fact, almost all) real numbers.
This argument shows that the reals have infinite (in fact, uncountably infinite) transcendence degree over the rationals; no finite (or countable) number of additional reals that are made available as coefficients will let you 'capture' all the real numbers (or complex numbers, of course) as roots of polynomials.<|endoftext|> TITLE: Showing $\int_{0}^{\infty} \frac{\sin{x}}{x} \ dx = \frac{\pi}{2}$ using complex integration QUESTION [8 upvotes]: Recently I had to use the fact that the Dirichlet integral evaluates as $$\int_{0}^{\infty} \frac{\sin{x}}{x} \ dx = \frac{\pi}{2}$$ a couple of times. There already is a question that specifically asks for methods to show this result $\textbf{not}$ using complex integration. In this question I am interested in seeing the derivation via contour integration. ( I am aware of the wikipedia entry, but am looking for more detail ) REPLY [10 votes]: We need to use $f(z) = (e^{iz} - 1)/z$ because it has a removable singularity at $z = 0$. Consider the contour $C = [-R, R] \cup C_R$ for $R > 0$, where $C_R$ is the semicircle of radius $R$ in the upper half-plane, traversed from $R$ to $-R$. Then $$I \equiv \int_{-R}^R f(z)dz + \int_{C_R} f(z)dz = 0$$ by Cauchy's theorem, i.e., $$\int_{-R}^R f(z)dz = \int_{C_R} \frac{1}{z}dz - \int_{C_R} \frac{e^{iz}}{z}dz$$ but $$\int_{C_R} \frac{1}{z}dz = \pi i$$ and we can show that the other integral goes to zero as $R \to \infty$. Therefore, because $$\int_{-R}^R \frac{\sin x}{x}dx = \operatorname{Im}\int_{-R}^R f(z)\,dz,$$ we see that $$\int_{-\infty}^\infty \frac{\sin x}{x}dx = \pi$$ or $$\int_0^\infty \frac{\sin x}{x}dx = \frac{\pi}{2}.$$ Hope this helps.<|endoftext|> TITLE: Show that both mixed partial derivatives exist at the origin but are not equal QUESTION [11 upvotes]: $$f(x,y) = \begin{cases} \displaystyle \frac{xy(x^2-y^2)}{x^2+y^2} & \text{if } (x,y) \neq (0,0), \\ 0 & \text{if } (x,y) = (0,0). \end{cases}$$ I tried finding both mixed partial derivatives but they ended up being the same for that function. I must be failing to take into account something dealing with the fact that it is piece-wise. I still need to show the mixed partial derivatives exist. How can I do all of this? REPLY [14 votes]: Use the definition of partial derivative: $$ f_x(0,0) ~=~ \lim_{h\to 0} \frac{f(h,0)-f(0,0)} h ~=~ \lim_{h\to 0} \frac{\frac{h\cdot 0(h^2-0^2)}{h^2+0}-0} h ~=~ \lim_{h\to 0} \frac{0}{h} ~=~ 0 $$ A similar computation shows that $f_y(0,0)=0$ too, so that $(0,0)$ is a critical point (i.e. $\nabla f(0,0)=\binom 00$).
EDIT Now that we know the values of $f_x(0,0)$ and $f_y(0,0)$ we can compute $f_{xy}(0,0)$ and $f_{yx}(0,0)$: $$ f_{xy}(0,0) ~=~ \lim_{k\to 0} \frac{f_x(0,k)-f_x(0,0)}k ~=~ \lim_{k\to 0} \frac{f_x(0,k)}k $$ and $$ f_{yx}(0,0) ~=~ \lim_{h\to 0} \frac{f_y(h,0)-f_y(0,0)}h ~=~ \lim_{h\to 0} \frac{f_y(h,0)}h $$ First, note that for $(x,y)\neq(0,0)$ you have $$ f_x(x,y)=\frac{y\big(x^4+4x^2y^2-y^4\big)}{\big(x^2+y^2\big)^2} $$ and $$ f_y(x,y)=\frac{x\big(x^4-4x^2y^2-y^4\big)}{\big(x^2+y^2\big)^2} $$ so that for $h,k\neq 0$ $$ f_x(0,k)=-k \quad\text{and}\quad f_y(h,0)=h $$ Putting all together: $$ f_{xy}(0,0) ~=~ \lim_{k\to 0} \frac{f_x(0,k)}k ~=~ \lim_{k\to 0}\frac{-k}{k} ~=~ -1 $$ and $$ f_{yx}(0,0) ~=~ \lim_{h\to 0} \frac{f_y(h,0)-f_y(0,0)}h ~=~ \lim_{h\to 0} \frac{f_y(h,0)}h ~=~ \lim_{h\to 0}\frac{h}{h} ~=~ 1 $$ so $$ f_{xy}(0,0)~=~-1 ~~\neq~~ 1~=~f_{yx}(0,0) $$<|endoftext|> TITLE: $a+b=c \times d$ and $a\times b = c + d$ QUESTION [8 upvotes]: There is a 'nice' relationship between the integers (1,5) and (2,3) as $$1+5=2 \times 3;$$ $$1\times 5 = 2 + 3.$$ So I tried to find all positive integers pairs $(a, b)$ and $(c, d)$ such that $$a+b=c \times d;$$ $$a\times b = c + d.$$ To find this, $a, b, c, d$ must satisfy $$(a+1)(b+1)=(c+1)(d+1).$$ However, this condition is only necessary but not sufficient. Any idea? REPLY [4 votes]: The following approach is based on the idea that, for positive integers $x,y$, the product $xy$ typically exceeds the sum $x+y$. We can apply this to show that in the problem of the question, either $a=b=c=d=2$ or else at least one of $a,b,c,d$ is 1; in this case it's easy to arrive at the only remaining solution $1,5;2,3$. Suppose for two positive integers $x$ and $y$, that $x,y \ge k$ for some fixed $k \ge 2$. Then from $(x-k)(y-k) \ge 0$ we have, after expanding and rearranging, that $xy \ge k(x+y-k)$. Now assume that each of $a,b,c,d$ is at least $k$. Of course we also use the assumptions $ab=c+d,cd=a+b$ of the question. We then have $ab \ge k(a+b-k)=k(cd-k)$, and also $cd \ge k(c+d-k)=k(ab-k)$. Putting these together we have $ ab \ge k(k(ab-k)-k)=k^2ab-(k^3+k^2)$. This inequality simplified is $$ab \le \frac{k^2}{k-1}.$$ Now if $k=2$ here we obtain $ab \le 4$ which with the assumptions $a,b \ge 2$ leads to $a=b=2$, and (by symmetry or by the initial relations) also $c=d=2$. If $k=3$ then we obtain $ab \le 9/2 = 4.5$, but this cannot hold since we're assuming $a,b \ge 3$ so that in fact $ab \ge 9$. So by considering how products are typically larger than sums, we have shown that, except for the solution $a=b=c=d=2$, one of the values $a,b,c,d$ must be 1. Putting say $a=1$ into the two equations, one easily gets $a=1,b=5$ and that $c,d$ are $2,3$ insome order.<|endoftext|> TITLE: $\cos^n x-\sin^n x=1$ QUESTION [5 upvotes]: For $0 < x < 2\pi$ and positive even $n$, the only solution for $\cos^n x-\sin^n x=1$ is $\pi$. The argument is simple as $0\le\cos^n x, \sin^n x\le1$ and hence $\cos^n x-\sin^n x=1$ iff $\cos^n x=1$ and $\sin^n x=0$. My question is that any nice argument to show the following statement? 'For $0 < x < 2\pi$ and positive odd $n$, the only solution for $\cos^n x-\sin^n x=1$ is $\frac{3\pi}{2}$.' REPLY [8 votes]: We leave the case $n = 1$ and $n = 2$ separately, and assume $n \geq 3$ from now on. Observe that if $|r| \leq 1$, then $|r^n| \leq r^2$ with equality if and only if $r = 0$ or $|r| = 1$. Then it follows that $$1 = \left|\cos^n x - \sin^n x\right| \leq \left|\cos^n x\right| + \left|\sin^n x\right| \leq \cos^2 x + \sin^2 x = 1. 
$$ This forces every intermediate inequality to be equality. In particular, we must have $$ \cos x , \sin x \in \{0, \pm 1\}.$$ Thus $x \in \{ \frac{\pi}{2}, \pi, \frac{3\pi}{2} \}$. Now the rest is clear.<|endoftext|> TITLE: Approximation of $\sum_{x \le k} \frac{\log(x)}{x}$ QUESTION [5 upvotes]: Originally posted as a non-homework question. New to the site, and didn't know asking for homework advice was O.K. Anyways, here's what's going on: I'm trying to show there exists a constant $B$ such that $$ \sum_{x \le k} \frac{\log(x)}{x} = \frac{1}{2}\log^2(k) + B + O\left(\frac{\log(k)}{k}\right) $$ I'm trying via partial summation to establish this. I think some of my trouble lies in understanding the question. If we're using the $O$ notation to bound an error term, and if we just need to show there exists a constant $B$ such that the above holds, why isn't $B$ absorbed into the error term? REPLY [3 votes]: The Euler-Maclaurin Sum Formula gives this immediately because $$ \int\frac{\log(x)}{x}\,\mathrm{d}x=\frac12\log(x)^2+C $$ The constant $B$ dominates the error term $O\left(\frac{\log(x)}{x}\right)$, so it is separate.<|endoftext|> TITLE: Is $Z(x^2-y^3)$ isomorphic to $Z(y^2-x^3-x^2)$ over the complex numbers? QUESTION [5 upvotes]: I'm having trouble determining if the algebraic sets $Z(x^2-y^3)\subset \mathbb{A}^2$ and $Z(y^2-x^3-x^2)\subset\mathbb{A}^2$ are isomorphic over $\mathbb{C}$. My guess is that this boils down to determining if $\mathbb{C}[x,y]/(x^2-y^3)$ is isomorphic to $\mathbb{C}[x,y]/(y^2-x^3-x^2)$ but then again, I'm stuck. REPLY [4 votes]: Thinking geometrically, we expect these varieties are not isomorphic, due to the fact that the first is a cusp, while the second is a node. One way to verify this is to consider the tangent cone of each. In the first case, we get $TC_{(0,0)}=V(x^2)$ which is interpreted as the line $x=0$ with multiplicity $2.$ In the second case, we get $TC_{(0,0)}=V(y^2-x^2)=V((y-x)(y+x))$ which is two distinct lines. Since the tangent cone is an invariant under isomorphism, we see that there is no point on the first variety to correspond with the origin in the second, and vice-versa.<|endoftext|> TITLE: Relation involving the conductor of an elliptic curve QUESTION [5 upvotes]: Consider an elliptic curve $E: y^{2} = x^{3} + ax + b$. Then the quadratic twist by a squarefree $d$ is given by $E^{d} : dy^{2} = x^{3} + ax + b$. What is the relationship between the conductor of $E^{d}$ and $E$? REPLY [2 votes]: OP, I think the conductor given in your comment is wrong. SAGE says that the conductor of $E^{19}$ is $2^4 \cdot 19^2 \cdot 53$. What David wrote is true in the case $d \equiv 1$ mod $4$ i.e. for $d=-19$ instead of $19$. You can imagine the twist by $19$ as a twist by $-19$ then by $-1$. $2$ appears in the conductor because of the twist by $-1$. I think this recent question is reusable here: writing down the minimal discriminant of an elliptic curve It seems that for $p \ge 5$ the twist by $\tilde{p} = \left( \frac{-1}{p} \right)p$ (which is always of the form $4k+1$) decreases the discriminant if and only if $p^6| \Delta$ and $p| c_4$. If $E$ has multiplicative reduction at $p$ then $E^{\tilde{p}}$ will have additive potentially multiplicative reduction therefore the exponent of $p$ in the conductor will change from $1$ to $2$. And if $E$ had additive, potentially multiplicative reduction at $p$, the exponent will change from $2$ to $1$. 
If $E$ has potentially good additive reduction at $p$ then the exponent of $p$ in the conductor can stay $2$ or decrease to $0$. In the latter case, the discriminant also decreases and we have $p^6 | \Delta$ and $p|c_4$. But there are examples where the discriminant decreases but the conductor does not, e.g. Cremona 121A2 has $-11$-twist 121C1 where the discriminant changes from $-1 \cdot 11^{10}$ to $-1 \cdot 11^4$ but the conductor stays the same. (See e.g. http://www.lmfdb.org/EllipticCurve/Q/121.c2 and http://www.lmfdb.org/EllipticCurve/Q/121.a1 )<|endoftext|> TITLE: Is greatest common divisor of two numbers really their smallest linear combination? QUESTION [23 upvotes]: A lecture note from MIT on number theory says: Theorem 5. The greatest common divisor of a and b is equal to the smallest positive linear combination of a and b. For example, the greatest common divisor of 52 and 44 is 4. And, sure enough, 4 is a linear combination of 52 and 44: 6 · 52 + (−7) · 44 = 4 What about 12 and 6? Their gcd is 6, but 0, which is less than 6, can be written as a linear combination of 12 and 6 as well, e.g. $1\cdot 12+(-2)\cdot 6=0$. REPLY [50 votes]: You wrote it yourself: the gcd is the smallest positive linear combination. Smallest positive linear combination is shorthand for smallest positive number which is a linear combination. It is true that $0$ is a linear combination of $12$ and $6$ with integer coefficients, but $0$ is not positive. The proof is not difficult, but it is somewhat lengthy. We give full detail below. Let $e$ be the smallest positive linear combination $as+bt$ of $a$ and $b$, where $s$ and $t$ are integers. Suppose in particular that $e=ax+by$. Let $d=\gcd(a,b)$. Then $d$ divides $a$ and $b$, so it divides $ax+by$. Thus $d$ divides $e$, and therefore in particular $d\le e$. We show that in fact $e$ is a common divisor of $a$ and $b$, which will imply that $e\le d$. That, together with our earlier $d\le e$, will imply that $d=e$. So it remains to show that $e$ divides $a$ and $e$ divides $b$. We show that $e$ divides $a$. The proof that $e$ divides $b$ is essentially the same. Suppose to the contrary that $e$ does not divide $a$. Then when we try to divide $a$ by $e$, we get a positive remainder. More precisely, $$a=qe+r,$$ where $0\lt r\lt e$. Then $$r=a-qe=a-q(ax+by)=a(1-qx)+b(-qy).$$ This means that $r$ is a linear combination of $a$ and $b$, and is positive and less than $e$. This contradicts the fact that $e$ is the smallest positive linear combination of $a$ and $b$.<|endoftext|> TITLE: How does smoothness prevent "singularities"? QUESTION [6 upvotes]: This is a refinement of one of my earlier questions (I failed to put into words what I really wanted to ask). First of all, I'm not sure "singularity" is the correct word to use hence the quotes. Consider the following wild knot: Then what exactly happens where the curls get infinitely small? Is the knot still differentiable there? I'm asking because I'm trying to understand why requiring a knot to be differentiable is not enough to prevent knots from being wild. On the other hand, smoothness is enough. Thanks for help! (If anyone knows the parametric equation of this curve it might make it easier to see what happens.) REPLY [3 votes]: The image given is only an embedding of manifolds, and doesn't look like an embedding of differential manifolds, and even less of an embedding of smooth manifolds. To be an embedding of differential manifolds, you need your map to induce an injective map on tangent vectors.
Here, at the end of the infinitely shrinking knot, there is no tangent line to your embedding : on one side you want to send the tangent vector to a horizontal vector, but on the other side, the slope keeps turning in circle as you get closer to the singularity. However, all is not lost, you can squeeze the chain : instead of using a map of the kind $t \mapsto (tx(\log t), ty(\log t), tz(\log t))$ for some periodic functions $x,y,z$, use $t \mapsto (tx(\log t), t^2x(\log t)y(\log t), t^2x(\log t)z(\log t))$ instead. It is still not differentiable at $t=0$ because $x(\log t)$ isn't convergent, but at least there is a tangent line. I think you may find a differentiable wild knot with enough tinkering. But you can't have a map of $C^1$ manifolds (nor of smooth manifolds) that looks like that. If the map was $C^1$, the tangent line has to vary continuously when you move along the knot, so there has to be a whole neighboorhood around the singularity where all the tangent lines don't differ from the horizontal line by, say, a $\pi/4$ angle. But you can't make any wild knotting with only using tangent lines in that cone. In particular, you can locally thicken your knot by attaching small disks all orthogonal to the original horizontal tangent vector. Then since the circle is compact, you should be able to get a global thickening from a finite number of those local ones.<|endoftext|> TITLE: What's up with Plouffe's inverter? Is there an alternative? QUESTION [16 upvotes]: For quite some time now (at least a year), whenever I tried to use Plouffe's inverter, the request timed out, but there's no indication either on the site itself or on the Web that it's out of service or having problems. Does anyone know whether it's still online? Have I just been extremely unlucky? Also, if it really is offline or practically unusable, do you know of any alternatives? I know OEIS, but that's for integer sequences, and its use for decimal representations of real numbers feels a bit unnatural (though it has quite a few sequences of that kind). REPLY [2 votes]: It seems that all the referenced websites are down, at least for the moment. However, I just noticed the following on Simon Plouffe's website: 2016 : Portable version of the Plouffe Inverter : Version portable de l'Inverseur de Plouffe. 3 billion entries at 32 digits precision. This portable version can be accessed here.<|endoftext|> TITLE: Recovering a finite group's structure from the order of its elements. QUESTION [29 upvotes]: Suppose you know the following two things about a group $G$ with $n$ elements: the order of each of the $n$ elements in $G$; $G$ is uniquely determined by the orders in (1). Question: How difficult is it to recover the group structure of $G$? In other words, what is the best way to use this information to construct a Cayley table for $G$? Note: (1) alone is not enough to uniquely determine a group. See this MO post for more. Information about identifying when (1) implies (2) would be welcomed as well. REPLY [18 votes]: This is actually quite a nontrivial question and is related to a concept called OD-characterizability, a topic of current research. Let me throw some definitions at you. Definition. The prime graph of a group $G$ is a graph $\Gamma_G=\langle V, E \rangle$ where the vertex set $V$ is comprised of the prime divisors of $|G|$ and $\{p,q\}\in E$ if and only if there exists an element of order $pq$ in $G$. 
The degree pattern of a group $G$ is defined as $(\operatorname{deg}(p_1),\ldots,\operatorname{deg}(p_k))$ for $i=1,\ldots ,|V|$, where $\operatorname{deg}(p)$ denotes the degree of the vertex $p$ in $\Gamma_G$. Definition. We say that a group $G$ is $n$-fold OD characterizable if there are exactly $n$ nonisomorphic finite groups with the same order and degree pattern as $G$. If a group $G$ is $1$-fold OD-characterizable, we simply say $G$ is OD-characterizable. There is no reason why a group with a unique order sequence could not have the same degree pattern as another group of the same order. $p$-groups are an obvious example. On the other hand, assuming we know the order sequence of $G$, we can certainly construct the degree pattern of $G$. Of course, two groups with which have the same order sequence surely implies that they have the same degree pattern, so if a group is not uniquely determined by its order sequence it is not OD-characterizable. Thus every group which is OD-characterizable has a unique order sequence, and all the research which has been done about those should apply to your groups. Unfortunately most OD-characterizability papers that I have seen focus on proving that certain classes of groups are OD-characterizable, e.g. alternating and symmetric groups, rather than on what OD-characterizability itself says about group structure. I suspect that's because it doesn't actually say a whole lot. For this reason I think that the best place to look if you planned to research this further would be at the order sequences of $p$-groups, as that is the primary place your condition differs from OD-characterizability. However, not to be a bummer, but I wouldn't expect to be able to make any widesweeping statements. For example, amongst groups of order 32, there are $21$ order sequences. Out of those, the $10$ groups with unique order sequences are: $\mathbb{Z}_{32}$, $\left(\mathbb{Z}_{2}\right)^5$, $Q_{32}$, $D_{32}$, $D_{16}\times \mathbb{Z}_2$, $D_8 \times V$, the semidihedral group $SD_{32}$, the holomorph of $\mathbb{Z}_8$, and some nonabelian groups which are just referred to as $\text{SmallGroup}(32,7)$ and $\text{SmallGroup}(32,15)$. So whatever properties would be true of groups which are uniquely characterizable by their order sequences would have to be shared by all those groups, which as you can see are quite different.<|endoftext|> TITLE: Sum of the digits of a numbers QUESTION [5 upvotes]: Take a number say 987654. Sum it's digits 9 + 8 + 7 + 6 + 5 + 4 = 39 3 + 9 = 12 1 + 2 = 3 i.e. keep doing this till you get a single digit answer. Now I take the same number & do it in other different ways, I still end up with the same answer. 987 + 654 = 1641 16 + 4 + 1 = 21 2 + 1 = 3 Or 98765 + 4 = 98769 9876 + 9 = 9885 988 + 5 = 993 99 + 3 = 102 1 + 0 + 2 = 3 How come I always get the same answer (3 in this case). This is not special for 987654. It's for any number you take. What's the reason or theory behind this? (PS - I am not sure what's the right tag for this question. Please correct if necessary). REPLY [2 votes]: What you have stumbled upon is known as taking the digital root, which is also sometimes referred to as a part of "Vedic mathematics." You might also be interested in reading about "Casting out nines". These sources should help point you in the right direction.<|endoftext|> TITLE: Are maps inducing the same cohomology homomorphisms homotopic? 
QUESTION [10 upvotes]: It is not hard to show that given $f,g: X \rightarrow Y$, with $f$ and $g$ homotopic the induced homomorphisms $f^*, g^* : H^* (Y, \mathbb{Z}) \rightarrow H^* (X, \mathbb{Z})$ are the same. Is the converse true? i.e. if $f^* = g^*$ then is it necessarily true that $f$ and $g$ are homotopic? I'm sure this is far too strong to hold in general. But I'm particularly interested in the case where $X$ and $Y$ are finite CW-complexes. I feel that results like CW-approximation make it plausible that such a result may hold. I just can't see how to prove it or how to construct a counterexample. Thanks. REPLY [14 votes]: This is not true; for instance, every map $S^3 \rightarrow S^2$ induces the trivial map on cohomology. However, you can detect nontriviality by taking the (homotopy) cofiber of the map, i.e. attach a 4-disk to $S^2$ along the image of $S^3$. For the trivial map this gives you $S^2 \vee S^4$, whereas for instance the Hopf fibration will give you $\mathbb{C}P^2$. To be totally precise, you can tell that this is distinct from $S^2 \vee S^4$ by checking that the self-cup product of the 2-dimensional generator $\alpha$ is nontrivial -- in fact, it's a 4-dimensional generator $\beta$. In general you get that $\alpha \smile \alpha = n\beta$ for some $n \in \mathbb{Z}$. This $n$ is called the Hopf invariant of the map; the Hopf invariant can be defined for any map $S^{2k-1} \rightarrow S^k$, and in fact defines a homomorphism $\pi_{2k-1}(S^k) \rightarrow \mathbb{Z}$. It's rather easy to show that this always hits the even integers when $k$ is even and is trivial when $k$ is odd. With a little more work, you can show that if this is surjective, it must be that $k=2^t$. But in fact, it's surjective precisely when $k \in \{1 , 2, 4, 8\}$, and moreover this is actually equivalent to the statement that the only real division algebras are the real numbers $\mathbb{R}$, the complex numbers $\mathbb{C}$, the quaternions $\mathbb{H}$, and the octonions $\mathbb{O}$! If you think this is as cool as I do, you should check out Mosher & Tangora's excellent (and incredibly inexpensive) book Cohomology Operations and Applications to Homotopy Theory.<|endoftext|> TITLE: Any example of manifold without global trivialization of tangent bundle QUESTION [19 upvotes]: It is said for most manifolds, there does not exist a global trivialization of the tangent bundle. I am not quite clear about it. The tangent bundle is defined as $$TM=\bigsqcup_{p\in M}T_PM$$ So is the above statement saying that generally $$ \bigsqcup_{p\in M}T_PM\neq M\times\mathbb{R}^n? $$ But I think the tangent space is just attaching a $\mathbb{R}^n$ to every point on $M$, so I wonder what's the reason for it is not a product space? Plus, when defining trivialization, we have a lot of constraint on the function $F:TM\rightarrow M\times V$, can anyone explain the necessity of those constraint? At last, does $S^2$ has a global trivialization? Update: Following is my attempt to trivialize $S^2$, but meet some problem. I think it may reflect some aspect in the impossibility to trivialization of $S^2$, isn't it? We want to define trivialization $F:TS^2\rightarrow S^2\times \mathbb{R}^2$. First of all, $F$ should be well-defined. There are 3 different approach to define a tangent space, here I take the definition via chart. So, an element in $TS^2$ is $\left[(p, v, (U,\varphi))\right]$, and of course I try to define its image to be $(p, v)$. Then the problem comes. 
Because we need at least 2 chart to cover $S^2$, so when taking another representative $(p, w, (V,\phi))$ of the equivalent class $\left[(p, v, (U,\varphi))\right]$, we map it to $(p, w)$, which conflicts the previous image. Of course it is only one attempt, but I think it may reflect some difficulty to define $F$ because it need to preserve coordinate transformation. Right? Eh.. I realized my attempt is too trivial. If I apply this method to any manifold, $F$ is never well-define... Can anyone provide an manifold which can be trivialize? I think I may use it to get better understanding. REPLY [27 votes]: We define $$TM=\bigsqcup_{p\in M}T_PM$$ with a smooth structure pulled back from the projection map. This is a key point. The tangent bundle's topology and smooth structure capture some of the manifold's topology. You can find an easy bijection $TM\leftrightarrow M\times\mathbb{R}^n$, but you cannot in general find a fiber-preserving diffeomorphism between the two spaces. When we trivialize, we require that $F:TM\to M\times V$ be not just a diffeomorphism but a diffeomorphism that is a fiberwise isomorphism. A tangent bundle isn't just some disjoint collection of vector spaces all floating off in abstract mathland - there's more structure than that. We need two additional things: a projection map $p:TM\to M$ taking a tangent vector to its basepoint, and that $M$ is covered by neighborhoods $U$ which obey two conditions: $p^{-1}U$ is diffeomorphic to $U\times\mathbb{R}^n$ (say via $\phi_U$) in a way that respects projection onto $U$ (i.e. $\pi_U\circ\phi_U = p$), and for two such neighborhoods $U$ and $V$, there is a family of vector space isomorphisms which govern the transformation of the fibers: $$U\cap V\ni x\mapsto\theta_{UV}(x):\{x\}\times \mathbb{R}^n\to\{x\}\times\mathbb{R}^n$$ (where the first $\{x\}\times\mathbb{R}^n\subset U\times\mathbb{R}^n$ and the second $\{x\}\times\mathbb{R}^n\subset V\times\mathbb{R}^n$). This condition is in place so that when we change coordinates, the new fiber still has the structure of a vector space. These neighborhoods are called "local trivializations;" they're analogous to coordinate neighborhoods in a manifold. (In fact, one method of constructing $TM$ is by suitably patching together local trivializations from a cover of coordinate neighborhoods.) For $F:TM\to M\times\mathbb{R}^n$ to be a global trivialization, we need not just that $F$ is a diffeomorphism, but that $F$ preserves all of this structure. In particular, when restricted to a single tangent space, $F$ must be an isomorphism. This is much stronger than simply requiring $F$ be a diffeomorphism between $TM$ and $M\times\mathbb{R}^n$. The standard counterexample against the idea that all tangent bundles are trivializable is $T\mathbb{S}^2$. The "hairy ball" theorem states that there is no nonvanishing vector field on $\mathbb{S}^2$. You can see this from the Poincare-Hopf index theorem: On any closed smooth manifold $M$, for any nondegenerate vector field $V$ on $M$, the Euler characteristic $$\chi(M) = \sum_{x\in\{\mbox{zeros of }V\}} \iota_v(x)$$ where $\iota_v(x)$ is the index of $v$ at $x$, the degree of the vector field when restricted to a small circle about $x$ and normalized. Now it's clear that $T\mathbb{S}^2$ is not trivializable: $\chi(\mathbb{S}^2) = 2$, and if we had a trivialization, then we would have a nonvanishing vector field which would force the Euler characteristic to $0$. 
In fact, we can see from this much more than that $T\mathbb{S}^2$ is nontrivializable: the Euler characteristic is an obstruction to the trivializability of the tangent bundle of a manifold. In order for the tangent bundle to be trivializable, we must be able to find $n$ global sections which are a pointwise basis for the tangent spaces. Each of these sections would be a nonvanishing vector field, which would imply that the Euler characteristic of the manifold is $0$. (Note that, as Jason DeVito points out below, a zero Euler characteristic is necessary but not sufficient for a trivializable tangent bundle.) This is an edited response to your attempt to trivialize $T\mathbb{S}^2$. Let's be a little more concrete: represent $\mathbb{S}^2$ as $\widehat{\mathbb{C}}$. Charts are the identity $\widehat{\mathbb{C}}-\{\infty\}\to\mathbb{C}$, and inversion $\widehat{\mathbb{C}}-\{0\}\to\mathbb{C}$ where $p\mapsto \frac{1}{p}$. (We define $\frac{1}{\infty}=0$). Note that transition maps are given by inversion, $w = z^{-1}$. Each of these neighborhoods is a trivialization of $T\mathbb{S}^2$, so in each of them we can represent a tangent vector as $(v,z)$ where $v$ is the vector and $z$ is the basepoint. Let's start with $\mathbb{C}$. Define on this neighborhood $F(v,z) = (v,z)$. This takes care of the map $F$ for all of $\widehat{\mathbb{C}}-\{\infty\}$. To extend to infinity, we need to define $F$ on $\widehat{\mathbb{C}}-\{0\}$ so that it agrees with the definition we have given on $\mathbb{C}$. Note that the differential of the transition function $\frac{1}{z}$ is $\frac{-1}{z^2}$. For every $w\in\widehat{\mathbb{C}}-\{\infty\}$, we need to define $F(v,w) = (\frac{-v}{w^2},w^{-1})$ so that it is well-defined under coordinate changes. Now how should we define $F(v,\infty)$? We see the problem: We'll have to map $(v,\infty)\mapsto 0$ in order for $F$ to be continuous at $\infty$. This prevents $F$ from being an isomorphism on $T_\infty\widehat{\mathbb{C}}$, so it's not possible to use this method to trivialize $F$. (In fact, it's not possible for reasons discussed above.) REPLY [6 votes]: An example and an answer to your last question: $S^2$ does not admit a global trivialization of $TS^2$. By the Hairy Ball Theorem every vector field on $S^2$ has at least one zero. If $TS^2$ had a global trivialization $F \colon S^2 \times \mathbb{R}^2 \to TS^2$ then $X(p) = F(p,v)$ would be a globally non-vanishing vector field for any $0 \neq v \in \mathbb{R}^2$.<|endoftext|> TITLE: Prove $\sum_{i=0}^{n}\left(x_{i}^{n}\prod_{0\leq k\leq n}^{k\neq i}\frac{x-x_k}{x_i-x_k}\right)=x^n$ QUESTION [7 upvotes]: Suppose $x_0$ , $x_1$ , $x_2$ , ... , $x_n$ are distinct real numbers , prove that : $$ \large{\displaystyle{\sum_{i=0}^{n} \left( x_{i}^{n}\prod_{\substack{0\leq k\leq n \\ k\neq i }}\frac{x-x_k}{x_i-x_k} \right)=x^n}} $$ I have no ideas to do this question REPLY [6 votes]: That's simply Lagrange's polynomial interpolation formula for the values of the polynomial $x^n$. Since there are $n+1$ data points, the two polynomials coincide.<|endoftext|> TITLE: How to Decompose $\mathbb{N}$ like this? QUESTION [8 upvotes]: Possible Duplicate: Partitioning an infinite set Partition of N into infinite number of infinite disjoint sets? Is it possible to find a family of sets $X_{i}$, $i\in\mathbb{N}$, such that: $\forall i$, $X_i$ is infinite, $X_i\cap X_j=\emptyset$ for $i\neq j$, $\mathbb{N}=\bigcup_{i=1}^{\infty}X_i$ Maybe it is an easy question, but I'm curious about the answer and I couldnt figure out any solution. 
Thanks REPLY [5 votes]: This is much like the answer by GEdgar, but takes into account the fact that $0\in\Bbb N$. Define $X_i$ to be the set of numbers whose representation ends with exactly $i$ digits $1$ (so in particular $X_0$ is the set of numbers that do not end with a digit $1$; it is obviously infinite (and $0\in X_0$; this is why I didn't take digits $0$), and you can get $X_i$ from $X_0$ by adding $i$ digits $1$ to the end of each element). I was originally thinking of binary representation, but it actually works for any base, in particular for base $10$.<|endoftext|> TITLE: How to present math as something interesting? QUESTION [7 upvotes]: If you are helping someone with mathematics in high school, what do you need to do to win their attention so that they can focus on the curriculum? REPLY [2 votes]: Mathematics is INHERENTLY interesting if it is UNDERSTOOD. "Understand" does not mean ability to remember and recite and mechanically perform and willingness to conform and follow directions. "Understand" means that you see what we are after, what we are about, what we want, what there IS to want, where we are going, what we are doing. Mathematics as rules and arguments does not reveal that or even hint at it. Neither does mathematics as rigor or reasoning or application or cognition or history or success or mathematicians or philosophy. Mathematics is about constructing and exploring. Promoting successes gives no clue to what it's about. It doesn't matter how many findings one remembers. Mathematics is ABOUT creating and searching, not pride in what was created and found. The relevance of mathematics is not in its application. Mathematics is the physical science of quantity and the phenomenal developments that proceed from naming and symbolizing quantities, operations with quantities, relationships between and among quantities, and so on, including interest in the things to which quantity applies - mainly sets and fields and rings and algebras and whatever - and the astounding implications and properties of all of these and their descendants. The claim that mathematics is abstract surgically removes all possible interest in it except for exceptional jugglers or those who see beyond the claim. It is no more abstract than any other science. The fact that many of its findings can be proven is due to the nature of quantity and not the esotericness (esotericality, esotericosity, esotericicity) of the subject.<|endoftext|> TITLE: Is Fourier transform characterized by its diagonalization properties? QUESTION [20 upvotes]: Let us fix the following convention for the Fourier transform in $L^1(\mathbb{R})$ space: $$\hat{f}(\xi)=\int_{-\infty}^\infty f(x)\, e^{-2\pi i x\xi}\, dx.$$ We then have the following properties: \begin{align}\tag{1} \displaystyle \left[\frac{df}{dx}\right]^\hat{}(\xi)=2\pi i\xi\, \hat{f}(\xi); \\ \tag{2} \displaystyle \left[ -2\pi i x\, f\right]^\hat{}(\xi)=\frac{d\hat{f}}{d\xi}(\xi);\\ \tag{3} \displaystyle \hat{f}(0)=\int_{-\infty}^\infty f(x)\, dx. \end{align} Question Let $K(x, \xi)$ be a bounded function. Suppose that the integral transform $$Tf(\xi)=\int_{-\infty}^{\infty}f(x)K(x, \xi)\, dx,\quad f \in L^1(\mathbb{R})$$ satisfies properties (1), (2) and (3). Is it true that $K(x, \xi)=\exp(-2\pi i x\xi)$? Motivation for this question comes from the fact that one can evaluate $$\hat{G}(\xi)=\left[ \exp(-\pi x^2)\right]^\hat{}$$ by using only the properties (1), (2) and (3).
This is done by Fourier transforming both sides of the differential identity $$\frac{d}{dx}e^{-\pi x^2}=-2\pi x\, e^{-\pi x^2},$$ obtaining the Cauchy problem $$ \begin{cases} -2\pi \xi\, \hat{G}(\xi)=\frac{d}{d\xi}\hat{G}(\xi) \\ \hat{G}(0)=1 \end{cases} $$ whose unique solution is $$\hat{G}(\xi)=\exp(-\pi\,\xi^2).$$ REPLY [16 votes]: Apart from some quibbling about making sure $K$ is sufficiently integrable or something... this is true. E.g., for precision, take $K$ to be a tempered distribution in two variables. Using the hypothesis that $2\pi ix\cdot T=T\cdot {d\over dx}$ as operators on Schwartz functions, (thinking of "$x$" as multiplication-by-$x$), integration by parts in the integral for $T$ gives ${\partial\over \partial y}K(x,y)=2\pi i x\cdot K(x,y)$ as tempered distribution in two variables. This has obvious classical solutions $C\cdot e^{2\pi ixy}$, as expected. To show that there are no others, among tempered distributions, one way is to divide $K(x,y)$ by $e^{2\pi ixy}$, so the equation becomes ${\partial \over \partial y}K(x,y)=0$. By symmetry, ${\partial\over \partial x}K(x,y)=0$. Integrating, $K(x,y)$ is a translation-invariant tempered distribution in two variables. It is a separate exercise to see that all such are constants. Edit: in response to @GiuseppeNegro's comment (apart from correcting the sign), the secondary exercise of proving that vanishing first partials implies that a tempered distribution is (integrate-against-) a constant has different solutions depending on one's context, I think. Even in just a single variable, while we can instantly invoke the mean value theorem to prove that a function with pointwise values is constant when it is differentiable and has vanishing derivative, that literal argument does not immediately apply to distributions. In a single variable, integration by parts proves that $u'=0$ for distribution $u$ implies that $u(f')=0\,$ for all test functions $f$, and we can characterize such $f$, namely, that their integrals over the whole line are $0$, from which a small further argument proves that $u$ is a constant. This sort of argument seems to become a little uglier in more than one variable... and, in any case, I tend to favor a slightly different argument that is a special case of proving uniqueness of various group-invariant functionals. E.g., on a real Lie group $G$, there is a unique right $G$-invariant distribution (=functional on test functions), and it is integration-against right Haar measure. The argument is essentially just interchange of the functional and integration against an approximate identity, justified in the context of Gelfand-Pettis (weak) integrals. Probably there are more elementary arguments, but this sort of approach seems clearer and more persuasive in the long run. Edit-Edit: on a Lie group $G$, to prove that all distributions annihilated by the left $G$-invariant differential operators attached to the Lie algebra $\mathfrak g$ (acting on the right) are (integrate-against-) constants: Let $f_n$ be a Dirac sequence of test functions. A test function $f$ acts on distributions by $f\cdot u=\int_G f(g)\,R_gu\;dg$, where $R$ is right translation, and the integral is distribution-valued (e.g., Gelfand-Pettis). A basic property of vector-valued integrals is that $f_n\cdot u\to u$ in the topology on distributions. At the same time, the distribution $f_n\cdot u$ is (integration-against) a smooth function. 
It is annihilated by all invariant first-order operators, so by the Mean Value Theorem it is (integration against) a constant. The distributional limit of constants is a constant.<|endoftext|> TITLE: Divide circle into 9 pieces of equal area QUESTION [27 upvotes]: I'd like to divide a unit circle disk into nine parts of equal area, using circle arcs as delimiting lines. The whole setup should be symmetric under the symmetry group of the square, i.e. 4 mirror axes and 4-fold rotational symmetry. The dividing arcs should all be of equal curvature. (Thanks to the comment by i. m. soloveichik for making me aware of this latter requirement.) For these reasons, several areas will automatically be of the same size, indicated by a common color in the figure above. There are three different colors corresponding to three different shapes, and the requirement that all three of these should have the same area therefore corresponds to two equations. This agrees nicely with the fact that there are two real parameters one may tune, e.g. the distance $d$ between the center of the figure and the centers of the dividing circles, together with the radius $r$ for these dividing circles. Other combinations are possible. But how would one obtain the actual numbers for these parameters? Is the solution even unique? I understand that it might be difficult to give an exact answer to this question. So numeric answers are acceptable as well, as long as they explain how the numbers were obtained, not only what the numbers are. REPLY [8 votes]: Based on the answer by Hagen, I wrote a bit of code to numerically compute $d$ in an outer loop and $r$ for a given $d$ in an inner loop. The results I obtained look like this: \begin{align*} r &= 4.740253970598989846488464631691100376659654929999896463057971 \\ d &= 4.441836291757233092492625306779987045065972123154874957376197 \end{align*} This was computed using arbitrary precision interval arithmetic, so unless I completely garbled up my bisection algorithm, the given digits should be reliable. The parameters denote the common zero of these two functions, written in Python for Sage and using the comment by joriki:

def area1(d, r):
    """Deficit value of two blue and one red area."""
    x = (r^2 - 1 - d^2)/(2*d)        # intersection as @joriki described it
    a1 = 2*x.arccos()                # angle for the central circle
    a2 = 2*((d + x)/r).arccos()      # angle for the outer circle
    a1 = (a1 - a1.sin())/2           # area of central circle segment
    a2 = (a2 - a2.sin())/2*r^2       # area of outer circle segment
    return d.parent().pi()/3 - (a1 - a2)   # expected minus actual moon shape

def area2(d, r):
    """Excess value of the green area."""
    x = ((2*r^2 - d^2).sqrt() - d)/2       # intersect outer circle with line x = y
    a2 = 2*((d + x)/r).arccos()            # angle for outer circle
    a2 = (a2 - a2.sin())/2*r^2             # area for one green circle segment
    return ((2*x)^2 + 4*a2) - d.parent().pi()/9   # square + segments - expected

The signs of the results are chosen such that area2 increases with $r$ for fixed $d$, while area1 increases with $d$ for optimal $r$. Approximating the resulting areas using polygons, I could verify the result with reasonable precision, so I believe that in this second attempt (see edit history for first mistake), I got the formulas right. The resulting figure, by the way, looks like this:<|endoftext|> TITLE: Are Euclid numbers squarefree? QUESTION [7 upvotes]: Are Euclid numbers squarefree? A Euclid number is by definition a Primorial number + 1. See http://mathworld.wolfram.com/Primorial.html.
In notation the $n$ th Euclid number is written as $E_n = P_n+1.$ Thus I wonder about $a^2b = E_n$ for positive integer $a,b,n$ and $a>1$. I was thinking about Korselt's criterion : http://mathworld.wolfram.com/KorseltsCriterion.html and Fermats little. Im unaware of other techniques for proving the squarefree property apart from more elementary tools as gcd and basic modular aritmetic. I cannot imagine infinite descent to work here ? Maybe a binomium will help ? I guess I missed something trivial and have a bad day. Maybe $a^2b = E_n-2$ is easier ? REPLY [3 votes]: Just to try to explain something about mathematics programming. You don't find $1 +P_n$ and hope to factor it. What you do, also exhaustive, is fix a prime $p$ as large as you can stand. Then you keep track of the values of $P_n \pmod {p^2}$ and see if you get $p^2 - 1.$ There is no reason to take $p_n \geq p,$ as that would imply $P_n \equiv 0 \pmod p,$ so for each $p$ this is a finite calculation. The result of this search, instantaneous on the machine, is that no $1 + P_n$ is divisible by the square of any prime smaller than $1625.$ I did this for $p^3 < 2^{32}$ with no luck. But I also took the easier values $\pmod p,$ just to see how often that gave anything. Note that there are repeats for $p = 277, \; 1051, \; 1381.$ EDDDIITTT, Thursday morning: I got this working in GMP, got the prime $p$ up above 1,000,000. So, no $1 + P_n$ is divisible by the square of any prime smaller than $1000000.$ I will make an attempt to paste the C++ program below, see how that goes. ========================= p p_n 3 2 7 3 19 17 31 5 59 13 61 41 73 53 97 17 131 89 139 53 149 107 167 43 173 53 181 37 211 7 223 61 271 263 277 17 277 59 307 283 313 239 317 23 331 29 347 19 463 443 467 199 509 13 571 29 601 179 673 659 809 677 827 499 877 677 881 137 953 47 983 463 997 769 1031 937 1033 587 1039 89 1051 211 1051 739 1063 71 1069 523 1109 839 1259 907 1279 811 1283 509 1291 439 1297 769 1361 163 1381 157 1381 1097 1471 1459 1543 127 1579 347 1619 1481 ========================= ========================= #include #include #include #include #include #include #include #include #include #include #include #include #include using namespace std; int PrimeQ(int i) { if ( i < 0 ) i *= -1; if ( i <= 3) return 1; else { int boo = 1; int j = 2; while (boo && j * j <= i ) { if ( i % j == 0) boo = 0; ++j; } return boo; } } // PrimeQ deterministic and guaranteed, used on a range just once // File named aa_GMP_trial.cc // compile with // g++ -o aa_GMP_trial aa_GMP_trial.cc -lgmp -lgmpxx // run with // ./aa_GMP_trial // because my machine needs to be told an executable is in the same directory // William C. 
Jagy October 2012 int main() { set Primes; mpz_class n ; n = 2; // g++ -o aa_GMP_trial aa_GMP_trial.cc -lgmp -lgmpxx for (unsigned m = 2; m <= 1654321; ++m) { { n = m; if (PrimeQ(m)) { Primes.insert(n); // cerr << n << endl; } } } set::iterator iter; int count = 0; for(iter = Primes.begin() ; iter != Primes.end() ; ++iter) { mpz_class p = *iter; // cout << setw(8) << p; ++count; // if(0 == count % 10) cout << endl; if (0 == count % 1000) cout << "progress " << p << endl; // g++ -o aa_GMP_trial aa_GMP_trial.cc -lgmp -lgmpxx // mpz_class target = p - 1; // mpz_class p2 = p; mpz_class target = p * p - 1; mpz_class p2 = p * p; mpz_class q = 1; mpz_class product = 1; set::iterator iter2; if( p > 2) { for(iter2 = Primes.begin() ; q < p && iter2 != Primes.end() ; ++iter2) { q = *iter2; product *= q; product %= (p2); if ( target == product) { cout << p << setw(12) << q << endl; } } // iter2 for q } // p > 3 } // iter for p cout << endl << endl; // g++ -o aa_GMP_trial aa_GMP_trial.cc -lgmp -lgmpxx return 0 ; } ========================<|endoftext|> TITLE: Does the product of two invertible matrix remain invertible? QUESTION [9 upvotes]: If $A$ and $B$ are two invertible $5 \times 5$ matrices, does $B^{T}A$ remain invertible? I cannot find out is there any properties of invertible matrix to my question. Thank you! REPLY [10 votes]: Yes. $$ \det(B^T\,A)=\det(B^T)\det(A)=\det(B)\det(A)\ne0. $$ Moreover $$ (B^T\,A)^{-1}=A^{-1}(B^{-1})^T. $$<|endoftext|> TITLE: Sum of two uniform random variables QUESTION [17 upvotes]: I am calculating the sum of two uniform random variables $X$ and $Y$, so that the sum is $X+Y = Z$. Since the two are independent, their densities are $f_X(x)=f_Y(x)=1$ if $0\leq x\leq1$ and $0$ otherwise. The density of the sum becomes $f_Z(z)=\int_{-\infty}^\infty f_X(z-y)f_Y(y)dy=\int_0^1f_X(z-y)dy$ by convolution. I am stuck at this stage. How do I proceed with my integral? I think a diagram make it easy but I dont know how to proceed. REPLY [22 votes]: Hint: Split the calculation into two cases: (i) $0\le z\le 1$ and (ii) $1\lt z\le 2$. Added: (i) if $0\le z\le 1$, then $f_X(z-y)=1$ if $0\le y\le z$, and $f_X(z-y)=0$ if $y\gt z$. It follows that $$\int_0^1 f_X(z-y)\,dy=\int_0^z 1\cdot dy=z$$. (ii) If $1\lt z\le 2$, then $f_X(z-y)=1$ if $z-1\le y \le 1$, and $f_X(z-y)=0$ elsewhere. It follows that $$\int_0^1 f_X(z-y)\,dy=\int_{z-1}^1 1\cdot dy=2-z.$$ Thus $f_Z(z)=z$ if $0\le z\le 1$, and $f_Z(z)=2-z$ if $1\le z\le 2$. And for completeness, $f_Z(z)=0$ if $z$ is outside the interval $[0,2]$. Remark: I suspect that the convolution way is in this case effectively no faster than the "slow" way of finding first the cumulative distribution function $F_Z(z)$, and differentiating. REPLY [7 votes]: hint: the integrand is zero unless $0 \le z-y \le 1$<|endoftext|> TITLE: Möbius strip and $\mathscr O(-1)$. Or $\mathscr O(1)$? QUESTION [12 upvotes]: On the real $\textbf P^1$ we have these algebraic line bundles: $\mathscr O(1)$ and $\mathscr O(-1)$. Which one corresponds to the Möbius strip? (Both are $1$-twists of $\textbf P^1\times\textbf A^1$, so how to distinguish them? Yes, by means of their transition functions, but how do they tell me which one has the Möbius strip as total space?) And what is the total space of the other one? I can only imagine that non-orientability should correspond to the absence of global sections, so I would bet on $-1$, but with no real reason. Also, I can't figure if to every $\mathscr O(d)$ there corresponds a different total space, or there are repetitions. 
Of course, by viewing those bundles as holomorphic bundles, there are only two surfaces up to diffeomorphism. But what about the algebraic category? Thank you for any help! REPLY [8 votes]: Edit This is a corrected answer. Many thanks to Ben who pointed out the idiocy of my original version. Given an arbitrary field $k$, the Picard group of $\mathbb P^1_k$ (consisting of the isomorphism classes of algebraic line bundles) is isomorphic to $\mathbb Z$ via the degree map $$\text {deg} : \text {Pic} (\mathbb P^1_k) \stackrel {\cong }{\to} \mathbb Z ,$$ the inverse isomorphism being $$ \mathbb Z \stackrel {\cong }{\to} \text {Pic} (\mathbb P^1_k) :n\mapsto \mathscr O(n) $$ In the case $k=\mathbb R$ things become quite interesting because the real points $\mathbb P^1_\mathbb R (\mathbb R)$ of the projective line $\mathbb P^1_\mathbb R$ are endowed with the structure of a real differentiable manifold diffeomorphic to the circle $S^1$. And that manifold has a differentiable Picard group $\text {Pic}^{\text {diff}} (\mathbb P^1(\mathbb R)) $ of order two generated by the Möbius bundle $M$, so that we have a group isomorphism $$ \text {Pic}^{\text {diff}} (\mathbb P^1(\mathbb R)) \stackrel {\cong }{\to} \mathbb Z/2\mathbb Z: M\mapsto \bar 1 $$ We then have a forgetful group homomorphism $\text {Pic} (\mathbb P^1_\mathbb R)\to \text{Pic}^{\text {diff}} (\mathbb P^1(\mathbb R)) $ forgetting the algebraic structure of a line bundle and retaining only its differentiable structure. In the above identification this morphism is just reduction modulo $2$: $$\text {Pic} (\mathbb P^1_\mathbb R)\to \text{Pic}^{\text {diff}} (\mathbb P^1(\mathbb R))\cong \mathbb Z: \mathscr O(n) \mapsto M^n\cong \bar n$$ In particular both $\mathscr O(1)$ and $\mathscr O(-1)$ are sent to the Möbius bundle, which answers your question (I hope!)<|endoftext|> TITLE: Colimits in that category of short exact sequences of abelian groups QUESTION [5 upvotes]: I'm wondering whether the category whose objects are short exact sequences of abelian groups, and whose morphisms are commutative diagrams of such short exact sequences, is cocomplete. Working naively, it seems you can get coproducts by taking them componentwise. However, for coequalizers, I think we are not so lucky. Consider the two short exact sequences $0 \rightarrow 0 \rightarrow \mathbb{Z} \rightarrow \mathbb{Z} \rightarrow 0$ and $0 \rightarrow \mathbb{Z} \rightarrow \mathbb{Z} \rightarrow 0 \rightarrow 0$ with maps between integers being the identity. Consider the morphism from the first sequence to the second which is the identity on the middle map between the integers and obviously zero elsewhere. Then taking cokernels componentwise would give $0 \rightarrow \mathbb{Z} \rightarrow 0 \rightarrow 0 \rightarrow 0$, which clearly cannot be exact. So the obvious candidate for cokernels is not the correct one, but perhaps there is another not-so-obvious choice. I'm wondering how I might go about showing whether or not there are coequalizers. REPLY [9 votes]: This is an expansion of Jason's comment. The category of short exact sequences is obviously additive and has arbitrary coproducts. Thus, cocompleteness is equivalent to the existence of cokernels. 
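(To spell out the step being used: in an additive category the coequalizer of a parallel pair $f,g$ is precisely the cokernel of $f-g$, and a category with all coproducts and all coequalizers has all colimits; so once arbitrary coproducts are available, cokernels are exactly what is missing for cocompleteness.)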
So let us given a morphism $f_* : A_* \to B_*$ of short exact sequences: $\begin{array}{ccccccccc} 0 & \rightarrow & A_1 & \rightarrow & A_2 & \rightarrow & A_3 & \rightarrow & 0 \\ & & ~ ~ \downarrow f_1 & &~~ \downarrow f_2 & & ~~ \downarrow f_3 & & \\ 0 & \rightarrow & B_1 & \rightarrow & B_2 & \rightarrow & B_3 & \rightarrow & 0 \end{array}$ The snake lemma gives us an exact sequence $0 \to \mathrm{ker}(f_1) \to \mathrm{ker}(f_2) \to \mathrm{ker}(f_3) \to \mathrm{coker}(f_1) \to \mathrm{coker}(f_2) \to \mathrm{coker}(f_3) \to 0.$ Let $K$ be the kernel of $\mathrm{coker}(f_1) \to \mathrm{coker}(f_2)$. Equivalently, $K \cong \mathrm{ker}(f_3) / (\mathrm{ker}(f_2) / \mathrm{ker}(f_1))$. Then we have a short exact sequence $C_*$ defined by $0 \to \mathrm{coker}(f_1) /K \to \mathrm{coker}(f_2) \to \mathrm{coker}(f_3) \to 0$ together with a morphism $p_* : B_* \to C_*$. I claim that this is the cokernel of $f_*$. It is an epimorphism since it components are epimorphisms. It factors through the non-exact cokernel of $f_*$, therefore we have $p_* f_* = 0$. Now let $D_*$ be another exact sequence and $g_* : B_* \to D_*$ be a morphism satisfying $g_* f_*$. Then $g_*$ factors through the non-exact cokernel of $f_*$. We are left to prove that $g_1 : B_1 \to D_1$ vanishes on $K$, so that it even factors through $C_1= \mathrm{coker}(f_1) /K$. But this is a diagram chase: An element in $K$ is represented by an element in $B_1$ whose image in $B_2$ comes from an element in $A_2$. Thus the image in $D_2$ vanishes. Since $0 \to D_1 \to D_2$ is exact, this means that the image in $D_1$ already vanishes, qed. Of course this reasoning can also be done with arrows. Therefore, if $\mathcal{A}$ is an arbitrary abelian category, then the category $S(\mathcal{A})$ of short exact sequences in $\mathcal{A}$ has cokernels and (by duality also) kernels. If $\mathcal{A}$ has coproducts and satisfies has AB4, then $S(\mathcal{A})$ has arbitrary coproducts and is therefore cocomplete. The category of short exact sequences is never abelian, in fact not balanced: From the above description of cokernels we see that $f_*$ is an epimorphism in that category iff $f_2,f_3$ are epimorphisms. Intuitively it is ok that $f_1$ doesn't appear here since $f_1$ is uniquely determined by $f_2,f_3$ due to the exactness! Similarily, $f_*$ is a monomorphism iff $f_1,f_2$ are monomorphisms. Thus, $f_*$ is a mono- and an epimorphism iff $f_2$ is an isomorphism, $f_1$ is a monomorphism and $f_3$ is an epimorphism. However, $f_*$ is an isomorphism iff $f_1,f_2,f_3$ are isomorphisms. Addendum. We can simplify these arguments a lot: Consider the category $E(\mathcal{A}) \subseteq \mathrm{Mor}(\mathcal{A})$ of epimorphisms in $\mathcal{A}$. The morphisms are commutative diagrams. Obviously, coproducts and cokernels exist in this category, since the category is closed under these operations in $\mathrm{Mor}(\mathcal{A})$. There is a forgetful functor $S(\mathcal{A}) \to E(\mathcal{A})$, which turns out to be an equivalence of categories! The quasi-inverse chooses for every epimorphism $A_2 \to A_3 \to 0$ a kernel $0 \to A_1 \to A_2$. For every morphism $(f_2,f_3) : A_* \to B_*$ in $E(\mathcal{A})$ the universal property of the kernel yields a unique $f_1 : A_1 \to B_1$ such that $(f_1,f_2,f_3)$ becomes a morphism in $S(\mathcal{A})$. Since $E(\mathcal{A})$ is cocomplete, the same is true for $S(\mathcal{A})$. 
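As a concrete illustration of this recipe (added here; it is exactly the morphism from the question above): take $A_*$ to be $0 \to 0 \to \mathbb{Z} \to \mathbb{Z} \to 0$, $B_*$ to be $0 \to \mathbb{Z} \to \mathbb{Z} \to 0 \to 0$, and $f_*$ the morphism which is the identity on the middle terms and zero elsewhere. Componentwise, $\mathrm{coker}(f_1)=\mathbb{Z}$, $\mathrm{coker}(f_2)=0$ and $\mathrm{coker}(f_3)=0$, so $K=\ker(\mathrm{coker}(f_1)\to\mathrm{coker}(f_2))=\mathbb{Z}$, and the construction yields $C_*$ equal to the zero sequence. That is consistent with the criterion above: $f_2$ and $f_3$ are epimorphisms, hence $f_*$ is an epimorphism in $S(\mathcal{A})$, and its cokernel must vanish.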
If one unwinds the definitions, one gets the same cokernels as described above explicitly.<|endoftext|> TITLE: Weakly convex functions are convex QUESTION [5 upvotes]: Let us consider a continuous function $f \colon \mathbb{R} \to \mathbb{R}$. Let us call $f$ weakly convex if $$ \int_{-\infty}^{+\infty}f(x)[\varphi(x+h)+\varphi(x-h)-2\varphi(x)]dx\geq 0 \tag{1} $$ for all $h \in \mathbb{R}$ and all $\varphi \in C_0^\infty(\mathbb{R})$ with $\varphi \geq 0$. I was told that $f$ is weakly convex if, and only if, $f$ is convex; although I can imagine that (1) is essentially the statement $f'' \geq 0$ in a weak sense, I cannot find a complete proof. Is this well-known? Is there any reference? REPLY [6 votes]: By a change of variables (translation-invariance of Lebesgue measure) the given inequality can be equivalently rewritten as $$ \int [f(x+h)+f(x-h)-2f(x)]\varphi(x)\,dx \geq 0 \qquad \text{for all }0 \leq \varphi \in C^{\infty}_0(\mathbb{R})\text{ and all }h \gt 0. $$ If $f$ were not midpoint convex then there would be $x \in \mathbb{R}$ and $h \gt 0$ such that $f(x+h) + f(x-h) - 2f(x) \lt 0$. By continuity of $f$ this must hold in some small neighborhood $U$ of $x$, so taking any nonzero $\varphi \geq 0$ supported in $U$ would yield a contradiction to the assumed inequality. Thus, $f$ is midpoint convex and hence convex because $f$ is continuous. Edit: The converse direction should be clear: it follows from $f(x+h) + f(x-h) - 2f(x) \geq 0$ for convex $f$ and all $h \gt 0$ so that the integrand in the first paragraph is non-negative. Finally, the argument above works without essential change for continuous $f \colon \mathbb{R}^n \to \mathbb{R}$.<|endoftext|> TITLE: Understanding Borel sets QUESTION [75 upvotes]: I'm studying Probability theory, but I can't fully understand what are Borel sets. In my understanding, an example would be if we have a line segment [0, 1], then a Borel set on this interval is a set of all intervals in [0, 1]. Am I wrong? I just need more examples. Also I want to understand what is Borel $\sigma$-algebra. REPLY [7 votes]: Another strange Borel set: the set of all numbers in [0,1] whose decimal expansion does not contain 7. One more: The set of all real numbers in [0,1] whose decimal expansion contains 2 or 5. (Cantor type set) Last but not least: The set of all real numbers in [0,1] whose decimal expansion contains only finitely many 6.<|endoftext|> TITLE: Does $\sin(t)$ have the same frequency as $\sin(\sin(t))$? QUESTION [7 upvotes]: I plotted $\sin(t)$ and below it $\sin(\sin(t))$ on my computer and it looks as if they have the same frequency. That led me to wonder about the following statement: $\sin(t)$ has the same frequency as $\sin(\sin(t))$ Is this statement true or false, and how to prove it? Many thanks REPLY [3 votes]: In general, it is possible to express trigonometric functions of trigonometric functions via the Jacobi-Anger expansion. In the case of $\sin(\sin(t))$, we have: $\sin(\sin(t)) = 2 \sum_{n=1}^{\infty} J_{2n-1}(1) \sin\left[\left(2n-1\right) t\right]$, where $J_{2n-1}$ is the Bessel function of the first kind of order $2n-1$. It is clear from this expansion that the zeroes of $\sin(\sin(t))$ are the same as that of $\sin(t)$, since any even multiple of $\pi$ for the argument $t$ will also lead to an even multiple of $\pi$ for the $\sin[(2n-1)t]$ term in the expansion. As MrMas mentioned, though the functions have the same period, their spectral content is different. 
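One can check the expansion numerically; here is a minimal sketch (added for illustration, assuming NumPy and SciPy are available, with the series truncated after five terms):

import numpy as np
from scipy.special import jv  # Bessel function of the first kind, J_nu(x)

t = np.linspace(0.0, 2.0 * np.pi, 1001)
exact = np.sin(np.sin(t))
# Truncated Jacobi-Anger series: sin(sin t) = 2 * sum_{n>=1} J_{2n-1}(1) * sin((2n-1) t)
approx = sum(2.0 * jv(2 * n - 1, 1.0) * np.sin((2 * n - 1) * t) for n in range(1, 6))
print(np.max(np.abs(exact - approx)))  # already below 1e-9 with only five terms

The coefficients $2J_{2n-1}(1)$ decay roughly like $1/\bigl(2^{2n-1}(2n-1)!\bigr)$, which is the quantitative reason for the small harmonic content noted below.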
The expansion can be viewed as a Fourier series for the spectral components of $\sin(\sin(t))$, the amplitudes of which are governed by the magnitude of the $(2n-1)$-th Bessel function. Here is a plot of $|J_n(1)|$ for real $n \in [1, 10]$: For $z = 1$ in $\sin(z\sin(t))$, there is very little harmonic content, and in the time domain $\sin(\sin(t))$ doesn't look terribly different from an ordinary sine wave.<|endoftext|> TITLE: Tips for finding the Galois Group of a given polynomial QUESTION [5 upvotes]: I am currently in an introductory Galois Theory course, and I thought it would be nice to compile a list of standard tricks for finding the Galois Groups of certain polynomials. I am studying from Stewart's Galois Theory, and while the book is very readable, he has given us one worked example (apparently the "canonical" $t^4-2$). Aside from some easy examples given in class, I feel like I should be more confident with these methods. I'm aware there isn't one method for finding Galois Groups, but any tips or tricks, things to look out for, etc, would be much appreciated. Although I'm interested in all answers, keep in mind that this is an undergraduate introduction to Galois Theory, and thus far, we are only working over $\mathbb{C}$ (if you own Stewart's Galois Theory, my class is currently on Chapter 13). Also, I know that questions of this nature have been asked before, but often the responses link to papers / allude to topics not covered in an intro level galois theory course. Any help is appreciated! REPLY [6 votes]: Apart from basic techniques, there are some beautiful theorems in mathematics which can help you to find the Galois group of a given polynomial over the rationals. The theorem is due to Dedekind and Frobenius and can be found in Dummit and Foote's Algebra, Chp. 13, and in David Cox's Galois Theory. The theorem says that if you reduce an irreducible polynomial modulo primes not dividing the discriminant of the polynomial, you get information about the elements of the Galois group. Example: suppose $g$ is an irreducible polynomial of degree 5. If $g$ remains irreducible when reduced modulo some prime, this means the Galois group contains a 5-cycle. If, modulo another prime, $g$ splits as an irreducible quadratic times three linear polynomials, this means the Galois group contains an element of order 2, i.e., a transposition. Therefore, by a theorem in group theory, you can conclude that $G$ is $S_5$. There is also a probabilistic way of finding the Galois group by using the Chebotarev density theorem. However, this probabilistic technique sometimes fails, for instance in degree 8. See the following for more details. http://www.math.colostate.edu/~hulpke/paper/gov.pdf http://www.math.colostate.edu/~hulpke/talks/galoistalk.pdf http://websites.math.leidenuniv.nl/algebra/Lenstra-Chebotarev.pdf http://www.math.uconn.edu/~kconrad/blurbs/galoistheory/galoisSnAn.pdf
I have been given the following subset of $\mathbb{R}^3$: $$A=\left\{\begin{pmatrix} x \\ y \\ -x+2y \end{pmatrix} \middle| x,y,z\in\mathbb{R}\right\}$$ A basis for this subset is $\mathscr{B}=\left\{ \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix},\begin{pmatrix} 0 \\ 1 \\ 2 \end{pmatrix} \right\}$, and to extend this basis to one for the vector space $\mathbb{R^3}$ we simply add to the basis the vector: $$\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$ To obtain $\mathscr{C} = \left\{ \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix},\begin{pmatrix} 0 \\ 1 \\ 2 \end{pmatrix},\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \right\}$, a basis for $\mathbb{R}^3$. We can call $B = Span\left\{\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}\right\}$, and then we can say $\mathbb{R}^3=A\bigoplus B$. What I want to know is if I am correct in interpreting the definition of projection map. Let $P:\mathbb{R}\to\mathbb{R}$ be the projection map onto A. The question asks me to calculate $P(e_1)$, $P(e_2)$ and $P(e_3)$ then write down the matrix of $P$ with respect to the standard basis of $\mathbb{R}^3$. Without explicitly giving my answer (I want to check my method, not my answers), this is my method: Write each vector $e_1$, $e_2$ and $e_3$ as a linear combination of the vectors in $\mathscr{C}$, so, for example, $e_1 = \alpha\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}+\beta\begin{pmatrix} 0 \\ 1 \\ 2 \end{pmatrix} \gamma\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$. For the projection map onto $A$ we take only the first two terms as the first two terms are in the basis $\mathscr{B}$. So, for the combination in step 1, $P(e_1)=\begin{pmatrix} \alpha \\ \beta \\ \gamma \end{pmatrix} = \alpha e_1+\beta e_2+\gamma e_3$ To form the matrix P we write down the columns of the matrix the coefficients describe in the last step, so we get: $P=\begin{pmatrix} \alpha & . & . \\ \beta & . & . \\ \gamma & . & . \end{pmatrix}$, and fill in the missing columns as we did for the first column above. Am I correct in my method? If I have any of this wrong, please guide me in the right direction. REPLY [2 votes]: (This discussion applies to finite dimensional spaces. Recall that to define a linear operator, it is sufficient to define its behaviour on a basis.) Your approach is correct. However, there is some ambiguity in defining $P$, you have defined one projection, but there are others. Suppose you have a projection $P$ onto a subspace $A$ with basis $a_1,.., a_k$. Then $P$ is uniquely defined on $A$ (since $Pa_i = a_i$). Suppose $b_{k+1},.., b_n$ together with $a_1,.., a_k$ form a basis for the whole space. Then $P$ can be arbitrarily defined on $b_i$ as long as $P b_i \in A$. So, the projection is not unique. Back to the problem on hand: Let $v_1,v_2,v_3$ be the vectors you have in $\mathscr{C}$ which form a basis for $\mathbb{R}^3$ (and $v_1, v_2$ form a basis for $A$). Then you must have $P v_1 = v_1$, $P v_2 = v_2$, but the only requirement for $P v_3$ is that it lie in $A$. So $P v_3 = \alpha_1 v_1 + \alpha_2 v_2$, where $\alpha_i$ are arbitrary (but fixed, of course). Let $V = \begin{bmatrix} v_1&v_2&v_3 \end{bmatrix}$. Then we have $PV = \begin{bmatrix} v_1&v_2& \alpha_1 v_1 + \alpha_2 v_2\end{bmatrix} = W$. You were asked to compute $P e_i$, which is tantamount to computing $P = W V^{-1}$. 
Noting that $W = \begin{bmatrix} v_1 & v_2 & 0 \end{bmatrix} + \alpha_1 \begin{bmatrix} 0 & 0 & v_1 \end{bmatrix} + \alpha_2 \begin{bmatrix} 0 & 0 & v_2 \end{bmatrix}$, we see that P can be expressed as $$ P = \begin{bmatrix} v_1 & v_2 & 0 \end{bmatrix}V^{-1} + \alpha_1 \begin{bmatrix} 0 & 0 & v_1 \end{bmatrix} V^{-1} + \alpha_2 \begin{bmatrix} 0 & 0 & v_2 \end{bmatrix} V^{-1}$$ Grinding through through the calculation gives: $$P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -1 & 2 & 0 \end{bmatrix} + \alpha_1 \begin{bmatrix} 1 & -2 & 1 \\ 0 & 0 & 0 \\ -1 & 2 & -1 \end{bmatrix} + \alpha_2 \begin{bmatrix} 0 & 0 & 0 \\ 1 & -2 & 1 \\ 2 & -4 & 2 \end{bmatrix}$$<|endoftext|> TITLE: Accumulation points: an example of a sequence that has exactly two accumulation points QUESTION [5 upvotes]: The definition in my text defines A.P. as: A point p is an A.P. of a set S if it is the limit of a sequence of points of S-{p}. I am confused by my classnotes and information about A.P. in my textbook is not enough for me to understand the topic. REPLY [4 votes]: The sequence $(-1)^n\left(1+\frac{1}{n}\right)$ has exactly two A.Ps $1$ and $-1$. More generally if $(a_n)_{n\in \mathbb{N}}$ is a sequence s.t. $a_n \to a \neq0 $ and the set $\{a_n:n \in \mathbb{N}\}$ is infinite then the sequence $((-1)^na_n)_{n\in \mathbb{N}}$ has exactly two accumulation points $a$ and $-a$.<|endoftext|> TITLE: A problem dealing with Sylow's subgroups QUESTION [12 upvotes]: Lets look at this exercise: $G$ is a finite group and $p$ is a prime that divides $|G|$. If for every element $x\in G$ such that $g.c.d.(o(x),p)=1$, we have that $g.c.d.\left([G:C_G(x)],p\right)=1$, prove that $G$ is the direct product of a $p$-Sylow with a group that has order coprime with $p$. (notation: clearly $C_G(\cdot)$ is the centralizer in $G$) If $|G|=p^rm$ ($p$ is coprime with $m$), to solve the exercise, I think that it is enough to find a normal subgroup of $G$ that has order $m$ and such that centralize a $p$-Sylow. REPLY [3 votes]: I'd translate your condition on $G$ to "every $p'$-element centralizes some $p$-Sylow subgroup". As all $p$-Sylow subgroups are conjugate, this is equivalent to saying that every $p'$-element has a conjugate lying in the centralizer of a fixed $p$-Sylow subgroup $S$. For finite groups the only subgroup intersecting each conjugacy class is the whole group (see https://mathoverflow.net/questions/26979/generating-a-finite-group-from-elements-in-each-conjugacy-class). So if you can show that the normalizer $N_G(S)$ contains a conjugate to every element of $G$ that has order a multiple of $p$, you know that $S$ is normal in $G$. Then by Schur-Zassenhaus there exists a $p'$-subgroup $H$ such that $G = S\rtimes H$. But as $H$ centralizes $S$ by the assumptions, the product is direct. To close the gap, by Sylow's theorem it is enough to show that given an element $x \in G$ of order divisible by $p$ it normalizes some $p$-Sylow subgroup. Write $x = y\cdot z$ with $y$ a $p$-element, $z$ a $p'$-element and $y$ and $z$ commuting (look at $\langle x \rangle$ if you are unsure about the existence of such a decomposition). If $z=1$ then $x$ is contained in some $p$-Sylow subgroup. Otherwise the centralizer of $z$ contains a $p$-Sylow $T$ that wlog contains $y$. As $z$ centralizes $T$ it also normalizes it, and we are done.<|endoftext|> TITLE: Are squares of independent random variables independent? QUESTION [11 upvotes]: If X and Y are independent random variables both with the same mean (0) and variance, how about $X^2$ and $Y^2$? 
I tried calculating E($X^2Y^2$)-E($X^2$)E($Y^2$) but haven't been able to get anywhere. REPLY [16 votes]: As per joriki's suggestion, my comment (with additional information) is posted as an answer. If $X$ and $Y$ are independent, then so are $g(X)$ and $h(Y)$ independent random variables for (measurable) functions $g(⋅)$ and $h(⋅)$. In particular, $X^2$ and $Y^2$ are independent random variables if $X$ and $Y$ are independent random variables. Means and variances don't come into the picture at all, and your attempted calculation of $\text{cov}(X^2,Y^2)$ will not prove independence even though the covariance will turn out to be $0$.<|endoftext|> TITLE: Why are the phase portrait of the simple plane pendulum and a domain coloring of sin(z) so similar? QUESTION [9 upvotes]: The simple plane pendulum $$\frac{d^2\theta}{dt^2} + \frac{g}{l}\sin{\theta} = 0$$ has the very perdy phase portrait Meanwhile, a domain coloring of $\sin(z)$ in the complex plane is Why are these so similar? REPLY [4 votes]: The equations of the phase curves in the phase portrait of the simple plane pendulum actually correspond to different energy conservation relations: $$ \dot{\theta}^2 - \frac{g}{l}\cos(\theta) = C_0 $$ And in the colored graph of $\sin(z)$ in the complex plane the lines are the lines of constant magnitude: $$ \|\sin(x+yi)\|^2 = C $$ which can be transformed into another form by the steps below $$ \begin{align} \|\sin(x)\cosh(y) + i\cos(x)\sinh(y)\|^2 &= C \\ \sin(x)^2\cosh(y)^2 + \cos(x)^2\sinh(y)^2 &= C \\ (\sin(x)^2 + \cos(x)^2)\frac{e^{2y}+e^{-2y}}{2} + \sin(x)^2-\cos(x)^2 &= C \\ \frac{e^{2y}+e^{-2y}}{2} -\cos(2x) &= C \end{align} $$ when $y$ is not far from $0$, $\frac{e^{2y}+e^{-2y}}{2} \approx 4y^2 = (2y)^2$,so if we replace $(x,y)$ by $(u,v)$ with $u=2x, \, v=2y$, then the equation becomes $$ v^2 -\cos(u) = C. $$ I think this is why the two plots look so similar. When $y$ goes far from $0$, their forms may no longer be such similar.<|endoftext|> TITLE: Karatsuba multiplication with integers of size 3 QUESTION [8 upvotes]: I understand how to apply Karatsuba multiplication in 2 digit integers. $$\begin{array} \quad & \quad & x & y \\ \times & \quad & z & w \\ \hline \quad &?&?&? \end{array}$$ $$\begin{align} i & = zx \\ ii & = wy \\ iii & = (x+y)(w+z) \end{align}$$ and the result is $$i \cdot 10^2n + [(iii - ii - i) \cdot 10^n] + ii$$ So then my question is, how do I apply this in 3 digit integers? i.e., this is the problem: I have 2 three digit numbers and I want to split this into 5 multiplications and they're of size $n/3$. I'm pretty sure I have to use Karatsuba multiplication, but I'm not entirely sure... I'm pretty new to here so my phrasing might be very poor. I hope I conveyed myself well enough. Thank you for any help. REPLY [11 votes]: Well I'm also pretty new to multiplication and I've stumbled on your question after looking for multiplication algorithm over solving multiplication of two 3-digit numbers with only 5 multiplications. So... First thing - with Karatsuba you probably can't do this. 
Deriving from 2-digit version of algorithm: $$ (10a + b) (10x + y) = 100ax + by + 10\Big( (a+b)(x+y) - ax - by\Big) $$ and applying it recursively we'll get: $$ \Big( \; 10 (10a + b) + c \; \Big) \cdot \Big( \; 10 (10x + y) + z \; \Big) = 100 (10a+b)(10x+y) \;+\; cz \;+\; 10\Big( (10a+b+c)(10x+y+z) \; - \; (10a+b)(10x+y) \; - \; cz \Big) = 100 \Bigg( 100ax + by + 10 \Big( (a+b)(x+y) - ax - by \Big) \Bigg) \;+\; cz \;+\; 10\Bigg\{ 100ax \;+\; (b+c)(y+z) \;+\; 10\bigg[ \; (a+b+c)(x+y+z) \;-\; ax \;-\; (b+c)(y+z) \; \bigg] \;-\; \bigg[ \; 100ax \;+\; by \;+\; 10\Big( (a+b)(x+y) - ax - by \Big) \bigg] \;-\; cz \Bigg\} \;\;\;\; = \;\;\;\;10^4ax +10^2by +10^3(a+b)(x+y)-10^3ax-10^3by+cz+10^3ax+10(b+c)(y+z)+10^2(a+b+c)(x+y+z)-10^2ax-10^2(b+c)(y+z)-10^3ax-10by-10^2(a+b)(x+y)+10^2ax+10^2by-10cz \;\;\;\; = \;\;\;\; 10^4ax +10^3\Big( (a+b)(x+y)-ax-by \Big) + 10^2\Big( (a+b+c)(x+y+z)-(b+c)(y+z)-(a+b)(x+y)+2by\Big)+10\Big((b+c)(y+z)-by-cz\Big)+cz $$ So it reduces number of necessary multiplications to 6: ax by cz (a+b)(x+y) (b+c)(y+z) (a+b+c)(x+y+z) But it should be possible to do this with Toom-Cook multiplication algorithm. What is idea behind that? Simply - instead of doing calculations we treat each set of numbers as polynomial: $$ p(t) = at^2 + bt + c $$ $$ q(t) = xt^2 + yt + z $$ And we are looking for polynomial: $$ w(t) = p(q) \cdot q(t) = w_4 t^4 + w_3 t^3 + w_2 t^2 + w_1 t + w_0 $$ As it can be easily seen each coefficient $w_i$ of our $w(t)$ polynomial is one of products from multiplication. To avoid multiplication of polynomials (which would kill the case) we will try to calculate coefficients of $w(t)$ by using interpolation on our $p(t)$ and $q(t)$ polynomials. To do so we need some interpolation points. For 5 variables ($w_0,w_1,w_2,w_3,w_4$) we need 5 points. Let those be (as in wikipedia article) $t \in \{-2,-1,0,1,\infty\} $. Points of polynomial evaluation may be whatever you wish. But it is common to chose $0$ and $\infty$ among them. In case of polynomials value at $\infty$ doesn't really make sense so instead of this we are using $$ p(\infty) = \lim_{t \to \infty} {{p(t)} \over {t^{deg(p)}}} $$ And thus it is always coefficient of highest power (like $w_4$ in case of $w(t)$) Having this explained let's try it: $$ \begin{array}[b]{l} t=0 &&p(0) = c \\ &&q(0) = z \\ t=1 &&p(1) = a +b +c \\ &&q(1) = x +y +z \\ t=-1 &&p(-1) = a -b +c \\ &&q(-1) = x -y +z \\ t=-2 &&p(-2) = 4a-2b +c \\ &&q(-2) = 4x-2y +z \\ t=\infty &&p(\infty) = a \\ &&q(\infty) = x \end{array} $$ Which leads us to: $$ \begin{array}[b]{l} w(0) &&= w_0 &&= cz \\ w(1) &&= w_4 +w_3+w_2+w_1+w_0 &&= (a+b+c)(x+y+z) \\ w(-1) &&= w_4 -w_3+w_2-w_1+w_0 &&= (a-b+c)(x-y+z) \\ w(-2) &&= 16w_4-8w_3+4w_2-2w_1+w_0 &&= (4a-2b+c)(4x-2y+z)\\ w(\infty) &&= w_4 &&= ax \end{array} $$ So we see that there is only needed 5 multiplications of those specific sums and we're almost there. 
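Before the matrix step below, here is a small numerical sanity check of those five evaluations (an illustrative sketch added here, not part of the original derivation; the digit values are arbitrary). It recovers the ordinary product of the two 3-digit numbers from nothing but the five products listed above:

import numpy as np

# hypothetical digits, chosen only for the check
a, b, c = 3, 1, 4
x, y, z = 1, 5, 9

v_at_0   = c * z                               # w(0)
v_at_1   = (a + b + c) * (x + y + z)           # w(1)
v_at_m1  = (a - b + c) * (x - y + z)           # w(-1)
v_at_m2  = (4*a - 2*b + c) * (4*x - 2*y + z)   # w(-2)
v_at_inf = a * x                               # w(infinity), the leading coefficient

# Evaluation matrix for w(t) = w0 + w1*t + w2*t^2 + w3*t^3 + w4*t^4 at
# t = 0, 1, -1, -2, infinity; note the row for t = -2 is (1, -2, 4, -8, 16)
# when the unknowns are ordered w0, ..., w4.
M = np.array([[1,  0, 0,  0,  0],
              [1,  1, 1,  1,  1],
              [1, -1, 1, -1,  1],
              [1, -2, 4, -8, 16],
              [0,  0, 0,  0,  1]], dtype=float)
coeffs = np.linalg.solve(M, np.array([v_at_0, v_at_1, v_at_m1, v_at_m2, v_at_inf], dtype=float))

# Evaluating w(10) must reproduce the usual product 314 * 159
print(int(round(sum(wi * 10**i for i, wi in enumerate(coeffs)))))   # 49926
print((100*a + 10*b + c) * (100*x + 10*y + z))                      # 49926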
To speed things up let's use matrix representation of equations above: $$ \begin{pmatrix} w(0) \\ w(1) \\ w(-1)\\ w(-2)\\ w(\infty) \end{pmatrix} = \begin{pmatrix} 1 && 0 && 0 && 0 && 0 \\ 1 && 1 && 1 && 1 && 1 \\ 1 && -1 && 1 && -1 && 1 \\ 16 && -8 && 4 && -2 && 1 \\ 0 && 0 && 0 && 0 && 1 \end{pmatrix} \cdot \begin{pmatrix} w_0 \\ w_1 \\ w_2 \\ w_3 \\ w_4 \end{pmatrix} $$ Using inverse of this matrix (taken directly from Wikipedia): $$ \begin{pmatrix} w_0 \\ w_1 \\ w_2 \\ w_3 \\ w_4 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 1/2 & 1/3 & -1 & 1/6 & -2 \\ -1 & 1/2 & 1/2 & 0 & -1 \\ -1/2 & 1/6 & 1/2 & -1/6 & 2 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} w(0) \\ w(1) \\ w(-1) \\ w(-2) \\ w(\infty) \end{pmatrix} $$ Which leads to: $$ \begin{array}[b]{l} w_0 = cz \\ w_1 = \frac {w(0)} 2 + \frac {w(1)} 3 - w(-1) + \frac {w(-2)} 6 -2 w(\infty) \\ w_2 = -w(0) + \frac {w(1)} 2 + \frac {w(-1)} 2 - w(\infty) \\ w_3 = -\frac{w(0)} 2 + \frac{w(1)} 6 + \frac{w(-1)} 2 - \frac{w(-2)} 6 + 2 w(\infty) \\ w_4 = ax \end{array} $$ And finally: $$ \begin{array}[b]{l} w_0 = cz \\ w_1 = \frac {cz} 2 + \frac {(a+b+c)(x+y+z)} 3 - (a-b+c)(x-y+z) + \frac {(4a-2b+c)(4x-2y+z)} 6 -2 ax \\ w_2 = -cz + \frac {(a+b+c)(x+y+z)} 2 + \frac {(a-b+c)(x-y+z)} 2 - ax \\ w_3 = -\frac{cz} 2 + \frac{(a+b+c)(x+y+z)} 6 + \frac{(a-b+c)(x-y+z)} 2 - \frac{(4a-2b+c)(4x-2y+z)} 6 + 2 ax \\ w_4 = ax \end{array} $$ Edit: For those looking to use those equations as support for some programming (as myself - I needed this for my own 96bit integer class): Take overflow into account - especially with addition as it may be easily overlooked but will happen - especially when adding 3 segments with multipliers like 16 etc. Quite important cost (algorithmically speaking) is division here - although it will be "exact division" (see in GMP math library) so it means that it will always come up without any reminder (so it's cheaper) and it can be precalculated (either by compiler or you in assembly or whatever) You may try to experiment with different interpolation points to get "more fitting" division/multiplication numbers - although going any further than 8/-8 seems impractical - you will fast hit overflow limits or hurt precision (depends if you are going for either integer or floating point numbers) PS. Please point me out any mistakes I've made - both in english as well in equations<|endoftext|> TITLE: Fourier transform of the Cantor function QUESTION [9 upvotes]: Let $f:[0,1] \to [0,1]$ be the Cantor function. Extend $f$ to all of $\mathbb R$ by setting $f(x)=0$ on $\mathbb R \setminus [0,1]$. Calculate the Fourier transform of $f$ $$ \hat f(x)= \int f(t) e^{-ixt} dt $$ where $dt$ is the Lebesgue measure on $\mathbb R$ divided by $2\pi$, and the integral is over $\mathbb R$. I think this MO post says the result is $$ \hat f (x)= \frac{1}{ix}-\frac{1}{ix}e^{ix/2}\prod_{k=1}^{\infty} \cos(x/3^k). \tag{1} $$ To get this, I approximate $f$ by simple function $$ f_n(x)= \sum_{i=1}^n \sum_{j=1}^{2^{i-1}} \frac{2j-1}{2^i}\chi_{E_{n,k}} $$ where $E_{n,k}$ is the $k$th set removed during the $n$th stage of the Cantor process. Then $$ \hat f_n(x) = \sum_{i=1}^n \sum_{j=1}^{2^{i-1}} \frac{2j-1}{2^i}\int_{E_{n,k}} e^{-ixt} dt \tag{2} $$ But I don't see how, in the limit, (2) simplifies to (1). REPLY [3 votes]: Let $\mu$ be the standard Cantor measure on the interval $(-1, 1)$. 
If we set $\mu(x)=\mu((-\infty, x))$, considering the self-similarity of $\mu$ on the first level, we easily obtain $$ \mu(x)=\frac{1}{2}\Big(\mu(3x+2)+\mu(3x-2)\Big). $$ Hence $$(\mathcal F\mu)(3t)=\int \exp(3itx)\,d\mu(x) =\frac{1}{2}\left(\int\exp(3itx)\,d\mu(3x+2)+\int\exp(3itx)\,d\mu(3x-2)\right) =\frac{1}{2}\left(\int\exp(it(y-2))\,d\mu(y) +\int\exp(it(y+2))\,d\mu(y)\right) =\frac{1}{2}\Big(\exp(-2it)+\exp(2it)\Big)\times\int\exp(ity)\,d\mu(y) =\cos 2t\cdot(\mathcal F\mu)(t)$$ and $$ (\mathcal F\mu)(t) =\cos\frac{2t}{3}\times(\mathcal F\mu)\left(\frac{t}{3}\right) =\cos\frac{2t}{3}\cdot\cos\frac{2t}{9}\times(\mathcal F\mu)\left(\frac{t}{9}\right)=\dots =\prod_{n=1}^\infty \cos\frac{2t}{3^n}, $$ because the function $\mathcal F\mu$ is continuous at the origin and $(\mathcal F\mu)(0)=1$.<|endoftext|> TITLE: Irreducible polynomials of real algebraic numbers QUESTION [6 upvotes]: Suppose $\alpha$ is a real algebraic number with the property that its irreducible polynomial over $\mathbb{Q}$ is not a binomial, i.e., it is not of the form $x^n-q$ for some $n\geq 1$ and $q\in\mathbb{Q}$. True or false: $\alpha^k\not\in\mathbb{Q}$ for all $k\geq 1$. Example: $\sqrt{2}+\sqrt{3}$ has irreducible polynomial $x^4-10x^2+1$, and no integer power of $\sqrt{2}+\sqrt{3}$ is rational. Non-example when "real" assumption is dropped: $1+i$ has irreducible polynomial $x^2-2x+2$, but $(1+i)^4=-4\in\mathbb{Q}$. REPLY [3 votes]: Let $\alpha$ be a real algebraic number. So that there exists some $k$ such that $\alpha^k \in \mathbb Q$. Let $t$ be the smallest positive number such that $\alpha^t \in \mathbb Q$. We claim that $$p(x)=x^t-\alpha^t$$ is irreducible over $\mathbb Q$. Suppose not, then we know that $$p(x)=\prod_{i=1}^t (x-\alpha \zeta^i)$$ where $\zeta$ is a $t$-root of unity. Now if $p(x)$ is not irreducible we have that for some $a_1,\dots,a_n$ that $$\prod_{i=1}^n \alpha \zeta^{a_i}=\alpha^n \prod_{i=1}^n \zeta^{a_i} \in \mathbb Q.$$ Since $\alpha$ is a real algebraic number this can only happen if $$\beta=\prod_{i=1}^n \zeta^{a_i}$$ is also real. Since $\beta$ is a $t$-root of unity, $\beta$ is real if and only if $\beta= \pm 1$. Thereby $\alpha^n \in \mathbb Q$, but $t$ was the smallest such that this happened so $p(x)$ is irreducible. This proves the contrapositive of your statement.<|endoftext|> TITLE: Examples for subspace of a normal space which is not normal QUESTION [22 upvotes]: Are there any simple examples of subspaces of a normal space which are not normal? I know closed subspace of a normal space is normal, but open subspace in most cases which I can think of are also normal. REPLY [7 votes]: Let, $X= \{a,b,c,d\}$ And $T= \{ \emptyset ,X,\{d\} , \{b,d\} ,\{ c,d\},\{b,c,d\}\}$ Then $(X,T)$ is a topological space. Since $(X,T)$ has no pair of disjoint non-empty closed sets, $(X,T)$ is a normal space. Consider ,$Y=\{b,c,d\}$ of $X$. Then $T(Y)= \{\emptyset,Y, \{d\} , \{b,d\} ,\{ c,d\}\}$ Then $\{b\}$ and $\{c\}$ are disjoint closed sets in $(Y,T(Y))$ and they cannot be separated in $(Y,T(Y))$. Hence ,$(Y,T(Y))$ is not a normal space.<|endoftext|> TITLE: Expected number of times random substring occurs inside of larger random string QUESTION [7 upvotes]: I have a four-letter alphabet containing A, B, C, and D. What is the expected number of times a string of length $m$ occurs inside of larger random substring of length $n$, both generated from the same alphabet? 
I think I've got it so far for an even distribution, where each letter has a probability of $0.25$: $$(n-m)\cdot\left(\frac 1 4\right)^m$$ What if the letters are not evenly distributed? What if A and B had probabilities of $0.115$, and C and D had probabilities of $0.385$? How does that change the problem? REPLY [5 votes]: This is too long for a comment to the answer by @joriki, but it's intended to fill in some details for those commenters who questioned whether the sum of squared probabilities is correct (it is). For an alphabet of $k$ letters $\{a_1\ldots a_k\}$, we're given two random words on that alphabet: $X_1\ldots X_m$ and $Y_1\ldots Y_n$, $m \le n$ ...<|endoftext|> TITLE: Rudin Series ratio and root test. QUESTION [6 upvotes]: In Rudin's Principles of Mathematical Analysis he says consider the following series $$\frac 12 + \frac 13 + \frac 1{2^2} + \frac 1{3^2} + \frac 1{2^3} + \frac 1{3^3} + \frac 1{2^4} + \frac 1{3^4} + \cdots$$ for which $$\liminf \limits_{n \to \infty} \dfrac{a_{n+1}}{a_n} = \lim \limits_{n \to \infty} \left( \dfrac {2}{3} \right)^n =0, $$ $$\liminf \limits_{n \to \infty} \sqrt[n]{a_n} = \lim \limits_{n \to \infty} \sqrt[2n]{\dfrac{1}{3^n}} = \dfrac{1}{\sqrt{3}}, $$ $$\limsup \limits_{n \to \infty} \sqrt[n]{a_n} = \lim \limits_{n \to \infty} \sqrt[2n]{\dfrac{1}{2^n}} = \dfrac{1}{\sqrt{2}}, $$ $$\limsup \limits_{n \to \infty} \dfrac{a_{n+1}}{a_n} = \lim \limits_{n \to \infty} \dfrac 12\left( \dfrac {3}{2} \right)^n =+\infty, $$ The root test indicates convergence; the ratio test does not apply. In the book he defines the root and ratio tests in terms of the lim sup. I am not exactly sure how he goes from the lim sup to the lim, and also why there is a $2n$ (which I assume comes from the even terms of the sequence) in the root test. Also, why is he checking the lim inf? I believe that my understanding of lim sups and lim infs is not well developed or I would probably understand what's going on. Also, how does he get the terms that he is taking the limit of? A nudge in the right direction to figure this out would be much appreciated. Thank you!! REPLY [7 votes]: For definiteness, call the terms of our sequence $a_1,a_2,a_3,\dots$. A similar analysis with minor differences of detail can be made if we call the first term of our sequence $a_0$. Note that for $n=1,2,3,\dots$ we have $a_{2n-1}=\dfrac{1}{2^n}$ and $a_{2n}=\dfrac{1}{3^n}$. The $k$-th root of the $k$-th term is "small" when the $k$-th term is a power of $\dfrac{1}{3}$. The $k$-th root of the $k$-th term is "large" when the $k$-th term is a power of $\dfrac{1}{2}$. More precisely, $\liminf \sqrt[k]{a_k}=\lim \sqrt[2n]{\dfrac{1}{3^n}}=\dfrac{1}{\sqrt{3}}$. For even $k$ the $k$-th root is constant. Also, $\limsup\sqrt[k]{a_k}=\lim\sqrt[2n-1]{\dfrac{1}{2^n}}$. But $$\sqrt[2n-1]{\dfrac{1}{2^n}}=\left(\frac{1}{2^n}\right)^{1/(2n-1)}=\left(\frac{1}{2^n}\right)^{2n/(2n(2n-1))}=\left(\frac{1}{\sqrt{2}}\right)^{2n/(2n-1)}.$$ The expression on the right has limit $\dfrac{1}{\sqrt{2}}$. That takes care of one of the gaps. For the Ratio Test, we are interested in the behaviour of $\left|\dfrac{a_{k+1}}{a_k}\right|$. Let $k$ be odd, say $k=2n-1$. Then $a_k=\dfrac{1}{2^n}$. And $a_{k+1}=a_{2n}=\dfrac{1}{3^n}$. It follows that $$\frac{a_{k+1}}{a_k}=\frac{a_{2n}}{a_{2n-1}}=\left(\frac{2}{3}\right)^n.$$ This has very pleasant behaviour for large $n$, indeed for any $n$: it is safely under $1$, indeed has limit $0$. Now let $k$ be even, say $k=2n$. Then $a_k=\dfrac{1}{3^n}$, and $k+1=2n+1$. The $(2n+1)$-th term of our sequence is $\dfrac{1}{2^{n+1}}$.
It follows that in the case $k=2n$ we have $$\frac{a_{k+1}}{a_k}=\frac{a_{2n+1}}{a_{2n}}=\frac{\frac{1}{2^{n+1}}}{\frac{1}{3^n}}=\frac{1}{2}\left(\frac{3}{2}\right)^n.$$ This unfortunately behaves badly for large $n$: we would like it to be safely under $1$, and it is very much over. The limit of the ratios $\dfrac{a_{k+1}}{a_k}$ does not exist. The ratios do not (uniformly) blow up, since for $k$ odd, the ratios approach $0$. The ratio behaves very nicely at odd $k$, and very badly at even $k$. So the Ratio Test is inconclusive. The bad behaviour prevents us from concluding convergence. But the good behaviour prevents us from concluding divergence.<|endoftext|> TITLE: Constructing a bijection from (0,1) to the irrationals in (0,1) QUESTION [18 upvotes]: How does one construct a bijection from (0,1) to the irrationals in (0,1)? Or if I am getting my notation right, can you provide an explicit function $f:(0,1)\rightarrow(0,1)\backslash\mathbb{Q}$ such that $f$ is a bijection? REPLY [20 votes]: (1) Choose an infinite countable set of irrational numbers in $(0,1)$, call them $(r_n)_{n\geqslant0}$. (2) Enumerate the rational numbers in $(0,1)$ as $(q_n)_{n\geqslant0}$. (3) Define $f$ by $f(q_n)=r_{2n+1}$ for every $n\geqslant0$, $f(r_n)=r_{2n}$ for every $n\geqslant0$, and $f(x)=x$ for every irrational number $x$ which does not appear in the sequence $(r_n)_{n\geqslant0}$. Let me suggest you take it from here and show that $f$ is a bijection between $(0,1)$ and $(0,1)\setminus\mathbb Q$.<|endoftext|> TITLE: Representation of a linear functional in vector space QUESTION [11 upvotes]: In the book Functional Analysis, Sobolev Spaces and Partial Differential Equations of Haim Brezis we have the following lemma: Lemma. Let $X$ be a vector space and let $\varphi, \varphi_1, \varphi_2, \ldots, \varphi_k$ be $(k + 1)$ linear functionals on $X$ such that $$ [\varphi_i(v) = 0 \quad \forall\; i \in \{1, 2, \ldots , k\}] \Rightarrow [\varphi(v) = 0]. $$ Then there exist constants $\lambda_1, \lambda_2, \ldots, \lambda_k\in\mathbb{R}$ such that $\varphi=\lambda_1\varphi_1+\lambda_2\varphi_2+\ldots+\lambda_k\varphi_k$. In this book, the author used separation theorem to prove this lemma. I would like ask whether we can use only knowledge of linear algebra to prove this lemma. Thank you for all helping. REPLY [2 votes]: Here is a repackaging of linearalgebraist's answer: Let $$ L: X\to\mathbb{R}^k\\ Lx:=(\phi_1(x),\dots, \phi_k(x)) $$ Then $$ {L}^*: (\mathbb{R}^k)^* \to X^* \\ {L}^*f:=f\circ {L}= f_1\phi_1+\dots+f_k\phi_k , $$ so the conclusion you seek is equivalent to the general algebraic fact that $$ \text{im} L^* = (\ker L)^{\bot}:=\{\phi \in X^* : \phi (x) = 0\; \forall x \in \ker L\} $$ More intuitively, $L$ is injective on $X/\ker L$, which means that knowing $Lx$ amounts to knowing $x$ up to an element of $\ker L$. Hence, knowing $Lx$ implies knowing $\phi(x)$, because the element of $\ker L$ doesn't affect that. Clearly, all of this "knowing things" is linear, so you can write $\phi(x)$ as a linear combination of $\phi_1(x),\dots,\phi_k(x)$.<|endoftext|> TITLE: Integrating $\frac{x dx}{\sin x+\cos x}$ QUESTION [8 upvotes]: I am trying to carry out this integration but I seem to be going wrong: $$I=\int_{0}^{\frac{\pi}{2}}\frac{x dx}{\sin x+\cos x}=\int_{0}^{\frac{\pi}{2}}\frac{(\frac{\pi}{2}-x) dx}{\sin(\frac{\pi}{2}-x)+\cos (\frac{\pi}{2}-x)} \implies 2I=\frac{\pi}{2}\int_{0}^{\frac{\pi}{2}}\frac{ dx}{\sin x+\cos x}$$ I am not able to proceed from here. 
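A quick numerical cross-check of that symmetry step (an added sketch, assuming SciPy; not part of the original question): both quantities agree, and they also match the closed form $\frac{\pi\sqrt{2}}{4}\ln(1+\sqrt{2})$ that the substitution in the reply below leads to.

import numpy as np
from scipy.integrate import quad

f = lambda x: x / (np.sin(x) + np.cos(x))
g = lambda x: 1.0 / (np.sin(x) + np.cos(x))

I, _ = quad(f, 0.0, np.pi / 2)
J, _ = quad(g, 0.0, np.pi / 2)

print(I)                       # ~0.979
print(np.pi / 4 * J)           # same value, i.e. 2*I = (pi/2)*J
print(np.pi * np.sqrt(2) / 4 * np.log(1 + np.sqrt(2)))  # ~0.979, the closed form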
REPLY [4 votes]: $$\dfrac1{\sin(x) + \cos(x)} = \dfrac1{\sqrt{2}} \left(\dfrac1{\dfrac1{\sqrt{2}}\sin(x) + \dfrac1{\sqrt{2}}\cos(x)} \right) = \dfrac1{\sqrt{2}} \sec(x - \pi/4)$$ Now integrate, $$\int_0^{\pi/2} \dfrac{dx}{\sin(x) + \cos(x)} = \dfrac1{\sqrt{2}}\int_0^{\pi/2} \sec(x-\pi/4)dx = \dfrac1{\sqrt{2}}\int_{-\pi/4}^{\pi/4} \sec(x)dx$$ and finish it off.<|endoftext|> TITLE: Can someone explain this non-noetherian subring example? QUESTION [6 upvotes]: I'm trying to find an example showing that subrings of noetherian rings are not necessarily noetherian. So I just searched the net and this site, and came upon this: "A common example showing that a subring of a Noetherian ring is not necessarily Noetherian is to take a polynomial ring over a field $k$ in infinitely many indeterminates, $k[x_1,x_2,\dots]$. The quotient field is then Noetherian obviously, but the subring $k[x_1,x_2,\dots]$ is not since there is an infinite ascending chain of ideals which never stabilizes." I haven't had the chance to read my textbook the past week so I'm a bit behind on my understanding of noetherian rings. I understand that they are rings in which every chain of its ideals stabilizes. I also understand that a polynomial ring $R[x_1, x_2, ...]$ of noetherian ring $R$ is also noetherian. So what I don't get in the example is when they say polynomial ring, do they mean a ring that has infinite indeterminates or it could just be any polynomial. Another question is what are quotient fields and why is it obviously noetherian? REPLY [7 votes]: $k[x_1,x_2,\dotsc]$ is not noetherian since the chain of ideals $(x_1) \subseteq (x_1,x_2) \subseteq \dotsc$ is not stationary. Fields are noetherian since there are only two ideals. Thus a counterexample is $k[x_1,x_2,\dotsc] \subseteq k(x_1,x_2,\dotsc)$. But you can do this with any other non-noetherian integral domain, such as the ring of holomorphic functions on a region. By the way, Hilbert's Basis Theorem says that if $R$ is a noetherian ring and $n \in \mathbb{N}$, then $R[x_1,\dotsc,x_n]$ is noetherian. We don't need this here, but this shows that we really need infinitely many indeterminates in order to get a non-noetherian polynomial ring.<|endoftext|> TITLE: Is a bounded and continuous function uniformly continuous? QUESTION [24 upvotes]: $f\colon(-1,1)\rightarrow \mathbb{R}$ is bounded and continuous does it mean that $f$ is uniformly continuous? Well, $f(x)=x\sin(1/x)$ does the job for counterexample? Please help! REPLY [4 votes]: $\sin(x^2)$ is also a nice example and it's happening because it's not periodic.<|endoftext|> TITLE: Integrating $\frac{\log(1+x)}{1+x^2}$ QUESTION [5 upvotes]: Possible Duplicate: Evaluate the integral: $\int_{0}^{1} \frac{\ln(x+1)}{x^2+1} dx$ I am a bit stuck here in evaluating the following integral:$$\int_{0}^{1}\frac{\log(1+x)}{1+x^2}\,\mathrm dx$$.Your help is appreciated. REPLY [17 votes]: Put $x=\tan t$ so that $x=1$ corresponds to $t=\frac \pi 4$, and $x=0$ to $t=0$. 
Hence, $$\int_{0}^{1}\frac{\log(1+x)}{1+x^2}dx=\int_{0}^{\frac \pi 4}\frac{\log(1+\tan t)}{(1+\tan^2t)}\sec^2t dt=\int_{0}^{\frac \pi 4}\log(1+\tan t)dt$$ Let $I=\int_{0}^{\frac \pi 4}\log(1+\tan t)dt$, then: $I=\int_{0}^{\frac \pi 4}\log(1+\tan(\frac \pi 4- t))dt$ using $\int_{a}^{b}f(x)dx=\int_{a}^{b}f(a+b-x)dx$ (Proof) $I=\int_{0}^{\frac \pi 4}\log\left(1+\frac{1-\tan t}{1+\tan t}\right)dt$ $I=\int_{0}^{\frac \pi 4}\log\left(\frac 2{1+\tan t}\right)dt$ $I=\log 2\int_{0}^{\frac \pi 4}dt-\int_{0}^{\frac \pi 4}\log\left(1+\tan t\right)dt$ $I=\log 2(\frac \pi 4-0)-I$ Hence $2I=\frac \pi 4 \log 2$, i.e. $I=\frac \pi 8 \log 2$. REPLY [7 votes]: A thought: Notice $$\int\limits_0^1 \frac{\log(1 + x)dx}{1 + x^2} = \int\limits_0^1 \frac{\log(1 + x)dx}{1 + 2x -2x + x^2} = \int\limits_0^1 \frac{\log(1 + x)dx}{(x+1)^2 - 2x } = \int\limits_0^1 \frac{\log(1 + x)dx}{(x+1)^2 - 2(x+1) + 2}$$ Now put $u = x + 1$ and so you'll have $$\int\limits_1^2 \frac{\log(u)du}{u^2 - 2u + 2} = \int\limits_1^2 \frac{\log(u)du}{(u-1)^2 + 1} = \int\limits_1^2 \log(u)d[\arctan(u-1)]$$ Now use integration by parts. REPLY [5 votes]: A related problem. You can get the answer in terms of the dilogarithm function $Li_s(z)$ $$\frac{i}{2}{Li_{2}} \left( \frac{1}{2}+\frac{1}{2}\,i \right)-\frac{i}{2}{Li_2} \left( \frac{1}{2}-\frac{1}{2}\,i \right) +\frac{1}{4}\,\pi \,\ln \left( 2 \right) -{\it Catalan}= 0.2721982614 \,,$$ where $\mathrm{Catalan}= 0.9159655942$. See here for the technique. To use the mentioned method, first, use the change of variables $t=x+1$: $$ \int_{0}^{1}\frac{\log(1+x)}{1+x^2}\,\mathrm dx= \int_{1}^{2} \frac{\ln(t)\,dt}{(t-(1+i))(t-(1-i))} \,.$$ Note that this is a general technique which can handle more general integrals.<|endoftext|> TITLE: Faster way to compute the integral QUESTION [5 upvotes]: Here it is: $$\int_0^\pi x\cos^4x\,dx$$ I used integration by parts but I would be grateful if someone told me an alternate method to compute the integral faster. REPLY [6 votes]: The well known formula $$\int_0^\pi xf(\sin x )dx=\frac \pi2\int_0^\pi f(\sin x )dx$$ yields $$I=\int_{0}^{\pi} x\cos^4x\,dx=\frac \pi2\int_0^\pi \cos^4x\,dx=\pi\int_0^{\pi/2} \cos^4x\,dx$$ then let $x=\pi/2-y$ to obtain $$I=\pi\int_0^{\pi/2} \cos^4x\,dx=\pi\int_0^{\pi/2} \sin^4x\,dx$$ $$2I=\pi\int_0^{\pi/2} (\sin^4x+\cos^4x)dx=\frac \pi4\int_0^{\pi/2} (\cos(4 x)+3) dx= \frac{3 \pi^2}{8}$$ $$I= \frac{3 \pi^2}{16}.$$ Chris<|endoftext|> TITLE: Small Question on the Tor functor QUESTION [7 upvotes]: Suppose that I have an $A$-module $N$ with $A$ commutative and I take a projective resolution of $N$: $$\ldots \rightarrow P_2 \rightarrow P_1 \rightarrow P_0 \rightarrow N \rightarrow 0.$$ Suppose $M$ is some other $A$-module. Now why is it the case that $$\ldots \rightarrow P_2 \otimes_A M \rightarrow P_1 \otimes_A M \rightarrow P_0 \otimes_A M \rightarrow N \otimes_A M \rightarrow 0 $$ is not exact? I know that the tensor product is not in general left exact. However if the projective resolution is an infinite one then there is no "left" so why should the sequence above not be exact? There has to be some problem with my understanding for then we always have $\textrm{Tor}_i^A(M,N) = 0$ for all $i$. REPLY [6 votes]: Example: $A = k[x]/x^2$, $k$ some field, $M = N = k$, the trivial module. A resolution of $N$ is given by $$ \cdots \to A \to A \to A \to A \to k $$ where the maps $A\to A$ are multiplication by $x$, and the map $A \to k$ sends $1$ to $1$ and $x$ to $0$. I claim all the maps $A\otimes _A k \to A \otimes _A k$ are zero.
This is true because $1 \otimes _A 1 \mapsto x \otimes _A 1 = 1 \otimes _A x\cdot 1 = 1\otimes _A 0 = 0$. The new sequence fails to be exact since it consists of non-zero modules, but all of the maps (except the last) are zero.<|endoftext|> TITLE: Are coinductive proofs necessary? QUESTION [10 upvotes]: I've been exploring corecursion in Coq (specifically, infinite streams of natural numbers) lately, and so far any coinductive predicate I've constructed and its coinductive proof can be transformed into an inductive predicate and an inductive proof, together with a proof that either style of proof can be constructed from the other. Certainly the coinductive version is often easier to write, but is there anything that can be proven with coinduction that cannot be with induction? Maybe it is the case for infinite streams but there are other infinite objects where this is not true? As an example I have the coinductive predicate $\operatorname{increasing} : \forall s\,n,\; n < \operatorname{head} s \implies \operatorname{increasing} s \implies \operatorname{increasing} (n \operatorname{cons} s)$ and the inductive predicate $\operatorname{increasing'} : \forall s,\; (\forall n,\; s[n] < s[n+1]) \implies \operatorname{increasing'} s$ and a proof that $\forall s,\; \operatorname{increasing} s \iff \operatorname{increasing'} s$ Intuitively it seems like the coinductive principle says something like "you can never see a falsification of the predicate", which can be turned around to an inductive argument on the number of observations, or equivalently how much of the infinite object you force. If you'd like to see the Coq development for the increasing predicate it is at https://gist.github.com/3953040 REPLY [4 votes]: Inductive proofs can only directly prove properties of the set of finite initial subsequences of streams - they cannot directly prove anything about infinite streams. This is because the induction only "reaches" well-founded elements of the type constructor, while coinductive proofs reach all of these elements and the ill-founded elements, such as the infinite sequence 1,2,3,... That said, by appeal to the take lemma (two streams that agree on all initial subsequences of given length are the same), you can use inductive proofs to reason about all streams. But you cannot prove the take lemma inductively.<|endoftext|> TITLE: What is the difference between continuous derivative and derivative? QUESTION [20 upvotes]: What is the difference between a continuous derivative and a derivative? According to my teacher's solution to the assignment, it seems there exists a difference between having a continuous derivative and merely having a derivative. However, aunt Google does not tell me what I want. Edit: Here is an example. $$f(x) = \begin{cases} \frac{1-\cos 2x}{x} & \text{otherwise} \\ k & \text{if } x=0 \end{cases}$$ For a suitable choice of $k$, is $f$ continuous at $0$ but without a continuous derivative there? Thanks:) REPLY [5 votes]: A function needs to be continuous in order to be differentiable. However the derivative is just another function that might or might not itself be continuous, ergo differentiable.<|endoftext|> TITLE: Bezier curvature QUESTION [7 upvotes]: I'm trying to understand quadratic Bézier curves but I cannot get past one thing. Please, what is a "curvature" and how can I calculate it? I'm asking because I found for instance this and this. I also saw: $$\text{Curvature}\, = \,(P1x - P2x)(P3y - P2y) - (P3x - P2x)(P1y - P2y) $$ where $P1$, $P2$, $P3$ are points defining the curve. There is the problem: I don't see how one could arrive at such a formula.
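To convince myself that the quoted expression is at least tied to the curvature, here is a quick numerical experiment (a sketch with numpy; the standard quadratic Bézier parameterization $B(t)=(1-t)^2P_1+2t(1-t)P_2+t^2P_3$, the random points, and the helper names are my own assumptions). It suggests the expression is, up to a constant factor of $-4$, exactly the cross product $B'(t)\times B''(t)$ that appears in the numerator of the curvature, and that it is constant along the curve.

```python
import numpy as np

rng = np.random.default_rng(0)
P1, P2, P3 = rng.normal(size=(3, 2))   # three control points in the plane

def cross2(a, b):
    # z-component of the 2D cross product
    return a[0] * b[1] - a[1] * b[0]

# the expression quoted above
expr = (P1[0] - P2[0]) * (P3[1] - P2[1]) - (P3[0] - P2[0]) * (P1[1] - P2[1])

for t in np.linspace(0.0, 1.0, 5):
    B1 = 2 * (1 - t) * (P2 - P1) + 2 * t * (P3 - P2)   # B'(t)
    B2 = 2 * (P3 - 2 * P2 + P1)                         # B''(t)
    kappa = abs(cross2(B1, B2)) / np.linalg.norm(B1) ** 3   # curvature at t
    print(cross2(B1, B2), -4 * expr, kappa)   # the first two numbers agree for every t
```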
Could someone explain it to me? REPLY [7 votes]: The curvature for a parameterized curve $B(t) = (x(t), y(t))$ is given by [1] $$ \kappa(t) = \frac{\left|B'(t), B''(t)\right|}{|| B'(t)||^3}, $$ (Edit: fixed power in the denominator.) where the numerator is the determinant of the matrix formed by concatenating $B'(t)$ and $B''(t)$. Note that the curvature is a function of the parameter $t$; it is not necessarily constant over the curve. A quadratic Bezier curve is defined by the points $P_0$, $P_1$ and $P_2$ and is parameterized by [2] $$ B(t) = \left(1 - t\right)\left[\left(1 - t\right) P_0 + t P_1\right] + t \left[ \left(1 - t\right) P_1 + t P_2 \right], $$ with derivatives $$ B'(t) = 2\left(1 - t\right)\left(P_1 - P_0\right) + 2t\left(P_2 - P_1\right) $$ and $$ B''(t) = 2\left(P_2 - 2P_1 + P_0\right). $$ Substituting these into the expression for the curvature (using the bilinearity of the determinant operator and the fact that $\left|x,x\right|\equiv0$) yields the numerator $$\begin{align} n(t) &= \left|B'(t), B''(t)\right| \\ &= 4(1-t)\left|P_1-P_0, P_0 - 2P_1 + P_2\right| \\ &\quad+ 4t\left|P_2-P_1, P_0 - 2P_1 + P_2\right| \\ &= 4(1-t)\left|P_1-P_0, P_2-P_1\right| + 4t\left|P_2-P_1, P_0-P_1\right| \\ &= 4\left| P_1-P_0, P_2-P_1 \right|. \end{align}$$ The denominator is given by $$ m(t) = ||B'(t)||^3, $$ with $$\begin{align} ||B'(t)||^2 &= 4(1-t)^2 ||P_1 - P_0||^2 + 8t(1-t)(P_1 - P_0)\cdot(P_2 - P_1) + 4t^2||P_2 - P_1||^2. \end{align}$$ As I originally came here in search of the maximum curvature of a quadratic Bezier curve, I will also present that here, even if it is not strictly in the question. The maximum curvature is found at either (i) the maximum of the function $\kappa(t)$ or (ii) one of the endpoints of the curve if the maximum lies outside the range $(0,1)$. The maximum of the function $\kappa(t)$ corresponds to $\kappa'(t) = 0$, i.e. $$ \kappa'(t) = \frac{n'(t) m(t) - n(t) m'(t)}{m(t)^2}. $$ Given that the numerator $n(t)$ is a constant, finding zeros of $\kappa'(t)$ equates to finding zeros of $m'(t)$, which in turn reduces to finding zeros of the derivative of $||B'(t)||^2$. This is given by $$ \frac{\mathrm{d}}{\mathrm{d}t} ||B'(t)||^2 = 8(P_1 - P_0) \cdot (P_0 - 2P_1 + P_2) + 8t || P_0 - 2P_1 + P_2 ||^2, $$ which gives us the optimal parameter value $$ t^* = -\frac{(P_1 - P_0) \cdot (P_0 - 2P_1 + P_2)}{|| P_0 - 2P_1 + P_2 ||^2}. $$ Substituting this in the expression and some more algebra yields $$ \kappa(t^*) = \frac{||P_2 - 2P_1 + P_0||^3}{2\left|P_1 - P_0, P_2 - P_1\right|^2}. $$ Hope this helps (someone, somewhere, somewhat, someday) [1] https://en.wikipedia.org/wiki/Curvature [2] https://en.wikipedia.org/wiki/B%C3%A9zier_curve#Quadratic_B%C3%A9zier_curves<|endoftext|> TITLE: How to do well in higher level math classes QUESTION [9 upvotes]: $\quad$ Hello everybody, I have a bit of a problem and I know this question has been discussed before but I just want some insight on how to do well in higher-level math courses (There is a $\textbf{TL;DR}$ version of this at the bottom if you don't feel like reading a block of text). I am currently taking an introduction to topology course (we are using Munkres as the text) and I find myself doing rather poorly for now. I enjoy doing proofs more than computation but I still find my proof abilities stagnant. All of our assignments have literally been questions from the textbook and I often do not know how to even approach the proofs.
Sometimes I will write down the given information and maybe a definition or two but it still boggles my mind at times how to solve these questions. In terms of my background in math I have never taken an analysis course, I took a single-variable calculus course and a course called Advanced Calculus (which to be honest was actually very computational and very easy, the most theoretical idea we learned was how to do epsilon-delta proofs for multi-variable functions, but no euclidean topology, which is apparently kind of common for the course I took but for some reason when I took it they cut a lot of more theoretical stuff out). I have taken other more proof-heavy courses including a courses in linear algebra, group theory, ring and polynomial theory, number theory, and differential geometry (which was an odd course because the textbook questions were often quite theoretical but our midterms and exam was mostly computational, like finding the first and second fundamental form or calculating Gaussian curvature and stuff like that, so I ended up doing really well since I do well in courses that are just computation. So although topology is not my first foray into proof heavy courses this one is certainly my most difficult and the most rigorous course I have ever taken. I really don't know how to do well in this course, or at least I don't think I know how to do well. Often I just end up finding the solutions for questions cause I just get too frustrated (I do not do this typically when working on assignments though... I really hate the feeling of possibly cheating). So, I guess the question really is the approach to courses like this because I really do want to take more higher-level courses cause I find all this stuff super fascinating (I am a math major so I really am serious about this) but I just feel overwhelmed and stressed a lot of the time. Thank you for any answers. $\textbf{TL;DR}$ I am a math major who has continually struggled in more abstract and proof-intensive courses (specifically right now a first topology course that I am taking) and I would just like some advice on how to improve in proofs and just math in general! REPLY [8 votes]: I suspect your problem with topology is that, without the usual progression of sequences and series, real and complex analysis, metric spaces and then point-set topology, you've not had the chance to develop your intuition on how to go about attacking a problem in the simpler settings. For example, topological proofs tend to involve a lot of looking at the images/inverse images of sets, which can be a bit confusing, especially when you haven't seen similar proofs in the case of metric spaces. Just for the sake of building up a few pictures that you can use to try and get the idea of what's happening in a proof, it might be a good idea to learn some basic metric spaces - at least enough to understand the idea of where the definitions of an open set and continuous function which are then used in the general setting of topological spaces come from. When I was learning metric and topological spaces, I found Sutherland's book to be pretty useful. Not only is it a fairly cheap book that serves as a good introduction to metric spaces, but it will hopefully help you out with some topological ideas too. Once you've got a very basic idea of metric spaces and appreciate what's going on in the metric space proofs (i.e. 
choosing small $\epsilon$-balls), you'll be able to draw pictures to help you see what's going on, and see if you can generalise that to topological spaces (so replacing balls with open sets). Drawing pictures is in general a good way to get the hang of definitions and theorems in topology. Even if it's completely unrigorous, sitting and thinking about what a definition is saying and trying to draw a picture is always a good way to build up an intuition for what it's actually saying. As far as general proofs go, you certainly get better with practice. A lot of the people I know that find proofs difficult aren't so much struggling with the maths behind it, but rather struggling with the idea of what exactly a proof involves, especially as simpler metric spaces / topology proofs are often just combining and rewriting definitions. Being clear in how you set out the start of your proof is always a good idea. You want to show that a set of assumptions leads to a conclusion, so set it out along the lines of: Suppose [assumption 1], [assumption 2]. That is -definition of what [assumption 1] actually means, i.e. if we have assumed a function is continuous, the inverse image of an open set in its image is open. -definition of what [assumption 2] actually means. Then we want to prove [conclusion], i.e. definition of what [conclusion] actually means. Quite often, when you've got the definitions spelled out in front of you, something will jump out as a good starting point. Obviously at this point, I can't keep giving general tips as not all proofs are the same. The one important thing is to always keep in mind what you're trying to prove, i.e. what your conclusion is. If you always know exactly where you want to go, then there's usually going to be an obvious next step from what you have in front of you. And as long as you're careful about writing out exactly what you mean at each step, you'll always have everything you know in front of you. Oh and while I'm at it, another book that I remember being useful when I was a first year undergraduate is this one. The content is rather basic, and none of the proofs are difficult at all, but it's good in that it explains what the proofs are doing very clearly, and often precedes an actual proof with a short discussion of how to choose the correct strategy for that proof.<|endoftext|> TITLE: Fourier transform of a Generalized Gaussian QUESTION [5 upvotes]: I've got a family of functions called Generalized Gaussians. They're given by: $f(x) = \exp(-ax^{2p})$, where $p \in \{1,2,3,\ldots\}$. Could anyone tell me how to find their Fourier transforms? REPLY [2 votes]: Here is a method: we define $g(t):=\int_{\mathbb R}e^{itx}e^{-x^{2p}}\mathrm dx$. Then, taking the derivative under the integral and integrating by parts, we derive the differential equation $$g^{(2p-1)}(t)=(-1)^p\frac t{2p}g(t).$$ The solutions of this equation are analytic, hence we can find a recurrence relation between the coefficients.<|endoftext|> TITLE: problem with proving a property of $n\choose k$ QUESTION [6 upvotes]: Today, at college, we had the following definition of $n\choose k$: For any set $S=\{a_1,\ldots,a_n\}$ containing $n$ elements, the number of distinct $k$-element subsets of it that can be formed is given by $n\choose k$. Now I've wanted to use the definition to prove ${n-1\choose k}+{n-1\choose k-1}={n\choose k}$ for all $1\leq k<n$ and $n>1$ as an exercise. I've tried to prove it by induction. So the step $n=2$ is easy. But I have difficulties with $n\rightarrow n+1$.
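For what it's worth, the identity itself certainly checks out numerically against the subset-counting definition before any induction is attempted (a quick sketch in Python; the helper function is my own):

```python
from itertools import combinations
from math import comb

def count_k_subsets(n, k):
    # count the k-element subsets of an n-element set directly from the definition
    return sum(1 for _ in combinations(range(n), k))

for n in range(2, 9):
    for k in range(1, n):
        assert count_k_subsets(n - 1, k) + count_k_subsets(n - 1, k - 1) == comb(n, k)
print("Pascal's rule holds for all tested n and k")
```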
I've tried to split the set $S=\{a_1,\ldots,a_n,a_{n+1}\}$ into subsets: 1) containing $a_{n+1}$ 2) not contaning $a_{n+1}$ The second one is easy, there are $n\choose k$ subsets. But what about the first one? And do you need anything more for the right side? I think the left side would be almost the same just n and k are different. Thanks guys! REPLY [3 votes]: The number of size-$k$ subsets of $\{a_1,\ldots,a_{n+1}\}$ that contain $a_{n+1}$ is the number of size-$(k-1)$ subsets of $\{a_1,\ldots,a_n\}$, thus it is $\binom{n}{k-1}$. The number of size-$k$ subsets of $\{a_1,\ldots,a_{n+1}\}$ that do not contain $a_{n+1}$ is the number of size-$k$ subsets of $\{a_1,\ldots,a_n\}$, thus it is $\binom{n}{k}$. Add those together to get the number of size-$k$ subsets of $\{a_1,\ldots,a_{n+1}\}$, which is $\binom{n+1}{k}$. That's not a proof by induction though: You're not using any induction hypothesis.<|endoftext|> TITLE: Is $n \sin n$ dense on the real line? QUESTION [64 upvotes]: Is $\{n \sin n | n \in \mathbb{N}\}$ dense on the real line? If so, is $\{n^p \sin n | n \in \mathbb{N}\}$ dense for all $p>0$? This seems much harder than showing that $\sin n$ is dense on [-1,1], which is easy to show. EDIT: This seems a bit harder than the following related problem, which might give some insight: When is $\{n^p [ \sqrt{2} n ] | n \in \mathbb{N}\}$ dense on the real line, where $[\cdot]$ is the fractional part of the expression? I am thinking that there should be some probabilistic argument for these things. EDIT 2: Ok, so plotting a histogram over $n \sin n$ is similar to plotting $n \sin(2\pi X)$ where $X$ is a uniform distribution on $[-1,1].$ This is not surprising, since $n$ mod $2\pi$ is distributed uniformly on $[0,2\pi].$ Now, the pdf of $\sin(2\pi X)$ is given by $f(x)=\frac{2}{\pi \sqrt{1-x^2}}$ in $(-1,1)$ and 0 outside this set. The pdf for $n \sin(2\pi X)$ is $g_n(x)=\sum_{k=1}^n \frac{1}{nk} f(x/k)$ so the limit density is what we get when $n \rightarrow \infty.$ (This integrates to 1 over the real line). Now, it should be straightforward to show that for any interval $[a,b],$ $\int_a^b g_n(x) dx \rightarrow 0$ as $n \rightarrow \infty.$ Thus, the series $g_n$ is "too flat" to be able to accumulate positive probability anywhere. (The gaussian distribution on the other hand, has positive integral on every interval). REPLY [14 votes]: For large $p$ the sequence $(n^p \sin n)$ can't be dense by the following argument: There exists a sequence of positive integers $(m_n)$ such that $|n - m_n \pi| \le \pi/2$. Using the estimate $|\sin t| \ge \frac2\pi |t|$ for $|t| \le \pi/2$ we get $$|n^p \sin n| \ge \frac2\pi n^p |n-m_n \pi| \ge \frac2\pi n^p m_n \left|\frac{n}{m_n} - \pi\right|.$$ Now it is known that there exists $\nu<\infty$ and $q_0 <\infty$ such that for all rational approximations $p/q$ with $q \ge q_0$ to $\pi$ we have $|\pi - p/q| \ge q^{-\nu}$. (Apparently the best current known value is $\nu = 7.6063\ldots$, as shown in a paper of V. Kh. Salikhov.) Since $m_n = n (1/\pi + o(1))$ there exists $n_0$ such that $m_n \ge q_0$ for $n\ge n_0$, and we have $$|n^p \sin n| \ge \frac{2}{\pi}n^{p}m_n^{1-\nu} = \frac{2}\pi n^{p+1-\nu}(1/\pi + o(1))^{1-\nu} = 2 n^{p+1-\nu}(\pi+o(1))^\nu.$$ So if $p+1-\nu > 0$, or equivalently, if $p>\nu -1 =6.6063\ldots$, then $|n^p \sin n| \to \infty$ as $n\to \infty$. (Notice that even when $p=\nu-1$, the sequence could not be dense, it would be bounded away from $0$.)<|endoftext|> TITLE: Independent increments? 
QUESTION [5 upvotes]: The questions are simple: Does the process $ X(t) = \int_0^t B(s)ds$ have independent increments? What about $X(t) = \int_{t-r}^{t}B(s)ds$? Here $B$ denotes the standard Brownian motion. Thanks a lot! REPLY [2 votes]: Well I am not sure to fully understand the heuristic argument of Robert Israel so I give my answer hoping that someone can spot my mistakes if any. So first let's remark that thanks to the representation of $X_t$ as the sum of two gaussian processes that are jointly gaussian. Indeed, using integration by part formula (or Itô's lemma if you want to) we have : $X_t=t.B_t+\int_0^tr.dB_r$ (the fact that a Wiener integral is a gaussian process is considered known the jointy gaussian of the process $tB_t$ and $\int_0^tr.dB_r$ is also not difficult to derive but I can elaborate on this if asked for). So now, are increments of this gaussian process independents ? To get to the conclusion it suffices to examine the covariation of the increments. I get for $0 TITLE: Book on matrix computation QUESTION [6 upvotes]: I'm taking a machine learning course and it involves a lot of matrix computation like compute the derivatives of a matrix with respect to a vector term. In my linear algebra course these material is not covered and I browsed some book in the school library but didn't find something relevant to my problem. So can you recommend me a book which covers these matrix computations? Thanks! REPLY [2 votes]: Bishop's excellent machine learning textbook, Pattern Recognition and Machine Learning, has an appendix on "Properties of Matrices". One section in this appendix (p. 697) is about Matrix Derivatives, and discusses formulas like this: \begin{equation} \frac{\partial}{\partial A} \text{Tr}(AB) = B^T \end{equation} and \begin{equation} \frac{\partial}{\partial A} \ln | A | = \left(A^{-1} \right)^T, \end{equation} for example. Peter Lax has a great book called Linear Algebra and its Applications. Chapter 9 is entitled "Calculus of Vector- and Matrix-Valued Functions".<|endoftext|> TITLE: Why are continuous functions not dense in $L^\infty$? QUESTION [28 upvotes]: Why are the continuous functions not dense in $L^\infty$? I mean both concretely (i.e. a counter example) and intuitively why is this the case. REPLY [26 votes]: If a convergent sequence of continuous functions converges uniformly then the limiting function is continuous. Consequently, the only functions which can be approximated to arbitrary accuracy in $L^\infty$ by continuous functions are the continuous functions. In other words, the subspace of $L^\infty$ consisting of continuous functions is closed. Of course, there are many functions in $L^\infty$ which are not continuous!<|endoftext|> TITLE: Prove that $\int_0^1[f''(x)]^2dx\ge4.$ QUESTION [8 upvotes]: Let $f$ be a $C^2$ function on $[0,1]$ such that $f(0)=f(1)=f'(0)=0,f'(1)=1.$ Prove that $$\int_0^1[f''(x)]^2dx\ge4.$$ Find all $f$ for equality to occur. REPLY [9 votes]: First a variational argument. Assume that this expression is minimal for some smooth function $f$. Then let $\delta: [0,1] \to \mathbb{R}$ be twice differentiable and such that $\delta(0) = \delta'(0) = \delta'(1) = \delta(1) = 0$ and consider $f + t \cdot \delta$ for a real number $t$. This function also satisfies the boundary conditions, so $$ \int_0^1 \left(f''(x) + t\cdot \delta''(x)\right)^2 dx $$ is minimal for $t = 0$. This implies that $$ \int_0^1 f''(x) \delta''(x) dx = 0. $$ Apply partial integration twice to get $$ \int_0^1 f^{(4)}(x) \delta(x) dx = 0. 
$$ Since this must hold for any such function $\delta$ it follows that $f^{(4)}$ is identically zero on $[0,1]$ and so $f$ must be a polynomial of degree at most three. The only such polynomial that satisfies the boundary conditions is $$ f(x) = x^2 (x - 1). $$ For this $f$ you obtain the lower bound $$ \int_0^1 \left(f''(x)\right)^2 dx = 4. $$ This argument has produced a nice candidate minimal function. Once this candidate is found the claim follows easily. Let $f$ be the polynomial above and take $g \in C^2[0,1]$ satisfying the boundary conditions as stated in the problem. Then $g = f + (g - f)$ and if we define $\delta = g - f$ then $\delta$ has the properties assumed above. In particular, by partial integration, we know that $$ \int_0^1f''(x)\delta''(x) dx = 0. $$ Then $$ \int_0^1\left(g''(x)\right)^2 dx = \int_0^1\left(f''(x) + \delta''(x)\right)^2 dx = 4 + \int_0^1 \left(\delta''(x)\right)^2 dx \geq 4 $$ with equality only for $\delta'' = 0$. The latter implies $\delta = 0$ since $\delta'(0) = \delta(0) = 0$. So equality only holds when $g = f$.<|endoftext|> TITLE: How to prove that the image of a continuous curve in $\mathbb{R}^2$ has measure $0$? QUESTION [9 upvotes]: How to prove that the image of a continuous curve in $\mathbb{R}^2$ has measure $0$? This is an exercise given in Real Analysis-Stein & Shakarchi. A hint is given as follows: Cover the curve by rectangles, using the uniform continuity of $f$. REPLY [8 votes]: Basically, just do what the hint says. The idea is to cover the curve (i.e. the graph of your function $f:\mathbb R \to\mathbb R$) by rectangles such that the sum of their areas (measures) is smaller than $\epsilon>0$ and do so for every $\epsilon>0$. This would show that $m(C)<\epsilon$ for every $\epsilon>0$ and thus $m(C)=0$. Let's do this. Lemma. Let $g:[a,b]\to\mathbb R$ be a continuous function and let $\Gamma_g = \{(x,g(x))|\;x\in[a,b]\}$ be the graph of $g$. Then $m(\Gamma_g)=0$. Proof. The function $g$ is uniformly continuous, since $[a,b]$ is compact. Let $\epsilon>0$. Then there exists a $\delta>0$ such that for each pair of points $x,y\in [a,b]$ with $|x-y|<\delta$, the inequality $|g(x)-g(y)|<\epsilon$ holds. We may assume without loss of generality that $\delta<b-a$. Then there exist numbers $a=x_0<x_1<\dots<x_n=b$ with $x_{i+1}-x_i<\delta$ for every $i$: take $n$ to be the smallest natural number with $n\delta>b-a$ and set $x_i=a+i\frac{b-a}{n}$. Over each interval $[x_i,x_{i+1}]$ the part of the graph of $g$ is contained in the rectangle $[x_i,x_{i+1}]\times[g(x_i)-\epsilon,g(x_i)+\epsilon]$, which has measure at most $2\epsilon\delta$, so $m(\Gamma_g)\leq 2\epsilon n\delta$. Since $n$ is the smallest such number, we must also have $b-a+\delta\geq n\delta$ (since otherwise we would have $b-a+\delta<n\delta$, i.e. $b-a<(n-1)\delta$, contradicting the choice of $n$), hence $m(\Gamma_g)\leq 2\epsilon(b-a+\delta)<4\epsilon(b-a)$ because $\delta<b-a$. We have thus shown that for every $\epsilon>0$ the inequality $m(\Gamma_g)<4\epsilon (b-a)$ holds. This is only possible if $m(\Gamma_g)=0$ and thus the proof is complete. $\square$ Now, to show this for $f:\mathbb R\to\mathbb R$ notice that the graph of $f$ (defined by $\Gamma_f=\{(x,f(x))|\;x\in\mathbb R\}$) is the union of graphs of restrictions to $[-n,n]$. More precisely: define for each $n\in\mathbb N$ a function $f_n:[-n,n]\to\mathbb R$ by the formula $f_n(x) = f(x)$. Then $$\Gamma_f=\bigcup_{n=1}^\infty\Gamma_{f_n}.$$ Therefore, $$m(\Gamma_f)\leq\sum_{n=1}^\infty m(\Gamma_{f_n})=\sum_{n=1}^\infty 0 = 0$$ and thus $m(\Gamma_f)=0$. We are done.<|endoftext|> TITLE: How do we find the inverse of a vector function with $2$ variables? QUESTION [9 upvotes]: $$f(m,n) = (2m+n, m+2n)$$ What do we have to do to find the inverse of this function? I don't even know where to begin. REPLY [6 votes]: The following is a solution with matrices: $f$ can be regarded as a linear function, e.g. $\mathbb{Q}^2\to \mathbb{Q}^2$. With respect to the standard basis this is given by the matrix $$\begin{pmatrix}2&1\\1&2\end{pmatrix}$$ Now you can invert this matrix. There are different methods for that.
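(For a quick machine check — a sketch assuming numpy, with the right-hand side $(4,5)$ chosen arbitrarily — inverting the matrix numerically, or solving one particular system, gives the same result as the formula below.)

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
print(np.linalg.inv(A))             # [[ 2/3, -1/3], [-1/3,  2/3]]
print(np.linalg.solve(A, [4, 5]))   # the (m, n) with f(m, n) = (4, 5), namely (1, 2)
```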
I'm just using the explicit formula given by Cramer's rule. The inverse is $$\frac{1}{3}\begin{pmatrix}2&-1\\-1&2\end{pmatrix}$$ Now you can interpret this as a linear function again and get $f^{-1}(m,n)=\left(\frac{2m-n}{3},\frac{-m+2n}{3}\right)$.<|endoftext|> TITLE: Can we ascertain that there exists an epimorphism $G\rightarrow H$? QUESTION [182 upvotes]: Let $G,H$ be finite groups. Suppose we have an epimorphism $$G\times G\rightarrow H\times H$$ Can we find an epimorphism $G\rightarrow H$? REPLY [85 votes]: Let $G=Q_8\times D_8$, where $Q_8$ is the quaternion group and $D_8$ is the dihedral group of order $8$. Let $f$ be an isomorphism $$f:G\times G =\left(Q_8\times D_8\right)\times \left(Q_8\times D_8\right)\longrightarrow \left(Q_8\times Q_8\right)\times \left(D_8\times D_8\right).$$ Now, let $\mu$ and $\lambda$ be epimorphisms $$\begin{eqnarray*}\mu:Q_8\times Q_8&\longrightarrow&Q_8 {\small \text{ Y }} Q_8\\ \lambda:D_8 \times D_8&\longrightarrow&D_8 {\small \text{ Y }}D_8\end{eqnarray*}$$ where $A {\small \text{ Y }} B$ denotes the central product of $A$ and $B$. Then $$\mu\times \lambda:\left(Q_8\times Q_8\right)\times \left(D_8\times D_8\right)\longrightarrow \left(Q_8 {\small \text{ Y }}Q_8\right)\times \left(D_8 {\small \text{ Y }}D_8 \right)$$ is an epimorphism. The key is that $D_8{\small \text{ Y }} D_8\cong Q_8{\small \text{ Y }} Q_8$, so if we take an isomorphism $$\phi:D_8{\small \text{ Y }} D_8\longrightarrow Q_8{\small \text{ Y }} Q_8,$$ then we can take $H=Q_8{\small \text{ Y }} Q_8$ and form an isomorphism $$1_H\times \phi:\left(Q_8 {\small \text{ Y }}Q_8\right)\times \left(D_8 {\small \text{ Y }}D_8 \right)\longrightarrow \left(Q_8 {\small \text{ Y }}Q_8\right)\times \left(Q_8 {\small \text{ Y }}Q_8 \right)=H\times H.$$ So, all in all, we have $$\newcommand{\ra}[1]{\kern-1.5ex\xrightarrow{\ \ #1\ \ }\phantom{}\kern-1.5ex} \newcommand{\ras}[1]{\kern-1.5ex\xrightarrow{\ \ \smash{#1}\ \ }\phantom{}\kern-1.5ex} \newcommand{\da}[1]{\bigg\downarrow\raise.5ex\rlap{\scriptstyle#1}} \begin{array}{c} \left(Q_8\times D_8\right) \times \left( Q_8 \times D_8 \right)& \ra{f} &\left(Q_8\times Q_8\right) \times \left( D_8 \times D_8 \right)&\\ & & \da{\mu\times \lambda} & & & & \\ & & \left(Q_8 {\small \text{ Y }}Q_8\right)\times \left(D_8 {\small \text{ Y }}D_8\right) & \ras{1_H\times \phi} & \left(Q_8 {\small \text{ Y }}Q_8\right)\times \left(Q_8 {\small \text{ Y }}Q_8\right) \end{array} $$ and thus an epimorphism $$f(\mu\times\lambda)(1_H\times \phi):G\times G\longrightarrow H\times H.$$ However, $Q_8{\small\text{ Y }}Q_8$ is not a homomorphic image of $Q_8\times D_8$. So this is a counterexample. Appendix. Credit and thanks to Peter Sin for his help with the crucial step in this answer. See Prop. 3.13 of these notes ("The Theory of $p$-groups by David A. Craven", in case the link breaks again) for a proof that $Q_8 {\small \text{ Y }} Q_8\cong D_8 {\small \text{ Y }} D_8 \not\cong Q_8 {\small \text{ Y }} D_8$.<|endoftext|> TITLE: Maximal ideal space of $c_{\mathcal{U}}$ QUESTION [11 upvotes]: Let $\mathcal{U}$ be an filter over $\mathbb{N}$. Define $$c_{\mathcal{U}} = \{{(x_n)\in \ell_\infty\colon \lim_{\mathcal{U}, n}x_n =0\}},$$ which is a C*-algebra. Is there an accessible topological description of the maximal ideal space of $c_{\mathcal{U}}$? At least for ultrafilters? 
REPLY [3 votes]: (I will use $\omega$ for the ultrafilter since using a lowercase letter will improve readability) The way I see it, your algebra $c_\omega$ is simply $$ c_\omega=\{f:\ f(\omega)=0\}\subset C(\beta \mathbb N). $$ So you can make the identification $c_\omega=C_0(\beta\mathbb N\setminus\{\omega\})$. Note that $c_\omega$ is an ideal in $C(\beta\mathbb N)$, so the ideals in $c_\omega$ are ideals in $C(\beta\mathbb N)$. This is important because, with $\beta\mathbb N$ being compact, the ideals of $C(\beta\mathbb N)$ are precisely the sets of functions that annihilate a fixed closed subset. Thus, the ideals of $c_\omega$ are the sets of the form $$ \{f\in C_0(\beta\mathbb N\setminus\{\omega\}):\ f=0\ \mbox{ on }\{\omega\}\cup K\} $$ for a fixed closed $K\subset\mathbb N\setminus\{\omega\}$. We conclude that maximal ideals of $c_\omega$ are of the form $$ \{f\in C_0(\beta\mathbb N\setminus\{\omega\}):\ f(\omega)=f(\eta)=0\} $$ for some $\eta\in\beta\mathbb N\setminus\{\omega\}$.<|endoftext|> TITLE: Relationship between dual space and adjoint of a linear operator QUESTION [8 upvotes]: I am having a hard time understanding the concept of adjoint of a linear operator. Given a finite dimensional Hilbert space $H$ over a field $F$, I know the dual space is the vector space $H^*$ of all linear forms $f:H\rightarrow F$. Is the adjoint of a linear map $A$ on $H$ a member of $H^*$? Or what is the relationship between a linear map $f:H\rightarrow K$ and the dual space, generally? What confuses me even more is this special case: take the bra-ket notation. It says that for any ket $|\psi\rangle\in H$ there is a linear form from $H^*$ called a bra ($|\psi\rangle:=\langle\psi,-\rangle$-also weird, is $\psi$ a function or not?...), such that bra is the adjoint of ket. It's a bit of a mess, any help would be appreciated. REPLY [10 votes]: First, there's something called the Riesz Representation Theorem. To understand it, start by fixing a vector $v \in H$. We can now use the dot product to define a continuous linear functional $L_v: H \rightarrow F$ by $$L_v(w) = \langle w,v \rangle.$$ What the Riesz Representation Theorem says is that every continuous linear functional $\phi:H \rightarrow F$ arises in this way! That is, given any element $\phi$ of the dual space $H^\ast$, there is some $v \in H$ so that $\phi(w) = \langle w, v \rangle$. So this is why we can sort of think as linear functionals as elements of the Hilbert space, and vice versa. In more mathematical terms, we say that the dual of $H$ is isomorphic to $H$. Second, a note on dual spaces: I believe that typically the space $H^\ast$ is defined to be the set of continuous linear functionals. This distinction is important, as there are different types of duals. So far, I've been talking about the topological dual. However, there is an algebraic dual, which I have seen denoted $H^\star$, which is just all linear functionals $H \rightarrow F$, no continuity assumed. The Reisz Representation Theorem concerns only the topological dual. (The duals are actually the same for finite dimensional Hilbert spaces, but I don't believe the Hilbert spaces encountered in QM are.) Third, adjoints: Given any Hilbert spaces $H, K$, and a continuous linear functional $A: H \rightarrow K$, there is a continuous linear map called the adjoint $A^\ast:K \rightarrow H$ (note that it goes the other way) that is defined by the equation $\langle Av, w \rangle = \langle v, A^\ast w \rangle$. You typically only see the case $K = H$. 
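In the finite-dimensional case $H = \mathbb{C}^n$ with the standard inner product, the adjoint is simply the conjugate transpose, and the defining equation is easy to check numerically (a sketch with numpy; the convention that $\langle x, y\rangle$ is linear in the first slot matches the text, and the random matrix and vectors are my own test data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
v = rng.normal(size=n) + 1j * rng.normal(size=n)
w = rng.normal(size=n) + 1j * rng.normal(size=n)

inner = lambda x, y: np.vdot(y, x)   # <x, y>, linear in x and conjugate-linear in y
A_star = A.conj().T                  # candidate for the adjoint of A

print(np.allclose(inner(A @ v, w), inner(v, A_star @ w)))   # True: <Av, w> = <v, A* w>
```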
So no, the adjoint of an operator $A: H \rightarrow H$ is not an element of $H^\ast$, since the members of $H^\ast$ are continuous linear functionals from $H$ into $F$, while $A^\ast$ goes from $H$ into $H$. Unfortunately, I don't know much about QM, so this last bit is just speculating on how I think the notation works. If you consider kets $|\phi \rangle \in H$ to be an element of the Hilbert space, then there is a continuous linear functional that I suppose you could call $\langle \phi |$ defined by $\langle \phi | v \rangle = \langle v, \phi \rangle.$ And conversely, given a bra $\langle \phi |$, by the Riesz Representation Theorem, there is a ket $| \phi \rangle$ so that $\langle \phi | v \rangle = \langle v , \phi \rangle.$<|endoftext|> TITLE: Maximal order of an element in a symmetric group QUESTION [22 upvotes]: If we let $S_n$ denote the symmetric group on $n$ letters, then any element in $S_n$ can be written as the product of disjoint cycles, and for $k$ disjoint cycles, $\sigma_1,\sigma_2,\ldots,\sigma_k$, we have that $|\sigma_1\sigma_2\ldots\sigma_k|=\operatorname{lcm}(|\sigma_1|,|\sigma_2|,\ldots,|\sigma_k|)$. So to find the maximum order of an element in $S_n$, we need to maximize $\operatorname{lcm}(|\sigma_1|,|\sigma_2|,\ldots,|\sigma_k|)$ given that $\sum_{i=1}^k{|\sigma_i|}=n$. So my question: How can we determine $|\sigma_1|,|\sigma_2|,\ldots,|\sigma_k|$ such that $\sum_{i=1}^k{|\sigma_i|}=n$ and $\operatorname{lcm}(|\sigma_1|,|\sigma_2|,\ldots,|\sigma_k|)$ is at a maximum? Example For $S_{10}$ an element of maximal order consists of $3$ cycles of lengths $2$, $3$, and $5$ (or so I think), resulting in an element order of $\operatorname{lcm}(2,3,5)=30$. I'm certain that all of the cycle lengths will have to be relatively prime to achieve the greatest lcm, but other than this, I don't know how to proceed. Any thoughts or references? Thanks so much. REPLY [4 votes]: For more detail you can see this paper: William Miller, "The maximum order of an element of a finite symmetric group", American Mathematical Monthly, pp. 497-506.<|endoftext|> TITLE: $f(x)=1/x$ over $[1, \infty)$ is not Lebesgue integrable QUESTION [6 upvotes]: How does one show that $\chi_{[1, \infty)}1/x$ is not (Lebesgue) integrable? What I could think of is as follows: Letting $f(x)=1/x$ (defined for $x\geq 1$), define $$ f_n(x)=f\chi_{[1, n)}(x). $$ Each $f_n$ is, therefore, Riemann integrable on $[1, n)$ with value $\ln n$, hence integrable there. As $0\leq f_n\nearrow f$ on $[1, \infty)$, the monotone convergence theorem says $$ \int_{[1, \infty)}f_n\nearrow\int_{[1, \infty)} f $$ and so $\int_{[1, \infty)} f=\infty$ since $\ln n\nearrow\infty$. Is there a more obvious reason why the given integral isn't finite? It seems that my method needs quite some modification if we go to $n$-dimensional integrals of $$ f(x)=\frac{1}{|x|}\chi_{|x|>1}. $$ REPLY [2 votes]: So we have that $\int f$ is the supremum of the integrals of simple functions, supported on sets of finite measure, that are no more than $f$. Let $\phi_{n} = \sum_{k = 1}^{n} \chi_{[k, k + 1)} / (k + 1)$. Then $\phi_{n}$ is supported on a set of measure $n$ and has integral $\sum_{k = 1}^{n} 1/(k + 1)$. Thus $\int f \geq \sum_{k = 1}^{n} 1/(k + 1)$ for all $n$, so we also have $\int f \geq \sum_{k = 1}^{\infty} 1/(k + 1) = \infty$.<|endoftext|> TITLE: Non-projective flat module over a local ring QUESTION [7 upvotes]: Could you give me an example of a finitely generated module that is flat over a local ring but not projective?
For a non finitely generated I took $\mathbb{Q}$ over $\mathbb{Z}_p$, but I cannot find an example of a finitely generated one. Of course it should be a module over a local non-noetherian ring. I don't know a lot of local non-noetherian rings, the first that came to my mind was $k[x_1,x_2,\ldots]/(x_1,x_2^2,x_3^3,\ldots)$, since localization is a good way to find flat modules I wanted to localize this ring, but it has only one prime ideal (the maximal ideal); so I really don't know what to do. Any help? REPLY [11 votes]: Over a (commutative) local ring (non necessarily noetherian), any finitely generated flat module is free (Matsumura, Commutative Algebra, Prop. 3.G, p. 21), hence projective.<|endoftext|> TITLE: How does slice category help create functor? QUESTION [7 upvotes]: Reading Awodey [p.16-17], he states the following: The slice category $\boldsymbol{C}/C$ of a cateogry $\boldsymbol{C}$ over an object $C\in\boldsymbol{C}$ has [definition of slice category follows] (...) If $g: C\to D$ is any arrow, then there is a composition functor, $g_{*}: \boldsymbol{C}/C \to \boldsymbol{C}/D$ defined by $g_{*}(f) = g\circ f$, and similarly for arrows in $\boldsymbol{C}/C$. Indeed, the whole construction is a functor, $\boldsymbol{C}/(-): \boldsymbol{C} \to \boldsymbol{\operatorname{Cat}}$ as the reader can easily verify. So I have a few questions: What does the $(-)$ symbol mean? What does the author mean by the expression "the whole construction"? And how is that "construction" a functor? To my best understanding slice category was a "category" and not a functor. P.S.: Please let me know if it's not clear, and I'll expand/clarify. P.S.S.: My mathematics level: newbie REPLY [10 votes]: $\boldsymbol{C}/(-)$ is category theory notation jargon for the mapping which takes an object $X$ from $|\boldsymbol{C}|$ and yields the slice category $\boldsymbol{C}/X$. Since slice category is a category, it can be thought of as an object in $\boldsymbol{Cat}$, which is the category whose objects are categories. So the mapping $\boldsymbol{C}/(-)$ takes objects in category $\boldsymbol{C}$ and yields objects in category $\boldsymbol{Cat}$. Together with the corresponding mapping of arrows that takes an arrow $g$ (from category $\boldsymbol{C}$) and yields the arrow $g_*$ (from category $\boldsymbol{Cat}$), the mapping $\boldsymbol{C}/(-)$ forms a functor from $\boldsymbol{C}$ to $\boldsymbol{Cat}$. It has the necessary ingredients: a mapping between the objects of two categories (namely $\boldsymbol{C}$ and $\boldsymbol{Cat}$) and a mapping between the arrows of those categories. (You should now verify that these mappings do in fact form a functor: They must respect sources and targets of arrows, and identities and compositions.) By "the whole construction is a functor", Awodey means "the slice construction (for objects), together with the $g_*$ construction I just described (for arrows) is a functor." The "$(-)$" notation is common in category theory; for some reason they don't use the $\mapsto$ notation that one would expect. The mapping that turns $g$ into $g_*$ might be written as $(-)_*$.<|endoftext|> TITLE: Notation: What's meant by $C^{\infty}_{0}(\mathbb{R}^{+})$? QUESTION [8 upvotes]: In Chapter 0 of Iwaniec's Spectral Methods of Automorphic Forms Iwaneic uses the notation $C^{\infty}_{0}(\mathbb{R}^{+})$ without definition. 
I assume that it's the set of infinitely differentiable functions from $\mathbb{R} \to \mathbb{R}$ with range a subset of the positive reals and which tend toward zero "sufficiently quickly," but I don't know whether my guess is right or what the precise definition of "sufficiently quickly" is in this context. What does the subscript of $0$ refer to? REPLY [15 votes]: This is one of the unfortunate cases where the notation can mean two different things. It can either be the compactly supported smooth functions on the positive reals, or it can be the smooth functions which tend to $0$ when $x \to 0$ or $x \to \infty$. I personally prefer to use a $c$ subscript for the former and reserve the $0$ subscript for the latter. Without further context, it really isn't possible to answer your query. REPLY [8 votes]: From an MO question in which that notation is defined (second paragraph): $C_b(\mathbb{R})$ (resp. $C_0(\mathbb{R})$) are the continuous functions from $R$ to $R$ that are bounded (resp. vanish at infinity).<|endoftext|> TITLE: Modeling the Decay of a Pack of Cannibalistic Hyenas QUESTION [7 upvotes]: A population of $p_0$ hyenas has run out of food in their ecosystem, and so sadly they have resorted to eating each other. Hyenas need to consume one meal a day, and so exactly once per day, any given hyena will kill another hyena. The time at which this happens is random, meaning each hyena's mealtime is uniformly distributed throughout a set of $24$ hours. Assuming that the last hyena will get hungry and die trying to eat himself, after how long will the population of hyenas become extinct? I'm can think of two different ways to answer this (neither of which I know how to solve). The first is simpler and less accurate (and will therefore merit less glory). Answer 1 We can discretize time into seconds, and so the population $p(t)$ hyenas after $t$ seconds can be written as: $$p(t) = p(t-1) - \frac{p(t-1)}{86400}.$$ (Why? At any given second--there are $60*60*24 = 86400$ seconds in a day--each hyena has a $1$ in $86400$ chance of eating another hyena.) Clearly this is sloppy, since it is not guaranteed that such a number of hyenas will die every second. Also, when one hyena remains, it gets messy (although you could say it will take him a full day to die). I think this is actually modelling the expected value of $p(t)$, but I'm not sure. But still, I'd like to see how we can get a clean formula for $p(t)$ from this recursion, and see when it becomes $0$. Answer 2 We think of time as continuous, and the cannibalism of the hyenas as a (Poison?) process, in which each event--the death of a hyena--occurs at some rate. The tricky part is that this rate is dependent on the current population of the hyenas... I assume the solution will be given by $E[t | p(t) = 0]$. I've been thinking about this for a while, and am genuinely curious to see what you think! REPLY [2 votes]: Here is a sketch of an approach that should show that the number of days to termination is asymptotic to $c\log (n)$ for some computable $c$. I'm a bit busy now, but I can add more details later if necessary. Let $n$ be the number of hyenas at the beginning of a day. It suffices to show that the fraction of hyenas surviving to the next day is asymptotic to some constant. Arrange the hyenas from left to right by their planned meal times. Now construct the directed graph on the hyenas where an edge points from hyena $A$ to hyena $B$ if $A$ eats $B$. Observe that every valid graph is equally probable. What does this graph look like? 
A component consists of a path of leftward edges terminating in a single rightward edge. The number of surviving hyenas is the number of components in the graph. Let the number of such graphs be $f(n)$. We can count the number of such graphs with an EGF. We will find that $f(n+1)/f(n)\sim \alpha n^d$ for some constants $\alpha$ and $d$. Now we can calculate the expected number of components of size $k$, since we can count the number of ways to choose a component of size $k$ and then fill in the rest of the graph in $f(n-k)\sim \alpha^d n^{kd}f(n)$ ways. Summing over $k$ will yield the expected number of components, which should be asymptotically $cn$ for some constant $c$. To prove the desired concentration result, we can use Chebyshev's inequality. Calculating the variance of the number of components is equivalent to counting the expected number of pairs of components.<|endoftext|> TITLE: Regular in codim one scheme and DVR QUESTION [8 upvotes]: Let $X$ be a noetherian integral (separated) scheme which is regular in codimension one. Let $Y$ be a prime divisor and let $\eta$ be the generic point of $Y.$ It seems I am missing something easy but why $\mathcal{O}_{X, \eta}$ is a DVR with the quotient field the function field of $X?$ And when it is said, $X$ is regular (non-singular) of codimension one, does it follow from the definition that the local ring of a codimension one closed subscheme is regular in general? (otherwise, the terminology doesn't make sense to me!) REPLY [8 votes]: $\mathcal{O}_{X,\eta}$ is a regular local $1$-dimensional noetherian domain. It is a Theorem in commutative algebra which says that this is precisely a DVR. If $X$ is an arbitrary integral scheme and $x \in X$, then the quotient field of $\mathcal{O}_{X,x}$ is the function field of $X$. Namely, since this a local issue, we may assume $X=\mathrm{Spec}(A)$ for some integral domain $A$, and just have to observe that $\mathrm{Quot}(A_{\mathfrak{p}}) = \mathrm{Quot}(A)$ for every prime $\mathfrak{p} \subseteq A$. As for the last question, you should look at the definitions. Nothing happens.<|endoftext|> TITLE: Proving Baire's theorem: The intersection of a sequence of dense open subsets of a complete metric space is nonempty QUESTION [7 upvotes]: The following is problem 3.22 from Rudin's Princples of Mathematical Analysis: Suppose $X$ is a nonempty complete metric space, and $\{G_n\}$ is a sequence of dense open subsets of $X$. Prove Baire's theorem, namely, that $\bigcap_{n=1}^\infty G_n$ is not empty. Hint: find a shrinking sequence of neighbourhoods $E_n$ such that $\overline{E}_n\subset G_n$. Here's what I've tried so far: Let $\{r_n\}$ be a Cauchy sequence of positive real numbers converging to $0$. Fix $x\in X$ and define $E_i=\{g\in G_i:d(g,x) TITLE: Why define the Cantor set with an intersection? QUESTION [5 upvotes]: Define $E_n$ as $ E_1 = \left[0,\frac{1}{3}\right] \cup \left[\frac{2}{3},1\right]$ $ E_2 = \left[0,\frac{1}{9}\right] \cup \left[\frac{2}{9},\frac{3}{9}\right] \cup \left[\frac{6}{9},\frac{7}{9}\right] \cup \left[\frac{8}{9},1\right]$ and so on. I usually see the Cantor set defined as $C = \bigcap_{n=1}^\infty E_n $. Why use this limit with an intersection, instead of the seemingly more natural $C = \lim_{n\to\infty} E_n $ ? As far as I can tell, when the limit isn't involved, the intersection is unnecessary: $E_N = \bigcap_{n=1}^N E_n $ REPLY [5 votes]: Limits are associated with topology; whereas the first time I saw the Cantor set constructed, I was in my first semester. 
Topology had yet to come out behind the curtain. For a freshman, limits are for sequences of numbers, or functions of real and complex numbers. Limits are preserved for objects which are not sets in their essence. On the other hands, when you have a family of sets you obviously can discuss their unions and intersections. So from a pedagogical point of view, this is indeed a reasonable approach to avoid the discussion about continuous operations on sets in the topological space $\mathcal P(\Bbb R)$ (the power set of the real numbers). Much much later I have learned that a function from ordinals into sets is called continuous if at limit stages we have $\bigcup_{\alpha<\delta} f(\alpha)=f(\bigcup_{\alpha<\delta}\alpha)=f(\lim_{\alpha\to\delta}\alpha)=f(\delta)$. In this aspect, $E_n$ makes somewhat of a continuous sequence of length $\omega$, whose limit point is the Cantor set. The above definition fails here because we wish to discuss intersection and the continuity defined above discussed unions, alas this is not an important matter as we can always talk about $D_n=\mathbb R\setminus E_n$ instead. To sum up my ramble above, yes it is possible to discuss limits instead of intersections, but the limit is the intersection (at least in the Cantor set case), but from a teaching point of view it is often the case where the Cantor set is introduced the students have a proper grasp of topology, sufficient to discuss limits of sets. In this case the use of intersections which is much clearer to understand when discussing sets is better.<|endoftext|> TITLE: Conjugacy classes of SO3 and O3 QUESTION [5 upvotes]: I'm trying to find the conjugacy classes of SO3 and O3. How do I do this? SO3 consists of all rotations around any axis in three dimensions but how do I determine which are conjugate? REPLY [10 votes]: Any two rotations through the same angle are conjugate: Rotate the first axis into the second, turn about the second axis, rotate back – the two rotations on the outside are inverses of each other, and the result is the same as if you'd turned about the first axis. Any two rotations through different angles are not conjugate because their matrices have different traces $1+2\cos\phi$. So two rotations are conjugate iff they're through the same angle.<|endoftext|> TITLE: A geometry problem seeking for proof QUESTION [12 upvotes]: Circle $\odot O_1$ is tangent with circle $\odot O_2$ at $P$. Two tangent lines $AE$ and $AF$ of circle $\odot O_2$ meets circle $O_1$ at $B$, $G$ and $C$, $H$, respectively. $D$ is the in-center of $\triangle ABC$. $DP$ meets $BC$ at $I$, $EI$ meets $AO_2$ at $J$. Here is a figure: Prove: $E$, $B$, $D$, $P$ are concyclic $CJ\perp AO_2$ REPLY [4 votes]: This is a solution for part 1. I haven't thought about part 2 yet. The following text may not be very rigorous. To be more specific, the solution may rely on the picture a bit. For example, it may implicitly use the fact that points $P$, $E$ and $F$ all lie on the same side of line $BC$, simply because it looks that way on the picture. I'm afraid that if I make the solution any more rigorous, it will become absolutely incomprehensible. Step 1. Build $P'$ and several other new points. We will forget about point $P$ and the bigger circle $\odot O_1$ for a while, and focus on the triangle $\triangle ABC$ and the smaller circle $\odot O_2$. Consider the circumcircle of $\triangle EBD$. It intersects $\odot O_2$ at point $E$. There must be a second point where these two circles intersect. Let us denote that point by $P'$. 
Clearly, $E,\,B,\,D,\,P'$ are concyclic. Our plan is simple: to prove that $P=P'$. To do this, we'll construct several more lines and points. Let us denote by $Q$ the second point where line $P'D$ intersects $\odot O_2$. Let $B'$ and $C'$ be the points of intersection of $O_2$ with $P'B$ and $P'C$ respectively. You can see all these on the figure: Step 2. Explore the properties of the newly constructed objects. The objective of this step is to prove that $BC||B'C'$. We will do this by some simple angle chasing. First of all, since $EBDP'$ is an inscribed quadrilateral, $$ \angle EP'Q=\angle ABD = 1/2 \angle ABC. $$ It is quite easy to see that $$ \angle EP'F = 90^{\circ} - 1/2 \angle BAC = 1/2(\angle ABC + \angle ACB). $$ From this we have $$ \angle FP'D = 1/2 \angle ACB = \angle ACD, $$ so $FCDP'$ is also an inscribed quadrilateral. So, we have two inscribed quadrilaterals $EBDP'$ and $FCDP'$. From the first one we know that $\angle BED = \angle BP'D$ and from the second that $\angle CFD = \angle CP'D$. It is obvious that $\angle BED = \angle CFD$ (because $\triangle AED = \triangle AFD$), therefore $\angle BP'D = \angle CP'D$. To put it in different letters, $\angle B'P'Q = \angle C'P'Q$. This in turn means that $\angle B'O_2Q = \angle C'O_2Q$, which means that $B'C' \perp O_2Q$. Now to prove that $B'C'||BC$, it only remains to prove that $O_2Q \perp BC$. But this is easy. We already know that $\angle EP'Q = 1/2 \angle ABC$. Also, $\angle EP'Q = 1/2 \angle EO_2Q$. So $\angle EO_2Q = \angle ABC$. Since $O_2E \perp BA$, it means that $O_2Q \perp BC$, qed. Step 3. Prove that $P=P'$. Alright, we now know that $BC||B'C'$. From here it will be quite easy to prove that $P=P'$. Since $BC||B'C'$, there exists a homothety $f$ with center $P'$ that sends line $B'C'$ to line $BC$. Clearly, $f(B')=B$ and $f(C')=C$. Let us denote $\odot O'_1 = f(\odot O_2)$. Since $\odot O'_1$ is the image of $\odot O_2$ under a homothety with center $P'$, circles $\odot O'_1$ and $\odot O_2$ are tangent at $P'$. Also, since $B'$ and $C'$ lie on $\odot O_2$, $B$ and $C$ lie on $\odot O'_1$. Let us revise what we have established. We have two circles $\odot O_1$ and $\odot O'_1$. Both of them contain points $B$ and $C$ and both of them are tangent to $\odot O_2$ at points $P$ and $P'$ respectively. But from this it is already clear that $\odot O_1$ and $\odot O'_1$ are the same, and so are points $P$ and $P'$. Done!<|endoftext|> TITLE: Does the category framework permit new logics? QUESTION [9 upvotes]: It appears to me that a topos permits a broader concept of subsets than the yes/no decission of a characteristic function in a set theory setting. Probably because the subobject classifier doesn't have to be {0,1}. But I wonder, aren't all the multivalued logics also part of/can be modeled in set theory? Is there some new logic coming in with topoi which weren't there before? Did it just help discovering new ideas? Fuzzy stuff etc. are all existent in "conventional set theory mathematics" already, right? REPLY [5 votes]: Category theory itself, and thus topos theory, can be formalized in set theory(*). So, in a weak sense, everything in topos theory is already "in" set theory. Of course, set theory can be formalized in topos theory using the category Set, and so set theory is also "in" topos theory. The real question that matters is which formalization is useful for a particular purpose. 
For some purposes, topos theory provides a useful framework to the people who use it, and they prefer this framework over the equivalent framework where everything is rephrased in terms of set theory. The key point about formalization in set theory is that category theory, and topos theory, are formal axiomatic systems, and any axiomatic system can be studied using set theory as a metatheory. (*): There is a minor issue that some things in topos theory may use axioms that appear to be large cardinal axioms from the point of view of set theory. But this is not an impediment to formalizing things in set theory if we simply assume the necessary large cardinal axioms.<|endoftext|> TITLE: Does tensoring flat modules preserve minimal generating sets? QUESTION [6 upvotes]: Let $k$ be a commutative ring and let $M,N$ be two flat modules over $k$. $\mathbf{EDIT}:$ A minimal generating set $X \subseteq M$ is a set which generates $M$ and no proper subset of $X$ generates $M$. There is no canonical notion of size for a minimal generating set, for example $\mathbb Z$ has generating sets $\{ 1 \}$ and $\{ 2, 3\}$ over $\mathbb Z$. My question: is it true that If $\{ m_i \mid i < \lambda\}$ is a minimal generating set for $M$ and $\{ n_j \mid j < \kappa \}$ is a minimal generating set for $N$ then $\{ m_i \otimes n_j \mid i < \lambda; j< \kappa\}$ is a minimal generating set for $M \otimes N$? or equivalently, If $\{ m_i \mid i < \lambda\}$ is a minimal generating set for $M$ and $\{ n_j \mid j < \kappa \}$ is a minimal generating set for $N$ then whenever a finite linear combination $ \displaystyle\sum_{(i,j)< \lambda \times \kappa}\alpha_{(i,j)} m_i \otimes n_j = 0$, each $\alpha_{(i,j)}$ is a non-unit? I attempted to prove this for the case when $k$ is a field, thinking in terms of flatness : Let $V, W$ be vector spaces and let $\{ v_i \mid i< \lambda\}, \{ w_j \mid j < \kappa\}$ be bases for $V,W$ respectively. We can easily define maps $f \colon \oplus_{i < \lambda} k \to V$ and $g \colon \oplus_{j< \kappa} k \to W$ which send sequences $(\alpha_i )_{i< \lambda}$ to $\sum_{i < \lambda } \alpha_i v_i$ and $( \beta_j )_{j < \kappa }$ to $\sum_{j < \kappa }\beta_j w_j$. These maps are injective because of the linear independence property of the bases. By flatness the maps $$ \bigoplus_{i < \lambda}k \otimes \bigoplus_{j < \kappa}k \xrightarrow{f \otimes 1} V \otimes \bigoplus_{j < \kappa}k \quad \text{and} \quad V \otimes \bigoplus_{j < \kappa} k \xrightarrow {1 \otimes g} V \otimes W $$ are injective and so the composite $$ \bigoplus_{(i,j) < \lambda \times \kappa} k \cong \bigoplus_{i < \lambda}k \otimes \bigoplus_{j < \kappa}k \xrightarrow {f \otimes g} V \otimes W$$ is injective. This map being injective tells us that the proposed basis is actually linearly independent. I know there are much easier ways to do that but as I said I wanted to think about the flatness of the modules more than their free-ness. However this proof does not generalise to modules because $f,g$ are not necessarily going to be injective, and the notion of linear independence doesn't really work for modules in general. Instead we have that non-unit condition as above. Is there a way to adapt this proof, or a totally different proof? REPLY [4 votes]: No this is not true. Take $M=N=\mathbb Z$ over the ring $\mathbb Z$. Let $\{2, 3\}$ (resp. $\{2, 5\}$) be a minimal generating set of $M$ (resp. of $N$). Then $M\otimes N=\mathbb Z$ and the generating set obtained by tensoring the minimal ones is $\{4, 10, 6, 15\}$. 
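(A one-line sanity check, sketched in Python: a finite subset of $\mathbb Z$ generates $\mathbb Z$ exactly when its gcd is $1$, and here a proper subset already does, so the tensored set cannot be minimal.)

```python
from math import gcd
from functools import reduce

print(reduce(gcd, [4, 10, 6, 15]))   # 1: the whole set generates Z
print(reduce(gcd, [4, 10, 15]))      # 1: a proper subset already generates Z
```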
As $10=4+6$, the subset $\{4, 10, 15\}$ generates $M\otimes N$ and $\{ 4, 10, 6, 15\}$ is not minimal.<|endoftext|> TITLE: Integration by parts and Lebesgue-Stieltjes integrals QUESTION [8 upvotes]: I want to use integration by parts for general Lebesgue-Stieltjes integrals. The following theorem can be found in the literature: Theorem: If $F$ and $G$ are right-continuous and non-decreasing functions, we have that: $$ \int_{(a,b]}G(x)\text{d}F(x)=F(b)G(b)-F(a)G(a)- \int_{(a,b]}F(x-)\text{d}G(x),$$ where $F(x-)$ is the left limit of $F$ in $x$. Does the following result hold: Theorem: If $F$ and $G$ are left-continuous and non-decreasing functions, we have that: $$ \int_{[a,b)}G(x)\text{d}F(x)=F(b)G(b)-F(a)G(a)- \int_{[a,b)}F(x+)\text{d}G(x),$$ where $F(x+)$ is the right limit of $F$ in $x$. Is it possible to combine these results, i.e. to use integration by parts when $F$ is right-continuous and $G$ is left-continuous? REPLY [5 votes]: Variants of the above Lebesgue--Stieltjes partial integration results are given in Theorem 21.67(iv) (it holds for general BV functions, by Remarks 21.68) of Real and Abstract Analysis: A modern treatment of the theory of functions of ... by E. Hewitt and K. Stromberg, p. 419. They seem to answer your question, though at the cost of having terms like $(F(x+)+F(x-))/2$ in place of $F(x)$.<|endoftext|> TITLE: Why is every representation of $\textrm{GL}_n(\Bbb{C})$ completely determined by its character? QUESTION [8 upvotes]: I know that every (Lie group) representation of $\textrm{GL}_n(\Bbb{C})$ is completely reducible; this I believe comes from the fact that every representation of the maximal compact subgroup $\textrm{U}(n)$ is completely reducible. More explicitly, suppose $V$ is a representation of $\textrm{GL}_n(\Bbb{C})$. Then $V$ is also a representation of $\textrm{U}(n)$. By complete reducibility of the unitary group we know that there is a $\textrm{U}(n)$-invariant inner product such that if $U$ is any $\textrm{GL}_n$-invariant subspace of $V$ (and hence $\textrm{U}(n)$-invariant), there is an orthogonal complement $W$ such that $$V = U \oplus W$$ with $W$ invariant under $\textrm{U}(n)$. Now $W$, as a representation of the real Lie algebra $\mathfrak{u}(n)$, is invariant and hence invariant under the complexified Lie algebra $$\mathfrak{gl}_n = \mathfrak{u}(n) \oplus i \hspace{1mm} \mathfrak{u}(n).$$ Since $\textrm{GL}_n(\Bbb{C})$ is connected, $W$ is also invariant under $\textrm{GL}_n$, showing that every representation of it is completely reducible. Now I have read several textbooks on representation theory (e.g. Bump's Lie Groups, Procesi's book of the same name) and they all seem to tacitly assume that every representation of $\textrm{GL}_n$ is completely determined by its character; i.e. if two representations have the same character then they are isomorphic. Now in the finite groups case, we concluded this fact based on 1) Maschke's Theorem and 2) Linear independence of characters. We do not necessarily have 2), so how can we conclude the fact I said about $\textrm{GL}_n$? Thanks. REPLY [3 votes]: This boils down to facts about the representation theory of compact groups: there every complex representation is determined by its character. Now a representation of $GL(n,\mathbb C)$ is determined by a representation of its Lie algebra.
But this is the complexification of $u(n)$ and complex representations of a Lie algebra are in one to one correspondence with complex representations of its complexification.<|endoftext|> TITLE: Minimizing with Lagrange multipliers and Newton-Raphson QUESTION [8 upvotes]: I am writing a program minimizing a real-valued non-linear function of around 90 real variables subject to around 30 non-linear constraints. I found handy explanation in CERN's Data Analysis BriefBook. I've implemented it and it works, but I am not able to derive how they obtained the equations at the bottom. Could anyone please explain how it can be achieved? Copied from the link: Trying to minimize $f(x_1,...,x_n)$ subject to $c_1(x_1,...,x_n) = ... = c_m(x_1,...,x_n) = 0$. Reformulate as $$\partial F/\partial x_1 = \dots = \partial F/\partial x_n = \partial F/\partial\lambda_1 = \dots = \partial F/\partial\lambda_m = 0$$ for $$ F(x_1,\dots,x_n,\lambda_1,\dots,\lambda_m) = f(x_1,\dots,x_n) - \lambda^T c(x_1,\dots,x_n) $$ Using Lagrange multipliers $\lambda$ and Newton-Raphson, they arrive at (1): $$ A\Delta x - B^T\lambda = -a\\ B\Delta x = -c, $$ where $A$ is Hessian of $f$, $a = (\nabla f)^T$ is gradient of $f$ and $B=\nabla c$ is Jacobian of the constraints. I can't seem to follow them. The way I understand it, they're applying Newton-Raphson to solve $\nabla F = 0$. I believe that $$ \nabla F = \left(a - B^T \lambda \atop -c \right) $$ Then, we need to take derivative of $\nabla F$ for Newton-Raphson. First off, it seems to me they're only taking the derivative w.r.t. $x$ and not to $\lambda$. Why is that? Even so, this would lead to (1) if the derivative of $\nabla F$ was $\left(A \atop -B\right)$. However, while it's true that $\nabla a = A$ and $\nabla c = B$, I can't understand where the term $-B^T\lambda$ vanished to. Many thanks to anyone who could shed some light on this. REPLY [2 votes]: Check Sequential Quadratic Programming in any good optimization book. When there are only equality constraints, it really boils down to applying Newton's method to the system you gave. With $F(x,\lambda) := f(x) - \lambda^T c(x)$ (the Lagrangian), differentiating yields $$ \nabla F(x,\lambda) = \begin{bmatrix} \nabla f(x) - J(x)^T \lambda \\ - c(x) \end{bmatrix}, $$ where $$ J(x) = \begin{bmatrix} \nabla c_1(x)^T \\ \vdots \\ \nabla c_m(x)^T \end{bmatrix} $$ is the Jacobian of $c$ at $x$. Applying Newton's method to $\nabla F(x,\lambda) = 0$ requires that we differentiate one more time: $$ \begin{bmatrix} H(x,\lambda) & J(x)^T \\ J(x) & 0 \end{bmatrix} \begin{bmatrix} \Delta x \\ -\Delta \lambda \end{bmatrix} = - \begin{bmatrix} \nabla f(x) - J(x)^T \lambda \\ - c(x) \end{bmatrix}. $$ There are all sorts of difficulties that come in, ranging from the need to perform a linesearch for a merit function, to avoiding the Maratos effect. A good book such as Numerical Optimization (Nocedal & Wright, Springer) will explain everything.<|endoftext|> TITLE: Is there a common notation of the power set excluding the empty set? QUESTION [10 upvotes]: For any set $S$, $\mathcal{P}(S)$ denotes the power set of $S$ and $\emptyset \in \mathcal{P}(S)$ always holds. Essentially, I want to denote the set that equals the power set (of some $S$) but excluding the empty set. I was thinking about writing $\mathcal{P}^+$ and defining that (as $\mathcal{P}^+(S) := \mathcal{P}(S) - \emptyset = \mathcal{P}(S)\setminus \{\emptyset\}$), but this could be a common enough thing that someone already established a notation for it. Wikipedia et al. 
don't mention anything, but maybe there is something nevertheless. I would prefer to use an established notation if there is one (while still defining what I mean). REPLY [3 votes]: I would suggest using something close to standard notation, e.g. $\mathcal{P}_{\lt \omega}(X)$ or $[X]^{\lt \omega}$ is used for the set of finite subsets of $X$, so why not $\mathcal{P}_{\geq 1}(X)$ or $[X]^{\geq 1}$?<|endoftext|> TITLE: What is the definition of the well-founded part of a model of set theory? QUESTION [5 upvotes]: I've been trying to understand John Steel's various notes on inner model theory, but the one thing that trips me up is what he calls the well-founded part of a model of set theory. What exactly is the well-founded part of a model? If someone could give me a precise definition (maybe it can be defined using transitive closures, but I don't really know) of the well-founded part of a model, it'd be greatly appreciated. Addendum The well-foundedness that I'm referring to is not the internal well-foundedness that comes from assuming the Axiom of Regularity within the model. It's an external property, as viewed from outside the model. REPLY [3 votes]: This is a definition taken from the proof of Theorem 47 of Azriel Lévy’s monograph, A Hierarchy of Formulas in Set Theory (Memoirs of the AMS, Number 57). Definition. Let $ M $ be a set, and $ E $ a binary relation on $ M $. A subset $ X $ of $ M $ is called $ E $-transitive if and only if $$ (\forall x,y \in M)(((y \in X) \land ((x,y) \in E)) \to (x \in X)). $$ The $ E $-transitive closure of an element $ x $ of $ M $ is defined as the following subset of $ M $: $$ \{ y \in M \mid (\forall X) ( ((x \in X \subseteq M) \land (X ~ \text{is} ~ E \text{-transitive})) \to (y \in X) ) \}. $$ A subset $ X $ of $ M $ is called $ E $-well-founded if and only if for every non-empty subset $ Y $ of $ X $, there exists a $ y \in Y $ such that $ (x,y) \notin E $ for every $ x \in Y \setminus \{ y \} $. The $ E $-well-founded part of $ M $ is finally defined as the following subset of $ M $: $$ \{ x \in M \mid \text{The} ~ E \text{-transitive closure of} ~ x ~ \text{is} ~ E \text{-well-founded} \}. $$<|endoftext|> TITLE: Self-Studying Abstract Algebra; Aye or Nay? QUESTION [26 upvotes]: I am a high schooler with a deep interest in mathematics, which is why I have self-studied Linear Algebra and have begun my self-study in Differential Equations. As I am a man who likes to plan ahead, I'm pondering what field of mathematics to plunge into once I've finished DE's. I am thinking of Abstract Algebra: it has always sounded mystical and intruiging to me for some reason. I have a couple of questions regarding AA: What exactly is Abstract Algebra? What does it study? Please use your own definition, no wikipedia definition please. What are its applications? Does it have a use for example in physics or chemistry, or is it as abstract as its name suggests? Would it be a logical step for a high schooler to self-study abstract algebra after studying LA and DE's, or is there a field of (post-high school) math 'better' or more useful to study prior to abstract algebra? What are some good books, pdfs, open courseware etc. on abstract algebra? links and names please. REPLY [4 votes]: Here is a link to Harvard Math 122 (Extension 222) taught by one of the greats - Benedict Gross. It offers a full series of videos along with complete notes taken by a very competent GSI which makes for a very thorough presentation of the first algebra course. 
http://www.extension.harvard.edu/open-learning-initiative/abstract-algebra It follows "Artin", which I think is an excellent entry point for intuitive understanding. (An echo of the above remark regarding Dummit and Foote - which is much better when you get further in rings, modules, etc.) Should you choose to go this route and buy the text, I would recommend the more recent 2nd edition. While I'm no expert on the math curriculum, most programs entail analysis, topology, and algebra. If you haven't yet undertaken real analysis, here is a link to a free downloadable, beautifully transcribed lecture course by Fields Medal winner Vaughan Jones. This too I assure you is outstanding, and may be your best next step. https://sites.google.com/site/math104sp2011/lecture-notes Lastly, not knowing what you used to study linear algebra, you might want to take a look at Axler's "Linear Algebra Done Right." I would do that after the real analysis and before the algebra material. P.S. Since these materials are available at no cost, you might want to just take a taste; like chocolate, eating is believing.<|endoftext|> TITLE: determinant of permutation matrix QUESTION [7 upvotes]: It's a well known fact that $\det(P)=(-1)^t$, where $t$ is the number of row exchanges in the $PA=LU$ decomposition. Can somebody point me to a (semi) formal proof as to why it is so? REPLY [10 votes]: Perhaps you can elaborate on what exactly is confusing to you. An elementary row switch matrix has determinant $-1$. A permutation matrix is just a product of such elementary matrices, so every row switch introduces a factor of $-1$. If you have $t$ row switches, then $$P = E_t\cdots E_2E_1 \implies \det(P) = \prod^t_{i=1}\det(E_i)=(-1)^t$$<|endoftext|> TITLE: Derivation of multivariable Taylor series QUESTION [13 upvotes]: I am having trouble grokking why it is, assuming that the function is analytic everywhere (and many other assumptions that I am, no doubt, naively assuming), that this is true: $f(x,y)=f(x_0,y_0)+[f'_x(x_0,y_0)(x-x_0)+f'_y(x_0,y_0)(y-y_0)]+\frac{1}{2!}[f''_{xx}(x_0,y_0)(x-x_0)^2+2f''_{yx}(x_0,y_0)(x-x_0)(y-y_0)+f''_{yy}(x_0,y_0)(y-y_0)^2]+...$ I am familiar with the one-variable Taylor series, and intuitively feel why the 'linear' multivariable terms should be as they are. In short, I ask for a proof of this equality. If possible, it would be nice to have an answer free of unnecessary compaction of notation (such as a table of partial derivatives). As an auxiliary question, I see a direct analogy with the first 2 terms $f(x,y)=f(x_0,y_0)+[f'_x(x_0,y_0)(x-x_0)+f'_y(x_0,y_0)(y-y_0)]$ and the total differential $f(x,y)-f(x_0,y_0)=\Delta f(x,y)=f'_x(x_0,y_0)\Delta x+f'_y(x_0,y_0)\Delta y$. When $\Delta x $ and $\Delta y $ are not infinitesimally small, can I use the third term in the Taylor multivariable series to get closer to the real total differential? REPLY [6 votes]: I think the easiest way to understand this is coming from the place of operators and linear transformations. A Taylor series in one dimension can be understood by exponentiating the derivative operator: $$ f(x+a) = e^{a\frac{d}{dx}}f(x) = f(x) + af^\prime(x) + \frac{1}{2!}a^2f^{\prime\prime}(x)+... $$ You can see this in one way as follows. The infinitesimal (linear order) transformation $f(x+dx) = f(x) + dx f^\prime(x)$ is known, and we can build up the finite transformation by an infinite succession of infinitesimal transformations: $$ f(x+a) = \lim_{N\rightarrow\infty} \left(1+\frac{a}{N}\frac{d}{dx}\right)^N f(x) = e^{a\frac{d}{dx}} f(x).
$$ It is straightforward to extend this to multiple variables if we know the infinitesimal transformation (sometimes referred to as the generator), which you intuitively know as, $f(x+dx, y+dy) = f(x,y) + dx\frac{\partial}{\partial x}f(x,y) + dy\frac{\partial}{\partial y}f(x,y)$. The finite transformation is then, $$ f(x+a,y+b) = e^{a\frac{\partial}{\partial x}+b\frac{\partial}{\partial y}} f(x,y)\\ = \left[1+a\frac{\partial}{\partial x} + b\frac{\partial}{\partial y} + \frac{1}{2!}\left(a^2\frac{\partial^2}{\partial x^2} + 2a\frac{\partial}{\partial x}b\frac{\partial}{\partial y}+ b^2\frac{\partial^2}{\partial y^2}\right) + ...\right]f(x,y). $$<|endoftext|> TITLE: Invertible matrices over a commutative ring and their determinants QUESTION [21 upvotes]: Why is it true that a matrix $A \in \operatorname{Mat}_n(R)$, where $R$ is a commutative ring, is invertible iff its determinant is invertible? Since $\det(A)I_n = A\operatorname{adj}(A) = \operatorname{adj}(A)A$ then I can see why the determinant being invertible implies the inverse exists, since the adjoint always exists, but I can't see why it's true the other way around. REPLY [23 votes]: The determinant of the identity is always the multiplicative identity of the underlying ring and consequently the multiplicative property of the determinant implies that the determinant of an invertible matrix is itself invertible. In essence $$AB = I \implies \det(A)\det(B) = 1$$ so that $\det(A)$ and $\det(B)$ are units.<|endoftext|> TITLE: $D^m\cup_h D^m$, joining $D^m \amalg D^m$ along the boundary $\partial D^m$ QUESTION [7 upvotes]: Given an orientation-preserving diffeomorphism $h: \partial D^m \to \partial D^m$, we can glue two copies of the closed unit disk $D^m$ along the boundary by identifying $x \sim h(x)$ to form the quotient space $$\Sigma(h) := (D^m \amalg D^m)/\sim$$ Now we can give this quotient a smooth structure such that the obvious inclusions $D^m \hookrightarrow \Sigma(h)$ are smooth embeddings and in fact it turns out that for any two smooth structures, there exists a diffeomorphism between them. So $\Sigma(h)$ is a unique manifold up to diffeomorphism. So far, so good. Now, in Kosinski's 'Differential Manifolds', there is the following Lemma: Lemma: $\Sigma(h)$ is diffeomorphic to $S^m$ if and only if $h$ extends over $D^m$. Moreover, $\Sigma(gh) = \Sigma(h)\# \Sigma(g)$. Here $M\# N$ denotes the connected sum of two manifolds as usual. The proof of this is left as an exercise for the reader, but I'm unsure, how one might construct an extension of $h$ to $D^m$, given that $\Sigma(h)$ is diffeomorphic to $S^m$? I know that in this case $h(\partial D^m)$ necessarily separates $\Sigma(h) = S^m$ into two components, and since $h(\partial D^m)$ is an embedded compact $(m-1)$-manifold (which is smooth), I can also prove that $h(\partial D^m)$ is the boundary of both connected components of its complement. But at this point I get lost. Is it clear that these two components are diffeomorphic to disks? Where might I find a proof of this? I'm fine with the other parts of this Lemma, but I just don't see how to extend $h$ given $\Sigma(h) = S^m$. If you could help me out, this would be very much appreciated. Thank you for your help! REPLY [3 votes]: If your sphere $\Sigma(h)$ is diffeomorphic to a standard sphere, consider a diffeomorphism $\Sigma(h) \to S^m$. The two discs in $\Sigma(h)$ are smooth discs, so after applying the diffeomorphism, they're smooth discs in $S^m$. But smooth discs are tubular neighbourhoods of their centres. 
So they're unique up to embedded isotopy. In particular, this means that by the isotopy extension theorem you can isotope your diffeomorphism $\Sigma(h) \to S^m$ so that it sends the `bottom' disc $D^m$ in $\Sigma(h)$ to the lower hemi-sphere of $S^m$, moreover, you can ensure your diffeo $\Sigma(h) \to S^m$ is a standard diffeomorphism between the lower $D^m$ and the lower hemi-sphere. But now the upper $D^m$ in $\Sigma(h)$ is identified (via a diffeomorphism) with the upper hemi-sphere in $S^m$. So compose that map with a standard diffeomorphism between the upper-hemisphere and a $D^m$. Provided you choose it appropriately on the boundary, this is by design your extension of $h : \partial D^m \to \partial D^m$ to a diffeomorphism $\overline{h} : D^m \to D^m$. So above, when I talk about `standard' diffeomorphisms between $D^m$ and the lower/upper hemi-spheres of $S^m$ what I mean is that $\partial D^m \times \{0\} = \partial H$ (set equality) where $H \subset S^m$ is either the upper or lower hemi-sphere in $S^m$. So to be standard I mean the diffeo must be the identity on the boundary in this sense. Cerf went further than this, his pseudo-isotopy theorem now says that $h$ is isotopic to the identity on $\partial D^m$, provided $m \geq 6$.<|endoftext|> TITLE: Transforming a distance function to a kernel QUESTION [13 upvotes]: Fix a domain $X$: Let $d : X \times X \rightarrow \mathbb{R}$ be a distance function on $X$, with the properties $d(x,y) = 0 \iff x = y$ for all $x,y$ $d(x,y) = d(y,x)$ for all $x,y$ Optionally, $d$ might satisfy the triangle inequality (making it a metric), but this is not necessarily the case. In machine learning, a kernel is a function $K : X \times X \rightarrow \mathbb{R}$ with the properties $K(x,y) = 1 \iff x = y$ $K(x,y) = K(y,x)$ for all $n$ and all sequences $x_1, \ldots, x_n$, the Gram matrix $\mathbf{K}$ with $\mathbf{K}_{ij} = K(x_i, x_j)$ is positive definite. A space that admits a kernel is handy in ML, and often if all you have is a distance function, you can compute a kernel-like object in the form $$ \kappa_d(x, y) = \exp(- \gamma d^2(x,y)) $$ The question: Under what conditions on $d$ is the resulting function $\kappa_d$ a kernel ? REPLY [13 votes]: It seems to me you might be looking for a variant of Schoenberg's theorem: (Think $\Psi(x,y) = d(x,y)^2$ in the following). The function $\Psi \colon X \times X \to \mathbb{R}$ is said to be a kernel of conditionally negative type if $\Psi(x,x) = 0$ for all $x \in X$. $\Psi(x,y) = \Psi(y,x)$ for all $x,y \in X$. For all $n \in \mathbb{N}$, all $x_1,\dots,x_n \in X$ and all $c_1,\dots,c_n \in \mathbb{R}$ such that $\sum_{i=1}^n c_i = 0$ the inequality $$ \sum_{i,j=1}^n c_i c_j \Psi(x_i,x_j) \leq 0 $$ holds. A positive semidefinite kernel is a kernel in your sense but without the condition $K(x,x) = 0$ iff $x = 0$ (presumably you meant $K(x,x)=1$ here, otherwise this would be incompatible with positive definiteness) and with positive definiteness weakened to positive semi-definiteness. Theorem. For a symmetric function $\Psi\colon X \times X \to \mathbb{R}$ with $\Psi(x,x) = 0$ for all $x$ the following are equivalent: $\Psi$ is a kernel of conditionally negative type. The function $K(x,y) = \exp(-\gamma \Psi(x,y))$ is a positive semidefinite kernel for all $\gamma \geq 0$. You can find a proof of this as Theorem C.3.2 on page 370 of the book Kazhdan's Property $(T)$ by Bekka, de la Harpe, and Valette, (link goes to Bekka's homepage). The rest of this appendix might have some useful information, too. 
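A quick numerical illustration of the theorem above (an editorial addition, not part of the original reply): the squared Euclidean distance is a kernel of conditionally negative type, so $\exp(-\gamma\, d^2)$ should come out positive semidefinite for every $\gamma \ge 0$ (this is just the familiar Gaussian/RBF kernel). A minimal sketch, assuming numpy and randomly generated points:

```python
# Sanity check: for squared Euclidean distance (conditionally negative type),
# K = exp(-gamma * d^2) should be positive semidefinite for every gamma >= 0.
# Assumes numpy; the data points are random and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                # 50 random points in R^3

# Pairwise squared Euclidean distances.
diff = X[:, None, :] - X[None, :, :]
D2 = np.sum(diff ** 2, axis=-1)

for gamma in [0.1, 1.0, 10.0]:
    K = np.exp(-gamma * D2)
    min_eig = np.linalg.eigvalsh(K).min()   # should only dip below 0 by round-off
    print(gamma, min_eig)
```

If a candidate distance fails this check for some $\gamma$, it cannot be of conditionally negative type, which is the practical content of the Schoenberg-type criterion for the question asked here.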
Added: Schoenberg's original article: Metric spaces and positive definite functions, Trans. Amer. Math. Soc. 44 (1938), 522-536.<|endoftext|> TITLE: What are mandatory conditions for a family of matrices to commute? QUESTION [10 upvotes]: Suppose that there are some matrices. Each matrix in the set must commute with every other matrix in the set. What are the mandatory conditions for this? REPLY [3 votes]: I'll try to rephrase the answer by Will Jagy a bit more explicitly, and add some more detail. Characterising all sets of commuting matrices (other than by the condition that they commute) won't be easy, because given any set of commuting matrices (and there are certainly such sets that are infinite), then any subset of it will also be a set of commuting matrices. So it is more fruitful to ask for maximal commuting sets of matrices, sets for which nothing outside the set commutes with all of them (for one could then add such a matrix). Now given any set of commuting matrices, we can always take scalar multiples of one of them, or sums of several of them, to get other matrices that commute with all of them; therefore a maximal set of commuting matrices must be a subspace of the vector space of all matrices (i.e., it is closed under linear combinations). Moreover, it must also be closed under products of matrices (the technical term is that it must be a subalgebra of the algebra of all square matrices). Given any one matrix $A$, the smallest subalgebra that contains $A$ is the set of polynomials in $A$, the linear combinations of powers $A^i$, including $A^0=I_n$. Correction: I wrote here earlier that every maximal commutative subalgebra has dimension $n$ and is the set of polynomials in one matrix (I thought that this was implied by the answer by Will Jagy, but it clearly isn't, and I thought that I could give a somewhat complicated argument for it, but I can't). This is not true. Indeed, there is a $5$-dimensional commutative subalgebra of $M_4(K)$ of matrices of the form $$ \begin{pmatrix}x&0&a&b\\0&x&c&d\\0&0&x&0\\0&0&0&x\end{pmatrix} $$ and for dimension reasons it cannot be the set of polynomials of any one matrix. A question to which I do not know the answer is: can two commuting matrices $X,Y$ always be expressed as polynomials of some third matrix $A$? I initially thought, inspired by the above, that $$ X=\begin{pmatrix}0&0&1&0\\0&0&0&1\\0&0&0&0\\0&0&0&0\end{pmatrix},\qquad Y=\begin{pmatrix}0&0&1&0\\0&0&0&-1\\0&0&0&0\\0&0&0&0\end{pmatrix}, $$ provided a negative answer; however it turns out that for $$ A=\begin{pmatrix}1&0&1&0\\0&0&0&1\\0&0&1&0\\0&0&0&0\end{pmatrix} $$ (which is not in the commutative subalgebra above) one has $Y=A(A-I)=A^2-A$ and $X=A(A-I)(2A-I)=2A^3-3A^2+A$.<|endoftext|> TITLE: Big-O notation Basics, is it related to derivatives? QUESTION [8 upvotes]: I am having the hardest time with Big-O notation (I am using this Rosen book for the class I am in). On the surface, Big-O reminds me of derivatives, rate of change and what not; is this proper thinking? If $f(n)$ is $O(g(n))$, would the derivatives have any effect on this? Essentially, is there a good resource for learning Big-O for the first time? If I misunderstand this forum and need a specific question, then: Prove that if $f(n)\le g(n)$ for all $n$, then $f(n) + g(n)$ is $O(g(n))$. (I'd rather gain an understanding of how to do this than to have an answer to a problem).
EDIT: My attempt at the answer to my specific question using l'Hôpital: $$\lim_{x\to\infty} \frac{f'(x)}{f'(x) + g'(x)} = \lim_{x\to\infty} \frac{1}{g'(x)}.$$ REPLY [2 votes]: The definition of the derivative can be expressed using asymptotic notation. We say $f$ has a derivative at $x$ if there exists $M$ such that: $$f(x+\epsilon) = f(x) + M\epsilon + o(\epsilon)$$ We denote this $M$ as $f'(x)$ (edited as per Antonio's correction)<|endoftext|> TITLE: How is $dx \over dy$ different from $\partial x \over \partial y$? QUESTION [5 upvotes]: Say I have variables $x,y_1,y_2,z_1,z_2$, all $\in \mathbb{R}$, and I have the following equations: $$x = f_1(y_1,y_2)$$ $$y_1 = f_2(z_1,z_2)$$ How does: $$dx \over dz_1$$ differ from: $$\partial x \over \partial z_1$$ or am I confused? Intuitively I just want to think about how $x$ varies in proportion to an infinitesimally small perturbation of $z_1$, so I don't understand the difference between the two different notations (nonpartial vs partial)? REPLY [2 votes]: When a function is defined on more than one variable we use $\cfrac{ \partial f }{\partial x}$ to denote the partial derivative of $f$ with respect to one of its variables $x$ while holding the other variables constant. This is a question of notation. Using $\cfrac {df} {dx}$, the total derivative, will show anyone that sees it that $f$ is only defined on the variable $x$ or that the other variables of $f$ are functions also defined on $x$. Say, for example, a function $f(x_1, x_2, x_3)$ such that $x_1$, $x_2$, $x_3$ are independent. Then $$\cfrac {\partial f}{\partial x_1} =\left ( \cfrac {\partial f}{\partial x_1}\right )_{x_2, x_3}$$ is the partial derivative of $f$ with respect to $x_1$ holding $x_2, \space x_3$ constant. But consider a function $f(t,x_1(t), x_2(t), x_3(t))$; then $$ \cfrac {df}{dt} = \cfrac {\partial f}{\partial t}\cfrac {dt}{dt}+\cfrac {\partial f}{\partial x_1}\cfrac {dx_1}{dt}+\cfrac {\partial f}{{\partial x_2}}\cfrac {dx_2}{dt}+\cfrac {\partial f}{{\partial x_3}}\cfrac {dx_3}{dt}$$ is the total derivative of $f$ with respect to $t$. Now let's compute the total derivative of the first function $f(x_1, x_2, x_3)$ with respect to $x_1$: $$\cfrac {df}{dx_1} =\cfrac {\partial f}{\partial x_1}\cfrac {dx_1}{dx_1}+\cfrac {\partial f}{\partial x_2}\cfrac {dx_2}{dx_1}+\cfrac {\partial f}{{\partial x_3}}\cfrac {dx_3}{dx_1} = \cfrac {\partial f}{\partial x_1},$$ which is basically the same thing, but only because the variables are independent of one another. It changes if one of them, say $x_3$, is a function of $x_1$; then $\cfrac {dx_3}{dx_1} \ne 0$ and $$\cfrac {df}{dx_1} = \cfrac {\partial f}{\partial x_1} + \cfrac {\partial f}{\partial x_3}\cfrac {dx_3}{dx_1}. $$ See Partial Derivatives and Total Derivatives.<|endoftext|> TITLE: How many cardinals are there? QUESTION [8 upvotes]: I'm trying to do the following exercise: EXERCISE 9(X): Is there a natural end to this process of forming new infinite cardinals? We recommend this exercise instead of counting sheep when you have trouble falling asleep. (This is from W. Just and M. Weese, Discovering Modern Set Theory, vol.1, p.34.) By this process they mean $|\mathbb N| < |\mathcal P(\mathbb N)| < |\mathcal P (\mathcal P (\mathbb N))| < \dots$. My first response to this was "Obviously there is no end to it." but then the exercise is supposed to be challenging ("X-rated") so this must be wrong and there is an end to it. But when exactly? How many cardinals are there? What would be a "natural end"? Thank you for your help!
REPLY [4 votes]: This is a matter of philosophical approach to what it means "a natural end". After all, there is no natural end to the generation of natural numbers -- but certainly they end somewhere because we know that most ordinals are uncountable. On the same note generation of cardinals does not end as long as you have ordinals to iterate through (either in the power set operation; or in the Hartogs number operation; or limits of these constructions). In this sense there is no natural end to the process of generating new cardinals. However if one thinks of operations performed along the class of ordinals as operations which terminate when you "run out of ordinals", and if you can run out of natural numbers there is no reason you cannot run out of ordinals as well, in such case there is in fact a natural ending to the generation of new sets and therefore of new cardinals.<|endoftext|> TITLE: Is there a $G_\delta$ set with Positive Measure and Empty Interior? QUESTION [5 upvotes]: It is like in the title. $G_\delta\subset\mathbb{R}^n$ with Lebesgue measure. Thanks for any help REPLY [9 votes]: Yes: any fat Cantor set in $\Bbb R$ is an example, since all closed sets in $\Bbb R$ are $G_\delta$’s. REPLY [6 votes]: Yes. $\mathbb R\setminus \mathbb Q$. REPLY [2 votes]: Yes. For $n\in\mathbb N$ let $F_n = \{x\in(0,1)|\;\exists m\in\mathbb Z:x=\frac{m}{n}\}$ and $U_n=(0,1)\setminus F_n$. The sets $U_n$ are open and have measure $1$, but the intersection $A=\bigcap_{n=1}^{\infty}U_n$ is simply the set of all irrational numbers in $(0,1)$. It is thus a $G_\delta$ set with measure $1$ and empty interior.<|endoftext|> TITLE: Showing that projections $\mathbb{R}^2 \to \mathbb{R}$ are not closed QUESTION [10 upvotes]: Consider $\mathbb{R}^2$ as $\mathbb{R} \times \mathbb{R}$ with the product topology. I am simply trying to show that the two projections $p_1$ and $p_2$ onto the first and second factor space respectively are not closed mappings. It seems like this should be easy, but I have not been able to come up with a closed set in $\mathbb{R}^2$ whose projection onto one of the axes is not closed. I don't really have any work to show...I've really just tried the obvious things like closed rectangles and unions of such, the complement of an open rectangle or union of open rectangles, horizontal and vertical lines, unions of singletons, etc., and haven't come up with anything non-obvious, which I hope is where the answer lies. It's bothering me that I can't come up with an answer, and I'd appreciate some help. Thanks. REPLY [4 votes]: Consider the map $\phi : \mathbb{R}^2 \to \mathbb{R}$ that takes: $$(x,y) \mapsto x \cdot y.$$ This map is continous, thus $\phi^{-1}(1)$ is closed in $\mathbb{R}^2$. On the other hand its projection onto the first coordinate is $\mathbb{R} -\{0\}$ which is not closed because it's open and $\mathbb{R}$ is connected.<|endoftext|> TITLE: Identification of integration on smooth chains with ordinary integration QUESTION [20 upvotes]: Let $M$ be a smooth oriented $n$-dimensional manifold and denote by $A \in H_n(M;\mathbb{Z})$ the fundamental class of $M$ (a generator of singular homology consistent with the orientation of $M$). Consider the following two maps from the top de-Rham cohomology group $H^n_{\mathrm{dR}}(M)$ to $\mathbb{R}$: Regular integration of $n$-forms: $\omega \mapsto \int_M \omega$. This is defined using partition of unity and descends to cohomology by Stokes's theorem. 
Represent $A$ as a smooth chain $A = [\sum a_i \sigma_i]$ where $\sigma_i : \Delta^n \rightarrow M$ are smooth $n$-simplices and integrate $\omega$ by $$ \omega \mapsto \int_{\sum a_i \sigma_i} \omega := \sum a_i \int_{\Delta^n} \sigma_i^*(\omega). $$ This is well defined and independent of the representation of $A$ again by Stokes's theorem for chains. Both maps are $\mathbb{R}$-linear maps from a one dimensional real vector space to $\mathbb{R}$ and so are a real multiple of one another. Why are they equal? This should probably involve some careful tracing of definitions, identifications and dualities, but I can't put my finger on what is the crux of the matter. REPLY [4 votes]: We may assume that $A=[\sum\sigma_i]$ is actually just represented as the sum of all the $n$-simplices in some oriented smooth triangulation of $M$. Since the two integration maps differ by a constant multiple, it suffices to compare them on just a single $n$-form $\omega$ with nonzero integral. Pick $\omega$ to be supported on the interior of $\sigma_1$ such that $\int_{\Delta^n} \sigma_1^*(\omega)=1$. We then have $\int_{\Delta^n} \sigma_i^*(\omega)=0$ for $i\neq 1$, so your second integral sends $\omega$ to $1$. We now compute $\int_M \omega$ as follows. Note that $\sigma_1$ itself is an oriented smooth chart of $M$, when restricted to the interior of $\Delta^n$. We can now construct a partition of unity on $M$ subordinate to a covering by oriented charts which has as one of its functions a bump function $f$ supported on the interior of $\sigma_1$ which is $1$ on the entire support of $\omega$. When we compute $\int_M\omega$ using this partition of unity, all the terms vanish except the one corresponding to $f$, since $f$ is the only one that does not vanish on the support of $\omega$. By definition, we then have $$\int_M\omega=\int_{\Delta^n}\sigma_1^*(f\omega)=\int_{\Delta^n}\sigma_1^*(\omega)=1.$$ Thus both integrals send $\omega$ to $1$, so they are equal.<|endoftext|> TITLE: Entropy of matrix QUESTION [13 upvotes]: I am trying to understand entropy. From what I know, we can get the entropy of a variable, let's say $X$. What I don't understand is how to calculate the entropy of a matrix, say $m \times n$. I thought that if the columns are the attributes and the rows are objects, we can sum the entropy of the individual columns to get the final entropy (provided the attributes are independent). I have a couple of questions: Is my understanding right in the case of independent attributes? What if the attributes are dependent? What happens to the entropy? Is this where conditional entropy comes in? Thanks REPLY [7 votes]: You may be interested in the Von Neumann entropy of a matrix, which is defined as the sum of the entropies of the eigenvalues. I.e., for $$A = P \begin{bmatrix}\lambda_1 \\ & \lambda_2 \\ && \ddots \\ &&& \lambda_n \end{bmatrix} P^{-1}$$ with positive $\lambda_i$, the entropy is $$H(A):=-\sum_i \lambda_i \log \lambda_i.$$ For more on the definition of the von Neumann entropy you might look here on wikipedia, and for how to maximize it numerically you could look at my answer on this Computer Science stack exchange thread. For rectangular matrices, you could extend the definition by replacing the eigenvalues with singular values in the SVD, though it's not clear what this would mean.<|endoftext|> TITLE: How do you detect if a point is in a plane? QUESTION [6 upvotes]: Let's say we have 3 points: (-2,7,4), (-4,5,2), (3,8,5) and we want to see if a fourth point, (2,6,3), is in the plane that the previous 3 points made.
How would I go about doing this? REPLY [5 votes]: Let $(x_1, y_1, z_1)$, $(x_2, y_2, z_2)$, and $(x_3, y_3, z_3)$ be the three given points, and $(a,b,c)$ be the fourth. A little linear algebra shows that $(a,b,c)$ is in the plane if and only if the following matrix has rank 2 (assuming of course that the three given points are not collinear): $$ \left[\begin{array}{ccc} x_2-x_1 & y_2-y_1 & z_2-z_1 \\ x_3-x_1 & y_3-y_1 & z_3-z_1 \\ a-x_1 & b-y_1 & c-z_1 \end{array}\right] $$ Hope this helps! REPLY [3 votes]: Let $x_1,x_2,x_3 \in \mathbb{R}^3$ be three non-collinear points. Let $n = (x_2-x_1) \times (x_3-x_1)$, $\hat{n} = \frac{1}{\|n\|} n$, and $d = \langle x_1, \hat{n} \rangle$. Then $|\langle x, \hat{n} \rangle -d|$ is the distance from the point $x$ to the plane spanned by $x_1,x_2,x_3 $. Typically, you need to decide numerically whether or not the point is actually on the plane. In your case we can perform the calculations exactly: $n = (0, -8, 8)^T$, $\hat{n} = \frac{1}{8\sqrt{2}} (0, -8, 8)^T$, $d = -\frac{3}{\sqrt{2}}$. If we let $x = (2,6,3)^T$, we have $\langle x, \hat{n} \rangle = -\frac{3}{\sqrt{2}}$, hence the distance to the plane = $|-\frac{3}{\sqrt{2}}+\frac{3}{\sqrt{2}}| = 0$, so the point is on the plane. In fact, with a little more calculation, you can show that $x = \frac{1}{8} (-11 x_1 + 9 x_2 + 10 x_3)$, and since the coefficients sum to $1$, $x$ lies in what is known as the affine hull of $x_1,x_2,x_3$, which in this case is the plane containing the points. To complete the connection, you can show that $\langle x-x_1, n \rangle = \det A$, where $A = \begin{bmatrix} x_2-x_1 & x_3-x_1 & x - x_1 \end{bmatrix}$. Note that $A$ is the transpose of the matrix in Shaun's answer. The catch here is that you need to scale by $\frac{1}{\|n\|}$ to get a numerically meaningful answer. REPLY [2 votes]: Use the given three points to find the equation of the plane, $ax+by+cz=d$. Then plug in the fourth point and see if it satisfies the equation.<|endoftext|> TITLE: Existence of non-atomic probability measure for given measure zero sets QUESTION [7 upvotes]: Let $\Omega$ be a set and $\Sigma$ be a $\sigma$-algebra of subsets of $\Omega$. Let $N$ be a collection of measurable subsets of $\Sigma$. Question: What conditions on $\Sigma$ and $N$ guarantee that there exists a non-atomic probability measure $\mu:\Sigma\to [0,1]$ such that for any $E\in \Sigma$ if $\mu(E)=0$, then $E\in N$ ? Edited to make question coherent. REPLY [2 votes]: Thanks Michael Greinecker and commenter. The main practical problem for me in applying commenter's idea was that the weak $\sigma$-distributive property in Maharam's 1947 paper, in Kelley's paper, and in Todorcevic's amazing paper of 2004 on measure algebras may not hold if we choose an arbitrary $\sigma$-ideal $J$ in $N$ (and it certainly has no clear meaning for what I am doing). In the end, the best fit for my work was Ryll-Nardzewski's result published in the addendum section of Kelley and not Kelley's result with the distributive property. 1) There exists a sequence $B_n$ of families of subsets of $\Sigma$ such that $(\Sigma\setminus N)\subseteq \bigcup_{n} B_n$. 2) Each $B_n$ has a positive intersection number (as in Kelley). 3) Each $B_n$ is open for increasing sequences; (if $E_m\uparrow E\in B_n$, then eventually $E_m\in B_n$). The final condition (3) of Nardzewski guarantees that $\Sigma\setminus \bigcup_{n} B_n$ is a $\sigma$-ideal. 
Condition (2) guarantees that there is a finitely additive (positive) probability measure $\nu_n$ on $\Sigma$ that is bounded away from zero on $B_n$. Condition (3) tells us that from $\nu_n$ we can define a countably additive probability measure $\mu_n$ that also measures elements of $B_n$ positively. Letting $\mu= \sum_{n=1}^\infty 2^{-n} \mu_n$, we have the required measure. For the converse, suppose that $\mu$ is the required measure; letting $B_n=\{ \mu>1/n\}$, we see that (1), (2), and (3) hold.<|endoftext|> TITLE: the intersection of n disks/circles QUESTION [7 upvotes]: Given that $n$ disks/circles share a common area, meaning that every two of them intersect or overlap one another, and we know their coordinates $(x_{1},y_{1},r_{1})$, $(x_{2},y_{2},r_{2})$, ..., $(x_{n},y_{n},r_{n})$, where $x_{i}$,$y_{i}$,$r_{i}$ represent the $x$ axis coordinate, the $y$ axis coordinate, and the radius of the $i$-th disk/circle, respectively, can you provide a method to calculate the coordinates of the centroid of the intersection of these disks/circles? REPLY [7 votes]: The intersection of $n$ disks is a convex shape bounded by circular arcs that meet at vertices where two or more circles intersect. The first thing to do is to identify these vertices and the arcs that connect them to each other. If no three circles are coincident,$^1$ then this isn't very hard: Consider every pair of circles; find their two intersection points, which form two candidate vertices; if a candidate vertex is inside all other circles, then it is indeed a vertex of the intersection. This gives you a set of vertices.$^2$ Each vertex lies on exactly two circles; using this connectivity, you can sort them in consistent order around the intersection shape. Now you can decompose the intersection shape into a number of pieces: (i) the polygon (shaded blue above) that connects the vertices, and (ii) a collection of circular segments (shaded red), one for each arc of the shape. You can find the areas and centroids of each piece using closed-form formulas (polygon, circular segment). The centroid of the full shape is then just the weighted average of the centroids of each of the pieces, weighted by area. $^1$If multiple circles are coincident on a vertex, then doing pairwise tests may not always get you a consistent set of vertices, especially if you're using floating-point arithmetic. You might be able to get around this to some extent by fudging the inside-circle test with some epsilon bias, but a truly robust solution would have to come from a computational geometry expert. $^2$It's possible that the set of vertices turns out to be empty. Then there are two possibilities: either the intersection is equal to one of the circles, or the intersection is empty. It is easy to distinguish between these two cases by checking if there exists a circle that is entirely inside all the others.
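As an editorial aside (not part of the original answer): if you implement the polygon-plus-circular-segments decomposition above, a crude Monte Carlo estimate is a handy way to cross-check it. The sketch below assumes numpy, and the three example disks are invented for illustration:

```python
# Brute-force Monte Carlo cross-check for the centroid (and area) of the
# intersection of n disks; useful for validating an exact implementation of the
# decomposition described above. Assumes numpy; the example disks are made up.
import numpy as np

def intersection_centroid_mc(disks, n_samples=200_000, seed=0):
    """disks: list of (x, y, r). Returns (centroid, area) of the common intersection."""
    rng = np.random.default_rng(seed)
    xs, ys, rs = np.array(disks).T
    # Sample uniformly in the intersection of the disks' bounding boxes,
    # which is guaranteed to contain the intersection of the disks.
    lo = np.array([np.max(xs - rs), np.max(ys - rs)])
    hi = np.array([np.min(xs + rs), np.min(ys + rs)])
    pts = rng.uniform(lo, hi, size=(n_samples, 2))
    inside = np.ones(n_samples, dtype=bool)
    for x, y, r in disks:
        inside &= (pts[:, 0] - x) ** 2 + (pts[:, 1] - y) ** 2 <= r ** 2
    box_area = np.prod(hi - lo)
    area = box_area * inside.mean()
    centroid = pts[inside].mean(axis=0)
    return centroid, area

# Example: three mutually overlapping disks (hypothetical data).
disks = [(0.0, 0.0, 1.0), (0.8, 0.0, 1.0), (0.4, 0.7, 1.0)]
c, a = intersection_centroid_mc(disks)
print("centroid ~", c, "area ~", a)
```

This is only a sanity check: the error shrinks like one over the square root of the sample count, so the closed-form decomposition remains the right tool when precision matters.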
<|endoftext|> TITLE: Entire function constant, where $f(z)=f(z+1)$ and $|f(z)|< e^{|z|}$. QUESTION [7 upvotes]: I came across this old exam problem. Suppose $f(z)$ is entire and $|f(z)|< e^{|z|}$, and also $f(z)=f(z+1)$. Show $f(z)$ is a constant. I am able to show the singularity at infinity is not a pole. But I can't rule out it being an essential singularity. REPLY [4 votes]: If you don't want to use the machinery of the extension theorem, define the function $$g(z):= \frac{f(z)-f(0)}{e^{-i2\pi z}-e^{i2\pi z}},$$ where $|f(z)| \leq e^{C |z|}$ (the denominator is a constant times $\sin$). Then it's easy to check that $g(z)$ is entire given the periodicity conditions on $f$. Then we can show that $g(z)$ is bounded on the strip $0 \leq \Re(z) \leq 1$ (and therefore everywhere). Let $z=a + bi$ (where $a$ and $b$ are real). Then $$|g(z)| = |g(a+ bi)| \leq \frac{e^{C|b|}+1}{e^{2 \pi b}-1}$$ when $b>0$ and $$ |g(z)| \leq \frac{e^{C|b|}+1}{e^{-2\pi b}-1}$$ when $b<0$, by using the reverse triangle inequality both ways in the denominator. Then if $C<2 \pi$, the denominator will dominate for $b$ large, so $g(z)$ is bounded at infinity. It is clearly bounded away from infinity. By Liouville, $g(z) = A$ for some constant, or $$f(z) = f(0) + A \sin(2\pi z).$$ From here it's not hard to show $A=0$.<|endoftext|> TITLE: Prove $e^n$ and $\ln(n)$, mod 1, for $n=2,3,4...$ is dense in $[0,1]$ QUESTION [8 upvotes]: How can one prove $e^n$ and $\ln(n)$, modulo 1, are dense in $[0,1]$, for $n=2,3,4...$? By dense is meant, for any $0\frac 12$ and $e^{M+\frac 12}-e^{M}$ numbers $n$ with $\ln(n)\bmod 1<\frac 12$. These counts differ by a factor of $\sqrt e$ and that will be the relative proportion the larger the range of $n$ one checks becomes. But they are dense in $[0,1]$ and that is the property you are looking for (as reflected by the edit of the question). For the logarithm: Let $\epsilon>0$ be given. Find $N$ such that $\frac1N<\epsilon$. Then $0<\ln(n+1)-\ln n<\frac1n<\epsilon$ for all $n>N$ (because the derivative of $\ln$ is the reciprocal). Therefore the numbers $\ln n\bmod1$ with $N TITLE: Eigenvalues of a bipartite graph QUESTION [8 upvotes]: Let $X$ be a connected graph with maximum eigenvalue $k$. Assume that $-k$ is also an eigenvalue. I wish to prove that $X$ is bipartite. Now if $\vec{x}=(x_1,\cdots ,x_n)$ is the eigenvector for $-k$ then I can show that for the vector $\vec{y}$ whose entries are $(|x_1|,\cdots ,|x_n|)$ we have $y'Ay=ky'y$. From here can I conclude that $\vec{y}$ is an eigenvector with eigenvalue $k$? How to proceed to prove this result? Thanks. REPLY [5 votes]: $A$ is symmetric, nonnegative, and irreducible. By a theorem of Perron and Frobenius, $k$ is a simple eigenvalue with a positive eigenvector $u$. Now with componentwise absolute value, $k|x|=|-kx|=|Ax|\le A|x|$. Multiplication with $u^T$ shows that we must have equality. Hence $|x|$ is an eigenvector, hence a multiple of $u$. Therefore $x$ has no zero component. Partition the nodes into $P$ (of size $p$) and $N$ (of size $n$), where $x_i>0$ if $i\in P$ and $x_i<0$ if $i\in N$. Then $v=x_P>0$ and $w=-x_N>0$. Partition $A$ conformally as $A=\pmatrix{B & C\\C^T & D}$ (of size $p+n\times p+n$, with $B$ of size $p\times p$), and note that $B,C,D$ are nonnegative. Then the equation $|Ax|= A|x|$ implies $|Bv-Cw|=Bv+Cw$ and $|C^Tv-Dw|=C^Tv+Dw$. Taking the squared norm and simplifying yields $(Bv)_i(Cw)_i=0$ for $i\in P$ and $(C^Tv)_k(Dw)_k=0$ for $k\in N$. Since $v,w>0$, $C_{ik}=1$ implies that $B_{ij}=0$ for $j\in P$ and $D_{kj}=0$ for $j\in N$. This means that for every edge $ik$ with $i\in P$ and $k\in N$, the neighbors of $i$ must lie in $N$ and the neighbors of $k$ must lie in $P$. Growing the graph starting with some such edge implies that its connected component is bipartite. On the other hand, if there is no such edge then $P$ and $N$ are unions of connected components. Since the graph was assumed connected, it follows that it is bipartite.<|endoftext|> TITLE: Evaluate $\sum_{k=1}^\infty \frac{k^2}{(k-1)!}$.
QUESTION [10 upvotes]: Evaluate $\sum_{k=1}^\infty \frac{k^2}{(k-1)!}$ I sense the answer has some connection with $e$, but I don't know how it is. Please help. Thank you. REPLY [2 votes]: Let's finish it in one line $$\sum_{k=1}^\infty \frac{(k-2)(k-1)+3(k-1)+1}{(k-1)!}=5e$$ Chris.<|endoftext|> TITLE: First Isomorphism Theorem QUESTION [7 upvotes]: Theorem: Let $G$ and $G'$ be groups and let $f:G\to G'$ be a group homomorphism. Then $G/\textrm{ker}\, f \cong\textrm{im}\, f$. My question is how to understand this theorem intuitively. REPLY [5 votes]: Maybe it gets more intuitive if you look at the situation with groups replaced by sets. If $f : M \rightarrow N$ is a map between sets $M$ and $N$, then you get an equivalenve relation $R_f := \{(x,y) \in M \times M \::\: f(x) = f(y)\}$. The equivalence class of $x \in M$ is $f^{-1}(\{f(x)\})$, so we have a well-defined (and injective) map from $M/R_f$ (the set of equivalence classe) to $N$, sending $f^{-1}(\{f(x)\})$ to $f(x)$. Thus, first collecting all elements, which map to the same element via $f$, and then mapping these collections to their respective values gives you a bijection onto the image. Note that in the case of groups $G$ and $G'$ and group homomorphism $f: G \rightarrow G'$ you have $f^{-1}(\{f(x)\}) = x\ker(f)$ for all $x \in G$ and the induced map is a group homomorphism, yielding an isomorphism to the image.<|endoftext|> TITLE: Android devices for reading textbooks and writing math by hand? QUESTION [7 upvotes]: This thread here about iPad is getting Android -answers that belong to other thread. I am interested in things such as real-time TeXing, LaTeX recognition, writing math by styluses/keyboards, teaching -- anything that can be useful for mathematicians and students alike to work. So can you use Android devices for reading textbooks and writing math by hand? REPLY [2 votes]: Because most Android devices lack proper screens, precision writing with things such as Jot Pro do not work. Your best bet for precision writing with Android is to get a Thinkpad x220 tablet or similar. More about this here. I use Maglus stylus in Android devices because the precision tools do not work. Then I use finger-writing tools below, free and good enough to have an old Android phone for casual uses such as finding certain Greek alphabets. Apps with Finger-writing The apps below do not work with precision writing with Jot because most Android devices do not have good-enough display. Currently, the only way to test whether Jot works with a display is to test it: even certain better-than-retina-displays do not work with Jot. I. Detexify here II. OCR MyScript -calculator here Accessories I use Gooseneck 1/4, Balljoint 1/4 and camera-phone-mount here to have my hands free while working with things. Sometimes, I use a macro flash for near-photographs with a DSL camera for very precise photographs. Phone-camera works however for casual photos such as note-taking. P.s. For future, there is a software called Screenshot UX here for sharing sheetshots in Android devices but it requires rooted phone. By this tool, you can easily show and recommend apps.<|endoftext|> TITLE: How to characterize the continuous functions from an infinite set with the cofinite topology to a Hausdorff space? QUESTION [7 upvotes]: The problem Let $X$ be an infinite set with the cofinite (finite complement) topology and let $Y$ be a Hausdorff space. Characterize the continuous functions from $X$ to $Y$. 
What I have so far For a function $f:X \rightarrow Y$ to be continuous, we have that $f$ is continuous $\iff$ for any open set $U$ of $Y$, $f^{-1}(U)$ is open in $X$, or, equivalently: $f$ is continuous $\iff$ for any closed set $B$ of $Y$, $f^{-1}(B)$ is closed in $X$. The closed sets of $X$ are all the finite subsets of $X$ or all of $X$. Since every finite point set $W$ in a Hausdorff space is closed, we must have that $f^{-1}(W)$ is either a finite subset of $X$ or all of $X$. But what about the infinite closed subsets of $Y$? I don't really know where to go from here. Am I on the right track here? Should I be looking at the closed sets at all? Any help appreciated! REPLY [5 votes]: Hint: Take $x_1, x_2 \in X$ with $f(x_1) \ne f(x_2)$. As $Y$ is Hausdorff, there are disjoint open neighbourhoods. Now look at their preimages.<|endoftext|> TITLE: Can $\int|f_n|d\mu \to \int |f|d\mu$ but not $\int|f_n - f|d\mu \to 0$? QUESTION [8 upvotes]: Possible Duplicate: Convergence a.e. and of norms implies that in Lebesgue space I am trying to show that if $$ \int_X |f_n|d\mu \to \int_X|f|d\mu $$ where $f$ and all the $f_n$ have finite integral and $f_n \to f$ pointwise, then $$ \int_X |f_n-f|d\mu \to 0. $$ I worked out a proof in the case that $\mu(X) < \infty$, but it relies on Egoroff's theorem which may fail if $\mu(X) = \infty$. I can't find a counterexample in the case $\mu(X) = \infty$ but I suspect that it may not be true. I was thinking of $X=\mathbb{R}$ but maybe there is a good counting measure counterexample on $\mathbb{N}$. Does anyone know if this is true in the case $\mu(X) = \infty$, and if so, how might I get started in showing it? REPLY [13 votes]: Let $g_n(x):=|f(x)|+|f_n(x)|-|f(x)-f_n(x)|$. It defines an integrable function, and $g_n\to 2|f|$ pointwise. Furthermore, $g_n\geq 0$. By Fatou lemma, $$\int_X\liminf_{n\to+\infty}g_n(x)d\mu(x)\leq\liminf_{n\to+\infty}\int_Xg_n(x)d\mu(x).$$ The LHS is $2\int_X|f(x)|d\mu(x)$, and the RHS is $2\int_X|f(x)|d\mu(x)+\liminf_{n\to +\infty}-\int_X|f-f_n|d\mu$. This gives $$0\leq -\limsup_{n\to +\infty}\int_X|f-f_n|d\mu,$$ which is the wanted result. In particular, this works without the assumption of finiteness of the measure (we just need a positive measure).<|endoftext|> TITLE: What is the distribution of a data set QUESTION [5 upvotes]: I understand what the probability distribution is. I also have a personal understanding/interpretation of the concept of distribution of a dataset. Whenever I see this expression I imagine a graph with frequency as the y-axis and the members of the data set on the x-axis, for each of them(members of the data set) the graph containing a point at the corresponding frequency level. Is this the correct interpretation ? Is "distribution of a datset" = "probability distribution" ? To me it doesn't look like the two concepts are the same thing.(probably subtly related but not the same thing) I was unable to find a standard definition of this concept. Can you provide me with a pointer to a resource defining it ? When authors say: "Two data sets drawn from the same underlying distribution", what exactly do they mean by "underlying distribution" ? Do they mean the same thing as I mentioned above, i.e. a graph like :frequency vs each member of the data set ? REPLY [2 votes]: You understand the concept of a probability distribution, so let's start there. A probability distribution has a cumulative distribution function that gives us the probability that a variable is less than or equal to a given value. 
In the discrete case, this CDF is the sum of values at discrete points of the probability mass function; in the continuous case, it is the integral over the real line of a probability distribution function. In either case, the pmf/pdf is non-negative and consequently its sum/integral is monotonically non-decreasing to 1. From the pmf/pdf, we can obtain distribution moments in the typical way: expected value, variance, and higher-order moments, using the standard formulas, which we need not repeat here. One way of looking at this is that you can characterize a distribution in terms of its moments. A Gaussian distribution is parameterized by its mean and variance; a Poisson distribution is parameterized by its process intensity. You still need to know the shape of the distribution function, but if you do know that, all you need is a handful of parameters. (Actually, there are other ways that we can address this when you don't know the distribution!) Now, let's look at a data set. In the real world, we really don't like dealing with a continuum. If you measure the voltage on a widget, it would be a lot easier if 5.000001 volts was effectively the same as 5.000002. Even when the physics underlying our data set dictate that the output belongs to a continuum, we want to discretize it in some way. Typically, we do this using a histogram. There are plenty of resources on how to intelligently set the bin size for a histogram, but ultimately there is no perfect, natural, context-free way to do so. As you know, a histogram counts events at discrete points. In this way, a histogram is very much like a probability mass function: we cannot have a negative number of events in a bin, and if you add the total events from left to right, you end up counting the total number of events. Although the histogram won't sum to unity, if you just divide every count by the total number of events, you end up exactly with something that looks like a pmf. Furthermore, you can compute statistics on the data. Mean, variance, kurtosis, etc. are all statistical moments that can be computed in a straightforward manner. In fact, there are many different types of moments that you can compute, but if you compare the formula for doing so to the canonical way of computing, say, expected values on a pmf, they are very similar (if not identical)! So you can take your data and turn it into something that looks like a pmf. You can even perform the same steps on the data to get statistical moments. The only thing that's really difficult to do is to find the shape of the distribution. Is it Gaussian? Binomial? Poisson? Weibull? There are tests for showing how well your data fits any given theoretical distribution, but unless you have infinitely many samples, you can never say for 100% sure. Furthermore, your moment computations aren't exactly the same. Your theoretical distribution might demand discrete values at exactly 1 volt, 2 volts, 3 volts, etc., but you compute your actual sample mean using measured values; .956 volts, 2.14 volts, 2.98 volts, etc. So in the end, a distribution of data is the characterization of the statistical moments of the data along with a comparison of the data to a theoretical distribution. Saying that data has mean and variance of X and Y doesn't give you the full picture. But saying that the data has mean X and variance Y and passes a goodness of fit test for a normal distribution does mean something, because we have tools for organizing the data in a way that has a natural analog to purely theoretical probability definitions.
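To make the histogram-as-empirical-pmf idea concrete, here is a small sketch (an editorial addition, not from the original answer). It assumes numpy and scipy, and the simulated "voltage" data are invented for illustration:

```python
# Illustrative sketch: turn a sample into an empirical pmf via a histogram,
# compute moments two ways, and run a goodness-of-fit test against a normal.
# Assumes numpy and scipy; the voltage data below are simulated, not real.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
voltages = rng.normal(loc=5.0, scale=0.1, size=2000)   # made-up measurements

# Histogram -> empirical pmf: non-negative bin weights that sum to 1.
counts, edges = np.histogram(voltages, bins=30)
pmf = counts / counts.sum()
centers = 0.5 * (edges[:-1] + edges[1:])

# Moments from the raw data and from the binned pmf.
mean_data = voltages.mean()
var_data = voltages.var()
mean_pmf = np.sum(centers * pmf)                  # E[X] under the binned pmf
var_pmf = np.sum((centers - mean_pmf) ** 2 * pmf)

# Kolmogorov-Smirnov goodness of fit against a normal distribution.
ks_stat, p_value = stats.kstest(voltages, "norm",
                                args=(mean_data, np.sqrt(var_data)))
print(mean_data, var_data, mean_pmf, var_pmf, ks_stat, p_value)
```

The binned moments land close to the raw-sample moments, and the KS statistic quantifies how well the sample matches a fitted normal; note that because the normal's parameters are estimated from the same data, the reported p-value is only approximate.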
<|endoftext|> TITLE: A question about uniform continuity QUESTION [5 upvotes]: Let $F$ be a continuous function on the real set $\mathbb R$ such that the function $x \mapsto xF(x)$ is uniformly continuous on $\mathbb R$. Prove that $F$ is also uniformly continuous on $\mathbb R$. REPLY [3 votes]: Some hints: Prove that there exists $A,B>0$ such that for all real numbers $x$, $|xF(x)|\leq A|x|+B$. In particular, $F$ is bounded, say by $M$. For $x>0$, we write $$|F(x)-F(y)|\leq \frac 1x|xF(x)-yF(y)|+\frac 1x|F(y)|\cdot |x-y|.$$ So if $|x|\geq 1$, we have $$|F(x)-F(y)|\leq |xF(x)-yF(y)|+M\cdot |x-y|.$$ Conclude, using uniform continuity of $F$ on $[-2,2]$.<|endoftext|> TITLE: Euler Maclaurin summation examples? QUESTION [5 upvotes]: How does one use Euler Maclaurin to compute asymptotics for sums like $$ \sum_{\substack{n\le x \\ (n,q)=1}} \frac{1}{\sqrt{n}} \quad \text{or} \quad \sum_{\substack{n\le x \\ (n,q)=1}} \frac{\log n}{\sqrt{n}}?$$ REPLY [2 votes]: We can actually get an additional term using the Wiener-Ikehara theorem. Introduce the Dirichlet series $A(s)$ whose terms are given by the indicator $(n, q) =1$ times $1/\sqrt{n}$. We have $$ A(s) = \sum_{(n,q)=1} \frac{1/\sqrt{n}}{n^s} = \sum_{(n,q)=1} \frac{1}{n^{s+\frac{1}{2}}} = \prod_{p\nmid q} \frac{1}{1-\frac{1}{p^{s+\frac{1}{2}}}} = \zeta\left(s+\frac{1}{2}\right) \prod_{p\mid q} \left(1-\frac{1}{p^{s+\frac{1}{2}}} \right),$$ where $p$ ranges over the primes. Furthermore, introduce $$ B(s) = A(s) - \zeta\left(\frac{1}{2}\right) \prod_{p\mid q} \left(1-\frac{1}{\sqrt{p}}\right)$$ This Dirichlet series differs from $A(s)$ in its constant term and converges in $\mathfrak{R}(s) \ge \frac{1}{2}.$ It has a simple pole at $\frac{1}{2}$ and is zero at $s=0.$ Wiener-Ikehara applies to $B(s)$, giving $$\sum_{k\le n,(k,q)=1} \frac{1}{\sqrt{k}} - \zeta\left(\frac{1}{2}\right) \prod_{p\mid q} \left(1-\frac{1}{\sqrt{p}}\right) \sim \prod_{p\mid q} \left(1-\frac{1}{p}\right) \frac{\sqrt{n}}{1/2} = 2 \frac{\phi(q)}{q} \sqrt{n}.$$ We construct the zero at $s=0$ because we are actually working with the Mellin-Perron type integral $$\int_{1-i\infty}^{1+i\infty} B(s) n^s \frac{ds}{s}$$ and need to cancel the pole at $s=0$. The conclusion is that $$ \sum_{k\le n,(k,q)=1} \frac{1}{\sqrt{k}} \sim 2 \frac{\phi(q)}{q} \sqrt{n} + \zeta\left(\frac{1}{2}\right) \prod_{p\mid q} \left(1-\frac{1}{\sqrt{p}}\right)$$ The numerics of this approximation are excellent even for small values of $n$.<|endoftext|> TITLE: For what value of m does the equation $y^2 = x^3 + m$ have no integral solutions? QUESTION [5 upvotes]: For what value of $m$ does the equation $y^2 = x^3 + m$ have no integral solutions? REPLY [9 votes]: Here is the solution in Ireland and Rosen (page 270), for the case $m=7$. Suppose the equation has a solution. Then $x$ is odd. For otherwise, reduction modulo 4 would imply that 3 is a square modulo 4. Write the equation as $$y^2+1=(x+2)(x^2-2x+4)=(x+2) ((x-1)^2+3) \ .
\ \ \ (*)$$ Now since $(x-1)^2 +3$ is of the form $4n+3$ there is a prime $p$ of the form $4n+3$ dividing it and reduction of $(*)$ modulo $p$ implies that $-1$ is a square modulo $p$ which is a contradiction.<|endoftext|> TITLE: Let $p, q$ be odd primes with $p = 2q + 1$ Show that $2$ is a primitive root modulo $p$ if and only if $q\equiv1\pmod4$ QUESTION [5 upvotes]: Let $p,q$ be odd primes with $p = 2q + 1$ Show that $2$ is a primitive root modulo $p$ if and only if $q\equiv1\pmod4$ REPLY [4 votes]: One direction is easy. If $q\equiv 3\pmod{4}$, then $p\equiv -1\pmod{8}$, and therefore $2$ is a quadratic residue of $p$, so cannot be a primitive root. For this direction, the primality of $q$ was not used. We now show that if $q$ is a prime of shape $4k+1$, then $2$ is a primitive root of $p$. If $q\equiv 1\pmod{4}$, then $p\equiv 3\pmod{8}$, so $2$ is a quadratic non-residue of $p$. That means that $2$ has a chance to be a primitive root of $p$. We check that indeed it is. We give a proof via a counting argument. There are $(p-1)/2=q$ incongruent quadratic non-residues of $p$. And by a general result, there are $\varphi(\varphi(p))$ primitive roots of $p$. (Here $\varphi$ is the Euler $\varphi$-function.) If $p=2q+1$ where $q$ is prime, then $\varphi(\varphi(p))=\varphi(2q)=q-1$, since $q$ is prime. Since there are $q$ non-residues, and $q-1$ primitive roots, all but one non-residue must be a primitive root. Note that since $p$ has shape $4k+3$, $-1$ is a non-residue of $p$. But $-1$ is obviously not a primitive root of $p$. Therefore *every non-residue of $p$ other than $-1$ must be a primitive root of $p$. Since $2$ is a non-residue, this completes the proof.<|endoftext|> TITLE: Given 3 points of a rigid body in space, how do I find the corresponding orientation (aka rotation or attitude)? QUESTION [6 upvotes]: Say, I measure the 3D positions, $\mathbf{p_1(t), p_2(t), p_3(t)} \in \mathbb{R}^3$ of three points in space which are all connected by a rigid body at time $t = t_0$. Then, I make a second measurement, at $t = t_1$, after the body has rotated and translated. How can I determine the corresponding orientation of that movement? Either a rotation matrix R ($\in SO3$) or quaternion q ($\in H$) is fine. I would like to implement this in software and I'm looking for a quick solution, ideally without the use of high level library functions (eg. Matlab qr() or oth()). I guess we want to satisfy the following equations: $$\mathbf{p_1}(t_1) = \mathbf{R}\ \mathbf{p_1}(t_0) + \mathbf{t} $$ $$\mathbf{p_2}(t_1) = \mathbf{R}\ \mathbf{p_2}(t_0) + \mathbf{t} $$ $$\mathbf{p_3}(t_1) = \mathbf{R}\ \mathbf{p_3}(t_0) + \mathbf{t}$$ Where $\mathbf{R}$ is the rotation I am looking for and $\mathbf{t}$ is the translation. REPLY [5 votes]: I posted this answer to a similar question on sci.math. I will transcribe the question and the summary of the solution below. For this problem, we don't need to compute $r$, just set it to $1$. Least-Squares Conformal Multilinear Regression Given $\{ P_j : 1 \le j \le m \}$ and $\{ Q_j : 1 \le j \le m \}$, two sets of points, we want to find a conformal map, defined by a linear map, $M$, and a vector, $R$, which maps one set of points to the other via $$ Q = P M + R\tag{1} $$ where we require that $M M^T = r^2 I$ and that the square residue $$ \sum_{j=1}^m\left|P_jM+R-Q_j\right|^2\tag{2} $$ is minimal. Note that $(1)$ requires that $P$ and $Q$ are row vectors. 
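Before the step-by-step summary below, here is a small numerical sketch (Python/numpy, using column-vector conventions rather than the row-vector ones above, and specialized to the rigid-body case $r=1$; it illustrates the same centroid-plus-SVD construction and is not the original poster's code):

```python
import numpy as np

def rigid_orientation(P0, P1):
    """Estimate R (rotation) and t (translation) with P1 ~ R @ P0 + t.
    P0, P1: 3xN arrays of corresponding points, one point per column."""
    c0 = P0.mean(axis=1, keepdims=True)       # centroid of the first point set
    c1 = P1.mean(axis=1, keepdims=True)       # centroid of the second point set
    H = (P1 - c1) @ (P0 - c0).T               # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    E = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # force det(R) = +1
    R = U @ E @ Vt
    t = c1 - R @ c0
    return R, t

# Quick self-check: recover a known rotation about the z-axis
theta = 0.3
R0 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
t0 = np.array([[1.0], [2.0], [3.0]])
P0 = np.random.default_rng(1).standard_normal((3, 3))   # three (almost surely non-collinear) points
R, t = rigid_orientation(P0, R0 @ P0 + t0)
print(np.allclose(R, R0), np.allclose(t, t0))
```

The determinant-sign correction is what keeps the recovered map a proper rotation even when the three points only span a plane.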
Summary of the Method To find the least squares solution to $P M + R = Q$ for a given set of $\{ P_j \}$ and $\{ Q_j \}$, under the restriction that the map be conformal, we first compute the centroids $$ \overline{P}=\frac1m\sum_{j=1}^mP_j\qquad\text{and}\qquad \overline{Q}=\frac1m\sum_{j=1}^mQ_j $$ Next, compute the matrix $$ \begin{align} S &=\sum_{j=1}^m\left(Q_j-\overline{Q}\right)^T\left(P_j-\overline{P}\right)\\ &=\sum_{j=1}^mQ_j^TP_j-m\overline{Q}^T\overline{P} \end{align} $$ Let the Singular Value Decomposition of $S$ be $$ S=UDV^T $$ Next compute $\{ c_k \}$ with $$ \begin{align} c_k &=\sum_{j=1}^m\left[\left(P_j-\overline{P}\right)V\right]_k\left[\left(Q_j-\overline{Q}\right)U\right]_k\\ &=\sum_{j=1}^m\left[P_jV\right]_k\left[Q_jU\right]_k-m\left[\overline{P}V\right]_k\left[\overline{Q}U\right]_k \end{align} $$ and define $$ a_k = \mathrm{sgn}( c_k ) $$ Let $I_k$ be the matrix with the $(k,k)$ element set to $1$ and all the other elements set to $0$. Then calculate $$ E=\sum_{k=1}^na_kI_k $$ Compute the orthogonal matrix $$ W=VEU^T $$ If $\det(W) < 0$ but $\det(W) > 0$ is required, change the sign of the $a_k$ associated with the $c_k$ with the smallest absolute value. If required, compute $r$ by $$ r\sum_{j=1}^m\left|P_j-\overline{P}\right|^2=\sum_{j=1}^m\left\langle\left(P_j-\overline{P}\right)W,Q_j-\overline{Q}\right\rangle $$ or equivalently $$ r\left(\sum_{j=1}^m\left|P_j\right|^2-m\left|\overline{P}\right|^2\right) =\sum_{j=1}^m\left\langle P_jW,Q_j\right\rangle-m\left\langle\overline{P}W,\overline{Q}\right\rangle $$ Finally, we have the desired conformal map $Q = P M + R$ where $$ M = r W $$ and $$ R = \overline{Q} - \overline{P} M $$ More information, easier computation Suppose you want to map $\{P_i\}_{i=1}^3$ to $\{Q_i\}_{i=1}^3$, and the distances between the $P_i$'s and $Q_i$'s are the same. Compute a fourth point by $$ P_4=P_1+(P_2-P_1)\times(P_3-P_1) $$ and $$ Q_4=Q_1+(Q_2-Q_1)\times(Q_3-Q_1) $$ Then create the matrix $P$ whose columns are $P_2-P_1$, $P_3-P_1$, and $P_4-P_1$. Also create the matrix $Q$ whose columns are $Q_2-Q_1$, $Q_3-Q_1$, and $Q_4-Q_1$. Then $x\mapsto QP^{-1}x+(Q_1-QP^{-1}P_1)$ maps the source points to the destination points.<|endoftext|> TITLE: Infinite series where each term is the square of the last QUESTION [8 upvotes]: Is there a closed-form, in terms of elementary functions or otherwise, for the power series $x+x^2+x^4+x^8+x^{16}+...$, where each term is the square of the last? REPLY [5 votes]: The series $$\sum_{n=0}^\infty x^{\large-2^n}$$ generally does not have a closed form. This is just your series where $x \mapsto \dfrac{1}{x}$. When $2 \le x \le 10$, the decimal expansion is given by the OEIS. When $x=2$, the number is called the "Kempner-Mahler number." The case when $x=10$ seems to be called the "Fredholm-Rueppel Sequence" and has many other interesting properties. It has also been shown that the number, $M$, generated by the sum $x=2$ is transcendental by Mahler, and Knight showed that this was true for all $x\ge 2$. (Summarized here) The continued fraction for this series is discussed for $x \ge 3$ in J. Shallit's "Simple continued fractions for some irrational numbers."<|endoftext|> TITLE: Why do we need a pullback for the definition or classification of subobjects? QUESTION [5 upvotes]: Regarding the subobject classifier construction, why do we need the pullback? 
Monos from $U$ to $X$ are called subobjects, but I see that there might be injections which just have the elements of $X$ (viewed as a set like in set theory) permuted. This is therefore somewhat weak. However, as far as I can see, $\text{Hom}(X,\Omega)$ are the characteristic functions (set theory terminology) and these are in bijective correspondence with what we understand as subsets. Why then do we need $U$, $j$, etc. to do set theory? The only purpose for the $U\rightarrow 1\rightarrow\Omega$ route I can come up with is to define $\chi_j$ in terms of function composition and therefore to associate $\chi_j$'s with objects like $U$. Is it meant that the subobject classifier enables us to classify certain unique $U$'s and we can then associate objects (like $U$) as subobjects of objects (like $X$)? But why would that be necessary? Why not just consider $\text{Hom}(X,\Omega)$ as the subobjects (plural) of $X$? And in case we just don't want them as morphisms: I've seen hom-sets taken to a new category via a functor, why doesn't this suffice? Secondly, it's always explicitly stated that we need a terminal object to do all of the above. But don't we also need "true" and "!" as well? Must this also always be explicitly required, or are some of them implied by there being a terminal object? Lastly, it is said that topos theory avoids the stacking of element-inclusion, i.e. $\in$ gets replaced by axioms of function composition. But if Set contains any universe you would want to talk about, then surely these nested sets are to be found there too, and this just means that there are super long chains involving subobject classifiers. Does this really reduce notational ballast, compared to any set theory with multiple types? REPLY [3 votes]: We do not need pullbacks to define the subobject classifier. All we need is the notion of a logic over a category, or equivalently, the notion of subobjects. This may be abstractly characterized as giving a logical fibration over the category. The subobject classifier is then a universal object that allows us to recover all subobjects via substitutions. Pullbacks are needed to define the logic of subobjects. With every category $\mathbb{C}$ with pullbacks one may associate the canonical logic fibration --- $\mathit{sub} \colon \mathit{Sub} \rightarrow \mathbb{C}$. One can show that if $\mathit{sub}$ has a universal object $X \rightarrow \Omega$ in the above sense, then $X$ is necessarily a terminal object in $\mathbb{C}$. One may consider other logics over $\mathbb{C}$, and reach other notions of subobjects. For example, the crucial role in a quasitopos is played by a regular subobject classifier --- it is just the universal object induced by the logic of regular subobjects. One may even consider non-logical fibrations over $\mathbb{C}$, and see what such universal classifiers look like. For example, the external family fibration $\mathit{fam}(\mathbb{C}) \colon \mathit{fam}(\mathbb{C}) \rightarrow \mathbf{Set}$ has a universal classifier precisely when $\mathbb{C}$ is small; it is given by $\mathbb{C}_0$ --- the set of all objects of $\mathbb{C}$. You have also said: Monos from U to X are called subobjects, but I see that there might be injections which just have elements of the X (viewed as a set like in set theory) permuted. That is why we do not define subobjects as monos, but as equivalence classes of monos.<|endoftext|> TITLE: What is the difference between diagonalization and orthogonal diagonalization?
QUESTION [10 upvotes]: I am confused about the following. When you diagonalize a $n\times n$ matrix $A$, you write $A$ as $PDP^{-1}$ with $P$ being orthogonal. Because if $P$ wasn't orthogonal, it wouldn't be invertable. Then why don't we call this "orthogonal diagonalization"? When you diagonalize a $n\times n$ symmetric matrix $A$ (so $A = A^T$), you write $A$ as $PDP^T$, because $P^{-1}= P^T$. But if $P^{-1}= P^T$, doesn't that imply that $P^TP=I$ and thus that P is orthonormal? Then why don't we call this "orthonormal diagonalization"? REPLY [9 votes]: If $A$ is diagonalizable, we can write $A=S \Lambda S^{-1}$, where $\Lambda$ is diagonal. Note that $S$ need not be orthogonal. Orthogonal means that the inverse is equal to the transpose. A matrix can very well be invertible and still not be orthogonal, but every orthogonal matrix is invertible. Now every symmetric matrix is orthogonally diagonalizable, i.e. there exists orthogonal matrix $O$ such that $A=O \Lambda O^T$. It might help to think of the set of orthogonally diagonalizable matrices as a proper subset of the set of diagonalizable matrices. REPLY [3 votes]: Being diagonalizable does not imply that it can be diagonalized with an orthogonal matrix. The relevant result is: A matrix is unitarily diagonalizable iff it is normal (ie, $A^* A = A A^*$). For example, $A = \begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix}$. It is straightforward to check that $A$ is not normal, has two distinct eigenvalues, and the eigenspaces are $\mathbb{sp} \{ (1,0)^T \}$ ($\lambda=1$) and $\mathbb{sp} \{ (1,1)^T \}$ ($\lambda=2$) respectively. It is easy to see that the eigenspaces are not orthogonal and that $A$ can be diagonalized by taking any non-zero vector from the two eigenspaces, say $p_1,p_2$, forming the matrix $P = \begin{bmatrix} p_1 & p_2 \end{bmatrix}$. Then you will have $A P = P \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}$, and $P$ is invertible (but not orthogonal) because $p_1,p_2$ are linearly independent. Note: Hermitian matrices (or symmetric in the real case) are 'automatically' normal and can always be unitarily (orthogonally) diagonalized. Note: Any orthogonal $U$ matrix can be 'turned into' an orthonormal matrix $\tilde{U}$ in the following way: Let $\Lambda = U^* U$, then $\Lambda$ is diagonal with positive entries on the diagonal. Hence we can define the square root $\sqrt{\Lambda}$ as the diagonal matrix of corresponding square roots. Then $\tilde{U} = U \sqrt{\Lambda}$ is orthonormal.<|endoftext|> TITLE: What does the supremum of a sequence of sets represent? QUESTION [5 upvotes]: I'm trying to understand more about the limits of sequences of sets in Measure Theory. Given a sequence of sets $\{A_n\}_{n\in \mathbb{N}} = \{ A_1,A_2, \ldots \}$, what does $\sup_n \{ A_n \}$ represent? The reason why I'm asking is because I'd like to derive what $\limsup_{n\rightarrow\infty}A_n$ means… and this should formally be $\lim_{n\rightarrow\infty} \sup\{A_k | k \geq n \}$ - right? REPLY [12 votes]: The notation $\sup_n\{A_n\}$ is ambiguous, and I would avoid using it without more context. In the context of the $\limsup$ or $\liminf$ of sets, we are taking the partial order on sets by inclusion: $A \leq B$ if $A \subseteq B$. Then the supremum of a sequence $\{A_n\}_n$ is the smallest possible upper bound for every element of the sequence - the smallest set containing each $A_n$. 
This must be $$\sup_n A_n = \bigcup_n A_n.$$ Note that not every partially ordered set has well-defined suprema and infima - this kind of poset is called a lattice. For a sequence of real numbers, the definition of limit superior is $${\limsup} \,\{a_n\}_n = \lim_{n \rightarrow \infty} \sup_{m \geq n} a_m.$$ The notation $\lim_{n \rightarrow \infty} A_n$ doesn't make sense for a sequence $\{A_n\}_n$ of sets. But observe that if $B_n = \bigcup_m A_{m \geq n}$ then $B_n$ is a nested sequence: $B_{n+1} \subseteq B_n$, since $B_{n+1}$ is the union over a smaller set of indices. Since $B_n$ is getting smaller and smaller, we might define the "limit" of $B_n$ to be small: the set of $x$ so contained in every $B_n$, or $\cap_n B_n$. Putting this together, a reasonable analog for the $\lim \sup$ of a sequence of real numbers applied to sets is $$\limsup A_n := \bigcap_{n=1}^\infty \bigcup_{m =n}^\infty A_m.$$ Taking apart this definition, we see that $x \in \lim \sup A_n$ if and only if for all $n \in \mathbb{N}$ there exists $m \geq n$ so that $x \in A_m$: $$\forall n \in \mathbb{N}\, \exists m \in \mathbb{N} \,m \geq n \text{ and } x \in A_m.$$ This says that no matter how large of an $n$ you choose, I can find a larger $m$ so that $x \in A_m$. Another way of saying this is that there is no upper bound $n$ to the set of $m$ so that $x \in A_m$; and this is equivalent to saying that $x \in A_m$ for infinitely many $m$. See the Wikipedia article for more info. REPLY [5 votes]: To complement the answer of Jair Taylor: We can relate the notion of $\sup$ and $\limsup$ to the usual notions for real numbers by working with indicator functions: $$x\in\bigcup_n A_n\iff x\in A_n\textrm{ for some }n\iff 1_{A_n}(x)=1\textrm{ for some }n\iff \sup_n 1_{A_n}(x)=1$$ and $$\lim_{n\to\infty}\sup 1_{A_n}(x)=1\textrm{ iff } \lim_n 1_{\bigcup_{m=n}^\infty {A_m}}(x)=1\iff \inf_n 1_{\bigcup_{m=n}^\infty {A_m}}(x)=1\iff x\in\bigcap_n \bigcup_{m=n}^\infty A_n\iff x\in A_n\textrm{ for infinitely many }n.$$<|endoftext|> TITLE: Multiplicative Möbius inversion formula. QUESTION [5 upvotes]: There is a very simple proof for Möbius inversion formula through convolution: If $A$ is a UFD and $B$ is a ring, $f,g:A\rightarrow B$ two functions, then $$(f\ast g)(n) = \sum_{k\cdot l = n}f(k)\cdot g(l)$$ makes the set of functions into a ring with identity $\delta_1$. In this case the Möbius function, assigning 0 to every $n\in A$ which is not square free, 1 to square free elements which are the products of an even number of primes and -1 to square free products of odd number of primes, is an inverse of the constant function 1. Then the Möbius inversion formula $$f(n) = \sum_{d|n} g(d)\Longrightarrow g(n) = \sum_{d|n}f(d)\mu\left(\frac{n}{d}\right)$$ is simply another way of writing $$f = g\ast 1\Longrightarrow g = f\ast \mu.$$ Is there a similarly general and elegant approach to the multiplicative formula $$f(n) = \prod_{d|n}g(d)\Longrightarrow g(n) = \prod_{d|n} f(d)^{\mu(n/d)}?$$ Thank you. REPLY [4 votes]: Of course there is: $$\tag1 \ln f=\ln g *1\Rightarrow \ln g=\ln f *\mu.$$ At least this works if we can take logarithms in $B$. Otherwise, we need to switch to a different ring: Assume $B$ is a commutative ring with unity. Let $C$ be the set of permutations maps $\sigma \colon B^\times\to B^\times$ of the units with $\sigma(1)=1$. Then $C$ is a ring with pointwise multiplication as addition, with composition as multiplication, identity as one and $x\mapsto 1$ as zero. 
Moreover, we can map $\mathbb Z\to C$ via $n\mapsto(x\mapsto x^n)$ and consider $B^\times$ as a subset of $C$ via $b\mapsto(x\mapsto b x)$. Now if $f,g\colon A\to B$ are two functions with range actually in $B^\times\subseteq C$, then $f=g*1\Rightarrow g=f*\mu$ is the multiplicative inversion formula $$\tag2 f(n)=\prod_{d|n}g(d)\Rightarrow g(n)=\prod_{d|n}f(d)^{\mu(\frac nd)}.$$ What can be done if the ranges of $f,g$ are not contained in $B^\times$? Then let us hope that $B$ is at least an integral domain and replace $B$ with its quotient field. Your specific example is with polynomials, i.e. the integral domain $\mathbb Z[X]$, so the above steps work fine. In that specific case, maybe the following is more intuitive: We can interprete polynomials as functions $\mathbb C\to \mathbb C$. Outside their roots, we can take (multivalued) logarithm, thus can apply $(1)$ pointwise and obtain the conclusion in $(2)$ pointwise for all but finitely many points $\in\mathbb C$, hence $(2)$ also as a whole.<|endoftext|> TITLE: Every open set in $\mathbb{R}$ is the disjoint union of open intervals QUESTION [6 upvotes]: I know this is a standard question and that I can easily find solutions on this site or elsewhere. However, I came up with a proposed proof and would like someone to review it for me. If this is known, my apologies. Let $A_{\alpha}$ be a family of open intervals. Given $\alpha, \alpha'$ we say $A_{\alpha} \sim A_{\alpha'}$ if there exist $\alpha_{1}, ..., \alpha_{n}$ such that $A_{\alpha} \cap A_{\alpha_{1}} \neq \emptyset, ..., A_{\alpha_{n}} \cap A_{\alpha'} \neq \emptyset$. We see that $\sim$ is an equivalence relation. Consider $A$ to be an equivalence class and $F$ to be the union of all elements of $A$. Considering $a = \inf F$, $b = \sup F$ (where $a, b$ take values in the extended reals), we claim $F = (a, b)$. Let $a < x < b$. It suffices to see $x \in F$. This is clear since there exists $\alpha, \alpha'$ with $A_{\alpha}, A_{\alpha'}$ in $A$ such that $A_{\alpha}$ contains points smaller than $x$ (since $x$ is not the infimum) and $A_{\alpha'}$ contains points greater than $x$. Taking $\alpha_{1}, ..., \alpha_{n}$ as in the definition we see that for some $i$, $A_{\alpha_{i}}$ contains $x$. If it is not true that $A_{\alpha} \sim A_{\alpha'}$ then $A_{\alpha} \cap A_{\alpha'} = \emptyset$. Thus each $F$ is disjoint, and the union of $A_{\alpha}$ is the union of the $F$. Therefore any open subset of $\mathbb{R}$ is the union of disjoint open intervals REPLY [2 votes]: The idea is sound, but the implementation could be better. Here’s a fairly careful write-up missing only a few details. Let $U$ be a non-empty open subset of $\Bbb R$, and let $\mathscr{I}$ be the family of all open intervals contained in $U$. For $I,J\in\mathscr{I}$ write $I\sim J$ iff there are $I_0=I,I_1,\dots,I_n=J\in\mathscr{I}$ such that $I_k\cap I_{k+1}\ne\varnothing$ for $k=0,\dots,n-1$; clearly $\sim$ is an equivalence relation on $\mathscr{I}$. For $I\in\mathscr{I}$ let $[I]$ be the $\sim$-equivalence class of $I$, and let $U_I=\bigcup[I]$; clearly $U_I$ is open. Suppose that $U_I\cap U_J\ne\varnothing$ for some $I,J\in\mathscr{I}$; then there are $I'\in[I]$ and $J'\in[J]$ such that $I'\cap J'\ne\varnothing$. Clearly $I\sim I'\sim J'\sim J$, so $I\sim J$, and $[I]=[J]$. Thus, $\{U_I:I\in\mathscr{I}\}$ is a partition of $U$ into open subsets. Suppose that $I,J\in\mathscr{I}$ and $I\cap U\ne\varnothing$; then $I\cup J\in\mathscr{I}$. (Why?) 
An easy induction on $n$ then shows that if $I_0,\dots,I_n\in\mathscr{I}$, and $I_k\cap I_{k+1}\ne\varnothing$ for $k=0,\dots,n-1$, then $\bigcup_{k=0}^nI_k\in\mathscr{I}$. Fix $I\in\mathscr{I}$, and suppose that $x,y\in U_I$ with $x<y$; then $x$ and $y$ lie in members of $[I]$ that can be joined by a chain of successively overlapping members of $[I]$, and the union of that chain is an interval in $\mathscr{I}$ containing both $x$ and $y$, so $[x,y]\subseteq U_I$. Thus $U_I$ is an interval, and being open, it is an open interval.<|endoftext|> TITLE: Irreducible polynomial means no roots? QUESTION [18 upvotes]: If a polynomial is irreducible in $R[x]$, where $R$ is a ring, it means that it does not have a root in $R$, right? For example, to say that a polynomial $f(x)\in\mathbb Z[x]$ is irreducible in $\mathbb Q[x]$ is equivalent to saying that $f(x)$ does not have any rational root. I just want to make sure. REPLY [13 votes]: I know there's an accepted answer here, but I just wanted to add in something to clarify a couple answers for newcomers: If $F$ is a field, $f(x)\in F[x]$ is reducible if and only if $f(x)$ has a zero in $F$, but this is true in general only for polynomials of degree 2 and 3. Mark Bennet gives a decent counterexample to the generalized claim, and note that the polynomial he uses is degree 4. However, things are a bit different when you're working in $Z_n$ ($Z/nZ$). You can check for reducibility by testing if $f(a)=0$ for $a\in\{0,\dots,n-1\}$. For example, $f(x)=x^3+1\in Z_9[x]$ is reducible over $Z_9$ because $f(2)=0$.<|endoftext|> TITLE: Calculus: a very serious L'Hopital's Rule problem QUESTION [9 upvotes]: Compute using L'Hopital's rule: $$\lim_{x\to 0^+} \frac{\ln(x)}{1/\sin(x)}$$ I kept differentiating, but it's getting too long. How can I tackle this kind of problem? Also, when I encounter a limit of the form $\infty \cdot 0$ and I want to make it eligible for L'Hopital's rule, should I transform it into $$\frac{\infty}{1/0}=\frac{\infty}{\infty}$$ or $$\frac{0}{1/\infty}=\frac{0}{0}$$ REPLY [24 votes]: Or without l'Hopital: $$\frac{\ln x}{\frac1{\sin x}} = (x\ln x)\cdot \frac{\sin x}x,$$ where each factor has a well-known limit as $x\to0^+$.<|endoftext|> TITLE: Another question on almost sure and convergence in probability QUESTION [18 upvotes]: Convergence in probability implies convergence on a subsequence almost surely. But this means we fix a subsequence, such that $X_{n_k}$ converges for almost every $\omega$, right? The subsequence we pick does not depend on the $\omega$, right? REPLY [4 votes]: Also, you can directly apply Borel Cantelli to the sequence of events $|X_{n_k}-X|>2^{-k}$.<|endoftext|> TITLE: convergent series, sequences? QUESTION [7 upvotes]: I want to construct a sequence of rational numbers whose sum converges to an irrational number and whose sum of absolute values converges to 1. I can find/construct plenty of examples that have one or the other property, but I am having trouble finding/constructing one that has both these properties. Any hints (not solutions)? REPLY [2 votes]: Let $\alpha$ be an irrational number in $(0,1)$. Then for suitable choice of $\epsilon_n\in\{\pm1\}$, you can achieve $$\sum_{n=1}^\infty \frac{\epsilon_n}{2^n}=\alpha\qquad\text{and}\qquad\sum_{n=1}^\infty \frac{1}{2^n}=1.$$ To do so, define $\epsilon_n$ recursively: if $\epsilon_k$ is already known for $k<n$, let $s_{n-1}=\sum_{k=1}^{n-1}\frac{\epsilon_k}{2^k}$ and set $\epsilon_n=+1$ if $\alpha>s_{n-1}$ and $\epsilon_n=-1$ if $\alpha<s_{n-1}$.<|endoftext|> TITLE: Ratio of time reading and solving QUESTION [11 upvotes]: I am learning a lot of material on my own and I am enjoying it, but I have a constant problem: I really don't know how much time to spend actually reading theorems, corollaries and stuff and how much time to spend solving the exercises. I like both but at times I feel I spend too much time reading and postponing solving exercises on the excuse that "I must first build foundation".
A friend of mine who is working on similar topics only reads definitions and theorems once and jumps into exercises and at times I feel he understands the stuff better than me. Help! REPLY [2 votes]: What I usually do is this: I read theorems and proofs but with a pen and some paper. It is extremely hard to follow a proof (unless it is elementary) without writing something down. I usually try to simplify proofs, definitions, etc. as much as possible because I am dyslexic but apparently it is a good strategy for everyone. Try to focus on the key works. Richard Feynman used almost the same method. This is why almost every professor of physics in the world has a copy of Feynman lectures. I also use a lot of symbols when convenient e.g. $\exists$, $\forall$, $\Rightarrow$, $\Leftarrow$, $\Leftrightarrow$, etc. But doing a few exercises also helps because you understand the concepts better. For example, when you first see the definition of the floor of $x$ i.e. $\left\lfloor x \right\rfloor$, you might think it is obvious but you still need try a few values of $x$ to convince yourself. This way you will actually remember. Your friend will most definitely forget everything by the end of the semester, and you do not need to understand the proof of a theorem to do an exercise that uses that theorem.<|endoftext|> TITLE: Use a linear approximation (or differentials) to estimate the given number. QUESTION [5 upvotes]: Use linear approximation (or differentials) to estimate: $$\sqrt {99.2}$$ What am I supposed to do with this? I am not given $x$ or $dx$. REPLY [5 votes]: Ue Taylor series for $\sqrt{x}$ about $x = 100$. The reason to expand the Taylor series about $100$ is that $100$ is the closest square to $99.2$. $$f(x) = f(100) + f'(100) (x-100) + \text{higher order terms}$$ Hence, $$\sqrt{99.2} \approx \sqrt{100} + \dfrac12 \dfrac{(99.2-100)}{\sqrt{100}} = 10 - \dfrac12 \dfrac{0.8}{10} = 10 - 0.04 = 9.96$$ REPLY [2 votes]: Hint: Is there any number near $99.2$ whose square root is easy? That will be your $x$ value.<|endoftext|> TITLE: Numerical method for finding the square-root. QUESTION [10 upvotes]: I found a picture of Evan O'Dorney's winning project that gained him first place in the Intel Science talent search. He proposed a numerical method to find the square root, that gained him $100,000 USD. Below are some links of pictures of the poster displaying the method. Link 1 Link 2 Link 3 How does this numerical method work and what is the proof? His method makes use of Moebius Transformation. REPLY [8 votes]: Essentially if you are interesting in evaluating $\sqrt{a}$, the idea is to first find the greatest perfect square less than or equal to $a$. Say this is $b^2$ i.e. $b = \lfloor \sqrt{a} \rfloor \implies b^2 \leq a < (b+1)^2$. Then consider the function $$f(x) = b + \dfrac{a-b^2}{x+b}$$ $$f(b) = b + \underbrace{\dfrac{a-b^2}{2b}}_{\in [0,1]} \in [b,b+1]$$ $$f(f(b)) = b + \underbrace{\dfrac{a-b^2}{f(b) + b}}_{\in [0,1]} \in [b,b+1]$$ In general $$f^{(n)}(b) = \underbrace{f \circ f \circ f \circ \cdots f}_{n \text{times}}(b) = b + \dfrac{a-b^2}{f^{(n-1)}(b)+b}$$ Hence, $f^{(n)}(b) \in [b,b+1]$ always. 
If $\lim\limits_{n \to \infty}f^{(n)}(b) = \tilde{f}$ exists, then $$\tilde{f} = b + \dfrac{a-b^2}{\tilde{f}+b}$$ Hence, $$\tilde{f}^2 + b \tilde{f} = b \tilde{f} + b^2 + a - b^2 \implies \tilde{f}^2 = a$$ To prove the existence of the limit look at $$(f^{(n)}(b))^2 - a = \left(b + \dfrac{a-b^2}{f^{(n-1)}(b)+b} \right)^2 - a = \dfrac{(a-b^2)(a-(f^{(n-1)}(b))^2)}{(b+f^{(n-1)}(b))^2} = k_{n-1}(a,b)((f^{(n-1)}(b))^2-a) $$ where $\vert k_{n-1}(a,b) \vert \lt1$. Hence, convergence is also guaranteed. EDIT Note that $k_{n-1}(a,b) = \dfrac{(a-b^2)}{(b+f^{(n-1)}(b))^2} \leq \dfrac{(b+1)^2 - 1 - b^2}{(b+b)^2} = \dfrac{2b}{(2b)^2} = \dfrac1{2b}$. This can be interpreted as larger the number, faster the convergence. Comment: This method works only when you want to find the square of a number $\geq 1$. EDIT To complete the answer, I am adding @Hurkyl's comment. Functions of the form $$g(z) = \dfrac{c_1z+c_2}{c_3z+c_4}$$are termed Möbius transformations. With each of these Möbius transformations, we can associate a matrix $$M = \begin{bmatrix} c_1 & c_2\\ c_3 & c_4\end{bmatrix}$$ Note that the function, $$f(x) = b + \dfrac{a-b^2}{x+b} = \dfrac{bx + a}{x+b}$$ is a Möbius transformation. Of the many advantages of the associated matrix, one major advantage is that the associate matrix for the Möbius transformation $$g^{(n)}(z) = \underbrace{g \circ g \circ \cdots \circ g}_{n \text{ times}} = \dfrac{c_1^{(n)} z + c_2^{(n)}}{c_3^{(n)} z + c_4^{(n)}}$$ is nothing but the matrix $$M^n = \begin{bmatrix}c_1 & c_2\\ c_3 & c_4 \end{bmatrix}^n = \begin{bmatrix}c_1^{(n)} & c_2^{(n)}\\ c_3^{(n)} & c_4^{(n)} \end{bmatrix}$$ (Note that $c_k^{(n)}$ is to denote the coefficient $c_k$ at the $n^{th}$ level and is not the $n^{th}$ power of $c_k$.) Hence, the function composition is nothing but raising the matrix $M$ to the appropriate power. This can be done in a fast way since $M^n$ can be computed in $\mathcal{O}(\log_2(n))$ operations. Thereby we can compute $g^{(2^n)}(b)$ in $\mathcal{O}(n)$ operations. REPLY [4 votes]: The iteration to find $\sqrt k$ is $f(x) = \frac{d x+k}{x+d}$ where $d = \lfloor \sqrt k \rfloor$. The iterations start with $x = d$. If $x$ is a fixed point of this, $x = \frac{d x+k}{x+d}$, or $x(x+d) = dx + k$ or $x^2 = k$, so any fixed point must be the square root. Now wee see if the iteration increases or decreases. If $y = \frac{d x+k}{x+d}$, $$y - x = \frac{d x+k}{x+d} - x = \frac{d x+k - x(x+d)}{x+d} = \frac{k - x^2}{x+d} $$ so if $x^2 < k$, $y > x$ and if $x^2 > k$, $y < x$. Also, proceeding like analyses of Newton's method, $y^2-k = \frac{(d x+k)^2}{(x+d)^2} - k = \frac{d^2 x^2 +2 d x k + k^2 - k(x+d)^2}{(x+d)^2} = \frac{d^2 x^2 +2 d x k + k^2 - k(x^2 + 2dx +d^2)}{(x+d)^2} = \frac{d^2 x^2 +2 d x k + k^2 - kx^2 - 2dkx -kd^2)}{(x+d)^2} = \frac{d^2 x^2 + k^2 - kx^2 -kd^2)}{(x+d)^2} = \frac{d^2 (x^2-k) + k^2 - kx^2)}{(x+d)^2} = \frac{d^2 (x^2-k) - k(x^2-k))}{(x+d)^2} = \frac{(d^2-k) (x^2-k)}{(x+d)^2} = (x^2-k)\frac{d^2-k}{(x+d)^2} $. Since $d = \lfloor \sqrt k \rfloor$, $d < \sqrt k < d+1$ or $d^2 < k < d^2 + 2d +1$ or $-2d - 1 < d^2 - k < 0$, so $|d^2-k| < 2d+1$. Using this, $|y^2-k| < |x^2-k|\frac{2d+1}{(x+d)^2}| = |x^2-k|\frac{2d+1}{x^2+2dx+d^2} $, so $|y^2-k|< |x^2-k|$, and each iteration gets closer to the square root. Since the starting iterate is $d$, all following iterates exceed $d$ so $|y^2-k| < |x^2-k|\frac{2d+1}{(d+d)^2}| < |x^2-k|\frac{2d+1}{4d^2}| < |x^2-k|\frac{1+1/(2d)}{2d}| \le 3|x^2-k|/4$ since $d \ge 1$. This show that the iteration converges. 
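A quick numerical sketch (Python, purely illustrative) of this iteration shows the error in $x^2-k$ shrinking by a roughly constant factor at every step, in line with the remark that follows:

```python
def iterate_sqrt(k, steps=8):
    """Run x -> (d*x + k) / (x + d) starting from x = d = floor(sqrt(k))."""
    d = int(k ** 0.5)                 # floor of sqrt(k); adequate for moderate k
    x = d
    errors = []
    for _ in range(steps):
        x = (d * x + k) / (x + d)
        errors.append(abs(x * x - k))
    return x, errors

x, errs = iterate_sqrt(7)
print(x)                                            # close to sqrt(7) = 2.6457513...
print([b / a for a, b in zip(errs, errs[1:])])      # ratios roughly constant: linear convergence
```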
However, this does not show that it converges quadratically like Newton's, only that it converges linearly.<|endoftext|> TITLE: $\mu(E\setminus (E+x))=0$ for all $x\in\mathbb{R}$. Prove that $\mu(E)=0$ or $\mu(\mathbb{R}\setminus E)=0$ QUESTION [7 upvotes]: I am preparing myself to the mini-exam in measure theory by solving problems from lecturer's notes and I have encountered some difficulties I cannot overcome. I would appreciate if you could solve the following (or if you could give me some huge hints at least): Let $\mu$ be the Lebesgue measure on $\mathbb{R}$ and let $E$ be a Borel subset of $\mathbb{R}$ such that $$\mu(E\setminus (E+x))=0$$ for any $x\in\mathbb{R}$, where $E+x=\{z+x\mid z\in E\}$. Prove that $\mu(E)=0$ or $\mu(\mathbb{R}\setminus E)=0$. I know that $\mu(E)=\mu(E+x)$ and I suppose I have to prove the statement by assuming the opposite, but I really do not know how. Thanks in advance! REPLY [3 votes]: I would like to give another proof I have found recently after being told to make use of Fubini's theorem. Since $\mu(E\setminus E+x)=0$ for all $x\in\mathbb{R}$, we have $$\begin{align} 0=\int\limits_{\mathbb{R}}\mu(E\setminus (E+x))\,\text{d}\mu(x)=&\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\chi_{E\setminus (E+x)}(s)\, \text{d}\mu(s)\,\text{d}\mu(x)\\=&\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}}\chi_E(s)\left(1-\chi_{E+x}(s)\right)\,\text{d}\mu(s)\,\text{d}\mu(x)\\=&\int\limits_{\mathbb{R}}\int\limits_{\mathbb{R}} \chi_E(s)\left(1-\chi_{E+x}(s)\right)\,\text{d}\mu(x)\,\text{d}\mu(s)\\=&\int\limits_{\mathbb{R}}\chi_E(s)\left(\int\limits_{\mathbb{R}}\left(1-\chi_{E-s}(x)\right)\,\text{d}\mu(x)\right)\,\text{d}\mu(s)\\=&\int\limits_{\mathbb{R}}\chi_E(s)\left(\int\limits_{\mathbb{R}}\chi_{(E-s)^c}(x)\,\text{d}\mu(x)\right)\,\text{d}\mu(s)=\int\limits_{\mathbb{R}}\chi_E(s)\mu\left((E-s)^c\right)\,\text{d}\mu(s)\\=&\int\limits_{\mathbb{R}}\chi_E(s)\mu\left(E^c\right)\,\text{d}\mu(s)=\mu\left(E^c\right)\int\limits_{\mathbb{R}}\chi_E(s)\,\text{d}\mu(s)=\mu(E^c)\mu(E). \end{align}$$ Therefore either $\mu(E)$ or $\mu(\mathbb{R}\setminus E)$ is $0$. Notice that we used Fubini's theorem only once while changing the order of integration in equality no. 4. The eighth equality follows from the translation invariance of Lebesgue mesure.<|endoftext|> TITLE: What do you call a function differentiated with respect to all of its arguments? QUESTION [5 upvotes]: Just a simple question. Let $f(x_1, x_2, \ldots, x_n)$ be a smooth function. Is there a particular name for the function $$\frac{\partial^n f}{\partial x_1 \, \partial x_2 \cdots \partial x_n}$$ REPLY [3 votes]: This does not have a name. We call it the $n$-th partial difference of $f$ w.r.t. the vector $x$ or variables $x_1$, $x_2$, ..., $x_n$.<|endoftext|> TITLE: Is the product of two positive semidefinite matrices positive semidefinite? QUESTION [6 upvotes]: If $X$ and $W$ are real, square, symmetric, positive semidefinite matrices of the same dimension, does $XW + WX$ have to be positive semidefinite? This is not homework. REPLY [7 votes]: If $A, B$ are real, pos, and symmetric, then $A=A^{1/2}A^{1/2}$ and the trace of $AB$ is the trace of $A^{1/2}A^{1/2}B$ which is the the trace of $A^{1/2}BA^{1/2}$ which is a positive semidefinite matrix. Thus trace of $AB$ is nonnegative.<|endoftext|> TITLE: Why $\cos^2 (2x) = \frac{1}{2}(1+\cos (4x))$? QUESTION [5 upvotes]: Why: $$\cos ^2(2x) = \frac{1}{2}(1+\cos (4x))$$ I don't understand this, how I must to multiply two trigonometric functions? Thanks a lot. 
REPLY [12 votes]: Recall the formula $$\cos(2 \theta) = 2 \cos^2(\theta) - 1$$ This gives us $$\cos^2(\theta) = \dfrac{1+\cos(2 \theta)}{2}$$ Plug in $\theta = 2x$, to get what you want. EDIT The identity $$\cos(2 \theta) = 2 \cos^2(\theta) - 1$$ can be derived from $$\cos(A+B) = \cos(A) \cos(B) - \sin(A) \sin(B)$$ Setting $A = B = \theta$, we get that $$\cos(2\theta) = \cos^2(\theta) - \sin^2(\theta) = \cos^2(\theta) - (1-\cos^2(\theta)) = 2 \cos^2(\theta) - 1$$ REPLY [4 votes]: It’s just the double-angle formula for the cosine: for any angle $\alpha$, $\cos 2\alpha=\cos^2\alpha-\sin^2\alpha\;,$ and since $\sin^2\alpha=1-\cos^2\alpha$, this can also be written $\cos2\alpha=2\cos^2\alpha-1$. Now let $\alpha=2x$: you get $\cos4x=2\cos^22x-1$, so $\cos^22x=\frac12(\cos4x+1)$. REPLY [2 votes]: $$\cos(4x) = \cos^2 (2x) - \sin^2 (2x) = 2\cos^2 (2x) - 1$$<|endoftext|> TITLE: Example of a discontinuous and bounded function for the limiting case $W^{1,n}$ QUESTION [9 upvotes]: Let $\Omega = B(0,1)$ be the open unit disc in $\mathbb{R}^2$. I'm looking for an example of a discontinuous and bounded function in $W^{1,2}(\Omega)$. I know the example $u(x) = \log \left( \log \left(1 + \frac{1}{|x|}\right)\right)$ of a discontinuous but unbounded function in $W^{1,2}(\Omega)$. I've tried playing with things like $(x,y) \mapsto \frac{x}{(x^2 + y^2)^{1/2}}$ but it didn't get me far. Any insight on how to try and construct such examples and how to expect such functions to behave would be much welcomed! REPLY [5 votes]: One can get an example just by composing the function $u(x,y)$ with the function $f(x) = \sin(x)$. By some variant of a chain rule for Sobolev functions, the composition of a function $u \in W^{1,p}(\Omega)$ with a function $f \in C^1_B(\mathbb{R})$ results in a function in $W^{1,p}(\Omega)$. Choosing for $f(x)$ a bounded function that doesn't have a limit when $x \rightarrow \infty$ and composing it with an unbounded $u$ gives the required example. Of course, the belonging of $f \circ u$ to $W^{1,2}(\Omega)$ can be easily checked directly.<|endoftext|> TITLE: Compute $\lim_{n\to\infty}\int_0^n \left(1+\frac{x}{2n}\right)^ne^{-x}\,dx$. QUESTION [6 upvotes]: I'm trying to teach myself some analysis (I'm currently studying algebra), and I'm a bit stuck on this question. It's strange because of the $n$ appearing as a limit of integration; I want to apply something like LDCT (I guess), but it doesn't seem that can be done directly. I have noticed that the change of variables $u=1+\frac{x}{2n}$ helps. With this, the problem becomes $$ \lim_{n\to\infty}\int_1^{3/2}2nu^ne^{-2n(u-1)}\,du. $$ This at least solves the issue of the integration limits. Let's let $f_n(u):=2nu^ne^{-2n(u-1)}$ for brevity. I believe it can be shown that $$ \lim_{n\to\infty}f_n(u)=\cases{\infty,\,u=1\\0,\,1<u\le 3/2} $$ since $\ln u<2(u-1)$ for $u>1$. I think I was also able to show that $\{f_n\}$ is eventually decreasing on $(1,3/2]$, and so Dini's Theorem says that the sequence is uniformly convergent to $0$ on $[u_0,3/2]$ for any $u_0\in (1,3/2]$. Since each $f_n$ is continuous on the closed and bounded interval $[u_0,3/2]$, each is bounded; as the convergence is uniform, the sequence is uniformly bounded. Thus, the Lebesgue Dominated Convergence Theorem says $$ \lim_{n\to\infty}\int_{u_0}^{3/2}2nu^ne^{-2n(u-1)}\,du=\int_{u_0}^{3/2}0\,du=0. $$ So it looks like I'm almost there, I just need to extend the lower limit all the way to $1$.
I think this amounts to asking whether we can switch the order of the limits in $$\lim_{n\to\infty}\lim_{u_0\to 1^+}\int_{u_0}^{3/2}2nu^ne^{-2n(u-1)}\,du, $$ and (finally!) this is where I'm stuck. I feel like this step should be easy, and it's quite possible I'm missing something obvious. That happens a lot when I try to do analysis because of my practically nonexistent background. REPLY [2 votes]: \begin{eqnarray*} \lim_{n \to \infty}\int_{0}^{n}\left(1+\frac{x}{2n}\right)^n{\rm e}^{-x}\,{\rm d}x & = & \lim_{n \to \infty}n\int_{0}^{1}\left(1+\frac{x}{2}\right)^n{\rm e}^{-nx}\,{\rm d}x = \lim_{n \to \infty}n\int_{0}^{1}{\rm e}^{n\ln\left(1\ +\ x/2\right)-nx}\,{\rm d}x \\[3mm] & = & \lim_{n \to \infty}n\int_{0}^{1}{\rm e}^{-n\,x\,/\,2}\,{\rm d}x = \lim_{n \to \infty}n \left({\rm e}^{-n\,/\,2} - 1 \over -n/2\right) = {\Large 2} \end{eqnarray*}<|endoftext|> TITLE: Is there an intuitive way to see this property of random walks? QUESTION [14 upvotes]: For an $n$-step symmetric simple random walk (start at origin 0 and each step 1 unit towards left or right with equal probability,) an interesting fact is that the probability that you stop exactly at $r$ is equal to the probability that in the whole walk you've never reached $r+1$ but you've been to $r$. Is there a intuitive way to see this? Here, $n$ and $r$ are positive even numbers. REPLY [7 votes]: Let $M_n$ denote the maximum distance to the right reached by the walk. Let $X_n$ denote the ending position of the random walk. The question is asking for an intuitive explanation for why $P(M_n = r) = P(X_n = r)$. I think it's more intuitive to look at why $P(M_n \geq r) = P(X_n \geq r) + P(X_n \geq r+1)$ first. The walks with $M_n \geq r$ can be broken into two categories, depending on the point $s$ where they end: (1) $s \geq r$ and (2) $s < r$. In the latter case, you can take the part of the path after the first step that reaches $r$ and reflect it across the point $r$ so that it ends at a new point $s' > r$. This is the reflection principle that mike mentions in a comment, and the process is reversible. Since every path that reaches a point $s \geq r$ must have $M_n \geq r$, we have $$P(M_n \geq r) = P(X_n \geq r) + P(X_n \geq r+1).$$ Now, to the OP's question. $$\begin{align*} P(M_n = r) &= P(M_n \geq r) - P(M_n \geq r+1) \\ &= P(X_n \geq r) + P(X_n \geq r+1) - P(X_n \geq r+1) - P(X_n \geq r+2) \\ &= P(X_n = r) + P(X_n = r+1). \end{align*}$$ Since $n$ is even, the walk cannot stop on an odd number. Since $r$ is also even, this means $P(X_n = r+1) = 0$ (as mike also mentions in a comment). Therefore, $$P(M_n = r) = P(X_n = r).$$<|endoftext|> TITLE: Proving $|f(z)|$ is constant on the boundary of a domain implies $f$ is a constant function QUESTION [9 upvotes]: Let $D \subset \mathbb{C}$ be a bounded domain and $f$ a function holomorphic in $D$ and continuous in its closure. Suppose that $|f(z)|$ is constant on the boundary of $D$ and that $f$ does not have zeroes in $D$. Prove that $f$ is a constant function. I think that if I can prove that $f$ attains both its maximum and minimum values on the boundary, then the result follows from the maximum principle. But I've been unable to show this. Is this the right way to approach this problem? If so, how do I show this result? Thanks in advance! REPLY [11 votes]: By the maximum modulus principle, $f$ takes its maximum modulus on the boundary. 
By the minimum modulus principle (which is just the maximum modulus principle applied to $1/f$, which requires that $f$ have no zeros), $f$ also takes its minimum modulus on the boundary. If the modulus is constant on the boundary, then the minimum modulus and the maximum modulus, both lying on the boundary, must be equal. Hence the modulus is constant on all of $D$ including the interior. And if $|f|$ is constant on all of $D$, say $|f|(D)=\{K\}$, then the image of $D$ under $f$ lies inside the circle $\{e^{iθ}K\}.$ A circle which has empty interior in $\mathbb{C},$ so is not open. But the open mapping theorem states that if a function $f$ is not constant, it must be an open map, i.e. it must send any open subset of $\mathbb{C}$ to an open subset. Finally, by contraposition, since $f(D)\subseteq \{e^{iθ}K\}$ is not open, $f$ must be constant.<|endoftext|> TITLE: Finitely generated modules over noetherian rings isomorphic to their double duals QUESTION [6 upvotes]: Let $R$ be a noetherian ring and $M$ a finitely generated $R$-module. Suppose that $M$ is isomorphic to the double dual, how can I prove that $M$ is reflexive? (i.e. it is isomorphic to the double dual through the canonical map). REPLY [6 votes]: For any module $X$, I will let $\alpha_X:X\rightarrow X^{**}$ be the canonical map you described; for $a\in X, b\in X^*$, $\alpha_X(a)(b)=b(a)$. I claim that for any $X$, $\alpha_{X^*}:X^*\rightarrow X^{***}$ is a split injection; in fact it is split by the map $\alpha_X^*$ which is dual to $\alpha_X$. The splitting map is explicitly given by $\alpha_X^*(c)(a)=c(\alpha_X(a))$ for $c\in X^{***}$ and $a\in X$. To verify this claim, let $a\in X, b\in X^*$. Then $\alpha_X^*(\alpha_{X^*}(b))(a)=\alpha_{X^*}(b)(\alpha_X(a))=\alpha_X(a)(b)=b(a)$. Since this holds for all $a$ and $b$, $\alpha_X^*\circ\alpha_{X^*}=\mathrm{id}$ as desired. Suppose now that $M$ is a noetherian module, and let $f:M\rightarrow M^{**}$ be an isomorphism. Applying the above to $X=M^*$ shows $\alpha_{M^{**}}$ is a split injection. Using the naturality of $\alpha$ and the fact that $f$ is an isomorphism, it is clear that $\alpha_M$ is a split injection. We wish to show it is an isomorphism. Let $\beta:M^{**}\rightarrow M$ be the splitting map. It suffices to show $\beta$, or equivalently $\beta\circ f$, is injective. That $\beta \circ f:M\rightarrow M$ is an isomorphism is a special case of the fact that an epimorphism from a noetherian module $M$ to itself is always an isomorphism. To prove this fact, let $\phi$ be such an epimorphism, e.g. $\phi=\beta \circ f$. Consider the chain of submodules $\ker \phi^n\subseteq M$. Because $M$ is noetherian there exists n such that $\ker \phi^n=\ker \phi^{n+1}$. If $x\in \ker \phi$, then $x=\phi^n(y)$ for some $y$. Since $\phi(x)=0$, $y\in \ker\phi^{n+1}=\ker\phi^n$, so $x=0$. Hence $\phi$ is injective. Since $\beta\circ f$ is injective, $\alpha_M$ is surjective, which completes the proof.<|endoftext|> TITLE: Finding $a$ such that $x^a \cdot \sin(1/x)$ is uniformly continuous. QUESTION [6 upvotes]: Assuming that $\sin x$ is continuous on $\mathbb R$, find all real $\alpha$ such that $x^\alpha\sin (1/x)$ is uniformly continuous on the open interval (0,1). I'm guessing that I need to show that $x^\alpha\sin x$ is continuously extendable to [0,1]. Doing that for $x=1$ is pretty trivial, but I am having trouble doing that for $x=0$. I believe that the $\lim_{x\to 0}x^\alpha\sin (1/x)=0$, but how can I find what $f(0)$ equals? I would appreciate any guidance! 
Thanks for your help in advance. REPLY [5 votes]: You're on the right track. If you can show that $f_\alpha(x) = x^\alpha \sin(1/x)$ can be continuously extended to $[0,1]$, you are done. We will show that $\lim_{x \to 0} f_\alpha(x)$ exists exactly for $\alpha > 0$ (as is 0 in this case). That means that for $\alpha > 0$, the definition $f_\alpha(0) := 0$ makes $f_\alpha\colon [0,1]\to \mathbb R$ continuous, hence uniformly (as $[0,1]$ is compact). If $\alpha > 0$, we have $$ |f_\alpha(x)| \le x^\alpha \cdot |\sin(1/x)|\le x^\alpha \to 0, \quad x \to 0 $$ If $\alpha \le 0$, consider $x_n = 1/(\pi/2 + n\pi)$, then $x_n \to 0$, but $$ f_\alpha(x_n) = (\pi/2+ n\pi)^{|\alpha|} (-1)^n $$ and this doesn't converge for $n \to \infty$. Hence, $f_\alpha$ is uniformly continuous on $(0,1)$ iff $\alpha > 0$.<|endoftext|> TITLE: Confusion regarding proof of L'Hopital rule. QUESTION [5 upvotes]: I was reading Spivak's Calculus. I have a query with regards to the proof for L'Hopital's rule for the 0/0 indeterminate form. Theorem Statement :- "Suppose that $ \lim_{x\to a} f(x) = 0 $ and $\lim_{x\to a} g(x) = 0 $ and suppose that $ \lim_{x\to a} \frac {f'(x)}{g'(x)} $ exists . Then $ \lim_{x\to a} \frac {f(x)}{g(x)} $ exists and $$ \lim_{x\to a} \frac {f(x)}{g(x)} = \lim_{x\to a} \frac {f'(x)}{g'(x)} $$ " I have a confusion with regards to the last part of the proof. Using the Cauchy Mean Value Theorem it is shown that there exists a number $\alpha_x$ in $(a,x)$ such that $$ \ \frac {f(x)}{g(x)} = \ \frac {f'(\alpha_x)}{g'(\alpha_x)} $$ Now $\alpha_x$ approaches $a$ as $x$ approaches $a$ because $\alpha_x$ is in $(a,x)$ , it follows that $$ \lim_{x\to a} \frac {f(x)}{g(x)} = \lim_{x\to a} \frac {f'(\alpha_x)}{g'(\alpha_x)} = \lim_{\alpha_x\to a} \frac {f'(\alpha_x)}{g'(\alpha_x)} = \lim_{y\to a} \frac {f'(y)}{g'(y)} $$ My query is regarding the last two of the above equations. I understand that as $x$ approaches $a$ so does $\alpha_x$ , but then how are we treating $\alpha_x$ as a 'dummy variable' and replacing it with $y$. Is not $\alpha_x$ a dependent variable on x. (I am also having some trouble in seeing how the step $$\lim_{x\to a} \frac {f'(\alpha_x)}{g'(\alpha_x)} = \lim_{\alpha_x\to a} \frac {f'(\alpha_x)}{g'(\alpha_x)}$$ is justified ) Thanks in advance. I am new to these forums and looking forward to the discussion. REPLY [3 votes]: These equalities hold because you assumed that the limit $\displaystyle\lim_{x\rightarrow a} \dfrac{f'(x)}{g'(x)}$ exists. That means it doesn't really matter how you approach $a$ (i.e. either take $x\rightarrow a$ or $\alpha_x\rightarrow a$), you have to arrive at the same value for the limit. This also explains why your last equality is okay, since you have some variable approaching $a$, it doesn't matter how you get there; essentially if the limit exists, then the variable respect to which you are taking the limit is automatically a dummy variable. If you want to be more precise, you can use the $\varepsilon$-$\delta$ definition of the limit. Say $L$ is your limit, which we assume exists. Then for all $\varepsilon>0$ there exists some $\delta>0$ such that if $|a-x|<\delta$ then $\left|L-\dfrac{f'(x)}{g'(x)}\right|<\varepsilon$. But $|a-\alpha_x|<|a-x|<\delta$, so we know that for each $\varepsilon$, we can use the same $\delta$ to see that if $|x-a|<\delta$, then $\left|L-\dfrac{f'(\alpha_x)}{g'(\alpha_x)}\right|<\varepsilon$. 
This immediately implies $\displaystyle \lim_{x\rightarrow a}\dfrac{f'(\alpha_x)}{g'(\alpha_x)} = L$.<|endoftext|> TITLE: Binomial coefficient modulo prime power QUESTION [5 upvotes]: I am trying to understand how to find binomial coefficients modulo a power of a prime. I am reading the paper by Andrew Granville for this. But I am unable to understand it completely. More specifically, I am unable to work out how each of $(N_j!)_p$ is computed efficiently. It would be really awesome if someone could show a small hand-worked example too - say $\binom{16}{5}$ mod $3^3$ or any other small example. Thanks in advance! Edit Note: Earlier I had mentioned that I am unable to work out an example of the theorem by hand, but found out I was making a mistake. So have edited this question to understand how these coefficients are computed efficiently. REPLY [2 votes]: Just to explain my understanding of theorem 2: Note that the theorem starts by setting $p$, $u$ and $r$. So for now forget $q$: If $p^{r}=2$ or $2r+1 = p$ or $2r+1=p^{2}$ then: $$(up!)_{p} \equiv \pm \prod_{j=1}^{r}(jp!)_{p}^{\beta_{j}}( \mathrm{mod}\:p^{2r})$$ else : $$(up!)_{p} \equiv \pm \prod_{j=1}^{r}(jp!)_{p}^{\beta_{j}}( \mathrm{mod}\:p^{2r+1})$$ Given that $ k p^{q+1} + a = (kp)p^{q} + a$ we can see that $$ \mathrm{if}\: x\equiv a \:\mathrm{mod}\: p^{q+1} \: \mathrm{then}\: x\equiv (\:a\%(p^{q})\:) \:\mathrm{mod}\: p^{q}$$ Now all we need is to set $r$ so that the $\mathrm{mod}$ holds for a number $t \geq q$. Remember $t = 2r$ or $t = 2r+1$ Finally, compute the $\beta_{j}$ using the formula in the article, and the $(jp!)_{p}$ with $$ (jp!)_{p} = \frac{(jp)!}{j!p^{j}} \: \mathrm{for} \: 1\leq j \leq r$$ Note that the explanation for $\pm $ is not very clear, have a look at http://www.cecm.sfu.ca/organics/papers/granville/paper/binomial/html/node2.html for a better one ($\pm$ is $-$ only when $p=2$ and $[u/2] + \sum_{j=1}^{r}[j/2]\beta_{j}$ is odd) For $(up!)_{p}$, the result $res$ will be the residue $\mathrm{mod} \: p^{t}$. Just take $res \% p^{q}$ and you are done.<|endoftext|> TITLE: Solving the differential equation $\frac{dy}{dx}=\frac{3x+4y+7}{x-2y-11}$ QUESTION [8 upvotes]: How do we solve the differential equation $$\frac{dy}{dx}=\frac{3x+4y+7}{x-2y-11}$$? I tried substituting $v=yx$ but I do not seem to be getting anywhere.Putting $u=x-2y$ yielded nothing better. Thanks! REPLY [12 votes]: A hint: Introduce new variables $X$, $Y$ via $$x:=X+\alpha, \quad y:=Y+\beta$$ and choose the constants $\alpha$, $\beta$ such that the $7$ and the $-11$ on the right side of your equation disappear. In terms of the new variables your equation now has the form $$Y'={3X+4Y\over X-2Y} ={3+4{Y\over X}\over 1-2{Y\over X}}\ .$$ This is a standard type of ODE, sometimes called "homogeneous".<|endoftext|> TITLE: Triangle inequality for hyperbolic distance QUESTION [5 upvotes]: A quick way to define the hyperbolic metric in the Poincare disc is via the cross ratio: Given points a,b in the disc, let p,q be the endpoints of the hyperbolic line (halfcircle/line perpendicular to the circle) through a and b. Then the hyperbolic metric is given by d(a,b) = log [a:b:p:q]. (Depending on the precise definition of cross ratio, the order of the four entries might vary.) This definitions has many advantages, e.g. it is easy to show that the metric is invariant under Möbius transformations preserving the circle and that the corresponding geodesics are precisely the hyperbolic lines. 
However, the disadvantage seems to be that the triangle inequality is not quite obvious. Does anybody know a quick and elegant proof of the triangle inequality for the above definition of the hyperbolic metric? REPLY [4 votes]: I would suggest doing this computation in the Klein model rather than the Poincare model. Since there's an equivalence between them by an appropriate projection sending lines to circles, and preserving cross-ratios, the computation should be equivalent. The hyperbolic metric in the Klein model is the Hilbert metric. A nice exposition of the triangle inequality for the Hilbert metric is given in Section 2 of a paper of McMullen.<|endoftext|> TITLE: Prove that sphere is the only surface which can be generated by rotation in more than one way QUESTION [7 upvotes]: In Hilbert's book Geometry and the imagination, he said that the sphere is the only surface which can be generated by rotation in more than one way. It is quite intuitive, but I can't give a rigorous proof. How to prove it? PS: Here rotation means rotating a closed curve with respect to an axis of symmetry of it that lies in the same plane. REPLY [5 votes]: Let $S$ be a nonempty subset of $\mathbb R^3$ that can be generated by rotating a closed curve $C$ around $a$ and is also invariant under rotation around $b$. Then $S$ is compact and connected because $C$ is compact and connected. If $a$ and $b$ do not intersect, let $c$ be a line intersecting both $a$ and $b$ perpendicularly. Then the rotation by $\pi$ around $a$ followed by the rotation by $\pi$ about $b$ leaves $c$ fixed but translates it by twice the distance between $a$ and $b$. Thus this is the same as a screw operation along $c$ and causes $S$ to be unbounded, contradicting compactness. If $a$ and $b$ intersect (wlog. in the origin), their rotations generate all of $SO(3)$, as joriki says. Therefore a single point $x\in S$ has as orbit a sphere around the origin (or consists of $x$ alone if $x=0$). We conclude that $S$ is the union of concentric spheres. Since $S$ is compact and connected, this leaves only the possibilities $$\tag1 S=\{0\} $$ $$\tag2S=\{x\in\mathbb R^3 \colon |x|=r\}\text{ for some }r>0$$ $$\tag3S=\{x\in\mathbb R^3 \colon |x|\le r\}\text{ for some }r>0$$ $$\tag4S=\{x\in\mathbb R^3 \colon r_1\le |x|\le r_2\}\text{ for some }0<r_1<r_2.$$ Of these, only $(2)$, a sphere, can be swept out by rotating a closed curve (the remaining cases are a single point or sets with interior points), so $S$ must be a sphere.<|endoftext|> TITLE: Average bus waiting time QUESTION [12 upvotes]: My friends and I were "thinking" yesterday in the pub about the following: if a person is standing at a bus stop that is served by a single bus which comes every p minutes, we would expect the average waiting time to be p/2 (which may or may not be correct). But we had no idea how to calculate the average waiting time if there is more than one bus. So let's assume there are n buses serving the stop, and each comes once in m1, m2 ... mn minutes. How would we go about calculating the average time a person has to wait for a bus? What is the theory behind it? Thank you REPLY [9 votes]: As mentioned in the comments, the answer depends very much on the model used to describe the passage times of the buses. The deterministic situation where the passage times of buses of type $k$ are $s_k+m_k\mathbb N$ for some initial passage time $s_k$ in $(0,m_k)$ is too unwieldy to be dealt with in full generality, hence we now study two types of assumptions. (1) Fully random passage times. Here the passage times of buses of type $k$ are a Poisson process of intensity $1/m_k$ and the passage times of buses of different types are independent.
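Before working through this model, here is a tiny Monte Carlo sketch (Python; the headways of 10, 15 and 30 minutes are made up) that can be used to check the waiting-time formula derived just below:

```python
import random

def mean_wait_poisson(ms, trials=200_000):
    """Estimate the mean waiting time when each bus line k is a Poisson process
    with mean headway ms[k]; by memorylessness the wait for line k is Exp(1/ms[k])."""
    total = 0.0
    for _ in range(trials):
        total += min(random.expovariate(1.0 / m) for m in ms)
    return total / trials

# The harmonic-type formula derived below gives 1/(1/10 + 1/15 + 1/30) = 5 minutes
print(mean_wait_poisson([10, 15, 30]))
```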
Then, starting at time $t_0$, the next bus of type $k$ arrives after a random time exponential with mean $m_k$ hence the waiting time $T$ is such that $$ \mathbb P(T\gt t)=\prod_k\mathbb P(\text{no bus of type}\ k\ \text{in}\ (t_0,t_0+t))=\prod_k\mathrm e^{-t/m_k}=\mathrm e^{-t/m}, $$ where $$ \frac1m=\sum_k\frac1{m_k}. $$ In particular, $T$ is exponentially distributed with parameter $1/m$, hence $$ \mathbb E(T)=m. $$ The case $m_1=m_2=\cdots=m_n$ yields $$ \mathbb E(T)=\frac{m_1}{n}. $$ (2) Fully periodic passage times with random uniform initializations Here, buses of type $k$ pass at times in $S_k+m_k\mathbb N$ where $S_k$ is uniform on $(0,m_k)$ and the random variables $(S_k)$ are independent. Now, starting at time $t_0$, the next bus of type $k$ arrives after time $t_0+t$ if $t\leqslant m_k$ and if $S_k$ is not in a subinterval of $(0,m_k)$ of lenth $t/m_k$. Thus, $$ \mathbb P(T\gt t)=\prod_k\left(1-\frac{t}{m_k}\right),\qquad t\leqslant \bar m=\min\limits_km_k. $$ A consequence is that $$ \mathbb E(T)=\int_0^{+\infty}\mathbb P(T\gt t)\,\mathrm dt=\int_0^{\bar m}\prod_k\left(1-\frac{t}{m_k}\right)\,\mathrm dt. $$ Expanding the product yields $$ \mathbb E(T)=\sum_{i\geqslant0}(-1)^i\bar m^{i+1}\frac1{i+1}\sum_{|K|=i}\frac1{m_K}, $$ where, for every subset $K$, $$ m_K=\prod_{k\in K}m_k. $$ For example, time intervals $m_1$, $m_2$, $m_3$ with minimum $m_1$ yield $$ \mathbb E(T)=m_1-\frac{m_1^2}2\left(\frac1{m_1}+\frac1{m_2}+\frac1{m_3}\right)+\frac{m_1^3}{3}\left(\frac1{m_1m_2}+\frac1{m_2m_3}+\frac1{m_3m_1}\right)-\frac{m_1^4}{4m_1m_2m_3}, $$ which can be simplified a little bit (but not much) into $$ \mathbb E(T)=\frac{m_1}2-\frac{m_1^2}{6m_2}-\frac{m_1^2}{6m_3}+\frac{m_1^3}{12m_2m_3}. $$ The case $m_1=m_2=\cdots=m_n$ yields $$ \mathbb E(T)=\frac{m_1}{n+1}. $$<|endoftext|> TITLE: Eigenvalue decomposition of block covariance matrix for Canonical Correlation Analysis (CCA) QUESTION [7 upvotes]: Edited: My question is related to a tutorial I was reading. The covariance matrix is a block matrix where $C_{xx}$ and $C_{yy}$ are within-set covariance matrices and $C_{xy} = C_{yx}^T$ are between-sets covariance matrices. $$ \left[\begin{array}{r r} C_{xx} & C_{xy}\\ C_{yx} & C_{yy} \end{array}\right] $$ The tutorial says that the canonical correlations between $x$ and $y$ can be found by solving the eigenvalue equations $$ C_{xx}^{-1}C_{xy}C_{yy}^{-1}C_{yx} \hat w_x = \rho^2 \hat w_x \\ C_{yy}^{-1}C_{yx}C_{xx}^{-1}C_{xy} \hat w_y = \rho^2 \hat w_y $$ where the eigenvalues are the squared canonical correlations and the eigenvectors and are the normalized canonical correlation basis vectors. What I do not understand is how the eigenvalue equations are found by using the covariance matrix? Can someone please explain how we get those sets of equations? Thanks. REPLY [3 votes]: Canonical correlation between two random vectors $X$ and $Y$ is obtained as the maximal correlation between $a^TX$ and $b^TY$, where the maximum is taken over vectors $a$ and $b$. We can assume without loss of generality that $a^T \Sigma_x a = b^T \Sigma_y b = 1$. Assume for simplicity also that $E(X) = 0$ and $E(y) = 0$. The the correlation between $a^TX$ and $b^TY$ is just $$E(a^T X)(b^TY) = E(a^T X)(Y^Tb) = a^T E(XY^T) b = a^T \Sigma_{xy}b.$$ You can now use either Lagrange duality or Cauchy-Schwarz. Say we use Lagrange duality. The optimal should maximize $$a^T \Sigma_{xy}b -\frac12\mu (a^T \Sigma_x a) - \frac12\lambda (b^T \Sigma_y b)$$ over $a$ and $b$. ($\frac12$s in the above are for convenience.) 
Differentiating with respect to $a$ and $b$ gives $$ \begin{align*} \Sigma_{xy} b - \mu \Sigma_x a &= 0 \\ \Sigma_{yx} a- \lambda \Sigma_y b &= 0, \end{align*} $$ Multiplying the first by $a^T$ and the second by $b^T$ and enforcing the constraints shows that $\mu = \lambda$. Then, if $\Sigma_x$ and $\Sigma_y$ are invertible you can solve the equations for what you have. That is, $$ \begin{align*} \Sigma_x^{-1} \Sigma_{xy} b - \mu a &= 0 \\ \Sigma_y^{-1} \Sigma_{yx} a- \mu b &= 0, \end{align*} $$ implying $$ \begin{align*} \frac{1}{\mu} \Sigma_x^{-1} \Sigma_{xy} \Sigma_y^{-1} \Sigma_{yx} a - \mu a &= 0 \end{align*} $$<|endoftext|> TITLE: Sufficient conditions on target space for the existence of regular conditional probability QUESTION [5 upvotes]: Suppose $(\Omega, \mathcal{F}, \mathbb{P})$ is a probability space, $(S, \mathcal{B}(S))$ and $(T, \mathcal{B}(T))$ are topological spaces with their Borel $\sigma$-algebras, and $X: \Omega \to S$ and $Y: \Omega \to T$ are random variables. I know there are conditions I can put on $(\Omega, \mathcal{F}, \mathbb{P})$ to guarantee I can find regular conditional probabilities for an arbitrary random variable and measurable map. I'm wondering whether there are topological conditions I can put on $S$ and $T$ which guarantee that there exists a regular conditional probability for $Y$ given $X$. By regular conditional probability, I mean a map $\nu: S\times \mathcal{B}(T) \to [0,1]$ such that: (1) For each $s \in S$, $\nu(s, \cdot)$ is a probability measure on $(T, \mathcal{B}(T))$, (2) For each $B\in \mathcal{B}(T)$, $\nu(\cdot, B)$ is measurable, (3) For each $A\in \mathcal{B}(S), B\in \mathcal{B}(T)$, $\mathbb{P}\{X\in A, Y\in B\} = \int_A \nu(\cdot,B)d\mathbb{P}_X$. Where $\mathbb{P}_X$ is the pushforward probability measure of $X$. REPLY [7 votes]: The two random variables give you a probability measure $\mu$ on $\mathcal{B}(S)\otimes\mathcal{B}(T)$. It is enough to get a kernel $\kappa:S\times\mathcal{B}(T)\to[0,1]$ that reproduces $\mu$ when applied to the marginal of $\mu$ on $\mathcal{B}(S)$. Such a kernel is known as a product regular conditional probability. A sufficient, and potentially necessary, condition is that $\mathcal{B}(S)$ is countably generated and the marginal on that space perfect in the sense of Gnedenko and Kolmogorov, or, equivalently (for e.g. $\sigma$-algebras), compact in the sense of Marczewski. This is shown in The Existence of Regular Conditional Probabilities: Necessary and Sufficient Conditions by Arnold Faden (1979). A condition that is sufficient and necessary for countably generated probability spaces to admit only perfect probability measures is that $\mathcal{B}(S)$ is universally measurable. A topological condition that guarantees that is being a Hausdorff space that is the image of Baire space $\mathbb{N}^\mathbb{N}$ under a continuous function. A more restrictive condition, but probably the most popular, is that $S$ is Polish, that is separable and completely metrizable. There is a weaker notion of conditional probability in which the kernel has only to be measurable with respect to the completion of the marginal on $\mathcal{B}(S)$. In this case, one can be much more general and work with spaces that are not countably generated. 
The seminal paper for this is Existence of Conditional Probabilities by Hoffman-Jorgensen (1971).<|endoftext|> TITLE: Gradient of a Vector Valued function QUESTION [18 upvotes]: I read somewhere, gradient vector is defined only for scalar valued functions and not for vector valued functions. When gradient of a vector is definitely defined(correct, right?), why is gradient vector of a vector valued function not defined? Is my understanding incorrect? Is there not a contradiction? I would appreciate clear clarification. Thank You. REPLY [2 votes]: Sure enough a vector valued function ${\bf f}$ can have a derivative, but this derivative does not have the "type" of a vector, unless the domain or the range of ${\bf f}$ is one-dimensional. The general setup is the following: Given a function $${\bf f}:\quad{\mathbb R}^n\to{\mathbb R}^m,\qquad {\bf x}\mapsto {\bf y}={\bf f}({\bf x})$$ and a point ${\bf p}$ in the domain of ${\bf f}$ the derivative of ${\bf f}$ at ${\bf p}$ is a linear map $d{\bf f}({\bf p})=:L$ that maps the tangent space $T_{\bf p}$ to the tangent space $T_{\bf q}$, where ${\bf q}:={{\bf f}({\bf p})}$. The matrix of $L$ with respect to the standard bases is the Jacobian of ${\bf f}$ at ${\bf p}$ and is given by $$\bigl[L\bigr]=\left[{\partial y_i\over\partial x_k}\right]_{1\leq i\leq m,\ 1\leq k\leq n}\ .$$ If $m=1$, i.e., if ${\bf f}$ is in a fact a scalar function, then the matrix $\bigl[L\bigr]$ has just one row (of length $n$): $$\bigl[L\bigr]=\bigl[{\partial f\over\partial x_1} \ {\partial f\over\partial x_2}\ \ldots\ {\partial f\over\partial x_n}\bigr]_{\bf p}\ .$$ The $n$ entries of this one-row matrix can be viewed as coordinates of a vector which is then called the gradient of $f$ at ${\bf p}$.<|endoftext|> TITLE: What are the possible two-dimensional Lie algebras? QUESTION [6 upvotes]: I read in book written by Karin Erdmann and Mark J. Wildon's "Introduction to Lie algebras" "Let F be in any field. Up to isomorphism, there is a unique two-dimensional nonabelian Lie algebra over F. This Lie algebra has a basis {x, y} such that its Lie bracket is defined by [x, y] = x" How to prove that Lie bracket [x,y] = x satisfies axioms of Lie algebra such that [a,a] = 0 for $a \in L$ and satisfies jacobi identity and can some one give me an example of two dimensional nonabelian Lie algebra REPLY [16 votes]: In $2$ dimensional case, we have $[x,y]=0$ or $[x,y]=z=ax+by$ if $a$ is zero then by changing the variables you get what you looking for but if $a$ was not zero then divide both sides by $a$ so that it becomes $$[x,y/a]=x+by/a$$ now change the $x+by/a$ variable to $z$. $$[ z-by/a,y/a]=[z,y/a]=z$$ then change $y/a$ to $u$ so that you get $[z,u]=z$<|endoftext|> TITLE: Is the injection $\ell^p \subset \ell^q$ continuous for $p TITLE: Enumeration of set partitions QUESTION [7 upvotes]: The Stirling number of the second kind $S(n,k)$, where $S(n,k) = \frac{1}{k!}\sum\limits_{j=0}^k(-1)^{k-j}\left(\begin{array}{l}k\\j\end{array}\right)j^n$ Gives the number of unique unlabeled, unordered partitions of $n$ elements into $k$ partitions. I am interested in determining a procedure for enumerating all of these partitions. What I have had in mind is to start with a vector $v=\left[\begin{array}{l}i_1\\i_2\\{\vdots}\\i_n\end{array}\right]$ And then generate a set of "partition vectors'', each with $n$ elements, having a form (for $k=2$) of $[x_1~x_2~\cdots~x_n]$, where each $x_i$ is (following some suitable algorithm) chosen from $\left\{0~1\right\}$. 
I didn't know whether this question was more appropriate for Math or Stackoverflow, so I asked it here since this site allows pretty formatting of equations. edit: Thanks in part to comments by @Henry I've made some progress in the special cases where $k=2$ and also for $k=3$ (where $n$ is not divisible by $3$). For $k=2$, an alternative to the first equation is: $\sum\limits_{i=1}^{[[\frac{n}{2}]]}\frac{1}{\eta\,!}\left(\begin{array}{c}n\\n-i\end{array}\right)$ where $\eta$ is equal to the number of partitions having the same number of members. To implement the $k=2$ case for any arbitrary value of $n$, start with a vector $v_{\circ}$ of length $n$ filled with zeroes, and another vector, $v_\alpha$, with values $1,2,\cdots,n$, in that order. Generate the set $A$ of possible combinations of $n-i$ elements chosen from $v_\alpha$. The members of $A$ are used to identify the indices of elements of successive copies of $v_{\circ}$ whose values should be changed to ones. At this point, the number of vectors generated will exceed $S(n,2)$, because a partition like $[0~1~1~0]$, which is equivalent to $[1~0~0~1]$, is generated twice. In the solutions containing two equal-sized partitions, the redundant ones can be taken out by deleting all of the sets where $n-i = i$ which have a $0$ prior to any $1$. For $k=3$, start by listing the $N$ possible combinations of three partition sizes. So for $n=7$, there is: $\begin{array}{lrrr} \phi_1&5&1&1\\ \phi_2&4&2&1\\ \phi_3&3&3&1\\ \phi_4&3&2&2\\ \end{array}$ The total number of possible partitions is: $\frac{1}{2!}\left(\begin{array}{c}7\\5\end{array}\right)\left(\begin{array}{c}2\\1\end{array}\right) + \left(\begin{array}{c}7\\4\end{array}\right)\left(\begin{array}{c}3\\2\end{array}\right)+ \frac{1}{2!}\left(\begin{array}{c}7\\3\end{array}\right)\left(\begin{array}{c}4\\3\end{array}\right)+ \frac{1}{2!}\left(\begin{array}{c}7\\3\end{array}\right)\left(\begin{array}{c}4\\2\end{array}\right)$ To enumerate the partitions corresponding to each of these terms, set up a general schema: $\begin{array}{lrrr} \phi_i&a&b&c\end{array}$ for contributing $\frac{1}{\eta\,!}\left(\begin{array}{c}n\\a\end{array}\right)\left(\begin{array}{c}b+c\\b\end{array}\right)$ possible partitions. (Note that this could be simplified to $\frac{1}{\eta\,!}\cdot\frac{n!}{a!b!c!}$, but this would not facilitate the computational procedure.) where $a+b+c=n$. Start again with a vector $v_\circ$ containing $n$ zeroes, and a vector $v_\alpha$ containing $1,2,\cdots,n$. As before, generate the set A of possible combinations of $a$ elements chosen from $v_\alpha$, and use these values to specify indices in successive copies of $v_\circ$ whose values should be changed to 1. Each of the copies of $v_\circ$ now contains $a$ ones and $b+c$ zeroes. At this point, one will require the use of a function $W(v,j)$, which returns the index pertaining to the $j^{th}$ zero in vector $v$. Now make a vector $v_\alpha^\prime$ containing $1,2,\cdots,b+c$. Generate a set $A^\prime$ of possible combinations of $b$ elements chosen from $v_\alpha^\prime$, and use these values to specify values of $j$ which are fed into $W$, which specifies the indices in copies of copies of $v_\circ$ whose values should be changed to $2$. To remove redundant partitions, delete every copy of $v_\circ$ containing a $0$ which precedes all $2$'s, if $b=c$, and delete every copy of $v_\circ$ containing a $2$ which precedes all $1$'s, if $a=b$. 
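For concreteness, here is a minimal Python sketch of such an enumeration (an illustration only, and not the 0/1/2-vector bookkeeping described above): it uses the standard recursion "place each element into an existing block, or open a new block as long as we have not already created $k$ of them", and it checks the resulting count against the Stirling number $S(n,k)$ from the formula at the top of the question.

```python
from math import comb, factorial

def partitions_into_k_blocks(elements, k):
    """Yield every partition of `elements` into exactly k nonempty blocks (each exactly once)."""
    def extend(i, blocks):
        if i == len(elements):
            if len(blocks) == k:
                yield [list(b) for b in blocks]
            return
        x = elements[i]
        # put x into one of the existing blocks
        for b in blocks:
            b.append(x)
            yield from extend(i + 1, blocks)
            b.pop()
        # or start a new block (never more than k blocks in total)
        if len(blocks) < k:
            blocks.append([x])
            yield from extend(i + 1, blocks)
            blocks.pop()
    yield from extend(0, [])

def stirling2(n, k):
    """Stirling number of the second kind, via the inclusion-exclusion formula."""
    return sum((-1) ** (k - j) * comb(k, j) * j ** n for j in range(k + 1)) // factorial(k)

parts = list(partitions_into_k_blocks(list(range(1, 5)), 2))
print(len(parts), stirling2(4, 2))  # both print 7
for p in parts:
    print(p)
```

Each partition comes out exactly once because blocks are created in order of their smallest elements, so no duplicate-removal step is needed.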
REPLY [5 votes]: An algorithm for generating all partitions of $n$ elements into $k$ sets is given in Volume 4A of Knuth's The Art of Computer Programming (which was apparently finally published last year; I didn't know that). The subsection is available on his website as fascicle 3b; the algorithm is on page $27$.<|endoftext|> TITLE: How do I find $\frac{\text{d}}{\text{d}z}\left(z\bar{z}\right)$? QUESTION [8 upvotes]: I am seeking $\frac{\text{d}}{\text{d}z}\left(z\bar{z}\right)$ where $f(z)=z\bar{z}.$ And I know that I need to use the following definition of the derivative: $$f'(z)=\lim_{\Delta z\to 0}{\frac{f(z_0+\Delta z)-f(z_0)}{\Delta z}}.$$ However, I'm not sure if I'm using the definition correctly when I plug in $f(z)$: \begin{align*} f'(z)&=\lim_{\Delta z\to 0}{\frac{(z+\Delta z)(\overline{z+\Delta z})-z\bar{z}}{\Delta z}}\\&=\lim_{\Delta z\to 0}{\frac{\overline{\Delta z}(z+\Delta z)+\bar{z}\Delta z}{\Delta z}}\\&=\lim_{\Delta z\to 0}{\frac{\overline{\Delta z}(z+\Delta z)}{\Delta z}}+\lim_{\Delta z\to 0}{\frac{\bar{z}\Delta z}{\Delta z}}\\&=\lim_{\Delta z\to 0}{\frac{\overline{\Delta z}(z+\Delta z)}{\Delta z}}+\bar{z} \end{align*} Assuming that I've maneuvered the limit above properly, I'm not sure how to continue from the final line... REPLY [2 votes]: Your algebraic manipulation is all correct. Now consider the expression $$ \frac{\overline{\Delta z}}{\Delta z}. $$ which shows up in your last line. If I take the complex conjugate of this, I get its reciprocal. Therefore it lies on the unit circle (because $w = 1/\overline{w} \Rightarrow ||w||=1$). As I choose different small complex values of $\Delta z$, this expression is simply the point on the unit circle whose angle with the origin is $-2$ times that of $\Delta z$. In particular, I can get any point on the unit circle I like, no matter how small I make $\Delta z$, so your limit doesn't converge, unless the thing you're multiplying by, namely $z + \Delta z$ tends to 0, i.e. unless $z = 0$, in which case you get 0.<|endoftext|> TITLE: Space with non-convergent Cauchy sequence QUESTION [7 upvotes]: Not all sequences that are Cauchy are convergent. Here is what I think the example should be. Somehow the metric space is open but does not contain its limit points. Is this the right direction of thought? REPLY [10 votes]: Just take any sequence of rational numbers that converges to an irrational number. Then the sequence is Cauchy in $ \mathbb{Q} $, but does not converge in $ \mathbb{Q} $.<|endoftext|> TITLE: Is skew symmetry required for a flow network? QUESTION [6 upvotes]: From Wikipedia: $G(V,E)$ is a finite directed graph in which every edge $\ (u,v) \in E$ has a non-negative, real-valued capacity $\ c(u,v)$. A flow network is a real function $\ f:V \times V \rightarrow \mathbb{R}$ with the following three properties for all nodes $\ u$ and $\ v$: Capacity constraints: ... Skew symmetry: $\ f(u,v) = - f(v,u)$. The net flow from $\ u$ to $\ v$ must be the opposite of the net flow from $\ v$ to $\ u$. Flow conservation:... For any two vertices $v$ and $u$, the two edges $(v,u)$ and $(u,v)$ may not both exist. Even if they both exist, I don't understand why skew symmetry may be required. Also I didn't see "Skew symmetry" is required in the definition in books such as Introduction to graph theory by West, and Combinatorial optimization by Korte and Vygen). So I wonder if skew symmetry is or may be required for a flow network? Thanks! 
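Before the answer below, it may help to see the two bookkeeping conventions side by side in code. This is a toy example with made-up nodes and numbers, purely for illustration: raw non-negative flows on directed edges on the one hand, and the skew-symmetric net-flow function $f(u,v)=-f(v,u)$ built from them on the other.

```python
# Hypothetical toy network: s -> a -> t, s -> b -> t, plus an internal edge a -> b.
nodes = ["s", "a", "b", "t"]
idx = {v: i for i, v in enumerate(nodes)}

# Convention 1: a non-negative flow value on each directed edge.
edge_flow = {("s", "a"): 3, ("s", "b"): 2, ("a", "b"): 1, ("a", "t"): 2, ("b", "t"): 3}

# Convention 2: the net-flow matrix, skew-symmetric by construction.
n = len(nodes)
net = [[0] * n for _ in range(n)]
for (u, v), f in edge_flow.items():
    net[idx[u]][idx[v]] += f
    net[idx[v]][idx[u]] -= f  # this line is exactly what "skew symmetry" encodes

# Skew symmetry: f(u, v) = -f(v, u) for all pairs.
assert all(net[i][j] == -net[j][i] for i in range(n) for j in range(n))

# Flow conservation at the internal nodes a and b: net outflow is zero.
for v in ("a", "b"):
    assert sum(net[idx[v]][j] for j in range(n)) == 0

for row in net:
    print(row)
```

Both conventions carry the same information; the skew-symmetric one just folds a flow and its reverse into a single signed number.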
REPLY [2 votes]: Usually, one represents a flow like this in a matrix, where the entry at position $(u,v)$ is the flow from $u$ to $v.$ If there is no such edge, then we consider the flow to be 0. You can of course have flow between two nodes along multiple edges, in different directions, but these edges may be reduced to a single oriented edge with non-negative flow. So the assumption above that you seem to get stuck on is just the assumption that we've already made this simplification. If you do not like this simplification, you may instead add an extra node in the middle of each multi-edge. For example, if there are three edges between nodes A and B, add an extra node in the middle of each such edge, and adjust the flow in the obvious manner. Both of these adjustments make the model simpler, since we only need to store half of the matrix, for example (as it is skew-symmetric).<|endoftext|> TITLE: Borel $\sigma$ algebra on a topological subspace. QUESTION [11 upvotes]: Let $T$ be a topological space, with Borel $\sigma$-algebra $B(T)$ (generated by the open sets of $T$). If $S\in B(T)$, then the set $C:=\{A\subset S:A\in B(T)\}$ is a $\sigma$-algebra of $S$. My question is, if I also generate the Borel $\sigma$-algebra $B(S)$ by treating $S$ as a topological subspace, with the inherited topology from $T$, is it true that $B(S)=C$? REPLY [14 votes]: Note that if $Y$ is any subspace of $T$, then $B(Y) = \{ A \cap Y : A \in B(T) \}$. As $\{ A \cap Y : A \in B(T) \}$ clearly contains all open subsets of $Y$, and is itself a $\sigma$-algebra on $Y$, then $B(Y) \subseteq \{ A \cap Y : A \in B(T) \}$. As the inclusion map $i : Y \to T$ is continuous, then $i^{-1} [ A ]$ is a Borel subset of $Y$ for each Borel $A \subseteq T$, but $i^{-1} [ A ] = A \cap Y$, and so $\{ A \cap Y : A \in B(T) \} \subseteq B(Y)$. If $S \subseteq T$ is Borel, then $A \cap S$ is a Borel subset of $T$ for all Borel $A \subseteq T$, and therefore $\{ A \cap S : A \in B(T) \} = \{ A \in B(T) : A \subseteq S \}$.<|endoftext|> TITLE: O(n) as embedded submanifold QUESTION [10 upvotes]: I want to show that the set of orthogonal matrices, $O(n) = \{A \in M_{n \times n} | A^tA=Id\}$, is an embedded submanifold of the set of all $n \times n$ matrices $M_{n \times n}$. So far, I have used that this set can be described as $O(n) = f^{-1}(Id)$, where $f: M_{n \times n} \rightarrow Sym_n = \{A \in M_{n \times n} | A^t = A\}$ is given by $f(A) = AA^t$, and that the map $f$ is smooth. Hence I still need to show that $Id$ is a regular value of this map, i.e. that the differential map $f_*$ (or $df$ if you wish) has maximal rank in all points of $O(n)$. How do I find this map? I tried taking a path $\gamma = A + tX$ in $O(n)$ and finding the speed of $f \circ \gamma$ at $t=0$, which appears to be $XA^t + AX^t$, but I don't see how to proceed. Another way I thought of was by expressing everything as vectors in $\mathbb{R}^{n^2}$ and $\mathbb{R}^{\frac{n(n+1)}{2}}$, but the expressions got too complicated and I lost track. REPLY [7 votes]: I think you are almost done. As you said, it suffices to show that $\mathrm{Id}$ is a regular value of $f$, i.e. for each $A\in O(n)$, $f_*:T_A M_{n\times n}\to T_{\mathrm{Id}}Sym_n$ is surjective, where $T_pX$ denotes the tangent space of $X$ at $p$. Note that $T_A M_{n\times n}$ (resp. $T_{\mathrm{Id}}Sym_n$) can be identified with $M_{n\times n}$ (resp. $Sym_n$) and, as you have seen, $f_*(X)=XA^t+AX^t$.
Then you only need to verify that for any $S\in Sym_n$, there exists $X\in M_{n\times n}$, such that $XA^t+AX^t=S$. At least you may choose $X=\dfrac{1}{2}SA$.<|endoftext|> TITLE: Is the Taylor series comparable to Fourier series and spherical harmonics? QUESTION [8 upvotes]: I am currently trying to grasp spherical harmonics and try to digest that we proved that the sine and cosine functions are a basis for the $L^2$ space of the squared-integrable functions. So as far as I have understood it, the functions that can be integrated with $$\int_0^1 \mathrm dx \, f^2(x)$$ are forming a vector space. Then all the $e_n = \cos(n \pi x)$ (and sine) form a basis for that space. So any (even, since I like to drop the sine terms) function $f$ can be represented as a linear combination of the basis vectors like: $$f = \sum a_n e_n$$ To get the coefficients $a_n$, I need to project the vector (i. e. the function) onto the basis vector (i. e. the sine) using the inner (dot) product, like so: $$ a_n = \left\langle f(x), e_n \right\rangle_F = \int_0^1 \mathrm dx \, f(x) \cos(n \pi x)$$ Now I was wondering whether the Taylor series is such a representation with orthogonal functions $e_n = x^n$ as well. Is the “Taylor inner product” something like this then? $$ a_n = \left\langle f(x), x^n \right\rangle_T = \frac{1}{n!} \left. \frac{\mathrm d^n f(x)}{\mathrm d x^n} \right|_{x = 0}$$ In the end, I will have a series like so : $$ f = \sum a_n e_n =\sum\limits_{n} \frac{1}{n!} \left. \frac{\mathrm d^n f(x)}{\mathrm d x^n} \right|_{x = 0} x^n$$ REPLY [11 votes]: No. In short, the reason is because Taylor series are a local description of a function, whereas Fourier series and spherical harmonics incorporate global data. More precisely, the Taylor series of a smooth function at $0$ doesn't change if you modify the function arbitrarily outside of a neighborhood of $0$. You are free to introduce an inner product on, say, polynomials given by $$\langle x^n, x^m \rangle = \delta_{nm}$$ which formally reproduces the Taylor series of a polynomial, but the above argument strongly suggests that this inner product can't be described via integration against a function supported away from $0$, and in addition any integral of the form $$\langle x^n, x^m \rangle = \int_{\mathbb{R}} x^n x^m g(x) \, dx$$ would have the property that $\langle x^n, x^m \rangle = \langle 1, x^{n+m} \rangle$. However, in the special case of holomorphic functions, Taylor series can be related to Fourier series using the Cauchy integral formula. Intuitively this is because the local behavior of holomorphic functions determine their global behavior.<|endoftext|> TITLE: Tensor products commute with inductive limit QUESTION [10 upvotes]: How to prove, that tensor products commute with direct limits, if the main ring is not the same? For every $i$ we have modules $L_i$ and $M_i$ over a ring $A_i$, and for every $i \geq j$ homomorphisms $f^i_j: L_i \rightarrow L_j$, $g^i_j: M_i \rightarrow M_j$, $u^i_j: A_i \rightarrow A_j$, such that $f^i_j (al) = u^i_j(a)f^i_j(l)$, $g^i_j(am) = u^i_j (a) g^i_j (m)$ for every $a \in A_i,\, l\in L_i, \, m \in M_i$. To prove, that $\varinjlim (L_i \otimes_{A_i} M_i) = (\varinjlim L_i)\otimes_{\varinjlim A_i} (\varinjlim M_i)$. REPLY [7 votes]: Let $L=\lim_i L_i$, $M=\lim_i M_i$ and $A=\lim_i A_i$. 
For every $i$, we have canonical maps $$L_i\otimes_{A_i} M_i\to L\otimes_{A_i} M \twoheadrightarrow L\otimes_A M.$$ They pass to the inductive limit $$ g: \lim_i (L_i\otimes_{A_i} M_i) \to L\otimes_A M.$$ For all $i$, we have an $A_i$-bilinear map by composing $$ L_i\times M_i \to L_i\otimes_{A_i} M_i\to \lim_i (L_i\otimes_{A_i} M_i),$$ hence an $A_i$-bilinear map $$ L\times M\to \lim_i (L_i\otimes_{A_i} M_i)$$ which is $A$-bilinear. Hence we get an $A$-linear map $$ f: L\otimes_A M\to \lim_i (L_i\otimes_{A_i} M_i).$$ We check directly that $f$ and $g$ are inverse to each other. Edit: Added a proof of the above claim. Proof: To check that $g\circ f=\mathrm{Id}$, it is enough to check that the equality holds for vectors of the form $x\otimes y$ with $x\in L, y\in M$ because they generate $L\otimes_A M$. For all $i$, denote by $r_i : L_i\to L$, $s_i: M_i\to M$ the canonical maps. Then there exist $i, x_i\in L_i$ and $y_i\in M_i$ such that $x=r_i(x_i)$ and $y=s_i(y_i)$. By construction, $f(x\otimes y)$ is the image of $x_i\otimes y_i\in L_i\otimes_{A_i} M_i$ in $\lim_i (L_i\otimes_{A_i} M_i)$. On the other hand, again by construction, $g$ takes this image to $r_i(x_i)\otimes s_i(y_i)=x\otimes y$ in $L\otimes M$. So $g\circ f=\mathrm{Id}$. The equality $f\circ g=\mathrm{Id}$ is proved similarly.<|endoftext|> TITLE: How to take the gradient of the quadratic form? QUESTION [64 upvotes]: It's stated that the gradient of: $$\frac{1}{2}x^TAx - b^Tx +c$$ is $$\frac{1}{2}A^Tx + \frac{1}{2}Ax - b$$ How do you grind out this equation? Or specifically, how do you get from $x^TAx$ to $A^Tx + Ax$? REPLY [8 votes]: I am just writing this answer for future reference and for clarity because the accepted answer is not completely correct and may cause confusion. I will use a simple proof to better understand the multiplication (product) rule in the calculation of $\nabla\mathbf{x^TAx}$. For $\mathbf{x}\in \mathbb{R}^n$ and $\mathbf{A}\in \mathbb{R}^{n\times n}$ let: $$f(g(\mathbf{x}),h(\mathbf{x}))=\langle g(\mathbf{x}),h(\mathbf{x})\rangle=g^T(\mathbf{x})h(\mathbf{x})$$ where: \begin{equation} \begin{split} &g(\mathbf{x})=\mathbf{x}\\ &h(\mathbf{x})=\mathbf{Ax} \end{split} \end{equation} From the definition of $f$, it is obvious that $f(g(\mathbf{x}),h(\mathbf{x}))=\mathbf{x^TAx}$. In order to calculate the derivative, we will use the following fundamental properties, where $\mathbf{I}$ is the identity matrix: \begin{equation} \begin{split} &\dfrac{\partial \mathbf{A^Tx}}{\partial\mathbf{x}}=\dfrac{\partial \mathbf{x^TA}}{\partial\mathbf{x}}=\mathbf{A^T}\\ &\dfrac{\partial \mathbf{x}}{\partial\mathbf{x}}=\mathbf{I} \end{split} \end{equation} Hence, from the multiplication rule (you can see the rule on Wikipedia), we get: \begin{equation} \begin{split} \dfrac{df(g(\mathbf{x}),h(\mathbf{x}))}{d\mathbf{x}}&=g^T(\mathbf{x})\dfrac{\partial h(\mathbf{x})}{\partial\mathbf{x}}+h^T(\mathbf{x})\dfrac{\partial g(\mathbf{x})}{\partial\mathbf{x}}=\\ &=\mathbf{x^T}\dfrac{\partial \mathbf{Ax}}{\partial\mathbf{x}}+(\mathbf{Ax})^T\dfrac{\partial \mathbf{x}}{\partial\mathbf{x}}=\\ &=\mathbf{x^TA}+\mathbf{x^TA^TI}=\\ &=\mathbf{x^TA}+\mathbf{x^TA^T}=\\ &=\mathbf{x^T}(\mathbf{A+A^T}) \end{split} \end{equation} As a result, from the definition of the gradient, we get: $$ \nabla f=\Bigg(\dfrac{df}{d\mathbf{x}}\Bigg)^T=(\mathbf{x^T}(\mathbf{A+A^T}))^T=(\mathbf{A^T+A})\mathbf{x} $$ Note: The reason I did the proof this way is to be more generalizable.
So you can can plug arbitrary functions $g$ and $h$ and use the above multiplication rule to derive the result.<|endoftext|> TITLE: A property of a sheaf in an arbitrary category QUESTION [6 upvotes]: Let $\phi,\psi:\mathscr{F}\rightarrow\mathscr{G}$ be morphisms of sheaves on $X$ with values in a category $\mathbf{C}$. Let's assume that $\mathbf{C}$ is nice enough to have products, equalizers, etc. Is it true that if $\phi_x=\psi_x:\mathscr{F}_x\rightarrow\mathscr{G}_x\ \forall x\in X$, then $\phi=\psi$ ? The proof of this is easy in a concrete category, but it doesn't seem so easy to do that without using elements of sets and instead using things like equalizers, in terms of which sheaf is defined (Wikipedia). REPLY [10 votes]: First, in order to make sense of the sheaf condition, we must assume that the category $\mathcal{C}$ has enough limits; for simplicity we assume $\mathcal{C}$ has all limits. Next, to make sense of stalks, we must assume that $\mathcal{C}$ has enough filtered colimits; for simplicity we assume $\mathcal{C}$ has colimits for all small filtered diagrams. Write $\textbf{Sh}(X; \mathcal{C})$ for the category of $\mathcal{C}$-valued sheaves on $X$. The key condition is the following: A morphism $\phi : \mathscr{F} \to \mathscr{G}$ in $\textbf{Sh}(X; \mathcal{C})$ is an isomorphism if and only if the stalk $\phi_x : \mathscr{F}_x \to \mathscr{G}_x$ is an isomorphism for all points $x$ in $X$. If $\mathcal{C} = \textbf{Set}$, this is just the fact that the topos $\textbf{Sh}(X)$ has enough points. This property is absolutely essential for doing anything useful with stalks. Lemma. The functor $x^* : \textbf{Sh}(X; \mathcal{C}) \to \mathcal{C}$ that sends a sheaf $\mathscr{F}$ to its stalk $\mathscr{F}_x$ has a right adjoint. Proof. The usual construction of the skyscraper sheaf goes through without problems.  ◼ Lemma. Let $\textbf{Psh}(X; \mathcal{C})$ be the category of $\mathcal{C}$-valued presheaves on $X$. Limits of small diagrams in $\textbf{Psh}(X; \mathcal{C})$ exist and can be computed componentwise. $\textbf{Sh}(X; \mathcal{C})$ is closed under small limits in $\textbf{Psh}(X; \mathcal{C})$. Proof. The first claim is standard abstract nonsense, and the second claim is essentially a consequence of the fact that limits preserve limits. ◼ Proposition. Let $\phi, \psi : \mathscr{F} \to \mathscr{G}$ be a pair of parallel morphisms in $\textbf{Sh}(X; \mathcal{C})$. Suppose at least one of the following conditions holds: Filtered colimits in $\mathcal{C}$ preserve equalisers. $\textbf{Sh}(X; \mathcal{C})$ has coequalisers. Then the following are equivalent: $\phi = \psi$. $\phi_x = \psi_x$ for all points $x$ in $X$. Proof. If $\phi = \psi$ then obviously $\phi_x = \psi_x$ for all $x$. Conversely, suppose $\phi_x = \psi_x$ for all $x$. There are two cases: Assume $\mathcal{C}$ has coequalisers. Let $\theta : \mathscr{G} \to \mathscr{H}$ be the coequaliser of $\phi$ and $\psi$; left adjoints always preserve coequalisers, so $\theta_x$ is the coequaliser of $\phi_x$ and $\psi_x$. Since $\phi_x = \psi_x$, $\theta_x$ is an isomorphism; but this is true for all $x$, so $\theta$ is also an isomorphism, by our assumption on $\textbf{Sh}(X; \mathcal{C})$. Thus $\phi = \psi$. Assume instead that filtered colimits in $\mathcal{C}$ preserve equalisers. Equalisers exist in $\textbf{Sh}(X; \mathcal{C})$ and are computed componentwise, so this means $x^*$ preserves them. The rest of the argument is essentially the same. 
◼ Of course, the real question is, when does $\textbf{Sh}(X; \mathcal{C})$ have enough points? This is actually fairly tricky and I don't see a good general argument. Here's one that works, but it is somewhat restrictive. Proposition. Let $\mathcal{C}$ be a locally finitely-presentable (l.f.p.) category. Then: Filtered colimits in $\mathcal{C}$ preserve finite limits. $\textbf{Sh}(X; \mathcal{C})$ has enough points. Proof. Both claims basically boil down to the representation theorem for l.f.p. categories: there exist a small category $\mathcal{A}$ and a fully faithful functor $N : \mathcal{C} \to [\mathcal{A}^\textrm{op}, \textbf{Set}]$ that preserves filtered colimits and has a left adjoint. There is then a fully faithful functor $\textbf{Sh}(X; \mathcal{C}) \to \textbf{Sh}(X; [\mathcal{A}^\textrm{op}, \textbf{Set}])$ obtained by applying $N$ componentwise, and it is not hard to check that $\textbf{Sh}(X; [\mathcal{A}^\textrm{op}, \textbf{Set}])$ and $[\mathcal{A}^\textrm{op}, \textbf{Sh}(X)]$ are equivalent as categories. Now, the fact that $N$ preserves filtered colimits means that the stalk functors fit into a commutative diagram $$\begin{array}{rcl} \textbf{Sh}(X; \mathcal{C}) & \rightarrow & \mathcal{C} \\ \downarrow & & \downarrow \\ [\mathcal{A}^\textrm{op}, \textbf{Sh}(X)] & \rightarrow & [\mathcal{A}^\textrm{op}, \textbf{Set}] \end{array}$$ and so ultimately it boils down to the fact that $\textbf{Sh}(X)$ has enough points.<|endoftext|> TITLE: Primes in Gaussian Integers QUESTION [7 upvotes]: Let $p$ be a rational prime. It is is well known that if $p\equiv 3\;\;mod\;4$, then $p$ is inert in the ring of gaussian integers $G$, that is, $p$ is a gaussian prime. If $p\equiv 1\;mod\;4$ then $p$ is decomposed in $G$, that is, $p=\pi_1\pi_2$ where $\pi_1$ and $pi_2$ are gaussian primes not associated. The rational prime $2$ ramifies in $G$, that is $2=u\pi^2$, where $u$ is a unit in $G$ and $\pi$ a prime in $G$. where can I find a proof of this fact? I want a direct proof, not a proof for the quadratic integers and then deduce this as a particular case. REPLY [7 votes]: There are several places where you can find a direct proof of this. For instance, you can find it in the first 4 pages of Jurgen Neukirch's Algebraic Number Theory about the Gaussian integers. Also, LeVeque's Elementary Theory of Numbers has a short chapter dedicated to the Gaussian integers, where he proves this fact (see section 6.5).<|endoftext|> TITLE: Triangle equality implies vector dependence. QUESTION [5 upvotes]: I am trying to prove this statement: Show that if $x$ and $y$ are two vectors in an inner product space such that $||x+y||=||x||+||y||$, then $x$ and $y$ are linearly dependent. Squaring the equality I get $$\langle x+y,x+y\rangle=\langle x,x\rangle +2||x||\cdot||y||+\langle y,y\rangle $$ then, using linearity of the inner product I get $$ \langle x,x\rangle +\langle y,y\rangle+\langle x,y\rangle+\langle y,x\rangle=\langle x,x\rangle +2||x||\cdot||y||+\langle y,y\rangle $$ After all the cancellation I finally arrive at $$ \mathrm{Re}\langle x,y\rangle=||x||\cdot||y|| $$ This looks like Cauchy-Schwarz inequality, so the only thing left to show is that $\mathrm{Re}\langle x,y\rangle=|\langle x,y\rangle|$, how can I do that? 
REPLY [3 votes]: Note that, combining your identity with an application of Cauchy-Schwarz, we get $$\operatorname{Re}\langle \mathbf{x},\ \mathbf{y}\rangle = \|\mathbf{x}\|\|\mathbf{y}\|\ge|\langle \mathbf{x},\ \mathbf{y}\rangle|$$ This is only possible if there is equality, since we naturally have $$\operatorname{Re}\langle \mathbf{x},\ \mathbf{y}\rangle \le|\langle \mathbf{x},\ \mathbf{y}\rangle|$$<|endoftext|> TITLE: How is Cantor's diagonal argument related to Russell's paradox in naive set theory? QUESTION [9 upvotes]: I was wondering whether anyone can shed proper light on this issue. I read both and it seems like they are somewhat similar, yet I can't quite see it. REPLY [7 votes]: It is perhaps worth recalling that in his Introduction to Mathematical Philosophy, Russell writes When I first came upon this contradiction [in the idea that there is a greatest cardinal], in the year 1901, I attempted to discover some flaw in Cantor's proof that there is no greatest cardinal ... Applying this proof to the supposed class of all imaginable objects, I was led to a new and simpler contradiction, namely, the following :- The comprehensive class we are considering, which is to embrace everything, must embrace itself as one of its members. In other words, if there is such a thing as "everything" then "everything" is something, and is a member of the class "everything." But normally a class is not a member of itself. Mankind, for example, is not a man. Form now the assemblage of all classes which are not members of themselves. This is a class: is it a member of itself or not? If it is, it is one of those classes that are not members of themselves, i.e. it is not a member of itself. If it is not, it is not one of those classes that are not members of themselves, i.e. it is a member of itself. Thus of the two hypotheses - that it is, and that it is not, a member of itself - each implies its contradictory. This is a contradiction. So yes, Russell came upon his paradox in analysing what is happening in the Cantor proof applied to the limiting case of a (supposed) universal set. There is indeed that close relationship between the arguments.<|endoftext|> TITLE: What is the probability on rolling $2n$ dice that the sum of the first $n$ equals the sum of the last $n$? QUESTION [11 upvotes]: The Question What is the probability, rolling $n$ six-sided dice twice, that their sum each time totals to the same amount? For example, if $n = 4$, and we roll $1,3,4,6$ and $2,2,5,5$, adding them gives $$ 1+3+4+6 = 14 = 2+2+5+5 $$ What is the probability this happens as a function of $n$? Early Investigation This problem is not too hard for $n = 1$ or $n = 2$ via brute force... For $n = 2$: Tie at a total of $2$: $$ \frac{1}{36} \cdot \frac{1}{36} = \frac{1}{1296} $$ Tie at a total of $3$: $$ \frac{2}{36} \cdot \frac{2}{36} = \frac{4}{1296} $$ etc. so the answer is $$ \frac{1^2 + 2^2 + 3^2 + ... + 6^2 + 5^2 + ... + 1^2}{1296} = \frac{\frac{(6)(7)(13)}{6} + \frac{(5)(6)(11)}{6}}{1296} = \frac{146}{1296} $$ Note that I use the formula: $\sum_{k=1}^{n}k^2=\frac{(n)(n+1)(2n+1)}{6}$. Is there a way to do this in general for $n$ dice? Or at least a process for coming up with a reasonably fast brute force formula? The Difficulty The problem arises that the sum of squares is not so simple when we get to three dice. Using a spreadsheet, I figured out we need to sum these squares for 3 dice: $$ 1, 3, 6, 10, 15, 21, 25, 27, 27, 25, 21, 15, 10, 6, 3, 1 $$ For a brute force answer of $\frac{4332}{46656}$.
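As a quick machine check of those numbers (an illustrative Python sketch; it enumerates all $6^n$ rolls, so it is fine for small $n$ but hopeless for $n = 200$):

```python
from itertools import product
from collections import Counter
from fractions import Fraction

def match_probability(n):
    """P(two independent rolls of n dice give equal sums), by exhaustive enumeration."""
    counts = Counter(sum(roll) for roll in product(range(1, 7), repeat=n))
    return Fraction(sum(c * c for c in counts.values()), 6 ** (2 * n))

print(match_probability(2))  # 146/1296 in lowest terms, i.e. 73/648
print(match_probability(3))  # 4332/46656 in lowest terms, i.e. 361/3888
```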
Note how we can no longer use the sum of squares formula, as the numbers we need to square no longer form a simple linear sequence. Some Thoughts I am no closer to figuring out an answer for $n$ dice, and obviously the question becomes increasingly difficult for more dice. One thing I noticed: I see a resemblance to Pascal's Triangle here, except we start with the first row being six $1$s, not one $1$. So we have: Row 1: 1 1 1 1 1 1; Row 2: 1 2 3 4 5 6 5 4 3 2 1; Row 3: 1 3 6 10 15 21 25 27 27 25 21 15 10 6 3 1; Row 4: 1 4 9 16 25 36 46 52 54 52 46 36 25 16 9 4 1; ... but that's still a process, not a formula. And still not practical for $n = 200$. I know how to prove the formula for any cell in Pascal's Triangle to be $C(n,r) = \frac{n!}{r!(n-r)!}$... using induction; that doesn't really give me any hints to deterministically figuring out a similar formula for my modified triangle. Also there is no immediately obvious sum for a row of this triangle like there is (powers of 2) in Pascal's Triangle. Any insight would be appreciated. Thanks in advance! REPLY [2 votes]: It is possible to calculate this exactly if you are willing to use arbitrary precision integer arithmetic. You can use the recursion $$f(n,k)=\sum_{j=1}^6 f(n-1,k-j)$$ starting at $f(0,0)=1$ and $f(0,k)=0$ when $k\not =0$ to find the number of ways of scoring $k$ from $n$ dice. Your result is then $$\sum_{i=n}^{6n} f(n,i)^2 / 6^{2n}$$ which is the division of two very large integers: for $n=200$ the numerator will be about $2.1\times 10^{309}$ and the denominator will be $6^{400}\approx 1.8\times 10^{311}$. More practically, using a spreadsheet and only looking for several decimal places, you can use $$g(n,k)=\sum_{j=1}^6 g(n-1,k-j) / 6$$ starting at $g(0,0)=1$ and $g(0,k)=0$ when $k\not =0$ to find the probability of scoring $k$ from $n$ dice. Your result is then $$\sum_{i=n}^{6n} g(n,i)^2.$$ With $n=200$ this latter method will need just over 200 columns and 1200 rows of the spreadsheet, plus an extra column for the squares of the final column, so it is not difficult. In practice it gives a value of about $0.0116752$ for the probability of matched sums rolling 200 dice twice. This compares with about $0.0116798$ from joriki's approximation, a relative difference of around 0.04%.<|endoftext|> TITLE: Non-trivial homomorphism between multiplicative group of rationals and integers QUESTION [7 upvotes]: Let $\mathbb{Q}^{\times}$ be the multiplicative group of non-zero rationals. Is there a non-trivial homomorphism $\mathbb{Q}^{\times} \to \mathbb{Z}$? In the same spirit, is there a homomorphism $\mathbb{Z} \to \mathbb{Q}^{\times}$? REPLY [14 votes]: For every prime $p$, select an integer $n_p$. Each rational $q$ can be uniquely represented in the form $$q=\pm\prod_{p \text{ prime}} p^{e_p}$$ for some integers $e_p$, almost all of which are $0$. For example, $${20\over 363} = {2^2\cdot 5\over 3\cdot 11^2} = 2^2\cdot 3^{-1}\cdot5^1\cdot7^0\cdot11^{-2}\cdot13^0\cdot17^0\cdots.$$ Here $e_2 = 2$, $e_3 = -1 $, and so forth. Then map $g\colon\mathbb Q^\times\to\mathbb Z$ by letting $g(\pm \prod p^{e_p})= \sum e_p n_p$. This is a homomorphism (and one can show that all homomorphisms $\Bbb Q^\times \to\Bbb Z$ can be obtained this way; that is, given a homomorphism $f$, the homomorphism $g$ obtained by picking $n_p:=f(p)$ equals $f$). A homomorphism $f\colon \mathbb Z\to\mathbb Q^\times$ is determined by selecting $f(1)\in\Bbb Q^\times$ arbitrarily (and letting $f(n)=f(1)^n$).<|endoftext|> TITLE: What are some good intuitions for understanding Souslin's operation $\mathcal{A}$?
QUESTION [25 upvotes]: What are some good intuitions for understanding Souslin's operation $\mathcal{A}$? Recall the definition: Let $S = \mathbb{N^{ TITLE: If $X$ is infinite dimensional, all open sets in the $\sigma(X,X^{\ast})$ topology are unbounded. QUESTION [8 upvotes]: As in the title, if $X$ is infinite dimensional, all open sets in the $\sigma(X,X^{\ast})$ topology are unbounded. The $\sigma(X,X^{\ast})$ topology is the weakest topology on $X$ that makes all the linear functionals in $X^\ast$ continuous. How does one show this? How does having an infinite basis relate to open sets being unbounded? I can't see this, please help and thanks in advance! REPLY [13 votes]: It's enough to show it for basic non-empty open sets which contain $0$ (for the others, do a translation). These are of the form $$V_{N,\delta,f_1,\dots,f_N}=\bigcap_{j=1}^N\{x\in X, |f_j(x)|<\delta\},$$ where $N$ is an integer, $f_j\in X^*$ and $\delta>0$, $1\leq j\leq N$. Then $$\bigcap_{j=1}^N\ker f_j\subset V_{N,\delta,f_1,\dots,f_N}.$$ As $X$ is infinite dimensional, $\bigcap_{j=1}^N\ker f_j$ is not reduced to $0$ (otherwise the map $x\in X\mapsto (f_1(x),\dots,f_N(x))\in\Bbb R^N$ would be injective). So it contains a non-zero vector $x_0$, and $\lambda x_0$ for every scalar $\lambda$, proving that $V_{N,\delta,f_1,\dots,f_N}$ is not bounded.<|endoftext|> TITLE: What is the difference between eigenfunctions and eigenvectors of an operator? QUESTION [10 upvotes]: What is the difference between the eigenfunctions and eigenvectors of an operator, for example the Laplace-Beltrami operator? REPLY [7 votes]: An eigenfunction is an eigenvector that is also a function. Thus, an eigenfunction is an eigenvector but an eigenvector is not necessarily an eigenfunction. For example, the eigenvectors of differential operators are eigenfunctions but the eigenvectors of finite-dimensional linear operators are not.<|endoftext|> TITLE: Please explain the intuition behind the dual problem in optimization. QUESTION [272 upvotes]: I've studied convex optimization pretty carefully, but don't feel that I have yet "grokked" the dual problem. Here are some questions I would like to understand more deeply/clearly/simply: How would somebody think of the dual problem? What thought process would lead someone to consider the dual problem and to recognize that it's valuable/interesting? In the case of a convex optimization problem, is there any obvious reason to expect that strong duality should (usually) hold? It often happens that the dual of the dual problem is the primal problem. However, this seems like a complete surprise to me. Is there any intuitive reason to expect that this should happen? Does the use of the word "dual" or "duality" in optimization have anything to do with the dual space in linear algebra? Or are they just different concepts that go by the same name? What about the use of the word "dual" in projective geometry — is there a connection there? You can define the dual problem and prove theorems about strong duality without ever mentioning the Fenchel conjugate. For example, Boyd and Vandenberghe prove a strong duality theorem without mentioning the Fenchel conjugate in their proof. And yet, people often talk as if the Fenchel conjugate is somehow the "essence" of duality, and make it sound as if the whole theory of duality is based on the Fenchel conjugate. Why is the Fenchel conjugate considered to have such fundamental importance? Note: I will now describe my current level of understanding of the intuition behind the dual problem.
Please tell me if you think I might be missing any basic insights. I have read the excellent notes about convex optimization by Guilherme Freitas, and in particular the part about "penalty intuition". When we are trying to solve \begin{align*} \text{minimize} &\quad f(x) \\ \text{such that} & \quad h(x) \leq 0 \end{align*} one might try to eliminate the constraints by introducing a penalty when constraints are violated. This gives us the new unconstrained problem \begin{equation} \text{minimize} \quad f(x) + \langle \lambda ,h(x) \rangle \end{equation} where $\lambda \geq 0$. It's not hard to see that for a given $\lambda \geq 0$, the optimal value of this unconstrained problem is less than or equal to the optimal value for the constrained problem. This gives us a new problem — find $\lambda$ so that the optimal value for the unconstrained problem is as large as possible. That is one way to imagine how somebody might have thought of the dual problem. Is this the best intuition for where the dual problem comes from? Another viewpoint: the KKT conditions can be derived using what Freitas calls the "geometric intuition". Then, if we knew the value of the multipliers $\lambda$, it would be (often) much easier to find $x$. So, a new problem is to find $\lambda$. And if we can somehow recognize that $\lambda$ is a maximizer for the dual problem, then this suggests that we might try solving the dual problem. Please explain or give references to any intuition that you think I might find interesting, even if it's not directly related to what I asked. REPLY [4 votes]: Here are some counter-examples to help you understand KKT conditions and strong duality. The answer is from my other post: https://math.stackexchange.com/a/4154563/273731 ${\bf counter-example 1}$ If one drops the convexity condition on objective function, then strong duality could fails even with relative interior condition. The counter-example is the same as the following one. ${\bf counter-example 2}$ For non-convex problem where strong duality does not hold, primal-dual optimal pairs may not satisfy KKT condition. Consider the optimization problem \begin{align} \operatorname{minimize} & \quad e^{-x_1x_2} \\ \text{subject to} & \quad x_1\le 0. \end{align} The domain for the problem is $D = \{ (x_1,x_2) \ge 0 \}$. The problem is not convex by calculating the Hessian matrix. Clearly, any $x_1 = 0, x_2 \in\mathbb R_+$ is a primal optimal solution with primal optimal value $1$ . The Lagrangian is $$ L(x_1,x_2,\lambda) = e^{-x_1x_2} + \lambda x_1. $$ The dual function is \begin{align} G(\lambda) &= \inf L(x_1,x_2,\lambda) = \begin{cases} 0& \lambda\ge 0\\ -\infty& \lambda < 0 \end{cases} \end{align} Thus, $\lambda \geq 0$ is dual optimal solution with dual optimal value $0$, so dual gap is $1$, strong duality fails. As for the KKT conditions, remember the domain is $D = \{ (x_1,x_2) \ge 0 \}$ \begin{align*} &\lambda-x_2e^{-x_1x_2}=0\\ &x_1\le 0\\ &\lambda\ge 0\\ &\lambda x_1=0\\ \end{align*} Pick any primal-dual pair satisfying $x_1 = 0, x_2\ge 0, \lambda\ge0, \lambda\ne x_2$, the KKT conditions fail. ${\bf counter-example 3}$ For a non-convex problem, even strong duality holds, solutions for KKT conditions may not be primal-dual optimal solution. Consider the optimization problem on $\mathbb R$ \begin{align} \operatorname{minimize} & \quad x^3 \\ \text{subject to} & -x^3-1\le 0. \end{align} The objective function is not convex by calculating the Hessian matrix. 
Clearly, $x=-1$ is the unique primal optimal solution with primal optimal value $-1$. The Lagrangian is $$ L(x,\lambda) = x^3 - \lambda (x^3+1). $$ The dual function is \begin{align} G(\lambda) &= \inf L(x,\lambda) = \begin{cases} -1& \lambda=1\\ -\infty& otherwise \end{cases} \end{align} Thus, $\lambda = 1 $ is dual optimal solution with dual optimal value $-1$, so dual gap is $0$, strong duality holds. While the KKT conditions are \begin{align*} &3x^2(1-\lambda)=0\\ &-x^3-1\le 0\\ &\lambda\ge 0\\ &\lambda (-x^3-1)=0\\ \end{align*} Solutions for KKT conditions are $x=-1, \lambda=1$ or $x=0,\lambda=0$. Notice that $x=0,\lambda=0$ satisfies KKT conditions but has nothing to do with primal-dual optimal solutions. The discussion indicates for non-convex problem, KKT conditions may be neither necessary nor sufficient conditions for primal-dual optimal solutions. ${\bf counter-example4}$ For a convex problem, even strong duality holds, there could be no solution for the KKT condition, thus no solution for Lagrangian multipliers. Consider the optimization problem on domain $\mathbb R$ \begin{align} \operatorname{minimize} & \quad x \\ \text{subject to} & \quad x^2\le 0. \end{align} Obviously, the problem is convex with unique primal optimal solution $x=0$ and optimal value $0$; Feasible set is $\{0\}$, therefore Slater's condition fails. The Lagrangian is $$ L(x,\lambda) = x + \lambda x^2. $$ The dual function is \begin{align} G(\lambda) &= \inf L(x,\lambda) = \begin{cases} -\infty& \lambda\le 0\\ -\frac{1}{4\lambda} &\lambda >0 \end{cases} \end{align} Thus, dual optimal value is $0$, so dual gap is $0$, strong duality holds. However, there are no solution for dual optimal solution because the optimal value is attained as $\lambda\rightarrow \infty$. As for the KKT conditions \begin{align*} &1+2\lambda x=0\\ &x^2\le 0\\ &\lambda\ge 0\\ &\lambda x^2=0\\ \end{align*} No solution for KKT conditions. This is the convex problem where the dual problem has no feasible solution and KKT conditions have no solution but the primal problem is simple to solve. ${\bf counter-example 5}$ For a differentiable convex problem, there could be no solution for the KKT conditions, even if the primal-dual pair exists. In that case, the strong duality fails. Consider the optimization problem on domain $D:=\{(x,y):y>0\}$ \begin{align} \operatorname{minimize} & \quad e^{-x} \\ \text{subject to} & \quad \frac{x^2}{y}\le 0. \end{align} The problem can be proved to be convex with primal optimal solution $x=0, y>0$ and optimal value $1$; Feasible set is $\{(0,y): y > 0\}$, therefore Slater's condition fails. The Lagrangian is $$ L(x,y,\lambda) = e^{-x} + \lambda \frac{x^2}{y}. $$ After some careful calculation, the dual function is \begin{align} G(\lambda) &= \inf L(x,y,\lambda) = \begin{cases} 0& \lambda\ge 0\\ -\infty &\lambda <0 \end{cases} \end{align} Thus, dual optimal value is $0$, so dual gap is $1$, strong duality fails. We can pick primal-dual pair to be $x=0, y=1, \lambda=2$. As for the KKT conditions \begin{align*} &-e^{-x}+\frac{2\lambda x}{y}=0\\ &\frac{x^2}{y}\le0\\ &\lambda\ge 0\\ &\lambda \frac{x^2}{y}=0\\ \end{align*} with $y>0$, thus no solution for KKT conditions. This counter-example warns us that we have to be careful about the strong duality condition even for differentiable convex problems.<|endoftext|> TITLE: What is the average of rolling two dice and only taking the value of the higher dice roll? 
QUESTION [18 upvotes]: What is the average result of rolling two dice, and only taking the value of the higher dice roll? To make sure the situation I am asking about is clear, here is an example: I roll two dice and one comes up as a four and the other a six, the result would just be six. Would the average dice roll be the same or higher than just rolling one dice? REPLY [13 votes]: I'll have a go and answer this the maths-lite way (though there are a number of answers with more mathematic rigor and .. dare I say it vigor posted here already). Note that there is: 1 result with a face value 1 3 results with a face value 2, 5 results with a face value 3, 7 results with a face value 4, 9 results with a face value 5, and 11 results with a face value 6 The Average is defined to be: $$\text{Average} = \frac{\text{Sum of the Results}}{\text{Total number of Results}}$$ The Sum of the Results is: $$\begin{eqnarray} \text{Sum} &=& (1 \times 1) + (3 \times 2) + (5 \times 3) + (7 \times 4) + (9 \times 5) + (11 \times 6) \nonumber \\ &=& 1 + 6 + 15 + 28 + 45 + 66 \nonumber \\ &=& 161 \nonumber \end{eqnarray}$$ The Total number of Results is: $ 6 \times 6 = 36$ So the Average is: $$\text{Average} = \frac{161}{36} \approx 4.472$$<|endoftext|> TITLE: Singularity at infinity of a function entire QUESTION [14 upvotes]: How to prove that every non-constant entire function $\,\,f:\mathbb{C}\rightarrow\mathbb{C}\,\,$ has a singularity at infinity? What type of singularity must this be? REPLY [5 votes]: Just to give an example. Clearly $f$ is unbounded near $\infty$ so the singularity cannot be removable. If $f(z)=az+b$ where $a\neq 0$ , then $f$ has a pole at $\infty$. If $f(z)=e^z$, then $f$ has an essential singularity at $\infty$.<|endoftext|> TITLE: Induced Exact Sequence of Dual Spaces QUESTION [8 upvotes]: So given a short exact sequence of vector spaces $$0\longrightarrow U\longrightarrow V \longrightarrow W\longrightarrow 0$$ With linear transformations $S$ and $T$ from left to right in the non-trivial places. I want to show that the corresponding sequence of duals is also exact, namely that $$0\longleftarrow U^*\longleftarrow V^* \longleftarrow W^*\longleftarrow 0$$ with functions $\circ S$ and $\circ T$ again from left to right in the non-trivial spots. So I'm a bit lost here. Namely, I'm not chasing with particular effectiveness. Certainly this "circle" notation is pretty suggestive, and I suspect that this is a generalization of the ordinary transpose, but I'm not entirely sure there either. Any hints and tips are much appreciated. REPLY [5 votes]: You can actually prove a more general result: that if you have a sequence $U \xrightarrow{S} V \xrightarrow{T} W$ with the property that $\mathrm{Ker}(T) = \mathrm{Im}(S)$, i.e. the sequence is exact at $V$, then the dual sequence $W^* \xrightarrow{T^*} V^* \xrightarrow{S^*} U^*$ is also exact at $V^*$; that is, $\mathrm{Ker}(S^*) = \mathrm{Im}(T^*)$. Indeed, if $f \in W^*$ and $u \in U$, then $$(S^* \circ T^*)(f)(u) = T^*(f)(S(u)) = f((T \circ S)(u)) = 0$$ because $T \circ S = 0$ by exactness of $V$. So $\mathrm{Im}(T^*) \subseteq \mathrm{Ker}(S^*)$. On the other hand, suppose $f \in \mathrm{Ker}(S^*)$. This means for every $u \in U$, $S^*f(u) = f(S(u)) = 0$. We want a $g \in W^*$ so that $T^* g = f$. So define $g \in \mathrm{Im}(T)^*$ by $g(T(v)) = f(v)$. 
This is well-defined on $\mathrm{Im}(T) \subseteq W$, because if $T(v) = T(v')$ for $v, v' \in V$, then $T(v-v') = 0$, so $v-v' \in \mathrm{Ker}(T) = \mathrm{Im}(S)$, so there is a $u \in U$ so that $S(u) = v - v'$; but since $f \in \mathrm{Ker}(S^*)$, this means $g(T(v-v')) = f(v-v') = f(S(u)) = 0$, so $g(T(v)) = g(T(v'))$. Let $\tilde W$ be any subspace of $W$ so that $W = \mathrm{Im}(T) \oplus \tilde W$, and declare $g|_{\tilde W} = 0$. Then $g \in W^*$ satisfies $T^*g(v) = g(T(v)) = f(v)$ for all $v \in V$, so $T^* g = f$, so $\mathrm{Ker}(S^*) \subseteq \mathrm{Im}(T^*)$.<|endoftext|> TITLE: integral of a measure zero set QUESTION [5 upvotes]: let $X$ be a finite measure space and $\{f_n\}$ be a sequence of nonnegative integrable functions, $f_n \rightarrow f\ a.e.$ on $ X$. We know that $\lim_{n \rightarrow \infty}\int_X f_n d\mu=\int_X fd\mu$ and on any measurable $E_i \subset X$ I should apply Egoroff's theorem to conclude that $\lim_{n \rightarrow \infty}\int_X |f_n-f|d\mu=0$. My attempt: I broke the set $X$ to two sets: $F_\sigma$ on which $f_n \rightarrow f$ uniformly based on Egoroff's theorem and $X\backslash F_\sigma$ which is a very small set, i.e. $\mu\{X\backslash F_\sigma\}=\sigma$ and $f_n \nrightarrow f$ I want to show that on each of these sets, the integral is less than $\frac{\epsilon}{2}\ \forall \epsilon$ to finish. How can I show this for the set $X \backslash F_\sigma$? REPLY [2 votes]: I do not think this is true unless you assume $f$ is integrable. Take $X=[0,1]$, Lebesgue measure, $f(x) = \frac{1}{x}$, $f_n = f \cdot 1 _{[\frac{1}{n},1]}$. Then $f_n(x) \to f(x)$ on $(0,1]$, $\int f_n = \log n$, hence $f_n$ are integrable, and $\int_E f_n \to \int_E f$ for all $E$ measurable. However, $\int |f_n-f| = \infty$ for all $n$.<|endoftext|> TITLE: Krull dimension and transcendence degree QUESTION [5 upvotes]: What is the simplest proof of the fact that an integral algebra $R$ over a field $k$ has the same Krull dimension as transcendence degree $\operatorname{trdeg}_k R$? Is it possible to use only Noether normalization theorem? REPLY [6 votes]: R. Ash, A Course in Commutative Algebra, proof of Theorem 5.6.7 uses Noether normalization and few obvious remarks on integral extensions. (However, see QiL's comment.)<|endoftext|> TITLE: Can elements in a set be duplicated? QUESTION [15 upvotes]: If $A = \{x \mid x \text{ is a letter of the word 'contrast'}\}$ Represent it in a Venn Diagram, and then find the $n(A)$. Do I need to write the letter 't' twice inside the venn diagram? What should be the answer for $n(A)$? REPLY [18 votes]: From wikipedia: Every element of a set must be unique; no two members may be identical. A multiset is a generalized concept of a set that relaxes this criterion.<|endoftext|> TITLE: topology - Quotient topology QUESTION [6 upvotes]: Let $k:X \to Y$ be an onto map.How to prove that the quotient topology on $Y$ induced by $k$ is the largest topology relative to which $k$ is continuous. REPLY [10 votes]: I’m going to assume that you’ve defined the quotient topology $\tau$ on $Y$ as follows: a set $U\subseteq Y$ is open iff $k^{-1}[U]$ is open in $X$. It’s an immediate consequence of this definition that the topology $\tau$ on $Y$ makes $k$ continuous, so what you have to show is that if $\tau'$ is another topology on $Y$ making $k$ continuous, and $\tau\subseteq\tau'$, then $\tau=\tau'$. 
In other words, you must show that it’s impossible to have $\tau'\supsetneqq\tau$: there is no topology on $Y$ that is strictly stronger than $\tau$ and makes $k$ continuous. Here’s a large hint to get you started. Suppose that $\tau\subseteq\tau'$, where $\tau'$ is a topology on $Y$ making $k$ continuous. If $\tau'\ne\tau$, there is a set $U\in\tau'\setminus\tau$. By hypothesis $k$ is continuous as a map from $X$ to $\langle Y,\tau'\rangle$, so $k^{-1}[U]$ is open in $X$. Now apply the definition of the quotient topology $\tau$ to get a contradiction.<|endoftext|> TITLE: Eigenvalues and determinant of conjugate, transpose and hermitian of a complex matrix. QUESTION [10 upvotes]: For a strictly complex matrix $A$, 1) Can we comment on determinant of $A^{*}$ (conjugate of entries of $A$) , $A^{T}$ (transpose of A) and $A^{H}$ (hermitian of $A$). I know that for real matrices, $\det(A)=\det(A^{T})$. Does it carry over to complex matrices, i.e. does $\det(A)=\det(A^{T})$ in general? I understand $\det(A)=\det(A^{H})$ (from Schur triangularization). 2) The same question as first, now about eigenvalues of $A$. I would like to know about special cases, for instance what if $A$ is hermitian or positive definite and so on. REPLY [12 votes]: Since complex conjugation satisfies $\overline{xy} = \overline{x} \cdot \overline{y}$ and $\overline{x+y} = \overline{x} + \overline{y}$, you can see with the Leibniz formula quickly that $\det[A^*] = \overline{\det[A]}$. For complex matrices $\det[A] = \det[A^T]$ still holds and doesn't require any changes to the proof for real matrices. Together this means that $\det[A] = \overline{\det[A^H]}$. This applies to the eigenvalues as well: the characteristic polynomial of $A^*$ is given by $\det[tI - A^*] = \det[(\overline{t}I - A)^*] = \overline{\det[\overline{t}I - A]}$ and the eigenvalues of $A^*$ are exactly the complex conjugates of those of $A$. In particular if $A$ is hermitian, $A = A^*$ and so all eigenvalues are equal to their complex conjugates - in other words, they're real.<|endoftext|> TITLE: $a^2-b^2 = x$ where $a,b,x$ are natural numbers QUESTION [8 upvotes]: Suppose that $a^2-b^2 =x$ where $a,b,x$ are natural numbers. Suppose $x$ is fixed. If there is one $(a,b)$ found, can there be another $(a,b)$? Also, would there be a way to know how many such $(a,b)$ exists? REPLY [2 votes]: You need to exploit the fact that the right hand side of your equation can be factored. For example for part (1) of the exercise, if $x$ is odd, say $x = 2n+1$ for some integer $n$, then $$ x = 2n + 1 = y^2 - z^2 = (y - z)(y + z) $$ Now try to consider a trivial factorization of $2n+1$ like $2n+1 = 1 \cdot (2n+1)$ and compare the two factorizations to get a system of equations $$ \begin{align} y - z &= 1\\ y + z &= 2n + 1 \end{align} $$ I think you can take it from here, but feel free to ask if you get stuck.<|endoftext|> TITLE: Continuous Mapping Theorem (CMT) for a sequence of random vectors QUESTION [5 upvotes]: I need help proving the Continuous Mapping Theorem (CMT) for random vectors. I'm currently reading Econometric Analysis for Cross Section and Panel Data by Jeffrey M. Wooldridge (Chapter 3, pp. 40 - 41, 2nd edition). Unfortunately, he leaves it to the reader to prove most asymptotic results. Additionally, almost every other econometrics textbook I read simply states the result. 
Definition 1: A sequence of random variables $x_n$ converges in distribution to a continuous random variable $x$ if and only if $\forall s \in \mathbb{R} \ \forall \epsilon >0 \ \exists N \ s.t. \ \forall n>N \; |Prob(x_n \leq s) - Prob(x \leq s)|<\epsilon$. We write $x_n \to^d x.$ [Note: A continuous random variable is one for which the cumulative distribution function is continuous.] Definition 2: A sequence of K $\times$ 1 random vectors $\mathbf{x}_n$ converges in distribution to the continuous random $K \times 1$ vector $\mathbf{x}$ if and only if $\forall \mathbf{c} \in \mathbb{R}^{K}$ such that $\mathbf{c}^T\mathbf{c} = 1$, $\mathbf{c}^T\mathbf{x}_n \to^d \mathbf{c}^T\mathbf{x}$, and we write $\mathbf{x}_n \to^d \mathbf{x}.$ Theorem 1: Let $\mathbf{x}_n$ be a sequence of $K \times 1$ random vectors such that $\mathbf{x}_n \to^d \mathbf{x}$. If $\mathbf{g}:\mathbb{R}^k\to\mathbb{R}^{\ell}$ is a continuous function, then $\mathbf{g}(\mathbf{x}_n)$ $\to^d$ $\mathbf{g}(\mathbf{x}).$ Definition 3: A sequence of random variables $x_n$ is bounded in probability if and only if $\forall \epsilon>0 \ \exists b_{\epsilon}>0 \ \exists N \ s.t. \ \forall n>N \ Prob(|x_n|>b_{\epsilon})<\epsilon$. A vector $\mathbf{x}_n$ is bounded in probability if and only if the random variables which constitute the vector of random variables are themselves bounded in probability. Theorem 2: If $\mathbf{x}_n \to^d \mathbf{x}$, where $\mathbf{x}$ is a $K \times 1$ vector, then $\mathbf{x}_n = O_p(1)$. I need rigorous proofs for Theorems 1 and 2. This problem has been frustrating me for a couple of days now, so any help would go a long way. Thanks. CS REPLY [3 votes]: For Theorem 1: Let $x_n$ be defined on the probability space $(\Omega, \mu)$. Your Definition 2 for convergence in distribution of a sequence of random vectors says that for any half space $H$ of $\mathbb{R}^k$, i.e. $H = \phi^{-1}\big((-\infty, r]\big)$ for some linear functional $\phi: \mathbb{R}^k \rightarrow \mathbb{R}$ and $r \in \mathbb{R}$, $\mu(x_n^{-1}(H)) \rightarrow \mu(x^{-1}(H))$. ($\phi$ is inner product with $c$ in your definition.) Now if $g: \mathbb{R}^k \rightarrow \mathbb{R}^l$ is linear, then Theorem 1 is immediate: For any half space $H \subset \mathbb{R}^l$, $g^{-1}(H)$ is again a half space of $\mathbb{R}^k$. So $\mu(x_n^{-1}(g^{-1}(H))) \rightarrow \mu(x^{-1}(g^{-1}(H)))$. The case where $g$ is just measurable takes a little doing. Definition 2 implies the following: for any closed convex $C \subset \mathbb{R}^k$, $\mu(x_n^{-1}(C)) \rightarrow \mu(x^{-1}(C))$. This can be shown by writing $C$ as a countable intersection of polyhedra and using continuity-from-above of the pushforward measures. Now take any half space $H \subset \mathbb{R}^l$. Consider the measurable set $g^{-1}(H)$. The pushforward measure $\mu$ induced by $x$ is regular. So it can be approximated from below by some compact $K \subset g^{-1}(H)$. In turn, $K$ can be covered by finitely many rectangles $C_1,\cdots, C_m$. Since $\mu(x_n^{-1}(C_i)) \rightarrow \mu(x^{-1}(C_i))$ for $i = 1,\cdots,m$, Theorem 1 holds. For Theorem 2: Let $B_b$ denote the closed cube centered at the origin of radius $b$ in $\mathbb{R}^k$. Your Definition 3 says that, for all $\epsilon > 0$, there exists $b> 0$ and $N \in \mathbb{N}$ such that $\mu(x_n^{-1}(B_b)) > 1 - \epsilon$ for all $n \geq N$. $B_b$ is convex and closed. So $\mu(x_n^{-1}(B_b)) \rightarrow \mu(x^{-1}(B_b))$ for any such cube. By the regularity of the pushforward measure again, $\mu(x^{-1}(B_b)) \rightarrow 1$ as $b \rightarrow \infty$. So Theorem 2 holds.
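If it helps to see Theorem 1 at work before going through the argument, here is a quick Monte Carlo sanity check. This is purely illustrative and not part of the proof; it assumes Python with numpy is available, the sequence $x_n = x + 1/n$ is just a toy example of a sequence converging in distribution, and the helper name cdf_gap is mine.

```python
import numpy as np

rng = np.random.default_rng(0)

def cdf_gap(a, b, grid):
    """Largest gap between the empirical CDFs of samples a and b over the grid."""
    Fa = np.array([np.mean(a <= t) for t in grid])
    Fb = np.array([np.mean(b <= t) for t in grid])
    return np.max(np.abs(Fa - Fb))

m = 200_000                           # number of Monte Carlo draws
x = rng.standard_normal((m, 2))       # draws from the limit vector x ~ N(0, I_2)
g = lambda v: np.sum(v ** 2, axis=1)  # a continuous map g: R^2 -> R

grid = np.linspace(0.0, 12.0, 200)
for n in (1, 10, 100, 1000):
    x_n = x + 1.0 / n                 # toy sequence with x_n -> x
    print(n, cdf_gap(g(x_n), g(x), grid))
```

The printed CDF gap between $g(x_n)$ and $g(x)$ shrinks as $n$ grows, which is exactly what Theorem 1 predicts for the continuous map $g(v)=\|v\|^2$.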
I am an econ grad student myself. Hope this helps.<|endoftext|> TITLE: Eigenspace of the companion matrix of a monic polynomial QUESTION [9 upvotes]: How do I prove that the eigenspace of an $n\times n$ companion matrix $$ C_p=\begin{bmatrix} 0 & 1 & 0 &\cdots & 0\\ 0 & 0 & 1 &\cdots & 0 \\ \vdots&\vdots &\vdots&\ddots&\vdots\\ 0 & 0 & 0 &\cdots &1 \\ -\alpha_0 &-\alpha_1 &-\alpha_2 &\cdots&-\alpha_{n-1} \end{bmatrix} $$ equals $\operatorname{Span}\{v_{\lambda} \} $ where $v_{\lambda}$ is an eigenvector of the companion matrix w.r.t. the eigenvalue $\lambda$: $$ v_{\lambda} = \begin{bmatrix} 1 \\ \lambda\\ \lambda^{2} \\ \vdots\\ \lambda^{n-1} \end{bmatrix}. $$ REPLY [6 votes]: If you write $C_p{\bf x}=\lambda {\bf x}$, where ${\bf x}=(x_1,\dots,x_n)^T$, you get that $x_2=\lambda x_1$, $x_3=\lambda x_2$, $\dots$ , $x_n=\lambda x_{n-1}$ which means that $x_2=\lambda x_1$, $x_3=\lambda^2 x_1$, $\dots$ , $x_n=\lambda^{n-1}x_1$ and using your notation ${\bf x}=x_1v_{\lambda}$.<|endoftext|> TITLE: Proving limit with $\log(n!)$ QUESTION [12 upvotes]: I am trying to calculate the following limits, but I don't know how: $$\lim_{n\to\infty}\frac{3\cdot \sqrt{n}}{\log(n!)}$$ And the second one is $$\lim_{n\to\infty}\frac{\log(n!)}{\log(n)^{\log(n)}}$$ I don't need to show a formal proof, and any tool can be used. Thanks! REPLY [2 votes]: Stirling's approximation yields $$ \log(n!)=\left(n+\frac12\right)\left(\log(n)-1\vphantom{\frac12}\right)+\frac12\log(2\pi e)+O\left(\frac1n\right) $$ which implies $$ \lim_{n\to\infty}\frac{\log(n!)}{n\log(n)}=1 $$ Then for the first limit $$ \lim_{n\to\infty}\frac{3\sqrt n}{\log(n!)}=\lim_{n\to\infty}\frac3{\sqrt{n}\log(n)}=0 $$ For the second limit $$ \lim_{n\to\infty}\frac{\log(n!)}{\log(n)^{\log(n)}}=\lim_{n\to\infty}\frac{n\log(n)}{\log(n)^{\log(n)}}\stackrel{n\to e^n}{=}\lim_{n\to\infty}\frac{e^nn}{n^n}=\lim_{n\to\infty}\left(\frac{2e}{n}\right)^n\frac{n}{2^n}=0 $$<|endoftext|> TITLE: Pole set of rational function defined on a variety QUESTION [6 upvotes]: The problem: Let $V = V(y^2-x^2(x+1))$, and let $\overline{x}, \overline{y}$ denote the $I(V)$-residues of $x$ and $y$ in the coordinate ring $\Gamma(V)$. Set $z=\overline{y}/\overline{x}$. Find the pole sets for $z$ and $z^2$. My progress: Since $I(V) = (y^2-x^2(x+1))$ (the polynomial $y^2-x^2(x+1)$ is irreducible), we have $\overline{y}^2 = \overline{x}^2(\overline{x} + \overline{1})$ in $\Gamma(V)$, so $$z=\overline{y}/\overline{x} = (\overline{x}/\overline{y})(\overline{x}+\overline{1})$$ and $$z^2 = \overline{y}^2/\overline{x}^2 = \overline{x} + \overline{1}.$$ Hence $z^2$ has no poles since one representation is a polynomial, and $z$ has no poles at $(x,y)$ if either $x\neq 0$ or $y\neq 0$. So I have concluded that the only possible pole for $z$ is $(0,0)$. However, I am unsure of how to check this point. I would appreciate any hints. REPLY [7 votes]: I will write for simplicity $x, y$ instead of $\bar{x}, \bar{y}$. If $z$ is defined at $(0, 0)$, then there exist $b(x,y), a(x,y)\in \Gamma(V)$ such that $b(x,y)z=a(x,y)$ and $b(0,0)\ne 0$. Equivalently, $by=xa$. Because of the relation $y^2=x^2(x+1)$, we see that any element of $\Gamma(V)$ can be written in a unique way as $f(x)+g(x)y$. So $$(b_0(x)+b_1(x)y)y=x(a_0(x)+a_1(x)y), \quad b_0(0)\ne 0.$$ So $$ b_0(x)y+b_1(x)x^2(x+1)=xa_0(x)+xa_1(x)y.$$ By the uniqueness of the decomposition, $b_0(x)=xa_1(x)$, hence $b_0(0)=0$. Contradiction. 
So $z$ can't be defined at $(0,0)$.<|endoftext|> TITLE: Direct decomposition of vector space in image of map plus kernel of adjoint QUESTION [6 upvotes]: Let $A:V\to W$ be a linear map with $V,W$ finite dimensional Hilbert spaces. Is it always true that $$ \dim(\mathrm{Im}(A)) + \dim(\ker(A^*)) = \dim(W),$$ i.e. (since $\mathrm{Im}(A) \cap \ker(A^*) = 0$) $$W = \mathrm{Im}(A) \oplus \ker (A^*)?$$ Notation: $A^*$ is the adjoint of $A$, $\mathrm{Im}$ and $\ker$ stand for Image and Kernel. I have something like this in mind, but don't find it in my linear algebra notes. Thanks REPLY [3 votes]: Use the obvious fact that $\ker A^*=(\mathrm{Im}\ A)^{\perp}$. Now it remains to show that $W=\mathrm{Im}\ A\oplus(\mathrm{Im}\ A)^{\perp}$. But this follows from the definition of the orthogonal complement.<|endoftext|> TITLE: Possible mistake in Folland real analysis? QUESTION [5 upvotes]: I am working on some exercises in Folland's real analysis. In number 2.48, they ask you to prove the following: Let $X = Y = \mathbb N$, $M = N = P(\mathbb N)$ and $\mu = \nu$ be counting measure on $\mathbb N$. Define $f(m,n) = 1$ if $m=n$, $f(m,n) = -1$ if $m = n+1$, and $f(m,n) = 0$ otherwise. Then, $\iint f\ \mathsf d\mu \mathsf d\nu$ and $\iint f\ \mathsf d\nu\mathsf d\mu$ exist and are unequal. It seems to me that \begin{align}\iint f\ \mathsf d\mu\mathsf d\nu &= \sum_n\sum_m f(m,n)\\ &= \sum_n f(n,n) + f(n+1,n)\\ &= \sum_n 1-1\\ &= \sum_n 0 = 0\end{align} and \begin{align}\iint f\ \mathsf d\nu\mathsf d\mu &= \sum_m\sum_n f(m,n)\\ &= \sum_m f(m,m) + f(m,m-1)\\ &= \sum_m 1-1\\ &= \sum_m 0 =0.\end{align} So the two integrals are equal. What am I doing wrong? Thanks! REPLY [10 votes]: Represent the array as $$\begin{array}{r|rrrr} m\backslash n &1&2&3&\cdots\\\hline 1&1&0&0&\dots\\ 2&-1&1&0&\dots\\ 3&0&-1&1&\ddots\\ 4&0&0&-1&\ddots\\ \vdots&\vdots&\vdots&\ddots&\ddots \end{array}$$ When you first sum with respect to $m$, we get $0$, but when we begin by $n$, the sum of the first row is $1$, and $0$ for the others. So, be careful with the order of integration when the function is not non-negative and not integrable.<|endoftext|> TITLE: Sufficient statistic for the Negative-Binomial Distribution QUESTION [5 upvotes]: I am fairly new to this topic but here is my problem: I have stumbled across a paper (Robinson and Smyth, 2008) stating that the sample sum is a sufficient statistic for NB-distributed random variables. I have tried to verify this by using the Fisher–Neyman factorization theorem $f_\theta(x)=h(x) \, g_\theta(T(x))$. This is how far I have come: $\frac{\prod\Gamma(x_i+r)}{\prod x_i! \Gamma(r)^n}(1-p)^{nr}p^{\sum{x_i}}$ It would be easy if it weren't for the Gamma function in the numerator, as then $h(x)=\frac{1}{\prod x_i!}$ and $g_\theta(T(x))=\Gamma(r)^{-n}(1-p)^{nr}p^{\sum{x_i}}$. If I am on the wrong path, could somebody please help me solve this? REPLY [2 votes]: Are you given that $r$ is known? If it is, then you could define $h(x)$ as you have but including the product in the numerator. This would not be a problem as $r$ is known.<|endoftext|> TITLE: Exterior derivative of a complicated differential form QUESTION [5 upvotes]: Let $\omega$ be a $2$-form on $\mathbb{R}^3\setminus\{0\}$ defined by $$ \omega = \frac{x\,dy\wedge dz+y\,dz\wedge dx +z\,dx\wedge dy}{(x^2+y^2+z^2)^{\frac{3}{2}}} $$ Show that $\omega$ is closed but not exact. What I have tried so far: In order to show that $\omega$ is closed, I need to show that $d\omega=0$.
I'm having some problems getting all of the calculus right and somewhere along the way I'm messing up. I started by rewriting $\omega$ as $$ \omega = (x\,dy\wedge dz+y\,dz\wedge dx +z\,dx\wedge dy)(x^2+y^2+z^2)^{-\frac{3}{2}} $$ Now I should be able to use the product rule to evaluate (I think). Then $$ d\omega = (dx\wedge dy\wedge dz+dy\wedge dz\wedge dx +dz\wedge dx\wedge dy)(x^2+y^2+z^2)^{-\frac{3}{2}} + (\ast) $$ where $$ (\ast) = (x\,dy\wedge dz+y\,dz\wedge dx +z\,dx\wedge dy)\left(-\frac{3}{2}(2x\,dx+2y\,dy+2z\,dz)\right)(x^2+y^2+z^2)^{-\frac{5}{2}} $$ Even after trying to simplify everything, I can't get it to cancel. This makes me think that perhaps I can't apply the product rule like this. What should I do to calculate $d\omega$? If $\omega$ is a globally defined smooth form and if $d\omega=0$, then $\omega$ is exact because there is some other form $\alpha$ with $d\alpha=\omega$ and $d^2\alpha=d\omega=0$. Because $\omega$ is not defined at $(0,0,0)$, it makes sense that it isn't exact. Is there a way to use the above reasoning to show that there can't be an $\alpha$ such that $$d\alpha=\omega?$$ REPLY [3 votes]: Use spherical coordinates. In spherical coordinates $r,\theta,\phi$, the form reads: $$ \omega = \sin\theta\,d\theta\wedge d\phi. $$ This is closed, because the coefficient only depends on a coordinate that is already used: $$ d\omega = d(\sin\theta)\wedge d\theta\wedge d\phi = \cos\theta\,d\theta\wedge d\theta\wedge d\phi = 0, $$ since $d\theta\wedge d\theta=0$. This is anyway not exact, because you can integrate it on the sphere! Integrating it on the closed area $0\le\theta\le \pi, 0\le\phi\le2\pi$, that is a sphere (of radius one), you find (it's a simple calculation): $$\oint\omega = 4\pi.$$ Therefore, the form is not exact.<|endoftext|> TITLE: Find Metric in $\mathbb{R}^2$ s.t. it is not Complete QUESTION [5 upvotes]: My friend ask me: How to define a metric in $\mathbb{R}^2$ in such an way that $\mathbb{R}^2$ is not complete. I gave him the following metric: Let $B=\{x\in\mathbb{R}^2:\ \|x\|<1\}$. By a diffeomorphism we can think that $\mathbb{R}^2$ is $B$. In this way we have that the points in $B$, close to the boundary of $B$, are the points in $\mathbb{R}^2$ with big norm in $\mathbb{R}^2$. Hence, if $F:\mathbb{R}^2\rightarrow B$ is the diffeomorphism, we can define the metric in $\mathbb{R}^2$ by $$d(x,y)=\overline{d}(F(x),F(y))$$ where $\overline{d}$ is the euclidean metric restricted to $B$. He liked the metric, but he asked me an more "elementary" metric, not so trivial but not so elaborated. In this way, can you guys please help me to find more metrics? Thanks REPLY [6 votes]: Since everybody tries something almost everywhere smooth, I repeat my non-smooth solution from the comment above as an asnwer: The set $\mathbb R^2$ as the same cardinality as $\mathbb R\setminus \mathbb Q$. Therefore there exists a bijection $F\colon \mathbb R^2\to \mathbb R\setminus \mathbb Q$. Then we can define the metric $$d(x,y)=|F(x)-F(y)|$$ on $\mathbb R^2$, which makes it an incomplete metric space, of course isomorphic to $ \mathbb R\setminus \mathbb Q$ with standard metric.<|endoftext|> TITLE: Are all analytic functions on the whole complex plane "generated" by the polynomials and the exponential? QUESTION [10 upvotes]: Take $S$ to be the set containing all polynomials and $e^z$ over $\mathbb C$. If we add, subtract, multiply, divide (if the denominator is non-zero everywhere) and compose functions in our set $S$ we get analytic functions. 
My question is: do all analytic functions on the whole complex plane arise this way? REPLY [4 votes]: As an alternative to the other (perfectly fine) answer, we can also use the fact that an entire function can grow arbitrarily fast on the real line, and that functions generated by $S$ (including iterated integrals of those) can only grow as fast as finitely iterated exponentials. In order to make the argument rigorous, let $e_1(x) = e^{x}$, and inductively $e_{k+1}(x) = e_k(e^x)$ be the iterated exponentials. Now define $$ f(z)=a_0+\sum_{k=1}^\infty \left(\frac{z}{k}\right)^{n_k}$$ where $a_0 = e_1(1)=e$, and $(n_k)$ is a strictly increasing sequence of natural numbers chosen such that $$\left(\frac{k}{k-1}\right)^{n_{k-1}}\ge e_{k}(k)$$ for all $k \ge 2$. Then $f$ is an entire function (since $|z/k|<1$ for $k > |z|$, the tail of the series is dominated by a geometric series for any fixed $z$), and $f(k) \ge e_k(k)$ for all $k\ge 1$ (for $k\ge 2$ the $(k-1)$-st term of the series at $z=k$ is $\ge e_k(k)$ and all other terms are positive, while $f(1)\ge a_0=e_1(1)$). It is easy to see by induction that each $e_n$ is increasing, and that $e_k(x) \ge e_m(x)$ for $k \ge m$ and all $x\ge 0$. This implies that $f(k) \ge e_k(k) \ge e_m(k)$ for all $k \ge m$, so $$\limsup_{x\to\infty} \frac{f(x)}{e_k(x)} \ge 1$$ for all $k$. On the other hand, all functions $g \in S$ satisfy $$\limsup_{x\to\infty} \frac{|g(x)|}{e_2(x)} = 0,$$ and by induction any function $g$ that arises from functions in $S$ by $k$ operations (addition, multiplication, division, composition, integration) satisfies $$\limsup_{x\to\infty} \frac{|g(x)|}{e_{k+2}(x)} = 0.$$ This shows that $f$ is not in the class of entire functions generated by $S$.<|endoftext|> TITLE: Using Fermat's Little Theorem to solve a Diophantine equation. QUESTION [5 upvotes]: How to prove that the equation $x^2+5=y^3$ has no integer solutions? I have proved the case when $x$ is odd. I used the fact $x^2\equiv 1 \pmod 4$, but how would you do it for even $x$: the mod 4 analysis becomes useless. The problem is from a Fermat's Little Theorem section, but I do not know how to apply it. Thanks REPLY [4 votes]: Here is a proof. First note that $y$ cannot be congruent to $3$ modulo $4$. Indeed, had it been, then we would have had $$ x^2 + 5\equiv y^3 \equiv27\pmod{4}\implies x^2 \equiv2\pmod{4}, $$ a contradiction. With this in mind, rewrite the equation as $$ x^2+4 = y^3 - 1=(y-1)(y^2+y+1). $$ Now, we claim that the right hand side always has a prime divisor of the form $4k+3$. Indeed, if $y\equiv 0\pmod{4}$, then $y-1\equiv3\pmod{4}$, hence it has such a prime divisor; and if $y\equiv1$ or $y\equiv2$ modulo $4$, $y^2+y+1$ has a prime divisor congruent to $3$ modulo $4$. Equipped with the following well-known lemma, we are done. If $p\equiv 3\pmod{4}$ is a prime, then $$ p\mid x^2+y^2 \implies p\mid x \ \text{and} \ p\mid y. $$ Applying the lemma we conclude for this prime divisor that $p\mid x^2+2^2$ implies $p \mid 2$, contradiction. Done.<|endoftext|> TITLE: Show that $x$ has larger order than $a$ QUESTION [5 upvotes]: This is exercise $2.35$ from Rotman's A First Course in Abstract Algebra. Let $G$ be a group and let $a \in G$ have order $pk$ for some prime $p$, where $k \geq 1$. Prove that if there is $x \in G$ with $x^p = a$, then the order of $x$ is $p^2k$, and hence $x$ has larger order than $a$. This isn't homework, but I'm stuck. Is there a nice way to prove it?
REPLY [2 votes]: This is the special case $n = 1$ of the following fact: Let $p$ be a prime, and suppose $a \in G$ has order divisible by $p$ and $x^{p^n} = a$. Then $x$ has order $p^n \cdot o(a)$. To prove this, you can use the order formula $$o(x^{p^n}) = \frac{o(x)}{\gcd(o(x), p^n)} $$ Thus $o(x) = \gcd(o(x), p^n) \cdot o(a)$. Let $p^k$ be the largest power of $p$ dividing $o(x)$. If $k < n$, then $o(x) = p^k \cdot o(a)$. Because $p$ divides $o(a)$, this implies that $p^{k+1}$ divides $o(x)$, a contradiction. Hence $k \geq n$, which proves that $o(x) = p^n \cdot o(a)$.<|endoftext|> TITLE: Understanding of extension fields with Kronecker's theorem QUESTION [6 upvotes]: In the book Contemporary Abstract Algebra by Gallian it defines an extension field as follows: A field $E$ is an extension field of a field $F$ if $F\subseteq E$ and the operations of $F$ are those of $E$ restricted to $F$. Question 1) When it says "and the operations of $F$ are those of $E$ restricted to $F$" is this equivalent to saying "and the operations of $F$ are those of $E$ such that $F$ is a subfield"? Is this what is meant by "restricted to $F$"? Also, Kronecker's theorem states: Let $F$ be a field and let $f(x)$ be a nonconstant polynomial in $F[x]$. Then there is an extension field $E$ of $F$ in which $f(x)$ has a zero. Question 2) I know that in the theorem above $E=F[x]/\langle p(x)\rangle$ where $p(x)$ is an irreducible factor of $f(x)$ and that the field $E=F[x]/\langle p(x)\rangle$ contains an isomorphic copy of $F$. But why doesn't the definition of extension fields take into account up to an isomorphism (since technically $F$ is not a subset of $E$ in this case) when we talk about a field $E$ being an extension of a field $F$? Can't we define extension fields say as: An extension field of a field $F$ is a pair $(E,\phi)$ such that $\phi$ is a homomorphism from $F$ to $E$ with $\phi(F)\subseteq E$ and the operations of $\phi(F)$ are those of $E$ restricted to $\phi(F)$? REPLY [2 votes]: The main thing is that every homomorphism between fields is an embedding (injective). On one hand it means that your definition is correct (assuming that homomorphisms take $1\mapsto 1$, as Brett said), and yes, more precise. On the other hand, it means that if we have a homomorphism $K\to L$, then, basically, we can assume that $K\subseteq L$ is a subfield (in this step we identify $K$ with its image), and this viewpoint has some advantages, at least in simplifying the notation.<|endoftext|> TITLE: Is the following field extension normal $\mathbb Q(\!\sqrt {2+\sqrt 2}): \mathbb Q$ QUESTION [5 upvotes]: I would like to know if $\mathbb Q \left[\sqrt {2+\sqrt 2}\right ]: \mathbb Q$ is normal. The roots of the minimal polynomial are $\pm\sqrt {2\pm\sqrt 2}$. Now the thing that I have really tried and have no idea how to get is to write $\sqrt {2-\sqrt 2}$ as a combination of the powers of $\sqrt {2+\sqrt 2}$, if at all it is possible. What are the possible ways of finding the coefficients? Thanks for your help. REPLY [2 votes]: As an incidental remark, this field (which is a normal extension of $\mathbb Q$, as the other answers show) is in fact the maximal totally real subfield of $\mathbb Q(\zeta_{16})$ (where $\zeta_{16}$ denotes a primitive $16$th root of unity).
In fact, $\zeta_{16} + \zeta_{16}^{-1} = \sqrt{2 + \sqrt{2}},$ and so the field in question is indeed equal to $\mathbb Q(\zeta_{16} + \zeta_{16}^{-1}).$<|endoftext|> TITLE: As shown in the figure: Prove that $a^2+b^2=c^2$ QUESTION [17 upvotes]: Geometry: constructions in the triangle (the figure is omitted here). Other triangles with the same property, listed as six angles in degrees: $1.$ 12 18 6 12 30 102 $2.$ 15 30 15 15 15 90 $3.$ 24 30 54 24 6 42 $4.$ 30 10 40 30 20 50 (proposed problem in sense of clockwise) $5.$ 36 12 6 12 18 96 $6.$ 36 18 6 36 6 78 $7.$ 42 6 36 42 12 42 $8.$ 60 6 57 30 3 24 $9.$ 60 24 12 12 6 66 Using matlab we can find all triangles (integer solutions) with this sums-of-squares property (results shown in a figure, omitted here). REPLY [6 votes]: I thought that this problem should be solvable without using trigonometry. Here's a hint for a geometric solution: Draw the segment $DG$ and let $B'$ be the intersection of the line through $AE$ and the perpendicular line to $DG$. I claim that the triangle $DB'G$ has sides of length $a,b,c$, so $c^2 = a^2 + b^2$ by the Pythagorean theorem. Since geometry is hard to communicate, I think it is better to let you figure this out on your own.<|endoftext|> TITLE: Injectivity of Homomorphism in Localization QUESTION [14 upvotes]: Let $\alpha:A\to B$ be a ring homomorphism, $Q\subset B$ a prime ideal, $P=\alpha^{-1}(Q)\subset A$ a prime ideal. Consider the natural map $\alpha_Q:A_P\to B_Q$ defined by $\alpha_Q(a/b)=\alpha(a)/\alpha(b)$. Suppose that $\alpha$ is injective. Then is $\alpha_Q$ always injective? I think so, but I'm clearly being too dense to prove it! My argument goes as follows. Let $\alpha(a)/\alpha(b)=0$. Then $\exists c \in B\setminus Q$ s.t. $c\alpha(a)=0$. If $B$ is a domain we are done. If not we must exhibit some $d\in A\setminus P$ s.t. $da=0$. Obviously this is true if $c =\alpha(d)$. But I don't see how I have any information to prove this! Am I wrong and this is actually false? If so could someone show me the trivial counterexample I must be missing? Many thanks! REPLY [2 votes]: The question is right, isn't it? Actually, see Exercise 2.18(b) of Chapter 2 of Algebraic Geometry by Hartshorne. Given a ring homomorphism $f:A \to B$, let $g:Spec B \to Spec A$ be the induced morphism. Then $f$ is injective iff the map of sheaves $g^\sharp:O_{Spec A} \to g_*O_{Spec B}$ is injective. Note that $g^\sharp$ is injective iff for any $a \in A$, $g^\sharp(D(a)):O_{Spec A}(D(a)) \to g_*O_{Spec B}(D(a))$ is injective, i.e.
for any $a \in A$, $g^\sharp_a:A_a \to B_{f(a)}$ is injective. So the question becomes: $f$ is injective iff $g^\sharp_a$ is injective for any $a\in A$. The proof is as follows: for the "if" part, assume $g^\sharp$ is injective; taking global sections (note that the global sections functor is left exact) we get that $f:A \to B$ is injective. For the "only if" part, if $f:A \to B$ is injective and $g^\sharp_a(c/a^n)=f(c)/f(a^n)=0 \in B_{f(a)}$, then there exists some integer $m$ such that $f(c)f(a)^m=0$, which implies $f(ca^m)=0$; since $f$ is injective, $ca^m=0$, so $c/a^n=0$. Q.E.D. By this conclusion, a ring homomorphism $f$ is injective iff for any $p\in Spec B$, the induced map $g^\sharp_p:O_{Spec A,f^{-1}(p)} \to O_{Spec B,p}$ is injective. In the language of categories, this fact states that, since the category of affine schemes is equivalent to the opposite category of the category of commutative rings with identity, injectivity of ring homomorphisms is equivalent to injectivity of the corresponding morphisms of sheaves on affine schemes; note that injectivity of morphisms of sheaves is equivalent to injectivity of the induced maps on stalks, hence the result is not strange.<|endoftext|> TITLE: Find all natural numbers with the property that... QUESTION [6 upvotes]: Find all natural numbers with the property that when the first digit is moved to the end, the resulting number is $3.5$ times the original number. REPLY [4 votes]: Besides EuYu's, another way to arrive at the answer is to say that the leading digit has to be $1$ or $2$, because if it were $3$ or greater there would be a carry and the product would have more digits than the original number. Let's try $1$. The multiplicand then starts with $1$ and the product ends with $1$. To have the product end with $1$, the multiplicand must end with $6$ as $6 \cdot \frac 72=21$. Then the product ends with $61$ and the multiplicand ends with $46$. We keep going until we have a $1$ at the front, giving $$ 153846 \\ \underline{\times \ \ \ \ 3.5} \\ 538461$$ Now we can see that multiplying by $2$ will carry, so it will not work and this is the only primitive solution.<|endoftext|> TITLE: Is this matrix diagonalizable? Wolfram Alpha seems to contradict itself... QUESTION [14 upvotes]: I have the matrix $\begin{bmatrix}0.45 & 0.40 \\ 0.55 & 0.60 \end{bmatrix}$. I believe $\begin{bmatrix}\frac{10}{17} \\ \frac{55}{68}\end{bmatrix}$ is an eigenvector for this matrix corresponding to the eigenvalue $1$, and that $\begin{bmatrix}-\sqrt{2} \\ \sqrt{2}\end{bmatrix}$ is an eigenvector for this matrix corresponding to the eigenvalue $0.05$. However, Wolfram Alpha tells me this matrix is, in fact, not diagonalizable (a.k.a. "defective"): I'm really confused... which one is in fact defective -- Wolfram Alpha, or the matrix? Or is it my understanding of diagonalizability that's, uh, defective? REPLY [8 votes]: This is a combination of numerical linear algebra being hard, bad error handling, and confusing output. 1: Numerical linear algebra is hard. This problem is ill-conditioned as far as Mathematica is concerned. As has been mentioned in the comments, Mathematica (and its Wolfram-Alpha cousin) automatically enters numerical approximation mode when you give it decimals. It assumes you have provided it with the data to the highest precision you can assert, and cannot tolerate imprecisions that exceed this implicitly provided precision threshold. In your case, one of the entries has only a single digit of precision.
Since the condition number is 23, you are expecting to lose $\log_{10}(23)>1$ decimal digits in precision, which exceeds the available precision. This is hardly unique to your data. Trying to have Wolfram-Alpha diagonalize $$\begin{pmatrix}1.0&0.0\\0.0&1.0\end{pmatrix},$$ using the decimal points in particular, results in the same issue. The algorithm therefore detects the system is ill-conditioned, and starts having problems. 2: Bad Error Handling Rather than tell you that the problem is ill-conditioned and therefore prone to issues, it spits out that first message: "not diagonalizable." It should have said something to the effect of "problem is ill-conditioned; answers may not be accurate." 3: Confusing Output Ah, but keep reading! Wolfram-alpha didn't give up when it saw it was ill-conditioned and started spouting nonsense about it being non-diagonalizable. It then did the next-best thing it felt it could do: provide a Jordan decomposition, which turns out to be precisely the diagonalization you want. Wolfram-alpha just doesn't trust that the decomposition is all that reliable. 4: Hidden Secret Bug/Interface shortcoming Of course, there's a second problem. Intuitively you might think "okay, I'll just throw in some trailing zeros for more precision, problem solved!" Unfortunately, this doesn't work. Nor does trying to multiply the matrix by an arbitrary non-zero constant. From what I can gather, the algorithms Wolfram-Alpha (and essentially Mathematica) uses will effectively transform the entries back to the ill-conditioned 1 digit of precision situation, and then transform them back after it gets an answer (if any). Normally this is a worthwhile thing to do, as it guarantees that the numbers occupy the range that the algorithm and system can expect to handle the most efficiently. Here it just becomes a headache. I've yet to find a way that gets Wolfram-Alpha to interpret something like .4 as being accurate to, say, 32 decimal places.<|endoftext|> TITLE: Algorithms for computing or numerically approximating the Prokhorov metric? QUESTION [8 upvotes]: I am interested in the following practical question: Given two measures (say those of two parametric distributions), is there an algorithm for computing the Prokhorov metric between them? The general definition of the Prokhorov metric is as follows. For two finite measures $\mu_1$, $\mu_2$ on a separable metric space $\left( X, d \right)$, that metric is defined as $ \rho \left( \mu_1, \mu_2 \right) = \inf \left\{ \varepsilon > 0 : \mu_1 \left( G \right) \leqslant \mu_2 \left( G^{\varepsilon} \right) + \varepsilon, \forall G \in \mathcal{B} \right\} $ where $\mathcal{B}$ is the Borel $\sigma$-algebra on $X$ and $G^{\varepsilon} = \left\{ x \in X : \inf_{y \in G} d \left( x, y \right) < \varepsilon \right\}$. This metric is quite useful in the theory of weak convergence of probability measures on metric spaces (See Billingsley [Convergence of Probability Measures] or van der Vaart and Wellner [Weak Convergence and Empirical Processes]). The purpose of my question is that I am curious about whether a constructive algorithmic approach has been already studied. And if not, how could that be accomplished. 
REPLY [2 votes]: A quick answer in the special case of the Levy metric, i.e., the Levy-Prokhorov metric $\rho(\cdot,\cdot)$ for distributions on $R$ is as follows: Let $h_C(\cdot,\cdot)$ be the Hausdorff metric induced by the Chebyshev metric on the space of all closed subsets of $R^2$ (Details here: https://math.stackexchange.com/a/218747/45639 ). For two distribution functions $F$ and $G$, denoting their respective completed graphs by $\bar{F}$ and $\bar{G}$, we have $\rho(F,G) = h_C(\bar{F},\bar{G})$. Once you realize this, there are many algorithms available to calculate $h_C(\bar{F},\bar{G})$. The brute force method for paths (like $\bar{F}$ and $\bar{G}$) in $R^2$ is quite quick. Here's a reference that lists some basic algorithms: Nutanong et al, An Incremental Hausdorff Distance Calculation Algorithm, Proceedings of the VLDB Endowment, Vol 4, Issue 8, May 2011: http://www.vldb.org/pvldb/vol4/p506-nutanong.pdf<|endoftext|> TITLE: Can someone explain what plim is? QUESTION [9 upvotes]: In my Introductory Econometrics class we discussed a concept of "plim" or "probability limit. I'm not sure what this means though and my professor doesn't explain it well at all. Can someone tell me what this is if you have heard of it? It seems to be used in the same way we would use a regular limit but I just don't understand it. Thanks!! REPLY [16 votes]: According to this Wikipedia article If we have a sequence of real numbers $x_1, x_2, x_3, \ldots$, then we have a precise meaning for the statement \begin{equation} \lim_{n \to \infty} x_n = x. \end{equation} In particular, for any $\varepsilon > 0$, there is an $N$ such that $|x_n - x| < \varepsilon$ whenever $n \geq N$. Now suppose that I have a sequence of random variables $X_1, X_2, X_3, \ldots$. What do we mean when we talk about convergence of such a sequence? In fact, unlike the case with real numbers, there are many things that we could mean. Perhaps we mean that the value of the random variables gets close to a real number $x$ in the sense that the probability that $X_n$ is very different from $x$ (i.e., $|X_n - x|$ is large) gets very small as $n$ gets big. Perhaps we mean that the distribution of $X_n$ gets very close to the distribution of some other random variable $Y$ as $n$ gets large (then we would need a definition for the distance between distributions). So here is the definition of a probability limit. Definition: Let $X_1, X_2, X_3, \ldots$ be a sequences of random variables and let $X$ be a random variable. $X_n \to X$ in probability if for every $\varepsilon > 0$ we have \begin{equation} \lim_{n \to \infty} P(|X_n - X| \geq \varepsilon) = 0. \end{equation} Because probabilities are real numbers between $0$ and $1$ the limit is a standard calculus style limit. The definition says that $X$ is the probability limit of $X_n$ if the probability that the real number $|X_n - X|$ is bigger than any positive $\varepsilon$ gets very small as $n$ gets large. Example: Consider repeatedly throwing a fair coin. Let $X_n$ be a random variable that is $0$ if the $n$-th toss is tails and $1$ if it is heads. Let $S_n/n$ be the mean of $X_1, X_2, \ldots, X_n$. You probably know that the mean converges to $0.5$ if you flip the coin enough times. What we mean is that for any $\varepsilon > 0$ we have \begin{equation} \lim_{n \to \infty} P\left(\left|\frac{S_n}{n} - 0.5 \right| \geq \varepsilon \right) = 0. \end{equation}<|endoftext|> TITLE: Evaluate the sum $\sum_{k=0}^{\infty}\frac{1}{(4k+1)(4k+2)(4k+3)(4k+4)}$? 
QUESTION [8 upvotes]: Evaluate the series $$\sum_{k=0}^{\infty}\frac{1}{(4k+1)(4k+2)(4k+3)(4k+4)}=?$$ Can you help me? This is a past contest problem. REPLY [4 votes]: Start with the Mercator series $$ \sum_{k=0}^\infty\frac{(-1)^k}{k+1}=\log(2) $$ and Gregory's series $$ \sum_{k=0}^\infty\frac{(-1)^k}{2k+1}=\frac\pi4 $$ The Heaviside Method yields $$ \begin{align} &\frac6{(4k+1)(4k+2)(4k+3)(4k+4)}\\ &=\frac1{4k+1}-\frac3{4k+2}+\frac3{4k+3}-\frac1{4k+4}\\ &=\color{#C00000}{\left(\frac2{4k+1}-\frac2{4k+2}+\frac2{4k+3}-\frac2{4k+4}\right)}\\ &-\,\color{#00A000}{\left(\frac1{4k+1}-\frac1{4k+3}\right)}-\color{#0000FF}{\left(\frac1{4k+2}-\frac1{4k+4}\right)} \end{align} $$ Note that the parts in red, green, and blue are all $O\left(\frac1{k^2}\right)$, so their sums converge absolutely. Thus, $$ \begin{align} \sum_{k=0}^\infty\frac6{(4k+1)(4k+2)(4k+3)(4k+4)} &=\color{#C00000}{2\log(2)}-\color{#00A000}{\frac\pi4}-\color{#0000FF}{\frac12\log(2)}\\ &=\frac32\log(2)-\frac\pi4 \end{align} $$ Dividing by $6$ yields $$ \sum_{k=0}^\infty\frac1{(4k+1)(4k+2)(4k+3)(4k+4)}=\frac14\log(2)-\frac\pi{24} $$<|endoftext|> TITLE: What does Universal mapping property for a free monoid mean? QUESTION [9 upvotes]: I am struggling to understand the "meaning" behind the Universal Mapping Property, as defined by Awodey (p.19): The free monoid $M(A)$ on a set $A$ is by definition "the" monoid with the following so called universal mapping property or UMP! Universal Mapping Property of $M(A)$ There is a function $i:A\to |M(A)|$, and given any monoid $N$ and any function $f:A\to |N|$, there is a unique monoid homomorphism $\bar{f}:M(A)\to N$ such that $|\bar{f}|\circ i =f$, all as indicated in the following diagram: (... in Mon and in Sets diagrams follow) What does the author want me to learn here? Where is the definition of the "Free monoid" - it seems to me like he is referring to an earlier definition in "by definition". What does the author mean by the word "the" in "the monoid" - simply that it is unique? I am altogether confused, and I would appreciate an explanation or a different point of view on this matter. Thank you! P.S.: Math level - novice. REPLY [9 votes]: The author is introducing a definition of a mathematical object, not by constructing it explicitly, but by describing an important property that it satisfies. It is a non-obvious fact that this important property characterizes the object in question "up to unique isomorphism" (which you can read as meaning "uniquely" for the time being). The universal mapping property is the definition. It is a non-obvious fact that this is a meaningful way to define something. "The" is shorthand for "unique up to unique isomorphism." I wouldn't worry about this for the time being. Universal properties can be thought of as a vast generalization of the notion of "largest" or "smallest." In many cases they can be thought of as the "laziest" way to do something. In this case, the free monoid can be thought of as the "laziest" way to turn a set into a monoid. This will be made clearer by a more explicit description of the free monoid (I am assuming that Awodey gives such a description) as the set of words on the elements of $A$. You might find it helpful to supplement Awodey by reading Lawvere and Schanuel's Conceptual Mathematics.<|endoftext|> TITLE: Matrix commuting with commutator QUESTION [6 upvotes]: Suppose $A$ and $B$ are real or complex $n \times n$ matrices and $C = [A,B]$ is their commutator. If $C$ commutes with $A$, show that $C$ is nilpotent.
REPLY [7 votes]: I guess there are elementary proofs for this result but the only one I can think of now is the following result for bounded derivations, which applies to many other cases. You can see it in Murphy's book (Problems 12-14 of Chapter 1); I just sketch the key steps here. Let $\mathcal{A}$ be an algebra; a linear map $d:\mathcal{A}\to \mathcal{A}$ is a derivation if \begin{equation} d(ab)=a\,d(b)+d(a)\,b \end{equation} for all $a,b\in \mathcal{A}$. Derivations satisfy the Leibniz formula \begin{equation} d^n(ab)=\sum_{k=0}^n\frac{n!}{(n-k)!k!}d^k(a)d^{n-k}(b). \end{equation} (Think about derivatives.) Now let $\mathcal{A}$ be a unital Banach algebra and $d$ be bounded. If we have $a\in \mathcal{A}$ such that $d(a)=\lambda a$ for some $\lambda\neq 0$, then you can apply Leibniz to $d(a^n)$ to see $a^n=0$ for large $n$. (Note that $\lambda$ is in the spectrum of $d$, which is a bounded set.) Now if we have $d^2(a)=0$, then we can show $d^n(a^n)=n!(d(a))^n$, which then gives that $d(a)$ is quasi-nilpotent. Because \begin{equation} \|d^n\|\ge n!\frac{\|(da)^n\|}{\|a^n\|} \end{equation} and then \begin{equation} \|d\|=\lim\|d^n\|^{1/n}\ge\lim(n!)^{1/n}\frac{\|(da)^n\|^{1/n}}{\|a^n\|^{1/n}}, \end{equation} the only possibility for $\|d\|$ to stay bounded is that $\lim\|(da)^n\|^{1/n}=0$, which says $da$ is quasi-nilpotent. To sum up, what we have shown is that in a unital Banach algebra $\mathcal{A}$, if $d^2(a)=0$ for some derivation $d$, then $d(a)$ is quasi-nilpotent. Apply this to $\mathcal{A}=M_n(\mathbb{C})$ and the derivation defined by $B\mapsto AB-BA$; then you can see $[A,B]=d(B)$. If $[A,B]$ commutes with $A$, then $d^2(B)=d([A,B])=0$, so $[A,B]=d(B)$ is quasi-nilpotent. But in finite dimensional spaces like $M_n(\mathbb{C})$, this is the same as nilpotent. By the way, this is the Kleinecke-Shirokov theorem.<|endoftext|> TITLE: Is the difference of the natural logarithms of two integers always irrational or 0? QUESTION [11 upvotes]: Suppose I have two integers $a,b > 1$. Is $\ln(a) - \ln(b)$ always either irrational or $0$? I know both $\ln(a)$ and $\ln(b)$ are irrational. REPLY [28 votes]: If $\log(a)-\log(b)$ is rational, then $\log(a)-\log(b)=p/q$ for some integers $p$ and $q$, hence $\mathrm e^p=r$ where $r=(a/b)^q$ is rational. If $p\ne0$, then $\mathrm e=r^{1/p}$ is algebraic since $\mathrm e$ solves $x^p-r=0$. This is absurd hence $p=0$, and $a=b$.<|endoftext|> TITLE: Prove that $\frac{1}{4-\sec^{2}(2\pi/7)} + \frac{1}{4-\sec^{2}(4\pi/7)} + \frac{1}{4-\sec^{2}(6\pi/7)} = 1$ QUESTION [9 upvotes]: How can I prove the fact $$\frac{1}{4-\sec^{2}\frac{2\pi}{7}} + \frac{1}{4-\sec^{2}\frac{4\pi}{7}} + \frac{1}{4-\sec^{2}\frac{6\pi}{7}} = 1.$$ When I asked, somebody told me to use the ideas of Chebyshev polynomials, but I haven't learnt that in school. I tried doing it this way: Look at $y =\cos\theta + i \sin\theta$ where $\displaystyle\theta \in \Bigl\{\frac{2\pi}{7},\frac{4\pi}{7},\cdots,2\pi\Bigr\}$ Then we have \begin{align*} y^{7} &=1 \\ y^{7}-1 &=0 \\ (y-1) \cdot (y^{6}+y^{5}+\cdots + 1) &= 0 \end{align*} Now the root $y=1$ corresponds to $\theta = 2\pi$, and $$y^{6} + y^{5}+\cdots + 1 =0$$ has roots $\cos\theta + i \sin\theta$, where $\theta \in \Bigl\{\frac{2\pi}{7},\frac{4\pi}{7} ,\cdots \Bigr\}$. Looking at $y+\frac{1}{y} $ will give me the roots as $\cos\theta$ and then I can put $z=y^{2}$ to get $\cos^{2}$ as the roots and then invert to get $\sec^{2}$, but I have some problems.
Can anyone help me out with a neat solution? Thanks. REPLY [3 votes]: Where you have left off, $y^6+y^5+\cdots+y+1=0$ where $y=\cos \theta+i\sin \theta$ with $\theta=\frac{2\pi}7,\frac{4\pi}7,\frac{6\pi}7,\cdots , \frac{12\pi}7$. Let us divide both sides by $y^3,$ $y^3+\frac1{y^3}+y^2+\frac 1{y^2}+y+\frac 1 y+1=0$ or $\left(y+\frac1y\right)^3-3\left(y+\frac1y\right)+\left(y+\frac1y\right)^2-2+\left(y+\frac1y\right)+1=0$ or $\left(y+\frac1y\right)^3+\left(y+\frac1y\right)^2-2\left(y+\frac1y\right)-1=0$ Now, $\displaystyle y+\frac 1 y=2\cos \theta=z$ (say) So, $\displaystyle z^3+z^2-2z-1=0\ \ \ \ \color{Red}{(1)},$ has the roots $\displaystyle 2\cos\frac{2\pi}7, 2\cos\frac{4\pi}7, 2\cos\frac{6\pi}7$ using $\displaystyle\cos\frac{r\pi}7=\cos\left(2\pi-\frac{r\pi}7\right)=\cos\frac{(14-r)\pi}7 $ as $\color{Red}{(1)}$ does not have repeated roots $\displaystyle\implies z^2(1+z)=2z+1,z^2=\frac{2z+1}{z+1}$ $$\text{Now,}\displaystyle\frac 1{4-\sec^2\theta}=\frac{\cos^2\theta}{4\cos^2\theta-1}= \frac{z^2}{4z^2-4}=w(say),$$ $\displaystyle\implies z^2=\frac {4w}{4w-1}$ Comparing the two values of $\displaystyle z^2, \frac {4w}{4w-1}=\frac{2z+1}{z+1}$, which gives $z=\frac1{4w-2}$. Substituting this into $\color{Red}{(1)}$, we get a cubic equation in $w,$ whose sum of roots (by Vieta's formulas) is $1$, which is the required identity.<|endoftext|> TITLE: Generating functions for context-free languages QUESTION [13 upvotes]: I have a question about context free grammars and their relationship with generating functions. It is well-known how to associate a generating function $\mathsf{gf}{(R)}$ with a non-ambiguous regular expression $R$ over the alphabet $\Sigma$: $$ \begin{array}{rclcrcl} \mathsf{gf}{(\emptyset)} &=& 0 &\qquad& \mathsf{gf}{(\epsilon)} &=& 1\\ \mathsf{gf}{(a)} &=& x \quad (a \in \Sigma) && \mathsf{gf}{(R + R')} &=& \mathsf{gf}{(R)} + \mathsf{gf}{(R')} \\ \mathsf{gf}{(RR')} &=& \mathsf{gf}{(R)} \cdot \mathsf{gf}{(R')} && \mathsf{gf}{(R^*)} &=& \frac{1}{1 - \mathsf{gf}{(R)}} \end{array} $$ A regular expression, and more generally a grammar, is ambiguous if at least one string in its language can be parsed in more than one way. (Note that not all languages have non-ambiguous grammars, and that ambiguity of context-free grammars is not decidable.) The generating function of a regular expression can be used to count the number of words of length $n$ in the language of the regular expression: If $f$ is the generating function of a regular expression $R$ and $f$ has the power series expansion $\Sigma_{i < \omega}a_ix^i$ then the language generated by $R$ has $a_i$ words of length $i$. This is explained for example in H. Wilf's book generatingfunctionology. The general theory behind this is the theory of combinatorial species. Now my question: is there a way to do this same thing, explicitly getting a generating function in an inductive (or otherwise 'nice') way, for non-ambiguous context free grammars? REPLY [4 votes]: The classical Chomsky-Schutzenberger theorem is established in a constructive manner by transforming an unambiguous grammatical specification of the language into a set of polynomial equations, according to Flajolet. He gives some nice examples where the construction from grammar to generating function is given: Flajolet, TCS 1987.<|endoftext|> TITLE: Are "deterministic" and "idempotent" just two different names of the same concept? QUESTION [5 upvotes]: Sometimes I encounter the term "deterministic" and sometimes I encounter "idempotent" in describing functions. Are they just two different names for the same concept?
Or are they different ? P.S : I found about these two terms in an answer of stackoverflow . REPLY [5 votes]: They are usually completely different. An idempotent function is a function $f$ for which $f(x) = f(f(x))$ for all $x$. A deterministic function is not even a proper mathematical concept, as all mathematical functions are deterministic in a sense of the common meaning of the word. Rather than the determinism of a function, one might refer to its decidability, which is the ability to prove that it results in particular values from any particular argument given a particular theory. Decidability and idempotency are very different from each other, as are determinacy and idempotency. What is the context in which you are seeing "deterministic"? It may mean the same as "idempotent" there. But I do not believe that the terms are commonly or in any general mathematical sense understood to be equivalent. EDIT: I guess the difficulty is the explanation that "it returns the same result when run repeatedly", which seems to mean the same thing as both "deterministic" and "idempotent". So I guess there is probably a better way to answer this.<|endoftext|> TITLE: Original works of great mathematicians QUESTION [42 upvotes]: In almost every mathematical text there is a line as This was first proved by Gauss or This formula first appeared in a work of Riemann, but for me it's more like My friend told me once that... For my Bachelor thesis and other papers I'm working on I would prefer to add a scan of the original paper rather that a quote from a book1 that quoted book2 that found it in a book3. I was already looking for some original papers from some great mathematicians (Riemann, Euler, Cantor, Hilbert etc...) on the internet, but I got quite disappointed by results. I expected some organization that collects scans from the old scientific works and makes them publicly available, but either I was looking elsewhere or it just doesn't exist... So my question is: Do you know about any place (website?) where scans of original work of great old mathematicians are collected? Look, the scan of the first page of Riemann's original work Über die Anzahl der Primzahlen unter einer gegebenen Grösse !! Such a mathematical and historical gold... REPLY [2 votes]: I wonder if there are webpages yet to be discovered that collect collected works. If not it would be a great resource. Here I stumbled on Weierstrass's complete works: https://archive.org/details/mathematischewer03weieuoft<|endoftext|> TITLE: Definition of clique cover and clique edge cover QUESTION [5 upvotes]: From Wikipedia The clique cover problem (also sometimes called partition into cliques) is the problem of determining whether the vertices of a graph can be partitioned into k cliques. It seems to me that a clique cover is defined as a set of cliques that partition the vertices of the graph. From Wikipedia: An alternative definition of the intersection number of a graph G is that it is the smallest number of cliques in G (complete subgraphs of G) that together cover all of the edges of G. A set of cliques with this property is known as a clique edge cover or edge clique cover, It seems to me that a clique edge cover is defined as a set of cliques that cover the vertices of the graph. The two don't seem consistent to me. I searched in Douglas West's Introduction to Graph theory, Both clique cover and clique edge cover are defined in terms of cover instead of partition. So I wonder if the definition for clique cover in Wikipedia is wrong? Thanks! 
REPLY [8 votes]: A vertex clique cover is a set of cliques that cover all the vertices of a graph. If they overlapped, say two cliques shared the vertex $v$, we could simply delete $v$ from one of the two cliques and we'd end up with a vertex clique cover that is the same size or smaller (smaller if the clique was just $v$). So, in this case, we go ahead and define the cliques to be disjoint. The vertex clique cover number is the smallest number of such cliques needed to cover all the vertices. This number would be the same whether we specified disjoint cliques or not, because if a minimum vertex clique cover contained cliques that overlapped, we could delete out vertices from one or the other until they no longer overlapped. So, it is equivalent to partitioning the vertices into cliques. An edge clique cover is a set of cliques that cover all the edges. We can not guarantee that these will be disjoint, so talking about a partition makes no sense in this case. Take a star, for example, $K_{1, 3}$. To cover all 3 edges, we will need each edge to be a clique and all 3 cliques will contain the central vertex of the star. Again, we can define the edge clique cover number as the smallest number of such cliques needed to cover all edges. Thus, both could be considered in terms of covers, but only the vertex clique cover could be considered in terms of partitions. It is clear from these definitions that vertex clique cover number is less than or equal to edge clique cover number. Also, the vertex clique cover number of a graph is simply equal to the chromatic number of the complement of that graph. See here for the basic idea of a proof.<|endoftext|> TITLE: Explanation about frames as distinct from a co-ordinate system QUESTION [7 upvotes]: I am quite confused as to what is the difference between a frame and a co-ordinate system. The wikipedia page was not very helpful for me. I would be very happy if someone could give me a non-rigorous idea about what exactly the difference is. My background involves basic differential geometry. I have also done some very basic differential topology and am aware of manifolds and some topological properties associated with them. I'll try and elaborate a bit about the context. The Frenet frame fields are useful in that they express the rate of change of the unit vectors constituting the frame in terms of the vectors themselves. Also the fact that it is a "moving frame" is supposedly useful. I am thinking, this means analysis of the various properties associated with the curve is easier as the frame moves along with the observer.Though I couldn't explicitly point out what these properties are. Am I right?? Why exactly is this frame so useful??Also doesn't the notion of a frame reject path independence? I apologise if my question seems a bit vague. I do hope its enough to convey what I am looking for. I am not much exposed to physics. But I would welcome answers involving examples from physics if it helps matters. P.S.: If this can be tied in with notions about the coordinate basis of vector space, then all the better.I am looking for something all encompassing. I hope that doesnt come across as foolish. REPLY [4 votes]: It is worth clarifying the difference between a moving frame on an abstract manifold and a moving frame along a submanifold inside a higher dimensional space. 
A moving frame on an abstract manifold or on an open subset of the manifold is simply set of vector fields $V_1, \dots, V_n$ such that $V_1(x), \dots, V_n(x)$ form a basis of the tangent space $T_xM$ for each $x$ where the vector fields are defined. If the manifold is assumed to have a geometric structure, it is common to assume that the frame is somehow adapted to the geometric structure. For example, if there is a Riemannian metric, then it is often convenient to assume that the vector fields are orthonormal. A moving frame on a submanifold $M^n$ inside a higher dimensional ambient space $A^N$ is a set of sections $V_1, \dots, V_N$ of the tangent bundle of the ambient space restricted to the submanifold. Again, if the ambient space has a geometric structure, then the frame is usually assumed to be somehow adapted to the geometric structure. For example, if $A^N$ is Euclidean $N$-space or some other Riemannian $N$-manifold, then it is often convenient to assume that the frame is orthonormal and that the first $n$ vectors are tangent to the submanifold and the last $N-n$ are normal. It is possible to work with the moving frame directly but it is usually easier to work with the dual basis, which is a frame of sections of the cotangent bundle. This is because the exterior derivative is easier to compute with than the Lie bracket.<|endoftext|> TITLE: Galois group permutation of roots QUESTION [6 upvotes]: When considering the Galois group of the splitting field of the polynomial $x^3-2$, it is mentioned in my notes that $\sqrt[3]{2}$ can be mapped to $\sqrt[3]{2}$,$\sqrt[3]{2}\omega$ or $\sqrt[3]{2}\omega^2$, where $\omega$ is the cube root of unity. $\omega$ must be mapped to $\omega$ or $\omega^2$. My question is why is this so? Sorry for the beginner question, but why can't $\sqrt[3]{2}$ be mapped to say $\omega$, or $\omega$ be mapped to say, 1? Thank you very much for help. REPLY [7 votes]: First recall the definition of an automorphism of a field $F$. A map $\sigma: F \rightarrow F$ is an automorphism if its a bijective homomorphism that is $\sigma(0)=0, \;\sigma(1)=1$ and for any $a,b \in F$ we have $\sigma(ab)=\sigma(a)\sigma(b)$ and $\sigma(a+b)=\sigma(a)+\sigma(b)$. In particular in your example $\omega$ cannot map to $1$ because $\omega \neq 1$. In the general context we have the following result let $L/K$ be an algebraic extension and $\sigma$ a $K$-automorphism of $L$. By $K$-automorphism I mean that for any $\alpha \in K$ we have $\sigma(\alpha)=\alpha$ so $\sigma$ fixes $K$. If $x \in L$ then $\sigma(x)$ is a $K$-conjugate of $x$ that is, $\sigma(x)$ is a root of the minimum polynomial of $x$ over $K$. Let $p(t)=min_K(x,t)$ that is the minimum polynomial of $x$ then we have that $$0=\sigma(p(x))=\sigma\left(\sum_{i=0}^n p_ix^i\right)=\sum_{i=0}^n \sigma(p_i)\sigma(x)^i=\sum_{i=0}^np_i\sigma(x)^i=p(\sigma(x)).$$ So $\sigma(x)$ is also a root of $p(t)$. In the context of your question $\sqrt[3]{2}$ can't map to $\omega$ because their minimum polynomials over $\mathbb Q$ are different.<|endoftext|> TITLE: How to solve an overdetermined system of point mappings via rotation and translation QUESTION [7 upvotes]: I have a set of points in one coordinate system $P_1, \ldots, P_n$ and their corresponding points in another coordinate system $Q_1, \ldots , Q_n$. All points are in $\mathbb{R}^3$. I'm looking for a "best fit" transformation consisting of a rotation and a translation. I.e. 
$$ \min_{A,b} \sum (A p_i + b - q_i)^2 , \quad A \in \operatorname{SO}(3), b \in \mathbb{R}^3$$ Can anyone give me some hint in which direction I should search? I already looked at: http://en.wikipedia.org/wiki/Least_squares (don't know how to include the restriction to orthogonal matrices) http://en.wikipedia.org/wiki/Singular_value_decomposition (I thought that I'd start with a matrix from $\operatorname{GL}(3)$ and use the "best fit orthogonal matrix" afterwards like stated in http://en.wikipedia.org/wiki/Orthogonal_Procrustes_problem but that seamed too complicated) REPLY [2 votes]: I try to summarize what I have to do in this answer: Convert the $P_i$s and $Q_i$s: $\overline{p}$ and $\overline{q}$ are the center of gravity for the set of points to be mapped: $$ \overline{p} := \frac1i\sum_iP_i, \quad \overline{q} := \frac1i\sum_iQ_i$$ Make the points relative to their respective center of gravity: $$ p_i := P_i - \overline{p}, \quad q_i := Q_i - \overline{q}$$ This removes the need to consider the translation $b$ in the optimization. $b$ can be calculated as $\overline{q} - A \overline{p}$ once we have determined the optimal $A$. Calulate $B$ as $$B = \left(\sum_i p q^\top \right)^\top = \sum_i q p ^\top$$ Perform SVD decomposition of $B$: $$ B = U \Sigma V^\top$$ Optimal $A$ if $A \in \operatorname{O}(3)$ (not $A \in \operatorname{SO}(3)$ as stated in the question): $$ A := U V^\top $$ Optimal $b$: $$ b := \overline{q} - A \overline{p}$$ Optimal $A$ if $A \in \operatorname{SO}(3)$ would go here but I think $A \in \operatorname{O}(3)$ is what I'm really looking for.<|endoftext|> TITLE: Example of Non-Linear, UnAmbiguous and Non-Deterministic CFL? QUESTION [6 upvotes]: In Chomskhy classification of formal languages, I need some examples of Non-Linear, Unambiguous and also Non-Deterministic Context-Free-Language(N-CFL)? Linear Language: For which Linear grammar is possible $( \subseteq CFG)$ e.g. $ L_{1} = \{a^nb^n | n \geq 0 \} $ Deterministic Context Free Language(D-CFG): For which Deterministic Push-Down-Automata(D-PDA) is possible e.g. $ L_{2} = \{a^nb^nc^m | n \geq 0, m \geq 0 \} $ $L_{2}$is also a Non-Linear CFG (and unambiguous). Non-Deterministic Context Free Language(N-CFG): only Non-Deterministic Push-Down-Automata(N-PDA) is possible e.g. $ L_{3} = \{ww^{R} | w \in \{a, b\}^{*} \} $ $L_{3}$ is also Linear CFG Ambiguous CFL: CFL for which only ambiguous CFG is possible $ L_{4} = \{a^nb^nc^m | n \geq 0, m \geq 0 \} \bigcup \{a^nb^mc^m | n \geq 0, m \geq 0 \} $ $L_{4}$ is both non-linear and Ambiguous CFG And Every $ Ambigous CFL \subseteq NCFL$. [Question] Whether all non-linear, Non-Deterministic CFL are Ambiguous? If not then I need a example that is non-linear, non-deterministic CFL and also unambiguous? Venn-diagram for Chomsky classification of formal languages. REPLY [3 votes]: Let $L$ be the language of well-formed expressions using a single type of brackets such as (()(()())). This language is nonlinear, deterministic and unambiguous. Let $R$ be the language $\{w w^R\}$ of even palindromes. It is unambiguous, linear but nondeterministic. Assume that alphabets of $L$ and $R$ are disjoint. Then $L \cup R$ is unambiguous, nonlinear (due to $L$), and nondeterministic (due to $R$).<|endoftext|> TITLE: Why sqrt(4) isn't equall to-2? QUESTION [5 upvotes]: Possible Duplicate: Square roots — positive and negative $\sqrt{4} = -2$. WolframAlpha says "false"! Now lets take a deeper look to my idea. 
Well...we know that $$2^2 = 4 \iff \sqrt{4} = 2$$ and $(-2)^2 = 4$, so why can't $\sqrt{4}$ be equal to $-2$? I'm a little bit confused // Thank you for all your answers, I have an answer now. Stepo REPLY [5 votes]: By definition, $$\sqrt{x^2} = \vert x \vert$$ for $x \in \mathbb{R}$.<|endoftext|> TITLE: Determine the degree of a field extension QUESTION [6 upvotes]: I have to determine the degree of $\mathbb{Q}\left(\sqrt{2},\sqrt{3}\right)$ over $\mathbb{Q}$ and show that $\sqrt{2}+\sqrt{3}$ is a primitive element? Could someone please give me any hints on how to do that? REPLY [7 votes]: Clearly $[\mathbb Q(\sqrt 2):\mathbb Q]\le 2$ because of the polynomial $X^2-2$ and $[\mathbb Q(\sqrt 2,\sqrt 3):\mathbb Q(\sqrt 2)]\le 2$ because of the polynomial $X^2-3$. In fact, $\sqrt 2\notin \mathbb Q$ implies $[\mathbb Q(\sqrt 2):\mathbb Q]=2$. We also have $\sqrt 3\notin \mathbb Q(\sqrt 2)$ because $(a+b\sqrt 2)^2 = 3$ implies $(a^2+2b^2) + 2ab\sqrt 2 = 3$, hence $2ab = 0$ and $a^2+2b^2=3$; thus either $a=0$ and $b^2=\frac 32$, or $b=0$ and $a^2=3$. But both $\sqrt{\frac32}$ and $\sqrt 3$ are irrational. Therefore $[\mathbb Q(\sqrt 2,\sqrt 3):\mathbb Q(\sqrt 2)]=2$ and finally $$[\mathbb Q(\sqrt 2,\sqrt 3):\mathbb Q]=4.$$ For the second part, note that $\mathbb Q(\sqrt 2+\sqrt 3)$ contains $(\sqrt 2+\sqrt 3)^2=2+2\sqrt 6+3$, hence also $\sqrt 6$ and $\sqrt6(\sqrt 2+\sqrt 3)=2\sqrt 3+3\sqrt 2$, and finally both $3(\sqrt2+\sqrt 3)-(2\sqrt 3+3\sqrt 2)=\sqrt 2$ and $(2\sqrt 3+3\sqrt 2)-2(\sqrt2+\sqrt 3)=\sqrt 3$. REPLY [5 votes]: $$x=\sqrt 2+\sqrt 3\Longrightarrow x^2-2\sqrt 2\,x+2=3\Longrightarrow x^4-2x^2+1=8x^2\Longrightarrow$$ $$\Longrightarrow x^4-10x^2+1=0$$ Can you now prove the polynomial $\,t^4-10t^2+1\in\Bbb Q[t]\,$ is irreducible?<|endoftext|> TITLE: On $L^p$ and $\ell^p$ QUESTION [7 upvotes]: If a continuous and infinitely differentiable function $f(x): \mathbb{R}\to\mathbb{C}$ is in $L^p$, is it also true that $f(n),\ n\in \mathbb{Z}$ is in $\ell^p$? REPLY [3 votes]: I claimed: A sufficient condition for $f \in L^p$ to imply $f(n) \in \ell^p$ is that there exist $\delta >0$ and $g \in L^p$ such that $|f(x)-f(y)| \dots$<|endoftext|> TITLE: exp(X)=A has a solution if A is sufficiently close to identity matrix QUESTION [6 upvotes]: Can anyone give a hint for proving this? I think some kind of inverse function theorem argument would work. But I wasn't able to make it accurate... such as continuity of the mapping $X \mapsto \exp(X)$, or that this mapping has full rank near $A$ if $A$ is sufficiently close to the identity... REPLY [7 votes]: Since $\exp(A) = I + A + O(A^2)$, the derivative of $\exp$ at $0$ is the identity map. The inverse function theorem then shows that $\exp$ is invertible in a neighbourhood of $I$.<|endoftext|> TITLE: Showing $\frac{\sin x}{x}$ is NOT Lebesgue Integrable on $\mathbb{R}_{\ge 0}$ QUESTION [31 upvotes]: Let $$f(x) = \frac{\sin(x)}{x}$$ on $\mathbb{R}_{\ge 0}$.
$$\int |f| < + \infty\quad\text{iff}\quad \int f^+ < \infty \color{red}{\wedge \int f^- < \infty}$$ EDIT: So what I'm trying to do is to show that in fact $\int f^+ = \infty$, so that therefore $\int |f| = \infty$. Now consider the assertion that: $$\int f^+ = \sum_{k=1}^\infty \int_{[2\pi k , 2\pi k + \pi]} \left( \frac{\sin(x)}{x} \right) dx \ge \sum_{k=1}^\infty\int_{[2\pi k , 2\pi k + \pi]} \left( \frac{\sin(x)}{2 \pi k + \pi}\right) dx$$ Two Questions: (1) Is the first step of asserting that $$ \int f^+ = \sum_{k=1}^\infty \int_{[2\pi k , 2\pi k + \pi]} \left( \frac{\sin(x)}{x} \right) dx $$ correct, in the sense that you can partition a Lebesgue integral into an INFINITE series of integrals being added together (whose individual term domains cover all of the overall domain of the original integral s.t. they are also pairwise disjoint)? (2) Is there a well-known lower bound of $\sum_{k=1}^\infty\int_{[2\pi k , 2\pi k + \pi]} \left( \frac{\sin(x)}{2 \pi k + \pi}\right) dx$ that diverges, whose existence establishes that $f^+$ is in fact not integrable? REPLY [12 votes]: By definition, a measurable function $f=f(x)$ is Lebesgue integrable over a measurable set $E$ if and only if the Lebesgue integrals $\int_E f^+(x)\,dx$ and $\int_Ef^-(x)\,dx$ are both finite. Let us look at $f^+(x)$, which in this case is $\sin^+(x)/x$. Since $f^+(x)\geqslant 0$, $f^+(x)$ is integrable if and only if $\int f^+(x)\,dx<\infty$. Again, since $f^+(x)\geqslant 0$, we can compute its Lebesgue integral with the Monotone Convergence Theorem, and then estimate it from below: \begin{align*} \int_0^\infty f^+(x)\,dx &= \int_0^\infty \frac{\sin^+(x)}{x}\,dx \\ &\color{red}{=}\sum_{k=0}^\infty\int_{2k\pi}^{(2k+1)\pi}\frac{\sin(x)}{\color{green}{x}}\,dx & \color{red}{\text{Monotone Convergence Theorem}} \\ &\color{green}{\geqslant}\sum_{k=0}^\infty\int_{2k\pi}^{(2k+1)\pi}\frac{\sin(x)}{\color{green}{(2k+1)\pi}}\,dx \\ &= \sum_{k=0}^\infty\frac{1}{(2k+1)\pi}\color{blue}{\int_{2k\pi}^{(2k+1)\pi}\sin(x)\,dx} \\ &\color{blue}{=} \sum_{k=0}^\infty\frac{\color{blue}{2}}{(2k+1)\pi} \\ &= \sum_{k=0}^\infty\frac{1}{(k+1/2)\pi} \end{align*} By the Limit Comparison Test applied to the series obtained above and the harmonic series, we conclude that the series diverges, hence $\int_0^\infty f^+(x)\,dx = \infty$, so $f(x)$ is not integrable.<|endoftext|> TITLE: Farkas Lemma proof QUESTION [10 upvotes]: I am trying to prove the Farkas Lemma using the Fourier-Motzkin elimination algorithm. From Wikipedia: Let $A$ be an $m \times n$ matrix and $b$ an $m$-dimensional vector. Then, exactly one of the following two statements is true: There exists an $x \in \Bbb R^n$ such that $Ax = b$ and $x \ge 0$. There exists a $y \in \Bbb R^m$ such that $A^Ty \ge 0$ and $b^Ty < 0$. The first direction is quite easy: I assume that there is such a vector $y$ and I find a contradiction. For the other direction I have used Fourier-Motzkin elimination to reduce the number of variables. I assume that $Ax \le b$ and I do one step of the algorithm. I create a new system $A'x' \le b'$. I know that there exists a non-negative matrix $M$ that expresses the new system as a linear combination of the original one. I have followed the direction of repeating the algorithm $n$ times to eliminate all the variables and create the system $0 \le b''$. Now, in order for this system to be infeasible, it must be that $b''<0$ in some component. So I can assume that there exists a vector $y''$ such that $y''A''=0$ and also $y''b''<0$, because $b''<0$ and $y''\ge 0$.
Now I can prove that also there is a vector $y$ for the original system. But the repetition of $n$ steps seems to me a bit arbitrary. If I just do one step and create the system $A'x'\le b'$ how I can use it is infeasible? REPLY [5 votes]: As you note, $1\rightarrow \bar 2$ is straightforward: $$ \begin{align} Ax &= b \\ x'A' &= b' \\ x'A'y &= b'y \\ \end{align} $$ Since $x\geq0$, it is impossible to simultaneously have $A'y\geq 0$ and $b'y < 0$. For $\bar 1\rightarrow 2$, first note that $Ax = b, x\geq 0$ (the system that is infeasible due to $\bar 1$) is equivalent to $Dx \leq d$, where we define $$ D = \left( \begin{array}{c} A \\ -A \\ -I \end{array} \right), d = \left( \begin{array}{c} b \\ -b \\ 0 \end{array} \right). $$ We can now apply Fourier-Motzkin Elimination (FME) on the system $Dx\leq d$, removing all variables $1, 2, \ldots, n$ in order. Define $U^i$ to be the matrix used to remove variable $i$ from the system of equations; I will use $U^i\geq 0$ to indicate that each entry in $U^i$ is non-negative. From FME we have $U^nU^{n-1}\ldots U^1D = 0$. Defining $U = U^nU^{n-1}\ldots U^1$ we have $UD = 0$; note that from $U^1\geq 0, U^2\geq 0, \ldots, U^n\geq 0$ we also have $U\geq 0$. Because $Dx\leq d$ is infeasible, there must be some row $u'\geq 0$ of $U$ such that $u'D = 0'$ and $u'd < 0$. Letting $p\geq 0$ be the first $m$ elements of $u$, $q\geq 0$ be the next $m$ elements of $u$, and $r\geq 0$ be the last $n$ elements of $u$, we have: $$ \begin{align} u'D &= 0' \\ \left( \begin{array}{ccc} p' & q' & r' \end{array} \right) \left( \begin{array}{c} A \\ -A \\ -I \end{array} \right) &= 0' \\ (p-q)'A &= r' \\ (p-q)'A &\geq 0' \\ A'(p-q) &\geq 0 \end{align} $$ and $$ \begin{align} u'd &< 0 \\ \left( \begin{array}{ccc} p' & q' & r' \end{array} \right) \left( \begin{array}{c} b \\ -b \\ 0 \end{array} \right) &< 0 \\ (p-q)'b &< 0 \\ b'(p-q) &< 0 \end{align} $$ By setting $y = p-q$, we have used FME to construct a vector $y\in\mathbb{R}^m$ such that $A'y \geq 0$ and $b'y < 0$.<|endoftext|> TITLE: How does $2^{k+1} = 2 \times 2^k$? QUESTION [9 upvotes]: I ask only because my textbook infers this in an example. Where should I go to learn more about this? I'm trying to learn mathematics by Induction but my knowledge of simplifying algebraic equations is crippling me. Thanks. REPLY [15 votes]: By the rules of exponentiation, $x^{k} \times x = x^{k+1}$. If $k$ is an integer, $x^k = \underbrace{x \times x \times \cdots \times x}_{k \textrm{ times}}.$ So $$x^k \times x = \underbrace{x \times x \times \cdots \times x}_{k \textrm{ times}} \times x = \underbrace{x \times x \times \cdots \times x}_{k+1 \textrm{ times}}.$$<|endoftext|> TITLE: Normal subgroups of the Special Linear Group QUESTION [8 upvotes]: What are some normal subgroups of SL$(2, \mathbb{R})$? I tried to check SO$(2, \mathbb{R})$, UT$(2, \mathbb{R})$, linear algebraic group and some scalar and diagonal matrices, but still couldn't come up with any. So can anyone give me an idea to continue on, please? REPLY [5 votes]: ${\rm{SL}}_2(\mathbb{R})$ is a simple Lie group, so there are no connected normal subgroups. It's only proper normal subgroup is $\{I,-I\}$<|endoftext|> TITLE: Properties of the number 50 QUESTION [20 upvotes]: I will shortly be engaging with my 50th (!) birthday. 50 = 1+49 = 25+25 can perhaps be described as a "sub-Ramanujan" number. I'm trying to put together a quiz including some mathematical content. Contributions most welcome. What does 50 mean to you? 
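An editorial aside, not part of the original thread: the "$1+49=25+25$" observation can be sharpened to the fact that $50$ is the smallest positive integer that is a sum of two positive squares in two essentially different ways, namely $50=1^2+7^2=5^2+5^2$. A minimal Python sketch checking this claim (the helper name two_square_decompositions is made up purely for the illustration):

    # Verify that 50 is the smallest n that is a sum of two positive squares
    # in two essentially different ways (1^2 + 7^2 = 5^2 + 5^2 = 50).
    def two_square_decompositions(n):
        """Unordered pairs (a, b) with 0 < a <= b and a*a + b*b == n."""
        pairs = []
        a = 1
        while 2 * a * a <= n:
            b2 = n - a * a
            b = int(round(b2 ** 0.5))
            if b * b == b2 and b >= a:
                pairs.append((a, b))
            a += 1
        return pairs

    smallest = next(n for n in range(1, 1000)
                    if len(two_square_decompositions(n)) >= 2)
    print(smallest, two_square_decompositions(smallest))  # 50 [(1, 7), (5, 5)]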
REPLY [2 votes]: 50 is half the sum of the first nine prime numbers A lot more here: https://primes.utm.edu/curios/page.php?short=50<|endoftext|> TITLE: Why is the affine line with a doubled point not a separated scheme? QUESTION [11 upvotes]: How to show, that the affine line with a split point is not a separated scheme? Hartshorne writes something about this point in product, but it is not product in topological spaces category! Give the most strict proof! REPLY [7 votes]: Let $X$ be the affine line with a doubled origin. By definition, $X$ is constructed by gluing two schemes $V_1 = \text{Spec }k[t]$, $V_2 = \text{Spec }k[u]$ (which we subsequently identify with open sets of $X$) along the open set $U \subset X$ isomorphic to $\mathbb{A}^1\setminus \{0\}$ via the isomorphism $k[t, 1/t] \cong k[u, 1/u]$ defined by $t \leftrightarrow u$. The fibered product $X\times_k X$ is then covered by the open sets $V_1 \times_k V_1$, $V_2 \times_k V_2$, $V_1\times_k V_2,$ and $V_2\times_k V_1$. Since $\mathbb{A}^1 \times_k \mathbb{A}^1 \cong \mathbb{A}^2$, we see that each of these open sets $V_i \times_k V_j$ is isomorphic to $\mathbb{A}^2$, and that $X$ is obtained from these sets by gluing appropriately. As a result, to compute the image of $\Delta_X : X \to X\times_k X$, it is enough to compute $\Delta_X(V_1)$ and $\Delta_X(V_2)$. We see that $\Delta_X(V_1)$ contains the origin in $V_1 \times_k V_1$ and $\Delta_X(V_2)$ contains the origin in $V_2 \times_k V_2$, while neither $\Delta_X(V_1)$ nor $\Delta_X(V_2)$ contains the origins in $V_1 \times_k V_2$ or $V_2 \times_k V_1$. (Indeed, by computing these fibered products locally, we see for example that $\Delta_X(V_1) \cap V_1 \times_k V_1$ is isomorphic to $\mathbb{A}^1$ sitting in the diagonal of $\mathbb{A}^1\times_k \mathbb{A}^1$, while $\Delta_X(V_1) \cap V_1 \times_k V_2$ is isomorphic to $\mathbb{A}^1 \setminus \{0\}$ sitting in the diagonal of $\mathbb{A}^1\times_k \mathbb{A}^1$.) This is what is meant by the comment above stating that $X\times_k X$ has "four origins" while $\Delta_X(X)$ contains only two of them. From here, Matt's answer tells us that the subset $\Delta_X(X) \subset X\times_k X$ containing "two origins" is not closed in $X\times_k X$, and so $X$ is not separated over $k$.<|endoftext|> TITLE: Differentiable manifolds as locally ringed spaces QUESTION [26 upvotes]: Let $X$ be a differentiable manifold. Let $\mathcal{O}_X$ be the sheaf of $\mathcal{C}^\infty$ functions on $X$. Since every stalk of $\mathcal{O}_X$ is a local ring, $(X, \mathcal{O}_X)$ is a locally ringed space. Let $Y$ be another differentiable manifold. Let $f\colon X \rightarrow Y$ be a differentiable map. Let $U$ be an open subset of $Y$. For $h \in \Gamma(\mathcal{O}_Y, U)$, $h\circ f \in \Gamma(\mathcal{O}_X, f^{-1}(U))$. Hence we get an $\mathbb{R}$-morphism $\Gamma(\mathcal{O}_Y, U) \rightarrow \Gamma(\mathcal{O}_X, f^{-1}(U))$ of $\mathbb{R}$-algebras. Hence we get a morphism $f^{\#} \colon \mathcal{O}_Y \rightarrow f_*(\mathcal{O}_X)$ of sheaves of $\mathbb{R}$-algebras. It is easy to see that $(f, f^{\#})$ is a morphism of locally ringed spaces. Conversely suppose $(f, \psi)\colon X \rightarrow Y$ is a morphism of locally ringed spaces, where $X$ and $Y$ are differentiable manifolds and $\psi\colon \mathcal{O}_Y \rightarrow f_*(\mathcal{O}_X)$is a morphism of sheaves of $\mathbb{R}$-algebras. Is $f$ a differentiable map and $\psi = f^{\#}$? 
REPLY [22 votes]: Yes: Let $(f,\psi):X\to Y$ be a morphism of locally ringed spaces, where $X$ and $Y$ are smooth manifolds with their sheaves of smooth functions. If $\psi:C^\infty_Y \to f_* C^\infty_X$ is a morphism of sheaves of $\mathbb R$-algebras, then $f$ is smooth and $\psi=f^\#$. Proof. Let $s:U\to \mathbb R$ be a smooth function. The equation $\psi s= s\circ f$ follows from the commutativity of the diagram below. Notice the triangle commutes because there is a unique $\mathbb R$-algebra map $C^\infty_{f(x)}/{\frak m}_{f(x)}\cong \mathbb R \to \mathbb R$. It now follows that $f:X\to Y$ is smooth. Indeed, we know $s\circ f$ is smooth for all real-valued functions $s$ on $Y$, and we may take $s$ to be the coordinate functions of charts on $Y$. QED.<|endoftext|> TITLE: Is there any way to define morphisms between filters in order to get a category whose opposite category would be the category of ideals? QUESTION [5 upvotes]: It's well known that filters and ideals are dual. I would like to see how to express this fact "categorically". I would be very thankful if someone could help me with that. REPLY [2 votes]: The duality between filters and ideals can be seen as Zhen Lin describes, thus answering your question negatively. A somewhat more positive answer (though only somewhat) is to notice that whatever category you concoct from filters, you can also concoct using ideals (in effect, using the duality between them). The resulting categories would then be isomorphic (not dual!). Having said that, the answer to your question could turn out to be a positive one, I'm just not sure. The thing is that there are many ways to define the morphisms in a category whose objects are all pairs $(X,\mathcal F)$ where $X$ is a set and $\mathcal F$ is a filter on it. There seems to be quite a lot of flexibility in the choice of morphisms, so maybe one that suits your needs exists. Just to clarify, there are two commonly considered categories of filters, described in Blass' article. These turn out to be very useful notions of categories (e.g., for one there is a natural notion of tensor product, the other is useful for constructive nonstandard analysis).<|endoftext|> TITLE: Osgood condition QUESTION [10 upvotes]: Let $h$ and $g$ be continuous, non-decreasing and concave functions on the interval $[0,\infty)$ with $h(0)=g(0)=0$ and $h(x)>0$ and $g(x)>0$ for $x>0$, such that both satisfy the Osgood condition $$\int_{0+}\frac{dx}{f(x)}=\infty$$ (with $f$ standing for $h$ or $g$). Does there exist a concave function $F$ such that $F(x)\geq h(x)$ and $F(x)\geq g(x)$ for all $x$, and which satisfies the Osgood condition? REPLY [2 votes]: I think that there is plenty of redundant information in this question. If $f(x)$ is positive and concave on $\mathbb{R}^+$ then it must be continuous (continuity is a consequence of concavity) and non-decreasing, since otherwise, assuming $a<b$ with $f(a)>f(b)$, concavity would force $f(x)$ to become negative for $x$ large enough, contradicting positivity. A concave function is also almost-everywhere differentiable, so for almost every $z\in\mathbb{R}^+$ we have: $$\forall x>z,\quad f(x)\le f'(z)(x-z)+f(z),\qquad f(z),f'(z)>0$$ for the same reasons as above. It follows that: $$\int_{z}^{M+z}\frac{dx}{f(x)}\geq\int_{0}^{M}\frac{dx}{f'(z)\,x+f(z)}=\frac{1}{f'(z)}\log\left(1+\frac{f'(z)}{f(z)}M\right)$$ so the Osgood condition is fulfilled without further assumptions.
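A quick numerical sanity check of the displayed estimate, added here as an aside and not part of the original answer; the concrete concave function $f(x)=\sqrt{x+1}$, the point $z=1$, and the crude midpoint quadrature are assumptions made only for the illustration:

    # Compare int_z^{z+M} dx/f(x) with the tangent-line lower bound
    # (1/f'(z)) * log(1 + f'(z)/f(z) * M) for the concave f(x) = sqrt(x+1).
    import math

    f  = lambda x: math.sqrt(x + 1.0)
    fp = lambda x: 0.5 / math.sqrt(x + 1.0)   # f'(x)

    def integral(z, M, steps=200000):
        """Midpoint rule for the integral of 1/f over [z, z+M]."""
        h = M / steps
        return sum(h / f(z + (i + 0.5) * h) for i in range(steps))

    z = 1.0
    for M in (10.0, 100.0, 1000.0):
        bound = (1.0 / fp(z)) * math.log(1.0 + fp(z) / f(z) * M)
        print(M, integral(z, M), bound)   # integral >= bound, and both grow with M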
It follows that if $g(x)$ and $h(x)$ are positive concave functions on $\mathbb{R}^+$, then $F(x)=g(x)+h(x)$ is greater than both, positive and concave, so it satisfies the Osgood condition.<|endoftext|> TITLE: Proving without reciprocity laws that if $p>0$ a prime such $p=1(5)$ then 5 is a quadratic residue mod $p$. QUESTION [6 upvotes]: I've done a similar problem where $p=1(3)$ and showed that $-3$ is a quadratic residue modulo $p$. These problems are sledgehammered by reciprocity laws, so I am trying to prove it directly using the fact that $U(\mathbb{Z}/p\mathbb{Z})$ is cyclic of order $p-1$ and since $p=3k+1$ we know the order of the group is a multiple of $3$ and therefore it has an element of order 3, say $r$. We know that $4+4r+4r^2=0 (p)$ if we multiply the LHS by $r$ which is not a unit we get the same thing back. From here we just rearrange and get $4r^2+4r+1=-3(p)$ and the LHS is just $(2r+1)^2$ so $-3$ is a residue. I've been trying to do the same thing for 5, but I can't get it to work. That is, starting from $1+k+k^2+k^3+k^4=0(p)$ where $k$ is an element of order five in $U$, getting something of the form $(f(k))^2=5(p)$ where $f(x)$ is a polynomial with integer coefficients. REPLY [6 votes]: Hint : Using a fifth root of unity try to construct $\sqrt{5}$ using an algebraic expression. As an example of what this means, if $\omega$ is a primitive third root of unity we have $\omega - \omega^2 = \pm \sqrt{3}$. If you know what the discriminant of a polynomial is, trying finding the discriminant of the $5^{th}$ cyclotomic polynomial to help you. Or you can look to Gauss Sums to find an expression as well. In case you decide to give up, try to show $(\omega - \omega^2 - \omega^3 + \omega^4)^2 = 5$. The expression inside the square is just a Gauss sum.<|endoftext|> TITLE: average order of $\sum\limits_{\substack{1\le k\le n \\ (k,n)=1}} \frac{1}{k}$ QUESTION [8 upvotes]: Introduce $$\varrho(n) = \sum\limits_{\substack{1\le k\le n \\ (k,n)=1}} \frac{1}{k}.$$ The following thread at math.stackexchange.com proposes to analyse the average order of $\varrho(n)$, i.e. $$\frac{1}{n} \sum_{k=1}^n \varrho(k).$$ I have tried to duplicate this calculation but I don't arrive at the same result. My question is, which one is right, the original post or my findings. My calculation follows. First we need an identity for $\varrho$ that will prove very useful later on. 
Observe that $$ \varrho(n) + \sum_{\substack{d\mid n \\ d>1}} \sum^n_{\substack{k=1 \\ (k, n)=d}} \frac{1}{k} = H_n.$$ Now the LHS is $$ \varrho(n) + \sum_{\substack{d\mid n \\ d>1}} \sum^{n/d}_{\substack{m=1 \\ (m, n/d)=1}} \frac{1}{md} = \varrho(n) + \sum_{\substack{d\mid n \\ d>1}} \frac{1}{d} \varrho\left(\frac{n}{d}\right) = \sum_{d\mid n} \frac{1}{d} \varrho\left(\frac{n}{d}\right).$$ Switching to Dirichlet convolutions, we have $$\varrho \star \frac{1}{n} = H_n \sim \log n + \gamma + \frac{1}{2n}.$$ With $$ A(s) = \sum_{n\ge 1}\frac{\varrho(n)}{n^s}$$ this gives $$ A(s) \zeta(s+1) \sim -\zeta'(s) + \gamma \zeta(s) + \frac{1}{2} \zeta(s+1)$$ or $$ A(s) \sim \frac{1}{\zeta(s+1)} \left( -\zeta'(s) + \gamma \zeta(s) \right) + \frac{1}{2}.$$ To find the average order use the Mellin-Perron type integral $$\int_{3/2-i\infty}^{3/2+i\infty} A(s) n^s \frac{ds}{s} = -\frac{1}{2} \varrho(n) + \sum_{k=1}^n \varrho(k)$$ and shift to the left to pick up the residue at $s=1$, getting $$ \frac{6}{\pi^2} n \log n + \left(\frac{6(\gamma-1)}{\pi^2} - \frac{36}{\pi^4} \zeta'(2)\right) n + O(\log n)$$ so that the average order is $$ \frac{1}{n} \sum_{k=1}^n \varrho(k) \sim \frac{6}{\pi^2} \log n + \left(\frac{6(\gamma-1)}{\pi^2} - \frac{36}{\pi^4} \zeta'(2)\right) + O\left(\frac{\log n}{n}\right).$$ Which one is right? Addendum. In view of Eric Naslunds excellent comment below maybe we can ask whether anyone is able to supply those missing bounds on the rest of the contour for the Mellin-Perron integral, thereby turning this question into a useful reference. Here is a MSE challenge of the same type. REPLY [2 votes]: Your calculation is correct. When I reached the sum $$\sum_{n=1}^\infty \frac{\mu(n)\log n}{n^2}$$ I incorrectly wrote that it equals $-\zeta^{'}(2)$, rather than $\frac{\zeta^{'}(2)}{\zeta(2)^2}$. To see why it is $\frac{\zeta^{'}(2)}{\zeta(2)^2}$,simply take the derivative of $$\frac{1}{\zeta(s)}=\sum_{n=1}^\infty \mu(n)n^{-s}.$$ Remark: I note again that your proof is missing most of the critical details. When using the residue theorem to evaluate the sum of a multiplicative function, the majority of the work appears when bounding the other three parts of the contour, something which you have taken for granted. Evaluating the residue is the easiest part, and only a minor detail. You must prove that this is actually the asymptotic. This is why I shied away from the residue theorem in my other answers. It provides a nice and fast heuristic, allowing us to see what the answer should be, but to actually prove the results and find the correct error term requires bounds on zeta and lemmas I didn't want to use. It is not always trivial to guess the correct error term when using residue methods.<|endoftext|> TITLE: Almost A Vector Bundle QUESTION [11 upvotes]: I'm trying to get some intuition for vector bundles. Does anyone have good examples of constructions which are not vector bundles for some nontrivial reason. Ideally I want to test myself by seeing some difficult/pathological spaces where my naive intuition fails me! Apologies if this isn't a particularly well-defined question - hopefully it's clear enough to solicit some useful responses! REPLY [5 votes]: Here are two ways one might break the definition a vector bundle. If one is tricky, one might define a fiber bundle with fiber $\Bbb{R}^n$ that's not a vector bundle, if the structure group isn't linear. 
For instance, you could bundle $\Bbb{R}$ over the circle but define charts on a two-set open cover such that the transition function would send $(s,r)\in S^1\times\Bbb{R}$ to $(s,r^3)$-generally, bring in any nonlinear homeomorphism of the fiber to itself. This particular example might not qualify as non-trivial, but I don't know any very legitimate cases of this. Something perhaps a bit more interesting: the condition that the fiber of a (fiber or) vector bundle be constant over the whole base space is pretty strong. On a manifold with boundary, one can define a degenerate tangent "bundle" which is only a half-space on the boundary, which could be quite useful but doesn't qualify as a vector bundle. Similarly if your almost-manifold has degenerate dimension somewhere for some other reason, as e.g. $z=|x^3|$ embedded in $\Bbb{R}^3,$ which is the union of a surface of two connected components with a $1$-manifold, specifically the line $x=z=0$. You could construct something close to a bundle as the union of the tangent bundle on the $2$-D part and the lines perpendicular tot he $1$-D part, and it wouldn't be a vector bundle.<|endoftext|> TITLE: Intuition behind Cantor-Bernstein-Schröder QUESTION [19 upvotes]: The book I am working from (Introduction to Set Theory, Hrbacek & Jech) gives a proof of this result, which I can follow as a chain of implications, but which does not make natural, intuitive sense to me. At the end of the proof, I found myself quite confused, and had to look carefully at the build-up to see how the conclusion followed. I get the steps, now - but not the intuition. The authors took sets $X$ and $Y$ and assumed injections $f: X \rightarrow Y$, $g: Y \rightarrow X$. Since $X \sim g[f[X]]$ and $X \supseteq g[Y] \supseteq g[f[X]]$, and $Y \sim g[Y]$, the authors went to prove the lemma that $A \sim A_1, A \supseteq B \supseteq A_1 \implies A \sim B$. For the lemma, they defined recursive sequences $\{A_n\}_{n \in \omega}, \{B_n\}_{n \in \omega}$ by $A_0 = A, A_{n+1} = f[A],$ $ B_0 = B, B_{n+1} = f[B].$ Since $A_0 \supseteq B_0 \supseteq A_1 \implies f[A_0] \supseteq f[B_0] \supseteq f[A_1]$, we get from induction that $A_{n+1} \subseteq A_n$. Putting $\{C_n\}_{n \in \omega} = \{A_n - B_n\}_{n \in \omega},$ they noted that $C_{n+1} = f[C_n]$ (since $A_n \supseteq B_n$ inductively, $f[A_n - B_n] = f[A_n] - f[B_n]$). Putting $$ C = \bigcup_{n=0}^\infty C_n, \text{ } D = A - C,$$ they noted that $f[C] = \bigcup_{n=1}^\infty C_n$, and that $h(x): A \rightarrow f[C] \cup D$ can be defined by sending $x \in C \rightarrow f(x),$ and $x \in D \rightarrow x$. It is clear that $h$ is one-to-one and onto. Inductively, for $n > 0$, $C_0 \cap C_n = \varnothing$; it follows that $C_0 \cap \bigcup_{n=1}^\infty C_n = C_0 \cap f[C] = \varnothing$. We know that $C_0 \cup f[C] \cup D$ = A, and since all three sets are disjoint, we may conclude that $f[C] \cup D = A - C_0 = A - (A - B) = B$. Thus, our bijection $h$ maps $A$ to $B$, as we wanted. What is the intuition here? What were H&J, or Cantor, Bernstein, etc. thinking when they went to prove this - the "high-level" idea? Is there a clearer proof, in the sense of thought process (not necessarily shorter)? REPLY [2 votes]: As you pointed out, to prove Cantor-Schroeder-Bernstein theorem, one needs to prove the following lemma: If $A_1 \subset B \subset A$ and $|A_1|=|A|$, then $|B|=|A|$. According to the hypothesis, there exists some one-to-one mapping $f$ from $A$ onto $A_1$. So, we need a one-to-one mapping $g$ from $A$ onto $B$. 
How can we find that? One may think that we should embed the set $B$ in the set $A$ by inclusion so that we find the required mapping as the inverse of the inclusion mapping, that is, the mapping$$h:A \to B \\ h(x)=x.$$This mapping is one-to-one, but it cannot be defined on the whole set $A$ because for $x \in A-B$ $h(x)=x$ is not contained in the set $B$. One may think that we should use the mapping $f$ as our required mapping, that is,$$k:A \to B \\ k(x)=f(x).$$This mapping is one-to-one, but it cannot cover the whole set $B$ because the range of the function $f$ is the set $A_1 \subset B$ and there may be some $x\in B$ which is not in the set $A_1$. As we see there are two extreme approaches for finding the required mapping; the first one covers the whole set $B$, and the second one covers the whole set $A$. So, it is intuitively expectable that to find the required mapping we should use both the mappings (approaches) simultaneously so that both the sets $A$ and $B$ are covered in a one-to-one manner. But, there exists a problem. If we use both the mappings simultaneously, for example $g: A \to B \quad g(x)=\begin{cases}f(x) & \text{if } x\in C; \\ x & \text{if } x \in A-C \end{cases}$ for some subset $C \subset A$, then we may miss either the onto property or the one-to-one property, because the ranges of the pieces of $g$ may overlap. So, our original problem is reduced to finding some subset $C \subset A$ such that the ranges of the pieces of the mapping are disjoint and the union of them is equal to the set $B$. Now, how to find such a $C$? Here is an idea. Since the function $h(x)=x$ cannot be defined on $C_0=A-B$, as explained above, let us map this subset of $A$ by the function $k(x)=f(x)$ into the set $B$. So, we obtain the mapping$$g_0(x)= \begin{cases}f(x) & \text{if } x \in C_0; \\ x & \text{if }x \in A-C_0 \end{cases}.$$But, we have missed the one-to-one property because the ranges of the pieces overlap (In fact, $f[C_0]$ is contained in the range of the second one, since $f[C_0] \subset f[A] \subset A-C_0$). So, we need to remove the problematic points $C_1=f[C_0]$ from the domain of the second piece (since the domain and the range of the function $h(x)=x$ are the same) to retain the one-to-one property. However, since we need to define the mapping $g$ on the whole set $A$, we need to add such points to the domain of the first piece. So, we obtain the mapping$$g_1(x)= \begin{cases}f(x) & \text{if } x \in C_0 \cup C_1; \\ x & \text{if }x \in A-(C_0 \cup C_1) \end{cases}.$$But, we have missed the one-to-one property because the ranges of the pieces overlap (In fact, $f[C_1]$ is contained in the range of the second one, since $f[C_1]=f^2[C_0] \subset f^2[A] \subset A-(C_0 \cup C_1)$). So, we need to remove the problematic points $C_2=f[C_1]$ from the domain of the second piece (since the domain and the range of the function $h(x)=x$ are the same) to retain the one-to-one property. However, since we need to define the mapping $g$ on the whole set $A$, we need to add such points to the domain of the first piece. So, we obtain the mapping$$g_2(x)= \begin{cases}f(x) & \text{if } x \in C_0 \cup C_1 \cup C_2; \\ x & \text{if }x \in A-(C_0 \cup C_1 \cup C_2) \end{cases}.$$ $$\vdots \qquad \vdots \qquad \vdots$$ But, we have missed the one-to-one property because the ranges of the pieces overlap (In fact, $f[C_{n-1}]$ is contained in the range of the second one, since $f[C_{n-1}]=f^n[C_0] \subset f^n[A] \subset A-(C_0 \cup C_1 \cup \cdots C_{n-1})$). 
So, we need to remove the problematic points $C_n=f[C_{n-1}]$ from the domain of the second piece (since the domain and the range of the function $h(x)=x$ are the same) to retain the one-to-one property. However, since we need to define the mapping $g$ on the whole set $A$, we need to add such points to the domain of the first piece. So, we obtain the mapping$$g_n(x)= \begin{cases}f(x) & \text{if } x \in C_0 \cup C_1 \cup \cdots \cup C_n; \\ x & \text{if }x \in A-(C_0 \cup C_1 \cup \cdots \cup C_n) \end{cases}.$$ $$\vdots \qquad \vdots \qquad \vdots$$ This pattern motivates us to define the mapping $g$ as follows.$$g(x)=\begin{cases}f(x) & \text{if } x \in C; \\ x & \text{if } x \in A-C \end{cases}, \qquad C= \bigcup_{n=0}^{\infty }C_n$$Noting that$f[C]=\bigcup_{n=1}^{\infty }C_n$, we can easily see that the mapping $g$ is one-to-one because each of its pieces is and the ranges of the pieces are disjoint and it is onto the set $B$. Addendum Looking at how the $C_n$'s are constructed, one may think that the existence of the set $C$ (and so the proof of the theorem) relies on the existence of some infinite set like $\mathbb{N}$ to be able to define the sets $C_n$'s recursively. However, in this section we show that such a view is not correct. In fact, to obtain the bijective mapping $g$, we need some sets $C$ such that the values of the function $f$ at the points of $f[C]$ do not lie outside of $f[C]$. The existence of such a set can be guaranteed by applying some fixed-point theorem (Knaster-Tarski Theorem) to some monotone function of sets, as follows. Let $F: \mathcal{P}(A) \to \mathcal{P}(B)$ be monotone, i.e., if $X \subset Y$, then $F(X) \subset F(Y)$ ($\mathcal{P}(A)$ is the power set of $A$). Consider the set $T= \{ X \subset A \mid F(X) \subset X \}$. It can be easily seen that $\overline{X}=\bigcap T$ is the least fixed point of $F$ (Proof: $A \in T$, so $T \neq \varnothing$ and so $\overline{X}=\bigcap T$ can be defined. Since $F$ is monotone and for any $X \in T$ we have $\bigcap T \subset X$, $F(\overline{X}) \subset F(X)$ for every $X \in T$, so $\overline{X} \in T$. Since $F$ is monotone and $F(\overline{X}) \subset \overline{X}$, we have $F(F(\overline{X})) \subset F(\overline{X})$, so $F(\overline{X}) \in T$. However, since $\overline{X} \subset X$ for every $X \in T$, we have $\overline{X} \subset F(\overline{X})$. Thus, $F(\overline{X})=\overline{X}$. If $F$ has some other fixed points $X'$, i.e., $F(X')=X'$, then $X' \in T$. Since $\overline{X} \subset X$ for every $X\in T$, we conclude that $\overline{X}=\bigcap T$ is the least fixed point of $F$). Consider the function $F(X)=(A-B)\cup f[X]$. Clearly, it is monotone, so the set $C=\overline{X}$ defined above is its least fixed point. Now, we can easily see that the mapping $g:A \to B$ defined by$$g(x)=\begin{cases}f(x) & \text{if } x\in C; \\ x & \text{if } x \in A-C \end{cases}$$ is one-to-one and onto the set $B$ (We only need to note that $$\begin{align}f[C] \cup (A-C) & =f[C] \cup (A-((A-B) \cup f[C])) \\ & = f[C] \cup ((A-(A-B)) - f[C]) \\ & =f[C] \cup (B-f[C]) \\ & =B \end{align}$$(Please note that in the above calculation we have used the fact that $f[C] \subset A_1 \subset B \subset A$) and$$\begin{align}f[C] \cap (A-C) & =f[C] \cap (A-((A-B) \cap f[C])) \\ & = f[C] \cap ((A-(A-B)) - f[C]) \\ & =f[C] \cap (B-f[C]) \\ & = \varnothing \end{align}$$(Please note that in the above calculation we have used the fact that $f[C] \subset A_1 \subset B \subset A$)). 
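Before turning to the recursive description below, here is a small computational illustration of the construction, added as an aside and not part of the original answer. The concrete data $A=\mathbb N$, $B=\mathbb N\setminus\{0\}$ and $f(n)=2n+2$ (so that $A_1=\{2,4,6,\dots\}\subset B\subset A$) are assumptions chosen only to make the sets $C_n$ and the resulting map $g$ visible:

    # C = C_0 u C_1 u ... with C_0 = A - B = {0} and C_{k+1} = f[C_k];
    # g = f on C and the identity off C should then be a bijection A -> B.
    def f(n):
        return 2 * n + 2           # bijection from A = {0,1,2,...} onto {2,4,6,...}

    C, layer = set(), {0}          # layer starts as C_0 = A - B = {0}
    while max(layer) <= 10**6:     # finite truncation of the infinite union
        C |= layer
        layer = {f(n) for n in layer}

    def g(n):
        return f(n) if n in C else n

    window = range(0, 1000)
    image = [g(n) for n in window]
    assert len(set(image)) == len(image)   # g is injective on this window
    assert 0 not in image                  # 0 = A - B is never hit, as required
    print(sorted(C)[:6])                   # [0, 2, 6, 14, 30, 62]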
Now, the least fixed point of the function $F$ can be obtained recursively as follows. Clearly the function $F$ is continuous, meaning that for any nondecreasing sequence of subsets of $A$, $\langle X_i \mid i \in \mathbb{N} \rangle$, $X_i \subset X_j$ whenever $i \le j$, we have$$F \left ( \bigcup_{i \in \mathbb{N}}X_i \right ) = \bigcup_{i \in \mathbb{N}} F \left ( X_i \right ).$$Let us define recursively $X_0=\varnothing$, $X_{i+1}=F(X_i)$ and then define $\overline{X}=\bigcup_{i \in \mathbb{N}}X_i$. Clearly, the $\langle X_i \mid n \in \mathbb{N} \rangle$ is a nondecreasing sequence of subsets of $A$. So we have$$\begin{align}F \left ( \bigcup_{i \in \mathbb{N}} X_i \right ) & = \bigcup_{i \in \mathbb{N}}F(X_i) \\ & = \varnothing \cup F(X_0) \cup F(X_1) \cup \cdots \\ & = X_0 \cup X_1 \cup X_2 \cup \cdots \\ & = \bigcup_{i \in \mathbb{N}} X_i. \end{align}$$Thus, $\overline{X}=\bigcup_{i \in \mathbb{N}}X_i$ is a fixed point of $F$. Now, if $X'$ is another fixed point of $F$, since $F$ is monotone and $\langle X_i \mid i \in \mathbb{N} \rangle$ is a nondecreasing sequence of subsets of $A$, we have$$\varnothing \subset X' \quad \Rightarrow \quad X_1=F(\varnothing ) \subset F(X')=X' \\ X_1 \subset X' \quad \Rightarrow \quad X_2=F(X_1) \subset F(X')=X' \\ \vdots \qquad \vdots \qquad \vdots \\ X_{n-1} \subset X' \quad \Rightarrow \quad X_n =F(X_{n-1}) \subset F(X')=X' \\ \vdots \qquad \vdots \qquad \vdots$$So, $\overline{X}=\bigcup_{i \in \mathbb{N}} \subset X'$. Thus, $\overline{X}$ is the least fixed point of $F$. Hence, we conclude that the fixed point of the function $F(X)=(A-B) \cup f[X]$, $C$, must be of the form$$\begin{align}C & =(A-B) \cup ((A-B) \cup f[A-B]) \cup ((A-B) \cup f[A-B] \cup f[f[A-B]]) \cup \cdots \\ & = C_0 \cup (C_0 \cup C_1) \cup (C_0 \cup C_1 \cup C_2) \cup \cdots \\ & = \bigcup_{i \in \mathbb{N}}C_n\end{align}$$(Please remember that $f$ is injective), which was already obtained from our original argument. Therefore, the existence of the set $C$ can be confirmed without needing existence of some infinite set like $\mathbb{N}$. Now, if $A$ is a finite set, then $B$ must be equal to $A$ and so $C=A-B=\varnothing$. But, if $A$ is an infinite set (so the existence of some infinite set has been already assumed in our theory), then the set $C$ is constructed from an infinite chain of sets, as explained above.<|endoftext|> TITLE: Divergence of $\sum\limits_n1/\max(a_n,b_n)$ QUESTION [11 upvotes]: Given two positive and (strictly) monotone increasing sequences, $a_n$ and $b_n$ such that $\displaystyle \sum_{n=1}^{\infty}\dfrac1{a_n}$ and $\displaystyle \sum_{n=1}^{\infty}\dfrac1{b_n}$ diverge, does $\displaystyle \sum_{n=1}^{\infty}\dfrac1{\max(a_n,b_n)}$ diverge? For sequences, where $a_n > b_n$ or $a_n < b_n$ eventually, the result follows immediately. But I am unable to see what happens when $a_n$ and $b_n$ take turns to overtake each other as $n \to \infty$? Essentially this question stems from the question, Osgood condition , where I wanted to consider the function $F(x) = g(x) + h(x)$ and lower bound $\displaystyle \int\dfrac{dy}{F(y)}$ by $\displaystyle \int \dfrac{dy}{2\max(h(y),g(y))}$ in an attempt to show that $\displaystyle \int\dfrac{dy}{F(y)}$ diverges. REPLY [7 votes]: This question is asked now and then. 
The answer is that the series with the maxima in the denominators can converge despite both initial series diverge and the reason is exactly as you put it: $a_k$ and $b_k$ can overtake each other infinitely many times and accumulate large sums when at rest from the work as maxima. Formalizing it isn't hard and did did an excellent job in this respect. The question is whether we can make it "obvious", i.e., to see it in our heads without touching pen or paper at all. Of course, we can talk about decreasing sequences $a_k,b_k>0$ and consider $\sum_k\min(a_k,b_k)$. Now just imagine two people walking. The condition is that neither of them is allowed to increase his speed at any time and that the slowest one carries a stick, which is magically teleported to the other person when he becomes slower. The question is whether both people can walk to infinity while the stick will travel only finite distance on foot. Now the strategy should become clear. Fix any upper bound $v$ for speeds and any upper bound $d$ for the distance the stick is allowed to travel on foot. Let the first person walk at speed $v$ until he goes the distance $1$. The second person should crawl slowly (but steadily) all that time at the speed $dv/2$, so he travels only distance $d/2$ with the stick. At that moment, the first person slows down enormously to the speed $d^2v/4$, so while the second person continues to go at the speed $dv/2$ and moves distance $1$, the first person (who now has the stick) goes only $d/2$. By the end of this cycle, both people moved by at least $1$ but the stick traveled only distance $d$ on foot. We also end up with the new bound for the speed, which is $vd^2/4$. Now repeat the cycles making the allowed distances for the stick smaller and smaller so that the corresponding series converges. We do not care how fast the speed bounds decay because no matter what the bound is, we still can cover the unit distance in some finite time. During each cycle, each person moves by distance $1$ or more, so both ultimately walk away. However, the stick goes only finite distance on foot.<|endoftext|> TITLE: Are these two predicate statements equivalent or not? QUESTION [6 upvotes]: $\exists x \forall y P(x,y) \equiv \forall y \exists x P(x,y)$ I was told they were not, but I don't see how it can be true. REPLY [2 votes]: Not the same. Compare (RHS) "Every person has some father or other" with (LHS) "There's some guy who fathered every person in the human race"<|endoftext|> TITLE: intrinsic proof that the grassmannian is a manifold QUESTION [13 upvotes]: I was trying to prove that the grassmannian is a manifold without picking bases, is that possible? Here's what I've got, let's start from projective space. Take $V$ a vector space of dimension n, and $P(V)$ its projective space. To imitate the standard open sets when you have a basis, consider a hyperplane $H$. We can form a (candidate open) subset $U_H$ consisting of those lines $L \in P(V)$ such that $L \oplus H = V$. For the Grassmannian you can proceed similarly, say you want to construct $Gr(d,V)$. Take a subspace H of dimension $c = n - d$, and consider the set $U_H$ of those subspaces $W \in Gr(d,V)$ such that $W \oplus H = V$. I'm not really sure how to proceed after this. Any hints? the main problem is that $U_H$ should be isomorphic to affine space but I can't seem to cook up the natural candidate for it. REPLY [10 votes]: let $n=dim V$. 
Let $A$ be a $(n-d)$-dimensional subspace of $V$ and let $\mathcal U(A)$ be the subset of the Grassmanian of all subspaces $B$ of dimension $d$ such that $A\oplus B=V$. If $B$ is any element of $\mathcal (U)$, then there is a bijection $\hom(A,B)\to\mathcal U(A)$ such that the image of a linear map $\phi:A\to B$ is the subspace $B_\phi=\{a+\phi(a):a\in A\}$. Then the set of all $\mathcal U(A)$ with those bijections, for all $A$, is an atlas for $G(d,V)$. You do need to check that transition functions are smooth, though.<|endoftext|> TITLE: Find the volume of the largest right circular cone that can be inscribed in a sphere of radius r? QUESTION [6 upvotes]: I checked this question but didn't fully understand it. I know that the volume of a right circular cone is $V = \frac{1}{3}\pi x^2h$ I know that I must take the first derivative and set it equal to zero, which will find the maximum. My problem is how to deal with the variable $h$, the height? How can I rewrite that in terms of $x$ or $r$. I have tried to implicitly differentiate this equation, but that wasn't helpful for me. REPLY [2 votes]: With given radius r of a sphere let the inscribed cone have height h then remaining length without radius is (h-r) let R be radius of cone then there we get a right angle triangle with r as hypotanious R as adjecent and (h-r) as opposite side. Now by pythagorus theorem ((h-r)^2)+(R^2)=(r^2). Now express R in terms of h and r. We get (R^2)=(2hr-hh). Substitute in the volume of cone equation V=(pi/3)(R^2)h. Now V=(pi/3)(2hr-hh)h. Differentiate wrt h with r as constant because radius cannot be variable and substitute to zero to get maxima. We get h=(4r/3).<|endoftext|> TITLE: Why are there $\frac{(2n)!}{2^nn!}$ ways to break $2n$ people into partnerships? QUESTION [6 upvotes]: Apparently, there are $\frac{(2n)!}{2^nn!}$ ways to break $2n$ people into partnerships. Why? An explanation reads that there are $2n!$ ways to line people up. If we just pair adjacent people in the line up, we would have overcounted. So we must divide by $2^n n!$. My question is why do we divide by $2^n n!$ specifically to adjust for overcounting? Furthermore, this problem can apparently be solved by taking a skip factorial of odd values: $(2n-1)(2n-3)(2n-5) ... (5)(3)(1)$ Why is that so? What does this skip factorial have to do with the problem? REPLY [12 votes]: To see where the double factorial comes from, imagine numbering the people from $1$ through $2n$. There are $2n-1$ possible choices for $1$’s partner; those two are now out of the pool. The smallest unpaired number is now either $3$ or $2$, depending on whether $1$’s partner is $2$ or not; whichever it is, call it $m$. Now choose a partner for $m$; it can’t be $1$ or $1$’s partner, and it can’t be $m$, so there are $2n-3$ choices. Continue in this fashion. After you’ve formed $k$ pairs, let $m$ be the smallest unpaired number, and find a partner for $m$; it can’t be one of the $2k$ people already paired up, and it can’t be $m$, so you have $2n-2k-1=2n-(2k+1)$ choices. By the time you get down to the last two people, you’ll have only one choice. The total number of ways of making the choices is therefore $$(2n-1)\cdot(2n-3)\cdot\dots\cdot3\cdot1=(2n-1)!!\;.$$ Now let’s look at the other explanation. We want to see how many of the $(2n)!$ ways of lining up the people and pairing adjacent ones lead to the same set of $n$ pairs. 
Let’s say that we number them $1$ through $2n$ from left to right, so that for $k=1,\dots,n$ person $2k-1$ is paired with person $2k$: $$(p_1p_2)(p_3p_4)\dots(p_{2n-1}p_{2n})\;.\tag{1}$$ We can shuffle the $n$ pairs as pairs in any way we like, and it won’t change the partnerships: the lineup $$(p_3p_4)(p_1p_2)\dots(p_{2n-1}p_{2n})(p_7p_8)\tag{2}$$ produces the same partnerships as the lineup in $(1)$. There are $n!$ ways of shuffling the pairs while keeping them completely intact, as in $(2)$. But it also doesn’t change the partnerships if we reverse some pairs: $$(p_4p_3)(p_1p_2)\dots(p_{2n}p_{2n-1})(p_7p_8)$$ has the same partnerships as the lineup in $(2)$, even though I reversed the placement of $p_3$ and $p_4$ within their pair, as well as that of $p_{2n-1}$ and $p_{2n}$ within theirs. Thus, we have $n!$ ways to shuffle the pairs as pairs, and for each pair a two-way choice of whether or not to reverse the order of its members, for a grand total of $2^nn!$ different lineups that result in the same partnership. Thus, $(2n)!$ really does overcount by a factor of $2^nn!$, and the correct answer must be $\frac{(2n)!}{2^nn!}$. Finally, we can verify that the two answers are the same: $$\begin{align*} \frac{(2n)!}{2^nn^!}&=\frac{\Big((2n)\cdot(2n-2)\cdot(2n-4)\cdot\ldots\cdot2\Big)\Big((2n-1)\cdot(2n-3)\cdot\ldots\cdot3\cdot1\Big)}{2^nn!}\\ &=\frac{2n\cdot2(n-1)\cdot2(n-2)\cdot\ldots\cdot2(1)}{2^nn!}\cdot(2n-1)!!\\ &=\frac{2^nn!}{2^nn!}\cdot(2n-1)!!\\\\ &=(2n-1)!!\;. \end{align*}$$<|endoftext|> TITLE: Integer Solutions to $x^2+y^2=5z^2$ QUESTION [11 upvotes]: I'm looking for a formula to generate all solutions $x$, $y$, $z$ for $x^2 + y^2 = 5z^2$. Any advice? REPLY [3 votes]: I couldn't but notice the pattern $x^2 + y^2 = 5 z^2 = z^2 + (2z)^2 $; owing to $(am+bn)^2 + (an-bm)^2 = (an+bm)^2 + (am-bn)^2 $, and so if we let $x=am+bn , y=an-bm , z=am-bn$ , we need $an + bm = 2(am-bn)$ , i.e. $ a(n - 2m) + b(m+2n) =0$ which is possible by $a = (m+2n) k , b=(2m-n)k $ , where $a,b,m,n,k$ $∈$ $\Bbb Z$ So, the solutions are :- $x = k( m^2 + 4mn - n^2 )$ ; $y = 2k(mn + n^2 - m^2 )$ ; $z = k( m^2 + n^2 )$ these are same as what "dinoboy" seems to have obtained by comparatively more effort.<|endoftext|> TITLE: Distribution of the sum of a multinomial distribution QUESTION [7 upvotes]: I have distilled an error analysis problem into the following: I have a multinomial distribution, $X$, consisting of $n$ independent trials where each trial takes on the values $\{0,1,\ldots,k-1\}$ with uniform probability of $\frac{1}{k}$. Now I am interested in finding the distribution of the sum of all the outcomes in the $n$ trials, but not sure how to approach the problem. I suspect it has something to do with partition functions. Would be glad if the relevant probability distribution function in MATLAB could also be pointed out. REPLY [5 votes]: Since you mentioned in a comment that $n=4$ in your case, here's a way to derive the distribution for small values of $n$. The number of ways of writing $m$ as a sum of $n$ values from $0$ to $k-1$ is the coefficient of $x^m$ in $$ (1+x+\dotso+x^{k-1})^n=\left(\frac{1-x^k}{1-x}\right)^n=(1+x+x^2+\dotso)^n\sum_{j=0}^n\binom nj(-x^k)^j\;. $$ Thus you just have to place the binomial coefficients in a sequence at distances of $k$ and then sum the sequence $n$ times; e.g. 
for $n=4$ and $k=3$: $$ \begin{array}{rrrrrrrrr} 1&0&0&-4&0&0&6&0&0&-4&0&0&1\\ 1&1&1&-3&-3&-3&3&3&3&-1&-1&-1\\ 1&2&3&0&-3&-6&-3&0&3&2&1\\ 1&3&6&6&3&-3&-6&-6&-3&-1\\ 1&4&10&16&19&16&10&4&1 \end{array} $$ Then dividing through by the total number $k^n=81$ of possibilities gives you the probabilities for the values of the sum.<|endoftext|> TITLE: Is my understanding of antisymmetric and symmetric relations correct? QUESTION [19 upvotes]: So I'm having a hard time grasping how a relation can be both antisymmetric and symmetric, or neither. Are my examples correct? symmetric & antisymmetric R ={(1,1),(2,2),(3,3)} not symmetric & not antisymmetric R = { (1,2),(2,1),(3,4) } REPLY [27 votes]: Here’s a way to think about symmetry and antisymmetry that some people find helpful. A relation $R$ on a set $A$ has a directed graph (or digraph) $G_R$: the vertices of $G_R$ are the elements of $A$, and for any $a,b\in A$ there is an edge in $G_R$ from $a$ to $b$ if and only if $\langle a,b\rangle\in R$. Think of the edges of $G_R$ as streets. The properties of symmetry, antisymmetry, and reflexivity have very simple interpretations in these terms: $R$ is reflexive if and only if there is a loop at every vertex. (A loop is an edge from some vertex to itself.) $R$ is symmetric if and only if every edge in $G_R$ is a two-way street or a loop. Equivalently, $G_R$ has no one-way streets between distinct vertices. $R$ is antisymmetric if and only every edge of $G_R$ is either a one-way street or a loop. Equivalently, $G_R$ has no two-way streets between distinct vertices. This makes it clear that if $G_R$ has only loops, $R$ is both symmetric and antisymmetric: $R$ is symmetric because $G_R$ has no one-way streets between distinct vertices, and $R$ is antisymmetric because $G_R$ has no two-way streets between distinct vertices. To make a relation that is neither symmetric nor antisymmetric, just find a digraph that has both a one-way street and a two-way street, like this one: $$0\longrightarrow 1\longleftrightarrow 2$$ It corresponds to the relation $R=\{\langle 0,1\rangle,\langle 1,2\rangle,\langle 2,1\rangle\}$. on $A=\{0,1,2\}$.<|endoftext|> TITLE: Example of integration over path on Riemann surface QUESTION [11 upvotes]: Let $X$ be a Riemann surface $$ X = \left\{ (z,w) \in \mathbb{C}^2 \mid z^3 + w^3 = 1 \right\}. $$ Then we have $z^2 dz + w^2 dw = 0$ and we can define a holomorphic form $\omega$ on $X$ by $$ \omega = \left\{ \begin{array}{rl} \frac{dz}{w}, & w \neq 0 \\ -\frac{wdw}{z^2}, & z \neq 0 \end{array} \right. $$ Let $j = e^{\frac{2 i\pi}{3}}$ and $\gamma$ be a path on $\mathbb{C}_{z}$ given by the following image: $\hskip4cm $ Let $\Gamma$ denote a closed path on $X$ obtained by lifting $\gamma$ and such that $\Gamma$ passes through $(0,1)$, and let $\Gamma_j$ denote a lifting of its part $\gamma_j$. $\hskip3cm $ We will integrate $\omega$ over $\Gamma$. Let's start from $z = 0$ and move right. We have $$ \int\limits_{\Gamma_1} \omega = \int\limits_{0}^{1-\varepsilon} \frac{dt}{(1-t^3)^{1/3}} =: I_{\varepsilon}\;. $$ Next we compute $\int_{\Gamma_2} \omega$. We can use the parametrisation $z = 1 + \varepsilon e^{i \varphi}$, $\phi \in [-\pi,\pi]$ and $$w = (1-(1+\varepsilon e^{i \varphi})^3)^{1/3} = (-\varepsilon e^{i \varphi}(3 + 3 \varepsilon e^{i \varphi} + \varepsilon^2 e^{2i \varphi}))^{1/3}\;.$$ For sufficiently small $\varepsilon$ we have $|w| \geqslant |\varepsilon|^{1/3} C_{1}$ on $\Gamma_2 = \Gamma_2(\varepsilon)$. 
Then $$ \left| \int\limits_{\Gamma_2} \omega \right| \leqslant \int\limits_{-\pi}^{\pi} \frac{| \varepsilon i e^{i \varphi} | dt}{|w|} \leqslant C_{2} \varepsilon^{2/3} \to 0\text{ when }\varepsilon \to 0. $$ Using same techniques we obtain that $\int_{\Gamma_5} \omega$ and $\int_{\Gamma_8}\omega$ tend to $0$ when $\varepsilon \to 0$. Now we compute $\int_{\Gamma_3} \omega$. Let's remark that $w = \sqrt[3]{1-z^3} = \sqrt[3]{(1-z)(j-z)(j^2-z)} = \sqrt[3]{1-z}\sqrt[3]{(j-z)(j^2-z)}$. When $z$ goes around $1$ on the curve $\gamma_2$, the first factor's argument increases by $\frac{2 \pi}{3}$ because the argument of $(1-z)$ increases by $2 \pi$. The argument of the second factor in unchanged. Then the integral over $\Gamma_3$ is just $$ \int\limits_{\Gamma_3} \omega = \int\limits_{1-\varepsilon}^{0} \frac{dt}{j(1-t^3)^{1/3}} = -j^2 I_{\varepsilon} $$ since $j^2 = \frac{1}{j}$. To take the integral over $\Gamma_4$ let's remark that $t^3 = (jt)^3$ and then $1-t^3 = 1-(jt)^3$. Then $$ \int\limits_{\Gamma_4} \omega = \int\limits_{0}^{1-\varepsilon} \frac{ j dt}{ j (1-(jt)^3)^{1/3}} = I_{\varepsilon}\;. $$ Analogously to $\Gamma_3$ we have that $\int_{\Gamma_6} \omega = \int_{\Gamma_9} \omega = -j^2 I_{\varepsilon}$ and $\int_{\Gamma_7} \omega = I_{\varepsilon}$. Since the form $\omega$ is holomorphic on $X$ we must have $$ \sum\limits_{k=1}^{9} \int\limits_{\Gamma_k} \omega = I $$ where $I$ is a so-called period, that doesn't depend on $\varepsilon$. Passing to the limit when $\varepsilon \to 0$ we obtain $$ 3(1-j^2) \int\limits_{0}^{1} \frac{dt}{(1-t^3)^{1/3}} = I\;. $$ My question is how to find such period $I$? P.S. By the way, using other techniques (not related to Riemann surfaces) we can show that $$ \int\limits_{0}^{1} \frac{dt}{(1-t^3)^{1/3}} = \frac{2 \pi}{3 \sqrt{3}}\;. $$ REPLY [4 votes]: The path is homotopic in the complement of the branch points $1,j,j^2$ to a large circle $|z|=R$, parametrized in the mathematically positive sense. Since the homotopy does not pass through branch points, it can be lifted to the surface $X$. Since the form is holomorphic, it is closed, so the integral only depends on the homotopy class, and it is equal to $$ \int_{|z|=R} \frac{dz}{\sqrt[3]{1-z^3}} = \int_{|z|=R} \frac{dz}{z\sqrt[3]{z^{-3}-1}} = \frac{1}{e^{\frac{i \pi}{3}}} \int_{|z|=R} \frac{dz}{z \sqrt[3]{1-z^{-3}}}$$ where I hope I got the branch of the third root right, and in the last expression it is the branch that maps $1$ to $1$. Expanding this out in a power series and using the fact that $\int\limits_{|z|=R} z^n \, dz = 0$ for $n \ne -1$ and $\int\limits_{|z|=R} z^{-1} \, dz =2\pi i$ (basically using the residue theorem at $\infty$) gives $$ \frac{1}{e^{\frac{i \pi}{3}}} \int_{|z|=R} \frac{1}z\left(1-\frac13z^{-3}\pm\ldots\right) dz = 2\pi ie^{-\frac{i \pi}{3}}$$ and I think this is the same result you got.<|endoftext|> TITLE: An example for a calculation where imaginary numbers are used but don't occur in the question or the solution. QUESTION [20 upvotes]: In a presentation I will have to give an account of Hilbert's concept of real and ideal mathematics. Hilbert wrote in his treatise "Über das Unendliche" (page 14, second paragraph. Here is an English version - look for the paragraph starting with "Let us remember that we are mathematicians") that this concept can be compared with (some of) the use(s) of imaginary numbers. 
He thought probably of a calculation where the setting and the final solution have nothing to do with imaginary numbers but where there is an easy proof using imaginary numbers. I remember once seeing such an example but cannot find one, so: Does anyone know about a good and easily explicable example of this phenomenon? ("Easily" means that engineers and biologists can also understand it well.) REPLY [2 votes]: You can use complex numbers to solve in $\mathbb{N}^4$ the following system of equations: $\left\{ \begin{array}{l} ac-bd=1 \\ ad+bc=2 \end{array} \right.$ Let $z_1=a+ib$ and $z_2=c+id$. Thus, the system is equivalent to $z_1z_2=1+2i$. So $(a^2+b^2)(c^2+d^2)=|z_1z_2|^2=5$, but $5$ is a prime so either $\left\{ \begin{array}{l} a^2+b^2=1 \\ c^2+d^2 =5 \end{array} \right.$ or $\left\{ \begin{array}{l} a^2+b^2=5 \\ c^2+d^2 =1 \end{array} \right.$. You deduce that $(|a|,|b|,|c|,|d|) \in \{ (0,1,1,2), (0,1,2,1),(1,0,2,1),(1,0,1,2) \}$. Finally, you find that the solutions are $(0, \pm 1, \pm 1, \mp 2)$, $(0, \pm 1,\pm 2,\mp 1)$, $(\pm 1,0, \pm 2, \pm 1)$ and $(\pm 1, 0, \pm 1, \pm 2)$.<|endoftext|> TITLE: Hatcher pg 187: Idea of Cohomology QUESTION [8 upvotes]: I am reading Hatcher page 178 where he tries to give the idea of cohomology in the case where our space $X$ is a graph. Now in the 4th paragraph from the top he says this: The cohomology group $H^1(X,G) = \Delta^1(X;G)/\textrm{Im} \delta$ will be trivial iff the equation $\delta \varphi = \psi$ has a solution $\varphi \in \Delta^0 (X;G)$ for each $\psi \in \Delta^1 (X;G)$. Solving this equation means deciding whether specifying the change in $\varphi$ across each edge of $X$ determines an actual function $\varphi \in \Delta^0(X;G)$. What does he mean by that last sentence in bold in the paragraph above? Thanks. REPLY [11 votes]: I like to think of this as follows. If you're hiking in the mountains and you keep track of your changes in elevation, then if you ever return to the same spot your total change in elevation will be $0$. So, $H^1$ detects whether it's possible to have a sensible notion of "local change in elevation" such that you don't necessarily return to $0$ when you come around to your starting point. A great illustration of this possibility is the "never-ending staircase"; note that this is only possible because we're not required to say what happens with our elevation anywhere in the center hole. (That is, $H^1(S^1)\not= 0$ but $H^1(D^2)=0$.)<|endoftext|> TITLE: Bijection between Prime numbers and Natural numbers QUESTION [8 upvotes]: We know that if a set $S$ is countably infinite then this set and the set of all natural numbers are equivalent, which means that there must be some bijection between these two sets $F:S\rightarrow N$. We know that the set of all Prime numbers is countable, as is the set of all Natural numbers. So how can one find a bijection between Prime numbers and Natural numbers in an easy way? REPLY [3 votes]: Xavier shows a "non-constructive" proof of the fact that $\#\mathbb P=\#\mathbb N$. I will show a constructive one: Let $F_n=2^{2^n}+1$ be the $n$-th Fermat number. It is proved that the Fermat numbers are pairwise coprime. Let $P_n$ be the smallest prime number dividing $F_n$. Then $P_n\neq P_m$ if $n\neq m$ (because the Fermat numbers are coprime, hence they have different primes in their factorizations). This means that the map $\mathbb N\to\mathbb P$ given by $n\to P_n$ is injective. As well, the identity map $\mathbb P\to\mathbb N$ given by $p\to p$ is injective.
Therefore by Cantor-Bernstein theorem, we have $\#\mathbb P=\#\mathbb N$.<|endoftext|> TITLE: How does Pontryagin duality fit into the general cohomology theory framework? QUESTION [9 upvotes]: Pontryagin duality implies the isomorphic relation of the function space $C(G)$ on a locally compact group $G$ to the function space on it's dual group $\hat G \overset{\sim}{=}\text{Hom}(G,T)$, where $T$ is the circle group. (This isomorphism is the generalization of the Fourier transform in the case $G=\mathbb R$ with translations $t\mapsto t+\Delta t$, and $\hat G \overset{\sim}{=}\{\omega|t\mapsto\text{e}^{2\pi i\ \omega\cdot t}\} \overset{\sim}{=}\mathbb{R}$.) I read here, that this result is of relevant in the history of cohomology theory. But I don't really know how to fit it into the picture. For the duality result, you seem to only need the definiton of the group, the rest follows from natural constructions, while the cohomology thoeries I'm aware of seem to be more general. They have a $\text d$-operator, which I miss in the topological group theorem case, and there also is a base space in the latter case. Is maybe $G$ to be taken as base space? Or is $G$ just to be seen as the fibre object without a complicated base? The second idea arises becuase $\text{Hom}(G,T)$ maps groups into the small circle group. In a way this is anologous to the cotangent space which eats vectors and maps to the reals or the complex numbers. Is there a relation only in as far as people in the 30's dicovered the concept of a character is a relevant one. The cohomology results don't seem to care directly about the function space $C$ on this object. REPLY [4 votes]: I think all that is meant is that Poincare duality on a compact, smooth, oriented $n$-manifold $M$ can be phrased as saying that the groups $H^i(M,\mathbb Z)$ and $H^{n-i}(M, \mathbb R/\mathbb Z)$ are naturally Pontrjagin dual to one another. This is a way of phrasing Pontrjagin duality for integer valued comohology that doesn't require mentioning Tor or Ext functors, which would be required for the version of Pontrjagin duality where you compare $H^i(M,\mathbb Z)$ and $H^{n-i}(M,\mathbb Z)$.<|endoftext|> TITLE: Show group of order $4n + 2$ has a subgroup of index 2. QUESTION [22 upvotes]: Let $n$ be a positive integer. Show that any group of order $4n + 2$ has a subgroup of index 2. (Hint: Use left regular representation and Cauchy's Theorem to get an odd permutation.) I can easily observe that $\vert G \vert = 2(2n + 1)$ so $2 \mid \vert G \vert$ and 2 is prime. We have satisfied the hypothesis of Cauchy's Theorem so we can say $G$ contains an element of order 2. This is where I am stuck I am confused about how the left regular representation relates to the group. So my understanding at this point is that every group is isomorphic to a subgroup of some symmetric group. My question: is the left regular representation $\varphi : G \to S_G$ an isomorphism? where $G \cong S_G$ or is $S_G$ the same thing as $S_{\vert G \vert}$ and $\varphi$ is only an injection? I'm using Dummit and Foote for definitions. I saw an argument online that said that since we have an element of order 2, there is a $\sigma \in S_G$ of order 2, but it is a product of $2n + 1$ disjoint 2-cycles. I don't understand how they could claim this and tried working it out on my own but didn't get there. -- They then went on to use the parity mapping $\varepsilon : S_G \to \{\pm1\}$ and since we have an odd permutation $\sigma$, we have $[S_G:\text{ker }\varepsilon] = 2$. 
I understood their computation but not how that directly shows that $G$ has a subgroup of order 2? unless $G \cong S_G$ because of how left regular representation is defined. (but again, I'm not understanding that concept very well yet.) So, to be clear about my questions: What is meant by left regular representation, is it an isomorphism or just an injection? and how would it be used here. If it is an isomorphism, the argument online starts to make more sense, but how can they say that since $\sigma$ is even, it is made up of $2n + 1$ disjoint transpositions? If you have a full, proof, I'd appreciate it, but good hints are just as good! Thanks you! REPLY [17 votes]: Here is a more general theorem. Theorem: Let $G$ be a group of order $ 2^n m$ (where $m,n \in \mathbb{N}$ ). If $2 \nmid m$ and $G$ has an element of order $2^n$, then there exists a normal subgroup of $G$ of order $m$. Proof: We start with the following facts. Fact 1: If $G$ is a finite group and $\pi : G \to S_G$ be permutation representation of the group action such that $\pi_g (h) = gh$ for $g,h \in G$. If $t$ is an element of even order and $|G|/ \text{ord} (t)$ is odd then $ (\pi_ t)$ is an odd permutation. Sketch of proof: Observe that $(\pi_t)$ is a product of $|G|/\text{ord}(t)$ number of $\text{ord}(t)$-cycles.So $\text{sgn}(\pi_t) = -1$. Fact 2: If there exists such an element $t$ then $G$ has a subgroup of index $2$. Sketch of proof: We use a result. Let $G$ is a finite group and $H$ is a subgroup of $G$ of index $p$ , where $p$ is a prime number. If $K \le G$, then either $K \le H$ or $[K : K \cap H] = p $. Since $G \cong \pi(G)$, to prove the above fact it suffices to show that $\pi(G)$ has a subgroup of index $2$. In the mentioned result let $H = A_G$ ($A_G$ is the subgroup of even permutations) and $K = \pi(G)$. Since $\pi(G)$ contains an odd permutation we can not have $\pi(G) \le A_G$. So we get $[\pi(G) : \pi(G) \cap A_G] = 2$. So $\pi(G) \cap A_G$ is our required subgroup of $\pi(G)$ of index $2$ and the fact is proved. Now we are ready to prove the theorem. We proceed by induction. When $n = 1$, Cauchy's theorem gives us an element of order $2$, so we get a normal subgroup of order $m$ (index $2$) by the preceding results. Assume it to be true for $n = k-1$, we will show it is true for $n=k$. Let $t$ be the element of order $2^k$. Then by the above results it follows that there is a subgroup $H$ of $G$ of index $2$, i.e of order $ 2^{k-1} m $. By induction hypothesis there is a normal subgroup $J$ of $H$ which is of order $m$. We claim that $J$ is normal in $G$. Since $[G:H] = 2$, $H$ is normal in $G$. We will use a result which is easy to prove. If $ J\le H \le G$, $H$ is normal in $G$ and $J$ is a characteristic subgroup of $H$, then $J$ is normal in $G$. So now it suffices to show that $J$ is a characteristic subgroup of $G$. To show that it is sufficient to show that $J$ is the only subgroup of $H$ of order $m$. If not let $P$ be another subgroup of order $m$ of $H$. Note that we have $PJ= JP$ so $PJ \le H$. We have $|PJ| = \frac{|P||J|}{|P\cap J|} = \frac{m^2}{|P \cap J|}$. So $|PJ|$ is odd. If $|P \cap J| < m$, then $|PJ| >m$ and hence it can not divide $|H|$, contradicting Lagrange's theorem. So we must have $|P \cap J| = m$, implying $P= J$. Now the theorem is completely proved.<|endoftext|> TITLE: Study of functions from $\mathbb{Q}$ to $\mathbb{Q}$ QUESTION [6 upvotes]: Is it possible to study functions from $\mathbb{Q}$ to $\mathbb{Q}$ with ordinary calculus ? 
Obviously with the limitation that $\mathbb{Q}$ is not complete. So much less limits, derivatives and integrals exist; but does it make sense a tangent in $\mathbb{Q^2}$ REPLY [3 votes]: It could make some sense, but it will be quite pathological. $\bf Q$ is not a differentiable manifold, so differentiation in the usual sense doesn't make much sense, furthermore, there is no nontrivial continuous measure on $\bf Q$ (because $\bf Q$ is countable), so Lebesgue integral will be pretty much completely useless in studying it. If course, you can still calculate limits, derivatives and Riemann integrals as if you were working in $\bf R$. After all, limits make sense in any topological space, including $\bf Q$, and all these objects are defined as limits, so at worst, they may just fail to exist, which can lead to rather pathological examples. For instance, the function $1/x$ will still be continuous except in $0$, but it will be very non-integrable (as the logarithm of a positive rational number distinct from $1$ is irrational). It is not hard to imagine a function $f:{\bf Q}\to{\bf Q}$ and three rational numbers $a TITLE: Schwartz space: semi norm estimate on translation QUESTION [5 upvotes]: the following family of semi norms is commonly used to introduce the space of Schwartz functions $\mathcal{S}(\mathbb{R}^n)$: $$ \|\phi\|_N := \sup_{\substack{x \in \mathbb{R}^n \\ |\alpha|\,,|\beta| \leq N}} |\,x^\beta(\partial^\alpha_x \phi)(x)\,| $$ defined for each non-negative integer $N$, where the multi - index notation is used and $\phi$ is a $C^\infty$ function. in particular, this is done in the book by E. Stein (et al), "Functional Analysis" (Ch. 3). There it is also stated (in the proof of Proposition 1.5 in Ch.3, Sect. 1.5) that for any compactly supported $C^\infty$ function $\psi$ and any $N$, if $\psi^\backsim_x := \psi(x - y)$ then we have the estimate $$ \|\psi^\backsim_x\| \leq c(1 + |x|)^N\|\psi\|_N \,, $$ and more generally, $$ \|\partial^\alpha_x \psi^\backsim_x\| \leq c(1 + |x|)^N\|\psi\|_{N + |\alpha|} \,, $$ this confuses me and clearly shows that I don't understand the notation of the semi-norm well enough. here is what I struggle with: since $\psi^\backsim_x$ denotes translation by $x$ and this is done before I take the norm, I would have thought that this operation has no impact on the size of the norm, i.e. just plugging in the translated function in the norm I'd have $$ \|\psi^\backsim_x\|_N := \sup_{\substack{(x-y)\, \in \, \mathbb{R}^n \\ |\alpha|\,,|\beta| \leq N}} |\,(x - y)^\beta(\partial^\alpha_{(x - y)} \psi)(x - y)\,| = \|\psi\|_N $$ Why is this not the correct way to measure $\psi^\backsim_x$ with respect to the family $\|\cdot\|_N$ ? thanks a lot for clarification! REPLY [3 votes]: The expression for $\|\psi_{\tilde x}\|_N$ is incorrect; $\psi_{\tilde x}$ does not depend on $x$. It should be $$ \|\psi_{\tilde x}\|_N=\sup_{\substack{y \in \mathbb{R}^n \\ |\alpha|,\,|\beta| \le N}} |y^\beta(\partial^\alpha_y \psi)(y-x)|= \sup_{\substack{y \in \mathbb{R}^n \\ |\alpha|,\,|\beta| \le N}} |(y+x)^\beta(\partial^\alpha_y \psi)(y)|. $$<|endoftext|> TITLE: Continuous functions on $[0,1]$ is dense in $L^p[0,1]$ for $1\leq p< \infty$ QUESTION [26 upvotes]: I tried to show that the continuous functions on $[0,1]$ are dense in $L^p[0,1]$ for $ 1 \leq p< \infty $ by using Lusin's theorem. I proceeded as follows.. 
By using Lusin's theorem, for any $f \in L^p[0,1]$, for any given $ \epsilon $ $ > $ 0, there exists a closed set $ F_\epsilon $ such that $ m([0,1]- F_\epsilon) < \epsilon$ and $f$ restricted to $F_\epsilon$ is continuous. Using Tietze's extension theorem, extend $f$ to a continuous function $g$ on $[0,1]$. We claim that $\Vert f-g\Vert_p $ is sufficiently small. $$ \Vert f-g\Vert_p ^p = \displaystyle \int_{[0,1]-F_\epsilon} |f(x)-g(x)|^p dx $$ $$ \leq \displaystyle \int_{[0,1]-F_\epsilon} 2^p (|f(x)|^p + |g(x)|^p) dx $$ now using properties of $L^p$ functions, we can make first part of our integral sufficiently small. furthermore, since $g$ is conti on $[0,1]$, $g$ has an upper bound $M$, so that second part of integration also become sufficiently small. I thought I solved problem, but there was a serious problem.. our choice of g is dependent of $\epsilon$ , so constant $M$ is actually dependent of $\epsilon$, so it is not guaranteed that second part of integration becomes 0 as $\epsilon $ tends to 0. I think if our choice of extension can be chosen further specifically, for example, by imposing $g \leq f$ such kind of argument would work. Can anyone help to complete my proof here? REPLY [4 votes]: Fix $p\text{ , and }1\leq p\lt \infty.$ By using Lusin's theorem, for any $f \in L^p[0,1]$, for any given $ \epsilon $ $ > $ 0, there exists a closed set $ F_\epsilon $ such that $ m([0,1]- F_\epsilon) < \epsilon$ and $f$ restricted to $F_\epsilon$ is continuous. Using Tietze's extension theorem, extend $f$ to a continuous function $g$ on $[0,1]$. Note that $f\equiv g$ on $F_\epsilon$, so we only need to take care of the integral on $[0,1]- F_\epsilon$. Continuous function is always integrable on $[0,1]$, so $g^p$ is integrable on $[0,1]$. Since $|f(x)-g(x)|^p \leq 2^p (|f(x)|^p + |g(x)|^p)$ and $f \in L^p[0,1],$ we know $\int_{[0,1]}|f(x)-g(x)|^p \lt \infty, i.e. |f(x)-g(x)|^p$ is integrable on $[0,1].$ By the proposition I post, $ \int_{[0,1]-F_\epsilon}|f(x)-g(x)|^p \to 0 $ when $m([0,1]- F_\epsilon)\to 0.$ Note that $\epsilon \to 0 \Rightarrow m([0,1]- F_\epsilon)\to 0$ (Since $m([0,1]- F_\epsilon \lt \epsilon$) For each $\epsilon \gt 0$, we can find a corresponding continuous function $g_\epsilon, $ and $\Vert f-g_\epsilon \Vert \to 0$ when $\epsilon \to 0$. So, $C([0,1])$ is dense in $L^p[0,1]$. Reference: The proposition is from the Real Analysis,4th Ed, written by Royden and Fitzpatrick.<|endoftext|> TITLE: A Riemann integrable function $f$ on a bounded interval $[a, b]$ is measurable with respect to the Borel measure on $[a,b]$? QUESTION [6 upvotes]: Suppose $f:[a,b]\rightarrow [-\infty, \infty]$ is bounded and Riemann integrable, must it be measurable with respect to the Boreal measure on $[a,b]$? REPLY [9 votes]: The answer is no. We know that a function is Riemann integrable iff it is bounded and a.e. continuous. So if you take $f$ to be the characteristic function of a non-Borel set contained in the standard $1/3$-Cantor set (these sets exist by axiom of choice and a neat construction), then $f$ is Riemann integrable but not Borel measurable. (It is Lebesgue-measurable, though.)<|endoftext|> TITLE: Showing that $1/x$ is NOT Lebesgue Integrable on $(0,1]$ QUESTION [13 upvotes]: I aim to show that $\int_{(0,1]} 1/x = \infty$. My original idea was to find a sequence of simple functions $\{ \phi_n \}$ s.t $\lim\limits_{n \rightarrow \infty}\int \phi_n = \infty$. 
Here is a failed attempt at finding such a sequence of $\phi_n$: (1) Let $A_k = \{x \in (0,1] : 1/x \ge k \}$ for $k \in \mathbb{N}$. (2) Let $\phi_n = n \cdot \chi_{A_n}$ (3) $\int \phi_n = n \cdot m(A_n) = n \cdot 1/n = 1$ Any advice from here on this approach or another? REPLY [7 votes]: I think this may be the same as what Davide Giraudo wrote, but this way of saying it seems simpler. Let $\lfloor w\rfloor$ be the greatest integer less than or equal to $w$. Then the function $$x\mapsto \begin{cases} \lfloor 1/x\rfloor & \text{if } \lfloor 1/x\rfloor\le n \\[8pt] n & \text{otherwise} \end{cases}$$ is simple. It is $\le 1/x$ and its integral over $(0,1]$ approaches $\infty$ as $n\to\infty$.<|endoftext|> TITLE: For what algebraic curves do rational points form a group? QUESTION [11 upvotes]: For what real algebraic curves do rational points form a group ? How does this relate to Jacobian Varieties ? REPLY [11 votes]: There are lots of curves whose set of rational points have a group structure with a geometric interpretation - which is the question that should have been asked. In addition to the projective group laws on elliptic curves there are group laws on many curves of genus 0, in particular Pell conics (degenerate or not; actually any conic will do, but the group structure is "canonical" only if you have a canonical rational point acting as an identity). And of course you can always get group structure on higher dimensional varieties such as products of Pell conics. Geometrically, these group laws com from "generalized Jacobians". These are nicely described in the thesis of Isabelle Dechene, which can be found here. On a much more elementary level I will try to put a few things together here.<|endoftext|> TITLE: Is there a continuous injection from the unit square to the unit interval? QUESTION [11 upvotes]: I see that the Peano curve is a continuous surjection from the unit interval to the unit square (correct me if I'm wrong). Does it then follow that there is a continuous injection from the unit square to the unit interval? Thank you! REPLY [6 votes]: The other three answers provide reasons for why there is no continuous injection from the unit square to the unit interval. But I wanted to show why you're specific argument fails. The issue is that a continuous surjection $f:[0,1]\rightarrow [0,1]^2 $ will fail to be injective. The quick reason is that an injective continuous map between compact Hausdorff spaces is automatcally a homeomorphism onto its image, and $[0,1]$ and $[0,1]^2$ are not homeomorphic (as the other answers show). So, since $f$ is surjective, there is an inverse injective function $g:[0,1]^2\rightarrow [0,1]$, but it involves making many choices. This is because whenever you have $y=f(x_1) = f(x_2)$ with $x_1\neq x_2$, then you must make a choice for $g(y)$. Should $g(y) = x_1$ or $g(y) = x_2$? (In worse cases, there aren't just $2$ different $x$s mapping to the same $y$, but sometimes infinitely many). How do you make such a choice? Well you can, but not in any canonical fashion. Hence, while you do get an injective function $g:[0,1]^2\rightarrow [0,1]$, due to all the choices you had to make, it won't be continuous.<|endoftext|> TITLE: A problem of J. E. Littlewood QUESTION [26 upvotes]: Many years ago I picked up a little book by J. E. Littlewood and was baffled by part of a question he posed: "Is it possible in 3-space for seven infinite circular cylinders of unit radius each to touch all the others? Seven is the number suggested by counting constants." 
It is the bit in italics which baffled me then (and still does). Can anyone explain how he gets 7 by "counting constants"? P.S. For completeness, the book is "Some problems in Real and Complex Analysis" (1968) REPLY [21 votes]: Here is my take: There are $4$ degrees of freedom in selecting the center line of each cylinder, for a total of $4n$ degrees of freedom. Subtract from this the $6$ degrees of freedom given by the Euclidean motions (rotations and translations in space), as applied to the total configuration – for a total of $4n-6$ degrees of freedom. For two cylinders to touch, the minimal distance between points on their respective center lines must be $2$. This results in $\binom{n}{2}$ equations. To be able to satisfy all these equations, we must probably have $4n-6\ge\binom{n}{2}$, which holds for $n\le7$.<|endoftext|> TITLE: How is the general solution for algebraic equations of degree five formulated? QUESTION [22 upvotes]: In a book on neural networks I found the statement: The general solution for algebraic equations of degree five, for example, cannot be formulated using only algebraic functions, yet this can be done if a more general class of functions is allowed as computational primitives. What is this "more general class of functions"? REPLY [10 votes]: Felix Klein has a small book called "Lectures on the icosahedron and the solution of equations of the fifth degree", where he develops a method of solving the quintic using modular forms. The relationship between the icosahedron and the general quintic is that the rotation group of the icosahedron is $A_5$, the unique non-trivial proper normal subgroup of $S_5$, which is the Galois group of the general quintic. Googling has turned up this introductory blog post on the topic, and this expository article.<|endoftext|> TITLE: A functional equation: $f(x, y + z) = f(x, y) + f(x + y, z)$ QUESTION [5 upvotes]: Can anything be said about the solutions of the following functional equation? $$ f(x, y + z) = f(x, y) + f(x + y, z) $$ I don't seem to be able to find much in what I think are the standard references in these cases. REPLY [5 votes]: Following the beautiful idea of Robert Israel, we will show that the solutions of the equation are precisely functions of the form $f(x,y)=g(x+y)-g(x)$, where $g$ is an arbitrary function. First, we plug $z = -y$ into the equation. This yields $$f(x, 0) = f(x, y) + f(x + y, -y).\tag{1}$$ In the case $y=0$, this tells us that $f(x,0)=0$ for all $x$, as Robert Israel already noticed. Using this fact in $(1)$, we have that $$f(x,y)=-f(x+y,-y)\tag{2}$$ must hold for all $x,y$ in order for $f$ to be a solution. We will now use this fact in the original equation. The original equation says that $$f(x, y) = f(x, y + z) - f(x + y, z)$$ holds for all $x,y,z$. Using $(2)$ twice we may rewrite this as $$f(x, y) = - f(x+y+z, - y - z) + f(x+y+z, -z)$$ Now, plug in $z=-x-y$ and get $$f(x, y) = - f(0, x) + f(0, x+y).$$ This means that if $f$ solves the original equation, we may define the function $g$ by $g(w)=f(0,w)$ and then $f(x,y)=g(x+y)-g(x)$ will hold. This shows that indeed the solutions of the equation are precisely of the form suggested by Robert Israel.<|endoftext|> TITLE: Prime one heap Nim QUESTION [12 upvotes]: I have been working on an interesting problem my lecturer mentioned recently. Prime Nim is a variant of the Nim game where you have a single pile with an arbitrary number $n\in \Bbb N+\{0\}$ of elements and players can take away a prime count of elements every round.
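(Aside, not from the original post: for experimenting with this game it is handy to classify small pile sizes by brute force. The Python sketch below, with illustrative names only, marks a position as winning iff some prime move leads to a losing position; it is only meant as a sandbox for the discussion that follows.)

# Sketch: brute-force win/loss table for "remove a prime number of elements" Nim.
LIMIT = 60

is_prime = [False, False] + [True] * (LIMIT - 1)     # sieve of Eratosthenes
for p in range(2, int(LIMIT ** 0.5) + 1):
    if is_prime[p]:
        for m in range(p * p, LIMIT + 1, p):
            is_prime[m] = False
primes = [p for p in range(2, LIMIT + 1) if is_prime[p]]

win = [False] * (LIMIT + 1)      # win[k] is True iff the player to move wins from a pile of k
for k in range(LIMIT + 1):
    win[k] = any(not win[k - p] for p in primes if p <= k)

print([k for k in range(LIMIT + 1) if not win[k]])   # losing positions start 0, 1, 9, 10, 25, 34, ...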
Now I want to find a way to decide whether we can ensure victory in a given position (and the winning strategy, of course). What I did so far: $0$ and $1$ are clearly lost positions. On the contrary, any prime $n$ and $n+1$ are winning positions. For all other $n$ we can say that if there is no prime $p TITLE: Expected Value of a Binomial distribution? QUESTION [42 upvotes]: If $\mathrm P(X=k)=\binom nkp^k(1-p)^{n-k}$ for a binomial distribution, then from the definition of the expected value $$\mathrm E(X) = \sum^n_{k=0}k\mathrm P(X=k)=\sum^n_{k=0}k\binom nkp^k(1-p)^{n-k}$$ but the expected value of a Binomal distribution is $np$, so how is $$\sum^n_{k=0}k\binom nkp^k(1-p)^{n-k}=np$$ REPLY [3 votes]: Since there are already great direct answers there, let me show you an alternative approach via differential calculus: \begin{align} \sum_i^N i \binom{N}{i} a^{i} b^{N-i} &= a \sum_i^N i \binom{N}{i} a^{i-1} b^{N-i} \\ &= a \frac{d}{da}\sum_i^N \binom{N}{i} a^{i} b^{N-i} \\ &= a \frac{d}{da}(a+b)^N \\ &= a N(a+b)^{N-1} \\ \end{align} Now substitute $a = p$ and $b = 1-p$ will give you the expectation. $\blacksquare$<|endoftext|> TITLE: Smallest value of $(a+b)$ QUESTION [6 upvotes]: What can be the smallest value of $(a+b)$ , $a>0$ and $b>0$ where $(a+13b)$ is divisible by $11$ and $(a+11b)$ is divisible by $13$ This is what I have done so far. We have 1) $a+ 13b = 11m$ 2) $a + 11b = 13n$ $b = \dfrac{11m - 13n}{2}$ , $a = \dfrac{169n - 121m}{2}$ Since $a,b > 0$, we have that $11m - 13n > 0$ and $169n - 121m > 0$ $\implies 11\cdot13m - 13^2n > 0$ and $13^2n - 11^2m > 0$ $\implies 11\cdot13m > 13^2n > 11^2m$ $\implies (\frac{11}{13}) m > n > (\frac{11}{13})^2m$ How to proceed from here? REPLY [2 votes]: $\rm\begin{eqnarray}{\bf Hint}\rm\quad a+13b &=&\rm 11m &\:\Rightarrow\:&\rm 13a+169b &=&\rm 143m\\ \rm a+11b &=&\rm 13n &\:\Rightarrow\:&\rm 11a+121b &=&\rm 143n\\ && &\:\Rightarrow\:&\rm\ \ 2a+\ \ 48b &=&\rm 143(m-n)\ \ \ by\ subtracting\ prior\ from\ first \end{eqnarray}$ Thus $\rm\:a + 24b = 143c.\:$ Its solution with minimal $\rm\:a+b\:$ is $\rm\:(a,b) = (23,5),\:$ since all other solutions arise by adding to $\rm\:(a,b)\:$ nonnegative multiples of $\rm\:(-1,6)\:$ or $\rm\:(24,-1),\:$ increasing $\rm\:a+b,$ viz. $$\begin{eqnarray}\rm a+24b = 143&\:\Rightarrow\:&\rm (a,b) = (23,\ \ 5),\ (47,\ \ 4),\ (71,\ \ 3),\ \ldots\\ \rm a+24b = 286&\:\Rightarrow\:&\rm (a,b) = (22,11),\ (46,10),\ (70,\ \ 9),\ \ldots\\ \rm a+24b = 429&\:\Rightarrow\:&\rm (a,b) = (21,17),\ (45,16),\ (69,15),\ \ldots\\ \cdots && \cdots \end{eqnarray}$$ In this solution table, adding $\rm\:(24,-1)\:$ moves right, and adding $\rm\:(-1,6)\:$ moves down. But moving left or above the border by subtracting these terms results in a solution with $\rm\:a\:$ or $\rm\:b\:$ negative.<|endoftext|> TITLE: sum of a series QUESTION [8 upvotes]: Can \begin{equation} \sum_{k\geq 0}\frac{\left( -1\right) ^{k}\left( 2k+1\right) }{\left( 2k+1\right) ^{2}+a^{2}}, \end{equation} be summed explicitly, where $a$ is a constant real number? If $a=0,$ this sum becomes \begin{equation} \sum_{k\geq 0}\frac{\left( -1\right) ^{k}}{2k+1}=\frac{\pi }{4}. \end{equation} What about for $a\neq 0$. 
I tried this method \begin{eqnarray*} \sum_{k\geq 0}\frac{\left( -1\right) ^{k}\left( 2k+1\right) }{\left( 2k+1\right) ^{2}+a^{2}} &=&\frac{1}{2}\sum_{k\geq 0}\left( -1\right) ^{k}% \left[ \frac{1}{2k+1+ja}+\frac{1}{2k+1-ja}\right] \\ &=&\frac{1}{2}\sum_{k\geq 0}\frac{\left( -1\right) ^{k}}{2k+1+ja}+\frac{1}{2}% \sum_{k\geq 0}\frac{\left( -1\right) ^{k}}{2k+1-ja} \\ &=&\frac{1}{2}\sum_{k\geq 0}\left( -1\right) ^{k}\int_{0}^{1}x^{2k+ja}dx+% \frac{1}{2}\sum_{k\geq 0}\left( -1\right) ^{k}\int_{0}^{1}x^{2k-ja}dx \\ &=&\frac{1}{2}\int_{0}^{1}\left[ \sum_{k\geq 0}\left( -1\right) ^{k}x^{2k+ja}% \right] dx+\frac{1}{2}\int_{0}^{1}\left[ \sum_{k\geq 0}\left( -1\right) ^{k}x^{2k-ja}\right] dx \\ &=&\frac{1}{2}\int_{0}^{1}\frac{x^{ja}}{1+x^{2}}dx+\frac{1}{2}\int_{0}^{1}% \frac{x^{-ja}}{1+x^{2}}dx \\ &=&\frac{1}{2}\int_{0}^{1}\frac{x^{ja}+x^{-ja}}{1+x^{2}}dx \end{eqnarray*} but I was stucked at the last equations. Can any one give me some hint or tell me that the analytic expression doesn't exist. Thanks very much! REPLY [2 votes]: This sum can be evaluated by the same trick as presented here: math.stackexchange.com. In order to capture the convergence behavior of the series group consecutive terms to obtain absolute convergence, writing $$ S(a) = \sum_{m\ge 0} \frac{4m+1}{(4m+1)^2+a^2} - \sum_{m\ge 0} \frac{4m+3}{(4m+3)^2+a^2} = \sum_{m\ge 0} \frac{(4m+1)((4m+3)^2+a^2)-(4m+3)((4m+1)^2+a^2)} {((4m+3)^2+a^2)((4m+1)^2+a^2)} = \sum_{m\ge 0} \frac{2(4m+1)(4m+3)-2a^2}{((4m+3)^2+a^2)((4m+1)^2+a^2)} $$ Instead of using $f_1(z)$ and $f_2(z)$ from the other post use $$f(z) = \frac{2(4z+1)(4z+3)-2a^2}{((4z+3)^2+a^2)((4z+1)^2+a^2)} \pi \cot(\pi z).$$ The key operation of this technique is to compute the integral of $f(z)$ along a circle of radius $R$ in the complex plane, where $R$ goes to infinity. We certainly have $|\pi\cot(\pi z)|<2\pi$ for $R$ large enough. The core term from the sum is $\theta(R^2/R^4)$ which is $\theta(1/R^2)$ so that the integrals are $\theta(1/R)$ and vanish in the limit. This means that the sum of the residues at the poles of $f(z)$ add up to zero. It is easily verified that the poles at the integers produce the terms of the sum twice over. It follows that $$ 2 S(a) + \text{Res}_{z=\frac{3}{4}-\frac{1}{4}ia} f(z) + \text{Res}_{z=\frac{3}{4}+\frac{1}{4}ia} f(z) + \text{Res}_{z=\frac{1}{4}-\frac{1}{4}ia} f(z) + \text{Res}_{z=\frac{1}{4}+\frac{1}{4}ia} f(z) = 0$$ The residues are easily computed as the poles are all simple. This finally yields $$ S(a) = \frac{1}{4} \frac{\pi}{\cosh \left(\frac{\pi a}{2}\right)}.$$<|endoftext|> TITLE: Why is $Q[\pi]$ not a field? QUESTION [10 upvotes]: I am having trouble seeing how to apply the definition of transcendental to see this. Thanks! REPLY [11 votes]: Hint $\ $ Notice $\:\pi\:$ transcendental over $\rm\Bbb Q\:\Rightarrow\:\Bbb Q[\pi]\cong \Bbb Q[x].\:$ But a polynomial ring cannot be a field since if $\rm\ x^{-1}\! = f(x)\in\Bbb Q[x]\ $ then $\rm\ x \; f(x) = 1 \: \Rightarrow\: 0 = 1,\ $ by evaluating at $\rm\ x = 0. $ Remark $\ $ The above proof has a very instructive universal interpretation.<|endoftext|> TITLE: Generalised Hardy-Ramanujan Numbers QUESTION [11 upvotes]: The number 1729 is famously the smallest positive integer expressible as the sum of two positive cubes in two different ways ($1729=1^3+12^3=9^3+10^3$). There is plenty of work on "taxicab numbers" - the smallest sums of cubes in $n$ different ways (which always exist) - Here's Ivars Peterson at MAA And here's another detailed analysis. 
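(Aside, not part of the original question: the two-way case is easy to reproduce by brute force, and it is reassuring to see $1729$ fall out of a few lines of Python. Names below are illustrative only.)

# Sketch: smallest number expressible as a sum of two positive cubes in two different ways.
from collections import defaultdict

LIMIT = 20                                 # enough, since both cubes of any sum below 1730 are at most 12^3
ways = defaultdict(list)
for a in range(1, LIMIT + 1):
    for b in range(a, LIMIT + 1):
        ways[a ** 3 + b ** 3].append((a, b))

taxicab2 = min(n for n, reps in ways.items() if len(reps) >= 2)
print(taxicab2, ways[taxicab2])            # 1729 [(1, 12), (9, 10)]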
(Does anyone know anything about the "Bill Butler" referred to in the second article) However the sequence which caught my attention is OEIS A016078 - 4, 50, 1729, 635318657 which gives the smallest numbers which are sums of positive $n^{th}$ powers in two ways. Is there any more recent work or prospect of identifying such numbers for fifth powers and above? And should they be named as in the title of this post? [This question arises from a much more frivolous one, which was closed, in which I learned why a $50^{th}$ birthday was special in this particular way]. REPLY [2 votes]: It is necessary to solve the equation: $$x^5+y^5+z^5=q^5$$ For integers complex numbers solutions exist. $j=\sqrt{-1}$ Making this change. $$a=p^2-2ps-s^2$$ $$b=p^2+2ps-s^2$$ $$c=p^2+s^2$$ You can write the solution. $$x=jc+b$$ $$y=jc-b$$ $$z=a-jc$$ $$q=a+jc$$ $p,s$ - integers.<|endoftext|> TITLE: quotient metric spaces for dummies QUESTION [14 upvotes]: I was hoping that somebody can explain to me the definition of quotient metric spaces I got the following definition from wikipedia: If $M$ is a metric space with metric $d$, and $\sim$ is an equivalence relation on $M$, then we can endow the quotient set $M/{\sim}$ with the following (pseudo)metric. Given two equivalence classes $[x]$ and $[y]$, we define $$ d([x],[y]) = \inf\{d(p_1,q_1)+d(p_2,q_2)+\dotsb+d(p_{n},q_{n})\} $$ where the infimum is taken over all finite sequences $(p_1, p_2,\dots, p_n)$ and $(q_1, q_2,\dots, q_n)$ with $[p_1]=[x], [q_n]=[y],[q_i]=[p_{i+1}], i=1,2,\dots, n-1$. From another discussion on this website I understand that we use this definition, instead of simply the infimum over d(p,q) for all possible combinations for p and q, to guarantee the triangle inequality. But it is not entirely clear to me how to (geometrically) interpret this definition and how to actually compute distances with it. I tried to work with the following example: $X = \{ -1,1,-2,2,1.1,2.1\}$ with $d(x,y)=|x-y|$ and $\sim\, = \{\{1,-1\},\{2,-2\},\{1.1,2.1\}\}$ and compute the distance between -1 and 1 and also the distance between -1 and 1.1. Could somebody please be so kind to give me a step by step walk-through on how to use the definition and compute the distances for these two examples. Thanks! Gijs Dubbelman REPLY [12 votes]: This is one of my favourite definitions. You can think of the equivalence classes as networks of teleporters. You can enter any teleporter in a given network (equivalence class) and jump to any other teleporter in the same network (equivalence class), and it doesn't take you any time/distance. All you have to pay for is the distance you cover by foot. The infimum is taken over all possible sequences of teleportations. In this way, points in the same equivalence class become a single point, and you can freely choose which of its incarnations to enter and which one to leave. Your examples are fundamentally flawed in that you're asking for the distance between points of $X$, but these aren't points of $X/\sim$, so you can't compute their distance in the quotient metric $d_\sim$. You can ask what the distance from $[-1]$ to $[1]$ is, and the answer is $0$, since these are the same points (equivalence classes) of $X/\sim$. For the second one, you can ask for the distance from $[-1]$ to $[1.1]$. 
To find this, enter the teleporter at $-1$, jump to $1$ for free, and walk to $1.1$ by foot, for a total distance $d_\sim([-1],[1.1])=d_\sim([1],[1.1])=d(1,1.1)=0.1$.<|endoftext|> TITLE: The philosophy of change of variables QUESTION [5 upvotes]: Change of variables is a basic method in mathematical solving. However, I can't use it smoothly, i.e. I don't know when to use it. (I can use it in some regular problems, but I won't have the sense of using it when coming across some new problems). Besides that, I'm not sure why we can use the change of variables,i.e.why the change of variable is reasonable.For example, in multivariable calculus, it is largely used. Can anyone give me some ideas? REPLY [9 votes]: This is an excellent question, but one which is quite difficult to answer. "Changing variables" is an example of a problem solving heuristic (or "strategy") that can be very powerful when used in the right scenario. Perhaps if you were asked to solve for $x$ in the following you might try substitution: $$x^4 - 6x^2 + 8 = 0$$ This doesn't change the structure of the original problem (and after a bit of practice you might find a substitution of $y = x^2$ unnecessary for factoring the left-hand expression above) but it can drastically change your perspective on the problem. Indeed, $$y^2 - 6y + 8 = 0$$ is easily seen as a quadratic equation, where the roots can be found by factoring (or, if you are feeling particularly obstinate, using the quadratic formula). If you are interested in reading about problem solving heuristics, a good place to start is the work of George Polya and Alan Schoenfeld. In general, it turns out to be very difficult to develop a sense of when to use which heuristic. For your particular question, this means that it is tough to explain when you should use a change of variables (or some other strategy). I suppose it would be possible to start a list of situations in which you might use a variable change, but doing so would either require too much generality to be helpful (Polya's seminal work "How to solve it" suffers from this problem) or too specific to be of a reasonable length or readable. The best way to learn about how/when to change variables (in fact, the only way that has proven to be effective) is by solving lots and lots of problems. Heuristic use boils down to intuition, and intuition is developed as you are exposed to many different situations and have to weasel your way out of them. I realize this answer, though true, is somewhat disappointing. So let me end with a nice problem that can be solved using substitution (in a couple different ways). Solve for $x$ in the following equation: $$(x-1)(x-2)(x-3)(x-4) = 2013$$<|endoftext|> TITLE: Group with more than one element and with no proper, nontrivial sub groups must have prime order. QUESTION [7 upvotes]: I want to show that if $G$ is a group with more than one element, and that $G$ has no proper non-trivial subgroups. Prove that $|G|$ is prime. (Do not assume at the outset that |G| is finite). My question is not that how to prove it. I am saying that suppose $|G|\geq 2$ possibly $|G|=\infty.$ By assumption the only subgroups of $G$ are $\{e\}$ and $G$, i.e., the trivial groups. Let $a$ be non-identity element in $G$. Consider $\langle a\rangle$. Then $\langle a\rangle=G.$ So $G$ is cyclic. My question is, why can I say that $G=\langle a\rangle$. I know there are only two subgroups and $\langle a\rangle\neq e$ because $a\neq e$. Therefore we must have $G=\langle a\rangle$. 
But my problem is why can't I say: consider $a,b\in G$ and then we look at $\langle a,b\rangle$. Then I would say $G=\langle a,b\rangle$, and then I cannot say that $G$ is cyclic, and then I will have a problem proving the question. REPLY [5 votes]: Summing up all the above, together with my comment and, of course, what you did: Take any $\,1\neq g\in G\,$ (such an element exists since $\,|G|>1\,$), then $\,\langle\,g\,\rangle=G\,$ , otherwise $\,G\,$ has a proper non-trivial subgroup, and we already know $\,G\,$ is cyclic: 1) It can't be that the order of $\,g\,$ is infinite, otherwise $\,G=\langle\,g\,\rangle\cong\Bbb Z\,$ , but then there are lots of non-trivial subgroups: $\,\langle\,g^n\,\rangle\cong n\Bbb Z\,\lneq\Bbb Z\cong G\,$ , and thus $\,G\,$ is cyclic and finite. 2) Suppose finally that $\,|G|=ord(g)=n\,$ . If there exists $\,k\in\Bbb N\,$, $\,1<k<n\,$, with $\,k\mid n\,$, then $\,\langle\,g^k\,\rangle\,$ is a subgroup of order $\,n/k\,$, with $\,1<n/k<n\,$, i.e. a proper non-trivial subgroup of $\,G\,$, which is impossible. Thus $\,n\,$ must be prime, and we are done.<|endoftext|> TITLE: Logic, geometry, and graph theory QUESTION [9 upvotes]: [Seven years later, I made an edit to this question, see below.] By delving into topos theory and sheaves one will eventually discover a "deep connection" between logic and geometry, two fields which are superficially rather unrelated. But what if I do not have the abilities or capacities to delve deeper into topos theory and sheaves? Does the deep connection between logic and geometry have to remain a mystery for me forever? At which level of abstraction and sophistication can this connection be recognized for the first time? And which seemingly superficial analogies really have to do with this "deep connection"? What's rather easy to grasp is that there is (i) an algebra of logic and (ii) an algebra of geometry. But is this at the heart of the "deep connection"? What comes to my mind is that both logic (the realm of linguistic representations) and geometry (the realm of graphical representations) have to do with - representations. Is this of any relevance? Edit: I also wonder if and how graph theory can be related to – or serve as a connection between – logic and geometry, the vertices of graphs representing objects (in the sense of logic), resp. points (in the sense of geometry), the edges representing sentences, resp. line segments. If there was such a "deep connection" of logic, geometry, and graph theory, the existence and importance of planar graphs might appear in a new light. Furthermore, I have found this: "Roughly speaking, category theory is graph theory with additional structure to represent composition" is a good summary of the connection between [graph theory and category theory]. Source So the two possible ways to relate logic and geometry (via categories/toposes/sheaves, resp. graph theory) are related themselves. REPLY [2 votes]: I wonder if this analogy goes in the right direction: Logic lies at the heart of set theory. In set theory you may define numbers as equivalence classes of sets wrt "having a bijection", together with a prototypical representative (e.g. $\{\{\}\}$ for all sets with 1 element). In geometry you define numbers as equivalence classes of lines wrt "having the same length", together with a prototypical representative (e.g. the line $\overline{01}$ for all lines of length 1). Other "natural" representatives (but being other types of objects) are: the circle with center $0$ that contains $1$, and the point $1$ itself.<|endoftext|> TITLE: The Maths necessary to understand Logic, Model theory and Set theory to a very high level QUESTION [13 upvotes]: I am studying Philosophy but most of my interests have to do with the philosophy of Maths and Logic.
I would like to be able to have a very high level of competence in the topics mentioned in the title, and I was wondering, given that I don't have a mathematical background beyond basic school level maths, what particular branches of pure mathematics will help me to go deeper in my study of Logic, Model theory and Set theory? Calculus? Group theory? I hope you can give me some suggestions. REPLY [5 votes]: This is a bit tangential to your question, but if your ultimate interest is in the philosophy of mathematics, I would believe that some knowledge of category theory, as a possible alternative to set theory as a foundation for mathematics, would be important. The texts by Steve Awodey (Category Theory) or F. W. Lawvere and Stephen Schanuel (Conceptual Mathematics) may serve as useful introductions.<|endoftext|> TITLE: Interchanging limit with infimum/supremum QUESTION [18 upvotes]: I'm sure I'm having a notational misunderstanding. Anyway, suppose $(f_n)$ is a sequence of continuous functions from a metric space $X$ to $\mathbb{R}$. So, if $(f_n)$ converges uniformly to a function $f$, then $$\lim_{n \to \infty} \inf_{x \in X} f_n(x) = \inf_{x \in X} \lim_{n \to \infty} f_n(x)$$ and $$\lim_{n \to \infty} \sup_{x \in X} f_n(x) = \sup_{x \in X} \lim_{n \to \infty} f_n(x).$$ Can somebody explain to me why is that? Again, this might be really simple... REPLY [12 votes]: I think to talk about $\sup$ and $\inf$ you need some sort of order on $Y$? Anyways, I'm going to take $Y = \mathbb{R}$ in what follows. We have $f$ is the uniform limit of the $f_i$, and the claim is $$\lim_i \sup_{x \in X} f_i(x) = \sup_{x \in X} f(x)$$I treat the case $\sup_{x \in X} f(x) = M < \infty$, for $M = \infty$ you can minimally alter the same argument. Lemme know if you want details. First of all, why does $\lim_i \sup_{x \in X} f_i(x)$ exist? Well indeed, let's show this sequence is Cauchy. Fix $\epsilon > 0$, for $i$ large enough we have all the $f_i$ uniformly within $\epsilon$ of $f$. For $j, j' > i$ take a sequence $x_n$ such that $f_j(x_n)$ converges to $\sup_{x} f_j(x)$, we have $$\sup_x f_{j'}(x) \geqslant \limsup_n f_{j'}(x_n) \geqslant \lim f_j(x_n) - \epsilon = \sup_x f_j(x) - \epsilon$$by symmetry we have the result. Now onto business, we want to show $$\lim_j \sup_x f_j(x) = M$$ Well let's see $\geqslant$: for any $\epsilon > 0$, we can find $x$ such that $f(x)$ is within $\epsilon$ of $M$, and for big enough $n$ we have $f_n(x)$ is within $\epsilon$ of $f(x)$. In particular $\sup_{x' \in X} f_i(x ') \geqslant f_i(x) \geqslant M - 2\epsilon$ So we get $$\lim_i \sup_{x \in X} f_i(x) \geqslant M$$ Now let's see $\leqslant$: it suffices to show $\leqslant M + \epsilon$, for any $\epsilon > 0$. Indeed, for $i >> 0$, we have $f_i(x)$ is uniformly within $f(x)$. For such large $i$, take a sequence $x_n$ such that $\lim_n f_i(x_n) = \sup_x f_i(x)$. We have $$\lim_n f_i(x_n) \leqslant \limsup_n f(x_n) + \epsilon \leqslant M + \epsilon$$as desired.<|endoftext|> TITLE: Dimension of solution space for system of linear inequalities QUESTION [5 upvotes]: Let's say I have a system of inequalities: $Ax \leq g$ for some $A \in \mathbb{R}^{4\times4}$, $x \in \mathbb{R}^4$, $g \in \mathbb{R}^4$, and $A$ is full rank. Here, the $\leq$ denotes element-wise inequality. Specifically, I know that $x$ lies in a two-dimensional subspace of $\mathbb{R}^4$ (determined by the null space of some matrix $N$). What I'm interested in is the dimension of the solution to the above system of inequalities. 
More succinctly, I'm interested in the dimension of the set $$ \left\{ x \in \mathbb{R}^4 \ \vert \ x \in {\rm Null}(N) , \ Ax \leq g\right\} $$ Understanding more about this set would be nice too, but the dimension would suffice. I'm really not sure how to approach this problem... at all. This problem arose while I was trying to analyze the set of solutions to a linear program, if you're curious. REPLY [2 votes]: If $A$ is an invertible matrix, $S = \{x: Ax \le g\}$ is the image of $V = \{y: y \le g\}$ under the linear transformation $A^{-1}$, and in particular is a convex cone in ${\mathbb R}^4$ with nonempty interior. If you intersect this with a two-dimensional linear subspace, the intersection will be two-dimensional if the subspace intersects the interior of $S$. However, it is also possible that the subspace only intersects the boundary of $S$, in which case the dimension could be $1$ or $0$, or that the subspace does not intersect the boundary. For example, with $A = I$ and $g = (0,0,0,0)$, your intersection would have dimension $2$ if $\text{Null}(N)$ contains a vector with all entries $>0$, but it would have dimension $1$ if $\text{Null}(N)$ is spanned by $(1,0,0,0)$ and $(0,1,-1,0)$ and dimension $0$ (consisting only of the origin) if $\text{Null}(N)$ is spanned by $(1,-1,0,0)$ and $(0,0,1,-1)$, or be empty if $g_1 < 0$ and $\text{Null}(N)$ contains only vectors $v$ with $v_1 = 0$.<|endoftext|> TITLE: Count the number of "special subsets" QUESTION [5 upvotes]: Let $A(n)$ - is the set of natural numbers $\{1,2, \dots ,n\}$. Let $B$ - is any subset of $A(n)$. And $S(B)$ is the sum of all elements $B$. Subset $B$ is "special subset" if $S(B)$ divisible by $2n$ ( Mod$[S(B),2n]=0$). Example: $A(3)=\{ 1,2,3 \}$, so we have only two "special subset" - $\{\varnothing\}$ and $\{1,2,3\}$. $A(5)=\{ 1,2,3,4,5 \}$, so we have $4$ "special subset" - $\{\varnothing\}, \{1,4,5\}, \{2,3,5\}, \{1,2,3,4\}$. Let $F(n)$ is the number of all "special subsets" for $A(n)$, $n \in \mathbf{N}$. I found for $n<50$ that $F(n)-1$ is the nearest integer to $\frac{2^{n-1}}{n}$. $F(n)$=Floor$[\frac{2^{n-1}}{n} + \frac{1}{2}] + 1$. Is it possible to prove this formula for any natural $n$ -? REPLY [3 votes]: Let $B_j$ be independent Bernoulli random variables with parameter $1/2$, i.e. they take values $0$ and $1$, each with probability $1/2$, and $X = \sum_{j=1}^n j B_j$. Thus $X$ is the sum of a randomly-chosen subset of $A(n)$. Your $F(n) = 2^n P(X \equiv 0 \mod 2n) = \frac{2^n}{2n} \sum_{\omega} E[\omega^X]$ where the sum is over the $2n$'th roots of unity ($\omega = e^{\pi i k/n}, k=0,1,\ldots, 2n-1$). Now $$E[\omega^X] = \prod_{j=1}^n E[\omega^{j B_j}] = \prod_{j=1}^n \frac{1 + \omega^j}{2} $$ For $\omega = 1$ we have $E[1^X] = 1$, so this gives us a term $2^{n-1}/n$. Each $\omega \ne 1$ is a primitive $m$'th root of $1$, i.e. $\omega = e^{2\pi i k/m}$ where $m$ divides $2n$ and $\gcd(k,m)=1$. Now $E[\omega^X] = 0$ if some $\omega^j = -1$. This is true iff $m$ is even. Each primitive $m$'th root for the same $m$ gives the same value for $E[\omega^X]$ (the same factors appear, just in different orders). It appears to me (from looking at the first few cases) that if $\omega$ is a primitive $m$'th root with $m$ odd, $E[\omega^X] = 2^{-n+n/m}$. Now there are $\phi(m)$ primitive $m$'th roots, so I get $$ F(n) = \frac{2^{n-1}}{n} + \sum_m \frac{\phi(m)}{n} 2^{n/m-1} $$ where the sum is over all odd divisors of $n$ except $1$. 
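(Aside: the closed form just derived is easy to test against a direct count. The Python sketch below, with illustrative names, compares it with a dynamic-programming enumeration of subsets of $\{1,\dots,n\}$ whose sum is divisible by $2n$, for small $n$.)

# Sketch: check F(n) = (2^(n-1) + sum over odd divisors m>1 of n of phi(m)*2^(n/m - 1)) / n
from math import gcd

def phi(m):                                # Euler's totient, naive version
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

def brute_force_F(n):                      # count subsets of {1,...,n} with sum divisible by 2n
    counts = [0] * (2 * n)
    counts[0] = 1
    for j in range(1, n + 1):
        new = counts[:]
        for r in range(2 * n):
            new[(r + j) % (2 * n)] += counts[r]
        counts = new
    return counts[0]

def formula_F(n):
    total = 2 ** (n - 1)
    for m in range(3, n + 1, 2):
        if n % m == 0:
            total += phi(m) * 2 ** (n // m - 1)
    return total // n                      # the total is always divisible by n

for n in range(1, 21):
    assert brute_force_F(n) == formula_F(n)
print("closed form agrees with brute force for n = 1, ..., 20")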
It's not true that $F(n) -1$ is the nearest integer to $2^{n-1}/n$, although $2^{n-1}/n$ is the largest term in $F(n)$. For example, $F(25) = 671092$ but $2^{25-1}/25 = 671088.64$.<|endoftext|> TITLE: An updated alternative to "A Panorama of Pure Mathematics" QUESTION [6 upvotes]: Dieudonne's A Panorama of Pure Mathematics serves as a nice, brisk overview of the state of pure mathematics at its time, but it would be nice if there were an updated version of this book. Is there a more recent book with a similar style and scope to Dieudonne's book? Thanks, in advance. BTW: I tagged this as "math history", but would prefer it to be tagged as "math overview"... REPLY [6 votes]: Try The Princeton Companion to Mathematics.<|endoftext|> TITLE: Equivalent metrics using open balls QUESTION [7 upvotes]: Let $d$ and $p$ be two metrics on a set $X$ and let $m$ and $n$ be positive constants such that $md(x,y) \leq p(x,y) \leq nd(x,y)$ for every $x,y \in X$. Show that every open ball for one metric contains an open ball with the same center for the other metric. Well, to show that every open ball for $p$ contains an open ball for $d$, I have the following - We have a $p$ open ball $B_{\epsilon}^p(x)$ and we want to find a $\delta > 0$ such that $$B_{\delta}^d(x) \subseteq B_{\epsilon}^p(x)$$ We know that $$d(x,y) \geq \frac{p(x,y)}{n}$$ So if we take $\delta = \frac{\epsilon}{n}$ we have $$B_{\epsilon}^p(x) = \{ y \in X | p(x,y) < \epsilon\}$$ $$B_{\delta}^d(x) = \{ y \in X | d(x,y) < \frac{\epsilon}{n}\}$$ and the $d$ open ball will be the same size or smaller than the $p$ open ball. It's hard to explain it properly with just notation, it seems a lot clearer when I draw out diagrams on paper and sub in actual numbers for $n$ and $\epsilon$. Have I got the general idea correct? REPLY [6 votes]: Your choice of $\delta$ does indeed work. Here’s a way of explaining it that may be a bit clearer. You want to choose your $\delta$ so that if $d(x,y)<\delta$, then $p(x,y)<\epsilon$; that will ensure that if $y\in B_\delta^d(x)$, then $y\in B_\epsilon^p(x)$ and hence that $B_\delta^d(x)\subseteq B_\epsilon^p(x)$. You know that $p(x,y)\le nd(x,y)$, so if $d(x,y)<\delta$, then $p(x,y)\le nd(x,y)<n\delta$; taking $\delta=\epsilon/n$ therefore gives $p(x,y)<\epsilon$, as required, and the ball in the other direction is found in exactly the same way from the inequality $md(x,y)\le p(x,y)$.<|endoftext|> TITLE: Why is the collection of all groups a proper class rather than a set? QUESTION [25 upvotes]: According to Wikipedia, The collection of all algebraic objects of a given type will usually be a proper class. Examples include the class of all groups, the class of all vector spaces, and many others. In category theory, a category whose collection of objects forms a proper class (or whose collection of morphisms forms a proper class) is called a large category. I am aware of Russell's Paradox, which explains why not everything is a set, but how can we show the collection of all groups is a proper class? REPLY [35 votes]: The collection of singletons is not a set. Therefore the collection of all trivial groups is not a set. If you wish to consider "up to isomorphism", note that for every infinite cardinal $\kappa$ you can consider the free group, or free abelian group with $\kappa$ generators. These are distinct (up to isomorphism, that is), and since the collection of cardinals is not a set the collection of groups cannot be a set either.<|endoftext|> TITLE: Is there a short proof of $x^2=(-x)^2$ in an arbitrary ring? QUESTION [10 upvotes]: Identity: Let $R$ be a ring and $x \in R$. Then $x^2=(-x)^2.$ It's exam marking time here, and one of the students used the above identity in a proof.
The identity is true, but I can't think of a straightforward proof of this. Question: Is there a short proof of this identity? (Note: $R$ might not have a multiplicative identity.) Here's a proof generated by Prover9, which makes me think there might not be a shorter proof. However, this might not necessarily be true, since Prover9 can only work with the ring theory axioms I input (and would have to prove any auxiliary lemmata we would take for granted). ============================== PROOF ================================= % Proof 1 at 0.01 (+ 0.00) seconds. % Length of proof is 26. % Level of proof is 10. % Maximum clause weight is 16. % Given clauses 30. 1 x * x = -x * -x # label(non_clause) # label(goal). [goal]. 2 x + (y + z) = (x + y) + z. [assumption]. 3 (x + y) + z = x + (y + z). [copy(2),flip(a)]. 4 x + 0 = x. [assumption]. 5 0 + x = x. [assumption]. 6 x + -x = 0. [assumption]. 8 x + y = y + x. [assumption]. 10 x * (y + z) = (x * y) + (x * z). [assumption]. 11 (x + y) * z = (x * z) + (y * z). [assumption]. 12 -c1 * -c1 != c1 * c1. [deny(1)]. 13 x + (-x + y) = y. [para(6(a,1),3(a,1,1)),rewrite([5(2)]),flip(a)]. 18 (x * 0) + (x * y) = x * y. [para(5(a,1),10(a,1,2)),flip(a)]. 19 (x * y) + (x * -y) = x * 0. [para(6(a,1),10(a,1,2)),flip(a)]. 24 --x = x. [para(6(a,1),13(a,1,2)),rewrite([4(2)]),flip(a)]. 25 x + (y + -x) = y. [para(8(a,1),13(a,1,2))]. 27 (x * y) + ((-x * y) + (z * y)) = z * y. [para(13(a,1),11(a,1,1)),rewrite([11(5)]),flip(a)]. 33 -x + (y + x) = y. [para(24(a,1),25(a,1,2,2))]. 40 x + -(x + y) = -y. [para(33(a,1),33(a,1,2)),rewrite([8(3)])]. 57 -(x + y) = -y + -x. [para(33(a,1),40(a,1,2,1)),flip(a)]. 69 x * 0 = 0. [para(18(a,1),33(a,1,2)),rewrite([8(4),6(4)]),flip(a)]. 70 (x * y) + (x * -y) = 0. [back_rewrite(19),rewrite([69(6)])]. 78 -(x * -y) = x * y. [para(70(a,1),33(a,1,2)),rewrite([8(5),5(5)])]. 87 x * -y = -(x * y). [para(78(a,1),24(a,1,1)),flip(a)]. 88 -(-c1 * c1) != c1 * c1. [back_rewrite(12),rewrite([87(5)])]. 101 -(-x * y) = x * y. [para(27(a,1),33(a,1,2)),rewrite([57(5),8(8),13(8)])]. 102 $F. [resolve(101,a,88,a)]. ============================== end of proof ========================== REPLY [2 votes]: Another approach: The proof is easy if the ring has identity, and if not, the ring can always be embedded in a ring with identity. If the statement is true in the larger ring, it's true in the original subring.<|endoftext|> TITLE: Showing that a metric space is complete QUESTION [8 upvotes]: The Wikipedia page on complete metric spaces gives various examples of metric spaces that are and are not complete - http://en.wikipedia.org/wiki/Complete_metric_space Here's a few lines in particular - The open interval $(0, 1)$, again with the absolute value metric, is not complete either. The sequence defined by $x_n = \frac{1}{n}$ is Cauchy, but does not have a limit in the given space. However the closed interval $[0, 1]$ is complete; the given sequence does have a limit in this interval and the limit is zero. I know that a metric space M is complete if every Cauchy sequence of points in M has a limit that is also in M, but that example above just considers one Cauchy sequence and then announces that the interval is complete. How can they say it is complete without considering all possible Cauchy sequences in the interval which is what the definition demands..and for that matter, how would it be possible to consider all Cauchy sequences in an interval given that, I presume, there are an infinite number of them? 
Can anyone clear this up for me...I have a feeling I'm overlooking something straightforward. REPLY [3 votes]: The closed interval $[a, b]$ is complete. Proof: Observe that $[a, b] \subseteq \mathbb R$ where $\mathbb R$ is a complete metric space. Consider $[a, b]^C = (-\infty, a) \cup (b, \infty)$. Now If $x_0 \in (-\infty, a)$, choose $\displaystyle \delta = \frac{a + x_0}{2}$, then $x_0 \in B(x_0, \delta) \subseteq (-\infty, a) \subseteq [a, b]^C$. If $x_0 \in (b, \infty)$, choose $\displaystyle \delta = \frac{b + x_0}{2}$, then $x_0 \in B(x_0, \delta) \subseteq (b, \infty) \subseteq [a, b]^C$. Thus $[a, b]^C$ contains a ball about each of its points, so $[a, b]^C$ is open and thus $[a, b]$ is closed. Suppose $(x_n)$ is a Cauchy sequence in $[a, b]$. Since every Cauchy sequence in $\mathbb R$ converges, it follows that $x_n \to x \in \overline{[a, b]}$ where $\overline{[a, b]}$ is the closure of $[a, b]$ ($\overline{[a, b]} = [a, b] \cup \{\text{accumulation points}\}$) which is the smallest closed set containing $[a, b]$. But observe that the smallest closed set containing $[a, b]$ is $[a, b]$. Hence $x \in \overline{[a, b]} = [a, b]$ Thus $x_n \to x \in [a, b]$ and we can conclude that $[a, b]$ is a complete metric space. In particular, $a=0$ and $b=1$ implies $[0, 1]$ is complete.<|endoftext|> TITLE: Functions space of discrete space: how does taking quotients lead to noncommutativity? QUESTION [5 upvotes]: It is pointed out in Geometry from the spectral point of view the following: If one considers a discrete space, say, the two-point space $\{1,2\}$, after identifying its points $X=\{1,2\}/\sim$, the algebra of functions $A=C(\{1,2\}/\sim,\mathbb{C})$ is the algebra of matrices $M_2(\mathbb{C})$ with usual matrix product. According to the author, the noncommutativity of this algebra is a result of the relation between the "points". (If one takes the quotient one has $X=\{*\}$, whose algebra is $\mathbb{C}$. I still have no problem with this apparent ambiguity, i.e. $M_2(\mathbb{C})$ is Morita equivalent to $\mathbb{C}$, so it will have the same "noncommutative topology", so to say). However, concerning Connes' statement, Question: where does the usual matrix product comes from? REPLY [3 votes]: It comes from composition of isomorphisms. One version of the "algebra of functions" on, say, a finite groupoid $G$ is its groupoid algebra $\mathbb{C}[G]$, which is a direct generalization of the group algebra: take the free vector space on the morphisms in $G$ with multiplication given by composition (or $0$ if there is no composition). If $G$ is an equivalence relation, then $\mathbb{C}[G]$ is a finite direct product of matrix algebras.<|endoftext|> TITLE: Direct limits and $\rm Hom$ QUESTION [5 upvotes]: I read that $\lim\limits_{\longleftarrow}\mathrm{Hom}(N_j,M)\cong\mathrm{Hom}(\lim\limits_{\longrightarrow}N_j,M)$. I was wondering if we can write $\lim\limits_{\longrightarrow}\mathrm{Hom}(N_j,M)$ as $\mathrm{Hom}(X,M)$ for some $X$. Do you know if we can? I would be happy also if you can give me only a reference. (Here I'm talking of modules over commutative rings, we can also suppose that the $N_j$'s and $M$ are finitely generated, we can also add some other conditions if you wish, maybe noetherianity). I also read that $\mathrm{Hom}(M,\lim\limits_{\longleftarrow}N_j)\cong\lim\limits_{\longleftarrow}\mathrm{Hom}(M,N_j)$. I was wondering if under some hypothesis $\mathrm{Hom}(M,\lim\limits_{\longrightarrow}N_j)\cong \lim\limits_{\longrightarrow}\mathrm{Hom}(M,N_j)$. 
I would be happy if you can help me even in only one of those questions. EDIT: what about if $M$ is not even finitely generated? EDIT: Suppose that the transition maps in $\{N_j\}$ are injective. We can see $M$ as a direct limit of finitely generated modules. Is it true that $\lim\limits_{\longleftarrow_n}\lim\limits_{\longrightarrow_j}Hom(M_n,N_j)$=$\lim\limits_{\longrightarrow_j}\lim\limits_{\longleftarrow_n}Hom(M_n,N_j)$? If this is true then we can drop the hypothesis $M$ finitely generated in the answer of Matt E. EDIT: what about $\mathrm{Hom}(\lim\limits_{\longleftarrow}\;M_n,N)$? REPLY [8 votes]: There is a natural isomorphism $\varinjlim Hom(M,N_j) \cong Hom(M,\varinjlim N_j)$ for all filtered direct systems $N_j$ if and only if $M$ is finitely presented. If you assume that the transition maps in the system $\{N_j\}$ are injective, then finitely generated is enough. (See this answer.)<|endoftext|> TITLE: The openness of the set of positive definite square matrices QUESTION [14 upvotes]: Let $\mathbb{R}^{n\times n}$ be the vector space of square matrices with real entries. For each $A\in \mathbb{R}^{n\times n}$ we consider the norms given by: $$ \displaystyle\|A\|_1=\max_{1\leq j\leq n}\sum_{i=1}^{n}|a_{ij}|; $$ $$ \displaystyle\|A\|_\infty=\max_{1\leq i\leq n}\sum_{j=1}^{n}|a_{ij}|; $$ $$ \displaystyle\|A\|_\text{max}=\max\{|a_{ij}|\}. $$ Matrix $A\in \mathbb{R}^{n\times n}$ is said to be positive definite iff $$ \langle Ax, x\rangle> 0 \quad \forall x\in\mathbb{R}^n\setminus\{0\}. $$ Let $S$ be the set of all positive definite matrices on $\mathbb{R}^{n\times n}$. Prove that $S$ is an open set in $(X,\|.\|_1)$, $(X,\|.\|_\infty)$, $(X,\|.\|_\text{max})$. I would like to thank all for their help and comments. REPLY [18 votes]: Restricting to the unit ball is always illuminating. Let $A$ be a given positive definite matrix; then there is $\delta>0$ such that \begin{equation} \langle Ax,x\rangle\ge\delta \end{equation} for all $\|x\|=1$. We use the 2-norm, defined by \begin{equation} \|A\|=\operatorname{sup}_{\|x\|=1}\|Ax\|, \end{equation} which is equivalent to any of the other norms. If $B$ is very close to $A$, say, $\|B-A\|<\epsilon$, then \begin{equation} |\langle Bx,x\rangle-\langle Ax,x\rangle|=|\langle (B-A)x,x\rangle|<\epsilon\|x\|^2, \end{equation} so if you restrict to the unit ball again then you can bound $\langle Bx,x\rangle$ from below using the positive definiteness of $A$ and controlling $\epsilon$, and this will lead to the positive definiteness of $B$.<|endoftext|> TITLE: Integrating a 3-form over the 3-sphere QUESTION [5 upvotes]: Consider the 1-form $\alpha = xdz + ydw -(x^2 + y^2 + z^2 + w^2)dt$ on $\mathbb{R}^5$. I'm trying to find $\int_S d\alpha \wedge d\alpha$, where $S \subset \mathbb{R}^5$ is given by $x^2 + y^2 + z^2 + w^2 =1$ and $0\leq t \leq 1$. Restricting to $S$, we get $\alpha = xdz + ydw -dt$, so $d\alpha = dx\wedge dz + dy \wedge dw$. Now $d\alpha \wedge d\alpha$ = $d(\alpha \wedge d\alpha)$, so by Stokes' Theorem, $\int_S d\alpha \wedge d\alpha = \int_{\partial S} \alpha \wedge d\alpha$. I found that $\alpha \wedge d\alpha = xdz\wedge dy \wedge dw + ydw\wedge dx \wedge dz - dt \wedge dx \wedge dz - dt \wedge dy \wedge dw$. The boundary $\partial S$ of $S$ is the disjoint union $S^3 \times \{0\} \cup S^3 \times \{1\}$, where $S^3$ is the unit sphere in the $x,y,z,w$-subspace of $\mathbb{R}^5 = \{(x, y, z, w, t)\}$. How do I integrate this 3-form $\alpha \wedge d\alpha$ over a boundary sphere of $S$? (This isn't homework! I've never seen explicitly how to integrate differential forms over a manifold, so a worked example would be really helpful.)
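(Aside: the reply that follows reduces the whole computation to the volume of the unit $4$-ball, $\mathrm{Vol}(B^4)=\pi^2/2$. For readers who like a numerical cross-check, here is a quick Monte Carlo estimate in Python; the names are illustrative only.)

# Sketch: Monte Carlo estimate of Vol(B^4) = pi^2/2, approximately 4.9348
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
pts = rng.uniform(-1.0, 1.0, size=(N, 4))            # sample the cube [-1,1]^4, which has volume 16
inside = np.count_nonzero((pts ** 2).sum(axis=1) <= 1.0)
print(16.0 * inside / N, np.pi ** 2 / 2)              # both are approximately 4.93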
REPLY [2 votes]: Since $t$ is constant over each component of $\partial S$, $dt|_{\partial S}=0$. So it suffices to integrate $\eta=x \, dz \wedge dy \wedge dw + y \, dw \wedge dx \wedge dz$; the last two terms of $\alpha \wedge d\alpha$ vanish on $\partial S$. But $\eta$ is independent of $t$, so it is identical on the two boundary components. Since whatever orientation you choose on $S$ will determine opposite orientations on the two boundary components, it follows that $\int_{\partial S} \eta = 0$. So that's pretty boring. Let's integrate $\eta$ over each boundary component individually, just to see what happens. Notice that: $$d\eta = dx \wedge dz \wedge dy \wedge dw + dy \wedge dw \wedge dx \wedge dz = -2 dx \wedge dy \wedge dz \wedge dw \, ;$$ that is, $d\eta$ is a constant multiple of the volume form on each $x,y,z,w$-subspace of $\Bbb{R}^5$. So, applying Stokes' theorem again and recalling the formula for the volume of a hyperball, we have $$\int_{S^3 \times \{1\}} \eta = \int_{B^4 \times \{1\}} d\eta = -2 \mathrm{Vol}(B^4)= \pm \pi^2$$ depending on your original choice of orientation for $S$. You can do the same thing with $S^3 \times \{0\}$, except that you'll have to choose the opposite sign in order to be consistent.<|endoftext|> TITLE: A hole puncher that hates irrational distances QUESTION [11 upvotes]: Possible Duplicate: Irrational painting device I have a special hole puncher that does the following: When applied to any point $ x \in \mathbb{R}^{2} $, it removes all points in $ \mathbb{R}^{2} $ whose distance from $ x $ is irrational (by this, it is clear that $ x $ is not removed). Is there a minimum number of times that I can apply the hole puncher (to various points in $ \mathbb{R}^{2} $, of course) so as to remove every point in $ \mathbb{R}^{2} $? REPLY [8 votes]: Three. 1) Convince yourself two won't be enough. 2) Consider $(0, 0), (1,0)$ and $(\pi, 0)$.<|endoftext|> TITLE: Cubic (3-regular) graph spanning tree QUESTION [5 upvotes]: Considering loop free cubic graphs (graphs where every node has 3 neighboring nodes): Is is possible to construct a spanning tree that only has nodes with 3 neighbors in the spanning tree or 1 neighbor in the spanning tree (leaves). That is I want to be able to construct a spanning tree where there are no nodes that are connected to only 2 other nodes in the spanning tree. They should all be connected to either 1 node (a leaf) or all their edges in the underlying cubic graph should also be present in the spanning tree (ie attached to 3 nodes in the spanning tree)? Does anyone know the answer to this? And if it is possible, how to construct such a spanning tree? Many thanks. REPLY [4 votes]: It's not always possible. For example:<|endoftext|> TITLE: connected $\Rightarrow$ path connected? QUESTION [7 upvotes]: Well, so far, I have noticed that whenever a matrix lie group is connected it is path connected, so is it true that in matrix lie group connected $\Rightarrow$ path connected?If yes, could anyone tell me where I can get the proof?or if some one tell me the sketch of the proof. Thank you. REPLY [3 votes]: Like Qiaochu Yuan said, any connected locally path-connected space is path-connected. 
This is because local path-connectedness implies that the path-connected components are open (this is essentially by definition: every point admits a path-connected neighbourhood, and hence is an interior point of its path-connected component), and therefore are also closed (since the complement of a component is the union of the other components, hence a union of open sets). Therefore, by connectedness, there is only one path-connected component and it is everything.<|endoftext|> TITLE: Infinitely many primes for quadratic residues QUESTION [10 upvotes]: Let $a \in \mathbb{N}.$ Prove there are infinitely many primes $p$ satisfying $$\left(\frac{a}{p}\right) =1.$$ Remark: One may need to use the Dirichlet theorem, which states that if $a,m$ are coprime then there exist infinitely many primes $p$ such that $p \equiv a \pmod{m}.$ REPLY [11 votes]: You don't need Dirichlet's theorem. The following more general result is true. Proposition: Let $f(x) \in \mathbb{Z}[x]$ be a nonconstant polynomial with nonzero constant term. Then there exist infinitely many primes $p$ such that $p | f(n)$ for some $n \in \mathbb{Z}$. To get the desired result take $f(x) = x^2 - a$. Proof 1. Euler's proof of the infinitude of the primes carries over to this result. Certainly there exists at least one such prime. If $p_1, ... p_n$ are a finite collection of such primes, then $\frac{f(k f(0) p_1 ... p_n)}{f(0)}$ is relatively prime to each $p_i$ for any $k$, so for sufficiently large $k$ it has a prime divisor not among the $p_i$. Proof 2. More generally we can replace $f(n)$ by a sequence which grows at most polynomially. See http://qchu.wordpress.com/2009/09/02/some-remarks-on-the-infinitude-of-primes/.<|endoftext|> TITLE: Half the rationals? QUESTION [9 upvotes]: Let $\mathbb{Q}[n]$ be the set of rational numbers with denominator $\le n$ and for any $X\subseteq \mathbb{Q}$, let $X[n]=X\cap \mathbb{Q}[n]$. Is there a set of rational numbers, X, such that for any interval Y of rationals: $$\underset{n\to \infty }{\mathop{\lim }}\,\frac{card(X[n]\cap Y)}{card(\mathbb{Q}[n]\cap Y)} = 1/2 ?$$ REPLY [2 votes]: As I mentioned in a comment above there is a nice set $X$ containing half the rationals with a very simple description: the set of reduced fractions $a/b$ with $\operatorname{gcd}(ab,3)=1$. To see it, we can map each reduced fraction $a/b$ to the pair $(x,y) \equiv (a,b) \pmod{3}$. There are 8 possible pairs $(x,y)$: $$ (0,1),(0,2),(1,0),(2,0),(1,1),(1,2),(2,1),(2,2) $$ If we can prove that reduced fractions in an interval $Y$ are uniformly distributed between these 8 pairs then we have the result. It is clearly enough to prove this for an interval $Y = [0,\lambda)$ for some real $\lambda > 0$.
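Before the detailed count, here is a quick numerical sanity check of the claim (a minimal Python sketch, not part of the proof; the denominator bound $N$ and the restriction to fractions in $[0,1)$ are arbitrary choices made only for illustration): the proportion of reduced fractions with denominator at most $N$ that lie in $X$ does appear to approach $1/2$.

```python
from math import gcd

def proportion_in_X(N):
    # Reduced fractions a/b in [0, 1) with denominator b <= N.
    total = hits = 0
    for b in range(1, N + 1):
        for a in range(b):
            if gcd(a, b) == 1:
                total += 1
                # a/b lies in X exactly when gcd(a*b, 3) = 1,
                # i.e. neither a nor b is divisible by 3.
                if a % 3 != 0 and b % 3 != 0:
                    hits += 1
    return hits / total

for N in (10, 100, 1000):
    print(N, proportion_in_X(N))   # the ratio tends to 0.5 as N grows
```

The computation that follows makes the rate of convergence precise.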
Let's fix a pair $(x,y)$ and call $A_{(x,y)}(n)$ the number of fractions $a/b$ with $b\le n$, $\operatorname{gcd}(a,b)=1$, $a < \lambda b$, and $(a,b)\equiv(x,y)\pmod{3}$, i.e. the horrid sum: $$ A_{(x,y)}(n) = \sum_{b\le n}\sum_{\begin{matrix}\operatorname{gcd}(a,b)=1\\(a,b)\equiv (x,y) \pmod{3}\\a \le \lambda b\end{matrix}} 1 $$ Now we use the Möbius identity: $$ \sum_{d \vert \operatorname{gcd}(a,b)}\mu(d) = \begin{cases}1 \quad&\text{if }\operatorname{gcd}(a,b)=1\\0&\text{otherwise}\end{cases} $$ to get $$ A_{(x,y)}(n) = \sum_{b\le n}\sum_{\begin{matrix}(a,b)\equiv (x,y) \pmod{3}\\a \le \lambda b\end{matrix}} \sum_{d\vert \operatorname{gcd}(a,b)} \mu(d) $$ We can invert these sums, putting $b = kd, a = td$ and then $t < \lambda k$, so we have $$ A_{(x,y)}(n) = \sum_{d\le n} \mu(d) \sum_{\begin{matrix}kd\le n\\kd\equiv y\pmod{3}\end{matrix}}\sum_{\begin{matrix}t<\lambda k \\td\equiv x\pmod{3}\end{matrix}} 1 $$ In the outer sum we can limit ourselves to integers $d$ coprime with 3 (if $3 \vert d$, as $(x,y)\not\equiv (0,0)\pmod{3}$ there is nothing to sum) so we have: $$ \begin{align} A_{(x,y)}(n) &= \sum_{\begin{matrix}d\le n\\d\not\equiv 0\pmod{3}\end{matrix}} \mu(d) \sum_{\begin{matrix}kd\le n\\kd\equiv y\pmod{3}\end{matrix}}\sum_{\begin{matrix}t<\lambda k \\td\equiv x\pmod{3}\end{matrix}} 1\\ &=\sum_{\begin{matrix}d\le n\\d\not\equiv 0\pmod{3}\end{matrix}} \mu(d) \sum_{\begin{matrix}kd\le n\\kd\equiv y\pmod{3}\end{matrix}} \left(\frac{\lambda k}3 + O(1)\right) \\ &=\sum_{\begin{matrix}d\le n\\d\not\equiv 0\pmod{3}\end{matrix}} \mu(d) \left( \frac{ \lambda n^2}{18d^2} + O(n/d) \right)\\ &=\frac{\lambda n^2}{18}\sum_{\begin{matrix}d=1\\d\not\equiv 0\pmod{3}\end{matrix}}^\infty \frac{\mu(d)}{d^2} + O(n) + O(n\log n) \end{align} $$ We can evaluate the constant in the last equation: call $$ B = \sum_{\begin{matrix}d=1\\d\not\equiv 0\pmod{3}\end{matrix}}^\infty \frac{\mu(d)}{d^2} $$ then $$ B - \frac{B}{9} = \sum_{\begin{matrix}d=1\\d\not\equiv 0\pmod{3}\end{matrix}}^\infty \frac{\mu(d)}{d^2} +\sum_{\begin{matrix}d=1\\d\not\equiv 0\pmod{3}\end{matrix}}^\infty \frac{\mu(3d)}{(3d)^2} = \sum_{d=1}^\infty \frac{\mu(d)}{d^2} = \frac{1}{\zeta(2)} = \frac{6}{\pi^2}$$ and so: $$ B = \frac{27}{4\pi^2} $$ and finally $$ A_{(x,y)}(n) = \frac{3\lambda }{8\pi^2} n^2 + O(n\log n) $$ so each pair $(x,y)$ ultimately has the same proportion $1/8$ of all reduced fractions in the interval, as was to be shown. Note: I suppose that with some additional work it can be proven that for a given modulus $m$ the reduced fractions $a/b$ are uniformly distributed when reducing mod $m$ between the pairs $(x,y)$ such that $\operatorname{gcd}(x,y,m)=1$. And then we could find sets containing any given proportion of the rationals with the same procedure.<|endoftext|> TITLE: Existence of non-trivial ultrafilter closed under countable intersection QUESTION [8 upvotes]: Under what conditions on $\Omega$ does there exist $\mathcal{F} \subset \mathcal{P}(\Omega)$ such that $\mathcal{F}$ is a non-trivial ultrafilter and, for every sequence $(F_{i})_{i \in N}$ of elements of $\mathcal{F}$, $\bigcap_{i \in N}{F_{i}} \in \mathcal{F}$? REPLY [9 votes]: This is actually a very strong property.
It is an old result that if $\kappa$ is the least cardinal such that there is a countably complete nonprincipal ultrafilter on $\kappa$ (i.e., the ultrafilter is closed under countable intersections), then that ultrafilter is actually $\kappa$-complete (closed under $\lambda$ intersections for all $\lambda < \kappa$), and therefore $\kappa$ is a measurable cardinal. Therefore the cardinality of $\Omega$ must be at least the smallest measurable cardinal. Addendum: The main point of this being that measurable cardinals may not exist. For example, it is a result of Dana Scott that measurable cardinals provably do not exist in Gödel's Constructible Universe. At a more basic level, as measurable cardinals are strongly inaccessible, if $\kappa$ is the least measurable cardinal, then $V_\kappa$ is a model of ZFC and $V_\kappa \models \not\exists\text{ measurable cardinal}$.<|endoftext|> TITLE: Orders of Growth between Polynomial and Exponential QUESTION [12 upvotes]: What is known in contemporary mathematics about orders of growth for functions that exceed any degree polynomial, but fall short of exponential? This is a subject for which I've found little literature in the past. An example: $Ae^{a\sqrt x}$ clearly will outrun any finite degree polynomial, but will be outrun by $Be^{bx}$. If we replace $x$ with $y^2$ then that example doesn't seem so deep. Are there functions that exceed polynomial growth yet fall short of $Ae^{ax^p}$ for any power $0<p<1$?<|endoftext|> TITLE: Finding the Jordan canonical form of this upper triangular $3\times3$ matrix QUESTION [11 upvotes]: I am supposed to find the Jordan canonical form of a couple of matrices, but I was absent for a few lectures. \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 2 \\ 0 & 0 & 3 \end{bmatrix} Since this is an upper triangular matrix, its eigenvalues are the diagonal entries. Hence $\lambda_{1,2}=1$ and $\lambda_3 = 3$, with corresponding eigenvectors $(1,2,2)$ and $(1,0,0)$. Now what? I do not know how to proceed, nor what it means that my matrix is built up by Jordan blocks. REPLY [17 votes]: Since this is a very small matrix there's actually no need to explicitly find the Jordan form by going through the routine tedious procedure. You know the characteristic polynomial is $$p(x) = (x-1)^2(x-3)$$ and you can check that the minimal polynomial is the same. This means a few things. You have a single Jordan block corresponding to $3$. This is just $(3)$. You cannot have two Jordan blocks corresponding to $1$ since that would make the matrix diagonalizable (which it is not since the minimal polynomial does not split into distinct factors). Therefore you must have a single block of the form $\begin{pmatrix}1 & 1 \\ 0 & 1\end{pmatrix}$. These combined give you the form (up to order of the blocks) $$J=\begin{pmatrix}1 & 1 & 0 \\ 0 & 1 & 0 \\ 0& 0 & 3\end{pmatrix}$$ In general, for small matrices like these (anything up to $6\times 6$) you can find the Jordan form through similar types of analysis using the facts: The geometric multiplicity of an eigenvalue is the number of blocks corresponding to it. The algebraic multiplicity of an eigenvalue is the sum of the total sizes of the blocks. The exponent of the term corresponding to an eigenvalue in the minimal polynomial is the size of the largest block.<|endoftext|> TITLE: Lie group homomorphism from $\mathbb{R}\rightarrow S^1$ QUESTION [6 upvotes]: I need to prove that every Lie group homomorphism from $\mathbb{R}\rightarrow S^1$ is of the form $x\mapsto e^{iax}$ for some $a\in\mathbb{R}$.
Here is my attempt: As it is a group homomorphism, it must satisfy $\phi(x+y)=\phi(x)\cdot\phi(y),\forall x,y\in\mathbb{R}$. I know a result that if a continuous function satisfies this rule, then it is of the form $e^x$; is this the same trick we need to apply here? REPLY [3 votes]: I found an elementary proof which expands on the intuition given by user38268 years ago. I'm posting this new answer because the already accepted answer given by Rudy the Reindeer requires knowledge of the theory of covering spaces and liftings in order to be understood, and I think it's useful to have access to elementary proofs which don't rely on another theory, just in case. In fact, I'm not capable myself of following Rudy's answer, since I don't know that much about covering spaces and liftings. So I post this proof in order to help future people in my situation who get to this thread for help, as was my case. This proof was motivated by the hint given by Brian C. Hall in his book Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, chapter 1, exercise 18. This exercise asks the very same question as this thread. Let $Φ:(\mathbb{R},+)\to (S^1,\cdot)$ be a continuous group homomorphism. We shall prove that there exists $a\in\mathbb{R}$ such that $Φ(x)=e^{iax}$ for all real $x$. The hint given by Hall is the following: Since $Φ$ is continuous and $Φ(0)=1$ (because group homomorphisms preserve the identity) there exists some $ε>0$ such that, if $|x|\leq ε$, then $Φ(x)$ belongs to the (open) right half of the unit circle. Using the customary continuous determination of the complex argument $\operatorname{arg}:\mathbb{C}\setminus\mathbb{R}_{\leq 0}\to(-π,π)$ we define the continuous function $θ:[0,ε]\to (-\frac{π}{2},\frac{π}{2})$ as $θ(x)=\operatorname{arg}(Φ(x))$. Thus $$\tag{1}\label{ec:phi} Φ(x)=e^{iθ(x)} $$ for every $x\in[0,ε]$. The whole trick of the proof is that it will suffice to know the value of $Φ$ on $[0,ε]$ to deduce its value on all of $\mathbb{R}$. And by what has already been presented, working with $Φ|_{[0,ε]}$ amounts to working with $θ$, the latter having a more tractable codomain than the former. Furthermore, since $θ$ is continuous, it will be sufficient to know the values of $θ$ on a dense subset of $[0,ε]$. This will be calculated thanks to $Φ$ being a homomorphism, which will transmit to $θ$ some sort of additivity: Let's see that $θ$ is additive under closed sums in $[0,ε]$. Let $x,y\in [0,ε]$ be some numbers such that $x+y\in [0,ε]$. Then $$ e^{iθ(x+y)}=Φ(x+y)=Φ(x)Φ(y)=e^{iθ(x)}e^{iθ(y)}=e^{i[θ(x)+θ(y)]}. $$ Since the exponential function is injective up to the sum of an integer multiple of $2π$, $$ θ(x+y)=θ(x)+θ(y)+2kπ $$ for some $k\in\mathbb{Z}$. And due to the triangle inequality, $$ 2|k|π=|θ(x+y)-θ(x)-θ(y)|\leq \frac{3π}{2}, $$ from which $k=0$ is deduced. This proves that $θ(x+y)=θ(x)+θ(y)$ for every $0\leq x,y \leq ε$ satisfying also $0\leq x+y\leq ε$. A similar property follows via induction for $n$ numbers contained in $[0,ε]$ whose total sum remains inside this interval. We'll now compute the value of $θ(\frac{k}{n}ε)$, where $k,n\in\mathbb{N}$ and $k\leq n$, in terms of the value of $θ(ε)$. Using the additivity for $θ$ we just showed, $$ θ(ε)=θ\Bigg(\sum_{j=1}^n \frac{ε}{n}\Bigg)=\sum_{j=1}^n θ\left(\frac{ε}{n}\right)=nθ\left(\frac{ε}{n}\right). $$ Hence $θ(\frac{ε}{n})=\frac{1}{n}θ(ε)$.
Now, if $k\in\mathbb{N}$ is such that $k\leq n$, then using this additivity again, $$ θ\left(k\frac{ε}{n}\right)=θ\left(\sum_{j=1}^k\frac{ε}{n}\right)=\sum_{j=1}^kθ\left(\frac{ε}{n}\right)=kθ\left(\frac{ε}{n}\right) $$ and so $$\tag{2}\label{ec:A} θ(\tfrac{k}{n}ε)=\tfrac{k}{n}θ(ε). $$ Define now $$ A=\{\tfrac{k}{n}: k,n\in\mathbb{N}, k\leq n\}. $$ It is not difficult to see that $\overline{A}=[0,1]$. Given $t\in [0,1]$ and $δ>0$, taking $n$ such that $\frac{1}{n}<δ$, we can always find $0\leq k\leq n$ satisfying $|\frac{k}{n}-t|<δ$. Since $t\in [0,1]\mapsto tε\in [0,ε]$ is a homeomorphism, this gives us $\overline{Aε}=[0,ε]$. Provided that we know the value of $θ$ on $Aε$ in terms of $θ(ε)$, \eqref{ec:A}, continuity gives us its value on $\overline{Aε}=[0,ε]$. Indeed, if $tε\in[0,ε]$ and $\{a_nε\}\subset Aε$ is some sequence with $a_nε\to tε$, then $$ θ(tε)=\lim_n θ(a_n ε) =\lim_n a_n θ(ε)=tθ(ε). $$ This proves that $θ(x)=x\frac{θ(ε)}{ε}$ for every $x\in[0,ε]$. Finally, we show that the knowledge of the function $θ$ on $[0,ε]$ gives us the value of $Φ$ on $\mathbb{R}$. Let $x\in\mathbb{R}$. If $x$ is non-negative, then take $n$ big enough so $0\leq\frac{x}{n}\leq ε$. Expressing $x=\sum_{j=1}^n \frac{x}{n}$, using that $Φ$ is a homomorphism and \eqref{ec:phi} gives $Φ(x)=e^{i\frac{θ(ε)}{ε}x}$. If $x$ is negative, then, since group homomorphisms preserve inverses, $$ Φ(x)=Φ(-(-x))=Φ(-x)^{-1}=(e^{i\frac{θ(ε)}{ε}(-x)})^{-1}=e^{i\frac{θ(ε)}{ε}x}. $$<|endoftext|> TITLE: When does $X^\star$ separate the points of $X$? QUESTION [7 upvotes]: Let $X$ be a separable, infinite dimensional Banach space. Does $X^\star$ (the set of bounded complex linear functionals) separate the points of $X$? (meaning, for every two vectors $x,y\in X$ there is some $\phi \in X^\star$ such that $\phi(x)\neq\phi(y)$). What if $X$ is not a Banach space and is just a Fréchet space? REPLY [8 votes]: Yes, the dual $X^*$ of every Banach space $X$ separates the points of $X$. This is an immediate consequence of the Hahn-Banach theorem. A proof can be found in every introductory course on functional analysis. Moreover, the Hahn-Banach theorem holds in locally convex spaces. Thus the statement is also true for Fréchet spaces.<|endoftext|> TITLE: Proving: $\operatorname{P.V.} \int^\infty_{-\infty}\frac{\ln(t^2+1)}{t^2-1}dt =\frac{\pi^2}{2}$ QUESTION [6 upvotes]: How to prove that $$\operatorname{P.V.}\int^\infty_{-\infty}\frac{\ln(t^2+1)}{t^2-1}dt =\frac{\pi^2}{2}\: ?$$ REPLY [6 votes]: If you integrate along this closed curve in $\mathbb{C}$ then you get $0$ by Cauchy's theorem. The integral you want is along the horizontal parts of the contour, as the small circles vanish and the big one goes to $\infty$. The contributions of the small circles cancel each other (the two residues have opposite signs), the integral along the big semicircle goes to $0$, so your integral is equal to the integral along the vertical part of the contour. Now notice that $\log(t^2+1)$ has different values on the two sides of the vertical path (that's why the integral is not $0$), but they differ just by $2\pi i$. What remains is (setting $t=is$) $$\int_1^\infty 2\pi/(1+s^2)\, ds=\pi^2/2$$<|endoftext|> TITLE: A theorem due to Gelfand and Kolmogorov QUESTION [17 upvotes]: If $X$ is a topological space, one can define $C(X)$ to be the commutative ring of continuous functions $f\,:\,X\rightarrow \mathbb{R}$ under pointwise addition and multiplication. Then $C(-)$ becomes a contravariant functor $C(-)\,:\,\bf{Top}\rightarrow \text{ComRing}$.
A theorem due to Gelfand and Kolmogorov states the following: Let $X$ and $Y$ be compact Hausdorff spaces. If $C(X)$ and $C(Y)$ are isomorphic as rings, then $X$ and $Y$ are homeomorphic. I encountered this theorem as an example in a book on homological algebra, without proof. I have searched for the proof, but have been unable to find it. If anyone has an idea of how to prove this, or a reference to a proof, I would appreciate it greatly. REPLY [7 votes]: The key to the proof is the following fact. Lemma: Let $X$ be a compact space and let $\varphi:C(X)\to\mathbb{R}$ be a ring-homomorphism. Then there exists $x\in X$ such that $\varphi(f)=f(x)$ for all $f\in C(X)$. Proof: Note that if $f\in C(X)$ is such that $f\geq 0$ everywhere, then $\varphi(f)=\varphi(\sqrt{f})^2\geq 0$. I claim furthermore that $\varphi$ is an $\mathbb{R}$-algebra homomorphism, so $\varphi(r)=r$ for any $r\in\mathbb{R}$ (thinking of it in $C(X)$ as a constant function on $X$). Indeed, we know this must be true if $r\in\mathbb{Q}$; for arbitrary $r\in\mathbb{R}$, now use the fact that $\varphi(r-q)\geq0$ if $q\leq r$ and $q\in\mathbb{Q}$ and $\varphi(q-r)\geq 0$ if $q\geq r$ and $q\in\mathbb{Q}$. Now suppose that $\varphi$ is not given by evaluation at any point. Then for each $x\in X$, there is a function $f_x\in C(X)$ such that $\varphi(f_x)\neq f_x(x)$. Letting $r=\varphi(f_x)$ and replacing $f_x$ with $f_x-r$, we may assume $f_x(x)\neq 0$ and $\varphi(f_x)=0$. Replacing $f_x$ with its square, we may further assume that $f_x\geq 0$ everywhere. By compactness of $X$, finitely many of the sets $\{y:f_x(y)>0\}$ cover $X$, and so adding together the corresponding $f_x$'s, we get a function $f\in C(X)$ such that $f>0$ everywhere and $\varphi(f)=0$. But then $1/f$ is continuous so $f$ is a unit and so $\varphi(f)$ cannot be $0$, so this is a contradiction. Given this fact, the result you ask for follows easily. If $X$ is compact Hausdorff, then we can recover the set $X$ from $C(X)$ (up to canonical bijection) as the set of ring-homomorphisms $C(X)\to\mathbb{R}$. We can moreover recover the topology on $X$ since it is the coarsest topology that makes each element of $C(X)$ continuous, by Urysohn's lemma. (Here if we are identifying $X$ with homomorphisms $C(X)\to\mathbb{R}$, we can think of an element of $C(X)$ as a function on $X$ by evaluation.) So we can recover the space $X$ up to homeomorphism from the ring $C(X)$. (In fact, it similarly follows from the Lemma that if $X$ and $Y$ are compact Hausdorff, then ring-homomorphisms $C(X)\to C(Y)$ are naturally in bijection with continuous maps $Y\to X$, and this preserves composition. So this gives a contravariant equivalence of categories between compact Hausdorff spaces and rings of the form $C(X)$.)<|endoftext|> TITLE: Computing $ I_{n}=\int \tan(x)^n \mathrm dx$ QUESTION [6 upvotes]: I'm trying to compute: $$ I_{n}=\int \tan(x)^n \mathrm dx$$ We have: $$ I_{n}+I_{n-2}=\int (1+\tan(x)^2)\tan(x)^{n-2} \mathrm dx$$ $$ I_{n}=\frac{1}{n-1}\tan(x)^{n-1}-I_{n-2}+C$$ Which gives the formulas: $$ \int \tan(x)^{2n} \mathrm dx= \sum_{k=0}^{n-1} \frac{(-1)^k}{2n-(2k+1)}\tan(x)^{2n-(2k+1)}+(-1)^nx+C$$ $$ \int \tan(x)^{2n+1} \mathrm dx=\sum_{k=0}^{n-1} \frac{(-1)^k}{2(n-k)}\tan(x)^{2(n-k)}+(-1)^{n+1}\ln(\cos(x))+C$$ I would just like to know if these equalities are correct. REPLY [2 votes]: I think that you can omit the constant $C$ in the last two equations, because the left-hand side can absorb the constant.<|endoftext|> TITLE: Let $f(z)$ be an entire function.
Show that if $f(z)$ is real when $|z| = 1$, then $f(z)$ must be a constant function using Maximum Modulus theorem QUESTION [19 upvotes]: Let $f(z)$ be an entire function. Consider the functions $e^{if(z)}$ and $e^{−if(z)}$ and applying the Maximum Modulus Theorem, show that if $f(z)$ is real when $|z| = 1$, then $f(z)$ must be a constant function. (We take $f(z)=u(z)+iv(z)$) I am confused as so far I have $|g(z)|=|e^{if(z)}|=|e^{-v(z)}|$ and then since $f(z)$ is real, $f(z)=u(z)$ and $v(z)=0$ so I assumed it would follow that $|g(z)|=|e^{v(z)}|=1$. Similarly, $|g(z)|=|e^{-if(z)}|=|e^{v(z)}|=1$. Using Liouville I assumed one could say that both $g(z)$ and $h(z)$ are bounded entire functions, they are constant and so it follows that $v(z)$ is constant, meaning that both its partial derivatives are equal to 0 and, due to Cauchy Riemann, both of the partial derivatives of $u(z)$ are equal to zero. It would then follow that $f(z)$ is constant. I don't know how to go about the question using the Maximum Modulus Theorem, and also I feel I am overlooking the importance of $|z|=1$ perhaps? Any help would be much appreciated!! REPLY [21 votes]: The function $g:z\mapsto e^{if(z)}$ is entire. Since $f$ is real on the unit circle $\mathbb{S}^1$, it turns out that $|g|=1$ on this set. But since $g$ is entire, using the Maximum Modulus Theorem, we know that $|g(z)| \leq 1$ for all $|z| \leq 1$. This means that (using your notations: $f(z)=u(z)+iv(z)$) $v \geq 0$ for $|z| \leq 1$. Same reasoning with $h:z\mapsto e^{-if(z)}$ leads to $v \leq 0$ on the unit disk and hence $v(z)=0$ on the unit disk, that is, $f$ takes only real values on the whole unit disk, which happens only if $f$ is constant (open mapping theorem). Ayman REPLY [11 votes]: The maximum modulus principle says that $|g(z)|=|e^{if(z)}|$ attains its maximum on $D=\{ |z| \leq 1 \}$ on the boundary. Thus $$|e^{if(z)}| \leq 1 \,;\, \forall |z| \leq 1 \,.$$ Applying it to $h(z)=e^{-if(z)}$ you get again $$|e^{-if(z)}| \leq 1 \,;\, \forall |z| \leq 1 \,.$$ Now, $$e^{-if(z)}=\frac{1}{e^{if(z)}} \,.$$ Plug this into the second identity and you are done.<|endoftext|> TITLE: The characterization of compact space QUESTION [5 upvotes]: I have encountered a characterization of compact spaces, but I do not know how to prove it: if $V$ is a topological space such that for any topological space $W$ the projection $V\times W\rightarrow W$ is a closed map, then $V$ is compact. REPLY [3 votes]: Let $\phi$ be a filter on $V$. Let $W = V \cup \{\phi\}$, where $V$ carries its topology and $F \cup \{\phi\}$, $F \in \phi$ are the neighbourhoods of $\phi$. Now let $D = \{(x,x)\mid x \in V\}$ and $A = \operatorname{cl}D$. Now $\operatorname{pr}_W(A)$ is a closed subset of $W$ containing $\operatorname{pr}_W(D) = V$. Hence $\operatorname{pr}_W(A) = W$, so there is an $x \in V$ such that $(x, \phi) \in \operatorname{cl}D$, that is each neighbourhood $U \times (\{\phi\} \cup F)$ of $(x, \phi)$ intersects $D$, that is $U \cap (\{\phi\} \cup F) = U \cap F\ne\emptyset$ for each $F \in \phi$, $U$ neighbourhood of $x$. So $x$ is an adherence point of $\phi$; since $\phi$ was an arbitrary filter, $V$ is compact.<|endoftext|> TITLE: Fake homeomorphism between $R$ and $R^2$ QUESTION [5 upvotes]: I attended a lecture today in which we were given the proof of non-existence of homeomorphisms between $\mathbb R$ and $\mathbb R^2$. I came up with the following bijection between $\mathbb R$ and $\mathbb R^2$ but could not prove why this doesn't qualify as a valid homeomorphism.
Map $(x,y) \rightarrow (\frac 1 \pi (\tan ^{-1} x + \pi/2), \frac 1 \pi (\tan ^{-1} y + \pi/2))$. This map is continuous, bijective and maps $\mathbb R^2$ to the unit square. Now, for every pair $(x,y)$ with $x = 0.a_1a_2 \cdots, y = 0.b_1b_2 \cdots$ being their decimal expansions, define $$f(x,y) = 0.a_1b_1a_2b_2 \cdots$$ $f(x,y)$ is a continuous bijection between the unit square and the interval $(0,1)$. Now, just map $x \rightarrow \tan (\pi x - \pi/2)$ to get a continuous bijection between $(0,1)$ and $\mathbb R$. Why isn't the composition of these functions a homeomorphism between $\mathbb R$ and $\mathbb R^2$? REPLY [13 votes]: Your middle function is not continuous. Let $x_0=0.1$, $y_n=0.4\overbrace{9\cdots9}^{n\text{ times }}0\cdots$. Then $(x_0,y_n)\to(0.1, 0.5)$. We have $$ f(0.1,0.5)=0.15,\ \ f(0.1,y_n)=0.14090909\cdots $$ So $$ |f(0.1,0.5)-f(0.1,y_n)|>0.009 $$ for all $n$. As a general comment, every time your proof includes a "waving hand part", you should suspect that part. A lot. (said from extensive own experience) REPLY [5 votes]: Your function $f$ is not continuous. If it were, then fixing $y$ to some arbitrary value would give a strictly increasing continuous function of $x$ from the unit interval to itself, whose image would have to be an interval; however this cannot be the case given the fact that half of the decimal places have unchanging digits throughout the image of the map (this restriction of $f$ very severely violates the intermediate value theorem). In fact you can check that $f$ is discontinuous at any point at least one of whose two coordinates has a finite decimal representation. For such coordinates you have to choose one of the two possible decimal representations to work with in the definition of $f$; assuming you choose the finite one (rather than the one that ends with all digits $9$), you get a discontinuous jump when you lower that coordinate. You may note that your function $f$ also fails to be surjective, for essentially the same reason (for instance $0.3919491959992969593959\ldots$ is not in the image). As a consolation, it does happen to be continuous almost everywhere. I may add that the intermediate value theorem consideration shows that for a map $f:\mathbf R^2\to\mathbf R$ (or $(0,1)^2\to(0,1)$), even the weaker requirement of just being both continuous and injective cannot be met. It suffices to consider two distinct points $p_0,p_1\in\mathbf R^2$, where we may assume $f(p_0)<f(p_1)$.<|endoftext|> TITLE: Which letter is not homeomorphic to the letter $C$? QUESTION [5 upvotes]: Which letter is not homeomorphic to the letter $C$? I think the letters $O$ and $o$ are not homeomorphic to the letter $C$. Is that correct? Is there any other letter? REPLY [8 votes]: From the wikipage on topology, here are the equivalence classes (under homeomorphism) of letters of the English alphabet. Any letter not in the same bracketed set as 'C' is not homeomorphic to it.<|endoftext|> TITLE: Open subgroups of a topological group are closed QUESTION [18 upvotes]: Let $G$ be a topological group such that for each $y \in G$ the mapping $x\mapsto xy$ is a homeomorphism. If $H$ is an open subgroup of $G$, prove that $H$ is also closed. Could anyone just give a hint for this one? REPLY [10 votes]: In a topological group the group multiplication is by definition continuous (and thus translations are homeomorphisms). You're probably trying to say that if $G$ is a group with topology such that right translations are homeomorphisms, then any open subgroup is also closed.
To show that, notice that $H$ is closed iff its complement is open, which you can write out explicitly using the group operations.<|endoftext|> TITLE: Is a module an inverse limit of finitely generated modules? QUESTION [6 upvotes]: Every module is the direct limit of finitely generated modules. Is it true that every module is the inverse limit of finitely generated modules? REPLY [9 votes]: No. Consider $\mathbb{Q}$ or any other nonzero divisible group. Every homomorphism from $\mathbb{Q}$ into a finitely generated abelian group is zero: subgroups of finitely generated groups are finitely generated, quotients of divisible groups are divisible, and the only finitely generated divisible group is $0$. More details: (using Wikipedia's notation) Suppose towards a contradiction that $(X_i, f_{ij})$ is an inverse system of finitely generated abelian groups and $\pi_i \colon \mathbb Q \to X_i$ are the projections identifying $\mathbb{Q}$ as the inverse limit $\mathbb{Q} = \varprojlim\nolimits_i X_i$, in particular $f_{ij}\pi_j = \pi_i$. We know that $\pi_i = 0$. Consider the maps $\psi_i = \pi_i \colon \mathbb{Q} \to X_i$. Both $u = 1_{\mathbb{Q}}$ and $u' = 0$ are homomorphisms $\mathbb{Q} \to \mathbb{Q}$ such that $\psi_i = \pi_i u$ and $\psi_i = \pi_i u'$, hence the uniqueness statement in the universal property of the inverse limit implies $u = u'$, so $1_\mathbb{Q} = 0$. Nonsense. (More generally, you can try to show that any map into $\mathbb{Q}$ would have to be zero.)<|endoftext|> TITLE: What is the number of trailing zeros in a factorial in base ‘b’? QUESTION [11 upvotes]: I know the formula to calculate this, but I don't understand the reasoning behind it: For example, the number of trailing zeros in $100!$ in base $16$: $16=2^4$, we have: $\left\lfloor\frac{100}{2}\right\rfloor+\left\lfloor\frac{100}{4}\right\rfloor+\left\lfloor\frac{100}{8}\right\rfloor+\left\lfloor\frac{100}{16}\right\rfloor+\left\lfloor\frac{100}{32}\right\rfloor+\left\lfloor\frac{100}{64}\right\rfloor=97$ Number of trailing zeros $=\left\lfloor\frac{97}{4}\right\rfloor = 24$. Why do we divide by the power of '$2$' at the end? REPLY [11 votes]: Suppose that $b=p^m$, where $p$ is prime; then $z_b(n)$, the number of trailing zeroes of $n!$ in base $b$, is $$z_b(n)=\left\lfloor\frac1m\sum_{k\ge 1}\left\lfloor\frac{n}{p^k}\right\rfloor\right\rfloor\;.\tag{1}$$ That may look like an infinite summation, but once $p^k>n$, $\left\lfloor\frac{n}{p^k}\right\rfloor=0$, so there are really only finitely many non-zero terms. The summation counts the number of factors of $p$ in $n!$. The set $\{1,2,\dots,n\}$ of integers whose product is $n!$ contains $\left\lfloor\frac{n}p\right\rfloor$ multiples of $p$, $\left\lfloor\frac{n}{p^2}\right\rfloor$ multiples of $p^2$, and so on $-$ in general $\left\lfloor\frac{n}{p^k}\right\rfloor$ multiples of $p^k$. Each multiple of $p$ contributes one factor of $p$ to the product $n!$; each multiple of $p^2$ contributes an additional factor of $p$ beyond the one that was already counted for it as a multiple of $p$; each multiple of $p^3$ contributes an additional factor of $p$ beyond the ones already counted for it as a multiple of $p$ and as a multiple of $p^2$; and so on. Let $s=\sum_{k\ge 1}\left\lfloor\frac{n}{p^k}\right\rfloor$; then $n!=p^sk$, where $k$ is not divisible by $p$. Divide $s$ by $m$ to get a quotient $q$ and a remainder $r$: $s=mq+r$, where $0\le r<m$. Then $n!=p^{mq+r}k=b^q\left(p^rk\right)$, and since $p\nmid k$ and $r<m$, the factor $p^rk$ is not divisible by $b$; thus $n!$ has exactly $q=\left\lfloor\frac{s}{m}\right\rfloor$ trailing zeroes in base $b$, which is formula $(1)$.<|endoftext|> TITLE: How do I prove that $S^n$ is homeomorphic to $S^m \Rightarrow m=n$? QUESTION [7 upvotes]: This is what I have so far: Assume $S^n$ is homeomorphic to $S^m$. Also, assume $m≠n$. So, let $m>n$. From here I am not sure what is implied.
Of course in this problem $S^k$ is defined as: $S^k=\lbrace (x_0,x_1,⋯,x_{k+1}):x_0^2+x_1^2+⋯+x_{k+1}^2=1 \rbrace$ with subspace topology. REPLY [3 votes]: A non-elementary argument would be that spaces with non-identical homotopy groups are not homeomorphic. Assume $n < m$. Then $\pi_n (S^n) = \mathbb Z$ (see this article) but $\pi_n(S^m) = 0$. Hence $S^n$ and $S^m$ are not homeomorphic if $n \neq m$. Alternatively, as pointed out in the comments, you could use homology groups: $H_n(S^n) = \mathbb Z$ but $H_n(S^m) = 0$. But homeomorphic spaces have isomorphic homology groups.<|endoftext|> TITLE: Proof of the extreme value theorem without using subsequences QUESTION [8 upvotes]: I am preparing a lecture on the Weierstrass theorem (probably best known as the Extreme Value Theorem in English-speaking countries), and I would like to propose a proof that does not use the extraction of converging subsequences. I did not explain subsequences in my calculus course, and I must choose between skipping the proof of the theorem and finding some proof which works only for functions $\mathbb{R} \to \mathbb{R}$. I remember I once read a proof based on some bisection technique, but I can't find a reference right now. I would be grateful for any reference to books, papers, web sites about this alternative proof. Edit: since somebody modified my question, I will write down the precise theorem I want to prove. Theorem. Let $f \colon [a,b] \to \mathbb{R}$ be a continuous function. Then $f$ has at least a maximum and a minimum point. REPLY [5 votes]: Here is a proof of the Extreme Value Theorem that does not need to extract convergent subsequences. First we prove the following. Lemma: Let $f : [a,b] \rightarrow \mathbb{R}$ be a continuous function, then $f$ is bounded. Proof: We prove it by contradiction. Suppose for example that $f$ does not have an upper bound, then $\forall n\in\mathbb{N}$, the set $\{x \in [a,b] , \, f(x) \geqslant n\}$ is not empty. Consider the following quantity: $$a_n = \inf\{x \in [a,b] , \, f(x) \geqslant n\}.$$ For all $n\in \mathbb{N}$, $a_n \in [a,b]$ exists. By the continuity of $f$, $f(a_n) \geqslant n$. And since $\{x \in [a,b] , \, f(x) \geqslant n+1\} \subset \{x \in [a,b] , \, f(x) \geqslant n\}$, we have $a_{n+1} \geqslant a_n$. Since $(a_n)_{n\in\mathbb{N}}$ is a monotonic bounded sequence, it has a limit: $$a_{\infty} = \lim_{n\to\infty} a_n $$ and $a_{\infty} \in [a,b]$. Let $M = \lceil f(a_{\infty}) \rceil$, then $\forall n \geqslant M+2$, $f(a_n) > f(a_{\infty})+1$. Hence by the continuity of $f$, $$f(a_{\infty}) = \lim_{n\to\infty}f(a_n) \geqslant f(a_{\infty})+1,$$ which yields a contradiction. Therefore $f$ must have an upper bound on $[a,b]$. For the same reason, $f$ must have a lower bound on $[a,b]$. In conclusion, $f$ is bounded on $[a,b]$. With this lemma we can prove the Extreme Value Theorem. Theorem: Let $f : [a,b] \rightarrow \mathbb{R}$ be a continuous function, then $f$ has at least a maximum and a minimum point. Proof: We proved in the lemma that $f$ is bounded, hence, by the Dedekind-completeness of the real numbers, the least upper bound (supremum) $M$ of $f$ exists. By the definition of $M$, $$\forall n \in \mathbb{N}, \, S_n = \{x \in [a,b] , \, f(x) \geqslant M - \frac1n\} \neq \emptyset .$$ Let $s_n = \inf S_n$ be the infimum of $S_n$. We know that $a \leqslant s_n \leqslant s_{n+1} \leqslant b$ and $f(s_n) \geqslant M - \frac1n$. Since $(s_n)_{n\in\mathbb{N}}$ is a monotonic bounded sequence, the limit $s = \lim_{n\to\infty} s_n \in [a,b]$ exists.
$\forall N\in\mathbb{N}$, $\forall n > N$, $f(s_n) > M - \frac1N$, so $M \geqslant f(s) = \lim_{n\to\infty} f(s_n) \geqslant M - \frac1N$. Hence $f(s) = M$ and $f$ has at least a maximum point. Similarly we can prove that $f$ has at least a minimum point. Q.E.D.<|endoftext|> TITLE: Given a φ independent of PA which is true in the standard model, will (PA + ¬φ) always be ω-inconsistent? QUESTION [9 upvotes]: Given a φ independent of PA which is true in the standard model, will (PA + ¬φ) always be ω-inconsistent? Does it mean that every such ¬φ can be used to prove that a Turing machine halts on a given C when it will actually never halt? REPLY [10 votes]: The usual definition of $\omega$-consistency is: A theory $T$ is $\omega$-inconsistent if there is a formula $\psi$ such that $T$ proves $(\exists x)\psi(x)$, and $T$ also proves $\lnot \psi(n)$ separately for each standard natural number $n$. $T$ is $\omega$-consistent if it is not $\omega$-inconsistent. The property that "for every sentence $\psi$, if the theory proves $\psi$ then $\psi$ is true in the standard model" is called "arithmetical soundness" of the theory $T$. The statement "if $T$ proves that a Turing machine halts then it does halt" is a special case of soundness known as "1-consistency" which is formally defined using a certain classification of formulas. A formula in the language of arithmetic is called $\Sigma^0_1$ if it is of the form $(\exists x)\psi(x)$ where $\psi(x)$ has only bounded quantifiers. It turns out that for any Turing machine $m$, the statement "$m$ halts on input $x$" can be expressed as a $\Sigma^0_1$ formula $\psi(x)$, and vice versa. 1-consistency is defined as soundness for $\Sigma^0_1$ formulas. Unfortunately, there is only a weak relationship between arithmetical soundness, 1-consistency and $\omega$-consistency: (1) Every sufficiently strong $\omega$-consistent theory is 1-consistent, that is, $\Sigma^0_1$ sound. "Sufficiently strong" here includes the assumption that the theory proves every true $\Sigma^0_1$ sentence - PA has this property. The following examples show that we can't strengthen that statement very much: (2) There are theories that are $\omega$-consistent but not arithmetically sound: they prove sentences that are false in the standard model of arithmetic. Some of these theories are even of the form $PA + \lnot\phi$ for a sentence $\phi$ that is true in the standard model and independent of PA. (3) There are $1$-consistent theories (that is, $\Sigma^0_1$ sound) that are $\omega$-inconsistent. Some of these theories are even of the form $PA + \lnot\phi$ for a sentence $\phi$ that is true in the standard model and independent of PA. Example (2) shows that it is not the case that $PA + \lnot \phi$ has to be $\omega$-inconsistent, which answers the first part of the question. Example (3) shows that it is possible for $PA + \lnot \phi$ to be $\Sigma^0_1$ sound, in which case it never proves that a Turing machine halts unless the machine does halt. That answers the second part of the question.<|endoftext|> TITLE: Trace of $A$ if $A =A^{-1}$ QUESTION [8 upvotes]: Suppose $I\neq A\neq -I$, where $I$ is the identity matrix and $A$ is a real $2\times2$ matrix. If $A=A^{-1}$ then what is the trace of $A$? I was thinking of writing $A$ as $\{a,b;c,d\}$ and then using $A=A^{-1}$ to equate the positions, but the equations I get suggest there is an easier way. REPLY [5 votes]: If ${\bf A} = {\bf A}^{-1}$ we may assume that $\det({\bf A}) \neq 0$. (Otherwise, what is ${\bf A}^{-1}$?)
We can multiply both sides of the equation on the right by ${\bf A}$, giving: $${\bf A} = {\bf A}^{-1} \iff {\bf A}{\bf A} = {\bf A}^{-1}{\bf A} \iff {\bf A}^2 = {\bf E} \, , $$ where ${\bf E}$ denotes the $n \times n$ identity matrix. The characteristic polynomial of ${\bf A}$ is given by: $$ \det({\bf A}-\lambda {\bf E}) = \det \left[ \begin{array}{cc} a_{11}-\lambda & a_{12} \\ a_{21} & a_{22}-\lambda \end{array} \right] =(a_{11}-\lambda)(a_{22}-\lambda)-a_{12}a_{21} $$ $$ =a_{11}a_{22} - \lambda a_{11} - \lambda a_{22} + \lambda^2 - a_{12}a_{21} = \lambda^2 - (a_{11} + a_{22})\lambda + (a_{11}a_{22} - a_{12}a_{21}) \, .$$ If $\text{tr}({\bf A})$ denotes the trace of ${\bf A}$, then $\det({\bf A}-\lambda {\bf E}) = \lambda^2 - \text{tr}({\bf A})\lambda + \det({\bf A}).$ We usually consider the eigenvalues, $\lambda$, given by $\det(A-\lambda E) = 0$. In other words, we usually consider the equation $\lambda^2 - \text{tr}({\bf A})\lambda + \det({\bf A}) = 0$. There is a great result due to Cayley and Hamilton which says that if we replace the number $\lambda$ in the numerical equation $\det({\bf A}-\lambda {\bf E}) = 0$ by the matrix ${\bf A}$ then we get a matrix equation: $$ {\bf A}^2 - \text{tr}(A){\bf A} + \det(A){\bf E} = {\bf 0} \, . $$ Next, we ask ourselves: what is $\det({\bf A})$? Well, since ${\bf A}^2 = {\bf E}$ it follows that $\det({\bf A}^2) = \det({\bf E})$ and, since $\det({\bf XY}) = \det({\bf X})\det({\bf Y})$, we have $\det({\bf A})^2 = \det({\bf E})$ which tells us $\det({\bf A})^2 = 1$. Clearly, $\det({\bf A}) = \pm 1$. Thence: $${\bf A}^2 - \text{tr}(A){\bf A} \pm {\bf E} = {\bf 0} \implies {\bf E} - \text{tr}(A){\bf A} \pm {\bf E} = {\bf 0}\, .$$ At this point, we should separate into the $\pm$ cases: $${\bf E} - \text{tr}(A){\bf A} - {\bf E} = {\bf 0} \ \ \text{or} \ \ {\bf E} - \text{tr}(A){\bf A} + {\bf E} = {\bf 0} \, . $$ In the first case, we get $\text{tr}(A){\bf A} = 0$. Since $\det({\bf A}) \neq 0$ it follows that $\text{tr}({\bf A}) = 0.$ In the second case, $-\text{tr}(A){\bf A} = -2{\bf E}$. This is only possible if ${\bf A} \propto {\bf E}$. For more detail, notice that: $$-\text{tr}(A){\bf A} = -2{\bf E} \implies \text{tr}\left[-\text{tr}(A){\bf A}\right] = \text{tr}\left[-2{\bf E}\right] \implies \text{tr}(A)^2 = 4 \implies \text{tr}(A) = \pm 2.$$ It follows that the necessary conditions are $\text{tr}(A) = -2, 0 , 2$. Moreover, in the second case ${\bf A} = \pm{\bf E}$, which is excluded by the hypothesis ${\bf A}\neq\pm{\bf I}$; so in fact $\text{tr}({\bf A}) = 0$.<|endoftext|> TITLE: Evaluate: $\int{\frac{x^{5}-x}{x^{8}+1}}\:\mathrm dx.$ QUESTION [9 upvotes]: Evaluate: $$\int{\frac{x^{5}-x}{x^{8}+1}\:\mathrm dx}.$$ I am unable to see a decent starting point for this integral: there are no radicals so trigonometric substitution isn't helpful; there is no nice partial fraction decomposition to simplify the integrand; integration by parts doesn't help to simplify it much; and I cannot see any factorization or useful substitution to use. Can anyone help shed some light on this integral? Thanks in advance! REPLY [14 votes]: These integrals are often wrapped up nicely by substitutions of the form: $$u=x^a\pm\frac{1}{x^a}$$ where $a$ is chosen appropriately.
A little bit of playing around leads to the following: $$\int\frac{x^{5}-x}{x^{8}+1}dx=\int\frac{x^{3}\left(x^{2}-\frac{1}{x^{2}}\right)dx}{x^{4}\left(x^{4}+\frac{1}{x^{4}}\right)}=\int\frac{\left(x^{2}-\frac{1}{x^{2}}\right)dx}{x\left[\left(x^{2}+\frac{1}{x^{2}}\right)^{2}-2\right]}$$ Now let $$u=x^{2}+\frac{1}{x^{2}}$$ $$du=2\left(x-\frac{1}{x^{3}}\right)dx=2\frac{1}{x}\left(x^{2}-\frac{1}{x^{2}}\right)dx$$ Hence $$2I=\int\frac{du}{u^{2}-2}=\frac{1}{2\sqrt{2}}\left(\int\frac{du}{u-\sqrt{2}}-\int\frac{du}{u+\sqrt{2}}\right)=\frac{1}{2\sqrt{2}}\ln\left|\frac{u-\sqrt{2}}{u+\sqrt{2}}\right|$$ $$I=\frac{1}{4\sqrt{2}}\ln\left|\frac{x^{4}-\sqrt{2}x^{2}+1}{x^{4}+\sqrt{2}x^{2}+1}\right|$$<|endoftext|> TITLE: Trace Norm properties QUESTION [16 upvotes]: Let $\|A\|_1=\operatorname{trace}(\sqrt{A^* A})$. I already proved that for arbitrary unitary matrices $U$ and $V$, $\|UAV^*\|_1=\|A\|_1$ and $\|A\|_1=\sigma_1+\dots+\sigma_k$. Now I would like to prove that $\|A\|_1$ defines a matrix norm, $A\in M_{m\times n}(\mathbb C)$. 1) $\|A\|_1=0\Leftrightarrow A=0$. I already proved that. 2) $\|\lambda A\|_1=|\lambda|\|A\|_1$. This also. 3) $|\operatorname{trace}(A)|\leqslant \|A\|_1$. I am not sure; my idea is to use $A=U\Sigma V^*$. 4) $\|BA\|_1\leqslant \|B\|\|A\|_1$ for $B\in M_{l\times m}(\mathbb C)$ and $\|B\|=\sup\frac{\|Bx\|}{\|x\|}=\max\{\sigma_1,\dots,\sigma_k\}$. My idea is again using singular value decomposition for $A$ and a polar decomposition for $BA$. 5) $\|A\|_1=\sup_{\|B\|\leqslant 1}|\operatorname{trace}(BA)|$ with $B\in M_{n\times m}(\mathbb C)$ and $A\in M_{m\times n}(\mathbb C)$. Here I have no idea. 6) $\|A+A'\|_1\leqslant\|A\|_1+\|A'\|_1$ with $A,A'\in M_{m\times n}(\mathbb C)$. This can be deduced from 5). If you could help me with 3)-5) I would really appreciate it. REPLY [10 votes]: Here are some (edit: more) ideas: First, it seems useful to restrict oneself to square matrices by "squaring" A as in this reference (p. 2 bottom of http://www.drhea.net/wp-content/uploads/2011/11/vonNeumann.pdf - just add zeros to $A$ to make it square which does not affect the SVD except some diagonal ones to $U$ or $V$ and some zeros to $\Sigma$). (I think this should anyway only hold if $A \in M_{n\times n}(\mathbb{C})$ is a square matrix.) In that case, I believe you can prove this by considering $$|\mathrm{tr}(A)| = |\mathrm{tr}(U\Sigma V^*)| = |\mathrm{tr}(\Sigma V^*U)| = \Big|\sum_i(\Sigma e_i)\cdot(V^*U e_i)\Big| \le \sum_i \| \Sigma e_i \| \| V^*U e_i \|\\ \le \sum_i \sigma_i.$$ (Note $U,V,\Sigma$ are all square now.) First of all, let us assume von Neumann's trace inequality ( http://en.wikipedia.org/wiki/Von_Neumann%27s_trace_inequality ). This inequality implies $|\mathrm{tr}(BA)| \le \|B\| \|A\|_1$, i.e.\ $\sup_{\|B\|\le 1} |\mathrm{tr}(BA)| \le \|A\|_1$. The other direction follows with the choice $B=VU^*$, where $U,V$ unitary are such that $A=U\Sigma V^*$, i.e. $\sqrt{A^*A} = V \Sigma V^*$, because then $\mathrm{tr}(BA) = \mathrm{tr}(VU^* U\Sigma V^*) = \mathrm{tr}(V\Sigma V^*) = \|A\|_1$. The statement $|\mathrm{tr}(AB)|\le ||B||\cdot |\mathrm{tr}(A)|$ is untrue in general, consider the matrix $A=B=\begin{bmatrix} 0 & 1\\ 1 & 0\\ \end{bmatrix}$. $|\mathrm{tr}(AB)|=2$ while $|\mathrm{tr}(A)| = 0$. Now we deduce 4) from 5) and von Neumann's trace inequality: $\|BA\|_1 = \sup_{\|C\|\le 1}|\mathrm{tr}(CBA)| \le \sup_{\|C\|\le 1}\|CB\|\,\|A\|_1 \le \|B\| \|A\|_1$.
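As a numerical sanity check of the duality in 5), here is a hedged numpy sketch (the matrix sizes, random seed, number of trials and tolerance are arbitrary choices, and the supremum is only probed at $B=VU^*$ and at random contractions, so this is an illustration rather than a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# Trace norm of A via its singular values.
U, s, Vh = np.linalg.svd(A, full_matrices=False)
trace_norm = s.sum()

# The choice B = V U^* (an n x m matrix with operator norm 1) attains the supremum in 5).
B = Vh.conj().T @ U.conj().T
print(trace_norm, abs(np.trace(B @ A)))   # the two values agree up to rounding

# No contraction ||C|| <= 1 should beat the trace norm (von Neumann's inequality).
for _ in range(200):
    C = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
    C /= max(np.linalg.norm(C, 2), 1.0)   # rescale so the operator norm is at most 1
    assert abs(np.trace(C @ A)) <= trace_norm + 1e-9
```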