diff --git "a/stack-exchange/math_stack_exchange/shard_106.txt" "b/stack-exchange/math_stack_exchange/shard_106.txt" deleted file mode 100644--- "a/stack-exchange/math_stack_exchange/shard_106.txt" +++ /dev/null @@ -1,5600 +0,0 @@ -TITLE: If an operator have only Real eigenvalues + symmetric then it's self-adjoint? -QUESTION [5 upvotes]: I know that if an operator is self-adjoint then has Real eigenvalues but I'm not sure about the converse i.e. if it has only Real eigenvalues and is symmetric then the operator is selfadjoint. Is that true? -----Edit--- -The difference between selfadjoint and symmetric being the definition set. Symmetric has an extension which coincide in the original domain, while a Selfadjoint operator has the same domain of definition -I define symmetric as follows. -Let be $\mathcal{A}=\left(A,\mathfrak{D}_{A}\right)$ -an operator densely defined and $\mathcal{A}^{*}=\left(A^{*},\mathfrak{D}_{A^{*}}\right)$ the adjoint operator, then $\mathcal{A}$ it is called symmetric if -\begin{eqnarray} - & & \mathfrak{D}_{A^{*}}\supseteq\mathfrak{D}_{A}\\ - & & A^{*}\psi=A\psi\qquad\forall\psi\in\mathfrak{D}_{A}. -\end{eqnarray} -While I define Self-adjoint like this -Let be $\mathcal{A}=\left(A,\mathfrak{D}_{A}\right)$ -an operator densely defined and $\mathcal{A}^{*}=\left(A^{*},\mathfrak{D}_{A^{*}}\right)$ the adjoint operator, then $\mathcal{A}$ it is called selfadjoint if -\begin{eqnarray} - & & \mathfrak{D}_{A^{*}}=\mathfrak{D}_{A}\\ - & & A^{*}\psi=A\psi\qquad\forall\psi\in\mathfrak{D}_{A}. -\end{eqnarray} - -REPLY [10 votes]: This is far from being true, indeed, every symmetric operator has only real eigenvalues: -If $\psi\in\ker(T-\lambda),\,\psi\neq 0$, then -$$ -\lambda\|\psi\|^2=\langle T\psi,\psi\rangle=\langle \psi,T\psi\rangle=\bar\lambda\|\psi\|^2, -$$ -hence $\lambda=\bar\lambda$. -Now every symmetric operator that is not self-adjoint yields a counterexample to your conjecture (if you want to be explicit, take $\Delta$ on $C_c^\infty(\mathbb{R}^n)$ as operator in $L^2(\mathbb{R}^n)$). -To get a criterion for self-adjointness, you have to replace the eigenvalues by the spectrum of the operator. Then the following characterization holds: -A symmetric operator $T$ is self-adjoint if and only if its spectrum is contained in $\mathbb{R}$. -Proof: It suffices to show that $D(T^\ast)\subset D(T)$. Let $z\in\mathbb{C}\setminus\mathbb{R}$. Since $\sigma(T)\subset\mathbb{R}$, the operators $T-z$ and $T-\bar z$ are invertible. -Let $\phi\in D(T^\ast)$ and $\psi:=(T-z)^{-1}(T^\ast -z)\phi\in D(T)$. Then we have $T\psi=T^\ast\psi$ and $(T^\ast-z)\psi=(T-z)\psi$. -It follows that -$$ -(T^\ast-z)(\phi-\psi)=(T-z)\psi-(T-z)\psi=0, -$$ -that is, $\phi-\psi\in N(T^\ast-z)=R(T-\bar z)^\perp=\{0\}$. Hence, $\phi=\psi\in D(T)$. -Remark: I used $R(A)$ and $N(A)$ to denote the range and kernel of $A$.<|endoftext|> -TITLE: Presentation for hand calculation -QUESTION [5 upvotes]: I am looking for a presentation of SmallGroup(32,7) which is more amenable to hand calculation. More precisely, I used the following comments to find a presentation of the above group and $$\langle a,b\mid b^2=(aba)^2=(a^{-1}bab)^2=a^8=1 \rangle$$ is obtained. -gap> G:=SmallGroup(m,n); - H:=Image(IsomorphismFpGroup(G)); - RelatorsOfFpGroup(H); - SimplifiedFpGroup(H); - -On the other hand, from the library of GAP, we know that $SmallGroup(32,7)$ is isomorphic to $(\mathbb Z_8\rtimes \mathbb Z_2)\rtimes \mathbb Z_2$. 
But the above presentation does not reflect the structure $(\mathbb Z_8\rtimes \mathbb Z_2)\rtimes \mathbb Z_2$. Now my question is
-How can I find a presentation for $G$ that reflects $(\mathbb Z_8\rtimes \mathbb Z_2)\rtimes \mathbb Z_2$?
-
-REPLY [5 votes]: The routines for computing presentations try foremost to produce a reasonably small presentation and don't focus on trying to reflect possible semidirect product structures. So to get a presentation that reflects this structure, it probably is quickest to build it by hand. First let's find the (sub)normal subgroups of order 16 and 8:
-gap> G:=SmallGroup(32,7);
-
-gap> n:=Filtered(NormalSubgroups(G),x->Size(x)=16
-> and Length(Complementclasses(G,x))>0);
-[ Group([ f1*f2, f3, f4, f5 ]), Group([ f1, f3, f4, f5 ]) ]
-gap> StructureDescription(n[1]);
-"C8 : C2"
-gap> u:=Filtered(NormalSubgroups(n[1]),x->Size(x)=8 and IsCyclic(x));
-[ Group([ f1*f2*f4, f3*f4, f5 ]), Group([ f1*f2, f3*f4, f5 ]) ]
-gap> Complementclasses(n[1],u[1]);
-[  ]
-
-So n[1] and u[1] are suitable subgroups, having complements. Now we pick generators:
-gap> b:=last[1].1;Order(b);
-f3*f5
-2
-gap> a:=SmallGeneratingSet(u[1])[1];Order(a);
-f1*f2*f3*f5
-8
-gap> Complementclasses(G,n[1]);
-[ ,  ]
-gap> c:=last[1].1;Order(c);
-f2
-2
-
-Now identify how $b$ acts on $\langle a\rangle$, and how $c$ acts on $\langle a,b\rangle$:
-gap> Factorization(Group(a),a^b);
-x1^-3
-gap> h:=Group(a,b);
-
-gap> Factorization(h,a^c);
-x2*x1
-gap> Factorization(h,b^c);
-x2
-
-We now have a presentation $\langle a,b,c\mid a^8=b^2=c^2=1,a^b=a^{-3},a^c=ba,b^c=b\rangle$. The semidirect product structure implies that the order is not larger than 32, but we can check this:
-gap> f:=FreeGroup("a","b","c");
-
-gap> g:=f/ParseRelators(f,"a^8=b^2=c^2=1,a^b=a^-3,a^c=ba,b^c=b");
-
-gap> Size(g);
-32
-gap> IsomorphismGroups(g,G);
-[ a, b, c ] -> [ f1*f3*f4*f5, f3*f5, f2*f3 ]
-
-Note that this isomorphism does not find the images as we chose them; we could also build the isomorphism by hand (constructing the homomorphism verifies the isomorphism property):
-gap> isom:=GroupHomomorphismByImages(g,G,GeneratorsOfGroup(g),[a,b,c]);
-[ a, b, c ] -> [ f1*f2*f3*f5, f3*f5, f2 ]
-gap> IsBijective(isom);
-true<|endoftext|>
-TITLE: Find all integer solutions to $a+b+c\mid ab+bc+ca\mid abc$
-QUESTION [8 upvotes]: As you can see from the title, I am trying to find all integer solutions $(a,b,c)$ to $$(a+b+c) \ \lvert\ (ab+bc+ca) \ \lvert\ abc$$ (that is, $a+b+c$ divides $ab+bc+ca$, and $ab+bc+ca$ divides $abc$). Unfortunately, I could not find anything on this problem (although I find it hard to believe that nobody thought of this before).
-What I've found so far
-I have looked at the simpler case: $(a+b) \ \lvert\ ab$. I was able to solve this, and all solutions are $$(a,b)=(\alpha(\alpha+\beta)\gamma,\beta(\alpha+\beta)\gamma)$$ with $\alpha,\beta,\gamma\in\mathbb{Z}$.
-I was also able to reduce the given problem to only one division. If we are able to solve $$(a_0b_0+b_0c_0+c_0a_0) \ \lvert\ (a_0+b_0+c_0)a_0b_0c_0 $$ with $\gcd(a_0,b_0,c_0)=1$, then all solutions to the original problem are given by
-\begin{align}
-a&=a_0(a_0+b_0+c_0)\cdot k\\
-b&=b_0(a_0+b_0+c_0)\cdot k\\
-c&=c_0(a_0+b_0+c_0)\cdot k\\
-\end{align}
-for $k\in\mathbb{Z}$. However, I was not able to solve this. I have computed a few solutions to the latter (and the corresponding solutions for the original problem) but was not able to find a pattern. Any progress on the problem is welcome!

-REPLY [2 votes]: $a,b,c$ are roots of $X^3-(a+b+c)X^2+(ab+ac+bc)X=abc$.
-Putting $$\begin{cases}a+b+c=A\\ab+ac+bc=mA\\abc=n(ab+ac+bc)\end{cases}\qquad (*)$$
-one gets $$X^3-AX^2+mAX=mnA$$ It follows that
-$$\begin{cases}a^3-Aa^2+mAa=mnA\\b^3-Ab^2+mAb=mnA\\c^3-Ac^2+mAc=mnA\end{cases}$$
-Hence $a^3\equiv b^3\equiv c^3\pmod A$, so that
-$a^3+b^3+c^3=kA$.
-We take a look at the equation $$X^3+Y^3+Z^3=r$$ for cases in which there are solutions, and we try to fit these to our problem.
-Example. For $r=s^3$: $$X^3+Y^3+Z^3= s^3 \Rightarrow (X,Y,Z)=(t,-t,s)$$ To fit with (*) we choose $t=2s$, which gives $(a,b,c)=(2s,-2s,s)$, so one has $$\begin{cases}a+b+c=2s-2s+s=s\\ab+ac+bc=-4s^2=(-4s)s\\ abc=-4s^3=(-4s^2)s\end{cases}$$ Thus $$a+b+c=s$$ $$\frac{ab+ac+bc}{a+b+c}=-4s$$ $$\frac{abc}{ab+ac+bc}=s$$
-I will return to this problem, if I may.<|endoftext|>
-TITLE: Every group of order $60$, having a normal subgroup of order $2$, has a normal subgroup of order $6$ (without Sylow)?
-QUESTION [5 upvotes]: How to prove, without using Sylow's theorems, that every group of order $60$, having a normal subgroup of order $2$, contains a normal subgroup of order $6$? Please help. Thanks in advance.

-REPLY [4 votes]: FACT 1: Every group of order $15$ is cyclic. [without Sylow theory]
-By Cauchy's theorem, $G$ contains a subgroup $H$ of order $5$; if $H_1$ were another subgroup of order $5$ then $|HH_1|=|H|\cdot|H_1|/|H\cap H_1|=5\cdot 5/1>|G|$, a contradiction. So the subgroup of order $5$ is unique, hence normal. Let $K$ be a subgroup of order $3$.
-For every $k\in K$, define $\varphi_k\colon H\rightarrow H$, $\varphi_k(h)=khk^{-1}$; it is an automorphism (since $H$ is normal). We can see that $\varphi_k\varphi_{k'}=\varphi_{kk'}$; hence $k\mapsto \varphi_k$ is a homomorphism from $K$ to $Aut(H)$.
-Since $K\cong \mathbb{Z}_3$ and $Aut(H)\cong \mathbb{Z}_4$, any homomorphism from $\mathbb{Z}_3$ to $\mathbb{Z}_4$ is trivial. This means $\varphi_k=$identity for every $k\in K$; that is, $khk^{-1}=h$ for all $h\in H$ (and $k\in K$). This means $H$ and $K$ commute elementwise. Therefore $G=H\times K=\mathbb{Z}_5\times \mathbb{Z}_3$, which is cyclic.
-FACT 2: Any group of order $2m$ ($m$ odd) has a normal subgroup of order $m$.
-This is also easy to prove without Sylow; you can try it (or search for a reference).
-Towards your question: Let $N$ be the normal subgroup of order $2$. Then $G/N$ is a group of order $30=2\cdot 15$. Now $G/N$ has a normal subgroup, say $L/N$, of order $15$; it must be cyclic; hence it has a unique subgroup of order $3$, say $H/N$. So we have the situation:
-$H/N$ is the unique subgroup of order $3$ in $L/N$ and $L/N\trianglelefteq G/N$.
-Exercise: Show that $H/N$ is normal in $G/N$.
-Then $H$ is normal in $G$ with $|H|=|N|\cdot|H/N|=2\cdot 3=6$.<|endoftext|>
-TITLE: Least and greatest value of $|z|$ given that $|z- 4/z| = 2$?
-QUESTION [5 upvotes]: If the complex number $z$ satisfies the equation $\left\lvert z- {4\over z}\right\rvert = 2$, then what are the least and the greatest values of $|z|$?
-My try: $\left\lvert z- {4\over z}\right\rvert = 2$
-$\left| |z| - \left\lvert z- {4\over z}\right\rvert \right| \le 2$.

-REPLY [8 votes]: Given $$\left|z-\frac{4}{z}\right| = 2$$ and here we have to find the $\max$ and $\min$ of $|z|$.
-So $$|z| = \left|\left(z-\frac{4}{z}\right)+\frac{4}{z}\right|\leq \left|z-\frac{4}{z}\right|+\frac{4}{|z|}$$
-Above we have used the $\bf{\triangle \; Inequality}$.
-So $$|z|\leq 2+\frac{4}{|z|}\Rightarrow |z|^2-2|z|\leq 4$$
-So $$\left(|z|-1\right)^2 \leq 5$$
-Now after that you can solve it.
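-To finish from here (completing the last step, and checking that the bounds are attained): $\left(|z|-1\right)^2\le 5$ gives $|z|\le 1+\sqrt{5}$. For the other direction, the same triangle inequality idea gives
-$$|z|\ge \frac{4}{|z|}-\left|z-\frac{4}{z}\right|=\frac{4}{|z|}-2\Rightarrow |z|^2+2|z|-4\ge 0\Rightarrow \left(|z|+1\right)^2\ge 5,$$
-so $|z|\ge \sqrt{5}-1$. Both bounds are attained at real $z$: $z=1+\sqrt{5}$ solves $z^2-2z-4=0$, so $z-4/z=(z^2-4)/z=2$, and $z=\sqrt{5}-1$ solves $z^2+2z-4=0$, so $|z-4/z|=|-2|=2$. Hence the least value is $\sqrt{5}-1$ and the greatest is $1+\sqrt{5}$.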
-
-REPLY [6 votes]: If you know $\left|\frac zw\right|=\frac{|z|}{|w|}$, then you can solve your inequality for real $|z|$. Then see if real $z$ attains those max and min.
-$$|r-4/r|\leq2\\-2r\leq r^2-4\leq2r\\5\leq(r+1)^2,\ (r-1)^2\leq5\\-1+\sqrt{5}\leq r\leq1+\sqrt{5}$$<|endoftext|>
-TITLE: The smallest infinity and the axiom of choice
-QUESTION [9 upvotes]: The short version of this question is: which (natural) axiom should be added to ZF so that the statement "$\aleph_{0}$ is the smallest infinity" becomes true?
-A set $A$ is called infinite if it can be put in bijection with a proper subset of itself, or, equivalently, if there exists an injection $\phi \colon A \hookrightarrow A$ that is not a bijection. Consider the following proof that the cardinality of the natural numbers, $\aleph_{0}$, is the 'smallest' infinity.
-It is enough to show that for every infinite set $A$ there is an injection $\mathbb{N} \hookrightarrow A$. It is simple to prove that if $A$ is in bijection with a set $B$, then $B$ is also infinite. Define $A_{0} = A$, and let $\phi_{0} \colon A_{0} \rightarrow A_{0}$ be an injection that is not a bijection.
-Denote $A_{1} = \phi_{0}(A_{0})$, and $B_{1} = A_{0} - A_{1}$. By definition of $\phi_{0}$, we have $B_{1} \ne \emptyset$.
-Since $A_{1}$ is also infinite (as $\phi_{0}$ restricts to a bijection between $A_{0}$ and $A_{1}$), we can repeat this process with a non-bijective injection $\phi_{1} \colon A_{1} \rightarrow A_{1}$, and define
-$A_{2} = \phi_{1}(A_{1})$ and $B_{2} = A_{1} - A_{2}$, with $B_{2} \ne \emptyset$.
-Continuing in this manner we obtain an infinite descending chain of proper subsets $$A_{0} \supsetneq A_{1} \supsetneq A_{2} \supsetneq A_{3} \supsetneq A_{4} \supsetneq \cdots$$
-and a countably infinite family $\{B_{n}\}_{n \in \mathbb{N}}$ of mutually disjoint subsets of $A$. Choosing a $b_{i}$ from each $B_{i}$, we obtain an infinite sequence of elements of $A$ without repetitions $$b_{1}, b_{2}, b_{3}, b_{4}, \dots,$$
-which is exactly an injection $\mathbb{N} \hookrightarrow A$.
-It seems clear to me that the above argument uses the axiom of choice in choosing the sequence $\{ b_{n} \}_{n \in \mathbb{N}}$. It also seems clear that without something like AC one cannot guarantee that $\aleph_{0}$ is the smallest infinity. However, I've also been told that people often mistakenly use the axiom of choice, or use the full axiom when a weaker variant would do. Since here I have an ordered, countably infinite collection of sets to make a choice from, it seems like this might be the case.
-I am no set theorist, so I don't know how to answer these questions (and maybe even how to ask them), but roughly: are there commonly used axioms that imply '$\aleph_{0}$ is the smallest infinity', but are weaker than AC? Is there a minimal such axiom? Can one adopt '$\aleph_{0}$ is the smallest infinity' as an axiom, or would this cause problems?
-EDIT: I am informed below that no version of the axiom of choice is necessary with this definition of 'infinite set'. So what about when we use the 'usual' definition of an infinite set: a set that is not in bijection with any natural number?
-Supplementary question: why is the 'usual' definition preferable to the 'bijective with proper subset' definition? The latter seems more natural to me, seeing as it only mentions sets and doesn't require numbers to be defined.
-
-REPLY [10 votes]: With your definition of "infinite set" (which is Dedekind's definition, not the usual one), no axioms beyond ZF are needed to prove that $\aleph_0$ is the smallest infinite cardinal.
-Let $A$ be an infinite set, and let $\phi:A\to A$ be an injection which is not a bijection. Choose an element $a\in A\setminus\phi(A).$ Then $a,\phi(a),\phi(\phi(a)),\phi(\phi(\phi(a)))\dots$ is an infinite sequence of distinct elements of $A$, showing that $|A|\ge\aleph_0.$
-Your question would be more interesting if you defined "infinite set" in the usual way, namely, a set $A$ is infinite if there is no bijection between $A$ and any natural number.

-REPLY [6 votes]: The "most natural axiom" to add is perhaps the axiom of countable choice, which asserts the existence of a choice function for every countable family of non-empty sets.
-I'd even argue that the true "natural axiom" would be the principle of dependent choice, which is a slight (but significant) strengthening of countable choice that posits that recursive definitions work as they should.
-The reason that the latter is "more natural" is that it allows the usual proof to go through. If $A$ is infinite, let $a_0\in A$; then if $a_0,\ldots,a_n$ were chosen, we choose $a_{n+1}\in A\setminus\{a_0,\ldots,a_n\}$. We can transform this proof to only use countable choice, but the proof is inherently different, since we are not allowed to use such recursive selection (where each choice depends on the previous ones, thus the name "dependent choice").
-(Note, by the way, that the definitions of "finite" and "infinite" branch out without the axiom of choice. Being equipotent with a bounded set of natural numbers is not the same as having no self-injection which is not surjective, without assuming some amount of choice. In fact, that is just the point of infinite sets without countably infinite subsets.)
-Addressing the edit.
-You are asking why the usual definition of finiteness is the most natural one. Well, there are two points here:
-
-We can define the natural numbers via pure set theoretic means, so the statement "equipotent with a bounded set of natural numbers" mentions only sets and no numbers at all.
-There are other definitions which provably give you the same definition of finiteness. For example: $A$ is finite if and only if every non-empty $U\subseteq\mathcal P(A)$ has a $\subseteq$-minimal element (or, $\mathcal P(A)$ is well-founded with $\subseteq$).
-
-So why is this the natural definition? Well, thinking outside of set theory, this is how we think about finite sets. And whereas the definition of Dedekind (the one you originally cited) gives us some notion of finiteness, it turns out that we need to use the axiom of choice to prove the equivalence. This, to me, says that philosophically we sort of miss the target there.
-As it turns out there is a nice hierarchy (which is mostly linear) of definitions of finiteness which branch and differ without the axiom of choice. But they all have the same two properties:
-
-1. Nothing is worthy of being finite if it is not Dedekind-finite (or, a set cannot be called finite if there is an injection from $\Bbb N$ into the set). And
-2. Sets that are finite in the usual sense must satisfy whatever we define as finiteness. Otherwise we miss the point of the definition (because the definition comes to model our intuition, and not vice versa).<|endoftext|>
-TITLE: Number of groups of order $9261$?
-QUESTION [6 upvotes]: I checked the odd numbers up to $10\,000$ for whether they are group-perfect ($gnu(n)=n$, where $gnu(n)$ is the number of groups of order $n$), and
-the only case I could not decide is
-$$9261=3^3\times 7^3$$
-
-What is $gnu(9261)=gnu(3^3\times 7^3)$ ?
-
-I would already be content with a proof of $gnu(9261)<9261$, which is my
-conjecture because $gnu(3087)=46$ is small and $9261=3\times 3087$.

-REPLY [3 votes]: I think the answer is $\operatorname{gnu}( 9261 ) = 215$. By itself, the grpconst package function ConstructAllGroups may produce a list in which not all groups have been distinguished up to isomorphism. Thus it is not sufficient to simply count the number of elements of the list it returns. One must also check whether there are any nested lists consisting of groups whose isomorphism has not been decided by the algorithm. The GrpConst package then provides the function DistinguishGroups that employs stronger methods to resolve any undecided isomorphisms remaining.
-Here is a simplified version of the frontend that I use that employs these considerations.
-CountGroups := function( n )
-    local L, trouble, okay, c;
-
-    L := ConstructAllGroups( n );
-    okay := Filtered( L, x -> not IsList( x ) );
-    trouble := Filtered( L, IsList );
-
-    if Length( trouble ) > 0 then
-        for c in trouble do
-            c := DistinguishGroups( c, true );
-            if ForAny( c, IsList ) then
-                Error( "could not resolve some isomorphisms" );
-            fi;
-            Append( okay, c );
-        od;
-    fi;
-
-    return Length( okay );
-end;;
-
-Using this, we obtain the claimed result.
-gap> CountGroups( 9261 );
-215
-
-Having said all that, I confess I am far from an expert on GAP. If I have made a mistake, I should be happy to improve my understanding.<|endoftext|>
-TITLE: Class models of $\mathsf{ZFC}$ and consistency results
-QUESTION [5 upvotes]: First of all, I'm only starting to study independence results in set theory, and there is one obstacle that confuses me a lot. Probably such questions have already been asked, but I haven't found anything that clarifies it for me. I'm new to this subject, so please feel free to correct me if I'm using wrong terminology or reasoning.
-Here is how I understand the reasoning behind proving that a statement $\varphi$ is not a theorem of a theory $T$. From mathematical logic we know that $T \not\vdash \varphi$ if and only if $\mbox{Con}(T + \neg \varphi)$. From Gödel's Completeness theorem it follows that $\mbox{Con}(T)$ holds if and only if $T$ has a model (that is, a set with a collection of operations, relations and constants). So in order to prove something like $\mathsf{ZFC} \not\vdash \mathsf{CH}$ we should produce a model of $\mathsf{ZFC} + \neg\mathsf{CH}$. On the other hand, by Gödel's Incompleteness theorem this can't be done inside $\mathsf{ZFC}$. So all consistency results should be relative and be of the form $$\mbox{Con}(\mathsf{ZFC}) \rightarrow \mbox{Con}(\mathsf{ZFC} + \varphi).$$
-As I've understood it, when the method of forcing is used, we start with an assumption like $\mbox{Con}(\mathsf{ZFC})$ that gives us a (set) model $(M, \in)$ of $\mathsf{ZFC}$, from which (using the Löwenheim–Skolem theorem) we can get a countable model of $\mathsf{ZFC}$. After that we extend this model to another model $(N, \in)$ that satisfies some desired statement $\varphi$, obtaining a model of $\mathsf{ZFC} + \varphi$. Such reasoning looks perfectly fine to me, since we are working with set models of $\mathsf{ZFC}$ whose existence is based on the assumption $\mbox{Con}(\mathsf{ZFC})$.
-The question is about class models of $\mathsf{ZFC}$. For example, there is Gödel's constructible universe $L$, which is a model for $\mathsf{ZFC} + V = L$ (and also for $\mathsf{AC}$ and $\mathsf{CH}$). My main question is
-
-How can we formally deduce from the existence of such a (class) model that $$\mbox{Con}(\mathsf{ZFC} + V = L)?$$
- In other words, how does the existence of a class model for $T$ imply $\mbox{Con}(T)$?
-
-Is there some kind of Gödel's completeness theorem for class models? Or again, do we start with some set model of $\mathsf{ZFC}$ and construct $L$ inside it? If so, how does this change the proof that $L$ is a model of $\mathsf{ZFC}$? It seems like I'm missing some basic fact (and commonly used technique) that allows one to go from class models to set models. I would be very grateful if someone could explain to me in detail how this issue is resolved. Thanks in advance!

-REPLY [7 votes]: The theorem that $\operatorname{Con}\sf (ZF)$ implies $\operatorname{Con}\sf (ZF+\mathit{V=L})$ is a meta-theorem.
-It is formulated in the meta-theory, and not internally. You are absolutely right that we can't quite formulate this internally without violating Gödel's incompleteness theorem. But the nice thing is that what we can prove is that for every axiom $\varphi$ of $\sf ZFC$, $\sf ZF\vdash\varphi^L$, where $L$ is the class defined via the axiom of constructibility.
-So while $\sf ZF$ does not prove that $L\models\sf ZFC$, it does prove that every axiom of $\sf ZFC$ is relatively true. This tells us even a bit more than just the meta-theorem, which is great.<|endoftext|>
-TITLE: Involutions of full matrix ring $M_n(R)$
-QUESTION [6 upvotes]: Hello, I want to describe all involutions of the full matrix ring over a field, and all involutions of the matrix polynomial ring.
-Is it true or false that every involution of the full matrix ring $T = M_n(R)$ over a field $R$ has the following form
-$$
-A \to C^{-1}A^TC,
-$$
-for all $A\in M_n(R)$ and some fixed matrix $C$?
-What can we say about involutions of the matrix-polynomial ring $T[x]$?

-REPLY [3 votes]: Every involution of $T$ is of the form $\phi(A)=C^{-1}\sigma(A^T)C$ for an invertible matrix $C$, and a field automorphism $\sigma\colon R\to R$ satisfying $\sigma^2=\iota$.
-Note that, for an involution $\phi$, the map $\theta\colon T\to T$ defined by $\theta(A)=\phi(A^T)$ is a ring automorphism. So $\theta$ preserves the center of $T$. As the center of $T$ consists of the scalar matrices $\lambda I$ for $\lambda\in R$, we have $\theta(\lambda I)=\sigma(\lambda)I$ for a field automorphism $\sigma$. As $\phi\phi(\lambda I)=\lambda I$, we have $\sigma^2(\lambda)=\lambda$. Then, $\chi(A)=\phi(\sigma(A^T))$ is an automorphism of $T$, which preserves $\lambda I$ for each $\lambda\in R$, so is an automorphism of $R$-algebras. So, $\chi$ is an inner automorphism, meaning it is of the form $\chi(A)=C^{-1}AC$ for an invertible $C$. Therefore, $\phi(A)=C^{-1}\sigma(A^T)C$.
-In fact, $\phi\phi(A)=A$, so $C^{-1}\sigma(C)^TA\sigma(C)^{-T}C=A$, showing that $C^{-1}\sigma(C)^T$ is in the center of $T$, so is equal to $\lambda I$ for some $\lambda\in R$. So, $C^T=\lambda \sigma(C)$. Taking the transpose, $C=C^{TT}=\lambda\sigma(C^T)=\lambda^2 C$.
-So, $\lambda=\pm1$ and either $C^T=\sigma(C)$ or $C^T=-\sigma(C)$.
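-For orientation, the familiar involutions all fit this classification (standard examples, added here only for illustration): the transpose $A\mapsto A^T$ is the case $C=I$, $\sigma=\iota$; the conjugate transpose on $M_n(\mathbb{C})$ is the case $C=I$ with $\sigma$ equal to complex conjugation; and the symplectic involution $A\mapsto J^{-1}A^TJ$ with $J^T=-J$ is the case $C=J$, $\sigma=\iota$ (here $\lambda=-1$).<|endoftext|>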
-TITLE: properties of the integral (Rudin theorem 6.12c)
-QUESTION [6 upvotes]: If $f\in\mathscr{R}(\alpha)$ on $[a,b]$ and if $a<c<b$, then $f\in\mathscr{R}(\alpha)$ on $[a,c]$ and on $[c,b]$, and $\int_{a}^{c}f\,d\alpha+\int_{c}^{b}f\,d\alpha=\int_{a}^{b}f\,d\alpha$. How can one prove this statement?

-REPLY: Whether the claim holds depends on which definition of the Riemann-Stieltjes integral is used. Apostol's text uses the following definition:
-Definition 1: Let $f$ and $\alpha$ be defined and bounded on $[a, b]$. We say that $f$ is Riemann integrable with respect to $\alpha$ on $[a, b]$, and we write "$f \in R(\alpha)$ on $[a, b]$", if there exists a number $I$ having the following property: For every $\epsilon > 0$ there exists a partition $P_{\epsilon}$ of $[a, b]$ such that for all partitions $P = \{x_{0}, x_{1}, x_{2},\ldots, x_{n}\}$ of $[a, b]$ with $P \supseteq P_{\epsilon}$ and for every choice of $t_{k} \in [x_{k -1}, x_{k}]$ we have $$|S(P, f, \alpha) - I| < \epsilon$$
-When $\alpha$ is monotone on $[a, b]$ this definition coincides with the one based on upper and lower integrals given by Rudin.
-Apostol mentions another definition of the Riemann-Stieltjes integral in the exercises (Problem 7.3, page 174, Mathematical Analysis):
-Definition 2: Let $f$ and $\alpha$ be defined and bounded on $[a, b]$. We say that $f$ is Riemann integrable with respect to $\alpha$ on $[a, b]$, and we write "$f \in R(\alpha)$ on $[a, b]$" if there exists a number $I$ having the following property: For every $\epsilon > 0$ there exists a number $\delta > 0$ such that for all partitions $P = \{x_{0}, x_{1}, x_{2}, \ldots, x_{n}\}$ of $[a, b]$ with norm $||P|| = \max_{k = 1}^{n}(x_{k} - x_{k - 1})< \delta$ and any choice of points $t_{k} \in [x_{k - 1}, x_{k}]$ we have $$|S(P, f, \alpha) - I| < \epsilon$$
-Definition 1 is more general than definition 2 in the sense that if $f \in R(\alpha)$ on $[a, b]$ based on definition 2 then $f \in R(\alpha)$ on $[a, b]$ based on definition 1, but there are functions $f, \alpha$ such that $f \in R(\alpha)$ on $[a, b]$ according to definition 1 and $f \notin R(\alpha)$ on $[a, b]$ based on definition 2. An example is given in Problem 7.3(b), Page 174, Apostol's Mathematical Analysis, which is integrable according to definition 1, but not according to definition 2 (this example is incidentally the same as the one given by Martin in his answer to the current question).
-Moreover if $\alpha(x) = x$ (so that we are just talking about plain Riemann integrability of $f$ on $[a, b]$) then both the definitions are equivalent (the proof is not trivial/obvious and is given in Problem 7.4, page 174, Apostol's Mathematical Analysis).
-Apostol uses definition 1 in his text and proves the following theorem:
-Theorem: Assume that $c \in (a, b)$. If two of the integrals in the equation below exist, then the third also exists and we have $$\int_{a}^{c}f\,d\alpha + \int_{c}^{b}f\,d\alpha = \int_{a}^{b}f\,d\alpha$$
-Moreover at the end of this theorem Apostol mentions the following note:
-Note: The preceding type of argument cannot be used to prove that the integral $\int_{a}^{c}f\,d\alpha$ exists whenever $\int_{a}^{b}f\,d\alpha$ exists. The conclusion is correct however.
-Apostol gives a proof of the existence of $\int_{a}^{c}f\,d\alpha$ based on the existence of $\int_{a}^{b}f\,d\alpha$ in the case when $\alpha$ is of bounded variation on $[a, b]$. However the proof for a general bounded function $\alpha$ is based on a different idea (Cauchy's criterion of integrability of interval functions, see page 22, Functional Analysis by F. Riesz and B. Nagy).
-Based on definition 1, the claim by OP is correct. Also the counter-example by Martin in his answer is wrong. But if we use definition 2, then Martin's counter-example is correct and OP's claim is wrong. Thus it is a matter of definitions.
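-To see the difference concretely, here is an example of the kind referred to above (I state one standard construction; Apostol's Problem 7.3(b) may differ in details). Take $c\in(a,b)$ and let $f$ be the indicator function of $(c,b]$ and $\alpha$ the indicator function of $[c,b]$. For any partition $P$ having $c$ as a division point, the only nonzero increment of $\alpha$ occurs on the subinterval $[x_{j-1},c]$, where $f$ vanishes, so $S(P,f,\alpha)=0$; hence $f\in R(\alpha)$ with integral $0$ under Definition 1. But a partition of arbitrarily small norm whose open subinterval contains $c$ gives $S(P,f,\alpha)=0$ or $1$ according to the choice of $t_k$ in that subinterval, so $f\notin R(\alpha)$ under Definition 2.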
In my opinion one should use the more general Definition 1 rather than the slightly restrictive Definition 2 (although the latter is the one given originally by Riemann).<|endoftext|>
-TITLE: Filtration of stopping time equal to the natural filtration of the stopped process
-QUESTION [6 upvotes]: Given a probability space $(\Omega,\mathcal{F},P)$ and a process $X_{t}$ defined on it, we consider the natural filtration generated by the process, $\mathcal{F}_{t}=\sigma (X_{s}:s\leq t)$. Let $\tau$ be a stopping time. The corresponding $\sigma$-algebra of the stopping time $\tau$ is given by
-\begin{align}
-\mathcal{F}_{\tau}=\left\{A\in \bigcup_{t\geq 0} \mathcal{F}_{t}: A\cap\{\tau \leq t\}\in \mathcal{F}_{t}\ \forall t\geq 0\right\}
-\end{align}
-Now I am considering the requirements under which
-\begin{align}
-\mathcal{F}_{\tau}=\sigma (X_{s\wedge \tau }:s\geq 0)
-\end{align}
-holds.
-Shiryaev, "Optimal Stopping Rules", Stochastic Modelling and Applied Probability 8, Springer 2008,
-makes the assumption that $\Omega$ has to be sufficiently rich; that means for all $t\geq 0$ and $\omega \in \Omega$ there exists $\omega'\in\Omega$ such that
-\begin{align}
-X_{s}(\omega')=X_{s\wedge t}(\omega)\ \forall s\geq 0
-\end{align}
-holds; see Theorem 6 of Shiryaev. Then $\mathcal{F}_{\tau}=\sigma (X_{s\wedge \tau }:s\geq 0)$ should hold. Shiryaev shows this for an arbitrary measurable space and a measurable process $X$ fulfilling this condition.
-Also D. Stroock, S. Varadhan, "Multidimensional Diffusion Processes", Lemma 1.3.3, prove it, where there $\Theta$ is the space of continuous trajectories from $[0,\infty)$ to $\mathbb{R}^{d}$. In the proof Stroock also uses the assumption of the sufficient richness of $\Omega$ under $X$, but he mentions it only incidentally, as if it were clear for continuous $X$.
-I expect that this property also holds for càdlàg processes $X$, i.e. that
-\begin{align}
-\mathcal{F}_{\tau}=\sigma (X_{s\wedge \tau }:s\geq 0)
-\end{align}
-still holds, but I don't know how to prove it.

-REPLY [3 votes]: The "saturation" property assumed by Shiryaev is indeed clear for $\Omega$ consisting of the space of càdlàg paths from $[0,\infty)$ to $\Bbb R^d$, and $X_t(\omega)=\omega(t)$ for $t\ge0$ and $\omega\in\Omega$. For, given $t\ge 0$ and $\omega\in\Omega$, define $\omega'$ to be the path "$\omega$ stopped at time $t$"; that is, $\omega'(s):=\omega(s\wedge t)$ for $s\ge 0$. This path $\omega'$ is clearly an element of $\Omega$, and $X_{s\wedge t}(\omega) =\omega(s\wedge t) =\omega'(s)=X_s(\omega')$ for each $s\ge 0$.<|endoftext|>
-TITLE: What's wrong with this proof of ZF being inconsistent?
-QUESTION [6 upvotes]: I'm studying (independently) mathematical logic, and in investigating
-self-referential statements I developed a result which I don't know
-how to interpret. I'll use the notation from Enderton's "A
-mathematical introduction to logic" (p 266-267):
-Let $T$ be any sufficiently strong recursively axiomatizable theory (PA
-or ZF will do) and let $\sf Prb_T \sigma$ (here $\sf Prb_T$
-abbreviates "provable in $T$") express that $\sigma$ has a deductive
-proof in $T$ (i.e. $\sf Prb_T \sigma$ means $T \vdash \sigma$). Let
-$\sf Cons T$ be a sentence expressing that $T$ is consistent (such as
-$\neg \sf Prb_T (0=1))$. Then using the fixed-point lemma we can get a
-sentence $\sigma$ such that:
-$$T \vdash (\sigma \leftrightarrow \sf Prb_T (\sf Cons T \rightarrow
-\neg \sf Prb_T \sigma))$$
-Now either $T \vdash \sigma$ or $T \nvdash \sigma$. I will show that
-we cannot have either, assuming $T$ is consistent.
-If $T \vdash \sigma$ then by our definition of $\sigma$ we have that
-$T \vdash \sf Prb_T (\sf Cons T \rightarrow \neg \sf Prb_T \sigma)$. This implies that $T \vdash (\sf Cons T \rightarrow \neg \sf Prb_T \sigma)$.
-But this means that we have a proof that $T$ being consistent implies $T \nvdash \sigma$, which
-contradicts the initial hypothesis of $T \vdash \sigma$. Therefore,
-assuming $T$ is consistent, we must have $T \nvdash \sigma$. But this
-too leads to a contradiction, for by our construction of $\sigma$
-we now have $T \nvdash \sf Prb_T (\sf Cons T \rightarrow \neg \sf
-Prb_T \sigma)$. This means that there is no proof that $T$ being
-consistent implies $T \nvdash \sigma$. However, we just
-provided a proof (albeit meta-mathematical in nature) testifying to
-this fact.
-So what should I logically conclude from this? I figure that this
-implies that $T$ is inconsistent. I guess there is another
-possibility: that this whole result/proof cannot be formalized, thus
-removing the contradiction (or maybe not, because then we might have
-a contradiction with the Completeness Theorem for first-order logic). I know that by Gödel's Second Incompleteness Theorem, assuming $T$ is consistent, $T \nvdash \sf Cons T$. However $T$ can still prove some facts about implications involving $\sf Cons T$ (in fact, Enderton gives one such example on the bottom of page 267). My understanding is that most meta-mathematical results such
-as Gödel's incompleteness theorem and Tarski's undefinability theorem
-can be represented and proven in the deductive calculus. Am I wrong on
-this? If I'm not, then what makes this result any different?

-REPLY [6 votes]: The problem with the argument, indeed with most of the claims of this sort, is that the internalization of logic and proofs is only as good as your standard integers, but becomes uncontrollable when we move away from them.
-What do I mean by that? We know that every real, meta-theoretic, proof corresponds to a specific natural number. And we know that $T$ itself has a predicate that is interpreted as exactly the codes for the axioms of $T$.
-But this only happens in models where the integers are standard. If you have non-standard integers, you invariably add new axioms to $T$ and you add new inference rules to your logic, and so you add new ways of proving that $0=1$.
-So the way to reconcile this is to notice that indeed a model of $\sf ZFC+\lnot\operatorname{Con}(ZFC)$ must disagree with its meta-theory about the integers (namely, it must have non-standard integers which will witness this).<|endoftext|>
-TITLE: Find the number of all subsets of $\{1, 2, \ldots,2015\}$ with $n$ elements such that the sum of the elements in the subset is divisible by 5
-QUESTION [7 upvotes]: The problem is as in the question title. Only one addition: $n$ is not divisible by $5$.
-I already have a solution involving permutations, but recently I read about generating functions and I was wondering if this problem can be solved with them.
-A similar problem is the following:
-Find the number of all subsets of $\{1, 2, \ldots, 2015\}$ such that the sum of the elements in each subset is divisible by 5. The generating function used is $${((1+x^0)(1+x^1)(1+x^2)(1+x^3)(1+x^4))}^{403}.$$
-But this function cannot be used for my problem, since we need to count how many elements have been "used" to make the subset. Can anyone help me?

-REPLY [16 votes]: A generating function solution.
-For every $S\subset\{1,2,\ldots,2015\}$ we will write $\Sigma S=\sum_{k\in S}k$.
-Let
-$$
-f(a,x) = \prod_{k=1}^{2015} (1+a^kx) =
-\sum_{S\subset\{1,\ldots,2015\}} a^{\Sigma S} x^{|S|}.
-$$
-Take the average of this function over the fifth complex roots of unity substituted for $a$. Let $\omega=e^{2\pi i/5}$; then
-$$
-\frac15\sum_{j=0}^4 f(\omega^j,x) =
-\sum_{S\subset\{1,\ldots,2015\}} \left(\frac15\sum_{j=0}^4 \big(\omega^{\Sigma S}\big)^j\right) x^{|S|} =
-\sum_{\substack{S\subset\{1,\ldots,2015\}\\\Sigma S\equiv0\pmod5}} x^{|S|}.
-\tag{$*$}
-$$
-On the RHS of $(*)$, the coefficient of $x^n$ is the number of $n$-element sets $S\subset\{1,\ldots,2015\}$ with $\Sigma S\equiv0\pmod5$.
-On the other hand,
-$$
-f(\omega^j,x)= \begin{cases} (1+x)^{2015} & \text{if } j=0 \\
-(1+x^5)^{403} & \text{if } j=1,2,3,4 \end{cases}
-$$
-so on the LHS of $(*)$ the coefficient of $x^n$ is: $\frac15\binom{2015}n$ if $n$ is co-prime with $5$, and
-$\frac15\binom{2015}n+\frac45\binom{403}{n/5}$ if $5|n$.
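-As a quick sanity check of this formula, one can verify the analogous statement for $\{1,\ldots,15\}$ (three full residue classes, so $403$ is replaced by $3$) by brute force. A small GAP session of my own (Combinations, Number, Sum and Binomial are standard GAP library functions):
-gap> for n in [1..15] do
->   cnt := Number(Combinations([1..15], n), s -> Sum(s) mod 5 = 0);
->   if n mod 5 = 0 then
->     frm := Binomial(15, n)/5 + 4*Binomial(3, n/5)/5;
->   else
->     frm := Binomial(15, n)/5;
->   fi;
->   Print(n, ": ", cnt = frm, "\n");
-> od;
-
-Every line should print true.<|endoftext|>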
-TITLE: How can I display generators or a minimal generating set with GAP?
-QUESTION [5 upvotes]: The following commands
-gap> G:=SmallGroup(6,1);
-
-gap> GeneratorsOfGroup(G);
-[ f1, f2 ]
-gap>
-
-display the generators of $G$ in a rather abstract way. The command StructureDescription helps to understand the structure of the group, but does not tell anything about generators or generating sets. In particular, how do I get the size of a minimal generating set of some group?
-
-How can I display the generators or a minimal generating set more concretely, for example $a^2=b^3=ab^2=e$ or something like that? $f_1$ and $f_2$ are not helpful for understanding the structure of a group.
-
-UPDATE: I just found an answer that could help me. The following commands give
-the relations of the group.
-gap> H:=Image(IsomorphismFpGroup(G));
-
-gap> RelatorsOfFpGroup(H);
-[ F1^2, F2^-1*F1^-1*F2*F1*F2^-1, F2^3 ]
-gap>
-
-Is this a presentation of the given group?

-REPLY [4 votes]: For a pc group (polycyclic presentation), RelatorsOfFpGroup(Range(IsomorphismFpGroup(G))) will return defining relators in these generators which can be interpreted as expressing either generator powers or generator commutators in terms of "lower order" generators. This can represent some semidirect product structure.
-Otherwise one would have to find "nice" generators by hand; the question
-Presentation for hand calculation
-might give a bit of an idea how to do so.
-You can compute a minimal (with respect to cardinality) generating set as
-gap> MinimalGeneratingSet(G);
-[ f1, f2 ]
-
-(this will work only for solvable groups; in general there is SmallGeneratingSet, which does not guarantee minimal cardinality), but in this case it will give you the same generators as before.
-MinimalGeneratingSet will only give a generating set of minimal cardinality -- it is not a generating set with respect to which a presentation would be particularly nice.<|endoftext|>
-TITLE: Basic and non basic variables in linear programming
-QUESTION [8 upvotes]: I don't understand what basic and non-basic variables are, why we single them out, what they have to do with the rank of the coefficient matrix and of the augmented matrix, why some treatments work with the linearly independent set corresponding to the decision variables, and why some involve finding the determinant of the coefficient matrix.
-And if the solution is not feasible, will it still be a basic solution?
-Any kind of help is appreciated.
-Also, kindly suggest a book/link where this concept is illustrated.
-Thank you

-REPLY [6 votes]: So in a linear programming problem, you have what is geometrically some sort of multidimensional object (polyhedron) and what is algebraically a matrix, or system of equations.
-So in such a problem, you usually want to get the extrema of the object, because it is at the corner points of this multidimensional object that the objective function (what the equations represent and are modelling) is at a max or min. So if you look at the algebra, since you have a matrix/system of equations, there is an assignment of variables (from the equation standpoint) or a solution vector (from the matrix standpoint) that identifies/is the coordinates of the extrema. Now some of these variables you will set equal to zero, and some you will solve for by setting the equations equal to each other. The ones you set equal to zero are "non-basic variables". Any one not set to zero is a "basic" variable.
-For more details, go to http://home.ubalt.edu/ntsbarsh/opre640a/partiv.htm, Ctrl-F "basic" and start reading from there. Linear programming is pretty cool, good luck!<|endoftext|>
-TITLE: Is there a way to prove vector triple product from quaternion multiplication?
-QUESTION [6 upvotes]: For pure imaginary quaternions $u, v, w$, is there a way to prove the vector triple product $u\times(v\times w) = v(u\cdot w) - w(u\cdot v)$ from the relation:
-$$uv = -u\cdot v + u\times v \text{ for $u, v \in \mathbb{R}^3$ }$$
-I tried many times but failed.

-REPLY [3 votes]: The vector triple product identity follows from quaternion associativity.
-A convenient tool for expressing quaternion statements in terms of dot and cross products is to write the quaternion as a real number and a 3-vector, as $a = (\alpha, \mathbf{a})$. Then quaternion multiplication of quaternion $b = (\beta, \mathbf{b})$ on the left by $a$ is given by
-$$
- a b =
- \begin{bmatrix}
- \alpha & -\mathbf{a}^T \\
- \mathbf{a} & \alpha I + \mathbf{a} \times
- \end{bmatrix}
- \begin{pmatrix}
- \beta \\
- \mathbf{b}
- \end{pmatrix}
- =
- \begin{pmatrix}
- \alpha\beta - \mathbf{a} \cdot \mathbf{b} \\
- \beta\mathbf{a} + \alpha \mathbf{b} + \mathbf{a} \times \mathbf{b}
- \end{pmatrix} .
-$$
-This is much simplified for purely non-real quaternions
-$a = (0,\mathbf{a})$, $b = (0,\mathbf{b})$ and $c = (0,\mathbf{c})$:
-$$
- a b =
- \begin{bmatrix}
- 0 & -\mathbf{a}^T \\
- \mathbf{a} & \mathbf{a} \times
- \end{bmatrix}
- \begin{pmatrix}
- 0 \\
- \mathbf{b}
- \end{pmatrix}
- =
- \begin{pmatrix}
- -\mathbf{a} \cdot \mathbf{b} \\
- \mathbf{a} \times \mathbf{b}
- \end{pmatrix} .
-$$
-And further,
-$$
- a b c =
- \begin{pmatrix}
- -(\mathbf{a} \times \mathbf{b}) \cdot \mathbf{c} \\
- -(\mathbf{a} \cdot \mathbf{b}) \mathbf{c} + (\mathbf{a} \times \mathbf{b}) \times \mathbf{c}
- \end{pmatrix} .
-$$
-To extract the identity from quaternion expressions, for the three cyclic permutations of three generic quaternions $a$, $b$ and $c$, write equations expressing associativity, to form a system of equations:
-$$
- \left\{
- \begin{array}{rcl}
- ( a b ) c - a ( b c ) &=& 0 \\
- ( b c ) a - b ( c a ) &=& 0 \\
- ( c a ) b - c ( a b ) &=& 0 .
- \end{array}
- \right.
-$$
-In case the three quaternions are purely non-real, write $a = (0,\mathbf{a})$, $b = (0,\mathbf{b})$ and $c = (0,\mathbf{c})$,
-for which the above system implies
-$$
- \left\{
- \begin{array}{rcl}
- -( \mathbf{a} \cdot \mathbf{b} ) \mathbf{c}
- +\mathbf{a} ( \mathbf{b} \cdot \mathbf{c} )
- + ( \mathbf{a} \times \mathbf{b} ) \times \mathbf{c}
- - \mathbf{a} \times ( \mathbf{b} \times \mathbf{c} ) &=& 0 \\
- -( \mathbf{b} \cdot \mathbf{c} ) \mathbf{a}
- +\mathbf{b} ( \mathbf{c} \cdot \mathbf{a} )
- + ( \mathbf{b} \times \mathbf{c} ) \times \mathbf{a}
- - \mathbf{b} \times ( \mathbf{c} \times \mathbf{a} ) &=& 0 \\
- -( \mathbf{c} \cdot \mathbf{a} ) \mathbf{b}
- +\mathbf{c} ( \mathbf{a} \cdot \mathbf{b} )
- + ( \mathbf{c} \times \mathbf{a} ) \times \mathbf{b}
- - \mathbf{c} \times ( \mathbf{a} \times \mathbf{b} ) &=& 0 .
- \end{array}
- \right.
-$$
-The cross product terms can be compared more easily if the
-anticommutativity of the cross product is applied:
-$$
- \left\{
- \begin{array}{rcl}
- -( \mathbf{a} \cdot \mathbf{b} ) \mathbf{c}
- +\mathbf{a} ( \mathbf{b} \cdot \mathbf{c} )
- - \mathbf{c} \times ( \mathbf{a} \times \mathbf{b} )
- - \mathbf{a} \times ( \mathbf{b} \times \mathbf{c} ) &=& 0 \\
- -( \mathbf{b} \cdot \mathbf{c} ) \mathbf{a}
- +\mathbf{b} ( \mathbf{c} \cdot \mathbf{a} )
- - \mathbf{a} \times ( \mathbf{b} \times \mathbf{c} )
- - \mathbf{b} \times ( \mathbf{c} \times \mathbf{a} ) &=& 0 \\
- -( \mathbf{c} \cdot \mathbf{a} ) \mathbf{b}
- +\mathbf{c} ( \mathbf{a} \cdot \mathbf{b} )
- - \mathbf{b} \times ( \mathbf{c} \times \mathbf{a} )
- - \mathbf{c} \times ( \mathbf{a} \times \mathbf{b} ) &=& 0 .
- \end{array}
- \right.
-$$
-The identity results from adding the first two equations of the system, and subtracting the third.<|endoftext|>
-TITLE: Conway's cosmological theorem on look-and-say sequences
-QUESTION [5 upvotes]: The most famous look-and-say sequence is
-$$1,11,21,1211,111221,\ldots$$
-where the next term in the sequence corresponds to reading off the previous term, e.g. the term after $1211$ is one $1$, one $2$, two $1$s, or $111221$.
-I was reading the wiki page on look-and-say sequences, in which it says in the section on cosmological decay:
-
-Conway's cosmological theorem: Every sequence eventually splits
- ("decays") into a sequence of "atomic elements", which are finite
- subsequences that never again interact with their neighbors. There are
- 92 elements containing the digits 1, 2, and 3 only, which John Conway
- named after the natural chemical elements. There are also two
- "transuranic" elements for each digit other than 1, 2, and 3.
-
-What is this paragraph saying? I don't understand what it means by the sequences eventually splitting.

-REPLY [5 votes]: It means that for any sequence seed $a_0$, we eventually arrive at a point where $a_k$ can be written as the concatenation $b_0c_0$, such that all future $a_m, m > k$, can similarly be written as the concatenation $b_{m-k}c_{m-k}$; that is, the end of $b_i$ never mixes with the start of $c_i$, for $i \geq 0$. (The above is not intended to limit the splits to one at a time; I don't know the exact dynamics.)
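-If you want to experiment with these dynamics yourself, the look-and-say step is easy to implement. A small GAP sketch of my own (strings in GAP are lists of characters, so only basic list operations are needed):
-LookAndSay := function( s )
-    local result, i, j;
-    result := "";
-    i := 1;
-    while i <= Length( s ) do
-        # find the end of the current run of equal digits
-        j := i;
-        while j <= Length( s ) and s[j] = s[i] do j := j + 1; od;
-        # append the run length followed by the digit itself
-        Append( result, String( j - i ) );
-        Add( result, s[i] );
-        i := j;
-    od;
-    return result;
-end;;
-
-For example, LookAndSay("1211") returns "111221", and iterating from any seed lets you look for the splits described above.<|endoftext|>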
-TITLE: Prove: $\operatorname{E}[X^2]<\infty\Longrightarrow \operatorname{E}[X]$ exists
-QUESTION [5 upvotes]: I don't know how to prove this. Let's assume that $X$ is a discrete random variable. I've just come this far: if we do a direct proof of the implication, then we start with the assumption:
-$$
-\operatorname{E}[X^2]=\sum_{x\in Im(X)} x^2\cdot p_X(x) <\infty.
-$$
-However, I don't know how I can conclude from this that $\operatorname{E}[X]$ exists.

-REPLY [4 votes]: I'm a big fan of Jensen's inequality for probability measures, which for $f(x)=x^r$ with $r\geq 1$ implies:
-$$f(E[X])\leq E[f(X)],$$
-which gives, after rearranging:
-$$E[X]\leq E[X^r]^{1/r}$$

-REPLY [2 votes]: Here's another; use the Cauchy-Schwarz inequality.
-$$E|X| = E|X|\cdot 1 \le \|1\|_2 \|X\|_2 = \left(E|X|^2\right)^{1/2}.$$<|endoftext|>
-TITLE: Smallest Circumcircle of Three Triangles
-QUESTION [10 upvotes]: What is the minimum diameter of the circumcircle about the triangle formed by the center points of three congruent equilateral triangles that do not overlap?
-The diagram is the best solution I've found so far. If the triangles have a side of length 1, the circle has diameter 0.853.

-REPLY [3 votes]: This is minimal. The tangent line is parallel to the edge of the triangle.
-The diameter of the circle is $\frac{10\sqrt{3}-6\sqrt{6}}{3}\approx0.8745232$.<|endoftext|>
-TITLE: Smallest square containing sectors of disc
-QUESTION [10 upvotes]: This question occurred to me a while ago when taking leftover slices of a pizza to go.
-Suppose you have a unit-radius circular disc divided into $n$ equal sectors ("slices"). What is the smallest square that will contain $k$ of those $n$ sectors? You are permitted to translate and rotate the slices as desired, but they must all fit within the perimeter of the square, and they must not overlap.
-For instance, if $k = 1$, $n = 4$, the answer is trivially a square of side $1$, but if $k = 1$, $n = 8$, then the smallest square has side
-$$
-\cos \left(\frac{\pi}{8}\right) = \frac{\sqrt{2+\sqrt{2}}}{2}.
-$$
-At least, I think it does. I haven't proved it yet. Anyway, I'm also interested in finding, for any $n$, which values of $k$ will not fit in a square of side less than $2$? By way of analogy in packing problems, it is known that $n^2-2$ unit squares will not fit into a square of side less than $n$. But I'm interested in pizza slices.
-If it makes it easier to start things going, I'm also interested in various solutions for $k$ for a selected fixed $n$.

-REPLY [2 votes]: Partial, incomplete answer:
-We can show that when $n$ is not a multiple of 4, we can fit at least $n-1$ slices in a square of side less than 2. If $n$ is a multiple of 4, we can fit at least $n-2$ slices in a square of side less than 2. Here is the argument:
-Assume the slices are numbered $S_1,...,S_n$.
-Case $n$ not a multiple of 4: Put all the slices together, according to the way they are numbered, forming the unit circle. Let $\ell=\lfloor\frac{3n}{4}\rfloor$, as in the first picture (image to the left). If we remove $S_\ell$, rotate the slices in the darker area by $\alpha/2$ to the left and the slices in the lighter area by $\beta/2$ to the right, we can fit the remaining $n-1$ slices in a square of side $1+\cos(\beta/2)<2$, as in the first picture (image to the right). Also in this case $1+\cos(\beta/2)<2$ is an upper bound on the size of the side of the smallest square.
-
-Case $n$ a multiple of 4: Put all the slices together, according to the way they are numbered, forming the unit circle. Let $\alpha=\frac{\pi}{n}$ and $\ell=\lfloor\frac{3n}{4}\rfloor$, as in the second picture (image to the left). We see that rotating all the slices together by $\alpha/2$ to the right and then removing $S_{n/2}$ and $S_\ell$, we can fit the remaining $n-2$ slices in a square of side less than 2, as shown in the second picture (image to the right).
Also, in this case $1+\cos(\alpha/2)<2$ is an upper bound on the size of the side of the smallest square.<|endoftext|>
-TITLE: Equivalent bimodule categories
-QUESTION [5 upvotes]: Let $A,B$ be two rings such that their categories of bimodules are equivalent:
-$$A\mathsf{-Bimod} \simeq B\mathsf{-Bimod}$$
-What can we say about $A$ and $B$? Are they isomorphic? Are they Morita-equivalent? Probably this has been studied in the literature, so this is primarily a reference request.

-REPLY [8 votes]: Let me work over a field $k$ and talk about bimodules over $k$ and $k$-linear categories (so we ask for an equivalence respecting the $k$-linear structure). Then it does not follow that $A$ and $B$ are Morita equivalent (or isomorphic) over $k$.
-The given condition is equivalent to the condition that $A \otimes_k A^{op}$ and $B \otimes_k B^{op}$ are Morita equivalent. If $A, B$ are both central simple algebras over $k$ then both of these algebras are Morita equivalent to $k$, but $A$ and $B$ themselves are Morita equivalent iff they represent the same class in the Brauer group $\text{Br}(k)$. So, for example, we can take the two representatives $\mathbb{R}, \mathbb{H}$ of the Brauer group $\text{Br}(\mathbb{R})$.
-On the other hand, if $A$ and $B$ are Morita equivalent, then $\text{Bimod}(A)$ and $\text{Bimod}(B)$ are equivalent even monoidally, because they can be described as the monoidal categories of $k$-linear cocontinuous endofunctors of $\text{Mod}(A)$ and $\text{Mod}(B)$ respectively by Eilenberg-Watts.<|endoftext|>
-TITLE: The equation $a^3 + b^3 = c^2$ has solution $(a, b, c) = (2,2,4)$.
-QUESTION [5 upvotes]: The equation $a^3 + b^3 = c^2$ has solution $(a, b, c) = (2,2,4).$ Find 3 more solutions in positive integers. [Hint: look for solutions of the form $(a,b,c) = (xz, yz, z^2)$.]
-Attempt:
-So I tried to use the hint in relation to the triple that they gave that worked. I observed that the triple that worked, $(2,2,4)$, could also be written as $$2(1,1,2)$$ so I attempted to generalize it to $$n(1,1,2)$$ I thought this might work because I am using the "origin" of where the triple $(2,2,4)$ came from. This did not work, unfortunately. So I am asking what else I could consider to devise some system to find these triples, because I highly doubt they're asking me to just shoot in the dark and pick random numbers.

-REPLY [3 votes]: This is not what your exercise is expecting.
-For the purpose of having a reference around, the following is what we know about the equation:
-$$x^3 + y^3 = z^2$$
-According to $\S 14.3.1$ of Henri Cohen's Number Theory, Vol II,
-Analytic and Modern Tools,
-the integer solutions of the above equation, subject to $\gcd(x,y) = 1$, can be grouped into three families. These families are disjoint, in the sense that any solution of the equation belongs to one and only one family.
-The $s$ and $t$ below denote coprime integers satisfying the corresponding congruences modulo $2$ and $3$. The three families/parametrizations, up to exchange of $x$ and $y$, are:
-$$\begin{align}
-1. & \begin{cases}
-x &= s(s+2t)(s^2-2ts+4t^2)\\
-y &= -4t(s-t)(s^2+ts+t^2)\\
-z &= \pm(s^2-2ts-2t^2)(s^4+2ts^3+6t^2s^2-4t^3s + 4t^4)
-\end{cases}
-& s \text{ is odd},\quad s \not\equiv t \pmod 3\\
-\\
-2. & \begin{cases}
-x &= s^4-4ts^3-6t^2s^2-4t^3s + t^4\\
-y &= 2(s^4+2ts^3+2t^3s+t^4)\\
-z &= 3(s-t)(s+t)(s^4+2s^3t+6s^2t^2+2st^3+t^4)
-\end{cases}
-& s \not\equiv t \pmod 2,\quad s \not\equiv t \pmod 3\\
-\\
-3. & \begin{cases}
-x &= -3s^4 + 6t^2s^2 + t^4\\
-y &= 3s^4+6t^2s^2-t^4\\
-z &= 6st(3s^4+t^4)
-\end{cases}
-&
-s \not\equiv t \pmod 2,\quad 3 \nmid t
-\end{align}
-$$
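-One can check that these really are polynomial identities, e.g. in GAP (my own verification; Indeterminate produces polynomial indeterminates over the rationals). For families 1 and 3, with family 2 checking out the same way:
-gap> s := Indeterminate(Rationals, 1);;  t := Indeterminate(Rationals, 2);;
-gap> x := s*(s+2*t)*(s^2-2*t*s+4*t^2);;
-gap> y := -4*t*(s-t)*(s^2+t*s+t^2);;
-gap> z := (s^2-2*t*s-2*t^2)*(s^4+2*t*s^3+6*t^2*s^2-4*t^3*s+4*t^4);;
-gap> x^3 + y^3 = z^2;
-true
-gap> x := -3*s^4 + 6*t^2*s^2 + t^4;;  y := 3*s^4 + 6*t^2*s^2 - t^4;;
-gap> z := 6*s*t*(3*s^4 + t^4);;
-gap> x^3 + y^3 = z^2;
-true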
-For more details and derivation, please consult Cohen's book mentioned above.<|endoftext|>
-TITLE: How to compute $\lim _{x\to \infty }\left(\ln\left(\frac{e^x-1}{x}\right)-x\right)$?
-QUESTION [5 upvotes]: I have a problem with this limit; I don't know what method to use, and I have no idea how to compute it. Can you explain the method and the steps used? Thanks.
-$$\lim _{x\to \infty }\left(\ln\left(\frac{e^x-1}{x}\right)-x\right)$$

-REPLY [3 votes]: Notice that $x = \ln(e^x)$, therefore the original expression can be written as
-$$\ln\left(\frac{e^x - 1}{xe^x}\right) < \ln\left(\frac{e^x}{xe^x}\right) = -\ln x.$$
-Since $-\ln x \to - \infty$ as $x \to \infty$, the limit of the original expression is bound to be $-\infty$.<|endoftext|>
-TITLE: Why doesn't Fermat's Last Theorem for polynomials entail that for integers?
-QUESTION [12 upvotes]: Fermat's Last Theorem for polynomials follows from the Stothers-Mason theorem, that is: For any integer $n\geq 3$, there do not exist polynomials $x(t), y(t), z(t)$, not all constant, such that $x(t)^n + y(t)^n = z(t)^n$ for all $t\neq 0$.
-But since we can always find suitable polynomials in $t$ such that $(x(t), y(t), z(t)) = (a,b,c)$, why can't FLT for polynomials entail that for integers?
-For example, suppose we had $7^n + 8^n = 15^n$ for some integer $n\geq 3$; we would have $t=2$ such that $(7, 8, 15) = (3t+1, 4t, 8t-1)$, which is impossible by FLT for polynomials?

-REPLY [11 votes]: The Mason-Stothers theorem holds for polynomials over $\mathbb{Q}$, and any $n\ge3$. Note that $\mathbb{Q}$ is an infinite field, so two polynomials $P(t)$ and $Q(t)$ are different if and only if there is $a\in\mathbb{Q}$ such that $P(a)\ne Q(a)$.
-So FLT for polynomials over $\mathbb{Q}$ can be stated:
-
-for any coprime and nonconstant polynomials $x(t)$, $y(t)$, $z(t)$ over $\mathbb{Q}$ and any $n\ge 3$, there exists $a\in\mathbb{Q}$ such that
- $$x(a)^n+y(a)^n\ne z(a)^n$$
-
-because this is an alternative way of saying $x(t)^n+y(t)^n\ne z(t)^n$ as polynomials.
-So the theorem does not say that the inequality holds for all $a\in\mathbb{Q}$, which would be needed to derive FLT from it.
-Another way of looking at it is considering FLT for polynomials over $\mathbb{R}$: can you perhaps derive from it that for $a,b,c\in\mathbb{R}$ and $n\ge 3$ we have $a^n+b^n\ne c^n$?
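-(To spell out why the answer to that last question is "no": for any $n\ge3$ take $a=b=1$ and $c=2^{1/n}$; then $a^n+b^n=c^n$, so FLT is simply false over $\mathbb{R}$, even though polynomial FLT over $\mathbb{R}$ holds. So no argument deriving the number version from the polynomial version can possibly work, since it would apply verbatim over $\mathbb{R}$.)<|endoftext|>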
-TITLE: Levi Civita connection intuition and motivation
-QUESTION [6 upvotes]: Can someone explain why we need the Levi-Civita connection and what it does intuitively?

-REPLY [3 votes]: I fix a Riemannian manifold $(M,g)$ of dimension $n$ and a (linear) connection $\nabla$ on $M$.
-I recall that we can write, in a local chart $(U,\varphi)$ of $M$:
-\begin{gather*}
-\forall i,j\in\{1,\dots,n\},\,g\left(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\right)=g_{ij},\\
-\forall P\in U,v,w\in T_PM,\,g_P(v,w)=g_{ij}(P)[dx^i\otimes dx^j(v,w)]\equiv\langle v,w\rangle_P;
-\end{gather*}
-where $\{x^1,\dots,x^n\}$ are the local coordinates of $M$ in $U$, and $\left\{\textstyle\frac{\partial}{\partial x^1},\dots,\frac{\partial}{\partial x^n}\right\}$ is the corresponding local frame in $TM$ (the tangent bundle of $M$).
-Then $\langle\cdot,\cdot\rangle$ is a tensor field on $M$ defined by $g$, and since $\langle Y,Z\rangle$ is a smooth function it makes sense to compute $\nabla_X\langle Y,Z\rangle$, where $X$, $Y$ and $Z$ are vector fields on $M$.
-I say that:
-
-$\nabla$ is compatible with the metric induced by $g$ if
-\begin{equation*}
-\forall X,Y,Z\in\mathfrak{X}(M),\,\nabla_X\langle Y,Z\rangle=\langle\nabla_XY,Z\rangle+\langle Y,\nabla_XZ\rangle;
-\end{equation*}
-$\nabla$ is symmetric if in any local chart $(U,\varphi)$ of $M$ the Christoffel symbols are symmetric; I mean that
-\begin{equation*}
-\forall i,j,k\in\{1,\dots,n\},\,\Gamma_{ij}^k=\Gamma_{ji}^k;
-\end{equation*}
-$\nabla$ is the Levi-Civita connection if it is symmetric and compatible with the metric induced from $g$.
-
-Let $\sigma:[0,1]\equiv I\to M$ be a smooth curve on $M$; I recall that (a generic connection) $\nabla$ induces a unique covariant derivation $\nabla^{\sigma}$ along $\sigma$, and via $\nabla^{\sigma}$ you can define the parallel transport $\widetilde{\sigma}$ of the vector fields on $M$ along $\sigma$.
-One can prove that $\nabla$ is compatible with the metric induced from $g$ if and only if for any smooth curve $\sigma$ on $M$, $\widetilde{\sigma}$ is an isometry! Moreover, if $\nabla$ is the Levi-Civita connection on $M$ and $f$ is an isometry, one can prove that:
-\begin{equation*}
-\forall V\in\mathfrak{X}(\sigma),\,df(\nabla^{\sigma}V)=\nabla^{f\circ\sigma}(df\,V),
-\end{equation*}
-where $\mathfrak{X}(\sigma)$ is the vector space of vector fields on $M$ along $\sigma$. Or in a general setting:
-\begin{equation*}
-\forall X,Y\in\mathfrak{X}(M),\,df(\nabla_XY)=\nabla_{df\,X}(df\,Y).
-\end{equation*}
-Roughly speaking, the Levi-Civita connection and the connections induced on the smooth curves on $M$ commute with the differential of an isometry!
-However, in general $\widetilde{\sigma}$ is a linear isomorphism between $E_{\sigma(0)}$ and $E_{\sigma(1)}$, whenever $\nabla$ is a connection on a vector bundle $E$ over $M$.
-Some references:
-
-Kobayashi and Nomizu - Foundations of Differential Geometry; volume 1, chapters 2, 3 and 4; I have a doubt about chapter 5.
-Kolář, Michor and Slovák - Natural Operations in Differential Geometry; chapters 3 and 10; available at http://www.emis.de/monographs/KSM/kmsbookh.pdf.
-Spivak - A Comprehensive Introduction to Differential Geometry; volume II, chapter 6.
-
-If you would like, I can explain a Lagrangian mechanical application of the Levi-Civita connection...
-Is it all clear?
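-Let me also record, for reference, that the two conditions above determine $\nabla$ uniquely (this is the fundamental theorem of Riemannian geometry): compatibility and symmetry force the Koszul formula
-\begin{equation*}
-2\langle\nabla_XY,Z\rangle=X\langle Y,Z\rangle+Y\langle Z,X\rangle-Z\langle X,Y\rangle+\langle[X,Y],Z\rangle-\langle[Y,Z],X\rangle+\langle[Z,X],Y\rangle,
-\end{equation*}
-which determines $\langle\nabla_XY,Z\rangle$, and hence $\nabla_XY$, purely in terms of the metric and Lie brackets.<|endoftext|>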
Particularly, if $f$ is a real-valued function of one variable defined in some neighborhood of $a$, and if $f$ is invertible in some neighborhood of $a$, then the line $x = a$ should be tangent to the graph $y = f(x)$ at $a$ if and only if the line $y = b = f(a)$ is tangent to the graph $y = f^{-1}(x)$ at $b$. -For an elementary calculus course I'd want: - -$f$ continuous in some neighborhood of $a$; -$f$ invertible in some neighborhood of $a$; -$f'(a) = \pm\infty$, i.e., $(f^{-1})'(b) = 0$ (the graph $y = f^{-1}(x)$ has $y = a$ as horizontal tangent). - - -Condition 1 does not guarantee invertibility near $a$ (as the cusp shows), so in my book it's out. -Condition 2 implies all three items of my wish list. ($f$ is implicitly assumed differentiable in some neighborhood of $a$; the derivative condition guarantees the derivative doesn't change sign in some neighborhood of $a$, and that $f'(a) = \pm\infty$.) -Condition 3 does not imply continuity (as the step function shows), so it's out.<|endoftext|> -TITLE: cutoff function vs mollifiers -QUESTION [14 upvotes]: $\boldsymbol{Q_1}$ What are cutoff functions? What are mollifiers? I cannot distinguish the two. Could anyone give some concrete/simple examples of cutoff functions and how they differ from mollifiers? - $\boldsymbol{\text{I did check wiki (so please, I do not want wiki type answer)}}$ -$\boldsymbol{Q_2}$ How to obtain compact support using a cutoff function? Could anyone give an example? Or take a look at the last sentence of Lemma 1.5 and explain what it means mathematically? - -Many Thanks! - -REPLY [18 votes]: There are a few related concepts here: -1) A bump function is a term for a smooth function with compact support. The set of all bump functions forms a vector space. If these functions are on $\mathbb{R}^n$, then it is often denoted $C_c^\infty(\mathbb{R}^n)$. In distribution theory, this is what is most commonly referred to when one refers to test functions, but this second usage depends on context. -2) A smooth cutoff function is a bump function, but with additional properties. Namely, a smooth cutoff function $f$ should satisfy $0 \leq f \leq 1$ everywhere, equal to $1$ on a small neighborhood of a compact set $K$, and equal to $0$ outside of a compact set $K'$. A general cutoff function has these properties but is only required to be continuous. The existence of cutoff functions is the subject of a famous theorem, Urysohn's lemma. -3) A mollifier is a function $\phi$ satisfying certain properties: $\phi$ is smooth, compactly supported (i.e. a bump function), integrates to $1$, and satisfies -$$ -\lim_{\varepsilon\to 0} \frac{1}{\varepsilon^n}\phi(x/\varepsilon) = \delta(x) -$$ -where $\delta$ denotes the Dirac delta and the equation is interpreted in the sense of distributions; namely, it should hold when integrated against a suitable space of test functions, such as $C_c^\infty(\mathbb{R}^n)$ or the Schwartz functions. This is the definition taken in the Wikipedia article on mollifiers. The function given in the section "Concrete example" is sometimes called the standard mollifier. -4) An approximate identity is pretty much a mollifier, but we don't require it to be compactly supported. Also, an approximate identity can be defined as a single function, or as a sequence of functions.
For example, the Poisson kernel on the half-space, -$$ -\mathcal{P}(x) = \frac{c_n}{(1+|x|^2)^{\frac{n+1}{2}}} -$$ -(where $c_n$ normalizes the function so it integrates to $1$) is an approximate identity that is defined using a single function; one then obtains a sequence of approximations to the identity by defining -$$ -\mathcal{P}_y(x) = \frac{1}{y^n}\mathcal{P}(x/y) = \frac{c_ny}{(y^2 + |x|^2)^{\frac{n+1}{2}}}. -$$ -(This function can also be what is meant when one hears "Poisson kernel.") Notice how $\mathcal{P}$ and $\mathcal{P}_y$ do not have compact support. Yet they enjoy most of the properties a compactly supported mollifier would have: in particular, the "smoothing property" and the "approximation of identity" properties are satisfied (with appropriate changes made). Other good examples of approximate identities are the Fejér kernel and the heat kernel, AKA the Gauss-Weierstrass kernel. -There is some blending of vocabulary between 3) and 4). The way we have defined things, every mollifier is an approximate identity, but not conversely. Sometimes when someone says "approximate identity" they mean "mollifier" in the sense described here. But I think it's comparatively rare to hear the term "mollifier" for a kernel without compact support. Approximate identity also has a definition in functional analysis, which should generalize some of these notions in the $C^*$-algebra setting. The point is, whether you distinguish 3) and 4) is pretty much up to personal preference. -These concepts are also related to each other via convolutions. A common way to construct a smooth cutoff function is to take the convolution of a characteristic function (AKA indicator function) with a mollifier or an approximate identity, and use the fact that this convolution approximates the original function pointwise under suitable assumptions.<|endoftext|> -TITLE: Which of the numbers $300!$ and $100^{300}$ is greater -QUESTION [14 upvotes]: Determine which of the two numbers $300!$ and $100^{300}$ is greater. -My attempt: The numbers from $100$ to $300$ are all at least $100$, but I am not able to justify the contribution of the numbers between $1$ and $100$. -REPLY [10 votes]: $\displaystyle e^x=1+\frac{x}{1!}+\frac{x^2}{2!}+\cdots+\frac{x^n}{n!}+\frac{x^{n+1}}{(n+1)!}+\cdots $ -$$e^x>\frac{x^n}{n!}$$ so for $x=n$ -$$n!>\bigg(\frac{n}{e}\bigg)^n>\bigg(\frac{n}{3}\bigg)^n.$$ -For $n=300$ we have $\color{red}{300!>(100)^{300}}$<|endoftext|> -TITLE: Inequality with three variables -QUESTION [10 upvotes]: Let $a,b,c\ge 0$, show that -$$\sqrt{a^3+2}+\sqrt{b^3+2}+\sqrt{c^3+2}\ge \sqrt{\dfrac{9+3\sqrt{3}}{2}(a^2+b^2+c^2)}$$ -REPLY [8 votes]: This inequality can actually be handled by standard calculus methods. It's a lot of work, but it does work. Just set up the function -$$f(x,y,z) =\sqrt{x^3+2}+\sqrt{y^3+2}+\sqrt{z^3+2}-\sqrt{\frac{9+3\sqrt{3}}{2}(x^2+y^2+z^2)}$$ -and search for its minimum on the region $(x,y,z)\in [0,\infty)^3$. -First, we look for any critical points in the interior of that region. The gradient is -$$\nabla f(x,y,z) = \left(\frac{3x^2}{2\sqrt{x^3+2}}-\sqrt{\frac{9+3\sqrt{3}}{2}}\cdot\frac{2x}{2\sqrt{x^2+y^2+z^2}},\cdots,\cdots\right)$$ -where the $\cdots$ represent the cyclically shifted expressions with $y$ and then $z$ in the places $x$ sticks out.
Clearing the denominators, this first coordinate of the gradient is zero when -$$\sqrt{2}\cdot \sqrt{9+3\sqrt{3}}\cdot x\sqrt{x^3+2}=3x^2\sqrt{x^2+y^2+z^2}$$ -$$\frac{2x^3+4}{x^2}=\frac{9}{9+3\sqrt{3}}\left(x^2+y^2+z^2\right)$$ -Dividing by $x^2$ as we did doesn't lose any potential solutions since we're looking specifically for critical points in the interior $x,y,z>0$. Now, the right side of that is symmetric in $x,y,z$ and the left side is simply a function of $x$. Applying this to the other two coordinates, we get -$$\frac{9}{9+3\sqrt{3}}\left(x^2+y^2+z^2\right) = 2x+\frac{4}{x^2} = 2y+\frac{4}{y^2} = 2z+\frac{4}{z^2}$$ -The function $q(t)=2t+\frac{4}{t^2}$ is, unfortunately for us, not monotone. It has a minimum at $\sqrt[3]{4}$, and increases in both directions away from that. Thus, for a given value $k>3\sqrt[3]{4}$, there are two solutions to $q(t)=k$ to choose values of $x,y,z$ from if $x^2+y^2+z^2$ is fixed. We will count points based on how many of $x,y,z$ are on each side of $\sqrt[3]{4}$. -If all three of $x,y,z$ are $\le \sqrt[3]{4}$ or all three are $\ge \sqrt[3]{4}$, then $x=y=z$ and $x$ is a root of the polynomial equation $\frac{9}{3+\sqrt{3}}x^4-2x^3-4=0$. That polynomial has exactly one positive root $r$, at about $r\approx 1.582$. Evaluating there, we find $f(r,r,r)\approx 0.0233>0$. -If exactly two of $x,y,z$ are on one side of $\sqrt[3]{4}$, suppose WLOG those are $x$ and $y$. Then, solving the equation $2x+\frac{4}{x^2}=2z+\frac{4}{z^2}$, we find $z=\frac{1+\sqrt{2x^3+1}}{x^2}$. Substituting this in, we seek solutions to -$$\frac{3}{3+\sqrt{3}}\left(2x^2+z^2\right)=2x+\frac{4}{x^2}$$ -$$\frac{3}{3+\sqrt{3}}\left(2x^2+\frac{1+2x^3+1+2\sqrt{2x^3+1}}{x^4}\right)=2x+\frac{4}{x^2}$$ -$$\frac{3}{3+\sqrt{3}}\left(2x^6+2x^3+2+2\sqrt{2x^3+1}\right)=2x^5+4x^2$$ -$$\frac{3}{3+\sqrt{3}}\left(u^2+u+1+\sqrt{2u+1}\right)=(u+2)\cdot u^{\frac23}$$ -setting $u=x^3$ for convenience in the last equation. Looking for solutions numerically, I started the iteration at $1$... and that was exactly a root. Testing $f$ at $(1,1,1+\sqrt{3})$, it's exactly zero. We have found the equality case. -Numerical calculations find another root at $u\approx 3.886$, for a triple $(x,y,z)\approx (1.572,1.572,1.603)$ and $f(x,y,z)\approx 0.0233$. That's not enough precision to tell whether it's better or worse than the symmetric critical point, but I've got more digits in my spreadsheet. To six significant digits, $f$ is $0.0232638$ at the symmetric critical point and $0.0232641$ at the nearby asymmetric critical points. The symmetric point is slightly better. -And that's it; no more interior critical points. In particular, there are no interior critical points with two or more of $x,y,z$ greater than $\sqrt[3]{4}$. -Part 2 -Now we look at the boundary of the region. The two-dimensional boundary has three faces, each of which has exactly one of the $x,y,z$ zero. WLOG, look at the face $x>0,y>0,z=0$. -Here, we eliminate $z$ and look at the two-dimensional gradient of $g(x,y)=f(x,y,0)$. The gradient is almost the same: -$$\nabla g(x,y) = \left(\frac{3x^2}{2\sqrt{x^3+2}}-\sqrt{\frac{9+3\sqrt{3}}{2}}\cdot\frac{2x}{2\sqrt{x^2+y^2}},\cdots\right)$$ -This is zero when -$$\frac{9}{9+3\sqrt{3}}\left(x^2+y^2\right) = 2x+\frac{4}{x^2} = 2y+\frac{4}{y^2}$$ -again not needing any truly new calculation. Again, we have two options. -If $x=y$, then $x$ is a root of the polynomial equation $\frac{6}{3+\sqrt{3}}x^4-2x^3-4=0$. That has exactly one positive root, at $x\approx 1.982$ for $g(x,x)\approx 0.2030$.
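(The numerical roots and values quoted in this answer are easy to reproduce. The following bisection sketch is an added check, not part of the original computation; the bracketing interval $[1,2]$ is picked by inspection of signs.)

from math import sqrt

def f(x, y, z):
    c = sqrt((9 + 3 * sqrt(3)) / 2)
    return (sqrt(x**3 + 2) + sqrt(y**3 + 2) + sqrt(z**3 + 2)
            - c * sqrt(x * x + y * y + z * z))

def bisect(h, lo, hi, steps=100):
    # plain bisection; assumes h(lo) < 0 < h(hi)
    for _ in range(steps):
        mid = (lo + hi) / 2
        if h(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# positive root of 9/(3+sqrt(3)) x^4 - 2 x^3 - 4 = 0, the symmetric critical point
r = bisect(lambda x: 9 / (3 + sqrt(3)) * x**4 - 2 * x**3 - 4, 1, 2)
print(r, f(r, r, r))           # about 1.582 and 0.0232638
print(f(1, 1, 1 + sqrt(3)))    # the equality case, about 0.0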
-Alternatively, if $x\neq y$, then $y=\frac{1+\sqrt{2x^3+1}}{x^2}$ and we seek solutions to -$$\frac{3}{3+\sqrt{3}}\left(x^2+y^2\right)=2x+\frac{4}{x^2}$$ -$$\frac{3}{3+\sqrt{3}}\left(x^2+\frac{1+2x^3+1+2\sqrt{2x^3+1}}{x^4}\right)=2x+\frac{4}{x^2}$$ -$$\frac{3}{3+\sqrt{3}}\left(x^6+2x^3+2+2\sqrt{2x^3+1}\right)=2x^5+4x^2$$ -$$\frac{3}{3+\sqrt{3}}\left(\frac12u^2+u+1+\sqrt{2u+1}\right)=(u+2)\cdot u^{\frac23}$$ -Numerical methods find two roots, at $u\approx 0.756$ and at $u\approx 29.70$. They lead to $x\approx 0.915$ and $x\approx 3.097$, with $y$ being the other root in each case. These critical points have $g(x,y)\approx 0.1044$. -That's the two-dimensional boundary. Now, we look at the one-dimensional boundary, where two of $x,y,z$ are zero. WLOG, let it be the ray $x>0,y=z=0$. Let $h(x)=f(x,0,0)$, so -$$h'(x)=\frac{3x^2}{2\sqrt{x^3+2}}-\sqrt{\frac{9+3\sqrt{3}}{2}}$$ -This is zero when -$$\frac{9}{9+3\sqrt{3}}\cdot x^2 = 2x+\frac{4}{x^2}$$ -$$\frac{3}{3+\sqrt{3}}\cdot x^4 - 2x^3-4 = 0$$ -This has exactly one root, at $x\approx 3.326$. At that point, $h(x)\approx 0.1956$. -Finally, the one-dimensional boundary $x=y=z=0$. At that point, $f(0,0,0)=3\sqrt{2}\approx 4.2426$. -Also, we must consider what happens as any of $x,y,z$ go to $\infty$. Suppose WLOG that $x\ge y\ge z\ge 0$. Then $f(x,y,z) < \sqrt{x^3}-\sqrt{\frac{9+3\sqrt{3}}{2}(x^2+x^2+x^2)} = x\left(\sqrt{x}-\sqrt{\frac{9+3\sqrt{3}}{2}}\right)$, which goes to $\infty$ as $x\to\infty$. The entirety of the infinite boundary is the supremum for our function. -Coda -Summarizing, the complete list of critical points: -$(x,y,z)\approx (1.582,1.582,1.582)$ for $f(x,y,z)\approx 0.0233$ -$(x,y,z)\approx (1.572,1.572,1.603)$ and permutations for $f(x,y,z)\approx 0.0233$ -$(x,y,z) = (1,1,1+\sqrt{3})$ and permutations for $f(x,y,z)=0$ -$(x,y,z)\approx (1.982,1.982,0)$ and permutations for $f(x,y,z)\approx 0.2030$ -$(x,y,z)\approx (0.915,3.097,0)$ and permutations for $f(x,y,z)\approx 0.1044$ -$(x,y,z)\approx (3.326,0,0)$ and permutations for $f(x,y,z)\approx 0.1956$ -$(x,y,z) = (0,0,0)$ for $f(x,y,z) = 3\sqrt{2}$ -As $\max(x,y,z)\to\infty$, $f(x,y,z)\to\infty$ -The smallest value of $f$ at any critical point is $0$ at $(1,1,1+\sqrt{3})$. This is therefore the global minimum of $f$, and we have the inequality $f(x,y,z)\ge 0$ for all $x,y,z$. That is the inequality we wanted to prove, and we're done.<|endoftext|> -TITLE: Prove that middle cancellation implies that the group is abelian -QUESTION [5 upvotes]: Suppose that $G$ is a group with the property that for every choice of elements in $G$, $axb=cxd$ implies $ab=cd$. -Prove that $G$ is Abelian. -(Middle cancellation implies commutativity). - -To be an Abelian group, it is required that $\forall a,b \in G$, $a. b=b .a$ -Can I request for a hint to this question? I'm not sure how to get started on this one. - -REPLY [5 votes]: Hint: Given $a$ and $b$, find an element $x$ such that $axb=bxa$. (I can think of two $x$s that would work.)<|endoftext|> -TITLE: Is there a systematic way to solve in $\bf Z$: $x_1^2+x_2^3+...+x_{n}^{n+1}=z^{n+2}$ for all $n$? -QUESTION [6 upvotes]: Is there a systematic way to solve in $\bf Z$ $$x_1^2+x_2^3+...+x_{n}^{n+1}=z^{n+2}$$ For all $n$? -It's evident that $\vec 0$ is a solution for all $n$. -But finding more solutions becomes harder even for small $n$: -When $n=2$, -$$ -x^2+y^3=z^4 -$$ -I'm already pretty lost. -After this it gets even more complicated, has anyone encountered this problem before? 
-I want all the solutions, or at least infinitely many for every equation. -REPLY [4 votes]: Here's a proof of infinitely many solutions $(x_1, x_2, x_3, \dots, x_n, z)$: -For $n = 1$ we have $(1, 1)$. For $n = 2$ we have $(3^3, 2 \cdot 3^2, 3^2)$. -For $n \ge 3$ we have $(2, 3, 1, 2, 2, 2, 2, \dots, 2)$. -(To verify: $2^2 + 3^3 + 1^4 + 2^5 + 2^6 + \dots + 2^{n+1} = 32 + 2^5 (1 + 2 + 4 + \dots + 2^{n-4}) = 32 + 2^5 (2^{n-3} - 1) = 2^{n+2} = z^{n+2}$, where we have used a geometric series to simplify the sum.) -Thus every $n \ge 1$ yields at least one solution in positive integers. To get infinitely many solutions, multiply the term $x_k$ by $m^{\frac{(n+2)!}{k+1}}$ and $z$ by $m^{(n+1)!}$; then both sides of the Diophantine equation are multiplied by $m^{(n+2)!}$, since $x_k$ carries the exponent $k+1$ and $z$ the exponent $n+2$. Choosing arbitrary $m \ge 2$ gives infinitely many solutions, as desired.<|endoftext|> -TITLE: Which of the following is not a prime number? -QUESTION [18 upvotes]: Which of the following is not a prime number? - -$a.)\ 911 \ \ \ \ \ \ \ \ \ \ b.)\ 919 \\ -\color{green}{c.)\ 943} \ \ \ \ \ \ \ \ \ \ d.)\ 947$ -This was asked in my exam and the time given per question was $1-3\ \text{ min}$. -If the time had been $10\ \text{min}$ for this question I would have solved it comfortably, but since the time was less I couldn't solve it by dividing every option by smaller primes. -I am looking for a simple and short way. -I have studied maths up to $12$th grade. -REPLY [2 votes]: As others have mentioned, you need to check divisibility only against prime numbers that are smaller than or equal to the square root of the dividend. -And since the square roots of all the dividends are smaller than $31$, you need to check divisibility of each dividend only by $2,3,5,7,11,13,17,19,23,29$. -Checking divisibility against $2,3,5$ is easy, so I'm not going to expand on it.
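(For readers who want to confirm the whole case analysis below at once, here is a short trial-division sketch in Python; it is an added check, not part of the original answer, and the helper name is our own choice.)

def smallest_prime_factor(n):
    # trial division up to sqrt(n); returns None when n is prime
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None

for n in (911, 919, 943, 947):
    d = smallest_prime_factor(n)
    print(n, "is prime" if d is None else "is divisible by %d" % d)
# prints: 911 is prime, 919 is prime, 943 is divisible by 23, 947 is prime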
-Checking divisibility against $7$ is easy in this case: - -$911$ is not divisible by $7$ since $910$ is -$919$ is not divisible by $7$ since $910$ is -$943$ is not divisible by $7$ since $950$ would have to be -$947$ is not divisible by $7$ since $940$ would have to be - -Checking divisibility against $11$ is easy in this case: - -$911$ is not divisible by $11$ since $900$ would have to be -$919$ is not divisible by $11$ since $930$ would have to be -$943$ is not divisible by $11$ since $990-44=946$ is -$947$ is not divisible by $11$ since $990-44=946$ is - -Checking divisibility against $13$ is easy in this case: - -$911$ is not divisible by $13$ since $910$ is -$919$ is not divisible by $13$ since $910$ is -$943$ is not divisible by $13$ since $930$ would have to be -$947$ is not divisible by $13$ since $960$ would have to be - -Checking divisibility against $17$ is partially easy in this case: - -$911$ - $\color\red{\text{you'll have to do the math}}$ -$919$ - $\color\red{\text{you'll have to do the math}}$ -$943$ is not divisible by $17$ since $960$ would have to be -$947$ is not divisible by $17$ since $930$ would have to be - -Checking divisibility against $19$ is easy in this case: - -$911$ is not divisible by $19$ since $930$ would have to be -$919$ is not divisible by $19$ since $900$ would have to be -$943$ is not divisible by $19$ since $950$ is -$947$ is not divisible by $19$ since $950$ is - -Checking divisibility against $23$ is easy in this case: - -$911$ is not divisible by $23$ since $920$ is -$919$ is not divisible by $23$ since $920$ is -$943$ is divisible by $23$ since $920+23$ is<|endoftext|> -TITLE: How can one prove the axiom of collection in ZFC without using the axiom of foundation? -QUESTION [10 upvotes]: Say I want to prove the axiom(s) of collection from the axiom(s) of replacement. If you have the axiom of foundation, then you can use Scott's trick to do this. -But suppose I'm working in a context without the axiom of foundation. How can I prove it then? It certainly seems like it ought to be possible using the axiom of choice instead. In particular, if you allow the axiom of global choice, it's quite easy. And if it's possible with global choice, it certainly ought to be possible with ordinary local choice! And yet so far I have not been able to make it work (again, without using the axiom of foundation). -How can you prove collection from replacement, without using foundation, and just using ordinary, local choice? -Thanks all! -REPLY [12 votes]: The answer is that you can't. -Instead of omitting foundation, I'll add atoms. You can replace them by Quine atoms and have the same results without foundation, if you prefer. -We construct the same permutation model as in this answer: start with a proper class of atoms and global choice, and make the class of atoms have only finite subsets while preserving local choice. -Now consider the predicate $\varphi(x,y)$ stating that $x<\omega$ and $y$ is a set of atoms which is equinumerous with $x$. Since $\omega$ is still a set, and we have finite sets of atoms of every finite size, $\forall x\in\omega\exists y(\varphi(x,y))$. -However, if there is some $B$ such that $B$ is a set and for every $x\in\omega$ there is some $y\in B$ such that $\varphi(x,y)$ then $\bigcup B$ is an infinite set of atoms. Contradiction.<|endoftext|> -TITLE: Book on Chaos Theory -QUESTION [8 upvotes]: Please suggest some good chaos theory book as a general read, something which can be enjoyed while on the beach.
-I am an electrical engineering postgraduate in communication theory and signal processing, so I can understand complex math. -REPLY [4 votes]: Here are my suggestions: - -Stephen Strogatz Nonlinear Dynamics and Chaos -Online SOOC chaosbook.org -Edward Ott Chaos in Dynamical Systems - -If you know nothing about nonlinear dynamics, then Strogatz is the best place to start. If you want to jump straight into chaos, then go with Edward Ott's book. I recently discovered the online SOOC--just started, but it seems very promising!<|endoftext|> -TITLE: hunting for the closed form of a series -QUESTION [6 upvotes]: Let $N$ be a positive integer, and -$$ -F(N) = \sum_{n=1}^{N} -\frac{1-\cos\left(\frac{(2n-1)\pi}{2N}\right)} -{\left[1+\cos\left(\frac{(2n-1)\pi}{2N}\right)\right]\left[5+3\cos\left(\frac{(2n-1)\pi}{2N}\right)\right]^2} -$$ -I want to know the closed form of $F(N)$. In order to make an attack on it, I calculated some values, i.e. -$N=16$, $F(N)=117$; $N=32$, $F(N)=490$; $N=64$, $F(N)=2004$; $N=128$, $F(N)=8104$; $N=256$, $F(N)=32592$; $N=512$, $F(N)=130720$, etc. -I cannot conclude the general expression of $F(N)$ from these, and the OEIS does not seem to help a lot. -EDIT: My numerical effort shows that $F(N)=\frac{N(8N-11)}{16}$. -EDIT2: -$N=17$, $F1(N)=132.8125000000000187306828$, $F2(N)=132.8125$; -$N=18$, $F1(N)=149.6250000000000023235341$, $F2(N)=149.625$; -$N=50$, $F1(N)=1215.6250000000000000000000000000000000000000000050$, $F2(N)=1215.625$. -REPLY [11 votes]: Let $\omega = e^{\frac{\pi}{2N}i}$ be a primitive $4N^{th}$ root of unity and $$\Lambda = \bigg\{\; \omega^{2k-1} : -N < k \le N \;\bigg\} = \bigg\{\; \omega^{\pm(2k-1)} : 1 \le k \le N \;\bigg\}$$ be the set of roots for the polynomial $z^{2N} + 1 = 0$. -For any integer $k$, let $c_k = \cos\left(\frac{(2k-1)\pi}{2N}\right) = \frac{\omega^{2k-1} + \omega^{1-2k}}{2}$. Notice -$$\frac{1-z}{(1+z)(5+3z)^2} = \frac12\left(\frac{1}{1+z}\right) -- \frac12\left(\frac{1}{\frac53 + z}\right) -\frac49\left(\frac{1}{\frac53 + z}\right)^2 -$$ -We can simplify our sum as -$$F(N) = \frac12 f(1) - \frac12 f\left(\frac53\right) + \frac49 f'\left(\frac53\right) -\quad\text{ where }\quad -f(\alpha) = \sum\limits_{k=1}^N \frac{1}{\alpha + c_k} -$$ -For any $\alpha > 1$, let $\beta = \alpha + \sqrt{\alpha^2 - 1}$, we have $\displaystyle\;\alpha = \frac{\beta + \beta^{-1}}{2}$. -Since $c_k = c_{1-k}$, we have -$$\begin{align} -f(\alpha) -&= \sum_{k=1}^{N} \frac{1}{\alpha + c_k} -= \frac12 \sum_{k=-N+1}^N \frac{1}{\alpha + c_k} -= \frac12\sum_{\lambda \in \Lambda}\frac{1}{\frac{\beta + \beta^{-1}}{2} + \frac{\lambda + \lambda^{-1}}{2}}\\ -&= \sum_{\lambda \in \Lambda}\frac{\lambda}{(\lambda+\beta)(\lambda+\beta^{-1})} -= \frac{1}{\beta-\beta^{-1}}\sum_{\lambda\in\Lambda}\left(\frac{\beta}{\lambda+\beta} - \frac{\beta^{-1}}{\lambda+\beta^{-1}}\right) -\end{align} -$$ -Notice $\lambda \in \Lambda \iff -\lambda \in \Lambda$ and taking logarithm derivatives, we have -$$\prod_{\lambda\in\Lambda} (z + \lambda) = -\prod_{\lambda\in\Lambda}(z - \lambda) = - z^{2N} + 1 -\quad\implies\quad \sum_{\lambda\in\Lambda} \frac{z}{\lambda + z} = \frac{2N z^{2N}}{z^{2N}+1}$$ -This leads to -$$f(\alpha) = \frac{2N}{\beta-\beta^{-1}}\left(\frac{\beta^{2N} - 1}{\beta^{2N} + 1}\right) -$$ -Notice $f(\alpha)$ is continuous at $\alpha = 1$. We find -$$f(1) = \lim_{\alpha\to 1}f(\alpha) = \lim_{\beta\to 1}f(\alpha) = \frac{2N}{2}\left(\frac{2N}{2}\right) = N^2$$ -This expression for $f'(\alpha)$ is pretty messy.
If I didn't make any mistake, -it is -$$\begin{align} -f'(\alpha) &= \frac{d\beta}{d\alpha}\frac{df(\alpha)}{d\beta} -= \frac{\beta}{\sqrt{\alpha^2-1}} \frac{d\log f(\alpha)}{d\beta} f(\alpha)\\ -&= \frac{4N}{(\beta-\beta^{-1})^2}\left(-\frac{\beta+\beta^{-1}}{\beta - \beta^{-1}} + \frac{4N\beta^{2N}}{\beta^{4N}-1}\right)\left(\frac{\beta^{2N} - 1}{\beta^{2N} + 1}\right)\end{align}$$ -For the problem at hand, we need $f(\alpha)$ and $f'(\alpha)$ at $\alpha = \frac53$ which is equivalent to $\beta = 3$. -Throwing the functions $f(\alpha)$ and $f'(\alpha)$ at a CAS and asking it to simplify the mess at $\beta = 3$, we get -$$\begin{align} -F(N) &= \frac12 N^2 - \frac{3N}{8}\left(\frac{9^N-1}{9^N+1}\right) --\frac{N \left( 5\cdot 9^{2N}-16N\cdot 9^N-5\right)}{16\left( 9^N+1\right)^2}\\ -&= \frac{(8N-11)N}{16}+\frac{N((8N+11)9^N + 11)}{8(9^N+1)^2} -\end{align} -$$ -In particular, the first few values of $F(N)$ are -$$\frac{1}{25},\frac{1188}{1681},\frac{327129}{133225},\frac{56551312}{10764961},\frac{316019361}{34869025},\frac{979687020612}{70607649841},\ldots$$ -As a double check, I have computed the values of these $F(N)$ numerically using the original expression and they match up to $50$ decimal places.<|endoftext|> -TITLE: Existence of square roots and logarithms -QUESTION [13 upvotes]: Does there exist an open connected set in the complex plane on which the identity function has an analytic square root but not an analytic logarithm? - -REPLY [5 votes]: Suppose $U \subseteq \mathbb{C}$ is a connected open set on which an analytic square root can be defined. Then it follows easily that $U$ cannot contain the origin, and every closed curve in $U$ must have even winding number around the origin. But any closed curve in the plane with nonzero winding number around the origin contains in its image a simple closed curve with winding number one around the origin, so $U$ cannot have any closed curves with nonzero winding number around the origin, and hence an analytic logarithm exists on $U$. -See also this question.<|endoftext|> -TITLE: If $\omega$ is an imaginary fifth root of unity, then $\log_2 \begin{vmatrix} 1+\omega +\omega^2+\omega^3 -\frac{1}{\omega} \\ \end{vmatrix}$ = -QUESTION [6 upvotes]: If $\omega$ is an imaginary fifth root of unity, then $$\log_2 \begin{vmatrix} -1+\omega +\omega^2+\omega^3 -\frac{1}{\omega} \\ -\end{vmatrix} =$$ -My approach : -$$\omega^5 = 1 \\ \implies 1+\omega +\omega^2 +\omega^3 + \omega^4 =0$$ -Therefore, \begin{align}\log_2 |1+\omega +\omega^2+ \omega^3 -\frac{1}{\omega}| &=\log_2 |1+\omega +\omega^2+ \omega^3 -\omega^4|\\& =\log_2|-2\omega^4|\\ &=\log_2 2 +\log_2 \omega^4 \end{align} -Now how to solve further; please suggest. - -REPLY [4 votes]: Since -$$ -\begin{align} -1+\omega+\omega^2+\omega^3+\omega^4 -&=\frac{1-\omega^5}{1-\omega}\\[3pt] -&=0 -\end{align} -$$ -we have -$$ -\begin{align} -1+\omega+\omega^2+\omega^3-\frac1\omega -&=-\omega^4-\frac1\omega\\ -&=-\frac{\omega^5+1}\omega\\ -&=-\frac2\omega -\end{align} -$$ -Therefore -$$ -\begin{align} -\log_2\left|1+\omega+\omega^2+\omega^3-\frac1\omega\right| -&=\log_2\left|-\frac2\omega\right|\\ -&=\log_2(2)\\[6pt] -&=1 -\end{align} -$$<|endoftext|> -TITLE: meaning of the notation f'(-x) -QUESTION [6 upvotes]: What does $f'(-x)$ essentially mean? - -$\frac{df(-x)}{dx}$, or -$\frac{df(x)}{d(-x)}$, or -$\frac{df(x)}{dx}|_{x=-x}$ ? - -I am not sure if all the options are different, though! :) - -EDIT 1: -Let me explain what I mean by all the options given.
In all the cases, $f(x)$ is the original function, say $f(x)=x^3+2x$. Please note that many parts of the explanation are obvious; still, I have mentioned them just to avoid any confusion. - -$\frac{df(-x)}{dx}$ : First we put $-x$ in place of $x$ in $f(x)$ to get $f(-x)=-x^3-2x$. Then we differentiate $f(-x)$ w.r.t $x$ and we get $-3x^2-2$ -$\frac{df(x)}{d(-x)}$ : This means differentiating $f(x)$ w.r.t. $-x$, so applying the chain rule we get the value as $\frac{df(x)}{dx} \times \frac{dx}{dz} = (3x^2+2) \times (-1) = -3x^2-2$, where $z=-x$ and hence $\frac{dx}{dz}=-1$. -$\frac{df(x)}{dx}|_{x=-x}$ : First we differentiate $f(x)$ w.r.t $x$ and in the derivative we put $-x$ in place of $x$ to obtain the result $3x^2+2$. - -Another option may be added, though it is not a strong candidate in this case. - -$\frac{df(-x)}{d(-x)}$ : Which means differentiating $f(-x)$ with respect to $-x$, and we get the result $3x^2+2$. - -Now I present a context where $f'(-x)$ is relevant. It's basically the question: -Show that the derivative of an odd function is even (or vice-versa). - -The question is discussed here, here and at many other places. In all the answers, to prove the statement, it is shown that $f'(-x)=-f'(x)$. From the question what I interpret is that we have to show $\frac{df(x)}{dx}|_{x=-x} = \frac{df(x)}{dx}$ or in another notation, $f'(x)|_{x=-x} = f'(x)$. The solutions, however, seem to involve differentiation with respect to $-x$, and thus, in my opinion, do not actually prove the statement at hand. -Note: I have changed the title slightly to reflect my question more accurately. - -EDIT 2: -David K's vivid explanation cleared my doubt. Here are the conclusions I have come to regarding the notations of concern: (please let me know if there is any flaw) - -(i) $f'(g(t))=f'(u)|_{u=g(t)}=\frac{d}{dz}(f(z))|_{z=g(t)}=\lim\limits_{h\to 0} \frac{f(g(t)+h)-f(g(t))}{h}$ [where $h$ is a small change in $g(t)$], hence $f'(x)$ does not necessarily mean that we have to find $f'$ by differentiating $f$ with respect to $x$. It just means that we represent $f$ in terms of any 'dummy' variable (say $v$), differentiate it w.r.t. $v$ to get $f'$, and substitute $x$ for $v$ in $f'$ to get $f'(x)$. -(ii) $f'(-x)=\frac{d}{d(-x)}(f(-x))=\frac{d}{dx}(f(x))|_{x=-x}$, hence option 3 and option 4 in my question are actually the same and both are correct answers to my original question. - -REPLY [7 votes]: Here's how I would interpret it: -$f$ is a function that takes numbers to numbers. The meaning of $f$ is independent of what you apply it to, as long as you apply it to a number. -$f'$ is the derivative of $f$, that is, $f'$ is a function from numbers to numbers. The meaning of $f'$ also is independent of what you apply it to. -$f'(-x)$ is the function $f'$ applied to $-x$. -Hence $$f'(-x) = \left.\frac{df(u)}{du}\right|_{u=-x}. \tag{1}$$ -Update: -Looking at the context in which this notation was found, I am confident -that the meaning above is what was intended. -The difficulty of applying this definition in the various proofs that -the derivative of an odd function is even -(or that the derivative of an even function is odd) -is not that this definition of the notation contradicts anything in -those proofs, but that there are so many sign changes and other -notations involved that it is hard to keep account of all of them. -The key idea in those proofs is generally the idea that you can -write $f(-x)$ as $f(g(x))$ where you have defined the function $g$ -by the equation $g(x) = -x$.
The proofs also use the chain rule, -for which a general formula is -$$ \left.\left(\frac{d}{du} (f(g(u)))\right) \right|_{u=x} -= \left.\left(\frac{d}{du}(f(u))\right)\right|_{u=g(x)} -\cdot \left.\left( \frac{d}{dv} (g(v)) \right)\right|_{v=x}. -\tag{2}$$ -Usually this is written in more compact notation, for example, -$$ \frac{d}{dx} f(g(x)) = f'(g(x)) g'(x), $$ -but in Equation $(2)$ I chose to spell out the various parts of the -formula in a much more detailed way, with extra parentheses added to -try to avoid any possibility of applying the functions and operators -in the wrong order. -For either the "$f'$ is even" or "$f'$ is odd" theorems, -we are given a rule to find $f(-x)$ in terms of $f(x)$, and -we need to show how to express $f'(-x)$ in terms of $f'(x)$. -Importantly, when we say that $f'$ is odd (for example), -we are talking about a function named $f'$ (which happens to be related -to another function named $f$), which we then evaluate on some input -number in parentheses, as in Equation $(1)$. -Example: let $f(x) = x^3 + 2x$, that is, $f$ is the function -$f : x \mapsto x^3 + 2x$, or equivalently (since we can use any -variable name in that definition), $f : t \mapsto t^3 + 2t$. -Then $f'(x) = 3x^2 + 2$, that is, $f'$ is the function -$f' : t \mapsto 3t^2 + 2$. -Therefore $f'(-x) = 3(-x)^2 + 2 = 3x^2 + 2.$ -This implies $f'(-x) = f'(x)$, that is, $f'$ is an even function. -The general proof that the derivative of an odd function is even -can be explicated as follows. -Let $f$ be an odd function, that is, $f(-x) = -f(x)$ for all $x$. -Define a function $g$ such that $g(x) = -x$, -so that $\frac{d}{dx}g(x) = -1$ and -$$f(x) = -f(-x) = -f(g(x)). \tag{3}$$ -Then -$$\frac{d}{dx}(f(x)) = \frac{d}{dx}(-f(g(x))),$$ -and we can simplify both sides of this equation to get -$$f'(x) = -\frac{d}{dx}(f(g(x))). \tag{4}$$ -But the chain rule says that -\begin{align} -\frac{d}{dx} (f(g(x))) -= \left.\left(\frac{d}{du} (f(g(u)))\right) \right|_{u=x} -&=\left(\left.\frac{d}{du}(f(u))\right|_{u=g(x)}\right) -\cdot \left.\left( \frac{d}{dv} (g(v)) \right)\right|_{v=x} \\ -&=\left(\left.\frac{d}{du}(f(u))\right|_{u=-x}\right) -\cdot \left.\left( -1 \right)\right|_{v=x} \\ -&= f'(-x) \cdot (-1) \\ -&= -f'(-x). \tag{5} -\end{align} -Plug this in at the far right-hand end of Equation $(4)$, -and we find that -$$f'(x) = -(-f'(-x)), $$ -that is, $f'(x) = f'(-x)$, showing that $f'$ is an even function. -If you follow the (more compactly written) proofs -in this answer -or in the "official" solution -(according to this question) -carefully, -you should find that they agree at every step with the facts above. -It's unfortunately easy to lose track of a sign change somewhere along -the way; note that there is one "sign change" in Equation $(3)$ -due to $f$ being odd, -and another "sign change" in Equation $(5)$ due to the chain rule -and the fact that $g'(x) = -1$. -Notice what happens with $\frac{d}{dx} (f(-x))$ when $f$ is odd. -Since $f(-x) = -f(x)$, we find that -$$\frac{d}{dx} (f(-x)) = \frac{d}{dx} (-f(x)) - = -\frac{d}{dx} (f(x)) = -f'(x),$$ -which does not look at all like what we want to prove; -we need $f'(-x) = f'(x)$, not $-f'(x)$, to show that -$f'$ is an even function. -The expression $\frac{d}{d(-x)} (f(x))$ is just as bad, because -if we set $t = -x$ we have -$$\frac{d}{d(-x)} (f(x)) = \frac{d}{dt} (f(-t))$$ -which is just $\frac{d}{dx} (f(-x))$ using the variable -name $t$ instead of $x$.<|endoftext|> -TITLE: What is an upper bound for number of semiprimes less than $n$? 
-QUESTION [5 upvotes]: A semiprime is a number that is the product of two prime numbers. -What is an upper bound for the number of numbers of the form $pq$ less than $n$? -$p,q$ are prime numbers smaller than $n$. - -REPLY [6 votes]: Recall that by the PNT we have $\pi(n) \sim \frac{n}{\log n}$ where $\pi(n)$ is the number of primes less than or equal to $n$. The number of semiprimes $\pi_2(n)$ is approximately -$$\pi_2(n) \sim \sum_{p \le \sqrt{n}} \pi \left( \left\lfloor \frac{n}{p} \right\rfloor \right)$$ -where the sum runs over primes. The summand is approximately $\frac{n}{p \log n}$, so overall this sum is approximately -$$\pi_2(n) \sim \frac{n}{\log n} \sum_{p \le \sqrt{n}} \frac{1}{p}.$$ -The sum $\sum_{p \le \sqrt{n}} \frac{1}{p}$ is known to be asymptotically $\log \log n$, so overall we get -$$\pi_2(n) \sim \frac{n \log \log n}{\log n}$$ -at least heuristically. This is in fact the correct asymptotic by a result of Landau.<|endoftext|> -TITLE: The limit of a uniform convergent sequence of isometries is an isometry (problem 6-3 of Lee's "Riemannian manifolds") -QUESTION [6 upvotes]: I'm trying to prove the following theorem: let $f_n : M \to N $ be a sequence of isometries of Riemannian manifolds that converges uniformly to a function $f:M \to N$: prove that $f$ is an isometry too. -What I did already: -It is enough to prove the theorem for $M,N$ connected, so we can use the fact that $f$ is a Riemannian isometry if and only if it is a metric isometry, with respect to Riemannian distance. Now, $f$ is continuous because it is the limit of a uniformly convergent sequence of continuous maps, and preserves the distance because the distance function is continuous. So, $f$ is also injective, and then it is open by the invariance of domain theorem. -Now, it suffices to show that $f$ is closed or surjective. -Any help? -REPLY [4 votes]: We show that $f$ is surjective. Suppose that there exists $y\notin Im(f)$. Since $f_n$ is a -diffeomorphism for all $n$, there exists a unique $x_n\in M$ such that $f_n(x_n)=y$. By the uniform convergence it follows that $\lim f(x_n)=y$. Note that $x_n$ can't have an accumulation point (because $x_{n_k}\to x\in M$ implies that $f(x_{n_k})\to f(x)$, so $f(x)=y$. Contradiction!). So we can choose a subsequence $x_{n_k}$ such that $d(x_{n_k}, x_{n_{k+1}}) > \delta$ for some $\delta > 0$. Therefore we have $d(f(x_{n_{k}}), f(x_{n_{k+1}}))=d(x_{n_k}, x_{n_{k+1}}) > \delta$. Absurd! Since -$\lim f(x_{n_k})=y$.<|endoftext|> -TITLE: Integers represented by $x^2 + 3 y^2$ vs. integers represented by $x^2 + x y + y^2$. -QUESTION [7 upvotes]: How does one show that the quadratic forms $x^2 + 3 y^2$ and $x^2 + x y + y^2$ represent the same set of integers? -I think it relates to a classical result of Euler about primes of form $6k+1$. In fact a positive integer $n$ is of the form $x^2 + 3 y^2$ if and only if the primes $\equiv -1 \mod 3$ have an even exponent in $n$. Similarly for $x^2 + x y + y^2$. -However somebody told me that this can be shown without so much number theoretical background. Any ideas would be appreciated! -REPLY [3 votes]: Others have already answered well; the principal form $x^2 + xy + k y^2$ always represents a superset of the numbers represented by $x^2 + (4k-1)y^2.$ For $k=1,$ the two sets agree.
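This $k=1$ case is also easy to confirm by brute force; the following sketch is an added check, not part of the original answer, and the bound and search radius are arbitrary choices:

# brute-force check of the k = 1 claim: the integers represented by
# x^2 + x*y + y^2 and by x^2 + 3*y^2 agree, at least up to BOUND
BOUND = 2000
R = 100  # search radius for x and y; comfortably large enough for values up to BOUND

form1 = {x*x + x*y + y*y for x in range(-R, R + 1) for y in range(-R, R + 1)
         if x*x + x*y + y*y <= BOUND}
form2 = {x*x + 3*y*y for x in range(-R, R + 1) for y in range(-R, R + 1)
         if x*x + 3*y*y <= BOUND}
print(form1 == form2)  # expected: True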
Same for $k = -1,$ so $x^2 + xy - y^2$ represents the same numbers as $x^2 - 5 y^2.$ -It is always the case that $x^2 + xy + 2k y^2$ represents the same ODD numbers as $x^2 + (8k-1)y^2.$ So, $x^2 + xy + 2y^2$ and $x^2 + 7 y^2$ represent the same odd numbers. Same with $x^2 + xy -4y^2$ and $x^2 - 17 y^2$<|endoftext|> -TITLE: Why $e^x$ is always greater than $x^e$? -QUESTION [18 upvotes]: I find it very strange that $$ e^x \geq x^e \, \quad \forall x \in \mathbb{R}^+.$$ -I have scratched my head for a long time, but could not find any logical reason. Can anybody explain what is the reason behind the above inequality? I know this is a math community, but I'd appreciate a more intuitive explanation than a technical one. -REPLY [3 votes]: I'd appreciate a more intuitive explanation than a technical one. - -Well on that, try to consider the behaviour of $2^x$. Every time I increase $x$ by one I get another (binary) digit out of my expression. So I quickly get a huge number. If I double $x$ I get way more digits. -Now compare that with $x^2$. Every time I increment $x$ I will not generally get an extra digit from this expression. In fact as $x$ gets really large there may be no difference in the number of digits between two "adjacent" values of the expression. -For example the number of binary digits required to represent $1000^2$ is the same as the number to represent $1001^2$. In fact it's 20 binary digits for both. But $2^{1000}$ requires about one thousand binary digits. So a huge difference in growth. -So we see the exponential function grows much faster than the square function. -If binary digits aren't your "thing" then use base $10$ and compare $10^x$ and $x^{10}$. It's the same argument. -Now if you consider your specific request about $e^x$ and $x^e$ you should see, intuitively, that this is a similar situation in terms of relative growth. We no longer have the convenience of binary digits, but it's pretty clear the same rationale holds.<|endoftext|> -TITLE: Find all $p$'s such that $p^4 + p^3 + p^2 + p +1$ is a perfect square. -QUESTION [9 upvotes]: Let $p$ be a prime number. Find all $p$'s such that $p^4 + p^3 + p^2 + p +1$ is a perfect square. - -I tried rewriting the expression as $p^4 + p^3 + p^2 + p +1 = x^2 \iff (p^2 + p)(p^2 + 1)=(x - 1)(x + 1)$. Then I think I need to bound this using $x$ but I am not sure how to. -REPLY [5 votes]: Note that $p^2=q^4+q^3+q^2+q+1$ has solutions only for $p=11$ and $q=3$. -Indeed we can write $$\left(q^2+\frac{q}{2}\right)^2={q^4+q^3}+\frac{q^2}{4}<{q^4+q^3}+q^2+q+1 \\ \frac{q^2}{4}<q^2+q+1 \\ \left(q^2+\frac{q}{2}+1\right)^2={q^4+q^3}+\frac{9}{4}q^2+q+1>{q^4+q^3}+q^2+q+1 \\ \frac{9}{4}q^2>q^2.$$ So $p^2$ lies strictly between $\left(q^2+\frac{q}{2}\right)^2$ and $\left(q^2+\frac{q}{2}+1\right)^2$; for even $q$ these are squares of consecutive integers. From here, $q$ cannot be even, and for some odd $q$ we must have $$\left(q^2+\frac{q+1}{2}\right)^2={q^4+q^3+q^2}+\frac{q^2+2q+1}{4}={q^4+q^3+q^2}+q+1 \\ q^2+2q+1=4q+4 \\ q^2-2q-3=(q-3)(q+1)=0,$$ from here $q=3$. In particular, $$3^4+3^3+3^2+3+1=11^2$$ -therefore the only solutions are $p=11$, $q=3$<|endoftext|> -TITLE: "Ordering" of Complex Plane -QUESTION [8 upvotes]: I have heard that the Complex Numbers do not form an ordered field, so you can't say that one number is larger or smaller than another. I've always been fine with this, but I recently wondered what happens if one describes size as the distance along a space-filling curve? If the real numbers can be ordered, and you can map a curve through all points in 2D space, then what prevents you from ordering the points purely by how far along the curve the points are?
Each point should be a unique distance, and the distance continuously increases (as far as my understanding goes). Is there perhaps some difference between being "ordered" and being an "ordered field" that I am missing? I know the basics about why they differ, but I lack formal math training above Algebra 2... Everything else (up to most of Real Analysis) I have studied on my own, so my knowledge is spotty in areas like this. -REPLY [3 votes]: You can definitely put an order $\prec$ on the complex numbers. In fact, the order can be a well-order, by the well ordering theorem. The question is whether such an order "is useful." -Here are some things we probably want in the order $\prec$: - -if $0 \prec \alpha, \beta$, then $0 \prec \alpha \beta$. -if $\alpha, \beta, z, w$ are complex with $\alpha \prec \beta$ and $z \prec w$, then $\alpha + z \prec \beta + w$. - -In fact, an order on a field (such as the complex numbers) that satisfies these properties gives us an ordered field. See Ross Millikan's comment for why you cannot put such an order on $\mathbb C$.<|endoftext|> -TITLE: More 'conceptual' reason of why $\int_{-\infty}^{+\infty}\text{sin}(x^2) = \int_{-\infty}^{+\infty}\text{cos}(x^2)$ -QUESTION [8 upvotes]: In our complex analysis course, as an application of the residue theorem and some clever contour integration, we computed the following integrals: - -$$\int_{-\infty}^{\infty}\text{sin}(x^2)\,\mathrm{d}x = \sqrt{\frac{\pi}{2}}$$ - And - $$\int_{-\infty}^{\infty}\text{cos}(x^2)\,\mathrm{d}x = \sqrt{\frac{\pi}{2}}$$ - -I noted that these two values are equal (not a very unusual observation). However, I was wondering if we could deduce just this result (i.e. the fact that they are equal) without actually computing them. -The computation we made did not make this clear at all. It just happened to be a consequence of our calculations. So does anyone know a more conceptual reason? -I'm not sure whether this exists or should exist, but I had the following (more trivial) example in mind: -$$ \int_{0}^{\pi/2} \sin{x} \,\mathrm{d}x = \int_{0}^{\pi/2} \cos{x} \,\mathrm{d}x $$ -Which can be proven by using a substitution $u = \pi/2-x$ and the fact that $\text{sin}(\pi/2-x) = \cos{x}$ -Many thanks in advance. - -REPLY [3 votes]: Employing the change of variables $2u =x^2$ we get $$I=\int_0^\infty \cos(x^2) dx =\frac{1}{\sqrt{2}}\int^\infty_0\frac{\cos(2x)}{\sqrt{x}}\,dx$$ $$ J=\int_0^\infty \sin(x^2) dx=\frac{1}{\sqrt{2}}\int^\infty_0\frac{\sin(2x)}{\sqrt{x}}\,dx $$ - -Summary: We will prove that $J\ge 0$ and $I\ge 0$, so that proving $I=J$ is equivalent to proving $$ \color{blue}{0= (I+J)(I-J)=I^2 -J^2 =\lim_{t \to 0}I_t^2-J^2_t}$$ - where $$I_t = \int_0^\infty e^{-tx^2}\cos(x^2) dx~~~~\text{and}~~~ J_t = \int_0^\infty e^{-tx^2}\sin(x^2) dx$$ - $t\mapsto I_t$ and $t\mapsto J_t$ are clearly continuous due to the presence of the integrand factor $e^{-tx^2}$. - -However, by Fubini we have, -\begin{split} -I_t^2-J^2_t&=& \left(\int_0^\infty e^{-tx^2}\cos(x^2) dx\right) \left(\int_0^\infty e^{-ty^2}\cos(y^2) dy\right) - \left(\int_0^\infty e^{-tx^2}\sin(x^2) dx\right) \left(\int_0^\infty e^{-ty^2}\sin(y^2) dy\right) \\ -&=& \int_0^\infty \int_0^\infty e^{-t(x^2+y^2)}\cos(x^2+y^2)dxdy\\ -&=&\int_0^{\frac\pi2}\int_0^\infty re^{-tr^2}\cos r^2 drd\theta\\&=&\frac\pi4 Re\left( \int_0^\infty \left[\frac{1}{i-t}e^{(i-t)r^2}\right]' dr\right)\\ -&=&\color{blue}{\frac\pi4\frac{t}{1+t^2}\to 0~~\text{as}~~t\to 0} -\end{split} -To end the proof, let us show that $I\ge 0$ and $J\ge 0$.
Performing an integration by parts we obtain -$$J = \frac{1}{\sqrt{2}} \int^\infty_0\frac{\sin(2x)}{x^{1/2}}\,dx=\frac{1}{\sqrt{2}}\underbrace{\left[\frac{\sin^2 x}{x^{1/2}}\right]_0^\infty}_{=0} +\frac{1}{2\sqrt{2}} \int^\infty_0\frac{\sin^2 x}{x^{3/2}}\,dx\color{red}{\ge0}$$ -Given that $\color{red}{\sin 2x= 2\sin x\cos x =(\sin^2x)'}$. Similarly we have, -$$I = \frac{1}{\sqrt{2}}\int^\infty_0\frac{\cos(2x)}{\sqrt{x}}\,dx=\frac{1}{2\sqrt{2}}\underbrace{\left[\frac{\sin 2 x}{x^{1/2}}\right]_0^\infty}_{=0} +\frac{1}{4\sqrt{2}} \int^\infty_0\frac{\sin 2 x}{x^{3/2}}\,dx\\= \frac{1}{4\sqrt{2}}\underbrace{\left[\frac{\sin^2 x}{x^{3/2}}\right]_0^\infty}_{=0} +\frac{3}{8\sqrt{2}} \int^\infty_0\frac{\sin^2 x}{x^{5/2}}\,dx\color{red}{\ge0}$$ - -Conclusion: $I^2-J^2 =0$, $I\ge 0$ and $J\ge 0$ imply $I=J$. Note that we did not attempt to compute the value of either $I$ or $J$. - -Extra to the answer: using a similar technique as in the above proof, one easily arrives at the following $$I_tJ_t = \frac\pi8\frac{1}{t^2+1}$$ from which one gets the explicit value $$I^2=J^2= IJ = \lim_{t\to 0}I_tJ_t =\frac\pi8$$<|endoftext|> -TITLE: An identity involving binomial coefficients -QUESTION [6 upvotes]: Prove the following identity -$$\displaystyle \sum_{i+j=m}\frac{(n-1) \binom{ai+n-1}{i} \binom{aj+1}{j}}{(ai+n-1)(aj+1)} = \frac{n\binom{am+n}{m}}{am+n}$$ -where $i = 0,1,\cdots,m$ and $m, n$ are positive integers and $a$ is a positive integer or even a fraction. -One complete proof can be found in that famous book "Concrete Mathematics" by -Ronald L. Graham, Donald E. Knuth, and Oren Patashnik, which is widely used as a textbook in many computer science departments. -Another complete proof can be found in the book written by Egorychev, which directly uses the inversion rule of residues; see page 49 in the book "Integral representation and the computation of combinatorial sums". If you are reading that book, keep in mind that the index under $\sum$ can be extended to infinity because the contour integral has no pole when $k>n$. -REPLY [6 votes]: This is a nice example to apply the inversion rule of formal power series stated as rule 4 in G.P. Egorychev's Integral Representations and the Computation of Combinatorial Sums, section 1.2.2. -I think it's worthwhile to present a complete proof. Here we use the coefficient of operator $[z^n]$ to denote the coefficient of $z^n$ in a formal power series.
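Before the formal argument, the identity is easy to sanity-check in exact arithmetic for small parameters. The sketch below is an added check, not part of the original answer; it uses Python's fractions module and a generalized binomial coefficient so that fractional $a$ is allowed:

from fractions import Fraction
from math import factorial

def binom(x, k):
    # generalized binomial coefficient C(x, k): x rational, k a nonnegative integer
    p = Fraction(1)
    for j in range(k):
        p *= Fraction(x) - j
    return p / factorial(k)

def lhs(m, n, a):
    a = Fraction(a)
    total = Fraction(0)
    for i in range(m + 1):
        j = m - i
        total += (Fraction(n - 1) * binom(a * i + n - 1, i) * binom(a * j + 1, j)
                  / ((a * i + n - 1) * (a * j + 1)))
    return total

def rhs(m, n, a):
    a = Fraction(a)
    return Fraction(n) * binom(a * m + n, m) / (a * m + n)

for m, n, a in [(3, 2, 1), (4, 3, 2), (5, 2, Fraction(1, 2))]:
    assert lhs(m, n, a) == rhs(m, n, a)  # expected to pass
print("identity checked for the sample parameters")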
We show - -The following is valid for $m,n\geq 1$ and $a\in\mathbb{R}$ appropriate - \begin{align*} -\sum_{{i+j=m}\atop{i,j\geq 0}}&\frac{n-1}{ai+n-1}\binom{ai+n-1}{i}\frac{1}{aj+1}\binom{aj+1}{j} -=\frac{n}{am+n}\binom{am+n}{m} -\end{align*} - -$$ $$ - -We obtain - \begin{align*} -\sum_{{i+j=m}\atop{i,j\geq 0}}&\frac{n-1}{ai+n-1}\binom{ai+n-1}{i}\frac{1}{aj+1}\binom{aj+1}{j}\\ -&=\sum_{i=0}^m\frac{n-1}{ai+n-1}\binom{ai+n-1}{i}\frac{1}{am-ai+1}\binom{am-ai+1}{m-i}\\ -&=\sum_{i=0}^\infty\left\{\binom{ai+n-1}{i}-a\binom{ai+n-2}{i-1}\right\}\\ -&\qquad\qquad\cdot\left\{\binom{am-ai+1}{m-i}-a\binom{am-ai}{m-i-1}\right\}\tag{1}\\ -&=\sum_{i=0}^{\infty}\left([z^i](1+z)^{ai+n-1}-a[z^{i-1}](1+z)^{ai+n-2}\right)\\ -&\qquad\qquad\cdot\left([w^{m-i}](1+w)^{a(m-i)+1}-a[w^{m-i-1}](1+w)^{a(m-i)}\right)\\ -&=[w^m](1+w)^{am}(1+w-aw)\\ -&\qquad\qquad\cdot\sum_{i=0}^{\infty}\left(\frac{w}{(1+w)^a}\right)^i[z^i](1+z)^{n-2}(1+z-az)(1+z)^{ai}\tag{2}\\ -&=[w^m](1+w)^{am}(1+w-aw)\left.\left((1+z)^{n-1}\right)\right|_{z=w}\tag{3}\\ -&=[w^m](1+w)^{am+n}-a[w^{m-1}](1+w)^{am+n-1}\\ -&=\binom{am+n}{m}-a\binom{am+n-1}{m-1}\\ -&=\frac{n}{am+n}\binom{am+n}{m} -\end{align*} - and the claim follows. - -Comment: - -In (1) we use the identity $$\frac{q}{pk+q}\binom{pk+q}{k}=\binom{pk+q}{k}-p\binom{pk+q-1}{k-1}$$ We also set the upper limit of the sum to $\infty$ without changing anything since we are only adding zero. -In (2) we rearrange the sum, use the linearity of the coefficient of operator and the rule $[z^{n-k}]A(z)=[z^n]z^kA(z)$. -In (3) we use the inversion rule - -\begin{align*} -\sum_{i=0}^\infty w^i [z^i]A(z)f^i(z)=\left.\left(A(z)\frac{f(z)}{f(z)-zf^{\prime}(z)}\right)\right|_{z=g(w)} -\end{align*} -with $g(w)$ the inverse of $w=\frac{z}{f(z)}$. We get from (2) -\begin{align*} -A(z)&=(1+z)^{n-2}(1+z-az)\\ -f(z)&=(1+z)^a -\end{align*} -and obtain -\begin{align*} -A(z)\frac{f(z)}{f(z)-zf^{\prime}(z)}&=(1+z)^{n-2}(1+z-az)\frac{(1+z)^a}{(1+z)^a-az(1+z)^{a-1}}\\ -&=(1+z)^{n-1} -\end{align*} -Since $w=\frac{z}{f(z)}=\frac{z}{(1+z)^a}$ we apply in (3) the substitution $z=w$.<|endoftext|> -TITLE: Lie groups where $x \mapsto x^2$ is a diffeomorphism? -QUESTION [5 upvotes]: In every Lie group $G$ the function $x \mapsto x^2$ is a local diffeomorphism in a neighbourhood of the identity. -(This is because its differential is: $v \mapsto 2v$ when considered as a map from $T_eG$ to itself). -In some Lie groups it is a global diffeomorphism, for instance in $\mathbb{R}^n$. -I would like to find more examples for Lie groups where the square is a global diffeomorphism. Are there any non-abelian groups where this holds? -REPLY [8 votes]: Suppose so. I'm going to call the inverse $s$ for square root. -The exponential map of such a Lie group is automatically injective. For if $e(a)=e(b)$, then $s^k(e(a))=s^k(e(b))$; because we know square roots are unique in $G$, we know that $s^k(e(a)) = e(a/2^k)$. For large enough $k$, because the exponential map is a diffeomorphism near zero, we deduce $a/2^k=b/2^k$, hence $a=b$. -Now we may invoke the Dixmier-Saito classification of Lie groups with injective exponential to verify that the exponential is actually a diffeomorphism. Then define $s(g) = e(e^{-1}(g)/2)$. Because $e$ is a diffeomorphism, this is a well-defined smooth map, which you may check is the inverse of the squaring map.
Hence $x \mapsto x^2$ being a diffeomorphism is equivalent to the exponential map being injective, which is equivalent to all the properties in the linked classification; in particular - - -$G$ is solvable, simply connected, and $\mathfrak{g}$ does not admit $\mathfrak{e}$ as a subalgebra of a quotient. -$G$ is solvable, simply connected, and $\mathfrak{g}$ does not admit $\mathfrak{e}$ or $\tilde{\mathfrak{e}}$ as a subalgebra - -Here $\mathfrak{e}$ is the 3-dimensional Lie algebra with basis $(H,X,Y)$ and bracket $[H,X]=Y$, $[H,Y]=-X$, $[X,Y]=0$. It is isomorphic to the Lie algebra of the group of isometries of the plane. Its central extension $\tilde{\mathfrak{e}}$ is defined as the 4-dimensional Lie algebra defined by adding a central generator $Z$ and the additional nonzero bracket $[X,Y]=Z$. - -One particular nontrivial example is the Heisenberg group. Note that this does not include all simply connected solvable Lie groups, because $\mathfrak e$ itself is solvable! So the universal cover of the group of orientation-preserving isometries of the plane is solvable but $x \mapsto x^2$ is not a diffeomorphism.<|endoftext|> -TITLE: Characteristic function and moment generating function: differentiating under the integral -QUESTION [8 upvotes]: In order to justify the interchange of the derivative and integral when differentiating a characteristic function, one can use the dominated convergence theorem: - -$$\frac{d}{dt} \int e^{itx} P(dx) = \lim_{h \to 0} \frac{1}{h} \int (e^{ihx}-1) e^{itx} P(dx).$$ -Since $|e^{ihx}-1| \le |hx|$, we have -$$\frac{1}{h} \int |e^{ihx}-1| P(dx) \le \int |x| P(dx),$$ - so if we assume the random variable is in $L^1$, we may push the derivative under the integral. Similarly, if the random variable is in $L^k$, then we can push the $k$th derivative under the integral. - -I am trying to find an analogous statement for moment generating functions, but I am having trouble generalizing the above argument. Under what conditions can we do this for MGFs? Any hints would be appreciated, but I would prefer an argument that uses dominated convergence rather than Leibniz's integral rule. - -REPLY [10 votes]: Denote by -$$M(t) := \int e^{tx} \, \mathbb{P}(dx)$$ -the moment generating function of the measure $\mathbb{P}$. - -Suppose that there exist $t_0 \in \mathbb{R}$ and $\epsilon>0$ such that $M(t)<\infty$ for all $t \in [t_0-\epsilon,t_0+\epsilon]$. Then - -If $t_0>0$ and $\int_{(-\infty,0)} |x| \, \mathbb{P}(dx)<\infty$, then $M$ is differentiable at $t=t_0$. -If $t_0<0$ and $\int_{(0,\infty)} |x| \, \mathbb{P}(dx) <\infty$, then $M$ is differentiable at $t=t_0$. -If $t_0=0$, then $M$ is differentiable at $t = t_0 = 0$. - - -Proof: - -Choose $\epsilon \in (0,1)$ sufficiently small such that $(t_0-\epsilon,t_0+\epsilon) \subseteq (0,\infty)$ and fix $h \in (-\epsilon/2,\epsilon/2)$. It follows from the mean value theorem that $$\left|\frac{e^{(t_0+h)x} -e^{t_0 x}}{h} \right| \leq |x| e^{\zeta x} \tag{1}$$ for some intermediate value $\zeta \in (t_0, t_0+h) \subseteq (0,\infty)$. If $x \geq 0$, we get $$\left|\frac{e^{(t_0+h)x} -e^{t_0 x}}{h} \right| \leq x e^{(t_0+h)x}.$$ Since $t_0+\epsilon>t_0+h>0$, we can choose $C>0$ (not depending on $h$, $x$) such that -$$\left|\frac{e^{(t_0+h)x} -e^{t_0 x}}{h} \right| \leq C e^{(t_0+\epsilon)x} \tag{2}$$ for all $x \geq 0$. For $x \leq 0$, $(1)$ yields $$\left|\frac{e^{(t_0+h)x} -e^{t_0 x}}{h} \right| \leq |x|.
\tag{3}$$ Combining $(2)$ and $(3)$, we get $$\left|\frac{e^{(t_0+h)x} -e^{t_0 x}}{h} \right| \leq w(x)$$ for $$w(x) := \begin{cases} C e^{x(t_0+\epsilon)}, & x \geq 0, \\ |x|, & x < 0 \end{cases}$$ Because of our assumptions, $w$ is an integrable dominating function. Applying the dominated convergence theorem proves the differentiability. -Apply statement 1 to the measure $\mathbb{Q}(B) := \mathbb{P}(-B)$. -Choose $h \in (0,\epsilon/2)$. Using $(1)$ for $t_0 = 0$, we get $$\left| \frac{e^{hx}-1}{h} \right| \leq |x| e^{\zeta x}$$ for some intermediate value $\zeta=\zeta(h) \in (0,h)$. Hence, $$\left| \frac{e^{hx}-1}{h} \right| \leq |x| (1_{\{x \leq 0\}} + e^{hx} 1_{\{x>0\}}).$$ Using that $|x| \leq C(e^{-\epsilon x}+e^{\epsilon x})$ for some constant $C$, we get $$\left| \frac{e^{hx}-1}{h} \right| \leq (C+1) e^{x \epsilon} + C e^{-x \epsilon}.$$ Consequently, we may again apply the dominated convergence theorem to interchange limit & integration. A very similar argument works for $h \in (-\epsilon/2,0)$. This gives the differentiability of $M$ at $t=0$.<|endoftext|> -TITLE: What is an example of a non-zero "ring pseudo-homomorphism"? -QUESTION [6 upvotes]: By "pseudo ring homomorphism", I mean a map $f: R \to S$ satisfying all ring homomorphism axioms except for $f(1_R)=f(1_S)$. -Even if we let this last condition drop, there are only two ring pseudo-homomorphisms from $\Bbb Z$ to $\Bbb Z[\omega]$ where $\omega$ is any root of unity, for example. They are the identity homomorphism and the "$0$ pseudo-homomorphism". -I couldn't find a non-zero example. It is easy to conclude that $0=f(a)(1_S-f(1_R))$, so I know that the counterexample will have to be some $S$ with zero-divisors. -REPLY [2 votes]: The (conceptually) simplest example I know is $f:\Bbb Z\to\Bbb Z\times\Bbb Z$ defined by $f(n)=(n,0)$. The ring identity of $\Bbb Z\times\Bbb Z$ is $(1,1)$ but $1$ is instead mapped to $(1,0)$. The same example shows that a subset of a ring that is a ring is not necessarily a subring if you don't require it to contain the ring unit.<|endoftext|> -TITLE: Prove that if $(a^2+b^2+c^2+d^2)^2 > 3(a^4+b^4+c^4+d^4)$, then, using any three of them we can construct a triangle. -QUESTION [5 upvotes]: Prove that if $a,b,c,$ and $d$ are positive numbers and satisfy $(a^2+b^2+c^2+d^2)^2 > 3(a^4+b^4+c^4+d^4)$, then, using any three of them we can construct a triangle. - -I find it hard to go from the given inequality to the triangle inequality for $3$ of $a,b,c,d$. Expanding it may help but that would get ugly. I did it and got $d^2 (2 a^2+2 b^2+2 c^2-2 d^2)+b^2 (2 a^2-2 b^2+2 c^2)+a^2 (2 c^2-2 a^2)-2 c^4$. Also doing a ravi substitution here seems almost impossible to do with $4$ variables. -REPLY [3 votes]: We'll prove that $a$, $b$ and $c$ are side lengths of a triangle. -Indeed, by C-S -$$3(a^4+b^4+c^4+d^4)=(2+1)(a^4+b^4+c^4+d^4)\geq\left(\sqrt{2(a^4+b^4+c^4)}+d^2\right)^2$$ -which gives $a^2+b^2+c^2>\sqrt{2(a^4+b^4+c^4)}$ or -$$(a+b+c)(a+b-c)(a+c-b)(b+c-a)>0$$ -and we are done, because two of the three factors $a+b-c$, $a+c-b$, $b+c-a$ cannot be negative simultaneously (for example $a+b-c<0$ and $a+c-b<0$ would give $2a<0$), so all three must be positive.<|endoftext|> -TITLE: $(\mathbb{R}^n,\|\cdot\|_{p})$ is isometrically isomorphic to $(\mathbb{R}^n,\|\cdot\|_{q})$ iff $p=q$ -QUESTION [5 upvotes]: I want to prove that $(\mathbb{R}^n,\|\cdot\|_{p})$ is isometrically isomorphic to $(\mathbb{R}^n,\|\cdot\|_{q})$ iff $p=q$. I have tried looking at the unit balls, and I have proved that $(\mathbb{R}^n,\|\cdot\|_{1})$ is not isometrically isomorphic to any other $p$-norm.
-REPLY [5 votes]: In the article Isometries of Finite-Dimensional Normed Spaces by F. C. Sanchez and J.M.F. Castillo, the authors prove the following result. - -Theorem. For $1\le p,q\le\infty$ and $n\ge 2$, $(\mathbb{R}^n,\|\cdot\|_p)$ and $(\mathbb{R}^n,\|\cdot\|_q)$ are isometrically isomorphic if and only if $p=q$ or if $n=2$ and $p,q\in\{1,\infty\}$. - -The argument splits into several cases. -The first one is to rule out any isometric isomorphisms between $\ell^n_p$ and $\ell^n_q$ if $q\not=p'$ where $p'$ is the Hölder dual of $p$ defined by $\frac1p+\frac1{p'}=1$. -This is a very beautiful argument, which I will now try to reproduce. For the remaining cases, see the paper. -For simplicity let us also only consider $1<p<\infty$.<|endoftext|> -TITLE: Ratio of sums vs sum of ratio -QUESTION [9 upvotes]: Is anyone aware of any general (or perhaps not so general) relationship (inequality for instance) relating -$A(x,y)= \frac{\sum_z f(x,y,z)}{\sum_z g(y,z)}$ -and -$B(x,y)= \sum_z\left(\frac{f(x,y,z)}{g(y,z)}\right)$ -? -Specific context (for what I'm dealing with, but not necessarily the question) is that $\sum_{x,y,z}f(x,y,z)=1$ and $\sum_{y,z}g(y,z)=1$ and $f(x,y,z)\geq 0$ and $g(y,z)\geq 0\quad \forall x,y,z$. I.e. probabilities (or more generally, I guess, measures). -It seems like it could 'vaguely' be related to log sum inequalities (when transformed) or Jensen's inequality perhaps? - -REPLY [10 votes]: If you assume that both $f$ and $g$ are nonnegative, you have $A(x, y) \leq B(x, y)$. -Proof: -$ \frac{f(x, y, z)}{\sum_{z}g(y, z)} \leq \frac{f(x,y,z)}{g(y,z)} $, since $g(y, z) \leq \sum_{z}g(y, z)$. -So -$A(x,y) = \frac{\sum_z f(x,y,z)}{\sum_z g(y,z)} = \sum_z \frac{f(x, y, z)}{\sum_z g(y, z)} \leq \sum_z \frac{f(x, y, z)}{g(y, z)} = B(x,y)$.<|endoftext|> -TITLE: First countable + separable imply second countable? -QUESTION [5 upvotes]: In topological space, does first countable+ separable imply second countable? If not, any counterexample? - -REPLY [11 votes]: It’s false in general. A simple counterexample is the Sorgenfrey line, also known as $\Bbb R$ with the lower limit topology: $\Bbb Q$ is still a dense set, and each point $x$ has a countable local base of sets of the form $\left[x,x+\frac1n\right)$ for $n\in\Bbb Z^+$, but the space is not second countable. -The Sorgenfrey plane, the Cartesian product of the Sorgenfrey line with itself, is even worse: $\Bbb Q\times\Bbb Q$ is a dense subset, so it’s separable, and as a product of two first countable spaces it is certainly first countable, but the reverse diagonal, $\{\langle x,-x\rangle:x\in\Bbb R\}$, is an uncountable closed discrete set. It’s easy to see that no space with an uncountable closed discrete subset can be second countable.<|endoftext|> -TITLE: Are two generic filters in a common generic extension? -QUESTION [6 upvotes]: Let $M$ be a countable transitive set. Suppose $\mathbb{P}$ is a forcing in $M$. Let $G$ and $H$ be two generic filters for $\mathbb{P}$ over $M$. -My questions are: -Is there a forcing $\mathbb{Q}$ and a generic $K$ for this forcing over $M$ such that $G,H \in M[K]$? -If the answer is yes, can this $\mathbb{Q}$ be chosen to depend only on $\mathbb{P}$ and not the filters $G$ and $H$? - -If $G$ and $H$ are mutually $\mathbb{P}$-generic then by definition, $\mathbb{Q}$ can be $\mathbb{P}\times\mathbb{P}$. So the main difficulty is the case when the two filters are not mutually generic. -Thanks for any insight. -REPLY [6 votes]: Such a $\mathbb{Q}$ may not exist.
Indeed, we can find two reals $x,y$, both Cohen over $M$, such that no extension of $M$ with the same ordinals contains both $x,y$. The idea is to take a real coding the entire model $M$ and then disguise it by breaking it up between the two Cohen reals. -Joel David Hamkins has a manuscript here describing the argument, which he attributes to Woodin. He also identifies a large class of posets that exhibit the same kind of non-amalgamability phenomenon (any $\mathbb{P}$ which is not $|\mathbb{P}|$-cc below any condition will have such incompatible generics).<|endoftext|> -TITLE: Complement of the Solid Torus in $S^3$ is Again a Solid Torus -QUESTION [5 upvotes]: On pg. 48 of Hatcher's Algebraic Topology, the author writes that the $3$-sphere $S^3$ can be thought of as the union of two solid tori. -First a formal argument is given, which is - -$S^3=\partial D^4 = \partial(D^2\times D^2) = (\partial D^2\times D^2)\cup (D^2\times \partial D^2)$ - -This is clear. But then the author gives a geometric interpretation which is as follows. -Think of $S^3$ as the one point compactification of $\mathbf R^3$. Let $T=S^1\times D^2$ be the first solid torus. The second solid torus is the closure of the complement of $T$ in $\mathbf R^3$ along with the compactification point at infinity. -The statement in italics is not clear to me. How do we see this as a solid torus? - -REPLY [9 votes]: Here's another way that I like to view this: The solid torus $S^1 \times D^2$ is a donut, and you can fill in the donut hole with a plug ("munchkin" if you're a Dunkin' Donuts fan) homeomorphic to $D^2 \times [-1,1]$. The union of these is isotopic to the standard $D^3 \subset \mathbb{R}^3 \subset S^3$, so the complement (of its interior, technically) is also homeomorphic to $D^3$. The plug $D^2 \times [-1,1]$ intersects this outer $D^3$ along $D^2 \times \{\pm 1\}$. Working in reverse, we see that the complement of the original solid torus is obtained by attaching $D^2 \times [-1,1]$ to the outer $D^3$, together forming a complementary solid torus.<|endoftext|> -TITLE: Good text to start studying topological games? -QUESTION [18 upvotes]: Topological games and some similar infinite games seem to be often used as a tool in some areas of general topology, but also some other areas, such as Ramsey -theory, filters, etc. -Probably the best known are the Banach-Mazur and Choquet games, but many other games of similar nature have been studied. Various questions about topological -games -and their winning strategies have also been asked on this site. (And there were also posts specifically about the Banach-Mazur -game and the -Choquet game.) -Apart from being a useful tool in some areas of research, they seem to be a rather fascinating topic in themselves, since there are some results which seem, at first glance, rather counter-intuitive. (Especially when compared with finite games.) For example, it is possible that a topological game is not determined, i.e., it is possible that neither of the two players has a winning strategy. (As far as I know, this is related to the Axiom of Choice and the Axiom of Determinacy.) -Another seemingly counter-intuitive fact is that for some games there might be a difference between a winning strategy and a winning tactic (a.k.a. stationary strategy, which is a strategy where the move depends only on the position and not on the sequence of moves which lead to the position). - -Is there some good text (book, thesis, lecture notes) suitable to start studying topological games and other similar infinite games?
- -Of course, I could start backwards by taking some paper using topological games in connection with some topic I am interested in or some survey article and, whenever I stumble upon some unfamiliar result or notion, I could try to track down some references and learn about this particular thing. But if there is some reference which develops infinite games in a somewhat more systematic way, it might be a more suitable way to start studying this topic. -(It is also possible that what I am asking is not a reasonable question in this sense: Maybe if somebody understands the basic idea of topological games and winning strategies, then the next logical step is to gain enough practice in adapting this technique to various specific situations rather than studying some systematic development of the topic.) - -I suppose that for such a specific topic there will not be too many suggestions. So I will be grateful for any suggestion. But since for book recommendations and reference requests it is advised to be specific, I will also add the following: -What would I mostly expect from the text? I would hope that after studying this text I would have a better understanding of the seemingly counter-intuitive statements mentioned before. And I would also hope to be able to better follow proofs using topological games and maybe even to be able to come up with my own proofs in situations where topological games can be used. -If it is relevant to the question, I will also describe my background in this area. (Although I guess that if this post is supposed to be useful for other people as well, there should not be that much stress on this.) I understand the basic results about the Banach-Mazur game and the Choquet game given in Chapter 8 of Kechris: Classical Descriptive Set Theory. (I think I understand them well enough that probably I would be able to reproduce the proofs given enough time. I have not read Chapters 20 and 21, which also deal with topological games. I suppose that studying some material from the preceding parts of the book before starting these two chapters will be needed.) I have also been at some talks on topological games (in connection with Baire spaces), which I was more-or-less able to follow. - -REPLY [10 votes]: Here are some suggestions. I'm afraid they do not fit your requirements exactly; but I hope this will help anyway. I apologize in advance if you already know all of that. -0 Of course, Kechris and Oxtoby are great. -1 Concerning topological games like Banach-Mazur, and applications to Banach space theory, you could have a look at the following (and the references therein) - -Julian P. Revalski: The Banach-Mazur Game: History and Recent Developments, http://www1.univ-ag.fr/aoc/activite/revalski/Banach-Mazur_Game.pdf -Jiling Cao and Warren B. Moors: A survey on topological games and their applications in analysis, http://www.rac.es/ficheros/doc/00232.pdf -Rastislav Telgársky: Topological games: On the 50th anniversary of the Banach-Mazur game, Rocky Mountain J. Math., Volume 17, Number 2 (1987), 227-276. http://projecteuclid.org/euclid.rmjm/1250126541 - -2 Concerning AD and these kinds of things: - -Besides Kechris' book, there are lots of things in the older book by Moschovakis.
-Maybe the following article by Mycielski (after all, AD is his axiom): Jan Mycielski: Games with Perfect Information (Chapter 3 in Handbook of Game Theory with Economic Applications), doi:10.1016/S1574-0005(05)80006-2, https://www.math.upenn.edu/~ted/210F10/References/GameTheory/Chapter3copy.pdf - -3 Concerning the difference between strategies and tactics, try the following paper by Debs: - -Gabriel Debs, Stratégies gagnantes dans certains jeux topologiques, Fund. Math., Volume: 126, Issue: 1, page 93-105, 1985. https://eudml.org/doc/211611, http://matwbn.icm.edu.pl/ksiazki/fm/fm126/fm12618.pdf - -4 Incidentally, the best reference is perhaps Debs' "thèse d'état" entitled Convexes compacts et jeux topologiques (but I didn't read it!) -5 A more specialized thing, which I suspect you may enjoy reading: - -Gary Gruenhage: The Story of a Topological Game, Rocky Mountain J. Math., Volume 36, Number 6 (2006), 1885-1914, http://projecteuclid.org/euclid.rmjm/1181069351<|endoftext|> -TITLE: When is LIATE simply wrong? -QUESTION [18 upvotes]: I'm currently teaching Calculus II, and yesterday I covered integration by parts and mentioned the LIATE rule. I also gave the usual "it works 99% of the time", but started wondering whether there are any cases where LIATE simply gets the choice of $u$ and $v'$ wrong. -(For those of you who don't know what LIATE is, check out https://en.wikipedia.org/wiki/Integration_by_parts#LIATE_rule ) -I don't consider the example listed at the link above to be what I'm looking for, because I don't consider $e^{x^2}$ to be an exponential function here (only $a^{bx}$). (I don't consider $\tan x$ to be a "trig" function, either, in this context.) -Does anyone have a "pet" example that they show? - -REPLY [4 votes]: My favorite example of LIATE failure is: -$$\int x^{13}e^{\left(x^7\right)}\;dx$$ -What works is: -$$f = x^7$$ -$$dg = x^6e^{\left(x^7\right)}\;dx$$ -$$df = 7x^6\;dx$$ -$$g = \frac{1}{7}e^{\left(x^7\right)}$$ -I ran this through https://www.integral-calculator.com/, and it just substituted $u=x^7$ right at the start, resulting in an easier problem, in which LIATE works.<|endoftext|> -TITLE: What is the mathematical understanding behind what physicists call a gauge fixing? -QUESTION [10 upvotes]: I'm learning fiber bundles from my poor physicist's point of view. I understand that a gauge transformation (physicist language) corresponds to the transformation of the connections built from an overlapping patch of the base space of the bundle. Said differently, a gauge transform is a change of section in overlapping spaces. (Comments about that are welcome :-) -I'm wondering what is the meaning of a gauge fixing in terms of the fiber bundle model. Is that a reduction/constraint of the group transformation between sections, for instance? -Thanks for any comment which can improve this question. - -REPLY [3 votes]: In short: if $\pi: P \rightarrow M$ is a principal $G$-bundle, then fixing a gauge corresponds to making a choice of section $\sigma_{U}: U \rightarrow \pi^{-1}(U)$, for some open $U \subset M$. A different choice of section, say $\sigma_{V}: V \rightarrow \pi^{-1}(V)$, will then be related to $\sigma_{U}$ by the action of $G$, so for example $\sigma_{V}(x) = g_{UV}(x)\sigma_{U}(x)$ for some $x \in U \cap V$ with $g_{UV}(x) \in G$. The ambiguity in choosing a section corresponds to gauge freedom, and the group action relating two different choices of section is a gauge transformation, and $G$ is said to be the gauge group.
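-(One standard compatibility fact, implicit in the notation $g_{UV}$ above and added here for the record, with the convention $\sigma_{V} = g_{UV}\,\sigma_{U}$ used in the answer: on overlaps the transition functions obey
-$$g_{UU}(x) = e,\qquad g_{VU}(x) = g_{UV}(x)^{-1},\qquad g_{UW}(x) = g_{VW}(x)\,g_{UV}(x)\quad\text{for } x\in U\cap V\cap W,$$
-the last identity being the usual cocycle condition; the ordering of the factors depends on the convention chosen.)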
-The simplest example demonstrating this concept actually isn't strictly a principal $G$-bundle, but rather an associated vector bundle of a principal $G$-bundle, since a wavefunction actually takes values in some copy of $\mathbb{C}^{n}$ rather than in a group. To avoid obfuscating the point, however, I won't go into the particulars of this. -Let $P \rightarrow M$ be a principal $U(1)$-bundle over some base manifold $M$, then a wavefunction is a section of the vector bundle associated to $P$, denoted in the literature commonly as $P \times_{U(1)} \mathbb{C}^{n}$; all that we need to know of this is that a fibre of this associated bundle is isomorphic to the vector space $\mathbb{C}^{n}$, which represents the value of a wavefunction. For concreteness, say that $n = 1$ so that the wavefunction $\psi(x) \in \mathbb{C}$ represents a complex scalar field, or a spin-$0$ particle. As the probability of the particle being at the point $x \in M$ is $|\psi(x)|^{2}$, choosing a different section $e^{i\alpha}\psi$ for $e^{i\alpha}\in U(1)$ needs to give the same physical result. Choosing either section $\psi$ or $e^{i\alpha}\psi$ is the same as fixing a gauge, and they are related to one another by the action of an element of $U(1)$, in this case just multiplication. So here a gauge transformation is just multiplication/division by $e^{i \alpha}$, and the gauge group is $U(1)$. -Depending on the topology of the base manifold $M$, it could be that $\alpha$ is actually a function $\alpha:M \rightarrow \mathbb{R}$ (if it is just a constant then the gauge transformation is called a global one, since it holds globally on $M$). If it is a smooth function, then this will give rise to the notion of connection 1-forms and curvature 2-forms, which are called gauge potentials and gauge forces respectively in the physics literature. These are required since both $\psi$ and $e^{i\alpha}\psi$ need to physically represent the same state, but with the introduction of derivatives in the Hamiltonian or Lagrangian, terms like $\partial_{x}\alpha$ start popping up that need to be dealt with by introducing a covariant derivative.<|endoftext|> -TITLE: Probabilistic method proof -QUESTION [6 upvotes]: Let $v_1, v_2,...,v_n \in \mathbb{R}^n$ be unit vectors. Use the probabilistic method to show that there exist constants $a_1, a_2,..., a_n \in \{-1,1\}$ such that -$||a_1 v_1 + a_2 v_2 + ... + a_n v_n ||_2 \leq \sqrt{n}$. -I think I should define $n$ random variables $X_i$ that take values in $\{-1,1\}$, each with probability ½, and somehow show that the probability that the squared $2$-norm of the linear combination with coefficients $X_i$ is greater than $n$ is less than $1$. But I am not sure how I should go about that. -Any hint would be appreciated. - -REPLY [4 votes]: You shouldn't look for the probability that the norm squared is greater than $n$. You should look for its expected value. We have -$$\left|\sum a_jv_j\right|^2 = \sum\sum a_j a_k v_j\cdot v_k = \sum a_j^2|v_j|^2 + \sum_{j\ne k} a_j a_k v_j\cdot v_k = n + \sum_{j\ne k} a_ja_k\,v_j\cdot v_k.$$ -Assume that the $a_j$ are mutually independent variables with $P(a_j=\pm1) = 1/2$. -Now the expected value of $a_ja_k$ for $j\ne k$ is $0$, so the expected value of the norm squared is $n$. Since the expectation is exactly $n$, the norm squared must be less than or equal to $n$ with positive probability; therefore there must exist $a_j$'s that make it less than or equal to $n$.
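-A quick empirical illustration of the argument above (my own sketch, not part of the original answer): for random unit vectors, the average of $\|\sum_j a_j v_j\|^2$ over random signs sits near $n$, and sign choices achieving $\le \sqrt{n}$ show up immediately.
-import random, math
-
-def random_unit_vector(n):
-    v = [random.gauss(0.0, 1.0) for _ in range(n)]
-    norm = math.sqrt(sum(c * c for c in v))
-    return [c / norm for c in v]
-
-n = 8
-vs = [random_unit_vector(n) for _ in range(n)]
-
-def norm_sq(signs):
-    # squared 2-norm of sum_j signs[j] * vs[j]
-    s = [sum(signs[j] * vs[j][i] for j in range(n)) for i in range(n)]
-    return sum(c * c for c in s)
-
-trials = [norm_sq([random.choice((-1, 1)) for _ in range(n)]) for _ in range(2000)]
-print(sum(trials) / len(trials))  # hovers around n = 8, the exact expectation
-print(min(trials) <= n)           # some sign pattern achieves norm^2 <= n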
<|endoftext|> -TITLE: Basic understanding of a metric. -QUESTION [8 upvotes]: What is a metric? Does a metric depend on the system of coordinates I use? Does it depend on the surface (or higher-dimensional manifold -- correct me if I'm using the word wrongly) on which the coordinate frames are chosen? Is there any general form for a metric? In this course it's mentioned that Gauss pointed out the restriction that an infinitesimal metric or distance function on a surface be of the form -$$(dS)^2 = g_{xx} (x,y)(dx)^2 + g_{xy} (x,y)dxdy + g_{yy} (x,y)(dy)^2.$$ And is the requirement that this quadratic form look like a Euclidean metric $(ds)^2 = (dx)^2 + (dy)^2$ just a belief at work in this comment of Gauss, or is there a proper logical explanation for it? - -REPLY [7 votes]: $\newcommand{\Reals}{\mathbf{R}}$In the broad sense of your question, a "metric" is a means of quantifying "distance" between points in some "space" (or set) $M$. -One common axiomatic approach leads to the concept of a "topological metric", a function $d$ (for "distance") that assigns a positive real number $d(p, q)$ to each unordered pair of distinct points $p$ and $q$ in $M$, subject to the "triangle inequality", and to the condition $d(p, p) = 0$ for all $p$. This approach is described in multiple answers. -In 1854, Georg Bernhard Riemann gave his habilitation lecture Über die Hypothesen, welche der Geometrie zu Grunde liegen (see Item 13 on the linked page), in which he took an approach leading to the concept of metric in your question. -Suppose $M$ is a space that, near each point, "looks like" the $n$-dimensional Cartesian space. By "looks like", you should imagine that if $p$ is an arbitrary point of $M$, then a sufficiently small neighborhood $U$ of $p$ in $M$: - -Is continuously coordinatized by $n$ real-valued functions (i.e., $U$ is homeomorphic to an open subset of $\Reals^{n}$). -There is a well-defined notion of differentiability. Loosely, if $(x^{1}, \dots, x^{n})$ and $(y^{1}, \dots, y^{n})$ are two different sets of coordinates near $p$, then an arbitrary real-valued function $f$ is smooth when expressed in the $x$ coordinates if and only if $f$ is smooth when expressed with respect to the $y$ coordinates. This is equivalent to saying the change of coordinates mapping from $x$ to $y$ is smooth, and invertible with smooth inverse. - -Nowadays we call $M$ (together with a fixed notion of "differentiability") an $n$-dimensional smooth manifold. Examples include Cartesian space itself and any non-empty open subset, spheres of arbitrary dimension (the "unit $n$-sphere" is the set of points at unit distance from the origin in $\Reals^{n+1}$), tori of arbitrary dimension, real or complex projective spaces, and regular level sets of smooth mappings. -Riemann's idea was, in essence, to assume the Pythagorean theorem holds infinitesimally, namely, that if $du$ and $dv$ are orthogonal infinitesimal displacements, then the magnitude of their sum is $ds = \sqrt{du^{2} + dv^{2}}$. -With modern hindsight, Riemann is equipping each "tangent space" of $M$ with an inner product, a real, symmetric, positive-definite bilinear form, now called a Riemannian metric. With respect to a coordinate system $(x^{1}, \dots, x^{n})$, a Riemannian metric and the associated length-squared function look like -$$ -g = \sum_{i, j} g_{ij}\, dx^{i} \otimes dx^{j},\qquad -(ds)^{2} = \sum_{i, j} g_{ij}\, dx^{i} \, dx^{j}. -$$ -(The Riemannian metric $g_{p}$ at a point $p$ accepts two tangent vectors and returns their inner product; the metric components $(g_{ij})$ are location-dependent matrix entries defining the inner product.
The length-squared function is the associated quadratic form, $(ds)^{2}(v) = g(v, v)$. -A Riemannian metric is not the only "distance-inducing geometric structure" a manifold may possess. In addition to general topological metrics, there are, for example, Finsler manifolds, in which each tangent space is equipped with a norm function, and pseudo-Riemannian manifolds, as in general relativity, in which the inner product is not positive-definite.) -Riemannian geometry is the study of Riemannian manifolds, including questions such as - -When are two Riemannian metrics isometric? (Loosely, when do two different coordinate representations determine "the same geometry"?) Even locally (in a neighborhood of a point) the question is non-trivial. Riemann showed that a Riemannian metric is locally isometric to the Euclidean metric $\sum_{i} (dx^{i})^{2}$ if and only if a certain fourth-order tensor vanishes. -Given a tangent vector $v_{p}$ at some point $p$ and a smooth path in $M$ joining $p$ to $q$, is there a "natural" notion of parallel transport along the path, yielding a tangent vector $v_{q}$ at $q$? (The answer is "yes", and the Riemann curvature tensor has the following geometric interpretation. Let $u$ and $v$ be tangent vectors at $p$. Parallel transport around a small (curvilinear) parallelogram with sides (approximately) parallel to $u$, $v$, $-u$, and $-v$ defines an orthogonal transformation of the tangent space $T_{p}M$, and the dependence turns out to be linear in $u$ and $v$. To say the curvature vanishes is to say that parallel transport around a small parallelogram is the identity transformation, as is the case in Euclidean geometry.) -Given two points $p$ and $q$ in a Riemannian manifold, does there exist a "shortest path" (technically, a length-minimizing geodesic) from $p$ to $q$? If so, is such a geodesic unique? - -Locally, geodesics exist and are automatically length-minimizing, but globally they can fail to exist, or to minimize length, for geometric reasons. For example, if $M$ is the plane $\Reals^{2}$ with the origin removed, and if $g$ is the (restriction of) the Euclidean metric, then a point $p$ and its "opposite" $q = -p$ are not joined by a geodesic in $M$. -A Riemannian manifold is complete if every geodesic is defined for all time. (Loosely, no geodesic "encounters an edge of $M$ in finite time", or "$M$ has no edge".) Even in a complete Riemannian manifold, a sufficiently long geodesic can fail to be length-minimizing: Think of a great circle arc on a sphere subtending more than half of a "full turn". -A connected Riemannian manifold $(M, g)$ becomes a metric space (equipped with a topological metric) if, for points $p$ and $q$ of $M$, we define $d(p, q)$ to be the infimum of the lengths of piecewise-smooth curves from $p$ to $q$. When $M = \Reals^{n}$ and $g$ is the Euclidean (Riemannian) metric, the geodesics are line segments (parametrized at constant speed) and the topological distance reduces to the Euclidean (Pythagorean) distance. In general, however, a topological metric on a smooth manifold $M$ does not arise in this way from a Riemannian metric on $M$.<|endoftext|> -TITLE: Are $\frac{\pi}{e}$ or $\frac{e}{\pi}$ irrational? -QUESTION [7 upvotes]: Is it clear whether $\displaystyle \frac{\pi}{e}$ or $\displaystyle \frac{e}{\pi}$ are irrational or not? 
-If not, then there would exist $q,p\in \mathbb{Z}$ such that $$p\cdot \pi = q\cdot e$$ - -REPLY [5 votes]: A QUITE SIMPLE REMARK $$\begin{cases}\frac{e}{\pi}=t\\e+\pi=s\end{cases}\qquad (*)$$ would imply $$\pi=\frac{s}{t+1}\\e=\frac{st}{t+1}$$ Consequently, at least one of $t$ and $s$ in $(*)$ must be transcendental: if both were algebraic, then $\pi=\frac{s}{t+1}$ would be algebraic, contradicting the transcendence of $\pi$.<|endoftext|> -TITLE: On the surjectivity of the Hurewicz homomorphism -QUESTION [6 upvotes]: The Hurewicz homomorphism is a surjective homomorphism from $\pi_n(X) \to H_n(X)$ if $\pi_{n-2}(X)=0$ according to Wikipedia. -But if it is surjective then how could the following (contradiction) I constructed be true? -$\pi_2(\mathbb{RP}^5)=\pi_2(S^5)=0$ so $\mathbb{RP}^5$ is $2$-connected. Thus $\pi_4(\mathbb{RP}^5)\to H_4(\mathbb{RP}^5)$ is a surjective homomorphism. But $H_4(\mathbb{RP}^5)=\mathbb{Z}/2$ and $\pi_4(\mathbb{RP}^5)=\pi_4(S^5)=0$. So the Hurewicz homomorphism can't be surjective. - -REPLY [7 votes]: A topological space $X$ is called $k$-connected if $\pi_i(X) = 0$ for $0 \leq i \leq k$. The Hurewicz theorem states that if $X$ is $(n-2)$-connected, then the Hurewicz homomorphism $h_* : \pi_n(X) \to H_n(X)$ is surjective. -In your example, while $\pi_2(\mathbb{RP}^5) = 0$, $\pi_1(\mathbb{RP}^5) = \mathbb{Z}/2\mathbb{Z} \neq 0$, so $\mathbb{RP}^5$ is not $2$-connected and therefore the Hurewicz theorem does not apply. - -REPLY [2 votes]: For the Hurewicz homomorphism, ALL the homotopy groups $\pi_i$, $i\leq n$, have to be zero in order to have a surjective morphism $\pi_{n+1}(M)\rightarrow H_{n+1}(M)$. In your case, $\pi_1(\mathbb{RP}^5)$ is not zero.<|endoftext|> -TITLE: Prob. 6, Chapter 1 in Baby Rudin -QUESTION [8 upvotes]: Here's problem $6$ in Chapter $1$ in the book Principles of Mathematical Analysis by Walter Rudin, $3$rd edition: -Fix a real number $b$, such that $b > 1$. -$(a)$ If $m, n, p, q$ are integers, $n > 0$, $q > 0$, and $r = m/n = p/q$, then (I've managed to show that) -$$\left( b^m \right)^{1/n} = \left( b^p \right)^{1/q}.$$ -Hence it makes sense to define -$$b^r \colon= \left(b^m\right)^{1/n}.$$ -$(b)$ (I've also managed to show that) for any rational numbers $r, s$, we have -$$b^{r+s} = b^r b^s.$$ -$(c)$ For any real number $x$, define the set $B(x)$ as follows: -$$B(x) \colon= \left\{ \ b^t \ \colon \ t \ \mbox{ is rational}, \ t \leq x \ \right\}.$$ -We can then prove that -$$b^r = \sup B(r)$$ -when $r$ is a rational number. -Hence it makes sense to define -$$b^x \colon= \sup B(x) = \sup \left\{ \ b^t \ \colon \ t \ \mbox{ is rational}, \ t \leq x \ \right\}$$ -for every real number $x$. -$(d)$ How to prove (USING THE MACHINERY DEVELOPED HERE) that $$b^{x+y} = b^x b^y$$ -for all real numbers $x$ and $y$? -I've already posted this question. Here's the link. However, none of the answers there seems to work for me. - -My Work -Let $X, Y$ be any two non-empty subsets of $\mathbb{R}$.
Then we have the following facts: -$(1)$ If $X \subset Y$ and if $Y$ is bounded above in $\mathbb{R}$, then so is $X$ and we have the inequality -$$\sup X \leq \sup Y.$$ -$(2)$ If $X, Y$ are non-empty subsets of the set of non-negative real numbers and if both $X$ and $Y$ are bounded above, then so is the set -$$\{ \ xy \ \colon \ x \in X, \ y \in Y \ \};$$ -moreover, we have the identity -$$\sup \{ \ xy \ \colon \ x \in X, \ y \in Y \ \} = \sup X \cdot \sup Y.$$ -Thus, -$$\begin{align} -b^x b^y &= \sup B(x) \sup B(y) \\ -&= \sup \{ \ b^r \ \colon \ r \in \mathbb{Q}, r \leq x \ \} \sup \{ \ b^s \ \colon \ s \in \mathbb{Q}, s \leq y \ \} \\ -&= \sup \{ \ b^{r+s} \ \colon \ r \in \mathbb{Q}, \ s \in \mathbb{Q}, \ r \leq x, \ s \leq y \ \}. -\end{align}$$ -But if $r \in \mathbb{Q}$, $s \in \mathbb{Q}$, $r \leq x$, and $s \leq y$, then the sum $r+s \in \mathbb{Q}$ and $r + s \leq x + y$ also. So -$$ \{ \ b^{r+s} \ \colon \ r \in \mathbb{Q}, \ s \in \mathbb{Q}, \ r \leq x, \ s \leq y \ \} \subseteq \{ \ b^t \ \colon \ t \in \mathbb{Q}, \ t \leq x+y \ \}.$$ -Therefore, -$$\begin{align} -b^x b^y &= \sup \{ \ b^{r+s} \ \colon \ r \in \mathbb{Q}, \ s \in \mathbb{Q}, \ r \leq x, \ s \leq y \ \} \\ -&\leq \sup \{ \ b^t \ \colon \ t \in \mathbb{Q}, \ t \leq x+y \ \} \\ -&= \sup B(x+y) = b^{x+y}. -\end{align}$$ -Now we have to show that -$$b^x b^y \geq b^{x+y}.$$ -Case I. When $x+y$ is irrational: -If $t$ is a rational number such that $t \leq x+y$, then since $x+y$ is irrational, we can also conclude that $t < x+y$. -So we can find a rational number $r$ such that -$$ t-y < r < x.$$ -Then -$$t-r < y < x+y -r.$$ -Let's take $s \colon = t-r$. Then $s \in \mathbb{Q}$. And $s < y$. -Thus, we have written $t$ as $t = r + s$, where $r, s \in \mathbb{Q}$, with $r < x$, $s < y$. -So, when $x + y \in \mathbb{R} - \mathbb{Q}$, then we have -$$\begin{align} -b^{x+y} &= \sup B(x+y) = \sup \{ \ b^t \ \colon \ t \in \mathbb{Q}, \ t \leq x+y \ \} \\ -&= \sup \{ \ b^t \ \colon \ t \in \mathbb{Q}, \ t < x+y \} \\ -&= \sup \{ \ b^{r+s} \ \colon \ r \in \mathbb{Q}, \ s \in \mathbb{Q}, \ r< x, \ s < y \} \\ -&= \sup \{ \ b^r b^s \ \colon \ r \in \mathbb{Q}, \ s \in \mathbb{Q}, \ r< x, \ s < y \} \\ -&= \sup \{ \ b^r \ \colon \ r \in \mathbb{Q}, \ r < x \ \} \sup \{ \ b^s \ \colon \ s \in \mathbb{Q}, \ s < y \} \\ -&\leq \sup \{ \ b^r \ \colon \ r \in \mathbb{Q}, \ r \leq x \ \} \sup \{ \ b^s \ \colon \ s \in \mathbb{Q}, \ s \leq y \} \\ -&= \sup B(x) \sup B(y) \\ -&= b^x b^y. -\end{align}$$ -Case II. What if $x + y \in \mathbb{Q}$? -In this case, either both $x$ and $y$ are rational (and then we are done by part $(b)$), or both $x$ and $y$ are irrational. -How do we proceed with showing the remaining reverse inequality in this particular case? - -REPLY [4 votes]: This is another instance of proving stuff with one hand tied behind your back, limiting knowledge of the reals to what is in Rudin ch. 1. For this reason I write out a complete proof, even though most of it is given by the OP. We assume $b>1$ real. -Lemma 0. If $p>0$ is rational, $b^p>1$. -Lemma 1. If $s<t$ are rational, then $b^s<b^t$. -Lemma 2. If $t>1$ and $n>(b-1)/(t-1)$ then $b^{1/n}<t$. -Lemma 3. If $x$ is rational, then $b^x=\sup\{b^r \colon r \in \mathbb{Q},\ r<x\}$: by Lemma 1, $b^x$ is an upper bound of $\{b^r \colon r<x\}$; and given $u<b^x$, Lemma 2 yields an $n$ with $b^{1/n}<b^x/u$, so that $b^{x-1/n}>u$, with $x-1/n$ rational and $x-1/n<x$.
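-A numerical sanity check of the supremum definition (my own sketch, independent of the argument above; floating-point arithmetic, so only illustrative): approximating $\sup B(x)$ by $b^{r}$ with the rational $r=\lfloor 10^k x\rfloor/10^k$ reproduces $b^{x+y}=b^x b^y$ to high accuracy.
-from fractions import Fraction
-import math
-
-b = 3.0
-
-def b_pow_sup(x, k=12):
-    # approximate sup { b^r : r rational, r <= x } by the rational r = floor(10^k x) / 10^k
-    r = Fraction(math.floor(x * 10**k), 10**k)
-    return b ** float(r)
-
-x, y = math.sqrt(2), math.pi - 4  # one positive irrational, one negative
-lhs = b_pow_sup(x + y)
-rhs = b_pow_sup(x) * b_pow_sup(y)
-print(lhs, rhs, abs(lhs - rhs) < 1e-6)  # the two values agree closely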
<|endoftext|> -TITLE: Sudoku grid guaranteed to be solvable? -QUESTION [9 upvotes]: I want to generate random sudoku grids. -My approach is to fill the three 3x3 groups on a diagonal of the grid, each with the numbers 1-9 randomly shuffled. That looks e.g. like the grid below: -+-------+-------+-------+ -| 5 6 4 | · · · | · · · | -| 1 7 2 | · · · | · · · | -| 9 8 3 | · · · | · · · | -+-------+-------+-------+ -| · · · | 3 2 5 | · · · | -| · · · | 7 9 6 | · · · | -| · · · | 8 1 4 | · · · | -+-------+-------+-------+ -| · · · | · · · | 1 5 9 | -| · · · | · · · | 3 2 7 | -| · · · | · · · | 6 4 8 | -+-------+-------+-------+ - -Then I let my sudoku solver process it until it finds a solution to fill all remaining gaps. The example above resulted in the filled grid below: -+-------+-------+-------+ -| 5 6 4 | 9 3 7 | 2 8 1 | -| 1 7 2 | 5 4 8 | 9 6 3 | -| 9 8 3 | 1 6 2 | 4 7 5 | -+-------+-------+-------+ -| 7 4 9 | 3 2 5 | 8 1 6 | -| 2 1 8 | 7 9 6 | 5 3 4 | -| 6 3 5 | 8 1 4 | 7 9 2 | -+-------+-------+-------+ -| 8 2 6 | 4 7 3 | 1 5 9 | -| 4 5 1 | 6 8 9 | 3 2 7 | -| 3 9 7 | 2 5 1 | 6 4 8 | -+-------+-------+-------+ - -My question is if this approach is mathematically safe, i.e. can you prove to me that when I fill my grid using the described approach (randomly assigning the first 27 numbers to fields in the groups on a diagonal line), there will always be at least one possible way to complete the grid from there on? -Or are there chances that the randomly placed numbers can interfere with each other and result in an impossible grid? - -REPLY [2 votes]: WLOG the top block is in the canonical order, because we can just relabel, as in 5 -> 1, 6 -> 2, … in your example. I'm not going to make that replacement through the rest of your grid, but will just pretend you started off like that. -+-------+-------+-------+ -| 1 2 3 | · · · | · · · | -| 4 5 6 | · · · | · · · | -| 7 8 9 | · · · | · · · | -+-------+-------+-------+ -| · · · | 3 2 5 | · · · | -| · · · | 7 9 6 | · · · | -| · · · | 8 1 4 | · · · | -+-------+-------+-------+ -| · · · | · · · | 1 5 9 | -| · · · | · · · | 3 2 7 | -| · · · | · · · | 6 4 8 | -+-------+-------+-------+ - -Making minor-row swaps within one of the three major rows, or making minor-column swaps within one of the three major columns, doesn't change solvability of the sudoku. Therefore we may WLOG that the top-left entry of each of the three grids is 1, by rotating: -+-------+-------+-------+ -| 1 2 3 | · · · | · · · | -| 4 5 6 | · · · | · · · | -| 7 8 9 | · · · | · · · | -+-------+-------+-------+ -| · · · | 1 4 8 | · · · | -| · · · | 9 6 7 | · · · | -| · · · | 2 5 3 | · · · | -+-------+-------+-------+ -| · · · | · · · | 1 5 9 | -| · · · | · · · | 3 2 7 | -| · · · | · · · | 6 4 8 | -+-------+-------+-------+ - -Similarly we may WLOG that the 5 of the middle block appears in one of two places: -+-------+ -| 1 * | -| ! * | -| | -+-------+ - -because if it's in any of the others, we can row/column swap it into one of those. The exception is if it's in the ! position, which is actually equivalent to the top *. This is because we may transpose the entire grid, and relabel the top-left block again, without affecting the middle block's 1 or 5 except in moving them to the correct positions. -Likewise (but without the option of reflecting this time, because that could mess up the middle block) we may WLOG that the bottom-right block's 5 is in one of three positions: -+-------+ -| 1 * | -| * * | -| | -+-------+ - -There are now $2 \times 7! \times 3 \times 7! = 152409600$ remaining cases, which are left as an exercise to the reader.
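-The question also invites an empirical check; here is a minimal sketch of one (my own addition, not from the thread): fill the three diagonal blocks at random and run a plain backtracking solver. Every trial completes, which is evidence for (though of course not a proof of) the claim that the diagonal construction is always completable.
-import random
-
-def solve(grid):
-    # plain backtracking; grid is a 9x9 list of lists, 0 marks an empty cell
-    for r in range(9):
-        for c in range(9):
-            if grid[r][c] == 0:
-                for v in range(1, 10):
-                    if all(grid[r][j] != v for j in range(9)) and \
-                       all(grid[i][c] != v for i in range(9)) and \
-                       all(grid[i][j] != v
-                           for i in range(r - r % 3, r - r % 3 + 3)
-                           for j in range(c - c % 3, c - c % 3 + 3)):
-                        grid[r][c] = v
-                        if solve(grid):
-                            return True
-                        grid[r][c] = 0
-                return False
-    return True
-
-def random_diagonal_grid():
-    grid = [[0] * 9 for _ in range(9)]
-    for k in range(3):  # the three blocks on the main diagonal
-        digits = random.sample(range(1, 10), 9)
-        for i in range(3):
-            for j in range(3):
-                grid[3 * k + i][3 * k + j] = digits[3 * i + j]
-    return grid
-
-print(all(solve(random_diagonal_grid()) for _ in range(100)))  # has printed True in every run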
<|endoftext|> -TITLE: When writing proofs, is logical notation a crutch? -QUESTION [30 upvotes]: I'm near the end of Velleman's How to Prove It, self-studying and learning a lot about proofs. This book teaches you how to express ideas rigorously in logic notation, prove the theorem logically, and then "translate" it back to English for the written proof. I've noticed that because of the way it was taught I have a really hard time even approaching a proof without first expressing everything rigorously in logic statements. Is that a problem? I feel like I should be able to manipulate the concepts correctly enough without having to literally encode everything. Is logic a crutch? Or is it normal to have to do that? - -REPLY [5 votes]: Logic is not a crutch. It is a way to check that you haven't made a mistake. -It might not be the best first step to make a proof, but it is definitely a necessary step. -As you get more experienced, you tend to start by sketching the proof in very informal terms. This is like navigating by large landmarks: "I am headed towards that mountain over there, hm, that ridge seems like a good approach. OK, let's go." -However, to actually get to that mountain you still put one foot in front of another, one step at a time for hours. Logic is like walking one step at a time, while looking at your feet and the nearby ground. It is necessary, but sometimes you need to look up to see where you should be going. -Now, if you are going to write the proof in English eventually, you don't have to actually write down all the logic in symbols. I find that I can write English (or Norwegian) while keeping the formal symbols in my mind. -Some published proofs aren't very formal. The reason for this is that formal proofs can be very long. Instead they give arguments that convince the reader that it is possible to work out the proof in all its details, even if neither the writer nor the reader have actually done so. -Professional mathematicians become good at telling whether such an informal argument is valid or not, but they occasionally slip up; even peer-reviewed articles can be wrong. So, take care.<|endoftext|> -TITLE: State whether $x^5-5x^4+10x^3-7x^2+8x-4$ is irreducible or not -QUESTION [5 upvotes]: I have tried everything in my knowledge and no, I cannot state it. -I have tried a factorizer online which tells me that it is not factorizable hence irreducible. But I cannot reason why. -I looked at Eisenstein's criterion but obviously, there is no prime $q$ that fits the criterion so this is useless. -I then tried reducibility via modulo reduction, and this should give me the options to test irreducibility up to mod $8$ since that is the largest coefficient in the polynomial...yes? But every mod arrives at the polynomial being reducible...so it basically fails to tell me that it is irreducible. -For mod 2, I get 0 as a solution to the reduced polynomial so that means I can factor it out with $x$. -Similarly, mod 3 says 1 is a solution so $(x-1)$ should be a factor. -In a similar fashion, I get mod 4,5,6,7 to have solutions 0,3,4,6. -Am I missing anything? This is the best I can do, nothing more assures me of irreducibility at all. Ideas please...? - -REPLY [2 votes]: One option is to reduce the given polynomial modulo $11$, in which case it factors (over $\Bbb F_{11}$) as -$$(x - 5)(x^4 - x^2 - x - 3).$$ -So, if the polynomial is reducible over $\Bbb Q$, it has one linear factor and one irreducible quartic factor there. -On the other hand, checking the short list, $\pm 1, \pm 2, \pm 4$, of candidates given by the Rational Root Test shows that the polynomial has no rational roots and hence no linear factors. (In fact, since the signs of the polynomial are alternating, all of its real roots are positive, so we need only check $+1, +2, +4$.) Thus, the polynomial is irreducible. -(Alternatively, the given polynomial is irreducible altogether modulo $17$, but this is surely even less pleasant to check by hand than the irreducibility modulo $11$ of the above quartic.)
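-A quick machine check of both steps (my own sketch; it assumes the sympy library is available): the factorization over $\Bbb F_{11}$ and the irreducibility over $\Bbb Q$.
-from sympy import Poly, symbols
-from sympy import GF, QQ
-
-x = symbols('x')
-p = Poly(x**5 - 5*x**4 + 10*x**3 - 7*x**2 + 8*x - 4, x)
-
-print(p.set_domain(GF(11)).factor_list())  # a linear factor times a quartic mod 11
-print(p.set_domain(QQ).factor_list())      # no nontrivial factorization over the rationals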
<|endoftext|> -TITLE: Integrate: $\int \frac{\sin(x)}{9+16\sin(2x)}\,\text{d}x$. -QUESTION [6 upvotes]: Integrate: $$\int \frac{\sin(x)}{9+16\sin(2x)}\,\text{d}x.$$ -I tried the substitution method ($\sin(x) = t$) and ended up getting $\int \frac{t}{9+32t-32t^3}\,\text{d}t$. Don't know how to proceed further. -Also tried adding and subtracting $\cos(x)$ in the numerator which led me to get $$\sin(2x) = t^2-1$$ by taking $\sin(x)+\cos(x) = t$. -Can't figure out any other method now. Any suggestions or tips? - -REPLY [6 votes]: To attack this integral, we will need to make use of the following facts: -$$(\sin{x} + \cos{x})^2 = 1+\sin{2x}$$ -$$(\sin{x} - \cos{x})^2 = 1-\sin{2x}$$ -$$\text{d}(\sin{x}+\cos{x}) = (\cos{x}-\sin{x})\text{d}x$$ -$$\text{d}(\sin{x}-\cos{x}) = (\cos{x}+\sin{x})\text{d}x$$ -Now, consider the denominator. -It can be rewritten in two different ways as hinted by the above information. -$$9+16\sin{2x} = 25 - 16(1-\sin{2x}) = 16(1+\sin{2x})-7$$ -$$9+16\sin{2x} = 25 - 16(\sin{x}-\cos{x})^2 = 16(\sin{x}+\cos{x})^2-7$$ -Also note that -$$\text{d}(\sin{x}-\cos{x})-\text{d}(\sin{x}+\cos{x}) = 2\sin{x}\text{d}x$$ -By making the substitutions -$$u = \sin{x}+\cos{x}, v = \sin{x}-\cos{x}$$ -The integral is transformed into two separate integrals which can be evaluated independently. -$$2I = \int \frac{\text{d}v - \text{d}u}{9+16\sin{2x}} = \int \frac{\text{d}v}{25-16v^2} + \int \frac{\text{d}u}{7-16u^2}$$ -The remainder of this evaluation is left as an exercise to the reader.
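-For reference, here is a completion of that exercise (my own addition, not part of the original answer); the two integrals are standard: -$$\int \frac{\text{d}v}{25-16v^2} = \frac{1}{40}\ln\left|\frac{5+4v}{5-4v}\right| + C_1, \qquad \int \frac{\text{d}u}{7-16u^2} = \frac{1}{8\sqrt{7}}\ln\left|\frac{\sqrt{7}+4u}{\sqrt{7}-4u}\right| + C_2,$$ -so that, with $u=\sin{x}+\cos{x}$ and $v=\sin{x}-\cos{x}$ as above, -$$I = \frac{1}{80}\ln\left|\frac{5+4v}{5-4v}\right| + \frac{1}{16\sqrt{7}}\ln\left|\frac{\sqrt{7}+4u}{\sqrt{7}-4u}\right| + C.$$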
<|endoftext|> -TITLE: Why Cohomology Groups? -QUESTION [21 upvotes]: Why do we need cohomology groups? Homology groups are easier to compute and given two topological spaces, there is an isomorphism in homology groups if and only if there is an isomorphism in cohomology groups. So why do I need them? - -REPLY [7 votes]: Cohomology appears naturally. Lots of problems are solved locally and the local solutions are then glued together to construct global solutions, and cohomology is the precise description of how to do that.<|endoftext|> -TITLE: Can the Recursion Theorem be proved in Peano Arithmetic? -QUESTION [5 upvotes]: A recursive function on $\mathbb{N}$ can be defined as follows: -Given an element $a \in \mathbb{N}$ and a function $f:\mathbb{N}\rightarrow\mathbb{N}$, we can define a function $g:\mathbb{N}\rightarrow\mathbb{N}$ as follows: - -$g(0) = a$ -$g(n+1) = f(g(n))$ - -The fact that $g$ exists and is unique can be proved using the Recursion Theorem in set theory. -My understanding is that in Peano Arithmetic, it is possible to prove the existence of exponentiation using just addition and multiplication, but is it true for any recursively defined function? -In other words, is the existence of addition and multiplication strong enough to prove the existence of any recursive function? I.e. can the Recursion Theorem be proved using the Peano Axioms (including addition and multiplication)? -Update: Since the answers below indicate that this is true, I am looking for an outline of a proof or a reference that shows that addition and multiplication are powerful enough to define any recursive function. - -REPLY [2 votes]: $PA$ can indeed prove the recursion theorem, although it's a bit of work to formalize it properly since $PA$ doesn't talk directly about functions. It's much better to work in the theory $ACA_0$, a conservative two-sorted extension of $PA$ which does talk directly about functions. - -However, this isn't really the end of the story. It's not that $ACA_0$ proves the existence of exponentiation, but rather that PA proves the totality of exponentiation. And, in fact, $ACA_0$ does not prove the totality of every recursive function! This is a somewhat different issue, and I suspect that this is what you are really interested in.
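-To make the recursion scheme in the question concrete (a toy illustration of the template $g(0)=a$, $g(n+1)=f(g(n))$; the function names here are mine):
-def recurse(a, f):
-    # returns the function g determined by g(0) = a and g(n + 1) = f(g(n))
-    def g(n):
-        value = a
-        for _ in range(n):
-            value = f(value)
-        return value
-    return g
-
-g = recurse(1, lambda m: 2 * m)  # the instance g(n) = 2^n, built from repeated doubling
-print([g(n) for n in range(6)])  # [1, 2, 4, 8, 16, 32]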
<|endoftext|> -TITLE: Difference between pure probability and measure-theoretic probability -QUESTION [5 upvotes]: What is the difference between probability theory when it is and isn't based on measure theory? -My university teaches it with measure theory, and I am enjoying it so far, but am curious as to what I am missing out on. Is "pure" probability theory worth taking a look at, or will it be a waste of time due to already having things covered with measure-theoretic PT? - -REPLY [8 votes]: Probability theory based on measure theory is not less pure than purely mathematical probability not requiring understanding of measure theory. -If one regards Kolmogorov as God's last prophet in the field of probability, as some mathematicians in effect do, then measure-theoretic probability is probability. Probability courses not involving measure theory are intended for people who don't know measure theory. You can say quite a lot about probability that can be understood without knowledge of measure theory and it's perfectly possible that a lot of good stuff is in that other course that's not in the measure-theoretic course. -Some people reject the idea that countable additivity should be taken as axiomatic. Then they can talk about a uniform distribution of sorts on the set of all integers. Inequalities for Stochastic Processes by Lester Dubins and Leonard Jimmy Savage dispenses with countable additivity for technical reasons. -One can also argue that the following question belongs to probability theory but not to mathematics (and that would mean probability theory is a science relying heavily on mathematics, just as physics does, but is not actually a field within mathematics, just as physics is not): Why should the conventional axioms of probability apply when probabilities are taken to be epistemic, i.e. assigned to uncertain propositions and expressing logically justified degrees of belief, rather than relative frequencies? A number of answers to that have been published, including Bruno de Finetti's thought-experiments on gambling strategy and Richard T. Cox's book Algebra of Probable Inference. (And my own paper titled "Scaled Boolean Algebras" in the 2002 volume of Advances in Applied Mathematics (this was before the Elsevier boycott).) -Occasionally a mathematician rejects outright (as William Feller did) the idea that such epistemic probability theory has any validity or utility. An important thing to notice about that is that such a rejection is in no sense a mathematical proposition, susceptible of proof or disproof. -The proposition that probabilities should be taken to be epistemic, regardless of whether that means taking them to be subjective (on the one hand) or (on the other hand) logical, is called Bayesianism. Bayesian methods of statistical inference are invulnerable to some pathologies that can afflict what are called frequentist methods, but you encounter the hard problem of assignment of prior probabilities, and that is not a mathematical problem. Lest anyone think that that means subjectivity is inherent in Bayesian methods but not in frequentist methods, let us note that the choice of maximum probabilities of Type I and Type II errors that one is willing to tolerate in frequentist hypothesis testing is itself a subjective economic decision (unless you take the philosophical view, as some do, that economic decisions are not basically subjective).<|endoftext|> -TITLE: $A\subseteq B$ integral domains with surjective multiplication, then the localization by all monic polynomials evaluated at some point is nonzero -QUESTION [5 upvotes]: Let $A\subseteq B$ be two integral domains such that the multiplication function $m: A \times (B \setminus A) \to B$, $m(x,y)=xy$, is surjective. -Let $S \subset A[x]$ be the set of all monic polynomials. For each $b \in B$, let ${v}_b: A[x] \to B$ be the evaluation homomorphism, given by $v_b(f) = f(b)$. -Prove that there exists $b_0 \in B$ such that $({v}_{b_0}(S))^{-1}B \neq \{0\}$. - -We look for some $b_0,b \in B, f \in S$ such that $\frac{b}{f(b_0)} \neq \frac{0}{1}$. Thus we look for some elements $b_0, b \in B$ such that for all $g \in S$, $b\cdot g(b_0)\neq 0 $ in $B$. Since $B$ is an integral domain, this is equivalent to requiring that both $b, g(b_0)$ are nonzero. Hence the choice of $b$ doesn't seem to matter, and we only want to find a suitable $b_0 \in B$ that is not the root of any monic polynomial in $A[x]$. -Clearly we must have $b_0 \in B \setminus A$, since otherwise we can take $g = x-b_0 \in S$ and get $b\cdot g(b_0) = 0 $ no matter how we choose $b$. -We didn't use our given information that $m$ is surjective and $A$ is an integral domain. The surjectivity implies that there are $x \in A, y \in B \setminus A$, with $xy=1$. Thus $x,y$ are units in $B$. Which seems to be a dead-end. - -REPLY [2 votes]: Let $A\subseteq B$ be a ring extension such that the multiplication function $m: A \times (B \setminus A) \to B$, $m(x,y)=xy$, is surjective. Show that $A\subseteq B$ is not integral. - -Let me start from here: The surjectivity implies that there are $x \in A, y \in B \setminus A$, with $xy=1$. -Suppose that $A\subset B$ is an integral extension. Then $y$ is integral over $A$, so we may write $$y^n+a_1y^{n-1}+\cdots+a_n=0,$$ where $a_i\in A$. Multiply this by $x^n$ and find $1+a_1x+\cdots+a_nx^n=0$, so $x$ is invertible in $A$; since $xy=1$, this inverse is $y$, hence $y\in A$, a contradiction.
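-A concrete instance of this (my own example, added for illustration): take $A=\mathbb{Z}$ and $B=\mathbb{Z}[1/2]$. Then $m$ is surjective: for $b=0$ take $(x,y)=(0,1/2)$; for $b\in\mathbb{Z}\setminus\{0\}$ with $2^v$ the exact power of $2$ dividing $b$, take $x=2^{v+1}$ and $y=b/2^{v+1}$, an odd integer divided by $2$, so $y\notin\mathbb{Z}$; and for $b\notin\mathbb{Z}$ take $(x,y)=(1,b)$. Consistently with the statement, $b_0=1/2$ works: a rational root of a monic polynomial over $\mathbb{Z}$ must be an integer, so $g(1/2)\neq 0$ for every monic $g\in\mathbb{Z}[x]$.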
<|endoftext|> -TITLE: How, if at all, does pure mathematics benefit from $2^{74207281}-1$ being prime? -QUESTION [16 upvotes]: So a couple of days ago the $17$ million digit number $2^{57885161}-1$ was beaten by the $22$ million digit number $2^{74207281}-1$ at being the largest known prime number. -Are there any specific (purely mathematical) implications of the fact that this particular number is prime? In particular, does this resolve something other than the question of whether $2^{74207281}-1$ is prime, in the style of some newly discovered theorem resolving previous conjectures? Is it perhaps a counterexample to something so far believed to be true? -Edit: I'm aware of the many non-(strictly mathematical) practical implications and reasons to continue the search for large primes (cryptography etc), but I'm not aware of any strictly mathematical uses of particular numbers being prime, which is why (and what) I'm asking. - -REPLY [2 votes]: Well, there have been cases in the past where a conjecture was eventually refuted only for an extremely large value. So finding such a number (and at a certain distance from the previous number) may contribute to our knowledge of some yet unproven conjecture. -Here is one such example of a conjecture that was eventually refuted only for an extremely large value (though in this case it was not refuted as a direct result of discovering that value): - -Consider the following functions: - -The prime-counting function $\pi(n)$ -The logarithm-integral function $Li(n)=\int\limits_{2}^{n}\frac{1}{\ln{x}}dx$ - -The prime number theorem, stating that $\pi(n) \approx Li(n)$, was proved in $1896$. -For small values of $n$, it had been checked and always found that $\pi(n) < Li(n)$, and it was conjectured that this inequality holds for every $n$. Littlewood proved in $1914$ that the difference $\pi(n)-Li(n)$ in fact changes sign infinitely often, so the conjecture is false; yet the first sign change occurs at an $n$ so large that no explicit example is known, and Skewes' famous upper bound for it (assuming the Riemann hypothesis) was the astronomically large $e^{e^{e^{79}}}$.<|endoftext|> -TITLE: Distributive Law and how it works -QUESTION [13 upvotes]: Quick question, which seems to irritate some people as it apparently strikes them as a nonsensical question: Why does the distributive law only work in one direction (in $(ℝ,+,*)$)? Why does it work this way: -$$ a * (b + c) = (a * b) + (a * c) $$ -But not this way: -$$ a + (b * c) = (a + b) * (a + c) $$ -What is it about the way addition and multiplication are constructed/composed that inhibits this form of usage? (in ℝ specifically) - -REPLY [9 votes]: My way of interpreting the distributive law in an image: - -The reason why people get annoyed with stuff like this is because they see the operations $+$ and $\times$ as symbols, not actual processes. -Standard index laws when using $\times$ allow us to relate multiplication with dimensions. -Standard addition laws regarding 'like terms' mean we can't just simplify two entities from these 'separate' dimensions. -From image: -"The output of the * operator can be visualised by adding dimensions (as per the index laws, x^2 * x^3 = x^5)" -"The output of the + operator can be visualised as lumping two of the same dimension objects together." -"If you imagine x=a=b=c and apply the distributive law, you'll end up with non-similar terms. i.e. you can't group x^2 and x terms"<|endoftext|> -TITLE: Why does division by zero not have an imaginary number "option"? -QUESTION [7 upvotes]: In regular math, you cannot get the square root of a negative number. Likewise, you cannot divide by zero. Both dividing by zero and taking the square root of a negative have no place in real life. -However, we have systems in place to deal with the square root of a negative number (hence, imaginary numbers). Why does an option for dividing by zero not exist? - -REPLY [8 votes]: I tried this myself. If I understand correctly, you intend by your question to ask "Why can we not define a number (call it $q$), such that $q \cdot 0 = 1$?". Just a bit of algebra here: we have defined $0$ as the additive identity. So we have $q \cdot 0 = q \cdot (0 + 0)$. Using the distributive law (which is always required in algebras, as far as I am aware), we have: $q \cdot 0 = q \cdot 0 + q \cdot 0$. Now subtracting $q \cdot 0$ from each side, we have $0 = q \cdot 0 = 1$, a contradiction. Thus, such a number cannot exist. Notice, also, that we did not assume that our number was real; in fact, we assumed nothing about the number $q$.
Thus it is the properties of $0$ that decide this, not the properties of our hypothesized $q$. - -REPLY [2 votes]: We can define a quantity $i$ to have the property $i^2 = -1$, add $i$ to our set of numbers, and still have all of the rules of algebra work correctly. -Suppose we could define $1/0$ to be a certain quantity, say $z$. Then we have this problem: If we start with the definition -$1/0 = z$, then multiply both sides by $0$, we get -$1 = z*0$, or -$1=0$, which is inconsistent.<|endoftext|> -TITLE: Ideal of $\mathbb{C}[x,y]$ not generated by two elements -QUESTION [11 upvotes]: Consider the ring of polynomials in two variables $\mathbb{C}[x,y]$. Show that the ideal $\langle xy^3, x^2y^2, x^3y\rangle$ cannot be generated by two elements. - -Until now, I assumed by contradiction that $\langle xy^3, x^2y^2, x^3y\rangle=\langle g, h \rangle$ for some $g,h \in \mathbb{C}[x,y]$. Then, we have that: -$g=xy^3a + x^2y^2b + x^3yc$ -$h=xy^3d + x^2y^2e + x^3yf$ -for $a, b, c, d, e, f$ in the polynomial ring. However, I'm not sure if this is the correct approach as I'm not sure on how to proceed. I was thinking that maybe one can write an element in $\langle xy^3, x^2y^2, x^3y\rangle$ as the square of a sum. What would be the best approach to do this? -Thanks for the help. - -REPLY [8 votes]: Set $I=\langle xy^3, x^2y^2, x^3y\rangle$ and $\mathfrak m=\langle x, y\rangle$. - -$\dim I/\mathfrak mI=3$ - -Just show that the residue classes of the generators of $I$ are linearly independent over the field $K=\mathbb{C}$, that is, if $axy^3+bx^2y^2+cx^3y\in \mathfrak mI$ with $a,b,c\in K$ then $a=b=c=0$. This is obvious since the generators of $\mathfrak mI$ are homogeneous of degree $5$. - -If $\dim I/\mathfrak mI=3$ then the minimal number of generators of $I$ is $3$. - -This is very well explained here.<|endoftext|> -TITLE: proving 1/x is convex (without differentiating) -QUESTION [12 upvotes]: I know that $\frac{1}{x}$ is convex when $x \in (0,\infty)$; this can be proven easily by showing that the second derivative is positive. However, I am finding difficulty showing it using the definition of convexity, in other words, for $\alpha \in [0,1]$ and $x_1, x_2 \in \Bbb R^+$, show that: -$$\frac{1}{\alpha x_1 + (1-\alpha) x_2 } \leq \frac{\alpha}{x_1}+\frac{1-\alpha}{x_2}$$ -Note that the relation between the harmonic mean and arithmetic mean is just a special case (take $\alpha = 0.5$). - -REPLY [7 votes]: Let $x,y$ be positive and $a+b=1$ with $a,b\in [0,1].$ We will use the fact that $$a^2+b^2=(a+b)^2-2 a b=1-2 a b.$$ Eliminating the denominators, we have $$ a/x+b/y\geq 1/(a x+b y)$$ $$\iff (a x+ b y)(a y+b x)\geq x y$$ $$\iff (a^2+b^2)x y +a b(x^2+y^2)\geq x y $$ $$\iff (1-2 a b)x y +a b (x^2+y^2)\geq x y$$ $$ \iff -2 a bx y+a b(x^2+y^2)\geq 0$$ $$ \iff a b (x-y)^2\geq 0.$$<|endoftext|> -TITLE: If $a^4 + 4^b$ is prime, then $a$ is odd and $b$ is even. -QUESTION [5 upvotes]: We say an integer $p>1$ is prime when its only positive divisors are $1$ and $p$. Let $a$ and $b$ be natural numbers not both $1$. Prove that if $a^4+4^b$ is prime, then $a$ is odd and $b$ is even. -I'm also given a hint: -Consider the expression $(x^2-2xy+2y^2)(x^2+2xy+2y^2)$. -What I have so far is: -Let $x=a^2, y=2^b$ and substitute it into the expression. -But I am not sure how to proceed. Can anyone help please? - -REPLY [3 votes]: I think that hint is terrible and only the geniuses in the class could get it right away. I put $(x^2 - 2xy + 2y^2)(x^2 + 2xy + 2y^2)$ into Wolfram Alpha and it gave me $x^4 + 4y^4$ as an "alternate form."
-But the problem says "$4^b$", not "$4y^4$." If you're a precocious genius you can right away see that if $b$ is odd then $$4^b = 4 \left(2^{\frac{2b - 2}{4}} \right)^4.$$ I'm not a genius, so it took me a while (three days, to be precise) to arrive at this. It was only then that the problem finally unraveled. Now we have $$a^4 + 4 \left(2^{\frac{2b - 2}{4}} \right)^4 = (a^2 - 2^{\frac{b + 1}{2}} a + 2^b)(a^2 + 2^{\frac{b + 1}{2}} a + 2^b).$$ -This means that with $b$ odd, the only way for the resulting number to be prime is for $(a^2 - 2^{\frac{b + 1}{2}} a + 2^b)$ to equal $1$ or $-1$ somehow. There are four possible solutions to that equation, and they're all ruled out with the definition of the problem (because $a$ and $b$ are natural numbers not both equal to $1$, and $0$ is or isn't a "natural number" -- but that's a whole other can of worms). Therefore, under the constraints of the problem, odd $b$ gives a composite number, so $b$ must be even in order for the expression to give a prime (this is a necessary but not sufficient condition, of course). -Let's have a concrete example: $a = 1$, $b = 3$. Then $a^4 + 4^b = 65$, which is obviously composite. But still take the time to run it through Sophie Germain's identity: $1 + 4^3 = 1 + 4 (2^1)^4 = (1 - 2 \times 1 \times 2 + 2 \times 2^2)(1 + 2 \times 1 \times 2 + 2 \times 2^2) = 5 \times 13$. -As for the parity of $a$, that, as Bob Happ said in a comment, "is the easy part here."
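-A short verification of the factorization used above for odd $b$ (my own sketch):
-def sophie_germain_factors(a, b):
-    # for odd b: a^4 + 4^b = (a^2 - 2^((b+1)//2) a + 2^b)(a^2 + 2^((b+1)//2) a + 2^b)
-    assert b % 2 == 1
-    m = 2 ** ((b + 1) // 2)
-    return a * a - m * a + 2 ** b, a * a + m * a + 2 ** b
-
-for a in range(1, 20):
-    for b in range(1, 20, 2):
-        p, q = sophie_germain_factors(a, b)
-        assert p * q == a ** 4 + 4 ** b
-print("factorization verified for a < 20 and odd b < 20")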
<|endoftext|> -TITLE: Non-associative commutative binary operation -QUESTION [33 upvotes]: Is there an example of a non-associative, commutative binary operation? -What about a non-associative, commutative binary operation with identity and inverses? -The only example of a non-associative binary operation I have in mind is the commutator/Lie bracket. But it is not commutative! - -REPLY [2 votes]: If $A$ is an abelian group, then the new operation $a * b = - (a+b)$ is commutative but is typically not associative. (This comes up naturally in the classical chord-tangent law on a cubic curve $C$ in the projective plane: if $a$ and $b$ are two points on the curve, then $a*b$ is the third point of intersection of the line through $a$ and $b$ with $C$.)<|endoftext|> -TITLE: Dealing with a difficult sum of binomial coefficients, $\sum_{l=0}^{n}\binom{n}{l}^{2}\sum_{j=0}^{2l-n}\binom{l}{j} $ -QUESTION [12 upvotes]: I am interested in finding an upper bound for the sum - $$F(n)= \sum_{l=0}^{n}\binom{n}{l}^{2}\;\sum_{j=0}^{2l-n}\binom{l}{j}$$ - -Ideally it should be possible to evaluate it exactly using some combinatorial identities or generating functions. Note that the $\binom{n}{l}$ term is squared, and that when $2l-n<0$, we define the inner sum to be zero. -Simple upper bound: By noticing that $2l-n\leq n$, we have that $$\sum_{j=0}^{2l-n}\binom{l}{j}\leq 2^n,$$ and so $$F(n)\leq 2^n \sum_{l=0}^{n}\binom{n}{l}^{2}=2^n\binom{2n}{n}.$$ (See this question for a proof that $\sum_{l=0}^{n}\binom{n}{l}^{2}=\binom{2n}{n}$.) Modifying the above based on when the inner sum is zero, that is when $2l<n$, we can replace this upper bound by $$F(n)\leq 2^{n-1}\binom{2n}{n}.$$ However it should be possible to do significantly better than this. - -REPLY [3 votes]: First time contributor to Stack Exchange, so pardon my typesetting. My proof could use some more rigor, so I’ll only sketch the idea. -From an integral representation of the exponential generating function for the numbers F(n), I was able to do classic saddle point analysis. I then used ‘de-Poissonization’ to extract an asymptotic formula for F(n). I will give a closed form for only the most important entity, A, since the other factors are an awful combination of derivatives of a complicated function that is evaluated at the saddle point. -$$ F(n) \approx C \frac{A^{n+1}}{n+1} \bigl( 1 - \frac{A\, B}{n+2} \bigr) $$ -$$ A=\bigl(\sqrt{y_0}+\sqrt{1+1/y_0}\bigr)^2 \, , y_0=\frac{1}{3}\bigl( -1 + \tau^{1/3} + \tau^{-1/3} \bigr) \, , -\tau=\frac{1}{2}(25+3\sqrt{69}) .$$ -Numerically, -$$ A=5.729031537980930838, B=4.1847320149263052, C=0.250939299689850514 .$$ -As an example, F(2000)=1.03227E1513 and the approximation is F*(2000)=1.0318E1513, so that about 4 significant digits are obtained. The proposer wanted a bound, and since the second term in the asymptotic series is negative, the first term is sufficient, for n large enough.<|endoftext|> -TITLE: Summation of factorials. -QUESTION [5 upvotes]: How do I go about summing this: -$$\sum_{r=1}^{n}r\cdot (r+1)!$$ -I know how to sum up $r\cdot r!$, but I am not able to do a similar thing with this. - -REPLY [3 votes]: Making use of: -$$r\cdot\left(r+1\right)!=\left(r+2-2\right)\cdot\left(r+1\right)!=\left(r+2\right)!-2\cdot\left(r+1\right)!$$ -we write the summation as: -$$\left[\left(n+2\right)!-2\cdot\left(n+1\right)!\right]+\left[\left(n+1\right)!-2\cdot n!\right]+\cdots+\left[4!-2\cdot 3!\right]+\left[3!-2\cdot 2!\right]$$ -leading to: -$$\sum_{r=1}^{n}r\cdot\left(r+1\right)!=\left(n+2\right)!-2-\sum_{r=1}^{n}\left(r+1\right)!$$ -So finding an expression for it is in essence the same as finding an expression for: $$\sum_{r=1}^{n}r!$$ -For this have a look here.<|endoftext|> -TITLE: Lipschitz function of independent sub-Gaussian random variables -QUESTION [9 upvotes]: If $X\sim \mathcal{N}(0,I)$ is a Gaussian random vector, then Lipschitz functions of $X$ are sub-Gaussian with variance parameter 1 by the Tsirelson-Ibragimov-Sudakov inequality (e.g. Theorem 8 here). -Suppose $X = (X_1,X_2,\ldots, X_n)$ consisted of independent sub-Gaussian random variables themselves, which are not normally distributed. Does the above property still hold? - -REPLY [6 votes]: Here are two options that may suit your needs. - -Concentration inequality for convex functions of bounded random variables. -If $X_1,...,X_n$ are independent taking values in $[0,1]$ and $f$ is quasi-convex, then \[P(f(X) > m+t) \le 2e^{-t^2/4}, \qquad P(f(X) < m - t) \le 2e^{-t^2/4}\] where $m$ is the median of $f(X)$. See Theorem 7.12 in the book Concentration Inequalities: A Nonasymptotic Theory of Independence -by Gábor Lugosi, Pascal Massart, and Stéphane Boucheron. It follows from the convex distance inequality due to Talagrand. -View $X_i$ as a function of a standard normal. If $X_i$ can be written as $\Phi(Z_i)$ where $Z_i$ is standard normal, then $f(X) = f\circ \Phi(Z)$ where $Z_1,...,Z_n$ are iid standard normal. Here, the multivariate function $\Phi:R^n\to R^n$ applies $\Phi$ on every coordinate. -Then the Tsirelson-Ibragimov-Sudakov inequality applies to $f\circ \Phi$, and the Lipschitz norm of $f\circ \Phi$ is at most $\|f\|_{Lip} \|\Phi\|_{Lip}$. Now, the question is whether $\|\Phi\|_{Lip}$ is bounded by an absolute constant (and, in particular, whether $\Phi$ is Lipschitz at all, otherwise $\|\Phi\|_{Lip}=+\infty$ and we do not get anything). -Inequality $\|\Phi\|_{Lip}<\infty$ requires assumptions on the distribution of the $X_i$; under suitable such assumptions one obtains, for every Lipschitz $f$, \[ P\bigl(|f(X) - \mathbb{E} f(X)| > t\bigr) \le 2 \exp(-\kappa c t^2) \] -for some absolute constant $c>0$.
This is Theorem 5.2.15 in the book High Dimensional Probability by Roman Vershynin.<|endoftext|> -TITLE: Sum over all permutations -QUESTION [8 upvotes]: Playing with another question, I got this equality by probabilistic considerations. I guess there is a simple proof, but I'm not strong at this... -Let $x_i >0$, $i=1,2 \cdots N$ -Then -$$ \frac{1}{\displaystyle \prod_{i=1}^N x_i}=\sum_{\sigma}\prod_{s=1}^N \frac{\displaystyle 1}{\displaystyle \sum_{i=1}^s x_{\sigma(i)}} $$ -where the sum is over all permutations of $\{ 1,2 \cdots N\}$ -(I suspect that the assumption $x_i > 0$ is not necessary) - -REPLY [5 votes]: Group the permutations into pairs with the same $\{\sigma(1),\sigma(2)\}$, and add their contributions. From the first two factors, you get $$\frac{x_{\sigma(1)}+x_{\sigma(2)}}{x_{\sigma(1)}x_{\sigma(2)}(x_{\sigma(1)}+x_{\sigma(2)})}$$ -and all the other factors are the same for that pair. So you can replace $\frac1{x_{\sigma(1)}}\frac1{x_{\sigma(1)}+x_{\sigma(2)}}$ in every permutation by $\frac1{2x_{\sigma(1)}x_{\sigma(2)}}$, and you won't change the overall sum. -Now group the permutations into sets of six with the same $\{\sigma(1),\sigma(2),\sigma(3)\}$. When you add their contributions, the first three factors simplify in the same way, so you get the same overall sum by replacing the first three factors by $\frac1{6x_{\sigma(1)}x_{\sigma(2)}x_{\sigma(3)}}$. -And so on.<|endoftext|> -TITLE: Malliavin derivative under change of measure -QUESTION [8 upvotes]: Let $\widetilde{B}$ be a Brownian Motion under the measure $\mathbb{P}$. -Let $\theta$ be a stochastic process fulfilling the Novikov's condition and $Z_\theta$ the relative Radon–Nikodym derivative for which it holds $\mathbb{Q}(d\omega)=\mathbb{P}(d\omega)Z_\theta$. Then by Girsanov, $B_t = \widetilde{B}_t + \int_0^t \theta_s ds$ is a Brownian Motion under $\mathbb{Q}$ . -Let -$$d\widetilde{F}_t=\mu(t,\widetilde{F}_t)dt+\sigma(t,\widetilde{F}_t)d\widetilde{B}_t$$ - be an Ito process with deterministic (but not constant) drift and diffusion, satisfying Malliavin derivability conditions. -Denote by $D^{\widetilde{B}}_s \widetilde{F}_t$ the Malliavin derivative at time $s$ of the stochastic process $\widetilde{F}$ at time $t$ with respect to the Brownian Motion $\widetilde{B}$. -Then the process $\widetilde{F}$ under change of measure is: -$$ dF_t=\big[\mu(t,F_t) - \theta_t\cdot\sigma(t,F_t) \big]dt+\sigma(t,F_t)dB_t $$ -Let $W$ be a Brownian Motion under $\mathbb{Q}$ correlated with $B$, correlation coefficient being $\rho$. -One wishes to compute $D^{{W}}_s {F}_t$, having close form expression for $D^{\widetilde{B}}_s \widetilde{F}_t$. -$\cdot$ Does an explicit relation between $D^{\widetilde{B}}_s \widetilde{F}_t$ and $D^{{W}}_s {F}_t$ exist? More specifically, can we express $D^{{W}}_s {F}_t$ as a function of $D^{\widetilde{B}}_s \widetilde{F}_t$ ? -(Where indeed $D^{{W}}_s {F}_t$ is the Malliavin derivative of the process $\widetilde{F}$ expressed under $\mathbb{Q}$, with respect to $W$). -My guess is that it should depend on the Malliavin derivative of $\theta$, but with respect to what Brownian Motion? I have spent a week on this but did not manage to arrive at satisfying results. I tried to leverage the Clark–Ocone Formula under Change of Measure and Malliavin derivative operator under -change of variable. The second seems to be particularly close to what I am looking for. 
However its notation is not clear to me (with respect to which BM the Malliavin derivatives are taken and under which measure the random variable are expressed) and was not able to actually use the result. -EDIT: In my particular case an Ito process with deterministic drift and diffusion is considered. It would be nice to have such result for a general Malliavin-derivable stochastic process though. - -REPLY [2 votes]: (I repost here my partial answer from mathoverflow, https://mathoverflow.net/questions/229044/malliavin-derivative-under-change-of-measure/229874#229874) -Here an answer for the case with determinist drift as mentioned in the edit. (Note: I fail to see why to use different notations for $F$ and $\tilde{F}$ as it is the same process) -As -$$ dF_t = \mu_t \, dt + \sigma_t d\tilde{B}_t$$ -we have -$$ D_s^{\tilde{B}}F_t = \sigma_s$$. -Furthermore, writing $B_t = \rho W_t + \sqrt{1-\rho^2} W_t^\perp$ for $W^\perp$ a Brownian motion orthogonal to $W$ and noting that $D_s^{W} W^\perp_t =0$, we have from -$$ dF_t = \bigl(\mu_t - \theta_t \sigma_t\bigr) \, dt + \sigma_t dB_t = \bigl(\mu_t - \theta_t \sigma_t\bigr) \, dt + \rho \sigma_t dB_t + \sqrt{1-\rho^2}\sigma_t dW^\perp_t$$ -that -$$D^W_sF_t = \rho\sigma_s - \int_s^t \sigma_u D_s^W\theta_u \, du$$ -and can thus conclude -$$D^\tilde{B}_sF_t = \frac{D^W_sF_t + \int_s^t \sigma_u D_s^W\theta_u \, du}{\rho}$$.<|endoftext|> -TITLE: $f \in C^1$ defined on a compact set $K$ is Lipschitz? -QUESTION [5 upvotes]: Let $f: \Omega \subseteq \mathbb{R}^N \to \mathbb{R}^M$ be $C^1$, and $K \subseteq \Omega$. Prove that $f \mid_K$ is Lipschitz. - -Letting $x,y \in K$, I know that $f$ is locally Lipschitz, I thought about taking a finite subcover of $K$ by the open cover of the balls which makes $f$ locally Lipschitz, and then take a sequence of $x_k, k=0,\dots, n$ in such that $x_0 =x$ and $x_n=y$, $x_k =y$ and $(x_i)$ are aligned, then try to use $|f(x)- f(y)| \le \sum_{i=1}^{n}|f(x_{i=1})- f(x_i)|$. But the very first problem is to take this sequence as $K$ is not necessary convex. -Again, if $K$ was convex I would make use of the Mean Value Inequality and that $K$ is compact to limit the value of $||Df_{p}||$, but as it is not the case I don't know how to proceed. -So how can I avoid that? Does it hold for a general $K$ compact? -Finally, I am aware of this question, but his question, at the end, was only concerning the continuity, which I already know how to deal with there was no useful answer, nor comment for me, I apologise if I acted wrong by opening a new one. - -REPLY [6 votes]: I assume that $\Omega$ shall be open. -In my opinion, it's more convenient to not bother with open coverings of $K$, though that works too. -For every $\eta > 0$, we can look at -$$L^{(1)}_{\eta} := \sup \biggl\{ \frac{\lVert f(x) - f(y)\rVert}{\lVert x-y\rVert} : (x,y) \in K\times K \setminus \Delta_{\eta}\biggr\},\tag{$\ast$}$$ -where $\Delta_{\eta} = \{ (x,y) \in K\times K : \lVert x-y\rVert < \eta\}$. Since $K\times K \setminus \Delta_{\eta}$ is compact, $L^{(1)}_{\eta} < +\infty$. -To establish Lipschitz continuity, we now need only look at $\Delta_{\eta}$. Now, if we choose $\eta$ small enough, we can happily ignore all problems arising from non-convexity, since then the straight line connecting $x,y$ with $(x,y) \in \Delta_{\eta}$ stays inside a compact subset of $\Omega$, where the derivative of $f$ is uniformly bounded and hence we have Lipschitz continuity there. 
-Since $\Omega$ is open and $K\subset \Omega$ compact, there is an $\varepsilon > 0$ such that -$$B_{\varepsilon}(K) = \{ x\in \mathbb{R}^N : \operatorname{dist}(x,K) < \varepsilon\} \subset \Omega.$$ -Then $\overline{B_{\eta}(K)} \subset \Omega$ for all $\eta < \varepsilon$, and $\overline{B_{\eta}(K)}$ is compact. With a bound $L^{(2)}_{\eta}$ of $\lVert Df\rVert$ on $\overline{B_{\eta}(K)}$, the mean value theorem yields -$$\lVert f(x) - f(y)\rVert \leqslant L^{(2)}_{\eta}\cdot \lVert x-y\rVert$$ -for $x,y\in K$ with $\lVert x-y\rVert \leqslant \eta$, and together with $(\ast)$ we deduce that $f\lvert_K$ is Lipschitz continuous with Lipschitz constant $L \leqslant \max \{ L^{(1)}_{\eta}, L^{(2)}_{\eta}\}$.<|endoftext|> -TITLE: When is inverting a matrix numerically unstable? -QUESTION [10 upvotes]: What does numerically unstable mean when inverting a matrix and what are the mathematical conditions that cause this problem to arise? - -REPLY [5 votes]: Numerical stability for linear algebra operations is usually associated with the matrix's condition number. A way of estimating the condition number is the ratio of the largest eigenvalue to the smallest eigenvalue, or the largest singular value to the smallest singular value. -What this tells you is the relative scale of the matrix. When doing Gaussian elimination, you will be dividing by some of these numbers. So if you ever divide a large number by a really small number, not only does the result get large, but you will have amplified the error as well. Thus the condition number is related to error estimates, with the larger condition number being less numerically "good".<|endoftext|> -TITLE: Limit $\lim_{x \to 1} \frac{\log{x}}{x-1}$ without L'Hôpital -QUESTION [6 upvotes]: I was wondering if it's possible to identify this limit without using L'Hôpital's Rule: -$$\lim_{x \to 1} \frac{\log{x}}{x-1}$$ - -REPLY [9 votes]: In THIS ANSWER and THIS ONE I showed, without the use of calculus, that the logarithm function satisfies the inequalities -$$\frac{x-1}{x}\le \log(x)\le x-1$$ -Therefore, we can write for $x>1$ -$$\frac1x \le \frac{\log(x)}{x-1}\le 1$$ -and for $x<1$ -$$1 \le \frac{\log(x)}{x-1}\le \frac1x$$ -whereupon applying the squeeze theorem yields the result $1$.<|endoftext|> -TITLE: Are the eigenvalues of the sum of two positive definite matrices increased? -QUESTION [9 upvotes]: Let $A$ and $B$ be two $n \times n$ (symmetric) positive definite matrices, and denote the $k$th smallest eigenvalue of a general $n \times n$ matrix by $\lambda_k(X)$, $k = 1, 2, \ldots, n$ so that $$\lambda_1(X) \leq \lambda_2(X) \leq \cdots \leq \lambda_n(X).$$ I guess the following relation holds: -$$\lambda_k(A + B) > \max\{\lambda_k(A), \lambda_k(B)\}, \; k = 1, 2, \ldots, n.$$ -This looks intuitive but I have difficulty to prove it, any hints? - -REPLY [13 votes]: For symmetric matrices you have the Courant-Fischer min-max Theorem: -$$\lambda_k(A) = \min \{ \max \{ R_A(x) \mid x \in U \text{ and } x \neq 0 \} \mid \dim(U)=k \}$$ -with -$$R_A(x) = \frac{(Ax, x)}{(x,x)}.$$ -Now, your assertion follows easily, since $R_{A+B}(x) > \max\{R_A(x), R_B(x)\}$. -This theorem is also helpful to prove other nice properties of the eigenvalues of symmetric matrices. 
For example: -\begin{equation*} - \lambda_k(A) + \lambda_1(B) \le \lambda_k(A+B) \le \lambda_k(A) + \lambda_n(B) -\end{equation*} -This shows the continuous dependence of the Eigenvalues on the entries of the matrix, and also your assertion.<|endoftext|> -TITLE: Prove that $\frac{2^{122}+1}{5}$ is a composite number -QUESTION [11 upvotes]: As in the title. It's very easy to show that $5|2^{122}+1$, but what should I do next to show that $\frac{2^{122}+1}{5}$ is a composite number? I'm looking for hints. - -REPLY [5 votes]: $$1+2^{122}=\left(1+2^{61}\right)^2-\left(2^{31}\right)^2$$ -$$=\left(1+2^{61}+2^{31}\right)\left(1+2^{61}-2^{31}\right)$$ -Both factors are larger than $5$, also $$2^{122}\equiv \left(2^4\right)^{30}\cdot 2^2\equiv (1)^{30}\cdot 2^2\equiv -1\pmod{5},$$ so your result follows. -More generally, the following is called the Sophie-Germain identity (for all $a,b\in\mathbb R$): -$$a^4+4b^4=\left(a^2+2b^2\right)^2-\left(2ab\right)^2$$ -$$=\left(a^2+2b^2+2ab\right)\left(a^2+2b^2-2ab\right)$$ -The following factorization is a Aurifeuillean factorization (for all $k\in\mathbb R$): -$$2^{4k+2}+1=\left(2^{2k+1}-2^{k+1}+1\right)\left(2^{2k+1}+2^{k+1}+1\right)$$ -Some other Aurifeuillean factorizations (not related to the Sophie-Germain identity): -$$3^{6k+3}+1=\left(3^{2k+1}+1\right)\left(3^{2k+1}-3^{k+1}+1\right)\left(3^{2k+1}+3^{k+1}+1\right)$$ -$$5^{10k+5}-1=\left(5^{2k+1}-1\right)\left(5^{4k+2}+5^{3k+2}+3\cdot 5^{2k+1}+5^{k+1}+1\right)\times$$ -$$\times \left(5^{4k+2}-5^{3k+2}+3\cdot 5^{2k+1}-5^{k+1}+1\right)$$ -$$6^{12k+6}+1=\left(6^{4k+2}+1\right)\left(6^{4k+2}+6^{3k+2}+3\cdot 6^{2k+1}+6^{k+1}+1\right)\times$$ -$$\times \left(6^{4k+2}-6^{3k+2}+3\cdot 6^{2k+1}-6^{k+1}+1\right)$$ -There are a lot more of these on the Wikipedia page about Aurifeuillean factorization.<|endoftext|> -TITLE: Limit problem $\ln(x)$ and $1^\infty$ -QUESTION [5 upvotes]: Can anyone help me with this limit problem without L'Hopital rule and Taylor series? -$$\lim_{x\rightarrow\ 1}\left(\frac{2^x+2}{3^x+1}\right)^{1/\ln(x)}$$ - -REPLY [3 votes]: $$\frac{2^x+2}{3^x+1}=\left(1+(\frac{2^x+2}{3^x+1}-1)\right)=1+\frac{2^x-3^3+1}{3^x+1}$$ Puting $\frac{2^x-3^3+1}{3^x+1}=h$ one gets $$\left(\frac{2^x+2}{3^x+1}\right)^{\frac{1}{\ln x}}=(1+h)^{\frac{1}{\ln x}}=(1+h)^{{\frac{h}{h\ln x}}}$$ -$$\lim_{x\rightarrow\ 1}\left(\frac{2^x+2}{3^x+1}\right)^{1/\ln(x)}=\lim_{h\rightarrow\ 0}\left((1+h)^{\frac 1h}\right)^{{\frac{h}{\ln x}}}$$ -Now $$\frac{h}{\ln x}=\frac{2^x-3^x+1}{(3^x+1)\ln x}=\frac{1}{3^x+1}\cdot\frac{2^x-3^x+3-2}{\ln x}$$ it follows -$$\left(\frac{1}{3^x+1}\right) - \left(\frac{2(2^{x-1}-1)-3(3^{x-1}-1)}{\ln x}\right)$$ On the other hand, it is known that $$\lim_{x\rightarrow 1}\frac{a^{x-1}-1}{\ln x}=\ln a$$ Hence -$$\lim_{x\rightarrow 1}\left(\frac{1}{3^x+1}\right) - \left(\frac{2(2^{x-1}-1)-3(3^{x-1}-1)}{\ln x}\right)=\frac 14(2\ln 2-3\ln 3)$$ -Hence we have as limit $$e^{-\frac 14 (\ln 27-\ln 4)}=e^{-\ln \sqrt[4]{\ln 27-\ln 4}}$$ i.e. our limit is equal to $$\color{red}{\sqrt[4]{\frac{4}{27}}}$$<|endoftext|> -TITLE: If $G$ is a finite group s.t. $|G|=4$, is it abelian ? -QUESTION [5 upvotes]: If $G$ is a finite group s.t. $|G|=4$, is it abelian ? To me it's isomorphic to $\mathbb Z/2\mathbb Z\times \mathbb Z/2\mathbb Z$ or $\mathbb Z/4\mathbb Z$, but a friend of me said that they are not the only group of order 4, and there exist some non-abelian. Do you have such example ? 
- -REPLY [2 votes]: Without relying on orders of an element or anything else, you can prove this from the ground up by applying the sheer definition of a group. -Assume that $G$ is a group of order $4$, say $G=\{e,a,b,c\}$, where $e$ denotes the identity element and all elements are different. Assume that $G$ is not abelian, then (maybe after renaming the elements) we can assume that $ab \neq ba$, that is $a$ and $b$ do not commute. Now consider the element $ab$ which must be in $G$, since the set is closed under group multiplication. -If $ab=e$, then $a=b^{-1}$, and hence $a$ and $b$ commute, which is not the case. -If $ab=a$, then $b=e$, which is not the case. -If $ab=b$, then $a=e$, which is not the case. -Hence $ab=c$. But also $ba \in G$. It cannot be equal to $ab$ by assumption. If $ba=e$, then again $a=b^{-1}$, which is not the case. -If $ba=a$, then $b=e$, which is not the case. -If $ba=b$, then $a=e$, which is the final contradiction. -So after all, $G$ must be abelian. -Note with a similar slightly more subtle reasoning you can prove that a group of order $5$ must be abelian. Try it, or see here.<|endoftext|> -TITLE: If $G$ is a direct product of simple groups, then is every simple subgroup of $G$ isomorphic to a subgroup of some factor? -QUESTION [5 upvotes]: Let $G=N_1\times N_2\dots \times N_n$. Suppose that $H$ is a simple subgroup of $G$. Is $H$ isomorphic to a subgroup of $N_i$, for some $N_i$? -This is a weaker version of this question, which turned out to be false: -If $G$ is direct product of simple normal subgroups, then is every simple subgroup isomorphic to some factor - -REPLY [4 votes]: Let $\pi_i:G\to N_i$ be the projection onto the $i$th factor. Then $\pi_i(H)\neq \{1\}$ for some $i$ and, for this $i$, we must have $\ker\pi_i=\{1\}$. Hence, $H$ is isomorphic to a subgroup of $N_i$. -This is false if you require equality. Take $G=A_6\times A_6$ and $A_5\cong H=\Delta(A_5)\leq G$, where $\Delta(A_5)=\{(\sigma,\sigma)\mid \sigma\in A_5\}$.<|endoftext|> -TITLE: How to calculate the shortest interval, for $P ( X ≤ 1 . 645) = 0 . 95$? -QUESTION [5 upvotes]: The problem statement said: - -Based on the fact that $\Phi(1 . 645) = 0 . 95$ find an interval in which $X$ - will fall with $95\%$ probability. - -Therefore: -Since $P ( X ≤ 1 . 645) = 0 . 95, ( -∞ , 1 . 645)$ is a $95\%$ confidence interval for $X$ -The question I have problem to understand is: - -Among all possible intervals into which $X$ falls with $95\%$ probability, - find the shortest one. - -How can I compute or see which is the shortest interval? -Thanks! - -REPLY [3 votes]: Because the normal curve has a bell shape, there is more area under it near the average. It appears that we are dealing with a standard normal distribution, so you want to solve for little $x$ -Using the CDF, -\begin{align*} -.95 &= P(X\leq x) -P(X\leq -x) \\ -&=P(X\leq x)-[1-P(X\leq x)]\tag{1}\\ -&=2P(X\leq x)-1\\ -&=2\Phi(x)-1 -\end{align*} -where in $(1)$ I recognize symmetry. -This gives -$$\Phi(x)=\frac{.95+1}{2}\implies x = \Phi^{-1}\left(0.975\right) = 1.959964$$ -So the interval is $(-1.959964,1.959964)$. - -Alternatively, using the survival gives -\begin{align*} -.95 &= P(X>-x)-P(X>x)\\ -&= P(X\leq x)-[1-P(X\leq x)]\tag 2\\ -&=2P(X\leq x) - 1 -\end{align*} -where in $(2)$ I recognize symmetry for the value $P(X>-x)$. -This results in the same interval.<|endoftext|> -TITLE: Find a (simple?) counterexample to this statement about topological manifolds. 
-QUESTION [5 upvotes]: Let us assume by a topological manifold $M$ of dimension $n$ I mean a Hausdorff topological space that is locally homeomorphic to $\mathbf{R}^n$, where $n$ is fixed. -I know that if $M$ is assumed separable and paracompact, then $M$ admits an exhaustion by compact sets. -Is this still true if $M$ is only assumed separable, and not necessarily paracompact? -Thanks. - -REPLY [4 votes]: According to this MathOverflow answer, there are even non-normal separable complex manifolds; it gives a reference, and a comment below it notes that the version of the Prüfer surface mentioned in the cited reference is also separable and non-normal. And M. E. Rudin and P. Zenor, A perfectly normal non­-metrizable manifold, Houston.J. Math. $2$ ($1976$), $129­$-$134$, constructs a perfectly normal, hereditarily separable, non-metrizable manifold assuming $\mathsf{CH}$. If any of these admitted an exhaustion by compact sets, it would be second countable: each compact set is covered by finitely many charts, and each chart is second countable. But then it would be metrizable by the Uryson metrization theorem.<|endoftext|> -TITLE: Exists a uniformly convex norm on Banach space satisfying certain condition? -QUESTION [8 upvotes]: Let $E$ be a Banach space with norm $\|\cdot\|$. Assume that there exists on $E$ an equivalent norm, denoted by $|\cdot|$, that is uniformly convex. Given any $k > 1$, does there exist a uniformly convex norm $\|\cdot\|'$ on $E$ such that$$\|x\| \le \|x\|' \le k\|x\| \text{ for all }x \in E?$$Progress. Maybe we could set$$\|x\|'^2 = \|x\|^2 + \alpha|x|^2$$with $\alpha > 0$ small enough? I am not sure what to do from there on out though. - -REPLY [3 votes]: Lemma (Exercise 3.29, Brezis). Let $E$ be a normed vector space with a uniformly convex norm and fix $p > 1$. If $x$, $y \in \overline{N(0, M)} =: B$ are at least $\epsilon > 0$ apart, then there is some $\delta$ such that$$\left\|{{x + y}\over2}\right\|^p \le {{\|x\|^p + \|y\|^p}\over2} - \delta.$$ - -Suppose not for the sake of contradiction. Then there are sequences $(x_n)$ and $(y_n)$ in $B$ such that$$\|x_ - y_n\| > {{\|x\|^p + \|y\|^p}\over2} - {1\over n},$$and we may suppose that $|x_n| \to a$ and $y_n| \to b$ by compactness of $[0, M]$. Since$$a + b > \epsilon_0 \text{ and }{{a^p + b^p}\over2} \le \left({{a + b}\over2}\right)^p,$$we have that $a = b$. Let$$x_n' = {{x_n}\over{\|x_n\|}} \text{ and }y_n' = {{y_n}\over{\|y_n\|}}$$for which$${\epsilon\over a} + \text{o}(1) < \|x_n' - y_n'\| < 1 - \delta.$$Therefore,$$a^p + \text{o}(1) \le \left\|{{x_n + y_n}\over2}\right\|^p \le a^p(1 - \delta)^p + \text{o}(1),$$which is a contradiction. - -Original Problem (Exercise 3.30, Brezis). If $E$ is a normed vector space that admits an equivalent uniform norm $|\cdot|_1$, then there is a uniformly convex norm $|\cdot|_2$ such that$$\|x\| \le |x|_2 \le k\|x\|$$for $k$ arbitrarily close to $1$. - -Let$$|x|_2^2 = \|x\|^2 = \alpha|x_1|^2,$$then by the lemma,$$\|x\|^2 \le |x|_2 \le (1 + \alpha C^2)\|x\|^2,$$which can be made arbitrarily small and for any $x$, $y \in \overline{B(0, 1)}$ with respect to the new norm, then for any $|x - y|_2 > \epsilon$,$$\left|{{x + y}\over2}\right|_2^2 \le \left\|{{x + y}\over2}\right\|^2 + \left|{{x + y}\over2}\right|^2 \le \left\|{{x + y}\over2}\right\|^2 + {{|x|^2 + |y|^2}\over2} - \delta.$$<|endoftext|> -TITLE: Prove that every $\mathbb{Z}/6\mathbb{Z}$-module is projective and injective. Find a $\mathbb{Z}/4\mathbb{Z}$-module that is neither. 
-QUESTION [8 upvotes]: I want to show that every $\mathbb{Z}/6\mathbb{Z}$-module is a direct sum of projective modules. -As abelian group, $\mathbb{Z}/6\mathbb{Z}\cong\mathbb{Z}/2\mathbb{Z}\oplus \mathbb{Z}/3\mathbb{Z}$, but is it the direct sum of $\mathbb{Z}/2\mathbb{Z}$ and $\mathbb{Z}/3\mathbb{Z}$ as a $\mathbb{Z}/6\mathbb{Z}$-module? -Also, I know that free modules are projective, but does a module free over $\mathbb{Z}/2\mathbb{Z}$ implies that it is a free module over $\mathbb{Z}/6\mathbb{Z}$? -And is it really true that every $\mathbb{Z}/6\mathbb{Z}$-module is a direct sum of free modules? -Also, I don't know how to prove the statement for $\mathbb{Z}/4\mathbb{Z}$. - -REPLY [4 votes]: Note that $2\mathbb{Z}/6\mathbb{Z}$ and $3\mathbb{Z}/6\mathbb{Z}$ are ideals of the ring $\mathbb{Z}/6\mathbb{Z}$; they are evidently minimal ideals, their intersection is trivial and their sum is the whole ring. So $\mathbb{Z}/6\mathbb{Z}$ is a direct sum of simple submodules, hence semisimple (and artinian). Therefore every module is projective and injective. -It is false that every module over $\mathbb{Z}/6\mathbb{Z}$ is free (or a direct sum of free modules, which is the same), because the ring has two non isomorphic simple modules. -The ring $\mathbb{Z}/4\mathbb{Z}$ has a single minimal ideal (which is proper), namely $2\mathbb{Z}/4\mathbb{Z}$, so it is not semisimple. This ideal, as a module, is neither injective (it is not a direct summand) nor projective because the obvious map $\mathbb{Z}/4\mathbb{Z}\to 2\mathbb{Z}/4\mathbb{Z}$ doesn't split.<|endoftext|> -TITLE: Use of directed sets in the definition of nets in topology -QUESTION [7 upvotes]: In topology, we use nets instead of sequences. The motivation is quite natural since the sequence is not "long" enough if the neighborhoods of some point "separate" too much. -What I am confused about is the concept of directed set, is the only reason why we need the set to be directed because we want the definition of convergence of net to be uniform with that of sequence? Or there is another reason, thanks - -REPLY [6 votes]: If $(x_d)_{d\in D}$ is a net in $X$ with $(D,\preceq)$ a directed preorder, $(x_d)$ converges to $x$ iff it's eventually in every neighborhood of $x$, where "eventually" means "from some $\underline{d}$ on": for every neighborhood $U$ of $x$, there is some $\underline{d}\in D$ such that for all $d \succeq \underline{d}$, $x_d \in U$. -The neighborhood system of a point $x$ in a space $X$ is downward directed by $\subseteq$: if $U,V$ are neighborhoods of $x$, then there is a neighborhood $W$ of $x$ such that $W\subseteq U$ and $W\subseteq V$ — namely, $W = U\cap V$. -We need to be able to argue that for any sets $A$ and $B$, if $(x_d)$ is eventually in $A$ and $(x_d)$ is eventually in $B$, then $(x_d)$ is eventually in $A\cap B$. Requiring $(D,\preceq)$ to be directed guarantees that we can. -If nets were defined on non-directed preorders, then "limits" wouldn't necessarily be unique, even in 'nice' (e.g. Hausdorff) spaces. For example, suppose $D = $ the infinite binary tree — all finite sequences of $0$s and $1$s, ordered by inclusion, i.e. extension. $(D, \subseteq)$ is very non-directed: any two incomparable finite sequences in $D$ differ at some index, so they have no common extension. 
-Suppose $(x_s)_{s\in D}$ is the function $D\to\Bbb R$ such that -$$ -x_s = \begin{cases} -3 &\text{if $s_0 = 0$,}\\ -5 &\text{if $s_0 = 1$,}\\ -0 \text{ ($\text{arbitrarily}$)} &\text{if $s$ is the empty sequence.}\\ -\end{cases}$$ -Then $(x_s)$ is eventually in every neighborhood of $3$ (for every extension $s$ of $\underline{s} = \langle 0 \rangle$, $x_s$ is in every neighborhood of $3$), and it's eventually in every neighborhood of $5$ (take $\underline{s} = \langle 1 \rangle$).<|endoftext|> -TITLE: Fixed point property of spaces having same homotopy type -QUESTION [6 upvotes]: Suppose X and Y have same homotopy type.X as a topological space has fixed point property.Can we conclude anything about fixed point property of Y? - -REPLY [10 votes]: No. For example, $X$ could be a point. Then you're asking whether a contractible space $Y$ has the fixed point property. This isn't true in general (although the Brouwer fixed point theorem is a weaker result along these lines): for example, $Y = \mathbb{R}$ doesn't have the fixed point property. -More generally, if $X$ is any space, then $Y = X \times \mathbb{R}$ is a homotopy equivalent space which doesn't have the fixed point property. This suggests that at the very least we should take $X$ and $Y$ to both be compact. Unfortunately, this is also not enough. -More can be said with more restrictions. For example, the Lefschetz fixed point theorem can be used to show that a compact triangulable space which is homotopy equivalent to $\mathbb{CP}^{2n}$ has the fixed point property (in fact it suffices that it have the same cohomology ring).<|endoftext|> -TITLE: Hahn-Banach Theorem for separable spaces without Zorn's Lemma -QUESTION [14 upvotes]: I was reading about the Hahn-Banach Theorem, its many versions and their proofs. It's known that in the proofs we need Zorn's Lemma. But in the book that I'm reading, the author said if $X$ is a separable space then it's possible to prove the Hahn-Banach Theorem without the Zorn's Lemma. -How can we show there is a suitable extension of a linear functional without Zorn's lemma? - -REPLY [15 votes]: Note: I completely rewrote my previous attempt - posted in another (now deleted) answer, since the previous version was incorrect. Thanks to Asaf Karagila for pointing out the problem with my previous proof. And also to Eric Wofsey for simplifying some steps in the proof. -Let us hope that this time I have avoided mistakes. The formulation of the version of HBT which I am trying to prove in ZF is adapted from the claims mentioned in this paper. Some remarks explaining what this paper proves about the relation between ZF, AC and HBT for separable spaces can be found below. - -As the OP wrote, there are several formulations of Hahn-Banach Theorem. So maybe it is good to start by clearly stating this result. - -Hahn-Banach Theorem. Let $X$ be a vector space and let $p:X\to{\mathbb R}$ be any sublinear function. - Let $M$ be a vector subspace of $X$ and let $f:M\to{\mathbb R}$ be a linear functional dominated by $p$ on $M$. - Then there is a linear extension $\widehat{f}$ of $f$ to $X$ that is dominated by $p$ on $X$. - -The formulation that $f$ is dominated by $p$ on $M$ means that $(\forall x\in M) f(x)\le p(x)$. -This is basically the usual formulation of Hahn-Banach Theorem; although you can find many slight variations. -We will try to prove this in ZF: - -Hahn-Banach Theorem in ZF. Let $X$ be a separable topological vector space and let $p:X\to{\mathbb R}$ be a continuous sublinear function. 
- Let $M$ be a vector subspace of $X$ and let $f:M\to{\mathbb R}$ be a linear functional dominated by $p$ on $M$. - Then there is a linear extension $\widehat{f}$ of $f$ to $X$ that is dominated by $p$ on $X$. - -You may notice that this implies that every bounded functional on a subspace $M$ of a separable normed space $X$ can be extended to a functional on $X$ with the same norm. This is is the version of HBT stated here. -Notice a few changes in the formulation of the theorem. Since we want to speak about separable spaces, we need some kind of topology. So it is not that surprising that we work now with topological vector spaces. Maybe it is a little bit surprising that we require $p$ to be continuous. (On the other hand, if we want to use separability, then it is perhaps natural that we will probably use somewhere in the proof that $p$ behaves reasonable w.r.t. the topological structure.) But let us try to prove this version first. We will return to the question, whether continuity of $p$ is needed, later. -The standard proof of HBT uses as one of the steps the following fact: - -Lemma. Let $X$ be a vector space and let $p:X\to{\mathbb R}$ be a sublinear function. - Let $M$ be a vector subspace of $X$ and let $f:M\to{\mathbb R}$ be a linear functional dominated by $p$ on $M$. - Let $x\in X$ and let $c\in\mathbb R$ be a number such that - $$\sup_{y\in M} [f(y)-p(y-{x})] \le c \le \inf_{y\in M} [p(y+{x})-f(y)].$$ -Then there exists a linear function $\overline f \colon [M\cup\{x\}]\to \mathbb R$ which extends $f$, it is dominated by $p$ on $[M\cup\{x\}]$ and $$f(x)=c.$$ - -Now we can prove this version of HBT in the following steps: - -Prove the above lemma. And also check that under the assumptions of the lemma -$$\sup_{y\in M} [f(y)-p(y-x)] \le \inf_{y\in M} [p(y+{x})-f(y)].$$ -which means that we have at least one possible choice of $c$ in such situation. -If $X$ is separable and $\{x_n; n\in\mathbb N\}$ is a countable dense subset of $X$, then we can prove using induction and the above lemma that there exists a linear functional $f_n$ defined on $A_n=[M\cup\{x_1,\dots,x_n\}]$ which agrees with $f$ on $M$ and is dominated by $p$ on $A_n$. Moreover, each $f_n$ extends $f_{n-1}$. -Notice that in the induction step of this proof we choose some value of $c$ from a non-empty interval. Since the interval is closed, we can simply take the left endpoint at each step of the induction. So we do not need Axiom of Choice here. -The above gives us a new linear functional $g$ defined on a dense subspace $A=\bigcup A_n$. (Given simply by $g=\bigcup f_n$, i.e., $g(x)=f_n(x)$ if $x\in A_n$.) Moreover, $g$ is dominated by $p$ on $A$ and $g$ extends $f$. -Now it only remains to get from the function defined on the set $A$ the extension to the $\overline A=X$. I.e., we want to show get a function $\widehat f \colon X \to \mathbb R$ which is also linear, dominated by $p$ and fulfills $\widehat f|_M=f$. If we have $x\in X$ then there exists a sequence $(a_n)$ of points of $A$ such that $a_n\to x$. We define $$\widehat f(x):=\lim\limits_{n\to\infty} g(a_n).$$ -If we can show that this indeed defines a function (i.e., that the above limit exists and that the value of $\widehat f(x)$ does not depend on the choice of the sequence $(a_n)$), then proving that $\widehat f$ is linear, extends $f$ and is dominated by $p$ is more-or-less straightforward. (Notice that here we will have to use continuity of $p$.) - -Let us have a closer look at a more detailed proof of existence and uniqueness of the above limit. 
(Since this is precisely the place where my previous attempt failed.) -It is worth pointing out that we repeatedly use continuity of $p$. To be more precise, we use $p(a_n)\to p(x)$. -We will use rather simple observation that $g(x)=-g(-x)\ge -p(-x)$, so we have -$$-p(-x)\le g(x) \le p(x)$$ -for any $x\in A$. -Now if we have a sequence $a_n\to x$, we can use the inequality -$$-p(a_n-a_m) \le g(a_m-a_n) \le p(a_m-a_n)$$ -to show that the sequence $(g(a_n))$ is Cauchy sequence in $\mathbb R$, and therefore it has a limit. Similarly if we have $a_n\to a$ and $b_n\to a$, we can use -$$-p(b_n-a_n)\le g(a_n-b_n) \le p(a_n-b_n)$$ -to show that $\lim g(a_n)=\lim g(b_n)$. -Putting all things mentioned above together we get the proof of the above result. - -Would it be possible to prove stronger result - to omit condition that $p$ is continuous and just leave the sublinearity? The answer is no, which shows that these problems might be a bit more subtle than they could appear on the first glance. It is shown in the paper Juliette Dodu and Marianne Morillon: The Hahn-Banach Property and the Axiom of Choice (Mathematical Logic Quarterly, Volume 45, Issue 3, pages 299–314, 1999, DOI: 10.1002/malq.19990450303) that if every separable normed space satisfies the Hahn-Banach property (i.e., if the above theorem holds without the assumption that the sublinear function $p$ is continuous), then there are non-trivial finitely-additive measures on the set of positive integers. Therefore this result cannot be shown in ZF. (See Theorem 6 and Corollary 4 in Section 9 of this paper for details.) -The authors also say the following: - -If a topological vector space $E$ has a dense subset which is well-orderable, then $E$ satisfies the continuous Hahn-Banach property, and the proof relies on some transfinite recursion and the following classical lemma... In particular, separable norme spaces satisfy the continuous Hahn-Banach property. - -The "classical lemma" mentioned there is the lemma I formulated above. And if the dense subset is countable, then this is the above claim (for which I tried at least to sketch a proof). - -If $X$ is not separable, then the argument above cannot be used to get a function $\widehat f$ defined on the whole space $X$. But the usual proof of Hahn-Banach Theorem is along the same lines, we just use Zorn's Lemma or transfinite induction instead of mathematical induction. (Zorn's lemma is known to be equivalent to the Axiom of Choice. The proof based on transfinite induction uses AC too, since we start by choosing some well-ordering of $X$. Well-ordering theorem is equivalent to AC.) -The only step in which the above proof and the usual proof of HBT differ substantially is the extensions from a dense subspace to the whole space. This is the point of the proof where we used continuity of the sublinear function $p$. (And this condition is not needed if AC is available.)<|endoftext|> -TITLE: Why is a linear transformation called linear? -QUESTION [38 upvotes]: $T(av_1 + bv_2) = aT(v_1) + bT(v_2)$ -Why is this called linear? $f(x) =ax + b$, the simplest linear equation does not satisfy $f(x_1 + x_2) = f(x_1) + f(x_2)$. -Thank you. - -REPLY [3 votes]: You may refer to them as "$k$-vectorspace homomorphisms" if you wish. -In a little more detail: if $k$ is a field, then a $k$-vectorspace can be thought of as a set $X$ equipped with, for each sequence $a \in k^n$, a corresponding "linear combination mapping" $X \leftarrow X^n$. 
By convention, the structure-preserving mappings between $k$-vectorspaces are called "$k$-linear transforms," or simply $k$-linear, probably because the phrase "linear-combination-mapping-preserving mapping" is altogether too confusing. Like I said, you may refer to them as $k$-vectorspace homomorphisms if you wish. -But this just pushes the issue back: why are they called, of all things, linear combinations? Well, if $X$ is a $k$-vectorspace we're given a single vector $x \in X$, then $$\{ax : a \in k\}$$ is the set of all elements of $X$ that can be obtained by applying linear combination mappings to $x.$ In the special case where $k=\mathbb{R}$, this can be visualized as a line through the origin. -So "linear combination" is probably best thought of as a shorthand for "line-through-the-origin combination." Of course, if we've got two vectors, then we're potentially talking about a plane through the origin. And so on. -In fact, linear algebra isn't really about lines at all; its really about flat things that pass through the origin. Perhaps it should have been called "algebra with respect to the origin" instead!<|endoftext|> -TITLE: How important is the choice of books in studying Analysis? -QUESTION [6 upvotes]: I am in a fix. I have done a graduate course in Pure Mathematics.I love to study abstract algebra.I want to do postgraduate in Mathematics especially in Abstract Algebra . -In order to enter a postgraduate program I have to qualify a screening test which comprises of three sections: - -Algebra -Analysis -Metric Spaces and Topology - -I appeared in the test the earlier year .Though I scored a perfect 10 in Algebra,my marks in the other two topics were $3$ and $4$ out of $10$ as a result of which I failed to qualify. -Is it possible to learn Analysis now or its too late.Can I speed up the process of learning Analysis?Can someone please give some tips on how should I read this topic?I don't know why I fail in this topic very badly.Though I have studied topics like Continuity,Differentiablity,Riemann-Integral,Sequence and series of functions etc.,I fail in these topics miserably.Do I need to start from scratch now? -Also I more question I find that people ask for recommendation of books in Analysis.How important is the choice of books in a particular topic.We followed Rudin-Principles of Mathematical Analysis in our course. -In short please give some tips on how to study Analysis .I know there are many on this site who are very good at analysis.Any recommendations will be helpful - -REPLY [4 votes]: Since you are talking about the fundamentals, for the one-variable case I can only recommend Spivak's excellent book "Calculus". I learnt a lot from it even before I entered university and it has provided me with intuition which has been very helpful ever since. -For multivariable calculus things aren't so clear for me. I'm not really sure whether there exists a vector calculus textbook which is actually good, since I learnt from lecture notes and solidified my knowledge from experience in electromagnetism and differential geometry (btw if you're into that, you may benefit from O'Neill's "Elementary differential geometry"). -At any rate, I enjoyed Marsden & Tromba's book "Vector Calculus", which is however not very rigorous. You can again go for Spivak's little monster called "Calculus on Manifolds", but be warned: the notation is kind of irritating from time to time and it is more oriented towards geometry, not to mention that it is extremely laconic compared to his other books. 
-Rudin's book is great but in my opinion it can only be of use if you are exceptionally good or have already a solid background in analysis. A fantastic book which has a lot of overlap with Rudin is Apostol's "Mathematical Analysis", which covers even more material but in a much friendlier manner. I guess this last one is probably what you are looking for. -Finally, concerning metric spaces, I'd suggest Kaplansky's "Set Theory and Metric Spaces", the "metric spaces"-part of which has appealed to me ever since I read it for the first time. He has very well placed definitions and really enlightening exercises. I'd also consider Simmons' "Introduction to topology and modern analysis", for which I have a soft spot as it has been the first book I read metric space theory from. His language initially seemed kind of strange but I finally realized how fitting and natural it is. -I think that I would begin with Spivak, then move to Marsden&Tromba for a little while and then to Apostol and Kaplansky. -Now, concerning "strategic" advice, I can't say much since you are not providing sufficient information on your background, but for some reason my guess is that your exams were less computational and more "theoretical" than you expected. In this case, I'd say you can take Apostol's book for example and fill every proof you can on your own. If you do this I believe that you'll be more than ready (: -Good luck with your tests!<|endoftext|> -TITLE: References about Iterating integration, $\int_{a_0}^{\int_{a_1}^\vdots I_1dx}I_0\,dx$ -QUESTION [17 upvotes]: Are there any references that discuss Iterating integration in general, $\int_{a_0}^{\int_{a_1}^\vdots I_1dx}I_0\,dx$, conditions in which they converge, some special values, some special tricks to compute them, for example $$\Large\int_0^{\displaystyle\int_0^{\displaystyle\int_0^{\vdots}(1-x)^3dx}(1-x)^2dx}(1-x)^1dx$$ - -REPLY [2 votes]: I have not seen this before. Instead of direct references, here are some thoughts and pointers to something else. -If you write $F_k(x)=\int_0^{x} I_k(t) dt$ then your question asks for what $F_0(F_1(F_2(F_3(\dots))))$ is, the limit of the sequence $$F_0,\, F_0\circ F_1,\, F_0\circ F_1\circ F_2,\, \dots$$ Clearly the limit (if it exists) is a function, not a number. If it is a constant function, it is the zero function. For any differentiable $F_k$, you can represent the iteration by integrals. -In the simplest case where all of your integrands are the same, you are looking at functional iteration. If you iterate analytic functions, you should expect nice convergence on a "Fatou set" and chaotic behaviour elsewhere. -There is a formal similarity between your iterated integrals and power towers $$a_0^{a_1^{a_2^{a_3^{\dots}}}}$$ so you could try similar approaches. For nice references, see this MSE question. Basically, there you can consider the functions $$a_0^x,\quad a_0^{a_1^x}, \quad a_0^{a_1^{a_2^x}},\dots$$ and then evaluate that limit at $x=1$. -Repeating what I said above, in your integral question, you can consider -$$ -\int_0^{x} f_0(t)dt,\quad \int_0^{\large\int_0^x f(x_1) dx_1} f_0(x_0)dx_0, \quad -\int_0^{\large\int_0^{\int_0^x f_2(x_2)dx_2} f_1(x_1) dx_1} f_0(x_0)dx_0 -$$ -but it is less obvious what value of $x$ to use. You could consider for which values of $x$ your tower of integrals converges, though. $x=0$ is an obvious trivial choice (that you may want to avoid), where your tower of integrals collapses to zero. -You could alternatively try to find reasonable limits in a different way. 
Consider in your concrete example -\begin{align}J_1&=\int_0^{J_2} (1-x) dx = \frac12(1- (1-J_2)^2)\\ -J_2&=\int_0^{J_3} (1-x)^2 dx = \frac13(1-(1-J_3)^3)\\ -&\dots \\ -J_{k}&=\int_0^{J_{k+1}} (1-x)^k dx = \frac1{k+1} (1-(1-J_{k+1})^{k+1}) -\end{align} -You could ask for which values of $J_1$ a sequence $(J_k)$ exists that satisfies this recursion relation. $J_1=0$ is an obvious trivial solution. Are there others?<|endoftext|> -TITLE: Prob 15, Chap. 1 in Baby Rudin: If this condition also sufficient for equality? -QUESTION [5 upvotes]: Here's Prob. 15, Chap. 1 in the book Principles of Mathematical Analysis by Wlater Rudin, 3rd edition: - -Under what conditions does equality hold in the Schwarz inequality? - -Now the Schwarz inequality, which is Theorem 1.35 in Rudin, is as follows: - -If $a_1, \ldots, a_n$ and $b_1, \ldots, b_n$ are complex numbers, then $$ \left\vert \sum_{j=1}^n a_j \overline{b_j} \right\vert^2 \leq \sum_{j=1}^n \left\vert a_j \right\vert^2 \sum_{j=1}^n \left\vert b_j \right\vert^2.$$ -If there is a complex number $z$ such that $a_j = z b_j$ for each $j=1, \ldots, n$, then we have -$$ -\begin{align} -\left\vert \sum_{j=1}^n a_j \overline{b_j} \right\vert^2 &= \left\vert \sum_{j=1}^n z b_j \overline{b_j} \right\vert^2 \\ -&= \left\vert z \ \sum_{j=1}^n \left\vert b_j \right\vert^2 \right\vert^2 \\ -&= \vert z \vert^2 \cdot \left\vert \sum_{j=1}^n \left\vert b_j \right\vert^2 \right\vert^2 \\ -&= \vert z \vert^2 \cdot \left( \sum_{j=1}^n \left\vert b_j \right\vert^2 \right)^2, -\end{align} -$$ -whereas -$$ -\begin{align} -\sum_{j=1}^n \left\vert a_j \right\vert^2 \sum_{j=1}^n \left\vert b_j \right\vert^2 &= \sum_{j=1}^n \left\vert z b_j \right\vert^2 \sum_{j=1}^n \left\vert b_j \right\vert^2 \\ -&= \sum_{j=1}^n \vert z \vert^2 \left\vert b_j \right\vert^2 \sum_{j=1}^n \left\vert b_j \right\vert^2 \\ -&= \vert z \vert^2 \cdot \left( \sum_{j=1}^n \left\vert b_j \right\vert^2 \right)^2 \\ -&= \left\vert \sum_{j=1}^n a_j \overline{b_j} \right\vert^2. -\end{align} -$$ - -Now is this condition also a necessary condition for the equality to hold in the Schwarz "inequality"? - -REPLY [3 votes]: The Schwarz inequality follows from expanding -$$ \begin{aligned} -0 &\le \sum_{i=1}^n \sum_{j=1}^n \left\vert a_i b_j - a_j b_i \right\vert^2 \\ - &= \sum_{i=1}^n \sum_{j=1}^n (a_i b_j - a_j b_i)\overline{(a_i b_j - a_j b_i)} \\ - &= \quad ... \\ - &= 2 \sum_{j=1}^n \left\vert a_j \right\vert^2 \sum_{j=1}^n \left\vert b_j \right\vert^2 - - 2 \left\vert \sum_{j=1}^n a_j \overline{b_j} \right\vert^2 -\end{aligned} -$$ -and therefore equality holds if and only if -$$ \tag{*} - a_i b_j - a_j b_i = 0 \quad - \text{ for all } i, j = 1, \dots n \, . -$$ -If at least one $b_i$ is not zero then you can define $z = a_i/b_i$ -and conclude from $(*)$ that -$a_j = z b_j$ for each $j=1, \ldots, n$. -Another way to express $(*)$ is that one of the vectors -$(a_1, \ldots, a_n)$ and $(b_1, \ldots, b_n)$ is a constant multiple -of the other, or that they are linearly dependent over $\Bbb C$.<|endoftext|> -TITLE: How was Zeno's paradox solved using the limits of infinite series? -QUESTION [9 upvotes]: This is a not necessarily the exact paradox Zeno is thought to have come up with, but it's similar enough: -A man (In this photo, a dot 1) is to walk a distance of one unit from where he's standing to the wall. He, however, is to walk half the distance first, then half the remaining distance, then half of that, then half of that and so on. 
This means the man will never get to the wall, as there's always a half remaining. -This defies common sense. We know a man(or woman, of course) can just walk up to a wall and get to it. -My calculus book says this was solved when the idea of a limit of an infinite series was developed. The idea says that if the distances the man is passing are getting closer and closer to the total area from where he started to the wall, this means that the total distance is the limit of that. -What I don't understand is this: mathematics tells us that the sum of the infinite little distances is finite, but, in real life, doesn't walking an infinite number of distances require an infinite amount of time, which means we didn't really solve anything, since that's what troubled Zeno? -Thanks. - -REPLY [2 votes]: Zeno's paradox, today, is mostly irrelevant, other than marking an important stage in the history of philosophy. Let's formulate it in its essential form: - -If we sum infinitely many parts, the result can't be finite. - -It makes use of the concept of infinity in its premise but there is no such thing as infinity. Zeno does not know what infinity is, we don't know what infinity is. By using such a vague term in his argument's premise, Zeno sacrifices rigor even before starting. In short, it is a bogus argument from the get go. -Taking a look at one example of the argument, namely tortoise and the hare, Zeno says "tortoise needs to advance the half of the way first, then half of the remaining way...". This in turn is problematic. Zeno does not believe these infinite parts can make a finite whole but he has no problem of chopping down that same finite whole into infinitely many parts in the first place. How can he know this is possible? All sorts of bogus reasoning bound to follow once you start to tinker with infinity. In short: - -There is no such thing as infinity. Even if there is, it is only a metaphysical concept and can not really be used in physics and geometry in a computational way. - -In calculus, we don't talk about infinite sums per se. Take a look at the sum: -$$ 1 + 1 / 2 + 1/4 + ... = 2$$ -Those three dots could make you think that we are summing infinitely many terms. In reality, what calculus tells you is the following: -$$\lim_{n \to \infty}{\sum (1/2)^n} = 2, n \in N$$ -and this has nothing to do with infinity. It tells you that you can make this sum as close to $2$ as you need. More rigorously, for all $\epsilon > 0$, there exists an $N$ such that -$$\mid{\sum (1/2)^n} - 2 \mid < \epsilon$$ -for all $n > N$. The use of $\infty$ in limit above is just a notation, not to suggest a metaphysical use of the concept. It just tells you that you can make this sum as close to $2$ as possible, if you so desired. This in turn suggests that you can use these symbolic-analytical tools of calculus in your calculations for similar observations. The sum above was already $2$, clear from the observation, irrespective of any mathematics. Calculus just gives you tools that you can employ when thinking about such observations (measurements). Imagine the following computer program: -sum = 0 -n = 0 -epsilon = 0.00000002 // your choice of epsilon -while absolute_value(sum - 2) > epsilon: - sum = sum + (1/2)^n - n = n + 1 -print n - -Calculus tells you that this program will terminate and print some number, no matter how small you chose epsilon to be. -On the other hand, completely unrelated to the phenomenon of infinity, there is a physical observation; that the hare actually catches the tortoise. 
We want to compute the amount of time required for hare to catch the tortoise. Calculus just gives you a way to do this computation, again it has nothing to do with infinity. Computation is already finite, the result is already observed to be finite. -Zeno believed he had grasp of some abstract concepts and used his bogus reasoning about these abstract concepts to deny physically observed phenomena. Calculus is doing the other way, it constructs abstract concepts to explain (to a desired degree of exactness) physically observed phenomena. Philosophically speaking, what Zeno did is a fundamental error in thinking: trying to base physics onto metaphysics. This is bound to fail. This is the reason why Metaphysics was the last book in the shelf of Aristotelian curriculum. Aristotle clearly knew that you need to base your metaphysics onto physics, not the other way around. That's why you had to study physics before metaphysics, if you wanted to be a student of Aristotle (or more like peripatetic school of philosophy). Metaphysics, the word itself, means literally "after the physics" in Greek. This is the reason. But of course, metaphysics was still the "first philosophy", philosophy of first principles. This meant its truths were foundational to all else, even though we arrive at those via physical phenomenon, sensation and observation (by abstracting and conceptualizing on those). -So in short, Zeno's paradoxes were not paradoxes but were just errors in his thinking. It was not evident at the time since humans had more vague notions of concepts like number, measurement, infinity, time, motion etc. Calculus is not resolving this so-called paradox, it does something entirely different. Along the way, calculus refines our understanding of above-mentioned concepts to a point that we realize there was not an actual paradox in the first place.<|endoftext|> -TITLE: Show that $f_n\to f$ uniformly on $\mathbb{R}$ -QUESTION [8 upvotes]: Let $$P_n(x) = \frac{n}{1+n^2x^2}$$. -First, I had to prove that -$$\int_{-\infty}^\infty P_n(x)\ dx = \pi$$ -And that for any $\delta > 0$: -$$\lim_{n\to\infty} \int_\delta^\infty P_n(x)\ dx = \lim_{n\to\infty} \int_{-\infty}^{-\delta} P_n(x)\ dx = 0$$ -I've done that easily. -Now I need to prove that for $f:\mathbb{R}\to\mathbb{C}$ which is $2\pi$ periodic and continuous and: -$$f_n(x) = \frac{1}{\pi} \int_{-\infty}^\infty f(x-t)P_n(t)\ dt$$ -$f_n\to f$, uniformly on $\mathbb{R}$. -We learned in class about convolution and about Dirichlet/Fejer kernels. -Also, we learned that the trigonometric polynomials, $\{e^{inx}\}_{n\in\mathbb{Z}}$ are a dense set on $C(\mathbb{T})$ and the density is uniform. Meaning, there's a $P_n(x)=\sum c_n e^{inx}$ converges uniformly to $f$ where $f\in C(\mathbb{T})$. -note: $f\in C(\mathbb{T})$ is a continuous and $2\pi$ periodic function (T is for Torus). - -REPLY [6 votes]: To get you started: $$| f_n(x) - f(x)| =\left| (1/\pi) \int_{-\infty}^{\infty} f(x-t) P_{n}(t) \; dt - f(x)\right| = (1/\pi)\left| \int_{-\infty}^{\infty}\left[f(x-t)- f(x)\right] P_n(t) \; dt \right|$$ -Now because $f$ is continuous on $\mathbb{T}$ and $2\pi$-periodic, we can essentially deduce its properties by considering its restriction $f_r$ to $[0,4\pi]$. As a continuous function on a compact interval, $f_r$ is bounded and uniformly continuous. It's not hard to see that these properties carry over to $f$, meaning that $f$ is bounded and uniformly continous on $\mathbb{R}$. 
This implies that for every $\epsilon > 0$, there is a $\delta -> 0$ such that -$$|f(x) - f(x-t)| < \epsilon \quad \forall t \in (-\delta,\delta)\forall x\in \mathbb{R}$$ -Now, given an $\epsilon > 0$, we can choose $\delta$ accordingly and then split up the integrals giving -$$|f_n(x) - f(x)| \leq (1/\pi)\left[\int_{-\infty}^{-\delta}C\cdot P_{n}(t) \; dt + \int_{-\delta}^{\delta}\epsilon\cdot P_{n}(t) \; dt + \int_{\delta}^{\infty}C\cdot P_{n}(t) \; dt\right]$$ -Because of what you've already shown we know the left and right integral converge to $0$ as $n \to \infty$. But the middle integral can be estimated by $\epsilon$, which concludes the proof.<|endoftext|> -TITLE: Sin(x): surjective and non-surjective with different codomain? -QUESTION [5 upvotes]: Statement that $\operatorname{sin}(x)$ not surjective with codomain $\mathbb R$ and surjective with codomain $[-1,1]$ found here: - -Non-surjective: $\mathbb{R}\rightarrow\mathbb{R}: x\mapsto\operatorname{sin}(x)$ -Surjective: $\mathbb{R}\rightarrow[-1,1]: x\mapsto\operatorname{sin}(x)$ - -where the image $Im(f)=[-1,1]$ in both cases and the codomain is $\mathbb R$ and $[-1,1]$ for the case 1 and case 2, respectively. In the second case, $\forall x\in\mathbb R : \operatorname{sin}(x)\in [-1,1]$ where the codomain equals the image of $f$. Surjection means that the image of the function equals to the codomain. -Why is sin not surjection with one codomain and surjective with other codomain? - -REPLY [8 votes]: The functions $\operatorname{sin}:\mathbb R\rightarrow \mathbb R$ and $\operatorname{sin}: \mathbb R\rightarrow [-1,1]$ are two different functions. In mathematics, a function is usually defined as the collection of the following data: - -Specifying the domain X (a set) -Specifying the codomain Y (a set) -A relation on $X\times Y$ satisfying certain properties - -and so the mathematical function can be understood as an ordered triple set $\{X,Y,R\}$ where $R$ is the relation such as $X\times Y$. The usual notation is $f:X\to Y$. -In this strict sense, the string ''sin'' can represent infinitely many different functions, depending on the choice of domain and codomain. Some of them are surjective, injective, bijective, or none if that. For example, $\sin: [0,1]\to\Bbb R$ is injective but not surjective, $\sin: \Bbb R\to [-1,1]$ is surjective and not injective, $\sin: [-\pi/2,\pi/2]\to [-1,1]$ is bijective and so on. -However, that's not the whole story. In physics and engineering, one often ignores the specification of domain or/and codomain and assumes that it is understood from the context. In this sense, ''$\sin x$'' is an assignment that to a real number $x$ assigns $\sum_{i=0}^{\infty} (-1)^{i}\frac{x^{2i+1}}{(2i+1)!}$. Here you don't care about domains and codomains. This is often good enough for practical considerations. But from the formal viewpoint, it doesn't make sense to ask whether this ''function'' $\sin$ is surjective, if you don't specify the domain and codomain.<|endoftext|> -TITLE: Erasing numbers from the front of the row -QUESTION [5 upvotes]: Numbers $1,2,\ldots,k$ are written in this order in a row. For $i=1,\ldots,k$, in the $i$th step, a random variable $V_i$ is drawn uniformly from the interval $[0,2i]$. If $V_i$ is greater than the first remaining number, that number is erased. What is the expected number of numbers that will be erased? -For example, if $k=1$, then we have the number $1$ and $V_1$ drawn from $[0,2]$, so $1/2$ numbers will be erased in expectation. 
-If $k=2$, then with probability $1/2$ we have $V_1>1$ and it is 50-50 whether the second number is erased. Otherwise $V_1<1$ and with probability $3/4$ the first number is erased. So the answer is $(1/2)(3/2)+(1/2)(3/4)=9/8$. -For $k=3$, a similar case analysis shows that the answer is $85/48$ (if I calculated correctly.) It could be that no closed form can be found in general. If so, upper/lower bounds would still be interesting. - -REPLY [2 votes]: Note: Here are some other aspects towards a closed expression of the expectation values $E_k(X)$ of erased numbers. We focus at the probability tree for $k\geq 1$. This tree is some kind of weighted Leibniz harmonic triangle. -We derive a recurrence formula, detect some nice symmetric relations and give a representation of specific coefficients in terms of harmonic numbers. To find some relations, we start with a closer look of the cases $k=3,4$. - -Example $k=3$ -\begin{array}{ccccccccc} -\{1,2,3\}&\frac{1}{2}&\{2,3\}&\frac{2}{4}&\{3\}&\color{blue}{\frac{3}{6}}&\emptyset\\ -\\ -\frac{1}{2}&&\frac{2}{4}&&\color{blue}{\frac{3}{6}}\\ -\\ -\{1,2,3\}&\frac{3}{4}&\{2,3\}&\color{blue}{\frac{4}{6}}&\{3\}\\ -\\ -\frac{1}{4}&&\color{blue}{\frac{2}{6}}\\ -\\ -\{1,2,3\}&\color{blue}{\frac{5}{6}}&\{2,3\}\\ -\\ -\color{blue}{\frac{1}{6}}\\ -\\ -\{1,2,3\}\\ -\end{array} - -The probability tree for $k=3$ represents the remaining sets of $\{1,2,3\}$ as nodes and the transition probabilities after each step $l=0,1,2,3$. We could consider this tree as a triangle rotated counter-clockwise by ${90}^\circ$. The sets which are possible in step $l$ are written down diagonally. -If we look at the top left entry $\{1,2,3\}$ we see we obtain with probability $\frac{1}{2}$ the successor states $\{2,3\}$ horizontally and $\{1,2,3\}$ vertically. -The rightmost diagonal of transition probabilities shows the (blue) values -\begin{align*} -\color{blue}{\frac{1}{6}}\quad\color{blue}{\frac{5}{6}}\qquad\color{blue}{\frac{2}{6}}\quad\color{blue}{\frac{4}{6}}\qquad\color{blue}{\frac{3}{6}}\quad\color{blue}{\frac{3}{6}}\tag{2} -\end{align*} -from bottom left to top right. We see the resulting sets in the rightmost diagonal. E.g. the set $\{2,3\}$ in the rightmost diagonal is obtained from $\{1,2,3\}$ with probability $\frac{5}{6}$ and from $\{2,3\}$ with probability $\frac{2}{6}$. 
-We calculate the probabilities of the finally obtained sets:
-\begin{array}{rl}
-\emptyset&xxx\\
-&\frac{1}{2}\frac{2}{4}\frac{3}{6}=\frac{6}{48}=\frac{1}{8}\\
-\\
-\{3\}&yxx+xyx+xxy\\
-&\frac{1}{2}\frac{3}{4}\frac{4}{6}
-+\frac{1}{2}\frac{2}{4}\frac{4}{6}
-+\frac{1}{2}\frac{2}{4}\frac{3}{6}
-=\frac{26}{48}=\frac{13}{24}\\
-\\
-\{2,3\}&yyx+yxy+xyy\\
-&\frac{1}{2}\frac{1}{4}\frac{5}{6}
-+\frac{1}{2}\frac{3}{4}\frac{2}{6}
-+\frac{1}{2}\frac{2}{4}\frac{2}{6}
-=\frac{15}{48}=\frac{5}{16}\tag{1}\\
-\\
-\{1,2,3\}&yyy\\
-&\frac{1}{2}\frac{1}{4}\frac{1}{6}=\frac{1}{48}\\
-\end{array}
-
-We obtain for $k=3$ the expected value $E_3(X)$ of erased numbers
- \begin{align*}
-E_3(X)&=3-\frac{13}{24}-2\frac{5}{16}-3\frac{1}{48}=\frac{85}{48}=1.7708\overline{3}
-\end{align*}
-We observe nice regularities in (2)
-
-The odd entries from left to right are $\frac{j}{2k}$ with $1\leq j \leq k$
-The even entries from right to left are $\frac{k+j}{2k}$ with $0\leq j < k$
-Two successive entries $\frac{2j+1}{2k},\frac{2j+2}{2k}$ sum up to $1$
-
-We also see that the resulting probabilities starting from the beginning set $\{1,2,3\}$ are derived similarly to adding up values in a Pascal triangle or Leibniz harmonic triangle, but weighted with the corresponding transition probabilities. If we look at the resulting set $\{2,3\}$ after three steps, we see in (1) it can be reached with probability $\frac{5}{16}$ and the paths are
- \begin{align*}
-yyx+yxy+xyy
-\end{align*}
- meaning that we use all paths containing one $x$-step horizontally and two $y$-steps vertically when starting from $\{1,2,3\}$ in the probability tree when we want to reach $\{2,3\}$. The number of $y$ in the path description is bijectively related to the number of remaining elements in the set $\{2,3\}$.
-
-These symmetries and also the same transition probabilities are valid for $k$ in general, as we can see when we look at the example for $k=4$.
-
-Example $k=4$
-\begin{array}{ccccccccc}
-\{1,2,3,4\}&\frac{1}{2}&\{2,3,4\}&\frac{2}{4}&\{3,4\}&\frac{3}{6}&\{4\}&\color{blue}{\frac{4}{8}}&\emptyset\\
-\\
-\frac{1}{2}&&\frac{2}{4}&&\frac{3}{6}&&\color{blue}{\frac{4}{8}}\\
-\\
-\{1,2,3,4\}&\frac{3}{4}&\{2,3,4\}&\frac{4}{6}&\{3,4\}&\color{blue}{\frac{5}{8}}&\{4\}\\
-\\
-\frac{1}{4}&&\frac{2}{6}&&\color{blue}{\frac{3}{8}}\\
-\\
-\{1,2,3,4\}&\frac{5}{6}&\{2,3,4\}&\color{blue}{\frac{6}{8}}&\{3,4\}\\
-\\
-\frac{1}{6}&&\color{blue}{\frac{2}{8}}\\
-\\
-\{1,2,3,4\}&\color{blue}{\frac{7}{8}}&\{2,3,4\}\\
-\\
-\color{blue}{\frac{1}{8}}\\
-\\
-\{1,2,3,4\}
-\end{array}
-
-Observe that we have in the leftmost three diagonals the same probability transitions as in the example for $k=3$.
-
-\begin{array}{rl}
-\emptyset&xxxx\\
-&\frac{1}{2}\frac{2}{4}\frac{3}{6}\frac{4}{8}=\frac{24}{384}=\frac{1}{16}\\
-\\
-\{4\}&yxxx+xyxx+xxyx+xxxy\\
-&\frac{1}{2}\frac{3}{4}\frac{4}{6}\frac{5}{8}
-+\frac{1}{2}\frac{2}{4}\frac{4}{6}\frac{5}{8}
-+\frac{1}{2}\frac{2}{4}\frac{3}{6}\frac{5}{8}
-+\frac{1}{2}\frac{2}{4}\frac{3}{6}\frac{4}{8}
-=\frac{154}{384}=\frac{77}{192}\\
-\\
-\{3,4\}&yyxx+yxyx+yxxy+xyyx+xyxy+xxyy\\
-&\frac{1}{2}\frac{1}{4}\frac{5}{6}\frac{6}{8}
-+\frac{1}{2}\frac{3}{4}\frac{2}{6}\frac{6}{8}
-+\frac{1}{2}\frac{3}{4}\frac{4}{6}\frac{3}{8}
-+\frac{1}{2}\frac{2}{4}\frac{2}{6}\frac{6}{8}
-+\frac{1}{2}\frac{2}{4}\frac{4}{6}\frac{3}{8}
-+\frac{1}{2}\frac{2}{4}\frac{3}{6}\frac{3}{8}
-=\frac{168}{384}=\frac{7}{16}\\
-\\
-\{2,3,4\}&yyyx+yyxy+yxyy+xyyy\\
-&\frac{1}{2}\frac{1}{4}\frac{1}{6}\frac{7}{8}
-+\frac{1}{2}\frac{1}{4}\frac{5}{6}\frac{2}{8}
-+\frac{1}{2}\frac{3}{4}\frac{2}{6}\frac{2}{8}
-+\frac{1}{2}\frac{2}{4}\frac{2}{6}\frac{2}{8}
-=\frac{37}{384}\\
-\\
-\{1,2,3,4\}&yyyy\\
-&\frac{1}{2}\frac{1}{4}\frac{1}{6}\frac{1}{8}=\frac{1}{384}\\
-\end{array}
-We obtain for $k=4$ the expected value $E_4(X)$ of erased numbers
- \begin{align*}
-E_4(X)&=4-\frac{77}{192}-2\frac{7}{16}-3\frac{37}{384}-4\frac{1}{384}=\frac{931}{384}=2.4244791\overline{6}
-\end{align*}
-From these examples it is easy to derive a recurrence relation for the transition probabilities.
-
-$$ $$
-
-Recurrence relation:
-The example trees above together with the observations at the end of example $k=3$ indicate the following recurrence relation of the transition probabilities.
-Let $p_{k,l}$ denote the probability to obtain the set $\{l+1,l+2,\ldots,k\}$ when starting with the set $\{1,2,\ldots,k\}$ with $0\leq l < k$. The following holds:
-\begin{align*}
- p_{k,l}&=\frac{2k-l}{2k}p_{k-1,l-1}+\frac{l+1}{2k}p_{k-1,l}&\qquad 1\leq l<k
-\end{align*}<|endoftext|>
-TITLE: Axiom of Powers
-QUESTION [5 upvotes]: Something I'm failing to understand from Halmos' "Naive Set Theory" book.
-
-If $\Phi $ is a collection of subsets of a set E (that is, $\Phi$ is a subcollection of $\rho (E)$), then write
-
-First of all I would like to point out that the letters Phi and rho aren't the ones used in Halmos' book. I do not recognize the letters from the book so I had no option but to pick my own.
-$$\rho(E) = \{X:X\subset E\}$$
-This is the definition of $\rho (E)$ from the book.
-My question is: What is the difference between $\rho (E) $ and $\Phi$?
-
-REPLY [5 votes]: The set $\rho(E) = \mathcal P (E)$ is the set of all the subsets of $E$, while $\Phi$ is a collection of subsets of $E$ (which may not contain all of them). In other words: $\Phi \subseteq \rho(E)$.
-For instance, if $E =\{1,2\}$, then the power set of $E$ is $\rho(E) = \{\varnothing, \{1\}, \{2\}, E\}$ and you can take $\Phi = \{\{2\}, E\}$ as a collection of subsets of $E$ (which is not the whole power set).<|endoftext|>
-TITLE: Prove that Helly Theorem is not true in $L^{\infty}[0,1]$
-QUESTION [6 upvotes]: Prove that Helly Theorem is not true in $X=L^{\infty}[0,1]$
-Helly's Theorem: Let $X$ be a separable normed linear space and $\{T_n \}$ a sequence in its dual space $X^*$ that is bounded, that is, there is an $M > 0$ for which $|T_n(f)|\leq M \cdot ||f||$ for all $f$ in $X$ and all $n$.
Then there is a subsequence $\{T_{n_k}\}$ of $\{T_n\}$ and $T$ in $X^*$ for which $\lim\limits_{k \to \infty} T_{n_k}(f) =T(f)$ for all $f$ in $X$
-My thoughts: Since one of the conditions of the theorem is that $X$ is a separable normed linear space, I thought it would be enough to prove that $L^{\infty} [0,1]$ is not separable (and we can do this by contradiction) but I think it is not enough and maybe a counterexample (that I can't see) will solve this problem easily, any clues or solutions? Thanks
-
-REPLY [6 votes]: A little bit late but might be helpful for future visitors:
-Define $$ f_n(x)=\begin{cases} 2^n & x\in \left[\frac{1}{2^n},\frac{1}{2^{n-1}}\right] \\ 0 & \text{else} \end{cases} $$ Then $f_n \in L^1[0,1]$ so it induces a linear functional on $L^\infty[0,1]$ by $T_n(g)=\int_{[0,1]}f_n\, g $.
-Assume $T_n$ has a weak-* convergent subsequence, $ T_{n_k} \rightharpoonup T $. Define $ f(x) $ by $$ f(x)= \begin{cases} (-1)^k & x\in \left[\frac{1}{2^{n_k}},\frac{1}{2^{n_k-1}} \right) \\ 0 & \text{else} \end{cases} $$ Clearly, $ f\in L^\infty[0,1] $ but $T_{n_k}(f)=(-1)^k $ which clearly doesn't converge.<|endoftext|>
-TITLE: What is the probability that two numbers between 1 and 10 picked at random sum to a number greater than 5?
-QUESTION [6 upvotes]: We have the numbers $1$ through $10$ in a box, we pick one at random, write it down and put it back in the box. We pick another of those numbers at random and write it down again. If we add the two numbers, what is the probability that it will be greater than $5$?
-At first I thought that I could count the number of ways we could add two numbers to get six, i.e. $2+4$, and see what are the chances to get numbers bigger than those choices, then add all the probabilities that relate to each way. However, I get numbers greater than $1$, which is impossible. I also thought about the chance of getting a $1$ and then a number equal to or bigger than $5$, $P(x \ge 5) = \frac 12$, multiplying them together and repeating until all numbers run out. Again, wrong answer.
-My question is: how do we get to the correct answer? Is it possible to generalize? Say that the probability of $n$ numbers picked at random from $N$ choices add to something greater than $k$.
-
-REPLY [3 votes]: You could solve this with generating functions. The generating function for this situation, equivalent to rolling a fair 10-sided die twice, is:
-$$(x + x^2 + x^3 + x^4 + x^5 + x^6 + x^7 + x^8 + x^9 + x^{10})^2$$
-which expands to
-$$x^2 + 2 x^3 + 3 x^4 + 4 x^5 + 5 x^6 + 6 x^7 + 7 x^8 + 8 x^9 +
- 9 x^{10} + 10 x^{11} + 9 x^{12} + 8 x^{13} + 7 x^{14} + 6 x^{15} + 5 x^{16} +
- 4 x^{17} + 3 x^{18} + 2 x^{19} + x^{20}$$
-The coefficient of each $x^n$ is the number of ways of getting a sum of $n$ from the two random draws. There are 100 total possibilities ($10 \cdot 10$) with two draws from 1...10 with replacement. Looking at the polynomial above, the coefficients of the monomials $x^2$ through $x^5$ show that there are a total of 10 ways to get a sum of 5 or less. Thus, the probability of a sum greater than 5 is 90/100 or 0.9.
-Perhaps a simpler way is to first eliminate the cases where at least one draw is 5 or greater, since these guarantee a sum greater than five. There are 84 of these, leaving only the 16 cases in which both draws are 4 or less.
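-As a quick sanity check, a brute-force enumeration in Python (an illustrative sketch, not part of the original argument) reproduces all of these counts:
-
-from itertools import product
-
-# All 100 equally likely ordered pairs of draws from 1..10.
-outcomes = list(product(range(1, 11), repeat=2))
-favourable = sum(1 for a, b in outcomes if a + b > 5)
-print(favourable, len(outcomes))  # 90 100, i.e. probability 0.9
-
-# The 16 pairs with both draws at most 4, of which 10 sum to 5 or less:
-small = [(a, b) for a, b in outcomes if a <= 4 and b <= 4]
-print(len(small), sum(1 for a, b in small if a + b <= 5))  # 16 10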
This can also be done with a generating function (though counting manually is easy too):
-$$(x + x^2 + x^3 + x^4)^2 = x^2 + 2 x^3 + 3 x^4 + 4 x^5 + 3 x^6 + 2 x^7 + x^8$$
-once again giving 10 out of 100 cases where the sum is 5 or less.<|endoftext|>
-TITLE: What can go wrong if we let sigma algebra to admit the union of uncountable union of elements?
-QUESTION [5 upvotes]: By definition we only allow countable unions of elements to be included in the $\sigma$-field; why not uncountably many? Is there a historical reason behind this?
-
-REPLY [2 votes]: You can show that it's not possible to construct a measure $\lambda$ defined on all the subsets of $\mathbb{R}$ such that the following natural assumptions hold:
-
-For intervals $[a,b]$, you have $\lambda([a,b])=b-a$.
-Translation Invariance: $\forall X\subset\mathbb{R}$ and $\forall t\in\mathbb{R}$ you have $$\lambda(X+t)=\lambda(\{x+t | x\in X\})=\lambda(X)$$
-Additivity: For any sequence $(X_i)_{i\in I}$ of disjoint sets in $\mathbb{R}$ you have $$\lambda\Bigl(\bigcup_{i\in I} X_i \Bigr)=\sum_{i \in I}\lambda(X_i)$$
-
-And therefore you need to have some restrictions on your sets (at least if you want to keep the assumptions about your measure, which seems like a good idea)
-Counterexample (Vitali sets):
-Define the following equivalence relation on $\mathbb{R}$:
-$$x \sim y \quad \text{iff} \quad x-y \in \mathbb{Q}$$
-Then, by the axiom of choice, there is a map $f\colon \mathbb{R} / \sim \to \mathbb{R}$ which assigns a representative to each equivalence class. You can assume that $X=f(\mathbb{R} / \sim)\subset [0,1]$ (you can just "add integers").
-Now look at $\lambda(X)$:
-
-If $\lambda(X)$ were equal to $0$, you would have $$\lambda(\mathbb{R})=\lambda \Bigl(\bigcup_{\mathbb{Q}}(t+X) \Bigr)=\sum_{\mathbb{Q}}\lambda(X)=0$$ by the translation invariance and the countability of $\mathbb{Q}$.
-If $\lambda(X)=c>0$, you get $$\lambda \Bigl(\bigcup_{\mathbb{Q}\cap [0,1]}(t+X) \Bigr)=\sum_{\mathbb{Q}\cap [0,1]} c = +\infty$$ But you also have $$\lambda \Bigl(\bigcup_{\mathbb{Q}\cap [0,1]}(t+X) \Bigr)\leq \lambda([0,2])=2$$ and therefore another contradiction.<|endoftext|>
-TITLE: If $G$ is a group show that if $(a \cdot b)^2 = a^2 \cdot b^2$ then $G$ must be abelian.
-QUESTION [6 upvotes]: If $G$ is a group show that if $(a \cdot b)^2 = a^2 \cdot b^2$ then $G$ must be abelian.
-
-$\begin{aligned}(a \cdot b)^2 = a^2 \cdot b^2 & \iff (a\cdot b)\cdot(a \cdot b) = (a \cdot a)\cdot (b \cdot b) \\& \iff a \cdot (b \cdot (a \cdot b)) =a(a \cdot (b\cdot b)) \\& \iff (a^{-1} \cdot a) \cdot (b \cdot (a \cdot b)) =(a^{-1}\cdot a)(a \cdot (b\cdot b))\\& \iff (b \cdot (a \cdot b)) =(a \cdot (b\cdot b)) \\& \iff (b \cdot a) \cdot b =(a \cdot b)\cdot b\\& \iff (b \cdot a) \cdot (b \cdot b^{-1}) =(a \cdot b)\cdot (b \cdot b^{-1}) \\& \iff b \cdot a = a \cdot b\end{aligned}$
-Thus $G$ must be abelian. Is this right? Is there a less clunky way to write it if so?
-
-REPLY [5 votes]: It's correct, but let's see if we can address "clunkiness".
-$$
-abab = aabb \qquad \text{Is this right or not?}
-$$
-Multiplying both sides by $a^{-1}$ on the left, we get
-$$
-bab = abb \qquad \text{Is this right or not?}
-$$
-Multiplying both sides by $b^{-1}$ on the right, we get
-$$
-ba = ab \qquad \text{Is this right or not?}
-$$
-All this is right if at the appropriate points we invoke associativity. What appears above is really the idea of the proof, and the necessary invocations of associativity are playing an essentially technical role in this argument.
Maybe "clunky" means all the stuff that's necessary for logical rigor camouflages the central idea, which one wishes to exhibit. That's a reason for getting "lemmas" out of the way before getting to the central idea of an argument.
-
-REPLY [4 votes]: More simply, $(ab)^2=abab=a^2b^2$
-Cancel $a$ from the left and $b$ from the right to get $ba=ab$.<|endoftext|>
-TITLE: Can all vector spaces be made into normed spaces?
-QUESTION [5 upvotes]: Can all vector spaces be made into normed spaces (even trivial ones)? The vector space could be of infinite dimension.
-Update: I don't know how to make this question more specific. I am talking about a very general vector space. It can be any kind of vector space (finite dimensional, infinite dimensional) with any kind of underlying field ($\mathbb{R}$ or $\mathbb{C}$ or other). It seems like most sources I have simply start by discussing the definition of a normed vector space without discussing when it is possible for a vector space to have a norm. So I am thinking maybe there could be a vector space that we cannot define a norm on? I couldn't think of an example myself.
-
-REPLY [5 votes]: Sure. Take a (Hamel) basis $\{v_i\}_{i\in I}$ of your vector space $V$ (its existence is guaranteed by Zorn's lemma) and define an inner product by setting
-$$\langle v_i,v_j\rangle = \delta_{ij}$$
-and extending linearly over $V$. Now take the induced norm $\|w\| = \sqrt{\langle w,w\rangle}$. However, this structure is not interesting at all, as it basically only depends on the "size" of $V$ (the cardinality of a basis), and not on any other information you have.
-
-REPLY [3 votes]: Yes (as long as you have an absolute value on the field, and you do not use non-common axiom schemes for your set-theory).
-Just recall that every vector space has a basis, fix one, and define the norm, e.g., as the $\infty$-norm of the coordinates in this basis.<|endoftext|>
-TITLE: How to generalize "Seven trees in one" to labelled/colored trees?
-QUESTION [7 upvotes]: In the famous paper Seven trees in one, Andreas Blass showed that there is "a particularly elementary bijection between the set $T$ of finite binary trees and the set $T^7$ of seven-tuples of such trees".
-For the Haskellers, if we define
-data Tree = Leaf | Node Tree Tree
-
-this leads to a bijection of types
-iso :: Tree -> (Tree, Tree, Tree, Tree, Tree, Tree, Tree)
-
-The justification, which this paper elaborates on, stems from the fact that for the set $T$ of trees, the recursive definition yields an isomorphism $$T \cong T^2 + 1.$$ Treating this as an equation of complex numbers, we'd get
-$$T = \frac 1 2 \pm \frac 1 2 i \sqrt 3$$
-which is a primitive sixth root of unity, thus yielding $T^7 = T$.
-This fascinating isomorphism only works for trees with no information attached to the nodes, as it just operates on the structure of the tree. I would love to see how we could incorporate actual, labelled nodes though (or better: colored nodes, see comments below). What do we get if we introduced labels on the nodes (with one of $n$ possibilities)? Say
-data Label = A1 | ... | An
-data Tree = Leaf | Node Label Tree Tree
-
-Now we have an isomorphism $$T \cong 1 + nT^2$$ with complex solution
-$$T = \frac 1{2n}(1\pm i\sqrt{4n-1}).$$
-E.g. for $n=2$ we get
-$$T = \frac 1 4 (1 \pm i\sqrt 7).$$
-None of these solutions can be roots of unity for $n>1$ because of their absolute values, so some nontrivial isomorphism $T^k \cong T^\ell$ won't arise. Are there other results one can achieve?
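-(To make the absolute-value claim explicit: for $T = \frac{1}{2n}\left(1\pm i\sqrt{4n-1}\right)$ one computes
-$$|T|^2 = T\overline{T} = \frac{1+(4n-1)}{4n^2} = \frac{1}{n},$$
-so $|T| = 1/\sqrt{n} \neq 1$ whenever $n>1$, while an isomorphism $T^k \cong T^\ell$ with $k\neq\ell$ would force $|T|^{\,|k-\ell|}=1$.)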
-I know that this question basically amounts to asking if there is nice integral polynomial equation satisfied by the complex value for $T$ above, which means nice polynomial multiples of $nT^2-T+1$. Has there been any work done on these? -Thank you for your comments - -I know this question, though it's way broader (and hasn't received an answer anyway). - -REPLY [2 votes]: Have a look at this paper. It gives some results for lists, naturals, and constants as well.<|endoftext|> -TITLE: Hint on computing the series $\sum_{n=2}^\infty \frac{1}{n^2-1}$. -QUESTION [6 upvotes]: I'm supposed to determine whether this sum diverges or converges and if it converges then find its value: -$$ -\sum_{n=2}^\infty \frac{1}{n^2-1}. -$$ -Using the comparison test I eventually showed that this converges. But I can't figure out how to show what this sum converges to. The only sums we actually found values for in my notes are geometric series which this clearly isn't. -I saw that I could use partial fraction decomposition to represent the terms as -$$\frac{1/2}{n-1}- \frac{1/2}{n+1} $$ -but that just gets me $\infty - \infty$, so this isn't the way to do it. -I'm not sure how to find the value of this sum. I don't need the full solution but a hint would be appreciated. Thanks. :) - -REPLY [2 votes]: Since $$\frac{1}{n^2-1}=\frac{1}{(n-1)(n+1)}=\frac12\left(\frac{1}{n-1}-\frac{1}{n+1}\right)$$ as you already have, then write for $N\in \mathbb N$ -\begin{align}\sum_{n=2}^{N}\left(\frac{1}{n-1}-\frac{1}{n+1}\right)&=\left(1-\frac13\right)+\left(\frac12-\frac14\right)+\left(\frac13-\frac15\right)+\dots+\left(\frac1{N-1}-\frac1{N+1}\right)\\[0.2cm]&=1+\frac12+\left(\frac13-\frac13\right)+\dots+\left(\frac1{N-1}-\frac1{N-1}\right)-\frac1N-\frac1{N+1}\\[0.2cm]&=\frac32-\frac1N-\frac1{N+1}\end{align} In other words, this sum telescopes. Now let $N\to \infty$ (and of course do not forget 1/2 in front of the sum) to conclude that $$\sum_{n=2}^{+\infty}\frac{1}{n^2-1}=\lim_{N\to+\infty} -\frac12\sum_{n=2}^N\left(\frac{1}{n-1}-\frac1{n+1}\right)=\frac12\lim_{N\to+\infty}\left(\frac32-\frac1N-\frac1{N+1}\right)=\frac34$$<|endoftext|> -TITLE: Reduced row echelon form and linear independence -QUESTION [8 upvotes]: Let's say I have the set of vectors $S = \{v_1, v_2, ..., v_n\}$ where $v_j \in R^m$, $v_j = (a_{1j}, a_{2j}, ..., a_{mj})$. -If the matrix formed by each of the vectors $A=[v_1, v_2, ..., v_n]$ looks like this (I believe), which is not a square matrix: -$$A = \begin{pmatrix} -a_{11} & a_{12} & \cdots & a_{1n} \\ -a_{21} & a_{22} & \cdots & a_{2n} \\ -\vdots & \vdots & \ddots & \vdots \\ -a_{m1} & a_{m2} & \cdots & a_{mn} -\end{pmatrix}$$ -Then does A's reduced row echelon form help me determine whether the vectors of $S$ are linear dependent or independent? If so, how? -I hope I got all the indices, notation and terminology right, since I am a beginner in linear algebra, and English is not my native language. - -REPLY [7 votes]: The "row rank" of $A$ is the number of linearly independent rows it has, and the "column rank" the number of its linearly independent columns. The key facts are (for any matrix $A$) that: - -The row rank is equal to the column rank. -The row (equiv. column) rank is unchanged by elementary row operations. - -Therefore you can get the rank of $A$ (as we say for simplicity) by counting the leading ones of its row echelon form. 
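-For instance, the pivot count can be read off mechanically with SymPy (an illustrative sketch, not part of the original argument):
-
-from sympy import Matrix
-
-# Columns are the vectors of S; here the third column is the sum of the
-# first two, so S is linearly dependent.
-A = Matrix([[1, 0, 1],
-            [0, 1, 1],
-            [2, 3, 5]])
-rref_form, pivots = A.rref()
-print(pivots)                  # (0, 1): only two leading ones
-print(len(pivots) == A.cols)   # False, so the columns are dependent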
Since $S$ has $n$ vectors, we need the rank of $A$ to be $n$ (it cannot be more) in order for $S$ to be a linearly independent set.<|endoftext|>
-TITLE: K theory of finite dimensional Banach algebras
-QUESTION [7 upvotes]: Is there a reference which studies the K theory of finite dimensional Banach algebras?
-In particular, is there a finite dimensional Banach algebra whose $K_{0}$-group is a nontrivial finite group? (I am sorry if the latter question is elementary)
-Edit: What is a generalization of the following theorem by Taylor to non-commutative Banach algebras (in particular finite dimensional non-commutative Banach algebras)? (I read this theorem in a paper by J. Rosenberg, "Comparison between algebraic and Topological K theory")
-"Theorem 2 (Taylor): If A is a unital commutative Banach algebra and if X is its maximal
-ideal space, then the Gelfand transform A → C(X) induces an isomorphism
-on topological K-theory."
-Note that this theorem implies that for a commutative Banach algebra of finite dimension, the $K_{0}$ group cannot be a nontrivial finite group.
-
-REPLY [4 votes]: Let $A$ be a finite-dimensional Banach algebra. The group $K_0(A)$ only depends on the underlying ring structure of $A$. In what follows, $A$ can be any nonzero finite-dimensional algebra over an arbitrary field $k$. The map $\{\text{finitely generated projective $A$-modules}\}\to \mathbb Z$ given by $P\mapsto\dim_kP$ is well defined (since $A$ is finite-dimensional, so is any f.g. $A$-module), isomorphism invariant, additive, and non-trivial since $A\neq 0$, therefore it extends to a non-trivial homomorphism $K_0(A)\rightarrow \mathbb{Z}$. The image is a non-trivial subgroup of $\mathbb Z$, all of which are infinite cyclic (and obviously projective), hence $K_0(A)$ cannot be finite (it actually contains an infinite cyclic direct summand).<|endoftext|>
-TITLE: Why isn't integer factorization in complexity P, when you can factorize n in O(√n) steps?
-QUESTION [11 upvotes]: It is said that integer factorization is an NP problem.
-Why isn't it P? You can solve it in $O(\sqrt{n})$ time with trial factorization, and since $\sqrt{n} = n^{1/2}$, to me that looks like a number of form $n^k$ which is a polynomial.
-I am having difficulty determining what P vs NP vs NP-Complete vs NP-Hard means because I don't know how to separate the definitions and how complexity is measured and defined.
-
-REPLY [8 votes]: Let $k$ denote the length of the input value $n$.
-Since $k=\log_2n$, a complexity of $O(\sqrt{n})$ is equivalent to $O(\sqrt{2^k})$.
-Since $\sqrt{2^k}=2^{k/2}$, this complexity is obviously exponential in terms of input-length.<|endoftext|>
-TITLE: Is this 2-tensor symmetric? It satisfies these conditions
-QUESTION [5 upvotes]: I have some scalar field $p$ and a 2nd order tensor $\textbf{T}$ such that
-$div( \ \textbf{T} \ \textbf{grad}(p) \ )=0$
-$\textbf{curl}(\ \textbf{T} \ \textbf{grad}(p) \ )=\textbf{0} $
-$\textbf{T}$ is positive definite, i.e. $\textbf{a} \cdot \textbf{T}\textbf{a} >0 \ \ \ \forall \ \textbf{a} \in E$
-I am trying to show that $\textbf{T}=\textbf{T}^T$. Is it true? If yes, how can I prove it? I've been working on it for the past few days and keep running in circles, I need help.
-If it isn't true, what if I assume $\textbf{T}$ is constant, then is it symmetric? I'd appreciate any input.
-
-REPLY [2 votes]: This isn't true. For example let
-$$\mathbf{T}=\begin{pmatrix}2&2&2\\
-0&2&2\\
-0&0&2
-\end{pmatrix}$$
-everywhere.
-Then $\mathbf{a}\cdot\mathbf{T}\mathbf{a}=2a_1^2+2a_2^2+2a_3^2+2a_1a_2+2a_2a_3+2a_3a_1$ $=(a_1+a_2)^2+(a_2+a_3)^2+(a_3+a_1)^2>0$
-Let $p$ be any scalar field with a constant gradient, for example $p(x,y,z)=x$. Then $\mathbf{T}\;\mathbf{grad}(p)$ is constant and so has zero divergence and curl.<|endoftext|>
-TITLE: Finite group with three proper subgroups
-QUESTION [5 upvotes]: The Klein-$4$ group is a finite group with exactly three subgroups $H$ such that $1<H<G$.<|endoftext|>
-TITLE: Can you create non transitive dice for any finite graph?
-QUESTION [9 upvotes]: Let's say you have a finite directed graph, with no two nodes that point at each other. Can we assign each node a dice, so that each node beats the node it is pointing at?
-This is easy for acyclic graphs, but it is possible for some non-acyclic graphs: see Nontransitive dice.
-By dice, I mean any probability distribution on the natural numbers (including those of infinite support). A dice beats another if the probability of it being higher than the other is more than a half.
-Can we assign nontransitive dice to an arbitrary graph?
-Also: Can this still work with certain infinite graphs?
-
-REPLY [10 votes]: It's possible for any directed graph that has no length-2 cycles. My algorithm is non-optimal, and requires $2e+2$ sides on each die, where $e$ is the number of edges in the graph. All dice not connected by an edge will be even competitors.
-It would probably be more proper to define this as an inductive proof, but I'm just going to describe how you'd build it up an edge at a time.
-Say your graph has $n$ nodes. Start out with $n$ identical 2-sided dice. Each having 0 on both sides will work fine.
-Pick an arbitrary edge in the graph, specifying that $D_a$ should beat $D_b$. All the dice currently have $k$ sides, and satisfy the requirements for previously added edges, and unconnected dice are even. Find $m$, the maximum of all the faces on the dice. To $D_a$, add two faces with $m+3$. To $D_b$, add two faces of $m+2$. To all other dice, add two faces, one of $m+1$ and one of $m+4$. Repeat this procedure for all edges in the graph, winding up with $2e+2$-sided dice.
-Now I need to show that this works. Let $p(x,y)$ be the probability that $D_x$ beats $D_y$ before one of these edge iterations, and let $p'(x,y)$ be the probability that $D'_x$ beats $D'_y$ after one of these edge iterations.
-First look at $D'_a$ and $D'_b$. $D'_a$ has a $\frac{2}{k+2}$ chance of rolling $m+3$ and winning over any $D'_b$ roll. $D'_a$ can also win half the time when both dice roll numbers appearing on their old versions, which occurs with probability $\frac{k^2}{(k+2)^2}$.
-$$\begin{align}
-p'(a,b) & = \frac{k^2}{(k+2)^2} p(a,b) + \frac{2}{k+2} \\
- & = \frac{k^2}{2(k+2)^2} + \frac{2}{k+2} \\
- & = \frac{k^2 + 4k + 8}{2(k+2)^2} \\
- & = \frac{(k+2)^2 + 4}{2(k+2)^2} \\
 & = \frac12 + \frac{2}{(k+2)^2} \text{ so $D'_a$ beats $D'_b$.}
-\end{align}$$
-Now look at $D'_a$ and some other $D'_x$ where $x$ is not $a$ or $b$. There are six cases. Two cases where $D'_a$ rolls either an old number or $m+3$. Three cases where $D'_x$ rolls either an old value, $m+1$, or $m+4$.
-$$\begin{align}
-p'(a,x) & = \frac{k}{k+2}\left[\frac{k}{k+2}p(a,x)\right] + \frac2{k+2}\left[\frac{k}{k+2} + \frac{1}{k+2}\right] \\
- & = \frac{k^2}{(k+2)^2}p(a,x) + \frac{2k+2}{(k+2)^2} \\
-& \text{Define $\epsilon$ so that $p(a,x) = \frac12 + \epsilon$} \\
-& = \frac{k^2 + 2k^2\epsilon}{2(k+2)^2} + \frac{2k+2}{(k+2)^2} \\
-& = \frac{k^2 + 4k + 4 + 2k^2\epsilon}{2(k+2)^2}\\
-& = \frac12 + \frac{k^2}{(k+2)^2}\epsilon\\
-\end{align}$$
-Now we know that the sign of $p(a,x) - \frac12$ is the same as the sign of $p'(a,x) - \frac12$, so any previous dominance relationship between $D'_a$ and $D'_x$ is preserved.
-Challenges between $D'_b$ and some $D'_x$ can be checked similarly, as can challenges between $D'_x$ and $D'_y$ where $x$ and $y$ designate dice other than $a$ and $b$.<|endoftext|>
-TITLE: Number of lines formed by sides of polygon
-QUESTION [7 upvotes]: Let $n\geq 3$, and consider an $n$-gon, not necessarily convex. What is the minimum number of distinct lines that are formed by sides of the $n$-gon?
-When $n=3,4,5$ the answer is $3,4,5$ respectively. For $n=6$ we can save one line, for example if we draw the "V-shaped" $6$-gon so that the two sides at the top of the V form the same line. For larger $n$ we should be able to halve the number of distinct lines by forming a "star shape" so that opposite sides of the star form the same line. But can we do better?
-
-REPLY [2 votes]: Can we do better? Yes:
-
-This figure has 28 edges forming 12 lines. If you count this as "2 indents on each side" then the generalisation to "$k$ indents per side" has $8k+12$ edges forming $2k+8$ lines, approaching asymptotically 4 edges per line.
-Are there better configurations? I'd conjecture almost certainly :-).
-Edit: In fact we can get the ratio arbitrarily low. In these figures with $k$ 'towers' and $k$ 'tiers' ($k \ge 2$) there are $8k^2$ edges in $6k+2$ lines ($2k+2$ horizontal and $4k$ vertical), giving at least $k$ edges per line.<|endoftext|>
-TITLE: Proving the surprising limit: $\lim\limits_{n \to 0} \frac{x^{n}-y^{n}}{n}$ $=$ log$\frac{x}{y}$
-QUESTION [16 upvotes]: A few months ago, while at school, my classmate asked me this curious question: What does $\frac{x^{n}-y^{n}}{n}$ tend to as $n$ tends to $0$? I thought for a few minutes, became impatient, and asked "What?" His reply, log$\frac{x}{y}$, was surprising, but his purported 'proof' was more surprising:
-Consider $\lim\limits_{n \to 0}\,\int_y^x t^{n-1}\, dt$. "Pushing the limit into the definite integral", we have $$\int_y^x \lim\limits_{n \to 0}\,t^{n-1}\, dt \implies \int_y^x \frac{1}{t}\, dt \implies \mathsf{log} \frac{x}{y}$$
-Leaving aside the fact that he had the inspiration to pull this integral out of thin air, is the limit allowed to pass into the definite integral? We hadn't learned Real Analysis (we were just taking a basic high school, hand-wavy single-variable calculus course), and I remember feeling very uneasy about the sorcery. I still am, hence, this question.
-I've since thought about approaching it using $\mathsf{L'Hospital}$, but I still feel uneasy, since it involves differentiating with respect to different variables, which is a little bit confusing. I'd also appreciate your help in this regard.
-If you have a better proof, I'll truly appreciate it.
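-A quick numerical check (an illustrative sketch) also supports the claimed value:
-
-import math
-
-x, y = 5.0, 2.0
-for n in (1e-2, 1e-4, 1e-6):
-    print(n, (x**n - y**n) / n)   # approaches log(x/y) as n -> 0
-print("log(x/y) =", math.log(x / y))  # 0.9162907...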
- -REPLY [2 votes]: \begin{align} -\lim_{n\to0}\frac{x^n-y^n}{n}&=\lim_{n\to0}\frac{e^{n\cdot \ln x}-e^{0\cdot \ln x}-(e^{n\cdot \ln y}-e^{0\cdot \ln y})}{n}\\ -&=\left(e^{n\cdot \ln x}\right)'|_{n=0}-\left(e^{n\cdot \ln y}\right)'|_{n=0}\\ -&=\ln x - \ln y -\end{align}<|endoftext|> -TITLE: What is the geometry behind $\frac{\tan 10^\circ}{\tan 20^\circ}=\frac{\tan 30^\circ}{\tan 50^\circ}$? -QUESTION [18 upvotes]: This identity is solvable by the help of trigonometry identities, but I guess there is an interesting and simple geometry interpretation behind this identity and I can't find it. - -I found it when I was thinking about World's Hardest Easy Geometry Problem - -REPLY [9 votes]: Consider a triangle $ABC$ with $\angle A = 10^\circ$, $\angle B = 150^\circ$, and $\angle C=20^\circ$. Let $O$ be the circumcenter of the triangle $ABC$. Then -$$\angle AOC = 360^\circ - 2 \angle B = 60^\circ,$$ -so triangle $AOC$ is equilateral. -Let $S$ be the circumcenter of the triangle $OBC$. Then -$$\angle BSC = 2\angle BOC = 4\angle BAC = 40^\circ$$ -and -$$\angle SCB = \frac{180^\circ - \angle BSC}2 = 70^\circ,$$ -so $\angle SCA = 50^\circ$. Denoting the intersection of $AC$ and $SB$ by $X$ we get -$$\angle CXS = 180^\circ - \angle SCA - \angle BSC = 90^\circ$$ so $AC \perp SB$. -Moreover $\triangle ASC \equiv \triangle ASO$ because $AC=AO$ and $SC=SO$. In particular $\angle CAS = \angle SAO$ and since $\angle CAO=60^\circ$, we have $\angle CAS = 30^\circ$. -Observe that -$$\frac{\tan \angle BAC}{\tan \angle ACB} = \frac{\frac{BX}{AX}}{\frac{BX}{CX}} = \frac{CX}{AX} = \frac{\frac{SX}{AX}}{\frac{SX}{CX}} = \frac{\tan \angle CAS}{\tan \angle SCA},$$ -therefore -$$\frac{\tan 10^\circ}{\tan 20^\circ} = \frac{\tan 30^\circ}{\tan 50^\circ}.$$<|endoftext|> -TITLE: Compute the (multiplicative) inverse of $4x+3$ in the field $\frac {\Bbb F_{11}[x]}{\langle x^2+1 \rangle}$? -QUESTION [6 upvotes]: So I am finding a polynomial $px+q$ ($p,q \in \Bbb F_{11}$) which is multiplicative inverse of $4x+3$ in $\frac {\Bbb F_{11}[x]}{\langle x^2+1 \rangle}$. -i.e. $[(4x+3)+\langle x^2+1 \rangle][(px+q)+\langle x^2+1 \rangle]=1+\langle x^2+1 \rangle$ -$\Rightarrow$ -$(4x+3)(px+q)+\langle x^2+1 \rangle=1+\langle x^2+1 \rangle$ -$\Rightarrow$ -$4px^2+(4q+3p)x+3q+\langle x^2+1 \rangle=1+\langle x^2+1 \rangle$. -We see that the remainder,when $(4x+3)(px+q)$ is divided by $x^2+1$, is $1$. -So by Division algorithm, -$$ -\require{enclose} -\begin{array}{r} -4p \\[-3pt] -x^2+1 \enclose{longdiv}{4px^2+(4q+3p)x+3q} \\[-3pt] -\underline{4px^2+4p} \\[-3pt] -(4q+3p)x+(3q-4p) \\[-3pt] -\end{array} -$$ -So I equate $(4q+3p)x+(3q-4p)=1$ and solve the simultaneous linear equations -$4q+3p=0, 3q-4p=1$. I get $p=6,q=1$ -Hence $6x+1$ is the required inverse. -I am pretty sure that the answer is correct but is the method to achieve it right? - -REPLY [5 votes]: Here's an alternative approach, doable since the vector space here is only two-dimensional. Multiplication by $4x+3$ corresponds to the matrix $\begin{bmatrix}3&7\\4&3\end{bmatrix}$ using the basis $\{1,x\}$. 
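-As an extra check of the product (an illustrative SymPy sketch, which the hand computation above does not depend on):
-
-from sympy import GF, Poly, symbols
-
-x = symbols('x')
-modulus = Poly(x**2 + 1, x, domain=GF(11))
-product = Poly((4*x + 3) * (6*x + 1), x, domain=GF(11))
-print(product.rem(modulus))  # the constant polynomial 1, confirming the inverse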
This matrix has inverse $\frac{1}{-19}\begin{bmatrix}3&-7\\-4&3\end{bmatrix}\equiv\frac{1}{3}\begin{bmatrix}3&-7\\-4&3\end{bmatrix}\equiv4\begin{bmatrix}3&4\\7&3\end{bmatrix}\equiv\begin{bmatrix}1&5\\6&1\end{bmatrix}$, which corresponds to multiplication by $1+6x$ (upon inspecting the first column).<|endoftext|>
-TITLE: Compute $\lim_{n \to +\infty} n^{-\frac12 \left(1+\frac{1}{n}\right)} \left(1^1 \cdot 2^2 \cdot 3^3 \cdots n^n \right)^{\frac{1}{n^2}}$
-QUESTION [7 upvotes]: How to compute
- $$\displaystyle \lim_{n \to +\infty} n^{-\dfrac12 \left(1+\dfrac{1}{n}\right)} \left(1^1\cdot 2^2 \cdot 3^3 \cdots n^n \right)^{\dfrac{1}{n^2}}$$
- I'm interested in more ways of computing the limit of this expression
-
-My proof:
-Let $u_n$ be that sequence; we have:
-\begin{eqnarray*}
-\ln u_n &=& -\frac{n+1}{2n}\ln n + \frac{1}{n^2}\sum_{k=1}^n k\ln k\\
-&=& -\frac{n+1}{2n}\ln n + \frac{1}{n^2}\sum_{k=1}^n k\ln \frac{k}{n}+\frac{1}{n^2}\sum_{k=1}^n k\ln n\\
-&=& \frac{1}{n^2}\sum_{k=1}^n k\ln \frac{k}{n}\\
-&=& \frac{1}{n}\sum_{k=1}^n \frac{k}{n}\ln \frac{k}{n}\\
-&\to&\int_0^1 x\ln x\,dx = -1/4
-\end{eqnarray*}
-Therefore the limit is $e^{-\frac{1}{4}}$
-
-REPLY [2 votes]: Another approach, considering $$A_n= n^{-\dfrac12 \left(1+\dfrac{1}{n}\right)} \left(\prod_{i=1}^n i^i \right)^{\dfrac{1}{n^2}}=n^{-\dfrac12 \left(1+\dfrac{1}{n}\right)} H(n)^{\dfrac{1}{n^2}}$$ where the hyperfactorial function appears. Taking logarithms $$\log(A_n)={-\dfrac12 \left(1+\dfrac{1}{n}\right)}\log(n)+\dfrac{1}{n^2}\log(H(n))$$ Now, using Stirling like approximations $$\log(H(n))=n^2 \left(\frac{1}{2} \log(n)-\frac{1}{4}\right)+\frac{1}{2} n \log (n)+\left(\log (A)+\frac{1}{12} \log
- \left(n\right)\right)+O\left(\frac{1}{n^2}\right)$$ where the Glaisher constant appears.
-All of that, once recombined, leads to $$\log(A_n)=-\frac{1}{4}+\frac{\log (A)+\frac{1}{12} \log
- \left(n\right)}{n^2}+O\left(\frac{1}{n^{5/2}}\right)$$ which shows the limit and how it is approached.<|endoftext|>
-TITLE: How do I simplify and evaluate the limit of $(\sqrt x - 1)/(\sqrt[3] x - 1)$ as $x\to 1$?
-QUESTION [8 upvotes]: Consider this limit:
-$$ \lim_{x \to 1} \frac{\sqrt x - 1}{ \sqrt[3] x - 1}
-$$
-The answer is given to be 3/2 in the textbook.
-Our math professor skipped this question, telling us it is not in our syllabus, but how can it be solved?
-
-REPLY [2 votes]: Another way: change variable $x=1+y$; so $$A=\frac{\sqrt x - 1}{ \sqrt[3] x - 1}=\frac{\sqrt{1+y} - 1}{ \sqrt[3] {1+y} - 1}$$ Now, using the fact that, close to $y=0$ (using the generalized binomial theorem as lulu commented) $$(1+y)^a=1+a y+\frac{1}{2}a \left(a-1\right) y^2+O\left(y^3\right)$$ which makes $$A=\frac{1+\frac{y}{2}-\frac{y^2}{8}+O\left(y^3\right)-1}{1+\frac{y}{3}-\frac{y^2}{9}+O\left(y^3\right)-1}\approx \frac{\frac{y}{2}-\frac{y^2}{8}}{\frac{y}{3}-\frac{y^2}{9}}=\frac{\frac{1}{2}-\frac{y}{8}}{\frac{1}{3}-\frac{y}{9}}$$ Now make $y\to0$ to get the result.
-You can even get more if you know long division. Omitting the high order terms, the last expression is $\sim\frac{3}{2}+\frac{y}{8}$ which reveals not only the limit but also how it is approached.<|endoftext|>
-TITLE: $\sigma$-algebra generated by random variable : Show that if $\sigma(X)=\sigma(Y)$ then $\sigma(X+Y)\subseteq \sigma(X)$
-QUESTION [8 upvotes]: Let $(\Omega,\mathcal{F},P)$ be a probability space and $X$ be a random variable.
The $\sigma$-algebra generated by $X$ is defined as
-$$\sigma(X):=\{X^{-1}(B)\; | \; B\in B_{\mathbb{R}}\}$$
-where $B_{\mathbb{R}}$ is the Borel $\sigma$-algebra of subsets of $\mathbb{R}$. The problem statement is as follows :
-Let $(\Omega,\mathcal{F},P)$ be a probability space and let $X,Y$ be random variables such that $\sigma(X)=\sigma(Y)$. Show that $\sigma(X+Y)\subseteq \sigma(X)$.
-I can show that $X+Y$ is a random variable. So, how do I go about proving this?
-
-REPLY [10 votes]: By definition, $\sigma(Z)$ denotes the smallest $\sigma$-algebra $\Sigma$ on $\Omega$ such that $Z: (\Omega, \Sigma) \to (\mathbb{R},\mathcal{B}(\mathbb{R}))$ is measurable.
-This means in particular that $X:(\Omega,\sigma(X)) \to (\mathbb{R},\mathcal{B}(\mathbb{R}))$ is measurable. Since $\sigma(Y) = \sigma(X)$, we also have that
-$$Y: (\Omega,\sigma(X)) \to (\mathbb{R},\mathcal{B}(\mathbb{R}))$$
-is measurable. Consequently, the sum
-$$X+Y: (\Omega,\sigma(X)) \to (\mathbb{R},\mathcal{B}(\mathbb{R}))$$
-is measurable as a sum of two measurable random variables. Since $\sigma(X+Y)$ is the smallest $\sigma$-algebra $\Sigma$ such that
-$$X+Y: (\Omega,\Sigma) \to (\mathbb{R},\mathcal{B}(\mathbb{R}))$$
-is measurable, this shows $\sigma(X+Y) \subseteq \sigma(X)$.<|endoftext|>
-TITLE: New largest prime number discovery - what's all the fuss
-QUESTION [5 upvotes]: So I've read about the latest largest prime number discovery (M74207281), but I find it hard to understand what's the big deal, because using Euclid's proof of the infinitude of primes we can generate primes as large as we want.
-I'll be happy to know what I'm missing
-
-REPLY [2 votes]: TL;DR Euclid's proof guarantees that there will be a larger prime than the largest yet known, but does not appear to be a particularly efficient way of actually finding one.
-As for why we would want to actually find a larger prime than any yet known: because it's there?
-
-Reading one of the comments, I notice that you have heard of the
-Euclid-Mullin sequence,
-$$ a_n = \mathrm{lpf}\left(1+\prod_{k=1}^{n-1}a_k\right). $$
-The right-hand side of this equation contains the famous
-formula from Euclid's proof of the infinitude of primes.
-But it also applies the "$\mathrm{lpf}$" function to this formula,
-and there's the rub.
-That's the "least prime factor" function, and it not only requires
-you to factorize a very large number, it also requires you to
-determine whether the factors you have found are prime.
-So the way to find a new world's largest prime using Euclid's proof
-would be as follows:
-
-Collect together all the known primes.
-Multiply all the known primes together (not a small task; some of these primes run into millions of digits).
-Add one.
-Test the resulting number from the previous step to determine whether it is prime.
-If the result of step 4 is that the number is prime, skip ahead to step 8. But very likely the number is not prime, and you must continue to step 6.
-Use the results of step 4 to write a factorization of the number from step 3. Insert those factors in a list of unique factors found in this step.
-Remove one number from the list in step 6, and return to step 4, using this number as the new input to step 4.
-Compare the prime number found in step 4 to the list of known primes that you last used in step 2. If your prime is larger than any of the previously-known primes, congratulations, you have found the new world's largest prime number!
But it is very likely that your new prime will fit in one of the many huge gaps in the sequence of previously-discovered primes; in that case, add the new prime to the list of known primes, and then either return to step 7 (if there are still numbers in the list from step 6) or start the process over from step 2. - -The reason this is so complicated is that Euclid's formula does not directly produce a new prime; rather, it gives you a number whose prime factors are guaranteed to be new. You still have to test the number for primality and (very likely) factorize it somehow in order to find one of these new primes. -Moreover, if you collected together all the $N$ known primes, they are not the first $N$ primes; there are enormous tracts of relatively "unexplored territory" between the end of the largest "first $M$ primes" table and the largest known Mersenne prime. So after some amount of primality-testing and factorization applied to the number you got from Euclid's formula, you may well find that all the new primes it produced fell into the gaps in the list of known primes. Eventually, if you reapply the formula often enough, you will find a new largest prime, but how many iterations will that take? It could take a very, very long time before you find any new primes that don't just fill in the huge gaps between the already-known ones. -In contrast, testing new candidates for a Mersenne prime has a low probability of success each time, but relatively efficient ways to confirm success when it occurs; and when it succeeds it is practically guaranteed to be a new world's record. (The only way it wouldn't be a new record is if some other computer was concurrently testing an even larger Mersenne prime.)<|endoftext|> -TITLE: The Inverse of a Fourth Order Tensor -QUESTION [8 upvotes]: Suppose that we have a fourth order tensor ${\bf{A}}$ -$${\bf{A}}=A_{ijkl} {\bf{e}}_i \otimes {\bf{e}}_j \otimes {\bf{e}}_k \otimes {\bf{e}}_l$$ -in the orthonormal basis $\{{\bf{e}}_1,{\bf{e}}_2,{\bf{e}}_3\}$ for $\mathbb{R}^3$. Then we define the inverse of ${\bf{A}}$ denoted by ${\bf{B}}$ as follows -$${\bf{A}} : {\bf{B}} = {\bf{B}} : {\bf{A}} = {\bf{I}}$$ -where ${\bf{I}}$ is the fourth order identity tensor -$$\begin{align} -{\bf{I}} &=I_{ijkl} {\bf{e}}_i \otimes {\bf{e}}_j \otimes {\bf{e}}_k \otimes {\bf{e}}_l\\ -&=\delta_{ik} \delta_{jl} {\bf{e}}_i \otimes {\bf{e}}_j \otimes {\bf{e}}_k \otimes {\bf{e}}_l -\end{align}$$ -where $\delta_{ij}$ is the Kronecker's Delta -$$\delta_{ij}= -\begin{cases} -1 & i=j \\ -0 & i \ne j -\end{cases}$$ -and $:$ is the double contraction defined by -$${\bf{A}} : {\bf{B}}=A_{ijmn}B_{mnkl} {\bf{e}}_i \otimes {\bf{e}}_j \otimes {\bf{e}}_k \otimes {\bf{e}}_l$$ -I want to write a code to compute ${\bf{B}}$. However, I could not find any good resource on the net that gives the elements of ${\bf{B}}$ in terms of the elements of ${\bf{A}}$. How should I compute ${\bf{B}} = {\bf{A}}^{-1}$? -An application of this can be found in the theory of elasticity, where the fourth order tensor have the following symmetries -$$A_{ijkl}=A_{jikl}=A_{ijlk}=A_{klij}.$$ -That would be great if you can also touch upon this. - -REPLY [4 votes]: In general -$$ -\sum_{m=1}^N\sum_{n=1}^N A_{ijmn}B_{mnkl} = \delta_{ik}\delta_{jl} -$$ -for $i,j,k,l = 1,\ldots,N$. 
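-For concreteness, the kind of computation I have in mind is the following NumPy sketch (whether this unfolding is actually justified is part of the question):
-
-import numpy as np
-
-N = 3
-rng = np.random.default_rng(0)
-A = rng.normal(size=(N, N, N, N))
-
-# Unfold index pairs (i,j) and (k,l), invert as a matrix, and fold back.
-B = np.linalg.inv(A.reshape(N * N, N * N)).reshape(N, N, N, N)
-
-# Check the double contraction A : B against I_{ijkl} = delta_ik delta_jl.
-AB = np.einsum('ijmn,mnkl->ijkl', A, B)
-I = np.einsum('ik,jl->ijkl', np.eye(N), np.eye(N))
-print(np.allclose(AB, I))  # True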
Now suppose the tensor $\mathbf{A}$ is unfolded as the $N^2\times N^2$ matrix given by -$$ -A = \left[ -\begin{array}{ccccccccc} -A_{1111} & A_{1112} & \cdots & A_{111N} & \cdots & A_{11N1} & A_{11N2} & \cdots & A_{11NN}\\ -A_{1211} & A_{1212} & \cdots & A_{121N} & \cdots & A_{12N1} & A_{12N2} & \cdots & A_{12NN}\\ -\vdots & \vdots & &\vdots & & \vdots & \vdots & & \vdots\\ -A_{NN11} & A_{NN12} & \cdots & A_{NN1N} & \cdots & A_{NNN1} & A_{NNN2} & \cdots & A_{NNNN}\\ -\end{array} -\right] -$$ -If the same is done for $\mathbf{B}$, then the matrix product $AB$ corresponds to the unfolded tensor product $\mathbf{A} : \mathbf{B}$. Let $E$ be the same unfolding of the tensor identity matrix $\mathbf{I}$. Then -$$ -\mathbf{A} : \mathbf{B} = \mathbf{I} \iff AB = E -$$ -Similarly, -$$ -\mathbf{B} : \mathbf{A} = \mathbf{I} \iff BA = E -$$ -In order for such a matrix $B$ to exist it follows that $AB = BA$ must hold, i.e., the product of $A$ and $B$ must commute. Now suppose $A$ is invertible. Then $B$ must satisfy the equations -$$ -B = A^{-1}E \quad\text{and}\quad B = EA^{-1} \iff A^{-1}E = EA^{-1} \iff EA = AE -$$ -Thus, in the case that $A$ is invertible, the product of $A$ and $E$ must commute in order for such a matrix $B$ to exist. If these conditions are met, then you should be able to compute a unique $B$ from the equation $AB = E$ via the LU factorization or by some other matrix factorization.<|endoftext|> -TITLE: Proving minimum number of chairs is $567$ -QUESTION [5 upvotes]: Hints only please! -I am trying to figure this out somehow. A row can have as many girls, and a column can have as many boys. -Proof by contradiction seems like a good technique, but I am not sure how much it would help here. - -REPLY [6 votes]: Let $r$ be the number of rows and $c$ be the number of columns. There are $14r$ boys and $10c$ girls along with $3$ empty seats. There are $rc$ seats in all. Thus, $14r+10c+3=rc$. -Subtracting $14r+3$ from both sides shows us that $10c=(c-14)r-3$ or $10c \equiv -3 \pmod r$. -Subtracting $10c+3$ from both sides shows us that $14r=(r-10)c-3$ or $14r \equiv -3 \pmod c$. -Not sure if this will lead you to the solution, but hopefully, these equations help!<|endoftext|> -TITLE: What is the expected value of cosine of a multivariate Gaussian? -QUESTION [5 upvotes]: Suppose $X \sim \mathcal{N}\left(\mu, \Sigma\right)$. How do I evaluate $\operatorname{E}\left[\cos \left(t^{T}X \right) \right] $ and $\operatorname{E}\left[\sin \left(t^{T}X\right) \right] $? Does this have to do with the characteristic function $\operatorname{E}\left[e^{it^{T}X} \right] =\exp \left\{i\mu^{T}t -\frac{1}{2}t^{T}\Sigma t\right\}$? - -REPLY [3 votes]: I will post an answer to my own question. 
Throughout the answer, Euler's formula is used extensively:
-$$
-e^{ix} = \cos x + i \sin x
-$$
-Plugging in $x = t^{T}X$,
-$$
-\begin{align*} e^{it^{T}X} &= \cos \left(t^{T}X\right) + i \sin \left(t^{T}X\right) \\ \mathbb{E}\left[e^{it^{T}X} \right] &= \mathbb{E} \left[\cos \left(t^{T}X\right) \right] + i \mathbb{E}\left[\sin \left(t^{T}X\right) \right] \\ &= \exp\left\{i\mu^{T}t-\frac{1}{2}t^{T}\Sigma t \right\} \left(\text{characteristic function} \right)\\ &= \exp\left\{i\mu^{T}t \right\}\exp\left\{-\frac{1}{2}t^{T}\Sigma t \right\}\\ &= \left(\cos \left(\mu^{T}t\right) + i\sin\left(\mu^{T}t\right) \right)\exp \left\{-\frac{1}{2}t^{T}\Sigma t \right\} \end{align*}
-$$
-Therefore, comparing the real and imaginary parts,
-$$
-\begin{align*}\mathbb{E}\left[\cos \left(t^{T}X\right) \right] &= \cos \left(\mu^{T}t\right)\exp \left\{-\frac{1}{2}t^{T}\Sigma t \right\}\\ \mathbb{E}\left[\sin \left(t^{T}X\right) \right] &= \sin \left(\mu^{T}t\right)\exp\left\{ -\frac{1}{2}t^{T}\Sigma t\right\}. \end{align*}
-$$<|endoftext|>
-TITLE: What is the area of the circle?
-QUESTION [16 upvotes]: In the following diagram, $AB = 4$ and $AC = 3$. What is the area of the circle?
-I can't find any way to solve this.
-
-REPLY [3 votes]: Hint:
-Let $M$ be the fourth vertex of the rectangle (on the circle), and denote by $x$ its polar angle, w.r.t. the polar axis through the centre of the circle. You can express $\sin x$ and $\cos x$ as functions of the radius $R$ of the circle. Then write down Pythagoras' identity $\;\sin^2x+\cos^2x=1\;$ to obtain a quadratic equation for $R$. Don't forget the solution, if any, is subject to the condition $R\ge 4$.<|endoftext|>
-TITLE: How to find irrational numbers between rationals. (And is my method correct?)
-QUESTION [23 upvotes]: I have a question from an A-level revision book:
-
-Find an irrational number which lies between $\frac34$ and $\frac78$.
-
-What is the correct method for doing this? Here is my method:
-
-Square numerators and denominators of the (in this case: both) fractions.
-Find LCD (Lowest Common Denominator) for denominators and convert all fractions to this LCD.
-The numerators of the fractions are now perfect squares. Write a new fraction with a numerator that is not a perfect square and between the original fractions' numerators.
-Put the new fraction inside a square root. The new fraction (inside a square root sign) is now a surd, and an irrational number (also an irrational fraction).
-
-Using this method, $\sqrt{\frac{37}{64}}$ would be one such irrational number between $\frac34$ and $\frac78$.
-Please let me know if I've made any glaring lapses in logic. If I'm correct, a fraction with an irrational (surd) numerator is itself an irrational number. And square roots of non-perfect-square numbers are all irrational. Therefore a fraction with an irrational numerator is irrational. (Pardon me if I'm not using all the correct terminology yet.)
-Many thanks in advance
-PS. I want to check I understand these relatively basic concepts correctly. (I apologize in advance if I have made a schoolboy error or stupid mistake. I'm trying to take the A-level mathematics exams in nearly 4 months, the reason is a rather long story, and I'm refreshing all my mathematics which I did at school. I did well at Math GCSE, although it was the intermediate not higher tier, and I'm considering blitzing all the A-level maths in a little over 3 months to take exams starting mid-May.)
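-A quick numerical check of the candidate value (an illustrative sketch, not part of the method itself):
-
-from math import sqrt
-
-candidate = sqrt(37 / 64)         # about 0.7603
-print(3 / 4 < candidate < 7 / 8)  # True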
-
-REPLY [2 votes]: We seek an irrational $x\in\mathbb{R}$ such that $\frac{3}{4}<x<\frac{7}{8}$. Since $24^2=576<577<784=28^2$, we have $$\frac{1}{24}>\frac{1}{\sqrt{577}}>\frac{1}{28},$$ and hence, $$\frac{7}{8}>\frac{21}{\sqrt{577}}>\frac{3}{4}.$$ Since $577$ is one more than a square, it follows that $577$ is not square itself, and so $\sqrt{577}$ is irrational.<|endoftext|>
-TITLE: uniform boundedness principle for $L^{1}$
-QUESTION [5 upvotes]: I read this theorem in V.I. Bogachev's Measure Theory, vol. 1.
-A family $\mathcal{F}\subset L_{1}(\mu)$, where the measure $\mu$ takes values in $[0,+\infty]$, is norm bounded in $L_{1}(\mu)$ precisely when for every measurable set $A$ one has $$\sup_{f\in\mathcal{F}}|\int_{A}f d\mu |<\infty $$
-Can anyone tell me where I can find a simple proof of this theorem, because I can't understand the proof from Bogachev. Thanks.
-
-REPLY [2 votes]: I wouldn't know about the proof in the book, but here's a proof. It could probably be streamlined some - you should see what it looked like a few days ago. Going to change some of the notation; this is going to be enough typing as it is.
-Going to assume we're talking about real-valued functions, so that for every $f$ there exists $E$ with $\left|\int_E f\right|\ge\frac12||f||_1$.
-Theorem Suppose $\mu$ is a measure on (some $\sigma$-algebra on) $X$, $S\subset L^1(\mu)$, and $\sup_{f\in S}||f||_1=\infty$. Then there exists a measurable set $E$ with $$\sup_{f\in S}\left|\int_Ef\right|=\infty.$$
-Notation: The letter $f$ will always refer to an element of $S$; $E$ and $F$ will always be measurable sets (or equivalence classes of measurable sets modulo null sets).
-Proof: First we lop a big chunk off the top: Wlog $S$ is countable; hence wlog $\mu$ is $\sigma$-finite. Now we nibble away at the bottom:
-Case 1 $\mu$ is finite and non-atomic.
-This is the meat of it. It's also the cool part: We imitate the standard proof of the standard uniform boundedness principle, with measurable sets instead of elements of some vector space.
-Let $\mathcal A$ be the measure algebra; that is, the algebra of measurable sets modulo null sets. For $E,F\in\mathcal A$ define $$d(E,F)=\mu(E\triangle F)=||\chi_E-\chi_F||_1.$$Now $\mathcal A$ is a complete metric space (complete because it's isometric with a closed subset of $L^1$). Define $$A_n=\{E\in\mathcal A:\sup_f\left|\int_Ef\right|>n\}.$$
-$A_n$ is open in $\mathcal A$: Say $E\in A_n$ and choose $f$ so $$\left|\int_Ef\right|-n=\epsilon>0.$$ There exists $\delta>0$ so the integral of $|f|$ over any set of measure less than $\delta$ is less than $\epsilon$. Hence if $d(E,F)<\delta$ we have $$\left|\int_{F} f\right|\ge \left|\int_{E} f\right|-\int_{E\triangle F}|f|>n,$$so $F\in A_n$.
-$A_n$ is dense in $\mathcal A$: Say $E\in\mathcal A$ and let $\epsilon>0$. Write $$X=\bigcup_{j=1}^NE_j,$$ where $E_j\cap E_k=\emptyset$ and $$\mu(E_j)<\epsilon.$$ Choose $f$ with $||f||_1>4Nn$. Choose $F$ so $$\left|\int_{F} f\right|>2Nn.$$Now there exists $j$ with$$\left|\int_{F\cap E_j} f\right|>2n.$$We want to show there exists $E'\in A_n$ with $d(E,E')<\epsilon$. If $\left|\int_{E\setminus(F\cap E_j)} f\right|>n$ then $E'=E\setminus(F\cap E_j)$ works; if not the triangle inequality shows that $E'=E\cup(F\cap E_j)$ works.
-So the Baire category theorem shows that $\bigcap A_n\ne\emptyset$.
-Case 2 $\mu$ is non-atomic. If there exists $E$ with $\mu(E)<\infty$ and $\sup_f\int_E|f|=\infty$ we're done by Case 1. Suppose not.
-Suppose we've chosen $f_1,\dots,f_n$ and $E_n$ so that $\mu(E_n)<\infty$ and $$\left|\int_{E_n} f_j\right|>j\quad(1\le j\le n).$$There exists $F$ with $\mu(F)<\infty$, $E_n\subset F$, and such that if $E_{n+1}$ is any set with $$E_{n+1}\cap F=E_n$$then we will still have $$\left|\int_{E_{n+1}} f_j\right|>j\quad(1\le j\le n).$$Choose $c$ so $\int_F|f|\le c$ for all $f$. Choose $f_{n+1}$ with $||f_{n+1}||_1>3c+2(n+1)$. Then $\int_{X\setminus F}|f_{n+1}|>2c+2(n+1)$, so there exists $F_n\subset X\setminus F$ with $\left|\int_{F_n} f_{n+1}\right|>c+n+1$. If we let $E_{n+1}=E_n\cup F_n$ then we have $$\left|\int_{E_{n+1}} f_j\right|>j\quad(1\le j\le n+1).$$(For $1\le j\le n$ this follows by the comments above and for $j=n+1$ it uses the triangle inequality.)
-So if $E=\bigcup E_n$ then $$\left|\int_{E} f_j\right|\ge j$$for all $j$.
-Case 3 $X$ is a countable union of atoms. We may as well assume $\mu$ is a measure on $\Bbb N$. The argument in this case is really just like the argument in Case 2; details available on request.
-Case 4 $\mu$ is $\sigma$-finite. Write $X=A_2\cup A_3$ where $\mu$ is non-atomic on $A_2$ and $A_3$ is a countable union of atoms. There exists $j=2,3$ such that $\int_{A_j}|f|$ is unbounded; we are done by Case $j$ above.<|endoftext|>
-TITLE: Arithmetic-quadratic mean and other "means by limits of means"
-QUESTION [9 upvotes]: For $x,y$ positive real numbers, and $p\neq 0$ real, define the Hölder $p$-mean
-$$M_p(x,y) := \left(\frac{x^p+y^p}{2}\right)^{1/p}$$
-whereas
-$$M_0(x,y) := \sqrt{xy}$$
-is the limit of $M_p(x,y)$ when $p\to 0$, i.e., the geometric mean.
-Furthermore, when $p,q$ are two real numbers, define $L_{p,q}(x,y)$ as the obvious fixed point satisfying the equation
-$$L_{p,q}(x,y) = L_{p,q}(M_p(x,y),M_q(x,y))$$
-—meaning that $L_{p,q}(x,y)$ is defined as the common limit of the sequences $(x_n)$ and $(y_n)$ such that $(x_0,y_0) = (x,y)$ and $(x_{n+1},y_{n+1}) = (M_p(x_n,y_n),M_q(x_n,y_n))$. (Showing convergence is easy, and evidently, $L_{p,q} = L_{q,p}$.)
-Thus, $L_{0,1}(x,y)$ is the famous arithmetic-geometric mean.
-Among other things, all these functions satisfy $\min(x,y) \leq F(x,y) \leq \max(x,y)$, as well as $F(y,x) = F(x,y)$ and $F(\lambda x, \lambda y) = \lambda F(x,y)$ — so we could just define them from $F(1,x)$, which is continuous and monotonically increasing with value $1$ at $1$. It is well-known that $M_p(x,y) < M_q(x,y)$ whenever $x\neq y$ and $p<q$.<|endoftext|>
-TITLE: Show that $\lim_{n\to\infty}n+n^2 \log\left(\frac{n}{n+1}\right)= 1/2$
-QUESTION [5 upvotes]: Show that $$\lim_{n\to\infty}\left[n+n^2 \log\left(\frac{n}{n+1}\right)\right]= \frac{1}{2}$$
-I can't see how this limit comes about. It says that it comes from the sequential criterion of limits.
-
-REPLY [2 votes]: Let $n=\frac{1}{t}\implies t\to 0$ as $n\to \infty$
-$$\lim_{n\to \infty}\left(n+n^2\log\left(\frac{n}{n+1}\right)\right)$$
-$$=\lim_{t\to 0}\left(\frac{1}{t}+\frac{1}{t^2}\log\left(\frac{1}{1+t}\right)\right)$$
-$$=\lim_{t\to 0}\left(\frac{\underbrace{t-\log\left(1+t\right)}_{\longrightarrow0}}{\underbrace{t^2}_{\longrightarrow0}}\right)$$
-using L'Hospital's rule for $\frac 00$ form,
-$$=\lim_{t\to 0}\left(\frac{1-\frac{1}{1+t}}{2t}\right)$$
-$$=\lim_{t\to 0}\left(\frac{t}{2t(t+1)}\right)$$
-$$=\frac 12\lim_{t\to 0}\left(\frac{1}{t+1}\right)$$
-$$=\frac 12\left(\frac{1}{0+1}\right)=\color{red}{\frac 12}$$<|endoftext|>
-TITLE: Riemannian metrics on homogeneous spaces
-QUESTION [11 upvotes]: Let G be a Lie group and H be a compact subgroup.
The (left) coset space G/H is, up to isomorphism, a smooth homogeneous manifold M. My question is, is it possible to impose an explicit Riemannian structure on this manifold by specifying G as an isometry group with corresponding isotropy group H? For example, consider the group SO(3). The coset space SO(3)/SO(2) is then isomorphic to the 2-sphere. If one specifies SO(3) as the group of isometries, is it possible to recover an explicit expression for the metric? Would this necessarily be the "traditional" metric inherited from embedding in Euclidean 3D space? Thanks
-
-REPLY [4 votes]: The answer to your question is no for several reasons.
-First, just because you can write $M = G/H$ does not mean that any of the natural metrics on $M$ have isometry group $G$ - it is sometimes larger.
-For example, consider the homogeneous space $S^6 = G_2/SU(3)$. As YCor states, the set of $G_2$ invariant metrics on $G_2/SU(3)$ is in bijection with $SU(3)$ invariant inner products on $\mathfrak{g}/\mathfrak{h}$ under the adjoint action. Well, in this case, the adjoint action of $SU(3)$ on the $6$-d real vector space $\mathfrak{g}/\mathfrak{h}$ is nothing but the usual $SU(3)$ action on $\mathbb{C}^3\cong \mathbb{R}^6$. In particular, the action is transitive on the unit sphere, hence irreducible.
-It follows that there is, up to scale, a unique invariant metric on $S^6 = G_2/SU(3)$. But the usual round metric is one example of such an invariant metric, so it must be the only example (up to scaling) - and it has isometry group $O(7)$, strictly containing $G_2$.
-Another issue is that there may be multiple metrics which are $G$-invariant. This is already hiding in point 1 of YCor's post, but I wanted to make it more explicit. For example, consider $S^3 = SU(2)$. How many metrics (not including scaling) are $SU(2)$ invariant? Well, of course the usual round metric is, but here is another example. Thinking of $S^3\subseteq \mathbb{C}^2$, at each point $p\in S^3$, we can consider the tangent vector $ip$. Define a new Riemannian metric where the vector $ip$ has length $\lambda \neq 1$, but where the metric on the orthogonal complement is the usual round metric. This new metric is still $SU(2)$ invariant for any choice of $\lambda > 0$, but is only round if $\lambda = 1$. (This new metric gives a so called Berger sphere.)<|endoftext|>
-TITLE: Study of algebraic structures analogous to the ring of smooth functions and module of vector fields
-QUESTION [5 upvotes]: $\newcommand{\Ga}{\Gamma}$
-Let $M$ be a smooth manifold. $\Ga(TM)$ is a module over the ring of smooth (real) functions (which is also an algebra, and denoted by $C^{\infty}(M)$).
-Also, each $X \in \Ga(TM)$ defines a derivation on $C^{\infty}(M)$.
-That is, we have two operations:
-$(1)$ "scalar multiplication" (satisfies the module axioms):
-$C^{\infty}(M) \times \Ga(TM) \to \Ga(TM) \, , (f,X) \mapsto fX$,
-$(2)$ "differentiation":
-$\Ga(TM) \times C^{\infty}(M) \to C^{\infty}(M) \, , (X,f) \mapsto Xf$
-satisfying:
-$(1) \, (X+Y)f=Xf+Yf$
-$(2) \, (gX)f=g(Xf)$
-$(3) \, X(fg)=g(Xf)+f(Xg)$ (derivation property)
-
-Is there any literature from an algebraic perspective on this situation; i.e. on pairs $(M,R)$ where $M$ is an $R$-module, and we also have a "differentiation" operation:
-$M \times R \to R \, , (m,r) \mapsto mr$, satisfying:
-$(1)$ $(m+m')r=mr+m'r$
-$(2)$ $(r'm)r=r'(mr)$
-$(3)$ $m(rr')=r(mr')+r'(mr)$
-Can we say anything interesting about this interplay algebraically? Has this phenomenon been studied from this point of view?
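-For concreteness, the motivating axioms can be machine-checked in the one-dimensional case $R=C^{\infty}(\mathbb{R})$, $M=\Gamma(T\mathbb{R})$ (an illustrative SymPy sketch):
-
-import sympy as sp
-
-t = sp.symbols('t')
-f, g, a = sp.Function('f')(t), sp.Function('g')(t), sp.Function('a')(t)
-
-# A vector field on R is a(t) d/dt; it acts on a function h by a * h'.
-def X(a, h):
-    return a * sp.diff(h, t)
-
-# Derivation property: X(fg) = g*(Xf) + f*(Xg)
-print(sp.simplify(X(a, f * g) - (g * X(a, f) + f * X(a, g))) == 0)  # True
-# Module compatibility: (g*X)f = g*(Xf)
-print(sp.simplify(X(g * a, f) - g * X(a, f)) == 0)  # True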
- -REPLY [4 votes]: Actually you have much more structure than this; most importantly, vector fields also have a Lie bracket, and the Lie derivative gives an action of the corresponding Lie algebra. One way to package up all of this structure is to talk about a Gerstenhaber algebra structure on polyvector fields (sections of exterior powers of the tangent bundle). -Polyvector fields have a multiplication, but they also have a Lie bracket of degree $-1$ called the Schouten-Nijenhuis bracket which reproduces both the Lie derivative and the Lie bracket of vector fields. The interaction between these is a version of the same interaction for a Poisson algebra except that there are a lot of signs.<|endoftext|> -TITLE: Commutativity of "extension" and "taking the radical" of ideals -QUESTION [6 upvotes]: Let $K$ be a field (not necessarily algebraically closed) and $\overline{K}$ its algebraic closure. -By $K[\text{X}]$, I mean $K[X_1,...,X_n]$. - -Is it true that the operations of "extension" and "taking the radical" commute for: -$$\mathfrak{a}\subset{K}[\text{X}]\subset\overline{K}[X],$$ -i.e., is it true that $\sqrt{\mathfrak{a}\overline{K}[X]}=\sqrt{\mathfrak{a}}\,\overline{K}[X]$? - -You may also assume $K$ is a perfect field if necessary. -Thoughts: -Did not make much progress at all. Not really sure what to do with the algebraic closure. The natural generalisation of this result to arbitrary ring extensions is false. -Background: -The motivation for this is very basic and possibly pedantic and is due to wanting to say immediately that an (affine) algebraic set like $(X-Y)^2=1$ (i.e. zeros of $(X-Y)^2-1\in K[\text{X}]$ in $\mathbb{A}^n(\overline{K})$) is "defined over $K$". All definitions are as according to Chapter 1 of Silverman's book on Elliptic Curves wherein the author claims that such statements are clear (c.f. examples 1.3.1-1.3.3). The definition of "defined over $K$" is given in the second "Definition" of Chapter 1. - -REPLY [4 votes]: The question reduces to showing that if $\mathfrak a$ is a radical ideal, then its extension is also radical. Equivalently, if $K[X]/\mathfrak a$ is reduced, then $(K[X]/\mathfrak a)\otimes_K\overline K$ is also reduced. If $K$ is a perfect field (and this is an assumption on page 1 of Silverman's book), then this holds as it is pointed out in this answer and proved here.<|endoftext|> -TITLE: How to picture a first countable space? -QUESTION [9 upvotes]: I find myself forgetting what it means for a space to be first countable on a frequent basis. -This is unlike, say, other terminology such as "Hausdorff space", where you can picture balls separating each pair of points. When you define first countable, you have to first recall what a "local neighborhood" (or was it countable neighborhood? Open neighborhood?) is. And I find myself forgetting what that term exactly is as well. -What is a surefire way to remember the definition of first countable? What do you see when you picture a first countable space? - -REPLY [7 votes]: I picture a (countable) sequence of shrinking neighborhoods around a point in $\mathbb{R}^2$. (Specifically, the balls $B_{1/n} (x)$.) If I draw any blob around this point, I just have to wait a bit and my neighborhoods will shrink enough so that they are eventually contained in it. -If something is first countable, then you can check if it is second countable. (Being second countable is stronger.)
-And for being second countable, the picture is that you have a countable collection of open sets that get fine enough so that any open set can be described as a union of some of them. For this I again picture $\mathbb{R}^2$, and fuzzily the collection of $1/n$ balls around the rational points, and maybe imagine taking some wiggly open set and filling it out with these little balls.<|endoftext|> -TITLE: Are the random variables $X + Y$ and $X - Y$ independent if $X, Y$ are distributed normal? -QUESTION [7 upvotes]: Let $X, Y$ be independent random variables such that $X,Y \sim N(\mu,\sigma^2)$, show that $X+Y$ and $X-Y$ are independent using the moment generating function. -I know that the moment generating function of a sum of independent random variables is the product of the MGFs. -So, I'm trying to solve that but I don't know if my process is correct: -$M_{X+Y}(t_1,t_2)=M_X(t_1)M_Y(t_2)=M^2_{N(\mu,\sigma^2)}(t)$ -? - -REPLY [3 votes]: An answer using moment generating functions has already been given, but please also consider the following simpler approach. -Note that two jointly normal random variables are independent if and only if they are uncorrelated. Since $X,Y$ are independent normals, the pair $(X,Y)$ is normal. Any linear transformation of a normal is also a normal, so $(X+Y, X-Y)$ is normal, i.e. $X+Y$ and $X-Y$ are jointly normal. Then $E[(X+Y)(X-Y)] = E[X^2 - Y^2] = (\sigma^2+\mu^2) - (\sigma^2+\mu^2) = 0 = (2 \mu) \cdot 0 = E[X+Y]E[X-Y]$. Thus $X+Y$ and $X-Y$ are uncorrelated, hence independent.<|endoftext|> -TITLE: Is $f(x)=x|x|$ differentiable everywhere? -QUESTION [7 upvotes]: When $f$ is a function from $\mathbb{R}$ to $\mathbb{R}$. I know $\lim_{x \to 0+}\frac{f(x)-f(0)}{x-0}=\lim_{x \to 0+}\frac{x^2}{x}=0$ and $\lim_{x \to 0-}\frac{f(x)-f(0)}{x-0}=\lim_{x \to 0-}\frac{-x^2}{x}=0$, but $|x|$ is not differentiable everywhere, so I'm doubting myself. - -REPLY [4 votes]: It should be clear that $f$ is differentiable for $x\neq 0$. -For $x=0$, we have -$\lim_{x \to 0} { f(x)-f(0) \over x-0} = \lim_{x \to 0} |x| = 0$, hence $f$ -is differentiable at $x=0$ with derivative zero.<|endoftext|> -TITLE: When $\cosh (z)=0$? -QUESTION [7 upvotes]: I'm studying complex analysis and I'm wondering about all complex values of $z$ that satisfy the equation: -$$ -\cosh(z)=0 \,\, . -$$ -Is there a smart way to find all values that satisfy the equation above? If yes, how could I demonstrate this? What are these values? -Thank you! - -REPLY [12 votes]: $$\cosh(z)=0\Longleftrightarrow$$ -$$\frac{e^z+e^{-z}}{2}=0\Longleftrightarrow$$ -$$e^z+e^{-z}=0\Longleftrightarrow$$ -$$e^{-z}\left(1+e^{2z}\right)=0\Longleftrightarrow$$ - -Since $e^z$ is never zero for any $z\in\mathbb{C}$, no solution exists for $e^{-z}=0$: - -$$1+e^{2z}=0\Longleftrightarrow$$ -$$e^{2z}=-1\Longleftrightarrow$$ -$$2z=i\pi(2n+1)\Longleftrightarrow$$ -$$z=\frac{i\pi(2n+1)}{2}$$ -with $n\in\mathbb{Z}$.<|endoftext|> -TITLE: Why are the elements of a Galois/finite field represented as polynomials? -QUESTION [9 upvotes]: I'm new to finite fields - I have been watching various lectures and reading about them, but I'm missing a step. I can understand what a group, ring, field and prime field are, no problem. -But when we have a prime extension field, suddenly the elements are no longer numbers, they are polynomials. I'm sure there are some great mathematical tricks which show that we can (or must?) use polynomials to be able to satisfy the rules of a field within a prime extension field, but I haven't been able to find a coherent explanation of this step.
People I have asked in person don't seem to know either, it's just assumed that that is the way it is. -So I have two questions: -What is a clear explanation of "why polynomials?". -Has anyone tried using constructs other than polynomials to satisfy the same field rules? -Thanks in advance. - -REPLY [9 votes]: The short answer is that you do not need to view the elements of finite fields as polynomials, but it simply is the most convenient presentation for many a purpose. - -The slightly longer answer is that those elements really aren't polynomials, but instead they are cosets in a ring of polynomials, and we simply select the lowest degree polynomial to represent the entire coset. This version of the answer is pedagogically possibly the worst I'm gonna give, but it does come in handy when you do calculations in a finite field in a computer program. Namely, with a bit of care you can use either those low degree polynomials, or you can use monomials to represent the same elements. The former are useful for addition, the latter for multiplication. This is because we can use discrete logarithm tables. I'm not gonna discuss this aspect now, but if you are interested you can take a peek at this Q&A I prepared with referrals like this in mind. - -Then the long answer - building up on the explanation given by Mathmo123 (+1). -When working with algebraic extensions of number fields, say $\Bbb{Q}$, students often take comfort in thinking of those extensions as consisting of some explicit numbers. Been there, done that. This works because we can always think of a copy of that extension field inside the field of complex numbers $\Bbb{C}$. This does have some pitfalls (there are often several distinct ways of viewing the given field as such a subset), but we can do this because A) $\Bbb{C}$ is algebraically closed, B) at that point in our studies the concept of a number is still closely tied to $\Bbb{C}$ and its subsets. -Consider the following. It is certainly more natural to work with $\sqrt2$ rather than the coset $x+\langle x^2-2\rangle$ in the quotient ring $\Bbb{Q}[x]/\langle x^2-2\rangle$. We easily do arithmetic as follows -$$ -(3+\sqrt2)(5+4\sqrt2)=15+(5+12)\sqrt2+4(\sqrt2)^2=23+17\sqrt2. -$$ -But notice that we multiplied those numbers involving $\sqrt2$ exactly like we would multiply the polynomials -$$ -(3+x)(5+4x)=15+17x+4x^2. -$$ -Only in the end we replaced $x^2$ with $2$. All because we think of $x$ having "value" $x=\sqrt2$. Saying that we here actually did arithmetic with polynomials modulo $x^2-2$ is just a fancy way of saying that we use $\sqrt2$ exactly as we use $x$ except in the end we replace all occurrences of $(\sqrt2)^2$ with $2$. -Building up on this we can similarly do arithmetic with $\root3\of2$. Only this time we can only simplify third powers and higher. So compare (I write -$\root3\of 4$ in place of $(\root3\of2)^2$ and similarly for higher powers) -$$ -\begin{aligned} -(1+2\root3\of2+3\root3\of4)(1+\root3\of4)&=1+2\root3\of2+(3+1)\root3\of4+2\root3\of8+3\root3\of{16}\\ -&=(1+2\cdot2)+(2+3\cdot2)\root3\of2+4\root3\of4\\ -&=5+8\root3\of2+4\root3\of4 -\end{aligned} -$$ -replacing $\root3\of 8$ with $2$ and $\root3\of{16}$ with $2\root3\of2$ (and -$\root3\of{32}$ with $2\root3\of4$ should the need arise). -This is, again, exactly like working with polynomials -$$ -(1+2x+3x^2)(1+x^2)=1+2x+(3+1)x^2+2x^3+3x^4, -$$ -where this time we replaced $x^3$ with $2$ and $x^4$ with $2x$. -What's the deal you may ask?
In these two examples we could have simply used the real numbers $\sqrt2$ and $\root3\of2$. We might even have tried to use their decimal approximations to do the arithmetic with a pocket calculator. Sure, that would introduce rounding errors whereas doing it as above is precise, because it is easier to identify the numbers exactly. By this I mean, it is in this sense more informative to write -$$ -(\sqrt2-1)^{100}=94741125149636933417873079920900017937-66992092050551637663438906713182313772 \sqrt{2} -$$ -instead of -$$ -(\sqrt2-1)^{100}=5.2775391806914391296141\cdot10^{-39}, -$$ -which is what a calculator might give you. For example, from that decimal expansion it is very hard to realize that it actually is a number of the form $a-b\sqrt2$ for some integers $a,b$. -Forward we go. What if you want to do arithmetic with the largest real zero of $x^5-2x^3-2x+2$? Plotting that polynomial shows that this number is slightly bigger than $1.5$. Well, this time we DO NOT HAVE A NICE EXPRESSION FOR THAT NUMBER IN TERMS OF RADICALS. So what to do? Just call the number $\alpha$, do arithmetic with its polynomials as above, and keep in mind that by the defining equation -$$ -\alpha^5=2\alpha^3+2\alpha-2. -$$ -Therefore we can calculate for example that -$$ -\begin{aligned} -\alpha^7&=\alpha^2\alpha^5\\ -&=\alpha^2(2\alpha^3+2\alpha-2)\\ -&=2\alpha^5+2\alpha^3-2\alpha^2\\ -&=2(2\alpha^3+2\alpha-2)+2\alpha^3-2\alpha^2\\ -&=6\alpha^3-2\alpha^2+4\alpha-4. -\end{aligned} -$$ -What has this got to do with doing arithmetic in finite fields? There the situation is much like in that last case. It is not convenient to use a numerical value for $\alpha$, when we need to do exact arithmetic and avoid rounding errors and such. All we have to work with is that polynomial equation satisfied by $\alpha$. We use that equation to do our arithmetic. The price we need to pay is that $\alpha$ does not have a precise identity. In the above example any of the five zeros of that polynomial - two of them complex numbers - would lead to the same arithmetic. This is because the related fields are isomorphic. - -Enough of why polynomials (modulo an equation). You asked for alternatives. Let's look at the field of four elements, denote it $\Bbb{F}_4$ or $GF(4)$, whichever you are more familiar with. The field looks like -$$ -\Bbb{F}_4=\{0,1,\alpha,\alpha+1\}, -$$ -and its arithmetic follows from it having characteristic two (so $1+1=0=\alpha+\alpha$) and the special equation $\alpha^2=\alpha+1$. -We can produce $\Bbb{F}_4$ as a quotient ring much the same way that we produce the field of seven elements as $\Bbb{Z}/7\Bbb{Z}$ - residue classes of integers modulo seven. Here's one way. Let $\omega=(-1+i\sqrt3)/2$ be the usual complex primitive third root of unity. Let us look at the ring -$$ -\Bbb{Z}[\omega]=\{a+b\omega\mid a,b\in\Bbb{Z}\}. -$$ -We can then reduce modulo two in the ring $\Bbb{Z}[\omega]$. In other words, we can do arithmetic as usual, but do operations with $a,b$ modulo two, so effectively both $a$ and $b$ will have two choices, $0,1$, and we have four combinations of them altogether. What do we get? Because -$$ -0=\omega^3-1=(\omega-1)(\omega^2+\omega+1) -$$ -we can deduce that -$$ -\omega^2=-\omega-1. -$$ -Now, if we reduce this modulo two, we see that, because $1\equiv-1\pmod2$, $\omega^2=\omega+1$. In other words $\omega$, or more precisely, its residue class modulo two, takes the role of $\alpha$ in $\Bbb{F}_4$. -We can produce any finite field in this way. In infinitely many different ways.
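- -(A tiny computational check of this construction, added for illustration and not part of the original answer. The Python sketch below, with all names mine, encodes an element $a+b\omega$ of $\Bbb{Z}[\omega]$ reduced modulo two as a pair $(a,b)$ and verifies the $\Bbb{F}_4$ arithmetic, including $\omega^2=\omega+1$ and the existence of inverses.) -def mul(p, q):                       # p = (a, b) meaning a + b*w, with w^2 = w + 1 mod 2 -    a, b = p -    c, d = q -    # (a + b w)(c + d w) = ac + (ad + bc) w + bd w^2, and w^2 -> w + 1 -    return ((a*c + b*d) % 2, (a*d + b*c + b*d) % 2) -elems = [(0, 0), (1, 0), (0, 1), (1, 1)]   # 0, 1, w, w + 1 -assert mul((0, 1), (0, 1)) == (1, 1)       # w^2 = 1 + w -for p in elems[1:]:                         # every nonzero element has an inverse -    assert any(mul(p, q) == (1, 0) for q in elems[1:])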
But using the resulting representations of complicated numbers with coefficients modulo a prime number does not really help us to do any arithmetic operations. So we just usually won't bother. - -This became even longer than I anticipated. Hope it helps :-)<|endoftext|> -TITLE: Prob. 21, Sec. 17, in Munkres' TOPOLOGY, 2nd ed: Closure and complementation can result in at most 14 different sets -QUESTION [5 upvotes]: Here's Prob. 21 in Sec. 17 of the book Topology by James R. Munkres, 2nd edition: - -Consider the collection of all subsets $A$ of the topological space $X$. The operations of closure $A \to \overline{A}$ and complementation $A \to X-A$ are functions from this collection to itself. -(a) Show that, starting with a given set $A$, one can form no more than $14$ distinct sets by applying these two operations successively. -(b) Find a subset $A$ of $\mathbb{R}$ (in its usual topology) for which the maximum of $14$ is obtained. - -My effort: -Based on the feedback I've had from the valued Mathematics Stack Exchange community, I would like to amend my effort as follows: -Let $\mathcal{P} (X)$ denote the power set (i.e. the collection of all the subsets) of $X$, and let the functions $f \colon \mathcal{P} (X) \to \mathcal{P} (X) $, $g \colon \mathcal{P} (X) \to \mathcal{P} (X) $, and $i \colon \mathcal{P} (X) \to \mathcal{P} (X) $ be defined as follows: -$$f(A) \colon= \overline{A} \ \ \ \mbox{ for all } \ \ \ A \in \mathcal{P} (X),$$ -$$g(A) \colon= X - A \ \ \ \mbox{ for all } \ \ \ A \in \mathcal{P} (X),$$ -and -$$i(A) \colon= \mathrm{Int} (A) \ \ \ \mbox{ for all } \ \ \ A \in \mathcal{P} (X).$$ -Then we have the following couple of results -$$f(f(A)) = f(A), \ \ \ i(i(A) ) = i(A), \ \ \ \mbox{ and } \ \ \ g(g(A)) = A \ \ \ \mbox{ for all } \ A \in \mathcal{P}(X).$$ -Moreover, we also have the following two relations: -$$f(g(A)) = g(i(A)) \ \ \ \mbox{ and } \ \ \ g(f(A)) = i(g(A)) \ \ \ \mbox{ for all } \ A \in \mathcal{P} (X).$$ -How do we show that -$$f(g(f(g(f(g(f(g(A)))))))) = f(g(f(g(A))))$$ and $$g(f(g(f(g(f(g(f(A)))))))) = g(f(g(f(A))))$$ -for all $A \in \mathcal{P} (X)$? -Please also refer to this link. - -REPLY [5 votes]: Note also that -$$iA\subseteq A\subseteq fA$$ -and -$$A\subseteq B\implies fA\subseteq fB,\ iA\subseteq iB.$$ -From $$fgA=giA,$$ it follows that $$gfgA=ggiA=iA$$ and $$fgfgA=fiA;$$ thus the equality $$fgfgfgfgA=fgfgA$$ can be rewritten in the form $$fifiA=fiA,$$ which I will now prove. -$\underline{fifiA\subseteq fi A}$: From $$ifiA\subseteq fiA$$ it follows that $$fifiA\subseteq ffiA=fiA.$$ -$\underline{fiA\subseteq fifiA}$: From $$iA\subseteq fiA$$ it follows that $$iA=iiA\subseteq ifiA$$ and $$fiA\subseteq fifiA.$$ -Substituting $gA$ for $A$ in the previous identity, we have $$fgfgfgfg(gA)=fgfg(gA),$$ i.e., $$fgfgfgfA=fgfA;$$ taking complements, $$gfgfgfgfA=gfgfA.$$<|endoftext|> -TITLE: counterexample to a "theorem" on continuity of largest deltas for continuous functions $f:[a,b]\to\mathbb{R}$ -QUESTION [6 upvotes]: "Theorem 12" in these notes states the following (verbatim): - -Let $f:[a,b]\to\mathbb{R}$ be continuous and let $\epsilon>0$. For $x\in[a,b]$, let -$$\Delta(x)=\sup\left\{\delta\,\,|\,\,\text{for all}\,y\in[a,b]\,\text{with}\,|x-y|<\delta,\,|f(x)-f(y)|<\epsilon\right\}$$ -Then $\Delta$ is a continuous function of $x$. - -In other words (this is my paraphrase of the imprecise statement about the supremum), $\Delta(x)$ is the largest $\delta$ we can pick at $x$ that keeps the variation of $f$ within $\epsilon$.
(Of course, for a specified $x$, the set of $\delta$'s is nonempty because $f$ is continuous.) -Following the statement of the theorem, we're invited to use it to prove that continuous functions on closed and bounded intervals are uniformly continuous. -But I think this theorem is false. Take $f(x)=\sqrt{x}$ on $[0,1]$ and $\epsilon=0.5$. I claim -$$\Delta(x)=\begin{cases}\sqrt{x}-0.25&x>0.25\\\sqrt{x}+0.25&0\leq x\leq0.25\end{cases}$$ -which has a jump discontinuity at $x=0.25$. This follows from a straightforward calculation I carried out for arbitrary $\epsilon$, which gives -$$\Delta(x)=\begin{cases}2\epsilon\sqrt{x}-\epsilon^2&x>\epsilon^2\\2\epsilon\sqrt{x}+\epsilon^2&0\leq x\leq\epsilon^2\end{cases}$$ -with a jump discontinuity at $x=\epsilon^2$. This contradicts the quoted theorem. -(As a side note, this computation gives a very nice brute-force way to discover that choosing $\delta=\epsilon^2$ suffices for the uniform continuity of $\sqrt{x}$ on $[0,\infty)$. Indeed it shows more: this choice of $\delta$ is the largest we can make, because it is the infimum of $\Delta$. I wish I'd done this computation in college.) -I have two questions: - -Am I missing something, or does my counterexample show this claim is false? I've reworked the calculation and don't believe I'm wrong, but I could spell out the details if someone asks. -If the claim is false, can it be repaired to do what the instructor wanted to do with it? - -(Please note that with question 2 I'm not asking for any old proof of the special case of the Heine-Cantor theorem that says continuous functions on closed and bounded intervals are uniformly continuous. I know the proof of the general result in arbitrary metric spaces. I'm asking: what theorem could the instructor possibly have had in mind instead of Theorem 12, as stated?) -ADDED: a sketch of the calculation of $\Delta(x)$ for $\sqrt{x}$ on $[0,\infty)$ for arbitrary $\epsilon$. -Break the work into cases. -Case one: $\sqrt{x}>\epsilon$, i.e. $x>\epsilon^2$. The inverse image of the interval $(\sqrt{x}-\epsilon,\sqrt{x}+\epsilon)$ is $\big((\sqrt{x}-\epsilon)^2,(\sqrt{x}+\epsilon)^2\big)$. Therefore -$$\Delta(x)=\min\big(x-(\sqrt{x}-\epsilon)^2,(\sqrt{x}+\epsilon)^2-x\big)=x-(\sqrt{x}-\epsilon)^2=2\epsilon\sqrt{x}-\epsilon^2$$ -Case two: $\sqrt{x}\leq\epsilon$, i.e. $x\leq\epsilon^2$. Now we just look at the inverse image of $[0,\sqrt{x}+\epsilon)$, so it's only the right-hand boundary of the inverse image that matters for controlling the variation of $f$. The right-hand boundary is still $(\sqrt{x}+\epsilon)^2$, so $\Delta(x)=(\sqrt{x}+\epsilon)^2-x=2\epsilon\sqrt{x}+\epsilon^2$. -CORRECTION (1/27/16): John Ma correctly points out in the comments that the correct calculation for $\sqrt{x}$ is the lower semicontinuous function -$$\Delta(x)=\begin{cases}2\epsilon\sqrt{x}-\epsilon^2&x\color{red}{\geq}\epsilon^2\\2\epsilon\sqrt{x}+\epsilon^2&0\leq x\color{red}{<}\epsilon^2\end{cases}$$ -rather than my original upper semicontinuous -$$\Delta(x)=\begin{cases}2\epsilon\sqrt{x}-\epsilon^2&x\color{blue}{>}\epsilon^2\\2\epsilon\sqrt{x}+\epsilon^2&0\leq x\color{blue}{\leq}\epsilon^2\end{cases}$$ - -REPLY [2 votes]: John Ma's answer, together with my calculation for $\sqrt{x}$, settles the issue: the notes are wrong (on any reasonable interpretation of the supremum that makes the function $\Delta(x)$ well-defined), and lower semicontinuity is the best that can be concluded in general.
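- -(Added numerical illustration, not part of the original answer: sampling $\Delta(x)$ for $f(x)=\sqrt{x}$ on $[0,1]$ with $\epsilon=0.5$ directly from the definition exhibits the jump at $x=\epsilon^2=0.25$, and shows that the value at the jump agrees with the lower branch, i.e. lower semicontinuity. The Python/NumPy sketch below is mine; the grid resolution is an arbitrary choice.) -import numpy as np -def Delta(x, eps=0.5, a=0.0, b=1.0, n=20001): -    # distance from x to the nearest y in [a, b] violating |f(x) - f(y)| < eps -    y = np.linspace(a, b, n) -    bad = np.abs(np.sqrt(x) - np.sqrt(y)) >= eps -    return (b - a) if not bad.any() else float(np.abs(y[bad] - x).min()) -for x in [0.249, 0.2499, 0.25, 0.2501, 0.251]: -    print(x, round(Delta(x), 4)) -# prints roughly 0.749 and 0.7499 just left of 0.25, then 0.25, 0.2501, 0.251: -# Delta jumps down at x = 0.25 and takes the lower of the two one-sided limits there.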
But even this weaker result can be used to prove the fact—call it Heine's theorem—that continuous functions $f:[a,b]\to\mathbb{R}$ are uniformly continuous. -I've done some digging in the literature, though, and found some fascinating history on this question. A paper by Lennes from 1905 gives a counterexample similar to my computation for $\sqrt{x}$, except for $\sin x$ on $[0,\pi]$. More recent results in the American Mathematical Monthly show that while picking the largest delta gives only a lower semicontinuous function in general, we can nevertheless always choose the deltas continuously for any continuous function $f$. (In fact, if $f$ is uniformly continuous, we can choose the deltas uniformly continuously, too.) This opens onto the subject of "continuous selections." -I leave my findings here for those who might be interested, since these questions are not treated systematically in any standard analysis reference. - -To begin with, the notes I quoted are hinting at the idea of Lüroth's alternative proof of Heine's theorem, given in - -Lüroth, "Bemerkung über gleichmässige Stetigkeit," Mathematische Annalen, September 1873, Vol. 6 No. 3. Available here. - -The idea is to deduce Heine's theorem from the continuity of a positive function that is very similar to $\Delta(x)$. The difference is that $\Delta(x)$ measures the variation $f(y)$ from a fixed value $f(x)$ for all $y$ within $\delta$ of $x$, while Lüroth's function, call it $\Delta_L(x)$, measures the variation of $f$ over all $y$ within $\delta$ of $x$: -$$\Delta_L(x):=\sup\left\{\delta\,\,|\,\,\forall x_1\forall x_2(x_1,x_2\in(x-\delta,x+\delta)\cap[a,b]\implies|f(x_1)-f(x_2)|<\epsilon) \right\}$$ -This function does have to be continuous; as a result, it assumes its infimum on $[a,b]$. And because the function is positive, this infimum is precisely the delta we need to prove uniform continuity. (Indeed, it is the best such delta.) The trick for getting continuity of the deltas is that we use the delta guaranteed by the oscillation definition of continuity rather than the delta in the more familiar definition of continuity. As Lüroth says, "Diese Definition der Stetigkeit stimmt offenbar überein mit der gewöhnlichen." ("This definition of continuity obviously agrees with the usual one.") Despite the equivalence of these notions of continuity at a point, the functions we get from their largest deltas are not. -A more contemporary version of this argument is given in - -Teodora-Liliana Radulescu, Vicentiu D. Radulescu, and Titu Andreescu, Problems in Real Analysis: Advanced Calculus on the Real Axis, page 146. Available here. - -As John Ma shows, though, we can prove Heine's theorem by taking the infimum of the largest deltas even if we use the deltas from the standard definition of continuity. In this case we don't have continuity, but we don't need it. Lower semicontinuity is enough. One wonders whether Lüroth realized this. -Veblen and Lennes give both Heine's proof and Lüroth's proof (pp. 88-90) in their influential book - -Introduction to Infinitesimal Analysis, 1907. Available here. - -They highlight the difference between $\Delta_L(x)$ and $\Delta(x)$, and a footnote on page 90 refers to a short note by Lennes that answers my question explicitly with several counterexamples: $\Delta(x)$ needn't be continuous. (He does not show lower semicontinuity.) - -Lennes, "Remarks on a proof that a continuous function is uniformly continuous," Annals of Mathematics, 1905, Vol. 6 No. 2. Available here.
- -For the first of his four counterexamples, Lennes shows $\Delta(x)$ for $\sin x$ on $[0,\pi]$ and $\epsilon=\sqrt{3}/2$ has a jump discontinuity at $x=\pi/3$. He makes a minor error in claiming that $\Delta(\pi/3)=2\pi/3$ while the limit from the right is $\pi/3$. Actually, $\Delta(\pi/3)=\pi/3$ and the limit from the left is $2\pi/3$. I made the same error in my computation for $\sqrt{x}$. -This paper settles my original question: $\Delta(x)$ isn't continuous. But can we pick the deltas continuously across $x$ if we give up the requirement that we pick the best deltas? The answer is yes; in fact, we can also get a continuous function as both $x$ and $\epsilon$ vary. Several more recent papers discuss these results: - -Seidman and Childress, "A Continuous Modulus of Continuity," American Mathematical Monthly, Vol. 82, No. 3 (Mar., 1975), pp. 253-254. -Guthrie, "A Continuous Modulus of Continuity," American Mathematical Monthly, Vol. 90, No. 2 (Feb., 1983), pp. 126-127. -Enayat, "$\delta$ as a Continuous Function of $x$ and $\epsilon$," American Mathematical Monthly, Vol. 107, No. 2 (Feb., 2000), pp. 151-155. -De Marco, "For Every $\epsilon$ There Continuously Exists a $\delta$," American Mathematical Monthly, Vol. 108, No. 5 (May, 2001), pp. 443-444. - -See also - -Reem, "New proofs and improvements of theorems in classical analysis," arXiv:0709.4492 [math.CA]. Available here. - -which gives an excellent discussion of the optimal delta.<|endoftext|> -TITLE: How is the metric defined on the real projective space $\mathbb{RP}^n$? -QUESTION [18 upvotes]: The standard metric on $\mathbb{RP}^n$ is usually defined to be the metric that locally looks like the metric on $S^n$. But as a differentiable manifold (and not just as a set), $\mathbb{RP}^n$ is not a subset of $S^n$, it is a quotient. So there is no natural map $\mathbb{RP}^n\to S^n$, but rather a map $S^n\to \mathbb{RP}^n$. -The standard metric on $\mathbb{RP}^n$ would then be a sort of "push-forward metric", but there is no such thing, metrics can only naturally be pulled back. -What am I missing? - -REPLY [17 votes]: The point is that the metric on $S^n$ is invariant under the action of the group $\mathbb{Z}_2$, so it can be pushed down to the quotient. More explicitly, assume that we endow $S^n$ with a standard round metric and let $\pi \colon S^n \rightarrow \mathbb{RP}^n$ be the quotient map. For $p \in S^n$, we denote $\pi(p)$ by $[p]$. Let $A \colon S^n \rightarrow S^n$ be the antipodal map. Then $A$ is an isometry of $S^n$ and we have $\pi \circ A = \pi$. -The metric on $\mathbb{RP}^n$ is defined as -$$ \left< v, w \right>_{[p]} := \left< \left( d\pi|_p \right)^{-1}(v), \left( d\pi|_p \right)^{-1}(w) \right>_p. $$ -This is well-defined because we have -$$ \left< \left( d\pi|_p \right)^{-1}(v), \left( d\pi|_p \right)^{-1}(w) \right>_p = \left< \left( d \left( \pi \circ A \right)|_p \right)^{-1}(v), \left( d \left( \pi \circ A \right)|_p \right)^{-1}(w) \right>_p = \left< \left( dA|_{p} \right)^{-1} \left( \left( d \pi|_{-p} \right)^{-1}(v) \right), \left( dA|_{p} \right)^{-1} \left( \left( d \pi|_{-p} \right)^{-1}(w) \right) \right>_p \\ = \left< \left( d\pi|_{-p} \right)^{-1}(v), \left( d\pi|_{-p} \right)^{-1}(w) \right>_{-p} $$ -where we used the fact that $dA|_p$ is an isometry. -More generally, assume you have some Riemannian manifold $(M,g)$ with a Lie group $G$ acting on $M$ smoothly, freely and properly by isometries.
Then the quotient $M / G$ is a manifold and it has a naturally and uniquely defined Riemannian metric that turns the map $\pi \colon M \rightarrow M / G$ into a Riemannian submersion. If $G$ is zero dimensional, as in your case, the map $\pi$ is a local isometry.<|endoftext|> -TITLE: Prove that greatest common divisor of two numbers multiplied with itself divides the product of those numbers -QUESTION [7 upvotes]: Let $a, b, c \in \mathbb{N}$. -If $c$ is the greatest common divisor of $a$ and $b$, then $c^2$ divides $a\cdot b$: -$c = \gcd(a, b) \implies c^2|ab $ -How would I prove this? I understand why this sentence is true, but can't formulate it in a mathematically correct way. - -REPLY [11 votes]: It is very simple. -Since $c=\gcd(a,b)$, we can write $a=cp$ and $b=cq$ for integers $p,q$ with $\gcd(p,q)=1$. -Hence $$ab=cp\cdot cq = c^2 \cdot pq$$ -Thus we can conclude that $$c^2 | ab$$<|endoftext|> -TITLE: Noether normalization in algebraically closed field -QUESTION [7 upvotes]: The Noether normalization lemma states that if $k$ is a field, and $A$ a finitely generated $k$-algebra, then there exist elements $y_1,...,y_m\in A$ such that - -$y_1,...,y_m$ are algebraically independent over $k$ -$A$ is finite over $B=k[y_1,...,y_m]$. - -The following problem is Exercise 3.16 from Undergraduate Algebraic Geometry by Reid: - -Let $I=\ker \{k[x_1,\dots,x_n]\rightarrow k[a_1,\dots,a_n]=A\}$, and consider $V=V(I)$ in $k^n$. -Let $Y_1,\dots,Y_m$ be general linear forms in $X_1,\dots,X_n$, and write $\pi: k^n\rightarrow k^m$ for the linear projection defined by the $Y$'s. Set $p = \pi |_V : V \to k^m$. Prove that for every $P \in k^m$, $p^{-1}(P)$ is a finite set, and nonempty if $k$ is algebraically closed. - -I showed the first part by using that for each $X_i$, there is a monic equation in $I$ in terms of $X_i$. So the solution set has to be finite. I am stuck on showing the nonemptiness. -I am confused about the concept of a finite extension. I saw an example of $A=\mathbb C[x_1,x_2]/(x_1x_2-1)$. Apparently the assumption is $x_1$ is transcendental over $\mathbb C$. In this case, we see that $A$ is not a finite extension of $\mathbb C[x_1]$. However, if we make a change of variables, let $x_1=y_1+y_2, x_2=y_1-y_2$, then $A$ becomes $\mathbb C[y_1,y_2]/(y_1^2-y_2^2-1)$, in which case $A$ is finite over $\mathbb C[y_1]$. So in this case, $m=1$. -Applying the result of this question to the example, does it mean for any $y_1$, there exists a value of $y_2$ in $V(y_1^2-y_2^2-1)$? And how to show the general case? -Sorry if this is a stupid question. -If someone can explain intuitively what Noether normalization implies, that will be great. -Any help would be greatly appreciated. - -REPLY [3 votes]: About nonemptiness: following the hint given by the book we start with a point $P=(b_1,\dots,b_m)\in K^m$, and consider the ideal $J_P=I+(Y_1-b_1,\dots,Y_m-b_m)$, where $Y_i$ are linear forms in $K[X_1,\dots,X_n]$ such that $y_i=Y_i(a_1,\dots,a_n)$ are algebraically independent over $K$ and $K[y_1,\dots,y_m]\subset A$ is finite. -We want to show that $J_P\ne(1)$. Suppose the contrary, and write $$1=f+(Y_1-b_1)g_1+\cdots+(Y_m-b_m)g_m$$ in $K[X_1,\dots,X_n]$. (Notice that $f\in I$, that is, $f(a_1,\dots,a_n)=0$.) It follows that $1=(y_1-b_1)g_1(a)+\cdots+(y_m-b_m)g_m(a)$, where $a= (a_1,\dots,a_n)$. In particular, $(y_1-b_1,\dots,y_m-b_m)=(1)$ in $A$.
But this is not possible since $(y_1-b_1,\dots,y_m-b_m)$ is a maximal ideal in $K[y_1,\dots,y_m]$, and the extension $K[y_1,\dots,y_m]\subset A$ is integral. (See exercise 3.15 from the book.) -Then $J_P\ne(1)$, and whenever $K$ is algebraically closed we have $V(J_P)\ne\emptyset$. Then there is $\alpha\in K^n$ such that $g(\alpha)=0$ for all $g\in J_P$. In particular, $f(\alpha)=0$ for all $f\in I$ hence $\alpha\in V(I)$, and $Y_i(\alpha)=b_i$ for all $i$ hence $\pi(\alpha)=P$.<|endoftext|> -TITLE: A number $n$ has $12$ divisors and $d_{d_4-1} = (d_1+d_2+d_4)d_8$. -QUESTION [6 upvotes]: Find a number $n$ which has - -$12$ divisors $(1 = d_1 < d_2 < \cdots < d_{12} = n)$ … $d_4$. On the other hand, since $d_1+d_2+d_4$ divides a divisor of $n$, it follows that $d_1+d_2+d_4=d_i$ for $4<i$ …<|endoftext|> -TITLE: The étale fundamental group as a functor -QUESTION [13 upvotes]: The usual "topological fundamental group" $\pi_1 (X,x)$ of a pointed topological space $(X,x)$ is functorial in the sense that a pointed continuous map $f\colon (X,x)\rightarrow (Y,y)$ induces a homomorphism $f_* \colon \pi_1 (X,x)\rightarrow \pi_1 (Y,y)$ via composition with $f$ i.e. $[\alpha]\mapsto [f\circ\alpha]$ where $\alpha$ is a loop in $X$ based at $x$. -I know that the étale fundamental group of a variety over $K$ with a distinguished $K$-point is supposed to be functorial in the same way (i.e. gives a functor from the category of pointed varieties over $K$ to the category of groups) but there are two technical issues I need to settle. Here is my definition of the étale fundamental group: -Let $X$ be a variety over $K$, let $x\in X(K)$ and let $\textbf{Cov}(X,x)$ denote the category of finite pointed étale covers $(Y,y)\rightarrow (X,x)$ where $y\in Y(K)$. Let $\Phi_x\colon \textbf{Cov}(X, x)\rightarrow \textbf{Set}$ denote the fibre functor sending a covering $\phi: (Y,y)\rightarrow (X,x)$ to the fibre $\phi^{-1}(x)\subseteq Y$. Then the étale fundamental group of $(X,x)$ is defined as $\pi_1^{\text{ét}} (X,x) = \operatorname{Aut}(\Phi_x)$. -Now I want to show that if $f\colon (X,x)\rightarrow (Y,y)$ is a pointed morphism of varieties, it induces a homomorphism $f_* \colon \pi_1^{\text{ét}} (X,x)\rightarrow \pi_1^{\text{ét}} (Y,y)$. I read right at the start of these notes how to begin: pick a cover $\phi\colon (Y', y')\rightarrow (Y,y)$ in $\textbf{Cov}(Y,y)$. Then apparently the pullback $\phi^*\colon X\times_Y Y'\rightarrow X$ is a finite étale cover of $X$ (i.e. an object in $\textbf{Cov}(X,x)$) and the fibre of $\phi^*$ above $x$ is isomorphic to the fibre of $\phi$ above $y$. -My questions are: - -Why is $\phi^* \colon X\times_Y Y' \rightarrow X$ a finite étale cover of $X$? -How can I show the claim that $\Phi_x (X\times_Y Y', \phi^*) \cong \Phi_y (Y', \phi)$? - -I will mainly be restricting my varieties $X$ and $Y$ to be elliptic curves, so an explanation assuming this would also be very welcome. - -Edit: Kevin Carlson's comments below have helped me solve question 2 by myself. I include a proof here for completeness, but because I still don't have an answer to question 1 I have not posted this as an answer. -To get a canonical isomorphism $\Phi_x (X\times_Y Y')\cong \Phi_y (Y')$, note that $\Phi_x (X\times_Y Y') = \text{Spec}(K)\times_X (X\times_Y Y')$ and $\Phi_y (Y') = \text{Spec}(K)\times_Y Y'$ where $X\xleftarrow{x}\text{Spec}(K)\xrightarrow{y} Y$ are the $K$-points.
Then since $f\circ x = y$, we have a commutative diagram where the two inner squares are pullbacks: - -Then from the pullback pasting lemma (see the section "pasting of pullbacks") the outer rectangle is a pullback diagram and so it follows that $\Phi_x (X\times_Y Y')$ has the UMP of the fibre product $\text{Spec}(K)\times_Y Y' = \Phi_y (Y')$, so must be isomorphic. - -REPLY [3 votes]: I hope you don't mind if I post an answer to get this question off the unanswered list. The answer to the first question is essentially given by Zhen Lin in the comments: - -You should be able to find in standard references that the class of finite (resp. étale, surjective) morphisms is closed under pullback. For instance, try the Stacks project. – Zhen Lin Jan 25 '16 at 19:19 - -And the answer to the second question is at the end of the question statement.<|endoftext|> -TITLE: Why does the sum of two linearly independent solutions of a second order homogeneous ODE give a general solution? -QUESTION [5 upvotes]: The following is a short extract from the book I am reading: - -If given a Homogeneous ODE: -$$\frac{\mathrm{d}^2 y}{\mathrm{d}x^2}+5\frac{\mathrm{d} y}{\mathrm{d}x}+4y=0\tag{1}$$ -Letting -$$D=\frac{\mathrm{d}}{\mathrm{d}x}$$ then $(1)$ becomes -$$D^2 y + 5Dy + 4y=(D^2+5D+4)y$$ -$$\implies\color{blue}{(D+1)(D+4)y=0}\tag{2}$$ -$$\implies (D+1)y=0 \space\space\text{or}\space\space (D+4)y=0$$ which has solutions $$y=Ae^{-x}\space\space\text{or}\space\space y=Be^{-4x}\tag{3}$$ respectively, where $A$ and $B$ are both constants. -Now if $(D+4)y=0$, then $$(D+1)(D+4)y=(D+1)\cdot 0=0$$ -so any solution of $(D + 4)y = 0$ is a solution of the differential equation $(1)$ or $(2)$. Similarly, any solution of $(D + 1)y = 0$ is a solution of $(1)$ or $(2)$. $\color{red}{\text{Since the two solutions (3) are linearly independent, a linear combination}}$ $\color{red}{\text{of them contains two arbitrary constants and so is the general solution.}}$ Thus $$y=Ae^{-x}+Be^{-4x}$$ is the general solution of $(1)$ or $(2)$. - -The part I don't understand in this extract is marked in $\color{red}{\mathrm{red}}$. - -Firstly: How do we know that the two solutions $y=Ae^{-x}\space\text{and}\space y=Be^{-4x}$ are linearly independent? -Secondly: Why does a linear combination of linearly independent solutions give the general solution? Or, put in another way, I know that $y=Ae^{-x}$ or $y=Be^{-4x}$ are both solutions. But why is their sum a solution: $y=Ae^{-x}+Be^{-4x}$? - -REPLY [7 votes]: If you recall from linear algebra, abstract functional spaces can be considered as vector spaces. We define the zero function to serve as the zero vector, and pointwise addition/multiplication as the vector space operations. -For a collection of ordinary vectors, to show linear independence, we want to show none of the chosen vectors can be written as a linear combination of any of the others; each vector describes a 'different' part of the space. For a vector space of functions, e.g. the space of differentiable functions, to show linear independence, we must show there are no non-zero scalars $a,b$ such that -\begin{align*} -af(t)+bg(t)=0 -\end{align*} -for $\textit{all}$ values of $t$ in the domain. It isn't enough that we can find one or two values of $t$ where the sum equals zero, but instead for all values of the domain. -Now, to show the two functions in your problem are linearly independent.
Suppose we could find two nonzero numbers $a$ and $b$ such that -\begin{align*} -ae^{-t}+be^{-4t}=0 -\end{align*} -for all $t$. Differentiate this expression: -\begin{align*} --ae^{-t}-4be^{-4t}=0 -\end{align*} -Adding the two equations together gives -\begin{align*} --3be^{-4t}&=0 -\end{align*} -The exponential function is strictly positive, so we must have $b=0$. This implies $ae^{-t}=0$, and similarly $a=0$. -Now, why must a linear combination also be a solution? Well, the differential equation described is $\textbf{linear}$, so any element of the span of linearly independent solutions will always yield another solution. They all get mapped to zero.<|endoftext|> -TITLE: Prove that $ { a }^{ 2 }+2ab+{ b }^{ 2}\ge 0$ without using $(a+b) ^{ 2 }$ -QUESTION [5 upvotes]: Prove that $${ a }^{ 2 }+2ab+{ b }^{ 2 }\ge 0,\quad\text{for all }a,b\in \mathbb R $$ -without using $(a+b)^{2}$. - -My teacher challenged me to solve this question from anywhere. He said you can't solve it. I hope you can help me to solve it. - -REPLY [4 votes]: Case 1. If both $a,b$ are positive, then each of the terms $a^2, 2ab, b^2$ is positive. Hence $a^2 + 2ab +b^2>0$. -Case 2. If both $a,b$ are negative, then again each of the terms $a^2, 2ab, b^2$ is positive. Hence $a^2 + 2ab +b^2>0$. -Case 3. If one of $a,b$ is positive and the other is negative, then by changing the sign of one of them it suffices to show $a^2 -2ab+b^2 \ge 0$ where $a,b$ are both positive. So, without loss of generality assume $a>b$. -Here is a pictorial proof of that fact.<|endoftext|> -TITLE: What inversion sets of permutations of $123\ldots n$ are possible? -QUESTION [5 upvotes]: Given a sequence of integers $\sigma = a_1 a_2 a_3 \ldots a_n$, let's define an inversion of $\sigma$ to be a pair $(a_i,a_j)$ of entries of $\sigma$ such that $a_i < a_j$ and $i > j$. For example, in the sequence $21345$, the pair $(1,2)$ is an inversion, since $1<2$ but $2$ comes before $1$ in the sequence. The inversion set of $\sigma$ is just the set of all inversions of $\sigma$. (This may not be the usual definition of inversion, but I didn't know what else to call it.) Now my question is this: - -What are the possible inversion sets of permutations of $123\ldots n$? - -Clearly, not all subsets of $\{(a,b) \in \mathbb{Z}^2 \mid 1 \leq a < b \leq n \}$ can occur as inversion sets; for example, if the inversions $(3,4)$ and $(1,3)$ occur, then so too must $(1,4)$. More generally, if $a < b < c$, any inversion set containing $(a,b)$ and $(b,c)$ must contain $(a,c)$. But I'm not sure if this fact alone is enough to characterize all inversion sets. How do I proceed? - -REPLY [2 votes]: Here is one way to think about it relating to @darijgr's comment. Let's denote $[n]:= \{1,2,\dots,n\}$. We want to prove that a subset $I$ of $X=\{(i,j)\in [n]^2\mid i<j\}$ …<|endoftext|> -TITLE: Rank 2 vector bundle -QUESTION [6 upvotes]: $E$ is a rank $2$ vector bundle. - -Why is $E\simeq E^*\otimes \det E$? -Any generalization (arbitrary rank, $E$ non locally free etc.)? - -REPLY [13 votes]: More generally, any vector bundle $V$ (or representation of a group, etc.) of rank $d$ comes equipped with a natural nondegenerate pairing (the exterior product) -$$V \otimes \wedge^{d-1} V \to \wedge^d V$$ -which gives an isomorphism -$$V \cong \wedge^{d-1} V^{\ast} \otimes \wedge^d V.$$ -Now take $d = 2$.<|endoftext|> -TITLE: Is there a name for numbers between 0 and 1? -QUESTION [8 upvotes]: In the world of competitive esports, players often discuss kill/death ratios, where higher is better.
My friends sometimes call a poor ratio, like 1 kill to 4 deaths, 'negative', but that's not quite right. Is there another word to describe values between 0.0 and 1.0, vs values larger than 1.0? - -REPLY [4 votes]: It can be called the Unit Interval - -In mathematics, the unit interval is the closed interval [0,1], that is, the set of all real numbers that are greater than or equal to 0 and less than or equal to 1. It is often denoted I (capital letter I). In addition to its role in real analysis, the unit interval is used to study homotopy theory in the field of topology.<|endoftext|> -TITLE: Is $x^2+x+1$ divisible by $101$, if $x\in\mathbb Z$? -QUESTION [5 upvotes]: Prove $x^2+x+1$ isn't divisible by $101$ for any $x\in\mathbb Z$. -I think the way of solving the problem is by using "Fermat's Little Theorem". - -REPLY [4 votes]: I'm probably gonna get down-voted for this, but here is one way to prove it: - -$x\equiv 0\pmod{101} \implies x^2+x+1\equiv 0^2+ 0+1\equiv 1\pmod{101}$ -$x\equiv 1\pmod{101} \implies x^2+x+1\equiv 1^2+ 1+1\equiv 3\pmod{101}$ -$x\equiv 2\pmod{101} \implies x^2+x+1\equiv 2^2+ 2+1\equiv 7\pmod{101}$ -$x\equiv 3\pmod{101} \implies x^2+x+1\equiv 3^2+ 3+1\equiv 13\pmod{101}$ -$x\equiv 4\pmod{101} \implies x^2+x+1\equiv 4^2+ 4+1\equiv 21\pmod{101}$ -$x\equiv 5\pmod{101} \implies x^2+x+1\equiv 5^2+ 5+1\equiv 31\pmod{101}$ -$x\equiv 6\pmod{101} \implies x^2+x+1\equiv 6^2+ 6+1\equiv 43\pmod{101}$ -$x\equiv 7\pmod{101} \implies x^2+x+1\equiv 7^2+ 7+1\equiv 57\pmod{101}$ -$x\equiv 8\pmod{101} \implies x^2+x+1\equiv 8^2+ 8+1\equiv 73\pmod{101}$ -$x\equiv 9\pmod{101} \implies x^2+x+1\equiv 9^2+ 9+1\equiv 91\pmod{101}$ -$x\equiv 10\pmod{101} \implies x^2+x+1\equiv 10^2+ 10+1\equiv 10\pmod{101}$ -$x\equiv 11\pmod{101} \implies x^2+x+1\equiv 11^2+ 11+1\equiv 32\pmod{101}$ -$x\equiv 12\pmod{101} \implies x^2+x+1\equiv 12^2+ 12+1\equiv 56\pmod{101}$ -$x\equiv 13\pmod{101} \implies x^2+x+1\equiv 13^2+ 13+1\equiv 82\pmod{101}$ -$x\equiv 14\pmod{101} \implies x^2+x+1\equiv 14^2+ 14+1\equiv 9\pmod{101}$ -$x\equiv 15\pmod{101} \implies x^2+x+1\equiv 15^2+ 15+1\equiv 39\pmod{101}$ -$x\equiv 16\pmod{101} \implies x^2+x+1\equiv 16^2+ 16+1\equiv 71\pmod{101}$ -$x\equiv 17\pmod{101} \implies x^2+x+1\equiv 17^2+ 17+1\equiv 4\pmod{101}$ -$x\equiv 18\pmod{101} \implies x^2+x+1\equiv 18^2+ 18+1\equiv 40\pmod{101}$ -$x\equiv 19\pmod{101} \implies x^2+x+1\equiv 19^2+ 19+1\equiv 78\pmod{101}$ -$x\equiv 20\pmod{101} \implies x^2+x+1\equiv 20^2+ 20+1\equiv 17\pmod{101}$ -$x\equiv 21\pmod{101} \implies x^2+x+1\equiv 21^2+ 21+1\equiv 59\pmod{101}$ -$x\equiv 22\pmod{101} \implies x^2+x+1\equiv 22^2+ 22+1\equiv 2\pmod{101}$ -$x\equiv 23\pmod{101} \implies x^2+x+1\equiv 23^2+ 23+1\equiv 48\pmod{101}$ -$x\equiv 24\pmod{101} \implies x^2+x+1\equiv 24^2+ 24+1\equiv 96\pmod{101}$ -$x\equiv 25\pmod{101} \implies x^2+x+1\equiv 25^2+ 25+1\equiv 45\pmod{101}$ -$x\equiv 26\pmod{101} \implies x^2+x+1\equiv 26^2+ 26+1\equiv 97\pmod{101}$ -$x\equiv 27\pmod{101} \implies x^2+x+1\equiv 27^2+ 27+1\equiv 50\pmod{101}$ -$x\equiv 28\pmod{101} \implies x^2+x+1\equiv 28^2+ 28+1\equiv 5\pmod{101}$ -$x\equiv 29\pmod{101} \implies x^2+x+1\equiv 29^2+ 29+1\equiv 63\pmod{101}$ -$x\equiv 30\pmod{101} \implies x^2+x+1\equiv 30^2+ 30+1\equiv 22\pmod{101}$ -$x\equiv 31\pmod{101} \implies x^2+x+1\equiv 31^2+ 31+1\equiv 84\pmod{101}$ -$x\equiv 32\pmod{101} \implies x^2+x+1\equiv 32^2+ 32+1\equiv 47\pmod{101}$ -$x\equiv 33\pmod{101} \implies x^2+x+1\equiv 33^2+ 33+1\equiv 12\pmod{101}$ -$x\equiv 34\pmod{101} \implies x^2+x+1\equiv
34^2+ 34+1\equiv 80\pmod{101}$ -$x\equiv 35\pmod{101} \implies x^2+x+1\equiv 35^2+ 35+1\equiv 49\pmod{101}$ -$x\equiv 36\pmod{101} \implies x^2+x+1\equiv 36^2+ 36+1\equiv 20\pmod{101}$ -$x\equiv 37\pmod{101} \implies x^2+x+1\equiv 37^2+ 37+1\equiv 94\pmod{101}$ -$x\equiv 38\pmod{101} \implies x^2+x+1\equiv 38^2+ 38+1\equiv 69\pmod{101}$ -$x\equiv 39\pmod{101} \implies x^2+x+1\equiv 39^2+ 39+1\equiv 46\pmod{101}$ -$x\equiv 40\pmod{101} \implies x^2+x+1\equiv 40^2+ 40+1\equiv 25\pmod{101}$ -$x\equiv 41\pmod{101} \implies x^2+x+1\equiv 41^2+ 41+1\equiv 6\pmod{101}$ -$x\equiv 42\pmod{101} \implies x^2+x+1\equiv 42^2+ 42+1\equiv 90\pmod{101}$ -$x\equiv 43\pmod{101} \implies x^2+x+1\equiv 43^2+ 43+1\equiv 75\pmod{101}$ -$x\equiv 44\pmod{101} \implies x^2+x+1\equiv 44^2+ 44+1\equiv 62\pmod{101}$ -$x\equiv 45\pmod{101} \implies x^2+x+1\equiv 45^2+ 45+1\equiv 51\pmod{101}$ -$x\equiv 46\pmod{101} \implies x^2+x+1\equiv 46^2+ 46+1\equiv 42\pmod{101}$ -$x\equiv 47\pmod{101} \implies x^2+x+1\equiv 47^2+ 47+1\equiv 35\pmod{101}$ -$x\equiv 48\pmod{101} \implies x^2+x+1\equiv 48^2+ 48+1\equiv 30\pmod{101}$ -$x\equiv 49\pmod{101} \implies x^2+x+1\equiv 49^2+ 49+1\equiv 27\pmod{101}$ -$x\equiv 50\pmod{101} \implies x^2+x+1\equiv 50^2+ 50+1\equiv 26\pmod{101}$ -$x\equiv 51\pmod{101} \implies x^2+x+1\equiv 51^2+ 51+1\equiv 27\pmod{101}$ -$x\equiv 52\pmod{101} \implies x^2+x+1\equiv 52^2+ 52+1\equiv 30\pmod{101}$ -$x\equiv 53\pmod{101} \implies x^2+x+1\equiv 53^2+ 53+1\equiv 35\pmod{101}$ -$x\equiv 54\pmod{101} \implies x^2+x+1\equiv 54^2+ 54+1\equiv 42\pmod{101}$ -$x\equiv 55\pmod{101} \implies x^2+x+1\equiv 55^2+ 55+1\equiv 51\pmod{101}$ -$x\equiv 56\pmod{101} \implies x^2+x+1\equiv 56^2+ 56+1\equiv 62\pmod{101}$ -$x\equiv 57\pmod{101} \implies x^2+x+1\equiv 57^2+ 57+1\equiv 75\pmod{101}$ -$x\equiv 58\pmod{101} \implies x^2+x+1\equiv 58^2+ 58+1\equiv 90\pmod{101}$ -$x\equiv 59\pmod{101} \implies x^2+x+1\equiv 59^2+ 59+1\equiv 6\pmod{101}$ -$x\equiv 60\pmod{101} \implies x^2+x+1\equiv 60^2+ 60+1\equiv 25\pmod{101}$ -$x\equiv 61\pmod{101} \implies x^2+x+1\equiv 61^2+ 61+1\equiv 46\pmod{101}$ -$x\equiv 62\pmod{101} \implies x^2+x+1\equiv 62^2+ 62+1\equiv 69\pmod{101}$ -$x\equiv 63\pmod{101} \implies x^2+x+1\equiv 63^2+ 63+1\equiv 94\pmod{101}$ -$x\equiv 64\pmod{101} \implies x^2+x+1\equiv 64^2+ 64+1\equiv 20\pmod{101}$ -$x\equiv 65\pmod{101} \implies x^2+x+1\equiv 65^2+ 65+1\equiv 49\pmod{101}$ -$x\equiv 66\pmod{101} \implies x^2+x+1\equiv 66^2+ 66+1\equiv 80\pmod{101}$ -$x\equiv 67\pmod{101} \implies x^2+x+1\equiv 67^2+ 67+1\equiv 12\pmod{101}$ -$x\equiv 68\pmod{101} \implies x^2+x+1\equiv 68^2+ 68+1\equiv 47\pmod{101}$ -$x\equiv 69\pmod{101} \implies x^2+x+1\equiv 69^2+ 69+1\equiv 84\pmod{101}$ -$x\equiv 70\pmod{101} \implies x^2+x+1\equiv 70^2+ 70+1\equiv 22\pmod{101}$ -$x\equiv 71\pmod{101} \implies x^2+x+1\equiv 71^2+ 71+1\equiv 63\pmod{101}$ -$x\equiv 72\pmod{101} \implies x^2+x+1\equiv 72^2+ 72+1\equiv 5\pmod{101}$ -$x\equiv 73\pmod{101} \implies x^2+x+1\equiv 73^2+ 73+1\equiv 50\pmod{101}$ -$x\equiv 74\pmod{101} \implies x^2+x+1\equiv 74^2+ 74+1\equiv 97\pmod{101}$ -$x\equiv 75\pmod{101} \implies x^2+x+1\equiv 75^2+ 75+1\equiv 45\pmod{101}$ -$x\equiv 76\pmod{101} \implies x^2+x+1\equiv 76^2+ 76+1\equiv 96\pmod{101}$ -$x\equiv 77\pmod{101} \implies x^2+x+1\equiv 77^2+ 77+1\equiv 48\pmod{101}$ -$x\equiv 78\pmod{101} \implies x^2+x+1\equiv 78^2+ 78+1\equiv 2\pmod{101}$ -$x\equiv 79\pmod{101} \implies x^2+x+1\equiv 79^2+ 79+1\equiv 59\pmod{101}$ -$x\equiv 80\pmod{101} \implies x^2+x+1\equiv 80^2+ 80+1\equiv 
17\pmod{101}$ -$x\equiv 81\pmod{101} \implies x^2+x+1\equiv 81^2+ 81+1\equiv 78\pmod{101}$ -$x\equiv 82\pmod{101} \implies x^2+x+1\equiv 82^2+ 82+1\equiv 40\pmod{101}$ -$x\equiv 83\pmod{101} \implies x^2+x+1\equiv 83^2+ 83+1\equiv 4\pmod{101}$ -$x\equiv 84\pmod{101} \implies x^2+x+1\equiv 84^2+ 84+1\equiv 71\pmod{101}$ -$x\equiv 85\pmod{101} \implies x^2+x+1\equiv 85^2+ 85+1\equiv 39\pmod{101}$ -$x\equiv 86\pmod{101} \implies x^2+x+1\equiv 86^2+ 86+1\equiv 9\pmod{101}$ -$x\equiv 87\pmod{101} \implies x^2+x+1\equiv 87^2+ 87+1\equiv 82\pmod{101}$ -$x\equiv 88\pmod{101} \implies x^2+x+1\equiv 88^2+ 88+1\equiv 56\pmod{101}$ -$x\equiv 89\pmod{101} \implies x^2+x+1\equiv 89^2+ 89+1\equiv 32\pmod{101}$ -$x\equiv 90\pmod{101} \implies x^2+x+1\equiv 90^2+ 90+1\equiv 10\pmod{101}$ -$x\equiv 91\pmod{101} \implies x^2+x+1\equiv 91^2+ 91+1\equiv 91\pmod{101}$ -$x\equiv 92\pmod{101} \implies x^2+x+1\equiv 92^2+ 92+1\equiv 73\pmod{101}$ -$x\equiv 93\pmod{101} \implies x^2+x+1\equiv 93^2+ 93+1\equiv 57\pmod{101}$ -$x\equiv 94\pmod{101} \implies x^2+x+1\equiv 94^2+ 94+1\equiv 43\pmod{101}$ -$x\equiv 95\pmod{101} \implies x^2+x+1\equiv 95^2+ 95+1\equiv 31\pmod{101}$ -$x\equiv 96\pmod{101} \implies x^2+x+1\equiv 96^2+ 96+1\equiv 21\pmod{101}$ -$x\equiv 97\pmod{101} \implies x^2+x+1\equiv 97^2+ 97+1\equiv 13\pmod{101}$ -$x\equiv 98\pmod{101} \implies x^2+x+1\equiv 98^2+ 98+1\equiv 7\pmod{101}$ -$x\equiv 99\pmod{101} \implies x^2+x+1\equiv 99^2+ 99+1\equiv 3\pmod{101}$ -$x\equiv100\pmod{101} \implies x^2+x+1\equiv100^2+100+1\equiv 1\pmod{101}$<|endoftext|> -TITLE: Are $\mathbb{C}^2$ and $\mathbb{C}^2/(x,y)\sim(y,x)$ homeomorphic? -QUESTION [5 upvotes]: Let $A$ be the set of monic quadratics over $\mathbb C$ and let $B$ be the set of unordered pairs over $\mathbb C$ where possibly the two elements of the pair may be the same. Then the map which takes a quadratic to its roots seems to be a homeomorphism between $A$ and $B$. But I can't reconcile this with the fact that $B$ is $\mathbb C^2$ after it has been "folded in half" and so the points $\{x,x\}\in B$ ought to lie on a boundary near which $B$ is not homeomorphic to $\mathbb C^2$. Where am I going wrong? - -REPLY [2 votes]: Viète's mapping $$V:\mathbb C^2\to A=\mathbb C^2:(x,y)\mapsto (s=x+y,p=xy)$$ yields a bijective continuous map $$v :B=\frac {\mathbb C^2}{\sim}\to A=\mathbb C^2:\widetilde {(x,y)}=\widetilde {(y,x)}\mapsto (s=x+y,p=xy)$$ The mapping $V$ is proper, hence closed and thus the descended map $v$ is closed too, which finishes the proof that $v$ is a homeomorphism. -Edit -The map $$v^{-1}: A=\mathbb C^2\to B=\frac {\mathbb C^2}{\sim}:(s,p)\mapsto \widetilde {(x,y)}$$ is the one described by Oscar: -It sends the ordered pair $(s,p)$ identified with the monic polynomial $T^2-sT+p$ to the unordered pair $\widetilde {(x,y)}$ consisting of its two roots, given by the good old formula $$x,y=\frac {s\pm\sqrt {s^2-4p}}{2}$$.<|endoftext|> -TITLE: A tricky problem about matrices with no $\{-1,0,1\}$ vector in their kernel -QUESTION [9 upvotes]: A Hankel matrix is a square matrix in which each ascending skew-diagonal from left to right is constant. Let us call a matrix partial Hankel if it is the first $m<n$ rows of an $n\times n$ Hankel matrix. … If $2^n>(2n+1)^m$ then there are some vectors $\mathbf{x},\mathbf{y}\in \{0,1\}^n$ such that -$M\mathbf{x}=M\mathbf{y}$. Then $v=\mathbf{x}-\mathbf{y}$ satisfies the required properties. -So a suitable vector $v$ exists if $2^n>(2n+1)^m$, or equivalently $m<\frac{n}{\log_2(2n+1)}$.
-This works for all $m\times n$ matrices with entries from a fixed set of integers, so it does not use the special form of $M$. - -Another approach is applying Minkowski's theorem. I did not have time to work it out for this case, it requires some unpleasant estimates, but it is clear to me that the same $m$ …<|endoftext|> -TITLE: Is there only one way to make $\mathbb R^2$ a field? -QUESTION [7 upvotes]: I think I read an answer to this question before but I can't find it by searching. -We can make $\mathbb R^2$ a field by defining addition as normal and defining multiplication by complex multiplication so $(u,v) \times (x,y) = (ux-vy,uy+vx)$ and it satisfies the field axioms. I know defining multiplication by $(u,v) \times (x,y) = (ux,vy)$ does not work as $(0,1)$ for instance has no inverse. But is there another way in which we could define multiplication in $\mathbb R^2$ that would satisfy the field axioms, while keeping normal vector addition? - -REPLY [7 votes]: The answer of Hagen von Eitzen gives the correct answer if we demand that the multiplication that we are constructing is compatible with the usual multiplication on $\mathbb{R}$. However, if we do not make that assumption, then there are many non-isomorphic field structures on $\mathbb{R}^2$. -To see this, let $F$ be any field of characteristic zero and cardinality $|\mathbb{R}|$. We claim that $F$ is isomorphic as a field to $\mathbb{R}^2$ for a suitable choice of multiplication for $\mathbb{R}^2$. -To see this, we note that both $F$ and $\mathbb{R}^2$ have the structure of a $\mathbb{Q}$-vector space. Moreover, looking at the cardinalities, each has dimension $|\mathbb{R}|$ over $\mathbb{Q}$. Since vector spaces with the same dimension are isomorphic, this means that there is a linear isomorphism $\phi: F \to \mathbb{R}^2$. -Since $\phi$ is an isomorphism, we can use it to define a multiplication $\cdot : \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}^2$ by transport of structure: we simply define $a \cdot b = \phi(\phi^{-1}(a) \cdot \phi^{-1}(b))$ for all $a, b \in \mathbb{R}^2$. Now $\phi$ preserves not only addition (which it already did because it is linear), but also multiplication (by construction). Since $F$ is a field, and $\phi$ is a bijection, this proves directly that $\mathbb{R}^2$ with this multiplication and the usual addition is a field, and in fact isomorphic to $F$. -A surprising consequence is that $\mathbb{R}^2$ has a multiplication such that it is isomorphic to $\mathbb{R}$! Unfortunately, we have shown the existence of this multiplication using the axiom of choice, so it might not be possible to give a 'direct' description of this multiplication.<|endoftext|> -TITLE: In what quadratic or quartic integer ring is a prime of the form $a^4 + 4^b$ guaranteed to split? -QUESTION [6 upvotes]: The obvious choice seemed at first to be $\mathbb{Z}[\root 4 \of 4]$. But since I know next to nothing about quartic fields, I thought to look in the quadratics.
- -REPLY [4 votes]: I think Mathmo123 already gave the Best Answer (and the one which deserves the bounty), and I understand your trepidation at dealing with quartic domains. But I also think you were a bit too hasty to drop the quartic domains line of inquiry, even though it still leads to a quadratic domain as the answer. -For now I'm going to ignore the special case $a = b = 1$ to concentrate on $b$ an even nonnegative integer (and then of course $a$ has to be odd). -Suppose there is a number $\theta$, possibly complex (I don't know at this point), such that $\theta^4 - 1 = 0$, so $\mathbb{Z}[\theta]$ is almost certainly a quartic domain. Then $$(a^2 - 2^{\frac{b}{2}})(a^2 + 2^{\frac{b}{2}}) = a^4 - 4^b \theta^2.$$ -Clearly my thinking is still dominated by quadratic domains. But if it were the case that $\theta^2 = -1$, we would be done, as the right hand side of that equation would boil down to $a^4 + 4^b$. In fact, the only way to have $\theta^2 = -1$ and $\theta^4 = 1$ is for $\theta$ to be $i$ or $-i$. Then it turns out that $\mathbb{Z}[\theta] = \mathbb{Z}[i]$, a quadratic domain after all. And it works for the special case, too, as we have $1^4 + 4^1 = 5$ and $5 = (2 - i)(2 + i)$. -What about $\theta^4 + 1 = 0$? Then $\theta = \pm \sqrt{i}$. That's still a bit beyond my ability at this point.<|endoftext|> -TITLE: Is there a name for this piecewise cubic interpolation kernel -QUESTION [6 upvotes]: I went looking for a way to do piecewise cubic interpolation, like natural cubic splines, but: - -expressible as a convolution of data points with a piecewise cubic kernel; and -still C2-continuous -still passes through all the data points -still cubic between integer abscissas -kernel converges to $0$ at $\pm\infty$ - -I did the math and figured out there's only one solution -- the kernel is a piecewise cubic function, characterized as follows: -\begin{split} - y(0) &= 1\\ - y(x) &= 0,\text{ for all integer }x \neq 0\\ - y'(x) &= sgn(x) * 3(\sqrt3-2)^{|x|},\text{ for all integer }x\\ -\end{split} -which implies -$$y''(x) = -6\sqrt 3(\sqrt3-2)^{|x|},\text{ for all integer }x \neq 0$$ -This turns out to make a pretty nice interpolation, without a lot of ringing or arbitrary boundary conditions, so... what's it actually called? -EDIT: It's been a while, and still no takers. In the interim, I made a demo. See: http://nbviewer.jupyter.org/github/mtimmerm/IPythonNotebooks/blob/master/NaturalCubicSplines.ipynb - -REPLY [2 votes]: A couple of comments (that are too long for a comment). -Firstly, your splines have uniform knot spacing. So, the matrix in the usual system of linear equations is fixed, and you can invert it symbolically, once and for all. Having done this, you have a simple formula that gives you the interpolating spline as a function of the data points. -I don't know anything about convolution kernels, but the functions you're using look a lot like so-called "cardinal" splines. Again, using uniform knots located at the integers, the $i$-th cardinal spline $s_i$ satisfies $s_i(i) = 1$ and $s_i(j)=0$ for $j \ne i$. Then, the spline $s$ defined by -$$ -s(x) = \sum a_i s_i(x) -$$ -interpolates the $a_i$ values. Specifically $s(i)=a_i$ for all $i$.<|endoftext|> -TITLE: Postive-semidefiniteness of matrix with entries $1/(a_i+a_j)$ -QUESTION [5 upvotes]: Let $a_1, \ldots, a_n$ be a set of positive numbers. Define a matrix $M_{ij} = \frac{1}{a_i+a_j}$. I'm trying to prove that $M$ is positive-semidefinite. 
The hint says to use the fact that $\int_{0}^{\infty} e^{-sx}\; dx = \frac{1}{s}$ if $s > 0$. However, I don't know how this hint is useful. I've tried choosing an arbitrary vector $x$ and substituting $x^{\intercal}Mx = \sum_{i}\sum_{j} \frac{x_ix_j}{a_i+a_j}$ into $s$ and using properties of exponents to simplify the equation into something that is clearly positive, but without any luck. The term $\frac{1}{a_i+a_j}$ is simply too difficult to work with. At this point I think I'm just missing some trick that I don't know. Any help would be appreciated.
-
-REPLY [11 votes]: Use the hint as follows:
-For each $x = (x_1, \ldots, x_n)^T \in \mathbb{R}^n$,
-\begin{align}
-& x^TMx \\
-= & \sum_i\sum_j x_ix_j\frac{1}{a_i + a_j} \\
-= & \sum_i\sum_j \int_0^\infty x_i x_j e^{-(a_i + a_j)t}dt \\
-= & \int_0^\infty \left[\sum_i\sum_j x_i x_j e^{-(a_i + a_j)t} \right] dt \\
-= & \int_0^\infty \left[\sum_i\sum_j x_ie^{-a_i t}x_je^{-a_jt} \right] dt \\
-= & \int_0^\infty \left(\sum_k x_ke^{-a_kt}\right)^2 dt \\
-\geq & 0.
-\end{align}
-Thus $M$ is positive-semidefinite.<|endoftext|>
-TITLE: Is there an upper bound on the growth rate of analytic functions?
-QUESTION [7 upvotes]: This problem comes from a solution tactic used in Is there a rational surjection $\Bbb N\to\Bbb Q$?, where I discovered that there is an analytic function $f(z)$ that takes the values $f(n)=a_n$ for all $n\in\Bbb N$, as long as $a_n$ has at most polynomial growth; here I am interested in seeing how far I can relax the "polynomial growth" constraint.
-Let us call an analytic function a kernel if it satisfies $k(0)=1$ and $k(n)=0$ for all $0\ne n\in\Bbb Z$. The main kernel used in the above question/answer was the function ${\rm sincz}(z)=\frac{\sin(\pi z)}{\pi z}$, which has growth rate $O(z^{-1})$ (in the positive and negative real direction). Then ${\rm sincz}^m(z)$ is also a kernel, with growth rate $O(z^{-m})$, and any kernel yields an analytic function via $f(z)=\sum_{n\in\Bbb N}a_nk(z-n)$, which works for all sequences whose growth rate is no more than $O(\frac1{k(n)n^2})$ (or substitute some other summable series for $n^{-2}$).
-But if $k$ is a kernel and $g$ is any analytic function with $g(0)=1$, then $g(z)k(z)$ is also a kernel, which allows for much faster-decaying kernels, such as $e^{-z^2}{\rm sincz}(z)$. In fact, given any eventually monotonic analytic function $g(z)$ with $g(0)=1$, the function $\frac{{\rm sincz}^2(z)}{g(z^2)}$ is a kernel with growth rate $O(\frac1{g(z^2)z^2})$, which can create analytic functions for any sequence of growth rate less than $O(g(n^2))\supseteq O(g(n))$.
-So the problem is reduced to the question in the title:
-
-Is there any upper bound on the growth rate of analytic functions? That is, is there a definable sequence $a_n$ which grows so fast that it eventually outpaces any analytic function $f(z)$ sampled at the natural numbers?
-
-The examples given still fall far short of such fast-growing functions as the Ackermann function or Graham's sequence, but it is not obvious to me that there are not similar techniques for producing extremely fast-growing analytic functions.
-
-REPLY [7 votes]: No, there is no upper bound. In fact, we can say something much stronger.
-
-Theorem Suppose that $a_n \in \mathbb{C}$ satisfy $a_n \to \infty$, and that $A_n$ are arbitrary complex numbers. Then there exists an entire function $f(z)$ satisfying $f(a_n) = A_n$.
-
-In sketch, I'll just tell you such a function (more or less). Let $g(z)$ be an entire function with simple zeros at the $a_n$.
That such a function exists is a theorem of Hadamard. Then for a sufficiently clever choice of $\gamma_n$, the following function
-$$ \sum_{n \geq 1} g(z) \frac{e^{\gamma_n(z - a_n)}}{z - a_n} \frac{A_n}{g'(a_n)}$$
-converges everywhere, is entire, and satisfies $f(a_n) = A_n$.
-This also appears as Exercise 1 in section 5.2.3 of my copy of Ahlfors's Complex Analysis. It has been asked and answered on this site as well.
-I happen to remember this from a complex analysis graduate exam question I faced some years ago. In fact, the actual question posed of me was more astounding, and I had to prove the following.
-
-Theorem Suppose that $a_n \in \mathbb{C}$ satisfy $a_n \to \infty$ and that $\mathcal{A}_n = \{ A_n^{(0)}, A_n^{(1)}, \ldots, A_n^{(j)}\}$ are arbitrary finite lists of complex numbers. Then there exists an entire function $f(z)$ satisfying $f^{(j)}(a_n) = A_n^{(j)}$ for all $a_n$ and all prescribed $A_n^{(j)}$.
-
-In short, it is possible to specify infinitely many points with arbitrarily (but finitely many) derivatives at each point, as long as the points tend to infinity. I do not actually remember how to prove this anymore, but I'm sure that it's built off of the function above in a clever way.
-Returning to your question: you can choose $A_n$ that grow as arbitrarily fast as you want, and yet you can still find a complex analytic function that takes those values at the integers.<|endoftext|>
-TITLE: Prove with use of derivative
-QUESTION [5 upvotes]: How to prove this inequality using derivatives?
-For each $x>4$,
-$$\displaystyle \sqrt[3]{x} - \sqrt[3]{4} < \sqrt[3]{x-4} $$
-
-REPLY [6 votes]: Let $f(x)= x^{1/3} - 4^{1/3} - (x-4)^{1/3}$.
-Note that $f(x)=0$ for $x=4$ (which is just outside its domain).
-Now, taking the derivative with respect to $x$,
-$f'(x)=(1/3)x^{-2/3} -(1/3)(x-4)^{-2/3}$
-Since $0 < x-4 < x$, we have $x^{-2/3} < (x-4)^{-2/3}$, so the right hand side is negative for $x>4$.
-Thus, the function $f(x)$ is a decreasing function for $x>4$.
-So, $f(x)<0$ for all $x>4$, which yields the desired inequality.
-
-REPLY [3 votes]: If $f(x)=x^\frac{1}{3}$ we must show that $$ f(x) < f(x-4) + f(4)
-$$
-Note that for any $a,\ b>0$ if we have $f(a+b)< f(a) + f(b)$, then
-we are done. Note that this is equivalent to $$ f(a+b)-f(a) < f(b) -
-f(0) $$
-Further, this is
-equivalent to $$ \int_a^{a+b} f'(x)dx < \int_0^b f'(x) dx $$
-Since $f'(x) = \frac{1}{3} \frac{1}{x^\frac{2}{3}} $ is decreasing, the
-right-hand integral above is the larger one, which proves the inequality.<|endoftext|>
-TITLE: Why can we convert a base $9$ number to a base $3$ number by simply converting each base $9$ digit into two base $3$ digits?
-QUESTION [7 upvotes]: Why can we convert a base $9$ number to a base $3$ number by simply
- converting each base $9$ digit into two base $3$ digits?
-For example $813_9$ can be converted directly to base $3$ by noting
-\begin{array}{rl} 8_9&=22_3 \\ 1_9 &=01_3 \\ 3_9
- &=10_3 \\ \end{array}
-Putting the base $3$ digits together, we get $$813_9=220110_3$$
-
-I know it has to do with the fact that $9=3^2$ but I am not able to understand this all by this simple fact...
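-As a machine check of the digit-pairing rule (a small Python sketch of my own, not from the question):
-def base9_to_base3(digits9):
-    # each base-9 digit d = 3*q + r contributes the base-3 pair (q, r)
-    out = []
-    for d in digits9:
-        out.extend(divmod(d, 3))
-    return out
-
-digits9 = [8, 1, 3]                       # 813 in base 9
-value9 = sum(d * 9**i for i, d in enumerate(reversed(digits9)))
-digits3 = base9_to_base3(digits9)         # [2, 2, 0, 1, 1, 0]
-value3 = sum(d * 3**i for i, d in enumerate(reversed(digits3)))
-print(digits3, value9 == value3)          # [2, 2, 0, 1, 1, 0] True
-The reason it works is exactly the regrouping $d\cdot 9^i = (3q+r)\cdot 3^{2i} = q\cdot 3^{2i+1} + r\cdot 3^{2i}$, which the hint below displays with colors.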
-
-REPLY [2 votes]: Hint:
-$$\color{blue}1\cdot3^5+\color{blue}0\cdot3^4+\color{green}2\cdot3^3+\color{green}2\cdot3^2+\color{red}0\cdot3^1+\color{red}1\cdot3^0
-=(\color{blue}1\cdot3^1+\color{blue}0)3^4+(\color{green}2\cdot3^1+\color{green}2)3^2+(\color{red}0\cdot3^1+\color{red}1)3^0\\
-=\color{blue}3\cdot9^2+\color{green}8\cdot9^1+\color{red}1\cdot9^0$$
-$$\color{blue}{10}\color{green}{22}\color{red}{01}_3=[\color{blue}{10_3}|\color{green}{22_3}|\color{red}{01_3}]_9=\color{blue}3\color{green}8\color{red}1_9$$<|endoftext|>
-TITLE: The Functional Inequality $f(x) \ge x+1$, $f(x)f(y)\le f(x+y)$
-QUESTION [8 upvotes]: Let $f: \mathbb{R} \rightarrow \mathbb{R}$ be a continuous function that satisfies the following conditions.
-$$f(x)f(y)\le f(x+y)$$
-$$f(x)\ge x+1$$
-What is $f(x)$?
-It is not too difficult to find that $f(0)=1$.
-If $f(x)$ is differentiable, we can further these results so that $f'(0)=1$, and $f(x)=f'(x)$.
-However, I was not able to go any further than this. I believe that $f(x)=e^x$, but cannot prove it.
-Any help would be appreciated.
-
-REPLY [6 votes]: Let's try to get bounds on $f(1)$, for the sake of generalization to $x\in\mathbb{R}$.
-We can get a nice lower bound by doing the following algorithm:
-\begin{align}
-f(\tfrac12+\tfrac12)&\geq f(\tfrac12)^2\geq (\tfrac12+1)^2=\tfrac94\\
-f(\tfrac14+\tfrac14+\tfrac14+\tfrac14)&\geq f(\tfrac14)^4\geq (\tfrac14+1)^4=\tfrac{625}{64}
-\end{align}
-Note that I did $f(\frac14+\frac14+\frac14+\frac14)\geq f(\frac14+\frac14)^2\geq f(\frac14)^4$. We can continue this process to get for any $k\geq1$ the inequality $$f(1)\geq (\frac{1}{2^k}+1)^{2^k}$$ and it is widely known that this limit goes to, as you conjectured, $e$. We can generalize this by $$f(x)=f(\tfrac{x}{2^k}+\tfrac{x}{2^k}+\cdots+\tfrac x{2^k})\geq f(\tfrac x{2^k})^{2^k}\geq (\tfrac x{2^k}+1)^{2^k}$$ and the last expression will tend to $e^x$ as $k$ approaches infinity. Thus, we proved (notice that we use $\lim f(a_n)=f(\lim a_n)$ because $f$ is continuous) that $f(x)\geq e^x$.
-Now we're almost done. Since $e^xe^{-x}\le f(x)f(-x)\le f(0)=1$, we have $1=f(x)f(-x)$, so $f(x)=\frac{1}{f(-x)}\leq \frac{1}{e^{-x}}=e^x$, and thus, $f(x)=e^x$.
-Hope this helped!<|endoftext|>
-TITLE: If $U$ is unitary operator then spectrum $\sigma(U)$ is contained inside the unit circle
-QUESTION [6 upvotes]: In a Hilbert space, let $U$ be a continuous operator which is unitary. Prove $\sigma (U)\subseteq \Bbb{S}^1$.
-
-It is important for me to know how I am doing, and I didn't come by a clear explanation, so I would appreciate your corrections and guidance.
-Attempt: $U^{*}U=UU^{*}=I$ and therefore $U^{-1}=U^{*}$. I have
-$$
-\| U x \|^2
-= \langle U x, U x \rangle
-= \langle U^* U x, x \rangle
-= \langle x, x \rangle
-= \|x\|^2
-$$
-and the same goes for $U^*$ and therefore $\| U \| = \| U^{-1} \| = 1$.
-Looking at $U-\lambda$ I can take
-$$
--\lambda U(U^{-1}-{1\over \lambda}I)
-=U-\lambda I.
-$$
-Clearly, for $\lambda \ne 0$, $-\lambda U$ is defined and is a linear operator.
-$(U^{-1}-{1\over \lambda}I)$ is invertible if $|\lambda|\ge 1$ and therefore $\sigma (U^{-1})=\sigma (U)\subseteq \{|z|\le1\}$. But from the RHS, $U-\lambda I$ is invertible if $|\lambda|\le 1$ and therefore $\sigma(U)\subseteq \Bbb{S}^1$. I have noticed my claims tend to have holes in them many times which I cannot spot. If it is substantially true, how can I make sure it is completely formal?
-
-REPLY [9 votes]: In your proof a slight modification is needed.
-We have, $UU^* =U^*U =I$. So $U$ is invertible with $U^{-1}= U^*$.
-Also as you have already shown that $\|U\| = 1 = \|U^{-1}\|$, we must have
-$$\sigma (U) \subset \{\lambda \in \mathbb{C} : |\lambda| \leq 1\}\;\; \text{and}\;\;
-\sigma (U^{-1}) \subset \{\lambda \in \mathbb{C} : |\lambda| \leq 1\}.$$
-Now to show $\sigma (U) \subset \{\lambda \in \mathbb{C} : |\lambda| = 1\}, $ it is sufficient to show that $(U-\lambda I)$ is invertible for every $\lambda$ satisfying $|\lambda| <1$. As $U$ is already invertible, it is sufficient to show that $(U-\lambda I)$ is invertible for every $\lambda$ satisfying $0 <|\lambda| <1$.
-Now from the relation $-\lambda U (U^{-1} - \frac{1}{\lambda}I) = (U- \lambda I)$, it is clear that, for $\lambda \neq 0$,
-if $(U^{-1} - \frac{1}{\lambda}I)$ is invertible then $(U - \lambda I)$ becomes invertible.
-Now note that if $0 <|\lambda| < 1,$ then $|\frac{1}{\lambda}| > 1$, and in that case $(U^{-1} - \frac{1}{\lambda}I)$ is invertible and hence $(U- \lambda I)$ is invertible. This gives us $\sigma (U) \subset \{\lambda \in \mathbb{C} : |\lambda| = 1\}. $<|endoftext|>
-TITLE: Natural bijections between Dyck paths
-QUESTION [5 upvotes]: A Dyck path with $2n$ steps is a lattice path in $\mathbb{Z}^2$ starting at the origin $(0,0)$ and going to $(2n,0)$ using the steps $(1,1)$ and $(1,-1)$ without going below the x-axis. What are some natural bijections between the set of such Dyck paths with $2n$ steps? For example of course the identity is such a bijection, but also the map sending a given Dyck path to the same Dyck path but viewing $(2n,0)$ as the origin is a bijection. What are some other examples of natural (or easy to write down) bijections? Note that Dyck paths correspond to ballot sequences, which are sequences of length $2n$ consisting of $1$ (corresponding to $(1,1)$) and $-1$ (corresponding to $(1,-1)$) with the property that in this sequence the partial sums are never negative. So maybe it's sometimes easier to write down such bijections explicitly using ballot sequences. Note that the number of Dyck paths is the Catalan number.
-
-REPLY [4 votes]: There are tons of such bijections. Here are a few that came to my mind, and maybe I'll make some updates later. Or maybe it should be a community wiki if this hasn't been asked before.
-
-Since Dyck paths are in a natural 1-1 correspondence with binary trees, this bijection corresponds to recursively switching the left and right subtrees of every node:
-$$
-\varphi(\emptyset)=\emptyset, \qquad \varphi(uD_1dD_2)=u\varphi(D_2)d\varphi(D_1),
-$$
-where $D_1$ and $D_2$ are arbitrary Dyck paths.
-Every Dyck path $D$ with unit steps $u$ (up) and $d$ (down) can be uniquely decomposed into a sequence of raised Dyck paths:
-$$
-D=(uD_1d)(uD_2d)\dots(uD_kd).
-$$
-This can be mapped bijectively onto
-$$
-D'=(D_1u)(D_2u)\dots(D_ku)\underbrace{d\dots d}_{k},
-$$
-so that the $i$th "special" $u$ (the step following $D_i$) from the left matches the $i$th (special) $d$ from the right for $i=1,\dots,k$. Another way to find the position of the $i$th $u$ is to note that this is the last time the path $D'$ is at height $i$ until the final block of $d$'s. Alternatively, this bijection can be given recursively as:
-$$
-\psi(\emptyset)=\emptyset, \qquad \psi(uD_1dD_2)=D_1u\psi(D_2)d.
-$$
-A $321$-avoiding permutation is uniquely determined by the positions and values of its left-to-right maxima as well as by the positions and values of its right-to-left minima. The LR-maxima correspond to a Dyck path on or above the main diagonal, and the RL-minima correspond to a Dyck path on or below the main diagonal.
So this determines another bijection between Dyck paths. (E.g. if $\pi=24153$, then $D_{LRmax}=uuduuddudd$ and $D_{RLmin}=uuudduuddd$.)
-Elizalde-Deutsch bijection<|endoftext|>
-TITLE: Residue fields of schemes of finite type (over $\mathbb{Z}$)
-QUESTION [6 upvotes]: Suppose $X$ is a scheme of finite type over $\mathbb Z$. I want to prove that:
-(1) The residue fields of closed points of $X$ are finite;
-(2) For a given $q=p^n$ with $p$ prime, there is only a finite number of closed points of $X$ whose residue field is $\mathbb F_q$.
-For (1), I see that one has to use some form of Nullstellensatz. First, we suppose $X=\operatorname{Spec}(A)$ affine, and $A=\mathbb{Z}[X_1,...,X_n]/I$. We need to show that if $m$ is a maximal ideal of $A$, then $A/m$ is finite. Note $p=\mathbb Z\cap m$. If $p$ is maximal, we get that $A/m$ is of finite type over $\mathbb F_p$ thus a finite extension by Nullstellensatz and we are done. But $p$ could be the $(0)$ ideal in which case I neither conclude nor get a contradiction.
-For (2), we can again argue locally, so we need to show that $A$ contains only a finite number of maximal ideals $m$ with $A/m$ of fixed cardinality $q=p^n$. I get the impression that counting these ideals is the same as counting the automorphisms of $\mathbb{F}_{q}$ which fix $\mathbb F_p$, but the correspondence is not bijective, so we only get an upper bound, which is enough. Is this idea correct?
-
-REPLY [4 votes]: In (1), the assertion that $\mathbb{Z}\cap m$ cannot be $0$ follows from the version of the Nullstellensatz which applies to arbitrary Jacobson rings (rings in which every prime is the intersection of the maximal ideals containing it), rather than just fields. This general Nullstellensatz says that if $R$ is a Jacobson ring and $A$ is a finitely-generated $R$-algebra, then $A$ is Jacobson and for any maximal ideal $m\subset A$, $m\cap R$ is maximal in $R$ and $A/m$ is a finite extension of $R/(m\cap R)$. Applying this with $R=\mathbb{Z}$ immediately gives that $\mathbb{Z}\cap m$ cannot be $0$ in your case.
-Alternatively, you can deduce that $\mathbb{Z}\cap m\neq 0$ from just the Nullstellensatz for fields and the Artin-Tate lemma as follows. If $\mathbb{Z}\cap m=0$, then $A/m$ is a finitely generated $\mathbb{Q}$-algebra and hence is a finite extension of $\mathbb{Q}$ by the Nullstellensatz for $\mathbb{Q}$. By the Artin-Tate lemma applied to $\mathbb{Z}\subset\mathbb{Q}\subseteq A/m$, the fact that $A/m$ is a finitely generated $\mathbb{Z}$-algebra then implies that $\mathbb{Q}$ is also a finitely generated $\mathbb{Z}$-algebra. This is clearly false (any finitely generated subring of $\mathbb{Q}$ can have only finitely many different primes appearing in denominators).<|endoftext|>
-TITLE: Prove that $\frac{a}{b+2c+3d}+\frac{b}{c+2d+3a}+\frac{c}{d+2a+3b}+\frac{d}{a+2b+3c} \geq \frac{2}{3}.$
-QUESTION [5 upvotes]: Let $a,b,c,d$ be positive real numbers. Prove that $$\dfrac{a}{b+2c+3d}+\dfrac{b}{c+2d+3a}+\dfrac{c}{d+2a+3b}+\dfrac{d}{a+2b+3c} \geq \dfrac{2}{3}.$$
-
-I was thinking of trying $a \geq b \geq c$ since the inequality is cyclic. I am not sure how to use this, though, or if this would help simplify the inequality. It also doesn't look like I can really use AM-GM or Cauchy-Schwarz.
-
-REPLY [2 votes]: By C-S $\sum\limits_{cyc}\frac{a}{b+2c+3d}\geq\frac{(a+b+c+d)^2}{\sum\limits_{cyc}(4ab+2ac)}\geq\frac{2}{3}$, where the last inequality is
-$\sum\limits_{sym}(a-b)^2\geq0$.
Done!<|endoftext|>
-TITLE: Line bundle with a nowhere vanishing global section is trivial.
-QUESTION [8 upvotes]: Let $k$ be a field and $X$ be a projective variety over $k$. I think it should be true that if $L$ is a line bundle on $X$ such that there exists $s \in \Gamma(X,L)$ with $s_x \neq 0$ for all $x \in X$, then $L$ is isomorphic to $\mathcal{O}_X$. At least this is what analytic intuition would tell me. I am looking for a good proof or reference of this fact.
-
-REPLY [12 votes]: There is a basic correspondence between sections of $L$ and maps $\mathcal O_X \to L$ (this is basically the $R$-module isomorphism $M\cong Hom_R(R,M)$ given by $m \mapsto [1 \mapsto m]$ sheafified). In this case, $\mathcal O_X \to L$ is given by $1\mapsto s$. Since $s$ is nonzero in every stalk, it globally generates $L$, so this sheaf map induces an isomorphism $\mathcal O_{X,x} \to L_x$ by $1_x\mapsto s_x$ on every stalk (this is an isomorphism on each fiber since it is a nonzero linear map of one-dimensional $k(x)$-vector spaces, hence an isomorphism on stalks by Nakayama's lemma). And of course a sheaf map which induces an isomorphism on stalks is itself an isomorphism.<|endoftext|>
-TITLE: Group conjecture
-QUESTION [5 upvotes]: Conjecture:
-
-Given a finite group $G$ and a subset $A\subset G$. Then
- $\{A,A^2,A^3,\dots\}$ is a group iff $\forall n\in \mathbb N: |A^n|=|A^{n+1}|$.
-
-Given that the composition between the subsets $A,B\subset G$ is
-$A\cdot B=\{g\in G|\exists a\in A\exists b\in B:g=a\cdot b\}$.
-Example: suppose that $N\subset G$ is a normal subgroup, then the cosets
-$\{Ng,Ng^2,...\}$ have the same cardinality and constitute a group, while for random sets the cardinality seems to grow when multiplying:
-{ 3412 2143 4321 1234 } { 2143 1234 } nswap pnormal . -1 ok
-{ 3412 2143 4321 1234 } { 2143 1234 } pquotient set. {{3412,4321},{2143,1234}} ok
-{ 4321 3412 } go ok
-gen. {4321,3412} ok
-gen. {2143,1234} ok
-gen. {4321,3412} ok
-ndrop ok
-{ 2431 2341 } go ok
-gen. {2431,2341} ok
-gen. {4132,3142,4312,3412} ok
-gen. {1234,1243,3214,4213,1324,1423,3124,4123} ok
-gen. {2413,2314,3421,4321,1423,1324,2341,2431,2143,2134,3241,4231,1243,1234} ok
-gen. {4123,3124,4321,3421,2143,2134,4132,3142,4213,3214,4231,3241,3412,4312,1432,1342,2413,2314,2431,2341} ok
-gen. {4231,3241,1234,1243,3214,4213,1432,1342,1324,1423,2134,2143,2314,2413,4123,3124,4321,3421,4132,3142,4312,3412} ok
-gen. {3412,4312,2314,2413,2341,2431,2143,2134,4321,3421,3241,4231,1342,1432,3142,4132,1234,1243,3214,4213,1324,1423,3124,4123} ok
-gen. {4123,3124,3142,4132,3412,4312,1432,1342,3214,4213,2413,2314,3421,4321,1423,1324,2341,2431,2143,2134,3241,4231,1243,1234} ok
-ndrop ok
-
-REPLY [2 votes]: The set of nonempty subsets of a finite group $G$ forms a semigroup. It is known that the subgroups of this semigroup are precisely the ones of the form
-$$H/K = \{hK : h \in H \}$$
-where $K \trianglelefteq H \leq G$. This is not difficult to prove, see for example 3.57 in "A Course in Group Theory" by Rose. In any case this shows that your condition is necessary, since $|hK| = |h'K|$.
-See also this question.<|endoftext|>
-TITLE: Riemann Integrability of Step Function
-QUESTION [6 upvotes]: Call $f: [a,b] \to \mathbb{R}$ a step function if there exists a partition $P=\{x_0, \ldots, x_n \}$ of $[a,b]$ such that $f$ is constant on the interval $[x_i, x_{i+1})$.
-I'm having some issues proving that $f$ is Riemann integrable.
If $f$ was defined such that it was constant on the closed interval $[x_i, x_{i+1}]$ then it would be trivial to define a partition such that the supremum and infimum coincide, and the upper and lower sums would agree. But since we are working with a half-open interval, I'm having issues defining the partitions that will allow me to prove integrability. All proofs I've seen online rely on theorems we haven't proven in class, so I'm assuming I must be missing something simple.
-
-REPLY [2 votes]: This question came to my attention due to a deleted similar question asking about it. I'm providing a complete answer to this question as well as the other question (which has since been deleted).
-Pick a $\delta>0$ and let $Q$ be any partition of $[a,b]$ into intervals such that the maximum length of an interval in $Q$ is less than $\delta$.
-Let $|P|$ be the number of intervals in the partition $P$, so $|P|-1$ is the number of jumps in the step function. Therefore, there are at most $|P|-1$ intervals in $Q$ that contain a jump. For all of the remaining $|Q|-|P|+1$, $f$ is constant.
-Suppose that $\{Q_1,\dots,Q_k\}$ contain jumps of $f$ and $\{Q_{k+1},\dots,Q_n\}$ do not contain jumps. In this case,
-$$
-U(Q,f)=\sum_{i=1}^n \max_{x\in Q_i}f(x)\Delta Q_i=\sum_{i=1}^k \max_{x\in Q_i}f(x)\Delta Q_i+\sum_{i=k+1}^n \max_{x\in Q_i}f(x)\Delta Q_i.
-$$
-Similarly,
-$$
-L(Q,f)=\sum_{i=1}^n \min_{x\in Q_i}f(x)\Delta Q_i=\sum_{i=1}^k \min_{x\in Q_i}f(x)\Delta Q_i+\sum_{i=k+1}^n \min_{x\in Q_i}f(x)\Delta Q_i.
-$$
-Since the maximum and minimum coincide for $Q_i$ with $k+1\leq i\leq n$, we have that
-$$
-\sum_{i=k+1}^n \max_{x\in Q_i}f(x)\Delta Q_i=\sum_{i=k+1}^n \min_{x\in Q_i}f(x)\Delta Q_i.
-$$
-Therefore,
-$$
-U(Q,f)-L(Q,f)=\sum_{i=1}^k \max_{x\in Q_i}f(x)\Delta Q_i-\sum_{i=1}^k \min_{x\in Q_i}f(x)\Delta Q_i.
-$$
-Let $M$ be the maximum value of $f$ and $m$ be the minimum value of $f$ over $[a,b]$. Then
-$$
-U(Q,f)-L(Q,f)\leq \sum_{i=1}^k M\Delta Q_i-\sum_{i=1}^k m\Delta Q_i.
-$$
-Since the length of an interval in $Q$ is at most $\delta$, it follows that
-$$
-U(Q,f)-L(Q,f)\leq kM\delta-km\delta.
-$$
-Since $k\leq |P|-1$, it follows that
-$$
-U(Q,f)-L(Q,f)\leq (|P|-1)(M-m)\delta,
-$$
-which goes to zero as $\delta$ goes to $0$ (as $|P|$, $M$, $m$, and $1$ are all constants).
-Therefore, $f$ is Riemann integrable.<|endoftext|>
-TITLE: A kind of converse to the Jordan curve theorem
-QUESTION [11 upvotes]: The Jordan curve theorem in $\mathbb{R}^2$ says that if $S$ is a simple closed curve in $\mathbb{R}^2$, then $S$ splits $\mathbb{R}^2$ into exactly two connected components $A$ and $B$.
-I was thinking about a kind of converse to this problem which is as follows.
-Let $S$ be a closed and bounded set in $\mathbb{R}^2$ and let $x,y\in\mathbb{R}^2 \setminus S$. Define
-$$A:=\{a\in\mathbb{R}^2 \setminus S: a\text{ and } x \text{ are path connected in } \mathbb{R}^2\setminus S\}$$
-$$B:=\{b\in\mathbb{R}^2\setminus S: b\text{ and } y\text{ are path connected in }\mathbb{R}^2 \setminus S\}$$
-If $A\cap B=\emptyset$, does there exist a connected $T\subset S$ such that $\mathbb{R}^2 \setminus T$ is split into exactly two connected components, one which contains $A$ and the other that contains $B$?
-Any help or references on this would be much appreciated.
-Note: I've made an edit to reflect the comments using the topologist's sine curve. I believe that now this will not be a counter-example because the "quasi-circle" is obviously connected. Also I apologize for not being more specific originally.
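-For readers who want to see the quasi-circle, here is a rough matplotlib sketch (my own construction; the particular arc is an arbitrary choice, and any simple arc avoiding the graph works):
-import numpy as np
-import matplotlib.pyplot as plt
-
-x = np.linspace(1e-3, 1, 20000)
-plt.plot(x, np.sin(np.pi / x), lw=0.5)    # graph of sin(pi/x) on (0, 1]
-plt.plot([0, 0], [-1, 1], lw=2)           # limit segment {0} x [-1, 1]
-t = np.linspace(0, np.pi, 200)            # arc from (1, 0) to (0, 0), dipping below y = -1
-plt.plot(0.5 + 0.5 * np.cos(t), -1.5 * np.sin(t), lw=2)
-plt.gca().set_aspect("equal")
-plt.show()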
-
-REPLY [10 votes]: Consider this variant of the extended topologist's sine curve:
-
-(picture borrowed from this question)
-It is the closure of the graph of the function $x\mapsto \sin(\pi/x)$ defined on the half open interval $(0,1]$ with the points $(0,0)$ and $(1,0)$ connected by a simple arc. It is a closed and bounded subset $S$ of $\mathbf R^2$. Its complement has exactly $2$ path-wise connected components, but there is no simple closed curve $T$ contained in $S$.
-
-REPLY [7 votes]: Here's a counterexample which I think works.
-Let $S \subset \Bbb R^2$ be the subspace consisting of points $(x, \sin(1/x))$ for $x \in (0, 1]$ and points in $\{0\} \times [-1, 1]$, union an arc joining $(0, 0)$ with $(1, \sin(1))$; namely, the quasi-circle.
-$S$ is a closed and bounded subset of $\Bbb R^2$. One can see that $S$ separates $\Bbb R^2$ into two disjoint components by noting that the image of any path $\gamma$ not hitting $S$ has a positive distance from $S$, as both are compact subspaces of $\Bbb R^2$. Then one can - with care - replace the $\{0\} \times [-1, 1]$ part of $S$ (because $\gamma$ has distance at least $\delta > 0$ from it) to turn it into a circle and invoke the Jordan curve theorem.
-However, there is no such $T$: any closed loop in $S$ must miss the origin $(0, 0)$ as $S$ is not path connected at that point. But then obviously such loops do not separate $\Bbb R^2$.<|endoftext|>
-TITLE: What does echelon mean?
-QUESTION [7 upvotes]: When you solve a system of linear equations, you write down the augmented matrix and reduce this to reduced row echelon form. What is the meaning of the word echelon?
-
-REPLY [5 votes]: From Online Etymology Dictionary:
-
-echelon (n.)
-1796, echellon, "step-like arrangement of troops," from French échelon "level, echelon," literally "rung of a ladder," from Old French eschelon, from eschiele "ladder," from Late Latin scala "stair, slope," from Latin scalae (plural) "ladder, steps," from PIE $\ast$skand- "to spring, leap" (see scan (v.)). Sense of "level, subdivision" is from World War I.
-
-Steven Schwartzman's The Words of Mathematics: An Etymological Dictionary of Mathematical Terms (MAA, 1994) also attributes the root of the term to the French word échelon and its meaning of "rung of a ladder" (link from Google Books).
-The earliest appearance of "echelon form" as a linear algebra term that I can find is in Birkhoff and Mac Lane (1953), A Survey of Modern Algebra (link from Google Books). Yet, Nathan Jacobson's Lectures in Abstract Algebra: II. Linear Algebra (1953) does not contain this term. I guess the term was introduced to linear algebra more or less around the 1950s.<|endoftext|>
-TITLE: Another beta integral due to Cauchy.
-QUESTION [6 upvotes]: I have the following identity which I want to prove:
-$$C(x,y):= \int_{-\infty}^{\infty} \frac{dt}{(1+it)^x(1-it)^y} = \frac{\pi \cdot 2^{2-x-y}\Gamma(x+y-1)}{\Gamma(x)\Gamma(y)}$$ where $\Re(x+y)>1$.
-I proved so far that $C(x,y) = \frac{2^{2n} (x)_n (y)_n }{(x+y-1)_{2n}}C(x+n,y+n)$ and that: $C(x+n,y+n) = \int_{-\infty}^{\infty} \frac{dt}{(1+t^2)^n(1+it)^x(1-it)^y}$; now they ask me to set $t\to t/\sqrt{n}$ and then let $n\to \infty$, and I don't see what this integral converges to.
-P.S.
-Here I use the following notation: $(a)_n = a(a+1)(a+2)\ldots (a+n-1)$.
-Thanks in advance.
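-A numerical sanity check of the target identity may also help (a scipy-based sketch I am adding; it takes $x,y$ real, in which case the imaginary part of the integrand is odd in $t$ and integrates to zero):
-import numpy as np
-from scipy.integrate import quad
-from scipy.special import gamma
-
-def lhs(x, y):
-    # real part of 1 / ((1+it)^x (1-it)^y), integrated over the whole line
-    f = lambda t: ((1 + 1j * t)**(-x) * (1 - 1j * t)**(-y)).real
-    val, _ = quad(f, -np.inf, np.inf, limit=400)
-    return val
-
-def rhs(x, y):
-    return np.pi * 2**(2 - x - y) * gamma(x + y - 1) / (gamma(x) * gamma(y))
-
-for x, y in [(1.0, 1.0), (1.5, 1.25), (2.0, 0.75)]:
-    print(x, y, lhs(x, y), rhs(x, y))     # the two columns agree to quadrature accuracy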
-
-REPLY [2 votes]: \begin{align*}
-C(x+n,y+n) &= \int_{-\infty}^\infty \frac{dt}{(1+t^2)^n(1+it)^x(1-it)^y} \\
-&= \int_{-\infty}^\infty \frac{du/\sqrt{n}}{(1+u^2/n)^n(1+iu/\sqrt{n})^x(1-iu/\sqrt{n})^y}
-\qquad (\textrm{let }t=u/\sqrt{n}) \\
-&= \int_{-\infty}^\infty \frac{1}{\sqrt{n}} e^{-u^2}\,du + O(1/n)
-\qquad (\textrm{standard limit for }e) \\
-&= \sqrt{\frac{\pi}{n}} + O(1/n)
-\qquad (\textrm{standard integral})
-\end{align*}
-Recall that $\lim_{n\rightarrow\infty}(1+1/n)^n = e$.
-Similarly, $\lim_{n\rightarrow\infty}(1+x/n)^n = e^x$.
-Thus, we find
-$$\lim_{n\rightarrow\infty}\frac{1}{(1+u^2/n)^n} = e^{-u^2}.$$
-Also
-$$\lim_{n\rightarrow\infty}\frac{1}{(1+iu/\sqrt{n})^x}
-= \lim_{n\rightarrow\infty}\frac{1}{(1-iu/\sqrt{n})^y}=1.$$<|endoftext|>
-TITLE: Functor category between two small categories is not small?
-QUESTION [7 upvotes]: Let $C,D$ be two small categories, i.e. small sets of objects $O_C, O_D$ and small hom-sets $\mathrm{Hom}_C(c,c')$ and $\mathrm{Hom}_D(d,d')$.
-The set of objects of $D^C$ is the set of all functors $F : C \to D$, and such an object is determined by a collection of functions, namely its function $O_C \to O_D$ on objects and its functions on hom-sets $\mathrm{Hom}_C(c,c') \to \mathrm{Hom}_D(Fc, Fc')$; since all sets involved here are small and the product of small sets indexed by a small set is again small, the set of objects of $D^C$ is again small.
-Now let $F,G : C \to D$ be two functors; the hom-set $D^C(F,G)$ has for elements collections of morphisms $Fc \to Gc$, i.e. elements of the hom-sets of $D$. Thus a natural transformation is an element of the product over all $c \in O_C$ of the hom-sets between $Fc$ and $Gc$, and by the same reasoning we conclude that $D^C(F,G)$ is contained in a small set, thus small.
-So why does Mac Lane tell me that the hom-set of a functor category need not be small? Or where is my mistake? Did I misinterpret something? (Page 40, right before the last paragraph.)
-
-REPLY [9 votes]: Everything you say is correct. I don't have Mac Lane in front of me at the moment, but I would guess he actually only assumed that $C$ and $D$ were locally small, so while each Hom-set is small, the set of objects might be large.<|endoftext|>
-TITLE: Derivative of the Logarithm - Dirac
-QUESTION [6 upvotes]: So I stumbled across P. Dirac's book Principles of Quantum Mechanics and I found something really peculiar on page 61 of the Fourth Edition.
-He states that usually we accept that $$\frac{d}{dx}\log(x)=\frac{1}{x}$$ for real positive $x$.
-However the author states that it can be extended to $$\frac{d}{dx}\log(x)=\frac{1}{x}-i\pi\delta(x)$$
-He argues the following way:
-\begin{equation}
-\int_{-a}^{a}\frac{d}{dx} \log(x)\,dx=\int_{-a}^{a}\frac{1}{x}-i\pi\delta(x)\,dx
-\end{equation}
-\begin{equation}
-\log(-1)=-i\pi \qquad \big|\ \text{apply } \exp(\cdot)
-\end{equation}
-\begin{equation}
-e^{\log(-1)}=e^{-i\pi}
-\end{equation}
-\begin{equation}
--1=e^{-i\pi}
-\end{equation}
-Intuitively I think it has to do with Riemann surfaces and I think one could somehow use this to extend the definition of complex exponentials and write them the following way:
-\begin{equation}
-e^{i\theta}=\cos(\theta)+i\sin(\theta)+\hat{j}m
-\end{equation}
-where $m$ indicates the $m$th Riemann surface and $\hat{j}$ is a kind of vector basis.
-$$m=(\theta-\text{mod}_{2\pi}(\theta))/2\pi$$
-
-REPLY [5 votes]: We will define the distribution $\frac{d\log(x)}{dx}$ through the regularization given by
-$$\frac{d\log(x)}{dx} \sim \lim_{\epsilon \to 0^+}\left(\frac{d\log(x+i\epsilon)}{dx}\right)$$
-Then, for any $a<0
-TITLE: What is Representation Theory?
-QUESTION [19 upvotes]: I'm beginning a course that uses representation theory, but I do not really understand what that is about. In the text I am following, I have the following definition:
-
-A representation of the Lie group $G$ on the vector space $V$ is a continuous mapping $\cdot \colon G \times V \to V$ such that
-
-for each $g \in G$, the translation $T_{g} \colon V \to V$ given by $T_g(v) = g \cdot v$, $v \in V$, is a linear map;
-$T_{e} = \mathrm{Id}$ where $e$ is the identity element of $G$;
-$T_{gh} = T_{g} T_{h}$ for $g, h \in G$. We call the pair $(V,\cdot)$ a real (resp. complex) representation and $V$ the representation space.
-
-
-What is the motivation behind this sort of definition? From my google searches I have seen different definitions, but I still don't really know why what I am reading is defined that way. Why a Lie group and not a regular group? etc.
-
-REPLY [21 votes]: One of the most common ways that groups arise "in the wild" is as sets of symmetries of an object. For example
-
-The symmetric group $S_n$ is the group of all permutations of $\{1,\ldots,n\}$
-The dihedral group $D_{2n}$ is the group of symmetries of a regular $n$-gon
-The Lie group $\mathrm{GL}_n(\mathbb R)$ is the group of invertible linear maps on $\mathbb R^n$
-
-More generally, given a general abstract group $G$, we regularly consider the case of $G$ acting on a set $X$, and we might ask the question: given some set $X$, what is its "group of symmetries"?
-Representation theory asks the converse to this question:
-
-Given a group $G$, what sets does it act on?
-
-Whilst it is possible to attempt to answer this in general, a useful starting point is to restrict the sets in question to sets we know an awful lot about: vector spaces.
-
-Definition: Let $G$ be a group, and $V/k$ be a vector space. A representation of $G$ is a group action of $G$ on $V$ that is linear (so preserves the vector space structure of $V$) - i.e. for every $g\in G$, $u,v\in V$, $\mu,\lambda\in k$ $$g(\lambda u+\mu v) = \lambda g(u) + \mu g(v).$$
-
-This is the definition that you have been given. With $V$ as before, an equivalent definition is this:
-
-A representation of $G$ is a group homomorphism
- $$\rho: G\to\mathrm{GL}(V)$$
-
-Indeed, a group action of $G$ on $V$ assigns to each $g$ an invertible linear map. And given a homomorphism $\rho$, $G$ acts on $V$ via $g\cdot v = \rho(g)v$.
-In the case that $G$ is a Lie group (or more generally a topological group), then we require this action/representation to be continuous.
-Representation theory allows us to translate our viewpoint by viewing (a quotient of) our group as a group of linear maps on a vector space. This allows us to tackle problems in group theory using the familiar and powerful tools of linear algebra. For example, we can take the trace of a linear map, and the identity $\mathrm{tr}(ABA^{-1}) = \mathrm{tr}(B)$ tells us that the trace of a representation (called the character of the representation) is constant on the conjugacy classes of a group. We can also consider determinants, characteristic polynomials, dual vector spaces (or the dual representation), dimension and many more of our favourite concepts from linear algebra.
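-To make the linear-algebra point concrete, here is a small numpy sketch (my addition, not part of the original answer): the permutation representation of $S_3$, with a check that it is a homomorphism and that its character $\chi(g)=\mathrm{tr}\,\rho(g)$ is constant on conjugacy classes.
-import numpy as np
-from itertools import permutations
-
-def rho(g):
-    # permutation matrix sending basis vector e_i to e_{g(i)}
-    M = np.zeros((len(g), len(g)), dtype=int)
-    for i, gi in enumerate(g):
-        M[gi, i] = 1
-    return M
-
-S3 = list(permutations(range(3)))
-for g in S3:
-    for h in S3:
-        gh = tuple(g[h[i]] for i in range(3))   # composition (g h)(i) = g(h(i))
-        assert np.array_equal(rho(gh), rho(g) @ rho(h))
-
-for g in S3:
-    print(g, np.trace(rho(g)))   # 3 on the identity, 1 on transpositions, 0 on 3-cycles
-Here the character counts fixed points, and it indeed depends only on the cycle type, i.e. on the conjugacy class.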
-Representations are certainly powerful:
-
-There are theorems (for example, concerning Frobenius groups) whose only known proofs use representation theory.
-There are groups (such as $\mathrm{Gal}(\overline{\mathbb Q}/\mathbb Q)$) which we only really know how to study via their representations.<|endoftext|>
-TITLE: Interior estimate for derivatives of harmonic function
-QUESTION [5 upvotes]: I'm learning PDE from the book of Trudinger and Gilbarg and I'm attempting to prove the following theorem:
-
-Let $u$ be harmonic in $\varOmega \subset \mathbb{R}^n$ (open, connected, bounded) and let $\varOmega'$ be any compact subset of $\varOmega$. Then for any multi-index $\alpha$ we have
-\begin{align}
-\sup_{\varOmega'} \big\lvert D^{\alpha}u \big\rvert
-&\leq
-\left( \dfrac{n \left\lvert\alpha\right\rvert}{d} \right)^{\left\lvert\alpha\right\rvert} \sup_{\varOmega} \left\lvert u\right\rvert, &d&=d\left(\varOmega', \partial\varOmega\right).
-\end{align}
-
-They say the theorem is proved by applying the estimate
-\begin{align}
-\big\lvert D u\left(y\right)\big\rvert
-\leq
-\frac{n}{d_y} \sup_{\varOmega} \left\lvert u\right\rvert, \quad d_y
-=
-d\left(y, \partial\varOmega\right)
-\end{align}
-successively to equally spaced nested balls.
-The natural thing to do is induction on $k = \left\lvert\alpha\right\rvert$.
-The case $k=1$ follows from the above estimate.
-So assume the theorem holds for every compact subset $\varLambda$ of $\varOmega$ and multi-index of order $k-1$ and let $\alpha$ be a multi-index with $\left\lvert\alpha\right\rvert = k$.
-Given $x \in \varOmega'$ we have $D^{\alpha}u\left(x\right) = \dfrac{\partial D^{\beta}u}{\partial x_i}\left(x\right)$ for some $i$ and $\beta$ with $\left\lvert\beta\right\rvert = k-1$.
-Applying the mean value theorem for $D^{\beta}u$ in a ball $B\left(x, R\right)$ with $R < d\left(x, \partial\varOmega\right)$ we obtain
-$$\big\lvert D^{\alpha}u\left(x\right)\big\rvert \leq \frac{n}{R} \sup_{\partial B\left(x, R\right)} \big\lvert D^{\beta}u\big\rvert .
-$$
-Using the induction hypothesis with $\varLambda = \partial B\left(x, R\right)$ we thus have
-$$
-\big\lvert D^{\alpha}u\left(x\right)\big\rvert
-\leq \frac{n}{R} \,
-\left( \frac{n \left\lvert\beta\right\rvert}{d\big(\partial B\left(x, R\right), \partial\varOmega\big)} \right)^{\left\lvert\beta\right\rvert}\sup_{\varOmega} \left\lvert u\right\rvert,$$
-leading to nothing. Any suggestions? Where do the nested balls come in?
-Thanks!
-
-REPLY [2 votes]: Your solution is practically there. You're right that the evenly spaced nested balls are what's missing.
-Fix $y \in \Omega$ and fix some multi-index $\alpha$ with $| \alpha | = k$. Let $d = \operatorname{dist}(y, \partial \Omega)$. Let $d_{0} = \frac{d}{|\alpha|}$.
-Then consider the ball $B = B(y, d_{0})$. Let $\alpha_{1}$ be a multi-index such that $|\alpha_{1}| = k-1$ and $\alpha_{1} < \alpha$. Based on the estimate you already know, we have that
-$$
-|D^{\alpha} u(y)| \le \frac{n}{d_{0}} \sup_{B(y,d_{0})} |D^{\alpha_{1}} u|.
-$$
-However, for every $y_{1} \in B(y, d_{0})$ we can apply the same estimate on the ball $B(y_{1}, d_{0})$ to gain
-$$
-|D^{\alpha} u(y)| \le \frac{n}{d_{0}} \sup_{y_{1} \in B(y,d_{0})} \left[ \frac{n}{d_{0}} \sup_{B(y_{1}, d_{0})} |D^{\alpha_{2}}u| \right] \le \frac{n}{d_{0}} \left[ \frac{n}{d_{0}} \sup_{B(y,2 d_{0})} |D^{\alpha_{2}} u| \right],
-$$
-where $|\alpha_{2}| = k-2$, and $\alpha_{2} < \alpha_{1}$.
-Repeating inductively, we attain the desired result
-$$
-| D^{\alpha} u(y) | \le \left( \frac{n}{d_{0}} \right)^{|\alpha|} \sup_{B(y, |\alpha| d_{0})} |u| \le \left( \frac{n |\alpha|}{d} \right)^{|\alpha|} \sup_{\Omega} |u|,
-$$
-where the final bound follows since we chose $d_{0} = \frac{d}{|\alpha|}$, so that $\frac{n}{d_0} = \frac{n|\alpha|}{d}$ and $B(y, |\alpha| d_{0}) \subset \Omega$.<|endoftext|>
-TITLE: How many metrics are there on a set up to topological equivalence?
-QUESTION [16 upvotes]: I want to find the number of topologically nonequivalent metrics on a set.
-I think if the set is finite then every metric on it is equivalent to the discrete metric, so there is only one metric up to equivalence. Now what is the number of nonequivalent metrics on an infinite set?
-
-REPLY [13 votes]: Let $S$ be an infinite set. A metric on $S$ is a function $S\times S\to \mathbb{R}$, and there are only $$|\mathbb{R}|^{|S\times S|}=(2^{\aleph_0})^{|S|}=2^{\aleph_0\cdot |S|}=2^{|S|}$$
-such functions. Thus a trivial upper bound on the number of metrics on $S$ up to equivalence is $2^{|S|}$. (In contrast, there are $2^{2^{|S|}}$ different topologies up to homeomorphism on $S$, so the restriction to metric spaces in this question really does make a difference!)
-There are actually two reasonable definitions of when two metrics on $S$ are "topologically equivalent". The first definition is that they give the same topology, i.e. that the identity map $S\to S$ is a homeomorphism from the first metric to the second. With this definition, it is easy to see that there are always $2^{|S|}$ inequivalent metrics on $S$. For instance, for any subset $T\subseteq S$, you can put a metric on $S\times (\mathbb{N}\cup\{\infty\})$ such that with respect to that metric, the sequence $(s,n)$ converges to $(s,\infty)$ iff $s\in T$.
Choosing a bijection between $S$ and $S\times (\mathbb{N}\cup\{\infty\})$, this gives inequivalent metrics on $S$ for each subset $T\subseteq S$.
-The second and more interesting definition of "topologically equivalent" metrics on $S$ is that the two metrics give rise to homeomorphic metric spaces (but the homeomorphism need not be the identity map). That is, we are asking how many metrizable topological spaces of a given cardinality there are up to homeomorphism. With this definition, it is also true that there are $2^{|S|}$ inequivalent metrics on $S$, but the proof is more difficult.
-Before diving into the proof, let us review the notion of Cantor-Bendixson rank and make some related definitions. If $X$ is a topological space, write $D(X)$ for the set of non-isolated points of $X$ (the "Cantor-Bendixson derivative of $X$"). For each ordinal $\alpha$, define $D^\alpha(X)$ by induction:
-
-$D^0(X)=X$
-$D^{\alpha+1}(X)=D(D^\alpha(X))$
-For $\alpha$ limit, $D^\alpha(X)=\bigcap_{\beta<\alpha} D^\beta(X)$.
-
-For $x\in X$, the least $\alpha$ such that $x\not\in D^{\alpha+1}(X)$ is called the Cantor-Bendixson rank of $x$ (if no such $\alpha$ exists, we will say $x$ is unranked). If $x\in X$ is unranked, say that the type of $x$ is the lim sup of the ranks of ranked points approaching $x$ plus 1 (i.e., $\inf_U\sup_{y\in U\text{ ranked}}(\operatorname{rank}(y)+1)$, where $U$ ranges over all neighborhoods of $x$). Equivalently, the type of $x$ is what the rank of $x$ would be if you removed all other unranked points from the space $X$.
-Lemma 1: For each ordinal $\alpha>0$, there exists a metrizable space $X_\alpha$ such that $|X_\alpha|=|\alpha|+\aleph_0$, every point of $X_\alpha$ has rank $\leq\alpha$, and there exists a point $x_\alpha\in X_\alpha$ of rank $\alpha$.
-Proof: Use induction on $\alpha$. Assuming you have such spaces $X_\beta$ for each $\beta<\alpha$ and points $x_\beta\in X_\beta$ of rank $\beta$, let $X_\alpha$ be a disjoint union of countably many copies of each $X_\beta$ and one additional point $x_\alpha$, with the distance from $x_\alpha$ to the $n$th copy of $x_\beta$ being $1/n$. (For $\beta=0$, take $X_\beta$ to be a point.)
-Lemma 2: For each ordinal $\alpha>0$, there exists a metrizable space $Y_\alpha$ with $|Y_\alpha|=|\alpha|+\aleph_0$ which contains an unranked point of type $\alpha$ and no unranked points of type $\beta$ for any nonzero $\beta\neq\alpha$.
-Proof: Let $Y_\alpha=X_\alpha\sqcup\mathbb{Q}/\sim$, where $\sim$ identifies $x_\alpha\in X_\alpha$ with $0\in\mathbb{Q}$.
-We can now construct $2^{|S|}$ nonhomeomorphic metrizable spaces of cardinality $|S|$ for any infinite set $S$. Namely, if $A$ is any set of $|S|$ nonzero ordinals, all of which have cardinality $\leq |S|$, we can take the space $Z_A=\coprod_{\alpha\in A} Y_\alpha$. We can recover $A$ from $Z_A$ as the set of nonzero ordinals $\alpha$ such that $Z_A$ contains an unranked point of type $\alpha$. Every such $Z_A$ has cardinality $|S|$, and $Z_A$ is homeomorphic to $Z_B$ iff $A=B$. Since there are $2^{|S|}$ such sets $A$, this gives $2^{|S|}$ nonhomeomorphic metrizable spaces of cardinality $|S|$.
-If $|S|\geq 2^{\aleph_0}$, there is another rather different way to get $2^{|S|}$ metrizable spaces of cardinality $|S|$. Given any total order $\leq$ on $S$, we can form a metric space which "draws a picture of the order" as follows. For each point of $S$, we have a 2-dimensional disk in the metric space. Whenever $x
-TITLE: Prove $a+b \leq 1$ with two disjoint squares in a larger square.
-QUESTION [5 upvotes]: Two disjoint squares are located inside a square of side $1$. If the lengths of the sides of the two squares are $a$ and $b$, prove $a+b \leq 1$.
-
-I thought about setting up coordinate axes with the leftmost vertex of the square at the origin. Then I will have to show that the sum of the side lengths is less than or equal to $1$ while the two squares are disjoint and still inside the square. This is where I get stuck.
-
-REPLY [2 votes]: [ NOTE: this is a heavy edit of the original answer, which fixes a wrong assumption and adds some more detail. ]
-Here is the outline of a proof. The squares will be considered "open sets" i.e. not including their actual borders, which allows for a vertex of one square to lie on the side of the other, or two sides from different squares to overlap - and still be disjoint. (With the alternative interpretation of squares as "closed sets" the inequality would become a strict one $a + b \lt 1$.)
-The proof relies on the following two propositions.
-(1) A line can be drawn which separates the squares into different half-planes.
-This follows from the general statement about separability of convex bodies in an n-dimensional Euclidean space known as the Hyperplane separation theorem:
-
-if both disjoint convex sets are open, then there is a hyperplane in between them
-
-In the 2-dimensional case, the hyperplane is simply a straight line, and one such possible line is actually the separating bitangent (common tangent) of the two convex areas.
-(2) Let $\Delta ABC$ be a right triangle with the right angle at $A$. The largest square "inscribed" in $\Delta ABC$ has a vertex at $A$, two sides along the legs $AB, AC$, and $AA'$ as a diagonal, where $A' \in BC$ is the foot of the bisector through $A$.
-For an arbitrary triangle (not necessarily a right one), it can be shown that the largest inscribed square has two vertices on one side of the triangle, and (at least) one other vertex on another side. It can also be shown that the side of the square is $\frac{a h}{a + h}$ where $a$ is the length of the side where the square "sits" and $h$ is the length of the corresponding altitude. This is proved for example at Maximum area of a square in a triangle.
-In the case of a right triangle, there are just two candidate such squares, one "sitting" on both legs, the other one on the hypotenuse.
-
-Let $a, b, c$ be the lengths of the sides, and $h$ the length of the altitude through $A$. Then the general formula gives the sides of the squares as $\frac{b c}{b + c}$ and $\frac{a h}{a + h}$ respectively. To conclude the proof, it remains to be shown that:
-$$
-\frac{b c}{b + c} \ge \frac{a h}{a + h}
-$$
-Noting that $b c = a h = 2 S$ where $S$ is the area of the triangle, and $h = \frac{b c}{a}$, the inequality reduces to:
-$$
-a + \frac{b c}{a} \ge b + c \\
-a^2 - a (b + c) + b c \ge 0 \\
-(a - b) (a - c) \ge 0
-$$
-Since $a \ge b, c$ the last inequality holds, and proposition (2) is thus proved.
-Now, back to the question at hand. By proposition (1) there is a line separating the two inner squares, for example:
-
-If the separating line is parallel to one of the sides of the outer square, then each inner square can be inscribed in a larger one with sides parallel to the outer square edges, and the two newly constructed squares have sides which obviously add up to $\le 1$ so it follows that $a + b \le 1$.
-
-If the separating line is not parallel to any of the outer square sides, then it will intersect all of them - two internally (within the side segments of the outer square) and two externally. This forms two right triangles sharing part of the hypotenuse, each of them containing one of the inner squares. From proposition (2) each of the inner squares must be smaller than the respective one drawn in red, whose sides obviously add up to $1$, so it again follows that $a + b \le 1$, which concludes the proof.
-
-(This last drawing shows the case when the separating line intersects two opposite edges of the outer square. The layout is slightly different in the other case, when it intersects two adjacent edges, but the same construction and same argument apply in that case as well.)
-P.S. I am a bit curious about the background/context of this question, since it doesn't look like a run-of-the-mill exercise.<|endoftext|>
-TITLE: Maximal Ideals in Ring of Continuous Functions
-QUESTION [13 upvotes]: Dummit and Foote, 7.4.33(a): Let $R$ be the ring of all continuous functions $[0,1] \to \mathbb{R}$ and let $M_c$ be the kernel of evaluation at $c \in [0,1]$, i.e. all $f$ such that $f(c) = 0$. Show that if $M$ is a maximal ideal in $R$ then $M = M_c$ for some $c$.
-
-I am aware of proofs that use the compactness of $[0,1]$; however I have been told that a simpler proof exists using only the isomorphism theorems for rings along with basic facts about ideals. I have tried on my own to prove it this way but still have no success; is it possible?
-(My attempts go something like: since $M$ is maximal, the quotient ring $R/M$ is a field. Then it would be nice to use the fact that $R/M$ has no zero divisors, but I can't quite make it work. Alternatively, consider $M \cap M_c$.)
-
-REPLY [3 votes]: Maximal ideals in the ring of continuous functions were studied by Edwin Hewitt in
-Hewitt, Edwin. Rings of real-valued continuous functions. I. Trans. Amer. Math. Soc. 64, (1948). 45–99.
-He called such ideals hyper-real. They are not always principal :-)<|endoftext|>
-TITLE: Solving a logarithmic equation that has an exception to the power rule
-QUESTION [25 upvotes]: Given the following:
-$$\log_3({x^2-3})^2=2$$
-If I were to use the power rule, I would do:
-$$2\log_3({x^2-3})=2$$
-$$\log_3({x^2-3})=1$$
-$$3^1=x^2-3$$
-$$3+3=x^2$$
-$$x=\pm\sqrt6$$
-Substituting back into the original equation, it is correct. However, there is a missed answer:
-$$x=0$$
-Substituting that into the original equation, it is correct as well, but while using the power rule, it has been overlooked.
-Obviously, this is a rather simple question, so I could have just done $3^2=(x^2-3)^2$ and solved it like that, in which case the answer of $0$ would appear. However, for more complicated questions, how would one go about doing it? And are there exceptions to this exception?
-
-REPLY [2 votes]: There is NO 'exception to the power rule' here, it's your error in expression handling. The error is similar to saying, if $(x+1)^2 > 9$ then $x+1>3$.
-The square in the inner function guarantees the logarithm argument is non-negative, and the equation's domain is all reals with two points excluded:
-$$\{x:(x^2−3)^2 > 0\} = \{x:(x^2−3)\ne 0\} = \mathbb R \setminus \{\pm\sqrt 3\}$$
-'Removing' the square cuts off the domain portion in which $(x^2-3)<0$, leaving
-$$\{x:(x^2-3)>0\} = \mathbb R \setminus [-\sqrt 3,\sqrt 3]$$
-hence you miss solution(s) inside the interval $(-\sqrt 3,\sqrt 3)$.<|endoftext|>
-TITLE: Dimensions of a rectangle containing a rotated rectangle
-QUESTION [5 upvotes]: Given sides a, b, and an arbitrary rotation Θ (0 - 360 degrees) around the centerpoint of the rectangle, how would I calculate sides A and B of a containing rectangle?
-
-REPLY [6 votes]: B = $a\sin(90^\circ-\theta)+b\sin(\theta) = a\cos\theta+b\sin\theta$
-A = $b\sin(90^\circ-\theta)+a\sin(\theta) = b\cos\theta+a\sin\theta$
-(Assuming the image shows minor clockwise rotation)
-when $\theta=0\ $ $B=a\ $ $A=b\ $ (matching the image)<|endoftext|>
-TITLE: How many integer solutions are there to the equation $x_1 + x_2 + x_3 + x_4 = 12$ with restrictions on $x_1,x_2,x_3,x_4$?
-QUESTION [5 upvotes]: How many integer solutions are there to the equation $x_1 + x_2 + x_3 + x_4 = 12$ with $x_i > 0$ for each $i \in \{1, 2, 3, 4\}$? How many solutions with $x_1 > 1$, $x_2 > 1$, $x_3 > 3$, $x_4 \geq 0$?
-So for the first part I did this:
-We set $x_i = y_i + 1$ for $i = 1, 2, 3$ and obtain $y_1 + y_2 + y_3 = 12 − 3 = 9$.
-So, there is one-to-one correspondence between the positive integer solutions of $x_1 + x_2 + x_3 = 12$ and the non-negative integer solutions of $y_1 + y_2 + y_3 = 9$.
-Hence, both equations have the same number of solutions, and $y_1 + y_2 + y_3 = 9$ has ${9+3-1 \choose 3-1} = {11 \choose 2} = 55$ integer solutions.
-Is this correct?
-For the second part I do not understand how to figure out how many solutions with $x_1 > 1$, $x_2 > 1$, $x_3 > 3$, $x_4 \geq 0$?
-
-REPLY [2 votes]: I am not sure why you have four variables in the statement of your question and three variables in your answer. I will assume you meant to work with four variables.
-
-How many integer solutions are there to the equation $x_1 + x_2 + x_3 + x_4 = 12$ with $x_i > 0$ for each $i \in \{1, 2, 3, 4\}$?
-
-We wish to solve the equation
-$$x_1 + x_2 + x_3 + x_4 = 12 \tag{1}$$
-in the positive integers.
-Method 1: We reduce the problem to one in the non-negative integers. Let $y_k = x_k - 1$ for $1 \leq k \leq 4$. Then each $y_k$ is a non-negative integer. Substituting $y_k + 1$ for $x_k$, $1 \leq k \leq 4$, in equation 1 yields
-\begin{align*}
-y_1 + 1 + y_2 + 1 + y_3 + 1 + y_4 + 1 & = 12\\
-y_1 + y_2 + y_3 + y_4 & = 8 \tag{2}
-\end{align*}
-Equation 2 is an equation in the non-negative integers. A particular solution corresponds to placing three addition signs in a row of eight ones. For instance,
-$$1 1 1 + 1 1 + 1 + 1 1$$
-corresponds to the solution $y_1 = 3$, $y_2 = 2$, $y_3 = 1$, and $y_4 = 2$, while
-$$1 1 + + 1 1 1 1 + 1 1$$
-corresponds to the solution $y_1 = 2$, $y_2 = 0$, $y_3 = 4$, and $y_4 = 2$. Thus, the number of solutions of equation 2 is the number of ways three addition signs can be inserted into a row of eight ones, which is
-$$\binom{8 + 3}{3} = \binom{11}{3}$$
-since we must choose which three of the eleven symbols (eight ones and three addition signs) will be addition signs.
-Method 2: A particular solution of equation 1 in the positive integers corresponds to inserting three addition signs in the eleven spaces between successive ones in a row of $12$ ones.
-$$1 \square 1 \square 1 \square 1 \square 1 \square 1 \square 1 \square 1 \square 1 \square 1 \square 1 \square 1$$
-For instance,
-$$1 1 1 + 1 1 1 1 + 1 1 + 1 1 1$$
-corresponds to the solution $x_1 = 3$, $x_2 = 4$, $x_3 = 2$, and $x_4 = 3$. Thus, the number of solutions of equation 1 in the positive integers is the number of ways three addition signs can be inserted into the eleven gaps between successive ones in a row of $12$ ones, which is
-$$\binom{11}{3}$$
-
-How many integer solutions are there to the equation $x_1 + x_2 + x_3 + x_4 = 12$ with $x_1 > 1$, $x_2 > 1$, $x_3 > 3$, $x_4 \geq 0$?
-
-We reduce the problem to one in the non-negative integers. Since $x_1$ is an integer, $x_1 > 1 \implies x_1 \geq 2$. Similarly, since $x_2$ and $x_3$ are integers, $x_2 > 1 \implies x_2 \geq 2$ and $x_3 > 3 \implies x_3 \geq 4$. Let
-\begin{align*}
-y_1 & = x_1 - 2\\
-y_2 & = x_2 - 2\\
-y_3 & = x_3 - 4\\
-y_4 & = x_4
-\end{align*}
-Then each $y_k$, $1 \leq k \leq 4$, is a non-negative integer. Substituting $y_1 + 2$ for $x_1$, $y_2 + 2$ for $x_2$, $y_3 + 4$ for $x_3$, and $y_4$ for $x_4$ in equation 1 yields
-\begin{align*}
-y_1 + 2 + y_2 + 2 + y_3 + 4 + y_4 & = 12\\
-y_1 + y_2 + y_3 + y_4 & = 4 \tag{3}
-\end{align*}
-Equation 3 is an equation in the non-negative integers with
-$$\binom{4 + 3}{3} = \binom{7}{3}$$
-solutions.<|endoftext|>
-TITLE: Exercise 27 from chapter 4 (“Hilbert Spaces: An Introduction”) of Stein & Shakarchi's “Real Analysis” 2
-QUESTION [6 upvotes]: I'm having trouble with the following problem:
-
-Prove that the operator $$Tf(x) = \frac{1}{\pi} \int_0^\infty \frac{f(y)}{x+y} dy$$ is bounded on $L^2(0,\infty)$ with norm $||T|| \leq 1$.
-
-I have no idea where to start for this problem.
-I guess I can write $Tf(x) = \frac{1}{\pi} \int_0^\infty K(x,y)f(y) dy$ with $K(x,y) = \frac{1}{x+y}$ but I don't see how I can proceed from here. Can anyone help me out? Thanks!
-
-REPLY [2 votes]: Let $x\in (0,\infty)$ and $f\in L^2(0,\infty)$. Via the transformation $y\mapsto xy$,
-$$Tf(x) = \frac{1}{\pi}\int_0^\infty \frac{f(xy)}{x + xy}\, x\, dy = \frac{1}{\pi} \int_0^\infty \frac{f(xy)}{1 + y}\, dy.$$
-Use Minkowski's integral inequality and the transformation $x\mapsto x/y$ to get the estimate
-$$\|Tf\|_{L^2} \le \frac{1}{\pi}\int_0^\infty \frac{\|f\|_{L^2}}{y^{1/2}\,(1 + y)}\, dy.$$
-Via the transformation $y\mapsto y^2$,
-$$\frac{1}{\pi}\int_0^\infty \frac{\|f\|_{L^2}}{y^{1/2}(1 + y)}\, dy = \left(\frac{2}{\pi}\int_0^\infty \frac{1}{1 + y^2}\, dy\right)\|f\|_{L^2} = \|f\|_{L^2}.$$
-Hence $\|Tf\|_{L^2} \le \|f\|_{L^2}$. Since $f$ was arbitrary, $\|T\| \le 1$.<|endoftext|>
-TITLE: Evaluate $\sum_{n=1}^{\infty }\int_{0}^{\frac{1}{n}}\frac{\sqrt{x}}{1+x^{2}}\,\mathrm{d}x$
-QUESTION [9 upvotes]: I have some trouble in evaluating this series
-$$\sum_{n=1}^{\infty }\int_{0}^{\frac{1}{n}}\frac{\sqrt{x}}{1+x^{2}}\mathrm{d}x$$
-I tried to calculate the integral first, but after that I found the series became quite complicated.
-Besides, I found that the series may equal
-$$\sum_{k=0}^{\infty }\frac{\left ( -1 \right )^{k}\zeta \left ( 2k+\dfrac{3}{2} \right )}{2k+\dfrac{3}{2}}$$
-which is definitely a monster to me. So I want to know whether there is a good way to handle this integral-series.
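-A numerical remark that may defuse the "monster" (my own scipy sketch, not part of the question): expanding $\frac{1}{1+x^2}$ as a geometric series and integrating termwise gives $\int_0^{1/n}\frac{\sqrt{x}}{1+x^{2}}\,\mathrm{d}x=\sum_{k\ge 0}\frac{(-1)^{k}}{\left(2k+\frac32\right)n^{2k+\frac32}}$, and summing this over $n$ is exactly where the displayed $\zeta$ series comes from. The sketch checks the termwise expansion for a few $n$:
-import numpy as np
-from scipy.integrate import quad
-
-def I(n):
-    return quad(lambda x: np.sqrt(x) / (1 + x**2), 0, 1 / n)[0]
-
-def I_series(n, K=60):
-    # termwise integration of sqrt(x) * sum_k (-1)^k x^(2k) over [0, 1/n]
-    k = np.arange(K)
-    return np.sum((-1.0)**k / ((2 * k + 1.5) * n**(2 * k + 1.5)))
-
-for n in (2, 3, 10):
-    print(n, I(n), I_series(n))   # the two values agree closely for n >= 2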
-
-REPLY [5 votes]: Let
-$$I(n) = \int_{0}^{\frac{1}{n}}\frac{\sqrt{x}}{1+x^{2}}\mathrm{d}x.$$
-Substitution:
-$$x = t^2,\quad \mathrm{d}x = 2t\,\mathrm{d}t:$$
-$$I(n) = \int\limits_{0}^{\frac{1}{\sqrt{n}}}\frac{2t^2}{1+t^{4}}\mathrm{d}t.$$
-$$I(n) = \int\limits_{0}^{\frac{1}{\sqrt{n}}}\frac{2}{t^2 + \dfrac{1}{t^2}}\mathrm{d}t.$$
-$$I(n) = \int\limits_{0}^{\frac{1}{\sqrt{n}}}\frac{1-\dfrac1{t^2}}{\left(t + \dfrac{1}{t}\right)^2-2}\mathrm{d}t + \int\limits_{0}^{\frac{1}{\sqrt{n}}}\frac{1+\dfrac1{t^2}}{\left(t - \dfrac{1}{t}\right)^2+2}\mathrm{d}t.$$
-$$I(n) = \dfrac{1}{2\sqrt{2}}\left.\ln\left|\frac{t + \dfrac{1}{t}-\sqrt 2}{t + \dfrac{1}{t}+\sqrt 2}\right|\right|_0^{\frac1{\sqrt n}} + \dfrac1{\sqrt2}\left.\arctan\dfrac{t - \dfrac{1}{t}}{\sqrt2}\right|_0^{\frac1{\sqrt n}}.$$
-(Note that $\int\frac{\mathrm{d}v}{v^2+2}=\frac1{\sqrt2}\arctan\frac{v}{\sqrt2}$, so the arctangent term carries the coefficient $\frac1{\sqrt2}$.) Evaluating at the limits,
-$$I(n) = \dfrac{1}{2\sqrt{2}}\ln\frac{n - \sqrt {2n} + 1}{n + \sqrt {2n} + 1} + \dfrac1{\sqrt{2}}\left(\dfrac{\pi}{2}-\arctan\dfrac{n - 1}{\sqrt{2n}}\right), \quad I(1) = \dfrac{\pi}{2\sqrt2}-\dfrac{\ln\left(1+\sqrt 2\right)}{\sqrt2},$$
-or, for $n>1$,
-$$I(n) = \dfrac{1}{2\sqrt{2}}\ln\frac{1 -\dfrac{\sqrt{2n}}{n + 1}}{1 +\dfrac{\sqrt{2n}}{n + 1}} + \dfrac1{\sqrt{2}}\arctan\dfrac{\sqrt{2n}}{n - 1}.$$
-UPD
-The series expansion is as follows (the first group is $-\frac1{\sqrt2}\operatorname{artanh}\frac{\sqrt{2n}}{n+1}$ and the second is $\frac1{\sqrt2}\arctan\frac{\sqrt{2n}}{n-1}$; the second series converges when $2n<(n-1)^2$, i.e. for $n\ge 4$):
-$$
-I(n) = -\dfrac{\sqrt n}{n+1}\left(1+\dfrac13\dfrac{2n}{(n+1)^2}+\dfrac15\dfrac{(2n)^2}{(n+1)^4}+\dots+\dfrac1{2k+1}\dfrac{(2n)^k}{(n+1)^{2k}}+\dots\right)$$$$+\dfrac{\sqrt n}{n-1}\left(1-\dfrac13\dfrac{2n}{(n-1)^2}+\dfrac15\dfrac{(2n)^2}{(n-1)^4}-\dots+\dfrac1{2k+1}\dfrac{(-2n)^k}{(n-1)^{2k}}+\dots\right),
-$$
-$$I(1) = \dfrac{\pi}{2\sqrt2}-\dfrac{\ln\left(1+\sqrt 2\right)}{\sqrt2}\approx 0.4875.$$<|endoftext|>
-TITLE: Show that the set of points where a real-valued continuous sequence of functions converges is $F_{\sigma\delta}$
-QUESTION [7 upvotes]: By $F_{\sigma\delta}$, I mean that the set can be expressed as a countable intersection of $F_\sigma$ sets.
-Let this sequence of functions be $f_n$, and the set of points where $f_n$ converges be $C$. Since $f_n$ must be Cauchy, I can define $C$ as
-$$C = \{x \in \mathbb{R}\:|\: \forall\epsilon\in\mathbb{N} \; \exists N\in\mathbb{N} \::\:|f_m(x) - f_n(x)| < \frac{1}{\epsilon} \: \forall m,n > N\}$$.
-Looking at this set, I'm trying to prove $F_{\sigma\delta}$ by first trying to find some countable intersection. I try to rewrite $C$ as
-$$C = \bigcap\limits_{\epsilon \in \mathbb{N}}\{x \in \mathbb{R}\:|\: \exists N\in\mathbb{N} \::\:|f_m(x) - f_n(x)| < \frac{1}{\epsilon} \: \forall m,n > N\}$$.
-Now I need to express each set being intersected as a countable union of closed sets to prove that it is $F_\sigma$. Then the countable intersection of these must be $F_{\sigma\delta}$. I tried to pick $N$ to describe this union, and rewrote $C$ as
-$$C = \bigcap\limits_{\epsilon \in \mathbb{N}}\bigg[\;\bigcup\limits_{N \in \mathbb{N}}\{x \in \mathbb{R}\:|\:|f_m(x) - f_n(x)| < \frac{1}{\epsilon} \: \forall m,n > N\}\;\bigg]$$.
-But this option confuses me because $N$ depends on $\epsilon$, so I don't know whether it would make sense to let both vary freely like that. Am I going in the wrong direction?
-
-REPLY [8 votes]: Actually I feel a hint is not enough here. I needed to think a bit more as well, so here is an attempt at a solution. 
-The first identity to notice is this:
-$$C = \bigcap_{k \in \Bbb N} \; \underbrace {\{ x \in \Bbb R\ : \exists N \in \Bbb N \; \text{s.t.} \; m, n \gt N \implies |f_m(x) - f_n(x)| \le \frac 1 k \}}_{B_k} $$
-The second is:
-$$B_k = \bigcup_{n \in \Bbb N} \; \underbrace{\{ \ x \in \Bbb R : |f_m(x) - f_n(x)| \le \frac 1 k\ \; \text{for each} \;m\ge n \}}_{A_{n, k}}$$
-The third is:
-$$A_{n, k} = \bigcap_{t \ge n} \; \underbrace{\{x \in \Bbb R : |f_t(x) - f_n(x)| \le \frac 1 k \}}_{F_{t, n,k}}$$
-Now each $F_{t, n,k}$ is a closed set since it is the pre-image of a closed set under a continuous function, $g = |f_t - f_n|$. Since any intersection of closed sets is closed, $A_{n,k}$ is closed. Hence, $B_k$ is an $F_\sigma$ set for each $k \in \Bbb N$.<|endoftext|>
-TITLE: What is the fastest way to find the range of functions having modulus: $f(x) = |x+3| - |x+1| - |x-1| + |x-3|$
-QUESTION [6 upvotes]: While solving problems I saw a question in which I was supposed to find the range of the function $$f(x) = |x+3| - |x+1| - |x-1| + |x-3|$$ I know the way in which I can take different cases of $x$ larger than $3$ and then $x$ between $3$ and $1$ and so on. But it gets a bit long as there are 5 cases, and the competitive exam for which I am getting my brain ready wants me to solve a question in 1.5 minutes on average. I want to know if there exists a faster way to find its range.
-
-REPLY [3 votes]: Let $g(x) = |x+3| + |x-3|$ and $h(x) = |x+1| + |x-1|.$
-Then
-\begin{align}
-g(x) &= \begin{cases} 2|x| & \text{if $|x| \geq 3$} \\
- 6 & \text{if $|x| < 3$} \end{cases} \\
-h(x) &= \begin{cases} 2|x| & \text{if $|x| \geq 1$} \\
- 2 & \text{if $|x| < 1$} \end{cases} \\
-f(x) &= g(x) - h(x)
-\end{align}
-It should then be easy to see that $f(x) = 0$ when $|x| \geq 3$
-and $f(x) = 4$ when $|x| < 1$.
-You just then have to confirm that $0 \leq f(x) \leq 4$
-when $1 \leq |x| < 3$.<|endoftext|>
-TITLE: Filling the area below a decreasing function by rectangles
-QUESTION [6 upvotes]: Suppose $f:\mathbb{R}_+ \to \mathbb{R}_+$ is a strictly decreasing continuous function. Let $n$ be a natural number. I want to solve the following maximization problem
-$$
- S_n = \max_{x_1 \leq \dots \leq x_n} x_1 f(x_1) + (x_2-x_1) f(x_2) + \dots + (x_n-x_{n-1}) f(x_n).
-$$
-See the plot below for graphical illustration when $n=3$. I need to find two things:
-
-Characterization of optimal $(x_1,\dots,x_n)$
-Lower bound for the area covered.
-
-This is a standard problem for example in economics (price discrimination) and it looks like someone probably has solved the problem, but I haven't been able to find it.
-
-REPLY [2 votes]: It seems to me that you have already done most of the work, writing the problem as an optimization problem in multiple variables. In general, call $\theta$ the value of the covered area as a function of the chosen points:
-$\theta(x_1,x_2,...,x_N) = x_1\cdot f(x_1) + \sum_{I=2}^N{f(x_I)\cdot(x_I-x_{I-1})}$
-We want to find the maximum of $\theta$, so we look for the point where all partial derivatives are zero:
-$\begin{cases}\frac{\partial \theta }{\partial x_1} = f(x_1) + x_1 \cdot f^{'}(x_1) - 0 \cdot f^{'}(x_1) - f(x_2) \\ \frac{\partial \theta }{\partial x_2} = f(x_2) + x_2 \cdot f^{'}(x_2) - x_1 \cdot f^{'}(x_2) - f(x_3) \\.... 
\\ \frac{\partial \theta }{\partial x_N} = f(x_N) + x_N \cdot f^{'}(x_N) - x_{N-1} \cdot f^{'}(x_N) - 0 \end{cases}$
-This, in addition to the conditions
-$ \begin{cases}
- x_J>x_I & \text{for $J>I$} \\
- x_I > 0 & \forall I
-\end{cases}
-$
-will give you a system of equations that you can solve for the values of the $x_I$. The system may not always be solvable, depending on the definition of $f$. Also you should check that the chosen point is a maximum and not a minimum.
-Some examples:
-For $f= 1-x$ the solution is always to split into equal parts.
-For $f= 1-x^2$ and $N=2$ the two points are
-$ \begin{cases}
-x_1 = \sqrt{\frac{1}{9-2\sqrt3}} \approx 0.425\\
-x_2 = \sqrt{\frac{3}{9-2\sqrt3}} \approx 0.736
-\end{cases}$<|endoftext|>
-TITLE: A version of Casorati-Weierstrass for harmonic functions?
-QUESTION [5 upvotes]: Suppose that $f:B(0,1)\setminus\left\{0\right\}\subset \mathbb{R}^n \to \mathbb{R}$ is a harmonic function. Clearly, the property that $\overline{f(B(0,\epsilon))}=\mathbb{R}$ for all $\epsilon>0$ is equivalent to $\limsup_{x \to 0 } f(x) = +\infty$ and $\liminf_{x \to 0 } f(x) = -\infty$ (through the intermediate value theorem).
-Therefore, after considering $f$ on a smaller ball, adding a constant and replacing $f$ by $-f$ if necessary, my question is whether a harmonic function $f:B(0,1)\setminus\left\{0\right\}\subset \mathbb{R}^n \to [0,+\infty)$ is necessarily of the form
-\begin{equation}
-f(x)=h(x)+\frac{C}{|x|^{n-2}}
-\end{equation}
-where $h$ is a harmonic function on the entire ball $B(0,1)$ (this formula needs an appropriate modification when $n=2$, of course).
-NOTE:
-Using the Kelvin transform, we can restate this problem as saying that for a harmonic function $f:\mathbb{R}^n\setminus \overline{B(0,1)} \to [0,\infty)$ we have that $f$ converges to some $C$ at infinity and $|f(x)-C|=O(|x|^{...})$. Maybe the mean value property can do the trick from this point in some way?
-
-REPLY [2 votes]: My conjecture turns out to be right. That is the content of Bôcher's theorem.<|endoftext|>
-TITLE: Show by combinatorial argument that ${2n\choose 2} = 2{n \choose 2} + n^2$
-QUESTION [22 upvotes]: So I was given this question. Show by combinatorial argument that ${2n\choose 2} = 2{n \choose 2} + n^2$
-Here is my solution:
-Given $2n$ objects, split them into $2$ groups of $n$, $A$ and $B$. $2$-combinations can either be assembled both from $A$, both from $B$ or one from each. There are ${n \choose 2}$ from $A$ and ${n \choose 2}$ from $B$. For the mixed pair, each choice from $A$ can be coupled with $n$ choices from $B$ so the total is $n^2$.
-Therefore ${2n\choose 2} = 2{n \choose 2} + n^2$
-Is this correct? Is there any other combinatorial proof of this?
-
-REPLY [2 votes]: There is a generalization of this fact that I cannot resist sharing. Here is how I came across it: Consider the scenario where you keep additively splitting a positive integer into two positive integers as a tree diagram, until you are left with only $1$'s as the leaves. Then you take the product of the two numbers at each branching and sum all such products. Remarkably, applying this process to $n$ will always yield $\binom{n}{2}$ as the final number. Try it with $8$ and you get $28$ every time. The inductive step of the strong induction argument for this fact uses the fact that, if you split $n=a+b$ then $$\binom{a}{2}+\binom{b}{2}+ab=\binom{a+b}{2}=\binom{n}{2}.$$ A combinatorial proof double counts the number of ways of choosing two people out of $n=a+b$ people. 
We can either choose $2$ out of the $a$ or $2$ out of the $b$ or $1$ from each. The original post asks for a combinatorial proof of this fact when $a=b.$<|endoftext|>
-TITLE: Exact sequence splitting naturally
-QUESTION [5 upvotes]: So I encountered a term that I don't quite recognize from lecture. The professor stated that a certain short exact sequence splits naturally, but I don't understand what the naturality condition is in this case. Does it mean that the squares with the splitting maps in place commute, or what? Note that the exact sequence is the exact sequence from the universal coefficient theorem. I know that there is another stackexchange topic on this, but the answer isn't what I was looking for. So I was wondering if my characterization was correct or not.
-
-REPLY [4 votes]: The universal coefficient theorem provides you, for every space $X$, a short exact sequence of abelian groups. This entire short exact sequence is functorial in $X$, which is to say that it can be thought of as a short exact sequence of functors. A natural splitting is a splitting of this short exact sequence of functors: that is, it's a family of splittings which themselves organize into a functor. This means that various squares involving the splitting maps commute as you say.<|endoftext|>
-TITLE: Do mathematical objects have underlying types?
-QUESTION [7 upvotes]: I came upon this issue while I was trying to think about what "type" of object $\mathbb R^n$ is - is it a set, a vector space, an inner product space, an affine space, a metric space? Perhaps the simplest way is to define $\mathbb R^n$ as simply being an object of type "set", and then construct new objects such as the vector space $\mathbb R^n$, which possesses an additional structure by having the operations of vector addition and scalar multiplication, or the metric space $\mathbb R^n$, which possesses the additional structure of a metric.
-It seems often the case, at least in real analysis, however, that we wish to work with an object (denoted $\mathbb R^n$) which possesses both the algebraic structure afforded by being a vector space and the geometric structure afforded by being a metric space. While I can do all of the calculations, treating $\mathbb R^n$ as both a vector space and a metric space, I am troubled by the fact that I'm not sure which type of object I'm working with. Is it a metric space, or is it a vector space? Is it both? Neither? How should I think about this? (for both the "type" of $\mathbb R^n$ and for mathematical objects in general)
-I added a few tags that I thought might be relevant (in areas I'm not quite familiar with), but mods should feel free to remove them if they're not appropriate.
-
-REPLY [3 votes]: Putting things together into an answer and expanding significantly.
-First, as T. Bongers stated, this is a form of notational overloading that is usually intended to be resolved by context. The context may be either broad conventions ("standards") such as $\mathbb R^n$'s metric structure, canonical choices such as $\mathbb R$'s field structure, or the convention may be defined in the text you're reading.
-Still, if we didn't want to rely on context, what would we do, and what does this overloading "resolve" to anyway? 
The typical approach to be more explicit in informal (by which I mean "not machine checked") texts is to do the following: -$$\mathbb{R} \text{ is the set of real numbers} \\ (\mathbb{R},+,0) \text{ is the additive monoid of real numbers} \\ (\mathbb{R},\times,+,1,0) \text{ is the semiring of real numbers}$$ -This clearly articulates the additional structure a monoid and a semiring have and what choices we are making here. If we want to treat a semiring as a monoid, we'd have to explicitly forget the extra structure: $U_M(t,m,a,u,z) = (t,m,u)$ (note we'd also have $U_A(t,m,a,u,z) = (t,a,z)$, both $U_M, U_A : \mathbf{SemiRing} \to \mathbf{Monoid}$). Similarly, $U : \mathbf{Monoid} \to \mathbf{Set}$ would be $U(t,m,u) = t$. Compare this with typical statements like $U(\mathbb R) = \mathbb R$. -Being this explicit can often be clarifying and helpful. For example, it's much easier to see and articulate the adjunction in $$\mathbf{Monoid}(FS, M) \cong \mathbf{Set}(S, UM)$$ than in $$\mathbf{Monoid}(FS, M) \cong \mathbf{Set}(S, M)$$ The former can easily be expressed with $F \dashv U$, the latter is awkward to write down without introducing $U$. -The tuple notation can also be very handy. Sticking with the above adjunction, if you want to explicitly define the counit, using the normal, implicit notation leads to some awkwardness and difficulty. In the typical notation you'd be asked to define a natural transformation: $\varepsilon_M : FUM \to M$ natural in $M$. The elements of $FUM$ for a monoid $M$ are just lists of elements of $M$. We need to collapse that list into a single element by multiplying them all together by the monoid operation. $\varepsilon_M(ms) = \text{fold}(?,?,ms)$ but what do we put for the question marks? It should be the multiplication and unit of the monoid, but what are they? The free monoid construction has certainly forgotten them. Moving to the more explicit notation, we see: $\varepsilon_{(M,\oplus,e)}(ms) = \text{fold}(\oplus,e,ms)$. -However, even the approach above is suppressing some aspects. For example, what's the difference between a monoid and a commutative monoid? There is no additional structure, just an additional property of the existing structure. Taking a cue from type theory, the additional evidence we need that a monoid is a commutative monoid is a proof that the operation is commutative. Once we make that explicit, we should then make explicit the evidence that the operation is associative and unital. It's extremely rare for mathematicians to treat proofs as mathematical objects, but this is exactly what modern type theories do. -The cost of all this explicitness, of course, is verbosity and bureaucratic distinctions that can clutter up what is trying to be demonstrated.<|endoftext|> -TITLE: Variety of proofs of a simple proposition in arithmetic -QUESTION [8 upvotes]: I have a couple of simple proofs of a simple proposition and I'm curious to see the variety of different approaches others would take to prove the same thing. -Definition: The dilation of a sequence $a_1, a_2, a_3,\ldots$ by a factor $n$ is the sequence $b_1,b_2,b_3,\ldots$ for which -$$ -\begin{cases} -b_{kn} = a_k & \text{for } k=1,2,3,\ldots, \\ -b_j = 0 & \text{if $j$ is not a multiple of $n$.} -\end{cases} -$$ -Thus the dilation of $a_1,a_2,a_3,\ldots$ by a factor of $3$ is -$$ -0,\ 0,\ a_1,\ 0,\ 0,\ a_2,\ 0,\ 0,\ a_3,\ 0,\ 0,\ a_4,\ \ldots -$$ -(This is not standard terminology as far as I know, so tell me if there's some standard name for this.) 
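-(In code, the dilation is easy to express; here is a quick Python sketch of my own of the definition above, with a sequence modelled as a function on indices:)
-def dilate(a, n):
-    # b_j = a_{j/n} when n divides j, and b_j = 0 otherwise
-    return lambda j: a(j // n) if j % n == 0 else 0
-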
-Now let $\{a_k\}_{k=1}^\infty$ consist of an infinite sequence of repetitions of $1,0,0,0,1,0$ (a $\text{“}1\text{''}$ in the $1$st and $5$th positions and $\text{“}0\text{''s}$ elsewhere). In other words $a_k$ is just the indicator that $k$ is coprime to $6$. -Now start by letting $c$ be an infinite sequence of $0$s and proceed as follows: -1. Let $n$ be the smallest index for which $c_n=0$ (thus initially $n=1$); -2. Let the new value of $c$ be $(c + \text{the dilation of } a \text{ by } n)$ (where the occurrence of $c$ inside the round brackets is the value of $c$ we had at the end of step $1$.); -3. Go back to step $1$. -This goes on forever. -Proposition: This process converges to an infinite sequence of $1$s. (Thus, we never add $1$ to any position that was not $0$.) -As I said, I have a couple of simple proofs and I am curious to see the variety of different approaches others would take to prove the same thing. - -REPLY [2 votes]: Alright, here's one way that I know of. -Suppose $N$ is an infinitely large integer (oh $\ldots$ um $\ldots$ TRIGGER WARNING: If you suffer pain and distress as a result of reading something whose rigorous definition has not been stated and might be problematic, then don't read the foregoing phrase) whose prime factorization contains infinitely many $2$s and infinitely many $3$s and nothing else. Look at the sequence -$$ -\frac 1 N, \frac 2 N, \frac 3 N,\ \ldots,\ \frac{N-3} N, \frac{N-2} N, \frac{N-1} N. \tag a -$$ -The first time one comes to step 2. in the algorithm, one adds $1$ in every position in $(\mathbf a)$ in which the fraction is in lowest terms. -The second time one comes to step 2., one adds a $1$ in every position in which one reduces the fraction to lowest terms only by dividing the numerator and denominator by $2$ (not including those cases in which division by $2$ reduces the fraction incompletely). -The third time one comes to step 2., one adds a $1$ in every position in which one reduces the fraction to lowest terms only by dividing the numerator and denominator by $3$ (not including those cases in which division by $3$ reduces the fraction incompletely). -The fourth time one comes to step 2., one adds a $1$ in every position in which one reduces the fraction to lowest terms only by dividing the numerator and denominator by $4$ (not including those cases in which division by $4$ reduces the fraction incompletely). -The fifth time one comes to step 2., one adds a $1$ in every position in which one reduces the fraction to lowest terms only by dividing the numerator and denominator by $6$ (not including those cases in which division by $6$ reduces the fraction incompletely). -and so on. One reduces by dividing by $2$, or $3$, or $4$, or $6$, or $8$, or $9$, or $12$, etc. This sequence contains all integers that have no prime factors except $2$ and $3$. This process will never reduce a fraction that's already reduced, so it never adds a $1$ to a position more than once. -\begin{align} -& \frac{2520}{2\times2\times2\times\cdots \times 3\times3\times3\times\cdots} \\[10pt] -= {} & \frac{\overbrace{2\times2\times2}\times\overbrace{3\times3}\times35}{2\times2\times2\times\cdots \times 3\times3\times3\times\cdots} \\[10pt] -= {} & \frac{35}{\not2\times\not2\times\not2\times2\times2\times\cdots \times \not3\times\not3\times3\times3\times\cdots} -\end{align} -(It is not hard to rephrase all this in a way that doesn't mention an infinitely large integer.) 
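-(A quick computational sanity check of the proposition, a short Python sketch of my own rather than part of the argument above: simulate the process on a finite prefix and assert that no position is ever incremented twice.)
-from math import gcd
-
-L = 2000
-a = lambda k: 1 if gcd(k, 6) == 1 else 0   # indicator that k is coprime to 6
-c = [0] * (L + 1)                          # positions 1..L; c[0] is unused
-while 0 in c[1:]:
-    n = c.index(0, 1)                      # smallest index with c_n = 0
-    for k in range(1, L // n + 1):
-        c[k * n] += a(k)                   # add the dilation of a by n
-        assert c[k * n] <= 1               # a 1 is never added to a non-zero position
-print("first", L, "positions each reached 1 exactly once")
-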
-In a comment above I asked "How would you write this same argument in a way intended to make it comprehensible to secondary-school pupils while still keeping it simple?". This is how I would do that. So a question remains: Are there other ways, in some essential way different?<|endoftext|>
-TITLE: How to explain why the angle between two vectors in $\mathbb{R}^n$ is defined the way it is.
-QUESTION [5 upvotes]: It is given in a couple of the textbooks I have seen that they just define the angle between two vectors $\vec{x}, \vec{y} \in \mathbb{R}^n$ to be $\theta$ such that
-$$
-\cos \theta = \frac{\vec{x} \cdot \vec{y} }{ \|\vec{x}\| \|\vec{y}\|}.
-$$
-I wanted to explain to a first-year undergraduate student why this makes some sense and is not completely arbitrary, but I wasn't really sure how to do so. I would greatly appreciate any explanation! Thank you!!
-
-REPLY [3 votes]: We can fit our vectors $\vec x$ and $\vec y$ into a triangle
-
-The law of cosines says
-$$
-\left\lVert\vec y-\vec x\right\rVert^2
-= \left\lVert\vec x\right\rVert^2+\left\lVert \vec y\right\rVert^2
--2\left\lVert\vec x\right\rVert\left\lVert\vec y\right\rVert\cos\theta\tag{1}
-$$
-But we also have
-$$
-\left\lVert\vec y-\vec x\right\rVert^2
-= \left(\vec y-\vec x\right)\cdot \left(\vec y-\vec x\right)
-= \vec y\cdot\vec y+\vec x\cdot\vec x-2\,\vec x\cdot\vec y
-= \left\lVert\vec y\right\rVert^2+\left\lVert\vec x\right\rVert^2-2\,\vec x\cdot\vec y\tag{2}
-$$
-Plugging (2) into (1) gives
-$$
-\left\lVert\vec y\right\rVert^2+\left\lVert\vec x\right\rVert^2-2\,\vec x\cdot\vec y=\left\lVert\vec x\right\rVert^2+\left\lVert \vec y\right\rVert^2
--2\left\lVert\vec x\right\rVert\left\lVert\vec y\right\rVert\cos\theta
-$$
-which simplifies to
-$$
-\vec x\cdot\vec y=\left\lVert\vec x\right\rVert\left\lVert\vec y\right\rVert\cos\theta
-$$
-Thus $\cos\theta$ can be expressed as
-$$
-\cos\theta=\frac{\vec x\cdot\vec y}{\left\lVert\vec x\right\rVert\left\lVert\vec y\right\rVert}
-$$
-as desired!
-In other words, taking this equation as the definition of $\cos\theta$ matches our intuition developed in trigonometry.<|endoftext|>
-TITLE: Constructing prime ideal of tensor product from two prime ideals
-QUESTION [13 upvotes]: If $M,N$ are $R$-algebras with natural maps $m:M\to M\otimes_RN,n:N\to M\otimes_RN$, is there any way to construct a prime ideal $T$ of $M \otimes_R N$ given two prime ideals $A,B$ of $M,N$ such that $m^{-1}(T)=A,n^{-1}(T)=B$?
-Equivalently, is there a way to explicitly construct or describe the map $\mathrm{Spec}(M\otimes_RN)\to\operatorname{Spec}M\times_{\operatorname{Spec}R}\operatorname{Spec}N$?
-
-REPLY [2 votes]: Remy's answer is very nice, but I found it not so immediate without writing down the details, so I record here how the factorization yields the lemma in Remy's answer:
-
-Given a prime $\xi\subset M\underset{R}{\otimes}N$ with the factorization $\psi :M\underset{R}{\otimes}N\overset{\psi_1}{\rightarrow} k(p)\underset{R}{\otimes}k(q) \overset{\psi_2}{\rightarrow} k(\xi)$, we associate $\xi$ with $\psi_2^{-1}(0)$, which is a prime in $k(p)\underset{R}{\otimes}k(q)$.
-
-Consider the map $\psi_1: M \underset{R}{\otimes}N \rightarrow k(p)\underset{R}{\otimes}k(q)$, and associate a prime $\zeta$ in $k(p)\underset{R}{\otimes}k(q)$ to $\psi_1^{-1}(\zeta).$
-
-
-We prove these two maps are inverse to each other.
-
-Start from $\xi \subset M\underset{R}{\otimes}N$. By the property of localization and quotient, $\xi = \psi^{-1}(0) = \psi_{1}^{-1}\psi_2^{-1}(0). 
$ This proves one direction.
-
-Start from $\zeta \subset k(p)\underset{R}{\otimes}k(q)$. Since $\psi_1$ is surjective, $\psi_1^{-1}(\zeta) = \psi_1^{-1}\psi_2^{-1}(0) \implies \psi_1\psi_1^{-1}(\zeta) =\psi_1 \psi_1^{-1}\psi_2^{-1}(0)\implies\zeta = \psi_2^{-1}(0).$
-
-
-This fills the gap.<|endoftext|>
-TITLE: Find the value of : $\lim_{x\to\infty}\frac{\sqrt{x+\sqrt{x+\sqrt{x}}}}{x}$
-QUESTION [5 upvotes]: I saw some resolutions here like $\sqrt{x+\sqrt{x+\sqrt{x}}}- \sqrt{x}$, but I couldn't see how to find
-$\lim_{x\to\infty}\frac{\sqrt{x+\sqrt{x+\sqrt{x}}}}{x}$.
-I tried $\frac{1}{x}\cdot(\sqrt{x+\sqrt{x+\sqrt{x}}})=\frac{\sqrt{x}}{x}\left(\sqrt{1+\frac{\sqrt{x+\sqrt{x}}}{x}} \right)=\frac{1}{\sqrt{x}}\left(\sqrt{1+\frac{\sqrt{x+\sqrt{x}}}{x}} \right)$ but then I got stuck. Could anyone help?
-
-REPLY [2 votes]: For an alternative answer, consider the infinitely nested radical expression $y = \sqrt{x+\sqrt{x+\sqrt{x+\sqrt{x+\sqrt{x+...}}}}}$.
-Evidently $y > \sqrt{x+\sqrt{x+\sqrt{x}}}$ for $x > 1$ and therefore $\frac{y}{x}$ is an upper bound for $\frac{\sqrt{x+\sqrt{x+\sqrt{x}}}}{x}$.
-Also $y > \sqrt{x}$ and therefore $\lim_{x\to\infty} y = \infty$.
-But we can express $x$ in terms of $y$ by using the substitution $y = \sqrt{x+y}$ in the infinitely nested expression, leading to $x = y(y-1)$.
-Then $\lim_{x\to\infty}\frac{y}{x} = \lim_{y\to\infty} \frac{y}{y(y-1)} = \lim_{y\to\infty}\frac{1}{y-1} = 0$.<|endoftext|>
-TITLE: Any convex set is connected
-QUESTION [7 upvotes]: I'm stuck in a proof: given that $E\subset \mathbb{R}^n$ is convex, prove that it is connected. I know that this can be proved more easily using the concept of path-connectedness, but that's not what I'm "allowed" to do. I need to use another definition: $E$ is connected if and only if it cannot be separated by a pair of relatively open sets.
-My attempt:
-Pick any $x, y\in E$. Since $E$ is convex, $tx+(1-t)y \in E$ for $t\in [0,1]$. Suppose that $E$ is not connected. Then there exist non-empty sets $U$ and $V$, such that $U\cap V =\emptyset$, $U\cup V=E$, and $U$ and $V$ are relatively open in $E$, which implies that there exist open sets $A$ and $B$ such that $U=A\cap E$ and $V=B\cap E$.
-Let $x \in U$ and $y \in V$.
-(i) $U$ and $V$ are clearly not empty.
-(ii) $U\cup V$...
-Intuitively I understand that what needs to be shown is that $U \cup V \ne E$, but I'm stuck in trying to show it. Would appreciate some help.
-
-REPLY [6 votes]: Try to use the contrapositive of the statement.<|endoftext|>
-TITLE: Find the third rational point on the curve: $y^2 = x^3 + 8$
-QUESTION [7 upvotes]: I am trying to find a third rational point on the curve $y^2 = x^3 + 8$.
-According to my professor's solution, the idea is to find two rational points then solve for the third point.
-These are the first two points:
-$$(x,y) = (1, 3)~~~\text{and}~~~(x,y)=\left(-\frac{7}{4}, \frac{13}{8}\right)$$
-The point $(1,3)$ is fairly straightforward and we can find it by trial and error, but we are also supposed to find the second point by trial and error. Is there a systematic way to go about this without having to try so many numbers?
-
-REPLY [3 votes]: There is a systematic way. Draw a picture of $y^2 = x^3 + 8$ on graph paper.
-http://www.printablepaper.net/category/graph
-Per request: I am very, very old. When learning to draw graphs, I used a special kind of treeware called "PAPER," of a special type with little squares already printed on it. This special kind is called Graph Paper. 
If a printer is available, one may download a free pdf of graph paper, and print out a copy whenever one needs to draw a graph.
-I made one. I bought a programmable calculator in 1983. I put in $\sqrt{x^3 + 8}$ and drew the following:
-
-Note that I bought an inexpensive flatbed scanner for my home computer, maybe about 2012, which is how you get to see the picture.<|endoftext|>
-TITLE: In what sense right dual and braiding structure respect the tensor product structure in a monoidal category?
-QUESTION [5 upvotes]: Throughout let $(\mathscr{C}, \otimes, \mathbf{1})$ be a monoidal category (I suppressed unitors and associators for simplicity).
-The usual definition of a rigid monoidal category is done in two steps: 1) Defining what it means for an object $X$ to have right and left dual objects. 2) Requiring that all objects have both right and left duals. Also, we demand the rigidity axioms as a coherence condition. My first question is as follows:
-
-The rigidity puts an extra structure, dualizable objects, on
- $\mathscr{C}$. New structures should respect old structures. So the
- question is: is the right dual functor, $*$, a monoidal functor? It
- should be!
-
-So I investigated and found that $*$ can be defined as a monoidal functor $*:(\mathscr{C}, \otimes)\to (\mathscr{C}^\text{op}, \otimes^\text{op})$, where $\otimes^\text{op}$ is a bifunctor like $\otimes$ such that $X\otimes^\text{op} Y=Y\otimes X$. This makes perfect sense to me, but I want to make sure that it is in fact true.
-Now we move onto the second question: Suppose $\mathscr{C}$ has a braiding structure (remove rigidity). The braiding $\sigma$ is itself a natural transformation between the bifunctors $\otimes$ and $\otimes^\text{op}$, both defined as $\mathscr{C}\times \mathscr{C}\to \mathscr{C}$.
-
-In what sense does the braiding structure respect the tensor product
- structure? I highly doubt that $\sigma$ can be defined as a monoidal
- functor! The correct way, I think, should be treating
- $\sigma$ as a monoidal natural transformation. But how exactly?
-
-I tried but cannot find anything reasonable. Can anyone shed some light on this?
-
-REPLY [2 votes]: You're asking two basically independent questions here, so they really should be separated, but yes, taking duals is monoidal. The argument generalizes to the observation that taking adjoints of 1-morphisms in 2-categories respects composition.
-As for braidings, one way to define a braided monoidal category is as a "monoidal monoidal category" (or "$E_2$ category" for short): that is, it's a category equipped with two monoidal structures $\otimes_1, \otimes_2$ both of which are monoidal with respect to each other. The braiding comes from applying the Eckmann-Hilton argument to this situation. You get that the two monoidal structures are equivalent, but along the way to writing down this equivalence you end up writing down a braiding.<|endoftext|>
-TITLE: Extended ideals and algebraic sets
-QUESTION [6 upvotes]: Let $L\subset k$ be a field extension such that $k$ is algebraically closed. Now consider the algebraic set $Z(\mathfrak a)$ where $\mathfrak a$ is an ideal of $k[T_1,\ldots, T_n]$ but it is generated by polynomials with coefficients in $L$. In other words $\mathfrak a$ is an extended ideal of $\mathfrak b\subset L[T_1,\dots,T_n]$. In symbols we write $\mathfrak a=\mathfrak b^e$.
-Now let
-$$I(Z(\mathfrak a))=\{f\in k[T_1,\ldots,T_n]:f(x)=0\,\forall x\in Z(\mathfrak a) \}$$
-Can I conclude that $I(Z(\mathfrak a))$ is an extended ideal of $L[T_1,\ldots,T_n]$? 
-Note that by Hilbert Nullstellensatz theorem I have that: -$$I(Z(\mathfrak b^e))=\text{rad}(\mathfrak b^e)\supseteq \text{rad} (\mathfrak b)^e$$ - -REPLY [13 votes]: Remark. An alternative formulation of the question is: let $B$ be a finite type $K$-algebra, let $K \subseteq L$ be an extension, and let $A = B \otimes_K L$. If $B$ is reduced, then so is $A$. -Indeed, if we take $B = K[T_1,\ldots,T_n]/\operatorname{rad}(\mathfrak b)$, then $A = L[T_1,\ldots,T_n]/\operatorname{rad}(\mathfrak b)^e$, and proving that $\operatorname{rad}(\mathfrak b)^e$ is radical is equivalent to proving that $A$ is reduced. -The result is false in positive characteristic: - -Example. Let $K = \mathbb F_p (X)$, and consider $K \subseteq \bar K$. Let $n = 1$, and let $\mathfrak b = (T^p - X) \subseteq K[T]$; then $\mathfrak a = (T^p - X) \subseteq \bar K[T]$. Now - $$\operatorname{rad}(\mathfrak a) = \left(T-X^{\frac{1}{p}}\right),$$ - which is not generated by polynomials over $K$. (In fact, $\mathfrak b$ is radical, so $\operatorname{rad}(\mathfrak b)^e = \mathfrak a$.) - -However, the result is true for field extensions $K \subseteq L$ with $K$ perfect. We will break down the proof into a few smaller lemmata. -Remark. It suffices to consider the case where $B$ is a domain. Indeed, the localisation of $B$ at the set of nonzerodivisors is the total ring of fractions $K(B)$. It is the product of the localisations at the minimal primes of $B$. Since $B$ is reduced, each $B_\mathfrak p$ for $\mathfrak p$ minimal is a field; thus $B$ injects into a product of fields. Now if we prove the result for each $B_\mathfrak p$ (with $\mathfrak p$ minimal), then that of $B$ follows since a subring of a product of reduced rings is reduced. -Lemma 1. Let $K \subseteq L$ be finite separable, and let $B$ be a $K$-algebra. If $B$ is a domain, then $A = B \otimes_K L$ is reduced. -Proof. We can write $L = K[x]/f$ for some $f \in K[x]$ separable irreducible, by the primitive element theorem. Then $A = B[x]/f$, which embeds into $(\operatorname{Frac}B)[x]/f$. This is a reduced ring, since $f$ is separable (a quality that is not altered by field extension). $\square$ -But: note that $f$ is not necessarily irreducible in $(\operatorname{Frac}B)[x]$. -Corollary 1. Let $K \subseteq L$ be separable algebraic, and let $B$ be a $K$-algebra. If $B$ is a domain, then $A = B \otimes_K L$ is reduced. -Proof. If $x \in A$ satisfies $x^k = 0$ for some $k \in \mathbb Z_{>0}$, then $x$ actually lives in $B \otimes_K M$ for some finite subextension $M \subseteq L$. Thus, $x = 0$ by the lemma. $\square$ -Lemma 2. Let $K \subseteq L$ be purely transcendental, and let $B$ be a $K$-algebra. If $B$ is a domain, then so is $A = B \otimes_K L$. -Proof. In this case, $L = K(\{x_i\}_{i \in I})$ is a localisation of $K[\{x_i\}_{i \in I}]$, so $A = B \otimes_K K(\{x_i\}_{i \in I})$ is a localisation of $B[\{x_i\}_{i \in I}]$. $\square$ -Note however that $A$ is not equal to $B(\{x_i\}_{i \in I})$. For example, we cannot necessarily divide by $x_i - b$ for all $b \in B$; only for all $b \in K$. -Corollary 2. Let $K \subseteq L$ be a separably generated extension, and let $B$ be a $K$-algebra. If $B$ is a domain, then $A = B\otimes_K L$ is reduced. -Proof. A separably generated extension $K \subseteq L$ can be written as a tower $K \subseteq K(\{x_i\}_{i \in I}) \subseteq L$, with the first extension purely transcendental and the second separable algebraic. For the first piece, the result follows from Lemma 2. 
For the second piece, the result follows from Corollary 1. $\square$
-Corollary 3. Let $K \subseteq L$ be a separable extension, and let $B$ be a $K$-algebra. If $B$ is a domain, then $A = B \otimes_K L$ is reduced.
-Proof. This is a limit argument identical to that of Corollary 1. $\square$
-
-Proposition. Let $K$ be perfect, and let $K \subseteq L$ be any field extension. Let $B$ be a $K$-algebra. If $B$ is a domain, then $A = B \otimes_K L$ is reduced.
-
-Proof. For $K$ perfect, every field extension is separable. Thus, the result follows from Corollary 3. $\square$
-Remark. In all the lemmata, corollaries, and propositions above, we can replace the assumption "if $B$ is a domain" by "if $B$ is a finite type reduced $K$-algebra". See the remark above.
-The key word in all of the above is geometrically reduced. I just proved that every reduced variety over a perfect field is geometrically reduced, and showed that this is not true over imperfect fields.
-Definition. Let $B$ be a finite type $K$-algebra. Then $B$ is geometrically reduced if $B \otimes_K L$ is reduced for every field extension $K \subseteq L$.
-The above also shows:
-
-Proposition. Let $B$ be a finite type $K$-algebra. Then the following are equivalent:
-
-$B$ is geometrically reduced;
-$B \otimes_K \bar K$ is reduced;
-$B \otimes_K K^{\operatorname{perf}}$ is reduced;
-$B \otimes_K L$ is reduced for every purely inseparable extension $K \subseteq L$.
-
-
-Proof. The implications $1 \Rightarrow 2 \Rightarrow 3 \Rightarrow 4$ are trivial. Suppose $4$ holds, and let $K \subseteq L$ be any field extension. By a limit argument as in Corollary 1, we may assume $L$ is finitely generated. By Tag 04KM, there exists a commutative diagram
-$$\begin{array}{ccc}K & \to & L \\ \downarrow & & \downarrow \\ K' & \to & L'\end{array}$$
-with $K \subseteq K'$ and $L \subseteq L'$ finite and purely inseparable, and $K'\subseteq L'$ separably generated. By assumption, $B \otimes_K K'$ is reduced. By Corollary 2, so is $B \otimes_K L'$. The result follows since $B \otimes_K L \subseteq B \otimes_K L'$. $\square$
-All of this can also be found (with a slightly different presentation) in Tag 030V. I got a little carried away trying to understand and restructure all the arguments myself...<|endoftext|>
-TITLE: Is It Always Possible to Cross a Surface Exactly Once?
-QUESTION [11 upvotes]: Yesterday, in my physics class, the following question arose:
-
-Is there a closed surface embedded in $\mathbb R^3$ dividing space into two connected components such that all paths from one component to the other cross the surface multiple times?
-
-Note that one can safely replace "closed surface" with "topological sphere" if they like, since this question concerns only the local properties of the surface, not the global structure. You can see that by noting that, if a path with exactly one crossing exists, then it can be restricted to some neighborhood of the crossing and still be a path from one component to the other, crossing the surface once (that is, the path should begin inside the surface and end outside, and have exactly one point on the surface).
-I haven't much idea how to answer such a question; I note that the Jordan-Schönflies theorem implies a negative result in two dimensions, but this doesn't extend to three dimensions.
-
-REPLY [3 votes]: Edit: As pointed out in the comments, there is an error here. I believe it is salvageable and am not going to make too much effort to correct it.
-Yes, this is always possible. 
Let $\Sigma$ be the surface and $U_1, U_2$ be the two sides it divides $\Bbb R^3$ into.
-Invariance of domain (one of the two times we actually use the topology of the surface) implies that $\Sigma$ is nowhere-dense, so any point $x \in \Sigma$ has points in one of the $U_i$ arbitrarily close to it. In addition, $\partial U_i \subset \Sigma$. These sets are, of course, closed (since they're given by $\overline U_i \cap \Sigma$), so we see that $\partial U_1 \cup \partial U_2$ is a covering of $\Sigma$ by two closed sets; because $\Sigma$ is connected, we see that $\partial U_1 \cap \partial U_2$ is nonempty. Pick a point $x$ in this set. (Actually, it should be true that $\partial U_1 \cap \partial U_2 = \Sigma$, but I haven't been able to prove this. Thanks to Milo Brandt for suggesting this way of avoiding needing to prove it.)
-Now pick a sequence of points $x_n \to x$, $x_n \in U_i$ ($i$ fixed), and demand that $d(x_n,x) < 1/n$. I claim that for large enough $n$, $U_i \cap B(x,1/n)$ is path-connected. (This is the second place we use the topology of the surface.) This is because $\Sigma \cap B(x,1/n)$ is homeomorphic to an open subset of $\Bbb R^2$ for large $n$. Pass to the one-point compactification. Here $S = \overline{\Sigma \cap B(x,1/n)}$ is homeomorphic to the one-point compactification of the aforementioned open planar surface. I believe one can calculate that it has $\check H^2(S) = \Bbb Z$, and hence by Alexander duality the complement of $S$ in $S^3$ consists of two path-connected open sets; but $S^3 \setminus S = B(x,1/n) \setminus \left(\Sigma \cap B(x,1/n)\right)$. Hence $B(x,1/n) \setminus \Sigma$ has two path-connected components; and since I know one of them belongs to $U_1$ and the other to $U_2$ I have the desired claim.
-So, supposing $n>N$, $N$ large enough that the above applies, pick a path $f_n: [n,n+1] \to U_i \cap B(x,1/n)$, with $f_n(n) = x_n$ and $f_n(n+1) = x_{n+1}$. Concatenating these paths gives a map $f: [N,\infty) \to U_i$, and by construction $\lim_{t \to \infty} f(t) = x$ (taking the limit inside $\overline U_i$). Compactifying we obtain a map $f': [0,1] \to \overline U_i$ such that $f'^{-1}(\Sigma) = \{1\}$. Doing the exact same thing on the other side, we've constructed a path with the desired property.
-
-Don't let the fact that this worked out fool you - topological manifolds usually behave terribly. As an example of something like the above with the dimensions swapped, Bing constructed a simple closed curve in $\Bbb R^3$ such that there is no disc that intersects it precisely once (and such that the boundary of the disc links with the curve). See here.
-It's also worth seeing where we can weaken the assumption that $\Sigma$ is a surface. We still need some sort of invariance of domain and a way to use Alexander duality. It is a straightforward modification of the above to prove that you can go between two regions in space, separated by a finite connected 2-dimensional polyhedron, while crossing only once; and maybe one can weaken this to "2-dimensional finite CW complex". (In these cases we don't have the property that $\partial U_1 \cap \partial U_2 = \Sigma$, which the original version of this answer tried to prove; but by Milo's argument we don't really need it.)<|endoftext|>
-TITLE: Localization commutes with quotient
-QUESTION [9 upvotes]: Suppose $R$ is a commutative ring and $D$ a multiplicatively closed subset. 
I'd like to show via universal properties that if $\mathfrak{a} \triangleleft R$ is an ideal, then $D^{-1}R/D^{-1}\mathfrak{a} \cong \bar{D}^{-1}(R/\mathfrak{a})$, where $\bar{D}$ is the image of $D$ in $R/\mathfrak{a}$. This is pretty close to using exactness on the sequence $0 \to \mathfrak{a} \to R \to R/\mathfrak{a} \to 0$, except now we're localizing $R/\mathfrak{a}$ with respect to $\bar{D}$, not just tensoring it with $D^{-1}R$. Essentially I'm trying to show that localization commutes with passing to the quotient by $\mathfrak{a}$.
-I could probably brute force this proof, but I'd like to see it in a way that is easy to understand and to remember. I get the feeling it's an easy property in disguise.
-
-REPLY [6 votes]: I brute-forced this just for the fun of it, in a slightly different flavour than the above answer. First, the composition $R\to R/\mathfrak{a} \to \overline{D}^{-1}(R/\mathfrak{a})$ sends $D$ to invertible elements and so we get a map $\varphi: D^{-1}R\to \overline{D}^{-1}(R/\mathfrak{a})$ via $\frac{r}{d}\mapsto \frac{[r]}{[d]}$, where the brackets denote mod $\mathfrak{a}$. This map sends $D^{-1}\mathfrak{a}$ to $0$ and so we get a map
-$$ \psi: \frac{D^{-1}R}{D^{-1}\mathfrak{a}} \rightarrow \overline{D}^{-1}\left(\frac{R}{\mathfrak{a}}\right) \hspace{.5cm} \text{via} \hspace{.5cm} \left[\frac{r}{d}\right] \mapsto \frac{[r]}{[d]}.$$
-Similarly, the composition $R\to D^{-1}R\to \frac{D^{-1}R}{D^{-1}\mathfrak{a}}$ sends $\mathfrak{a}$ to $0$, and so we get a map $\theta: R/\mathfrak{a} \to \frac{D^{-1}R}{D^{-1}\mathfrak{a}}$ via $[r]\mapsto [r/1]$. This map sends $\overline{D}$ to units because $[d/1]\cdot [1/d]=[1/1]$, and so we get a map
-$$ \eta: \overline{D}^{-1}\left(\frac{R}{\mathfrak{a}}\right) \rightarrow \frac{D^{-1}R}{D^{-1}\mathfrak{a}} \hspace{.5cm} \text{via} \hspace{.5cm} \frac{[r]}{[d]} \mapsto \left[\frac{r}{d}\right].$$
-Thus $\psi$ and $\eta$ are inverses, QED.<|endoftext|>
-TITLE: Rigorous Probability/Statistics Book reference?
-QUESTION [5 upvotes]: I'm wondering if anyone could recommend a book (or a few books) about statistics/probability for someone at the advanced undergraduate level who has taken some real analysis (at the level of baby Rudin) and some mathematical statistics and probability (only with calculus and intro to proofs as prerequisites). I would like to really start from the beginning and approach statistics in a rigorous (Rudin-esque) way; a lot of the statistics I've encountered so far has involved a lot of hand waving and lacked rigorous proofs. I'm hoping for a few books to really build a rigorous foundation and intuition, not just introduce me to the topics as quickly as possible. Thanks for any help and suggestions. Please let me know if what I'm asking for is ridiculous.
-Note: I haven't taken measure theory, but I would be happy to learn it.
-
-REPLY [2 votes]: I fear you will get a variety of answers with no way to evaluate the credentials of those who offer them. Why not browse course materials posted online at nearby or well-known PhD programs in statistics to see what texts they are using? That way, you can judge the degree of rigor of the course, and thus find a text that suits you.
-Billingsley was my thesis adviser and I personally like his book very much, but
-I wonder if it is the best choice to begin the kind of self-study you have in mind.<|endoftext|>
-TITLE: Odd digits of $2^n$
-QUESTION [24 upvotes]: Let $u_{b}(n)$ be equal to the number of odd digits of $n$ in base $b$. 
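-(For experimenting, here is a direct little Python implementation of $u_b$, a sketch of my own:)
-def u(b, n):
-    count = 0
-    while n:
-        n, digit = divmod(n, b)
-        count += digit % 2   # count the odd digits
-    return count
-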
-For example:
-In base $10$, $u_{10}(15074) = 3$
-In base $13$, $u_{13}(15610) = u_{13}([7, 1, 4, 10]_{13}) = 2$
-What is the value of $$\sum_{n=1}^\infty \frac{u_b(2^n)}{2^n}$$
-I don't think there's a nice closed form solution in the general case.
-But when the base is even, it seems that the sum evaluates to $\frac{1}{b-1}$.
-I have no idea how to approach it. I tried thinking about the generating function. But I can't find a recurrence relation or any nice properties.
-
-REPLY [23 votes]: Here is my solution:
-If $a=\sum_{j\geq 0} a_j b^j$, then $u_b(a) = \frac{1}{2}\sum_j (1-(-1)^{a_j})$ because $1-(-1)^m=\begin{cases} 0 & 2\mid m\\ 2 & 2\nmid m\end{cases}$. Note that $a_j$ is the digit directly left of the decimal point in the $b$-adic expansion of $ab^{-j}$, i.e. $a_j = \lfloor ab^{-j} \rfloor \mod b$. Now if $b$ is even we can therefore replace $(-1)^{a_j}$ by $(-1)^{\lfloor ab^{-j} \rfloor}$.
-Thus:
-$$\sum_{n\geq 1} 2^{-n} u_b(2^n) = \sum_{n\geq 1,j\geq 0} 2^{-n} \frac{1}{2} (1-(-1)^{\lfloor 2^n b^{-j}\rfloor})$$
-Observe that for $j=0$ we always have $2^n b^{-j} \in 2\mathbb{N}$ so that all summands for $j=0$ vanish. Thus we assume $j\geq 1$ from now on.
-Now we have to think about $2^n b^{-j}$. This suggests looking at the $2$-adic expansion of $b^{-j}$:
-$$b^{-j} =\sum_{n\geq 1} d_n 2^{-n}$$
-for some $d_n\in\{0,1\}$ so that $\lfloor 2^n b^{-j} \rfloor \mod 2 = d_n$.
-The nice thing about the digits $d$ of any $2$-adic expansion is that $\frac{1}{2}(1-(-1)^d)$ not only indicates whether or not $d$ is odd, it is actually equal to $d$, because $d\in\{0,1\}$. Therefore $\frac{1}{2}(1-(-1)^{\lfloor 2^{n} b^{-j} \rfloor})=\frac{1}{2}(1-(-1)^{d_n})=d_n$.
-Thus:
-$$\sum_{j\geq 1} \sum_{n\geq 1} 2^{-n} \frac{1}{2}(1-(-1)^{\lfloor 2^n b^{-j} \rfloor}) = \sum_{j\geq 1} \sum_{n\geq 1} 2^{-n} d_n = \sum_{j\geq 1} b^{-j} = \frac{1}{b-1}$$
-and that's what we wanted to prove.<|endoftext|>
-TITLE: Sum of $1-\frac{2^2}{5}+\frac{3^2}{5^2}-\frac{4^2}{5^3}+....$
-QUESTION [11 upvotes]: $1-\frac{2^2}{5}+\frac{3^2}{5^2}-\frac{4^2}{5^3}+....$
-How can we find the sum of the above series?
-I don't know how to start and just need some hint.
-
-REPLY [3 votes]: The series converges absolutely, so the following manipulations are justified. Call the sum $S$ and write $$S=1+\sum_{r=1}^\infty (-1)^r\cdot\frac{(r+1)^2}{5^{r}}=1+R$$
-where $R=\sum\limits_{r=1}^\infty (-1)^r\cdot\frac{(r+1)^2}{5^{r}}=-\frac{2^2}{5}+\sum\limits_{r=1}^\infty (-1)^{r+1}\cdot\frac{(r+2)^2}{5^{r+1}}$
-Now $$-\frac{1}{5}\cdot R=\sum_{r=1}^\infty (-1)^{r+1}\cdot\frac{(r+1)^2}{5^{r+1}}$$
-Subtracting, we get $$\frac{6}{5}\cdot R=-\frac{2^2}{5}+\sum_{r=1}^\infty (-1)^{r+1}\cdot\frac{2r+3}{5^{r+1}}=-\frac{2^2}{5}+Q$$
-Similarly $$Q=\frac{1}{5}+\sum_{r=1}^\infty (-1)^{r+2}\cdot\frac{2r+5}{5^{r+2}}$$
-and $$-\frac{1}{5}\cdot Q=\sum_{r=1}^\infty (-1)^{r+2}\cdot\frac{2r+3}{5^{r+2}}$$
-Again we get that $$\frac{6}{5}Q=\frac{1}{5}+\sum_{r=1}^\infty (-1)^{r+2}\cdot\frac{2}{5^{r+2}}$$
-so that $$Q=\frac{5}{6}\left(\frac{1}{5}-\frac{1}{75}\right)=\frac{7}{45}$$
-Hence we can calculate $R$ and then $S$, which comes out to be $\frac{25}{54}$.<|endoftext|>
-TITLE: What is the smallest prime factor of the number $14^{14^{14}}+13\ $?
-QUESTION [7 upvotes]: What is the smallest prime factor of the number
-$$N\ :=\ 14^{14^{14}}+13\ ?$$
-The number of digits of $N$ is $12,735,782,555,419,983$ (The number of digits of $N$ has itself $17$ digits). 
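-(These values are easy to double-check; here is a short Python sketch of my own using high-precision logarithms, which reproduces the digit count and the digits quoted below.)
-from decimal import Decimal, getcontext
-
-getcontext().prec = 40
-E = 14**14                                 # = 11112006825558016
-d = Decimal(E) * Decimal(14).log10()       # log10 of 14^(14^14)
-print(int(d) + 1)                          # number of digits: 12735782555419983
-print(Decimal(10) ** (d % 1))              # leading digits: 1.698324865652...
-print((pow(14, E, 10**13) + 13) % 10**13)  # last 13 digits of N
-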
The first digits of $N$ are $1698324865652...$ and the last digits are $...6015154651149$
-It is hopeless to apply a primality test to $N$ because $N$ is far too large.
-I applied trial division and found no prime factor below $6\times 10^9$.
-I do not think that there are any better methods to find a factor of such a number than trial division, so I invite number-theory enthusiasts to join in the search for prime factors.
-
-REPLY [10 votes]: The smallest prime factor of $ 14^{14^{14}}+13$ is $13990085729$.
-Trial division should be the only way to find factors of such big numbers.
-Pollard's rho method and ECM both need $\gcd(q,N)$ for some small $q$, while $N$ is far out of range for such computations.
-The trial division with modular exponentiation I used in Pari/GP:
-forprime(p=2,10^11,if(!(Mod(14,p)^11112006825558016+13),return(p)))
-
-returns the result
-time = 10min, 58,230 ms.
-%1 = 13990085729
-
-Note: precomputing $14^{14}=11112006825558016$ speeds up the algorithm by about 7%.<|endoftext|>
-TITLE: Examples of bounded linear operators with range not closed
-QUESTION [13 upvotes]: I've been trying to get some intuition on what it means for a bounded linear operator to have closed range. Can anyone give some simple examples of such an operator that does not have closed range?
-Thanks!
-
-REPLY [6 votes]: Take the bounded linear operator $$T: l^\infty \ni (x_j) \mapsto \left(\dfrac{x_j}{j}\right)\in l^\infty.$$
-Then $x^{(n)}=(\sqrt{1},\sqrt{2},\dots,\sqrt{n},0,0,\dots)$ is in $l^\infty$ with $Tx^{(n)}=(1/\sqrt{1},\dots,1/\sqrt{n},0,\dots)$.
-We have $Tx^{(n)} \to y=(1/\sqrt{j})$ in $l^\infty$, but $y \notin T(l^\infty)$ as $(\sqrt{j})\notin l^\infty$. So $T(l^\infty)$ is not closed.<|endoftext|>
-TITLE: Functional Equation: When $f(x+y)=f(x)+f(y)-(xy-1)^2$
-QUESTION [6 upvotes]: How does one solve the following functional equation for $f:\mathbb{R}\rightarrow\mathbb{R}$?
-$$f(x+y)=f(x)+f(y)-(xy-1)^2$$
-When I assumed it was a polynomial equation, it can be seen through induction that $$f(nx)=nf(x)-\sum _{ i=1 }^{ n-1 }{ (ix^2-1)^2 } $$
-for every natural number $n$.
-This implies that $$f(n)-nf(1)+\sum _{ i=1 }^{ n-1 }{ (i-1)^2 }=0 $$
-is true for all natural $n$, or that for all such $n$ $$f(n)-nf(1)+\frac{(n-2)(n-1)(2n-3)}{6}=0$$
-Since a polynomial that satisfies $f(n)-nf(1)+\frac{(n-2)(n-1)(2n-3)}{6}=0$ for infinitely many $n$ must agree with this formula everywhere, it can be said that $$f(n)=nf(1)-\frac{(n-2)(n-1)(2n-3)}{6}$$ for all real numbers $n$.
-This implies that $f$ is of degree $3$, but this is a contradiction, since if $f$ had degree $3$, the coefficient of $x^2y^2$ on the left side would be $0$.
-So there appear to be no polynomial solutions. I further suspected that there are no functions at all satisfying this equation.
-How does one solve this equation? Any help would be appreciated.
-
-REPLY [9 votes]: $$f(x+y)=f(x)+f(y)-(xy-1)^2\\
-f(2)=2f(1)\\
-f(3)=f(2)+f(1)-1=3f(1)-1\\
-f(2)+f(2)-9=f(4)=f(1)+f(3)-4\\
-4f(1)-9=4f(1)-5$$<|endoftext|>
-TITLE: Show that the set of Hermitian matrices forms a real vector space
-QUESTION [6 upvotes]: How can I "show that the Hermitian Matrices form a real Vector Space"? I'm assuming this means the set of all Hermitian matrices.
-I understand how a Hermitian matrix containing complex numbers can fail to be closed under scalar multiplication by multiplying it by $i$, but how can it be closed under addition? Wouldn't adding two complex matrices just produce another complex matrix in most instances?
-Also, how can I find a basis for this space? 
-
-REPLY [15 votes]: Hint:
-I give you the intuition for $2\times 2$ matrices that you can extend to the general case.
-An Hermitian $2\times 2$ matrix has the form:
-$$
-\begin{bmatrix}
-a&b\\
-\bar b&c
-\end{bmatrix}
-$$
-with: $a,c \in \mathbb{R}$ and $b\in \mathbb{C}$ ($\bar b $ is the complex conjugate of $b$).
-Now you can see that, for two matrices of this form, we have:
-$$
-\begin{bmatrix}
-a&b\\
-\bar b&c
-\end{bmatrix}
-+
-\begin{bmatrix}
-x&y\\
-\bar y&z
-\end{bmatrix}=
-\begin{bmatrix}
-a+x&b+y\\
-\bar{b}+\bar{y}&c+z
-\end{bmatrix}
-$$
-and, since $\bar{b}+\bar{y}=\overline{b+y}$, the result is a matrix of the same form, i.e. an Hermitian matrix.
-For the product we have:
-$$k
-\begin{bmatrix}
-a&b\\
-\bar b&c
-\end{bmatrix}=
-\begin{bmatrix}
-ka&kb\\
-k\bar b&kc
-\end{bmatrix}
-$$
-so the resulting matrix is Hermitian only if $k$ is a real number. Finally we see that the null matrix is Hermitian and the opposite of an Hermitian matrix is Hermitian, so the set of Hermitian matrices is a real vector space.
-For the basis:
-Note that an Hermitian matrix can be expressed as a linear combination with real coefficients in the form:
-$$
-\begin{bmatrix}
-a&b\\
-\bar b&c
-\end{bmatrix}=
-a\begin{bmatrix}
-1&0\\
-0&0
-\end{bmatrix}+
-\mbox{Re}(b)
-\begin{bmatrix}
-0&1\\
-1&0
-\end{bmatrix}
-+\mbox{Im}(b)
-\begin{bmatrix}
-0&i\\
--i&0
-\end{bmatrix}
-+c
-\begin{bmatrix}
-0&0\\
-0&1
-\end{bmatrix}
-$$
-and, since the four matrices on the right are linearly independent, they form a basis.<|endoftext|>
-TITLE: Minimum infinity norm control problem
-QUESTION [5 upvotes]: I am having trouble understanding Example 2 of section 5.9 of Luenberger's Optimization by Vector Space Methods. The problem is to select a current $u(t)$ on $[0,1]$ to drive a motor governed by
-$$ \ddot{\theta}(t) + \dot{\theta}(t) = u(t) $$
-from $\theta(0) = \dot{\theta}(0) = 0$ to $\theta(1) = 1, \dot{\theta}(1) = 0$ minimizing $\max_{0 \leq t \leq 1} |u(t)|$. Integrating the differential equation allows one to express the constraints as
-$$
-\dot{\theta}(1) = \int_0^1{e^{t-1}u(t)dt} = 0 \\
-\theta(1) = \int_0^1{(u(t) - \ddot{\theta}(t))dt} = \int_0^1{(1 - e^{t-1})u(t)dt} = 1
-$$
-Defining $y_1$, $y_2 \in L_1[0,1]$ by $y_1(t) = e^{t-1}$, $y_2(t) = 1 - e^{t-1}$, we seek $u \in L_{\infty}[0,1]$ of minimum norm satisfying
-$$
-\langle y_1, u \rangle = 0 \\
-\langle y_2, u \rangle = 1,
-$$
-where $\langle x, f \rangle$ denotes $f(x)$, $f$ being a bounded linear functional.
-By a previous theorem, $\min ||u|| = \max_{||a_1 y_1 + a_2 y_2|| \leq 1} a_2$. Luenberger writes "Maximization of $a_2$ subject to this constraint is a straightforward task, but we do not carry out the necessary computations.". He then shows that the optimal $u$ is "bang-bang". I understand the transformation of the problem and why $u$ is bang-bang, and I know that once I have $a_1$ and $a_2$ I may determine $u$, since it changes sign at the same time as $a_1 y_1 + a_2 y_2$, and its absolute value is equal to $a_2$. But I do not know how to determine $a_1$ and $a_2$, and I do not see how maximization of $a_2$ subject to the integral constraint is a straightforward task. Any hints would be very welcome.
-
-REPLY [3 votes]: I was able to solve this with some outside help. Since there seemed to be some interest in the problem (judging by the upvotes), I am posting the solution here. 
-We consider three separate cases:
-$$
-(a_1 - a_2)\mathrm{e}^{t-1} + a_2 > 0 \text{ for all } t \in (0, 1) \\
-(a_1 - a_2)\mathrm{e}^{t-1} + a_2 < 0 \text{ for all } t \in (0, 1) \\
-(a_1 - a_2)\mathrm{e}^{t-1} + a_2 = 0 \text{ for some } t \in (0, 1)
-$$
-In this way we can calculate the restriction integral directly and obtain simpler restrictions in order to apply standard methods.
-In the first case,
-$$
-\int_0^1{((a_1 - a_2)\mathrm{e}^{t-1} + a_2)\mathrm{d}t} \leq 1 \iff a_2 \leq \mathrm{e} - (\mathrm{e} - 1)a_1
-$$
-Also, since the function is nonnegative at the endpoints,
-$$
-a_1 \geq 0
-$$
-and
-$$
-a_2 \geq -\frac{1}{\mathrm{e} - 1}a_1
-$$
-Maximization of $a_2$ can be performed geometrically; the optimum is found to be $(a_1, a_2) = (0, \mathrm{e})$.
-The second case is handled similarly; the solution is $(a_1 ,a_2) = (\frac{\mathrm{e}(1 - \mathrm{e})}{(\mathrm{e} - 1)^{2} - 1}, \frac{\mathrm{e}}{(\mathrm{e} - 1)^{2} - 1})$. This is worse than the solution for the first case, so we discard it.
-The third case is a little harder. If $(a_1, a_2)$ is feasible, so is $(-a_1, -a_2)$, hence since we are maximizing $a_2$ we may assume $a_2 \geq 0$. If $a_2 = 0$, we get $u(t) = 0$ almost everywhere, which is not a solution (by the second restriction). Therefore $a_2 > 0$. If $(a_1 - a_2)\mathrm{e}^{t_0-1} + a_2 = 0$, then $t_0 = 1 + \log{(a_2/(a_2 - a_1))}$. Thus we need to have $a_2 - a_1 > 0$, which means $(a_1 - a_2)\mathrm{e}^{t-1} + a_2$ is decreasing in $t$, so it will be positive first and then negative. Now we can integrate the constraint:
-$$
-\int_0^1{|(a_1 - a_2)\mathrm{e}^{t-1} + a_2| \mathrm{d}t} \leq 1 \iff \\
-\int_0^{t_0}{((a_1 - a_2)\mathrm{e}^{t-1} + a_2)\mathrm{d}t} + \int_{t_0}^1{((a_2 - a_1)\mathrm{e}^{t-1} - a_2)\mathrm{d}t} \leq 1 \iff \\
-\mathrm{e}^{-1}a_2 - (1 + \mathrm{e}^{-1})a_1 + 2a_2\log{\frac{a_2}{a_2-a_1}} \leq 1.
-$$
-If we can show that the optimum is achieved with equality in this restriction, the inequality will reduce to an equality and then we can apply the Lagrange multiplier method. Let $(a_1, a_2)$ be feasible with
-$$
-\int_0^1{|(a_1 - a_2)\mathrm{e}^{t-1} + a_2| \mathrm{d}t} = 1 - \delta
-$$
-for some $\delta \in (0, 1)$.
-Then
-$$
-\int_0^1{|(a_1 - (a_2 + \mathrm{e}\delta))\mathrm{e}^{t-1} + a_2 + \mathrm{e}\delta| \mathrm{d}t} = \int_0^1{|(a_1 - a_2)\mathrm{e}^{t-1} + a_2 + \mathrm{e}\delta(1 - \mathrm{e}^{t-1})| \mathrm{d}t} \\
-\leq \int_0^1{|(a_1 - a_2)\mathrm{e}^{t-1} + a_2| \mathrm{d}t} + \mathrm{e}\delta\int_0^1{|(1 - \mathrm{e}^{t-1})| \mathrm{d}t} \\
-= 1 - \delta + \mathrm{e}\delta(1 - (1 - \mathrm{e}^{-1})) \\
-= 1
-$$
-In other words, $(a_1, a_2 + \mathrm{e}\delta)$ is feasible. So $(a_1, a_2)$ cannot be optimal. Hence the optimal point must be achieved with equality in the restriction.
-Now we form the Lagrangian $L(a_1, a_2, \lambda) = a_2 + \lambda (\mathrm{e}^{-1}a_2 - (1 + \mathrm{e}^{-1})a_1 + 2a_2\log{\frac{a_2}{a_2-a_1}} - 1)$, and apply the Lagrange multiplier method. This gives a simple system of equations with solution $((1 - \frac{2}{1 + \mathrm{e}^{-1}})\frac{1}{\log{\frac{(\mathrm{e}+1)^2}{4\mathrm{e}}}}, \frac{1}{\log{\frac{(\mathrm{e}+1)^2}{4\mathrm{e}}}})$. This has a greater value of $a_2$ than the solution in the first case, so it must be the solution.<|endoftext|>
-TITLE: Can a Compact Lie Group have a Non-Compact Lie Subgroup?
-QUESTION [6 upvotes]: Hopefully the title says it all in terms of the question I'm asking. 
I feel that the answer is no, but I'm not sure why; I don't really know many deep theorems about the structure of Lie groups.
-
-REPLY [11 votes]: Depends on what you mean by "subgroup". If your definition of a Lie subgroup requires that the subgroup is closed or embedded (which is equivalent), then this is not possible. If not, then consider the example of a one-parameter subgroup whose image is (after the obvious identifications) a straight line with an irrational slope inside the torus $S^1 \times S^1$.
-
-REPLY [6 votes]: I'm not going to worry about what the definition of "Lie subgroup" is, but a compact Lie group can certainly have a subgroup which is a non-compact Lie group. Consider the torus $$\Bbb T^2=\{(e^{it},e^{is}):t,s\in \Bbb R\}.$$ Suppose $\alpha\in\Bbb R$ is irrational and consider the subgroup $$\{(e^{it},e^{i\alpha t}):t\in\Bbb R\}.$$<|endoftext|>
-TITLE: Cayley-Hamilton Theorem - Trace of Exterior Power Form
-QUESTION [9 upvotes]: Let $V$ be an $n$-dimensional vector space over a field $F$ (the characteristic of which, for the purpose of this post, may be taken as $0$). Let $T$ be a linear operator on $V$ and $\lambda\in F$.
-Some time ago, somewhere (I can't recall where), I read that
-
-Formula. $\det(T-\lambda I)= \sum_{k=0}^n (-1)^k \text{trace}(\Lambda^k T)\lambda^{n-k}$
-
-I considered a special case to test this out by taking $n=3$. And here is what I got:
-Let $\{e_1, e_2, e_3\}$ form a basis for $V$. Then
-$$\det(T-\lambda I)(e_1\wedge e_2\wedge e_3)=(Te_1-\lambda e_1)\wedge (Te_2-\lambda e_2)\wedge (Te_3- \lambda e_3)$$
-whence expanding the RHS we get
-$$(\det T)(e_1\wedge e_2\wedge e_3) - \lambda(Te_1\wedge Te_2 \wedge e_3+ Te_1\wedge e_2\wedge Te_3 + e_1\wedge Te_2\wedge Te_3)+\lambda^2(Te_1\wedge e_2\wedge e_3+ e_1\wedge Te_2\wedge e_3+e_1\wedge e_2 \wedge Te_3) - \lambda^3(e_1\wedge e_2\wedge e_3)$$
-Since $\det T=\text{trace}(\Lambda^3 T)$, the coefficient of $\lambda^0$ matches with that in the formula. Also, the coefficient of $\lambda^2$ is just $\text{trace}(T)$ by definition so this is also fine.
-
-The Problem. What I am not able to see is how the coefficient of $\lambda$ equals $\text{trace}(\Lambda^2T)$.
-
-Can somebody please help? Thanks.
-
-REPLY [3 votes]: Let us write $Te_i = \sum_j a^j_i e_j$. For $1 \leq i < j \leq n$, we have
-$$ (\Lambda^2(T))(e_i \wedge e_j) = Te_i \wedge Te_j = \left( \sum_{k_1} a_i^{k_1} e_{k_1} \right) \wedge \left( \sum_{k_2} a_j^{k_2} e_{k_2} \right) = (a_i^i a_j^j - a_i^j a_j^i) (e_i \wedge e_j) + \cdots $$
-where the $\cdots$ don't involve $e_i \wedge e_j$ (as the coefficient of $e_i \wedge e_j$ for $i < j$ comes from $k_1 = i, k_2 = j$ or $k_1 = j, k_2 = i$). Hence,
-$$ \mathrm{tr}(\Lambda^2T) = \sum_{i < j} (a_i^i a_j^j - a_i^j a_j^i).
$$
-In your case, similar arguments show that
-$$ Te_1 \wedge Te_2 \wedge e_3 = \left( \sum_{k_1} a_1^{k_1} e_{k_1} \right) \wedge \left( \sum_{k_2} a_2^{k_2} e_{k_2} \right) \wedge e_3 = (a_1^1 a_2^2 - a_1^2 a_2^1) (e_1 \wedge e_2 \wedge e_3), \\
-Te_1 \wedge e_2 \wedge Te_3 = \left( \sum_{k_1} a_1^{k_1} e_{k_1} \right) \wedge e_2 \wedge \left( \sum_{k_2} a_3^{k_2} e_{k_2} \right) = (a_1^1 a_3^3 - a_1^3 a_3^1) (e_1 \wedge e_2 \wedge e_3), \\
-e_1 \wedge Te_2 \wedge Te_3 = e_1 \wedge \left( \sum_{k_1} a_2^{k_1} e_{k_1} \right) \wedge \left( \sum_{k_2} a_3^{k_2} e_{k_2} \right) = (a_2^2 a_3^3 - a_2^3 a_3^2) (e_1 \wedge e_2 \wedge e_3),
-$$
-and so indeed
-$$ \mathrm{trace}(\Lambda^2 T)(e_1 \wedge e_2 \wedge e_3) = Te_1 \wedge Te_2 \wedge e_3 + Te_1 \wedge e_2 \wedge Te_3 + e_1 \wedge Te_2 \wedge Te_3. $$<|endoftext|>
-TITLE: When is it possible to have $f(x+y)=f(x)+f(y)+g(xy)$?
-QUESTION [9 upvotes]: According to the question mentioned here, it seems that there is no function $f(x)$ such that the functional equation
-$$f(x+y)=f(x)+f(y)-(xy-1)^2$$
-can hold. Motivated by this question, I found it interesting to somehow extend the question.
-
-What conditions are required for a given function $g(x)$ such that there exists a function $f(x)$ that can satisfy the following equality
- $$f(x+y)=f(x)+f(y)+g(xy)$$
- where $f(x)$ and $g(x)$ are real-valued functions of a real variable.
-
-Any hint or help is appreciated. :)
-
-REPLY [7 votes]: If $f$ is not differentiable, we still have $f((x+y)+z)=f(x+(y+z))$, from which follows
-$$g(xy)+g(xz+yz)=g(xy+xz)+g(yz)\tag{1}$$
-By change of variables
-$$x=\sqrt{\frac{ab}{c}}, \quad y=\sqrt{\frac{ca}{b}}, \quad z=\sqrt{\frac{bc}{a}}\tag{2}$$
-this becomes
-$$g(a)+g(b+c)=g(a+b)+g(c)\tag{3}$$
-whenever $abc>0$.
-By change of variables $(a,b,c)\to(a+b,-b,b+c)$ we can also move $-b$ across so it holds for all $abc\neq0$. Then
-$$g(a+b)+g(b-b)=g(a)+g(b)\tag{4}$$
-so $h(x)=g(x)-g(0)$ satisfies $$h(a+b)=h(a)+h(b)\tag{5}$$
-This has the well-known solutions $h(x)=kx$, and some very discontinuous ones.<|endoftext|>
-TITLE: How can I guarantee the unique positive root of this polynomial?
-QUESTION [9 upvotes]: How can I guarantee the unique positive root of this polynomial?
-I have two polynomials,
-$$
-x^{n+1} + x^n - 1 =0
-$$
-and
-$$
-x^{n+1} - x^n - 1 =0
-$$
-respectively, where $n\in\mathbb{N}$. I have tried the cases from $n=1$ to $100$. For every calculation, I have found a unique positive root for each polynomial with the help of MATHEMATICA, but I couldn't prove it mathematically. How can I prove these polynomials have a unique positive root?
-
-REPLY [2 votes]: Rewriting the equations as $x^n = \frac{1}{x \pm 1}$,
-explore the intersection of two functions:
-$x^n$ (positive and increasing for $x>0$), whatever $n>0$,
-and $\frac{1}{x+1}$ (decreasing from $1$ to $0$ for $x>0$)
-or $\frac{1}{x-1}$ (decreasing from $\infty$ to $0$ for $x>1$).
-Since a strictly increasing and a strictly decreasing curve can cross at most once, the intersection exists, is unique, and always and necessarily belongs to the first quadrant.
-(Explore the graphs on wolframalpha.com, varying $n$: for instance,
-solve $x^n = \frac{1}{( x + 1 )}$ with $n=2$, and
-solve $x^n = \frac{1}{( x - 1 )}$ with $n=2$.)<|endoftext|>
-TITLE: What's the mathematical symbol to say "It doesn't depend on ..."?
-QUESTION [6 upvotes]: If I want to say x doesn't exist I would use the symbol $\nexists$
-If I want to say x is a member of... I would use $\in$
-But what's the symbol to say y doesn't depend on x?
-I know I could write something like y$\neq$f(x) but I'm looking for a single symbol, something more compact, without the f nor the parentheses.
-Does it exist?
-In fact f(x) represents a function, and we could have a relationship without a function.
-
-REPLY [3 votes]: OK, the notation already exists, and I have found it.
-The book "Bayesian Reasoning and Machine Learning" provides a mathematical notation list, where it says:
-
-Then, this pi-like symbol, and the reversed pi, mean that a variable is dependent on or independent of another variable.
-I don't know if it's only used in a statistical framework or also in general in mathematics.
-It's strange nobody knew it.<|endoftext|>
-TITLE: Never seen this notation before: $\int (y-f(x))^2 Pr(dx,dy) $
-QUESTION [8 upvotes]: I have never seen an integral like this:
-$$\int (y-f(x))^2 Pr(dx,dy) $$
-What is that? More precisely, what is $Pr(dx,dy)$? And how is that integral defined? I found it in Elements of Statistical Learning by T. Hastie et al.
-
-REPLY [4 votes]: $Pr(dx,dy)$ is the joint distribution. For example, if the density of the joint distribution is $g(x,y)$, then
-$$
-\int (y-f(x))^2 Pr(dx,dy)=\int\int (y-f(x))^2 g(x,y)\,dx\,dy,
-$$
-over your region of integration.<|endoftext|>
-TITLE: Upper bound on exact power of wild prime that divides the different
-QUESTION [7 upvotes]: Let $K$ be a number field and let $p\in \mathbb{Z}$ be a prime. Suppose that $p\mathcal{O}_K=Q^eI$, with $\gcd(Q, I)=1$ and let $\mathcal{D}_{K/\mathbb{Q}}$ be the different ideal of $K$. We know that $Q^{e-1}\mid \mathcal{D}_{K/\mathbb{Q}}$ and if $Q$ is tamely ramified over $p$, i.e. $p\not\mid e$, then $Q^e\not\mid \mathcal{D}_{K/\mathbb{Q}}$. Suppose now that $Q$ is wild over $p$, i.e. $p\mid e$, and let $k$ be the exact power of $Q$ that divides $\mathcal{D}_{K/\mathbb{Q}}$. If we know $p$ and $e$, can we give an upper bound on $k$?
-
-REPLY [4 votes]: Yes, there is an upper bound depending only on $p$ and $e=p^n$, and it's achieved in the extension $K=\Bbb Q(p^{1/e})$.
-This is a purely local question, and I'm going to pretend that you asked it for wildly ramified extensions of $\Bbb Q_p$, though the translation to $\Bbb Q$ is almost automatic. First let's look at the above extension:
-We have $\pi$, a root of $X^{p^n}-p$, whose derivative is $p^nX^{p^n-1}$; substitute $\pi$ and get $p^n\pi^{p^n-1}=\delta$, a number with $v_\pi(\delta)=np^n+p^n-1=(n+1)e-1$. I say that this is as big as your number $k$ can get.
-Our general extension $K\supset\Bbb Q_p$ is to be totally wildly ramified, so of degree $p^n=e$, with a prime element $\pi$ that's a root of an Eisenstein polynomial $F(X)\in\Bbb Q_p[X]$. Say $F=X^{p^n}+\sum_0^{e-1} c_jX^j$, with $\delta=F'(\pi)=p^n\pi^{p^n-1}+\sum_0^{p^n-2}(j+1)c_{j+1}\pi^j$. When you ask for the $v_\pi$-value of this, you notice that no two monomials in the expression have the same value, since all are incongruent modulo $p^n=e$: the coefficients $c_{j+1}$ are in $\Bbb Q_p$ and thus the values $v_\pi(c_{j+1})$ are divisible by $e=p^n$. So the minimum of the $v_\pi$-values of these monomials may well be less than that of $p^n\pi^{p^n-1}$, but since this monomial might also dominate, as it did in my example, that is the biggest possible value for $k$.<|endoftext|>
-TITLE: Understanding a proof about nested nonempty connected compact subsets
-QUESTION [6 upvotes]: I know this question has been asked to death here on MSE but I have not found a satisfactory solution. A solution found online is extremely elegant but I do not quite understand it!
-
-Given a nested sequence of closed nonempty connected subsets of
- a compact metric space $X$. Prove that $\bigcap_{i=1} X_i$ is nonempty
- and connected.
-
-It is a standard fact that the intersection of a nested sequence of nonempty compact sets is nonempty and compact.
-I wish to understand the bit of the proof that this is also connected.
-Proof: https://math.berkeley.edu/sites/default/files/pages/f10solutions.pdf
-
-Suppose that $\bigcap_{i=1} X_i$ is not connected. Let $A$ and $B$ be two
- disjoint nonempty closed sets so that $\bigcap_{i=1} X_i = A \cup B$.
- Find disjoint open sets $U$ and $V$ so that $A \subset U$ and $B \subset V$.
-Put $F_i = X_i − (U \cup V )$. Then $\{F_i\}$ is a nested sequence of
- compact sets, whose intersection is empty. Thus $F_i = ∅$ for some
- $i$. That is, $X_i \subset U \cup V$.
-However, $X_i$ intersects both $U$ and $V$, since $X_i \cap A\neq ∅$ and $X_i \cap B \neq ∅$. This contradicts the assumption that $X_i$
- is connected.
-
-Can someone please elaborate on some of the details of the proof?
-1) Why bother finding open sets containing $A,B$ and how do we know that they even exist? This move seems sort of unnatural.
-2) What is so special about $U \cup V$ especially; why does $F_i = X_i − (U \cup V )$ imply $\{F_i\}$ has empty intersection? Why does the intersection being empty imply the existence of $F_i = \varnothing$?
-3) Can someone please justify $F_i = \varnothing \Leftrightarrow X_i \subset U \cup V$?
-4) Can someone please justify the contradiction?
-I know this is pretty much the entire proof, but I have seriously tried to crack it and failed to understand it. Perhaps the proof is too difficult.
-
-REPLY [5 votes]: 1) We have some fiddliness at the start to get $U, V$ disjoint. It's a bit long, so see Disjoint compact sets in a Hausdorff space can be separated
-This is possible because the space is Hausdorff (given that it is metrizable). There's a nice phrase by Willard, who says that this is an example of compact sets behaving like points - as $A, B$ are closed in a compact space, they are also compact, and because the space is Hausdorff, there are disjoint open sets containing $A$ and $B$. Neat!
-The reason we want such sets will become apparent as we wade on.
-2) $U\cup V$ is special because it covers $\bigcap_{i=1}X_i$, and because it's open. So $F_i\equiv X_i\setminus (U\cup V)$ is a closed subset of $X$, hence compact, and doesn't meet the intersection $\bigcap_{i=1}X_i$; finally note the $F_i$ also form a nested sequence (draw a diagram!). So if all $F_i$ were non-empty, their intersection would be non-empty - but that's not true!
-As $$\bigcap_{i=1}F_i=\bigcap_{i=1}(X_i\setminus (U\cup V))=(\bigcap_{i=1}X_i)\setminus (U\cup V)=\emptyset$$ because we already knew $U\cup V$ covers that intersection.
-So, there is some specific $F_k$ that is empty!
-3) Nearly there now. As this $F_k$ is defined as $X_k\setminus (U\cup V)$, then if $F_k=\emptyset$, that implies every element of $X_k$ was also in $U\cup V$, i.e. $X_k\subset (U\cup V)$.
-4) So we have that there is a pair of disjoint open sets covering this connected $X_k$. But surely they provide a separation of $X_k$? Just take the complements of the open sets $U, V$ (which contain our original $B$ and $A$ respectively...) and then intersect with $X_k$ to get a separation. The only way this wouldn't be true is if $X_k$ had trivial intersection with one of those complements, i.e. if $X_k$ was entirely contained by one of the open sets.
But this can't be true, because we know $$X_k\cap A\supseteq (\bigcap_{i=1}X_i)\cap A\neq\emptyset.$$
-The same holds for $B$, and so $X_k$ really has a separation. This is a contradiction because it was supposed to be connected!
-Phew. I hope that makes sense; please ask for clarifications if anything is too terse. As often seems to be the case, a short-looking proof is actually an example of economy of writing, rather than a simple idea. That's not to say this isn't a nice proof, it's just not one you can 'see' in a flash without having spent some time thinking topologically.<|endoftext|>
-TITLE: Can the function $f(x,y) = \frac{xy}{\sqrt{x^2+y^2}}$ be defined so that f is differentiable at the origin?
-QUESTION [6 upvotes]: Can the function $f(x,y) = \frac{xy}{\sqrt{x^2+y^2}}$ be defined so that $f$ is continuous and differentiable at the origin?
-I redefined the function piecewise so that $f=0$ at the origin and $f = \frac{xy}{\sqrt{x^2+y^2}}$ otherwise. This made the function continuous since $\displaystyle\lim_{(x,y) \to (0,0) } \frac{xy}{\sqrt{x^2+y^2}} = 0$.
-However, I'm having a difficult time proving whether or not f is differentiable. I'm pretty sure it's not. I tried to do this by showing that the partial derivatives are not continuous at the origin, but I got stuck.
-Thanks!
-
-REPLY [6 votes]: Let us check whether $\frac{\partial f}{\partial x}(0,0)$ exists. We have
-$$ \frac{\partial f}{\partial x}(0,0) = \lim_{h \to 0} \frac{f(h,0) - f(0,0)}{h} = \lim_{h \to 0} 0 = 0 $$
-and similarly, since the function is symmetric in $x$ and $y$, $\frac{\partial f}{\partial y}(0,0) = 0$ so both partial derivatives exist.
-Those are necessary but not sufficient conditions for the function $f$ to be differentiable at the origin. For $f$ to be differentiable at the origin, we must have
-$$ \lim_{(x,y) \rightarrow 0} \frac{f(x,y) - f(0,0) - \frac{\partial f}{\partial x}(0,0) x - \frac{\partial f}{\partial y}(0,0) y}{||(x,y)||} = 0. $$
-Plugging in, we have
-$$ \frac{f(x,y) - f(0,0) - \frac{\partial f}{\partial x}(0,0) x - \frac{\partial f}{\partial y}(0,0) y}{||(x,y)||} = \frac{xy}{x^2 + y^2}. $$
-Denoting this function by $g(x,y)$, we have
-$$ \lim_{t \to 0} g(t,t) = \lim_{t \to 0} \frac{t^2}{t^2 + t^2} = \lim_{t \to 0} \frac{1}{2} = \frac{1}{2}, \,\,\, \lim_{t \to 0} g(0,t) = \lim_{t \to 0} 0 = 0 $$
-so $ \lim_{(x,y) \to (0,0)} g(x,y)$ doesn't exist and $f$ is not differentiable at the origin.<|endoftext|>
-TITLE: Closed surface integral of the surface's normal vector
-QUESTION [14 upvotes]: Is it true that the surface integral over any closed surface (we are in $\mathbb R^3$) of the normal vector $\hat n$ of that surface, say $\hat n$ is pointing outward, is zero? In other words, is it true that
-$$\iint_S \hat n \ dS = 0$$
-for any closed surface $S$?
-I recently ran across such an integral and while intuitively it would seem to have to be zero (it's quite obvious if the closed surface is a cube or a sphere), is it true for any closed surface and what would be the best way of proving this statement?
-
-REPLY [18 votes]: Yes, the integral is always $0$ for a closed surface. To see this, write the unit normal in $x,y,z$ components $\hat n = (n_x,n_y,n_z)$. Then we wish to show that the following surface integrals satisfy
-$$
-\iint_S n_x dS = \iint_S n_y dS = \iint_S n_z dS = 0.
-$$
-Let $V$ denote the solid enclosed by $S$. Denote $\hat i = (1,0,0)$.
We have via the
-divergence theorem
-$$
-\iint_S n_x dS = \iint_S (\hat i \cdot \hat n) dS = \iiint_V (\nabla \cdot \hat i) dV = 0,
-$$
-since $\hat i$ is constant. The other two integrals are computed analogously.<|endoftext|>
-TITLE: Must this rng be a ring?
-QUESTION [14 upvotes]: A rng is a ring without the assumption that the ring contains an identity. Consider a finite rng $\mathbf{R}$.
-I am investigating conditions that get close to forcing an identity but not quite. The closest condition I can think of is the following:
-
-If $a\in \mathbf{R}$ is non-zero then there is $b\in\mathbf{R}$ such that $ab\neq
- 0$
-
-I am finding myself unable to prove that $\mathbf{R}$ must/need-not have a multiplicative identity i.e. be a ring.
-Are there well-known results/examples that deal with this sort of condition?
-
-REPLY [9 votes]: In the commutative case (there are probably simple non-commutative counterexamples coming from matrix rings):
-We say that a commutative rng $R$ has property $\mathcal{P}$ if, for all nonzero $a\in R$, there is some $b\in R$ with $ab\neq 0$. The zero ring has property $\mathcal{P}$, and it has a unit. Let $R$ be a finite nonzero commutative rng such that all smaller commutative rngs with property $\mathcal{P}$ have a unit.
-Pick some $a\neq 0$. Then there is some $b$ with $ab\neq 0$, some $c$ with $abc\neq 0$, and so on. So there exist arbitrarily long nonzero products in $R$, which implies that there is some $x\in R$ that is not nilpotent.
-Since $R$ is finite, there are $d,N$ such that $x^n = x^{n+d}$ for $n>N$. Choosing $M$ so that $Md>N$, we have $(x^{Md})^2 = x^{2Md} = x^{Md}$, so there is some non-zero idempotent $e=x^{Md}$.
-Let $I = \{r\in R \mid er = 0\}$. $I$ has property $\mathcal{P}$: if $r\in I$ is nonzero, then there is some $s\in R$ with $rs\neq 0$. Then $r(s-es)=s(r-er)=sr\neq 0$, and $s-es\in I$.
-$I$ is strictly smaller than $R$ ($e\notin I$, because $e$ is nonzero), so, by choice of $R$, $I$ has a unit $u$. But then $u+e$ is a unit of $R$: for any $t\in R$, $(u+e)t = ut + et = u(t-et) + (ue)t + et = (t-et) + 0 + et = t$, using that $t-et\in I$ and that $ue=0$ (as $u\in I$).
-By induction, we conclude that all finite commutative rngs with property $\mathcal{P}$ have a unit.
-
-In retrospect, the idea here is not so difficult: As in Thomas Andrews' answer, we are trying to write $R\cong R_1\bigoplus R_2$ for subrngs $R_1,R_2$. Direct summands inherit property $\mathcal{P}$, and $R$ has a unity if and only if both $R_1$ and $R_2$ do, so this lets us quickly reduce to the case of indecomposable rngs.
-Furthermore, idempotents are one of the more natural ways to identify direct summands. Given an idempotent $e$, we can write $R\cong eR \bigoplus \operatorname{ann}(e)$, and $eR$ always has unity $e$. So the challenge is just to show that $R$ must contain a nonzero idempotent.<|endoftext|>
-TITLE: What is the difference between disjoint union and union?
-QUESTION [26 upvotes]: If $S = A \cup B$, then $S$ is the collection of all points in $A$ and $B$.
-
-What about $S = A \sqcup B$? I think disjoint union is the same as union, only $A, B$ are disjoint. So the notation is a bit misleading. Because it is not a new operation, but an operation where the pair $A,B$ satisfies $A \cap B = \varnothing$.
-So given $A \cap B = \varnothing$, $S = A \sqcup B = A \cup B$.
-Is my interpretation correct?
-
-REPLY [36 votes]: The notation $A\sqcup B$ (and phrase "disjoint union") has (at least) two different meanings. The first is the meaning you suggest: a union that happens to be disjoint.
That is, $A\sqcup B$ is identical to $A\cup B$, but you're only allowed to write $A\sqcup B$ if $A$ and $B$ are disjoint.
-The second meaning is that $A\sqcup B$ is a union of sets that look like $A$ and $B$ but have been forced to be disjoint. There are many ways of defining this precisely; for instance, you could define $A\sqcup B= A\times\{0\}\cup B\times \{1\}$. This construction can also be described as the coproduct of $A$ and $B$ in the category of sets.
-(This ambiguity is similar to the ambiguity between "internal" and "external" direct sums; see for instance my answer here.)<|endoftext|>
-TITLE: Prove $\int_{0}^{x}f+\int_{0}^{f(x)}f^{-1}=xf(x)\qquad\text{for all $x\geq0$}$
-QUESTION [13 upvotes]: Suppose that the function $f:[0,\infty)\rightarrow\mathbb{R}$ is continuous and strictly increasing and that $f:(0,\infty)\rightarrow\mathbb{R}$ is differentiable. Moreover, assume $f(0)=0$. Consider the formula $$\int_{0}^{x}f+\int_{0}^{f(x)}f^{-1}=xf(x)\qquad\text{for all $x\geq0$}$$ Prove this formula.
-
-
-Attempt: Consider $g(x)=\int_{0}^{x}f+\int_{0}^{f(x)}f^{-1}-xf(x)$, which is continuous on $[0,\infty)$. Then, differentiating $g(x)$, we have $g'(x)=f(x)+xf'(x)-f(x)-xf'(x)=0$. Thus for all $x$, $$g(x)-g(0)=0\Longrightarrow\int_{0}^{x}f+\int_{0}^{f(x)}f^{-1}-xf(x)=0$$
-
-I am not sure whether my equality is valid or not. If not, can someone give me a suggestion to modify the proof? Thanks.
-
-REPLY [5 votes]: I think your proof is fine. Here is an alternative.
-If you are willing to interpret an integral as area, then look at the plot of $y=f(x)$. It is anchored at $(0,0)$, and increasing. $xf(x)$ is the area of a rectangle with lower left corner at $(0,0)$.
-The two integrals are the area under the curve, and the area to the left of that curve (integrating with respect to $y$).
-
-(OK, the picture abuses "$x$" in the lower integral. Maybe the indexing variable should be "$t$" or something other than "$x$".)<|endoftext|>
-TITLE: Proving positive definiteness of matrix $a_{ij}=\frac{2x_ix_j}{x_i + x_j}$
-QUESTION [7 upvotes]: I'm trying to prove that the matrix with entries $\left\{\frac{2x_ix_j}{x_i + x_j}\right\}_{ij}$ is positive definite for all $n$, where $n$ is the number of rows/columns.
-I was able to prove it for the $2\times2$ case by showing the determinant is always positive. However, once I extend it to the $3\times3$ case I run into trouble. I found a question here
-whose chosen answer gave a condition for positive definiteness of the extended matrix, and after evaluating the condition and maximizing it via software, the inequality turned out to hold indeed, but I just can't show it.
-Furthermore, it would be way more complicated when I go to $4\times4$ and higher. I think I should somehow use induction here to show it for all $n$, but I think I'm missing something. Any help is appreciated.
-Edit: Actually the mistake is mine; it turns out there are indeed no squares in the denominator, so user141614's first answer is what I really needed. Thanks a lot! Should I just accept this answer, or should it be changed back to his first answer and then I accept it?
-
-REPLY [10 votes]: (Update: Some fixes have been added because I solved the problem with $a_{ij}=\frac{2x_ix_j}{x_i+x_j}$ instead of $a_{ij}=\frac{2x_ix_j}{x_i^2+x_j^2}$. Thanks to Paata Ivanisvili for his comment.)
-The trick is writing the expression $\frac{1}{x^2+y^2}$ as an integral.
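-To spell the trick out: for every $s>0$ one has the elementary identity
-$$\frac{1}{s}=\int_{0}^{\infty}e^{-st}\,\mathrm{d}t,$$
-which is applied below with $s=x_i^2+x_j^2$ (valid as long as $x_i$ and $x_j$ are not both zero).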
-For every nonzero vector $(u_1,\ldots,u_n)$ of reals,
-$$
-(u_1,\ldots,u_n) \begin{pmatrix}
-a_{11} & \dots & a_{1n} \\
-\vdots & \ddots & \vdots \\
-a_{n1} & \dots & a_{nn} \\
-\end{pmatrix}
-\begin{pmatrix} u_1\\ \vdots \\ u_n\end{pmatrix} =
-\sum_{i=1}^n \sum_{j=1}^n a_{ij} u_i u_j =
-$$
-$$
-=2 \sum_{i=1}^n \sum_{j=1}^n u_i u_j x_i x_j \frac1{x_i^2+x_j^2} =
-2\sum_{i=1}^n \sum_{j=1}^n u_i u_j x_i x_j \int_{t=0}^\infty
-\exp \Big(-(x_i^2+x_j^2)t\Big) \mathrm{d}t =
-$$
-$$
-=2\int_{t=0}^\infty
-\left(\sum_{i=1}^n u_i x_i \exp\big(-{x_i^2}t\big)\right)
-\left(\sum_{j=1}^n u_j x_j \exp\big(-{x_j^2}t\big)\right)
-\mathrm{d}t =
-$$
-$$
-=2\int_{t=0}^\infty
-\left(\sum_{i=1}^n u_i x_i \exp\big(-{x_i^2t}\big)\right)^2
-\mathrm{d}t \ge 0.
-$$
-If $|x_1|,\ldots,|x_n|$ are distinct and nonzero then the last integrand cannot be constant zero: for large $t$ the minimal $|x_i|$ with $u_i\ne0$ determines the order of magnitude. So the integral is strictly positive.
-If there are equal values among $|x_1|,\ldots,|x_n|$ then the integral may vanish. Accordingly, the corresponding rows of the matrix are equal or the negative of each other, so the matrix is only positive semidefinite.
-
-You can find many variants of this inequality. See
-problem A.477. of the KöMaL magazine.
-The entry $a_{ij}$ can be replaced by $\left(\frac{2x_ix_j}{x_i+x_j}\right)^c$ with an arbitrary fixed positive $c$; see
-problem A.493. of KöMaL.<|endoftext|>
-TITLE: Show that $f(x) = -\ln(x)$ is convex (WITHOUT using second derivative!)
-QUESTION [5 upvotes]: In the lecture notes for a course I'm taking, the definition of a convex function is given as follows:
-"a function $f$ is convex if, for any $x_1$ and $x_2$, and for any $\alpha \in [0,1]$, $\alpha f(x_1) + (1-\alpha)f(x_2) \ge f(\alpha x_1 + (1-\alpha ) x_2)$"
-That is, if you draw a line segment between two points on the curve for this function, the function evaluated at any $x$ between $x_1$ and $x_2$ will be lower than the line segment.
-Immediately after this definition is given, there is an exercise: "show that $f(x) = -\ln(x)$ is convex." Now, I happen to know that a function is convex if its second derivative is always greater than zero, so we can easily check the second derivative here to show that $-\ln(x)$ is convex (the second derivative is $\frac{1}{x^2}$, which is always greater than zero).
-However, because of the way the exercise comes immediately after the "line segment" definition of convexity, without any mention of the second derivative test, I get the impression that it is possible to prove that $-\ln(x)$ is convex without using the second derivative test. I have attacked this problem many different ways, and I haven't been able to show convexity using only the line segment definition. Is this possible?
-
-REPLY [2 votes]: This proof uses neither the AGM nor the weighted AGM inequality. It suffices to consider the case $x> y$ and $a=\alpha \in (0,1).$ Take a fixed $y>0$ and a fixed $a\in (0,1)$ and for $x>0$ let $$g(x)=-a\log x- (1-a)\log y +\log (a x+(1-a)y).$$ We have $$g'(x)= \frac{dg(x)}{dx} = -\frac{a}{x} + \frac{a}{a x+(1-a) y} = \frac{a(1-a)(x-y)}{ax+(1-a)y}\cdot \frac {1}{x}.$$ Observe that $g'(y)=0$ and that $x>y\implies g'(x)>0.$ Therefore $$x>y\implies g(x)>g(y)=0.$$<|endoftext|>
-TITLE: Give a combinatorial proof for a multiset identity
-QUESTION [7 upvotes]: I'm asked to give a combinatorial proof of the following: $\binom{\binom n2}{2}$ = 3$\binom{n}{4}$ + n$\binom{n-1}{2}$.
-I know $\binom{n}{k}$ = $\frac{n!}{k!(n-k)!}$ and $(\binom{n}{k}) = \binom{n+k-1}{k}$ but I'm at a loss as to what to do with the $\binom{\binom n2}{2}$.
-Can someone point me in the right direction as to how to proceed with writing a combinatorial proof for this identity?
-
-REPLY [6 votes]: Consider a graph $G=(V,E)$ such that $|V|=n$ and $|E|={n \choose 2}$ (i.e. the complete graph: any two vertices are connected). Then the LHS of your identity is the number of unordered pairs of edges. Any such pair is determined either by four vertices or by three. In the first case we first choose 4 vertices and then connect them in any of 3 ways, which gives us altogether $3{n \choose 4}$ combinations. In the second case we first select the vertex which belongs to both edges and then we select the other two vertices from the remaining $n-1$ vertices, which gives us $n{n-1 \choose 2}$ possibilities. Summing it up we get the RHS.<|endoftext|>
-TITLE: Upper bound on integral: $\int_1^\infty \frac{dx}{\sqrt{x^3-1}} < 4$
-QUESTION [5 upvotes]: I'm going through Nahin's book Inside Interesting Integrals, and I'm stuck at an early problem, Challenge Problem 1.2: to show that
-$$\int_1^\infty \frac{dx}{\sqrt{x^3-1}}$$
-exists because there is a finite upper bound on its value. In particular, show that the integral is less than 4.
-I've tried various substitutions and also comparisons with similar integrals, but the problem is that any other integral that I can easily compare the above integral to is just as hard to integrate, which doesn't help solve the problem.
-I also tried just looking at the graph and hoping for insight, but that didn't work either.
-So how does one place an upper bound on the integral?
-
-REPLY [10 votes]: $$\begin{eqnarray*}\color{red}{I}=\int_{1}^{+\infty}\frac{dx}{\sqrt{x^3-1}}=\int_{0}^{+\infty}\frac{dx}{\sqrt{x^3+3x^2+3x}}&=&\int_{0}^{+\infty}\frac{2\, dz}{\sqrt{z^4+3z^2+3}}\\&\color{red}{\leq}&\int_{0}^{+\infty}\frac{2\,dz}{\sqrt{z^4+3z^2+\frac{9}{4}}}=\color{red}{\pi\sqrt{\frac{2}{3}}.}\end{eqnarray*}$$
-A tighter bound follows from Cauchy-Schwarz:
-As a by-product we get: - -$$ \frac{\Gamma\left(\frac{1}{3}\right)\cdot \Gamma\left(\frac{1}{6}\right)}{2\sqrt{3\pi}} = \frac{\pi}{\text{AGM}(\frac{1}{2} \sqrt{3+2 \sqrt{3}},3^{1/4})}$$ - -that allows us to compute $\Gamma\left(\frac{1}{6}\right)$ through an AGM-mean: - -$$ \Gamma\left(\frac{1}{6}\right) = \color{red}{\frac{2^{\frac{14}{9}}\cdot 3^{\frac{1}{3}}\cdot \pi^{\frac{5}{6}} }{\text{AGM}\left(1+\sqrt{3},\sqrt{8}\right)^{\frac{2}{3}}}}.$$ - -The last identity was missing in the Wikipedia page about particular values of the $\Gamma$ function, so I took the liberty to add it and add this answer as a reference.<|endoftext|> -TITLE: How to partition $nk$ objects $\frac{1}{n}\binom{nk}{k}$ times, each time making subsets of size $k$, so that no combination of $k$ is repeated. -QUESTION [5 upvotes]: What is an algorithm to partition $nk$ objects a total of $\frac{1}{n}\binom{nk}{k}$ times, each time making subsets of size exactly $k$, so that no subset of size $k$ is ever repeated? -For example, if $n=k=3$, then we have $9$ objects and we want to partition into groups of size $3$. We must hit all $\binom{9}{3}=84$ combinations in $\frac{1}{3}\binom{9}{3}=28$ partitions. For this example, the naive approach of iterating through all permutations in order and greedily choosing the first ones that work (as shown below) does not suffice; it can hit only $72$ of the $84$ combinations. -$[(0, 1, 2), (3, 4, 5), (6, 7, 8)], -[(0, 1, 3), (2, 4, 6), (5, 7, 8)], -[(0, 1, 4), (2, 3, 7), (5, 6, 8)], -[(0, 1, 5), (2, 3, 6), (4, 7, 8)], -[(0, 1, 6), (2, 3, 8), (4, 5, 7)], -[(0, 1, 7), (2, 3, 5), (4, 6, 8)], -[(0, 1, 8), (2, 3, 4), (5, 6, 7)], -[(0, 2, 3), (1, 5, 8), (4, 6, 7)], -[(0, 2, 4), (1, 5, 6), (3, 7, 8)], -[(0, 2, 5), (1, 4, 7), (3, 6, 8)], -[(0, 2, 6), (1, 3, 7), (4, 5, 8)], -[(0, 2, 7), (1, 3, 8), (4, 5, 6)], -[(0, 2, 8), (1, 4, 5), (3, 6, 7)], -[(0, 3, 4), (1, 5, 7), (2, 6, 8)], -[(0, 3, 5), (1, 4, 6), (2, 7, 8)], -[(0, 3, 6), (1, 4, 8), (2, 5, 7)], -[(0, 3, 7), (1, 6, 8), (2, 4, 5)], -[(0, 4, 6), (1, 2, 7), (3, 5, 8)], -[(0, 4, 7), (1, 2, 8), (3, 5, 6)], -[(0, 4, 8), (1, 2, 6), (3, 5, 7)], -[(0, 5, 7), (1, 3, 6), (2, 4, 8)], -[(0, 5, 8), (1, 3, 4), (2, 6, 7)], -[(0, 6, 7), (1, 2, 5), (3, 4, 8)], -[(0, 6, 8), (1, 3, 5), (2, 4, 7)]$ -Basically, this is equivalent to Efficiently partition a set into all possible unique pair combinations, but instead of partitioning into pairs, I am partitioning into subsets of size $k$. - -REPLY [4 votes]: The existence of a solution is guaranteed by Baranyai's theorem. A proof of the theorem can be found, for instance, in Dieter Jungnickel - Graphs, Networks and Algorithms. The proof is constructive, so an algorithm can be derived from it. The book I mention contains references for explicit constructions in certain special cases. -Furthermore, a Python implementation is given on Stackoverflow. I cannot attest to its correctness.<|endoftext|> -TITLE: Understanding the first fundamental form of a surface, how the parametrization doesn't matter. -QUESTION [5 upvotes]: The following is an excerpt from Pressley's Elementary Differential Geometry on the definition of the first fundamental form. However, there are some parts of this concept that I'm unclear about. It says in the bottom of this excerpt that the coefficients E,F,G and the linear maps du, dv depend on the choice of surface patch for $S$. Here, the patch is $\sigma$. 
But then, I don't understand how the first fundamental form is an inherent concept of the surface; that is, the expression varies under different parametrizations of the same surface and the same point on it. So I can't understand why, in the final line, it says "but the first fundamental form itself depends only on $S$ and $\mathbf{p}$".
-So to make my question clear: there are theorems that show that the fundamental form determines many properties of surfaces, for instance that surfaces which have the same fundamental form are locally isometric, etc. But what I understood from the definition below is that the first fundamental form is an expression, which is determined by the coefficients $E, F, G$ and the linear maps $du, dv$. But these depend on the parametrization of the surface. So for the same surface and the same point on it, we can have two different expressions, say, $Edu^2+2Fdudv+Gdv^2$ and $\bar{E}d\tilde{u}^2+2\bar{F}d\tilde{u}d\tilde{v}+\bar{G}d\tilde{v}^2$. So the specific choice of parametrization seems to matter, but then how can we use this to talk about inherent properties of different surfaces? I would greatly appreciate any help to understand this.
-
-REPLY [4 votes]: If $T$ is a two-dimensional vector space, a Euclidean structure is given by a symmetric bilinear form, positive, non-degenerate. In a given coordinate system $(x,y)$ it is given by $ Ex^2+2Fxy+Gy^2$. $E,F,G$ depend on the coordinate system, but the Euclidean structure does not.
-This is what happens in your case, except that you have a two-parameter family of vector spaces $T_p$ (the tangent space at the point $p$ on $S$), and on each space a Euclidean structure: if $S\subset E_3$ is a surface in the Euclidean space, it is the restriction of the Euclidean structure to the tangent space at this point.
-If you choose a chart $(u,v)$ around your point, $\partial u, \partial v$ form at each point a basis of the tangent space, and in this basis (which depends on the point) the Euclidean structure at the point $p$ of coordinates $(u,v)$ is $g(\partial u,\partial u)=E(u,v),g(\partial u,\partial v)=F(u,v)$... The functions depend on the coordinate system; the Euclidean structure does not.<|endoftext|>
-TITLE: Obtain magnitude of square-rooted complex number
-QUESTION [7 upvotes]: I would like to obtain the magnitude of a complex number of this form:
-$$z = \frac{1}{\sqrt{\alpha + i \beta}}$$
-By a simple test on WolframAlpha it should be
-$$\left| z \right| = \frac{1}{\sqrt[4]{\alpha^2 + \beta^2}}$$
-The fact is that if I try to cancel the root in the denominator I still have a troublesome expression at the numerator:
-$$z = \frac{\sqrt{\alpha + i \beta}}{\alpha + i \beta}$$
-And this alternative way seems not useful either:
-$$z = \left( \alpha + i \beta \right)^{-\frac{1}{2}}$$
-If WolframAlpha gave the correct result, how to prove it?
-
-REPLY [4 votes]: The square root of a complex number is not well defined; however, if we consider a complex number $z$ such that $z^2=w$, then
-$$
-|w|=|z^2|=|z|^2
-$$
-so
-$$
-|z|=\sqrt{|w|}
-$$
-(and this is well defined, since we are talking of nonnegative real numbers).
-Your situation is exactly this one, with a slight complication. However, since
-$$
-|w^{-1}|=|w|^{-1}
-$$
-for any $w\ne0$, you know that, when $w=\alpha+i\beta$ and $z^2=w^{-1}$,
-$$
-|z|=\sqrt{|w^{-1}|}=\sqrt{|w|^{-1}}=\frac{1}{\sqrt{|w|}}
-$$
-because of the arguments above.
So
-$$
-|z|=\frac{1}{\sqrt{|w|}}=\frac{1}{\sqrt{\sqrt{\alpha^2+\beta^2}}}
-=\frac{1}{\sqrt[4]{\alpha^2+\beta^2}}
-$$<|endoftext|>
-TITLE: If $R$ is an integral domain and $S$ is a subring of $R$ then is $S$ an integral domain automatically?
-QUESTION [5 upvotes]: Here is the problem that I have:
-
-Let $R$ be an integral domain and $S$ be a subring of $R$ containing the one of $R$. Prove that $S$ is also an integral domain.
-
-Here is my answer: Suppose for a contradiction that $S$ is not an integral domain. Then there exist $x,y \in S$ s.t. $x,y \neq 0$ and $x \cdot y=0$; but since $S$ is a subset of $R$, $x,y \in R$, and so $R$ is not an integral domain, i.e. a contradiction. So $S$ is an integral domain. $~\square$
-My problem is: why does the question stipulate that $S$ must contain the one from $R$? Why is this necessary?
-
-REPLY [2 votes]: Rings (should) always have an identity; otherwise call them pseudo-rings or rngs or non-unital rings. Subrings are injective ring homomorphisms, and of course ring homomorphisms preserve (by definition) the whole structure, including the unit. This is no special convention - it follows from general principles of general algebra.<|endoftext|>
-TITLE: Show that the limit $\lim_{(x,y)\to(0,0)}\frac{y+\sin3x-3x}{y+x^5}$ exists/does not exist.
-QUESTION [6 upvotes]: How do I show that this limit exists/does not exist? My assumption is that it does not; however, I don't see how I can apply the two-path test to verify this.
-$$\lim_{(x,y)\to(0,0)}\dfrac{y+\sin3x-3x}{y+x^5} = L,$$
-$$\lim_{y\to0}\dfrac{y+\sin(0)-(0)}{y+(0)^5} = 1,$$
-But I don't see any second path that simplifies this well;
-$$\lim_{x\to0}\dfrac{(0)+\sin3x-3x}{(0)+x^5} = \:?$$
-
-REPLY [2 votes]: Hint
-Let $f(x,y)=\frac{y+\sin(3x)-3x}{y+x^5}$.
-What do you think about $$\lim_{t\to 0}f(0,t)\quad \text{and}\quad \lim_{t\to 0}f(t,0)\ \ ?$$
-Edited
-$$\sin(x)=x-\frac{x^3}{3!}+\frac{x^5}{5!}+o(x^5)$$
-therefore $$\frac{\sin(x)-x}{x^5}=\frac{-\frac{x^3}{3!}+\frac{x^5}{5!}+o(x^5)}{x^5}\underset{x\to 0}{\longrightarrow }-\infty .$$<|endoftext|>
-TITLE: Set, n-Tuple, Vector and Matrix — links and differences
-QUESTION [20 upvotes]: I know this question has been asked like 1000 times; however, all supplied answers were not really satisfying to me.
-My question concerns the similarities and differences between these mathematical objects.
-First, the Set. A set is defined to be a collection of distinct objects (not necessarily numbers). The arrangement of objects is not relevant. We use curly braces to denote sets; commas are used to separate the objects within the set.
-Second, the $n$-Tuple. An $n$-tuple is very similar to a set; however, the objects need not be distinct and the ordering of objects within the $n$-tuple is important. $n$-Tuples are usually denoted with parentheses and the objects within are separated with commas as in sets.
-Also, it is common to build the set of even numbers for instance like this: $\{2n\mid n\in \mathbb{N}\}$. However, I have never seen something like this with regard to n-tuples.
-Third, the Vector. A vector is an element of a vector space. However, if I calculate the Cartesian product of, for instance, $\mathbb{R}×\mathbb{R}$ then the objects of $\mathbb{R}^2$ are (column-)vectors which are denoted as tuples. Furthermore, I often see box brackets to denote such vectors and the elements are written in one column (as opposed to tuples or sets). Also, commas are not used to separate the objects (however, sometimes I see the elements of row vectors separated by commas).
-However, I have never seen such notation when, for instance, describing elements of $\mathbb{N}\times\mathbb{R}$.
-Finally, matrices. Matrices are arrays of numbers and clearly linked to vectors, as each column/row is a vector. However, I have never seen commas used in combination with matrices. Furthermore, the space of matrices is written as $A^{(m×n)}$. I know what the idea behind this notation is; however, as matrices are linked to vectors, I have problems really understanding it.
-Those concepts are obviously linked; however, at certain points there arise crucial differences between them (which also come, I believe, from notational differences between authors and fields of mathematics). I hope my problem is comprehensible and someone can help me and shed light on my issues.
-Thanks!
-
-REPLY [25 votes]: Preliminary Notions:
-I would like to start by mentioning the fact that the terms set, tuple, vector, and matrix are fairly high-level abstractions that have come to be linked to somewhat generic notions across multiple sub-fields of mathematics, physics, and computer science. As a result, the layman's definitions of these objects are widely available, while formal definitions remain difficult to ascertain. This is especially true if your aim is to have these formal definitions all reside within the same formal system. This brings us to our first problem: The formal definition of any given mathematical object really only holds water in the axiomatic or formal system within which it is defined. For example, Wikipedia says that:
-
-"In mathematics, an n-tuple is a sequence (or ordered list) of n elements, where n is a non-negative integer."
-
-However, in many systems, a sequence $a_n$ is precisely defined as a total function $a:\mathbb{N}\to\mathbb{R}$. This definition of sequence, combined with the definition of tuple in the quote above, implies that every tuple has a countably infinite number of entries. This, of course, is not a useful definition of tuple. The problem here is that we are mixing and matching the operational definitions of objects from different formal systems. I will now describe one possible way (in terms of sets) of formally relating all of the objects you mentioned, and try to answer all of your questions.
-Sets:
-Sets are objects that contain other objects. If an object $a$ is contained in a set $A$, it is said to be an element or a member of $A$, and is denoted $a\in A$. Two sets are equal iff they have the same members. In other words,
-$$(A=B)\Leftrightarrow [(\forall a\in A)(a\in B)\land (\forall b\in B)(b\in A)].$$ This is really all there is to it, for all intents and purposes. Sets do not, themselves, have any higher-level structure such as order, operations, or any other relations.
-Tuples:
-An n-tuple is a finite ordered list of elements. Two n-tuples are equal iff they have the same elements appearing in the same order. We denote them as $(a_1, a_2, ... , a_n)$. Given elements $a_1, a_2, ... , a_n, a_{n+1}$, n-tuples are inductively defined as follows:
-$(a_1)\equiv\{a_1\}$ is a 1-tuple.
-$(a_1, a_2)\equiv\{\{a_1\},\{a_1, a_2\}\}$ is a 2-tuple.
-If $(a_1, ... , a_n)$ is an n-tuple, then $((a_1, ... , a_n), a_{n+1})$ is an (n+1)-tuple.
-This construction satisfies the requirements for the properties of a tuple. It has been proven many times so I will not do so again here; a small computational sanity check is sketched below.
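-A minimal sketch of that check in Python (purely an illustration: frozenset is used as a stand-in for extensional sets, and pair is a hypothetical helper name):
-from itertools import product
-
-def pair(a, b):
-    # Kuratowski-style encoding used above: (a, b) := {{a}, {a, b}}
-    return frozenset([frozenset([a]), frozenset([a, b])])
-
-# Characteristic property of ordered pairs, checked on a small universe:
-# pair(a, b) == pair(c, d) holds exactly when a == c and b == d.
-for a, b, c, d in product(range(3), repeat=4):
-    assert (pair(a, b) == pair(c, d)) == (a == c and b == d)
-However, as a side note I would like to entertain your inquiry into the extension of set-builder notation to the description of tuples.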
-Describing Sets of Tuples:
-$A\equiv\{(x,\ y)\ |\ (x=y)\}$ is the set of all 2-tuples whose elements are equal. This is a trivial example of an equivalence relation.
-$A\equiv\{(n,\ n+1)\ |\ (n\in \mathbb{N})\}$ is the set of all 2-tuples of consecutive natural numbers. This is a special type of order relation known as a cover relation.
-$A\equiv\{(2x,\ 2y+1)\ |\ (x,y\in\mathbb{Z})\}$ is the set of all 2-tuples whose first element is an even integer and whose second element is an odd integer.
-Cartesian Products and Sets of Tuples:
-Let us define a set operation, called the Cartesian Product. Given sets $A$, $B$, $$A\times B\equiv\{(a,\ b)\ |\ (a\in A)\land(b\in B)\}.$$ This allows us to concisely describe sets of tuples from elements of other sets. The set of tuples from example 3 above can also be described as $E\times D$ where $E\equiv\{2x\ |\ (x\in\mathbb{Z})\}$ and $D\equiv\{2x+1\ |\ (x\in\mathbb{Z})\}$.
-It is important to notice that the Cartesian product is not commutative (i.e. $A\times B\neq B\times A$) nor is it associative (i.e. $(A\times B)\times C\neq A\times(B\times C)$). From now on we will assume the convention that Cartesian products are left associative. That is, if no parentheses are present, then $A\times B\times C=(A\times B)\times C$. Furthermore, multiple products of the same set can be abbreviated using exponent notation (i.e. $A\times A\times A\times A\times A = A^5$).
-Vectors:
-Oh, boy... Here we go! Okay, let's take a look at something you said about vectors:
-
-"A vector is an element of a vector space . . . the objects of $\mathbb{R}^2$ are (column-)vectors which are denoted as tuples . . . box brackets [are used to] denote such vectors and the elements are written in one column . . . commas are not used to separate the objects (however, sometimes [are] . . . ) . . . I have never seen such notation when for instance describing elements of $\mathbb{N}\times\mathbb{R}$."
-
-Our discussion of universes of discourse has just hit home in a real way, and by doing so, is causing some serious confusion (and reasonably so). You are right in saying that a vector is an element of a vector space, but may not be aware of the plethora of structural implications that sentence carries with it. But! Let's come back to that in a moment.
-
-"the objects of $\mathbb{R}^2$ are (column-)vectors which are denoted as tuples"
-
-Strictly speaking, this is not true. The elements of $\mathbb{R}^2$ are nothing more or less than 2-tuples with real-valued entries, and $\mathbb{R}$ is simply a set, whose members we choose to call "the real numbers". Period. This is clearly shown by seeing that $\mathbb{R}^2=\mathbb{R}\times\mathbb{R}=\{(x,y)\ |\ (x\in\mathbb{R})\land(y\in\mathbb{R})\}$. Less strictly speaking, often when people write $\mathbb{R}$ they don't mean simply the set of real numbers, but the set of real numbers together with the standard addition and multiplication, which constitute an infinite ring with unity and the cancellation property such that every nonzero element is a unit, which means that they constitute a field. Furthermore, it is assumed that the completeness axiom, the axioms of order, and the absolute value we are all familiar with are present as well. Often when people write $\mathbb{R}^2$ they don't simply mean the set of real-valued 2-tuples, but the 2-dimensional vector space over the field $\mathbb{R}$ with the Euclidean norm. What is a vector space over a field? A vector space over a field is a special case of a module over a ring.
A field is an integral domain with every nonzero element being a unit. An integral domain is a ring with unity and the cancellation property. A ring is an abelian group under addition together with an associative and distributive binary operation called multiplication. If you are not familiar with the notion of group, then we have delved too far down the rabbit hole.
-(Inhale)
-I suggest that you do not concern yourself with notational subtleties such as commas vs. no commas, square brackets vs. angle brackets vs. parentheses, etc. These are, more often than not, used to simply convey contextual information. And do not worry if you have not heard some of the jargon above; you probably have an intuitive understanding (especially considering your inquiry into the deeper subtleties of the relationships of the objects in question) of what is going on, and you really just need to know that the important things are the operations. The thing that makes something a vector is not the presence or absence of commas, or the use of angle brackets. Still, it is useful in many domains to distinguish vectors from "points", or standard tuples, because it makes it easier to keep track of which objects have more structure applied on their underlying set. The reason you have probably never seen elements of $\mathbb{N}\times\mathbb{R}$ represented using the same notation as that used for vectors is that $\mathbb{N}$ is not a field under standard operations; thus the direct product of that structure with the algebraic structure $\mathbb{R}$ is also not a field. If $\mathbb{N}\times\mathbb{R}$ isn't a field, then it has failed the very first thing required of it to have a vector space over it. Also, $\langle\mathbb{N},+\rangle$ isn't a group, so if vector addition is simply member-wise addition, then $\langle\mathbb{N}\times\mathbb{R},+\rangle$ is also not a group (another requirement). If it's not a vector space then its elements are not vectors, and will thus not be denoted as such.
-Vector Spaces:
-What makes something a vector? If an object is an element of a vector space, then it is a vector. Given any field $F$ and set $V$, if $+:(V\times V)\to V$ and $\cdot:(F\times V)\to V$ are operations (called vector addition and scalar multiplication) such that $\langle V,+\rangle$ is an abelian group and scalar multiplication distributes both over vector addition and scalar addition (the addition operation of the field $F$), and scalar multiplication associates with the multiplication of $F$, and lastly the unity of $F$ is an identity under scalar multiplication, then $V$ is a vector space over $F$, and any element of the set $V$ is called a vector. If an object is not an element of a vector space then it is not a vector. Period. Notice that this does not describe what vectors look like.
-Surprising Example: $\mathbb{R}$ is a vector space over itself.
-In general vectors are effectively represented by tuples, but making sense of them requires the context of the algebraic structure (vector space) within which vectors are defined. Thus a tuple representation, along with operations for how to manipulate/relate other tuples, is a satisfactory way to represent the algebraic structures known as vector spaces.
-Matrices:
-While matrices are often coupled with vector spaces, they are used for many purposes and are not defined in terms of vector spaces directly. Most treatments of "matrix theory" seem to simultaneously use set-theoretic results but do not define matrices in terms of sets nor use sets as the object of study.
As a result, this will be the object that will make it most difficult to intuitively see its relation to the others.
-Like vectors, however, the thing that makes something a matrix is the structure of which it is a part. A matrix contains elements that have both multiplication and addition operations defined on them. Then, the operations of matrix addition and matrix multiplication (as well as dot products, cross products, determinants, and various other things) are defined on the matrices in terms of the multiplication and addition operations of their entries. The usual definition of 'rectangular array' is not really helpful in the realm of sets, so I will provide an analogous definition.
-Given some set $A$ over which addition and multiplication are defined, an $m$ by $n$ matrix with entries in $A$ is an element of $M_{m\times n}(A)\equiv (A^n)^m=A^{m\times n}$. Notice that, besides the quirky transposition of powers, we are simply using the regular Cartesian product here. The set of $3$ by $2$ matrices with integer entries would look like this:
-$$M_{3\times 2}=(\mathbb{Z}^2)^3=(\mathbb{Z}^2)\times(\mathbb{Z}^2)\times(\mathbb{Z}^2)=(\mathbb{Z}\times\mathbb{Z})\times(\mathbb{Z}\times\mathbb{Z})\times(\mathbb{Z}\times\mathbb{Z}).$$
-Supposing we may use the same indexing scheme as with regular tuples (I see no reason why not; this is simply a nested tuple), then we may refer to elements of a matrix as such: given that $M$ is an $m$ by $n$ matrix, $M$ is an m-tuple whose entries are n-tuples. $M_1$ is the first row, $M_2$ is the second row, etc. Since $M_1$ is still a tuple, I can further index its elements: ${M_1}_1$ is the first element of the first row, ${M_1}_2$ is the second element of the first row, etc. Notice that there is a difficulty in concisely representing a single column, however. To get a column $k$ of an $m$ by $n$ matrix, I must define an m-tuple with the $k$th element of every row: $({M_1}_k, {M_2}_k, ... , {M_m}_k)$. I can then from here easily define all of the normal matrix operations in terms of tuples of tuples, and show that it is consistent with the matrix algebra you are used to. I could have just as easily chosen to represent the set of $m$ by $n$ matrices with entries in $A$ by the set $(A^m)^n$ and let $M_1$ be the first column and so forth, or even by $\mathbb{N}\times A^{mn}$, where an $m$ by $n$ matrix $M$ would be of the form $(n, ({M_1}_1, {M_1}_2, ... {M_1}_n, {M_2}_1, {M_2}_2, ... , {M_2}_n, ... , {M_m}_n))$. The natural number entry is required to distinguish an $m$ by $n$ from an $n$ by $m$ or any other matrix with the same total number of entries. In the end, it is all in how we define our operations that determines "what" something is. For example, if $F$ is a field, then the set $F^{m\times n}$ of $m$ by $n$ matrices with matrix addition is an abelian group, and scalar multiplication meets all the requirements for vector spaces; thus $F^{m\times n}$ with matrix addition and scalar multiplication is a vector space over the field $F$, even though people would not normally think of sets of matrices that are not "column" or "row" vectors as a vector space. These intricacies are often beyond the scope of the usual applications of matrices, however, and the fact that they are not defined within most of the common foundational theories is usually left unscrutinized.
-Closing Remarks:
-I hope this sheds some light on the subject. I think the takeaway is that each of these objects of study is linked to those generic notions we are all so familiar with.
If you are in an applied field, then that is satisfactory in most cases. If you are in a field that places high importance on precise and rigorous argument within an axiomatic system, then well-founded formal definitions are of the utmost importance and must be constructed, in terms of axioms or derived results, for each of the mathematical structures you intend to use or study.<|endoftext|>
-TITLE: Prove that the exponential $\exp z$ is not zero for any $z \in \Bbb C$
-QUESTION [6 upvotes]: How can the following be proved?
-
-$$
-\exp(z)\neq0, z\in\mathbb{C}
-$$
-
-I tried it a few times, but I failed. Probably it is extremely simple.
-If I draw the unit circle and then a complex number $\exp(a+ib)=\exp(a)\exp(ib)$, then it is obvious that this expression is only $0$ if $\exp(a)$ equals zero, but $\exp(a),a\in\mathbb{R}$ is never zero. This seems not to be very robust.
-Thank you
-
-REPLY [2 votes]: $\exp(z)\neq 0$ for all $z\in \Bbb{C}$.
-Proof:
-$f(z) =\exp(z)$ is an entire function, i.e. holomorphic on $\Bbb{C}$.
-To show: $\scr{Z_f}=\{z:f(z)=0\}=\emptyset$.
-We argue by contradiction. Suppose $\scr{Z_f}\neq\emptyset$.
-Let $z_0\in \scr{Z_f}$.
-Then $f^{(n)}(z_0)=\exp(z_0) =0$ for all $n\in \Bbb{N}$, since $f'=f$.
-Since $f\in H(\Bbb{C})$, $f$ has a Taylor series about the point $z_0$ that converges for all $z\in \Bbb{C}$:
-$$f(z) =\sum_{n=0}^{\infty}\frac{f^{(n)}(z_0)}{n!}(z-z_0)^n\qquad \forall z\in \Bbb{C}.$$
-Hence $f(z) =0$ on $\Bbb{C}$.
-But $f(0) =\exp(0) =1$, a contradiction.
-Hence $\scr{Z_f}=\{z:f(z)=0\}=\emptyset$.
-In other words, $\exp(z) \neq 0$ for any $z\in \Bbb{C}$.<|endoftext|>
-TITLE: Algebraic closure with no nontrivial automorphism
-QUESTION [5 upvotes]: In Milne's notes on Galois theory, Chapter 7, p.91 he remarked that it is consistent without the axiom of choice that there exists an algebraic closure $L$ of $\mathbb{Q}$ with no nontrivial automorphisms.[1]
-With Zorn's lemma, we can always extend an automorphism of $\mathbb{Q}(i)$ to $L$, and thus always get a nontrivial automorphism of $L$. Without the axiom of choice, we cannot do what we did. But it is hard for me to see why this would happen; could anyone give a brief explanation?
-
-
-[1] Wilfrid Hodges, Läuchli's algebraic closure of $\mathbb Q$, Math. Proc. Cambridge Philos. Soc. 79 (1976), no. 2, 289--297.
-
-REPLY [6 votes]: There is no simple explanation. The only thing you can give as an intuition is that Zorn's lemma fails. And its failure is witnessed already in these parts.
-The construction is written in a fairly outdated language, but here's what I can speculate from what's in the paper and the things I know about these constructions.
-We add a generic copy of the algebraic closure of $\Bbb Q$ to our universe, and then we restrict to a subuniverse of this extension in which every element is essentially definable from finitely many elements of the field, together with the field structure itself.
-So we have a field which is still algebraically closed, as being algebraically closed is witnessed "locally" by finite collections of elements; and it is still of characteristic $0$, so it has a copy of the rationals. We can also show that no proper subfield is algebraically closed in a fairly straightforward way.
-Therefore our generic copy is an algebraic closure of $\Bbb Q$. One can show, however, that it is not countable (an enumeration cannot depend only on finitely many elements). So now $\Bbb Q$ has two non-isomorphic closures. Which is great.
<|endoftext|>
-TITLE: Algebraic closure with no nontrivial automorphism
-QUESTION [5 upvotes]: In Milne's notes on Galois theory, Chapter 7, p.91, he remarked that it is consistent without the axiom of choice that there exists an algebraic closure $L$ of $\mathbb{Q}$ with no nontrivial automorphisms.$^1$
-With Zorn's lemma, we can always extend an automorphism of $\mathbb{Q}(i)$ to $L$, and thus always get a nontrivial automorphism of $L$. Without the axiom of choice, we cannot do what we did. But it is hard for me to see why this would happen; could anyone give a brief explanation?
-
-
-Wilfrid Hodges, Läuchli’s algebraic closure of $\mathbb Q$, Math. Proc. Cambridge Philos. Soc. 79 (1976), no. 2, 289--297.
-
-REPLY [6 votes]: There is no simple explanation. The only thing you can give as an intuition is that Zorn's lemma fails, and its failure is witnessed already in these parts.
-The construction is written in a fairly outdated language, but here's what I can speculate from what's in the paper and the things I know about these constructions.
-We add a generic copy of the algebraic closure of $\Bbb Q$ to our universe, and then we restrict to a subuniverse of this extension in which every element is essentially defined from finitely many elements of the field, as is the field structure itself.
-So we have a field which is still algebraically closed, as being algebraically closed is witnessed "locally" by finite collections of elements; and it is still of characteristic $0$, so it has a copy of the rationals. We can also show that no subfield is algebraically closed in a fairly straightforward way.
-Therefore our generic copy is an algebraic closure of $\Bbb Q$. One can show, however, that it is not countable (an enumeration cannot depend only on finitely many elements). So now $\Bbb Q$ has two non-isomorphic algebraic closures, which is great. Finally, an automorphism cannot be defined by a finite subset of this field, since that would mean it has to be an automorphism of a subfield, and we would have to make choices outside that subfield.
-
-In a nutshell, we add a copy of $\overline{\Bbb Q}$ and then forget all its subsets which are not definable (in a set-theoretic way, but also a model-theoretic way) from the roots of finitely many polynomials. It is not hard to show that if an automorphism of this field is defined from a polynomial, it has to be more or less an automorphism of that polynomial's splitting field. And we cannot "decide how to extend it", by the relative homogeneity of the [original] algebraic closure.<|endoftext|>
-TITLE: Are there any two squares with a distance of a power of 2 between them?
-QUESTION [6 upvotes]: I was working with Pythagorean triples and found a formula to generate Pythagorean triples. I am pretty sure every Pythagorean triple falls under my formula. This formula is: If $a$, $x$, and $n$ are all integers greater than zero, and $x = (2a + 1)n$, then the set $\{x,\frac{x^2 - n^2}{2n}, \frac{x^2 + n^2}{2n}\}$ is a Pythagorean triple. After playing around with these numbers for a while I figured out that if I am correct, no two squares are a distance of a power of $2$ apart. Could someone confirm this to be true?
-
-REPLY [3 votes]: More generally, if $c=2^{n-2}+1$ and $a=2^{n-2}-1$ then $c^2-a^2=(c-a)(c+a)=2\cdot 2^{n-1}=2^{n}$.
-When $n$ is even, $a=u^2-1, b=2u, c=u^2+1$ is a Pythagorean triple, for $u=2^{(n-2)/2}$, and $c^2-a^2=b^2=2^n$.
-It is true for the legs of a Pythagorean triple, other than $2^0=1$, of course, under the condition that the legs are relatively prime.
-Alternatively, it is definitely true that the difference cannot be an odd power of $2$.
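-A quick brute-force check (my own addition) of which powers of $2$ occur as differences of squares; it finds witnesses for every $2^n$ with $n\ge 2$ and none for $2^1$:
-# For each power 2^n, search for pairs with c^2 - a^2 = 2^n.
-for n in range(1, 11):
-    target = 2 ** n
-    hits = [(c, a) for c in range(1, target + 2)
-            for a in range(c) if c * c - a * a == target]
-    print(n, hits[:3])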
<|endoftext|>
-TITLE: Application of Doob's inequality
-QUESTION [7 upvotes]: Suppose that $X_n$ is a martingale with $X_0 = 0$ and $EX^2_n < \infty$. Show that
- $$P\left(\max_{1\leq m \leq n} {X_m} \geq \lambda\right) \leq \frac{EX^2_n}{EX^2_n+\lambda^2}$$
- by using the fact that $(X_n+c)^2$ is a submartingale.
-
-So far, I have tried to manipulate $P(\max_{1\leq m \leq n} X_m \geq \lambda)$ into the form of $$P\left(\max_{1\leq m \leq n} (X_m + c)^2 \geq \lambda^2 + EX^2_n\right)$$ by finding an appropriate $c$ that yields the inequality, but it does not seem to work. How should I proceed?
-The problem is from Durrett, Exercise 5.4.5.
-
-REPLY [9 votes]: Observe that
-$$\max_{1\leq m \leq n} X_m \geq \lambda \Leftrightarrow \max_{1\leq m \leq n} X_m + c \geq \lambda + c \implies \max_{1\leq m \leq n} \lvert X_m + c \rvert \geq \lambda + c$$
-$$\max_{1\leq m \leq n} \lvert X_m + c \rvert \geq \lambda + c \Leftrightarrow \max_{1\leq m \leq n} \lvert X_m + c \rvert^2 \geq (\lambda + c)^2$$
-The second relation is true if $\lambda + c \geq 0$. For now, suppose that this is the case.
-Then you have $$P\{\max_{1\leq m \leq n} X_m \geq \lambda\} \leq P\{\max_{1\leq m \leq n} \lvert X_m + c \rvert^2 \geq (\lambda + c)^2\}$$
-Apply Doob's inequality to the RHS to obtain
-$$P\{\max_{1\leq m \leq n} \lvert X_m + c \rvert^2 \geq (\lambda + c)^2\} \leq \frac{1}{(\lambda + c)^2}E[\lvert X_n + c \rvert^2] = \frac{1}{(\lambda + c)^2}(E[X_n^2] + c^2)$$
-On the last equality I made use of the fact that $E[X_n] = E[X_0] = 0$ (martingale property). Now we want to minimize this last expression by choosing the optimal $c$. Differentiating with respect to $c$ and equating to $0$ gives the optimal $c$ as $\frac{E[X_n^2]}{\lambda}$. Substituting this $c$ into $\frac{1}{(\lambda + c)^2}(E[X_n^2] + c^2)$ finishes the exercise.
-Note that the optimal $c$ satisfies the assumption $\lambda + c \geq 0$ that we made, provided $\lambda \geq 0$; that should be stated somewhere in the question. However, I think the $\max$ expressions should run from $0$ instead of $1$, in which case $\lambda \geq 0$ becomes a natural starting point.<|endoftext|>
-TITLE: Relationship between factorial and derivatives
-QUESTION [5 upvotes]: I was wondering if there is any relationship between factorials and derivatives, because I noticed that if we have $x^n$ and take the $n$-th derivative of this function, it will be equal to the factorial of $n$: $$\frac{d^n}{dx^n}(x^n)=n!$$
-
-REPLY [3 votes]: Look at the general expression $$y_i = \frac{{\rm d}^i x^n}{{\rm d}x^i}$$
-This is the $i$-th derivative of $x^n$ for $i=1,\ldots, n$. It can be directly evaluated as
-$$ \boxed{ \frac{{\rm d}^i x^n}{{\rm d}x^i} = \frac{n!}{(n-i)!} x^{n-i} } $$
-So the $i$-th derivative of an $n$-th order polynomial contains terms with the factors $n!$, $(n-1)!$, $(n-2)!$, etc.<|endoftext|>
-TITLE: If $P(A \ \cup \ B) = P(A) + P(B)$, is it the case that $A$ and $B$ are disjoint?
-QUESTION [7 upvotes]: I know that if $A$ and $B$ are disjoint events, then $P(A \cup \ B) = P(A) + P(B)$. However, is the converse true as well? Thanks.
-
-REPLY [7 votes]: No. For example, suppose you have a uniform distribution on $[0,1]$. Let $A=[0,1/2]$ and $B=[1/2,1]$. Then $P(A\cap B)=0$, so $P(A\cup B)=P(A)+P(B)$, but $A\cap B=\{1/2\}\not=\emptyset$, so $A$ and $B$ are not disjoint.<|endoftext|>
-TITLE: If any differential equation is given by $f''(x)+f'(x)+f^2(x) = x^2\;,$ Then $f(x)=$
-QUESTION [18 upvotes]: If a differential equation is given by $f''(x)+f'(x)+f^2(x) = x^2\;,$ then $f(x)=$?
-
-$\bf{My\; Try:}$
-We can write the above differential equation as $$e^xf''(x)+e^xf'(x)+e^x\cdot (f(x))^2 = e^x\cdot x^2$$
-So $$\frac{d}{dx}\left[e^x\cdot f'(x)\right] = e^x\cdot x^2-e^x(f(x))^2$$
-How can I proceed from here? Help would be appreciated.
-Thanks.
-
-REPLY [2 votes]: Using the power series trick we get that:
-\begin{align}
- &f(x) = \sum_{k=0}^\infty a_kx^k\\
- &f'(x) = \sum_{k=0}^\infty a_{k+1}(k+1) x^k\\
- &f''(x) = \sum_{k=0}^\infty a_{k+2}(k+1)(k+2)x^k\\
- &f(x)^2 = \sum_{k=0}^\infty \left(\sum_{i=0}^k a_ia_{k-i} \right)x^k
-\end{align}
-Inserting this into the equation we have that:
-$$
- (k+1)(k+2)a_{k+2}+(k+1)a_{k+1}+\sum_{i=0}^k a_ia_{k-i} = \delta_{k,2}, \quad \mbox{for } k\geq 0.
-$$
-Noting that $a_0 = f(0)$, we may successively solve the recurrence to find a solution. As a side remark, we know (since the coefficients in the ODE are entire) that the radius of convergence is infinite (Fuchs's theorem, if I recall correctly).
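-To see the recurrence in action, here is a small Python sketch (my own; the initial data $a_0=a_1=0$ is an arbitrary choice for illustration, since the ODE leaves $f(0)$ and $f'(0)$ free):
-from fractions import Fraction
-
-def coeffs(a0, a1, K):
-    # (k+1)(k+2) a_{k+2} + (k+1) a_{k+1} + sum_i a_i a_{k-i} = [k == 2]
-    a = [Fraction(a0), Fraction(a1)]
-    for k in range(K):
-        conv = sum(a[i] * a[k - i] for i in range(k + 1))
-        rhs = 1 if k == 2 else 0
-        a.append((rhs - (k + 1) * a[k + 1] - conv) / ((k + 1) * (k + 2)))
-    return a
-
-print(coeffs(0, 0, 8))  # a_4 = 1/12, a_5 = -1/60, ... with a_0 = a_1 = 0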
<|endoftext|>
-TITLE: Unusual proof that $M$ is Noetherian if and only if $N$ and $M/N$ are Noetherian
-QUESTION [5 upvotes]: Let $M$ be an $R$-module for a ring $R$. Let $N$ be a submodule of $M$. I read that one can prove that $M$ is Noetherian if and only if $N$ and $M/N$ are Noetherian using these two results:
-1) If $M$ is finitely generated and $A$ is a submodule, then $M/A$ is finitely generated.
-2) If, for a submodule $A$, $A$ and $M/A$ are finitely generated, then $M$ is finitely generated.
-I see how to prove the original result based on ascending chains but not how to do it using these results... Could someone enlighten me here? It would be very helpful, as I'm getting confused.
-
-REPLY [6 votes]: Let's prove that, if $N$ and $M/N$ are noetherian, then $M$ is noetherian. We need to see that every submodule of $M$ is finitely generated.
-Consider a submodule $L$ of $M$. Then $L\cap N$ is a submodule of $N$, hence finitely generated. Also $(L+N)/N$, as a submodule of $M/N$, is finitely generated. Consider the exact sequence
-$$
-0\to L\cap N\to L\to L/(L\cap N)\to 0
-$$
-Since
-$$
-L/(L\cap N)\cong (L+N)/N
-$$
-is finitely generated, $L$ is finitely generated as well, because of fact 2.
-For the converse, assume $M$ is noetherian and that $N$ is a submodule of $M$. Then $N$ is noetherian, because every submodule of $N$ is also a submodule of $M$. For $M/N$, recall that a submodule of $M/N$ is of the form $L/N$, where $L$ is a submodule of $M$ with $N\subseteq L$. Since $M$ is noetherian, $L$ is finitely generated, so $L/N$ is finitely generated because of fact 1.<|endoftext|>
-TITLE: How do you find the probability of A winning if the probability of getting a favourable outcome in the $r^{th}$ turn is a function of $r$?
-QUESTION [5 upvotes]: Problem:
-Two players A and B are playing snakes and ladders. A is at 99 and needs a 1 on a roll of the die to win. However, he is always allowed to re-throw the die if a 6 appears.
-What is the probability that A will win eventually before B gets a chance, if the probability of getting a 1 in the $r^{th}$ throw is $\frac{1}{3^r}$ and that of getting a 6 in the $r^{th}$ throw is $\frac{2r-1}{4r}$?
-My attempt:
-We know that A can win before B gets a chance only if he rolls $\{1\}$, $\{6,1\}$, $\{6,6,1\}$ and so on.
-If the winning 1 comes on the $r^{th}$ throw, the probability is: $$\frac{1}{3^r}\cdot\frac{1}{4}\cdot\frac{3}{8}\cdots\frac{2r-3}{4(r-1)}$$
-$$=\frac{1}{3^r}\cdot\frac{1}{4^{r-1}}\left(\frac{1\cdot3\cdot5\cdot7\cdots(2r-3)}{1\cdot2\cdot3\cdot4\cdots(r-1)}\right)$$
-$$=\frac{1}{3^r}\cdot\frac{1}{4^{r-1}}\left(\frac{(2r-2)!}{(r-1)!\cdot(r-1)!\cdot2^{r-1}}\right)$$
-$$=\frac{1}{3}\cdot\frac{1}{24^{r-1}}\left(\frac{(2r-2)!}{(r-1)!\cdot(r-1)!}\right)$$
-Therefore, we have the probability as
-$$\sum_{r=1}^{\infty} \frac{1}{3}\cdot\binom{2r-2}{r-1} \frac{1}{24^{r-1}}$$
-Setting $n=r-1$:
-$$\frac{1}{3}\sum_{n=0}^{\infty} \binom{2n}{n} \frac{1}{24^{n}}$$
-
-I got stuck at the last step because I do not know how to evaluate that summation. Any help with the summation/providing an alternate way to solve this question will be appreciated.
-
-REPLY [3 votes]: The generating function for the central binomial coefficients is
- \begin{align*}
-\sum_{n=0}^{\infty}\binom{2n}{n}z^n=\frac{1}{\sqrt{1-4z}}\qquad |z|<\frac{1}{4}
-\end{align*}
-This is an application of the binomial series
- \begin{align*}
-(1+z)^{\alpha}=\sum_{n=0}^{\infty}\binom{\alpha}{n}z^n\qquad |z|<1, \alpha\in\mathbb{C}
-\end{align*}
- and the relation
- \begin{align*}
-\binom{-\frac{1}{2}}{n}=\frac{(-1)^n}{4^n}\binom{2n}{n}
-\end{align*}
-
-Can you proceed?
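-For what it's worth, a quick numerical check (my addition, not the answerer's) confirms the closed form: the sum evaluates to $\frac{1}{3}\left(1-\frac{4}{24}\right)^{-1/2}=\frac{1}{3}\sqrt{\frac{6}{5}}\approx 0.36515$.
-from math import comb, sqrt
-
-partial = sum(comb(2 * n, n) / 24.0 ** n for n in range(60))
-closed_form = 1 / sqrt(1 - 4 / 24)   # generating function at z = 1/24
-print(partial / 3, closed_form / 3)  # both ~0.3651...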
<|endoftext|>
-TITLE: Continuous path inside the Mandelbrot set connecting i to zero?
-QUESTION [8 upvotes]: This relates to another challenge, Question about drawing Mandelbrot filaments.
-Is it possible to compute a formula for a continuous path inside the Mandelbrot Set connecting $c=i$ to $c=0$? Obviously, the part inside the cardioid or lobes is easy, but finding any in-Set curve that includes $i$ has eluded me. I know the Mandelbrot boundary is infinitely long and detailed, but if a filament has any finite thickness there should be a finite-length path through it that doesn't follow the boundary.
-I tried to start by finding a direction one could travel from $i$ for a very short distance and remain in-Set, but even that eluded me.
-To show the topology we're up against, here is the $|z_{25}|=2$ contour in the vicinity of $i$.
-
-So, can one derive a formula for the path I want?
-As an aside, it's worth noting that $i$ is a Misiurewicz point, meaning its orbit is not immediately periodic but becomes so after a finite number of steps, i.e., $z_3(i)=z_1(i)$. This property places $i$ exactly on the boundary of the Mandelbrot set.
-$$i\rightarrow-1+i\rightarrow-i\rightarrow-1+i\rightarrow-i\rightarrow-1+i\rightarrow-i\ldots$$
-
-REPLY [4 votes]: Here's a zoom into the Mandelbrot set centered on $i$, with a factor of 10 magnification per frame. As you can extrapolate from the first frame, your path will have to go through an infinite number of bulbs before getting to any filaments. Then there will be an infinite number of tiny Mandelbrot islands to pass through, as they are dense in the filaments (they get very small very quickly). Finally, you'll have to pass through an infinite number of 3-way Misiurewicz branch points, around which the Mandelbrot set is asymptotically self-similar.
-All of this means I strongly doubt there is a simple formula for your desired path, if there is one at all. And if by "inside the Mandelbrot set" you mean "within the interior of the Mandelbrot set", then there is no such path possible, because you'll have to touch the boundary at the 3-way branching Misiurewicz points.
-
-PS: this image is rendered using exterior distance estimation, which I explained in my answer to your other question.
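-To get a feel for how thin the set is near $i$, here is a rough escape-time probe (my own sketch; the 500-iteration cutoff is arbitrary, so a point reported as "not escaped" is only "not escaped yet", not proven to be in the set). It samples the straight segment from $i$ toward $0$:
-def escapes(c, max_iter=500):
-    # Standard escape-time test: iterate z -> z^2 + c starting from z = 0.
-    z = 0j
-    for _ in range(max_iter):
-        z = z * z + c
-        if abs(z) > 2:
-            return True
-    return False
-
-# Points just below i on the segment toward 0:
-for t in [0.001, 0.01, 0.05, 0.1, 0.3, 0.5]:
-    c = 1j * (1 - t)
-    print(t, "escapes" if escapes(c) else "not escaped after 500 iterations")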
<|endoftext|>
-TITLE: Question about weak convergence of random variables
-QUESTION [5 upvotes]: When you start to learn probability theory, for instance the central limit theorem, you learn about convergence in distribution $X_n\to X$ (where, say, both $X_n$ and $X$ are $\mathbb R$-valued random variables).
-At the beginning, the first course starts with the pointwise convergence of the distribution functions $F_n(x)\to F(x)$, where $F_n(x)=\mathbb P(X_n\leq x)$ and $F(x)=\mathbb P(X\leq x)$, but it is not clear to me exactly what this implies.
-Then you learn that this equivalently means the law $\mu_n$ of $X_n$ converges to the law $\mu$ of $X$ in the weak topology. By weak topology, I mean the weak-* topology when you embed the space of (Borel) probability measures on $\mathbb R$ into the Banach space of continuous linear forms over the space of continuous and bounded functions. Sometimes you look at the dual space of continuous and compactly supported functions instead, and then you speak of vague convergence. Now, I take as the definition of convergence in law that $\int f d\mu_n\to\int fd\mu$ for the class of continuous and bounded functions.
-But what does it imply for larger spaces of test functions $f$?
-When $f$ is continuous but not bounded, it is not enough: you need uniform integrability, fine.
-But the other way around: what if $f$ is bounded but not continuous (but still measurable)? (Even if you assume $X_n$ and $X$ take values in the same compact subset of $\mathbb R$?) In other words, if $f$ is bounded, what are the minimal regularity assumptions we can make on $f$ so that we still have $\int fd\mu_n\to\int fd\mu$?
-Moreover: what kind of minimal set of test functions $f$ is such that $\int fd\mu_n\to\int fd\mu$ implies convergence in distribution? For instance, what if $f$ is in the space of Lipschitz functions?
-
-REPLY [4 votes]: For your first question: you cannot use any non-continuous $f$.
-
-Suppose $f : \mathbb{R} \to \mathbb{R}$ is a measurable function which is not continuous. There exist measures $\mu_n, \mu$ such that $\mu_n \to \mu$ weakly but $\int f\,d\mu_n \not\to \int f\,d\mu$.
-
-Proof. If $f$ is not continuous then there exist points $x_n, x \in \mathbb{R}$ with $x_n \to x$ but $f(x_n) \not\to f(x)$. Let $\mu_n = \delta_{x_n}, \mu = \delta_x$ be point masses at $x_n, x$ respectively. (That is, take $X_n = x_n, X = x$ to be constant random variables.)
-
-For your second question: we want to know conditions on a set $\mathcal{C}$ of bounded continuous functions on $\mathbb{R}$ to guarantee the following statement: if $\int f\,d\mu_n \to \int f\,d\mu$ for all $f \in \mathcal{C}$, then $\mu_n \to \mu$ weakly.
-A helpful sufficient condition is: the uniform closure of the linear span of $\mathcal{C}$ contains the set $C_c(\mathbb{R})$ of continuous compactly supported functions.
-First note that if $\int f\,d\mu_n \to \int f\,d\mu$ for all $f \in \mathcal{C}$, then by linearity of the integral, the same holds for $f$ which are (finite) linear combinations of functions from $\mathcal{C}$. Also, if $f_m \to f$ uniformly and the above condition holds for all $f_m$, then a triangle inequality argument shows it also holds for $f$. So the condition holds on the closure of the span of $\mathcal{C}$.
-Now if the condition holds on $C_c(\mathbb{R})$, then we get $\mu_n \to \mu$ weakly. This is a standard, but not quite trivial, fact. The proof goes something like this.
-Find a compact set $K$ large enough that $\mu(K) \ge 1-\epsilon$. Choose $f \in C_c(\mathbb{R})$ that is 1 on $K$ and bounded by 1. From the fact that $\int f\,d\mu_n \to \int f\,d\mu$, deduce that the sequence $\{\mu_n\}$ is tight. By Prohorov's theorem, $\{\mu_n\}$ is weakly precompact. Using the "double subsequence" trick, it now suffices to show that the only possible subsequential weak limit is $\mu$.
-Suppose $\nu$ is a weak limit of some subsequence $\mu_{n_k}$. For every $f \in C_c(\mathbb{R})$, we have $$\int f\,d\nu = \lim_{k \to \infty} \int f \,d\mu_{n_k} = \int f\,d\mu.$$ Now for any open set $U$, we can find a sequence $f_m \in C_c(\mathbb{R})$ with $f_m \to 1_U$ pointwise and boundedly. By dominated convergence, $$\nu(U) = \lim_{m \to \infty} \int f_m\,d\nu = \lim_{m \to \infty} \int f_m\,d\mu = \mu(U).$$ Now use a monotone class argument to show $\nu(B) = \mu(B)$ for all Borel sets. Hence $\nu = \mu$.
-Some examples of classes $\mathcal{C}$ satisfying this condition:
-
-Compactly supported, piecewise linear functions
-$C^\infty$ compactly supported functions
-Lipschitz functions (by either of the above)
-
-
-Another class $\mathcal{C}$ of functions that works, for different reasons, is $\{e^{itx} : t \in \mathbb{R}\}$. This is Lévy's continuity theorem. I don't think this $\mathcal{C}$ satisfies the previous sufficient condition.
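-A tiny numerical illustration (my addition) of the first point: take $\mu_n=\delta_{1/n}$ and $\mu=\delta_0$ (so $\mu_n\to\mu$ weakly) with the bounded, measurable, discontinuous $f=1_{(0,\infty)}$; integrating against a point mass just evaluates the function, and the integrals fail to converge:
-f = lambda x: 1.0 if x > 0 else 0.0   # discontinuous at 0
-
-for n in [1, 10, 100, 1000]:
-    print(n, f(1.0 / n))   # integral of f against delta_{1/n}: always 1.0
-print("limit:", f(0.0))    # integral of f against delta_0: 0.0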
-A new question was posed in comments: suppose the probability measures $\mu_n, \mu$ are all absolutely continuous with respect to Lebesgue measure $m$, with densities $h_n, h$. Suppose moreover that they are all supported inside $[0,1]$. If $\mu_n \to \mu$ weakly, does it follow that $\int f\,d\mu_n \to \int f\,d\mu$ for all bounded measurable $f$?
-Answer: No, it does not.
-Let $E \subset [0,1]$ be a fat Cantor set, or your other favorite set which is closed, nowhere dense, and has positive Lebesgue measure. Let $h = \frac{1}{m(E)} 1_E$. For $1 \le k \le 2^n$, let $I_{n,k}$ denote the open interval $((k-1)2^{-n}, k2^{-n})$. Define $h_n$ by
-$$h_n = \sum_{k=1}^{2^n} \frac{m(I_{n,k} \cap E)}{m(I_{n,k} \setminus E)\, m(E)} 1_{I_{n,k} \setminus E}.$$
-Note that since $E$ is closed and nowhere dense, for any $n,k$ we have that $I_{n,k} \setminus E$ is open and nonempty; in particular it has positive measure. So this definition makes sense.
-Let $d\mu_n = h_n\,dm$, $d\mu = h \,dm$ be the measures with the corresponding densities. Observe that $\mu_n(E) = 0$ for all $n$. If we take $f = 1_E$, which is bounded and measurable (and even upper semicontinuous), then clearly $\int f\,d\mu_n = 0$ while $\int f\,d\mu = 1$. Now I claim that $\mu_n \to \mu$ weakly, which will complete the counterexample.
-By construction, $\mu_n(I_{n,k}) = \frac{m(I_{n,k} \cap E)}{m(E)} = \mu(I_{n,k})$. Also observe that for $n \le m$ and $1 \le k \le 2^{n}$ we have $$I_{n,k} = \left(\bigcup_{j=(k-1)2^{m-n}+1}^{k 2^{m-n}} I_{m,j}\right) \cup \text{ finitely many points} \tag{*}$$
-and thus
-$$\mu_m(I_{n,k}) = \sum_j \mu_m(I_{m,j}) = \sum_j \mu(I_{m,j}) = \mu(I_{n,k}) = \mu_n(I_{n,k})$$
-where the sums are taken over the same $j$ as in (*).
-Let $g : [0,1] \to \mathbb{R}$ be continuous and let $\epsilon > 0$. By uniform continuity, for all sufficiently large $n$ we have that the oscillation of $g$ on every interval of length $2^{-n}$ is less than $\epsilon$. So if we set
-$$\tilde{g} = \sum_{k=1}^{2^n} g(k 2^{-n}) 1_{I_{n,k}}$$
-then $|g - \tilde{g}| < \epsilon$ almost everywhere. Now note that $$\int \tilde{g}\,d\mu_n = \sum_{k=1}^{2^n} g(k 2^{-n}) \mu_n(I_{n,k}) = \sum_{k=1}^{2^n} g(k 2^{-n}) \mu(I_{n,k}) = \int \tilde{g}\,d\mu.$$
-Hence
-$$\left|\int g\,d\mu_n - \int g\,d\mu\right| = \left| \int (g-\tilde{g})\,d\mu_n + \int (\tilde{g}-g)\,d\mu \right| \le 2\epsilon.$$
-So $\int g\,d\mu_n \to \int g\,d\mu$ and we have shown weak convergence.<|endoftext|>
-TITLE: Solving integration by parts $\int\frac{xe^{2x}}{(1+2x)^2}\,dx$
-QUESTION [6 upvotes]: $\displaystyle\int\frac{xe^{2x}}{(1+2x)^2}\,dx$
-With this particular problem, my approach is to rewrite the integral as $$\int xe^{2x}\frac{1}{(1+2x)^2}\,dx$$ and then pick a $u$ and a $dv$ and take it from there. The only issue I'm running into is that $xe^{2x}$ appears to me as two functions instead of one. What is a suggestion for this?
-
-REPLY [5 votes]: Make the substitution $1 + 2x = t$, so that $\text{d}x = \frac{\text{d}t}{2}$, and you get:
-$$I = \int \frac{t-1}{2}\cdot \frac{e^{t-1}}{t^2}\,\frac{\text{d}t}{2}$$
-and rearranging the terms you get:
-$$\frac{e^{-1}}{4}\int \left(\frac{e^t}{t} - \frac{e^{t}}{t^2}\right) \text{d}t$$
-Now the first integral is a special function called the Exponential Integral:
-$$\int\frac{e^t}{t}\ \text{d}t = \text{Ei}(t)$$
-and the second one can be performed by parts, giving almost the same result:
-$$\int \frac{e^t}{t^2}\ \text{d}t = -\frac{e^t}{t} + \text{Ei}(t)$$
-Putting these together, you see the two special functions cancel thanks to the minus sign, leaving the antiderivative in $t$:
-$$\frac{e^{-1}}{4}\frac{e^{t}}{t} \equiv \frac{e^{t-1}}{4t}$$
-Coming back to $x$:
-$$I = \frac{e^{2x}}{4\cdot(1 + 2x)}$$
-More about the Exponential Integral:
-https://en.wikipedia.org/wiki/Exponential_integral
-Final Remark
-Don't forget the constants of integration arising at each step; here they can all be absorbed into a single constant $C$, and the intermediate ones may be taken to be zero!
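-A quick symbolic check of this antiderivative (my own addition, assuming the third-party sympy library is available):
-import sympy as sp
-
-x = sp.symbols('x')
-F = sp.exp(2 * x) / (4 * (1 + 2 * x))          # proposed antiderivative
-integrand = x * sp.exp(2 * x) / (1 + 2 * x)**2
-print(sp.simplify(sp.diff(F, x) - integrand))  # prints 0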
-REPLY [2 votes]: To use the integration by parts method, we do the following:
-$$\int \frac{1}{(1+2x)^2}xe^{2x}dx=\int \left (-\frac{1}{2(1+2x)}\right )'xe^{2x}dx \\ =-\frac{1}{2(1+2x)}xe^{2x}+\int \frac{1}{2(1+2x)}(e^{2x}+2xe^{2x})dx \\ =-\frac{1}{2(1+2x)}xe^{2x}+\int \frac{1}{2(1+2x)}(e^{2x}(1+2x))dx \\ =-\frac{1}{2(1+2x)}xe^{2x}+\int \frac{e^{2x}}{2}dx \\ =-\frac{1}{2(1+2x)}xe^{2x}+\frac{e^{2x}}{4}+C \\ =e^{2x}\frac{1+2x-2x}{4(1+2x)}+C\\ =\frac{e^{2x}}{4(1+2x)}+C$$<|endoftext|>
-TITLE: What is the probability that the upturned faces of three fair dice are all of different numbers?
-QUESTION [5 upvotes]: Three fair dice are rolled ($6$ sides). What is the probability that the upturned faces of the three dice are all of different numbers?
-
-I got that the total number of possible outcomes is $6^3$ and the number of outcomes for which the upturned faces are all different numbers is $6 \cdot 5 \cdot 4$, so the probability is $\frac{6\cdot 5\cdot 4}{6^3}=\frac{120}{216}=\frac{5}{9}$.
-Is this correct?
-
-REPLY [4 votes]: Community wiki answer so the question can be marked as answered:
-As Alex remarked: yes, this is correct.
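-A brute-force confirmation (my own addition) by enumerating all $6^3$ outcomes:
-from itertools import product
-from fractions import Fraction
-
-outcomes = list(product(range(1, 7), repeat=3))
-favorable = [o for o in outcomes if len(set(o)) == 3]
-print(Fraction(len(favorable), len(outcomes)))  # 5/9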