TITLE: Where is my argument that $\int_{-1}^{1} \sqrt{1-x^2}dx=0$ wrong? QUESTION [11 upvotes]: $$\int_{-1}^{1} \sqrt{1-x^2}dx$$ I let $u = 1-x^2$, $x = (1-u)^{1/2}$, $du = -2x dx$ $$-\frac{1}{2}\int_{0}^{0} \frac{u^{1/2}}{(1-u)^{1/2}}du = 0$$ because $$\int_{a}^{a} f(x)dx = 0$$ But it isn't zero. Why????? REPLY [17 votes]: For $x \in [-1,0)$, you can't have $$ u = 1-x^2 \implies x = (1-u)^{1/2} $$ thus your change of variable is not valid over $[-1,0)$. You would do better to use parity and write $$ \int_{-1}^{1} \sqrt{1-x^2}dx=2\int_0^{1} \sqrt{1-x^2}dx $$ and then use the given change of variable. REPLY [13 votes]: You may only do a u-substitution when it is bijective on your integration domain. You can solve this problem by breaking up your integral into $\int_{-1}^0$ and $\int_0^1,$ and using $x=-(1-u)^{1/2}$ on the first integral, and $x=(1-u)^{1/2}$ on the second.<|endoftext|> TITLE: Trying to understand how a subnet of a sequence differs from a subsequence QUESTION [9 upvotes]: Let's say I have a topological space $(X,\tau)$ and a sequence $\{x_n\}_{n \in \mathbb{N}} \subset X$ which has a limit point $a$ but no subsequence converging to $a$. If I regard $\{x_n\}_{n \in \mathbb{N}}$ as a net over $\mathbb{N}$ directed with the standard order, I know there must be a subnet converging to $a$; however, I don't quite understand how this can happen. I know there are subnets of $\{x_n\}_{n \in \mathbb{N}}$ that are not subsequences, and I have in mind the following example: $$\{x_{\left \lfloor{r}\right \rfloor}\}_{r \in \mathbb{R}}$$ This subnet has uncountably many terms, but for any $j \in \mathbb{N}$ all the terms of the subnet between $x_j$ and $x_{j+1}$ are equal to $x_j$. That is, the subnet above consists of all the terms of the original sequence, but there are uncountably many copies of each one before reaching the next term (of the original sequence). Ok, that was a particular case of a subnet of a sequence (which seems to be useless if we are seeking a subnet converging to a limit point of a sequence which has no subsequence doing that job), but I can't see how different any other subnet of the sequence can be, taking into account that the subnet can't add new elements to the image of the original sequence nor change their order. I mean, what else can the subnet do, apart from adding copies of some of the original terms (just like the subnet of the example above)? And if this is the only thing it can do, how is it possible that this thing converges, but when we consider the subsequence resulting from taking away all those extra copies of the original terms, it doesn't converge? REPLY [9 votes]: IMHO, you have to work with some subnet constructions to get a feel for them. A subnet is given by a function between index sets, just as a net is a function from an index set into a space, as you probably know. So if $f: (I, \le) \rightarrow X$ is a net (a sequence is just a net from $(\mathbb{N}, \le)$), where $(I, \le)$ is a directed set ($i \le i$, $i \le j \le k \rightarrow i \le k$, $\forall i_1, i_2 \in I \exists j \in I: i_1 \le j, i_2 \le j$), then a net $g:(J, \le) \rightarrow X$ is a subnet of $f$ iff there exists $h: J \rightarrow I$ that is order preserving ($j_1 \le j_2 \rightarrow h(j_1) \le h(j_2)$) and which has a cofinal image ($\forall i \in I, \exists j \in J: h(j) \ge i$) such that $f \circ h = g$. There are other notions of subnet that are not equivalent, see this question and answer, but I find this one the easiest to understand, as it looks the most like a traditional subsequence.
It's in fact exactly a subsequence if we'd demand $J = \mathbb{N}$ as well. But $J$ can be much bigger. E.g. in your example you use the map $\mathbb{R}^+ \rightarrow \mathbb{N}$ defined by $x \rightarrow \lfloor x \rfloor$, which satisfies the requirements if both sets have their usual orders. To get a subnet converging to $a$, we usually take a directed set that is related to $a$: let the directed set be $I = \mathscr{N}_a$, the collection of all open neighbourhoods of $a$, ordered by reverse inclusion: $i_1 \le i_2$ iff $i_2 \subseteq i_1$. Directedness follows from standard inclusion properties and the fact that the intersection of two neighbourhoods of $a$ is again a neighbourhood of $a$. Then we use that $a$ is a limit point (every neighbourhood of $a$ contains infinitely many points of the sequence, really an accumulation point) of $\{x_n: n \in \mathbb{N}\}$ to make a subnet: for each $U \in \mathscr{N}_a$ define $f(U) = \min \{n: x_n \in U\}$ and set $g(U) = x_{f(U)}$, where $f$ is well-defined as the minimum of a non-empty subset of the (well-ordered) set $\mathbb{N}$. This defines a net $g: I \rightarrow X$, and $f$ shows it's clearly a subnet of the original sequence in the above sense (a smaller neighbourhood can only have a larger minimal index, so order is preserved, and the image is cofinal as every neighbourhood must intersect infinitely many elements of the sequence). It also converges to $a$, as any open neighbourhood $O$ of $a$ is in the index set, so we have $x_O \in O$, and whenever $O' \ge O$ we actually have $O' \subseteq O$, so $x_{O'} \in O' \subseteq O$ as well. So every neighbourhood defines its own "tail" sets (all neighbourhoods inside it) in this net of neighbourhoods $I$; the order on $I$ is not linear, and that's where you have to broaden your intuition: sets of the form $\{i \in I: i \ge i_0\}$ can be very thin threads in $I$, not "almost all points" like for sequences. So convergence wrt a net on such an order is a bit strange. If $a$ had a countable local base we could just take a decreasing countable base as $I$, and then $I$ just becomes a copy of $\mathbb{N}$ and we get a subsequence instead of a subnet. To get something really different you normally use large products like $[0,1]^\mathbb{R}$ or spaces like $\beta \omega$, where the points usually have quite complicated neighbourhood structures, and many non-compatible neighbourhoods exist (for 2 neighbourhoods, one need not be a subset of the other; in metric spaces neighbourhoods are essentially countable linear structures, consider the balls $B(x,\frac{1}{n})$ for fixed $x$, etc., and that is why sequences suffice there). This question and answer also illustrate nicely how we can fail to have a convergent subsequence but still have a subnet that converges.<|endoftext|> TITLE: Are equivalent representations unitarily equivalent? QUESTION [5 upvotes]: Let $G$ be a group and let $\pi: G\rightarrow Aut_\mathbb{C}(V)$ be a finite dimensional irreducible representation of $G$. I have two related questions: 1. If I have two different hermitian inner products on $V$ with respect to which $\pi$ is unitary, does one have to be a scalar multiple of the other? 2. Are two equivalent unitary finite dimensional irreducible representations of a group unitarily equivalent? Remark: (1. implies 2.)
REPLY [3 votes]: Let $\left< \cdot, \cdot \right>_1, \left< \cdot, \cdot \right>_2$ be two Hermitian inner products on $V$ with respect to which $\pi$ is unitary and let $T \colon V \rightarrow V$ be the (unique) operator satisfying $$ \left< Tv, w \right>_2 = \left< v, w \right>_1 $$ for all $v,w \in V$. Then $$ \left< (\pi(g) \circ T)(v), v \right>_2 = \left< T(v), \pi(g)^{*}(v) \right>_2 = \left< T(v), \pi(g^{-1})v \right>_2 = \left< v, \pi(g^{-1})(v) \right>_1 = \left< v, \pi(g)^{*}(v) \right>_1 = \left< \pi(g)v, v \right>_1 = \left< (T \circ \pi(g))(v), v \right>_2 $$ for all $v \in V$ and $g \in G$, so $\pi(g) \circ T = T \circ \pi(g)$ for all $g \in G$ and $T$ is an intertwining operator. By Schur's lemma, $T = \lambda \cdot \operatorname{id}_V$, and since $T$ is $\left< \cdot, \cdot \right>_2$-self-adjoint (and positive) we have $\lambda \in \mathbb{R}$ and $\left< \cdot, \cdot \right>_2$ is a positive multiple of $\left< \cdot, \cdot \right>_1$. REPLY [2 votes]: The first one is surely false. Just take the trivial representation. Nevertheless, if $h_1,h_2$ are two invariant hermitian forms then $h_2(x,y)=h_1(Tx,y)$ for some invariant linear operator $T$, so by Schur's lemma they are proportional if your representation is irreducible. Proof of the fact claimed in the proof: a hermitian form $h$ is equivalent to an antilinear mapping $k$ from the vector space to its dual. You can take $T=k_2^{-1}k_1$.<|endoftext|> TITLE: Why we need eigenvectors and eigenvalues QUESTION [12 upvotes]: Given a matrix A, what do the eigenvectors and eigenvalues of A imply? I know how to calculate them, but I want to understand WHY we need to find them. In what applications are they important? Thank you REPLY [8 votes]: This page may help illustrate some of the applications graphically: http://setosa.io/ev/eigenvectors-and-eigenvalues/ Music All music is just eigenvalues and eigenvectors. The strings of a guitar, a sitar or a santoor - they resonate at their eigenvalue frequencies. The membranes of percussion instruments like the Indian tabla, drums, etc. resonate at their eigenvalues and move according to the two dimensional eigenvectors. Statistics Eigenvectors of your data set's covariance matrix correspond to directions of maximum variance, ordered by decreasing corresponding eigenvalues, i.e., by decreasing explained variance. This is the main idea behind principal component analysis (PCA), a dimensionality reduction trick often used in machine learning and AI. Control Theory Eigenvalues of the system matrix of a linear system tell you information about the stability and response of your system. For a continuous system, the system is stable if all eigenvalues have negative real part (located in the left half complex plane). For a discrete system, the system is stable if all eigenvalues have magnitude less than 1 (inside the unit circle in the complex plane). Graphs Eigenvalues of matrices associated with graphs, like the adjacency matrix and the Laplacian matrix, relate to various structural properties of the graph. For instance, the number of 0 eigenvalues of the Laplacian matrix is equal to the number of components of the graph. The number of distinct eigenvalues of the adjacency matrix is an upper bound for one plus the diameter of the graph (the length of the longest shortest path in the graph), and so on. Finance The eigenvalues and eigenvectors of a matrix are often used in the analysis of financial data and are integral in extracting useful information from the raw data.
They can be used for predicting stock prices and analyzing correlations between various stocks, corresponding to different companies. They can be used for analyzing risks. There is a branch of Mathematics, known as Random Matrix Theory, which deals with properties of eigenvalues and eigenvectors and has extensive applications in Finance, Risk Management, Meteorological studies, Nuclear Physics, etc. Google Search Results Google's search algorithm works with a giant matrix which it analyzes with the SVD (singular value decomposition) method; according to the eigenvalue-based score assigned to every website, it shows you the best results. See the paper about this here: http://www.rose-hulman.edu/~bryan/googleFinalVersionFixed.pdf Quantum mechanics Eigenvalues are possible measurement results of an observable represented by an operator. Classical mechanics Eigenvectors of the moment of inertia tensor represent the "main axes" around which a solid body can stably rotate, and the corresponding eigenvalues are the scalar moments of inertia along those axes. These summaries were pulled from: https://www.quora.com/What-are-some-very-good-and-practical-uses-of-eigenvalues-of-a-matrix<|endoftext|> TITLE: Distribution and expected value of a random infinite series $\sum_{n \geq 1} \frac{1}{(\text{lcm} (n,r_n))^2}$ QUESTION [5 upvotes]: Can we find the distribution and/or expected value of $$S=\sum_{n \geq 1} \frac{1}{(\text{lcm} (n,r_n))^2}$$ where $r_n$ is a uniformly distributed random integer, $r_n \in [1,n]$, and $\text{lcm}$ is the least common multiple? Or maybe an estimate is possible? Some boundaries are easy to see. For example, since $\text{lcm}(1,r_1)=1$ and $\text{lcm}(n,r_n)\geq n$, $$1 < S \leq \sum_{n \geq 1} \frac{1}{n^2} = \frac{\pi^2}{6}.$$<|endoftext|> TITLE: Why did Riemann believe that all non-trivial zeros of the zeta function lie on the critical line? QUESTION [5 upvotes]: Bear with me, I'm fresh out of high school so my level of mathematical knowledge is quite low (probably too low to be trying to understand the Riemann hypothesis, but at least I'm trying). At this current time, I'm trying to make sense of John Derbyshire's (fantastic) book Prime Obsession. One thing that hasn't been explained in the book, and to which I cannot find answers within the bounds of my understanding through research, is why exactly Riemann thought his hypothesis was true. From what I understand, Bernhard Riemann was a very intuitive mathematician, so it's quite possible that although he did not have a proof for his hypothesis, it made sense to him intuitively. I guess my question is: How does it intuitively make sense that all non-trivial zeros of the zeta function lie on the critical line and not somewhere else? REPLY [4 votes]: From Riemann's Zeta Function, by Harold M. Edwards:<|endoftext|> TITLE: End of Hom-profunctor in Grp QUESTION [5 upvotes]: What is the end of the $\operatorname{Hom}$ profunctor in the category of groups? $$\int_{X\in\mathsf{Grp}} \operatorname{Hom}(X, X)$$ Preface: if we examine ends of $\operatorname{Hom}(-, -)$ in other categories, in $\mathsf{Set}$ we get the identity function, or rather a mapping $S \mapsto \{\operatorname{id}_S\}$. In $\mathsf{Vect}_K$ the end is $\{(k \cdot -) : k \in K\}$ - the set of uniformly scaling linear maps, and if we take the end of the internal $\operatorname{hom} : \mathsf{Vect}_K^{op} \times \mathsf{Vect}_K \to \mathsf{Vect}_K$, we get the vector space of those maps (isomorphic to $K$).
Intuitively, the end of $\operatorname{Hom}$ seems to be the set of morphisms (or rather morphism families parameterised by the domain object) that can be applied to any object: $\{x \mapsto x\}$ in $\mathsf{Set}$ and $\{(x \mapsto k\cdot x) | k \in K\}$ in $\mathsf{Vect}_K$. Continuing this line of thought, the end of $\operatorname{Hom}$ in $\mathsf{FinGrp}$, and likely $\mathsf{Grp}$ too, is $\{(x \mapsto x^i) | i \in \mathbb{Z}\}$. The proof that $G \mapsto \{(x \mapsto x^i) : G \to G | i \in \mathbb{Z}\}$ is a wedge of $\operatorname{Hom}(-, -)$ is simple enough: for any group homomorphism $f : G \to H$, $f(x^i) = f(x)^i$, and hence pre- and post-compositions of a hom-set with $-^i$ are necessarily equal. However I have no proof that this wedge is universal. REPLY [7 votes]: You're either approaching, or have already made without saying so explicitly, the realization that the end of the hom bifunctor is the set of natural endomorphisms of the identity functor. This follows directly from the definition, once you unravel it. This immediately gives us the first two results you list: since a singleton represents the identity endofunctor of $\mathbf{Set}$, its single endomorphism represents the end of the hom bifunctor, and analogously in vector spaces. Since groups aren't enriched over themselves, we can't make quite the same argument to calculate $\int_G\mathrm{Gp}(G,G)$ as $\mathrm{Gp}(\mathbb{Z},\mathbb{Z})$. However, there's a nice trick we can use: groups have a faithful forgetful functor $U$, and so a natural endomorphism of the identity is just a natural endomorphism of the forgetful functor whose components happen to be group homomorphisms. But the forgetful functor is represented by $\mathbb{Z}$, so its endomorphisms are just $\mathrm{Gp}(\mathbb{Z},\mathbb{Z})$, each element of which corresponds to the natural endomorphism $g\mapsto g^n$ of $U$. However, none of these endomorphisms of $U$ are group homomorphisms for nonabelian groups $G$, except when $n=1$ or $n=0$! Thus there are only two elements in the end of the hom bifunctor for groups, corresponding to the identity natural transformation and to that which maps each group to its identity element. The object we've just computed is often known as the center of the category, as it's guaranteed to be a commutative monoid by the Eckmann-Hilton argument. Note that, indeed, all of our computations so far are commutative monoids, mostly rather boring ones.<|endoftext|> TITLE: Is there a closed form for the double sum $\sum_{n,k} \frac{1}{\text{lcm}^3 (n,k)}$ QUESTION [5 upvotes]: Is there a closed form for the double sum with least common multiple: $$\sum_{n \geq 1} \sum_{k \geq 1} \frac{1}{\text{lcm}^3 (n,k)}$$ For truncated sums, Mathematica gives: $$\sum_{n = 1}^{2500} \sum_{k = 1}^{2500} \frac{1}{\text{lcm}^3 (n,k)}=1.707289827$$ $$\sum_{n = 1}^{5000} \sum_{k = 1}^{5000} \frac{1}{\text{lcm}^3 (n,k)}=1.707290976$$ $$\sum_{n = 1}^{10000} \sum_{k = 1}^{10000} \frac{1}{\text{lcm}^3 (n,k)}=1.707291287$$ It's very close to $1+1/ \sqrt{2}$, but not quite. By the way, how do we prove it converges? REPLY [5 votes]: For any $m\geq 1$, the number of couples $(n,k)$ such that $\text{lcm}(n,k)=m$ can be easily understood. Assuming that the factorization of $m$ is given by $p_1^{\alpha_1}\cdots p_k^{\alpha_k}$, there are $(2\alpha_1+1)\cdots (2\alpha_k+1)$ such couples.
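As a quick numerical sanity check of this counting claim (an added sketch, not part of the original answer), one can enumerate the couples directly for a small value such as $m = 12 = 2^2\cdot 3$, where the formula predicts $(2\cdot 2+1)(2\cdot 1+1)=15$ couples:

```python
from math import lcm  # Python 3.9+

# Count the couples (n, k) with lcm(n, k) = m for m = 12 = 2^2 * 3;
# the formula predicts (2*2 + 1) * (2*1 + 1) = 15 of them.
m = 12
pairs = [(n, k) for n in range(1, m + 1) for k in range(1, m + 1)
         if lcm(n, k) == m]
print(len(pairs))  # prints 15
```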
If we denote with $g(u)$ the multiplicative function whose value at $p^\alpha$ is $2\alpha+1$, the original series equals the Dirichlet series: $$ \sum_{m\geq 1}\frac{g(m)}{m^3}=\prod_{p\in\mathcal{P}}\left(1+\frac{g(p)}{p^3}+\frac{g(p^2)}{p^6}+\frac{g(p^3)}{p^9}+\ldots\right)=\prod_{p\in\mathcal{P}}\left(1-\frac{1}{p^6}\right)\left(1-\frac{1}{p^3}\right)^{-3} $$ which can be easily represented in closed form, $\color{red}{\large\frac{\zeta(3)^3}{\zeta(6)}}$, through Euler's product. In a similar way: $$\forall s>1,\qquad \sum_{m,n\geq 1}\frac{1}{\text{lcm}(m,n)^s} = \sum_{m\geq 1}\frac{g(m)}{m^s}=\color{red}{\frac{\zeta(s)^3}{\zeta(2s)}}.$$<|endoftext|> TITLE: Why can't I use the chain rule for the derivative of $x^{x^2}$? QUESTION [5 upvotes]: I realize you can do this with implicit differentiation, but I thought you could also take the derivative by using the chain rule. Hence, it should be $2x\cdot x^{x^2-1}$. What's wrong with this? REPLY [3 votes]: $$ \begin{align} \lim_{h \to 0} \frac1h \bigg((x+h)^{(x+h)^2}-x^{x^2}\bigg) & = \lim_{h \to 0} \frac{x^{x^2+2hx}}h \bigg((1+\frac{h}x)^{x^2+2hx}-x^{-2hx} \bigg) \\[10pt] & = \lim_{h \to 0} \frac{x^{x^2+2hx}}h\bigg(1-x^{-2hx}+\frac{h}x(x^2+2hx) \bigg) \\[10pt] & = x^{x^2}\lim_{h \to 0}\frac{x^{2hx}-1}{h} + \lim_{h \to 0}x^{x^2+2hx}(x+2h) \\[10pt] & = x^{x^2}\lim_{h \to 0}\frac{(x^{2x})^h-1}{h} + x^{x^2}x \\[10pt] & = x^{x^2}(\log x^{2x} +x) \\[10pt] & = x^{x^2}(2\log x +1)x \end{align} $$<|endoftext|> TITLE: Why can't Russell's Paradox be solved with references to sets instead of containment? QUESTION [23 upvotes]: My background is in computer science, and I'm keeping the Java implementation in my mind as a model. Included in the Java language is the notion of sets. Now I understand that this is different from the model Russell and Whitehead had in their minds when they were writing Principia Mathematica, but I don't completely understand why it is different. To me, when you say "a set that contains a set," you have three ways you can "implement" this. You can say that it is "physically inside" (and draw it inside). You can say "it is just a logical concept" (which is what I think Russell was getting at). And you can say "it is a physical concept, but not physically inside — we link them together with pointers" (like in computer programming). Taking this further into Russell's paradox: "The set of all sets that don't contain themselves," when talking about computer programming, is a relatively easy concept to implement (within the domain of the sets in a computer program). I'm guessing there is a philosophical difference between sets in Java and Russell's sets. (I imagine there must be a name for Russell's sets, but I don't know what they are called.) I can see that mathematics has other theories of sets like Zermelo–Fraenkel set theory and Quine's New Foundations. My question is: Why can't Russell's Paradox be solved with references to sets instead of containment? REPLY [7 votes]: While the answers by Reese and Dustan have explained the set-theoretic paradox in terms of Java, they do not answer the part of the question about how exactly Set data structures (not just in Java) avoid the paradox, nor do they show how the paradox can be avoided in a manner consistent with programming. Firstly, the type systems of programming languages are different from the usual set theories and type theories in foundations of mathematics.
I would even go so far as to say that a programming language ought to have a universal type, and indeed Java comes close (the only exceptions I am aware of are the native data types, which are there for performance reasons). But you see, most programming languages do not have any 'specification axiom', unlike set/type theories. In other words, you cannot create a data type or class that includes as members all objects that satisfy a particular property. So you just cannot construct a Russell-like data type in most programming languages. However, you could say, why not use a program to specify a type? Namely, let a type simply be a program $P$ (a procedure with no internal state in most programming languages) and define its members as all inputs on which $P$ halts and outputs $true$. In a modern programming language such as Java, this corresponds to saying that a type is a (partial) function P with signature bool P(Object x). This notion is extremely intuitive (after all, how else do we classify things?) and also fits perfectly with the basic intuitive notions, including the universal type (which is simply bool U(Object x) { return true; }) and the existence of universal complements (the complement of P is just bool notP(Object x) { return !P(x); }). In a more abstract framework these could be denoted by $U = ( obj\ x \mapsto true )$ and $P' = ( obj\ x \mapsto \neg P(x) )$ respectively. Moreover, under this programs-as-types paradigm we can indeed construct the Russell type. If the abstract framework (programming language) allows run-time type coercion (like Javascript) then it is just $R = ( obj\ x \mapsto \neg x(x) )$. If not then we need some kind of reflection (like in Java) to define $R = ( obj\ x \mapsto type(x) \ ? \ \neg x(x) : false )$. Either way, we can then prove that $R(R) = \neg R(R)$ in the sense that both expressions have the same output behaviour. In this case, both do not halt, so the statement is true and does not cause a contradiction! So you can see that the idea that a set must be an indicator function on the entire universe is the key feature of set theories that face Russell's paradox, as the paradox vanishes once you permit a truth-value gap and do not permit the system to form types based on what falls into that gap. See this post for one possible way of handling such constructions. In fact, Kripke described a similar notion of groundedness and also showed that one can circumvent Tarski's undefinability theorem in a certain sense using Kleene's 3-valued logic in his theory of truth. Finally, I would note that all this has little to do with whether objects are handled by value or by reference. The major issue is whether you can capture meta-theoretic properties in the system itself. In the case of set theory, it is the notion that you can construct a set that precisely divides the universe into two parts with no gap, depending on some property that only makes sense from the 'outside'. $\{ x : \neg x \in x \}$ depends on evaluating "$\neg x \in x$" for each object $x$, which can be answered for any given model of set theory, but the answer is in the meta-theory and is not always captured by the theory itself. Similarly, Tarski's undefinability theorem shows that truth (a meta-property) cannot always be captured by a formal system.
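To make the programs-as-types construction concrete, here is a minimal sketch (written in Python rather than Java purely for brevity; the function names are mine, not from the post) of the universal type and the Russell predicate described above:

```python
# A "type" is a predicate: a function from objects to bool that may
# fail to halt on some inputs.

def U(x):
    return True        # the universal type: everything is a member

def R(x):
    return not x(x)    # the Russell "type": x is a member iff x(x) is false

print(U(U))            # True: U is a member of itself
# R(R) unfolds to not R(R), so the call can never return a value; an
# idealized machine would run forever, and real Python aborts with a
# RecursionError. Either way there is no boolean answer, hence no
# contradiction:
# print(R(R))
```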
The model in Kripke's theory of truth does not answer affirmatively about the truth of every sentence; some of these questions fall into the truth-value gap.<|endoftext|> TITLE: Find $a, b\in \mathbb Z$ where $(a^2+b)(a+b^2)=(a-b)^3$ QUESTION [9 upvotes]: Find all non-zero $a, b\in \mathbb Z$ where $$(a^2+b)(a+b^2)=(a-b)^3$$ I actually had no clue what to try. Thanks for your help. I believe I've already tried, but per the 1st comment let me expand both sides and see what I can cancel. $$a^3+a^2b^2+ab+b^3=a^3-3a^2b+3ab^2-b^3$$ $$b(a^2b+2b^2+3a^2-3ab+a)=0$$ And then..? REPLY [4 votes]: Let me post it as an answer to mark this answered. Thanks again to астон вілла олоф мэллбэрг! $$a^3+a^2b^2+ab+b^3=a^3-3a^2b+3ab^2-b^3$$ $$b(a^2b+2b^2+3a^2-3ab+a)=0$$ $$b=0\quad or \quad 2b^2+(a^2-3a)b+3a^2+a=0$$ Applying the quadratic formula in $b$, $$b=\frac{a(3-a)\pm\sqrt{a^2(a-3)^2-24a^2-8a}}{4}=\frac{a(3-a)\pm (a+1)\sqrt{a(a-8)}}{4}$$ So $a(a-8)$ should be a square, let's say $n^2$. $$a^2-8a=n^2$$ $$(a-4)^2:=m^2=n^2+16$$ $$(m,n)=(\pm4,0),(\pm5,\pm3)$$ $$(a,b)=(8,-10),(-1,-1),(9,-6),(9,-21)$$<|endoftext|> TITLE: Cubes of binomial coefficients $\sum_{n=0}^{\infty}{{2n\choose n}^3\over 2^{6n}}={\pi\over \Gamma^4\left({3\over 4}\right)}$ QUESTION [17 upvotes]: Consider the sum $(1)$ $$\sum_{n=0}^{\infty}{{2n\choose n}^3\over 2^{6n}}={\pi\over \Gamma^4\left({3\over 4}\right)}\tag1$$ How does one prove $(1)$? An attempt: Recall $${2n\choose n}\cdot{\pi\over 2^{2n+1}}=\int_{0}^{\infty}{\mathrm dx\over (1+x^2)^{n+1}}\tag2$$ Choosing $x=\tan{u}$, so that $\mathrm dx=\sec^2{u}\,\mathrm du$, $(2)$ becomes $${2n\choose n}\cdot{\pi\over 2^{2n+1}}=\int_{0}^{\pi/2}\cos^{2n}{u}\mathrm du\tag3$$ $${2n\choose n}^3\cdot{1\over 2^{6n}}={8\over \pi^3}\cdot\left(\int_{0}^{\pi/2}\cos^{2n}{u}\mathrm du\right)^3\tag4$$ $${2n\choose n}^3\cdot{1\over 2^{6n}}={8\over \pi^3}\cdot\left({(2n-1)!!\over (2n)!!}\right)^3\cdot{\pi^3\over 8}\tag5$$ I am not on the right track here. How else can we tackle $(1)?$ Note: $$\sum_{n=0}^{\infty}\left({(2n-1)!!\over (2n)!!}\right)^3={\pi\over \Gamma^4(3/4)}\tag1$$ similar to Ramanujan's sum $(6)$ $$\sum_{n=0}^{\infty}(-1)^n(4n+1)\left({(2n-1)!!\over (2n)!!}\right)^5={2\over \Gamma^4(3/4)}\tag6$$ I found something similar here; it may be helpful. REPLY [8 votes]: This is an extension of my comment to user "Start wearing purple"'s excellent answer and should be considered complementary to that. You have stumbled onto a famous series from the classical theory of elliptic integrals, elliptic functions and theta functions. I provide a brief outline here. Let $0 < k < 1$ and $k'=\sqrt{1 - k^{2}}$; then we define a function $$K(k) = \int_{0}^{\pi/2}\frac{dx}{\sqrt{1 - k^{2}\sin^{2}x}}\tag{1}$$ which is normally called the complete elliptic integral of the first kind, and mostly we drop the parameter $k$ and just use $K$ for the elliptic integral.
By expanding the integrand into an infinite series (using the binomial theorem for a general index) and integrating term by term we can obtain $$K = \frac{\pi}{2}\left\{1 + \left(\frac{1}{2}\right)^{2}k^{2} + \left(\frac{1\cdot 3}{2\cdot 4}\right)^{2}k^{4} + \cdots\right\}$$ or $$\frac{2K}{\pi} = {}_{2}F_{1}\left(\frac{1}{2}, \frac{1}{2}; 1; k^{2}\right)\tag{2}$$ Next we can transform the hypergeometric series on the right into another form by using the formula $${}_{2}F_{1}\left(a, b; a + b + \frac{1}{2}; 4x(1 - x)\right) = {}_{2}F_{1}\left(2a, 2b; a + b + \frac{1}{2}; x\right)\tag{3}$$ which holds if $|x| < 1, |4x(1 - x)| < 1$ and $(a + b + (1/2))$ is neither zero nor a negative integer (see this blog post for a proof). Applying $(3)$ to $(2)$ (using $a = 1/4, b = 1/4, x = k^{2}$) we get $$\frac{2K}{\pi} = {}_{2}F_{1}\left(\frac{1}{4},\frac{1}{4}; 1; (2kk')^{2}\right)\tag{4}$$ or in explicit form $$\frac{2K}{\pi} = 1 + \left(\frac{1}{4}\right)^{2}(2kk')^{2} + \left(\frac{1\cdot 5}{4\cdot 8}\right)^{2}(2kk')^{4} + \left(\frac{1\cdot 5\cdot 9}{4\cdot 8\cdot 12}\right)^{2}(2kk')^{6} + \cdots$$ which holds for $0 \leq k \leq 1/\sqrt{2}$. Now we need another piece of magic called Clausen's formula (proof available in the blog post linked earlier) $$\left({}_{2}F_{1}\left(a, b; a + b + \frac{1}{2}; z\right)\right)^{2} = \,{}_{3}F_{2}\left(2a, 2b, a + b; 2a + 2b, a + b + \frac{1}{2}; z\right)\tag{5}$$ Using $(4), (5)$ together (with $a = 1/4, b = 1/4, z = (2kk')^{2}$) we get $$\left(\frac{2K}{\pi}\right)^{2} =\,{}_{3}F_{2}\left(\frac{1}{2}, \frac{1}{2}, \frac{1}{2}; 1, 1; (2kk')^{2}\right)$$ or in explicit form $$\left(\frac{2K}{\pi}\right)^{2} = 1 + \left(\frac{1}{2}\right)^{3}(2kk')^{2} + \left(\frac{1\cdot 3}{2\cdot 4}\right)^{3}(2kk')^{4} + \left(\frac{1\cdot 3\cdot 5}{2\cdot 4\cdot 6}\right)^{3}(2kk')^{6} + \cdots\tag{6}$$ for $0 \leq k\leq 1/\sqrt{2}$. Your sum in question is the above series on the right with $k = 1/\sqrt{2}$, so that $2kk' = 1$, and thus the value of the series is $(2K(1/\sqrt{2})/\pi)^{2}$. Equation $(6)$ was sitting idly in the classical theory for a long time until Ramanujan appeared on the scene: in 1914 he differentiated the series $(6)$ (and some more series like it) with respect to $k$ (plus some highly non-obvious stuff) to obtain a class of series for $1/\pi$, the simplest of which is $$\frac{4}{\pi} = 1 + \frac{7}{4}\left(\frac{1}{2}\right)^{3} + \frac{13}{4^{2}}\left(\frac{1\cdot 3}{2\cdot 4}\right)^{3} + \frac{19}{4^{3}}\left(\frac{1\cdot 3\cdot 5}{2\cdot 4\cdot 6}\right)^{3} + \cdots\tag{7}$$ (see this blog post for details).<|endoftext|> TITLE: Combinatoric task about flock of sheep QUESTION [13 upvotes]: A flock of sheep is walking along a path in one direction. There are 30 sheep. Initially they all have different random speeds. The main feature is that, in the process of walking, if a sheep at a faster speed bumps into a sheep which has a slower speed, then the faster sheep starts moving with the speed of this slower sheep. Obviously, after a sufficiently long time this flock will be divided into groups which have a constant speed. The task is to find the average number of groups. Edit: Let's consider a uniform distribution of speeds: each sheep is given a uniform random speed in (0,1) independently. PS. Can you look at my answer and check my logic? REPLY [3 votes]: The answers already provided to this and to the equivalent post about cars in queue deal with the expected number of groups, and that is what the post asks.
Since the problem is quite interesting, I got curious about the underlying PDF, but I did not succeed in finding a satisfactory hint about it (admittedly, I might have overlooked something). However, I tried to develop a different approach which shows what the probability distribution behind it is. Consider "quantizing" the speed into $n$ classes. Then we can represent the $q$ sheep in the queue at time $0$ on a diagram of speed vs. position, as in the sketch. It is clear that the first sheep will block all the following ones with higher or equal speed, that is, all those before the first one (n. $4$ in the sketch) that has a speed lower than that of n. $1$. That one in turn will block the following ones with the same or higher speed, etc. Note that we identify the resulting groups as the blocks that, as time passes, get separated by an ever larger distance. So in the sketch n. $1$ and n. $3$ are in the same group. Now the total number of possible ways of arranging the diagram is $T(n,q)=n^q$. The number of ways to arrange a group, with leader speed $v$ and $m$ members, is: $$ G(v,m) = \left( {n - v + 1} \right)^{\,m-1} $$ Therefore the number of ways $N_{1}(n,q)$ to arrange the sheep in such a way that they finally make up only one group will be: $$ \begin{gathered} N_{\,1} (n,q) = \sum\limits_{1\, \leqslant \,\,v_{\,1} \, \leqslant \,n} {\left( {n - v_{\,1} + 1} \right)^{\,q - 1} } = \sum\limits_{1\, \leqslant \,\,k\, \leqslant \,n} {k^{\,q - 1} } = \hfill \\ = \sum\limits_{\left( {0\, \leqslant } \right)\,j\,\left( { \leqslant \,q - 1} \right)} {\left\langle \begin{gathered} q - 1 \\ j \\ \end{gathered} \right\rangle \left( \begin{gathered} n + 1 + j \\ q \\ \end{gathered} \right)} = \sum\limits_{\left( {0\, \leqslant } \right)\,j\,\left( { \leqslant \,q - 1} \right)} {\;j!\;\left\{ \begin{gathered} q - 1 \\ j \\ \end{gathered} \right\}\left( \begin{gathered} n + 1 \\ j + 1 \\ \end{gathered} \right)} = \hfill \\ = \frac{1} {q}\sum\limits_{0\, \leqslant \,j\, \leqslant \,q - 1} {\left( \begin{gathered} q \\ j \\ \end{gathered} \right)\;B_j \;\left( {n + 1} \right)^{\,q - j} } \quad \left| {\;1 \leqslant \text{integer }q,n} \right. \hfill \\ \end{gathered} $$ where $ \left\langle {} \right\rangle $ indicates the Eulerian numbers, $\left\{ {} \right\}$ the Stirling numbers of the 2nd kind, and $B_j$ the Bernoulli numbers. Then the number of ways to arrange them so as to finally have two groups is $$ \begin{gathered} N_{\,2} (n,q) = \sum\limits_{\left\{ \begin{subarray}{l} 1\, \leqslant \,v_{\,2} \, < \,v_{\,1} \, \leqslant \,n \\ m_{\,2} \, + \,m_{\,1} \, = \,q\;\;\left| {\;1\, \leqslant \,m_{\,k} \,} \right. \end{subarray} \right.} {\left( {n - v_{\,1} + 1} \right)^{\,m_{\,1} - 1} \left( {n - v_{\,2} + 1} \right)^{\,m_{\,2} - 1} } = \hfill \\ = \sum\limits_{\left\{ \begin{subarray}{l} 1\, \leqslant \,k_{\,1} \, < \,k_{\,2} \, \leqslant \,n\, \\ m_{\,2} \, + \,m_{\,1} \, = \,q\;\;\left| {\;1\, \leqslant \,m_{\,k} \,} \right. \end{subarray} \right.} {k_{\,1} ^{\,m_{\,1} - 1} \;k_{\,2} ^{\,m_{\,2} - 1} } \hfill \\ \end{gathered} $$ to which corresponds a probability $$ P_{\,2} (n,q) = \frac{{N_{\,2} (n,q)}} {{n^{\,q} }} = \frac{1} {{n^{\,2} }}\sum\limits_{\left\{ \begin{subarray}{l} 1\, \leqslant \,k_{\,1} \, < \,k_{\,2} \, \leqslant \,n\, \\ m_{\,2} \, + \,m_{\,1} \, = \,q\;\;\left| {\;1\, \leqslant \,m_{\,k} \,} \right. \end{subarray} \right.} {\left( {\frac{{k_{\,1} }} {n}} \right)^{\,m_{\,1} - 1} \;\left( {\frac{{k_{\,2} }} {n}} \right)^{\,m_{\,2} - 1} } $$ and so forth.
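The $P_2$ formula above can be checked by brute force (an added sketch, not part of the original derivation): a sheep starts a new group exactly when its speed is strictly below every speed ahead of it, so the number of groups equals the number of strict prefix minima of the speed sequence.

```python
from itertools import product

def groups(speeds):
    # number of strict prefix minima = number of final groups
    g, cur = 0, float('inf')
    for v in speeds:
        if v < cur:
            g, cur = g + 1, v
    return g

n, q = 4, 5  # 4 speed classes, 5 sheep: small enough to enumerate n**q cases
empirical = sum(groups(s) == 2 for s in product(range(1, n + 1), repeat=q)) / n**q

# the double sum for P_2(n, q) from above
total = 0.0
for k1 in range(1, n + 1):
    for k2 in range(k1 + 1, n + 1):
        for m1 in range(1, q):  # m1 + m2 = q with m1, m2 >= 1
            m2 = q - m1
            total += (k1 / n) ** (m1 - 1) * (k2 / n) ** (m2 - 1)
print(empirical, total / n**2)  # the two values agree
```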
I do not know whether the multiple summations can be reduced to a simpler form. (*) However, if we increase the speed granularity (i.e. $n$) to the continuum, then we can replace the summations over the $k$'s in the expression for the probability with integrals $$ \begin{gathered} P_{\,g} (q) = \sum\limits_{\begin{subarray}{l} \\ \\ \,m_{\,1} \, + \,m_{\,2} + \, \cdots \, + m_{\,g} \, = \,q\;\;\left| {\;1\, \leqslant \,m_{\,k} \,} \right. \end{subarray} } {\mathop {\int {} }\limits_{\begin{subarray}{l} \\ 0\, \leqslant \,x_{\,1} \, < \,x_{\,2} < \, \cdots \, < \,x_{\,g} \, \leqslant \,1 \end{subarray} } \prod\limits_{1\, \leqslant \,j\, \leqslant \,g\,} {x_{\,j} ^{\,m_{\,j} - 1} \;dx_{\,j} } } = \hfill \\ = \sum\limits_{\begin{subarray}{l} \\ \,m_{\,1} \, + \,m_{\,2} + \, \cdots \, + m_{\,g} \, = \,q\;\;\left| {\;1\, \leqslant \,m_{\,k} \,} \right. \end{subarray} } {\frac{1} {{m_{\,1} }}\;\mathop {\int {} }\limits_{0\, \leqslant \,\,x_{\,2} < \, \cdots \, < \,x_{\,g} \, \leqslant \,1} x_{\,2} ^{\,m_{\,1} + m_{\,2} - 1} \;dx_{\,2} \prod\limits_{3\, \leqslant \,j\, \leqslant \,g\,} {x_{\,j} ^{\,m_{\,j} - 1} \;dx_{\,j} } } = \hfill \\ = \sum\limits_{\begin{subarray}{l} \\ \,m_{\,1} \, + \,m_{\,2} + \, \cdots \, + m_{\,g} \, = \,q\;\;\left| {\;1\, \leqslant \,m_{\,k} \,} \right. \end{subarray} } {\frac{1} {{m_{\,1} }}\frac{1} {{m_{\,1} + m_{\,2} }}\, \cdots \,\frac{1} {{m_{\,1} + m_{\,2} + \, \cdots \, + m_{\,g} }}} \hfill \\ \end{gathered} $$ We can see that $P$ satisfies the following recursion $$ \bbox[lightyellow] { P_{\,g} (q) = \frac{1} {q}\;\sum\limits_{\left( {0\, \leqslant \,g - 1 \leqslant } \right)\,k\, \leqslant \,q - 1} {P_{\,g - 1} (k)} }$$ and for the initial conditions we can put that $0$ sheep can only be arranged into an empty group and, vice versa, an empty group can gather only $0$ sheep. $$ \left\{ \begin{gathered} P_{\,g} (q) = 0\quad \left| {\;g < 0\; \vee \;q < 0} \right. \hfill \\ P_{\,g} (0) = \delta _{\,g,\,0} \hfill \\ P_{\,0} (q) = \delta _{\,q,\,0} \hfill \\ \end{gathered} \right. $$ Now, starting from a known identity about Stirling numbers
of the 1st kind, we have $$ \begin{gathered} \left[ \begin{gathered} n + 1 \\ m + 1 \\ \end{gathered} \right] = \sum\limits_{0\, \leqslant \,k\, \leqslant \,n} {\left[ \begin{gathered} k \\ m \\ \end{gathered} \right]n^{\,\underline {\,n - k\,} } } = n!\sum\limits_{0\, \leqslant \,k\, \leqslant \,n} {\left[ \begin{gathered} k \\ m \\ \end{gathered} \right]\frac{1} {{k!}}} \quad \Rightarrow \hfill \\ \Rightarrow \quad \left[ \begin{gathered} n \\ m \\ \end{gathered} \right] = \left( {n - 1} \right)!\sum\limits_{0\, \leqslant \,k\, \leqslant \,n - 1} {\left[ \begin{gathered} k \\ m - 1 \\ \end{gathered} \right]\frac{1} {{k!}}} \quad \left| {\;1 \leqslant n,m} \right.\quad \Rightarrow \hfill \\ \Rightarrow \quad \left( {\frac{1} {{n!}}\left[ \begin{gathered} n \\ m \\ \end{gathered} \right]} \right) = \frac{1} {n}\;\sum\limits_{0\, \leqslant \,k\, \leqslant \,n - 1} {\left( {\frac{1} {{k!}}\left[ \begin{gathered} k \\ m - 1 \\ \end{gathered} \right]} \right)} \hfill \\ \end{gathered} $$ Finally, having the same recursion and the same initial conditions, we arrive at: $$ \bbox[lightyellow] { P_{\,g} (q) = \frac{1} {{q!}}\left[ \begin{gathered} q \\ g \\ \end{gathered} \right] }$$ and then it is known (**) that $$ \bbox[lightyellow] { \overline g (q) = \sum\limits_{\left( {0\, \leqslant } \right)\,g\,\left( { \leqslant \,q} \right)} {\frac{g} {{q!}}\left[ \begin{gathered} q \\ g \\ \end{gathered} \right]} = H(q) }$$ ----------- Note (*) I have later found a closed form for $N_{g}(n,q)$, which is presented in this related post. ---------- Note (**) because we have in fact $$ \begin{gathered} \frac{{\left( {x + 1} \right)^{\,\overline {\,n\,} } }} {{n!}} = \prod\limits_{0\, \leqslant \,k\, \leqslant \,n - 1} {\frac{{x + 1 + k}} {{1 + k}}} = \exp \left( {\sum\limits_{0\, \leqslant \,k\, \leqslant \,n - 1} {\ln \left( {1 + \frac{x} {{1 + k}}} \right)} } \right) = \sum\limits_{\left( {0\, \leqslant } \right)\,k} {\frac{1} {{n!}}\left[ \begin{gathered} n \\ k \\ \end{gathered} \right]\left( {x + 1} \right)^{\,k} } \hfill \\ \left( {x + 1} \right)\frac{d} {{d\,x}}\frac{{\left( {x + 1} \right)^{\,\overline {\,n\,} } }} {{n!}} = \exp \left( {\sum\limits_{0\, \leqslant \,k\, \leqslant \,n - 1} {\ln } \left( {1 + \frac{x} {{k + 1}}} \right)} \right)\sum\limits_{0\, \leqslant \,k\, \leqslant \,n - 1} {\frac{{x + 1}} {{x + k + 1}}} = \sum\limits_{\left( {0\, \leqslant } \right)\,k} {\frac{k} {{n!}}\left[ \begin{gathered} n \\ k \\ \end{gathered} \right]\left( {x + 1} \right)^{\,k} } \hfill \\ \left. {\left( {\left( {x + 1} \right)\frac{d} {{d\,x}}\frac{{\left( {x + 1} \right)^{\,\overline {\,n\,} } }} {{n!}} = \left( {x + 1} \right)\frac{d} {{d\,x}}\frac{{\Gamma \left( {x + 1 + n} \right)}} {{\Gamma \left( {x + 1} \right)\Gamma \left( {1 + n} \right)}}} \right)\;} \right|_{\;x\, = \,0} = \sum\limits_{0\, \leqslant \,k\, \leqslant \,n - 1} {\frac{1} {{k + 1}}} = \sum\limits_{\left( {0\, \leqslant } \right)\,k} {\frac{k} {{n!}}\left[ \begin{gathered} n \\ k \\ \end{gathered} \right]} \hfill \\ \end{gathered} $$<|endoftext|> TITLE: Is this a correct proof? QUESTION [9 upvotes]: For every $r,s \in \mathbb Q$ with $r<s$ there is an irrational number between $r$ and $s$. My attempt: write $r=\frac{a}{b}$ and $s=\frac{c}{d}$ with $\frac{a}{b}<\frac{c}{d}$. Since $\pi>1$ and since $\frac{c}{d}-\frac{a}{b}$ is always positive, $0<\dfrac{\frac{c}{d}-\frac{a}{b}}{\pi} < \frac{c}{d}-\frac{a}{b}$, i.e.
less than the difference of $r$ and $s$, so $r < r+\dfrac{\frac{c}{d}-\frac{a}{b}}{\pi} < s$, and this number is irrational, so it is an irrational number between $r$ and $s$.<|endoftext|> TITLE: Spectrum of Diagonal Operator in $\ell^2$ QUESTION [5 upvotes]: This question is from Kreyszig 7.3, #4-6: Let $T: \ell^2 \rightarrow \ell^2$ be defined by $y = Tx, x = (x_i), y = (y_i), y_i = a_ix_i$ where $(a_i)$ are dense in [0,1]. Find $\sigma_p(T)$ and $\sigma(T)$. Moreover, show that if $\lambda \in \sigma-\sigma_p$ then $R_\lambda(T)$ is unbounded. (The definitions of spectrum can be found here) Extending the foregoing problem: Find a linear operator $T: \ell^2 \rightarrow \ell^2$ whose eigenvalues are dense in a given compact set $K \subset \mathbf{C}$ and $\sigma(T) = K$. So far: My only steps so far were recognizing that $(a_i)$ should be in $\sigma_p$. I'm having trouble exhibiting a sequence that's in the spectrum but not in the point spectrum. I tried to use the second part of the first question by thinking about what would make $(T-\lambda I)$ unbounded but to no avail. Any thoughts? Thanks! REPLY [3 votes]: Let $A$ stand for the (dense) set of $(a_n)_n$. Since $\sigma(T)$ is closed and $A\subseteq\sigma(T)$, it follows that $[0,1]=\overline{A}\subseteq\sigma(T)$. On the other hand, if $\lambda\notin[0,1]$, then $S$ defined by $Se_j=\frac{1}{\lambda-a_j}e_j$ for any $j\in\mathbb{N}$ is a bounded (diagonal) operator whose inverse is $\lambda I-T$. Thus $\sigma(T)=[0,1]$. If $\mu$ is an eigenvalue, then we have $Tx=\mu x$ for some non-zero $x=\sum b_je_j$. Then $Tx=\sum a_j b_j e_j=\sum \mu b_j e_j$. Hence, for any $j\in\mathbb{N}$, $a_j b_j=\mu b_j$, which can only happen if $\mu=a_k$ for some $k$. Thus $\sigma_p(T)=A$. The generalization has an almost identical proof. Take $A$ to be a countable, dense subset of $K$, and define a diagonal operator as above.<|endoftext|> TITLE: If f''(x) > 0 does it mean it can possess at most 1 point of minimum? QUESTION [7 upvotes]: Graphically I can observe that if $f''(x)>0$ for all $x$ then it can have at most $1$ point of minimum. Is it true? If yes, how can I prove it? Thank you! REPLY [3 votes]: Comment: Is it possible to explain the MVT/Rolle's theorem with reference to a graph?<|endoftext|> TITLE: Normed vector spaces over finite fields QUESTION [12 upvotes]: Normed vector spaces are typically defined over the reals or complex numbers. Is there any "standard," well-behaved construction that generalizes this to a vector space over a finite field, such as $\Bbb F_2$? I'm looking for something kind of like the class of $\ell_p$ norms, except designed with finite fields in mind. Ideally, something that has deep fundamental properties making it well-behaved in the same way that the Euclidean norm is. REPLY [19 votes]: There is a "standard" way to consider normed spaces over arbitrary fields, but these are not well-behaved in the case of scalars in finite fields. If you want to work with norms on vector spaces over fields in general, then you have to use the concept of valuation. Valued field: Let $K$ be a field with valuation $|\cdot|:K\to\mathbb{R}$. That is, for all $x,y\in K$, $|\cdot|$ satisfies: $|x|\geq0$, $|x|=0$ iff $x=0$, $|x+y|\leq|x|+|y|$, $|xy|=|x||y|$. The set $|K|:=\{|x|:x\in K-\{0\}\}$ is a multiplicative subgroup of $(0,+\infty)$ called the value group of $|\cdot|$. The valuation is called trivial, discrete or dense accordingly as its value group is $\{1\}$, a discrete subset of $(0,+\infty)$ or a dense subset of $(0,+\infty)$. For example, the usual valuations in $\mathbb{R}$ and $\mathbb{C}$ are dense valuations.
The valuation is said to be non-Archimedean when it satisfies the strong triangle inequality $|x+y|\leq\max\{|x|,|y|\}$ for all $x,y\in K$. In this case, $(K,|\cdot|)$ is called a non-Archimedean valued field and $|n1_K|\leq1$ for all $n\in\mathbb{Z}$. Common examples of non-Archimedean valuations are the $p$-adic valuations in $\mathbb{Q}$ or the valuations of a field that is not isomorphic to a subfield of $\mathbb{C}$. Norm: Let $(K,|\cdot|)$ be a valued field and $X$ be a vector space over $(K,|\cdot|)$. A function $p:X\to \mathbb{R}$ is a norm iff for each $a,b\in X$ and each $k\in K$, it satisfies: $p(a)\geq0$ and $p(a)=0$ iff $a=0_X$, $p(ka)=|k|p(a)$, $p(a+b)\leq p(a)+p(b)$. In the case of a finite field, the valuation $|\cdot|$ must be the trivial one. In fact, if there is a nonzero scalar $x\in K$ such that $|x|\neq1$, then $\{|x^n|:n\in\mathbb{Z}\}=\{|x|^n:n\in\mathbb{Z}\}$ is infinite, so $\{x^n:n\in\mathbb{Z}\}$ is an infinite subset of $K$, which is a contradiction. Example of a normed space over a finite field: Let $K$ be any field with the trivial valuation (e.g. a finite field) and let $X$ be an infinite-dimensional vector space with Hamel basis $B$. We can define a norm $p$ by saying $p(e)$ is the number of nonzero coefficients there are when we write $e$ as a linear combination of elements of $B$. But in this context, we have unexpected situations. For example, two norms may induce the same topology without being equivalent. In fact, consider the trivial norm $q$ on $X$ defined by $q(e)=1$ for all nonzero $e\in X$. Then both norms, $p$ and $q$, induce the discrete topology, but $p/q$ is unbounded. So there is no constant $C$ such that $ p\leq Cq$. A comprehensive starting point to read about normed spaces in this context is the book: Non-Archimedean Functional Analysis, A.C.M. van Rooij, Dekker, New York (1978). For more information on finite fields, I recommend the paper: S. Borrey, Non-Archimedean Banach spaces over trivially valued fields, in: P-adic Functional Analysis, Editorial Universidad de Santiago, Chile, pp. 17-31 (1994). There, the norm is assumed to satisfy the strong triangle inequality. For the study of more advanced stuff, like locally convex spaces over valued fields, I recommend the book: Locally Convex Spaces over Non-Archimedean Valued Fields, C. Perez-Garcia and W.H. Schikhof, Cambridge Studies in Advanced Mathematics (2010).<|endoftext|> TITLE: Distributing pebbles QUESTION [14 upvotes]: The rules to this "game" are simple, but after checking 120 starting positions, I can still not find a single pattern that consistently holds. I am grateful for the smallest of suggestions. Rules: You have two bowls with pebbles in each of them. Pick up the pebbles from one bowl and distribute them equally between both bowls. If the number of pebbles is odd, place the remaining pebble in the other bowl. If the number of distributed pebbles was even, repeat the rule by picking up the pebbles from the same bowl. If the number of pebbles was odd, repeat the rule by picking up the pebbles from the other bowl. Continue applying the previous rules until you have to pick up exactly 1 pebble from a bowl, at which point you win. There are some starting positions, however, for which you will never be able to win. If the numbers of pebbles in the bowls are 5 and 3 respectively, you will cycle on forever. Question: Depending on the number of pebbles in each bowl at the starting position, can you easily predict if that position will be winnable/unwinnable?
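The update rule is mechanical, so a small simulation settles any given starting position; the sketch below (added for illustration, and assuming play starts by picking up the first bowl) reports a win when the bowl to be picked up holds exactly one pebble, and an endless cycle when a state repeats:

```python
def winnable(a, b):
    """Play the game, where `a` is the bowl about to be picked up.
    Returns True on a win (we must pick up exactly 1 pebble),
    False if a state repeats (we cycle forever)."""
    seen = set()
    while (a, b) not in seen:
        seen.add((a, b))
        if a == 1:
            return True
        if a % 2 == 0:
            # even: both bowls get a/2, and we pick from the same bowl again
            a, b = a // 2, b + a // 2
        else:
            # odd: the other bowl gets the extra pebble, and we pick it up next
            a, b = b + (a + 1) // 2, (a - 1) // 2
    return False

print(winnable(5, 3))  # False: the 5-and-3 position cycles forever
print(winnable(7, 7))  # True: equal bowls always result in a win
```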
Edit: Here is some Python code I wrote to generate answers for given starting values: http://codepad.org/IC4pp2vH Picking up $2^n$ pebbles will guarantee a win. Edit: As shown by didgogns, starting with n pebbles in both bowls always results in a win. REPLY [3 votes]: By the result of Simon, $T^k(n,n)=(1,2n-1)$ for some $k\in\mathbb{N}$ if and only if $(1,2n-1)$ is in the orbit of $(n,n)$. $(1,2n-1)$ is in the orbit of $(n,n)$ if and only if $(n,n)$ is in the orbit of $(1,2n-1)$. Indeed, $$T^2(1,2n-1)=T(2n,0)=(n,n)$$ and it is obvious that there exists $a$ such that $T^a(1,2n-1)=(1,2n-1)$. Then $T^{a-2}(1,2n-1)=(n,n)$, because $T$ is bijective, and you always win the game at $(n,n)$.<|endoftext|> TITLE: Backwards stability of QR vs SVD QUESTION [5 upvotes]: I've been reading Trefethen & Bau's book on Numerical Linear Algebra, and they have this one question whose answer does not entirely make sense to me. In particular, they imply that the SVD algorithm (the computation of the SVD, not the solution of $Ax = b$ by SVD) is not backwards stable. The suggestion is that this has to do with the fact that SVD maps from an $m\times n$ matrix into the space of triples of $m\times m$, $m\times n$, and $n\times n$ matrices for $U$, $\Sigma$, and $V$. They have a comment, with regards to the outer product computation, that since that, too, maps from a smaller dimensional space into a larger one, the computation should not be expected to be backwards stable. At the same time, Householder triangularization ($QR$) is backwards stable, but this too maps from a smaller dimensional space into a larger dimensional space. Is Householder just an exceptional case, or is there more to this? REPLY [4 votes]: Notice that there are restrictions on the output matrices. In QR factorization, $Q$ (an orthogonal matrix) has dimension $\frac{n(n-1)}{2}$ and $R$ (an upper triangular matrix) has dimension $\frac{n(n+1)}{2}$; the total dimension is $n^2$, the dimension of the space of $n \times n$ matrices.<|endoftext|> TITLE: How to establish the identity of the infinite sum QUESTION [7 upvotes]: How to prove the following identity? $$\sum_{n=-\infty}^{\infty}\frac{1}{(z+n)^2 +a^2} = \frac{\pi}{a}\cdot\frac{\sinh 2\pi a}{\cosh 2\pi a - \cos 2\pi z}$$ REPLY [3 votes]: The trick here is to use $$\frac{1}{(z+w)^2+a^2} \times \pi\cot(\pi w)$$ as shown at the following MSE link. The sum term is also quadratic in $n$ so the estimates of the integrals presented there apply to the present case as well. We get for the residues at $w = -z \pm ia$ the closed form $$\left.\frac{1}{2(z+w)} \pi\cot(\pi w)\right|_{w=-z\pm ia}.$$ Recall that $$\cot(v) = i \frac{\exp(iv)+\exp(-iv)}{\exp(iv)-\exp(-iv)}.$$ Introducing $x=\exp(\pi i z)$ and $y=\exp(\pi a)$ we get for the two residues $$\frac{\pi}{2a} \frac{1/x/y+xy}{1/x/y-xy} - \frac{\pi}{2a} \frac{y/x+x/y}{y/x-x/y} = \frac{\pi}{2a} \left(\frac{1+x^2y^2}{1-x^2y^2} - \frac{y^2+x^2}{y^2-x^2}\right) \\ = \frac{\pi}{2a} \frac{y^2+x^2y^4-x^2-x^4y^2-y^2+x^2y^4-x^2+x^4y^2} {(1-x^2y^2)(y^2-x^2)} \\ = \frac{\pi}{2a} \frac{2x^2y^4-2x^2}{(1-x^2y^2)(y^2-x^2)} = \frac{\pi}{a} \frac{y^2-1/y^2}{(y^2-x^2-x^2y^4+x^4y^2)/y^2/x^2} \\ = \frac{\pi}{a} \frac{y^2-1/y^2}{1/x^2 - 1/y^2 - y^2 + x^2}.$$ Flip the sign to get $$\bbox[5px,border:2px solid #00A000]{ \frac{\pi}{a} \frac{\sinh(2\pi a)}{\cosh(2\pi a)-\cos(2\pi z)}.}$$ Observe that when $z = q \mp ia$ with $q$ an integer we have $$\cos(2\pi z) = \cos(2\pi q \mp 2\pi i a) = \cosh(2\pi a)$$ and the formula becomes singular.
This is correct, however, since in this case the sum term $$\frac{1}{(z+n)^2+a^2}$$ is singular as well, namely when $n = -q$, and the sum is undefined.<|endoftext|> TITLE: What does this number theory statement mean? QUESTION [6 upvotes]: I recently started studying number theory by myself and I am reading a book about number theory. There is one thing that I don't understand, the statement below: If $a,b \in \mathbb{Z}$, then there is a $d \in \mathbb{Z}$ such that $(a,b)=(d)$. I understand everything but the $(a,b)=(d)$. I know it probably has something to do with set theory, but what? Specifically, why is $d$ in parentheses, and how can a pair be equal to a single variable? REPLY [4 votes]: $(a,b)$ means (in simple words) everything you get by multiplying $a$ and $b$ by whatever is in $\mathbb{Z}$ and adding the results. More precisely, it is the ideal generated by $a$ and $b$. Similarly, $(d)$ is generated by $d$ single-handedly, i.e., its multiples. Two elements generate an ideal, which is "as fine as" their gcd. Indeed, this is what gcd means: it is the "finer mesh" that covers their union. The gcd is the "common nature" of them, which may also give this "joined mesh". For example, by multiplying 6 and 10 by arbitrary integers and adding, you get exactly the multiples of 2. Here, gcd(6,10)=2. This is why the gcd is also notated as (6,10). If you want a proof of this, you can probably find it in the section you are reading, no matter what book you are referring to. Or you may want to show it yourself.<|endoftext|> TITLE: Derivative of $f(x)=e^7+\ln(4)$. QUESTION [5 upvotes]: I need to differentiate $$f(x)=e^7+\ln(4).$$ I know that $\dfrac d{dx}e^x = e^x$ and $\dfrac d{dx}\ln(x) = \dfrac {1}{x}$. I recently learned this, but I get stuck when it comes to solving this problem using real numbers. Can someone guide me with an example? REPLY [5 votes]: Consider this alternative. Find $g'(x)$ given $$g(x) = 2^7 +\log_4(4)$$ Perhaps here you more readily identify that $2^7 = 128$ and $\log_4(4)=1$. That is, both terms are constants: they're numbers that do not depend on the input variable $x$. Now since $$g(x) = 129,$$ the derivative of this constant function is zero, $$g'(x)=0.$$ Your problem is the same, except it involves the irrational number $e=2.71828\dots$ and its logarithm. Perhaps the difficulty is recognizing that $e$, as a symbol, represents a real number. It's like $\pi$ or $\sqrt{2}$, in that it is easiest to represent the number with a symbol rather than work with an interminable decimal or some alternative definition. Similarly, it seems you may have faced some difficulty distinguishing the constant number $\ln(4)$ from the function $\ln(x)$. Here's one more example: $$h(x) = \sin\left(\frac{\pi}{3}\right) + \log(57) + \sqrt{3} + 9^{1/7} + 2^e + \pi^\pi$$ Do you see any variables present on the right hand side? There are none. This function is constant. Actually, it's roughly equal to fifty. And since it is constant, $h'(x)=0$.<|endoftext|> TITLE: Are column vectors considered the "default" orientation in linear algebra? QUESTION [13 upvotes]: I am taking an online course where vectors are typically written out as column vectors. It seems like the only row vectors we have seen are the transposes of column vectors and labeled as such. So I'm wondering if mathematicians (at least those in linear algebra) tend to favor column vectors. This is essentially a question about convention, i.e. is it such a strong convention that, if you told a mathematician about a vector without specifying its orientation, they would assume it is a column vector?
Come to think of it, I guess that might make sense since it is the first dimension. REPLY [14 votes]: It's possible you haven't seen matrix multiplication defined yet. If that's the case, then this won't be much of a justification. If $v$ is a row vector of length $n$ (so a $1 \times n$ matrix), and $M$ is an $n \times n$ matrix, then $Mv$ isn't defined although $vM$ would be (and gives back a row vector of length $n$): compare $$\underset{``\text{nonsense''}}{\pmatrix{1 & 0 \\ 0 &1} \pmatrix{1 & 0}} \quad \text{versus} \quad \underset{``\text{less common, but fine''}}{\pmatrix{1 & 0}\pmatrix{1 & 0 \\ 0 &1}} \quad \text{versus} \quad \underset{``\text{the gold standard''}}{\pmatrix{1 & 0 \\ 0 &1} \pmatrix{1 \\ 0}.}$$ Because we like to think of matrices as functions from a vector space to itself, we probably want to emulate function notation (as in, $f(x)$) and apply matrices "on the left," as in $Mv$ rather than $vM$. Because of the way matrix multiplication is defined, this forces us to use column vectors. It is much more common in general to see matrices act (from the left!) on column vectors (although there are niche "act on the right" markets).<|endoftext|> TITLE: How to find $k$ such that $(\mathbb{Z} \times \mathbb{Z})/ \langle (m,n)\rangle \cong \mathbb{Z} \times \mathbb{Z}_k$ QUESTION [6 upvotes]: In John Fraleigh's book A First Course in Abstract Algebra, Exercises 15.7 and 15.11, one shows that $$ (\mathbb{Z} \times \mathbb{Z})/ \langle (1,2)\rangle \cong \mathbb{Z} \times \mathbb{Z}_1 \cong \mathbb{Z} \ \ \ \ \  \mbox{ and  } \ \ \ \ \ (\mathbb{Z} \times \mathbb{Z})/ \langle (2,2)\rangle \cong \mathbb{Z} \times \mathbb{Z}_2 $$ One does this with the first isomorphism theorem. With the same idea I proved for example that $$ (\mathbb{Z} \times \mathbb{Z})/ \langle (2,3)\rangle \cong \mathbb{Z} \times \mathbb{Z}_1 \cong \mathbb{Z} \ \ \ \ \ \mbox{ and }\ \ \ \ \ (\mathbb{Z} \times \mathbb{Z})/ \langle (2,4)\rangle \cong \mathbb{Z} \times \mathbb{Z}_2 $$ So I conjectured that $(\mathbb{Z} \times \mathbb{Z})/ \langle (m,n)\rangle \cong \mathbb{Z} \times \mathbb{Z}_{k}$ where $k=\gcd(m,n)$. For the previous four cases, the homomorphism $\phi: \mathbb{Z}\times \mathbb{Z} \to \mathbb{Z} \times\mathbb{Z}_k$ given by $$ \phi(x,y)=\left(\frac{nx-my}{k}, \ x \ \ (\mathrm{mod} \ k) \right) $$ is surjective with kernel $\langle (m,n)\rangle$. However, this is not the case when $(m,n)=(4,6)$. So, I cannot use what I did for the four cases to prove the general case. What I want to know is if my conjecture is true. If so, how can I give a general homomorphism? If it is not true, how can I find $k$, such that $(\mathbb{Z} \times \mathbb{Z})/ \langle (m,n)\rangle \cong \mathbb{Z} \times \mathbb{Z}_k$? Thanks in advance for any help/hint/comment! REPLY [5 votes]: Another approach is to solve $mx-ny=k$ and then show that $r(m/k,n/k)+s(x,y)$ is all of $\mathbb Z\times \mathbb Z$, and each element of $\mathbb Z\times\mathbb Z$ is expressible in this way in only one way. Then it is clear that $r(m/k,n/k)+s(x,y)\sim r'(m/k,n/k)+s'(x,y)$ if and only if $k\mid r-r'$ and $s=s'$. So, how do we show the above? If $mx-ny=k$ then $$\begin{pmatrix}m/k&n/k\\y&x\end{pmatrix}$$ has determinant $1$, and thus has an inverse with integer coefficients.<|endoftext|> TITLE: Does Leibniz's rule hold for improper integrals? QUESTION [9 upvotes]: Does this hold in general?
$$ \frac{\mathrm{d}}{\mathrm{d}t}\int_{-\infty}^{\infty} f(x,t) \mathrm{d}x = \int_{-\infty}^{\infty}\frac{\partial}{\partial t}f(x,t) \mathrm{d}x. $$ I know it is true if the bounds on the integral are finite, but can the result be extended to improper integrals? Also, if it is true in special cases, what are those cases? Thanks a lot! REPLY [11 votes]: No, the equality does not hold in general. EXAMPLE $1$ For a first example, the integral $I(x)$ as given by $$I(x)=\int_{-\infty}^\infty\frac{\sin(xt)}{t}\,dt$$ converges uniformly for all $|x|\ge \delta>0$. But the integral of the derivative with respect to $x$, $\int_{-\infty}^\infty \cos(xt)\,dt$, diverges for all $x$. EXAMPLE $2$ As another example, let $J(x)$ be the integral given by $$J(x)=\int_0^\infty x^3e^{-x^2t}\,dt$$ Obviously, $J(x)=x$ for all $x$ and hence $J'(x)=1$. However, $$\int_0^\infty (3x^2-2x^4t)e^{-x^2t}\,dt=\begin{cases}1&,x\ne 0\\\\0&,x=0\end{cases}$$ Thus, formal differentiation under the integral sign leads to an incorrect result for $x=0$ even though all integrals involved are absolutely convergent. Sufficient Conditions for Differentiating Under the Integral If $f(x,t)$ and $\frac{\partial f(x,t)}{\partial x}$ are continuous for all $x\in [a,b]$ and $t\in \mathbb{R}$, and if $\int_{-\infty}^\infty f(x,t)\,dt$ converges for some $x_0\in[a,b]$ and $\int_{-\infty}^\infty \frac{\partial f(x,t)}{\partial x}\,dt$ converges uniformly for all $x\in [a,b]$, then $$\frac{d}{dx}\int_{-\infty}^\infty f(x,t)\,dt=\int_{-\infty}^\infty \frac{\partial f(x,t)}{\partial x}\,dt$$<|endoftext|> TITLE: Sum of random variables at least $\log n$ QUESTION [8 upvotes]: Let $X_1,\dots,X_n$ be independent random variables in $\{0,1\}$, and $X=X_1+\dots+X_n$. Suppose that $\mathbb{E}[X]=1$. What is the best possible upper bound on $\text{Pr}(X>\log n)$? Using the multiplicative form of Chernoff's bound, we have that $\text{Pr}(X>1+\delta)<\dfrac{e^\delta}{(1+\delta)^{1+\delta}}$ for any $\delta>0$. When $\delta$ is $\log n-1$, this becomes $\dfrac{e^{\log n-1}}{(\log n)^{\log n}}$. This is approximately $n^{1-\log\log n}$. Are there examples of random variables $X_1,\dots,X_n$ that show that this bound resulting from Chernoff is (approximately) tight? REPLY [2 votes]: Let $\ell(n)=\lfloor \ln(n)\rfloor+1$, and let all $E[X_i]=\frac{1}{\ell(n)}$ for $i\leq \ell(n)$ and $E[X_i]=0$ otherwise. Essentially, you are concentrating all the "mass" into the first $\ell(n)\approx\ln(n)$ variables (the minimum number whose sum can exceed $\ln(n)$), divided evenly. The probability that $X_i=1$ for $i\leq \ell(n)$ is $\left(\frac{1}{\ell(n)}\right)^{\ell(n)}\approx\left(\frac{1}{\ln(n)}\right)^{\ln(n)}=n^{-\ln\ln(n)}\approx n^{-\ln\ln(n)+1}$, where the last term is the Chernoff bound you obtained. Note that the (multiplicative) error "hidden" in the first of the two $\approx$ (due to having to use $\lfloor \ln(n)\rfloor+1$ instead of $\ln(n)$ because the $X_i$ are discrete) is of the order of $\ln(n)$, so smaller than that "hidden" in the second $\approx$, which is $n$.
The latter (which is, after all, just a $+1$ added to a $-\ln\ln(n)$ exponent) is mostly a consequence of the approximations required to produce a manageable formula like the $\frac{e^\delta}{(1+\delta)^{1+\delta}}$ one you used, from the "full-power" Chernoff-Hoeffding bound written in terms of relative entropy for $n$ independent $X_i$ with values in $[0,1]$ and expected sum $\mu$: $\Pr\left[\sum_{i=1}^n X_i \geq \mu+\lambda\right]\leq e^{-nH_{\mu/n}\left(\frac{\mu+\lambda}{n}\right)}$, where $H_p(x)=x\ln(\frac{x}{p})+(1-x)\ln(\frac{1-x}{1-p})$ is the relative entropy of $x$ with respect to $p$.<|endoftext|> TITLE: What does "open set" mean in the concept of a topology? QUESTION [24 upvotes]: Given the following definition of topology, I am confused about the concept of "open sets". 2.2 Topological Space. We use some of the properties of open sets in the case of metric spaces in order to define what is meant in general by a class of open sets and by a topology. Definition 2.2. Let $X$ be a nonempty set. A topology $\mathcal{T}$ for $X$ is a collection of subsets of $X$ such that $\emptyset,X\in\mathcal{T}$, and $\mathcal{T}$ is closed under arbitrary unions and finite intersections. We say $(X,\mathcal{T})$ is a topological space. Members of $\mathcal{T}$ are called open sets. If $x\in X$ then a neighbourhood of $x$ is an open set containing $x$. It seems to me that the definition of an open subset is that a subset $A$ of a metric space $X$ is called open if for every point $x \in A$ there exists $r>0$ such that $B_r(x)\subseteq A$. What is the difference between being open in a metric space and being open in a topological space? Thanks so much. REPLY [3 votes]: Here is an interesting intuitive definition for open set that actually gives rise to the topological axioms! An open set is a collection of objects for which we can easily verify membership of any given member. Note that it may not be easy to verify non-membership of non-members! Now let us take a look at the axioms: The empty collection is an open set. Vacuously true since it has no members. Any finite intersection of open sets is an open set. True because you just verify membership in each open set one at a time. Any union of open sets is an open set. True because for each member in the union you just have to verify its membership in the correct open set of the union. Notice how the restriction to "finite intersection" is crucial here because we may not be able to easily verify membership in infinitely many open sets! Here we are using the intuitive assumption that it is easy to do finitely many easy tasks. Now of course one can ask whether there is a concrete representation of such intuitive notion of open sets. There is! In a metric space, suppose that you interpret that membership of $x$ in $S$ is easily verified iff there is some positive error margin $\varepsilon$ such that moving $x$ by a distance of at most $\varepsilon$ cannot make it no longer a member of $S$. Then indeed you can easily verify that metric-open sets have easily verifiable membership, and you can see why infinite intersections of metric-open sets may not be open (since those sets may require smaller and smaller error margins for that member). Incidentally, there is a related notion of semi-decidable sets (namely the sets such that there is a program that accepts the input iff it is a member) or equivalently recursively enumerable (RE) sets (namely the sets such that a program can enumerate all their members by appending them one by one to the end of the designated output list).
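To make the enumeration idea concrete, here is a toy sketch (Python; the set of perfect squares stands in for an arbitrary RE set, and the function names are mine, not from any source):

    from itertools import count

    def enumerate_members():
        # A program that appends members to the output list one by one:
        # here the RE set is the set of perfect squares.
        for n in count():
            yield n * n

    def semi_decide(x):
        # Returns True iff x is a member. For a general RE set there is
        # no bound telling us when to give up, so this may run forever
        # on non-members: membership is easy to confirm, non-membership
        # is not.
        for member in enumerate_members():
            if member == x:
                return True

(For the squares one could of course stop once a member exceeds $x$, but a general RE set gives no such stopping rule, which is exactly the asymmetry in the intuitive definition above.)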
RE sets are slightly related to open classes in linguistics, in that the enumeration idea corresponds to the notion of a collection whose members are not fixed but get added over time but never removed. We have the following analogous facts for RE sets: The empty set is an RE set. It is enumerated by a program that rejects everything. Any finite intersection of RE sets is an RE set. Take any RE sets $S_{1..n}$, and let $P_{1..n}$ be the corresponding programs. Then $\bigcap_{k \in \{1..n\}} S_k$ is enumerated by a program that runs all the programs $P_{1..n}$ in parallel and appends an object to the output list whenever it has been appended to all of their output lists. (Again note how the finiteness restriction for intersections naturally arises from the fact that we can only run finitely many programs in parallel!) Any RE union of RE sets is an RE set. (Note that an RE set is represented by a program, so by "RE union of RE sets" we mean "union of RE sets whose programs form an RE set".) Take any set $S$ of RE sets such that there is a program $P$ that enumerates all the programs corresponding to the RE sets in $S$. Then $\bigcup S$ is enumerated by a program that runs $P$ and in parallel runs every program output by $P$ (which we shall call a child program) and appends an object to the output list whenever it is appended for the first time to the output list for some child program.<|endoftext|> TITLE: Why are the eigenvalues of $AA^T$ the squares of the singular values of $A$? QUESTION [7 upvotes]: Let $A$ be a matrix whose non-zero singular values are $\sigma_1,\ldots,\sigma_r$. $A$ can be written as $A=U\Sigma V^T$ where both $U,V$ are unitary matrices. Let's look at $$AA^T = U\Sigma V^TV\Sigma U^T= U\Sigma^2 U^T$$ Now, from this point one should infer that the eigenvalues of $AA^T$ are $\sigma_1^2,\ldots,\sigma_r^2$ but I'm not sure how exactly. Does that have something to do with the fact that $U$ is unitary? REPLY [9 votes]: Suppose $A$ is an $m\times n$ matrix over $\mathbb{R}$ and consider the equation $AA^T = U\Sigma\Sigma^TU^T$, where $A = U\Sigma V^T$ is the SVD of $A$. Then $\Sigma\Sigma^T = \operatorname*{diag}\{\sigma_1^2,\ldots,\sigma_r^2,0,\ldots,0\} \in \mathbb{R}^{m\times m}$ where $r \leq \min\{m,n\}$ is the rank of $A$. Since $U$ is an orthogonal matrix we can write this equation equivalently as $$ (AA^T)U = U\Sigma\Sigma^T $$ Let $\lambda_i$ denote the $i$th diagonal element of $\Sigma\Sigma^T$. Then this equation says that $$ (AA^T)u_i = \lambda_i u_i $$ for $i = 1,\ldots,m$. Thus, each squared singular value $\sigma_j^2$ is indeed an eigenvalue of $AA^T$ with corresponding eigenvector $u_j$. The remaining eigenvalues of $AA^T$ (if there are any) are $0$, with algebraic multiplicity $m-r$.<|endoftext|> TITLE: 3 plate dinner problem QUESTION [6 upvotes]: Consider $n$ people dining at a circular table. Each of them is ordering one of three plates. What is the probability that no two people sitting next to one another will order the same plate? I intuitively think that every person except the first one has 2 choices as he cannot order the same as the one preceding him. However I can't figure out what happens with the last person as he can have either 1 or 2 choices depending whether the person before him had chosen the same dinner as the first person. REPLY [5 votes]: You are considering the problem where people are sitting in a line. This is a good idea, but it is not enough to compute the probability that it is possible to make a circle from this line. When is it possible?
If and only if the last of the $n$ people has chosen a plate different from both the previous person's and the first person's. So we need to compute two sequences: $f_k$ is the probability that $k$ people sitting in a line have ordered plates that differ for each pair of neighbours and the last one has ordered a plate different from the first one's, $g_k$ is the probability that $k$ people sitting in a line have ordered plates that differ for each pair of neighbours and the last one has ordered the same plate as the first one. Then $$f_1 = 0,\\ g_1 = 1,\\ f_k = \frac{f_{k - 1} + 2g_{k - 1}}{3} \text{ for } k > 1,\\ g_k = \frac{f_{k - 1}}{3} \text{ for } k > 1.$$ Substituting the last equation into the penultimate one we get: $$f_k = \frac{3f_{k - 1} + 2f_{k - 2}}{9} \text{ for } k > 2.$$ EDIT. So $$f_k = \frac23\left(\left(\frac23\right)^{k - 1} - \left(-\frac13\right)^{k - 1}\right) = \frac{2^k + 2\cdot (-1)^k}{3^k}$$ is the answer for your problem when $k \ge 2$ people are sitting at the table. For $k = 1$ the answer depends on whether this man is next to himself or not.<|endoftext|> TITLE: Is it possible to obtain a 12×12 carpet by cutting a 16×9 carpet once? QUESTION [6 upvotes]: Yesterday my brother asked me a question. Suppose you have a carpet of size $16\times 9$. Now you have to cut this carpet in such a way that after cutting it and arranging pieces, the dimension is $12\times 12$. You can only cut the carpet once. You can arrange the obtained pieces in any way you like. How can you do this? REPLY [4 votes]: Essentially the same problem appears in Boris Kordemsky's puzzle book; the solution (shown there in a figure) is a staircase cut: cut the $16\times 9$ carpet along a staircase whose steps are $4$ wide and $3$ tall, then slide one of the two congruent pieces up one step to reassemble them into a $12\times 12$ square.<|endoftext|> TITLE: Small question about proof QUESTION [8 upvotes]: How do I prove this without using a calculator? $124235127381 \cdot 34562736458392 \not= 4293905956544262926431352$ What methods should I use? Or a suggestion in a method to use in order to solve this. REPLY [2 votes]: "How do I prove this without using a calculator?" What do we mean by "using"? The answer you're looking for appears to be something along the following lines: guess a way in which the inequality might be proved without computing all the digits (or even very many of the digits) of the product on the left-hand side; then check to see if this method of proof works. If it does not work, guess another method. Multiplying just the last $n$ digits on the left-hand side will tell you the remainder of each side on division by $10^n.$ But for any value of $n$ you're likely to want to try, this turns out not to work. There are quick tests for division by other numbers you can perform, however; casting out $3$s works, as it turns out, as does casting out $9$s. Note that you could be guessing and checking a lot of methods, depending on how cleverly the person posing the problem has protected it against different methods and on your luck at guessing one of the methods that works. The proof is a lot harder to guess in this way if we change the right-hand side to $4293905966547263926431352,$ for example. Another interpretation of the problem is that the proof must not mention any calculations performed by a calculator. This leaves open the possibility that the discovery of the proof might be assisted by a calculator. We find that $$124235127381 \cdot 34562736458392 = 42939059\underline{6}6544262926431352,$$ which agrees with the given right-hand side except for the single underlined digit, which is $6$ in the correct product and $5$ in the given number.
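As a quick illustration of those casting-out tests (a sketch in Python, with the numbers taken from the question), the residues of the true product and of the claimed value disagree both mod $3$ and mod $9$:

    a = 124235127381
    b = 34562736458392
    claimed = 4293905956544262926431352

    # Compare the residue of the product with the residue of the claim.
    for m in (3, 9):
        print(m, (a % m) * (b % m) % m, claimed % m)
    # mod 3: 0 vs 2; mod 9: 3 vs 2 -- so the claimed equality fails.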
A difference of one in one digit implies that most of the quick tests for divisibility will work, especially if the test also gives you a unique result for each remainder modulo the divisor. For example, the test for divisibility by $3$ will definitely show remainders on the left-hand side whose product is inconsistent with the remainder on the right-hand side (which is $r-1,$ where $r$ is the "correct" remainder). So we immediately know this test will work. On the other hand, if the right-hand side in the question had been $4293905966547263926431352,$ we would immediately have been able to rule out remainder-modulo-$3$ and several other methods of attempted proof, and would have a good clue as to what methods might work instead. (I might use the fact that $1000\equiv1 \pmod{37},$ or that $10000\equiv1 \pmod{101}.$) By the way, the given numbers are tedious to multiply out by hand, but not impossible. You probably wouldn't do this for question $5$ out of $40$ on a three-hour examination, but it's not going to take hours. Possibly the less said about that idea the better, however.<|endoftext|> TITLE: Why is the Borel Subgroup self normalizing? QUESTION [5 upvotes]: Let $B = B_n(\mathbb{F})$ be the set of invertible upper triangular matrices of order $n$ with entries from $\mathbb{F}$. I am supposed to be showing $N_G(B) = B$. I have been told that what we need to notice here is that the stabilizer of the standard flag (when considered with its natural action of $GL_n(\mathbb{F})$) is $B$ and we are supposed to be using this fact here, somehow. My attempt was to try picking an element from the normalizer and showing that it necessarily stabilizes the standard flag, and hence deduce that the elements in the normalizer are in fact in the stabilizer, which is $B$. However, I couldn't succeed with this. [I have no idea of why flags or Borel groups are considered in general. So, please post an answer which is approachable for a person with no knowledge of Algebraic Groups.] REPLY [3 votes]: Alternative proof: $G=GL_{\mathbb F}(V)$ acts on the set of (complete) flags $(V_i)_i$ (i.e., the $V_i$ are subspaces of $V$ with $\dim V_i = i$ and $V_i\le V_{i+1}$ where $V_0=0$ and $V_n=V$) and the Borel subgroup $B$ is the stabilizer of the standard flag. For $v_i\in V_i\setminus V_{i-1}$ the smallest $B$-invariant subspace containing $v_i$ is $V_i$: Picking $v_j\in V_j\setminus V_{j-1}$ for $j\in\{1,\dots, i-1, i+1, \dots, n\}$ gives a basis of $V$, so for arbitrary $v\in V_{i-1}$ one can define a linear map $\lambda_v$ sending $v_i$ to $v_i+v$ and fixing all other $v_j$ ($j\ne i$). Clearly $\lambda_v$ stabilizes every $V_k$, i.e., $\lambda_v\in B$. Hence any $B$-invariant subspace of $V$ containing $v_i$ contains also $\lambda_v(v_i)-v_i = v$ for arbitrary $v\in V_{i-1}$, hence it contains also $\langle v_i, V_{i-1}\rangle = V_i$ proving the claim. Any $B$-invariant subspace $W$ of $V$ of dimension $i$ equals $V_i$ as otherwise it contains some $w\in W\setminus V_i$. For $j$ minimal with $w\in V_j$ (hence $j>i$) we get by the last paragraph $V_j\le W$ contradicting $\dim V_j = j > i = \dim W$. This implies that the standard flag is the only (complete) flag stabilized by $B$, i.e., the set of fixed points of $B$ consists of only one element.
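(For the smallest case one can even check this fixed-point claim by brute force. A quick sketch in Python for $n=2$ over $\mathbb{F}_2$, where a complete flag is just a choice of a line; the encoding is mine:

    # n = 2 over F_2: check that the standard flag (the line through e_1)
    # is the only line fixed by every invertible upper triangular matrix.
    def apply(m, v):
        return tuple(sum(m[i][j] * v[j] for j in range(2)) % 2 for i in range(2))

    lines = [(1, 0), (0, 1), (1, 1)]          # the three 1-dim subspaces of F_2^2
    B = [((1, t), (0, 1)) for t in (0, 1)]    # invertible upper triangular matrices
    print([v for v in lines if all(apply(m, v) == v for m in B)])   # [(1, 0)]

Over $\mathbb{F}_2$ a line is $\{0,v\}$ and an invertible matrix never sends $v$ to $0$, so "fixed as a subspace" reduces to $mv=v$.)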
As the normalizer $N := N_G(B)$ of $B$ acts on the set of fixed points of $B$ (taking $n\in N, b\in B$ and a fixed point $x$ of $B$ one gets $bnx = n(n^{-1}bn)x = nb'x = nx$ as $b':=n^{-1}bn\in B$), it fixes the standard flag, hence $N$ is contained in $B$.<|endoftext|> TITLE: How do I calculate the value of this series? QUESTION [11 upvotes]: I want to find the value to which this series converges $$\sum_{n=0}^\infty \frac{(-1)^n}{n^2+1}$$ I tried looking at the sequence of partial sums $$S_k = \sum_{n=0}^k \frac{(-1)^n}{n^2+1}$$ and I noticed that $$\frac{-1}{n^2+1} \leq \frac{(-1)^n}{n^2+1} \leq \frac{1}{n^2 +1}$$ and so I think that by the squeeze rule I can see (I could have just noticed it by logic, but okay) that the terms converge to zero. How do I find the value of the original series though? I could only show that it converged. REPLY [13 votes]: You may notice that: $$ \frac{1}{n^2+1} = \int_{0}^{+\infty}\sin(x)e^{-nx}\,dx\tag{1}$$ from which$^{(*)}$: $$ S=\sum_{n\geq 0}\frac{(-1)^n}{n^2+1}=1+\int_{0}^{+\infty}\sin(x)\sum_{n\geq 1}(-1)^n e^{-nx}\,dx \tag{2}$$ and: $$ S = 1-\int_{0}^{+\infty}\frac{\sin(x)}{e^x+1}\,dx =\color{red}{\frac{1}{2}\left(1+\frac{\pi}{\sinh\pi}\right)}\tag{3}$$ where the last equality follows from integration by parts and the residue theorem. The same can be proved by considering the Fourier cosine series of $\cosh(x)$ over the interval $(-\pi,\pi)$. Yet another (Eulerian) approach. It is clearly enough to compute $\sum_{n\geq 1}\frac{1}{n^2+1}$ and $\sum_{n\geq 1}\frac{1}{4n^2+1}$. From the Weierstrass product for the $\sinh$ function we have $$ \frac{\sinh(\pi z)}{\pi z}=\prod_{n\geq 1}\left(1+\frac{z^2}{n^2}\right)\tag{4}$$ and by applying $\frac{d}{dz}\log(\cdot)$ to both sides: $$ -\frac{1}{z}+\pi\coth(\pi z) = \sum_{n\geq 1}\frac{2z}{z^2+n^2}\tag{5}$$ At last we just need to evaluate the LHS of $(5)$ at $z=1$ and $z=\frac{1}{2}$. $(*)$ The exchange of $\sum$ and $\int$ is allowed by the absolute convergence of the series $\sum_{n\geq 0}\frac{(-1)^n}{n^2+1}$, the trivial inequality $\left|\sin(x)\right|\leq x$ and the dominated convergence theorem. For any $x>0$ we have $$ \sum_{n=1}^{N} e^{-nx}\leq \frac{1}{e^x-1} $$ and $\frac{x}{e^x-1}$ is a function belonging to $\mathcal{L}^1(\mathbb{R}^+)$, whose integral over $\mathbb{R}^+$ equals $\zeta(2)=\frac{\pi^2}{6}$.<|endoftext|> TITLE: Transcendental solutions to constrained polynomial optimization problems? QUESTION [7 upvotes]: Can an optimization problem in which the objective and constraints are all polynomials with rational coefficients have a solution involving transcendental values? REPLY [2 votes]: I'm assuming what you mean is the following: Let $f(x_1,\cdots,x_n)$ and $g(x_1,\cdots,x_n)$ be polynomials with rational coefficients, and let $(a_1,\cdots,a_n)$ be the point in $\mathbb{R}^n$ where $f$ is minimized, given that $g(a_1,\cdots,a_n)=0$. Is it possible that not all of the $a_i$ are algebraic? As A.Γ. pointed out, it's possible that the set that reaches the minimum is a line, in which case they can certainly be transcendental. I am henceforth assuming that there are finitely many points where the minimum is reached; a corollary of this is that all of the points reaching global minima satisfy $\nabla f = \lambda \nabla g$ for some real $\lambda$ (Lagrange multipliers).
We can consider $a_1,\cdots,a_n,\lambda$ as $n+1$ variables, with the $n+1$ conditions that $$\frac{\partial f}{\partial x_i}\bigg|_{\mathbf{a}}=\lambda \frac{\partial g}{\partial x_i}\bigg|_{\mathbf{a}}$$ for all $1\leq i\leq n$, and $g(\mathbf{a})=0$. These are all polynomial equations in the predefined $n+1$ variables, and (by our assumptions) the system is well-behaved, with only finitely many solutions. Algebraically, for some polynomials with rational (we may presume integer) coefficients $P_1,\cdots,P_{n+1}$, we can represent our equations as $P_k(a_1,\cdots,a_n,\lambda)=0$ for all $1\leq k\leq n+1$. We will prove, by induction on $m$, that a system of $m$ polynomial equations with rational coefficients in $m$ variables will, if it has only finitely many solutions, have only algebraic ones (it might have none). Base case: If $m=1$ this is just the definition of an algebraic number; either there are finitely many algebraic solutions or the polynomial is the zero polynomial. Inductive step: Assume one has a set of $m+1$ polynomial equations in $m+1$ variables. Let those variables be $x_1,\cdots,x_{m+1}$, and WLOG let the $(m+1)$-th equation depend on $x_{m+1}$ (if not, we may simply reorder the equations). Each of our $m+1$ equations can be viewed as a polynomial equation in $x_{m+1}$ with coefficients rational-coefficiented polynomials in $x_1,\cdots,x_m$. Recall that two polynomials have a common root iff their resultant is $0$. Since this resultant is a rational-coefficiented polynomial in terms of the coefficients, we get that for any two polynomials in our list, their resultant (when viewed as a polynomial in $x_{m+1}$) being $0$ is a polynomial equation in $x_1,\cdots,x_m$. Thus, there exists a simultaneous solution for $x_{m+1}$ iff the polynomial equations determined by the resultant of $P_i$ and $P_{m+1}$ (for all $1\leq i \leq m$) are all solved (they are $m$ equations in $x_1,\cdots,x_m$). So, by the inductive hypothesis, either there are infinitely many solutions, or all solutions result in $x_1,\cdots,x_m$ all being algebraic. In addition, since there was nothing special in the ordering of our variables, $x_2,\cdots,x_{m+1}$ are all algebraic. Thus, all of the variables are algebraic, finishing the inductive step.<|endoftext|> TITLE: How to calculate the series $\sum\limits_{n=1}^{\infty} \arctan(\frac{2}{n^{2}})$? QUESTION [7 upvotes]: I encountered the series $$ \sum_{n=1}^{\infty} \arctan\frac{2}{n^{2}}. $$ I know it converges (by comparison with $\sum \frac{2}{n^2}$), but if I need to calculate its limit explicitly, how do I do that? Any hint would be helpful. REPLY [3 votes]: \begin{align*} \sum_{n=1}^\infty\arctan\left ( \frac{2}{n^2} \right ) &=-\arg \prod_{n=1}^\infty\left (1-\frac{2i}{n^2} \right ) \\ &=-\arg \prod_{n=1}^\infty\left (1-\frac{(\sqrt{2i})^2}{n^2} \right ) \\ &=-\arg\left(\frac{\sin(\pi\sqrt{2i})}{\pi\sqrt{2i}} \right ) \\ &=-\arg\left(-\frac{(1/2+i/2)\sinh\left(\pi \right )}{\pi} \right ) \\ &= \frac{3\pi}{4} \end{align*}<|endoftext|> TITLE: Epsilon-Delta: Prove $\frac{1}{x} \rightarrow 7$ as $x \rightarrow \frac{1}{7}$ QUESTION [5 upvotes]: Prove that $\displaystyle\frac{1}{x} \rightarrow 7$ as $\displaystyle x \rightarrow \frac{1}{7}$. I need to show this with an $\epsilon-\delta$ argument. Still figuring these types of proofs out though, so I could use some tips/critiques of my proof, if it is correct at all. It might not be so clear, but I use the fact that $\displaystyle\left|x - \frac{1}{7}\right| < \delta$ several times in the proof.
For $\varepsilon > 0$, let $\displaystyle\delta = \min\left\{\frac{1}{14}, \frac{\varepsilon}{98}\right\}$. Then $\displaystyle \left|x - \frac{1}{7}\right| < \delta$ implies: $$\left|\frac{1}{7}\right| = \left|\left(-x + \frac{1}{7}\right) + x\right| \leq \left|x - \frac{1}{7}\right| + \left|x\right| < \frac{1}{14} + |x|,$$ and so $\displaystyle |x| > \frac{1}{14}$. Also, $\displaystyle \left|x - \frac{1}{7}\right| < \delta$ implies: $$\left|\frac{1}{x} - 7\right| = \left|\frac{1-7x}{x}\right| = 7\frac{\left|x - \frac{1}{7}\right|}{|x|} < 98\left|x - \frac{1}{7}\right| < \frac{98\varepsilon}{98} = \varepsilon.$$ Thus for $\varepsilon > 0$, $\displaystyle\left|\frac{1}{x} - 7\right| < \varepsilon$ if $\displaystyle\left|x - \frac{1}{7}\right| < \delta$, for $\displaystyle \delta = \min\left\{\frac{1}{14}, \frac{\varepsilon}{98}\right\}$. REPLY [2 votes]: The only logical error I could find is related to the definition of $\delta$. If you want to use strict inequalities, you should have $\delta<\min\left\{\frac{1}{14},\frac{\varepsilon}{98}\right\}$. Otherwise, at least one of the strict inequalities should be changed (depending on the value of $\varepsilon$). Regardless, a good proof if you're still gaining familiarity with $\varepsilon-\delta$ proofs.<|endoftext|> TITLE: Principal Bundle and Cocycle QUESTION [11 upvotes]: Let $G$ be a Lie Group and $X$ a smooth manifold. Let $G Bund(X)$ be the category of $G$-Principal Bundles. Objects are maps $\pi: P \rightarrow X$ where $P$ is a right $G$-space such that local triviality is satisfied, and maps $f: \pi_1 \rightarrow \pi_2$ are $G$-morphisms $f: P_1 \rightarrow P_2$ such that $\pi_2\circ f = \pi_1$. A standard result is that there is a bijection between the first Čech cohomology group $\check{H}^1(X, G)$ and isomorphism classes of $G$-Principal Bundles. To see that, given a $G$-cocycle $\{{g_{\alpha\beta}}\}$ over an open cover $\{U_{\alpha}\}$ of $X$, one can form the space $P = \bigcup_{\alpha}(\{\alpha\}\times U_\alpha\times G)$ and quotient it by $(\alpha, x, g) \sim (\beta, y, h) \Leftrightarrow (x = y) \wedge (h = g_{\beta\alpha}(x)\cdot g)$. That being said, my question is do we have an equivalence of categories (groupoids here) $$C \simeq G Bund(X)$$ for a category $C$ that is described in terms of $G$-cocycles. I know that there is such an equivalence given a classifying space $BG$ and in the principal bundle article of the ncatlab they are talking about an equivalence: $$\mathbf{H}(X, \mathbf{B}G) \stackrel{\simeq}{\to} G Bund(X)$$. But can we state that with a category $C$ whose objects are $G$-cocycles or cohomologous classes $\omega\in \check{H}^1(X, G)$? Also, I would be glad if someone could explain the equivalence found in the ncatlab article but concretely in our case (not in the abstract context of ncatlab). I can't figure out whether this abstract construction is more related to cocycles or to classifying spaces. Thanks, Paul. REPLY [7 votes]: I've found my answer. Let $\check{Z}^1(X, G)$ be the following category: Objects: $(\{U_\alpha\}, \{\Phi_{\alpha\beta}\})$ where $\{U_\alpha\}$ is an open cover of $X$ and $\Phi_{\alpha\beta}: U_\alpha\cap U_\beta\longrightarrow G$ are smooth functions such that: $$\forall\alpha,\beta,\gamma: \Phi_{\alpha\beta}\cdot \Phi_{\beta\gamma} = \Phi_{\alpha\gamma}$$ So objects are cocycles.
Morphisms: $(\{U_\alpha\}, \{\Phi_{\alpha\beta}\})\stackrel{t}\longrightarrow (\{V_i\}, \{\varphi_{ij}\})$ is a collection $t = \{t_{i\alpha}\}_{i\alpha}$ of smooth functions $t_{i\alpha}: U_\alpha\cap V_i\longrightarrow G$ such that: $$\forall \alpha, \beta, i, j: \varphi_{ji}\cdot t_{i\alpha}\cdot\Phi_{\alpha\beta} = t_{j\beta}$$ Composition: Let $(\{U_\alpha\}, \{\Phi_{\alpha\beta}\})\stackrel{t}\longrightarrow (\{V_i\}, \{\varphi_{ij}\})$ and $(\{V_i\}, \{\varphi_{ij}\})\stackrel{u}\longrightarrow (\{W_a\}, \{\theta_{ab}\})$ be two morphisms. Then $(\{U_\alpha\}, \{\Phi_{\alpha\beta}\})\stackrel{v}\longrightarrow (\{W_a\}, \{\theta_{ab}\})$ is defined by $v_{a\alpha} = u_{ai}\cdot t_{i\alpha}$ on $U_\alpha\cap W_a\cap V_i$. Note that it is well defined because $\forall i, j: u_{ai}\cdot t_{i\alpha} = u_{aj}\cdot \varphi_{ji}\cdot t_{i\alpha}= u_{aj}\cdot t_{j\alpha}$. So now define the functor $K: \check{Z}^1(X, G)\longrightarrow GBund(X)$ as follows. For $\{\Phi_{\alpha\beta}\} \in \check{Z}^1(X, G)$, let $P = \bigcup_{\alpha}(\{\alpha\}\times U_\alpha\times G)/\sim$ as explained in my question. Then $K(\{\Phi_{\alpha\beta}\})$ is the $G$-principal bundle $\pi:P\longrightarrow X$. Given another cocycle $\{\varphi_{ij}\}$ and a map $\{\Phi_{\alpha\beta}\}\stackrel{f}{\longrightarrow}\{\varphi_{ij}\}$ one can check that there is a unique map $K(\{\Phi_{\alpha\beta}\})\stackrel{K(f)}{\longrightarrow}K(\{\varphi_{ij}\})$ such that $\varphi_i\circ K(f)\circ \Phi^{-1}_\alpha = f_{i\alpha}$. Hence $K$ is full and faithful. Also, for all $P\in GBund(X)$ there exists $\{\Phi_{\alpha\beta}\}$ such that $P \simeq K(\{\Phi_{\alpha\beta}\})$. Note that here you need to choose a $G$-atlas $\{\Phi_\alpha\}$ for $P$ and take $\{\Phi_{\alpha\beta}\} = \{\Phi_\alpha\circ\Phi^{-1}_\beta\}$. So $K$ is dense. Because $K$ is full, faithful and dense, there exists a quasi-inverse for $K$ and so we have proved $$GBund(X)\simeq\check{Z}^1(X, G)$$ Note that the quasi-inverse depends on the choice of a $G$-atlas for each $G$-Principal Bundle. Note also that taking the connected components of the categories we have: $$GBund(X)_0\simeq\check{H}^1(X, G)$$ Finally, this construction works also for $(G, \lambda)$-bundles where $\lambda: G\times F\longrightarrow F$ is a faithful action of $G$ on a typical fiber $F$. I haven't written all the details so I hope it's clear enough.<|endoftext|> TITLE: Every Submodule of a Free Module is Isomorphic to a Direct Sum of Ideals QUESTION [6 upvotes]: Suppose that $R$ is a ring with identity and $R^{\oplus n}$ is a free left $R$-module. Is every submodule of $R^{\oplus n}$ isomorphic to $A_1\oplus \ldots \oplus A_n$ where $A_i$ are left ideals of $R$? Edit: It appears that this is false, and an example is given in a linked question. Can anyone explain why the example works? REPLY [4 votes]: First recall that the ring in the linked answer is $R=\mathbb{Z}/4\mathbb{Z}[X]/(X^2)$. An easy calculation shows that $R$ is local, with unique maximal ideal generated by $2$ and $X$, and has a unique simple ideal generated by $2X$. The module in the linked answer is the submodule of $R^2$ generated by $(2,X)$. Since it is cyclic, it is a quotient of the free module $R$, and so has a unique maximal submodule. Therefore it is indecomposable, and so if it were a direct sum of ideals, it would have to be isomorphic to a single ideal. But then it would have a unique simple submodule. However, it has two simple submodules, generated by $(2X,0)$ and $(0,2X)$ respectively.
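(For readers who like to see such claims checked mechanically, here is a small brute-force sketch in Python; the encoding of $a+bX\in R$ as a pair $(a,b)$ mod $4$ is mine:

    # Encode a + b*X in R = Z/4Z[X]/(X^2) as the pair (a, b), entries mod 4.
    def mul(r, s):
        (a, b), (c, d) = r, s
        return ((a * c) % 4, (a * d + b * c) % 4)   # X^2 = 0 kills the b*d term

    R = [(a, b) for a in range(4) for b in range(4)]
    g = ((2, 0), (0, 1))                            # the element (2, X) of R^2
    M = {(mul(r, g[0]), mul(r, g[1])) for r in R}   # the cyclic submodule R.(2, X)

    print(len(M))                                   # 8 elements
    print(((0, 2), (0, 0)) in M)                    # True: (2X, 0) lies in M
    print(((0, 0), (0, 2)) in M)                    # True: (0, 2X) lies in M

Both generators of the two simple submodules indeed lie in the module, as claimed.)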
A similar, and arguably clearer, example is given by $R=k[X,Y]/(X^2,Y^2)$ for a field $k$, with the submodule of $R^2$ generated by $(X,Y)$.<|endoftext|> TITLE: Sign of integral of $\frac{ 2 ^{\frac{it}{2/3}} \Gamma ( \frac{it +1}{2/3}) }{ 2 ^{\frac{it}{1.5}} \Gamma ( \frac{it +1}{1.5}) } \frac{1}{(a+it)^k}$ QUESTION [8 upvotes]: Can we determine the sign of the following function \begin{align} f(a,k)=\int_{-\infty}^\infty \frac{ 2 ^{\frac{it}{2/3}} \Gamma \left( \frac{it +1}{2/3}\right) }{ 2 ^{\frac{it}{3/2}} \Gamma \left( \frac{it +1}{3/2}\right) } \frac{1}{(a+it)^k} dt, \end{align} where $a\neq 0$ and $k \ge 1$ is some positive integer. The conjecture is that the sign of the integral is equal to \begin{align} {\rm sign } (f(a,k))={\rm sign}(a)^k. \end{align} Perhaps the following limit can be useful. By using a method in this question it is not difficult to see that \begin{align} \left | \frac{ 2 ^{\frac{it}{2/3}} \Gamma \left( \frac{it +1}{2/3}\right) }{ 2 ^{\frac{it}{3/2}} \Gamma \left( \frac{it +1}{3/2}\right) } \right| = O\left( e^{- (\frac{3}{2}-\frac{2}{3}) t}\right) \text{ as } t \to \infty. \end{align} Thanks REPLY [2 votes]: Not quite an answer, but a good start. Let's look at $$f(a,k)=\int_{-\infty}^\infty \frac{ 2 ^{\frac{it}{2/3}} \Gamma \left( \frac{it +1}{2/3}\right) }{ 2 ^{\frac{it}{3/2}} \Gamma \left( \frac{it +1}{3/2}\right) } \frac{1}{(a+it)^k} dt$$ which can be rewritten as $$f(a,k)= (-i) \int_{-i\infty}^{i\infty} 2 ^{\frac{5z}{6}}\frac{ \Gamma \left( \frac{3(z +1)}{2}\right) }{ \Gamma \left( \frac{2(z +1)}{3}\right) } \frac{1}{(a+z)^k} dz$$ Observe that $\Gamma \left( \frac{3(z +1)}{2}\right)$ has poles in the left half plane when $\frac{3(z+1)}{2} = n$ and $n = 0, -1, -2, -3,\ldots$, so when $z = \frac{2}{3}n - 1$ we have a pole. The principal part is $$\frac{(-1)^n}{n!(\frac{3(z+1)}{2} +n)} = \frac{2}{3}\frac{(-1)^n}{n!(z + 1 + \frac{2}{3}n)}$$ Take a semicircle contour that grows in the left half plane. A simple exercise in Mellin transforms gives that, if $a < 0$ $$f(a,k) = \frac{4\pi}{3}\sum_{n=0}^\infty \frac{(-1)^n2^{-\frac{5}{6}(1+\frac{2}{3}n)}}{n!\Gamma(-\frac{4}{9}n)(a-1-\frac{2}{3}n)^k}$$ Now showing that $\text{sign}(f(a,k)) = \text{sign}(a)^k$ involves talking about this series. Note that some of the terms disappear if $n \equiv 0 \pmod 9$ (because the Gamma function on the bottom vanishes there). Not sure how you would really approach this, but probably discussing that this series oscillates wildly has something to do with it. It seems obvious though that if $a<0$ then $\text{sign}(f(a,k))^k = \text{sign}(a)^k$. EDIT: If $a > 0$ there's another term added because $\frac{1}{(a+z)^k}$ has a pole in the left half plane. This needs to be handled in cases, because when $a = -\frac{2}{3}n -1$ the residues get all wonky. I'll leave it to you to find that extra term, which isn't too hard to get at. It just involves taking the $k$'th derivative of the rest of the integrand (something I'm not in the mood to do). PS: I may have screwed up some arithmetic; the amount of fractions I just crossed out and rearranged in my head had me blurry eyed.<|endoftext|> TITLE: On existence of polyhedra with a fixed number of edges per face QUESTION [5 upvotes]: Let $P$ be a convex polyhedron such that each face has exactly $A$ edges. Denote with $V$, $E$ and $F$ the number of vertices, edges and faces of $P$ respectively. Since each face has $A$ edges we get that $E=\frac{AF}{2}$. This means that $A$ and $F$ cannot be both odd.
The graph $G$ induced by $P$ is a planar graph such that each vertex has at least degree $3$. This means we can use Euler's formula to get the number of vertices: $$V=2+E-F=\frac{4 + AF -2F}{2}.$$ Question For which values of $A$ and $F$ does such a convex polyhedron exist? If such a polyhedron exists, how many such non-isomorphic polyhedra exist? By isomorphism I mean that their respective skeleton graphs are isomorphic as graphs. Partial results The average degree $d$ of $G$ is equal to $d=\frac{2E}{V}=\frac{2AF}{4+(A-2)F}$. Since each vertex has at least degree $3$ we get that $3 \leq d$. Since $G$ is planar we also have that $d < 6$. These two inequalities restrict the possibilities for $A$ and $F$: $3 \leq d$ gives that $$A \leq \frac{6F-12}{F}.$$ The inequality $d < 6$ gives that $$\frac{3F-6}{F} < A.$$ This gives us that $A < 6$. We also have that $3 \leq A$ since a face of a polyhedron has at least $3$ edges. The above inequalities and the fact that $F \geq 0$ give: If $A=3$, then $F \in [4, \infty)$ If $A=4$, then $F \in [6, \infty)$ If $A=5$, then $F \in [12, \infty)$ Denote with $\mathcal{E}$ the set of all even numbers. Since $A$ and $F$ cannot be both odd we have that: If $A=3$, then $F \in \mathcal{E} \cap [4, \infty)$ If $A=4$, then $F \in [6, \infty)$ If $A=5$, then $F \in \mathcal{E} \cap [12, \infty)$ REPLY [5 votes]: These are the duals of the convex polyhedra where each vertex is $A$-valent, which you might have an easier time finding information on. The graphs of these dual polyhedra are all the $A$-regular 3-connected planar graphs. Case $A = 3.$ You want polyhedra with all faces being triangles, which are the simplicial polyhedra. Wikipedia has a list of several classes of these. They exist for any even number of faces greater than two. One example with $2n$ faces for any $n > 2$ is the $n$-gonal bipyramid. The graphs of the duals are 3-regular (aka cubic) 3-connected planar graphs. The number of non-isomorphic such graphs with 10 to 20 vertices can be found at Gordon Royle's page, or as OEIS A000109. \begin{array}{| c | c |} \hline F & \text{#} \\ \hline 4 & 1 \\ 6 & 1 \\ 8 & 2 \\ 10 & 5 \\ 12 & 14 \\ 14 & 50 \\ 16 & 233 \\ 18 & 1249 \\ 20 & 7595 \\ 22 & 49566 \\ 24 & 339722 \\ 26 & 2406841 \\ 28 & 17490241 \\ 30 & 129664753 \\ \hline \end{array} As you can see, there are a lot. Case $A = 4.$ You want polyhedra with all faces being quadrilaterals. (These are the 3-dimensional cubical polytopes.) Such polyhedra exist with $2n$ faces for any $n \geq 3$: the trapezohedra, aka deltohedra, which are duals to antiprisms. Quadrilateral-faced polyhedra also exist with $3n$ faces for any $n \geq 3$, by taking an $n$-gonal bipyramid and truncating each vertex on the equatorial $n$-gon to the midpoints of the equatorial edges (making a ring of rhombi around the equator), as described in this answer of achille hui for $n = 5$. When $n=3$ you get the Herschel enneahedron. There are no quadrilateral-faced heptahedra. Given a quadrilateral-faced polyhedron, we can "glue" an irregular cube to one of its faces to get a quadrilateral-faced polyhedron with four more faces. Starting from the Herschel enneahedron, you can thus get $F$ faces for any $F \geq 9$ equivalent to 1 mod 4.
The program plantri will generate all non-isomorphic 3-connected planar quadrangulations, which are equivalent to quadrilateral-faced convex polyhedra, and finds: \begin{array}{| c | c |} \hline F & \text{#} \\ \hline 6 & 1 \\ 7 & 0 \\ 8 & 1 \\ 9 & 1 \\ 10 & 3 \\ 11 & 3 \\ 12 & 11 \\ 13 & 18 \\ 14 & 58 \\ 15 & 139 \\ 16 & 451 \\ 17 & 1326 \\ 18 & 4461 \\ 19 & 14554 \\ 20 & 49957 \\ 21 & 171159 \\ 22 & 598102 \\ 23 & 2098675 \\ 24 & 7437910 \\ 25 & 26490072 \\ 26 & 94944685 \\ 27 & 341867921 \\ 28 & 1236864842 \\ 29 & 4493270976 \\ 30 & 16387852863 \\ 31 & 59985464681 \\ \hline \end{array} (OEIS A007022). Since there are quadrilateral-faced 11-hedra, we can do the same cube-gluing operation to get any $F \geq 11$ equivalent to 3 mod 4. Thus, for $A=4$ we can have 6 faces, or any number greater than 7. Case $A = 5.$ There are again lots of examples. Suppose that the polyhedron has $v_k$ vertices of valence $k$. You can specify the number of vertices of each valence, $(v_3, v_4, v_5, \dotsc)$, and such a polyhedron exists with all pentagonal faces so long as $v_4 \geq 6$ and $v_3 = 20 + \sum_{k \geq 4} (3k - 10) v_k$, by a result of J.C. Fisher in Five-valent convex polyhedra with prescribed faces. So the total number of vertices is $$ V = 20 + \sum_{k \geq 4} (3k - 9) v_k, $$ and since $V = \frac{3F}{2} + 2$, the total number of faces is $$ F = 12 + 2\sum_{k \geq 4}(k-3)v_k. $$ By setting $v_4$ to any integer at least 6, and all other $v_k = 0$, we can guarantee the existence of pentagonal-faced polyhedra with any even number of faces greater than or equal to 24. There are also pentagonal-faced polyhedra with 12 faces (the dodecahedron), 16 faces (the dual of the snub square antiprism), 18 or 20 faces (such polyhedra exist; the original answer exhibited their planar graphs), and 22 faces (the result of gluing two regular dodecahedra together along a face, as described in this answer of Oscar Lanzi). (The 20-faced pentagonal planar graph is the dual of the so-called 20-quintic graph 1.) The same answer of Oscar Lanzi, in the question Possible all-Pentagon Polyhedra, asserts that there are no pentagon-faced polyhedra with 14 faces. The Fisher article cited above provides @Oscar's reference for the nonexistence of a pentagonal 14-hedron. Using plantri and countg (from the nauty gtools), we can count the number of 5-regular 3-connected planar graphs, to enumerate the number of isomorphism classes: \begin{array}{| c | c |} \hline F & \text{#} \\ \hline 12 & 1 \\ 14 & 0 \\ 16 & 1 \\ 18 & 1 \\ 20 & 6 \\ 22 & 14 \\ 24 & 96 \\ 26 & 518 \\ 28 & 3917 \\ 30 & 29821 \\ \hline \end{array}<|endoftext|> TITLE: Evaluate limit of an integral: $\lim_{x\to \infty }\frac{1}{x}\int _0^x\:\frac{dt}{2+\cos t}$ QUESTION [9 upvotes]: $$\lim _{x\to \infty }\frac{1}{x}\int _0^x\:\frac{dt}{2+\cos t}$$ Can someone explain to me if it is a limit of type $\frac{\infty}{\infty}$ or not and why? I considered it to be one, applied L'Hospital and got $\cos\infty$, which would mean that the limit does not exist, but the answer is $\frac{1}{\sqrt{3}}$. REPLY [9 votes]: $f(t)=\frac{1}{2+\cos(t)}$ is a positive, bounded and $2\pi$-periodic function. It follows that the mean value of $f$, i.e.
the wanted limit, equals the mean value of $f$ over one period: $$ \lim_{x\to +\infty}\frac{1}{x}\int_{0}^{x}\frac{dt}{2+\cos t} = \frac{1}{2\pi}\int_{0}^{2\pi}\frac{dt}{2+\cos t} = I $$ and through the substitution $t\mapsto 2t$ we have: $$ I = \frac{1}{\pi}\int_{0}^{\pi}\frac{dt}{1+2\cos^2(t)}=\frac{2}{\pi}\int_{0}^{\pi/2}\frac{dt}{1+2\cos^2(t)} $$ then by setting $t=\arctan(u)$: $$ I = \frac{2}{\pi}\int_{0}^{+\infty}\frac{du}{3+u^2}=\color{red}{\frac{1}{\sqrt{3}}}.$$<|endoftext|> TITLE: Burnside / Cauchy-Frobenius Lemma for the automorphism group of a symmetric block design QUESTION [5 upvotes]: I am quite familiar with the Burnside/Cauchy-Frobenius Lemma, which states that for a group $G$ acting on a set $X$, where $O$ is the number of orbits of $X$ under the action of $G$, we have: $$O=\frac{1}{|G|}\sum\limits_{g\in G}|\{x\in X: xg=x\}|,$$ that is, the number of orbits is found by finding the number of fixed points of the action for each group element and adding them up. I'm trying to write this lemma down in notation more friendly to the specific situation where we have a symmetric design $D$ and its automorphism group $\text{Aut}(D)$, but I am not entirely comfortable with what I came up with, and thought I would check with the community. I have already proven (and it is well-known) that any automorphism on a symmetric design $D$ fixes the same number of blocks and points, further blurring things. I believe $D$ is playing the role of $X$ above, and it makes sense to consider both blocks and points fixed by any element $\sigma\in\text{Aut}(D)$. I also think it makes sense that $\text{Aut}(D)$ is playing the role of $G$ above. So I arrive at the following statement for Burnside's Lemma in the context of a symmetric design and its automorphism group: $$O=\frac{1}{|\text{Aut}(D)|}\sum\limits_{\sigma\in\text{Aut}(D)}|\{p\in D:\sigma p=p\}|,$$ where $p$ represents a point. Does this look good to you? Main concern: $D$ isn't really a set, in the traditional sense - it is an incidence structure, so writing "$p\in D$" feels wrong/notationally abusive, but writing it out for blocks $b$ has the same problem. I suppose I could further muddy things by saying $X$ is the set of points on which our design lives, but then writing $p\in X$ seems to destroy the relation between my group ($\text{Aut}(D)$) and my set ($D$?... $X$?...) The questions: Does my statement of Burnside's Lemma for this situation look right? Is everything I'm doing consistent with a good understanding of how a design's automorphism group interacts with the design itself? Is my set in the summand reasonable and not too notationally weird? REPLY [2 votes]: It's true that $D$ isn't really a set, so that is the part that seems awkward to me. I feel that you should say that you are viewing $\mathrm{Aut}(D)$ as a subset of $\mathrm{Sym}(\mathcal{P})$, $\mathrm{Sym}(\mathcal{B})$, or $\mathrm{Sym}(\mathcal{P} \cup \mathcal{B})$, to clarify whether you are viewing an automorphism as being defined by a permutation of points, blocks, or both. After double checking, in the thesis of Aschbacher he refers to the automorphism group of a design $D$ as acting on $D$, the implied meaning being that the set $X = \mathcal{P} \cup \mathcal{B}$, so I guess this aligns directly with your notation (note that this means $O$ consists of both point and block orbits). The version you have of Burnside's lemma will apply in any of these cases.<|endoftext|> TITLE: Where do Mathematicians Get Inspiration for Pi Formulas? 
QUESTION [26 upvotes]: Question: Where do people get their inspirations for $\pi$ formulas? Where do they begin with these ideas? Equations such as$$\dfrac 2\pi=1-5\left(\dfrac 12\right)^3+9\left(\dfrac {1\times3}{2\times4}\right)^3-13\left(\dfrac {1\times3\times5}{2\times4\times6}\right)^3+\&\text{c}.\tag{1}$$$$\dfrac {2\sqrt2}{\sqrt{\pi}\Gamma^2\left(\frac 34\right)}=1+9\left(\dfrac 14\right)^4+17\left(\dfrac {1\times5}{4\times8}\right)^4+25\left(\dfrac {1\times5\times9}{4\times8\times12}\right)^4+\&\text{c}.\tag{2}$$$$\dfrac \pi4=\sum\limits_{k=1}^\infty\dfrac {(-1)^{k+1}}{2k-1}=1-\dfrac 13+\dfrac 15-\&\text{c}.\tag{3}$$ have always confused me as to where mathematicians get their inspirations or ideas for these kinds of identities. The first one was found by G. Bauer in $1859$ (something I still want to know how to prove. I've found this recently asked question still open for proofs), the second was found by Ramanujan and has a relation with hypergeometric series. I'm wondering whether people see $\pi$ in other formulas, such as$$\sum\limits_{k=1}^{\infty}\dfrac 1{k^2}=\dfrac {\pi^2}6\implies\pi=\sqrt{\sum\limits_{k=1}^\infty\dfrac 6{k^2}}\tag{4}$$ and isolate $\pi$, or if something new comes up and they investigate it? For example, I'm wondering if it's possible to manipulate the expansion of $\ln m$ $$\ln m=2\left\{\dfrac {m-1}{m+1}+\dfrac 13\left(\dfrac {m-1}{m+1}\right)^3+\dfrac 15\left(\dfrac {m-1}{m+1}\right)^5+\&\text{c}.\right\}\tag{5}$$ to get a $\pi$ formula. Or the series$$\sum\limits_{k=1}^{\infty}\dfrac 1{k^p}=\dfrac {\pi^p}n\tag{6}$$ (valid for even $p$ with a suitable integer $n$), which converges faster and faster as $p$ gets larger and larger. REPLY [4 votes]: One method, no doubt, is due to reasoning involving the classic definition of $\pi$ (the ratio of the circumference to the diameter). For example, the ratio of the perimeter of a regular polygon to its diameter tends to $\pi$ as the number of sides goes to infinity. Starting with the square and doubling the number of sides of the polygon yields the sequence $$2\sqrt2$$ $$4\sqrt{2-\sqrt2}$$ $$8\sqrt{2-\sqrt{2+\sqrt2}}$$ $$16\sqrt{2-\sqrt{2+\sqrt{2+\sqrt2}}}$$ $$\dots$$ You could derive something similar for $\pi^2$ or $\pi^3$ using the areas of regular polygons, surface areas/volumes of convex regular polyhedra, etc. Just speculating: one might get inspiration from physical phenomena such as angular velocity vs linear velocity for an object traveling in a circle, angular momentum, torque, etc.<|endoftext|> TITLE: Draught of barge moving through water QUESTION [5 upvotes]: I am studying fluid dynamics at university and have been working on the following problem: A flat-bottomed barge moves very slowly through a closely fitting canal but generates a significant velocity $U$ in the small gap beneath its bottom. Estimate how much lower the barge sits in the water compared to when it is stationary if $U = 5 \, \text{ms}^{-1}.$ Considering the problem in the rest frame of the barge, I've deduced that, by conservation of mass, if the draught of the barge is $d$, its clearance above the canal bed $h$ and speed through the water $V$, then $Vd = Uh$. This doesn't seem helpful though, as we don't know what $V$ is. I've thought about using the Bernoulli Streamline Theorem on a streamline along the riverbed and I get $$\frac{V^2}{2}+gh=\frac{U^2}{2}+gd$$ but it would seem that, when the barge is at rest, we have $h = d$, which doesn't seem to make sense (for every conceivable barge).
I can't seem to use any information on buoyancy as I know nothing about the weight of the barge. Please help me understand how to solve this question, but also why the approach works with such little information. REPLY [3 votes]: Your idea of using Bernoulli's streamline theorem is correct. The theorem states that, at any point along a streamline of an incompressible fluid with steady flow, if we neglect the friction by viscous forces, the following equation holds: $${\displaystyle {\frac {V^{2}}{2}}+gh+{\frac {P}{\rho }}={\text{constant}}} $$ where $V$ is the fluid flow velocity, $g$ is the gravity acceleration, $h$ is the level of the point above a reference plane, $P$ is the pressure, and $\rho$ is the density of the fluid. In this problem, we are asked to determine how $h$ changes between a situation of steady flow with $V= \text {5 ms}^{-1} \,\,$ and a situation of rest with $V=0 \,$. So, calling $h'$ the changed level of the point at rest and setting $g= \text {9.8 ms}^{-2} \,\,$, we have to solve $$\displaystyle \frac {5^2}{2}+9.8\,h+\frac {P}{\rho } \\ = \frac {0^2}{2}+9.8\,h'+\frac {P}{\rho }$$ from which we get $$\displaystyle h'-h= \frac {25}{2 \cdot 9.8} \approx 1.275 \, \text {m}$$<|endoftext|> TITLE: Semi-group theory and Poisson equation on the upper half plane QUESTION [10 upvotes]: We first look at the 2D Laplace equation, say on the upper half plane: $$\Delta u=0,\quad -\infty<x<\infty,\ y>0$$ $$u(x,0)=g(x),$$ where $g\in L^p(\mathbb{R})$ for some $1\leq p<\infty$. Then the general solution can be represented using the Poisson kernel $$P_y(x)=\frac{y}{\pi(y^2+x^2)},$$ with $$u(x,y)=(P_y*g)(x)=\frac{1}{\pi}\int_{-\infty}^\infty \frac{y}{y^2+(x-t)^2}g(t)dt.$$ Now if we define the following linear operator on $L^p(\mathbb{R})$: $$T_yg(x)=(P_y*g)(x).$$ Then we can verify that the family $\{T_y\}_{y\geq 0}$ satisfies the semi-group properties: $T_0=\mathrm{id}$, i.e. $T_0$ is the identity operator; $T_{y+s}=T_yT_s$ for any $y,s\geq 0$. Thus we see that we can study solutions of the Laplace equation from the view of semi-group theory. Here is my question: Can we perform a similar analysis for the Poisson equation, i.e. consider the solutions of the Poisson equation from the view of semi-group theory? The Poisson equation is basically the Laplace equation with a source term $$-\Delta u=f(x,y),\quad -\infty<x<\infty,\ y>0$$ $$u(x,0)=g(x),$$ here we use the same domain as above. In this case the general solution can be represented by using the Green's function: $$G(x,y)=\frac{1}{2\pi}\ln\sqrt{x^2+y^2},$$ with $$u(x,y)=\int_{\mathbb{R}\times\mathbb{R}^+}G(x-x',y-y')f(x',y')dx'dy'+\int_{\{y=0\}}g(x')\frac{\partial G}{\partial\mathbf{n}}(x-x',y-y')dS,$$ where in the second integral above $\mathbf{n}$ is the normal vector of $\{y=0\}$ pointing outwards from the domain $\mathbb{R}\times\mathbb{R}^+$. If we want to view the solution from semi-group theory, then we need to find a suitable Banach space $X$ and a family of bounded linear operators $\{T_t\}_{t\geq 0}$ on $X$ which form a semi-group. But I'm not sure whether this can be done. Any ideas on this question are greatly appreciated. REPLY [3 votes]: The answer is no, i.e. the solution operator of the boundary value problem of every non-homogeneous linear equation is not a semigroup of operators with respect to any of the space or time variables involved. I will show this in two steps: I'll construct the general solution operator $T$ for the Dirichlet problem for the Poisson equation in the upper half plane and then show that this cannot be a semigroup.
First Step. Let's precisely define the Green function for a non-homogeneous boundary value problem and explicitly calculate it for the Dirichlet problem in the upper half-plane for the Poisson equation as required by the OP, i.e. $$ \begin{cases} \Delta u(x,y)=f(x,y),\quad (x,y)\in\mathbb{R}\times\mathbb{R}^+\\ u(x,0)=g(x). \end{cases}\tag{1}\label{1} $$ Definition. The Green function of the (linear) boundary value problem $$ \begin{cases} P(x,D) u(x)=f(x),\quad x\in G\subset\mathbb{R}^n\\ Bu(x)=g(x),\quad x\in\partial G \end{cases}\tag{2}\label{2} $$ for the linear partial differential operator $P(x,D)$ is the distribution solution of the following associated boundary problem $$ \begin{cases} P(x,D) \mathscr{G}(x,t)=\delta(x-t),\quad x,t\in G\subset\mathbb{R}^n\\ B\mathscr{G}(x,t)=0,\quad x\in\partial G \end{cases}\tag{3}\label{3} $$ where $G$ is a domain with a "sufficiently regular" boundary $\partial G$, $B$ is a linear boundary operator defined on $\partial G$. Notes. Clearly the Dirichlet problem \eqref{1} is of type \eqref{2} since $B u(x,y)=u|_{\partial G}=u(x,0)$ is a linear boundary operator, and a solution of problem \eqref{3} for \eqref{1} exists under reasonable conditions on $\partial G$ and on $P(x,D)$ (Vladimirov (1983) §29.1, p. 369). The Green's function $\mathscr{G}((x,y),(t,s))$ for \eqref{1} is the solution of the following Dirichlet boundary value problem: $$ \begin{cases} \Delta \mathscr{G}((x,y),(t,s))=\delta((x,y)-(t,s)),\quad (x,y),(t,s)\in \mathbb{R}\times\mathbb{R}^+\\ \mathscr{G}((x,0),(t,s))=0. \end{cases}\tag{3'}\label{3'} $$ Note. The vector $(t,s)\in \mathbb{R}\times\mathbb{R}^+$ is a parameter which has an interesting physical interpretation in the theory of the static electric field (see Vladimirov (1983) §29.1 for the details). The solution of problem \eqref{3'} has the form (Vladimirov (1983) §29.1, p. 368) $$ \mathscr{G}((x,y),(t,s))=\frac{1}{2\pi}\ln|(x,y)-(t,s)|+g((x,y),(t,s)) $$ where the first term on the right side is the fundamental solution of the Laplacian while the second one is a function harmonic in the whole $\mathbb{R}\times\mathbb{R}^+$. In the problem posed by the OP, $$ \mathscr{G}((x,y),(t,s))=\frac{1}{2\pi}\big(\ln|(x,y)-(t,s)|-\ln|(x,y)-(t,-s)|\big).\tag{4}\label{4} $$ Now, noting that $\frac{\partial\mathscr{G}}{\partial\mathbf{n}}=-\frac{\partial\mathscr{G}}{\partial s}$ on the boundary $\{s=0\}$, by Green's formula we obtain the formula for the general solution of \eqref{1}: $$ \begin{split} u(x,y)&=\int\limits_{\mathbb{R}\times\mathbb{R}^+}\mathscr{G}((x,y),(t,s))f(t,s)\mathrm{d}t\mathrm{d}s+\int\limits_{\{s=0\}}g(t)\frac{\partial\mathscr{G}}{\partial\mathbf{n}}((x,y),(t,0))\mathrm{d}t\\ &=\int\limits_{\mathbb{R}\times\mathbb{R}^+}\mathscr{G}((x,y),(t,s))f(t,s)\mathrm{d}t\mathrm{d}s+\frac{1}{\pi}\int\limits_{-\infty}^\infty \frac{y}{y^2+(x-t)^2}g(t)\mathrm{d}t\\ &\overset{\mathrm{def}}{=} T(g;f)(x,y) \end{split}\tag{5}\label{5} $$ Second Step. Now note that the operator $T$ defined by \eqref{5} is a semigroup only if $f\equiv 0$. To see this, suppose that we have homogeneous boundary conditions, i.e.
$g\equiv 0$ and assume $y$ as the parameter of the hypothetical semigroup, i.e. $T(g;f)(x,y)=T_y(g;f)(x)$: for $y=0$ we have $$ T_0(0;f)(x)\equiv 0 \quad \forall x\in\mathbb{R}\text{ by equation \eqref{4} } $$ If $T_y$ defined by \eqref{5} were a semigroup, the equation above would imply that $$ T_y(0;f)(x)=T_{y+0}(0;f)(x)=T_{y}T_{0}(0;f)(x)\equiv 0 $$ by the second property of the semigroup and by the linearity of $T$, and this is clearly false since the first term of \eqref{5} is not necessarily $\equiv0$. If instead of $y$ we try to assume $x$ as the parameter of the hypothetical semigroup by posing $T(g;f)(x,y)=T_x(g;f)(y)$, it is simple to see that $T_0\neq \mathrm{id}$. Last notes. This proposition holds for general boundary problems \eqref{2} since it can be proved that, for the general Green's function defined by \eqref{3}: $$ B\mathscr{G}(x,t)=0,\quad t\in\partial G. $$ See again Vladimirov (1983) §29.1 for the details when $P(x,D)=\Delta$. Alberto Cialdea showed me the main argument of the second step in the proof above, and I would like to thank him publicly. [1] Vladimirov, V. S. (1983)[1970], Equations of mathematical physics, Moscow: Mir Publishers, 2nd ed., pp. 464, MR0764399, Zbl 0207.09101 (the Zbmath review refers to the first English edition).<|endoftext|> TITLE: What's really going on behind calculus? QUESTION [26 upvotes]: I'm currently taking maths at A Level and I have found it strange that it is not explained why the derivative of $x^n$ is $nx^{n-1}$, for example. I can see that it works through observation and first principles, but how can it be derived? And what about for an unknown function, not based on trigonometry, $e^x$ or polynomials? Is there some sort of intuition or derivation that can lead to a general answer, other than making observations? Another thing that I do not quite understand is why higher derivatives of some function are denoted by $\frac{d^ny}{dx^n}$. Sorry if this sounds really basic but I would appreciate any explanations/links to good resources (I've had a look online and couldn't find too much other than some nice explanations of special cases etc.). Thanks :) REPLY [4 votes]: Concerning your question of why higher derivatives are denoted by $$\frac{d^ny}{dx^n}$$ There is a reason for this notation. Now $$\begin{align}\frac{d^2y}{dx^2} &= \lim_{h \to 0} \frac{y'(x + h) - y'(x)}{h}\\&=\lim_{h \to 0}\frac{\lim_{h_1 \to 0} \frac{y(x + h + h_1) - y(x + h)}{h_1} - \lim_{h_2 \to 0} \frac{y(x + h_2) - y(x)}{h_2}}h\\&=\lim_{h \to 0}\lim_{h_1 \to 0}\lim_{h_2 \to 0} \frac{ \frac{y(x + h + h_1) - y(x + h)}{h_1} - \frac{y(x + h_2) - y(x)}{h_2}}h\end{align}$$ Now assuming both the iterated and the combined limit exist, we can set the three values $h, h_1, h_2$ to be equal without changing the value. So $$\begin{align}\frac{d^2y}{dx^2} &=\lim_{h \to 0} \frac{ \frac{y(x + 2h) - y(x + h)}{h} - \frac{y(x + h) - y(x)}{h}}h\\&=\lim_{h \to 0} \frac{ y(x + 2h) - 2y(x + h) + y(x)}{h^2}\end{align}$$ The expression on the top is the "2nd difference". It is what you get from taking a difference of a difference, so it is the taking of differences that is squared in the short-hand notation: $d^2y$. But the denominator is just $h^2$, which is the square of a single difference in $x$. Thus $dx^2$ (which is intended to mean $(dx)^2$, not $d(x^2)$).
The examples I know of are: spheres, cubes (or parallelotopes, more generally), simplices, and zonotopes. What other classes of high dimensional objects admit relatively simple volume (or area, etc.) formulas? EDIT: Since zonotopes are the most unfamiliar of my examples, here's a reference: Chapter 9 of "Computing the Continuous Discretely". To summarize, a zonotope is a set of the form $$\{a_1 \vec{x}_1 + \cdots + a_m \vec{x}_m \:|\: a_1,\dots,a_m\in[0,1]\}$$ where $\vec{x}_1,\dots,\vec{x}_m\in \mathbb{R}^n$ are fixed. This is like a parallelotope, except the vectors $\vec{x}_j$ need not be linearly independent (e.g. $m$ can be greater than $n$). The volume of such a zonotope is given by $$ \sum_{S\subset \{1,\dots,m\}, |S|=n} |\det[x_i]_{i\in S}|$$ which means: "Take any $n$ of the $m$ vectors $\vec{x_i}$ and compute the volume of the parallelotope formed by these $n$ vectors in $\mathbb{R}^n$. Sum over all such parallelotopes and you get the volume of the zonotope." REPLY [3 votes]: A couple more examples I've found: 1) Cross-polytopes. These are generalizations of an octahedron. Wikipedia has a nice article on them, and the standard $n$-dimensional cross-polytope has volume $\frac{2^n}{n!}$. 2) Cones. A cone is formed from a base shape of codimension 1 and a point at some height $h$ above the base. In $n$ dimensions, if the base has $(n-1)$-volume $A$, then the cone has $n$-volume $\frac{Ah}{n}$.<|endoftext|> TITLE: "Prime decomposition of $\infty$" QUESTION [7 upvotes]: I've just read the following exercise: "Determine for some (or all) $n\leq 10$ the prime decomposition of $2, 3, 5$ and $\infty$ in $\mathbb{Q}(\zeta_{12})$, where $\zeta_{12}$ is a primitive $12$-th root of unity. In particular, determine the different places above $2, 3, 5$ and $\infty$, their ramification indices and their inertia degrees." What is "prime decomposition of $\infty$" supposed to mean here? REPLY [4 votes]: The "prime decomposition of $\infty$" in a number field $K$ is more or less a question of convention in order to parallel the same phenomenon for prime ideals. But the question remains what convention to choose. It will take a while to make things more precise, but let me first recall the definition(s) of a place of $K$: (1) Starting from absolute values of $K$ as in the answer of @Lubin, a place of $K$ is an equivalence class of non-trivial absolute values (archimedean or not) of $K$, two absolute values being equivalent iff the topological spaces that they define on $K$ are homeomorphic. The set of places $Pl_K$ is determined from $Pl_{\mathbf Q}$ by explicit formulas: if $P$ is a prime ideal of $K$, then $|x|_P := N(P)^{-v_P (x)}$, where $N(P)$ is the absolute norm of $P$ and $v_P (x)$ is the power to which $P$ appears in the ideal factorization of $(x)$. The archimedean absolute values are of two types: the real ones, indexed by the $r_1$ embeddings $\sigma : K \to \mathbf R$, defined by $|x|_{\sigma} = |\sigma x|$; the complex ones, indexed by the $r_2$ pairs of conjugate embeddings $\tau : K \to \mathbf C$, defined by $|x|_{\tau} = |\tau x|^2$. Note that the square accounts for the fact that for each pair of conjugate $\tau$'s, one picks only one of the two conjugate $\tau(x)$'s. (2) To define archimedean places starting from the $\mathbf Q$-embeddings of $K$ into an algebraic closure of $\mathbf Q$ as in the answer of @Bob Jones, we must first (for a fixed prime $p$) look at the $\mathbf Q_p$-embeddings of $K$ into the completion $\mathbf C_p$ of an algebraic closure of $\mathbf Q_p$.
Given any $p$-adic valuation $v$ and any embedding $i_v : K \to K_v$ (the $v$-completion of $K$), it is easy to see that $K_v = i_v (K)\mathbf Q_p$. Two $\mathbf Q_p$-embeddings $\sigma , \tau : K \to \mathbf C_p$ will be called equivalent if $\mathbf Q_p \sigma(K)$ and $\mathbf Q_p \tau(K)$ are $\mathbf Q_p$-conjugate, and a new definition will be that a $p$-place = an equivalence class of such a $\mathbf Q_p$-embedding $\sigma$ = {$\sigma \tau.i_v$} for a chosen $i_v$ and for $\tau$ running through the $\mathbf Q_p$-isomorphisms of $K_v$ into $\mathbf C_p$. The coincidence with the first definition comes from the formula $|x|_v = |N_{K_v /\mathbf Q_p }(x)|_p$. The analogous definition of an archimedean complex place will obviously come from the formula $N_{\mathbf C / \mathbf R} (i_\infty(x)) = i_\infty(x)\cdot \overline{i_\infty(x)}$, the product of $i_\infty(x)$ with its conjugate, just as previously in (1). The presence of the square in the formula defining a complex archimedean place is perhaps at the origin of the widely accepted convention (since Hasse) that in a relative extension $K/F$, a real place of $F$ which becomes complex in $K$ is called ramified, with ramification index 2. But this is not indisputable. Instead of "ramification", some authors (e.g. G. Gras in his book CFT: from theory to practice, Springer 2003) advocate the more neutral word "complexification" (of a real place). In analogy with the vocabulary for a local extension $L_w / K_v$ obtained by completing a global $L/K$, there is no doubt that the case ${\mathbf R / \mathbf R}$ for the place $\infty$ should be called "totally decomposed". But the conventional systematic terminology "ramified" for the case ${\mathbf C / \mathbf R}$ leads to undue complications, e.g. in the formulation of the main theorems of CFT. Consider for instance $K = \mathbf Q (\zeta_m)$, whose "natural" conductor should be $m$, whereas in the classical formulation of CFT in terms of ray class fields, this conductor is actually the modulus $(m)\infty$. More seriously, in the part of CFT which deals with abelian extensions with restricted ramification, more precisely unramified outside $S$ and totally decomposed inside $T$, $S$ and $T$ being two finite disjoint sets of places of the base field (op. cit. chapter III), the so-called Spiegelungssatz (reflection theorem) exchanges among other things the real infinite places and the real infinite places outside $S$, which renders the statements a bit messy when sticking to the usual convention.<|endoftext|> TITLE: Identity operator on $L^2(\mathbb{R}^d)$ QUESTION [12 upvotes]: I want to show that the identity operator on $L^2(\mathbb{R}^d)$ cannot be given by an absolutely convergent integral operator. That is, if $K(x,y)$ is a measurable function on $\mathbb{R}^d \times \mathbb{R}^d$ such that for each $f \in L^2(\mathbb{R}^d)$ the integral $T(f)(x) = \int_{\mathbb{R}^d} K(x,y)f(y)dy$ converges for almost every $x$, then $T(f) \neq f$ for some $f$. Therefore, suppose that $T(f)(x)$ converges absolutely for almost every $x$ and $T(f) = f$ for all $f$. Then $$f(x) = \int_{\mathbb{R}^d} K(x,y) f(y) dy \leq \int_{\mathbb{R}^d} \left| K(x,y)f(y) \right|dy< \infty.$$ I don't really know how to proceed from here. REPLY [2 votes]: Here is an argument based on the heuristic observation that the essential support of $K$ must be contained in the diagonal, which is itself a measure zero set.
Part 1, The Approximation: Let $f(x)=(2\pi)^{-d/2}e^{-\lvert x \rvert^2/2}$ and define $$ \tilde{f}(x) = \begin{cases} \int \lvert K(x,y)f(y)\rvert\,dy&\mbox{ if } \int \lvert K(x,y)f(y)\rvert\,dy<\infty,\\ 0&\mbox{ otherwise }. \end{cases} $$ Then $\tilde{f}$ is measurable. Consider the measurable set $E_t=(\tilde{f}\leqslant t)$ and define $\phi_t=1_{E_t}\cdot f$ and $K_t(x,y)=\phi_t(x)K(x,y)\phi_t(y)$. Note that $\phi_t\rightarrow f$ pointwise almost everywhere as $t\rightarrow\infty$. We have $$ \int \lvert K_t(x,y)\rvert \, dx dy \leqslant \int t\cdot \phi_t(x) \, dx = t <\infty, $$ i.e. $K_t$ is integrable. Interlude, The Diagonal: Let $\Delta=\{(x,x)\in\mathbb{R}^{2d}\;\vert\; x\in\mathbb{R}^d\}$ denote the diagonal. Pick open sets $U,V\subseteq\mathbb{R}^{d}$ such that $U\cap V=\emptyset$, let $\Sigma$ denote the Borel $\sigma$-algebra on $\mathbb{R}^{2d}$, let $\Sigma_{U\times V}=\{(U\times V)\cap A\;\vert\; A\in\Sigma\}$ denote the trace $\sigma$-algebra on $U\times V$. Part 2, The Dynkin Class Argument: Define the family $$ \mathcal{D}=\left\{ A\in \Sigma_{U\times V} \; \middle| \; \int_A 1_{U\times V}(x,y)K_t(x,y)\,dxdy=0 \right\}. $$ We will now argue that $\mathcal{D}$ is a Dynkin system. We have $\int_{U\times V} 1_{U\times V}(x,y)K_t(x,y)\,dxdy =\langle 1_U\cdot \phi_t,1_V\cdot \phi_t\rangle = 0$. If $A\in\mathcal{D}$, then $\int_{(U\times V)\setminus A} 1_{U\times V}(x,y)K_t(x,y)\,dxdy = -\int_{A} 1_{U\times V}(x,y)K_t(x,y)\,dxdy = 0$. If $(A_j)_{j\in\mathbb{N}}$ is a sequence of pairwise disjoint elements of $\mathcal{D}$, then $$ \int_{\bigcup_{j=1}^\infty A_j} 1_{U\times V}(x,y)K_t(x,y)\,dxdy = \sum_{j=1}^\infty\int_{A_j} 1_{U\times V}(x,y)K_t(x,y)\,dxdy = 0. $$ Next, we define the family $$ \mathcal{P}=\left\{ A\in\Sigma_{U\times V} \; \middle| \; A=B\times C \right\}, $$ which is clearly a $\pi$-system. Furthermore, if $B\times C\in\mathcal{P}$, then $$ \int_{B\times C} 1_{U\times V}(x,y)K_t(x,y)\,dxdy =\langle 1_{B}\cdot \phi_t,1_{C}\cdot \phi_t\rangle = 0, $$ so $\mathcal{P}\subseteq\mathcal{D}$. It follows from Dynkin's Theorem that $\sigma(\mathcal{P})\subseteq\mathcal{D}$, where $\sigma(\mathcal{P})$ denotes the $\sigma$-algebra generated by $\mathcal{P}$. But the $\sigma$-algebra generated by $\mathcal{P}$ is the Borel $\sigma$-algebra on $U\times V$; it follows that $K_t=0$ almost everywhere in $U\times V$. Conclusion: Since we may cover $\Delta^c$ by a countable family of open sets of the form $U\times V$ where $U,V\subseteq\mathbb{R}^d$ are open and satisfy $U\cap V=\emptyset$, we conclude that $K_t=0$ almost everywhere in $\Delta^c$, and therefore $K_t=0$ almost everywhere in $\mathbb{R}^{2d}$. Since $K_t(x,y)\rightarrow f(x)K(x,y)f(y)$ for almost every $(x,y)\in\mathbb{R}^{2d}$ as $t\rightarrow \infty$, we conclude that $K=0$ almost everywhere in $\mathbb{R}^{2d}$. Contradiction!<|endoftext|> TITLE: Calculating the standard deviation of a circular quantity QUESTION [5 upvotes]: This is partially an algorithm question, but I think it is best asked in this stackexchange. I need to find the standard deviation of an angle bounded on the interval $(-\pi,\pi]$. Taking the mean of such a quantity is usually done by converting the angles to points on the unit circle, taking the mean of the x and y values of those points and converting the mean of x and y back into an angle. I don't think it's as straightforward to make such a conversion with the standard deviation.
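(For reference, here is a minimal NumPy sketch of the circular-mean recipe just described; the sample array is a made-up input.)

```python
import numpy as np

def circular_mean(angles):
    """Mean of angles in (-pi, pi] via the unit-circle embedding."""
    x = np.cos(angles)                     # map each angle to a point ...
    y = np.sin(angles)                     # ... on the unit circle
    return np.arctan2(y.mean(), x.mean())  # mean point back to an angle

angles = np.array([3.0, -3.0, 3.1])  # angles straddling the +/- pi cut
print(circular_mean(angles))         # ~3.13, while the naive mean is ~1.03
```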
My idea was to get the standard deviation of the points on the circle and convert the standard deviations in x and y to an angle using this classic equation for error propagation where $f(x, y) = \arctan{\left(\frac{y}{x}\right)}$. There seems to be a scipy function that is designed to do this, scipy.stats.circstd(). The problem is my implementation gives a very different result from the scipy function, and my result gives an answer closer to what I would expect for the physical situation I am applying it to. Additionally, I do not understand what the scipy function is doing. Its source code starts on line 2773 here. What is happening in their algorithm? What is the proper way to do this? EDIT: This is data generated from a simulation, and I don't know the underlying distribution of the angles. REPLY [4 votes]: I'm pretty sure Scipy's algorithm does the following: 1. Uniformly scales your sample angles into the range $[0, 2\pi]$ 2. Converts each sample angle into a "point" on the unit circle 3. Finds the mean of those points 4. Computes the norm of that mean point, call it $r$ 5. Computes $\sqrt{\log(1 / r^2)}$ 6. Scales this value back to the original range of the samples All the steps seem reasonable except maybe #5. Where does that formula come from? Well first we need to ask the question, where does this formula come from? $$\hat\sigma = \sqrt{\frac{1}{n-1}\sum_{i=1}^n(x_i-\hat\mu)^2},\ \ \ \ \ \hat\mu=\frac{1}{n}\sum_{i=1}^nx_i$$ I write $\hat\sigma$ and $\hat\mu$ with hats because they are really estimators of the true population values of standard deviation and mean. I'll let Wikipedia do the variance proof for you, i.e. prove that $\mathbf{E}(\hat\sigma^2)=\sigma^2$. Their results make use of some operators that we take for granted when our samples are from $\mathbb{R}$, like addition, subtraction, and a commutative multiplication. You have random angles though, which live on $\mathbb{SO2}$ and so we cannot be so care-free. Rather than trying to guess an estimator of variance for random variables on $\mathbb{SO2}$, we can take an alternative perspective. The normal distribution on $\mathbb{R}$ is special for many reasons, but one is that its variance is exactly one of the two parameters that defines its probability density function (pdf), $$\rho(x) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{\frac{-(x-\mu)^2}{2\sigma^2}}$$ We can "wrap" this pdf around the unit circle as described here by stacking up the density at $x$ with the density at $x+2\pi k,\ \ \forall k \in \mathbb{Z}$. Once we do this, we must limit our sample-space to $[0, 2\pi]$ (or any interval of length $2\pi$) or else we won't be satisfying the definition of a pdf: that its integral over the whole sample-space is 1. Make sure you shift your samples into this range before you use a wrapped pdf! Doing this results in the wrapped normal distribution; call its probability measure $P_o$ and its density $\rho_o$. Perhaps its variance can give us some insight into what a good "circular variance" estimator would look like. To compute the variance, we can't make use of the expectation over $\mathbb{R}$ like we usually do, $$\mathbf{E}(X) \neq \int_{0}^{\infty}x\rho_o(x)dx$$ for the reason I stated a moment ago. We'll take our samples as the unit phasors (unit-magnitude complex numbers), a valid representation of $\mathbb{SO2}$ complete over the real domain $[0, 2\pi]$.
$$\mathbf{E}(X) = \int_{\mathbb{SO2}}x\ dP_o = \int_{0}^{2\pi}e^{j\theta}\rho_o(\theta)d\theta$$ This is what is done here where they are able to compute the nth moment and then yank out an expression for $\sigma$ from it, which looks much like the scipy equation. In my description of the scipy algorithm, I put the word "point" in quotes because they are really treating your samples as unit phasors, not vectors. In the next section of the Wikipedia article, they find an (albeit biased) estimator of this $\sigma$ which is exactly what scipy computes. I would say that this method is "correct" in that it isn't ad-hoc. If you find that your estimator makes more sense for your application, perhaps it is really what you want, but I would consider the scipy function to be the "right" way to measure spread of angles.<|endoftext|> TITLE: How to prove that $K =\lim \limits_{n \to \infty}\left( \prod \limits_{k=1}^{n}a_k\right)^{1/n}\approx2.6854520010$? QUESTION [8 upvotes]: I was going through a list of important Mathematical Constants, when I saw the Khinchin's constant. It said that: If a real number $r$ is written as a simple continued fraction: $$r=a_0+\dfrac{1}{a_1+\dfrac{1}{a_2+\dfrac{1}{a_3+\dots}}}$$, where $a_k$ are natural numbers $\forall \,\,k$, then $\lim \limits_{n \to \infty} GM(a_1,a_2,\dots,a_n )= \lim \limits_{n \to \infty} \left(\prod \limits_{k=1}^{n}a_k\right)^{1/n}$ exists and is a constant $K \approx 2.6854520010$, except for a set of measure $0$. The first obvious question is why the value $a_0$ is not included in the Geometric Mean. I tried playing around with terms and juggling them but was unable to compute the limit. Also, is it necessary for $r$ to be "written-able" in the form of a continued fraction? Thanks in Advance! :-) REPLY [2 votes]: This answer won't shed much light on the theorem or its proof, but is aimed at answering your specific questions about the context of the statement. Petch Puttichai points out in a comment that there is a proof sketch on the Wikipedia page for Khinchin's constant. The number $a_0$ is the floor of $r$. When $0\leq r<1$, $a_0=0$. If $a_0$ were included in the geometric mean, it would make it zero on the interval $[0,1)$, which has positive measure, and it would have no effect when $r\geq 1$ because $\lim\limits_{n\to\infty}c^{1/n}=1$ if $c>0$. (And if $r<0$ you would have to worry about taking $n^\text{th}$ roots of a negative number.) Every real number $r$ has a simple continued fraction expansion with natural number $a_k$s for $k\geq 1$ ($a_0$ might be $0$ or a negative integer). It is a finite expansion if $r$ is rational, but the set of rational numbers has measure $0$, so they can be ignored here. Otherwise it is infinite, and you can compute coefficients by repeatedly subtracting, taking the reciprocal, and taking the floor. $ \begin{align*} a_0&=\lfloor r\rfloor,\\ a_1&=\left\lfloor \dfrac{1}{r-a_0}\right\rfloor,\\ a_2&=\left\lfloor\dfrac{1}{\dfrac{1}{r-a_0}-a_1} \right\rfloor,\\ a_3&=\left\lfloor\dfrac{1}{\dfrac{1}{\dfrac{1}{r-a_0}-a_1}-a_2}\right\rfloor, \end{align*} $ and so on. For example, take $r=\pi$: Then $r = 3.14...$, so $a_0=3$. Then $\dfrac{1}{r-3}= 7.06...$, so $a_1=7$. Then $\dfrac{1}{7.06... - 7}= 15.99...$, so $a_2=15$. One more: $\dfrac{1}{15.99... -15} = 1.003...$, so $a_3=1$.
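For concreteness, here is a small Python sketch of this floor-and-reciprocal recipe (the function name is mine, and floating-point round-off limits how many coefficients are trustworthy):

```python
from math import floor
from statistics import geometric_mean  # Python 3.8+

def cf_coefficients(r, count):
    """First `count` continued-fraction coefficients a_1, a_2, ... of r."""
    a = []
    x = r - floor(r)        # strip off a_0 first
    for _ in range(count):
        x = 1.0 / x         # take the reciprocal ...
        a.append(floor(x))  # ... and its floor is the next coefficient
        x -= a[-1]
    return a

coeffs = cf_coefficients(3.141592653589793, 12)
print(coeffs[:4])              # [7, 15, 1, 292]
print(geometric_mean(coeffs))  # ~3.4 here; convergence toward K ~ 2.685
                               # is slow, and typicality is unproven for pi
```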
This gives a sequence of approximations of $\pi$ starting with $3$, $3+\frac17=\frac{22}{7}$, $3+\frac{1}{7+\frac1{15}} = \frac{333}{106}$, and $3+\frac{1}{7+\frac{1}{15+\frac{1}{1}}}=\frac{355}{113}$, but continuing on in an infinite sequence converging to $\pi$. The Wikipedia article on continued fractions summarizes many results about them, including results that imply the sequence of "convergents" always converges to the number in question. As for the result in question here: I'm not credible on this topic, and your question first brought it to my attention, but commenters and links indicate the following: "The theorem is not easy to prove, the limit is not easy to compute except in some cases where it doesn't equal $K$ (but that's a set of measure zero)." - Gerry Myerson "Although almost all numbers satisfy this property, it has not been proven for any real number not specifically constructed for the purpose." -Wikipedia<|endoftext|> TITLE: Proving $n^2 \csc^2(nx) = \sum_{k=0}^{n-1}\csc^2\left(x+ k \frac{\pi}{n}\right)$ (without calculus?) QUESTION [7 upvotes]: I recently came across the following trigonometric identity in a test: $$ n^2 \csc^2(nx) = \sum_{k=0}^{n-1}\csc^2\left(x+ k \frac{\pi}{n}\right) $$ The question was to prove the result for any natural number $n$. What would be a good way to approach this problem? My initial line of thought was that since $\csc^2 x$ is the negative of the derivative of $\cot x$, the sum of the above series could be evaluated by differentiating a series involving the cotangent. Hence the question is equivalent to showing that: $$ n \cot(nx) = \sum_{k=0}^{n-1} \cot\left(x+ k \frac{\pi}{n}\right) $$ Taking the derivative of both sides with respect to the variable $x$ and multiplying the resulting equation by $-1$, we arrive at the required result. Although this does look simpler, I couldn't find a way to calculate the new sum. Could logarithms be used for this? Does this method work on further simplification? Or is there an alternative route to the answer (involving, for instance, complex numbers)? EDIT: It turns out that the method does indeed work, as explained in this answer, where the second summation has been calculated using basic trigonometric expansions and a bit of algebra. Nevertheless, is there a different way to prove the identity without using calculus? Or even better (ideally), from trigonometry alone? Invoking calculus in a trig problem of this sort seems a tad unnatural, unintuitive and unappealing to me. REPLY [2 votes]: Let $n$ be an integer, then: $$\sin{nθ}=\sinθ[\binom{n}{0}(2\cosθ)^{n-1}-\binom{n-1}{1}(2\cosθ)^{n-3}+\binom{n-2}{2}(2\cosθ)^{n-5}-\cdots]$$ $$\cos{nθ}=\frac{1}{2}[(2\cosθ)^{n}-\frac{n}{1}\binom{n-2}{0}(2\cosθ)^{n-2}+\frac{n}{2}\binom{n-3}{1}(2\cosθ)^{n-4}-\cdots]$$ You can get other identities by setting $$θ=\frac{π}{2}-ϕ; $$ and then consider different cases when $n$ is even or odd and so on. Either way, in the second series $\cos{nθ}$ is given in terms of powers of $\cosθ$; set $\cos{nϕ}$ to an arbitrary value, say $p$; then $\cos{(nϕ+2π)},\cos{(nϕ+4π)},\dots$ also satisfy the equation, hence $\cos{(ϕ)},\cos{(ϕ+\frac{2π}{n})},\cos{(ϕ+\frac{4π}{n})},\dots$ are the roots of the equation on the right hand side; there are exactly $n$ roots.
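(Before pressing on, a quick numerical sanity check of the target identity can be reassuring; a minimal Python sketch with arbitrary test values:)

```python
import numpy as np

n, x = 5, 0.37   # arbitrary test values
lhs = n**2 / np.sin(n * x)**2
rhs = sum(1 / np.sin(x + k * np.pi / n)**2 for k in range(n))
print(lhs, rhs)  # both ~27.055, agreeing to machine precision
```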
Let $$\cosθ=\frac{1}{q}.$$ Upon making this substitution, multiply by $q^n$ on both sides of the identity; then the roots of the new equation become $\sec{(ϕ)},\sec{(ϕ+\frac{2π}{n})},\sec{(ϕ+\frac{4π}{n})},\dots$ I'll consider the case when $n$ is odd, then $$\cos{nθ}=2^{n-1}(\cosθ)^{n}-\frac{n}{1}\binom{n-2}{0}2^{n-3}(\cosθ)^{n-2}+\cdots+(-1)^{\frac{n-1}{2}}n\cosθ$$ Making the said substitution and multiplying by $q^n$ yields: $$q^n\cos{nθ}=2^{n-1}-\frac{n}{1}\binom{n-2}{0}2^{n-3}q^2+\cdots+(-1)^{\frac{n-1}{2}}nq^{n-1}$$It is a well known fact that, after bringing everything to one side, the sum of the roots equals minus the coefficient of the $q^{n-1}$ term divided by the coefficient of $q^{n}$, therefore: $$\sum_{k=1}^{n}\sec{(ϕ+\frac{(2k-2)π}{n})}=(-1)^{\frac{n-1}{2}}n\sec{nϕ}$$ Furthermore, $p_1^2+...+p_n^2=(p_1+...+p_n)^2-2\sum_{i<j}p_ip_j$, but the sum of the roots taken two at a time is proportional to the coefficient of $q^{n-2}$, which is zero when $n$ is odd; thus when $n$ is odd we have: $$\sum_{k=1}^{n}\sec^2{(ϕ+\frac{(2k-2)π}{n})}=n^2\sec^2{nϕ}$$ A similar derivation goes when $n$ is even. For the sum of cosecants you may want to expand the sine in powers of sines and make the natural substitution $\sinθ=\frac{1}{q}$, and then use similar arguments with the roots so that the sum of the roots is a known coefficient. Furthermore, you can let $\sin^2{θ}=\frac{1}{q}$ and then let $p=q-1$, because $\cot^2{θ}+1=\csc^2{θ}$, so that by the same argument you can get the sum of cotangents and also of the cotangents squared; in the same manner one uses the cosines to build up the secant and then makes use of the secant/tangent identity to find the sum of tangents. If you are interested in the original series, you can derive them from the identity: $$\frac{\sin{θ}}{1-2x\cos{θ}+x^2}=\sinθ+x\sin{2θ}+x^2\sin{3θ}+\cdots\ \text{ad inf}$$ and: $$\frac{1-x^2}{1-2x\cos{θ}+x^2}=1+2x\cosθ+2x^2\cos{2θ}+\cdots\ \text{ad inf}$$ In both expressions you can expand the denominator as a geometric series if $|x|<1$ and then compare coefficients to finally get the expressions for $\sin{nθ}$ and $2\cos{nθ}$. (And by the way, the series at the beginning terminate whenever the binomial coefficient is either $\binom{n}{n-1}$ or $\binom{n}{n}$, and the signs alternate as plus, minus, plus, ...) As you can see, the mathematical analysis way is easier and more convenient whilst using trigonometry can be tedious; the coefficients of these series behave nicely in binomial notation. I hope it doesn't bother you that I'm posting this 2 years later, but I figured out I could employ trigonometry to solve your problem.<|endoftext|> TITLE: How to construct integer polynomial of degree $n$ which takes $n$ times the value $1$ and $n$ times the value $-1$ (in integer points) QUESTION [5 upvotes]: Question is like in title: For positive integer $n$ I want to construct a polynomial $w(x)$ of degree $n$ with integer coefficients such that there exist integers $a_1 < a_2 < \cdots$ at which $w$ takes the value $1$ ($n$ times) and the value $-1$ ($n$ times).<|endoftext|> TITLE: Existence of area for a norm sphere QUESTION [5 upvotes]: Let $\lVert \cdot \rVert \colon \mathbb{R}^d \to \mathbb{R}^{\geq0}$ be a norm. Prove that the area $A(S)$ of the unit-sphere $S = \{x \in \mathbb{R}^d : \lVert x \rVert = 1\}$ exists. Integral representation Denote by $\lVert \cdot \rVert_D, \lVert \cdot \rVert_E \colon \mathbb{R}^d \to \mathbb{R}^{\geq 0}$ an arbitrary norm and the Euclidean norm, respectively.
By geometric intuition, under proper smoothness assumptions, it seems to me that the area of $S$ is given by $$A(S) = \int_{S_D} \frac{n_D(\Theta) \bullet \hat{r}(\Theta)}{n(\Theta) \bullet \hat{r}(\Theta)} \left(\frac{\lVert x(\Theta) \rVert_E}{\lVert x_D(\Theta) \rVert_E}\right)^{d - 1} \mathrm{d} \Theta,$$ where $n, n_D$ are the unit-normals (unit Euclidean-norm), $\hat{r}$ is the radial unit vector, and $x, x_D$ are the points on the corresponding unit-spheres $S$ and $S_D$; when $\lVert \cdot \rVert_D$ is the Euclidean norm the normal factor reduces to $1 / (n(\Theta) \bullet n_D(\Theta))$. Intuition Intuitively, we think of mapping an infinitesimal piece of the $S_D$ sphere to $S$ by multiplication. The multiplication scales the area of the piece to the power $(d - 1)$ since the piece is stretched in every dimension. The scaled piece, and the corresponding piece on $S$ still have different normals. When the normals point in different directions, the contribution should be larger. Examples The formula works trivially when $\lVert \cdot \rVert = \lVert \cdot \rVert_D$. To compute the area of the maximum norm unit-sphere in $\mathbb{R}^2$, let $\lVert \cdot \rVert_D = \lVert \cdot \rVert_E$. Then in polar coordinates $\Theta \in [-\pi / 4, \pi / 4]$ we have $n_D(\Theta) = (\cos(\Theta), \sin(\Theta))$, $n(\Theta) = (1, 0)$, $x(\Theta) = (1, \tan(\Theta))$, and $\lVert x(\Theta) \rVert_E = 1 / \cos(\Theta)$. By symmetry, $$A(S) = 4 \int_{-\pi / 4}^{\pi / 4} \frac{1}{\cos(\Theta)^2} \mathrm{d} \Theta = 8.$$ To compute the area of the Manhattan norm unit-sphere in $\mathbb{R}^2$, let $\lVert \cdot \rVert_D = \lVert \cdot \rVert_E$. Then in polar coordinates $\Theta \in [0, \pi / 2]$ we have $n_D(\Theta) = (\cos(\Theta), \sin(\Theta))$, $n(\Theta) = (1, 1) / \sqrt{2}$, $x(\Theta) = (\cos(\Theta), \sin(\Theta)) / (\cos(\Theta) + \sin(\Theta))$, and $\lVert x(\Theta) \rVert_E = 1 / (\cos(\Theta) + \sin(\Theta))$. By symmetry, $$A(S) = 4 \int_{0}^{\pi / 2} \frac{\sqrt{2}}{(\cos(\Theta) + \sin(\Theta))^2} \mathrm{d} \Theta = 4 \sqrt{2}.$$ To compute the area of the Euclidean norm unit-sphere in $\mathbb{R}^3$, let $\lVert \cdot \rVert_D$ be the maximum norm. Then in Euclidean coordinates $-1 \leq x, y \leq 1$ and $z = 1$ we have $n(x, y, 1) = (x, y, 1) / \sqrt{x^2 + y^2 + 1}$, $n_D(x, y, 1) = (0, 0, 1)$, $x_D(x, y, 1) = (x, y, 1)$, and $\lVert x_D(x, y, 1) \rVert_E = \sqrt{x^2 + y^2 + 1}$. By symmetry, $$A(S) = 6 \int_{-1}^{1} \int_{-1}^{1} \frac{1}{\sqrt{x^2 + y^2 + 1}^3} \mathrm{d}y \mathrm{d}x = 4 \pi.$$ Compactness It can be proved that the unit-sphere of any norm is compact. Given that the integrand is continuous, the integrand is bounded by the extreme-value theorem. The area of the Euclidean sphere $S_E$ is known and finite. Therefore, the area $A(S)$ is an integral of a bounded function on a set $S_E$ of finite measure, and so a finite number. Measurability The integral is well-defined when the integrand is measurable. When $n$ is continuous, the integrand is continuous and therefore measurable. The problem is, there are norm-spheres which do not have continuous normal-fields, such as the maximum norm. The solution for the maximum norm is to partition the integral along the discontinuities (i.e. the faces of the cube). The question then seems to come down to: is it always possible to partition the norm-sphere into countably many measurable sets with continuous normal-fields? REPLY [2 votes]: I'll suggest two approaches, which one to prefer depends on what one knows about "area" (definition, basic results). Hausdorff measure Let's interpret the area as $(d-1)$-dimensional Hausdorff measure $\mathcal H^{d-1}$.
This measure has the property that $\mathcal H^{d-1}(f(E))\le L^{d-1} \mathcal H^{d-1}(E)$ for any Lipschitz map $f$ (with $L$ being its Lipschitz constant). Upper bound: pick $R$ such that the Euclidean ball $B_R$ of radius $R$ centered at the origin surrounds $S$. Let $f:\partial B_R\to S$ be the nearest-point projection. It is known that $f$ is Lipschitz with $L=1$. Hence, $\mathcal H^{d-1}(S) \le \mathcal H^{d-1}(\partial B_R) = \omega_{d-1}R^{d-1}$, with $\omega_{d-1}$ being the area of the unit Euclidean sphere. Lower bound: Let $r$ be such that the Euclidean ball $B_r$ of radius $r$ centered at the origin is surrounded by $S$. Use the nearest-point projection of $S$ onto $\partial B_r$ to conclude that $\mathcal H^{d-1}(S) \ge \omega_{d-1}r^{d-1}$. Thus, $\mathcal H^{d-1}(S)$ is finite and positive. Coarea formula Apply the Coarea formula $$ \int g(x)|\nabla u(x)|\,dx = \int_{\mathbb{R}} \int_{u^{-1}(t)}g(x)\,d\mathcal H^{d-1}(x)\,dt \tag{1}$$ with $g=\chi_{\{\|x\|\le 1\}}$ and $u(x)=\|x\|$. Since $|\nabla u|$ is bounded above and below by positive constants at the points of differentiability of $u$ (almost everywhere; $\nabla u(x)$ is a functional of dual norm $1$), the left-hand side of (1) is finite and positive, being comparable to the volume of the unit ball $\{\|x\|\le 1\}$. The right hand side is the integral $\int_0^1 t^{d-1} A(S)\,dt = A(S)/d$. Note that the existence of $\int_{u^{-1}(t)}g(x)\,d\mathcal H^{d-1}(x)$ for a.e. $t$ is a part of the statement of the coarea formula: thus, almost every level set of a Lipschitz function on a set of bounded measure has finite area. Since all level sets of $u$ are similar, the statement applies to all of them.<|endoftext|> TITLE: Simplest Proof of the Six Regular 4-Polytopes QUESTION [8 upvotes]: I'm not necessarily looking for a rigorous proof, more an outline of how an undergrad could count the regular 4D polytopes (and perhaps investigate what they look like) as explicably as possible. REPLY [3 votes]: Numberphile made a video on exactly this topic a while back. The idea is to consider the dihedral angle between adjacent faces for each of the five Platonic solids – a polychoron will consist of instances of one such solid. For the polychoron to be a valid polychoron, at least three cells must meet at an edge and the sum of dihedral angles in 3-dimensional space must be strictly less than 360°. Tetrahedron (dihedral angle 70.5°): three, four or five tetrahedra can share an edge in 3D without overlapping. Bent into 4D space, these configurations give the 5-cell, 16-cell and 600-cell respectively. Cube (90°): three cubes around an edge yields the 8-cell or tesseract. Four around an edge, however, completely fills it up, yielding the ordinary cubic honeycomb and not any polychoron. Octahedron (109.5°): only three octahedra can fit around an edge, yielding the 24-cell. Dodecahedron (116.6°): the situation is the same as for the octahedron, and yields the 120-cell. Icosahedron (138.2°): since this is larger than 120°, three of these cannot fit around an edge in the first place, so no regular polychoron has icosahedral cells.<|endoftext|> TITLE: For $a^3+b^3+c^3=3$ prove that $a^4b+b^4c+c^4a\leq3$ QUESTION [7 upvotes]: Let $a$, $b$ and $c$ be non-negative numbers such that $a^3+b^3+c^3=3$. Prove that: $$a^4b+b^4c+c^4a\leq3$$ This inequality is similar to the following. Let $a$, $b$ and $c$ be non-negative numbers such that $a^2+b^2+c^2=3$. Prove that: $$a^3b+b^3c+c^3a\leq3,$$ which follows from the following identity. $$(a^2+b^2+c^2)^2-3(a^3b+b^3c+c^3a)=\frac{1}{2}\sum_{cyc}(a^2-b^2-ab-ac+2bc)^2.$$ I tried Rearrangement. Let $\{a,b,c\}=\{x,y,z\}$, where $x\geq y\geq z$.
Hence, $$a^4b+b^4c+c^4a=a^3\cdot ab+b^3\cdot bc+c^3\cdot ca\leq x^3\cdot xy+y^3\cdot xz+z^3\cdot yz=$$ $$=y(x^4+y^2xz+z^4)$$ and I don't see how to finish from here. Thank you! REPLY [3 votes]: The Buffalo Way works. However it is not nice. Let me describe Michael Rozenberg's solution. We have \begin{align} \sum_{\mathrm{cyc}} a^4b &\le \frac{1}{3}\sum_{\mathrm{cyc}} (a^{9/2}b^{3/2} + a^{9/2}b^{3/2} + a^3)\\ &= \frac{2}{3} \sum_{\mathrm{cyc}} a^{9/2}b^{3/2} + \frac{1}{3} (a^3+b^3+c^3)\\ &\le \frac{2}{9} (a^3+b^3+c^3)^2 + \frac{1}{3} (a^3+b^3+c^3)\\ &= 3 \end{align} where we have used Vasc's inequality $(x^2+y^2+z^2)^2\ge 3(x^3y+y^3z+z^3x)$ to obtain $\sum_{\mathrm{cyc}} a^{9/2}b^{3/2} \le \frac{1}{3}(a^3+b^3+c^3)^2$. This inequality can be used to prove the following inequality: Let $x,y,z>0$ and $x^7+y^7+z^7=3$. Prove that $\frac{x^4}{y^3}+\frac{y^4}{z^3}+\frac{z^4}{x^3}\ge 3$. Proof: Using AM-GM, we have $7\frac{x^4}{y^3} + 9 x^{28/3} y^{7/3} \ge 16 x^7$ which results in $$7 \sum_{\mathrm{cyc}} \frac{x^4}{y^3} + 9 \sum_{\mathrm{cyc}} x^{28/3} y^{7/3} \ge 16 (x^7+y^7+z^7). $$ It suffices to prove that $\sum_{\mathrm{cyc}} x^{28/3} y^{7/3} \le 3$. Let $a = x^{7/3}, \ b = y^{7/3}, \ c = z^{7/3}$. The condition becomes $a^3+b^3+c^3 = 3$. We need to prove that $a^4b+b^4c+c^4a \le 3$. We are done.<|endoftext|> TITLE: what operation repeated $n$ times results in the addition operator? QUESTION [24 upvotes]: I had a difficult time phrasing my question. But I was wondering if there is an operation that, when repeated n times, results in the addition operator. Same way as repeating addition n times results in the multiplication operator, and repeating multiplication n times results in the exponentiation operator etc. So $n$ times addition of a number $x$ results in $x\times n$. And $n$ times multiplication of a number $x$ results in $x^n$ Then my question is $n$ times ...what... results in the number $x+n$. Let's call this operator: $@$. For example, the following would then hold: $$a\times a=a^2$$ $$a+a=a\times 2$$ $$a@a=a+2$$ My question is, does it make any sense thinking of such an operator, is there anything known about it, can it be followed through even further like $a\sim a = a@2$? REPLY [2 votes]: There are already amazing answers! But I'll try to give some additional info since this question was asked a lot of times in the past (and it will be asked again in the future right?) and I gave some answers too. As Hagen von Eitzen and Brevan Ellefsen said $$a\circ b:= \max (a,b)+1+\delta_{ab}$$ is one of the solutions of the equation $a\circ(a+n)=a+n+1$ , where $a$ and $n$ range over some restricted domains. This particular solution was called Zeration by Rubtsov and Romerio: you can easily find the details of the story in the Zeration thread that was linked by Brevan Ellefsen. Another solution is $\max(a,b)+1$ which is commutative and continuous, but in the linked thread, a user, Tetration Forum's founder, also noticed that it is possible to find a non-commutative solution too.
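For anyone who wants to experiment, here is a minimal Python sketch of that candidate operation (the function name is mine):

```python
def zeration(a, b):
    """The max(a, b) + 1 + delta_{ab} candidate described above."""
    return max(a, b) + 1 + (1 if a == b else 0)

a = 5
print(zeration(a, a))  # 7 == a + 2, mirroring a + a = a*2 and a*a = a**2

# The defining relation a o (a+n) = a+n+1: each application climbs by one.
x = a
for _ in range(3):
    x = zeration(a, x)
print(x)               # 9: the first step gives a+2, each later step adds 1
```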
You can find an updated discussion of the topic in this thread: Zeration Update @ Tetration Forum. Similar questions have also been answered here at MSE: for example, this one is almost identical: Does anything precede incrementation in the operator “hierarchy?”; see also remark 1 of this answer I wrote 2 years ago: More info on Zeration - Go to remark 1.<|endoftext|> TITLE: Finding $\lim_{ n \to \infty }(1-\tan^2\frac{x}{2})(1-\tan^2\frac{x}{4})(1-\tan^2\frac{x}{8})...(1-\tan^2\frac{x}{2^n})=?$ QUESTION [7 upvotes]: Find the limit: $$\lim_{ n \to \infty }(1-\tan^2\frac{x}{2})(1-\tan^2\frac{x}{4})(1-\tan^2\frac{x}{8})...(1-\tan^2\frac{x}{2^n})=?$$ My try: $$1-\tan^2 y = \frac{2\tan y }{\tan(2y)}$$ $$\lim_{ n \to \infty }\left( \frac{2\tan\frac{x}{2} }{\tan(x)}\right)( \frac{2\tan\frac{x}{4} }{\tan(\frac{x}{2})})( \frac{2\tan\frac{x}{8} }{\tan(\frac{x}{4})})...( \frac{2\tan\frac{x}{2^n} }{\tan(\frac{x}{2^{n-1}})})=?$$ Now? REPLY [7 votes]: Note that the factors in the denominator and numerator cancel, leaving only $$\lim_{n \to \infty} 2^{n} \frac{\tan \frac{x}{2^n}}{\tan x}$$ in your original equation. However, as $\lim\limits_{n \to \infty} \frac{x}{2^n}=0$, we have that $$\lim_{n \to \infty} 2^n \tan \frac{x}{2^n}=\lim_{n \to \infty}x \times \dfrac{ \tan \frac{x}{2^n}}{\frac{x}{2^n}} =x$$ Using the fact that $\lim\limits_{a \to 0}\frac{\tan a}{a}=1$. So the limit becomes $$\lim_{n \to \infty} 2^n \times {\tan \frac{x}{2^n}} \times \frac{1}{\tan x}= \frac{x}{\tan x}=x \cot x$$ The answer is $x \cot x$.<|endoftext|> TITLE: If a matrix commutes with a set of other matrices, what conclusions can be drawn? QUESTION [27 upvotes]: I have a very specific example from a book on quantum mechanics by Schwabl, in which he states that an object which commutes with all four gamma matrices, $$ \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & -1\\ \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0\\ 0 & -1 & 0 & 0\\ -1 & 0 & 0 & 0\\ \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & -i\\ 0 & 0 & i & 0\\ 0 & i & 0 & 0\\ -i & 0 & 0 & 0\\ \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 & 0\\ 0 & 0 & 0 & -1\\ -1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ \end{pmatrix}, $$ must be a multiple of the unit matrix. These matrices don't seem to span all $4 \times 4$ matrices so why would this be the case? I have asked around but no one seems to know the answer. REPLY [10 votes]: For two matrices to commute, it is necessary that each matrix preserves the eigenspaces of the other matrix (that is, it can't map part of an eigenspace onto a different eigenspace). As multiples of the identity matrix do not change eigenspaces and have all vectors in the same eigenspace, they commute with all other matrices. If $A$ and $B$ commute, and $x$ is an eigenvector of $A$ with eigenvalue $\lambda$, then, $$ ABx=BAx=B\lambda x=\lambda Bx $$ and thus $Bx$ must also be an eigenvector of $A$ with the same eigenvalue... or a zero vector. Suppose that $v=(a,b,c,d)$ is an eigenvector of our matrix. Then we must find, in the same eigenspace, $(a,b,-c,-d)$, $(d,c,-b,-a)$, $(-d,c,b,-a)$, and $(c,-d,-a,b)$ - (I have dropped the $i$ from the third matrix, as it's just a constant multiplier). Adding and subtracting the middle two together, we can also see that $(0,c,0,-a)$ must be in the eigenspace, as must $(d,0,-b,0)$. At least one of these must be non-zero. Let's assume that $(0,c,0,-a)$ is non-zero.
Then we can also see that $(0,c,0,a)$ is in the eigenspace (from the first matrix), and so both $(0,c,0,0)$ and $(0,0,0,a)$ are in the eigenspace. Let's assume that $(0,0,0,a)$ is non-zero, and thus $(0,0,0,1)$ is in the eigenspace. Now, from the last matrix, $(0,1,0,0)$ is in the eigenspace. From the third matrix, applied to these two vectors, we can then determine that $(0,0,1,0)$ and $(1,0,0,0)$ are in the eigenspace. A similar analysis works if you assume $b$, $c$, or $d$ is non-zero. Therefore, the eigenspace is the set of all 4D vectors, all sharing the same eigenvalue. This tells us that our matrix must be the identity matrix multiplied by that eigenvalue.<|endoftext|> TITLE: Proof of Polya's theorem (Probability Theory) QUESTION [5 upvotes]: I know a theorem called Polya's theorem: $X_n \rightarrow X$ in distribution as $n\rightarrow \infty$ is equivalent to $\sup_{x} | F_n(x) -F(x)| \rightarrow 0$ as $n \rightarrow \infty$, where $F_n, F$ are distribution functions of $X_n$ and $X$, respectively. Do you know where I can find the proof of this theorem? Or do you have hints to prove it? REPLY [14 votes]: The supremum should be over all $x$, not over all $n$; otherwise it's pointless. This statement fails in general without the assumption that the limiting function is continuous. Counterexamples are quite obvious. Say, a sequence $X_n$ with CDF $F_n(x)=x^n$ on $(0,1)$ (extended by $0$ to the left and $1$ to the right) converges to $X=1$ in distribution, but $\sup_x | F_n(x) -F(x)|=1$. The proof is straightforward. Since $F$ is continuous, for any $k\geq 1$ there exist points $-\infty=x_0 < x_1 < \cdots < x_k = +\infty$ with $F(x_j) - F(x_{j-1}) \leq 1/k$ for each $j$.<|endoftext|> TITLE: Proving Saalschutz Theorem QUESTION [5 upvotes]: I saw this in a pdf, and I'm wondering: how do you prove Saalschutz's Theorem: $$_3F_2\left[\begin{array}{c}-x,-y,-z\\n+1,-x-y-z-n\end{array}\right]=\dfrac {\Gamma(n+1)\Gamma(x+y+n+1)\Gamma(y+z+n+1)\Gamma(z+x+n+1)}{\Gamma(x+n+1)\Gamma(y+n+1)\Gamma(z+n+1)\Gamma(x+y+z+n+1)}\tag{1}$$ I'm relatively new to hypergeometric series. I understand that the general hypergeometric series takes the form$$_pF_q\left[\begin{array}{c}\alpha_1,\alpha_2,\ldots,\alpha_p\\\beta_1,\beta_2,\ldots,\beta_q\end{array};x\right]=\sum\limits_{k=0}^{\infty}\dfrac {(\alpha_1)_k(\alpha_2)_k\ldots(\alpha_p)_k}{(\beta_1)_k(\beta_2)_k\ldots(\beta_q)_k}\dfrac {x^k}{k!}\tag{2}$$ Therefore, by $(2)$, we should have$$_3F_2\left[\begin{array}{c}-x,-y,-z\\n+1,-x-y-z-n\end{array}\right]=\sum\limits_{k=0}^{\infty}\dfrac {(-x)_k(-y)_k(-z)_k}{(n+1)_k(-x-y-z-n)_k}\dfrac{1}{k!}\tag{3}$$ However, I'm not sure how to manipulate the RHS of $(3)$ to get the RHS of $(1)$.
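(At least the identity survives a numerical check. A hedged mpmath sketch, with arbitrary parameter values; $z$ is taken to be a nonnegative integer so that the series terminates, which is the classical hypothesis of the theorem:)

```python
from mpmath import mp, hyp3f2, gamma

mp.dps = 30                    # working precision
x, y, z, n = 1.5, 2.25, 3, 2   # arbitrary; integer z makes the sum terminate

# Left side of (1): the 3F2 evaluated at unit argument
lhs = hyp3f2(-x, -y, -z, n + 1, -x - y - z - n, 1)

# Right side of (1): the product of gamma functions
rhs = (gamma(n + 1) * gamma(x + y + n + 1) * gamma(y + z + n + 1)
       * gamma(z + x + n + 1)) / (gamma(x + n + 1) * gamma(y + n + 1)
                                  * gamma(z + n + 1) * gamma(x + y + z + n + 1))
print(lhs)
print(rhs)  # the two printed values agree to the working precision
```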
EDIT: Since $(a)_k=\Gamma(a+k)/\Gamma(a)$, the RHS of $(3)$ becomes$$\dfrac {(-x)_k(-y)_k(-z)_k}{(n+1)_k(-x-y-z-n)_k}=\dfrac {\Gamma(k-y)\Gamma(n+1)\Gamma(k-x)\Gamma(-x-y-z-n)\Gamma(k-z)}{\Gamma(n+k+1)\Gamma(-x)\Gamma(-y)\Gamma(-z)\Gamma(-x-y-z-n+k)}$$Now, I need to figure out how$$\Gamma(k-y)\Gamma(k-x)\Gamma(k-z)\Gamma(-x-y-z-n)=\Gamma(x+y+n+1)\Gamma(y+z+n+1)\Gamma(x+z+n+1)$$$$\Gamma(n+k+1)\Gamma(-x)\Gamma(-y)\Gamma(-z)\Gamma(-x-y-z-n+k)=\Gamma(x+n+1)\Gamma(y+n+1)\Gamma(z+n+1)\Gamma(x+y+z+n+1)$$ Extra: I also believe that using the same general approach, we can prove$$\begin{align*} & _7F_6\left[\begin{array}{c}n,\frac 12n+1,-x,-y,-z,-u,x+y+z+u+2n+1\\\frac 12n,x+n+1,y+n+1,z+n+1,u+n+1,-x-y-z-u-n\end{array}\right]\\ & =\dfrac {\Gamma(x+n+1)\Gamma(y+n+1)\Gamma(z+n+1)\Gamma(u+n+1)\Gamma(x+y+z+n+1)}{\Gamma(n+1)\Gamma(x+y+n+1)\Gamma(y+z+n+1)\Gamma(x+u+n+1)\Gamma(z+u+n+1)}\\ & \times\dfrac {\Gamma(y+z+u+n+1)\Gamma(x+u+z+n+1)\Gamma(x+y+u+n+1)}{\Gamma(x+z+n+1)\Gamma(y+u+n+1)\Gamma(x+y+z+u+n+1)}\end{align*}\tag{4}$$ REPLY [4 votes]: Prove$$_3F_2\left[\begin{array}{c}-x,-y,-z\\n+1,-x-y-z-n\end{array}\right]=\dfrac {\Gamma(n+1)\Gamma(x+y+n+1)\Gamma(y+z+n+1)\Gamma(z+x+n+1)}{\Gamma(x+n+1)\Gamma(y+n+1)\Gamma(z+n+1)\Gamma(x+y+z+n+1)}$$ Proof: Begin with the identity$$(1-z)^{a+b-c}\space{}_2F_1(a,b;c;z)={}_2F_1(c-a,c-b;c;z)\tag1$$This can be easily proven by setting both solutions of the second order linear differential equation$$(z-z^2)\frac {d^2y}{dz^2}+\left\{c-(a+b+1)z\right\}\frac {dy}{dz}-aby=0$$equal to each other, and changing the dependent variables. Starting with $(1)$, rewrite it as a summation, and then find the coefficient of $z^n$.$$\sum\limits_{k=0}^\infty\frac {(c-a-b)_k}{k!}z^k\sum\limits_{r=0}^{\infty}\frac {(a)_r(b)_r}{(c)_r}\frac {z^r}{r!}=\sum\limits_{l=0}^{\infty}\frac {(c-a)_l(c-b)_l}{(c)_l}\frac {z^l}{l!}\tag2$$The coefficient of $z^n$ of $(2)$ is therefore$$\sum\limits_{r=0}^{\infty}\frac {(a)_r(b)_r(c-a-b)_{n-r}}{(n-r)!(c)_rr!}=\frac {(c-a)_n(c-b)_n}{(c)_nn!}\tag3$$And from $(3)$, using $(c-a-b)_{n-r}=\frac{(-1)^r(c-a-b)_n}{(1+a+b-c-n)_r}$ and $\frac{1}{(n-r)!}=\frac{(-1)^r(-n)_r}{n!}$, it follows that the left-hand side is equal to$$\sum\limits_{r=0}^{\infty}\frac {(a)_r(b)_r}{(c)_r}\frac {(c-a-b)_n(-n)_r}{(1+a+b-c-n)_r\,n!\,r!}={}_3F_2\left[\begin{array}{c}a,b,-n\\c,1+a+b-c-n\end{array}\right]\frac {(c-a-b)_n}{n!}$$Equating to the right-hand side of $(3)$, and simplifying, we get the identity$$_3F_2\left[\begin{array}{c}a,b,-n\\c,1+a+b-c-n\end{array}\right]=\frac {\Gamma(c)\Gamma(c-a-b)\Gamma(n+c-b)\Gamma(n+c-a)}{\Gamma(c-b)\Gamma(c-a)\Gamma(c+n)\Gamma(n+c-a-b)}\tag4$$Replacing $a=-x,b=-y,n=z,$ and $c$ with $n+1$, we deduce Saalschutz's theorem.<|endoftext|> TITLE: Is most of mathematics independent of set theory? QUESTION [19 upvotes]: Is most of mathematics independent of set theory? Reading this quote by Noah Schweber: most of the time in the mathematical literature, we're not even dealing with sets! it seems that the answer to my question is "yes". But why? When I read in the mathematical literature, sets appear everywhere – we need them in the definitions of groups, rings, vector spaces, ... However, is there some truth in the quote, and in what sense? REPLY [11 votes]: Most mathematics can be translated into some suitable set theory such as ZFC. That is certainly true! However, it is totally different from the claim that most mathematics deals with sets. From the point of view within ZFC, of course it is true because in ZFC there is nothing else except sets!
But in fact, most ordinary mathematics can be translated into a very very weak system called ACA, which has no internal notion of sets of sets. Furthermore, there are alternative systems such as type theories that some logicians even argue to be more natural than ZFC as a foundation for mathematics. That is naturally a subjective opinion, but the objective fact is that there are indeed different formal systems that can 'do the same thing', so to speak, even though one might 'think' of everything as sets while another 'thinks' that there are urelements. Ultimately I agree with the main point in Qiaochu Yuan's answer. Namely that mathematics is nearly always based on existing structures that obey some properties. For elementary number theory, we do not care what the internals of each natural number are, as long as the collection of natural numbers together with the arithmetic operations on them, as a whole, satisfy the Peano axioms. This abstraction would be a valid reason to say that, most of the time, when we use natural numbers we do not actually deal with sets, since it does not matter even if natural numbers are urelements! Of course, this mirrors the idea in programming that we write high-level programs and do not deal with CPU instructions in the sense that we do not actually care how our programs are translated to CPU instructions. We only care that our program behaves in a way that is solely determined by the high-level programming language. Take Java for example. On different machines it is necessarily translated into different CPU instructions, but that is done by the Java environment; in our terms the Java language is the abstraction that frees us from 'bare-metal' concerns. In the same way our mathematical axiomatizations (like PA for natural numbers) free us from foundational concerns. Finally, there is a huge benefit to working with abstractions rather than with underlying implementations. In case we ever wish to use a different foundational system for whatever reason, abstraction makes it far easier to identify which parts transfer over without change to the new system. We can in fact precisely reason about such general transfer via interpretability.<|endoftext|> TITLE: Cohen-Macaulay and connected implies equidimensional? QUESTION [5 upvotes]: I'm asking for a reality check. It seems to me that since Cohen-Macaulay rings are locally equidimensional, such a ring is either equidimensional or else disconnected (with different dimensions occurring on different connected components). But I have not slept enough, so I don't trust my reasoning. Is it sound? Addendum: Having given this more (and better-slept) thought, it is not at all clear to me. I know that a local Cohen-Macaulay ring is equidimensional, in other words, all minimal primes have the same dimension. (E.g. Eisenbud, Commutative Algebra with a View Toward Algebraic Geometry, Corollary 18.11.) (Edit 2/24/17: The following proof is flawed. See second addendum.) It follows from this that for an arbitrary (noetherian) Cohen-Macaulay ring $R$, if $\operatorname{Spec} R$ is connected, then all the minimal primes $\mathfrak{p}$ have the same dimension. I.e. $\dim R/\mathfrak{p}$ does not depend on $\mathfrak{p}$. Proof: two minimal primes of differing dimension cannot be contained in the same maximal, by localizing at that maximal and applying the result for local rings. Therefore, sort the minimal primes according to dimension.
There are only finitely many minimal primes, so finitely many classes containing finitely many primes each, and one can build a partition of $\operatorname{Spec} R$ into finitely many disjoint closed sets corresponding to the classes in this way. Connectedness implies there is only one class, thus all minimal primes have the same dimension. Geometrically, this is the statement that the irreducible components of $\operatorname{Spec} R$ are all the same dimension. But I thought equidimensionality also meant that the maximals all have the same height. Here, I don't see the argument. Imagine one maximal that covers all the minimals with height $1$, say, and another maximal that lies above all the minimals with height $2$, with a lot of height 1 primes in between. This yields a connected $\operatorname{Spec}$ since a single closed set containing all the minimal primes will be everything, but if two different closed sets each contain a minimal prime, then they overlap at the maximals. This scenario is not ruled out by any property I can think of possessed by the poset of primes of a Cohen-Macaulay ring: it is catenary (since it is a ranked poset, ranked by height); the minimals are all dimension 2; etc. But yet I see numerous references on the internet to the notion that Cohen-Macaulay rings are equidimensional (sometimes without even a connected assumption, which can't be right; e.g. $k\times k[x]$ is Cohen-Macaulay). E.g. here. Perhaps they mean the local case? Or are using a weaker notion of equidimensionality? Second addendum, 2/24/17: I am no longer convinced by the above argument that the minimal primes in a Cohen-Macaulay ring with a connected spectrum must have the same dimension. It is the case that the minimal primes under a given maximal have the same-length saturated chains to that maximal, but it doesn't follow that they have the same dimension: perhaps one of them is also under another maximal with a greater height, while the other is not. On the other hand: if the Cohen-Macaulay ring with the connected spectrum happens to be integral over an equidimensional Cohen-Macaulay subring (which is guaranteed, e.g. for finite-type $k$-algebras by Noether normalization, per conv. with MooS below her/his answer), then I have convinced myself it is equidimensional. Here is why I think so: Let the Cohen-Macaulay ring be $R$ and let the equidimensional Cohen-Macaulay subring be $S$; say its dimension is $d$. Since $S$ is Cohen-Macaulay, the result quoted above for local rings, together with the fact that Cohen-Macaulay rings are catenary, implies that the lengths of any two saturated chains of primes from a given maximal to two minimals under it are equal. Since $S$ is equidimensional, this common length must be $d$ for every maximal. Thus, the length of any saturated chain between any minimal prime and any maximal is $d$. It follows from a second application of catenariness that for any prime ideal of $S$, and any saturated chain from a minimal to a maximal containing this prime, the position of this prime in this chain depends only on the prime and not on the chain. Now consider any saturated chain of primes between a minimal prime $\mathfrak{p}$ and a maximal $\mathfrak{m}$ in $R$. Say its length is $\ell$. Intersecting with $S$, one obtains a chain of primes, and due to integrality, it has the following three properties: (1) it is saturated; (2) its "top end" is a maximal; (3) it has length $\ell$. (1) and (2) are due to going-up and (3) is due to incomparability.
(Edit 2/24/17: I'm no longer convinced going-up implies (1). Grr! See third addendum.) Let $h(\mathfrak{p})$ be the height of $\mathfrak{p}\cap S$ in $S$. By the above discussion, $h(\mathfrak{p}) = d - \ell$; thus $\ell$ doesn't depend on $\mathfrak{m}$ or the choice of chain, but only on $\mathfrak{p}$. Connectedness of the spectrum (at least for noetherian rings) is equivalent to connectedness of the bipartite graph with vertex classes the minimal and maximal primes, and edges indicating containment. (Proof: two closed sets $V(\mathfrak{p}), V(\mathfrak{q})$ of the spectrum, where $\mathfrak{p},\mathfrak{q}$ are minimal primes, meet iff there is a maximal containing both $\mathfrak{p},\mathfrak{q}$. So if the graph is disconnected, there is a partition of these finitely many closed sets $V(\text{minimal prime})$, which cover the spectrum, into disjoint classes, and this disconnects the spectrum. Conversely, if the spectrum is disconnected, there are two disjoint nonempty closed sets; they must both meet the minimal primes and contain everything above the ones they meet, and then the way they partition the minimal primes also partitions the graph.) Thus, between any two minimal primes $\mathfrak{p},\mathfrak{q}$, there is a path in the graph $$\mathfrak{p} = \mathfrak{p}_1 \subset \mathfrak{m}_1\supset \mathfrak{p}_2\subset \mathfrak{m}_2\supset \dots\subset\mathfrak{m}_r\supset \mathfrak{p}_{r+1} = \mathfrak{q}$$ Because $R$ is Cohen-Macaulay, by the same logic as for $S$ above (i.e. by localizing at $\mathfrak{m}_i$) we conclude that the lengths $\ell_i$ and $\ell_i'$ of saturated chains between $\mathfrak{m}_i$, and $\mathfrak{p}_i$ and $\mathfrak{p}_{i+1}$ respectively, are equal, for each $i$. Thus $h(\mathfrak{p}_i)$ and $h(\mathfrak{p}_{i+1})$ are equal. And thus $h(\mathfrak{p})$ and $h(\mathfrak{q})$ are equal. Thus $h$ is a constant function on the minimal primes of $R$. Now take any minimal prime of $S$. By lying-over, there is some prime of $R$, say $\mathfrak{p}^*$, lying over it, which must be minimal by incomparability, and we have $h(\mathfrak{p}^*)=0$. Therefore, $h$ is identically zero on the minimal primes of $R$. Thus for any saturated chain in $R$ from a minimal $\mathfrak{p}$ to a maximal $\mathfrak{m}$, of length $\ell$, we have $0=h(\mathfrak{p}) = d - \ell$, and we conclude $\ell=d$. Therefore, all maximals have height $d$ and all minimals have dimension $d$, i.e. $R$ is equidimensional of dimension $d$. Third addendum, 2/24/17: There is a soft spot in the proof of the second addendum as well. I found another unjustified assumption I made probably due to being overly familiar with the good behavior of the coordinate rings of varieties. Going-up doesn't imply by itself that the intersection of a saturated chain in $R$ with $S$ yields a saturated chain in $S$, although it may well be true in the present context for other reasons. I asked a followup question on this point here. Fourth addendum, 2/26/17: Given the track record at this point, there's no reason to expect me not to find another mistake; however, I believe I have proven this variant of the statement in the second addendum: If a Cohen-Macaulay ring has a connected spectrum and is finite (strengthened from integral) over a subring that is an integrally closed, equidimensional, catenary noetherian domain, then it is equidimensional. This OP is already too long without me adding another detailed proof, but it very loosely follows the pattern of the argument in the 2nd addendum, i.e.
argues that all maximal chains in $S$ have length $d$ and then shows $h(\mathfrak{p})=0$ using connectedness. Here is a reasonably complete outline: I. Show all maximal chains in $S$ have the same length $d$ using equidimensionality + catenariness. II. Consider the set $W_0$ of primes of $R$ whose intersection with $S$ is zero. (a) They are all minimal and dimension $d$. (b) Fixing one of them $\mathfrak{p}$, and a target maximal $\mathfrak{m}$ containing it, construct a chain of length $d$ to $\mathfrak{m}$ by (b1) passing temporarily to $R/\mathfrak{p}$ which is a domain containing $S$; then passing to their fraction fields; then forming the normal closure of $\operatorname{Frac}(R/\mathfrak{p}) /\operatorname{Frac}S$ and taking $S$'s integral closure $B$ in it; (b2) invoking going-up to construct a chain of length $d$ in $B$ with top term lying over $\mathfrak{m}\cap S$, and lying-over to find another prime in $B$ lying over $\mathfrak{m}$; (b3) using a Galois automorphism to move the chain until its top term is the prime we found lying over $\mathfrak{m}$; (b4) intersecting with $R$ yielding the desired chain. (c) By Cohen-Macaulayness, a maximal $\mathfrak{m}$ of $R$ that contains one of the $\mathfrak{p}$'s in $W_0$ (and is therefore height $d$) can't also contain a minimal prime $\mathfrak{q}$ with $h(\mathfrak{q}) >0$. Thus if there are any such minimal primes, the graph described in the 2nd addendum is disconnected. This contradicts the fact that the spectrum of $R$ is connected, so there are no such primes. Therefore all minimals are in $W_0$, so are dimension $d$ by (a), and all maximals contain one of these, so are height $d$ by (b). Conclude equidimensionality. Fifth addendum, 3/13/17: Fourth addendum holds up I believe, but the argument in (b) can be simplified by replacing the Galois theory just by passing to $R/\mathfrak{p}$ and then invoking the going-down theorem for $S\subset R/\mathfrak{p}$. REPLY [2 votes]: Your doubts are justified. Consider $R=k[x,y]$ and the multiplicative system $S = R \setminus (P \cup Q)$ with $P=(x,y)$ and $Q=(x-1)$. $S^{-1}R$ is not equidimensional since its maximal ideals are $P$ and $Q$ of height $2$ and $1$ respectively. But $S^{-1}R$ is clearly Cohen-Macaulay, since all its localizations are regular local rings. When you read at some point that the author concludes equidimensionality from the Cohen-Macaulay property, he probably either works in the local case or in the case of a finite type $k$-algebra. Or he simply made a mistake.<|endoftext|> TITLE: Cyclic Group Generators of Order $n$ QUESTION [8 upvotes]: How many generators does a cyclic group of order $n$ have? I know that a cyclic group can be generated by just one element while using the operation of the group. I am having trouble coming up with the generators of a group of order $n$. Any help would be great! Thanks! REPLY [2 votes]: Let $G$ be a cyclic group of order $n$. Let $g$ be a generator of $G$. Then, $G = \{e, g, g^2, \cdots, g^{n-1}\}$. If $h = g^i$ is a generator of $G$, then, $h^k = g^{k i} = g$ for some $k \in \mathbb{Z}$. So, $g^{k i - 1} = e$. So, $k i - 1 = (-l) n$ for some $l \in \mathbb{Z}$. $\therefore k i + l n = 1$. So, $\gcd(i, n) = 1$. Conversely, if $\gcd(i, n) = 1$, then, there exist $k, l \in \mathbb{Z}$ such that $k i + l n = 1$. $g = g^1 = g^{k i + l n} = g^{k i} (g^n)^l = g^{k i} e^l = g^{k i} e = g^{k i}$. So, $g^i$ is a generator of $G$.
$\therefore$ There exist $\#\{i \in \{0, 1, \cdots, n-1\} | \gcd(i, n) = 1\}$ generators in $G$; this count is exactly Euler's totient $\varphi(n)$.<|endoftext|> TITLE: Can a real ODE have a complex solution? QUESTION [10 upvotes]: By a real ODE I mean an ordinary differential equation with only real coefficients, where the resulting function is a function of a real argument. If such a solution exists, can you give an example? Edit: To add to this, is it still possible if the initial conditions must also be real? REPLY [3 votes]: Try $$\left[\frac{dy}{dx}\right]^2 = -(y^2 + 1)$$ This equation can have no real solution at all. Proof by contradiction: assume $y(x)$ is a real valued solution. Then $\left[\frac{dy}{dx}\right]^2$ is real as well, but that implies $\sqrt{-(y(x)^2 + 1)}$ is real, yet $y(x)^2 + 1 > 0$ always no matter what real function $y(x)$ is, thus $-(y(x)^2 + 1) < 0$ always and so this square root can never be real. Contradiction. EDIT: I just threw it into Wolfram, and it looks like all solutions may be real valued at some isolated points -- but a solution that is real only at isolated points is not a real-differentiable function of a real variable! TITLE: Physicist trying to understand GIT quotient QUESTION [12 upvotes]: I am reading Nakajima's textbook on Hilbert Schemes. I am trying to understand some very basic facts about the GIT quotient. We start with a vector space $V$ over $\mathbb{C}$. Let $G \subset U(V)$ be a Lie group and $G^{\mathbb{C}}$ its complexification so I guess $G^{\mathbb{C}} \subset GL_V$. I will denote by $G$ the complexification from now on. Apparently $V/G$ is a very badly behaved space. I do not really know why, though. I can imagine that there might be some singularities but can they not be resolved e.g. by blowing up? Also, why is this space sometimes not Hausdorff? Now, let $A(V)$ be the coordinate ring of $V$. Nakajima says something I did not know, that $A(V)$ is the same as the symmetric algebra of the dual space $V^*$. $$ A(V) = \operatorname{Sym}(V^*) $$ Why is this true? I have to admit that this seems very basic and I did not know about it. Next I learn that $G$ has a natural action on $V$ i.e. $v \mapsto gv$ for $g \in G$ and $v \in V$. Then, this induces an action on $A(V)$. We define $$ A(V)^G = \{ \text{polynomials } a \mid ga = a \ \forall g\in G \}$$ the ring of invariant polynomials. Finally we define the algebro-geometric quotient of $V$ by $G$ as $$ Spec(A(V)^G)=V//G $$ To me this is the space of prime ideals that are invariant under $G$. But I do not see how exactly this is related to the original space we wanted to construct. It seems quite different actually. Intuitively what is this space $V//G$ and why is it useful? P.S. Nakajima says: The underlying space of $V//G$ is the set of closed $G$-orbits modulo the equivalence relation defined by $x \backsim y$ if some specific condition, that I do not mention here, holds. REPLY [6 votes]: Apparently $V/G$ is a very badly behaved space. I do not really know why, though. I can imagine that there might be some singularities but can they not be resolved e.g. by blowing up? Also, why is this space sometimes not Hausdorff? The simplest way to think about this is just to consider an example, and the best one is probably the following: Consider the action of $\mathbb C^* = \mathbb C -0$ on $\mathbb C^2$, where the action is given by $$(x,y) \mapsto (\lambda x, \lambda^{-1} y) $$ What are the orbits for this action? There are the orbits of the form $xy = c \neq 0$ for any complex number $c$.
<|endoftext|> TITLE: Physicist trying to understand GIT quotient QUESTION [12 upvotes]: I am reading Nakajima's textbook on Hilbert Schemes. I am trying to understand some very basic facts about the GIT quotient. We start with a vector space $V$ over $\mathbb{C}$. Let $G \subset U(V)$ be a Lie group and $G^{\mathbb{C}}$ its complexification so I guess $G^{\mathbb{C}} \subset GL_V$. I will denote by $G$ the complexification from now on. Apparently $V/G$ is a very badly behaved space. I do not really know why though. I can imagine that there might be some singularities but can they not be resolved e.g. by blowing up? Also, why is this space sometimes not Hausdorff? Now, let $A(V)$ be the coordinate ring of $V$. Nakajima says something I did not know, that $A(V)$ is the same as the symmetric power of the dual space $V^*$. $$ A(V) = Sym^n(V^*) $$ Why is this true? I have to admit that this seems very basic and I did not know about it. Next I learn that $G$ has a natural action on $V$ i.e. $v \mapsto gv$ for $g \in G$ and $v \in V$. Then, this induces an action on $A(V)$. We define $$ A(V)^G = \{ {\text{polynomials }a | ga =a \text{ for }\forall g\in G } \}$$ the ring of invariant (polynomials). Finally we define the algebro-geometric quotient of $V$ by $G$ as $$ Spec(A(V)^G)=V//G $$ To me this is the space of prime ideals that are invariant under $G$. But I do not see how exactly this is related to the original space we wanted to construct. It seems quite different actually. Intuitively what is this space $V//G$ and why is it useful? P.S. Nakajima says: The underlying space of $V//G$ is the set of closed $G$-orbits modulo the equivalence relation defined by $x \backsim y$ if some specific condition, that I do not mention here, holds. REPLY [6 votes]: Apparently $V/G$ is a very badly behaved space. I do not really know why though. I can imagine that there might be some singularities but can they not be resolved e.g. by blowing up? Also, why is this space sometimes not Hausdorff? The simplest way to think about this is just to consider an example, and the best one is probably the following: Consider the action of $\mathbb C^* = \mathbb C -0$ on $\mathbb C^2$, where the action is given by $$(x,y) \mapsto (\lambda x, \lambda^{-1} y) $$ What are the orbits for this action? There are the orbits of the form $xy = c \neq 0$ for any complex number $c$. Then there are the axial orbits $\{ (x,0) : x \neq 0\}$ and $\{ (0,y): y \neq 0\}$. Finally there is the zero orbit, which just contains the one point $0$. The vast majority of the orbits are of the first type, which suggests the quotient $\mathbb C^2 /\mathbb C^*$ should be $\mathbb C$, but then the question remains, what happens to the other orbits? The axial orbits and the zero orbit all lie arbitrarily close to one another (a sequence of points in the axial orbits can converge to zero, but zero is not in the orbit). Therefore the space resulting from taking the quotient naively would be $\mathbb C$, but with three copies of the point $0$, which is a non-Hausdorff space. Since we are quotienting a variety, we would hope to get another variety, and this is clearly not one. GIT deals with this by declaring any orbit which contains zero in its closure to be unstable, and the quotient is defined only on the (poly)stable points (I don't want to go into what stability means, since it's a bit complicated and there are multiple competing definitions. I'll link some stuff to read if you want to know more at the bottom.) To me this is the space of prime ideals that are invariant under $G$. But I do not see how exactly this is related to the original space we wanted to construct. It seems quite different actually. Intuitively what is this space $V//G$ and why is it useful? Again, best to think of an example. In the example above, the polynomial ring associated to the variety $\mathbb C^2$ is the whole polynomial ring in two variables $\mathbb C[x,y]$. Then under the action above, a polynomial $f(x,y)$ gets sent to $f(\lambda x, \lambda^{-1} y)$, and we see therefore that the polynomial $f = xy$ is invariant under the action of $\mathbb C^*$. It's not hard to show that all invariant polynomials are generated by this one (indeed, a monomial $x^m y^n$ is sent to $\lambda^{m-n} x^m y^n$, so it is invariant precisely when $m = n$), so the invariant ring is $$ \mathbb C[x,y]^{\mathbb C^*} = \mathbb C[xy]$$ It's clear then that $$\operatorname{Spec}(\mathbb C[x,y]^{\mathbb C^*}) = \operatorname{Spec}(\mathbb C[xy]) = \mathbb C,$$ and in this we see that the axial orbits and the zero orbit all lie in the same invariant class ($xy =0$); hence the GIT quotient treats all three as equivalent. (Try and do this example again for yourself but with a different space and a different action. It's a good exercise. The only way to get your head around this stuff is lots of examples in my opinion.) As for why the GIT quotient is useful: well, it's the correct quotient to use in algebraic geometry. Since taking quotients is so common in geometry, it's no surprise that mathematicians want a good theory about how to do it. Most interesting to me personally is how it relates to the symplectic reduction through the Kempf-Ness theorem. That's projective GIT, and that's the really interesting bit to me. There's also an infinite-dimensional analogue of the Kempf-Ness theorem that concerns the theory of connections on principal $U(n)$-bundles up to gauge equivalence. Anyway, some resources. I learnt GIT and symplectic reduction from the notes by Richard Thomas. There's also a book by Dolgachev on Invariant theory that's pretty good for an algebraic perspective, and the original GIT was of course found in the book by Mumford. I believe the latest edition has some stuff on symplectic reduction as well. There's also a book on invariants and moduli by Mukai which is simply brilliant.
I'm currently writing some stuff about this for my master's project; I'll link it here when I'm done.<|endoftext|> TITLE: Prove that if $w \in \mathbb{Z}[\sqrt{3}]$ and $N(w)$ is a prime, then $w$ is prime also QUESTION [11 upvotes]: Prove that if $w$ is an extended integer, $w \in\mathbb{Z}[\sqrt{3}]$, and $N(w)$ is a prime in the ordinary integers, then $w$ is a prime. From this, conclude that $7+2\sqrt{3}$ is a prime in $\mathbb{Z}[\sqrt{3}]$. I'm kinda having a tough time dealing with primes so I'm wondering if this proof makes sense: We say that $w = a+b\sqrt{3}$ and $N(w)$ is prime. Let $w = a+b\sqrt{3} = k\cdot z$ where $k$ and $z$ are extended integers. We now have $N(w) = N(k)\cdot N(z)$, but we know that $N(w)$ is prime, which implies that either $N(k)$ is a unit or $N(z)$ is a unit. If $N(k)$ is a unit, then $k$ is also a unit. Similarly, if $N(z)$ is a unit, then $z$ is also a unit. Since $w = k\cdot z$ and either $k$ or $z$ is a unit, we can conclude that $w$ is prime, hence the proof is complete. We have $w = a+b\sqrt{3} = 7+2\sqrt{3}$, so $N(w) = a^2-3b^2 = (7)^2-3(2)^2 = 49-3(4) = 37$. Since $37$ is a positive integer prime, this implies that $N(w)$ is prime. From the proof above, we can conclude that if $N(w)$ is prime then $w$ is also prime, hence $7+2\sqrt{3}$ is a prime in $\mathbb{Z}[\sqrt{3}]$. Thank you for any help you may be able to offer. REPLY [5 votes]: I will first prove (1) An element $p \in R$ is prime if and only if the residue class ring $R/pR$ of the residue classes modulo $p$ is a domain. In fact, the residue class ring does not have a zero divisor if $ab \equiv 0 \bmod p$ implies that $a \equiv 0 \bmod p$ or $b \equiv 0 \bmod p$. But this is just a version of the definition of a prime element, which states that an element is prime if $p \mid ab$ implies that $p \mid a$ or $p \mid b$. Now we claim (2) If $k$ is a quadratic number field with ring of integers $R = {\mathcal O}_k$, then each $\pi \in {\mathcal O}_k$ with prime norm is prime. We will show that the residue class ring $R/\pi R$ is a domain by showing that it is isomorphic to the field with $p$ elements. To this end let $\{1, \omega\}$ be an integral basis of ${\mathcal O}_k$; then $\pi = a+b\omega$ for integers $a, b \in {\mathbb Z}$. We claim that $b$ is not divisible by $\pi$ (and thus not divisible by $p = |\pi \pi'|$). In fact, $\pi \mid b$ implies $\pi \mid a$ since $a = \pi - b\omega$, and taking norms we find $p \mid a^2$ and $p \mid b^2$. Since $p$ is prime, this implies that $p \mid a$ and $p \mid b$. But then $\pi = a+b\omega$ would be divisible by $p$, hence $\pi'$ would be a unit: contradiction. Thus there exists an integer $c \in {\mathbb Z}$ with $bc \equiv 1 \bmod p$, and in particular we have $bc \equiv 1 \bmod \pi {\mathcal O}_k$. We find $b\omega \equiv -a \bmod \pi$; after multiplying through by $c$ we thus get $\omega \equiv -ac \bmod \pi {\mathcal O}_k$. If any $\gamma = r+s\omega \in {\mathcal O}_k$ is given, then we find $\gamma \equiv r - sac \bmod \pi {\mathcal O}_k$, thus modulo $\pi$ every element is congruent to an ordinary integer. Reducing this number modulo $p$ (and $p$ is a multiple of $\pi$) we find that $\gamma$ is congruent to one of the numbers $0, 1, 2, \ldots, p-1$ modulo $\pi$.
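To make this concrete (this illustration is mine, not part of the original answer), take the OP's $\pi = 7+2\sqrt{3}$, so $\omega = \sqrt{3}$, $a = 7$, $b = 2$ and $p = N(\pi) = 37$. From $2c \equiv 1 \bmod 37$ we get $c = 19$, hence $\sqrt{3} \equiv -ac = -133 \equiv 15 \bmod \pi{\mathcal O}_k$; indeed $15^2 - 3 = 222 = 6\cdot 37$, and one checks directly that $\sqrt{3} - 15 = (7+2\sqrt{3})(\sqrt{3}-3)$.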
Now it is easy to show that there are no zero divisors in the ring of residue classes: If we had $\alpha \beta \equiv 0 \bmod \pi$ and if $A, B \in \{0, 1, \ldots, p-1\}$ are integers with $\alpha \equiv A \bmod \pi {\mathcal O}_k$ and $\beta \equiv B \bmod \pi {\mathcal O}_k$, then $\pi \mid AB$; taking norms yields $p \mid A^2B^2$, hence $p \mid A$ or $p \mid B$. Thus $A = 0$ or $B = 0$, and therefore $\alpha \equiv A = 0 \bmod \pi$ or $\beta \equiv B = 0 \bmod \pi$.<|endoftext|> TITLE: $T_{2.5}$ topology without coarser metric topology QUESTION [7 upvotes]: Let $(X,\tau)$ be a second-countable $T_{2.5}$ space, where with $T_{2.5}$ I mean that any distinct points are separated by closed neighborhoods. Does there have to be some metrizable second-countable $\tau' \subseteq \tau$? The typical examples of $T_{2.5}$ spaces that are not metrizable seem to be constructed by adding additional open sets to some metrizable topology, so I would be interested in a potential example of a space which is constructed differently -- or maybe a proof that we can always find a coarser metrizable second-countable topology. REPLY [3 votes]: As pointed out by Henno Brandsma in the comments, an example is the "Arens square" as modified by Brian Scott in his answer to this question. This space $(X,\tau)$ is $T_{2.5}$ (thanks to Brian Scott's modification), and is second-countable since it has only countably many points and it is clearly first-countable. However, there is no continuous function $f:X\to [0,1]$ such that $f(0,0)=0$ and $f(1,0)=1$. Indeed, given such a function $f$, there would be $\epsilon>0$ such that $f(x,y)<1/3$ whenever $0<x<\epsilon$ and $f(x,y)>2/3$ whenever $3/4<x<1$, and one then derives a contradiction from the way basic neighbourhoods in the Arens square must meet both of these regions.<|endoftext|> TITLE: Moduli of Riemann surfaces (genus g curves) is a variety. QUESTION [5 upvotes]: I often see the moduli spaces $\mathcal{M}_g$, or at least the coarse moduli space, of Riemann surfaces of genus $g$ described as the set of isomorphism classes of Riemann surfaces of genus $g$. Obviously, the moduli space has more structure than a set. However, the book I have been following immediately goes on to call $\mathcal{M}_g$ a variety. How do we go from $\mathcal{M}_g$ being a set to $\mathcal{M}_g$ being a variety? REPLY [6 votes]: Your book is being extremely sneaky! This is in fact a highly nontrivial construction. One good source to read about it is the end of Chapter I in the book Moduli of Curves by Joe Harris and Ian Morrison. Curiously, long before anyone constructed $M_g$, Riemann was happily computing that its dimension equals $3g-3$. Riemann's argument is very nicely explained by user Brenin in another question on this site, which I can't find just at the moment. The first construction of $M_g$, if I remember correctly, is due to Bers around 1950. This construction is analytic: it shows that the Teichmueller space $T_g$ is an open subset of $\mathbf C^{3g-3}$, and $M_g$ is then a quotient of $T_g$ by a discrete group action. The first purely algebraic construction of $M_g$ was then given by Mumford in the early 1960s, using Geometric Invariant Theory. The idea here is that all smooth curves of genus $g$ can be embedded in a projective space $\mathbf P^N$ of fixed dimension (depending on $g$). So one can consider the corresponding component of the Hilbert scheme of $\mathbf P^N$; the moduli space $M_g$ is then the quotient of (an open subset of) this component by the action of the group $SL(N+1)$.
Anyway, the moral is: $M_g$ is not easy!<|endoftext|> TITLE: Number of non singular matrices over a finite field of order 2 QUESTION [7 upvotes]: I have to find out the number of $3×3$ non singular matrices over a field of order $2$. I tried in the following way. First, to find out a non singular matrix $A$, clearly any row of $A$ can't be full of $0$s. So the first row (say) can be filled up in $(8-1)$ ways. Once the row is filled up, the next row can't be the same and also can't be full of zeros, so we can fill the next row in $(8-2)$ ways. And at last the third row also can't be full of zeros, same as the first row, and same as the second row also. So we have $(8-3)$ choices. Hence the number of non singular matrices seems to be $7×6×5=210$. Am I right? Or are there more non singular matrices? Maybe less also. Please correct me if I am wrong. Thank you. REPLY [7 votes]: Here the field of order 2 is $\mathbb{F}_2$ = $\mathbb{Z}_2$ = $\{0,1\}$. The result is actually the number of elements in the General Linear Group $GL_3(\mathbb{F}_2)$ or $GL_3(\mathbb{Z}_2)$. To count the number of non-singular matrices of order 3 with 0 and 1 as its elements only, we have to make sure that all the rows are linearly independent and non-zero. For the first row we have $(2^3 - 1)$ choices. For the second row we have $(2^3 - 1) - 1 = (2^3 - 2)$ choices, because the second row must avoid the span of the first row, which over $\mathbb{F}_2$ consists of $0$ and the first row itself. For the third row we have $(2^3 - 1) - 2 - 1 = (2^3 - 2^2)$ choices, because we have to omit the $2^2 = 4$ vectors in the span of the first two rows: the zero vector, the two rows themselves, and their sum. So, in total we can have $(2^3 - 1)(2^3 - 2)(2^3 - 2^2)$ = $7\times6\times4$ = $168$ matrices.
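A brute-force count over all $2^9$ zero-one matrices confirms this (a small Python sketch of my own; it tests invertibility over $\mathbb{F}_2$ via the determinant mod 2):

from itertools import product

def det3(a, b, c, d, e, f, g, h, i):
    # cofactor expansion of a 3x3 determinant along the first row
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

count = sum(1 for m in product((0, 1), repeat=9) if det3(*m) % 2 == 1)
print(count)  # 168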
<|endoftext|> TITLE: Homology of free loop space QUESTION [7 upvotes]: By rational homotopy theory, $H(\Lambda M; \mathbb{Q})$ is infinite-dimensional over $\mathbb{Q}$ if $M$ is simply-connected. Are there (non-simply-connected) examples when $H(\Lambda M; \mathbb{Q})$ is finite-dimensional? I am most interested when $M$ is a manifold. Also, note that $H_0(\Lambda M; \mathbb{Q})$ is the set of conjugacy classes of $\pi_1(M)$, i.e. loops up to homotopy. REPLY [3 votes]: Take a nontrivial (say, infinite) group $G$ which has exactly two conjugacy classes (e.g. one given by the HNN construction). Now, take $X=K(G,1)$. By working harder one can construct examples of such groups which have finite cohomological dimension and, hence, are fundamental groups of (noncompact) aspherical manifolds. Edit. In fact, if you apply the HNN construction to the infinite cyclic group $G_0$, the result will be an (infinitely generated) countable group $G$ of cohomological dimension 2 with exactly two conjugacy classes (the presentation complex $X$ of $G$ will be 2-dimensional and aspherical). Hence, by Stallings embedding theorem, there exists a 4-dimensional aspherical manifold $M=K(G,1)$ (obtained by embedding $X'$ homotopy-equivalent to $X$ into $R^4$ and then taking a regular neighborhood there). Lastly, there are no known examples of infinite finitely presented groups with finitely many conjugacy classes.<|endoftext|> TITLE: exponential equation: $6^x+8^x+15^x=9^x+10^x+12^x$ QUESTION [6 upvotes]: What are the solutions of this equation? Or at least in which interval are they? $$6^x+8^x+15^x=9^x+10^x+12^x$$ I tried to find an increasing function, or use some inequalities, but I got nothing out of it... REPLY [8 votes]: Let $a=3^x$ and $b=2^x$ and $c=5^x$. Then we have that $$ab+b^3+ac=a^2+bc+ab^2$$ $$ab+b^3+ac-a^2-bc-ab^2=0$$ $$a(b-a)+b^2(b-a)-c(b-a)=0$$ $$(b-a)(a+b^2-c)=0$$ Now $a=b \implies 3^x=2^x \implies x=0$, and $a+b^2-c=0 \implies 3^x+4^x=5^x$, which holds for $x=2$ (indeed $36+64+225=325=81+100+144$); dividing by $5^x$, the function $(3/5)^x+(4/5)^x$ is strictly decreasing, so $x=2$ is the only real solution of the second factor. So $x=0$ and $x=2$ are the $2$ solutions.<|endoftext|> TITLE: Prove that $|AB - \lambda I| = |BA - \lambda I|$. QUESTION [5 upvotes]: Suppose that one has two matrices $A$, $B$. Prove that $$|AB - \lambda I| = |BA - \lambda I|,$$ where $|\cdot|$ denotes the determinant, $I$ the identity matrix and $\lambda \in \mathbb{C}$. Note that $A$ and $B$ are not necessarily invertible. For invertible matrices I easily found $$|AB - \lambda I| = |B(AB - \lambda I)B^{-1}| = |BA - \lambda I|.$$ REPLY [4 votes]: There is an old-fashioned proof depending only on the properties of determinants and minors. (i) In $\det(xI-X)$ the coefficient of $x^{n-k}$ is, up to a sign depending only on $k$, the sum of the principal $k\times k$ minors of $X$. (ii) Let $X^{(k)}$ denote the matrix of $k\times k$ minors of $X$ for any square $X$. Then by the Binet-Cauchy theorem we have that $(XY)^{(k)}=X^{(k)}Y^{(k)}$. (iii) For any square matrices $X,Y$ we have $\text{tr}(XY)=\text{tr}(YX)$. So the coefficient of $x^{n-k}$ in $\det(xI-AB)$ is, up to that sign, $$\text{tr}((AB)^{(k)})=\text{tr}(A^{(k)}B^{(k)})$$ whereas the coefficient of $x^{n-k}$ in $\det(xI-BA)$ is, up to the same sign, $$\text{tr}((BA)^{(k)})=\text{tr}(B^{(k)}A^{(k)})$$ and these are equal.<|endoftext|> TITLE: What does "most of mathematics" mean? QUESTION [10 upvotes]: After reading the question Is most of mathematics not dealing with sets? I noticed that most posters of answers or comments seemed to be comfortable with the concept of "most of mathematics". I'm not trying to ask a stickler here, I'm just curious if there is some kind of consensus on how the quantification of mathematics might be done. Is the fraction's denominator only known mathematics in 2017, or all of Mathematics, from a potential viewpoint that math exists whether we realize it or not? For example, the highly up voted, accepted answer begins: "It is very well-known that most of mathematics..." which at least suggests some level of consensus. In this case, how does the consensus agree that math is quantified? I hesitated to ask this question at first, wondering if answers would be potentially too opinion based. So I'd like to stick to answers that address the existence of some kind of consensus on how the quantification of mathematics might be done. For example, does the question "Does at least half of mathematics involve real numbers?" even make sense? If so, could it actually mean something substantially different to each individual who believes it makes sense? Or in fact is there at least some kind of consensus? REPLY [23 votes]: It is clear that "most of mathematics" means "the half of all possible theorems + 1"... Jokes aside, I think that when someone says "most of mathematics" he is in fact referring to mathematics as an activity and not to mathematics as a subject. So when someone says that "most of mathematics doesn't care about foundational mathematics..." he's just saying that most of the mathematical activity that is going on nowadays among professional mathematicians is not focused on foundational subtleties and often doesn't even care about them. So you can safely be a working mathematician without knowing ZFC.
So I think your question is just a misunderstanding between mathematics as a subject and mathematics as an activity.<|endoftext|> TITLE: How to evaluate the integral $\int_0^{\infty}\mathrm{d}x\frac{\sin(x)\sin(ax)}{\pi^2-x^2}e^{-ibx^2}$? QUESTION [8 upvotes]: Note that $a$ and $b$ are positive constants. Can this integral be evaluated in closed form? $$\int_0^{\infty}\mathrm{d}x\frac{\sin(x)\sin(ax)}{\pi^2-x^2}e^{-ibx^2}$$ REPLY [4 votes]: After some toil I managed to evaluate this integral analytically. I am posting the detailed derivation below for the benefit of other users. I. Background We wish to evaluate $$I=\int_0^{\infty}\!\!\!\mathrm{d}x\frac{\sin(x)\sin(ax)}{\pi^2-x^2}e^{-ibx^2}=\frac{1}{2}\int_{-\infty}^{\infty}\!\!\!\mathrm{d}x\frac{\sin(x)\sin(ax)}{\pi^2-x^2}e^{-ibx^2}.\qquad(1)$$ Writing $$\sin(x)\sin(ax)=\frac{\left(e^{ix}-e^{-ix}\right)}{2i}\frac{\left(e^{iax}-e^{-iax}\right)}{2i}=\frac{1}{4}\left(e^{i(a-1)x}+e^{-i(a-1)x}-e^{i(a+1)x}-e^{-i(a+1)x}\right)$$ we have $$I=\frac{1}{8}\left\{\mathscr{D}\left(\frac{a-1}{2}\right)+\mathscr{D}\left(\frac{1-a}{2}\right)-\mathscr{D}\left(\frac{a+1}{2}\right)-\mathscr{D}\left(\frac{-a-1}{2}\right)\right\},\qquad(2)$$ where $$\mathscr{D}(\alpha):=\int_{-\infty}^{\infty}\!\!\!\mathrm{d}x~\frac{e^{-ibx^2+2i\alpha x}}{\pi^2-x^2}.\qquad(3)$$ II. Prerequisites In order to obtain $\mathscr{D}(\alpha)$ analytically, we appeal to a special function $$w(z)=\frac{i}{\pi}\int_{-\infty}^{\infty}\!\!\!\mathrm{d}x\frac{e^{-x^2}}{z-x},\qquad\mathrm{Im}[z]>0\qquad(4)$$ called the Faddeeva function (after Soviet mathematician Vera Faddeeva), or the complex complementary error function, due to the identity $$w(z)=e^{-z^2}\mathrm{erfc}(-iz)\qquad z\in\mathbb{C}.\qquad(5)$$ Since the integral representation $(4)$ is only valid in the upper half of the complex plane, for $\mathrm{Im}[z]<0$, letting $x=-y$ we have $$\frac{i}{\pi}\int_{-\infty}^{\infty}\!\!\!\mathrm{d}x\frac{e^{-x^2}}{z-x}=\frac{i}{\pi}\int_{-\infty}^{\infty}\!\!\!\mathrm{d}y\frac{e^{-y^2}}{z+y}=-\frac{i}{\pi}\int_{-\infty}^{\infty}\!\!\!\mathrm{d}y\frac{e^{-y^2}}{(-z)-y}=-w(-z),\qquad(6)$$ using equation $(4)$. Combining equations $(4)$ and $(6)$, we have $$\int_{-\infty}^{\infty}\!\!\!\mathrm{d}x\frac{e^{-x^2}}{z-x}=\frac{\pi}{i}\,\mathrm{sgn}\left(\mathrm{Im}[z]\right)w\left(\mathrm{sgn}\left(\mathrm{Im}[z]\right)z\right),\qquad(7)$$ which holds for any complex $z$ with nonzero imaginary part. We list the identity $$w(z)+w(-z)=2e^{-z^2}\qquad(8)$$ for later use. III. Complex offset We need one last ingredient for computing $\mathscr{D}(\alpha)$. Consider the integral $$I(\omega,z)=\int_{-\infty}^{\infty}\!\!\!\mathrm{d}x\frac{e^{-(x-\omega)^2}}{z-x},\qquad(9)$$ where $\omega$ is a complex offset. In order to evaluate this, we will apply the residue theorem. First, let $\color{blue}{\mathrm{Im}[z]>0:}$ Next, choose the positively (negatively) oriented rectangular contour shown in the figure below, according as $\omega$ lies in the upper (lower) half of the complex plane. Applying the residue theorem, we obtain $$\oint\mathrm{d}x\frac{e^{-(x-\omega)^2}}{z-x}=2\pi i\,\mathrm{Res}[z]\,\theta(\mathrm{Im}[\omega])\,\theta\left(\mathrm{Im}[\omega-z]\right),\qquad(10)$$ where $$\mathrm{Res}[z]=\lim_{x\to z}\,\,(x-z)\frac{e^{-(x-\omega)^2}}{z-x}=-e^{-(z-\omega)^2}\qquad(11)$$ is the residue of the simple pole at $z$. If $\omega$ lies in the lower half plane, the contour integral evaluates to zero, since $z$ lies outside the contour. This is ensured by the $\theta(\mathrm{Im}[\omega])$ term.
Similarly, when $\omega$ lies in the upper half plane, the factor $\theta\left(\mathrm{Im}[\omega-z]\right)$ evaluates to one (zero) according as $z$ lies inside (outside) the contour. Now, the contour integral can be resolved into integrals over the edges of the rectangle. The edge $DA$ gives $$I_{DA}=\int_{-R}^R\!\!\mathrm{d}x\,\frac{e^{-(x-\omega)^2}}{z-x},\qquad(12)$$ which tends to $I(\omega,z)$ as $R\to\infty$. In this limit, the contributions from the side edges $I_{AB}$ and $I_{CD}$ go to zero, since the respective integrands go like $e^{-R^2}$. Parameterizing the edge $BC$ by $\omega+y$, we have $$I_{BC}=\int_{R-\mathrm{Re}[\omega]}^{-R-\mathrm{Re}[\omega]}\mathrm{d}y\,\frac{e^{-y^2}}{z-(y+\omega)}\to-\int_{-\infty}^{\infty}\!\!\!\mathrm{d}y\,\frac{e^{-y^2}}{(z-\omega)-y}\qquad(13)$$ as $R\to\infty$. The resulting integral can be evaluated using equation $(7)$, $$I_{BC}=-\frac{\pi}{i}\mathrm{sgn}\left(\mathrm{Im}[z-\omega]\right)\,w\left(\mathrm{sgn}\left(\mathrm{Im}[z-\omega]\right)(z-\omega)\right).\qquad(14)$$ Finally, plugging equations $(11)$, $(12)$ and $(14)$ into $(10)$, we arrive at the desired result $$I(\omega,z)=\frac{\pi}{i}\left\{2\,\theta(\mathrm{Im}[\omega])\,\theta\left(\mathrm{Im}[\omega-z]\right)\,e^{-(z-\omega)^2}\right.\\+\mathrm{sgn}\left(\mathrm{Im}[z-\omega]\right)\,w\left(\mathrm{sgn}\left(\mathrm{Im}[z-\omega]\right)(z-\omega)\right)\Big\}\qquad(15)$$ Note that, for $\omega=0$ we recover the result from equation $(4)$, viz. $I(0,z)=\frac{\pi}{i}\,w(z)$. Equation $(15)$ holds for any complex $\omega$, with $z$ in the upper half plane. Next, consider the case $\color{blue}{\mathrm{Im}[z]<0}:$ We can use the same trick as equation $(6)$ to handle this case. Substituting $x=-y$, we have $$\int_{-\infty}^{\infty}\!\!\!\mathrm{d}x\frac{e^{-(x-\omega)^2}}{z-x}=\int_{-\infty}^{\infty}\!\!\!\mathrm{d}y\frac{e^{-(y+\omega)^2}}{z+y}=-\int_{-\infty}^{\infty}\!\!\!\mathrm{d}y\frac{e^{-(y-(-\omega))^2}}{(-z)-y}=-I(-\omega,-z)\\=-\frac{\pi}{i}\left\{2\,\theta(-\mathrm{Im}[\omega])\,\theta\left(\mathrm{Im}[z-\omega]\right)\,e^{-(z-\omega)^2}\right.\\+\mathrm{sgn}\left(\mathrm{Im}[\omega-z]\right)\,w\left(\mathrm{sgn}\left(\mathrm{Im}[z-\omega]\right)(z-\omega)\right)\Big\}\qquad(16)$$ Finally, equations $(15)$ and $(16)$ can be combined into a single equation, viz. $$\int_{-\infty}^{\infty}\!\!\!\mathrm{d}x\frac{e^{-(x-\omega)^2}}{z-x}=\frac{2\pi}{i}\color{blue}{\mathrm{sgn}\left(\mathrm{Im}[z]\right)}\,\theta(\color{blue}{\mathrm{sgn}\left(\mathrm{Im}[z]\right)}\mathrm{Im}[\omega])\,\theta\left(\color{blue}{\mathrm{sgn}\left(\mathrm{Im}[z]\right)}\mathrm{Im}[\omega-z]\right)\,e^{-(z-\omega)^2}\\+\frac{\pi}{i}\mathrm{sgn}\left(\mathrm{Im}[z-\omega]\right)\,w\left(\mathrm{sgn}\left(\mathrm{Im}[z-\omega]\right)(z-\omega)\right),\qquad(17)$$ which holds for any complex offset $\omega$ and complex $z$ with nonzero imaginary part. IV. Evaluation of $\mathscr{D}(\alpha)$ Since $b>0$, the 'complex Gaussian' $e^{-ibx^2}$ goes to zero as $|x|\to\infty$ in the second and fourth quadrants of the complex $x$-plane. To use this property to our advantage, consider the oriented contour shown below.
Applying the residue theorem, we have $$\oint\mathrm{d}x~\frac{e^{-ibx^2+2i\alpha x}}{\pi^2-x^2}=-2\pi i\,\mathrm{Res}[\pi]+2\pi i\,\mathrm{Res}[-\pi],\qquad(18)$$ where the real poles at $x=\pm\pi$ yield the following residues: $$\mathrm{Res}[\pm\pi]=\lim_{x\to\pm\pi}~(x\mp\pi)\frac{e^{-ibx^2+2i\alpha x}}{\pi^2-x^2}=\mp\frac{e^{-ib\pi^2\pm2i\alpha\pi}}{2\pi}.\qquad(19)$$ Thus, equation $(18)$ simplifies to $$\oint\mathrm{d}x~\frac{e^{-ibx^2+2i\alpha x}}{\pi^2-x^2}=2i\,e^{-ib\pi^2}\cos(2\pi\alpha)\qquad(20)$$ The 'arcs at infinity' (shown by dashed lines in the figure) make no contribution to the contour integral. The small semicircle around $x=\pi$, which may be parametrized as $x=\pi+\varepsilon\,e^{i\phi}$, $0<\phi<\pi$, gives $$i\varepsilon\int_0^{\pi}\!\!\!\mathrm{d}\phi\frac{e^{-ib(\pi+\varepsilon\,e^{i\phi})^2+2i\alpha(\pi+\varepsilon\,e^{i\phi})}}{(2\pi+\varepsilon\,e^{i\phi})(-\varepsilon\,e^{i\phi})}\to\frac{e^{-ib\pi^2+2i\alpha\pi}}{2i}$$ as $\varepsilon\to0$. Similarly, the other semicircular piece around $x=-\pi$ gives $\mathrm{exp}(-ib\pi^2\color{red}{-}2i\alpha\pi)/2i$. The line segments coincident on the real line yield $\mathscr{D}(\alpha)$. One is thus left with the $\frac{\pi}{4}$-line, viz. $BC$. At this stage, equation $(20)$ can be rewritten as $$\mathscr{D}(\alpha)+\int_{BC}\!\!\!\mathrm{d}x\,\frac{e^{-ibx^2+2i\alpha x}}{\pi^2-x^2}+\frac{e^{-ib\pi^2}}{2i}\left(e^{2i\pi\alpha}+e^{-2i\pi\alpha}\right)=2i\,e^{-ib\pi^2}\cos(2\pi\alpha),$$ which further yields $$\mathscr{D}(\alpha)=3i\cos(2\pi\alpha)e^{-i\pi^2b}\color{red}{+}\int_{\color{red}{CB}}\!\!\!\mathrm{d}x\,\frac{e^{-ibx^2+2i\alpha x}}{\pi^2-x^2}\qquad(21)$$ Now we compute the integral on the line $CB$; parameterizing it as $x=u\,e^{-i\frac{\pi}{4}}$, we have $$\int_{CB}\!\!\!\mathrm{d}x\,\frac{e^{-ibx^2+2i\alpha x}}{\pi^2-x^2}=\int_{-\infty}^{\infty}\!\!\!\mathrm{d}u\,\frac{e^{-bu^2+2\alpha\sqrt{i}u}}{(\pi\,e^{i\frac{\pi}{4}}-u)(\pi\,e^{i\frac{\pi}{4}}+u)}$$ $$=\int_{-\infty}^{\infty}\frac{\mathrm{d}u}{2\pi\sqrt{i}}\left\{\frac{1}{\pi\sqrt{i}-u}+\frac{1}{\pi\sqrt{i}+u}\right\}\,e^{-bu^2+2\alpha\sqrt{i}u},$$ which, letting $u=\frac{s}{\sqrt{b}}$ and completing squares on the Gaussian factor, equals $$\frac{e^{i\frac{\alpha^2}{b}}}{2\pi\sqrt{i}}\int_{-\infty}^{\infty}\!\!\!\mathrm{d}s\,\left\{\frac{1}{\pi\sqrt{ib}-s}-\frac{1}{(-\pi\sqrt{ib})-s}\right\}\,e^{-\left(s-\alpha\sqrt{\frac{i}{b}}\right)^2}=\frac{e^{i\frac{\alpha^2}{b}}}{2\pi\sqrt{i}}\left\{I\left(\alpha\sqrt{i/b},\pi\sqrt{ib}\right)-I\left(\alpha\sqrt{i/b},-\pi\sqrt{ib}\right)\right\},\qquad(22)$$ where $I(\omega,z)$ was obtained in equation $(17)$. Applying equation $(17)$ and plugging into equation $(21)$ yields the desired result $$\mathscr{D}(\alpha)=3i\cos(2\pi\alpha)e^{-i\pi^2b}+\frac{e^{i\frac{\alpha^2}{b}}}{2i^{3/2}}\left\{w\left(\alpha\sqrt{i/b}+\pi\sqrt{ib}\right)+2\theta(\alpha)\theta(\alpha/b-\pi)\,e^{-i(\pi\sqrt{b}-\alpha/\sqrt{b})^2}+\mathrm{sgn}(\pi-\alpha/2b)w\left(\mathrm{sgn}(\pi-\alpha/2b)(\pi\sqrt{ib}-\alpha\sqrt{i/b})\right)\right\}$$ I am dead :p Since $w(z)$ is related to the error function via equation $(5)$, it might turn out that, for $b=1$, this solution agrees with the Wolfram alpha solution proposed in one of the comments.
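As a small numerical sanity check on the special-function ingredients used above (this snippet is my addition; scipy.special.wofz implements the Faddeeva function $w(z)$), identity $(8)$ and the value $w(0)=e^{0}\,\mathrm{erfc}(0)=1$ are easy to verify:

import numpy as np
from scipy.special import wofz  # the Faddeeva function w(z)

z = 0.8 + 0.4j
print(np.allclose(wofz(z) + wofz(-z), 2 * np.exp(-z**2)))  # True: identity (8)
print(np.isclose(wofz(0), 1.0))                            # True: w(0) = 1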
<|endoftext|> TITLE: Prove that $\left(\frac{3+\sqrt{17}}{2}\right)^n + \left(\frac{3-\sqrt{17}}{2}\right)^n$ is always odd for any natural $n$. QUESTION [5 upvotes]: Prove that $$\left(\frac{3+\sqrt{17}}{2}\right)^n + \left(\frac{3-\sqrt{17}}{2}\right)^n$$ is always odd for any natural $n$. I attempted to write the binomial expansion and sum it so the root numbers cancel out, and wanted to factorise it but didn't know how. I also attempted to use induction but was not sure how to proceed. REPLY [3 votes]: A high-powered solution comes from looking at the expression $ 2 $-adically. Indeed, by choosing an embedding $ \mathbf Q(\sqrt{17}) \to \mathbf Q_2 $ and noting that we have a sum of the form $ \alpha^n + \beta^n $, we note that $ \alpha + \beta = 3 $ is odd. It follows that one of $ \alpha, \beta $ is odd and the other one is even in $ \mathbf Z_2 $, and thus, upon reduction modulo $ 2 $, the same is true for $ \alpha^n, \beta^n $ for any $ n \geq 1 $; and thus $ \alpha^n + \beta^n $ is odd.<|endoftext|> TITLE: How does one show that $\sum_{k=2}^{\infty}{(-1)^k\over k+1}\cdot{ \lceil \log_2(k) \rceil}=1-2\gamma?$ QUESTION [6 upvotes]: Consider $$\sum_{k=2}^{\infty}{(-1)^k\over k+1}\cdot{ \lceil \log_2(k) \rceil}=1-2\gamma\tag1$$ How does one show that $(1)$ converges to $1-2\gamma?$ REPLY [4 votes]: The given series equals $$ \sum_{k\geq 2}\frac{(-1)^k}{k+1}+\sum_{k\geq 3}\frac{(-1)^k}{k+1}+\sum_{k\geq 7}\frac{(-1)^k}{k+1}+\ldots \tag{1}$$ or $$ \log(2)-\frac{1}{2}+\int_{0}^{1}\left[\sum_{h\geq 2}\sum_{k\geq 2^h-1}(-x)^k\right]\,dx \tag{2}$$ hence the claim boils down to proving Catalan's integral$^{(*)}$ $$ \gamma=\int_{0}^{1}\frac{1}{1+x}\sum_{n\geq 1}x^{2^n-1}\,dx \tag{3}$$ where the RHS of $(3)$ equals $$ \int_{0}^{1}\frac{1}{x}\sum_{m\geq 0}(-1)^m x^m \sum_{n\geq 1}x^{2^n}\,dx = \int_{0}^{1} \frac{1}{x}\sum_{m\geq 2} r(m) x^m\tag{4}$$ with $r(m)$ being the difference between the number of ways we may represent $m$ as $2^a+2b$ and the number of ways we may represent $m$ as $2^a+(2b+1)$ with $a\geq 1$ and $b\geq 0$. $(*)$ The linked page shows a derivation, $(24)\to(29)$, based on Euler's series acceleration method.<|endoftext|> TITLE: Evaluate integral with integer part QUESTION [5 upvotes]: I have to evaluate $$\int _0^2\:\frac{x-\left[x\right]}{2x-\left[x\right]+1}dx$$ where $[x] = \lfloor x \rfloor$ is the floor function. I tend to write it like this, but I think I'm missing the point $x = 2$: $$\int _0^2\:\frac{x-\left[x\right]}{2x-\left[x\right]+1}dx=\int _0^1\:\frac{x}{2x+1}dx+\int _1^2\:\frac{x-1}{2x}dx = 1 - \frac{1}{4} \cdot \ln 3$$ The correct answer is $1 - \frac{1}{4} \cdot \ln 12$ REPLY [3 votes]: That's correct, the value at the extremes of the interval is irrelevant, as long as the function can be extended by continuity at the end points. There would be more relaxed conditions, but in this case this is enough. However $$ \int_0^1\frac{x}{2x+1}\,dx=\frac{1}{2}\int_0^1\frac{2x+1-1}{2x+1}\,dx =\frac{1}{2}\Bigl[x-\frac{1}{2}\ln(2x+1)\Bigr]_0^1= \frac{1}{2}\left(1-\frac{1}{2}\ln 3\right) $$ and $$ \int_1^2\frac{x-1}{2x}\,dx= \frac{1}{2}\int_1^2\left(1-\frac{1}{x}\right)dx= \frac{1}{2}\Bigl[x-\ln x\Bigr]_1^2=\frac{1}{2}(2-\ln2-1) $$ so your integral is $$ \frac{1}{2}-\frac{1}{4}\ln3+\frac{1}{2}-\frac{1}{2}\ln2=1-\frac{1}{4}\ln12 $$
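A quick numerical check of this value (my own sketch, using a crude midpoint rule in Python):

from math import floor, log

f = lambda x: (x - floor(x)) / (2*x - floor(x) + 1)
n = 200000
h = 2.0 / n
approx = h * sum(f((k + 0.5) * h) for k in range(n))
print(approx, 1 - log(12) / 4)  # both are about 0.378772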
<|endoftext|> TITLE: Counting pairs of squares in ${\mathbb Z}_n$ with certain distance QUESTION [7 upvotes]: Let $S_n = \{ x^2 \pmod{n} \mid x \in \mathbb Z \}$ denote the set of squares in ${\mathbb Z}_n$. Define $S_n(d) = \{ (x, y) \in S_n^2 \mid x + d \equiv y \pmod{n} \}$. Is there an explicit formula for $|S_n(d)|$? Update: As mentioned in comments, if $d \equiv 0 \pmod{n}$, then $|S_n(d)| = |S_n|$ and there is a formula for it according to Walter D. Stangl's paper "Counting Squares in ${\mathbb Z}_n$" (MAA link) (PDF link). I am still looking for a general result where $d \not\equiv 0 \pmod{n}$. REPLY [2 votes]: This is far from being a complete answer but it might give you some clues to follow up on. What I give is a closed form when $p$ is an odd prime and $p\nmid d$: \begin{align} S_p(d) &= \frac{ p+1+(\tfrac{-d}{p}) + (\tfrac{d}p)}4 \\&= \frac{ p+1 +(1+(-1)^{\frac{p-1}2})(\tfrac{d}p)}4\end{align} $(\frac dp)$ is the Legendre symbol which is zero if $p$ divides $d$, $+1$ when $d$ is a square in $\mathbb Z_p$ and $-1$ otherwise. I'm using the elementary theory of quadratic residues and non-residues and Legendre symbols freely in the rest of the answer. (By the way, zero is neither a quadratic residue nor a quadratic non-residue, which is usually a source of some confusion.) The method of proof goes back to Gauss. First I'm using the following result (I give a sketch of the proof at the end): $$ \sum_{t=1}^p \left(\frac{t(t+d)}p\right) = -1\tag{1}\label{eq1}$$ Call $RR$ the number of integers $t$ mod $p$ such that $t$ and $t+d$ are both quadratic residues; in the same way let $RN$ be the number of integers $t$ mod $p$ such that $t$ is a quadratic residue and $t+d$ a quadratic non-residue, and define $NR$ and $NN$ in a similar way. Each term of the sum contributes $+1$ when $t$ and $t+d$ are both residues or both non-residues, $-1$ when there is one of each, and $0$ when $t$ or $t+d$ is zero, so the sum in \eqref{eq1} can be written $$ RR - RN - NR + NN = -1 \tag{2}\label{eq2} $$ now observe that the sum $RR+RN$ counts once every quadratic residue except $-d$ if it is a quadratic residue; as there are $\tfrac{p-1}2$ quadratic residues and $\frac{1+(\tfrac{-d}p)}{2} =1$ exactly when $-d$ is a quadratic residue, then $$ RR + RN = \frac{p-1}2 - \frac {1+(\tfrac{-d}p)}2 = \frac{p-2-(\tfrac{-d}p)}2\tag{3}\label{eq3}$$ With similar arguments we find: $$ NR+NN=\frac{p-2+(\tfrac{-d}p)}2\tag{4}\label{eq4}$$ $$ RR+NR=\frac{p-2-(\tfrac{d}p)}2\tag{5}\label{eq5}$$ $$ RN+NN=\frac{p-2+(\tfrac{d}p)}2\tag{6}\label{eq6}$$ Summing \eqref{eq2}, \eqref{eq3} and \eqref{eq4} we get after simplifying $$ 2RR +2NN = p-3$$ summing \eqref{eq3} and \eqref{eq5} and subtracting \eqref{eq4} and \eqref{eq6} we get $$ 2RR - 2NN = -(\tfrac{-d}p)-(\tfrac dp) $$ so we get $$ 4RR = p-3 -(\tfrac{-d}p)-(\tfrac{d}p) $$ Now we have $$ S_p(d) = RR + \tfrac 12 \bigl(1 +(\tfrac{-d}p)\bigr)+ \tfrac 12 \bigl(1 +(\tfrac{d}p)\bigr) $$ (the two correction terms count the pairs with $y=0$, respectively $x=0$, which exist precisely when $-d$, respectively $d$, is a square) and combining both we get the claim. Now we prove \eqref{eq1}. Call the left hand sum $T(d)$; then \begin{align} T(d) &= \sum_{t=1}^p \left(\frac{t(t+d)}p\right) = \sum_{t=1}^p \left(\frac{dt(dt+d)}p\right) \\ &= \sum_{t=1}^p \left(\frac{d^2}p\right)\left(\frac{t(t+1)}p\right) = \sum_{t=1}^p \left(\frac{t(t+1)}p\right) \\ &= T(1) \end{align} since as $t$ runs over all residues mod $p$ so does $dt$. Now we can take the sum $$ T(0)+T(1)+\dots+T(p-1) = \sum_{t=1}^p \left(\frac tp\right) \sum_{d=0}^{p-1}\left(\frac{t+d}p\right) = 0 $$ but $T(0) = p-1$ and $T(1)=T(2)=\dots =T(p-1)$ so $$ T(0)+T(1)+\dots+T(p-1) = p-1 + (p-1)T(1) = 0 $$ and we get \eqref{eq1}. If you want to extend this to prime powers you could try to reproduce this argument for prime powers using a primitive root $g$ mod $p^k$: if $t \equiv g^a$ define $\chi(t)= (-1)^a$, and try to evaluate $\sum \chi(t(t+d))$.
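The closed form is easy to test by brute force (a short Python sketch of my own, using Euler's criterion for the Legendre symbol):

def S(p, d):
    sq = {x*x % p for x in range(p)}
    return sum(1 for x in sq if (x + d) % p in sq)

def formula(p, d):
    # Legendre symbol for r not divisible by p, via Euler's criterion
    leg = lambda r: 1 if pow(r % p, (p - 1)//2, p) == 1 else -1
    return (p + 1 + leg(-d) + leg(d)) // 4

for p in (5, 7, 11, 13, 17, 19, 23):
    assert all(S(p, d) == formula(p, d) for d in range(1, p))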
<|endoftext|> TITLE: There cannot be an infinite AP of perfect squares. QUESTION [18 upvotes]: I could not find any existing questions on this site stating this problem. Therefore I am posting my solution and I ask for other ways to prove this theorem too. The Question Prove that there cannot be an infinite integer arithmetic progression of distinct terms all of which are perfect squares. My attempt We shall prove it using contradiction. First off, there are a couple of things to notice which greatly simplify our discussion: The AP cannot be decreasing, as eventually the terms would be negative, and perfect squares are non-negative. The common difference must be non-zero, otherwise the terms would not be distinct. Let us therefore assume an AP with first term $a$, a non-negative integer, and positive difference $d$. The $i$th term of the AP is $T_i=a+(i-1)d$. The AP is increasing, therefore there is a term $T_n$ for the least value of $n$ such that $T_n\geq d^2$. Now, $T_{n+1}$ is also a perfect square. Let $T_{n+1}=b^2$. Therefore, we have $$ d^2 \leq b^2 \implies d \leq b. $$ Therefore we have $$ T_{n+2}=b^2+d \leq b^2+b < b^2+2b+1 = (b+1)^2, $$ while clearly $T_{n+2} > b^2$. Thus $T_{n+2}$ lies strictly between the consecutive perfect squares $b^2$ and $(b+1)^2$, so it cannot itself be a perfect square, which is the desired contradiction.<|endoftext|> TITLE: How and when to do exercises from books? QUESTION [9 upvotes]: Most math books I have seen seem to be structured in the same way; each chapter is a few dozen pages, and after each chapter there are a few dozen exercises. I often find myself not knowing what exercises to do and when to do them. If I have read the first 10 pages, I'm not sure what $x$ is in "the first $x$ exercises should be answerable after reading the first $10$ pages." A prime example is Tristan Needham's "Visual Complex Analysis". It's a very good book; however, the first chapter is about $50$ pages long, after which follows a splatter of $50$ exercises. I'm not sure when I should interrupt my reading and go to the exercises. And I'm not sure when to interrupt my exercises and go back to the reading. REPLY [2 votes]: As has already been discussed in the comments section to this question, any answer to the question will be subjective and opinion-based. What's always important to keep in mind is that methods that work for some will not work for others. If I had to give one piece of advice, it would be: find out which method works for you. That said, here is how I approach these types of books. In general, I prefer to completely finish the reading before starting the exercises. I do this because I want to make sure I understand the material covered, and for me, the best way of doing that is to do an assortment of problems on all the theorems and ideas covered. So, for example, suppose the chapter talks about Theorem A, Corollary B, Theorem C, and Theorem D. If I were to do the exercises concerning Theorem A and Corollary B immediately after reading about them, I would probably forget about them after I have finished the reading (and done the exercises) for Theorems C and D. So, instead I finish all of the reading and then do all of the problems. Then, I review the problems that I am unable to solve and the theorems, properties etc. concerning them. This method works best with shorter chapters or sections (no more than 20 pages long). For slightly longer sections (like the 50-page sections you mention), I like to read some of the chapter and then do a sampling of exercises before completing the reading. So, for your case I might read 20-25 pages, then do 5-10 of the first 20 exercises in the exercises section just to make sure I understand the concepts that I learned in the first half. Then, I would finish the reading and solve all of the other exercises. I find that this method works very well with longer chapters (as long as there are sufficient exercise problems, which in your case is not a problem). To further reinforce my understanding of the topic, I like to circle questions that I answered incorrectly the first time and then come back to them a couple of days or a week later.
If I am still unable to do the question, I circle it a second time and reread the relevant theorems. I repeat this process until I understand the question and the relevant concepts. Hope this helped! :)<|endoftext|> TITLE: Is there any point to Olympiad geometry beyond Olympiads themselves? QUESTION [5 upvotes]: Is synthetic geometry still relevant in mathematics? I always saw Olympiad geometry as an odd field because while every other Olympiad topic would extend to larger math, geometry seems especially useless. I get that analytic geometry is useful in college but that is heavily discouraged in Olympiad geometry. So what does olympiad geo extend to? REPLY [5 votes]: Here is a list of open problems in Euclidean geometry taken from Mathoverflow. Thus, the sentiment that "all of the interesting things (in Euclidean geometry) have been discovered already" is just false. Edit 1: At the same time, most MO problems have no connection to modern research. Edit 2. Here is one (admittedly atypical) example. I was told by a person present at the conversation that the paper W.P.Thurston, "Shapes of polyhedra and triangulations of the sphere" grew out of a conversation about a Brazilian MO problem that Thurston had with a Brazilian grad student in Princeton around 1984 during a lunch break. (Sadly, I do not know the name of the student and the exact MO problem.) According to Google Scholar, Thurston's paper currently has 233 citations. Of course, one has to be William Thurston to accomplish such a feat.<|endoftext|> TITLE: How to efficiently use a calculator in a linear algebra exam, if allowed QUESTION [41 upvotes]: We are allowed to use a calculator in our linear algebra exam. Luckily, my calculator can also do matrix calculations. Let's say there is a task like this: Calculate the rank of this matrix: $$M =\begin{pmatrix} 5 & 6 & 7\\ 12 &4 &9 \\ 1 & 7 & 4 \end{pmatrix}$$ The problem with this matrix is that we cannot use the trick with multiples: we cannot see multiples at first glance and thus cannot say whether the rows/columns are linearly dependent or independent. Using Gauss is also very time consuming (especially in case we don't get a zero row and keep trying harder). Enough said, I took my calculator because we are allowed to use it and it gives me the following result: $$\begin{pmatrix} 1 & 0{,}3333 & 0{,}75\\ 0 &1 &0{,}75 \\ 0 & 0 & 1 \end{pmatrix}$$ I quickly see that $\text{rank}(M) = 3$ since there is no row full of zeroes. Now my question is, how can I convince the teacher that I calculated it? If the task says "calculate" and I just write down the result, I don't think I will get all the points. What would you do? And please give me some advice, this is really time consuming in an exam. REPLY [160 votes]: There is a very nice trick for showing that such a matrix has full rank; it can be performed in a few seconds without any calculator or any worrying about "moral bending". The entries of $M$ are integers, so the determinant of $M$ is an integer, and $\det M\bmod{2} = \det(M\bmod{2})$. Since $M\pmod{2}$ has the following structure $$ \begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 1 \\ 1 & 1 & 0\end{pmatrix} $$ it is trivial that $\det M$ is an odd integer. In particular, $\det M\neq 0$ and $\text{rank}(M)=3$.
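For completeness, the trick is easy to verify computationally (a tiny Python sketch of my own):

M = [[5, 6, 7], [12, 4, 9], [1, 7, 4]]

def det3(m):
    # cofactor expansion along the first row
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

print(det3(M), det3(M) % 2)  # 91 1 -> det is odd, so rank(M) = 3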
<|endoftext|> TITLE: What is the name for a rectangle that is not a square? QUESTION [8 upvotes]: A square is a special case of a rectangle. What is a single-word term for a rectangle that is not a square? I am looking for a word that excludes squares. I am also looking for a word that is not "rectangle". REPLY [9 votes]: It's called oblong.<|endoftext|> TITLE: Small example of Zappa-Szep product of finite groups QUESTION [5 upvotes]: Does anyone know a good place to find an example of a group $G$ of "small" order, in particular finite, such that there are subgroups $H,K$ of $G$ with $HK=G$ and $G$ a Zappa-Szep product of $H$ and $K$ (in particular, neither $H$ nor $K$ is normal)? REPLY [4 votes]: The smallest nontrivial examples have order $16$. One of them is $$ \mathbb{Z}_2 \times D_8 \,=\, \langle a, r,s \mid a^2=r^4=s^2=[a,r]=[a,s]=1,sr=r^{-1}s\rangle, $$ which is the Zappa-Szép product of the subgroups $\{1,a,s,as\}$ and $\{1,ar^2,rs,ar^3s\}$. Other examples include $S_4$ (as Derek Holt mentions in the comments) and $D_{24}$, which is the Zappa-Szép product of the subgroups $\langle r^4,s\rangle \cong D_6$ and $\langle r^6,rs\rangle \cong V$.<|endoftext|> TITLE: Extending holomorphic $f$ on the unit disk to the boundary as $1/z$ QUESTION [5 upvotes]: I found the following problem statement in a book (Stein and Shakarchi): Show that there is no holomorphic function $f$ on the unit disk $\mathbb{D}$ that extends continuously to $\partial \mathbb{D}$ such that $f\left(z\right) = 1/z $ for $z\in \partial \mathbb{D}$. I know that I need uniform continuity of $f$ on $\mathbb{D}$ in order to show: $$\lim_{r\rightarrow 1^{-}} \int_{C_{r}}f = \int_{C_{1}} f = \int_{C_{1}} (1/z) = 2 \pi i$$ And then I can get a contradiction. I think I remember $f$ being uniformly continuous if it is continuous on a compact set, however I am not sure how to prove this. Also not sure how to prove the integral equations above given this fact. Any suggestions? REPLY [2 votes]: Yes, a continuous function on a compact set is uniformly continuous. "...however I am not sure how to prove this." Are you looking to prove this standard result just to apply it in one case, or would you be comfortable just citing it for now? Here's a reference. Suppose $f$ has a continuous extension to the boundary. That means the extension, which we're still calling $f$, is continuous on the closed disk, which is a compact set because it is closed and bounded. This implies $f$ is uniformly continuous, by what is apparently called the Heine–Cantor theorem. It also implies that $f$ is bounded, i.e., there exists some $M>0$ such that $|f(z)|\leq M$ for $|z|\leq 1$. Given $r$ with $0<r<1$, parametrize the two circles as $t\mapsto e^{it}$ and $t\mapsto re^{it}$; then $$\int_{C_1}f(z)\,dz-\int_{C_r}f(z)\,dz=\int_0^{2\pi}\left(f(e^{it})-f(re^{it})\right)ie^{it}\,dt+(1-r)\int_0^{2\pi}f(re^{it})ie^{it}\,dt,$$ where the second integral on the right has absolute value at most $2\pi M$. By uniform continuity, for all $\varepsilon>0$ there exists $\delta>0$ such that $|w-z|<\delta$ implies $|f(w)-f(z)|<\frac{\varepsilon}{4\pi}$. If $r$ is chosen such that $1-r<\delta$, and such that $1-r<\dfrac{\varepsilon}{4\pi M}$, or in other words, $r>\max\left\{1-\delta,1-\frac{\varepsilon}{4\pi M}\right\}$, then for all $z$ with $|z|=1$ we have $|f(z)-f(rz)|<\frac{\varepsilon}{4\pi}$ by choice of $\delta$, and therefore $$\left|\int_{C_1}f(z)\,dz-\int_{C_r}f(z)\,dz\right|< 2\pi \frac{\varepsilon}{4\pi} + \dfrac{\varepsilon}{4\pi M}2\pi M=\varepsilon.$$ This shows that $\lim\limits_{r\nearrow 1}\int_{C_r}f(z)\,dz = \int_{C_1}f(z)\,dz$. All that was used was that $f$ is continuous on the closed disk. (Incidentally, in your particular case you would have $M=1$ by the maximum modulus theorem.)
This equality would provide a contradiction in your case as you mentioned, because it implies that $\int_{C_1}f(z)\,dz = 0$ for any $f$ that is the continuous extension of a holomorphic function on the open disk, by Cauchy's theorem on the disk.<|endoftext|> TITLE: Does the adjoint relationship between Tensor and Hom functors give an adjoint relationship between Ext and Tor? QUESTION [7 upvotes]: The question is as in the title; I don't really have a more specific question, I am more interested in a "yes, here is an example of how it is useful," a "yes, but it really isn't useful," or a "no, here is why." The context is we are covering homological algebra out of Lang for our algebra course. The problem is I still wasn't completely comfortable with the categorical properties of the Hom and Tensor functors before moving on to Ext and Tor, so I am trying to make some connections between the two notions to try and solidify my understanding. I am comfortable with the adjoint relationship between Hom and Tensor; namely, $$Hom(Y\otimes X, Z)\cong Hom(Y, Hom(X,Z))$$ So potentially a candidate would be $$ Hom(Tor_n(Y,X),Z)\cong Hom(Y,Ext^n(X,Z))? $$ In my head I am thinking this could be useful as follows: For any two abelian groups $A$ and $B$, $Tor_n^\mathbb{Z}(A,B)=0$ for $n\geq 2$. This is easy to see by taking a projective resolution $0\to \ker(f)\to F\to A\to 0$, where $f:F\to A$ is a realization of $A$ as a quotient of a free group $F$. From this, can we immediately conclude $Ext^n(A,B)=0$ for $n\geq 2$, without doing any more work? (I am sure a direct proof would be equally straightforward, I am just interested in the hypothetical). Thanks in advance REPLY [4 votes]: No, here is why. Recall that a right adjoint is left exact. So if $\operatorname{Ext}^n(X,.)$ were the right adjoint of some functor, it would have to preserve left exact sequences. But it doesn't! Indeed, if $0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0$ is a short exact sequence, then the long exact sequence $$...\rightarrow\operatorname{Ext}^{n-1}(X,C)\rightarrow\operatorname{Ext}^n(X,A)\rightarrow\operatorname{Ext}^n(X,B)\rightarrow\operatorname{Ext}^n(X,C)\rightarrow ...$$ shows that in general the map $\operatorname{Ext}^n(X,A)\rightarrow\operatorname{Ext}^n(X,B)$ is not injective. But the tensor-hom adjunction does indeed lead to a derived version. For this, we need a more advanced tool: the derived category. (For simplicity, I assume that we are in $R$-Mod; moreover there are technical boundedness assumptions that I will not expand on here). There are complexes $Y\otimes^L X$ and $R\operatorname{Hom}(X,Z)$, such that $$\operatorname{Hom}_D(Y\otimes^L X,Z)=\operatorname{Hom}_D(Y,R\operatorname{Hom}(X,Z))$$ with $H_n(Y\otimes^L X)=\operatorname{Tor}_n(Y,X)$ and $H^n(R\operatorname{Hom}(X,Z))=\operatorname{Ext}^n(X,Z)$. I am not sure we can deduce formally the vanishing of $\operatorname{Ext}^2_\mathbb{Z}$ from the above derived adjunction (I will think about it). But if you know that you can compute $\operatorname{Ext}$ using a projective resolution of the first variable, the proof for $\operatorname{Tor}$ works for $\operatorname{Ext}$ and many other functors (namely, every right exact covariant or left exact contravariant functor).
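As a concrete instance of this failure (this example is my addition): apply $\operatorname{Ext}^1_{\mathbb{Z}}(\mathbb{Z}/2,-)$ to the short exact sequence $0\to\mathbb{Z}\xrightarrow{2}\mathbb{Z}\to\mathbb{Z}/2\to 0$. Since $\operatorname{Ext}^1_{\mathbb{Z}}(\mathbb{Z}/2,\mathbb{Z})\cong\mathbb{Z}/2$, the induced map $\operatorname{Ext}^1(\mathbb{Z}/2,\mathbb{Z})\to\operatorname{Ext}^1(\mathbb{Z}/2,\mathbb{Z})$ is multiplication by $2$, i.e. the zero map, which is not injective.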
<|endoftext|> TITLE: What exactly is meant by "differential forms carry their own scale"? QUESTION [5 upvotes]: I've been watching this series: Intro to differential forms, where the author says every now and then something like "differential forms carry their own scale", "I can eyeball a path integral with differential forms, but not with a gradient picture, as I need a scale for the latter". Can someone explain or justify these statements? And also perhaps explain the idea of seeing differential forms (or perhaps just the exterior derivative of a 0-form $f$) as 'level sets' (of $f$)? REPLY [6 votes]: The issue of scaling is one of the deeper but also trickier aspects of differential forms. To focus on that issue exclusively, it’s simplest to work just in $\mathbb{R}^2$ (the plane) and only consider constant differential $1$-forms, that is, $$\alpha = a \,\text{d}x + b \,\text{d}y$$ where $a$, $b$ are constants. We can also note that $$\alpha = \text{d}(ax + by)$$ that is, it is the exterior differential of the linear function $f(x,y) = ax + by$. To be specific, say $\alpha = 2 \,\text{d}x + 3 \,\text{d}y$. We can draw the "stack" picture of this form by drawing the integer level sets of $f$, that is, the sets $2x + 3y = k$ for all integers $k$. If $v = \langle v_1,v_2\rangle$ is a tangent vector (at a point $(x_0,y_0)$, say, although it won’t matter here) then the number $\alpha(v)$ created by letting the $1$-form $\alpha$ "eat" the vector $v$ is $$\alpha(v) = (a \,\text{d}x + b\,\text{d}y)(\langle v_1, v_2\rangle) = av_1 + bv_2.$$ But there is a much more geometric interpretation, namely, this is just how many "sheets" of the "stack" the arrow representing $v$ passes through (at least, that’s the integer part of $\alpha(v)$). To see this, let $(x_0,y_0) = (0,0)$ for simplicity, and note that the tip of the vector $v$ is at $(v_1,v_2)$, where the function $f$ has the value $f(v_1,v_2) = av_1 + bv_2$. So the arrow has passed through all of the sheets labeled $1,2,3,\ldots$ through $av_1 + bv_2$. For example, if $\alpha = 2 \,\text{d}x + 3 \,\text{d}y$ and $v = \langle5,-1\rangle$ then the vector $v$ will start on the set $f(x,y) = 0$ and end on the set $f(x,y) = f(5,-1) = 2(5) + 3(-1) = 7$, and $\alpha(v) = 7$ in this case. So far I haven’t talked about the scaling issue. But try this: draw a coordinate system with carefully scaled $x$ and $y$-axes on a blank piece of paper, and use that to set up the example just given, with the "sheets" of $\alpha$ (with their labels $0,1,2,3,4,5,6,7$) and the arrow representing the vector $v$. Now erase the scale from the axes and see if you can still calculate $\alpha(v)$. Of course you can! It’s just the number of sheets passed through. (Or imagine handing the sheet to someone else who knows about $1$-forms and asking them to calculate $\alpha(v)$—they’ll have no trouble.) To see it even more purely, don’t start with any axes at all. Just draw a bunch of parallel lines with labels on them (these are the level sets of a linear function) and an arrow going through the "stack" of lines. You can certainly count how many lines the arrow goes through without putting any axes or any scale of any kind on the picture. In contrast, try this with the dot product of two vectors. Draw two vectors based at the same point, but do not draw a scale. Try to calculate the dot product of the vectors—you won’t be able to do it. The notion of the dot product depends in an essential way on the scale. (Recall that a vector is a unit vector if and only if its dot product with itself is $1$. So in particular, the dot product knows how to recognize a vector with length $1$.)
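The scale-(in)dependence can even be demonstrated numerically. The following is a small Python sketch of my own, using this answer's numbers ($v$, $w$, and $\alpha = 2\,\text{d}x+3\,\text{d}y$): rescaling the units by $s$ multiplies the dot product by $s^2$, while the pairing $\alpha(v)$ is unchanged because the coefficients of a $1$-form carry the inverse units.

s = 100.0
v, w = (5.0, -1.0), (4.0, 6.0)
dot = lambda a, b: a[0]*b[0] + a[1]*b[1]
print(dot(v, w))                                      # 14.0
print(dot([s*x for x in v], [s*x for x in w]))        # 140000.0 = s^2 * 14
alpha = (2.0, 3.0)
print(dot(alpha, v))                                  # 7.0
print(dot([x/s for x in alpha], [s*x for x in v]))    # 7.0, scale-free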
The notion of a $1$-form eating a vector, on the other hand, is scale-independent. If you want a "gotcha" version of this for the vectors, start with a scale with, say, $1\,\text{cm}$ being $100$ units on each axis, and draw the vectors $v = \langle500,-100\rangle$ and $w = \langle400,600\rangle$. (Don’t label the vectors numerically though!) Now erase the scale markings on the axes, erasing the "$00$"s very well, and erasing the leading digits less well, so it looks like the scale is $1,2,3,\ldots$ instead of $100,200,300,\ldots$. Now ask someone who knows dot products to calculate the dot product. They should say "I can’t do that", but they may assume (seeing the faint marks where you erased the scale numbers) that $1\,\text{cm}$ is $1$ unit. Then they would get $v \cdot w = 14$. You can then reveal (putting back in the scale that you really used) that $v \cdot w$ is really $140000$—quite a bit off! Similarly, let’s stick with the same linear $\alpha = a \,\text{d}x + b\,\text{d}y$ but now integrate it over an oriented curve $C$ going from point $P$ to point $Q$ instead of having it eat a vector. The integral is $$\int_C \alpha = \int_C \text{d}f = f(Q) - f(P)$$ which requires no scale, once again, to calculate. If you’re looking at the "stack" picture for $\alpha$, then the integral of $\alpha$ over $C$ is just how many sheets of $\alpha$ the curve passes through. This is another scale-invariant calculation. In contrast, if you have a constant vector field $V$ and you want to integrate it over a curve $C$, you won’t be able to do that without a given scale. (You could run a very similar "gotcha" as above, for example.) I hope that makes the statements about scaling more clear, and also shows a simple example where the level sets of a function (here linear) are used to visually represent the data of a $1$-form (here with constant coefficients). There are two steps to a general $1$-form. First, we can still only look at exact forms, that is, those representable as $\alpha = \text{d}f$ where $f$ is a function. Second, we could look at a totally general form $\alpha = g(x,y)\,\text{d}x + h(x,y)\,\text{d}y$ that is not exact. Let’s just think about exact ones for now. I’ll also make one assumption, for simplicity: the range of $f$ is big enough (or, we could say, $f$ varies fast enough) that drawing a contour map using integer level sets of $f$ gives a precise picture with pretty dense level sets (pretty small spacing), one that doesn’t miss essential features of $f$. (If that’s not true we could always change the contour interval to $0.1$, or $0.01$, etc., but that just clutters up the conceptual understanding.) For example we could say $f(x,y) = x^2 + y^2$ and note that near the origin this is not the most precise picture (since the level sets are pretty far spaced there). In any case, draw the contour map of $f$, using integer level sets, and let’s see if we can use it (and it alone, with no axis scaling!) to calculate (1) $\alpha(v)$, where $\alpha = \text{d}f$ and $v$ is a tangent vector, and (2) the integral of $\alpha$ over an oriented curve $C$. For (1), we’ll first assume that $v$ is a pretty small vector, in the sense that as we go from the tail to the tip of $v$, the level sets of $f$ look close to parallel and evenly spaced. I claim that $\alpha(v)$ is (approximately) still just the number of level sets of $f$ that the arrow for $v$ crosses.
Algebraically, $$\begin{align*} \text{d}f(v) & = \text{the directional derivative of } f \text{ along the vector }v \text{ (this is the best definition of d}f\text{)} \\ & = f(\text{tip of }v) - f(\text{tail of }v).\end{align*}$$ (You may be familiar only with the notion of directional derivative with respect to a unit vector; here we’re using the more general notion of the directional derivative with respect to any vector $v$. The official definition is $$\lim_{t\to 0} {{f(p+tv) - f(p)}\over{t}}$$ and I’m assuming that $v$ is already small enough that setting $t=1$ gives a good approximation to the limit.) Clearly $f(\text{tip of }v) - f(\text{tail of }v)$ is just the number of "sheets" crossed by $v$. If $v$ is not small, say $|v|=100$, then apply the above to, say, $w= v/10000$ to get $\alpha(w)$, and then put back in the scaling factor to get $\alpha(v)$. (This corresponds to putting $t = 1/10000$ into the official limit definition.) For (2), it’s actually easier; since we’re not trying to model infinitesimal tangent vectors with actual arrows, we don’t need to worry about things being big or small or scaling them to make things work well. We again have (just as in the linear case) $$\int_C \alpha = \int_C \text{d}f = f(Q) - f(P)$$ which just counts how many "sheets of $\alpha$" the curve $C$ crosses. So note so far how the $1$-form $\alpha = \text{d}f$ can be thought of as having almost exactly the same information content as $f$, just interpreted differently. (One subtlety is that we only ever counted how many sheets got crossed by things, i.e. differences of $f$-values; so adding a constant $c$ to $f$ will not change $\alpha(v)$ or $\int_C \alpha$. Of course that’s correct, since $\text{d}(f + c) = \text{d}f$, as the derivative of a constant is zero.) That tells us that $1$-forms and $\text{d}$ are extremely natural ideas; but it also begs the question of why we should bother. It gets more interesting when $\alpha$ is not exact, i.e. $\alpha$ is not of the form $\text{d}f$ for some function. Then, we don’t have the global level set picture; instead, though, near any point $P$ (especially if we zoom in very tight) we can recreate the linear picture, putting a little stack near that point and using it to calculate $\alpha(v)$ for a tangent vector $v$ at $P$. If $\alpha$ secretly is exact, then we would eventually discover that all of these little stacks fit seamlessly together into the level set picture; but usually, the stacks are tighter in some places than others, or are rotated weirdly relative to each other, in a way that means they don’t join up.<|endoftext|> TITLE: How is "point" in geometry undefined? And What is a "mathematical definition"? QUESTION [11 upvotes]: How is "point" in geometry undefined? I mean, when we say "A point in geometry is a location. It has no size, i.e., no width, no length, and no depth," is it not a definition? If it is not a definition, then how can we know whether some statement is a definition or not? What are the characteristics of a definition in math? REPLY [5 votes]: Concerning your definition of "point" in geometry, what littleO said in a comment is enough: You have only replaced one undefined term with other undefined terms. To be more explicit, if you define "point" in terms of "location", I would simply ask you to define "location". Now this does not address your other questions. If it is not a definition, then how can we know whether some statement is a definition or not?
Informally, we say that a valid definition is a way of describing something that is precise and only involves previously defined concepts. This is good enough for informal mathematics, but there is in fact a completely precise definition of "valid definitions" in first-order logic, which is variously called definitorial expansion or full abbreviation power. This rule basically allows one to name and later use any constant-symbol or predicate-symbol or function-symbol that can be represented uniquely by some first-order formula. For example, if you work within first-order Peano Arithmetic plus full abbreviation power, you can define: $even(n) \overset{def}\equiv \exists k\ ( k+k = n )$. And from then on you can reason about objects that satisfy the now defined predicate $even$, and can prove theorems involving it such as: $\forall n\ ( even(n \times n) \to even(n) )$. Suffice it to say that such a technical device is necessary in practice so that we do not have pointless duplication of content. Of course the above theorem could have been written as a plain arithmetic sentence without using $even$, but clearly it would be much longer and less informative. What are the characteristics of a definition in math? The above concerns the technical details of how to formally define "valid definition" in first-order logic, and hence most of mathematics (which is based on a first-order set theory called ZFC). The concept of abbreviation power extends easily to other logics anyway. But there is a second issue of the different kinds of definitions in mathematics. The first kind involves defining concepts within an existing framework. The example of $even$ is one instance of this. The second kind involves defining an entire framework (as a single concept)! For instance, we can define a structure to be a model for Peano Arithmetic iff it obeys all the axioms of PA. Note carefully that such a definition does not define what a single natural number is, but rather what the collection of natural numbers together with the arithmetic operations is as a whole. Similarly, in any usual axiomatization of Euclidean geometry one does not define what points are, but rather defines that a structure is a Euclidean geometry iff it consists of lines and points (and usually numbers) that together satisfy certain axioms.<|endoftext|> TITLE: Embedding theorem for finite p-groups QUESTION [7 upvotes]: One knows that any countable group can be embedded in a 2-generated group. Also, any finite group is a subgroup of a 2-generated finite group, namely by the Cayley embedding. Does the same hold if we restrict to finite $p$-groups, where $p$ is some prime, i.e. is any finite $p$-group a subgroup of a 2-generated finite $p$-group? Note that the minimal number of generators of the Sylow subgroups of the symmetric groups grows with their order. Thus the Cayley embedding would not suffice. REPLY [3 votes]: This is proved in Neumann, B. H. and Neumann, H. (1959), Embedding Theorems for Groups. Journal of the London Mathematical Society, s1-34: 465–479. doi:10.1112/jlms/s1-34.4.465 At least, it says in the introduction that they prove this. For some reason I don't seem to be able to access the full article online at the moment.<|endoftext|> TITLE: Is every finite subgroup of $C^*=$ set of all non-zero complex numbers cyclic? QUESTION [9 upvotes]: Is every finite subgroup of $C^*$, the set of all non-zero complex numbers, cyclic? I see that the set $A_n=\{z:z^n=1\}$ is a subgroup of $C^{*}$.
Any element of $A_n$ is a solution of $z^n=1$. Now the solutions of $z^n=1$ for any $n\in \Bbb N$ are $e^{\frac{2k\pi i}{n}};1\le k\le n$, and hence the subgroup $A_n$ is generated by $a=e^{\frac{2\pi i}{n}}$ and hence cyclic. But is it the case that any finite subgroup of $C^{*}$ is of the form $A_n$ for some $n$? I am having problems answering this question. If it is true then it answers the original question. Please help. REPLY [2 votes]: All finite subgroups are cyclic. In fact this statement is true for $F^*$ for any field $F$. However one can easily construct infinite subgroups that are not cyclic. Consider all elements of finite order. This is a proper subgroup: it is precisely all the roots of unity. Because there are infinitely many prime numbers, one can see that not all numbers of the form $e^{2\pi i/p},\ p$ a prime, can be obtained as a power of a single complex number.<|endoftext|> TITLE: Integral and unit of measurement QUESTION [6 upvotes]: Is there a kind of "formalism" which defines how units of measure come out from integration? An example: given a point $P(x,y,z) \in \mathbb{R}^3$ there is the concept of mass $m$ associated to this point. Mass is measured in $\text{kg,g,lb,...}$ I indicate the generic unit of measure of the mass by $[m]$. Now, there is also the concept of density of mass $\rho(x,y,z)$ which is a (scalar) function which represents "mass per unit volume", or also $\frac{[m]}{[s]^3}$ (where $[s]$ is the unit of measure of space), e.g. if we take a constant density over a volume, to know the mass of the volume it suffices to multiply the density $\rho$ by the volume $V$. The effect of this multiplication is consistent with the units of measure involved. Now, in general for a non-constant density function, one needs to integrate the function over the volume to know the mass: $m=\int_V \rho(x,y,z)\text{d}\tau$ where $\tau$ is the volume element. Now, integrals are pure mathematical objects, so how can I relate the fact that an integral is not just (naively) "a product of the integrand with the measure of the space of integration" with the fact that, in the end, there will be $[m]=\frac{[m]}{[s]^3} [s]^3$? Now, naively I can argue something like this: $\int_{[s^3]} \frac{[m]}{[s]^3} \text{d}([s^3])$, but the integrand is constant so $\frac{[m]}{[s]^3} \int_{[s^3]} \text{d}([s]^3)=[m]$. But here there is no mathematical formalism, only a naive thought about how such an "integral of unit measures" sounds to me. Is there actually an ad-hoc formal argument for this problem? REPLY [3 votes]: I guess the simplest way to formalise it is to write dimensionful quantities as a product of a dimensionless number and a constant dimensionful parameter. In this way you get e.g. $\rho=\bar{\rho}\times [\rho]$, with $\bar\rho$ dimensionless and $[\rho]$ the appropriate unit. Now if you change your unit of density, you get a rescaling $[\rho]\to \lambda [\rho]$ which you can absorb into the dimensionless prefactor. Of course, this is just a slightly fancified way of writing a quantity as value times unit. The same trick works for the integration measure - at least for Riemann integrals, the measure is a limit of small actual volume elements, and the $\text{d}v$ rescales if you change the units. Now you can just pull the "units" out of the integral, since they are constant, and you're left with a dimensionless integral times some product of units of the correct dimension.<|endoftext|> TITLE: Stone–Čech compactification as a functor.
QUESTION [5 upvotes]: I am now working on Munkres' Topology, the Stone–Čech compactification part. He says that the correspondence between a completely regular space and its Stone–Čech compactification is a functor. To verify this, I need to show that the correspondence preserves the identity mapping and composites of functions. It was easy to show the former, but I am not sure how to do the latter... The situation is: Let $\beta(X)$ denote a Stone–Čech compactification of a topological space $X$. Let $X,Y,Z$ be completely regular spaces. Let $f:X\rightarrow Y$ , $g:Y\rightarrow Z$ be continuous maps. Let $\beta(f):\beta(X) \rightarrow \beta(Y)$ extend $\iota \circ f$, where $\iota:Y \rightarrow \beta(Y)$ is an inclusion mapping. What I need to show is that $$\beta(g\circ f)=\beta(g) \circ\beta(f)$$ How can I show this? It seems obvious if $x\in X$. But how can I show it for the case $x\in \beta(X)-X$? Any help would be really appreciated. Thanks. REPLY [7 votes]: The continuous map $\beta(f):\beta(X)\to \beta(Y)$ fits into the diagram $$\require{AMScd}\begin{CD} X @> f >>Y\\ @V \iota_X VV @V \iota_Y VV \\\beta(X) @> \beta(f) >> \beta(Y) \end{CD}$$ and is the unique such map. So we have $$\require{AMScd}\begin{CD} X @> g\circ f >>Z\\ @V \iota_X VV @V \iota_Z VV \\\beta(X) @> \beta(g\circ f) >> \beta(Z) \end{CD}$$ but also $$\require{AMScd}\begin{CD} X @> f >>Y @> g >> Z\\ @V \iota_X VV @V \iota_Y VV @V \iota_Z VV\\\beta(X) @> \beta(f) >> \beta(Y) @> \beta(g) >> \beta(Z) \end{CD}$$ commutes as both small squares commute. So both $\beta(g\circ f)$ and $\beta(g)\circ \beta(f)$ extend $\iota_Z \circ g\circ f$ so they must be equal by the uniqueness.<|endoftext|> TITLE: Partition of numbers : $1, 2, ..., 20$ QUESTION [6 upvotes]: Integers $1, 2, ..., 20$ are partitioned into $2$ groups. The sum of all integers in one group is equal to $n$ and the product of all integers in the other group is also equal to $n$. Find the maximal $n$. Since $1 + 2 + ... + 20 = 210$, the product of all integers in the other group is less than $210$. Please suggest how to proceed. REPLY [5 votes]: Here's an answer by trial and error. Notice that the product of the smallest $5$ integers is $5!=5\cdot 4\cdot 3\cdot 2\cdot 1=120$ and the product of the smallest $6$ integers is bigger than $210$. Now the sum will be bigger if we choose smaller numbers, so let's try to make the product equal the sum of the remaining numbers by changing the digit $5$; testing shows $8$ works: we have $1\cdot 2\cdot 3\cdot 4\cdot 8=192$ and the sum is $210-1-2-3-4-8=192$, so $n$ can be $192$. Now let's try to prove that $n\leq 192$. First, $193$ is a prime, and $194=97\cdot 2$ where $97$ is prime, so those are not solutions (because of the product). Now let's try numbers $\geq 195$; then the sum of the numbers in the product group is $\leq 15$. Checking products of $2$ numbers such that their sum is $\leq 15$, the greatest such product is $8\cdot 7=56$. Checking products of $3$ numbers, the biggest is $4\cdot 5\cdot 6=120$, since the closer together the numbers are, the bigger the product is. Checking $4$ numbers, the biggest product is $2\cdot 3\cdot 4\cdot 6=144$, and at last, checking five numbers, it's $1\cdot 2\cdot 3\cdot 4\cdot 5=120$. All of these are smaller than $195$, so no $n\geq 195$ can occur, and we conclude that $n=192$.<|endoftext|> TITLE: Geometric intuition for flat morphisms QUESTION [10 upvotes]: I'm trying to develop some geometric intuition for what it means for a morphism of schemes to be flat.
The definition of flatness in Hartshorne says (if I'm correct) that a morphism $f: X \to Y$ is flat iff pullbacks of SESs of quasicoherent sheaves on $Y$ are exact on $X$. But this is very algebraic, and not at all easy to visualise! The most helpful thing I've found in Hartshorne is Prop. 9.7: If (for instance) $X$ and $Y$ are varieties and $Y$ is smooth of dimension 1, then $f$ is flat iff the image of every irreducible component of $X$ is dense in $Y$. Thus the irreducible components of $X$ lie "flat" over $Y$, hence the terminology. But what if $Y$ is of dimension bigger than 1? What is the intuition for flatness now? And is there another way to think about flat morphisms which is more intuitive altogether? REPLY [14 votes]: Here is a hotch-potch of examples, counterexamples, theorems, ... which I plagiarized (well, adapted) from the answer by some guy with a complicated name to the analogous question for complex analytic spaces. I hope they will give you some intuition for flatness, that "riddle that comes out of algebra, but which technically is the answer to many prayers" (Mumford, Red Book, page 214). Let $f:X\to Y$ be a scheme morphism, locally of finite presentation. Then: a) $f$ smooth $\implies$ $f$ flat. b) $f$ flat $\implies$ $f$ open (i.e. sends open subsets to open subsets). Beware however that the natural morphism $\operatorname{Spec}\mathbb Q \to \operatorname{Spec} \mathbb Z$ is flat and yet not open: this is because it is not locally of finite presentation. c) Open immersions are flat. d) However, general open maps need not be flat. A counterexample is: $$\operatorname{Spec} k\to \operatorname{Spec} k[\epsilon]=\operatorname{Spec} \frac {k[T]}{\langle T^2\rangle }$$ e) The normalization $X=Y^{\operatorname {nor}}\to Y$ of a non-normal scheme is NEVER flat. For example the normalization of the cusp $C=V(y^2-x^3)\subset \mathbb A^2$: $$\mathbb A^1\to C:t\mapsto (t^2,t^3)$$ is not a flat morphism. f) A closed immersion is NEVER flat, unless it is also an open immersion [cf. c)]. g) If $X,Y$ are regular and $f:X\to Y$ is finite and surjective, then $f$ is flat. For example the projection of the parabola $y=x^2$ onto the $y$-axis is flat, even though one fiber is a single point (but a non-reduced one!) while the other fibers have two points (both reduced). As another illustration, every non-constant morphism between smooth projective curves is flat. h) If $Y$ is integral and $X\subset Y\times \mathbb P^n$ is a closed subscheme, the projection $X\to Y$ is flat if and only if all fibers $X_y=\operatorname {Spec}\kappa(y)\times_Y X$ ($y$ closed in $Y$) have the same Hilbert polynomial. In particular the fibers must have the same dimension, so that for example the blow-up morphism $\widetilde {\mathbb P^n}\to \mathbb P^n$ of $\mathbb P^n$ at a point $O$ is not flat, since all fibers are a single point, except the fiber at $O$, which is a $\mathbb P^{n-1}$. Notice how the morphism $\operatorname{Spec} k\to \operatorname{Spec} k[\epsilon]$ mentioned above (for which you have only one fiber!) yields a counterexample to g) if you do not assume $Y$ reduced.
This very general result h) (which is at the heart of the theory of Hilbert schemes) might be the best illustration of what flatness really means.<|endoftext|> TITLE: True or False : If $f(x)$ and $f^{-1}(x)$ intersect at an even number of points, all points lie on $y=x$ QUESTION [13 upvotes]: Previously I have discussed the case of an odd number of intersection points (see: If the graphs of $f(x)$ and $f^{-1}(x)$ intersect at an odd number of points, is at least one point on the line $y=x$?) Now I want to know about the even condition. For example $f(x) = \sqrt{x}$ and $f^{-1}(x) = x^2 , x\ge 0$ intersect each other at the points $(0,0)$ and $(1,1)$, and these points lie on the line $y=x$. Edit: Assume $f$ is a continuous function. REPLY [5 votes]: Assume that $f$ is a continuous and invertible real function with a connected domain. Then $f$ is either strictly decreasing or strictly increasing over its domain (or it would assume some value twice, and would not be one-to-one). Consider these two cases: $f$ is decreasing. Then, since $f(x)-x$ is also decreasing, $f$ can't cross $y=x$ more than once. If it never crosses $y=x$, then it lies entirely above or below that line, and its inverse lies entirely on the other side; they never meet, and the theorem holds vacuously. If, on the other hand, it crosses $y=x$ exactly once, then the total number of intersections between $f$ and $f^{-1}$ is odd, since off-diagonal intersections come in pairs. The theorem holds in this case too. $f$ is increasing. Then it cannot intersect $f^{-1}$ at any point off the line $y=x$. Suppose it did, at a pair of points $(x,y)$ and $(y,x)$ with $x<y$. Then $y=f(x)>x=f(y)$, contradicting the fact that $f$ is increasing. In this case, then, all intersections are on $y=x$, and the theorem holds. Since the theorem holds whether $f$ is increasing or decreasing, it is true in general.<|endoftext|> TITLE: Prove the dual space of $l^p$ is isomorphic to $l^q$ if $\frac{1}{q}+\frac{1}{p}=1$ QUESTION [11 upvotes]: Prove the dual space of $\ell^p$ is isomorphic to $\ell^q$ if $\frac{1}{q}+\frac{1}{p}=1$ ($1<p<\infty$).<|endoftext|> TITLE: Conversion of symmetric matrix into skew-symmetric matrix by flipping signs of elements QUESTION [5 upvotes]: We are given an $n \times n$ matrix $A=(a_{ij})_{i,j\in\{1,2,\ldots,n\}}$ with the following properties: $A$ is symmetric, $A$ has a zero diagonal, every element of $A$ is a number in $\{0,1,2\}$, every row sum of $A$ is an odd number. We say that we flip the sign of an element of $A$ if we change the element from some $a_{ij}$ to $-a_{ij}$. Prove that it is always possible to perform a finite number of sign flips on $A$ to obtain a new matrix $B=(b_{ij})_{i,j\in\{1,2,\ldots,n\}}$ such that $B$ is skew-symmetric, and each row sum of $B$ is either $1$ or $-1$. REPLY [2 votes]: We can think of this problem as a graph theory problem. The $n \times n$ matrix $A$ is the adjacency matrix of an undirected graph $G$ with no self-loops, but allowing either single or double edges, and where each vertex has odd degree. Our goal is to orient every edge of $G$ so that the resulting digraph $G'$ satisfies $|d_{\mathrm{in}}(v) - d_{\mathrm{out}}(v)| = 1$ for all vertices $v$. To make the matrix skew-symmetric, exactly one of $A_{ij}$ and $A_{ji}$ must be negative, which corresponds to choosing one of the orientations of the edge. The row sums of the final matrix $B$ are precisely the above in-degree/out-degree condition.
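Here is a tiny brute-force illustration of that reformulation on one small instance (a minimal sketch; the example matrix below is my own choice, and exhaustive search is only feasible because it is so small):

```python
import numpy as np
from itertools import product

# Symmetric, zero diagonal, entries in {0,1,2}, every row sum odd.
A = np.array([[0, 1, 2, 0],
              [1, 0, 1, 1],
              [2, 1, 0, 0],
              [0, 1, 0, 0]])
edges = [(i, j) for i in range(4) for j in range(i + 1, 4) if A[i, j]]

# Each sign choice orients one edge; skew-symmetry pairs b_ij = -b_ji.
for signs in product([1, -1], repeat=len(edges)):
    B = np.zeros_like(A)
    for s, (i, j) in zip(signs, edges):
        B[i, j], B[j, i] = s * A[i, j], -s * A[i, j]
    if set(B.sum(axis=1)) <= {1, -1}:
        print(B.sum(axis=1))   # e.g. [-1  1  1 -1]: a valid flip exists
        break
```

For this $4\times 4$ example the search succeeds almost immediately; the point of the argument that follows is, of course, to prove that a valid orientation always exists.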
In general, this is an easy problem to solve (see, e.g., this question) but we have a further restriction: whenever we have a double edge, it must be oriented consistently. So we'll proceed differently here. To orient $G$, we repeat a process of orienting and removing paths (not necessarily simple paths). To make the proof simpler, we'll classify the vertices into three types: Dangerous, if it is incident to an odd number of double edges and an odd number of single edges. Terminating, if it is incident to an even number of double edges, but an odd number of single edges. Safe, if it is incident to an even number of double edges and an even number of single edges. The fourth case is a type of vertex we will never have in the graph. Initially, all vertices are either dangerous or terminating. To pick a path to orient and remove, start with an arbitrary edge and extend it from either endpoint by the following rules: If we're at a dangerous vertex, make sure the path takes exactly one double edge: if we enter on a single edge, leave on a double edge, and vice versa. Removing the edges we took will make the dangerous vertex safe. If we're at a safe vertex, keep going along the same edge type. Removing the edges we took will keep the safe vertex safe. If we enter a terminating vertex along a double edge, keep going along a double edge. Removing the edges we took will keep the vertex terminating. If we enter a terminating vertex along a single edge, end the path there. Removing the one edge we took will make the terminating vertex safe. Another thing that might happen is that in extending the path, both our endpoints are at the same vertex. In that case: If the two edges we took to get to that vertex are compatible with the rules above, stop there. The path we remove will actually be a cycle, but this changes nothing. If not, then keep extending the path first from one endpoint (which might change the vertex type) then the other, and ignore the self-intersection. Once we can no longer extend the path further, we pick a direction along the path, orient all edges following that direction, and then remove them from consideration, since they've already been oriented. You can check that the statements in italics above will hold when we do this. Also note that when we make a dangerous or terminating vertex safe, its value of $d_{\mathrm{in}} - d_{\mathrm{out}}$ is changed to $\pm 1$ among the oriented edges. In all other cases, the oriented edges coming in match the oriented edges coming out, and $d_{\mathrm{in}} - d_{\mathrm{out}}$ does not change. As long as we have dangerous or terminating vertices in our graph, we must have edges, by parity. So when this algorithm terminates because there are no edges left, all the vertices have been made safe. This means that their total oriented degree satisfies $|d_{\mathrm{in}}(v) - d_{\mathrm{out}}(v)| = 1$, and we have found the desired orientation (or skew-symmetric matrix).<|endoftext|> TITLE: Point out my Fallacy, in combinatorics problem, please. QUESTION [8 upvotes]: Five children sitting one behind the other in a five-seater merry-go-round decide to switch seats so that each child has a new companion in front. In how many ways can this be done? My try: I tried using IEP but it didn't work; please point out the fallacy. There are $4!$ arrangements without any restriction. Let $p_1$ be the property that one of them has the same companion in front, and similarly for the other properties $p_2,p_3,p_4,p_5$. No.
of ways in which $p_1$ occurs: I used the tie method; tie the 1st one and the 2nd one, and we remain with $3$, which along with the tied pair can be arranged in $3!$ ways (circular permutations). Similarly for the other properties as well. No. of ways in which $p_1\cap p_2$ occurs: now tie three consecutive people, so we remain with $2$, which along with the tied people can be permuted in $2!$ ways; similarly for the others as well. In total this gives $5\cdot 2!$. (We need to tie consecutive people from the $5$, not any, hence the factor multiplying $2!$ is $5$ and not ${{5}\choose{2}}$.) No. of ways in which $p_1\cap p_2\cap p_3$ occurs: now tie four consecutive people; we remain with $1$, which along with the tied people can be permuted in $1!$ ways. In total this gives $5\cdot 1!=5$. No. of ways in which $p_1\cap p_2\cap p_3\cap p_4$ occurs: now we tie all $5$ consecutive people, so one way. No. of ways in which $p_1\cap p_2\cap p_3\cap p_4\cap p_5$ occurs: this is the same as the number of ways in which $p_1\cap p_2\cap p_3\cap p_4$ occurs, $=1$. Exploiting IEP: $$4!-(5\cdot 3!)+(5\cdot 2!)-(5\cdot 1!)+1-1=-1$$ Somewhere I have oversubtracted!!! Please help. REPLY [2 votes]: We have actually two different problems here. Given $n$ children and $n$ seats, the number of ways the children can be seated is famously $n!$, the number of permutations of $n$ objects, or, if you prefer, the order of the symmetric group $S_n$. However, if the seats are on a merry-go-round and are not distinguishable from each other, we can turn the merry-go-round and have, say, child number $1$ at the fixed position we want. For instance, this means that $(34512)$ and $(51234)$ are both equivalent to $(12345)$, and we are only considering the relative positions of the children. In this case we speak of circular permutations and their number is clearly $(n-1)!$ With a more advanced terminology, we can say that we are not working in the symmetric group $S_n$, but in $S_n/C_n$, its quotient modulo the cyclic group $C_n$. It is clear that, if we label the seats and start distinguishing among them, for every circular permutation we have only $n$ different seat choices for the first child, and all others are forced to their respective seats with no other choice. In the following I will talk about circular permutations and indistinguishable seats, but if you want the results for distinguishable seats, it will be enough to multiply my results by $n$ and they will be valid for regular permutations as well. Let's define $f_0(n)$ to be the number of circular permutations of $n$ objects: $$ f_0(n) = \left\{ \begin{array}{ll} 1 & \mbox{if } n = 0 \\ (n-1)! & \mbox{if } n > 0 \end{array} \right. $$ The degenerate case $f_0(0)$ is needed in what follows. It's the only case where $n!\ne n\cdot f_0(n)$, and its interpretation is analogous to that of $0!$ in the case of permutations: no seats, no children, no fun, only one possible situation. Then, we define $f_1(n) \mbox{ for } n>1$ to be the number of circular permutations of $n$ objects not containing the sequence $12$, and, in general, we define $f_k(n) \mbox{ for } n\ge k$ to be the number of circular permutations of $n$ objects not containing any sequence $i(i+1) \mbox{ for } i\le k$. For instance, $f_3(4) = 2$ is the number of elements of the set $\left\{ (1432), (1324) \right\}$, i.e., the set of all circular permutations of $4$ objects containing neither $12$, nor $23$, nor $34$. Note that it still contains one element with the sequence $41$, $(1324) \approx (4132)$, but we don't care: we'll discard it when we compute $f_4(4)$. I hope all is clear up to this point.
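As a quick check of these definitions by brute force (a minimal sketch; the wrap-around rule that the successor of $n$ is $1$ encodes the treatment of sequences like $41$ above):

```python
from itertools import permutations
from math import factorial

def f(k, n):
    """Brute-force f_k(n): circular permutations of 1..n (rotations are
    identified by pinning 1 in the first slot) avoiding every adjacency
    i -> i+1 for i <= k, where the successor of n wraps around to 1."""
    count = 0
    for rest in permutations(range(2, n + 1)):
        circ = (1,) + rest
        ok = True
        for pos in range(n):
            a, b = circ[pos], circ[(pos + 1) % n]
            succ = a + 1 if a < n else 1
            if a <= k and b == succ:
                ok = False
                break
        if ok:
            count += 1
    return count

print(f(0, 5) == factorial(4))   # True: f_0(n) = (n-1)!
print(f(3, 4))                   # 2, the set {(1432), (1324)} above
```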
Our problem is how to compute $f_5(5)$ and we can do it by induction, starting with the definition of $f_0$ above and by observing that $$\begin{array}{lr} f_{k+1}(n) = f_k(n) - f_k(n-1) & \forall k \ge 0, n>k \end{array} $$ The proof is rather simple. By definition, $f_k(n)$ is the number of circular permutations of $n$ objects not containing sequences $i(i+1)\mbox{ for }i\le k$; to compute $f_{k+1}(n)$ we need to subtract the number of such circular permutations which contain the sequence $(k+1)(k+2)$. In fact, they are $f_k(n-1)$. For each of the permutations of $(n-1)$ objects counted by $f_k(n-1)$, we find where the element $(k+1)$ is, place a new element next to it naming it $(k+2)$, renumber all subsequent elements, and we get one of the circular permutations of $n$ objects that we want to discard. If there is no $(k+1)$, we are in the case $k=n-1$ and it is enough to concatenate $(k+1)$ at the end. For instance, for $k=3, n=4$, from $(132)$ we get $(1324)$. Conversely, if we have a circular permutation with the sequence $(k+1)(k+2)$, it is enough to delete the element $(k+2)$ and renumber all subsequent elements: we get one of the circular permutations of $n-1$ objects counted by $f_k(n-1)$. Again, if there is no $(k+2)$ because we want to discard the sequence $(k+1)1$, it is enough to delete $(k+1)$. For instance, $(1324)\mapsto(132)$. Q.E.D. Let me give another example: we have already seen the set $E = \left\{ (1432), (1324) \right\}$. It has $f_3(4)$ elements. From each of its elements we can generate a circular permutation of $5$ objects containing the sequence $45$ but not "smaller" ones: $(1432)\mapsto(14532)\mbox{ and }(1324)\mapsto(13245)$. Conversely, for every circular permutation of $5$ elements containing the sequence $45$ but no "smaller" ones, we can delete the element $5$ and get one of the elements of $E$. We can now tabulate: $$ \begin{array}{lrrrrrr} & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline f_0: & 1 & 1 & 1 & 2 & 6 & 24 \\ f_1: & & 0 & 0 & 1 & 4 & 18 \\ f_2: & & & 0 & 1 & 3 & 14 \\ f_3: & & & & 1 & 2 & 11 \\ f_4: & & & & & 1 & 9 \\ f_5: & & & & & & 8 \end{array} $$ As expected, we see that the values of $f_n(n)$ in the diagonal of the above table form the sequence OEIS A000757. This answer is based on the literature cited at that link. We see that $f_5(5)=8$, as already shown in Coolwater's answer. Let's recompute: $$ \begin{array}{rl} 24 & \mbox{# All possible circular permutations of 5 children}\\ -6 & \mbox{# permutations containing }12:(12abc),\,6\mbox{ possible permutations of }abc\\ -4 & \mbox{# }remaining\mbox{ permutations containing }23:(1a23b)\mbox{ or }(1ab23),\mbox{ with }a,b\in\{4,5\}\\ -3 & \mbox{# }remaining\mbox{ permutations containing }34: (13425),(13452),(15342)\\ -2 & \mbox{# }remaining\mbox{ permutations containing }45: (14532),(13245)\\ -1 & \mbox{# }remaining\mbox{ permutation containing }51: (14325)\approx(51432)\\ \hline =8 \end{array} $$<|endoftext|> TITLE: De Rham cohomology of $\mathbb{R}^2 \setminus \{\text{one point}\}$ QUESTION [6 upvotes]: This question is motivated by Exercise 1.7 from Differential Forms in Algebraic Topology by Bott & Tu, a book I'm working through on my own. The original question in the text concerns the de Rham cohomology of $\mathbb{R}^2$ with points $P$ and $Q$ deleted. I have tried to simplify it a bit by caring only about one point. So I'm trying to: Compute in a rigorous way the de Rham cohomology of $\mathbb{R}^2$ with one point $P$ deleted and find the closed forms that represent the cohomology classes.
There are two related questions: first and second. I have already solved the exercise in several ways: Using singular cohomology and the isomorphism between singular and de Rham cohomology. Using Stokes and the ideas of Example 24.4 of Loring's book Introduction to Smooth Manifolds. However, I want to solve the exercise rigorously using only what is previously covered in the book: the definition of the de Rham cohomology. Since I have already solved it by other means, I already know the solution, so I am only interested in the ideas and the heuristics of another approach which uses only what I stated above. Any help would be appreciated. REPLY [6 votes]: I'm not sure what order material is covered in the book you're referencing, but here's a fairly elementary demonstration that $H^1_{dR}(\mathbb{R}^2\setminus\{(0,0)\}) \cong \mathbb{R}.$ It requires only material covered in a standard American Calc 3 class, together with basic facts about forms. For ease of writing, I'll write $M$ for $\mathbb{R}^2\setminus \{(0,0)\}$. Proposition 1: For $\eta = P(x,y) dx + Q(x,y) dy$, $\eta$ is closed iff $P_y = Q_x$. Said another way, $\eta$ is closed iff the integrand you get in Green's theorem vanishes. Proof: Well, since $dx\wedge dx = 0$, $d(Pdx) = P_y dy\wedge dx$. Likewise, $d(Qdy) = Q_x dx\wedge dy$, so $d\eta = (-P_y + Q_x) dx \wedge dy$. The result follows. Now comes the technical result. In some sense, Proposition 2 is proving that $H^1_{dR}(M)$ is at most an $\mathbb{R}$ because there is only one obstruction to being exact: $\int_C \eta = 0$. Proposition 2: Suppose $\eta = Pdx + Qdy$ is a smooth $1$-form on $M$. Let $C$ denote the unit circle with center $(0,0)$ traversed counterclockwise. Assume $\eta$ is closed. Then $\eta$ is exact iff $\int_C \eta = 0$. Proof: Before proving this, note that the hypothesis $\int_C \eta = 0$ implies that $\int_{C_R} \eta =0$ for any circle $C_R$ of radius $R$ centered at $(0,0)$. Indeed, because $\eta$ is closed, $P_y - Q_x = 0$. Thus, by Green's theorem, we have $\int_C \eta - \int_{C_R} \eta = \iint_X (P_y - Q_x)\, dA$ where $X$ is the annulus between $C$ and $C_R$. Since $\int_C\eta = 0$ and $P_y - Q_x = 0$, $\int_{C_R}\eta = 0$. With this out of the way, we can now actually prove this proposition. First, assume $\eta$ is exact: $\eta = df$ for some smooth function $f$. We parametrize $C$ as $(x,y) = (\cos t, \sin t)$ with $0\leq t\leq 2\pi$. Writing $f(x,y) = f(\cos t, \sin t) = g(t)$ for some smooth function $g$, we then compute $$\int_C df = \int_0^{2\pi} d(f(\cos t, \sin t)) = \int_0^{2\pi} dg = \int_0^{2\pi} g'(t) dt = g(t)|_0^{2\pi} = f(\cos t, \sin t)|_0^{2\pi} = 0.$$ Thus, if $\eta = df$, then $\int_C \eta = 0$. Now, let's prove the converse, so assume that $\int_C \eta = 0$. We define a function $f:M\rightarrow \mathbb{R}$ with $df = \eta$ as follows. For $p\in M$ which is not on the negative $x$-axis, let $L_p$ denote the segment connecting the point $(1,0)$ to $p$ (which stays in $M$). For $p\in M$ off of the negative $y$-axis, let $B_p$ be the line segment connecting $(1,0)$ to $(0,1)$ followed by the segment connecting $(0,1)$ to $p$. For $p$ off of the positive $y$-axis, let $D_p$ be the line segment connecting $(1,0)$ to $(0,-1)$ followed by the segment connecting $(0,-1)$ to $p$. For $p\in M$, note that at least two out of the three of $L_p$, $B_p$, and $D_p$ are defined. We claim that $\int_{L_p} \eta = \int_{B_p} \eta$ if both are defined. Specifically, drawing both $L_p$ and $B_p$, it's clear they make a (possibly degenerate) triangle which does not contain $(0,0)$.
Applying Green's theorem, together with Proposition 1 (and recalling that $\eta$ is closed), shows $\int_{L_p} \eta = \int_{B_p} \eta$. The same proof works for showing $\int_{L_p}\eta = \int_{D_p}\eta$ (again, assuming both are defined). To see that $\int_{B_p} \eta = \int_{D_p} \eta$ (assuming both are defined), let $R$ be large enough so that $B_p \cup D_p$ lies inside the circle of radius $R$ centered at $(0,0)$. Applying Green's theorem to the region between $B_p\cup D_p$ and $C_R$, we deduce that $\int_{B_p} \eta - \int_{D_p}\eta + \int_{C_R} \eta = 0$. Since we have already shown $\int_{C_R} \eta = 0$, it follows that $\int_{B_p}\eta = \int_{D_p} \eta$. Now, let's first prove that $f_x(p) = P(p)$. For $h$ small, let $S_h$ be the line segment connecting $p$ to $p + (h,0)$. If $p$ is not along the negative $x$-axis, then the three paths $L_p$, $S_h$, and $L_{p+(h,0)}$ are defined and form a triangle. Again applying Green's theorem, we see that $ \int_{L_{p+(h,0)}} \eta -\int_{L_p}\eta =\int_{S_h} \eta$. Parametrizing $S_h$ via $(x,y) = p + (t,0)$, we see $y' = 0$, so \begin{align*} f_x(p) &= \lim_{h\rightarrow 0} \frac{f(p+(h,0)) - f(p)}{h}\\ &= \lim_{h\rightarrow 0} \frac{\int_{S_h} \eta}{h}\\ &= \lim_{h\rightarrow 0} \frac{\int_0^h P dx}{h} \\&= P(p),\end{align*} where the last line uses L'Hospital's rule together with the fundamental theorem of calculus. If, on the other hand, $p$ is along the negative $x$-axis, repeat this proof with $D_p$ and $D_{p + (h,0)}$ replacing $L_p$ and $L_{p+(h,0)}$. The proof that $f_y(p) = Q(p)$ is almost identical. In fact, if $p$ is not on the negative $x$-axis, again use $L_p$ and $L_{p+(0,h)}$. If $p$ is on the negative $x$-axis, to show $\lim_{h\rightarrow 0^+} \frac{f(p+(0,h)) - f(p)}{h} = Q$, use $B_p$ and $B_{p+(0,h)}$, and to show that $\lim_{h\rightarrow 0^-} \frac{f(p+(0,h)) - f(p)}{h} = Q$, use $D_p$ and $D_{p+(0,h)}$.$\square$ Now, let $\omega = \frac{-y}{x^2 + y^2}dx + \frac{x}{x^2+y^2}dy.$ Computing $P_y$ and $Q_x$ both gives $\frac{y^2-x^2}{(x^2 + y^2)^2}$, so they're equal. Hence, by Proposition 1, $\omega$ is closed. Proposition 3: The form $\omega$ is not exact. Proof: From Proposition $2$, it's enough to compute $\int_C \omega$ where $C$ is the unit circle centered at $(0,0)$. Parametrizing the circle via $(x,y) = (\cos t, \sin t)$ with $0\leq t\leq 2\pi$, we get $\int_0^{2\pi} \frac{-\sin t}{\cos^2 t + \sin^2 t} (-\sin t) dt + \frac{\cos t}{\cos^2 t + \sin^2 t}(\cos t) dt = \int_0^{2\pi} (\sin^2 t + \cos^2 t)dt = 2\pi$. Since $2\pi \neq 0$, $\omega$ cannot be exact. $\square$ It follows that $[\omega]\in H^1_{dR}(M)$ is non-zero. Proposition 4: Suppose $\eta$ is a closed one form. There is a real number $\lambda$ with the property that $\lambda \omega - \eta$ is exact. In other words, $[\eta] = \lambda[\omega]$, so $[\omega]$ generates all of $H^1_{dR}(M)$. Proof: Let $\lambda = \frac{1}{2\pi} \int_C \eta$, where $C$ is the unit circle centered at $(0,0)$. Then \begin{align*}\int_C (\lambda \omega - \eta) &= \lambda \int_C \omega - \int_C \eta\\ &= \left(\frac{1}{2\pi} \int_C \eta\right) \int_C \omega - \int_C\eta\\ &= \left(\frac{1}{2\pi} \int_C \eta\right) 2\pi - \int_C\eta\\ &= 0.\end{align*} Since $\lambda \omega - \eta$ is obviously closed, Proposition 2 now implies that $\lambda \omega - \eta$ is exact. $\square$<|endoftext|> TITLE: What values of $a,b\in\mathbb{Z^+}$ satisfy $a^b=b^a+1$? QUESTION [7 upvotes]: What values of $a,b\in\mathbb{Z^+}$ satisfy the equation $a^b=b^a+1$? I know one answer is $a=3,b=2$, but I know that just by luck.
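A quick exhaustive search over small values (a minimal sketch; the bound $50$ is an arbitrary illustrative choice) turns up only two solutions:

```python
# Brute-force search for a^b = b^a + 1 over small positive integers.
solutions = [(a, b) for a in range(1, 51) for b in range(1, 51)
             if a ** b == b ** a + 1]
print(solutions)  # [(2, 1), (3, 2)]
```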
How do I get to the answer? Are there any more of them? Why? Suppose I don't know the answer, how would I start? I started taking logarithms of both sides, then changing $1$ to $b^a/b^a$, then simplifying, but nothing. I always end where I start; I only need a hint to get the train rolling. EDIT Some people are answering erroneously because I define $\mathbb{Z^+}$ as $\{1,2,\dots\}$, and $0\not\in\mathbb{Z^+}$. The correct values by now are $a=3,b=2$ and $a=2,b=1$, but are they the only ones? Why? REPLY [3 votes]: As @i9Fn pointed out in the comments, this is another form of Catalan's conjecture, for which Preda Mihăilescu proved that $a=3,b=2$ is the only solution to this problem. (And $a=2, b=1$ if we allow $a=1\lor b=1$.)<|endoftext|> TITLE: Prove that $(n^3 - 1)n^3(n^3 + 1)$ is divisible by $504$ QUESTION [6 upvotes]: How to prove that $(n^3 - 1)n^3(n^3 + 1)$ is divisible by $504$? Factoring $504$ yields $2^3 \cdot 3^2 \cdot 7$. Divisibility by $8$ can be easily seen, because if $n$ is even then $8 | n^3$, else $(n^3 - 1)$ and $(n^3 + 1)$ are both even and one of these is divisible by $4$, so $8|(n^3 - 1)(n^3 + 1)$. I'm stuck at proving divisibility by $9$ and $7$. REPLY [3 votes]: $(n^3-1)(n^3)(n^3+1) = n^9-n^3 = n^3(n^6-1)$. If $7 \nmid n$, then $n^6 \equiv 1 \pmod 7$; this is Fermat's little theorem: if $p$ is prime and $p \nmid n$, then $n^{p-1}\equiv 1\pmod p$. Hence $n^9-n^3 = n^3(n^6-1) \equiv 0 \pmod 7$, and if $7 \mid n$ this holds anyway since $7 \mid n^3$. Similarly, if $3 \nmid n$, then $n^6 \equiv 1 \pmod 9$. If this is not obvious: $n^2 \equiv 1 \pmod 3\\ n^2 = (3k+1)\\ n^6 = (3k+1)^3 = (27k^3 + 27k^2 + 9k + 1)$ So $n^9-n^3 = n^3(n^6-1) \equiv 0 \pmod 9$ in this case, and if $3 \mid n$ then $27 \mid n^3$, so again $9 \mid n^9 - n^3$.<|endoftext|> TITLE: Embedding $RP^2$ into $R^4$ QUESTION [7 upvotes]: I have a homework question which asks to show that the map \begin{equation}f:R^3\rightarrow R^4, f(x,y,z)=(x^2-y^2,xy,xz,yz)\end{equation} induces an embedding of $RP^2$ into $R^4$. Overall I have a fairly good idea of how I want to go about showing this, however I am looking for a "neat" way of showing $f$ is injective (take neat to mean whatever you want in this setting). I have an idea that is probably far reaching, but I was wondering if we could somehow use the kernel of $f$ in our argument? If we had a group homomorphism that would be one thing, but this is not so. However it does seem like the only element that gets mapped to zero is $(0,0,1)$ subject to the domain $S^2$, so maybe that's something? Honestly I'm just really lazy and don't want to brute force algebra onto this function to see that it is injective. I am just looking for some good hints here, anything is appreciated! REPLY [2 votes]: See the example in Section 2 (Embedding of $RP^2$ into $R^4$): https://www.cs.uic.edu/~sxie/lecture_notes/diff_manifolds/lecture10.pdf<|endoftext|> TITLE: Limit with integral and power QUESTION [6 upvotes]: I'm trying to calculate this limit: $$\lim_{x\to\infty} \left(\int_0^1 t^{-tx} dt\right)^{\frac1x}$$ I tried the squeezing idea without success. REPLY [6 votes]: For any large $x$ we have $$ \int_{0}^{1}t^{-tx}\,dt = \int_{0}^{1}\exp\left(-x t\log t\right)\,dt = \sum_{n\geq 0}\frac{x^n}{n!}\int_{0}^{1}t^n(-\log t)^n\,dt =\sum_{n\geq 0}\frac{x^n}{n!(n+1)^{n+1}}$$ If we call this entire function $f(x)$, the wanted limit equals $$\exp\lim_{x\to +\infty}\frac{\log f(x)}{x} \stackrel{dH}{=} \exp\lim_{x\to +\infty}\frac{f'(x)}{f(x)}=\exp\lim_{x\to +\infty}\frac{\int_{0}^{1}(-t\log t)t^{-tx}\,dt}{\int_{0}^{1}t^{-tx}\,dt}$$ that is $\color{red}{e^{1/e}}$ since $t^{-tx}$ converges in distribution to $C\cdot\delta\left(t-\frac{1}{e}\right)$.<|endoftext|> TITLE: How can I find all the matrices that commute with this matrix?
QUESTION [9 upvotes]: I would like to find all the matrices that commute with the following matrix $$A = \begin{pmatrix}2&0&0\\ \:0&2&0\\ \:0&0&3\end{pmatrix}$$ I set $AX = XA$, but still can't find the solutions from the equations. REPLY [4 votes]: Block matrices provide an immediate insight. Let $$A = \left[\begin{array}{l}2&0&0\\0&2&0\\0&0&3\end{array}\right] = \left[\begin{array}{l|l}a&0\\\hline0&3\end{array}\right]$$ The submatrix $a = 2I_2$. Now define the block matrix $$X = \left[\begin{array}{l|l}b&0\\\hline0&c\end{array}\right]$$ with conformal block sizes. That is, $c$ is a scalar and $$b = \left[\begin{array}{l}b_{11}&b_{12}\\b_{21}&b_{22}\end{array}\right] $$ has arbitrary complex elements. (Why may we take $X$ block-diagonal? For a general $X$ with off-diagonal blocks $u$ and $w$, the commutator equations read $2u = 3u$ and $3w = 2w$, forcing $u = 0$ and $w = 0$.) The equation to solve is $$[A,X] = AX - XA = \left[\begin{array}{l|l}a&0\\\hline0&3\end{array}\right] \left[\begin{array}{l|l}b&0\\\hline0&c\end{array}\right] - \left[\begin{array}{l|l}b&0\\\hline0&c\end{array}\right] \left[\begin{array}{l|l}a&0\\\hline0&3\end{array}\right] = \left[\begin{array}{l|l}0&0\\\hline0&0\end{array}\right]$$ We have two equations: $$ b a = a b$$ $$ 3c = 3c$$ The second equation is trivial: $c$ is arbitrary. The first equation is just $$ 2bI_{2} = 2 I_{2} b$$ Since the identity matrix commutes with every matrix, the $b$ matrix is arbitrary. To conclude, the solution matrix has five arbitrary complex numbers arranged so: $$X = \left[\begin{array}{ll|l}b_{11}&b_{12}&0\\b_{21}&b_{22}&0\\\hline0&0&c \end{array}\right] $$ One sees the benefit of this form in analyzing matrices of the form $$A = \left[\begin{array}{l}c_{i} I_{i}&0&0 & 0\\0&c_{j}I_{j}&0 & 0\\0&0&c_{k}I_{k} & 0 \\ 0 & 0 & 0 & \ddots\end{array}\right]$$<|endoftext|> TITLE: What are some interesting calculus facts your calculus teachers didn't teach you? QUESTION [24 upvotes]: I recently learned an interesting fact about what the value of a Lagrange multiplier represents: suppose the maximum of some real-valued function $f(\vec{x})$ subject to a constraint $g(\vec{x})=c$ is $M$ (of course, $M$ depends on $c$), which you obtained via Lagrange multipliers (solving $\nabla f = \lambda \nabla g$). Then, it's easy to show (using the chain rule) that the multiplier $\lambda$ can be interpreted as the change of the maximum with respect to perturbations of the level set $g=c$. That is, $$ \lambda = \frac{d M}{dc} $$ I think this is a pretty cool result and I never heard about it during my time as an undergraduate. What are some interesting calculus (or undergraduate mathematics) results nobody told you about during your calculus (or undergraduate) education? REPLY [3 votes]: Integration by parts - visualization Not that shocking, but my teacher never showed me this. For many people this will be familiar, but for those who never saw it: it can be an eye opener. Image source: Wikipedia The area of the blue / red region is: $$A_1=\int_{y_1}^{y_2}x(y)dy$$ $$A_2=\int_{x_1}^{x_2}y(x)dx$$ So we have: $$\overbrace{\int_{y_1}^{y_2}x(y)dy}^{A_1}+\overbrace{\int_{x_1}^{x_2}y(x)dx}^{A_2}=\Bigl.x \cdot y(x)\Bigr|_{x_1}^{x_2} = \Bigl.y \cdot x(y)\Bigr|_{y_1}^{y_2}$$ Assuming the curve is smooth within a neighborhood, this generalizes to indefinite integrals: $$\int xdy + \int y dx = xy$$ Rearranging yields the well-known formula: $\int xdy = xy - \int y dx$<|endoftext|> TITLE: Is denying "Paradoxical Partitioning" equivalent to accepting the Axiom of Choice?
QUESTION [5 upvotes]: In this accepted answer (to this question here, from a couple of years ago) it has been noted: One more point to make about the paradoxical decomposition to more parts than elements [...] currently we do not know of any model of ZF+$\lnot$AC where such decomposition does not exist. Namely, as far as we know, in all models where choice fails there is some set which can be partitioned into more parts than elements. (Just for clarification: From the context of the above references the comparison "more parts than" is obviously meant in terms of the cardinality of the suitable partition being strictly larger than the cardinality of the suitable initial set itself.) Now, I find the negation or denial of such "paradoxical partitioning" interesting; i.e. the statement (proposition, "$\lnot$PP"): "Each partition of each given set has cardinality less than or equal to the cardinality of the given set." And I would like to further explore how this suggested "proposition $\lnot$PP" (being considered along with ZF, of course) relates to "the standard set theory including the Axiom of Choice", ZFC. Therefore My questions: Are there any models of ZFC known (or could there be any such models, in principle) in which there is some set which can be partitioned into more parts than elements? And: Are there any models of ZF known (or could there be any such models, in principle) in which there is no set which can be partitioned into more parts than elements, and which are not also models of ZFC? (And just for reference: Is there a conventional or concise way of expressing the suggested "proposition $\lnot$PP" in terms of standard notation, such as used in the sources linked above? Has it perhaps been discussed already, by some other name? ...) REPLY [9 votes]: No, to both questions. But for different reasons. The Axiom of Choice proves that if $A$ is any set, then any partition of $A$ has size at most that of $A$. This is because there is a surjection from $A$ onto any partition of $A$, which using choice means there is an injection from the partition into $A$. So assuming $\sf ZFC$, every set can be partitioned into at most its-cardinality-many parts. The second question is open, because what you are asking about is called The Partition Principle (which is what is usually called $\sf PP$, by the way), and the question whether or not it implies the axiom of choice over $\sf ZF$ is the oldest open question in set theory, as of 2017. So we simply don't know the answer to this one. Related threads: Miller's Construction, Partition Principle and Failure of Axiom of Choice Unions and the axiom of choice. Injection of union into disjoint union Is the following equivalent to the axiom of choice? Without AC, it is consistent that there is a function with domain $\mathbb{R}$ whose range has cardinality strictly larger than that of $\mathbb{R}$? There exists an injection from $X$ to $Y$ if and only if there exists a surjection from $Y$ to $X$. In ZF, how would the structure of the cardinal numbers change by adopting this definition of cardinality? Why does a proof of $\exists f: X\to Y$ injection $\iff \exists g: Y \to X$ surjection requires the axiom of choice?<|endoftext|> TITLE: Strengthen Lowenheim-Skolem QUESTION [8 upvotes]: Suppose $\frak A \prec \frak B$ are models in a language $\mathcal L$, and $|\frak A| < \kappa < |\mathcal L| \leq |\frak B|$. Is there an elementary chain $\frak A \prec \frak C \prec \frak B$ such that $|\frak C| = \kappa$?
REPLY [7 votes]: It is at least consistent that the answer is "no". This answer of mine gives an example of a theory in a continuum-sized language with a countable model $\mathfrak{A}$ such that any proper elementary extension is of size at least continuum. If $2^{\aleph_0}>\aleph_1$, then taking $\kappa=\aleph_1$ gives a counterexample to the question.<|endoftext|> TITLE: Find the limit $\lim_{ x \to \pi }\frac{5e^{\sin 2x}-\frac{\sin 5x}{\pi-x}}{\ln(1+\tan x)}$ QUESTION [7 upvotes]: Find the limit (without using l'Hôpital and equivalence) $$\lim_{ x \to \pi }\frac{5e^{\sin 2x}-\frac{\sin 5x}{\pi-x}}{\ln(1+\tan x)}=?$$ My try: $u=x-\pi \to 0$ $$\lim_{ x \to \pi }\frac{5e^{\sin 2x}-\frac{\sin 5x}{\pi-x}}{\ln(1+\tan x)}=\lim_{ u\to 0 }\frac{5e^{\sin (2u+2\pi)}+\frac{\sin (5u+5\pi)}{u}}{\ln(1+\tan (u+\pi))}\\=\lim_{ u\to 0 }\frac{5e^{\sin (2u)}-\frac{\sin (5u)}{u}}{\ln(1+\tan (u))}$$ Now what? REPLY [4 votes]: We can proceed as follows \begin{align} L &= \lim_{u \to 0}\dfrac{5e^{\sin 2u} - \dfrac{\sin 5u}{u}}{\log(1 + \tan u)}\notag\\ &= \lim_{u \to 0}\dfrac{5e^{\sin 2u} - \dfrac{\sin 5u}{u}}{\dfrac{\log(1 + \tan u)}{\tan u}\cdot\dfrac{\tan u}{u}\cdot u}\notag\\ &= \lim_{u \to 0}\dfrac{5e^{\sin 2u} - \dfrac{\sin 5u}{u}}{u}\notag\\ &= \lim_{u \to 0}\frac{5ue^{\sin 2u} - \sin 5u}{u^{2}}\notag\\ &= \lim_{u \to 0}\frac{5ue^{\sin 2u} - 5u}{u^{2}} + 25\cdot\frac{5u - \sin 5u}{(5u)^{2}}\tag{1}\\ &= \lim_{u \to 0}5\cdot\frac{e^{\sin 2u} - 1}{u} + 25\cdot 0\notag\\ &= \lim_{u \to 0}5\cdot\frac{e^{\sin 2u} - 1}{\sin 2u}\cdot\frac{\sin 2u}{2u}\cdot 2\notag\\ &= 5\cdot 1\cdot 1\cdot 2 = 10\notag \end{align} The limit in $(1)$ evaluates to $0$ because $\lim\limits_{x \to 0}\dfrac{x - \sin x}{x^{2}} = 0$ and this can be proved via the Squeeze theorem using $\sin x < x < \tan x$ for $x \in (0, \pi/2)$. We have $$0 < \frac{x - \sin x}{x^{2}} < \frac{\tan x - \sin x}{x^{2}}$$ and letting $x \to 0^{+}$ we get the result via the Squeeze theorem. The case for $x \to 0^{-}$ can be proved by putting $x = -t$. A Curious Fallacy: The answer by user "Jeevan Devaranjan" has a curious fallacy which involves replacing a part of the expression by its limit. This is not guaranteed to work in general. To highlight the fallacy suppose that instead of the function $\sin$ we had $f$ given by $$f(x) = x + \frac{x^{2}}{2}$$ and then $$\lim_{x \to 0}\frac{f(x)}{x} = 1$$ just like $$\lim_{x \to 0}\frac{\sin x}{x} = 1$$ and therefore $$\lim_{u \to 0}\frac{f(5u)}{u} = 5, \lim_{u \to 0}\frac{f(2u)}{2u} = 1$$ Now if the question is to evaluate the limit $$\lim_{u \to 0}\dfrac{5e^{f(2u)} - \dfrac{f(5u)}{u}}{\log(1 + \tan u)}\tag{2}$$ then it is not possible to reduce the expression to $$\lim_{u \to 0}\dfrac{5e^{2u} - 5}{\log(1 + \tan u)}\tag{3}$$ We can see that $(2)$ evaluates to $-5/2$ and $(3)$ like earlier evaluates to $10$. Hence one should avoid such replacements. Justification: The reason why the replacements worked in the answer from "Jeevan Devaranjan" is that $\sin x$ satisfies $$\lim_{x \to 0}\frac{x - \sin x}{x^{2}} = 0\tag{4}$$ whereas $$\lim_{x \to 0}\frac{x - f(x)}{x^{2}} = -\frac{1}{2}\tag{5}$$ Replacing $(\sin 5u)/u$ by $5$ is actually replacing $(\sin 5u)/u^{2}$ with $5/u$ (because of an extra $u$ term in the denominator coming from $\log(1 + \tan u)$) which is valid because $$\lim_{u \to 0}\left(\frac{\sin 5u}{u^{2}} - \frac{5}{u}\right) = 0$$ because of $(4)$; but the same does not hold for $f$ because the limit in $(5)$ is non-zero.
So the replacements are justified not by the famous limit $\lim_{x \to 0}\dfrac{\sin x}{x} = 1$ but by the not so famous limit $(4)$. In this answer I describe the scenarios where one can replace a sub-expression by its limit in the process of evaluating the limit of a complex expression containing that sub-expression.<|endoftext|> TITLE: A proof of: The derivative of the determinant is the trace QUESTION [6 upvotes]: I want to solve the following problem: Show that the derivative of $\mbox{det}:GL(n,\mathbb{R})\rightarrow\mathbb{R}$ at $I\in GL(n,\mathbb{R})$ is given by $$\mbox{det}_{*}(I)(X)=\mbox{tr}X$$ I would like you to check my proof, and answer the question at the end. My Attempt: I'll denote $N=GL(n,\mathbb{R})$. Also, $\simeq$ will be used for vector space isomorphisms and $\cong$ will be used for diffeomorphisms. We know that $\mbox{det}_{*}(I):T_{I}N\rightarrow T_{\det(I)=1}\mathbb{R}$. Let $X\in T_{I}N$. We can write $X$ in a basis of $T_{I}N$. So let us find a basis of $T_{I}N$: we know that $T_{I}N\simeq M_{n}(\mathbb{R})$, so we can get a basis of $T_{I}N$ from a basis of $M_{n}(\mathbb{R})$ using an isomorphism. The function $$f:T_{I}N \rightarrow M_{n}(\mathbb{R}) \\ [\gamma] \mapsto \gamma'(0)$$ is known to be an isomorphism. Furthermore, $\{E_{ij}\}$ is a basis for $M_{n}(\mathbb{R})$, where $E_{ij}$ is the $n\times n$ matrix whose entries are all zero except the entry $i,j$, which is $1$. Thus, a basis for $T_{I}N$ is $\{f^{-1}(E_{ij})\}$. Now, $f^{-1}(E_{ij})$ is the equivalence class of curves $\gamma:\mathbb{R}\rightarrow N$ such that $\gamma(0)=I$ and $\gamma'(0)=E_{ij}$. Hence, a representative of this equivalence class is $\alpha_{ij}(t)=I+tE_{ij}$, and so we can write $\{f^{-1}(E_{ij})\}=\{[\alpha_{ij}]\}$. Hence, we can write $X=\overset{n}{\underset{i,j=1}{\sum}}x_{ij}[\alpha_{ij}]$. Let us see how $\mbox{det}_{*}$ acts on the basis elements $[\alpha_{ij}]$. We have $\mbox{det}_{*}(I)([\alpha_{ij}])=[\mbox{det}\circ\alpha_{ij}]_{1}$ by definition of the derivative (the subscript $1$ reminds us that the equivalence relation of this equivalence class is different, since it is defined on the set of all curves of the type $\gamma:\mathbb{R}\rightarrow\mathbb{R}$ such that $\gamma(0)=\mbox{det}(I)=1$). Now, $\mbox{det}\circ\alpha_{ij}:\mathbb{R}\rightarrow\mathbb{R}$ is such that $$\mbox{det}\circ\alpha_{ij}=\mbox{det}(\alpha_{ij}(t))=\mbox{det}\left(I+tE_{ij}\right)=\mbox{det}\left(\left[\begin{array}{ccc} 1 & & \mathbb{O}\\ & \ddots\\ \mathbb{O} & & 1 \end{array}\right]+\left[\begin{array}{cccc} \mathbb{O} & & & \mathbb{O}\\ & & t\,(i,j\mbox{ entry})\\ \\ \mathbb{O} & & & \mathbb{O} \end{array}\right]\right)$$ The matrix is triangular (or simply diagonal), and so the determinant is the product of the diagonal elements. Hence, $\mbox{det}\circ\alpha_{ij}=1+t\delta_{ij}$, with $\delta_{ij}$ the Kronecker delta. Hence, $$\mbox{det}_{*}(I)([\alpha_{ij}])=[1+t\delta_{ij}]_{1}\in T_{1}\mathbb{R}$$ Finally, $$\mbox{det}_{*}(I)(X)=\overset{n}{\underset{i,j=1}{\sum}}x_{ij}\mbox{det}_{*}(I)([\alpha_{ij}])=\overset{n}{\underset{i,j=1}{\sum}}x_{ij}[1+t\delta_{ij}]_{1}$$ Now, I noticed that, if I for some reason use the isomorphism $$g:T_{1}\mathbb{R} \rightarrow \mathbb{R} \\ [\gamma]_{1} \mapsto \gamma'(0)$$ to "identify" $[1+t\delta_{ij}]_{1}$ with $g([1+t\delta_{ij}]_{1})=\delta_{ij}$ and use that instead of $[1+t\delta_{ij}]_{1}$, I get $\overset{n}{\underset{i,j=1}{\sum}}x_{ij}\delta_{ij}=\mbox{tr}X$. My question is: why is this last step (since "Now, I noticed...") legitimate?
REPLY [2 votes]: Another approach uses standard coordinates: note that, as a function of several variables, the determinant is particularly simple as it is linear in each coordinate. If you develop the determinant using the standard rule, you see that if $X=(x_{i,j})$ is a matrix and the index $i$ is fixed, then $\det X= \sum _{j=1}^n x_{i,j} \det X_{i,j} (-1)^{i+j}$, where $X_{i,j}$ is obtained by erasing the $i$-th row and the $j$-th column of $X$. It follows that $({ \partial \over \partial x_{i,j}} \det ) X= (-1)^{i+j} \det X_{i,j}$ In particular if $X= Id$ is the identity matrix, $({ \partial \over \partial x_{i,j}} \det ) Id=0$ if $i\not = j$ and $({ \partial \over \partial x_{i,i}} \det ) Id=1$ Whence $d \det ({Id}) M= \sum _{i,j} ({ \partial \over \partial x_{i,j}} \det )(Id) m_{i,j} =\sum _i m_{i,i}$ This approach immediately gives you the gradient of the determinant at any point $X$: $\vec {grad} (\det )X$ is the matrix $(-1)^{i+j} \det X_{i,j}$<|endoftext|> TITLE: "Trivial" geometry behind the intersection pairing on surfaces QUESTION [8 upvotes]: The intersection pairing between two divisors on a nonsingular algebraic surface over a field is defined thanks to the following theorem (the reference is Hartshorne's book, Theorem V.1.1, which asserts that there is a unique pairing $\operatorname{Div} X \times \operatorname{Div} X \to \mathbb{Z}$, written $C.D$, satisfying: (1) for nonsingular curves $C,D$ meeting transversally, $C.D = \#(C\cap D)$; (2) symmetry; (3) additivity; (4) dependence only on the linear equivalence classes): One can define a pairing for any couple of invertible sheaves $\mathcal L,\mathcal M\in\operatorname{Pic}(X)$ as follows: $$\mathcal L.\mathcal M:=\chi(\mathcal O_X)-\chi(\mathcal L^{-1})-\chi(\mathcal M^{-1})+\chi( \mathcal L^{-1}\otimes \mathcal M^{-1})\quad\quad (\ast)$$ By using the well known isomorphism between $\operatorname{Pic}(X)$ and the group of divisors up to linear equivalence, one can clearly define: $$C.D:=\mathcal O_X(C).\mathcal O_X(D)$$ and the final step is to show that this definition satisfies properties (1)-(4) of the above theorem. So everything is very clear, but I don't understand the meaning of the definition $(\ast)$. It seems to me that this pairing for invertible sheaves appears out of the blue. Can you give any intuitive motivation about its nature? Why do we need the Euler characteristics? Why are we taking the inverse sheaves? REPLY [2 votes]: Your questions are a frequent reaction to the "rabbit-out-of-a-hat" type of proof. The reason for doing it that way is a matter of exposition. A more natural approach would be step-by-step. First, for two nonsingular curves $C$ and $D$, meeting transversally, define $C.D$ to be the number of intersection points. Next, keeping $C$ fixed, let's show this depends only on the linear equivalence class of $D$. This is because if $D \sim D'$ on the surface, then $D|_C \sim D'|_C$ as divisors on $C$. And then we know that linearly equivalent divisors on a curve have the same degree. This will still work if $D$ becomes singular, as long as its intersection with $C$ is a finite set of points. Maybe you can work out the rest for yourself. If you start with arbitrary divisors, any such is a difference of effective curves, and using Bertini's theorem, you can take them to be nonsingular. This gives a definition for any two divisors, but you have to show it is independent of the choices made. By the time you work this all out in complete detail, perhaps you can appreciate the choice of Hartshorne to take the more efficient "rabbit-out-of-a-hat" method. Of course Kenny's answer to your question is quite excellent.<|endoftext|> TITLE: Geometry homework question - enough data? QUESTION [5 upvotes]: I've been asked to help with the following school problem on geometry. In the triangle $\Delta ABC$ one has $AB = 60$, $AC = 80$.
Point $O$ is the centre of the circumscribed circle. Point $D$ belongs to the side $AC$. Additionally, one has $AO \perp BD$. One is asked to find $CD$. (Just in case, the answer is $35$.) I am really puzzled, since the information given clearly does not fix the triangle. I know how to solve the problem under the assumption that point $O$ belongs to $BD$. In this case, the solution goes as follows: Denote $\alpha = \angle OAC$, $\beta = \angle OBC$. $\angle ACB = \dfrac{1}{2}\angle AOB = 45^\circ$. From the sum of the angles of the triangle $\triangle ABC$, one has: $$\alpha + \beta = 45^\circ$$ The law of sines for the triangle $\triangle ABC$ gives: $$\dfrac{AC}{\sin(\beta + 45^\circ)}=\dfrac{AB}{\sin(45^\circ)}$$ From where one can find $\beta$: $$\beta = \arccos\left( \dfrac{2\sqrt2}{3} \right) + 45^\circ$$ From the triangle $\triangle AOD$ one finds: $$CD = AC - AD = AC - \dfrac{AO}{\cos(\alpha)}= AC - \dfrac{AO}{\cos(45^\circ - \beta)}$$ Substituting the value of $\beta$ indeed gives $CD = 35$. Now, I have two questions: Is it possible to get the answer without the assumption I have made (or any other one)? Can anyone present an easier solution? (Just in case, this is one of $26$ problems in the $9$th grade quiz in Russian middle school; students are obviously limited in time and are not supposed to use Mathematica and even Stack Exchange.) REPLY [3 votes]: Draw the line $AO$ and let $E$ be the second point of intersection of $AO$ with the circumcircle of triangle $ABC$ (the first point of intersection being $A$). Then $AE$ is a diameter of the circumcircle and therefore triangle $ABE$ is a right triangle ($\angle \, ABE = 90^{\circ}$). Let $H$ be the intersection point of $BD$ and $AO$. Since by assumption $BD$ is orthogonal to $AO$, and therefore orthogonal to $AE$, segment $BD$ lies along the altitude of $ABE$ through $B$. Hence triangles $AHB$ and $ABE$ are similar and thus $$\frac{AH}{AB} = \frac{AB}{AE}$$ which is equivalent to $$AH \cdot AE = AB^2 = 60^{2}$$ Triangle $AEC$ is a right triangle ($\angle \, ACE = 90^{\circ}$) and so is $AHD$, which means they are similar and thus $$\frac{AD}{AE}=\frac{AH}{AC} $$ which is equivalent to $$AD \cdot AC = AH \cdot AE = 60^2$$ and since $AD = AC - CD = 80 - CD$ and $AC = 80$ we get the equation $$(80-CD)\cdot 80 = 60^2$$ When you solve it you get $CD = 35$.<|endoftext|> TITLE: Oh Times, $\otimes$ in linear algebra and tensors QUESTION [19 upvotes]: Can I have some clarification of the different meanings of $\otimes$, as in the unifying and separating implications in basic linear algebra and tensors? Here is some of the overloading of this symbol... 1.1. Kronecker matrix product: If $A$ is an $m \times n$ matrix and $B$ is a $p \times q$ matrix, then the Kronecker product A ⊗ B is the $mp \times nq$ block matrix: $$A\color{red}{\otimes}B=\begin{bmatrix}a_{11}\mathbf B&\cdots&a_{1n}\mathbf B\\\vdots&\ddots&\vdots\\a_{m1}\mathbf B&\cdots&a_{mn}\mathbf B\end{bmatrix}$$ 1.2.
Outer product: $\mathbf u \otimes \mathbf v = \mathbf{uv}^\top = \begin{bmatrix}u_1\\u_2\\u_3\end{bmatrix}\begin{bmatrix}v_1&v_2&v_3\end{bmatrix}=\begin{bmatrix}u_1v_1&u_1v_2&u_1v_3\\u_2v_1&u_2v_2&u_2v_3\\u_3v_1&u_3v_2&u_3v_3\end{bmatrix}$ 2. Definition of the tensor space: $$\begin{align}T^p_q\,V &= \underset{p}{\underbrace{V\color{darkorange}{\otimes}\cdots\color{darkorange}{\otimes} V}} \color{darkorange}{\otimes} \underset{q}{\underbrace{V^*\color{darkorange}{\otimes}\cdots\color{darkorange}{\otimes} V^*}}:=\{T\, |\, T\, \text{ is a (p,q) tensor}\}\\[3ex]&=\{T: \underset{p}{\underbrace{V^*\times \cdots \times V^*}}\times \underset{q}{\underbrace{V\times \cdots \times V}} \rightarrow K\}\end{align}$$ 3. Definition of the tensor product: It takes $T\in T_q^p V$ and $S\in T^r_s V$ so that: $$T\color{blue}{\otimes}S\in T_{q+s}^{p+r}V$$ defined as: $$\begin{align}&(T\color{blue}{\otimes}S)(\underbrace{ \omega_1,\cdots,\omega_p,\cdots,\omega_{p+r}, v_1,\cdots,v_q,\cdots,v_{q+s}}_\text{'eats'})\\&:= T(\underbrace{\omega_1,\cdots,\omega_p, v_1,\cdots,v_q}_{\text{'eats up' p covec's + q vec's}\rightarrow \text{no.}})\underbrace{\cdot}_{\text{in the field}}S(\underbrace{\omega_{p+1},\cdots,\omega_{p+r}, v_{q+1},\cdots,v_{q+s}}_{\text{'eats up' r covec's + s vec's} \rightarrow\text{no.}})\end{align}$$ An example of, for instance, some operation like $\underbrace{e_{a_1}\color{blue}{\otimes}\cdots\color{blue}{\otimes}e_{a_p}\color{blue}{\otimes} \epsilon^{b_1}\color{blue}{\otimes}\cdots\color{blue}{\otimes}\epsilon^{b_q}}_{(p,q)\text{ tensor}}$ after settling for some basis could be helpful. For clarity this is a fragment of the more daunting expression: $$ T=\underbrace{\sum_{a_1=1}^{\text{dim v sp.}}\cdots\sum_{b_1=1}^{\text{dim v sp.}}}_{\text{p + q sums (usually omitted)}}\underbrace{\color{green}{T^{\overbrace{a_1,\cdots,a_p}^{\text{numbers}}}_{\quad\quad\quad\quad\underbrace{b_1,\cdots,b_q}_{\text{numbers}}}}}_{\text{a number}}\underbrace{\cdot}_{\text{S-multiplication}}\underbrace{e_{a_1}\color{blue}{\otimes}\cdots\color{blue}{\otimes}e_{a_p}\color{blue}{\otimes} \epsilon^{b_1}\color{blue}{\otimes}\cdots\color{blue}{\otimes}\epsilon^{b_q}}_{(p,q)\text{ tensor}}$$ showing how to recover a tensor from its components. I realize that there is a connection as stated here: The Kronecker product of matrices corresponds to the abstract tensor product of linear maps. Specifically, if the vector spaces $V, W, X$, and $Y$ have bases $\{v_1, \cdots, v_m\}, \{w_1,\cdots, w_n\}, \{x_1,\cdots, x_d\},$ and $\{y_1, \cdots, y_e\}$, respectively, and if the matrices $A$ and $B$ represent the linear transformations $S : V \rightarrow X$ and $T : W \rightarrow Y$, respectively in the appropriate bases, then the matrix $A ⊗ B$ represents the tensor product of the two maps, $S ⊗ T : V ⊗ W → X ⊗ Y$ with respect to the basis $\{v_1 ⊗ w_1, v_1 ⊗ w_2, \cdots, v_2 ⊗ w_1, \cdots, v_m ⊗ w_n\}$ of $V ⊗ W$ and the similarly defined basis of $X ⊗ Y$ with the property that $A ⊗ B(v_i ⊗ w_j) = (Av_i) ⊗ (Bw_j)$, where $i$ and $j$ are integers in the proper range. But it is still elusive... REPLY [12 votes]: If $V$ and $W$ are vector spaces, you can form a third vector space from them called their tensor product $V \otimes W$. The tensor product consists of sums of certain vectors called "pure tensors," which are written $v \otimes w$ where $v \in V, w \in W$, subject to certain rules, e.g. $(v_1 + v_2) \otimes w = v_1 \otimes w + v_2 \otimes w$. For a complete list of these rules see Wikipedia.
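In coordinates these rules are exactly the Kronecker-product identities from 1.1 of the question; here is a quick numerical sketch of that correspondence (my own illustration, assuming numpy, with np.kron standing in for $\otimes$ on coordinate vectors and matrices):

```python
# Sketch (assumes numpy): np.kron realizes v ⊗ w in coordinates, so the
# defining rules of the tensor product become ordinary array identities.
import numpy as np

rng = np.random.default_rng(0)
v1, v2, w = rng.random(3), rng.random(3), rng.random(4)

# bilinearity: (v1 + v2) ⊗ w = v1 ⊗ w + v2 ⊗ w
assert np.allclose(np.kron(v1 + v2, w), np.kron(v1, w) + np.kron(v2, w))

# compatibility with maps: (T ⊗ S)(v ⊗ w) = T(v) ⊗ S(w),
# with np.kron(T, S) as the matrix of T ⊗ S in the product basis
T, S = rng.random((3, 3)), rng.random((4, 4))
assert np.allclose(np.kron(T, S) @ np.kron(v1, w), np.kron(T @ v1, S @ w))
```

The second assertion is precisely the property $A \otimes B(v_i \otimes w_j) = (Av_i) \otimes (Bw_j)$ quoted in the question, checked on random data.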
In practice you'll do fine if you remember the following: If $v_1, \dots v_n$ is a basis of $V$ and $w_1, \dots w_m$ is a basis of $W$, then the pure tensors $v_i \otimes w_j, 1 \le i \le n, 1 \le j \le m$ form a basis of $V \otimes W$. In particular, $\dim V \otimes W = \dim V \times \dim W$. If $T : V_1 \to V_2$ and $S : W_1 \to W_2$ are two linear maps, you can form a third linear map from them which is also called their tensor product $$T \otimes S : V_1 \otimes W_1 \to V_2 \otimes W_2.$$ It is completely determined by how it behaves on pure tensors, which is $$(T \otimes S)(v \otimes w) = T(v) \otimes S(w).$$ The relationship between these two uses of the term "tensor product" is given formally by the notion of a functor. Tensor product notation for linear maps is compatible with the notation $v \otimes w$ for pure tensors in the following sense. A vector $v \in V$ in a vector space is the same thing as a linear map $v : 1 \to V$ from the one-dimensional vector space $1$ given by the underlying field to $V$, and if $v : 1 \to V$ and $w : 1 \to W$ are two vectors in $V, W$, then their tensor product as linear maps $v \otimes w : 1 \otimes 1 \to V \otimes W$ corresponds to the pure tensor $v \otimes w$, where we use that there's a canonical isomorphism $1 \otimes 1 \cong 1$. The Kronecker product is a description of the tensor product of linear maps with respect to a choice of basis for all of the vector spaces involved. Formally, with notation as above, given bases $B_i$ of $V_i$ and $C_i$ of $W_i$, we write $B_i \otimes C_i$ for the corresponding basis of $V_i \otimes W_i$ as in the highlighted area above, and we write $_{B_2}[T]_{B_1}$ for the matrix of a linear transformation $T : V_1 \to V_2$ with respect to the basis $B_1$ of $V_1$ and the basis $B_2$ of $V_2$; then we have $$_{B_2 \otimes C_2}[T \otimes S]_{B_1 \otimes C_1} = \, _{B_2}[T]_{B_1} \otimes \, _{C_2}[S]_{C_1}$$ where on the LHS $\otimes$ means the tensor product of linear maps and on the RHS $\otimes$ means the Kronecker product. One final remark: the definition of spaces of tensors you give in 2) is a terrible definition that I've only seen in some textbooks on differential geometry. It is absolutely the wrong way to think about tensors.<|endoftext|> TITLE: Finding null space of matrix. QUESTION [11 upvotes]: I need to make sure I'm understanding this correctly. I skipped a few steps to reduce typing, but let me know if I need to clarify something. Question asks: Find $N(A)$ for $A$ = \begin{bmatrix} -3 & 6 & -1 & 1 & -7 \\ 1 & -2 & 2 & 3 & -1\\ 2 & -4 & 5 & 8 & -4 \\ \end{bmatrix} First thing I did was put the augmented matrix into reduced row echelon form: $\begin{bmatrix} 1 & -2 & 0 & -1 & 3 & 0 \\ 0 & 0 & 1 & 2 & -2 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 \\ \end{bmatrix}$ $(1)$ So then... $x=\begin{bmatrix} x_1\\ x_2 \\ x_3\\ x_4\\ x_5\\ \end{bmatrix} = \begin{bmatrix} 2x_2 + x_4 - 3x_5\\ x_2 \\ -2x_4 + 2x_5\\ x_4\\ x_5\\ \end{bmatrix}$ $(2) $ Since $x_2, x_4$ and $x_5$ are free variables... $ x_2 \begin{bmatrix} 2\\ 1 \\ 0\\ 0\\ 0\\ \end{bmatrix} + x_4 \begin{bmatrix} 1\\ 0 \\ -2\\ 1\\ 0\\ \end{bmatrix} + x_5 \begin{bmatrix} -3\\ 0 \\ 2\\ 0\\ 1\\ \end{bmatrix}$ $(3)$ Resulting in...
$N(A)= \operatorname{Span}\left\{ \begin{bmatrix} 2\\ 1 \\ 0\\ 0\\ 0\\ \end{bmatrix} , \begin{bmatrix} 1\\ 0 \\ -2\\ 1\\ 0\\ \end{bmatrix} , \begin{bmatrix} -3\\ 0 \\ 2\\ 0\\ 1\\ \end{bmatrix} \right\}$ $(4)$ REPLY [5 votes]: As mentioned in the comments, provided your arithmetic is accurate, this is the correct response. The idea behind the null space of a matrix is that it is precisely those vectors in the domain being sent to the $\mathbf{0}$ vector in the codomain. So, what you have (correctly) done, is determined the solution set of $A\mathbf{x}=\mathbf{0}$. You did this by finding the null space of a reduced row echelon form of $A$, which has the same null space as $A$. That is, if $B$ is the reduced row echelon form for $A$ that you found, $A\mathbf{x}=\mathbf{0}$ if and only if $B\mathbf{x}=\mathbf{0}$. So, $N(B)=N(A)$.<|endoftext|> TITLE: Finding Radon-Nikodym derivative QUESTION [8 upvotes]: Let $m$ be Lebesgue measure on $\mathbb R_+=(0,\infty)$ and $\mathcal A = \sigma\left(( \frac 1{n+1} , \frac 1n ]:n=1,2,...\right)$. Define a new measure $\lambda$ on $\mathcal A$, for each $E \in \mathcal A$, by $\lambda(E)= \int_E fdm $, where $f(x)=2x^2$. Find the Radon-Nikodym derivative $\frac{d\lambda}{dm}$. It is clear that $\lambda $ is absolutely continuous with respect to $m$ by definition, and Lebesgue measure is $\sigma$-finite in $(\mathbb R_+ , \mathfrak M_+)$, where $\mathfrak M_+$ is the collection of all Lebesgue measurable subsets of $\mathbb R_+$, so $m$ is $\sigma$-finite in $(\mathbb R_+, \mathcal A)$ since every element of $\mathcal A$ is also Lebesgue measurable. However, to apply the Radon-Nikodym theorem, $\lambda$ must be a finite measure, but $\lambda(E)= \int_E fdm = \infty$ if $E=\mathbb R_+ - ( \frac 12,1]$, so we cannot apply that theorem. Is there any other approach to this problem? REPLY [8 votes]: There is no need for the Radon-Nikodým theorem. By the very definition of the Radon-Nikodým derivative, we are looking for a function $g: (0,\infty) \to [0,\infty)$ which is measurable with respect to $\mathcal{A}$ and satisfies $$\lambda(E) = \int_E g \, dm, \qquad E \in \mathcal{A}. \tag{1}$$ Note that $g(x) := f(x) :=2x^2$ is not measurable with respect to $\mathcal{A}$ and therefore we cannot simply choose $g=f$. If we define $$E_n := \begin{cases} \bigg( \frac{1}{n+1}, \frac{1}{n} \bigg], & n \in \mathbb{N}, \\ (1,\infty), & n = 0 \end{cases}$$ then $\mathcal{A} = \sigma(E_n; n \in \mathbb{N}_0)$. Since the intervals $E_n$, $n \in \mathbb{N}_0$, are disjoint and cover $(0,\infty)$, equation $(1)$ is equivalent to $$\lambda (E_n) = \int_{E_n} g \, dm \qquad \text{for all} \, \, n \in \mathbb{N}_0. \tag{2}$$ Moreover, any $\mathcal{A}$-measurable function $g$ is of the form $$g(x) = \sum_{n \in \mathbb{N}_0} c_n 1_{E_n}(x) \tag{3}$$ for constants $c_n \in \mathbb{R}$. The only thing which we have to do is to choose the constants $c_n \geq 0$ such that $(2)$ holds. To this end, we plug our candidate $(3)$ into $(2)$ and find $$\lambda(E_n) \stackrel{!}{=} \int_{E_n} g \, dm \stackrel{(3)}{=} c_n \int_{E_n} \, dm= c_n m(E_n) = c_n \left( \frac{1}{n}-\frac{1}{n+1} \right)$$ which implies $$c_n = \lambda(E_n) n (n+1)$$ for all $n \in \mathbb{N}$. $\lambda(E_n)$ can be calculated explicitly using the very definition of $\lambda$; I leave this to you. For $n=0$ we get $$\lambda(E_0) = \infty \stackrel{!}{=} c_0 m(E_0) = c_0 \infty,$$ i.e. we can choose $c_0 := 1$.
Hence, $$g(x) = 1_{E_0}(x)+ \sum_{n \geq 1} \lambda(E_n) n (n+1) 1_{E_n}(x)$$ is a non-negative $\mathcal{A}$-measurable function which satisfies $(2)$ (hence, $(1)$), i.e. $$g = \frac{d\lambda}{dm}.$$<|endoftext|> TITLE: Question about proof of Implicit Function Theorem in *Analysis on Manifolds* by Munkres QUESTION [7 upvotes]: I am reading Analysis on Manifolds by Munkres, and have a question about the proof of the Implicit Function Theorem (both the statement and proof were included as images below): Note (3rd paragraph of the proof) how Munkres chooses $U \times V$ as a neighborhood of $(a,b) \in \mathbb{R}^{k+n}$. I know this can be done by restricting the open set guaranteed to exist by the Inverse Function Theorem, but I don't see why we want it to be a Cartesian product. Regarding uniqueness (last paragraph of the proof), why is the argument provided necessary? It seems unnecessarily complicated. Here is how I reasoned it: say $(x,g(x)) \in U \times V$ s.t. $f(x,g(x))= \textbf{0}_n$. Then $F(x,g(x))=(x,\textbf{0}_n)$, so $$(x,g(x))=G(x,\textbf{0}_n)=(x,h(x,\textbf{0}_n)).$$ (Here $G=F^{-1}$ and $h$ is the last $n$ coordinate functions of $G$, following Munkres' notation). By inspection, $g(x)=h(x,\textbf{0}_n)$, hence uniqueness is shown because we just derived what $g(x)$ has to be. Many thanks in advance. REPLY [2 votes]: For the first question, you are essentially viewing $\mathbb{R}^{k+n}$ as being isomorphic to the Cartesian product $\mathbb{R}^k \times \mathbb{R}^n$. The open sets of $\mathbb{R}^k \times \mathbb{R}^n$ are precisely the unions of Cartesian products of open sets from $\mathbb{R}^k$ and $\mathbb{R}^n$. So, inside the open subset of $\mathbb{R}^{k+n}$ arising from the inverse function theorem, you can find a smaller neighbourhood of $(a,b)$ of the form $U \times V$, where $U \subset \mathbb{R}^k$ and $V \subset \mathbb{R}^n$ are open. For the second question, your reasoning only works when you know that $g(x) \in V$. A priori it is not known that $g(B) \subset V$, as @Math_QED points out. To be very precise, here is the conclusion of the implicit function theorem, with the existence and uniqueness claims separated out and elaborated: There exists a neighbourhood $B$ of $a$ in $\mathbb{R}^k$ and a continuous function $g : B \to \mathbb{R}^n$ such that $g(a) = b$, $(x,g(x)) \in A$ for all $x \in B$ and $f(x,g(x)) = 0$ for all $x \in B$. Moreover, if $g_0 : B \to \mathbb{R}^n$ is any continuous function such that $g_0(a) = b$, $(x,g_0(x)) \in A$ for all $x \in B$ and $f(x,g_0(x)) = 0$ for all $x \in B$, then $g = g_0$. So, it could very well be that $g_0(B) \not\subset V$. It will turn out that this is not the case, but this needs a proper argument and cannot be assumed beforehand. Let me elaborate in greater detail on where and why OP's naive method of proving uniqueness fails. Suppose $g_0 : B \to \mathbb{R}^n$ is another continuous function such that $g_0(a) = b$ and $f(x,g_0(x)) = 0$ for all $x \in B$. Since $g_0$ is continuous and $b \in V$, there is a neighbourhood $B_0$ of $a$ in $B$ such that $g_0(x) \in V$ for all $x \in B_0$. Now, we apply OP's idea to say that for all $x \in B_0$, we have \begin{align} (x,g_0(x)) \in U \times V &\implies F(x,g_0(x)) = (x,f(x,g_0(x))) = (x,0) \in W \\ &\implies (x,g_0(x)) = G(x,0) = (x,h(x,0)) = (x,g(x)). \end{align} Hence, $g(x) = g_0(x)$ for all $x \in B_0$. Now, if we replace the neighbourhood $B$ with $B_0$ (since we anyway need the existence only locally), we are done. But are we, really?
What if, for each $n \in \mathbb{N}$, I can find a function $g_n$ that agrees with $g$ only on a smaller and smaller neighbourhood of $a$, with the intersection of all these domains being the singleton $\{ a \}$? Then, there won't be any neighbourhood of $a$ on which a unique $g$ exists! Such a scenario seems very unlikely, but how to rule it out? Well, going back to our attempt, we observe that the proof goes through not just for $a$ but for any $a_0 \in B$ such that $g(a_0) = g_0(a_0)$. So, whenever $g(a_0) = g_0(a_0)$, $g(x) = g_0(x)$ for all $x$ in some neighbourhood $B_0$ of $a_0$. So, we have actually proved that $g$ agrees with $g_0$ on an open set. How does this help? Well, we already know that the set of points $x \in B$ at which $g(x) \neq g_0(x)$ is open, simply by continuity. What we have shown is that the complement of this set is also open. But this implies that the connected set $B$ is a disjoint union of two open sets, so one of them must be empty. And at this point we are done. And now note that this is literally Munkres’ proof, reproduced with some commentary. Here is an example that shows why the continuity of $g$ plays a crucial role in determining its uniqueness. Consider $f : \mathbb{R}^2 \to \mathbb{R}$ given by $f(x,y) = x^2 + y^2 - 5$, as in @Math_QED’s answer. We can solve for $y$ in terms of $x$ locally around $x=1$, but in two different ways. This is because $f$ vanishes at both $(1,2)$ as well as $(1,-2)$. So, there is an open interval $B \subset \mathbb{R}$ containing $1$ and continuous functions $g_i : B \to \mathbb{R}$ with $g_1(1) = 2$ and $g_2(1) = -2$, such that $$ f(x,g_i(x)) = 0 \quad \text{for all } x \in B, \quad i=1,2. $$ So far, there is no contradiction to the statement that $g$ is unique, for the following reason. There can be several tuples $(a,b)$ for a fixed $a$ such that $f(a,b) = 0$. The implicit function theorem says that for each such tuple, there is a unique continuous $g : B \to \mathbb{R}^n$ such that $g(a) = b$ and $f(x,g(x)) = 0$ for all $x \in B$. And, this is verified in the above example. Once the tuple $(1,2)$ or $(1,-2)$ is fixed, the function $g_i$ is also uniquely determined. However, if we drop the demand that $g$ be continuous, then it is easy to see how there can be many different functions that satisfy the given conditions. For instance, define $g_0 : B \to \mathbb{R}$ by $$ g_0(x) = \begin{cases} g_1(x), & x \in B \cap \mathbb{Q};\\ g_2(x), & \text{otherwise}. \end{cases} $$ Then, $g_0(1) = 2$ and $f(x,g_0(x)) = 0$ for all $x \in B$. However, $g_0 \neq g_1$. This shows that the continuity of $g$ plays a crucial role in proving the uniqueness of $g$.<|endoftext|> TITLE: Working out a concrete example of tensor product QUESTION [6 upvotes]: From this entry in Wikipedia: The tensor product of two vector spaces $V$ and $W$ over a field $K$ is another vector space over $K$. It is denoted $V\otimes_K W$, or $V\otimes W$ when the underlying field $K$ is understood. If $V$ has a basis $e_1,\cdots,e_m$ and $W$ has a basis $f_1,\cdots,f_n$, then the tensor product $V\otimes W$ can be taken to be a vector space spanned by a basis consisting of all pairs $(e_i,f_j)$; each such basis element of $V\otimes W$ is denoted $e_i\otimes f_j$.
For any vectors $v=\sum_i v_ie_i\in V$ and $w=\sum_j w_j f_j\in W$ there is a corresponding product vector $v\otimes w$ in $V\otimes W$ given by $\sum_{ij}v_iw_j(e_i\otimes f_j)\in V\otimes W.$ This product operation $\otimes:V\times W \rightarrow V\otimes W$ is quickly verified to be bilinear. As an example, letting $V=W=\mathbb R^3$ (considered as a vector space over the field of real numbers) and considering the standard basis set $\{\hat x, \hat y,\hat z\}$ for each, the tensor product $V\otimes W$ is spanned by the nine basis vectors $\{\hat x \otimes \hat x,\hat x \otimes \hat y,\hat x \otimes \hat z,\hat y \otimes \hat x,\hat y \otimes \hat y, \hat y \otimes \hat z ,\hat z\otimes \hat x,\hat z \otimes \hat y, \hat z \otimes \hat z \}$ and is isomorphic to $\mathbb R^9$. For vectors $v=(1,2,3),w=(1,0,0)\in \mathbb R^3$ the tensor product $$\bbox[10px, border:2px solid red]{v\otimes w= \hat x\otimes \hat x + 2\hat y\otimes \hat x+3\hat z\otimes \hat x}$$ The above definition relies on a choice of basis, which cannot be done canonically for a generic vector space. However, any two choices of basis lead to isomorphic tensor product spaces (cf. the universal property described below). Alternatively, the tensor product may be defined in an expressly basis-independent manner as a quotient space of a free vector space over $V\times W$. This approach is described below. QUESTION: If we decide on the standard Euclidean orthonormal basis, what is the final expression of the $v\otimes w$ product in the red boxed expression? Do we eventually get rid of the vector expressions with hats (as well as the $\otimes$ symbols) to get a number as per the (approximate) idea of a tensor as a map from $V\times W\rightarrow \mathbb R?$ What if we change the bases from orthonormal to $\large\begin{bmatrix}\tilde x\\\tilde y\\\tilde z\end{bmatrix}=\begin{bmatrix}3&4&-1\\0&3&7\\1&3&0.5\end{bmatrix}\begin{bmatrix}\hat x\\\hat y\\\hat z\end{bmatrix}?$ REPLY [2 votes]: Thanks for the hint in comments. It's clearer now what the answer would be: applied to the case in the QUESTION, the change of basis matrix is $\small\begin{bmatrix}3&4&-1\\0&3&7\\1&3&0.5\end{bmatrix}$, and its inverse is approximately $\small\begin{bmatrix}0.7&0.2&-1.1\\-0.3&-0.1&0.8\\0.1&0.2&-0.3\end{bmatrix}$. The vectors $v$ and $w$ in the new coordinate system are (rounded) $v =\small\begin{bmatrix}0.7&0.2&-1.1\\-0.3&-0.1&0.8\\0.1&0.2&-0.3\end{bmatrix}\begin{bmatrix}1\\2\\3\end{bmatrix} =\begin{bmatrix}-2.3\\1.9\\-0.5\end{bmatrix}$ and $w=\small\begin{bmatrix}0.7&0.2&-1.1\\-0.3&-0.1&0.8\\0.1&0.2&-0.3\end{bmatrix}\begin{bmatrix}1\\0\\0\end{bmatrix}=\begin{bmatrix}0.7\\-0.3\\0.1\end{bmatrix}$. Therefore, $$\begin{align}\large v\otimes w=\left(-2.3\tilde x + 1.9\tilde y -0.5 \tilde z\right)\otimes \left(0.7\tilde x -0.3\tilde y + 0.1\tilde z\right)\\[2ex]=-1.61\;\tilde x\otimes \tilde x + 0.69\;\tilde x\otimes \tilde y -0.23 \;\tilde x\otimes \tilde z + 1.33\;\tilde y\otimes \tilde x -0.57\;\tilde y\otimes \tilde y+ 0.19\;\tilde y\otimes \tilde z -0.35\;\tilde z\otimes \tilde x +0.15 \;\tilde z\otimes \tilde y-0.05\;\tilde z\otimes \tilde z\end{align}$$ So what's the point?
Starting off from the definition of the tensor product of two vector spaces ($V\otimes W$) with the same basis, we end up calculating the outer product of the two vectors: $$\large v\otimes_o w=\small \begin{bmatrix}-2.3\\1.9\\-0.5\end{bmatrix}\begin{bmatrix}0.7&-0.3&0.1\end{bmatrix}=\begin{bmatrix}-1.61&0.69&-0.23\\1.33&-0.57&0.19\\-0.35&0.15&-0.05\end{bmatrix}$$ This connects this post to this more general question.<|endoftext|> TITLE: Why is there no analogy of completing the square for quartics and higher? QUESTION [6 upvotes]: One use of completing the square in a quadratic is to find its critical value. I thought whether this could also be possible with quartics; if nothing else, it would be a cool way to find extrema without taking a derivative and having to solve a cubic; so I tried to factor $x^4 + 4x^3 + 6x^2 + 6x + 2$ as $(x^2 + bx + c)^2+d$, and I arrived at an inconsistent system of equations. Is there any deep reason why we can't solve quartics using the "completing the square" method? P.S. If possible, please refrain from using advanced abstract algebra terminology. REPLY [5 votes]: Quartics can be solved with a similar idea. As dxiv mentions, one can depress the quartic with the substitution $x=y-\frac b{4a}$ to get $$ax^4+bx^3+cx^2+dx+e=ay^4+c'y^2+d'y+e'$$ for some new constants $c',d',e'$. We can then imitate the "completing the square" step by setting this equal to zero: $$ay^4+c'y^2+d'y+e'=0$$ Then subtract a quadratic from both sides to get $$ay^4+c''y^2+e''=ny^2-d'y+m$$ and solve for $c'',e'',n,m$ such that we get perfect squares: $$(uy^2+u')^2=(vy+v')^2$$ Whereupon we can remove the squares to get $$uy^2+u'=\pm(vy+v')$$ and complete the square again to solve for $y$, which solves for $x$. Of course, since it's not possible to solve general polynomials of degree $5$ and higher with radicals... this isn't going to work for any higher general polynomials.<|endoftext|> TITLE: Evaluating the integral $\frac{1}{2^{2n-2}}\int_0^1\frac{x^{4n}\left(1-x\right)^{4n}}{1+x^2} dx$ QUESTION [8 upvotes]: Prove that: $$ \frac{1}{2^{2n-2}}\int \limits_{0}^{1} \dfrac{x^{4n}\left(1-x\right)^{4n}}{1+x^2} dx =$$$$\sum \limits_{j=0}^{2n-1}\dfrac{(-1)^j}{2^{2n-j-2}\left(8n-j-1\right)\binom{8n-j-2}{4n+j}} + (-1)^n\left(\pi-4\sum \limits_{j=0}^{3n-1}\dfrac{(-1)^j}{2j+1}\right)\,\,\,\,(♣)$$ where $\binom{a}{b}= {}_aC_b=\frac{a!}{b!(a-b)!}$. I was reading an article on "$\frac{22}{7}$ exceeds $\pi$ ", when I came across the generalized form $(♣)$. Putting the value of $n=1$ in $(♣)$, we get the famous Putnam Problem: $$0<\int \limits_0^1\dfrac{x^4(1-x)^4}{1+x^2}dx=\dfrac{22}{7}-\pi.\,\,\,\,(♠)$$ I know how to evaluate $(♠)$ by using expansion and polynomial long-division and then term-wise integration. The other integrals involving $n>1$ provide better approximations of $\pi$. But I am unable to get even close to evaluating $(♣)$ because of the '$n$'s. I tried to do polynomial division but got badly stuck. The $1+x^2$ in the denominator reminds me of $\arctan(x)$, but what can I do? Can anyone provide a bit of help as to how to evaluate this lovely integral? Also I would like to know if it is okay to treat $n$ as a real number here instead of a natural number.
REPLY [3 votes]: Let us do some preliminary proofs. $$\color{red}{\displaystyle 4\int\limits_0^1 \dfrac{x^{6n}}{1+x^2}\; dx = (-1)^n \left(\pi-4\sum\limits_{j=0}^{3n-1}\dfrac{(-1)^j}{2j+1}\right)}\tag 1 \\$$ Proof: Denote the integral by $\mathfrak{A}$; then $\mathfrak{A}\\ =\displaystyle (-1)^n \left(4\int\limits_0^1\dfrac{1-1+(-1)^n x^{6n}}{1+x^2}\; dx\right) \\ \displaystyle =(-1)^n \left(4\int\limits_0^1 \dfrac{1}{1+x^2}\; dx - 4\int\limits_0^1\dfrac{1-(-1)^n x^{6n}}{1+x^2}\; dx\right)\\ \displaystyle =(-1)^n \left(\pi -4\int\limits_0^1\sum\limits_{j=0}^{3n-1}(-x^2)^j\; dx\right)\\ \displaystyle =(-1)^n \left(\pi-4\sum_{j=0}^{3n-1}\dfrac{(-1)^j}{2j+1}\right)$ $$\color{blue}{\displaystyle \sum\limits_{j=0}^{2n-1} \left(\dfrac{-2x}{(1-x)^2}\right)^j = \dfrac{(1-x)^2}{1+x^2}\left(1-\left(\dfrac{2x}{(1-x)^2}\right)^{2n}\right)}\tag 2$$ Proof: It follows directly from the GP formula that $\displaystyle \sum\limits_{j=0}^{2n-1} \left(\dfrac{-2x}{(1-x)^2}\right)^j = \dfrac{1-\left(\dfrac{-2x}{(1-x)^2}\right)^{2n}}{1+\left(\dfrac{2x}{(1-x)^2}\right)}$ Just simplifying the denominator we get $\displaystyle \sum\limits_{j=0}^{2n-1} \left(\dfrac{-2x}{(1-x)^2}\right)^j = \dfrac{(1-x)^2}{1+x^2}\left(1-\left(\dfrac{2x}{(1-x)^2}\right)^{2n}\right)$ Coming back to the problem, denote the integral by $I$: $\displaystyle \dfrac{1}{2^{2n-2}}\int\limits_0^1 \dfrac{x^{4n}(1-x)^{4n}}{1+x^2}\; dx \\ = \displaystyle \dfrac{1}{2^{2n-2}}\int\limits_0^1 \dfrac{x^{4n}(1-x)^{4n}-2^{2n}(x^{6n}-x^{6n})}{1+x^2}\; dx \\= \displaystyle \dfrac{1}{2^{2n-2}}\int\limits_0^1 \dfrac{x^{4n}(1-x)^{4n}-2^{2n}x^{6n}}{1+x^2}\; dx+4\int\limits_0^1 \dfrac{x^{6n}}{1+x^2}\; dx = \displaystyle \dfrac{1}{2^{2n-2}}\int\limits_0^1 x^{4n}(1-x)^{4n-2}\dfrac{(1-x)^2}{1+x^2}\left(1-\left(\dfrac{2x}{(1-x)^2}\right)^{2n}\right)\; dx + \underbrace{(-1)^n \left(\pi-4\sum\limits_{j=0}^{3n-1}\dfrac{(-1)^j}{2j+1}\right)}_{\text{From (1)}} \\ \displaystyle = \dfrac{1}{2^{2n-2}}\int\limits_0^1 x^{4n}(1-x)^{4n-2} \underbrace{\left(\displaystyle \sum\limits_{j=0}^{2n-1} \left(\dfrac{-2x}{(1-x)^2}\right)^j\right)}_{\text{From (2)}}\; dx + (-1)^n \left(\pi-4\sum\limits_{j=0}^{3n-1}\dfrac{(-1)^j}{2j+1}\right) \\ \displaystyle = \dfrac{1}{2^{2n-2}}\sum\limits_{j=0}^{2n-1} (-2)^j \beta(4n+j+1,4n-2j-1)+(-1)^n \left(\pi-4\sum\limits_{j=0}^{3n-1}\dfrac{(-1)^j}{2j+1}\right) \\ \displaystyle = \sum \limits_{j=0}^{2n-1}\dfrac{(-1)^j}{2^{2n-j-2}\left(8n-j-1\right){{8n-j-2}\choose{4n+j}}} + (-1)^n\left(\pi-4\sum \limits_{j=0}^{3n-1}\dfrac{(-1)^j}{2j+1}\right)\\$ $$\color{red}{\boxed{\mathfrak{Proved}}}$$<|endoftext|> TITLE: Smallest positive integral value of $a$ such that ${\sin}^2 x+a\cos x+{a}^2>1+\cos x$ QUESTION [7 upvotes]: If the inequality $${\sin}^2 x+a\cos x+{a}^2>1+\cos x$$ holds for all $x \in \Bbb R$ then what's the smallest positive integral value of $a$? Here's my approach to the problem $$\cos^2 x+(1-a)\cos x-a^2<0$$ Let us consider this as a quadratic in $a$. Applying the quadratic formula $a=\frac{-\cos x\pm\sqrt{5\cos^2 x+4\cos x}}2 $ and substituting $\cos x$ with $1$ and $-1$ we get the $3$ values where the expression can vanish: $-2,0,1$. How should I proceed now? REPLY [3 votes]: We need the inequality $\cos^2x+(1-a)\cos{x}-a^2<0$ to be true for all real $x$. Let $\cos{x}=t$.
Thus, we need to find the smallest natural $a$ for which the inequality $$t^2+(1-a)t-a^2<0$$ is true for all $t\in[-1,1]$. Since the left-hand side is convex in $t$, it is enough to check the endpoints $t=\pm1$, so we need $$(-1)^2+(1-a)(-1)-a^2<0$$ and $$1^2+(1-a)\cdot1-a^2<0,$$ which is $$a^2-a>0$$ and $$a^2+a-2>0,$$ which gives $$a\in(-\infty,-2)\cup(1,+\infty)$$ and we get the answer: $2$.<|endoftext|> TITLE: Topos theory and higher-order logic QUESTION [8 upvotes]: [Updated in light of some of the comments and answers below] This is a question about the relationship between higher-order logic and topoi. It's well-known that every topos gives a model of higher-order logic, in which the type that sentences inhabit is taken to be the subobject classifier $\Omega$. As far as I can see $\Omega$, qua interpretation of higher-order logic, is playing two distinct roles and I can't see why they're being identified: $\Omega$ plays a special role in turning subobjects into elements of a power object in the topos. $Sub(-)\cong Hom(-,\Omega)$ $\Omega$ plays the role of being the truth values of sentences. There is, for example, another natural choice for the second role, namely: $2$ (the coproduct of the terminal object with itself). Given a choice of arrow $true: 1\to 2$ we can interpret higher-order logic within a topos in an analogous way. I'd like to know if there's any reason not to split these two roles. Is there a principle of higher-order logic that forces these two roles to coincide? A natural candidate would be the schema: $\forall f(\forall xy(fx=fy \to x=y) \to \exists G \forall z(Gz \leftrightarrow \exists xfx=z))$ where $f:A\to B$, $x,y: A$, $z:B$ and $G:B\to \Omega$. This schema is schematic in the types $A$ and $B$. Does this sentence fail if I use $2$ in a topos instead of $\Omega$? Or does $\forall xy(fx=fy \to x=y)$ fail to express the fact that $f$ is mono? Or does $\forall z(Gz \leftrightarrow \exists xfx=z)$ fail to express the fact that $G$ is a characteristic function of $f$? Or what? Lastly, just to clarify where I'm coming from: I'm mainly interested in topos theory as an interpretation of higher-order logic. I realise there are lots of interesting differences between topoi that lead to the same sentences of higher-order logic being true, but these aren't the sorts of differences I'm interested in (at least for these purposes). REPLY [5 votes]: The main idea connecting logic to set theory is the idea that propositions are subsets. On the one hand, if you are interpreting propositional logic in some (set-theoretic) domain $D$ of discourse, then each proposition gets interpreted as a subset of $D$, and connectives are operations on subsets. On the other hand, given any set $D$, we can make $\mathcal{P}(D)$ into a Boolean lattice in which we can compute with propositional logic. And this all extends fairly naturally to predicates and quantifiers and such. When connecting logic to category theory we use basically the same idea, except that propositions are subobjects rather than subsets. The posets $\operatorname{Sub}(X)$ of subobjects of $X$ are of crucial importance to doing propositional logic, and more generally we are interested in the maps back and forth between $\operatorname{Sub}(X)$ and $\operatorname{Sub}(Y)$ that may be induced by a map $X \to Y$. So, for internal first-order logic, classifying subobjects really is the thing we want.
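To make this concrete in the simplest case, here is a minimal sketch (Python, my own illustration of the $\mathbf{Set}$ instance) of how subsets of a set $X$ correspond to characteristic maps $X \to \Omega$ with $\Omega = \{\text{False}, \text{True}\}$, with the connectives acting pointwise:

```python
# Minimal sketch of Sub(X) ≅ Hom(X, Ω) in Set, taking Ω = {False, True}.
X = {0, 1, 2, 3}

def chi(A):        # subobject (subset)  ->  characteristic map X -> Ω
    return lambda x: x in A

def sub(f):        # characteristic map  ->  subobject (subset)
    return {x for x in X if f(x)}

A, B = {0, 1}, {1, 2}
assert sub(chi(A)) == A                                    # the two directions are inverse
assert {x for x in X if chi(A)(x) and chi(B)(x)} == A & B  # pointwise AND = intersection
assert {x for x in X if chi(A)(x) or chi(B)(x)} == A | B   # pointwise OR  = union
```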
Even better, in a topos, $\operatorname{Sub}$ is a representable functor $$ \operatorname{Sub}(-) \cong \hom(-, \Omega) $$ So, while $\operatorname{Sub}(-)$ is a relatively complicated object, in a topos the whole thing effectively collapses down to being a single object $\Omega$, thus allowing the internal logic to be treated in a more elementary way.<|endoftext|> TITLE: Show integral limit QUESTION [10 upvotes]: How to show that: $$I:=\lim_{\epsilon \to 0^+} \int_{\epsilon}^{2\epsilon} \frac{1}{\ln{(1+x)}}dx=\ln{2}$$ Using the mean value theorem for integrals I can show that $\frac{1}{2 }\leq I \leq 1$, but I'm not able to show that $I=\ln{2}$. Any hints? REPLY [2 votes]: Make the substitution $x=\epsilon t$, to give $$ I=\lim_{\epsilon\to0^+}\int_1^2 \frac\epsilon{\ln(1+\epsilon t)}\ dt = \int_1^2 \lim_{\epsilon\to0^+}\frac\epsilon{\ln(1+\epsilon t)}\ dt $$ Note that moving the limit inside the integral can fail in some situations (even with constant limits)... fortunately, we can do it if the function converges uniformly on the interval, and this function does. Evaluating the limit, we get $$ I = \int_1^2 \frac1t\ dt = \big[\ln t\big]_1^2= \ln 2-\ln 1 = \ln 2 $$<|endoftext|> TITLE: Tricky inequality involving 3 variables QUESTION [11 upvotes]: Let $x, y$ and $z$ be three real numbers satisfying the following conditions: $$0 < x \leq y \leq z$$ AND $$xy + yz + zx = 3$$ Prove that the maximum value of $(x y^3 z^2)$ is $2.$ I tried using the weighted AM-GM inequality, but to no avail as the powers 1,2 and 3 are giving me a hard time. How should I proceed? Thanks in advance. REPLY [7 votes]: Let $x=\frac{a}{2\sqrt2}$, $y=\sqrt2b$ and $z=\sqrt2c$. Hence, $c\geq b$ and by AM-GM: $$6=4bc+ab+ac\geq6\sqrt[6]{(bc)^4(ab)(ac)}=6\sqrt[6]{a^2b^5c^5}\geq6\sqrt[6]{a^2b^6c^4}=6\sqrt[3]{ab^3c^2},$$ where the last inequality uses $c\geq b$; this gives $$1\geq ab^3c^2=\frac{1}{2}xy^3z^2.$$ The equality occurs for $x=\frac{1}{2\sqrt2}$ and $y=z=\sqrt2$ and we are done!<|endoftext|> TITLE: Bizarre Definite Integral QUESTION [27 upvotes]: Does the following equality hold? $$\large \int_0^1 \frac{\tan^{-1}{\left(\frac{88\sqrt{21}}{215+36x^2}\right)}}{\sqrt{1-x^2}} \, \text{d}x = \frac{\pi^2}{6}$$ The supposed equality holds to 61 decimal places in Mathematica, which fails to numerically evaluate it after anything greater than 71 digits of working precision. I am unsure of its correctness, and I struggle to prove it. The only progress I have in solving this is the following identity, which holds for all real $x$: $$\tan^{-1}{\left( \frac{11+6x}{4\sqrt{21}} \right )} + \tan^{-1}{\left( \frac{11-6x}{4\sqrt{21}} \right )} \equiv \tan^{-1}{\left(\frac{88\sqrt{21}}{215+36x^2}\right)}$$ I also tried the Euler Substitution $t^2 = \frac{1-x}{1+x}$ but it looks horrible. Addition: Is there some kind of general form to this integral? Side thoughts: Perhaps this is transformable into the Generalised Ahmed's Integral, or something similar. REPLY [19 votes]: As pointed out in one of the comments, user @Start wearing purple demonstrated a very general approach for solving this kind of integral, see this. As an alternative approach, let me give a different argument that appeals to a specific property satisfied by OP's integral. Step 1. (Reduction and the main claim) We begin by substituting $x = \cos(\theta/2)$. Then the integral equals $$ \frac{1}{2} \int_{0}^{\pi} \arctan\left(\frac{88\sqrt{21}}{233+18\cos\theta}\right) \, d\theta = \frac{\pi^2}{4} - \frac{1}{2} \int_{0}^{\pi} \arctan\left(\frac{233+18\cos\theta}{88\sqrt{21}}\right) \, d\theta. $$ So it suffices to prove that $$ \int_{0}^{\pi} \arctan\left(\frac{233+18\cos\theta}{88\sqrt{21}}\right) \, d\theta \stackrel{?}{=} \frac{\pi^2}{6}. \tag{1} $$ To evaluate this integral, let me give the punchline. Claim. Let $0 < a <1$ and $b > 0$ satisfy $4a^2 - b^2 = \frac{4}{3}$. Then $$ \int_{0}^{\pi} \arctan(a + b\cos\theta) \, d\theta = \frac{\pi^2}{6}. $$ Notice that $(a, b) = \left( \frac{233}{88\sqrt{21}}, \frac{18}{88\sqrt{21}} \right)$ satisfies the relation in the assertion of the Claim. So we focus on proving this claim. Step 2. (Definition and properties of $I$) Now define $I(a, b)$ by $$ I(a, b) = \int_{0}^{\pi} \arctan(a + b\cos\theta) \, d\theta. $$ From the substitution $\theta \mapsto \pi - \theta$, it is clear that $I(a,-b) = I(a, b)$. Then for $0 < a < 1$ and $0 < \theta < \pi$, we have \begin{align*} &\arctan(a + b\cos\theta) + \arctan(a - b\cos\theta) \\ &\hspace{1em}= \arctan\left( \frac{2a}{1-(a^2-b^2\cos^2\theta)} \right) \\ &\hspace{2em}= \arctan\left( \frac{4a}{2-2a^2+b^2+b^2\cos(2\theta)} \right) \\ &\hspace{3em}= \frac{\pi}{2} - \arctan\left( \frac{2-2a^2+b^2}{4a} + \frac{b^2}{4a}\cos(2\theta) \right). \end{align*} Plugging this back and exploiting the symmetry of cosine, we have $$ I(a, b) = \frac{\pi^2}{4} - \frac{1}{2}I\left( \frac{2-2a^2+b^2}{4a}, \frac{b^2}{4a} \right). \tag{2} $$ Step 3. Now here comes the central observation. Let $(a, b)$ satisfy $0 < a < 1$ and $b > 0$, and define the sequence $(a_n, b_n)$ recursively by $$ (a_0, b_0) = (a, b), \qquad (a_{n+1}, b_{n+1}) = \left( \frac{2-2a_n^2+b_n^2}{4a_n}, \frac{b_n^2}{4a_n} \right). $$ Observation. Assume that $4a^2 - b^2 = \frac{4}{3}$. Then for all $n \geq 0$ we have $$ \frac{1}{\sqrt{3}} \leq a_{n+1} \leq a_n, \qquad 4a_n^2 - b_n^2 = \frac{4}{3}. $$ The proof is tedious algebra, so we skip it. Now by this observation, we have $|a_n| < 1$ for all $n$. Then a recursive application of $\text{(2)}$ gives $$ I(a, b) = \frac{\pi^2}{4}\sum_{k=0}^{n-1} \left(-\frac{1}{2}\right)^k + \left(-\frac{1}{2}\right)^n I(a_n, b_n). $$ Since $|I(a_n, b_n)| \leq \frac{\pi^2}{2}$ for all $n$, taking limits as $n\to\infty$ proves the claim. Remark. (1) The condition $4a^2 - b^2 = \frac{4}{3}$ is crucial for our proof. For an arbitrary starting point $(a, b)$, the sequence $(a_n, b_n)$ is dynamically unstable and hence the formula $\text{(2)}$ is not applicable. (2) The claim is true for any $a > 0$ in view of the principle of analytic continuation. (3) Again, @Start wearing purple's computation gives a more general result with a relatively economic computation: for all $a, b \in \Bbb{R}$, $$ \int_{0}^{\pi} \arctan(a + b\cos\theta) \, d\theta = \pi \arg \left(1 + ia + \sqrt{b^2 + (1+ia)^2}\right). \tag{3} $$ This follows from the formula $$ \int_{0}^{\pi} \log(1 + s \cos\theta) \, d\theta = \pi \log\left( \frac{1 + \sqrt{1-s^2}}{2} \right) $$ which is valid for any complex $s$ with $|s| < 1$. Our relation $4a^2 - b^2 = \frac{4}{3}$ ensures that the RHS of $\text{(3)}$ is always $\frac{\pi^2}{6}$, since $1 + ia + \sqrt{b^2 + (1+ia)^2} = (1+\sqrt{3}a)\left( 1 + \frac{i}{\sqrt{3}} \right)$.<|endoftext|> TITLE: Forms of a Rubik's Snake QUESTION [6 upvotes]: A Rubik's Snake is a game made by Ernő Rubik. It is a rod with 24 triangular prisms fixed together on 23 pivots. Each pivot can be twisted into 4 positions, 90 degrees apart, to create different shapes.
I know I can find the number of forms it can take, provided that it only makes 90 degree turns at each pivot and there is no 'lack' of space (this can easily be found from the figures: 4 twists available at each of the 23 pivots, so $4^{23} = 70368744177664$). How would I incorporate the fact that there is limited space available, and that not all moves would be accepted? I don't mind whether they are sensitive to orientations, or not. Therefore, go for the easier option. REPLY [2 votes]: According to this article on Wikipedia: The number of different shapes of the Rubik's Snake is at most $4^{23}$ = 70 368 744 177 664 (≈ $7\times10^{13}$), i.e. 23 turning areas with 4 positions each. The real number of different shapes is lower since some configurations are spatially impossible (because they would require multiple prisms to occupy the same region of space). Peter Aylett computed via an exhaustive search that 13 535 886 319 159 (≈ $1\times10^{13}$) positions are possible when prohibiting prism collisions, or passing through a collision to reach another position; or 6 770 518 220 623 (≈ $7\times10^{12}$) when mirror images (defined as the same sequence of turns, but from the other end of the snake) are counted as one position.<|endoftext|> TITLE: Compute $\sum \limits_{k=0}^{n}(-1)^{k}k^{m}\binom{n}{k}$ using Lagrange interpolation. QUESTION [7 upvotes]: Using Lagrange interpolation (I think the identity $\sum \limits_{k=0}^{n}k^{m}\prod \limits_{\substack{i=0\\i\neq k}}^{n}\frac{x-i}{k-i}=x^m$ helps), show that $$\sum \limits_{k=0}^{n}(-1)^{k}k^{m}\binom{n}{k}=0 \text{ if }\ 0\leq m<n.$$<|endoftext|> TITLE: Proving the Hypergeometric Sequence QUESTION [5 upvotes]: Question: How do you prove $$_4F_3\left[\begin{array}{c c}\frac 12n+1,n,-x,-y\\\frac 12n,x+n+1,y+n+1\end{array};-1\right]=\dfrac {\Gamma(x+n+1)\Gamma(y+n+1)}{\Gamma(n+1)\Gamma(x+y+n+1)}\tag{1}$$ for $\Re(2x+2y+n+2)>0$? I'm not sure how to prove this. There are a multitude of other similar formulas; some of them are: $$\begin{align*}_4F_3\left[\begin{array}{c c}\frac 12n+1,n,n,-x\\\frac 12n,x+n+1,1\end{array};-1\right] & =\dfrac {\Gamma(x+n+1)}{\Gamma(n+1)\Gamma(x+1)}\end{align*}\tag{2}$$ for $\Re(2x-n+2)>0$, and $$_3F_2\left[\begin{array}{c c}\frac 12n+1,n,-x\\\frac 12n,x+n+1\end{array};-1\right]=\dfrac {\Gamma(x+n+1)\Gamma\left(\frac 12n+\frac 12\right)}{\Gamma(n+1)\Gamma\left(x+\frac 12n+\frac 12\right)}\tag3$$ for $\Re(x)>-\frac 12$. I know that the general hypergeometric series can be written as $$_pF_q\left[\begin{array}{c c}\alpha_1,\alpha_2,\ldots,\alpha_p\\\beta_1,\beta_2,\ldots,\beta_q\end{array};x\right]=\sum\limits_{k=0}^\infty\dfrac {(\alpha_1)_k(\alpha_2)_k\cdots(\alpha_p)_k}{(\beta_1)_k(\beta_2)_k\cdots(\beta_q)_k}\dfrac {x^k}{k!}\tag4$$ So $(1)$ becomes $$_4F_3\left[\begin{array}{c c}\frac 12n+1,n,-x,-y\\\frac 12n,x+n+1,y+n+1\end{array};-1\right]=\sum\limits_{k=0}^{\infty}\dfrac {\left(\frac 12n+1\right)_k(n)_k(-x)_k(-y)_k}{\left(\frac 12n\right)_k(x+n+1)_k(y+n+1)_k}\dfrac {(-1)^k}{k!}\tag{5}$$ However, how do you manipulate the RHS of $(5)$ into the RHS of $(1)$? I think that there could be a sort of elementary transformation involved, but if so, I'm not sure what.
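For what it's worth, a quick numerical spot-check (my sketch, assuming mpmath and that its hyper function can sum this $_4F_3$ at argument $-1$, which is conditionally convergent under the stated constraint) agrees with $(1)$:

```python
# Numerical spot-check of identity (1), as a sketch (assumes mpmath).
from mpmath import mp, hyper, gamma, mpf

mp.dps = 25
n, x, y = 2, mpf('0.3'), mpf('0.7')

lhs = hyper([n/2 + 1, n, -x, -y], [mpf(n)/2, x + n + 1, y + n + 1], -1)
rhs = gamma(x + n + 1) * gamma(y + n + 1) / (gamma(n + 1) * gamma(x + y + n + 1))
print(lhs - rhs)  # difference on the order of the working precision
```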
REPLY [3 votes]: An established identity, useful to transform a $_4F_3 $ hypergeometric series with negative unit argument into a $_3F_2$ series, is $$_4F_3 \left[\begin{array}{c c} a, b, c, d \\ a - b +1, a - c+1, a - d+1 \end{array};-1\right] \\ = \dfrac {\Gamma[a - b+1] \Gamma[a - c+1]}{\Gamma[a+1] \Gamma[a - b - c+1]} \, _3F_2 \left[\begin{array}{c c} a/2 - d+1, b, c \\ a/2 + 1, a - d+1 \end{array};-1\right]$$ So, rewriting your series as $$_4F_3 \left[\begin{array}{c c} n, -x, -y, \frac 12n+1\\ n+x+1, n+y+1, \frac 12n \end{array};-1\right] \\$$ we obtain that it is equal to $$\dfrac {\Gamma(n+x+1)\Gamma(n+y+1)}{\Gamma(n+1)\Gamma(n+x+y+1)} \\ _3F_2 \left[\begin{array}{c c} 0, -x, -y \\ n/2 + 1, n/2 \end{array};-1\right] $$ and since the $_3F_2$ series, owing to the zero among its upper parameters, is equal to $1$, we get your identity. The same method can be used to generate the second identity. Rewriting it as $$\begin{align*}_4F_3\left[\begin{array}{c c} n, -x, n,\dfrac 12n+1 \\ n+x+1, 1, \dfrac 12n \end{array};-1\right] & \end{align*}$$ we get $$ =\dfrac {\Gamma(n+x+1) \Gamma (1)}{\Gamma(n+1)\Gamma(x+1)} \, _3F_2 \left[\begin{array}{c c} 0, -x, n \\ n/2 + 1, n/2 \end{array};-1\right] $$ where again the $_3F_2$ series, because of the zero among its upper parameters, is equal to $1$, giving your identity. Lastly, another known identity can be used to express in closed form a $_3F_2$ hypergeometric function with negative unit argument: $$_3F_2 \left[\begin{array}{c c} a, \dfrac a2+1,c \\ \dfrac a2,a-c+1 \end{array};-1 \right]=\dfrac {\sqrt {\pi} \,\, \Gamma(a-c+1)}{2^a \, \Gamma \left(\frac a2+1 \right)\Gamma \left(\frac{a+1}{2}-c \right)}$$ This can be used to prove the third identity of the OP. Rewriting it as follows we get $$_3F_2 \left[\begin{array}{c c} n, \frac 12n+1,-x \\ \dfrac 12n,n+x+1 \end{array};-1 \right]=\dfrac {\sqrt {\pi} \,\, \Gamma(n+x+1)}{2^n \Gamma \left(\frac n2+1 \right) \Gamma \left(\frac{n+1}{2}+x \right)}$$ Now since it is known that, for given $z $, $$\dfrac {\Gamma (z)}{\Gamma (2z)}= \dfrac {\sqrt {\pi}}{2^{2z-1}\, \Gamma \left(z+\frac{1}{2} \right)}$$ setting $z=(n+1)/2 \,\,$ we have $$\dfrac {\Gamma (\frac 12n+ \frac 12 )}{\Gamma (n+1)}= \dfrac {\sqrt {\pi}}{2^{n} \Gamma \left(\frac {1}{2}n+1 \right)}$$ Substituting in the equation above we obtain $$_3F_2 \left[\begin{array}{c c} n, \frac 12n+1,-x \\ \frac 12n,n+x+1 \end{array};-1 \right]=\dfrac {\Gamma(n+x+1) \Gamma \left(\frac 12n + \frac 12 \right)}{\Gamma(n+1) \Gamma \left(\frac 12n + x+ \frac 12 \right)}$$<|endoftext|> TITLE: Is it possible to change position of the segment so that it is parallel to Y axis? QUESTION [8 upvotes]: Let's consider an infinite Cartesian coordinate system. Assume that at every lattice point (i.e. both coordinates are integers) there is a spike. I have a line segment of integer length $n$. It is parallel to the X axis, as shown in the original figures (the images, not reproduced here, showed only a part of the coordinate system), and I want to make it parallel to the Y axis. What I can do is rotate and move this line segment, but without lifting it up. I can't change its shape, length, etc. Movement and rotation are limited by the spikes at the lattice points; the original post also illustrated an example of one possible move. My question: is it possible to make this line segment parallel to the Y-axis? I do believe that the Chinese Remainder Theorem has something to do with this. REPLY [3 votes]: Put an end of the stick at the point $O=(\sqrt 2,1/2)$.
Consider all points with integer coordinates which have distance from $O$ not larger than $n+1$ ($n$ is the length of the stick). There are finitely many such points. Order them by their angle with respect to $O$: $\alpha_1, \dots, \alpha_N$. Since $O$ has an irrational coordinate, all points have a different angle (not really important) and moreover if there is a point with angle $\alpha$ there is no point with angle $\alpha+\pi$ (this is used to simplify the algorithm). To make the stick rotate around the point $O$ it is enough to describe a "move" to pass across each pin. Let $\alpha$ be the angle of the pin $P$. Since the circle of radius $n+1$ centered at $O$ contains a finite number of pins, there is a small angular sector around $\alpha$ and around $\alpha+\pi$ which contains no pins apart from $P$. Also a small circle around $O$ contains no pins (not really needed). The stick can then move in the orange region of the picture (not reproduced here). You can easily translate it into the opposite sector, rotate a little, and translate back to pass the point $P$. A very similar problem: http://mathworld.wolfram.com/KakeyaNeedleProblem.html<|endoftext|> TITLE: Let $K$ be a normal subgroup of $G$, and $H$ a normal subgroup of $K$. If $G/H$ is abelian, prove that $G/K$ and $K/H$ are both abelian. QUESTION [6 upvotes]: I think this is not a duplicate. Let $K$ be a normal subgroup of $G$, and $H$ a normal subgroup of $K$. If $G/H$ is abelian, prove that $G/K$ and $K/H$ are both abelian. My attempt is to set $f:G/H\to G/K$ given by $f(Hx)=Kx$ for every $x\in G$, so $f(Hx\cdot Hy)=f(H(xy))=K(xy)=Kx\cdot Ky=f(Hx)\cdot f(Hy)$, so $f$ is a surjective homomorphism and $G/K$ is abelian (since surjective homomorphisms preserve commutativity). Now $G/H$ is abelian so $H(xy)=H(yx)$ for every $x,y\in G$, but every $x,y\in K$ are also in $G$, so $K/H$ is abelian. Is the proof right? I don't see any mistakes but I've never seen/thought there could be a homomorphism between two different quotient groups. REPLY [2 votes]: I assume that you forgot something: in order to give a meaning to $G/H$, $H$ must be normal in $G$. I did not see that in your post. A very nice and simple criterion: a factor group $G/N$ is abelian if and only if the commutator subgroup $G' \subseteq N$. I leave it to you to prove that. In your situation: $H \unlhd G$ and $H \subseteq K \unlhd G$, and apparently $G' \subseteq H$. Since $H \subseteq K$, it follows that $G' \subseteq K$ and hence $G/K$ is abelian.
Further, $K' \subseteq G' \subseteq H$, whence $K/H$ is also abelian by the criterion but now applied to $H \unlhd K$.<|endoftext|> TITLE: Evaluate: $\int_{0}^{\pi}\frac{\cos 2017x}{5-4\cos x}dx$ QUESTION [8 upvotes]: Evaluate: $\int\limits_{0}^{\pi}\dfrac{\cos 2017x}{5-4\cos x}~dx$ I thought of using some series but could not get it REPLY [14 votes]: $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} &\int_{0}^{\pi}{\cos\pars{2017x} \over 5 - 4\cos\pars{x}}\,\dd x = \left.\Re\int_{0}^{\pi}{z^{2017} \over 5 - 4\pars{z+1/z}/2} \,{\dd z \over \ic z}\right\vert_{\ z\ =\ \exp\pars{\ic x}} \\[5mm] = &\ \left.-\,{1 \over 2}\,\Im\int_{0}^{\pi}{z^{2017} \over z^{2} - \pars{5/2}z + 1} \,\dd z\,\right\vert_{\ z\ =\ \exp\pars{\ic x}} = \left.-\,{1 \over 2}\,\Im\int_{0}^{\pi}{z^{2017} \over \pars{z - 1/2}\pars{z - 2}}\,\dd z\,\right\vert_{\ z\ =\ \exp\pars{\ic x}} \\[5mm] = &\ {1 \over 2}\,\Im\lim_{\epsilon \to 0^{+}} \int_{\pi}^{0}{\pars{1/2 + \epsilon\expo{\ic\theta}}^{2017} \over \epsilon\expo{\ic\theta}\pars{1/2 + \epsilon\expo{\ic\theta} - 2}} \epsilon\expo{\ic\theta}\ic\,\dd\theta = {1 \over 2}\pars{-\pi}{\pars{1/2}^{2017} \over 1/2 - 2} \\[5mm] = &\ \bbx{\ds{{2^{-2017} \over 3}\,\pi}} \approx 6.9587 \times 10^{-608} \end{align}<|endoftext|> TITLE: Dimension of space of Jacobi fields QUESTION [6 upvotes]: Let $J$ be a Jacobi field along the geodesic $\gamma : [0,a] \to M$. Then $$\langle J(t),\gamma'(t) \rangle=\langle J'(0),\gamma'(0)\rangle t + \langle J(0),\gamma'(0)\rangle,$$ where $t \in [0,a]$. Suppose that $J(0)=0$. Then $\langle J'(0),\gamma'(0)\rangle = 0$ if and only if $\langle J,\gamma'\rangle \equiv 0$; in particular, the space of Jacobi fields $J$ with $J(0)=0$ and $\langle J,\gamma' \rangle(t) \equiv 0$ has dimension equal to $n-1$. I get that $J(0)=0$ implies that $\langle J(t),\gamma'(t) \rangle=\langle J'(0),\gamma'(0)\rangle t$, and from this the statement "$\langle J'(0),\gamma'(0)\rangle = 0$ if and only if $\langle J,\gamma'\rangle \equiv 0$" follows. However, how do we deduce that the dimension of the space of Jacobi fields is $n-1$? Addendum: do Carmo states also that: A Jacobi field is determined by its initial conditions $J(0)$, $\frac{DJ}{dt}(0)$. Indeed, let $e_1(t),\ldots,e_n(t)$ be parallel, orthonormal fields along $\gamma$. We shall write: $$J(t)=\sum_i f_i(t) e_i(t), \qquad a_{ij} = \langle R(\gamma'(t),e_i(t))\gamma'(t),e_j(t) \rangle,$$ $i,j=1,\ldots,n=\dim M$. Then $$ \frac{D^2 J}{dt^2} = \sum_i f_i''(t) e_i(t), $$ and \begin{align} R(\gamma',J)\gamma' &= \sum_j \langle R(\gamma',J)\gamma',e_j \rangle e_j \\ &= \sum_{ij} f_i \langle R(\gamma',e_i)\gamma',e_j \rangle e_j \\ &= \sum_{ij} f_i a_{ij} e_j. 
\end{align} Therefore, the Jacobi equation $\frac{D^2J}{dt^2}+R(\gamma'(t),J(t))\gamma'(t)=0$ is equivalent to the system $$f_j''(t)+\sum_i a_{ij}(t) f_i(t)=0,$$ $j=1,\ldots,n$, which is a linear system of the second order. Hence, given the initial conditions $J(0)$ and $\frac{DJ}{dt}(0)$, there exists a $C^\infty$ solution of the system, defined on $[0,a]$. There exist, therefore, $2n$ linearly independent Jacobi fields along $\gamma$. REPLY [3 votes]: If $J$ is a Jacobi field along a geodesic $c(t):=\exp_p\ tv$, $|v|=1$, with $J(0)=0$, then $$ J(t)=(d\exp_p)_{tv}\ tw$$ where $w=J'(0)$. If $w=v$, then $J(t)=t c'(t)$. If $\langle J,c'(t)\rangle=0$, then $w\perp v$ (cf. Gauss Lemma). So the choice of $w$ gives dimension $n-1$.<|endoftext|> TITLE: Homeomorphism with no fixed points permutes boundary components of twice punctured disk QUESTION [8 upvotes]: This is a problem from Bredon's 'Topology and Geometry'; it's not homework, just something I'm doing for general learning. The problem is as follows: Let $K = \{ (x,y) \in \mathbb R^2| x^2 + y^2 \leq 16, (x-2)^2 + y^2 \geq 1, (x+2)^2 + y^2 \geq 1 \}$. This is just a disk with two smaller disks removed from the interior. If $f: K \to K$ is a homeomorphism with no fixed points, show that $f$ must cyclically permute the three boundary components, and reverse orientations. Construct an example of such a map. For this problem, I thought I had worked out a map with the desired properties that realized this space as a kind of 'pair of pants' type object, but when I went to check with a more careful sketch, it turned out my map actually was not orientation reversing, and also had fixed points(!). My new idea is to take the pair of pants, try turning them inside out, permuting the holes, and then putting them back down in the plane, where a leg is now the torso, the torso a leg, and the second leg is now the first. I believe this map will do what I want, but since I can't write it down explicitly, it seems hopeless to check if it has fixed points or not. Actually, now with some more thinking, I'm worried this map may have fixed points too, in the 'crotch' of the pants. I know that once I produce a map, I can check if it's fixed point free or not with the Lefschetz-Hopf trace formula, if I can give a description of the induced map on homology (or just chain groups, by a theorem in Bredon). Since my map sends generators of the chain group to generators, we can actually just check that this has Lefschetz number different from $0$, and so it does have a fixed point. So that idea is dead in the water. OK, so I'm rather stuck on this problem, even after some new thoughts while writing this up. Maybe I should shoot for the general case first, and see if that's more insightful. Either way, some helpful hints would be appreciated. REPLY [4 votes]: View $K$ as homeomorphic to a sphere with three identical circular holes punched in it at equidistant points along the equator. Mirror everything across the equatorial plane and then turn the sphere by 120° around the axis.<|endoftext|> TITLE: Finding a limit - is my argument correct? QUESTION [6 upvotes]: The limit is: $$ \lim_{\lambda\to 0} \frac{\int_{\lambda}^{a}{\frac{\cos(x)}{x}dx}}{\ln\lambda}. $$ My argument is: First rewrite the integral: $$ \lim_{\lambda\to 0} \frac{\int_{0}^{a}{\frac{\cos(x)}{x}dx} - \int_{0}^{\lambda}{\frac{\cos(x)}{x}dx}}{\ln\lambda}. $$ Then use l'Hopital's rule. The first term on the top vanishes as it has no $\lambda$ dependence.
The second term is found by applying the fundamental theorem of calculus. So I get: $$ \lim_{\lambda\to 0} \frac{-{\frac{\cos(\lambda)}{\lambda}}}{\frac{1}{\lambda}} = \lim_{\lambda\to 0}{-\cos(\lambda)} = -1. $$ Are there any problems with my argument? REPLY [3 votes]: There are just two small problems. You cannot compute the two-sided limit, because $$ \int_{0}^{a}\frac{\cos x}{x}\,dx $$ does not converge. So, assuming $a>0$, the limit can only be computed for $\lambda\to0^+$. You have $$ \int_{\lambda}^a\frac{\cos x}{x}\,dx= -\int_a^{\lambda}\frac{\cos x}{x}\,dx $$ and the derivative is $$ -\frac{\cos\lambda}{\lambda} $$ without the need to split the integral (by the way, you chose the wrong way to split it, because $\int_0^a\frac{\cos x}{x}\,dx$ does not exist, as said above). In full detail, for $a>0$, $$ \lim_{\lambda\to0^+} \frac{\displaystyle\int_{\lambda}^a \frac{\cos x}{x}\,dx}{\ln\lambda}= \lim_{\lambda\to0^+} \frac{-\dfrac{\cos \lambda}{\lambda}}{\dfrac{1}{\lambda}}= \lim_{\lambda\to0^+}-\cos\lambda=-1 $$ Note that l’Hôpital can be applied to the forms $\dfrac{\text{whatever}}{\infty}$. However, it's also easy to see that $$ \lim_{\lambda\to0^+}\int_\lambda^a\frac{\cos x}{x}\,dx=\infty $$ because in the interval $[0,\pi/3]$ we have $\cos x\ge\frac{1}{2}$, so $$ \int_\lambda^a\frac{\cos x}{x}\,dx= \int_\lambda^{\pi/3}\frac{\cos x}{x}\,dx+ \int_{\pi/3}^a\frac{\cos x}{x}\,dx\ge \frac{1}{2}\int_\lambda^{\pi/3}\frac{1}{x}\,dx+ \int_{\pi/3}^a\frac{\cos x}{x}\,dx $$ so, by comparison, we get the required limit.<|endoftext|> TITLE: Show that $\int_0^1\frac{\ln(1+x)}{1+x^2}\mathrm dx=\frac{\pi}8\ln 2$ QUESTION [8 upvotes]: Show that $\int_0^1\frac{\ln(1+x)}{1+x^2}\mathrm dx=\frac{\pi}8\ln 2$ using the change of variable $x=\tan y$. Hint: $1+\tan x=\sqrt 2\sin(x+\pi/4)/\cos x$. With the change of variable suggested I get $$\int_0^1\frac{\ln(1+x)}{1+x^2}\mathrm dx=\int_0^{\pi/4}\ln(1+\tan y)\mathrm dy$$ but I don't know exactly what to do here or what to do with the identity $1+\tan x=\sqrt 2\sin(x+\pi/4)/\cos x$. I tried some changes of variable or integration by parts but nothing worked. I tried to write the logarithm as a series but the elements $\tan^k x$ are complicated to integrate. Some help will be appreciated, thank you. REPLY [4 votes]: $\displaystyle J=\int_0^1 \dfrac{\ln(1+x)}{1+x^2}dx$ Perform the change of variable $y=\dfrac{1-x}{1+x}$, $\begin{align}\displaystyle J&=\int_0^1 \dfrac{\ln\left(\tfrac{2}{1+x}\right)}{1+x^2}dx\\ &=\int_0^1 \dfrac{\ln 2}{1+x^2}dx-J\\ &=\dfrac{\pi \ln 2}{4}-J \end{align}$ Therefore, $2J=\dfrac{\pi \ln 2}{4}$ $\boxed{J=\dfrac{\pi \ln 2}{8}}$<|endoftext|> TITLE: Limit of $n$th root of $n$! QUESTION [27 upvotes]: I am asked to determine if a series converges or not: $$\displaystyle\sum\limits_{n=1}^{\infty} \frac{(2^n)n!}{(n^n)}$$ So I'm using the $n$th root test and came up with $\lim_{n \to {\infty}}\frac{2}{n}\times(\sqrt[n]{n!})$ I know that the limit of $\frac{2}{n}$ goes to $0$ when $n$ goes to infinity but what about the $(\sqrt[n]{n!})$? REPLY [9 votes]: Note that we do not need to actually evaluate the limit, we just need to find an upper bound.
Consider that, for $n>m$, $$ \frac{n!}{m!}\leq n^{n-m} $$ As such, if we let $n=6k-a$, where $0\leq a\leq5$, we can observe that $$ n!\leq (6k)!\leq \prod_{i=1}^6 (ik)^k=(720k^6)^k=\left (\frac{20}{6^4}\right)^k(6k)^{6k} $$ Therefore, $$ \sqrt[n]{n!}\leq\left(\frac{20}{6^4}\right)^{(n+a)/6n}(n+a)^{1+a/n} $$ and thus $$ \frac{2\sqrt[n]{n!}}{n}\leq2\left(\frac{20}{6^4}\right)^{1/6}\left(\frac{20}{6^4}\right)^{a/6n}(1+a/n)(n+a)^{a/n} $$ Now, as $a$ cannot be larger than 5, we can easily take the limit of each term as $n\to\infty$, to give $$ \lim_{n\to\infty}\frac{2\sqrt[n]{n!}}{n}\leq2\left(\frac{20}{6^4}\right)^{1/6}\approx 0.997932 $$ Therefore, as the limit is less than 1, it converges. Note that the $\lim$ in the final line isn't strictly correct notation, as we have not proven that the limit exists. That said, it captures the intent, that for sufficiently large $n$, the expression will be less than $0.997932$.<|endoftext|> TITLE: Example of a commutative noetherian ring with $1$ which is neither domain nor local and has a principal prime ideal of height $1.$ QUESTION [6 upvotes]: I am trying to construct an example of a ring satisfying the following. A commutative noetherian ring with $1$ which is neither a domain nor local and has a principal prime ideal of height $1.$ I know that a local noetherian ring having a height $1$ principal prime ideal is a domain. Actually I wanted to prove this without the local condition. I couldn't prove this, hence I am looking for a counterexample. I need some help. Thanks. REPLY [4 votes]: Let $k$ be a field and $A=k[x,y]/(xy)$. Then the ideal $(y-1)\subset A$ is a principal height $1$ prime. (Note that this ring is not local, and the non-localness is essential to the example in that if you localize at $(y-1)$ then the ring becomes a domain.) As a hint to what's going on here that can't happen in the local case, notice that $(y-1)x=-x$, so $x$ is divisible by $y-1$ arbitrarily many times. In a local Noetherian ring this would imply $x=0$ by the Krull intersection theorem. REPLY [4 votes]: Let $R$ be a noetherian ring with principal prime ideal $P$ of height $1$. Then $R\times R$ is Noetherian, non-local, not a domain, and still has a principal prime ideal of height $1$: $P\times R$.<|endoftext|> TITLE: Improper integral of natural log over a quadratic QUESTION [8 upvotes]: I need to evaluate $$\int\limits_0^{+\infty}\frac{\ln{x}}{x^2+x+1}\,\mathrm{d}x\,.$$ I don't know how to integrate this, and for the most part, I don't even think it is expressible in terms of elementary functions. In that case, how would I even manipulate the integral using some $u$-substitution to transform this into some integrable function? Or can this all be done without actual integration, and just some clever substitution to somehow find a multiple of this integral's value? REPLY [12 votes]: Substitute $u=\log(x),$ giving $$ \int_0^\infty \frac{\log(x)}{x^2+x+1}dx = \int_{-\infty}^\infty \frac{u}{e^u+1+e^{-u}}du = 0$$ since the integrand is odd. (And the integral exists since the integrand decays exponentially in both directions.) REPLY [10 votes]: Considering $$\int_{0}^{\infty}\frac{\ln{x}}{x^2+x+1}\,dx=\int_{0}^{1}\frac{\ln{x}}{x^2+x+1}\,dx+\int_{1}^{\infty}\frac{\ln{x}}{x^2+x+1}\,dx$$ For the second integral, change variable $x=\frac 1y$, simplify and admire!
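A quick numerical check of this cancellation (my sketch, assuming scipy) is reassuring:

```python
# Numerical sanity check (sketch, assumes scipy): the two halves cancel.
from scipy.integrate import quad
import numpy as np

f = lambda x: np.log(x) / (x**2 + x + 1)
left, _ = quad(f, 0, 1)        # negative contribution from (0, 1)
right, _ = quad(f, 1, np.inf)  # positive contribution from (1, ∞)
print(left + right)            # ~0 up to quadrature error
```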
REPLY [7 votes]: First of all, the integral is convergent because: $\displaystyle{\frac{\ln(x)}{x^2+x+1}\underset{0}{\sim}\ln(x)}$ $\displaystyle{\frac{\ln(x)}{x^2+x+1}\underset{+\infty}{=}o\left(\frac1{x^{3/2}}\right)}$ Now consider the change of variable $\displaystyle{t=\frac 1x}$. You will get: $$\int_1^\infty\frac{\ln(x)}{x^2+x+1}\,dx=\int_1^0\frac{-\ln(t)}{\frac1{t^2}+\frac 1t+1}\frac{-dt}{t^2}=-\int_0^1\frac{\ln(t)}{t^2+t+1}\,dt$$ This proves that: $$\boxed{\int_0^{+\infty}\frac{\ln(x)}{x^2+x+1}\,dx=0}$$ It should be added that, more generally (and for the same reasons): $$\forall a\in(-2,+\infty),\,\int_0^{+\infty}\frac{\ln(x)}{x^2+ax+1}\,dx=0$$<|endoftext|> TITLE: finding $ \int^{\infty}_{0}\frac{\ln x}{x^2+6x+9}dx$ QUESTION [5 upvotes]: Finding $\displaystyle \int^{\infty}_{0}\frac{\ln x}{x^2+6x+9}dx$. Attempt: let $\displaystyle I(a) = \int^{\infty}_{0}\frac{\ln (ax)}{(x+3)^2}dx, a>0$ $\displaystyle I'(a) = \int^{\infty}_{0}\frac{x}{ax(x+3)^2}dx = \frac{1}{a}\int^{\infty}_{0}\frac{1}{(x+3)^2}dx = -\frac{1}{a}\bigg(\frac{1}{x+3}\bigg)\bigg|_{0}^{\infty} = \frac{1}{3a}$ so $\displaystyle I(a) = \frac{\ln(a)}{3}+C$ Could someone help me solve it from there? Thanks in advance. REPLY [2 votes]: If $\displaystyle I=\int\limits_0^\infty \dfrac{\ln x}{(x+a)^2}\; dx$ for some $a\gt 0$, then the substitution $xy=a^2$ gives $\displaystyle I=\int\limits_0^\infty \dfrac{2\ln(a)-\ln(y)}{(y+a)^2}\; dy\\2I=\displaystyle 2\ln a\int\limits_0^\infty \dfrac{dy}{(y+a)^2} = \dfrac{2\ln a}{a}$ Therefore, $\displaystyle \int\limits_0^\infty \dfrac{\ln x}{(x+a)^2}\; dx = \dfrac{\ln a}{a}$ For this one, the answer is $\boxed{\dfrac{\ln 3}{3}}$<|endoftext|> TITLE: What is a conservative field? QUESTION [7 upvotes]: My understanding of a conservative field is that it is any vector field that satisfies any of these three equivalent conditions: $$\oint_C\vec{F}.d\vec{s}=0$$ for any closed path $C$ in the domain, $$\vec{F}=\vec{\nabla}\phi$$ for some scalar field $\phi$ defined over the domain, and $$\vec{\nabla}\times\vec{F}=\vec{0}$$ at every point in the domain. However, our teacher told us today that a conservative field and a field derived from a potential are not the same thing. In my research on the issue I found this wolfram page that states that the last condition is not equivalent to the others if the domain $D$ is not simply connected. Can anyone provide me with an example of this case? And in that case, what becomes the definition of a conservative field? REPLY [3 votes]: Consider $\mathbf{F} = (-y/(x^2 + y^2)) \mathbf{e}_x + (x/(x^2 + y^2)) \mathbf{e}_y = (1/r) \mathbf{e}_\theta$ on the multiply connected domain $D =\mathbb{R}^2 \setminus \{(0,0)\}$. Note that $\mathbf{F}$ is the gradient of a function $\phi$ in $\hat{D} =D \setminus \{(r,\theta): \theta = 0 \}$ but not throughout $D$. In $\hat{D}$ we have for $\phi(r,\theta) = \theta$ $$\mathbf{F}(r,\theta) = \nabla \phi = \frac{\partial \theta}{\partial r}\mathbf{e}_r + \frac{1}{r}\frac{\partial \theta}{\partial \theta} \mathbf{e}_\theta + \frac{\partial \theta}{\partial z} \mathbf{e}_z =\frac{1}{r}\mathbf{e}_\theta .$$ This field has zero curl throughout $D$, i.e., $$ \nabla \times \mathbf{F} = \frac{1}{r} \frac{\partial}{\partial r}\left(r \frac{1}{r} \right)\mathbf{e_z} = 0,$$ but it is non-conservative.
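To see this concretely, here is a quick numerical check (a minimal sketch, assuming numpy is available) that the circulation around the unit circle is $2\pi$ rather than $0$:

import numpy as np

# Approximate the line integral of F = (-y, x)/(x^2+y^2) around the unit circle.
t = np.linspace(0.0, 2.0*np.pi, 200001)
x, y = np.cos(t), np.sin(t)
Fx, Fy = -y/(x**2 + y**2), x/(x**2 + y**2)
circulation = np.trapz(Fx*np.gradient(x, t) + Fy*np.gradient(y, t), t)
print(circulation, 2*np.pi)  # both ~ 6.28318...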
Around any circular contour $C$ centered at the origin, we have $$\oint_C \mathbf{F} \cdot d\mathbf{s} = \int_0^{2\pi} \frac{1}r r\, d\theta = 2 \pi \neq 0.$$ It is impossible to satisfy both $\mathbf{F} = \nabla \phi$ where $\phi$ is continuous and differentiable and $\oint_C \mathbf{F} \cdot \, d \mathbf{s} \neq 0.$<|endoftext|> TITLE: How many resistors are needed? QUESTION [44 upvotes]: I was told about the following problem. Suppose you have an infinite number of resistors, each of value $1\Omega$. The question is: What is the minimal number of $1\Omega$-resistors needed to construct a given rational resistance $R$, i.e. $R\in \mathbb{Q}_+$? Note that you can connect two resistors ($R_1$ and $R_2$) in two ways: parallel and series. The resulting resistances ($R_p$ and $R_s$ respectively) in those cases are: $$R_s = R_1 + R_2$$ $$R_p = R_1 \oplus R_2 = \frac{1}{\frac{1}{R_1}+\frac{1}{R_2}},$$ where we introduced the $\oplus$-operation in order to simplify notation. NOTE: only schemes which can be written in the form: $$(... 1 + (1\oplus 1) ... )$$ are allowed. For example, this is not allowed: If you know how to formulate this rule better, please say. For example, you can use the Euclidean algorithm. $$\frac56 = \frac{1}{\frac65}=\frac{1}{1+\frac15} = 1 \oplus 5,$$ so you need 6 resistors because $$5 = \sum_{k=1}^{5}1$$ But this is not the minimal number of resistors because, for example, $$\frac56 = \frac12 + \frac13 = (1\oplus 1)+(1\oplus 1\oplus 1),$$ where it is enough to use only 5 resistors. I think the Euclidean algorithm often solves the problem. Moreover, it suffices to consider only one "half" of $\mathbb{Q}_+$; for the other half it is enough to replace $+$'s by $\oplus$'s and vice versa. The one who told me about this problem adhered to the following notation: $$[1,1]\quad \text{for}\quad 1\oplus 1,$$ $$(1,1)\quad \text{for}\quad 1 + 1,$$ so our previous example looks like $$[1,(1,1,1,1,1)]\quad \text{and}\quad ([1,1,1],[1,1]).$$ REPLY [6 votes]: Since the other day I've been playing with resistors, and I made a program similar to Martin's to enumerate all the possibilities that ensure the minimum number of resistors. You can download the result file there $\to$ All rationals for $n\le 12$ Also from this list, I've drawn a picture of the rationals that share the same $n$. It is quite impressive to see how the bubble expands, but the border of this bubble seems to have an asymptotic behaviour already. As you can see it is hollow, because many rationals are not reached for such a low $n$; eventually all the white is to be filled with a higher-rank color. For a better rendering you can directly download the XPM file.<|endoftext|> TITLE: Is there any way to Integrate this function? QUESTION [6 upvotes]: After two changes of variables and an $x = \log(u)$ transformation, I have this integral... $$\int_0^{\infty } \frac{\alpha e^x}{\lambda-e^x \lambda+e^{\alpha x}} \, dx$$ What are some recommendations on how to integrate this? NOTE: The original integral looked like: $$\int_{1 }^{\infty } \frac{\alpha \left(\frac{1 }{x}\right)^{\alpha }}{1-\lambda (x-1 ) \left(\frac{1 }{x}\right)^{\alpha }} \, dx$$ and for the integral to converge $\alpha > 1$. Finally, when $\alpha = 2$, I did get this expression...
$$\frac{8 \tan ^{-1}\left(\frac{\sqrt{\lambda }}{\sqrt{4-\lambda }}\right)}{\sqrt{(4-\lambda ) \lambda }}$$ Thanks, REPLY [2 votes]: For integer $\alpha>1$, I got the following: Take the Mellin transform of the integrand with respect to $\lambda$, $$ \int_0^\infty \frac{\alpha e^x}{\lambda-e^x \lambda+e^{\alpha x}} \lambda^{s-1} \; d\lambda = \frac{\alpha e^{x-\alpha x}}{(-e^{-\alpha x}(e^x-1))^s}\pi\csc(\pi s) = I_{\alpha}(s,x) $$ now integrate over $x$ $$ \int_0^\infty I_{\alpha}(s,x) \; dx = (-1)^{-s}\alpha\pi\csc(\pi s)\frac{\Gamma(1-s)\Gamma(\alpha-1+s-\alpha s)}{\Gamma(\alpha-\alpha s)} $$ now take the inverse Mellin transform with respect to $s$ to get something that depends on $\alpha$ and $\lambda$, $$ G_\alpha(\lambda) = \frac{1}{2 \pi i}\int_{c-i \infty}^{c + i \infty} \lambda^{-s}(-1)^{-s}\alpha\pi\csc(\pi s)\frac{\Gamma(1-s)\Gamma(\alpha-1+s-\alpha s)}{\Gamma(\alpha-\alpha s)}\;ds $$ We can use the Mathematica command InverseMellinTransform[((-1)^-s a \[Pi] Csc[\[Pi] s] Gamma[1 - s] Gamma[-1 + a + s - a s])/Gamma[a - a s] /. a -> 2, s, l] where the a -> X part can be changed for various values of $\alpha$. This reproduces your result for $\alpha=2$, but that obscures the hypergeometric nature of the result. It also gives results for higher values of $\alpha$ as hypergeometric functions weighted by products of gamma functions. The results seem to have a pattern $$ G_4(\lambda)=\frac{3^2}{2^3}\sqrt{\frac{3}{2}}\frac{\Gamma(\frac{4}{3})\Gamma(\frac{5}{3})}{\Gamma(\frac{5}{4})\Gamma(\frac{7}{4})}\;_4F_3\left(\frac{3}{3},\frac{3}{3},\frac{4}{3},\frac{5}{3}\bigg|\frac{5}{4},\frac{6}{4},\frac{7}{4}\bigg|\frac{3^3}{4^4}\lambda\right) $$ $$ G_5(\lambda)=\pi\frac{4^3}{5^3}\sqrt{\frac{2}{5}}\frac{\Gamma(\frac{5}{4})\Gamma(\frac{7}{4})}{\Gamma(\frac{6}{5})\Gamma(\frac{7}{5})\Gamma(\frac{8}{5})\Gamma(\frac{9}{5})}\;_5F_4\left(\frac{4}{4},\frac{4}{4},\frac{5}{4},\frac{6}{4},\frac{7}{4}\bigg|\frac{6}{5},\frac{7}{5},\frac{8}{5},\frac{9}{5}\bigg|\frac{4^4}{5^5}\lambda\right) $$ In general it appears that $$ G_n=C_n\; _nF_{n-1}\left(\frac{n-1}{n-1},\frac{n-1}{n-1},\frac{n}{n-1},\cdots,\frac{2n-3}{n-1}\bigg|\frac{n+1}{n},\frac{n+2}{n},\cdots,\frac{2n-1}{n}\bigg|\frac{(n-1)^{n-1}\lambda}{n^n}\right) $$ where $$ C_n=\sqrt{\frac{2\pi(n-1)}{n}}\frac{(n-1)^{n-2}}{n^{n-2}}\frac{\prod_{k=0}^{n-2} \Gamma(\frac{n+k}{n-1})}{\prod_{k=1}^{n-1} \Gamma(\frac{n+k}{n})} $$ where we see your result in $C_2=2$ and $$ _2F_1\left(1,1\bigg|\frac{3}{2}\bigg|\frac{\lambda}{4}\right) = \frac{4 \sin^{-1}(\sqrt{\lambda}/2)}{\sqrt{(4-\lambda)\lambda}} $$ seems to check out for a few numerical examples.<|endoftext|> TITLE: Measurability of stopping times QUESTION [6 upvotes]: If $\tau$, $\rho$ are stopping times, then it is easily seen that $\tau+\rho$ is also a stopping time. However $\tau-\rho$ and $\tau\rho$ are not necessarily stopping times, as that would require a "peek into the future", but I am not sure why this is so and would appreciate any feedback to facilitate my understanding. For the difference case, I think of it like this: if $(X,\mathscr{F})$ is a measurable space and $f, g :(X,\mathscr{F})\rightarrow([0,\infty),\mathscr{B}([0,\infty)))$ are measurable maps, and for some $x\in X$ we have $f(x)-g(x)<0$, then I would say it does not make sense for $f-g$ to be measurable, as it maps some elements outside of $[0,\infty)$, and $(f-g)^{-1}(\mathscr{B}([0,\infty)))$ is not a sub-$\sigma$-algebra of $\mathscr{F}$. Is my reasoning correct?
So in the case of a stopping time $\tau$, we have the added requirement that $1_{\{\tau\leq t\}}$ be adapted to the given filtration. So my question is: why does the difference of stopping times require a "peek into the future"? I have the same qualms about the product case. However in the case of the reciprocal, I think I see why: if we take $t=1/2$, then $\{1/\tau\leq 1/2\}=\{\tau\geq2\}\in\mathscr{F}_2$ and we have no way of knowing if the event is in $\mathscr{F}_{1/2}$. So any comments and answers would be really appreciated. REPLY [5 votes]: Intuitively, one should think about a stopping time in conjunction with a process. It's nice to think about a process as a stock price, and a stopping time as a strategy for deciding when to sell. For instance, imagine a stock whose price at time $t=0$ is 100. Let $\tau$ be the first time the stock price hits 200 (i.e. $\tau = \inf\{t : X_t = 200\}$). Similarly let $\rho$ be the first time the price reaches 50. So $\tau$ itself is a reasonable strategy: "sell when the price reaches or exceeds 200". You could actually do that, so $\tau$ is a stopping time. $\tau+5$ is also a reasonable strategy: "Wait until the price hits 200, wait 5 more days, then sell." So $\tau+5$ is also a stopping time. $\tau-5$ is not a stopping time: "Sell five days before the stock hits 200." If the stock hits 200 on day 8, you won't know that until day 8, at which point you'll see you should have sold on day 3, but by then it's too late. So $\tau+\rho$ is a stopping time: "Wait until the stock has hit both prices, add the times, and sell at that time." If it hits 200 on day 8 and 50 on day 9, you are supposed to wait until day 17 to sell. You could actually do that. It's a stopping time, though a stupid one because there is no particular meaning to adding absolute times to get an absolute time. $\tau - \rho$ is not a stopping time for reasons similar to $\tau-5$ (though it is still stupid because subtracting absolute times isn't meaningful as an absolute time). For $\tau\rho$, consider what happens if $\tau = 1/2$ and $\rho = 3/4$. (And it's even stupider because multiplying two times doesn't give a time at all; the units are wrong.)<|endoftext|> TITLE: Monotone convex functions QUESTION [5 upvotes]: Let $f$ be a real function defined on $[1, +\infty)$ and convex from some point on. Is it true that $f$ is monotone from some point on? REPLY [5 votes]: For $x_1 < x_2 < x_3$, from the definition of a convex function, $$\dfrac {f \left({x_2}\right) - f \left({x_1}\right)} {x_2 - x_1} \le \dfrac {f \left({x_3}\right) - f \left({x_2}\right)} {x_3 - x_2} $$ so that $$f \left({x_3}\right) - f \left({x_2}\right) \ge \dfrac {(x_3 - x_2)(f \left({x_2}\right) - f \left({x_1}\right))} {x_2 - x_1} $$ So if $f \left({x_2}\right) \ge f \left({x_1}\right)$ then $f \left({x_3}\right) \ge f \left({x_2}\right)$. If there are no such $x_1, x_2$, then your function is monotonically decreasing.<|endoftext|> TITLE: To show $\mathrm{ker} f=\{0\}$ for linear mapping $f$. QUESTION [6 upvotes]: Let $V$ be a vector space over a field $F$ with basis $\{e_1,e_2,\ldots,e_n\}$. Let $f$ be a linear mapping from $V$ to $V$ such that $f(e_1) =e_2,\ldots,f(e_n)=e_1$. Show that $\mathrm{ker} f=\{0\}$. Also find $f^{-1}$. I just know that $\mathrm{ker} f=\{0\}$ iff $f$ is 1-1. So is it enough to verify the definition of 1-1 directly? The inverse mapping will be defined as $f^{-1}(e_1) =e_n, f^{-1}(e_2)=e_1,...$ Am I right?
REPLY [4 votes]: Using the rank-nullity theorem it is easy to prove that if $V$ is a finite dimensional vector space and $f\colon V\to V$ is a linear map, then $f$ is injective (or 1-1) if and only if it is surjective. Prove this, and verify that your map is surjective. The inverse function is correct. REPLY [2 votes]: Recall that for finite dimensional vector spaces you have $$ \dim V = \dim \operatorname{Im} f + \dim \ker f, $$ and by the definition of $f$ you know that $$ \operatorname{sp}\{ e_1, e_2,\ldots, e_n \} \subseteq \operatorname{Im} f, $$ thus $\dim \operatorname{Im} f= n$, therefore $\dim \ker f = 0$, so $\ker f = \{0\}$.<|endoftext|> TITLE: Let $f$ be an entire function and $L$ a line in $\mathbb{C}$ such that $f(\mathbb{C})\cap L=\emptyset$. Show that $f$ is constant function. QUESTION [6 upvotes]: Let $f$ be an entire function and $L$ a line in $\mathbb{C}$ such that $f(\mathbb{C})\cap L=\emptyset$. Show that $f$ is a constant function. If $f$ is not constant then $f(\mathbb{C})$ is a dense set in $\mathbb{C}$, but how can I use the fact that the line does not intersect the image set? REPLY [2 votes]: As $f$ is entire and non-constant, $f$ takes all values with at most one exception (Little Picard's theorem). But $f(\mathbb C)\cap L=\emptyset$ implies $f$ omits infinitely many values of $\mathbb C$, so $f$ must be constant.<|endoftext|> TITLE: Evaluating $\int_0^1 \frac{\ln^m (1+x)\ln^n x}{x}\; dx$ for $m,n\in\mathbb{N}$ QUESTION [8 upvotes]: Evaluate $\displaystyle \int\limits_0^1 \dfrac{\ln^m (1+x)\ln^n x}{x}\; dx$ for $m,n\in\mathbb{N}$ I was wondering if the above has some kind of closed form; here some of the special cases have been discussed, but this one is really fascinating. I guess there's no general Taylor expansion for $\ln^m (1+x)$ and so transforming it into a series wouldn't be that easy. REPLY [3 votes]: Given the integral $$I(n,p) = \int\limits_0^1 \dfrac{ \ln^{n-1}(x)\ln^p (1+x)}{x}\; dx$$ which is notation consistent with Nielsen polylogs, closed forms in terms of ordinary polylogarithms are known only for the following cases, $$I(1,p) \\ I(n,1) \\ I(2k+1,2) \\ I(2,2) \\ I(2,3)$$ and no more. See this more general post on Nielsen polylogs.<|endoftext|> TITLE: Solving for the CDF of the Geometric Probability Distribution QUESTION [5 upvotes]: So I am trying to find the CDF of the Geometric distribution whose PMF is defined as $$P(X=k) = (1-p)^{k-1}p$$ where $X$ is the number of trials up to and including the first success. Now attempting to find the general CDF, I first wrote out a few terms of the CDF: $$P(X\le 1) = p \\P(X\le 2) = p(1-p) + p \\ P(X\le 3) = p(1-p)^2 + p(1-p) + p\\ \ldots \\ P(X\le k) = p\sum\limits_{i=0}^{k-1} (1-p)^i$$ Now I know this last expression has to tend to $1$ as $k\to\infty$: $$\lim_{k\to\infty} p\sum\limits_{i=0}^{k-1} (1-p)^i = 1 $$ Now I am aware that the CDF is supposed to be $$F(k) = 1-(1-p)^k$$ What I am trying to figure out is how to go from what I have to the final solution. Any hints or ideas? Thanks REPLY [12 votes]: The CDF is defined as $$ F(k)=P(X\leq k)=\sum_{k'=1}^k P(X=k')=\sum_{k'=1}^k p (1-p)^{k'-1}=1-(1-p)^k\ , $$ using a finite geometric sum.<|endoftext|> TITLE: Question about $\lim_{n\to\infty} n|\sin n|$ QUESTION [8 upvotes]: I have a question regarding this limit (of a sequence): $$\lim_{n\to \infty} n|\sin(n)|$$ Why isn't it infinite? The way I thought about this problem is like this: $|\sin(n)|$ is always positive, and $n$ tends to infinity, so shouldn't the whole limit go to infinity? What is the right way to solve this and why is my idea wrong?
REPLY [7 votes]: The limit of the sequence $\{n\left|\sin n\right|\}_{n\geq 0}$ as $n\to +\infty$ does not exist. Obviously $\left|\sin n\right|$ is arbitrarily close to $1$ for infinitely many natural numbers, making the $\limsup=+\infty$. On the other hand, if $\frac{p_m}{q_m}$ is a convergent of the continued fraction of $\pi$ we have $$ \left|p_m -\pi q_m\right|\leq \frac{1}{q_m} $$ and since $\sin(x)$ is a Lipschitz continuous function, the $\liminf$ is finite, by considering $n=p_m$.<|endoftext|> TITLE: Collatz divide by -2 instead QUESTION [11 upvotes]: I've been toying around with the Collatz conjecture for a while, and in an effort to extend it to the negative integers I tried dividing by $-2$ instead of by $2$. The new iteratively applied function is then: $f(n) = \left\{ \begin{array}{ll} -\frac{n}{2} & \text{if n is even} \\ 3n+1 & \text{if n is odd} \end{array} \right.$ A couple of examples: $1 \to 4 \to -2 \to 1 \to \dots$ $2 \to -1 \to -2 \to 1 \to \dots$ $-6 \to 3 \to 10 \to -5 \to -14 \to 7 \to 22 \to -11 \to -32 \to 16 \to -8 \to 4 \to -2 \to 1 \to \dots$ I have tested this with a quick python script, and it seems to eventually reach $1$ for all $n$ between $-10^9$ and $10^9$, not including $0$. Is there a relation between this function and the Collatz conjecture? In particular, would this sequence's convergence to $1$ follow from the Collatz conjecture being true? REPLY [6 votes]: A note on how to test for the existence of cycles. Consider the notation for one (compressed) step, where we can reduce the discussion to that of odd numbers only. Also, numbers divisible by 3 need not be considered, because they cannot be the result of one transformation from such an odd value $a$ to $b$. Moreover, unlike in the Collatz problem, we can have positive and/or negative values under consideration. The definition of the "one-step-transformation" is: $$ b= {3a + 1\over (-2)^A } \tag 1$$ where the exponent $A$ is such that the result $b$ is an odd integer again. 1) One-step cycle. If we have a one-step cycle then this means $b=a$ and we can rearrange: $$ a= {3a + 1\over (-2)^A } \\ (-2)^A = {3a+1 \over a} $$ $$ (-2)^A = (3 + {1 \over a}) \tag 2 $$ Obviously the rhs can assume only two integer values, which are perfect powers of $2$, namely $a=1 \implies 3+1/a=4 , A=2$ and $a=-1 \implies 3+1/a=2 , A=1$. The first value $a=1$ gives a cycle, because $(-2)^2 = 3+1 $, but not the second one, because $ (-2)^1 \ne 3-1 $. So we know: there exists exactly one one-step cycle, using $a=1$. 2) Two-step cycle. Consider now a two-step cycle. This means $$ b= {3a + 1\over (-2)^A } \qquad a = {3b + 1\over (-2)^B } \tag 3$$ To test whether such a cycle can exist, we take the product of both the lhs and rhs to arrive at $$ a \cdot b= {3a + 1\over (-2)^A } \cdot {3b + 1\over (-2)^B } $$ and rearranging the denominators and the lhs gives $$ (-2)^{A+B} = (3 + {1\over a} ) \cdot ( 3+ { 1\over b } ) \tag 4$$ Because $a=1$ already gives the one-step cycle, we can assume $|a|$ is at least $5$, and because $b \ne a$ we must have $|b|$ at least $7$.
We see that on the rhs we can have at minimum $(3-1/5)(3-1/7) = 8 $ and at maximum $(3+1/5)(3+1/7) = 10 {2 \over 35} $ The only perfect power of 2 in this interval is $8$, so we must have $$ (-2)^S \overset?= 8 = (3-1/5)(3-1/7) $$ (Remark: I always use $S$ for the sum of all involved exponents, so in this case we have $S=A+B$.) The only solution allowing the equality in absolute values would be $S=3$, but then $(-2)^3 = -8 \ne 8$, so this is no solution, and we know that no 2-step cycle exists. 3) Generalization. I leave the obvious generalization to you; with this method one can disprove a nice multitude of few-step cycles with little effort. Some of them fail because no perfect power of 2 is near the relevant products of $3 + 1/a$, immediately forcing $a,b,c,\ldots=1$ (which again means the one-step cycle); for the disproof of the others one must test sets of small (in absolute value) candidates for $a,b,c,\ldots$, and since $a,b,c,\ldots$ must be small, the search radius is not big. 4) Hypothesis. In all my studies on generalizations of the Collatz problem, I only found a) few cycles, b) short cycles, c) cycles on small elements $a,b,c,\ldots$, and my handful of tests combined with your exhaustive search makes me quite confident that indeed no nontrivial cycle exists. Remark: The convergents of the continued fraction of $\log(3)/ \log(2)$ give values for $N$ (indicating the number of steps and the exponent of the power of 3 involved) and $S$ (indicating the sum of all exponents $A,B,C,\ldots$ involved, and thus the exponent of the highest power of 2 involved, where we shall have $2^S \gt 3^N \gt 2^{S-1}$ or $2^S \lt 3^N \lt 2^{S+1}$ depending on the signs of the $a,b,c,d,\ldots$). Given that your exhaustive search assures that all $a,b,c,\ldots$ must be larger than $10^6$, my heuristic (using a rather simple procedure in Pari/GP) shows that the method above allows one to disprove all cycles of lengths [updated] $N<2966$. Update: some heuristic data. I have a routine for finding the upper bound for $a_1$ in assumed cycles of length $N$. That means the smallest element of a cycle must be smaller than or equal to $a_1$. Assuming the next value $a_1'$ as the smallest value of a cycle means that in the required equation $(-2)^S \overset?= (3+1/a_1)(3+1/a_2)\cdots(3+1/a_N)$, with $N$ parentheses, the rhs is already too small to match the lhs. Reworded: if we already know (from exhaustive search) that all numbers smaller than or equal to some upper bound $X$ go down to 1 (and are thus not part of a nontrivial cycle), then a cycle of length $N$ whose required $a_1 \le X$ cannot exist. ($a_1'$ is the next possible number.) The following quick-and-dirty table actually uses $2^S$ instead of $(-2)^S$, but the logic is the same.
Only cycles of length $N \le 1000$ with $a_1 \gt 20000$ are documented; all other lengths $N$ require far smaller minimal values $a_1$.

 diff(N)     N       a1   -   a1'        rhs is           rhs is          S
                                         higher than S    lower than S
 -----------------------------------------------------------------
           253    26735 - 26737 :    401.000000445    400.999999949    401
   53      306    99323 - 99325 :    485.000000022    484.999999978    485
  200      506    26363 - 26365 :    802.000000167    801.999999174    802
   53      559    44255 - 44257 :    886.000000272    885.999999876    886
   53      612    98867 - 98869 :    970.000000018    969.999999929    970
  147      759    25991 - 25993 :   1203.00000092    1202.99999943    1203
   53      812    36163 - 36167 :   1287.00000072    1286.99999988    1287
   53      865    54647 - 54649 :   1371.00000023    1370.99999983    1371
   53      918    98411 - 98413 :   1455.00000005    1454.99999992    1455
   41      959    20593 - 20597 :   1520.00000163    1519.99999877    1520

Here the first rhs in the generalized formula $(4)$ (having $N$ parentheses) is computed based on $a_1$ and is larger than or equal to $2^S$, and the next rhs is computed based on $a_1' \gt a_1$ and is too small to arrive at $2^S$. Result: given $X=1\,000\,000$ we see that all $a_1 \lt X$, and thus none of the cycles of the shown lengths can exist. [Appendix] Here are the first few cycle-lengths $N$ which allow the smallest element $a_1$ to be larger than $X=1\,000\,000$, so that the displayed cycle-lengths are not disproven using the upper bound from your exhaustive search alone.

    N          a1    -    a1'         S       f(a1)          f(a1')     diff(N)
 -----------------------------------------------------------------------------
  2966     1161955 -  1161959 :     4701    0.000000002   -0.000000001    2966
  3631    >1489099 - >1489103 :     5755    0.000008467    0.000008465     665
  4296    >1487105 - >1487107 :     6809    0.000286350    0.000286347     665
  4961    >1485109 - >1485113 :     7863    0.000564521    0.000564518     665
  5267     1001761 -  1001765 :     8348    0.000000006   -0.000000002     306
  5626    >1483115 - >1483117 :     8917    0.000842982    0.000842978     359
  5932     1157525 -  1157527 :     9402    0.000000002   -0.000000004     306
   ...         ...        ...        ...         ...            ...        ...

The term "f(a1)" means the following: let $x$ be $\log_2$ of the product on the generalized rhs of equation $(4)$ where the smallest/first element is $a_1$. Then "f(a1)" is the deviation of $x$ from $S$: $x-S$.<|endoftext|> TITLE: If $a+b=2$ so $a^a+b^b+3\sqrt[3]{a^2b^2}\geq5$ QUESTION [6 upvotes]: Let $a$ and $b$ be positive numbers such that $a+b=2$. Prove that: $$a^a+b^b+3\sqrt[3]{a^2b^2}\geq5$$ My attempt: it is easy to show that $x^x\geq\frac{x^3-x^2+x+1}{2}$ for all $x>0$, but $$\frac{a^3-a^2+a+1}{2}+\frac{b^3-b^2+b+1}{2}+3\sqrt[3]{a^2b^2}\geq5$$ is wrong for $a\rightarrow0^+$ REPLY [3 votes]: We will show that if $x \in (-1,1)$, then $$(1+x)^{1+x}+(1-x)^{1-x}+3 \left(1-x^2\right)^{2/3} \geq 5.$$ First we will prove that the inequality holds for all $x \in [-\frac{9}{10},\frac{9}{10}]$. Lemma 1.1. If $x \in (-1,1)$, then $(1+x)^{1+x}\geq \frac{1}{12}x^5+\frac{1}{3}x^4+\frac{1}{2}x^3+x^2+x+1$. Proof. For all $x \in (-1,1)$ we have \begin{align*} &\frac{1}{12}x^5+\frac{1}{3}x^4+\frac{1}{2}x^3+x^2+x+1 > 0 \\[7pt] &\impliedby \frac{1}{12}x^5+\frac{1}{3}x^4 > 0 \land \frac{1}{2}x^3 + x^2 > 0 \land x+1 > 0 \\[7pt] &\iff x^4 (x+4) > 0 \land x^2 (x+2) > 0 \land x+1 > 0, \end{align*} thus we can define $\gamma \colon (-1,1) \rightarrow \mathbb{R}$ by $$\gamma(x) = (1+x) \log (1+x) - \log\left(\frac{1}{12}x^5+\frac{1}{3}x^4+\frac{1}{2}x^3+x^2+x+1\right).$$ We show $\gamma \geq 0$.
For all $x \in (-1,1)$ we have \begin{align*} &\gamma''(x) \geq 0 \\[7pt] &\iff \frac{x^4 \left(x^6+13 x^5+65 x^4+192 x^3+364 x^2+468 x+324\right)}{(x+1) \left(x^5+4 x^4+6 x^3+12 x^2+12 x+12\right)^2} \geq 0 \\[7pt] &\impliedby x^6+13 x^5+65 x^4+192 x^3+364 x^2+468 x+324 \geq 0 \\[7pt] &\impliedby x^6+13 x^5+65 x^4 \geq 0 \land 192 x^3+192 x^2 \geq 0 \land 172 x^2+468 x+324 \geq 0 \\[7pt] &\impliedby\operatorname{Discr} \left(x^2+13 x+65\right) = -91 \\[7pt] &\qquad\land x^2(x+1) \geq 0 \\[7pt] &\qquad\land \operatorname{Discr} \left(172 x^2+468 x+324\right) = -3888, \end{align*} therefore $\gamma'$ is increasing. Since $\gamma'(0) = 0$, we know that $\gamma'(x) \leq 0$ for all $x \in (-1,0]$ and $\gamma'(x) \geq 0$ for all $x \in [0,1).$ Thus $\gamma$ is decreasing on $(-1,0]$ and increasing on $[0,1)$. Since $\gamma(0) = 0$, we have $\gamma \geq 0$ and we are done. $$\tag*{$\Box$}$$ Lemma 1.2. If $x \in [-\frac{9}{10}, \frac{9}{10}]$ then $$8 x^8+72 x^6+108 x^4-432 x^2+243 \geq 0.$$ Proof. For all $x \in [-\frac{9}{10}, \frac{9}{10}]$ we have \begin{align*} &8 x^8 + 9 (8 x^6+12 x^4-48 x^2+27) \geq 0 \\ &\impliedby 8 x^6+12 x^4-48 x^2+27 \geq 0 \\ &\impliedby 8 x^6+12 x^4-48 x^2+27 + 4 \left(x^2-1\right)^3 \geq 0 \\ &\iff 12 (x^6-3 x^2+2)-1 \geq 0 \\ &\iff 12 (x^2-1)^2 \left(x^2+2\right) -1 \geq 0 \end{align*} We define $\gamma \colon [0, \frac{9}{10}] \rightarrow \mathbb{R}$ by $$\gamma(x)=12 (x^2-1)^2 \left(x^2+2\right) -1.$$ For all $x \in [0, \frac{9}{10}]$ we have $$\gamma'(x) = 72 x(x^4-1) \leq 0,$$ thus $\gamma$ is decreasing. Since \begin{align*} &\gamma\left(\frac{9}{10}\right) \geq 0 \\ &\iff 12 \left(\frac{-19}{100}\right)^2 \cdot \frac{281}{100} -1 \geq 0 \\ &\iff \frac{12\cdot19^2 \cdot 281}{10^6} -1 \geq 0 \\ &\iff 12\cdot19^2 \cdot 281 \geq 10^6 \\ &\impliedby 19^2 \cdot 28 \geq 10^4 \\ &\iff 10108 \geq 10^4, \end{align*} we have $\gamma \geq 0$. By symmetry, we are done. $$\tag*{$\Box$}$$ Claim 1.3. If $x \in [-\frac{9}{10}, \frac{9}{10}]$ then $$(1+x)^{1+x}+(1-x)^{1-x}+3 \left(1-x^2\right)^{2/3} \geq 5.$$ Proof. Let $x \in [-\frac{9}{10}, \frac{9}{10}]$. We have \begin{align*} &(1+x)^{1+x}+(1-x)^{1-x}+3 \left(1-x^2\right)^{2/3} \geq 5 \\ \text{By Lemma 1.1:} \\ &\impliedby \left(\frac{1}{12}x^5+\frac{1}{3}x^4+\frac{1}{2}x^3+x^2+x+1\right) \\ &\qquad +\left(-\frac{1}{12}x^5+\frac{1}{3}x^4-\frac{1}{2}x^3+x^2-x+1\right) \\ &\qquad + 3 \left(1-x^2\right)^{2/3} \geq 5 \\[7pt] &\iff \frac{2}{3}x^4 + 2x^2 - 3 \geq -3 \left(1-x^2\right)^{2/3} \\[7pt] &\iff \left(\frac{2}{3}x^4 + 2x^2 - 3\right)^3 \geq -27 \left(1-x^2\right)^{2} \\[7pt] &\iff \frac{1}{27} x^4 \left(8 x^8+72 x^6+108 x^4-432 x^2+243\right) \geq 0. \\[7pt] &\impliedby \text{Lemma 1.2.} \end{align*} $$\tag*{$\Box$}$$ Now we will prove that the inequality holds for all $x \in [\frac{9}{10},1)$. Lemma 2.1 If $x \in [\frac{1}{2},1]$ then $(1+x)^{1+x} \geq (4+4 \log 2)x-4 \log 2$. Proof. Since for all $x \in [\frac{1}{2},1]$ we have \begin{align*} (4+4 \log 2)x-4 \log 2 &\geq (4+4 \log 2)\frac{1}{2}-4 \log 2 \\ &= 2-2 \log 2 \\ &> 2-2 \log \mathrm{e} = 0, \end{align*} we can define $\gamma \colon [\frac{1}{2},1] \rightarrow \mathbb{R}$ by $$\gamma(x) = (1+x) \log(1+x) - \log \left((4+4 \log 2)x-4 \log 2\right).$$ For all $x \in [\frac{1}{2},1]$ we have $$\gamma''(x) = \frac{1}{1+x}+\frac{(1+\log (2))^2}{(x+x \log (2)-\log (2))^2} \geq 0,$$ therefore $\gamma'$ is increasing. Since $\gamma'(1) = 0$, we know that $\gamma' \leq 0$. Thus $\gamma$ is decreasing. Since $\gamma(1) = 0$, we have $\gamma \geq 0$ and we are done. 
$$\tag*{$\Box$}$$ Claim 2.2 If $x \in [\frac{9}{10},1)$ then $$(1+x)^{1+x}+(1-x)^{1-x}+3 \left(1-x^2\right)^{2/3} \geq 5.$$ Proof. For all $x \in [\frac{9}{10},1)$ we have $$(1-x)^{1-x} = \exp\left((1-x) \log(1-x)\right) \geq 1 + (1-x) \log(1-x),$$ therefore \begin{align*} &(1+x)^{1+x}+(1-x)^{1-x}+3 \left(1-x^2\right)^{2/3} \geq 5 \\ &\impliedby (1+x)^{1+x}+1 + (1-x) \log(1-x)+3 \left(1-x^2\right)^{2/3} \geq 5 \\[10pt] &\text{By Lemma 2.1:} \\[10pt] &\impliedby (4+4 \log 2)x-4 \log 2+1 + (1-x) \log(1-x)+3 \left(1-x^2\right)^{2/3} \geq 5 \\ &\iff (4+4 \log 2)(x-1) + (1-x) \log(1-x)+3 \left(1-x^2\right)^{2/3} \geq 0 \\ &\iff (1-x)\left(-4-4 \log 2 + \log(1-x)+3(1-x)^{-1/3} (1+x)^{2/3}\right) \geq 0 \\ &\impliedby -4-4 \log 2 + \log(1-x)+3 (1-x)^{-1/3} (1+x)^{2/3} \geq 0. \end{align*} Let $\gamma\colon [\frac{9}{10},1) \rightarrow \mathbb{R}$ be the function given by $$\gamma(x) = \log(1-x)+3 (1-x)^{-1/3} (1+x)^{2/3} -4-4 \log 2.$$ For all $x \in [\frac{9}{10},1)$ we have \begin{align*} &\gamma'(x) \geq 0 \\ &\iff \frac{3-x-\left(1-x^2\right)^{1/3}}{(1-x)\left(1-x^2\right)^{1/3}} \geq 0 \\ &\impliedby 3-x-\left(1-x^2\right)^{1/3} \geq 0 \\ &\iff (3-x)^3 \geq 1-x^2 \\ &\iff -x^3+10 x^2-27 x+26 \geq 0 \\ &\impliedby -x^3+x^2 \geq 0 \land 9 x^2-27 x+18 \geq 0 \\ &\iff x^2(1-x) \geq 0 \land 9 (1-x)(2-x) \geq 0. \end{align*} Therefore $\gamma$ is increasing. Since \begin{align*} &\gamma\left(\frac{9}{10}\right) \geq 0 \\ &\iff \log\left(\frac{1}{10}\right)+3 \left(\frac{1}{10}\right)^{-1/3} \left(\frac{19}{10}\right)^{2/3} -4-4 \log 2 \geq 0 \\ &\iff -40+3\cdot 190^{2/3}-40 \log 2-10 \log 10 \geq 0 \\ &\impliedby -40+3\cdot 33-40 \log 2-10 \log 16 \geq 0 \\ &\iff \exp \frac{59}{80} \geq 2 \\ &\impliedby 1 + \frac{59}{80} + \frac{59^2}{2\cdot80^2} \geq 2 \\ &\iff 2\cdot59\cdot80 + 59^2 \geq 2\cdot80^2 \\ &\iff 12921 \geq 12800, \end{align*} we have $\gamma \geq 0$ and we are done. $$\tag*{$\Box$}$$ By symmetry we have $$(1+x)^{1+x}+(1-x)^{1-x}+3 \left(1-x^2\right)^{2/3} \geq 5$$ for all $x \in (-1,1)$.<|endoftext|> TITLE: Prove that a torsion module over a PID equals direct sum of its primary components QUESTION [6 upvotes]: Let $R$ be a P.I.D. with $1$ and $M$ be an $R$-module that is annihilated by the nonzero, proper ideal $(a)$. Let $a=p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_k^{\alpha_k}$ be the unique factorization of $a$. Let $M_i$ be the submodule of $M$ annihilated by $p_i^{\alpha_i}$. Prove that $M=M_1\oplus M_2\oplus \cdots \oplus M_k$. My attempt so far: For each $1\leq j \leq k$ define $a_j = \prod_{i\ne j} p_i^{\alpha_i}$. Let $\sum_{i=1}^{n} (a_jr_i)\cdot m_i$ be an arbitrary element of the submodule $(a_j)M$. We have $p_j^{\alpha_j}\cdot (\sum_{i=1}^{n} (a_jr_i)\cdot m_i) = (p_j^{\alpha_j}a_j(r_1 +\cdots + r_n))\cdot (m_1+\cdots +m_n) =(r_1 +\cdots +r_n)\cdot (a \cdot (m_1+\cdots +m_n)) =0$. So $\sum_{i=1}^{n} (a_jr_i)\cdot m_i \in M_j$, so that $(a_j)M\subset M_j$. Next, let $m\in M_j$. Since $R$ is a P.I.D., we know $1= a_jx + p_j^{\alpha_j}y$ for some $x, y \in R$. So $m= 1\cdot m = (a_jx + p_j^{\alpha_j}y)\cdot m = xa_j \cdot m + yp_j^{\alpha_j} \cdot m = xa_j \cdot m +0 \in (a_j)M$. Conclude that $(a_j)M = M_j$. Next, suppose $m\in (a_j)M\cap \sum_{t\ne j} (a_t)M$. We have $1\cdot m = xa_j \cdot m + yp_j^{\alpha_j} \cdot m= xa_j\cdot m + 0$. But note that $xa_j\cdot ((\sum_{t\ne j}a_t)\cdot m) = wa\cdot m$ for some $w\in R$, so that $xa_j\cdot ((\sum_{t\ne j}a_t)\cdot m) = 0$. It follows that $xa_j = 0$, and $m=0+0=0$. Conclude that $ (a_j)M\cap \sum_{t\ne j} (a_t)M = (0)$. 
Thus, $\sum_{i=1}^{k} (a_i)M$ is a direct sum. At this point, I'm not sure how to actually show this direct sum is equal to $M$. The only thing I tried is applying the Chinese Remainder Theorem as follows, but it doesn't seem to work. We have that $(a)M=(0)$. And since $R$ is a PID and $(p_i^{\alpha_i}, p_j^{\alpha_j})= (1) = R$ for any $i\ne j$, the ideals $(p_i^{\alpha_i})$ and $(p_j^{\alpha_j})$ are comaximal. So apply the Chinese Remainder Theorem to get $M\cong M/(p_1^{\alpha_1})M \times \cdots \times M/(p_k^{\alpha_k})M$. I'd appreciate some help on finishing this. REPLY [3 votes]: Based on your argument, I assume you take your PID to have a $1$. From there you need only show that $(a_1,\ldots, a_k)=R$. To do this, show inductively that for $k$<|endoftext|> TITLE: Show that $a+b+c=0$ implies that $32(a^4+b^4+c^4)$ is a perfect square. QUESTION [12 upvotes]: There are given integers $a, b, c$ satisfying $a+b+c=0$. Show that $32(a^4+b^4+c^4)$ is a perfect square. EDIT: I found a solution by symmetric polynomials, which is posted below. REPLY [5 votes]: EDIT: I found a solution by symmetric polynomials (in variables $a$, $b$, $c$) The following more or less transcribes OP's solution in direct calculations, without explicitly using Newton's relations. From the assumption that $a+b+c=0\,$: $$ 0 = (a+b+c)^2 = a^2+b^2+c^2 + 2(ab+bc+ca) $$ $$ \implies 2(ab+bc+ca)=-(a^2+b^2+c^2) \tag{1} $$ $$ \require{cancel} (ab+bc+ca)^2 = a^2b^2+b^2c^2+c^2a^2 + \cancel{2abc(a+b+c) } \tag{2} $$ $$ \begin{align} a^4+b^4+c^4 & = (a^2+b^2+c^2)^2-2(a^2b^2+b^2c^2+c^2a^2) \\[5px] & \overset{(1),(2)}{=} 4(ab+bc+ca)^2 - 2(ab+bc+ca)^2 \\ & = 2 (ab+bc+ca)^2 \end{align} $$ The latter gives $32(a^4+b^4+c^4)=\big(8 (ab+bc+ca)\big)^2\,$.<|endoftext|> TITLE: The projection of a point onto a convex set is unique with respect to any norm QUESTION [6 upvotes]: Given a convex set $C$, the projection of a point $z$ onto $C$ is a point $x$ in $C$ that minimizes $\|z- x\|$. Say the minimum is achieved at $x^*\in C$. My textbook shows such $x^*$ is unique under the Euclidean norm, as shown below. My guess is the uniqueness should hold regardless of what norm is chosen, but I have trouble proving it. Thanks! REPLY [3 votes]: Counterexample: Take $\mathbb R^2$ with $\|\cdot\|_1$. Let $C=\{x\in\mathbb R^2:\|x\|_1\le1\}$. Let $P=(2,2)$. Then for any point $x_\lambda=(\lambda,1-\lambda)$ with $\lambda\in[0,1]$ you have $\|P-x_\lambda\|_1 = |2-\lambda|+|2-1+\lambda| = 3$, which is also the minimum distance. Note that $\|\cdot\|_1$ is not a differentiable function, therefore the proof doesn't apply.<|endoftext|> TITLE: Solving or approximating infinitely nested integral QUESTION [6 upvotes]: Let $f$ be given by $$f(x) = g(x) + \int_0^x\left(g(x_1) + \int_0^{x_1} \left( g(x_2) + \int_0^{x_2} \ldots \,dx_n \ldots \right) dx_2 \right) dx_1$$ where $n \rightarrow \infty$ and $g(x)$ is strictly decreasing in $x$. How can such an integral be solved or approximated?
REPLY [3 votes]: Yet another way: using the Cauchy formula for repeated integration, an $n$-fold integral can be replaced by one integral: $$ (J^n f)(x) = \int_0^{x}\dotsi \int_0^{t_{n-1}} f(t_n) \, dt_n \dotsm dt_1 = \frac{1}{(n-1)!} \int_0^x (x-t)^{n-1} f(t) \, dt, $$ which leads to the right-hand side becoming $$ g(x) + \sum_{k=1}^{\infty} (J^k g)(x) = g(x) + \int_0^x \sum_{k=1}^{\infty} \frac{(x-t)^{k-1}}{(k-1)!} g(t) \, dt = g(x) + \int_0^x e^{x-t} g(t) \, dt; $$ the interchange of the integral and sum is justified by the convergence assumption and the fact that $J$ is a contraction mapping on the space of functions.<|endoftext|> TITLE: Inner product of scaled Hermite functions QUESTION [6 upvotes]: I'm attempting to find a closed form expression for $$\int_{-\infty}^{\infty}e^{-\frac{x^2\left(1+\lambda^2\right)}{2}}H_{n}(x)H_m(\lambda x)dx$$ where $H_n(x)$ are the physicists' Hermite polynomials, but haven't had any luck. Anyone know of a way to compute this? REPLY [2 votes]: The integral when $n=m$ is $$ I_{nn} = 2^{2n}\sqrt{2\pi}\left(n!\right)^{2} \frac{\lambda^n}{{\left(\lambda^{2} + 1\right)^{n + \frac{1}{2}}}} {\sum_{k=0}^{\left \lfloor \frac{n}{2} \right \rfloor} \frac{\left(-1\right)^{k} }{2^{4k} (k!)^{2} \left(n-2k \right)!}}\left(\frac{{\lambda^{2} - 1}}{\lambda}\right)^{2k}. $$ The integral is zero whenever $n$ and $m$ have opposite parity. When $n\ge m$, define $s=\frac{n-m}{2}$ (which is guaranteed to be an integer) and the integral becomes $$ I_{nm}=2^{2m}\sqrt{2\pi}m! n!\frac{\lambda^{m}{\left(1-\lambda^{2} \right)}^{s} }{{\left(\lambda^{2} + 1\right)}^{m + s + \frac{1}{2}}}{\sum_{l=0}^{\left \lfloor \frac{m}{2} \right \rfloor} \frac{\left(-1\right)^{l} }{2^{4l} \left(l + s\right)! l! \left(m-2l\right)!}\left(\frac{{\lambda^{2} - 1}}{\lambda}\right)^{2l}} $$ The general case is $$ I_{nm}=2^{2m}\sqrt{2\pi}m! n!\frac{\lambda^{\operatorname{min}(n,m)}{\left(1-\lambda^{2} \right)}^{s}}{{\left(\lambda^{2} + 1\right)}^{\operatorname{max}(n,m) + \frac{1}{2}}}{\sum_{l=0}^{\left \lfloor \frac{\operatorname{min}(n,m)}{2} \right \rfloor} \frac{\left(-1\right)^{l} }{2^{4l} \left(l + s\right)! l! \left(\operatorname{min}(n,m)-2l\right)!}\left(\frac{{\lambda^{2} - 1}}{\lambda}\right)^{2l}} $$ In order to derive this result, first change to probabilists' Hermite polynomials $$ H_n(x)=2^{\frac{n}{2}}\operatorname{He}_n(\sqrt{2}x) $$ so the integral becomes $$ I_{nm} = 2^{\frac{n+m}{2}}\int_{-\infty}^\infty e^{-\frac{x^2}{2}(1+\lambda^2)}\operatorname{He}_n(\sqrt{2}x)\operatorname{He}_m(\lambda\sqrt{2}x)dx. $$ Change the integration variable in order to recover the probabilists' weighting function, $y=x\sqrt{1+\lambda^2}$: $$ I_{nm} = \frac{2^{\frac{n+m}{2}}}{\sqrt{1+\lambda^2}} \int_{-\infty}^\infty e^{-\frac{y^2}{2}} \operatorname{He}_n\left(y\sqrt{\frac{2}{1+\lambda^2}}\right)\operatorname{He}_m\left(y\sqrt{\frac{2\lambda^2}{1+\lambda^2}}\right)dy.
$$ Use the scaled Hermite polynomial on both polynomials $$ \operatorname{He}_n(\gamma x) = n!\sum_{k=0}^{\left \lfloor \frac{n}{2} \right \rfloor}\frac{1}{2^kk!(n-2k)!}\gamma^{n-2k}\left(\gamma^2-1\right)^k \operatorname{He}_{n-2k}(x) $$ leads to a very long expression, from which all three cases ($n=m$, $n\ge m$, $n\le m$) obtain $$ I_{nm} = \frac{2^{\frac{n+m}{2}}}{\sqrt{1+\lambda^2}}n!m!\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}\sum_{l=0}^{\left\lfloor\frac{m}{2}\right\rfloor}\frac{(-1)^k\left(\sqrt{\frac{2}{1+\lambda^2}}\right)^{n-2k+m-2l}\lambda^{m-2l}\left(\frac{\lambda^2-1}{\lambda^2+1}\right)^{k+l}}{2^kk!(n-2k)!2^ll!(m-2l)!}\\ \times\int_{-\infty}^\infty\operatorname{He}_{n-2k}(x)\operatorname{He}_{m-2l}(x)e^{-\frac{x^2}{2}}dx. $$ Orthogonality will constrain one of the sums; always choose the sum associated with the maximum of $n,m$. Take the case $n\ge m$. As previously stated, the parity of $n$ and $m$ has to be the same; otherwise the integrand is odd and the integral vanishes. Therefore write $n=m+2s$ for $s\in \mathbb{Z}$. The orthogonality constraint can be written $k=l+s$ and, after algebraic manipulation, the result above obtains.<|endoftext|> TITLE: Show that $\int_0^1\ln(-\ln{x})\cdot{\mathrm dx\over 1+x^2}=-\sum\limits_{n=0}^\infty{1\over 2n+1}\cdot{2\pi\over e^{\pi(2n+1)}+1}$ and evaluate it QUESTION [8 upvotes]: Considering this integral and sum are equal, how can we show this and evaluate the closed form? $$\int_{0}^{1}\ln{(-\ln{x})}\cdot{\mathrm dx\over 1+x^2}=-\sum_{n=0}^{\infty}{1\over 2n+1}\cdot{2\pi\over e^{\pi(2n+1)}+1}=I_S\tag1$$ Note: $$\int_{0}^{1}{\mathrm dx\over 1+x^2}=\sum_{n=0}^{\infty}{(-1)^n\over 2n+1}\tag2$$ An attempt: $u=-\ln{x}$, then $xdu=-dx$, and $(1)$ becomes $$\int_{0}^{\infty}\ln{u}\cdot{\mathrm du\over e^u+e^{-u}}={1\over 2}\int_{0}^{\infty}\ln{u}\cdot{\mathrm du\over \cosh{u}}\tag3$$ $(3)$ seems complicated. REPLY [4 votes]: The following is based on the theory of theta functions and elliptic integrals. Let $0 < q < 1$ and consider the function $$a(q) = \sum_{n = 1}^{\infty}\frac{q^{n}}{n(1 + q^{n})}\tag{1}$$ and $$b(q) = \sum_{n \text{ odd}}^{\infty}\frac{q^{n}}{n(1 + q^{n})}\tag{2}$$ We can see that $$b(q) = a(q) - \frac{a(q^{2})}{2}\tag{3}$$ Now we have \begin{align} a(q) &= \sum_{n = 1}^{\infty}\frac{1}{n}\sum_{m = 1}^{\infty}(-1)^{m - 1}q^{mn}\notag\\ &= \sum_{m = 1}^{\infty}(-1)^{m - 1}\sum_{n = 1}^{\infty}\frac{q^{mn}}{n}\notag\\ &= \sum_{m = 1}^{\infty}(-1)^{m}\log(1 - q^{m})\notag\\ &= \log\prod_{n = 1}^{\infty}\frac{1 - q^{2n}}{1 - q^{2n - 1}}\notag\\ &= \log\prod_{n = 1}^{\infty}\frac{1 - q^{n}}{(1 - q^{2n - 1})^{2}}\notag \end{align} Therefore \begin{align} 2b(q) &= 2a(q) - a(q^{2})\notag\\ &= \log\prod_{n = 1}^{\infty}\frac{(1 - q^{n})^{2}}{(1 - q^{2n - 1})^{4}}\cdot\frac{(1 - q^{4n - 2})^{2}}{(1 - q^{2n})}\notag\\ &= \log\prod_{n = 1}^{\infty}\frac{(1 - q^{2n})^{2}}{(1 - q^{2n - 1})^{2}}\cdot\frac{(1 - q^{4n - 2})^{2}}{(1 - q^{2n})}\notag\\ &= \log\prod_{n = 1}^{\infty}(1 - q^{2n})(1 + q^{2n - 1})^{2}\notag\\ &= \log\vartheta_{3}(q)\notag \end{align} and thus $$b(q) = \frac{1}{4}\log\frac{2K}{\pi}\tag{4}$$ where $K$ is the complete elliptic integral for modulus $k$ corresponding to the nome $q = e^{-\pi K'/K}$.
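As a quick numerical sanity check of $(4)$ at $q=e^{-\pi}$ (a minimal sketch, assuming mpmath is available):

from mpmath import mp, exp, pi, log, gamma, sqrt, mpf

mp.dps = 30
q = exp(-pi)
# b(q) sums over odd n only; the terms decay like q^n, so 80 terms are plenty
b = sum(q**n/(n*(1 + q**n)) for n in range(1, 80, 2))
K = gamma(mpf(1)/4)**2/(4*sqrt(pi))  # complete elliptic integral K(1/sqrt(2))
print(b, log(2*K/pi)/4)  # both ~ 0.041457...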
The value of the integral as claimed here is $$-2\pi b(e^{-\pi})$$ This corresponds to $q = e^{-\pi}, k = 1/\sqrt{2}, K = \Gamma^{2}(1/4)/4\sqrt{\pi}$ and hence the integral in question is claimed to be $$-\frac{\pi}{2}\log\frac{1}{\pi}\frac{\Gamma^{2}(1/4)}{2\sqrt{\pi}} = \frac{\pi}{2}\log \left(\sqrt{2\pi} \Gamma\left(\frac{3}{4}\right) / \Gamma\left(\frac{1}{4}\right)\right)$$ and this is evaluated in the question linked by "nospoon" in his comment. It is however desirable to have a direct proof that the integral is equal to the infinite series in question without making use of the answers of the linked question.<|endoftext|> TITLE: Explaining Ito's Lemma QUESTION [5 upvotes]: Find $$\int_{0}^{T}W(t)dW(t)$$ using Ito's Lemma. Now, I know that the answer to that question is: $\int_{0}^{T}W(t)dW(t)= \frac{W^2(T)}{2}-\frac{T}{2}$ but can somebody explain the idea behind Ito's Lemma by giving a formal mathematical proof of the above? I would be grateful if someone could post any "interesting" (but not so hard) application of Ito's Lemma when it comes to Brownian motion. REPLY [6 votes]: Roughly: let $f:(t,x) \mapsto f(t,x)$ be a smooth enough function from $\mathbb{R}^2$ to $\mathbb{R}$; a classical Taylor expansion gives $$\Delta f(t,x)=\frac{\partial f}{\partial t}\Delta t+\frac{\partial f}{\partial x}\Delta x+\frac{1}{2}\frac{\partial^2 f}{\partial x^2}\Delta x^2+\frac{1}{2}\frac{\partial^2 f}{\partial t^2}\Delta t^2+\mathcal{O}(\Delta x^3)+\mathcal{O}(\Delta t^3)$$ (suppressing the mixed $\Delta t\,\Delta x$ term, which is of lower order here). Using infinitesimal $\Delta x$ and $\Delta t$, the second order terms vanish, and we write $$d f(t,x)=\frac{\partial f}{\partial t}d t+\frac{\partial f}{\partial x}d x$$ However, if $x$ now refers to a Brownian motion, then moving to infinitesimal $\Delta x$ and $\Delta t$, the $\Delta t^2$ term still vanishes, but $\Delta x^2$ becomes $\Delta t$ because of the non-vanishing quadratic variation of the Brownian motion. That is why you may hear that a Brownian motion is "of order $\sqrt{t}$". Hence the formula differs from the classical one, as $\Delta x^2$ is not negligible anymore, and we have $$d f(t,x)=\frac{\partial f}{\partial t}d t+\frac{\partial f}{\partial x}d x+\frac{1}{2}\frac{\partial^2 f}{\partial x^2}dx^2$$ This is the Ito formula, and $x$ can be any Ito process, a Brownian motion being one of them. Now regarding your exercise: let $W$ be a Brownian motion, and $f(t,x)=x^2$. You would agree that $W$ is a Brownian motion, hence an Ito process. Furthermore $f$ is twice differentiable. You can apply Ito's lemma, and use that $\frac{\partial f}{\partial t}=0$, $\frac{\partial f}{\partial x}=2x$ and $\frac{\partial^2 f}{\partial x^2}=2$. Here $x$ refers to the Brownian motion $W$. Thus, $$d f(t,W_t)=0\,d t+2W_td W_t+\frac{1}{2}2\,dW_t^2$$ I stated before that the quadratic variation of $W$ gives $dW_t^2=dt$. We have $$d f(t,W_t)= 2W_td W_t+dt$$ Integrating the equation, we have $$f(T,W_T)-f(0,W_0)=2\int_{0}^{T}{W_tdW_t}+T$$ or $$ W_T^2=2\int_{0}^{T}{W_tdW_t}+T$$ Finally, by moving the $T$ term across and dividing by $2$, we obtain what you want.<|endoftext|> TITLE: How to convert a random matrix to Unitary Matrix? QUESTION [7 upvotes]: I know that an $n \times n$ complex matrix is said to be unitary if $AA^*=A^*A=I$ or equivalently if $A^*=A^{-1}$. But I am asking: what if there is a random matrix and we want to turn it into a unitary matrix? Please also give an example.
REPLY [4 votes]: If you have access to Matlab or Octave, then producing such matrices is as easy as issuing the following commands:

n = 10;
A = rand(n) + 1i*rand(n);
[U,~] = qr(A);

I recommend that you use software such as this to generate these matrices in general. For small matrices that you would like to work out by hand, the Gram-Schmidt procedure is what you want. Here is a simple example involving a real $3\times 3$ matrix. Let $$ A = \begin{bmatrix} 1 & 1 & 1\\ 1 & 0 & 2\\ 0 & 2 & 1 \end{bmatrix} $$ We are going to construct a $3\times 3$ unitary matrix $U$ from the columns of $A$. For simplicity I will first construct a matrix $\hat{U}$ with orthogonal columns, and then normalize at the end to get a unitary matrix. The first step is easy: we set the first column of $\hat{U}$ equal to the first column of $A$, i.e., $\hat{u}_1 = a_1$. The second column of $\hat{U}$ is equal to $a_2$ minus any contributions from $\hat{u}_1$, that is, $$ \hat{u}_2 = a_2 - \frac{\hat{u}_1\cdot a_2}{\hat{u}_1\cdot \hat{u}_1}\hat{u}_1 = \begin{bmatrix} 1\\ 0\\ 2 \end{bmatrix} - \frac{1}{2} \begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix} = \begin{bmatrix} \phantom{-}1/2\\ -1/2\\ \phantom{-}2 \end{bmatrix} $$ The last step is the same as the second except now we must remove from $a_3$ any contributions from $\hat{u}_1$ or $\hat{u}_2$. Thus, \begin{align} \hat{u}_3 &= a_3 - \frac{\hat{u}_1\cdot a_3}{\hat{u}_1\cdot \hat{u}_1}\hat{u}_1 - \frac{\hat{u}_2\cdot a_3}{\hat{u}_2\cdot \hat{u}_2}\hat{u}_2\\[1mm] &= \begin{bmatrix} 1\\ 2\\ 1 \end{bmatrix} - \frac{3}{2} \begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix} - \frac{1}{3} \begin{bmatrix} \phantom{-}1/2\\ -1/2\\ \phantom{-}2 \end{bmatrix}\\[1mm] &= \begin{bmatrix} -2/3\\ \phantom{-}2/3\\ \phantom{-}1/3 \end{bmatrix} \end{align} The last step to obtain $U$ from $\hat{U}$ is to normalize the columns of $\hat{U}$. Doing so we obtain the unitary matrix $$ U = \begin{bmatrix} 1/\sqrt{2} & \phantom{-}1/(3\sqrt{2}) & -2/3\\ 1/\sqrt{2} & -1/(3\sqrt{2}) &\phantom{-}2/3\\ 0 & \phantom{-}2\sqrt{2}/3 &\phantom{-}1/3 \end{bmatrix} $$ As you can see from this simple example, the procedure is quite tedious by hand and is best left to a computer. You can read more about the Gram-Schmidt procedure here.<|endoftext|> TITLE: Closed form for the series $\sum_{k=1}^\infty (-1)^k \ln \left( \tanh \frac{\pi k x}{2} \right)$ QUESTION [7 upvotes]: Is there a closed form for: $$f(x)=\sum_{k=1}^\infty (-1)^k \ln \left( \tanh \frac{\pi k x}{2} \right)=2\sum_{n=0}^\infty \frac{1}{2n+1}\frac{1}{e^{\pi (2n+1) x}+1}$$ This sum originated from a recent question, where we have: $$f(1)= -\frac{1}{\pi}\int_0^1 \ln \left( \ln \frac{1}{x} \right) \frac{dx}{1+x^2}=\ln \frac{\pi^{1/4}}{\Gamma (3/4)}$$ If we differentiate w.r.t. $x$, we obtain: $$f'(x)=\sum_{k=1}^\infty (-1)^k \frac{\pi k}{\sinh \pi k x}$$ There is again a closed form for $x=1$ (obtained numerically): $$f'(1)=-\frac{1}{4}$$ So, is there a closed form or at least an integral definition for arbitrary $x>0$?
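For reference, one way to reproduce the numerics is plain direct summation (a minimal sketch, assuming mpmath is available; the terms decay like $e^{-\pi k}$, so a few dozen terms suffice):

from mpmath import mp, pi, log, tanh, sinh, gamma

mp.dps = 30
f1  = sum((-1)**k * log(tanh(pi*k/2)) for k in range(1, 60))
df1 = sum((-1)**k * pi*k/sinh(pi*k)   for k in range(1, 60))
print(f1, log(pi**0.25/gamma(0.75)))  # both ~ 0.0829...
print(df1)                            # ~ -0.25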
The series converges absolutely (numerically at least): $$\sum_{k=1}^\infty \left|\ln \left( \tanh \frac{\pi k x}{2} \right)\right|< \infty$$ Thus, this series can also be expressed as a logarithm of an infinite product: $$f(x)=\ln \prod_{k=1}^\infty \tanh (\pi k x) - \ln \prod_{k=1}^\infty \tanh \left( \pi (k-1/2) x \right)$$ $$e^{f(x)}= \prod_{k=1}^\infty \frac{\tanh (\pi k x)}{\tanh \left( \pi (k-1/2) x \right)}$$ This by the way leads to: $$\prod_{k=1}^\infty \frac{\tanh (\pi k)}{\tanh \left( \pi (k-1/2) \right)}=\frac{\pi^{1/4}}{\Gamma(3/4)}$$ I feel like there is a way to use the infinite product form for $\sinh$ and $\cosh$: $$\sinh (\pi x)=\pi x \prod_{n=1}^\infty \left(1+\frac{x^2}{n^2} \right)$$ $$\cosh (\pi x)=\prod_{n=1}^\infty \left(1+\frac{x^2}{(n-1/2)^2} \right)$$ REPLY [4 votes]: Let's use $~\displaystyle\prod\limits_{k=1}^\infty (1+z^k)(1-z^{2k-1}) =1~$. $\enspace$ (It's explained in a note below.) For $~z:=q^2~$ and $~q:=e^{-\pi x}~$ with $~x>0~$ we get $\displaystyle e^{f(x)} = \prod\limits_{k=1}^\infty\frac{\tanh(k\pi x)}{\tanh((k-\frac{1}{2})\pi x)} = \prod\limits_{k=1}^\infty\frac{ \frac{q^{-k}-q^k}{q^{-k}+q^k} }{ \frac{q^{\frac{1}{2}-k}-q^{k-\frac{1}{2} }}{q^{\frac{1}{2}-k}+q^{k-\frac{1}{2}}} } = \prod\limits_{k=1}^\infty\frac{(1-q^{2k})(1+q^{2k-1})}{(1+q^{2k})(1-q^{2k-1})} =$ $\displaystyle = \prod\limits_{k=1}^\infty (1-q^{2k})(1+q^{2k-1})^2 = \sum\limits_{k=-\infty}^{+\infty} q^{k^2} = \vartheta(0;ix)$ The "closed form" for $\,f\,$ is: $$f(x) = \ln\vartheta(0;ix)$$ Please see e.g. Theta function. Note: $\displaystyle\prod\limits_{k=1}^\infty (1+z^k)(1-z^{2k-1}) =1$ $\Leftrightarrow\hspace{2cm}$ (logarithm) $\displaystyle \sum\limits_{k=1}^\infty \sum\limits_{v=1}^\infty \frac{(-1)^{v-1}z^{kv}}{v} = \sum\limits_{k=1}^\infty \ln(1+z^k) = -\sum\limits_{k=1}^\infty \ln(1-z^{2k-1}) = \sum\limits_{k=1}^\infty \sum\limits_{v=1}^\infty \frac{z^{(2k-1)v}}{v}$ $\Leftrightarrow\hspace{2cm}$ (exchanging the sum symbols, which is valid for $~|z|<1~$ $\hspace{2.7cm}$ and using $~\displaystyle\frac{x}{1-x}=\sum\limits_{k=1}^\infty x^k~$) $\displaystyle\sum\limits_{v=1}^\infty \frac{(-1)^{v-1}}{v}\frac{z^v}{1-z^v} = \sum\limits_{v=1}^\infty \frac{1}{v}\frac{z^v}{1-z^v} - 2\sum\limits_{v=1}^\infty \frac{1}{2v}\frac{z^{2v}}{1-z^{2v}} = \sum\limits_{v=1}^\infty \frac{1}{v}\frac{z^v}{1-z^{2v}}$<|endoftext|> TITLE: How to find possible closed-form-formula for a given decimal expansion online? QUESTION [8 upvotes]: What mathematical program or library can guess a formula for a decimal fraction? For example, for the number 2.414213562373095 the right guess is $1 + \sqrt 2$. Are there such tools online? I cannot find one. REPLY [4 votes]: Using keywords from Peter's comment ("closed form by decimal expansion") I found the Inverse Symbolic Calculator. Also see SymPy's number identification.<|endoftext|> TITLE: Evaluate $1+\left(\frac{1+\frac12}{2}\right)^2+\left(\frac{1+\frac12+\frac13}{3}\right)^2+\left(\frac{1+\frac12+\frac13+\frac14}{4}\right)^2+...$ QUESTION [6 upvotes]: Evaluate: $$S_n=1+\left(\frac{1+\frac12}{2}\right)^2+\left(\frac{1+\frac12+\frac13}{3}\right)^2+\left(\frac{1+\frac12+\frac13+\frac14}{4}\right)^2+...$$ Here $a_n$ are the individual terms to be summed. My try: \begin{align} &a_1=1\\ &a_2=\left(\frac{3}{4}\right)^2=\frac{9}{16}\\ &a_3=\left(\frac{11}{18}\right)^2\\ &a_4=\left(\frac{25}{48}\right)^2 \end{align} Now what? REPLY [5 votes]: By setting $H_n = \sum_{k=1}^{n}\frac{1}{k}$ we have to compute $\sum_{n\geq 1}\left(\frac{H_n}{n}\right)^2$.
We may notice that $$ \sum_{n=1}^{N}\frac{H_n}{n}=\sum_{1\leq m\leq n\leq N}\frac{1}{mn}=\frac{H_N^2+H_N^{(2)}}{2}\tag{1}$$ and for the same reason $$ \sum_{n=1}^{N}\frac{H_n^{(2)}}{n^2} = \frac{1}{2}\left[\left(\sum_{n=1}^{N}\frac{1}{n^2}\right)^2+\sum_{n=1}^{N}\frac{1}{n^4}\right]\stackrel{N\to +\infty}{\longrightarrow}\frac{\zeta(2)^2+\zeta(4)}{2}=\frac{7\pi^4}{360} \tag{2}$$ Since $-\log(1-x)=\sum_{n\geq 1}\frac{1}{n}\,x^n$, by multiplying both sides by $\frac{1}{1-x}$ and applying termwise integration we have $$ \sum_{n\geq 1}\frac{H_{n}}{n}\,x^{n} = \text{Li}_2(x)+\frac{1}{2}\log^2(1-x) \tag{3}$$ hence by $(1)$ it follows that: $$ \sum_{N\geq 1}\frac{H_N^2+H_N^{(2)}}{2}x^{N} = \frac{\text{Li}_2(x)}{1-x}+\frac{1}{2}\cdot\frac{\log^2(1-x)}{1-x}\tag{4} $$ and by multiplying both sides of $(4)$ by $-\frac{2\log x}{x}$ and performing termwise integration over $(0,1)$: $$ \sum_{N\geq 1}\frac{H_N^2+H_N^{(2)}}{N^2} = -\int_{0}^{1}\left[\frac{2\text{Li}_2(x)\log(x)}{x(1-x)}+\frac{\log^2(1-x)\log(x)}{x(1-x)}\right]\,dx.\tag{5} $$ The integral $-\int_{0}^{1}\frac{\log^2(1-x)\log(x)}{x(1-x)}\,dx$ can be computed by differentiating Euler's beta function, and it equals $\frac{\pi^4}{36}$. Since $\int\frac{\log(x)}{x(1-x)}\,dx=\frac{1}{2}\log^2(x)+\text{Li}_2(1-x)$ and $\frac{d}{dx}\text{Li}_2(x)=-\frac{\log(1-x)}{x}$, by integration by parts the whole problem boils down to computing: $$ I = \int_{0}^{1}\frac{\text{Li}_2(x)\log(x)}{1-x}\,dx \tag{6}$$ but we have already done that in $(2)$, since $\frac{\text{Li}_2(x)}{1-x}=\sum_{n\geq 1}H_n^{(2)}x^n.$ Collecting pieces, $$ \sum_{n\geq 1}\left(\frac{H_n}{n}\right)^2 = \color{red}{\frac{17\pi^4}{360}}.$$ REPLY [4 votes]: Recall that the multiple zeta values are defined by the series $$ \zeta(s_1,\ldots,s_k):=\sum_{n_1>\ldots>n_k\geq 1}\frac{1}{n_1^{s_1}\ldots n_k^{s_k}}. $$ The sum $S$ can be expressed as a linear combination of multiple zeta values. We have $$ \begin{align*} S&=\sum_{n=1}^\infty \sum_{k_1,k_2=1}^n\frac{1}{n^2k_1k_2}\\ &=\left(2\sum_{n>k_1>k_2}+\sum_{n>k_1=k_2}+2\sum_{n=k_1>k_2}+\sum_{n=k_1=k_2}\right)\frac{1}{n^2k_1k_2}\\ &=2\zeta(2,1,1)+\zeta(2,2)+2\zeta(3,1)+\zeta(4). \end{align*} $$ Each of these multiple zeta values is a rational multiple of $\pi^4$. The expressions have been tabulated for instance on the MZV data mine: $$ \begin{align*} \zeta(2,1,1)&=\frac{\pi^4}{90},\\ \zeta(2,2)&=\frac{\pi^4}{120},\\ \zeta(3,1)&=\frac{\pi^4}{360},\\ \zeta(4)&=\frac{\pi^4}{90}. \end{align*} $$ So we get $$ S=\frac{17\pi^4}{360} $$<|endoftext|> TITLE: Proof of $\sqrt{a} + \sqrt{b} \le 2 \times \sqrt{a+b}$? QUESTION [5 upvotes]: I want to prove that $\sqrt{a} + \sqrt{b} \le 2 \times \sqrt{a+b}$. I had the idea to draw it: Would it be enough to prove what I want to prove? If not, is there a way to be more precise while still using my method, or should I abandon it and use a more "traditional" way? Thank you. REPLY [2 votes]: Not all problems have a nice geometrical interpretation. I think it is surely better to do this one in a purely algebraic way. It would also be more convenient to assume $a,b \geq 0$ for your problem. Since $a,b\geq 0$, we have $a\leq a+b$ and $b\leq a+b$, and because $f(x)=\sqrt{x}$ is increasing on $\mathbb{R}_+$, it follows that $\sqrt{a} \leq \sqrt{a+b}$ and $\sqrt{b} \leq \sqrt{a+b}$. Finally, by adding the two inequalities, we get $\sqrt{a}+\sqrt{b} \leq 2\sqrt{a+b}$.<|endoftext|> TITLE: Does a group of isometries uniquely characterize a metric? QUESTION [9 upvotes]: Let $(X,d)$ be a metric space, whose metric $d$ is not known.
Let $G=(f,\circ)$ be its group of isometries (that is, distance preserving functions $f:X\rightarrow X$, with the usual function composition as group operation). Is $d$ uniquely specified by $G$? If the answer is yes: how can we explicitly know the form of $d$ from $G$? If the answer is no: under what simplifying assumptions will $d$ be specified by $G$? REPLY [11 votes]: No. If $d$ is a metric, then so is $d/(1+d)$. There are probably many functions other than $x\mapsto x/(1+x)$ that could be composed with a metric to produce another metric. So the best you might hope for is that two metrics with the same isometries are related by composition with some function. Whether that is true or not, I don't know.<|endoftext|> TITLE: Why sum of interior angles in convex polygon is $(n-2)\cdot 180$ QUESTION [10 upvotes]: A couple of days ago in my high school math lessons I learned that the sum of the interior angles in a convex polygon is $Z=(n-2)\cdot 180$, where $Z$ is the sum of the angles and $n$ is the number of sides of the polygon. Can someone help me understand this formula, and why it is like this? REPLY [4 votes]: Start with a triangle. Choose an edge on the triangle and mark a point on it. Now imagine pulling the point outwardly away from the edge. What happens is the original triangle gains 2 more edges and there is an extra $180^{\circ} $ from the triangle formed by the original edge and the two new ones. This has the advantage of explaining why there are two fewer triangles than edges, as each triangle is paired with the 2 parts of a broken edge. If I get a chance I'll add an animation, or if someone wants to edit, feel free. This can be extended to make the polygon grow.<|endoftext|> TITLE: Question from applications of derivatives. QUESTION [5 upvotes]: Prove that the least perimeter of an isosceles triangle in which a circle of radius $r$ can be inscribed is $6r\sqrt3$. I have seen answers online on two sites. One is on meritnation, but that answer is difficult and badly formatted. The other is on topperlearning, but it makes use of trigonometric functions, and I want to solve the problem without trigonometric functions. So please, can someone provide an easy method? REPLY [2 votes]: Let $\Delta ABC$ be our triangle, $AB=BC=ax$ and $AC=a$. Hence, since $ax+ax>a$, we get $x>\frac{1}{2}$ and $$r=\frac{2S}{ax+ax+a}=\frac{a\sqrt{a^2x^2-\frac{a^2}{4}}}{a(2x+1)}=\frac{a}{2}\sqrt{\frac{2x-1}{2x+1}}.$$ Thus, we need to prove that $$2ax+a\geq3\sqrt3a\sqrt{\frac{2x-1}{2x+1}}$$ or $$(2x+1)^3\geq27(2x-1).$$ Let $f(x)=(2x+1)^3-27(2x-1)$. Thus, $f'(x)=6(2x+1)^2-54=24(x-1)(x+2)$, which says that $x_{min}=1$ and we are done!<|endoftext|> TITLE: Canonical divisor of the projective line QUESTION [5 upvotes]: I want to calculate a canonical divisor of $\mathbb{P}^1_k$. We have the regular function $f=id:\mathbb{P}^1\rightarrow\mathbb{P}^1$. Thus we get a regular differential form $df=f-f(x)\text{ mod }m_p^2.$ But how can we compute $div f?$ What is $v_{(0,0)}(f)?$ Since $f$ vanishes at $0$, $v_{(0,0)}(f)\ge 1$. Since $f$ has no poles and no other zeros, we have $div f=v_{(0,0)}(f)(0,0)-v_{(0,0)}(f)\infty$, because principal divisors have degree zero. But what is $v_{(0,0)}(f)$? The solution should be 2, but why? REPLY [7 votes]: Let $[x:y]$ be the homogeneous coordinates on $\mathbb P^1$. Pick affine charts, $U_0 = \{ [1:z ] : z \in \mathbb A^1\}$ and $U_1 = \{ [w:1] : w \in \mathbb A^1\}$. So $z = 1/w$ on the overlap. Let's pick any meromorphic differential and find its corresponding divisor.
If you're only interested in finding the divisor class up to linear equivalence, it doesn't matter which meromorphic differential you pick. So how about $\omega = dz$? That seems like the most obvious choice. Well, the expression $\omega = dz$ is valid in the $U_0$ patch. It has no zeroes or poles in $U_0$. However, in the $U_1$ patch, $\omega = d(1/w) = - dw / w^2$. This has an order $2$ pole at $w = 0$. Thus $v_{w=0}(\omega) = -2$. [Let me spell out in more detail how to find the valuation of a differential at a point $p$. Let $t$ be any local parameter at $p$. Try to write your differential as $\omega = \alpha(t) dt$, where $\alpha(t)$ is a function in the local ring at $p$. Then find the valuation of $\alpha(t)$. In the example above, $w$ is a local parameter at $w = 0$. Writing $\omega = (-1/w^2)dw$, the $t$ in this example is $w$ and the $\alpha$ is $-1/w^2$. And $-1/w^2$ has valuation $-2$. To give you a slightly more non-trivial example, suppose the curve is the parabola $V(y-x^2) \subset \mathbb A^2$ and you want to find the valuation of $\omega = ydy$ at $(0,0)$. Here, $x$ is a local parameter but $y$ is not. So you write $\omega = (x^2) d(x^2) = (x^2 ) (2xdx) = 2x^3 dx$. Then you take the valuation of $2x^3$, which is 3.] Anyway, back to differentials on $\mathbb P^1$. Perhaps you're now wondering what happens if we pick a different $\omega$. Let's try it. How about $\omega = z dz = - dw/ w^3$? This has a single zero at $z = 0$ and a triple pole at $w = 0$. Notice that the degree of the divisor associated to my new $\omega$ is $1 - 3 = -2$, agreeing with the degree for our previous choice, $\omega = dz$. This is how it should be. You expect divisors associated to any two choices of meromorphic differential to be linearly equivalent. Indeed, two divisors on $\mathbb P^1$ are linearly equivalent if and only if they have the same degree. If you're wondering what a differential actually is, the most practical advice I can give is not to worry. I just think of a differential as a formal expression of the form $f_1 dg_1 + \dots + f_n dg_n$ where $f_i$ and $g_i$ are functions on the curve. You manipulate these formal expressions using the rules that (i) $dc = 0$ for any constant $c$, (ii) $d(f+g) = df+ dg$ and (iii) $d(fg) = fdg + gdf$. These formal expressions are called Kahler differentials, and Hartshorne uses them.<|endoftext|> TITLE: Average of age in a family QUESTION [15 upvotes]: Some days ago my friend sent me this problem, and I couldn't solve it. It's a pretty simple problem, but I'm struggling with it. It reads: The average age in a family (mother, father and their children) is $18$. If we don't take the father, who is $38$, into the average, it drops to $14$. How many kids are in that family? So how many kids are in that family? Any help is very much appreciated. REPLY [3 votes]: We have one person of age $38$ and an unspecified number of people with average age $14.$ The average age of the entire group is $18.$ When averaging any set of numbers, the sum of all deviations from the average (taking deviations above average as positive, deviations below as negative) will be zero. 
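A quick numerical illustration of this zero-sum property (a minimal sketch in Python; the ages here are arbitrary made-up numbers):

```python
ages = [10, 14, 30]           # any list of numbers; here the mean is 18
mean = sum(ages) / len(ages)
deviations = [a - mean for a in ages]
print(sum(deviations))        # 0.0 -- deviations from the mean always cancel
```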
The father has a deviation of $20$ years above the average, so the total net deviation of all other members of the family from the average age is $-20.$ But the average deviation of the other $n$ members of the family from the whole-family average is $14 - 18 = -4.$ In order for $n$ people with an average deviation of $-4$ to add up to a total net deviation of $-20,$ we must have $n = (-20)/(-4) = 5.$ Therefore there are $5$ other family members, consisting of the mother and $4$ children.<|endoftext|> TITLE: Prove that the sum of pythagorean triples is always even QUESTION [54 upvotes]: Problem: Given $a^2 + b^2 = c^2$, show $a + b + c$ is always even. My attempt, case by case analysis: Case 1: $a$ is odd, $b$ is odd. From the first equation, $odd^2 + odd^2 = c^2$, $odd + odd = c^2 \implies c^2 = even$. Squaring a number does not change its congruence mod 2. Therefore $c$ is even, so $ a + b + c = odd + odd + even = even$. Case 2: $a$ is even, $b$ is even. Similar to above, $even^2 + even^2 = c^2 \implies c$ is even, so $a + b + c = even + even + even = even$. Case 3: One of $a$ and $b$ is odd, the other is even. Without loss of generality, we label $a$ as odd and $b$ as even: $odd^2 + even^2 = c^2 \implies odd + even = c^2 = odd$. Therefore $c$ is odd, so $a + b + c = odd + even + odd = even$. We have exhausted every possible case, and each shows $a + b + c$ is even. QED Follow up: Is there a proof that doesn't rely on case by case analysis? Can the above be written in a simpler way? REPLY [5 votes]: Consider $(a+b+c)^2$, which is $a^2 + b^2 + c^2 + 2(ab+bc+ca)$. Since $c^2 = a^2 + b^2$ ($c$ being the hypotenuse), $(a+b+c)^2 = 2(c^2 + ab + bc + ca)$, which is an even number. And since the square of an odd number is odd and the square of an even number is even, $a+b+c$ has to be even.<|endoftext|> TITLE: Big picture behind how to use KKT conditions for constrained optimization QUESTION [7 upvotes]: What is the point of KKT conditions for constrained optimization? In other words, what is the best way to use them? I have seen examples in different contexts, but I am missing a short overview of the procedure, in one or two sentences. Should we use them to find the optimal solution of a constrained problem? The reason I am very confused is that one of the conditions in KKT already requires the constraints of the original problem to hold. The question is: if we knew how to impose constraints in the first place, then why look at KKT conditions? Or should we use another one of the KKT conditions first, i.e. only set the gradient of the Lagrangian to zero and extract the solutions from that, and then check if the inequality and equality constraints hold? I would deeply appreciate it if you could clarify. REPLY [8 votes]: Since it doesn't seem that anybody is giving an answer, I will slightly elaborate on my comments above. The first thing to point out is that KKT conditions don't give a "procedure" as your question implies. Rather, KKT conditions give a "target" for procedures to move towards. KKT conditions are primarily a set of necessary conditions for optimality of (constrained) optimization problems. This means that if a solution does NOT satisfy the conditions, we know it is NOT optimal. In particular cases, the KKT conditions are stronger and are necessary and sufficient (e.g., Type 1 invex functions). In these cases, if a solution satisfies the system of KKT conditions it is globally optimal. So what do the KKT equations do for us? By giving us a system of equations, we can attempt to find a solution to them.
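As a toy illustration of solving such a system (my own hypothetical example, not from the cited contexts): minimize $f(x)=x^2$ subject to $x\ge 1$, i.e. $g(x)=1-x\le 0$ with multiplier $\mu\ge 0$. The KKT system is $2x-\mu=0$ (stationarity), $\mu(1-x)=0$ (complementary slackness), $1-x\le 0$ (primal feasibility) and $\mu\ge 0$ (dual feasibility). If $\mu=0$, stationarity forces $x=0$, which is infeasible; so $x=1$ and $\mu=2$, and $x=1$ is indeed the constrained minimizer.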
Typically, we can't solve these equations analytically, so we use numerical methods to solve them (e.g., sequential quadratic programming). If you have specific questions about numerical (or exact) methods in given contexts, I'd suggest asking a new question with those details.<|endoftext|> TITLE: Different ways to evaluate $\int_0^\infty xI_0(2x)e^{-x^2}\,dx$ QUESTION [7 upvotes]: Evaluate $$\int_0^\infty xI_0(2x)e^{-x^2}\,dx$$ where $$I_0(x) = \frac 1\pi \int_0^\pi e^{x\cos\theta}\,d\theta$$ is a Bessel function. Source: Berkeley Math Tournament. This question was on a math contest for high school students, so I am looking for other methods that preferably do not involve higher mathematics than Calc II. However, I am also interested in other ways to solve this problem that go beyond the normal calculus curriculum. My solution is posted below as an answer. REPLY [3 votes]: I thought it might be instructive to present a way forward that relies only on the series expansion of the exponential function, evaluating two integrals using reduction formulae, and straightforward arithmetic. To that end, we proceed. Using the Taylor series for $e^t=\sum_{n=0}^\infty \frac{t^n}{n!}$, with $t=2x\cos(\theta)$, we can write $I_0(2x)$ as $$\begin{align} I_0(2x)&=\frac1\pi \int_0^\pi e^{2x\cos(\theta)}\,d\theta\\\\ &=\frac1\pi \sum_{n=0}^\infty \frac{(2x)^n}{n!}\int_0^\pi \cos^n(\theta)\,d\theta\tag 1 \end{align}$$ Next, using the reduction formula $\int_0^\pi \cos^n(\theta)\,d\theta=\frac{n-1}{n}\int_0^\pi \cos^{n-2}(\theta)\,d\theta$, we find $$\int_0^\pi \cos^n(\theta)\,d\theta=\begin{cases}\pi\frac{n!}{(n!!)^2}&,n\,\text{even}\\\\0&,n\,\text{odd}\tag2\end{cases}$$ Using $(1)$ and $(2)$, we find that $$\begin{align} \int_0^\infty xe^{-x^2}I_0(2x)\,dx&= \frac1\pi\sum_{n=0}^\infty \underbrace{\frac{4^n}{(2n)!}\left(\pi\,\frac{(2n)!}{((2n)!!)^2}\right)}_{=\frac{\pi}{(n!)^2}}\,\,\underbrace{\int_0^\infty x^{2n+1}e^{-x^2}\,dx}_{=\frac12 n!}\\\\ &=\frac12\sum_{n=0}^\infty \frac{1}{n!}\\\\ &=\frac{e}{2} \end{align}$$ where we used the reduction formula $\int_0^\infty x^{2n+1}e^{-x^2}\,dx=n\int_0^\infty x^{2n-1}e^{-x^2}\,dx$, along with the elementary integral $\int_0^\infty xe^{-x^2}\,dx=\frac12$, to establish $\int_0^\infty x^{2n+1}e^{-x^2}\,dx=\frac12 n!$. Tools Used: The Taylor series for $e^x$, the reduction formula for $\int_0^\pi \cos^n(x)\,dx$, and the reduction formula for $\int_0^\infty x^{2n+1}e^{-x^2}\,dx$<|endoftext|> TITLE: Ideals of ring finitely generated if and only if ring Noetherian QUESTION [5 upvotes]: Let $R$ be a commutative unital ring. Then $R$ is Noetherian if and only if all ideals of $R$ are finitely generated. My proof: (=>): Suppose that $R$ is Noetherian, and $I$ is an ideal of $R$ which is not finitely generated. Then $\exists$ a set $A\subseteq R$ which contains infinitely many elements, such that $I=\langle A\rangle$. Let $A=\{a_1, a_2,...,a_n,...\}$; then we have $\langle a_1 \rangle\subset \langle a_1, a_2 \rangle\subset...\subset \langle a_1, a_2,...,a_n,... \rangle$, which is an infinite chain of nested ideals. This is a contradiction, thus $I$ must be finitely generated. (<=): Suppose that all ideals of $R$ are finitely generated. Let $I$ be an ideal of $R$; then $I=\langle A\rangle$, where $A\subseteq R$ is finite. Let $|A|=n$. Then, $\forall a_i\in A$, for $1\le i\le n$, $\langle a_1\rangle \subset \langle a_1, a_2 \rangle \subset... \langle a_1, a_2,..., a_n \rangle$, which is a finite chain of nested ideals. Hence, $R$ is Noetherian.
I would appreciate it if someone could please clarify the following: (1) Why do we need $R$ to be unital and commutative in this case? (2) Does an ideal generated by an infinite set in fact have infinitely many generators, or does this just mean that such an ideal is simply generated by infinitely many elements, but not necessarily infinitely many generators? (3) Is my proof OK? Thank you very much. REPLY [4 votes]: (1) You don't. The theorem is true without these assumptions. (2) The term "an ideal generated by an infinite set" is basically meaningless. Literally, it means simply an ideal $I$ such that some infinite subset $A\subseteq I$ generates $I$. But this is true of every infinite ideal $I$, since you can just take $A=I$. In any case, I don't know why you're asking about the meaning of this term because it does not appear in the statement of what you are trying to prove. I also don't understand what distinction you are making between "elements" and "generators". (3) Your proofs of both directions are incorrect. In the forward direction, you seem to be assuming that your ideal $I$ is countably generated, which need not be true. If $I$ is countably generated, then your argument is correct: you know your ascending sequence of ideals cannot stabilize because then $I$ would actually be generated by finitely many of the $a_n$, contradicting the assumption that $I$ is not finitely generated. However, there need not exist any countable set that generates $I$. So if you do not know there exists a countable set $A$ that generates $I$, you must argue more carefully. You might try choosing a sequence of elements $a_n\in I$ by induction so that each $a_n$ is not in the ideal generated by $a_1,\dots,a_{n-1}$ (why is this possible?). In the reverse direction, your argument is just completely incorrect: you are not proving the right thing. To prove that $R$ is Noetherian, you need to start with an arbitrary ascending sequence of ideals $$I_1\subseteq I_2\subseteq I_3\subseteq\dots$$ and prove the sequence must stabilize. To prove this, I suggest letting $I=\bigcup_n I_n$ and using the fact that $I$ is finitely generated.<|endoftext|> TITLE: Existence of positive integers $m,k$ such that $2^m > k\alpha > 2^m - 1$ for fixed irrational number $\alpha$. QUESTION [5 upvotes]: Let $\alpha \in \mathbb{R}\setminus\mathbb{Q}$; then I claim that there exist positive integers $m,k$ such that $2^m > k\alpha > 2^m - 1$. I tried many elementary approaches but all of them either reformulated the problem or cycled back on themselves. This area of maths is uncharted territory for me and so I don't know any advanced approaches I could use, but I am open to anything. REPLY [4 votes]: The claim is false; there exist $\alpha \in \mathbb{R} \setminus \mathbb{Q}$ that cannot be approximated in this way. To see this, rewrite the desired inequality as $$\frac{k}{2^m} < \frac1\alpha < \frac{k}{2^m-1} \tag{1}$$ If the right-hand inequality holds at all, it holds when $k$ is chosen as large as it can be while the left-hand inequality holds: $k = \left\lfloor \frac{2^m}{\alpha} \right\rfloor$. In other words, an equivalent task is to find $m$ such that $$\frac1\alpha < \frac{\lfloor 2^m/\alpha \rfloor}{2^m - 1} \tag{2}$$ We'll assume that $\alpha > 1$, so that $\frac1\alpha \in (0,1)$. Suppose that the binary expansion of $\frac1\alpha$ is $0.b_1b_2b_3b_4\dots$.
In that case, $k = \lfloor 2^m/\alpha \rfloor$ is the integer whose binary expansion is $b_1b_2b_3\dots b_m$, and $\frac{\lfloor 2^m/\alpha \rfloor}{2^m - 1}$ is the real number whose binary expansion repeats as follows: $$0.b_1b_2b_3\dots b_mb_1b_2b_3\dots b_mb_1b_2b_3\dots b_m \dots$$ So to rule out the possibility that $(2)$ holds for any $m$, it is sufficient to choose $\frac1\alpha$ to satisfy the following two conditions: (i) $b_1 = b_2 = 0$; (ii) in the binary expansion of $\frac1\alpha$, we never have $b_i = b_{i+1} = 0$ for any $i>1$. (Such an expansion can certainly be chosen to be non-periodic, so that $\alpha$ is indeed irrational.) Then for any $m$, $\frac1\alpha$ and $\frac{\lfloor 2^m/\alpha \rfloor}{2^m - 1}$ agree in the first $m$ bits, but the $(m+1)$-th and $(m+2)$-th bits of $\frac{\lfloor 2^m/\alpha \rfloor}{2^m - 1}$ are both $0$ to match $b_1$ and $b_2$, whereas at least one of $b_{m+1}$ and $b_{m+2}$ is $1$. Therefore $\frac{\lfloor 2^m/\alpha \rfloor}{2^m - 1} < \frac1\alpha$ and $(2)$ does not hold.<|endoftext|> TITLE: What does prolongation mean in differential geometry? QUESTION [12 upvotes]: What is the meaning of the term "prolongation" in differential geometry? Differential geometers often talk about "prolonging" a system of differential equations, or jet prolongation of bundle sections, but I don't really understand what mental picture the term "prolongation" is supposed to convey. Is it because when you introduce new variables for higher derivatives in a differential equation the system becomes "longer" when you write it down? Is that all there is to it, or is there some better reason for the terminology? REPLY [10 votes]: Actually it is not the system that becomes "longer" but the space of dependent variables. Basically, you want a geometric object representing not just the dependent and independent variables but also the partial derivatives that appear. So you "prolong" the space of dependent variables $U$ by spaces representing the partial derivatives of order $n$, denoted by $U_n$. Then $U^{(n)} = U \times U_1 \times \dots \times U_n$ is the $n$th prolonged space. If you have a smooth function $u = f(x)$ with $f \colon X \to U$ then its $n$th prolongation is $u^{(n)} = \mathsf{pr}^{(n)} f$, given by the partial derivatives up to order $n$. For example the $2$-prolongation of a function $f(x,y)$ would be $(f, \frac{\partial f}{\partial x},\frac{\partial f}{\partial y},\frac{\partial^2 f}{\partial x^2}, \frac{\partial^2 f}{\partial x \partial y}, \frac{\partial^2 f}{\partial y^2})$, representing the Taylor polynomial of second order. So the term prolongation originates from the prolongation of the space of dependent variables. See also the book of Olver: "Applications of Lie Groups to Differential Equations", Sec. 2.3 and 3.5.<|endoftext|> TITLE: On irrationality of natural logarithm QUESTION [22 upvotes]: Is there any rational number $r$ such that $\ln(r)$ is rational as well? If so, what's the proof? If proofs are too lengthy to be contained as an answer here, I would truly appreciate any easy-to-understand references to study them. REPLY [57 votes]: Aside from $r=1$, no. To prove it, suppose we had an example. Then we'd write $$\frac mn=e^{\frac ab}\implies e^a=\left( \frac mn \right)^b$$ But, with $a\neq 0$ this would tell us that $e$ was algebraic, which is not the case.<|endoftext|> TITLE: Liouville's Theorem QUESTION [5 upvotes]: Why does $f(z)=\cos(z^2)$ not contradict Liouville's theorem? Is the best approach to put $\cos(z^2)$ into its Taylor expansion? How can I visualize $\cos(z^2)$?
REPLY [13 votes]: Because in $\mathbb{C}$, $\cos$ and $\sin$ are not bounded functions like they are in $\mathbb{R}$. In particular, $\cos(ix)=\cosh(x)$, so $\cos$ grows exponentially on the imaginary axis.<|endoftext|> TITLE: Show that $\frac{P(z)}{Q(z)} = \sum_{k=1}^{n}\frac{P(\alpha_k)}{Q'(\alpha_k)(z-\alpha_k)}$ QUESTION [5 upvotes]: Here, $Q$ is a polynomial with distinct roots $\alpha_1, \ldots, \alpha_n$ and $P$ is a polynomial of degree $<n$.<|endoftext|> TITLE: Taking an integral of an integrand consisting of different power radical functions QUESTION [11 upvotes]: So I have this integral, $$\int_0^1 (1-x^7)^{1/3}-(1-x^3)^{1/7} dx$$ and I don't know where to start with this. I tried doing some algebra, but I'm not recognizing any patterns. (Maybe it is my tired brain, but I'm completely lost.) If any of you can point me in the right direction, give me some useful hints, or explain how to solve it, I would be forever grateful. Thank you! REPLY [6 votes]: HINT: If $(1-x^7)^{1/3}=y$, then $y^3+x^7=1$, and if $x=0$, $y=1$; if $x=1$, $y=0$. Likewise, if $(1-x^3)^{1/7}=y$, then $y^7+x^3=1$, and if $x=0$, $y=1$; if $x=1$, $y=0$. So the two curves are reflections of each other in the line $y=x$, hence they bound the same area over $[0,1]$, and the two integrals are equal, making the difference $0$.<|endoftext|> TITLE: Definition of Maximal atlas QUESTION [10 upvotes]: I somehow could not find the definition of a maximal atlas on a manifold. What I see is that an atlas is said to be a maximal atlas if it is not contained in any other atlas. What does this containment actually mean? Let $\mathcal{A}$ be an atlas and $\mathcal{B}$ be another atlas. When do we say that $\mathcal{A}$ is contained in $\mathcal{B}$? I was not able to find a definition of this. Another confusion is about the union of atlases. Let $\mathcal{A}$ and $\mathcal{B}$ be two atlases. What do we mean by the union of atlases? Is it just the union $\{(U,\phi)_{\phi\in \mathcal{A}},(V,\psi)_{\psi\in \mathcal{B}}\}$? It may happen that this union is not an atlas, i.e., there can be two charts $\phi_{\mathcal{A}}$ and $\psi_{\mathcal{B}}$ such that $\phi_{\mathcal{A}}$ and $\psi_{\mathcal{B}}$ are not compatible. By maximal atlas do I mean an atlas $\mathcal{A}$ such that for any other atlas $\mathcal{B}$, the union as above is not an atlas? Any reference for the definition is welcome. REPLY [12 votes]: "Contain" and "union" here literally mean just that. An atlas $\mathcal{A}$ is a set of charts $(U,\phi)$, and $\mathcal{A}$ is contained in $\mathcal{B}$ if $\mathcal{A}\subseteq\mathcal{B}$: that is, if every chart which is an element of $\mathcal{A}$ is also an element of $\mathcal{B}$. The union of two atlases is just the set $\mathcal{A}\cup\mathcal{B}$, which as you observe may not be an atlas. An atlas $\mathcal{A}$ is called maximal if there does not exist any atlas $\mathcal{B}$ such that $\mathcal{A}\subset\mathcal{B}$ (with a strict inclusion). This is equivalent to saying that if $\mathcal{B}$ is an atlas such that $\mathcal{A}\cup\mathcal{B}$ is an atlas, then $\mathcal{B}\subseteq\mathcal{A}$.<|endoftext|> TITLE: Example of continuous mapping of open(closed) set to not open(closed) set QUESTION [8 upvotes]: I want to find a continuous function $f:\textbf{R}^n \rightarrow \textbf{R}^m$ s.t. for some open subset $A$, $f(A)$ is not open, and for some closed $B$, $f(B)$ is not closed. I am able to find some mappings that satisfy one condition, e.g. $f(x)=\exp(-x)$ maps the closed set $[0,\infty)$ to the non-closed set $(0,1]$, but cannot find an example which satisfies both conditions. REPLY [8 votes]: Consider the continuous map $f(x)=e^{-|x|}$. Then $f(\Bbb R)=(0,1]$.
Note that $\Bbb R$ is both open and closed, but its image $(0,1]$ is neither open nor closed.<|endoftext|> TITLE: What classes of polygons are equivalent up to affine transformation? QUESTION [5 upvotes]: Let's call two planar figures "equivalent" if each of them is an affine transformation of the other. What are the equivalence classes of the convex figures in the plane? Some classes which I found are: all ellipses; all triangles; all parallelograms. However, not all quadrangles are in the same class. For example, a trapezoid is not equivalent to a parallelogram, since the former has only one pair of parallel sides while the latter has two, and it is known that affine transformations preserve parallelism. Moreover, a convex shape cannot be equivalent to a non-convex shape, since affine transformations preserve convexity. So my questions are: What are the equivalence classes of all quadrangles? (How many classes are there? If there are infinitely many, how many parameters are required to characterize them?) What are the equivalence classes of all convex polygons with $n$ vertices? REPLY [3 votes]: I will be considering labelled convex quadrilaterals $Q$ in the affine plane ${\mathbb A}^2$ with vertices $A, B, C, D$. Two such quadrilaterals $Q=ABCD, Q'=A'B'C'D'$ are affine-equivalent if there is an affine transformation $T$ which sends $A\mapsto A', B\mapsto B', C\mapsto C', D\mapsto D'$. Similarly, regarding ${\mathbb A}^2$ as an affine patch in the real projective plane ${\mathbb P}^2$, we define projectively equivalent quadrilaterals, by allowing projective transformations of ${\mathbb P}^2$. Now, observe that the affine group $Aff({\mathbb A}^2)$ acts simply transitively on the set of non-collinear triples of points in ${\mathbb A}^2$. Therefore, fixing three points in general position $A_0, B_0, D_0\in {\mathbb A}^2$, every convex quadrilateral in ${\mathbb A}^2$ is affine-equivalent to a quadrilateral of the form $A_0B_0CD_0$, where $C$ lies in an open unbounded convex region $R$ in the affine plane, bounded by the lines $A_0B_0, B_0D_0$ and $A_0D_0$. Hence, the space of affine equivalence classes (with your favorite topology) is homeomorphic to the region $R$, which, in turn, is homeomorphic to the affine plane itself. On the other hand, if we consider quadrilaterals up to projective equivalence, we can use the fact that the pointwise stabilizer of $\{A_0, B_0, D_0\}$ in $PGL(3, {\mathbb R})$ acts simply transitively on the region $R$. In order to see this, identify the line $B_0D_0$ with the "line at infinity" in the projective plane and identify the complement to this line with the affine plane. Then the stabilizer of $\{A_0, B_0, D_0\}$ in $PGL(3, {\mathbb R})$ is identified with the group of diagonal matrices (where $A_0$ serves as the origin in the affine plane and the lines $A_0B_0$, $A_0D_0$ serve as the coordinate axes; the region $R$ becomes an open coordinate quadrant). Since the group of diagonal linear transformations acts simply transitively on each open coordinate quadrant, the claim follows. Therefore, up to projective equivalence, there is exactly one convex quadrilateral. More generally, the space of convex $n$-gons modulo projective equivalence is homeomorphic to the open $2(n-4)$-dimensional ball. If you only consider them modulo affine equivalence, you get the open $2(n-3)$-dimensional ball.<|endoftext|> TITLE: How to interpret the sum of two series? QUESTION [5 upvotes]: I am a bit confused while recalling infinite series.
Thomas' Calculus says that the sum of two divergent series can be convergent, giving the example $\sum 1 + \sum (-1) = \sum 0 = 0$. We also know that $\sum(-1)^n$ is divergent. However, can we not think of the series $\sum(-1)^n=-1+1-1+1\cdots$ as equal to $\sum 1 + \sum (-1)$? What distinguishes these series exactly? REPLY [6 votes]: Suppose that we have two sequences $\{a_n:n\ge1\}$ and $\{b_n:n\ge1\}$ and we want to find the limit of the sum of these two sequences. Then $$ \lim_{n\to\infty}(a_n+b_n)=\lim_{n\to\infty}a_n+\lim_{n\to\infty}b_n $$ provided that both of the sequences $\{a_n:n\ge1\}$ and $\{b_n:n\ge1\}$ converge. If this is not the case, the equality might not hold. Let us recall that a series is the limit of the sequence of the partial sums. What you are actually doing in your example is a rearrangement of the terms of the series, which does not necessarily give you the same limit unless the series is absolutely convergent. By rearranging a conditionally convergent series you could actually get anything. This is the famous Riemann series theorem. I hope this helps.<|endoftext|> TITLE: Eigenvalues for a product of matrices QUESTION [8 upvotes]: It was mentioned in one MSE answer that eigenvalues of products of square matrices are equal (see the answer of user1551 for Eigenvalues of Matrices and Eigenvalue of product of Matrices). Let's denote this fact: $ \ \ \ \ $ $\text{eig}(AB)=\text{eig}(BA)$. However, how can this be explained in the case where the matrices don't commute? Does some kind of geometrical interpretation of this statement exist, at least in the case of 3D orthogonal matrices, where it is known that they usually don't commute? Can the statement be extended to the case of a product of more matrices, for example: $\text{eig}(A_1{A_2} ... A_n)=\text{eig}(A_n{A_{n-1}} ... A_1)= \text{eig}(A_{n-1}{A_{n-2}} ... A_n)=$ etc.? REPLY [3 votes]: On the spectra of cyclic permutations of matrix products You can show that cyclic permutations of matrix products have the same spectra rather easily. Consider $$ A_1 \ldots A_n \vec v = \lambda \vec{v}.$$ Now multiply both sides by $A_n$ on the left: $$ A_n A_1 \ldots A_{n-1} (A_n \vec{v}) = \lambda (A_n \vec{v}).$$ So $ A_n A_1 \ldots A_{n-1}$ has the same eigenvalues as $ A_1 \ldots A_n$, with corresponding eigenvectors given by the above expressions (note that for $\lambda\neq 0$ we have $A_n \vec{v}\neq \vec 0$, so it really is an eigenvector). You can do this procedure repeatedly to show that all cyclic permutations have the same spectrum. The two-matrix case is a special case of this. Geometric interpretation Sorry for the hand drawn picture. This gives an explanation for the case where the eigenvalue is $1$ or $-1$. The loci of vectors turned by the same amount due to a rotation matrix form a cone centred at the origin in 3D. When you combine two rotations, the eigenvectors corresponding to $1$ lie in the intersection of two such cones, one for each rotation matrix. The picture shows such an intersection. The cones intersect at two different vectors $\vec v$ and $A\vec{v}$ in the picture. When you reverse the order in which the rotations are applied, the eigenvector changes from one of these vectors to the other. I cannot think of any geometric interpretation for the case of complex eigenvalues.<|endoftext|> TITLE: (Quasi)coherent sheaves on smooth manifolds, and their applications QUESTION [7 upvotes]: Since a smooth (real) manifold is canonically a locally ringed space, we can define (quasi)coherent sheaves over smooth manifolds in the usual manner. But is the category of (quasi)coherent sheaves on a smooth manifold well-behaved?
That is, is it an abelian category? The nLab article on the topic states that for many ringed spaces the category of coherent sheaves will fail to be abelian. If (quasi)coherent sheaves on manifolds are indeed well-behaved, are there any interesting applications of these sheaves in differential geometry and differential topology? Obviously vector bundles, being locally free, are quasicoherent, so I'm sure there must be some nice uses of (quasi)coherent sheaves. I'm coming to sheaf theory from a differential geometry background, so I apologize if this is obvious stuff. REPLY [10 votes]: Coherence is a useless notion on a differential manifold! The reason is that there are no nontrivial coherent sheaves on a differential manifold of dimension $n\gt 0$, because the structural sheaf itself $\mathcal E=\mathcal C^\infty$ is not coherent. Let me show this for the simplest manifold: $\mathbb R$. Of course $\mathcal E$ is of finite type over itself, but $\mathcal E$ is not coherent because the kernel of a sheaf morphism $\phi:\mathcal E\to \mathcal E$ is not always of finite type. For example let $f\in \mathcal E (\mathbb R)$ be the infamous Cauchy smooth function such that $f(x)=0$ for $x\leq 0$ and $f(x)=\exp(-\frac {1}{x^2})\gt 0$ for $x\gt 0$, and consider the sheaf morphism $\phi:\mathcal E\to \mathcal E$ given by multiplication with $f$. The kernel $\mathcal K=\operatorname {Ker} \phi\subset \mathcal E$ of $\phi$ is the ideal sheaf of smooth functions $g$ such that $g(x)=0$ for $x\gt0$, and that sheaf is not of finite type, because even the stalk $\mathcal K_0$ is not a module of finite type over the ring $\mathcal E_0$. (Reason: $\mathcal K_0= x\mathcal K_0$ by Hadamard's lemma, and finite generation would imply $\mathcal K_0=0$ by Nakayama.) OK, but what are coherent sheaves good for anyway? They are tremendously useful for calculating cohomology: for example they are acyclic on Stein holomorphic manifolds or affine algebraic varieties and have finite-dimensional cohomology on compact holomorphic manifolds or projective algebraic varieties. So all is lost? Not at all! Luckily, thanks to the existence of smooth partitions of unity on paracompact manifolds, all sheaves of finitely generated $\mathcal E$-Modules are acyclic and thus are just as useful as coherent sheaves.<|endoftext|> TITLE: Find the area of a spherical triangle made by the points $(0, 0, 1)$, $(0, 1, 0)$ and $(\frac{1}{\sqrt{2}}, 0, \frac{1}{\sqrt{2}})$. QUESTION [5 upvotes]: Calculate the area of the spherical triangle defined by the points $(0, 0, 1)$, $(0, 1, 0)$ and $(\dfrac{1}{\sqrt{2}}, 0, \dfrac{1}{\sqrt{2}})$. I have come up with this: from the spherical Gauss-Bonnet formula, if $T$ is a triangle with interior angles $\alpha, \beta, \gamma$, then the area of the triangle $T$ is $\alpha + \beta + \gamma - \pi$. How do I work out the interior angles in order to use this formula? Any help appreciated. REPLY [4 votes]: $A(0, 0, 1)$, $B(0, 1, 0)$ and $C(\dfrac{1}{\sqrt{2}}, 0, \dfrac{1}{\sqrt{2}})$ satisfy $|A|=|B|=|C|=1$, so these points lie on the unit sphere. These points pairwise determine three planes, $x=0$, $y=0$ and $x=z$, and the angles between these planes are $\dfrac{\pi}{2}$, $\dfrac{\pi}{2}$ and $\dfrac{\pi}{4}$, since their normal vectors are $\vec{i}$, $\vec{j}$ and $\vec{i}-\vec{k}$, respectively (by $\cos\theta=\dfrac{u\cdot v}{|u|\,|v|}$).
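A quick numeric cross-check of these three angles (a minimal sketch, assuming Python with NumPy; the normals are the ones listed above):

```python
import numpy as np

# Normals of the three great-circle planes x=0, y=0 and x=z
i = np.array([1.0, 0.0, 0.0])
j = np.array([0.0, 1.0, 0.0])
n = np.array([1.0, 0.0, -1.0])  # normal of the plane x = z

def angle(u, v):
    # cos(theta) = u.v / (|u| |v|)
    return np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(angle(i, j) / np.pi)  # 0.5  -> pi/2
print(angle(j, n) / np.pi)  # 0.5  -> pi/2
print(angle(i, n) / np.pi)  # 0.25 -> pi/4
```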
At last $\sigma=\dfrac{\pi}{4}+\dfrac{\pi}{2}+\dfrac{\pi}{2}-\pi=\dfrac{\pi}{4}$.<|endoftext|> TITLE: Vector space dimension theorem compared to group subsets product formula QUESTION [7 upvotes]: I was wondering if there is a generalization from which the following theorems would both result. It seems that it should be related to the rank of the abelian group underlying the vector space, but I'm unable to formulate anything of this sort. Any enlightenment? Maybe generalizations from category theory?... The dimension theorem of vector spaces: $$ U,W\subset V \rightarrow \dim(U+W) = \dim(U) + \dim(W) - \dim(U\cap W) $$ The cardinality of a product of subgroups: $$ H,N\subset G \rightarrow |HN| = \frac{|H||N|}{|H\cap N|} $$ These become even more similar if we denote the group product by + and apply log: $$ \log|H+N| = \log|H| + \log|N| - \log|H\cap N| $$ I understand that basically both are related through the property of joint sets, but they should be related more than that (as with the extended inclusion-exclusion formula for probability, etc.) $$ |A\cup B| = |A| + |B| - |A\cap B| $$ REPLY [3 votes]: Here are two other perspectives, to complement C. Cain's answer. Connecting the dimension theorem to groups via logarithms: Vector spaces are, in particular, abelian groups. And finite dimensional vector spaces over finite fields are finite groups. In fact, if $V$ is a vector space over $\mathbb{F}_q$, then $|V| = q^{\dim(V)}$. So in the special case of $\mathbb{F}_q$-vector spaces, the group formula reads $$q^{\dim(V+W)} = \frac{q^{\dim(V)}q^{\dim(W)}}{q^{\dim(V\cap W)}} = q^{\dim(V)+\dim(W) - \dim(V\cap W)}.$$ This is a rather explicit instance of the connection with logs (base $q$, here) that you noticed. Connecting the dimension theorem to inclusion-exclusion: There's a general theory of objects supporting notions of dimension and independence, called pregeometries by model theorists (like me), and matroids by most other people. Two canonical examples of pregeometries are: (1) vector spaces, in which the closure operator is $\text{cl}(A) = \text{Span}(A)$ and the dimension of a set is the linear dimension of its closure; of course, the dimension theorem $\dim(X\cup Y) = \dim(X) + \dim(Y) - \dim(X\cap Y)$ always holds in a vector space; and (2) sets, in which the closure operator is trivial, $\text{cl}(A) = A$, and the dimension is cardinality, $\text{dim}(A) = |A|$; in a set, the dimension theorem is just the inclusion-exclusion principle, so again it always holds. Not every pregeometry satisfies the dimension theorem - those that do are called modular. A canonical example of a nonmodular pregeometry is a large algebraically closed field, with closure being algebraic closure and dimension being transcendence degree.<|endoftext|> TITLE: Probability that $x^2+y^2+z^2=0$ mod $p$ QUESTION [20 upvotes]: This question on MSE asked the following: "Given $x,y,z \in \mathbb{N},$ find the probability that $x^2+y^2+z^2$ is divisible by $7.$" The OP did not declare the assumed probability model, and was duly criticised for that. On the other hand, it is only natural to assume that $x$, $y$, $z$ are independently uniformly distributed mod $7$. A case analysis then shows that the probability in question is ${1\over7}$. This simple result led me to solve the same problem for the primes $p=3$, $5$, $11$, and $13$. In each case I obtained ${1\over p}$ as the result.
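These experiments amount to a brute-force count over all residue triples; a minimal sketch in Python, under the uniform model above:

```python
# Count triples (x, y, z) in {0,...,p-1}^3 with x^2 + y^2 + z^2 = 0 (mod p);
# probability 1/p corresponds to exactly p^2 of the p^3 triples.
for p in [3, 5, 7, 11, 13]:
    hits = sum((x * x + y * y + z * z) % p == 0
               for x in range(p) for y in range(p) for z in range(p))
    print(p, hits, p * p)  # hits == p^2 in every case
```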
Further experiments showed that the remainder of $s=x^2+y^2+z^2$ mod $p$ is not uniformly distributed mod $p$, but that in any case the probability of $s=0$ mod $p$ is equal to ${1\over p}$ for all $p\leq107$. This leads to the following Conjecture. Let the integers $x$, $y$, $z$ be independently uniformly distributed modulo the prime $p$. Then the probability that $s:=x^2+y^2+z^2$ is divisible by $p$ is equal to ${1\over p}$. Maybe this is well known. Otherwise I'd like to see a proof. REPLY [17 votes]: Found it. Given odd dimension $n$ and quadratic form $$ f = a_1 x_1^2 + a_2 x_2^2 + \cdots + a_n x_n^2, $$ everything in a finite field with an odd number of elements $q$, the count is $$ \#\left(f = b\right) \; = \; q^{n-1} + q^{(n-1)/2} \; \; \chi \left( \; (-1)^{(n-1)/2} \; b a_1 a_2 \ldots a_n\right). $$ At the bottom of page 91 Small points out that $$ \#\left(f = 0\right) \; = \; q^{n-1} . $$ When $b \neq 0$ we need to know what $\chi$ means. Aah. Page 86, very simple. We have a finite field $F$ and an element $a$. First, $\chi(0) = 0.$ If $a$ is a nonzero square, $\chi(a)=1.$ If $a$ is nonzero and not a square, $\chi(a)=-1.$ Reference: Charles Small, Arithmetic of Finite Fields, Theorem 4.6 on page 91.<|endoftext|> TITLE: Barry Simon quote QUESTION [5 upvotes]: In the back of my mind, I have the strong recollection (though perhaps I am confabulating) that Barry Simon once jokingly wrote something akin to "Let $H$ be a Hilbert space, taken to be separable (are there any other sort?)..." Can anybody please confirm this, and offer a reference to where the real quote was written? REPLY [10 votes]: That's how section 1.1 of "Trace Ideals and Their Applications" begins: Throughout, all our Hilbert spaces will be complex and separable (are there any others?)...<|endoftext|> TITLE: Silly doubt proving $2^{n-1}={n\choose 0}+ {n\choose 2}+ {n\choose 4}+ \dots $? QUESTION [6 upvotes]: I have to prove that $$2^{n-1}={n\choose 0}+ {n\choose 2}+ {n\choose 4}+ \dots $$ I checked the case where $n-1$ is even but I am a little confused when $n-1$ is odd: $$2^{n-1}={n-1\choose 0}+{n-1\choose 1}+{n-1\choose 2}+\dots +{n-1 \choose n-1}$$ I know we can group the terms pairwise with the identity ${n\choose k}+{n\choose k+1}={n+1 \choose k}$ and obtain: $$2^{n-1}={n\choose 0}+{n\choose 2}+{n\choose 4}+\dots +{n-1 \choose n-2 } +{n-1 \choose n-1}$$ But when I do the last pair I would obtain: $${n-1 \choose n-2 } +{n-1 \choose n-1}={n \choose n-2}$$ And the exercise says the last term should be $\displaystyle {n \choose n-1}$ or $\displaystyle {n\choose n}$. I might be missing something truly silly, but I'm not seeing it at the moment.
REPLY [2 votes]: $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 0}^{\infty}{n \choose 2k} & = \sum_{k = 0}^{\infty}{n \choose k}{1 + \pars{-1}^{k} \over 2} = {1 \over 2}\sum_{k = 0}^{\infty}{n \choose k} + {1 \over 2}\sum_{k = 0}^{\infty}{n \choose k}\pars{-1}^{k} \\[5mm] & = {1 \over 2}\,\pars{1 + 1}^{\,n} + {1 \over 2}\,\bracks{1 + \pars{-1}}^{\,n} = \bbx{\ds{2^{n - 1} + {1 \over 2}\,\delta_{n,0}}} \end{align}
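A quick numerical check of this closed form (a minimal sketch in Python; `math.comb` needs Python 3.8+):

```python
from math import comb

for n in range(8):
    even_sum = sum(comb(n, k) for k in range(0, n + 1, 2))
    expected = 1 if n == 0 else 2 ** (n - 1)  # 2^(n-1) + (1/2)*delta_{n,0}
    print(n, even_sum, expected)              # the two columns agree
```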