diff --git "a/stack-exchange/math_stack_exchange/shard_0.txt" "b/stack-exchange/math_stack_exchange/shard_0.txt" deleted file mode 100644--- "a/stack-exchange/math_stack_exchange/shard_0.txt" +++ /dev/null @@ -1,31917 +0,0 @@ -TITLE: What Does it Really Mean to Have Different Kinds of Infinities? -QUESTION [186 upvotes]: Can someone explain to me how there can be different kinds of infinities? -I was reading "The man who loved only numbers" by Paul Hoffman and came across the concept of countable and uncountable infinities, but they're only words to me. -Any help would be appreciated. - -REPLY [5 votes]: This is an answer to the following question marked as duplicate which redirects here: "I've known for some time that infinitary numbers can be different in order, such as the integers (countable), and the real numbers (uncountable). I read that you can always find a higher order of infinity given any order of infinity. Since infinity is the limit of the natural numbers under the successor function, I would like to know if there is a similar concept for orders of infinity under taking power-sets, if there is a sort of "super-infinity", a limit to the orders of infinity." -Yes, there is such a concept: the smallest strongly inaccessible cardinal. Roughly, it is the smallest uncountable infinity that can not be reached by taking either unions or power sets of infinities under it, see here http://en.wikipedia.org/wiki/Limit_cardinal. Existence of such cardinals is widely believed to be independent of the standard axioms of set theory (ZFC), in other words it can neither be proved nor disproved from them. However, there are many works, where people postulate existence of strongly inaccessible cardinals and see what they can derive from it. -Of course, even with such a postulate you still don't get the "infinity of all infinities", such a concept is self-contradictory according to the Russel paradox, but the smallest strongly inaccessible cardinal is in a similar relation to the ones under it regarding power sets as the countable cardinal is regarding successors and unions.<|endoftext|> -TITLE: How are we able to calculate specific numbers in the Fibonacci Sequence? -QUESTION [42 upvotes]: I was reading up on the Fibonacci Sequence, $1,1,2,3,5,8,13,\ldots $ when I noticed some were able to calculate specific numbers. So far I've only figured out creating an array and counting to the value, which is incredibly simple, but I reckon I can't find any formula for calculating a Fibonacci number based on it's position. -Is there a way to do this? If so, how are we able to apply these formulas to arrays? - -REPLY [3 votes]: This is an old post, but still... The relation -$$ -F_0=1, F_1 =1, F_n = F_{n-1}+F_{n-2}, n \ge 2 -$$ -defines a linear second order homogeneous difference equation. The solution can be found after computing the roots of the associated characteristic polynomial $p(\lambda)=\lambda^2-\lambda -1$, which are $\lambda = \frac{1 \pm \sqrt{5}}{2}$. The general solution is then given by -$$ -F_n= C_1 \left(\frac{1 + \sqrt{5}}{2} \right)^n + C_2 \left(\frac{1 - \sqrt{5}}{2} \right)^n -$$ -and the constants $C_1, C_2$ are computed knowing that $F_0 = F_1 = 1$. 
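-As a quick machine check of this step, here is a small Python sketch (mine, written for this answer) that solves the $2\times 2$ system for $C_1, C_2$ and compares the closed form against the recurrence:
-
-import math
-
-phi = (1 + math.sqrt(5)) / 2  # root (1 + sqrt 5)/2 of p(lambda)
-psi = (1 - math.sqrt(5)) / 2  # root (1 - sqrt 5)/2 of p(lambda)
-
-# Solve C1 + C2 = F_0 = 1 and C1*phi + C2*psi = F_1 = 1:
-C1 = (1 - psi) / (phi - psi)  # works out to phi/sqrt(5)
-C2 = 1 - C1                   # works out to -psi/sqrt(5)
-
-def fib_closed(n):
-    return C1 * phi**n + C2 * psi**n
-
-# Compare with the recurrence F_n = F_{n-1} + F_{n-2}, F_0 = F_1 = 1
-f = [1, 1]
-for n in range(2, 20):
-    f.append(f[-1] + f[-2])
-assert all(round(fib_closed(n)) == f[n] for n in range(20))
-
-Solving the system by hand gives $C_1 = \frac{1}{\sqrt 5}\cdot\frac{1+\sqrt 5}{2}$ and $C_2 = -\frac{1}{\sqrt 5}\cdot\frac{1-\sqrt 5}{2}$.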
So, finally,
-$$
-F_n= \frac{1}{\sqrt{5}} \left(\frac{1 + \sqrt{5}}{2} \right)^{n+1} - \frac{1}{\sqrt{5}} \left(\frac{1 - \sqrt{5}}{2} \right)^{n+1}
-$$
-(the exponent $n+1$ rather than $n$ reflects the convention $F_0 = F_1 = 1$ used in this question; with the usual convention $F_0 = 0$, $F_1 = 1$ the exponent is $n$).
-This is equivalent to Binet's formula, but the derivation provides a general process for dealing with linear recurrences.<|endoftext|>
-TITLE: What is a real number (also rational, decimal, integer, natural, cardinal, ordinal...)?
-QUESTION [19 upvotes]: In mathematics, there seem to be a lot of different types of numbers. What exactly are:
-
-Real numbers
-Integers
-Rational numbers
-Decimals
-Complex numbers
-Natural numbers
-Cardinals
-Ordinals
-
-And as workmad3 points out, some more advanced types of numbers (which I'd never heard of):
-
-Hyper-reals
-Quaternions
-Imaginary numbers
-
-Are there any other classifications of number that I missed?
-
-REPLY [3 votes]: You tossed out a ton of names associated with number systems, and I thought you might like to know this:
-
-With the exception of the natural numbers, ordinals and cardinals, all of the things in your post are examples of rings. Rings are a type of algebraic object that gather up most of the important properties of number systems.
-
-By "important properties" I'm thinking of addition, subtraction, multiplication and distributivity. It turns out we can do quite well without division and commutativity :)
-Rather than thinking of number systems as a menagerie of names like the one in your post, this might help you get a handle on most of them all at once. (Actually, after the next paragraph we'll be able to pick the natural numbers back up.)
-What are the technicalities keeping the three exceptions I mentioned from being rings? For one thing, the natural numbers, cardinals and ordinals don't really have any appropriate notion of subtraction, which is part of the definition of rings. Secondly, all rings are supposed to be made up of a set of points, and while the natural numbers are a set of points, the cardinals and ordinals are not.
-The natural numbers actually form a semiring, which is basically a ring that might not have subtraction. In this sense they are just a number system that is not quite as nice as a ring.
-The cardinals and ordinals have addition and multiplication operations too, which might qualify them to be semirings if they weren't so darn big (being a set is part of the definition of a semiring, too). But if you were willing to accept a semiring-that-is-a-proper-class, then you would have a type of object which encompasses all the examples you gave.<|endoftext|>
-TITLE: Why is the matrix-defined Cross Product of two 3D vectors always orthogonal?
-QUESTION [18 upvotes]: By matrix-defined, I mean
-$$\left< a,b,c \right>\times\left< d,e,f \right> = \left|
-
-\begin{array}{ccc}
-i & j & k\\
-a & b & c\\
-d & e & f
-\end{array}
-
-\right|$$
-...instead of the geometric definition (the product of the magnitudes multiplied by the sine of the angle between them, in the orthogonal direction).
-If I take the cross product of two vectors with no $k$ component, I get one with only a $k$ component, which is expected. But why?
-As has been pointed out, I am asking why the algebraic definition lines up with the geometric definition.
-
-REPLY [8 votes]: Here's an explanation in terms of the Hodge dual and the exterior (wedge) product.
-Let $\{e_1, e_2, e_3\}$ be the standard orthonormal basis for $\mathbb{R}^3$. Consider the two vectors $a = a_1 e_1 + a_2 e_2 + a_3 e_3$ and $b = b_1 e_1 + b_2 e_2 + b_3 e_3$. From the matrix computation we obtain the familiar formula
-$a\times b = (a_2 b_3 - a_3 b_2) e_1 + (a_3 b_1 - a_1 b_3) e_2 + (a_1 b_2 - a_2 b_1) e_3$.
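-A minimal Python sketch checking numerically that this vector is orthogonal to both $a$ and $b$ (the helper names cross and dot are mine, for illustration only):
-
-import random
-
-def cross(a, b):
-    # cofactor expansion of the determinant above
-    return (a[1]*b[2] - a[2]*b[1],
-            a[2]*b[0] - a[0]*b[2],
-            a[0]*b[1] - a[1]*b[0])
-
-def dot(u, v):
-    return sum(x*y for x, y in zip(u, v))
-
-for _ in range(1000):
-    a = [random.uniform(-10, 10) for _ in range(3)]
-    b = [random.uniform(-10, 10) for _ in range(3)]
-    c = cross(a, b)
-    assert abs(dot(c, a)) < 1e-8 and abs(dot(c, b)) < 1e-8
-
-Of course, a numerical check is not an explanation; for that, read on.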
-But (see note at the bottom)
-$a \wedge b = (a_1 b_2 - a_2 b_1) e_1 \wedge e_2 + (a_2 b_3 - a_3 b_2) e_2 \wedge e_3 + (a_3 b_1 - a_1 b_3) e_3 \wedge e_1$,
-where the wedge $\wedge$ represents the exterior product. One can now compute the dual of this latter expression using the fact that the left contraction of $(e_1 \wedge e_2)$ onto $(e_3 \wedge e_2 \wedge e_1)$ is $e_3$ (and similar relations). The result is that
-$a \times b = (a \wedge b)^*$,
-that is, the cross product of $a$ and $b$ is the dual of their exterior product.
-Geometrically, this is an incredible picture. The exterior product is the plane element spanned by both $a$ and $b$, and the dual is the vector orthogonal to that plane.
-This is my favorite interpretation of the cross product, but it's only helpful, of course, if you're familiar with exterior algebra and the Hodge dual.
-Note: The wedge product can be found by formally computing
-$(a_1 e_1 + a_2 e_2 + a_3 e_3) \wedge (b_1 e_1 + b_2 e_2 + b_3 e_3)$
-using the distributivity and anticommutation relations of the exterior product.<|endoftext|>
-TITLE: The cow in the field problem (intersecting circular areas)
-QUESTION [21 upvotes]: What length of rope should be used to tie a cow to an exterior fence post of a circular field so that the cow can only graze half of the grass within that field?
-updated: To be clear: the cow should be tied to a post on the exterior of the field, not a post at the center of the field.
-
-REPLY [16 votes]: (This answer refers to a figure that is no longer shown.) The field is the smaller/left circle, centered at $A$. The cow is tied to the post at $E$. The larger/right circle is the grazing radius. Let the radius of the field be $R$ and the length of the rope be $L$.
-The grazable area is the union of a segment of the circular field and a segment of the circle defined by the rope length. (A segment of a circle is a sector of the circle less the triangle defined by the center of the circle and the endpoints of the arc.) The area of a segment of a circle of radius $R$ with central angle $t$ is $\frac{1}{2}R^2(t-\sin(t))$, where $t$ is measured in radians.
-In order to express the grazable area in terms of $R$ and one angle, we consider the angles $\angle CED$ and $\angle CAD$ (which define the segments of the circles; call these $\alpha$ and $\beta$ for convenience) and the triangle $CEF$. Let $\theta$ be $\angle EFC$. Then $2\theta$ is an inscribed angle for the central angle $\beta$ over the same arc, making $\beta = 4\theta$. The sum of angles in triangle $CEF$ is $\theta + \pi/2 +\alpha/2=\pi$, so $\alpha =\pi-2\theta$.
-The grazable area is $\frac{1}{2}L^2(\alpha-\sin\alpha)+\frac{1}{2}R^2(\beta-\sin\beta)=R^2(\frac{1}{2}(L/R)^2((\pi-2\theta)-\sin(\pi-2\theta))+\frac{1}{2}(4\theta-\sin(4\theta)))$, where $L/R=2\sin(\theta)$ (the chord $CE$ has length $L$). We want that to be equal to half the area of the field, $\frac{1}{2}\pi R^2$.
-That is, the equality of areas is $$R^2(2(\sin(\theta))^2((\pi-2\theta)-\sin(\pi-2\theta))+\frac{1}{2}(4\theta-\sin(4\theta)))=R^2\frac{\pi}{2}$$
-Simplifying (the $R^2$ cancels):
-$$\pi+(2\theta-\pi)\cos(2\theta)-\sin(2\theta)=\frac{\pi}{2}$$
-(The grazable area, in units of $R^2$, seems to be $\pi+\alpha\cos\alpha-\sin\alpha$; can this be seen easily?)
-
-The desired equality of areas is obtained for $\theta \approx 0.618$, that is, $L\approx 1.159\,R$.<|endoftext|>
-TITLE: What Is An Inner Product Space?
-QUESTION [33 upvotes]: As I understand it, the dot product is just one example of an inner product. Can someone explain this concept? When is it useful to define the inner product as something other than the dot product?
-
-REPLY [5 votes]: An inner product space is a vector space paired with an extra operation, and is thus an ordered pair consisting of the pair for the vector space, $(\mathbb V, \mathbb F)$, and the operation, usually denoted $\langle\cdot,\cdot\rangle$. So this would look like $((\mathbb V,\mathbb F),\langle\cdot,\cdot\rangle)$. There is, of course, the usual structure on $\mathbb V$ and $\mathbb F$ making $(\mathbb V, \mathbb F)$ a vector space (vector addition, scalar multiplication of vectors, and the field operations on $\mathbb F$). The inner product is then a function of two vector arguments (with $\mathbb V$ the set of vectors and $\mathbb F$ the set of scalars):
-$$
-\langle\cdot,\cdot\rangle:\mathbb V \times \mathbb V \to \mathbb F
-$$
-where the notation $\langle\vec a , \vec b\rangle = c$ means the product of $\vec a$ and $\vec b$ (in $\mathbb V$) returns the value $c$ in $\mathbb F$.
-The product must then satisfy a short list of properties (in order for the product space to have results that are consistent and useful). I'm going to drop the vector notation above the elements, since it should be clear that anything inside the product is from the vector space, and any result is a scalar from the field. First, conjugate symmetry:
-$$
-\langle a,b\rangle = \overline {\langle b,a\rangle}
-$$
-which is to say that $\langle a,b\rangle$ is the complex conjugate of $\langle b, a\rangle$. Over the real numbers this becomes simply $\langle a,b\rangle = \langle b,a\rangle$.
-The product must also be linear in the first argument, that is (with $x$ a scalar):
-$$
-\langle xa+y,b\rangle = x\langle a,b\rangle + \langle y,b\rangle
-$$
-And lastly, positive-definiteness, which relates the product to a norm on the space:
-$$
-\langle x,x\rangle \geq 0
-$$
-with equality only when $x=0$. That is, a norm can always be defined in terms of the inner product:
-$ ||x||$ = $\sqrt{\langle x,x\rangle}$
-So the existence of an inner product guarantees that there is always a norm on an inner product space (namely the one defined by the square root of the inner product).
-The simplest example other than the dot product is probably the inner product of two functions given by integration, which returns a value in the scalar field over which the functions are regarded as vectors (this has been mentioned previously).<|endoftext|>
-TITLE: Sum of Gaussian Variables
-QUESTION [7 upvotes]: Let's say I know $X$ is a Gaussian variable, and that $Y=X+Z$.
-Suppose $X$ and $Z$ are independent.
-How can I prove that $Y$ is a Gaussian random variable if and only if $Z$ is a Gaussian r.v.?
-It's easy to show one direction ($X$, $Z$ independent and normal form a Gaussian vector, hence any linear combination of the two is a Gaussian variable).
-Thanks
-
-REPLY [4 votes]: Your question: given that X and Z are independent, X is Gaussian (I'll use "normal"), and Y = X+Z, prove that Y is normal iff Z is normal. Right? As you observed, one direction is easy: if Z is normal, then so is Y=X+Z. So for the other direction, assume that Y is normal. We need to prove that Z is normal too.
-Perhaps there's an even easier way, but it's straightforward to use characteristic functions, which completely characterise distributions. Because X and Z are independent,
-$ \varphi_Y(t) = E[e^{itY}] = E[e^{it(X+Z)}] = E[e^{itX}]E[e^{itZ}]$, and so,
-$ \varphi_Z(t) = E[e^{itZ}] = E[e^{itY}]/E[e^{itX}] $
-(the division is legitimate because a Gaussian characteristic function never vanishes). This means that Z has exactly the right characteristic function for a normal variable, and hence it's normal.
-
-More interestingly and much more generally, there is a theorem of Cramér (e.g.
see here) which says that if X and Z are independent and X+Z is normally distributed, then both X and Z are!<|endoftext|>
-TITLE: Understanding Dot and Cross Product
-QUESTION [53 upvotes]: What purposes do the Dot and Cross products serve?
-Do you have any clear examples of when you would use them?
-
-REPLY [2 votes]: If I had to name two concepts that come up again and again in engineering calculations, they would be the dot and cross products. There are several interpretations of the dot and cross product, and they can be applied in various scenarios: the angle between vectors, or the projection of one vector in the direction of another, as mentioned in the posts above.
-Going a few steps further, interesting facts to remember are that the cross product of two vectors can be written as the product of a (skew-symmetric) matrix and a vector, and that the norm of a cross product can be expressed as a determinant.
-The triple product of three vectors, namely $a\cdot(b\times c)$, represents the signed volume of a parallelepiped, and this triple product can also be written as a determinant. This useful fact can be used to prove coplanarity of three vectors (of course after representing/considering the points in question as vectors).<|endoftext|>
-TITLE: List of Interesting Math Blogs
-QUESTION [192 upvotes]: I follow one or two interesting math blogs in my feedreader. It would be interesting to compile a list of math blogs that are worth reading and do not require research-level math skills.
-I'll start with my entries:
-
-Division By Zero
-Tanya Khovanova’s Math Blog
-
-REPLY [3 votes]: For statistics and related topics, see the very interesting:
-http://andrewgelman.com/
-See his blogroll for more of the same!<|endoftext|>
-TITLE: How to accurately calculate the error function $\operatorname{erf}(x)$ with a computer?
-QUESTION [21 upvotes]: I am looking for an accurate algorithm to calculate the error function
-$$\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\ dt$$
-I have tried using the rational approximation of formula 7.1.26 from the Handbook of Mathematical Functions (the formula appeared here as an image, now lost), but the results are not accurate enough for the application.
-
-REPLY [2 votes]: You could get good theoretical approximations of the error function using
-$$\text{erf}\left(x\right)\sim \text{sgn}(x)\sqrt{1-\exp\Big(-\frac 4 {\pi}\,\frac{1+P_n(x^2)}{1+Q_n(x^2)}\,x^2 \Big)}$$
-For example
-$$P_1(x^2)=\frac{10-\pi ^2}{5 (\pi -3) \pi }x^2\qquad Q_1(x^2)=\frac{120-60 \pi +7 \pi ^2}{15 (\pi -3) \pi }x^2$$ while
-$$P_2(x^2)=\frac{105840-110880 \pi +37800 \pi ^2-4260 \pi ^3+69 \pi ^4-17 \pi ^5}{3 \pi \left(-12600+12600 \pi -3360 \pi ^2-30 \pi ^3+73 \pi ^4\right)}x^2+$$ $$\frac{-2116800+1270080 \pi ^2-504000 \pi ^3+48510 \pi ^4+503 \pi ^6}{315 \pi ^2 \left(-12600+12600 \pi -3360 \pi ^2-30 \pi ^3+73 \pi ^4\right)}x^4$$
-$$Q_2(x^2)=\frac{60480-70560 \pi +27720 \pi ^2-3600 \pi ^3-143 \pi ^4+43 \pi ^5}{\pi \left(-12600+12600 \pi -3360 \pi ^2-30 \pi ^3+73 \pi ^4\right)}x^2+$$
-$$\frac{-6350400+8467200 \pi -4515840 \pi ^2+1192800 \pi ^3-145320 \pi ^4+2380 \pi ^5+793 \pi ^6}{105 \pi ^2 \left(-12600+12600 \pi -3360 \pi ^2-30 \pi ^3+73 \pi ^4\right)}x^4$$ and so on.
-The first one gives a maximum absolute error of $0.000150$ while the second gives a maximum absolute error of $0.000012$.
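-As a sanity check of the first approximant, here is a Python sketch (mine; it assumes the $P_1, Q_1$ coefficients exactly as displayed above) comparing it against math.erf on a grid:
-
-import math
-
-def erf_approx1(x):
-    t = x * x
-    p = (10 - math.pi**2) / (5 * (math.pi - 3) * math.pi) * t
-    q = (120 - 60*math.pi + 7*math.pi**2) / (15 * (math.pi - 3) * math.pi) * t
-    s = math.copysign(1.0, x)
-    return s * math.sqrt(1 - math.exp(-4/math.pi * (1 + p) / (1 + q) * t))
-
-worst = max(abs(erf_approx1(k/1000) - math.erf(k/1000)) for k in range(-6000, 6001))
-print(worst)  # should land near the 0.000150 quoted above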
-With regard to the infinite norms, they are respectively $3.04\times 10^{-8}$ and $1.20\times 10^{-10}$.<|endoftext|> -TITLE: Chased by a lion and other pursuit-evasion problems -QUESTION [15 upvotes]: I am looking for a reference (book or article) that poses a problem that seems to be a classic, in that I've heard it posed many times, but that I've never seen written anywhere: that of the possibility of a man in a circular pen with a lion, each with some maximum speed, avoiding capture by that lion. -References to pursuit problems in general would also be appreciated, and the original source of this problem. - -REPLY [5 votes]: The book is Coffee in Memphis by Bollobas. It's the first problem, and there are loads more : -http://www.amazon.com/Art-Mathematics-Coffee-Time-Memphis/dp/0521693950<|endoftext|> -TITLE: Faulty logic when summing large integers? -QUESTION [6 upvotes]: This is in relation to the Euler Problem $13$ from http://www.ProjectEuler.net. -Work out the first ten digits of the sum of the following one-hundred $50$-digit numbers. -$37107287533902102798797998220837590246510135740250$ -Now, this was my thinking: -I can freely discard the last fourty digits and leave the last ten. -$0135740250$ -And then simply sum those. This would be large enough to be stored in a $64$-bit data-type and a lot easier to compute. However, my answer isn't being accepted, so I'm forced to question my logic. -However, I don't see a problem. The last fourty digits will never make a difference because they are at least a magnitude of $10$ larger than the preceding values and therefore never carry backwards into smaller areas. Is this not correct? - -REPLY [10 votes]: If you were supposed to find the last ten digits, you could just ignore the first 40 digits of each number. However you're supposed to find the first ten digits, so that doesn't work. And you can't just ignore the last digits of each number either because those can carry over. - -REPLY [6 votes]: First you are doing it in the wrong end, second, the statement in general is still not correct. -for example: -9999999999 -1000000001 - -Say if you want the first 2 digits, -you will get 10 if you discard the last 2 digit and do the sum. -The right answer is 11<|endoftext|> -TITLE: What are gradients and how would I use them? -QUESTION [8 upvotes]: I keep seeing this symbol $\nabla$ around and I know enough to understand that it represents the term "gradient." But what is a gradient? When would I want to use one mathematically? - -REPLY [11 votes]: The ∇ (pronounced "del") is an operator, more technically. In 3D, it (more or less) means the vector -< df/dx, df/dy, df/dz > - -So, if f(x,y,z) = x^2 + y^3*z + sin(z), ∇f = < 2x, 3y^2*z, y^3 + cos(z) > -It's actually a bit more subtle than that; technically it means -< d/dx, d/dy, d/dz > - -And when you do ∇f, it's sort of like a "multiplication" of ∇ and f; -< d/dx, d/dy, d/dz > f = < d/dx f, d/dy f, d/dz f > - -Only, not multiplication, but operation. -There are some neat properties about the del operator. Here are a couple: - -The most famous is that ∇f yields the gradient of f. That is, at any point (x,y,z), ∇f(x,y,z) is the vector pointing in the direction where it is most increasing. The magnitude of it is the magnitude of the increase. -This is easier to understand with, say, a 2D f(x,y). If f(x,y) represents the height of a point at (x,y), then ∇f(x,y) represents the steepest incline from that point. 
Or rather, if you placed a ball on that point, it would start rolling in the opposite direction of the gradient vector. -Normally, for multi-dimensional functions, it is easiest to find the derivative along an axis (x, y, z, etc.). With ∇, you can find the derivative along any arbitrary direction by using ∇f * u, where * is the dot product and u is the unit vector along the direction you are calculating. -∇ is also used to calculate divergence (amount that vectors are "spreading out") and curl (amount that vectors are "curling up") of a vector field. -Divergence is ∇ * f (dot product), and curl is ∇ x f (cross product) -They aren't truly "products" in the sense. Rather, when you are calculating divergence and curl and you must do d/dx * (something), you are actually doing d/dx (something) or d(something)/dx.<|endoftext|> -TITLE: What are some classic fallacious proofs? -QUESTION [20 upvotes]: If you know it, also try to include the precise reason why the proof is fallacious. To start this off, let me post the one that most people know already: - - Let $a = b$. - Then $a^2 = ab$ - $a^2 - b^2 = ab - b^2$ - Factor to $(a-b)(a+b) = b(a-b)$ - Then divide out $(a-b)$ to get $a+b = b$ - Since $a = b$, then $b+b = b$ - Therefore $2b = b$ - Reduce to $2 = 1$ - -As @jan-gorzny pointed out, in this case, line 5 is wrong since $a = b$ implies $a-b = 0$, and so you can't divide out $(a-b)$. - -REPLY [6 votes]: The odd number $N = 198585576189 = -3^2 \cdot 7^2 \cdot 11^2 \cdot 13^2 \cdot 22021$ has an interesting property—it is perfect: -$$\sigma(N) = (1 + 3 + 3^2)(1 + 7 + 7^2)(1 + 11 + 11^2)(1 + 13 + 13^2)(1 + 22021) = 397171152378 = 2N$$ -Now, where is the catch? (This one was found by René Descartes. It is also the only known odd number to have this property.) - - We pretend that the number $22021 = 19^2 \cdot 61$ is prime.<|endoftext|> -TITLE: Why is $1$ not a prime number? -QUESTION [133 upvotes]: Why is $1$ not considered a prime number? -Or, why is the definition of prime numbers given for integers greater than $1$? - -REPLY [2 votes]: The Question should be _"Why is the word $prime$ used only for a positive non-invertible indecomposable integer?"_The Answer is that Positive-NonInvertible-Indecomposable-Integer is too long, except in German. If you need a name for numbers that are either prime or $1$, you may invent one!<|endoftext|> -TITLE: Real world uses of hyperbolic trigonometric functions -QUESTION [44 upvotes]: I covered hyperbolic trigonometric functions in a recent maths course. However I was never presented with any reasons as to why (or even if) they are useful. -Is there any good examples of their uses outside academia? - -REPLY [9 votes]: The hyperbolic tangent is also related to what's called the Logistic function: -$L(x)=\frac{1}{1+e^{-x}}=\frac{1+\tanh(\frac{x}{2})}{2}$ -Among many uses and applications of the logistic function/hyperbolic tangent there are: - -Being an activation function for Neural Networks. These are universal function approximators that are pretty much becoming central to modern A.I. -The Fermi-Dirac Distribution and Ising Model in statistical mechanics -Being a sigmoid function ("S-shaped") means that it can be a candidate to a cumulative distribution function assuming that its derivative can be used to model some random variable -Modelling population growths/declines. Although this is more in the realm of Biology, it certainly has quite some appeal from purely the perspective of a dynamical system. 
-Considering $\tanh(kx)$, one can approximate the Heaviside step function (by setting $k$ to a sufficiently large number) in such a way that it is still continuous and infinitely differentiable. This can be used when solving DEs in physics to analyse the action of, for one example, turning on a switch. - -Moving on to $\cosh (x)$, it also has some nice use-cases: - -A hanging inelastic chain takes the shape of $\cosh (x)$. This shape is called a catenary. -A soap film joining two parallel, disjoint wireframe circles is the surface of revolution of $\cosh (x)$. This in general pops up a lot when studying minimal surfaces. -In the canonical formalism of Statistical Mechanics, the partition function of a 2-level system with state energies of $\pm\varepsilon$ system is given by $Z\propto\cosh(\varepsilon/k_B T)$. This then gives us that the average energy of the system is given by $\langle E\rangle \propto \varepsilon \tanh(\varepsilon/k_B T)$ taking us back to $\tanh (x)$ -In architecture, if you have a free-standing (i.e. unloaded and unsupported) arch, the optimal shape to handle the lines of thrust produced by its own weight is $\cosh(x)$. The dome of Saint Paul's Cathedral in England has a $\cosh(x)$ cross-section. This type of arch was also favoured by architect Antoni Gaudí in his work.<|endoftext|> -TITLE: How do you prove that $p(n \xi)$ for $\xi$ irrational and $p$ a polynomial is uniformly distributed modulo 1? -QUESTION [15 upvotes]: The Weyl equidistribution theorem states that the sequence of fractional parts ${n \xi}$, $n = 0, 1, 2, \dots$ is uniformly distributed for $\xi$ irrational. -This can be proved using a bit of ergodic theory, specifically the fact that an irrational rotation is uniquely ergodic with respect to Lebesgue measure. It can also be proved by simply playing with trigonometric polynomials (i.e., polynomials in $e^{2\pi i k x}$ for $k$ an integer) and using the fact they are dense in the space of all continuous functions with period 1. In particular, one shows that if $f(x)$ is a continuous function with period 1, then for any $t$, $\int_0^1 f(x) dx = \lim \frac{1}{N} \sum_{i=0}^{N-1} f(t+i \xi)$. One shows this by checking this (directly) for trigonometric polynomials via the geometric series. This is a very elementary and nice proof. -The general form of Weyl's theorem states that if $p$ is a monic integer-valued polynomial, then the sequence ${p(n \xi)}$ for $\xi$ irrational is uniformly distributed modulo 1. I believe this can be proved using extensions of these ergodic theory techniques -- it's an exercise in Katok and Hasselblatt. I'd like to see an elementary proof. -Can the general form of Weyl's theorem be proved using the same elementary techniques as in the basic version? - -REPLY [8 votes]: There is a fairly good exposition in Terry Tao's post, see Corollaries 4-6. Here is a sketch: -We prove the more general statement: Let $p(n)= \chi n^d + a_{d-1} n^{d-1} + \cdots + a_1 n + a_0$ be any polynomial, with $\chi$ irrational. Then $p(n) \mod 1$ is equidistributed. Our proof is by induction on $d$; the base case $d=1$ is standard. -Set $e(x) = e^{2 \pi i x}$. By the standard trickery with exponential polynomials, it is enough to show -$$\sum_{n=0}^{N-1} e(p(n)) = o(N).$$ -Choose a positive integer $h$. 
With a small error, we can replace the sum by -$$\sum_{n=0}^{N-1} (1/h) \left( e(p(n)) + e(p(n+1)) + \cdots + e(p(n+h-1)) \right).$$ -By Cauchy-Schwarz, this is bounded by -$$\frac{\sqrt{N}}{h} \left[ \sum_{n=0}^{N-1} \left( e(p(n)) + \cdots + e(p(n+h-1)) \right) \overline{ \left( e(p(n)) + \cdots + e(p(n+h-1)) \right)} \right]^{1/2}.$$ -Expanding the inner sum, we get $h^2$ terms of the form $e(p(n) - p(n+k))$. There are $h$ terms where $k=0$; these each sum up to $N$. For the other $h^2-h$ terms, the sum is of the form $\sum_{n=0}^{N-1} e(q(n))$, where $q$ has leading term $\chi d n^{d-1}$. By induction, each of these sums is $o(N)$. -So the quantity in the square root is -$$hN+o(N)$$ -where the constant in the $o$ depends on $h$ and $\chi$. Putting it all together, we get a bound of -$$N/\sqrt{h} + o(N).$$ -Since $h$ was arbitrary, this proves the result.<|endoftext|> -TITLE: Why is "the set of all sets" a paradox, in layman's terms? -QUESTION [102 upvotes]: I've heard of some other paradoxes involving sets (i.e., "the set of all sets that do not contain themselves") and I understand how paradoxes arise from them. But this one I do not understand. -Why is "the set of all sets" a paradox? It seems like it would be fine, to me. There is nothing paradoxical about a set containing itself. -Is it something that arises from the "rules of sets" that are involved in more rigorous set theory? - -REPLY [5 votes]: Some of the comments in the previous answers make a subtle mistake, and I think it may be worth clarifying some issues. I am assuming the standard sort of set theory in what follows. -Cantor's diagonal theorem (mentioned in some of the answers) gives us that for any set $X$, $|X|<|\mathcal P(X)|$. Unlike what some comments claim, this really has nothing to do with cardinalities. All it says is that no map $f\!:X\to\mathcal P(X)$ is onto. The usual proof proceeds by noting that $A=\{x\in X:x\notin f(x)\}$ is not in the range of $f$, because if $A=f(a)$, then $a\in A$ if and only if $a\notin f(a)=A$. -The usual argument for Russell's paradox (also mentioned in some of the answers) proceeds by considering $A=\{a:a\notin a\}$. If $V$ is a set, $A$ would be a set as well (by comprehension, if you wish), and we reach a contradiction by noting that $A\in A$ if and only if $A\notin A$. -I think it is misleading to think (as some of the comments suggest) that the two proofs are (fundamentally) different. They are essentially the same. -The point is that if there is a set of all sets (let's call it $V$), then $\mathcal P(V)$ is a set as well (by comprehension) and in fact $\mathcal P(V)=V$ because, on the one hand, any subset of $V$ is a set, and therefore a member of $V$, and on the other hand, any member of $V$ is itself a subset of $V$ (since the members of any set are sets themselves), and therefore a member of $\mathcal P(V)$. -Now, the identity function is a map from $V$ to $\mathcal P(V)$. Let's call it $f$. The set not in the range of $f$ given to us by the standard proof of Cantor's diagonal theorem recalled above is $A=\{x\in V:x\notin f(x)\}$, which in this case reduces to $\{x:x\notin x\}$, which in turn is precisely the set given by the standard proof of Russell's paradox. -[What to make of this result from a foundational point of view is another matter. 
In $\mathsf{ZF}$ the conclusion is just that there is no set of all sets, although it is perhaps more accurate to say that the result is used to justify dismissing unrestricted comprehension and adopting instead the version of bounded comprehension used in $\mathsf{ZF}$. In $\mathsf{NF}$ the solution is instead to limit unbounded comprehension by only allowing stratified instances. Other foundational solutions may and have been adopted as well.]<|endoftext|> -TITLE: Why is the volume of a sphere $\frac{4}{3}\pi r^3$? -QUESTION [108 upvotes]: I learned that the volume of a sphere is $\frac{4}{3}\pi r^3$, but why? The $\pi$ kind of makes sense because its round like a circle, and the $r^3$ because it's 3-D, but $\frac{4}{3}$ is so random! How could somebody guess something like this for the formula? - -REPLY [2 votes]: I am no where near as proficient in math as any of the people who answered this before me, but nonetheless I would like to add a simplified version; -A cylinder's volume is: -$$\pi r^2h$$ -A cone's volume is $\frac{1}{3}$ that of a cylinder of equal height and radius: -$$\frac{1}{3}\pi r^2h$$ -A sphere's volume is two cones, each of equal height and radius to that of the sphere's: -$$\frac{1}{3}\pi r^2h + \frac{1}{3}\pi r^2h$$ -The height of the sphere is equal to it's diameter $(r + r)$ so the earlier equation can be rewritten as; -$$\frac{1}{3}\pi r^2(r + r) + \frac{1}{3}\pi r^2(r + r)$$ -If we simplify it; -$$\frac{1}{3}\pi r^2(2r) + \frac{1}{3}\pi r^2(2r)$$ -Following the math convention of numbers before letters it changes to: -$$\frac{1}{3}2\pi r^2r + \frac{1}{3}2\pi r^2r$$ -Combining like terms; -$$r^2\cdot r= r^3$$ -and -$$\frac{1}{3}\cdot 2 = \frac{2}{3}$$ -The equation now becomes -$$\frac{2}{3}\pi r^3 + \frac{2}{3}\pi r^3$$ -Again add the like terms, being the $\frac{2}{3}$ together; -$$\frac{2}{3} + \frac{2}{3} = \frac{4}{3}$$ -Finally we get to how $\frac{4}{3}$ is part of the equation; -$$\frac{4}{3}\pi r^3$$<|endoftext|> -TITLE: Will this procedure generate random points uniformly distributed within a given circle? Proof? -QUESTION [11 upvotes]: Consider the task of generating random points uniformly distributed within a circle of a given radius $r$ that is centered at the origin. Assume that we are given a random number generator $R$ that generates a floating point number uniformly distributed in the range $[0, 1)$. -Consider the following procedure: - -Generate a random point $p = (x, y)$ within a square of side $2r$ centered at the origin. This can be easily achieved by: -a. Using the random number generator $R$ to generate two random numbers $x$ and $y$, where $x, y \in [0, 1)$, and then transforming $x$ and $y$ to the range $[0, r)$ (by multiplying each by $r$). -b. Flipping a fair coin to decide whether to reflect $p$ around the $x$-axis. -c. Flipping another fair coin to decide whether to reflect $p$ around the $y$-axis. -Now, if $p$ happens to fall outside the given circle, discard $p$ and generate another point. Repeat the procedure until $p$ falls within the circle. - -Is the previous procedure correct? That is, are the random points generated by it uniformly distributed within the given circle? How can one formally [dis]prove it? - -Background Info -The task was actually given in Ruby Quiz - Random Points within a Circle (#234). If you're interested, you can check my solution in which I've implemented the procedure described above. 
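-For concreteness, here is the procedure as a small Python sketch (a paraphrase of steps 1 and 2 above, not the linked Ruby code):
-
-import random
-
-def random_point_in_circle(r):
-    while True:
-        x = random.random() * r      # step 1a: uniform in [0, r)
-        y = random.random() * r
-        if random.random() < 0.5:    # step 1b: maybe reflect around the x-axis
-            y = -y
-        if random.random() < 0.5:    # step 1c: maybe reflect around the y-axis
-            x = -x
-        if x*x + y*y < r*r:          # step 2: keep only points inside the circle
-            return (x, y)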
I would like to know whether the procedure is mathematically correct or not, but I couldn't figure out how to formally [dis]prove it. -Note that the actual task was to generate random points uniformly distributed within a circle of a given radius and position, but I intentionally left that out in the question because the generated points can be easily translated to their correct positions relative to the given center. - -REPLY [3 votes]: I'm not saying that your method is the simplest one to obtain uniformly distributed sample points in the disk $D$ of radius $r>0$, but it is certainly correct. -Let $Q:=[{-r},r]^2$ and assume that the points generated in step 1. of your procedure are uniformly distributed in $Q$. For any set $A\subset Q$ denote by $|A|$ the area of $A$ and by $P(A)$ the probability that a sample point $p$ falls on the set $A$. Then one has the well known formula about switching conditionals: -$$P(A\,|\,D)={P(A)\>P(D\,|\,A)\over P(D)}\ .$$ -Now when $A\subset D$ then $P(D\,|\,A)=1$, and by assumption -$$P(A)={|A|\over|Q|},\quad P(D)={|D|\over|Q|}\ .$$ -It follows that -$$P(A\,|\,D)={|A|\over|D|}\qquad\forall \> A\subset D\ ,$$ -as it should be.<|endoftext|> -TITLE: Distribution of primes? -QUESTION [16 upvotes]: Do primes become more or less frequent as you go further out on the number line? That is, are there more or fewer primes between $1$ and $1{,}000{,}000$ than between $1{,}000{,}000$ and $2{,}000{,}000$? -A proof or pointer to a proof would be appreciated. - -REPLY [5 votes]: The Sieve of Eratosthenes is a very intuitive visual representation of why the frequency of prime numbers goes down as you go further out on the number line.<|endoftext|> -TITLE: Aren't constructive math proofs more "sound"? -QUESTION [17 upvotes]: Since constructive mathematics allows us to avoid things like Russell's Paradox, then why don't they replace traditional proofs? How do we know the "regular" kind of mathematics are free of paradox without a proof construction? - -REPLY [7 votes]: "Taking the principle of excluded middle from the mathematician would be the same, say, as proscribing the telescope to the astronomer or to the boxer the use of his fists. To prohibit existence statements and the principle of excluded middle is tantamount to relinquishing the science of mathematics altogether." --David Hilbert<|endoftext|> -TITLE: Simple lowpass frequency response -QUESTION [5 upvotes]: Okay, so hopefully this isn't too hard or off-topic. Let's say I have a very simple lowpass filter (something that smooths out a signal), and the filter object has a position variable and a cutoff variable (between 0 and 1). In every step, a value is put into the following bit of pseudocode as "input": position = position*(1-c)+input*c, or more mathematically, f(n) = f(n-1)*(1-c)+x[n]*c. The output is the value of "position." Basically, it moves a percentage of the distance between the current position and then input value, stores this value internally, and returns it as output. It's intentionally simplistic, since the project I'm using this for is going to have way too many of these in sequence processing audio in real time. -Given the filter design, how do I construct a function that takes input frequency (where 1 means a sine wave with a wavelength of 2 samples, and .5 means a sine wave with wavelength 4 samples, and 0 is a flat line), and cutoff value (between 1 and 0, as shown above) and outputs the amplitude of the resulting sine wave? 
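-(For reference, here is the filter as runnable Python rather than pseudocode; it is a direct transcription of the update rule above.)
-
-class Lowpass:
-    def __init__(self, c):
-        self.c = c            # cutoff value in (0, 1]
-        self.position = 0.0   # internal state, f(n-1)
-    def step(self, value):
-        # position = position*(1-c) + input*c
-        self.position = self.position * (1 - self.c) + value * self.c
-        return self.position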
Sine wave comes in, sine wave comes out, I just want to be able to figure out how much quieter it is at any input and cutoff frequency combination. - -REPLY [3 votes]: I don't have enough mojo to comment on Greg's answer. - -Greg made a silly calculational mistake: The transfer function $A(\omega)$ should be $c/(1-(1-c)e^{-i\omega})$. -What you want is the modulus of $A(\omega)$. Note that $\sin \omega n$ is precisely the imaginary part of $e^{i\omega n}$. Because the relation between input and output is linear, the response to $\sin\omega n$ will be the imaginary part of $A(\omega)e^{i\omega n}$. That's going to be a sinusoid with some shifting and the amplitude $|A(\omega)|$. Here is a plot for $c=1/2$. -To read more about this sort of things, google "IIR filter" or "infinite impulse response".<|endoftext|> -TITLE: Proof that the sum of two Gaussian variables is another Gaussian -QUESTION [17 upvotes]: The sum of two Gaussian variables is another Gaussian. -It seems natural, but I could not find a proof using Google. -What's a short way to prove this? -Thanks! -Edit: Provided the two variables are independent. - -REPLY [12 votes]: I prepared the following as an answer to a question which happened to -close just as I was putting the finishing touches on my work. I posted it as a different (self-answered) question but following suggestions from Srivatsan Narayanan and Mike Spivey, I am putting it here and deleting my so-called question. -If $X$ and $Y$ are independent standard Gaussian random variables, what is -the cumulative distribution function of $\alpha X + \beta Y$? -Let $Z = \alpha X + \beta Y$. We assume without loss of generality that $\alpha$ and $\beta$ are positive real numbers since if, say, $\alpha < 0$, then we can replace $X$ by $-X$ and $\alpha$ by $\vert\alpha\vert$. Then, the cumulative probability distribution function of $Z$ is -$$ -F_Z(z) = P\{Z \leq z\} = P\{\alpha X + \beta Y \leq z\} = \int\int_{\alpha x + \beta y \leq z} \phi(x)\phi(y) dx dy -$$ -where $\phi(\cdot)$ is the unit Gaussian density function. But, since the integrand $(2\pi)^{-1}\exp(-(x^2 + y^2)/2)$ has circular symmetry, the value of the integral depends only on the distance of the origin from the line $\alpha x + \beta y = z$. - Indeed, by a rotation of coordinates, we can write -the integral as -$$ -F_Z(z) = \int_{x=-\infty}^d \int_{y=-\infty}^{\infty}\phi(x)\phi(y) dx dy -= \Phi(d) -$$ -where $\Phi(\cdot)$ is the standard Gaussian cumulative distribution function. -But, -$$d = \frac{z}{\sqrt{\alpha^2 + \beta^2}}$$ -and thus the cumulative distribution function of $Z$ is that of a zero-mean Gaussian random variable with variance $\alpha^2 + \beta^2$.<|endoftext|> -TITLE: Are the "proofs by contradiction" weaker than other proofs? -QUESTION [117 upvotes]: I remember hearing several times the advice that, we should avoid using a proof by contradiction, if it is simple to convert to a direct proof or a proof by contrapositive. Could you explain the reason? Do logicians think that proofs by contradiction are somewhat weaker than direct proofs? -Is there any reason that one would still continue looking for a direct proof of some theorem, although a proof by contradiction has already been found? I don't mean improvements in terms of elegance or exposition, I am asking about logical reasons. For example, in the case of the "axiom of choice", there is obviously reason to look for a proof that does not use the axiom of choice. Is there a similar case for proofs by contradiction? 
- -REPLY [11 votes]: In order to prove A, let's assume not A. -[Insert 10-page argument here.] -Which of the assertions proved in the foregoing 10 pages are false because they were deduced from the (now proved false) assumption that not A? Which are true but cannot be considered to have been validly proved because the proofs relied on the false assumption that not A? And which were validly proved since their proofs did not rely on that assumption? It can be hard to tell. And if you saw an assertion proved along the way, you might think it's known to be true. -In that way, a proof by contradiction can be at best confusing.<|endoftext|> -TITLE: A challenge by R. P. Feynman: give counter-intuitive theorems that can be translated into everyday language -QUESTION [334 upvotes]: The following is a quote from Surely you're joking, Mr. Feynman. The question is: are there any interesting theorems that you think would be a good example to tell Richard Feynman, as an answer to his challenge? Theorems should be totally counter-intuitive, and be easily translatable to everyday language. (Apparently the Banach-Tarski paradox was not a good example.) - -Then I got an idea. I challenged - them: "I bet there isn't a single - theorem that you can tell me - what - the assumptions are and what the - theorem is in terms I can understand - - where I can't tell you right away - whether it's true or false." -It often went like this: They would - explain to me, "You've got an orange, - OK? Now you cut the orange into a - finite number of pieces, put it back - together, and it's as big as the sun. - True or false?" -"No holes." -"Impossible! -"Ha! Everybody gather around! It's - So-and-so's theorem of immeasurable - measure!" -Just when they think they've got - me, I remind them, "But you said an - orange! You can't cut the orange peel - any thinner than the atoms." -"But we have the condition of - continuity: We can keep on cutting!" -"No, you said an orange, so I - assumed that you meant a real orange." -So I always won. If I guessed it - right, great. If I guessed it wrong, - there was always something I could - find in their simplification that they - left out. - -REPLY [2 votes]: The dimensions of the lattices that construct some of the sporadic groups (eg, Co₃, HN, HS) are so unusual that I think they ultimately allay Feynman’s objection—even if, to answer it, one would have to define what a group is and give a bit of culture/history on the quest to :gotta catch 'em all".<|endoftext|> -TITLE: Is there a real number lookup algorithm or service? -QUESTION [22 upvotes]: Is there a way of taking a number known to limited precision (e.g. $1.644934$) and finding out an "interesting" real number (e.g. $\displaystyle\frac{\pi^2}{6}$) that's close to it? -I'm thinking of something like Sloane's Online Encyclopedia of Integer Sequences, only for real numbers. -The intended use would be: write a program to calculate an approximation to $\displaystyle\sum_{n=1}^\infty \frac{1}{n^2}$, look up the answer ("looks close to $\displaystyle\frac{\pi^2}{6}$") and then use the likely answer to help find a proof that the sum really is $\displaystyle \frac{\pi^2}{6}$. -Does such a thing exist? - -REPLY [25 votes]: I've long used Simon Plouffe's inverse symbolic calculator for this purpose. It is essentially a searchable list of "interesting" numbers. -Edit: link updated (Mar 2022).<|endoftext|> -TITLE: Is $0$ a natural number? 
-QUESTION [144 upvotes]: Is there a consensus in the mathematical community, or some accepted authority, to determine whether zero should be classified as a natural number?
-It seems as though formerly $0$ was considered in the set of natural numbers, but now it seems more common to see definitions saying that the natural numbers are precisely the positive integers.
-
-REPLY [2 votes]: The Peano–Dedekind axioms (as used in proving propositions by the Principle of Mathematical Induction) are compatible with defining $\mathbb{N}$ either as $\mathbb{N} = \mathbb{Z^+} \cup \{0\} = \{0, 1, 2, \ldots\}$ or as $\mathbb{N} = \mathbb{Z^+} = \{1, 2, 3, \ldots\}$; that is, it depends on the context, which can usually be read off from the given proposition to be proved, at least when using PMI.<|endoftext|>
-TITLE: Simple numerical methods for calculating the digits of $\pi$
-QUESTION [34 upvotes]: Are there any simple methods for calculating the digits of $\pi$? Computers are able to calculate billions of digits, so there must be an algorithm for computing them. Is there a simple algorithm that can be computed by hand in order to compute the first few digits?
-
-REPLY [5 votes]: The first method I applied successfully with a function calculator was approximating the circle by a circumscribed $2^k$-gon, whose sides each touch the circle at one point and whose corners lie outside the circle. I started with the unit circle approximated by a square and with the relation $\tan(2^{-k} \pi/4) \approx 2^{-k} \pi/4$, which for $k=0$ gives $\pi \approx \frac{8}{2} = 4$. I then iterated the half-angle formula for the tangent, which I obtained by solving the quadratic equation that the tangent sum formula yields, and arrived at the sequence $\pi \approx 8 \cdot 2^k \tan(2^{-k} \pi /4)/2$.
-The problem is that the quadratic formula contains a square root, which is difficult to calculate by hand. That's why I kept searching for a simple approximation method that uses only addition, subtraction, multiplication and division of integers. I ended up with the following calculation. The method applies a Machin-like formula and was first published by C. Hutton.
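-Before the derivation, here is the series we will end up with, as a tiny Python sketch (the function name pi_hutton is mine):
-
-def pi_hutton(terms):
-    # pi = sum_k 4*(-1)^k/(2k+1) * (1/2^(2k+1) + 1/3^(2k+1))
-    #    = 4*(arctan(1/2) + arctan(1/3)), as derived below
-    s = 0.0
-    for k in range(terms):
-        s += 4 * (-1)**k / (2*k + 1) * (0.5**(2*k + 1) + (1/3)**(2*k + 1))
-    return s
-
-print(pi_hutton(30))  # terms decay like 4^(-k), so this already reaches double precision
-
-Now the derivation: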
-\begin{eqnarray}
-\pi & = & 4 \frac{\pi}{4} = 4 \arctan(1) = 4 \arctan\Bigg(\frac{\frac{5}{6}}{\frac{5}{6}}\Bigg) = 4 \arctan\Bigg(\frac{\frac{1}{2}+\frac{1}{3}}{1-\frac{1}{2}\frac{1}{3}}\Bigg) \\
-& = & 4 \arctan\Bigg(\frac{\tan(\arctan(\frac{1}{2}))+\tan(\arctan(\frac{1}{3}))}{1-\tan(\arctan(\frac{1}{2}))\tan(\arctan(\frac{1}{3}))}\Bigg) \\
-& = & 4 \arctan\Big(\tan\Big(\arctan\Big(\frac{1}{2}\Big)+\arctan\Big(\frac{1}{3}\Big)\Big)\Big) \\
-& = & 4 \Big(\arctan\Big(\frac{1}{2}\Big)+\arctan\Big(\frac{1}{3}\Big)\Big) \\
-& = & 4 \Big(\Big\vert_0^\frac{1}{2} \arctan(x) + \Big\vert_0^\frac{1}{3} \arctan(x)\Big) \\
-& = & 4 \bigg(\int_0^\frac{1}{2} \frac{1}{1+x^2} dx + \int_0^\frac{1}{3} \frac{1}{1+x^2} dx\bigg) \\
-& = & 4 \bigg(\int_0^\frac{1}{2} \sum_{k=0}^\infty (-x^2)^k dx + \int_0^\frac{1}{3} \sum_{k=0}^\infty (-x^2)^k dx \bigg) \\
-& = & 4 \bigg(\sum_{k=0}^\infty \int_0^\frac{1}{2} (-x^2)^k dx + \sum_{k=0}^\infty \int_0^\frac{1}{3} (-x^2)^k dx \bigg) \\
-& = & 4 \bigg(\sum_{k=0}^\infty \int_0^\frac{1}{2} (-1)^k x^{2k} dx + \sum_{k=0}^\infty \int_0^\frac{1}{3} (-1)^k x^{2k} dx \bigg) \\
-& = & 4 \bigg(\sum_{k=0}^\infty \bigg\vert_0^\frac{1}{2} \frac{(-1)^k}{2k+1} x^{2k+1} + \sum_{k=0}^\infty \bigg\vert_0^\frac{1}{3} \frac{(-1)^k}{2k+1} x^{2k+1} \bigg) \\
-& = & 4 \bigg(\sum_{k=0}^\infty \frac{(-1)^k}{2k+1} \frac{1}{2^{2k+1}} + \sum_{k=0}^\infty \frac{(-1)^k}{2k+1} \frac{1}{3^{2k+1}} \bigg) \\
-& = & 4 \sum_{k=0}^\infty \frac{(-1)^k}{2k+1} \bigg(\frac{1}{2^{2k+1}} + \frac{1}{3^{2k+1}}\bigg) \\
-& = & \sum_{k=0}^\infty \frac{4(-1)^k}{2k+1} \bigg(\frac{1}{2^{2k+1}} + \frac{1}{3^{2k+1}}\bigg).
-\end{eqnarray}
-It is the most beautiful practically applicable numerical method I have found so far.<|endoftext|>
-TITLE: Why are $\Delta_1$ sentences of arithmetic called recursive?
-QUESTION [14 upvotes]: The arithmetic hierarchy defines the $\Pi_1$ formulae of arithmetic to be formulae that are provably equivalent to a formula in prenex normal form that only has universal quantifiers, and the $\Sigma_1$ formulae to be those provably equivalent to a prenex normal form with only existential quantifiers.
-A formula is $\Delta_1$ if it is both $\Pi_1$ and $\Sigma_1.$ These formulae are often called recursive: why?
-
-REPLY [4 votes]: The term recursive in computability theory is the same as the term computable, which may or may not give a better intuition of the concept of $\Delta_1^0$.
-Anyway, you can understand a set $A$ (or relation $\Theta$) to be computable if there is an algorithmic process that, for any $n$, returns in finite time the answer to whether $n \in A$ or $n \notin A$.
-A relation $\Theta$ is $\Sigma_1^0$ if it is of the form $(\exists k)\varphi(x,k)$ where $\varphi$ is something known to be computable. So if $\Theta(n)$ holds, then by searching one will find a witness $k$ in a finite amount of time. However, in finite time one cannot ascertain that $\neg\Theta(x)$ holds, since that requires checking that every $k$ fails to be a witness.
-A relation is $\Pi_1^0$ if its complement is $\Sigma_1^0$.
-Now to see why $\Delta_1^0$ relations are considered recursive or computable: $\Theta$ being $\Delta_1^0$ means that both it and its complement are $\Sigma_1^0$. Suppose that $\Theta(x) = (\exists k)\varphi(x,k)$ and $\neg\Theta(x) = (\exists k)\psi(x,k)$, where both $\varphi$ and $\psi$ are computable. The claim is that determining $\Theta$ is computable. To do this, just search through $k = 0, 1, 2, \ldots$ and ask at each step whether $\varphi(x,k)$ or $\psi(x,k)$ holds.
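-In code, the dovetailing search is just this sketch (chi_phi and chi_psi stand for the two computable predicates; they are placeholders, not actual library functions):
-
-def decide_theta(x, chi_phi, chi_psi):
-    # Halts on every x because one of the two witnesses must exist.
-    k = 0
-    while True:
-        if chi_phi(x, k):
-            return True    # Theta(x) holds
-        if chi_psi(x, k):
-            return False   # Theta(x) fails
-        k += 1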
If the former is found, then $\Theta(x)$; if the latter is found, then $\neg\Theta(x)$. Such a $k$ will definitely be found for one or the other, since for all $x$, either $\Theta(x)$ or $\neg\Theta(x)$. So $\Delta_1^0$ relations satisfy the intuition that computable relations are those for which one can determine satisfiability or nonsatisfiability in a finite amount of time.<|endoftext|>
-TITLE: What are all the homomorphisms between the rings $\mathbb{Z}_{18}$ and $\mathbb{Z}_{15}$?
-QUESTION [9 upvotes]: Any homomorphism $φ$ between the rings $\mathbb{Z}_{18}$ and $\mathbb{Z}_{15}$ is completely defined by $φ(1)$. So from
-$$0 = φ(0) = φ(18) = φ(18 \cdot 1) = 18 \cdot φ(1) = 15 \cdot φ(1) + 3 \cdot φ(1) = 3 \cdot φ(1)$$
-we get that $φ(1)$ is $0$, $5$ or $10$; setting aside the zero map, $φ(1)$ is either $5$ or $10$. But how can I prove or disprove that these two give valid homomorphisms?
-
-REPLY [3 votes]: Continuing Akhil M's answer above (the comments under it are getting pushed down out of sight): it is also not hard to systematically find all the idempotents in ${\mathbb Z}/n$. Namely, for example with $n=pq$ with distinct primes $p,q$, $\mathbb Z/n \approx \mathbb Z/p \oplus \mathbb Z/q$, by Sun-Ze's theorem (altho' one might carp about what kind of "sum" it is). So, solving the idempotent condition $x^2=x$ mod $pq$ is equivalent to solving that equation mod $p$ and mod $q$. The integers mod a prime form a field, so we know that there are only the two obvious solutions, $0$ and $1$. Thus, the idempotents mod $pq$ are exactly the residues that are $0$-or-$1$ mod $p$ and $0$-or-$1$ mod $q$. Obviously $0$ and $1$ mod $pq$ work, but also $0$ mod $p$ and $1$ mod $q$, and vice versa. In the case at hand, both $6$ and $10$ are non-obvious idempotents.<|endoftext|>
-TITLE: If $A$ is a subobject of $B$, and $B$ a subobject of $A$, are they isomorphic?
-QUESTION [17 upvotes]: In category theory, a subobject of $X$ is defined as an object $Y$ with a monomorphism from $Y$ to $X$. If $A$ is a subobject of $B$, and $B$ a subobject of $A$, are they isomorphic? It is not true in general that having monomorphisms going both ways between two objects is sufficient for isomorphy, so it would seem the answer is no.
-I ask because I'm working through the exercises in Geroch's Mathematical Physics, and one of them asks you to prove that the relation "is a subobject of" is reflexive, transitive and antisymmetric. But it can't be antisymmetric if I'm right...
-
-REPLY [4 votes]: The free group on two letters contains, as subgroups, groups isomorphic to the free group on any finite number of letters. Conversely, the free group on $n \geq 2$ letters contains the free group on two letters as a subgroup. So if we consider the category of groups, with $A = F_2$ and $B = F_n$ ($n > 2$), we get a counterexample.<|endoftext|>
-TITLE: Why are higher-order logics less well-behaved?
-QUESTION [9 upvotes]: I've read about higher-order logics (i.e. those that build on first-order predicate logic) but am not too clear on their applications. While they are capable of expressing a greater range of truths (though never proving all of them, by Gödel's incompleteness theorem), they are often said to be less "well-behaved".
-Mathematicians generally seem to steer clear of such logics when possible, yet they are certainly necessary for proving some more complicated concepts/theorems, as I understand. (For example, it seems the reals can only be constructed using at least $2^{\text{nd}}$ order logic.) Why is this; what makes them less well-behaved or less useful with respect to logic/proof theory/other fields?
-
-REPLY [2 votes]: A further sense in which Higher Order Logics with standard or saturated semantics (HOL, hereafter) are less well-behaved than First Order Logic (FOL, hereafter) is a direct consequence of the failure of Completeness (and thus, as explained in other answers, of Compactness). The set of logical truths and the set of correct claims of semantic consequence for these logics are not recursively enumerable.
-FOL is Complete, yet not Decidable. So we cannot decide, for an arbitrary sentence or set of sentences of the language of FOL, whether that sentence is a logical truth or whether the set has a given sentence as a consequence. But, since FOL is Complete and proofs are finitely long, we can (in the mathematician's sense of "can") enumerate the proofs and inspect them one by one, checking which sentence each proof shows to be a theorem, or which sentence it derives from which set of assumptions. This gets us a recursive enumeration of the truths and of the sentence/set pairs that stand in the consequence relation. (This does not contradict the failure of Decidability: from the fact that we have yet to come across a proof in our enumeration, we cannot conclude that there isn't one if only we kept looking.)
-Since HOLs are not Complete, this means of showing them recursively enumerable is not available. Indeed, there can be no means: were there such a means, it could be exploited to induce a Complete proof system, and there cannot be such a Complete proof system, as the HOLs are not Compact.<|endoftext|>
-TITLE: Why is $\int\limits_0^1 (1-x^7)^{1/5} - (1-x^5)^{1/7} dx=0$?
-QUESTION [18 upvotes]: When I tried to approximate $$\int_{0}^{1} (1-x^7)^{1/5}-(1-x^5)^{1/7}\ dx$$ I kept getting answers that were really close to $0$, so I think it might be true. But why? When I ask Mathematica, I get a bunch of symbols I don't understand!
-
-REPLY [7 votes]: $\int_0^1(1-x^m)^{1/n}dx=\dfrac{\Gamma(1/m)\,\Gamma(1/n)}{(m+n)\,\Gamma(1/m+1/n)}$ is symmetric in $m, n$. (Substituting $t=x^m$ turns the integral into $\frac{1}{m}B\big(\frac{1}{m}, 1+\frac{1}{n}\big)$, which simplifies to the displayed expression.)<|endoftext|>
-TITLE: Unital homomorphism
-QUESTION [9 upvotes]: What is a unital homomorphism? Why are they important?
-
-REPLY [7 votes]: A lot of results about rings just won't work otherwise: for instance, a unital homomorphism of rings sends units to units. A nonunital homomorphism doesn't have to do that. Nonunital homomorphisms can be very degenerate, e.g. the zero homomorphism.
-Another reason you want homomorphisms to preserve the unit is that this is how you get a map $\operatorname{Spec} S \to \operatorname{Spec} R$ from a ring homomorphism $R \to S$.<|endoftext|>
-TITLE: Why are $3D$ transformation matrices $4 \times 4$ instead of $3 \times 3$?
-QUESTION [54 upvotes]: Background: Many (if not all) of the transformation matrices used in $3D$ computer graphics are $4\times 4$, including the three values for $x$, $y$ and $z$, plus an additional term which usually has a value of $1$.
-Given the extra computing effort required to multiply $4\times 4$ matrices instead of $3\times 3$ matrices, there must be a substantial benefit to including that extra fourth term, even though $3\times 3$ matrices should (?) be sufficient to describe points and transformations in 3D space.
-Question: Why is the inclusion of a fourth term beneficial? I can guess that it makes the computations easier in some manner, but I would really like to know why that is the case.
-
-REPLY [51 votes]: I'm going to copy my answer from Stack Overflow, which also shows why 4-component vectors (and hence 4×4 matrices) are used instead of 3-component ones.
-
-In most 3D graphics a point is represented by a 4-component vector (x, y, z, w), where w = 1. Usual operations applied on a point include translation, scaling, rotation, reflection, skewing and combinations of these.
-These transformations can be represented by a mathematical object called a "matrix". A matrix acts on a vector like this:
-[ a b c tx ] [ x ] [ a*x + b*y + c*z + tx*w ]
-| d e f ty | | y | = | d*x + e*y + f*z + ty*w |
-| g h i tz | | z | | g*x + h*y + i*z + tz*w |
-[ p q r s ] [ w ] [ p*x + q*y + r*z + s*w ]
-
-For example, scaling is represented as
-[ 2 . . . ] [ x ] [ 2x ]
-| . 2 . . | | y | = | 2y |
-| . . 2 . | | z | | 2z |
-[ . . . 1 ] [ 1 ] [ 1 ]
-
-and translation as
-[ 1 . . dx ] [ x ] [ x + dx ]
-| . 1 . dy | | y | = | y + dy |
-| . . 1 dz | | z | | z + dz |
-[ . . . 1 ] [ 1 ] [ 1 ]
-
-One of the reasons for the 4th component is to make a translation representable by a matrix.
-The advantage of using a matrix is that multiple transformations can be combined into one via matrix multiplication.
-Now, if the purpose is simply to bring translation to the table, then I'd say (x, y, z, 1) instead of (x, y, z, w) and make the last row of the matrix always [0 0 0 1], as done usually for 2D graphics. In fact, the 4-component vector will be mapped back to the normal 3-component vector via this formula:
-[ x(3D) ] [ x / w ]
-| y(3D) | = | y / w |
-[ z(3D) ] [ z / w ]
-
-This is called homogeneous coordinates. Allowing this makes the perspective projection expressible with a matrix too, which can again combine with all other transformations.
-For example, since objects farther away should be smaller on screen, we transform the 3D coordinates into 2D using the formula
-x(2D) = x(3D) / (10 * z(3D))
-y(2D) = y(3D) / (10 * z(3D))
-
-Now if we apply the projection matrix
-[ 1 . . . ] [ x ] [ x ]
-| . 1 . . | | y | = | y |
-| . . 1 . | | z | | z |
-[ . . 10 . ] [ 1 ] [ 10*z ]
-
-then the real 3D coordinates would become
-x(3D) := x/w = x/10z
-y(3D) := y/w = y/10z
-z(3D) := z/w = 0.1
-
-so we just need to chop the z-coordinate out to project to 2D.<|endoftext|>
-TITLE: Best Intermediate/Advanced Computer Science book
-QUESTION [9 upvotes]: I'm very interested in Computer Science (computational complexity, etc.). I've already finished a University course in the subject (using Sipser's "Introduction to the Theory of Computation").
-I know the basics, i.e. Turing Machines, Computability (Halting problem and related reductions), Complexity classes (time and space, P/NP, L/NL, a little about BPP).
-Now, I'm looking for a good book to learn about some more advanced concepts. Any ideas?
-
-REPLY [3 votes]: Computational Complexity: A Modern Approach by Sanjeev Arora and Boaz Barak is a more up-to-date advanced 'introduction' text.<|endoftext|>
-TITLE: History of the Concept of a Ring
-QUESTION [53 upvotes]: I am vaguely familiar with the broad strokes of the development of group theory, first when ideas of geometric symmetries were studied in concrete settings without the abstract notion of a group available, and later as it was formalized by Cayley, Lagrange, etc (and later, infinite groups being well-developed). In any case, it's intuitively easy for me to imagine that there was substantial lay, scientific, and artistic interest in several of the concepts well-encoded by a theory of groups.
-I know a few of the names of those who developed the abstract formulation of rings initially (Wedderburn etc.), but I'm less aware of the ideas and problems that might have given rise to interest in ring structures. Of course, now they're terribly useful in lots of math, and $\mathbb{Z}$ is a natural model for elementary properties of commutative rings, and I'll wager number theorists had an interest in developing the concept. And if I wanted noncommutative models, matrices are a good place to start looking. But I'm not even familiar with what the state of knowledge and formalization of things like matrices/linear operators was at the time rings were developed, so maybe these aren't actually good examples for how rings might have been motivated.
-Can anyone outline or point me to some basics on the history of the development of basic algebraic structures besides groups?
-
-REPLY [5 votes]: There are also the books A History of Abstract Algebra and Episodes in the History of Modern Algebra (1800-1950).<|endoftext|>
-TITLE: Tiling a $3 \times 2n$ rectangle with dominoes
-QUESTION [21 upvotes]: I'm looking to find out if there's any easy way to calculate the number of ways to tile a $3 \times 2n$ rectangle with dominoes. I was able to do it with the two codependent recurrences
-f(0) = g(0) = 1
-f(n) = f(n-1) + 2g(n-1)
-g(n) = f(n) + g(n-1)
-
-where $f(n)$ is the actual answer and $g(n)$ is a helper function that represents the number of ways to tile a $3 \times 2n$ rectangle with two extra squares on the end (the same as a $3 \times (2n+1)$ rectangle missing one square).
-By combining these and doing some algebra, I was able to reduce this to
-f(n) = 4f(n-1) - f(n-2)
-
-which shows up as sequence A001835, confirming that this is the correct recurrence.
-The number of ways to tile a $2 \times n$ rectangle is the Fibonacci numbers because every rectangle ends with either a vertical domino or two horizontal ones, which gives the exact recurrence that Fibonacci numbers do. My question is, is there a similar simple explanation for this recurrence for tiling a $3 \times 2n$ rectangle?
-
-REPLY [4 votes]: For a given tiling of $3 \times 2n$, let's see if we can break it up into a $ 3 \times k$ and $ 3 \times (2n-k) $ rectangle with a clean vertical break.
-Specifically, consider the smallest $k\geq 1 $ such that there is no domino that is in both columns $k$ and $k+1$. A simple parity check shows that $k$ must be even.
-For $k=2$, there are 3 ways to tile the initial portion - all horizontal, only top horizontal, only bottom horizontal.
-For $k\geq 4$, there are 2 ways to tile the initial portion - only top horizontal, only bottom horizontal.
-Thus, this gives us that $f(n) = 3 f(n-1) + 2f(n-2) + 2 f(n-3) + \ldots + 2 f(0)$.
-Similarly, $f(n-1) = 3f(n-2) + 2f(n-3) + \ldots + 2f(0)$.
-Hence $f(n) - f(n-1) = 3f(n-1) - f(n-2)$, which gives
-$$ f(n) = 4 f(n-1) - f(n-2)$$<|endoftext|>
-TITLE: Is there a closed-form equation for $n!$? If not, why not?
-QUESTION [14 upvotes]: I know that the Fibonacci sequence can be described via Binet's formula.
-However, I was wondering if there was a similar formula for $n!$.
-Is this possible? If not, why not?
-
-REPLY [4 votes]: This is a riff on some of the comments about what might constitute an "answer" and what "closed form" might mean: although it's somewhat facetious, it's intended to prompt thoughts about these issues.
-Our base-10 number system interprets a string ${a_n}{a_{n-1}} \cdots {a_0}$ as the sum $\sum_{i=0}^{n} a_i 10^i $ (which can be, and is, computed recursively as $a_0 + 10 \left( a_1 + 10 \left( \cdots + 10 a_n \right) \cdots \right)$). If you take the former to be an acceptable "closed form" representation, then why not use a slight modification of this number system? Specifically, interpret the same string as equal to $a_0 + 2 \left( a_1 + 3 \left( \cdots + (n+1) a_n \right) \cdots \right)$ and require that $0 \le a_0 \le 1, 0 \le a_1 \le 2, \ldots, 0 \le a_n \le n+1$. In this "factorial" number system, $n! = 10 \cdots 0$ is represented as a simple $n$-digit string: it's "closed"!<|endoftext|>
-TITLE: Sum of reciprocals of numbers with certain terms omitted
-QUESTION [44 upvotes]: I know that the harmonic series $1 + \frac12 + \frac13 + \frac14 + \cdots$ diverges. I also know that the sum of the inverse of prime numbers $\frac12 + \frac13 + \frac15 + \frac17 + \frac1{11} + \cdots$ diverges too, even if really slowly since it's $O(\log \log n)$.
-But I think I read that if we consider the numbers whose decimal representation does not have a certain digit (say, 7) and sum the inverse of these numbers, the sum is finite (usually between 19 and 20, it depends on the missing digit). Does anybody know the result, and some way to prove that the sum is finite?
-
-REPLY [59 votes]: It is not very surprising that the sum is finite, since numbers without a 7 (or any other digit) get rarer and rarer as the number of digits increases.
-Here's a proof.
-Let $S$ be the harmonic series with all terms whose denominator contains the digit $k$ removed. We can write $S =S_1 + S_2 + S_3 + \ldots$, where $S_i$ is the sum of all terms whose denominator contains exactly $i$ digits, all different from $k$.
-Now, the number of $i$-digit numbers that do not contain the digit $k$ is $8\cdot9^{i-1}$ (there are $8$ choices for the first digit, excluding $0$ and $k$, and $9$ choices for the other digits). [Well, if $k=0$ there are $9$ choices for the first digit, but the proof still works.] So there are $8\cdot9^{i-1}$ numbers in the sum $S_i$.
-Now each number in $S_i$ is of the form $\frac1a$, where $a$ is an $i$-digit number. So $a \geq 10^{i-1}$, which implies $\frac1a \leq \frac1{10^{i-1}}$.
-Therefore $S_i \leq 8\cdot\dfrac{9^{i-1} }{10^{i-1}} = 8\cdot\left(\frac9{10}\right)^{i-1}$.
-So $S= \sum S_i \leq \sum 8\cdot\left(\frac9{10}\right)^{i-1}$
-which is a geometric series of ratio $\frac9{10} < 1$, which converges. Since $S$ is a positive series bounded above by a converging series, $S$ converges.<|endoftext|>
-TITLE: Simple explanation of a monad
-QUESTION [48 upvotes]: I have been learning some functional programming recently and so I have come across monads. I understand what they are in programming terms, but I would like to understand what they are mathematically. Can anyone explain what a monad is using as little category theory as possible?
-
-REPLY [15 votes]: Monads in Haskell and monads in category theory are very much the same: A monad consists of a functor $T: C \to C$ and two natural transformations $\eta_X : X \to T(X)$ (return in Haskell) and $\mu_X : T(T(X)) \to T(X)$ (join in Haskell) subject to the following laws
-$\mu_X \circ T(\eta_X) = \mu_X \circ \eta_{T(X)} = 1_{T(X)}$ (left and right unit laws)
-$\mu_X \circ \mu_{T(X)} = \mu_X \circ T(\mu_X)$ (associativity)
-So, compared to Haskell, the monad is defined in terms of return, join and fmap instead of return and (>>=).
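-(A small aside, not part of the original answer: the list monad makes the return/join/fmap presentation concrete. Here is a minimal sketch in Python — the function names are mine — with the asserts checking the unit and associativity laws on small examples.)
-def fmap(f, xs):
-    # T on morphisms: apply f inside the list
-    return [f(x) for x in xs]
-def unit(x):
-    # eta: X -> T(X)
-    return [x]
-def join(xss):
-    # mu: T(T(X)) -> T(X), flatten one layer of nesting
-    return [x for xs in xss for x in xs]
-xs = [1, 2, 3]
-assert join(fmap(unit, xs)) == xs and join(unit(xs)) == xs  # unit laws
-xsss = [[[1], [2]], [[3]]]
-assert join(join(xsss)) == join(fmap(join, xsss))  # associativity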
For more details on this, see also the Haskell wikibook.
-Two examples may illuminate this definition.
-The powerset functor
-
-$\mathcal{P} = X \mapsto \mathcal{P}(X)$ maps a set to the set of its subsets.
-Functions $f:X \to Y$ are extended point-wise to $\mathcal{P}(f):\mathcal{P}(X) \to \mathcal{P}(Y)$
-$\eta_X : X \to \mathcal{P}(X)$ is the function $x \mapsto \left\{x\right\}$
-$\mu_X : \mathcal{P}(\mathcal{P}(X)) \to \mathcal{P}(X)$ flattens the inner layer of subsets: $\mu_X(A) = \left\{ b | a \in A, b \in a \right\}$.
-This is similar to the list monad in Haskell.
-
-The closure operation on the subsets of a topological space $S$ is a monad, too.
-
-The objects of the category $C$ are the subsets of a given topological space $S$.
-There is a unique arrow $X \to Y$ between two objects $X$ and $Y$ exactly when $X \subseteq Y$.
-The monad is given by the functor that maps each object $X$ to its topological closure $\bar X$ and the arrow $X \subseteq Y$ to the arrow $\bar{X}\subseteq \bar{Y}$.
-Clearly, we have $X \subseteq \bar X$; this is $\eta_X$.
-Also, we know that $\bar{\bar X} = \bar X$, in particular $\bar{\bar X} \subseteq \bar X$; this is $\mu_X$.<|endoftext|>
-TITLE: Why do complex functions have a finite radius of convergence?
-QUESTION [11 upvotes]: Say we have a function $\displaystyle f(z)=\sum_{n=0}^\infty a_n z^n$ with radius of convergence $R>0$. Why is the radius of convergence only $R$? Can we conclude that there must be a pole, branch cut or discontinuity for some $z_0$ with $|z_0|=R$? What does that mean for functions like
-$$f(z)=\begin{cases}
 0 & \text{for $z=0$} \\
 e^{-\frac{1}{z^2}} & \text{for $z \neq 0$} \end{cases}$$
-that have a radius of convergence $0$?
-
-REPLY [16 votes]: If the radius of convergence is $R$, that means there is a singular point on the circle $|z| = R$. In other words, there is a point $\xi$ on the circle of radius $R$ such that the function cannot be extended via "analytic continuation" in a neighborhood of $\xi$. This is a straightforward application of compactness of the circle and can be found in books on complex analysis, e.g. Rudin's.
-However, it does not mean that there is a pole, branch cut, or discontinuity, though those would cause singular points. Indeed, a "pole" on the boundary would only make sense if you can analytically continue the power series to some proper domain containing the disk $D_R(0)$, and this is generally impossible. For instance, the power series $\sum z^{2^j}$ cannot be continued in any way outside the unit disk, because it is unbounded along any ray whose angle is a dyadic rational multiple of $2\pi$. The unit circle is its natural boundary, though it does not make sense to say that the function has a branch point or pole there. (More generally, one can show that given any domain in the plane, there is a holomorphic function in that domain which cannot be extended any further, essentially using variations on the same theme.)
-The function $\sum_j \frac{z^j}{j^2}$, incidentally, is continuous on the closed unit disk, even though there is a singular point there (at $z = 1$). So continuity may happen at singular points.
-The last function you mention does not have a power series expansion in a neighborhood of zero.
In fact, it is not continuous at zero, because it blows up if you approach zero along the imaginary axis.<|endoftext|>
-TITLE: Why do we use the commutator bracket for Lie algebras
-QUESTION [6 upvotes]: We define Lie algebras abstractly as algebras whose multiplication satisfies anti-commutativity and Jacobi's Identity. A particular instance of this is an associative algebra equipped with the commutator bracket: $[a,b]=ab-ba$. However, the notation suggests that this bracket is the one we think about. Additionally, the left adjoint to the functor I just mentioned creates the universal enveloping algebra by quotienting the tensor algebra by the tensor version of this bracket; but we could always start with some arbitrary Lie algebra with some other satisfactory bracket and apply this functor.
-My question is
-
-"Why the commutator bracket?"
-
-Is it purely from a historical standpoint (and if so could you explain why)? Or is there a result that says any Lie algebra is essentially one with the commutator bracket (maybe something about the faithfulness of the functor from above)?
-I know of (a colleague told me) a proof that the Jacobi identity is also an artifact of the adjunction involving the universal enveloping algebra. He can show that it is the necessary identity for the universal enveloping algebra to be associative (if someone knows of this in the literature I would also appreciate the link to this!)
-I hope this question is clear, if not, I can revise and try to make it a bit more specific.
-
-REPLY [4 votes]: Well Lie algebras naturally arise from the Lie bracket of vector fields and from taking the Lie algebra of a Lie group. If we look at the Lie algebra of a matrix subgroup, then the Lie bracket is the commutator of matrices.<|endoftext|>
-TITLE: What property of certain regular polygons allows them to be faces of the Platonic Solids?
-QUESTION [19 upvotes]: It appears to me that only Triangles, Squares, and Pentagons are able to "tessellate" (is that the proper word in this context?) to become regular 3D convex polytopes.
-What property of those regular polygons themselves allows them to be faces of regular convex polyhedra? Is it something in their angles? Their number of sides?
-Also, why are there more Triangle-based Platonic Solids (three) than Square- and Pentagon- based ones? (one each)
-Similarly, is this the same property that allows certain Platonic Solids to be used as "faces" of regular polychora (4D polytopes)?
-
-REPLY [32 votes]: The regular polygons that form the Platonic solids are those for which the measure of the interior angles, say α for convenience, is such that $3\alpha<2\pi$ (360°) so that three (or more) of the polygons can be assembled around a vertex of the solid.
-Regular (equilateral) triangles have interior angles of measure $\frac{\pi}{3}$ (60°), so they can be assembled 3, 4, or 5 at a vertex ($3\cdot\frac{\pi}{3}<2\pi$, $4\cdot\frac{\pi}{3}<2\pi$, $5\cdot\frac{\pi}{3}<2\pi$), but not 6 ($6\cdot\frac{\pi}{3}=2\pi$--they tessellate the plane).
-Regular quadrilaterals (squares) have interior angles of measure $\frac{\pi}{2}$ (90°), so they can be assembled 3 at a vertex ($3\cdot\frac{\pi}{2}<2\pi$), but not 4 ($4\cdot\frac{\pi}{2}=2\pi$--they tessellate the plane).
-Regular pentagons have interior angles of measure $\frac{3\pi}{5}$ (108°), so they can be assembled 3 at a vertex ($3\cdot\frac{3\pi}{5}<2\pi$), but not 4 ($4\cdot\frac{3\pi}{5}>2\pi$).
-Regular hexagons have interior angles of measure $\frac{2\pi}{3}$ (120°), so they cannot be assembled 3 at a vertex ($3\cdot\frac{2\pi}{3}=2\pi$--they tessellate the plane).
-Any other regular polygon will have larger interior angles, so cannot be assembled into a regular solid.<|endoftext|>
-TITLE: Intuitive reasoning behind the Chain Rule in multiple variables?
-QUESTION [16 upvotes]: I've sort of gotten a grasp on the Chain rule with one variable. If you hike up a mountain at 2 feet an hour, and the temperature decreases at 2 degrees per foot, the temperature would be decreasing for you at $2\times 2 = 4$ degrees per hour.
-But I'm having a bit more trouble understanding the Chain Rule as applied to multiple variables. Even the case of 2 dimensions
-$$z = f(x,y),$$
-where $x = g(t)$ and $y = h(t)$, so
-$$\frac{dz}{dt} = \frac{\partial z}{\partial x} \frac{dx}{dt} + \frac{\partial z}{\partial y} \frac{dy}{dt}.$$
-Now, this is easy enough to "calculate" (and figure out what goes where). My teacher taught me a neat tree-based graphical method for figuring out partial derivatives using chain rule. All-in-all, it was rather hand-wavey. However, I'm not sure exactly how this works, intuitively.
-Why, intuitively, is the equation above true? Why addition? Why not multiplication, like the other chain rule? Why are some multiplied and some added?
-
-REPLY [12 votes]: The basic reason is that one is simply composing the derivatives just as one composes the functions. Derivatives are linear approximations to functions. When you compose the functions, you compose the linear approximations---not a surprise.
-I'm going to try to expand on Harry Gindi's answer, because that was the only way I could grok it, but in somewhat simpler terms.
-The way to think of a derivative in multiple variables is as a linear approximation. In particular, let $f: R^m \to R^n$ and $q=f(p)$. Then near $p$, we can write $f$ as $q$ plus basically something linear plus some "noise" which "doesn't matter" (i.e. is little oh of the distance to $p$). Call this linear map $L: R^m \to R^n$.
-Now, suppose $g: R^n \to R^s$ is some map and $r = g(q)$. We can approximate $g$ near $q$ by $r$ plus some linear map $N$ plus some "garbage" which is, again, small.
-For simplicity, I'm going to assume that $p,q,r$ are all zero. This is ok, because one can just move one's origin around a bit.
-So, as before, applying $f$ to a point near zero corresponds loosely to applying the linear transformation $L$. Applying $g$ to a point near zero corresponds loosely to applying $N$. Hence applying $g \circ f$ corresponds up to some ignorable "garbage" to the map $N \circ L$.
-This means that $N \circ L$ is the linear approximation to $g \circ f$ at zero, in particular this composition is the derivative of $g \circ f$.
-
-REPLY [2 votes]: Think of it in terms of causality & superposition.
-$$z = f(x,y)$$
-If you keep $y$ fixed then $\frac{dz}{dt} = \frac{\partial f}{\partial x} \cdot \frac{dx}{dt}$
-If you keep $x$ fixed then $\frac{dz}{dt} = \frac{\partial f}{\partial y} \cdot \frac{dy}{dt}$.
-Superposition says you can just add the two together.<|endoftext|>
-TITLE: Why does the discriminant of a cubic polynomial being less than $0$ indicate complex roots?
-QUESTION [22 upvotes]: The discriminant $\Delta = 18abcd - 4b^3d + b^2 c^2 - 4ac^3 - 27a^2d^2$ of the cubic polynomial $ax^3 + bx^2 + cx+ d$ indicates not only if there are repeated roots when $\Delta$ vanishes, but also that there are three distinct, real roots if $\Delta > 0$, and that there is one real root and two complex roots (complex conjugates) if $\Delta < 0$.
-Why does $\Delta < 0$ indicate complex roots? I understand that because of the way that the discriminant is defined, it indicates that there is a repeated root if it vanishes, but why does $\Delta$ greater than $0$ or less than $0$ have special meaning, too?
-
-REPLY [7 votes]: These implications are reached by considering the three different cases for the roots $\{ r_1, r_2, r_3 \}$ of the polynomial: repeated root, all distinct real roots, or two complex roots and one real root. Recall that the discriminant factors over the roots as $\Delta = a^4 (r_1 - r_2)^2 (r_1 - r_3)^2 (r_2 - r_3)^2$.
-When one of the roots is repeated, say $r_1$ and $r_2$, then it is clear that the discriminant is $0$ because the $r_1 - r_2$ term of the product is $0$.
-When one root is a complex number $\rho = x+ yi$, then by the complex conjugate root theorem, $\overline{\rho} = x - yi$ is also a root. By the same theorem, the remaining third root must be real. Evaluating the product in the discriminant for this case,
-$$
\begin{align*}
(\rho - \overline{\rho})^2 (\rho - r_3)^2 (\overline{\rho} - r_3)^2
&= (2yi)^2 (x + yi - r_3)^2 (x - yi - r_3)^2
\\ &= -4y^2 [((x - r_3) + yi) ((x - r_3) - yi) ]^2
\\ &= -4y^2 ((x - r_3)^2 + y^2)^2
\end{align*}
$$
-which is strictly negative, since $y \neq 0$.
-Finally, when all roots are real, the product is clearly positive.
-Putting it all together, $\Delta$:
-
-less than $0$ implies that one root is complex;
-equal to $0$ implies that one root is repeated;
-greater than $0$ implies that all roots are distinct and real.<|endoftext|>
-TITLE: How can there be explicit polynomial equations for which the existence of integer solutions is unprovable?
-QUESTION [10 upvotes]: This answer suggests that there are explicit polynomial equations for which the existence
(or nonexistence) of integer solutions is unprovable. How can this be?
-
-REPLY [3 votes]: The answer to this question depends on how the problem is defined, but the answer is no, at least without defining the problem in a misleading way.
-Since my first solution was completely off the mark, I have deleted it and posted this new one.
-Consider a polynomial $p$. If it has an integer solution, then the solution will eventually be found by systematically searching through all candidates. So if it is impossible to prove the existential status, there must be no solution.
-Now we know from this link that there is a polynomial, $q$, that is unsolvable in the integers iff ZFC is consistent. It is well known that ZFC cannot prove its own consistency. So if ZFC is consistent, then $q$ is unsolvable, but we cannot prove this, as then we could prove the consistency of ZFC. So it seems like it is accurate to say that if mathematics is consistent, we have a polynomial with no integer roots, but we can't prove it. However, if we are assuming maths is consistent, we can use this to prove that the equation is unsolvable (indeed that is what we have done). So, it really isn't an accurate statement at all.
-To further clarify, when considering mathematical truth, there are two basic ways of viewing it. The first is where we are assuming that our axioms are true, which necessarily means assuming consistency. If we show any problem is equivalent to consistency, then we consider it to be true.
-The other is where we are considering a formal set of statements in which the axioms have been defined to be true, and seeing which statements can be derived from them. From this viewpoint, we don't actually know whether the axioms are consistent or not. In fact, Godel's second incompleteness theorem shows that no "non-trivial" axiomatic system can prove its own consistency. So showing a problem is equivalent to consistency is actually the same as showing that the problem is unprovable by the axiomatic system.
-The confusion comes from assuming ZFC is consistent to eliminate one possibility in a choice, yet not allowing this assumption to be used as an axiom in the proofs.<|endoftext|>
-TITLE: Why is the decimal representation of $\frac17$ "cyclical"?
-QUESTION [31 upvotes]: $\frac17 = 0.(142857)$...
-with the digits in the parentheses repeating.
-I understand that the reason it's a repeating fraction is because $7$ and $10$ are coprime. But this...cyclical nature is something that is not observed by any other reciprocal of any natural number that I know of (besides multiples of $7$). (if I am wrong, I hope that I may find others through this question)
-By "cyclical," I mean:
-
-1/7 = 0.(142857)...
-2/7 = 0.(285714)...
-3/7 = 0.(428571)...
-4/7 = 0.(571428)...
-5/7 = 0.(714285)...
-6/7 = 0.(857142)...
-
-Where all of the repeating digits are the same string of digits, but shifted. Not just a simple "they are all the same digits re-arranged", but the same digits in the same order, but shifted.
-Or perhaps more strikingly, from the wikipedia article:
-
-1 × 142,857 = 142,857
-2 × 142,857 = 285,714
-3 × 142,857 = 428,571
-4 × 142,857 = 571,428
-5 × 142,857 = 714,285
-6 × 142,857 = 857,142
-
-What is it about the number $7$ in relation to the base $10$ (and its prime factorization $2\cdot 5$?) that allows its reciprocal to behave this way? Is it (and its multiples) unique in having this property?
-Wikipedia has an article on this subject, and gives a form for deriving them and constructing arbitrary ones, but does little to show the "why", and finding what numbers have cyclic inverses.
-
-REPLY [24 votes]: For a prime p, the length of the repeating block of $\frac{1}{p}$ is the least positive integer k for which $p|(10^k-1)$. As in mau's answer, $k|(p-1)$, so $k\leq p-1$. When $k=p-1$, then $\frac{1}{p}$ and its multiples behave as discussed in the question.
-Of the first 100 primes, this is true for 7, 17, 19, 23, 29, 47, 59, 61, 97, 109, 113, 131, 149, 167, 179, 181, 193, 223, 229, 233, 257, 263, 269, 313, 337, 367, 379, 383, 389, 419, 433, 461, 487, 491, 499, 503, 509, 541 (sequence A001913 in OEIS).
-(List generated in Mathematica using Select[Table[Prime[n], {n, 1, 100}], # - 1 == Length[RealDigits[1/#][[1]][[1]]]&].)
-
-REPLY [11 votes]: It works with 1/19 = 0.(052631578947368421) too, while n/13 has two cycles: 1/13 = 0.(076923), 2/13 = 0.(153846), 3/13 = 0.(230769), 4/13 = 0.(307692), 5/13 = 0.(384615), and so on.
-That a cycle must appear when you have a prime number p different from the base in which we work (so in base 10 different from 2 and 5) is clear: if you perform the long division 1/p, sooner or later the partial remainders must repeat, and from that point on the quotient digits repeat themselves. The length of the cycle must be a divisor of p-1: it may be short (think of 1/11 = 0.(09) ) or have the maximum possible length like the cases of 7 and 19.
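-(An illustration added here, not from the original answers: the cycle length is exactly the multiplicative order of $10$ mod $p$, the least $k$ with $p \mid 10^k-1$, which is easy to compute directly. A short Python sketch:)
-def period(p):
-    # least k >= 1 with 10^k = 1 (mod p); assumes p is a prime other than 2 and 5
-    k, r = 1, 10 % p
-    while r != 1:
-        r = (r * 10) % p
-        k += 1
-    return k
-# period(7) == 6 and period(19) == 18 (full period), while period(11) == 2, period(13) == 6
-full = [p for p in [7, 11, 13, 17, 19, 23] if period(p) == p - 1]
-# full == [7, 17, 19, 23], matching the start of A001913 quoted above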
-Wikipedia has an article on Cyclic numbers, and another example is here;
-unfortunately no sufficient condition is given for a number to have a cyclic reciprocal.<|endoftext|>
-TITLE: In how many different ways can I sort balls of two different colors
-QUESTION [15 upvotes]: Let's say I have 4 yellow and 5 blue balls. How do I calculate in how many different orders I can place them? And what if I also have 3 red balls?
-
-REPLY [3 votes]: For some reason I find it easier to think in terms of letters of a word being rearranged, and your problem is equivalent to asking how many permutations there are of the word YYYYBBBBB.
-The formula for counting permutations of words with repeated letters (whose reasoning has been described by Noldorin) gives us the correct answer of 9!/(4!5!) = 126.<|endoftext|>
-TITLE: Are there any interesting semigroups that aren't monoids?
-QUESTION [50 upvotes]: Are there any interesting and natural examples of semigroups that are not monoids (that is, they don't have an identity element)?
-To be a bit more precise, I guess I should ask if there are any interesting examples of semigroups $(X, \ast)$ for which there is not a monoid $(X, \ast, e)$ where $e$ is in $X$. I don't consider an example like the set of real numbers greater than $10$ (considered under addition) to be a sufficiently 'natural' semigroup for my purposes; if the domain can be extended in an obvious way to include an identity element then that's not what I'm after.
-
-REPLY [4 votes]: I don't know if this counts as interesting, but a simple example is what C programmers know as the comma operator: Ignore the first argument and return the second. Or written as a multiplication rule: $ab=b$ for all $a,b\in S$. This is easily shown to be a semigroup, but as long as there are at least two elements in $S$, this is not a monoid, as with neutral element $e$ you'd have for all $a\in S$ the identity $a=ae=e$.<|endoftext|>
-TITLE: What is the meaning of the double turnstile symbol ($\models$)?
-QUESTION [36 upvotes]: What's the meaning of the double turnstile symbol in logic or mathematical notation? :
-
-$\models$
-
-REPLY [62 votes]: Just to enlarge on Harry's answer:
-Your symbol denotes one of two specified notions of implication in formal logic
-$\vdash$
-the turnstile symbol denotes syntactic implication (syntactic here means related to syntax, the structure of a sentence), where the 'algebra' of the logical system in play (for example sentential calculus) allows us to 'rearrange and cancel' the stuff we know on the left into the thing we want to prove on the right.
-An example might be the classic "all men are mortal $\wedge$ socrates is a man $\vdash$ socrates is mortal" ('$\wedge$' of course here just means 'and'). You can almost imagine cancelling out the 'man bit' on the left to just give the sentence on the right (although the truth may be more complex...).
-
-$\models$
-the double turnstile, on the other hand, is not so much about algebra as meaning (formally it denotes semantic implication)- it means that any interpretation making the stuff we know on the left true must also make the thing we want to prove on the right true.
-An example would be if we had an infinite set of sentences: $\Gamma$:= {"1 is lovely", "2 is lovely", ...} in which all numbers appear, and the sentence A= " the natural numbers are precisely {1,2,...}" listing all numbers. Any interpretation would give us B="all natural numbers are lovely". So $\Gamma$, A $\models$ B.
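-(A concrete aside of mine, not part of the original answer: in the propositional case, where only finitely many atoms occur, $\models$ can be checked by brute force over all valuations. A minimal Python sketch:)
-from itertools import product
-def entails(premises, conclusion, atoms):
-    # Gamma |= A: every valuation making all premises true also makes the conclusion true
-    for values in product([False, True], repeat=len(atoms)):
-        v = dict(zip(atoms, values))
-        if all(p(v) for p in premises) and not conclusion(v):
-            return False
-    return True
-# modus ponens as a semantic claim: P, P -> Q |= Q
-assert entails([lambda v: v['P'], lambda v: (not v['P']) or v['Q']],
-               lambda v: v['Q'], ['P', 'Q'])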
-
-Now, the goal of any logician trying to set up a formal system is to have $\Gamma \vdash A \iff \Gamma \models A$, meaning that the 'algebra' must line up with the interpretation, and this is not something we can take as given. Take the second example above- can we be sure that algebraic operations can 'parse' those infinitely many sentences and make the simple sentence on the right?? (this is to do with a property called compactness)
-The goal can be split into two distinct subgoals:
-Soundness: $A \vdash B \Rightarrow A \models B$
-Completeness: $A \models B \Rightarrow A \vdash B$
-where the first stops you proving things that aren't true when we interpret them, and the second means that everything we know to be true on interpretation, we must be able to prove.
-First-order predicate calculus, for example, can be proved complete (and was, in Godel's lesser-known but celebrated completeness theorem), but for other systems Godel's incompleteness theorem gives us a terrible choice between the two.
-
-In summary: The interplay of meaning and axiomatic machine mathematics, captured by the difference between $\models$ and $\vdash$, is a subtle and interesting thing.<|endoftext|>
-TITLE: Looking for a book similar to "Think of a Number"
-QUESTION [6 upvotes]: Many years ago, I had read a book entitled "Think of a Number" by Malcolm E. Lines, and it was an eminently readable and thought provoking book. In the book, there were topics like Fibonacci numbers (along with live examples from nature) and the Golden Section. Now I'm looking for a similar book. Can anyone recommend me one?
-
-REPLY [2 votes]: Another more general pop math book covering mathematical curiosities is Coincidences, Chaos, and All that Math Jazz. It is at the level you're talking about. A more focused book on the golden ratio is that by Mario Livio. One nice feature of this second book is that he does a nice job of pointing out places where people believe in coincidences that aren't really there. He stays nicely objective, where many others fail to.<|endoftext|>
-TITLE: Cardinality of set of real continuous functions
-QUESTION [84 upvotes]: I believe that the set of all $\mathbb{R\to R}$ continuous functions has cardinality $\mathfrak c$, the cardinality of the continuum. However, I read in the book "Metric spaces" by Ó Searcóid that the set of all $[0, 1]\to\mathbb{R}$ continuous functions has cardinality greater than $\mathfrak c$:
-
-"It is demonstrated in many textbooks that $\mathbb{Q}$
is countable, that $\mathbb{R}$ is uncountable, that every non-degenerate interval is uncountable, that the collection of continuous functions defined on $[0,1]$ is of a greater cardinality than $\mathbb{R}$, and that there are sets of greater and greater cardinality."
-
-I understand that (via composition with the continuous function $\tan$ or $\arctan$) these sets of continuous functions have the same cardinality. Therefore, which claim is correct, and how do I prove this?
-
-REPLY [5 votes]: On the one hand it is clear that the set of all the continuous functions from $\mathbb{R}$ to $\mathbb{R}$, which shall be denoted by $\mathcal{C}^0$, is such that:
-$$|\mathbb{R}|\le|\mathcal{C}^0|$$
-(because for each $r\in \mathbb{R}$, we simply consider the constant function $f_r:\mathbb{R}\longrightarrow\mathbb{R}$ defined by: for each $x\in \mathbb{R},\;f_r(x)=r$. Obviously, the assignment $r\longmapsto f_r$ is injective).
-On the other hand, we know that $\mathbb{R}$ is a Hausdorff space, so if $f,g\in\mathcal{C}^0$ are two continuous functions such that they agree on the (dense) subset of the rational numbers, then $f=g$ (cf Stephen Willard, General Topology, 1970, Addison Wesley, page 89, 13.14).
-This allows us to consider the function $F:\mathcal{C}^0\longrightarrow ^\mathbb{Q}\mathbb{R}$ defined by: for each $f\in\mathcal{C}^0,\;F(f)=f|_\mathbb{Q}$ (where $^\mathbb{Q}\mathbb{R}$ denotes the set of all the functions from $\mathbb{Q}$ to $\mathbb{R}$).
-From the previous comment, it is clear that $F$ is then an injective function, therefore:
-$$|\mathcal{C}^0|\le|^\mathbb{Q}\mathbb{R}|={\big(2^{\aleph_0}\big)}^{\aleph_0}=2^{\aleph_0\times\aleph_0}=2^{\aleph_0}=|\mathbb{R}|$$
-From the Cantor-Bernstein theorem we conclude that $|\mathcal{C}^0|=|\mathbb{R}|$.<|endoftext|>
-TITLE: Software for solving geometry questions
-QUESTION [12 upvotes]: When I used to compete in Olympiad Competitions back in high school, a decent number of the easier geometry questions were solvable by what we called a geometry bash. Basically, you'd label every angle in the diagram with a variable, then use a limited set of basic geometry operations to find relations between the elements, eliminate equations and then you'd eventually get the result. It seems like the kind of thing you could program a computer to do. So, I'm curious, does there exist any software to do this? I know there is lots of software for solving equations, but is there anything that lets you actually input a geometry problem without manually converting to equations? I'm not looking for anything too advanced, even seeing just an attempt would be interesting. If there is anything decent, I think it'd be rather interesting to run the results on various competitions and see how many of the questions it solves.
-
-REPLY [8 votes]: You might be interested in Doron Zeilberger's website. He has a page entitled "Plane Geometry: An Elementary Textbook (Circa 2050)" where he envisioned a world in which computers can derive all of plane geometry without human intervention or interference. The accompanying Maple package proves many statements by computer.
-The page exists at http://www.math.rutgers.edu/~zeilberg/PG/gt.html.<|endoftext|>
-TITLE: If all sets were finite, how could the real numbers be defined?
-QUESTION [41 upvotes]: An extreme form of constructivism is called finitism. In this form, unlike the standard axiom system, infinite sets are not allowed. There are important mathematicians, such as Kronecker, who supported such a system. I can see that the natural numbers and rational numbers can easily be defined in a finitist system, by easy adaptations of the standard definitions. But in order to do any significant mathematics, we need to have definitions for the irrational numbers that one is likely to encounter in practice, such as $e$ or $\sqrt{2}$. In the standard constructions, real numbers are defined as Dedekind cuts or Cauchy sequences, which are actually sets of infinite cardinality, so they are of no use here. My question is, how would a real number like those be defined in a finitist axiom system (of course we have no hope of constructing the entire set of real numbers, since that set is uncountably infinite).
-After doing a little research I found a constructivist definition in Wikipedia http://en.wikipedia.org/wiki/Constructivism_(mathematics)#Example_from_real_analysis , but we need a finitist definition of a function for this definition to work (because in the standard system, a function over the set of natural numbers is actually an infinite set).
-So my question boils down to this: How can we define a function f over the natural numbers in a finitist axiom system?
-Original version of this question, which had been closed during private beta, is as follows:
-
-If all sets were finite, what would mathematics be like?
-If we replace the axiom that 'there
 exists an infinite set' with 'all sets
 are finite', what would mathematics be
 like? My guess is that all the theory
 that has practical importance would
 still show up, but everything would be
 very very unreadable for humans. Is
 that true?
-We would have the natural numbers,
 although the class of all natural
 numbers would not be a set. In the
 same sense, we could have the rational
 numbers. But could we have the real
 numbers? Can the standard
 constructions be adapted to this
 setting?
-
-REPLY [4 votes]: Finitism still allows you to use infinitary definitions of real numbers, because a finitist is content with finite proofs even if the concepts mentioned by those proofs would seem to require infinite sets. For example, a finitist would still recognize that "ZFC proves that every bounded nonempty set of reals has a least upper bound" even if the finitist does not accept that infinite sets exist.
-Proofs in various infinitary systems are of interest to finitists because of conservation results. In this setting, a conservation result would show that if a sentence about the natural numbers of a certain form is provable in some infinitary system, the sentence is actually provable in a finitistic system. For example, there are finitistic proofs that if any $\Pi^0_2$ sentence about the natural numbers is provable in the infinitary system $\text{WKL}_0$ of second order arithmetic, that sentence is also provable in the finitistic system $\text{PRA}$ of primitive-recursive arithmetic.
-Many consistency results are proven finitistically. For example, there is a finitistic proof that if ZF set theory without the axiom of choice is consistent, then ZFC set theory with the axiom of choice is also consistent. This proof studies infinitary systems of set theory, but the objects actually handled are finite formal proofs rather than infinite sets.<|endoftext|>
-TITLE: Can there be two distinct, continuous functions that are equal at all rationals?
-QUESTION [64 upvotes]: Akhil showed that the cardinality of the set of real continuous functions is the same as the continuum, using as a step the observation that continuous functions that agree at rational points must agree everywhere, since the rationals are dense in the reals.
-This isn't an obvious step, so why is it true?
-
-REPLY [5 votes]: Sketch of an alternative proof.
-First, recall (or see for the first time) the following fact:
-Given a continuous function $h: \mathbb{R} \rightarrow \mathbb{R}$, the set $K_h := \{x \in \mathbb{R}: h(x) = 0\}$ is closed.
-"Recalling" this fact might seem just like sweeping details under the rug; indeed, it is Exercise $4.3.7$ in Stephen Abbott's introductory textbook Understanding Analysis.
-Nevertheless, one can then proceed as follows:
-Let $f, g: \mathbb{R} \rightarrow \mathbb{R}$ be continuous, real-valued functions that agree on $\mathbb{Q}$.
-Furthermore, let $h = f - g$. Then $h$ is the difference of continuous functions, hence continuous itself; we can now use the fact above to conclude that $\mathbb{Q} \subset K_h \subset \mathbb{R}$.
-In particular, $K_h$ is a closed set of real numbers that contains $\mathbb{Q}$.
-We are given that the rationals are dense in the reals, i.e., $cl(\mathbb{Q}) = \mathbb{R}$.
-Therefore, $K_h = \mathbb{R}$, which means that for all $x \in \mathbb{R}$, we have $h(x) = 0$.
-By the definition of $h$, this means for all $x \in \mathbb{R}$ we have $f(x) - g(x) = 0$, i.e., $f(x) = g(x)$.
-This "proves" the desired result. QED
-N.B. The problem posed here is also in Abbott's text: it is Exercise $4.3.8(b)$.<|endoftext|>
-TITLE: Why does a minimal prime ideal consist of zerodivisors?
-QUESTION [49 upvotes]: Let $A$ be a commutative ring. Suppose $P \subset A$ is a minimal prime ideal. Then it is a theorem that $P$ consists of zero-divisors.
-This can be proved using localization, when $A$ is noetherian: $A_P$ is local artinian, so every element of $PA_P$ is nilpotent. Hence every element of $P$ is a zero-divisor. (As Matt E has observed, when $A$ is nonnoetherian, one can still use a similar argument: $PA_P$ is the only prime in $A_P$, hence is the nilradical of $A_P$ by elementary commutative algebra.)
-Can this be proved without using localization?
-
-REPLY [23 votes]: Denote set complements in $\rm A $ by $\rm\,\bar T = A - T.\, $ Consider the monoid $\rm\,S\,$ generated by $\rm\,\bar P\,$ and $\rm\,\bar Z,\ $ for $\rm\,Z = $ all zero-divisors in $\rm A $ (including $0).\,$ $\rm\,0\not\in S\ $ (else $\rm\, 0 = ab,$ $\rm\ a\in \bar P,$ $\rm\ b\in \bar Z\ $ $\rm \Rightarrow b\in Z),\,$ so we can enlarge $\,0\,$ to an ideal $\rm\,Q\,$ maximally disjoint from $\rm\,S.\, $ Since $\rm\,S\,$ is a monoid, $\rm\,Q\,$ is prime. $\rm\, S\,\supset\, \bar P \cup \bar Z\ \Rightarrow\ Q \subset \bar S \subset P\cap Z,\, $ so by minimality of $\rm\,P\,$ we infer $\rm\, P = Q \subset Z.\quad$ QED<|endoftext|>
-TITLE: How do you prove that a group specified by a presentation is infinite?
-QUESTION [33 upvotes]: The group:
-$$
G = \left\langle x, y \; \left| \; x^2 = y^3 = (xy)^7 = 1\right. \right\rangle
$$
-is infinite, or so I've been told. How would I go about proving this? (To prove finiteness of a finitely presented group, I could do a coset enumeration, but I don't see how this helps if I want to prove that it's infinite.)
-
-REPLY [4 votes]: One general way to do this (which is not guaranteed to work, as Noah points out) is to exhibit infinitely many different homomorphic images of the group you start with. In this case, any group generated by an element of order $2$ and an element of order $3$ whose product has order $7$ is a homomorphic image of the group $G$. Such a group is a Hurwitz group, and these are well studied, for example in the work of M. Conder and of G. Higman, among others. Infinitely many finite simple groups are known to be Hurwitz groups, I believe.<|endoftext|>
-TITLE: How can I tell which matrix decomposition to use for OLS?
-QUESTION [5 upvotes]: I want to find the least squares solution to $\boldsymbol{Ax}=\boldsymbol{b}$ where $\boldsymbol{A}$ is a highly sparse square matrix.
-I found two methods that look like they might lead me to a solution: QR factorization, and singular value decomposition. Unfortunately, I haven't taken linear algebra yet, so I can't really understand most of what those pages are saying.
I can calculate both in Matlab though, and it looks like the SVD gave me a smaller squared error. Why did that happen? How can I know which one I should be using in the future?
-
-REPLY [3 votes]: It's all dependent on the 2-norm condition number of your matrix (the ratio of the largest singular value to the smallest); as a nice rule of thumb, if the base-10 logarithm of the condition number is much less than the number of digits your computer uses to store numbers, QR (and maybe even the normal equations) might be sufficient. Otherwise, SVD is a "safer bet": it always works, but is much slower than the other methods for solving least squares problems.
-Good references would be the classic "Solving Least Squares Problems" by Lawson and Hanson, and the newer "Numerical Methods for Least Squares Problems" by Björck.
-
-To add to the answer I gave previously, one way you can proceed for a matrix A whose conditioning you don't know would be as follows:
-
-Compute the QR decomposition of A.
-Estimate the condition number of R.
-If R is well conditioned (condition number is "small enough"), stop; else
-Compute the singular value decomposition of $R=U\Sigma V^T$.
-Multiply Q and U to get the SVD of A.
-
-The advantage of proceeding in this manner is that if your matrix is badly conditioned enough to necessitate the use of SVD, the algorithm for computing the SVD has to handle only a triangular matrix instead of the original matrix (which may have more rows than columns as is usual in least-squares applications).<|endoftext|>
-TITLE: Why are the only division algebras over the real numbers the real numbers, the complex numbers, and the quaternions?
-QUESTION [55 upvotes]: Why are the only (associative) division algebras over the real numbers the real numbers, the complex numbers, and the quaternions?
-Here a division algebra is a (finite-dimensional) associative algebra where every nonzero element is invertible (like a field, but without assuming commutativity of multiplication).
-This is an old result proved by Frobenius, but I can't remember how the argument goes. Anyone have a quick proof?
-
-REPLY [2 votes]: There is another proof using the theory of central simple algebras and quaternion algebras.
-Denote by $D$ such a division algebra. Note that a division algebra is always simple. Moreover, since the center of a division algebra is a field, $D$ is central simple over either $\mathbb{C}$ or $\mathbb{R}$.
-As there is no nontrivial division algebra over an algebraically closed field, $D$ must be $\mathbb{C}$ if $Z(D)=\mathbb{C}$.
-Otherwise, $D$ is central simple over $\mathbb{R}$, and then $\mathbb{C}$ splits $D$. Hence we obtain the following:
-$$
\sqrt{\mathbf{dim}_{\mathbb{R}}(D)} = \mathbf{ind}_{\mathbb{R}}(D)\mid[\mathbb{C}:\mathbb{R}]=2
$$
-$D$ is nontrivial, hence the dimension of $D$ is 4. Then $D$ must be $\mathbb{H}$ by Wedderburn's theorem.<|endoftext|>
-TITLE: Can you find a domain where $ax+by=1$ has a solution for all $a$ and $b$ relatively prime, but which is not a PID?
-QUESTION [17 upvotes]: In Intro Number Theory a key lemma is that if $a$ and $b$ are relatively prime integers, then there exist integers $x$ and $y$ such that $ax+by=1$. In a more advanced course instead you would use the theorem that the integers are a PID, i.e. that all ideals are principal. Then the old lemma can be used to prove that "any ideal generated by two elements is actually principal." Induction then says that any finitely generated ideal is principal.
But, what if all finitely generated ideals are principal but there are some ideals that aren't finitely generated? Can that happen?
-
-REPLY [2 votes]: I found this 6-year-old question when I was searching about Bézout domains and I think I can say something about the question. Well, more exactly I think the work of P. M. Cohn in his paper "Bézout rings and their subrings" deserves to be presented.
-First, let me give some terminology. Given an integral domain $R$, we say that $a,b\in R$ are coprime if $\gcd(a,b)$ exists and $\gcd(a,b)=1$. On the other hand, we say that $a$ and $b$ are comaximal if there are $x,y\in R$ such that $ax+by=1$.
-It's easy to see that comaximal $\implies$ coprime, but the other implication isn't necessarily true. Domains where coprime $\implies$ comaximal were called by Cohn Pre-Bézout domains. As the names suggest, these aren't necessarily Bézout domains, because we only have the "Bézout relationship" for coprime elements. But, it turns out that we can use Pre-Bézout domains to characterize Bézout domains among the class of GCD domains. More exactly, the following is true:
-Theorem: Let $R$ be an integral domain. TFAE:
-i) $R$ is a Bézout domain.
-ii) $R$ is a GCD Pre-Bézout domain.
-Proof: i)$\implies$ii) It's immediate.
-ii)$\implies$i) Let $a,b\in R$. WLOG, we can suppose that $a\neq 0\neq b$. As $R$ is a GCD domain, then $d=\gcd(a,b)$ exists. By an elementary property of gcds we have that $1=\gcd(a/d,b/d)$ and since $R$ is Pre-Bézout then $a/d$ and $b/d$ are comaximal, which means that there are $x,y\in R$ such that $$\frac{a}{d}x+\frac{b}{d}y=1.$$ Finally, if we multiply the above equality by $d$ we get $$ax+by=d.$$
-Thus $d$ is an $R$-linear combination of $a$ and $b$. Hence, $R$ is a Bézout domain.
-In conclusion, according to Cohn, the class of domains you are looking for are known as Pre-Bézout domains, and these aren't necessarily Bézout domains, let alone PIDs.<|endoftext|>
-TITLE: What is "ultrafinitism" and why do people believe it?
-QUESTION [89 upvotes]: I know there's something called "ultrafinitism" which is a very radical form of constructivism; I've heard it said that its adherents don't believe that really large integers actually exist. Could someone make this a little bit more precise? Are there good reasons for taking this point of view? Can you actually get math done from that perspective?
-
-REPLY [9 votes]: Greg Egan has some fun with this idea in one of his best short stories, "Luminous" (published in the collection of the same name). A pair of researchers are exploring an apparent "defect" in mathematics:
-
-"You still don't get it, do you, Bruno? You're still thinking like a Platonist. The universe has only been around for fifteen billion years. It hasn't had time to create infinities. The far side can't go on forever - because somewhere beyond the defect, there are theorems that don't belong to any system. Theorems that have never been touched, never been tested, never been rendered true or false."
-
-Terrific stuff!<|endoftext|>
-TITLE: Classifying Quasi-coherent Sheaves on Projective Schemes
-QUESTION [9 upvotes]: I know some references where I can find this, but they seem tedious. Both Hartshorne and Ueno cover this.
-I am wondering if there is an elegant way to describe these. If this task is too difficult in general, how about just $\mathbb{P}^n$?
-Thanks!
-
-REPLY [10 votes]: Quasi-coherent sheaves on affine schemes (say $Spec(A)$) are obtained by taking an $A$-module $M$ and the associated sheaf (by localizing $M$).
This gives an equivalence of categories between $A$-modules and q-c sheaves on $Spec(A)$.
-Let $R$ be a graded ring, $R = R_0 + R_1 + \dots$ (direct sum). Then we can, given a graded $R$-module $M$, consider its associated sheaf $\tilde{M}$. The stalk of this at a homogeneous prime ideal $P$ is defined to be the localization $M_{(P)}$, which is defined as generated by quotients $m/s$ for $s$ homogeneous of the same degree as $m$ and not in $P$.
-In short, we get sheaves of modules on the affine scheme just as we get the normal sheaves of rings. We get sheaves of modules on the projective scheme in the same homogeneous localization way as we get the sheaf of rings.
-However, it's no longer an equivalence of categories. Why? Say you had a graded module $M= M_0 + M_1 + \dots$ (in general, we allow negative gradings as well). Then it is easy to check that the sheaves associated to $M$ and $M' = M_1 + M_2 + \dots$ are exactly the same.
-Nevertheless, it is possible to get every quasi-coherent sheaf on $Proj(R)$ for $R$ a graded ring in this way. See Proposition II.5.15 in Hartshorne.<|endoftext|>
-TITLE: Applications of the "soft maximum"
-QUESTION [7 upvotes]: There is a little triviality that has been referred to as the "soft maximum" over on John Cook's Blog that I find to be fun, at the very least.
-The idea is this: given a list of values, say $x_1,x_2,\ldots,x_n$ , the function
-$g(x_1,x_2,\ldots,x_n) = \log(\exp(x_1) + \exp(x_2) + \cdots + \exp(x_n))$
-returns a value very near the maximum in the list.
-This happens because the exponentiation exaggerates the differences between the $x_i$ values. For the largest $x_i$, $\exp(x_i)$ will be $really$ large. This largest exponential will significantly outweigh all of the others combined. Taking the logarithm, i.e. undoing the exponentiation, we essentially recover the largest of the $x_i$'s. (Of course, if two of the values were very near one another, we aren't guaranteed to get the true maximum, but it won't be far off!)
-About this, John Cook says: "The soft maximum approximates the hard maximum but it also rounds off the corners." This couldn't really be said any better.
-I recall trying to cleverly construct sequences for proofs in advanced calculus where not-everywhere-differentiable operations would have been great to use if they didn't have that pesky non-differentiable trait. I can't recall a specific instance where I was tempted to use $\max(x_i)$, but it seems at least plausible that it would have come up.
-Has anyone used this before or have a scenario off hand where it would be useful?
-
-REPLY [3 votes]: This is close to being the flipside of the geometric mean, which is the nth root of the product of the numbers, and can be expressed as the exponential of the average of the logarithms.
-Another pair of dual mean measures is the regular mean and the harmonic mean (n divided by the sum of the reciprocals).
-I say the soft maximum is close to being the flipside of the geometric mean, but it lacks the good property that all of the others have of taking a list of the same value to that value (for definedness, let all values be positive). Let's call the hyperbolic mean the "soft maximum" of the nth roots of the terms in the list: then this has that good property.
-The hyperbolic mean emphasises large values in a roughly symmetric manner to the way that the geometric mean emphasises small values (which is always smaller than the regular mean), and is, of course, much smaller for a long list of large values.
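-(A numerical aside of mine, not part of the original reply: the question's $g$ is two lines of Python, and a quick experiment shows both how close it comes to the hard maximum and how it fails the constant-list property discussed above.)
-from math import exp, log
-def soft_max(xs):
-    # g(x_1, ..., x_n) = log(exp(x_1) + ... + exp(x_n))
-    return log(sum(exp(x) for x in xs))
-# soft_max([1.0, 2.0, 5.0]) is about 5.05, close to max = 5
-# soft_max([5.0, 5.0]) equals 5 + log(2): a constant list is not mapped to that constant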
-So I say, consider it an amplified version of a useful addition to the family of mean operators.<|endoftext|>
-TITLE: Describe the locus in the complex plane of the zeros of a quartic polynomial as the constant term varies
-QUESTION [11 upvotes]: (Diagram and setup from UCSMP Precalculus and Discrete Mathematics, 3rd ed.)
-Above is a partial plot of the zeros of $p_c(x)=4x^4+8x^3-3x^2-9x+c$. The text stops at showing the diagram and does not discuss the shape of the locus of the zeros or describe the resulting curves. Are the curves in the locus some specific (named) type of curve? Is there a simple way to describe the curves (equations)?
-The question need not be limited to the specific polynomial given--a similar sort of locus is generated by the zeros of nearly any quartic polynomial as the constant term is varied.
-
-REPLY [2 votes]: We need to find all $z:\exists c\in \mathbb{R}:p_c(z)=0$.
-Some manipulation gives
-$$p_c(z)=0 \;\Leftrightarrow\; p_0(z)+c=0 \;\Leftrightarrow\; Re(p_0)+i\,Im(p_0)+c=0 \;\Leftrightarrow\; Re(p_0)+c=0 \text{ and } Im(p_0)=0$$
-Now $Im(p_0(z))=0$ is a fixed set of points on the complex plane.
-$Re(p_0)+c$ defines a surface on $\mathbb{C}$ which varies with $c$. The surfaces that we get varying $c$ are just the translations of the surface $z=Re(p_0)$ along the $z$-axis.
-Our required set of points are those that lie in the intersection of the sets
-$\{z\in \mathbb{C}:Im(p_0(z))=0\}\cap\{z\in \mathbb{C}:\exists c:Re(p_0(z))+c=0\}$
-$=\{z\in \mathbb{C}:Im(p_0(z))=0\}\cap\mathbb{C}$
-$=\{z\in \mathbb{C}:Im(p_0(z))=0\}$
-Example 1: As an example let's take $p_c(z)=z^2+c$.
-Plot of $Im(p_0(z))=2 x y = 0$ is just the two axes, as given below.
-This is the required locus.
-A plot of $Re(p_0(z))$ is also given below. This doesn't help in finding the locus though, and is attached only to see why the second set in the intersection above is the entire $\mathbb{C}$. Evidently, the points at which the moving surface meets the complex plane, as we slide it up and down, sweep out the whole complex plane. This will hold true for any polynomial, as polynomials are holomorphic functions.
-Example 2: Let's take your example $p_c(z)=4z^4+8z^3-3z^2-9z+c$.
-We have
-$$Im(p_0(z))=Im(p_0(x+iy))=16 x^3 y+24 x^2 y-16 x y^3-6 x y-8 y^3-9 y=-y \left(-16 x^3-24 x^2+16 x y^2+6 x+8 y^2+9\right)$$
-Plot of $Im(p_0(z))=0$ (which gives your locus) is shown below.<|endoftext|>
-TITLE: What is the Riemann-Zeta function?
-QUESTION [149 upvotes]: In layman's terms, as much as possible: What is the Riemann-Zeta function, and why does it come up so often with relation to prime numbers?
-
-REPLY [25 votes]: The above answers give excellent explanations about why the zeta function has close connections to number theory, but I thought I'd mention something about why the Riemann Hypothesis should matter so much.
-By taking the logarithm and then differentiating the zeta function, one gets the formula
-$$-\frac{\zeta'(s)}{\zeta(s)}=\sum_{n=1}^\infty\frac{\Lambda(n)}{n^s}$$
-for $\Re(s)>1$, where $\Lambda(n)$ is the von Mangoldt function which takes the value $\log p$ at powers of primes $p$, and is 0 everywhere else. Think of it as a weighted way of counting the primes (the prime number theorem tells us that $\log p$ is the natural weight to choose).
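-(A quick check one can run — my addition, not the answerer's: computing $\Lambda$ by factoring and summing it up to $x$ shows Chebyshev's $\psi(x) = \sum_{n \le x} \Lambda(n)$ tracking $x$, which is the prime number theorem in this weighted form. A Python sketch using sympy's factorint:)
-from math import log
-from sympy import factorint
-def mangoldt(n):
-    # Lambda(n) = log p if n is a power of a single prime p, else 0
-    factors = factorint(n)
-    return log(next(iter(factors))) if len(factors) == 1 else 0.0
-def psi(x):
-    return sum(mangoldt(n) for n in range(2, x + 1))
-# psi(1000) comes out around 996.7 -- close to 1000, as psi(x) ~ x predicts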
-Much of analytic number theory proceeds by choosing a weight of the set we wish to consider (often the primes), and then encoding this weighting in a so-called Dirichlet series (an infinite sum of the form above). We can then use analysis to study this series and get lots of useful information. -In this case, then, the function we need to study to get information about the primes is $\frac{\zeta'(s)}{\zeta(s)}$, which we can study using complex analysis. -In complex analysis, a good slogan is 'the only things that matter are zeros and poles' (effectively points where the function shoots off to infinity). -Hence to understand the prime numbers, we just need to understand the zeros and poles of $\frac{\zeta'(s)}{\zeta(s)}$ - we know about the simple pole at $s=1$, we know there aren't any other zeros where it counts, and we also know that the only other poles are at zeros of $\zeta(s)$ (roughly because dividing by zero causes infinity). -In other words, if we knew where these zeros are (i.e. the Riemann hypothesis) we can work with $\frac{\zeta'(s)}{\zeta(s)}$ in all kinds of clever ways to get good results on the prime numbers. - -More specifically, in the usual contour proof of the prime number theorem, knowing that there aren't any other zeros in $\Re(s)>1/2$ would allow us to shift the contour further to the left, reducing the error term in the result to (roughly) $O(\sqrt{x})$.<|endoftext|> -TITLE: Which books would you recommend about Recreational Mathematics? -QUESTION [14 upvotes]: By this I mean books with math puzzles and problems similar to the ones you would find in mathematical olympiads. - -REPLY [3 votes]: THE PENGUIN DICTIONARY OF CURIOUS AND INTERESTING NUMBERS, by David Wells. -Sample entry: -$199: 199 + 210n$ for $n = 0, 1, 2, 3, 4, 5, 6, 7, 8, 9$ provides the smallest $8, 9$ and $10$ primes in arithmetical progression. -Obviously you can reformulate any of the information given in the book as a problem to pose to someone else.<|endoftext|> -TITLE: What is a Markov Chain? -QUESTION [28 upvotes]: What is an intuitive explanation of Markov chains, and how they work? Please provide at least one practical example. - -REPLY [3 votes]: I had a programming project in college where we generated large amounts of psuedo-English text using Markov chains. The assignment is here, although I don't know if that link will be good forever. From that page: - -For example, suppose that [our Markov chains are of length] 2 and the sample file - contains -I like the big blue dog better than the big elephant with the big blue hat on his tusk. - -Here is how the first three words - might be chosen: - -A two-word sequence is chosen at random to become the initial prefix. - Let's suppose that "the big" is - chosen. -The first word must be chosen based on the probability that it - follows the prefix (currently "the - big") in the source. The source - contains three occurrences of "the - big". Two times it is followed by - "blue", and once it is followed by - "elephant". Thus, the next word must - be chosen so that there is a 2/3 - chance that "blue" will be chosen, and - a 1/3 chance that "elephant" will be - chosen. Let's suppose that we choose - "blue" this time. -The next word must be chosen based on the probability that it - follows the prefix (currently "big - blue") in the source. The source - contains two occurrences of "big - blue". Once it is followed by "dog", - and the other time it is followed by - "hat". 
Thus, the next word must be - chosen so that there is a 50-50 - probability of choosing "dog" vs. - "hat". Let's suppose that we choose - "hat" this time. -The next word must be chosen based on the probability that it - follows the prefix (currently "blue - hat") in the source. The source - contains only one occurrence of "blue - hat", and it is followed by "on". - Thus, the next word must be "on" (100% - probability). -Thus, the first three words in the output text would be "blue hat - on". - - -You keep going like that, generating text that is completely nonsensical, but ends up having sort of the same "tone" as the original text. For example, if your sample file is the complete text of Alice In Wonderland (one of the texts we tried it on) then your nonsense comes out kind of whimsical and Carrollian (if that's a word). If your sample file is The Telltale Heart, you get somewhat dark, morbid nonsense. -Anyway, while not a rigorous, formal, definition, I hope this helps give you a sense of what a Markov chain is.<|endoftext|> -TITLE: Explanation of method for showing that $\frac{0}{0}$ is undefined -QUESTION [31 upvotes]: (This was asked due to the comments and downvotes on this Stackoverflow answer. I am not that good at maths, so was wondering if I had made any basic mistakes) -Ignoring limits, I would like to know if this is a valid explanation for why $\frac00$ is undefined: - -$x = \frac00$ - $x \cdot 0 = 0$ -Hence There are an infinite number of values for $x$ as anything multiplied by $0$ is $0$. - -However, it seems to have got comments, with two general themes. -Once is that you lose the values of $x$ by multiplying by $0$. -The other is that the last line is: - -$x \cdot 0 = \frac00 \cdot 0$ - -as it involves a division by $0$. -Is there any merit to either argument? More to the point, are there any major flaws in my explanation and is there a better way of showing why $\frac00$ is undefined? - -REPLY [3 votes]: Suppose that division by $0$ is possible. Then consider the following equation, -$$x=0\implies1\cdot x=0\cdot x\implies \large{\large{\color{blue}{1=0}}}$$ -This implies that the successor of $0$ is equal to it and that contradicts the Peano Axioms. The above was possible because we assumed that division by $0$ is possible.<|endoftext|> -TITLE: Varying definitions of cohomology -QUESTION [8 upvotes]: So I know that given a chain complex we can define the $d$-th cohomology by taking $\ker{d}/\mathrm{im}_{d+1}$. But I don't know how this corresponds to the idea of holes in topological spaces (maybe this is homology, I'm a tad confused). - -REPLY [8 votes]: Edited to clear some things up: -Simplicial and singular (co)homology were invented to detect holes in spaces. To get an intuitive idea of how this works, consider subspaces of the plane. Here the 2-chains are formal sums of things homeomorphic to the closed disk, and 1-chains are formal sums of things homeomorphic to a line segment. The operator d takes the boundary of a chain. For example, the boundary of the closed disk is a circle. If we take d of the circle we get $0$ since a circle has no boundary. And in general it happens that $d^2 = 0$, that is boundaries always have no boundaries themselves. Now suppose we remove the origin from the plane and take a circle around the origin. This circle is in the kernel of d since it has no boundary. However, it does not bound any 2-chain in the space (since the origin is removed) and so it is not in the image of the boundary operator on two-dimensions. 
Thus the circle represents a non-trivial element in the quotient space $\ker( d ) / \operatorname{im} (d)$. -The way I have defined things makes the above a homology theory simply because the d operator decreases dimension. Cohomology is the same thing only the operator increases dimension (for example the exterior derivative on differential forms). Thus algebraically there really is no difference between cohomology and homology since we can just change the grading from $i$ to $-i$. -From a homology we can get a corresponding cohomology theory by dualizing, that is by looking at maps from the group of chains to the underlying group (e.g. $\Bbb Z$ or $\Bbb R$). Then d on the cohomology theory becomes the adjoint of the previous boundary operator and thus increases degrees.<|endoftext|> -TITLE: genericness and the Zariski topology -QUESTION [7 upvotes]: What does it mean (in a mathematically rigorous way) to claim something is "generic?" How does this coincide with the Zariski topology? - -REPLY [6 votes]: In general*, if something is "generic", it means it happens or is true "almost all of the time" or "almost everywhere". -In measure theory, for example, when you say "$P(x)$ is true for almost all $x$", this has the precise meaning that the set of $x$'s for which $P(x)$ does not hold has measure zero. -One can relate this to the Zariski topology via the fact that Zariski closed subsets of $\mathbb{C}^n$ have Lebesgue measure zero: https://mathoverflow.net/questions/25513/zariski-closed-sets-in-cn-are-of-measure-0 -See also these MO posts: -https://mathoverflow.net/questions/19688/what-does-generic-mean-in-algebraic-geometry -https://mathoverflow.net/questions/2162/what-are-the-most-important-instances-of-the-yoga-of-generic-points -*or perhaps I should say... generically ;-)<|endoftext|> -TITLE: Projective duality -QUESTION [5 upvotes]: Given a curve how do you intuitively construct the picture of its projective dual? I know points --> lines, lines--> points but for something like the swallowtail this is not really obvious. - -REPLY [2 votes]: Answer edited, in response to the comment and a second wind for explaining mathematics: -Let $F(x_0,x_1,x_2)=0$ be the equation for your curve, and take $(y_0,y_1,y_2)$ to be coordinates on $(\mathbb{P}^2)^*$. Also, assume that $F$ is irreducible and has no linear factors. -Then $y_0 x_0+y_1 x_1+y_2 x_2=0$ is the equation of a general line in $\mathbb{P}^2$ (recall, here $y_0,y_1,y_2$ are fixed, and the $x_i$ are the coordinates on the plane) and we look at the open set of $(\mathbb{P}^2)^*$ where $y_2\neq 0$. On this open set, we can solve the equation of the line for $x_2$, and look at $g(x_0,x_1)=y_2^n F(x_0,x_1,-\frac{1}{y_2}(y_0x_0+y_1x_1))$, a homogeneous polynomial of degree $n$ in $x_0,x_1$ with coefficients homogeneous polynomials in the $y_i$. This polynomial has zeros the intersections of our curve $C$ with the line $L$ we're looking at. -So we want to find points of multiplicity at least two. So how do we find multiple roots of a polynomial? We take the discriminant! Specifically, we do it for an affinization, and we get a homogeneous polynomial of degree $2n^2-n$ in the $y_i$. -All that's left is to factor the polynomial, and kill all the linear factors, just throw them away, the reasons are explained in more computational detail on my blogpost, and there I also do explicit examples, but the method for calculating the equation of the dual curve is as above.<|endoftext|> -TITLE: Has anyone ever proposed additional axioms? 
-QUESTION [8 upvotes]: According to Wikipedia, Godel's incompleteness theorem states: - -No consistent system of axioms whose - theorems can be listed by an - "effective procedure" (essentially, a - computer program) is capable of - proving all facts about the natural - numbers. - -This obviously includes our current system. So has anyone proposed any additional axioms that seem credible? - -REPLY [16 votes]: Set theory is completely full of new axioms, some of them expressing fundamental set principles, and many of them having consequences even in natural number arithmetic that are not provable without them (via consistency strength statements). -In the past fifty years of research in set theory, a major lesson that has been learned is that many many fundamental questions about set theory are simply independent of the usual ZFC axioms. This includes most all of the questions about infinite cardinal arithmetic, but also subtle questons about infinite combinatorics. For example, Is the exponential function (size of power set function) injective on infinite cardinalities? Independent of ZFC. Do Souslin trees exist? Independent of ZFC. Is the size of the smallest dominating family of functions $f:\omega\to\omega$ necessarily $\aleph_1$? Independent of ZFC. Can Lebesgue measure be more than countably additive, when CH fails? Independent of ZFC. -There are hundreds of examples. -The response to this phenomenon was naturally to investigate the various hypotheses that go beyond ZFC. So we now know a great deal about what happens when CH holds, or when it fails, or when CH fails but the dominating number is low, and so on. The effect of this is to treat these hypotheses as semi-temporary axioms within a domain of research, which may not be the same usage of axiom that you inquired about, but the effect is the same. -The powerful method of forcing, invented by Cohen in order to prove the consistency of $ZFC+\neg CH$, allows us to show that many other theories are also consistent. This method has given rise to a number of forcing axioms, such as Martin's Axiom, the Proper Forcing Axiom, Martin's Maximum and many others. -A much larger and very interesting class of axioms are provided by the large cardinal hierarchy, involving such notions as inaccessible cardinals, measurable cardinals, Woodin cardinals and so on. A curious feature of these strong axioms of infinity is that they imply certain highly regular features to occur among the projective sets of reals. Thus, although the axioms themselves seem to have nothing to do with sets of reals, they imply very nice properties for the sets of reals that we can easily define. Many set theorists take this as some kind of positive evidence for their truth. Meanwhile, the large cardinal hypotheses are intensely studied as a fundamental research effort to understand the nature of mathematical infinity. But you cannot do this without assuming that these cardinals exist, since this is provably not provable in ZFC, if consistent, and so these large cardinal hypotheses constitute new axioms.<|endoftext|> -TITLE: What functions can be represented as power series? -QUESTION [24 upvotes]: How do we know if a particular function can be represented as a power series? And once we have come up with a power series representation, how does one figure out its radius of convergence ? 
- -REPLY [4 votes]: To your question regarding radius of convergence, Wikipedia gives a good answer.<|endoftext|> -TITLE: Is there a relationship between $e$ and the sum of $n$-simplexes volumes? -QUESTION [10 upvotes]: When I look at the Taylor series for $e^x$ and the volume formula for oriented simplexes, it makes $e^x$ look like it is, at least almost, the sum of simplexes volumes from $n$ to $\infty$. Does anyone know of a stronger relationship beyond, "they sort of look similar"? -Here are some links: -Volume formula -http://en.wikipedia.org/wiki/Simplex#Geometric_properties -Taylor Series -http://en.wikipedia.org/wiki/E_%28mathematical_constant%29#Complex_numbers - -REPLY [4 votes]: The answer is, it's just a fact “cone over a simplex is a simplex” rewritten in terms of the generating function: -observe that because n-simplex is a cone over (n-1)-simplex $\frac{\partial}{\partial x}vol(\text{n-simplex w. edge x}) = vol(\text{(n-1)-simplex w. edge x})$; in other words $e(x):=\sum_n vol\text{(n-simplex w. edge x)}$ satisfies an equvation $e'(x)=e(x)$. So $e(x)=Ce^x$ -- and C=1 because e(0)=1.<|endoftext|> -TITLE: What is the most efficient way to determine if a matrix is invertible? -QUESTION [18 upvotes]: I'm learning Linear Algebra using MIT's Open Courseware Course 18.06 -Quite often, the professor says "... assuming that the matrix is invertible ...". -Somewhere in the lecture he says that using a determinant on an $n \times n$ matrix is on the order of $O(n!)$ operations, where an operation is a multiplication and a subtraction. -Is there a more efficient way? If the aim is to get the inverse, rather than just determine the invertibility, what is the most effecient way to do this? - -REPLY [6 votes]: Computing the determinant and Gaussian elimination are both fine if you are using exact computations, for instance if the entries of your matrix are rational numbers and you are using only rational numbers during the computations. The disadvantage is that the numerator and denominator can get very large indeed. So the number of operations may indeed be O(n2.376) or O(n3), but the cost of every addition and multiplication gets bigger as n grows because the numbers get bigger. -This is not an issue if you are using floating point numbers, but then you have the problem that floating point computations are not exact. Some methods are more sensitive to this than others. In particular, checking invertibility by computing the determinant is a bad idea in this setting. Gaussian elimination is better. Even better is to use the singular value decomposition, which will be treated towards the end of the MIT course.<|endoftext|> -TITLE: Circular permutations with indistinguishable objects -QUESTION [22 upvotes]: Given n distinct objects, there are $n!$ permutations of the objects and $n!/n$ "circular permutations" of the objects (orientation of the circle matters, but there is no starting point, so $1234$ and $2341$ are the same, but $4321$ is different). -Given $n$ objects of $k$ types (where the objects within each type are indistinguishable), $r_i$ of the $i^{th}$ type, there are -\begin{equation*} -\frac{n!}{r_1!r_2!\cdots r_k!} -\end{equation*} -permutations. How many circular permutations are there of such a set? - -REPLY [17 votes]: I wrote a series of blog posts which explains how to solve questions like this; the relevant one is here. The generating function you want is -$$\frac{1}{n} \sum_{d | n} (x_1^{n/d} + ... 
+ x_k^{n/d})^d \varphi \left( \frac{n}{d} \right)$$ -where the coefficient of $x_1^{r_1} ... x_k^{r_k}$ is the number you want.<|endoftext|> -TITLE: Counting how many hands of cards use all four suits -QUESTION [10 upvotes]: From a standard $52$-card deck, how many ways are there to pick a hand of $k$ cards that includes one card from all four suits? -I know that for any specific $k$, it's possible to break it up into cases based on the partitions of $k$ into $4$ parts. For example, if I want to choose a hand of six cards, I can break it up into two cases based on whether there are $(1)$ three cards from one suit and one card from each of the other three or $(2)$ two cards from each of two suits and one card from each of the other two. -Is there a simpler, more general solution that doesn't require splitting the problem into many different cases? - -REPLY [8 votes]: Count the number of hands that do not contain at least one card from every suit and subtract from the total number of k-card hands. To count the number of hands that do not contain at least one card from every suit, use inclusion-exclusion considering what suits are not in a given hand. That is, letting $N(\dots)$ mean the number of hands meeting the given criteria, $$\begin{align} -&N(\mathrm{no\ }\heartsuit)+N(\mathrm{no\ }\spadesuit)+N(\mathrm{no\ }\clubsuit)+N(\mathrm{no\ }\diamondsuit) -\\ -&\quad\quad-N(\mathrm{no\ }\heartsuit\spadesuit)-N(\mathrm{no\ }\heartsuit\clubsuit)-N(\mathrm{no\ }\heartsuit\diamondsuit)-N(\mathrm{no\ }\spadesuit\clubsuit)-N(\mathrm{no\ }\spadesuit\diamondsuit)-N(\mathrm{no\ }\clubsuit\diamondsuit) -\\ -&\quad\quad+N(\mathrm{no\ }\heartsuit\spadesuit\clubsuit)+N(\mathrm{no\ }\heartsuit\spadesuit\diamondsuit)+N(\mathrm{no\ }\heartsuit\clubsuit\diamondsuit)+N(\mathrm{no\ }\spadesuit\clubsuit\diamondsuit) -\\ -&\quad\quad-N(\mathrm{no\ }\heartsuit\spadesuit\clubsuit\diamondsuit) -\\ -&=4{39 \choose k}-6{26 \choose k}+4{13 \choose k}-{0 \choose k}. -\end{align}$$ -So, the number of hands of k cards that include at least one card from every suit is $${52 \choose k}-4{39 \choose k}+6{26 \choose k}-4{13 \choose k}+{0 \choose k}.$$ [Drop terms as appropriate for larger values of k.]<|endoftext|> -TITLE: Why does the log-log scale on my Slide Rule work? -QUESTION [12 upvotes]: For a long time I've eschewed bulky and inelegant calculators for the use of my trusty trig/log-log slide rule. For those unfamiliar, here is a simple slide rule simulator using Javascript. -To demonstrate, find the $LL_3$ scale, which is on the back of the virtual one. Let's say we want to solve $3^n$. -First, you would move the cursor (the red line) over where $3$ is on the $LL_3$ scale. Then, you would slide the middle slider until the $1$ on the $C$ scale is lined up to the cursor. -And voila, your slide rule is set up to find $3^n$ for any arbitrary $n$. For example, to find $3^2$, move the cursor to $2$ on the $C$ scale, and your answer is what the cursor is on on the $LL_3$ scale ($9$). Move your cursor to $3$ on $C$, and it should be lined up with $27$ on $LL_3$. To $4$ on C, it is on $81$ on $LL_3$. -You can even do this for non-integer exponents ($1.3,\cdots$ etc.) -You can also do this for exponents less than one, by using the $LL_2$ scale. For example, to do $3^{0.5}$, you would find $5$ on the $C$ scale, and look where the cursor is lined up at on the $LL_2$ scale (which is about $1.732$). -Anyways, I was wondering if anyone could explain to me how this all works? It works, but...why? 
What property of logarithms and exponents (and logarithms of logarithms?) allows this to work? -I already understand how the basics of the Slide Rule works ($\ln(m) + \ln(n) = \ln(mn)$), with only multiplication, but this exponentiation eludes me. - -REPLY [7 votes]: If x = 3n, then log x = n log 3. -The C scale is logarithmic, which means if the reading is p, then the distance is proportional to   log p. -Similarly, in the LLx scale the distance is proportional to   log log p. -Thus, when you align 1 to "3" in LL3, you introduce an offset of (log log 3). Suppose you get a reading of n in the C scale, then the corresponding value in LL3 would be: -log log p = log log 3 + log n - (LL3) (offset) (C) - -eliminating one level of log gives - log p = log 3 * n - -eliminating one more level of log gives - p = 3^n - -LL2 is the same as LL3 except it covers a different range.<|endoftext|> -TITLE: Given enough time, what are the chances I can come out ahead in a coin toss contest? -QUESTION [7 upvotes]: Assuming I can play forever, what are my chances of coming out ahead in a coin flipping series? -Let's say I want "heads"...then if I flip once, and get heads, then I win, because I've reached a point where I have more heads than tails (1-0). If it was tails, I can flip again. If I'm lucky, and I get two heads in a row after this, this is another way for me to win (2-1). -Obviously, if I can play forever, my chances are probably pretty decent. They are at least greater than 50%, since I can get that from the first flip. After that, though, it starts getting sticky. -I've drawn a tree graph to try to get to the point where I could start see the formula hopefully dropping out, but so far it's eluding me. -Your chances of coming out ahead after 1 flip are 50%. Fine. Assuming you don't win, you have to flip at least twice more. This step gives you 1 chance out of 4. The next level would be after 5 flips, where you have an addtional 2 chances out of 12, followed by 7 flips, giving you 4 out of 40. -I suspect I may be able to work through this given some time, but I'd like to see what other people think...is there an easy way to approach this? Is this a known problem? - -REPLY [7 votes]: The question can be answered using Catalan numbers. Let C_n denote the number of sequences of 2n coin tosses in which you are never ahead. Formally, we count sequences in which every prefix has no less T's than H's. We call this property A. -The number of total sequences of length 2n is $2^{2n}$. We then show that as n→∞, the ratio $C_n / 2^{2n}$ tends to 0. This means that in almost every sequence you will eventually be ahead (the chances of a random sequence having property A tend to 0 as the sequence gets longer). -Indeed, -$C_n = \frac{(2n)!}{(n+1)!n!}$ -so -$C_n / 2^{2n} = \frac{(2n)!}{2^{2n}} \cdot \frac{1}{(n+1)!n!}$ -and it can be shown that this tends to 0 by Stirling's approximation (multiply and divide by $(2n/e)^{2n}$).<|endoftext|> -TITLE: What are some applications outside of mathematics for algebraic geometry? -QUESTION [23 upvotes]: Are there any results from algebraic geometry that have led to an interesting "real world" application? - -REPLY [11 votes]: Broadly speaking, algebraic geometry is used a lot in some areas of robotics and mechanical engineering. Real algebraic geometry, for example, is important to the development of CAD systems (think NURBS, computing intersections of primitives, etc.) 
And AG comes up in robotics when it is important to figure out, say, what motions a robotic arm in a given configuration is capable of, or to construct some kind of linkage that draws a prescribed curve. -Something specific in that vein: Kempe's Universality Theorem gives that any bounded algebraic curve in $\mathbb{R}^2$ is the locus of some linkage. The "locus of a linkage" being the path drawn out by all the vertices of a graph, where the edge lengths are all specified and one or more vertices remains still. -Interestingly, Kempe's orginal proof of the theorem was flawed, and more recent proofs have been more involved. However, Timothy Abbott's MIT masters thesis gives a simpler proof that gives a working linkage for a given curve, and makes for interesting reading concerning the problem in general. -Edit: The NURBS connection is, in part, that can construct a B-spline that approximates a given real algebraic curve, which is crucial in displaying intersection curves, for example. See here for more details (I'm afraid I don't know many on this.) - -REPLY [9 votes]: The following slideshow gives an explanation of how algebraic geometry can be used in phylogenetics. -See also this post of Charles Siegel on Rigorous Trivialties. This is not an area I've looked at in much detail at all, but it appears that the idea is to use a graph to model evolutionary processes, and such that the "transition function" for these processes is given by a polynomial map. In particular, it'd be of interest to look at the potential outcomes, namely the image of the transition function; that corresponds to the image of a polynomial map (which is not necessarily an algebraic variety, but it is a constructible set, so not that badly behaved either). (In practice, though, it seems that one studies the closure, which is a legitimate algebraic set.)<|endoftext|> -TITLE: How to determine annual payments on a partially repaid loan? -QUESTION [5 upvotes]: Question : A $10$-year loan of $\$500$ is repaid with payments at the end of each year. The lender charges interest at an annual effective rate of $10\%$. Each of the first ten payments is $150\%$ of the amount of interest due. Each of the last ten payments is $X$. Calculate $X$. -My Attempt -$\$ 500$ will earn $\$50$ interest each year, so each of the first $10$ payments must be $\$75$. -Then after $10$ years, a total of $750$ has been repaid. In $10$ years, I can find the accumulated debt by saying -PV=500 -I/Y=10 -N=10 - -giving me FV=$\$1296.87$. -So the balance would be $\$ 1296.87-\$ 750=\$ 546.97$. -Now I am stuck! How do I find out what the last payments should be? I know I can't just divide $\$ 546.97/10=\$ 54.697$, because the lender is still charging interest while the borrower pays off this remaining debt, so there would still be the interest left over. -This situation isn't mentioned anywhere in my calculator manual! Can one of you give me some explanation about what is going on so that I can do it by hand? - -I tried working on it some more, and came up with a really great idea! Since the payments are $1.5\times$ the $10\%$ interest, it's just like paying off $5\%$ of the principal each year! This saves me a lot of time, because I can just set $I/Y=-5$ and get $FV=598.74$ on my calculator. I did it the long way by calculating the future value of each interest payment (turns out they were not all $\$75$, because the outstanding principal got smaller), and they were the same. Is this always going to work, or did I just get lucky here? - -Another update! 
-I think I solved it. All I needed to do was to set - FV=0 - I/Y=10 - N=10 - PV=598.74 - -and then I got $PMT=97.44$. I never used the PMT button before, though, so is there some other way I can check the answer is right? - -REPLY [5 votes]: This problem is in two stages. For the first stage, notice that you are paying 150% interest, but ending up owing more. This is because you subtracted $\$ $750 from the future value, when in fact each $\$ $75 amount was paid at a time in the past and needs be converted to a future value too. The payment in the nth year has a future value of $75\times(1.1)^{10-n}$. The total future value of the repayment is: -$$75(1.1^9)+75(1.1^8)+\cdots+75(1.1^0).$$ -Note that I have assumed that the interest is charged before the repayments are made. This sequence is a geometric progression. We consider it as a geometric sequence in reverse to make the maths easier. It has first term ($a$) 75, each term 1.1 times the previous ($r$) and 10 terms ($n$). The sum is given by the formula: -$$\frac{a{r^{n-1}}}{r-1}=\frac{75{1.1^{10-1}}}{0.1}\approx\$1195.31$$ -After we have solved this first part, then it is just a standard interest with repayments problem..<|endoftext|> -TITLE: Problem: Two Trains and a Fly -QUESTION [13 upvotes]: The Problem: -Two trains travel on the same track towards each other, each going at a speed of 50 kph. They start out 50km apart. A fly starts at the front of one train and flies at 75 kph to the front of the other; when it gets there, it turns around and flies back towards the first. It continues flying back and forth til the two trains meet and it gets squashed (the least of our worries, perhaps). -How far did the fly travel before it got squashed? -Attempt at a solution: -I can do this by summing the infinite series of the fly's distance for each leg. I get an answer of 37.5 km: but that's so nice! There must be a more intuitive way...is there? - -REPLY [14 votes]: The trains take half an hour to collide, which, at a rate of 75kph, leads to the fly travelling 37.5km.<|endoftext|> -TITLE: Importance of Representation Theory -QUESTION [188 upvotes]: Representation theory is a subject I want to like (it can be fun finding the representations of a group), but it's hard for me to see it as a subject that arises naturally or why it is important. I can think of two mathematical reasons for studying it: - -The character table of a group is packs a lot of information about the group and is concise. -It is practically/computationally nice to have explicit matrices that model a group. - -But there must certainly be deeper things that I am missing. I can understand why one would want to study group actions (the axioms for a group beg you to think of elements as operators), but why look at group actions on vector spaces? Is it because linear algebra is so easy/well-known (when compared to just modules, say)? -I am also told that representation theory is important in quantum mechanics. For example, physics should be $\mathrm{SO}(3)$ invariant and when we represent this on a Hilbert space of wave-functions, we are led to information about angular momentum. But this seems to only trivially invoke representation theory since we already start with a subgroup of $\mathrm{GL}(n)$ and then extend it to act on wave functions by $\psi(x,t) \mapsto \psi(Ax,t)$ for $A$ in $\mathrm{SO}(n)$. 
-This Wikipedia article on particle physics and representation theory claims that if our physical system has $G$ as a symmetry group, then there is a correspondence between particles and representations of $G$. I'm not sure if I understand this correspondence since it seems to be saying that if we act an element of G on a state that corresponds to some particle, then this new state also corresponds to the same particle. So a particle is an orbit of the $G$ action? Anyone know of good sources that talk about this? - -REPLY [4 votes]: Although many years later, for the sake of completeness -regarding the last question on references about uses in physics- i believe one might find lots of interest on the classic text by J. Baez and J. Huerta: -The Algebra of Grand Unified Theories -Lots of the mathematical physics developed during the last decades of the 20th century is reviewed there, in a particularly didactic way.<|endoftext|> -TITLE: Balance chemical equations without trial and error? -QUESTION [27 upvotes]: In my AP chemistry class, I often have to balance chemical equations like the following: -$$ \mathrm{Al} + \text O_2 \to \mathrm{Al}_2 \mathrm O_3 $$ -The goal is to make both side of the arrow have the same amount of atoms by adding compounds in the equation to each side. -A solution: -$$ 4 \mathrm{Al} + 3 \mathrm{ O_2} \to 2 \mathrm{Al}_2 \mathrm{ O_3} $$ -When the subscripts become really large, or there are a lot of atoms involved, trial and error is impossible unless performed by a computer. What if some chemical equation can not be balanced? (Do such equations exist?) I tried one for a long time only to realize the problem was wrong. -My teacher said trial and error is the only way. Are there other methods? - -REPLY [2 votes]: One method I am particularly fond of is balancing the half reactions of the redox (reduction-oxidation) reaction that is occurring. In the following reaction: -$$\mathrm{Al} + \mathrm{O_2} \rightarrow \mathrm{Al_2 O_3}$$ -it should be clear that Al is being oxidized and O is being reduced. Both Al and $\mathrm{O_2}$ are in their elemental states, and therefore both have an oxidation number of 0. Simply by looking at the periodic table, you can devise that the oxidation numbers of Al and O in $\mathrm{Al_2 O_3}$ are +3 and -2 respectively. 
Write the half reactions with the amount of each species present in the unbalanced reaction, along with the charge changes: -$$\mathrm{Al^0}\rightarrow 2\mathrm{Al^{+3}}$$ -$$2\mathrm{O^0}\rightarrow 3\mathrm{O^{-2}}$$ -Find the least common multiple of the coefficients of each half reaction reaction: -$$2(\mathrm{Al^0})\rightarrow 2\mathrm{Al^{+3}}$$ -$$3(2\mathrm{O^0})\rightarrow 2(3\mathrm{O^{-2}})$$ -Now consider the amount of electrons that must be lost by 2 atoms of Al to transform elemental Al into $\mathrm{Al^{+3}}$, as well as the amount of electrons that must be gained by 6 atoms of O to transform elemental $\mathrm{O_2}$ into $\mathrm{O^{-2}}$: -$$2(\mathrm{Al^0})\rightarrow 2\mathrm{Al^{+3}}+6e^-$$ -$$3(2\mathrm{O^0})+12e^-\rightarrow 2(3\mathrm{O^{-2}})$$ -The discrepancy between the two reactions is clear--the oxidation reaction must be multiplied through by 2 in order to balance the number of electrons lost by Al with the number of electrons gained by O: -$$2[2(\mathrm{Al^0})\rightarrow 2\mathrm{Al^{+3}}+6e^-]$$ -This finally results in a balance in the number of electrons lost and gained, and thereby the coefficients of both the products and the reactants: -$$4(\mathrm{Al^0})\rightarrow 2(2\mathrm{Al^{+3}})+2(6e^-)$$ -$$3(2\mathrm{O^0})+12e^-\rightarrow 2(3\mathrm{O^{-2}})$$ -Now transcribe our coefficients back into the original equation and your balancing is done: -$$4\mathrm{Al} + 3\mathrm{O_2} \rightarrow 2\mathrm{Al_2 O_3}$$ -Yes, although this process does seem extremely long, it is applicable to more complicated redox reactions. This includes reactions in which the the species participating in the redox reaction are not the only ones requiring balancing, and even reactions in which multiple species are oxidized/reduced at once. It is quite simple once you get the hang of it, and I hope this reduces the amount of time you spend balancing equations in the long run.<|endoftext|> -TITLE: Why is the derivative of a circle's area its perimeter (and similarly for spheres)? -QUESTION [115 upvotes]: When differentiated with respect to $r$, the derivative of $\pi r^2$ is $2 \pi r$, which is the circumference of a circle. -Similarly, when the formula for a sphere's volume $\frac{4}{3} \pi r^3$ is differentiated with respect to $r$, we get $4 \pi r^2$. -Is this just a coincidence, or is there some deep explanation for why we should expect this? - -REPLY [5 votes]: I recommend the article by J. Tong, Area and perimeter, volume and surface area, College Math. J. 28 (1) (1997) 57. He shows that for any region where the area can be written as $A(s)=c s^2$ and the perimeter as $L(s)= k s$, you can set $x=(2c/k) s$, and you will get $A'(x)=L(x)$. That means that by careful parametrization, the above holds for rectangles and ellipses, too.<|endoftext|> -TITLE: Validating a mathematical model (Lagrange formulation and geometry) -QUESTION [6 upvotes]: I am working on computing phase diagrams for alloys. These are -blueprints for a material that show what phase, or combination of -phases, a material will exist in for a range of concentrations and -temperatures (see this -pdf presentation). -The crucial step in drawing the boundaries that separate one phase -from another on these diagrams involves minimizing a free energy -function subject to basic physical conservation constraints. I am -going to leave out the chemistry/physics and hope that we can move forward -with the minimization using Lagrange multipliers. 
-The free energy that is to be minimized is this: -$\widetilde{G}(x_1, x_2) = f^{(1)}G_{1}(x_1) + f^{(2)}G_{2}(x_2),$ -subject to: -$f^{(1)}x_1 + f^{(2)}x_2 = c_1,$ -$f^{(1)} + f^{(2)} = 1. $ -(and also that the $x_{i} > 0$ and $f^{(i)} > 0$, for $i=1,2$.) -The Lagrange formulation is: -$L(x_1,x_2,f^{(1)},f^{(2)},\lambda_1, \lambda_2, \lambda_3) = -f^{(1)}G_{1}(x_1) + f^{(2)}G_{2}(x_2)$ -$- \lambda_{1}(f^{(1)}x_1 + f^{(2)}x_2 - c_1)$ -$- \lambda_{2}(f^{(1)} + f^{(2)} - 1) $ -The minimization of $\widetilde{G}$ follows from finding the $x_{i}$'s that satisfy $\nabla L = 0:$ -$\frac{\partial L}{\partial x_{1}} = f^{(1)}G_{1}'(x_1) - \lambda_{1}f^{(1)} = 0$ -$\frac{\partial L}{\partial x_2} = f^{(2)}G_{2}'(x_2) - \lambda_{1}f^{(2)} = 0$ -$\frac{\partial L}{\partial f^{(1)}} = G_{1}(x_1) - \lambda_{1}x_{1} - \lambda_2 = 0$ -$\frac{\partial L}{\partial f^{(2)}} = G_{2}(x_2) - \lambda_{1}x_{2} - \lambda_2 = 0$ -which yields: -$(*) f^{(1)}\left[G_{1}'(x_1) - \lambda_1 \right] = 0$ -$(**) f^{(2)}\left[G_{2}'(x_2) - \lambda_1 \right]= 0 $ -$(\***) G_{1}(x_1) - G_{2}(x_2) = \lambda_1 \left[ x_1 - x_2\right]$ -Because $f^{(1)}$ and $f^{(2)}$ are not to be zero, from (*) and (**) we have that -$G_{1}'(x_1) = G_{2}'(x_2) = \lambda_{1}.$ -And, a manipulation of equation (***) looks like -$\frac{G_{1}(x_1) -G_{2}(x_2)}{x_1 - x_2} = \lambda_{1}.$ -Now, think of $G_{i}$ as an even degree polynomial (which it isn't, but -it's graph sometimes resembles one) in the plane. Let the points $x_1$ -and $x_2$ be locations along the x-axis that lie roughly below the -minima of this curve. The constraints (*),(**), and (***) describe the -condition that the line drawn between $(x_1,G_{1}(x_1))$ and $(x_2,G_{2}(x_2))$ form a common tangent -to the "wells" of the curve. It is these points $x_1$ and $x_2$, -which represent concentrations of pure components in our alloy, that -become mapped onto a phase diagram. It is essentially by repeating this procedure for many -temperatures that we can trace out the boundaries in the desired phase diagram. -The question is: Looking at this from a purely analytic geometry -perspective, how would one derive the "variational" approach to find a common tangent line that we seem to have found using the above Lagrangian? (warning: I don't really know how to -model things using variational methods.) -And, secondly: I have presented a model of a binary alloy, meaning -two variables to keep track of representing concentrations. I have -been working on ternary alloys, where this free energy $\widetilde{G}$ -is a function of three variables (two independent: $x_1,x_2,x_3$, -where $x_3 = 1- x_1 - x_2$) and is therefore a surface over a Gibbs -triangle. Then $\nabla L = 0$ produces partial derivatives that no -longer "speak geometry" to me, although the solution is a common tangent -plane. (I have attempted to characterize a common tangent plane -based purely in analytic geometry - completely disregarding the -Lagrangian - and have come up with several relations between -directional derivatives... How might directional derivatives relate -to the optimality conditions set forth by the Lagrangian?) -EDIT: Thank you Greg Graviton for wading through this sub-optimal notation and pointing out several mistakes in the statement of the problem. (Also, thank you for the excellent discussion below.) - -REPLY [4 votes]: Concerning the physical meaning, I take it that $f_1$ and $f_2$ represent the fractions of the two phases in the alloy (this implies $f_1 + f_2 = 1$). 
I imagine $x_1$ and $x_2$ to correspond to an intensional variable like pressure, whose average $f_1x_1 + f_2x_2 =: \bar x$ is held constant in the experiment. Now, the alloy minimizes the free energy under these constraints. -To answer your first question: there is a geometric reason why the solution is the common tangent to $G_1$ and $G_2$ in the case of two dimensions. Namely, the fractions $f_1$ and $f_2$ are exactly the Barycentric coordinates of the average $\bar x$ sitting between $x_1$ and $x_2$. In particular, the value of the total energy $\tilde G$ is the height of the line drawn between $(x_1,G_1(x_1))$ and $(x_2,G_2(x_2))$ evaluated at $\bar x$. Here's a sketch: - -From this picture, it is clear that if this line is not tangent to both $G_1$ and $G_2$, then you can move it a bit so that the value at $\bar x$ will decrease. -To answer your second question, the geometry readily extends to higher dimensions. For instance, for an alloy of three compounds, one has to consider the triangle enclosed by the three points $(x_1,G_1(x_1))$, $(x_2,G_2(x_2))$ and $(x_3,G_3(x_3))$. The situation is a bit degenerate here, any point inside this triangle whose first coordinate is $\bar x$ represents a valid value of $\tilde G$. Of these, nature will choose the smallest one. Consequently, the lower side of the triangle has to be tangent to two of the individual free energies. - -Similar reasoning applies when the variable $x$ is not just a number, but, say, a pair of numbers, then we're dealing with a plane tangent to three individual Gibbs functions. -While not terribly useful in this case, there is also a very general geometric interpretation of the method of Lagrange multipliers. Namely, consider a goal function $f$ and a holonomic constraint $g$. Then, the Euler-Lagrange-equations give $\nabla f = \lambda \nabla g$ which means that $f$ changes only in directions orthogonal to the surface $g$. But since we're confined to the surface $g$, this must be an extremum. Wikipedia has a picture.<|endoftext|> -TITLE: How many knight's tours are there? -QUESTION [25 upvotes]: The knight's tour is a sequence of 64 squares on a chess board, where each square is visted once, and each subsequent square can be reached from the previous by a knight's move. Tours can be cyclic, if the last square is a knight's move away from the first, and acyclic otherwise. -There are several symmetries among knight's tours. Both acyclic and cyclic tours have eight reflectional symmetries, and cyclic tours additionally have symmetries arising from starting at any square in the cycle, and from running the sequence backwards. -Is it known how many knight's tours there are, up to all the symmetries? - -REPLY [14 votes]: This problem seems to be have been solved recently by Alex Chernov, see sequence A165134 in OEIS, who claims that there are 19,591,828,170,979,904 open tours (where rotations and reflections are counted separately).<|endoftext|> -TITLE: Why are differentiable complex functions infinitely differentiable? -QUESTION [82 upvotes]: When I studied complex analysis, I could never understand how once-differentiable complex functions could be possibly be infinitely differentiable. After all, this doesn't hold for functions from $\mathbb R ^2$ to $\mathbb R ^2$. Can anyone explain what is different about complex numbers? - -REPLY [10 votes]: When one uses the complex plane to represent the set of complex numbers ${\bf C}$, -$z=x+iy$ -looks so similar to the point $(x,y)$ in ${\bf R}^2$. 
-However, there is a difference between them which is not that obvious. The linear transformation in ${\bf R}^2$, can be represented by a $2\times 2$ matrix as long as one chooses a basis in ${\bf R}^2$, and conversely, any $2\times 2$ matrix can define a linear transformation by using the matrix multiplication $A(x,y)^{T}$. -On the other hand, the linear transformation on $\bf C$ is different. Let $f:{\bf C}\to{\bf C}$ where $f(z)=pz$, $p \in{\bf C}$. If one writes $p=a+ib$ and $z=x+iy$, this transformation can be written as -$$ -\begin{bmatrix} -x\\ -y -\end{bmatrix}\to -\begin{bmatrix} -a &-b\\ -b &a -\end{bmatrix} -\begin{bmatrix} -x\\ -y -\end{bmatrix} -$$ -when one sees it as in the complex plane. Hence, not all matrices can define a linear transformation $f:\bf C\to C$. - -The derivative, which can be regarded as a "linear transformation", is also different for $f:{\bf R}^2\to {\bf R}^2$ and $f:\bf C\to C$. In the real case -$$ -f \left( \begin{bmatrix} -x\\ -y -\end{bmatrix} \right) = -\begin{bmatrix} -f_1(x,y)\\ -f_2(x,y) -\end{bmatrix} -$$ -$f_1$ and $f_2$ are "independent" for the sake of $f$ being differentiable. -While in the complex case $f_1$ and $f_2$ have to satisfy the Cauchy-Riemann equations. - -The relationship between $f:{\bf R}^2\to{\bf R}^2$ and $f:{\bf C}\to{\bf C}$ is also discussed here.<|endoftext|> -TITLE: How is prisoner's dilemma different from chicken? -QUESTION [31 upvotes]: Chicken is a famous game where two people drive on a collision course straight towards each other. Whoever swerves is considered a 'chicken' and loses, but if nobody swerves, they will both crash. So the payoff matrix looks something like this: - B swerves B straight -A swerves tie A loses, B wins -A straight B loses, A wins both lose - -But I have heard of another situation called the prisoner's dilemma, where two prisoners are each given the choice to testify against the other, or remain silent. The payoff matrix for prisoner's dilemma also looks like - B silent B testify -A silent tie A loses, B wins -A testify B loses, A wins both lose - -I remember hearing that in the prisoner's dilemma, it was always best for both prisoners to testify. But that makes no sense if you try to apply it to chicken: both drivers would crash every time, and in real life, almost always someone ends up swerving. What's the difference between the two situations? - -REPLY [4 votes]: The Prisoner's Dilemma was a game constructed for a very specific purpose: - -Each player has a preferred strategy that collectively results in an inferior outcome. - -In game theory language, both players have a dominating strategy: regardless of the opponent's action, they should choose a specific action (in this case, an action typically called Defect). If both players choose their dominating strategy, it leads to a (Nash) equilibrium, from which no individual player benefits from deviating. This equilibrium is (Pareto) inefficient in the sense that all players prefer an alternative outcome. Let's see this in practice: -\begin{vmatrix} -\ & \color{green}{Cooperate} & \color{green}{Defect} \\ -\ \color{blue}{Cooperate} & \color{blue}{4},\space\space\color{green}{4} & \color{blue}{5},\space\space\color{green}{1} \\ -\ \color{blue}{Defect} & \color{blue}{1},\space\space\color{green}{5} & \color{blue}{2},\space\space\color{green}{2} -\end{vmatrix} -We see that the $\color{blue}{blue}$ player's payoff is always higher for Defect than for Cooperate; that's what it means to be a dominating strategy. 
The same is true for $\color{green}{green}$. If both players choose Defect, the outcome is $2,2$ which is inferior for all players to the outcome $4,4$. -Chicken was constructed in the same vein for a different purpose: - -No player has a preferred strategy, and all players are in direct - rivalry with one another. - -Unlike the Prisoner's Dilemma, there are no dominating strategies, and this makes a big difference. For example, what would you choose in the following game: -\begin{vmatrix} -\ & \color{green}{Cooperate} & \color{green}{Defect} \\ -\ \color{blue}{Cooperate} & \space\space\space\color{blue}{0},\space\space\color{green}{0} & \space\space\space\color{blue}{2},\color{green}{-1} \\ -\ \color{blue}{Defect} & \color{blue}{-1},\space\space\color{green}{2} & \color{blue}{-5},\color{green}{-5} -\end{vmatrix} -Your best strategy is to anti-coordinate with your opponent; that is, to Defect when they Cooperate and Cooperate when they Defect. But if you had a choice, you would prefer to be the one Defecting. Mutual Defection is the worst outcome and isn't an equilibrium, but neither is Mutual Cooperation. In fact, the equilibria are when you and your partner anti-coordinate and is inherently adversarial. -So in general, we have a symmetric payoff matrix: -\begin{vmatrix} -\ & \color{green}{Cooperate} & \color{green}{Defect} \\ -\ \color{blue}{Cooperate} & Reward & \color{blue}{T},\space\space\color{green}{S} \\ -\ \color{blue}{Defect} & \color{blue}{S},\space\space\color{green}{T} & Punish -\end{vmatrix} - -In PD, $Temptation (T) > Reward (R) > Punish (P) > - Sucker (S)$ -In Ch, $Temptation (T) > Reward (R) > Sucker (S) > Punish (P)$ - -While it is true that the Prisoner’s Dilemma and Chicken have a different preferential ordering of outcomes and thus different equilibria, the purposes of the two games are completely different. -Interesting Questions You Could Ask (on a Game Theory StackExchange): - -What happens when each game is iterated (i.e. multiple rounds)? -What happens as the number of players increases (i.e public goods -game)?<|endoftext|> -TITLE: Real world uses of homotopy theory -QUESTION [19 upvotes]: I covered homotopy theory in a recent maths course. However I was never presented with any reasons as to why (or even if) it is useful. -Is there any good examples of its use outside academia? - -REPLY [2 votes]: Homotopy methods can be used for solving nonlinear equations (algebraic and differentials) where these equation appear in different problems in engineering and science. One example of these equations is a system of nonlinear algebraic equations which is a model of electrical circuit. This mathematical model consists of two equations which can be easily solved using Newton homotopy to find the value of the current in this circuit. For more information, please see my PhD thesis entitled "Newton homotopy algorithms for solving nonlinear systems" and my papaers presented in some conferences and published in some journals upon my name: Talib hashim Hasan.<|endoftext|> -TITLE: Proof for multiplying generating functions -QUESTION [7 upvotes]: I've learned that multiplying two generating functions $f(x)$ and $g(x)$ will give the result -\begin{equation*} -\sum_{k=0}^\infty\left(\sum_{j=0}^k a_j\,b_{k-j}\right)x^k. -\end{equation*} -I've used the result, but it was presented in my class without proof and I'm having some trouble tracking one down. Weak google-foo today, I suppose. Can anyone give me a pointer to a proof? 
If this is a question better answered in book form, that is fine as well. - -REPLY [5 votes]: Casebash is correct that this is a definition and not a theorem. But the motivation from 3.48 (Defintion of product of series) of little Rudin may convince you that this is a good definition: -$\sum_{n=0}^{\inf} a_n z^n \cdot \sum_{n=0}^{\inf} b_n z^n = (a_0+a_1z+a_2z^2+ \cdots)(b_0+b_1z+b_2z^2+ \cdots)$ -$=a_0b_0+(a_0b_1 + a_1b_0)z + (a_0b_2+a_1b_1+a_2b_0)z^2 + \cdots$ -$=c_0+c_1z+c_2z^2+ \cdots $ -where $c_n=\sum_{k=0}^n a_k b_{n-k}$ - -REPLY [3 votes]: It is actually the other way round. A generating function is generally defined to have an addition operation where the components are added and a multiplication operation like that you mentioned. Once we have made these definitions, we observe that polynomials obey the same laws and so that it is convenient to represent generating functions as infinite polynomials rather than just an infinite tuple.<|endoftext|> -TITLE: Stacks are just sheaves up to Isomorphism -QUESTION [18 upvotes]: I have heard that one can think of stacks on a site as taking sheaves but instead of the restrictions being equal, we just loosen it to isomorphic, and treat the sheaf conditions with the "obvious" coherence relations. -How seriously can one take this analogy? Note that my background is stacks is feeble at best. -I hope this question isn't too vague. One may choose to respond to this question with "That analogy is stupid because of __ gotcha". -Thanks in advance! - -REPLY [10 votes]: What you are referring to is the "stacks as sheaves of groupoids" point of view. -To illustrate where it comes from, imagine for example that we are talking about the moduli stack of elliptic curves (on the category of schemes). To give an elliptic curve over a scheme, it is not just enough to specify the elliptic curve over the members of the open cover; we have to explain how we glue the restrictions of the curves on the various opens on their overlaps, and this gluing has to be coherent over triple overlaps. -The reason for this is that elliptic curves can have non-trivial automorphisms, so that there is no a priori determined way to make the identifications on the overlaps (because having non-trivial automorphisms is the same as saying that when two curves are isomorphic, they can be isomorphic in more than one way), so it is your job to choose these identifications, and to make sure that you do it in a coherent way. -(Here elliptic curves can be replaced by any other moduli problem you can think of, of course.)<|endoftext|> -TITLE: What's an intuitive way to think about the determinant? -QUESTION [763 upvotes]: In my linear algebra class, we just talked about determinants. So far I’ve been understanding the material okay, but now I’m very confused. I get that when the determinant is zero, the matrix doesn’t have an inverse. I can find the determinant of a $2\times 2$ matrix by the formula. Our teacher showed us how to compute the determinant of an $n \times n$ matrix by breaking it up into the determinants of smaller matrices. Apparently there is a way by summing over a bunch of permutations. But the notation is really hard for me and I don’t really know what’s going on with them anymore. Can someone help me figure out what a determinant is, intuitively, and how all those definitions of it are related? - -REPLY [2 votes]: I will try to explain this intuitively. But first you must understand certain concepts. I recommend 3b1b videos for intuition in "linear combinations". 
Anyways, it's not a difficult concept to understand, I will slightly introduce it. -First of all, let's start with an example and then try to generalize. So imagine we have the matrix $A=\left[ -\begin{matrix} -3 & 1 - \\ -1.5 & 2 -\end{matrix} -\right]$. -Now let's take the column vectors of this matrix, $\left[ -\begin{matrix} -3 \\ 1.5 -\end{matrix} -\right]$ -and -$\left[ -\begin{matrix} -1 - \\ 2 -\end{matrix} -\right]$. The linear combination of this vectors is what we call the Column Space - Col(A), all the possible combinations of this vectors. Graphically it looks something like this: - -Also we have the Row Space - Row(A), identically, defined as the linear combinations of the row vectors $r_{1}=\left[ -\begin{matrix} -3 & 1 -\end{matrix} -\right]$ -and -$r_{2}=\left[ -\begin{matrix} -1.5 & 2 -\end{matrix} -\right]$. They can be represented graphically the same way as with Col(A). - -So basically, the determinant is the area created by the parallelogram defined by the row vectors (column vector generates the same area, but for convention let's use the row vectors). In the image it's represented by the blue parallelogram. So, Area of Parallelogram $= Determinant(A) = Det(A)$. -So, how can we calculate this area? For understanding this part you should have basic knowledge in "row operations" and "area of a parallelogram". -Let's call "$r_{1}$" the first row and "$r_{2}$" the second row. One of the basic row operations consist on add to one row other row scaled. So imagine row operating on $r_{1}$ as $r_{1}'=r_{1}+kr_{2}$, $k$ any real number. Don't desperate if you don't understand why we are row operating, thing are going to be clear right away. -So, let's call B the new matrix generated after replacing $r_{1}$ by $r_{1}+kr_{2}$. So, $r_{1}'$ with a quotation mark is going to be the transformed version of $r_{1}$. What would happen to Row(A) and to Det(A)? See what happens to Row(B) and Det(B) when we changed $r_{1}$ to $r_{1}'=r_{1}+kr_{2}$ with different values for $k$: - -So, we can see that $r_{1}'$ moves parallel to $r_{2}$ which is obvious because we are adding a scaled version of $r_{2}$ to $r_{1}$. -Assuming you have knowledge in "parallelogram areas", you can verified that the base and the height doesn't change. That means that the area stays constant when adding one row scaled to another row because we doesn't change the height never cause we move parallel. Thus Det(A)=Det(B). -So here's come the MAGICAL PART, we should find a $k$ such that we eliminate the y-component of $r_{1}$ row vector (y-component$=A_{12}=0$). So applying row operation with $k=\frac{1}{2}$ such that $A_{12}=0$, the transformed matrix would be: -$$\left[ -\begin{matrix} -3 & 1 - \\ -1.5 & 2 -\end{matrix} -\right] -\xrightarrow{r_1-\frac{1}{2}r_2} -\left[ -\begin{matrix} -2.25 & 0 - \\ -1.5 & 2 -\end{matrix} -\right]$$ -So our matrix B have a triangular form, Row(B) looks like: - -Now we have a parallelogram with base length $=2.25$ and height length $=2$. Thus by definition of parallelogram area, we have that $Det(A)=Det(B)=2.25*2=4.5$. So the determinant is just the product of the diagonal elements of the triangular matrix form, we call it the echelon form. 
MAGIC -We could search for a rectangle that has the same area as Det(A) by repeating this process, this time applying the row operation to $r_{2}$ so as to eliminate its x-component (making $A_{21}=0$); that would give a rectangle with area Det(B) equal to Det(A). But this is completely unnecessary, since it doesn't change the base and height of the parallelogram. Anyways, for intuition, this process $r_{2}'=r_{2}+kr_{1}'$ would look like: - -So $k=-\frac{2}{3}\approx-0.66$. -$$\left[ -\begin{matrix} -2.25 & 0 - \\ -1.5 & 2 -\end{matrix} -\right] -\xrightarrow{r_{2}-\frac{2}{3}r_{1}'} -\left[ -\begin{matrix} -2.25 & 0 - \\ -0 & 2 -\end{matrix} -\right]$$ -We have that the base is $2.25$ and the height is $2$, so the area of the rectangle is $Det(B)=4.5=Det(A)$. The determinant is just the product of the diagonal elements of the diagonal matrix. -So we've seen that the product of the diagonal elements of a matrix converted to triangular form gives us the determinant of the matrix. Why triangular form? Imagine $x_{i}$ being the $i$-th dimension: every row vector of the matrix in echelon form adds a component in a new dimension, so in geometric terms it adds a height to the figure. -The great thing about this technique is that it can be applied in any number of dimensions and maintains the intuition of what you are doing. I would like to present the 3-d graphical proof, but it would be a lot of work that I think you could do with a little imagination. -I tried to make the process intuitive and geometric using this bibliography: - -Linear Algebra Series - 3Blue1Brown (Youtube Channel) - Intuition on determinants. -A Derivation of Determinants - Mark Demers. http://faculty.fairfield.edu/mdemers/linearalgebra/documents/2019.03.25.detalt.pdf - For mathematical rigor and intuition on how to calculate determinants.<|endoftext|> -TITLE: Probability that a stick randomly broken in two places can form a triangle -QUESTION [32 upvotes]: Randomly break a stick (or a piece of dry spaghetti, etc.) in two places, forming three pieces. The probability that these three pieces can form a triangle is $\frac14$ (coordinatize the stick from $0$ to $1$, call the breaking points $x$ and $y$, consider the unit square of the coordinate plane, shade the areas that satisfy the triangle inequality; edit: see comments on the question, below, for a better explanation of this). -The other day in class*, my professor was demonstrating how to do a Monte Carlo simulation of this problem on a calculator and wrote a program that, for each trial, did the following: - -Pick a random number $x$ between $0$ and $1$. This is the first side length. -Pick a random number $y$ between $0$ and $1 - x$ (the remaining part of the stick). This is the second side length. -The third side length is $1 - x - y$. -Test if the three side lengths satisfy the triangle inequality (in all three permutations). - -He ran around $1000$ trials and was getting $0.19$, which he said was probably just random-chance error off $0.25$, but every time the program was run, no matter whose calculator we used, the result was around $0.19$. -What's wrong with the simulation method? What is the theoretical answer to the problem actually being simulated? -(* the other day was more than $10$ years ago) - -REPLY [11 votes]: FYI: This question was included in a Martin Gardner 'Mathematical Games' article for Scientific American some years ago. -He showed that there were 2 ways of randomly choosing the 2 'break' points: - -choose two random numbers from 0 to 1, or -choose one random number, break the stick at that point, then choose one of the two shortened sticks at random, and break it at a random point. - -The two methods give different answers.
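-A quick sketch (our addition, not Gardner's) makes the discrepancy easy to see. The professor's program samples a different distribution from the two uniform cuts, and its success probability works out to $\int_0^{1/2}\frac{x}{1-x}\,dx=\ln 2-\frac12\approx 0.193$, matching the observed $0.19$:
-
-    import random
-
-    def triangle(a, b, c):
-        return a + b > c and a + c > b and b + c > a
-
-    def method_uniform():                  # two uniform cuts: P = 1/4
-        x, y = sorted((random.random(), random.random()))
-        return triangle(x, y - x, 1 - y)
-
-    def method_professor():                # the calculator program: P = ln 2 - 1/2
-        x = random.random()
-        y = random.uniform(0, 1 - x)
-        return triangle(x, y, 1 - x - y)
-
-    n = 100_000
-    print(sum(method_uniform() for _ in range(n)) / n)    # ~0.25
-    print(sum(method_professor() for _ in range(n)) / n)  # ~0.19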
<|endoftext|> -TITLE: Probability that two people see each other at the coffee shop -QUESTION [10 upvotes]: Two mathematicians each come into a coffee shop at a random time between 8:00 a.m. and 9:00 a.m. each day. Each orders a cup of coffee then sits at a table, reading a newspaper for 20 minutes before leaving to go to work. -On any day, what is the probability that both mathematicians are at the coffee shop at the same time (that is, their arrival times are within 20 minutes of each other)? - -REPLY [21 votes]: Working in hours and letting 8:00 a.m. be $t=0$, each mathematician's arrival time is a number between 0 and 1. The sample space can be represented by the unit square in the coordinate plane with one professor's arrival time as $x$ and the other's as $y$, where regions with equal areas are equally likely. We want $x - \frac13 < y < x + \frac13$ -- that is, the second professor arrives earlier than the first by no more than $\frac13$ of an hour, or later than the first by no more than $\frac13$ of an hour. - -The area of the desired region is 5/9.
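-A two-line Monte Carlo check of that area (our sketch, not part of the original answer):
-
-    import random
-
-    n = 100_000
-    hits = sum(abs(random.random() - random.random()) < 1/3 for _ in range(n))
-    print(hits / n)   # ~0.556, i.e. 5/9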
<|endoftext|> -TITLE: Increasing network throughput by cutting routes -QUESTION [10 upvotes]: Suppose we model traffic flow between two points with a directed graph. Each route has either a constant travel time or one that linearly increases with traffic. We assume that each driver wishes to minimise their own travel time, and we assume that the drivers form a Nash equilibrium. Can removing a route ever decrease the average travelling time? -Note that the existence of multiple Nash equilibria makes this question a bit complicated. To clarify, I am looking for a route removal that will guarantee a decrease in the average travelling time regardless of the Nash equilibria that are chosen before and after. - -REPLY [2 votes]: The form in which this question is usually asked is whether adding a route can increase the average travelling time, and this is known as Braess's paradox. The Wiki article gives an explicit example in which the travel time on some of the routes depends on the traffic.<|endoftext|> -TITLE: Are there variations on least-squares approximations? -QUESTION [7 upvotes]: In least-squares approximations the normal equations act to project a vector existing in N-dimensional space onto a lower dimensional space, where our problem actually lies, thus providing the "best" solution we can hope for (the orthogonal projection of the N-vector onto our solution space). The "best" solution is the one that minimizes the Euclidean distance (two-norm) between the N-dimensional vector and our lower dimensional space. -There exist other norms and other spaces besides $\mathbb{R}^d$; what are the analogues of least-squares under a different norm, or in a different space? - -REPLY [2 votes]: For regression, the least-sum-of-squares criterion tries to fit your function through the mean of the data. In other words, with enough degrees of freedom, the value of your fitted function at x will be the average of all observed values at x. The least sum of absolute values instead produces a function that'll go through the median of the observed values. Some discussion on this<|endoftext|> -TITLE: How many circles of a given radius can be packed into a given rectangular box? -QUESTION [22 upvotes]: I've just come back from my Mathematics of Packing and Shipping lecture, and I've run into a problem I've been trying to figure out. -Let's say I have a rectangle of length $l$ and width $w$. -Is there a simple equation that can be used to show me how many circles of radius $r$ can be packed into the rectangle, in the optimal way, so that no circles overlap? ($r$ is less than both $l$ and $w$.) -I'm rather in the dark as to what the optimum method of packing circles together in the least amount of space is, for a given shape. -An equation with a non-integer output is useful to me as long as the truncated (rounded down) value is the true answer. -(I'm not that interested in how the circles would be packed, as I am going to go into business and only want to know how much I can demand from the packers I hire to pack my product) - -REPLY [2 votes]: Nesting circles into rectangular sheets -Optimal nesting and practical limits: -When considering different nesting options while searching for an optimal nesting solution, it is desirable to find the solution quickly. This raises the question: how do I know a solution is optimal? The answer is not always obvious. -An automated nesting search is part of the answer: it can explore a number of options quickly and automatically and report the results, finding the maximum number of parts in a full sheet or the smallest sheet required for a given number of parts. -It should be noted that circles have subtle nuances in packing efficiencies. It can be an advantage to have a working knowledge of the expected packing efficiencies of typical cases. (See the efficiency graph.) -Oddly, in some cases the optimal packing for circles is irregular, which is counter-intuitive. Transferring these irregular packing placements into other software is difficult. Hence a trade-off is generally made by selecting the most efficient of the more regular circle packing patterns. -Rectangular, Hexagonal and Worst case packing -There is no set formula for calculating the maximum number of discs cut from a rectangular sheet. The efficiency of disc packing depends on the arrangement of the discs in the material. -The Rectangular disc packing array (with zero spacing) is 78.5% efficient (and does not suffer from the low efficiency of edge effects). -The Hexagonal disc packing array (with zero spacing) is 90.6% efficient. -Worst case disc packing (2 discs inside a square) is 53.8% efficient. -Circle packing software -The disc packing software referred to above calculates and compares eight different packing methods and highlights the most efficient solutions. -Each variation uses a different nesting pattern. -Note that no single method will give the optimum yield for nesting every size of disc into every size of sheet. The optimum method varies depending on the disc sizes and sheet dimensions. -Note that transferring these optimal arrangements of the x,y positions of each disc to the profiling software can be challenging. -[Figure: different nesting options examined by the software when searching for the optimal quantity per sheet.] -[Figure: a graph of nesting efficiency (%) vs disc diameter (mm) nested into a rectangular sheet 2400x1200 with 5mm spacing; the blue line is the actual efficiency, the other colours theoretical. The maximum value of the results of 8 different circle packing methods is taken.] -The graph’s non-linear nature indicates that a simple formula for the maximum number of discs is unlikely.
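-As a rough illustration of why the count jumps around (this sketch is ours, not the software described in the answer; it ignores inter-disc spacing and uses only the two regular patterns), one can compare a plain square grid with shifted hexagonal rows:
-
-    import math
-
-    def square_count(w, h, d):
-        # discs of diameter d on a plain square grid
-        return math.floor(w / d) * math.floor(h / d)
-
-    def hex_count(w, h, d):
-        # rows spaced d*sqrt(3)/2 apart; every other row is shifted by d/2
-        if h < d or w < d:
-            return 0
-        rows = 1 + math.floor((h - d) / (d * math.sqrt(3) / 2))
-        per_full = math.floor(w / d)
-        per_shifted = math.floor((w - d / 2) / d)
-        full_rows = (rows + 1) // 2
-        return full_rows * per_full + (rows - full_rows) * per_shifted
-
-    for d in (100, 150, 300):
-        print(d, square_count(2400, 1200, d), hex_count(2400, 1200, d))
-
-Depending on the diameter, either pattern can win (try $d=300$), which matches the observation above that no single method is optimal for every sheet.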
Note also the low packing efficiency of discs smaller than 100mm diameter, due to the inter-part spacing being a greater percentage of the area, with the efficiency peaking at 78.5%. -During tabulation the packing result was noted together with the method of circle packing. Further to these automatically generated results, if the efficiency at a data point appeared low compared to nearby points on the graph, a manual nest of the discs was attempted and any better yields were tabulated and noted as irregular packing. -Using these results in a practical sense helps halt the search with confidence if adding another disc (N+1) would require an efficiency that, by the graph, is not possible. -Maximum packing efficiencies for discs in rectangles are continually being researched and improved. -For the current best known nests of discs under irregular packing refer to the on-line link: -http://hydra.nat.uni-magdeburg.de/packing/csq/csq.html -McErlean, P. (2018) "The CAD/CNC Programming Handbook: 2D Material Optimization and Tips for Laser, Plasma and Oxy profile cutting"<|endoftext|> -TITLE: Proof that the irrational numbers are uncountable -QUESTION [64 upvotes]: Can someone point me to a proof that the set of irrational numbers is uncountable? I know how to show that the set $\mathbb{Q}$ of rational numbers is countable, but how would you show that the irrationals are uncountable? - -REPLY [5 votes]: One can use a variant of the familiar Cantor diagonalization argument. Let us suppose that $x_1,x_2,x_3,\dots$ is an enumeration of the irrationals. Let $d_n$ be the $n$-th decimal digit of $x_n$ after the decimal point. -Let $w_n=5$ if $d_n$ is not equal to $5$ or $6$, and let $w_n=3$ if $d_n$ is equal to $5$ or $6$. -Now we give the decimal representation of an irrational $y$ not in the list $x_1,x_2,x_3,\dots$. Very roughly speaking, it is the number whose $n$-th digit after the decimal point is $w_n$. But some modification is made to ensure irrationality. -First we describe the $n$-th digit $e_n$ after the decimal point of $y$ for those $n$ where $w_n=5$. List these $n$ as $n_1,n_2,n_3,n_4,\dots$. Let $e_{n_1}=5$. Let $e_{n_2}=6$. Let $e_{n_3}=e_{n_4}=5$. Let $e_{n_5}=6$. Let $e_{n_6}=e_{n_7}=e_{n_8}=5$. Let $e_{n_9}=6$. Continue, leaving longer and longer strings of unchanged $5$'s. -The same procedure is used to produce the $n$-th digit $e_n$ after the decimal point of $y$ for those $n$ where $w_n=3$. Leave longer and longer strings of these unchanged, and switch the digit to $4$ occasionally. -The number $y$ we produce has a decimal expansion which differs in the $n$-th place from the corresponding digit of $x_n$ (the digit $e_n$ lies in $\{5,6\}$ exactly when $d_n\notin\{5,6\}$, and in $\{3,4\}$ exactly when $d_n\in\{5,6\}$). The occasional switches from $5$ to $6$ or $3$ to $4$ ensure that the decimal expansion of $y$ is not ultimately periodic.<|endoftext|> -TITLE: What does it mean to be going 40 mph (or 64 kph, etc.) at a given moment? -QUESTION [13 upvotes]: I was coming back from my Driver's Education class, and something mathsy really stuck out to me. -One of the essential properties of a car is its current speed. Or speed at a current time. For example, at a given point in time in my drive, I could be traveling 40 mph. But what does that mean? -From my basic algebra classes, I've learned that speed = distance/time. So if I travel ten miles in half an hour, my average speed would be $20$ mph ($\frac{10\text{ mi}}{0.5\text{ h}}$). -But instantaneous velocity...you aren't measuring average speed for a given amount of time. You're measuring instantaneous speed over an...instantaneous amount of time.
-That would be something like (miles) / (time), where time = $0$? Isn't that infinite? -And perhaps, in a difference of time = $0$, then I'd be travelling $0$ miles. So would I be said to be going $0$ mph at an instantaneous moment in time? I'd like to be able to tell that to any cops pull me over for "speeding"! -But then if miles = $0$ and time = $0$, then you have $\frac00$? -This is all rather confusing. What does it mean to be going $40$ mph at a given moment in time, exactly? -I've heard this explained using this strange art called "calculus" before, and it's all gone over my head. Can anyone explain this using terms I (a High School Algebra and Geometry and Driving student) will understand? -(I figured that my problem had numbers in it, and therefore has to do with Maths.) - -REPLY [4 votes]: Hmm, apparently the other answerers' algebra classes were a lot more intense than mine...My answer's based on what I would have been comfortable with after basic algebra and geometry. -Basically, if you think graphically, instantaneous velocity is the slope of a line at a single point rather than over an interval. At least that's what you want. So, if you happen to be going at a constant velocity (any straight line on a position graph), you can just use the slope formula. Where calculus comes in is if you're dealing with an inconstant velocity (a curved line). -If that's not making sense right off, just try to realistically graph the movement of a car as it accelerates/decelerates, where the y-axis is position and the x-axis is time. -Using algebra, you can't take the slope of a curved line. What you can do is take the slope between two points on the curve. The closer these points get to each other, the closer the slope between them will approximate the actual slope of the curve. So, if the slope between t=5.001 min. and t=5.01 min. is 40, then that approximates the actual slope, and instantaneous velocity, at t=5.0055 min. -I don't think I can get much more specific/accurate without going into (pre-)calculus.<|endoftext|> -TITLE: Useful examples of pathological functions -QUESTION [25 upvotes]: What are some particularly well-known functions that exhibit pathological behavior at or near at least one value and are particularly useful as examples? -For instance, if $f'(a) = b$, then $f(a)$ exists, $f$ is continuous at $a$, $f$ is differentiable at $a$, but $f'$ need not be continuous at $a$. A function for which this is true is $f(x) = x^2 \sin(1/x)$ at $x=0$. - -REPLY [3 votes]: The constant function $0$ is an extremely pathological function: it has all kinds of properties that almost none of the functions $\mathbb R\to\mathbb R$ have: it is everywhere continuous, differentiable, analytic, polynomial, constant (not all of those are independent of course...), you name it. By contrast many of the answers given here involve properties that almost all functions have, or that almost all functions with the mentioned pathologies (e.g., being everywhere continuous for the Weierstrass function) have. -In fact being a definable function is a pathology that all answers share, but this is admittedly hard to avoid in an answer.<|endoftext|> -TITLE: What is linear programming? -QUESTION [13 upvotes]: I asked this question on Stack Overflow but it was closed as "not programming related". So I think this is probably the best place for it... - -I read over the wikipedia article, but it seems to be beyond my comprehension. 
It says it's for optimization, but how is it different from any other method for optimizing things? -An answer that introduces me to linear programming so I can begin diving into some less beginner-accessible material would be most helpful. - -REPLY [3 votes]: I suggest this article, which talks about Linear Equations, Linear Programming, Integer Programming and P=NP. It's easy to understand and talks about the differences among these things<|endoftext|> -TITLE: Determinants and volume of parallelotopes -QUESTION [7 upvotes]: The absolute value of a $2 \times 2$ matrix determinant is the area of a corresponding parallelogram with the $2$ row vectors as sides. -The absolute value of a $3 \times 3$ matrix determinant is the volume of a corresponding parallelepiped with the $3$ row vectors as sides. -Can it be generalized to $n-D$? The absolute value of an $n \times n$ matrix determinant is the volume of a corresponding $n-$parallelotope? - -REPLY [7 votes]: Yes it can. In fact, as Jamie Banks noted, a determinant is an intuitive way of thinking about volumes. To summarise the argument: if we consider the vectors as a matrix, then switching two rows, multiplying one by a constant, or adding a linear combination will have the same effect on the volume as on the determinant. We can use these operations to transform any n-parallelotope to a cube, and note that the determinant matches the signed volume there, so it will match it everywhere as well. - -REPLY [3 votes]: Yes--see the Wikipedia article. -This follows from the change of variables formula with the Jacobian. However, it is probably more accurate to say that the change-of-variables formula is a (nontrivial) consequence of this. One argument for this fact, which is given in Rudin's Real and Complex Analysis, is that both the determinant and the volume function (up to sign) behave very nicely, satisfying certain specific properties (i.e., multilinear, alternating, normalized), and hence are the same.<|endoftext|> -TITLE: Twenty questions against a liar -QUESTION [14 upvotes]: Here's one that popped into my mind when I was thinking about binary search. -I'm thinking of an integer between 1 and n. You have to guess my number. You win as soon as you guess the correct number. If your guess is not correct, I'll give you a hint by saying "too high" or "too low". What's your best strategy? -This is an easy problem if I always tell the truth: by guessing the (rounded) mean of the lower bound and the upper bound, you can find my number in roughly $\log_2 n$ guesses at most. -But what if I'm allowed to cheat once? What is your best strategy then? To clarify: if you guess my number, you always win instantly. But I'm allowed, at most once, to tell you "too high" when your guess is actually too low, or the opposite. I can also decide not to lie. -Here's a rough upper bound: you can ask each number twice to make sure I'm not cheating, and if I ever give two different answers, just ask a third time and from then on, play the regular game. In this way, you can win with about $2 \log_2 n$ guesses at most. -I'm pretty sure that bound can be improved. Any ideas? - -REPLY [11 votes]: There is a fairly simple strategy which requires (1 + ε)log(n) + 1/ε + O(1) queries, for any constant ε > 0. I illustrate this strategy below. - -First, I ask you whether or not X, the secret number, is n/2. Without loss of generality, suppose you answer "less". I learn nothing at first, because you may be lying; but for the moment I give you the benefit of the doubt.
-I next ask you whether X is n/4. If you say "less", I don't know whether or not 0 < X < n/4, but I do know that 0 < X < n/2, because you can't lie twice. Similarly, if I go about a normal binary search, so long as you continue to answer "less", I know that your answer to the preceding query was honest. So we may reduce to the case where you say "more" to my query of whether X is n/4. -If I continue to take you at your word, pursue a binary search, and enquire about whether X is 3n/8, you may say either "more" or "less". If you say "more", then I don't know whether X > n/2, but I do know that X > n/4, again because you can't lie twice. So again so long as you continue to answer "more" in my normal binary search, I know that your answer to the preceding query was honest. - -More generally: if I consistently guess under the hypothesis that you are being honest, in any "monotonic" sequence of responses, I know that all but (possibly) the last of them are honest. So it might seem as though the worst case scenario is where your responses alternate a lot, as would occur for instance if you had chosen something like X = ⌊ n/3 ⌋. But in the alternating case too, I can be confident of the honesty of some of your answers: - -If you say (non-monotonically) that X is less than 3n/8, I don't know whether X is less than 3n/8 for sure; but I do know that X < n/2, because again you can't have lied about both. - -More generally: not only do I know that monotonic subsequences of answers are mostly honest, I know that any time that you answer "more" or "less", your previous answer of the same kind was also honest. So in fact I should be most suspicious, when I encounter long monotonic subsequences in your answers, that the answer previous to that monotonic subsequence was a lie. -What I need is a strategy that will tell me when to revisit an old question, depending on how large n is. Ideally, revisiting old questions would require very little overhead. If I encounter a monotonic sequence of f(n) responses, I revisit the last question before that monotonic sequence started. - -For instance, if your responses to queries about n/4, 3n/8, 7n/16, etc. are all monotonically "more", I eventually ask about the last time you said "less", which is for n/2, just in case you lied back then. This allows me to avoid the scenario where you lie about n/2, and keep me guessing at the points (2^(t−1) − 1)n/2^t until I eliminate all possibilities, catch you in your lie, and then redo all of my binary search for queries greater than n/2. - -If I do a double-check like this every time I encounter a monotonic sequence of length r, I will in the worst case query you about (r+1)log(n)/r = (1 + 1/r)log(n) times. If I catch you in a lie, I will have only "wasted" r queries, and my strategy afterwards can be just a simple binary search without double-checks; so your optimal strategy for maximizing the number of queries I make is actually not to lie at all, or to save your lie for nearly the end of the game to cost me about r additional queries. Here r can be an arbitrarily large integer; thus for any ε, I can achieve a query rate of (1 + ε)log(n) + 1/ε + O(1) by setting r > 1/ε. -Bonus problem #1. Without too much extra work, I think you can improve this to a strategy requiring only log(n) + O(log log(n)) queries, but I'm too lazy to work out the details right now. -Bonus problem #2.
Generalize this strategy to the regime where you are allowed to lie to me some fixed number of times L > 0.<|endoftext|> -TITLE: How is the Riemann integral a special case of the Stieltjes integral? -QUESTION [12 upvotes]: From Rudin's Principles of mathematical analysis, - -6.2 Definition -Let $\alpha$ be a monotonically increasing function on $[a,b]$. ... Corresponding to each partition $P$ of $[a,b]$, we write -$$\Delta \alpha_i = \alpha(x_i) - \alpha(x_{i-1}).$$ - -He then goes on to define the Riemann–Stieltjes integral of $f$ with respect to $\alpha$, over the interval $[a,b]$. -The Riemann integral is then pointed out to be a special case of this when $\alpha(x)=x$. -With $\alpha(x)=x$, I understand $\Delta x = x_i - x_{i-1}$ to represent the directed magnitude of the "base of the approximating rectangle" that we then multiply by the value of $f$ taken somewhere within this interval, thus obtaining the area of an approximating rectangle. -I don't know where to begin to interpret the case where $\alpha(x) \not\equiv x$. - -REPLY [2 votes]: In the Stieltjes integral you assign different importance to different parts of the set you are integrating over. -The usefulness of this becomes clearer once you know the theory of Lebesgue measure, which generalizes this even further. For example, there is the Dirac measure, which when used for integrating functions cares only about the value of a function at a point (typically the origin). -Understanding it in the Stieltjes form is not any harder. Moreover, it will pave the way for measure theory.<|endoftext|> -TITLE: RHS Congruency test - What makes 90 degrees different? -QUESTION [6 upvotes]: RHS is a well-known test for determining the congruency of triangles. It is easy enough to prove it works: simply use Pythagoras' theorem to reduce to SSS. It seems strange to me that this only works for an angle of 90 degrees - or does it? If I changed the given angle to 89 degrees or 91 degrees, would the triangle still be uniquely identified up to congruence? - -REPLY [6 votes]: Suppose that we knew two triangles had one angle congruent, a side adjacent to the angle congruent, and the side opposite the angle congruent. This is sometimes referred to as SSA, which is not a congruence theorem (and I've heard it said that it is "ass-backwards"). With a little more information, it is possible to determine congruence in some instances. -As you'd said about RHS, in that case, you can use right-triangle trigonometry to determine that the unknown sides are congruent, then use SSS to establish congruence. Without the right angle, the technique for determining the length of the third side of the triangle is to use the Law of Sines to determine the measure of the unknown angle opposite the known side, use that to find the measure of the third angle, then use the Law of Cosines to determine the length of the unknown side. -Let's call one of the triangles ABC with ∠A and AB and BC being known. From the Law of Sines, $\frac{\sin A}{BC}=\frac{\sin C}{AB}$ or $\sin C=\frac{AB\cdot\sin A}{BC}$. There will be two values of C in the range 0° to 180° that satisfy this equation, unless sin C = 1.
So: - -if sin C = 1, then C is a right angle, the triangle is uniquely determined, so congruence can be established; -if A ≥ 90°, then C < 90°, so there is only one solution for C that makes sense in this triangle, the triangle is uniquely determined, and congruence can be established (one might call this SSobtuseA); -if A < 90°, but BC ≥ AB, then A ≥ C (in a triangle, the largest/smallest side is opposite the largest/smallest angle), so the only solution for C that makes sense in this triangle is the one with C < 90° (if A ≥ C > 90°, then A + B + C > 180°), the triangle is uniquely determined, and congruence can be established (one might call this SsA, with the relative sizes of the S/s indicating the relative lengths); -otherwise (when A < 90° and BC < AB), there are two possible values for C, both of which lead to triangles, so there are two possible triangles satisfying the given information, and congruence cannot be established. - -So, to your specific question, if the angle were 91° (case 2), congruence would follow; if the angle were 89°, congruence may or may not follow, depending on what you can determine about the other sides. -As an aside, RHS is also commonly referred to (at least in the midwestern U.S. in contemporary high school geometry) as hypotenuse-leg or HL.<|endoftext|> -TITLE: Expected value of a function of a random variable: help! -QUESTION [6 upvotes]: I am trying to show the following: -\begin{equation*} -E[e^{-\gamma W}]=e^{-\gamma(E[W]-\frac{\gamma}{2}Var[W])} -\end{equation*} -but I really can't remember what I am supposed to do to get from the LHS to the RHS. I have tried using integration this way -\begin{equation*} -\int We^{-\gamma W}dW -\end{equation*} -and then using integration by parts, but even though what I get resembles it, it can't be correct (because $e^{-\gamma W}$ is not the distribution of W). -I have also tried using Taylor series expansion, but I think I am way off, and I don't think an approximation here is what I need, because the equality above is exact. -FYI, this is not homework, I am working through a paper (page 10) and I would really like to know how every step was derived. -Can anyone at least point me to the right direction? -EDIT: The expression on the RHS is very similar to the moment generating function formula (with a negative exponent). If you check here, you will see that the moment generating function for the normal distribution is like the LHS (but with a positive sign). So in a way I have my answer, but I still would like to know how to derive it, if there is a way. I know little if anything at all about moment generating functions, so maybe I shouldn't try and derive it but rather just use the result? Does it even make sense to try and derive it? - -REPLY [4 votes]: If W is randomly chosen with the PDF P(x), then the expectation value should be - -$$E[e^{-\gamma W}]=\int_{-\infty}^\infty P(x) e^{-\gamma x}\, dx$$ - -And I think that the equation $E[e^{-\gamma W}] = e^{-\gamma(E[W] - \frac{\gamma}{2}Var[W])}$ is correct only when W has a normal distribution.
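-Indeed, for normal $W$ the claimed identity is exactly the normal moment generating function evaluated at $t=-\gamma$: from $E[e^{tW}]=e^{\mu t+\sigma^2 t^2/2}$ we get $e^{-\gamma\mu+\gamma^2\sigma^2/2}=e^{-\gamma(E[W]-\frac{\gamma}{2}Var[W])}$. A quick numerical check (our sketch, not part of either post):
-
-    import math, random
-
-    mu, sigma, gamma = 0.7, 1.3, 0.9
-    n = 1_000_000
-    mc = sum(math.exp(-gamma * random.gauss(mu, sigma)) for _ in range(n)) / n
-    closed = math.exp(-gamma * (mu - gamma / 2 * sigma**2))
-    print(mc, closed)   # the two agree to a few decimal places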
<|endoftext|> -TITLE: Proof of Angle in a Semi-Circle is of $90$ degrees -QUESTION [6 upvotes]: There is a well-known theorem often stated as the angle in a semi-circle being $90$ degrees. To be more accurate, any triangle with one of its sides being a diameter and all vertices on the circle has its angle opposite the diameter being $90$ degrees. -The standard proof uses isosceles triangles and is worth having as an answer, but there is also a more intuitive proof (though it is more complicated). - -REPLY [4 votes]: Really short vector proof: -Center the circle at the origin, and scale to have radius 1. Let the vertex of the right triangle be at vector $v$, and let the diameter be the segment from the vector $w$ to $-w$. -Then -$(v-w) \cdot (v-(-w)) = (v-w) \cdot (v+w) = (v \cdot v) - (w \cdot w) = 1 - 1 = 0$, so the angle formed by $vw$ and $v(-w)$ is a right angle.<|endoftext|> -TITLE: How to compute the volume of this object via integration? -QUESTION [6 upvotes]: What is the volume of intersection of the three cylinders with axes of length $1$ in $x, y, z$ directions starting from the origin, and with radius $1$? - -REPLY [3 votes]: Pictured is one sixteenth of the overall shape. - -Note that the top face of the shape is a piece of $x^2+z^2=1$ and the front face is a piece of $x^2+y^2=1$; I’ve drawn in the edge where these two faces meet. -To find its volume, we’ll integrate cross-sectional area with respect to $x$ (trial and error shows that this is vastly simpler than doing it with respect to $y$.) -We’ll use $A(x)$ to denote the area of the cross-section at $x$. This is a piecewise-defined function. The rectangle in the drawing is the location of $A$’s discontinuity. For $x \leq \sqrt{2}/2$, the cross-section is a rectangle with width $x$ and height $\sqrt{1-x^2}$, thus $A(x)=x\sqrt{1-x^2}$. For $x \geq \sqrt{2}/2$, the cross-section is a square of side-length $\sqrt{1-x^2}$, giving $A(x)=1-x^2$. -Thus we get -$$V = 16\left(\int_0^{\sqrt{2}/2}x\sqrt{1-x^2}\,dx+\int_{\sqrt{2}/2}^1(1-x^2)\,dx\right)=16-8\sqrt{2}.$$<|endoftext|> -TITLE: Exponential and log functions compose to identity -QUESTION [5 upvotes]: How to prove that the exponential function and the logarithm function are inverses of each other? I want it done the following way. We must use the definitions as power series, and must verify that all the terms of the composition except the coefficient of $z$ vanish, and that the first degree term is $1$. -I can write down the proof for the coefficient of $z^n$ for arbitrary but fixed $n$ by explicit verification. But how to settle this for all $n$ at one go? - -REPLY [2 votes]: Qiaochu Yuan's proof is without doubt the most elegant. However, here is a purely formal argument using the derivative: -Let $L(x) = \log \frac 1{1-x} = \sum_{n\geq 1} \frac{x^n}n$. I will show that $\exp(L(x))=\frac 1{1-x}$. -Let $u(x)=\exp(L(x))$. By the chain rule for differentiation, we have -$$ -u'(x) = L'(x)u(x). -$$ -Noting that $L'(x)=(1-x)^{-1}$, the above can be rewritten as -$$ -u(x)=(1-x)u'(x). -$$ -Suppose $u(x)=\sum_{n\geq 0} a_nx^n$; then comparing the coefficients of each power of $x$ in the preceding identity gives -$$ -a_n = (n+1)a_{n+1} - na_n -$$ for each $n\geq 0$. -Also, $a_0=1$. -It follows that $a_n=1$ for all $n$, so $u(x)=\frac 1{1-x}$, as desired. -The reverse identity, $\log(\exp(x))=x$, is simpler; the chain rule tells us that the derivative of $\log(\exp(x))$ is $1$.<|endoftext|> -TITLE: What's so "natural" about the base of natural logarithms? -QUESTION [82 upvotes]: There are so many available bases. Why is the strange number $e$ preferred over all else? -Of course one could integrate $\frac{1}x$ and see this. But is there more to the story? - -REPLY [10 votes]: I'm surprised I never answered this; maybe I was deterred by the fact that several other answers are here.
-One short answer is this: An exponential function $y=a^x$ grows at a rate proportional to its present size, but only when the base is $e$ does it grow at a rate equal to its present size. In other words -$$ -\frac{d}{dx} a^x = \left(\text{constant}\cdot a^x \right), -$$ -but only when $a=e$ is the "constant" equal to $1$. -The number $a=2$ is too small for this to happen. To see that consider -$$ -\frac{d}{dx} 2^x = \lim_{h\to0} \frac{2^{x+h}-2^x}{h} = \lim_{h\to0}2^x\frac{2^h-1}{h} -$$ -This last limit is equal to $\displaystyle 2^x \lim_{h\to0}\frac{2^h-1}{h}$. That step can be done because $2^x$ is a "constant", but in this instance, "constant" means "not depending on $h$". Then observe that $\displaystyle\lim_{h\to0}\frac{2^h-1}{h}$ is a "constant", where "constant" now means "not depending on $x$". -So -$$ -\frac{d}{dx} 2^x = \left(\text{constant}\cdot 2^x\right). -$$ -But what number is this "constant"? Notice that as $x$ increases from $0$ to $1$, $2^x$ increases from $1$ to $2$, so the average slope on that interval is $\dfrac{2-1}{1-0}=1$. Since the curve gets steeper as $x$ increases, it's not yet that steep at $x=0$. Its slope at $x=0$ is $\left.\dfrac{d}{dx}2^x\right|_{x=0}=\left(\text{constant}\cdot2^0\right)$, so that "constant" must be less than $1$. -A similar argument shows that if $4$ is used as the base, the "constant" is more than $1$. This is done by using the interval from $-1/2$ to $0$ instead of the interval from $0$ to $1$. -So $2$ is too small, and $4$ is too big, to be the natural base. $e$ must be somewhere between $2$ and $4$. In a similar way one can show that $3$ is too big, but that's where the previously simple arithmetic gets messy. Use the interval from $-1/6$ to $0$ for that. -Similarly with logarithms: -\begin{align} -\frac d {dx} \log_6 x & = \frac{\text{constant}} x, \\[12pt] -\text{ and } \quad \frac d {dx} \log_e x & = \frac{\text{constant}} x, \text{but this time, the constant is 1.} -\end{align} -What is natural about $e$ is the same thing that is natural about radians: -$$ -\frac d {dx} (\text{sine of $x$ degrees}) = \Big((\text{cosine of $x$ degrees}) \times \text{constant} \Big) -$$ -but only when radians are used is the "constant" equal to $1$. $($With degrees the constant is $\pi/180.)$<|endoftext|> -TITLE: Applications of class number -QUESTION [10 upvotes]: There is the notion of class number from algebraic number theory. Why is such a notion defined and what good comes out of it? -It is nice if it is $1$; then we have unique factorization of elements in the ring of integers; but otherwise? - -REPLY [10 votes]: The class group of a number field $K$ can be used to parametrize other objects. -1) If $[L:K] = n$, the possible $O_K$-module structures of $O_L$ are described by the ideal classes of $K$, although it is still an open question in general to show for each $n > 1$ and each ideal class of $K$ that there's an extension $L/K$ with degree $n$ such that $O_L$ as an $O_K$-module corresponds to that ideal class. (This is known for small $n$, but not for general $n$.) -2) The orbits of the action of $\text{SL}_2(O_K)$ on ${\mathbf P}^1(K)$ are in bijection with ideal classes in $K$. For instance, the action is transitive iff $K$ has class number 1. -3) When $O$ is a quadratic order with discriminant $d$, the (narrow) class group of $O$ describes the primitive quadratic forms of discriminant $d$ up to proper equivalence. Here we need a slightly more general concept than the usual ideal class group (unless $O = O_K$).
-4) Weierstrass equations for an elliptic curve over $K$ up to a standard change of variables are related to ideal classes in $K$ (see Silverman's first book on elliptic curves, Chap. VIII).<|endoftext|> -TITLE: Optimal Strategy for Deal or No Deal -QUESTION [8 upvotes]: When I have watched Deal or No Deal (I try not to make a habit of it) I always do little sums in my head to work out if the banker is offering a good deal. Where odds drop below "evens" it's easy to see it's a bad deal, but what would be the correct mathematical way to decide if you're getting a good deal? - -REPLY [5 votes]: There are (at least) two factors that mean that simply calculating the average of the remaining options is not enough to describe how someone should play. - -Risk aversion -Someone's utility is not a predictable function of the amount of money that they win. For instance my utility from winning $\$$5 is more than 100 times my utility from winning 5 cents. However, my utility from winning $\$$100 million is less than 100 times my utility from winning $\$$1 million.<|endoftext|> -TITLE: Law of cosines with impossible triangles -QUESTION [7 upvotes]: Is there any mathematical significance to the fact that the law of cosines... -$$ -\cos(\textrm{angle between }a\textrm{ and }b) = \frac{a^2 + b^2 - c^2}{2ab} -$$ -... for an impossible triangle yields a cosine $< -1$ (when $c > a+b$), or $> 1$ (when $c < \left|a-b\right|$) -For example, $a = 3$, $b = 4$, $c = 8$ yields $\cos(\textrm{angle }ab) = -39/24$. -Or $a = 3$, $b = 5$, $c = 1$ yields $\cos(\textrm{angle }ab) = 33/30$. -Something to do with hyperbolic geometry/cosines? - -REPLY [8 votes]: This is not directly a matter of hyperbolic geometry but of complex Euclidean geometry. The construction of "impossible" triangles is the same as the construction of square roots of negative numbers, when considering the coordinates the vertices of those triangles must have. If you calculate the coordinates of a triangle with sides 1,3,5 or 3,4,8 you get complex numbers. In ordinary real-coordinate Euclidean geometry this means there is no such triangle. If complex coordinates are permitted, the triangle exists, but not all its points are visible in drawings that only represent the real points. -In plane analytic geometry where the Cartesian coordinates are allowed to be complex, the concepts of point, line, circle, squared distance, dot-product, and (with suitable definitions) angle and cosine can be interpreted using the same formulas. This semantics extends the visible (real-coordinate) Euclidean geometry to one where any two circles intersect, but possibly at points with complex coordinates. We "see" only the subset of points with real coordinates, but the construction that builds a triangle with given pairwise distances between the vertices continues to work smoothly, and some formulations of the law of cosines will continue to hold. -There are certainly relations of this picture to hyperbolic geometry. One is that $\cos(z)=\cosh(iz)$, so you can see the hyperbolic cosine and cosine as the same once complex coordinates are permitted. Another is that the Pythagorean metric on the complex plane, considered as a 4-dimensional real space, is of the form $x^2 + y^2 - w^2 - u^2$, so that the locus of complex points at distance $0$ from the origin contains copies of the hyperboloid model of hyperbolic geometry. But there is no embedding of the hyperbolic plane as a linear subspace of the complex Euclidean plane, so we don't get from this an easier way of thinking about hyperbolic geometry. -To help visualize what is going on, it is illuminating to calculate the coordinates of a triangle with sides 3,4,8 or another impossible case, and the dot-products of the vectors involved.
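-Following that suggestion, here is a minimal sketch (ours, not the answerer's) that places the side of length 8 on the x-axis and solves for the third vertex; the "squared distance" uses the bilinear form $x^2+y^2$ with no complex conjugation:
-
-    import cmath
-
-    a, b, c = 3.0, 4.0, 8.0              # "impossible": 3 + 4 < 8
-    # vertices P = (0, 0) and Q = (c, 0); third vertex at (x, y)
-    x = (a**2 + c**2 - b**2) / (2 * c)   # foot of the altitude, from the law of cosines
-    y = cmath.sqrt(a**2 - x**2)          # purely imaginary when the triangle is impossible
-    print(x, y)                          # 3.5625, ~1.921j
-
-    def sq(dx, dy):
-        return dx * dx + dy * dy         # bilinear squared distance, no conjugate
-
-    print(sq(x - 0, y))                  # ~9  = a^2, distance to P checks out
-    print(sq(x - c, y))                  # ~16 = b^2, distance to Q checks out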
<|endoftext|> -TITLE: What is the best way to factor arbitrary polynomials? -QUESTION [9 upvotes]: I am currently working on a Computer Algebra System and was hoping for suggestions on methods of finding roots/factors of polynomials. I am currently using the numerical Durand–Kerner method but was wondering if there are any good non-numerical methods (primarily for simplifying fractions etc). -Ideally this should work for equations in multiple variables. - -REPLY [10 votes]: If you are interested in the factorization algorithms employed in modern computer algebra systems such as Macsyma, Maple, or Mathematica, then see any of the standard introductions to computer algebra, e.g. Geddes et al., "Algorithms for Computer Algebra"; Knuth, "TAOCP" v.2; von zur Gathen, "Modern Computer Algebra"; Zippel, "Effective Polynomial Computation". See also -Kaltofen's surveys on polynomial factorization [116,68,56,7] in his publications list, which contain plenty of theory, history and literature references. Note: Kaltofen's home page appears to be temporarily down, so instead see his paper [1] to get started (see comments) -[1] Kaltofen, E. Factorization of Polynomials, pp. 95-113 in: -Computer Algebra, B. Buchberger, R. Loos, G. Collins, editors, Vienna, Austria, (1982). -http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.39.7916&rep=rep1&type=pdf<|endoftext|> -TITLE: Approximation symbol: Is $\pi \approx 3.14\dots$ equivalent to $\pi \fallingdotseq 3.14\dots$? -QUESTION [11 upvotes]: This could be a trivial question, but what exactly is the difference between these two expressions? Am I correct to use both interchangeably whenever I need to express the approximation of $\pi$? I'm a bit confused, as here it states that $\pi$ can be expressed by $\fallingdotseq$ as it's not a rational number, but $\pi$ can also be expressed by a series (asymptotic), so it should be $\approx$ as well. -$$\pi \approx 3.14\dots$$ $$\pi \fallingdotseq 3.14\dots$$ - -REPLY [2 votes]: The $\approx$ symbol and the $ \fallingdotseq $ symbol have the same denotation, generally meaning "approximately", with the latter symbol commonly used throughout some Asian countries like Japan and Korea. You can read more about the symbols here: https://en.wikipedia.org/wiki/Equals_sign<|endoftext|> -TITLE: Meaning of closed points of a scheme -QUESTION [33 upvotes]: This is a question in Liu's book. -Let $X$ be a quasi-compact scheme. Show that $X$ contains a closed point. -Well, I'm unable to do this question, so any help would be appreciated. This question also makes me curious to know about the meaning/use of closed points of a scheme in general - by that I mean a scheme which is not an algebraic variety/local scheme over a field, where the closed points have a clear geometric meaning. Thanks! - -REPLY [9 votes]: Sorry for posting again on this old thread, but I'd like to know what's wrong, if anything, with the following, because I think it should work. -We have an open affine covering $X = \bigcup_{i} U_{i}$ with $U_{i} = \mbox{Spec}(A_{i})$. Let $\xi_{1}$ be a closed point in $U_{1}$ (just take a maximal ideal). If $\xi_{1}$ is still closed in $X$ we are done. Otherwise, take $\xi_{2} \neq \xi_{1}$ with $\xi_{2} \in \overline{\left\{\xi_{1}\right\}}$. Hence $\xi_{2} \in U_{i}$ for some $i \neq 1$, say $i = 2$. Then repeat the reasoning above again.
The process must stop since there are only finitely many $U_{i}$, and so $X$ has a closed point.<|endoftext|> -TITLE: Proving the Riemann Hypothesis without revealing anything other than you proved it -QUESTION [13 upvotes]: Consider the following assertion from Scott Aaronson's blog: - -Supposing you do prove the Riemann - Hypothesis, it’s possible to convince - someone of that fact, without - revealing anything other than the fact - that you proved it. It’s also possible - to write the proof down in such a way - that someone else could verify it, - with very high confidence, having only - seen 10 or 20 bits of the proof. - -Can anyone explain where this result comes from? - -REPLY [10 votes]: The point is that SHORTPROOF is an NP-complete problem: given a sentence of length $n$ in the language of some formal proof system (ZFC, Peano arithmetic, etc), does it have a proof of length at most some fixed polynomial in $n$, such as $(2n)^{100}$? It's in NP because for a reasonable formal system you can check a given proof fairly quickly. This problem was considered in Gödel's letter to von Neumann that implicitly stated what we now call the $P \neq NP$ question (the heart of the problem, the universality of concrete NP-complete problems, wasn't known until much later). -Any NP-complete problem has a zero-knowledge proof protocol for demonstrating solutions of instances, e.g. that we have a SHORTPROOF of the Riemann hypothesis. These are "proofs that reveal nothing other than their own validity". -The role of the PCP theorem is to show that the proof protocols (interactive challenge/response games) can be very efficient for any stipulated level of confidence. The probability that the prover really does have a SHORTPROOF of Riemann, given that we follow the protocol and the prover wins, is at least 99 percent, or whatever specified degree of certainty.<|endoftext|> -TITLE: Computation with a memory wiped computer -QUESTION [62 upvotes]: Here is another result from Scott Aaronson's blog: - -If every second or so your computer’s - memory were wiped completely clean, - except for the input data; the clock; - a static, unchanging program; and a - counter that could only be set to 1, - 2, 3, 4, or 5, it would still be - possible (given enough time) to carry - out an arbitrarily long computation — - just as if the memory weren’t being - wiped clean each second. This is - almost certainly not true if the - counter could only be set to 1, 2, 3, - or 4. The reason 5 is special here is - pretty much the same reason it’s - special in Galois’ proof of the - unsolvability of the quintic equation. - -Does anyone have an idea of how to show this? - -REPLY [19 votes]: 1) Why does a small number of states suffice? -Regardless of whether the constant is 5 or 500, it's still very surprising. Thankfully, it's fairly straightforward to prove this if you allow the counter to be $\{1, \ldots, 8\}$ instead of $\{1, \ldots, 5\}$. [This proof is by Ben-Or and Cleve.] Start by representing the computation as a circuit, and ignore the whole wiping-clean thing. -Define a register machine as follows: It has 3 registers $(R_1,R_2,R_3)$, each of which holds a single bit. At each step, the program performs some computation on the registers of the form $R_i \gets R_a + x_b R_c$ or $R_i \gets R_a + x_b R_c + R_d$ (where $x_1\ \ldots x_n$ is the input, and arithmetic is mod 2). -Initially, set $(R_1,R_2,R_3) = (1,0,0)$. The machine should end in the state $(R_1,R_2,R_3 + f R_1)$. We'll simulate the circuit using a register machine.
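-Before the induction, it may help to see the machine in executable form. The following is a minimal sketch (ours, not from the original answer; it encodes formulas as nested tuples, indexes the registers 0,1,2, and takes OR as a shortcut through De Morgan rather than its own gadget) of the compiler that the induction below constructs; the final loop checks that the program really maps $(R_1,R_2,R_3)$ to $(R_1,R_2,R_3+fR_1)$:
-
-    from itertools import product
-
-    # formula encoding: ('var', i), ('not', f), ('and', f, g), ('or', f, g)
-
-    def compile_into(f, src, dst, prog):
-        # append instructions realizing R[dst] ^= f(x) * R[src],
-        # leaving the other two registers unchanged overall
-        tmp = 3 - src - dst                    # index of the third register
-        if f[0] == 'var':
-            prog.append((dst, f[1], src))      # R[dst] ^= x_i * R[src]
-        elif f[0] == 'not':
-            compile_into(f[1], src, dst, prog)
-            prog.append((dst, None, src))      # R[dst] ^= R[src]
-        elif f[0] == 'and':                    # the 4-step gadget from the text
-            for g, s, t in ((f[1], src, tmp), (f[2], tmp, dst)) * 2:
-                compile_into(g, s, t, prog)
-        elif f[0] == 'or':                     # f or g = not(not f and not g)
-            compile_into(('not', ('and', ('not', f[1]), ('not', f[2]))), src, dst, prog)
-
-    def run(prog, x):
-        R = [1, 0, 0]
-        for dst, var, src in prog:
-            bit = 1 if var is None else x[var]
-            R[dst] ^= bit & R[src]
-        return R
-
-    f = ('or', ('and', ('var', 0), ('var', 1)), ('not', ('var', 2)))
-    prog = []
-    compile_into(f, 0, 2, prog)
-    for x in product((0, 1), repeat=3):
-        assert run(prog, x) == [1, 0, (x[0] & x[1]) | (1 - x[2])]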
-We now proceed by induction on the depth of the circuit. If the circuit has depth 0, then we just copy the appropriate bit: $R_3 \gets R_3 + x_i R_1$. -For the induction, we have 3 cases, according to whether the final gate is NOT, AND, or OR. -Suppose that the circuit is $\neg f$. By induction, we can compute $f$, yielding the state $(R_1,R_2,R_3 + f R_1)$. We can therefore perform the instruction $R_3 \gets R_3 + R_1$ to get the desired output. -If the circuit is $f_1 \wedge f_2$, then life is a tad more complicated. By induction, we then execute the following 4 instructions: -\begin{align*} -R_2 &\gets R_2 + f_1 R_1 \\ -R_3 &\gets R_3 + f_2 R_2 \\ -R_2 &\gets R_2 + f_1 R_1 \\ -R_3 &\gets R_3 + f_2 R_2 -\end{align*} -Assuming I haven't made any typos, we are left with the state $(R_1,R_2,R_3+f_1f_2R_1)$, as desired. $f_1 \vee f_2$ works similarly. -QED. -Take a moment to process what just happened. It's a slick proof that you have to read 2 or 3 times before it begins to sink in. What we've shown is that we can simulate a circuit by applying a fixed program that stores only 3 bits of information at any time. -To convert this into Aaronson's version, we encode the three registers into the counter (that's why we needed the extra 3 spaces). The simple program uses the input and the clock to determine how far we've made it through the computation and then applies the appropriate change to the counter. -2) But what's the deal with 5? -To get from 8 states down to 5, you use a similar argument, but are much more careful about exactly how much information needs to be propagated between stages and how it can be encoded. A formal proof requires lots of advanced group theory. - -Edit to answer Casebash's questions: -1) Correct. Any computation can be expressed as a circuit composed solely of "NOT", binary-"AND", and binary-"OR" gates. -2) The notation $f R_1$ means (boolean) multiplication. -3) The program for computing $f$ should take input $(R_1,R_2,R_3)$ to $(R_1,R_2,R_3 + f R_1)$. We insist that the first two registers are unchanged since we use those as temporary storage in the induction. For example, when computing $f_1 \wedge f_2$, we compute the first branch and store the result in $R_2$ while computing the second branch. -4) The single bit of output is the final value of $R_3$. Since we started with $(1,0,0)$, we end with $(1,0,f)$.<|endoftext|> -TITLE: Visualising functions from complex numbers to complex numbers -QUESTION [10 upvotes]: I think that complex analysis is hard because graphs of even basic functions are 4 dimensional. Does anyone have any good visual representations of basic complex functions or know of any tools for generating them? - -REPLY [2 votes]: Oh yes, there's a way to do this. Here is my exploration into the topic about a month ago using Mathematica. The easiest thing to do is to plot the vector field and let the direction of the arrows represent the phase and let the color represent the magnitude. This is a great way to get all four dimensions on a plane and I think it's very enlightening. -https://mathematica.stackexchange.com/questions/4244/visualizing-a-complex-vector-field-near-poles<|endoftext|> -TITLE: Learning Lambda Calculus -QUESTION [126 upvotes]: What are some good online/free resources (tutorials, guides, exercises, and the like) for learning Lambda Calculus? 
-Specifically, I am interested in the following areas: - -Untyped lambda calculus -Simply-typed lambda calculus -Other typed lambda calculi -Church's Theory of Types (I'm not sure where this fits in). - -(As I understand, this should provide a solid basis for the understanding of type theory.) -Any advice and suggestions would be appreciated. - -REPLY [2 votes]: Some time ago, I was surprised not to find many untyped & simply-typed lambda calculus interpreters among the answers to this question, so I started working for a while on an educational lambda calculus interpreter called Mikrokosmos (it can also be used online). It implements untyped and simply typed lambda calculus (and also illustrates Curry-Howard). -Of course, you could also use some functional programming language; but at least for me, it was difficult to determine exactly which constructs of the language are lambda calculus and which are extras offered by that particular language. Haskell for instance approximately implements System F, but I also wanted to have a simply-typed (as simple as possible) lambda calculus interpreter. -The interpreter is free software and you can integrate it into other learning materials (such as Jupyter notebooks or web pages). I have used it before to teach lambda calculus to CS students. -Please note that I am the main developer of this interpreter. I am only posting here because this same question inspired me to start the development.<|endoftext|> -TITLE: What is the difference between matrix theory and linear algebra? -QUESTION [5 upvotes]: I have lifted this from Mathoverflow since it belongs here. -Hi, -Currently, I'm taking matrix theory, and our textbook is Strang's Linear Algebra. Besides matrix theory, which all engineers must take, there exist Linear Algebra I and II for math majors. What is the difference, if any, between matrix theory and linear algebra? -Thanks! -kolistivra - -REPLY [12 votes]: My answer from the MO thread: -A matrix is just a list of numbers, and you're allowed to add and multiply matrices by combining those numbers in a certain way. When you talk about matrices, you're allowed to talk about things like the entry in the 3rd row and 4th column, and so forth. In this setting, matrices are useful for representing things like transition probabilities in a Markov chain, where each entry indicates the probability of transitioning from one state to another. You can do lots of interesting numerical things with matrices, and these interesting numerical things are very important because matrices show up a lot in engineering and the sciences. -In linear algebra, however, you instead talk about linear transformations, which are not (I cannot emphasize this enough) a list of numbers, although sometimes it is convenient to use a particular matrix to write down a linear transformation. The difference between a linear transformation and a matrix is not easy to grasp the first time you see it, and most people would be fine with conflating the two points of view. However, when you're given a linear transformation, you're not allowed to ask for things like the entry in its 3rd row and 4th column because questions like these depend on a choice of basis. Instead, you're only allowed to ask for things that don't depend on the basis, such as the rank, the trace, the determinant, or the set of eigenvalues. This point of view may seem unnecessarily restrictive, but it is fundamental to a deeper understanding of pure mathematics.<|endoftext|> -TITLE: Best Algebraic Geometry text book?
(other than Hartshorne) -QUESTION [76 upvotes]: Lifted from Mathoverflow: -I think (almost) everyone agrees that Hartshorne's Algebraic Geometry is still the best. -Then what might be the 2nd best? It can be a book, preprint, online lecture note, webpage, etc. -One suggestion per answer please. Also, please include an explanation of why you like the book, or what makes it unique or useful. - -REPLY [4 votes]: I cannot really say what is "the best" book on this topic, but I've recently started studying it and found Hartshorne's book extremely difficult, so I went to study Mumford's red book of varieties. Other than the books already introduced, I found the following also helpful: -A Royal Road to Algebraic Geometry by Audun Holme is a newly published book which tries to make Algebraic Geometry as easy as possible for students. -Also, the book by Griffiths and Harris called Principles of Algebraic Geometry, in spite of being rather old and working mostly over the complex field only, gives good intuition on this very abstract topic.<|endoftext|> -TITLE: Proving the Shoelace Method at the Precalculus Level -QUESTION [5 upvotes]: Using only precalculus mathematics (including that the area of the triangle with vertices at the origin, $(x_1,y_1)$, and $(x_2,y_2)$ is half of the absolute value of the determinant of the $2\times 2$ matrix of the vertices $(x_1,y_1)$ and $(x_2,y_2)$, $\frac{1}{2}\cdot\left|x_1\cdot y_2 - x_2\cdot y_1\right|$) how can one prove that the shoelace method works for all non-self-intersecting polygons? - -REPLY [9 votes]: One way is to note that $x_1y_2 - x_2y_1$ is a signed area, i.e. it may be positive or negative. Adding up all the signed areas of the triangles formed by the points $O$, $P_k$ and $P_{k+1}$ will cancel all the superfluous parts, as can be seen from the sketch: - -The same argument also works for trapezoids with the $x$-axis instead of triangles with the origin. (I think this is the most illuminating argument, because the key trick is to give the area a sign depending on orientation.) -One could argue that this is not very rigorous, however. A more rigorous proof is to divide the polygon into two smaller polygons (it's not trivial to show that this is possible) and argue that adding the shoelace sums of the two parts gives the shoelace sum of the whole. That's because the two additional terms for the extra side cancel each other. (This cancellation is well-known from line integrals; we are in essence calculating $\frac12 \oint_{polygon} x\,dy - y\,dx$ here.) By induction, you then only have to verify the formula for a triangle.<|endoftext|> -TITLE: Number of colorings of cube's faces -QUESTION [7 upvotes]: How many ways are there to color the faces of a cube with $n$ colors, if two colorings are considered the same whenever it's possible to rotate the cube so that one coloring goes to the other? - -REPLY [7 votes]: The number of different colorings is equal to -\begin{equation*} -\frac{n^6 + 3n^4 + 12n^3 + 8n^2}{24}. -\end{equation*} -You can get this number using Burnside's lemma. -The Wikipedia article contains a solution of your problem as well.
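-A brute-force check of that formula for small $n$ (our sketch; it generates the 24 rotations as permutations of the six faces from two generators and counts orbits by canonical form):
-
-    from itertools import product
-
-    def rotations():
-        gx = (2, 3, 1, 0, 4, 5)          # quarter turn about the left-right axis
-        gy = (0, 1, 5, 4, 2, 3)          # quarter turn about the up-down axis
-        G = {(0, 1, 2, 3, 4, 5)}
-        while True:
-            new = G | {tuple(p[g[i]] for i in range(6)) for p in G for g in (gx, gy)}
-            if new == G:
-                return G
-            G = new
-
-    G = rotations()
-    assert len(G) == 24                  # the rotation group of the cube
-
-    def count(n):
-        seen = set()
-        for c in product(range(n), repeat=6):
-            seen.add(min(tuple(c[g[i]] for i in range(6)) for g in G))
-        return len(seen)
-
-    for n in (1, 2, 3):
-        print(count(n), (n**6 + 3*n**4 + 12*n**3 + 8*n**2) // 24)   # pairs agree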
To make $E\cap\mathbb{Q}=\varnothing$, we need to choose an $x_0\notin \mathbb{Q}-C$. The only thing left is to show $\mathbb{Q}-C\neq\mathbb{R}$, i.e. $\mathbb{Q}+C\neq\mathbb{R}$. By the Baire category theorem,
-$$\mathbb{Q}+C=\bigcup_{r\in\mathbb{Q}}\{r\}+C$$
-can't have any interior point, since $\{r\}+C$ doesn't have any interior point, for any $r\in\mathbb{Q}$. The conclusion follows.<|endoftext|>
-TITLE: Packing boxes and proof of Riemann Hypothesis
-QUESTION [9 upvotes]: From Scott Aaronson's blog:
-
-There’s a finite (and not unimaginably-large) set of boxes, such that if we knew how to pack those boxes into the trunk of your car, then we’d also know a proof of the Riemann Hypothesis. Indeed, every formal proof of the Riemann Hypothesis with at most (say) a million symbols corresponds to some way of packing the boxes into your trunk, and vice versa. Furthermore, a list of the boxes and their dimensions can be feasibly written down.
-
-He later commented to explain where he got this from: "3-dimensional bin-packing is NP-complete."
-I don't see how these two are related.
-Another question inspired by the same article is here.
-
-REPLY [9 votes]: The question of whether a formal proof of the Riemann Hypothesis exists (with at most a million symbols) is a problem in NP: given such a proof, it can be verified to be correct in polynomial time.
-Bin-packing is NP-complete: this means that every problem in NP can be reduced to bin packing. In particular, the problem mentioned in the previous paragraph can. (This is a reduction that can be made explicit, so once we specify the proof verifier etc., we can carry out the steps of the reduction to get an instance of bin packing. We also need the reduction to be "parsimonious", i.e. solutions correspond one-to-one; I believe it is.)<|endoftext|>
-TITLE: Intuitive explanation of covariant, contravariant and Lie derivatives
-QUESTION [23 upvotes]: I would be glad if someone could explain in intuitive terms what these different derivatives are, and possibly give some practical, understandable examples of how they would produce different results.
-To be clear, I would like to understand the geometrical or physical meaning of these operators more than the mathematical or topological subtleties that lead to them!
-Thanks!
-
-REPLY [19 votes]: The Lie derivative is a derivative of a vector field V along another vector field W. It is defined at a point p as follows: flow the point p along W for some time t and look at the value of V at this point. Then push this forward along the flow of W to a vector at p. Subtract $V_p$ from this, divide by t, and take the limit as $t \to 0$. So this is a measure of how V changes as it gets pushed around by the flow of W.
-The covariant derivative is a derivative of a vector field V along a vector W. Unlike the Lie derivative, this does not come for free: we need a connection, which is a way of identifying tangent spaces. The reason we need this extra data is because if we wanted to take the directional derivative of V along the vector W the way we do in Euclidean space, we would be taking something like $V_{p+tW} - V_p$, which is the difference of vectors living in different tangent spaces. If we have a metric, then we can impose reasonable conditions that give us a unique connection (the Levi-Civita connection).
-I have no idea what a contravariant derivative is.
I'd guess it has to do with applying a covariant derivative and lowering indices.<|endoftext|>
-TITLE: Sum of two periodic functions
-QUESTION [31 upvotes]: Let $f$ and $g$ be two periodic functions over $\Bbb{R}$ with the following property: If $T$ is a period of $f$, and $S$ is a period of $g$, then $T/S$ is irrational.
-Conjecture: $f+g$ is not periodic.
-Could you give a proof or a counterexample? It is easier if we assume continuity. But is it true for arbitrary real valued functions?
-
-REPLY [2 votes]: If each function has a smallest period, and otherwise fits the conditions, then a proof may be forthcoming by attempting to compute the smallest period of the sum and failing. However, things become unclear if there is no smallest period, as in the case of the characteristic function of the rationals. Progress might be made in this case by decomposing such a function as an infinite sum of periodic functions, or at least by giving more counterexamples to study. (E.g. write the characteristic function of the rationals as an infinite sum of functions of smallest period 1.)
-
-REPLY [2 votes]: As a first step, if $f+g$ is periodic, its period cannot be commensurable with the periods of $f$ and $g$.
-Let us suppose that $T$ is the smallest period of $f(x)$, i.e. for all $x$, $f(x+T) = f(x)$. Similarly, $S$ is the smallest period of $g(x)$, i.e. for all $x$, $g(x+S) = g(x)$. If $f+g$ had a period $Q$ with $\frac{Q}{T} = \frac{m}{n}$, we would have, for all $x$, $f(x+nQ)+g(x+nQ) = f(x)+g(x)$. But $f(x+nQ)=f(x+mT)=f(x)$. Thus, for all $x$, $g(x+nQ)=g(x)$, and therefore $nQ$ is a period of $g$, which is impossible since it would mean that $\frac{T}{S}$ is rational.<|endoftext|>
-TITLE: Is there a geometrical interpretation to the notion of eigenvector and eigenvalues?
-QUESTION [12 upvotes]: The wiki article on eigenvectors offers the following geometrical interpretation:
-
-Each application of the matrix to an arbitrary vector yields a result which will have rotated towards the eigenvector with the largest eigenvalue.
-
-Qn 1: Is there any other geometrical interpretation, particularly in the context of a covariance matrix?
-The wiki also discusses the difference between left and right eigenvectors.
-Qn 2: Do the above geometrical interpretations hold irrespective of whether they are left or right eigenvectors?
-
-REPLY [6 votes]: Instead of giving an answer, let me point out to you this chapter in Cleve Moler's book "Numerical Computing with MATLAB"; it contains a nice geometric demonstration in MATLAB of how the eigenvalues/eigenvectors (as well as singular values/vectors) of an order-2 square matrix are involved in how a circle is transformed into an ellipse under the linear transformation represented by the matrix.<|endoftext|>
-TITLE: Mandelbrot-like sets for functions other than $f(z)=z^2+c$?
-QUESTION [33 upvotes]: Are there any well-studied analogs to the Mandelbrot set using functions other than $f(z)= z^2+c$ in $\mathbb{C}$?
-
-REPLY [3 votes]: I couldn't say anything about "well-studied". What I do know is this:
-
-You can grab any random function you like and iterate it. Generally, the results aren't particularly interesting.
-You seem to get the most "natural" looking images (i.e., the ones most like the usual Mandelbrot and Julia sets) if you stick to nice, well-behaved complex-valued functions.
-If you want to be picky about it, the Mandelbrot set really ought to be the set of parameter values with connected Julia sets.
There's some theorem about every basin of attraction containing a critical point, or something like that, but I don't really understand the details. The important point is, the parameter-space image tends to "look nicest" when you use critical points of the iteration.
-It's quite possible for the function to have more than one parameter. This doesn't change the Julia sets much (except that there are more of them), but it makes the Mandelbrot set have more than 2 dimensions.
-
-My personal favourite is the cubic function $f(z) = z^3 - 3a^2z + b$. This has two critical points, $+a$ and $-a$. The Julia sets have 3-fold symmetry, and aren't especially stunning. The Mandelbrot set, however, is strictly speaking the intersection of two sets, one iterating with $z_0 = +a$ and the other $z_0 = -a$. If you combine the colourings of these two iterations, you get a strange "shadowy" effect. On top of that, the Mandelbrot set is 4D, and plotting various 3D slices of it looks interesting.<|endoftext|>
-TITLE: Why should one expect valuations to be related to primes? How to treat an infinite place algebraically?
-QUESTION [6 upvotes]: I understand the mechanics of the proof of Ostrowski's Theorem, but I'm a little unclear on why one should expect valuations to be related to primes. Is this a special property of number fields and function fields, or do primes of K[x,y] correspond to valuations on K(x,y) in the same way?
-I'm hoping for an answer that can explain what exactly are the algebraic analogs of archimedean valuations, and how to use them - for example, I've heard that the infinite place on K(x) corresponds to the "prime (1/x)" - how does one take a polynomial in K[x] "mod (1/x)" rigorously?
-Thanks in advance.
-
-REPLY [3 votes]: I couldn't divine much information on your background (e.g. undergraduate, master's level, PhD student...) from the question, but I recently taught an intermediate level graduate course which had a unit on valuation theory. Sections 1.6 through 1.8 of
-http://math.uga.edu/~pete/8410Chapter1.pdf
-address your questions. In particular, if your field $K$ is the fraction field of a Dedekind domain $R$, then you can always use each prime ideal $\mathfrak{p}$ of $R$ to define a valuation on $K$, essentially the "order at $\mathfrak{p}$". There is also a converse result, Theorem 13: if you have a valuation on $K$ which has the additional property that it is non-negative at every element of the Dedekind domain $R$, then it has to be (up to equivalence) the $\mathfrak{p}$-adic valuation for some $\mathfrak{p}$. I felt the need to give this additional condition a name, so I called such a valuation R-regular.
-The point is that (as Qiaochu says in his comments), in case $K$ is a number field and $R$ is its ring of integers, every valuation on $K$ is $R$-regular. However, in the function field setting this is not true and this leads to a discussion of "infinite places". Note that I do describe the analogues of Ostrowski's Theorem for finite extensions both of $\mathbb{Q}$ and of $F(t)$ for any field $F$ (in the latter case, one restricts to valuations which are trivial on $F$; when $F$ is finite, this condition is automatic).
-I would be interested to know whether you find the notes helpful. If not, I or someone else can probably recommend an alternative reference.
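-To see the prime <-> valuation dictionary in the simplest case $K=\mathbb{Q}$, $R=\mathbb{Z}$, here is a minimal Python sketch (an editorial illustration assuming only the standard library; the helper v_p is not from the notes above) of the $p$-adic valuation attached to each prime $p$. It is non-negative on all of $\mathbb{Z}$, i.e. "$\mathbb{Z}$-regular" in the terminology above:
-from fractions import Fraction
-
-def v_p(q, p):
-    # p-adic valuation on Q: the exponent of the prime p in q, with v_p(0) = +infinity
-    if q == 0:
-        return float("inf")
-    q = Fraction(q)
-    k, num, den = 0, q.numerator, q.denominator
-    while num % p == 0:  # p divides the numerator: valuation goes up
-        num //= p
-        k += 1
-    while den % p == 0:  # p divides the denominator: valuation goes down
-        den //= p
-        k -= 1
-    return k
-
-# non-negative on the integers, but genuinely negative elsewhere on Q:
-assert v_p(50, 5) == 2 and v_p(Fraction(3, 25), 5) == -2
-One can check numerically that $v_p(xy) = v_p(x)+v_p(y)$ and $v_p(x+y) \ge \min(v_p(x), v_p(y))$, which is what makes each prime give a valuation.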
-
-REPLY [3 votes]: Discrete valuations <-> points on a curve
-For a nonsingular projective curve over an algebraically closed field, there is a one-one correspondence between the points on it and the discrete valuations of the function field (i.e. all the meromorphic functions of the curve). The correspondence is: point P -> the valuation that sends a function f to the order of zero/pole of f at P.
-Maximal ideals <-> points on a curve
-At least for varieties (common zeros of several polynomials) over an algebraically closed field, there is a one-one correspondence between points on it and the maximal ideals in $k[x_1,\cdots,x_n]$. The correspondence is: point $P = (a_1,\cdots,a_n)$ -> the polynomials vanishing at P, which turns out to be $(x_1-a_1,\cdots,x_n-a_n)$. This is true not only for curves, but for varieties in general (Hilbert's Nullstellensatz).
-So putting these together, for nonsingular projective curves over an algebraically closed field, you know that there is a one-one correspondence between the maximal ideals (think of them as points) and the discrete valuations of the function field. Now the situation here is analogous. You consider a "curve" whose coordinate ring is $\mathbb{Z}$, with function field $\mathbb{Q}$. The nonarchimedean valuations correspond to discrete valuations in this case. So they should capture orders of zeros/poles at some "points". What are the points? They should correspond to the maximal ideals of $\mathbb{Z}$, which are exactly the primes here.
-As for $K(x)$, look at it as the function field of $K\mathbb{P}^1$. Just like the usual real/complex projective spaces, you should have two pieces here. Let's say $K[x]$ corresponds to the piece where the second coordinate is nonzero. So the corresponding homogeneous coordinates here are like $[x,1]$. We know there is one point missing, which is $[1,0]$. For this, we change our coordinates $[x,1] \to [1,1/x]$, so the piece where the first coordinate is nonzero should be $K[1/x]$. The missing point corresponds to the ideal $(1/x - 0) = (1/x)$, so this is why the infinite place corresponds to (1/x). Of course, a more straightforward interpretation is that for a rational function, you divide both numerator and denominator by a sufficiently high power of $x$ so that they both become polynomials in $1/x$ with nonzero constant term, times an extra factor ($x$ to some power). The infinite place measures this power.<|endoftext|>
-TITLE: Number of ways to partition a rectangle into n sub-rectangles
-QUESTION [25 upvotes]: How many ways can a rectangle be partitioned by either vertical or horizontal lines into n sub-rectangles? At first I thought it would be:
- f(n) = 4f(n-1) - 2f(n-2)
-where f(0) = 1
- and f(1) = 1
-
-but the recurrence relation only counts the cases in which at least one side (either top, bottom, left or right of the original rectangle) is not split into sub-rectangles. There are many other partitions that don't belong to those simple cases like
-[EDIT ImageShack has removed the picture. One of the cases is the sixth partition when n = 4 in the picture in the accepted answer below.]
-Any other related problem suggestions are welcome. Also, it would be nice to know how to traverse these partitions efficiently.
-
-REPLY [25 votes]: I had my student, Tim Michaels, work on this. We came up with a complicated recurrence relation, indicated below.
The first few answers are 1, 2, 6, 25, 128, 758, 5014, 36194, 280433, 2303918, 19885534, 179028087, 1671644720, 16114138846, 159761516110, 1623972412726, 16880442523007, 179026930243822, 1933537655138482, 21231023519199575, 236674460790503286, 2675162663681345170, 30625903703241927542, 354767977792683552908, 4154708768196322925749, 49152046198035152483150, 587011110939295781585102, 7072674305834582713614923. Note that we are counting rotations and reflections as distinct tilings. An interesting pattern is that mod 2, there is an 8-fold periodicity which we don't understand and can't prove in general.
-Here's a picture of the cases n=1,2,3,4, with 1,2,6,25 tilings in each case. The way to systematically generate these is to "push in" a vertical line from the right to all previously constructed tilings in all possible ways. That's how the recurrence relation is defined.
-
-Okay, here is the recurrence:
-Let $a_{\ell,j,m}$ be the number of distinct tilings with $\ell$ tiles, $j$ edges that meet the right-hand side of the square and $m$ 4-valent vertices.
-$$a_{\ell,j,m}=\sum_{p=1}^\ell(-1)^{p+1}\sum_{i=0}^m\sum_{n=1}^{\ell-1}\sum_{k=0}^{n-1}\alpha_{n,k,i,\ell,j,m,p} a_{n,k,i}$$
-where, letting $t=m-i, s=\ell-n-p-t$ and $r=k+s+t+p-j$,
-$$\alpha_{n,k,i,\ell,j,m,p}=\binom{r-1}{p-1}\binom{k-r+2}{p}\binom{s+r-1}{r-1}\binom{r-p}{t}.$$
-Edit: I have posted a preprint describing the recurrence relation here. Comments are welcome. Is this sort of thing publishable anywhere to anyone's knowledge?
-Edit 2: Nathan Reading has just posted a relevant preprint. He finds a bijection between generic tilings (no 4-valent vertices) and a set of permutations that avoid a certain pattern.
-Edit 3: The paper has been published in the Annals of Combinatorics.<|endoftext|>
-TITLE: Characterising functions $f$ that can be written as $f = g \circ g$?
-QUESTION [25 upvotes]: I'd like to characterise the functions that 'have square roots' in the function composition sense. That is, can a given function $f$ be written as $f = g \circ g$ (where $\circ$ is function composition)?
-For instance, the function $f(x) = x+10$ has a square root $g(x) = x+5$.
-Similarly, the function $f(x) = 9x$ has a square root $g(x) = 3x$.
-I don't know if the function $f(x) = x^2 + 1$ has a square root, but I couldn't think of any.
-Is there a way to determine which functions have square roots? To keep things simpler, I'd be happy just to consider functions $f: \mathbb R \to \mathbb R$.
-
-REPLY [19 votes]: I showed you the link to the MO question mostly to convince you that this is a hard question. I will "answer" it in the special case that $f$ is a bijection.
-Recall that given a bijection $f : S \to S$, where $S$ is a set, a cycle of $f$ of length $n$ is a set of distinct points $x, f(x), ... f^{n-1}(x)$ such that $f^n(x) = x$. A cycle of infinite length is a set of distinct points $x, f(x), f^2(x), ...$. It is not hard to see that $S$ is a disjoint union of cycles of $f$.
-Claim: A bijection $f : S \to S$ has a square root if and only if there are an even number of cycles of $f$ of any given even length. (For the purposes of this result, infinity is an even number; so there can be an infinite number of cycles, and you need to consider cycles of infinite length.)
-Proof. First we show that any bijection with a square root has this property. Let $g : S \to S$ be a bijection such that $g(g(x)) = f(x)$. Then each cycle of $g$ corresponds to either one or two cycles of $f$, as follows.
If the cycle has odd length, it corresponds to one cycle of $f$. For example, the cycle $1 \to 2 \to 3 \to 1$ of $g$ would correspond to the cycle $1 \to 3 \to 2 \to 1$ of $f$. If the cycle has even length, it corresponds to two cycles of $f$. For example, the cycle $1 \to 2 \to 1$ of $g$ would correspond to the pair of cycles $1 \to 1$ and $2 \to 2$, and the cycle $1 \to 2 \to 3 \to ... $ would correspond to the pair of cycles $1 \to 3 \to ... $ and $2 \to 4 \to ...$. In particular, cycles of $f$ of odd length can come from cycles of $g$ one at a time or two at a time, but cycles of $f$ of even length can only come from cycles of $g$ two at a time.
-Now we show the reverse implication. Given a cycle of $f$ of odd length $2k+1$, consider the corresponding cycle of $f^{k+1}$ of odd length. Since $f^{2k+2} = f$ when restricted to this cycle, make this a cycle of $g$. Given a pair of cycles of $f$ of the same even length, just weave them together to get a cycle of $g$.
-I say "answer" instead of answer because it's not obvious if you can always find the cycle decomposition of some complicated bijection on an infinite set. In any case, if $f$ isn't assumed to be a bijection this question becomes much harder; the analogue of cycle decomposition is much more difficult to work with. I suggest you look at some examples where $S$ is finite if you really want to get a grip on this case; best of luck.<|endoftext|>
-TITLE: Watchdog Problem
-QUESTION [7 upvotes]: I just came up with this problem yesterday.
-Problem:
-Assume there is an important segment of straight line AB that needs to be watched at all times. A watchdog can see in one direction in front of itself and must walk at a constant non-zero speed at all times. (The watchdogs need not all have the same speed.) When it reaches the end of the segment, it must turn back (instantaneously, taking no time) and keep watching the line.
-How many watchdogs are needed to guarantee that the line segment is watched at all times? And how (initial positions and speeds of the dogs)?
-Note:
-It's clear that two dogs are not enough. I conjecture that four will suffice and three will not. For example, the configuration below doesn't work from 7.5 seconds on if AB's length is 10 meters.
-Dog 1 at A walks to the right with speed 1.0 m/s
-Dog 2 between A and B walks to the right with speed 1.0 m/s
-Dog 3 at B walks to the left with speed 1.0 m/s
-
-Or it can be illustrated as:
- A ---------------------------------------- B
-0.0 sec 1 --> 2 --> <-- 3
-
-2.5 sec 1 --> <-- 32 -->
-
-5.0 sec <-- 31 --> <-- 2
-
-7.5 sec <-- 3 <-- 21 -->
-
-Please provide your solutions, hints, or related problems, especially in higher dimensions or with looser conditions (watchdogs can walk with acceleration, etc.)
-
-REPLY [4 votes]: I'll make the trivial answer: 1 dog at point A, facing point B, walking with a velocity of 0. Presumably, you should really highlight that the dogs' velocities must be non-zero...this is the kind of side case that math people love to exploit.<|endoftext|>
-TITLE: If and only if, which direction is which?
-QUESTION [22 upvotes]: I can never figure out (because the English language is imprecise) which part of "if and only if" means which implication.
-($A$ if and only if $B$) = $(A \iff B)$, but is the following correct:
-($A$ only if $B$) = $(A \implies B)$
-($A$ if $B$) = $(A \impliedby B)$
-The trouble is, one never comes into contact with "$A$ if $B$" or "$A$ only if $B$" using those constructions in everyday common speech.
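-A quick truth table (an illustrative Python sketch) makes the two readings concrete: "$A$ only if $B$" matches $A \implies B$, while "$A$ if $B$" matches $B \implies A$, and the biconditional is their conjunction:
-def implies(p, q):
-    # material implication: false only when p holds and q fails
-    return (not p) or q
-
-print("A      B      A=>B (only if)   B=>A (if)   A<=>B")
-for a in (True, False):
-    for b in (True, False):
-        print("{!s:6} {!s:6} {!s:16} {!s:11} {!s}".format(
-            a, b, implies(a, b), implies(b, a), implies(a, b) and implies(b, a)))
-The two middle columns differ exactly in the rows (A true, B false) and (A false, B true), which is what separates "if" from "only if"; the biconditional holds exactly where both implications hold.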
-
-REPLY [8 votes]: The explanation in this link clearly and briefly differentiates the meanings and the inference direction of "if" and "only if". In summary, $A \text{ if and only if } B$ is mathematically interpreted as follows:
-
-'$A \text{ if } B$' : '$A \Leftarrow B$'
-'$A \text{ only if } B$' : '$\neg A \Leftarrow \neg B$', which is the contrapositive (hence logically equivalent form) of $A \Rightarrow B$<|endoftext|>
-TITLE: A definition of Conway base-$13$ function
-QUESTION [14 upvotes]: Can you give a definition of the Conway base-$13$ function better than the one actually presented on wikipedia, which isn't clear? Maybe with some examples?
-
-REPLY [12 votes]: I understand why the Wikipedia article uses the notation it does, but I find it annoying. Here is a transliteration, with some elaboration.
-
-Expand x ∈ (0,1) in base 13, using digits {0, 1, ... , 9, d, m, p} --- using the convention d = 10, m = 11, p = 12. N.B. for rational numbers whose most reduced expression a/b is such that b is a power of 13, there are two such expansions: a terminating expansion, and a non-terminating one ending in repeated p digits. In such a case, use the terminating expansion.
-Let S ⊂ (0,1) be the set of reals whose expansion involves finitely many p, m, and d digits, such that the final d digit occurs after the final p digit and the final m digit. (We may require that there be at least one digit 0--9 between the final p or m digit and the final d digit, but this does not seem to be necessary.) Then, every x ∈ S has a base 13 expansion of the form
-$$0.x_1x_2\ldots x_n\,[\,p\text{ or }m\,]\,a_1a_2\ldots a_k\,[\,d\,]\,b_1b_2\ldots$$
-for some digits $x_j \in \{0,\ldots,p\}$ and where the digits $a_j$ and $b_j$ are limited to $\{0,\ldots,9\}$ for all $j$. The square brackets above are only intended for emphasis; and in particular the $(n+1)$st base-13 digit of x is the final occurrence of either p or m in the expansion of x.
-For x ∈ S, we define f(x) by transliterating the string format above. We ignore the digits $x_1$ through $x_n$, transliterate the p or m as a plus-sign or minus-sign, and the d as a decimal point. This yields a decimal expansion for a real number, either $+a_1a_2\ldots a_k\,.\,b_1b_2\ldots$ or $-a_1a_2\ldots a_k\,.\,b_1b_2\ldots$ according to whether the $(n+1)$st base-13 digit of x is a p or an m respectively. For x ∈ S, we set f(x) to this number; for x ∉ S, we set f(x) = 0.
-
-Note: this function is not computable, as there is no way that you can determine in advance whether the base-13 expansion of x ∈ (0,1) has only finitely many occurrences of any of the digits p, m, or d; even if you are provided with a number which is promised to have only finitely many, in general you cannot know when you have found the last one. However, if you are provided with a number x ∈ (0,1) for which you know the location of the final p, m, and d digits, you can compute f(x) very straightforwardly.
-
-REPLY [3 votes]: The idea of the Conway base-13 function is to find a function that is not continuous, yet satisfies the conclusion of the intermediate value theorem: if f(a) < y < f(b) for some a < b, then there is some c between a and b with f(c) = y.<|endoftext|>
-TITLE: Intuitive Way To Understand Principal Component Analysis
-QUESTION [24 upvotes]: I know that this is meant to explain variance, but the description on Wikipedia stinks and it is not clear how you can explain variance using this technique.
-Can anyone explain it in a simple way?
-
-REPLY [11 votes]: Spent the day learning PCA, hope my cartoon translates the intuition over to you!
-I have also tried to briefly explain the utility of PCA and related it to an analogy (no maths) to help give that feeling of "learning closure".
-Visual Intuition (zoom in)
-
-Intuition via Utility
-I think the main usage for PCA is to be able to categorise different distinct "things", e.g. shiny cells vs. dark cells, in a way that leads to the least error (in terms of predicting the right colour of cell). E.g. imagine Sam was hiding behind me and I pinched a cell off the left side of his body, then asked you to guess the colour of the cell: by looking at the winning photo, or even the winning line, you can make a very good guess that it will be a "dark cell".
-Intuition via Analogy
-So my understanding is that PCA is like taking a "picture" in a lower dimension, but the various methods used out there attempt to make the picture as informative as possible by deciding which "angle" to take the picture from (notice that for 1D the angle of the "squishing line" also varies).
-Good video
-http://www.youtube.com/watch?v=UUxIXU_Ob6E<|endoftext|>
-TITLE: Comparing/Contrasting Cosine and Fourier Transforms
-QUESTION [10 upvotes]: What are the differences between a (discrete) cosine transform and a (discrete) Fourier transform? I know the former is used in JPEG encoding, while the latter plays a big part in signal and image processing. How related are they?
-
-REPLY [13 votes]: Cosine transforms are nothing more than shortcuts for computing the Fourier transform of a sequence with special symmetry (e.g. if the sequence represents samples from an even function).
-To give a concrete example in Mathematica ($VersionNumber >= 6), consider the sequence
-smp = {1., 2., 3., 4., 5., 4., 3., 2.};
-
-The sequence has redundancy (e.g. smp[[2]] == smp[[8]]; but note that in usual Fourier work, the indexing is taken to be from $0$ to $n-1$ instead of $1$ to $n$). A sequence like smp is termed an even sequence. The discrete Fourier transform of smp can be expected to have redundancy as well:
-Fourier[smp] // Chop
-{8.48528137423857, -2.414213562373095, 0, -0.4142135623730949, 0,
--0.4142135623730949, 0, -2.414213562373095}
-
-and the discrete Fourier transform is itself even. One could hope to have a way to compute the discrete Fourier transform without redundancy, and this is where the type I discrete cosine transform (DCT-I) comes in:
-FourierDCT[Take[smp, Length[smp]/2 + 1], 1] // Chop
-{8.48528137423857, -2.414213562373095, 0., -0.4142135623730949, 0.}
-
-The more usual type II discrete cosine transform (DCT-II) is the redundancy-free method for computing the Fourier transform of a so-called "quarter wave even" sequence (with an additional transformation to make the results entirely real for real inputs). A quarter wave even sequence looks like this:
-smp = {1., 2., 3., 4., 4., 3., 2., 1.};
-
-and the correspondence (e.g. smp[[2]] == smp[[7]]) is easily seen. DCT-II requires only half of the given sequence to do its job:
-Exp[2 Pi I Range[0, 7]/16] Fourier[smp]/Sqrt[2] // Chop
-{4.999999999999999, -1.5771610149494746, 0, -0.11208538229199128, 0,
- 0.11208538229199126, 0, 1.5771610149494748}
-
-FourierDCT[Take[smp, Length[smp]/2], 2] // Chop
-{5., -1.577161014949475, 0, -0.11208538229199139}
-
-(We see in this example that the exploitation of symmetry in this case led to a slightly more accurate result.)
-The other two types of discrete cosine transforms, as well as the four types of discrete sine transforms, are intended to be redundancy-free methods for computing discrete Fourier transforms. For DCT-I, one can deal with a sequence of length $\frac{N}{2}+1$ instead of a sequence of length $N$, while for DCT-II, only a length $\frac{N}{2}$ sequence is required.
This represents a savings in computational time and effort. (I assume the case of even length here; a similar symmetry property can be established for the case of odd length.)
-In any event, I wish to point out two good references on how the FFT and the DCTs/DSTs are related: Van Loan's Computational Frameworks for the Fast Fourier Transform and Briggs/Henson's The DFT: an owner's manual for the discrete Fourier transform.<|endoftext|>
-TITLE: Intuitive explanation of the Burnside Lemma
-QUESTION [13 upvotes]: The Burnside Lemma looks like it should have an intuitive explanation. Does anyone have one?
-
-REPLY [3 votes]: You can quickly reduce to the case of a transitive action, in which case we just want to explain why the total number of times that something gets fixed is exactly the size of the group. But in this case everything is symmetric at all points in the (unique) orbit. So to count all the times something gets fixed, we can just count how many times a particular $x$ gets fixed, and multiply by the size of the orbit. Now we've reduced to the fact that the size of an orbit is the index of the stabilizer.<|endoftext|>
-TITLE: Division by imaginary number
-QUESTION [14 upvotes]: I ran into a problem dividing by imaginary numbers recently. I was trying to simplify:
-$2 \over i$
-I came up with two methods, which produced different results:
-Method 1: ${2 \over i} = {2i \over i^2} = {2i \over -1} = -2i$
-Method 2: ${2 \over i} = {2 \over \sqrt{-1}} = {\sqrt{4} \over \sqrt{-1}} = \sqrt{4 \over -1} = \sqrt{-4} = 2i$
-I know from using the formula from this Wikipedia article that method 1 produces the correct result. My question is: why does method 2 give the incorrect result? What is the invalid step?
-
-REPLY [5 votes]: The only foolproof way to be sure to find the right result while dividing two complex numbers
-$$\frac{a+bi}{c+di}$$
-is reducing it to a multiplication. The answer is of the form $x+yi$; therefore
-$$(c+di)(x+yi) = a+bi$$
-and you will end up with two linear equations, one for the real coefficient and another for the imaginary one. As Simon and Casebash already wrote, taking a square root leads to problems, since you cannot be sure which value must be chosen.<|endoftext|>
-TITLE: $f(a(x))=f(x)$ - functional equation
-QUESTION [5 upvotes]: I was reading "Functional Equations and How to Solve Them" by Small and the following comment pops up without much justification on p. 13:
-
-If $a(x)$ is an involution, then $f(a(x))=f(x)$ has as solutions $f(x) = T\,[x,a(x)]$, where $T$ is an arbitrary symmetric function of $u$ and $v$.
-
-I was wondering why this was true (it works for examples I've tried, but I am not sure $(1)$ how to prove this and $(2)$ if there's anything obvious staring at me in the face here).
-
-REPLY [8 votes]: Any function f(x) that is a solution to your functional equation f(a(x)) = f(x) must satisfy the property that it is unchanged when you plug in a(x) instead of x. In addition, f(x) clearly must be a function f(x) = T[x, a(x)] depending on x and a(x); the question is to see why T must be symmetric.
-Now, T[x, a(x)] = f(x) = f(a(x)) = T[a(x), a(a(x))] = T[a(x), x] since a(x) is an involution. In particular, this means that T must be symmetric in its two variables.<|endoftext|>
-TITLE: Why are superalgebras so important?
-QUESTION [20 upvotes]: I know that a superalgebra is a $\mathbb Z/2\mathbb Z$-graded algebra and that it behaves nicely.
I know very little physics though, so even though I know that the super- prefix is related to supersymmetry, I don't know what that means; is there a compelling mathematical reason to consider superalgebras? - -REPLY [5 votes]: If you are willing to build a formalism of supergeometry then some constructions are easier to state in this language, notably differential forms and the De Rham complex. The first one is just functions from the odd line ($R^{(0 | 1)}$) to your manifold, and the De Rham differential comes from super-diffeomorphisms acting on that line. -Also, there is a symplectic/orthogonal duality in representation theory that some people (most famously Kontsevich) have advocated as best understood in terms of Lie superalgebras. -Constructions using a formally negative-dimensional object (or calculations that look like they might come from such an object) sometimes can be interpreted in terms of bona fide $Z/2$ graded objects whose superdimension (even dimension minus odd dimension) is the negative dimension in question. -edit: you will get more knowledgeable answers if you post to Mathoverflow, where some of the contributors have written papers on super- or noncommutative geometry.<|endoftext|> -TITLE: Non-integer powers of negative numbers -QUESTION [10 upvotes]: Roots behave strangely over complex numbers. Given this, how do non-integer powers behave over negative numbers? More specifically: - -Can we define fractional powers such as $(-2)^{-1.5}$? -Can we define irrational powers $(-2)^\pi$? - -REPLY [10 votes]: As other posters have indicated, the problem is that the complex logarithm isn't well-defined on $\mathbb{C}$. This is related to my comments in a recent question about the square root not being well-defined (since of course $\sqrt{z} = e^{ \frac{\log z}{2} }$). -One point of view is that the complex exponential $e^z : \mathbb{C} \to \mathbb{C}$ does not really have domain $\mathbb{C}$. Due to periodicity it really has domain $\mathbb{C}/2\pi i \mathbb{Z}$. So one way to define the complex logarithm is not as a function with range $\mathbb{C}$, but as a function with range $\mathbb{C}/2\pi i \mathbb{Z}$. Thus for example $\log 1 = 0, 2 \pi i, - 2 \pi i, ...$ and so forth. -So what are we doing when we don't do this? Well, let us suppose that for the time being we have decided that $\log 1 = 0$. This is how we get other values of the logarithm: using power series, we can define $\log (1 + z)$ for any $z$ with $|z| < 1$. We can now pick any number in this circle and take a power series expansion about that number to get a different power series whose circle of convergence is somewhere else. And by repeatedly changing the center of our power series, we can compute different values of the logarithm. This is called analytic continuation, and typically it proceeds by choosing a (say, smooth) path from $1$ to some other complex number and taking power series around different points in that path. -The problem you quickly run into is that the value of $\log z$ depends on the choice of path from $1$ to $z$. For example, the path $z = e^{2 \pi i t}, 0 \le t \le 1$ is a path from $1$ to $1$, and if you analytically continue the logarithm on it you will get $\log 1 = 2 \pi i$. And that is not what you wanted. (This is essentially the same as the contour integral of $\frac{1}{z}$ along this contour.) -One way around this problem is to arbitrarily choose a ray from the origin and declare that you are not allowed to analytically continue the logarithm through this ray. 
This is called choosing a branch cut, and it is not canonical, so I don't like it.
-There is another way to resolve this situation, which is to consider the Riemann surface $\{(z, e^z) : z \in \mathbb{C}\} \subset \mathbb{C}^2$ and to think of the logarithm as the projection to the first coordinate from this surface to $\mathbb{C}$. So all the difficulties we have encountered above have been due to the fact that we have been trying to pretend that this projection has certain properties that it doesn't have. A closed path like $z = e^{2\pi i t}$ in which the logarithm starts and ends with different values corresponds to a path on this surface which starts and ends at different points, so there is no contradiction. This was Riemann's original motivation for defining Riemann surfaces, and it is this particular Riemann surface that powers things like the residue theorem.<|endoftext|>
-TITLE: Solution to $1-f(x) = f(-x)$
-QUESTION [17 upvotes]: Can we find $f(x)$ given that $1-f(x) = f(-x)$ for all real $x$?
-I start by rearranging to: $f(-x) + f(x) = 1$. I can find an example such as $f(x) = |x|$ that works for some values of $x$, but not all. Is there a method here? Is this possible?
-
-REPLY [2 votes]: We have $f(x)-1/2=-(1-f(x))+1/2=-f(-x)+1/2=-(f(-x)-1/2)$;
-hence $f(x)-1/2$ is an odd function.
-So, $f(x)=1/2+\phi(x)$ for some odd function $\phi$.
-Clearly, any such function satisfies the original equation.<|endoftext|>
-TITLE: How do you estimate the flow rate of one fluid into another like the Deep Horizon Oil leak?
-QUESTION [11 upvotes]: How have experts estimated the amount of oil that was shooting out of that pipe in the Gulf? I bet there's some neat math or physics involved here, and some interesting assumptions considering how little concrete data are available.
-
-REPLY [3 votes]: One interesting fact told to me by my father, a chem engineer, is that if you have a high pressure gas leaking into a low pressure gas through a small hole, there is an upper limit to the rate of flow. That is, no matter how high the pressure gets on the high pressure side, the rate of flow does not surpass some finite limit. I suppose this relates to the speed of sound, but I don't know. I also don't know if there is a corresponding phenomenon with liquids.<|endoftext|>
-TITLE: Motivating Example for Algebraic Geometry/Scheme Theory
-QUESTION [39 upvotes]: I am in the process of trying to learn algebraic geometry via schemes and am wondering if there are simple motivating examples of why you would want to consider these structures.
-I think my biggest issue is the following: I understand (and really like) the idea of passing from a space to functions on a space. In passing from $k^n$ to $R:=k[x_1,\ldots,x_n]$, we may recover the points by looking at the maximal ideals of $R$. But why consider $\operatorname{Spec} R$ instead of $\operatorname{MaxSpec} R$? Why is it helpful to have non-closed points that don't have an analog to points in $k^n$? On a wikipedia article, it mentioned that the Italian school used a (vague) notion of a generic point to prove things. Is there a (relatively) simple example where we can see the utility of non-closed points?
-
-REPLY [3 votes]: A very late answer; my apologies if I am not contributing anything beyond what the very wonderful answers above have already contributed.
-Generic points $\Leftrightarrow$ irreducible subsets...
as long as we are doing classical geometry, there really is no major difference between using $\text{Spec}$ and using $\text{MaxSpec}$ and talking about things "being true outside a closed subset." And this is just an incredibly useful thing to do; for example, if we want to do an induction by taking hyperplane sections, we can talk about a randomly-chosen hyperplane as a generic hyperplane, and this does not really cause any issues.
-In fact, in Mumford's Algebraic Geometry I: Complex Projective Varieties, Mumford literally defines a generic point as a point with coefficients transcendentally independent from the coordinates of our equations, and thinking in this way causes zero hindrance as long as we are classical.
-The real power of generic points is as an extremely convenient language for all these things, which, in particular, is infinitely cleaner once we are not working over a field like $\mathbb{C}$ which is absurdly large. For example, let us say that we want to prove that a variety of dimension $n$ is generically smooth. The point is that we can define "smooth at a point" in a way that makes sense for "normal points" and generic points, so we can literally check that the generic point is smooth by plugging in equations. If one thinks about this, we are basically "formally taking a point with transcendental coefficients."<|endoftext|>
-TITLE: Are isosceles always and only similar to other isosceles?
-QUESTION [5 upvotes]: In my geometry class last year I remember putting down the statement in a column proof "That all isosceles are always and only similar to other isosceles". I do not remember what I was trying to prove. But, I do remember that I was stressed and that was the only thing I could think of, and I made a guess thinking I would probably get the proof wrong on my test.
-Funnily enough, though, I didn't get the proof wrong, and I was wondering if anyone could show a proof as to why this would be true. I mean, it makes sense, but I do not see any way to prove it. Could you please explain how this is true?
-
-REPLY [9 votes]: Re-reading your question, I see two possible interpretations of your statement.
-First (and my original answer), "If △ABC is isosceles and △ABC~△DEF, then △DEF is isosceles." Two triangles are similar if and only if the three angles of one are congruent to the three angles of the other. Since a triangle is isosceles if and only if two of its angles are congruent, if a triangle is similar to an isosceles triangle, then it will also have two congruent angles and must be isosceles.
-Second, "If △ABC and △DEF are isosceles, then they are similar." This is not true. Suppose one triangle has angles with measures 20°, 20°, and 140° and the other triangle has angles with measures 85°, 85°, and 10°. Both triangles are isosceles (since within each triangle, there is a pair of congruent angles), but the triangles are not similar (because the angles of one are not congruent to the angles of the other).<|endoftext|>
-TITLE: Is there a known mathematical equation to find the nth prime?
-QUESTION [70 upvotes]: I've solved for it by writing a computer program, but I was wondering if there was a mathematical equation that you could use to solve for the nth prime?
-
-REPLY [34 votes]: Far better than sieving in the large range ShreevatsaR suggested (which, for the 10¹⁵th prime, has 10¹⁵ members and takes about 33 TB to store in compact form), take a good first guess like Riemann's R and use one of the advanced methods of computing pi(x) for that first guess.
(If this is far off for some reason—it shouldn't be—estimate the distance to the proper point and calculate a new guess from there.) At this point, you can sieve the small distance, perhaps just 10⁸ or 10⁹, to the desired number.
-This is about 100,000 times faster for numbers around the size I indicated. Even for numbers as small as 10 to 12 digits, this is faster if you don't have a precomputed table large enough to contain your answer.<|endoftext|>
-TITLE: Closed form of a partial sum of the power series of $\exp(x)$
-QUESTION [11 upvotes]: I am looking for a closed form (ideally expressed as elementary functions) of the function $\exp_n(x) = \sum_{k=0}^n x^k / k!$. I am already aware of expressing it in terms of the gamma function.
-Background / Motivation
-When counting combinations of objects with generating functions, it is useful to be able to express the partial sum $1 + x + \cdots + x^n$ as $\frac{1-x^{n+1}}{1-x}$. For example, to count the number of ways to pick 5 marbles from a bag of blue, red, and green marbles where we pick at most 3 blue marbles and at most 2 red marbles, we can consider the generating function $f(x) = (1+x+x^2+x^3)(1+x+x^2)(1+x+x^2+\cdots)$.
-By using the partial sum identity, we can express it as $f(x) = \left(\frac{1-x^4}{1-x}\right)\left(\frac{1-x^3}{1-x}\right)\left(\frac{1}{1-x}\right)$. Simplify, express as a simpler product of series, and find the coefficient of the $x^5$ term.
-I want to be able to do the same for a generating function in the form
-$g(x) = \exp_{n_1}(x)^{p_1} \exp_{n_2}(x)^{p_2} \cdots \exp_{n_j}(x)^{p_j}$
-The easiest way to extract the coefficient of a given term $x^p / p!$ would be to use a similar closed form expression for $\exp_n(x)$ and a similar technique to $f$.
-Attempted Solutions
-Differential equation
-Recall that the way to prove the identity $1+x+x^2+\cdots+x^n = \frac{1-x^{n+1}}{1-x}$ is to define $S = 1 + x + x^2 + \cdots + x^n$ and notice that $S - Sx = 1 - x^{n+1}$. Likewise, notice that $y(x) = \exp_n(x)$ satisfies $y - y' = x^n/n!$. Via SAGE, the solution is $y(x) = \frac{c+\Gamma(n+1,x)}{n!}e^x$. Our initial condition $y(0) = 1$ gives $c=0$, since $\Gamma(n+1,0) = n!$. Since $\Gamma(n+1,x) = n! e^{-x} \exp_n(x)$, we indeed have $y(x) = \exp_n(x)$.
-Recurrence Relation
-Notice that $\exp_n(x) = \exp_{n-1}(x) + x^n/n!$. Using the unilateral Z-Transform and related properties, we find that $\mathcal{Z}[\exp_n(x)] = (z e^{x/z})/(z-1)$.
-Therefore, $\exp_n(x) = \mathcal{Z}^{-1}\left[(z e^{x/z})/(z-1)\right] = \frac{1}{2 \pi i} \oint_C z^n e^{x/z}/(z-1)\;dz$.
-$(z^n e^{x/z})/(z-1)$ has two singularities: $z = 1$ and $z = 0$. The point $z = 1$ is a pole of order one with residue $e^x$. To find the residue at $z = 0$ consider the product $z^n e^{x/z} (-1/(1-z)) = -z^n \left( \sum_{m=0}^\infty x^m z^{-m} / m! \right) \left( \sum_{j=0}^\infty z^j \right)$. The coefficient of the $z^{-1}$ term is given when $n - m + j = -1$. The residue at the point $z=0$ is then $-\sum_{m,j} x^m / m! = -\sum_{m=n+1}^\infty x^m / m!$.
-Let $C$ be a positively oriented circle centered at the origin with radius greater than $1$, so that it encloses both singularities. By Cauchy's Residue Theorem, $\frac{1}{2 \pi i} \oint_C z^n e^{x/z}/(z-1)\;dz = \frac{1}{2 \pi i} 2 \pi i \left(e^x - \sum_{m=n+1}^\infty x^m / m!\right) = \exp_n(x)$.
-Finite Calculus
-I've tried to evaluate the sum using finite calculus, but can't seem to make much progress.
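-As a numerical sanity check on the identity $\exp_n(x) = e^x\,\Gamma(n+1,x)/n!$ rearranged above, here is a minimal sketch (assuming a Python environment with SciPy, whose scipy.special.gammaincc computes the regularized upper incomplete gamma $\Gamma(s,x)/\Gamma(s)$):
-import math
-from scipy.special import gammaincc  # gammaincc(s, x) = Gamma(s, x) / Gamma(s)
-
-def exp_n(x, n):
-    # partial sum of the exponential series: sum_{k=0}^{n} x**k / k!
-    return sum(x**k / math.factorial(k) for k in range(n + 1))
-
-for n in (3, 5, 10):
-    for x in (0.5, 2.0, 7.0):
-        direct = exp_n(x, n)
-        via_gamma = math.exp(x) * gammaincc(n + 1, x)
-        assert abs(direct - via_gamma) <= 1e-9 * max(1.0, direct)
-Agreement to floating-point accuracy is no proof, but it guards against sign and normalization slips in the gamma-function form.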
-
-REPLY [6 votes]: I'm not sure you'll like this, but in terms of the incomplete $\Gamma$ function, one can get a closed form as
-$$\frac{e^{x}\Gamma(n+1,x)}{\Gamma(n+1)}.$$
-The incomplete $\Gamma$ function is defined as
-$$\Gamma(s,x) = \int_x^{\infty} t^{s-1}e^{-t}\,dt.$$<|endoftext|>
-TITLE: How to test if a point is inside the convex hull of two circles?
-QUESTION [6 upvotes]: Following my previous question, I'm wondering how I can determine if a point is within the convex hull of two circles (a boolean value). There's no problem testing if the point is in either of the two circles, but it can also be "between" them and I don't know how to test for that.
-Seeing Wolfram MathWorld's article on Circle-Circle Tangents, it seems that an inequality that tests if the point is on the internal side of the two external circle tangents would do the trick, but I'm afraid my equation-solving skills fall too far short to turn the tangent equations into a fitting inequality.
-I'm defining the convex hull of two circles using both centers and radii.
-
-REPLY [7 votes]: You can first calculate the four points of tangency, then use the Point in Polygon algorithm to determine if your point is inside the quadrilateral (this is assuming you have a programming-related problem).
-Otherwise, once you have found the four tangent points, you can form four lines and get four simultaneous inequalities.<|endoftext|>
-TITLE: Concrete examples of valuation rings of rank two.
-QUESTION [12 upvotes]: Let $A$ be a valuation ring of rank two. Then $A$ gives an example of a commutative ring such that $\mathrm{Spec}(A)$ is a noetherian topological space, but $A$ is non-noetherian. (Indeed, otherwise $A$ would be a discrete valuation ring.)
-Is there a concrete example of such a ring $A$?
-
-REPLY [2 votes]: I'm late to this game, and Robin Chapman's elaboration of Qiaochu's example is good -- so good I just used it as a (counter)-example in my answer here. Whilst thinking about that, I realised there is another example which feels slightly more natural to me as someone who works with $p$-adics a lot:
-Let $K$ be the field $\mathbb Q_p((X))$ of Laurent series over the $p$-adics. We have the obvious rank-one discrete valuation
-$$w_1(\sum a_i X^i) = \min\{i: a_i \neq 0\}$$
-like over any other field of Laurent series, just treating the coefficient field as being of zero valuation and identifying it with the residue field of the valuation ring $R_1 = \{x \in K: w_1(x) \ge 0\} = \mathbb Q_p[[X]]$ with respect to its maximal ideal $\mathfrak{m}_1 = \{x \in K: w_1(x) \ge 1\}= X \cdot \mathbb Q_p[[X]]$.
-So far we have not used that $\mathbb Q_p$ has a valuation $v_p$ as well. Let's do it. Define a rank-two valuation $w_2 : K \rightarrow (\mathbb Z \times \mathbb Z) \cup \{\infty\}$ as follows: $w_2(0) := \infty$ and for $0 \neq x = \sum a_i X^i$ set
-$$w_2(x) := \left(w_1(x), v_p(a_{w_1(x)})\right)$$
-i.e. we "refine" the rank-one valuation $w_1$ by also keeping track of the $p$-adic valuation of the leading coefficient. Of course we give the value group the lexicographic order here.
Check that the valuation ring of this valuation is
-$$R_2 = \{x \in K: w_2(x) \ge (0,0)\} = \{\sum_{i \ge 0} a_i X^i \in \mathbb Q_p[[X]]: a_0 \in \mathbb Z_p\};$$
-and the maximal ideal of $R_2$ is
-$$\mathfrak m_2= (p) = \{x \in K: w_2(x) \ge (0,1)\} = \{\sum_{i \ge 0} a_i X^i \in \mathbb Q_p[[X]]: a_0 \in p\mathbb Z_p\}.$$
-Note that we have proper inclusions $R_2 \subsetneq R_1$ and $\mathfrak m_1 \subsetneq \mathfrak m_2$ (indeed, "the other way around"!), exactly as in the answer linked above. I found it worthwhile to think about how the valuation rings and their maximal ideals relate for this kind of "refinement" of a given valuation. One could obviously iterate the procedure.
-It is also worthwhile to think about how this example and Robin Chapman's / Qiaochu's are "almost the same". Actually, these are standard first examples of higher(-dimensional) local fields which have been studied in recent decades for their class field theory (Kato, Fesenko), and in connection to $p$-adic Langlands.<|endoftext|>
-TITLE: Is there an atlas of Algebraic Groups and corresponding Coordinate rings?
-QUESTION [31 upvotes]: I was wondering if there was a resource that listed known algebraic groups and their corresponding coordinate rings.
-Edit: The previous wording was terrible.
-Given an algebraic group $G$ with Borel subgroup $B$, we can form the Flag Variety $G/B$, which is projective. I am hoping for a list of the graded rings $R$ such that $Proj(R)$ corresponds to this Flag Variety.
-
-REPLY [2 votes]: You probably mean for $G$ to be a reductive group. Keep in mind that $G/B$ is equal to $\text{Proj}(R)$ for many different $R$'s, corresponding to different embeddings of $G/B$ into projective space. The best object to study is the homogeneous coordinate ring (also known as the Cox ring) of $G/B$. When $G = SL_n$, the homogeneous coordinate ring is in Miller and Sturmfels' Combinatorial Commutative Algebra, Chapter 14. For the general case, some keywords to look for are "standard monomial theory", "straightening laws", and "Littelmann path model". The homogeneous coordinate ring of a general $G/B$ (or at least $G/P$ for $P$ a maximal parabolic) might be in Lakshmibai and Raghavan's Standard Monomial Theory: Invariant Theoretic Approach, but I am not sure. Regardless, that is a good introduction to the subject and should have a fairly comprehensive list of references for further information.<|endoftext|>
-TITLE: Is a curve's curvature invariant under rotation and uniform scaling?
-QUESTION [8 upvotes]: The title really says it all, but once again: is a curve's curvature invariant under rotation and uniform scaling?
-
-REPLY [10 votes]: A curve's curvature is invariant under rotation. Intuitively, a curve turns just as much no matter how it is oriented. More formally, for a curve $\gamma(s)$ that is parametrized by arc length, the curvature is $\kappa(s) = ||\gamma''(s)||$. Rotation does not change the length of the $\gamma''(s)$ vector, only the direction; therefore, rotation does not affect curvature.
-A curve's curvature is not invariant under uniform scaling, however. Consider the example of a circle. All circles are the same up to scaling, but they don't all have the same curvature; in general, a circle of radius $r$ has curvature $1/r$.<|endoftext|>
-TITLE: What is the form of curvature that is invariant under rotations and uniform scaling
-QUESTION [8 upvotes]: This is a followup to this question, where I learned that curvature is invariant to rotations.
-I have learned of a version of curvature that is invariant under affine transformations.
-I am wondering if there is a form of curvature between the two. Invariant under uniform scaling and rotation but not all affine transformations?
-
-REPLY [5 votes]: I don't know if this would suit you, but one thing you can consider (much more naive than the notion of affine curvature) is to fix a point $P_0$ on your curve, and then consider the function on the curve given by sending a point $P$ to the quantity
-curvature(P)/curvature(P_0).
-This is a kind of relative curvature, where you measure how much everything is curving in comparison to the curvature at $P_0$, and is invariant under scaling and rotation.<|endoftext|>
-TITLE: Dot product in coordinates
-QUESTION [8 upvotes]: The dot product of two vectors in the plane can be defined as the product of the lengths of those vectors and the cosine of the angle between them.
-In Cartesian coordinates, the dot product of vectors with coordinates $(x_1, y_1)$ and $(x_2, y_2)$ is equal to $x_1x_2 + y_1y_2$.
-How to prove it?
-
-REPLY [12 votes]: I suppose you want to prove that your two definitions of the dot product are the same.
-We start with the definition of the dot product as $(\vec{u}, \vec{v}) = |\vec{u}| |\vec{v}| \cos \theta$ and prove that it also satisfies $(\vec{u}, \vec{v}) = x_1 x_2 + y_1 y_2$.
-At first you can prove that the dot product is linear: $(\vec{v_1}, \vec{v_2} + \alpha \vec{v_3}) = (\vec{v_1}, \vec{v_2}) + \alpha (\vec{v_1}, \vec{v_3})$. This is true because $(\vec{v_1}, \vec{v_2})$ is equal to the product of $|\vec{v_1}|$ and the projection of $\vec{v_2}$ on $\vec{v_1}$. The projection of a sum of vectors is equal to the sum of the projections. Hence the dot product is linear.
-Let $\vec{e_1}$ and $\vec{e_2}$ be the vectors with coordinates $(1, 0)$ and $(0, 1)$.
-After that, if $\vec{v_1} = x_1 \vec{e_1} + y_1 \vec{e_2}$ and $\vec{v_2} = x_2 \vec{e_1} + y_2 \vec{e_2}$, then by linearity of the dot product we have $(\vec{v_1}, \vec{v_2}) = x_1 x_2 (\vec{e_1}, \vec{e_1}) + x_1 y_2 (\vec{e_1}, \vec{e_2}) + x_2 y_1 (\vec{e_2}, \vec{e_1}) + y_1 y_2 (\vec{e_2}, \vec{e_2})$.
-Since $(\vec{e_1}, \vec{e_1}) = (\vec{e_2}, \vec{e_2}) = 1$ and $(\vec{e_1}, \vec{e_2}) = (\vec{e_2}, \vec{e_1}) = 0$, we have $(\vec{v_1}, \vec{v_2}) = x_1 x_2 + y_1 y_2$.
-
-REPLY [8 votes]: The dot product is invariant under rotations, so we may rotate our coordinate system so that $v$ is along the $x$-axis. In this case, $v = (|v|, 0)$. Letting $w = (x,y)$ we have (using the definition of dot product in Cartesian coordinates) $v \cdot w = |v| x$. But what is $x$? Well, if you draw the picture and let $\theta$ be the angle between $v$ and $w$, then we see that $\cos \theta = x/|w|$ so that $x = |w| \cos \theta$. Thus $v\cdot w = |v||w| \cos \theta$.<|endoftext|>
-TITLE: Compute the Centroid of a $3D$ Planar Polygon Without Projecting It To Specific Planes
-QUESTION [6 upvotes]: Given a list of coplanar vertex coordinates $\left(pt_1, pt_2, pt_3, \cdots \right)$ of a planar polygon in 3D, how does one compute the centroid of the polygon?
-One way to do it is to project the polygon onto the $XY$ and $YZ$ planes, but I don't really favor this approach as you have to check the orientation of the polygon first before doing the projection and computing the centroid.
-More specifically, I'm looking for a natural extension of the 2D polygon centroid algorithm to 3D:
-\begin{align}
-C_x&=\frac1{6A}\sum_{i=0}^{n-1}(x_i+x_{i+1})(x_iy_{i+1}-x_{i+1}y_i)\\
-C_y&=\frac1{6A}\sum_{i=0}^{n-1}(y_i+y_{i+1})(x_iy_{i+1}-x_{i+1}y_i)\\
-A&=\frac12\sum_{i=0}^{n-1}(x_iy_{i+1}-x_{i+1}y_i)
-\end{align}
-Any idea?
-
-REPLY [5 votes]: You can take any two orthonormal vectors $\vec{e_1}$ and $\vec{e_2}$ on the plane and use them as a basis (unit length is needed for the projection formulas below). You also need some point $(x_0, y_0, z_0)$ on the plane as origin.
-Given a point with coordinates $(x_1, y_1, z_1)$ on your plane, you calculate its coordinates with respect to the new basis:
-$x = (x_1 - x_0) e_{1x} + (y_1 - y_0) e_{1y} + (z_1 - z_0) e_{1z}$
-$y = (x_1 - x_0) e_{2x} + (y_1 - y_0) e_{2y} + (z_1 - z_0) e_{2z}$
-And after that you can apply your formulae to get $C_x$ and $C_y$. Those coordinates are easily transformed back into the original 3D coordinates:
-$x = x_0 + e_{1x} C_x + e_{2x} C_y$
-$y = y_0 + e_{1y} C_x + e_{2y} C_y$
-$z = z_0 + e_{1z} C_x + e_{2z} C_y$<|endoftext|>
-TITLE: Examples/other references for EGA 0.4.5.4
-QUESTION [10 upvotes]: Proposition 0.4.5.4 in EGA appears to be a general representability theorem. It reads:
-
-Suppose $F$ is a contravariant functor from the category of locally ringed spaces over $S$ to the category of sets. Suppose given representable sub-functors $F_i$ of $F$, such that the morphisms $F_i \to F$ are representable by open immersions. Suppose furthermore that if $Hom(-, X) \to F$ is a morphism and the functors $F_i \times_F Hom(-,X)$ are representable by $Hom(-,X_i)$, the family $X_i$ forms an open covering of $X$. (That $X_i \to X$ is an open immersion follows from the definitions.) Finally, suppose that if $U$ ranges over the open subsets of a locally ringed space $X$, the functor $U \to F(U)$ is a sheaf. Then, $F$ is representable.
-
-I haven't yet been able to grok the proof, but it appears to be some sort of extended gluing construction. This result appears to be used in proving that fibered products exist in the category of schemes. However, it's fairly easy to directly construct fibered products by gluing open affines.
-Are there examples where this result actually makes the life of algebraic geometers easier? Also, I'd appreciate any links to examples (outside of EGA) where this result is used.
-
-REPLY [2 votes]: Actually the result is used all the time in the basics of algebraic geometry. It provides the formalization of the principle of gluing constructions.
-For example, if you want to show that fibered products $X \times_S Y$ exist, first do it for affine schemes $X,Y,S$ using the adjunction between $\operatorname{Spec}$ and global sections. Now, if $S$ is arbitrary, the functor $\operatorname{Sch}/S \to \operatorname{Set}, Z \mapsto \operatorname{Hom}_S(Z,X) \times \operatorname{Hom}_S(Z,Y)$ is a sheaf and locally representable on $S$, thus representable. Thus $X \times_S Y$ exists. Now if also $X$ is arbitrary, consider the functor $\operatorname{Sch}/X \to \operatorname{Set}, Z \mapsto \operatorname{Hom}_S(Z',Y)$, where $Z' = Z \to X \to S$. This is a sheaf and locally representable on $X$, thus representable, which shows that $X \times_S Y$ exists. The usual "ad hoc" proofs for the existence of the fibered product actually just reprove the general representability result in the special case.
-Here is another example: If $A$ is a quasi-coherent sheaf of algebras on a scheme $X$, it is possible to construct the $X$-scheme $Spec(A)$.
-TITLE: Examples/other references for EGA 0.4.5.4
-QUESTION [10 upvotes]: Proposition 0.4.5.4 in EGA appears to be a general representability theorem. It reads:
-
-Suppose $F$ is a contravariant functor from the category of locally ringed spaces over $S$ to the category of sets. Suppose given representable sub-functors $F_i$ of $F$, such that the morphisms $F_i \to F$ are representable by open immersions. Suppose furthermore that if $Hom(-, X) \to F$ is a morphism and the functors $F_i \times_F Hom(-,X)$ are representable by $Hom(-,X_i)$, the family $X_i$ forms an open covering of $X$. (That $X_i \to X$ is an open immersion follows from the definitions.) Finally, suppose that if $U$ ranges over the open subsets of a locally ringed space $X$, the functor $U \to F(U)$ is a sheaf. Then, $F$ is representable.
-
-I haven't yet been able to grok the proof, but it appears to be some sort of extended gluing construction. This result appears to be used in proving that fibered products exist in the category of schemes. However, it's fairly easy to directly construct fibered products by gluing open affines.
-Are there examples where this result actually makes the life of algebraic geometers easier? Also, I'd appreciate any links to examples (outside of EGA) where this result is used.
-
-REPLY [2 votes]: Actually the result is used all the time in the basics of algebraic geometry. It provides the formalization of the principle of gluing constructions.
-For example, if you want to show that fibered products $X \times_S Y$ exist, first do it for affine schemes $X,Y,S$ using the adjunction between $\operatorname{Spec}$ and global sections. Now, if $S$ is arbitrary, the functor $\operatorname{Sch}/S \to \operatorname{Set}, Z \mapsto \operatorname{Hom}_S(Z,X) \times \operatorname{Hom}_S(Z,Y)$ is a sheaf and locally representable on $S$, thus representable. Thus $X \times_S Y$ exists. Now if also $X$ is arbitrary, consider the functor $\operatorname{Sch}/X \to \operatorname{Set}, Z \mapsto \operatorname{Hom}_S(Z',Y)$, where $Z' = Z \to X \to S$. This is a sheaf and locally representable on $X$, thus representable, which shows that $X \times_S Y$ exists. The usual "ad hoc" proofs for the existence of the fibered product actually just reprove the general representability result in the special case.
-Here is another example: If $A$ is a quasi-coherent sheaf of algebras on a scheme $X$, it is possible to construct the $X$-scheme $\operatorname{Spec}(A)$. It is very laborious to check all the details of the gluing construction, well-definedness etc. when you just want to glue the $U$-schemes $\operatorname{Spec}(A(U))$, $U \subseteq X$ affine, together. But instead, you could just consider the functor
-$$\operatorname{Sch}/X \to \operatorname{Set}, (t : Z \to X) \mapsto \operatorname{Hom}_{\mathcal{O}_X-\operatorname{Alg}}(A,t_* \mathcal{O}_Z)$$
-and show that it is a sheaf (obvious) and locally on $X$ representable (the usual adjunction with the spectrum of a ring), so it is representable by an $X$-scheme $\operatorname{Spec}(A)$ for which you also directly have a universal property. Again I want to emphasize: You get into a big mess when you want to construct this without using functors or universal properties. These rather abstract notions are very useful also in concrete situations, because they enable you to fix your ideas and make every construction fit together nicely. When you get more accustomed to these techniques, you stop thinking of specific functors; you think in a "functorial way" and recognize, for example, why gluing constructions work.<|endoftext|>
-TITLE: Prove: $(a + b)^{n} \geq a^{n} + b^{n}$
-QUESTION [10 upvotes]: Struggling with yet another proof:
-
-Prove that, for any positive integer $n$: $(a + b)^n \geq a^n + b^n$ for all $a, b > 0$.
-
-I wasted $3$ pages of notebook paper on this problem, and I'm getting nowhere slowly. So I need some hints.
-$1.$ What technique would you use to prove this (e.g. induction, direct, counterexample)?
-$2.$ Are there any tricks to the proof? I've seen some crazy stuff pulled out of nowhere when it comes to proofs...
-
-REPLY [12 votes]: You can write $n=m+1$ where $m \geq 0$; then
-$(a+b)^n = (a+b)^{m+1} = (a+b) (a+b)^m = a(a+b)^m +b(a+b)^m \geq a^{m+1} + b^{m+1}$,
-since $(a+b)^m \geq a^m$ and $(a+b)^m \geq b^m$. No induction needed, and this works for any real $n \geq 1$.<|endoftext|>
-TITLE: Counting subsets with r mod 5 elements
-QUESTION [12 upvotes]: Some time ago Qiaochu Yuan asked about counting subsets of a set whose number of elements is divisible by 3 (or 4).
-The story becomes even more interesting if one asks about the number of subsets of an $n$-element set with $r\bmod 5$ elements. Denote this number by, say, $P_n (r \bmod 5)$.
-An experiment shows that for small $n$, $P_n(r \bmod 5)-P_n(r' \bmod 5)$ is always a Fibonacci number (recall that for "$r \bmod 3$" the corresponding difference is always 0 or 1, and for "$r \bmod 2$" they are all 0). It's not hard to prove this statement by induction, but as always an inductive proof explains nothing. Does anybody have a combinatorial proof? (Or maybe some homological proof — I've heard one for the "$r \bmod 3$" case.)
-And is there some theorem about $P_n(r \bmod l)$ for arbitrary $l$ (besides that it satisfies some recurrence relation of degree growing with $l$)?
-
-REPLY [3 votes]: (Sketch of a bijective solution.)
-Recall that binomial coefficients count the numbers of walks with steps $(+1,+1)$ and $(+1,-1)$ from the origin to different points (e.g. the number of walks to the point $(2n,0)$ is $\binom{2n}{n}$). Consider the following involution on the set of all such walks: if the path intersects the line $y=l-1$ or the line $y=-1$, reflect its part starting from the first intersection point (w.r.t. the corresponding line).
-This involution almost gives a bijection between $P_n(r \bmod l)$ and $P_n(-r-2 \bmod l)$ (and moving the strip we can get other correspondences of this kind). But it has some fixed points — namely, paths that lie inside the strip $0\le y\le l-2$ (aka walks on the path graph of length $l-1$ mentioned by Qiaochu Yuan).
-Now, to answer the original question, one only needs to recall that the numbers of such paths for $l=5$ are exactly the Fibonacci numbers.<|endoftext|>
-TITLE: Not quite Fermat's Last Theorem
-QUESTION [20 upvotes]: Prove that the equation $n^a + n^b = n^c$, with $a,b,c,n$ positive integers, has infinitely many solutions if $n=2$, and no solution if $n\ge3$.
-
-REPLY [10 votes]: Wlog $\,a \le b$. Dividing by $n^a$ yields $\,1 + n^{b-a} = n^{c-a}$ $\Rightarrow$ $b=a\ $ (else $\,n\mid1)\,$ $\Rightarrow$ $\, n = 2,\, c = a\!+\!1$.<|endoftext|>
-TITLE: Relative sizes of sets of integers and rationals revisited - how do I make sense of this?
-QUESTION [5 upvotes]: I already asked if there are more rationals than integers here...
-Are there more rational numbers than integers?
-However, there is one particular argument that I didn't give before which I still find compelling...
-Every integer is also a rational. There exist (many) rationals that are not integers. Therefore there are more rationals than integers.
-Obviously, in a sense, I am simply choosing one particular correspondence, so by the definition of set cardinality this argument is irrelevant. But it's still a compelling argument for "size" because it's based on the trivial/identity embedding.
-EDIT please note that the above paragraph indicates that I know about set cardinality and how it is defined, and accept it as a valid "size" definition, but am asking here about something else.
-To put it another way, the set of integers is a proper subset of the set of rationals. It seems strange to claim that the two sets are equal in size when one is a proper subset of the other.
-Is there, for example, some alternative named "size" definition consistent with the partial ordering given by the is-a-proper-subset-of operator?
-EDIT clearly it is reasonable to define such a partial order and evaluate it. And while I've used geometric analogies, clearly this is pure set theory - it depends only on the relevant sets sharing members, not on what the sets represent.
-Helpful answers might include a name (if one exists), perhaps for some abstraction that is consistent with this partial order but defined in cases where the partial order is not. Even an answer like "yes, that's valid, but it isn't named and doesn't lead to any interesting results" may well be correct - but it doesn't make the idea unreasonable.
-Sorry if some of my comments aren't appropriate, but this is pretty frustrating. As I said, it feels like I'm violating some kind of taboo.
-
-EDIT - I was browsing through random stuff when I was reminded this was here, and that I actually ran into an example where "size" clearly can't mean "cardinality" fairly recently (actually a very long time ago and many times since, but I didn't notice the connection until recently).
-The example relates to closures of sets. Please forgive any wrong terminology, but if I have a seed set of $\{0\}$ and an operation $f(x) = x+2$, the closure of that set WRT that operation is the "smallest" set that is closed WRT that operation, meaning that for any member $x$ of the set, $x+2$ must also be a member. So obviously the closure is $\{0, 2, 4, 6, 8, \ldots\}$ - the even non-negative integers.
-However, the cardinality of the set of even non-negative integers is equal to the cardinality of the set of all integers, or even all rationals. So if "smallest" means "least cardinality", the closure isn't well-defined - the set $\{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, \ldots\}$ is no larger than the set $\{0, 2, 4, 6, 8, \ldots\}$.
-Therefore, the meaning of "smallest" WRT set closures refers to some measure of size other than cardinality.
-I'm not adding this as a late answer because it's already covered by the answers below - it's just a particular example that makes sense to me.
-
-Another addition - while skimming the first chapter of a topology textbook in a library some time ago, IIRC I spotted a definition of the set closure which did not use the word "smallest", and made no direct reference to "size". That led me to think maybe the common "definition" of closures I'm familiar with is just a stopgap for those of us who aren't ready for a formally precise definition.
-However, while searching for another source, I instead found this answer to a topology question that uses the word "smallest" in its definition of closure (and "largest" in its "dual definition of interior"). And then I found this answer which describes a concept of size based on a partial ordering of topological embeddings. I think that's another example to add to those in the answers below.
-
-REPLY [3 votes]: Is there, for example, some alternative named "size" definition consistent with the partial ordering given by the is-a-proper-subset-of operator?
-
-There is such an alternative. Maybe two, depending on how you count.
-Some references:
-
-Sets and Their Sizes (my dissertation, 1981) at http://arxiv.org/abs/math/0106100. Offers a general theory of set size that includes a proper-subset principle, trichotomy, and, in fact, all statements true of sizes of finite sets (in a restricted language), and constructs a model over sets of natural numbers that respects the ordering by asymptotic density.
-An article, Measuring the size of infinite collections of natural numbers: Was Cantor's theory of infinite number inevitable? by Paolo Mancosu, that provides a good historical perspective and mentions more recent - and much more extensive - work on developing such a theory by V. Benci, M. Di Nasso, and M. Forti. Available at: http://philpapers.org/rec/MANMTS.
-A critical view of such theories, Set Size and the Part Whole Principle, by Matthew Parker, at philpapers.org as PARSSA-3. (It seems I've run out of link power!)
-
-Fred M. Katz<|endoftext|>
-TITLE: Probability that the convex hull of random points contains sphere's center
-QUESTION [22 upvotes]: What is the probability that the convex hull of $n+2$ random points on an $n$-dimensional sphere contains the sphere's center?
-
-REPLY [15 votes]: This problem is discussed in J. G. Wendel, A Problem in Geometric Probability, Mathematica Scandinavica 11 (1962) 109-111. Wendel showed that the probability that $N$ random points on the surface of the unit sphere in dimension $n$ all lie on one hemisphere is
-$$2^{-N+1}\sum_{k=0}^{n-1} \binom{N-1}{k}.$$
-I've found this here.
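-As a quick empirical check of Wendel's formula in the simplest case (three random points on a circle; note that $n$ in the formula is the ambient dimension, so the circle corresponds to $n=2$), here is a Monte Carlo sketch in Python, assuming numpy:
-
-    import numpy as np
-    from math import comb
-
-    rng = np.random.default_rng(0)
-
-    def all_on_one_semicircle(angles):
-        # points on a circle lie in a common half-plane through the
-        # center iff some gap between consecutive angles exceeds pi
-        a = np.sort(angles)
-        gaps = np.diff(np.append(a, a[0] + 2 * np.pi))
-        return gaps.max() > np.pi
-
-    N, trials = 3, 200_000
-    hits = sum(all_on_one_semicircle(rng.uniform(0, 2 * np.pi, N))
-               for _ in range(trials))
-    print(hits / trials)                                           # ~ 0.75
-    print(2.0 ** (1 - N) * sum(comb(N - 1, k) for k in range(2)))  # 0.75
-
-The three points lie on a common semicircle with probability $3/4$, so the triangle contains the center with probability $1/4$, consistent with the $2^{-n-1}$ of the next answer (the circle is $S^1$, so the question's $n$ is $1$).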
-REPLY [13 votes]: This is one of those old chestnuts that come up again and again.
-To be precise, the probability that the convex hull of $n+2$ points on $S^n$ (the unit sphere in $\mathbb{R}^{n+1}$) contains the origin is $2^{-n-1}$.
-There's a brief argument at Wolfram's mathworld which I don't find entirely convincing but which certainly can be patched to form a convincing argument. In brief, show that for random points $P_1,\ldots,P_{n+2}$ on the sphere, then with probability one, exactly one choice of signs will put the centre in the convex hull of $\pm P_1,\pm P_2,\ldots,\pm P_{n+1}$ and $P_{n+2}$.
-Added (3/8/2010)
-Thanks to Grigory for his comment. Changing the notation slightly, one can show that under some fairly weak hypotheses, if we choose $m+1$ points randomly and independently in $\mathbb{R}^m$ the probability their convex hull contains the origin is $2^{-m}$.
-Take a probability distribution on $\mathbb{R}^m$ and choose a sequence of points (which we identify with vectors) independently from that distribution. Our first condition on this distribution is that $m$ vectors $v_1,\ldots,v_m$ chosen independently from it are linearly independent with probability one. This can fail if say some point occurs with nonzero probability or the distribution lies in a hyperplane through the origin. Assume this condition.
-Now a sequence $v_0,v_1,\ldots,v_m$ of random points chosen according to our distribution is linearly dependent: there are reals $a_i$ not all zero with $\sum_i a_i v_i=0$. By our condition, with probability one, the sequence $(a_0,\ldots,a_m)$ is unique up to constant multiple, and moreover all the $a_i$ are nonzero. So we may assume $a_0=1$ and $a_1,\ldots,a_m$ are nonzero and uniquely determined. Then the convex hull of the $v_i$ contains the origin if and only if all the $a_i$ are positive.
-Now we introduce another condition: that the distribution is centrally symmetric; in detail, the probability that a random vector $v$ lies in a set $A$ equals the probability that $-v$ lies in $A$. A condition like this is clearly necessary; it stops the distribution being supported on a small region far from the origin. This condition shows that all the $2^m$ possibilities of signs for $a_1,\ldots,a_m$ are equiprobable, since changing the sign of some $v_i$ changes the sign of $a_i$.
-To conclude, if our probability distribution on $\mathbb{R}^m$ satisfies these two conditions, the probability that the convex hull of $m+1$ independently chosen points contains the origin is $2^{-m}$.
-These conditions are satisfied by the uniform distribution on a sphere with centre at the origin, but also by many others.<|endoftext|>
-TITLE: Why $PSL_3(\mathbb F_2)\cong PSL_2(\mathbb F_7)$?
-QUESTION [19 upvotes]: Why are the groups $PSL_3(\mathbb{F}_2)$ and $PSL_2(\mathbb{F}_7)$ isomorphic?
-Update. There is a group-theoretic proof (see answer). But is there any geometric proof? Or some proof using octonions, maybe?
-
-REPLY [5 votes]: (A sketch of a proof from http://www.math.vt.edu/people/brown/doc/PSL(2,7)_GL(3,2).pdf by E. Brown and N. Loehr.)
-$PSL_3(\mathbb F_2)$ is the group of automorphisms of $\mathbb F_8$ as a vector space over $\mathbb F_2$. Fix a generator $x$ in $\mathbb F_8^{\times}$. It defines a map $\mathbb P^1(\mathbb F_7)\to\mathbb F_8$: $k\mapsto x^k$ (we define $x^\infty:=0$).
-Now, for $f\in PSL_2(\mathbb F_7)$ the map $x^k\mapsto x^{f(k)}$ is, in general, not $\mathbb F_2$-linear, but the map $x^k\mapsto x^{f(k)}+x^{f(\infty)}$ is$^1$. And the map $T:f\mapsto(x^k\mapsto x^{f(k)}+x^{f(\infty)})$ gives the desired isomorphism $PSL_2(\mathbb F_7)\to PSL_3(\mathbb F_2)$.
-$^1$ There is no conceptual explanation in the paper, but a check for generators of $PSL_2(\mathbb F_7)$ isn't hard --- and a check that $T$ is a homomorphism finishes the proof.<|endoftext|>
-TITLE: Computing stalks: do direct limits behave like limits?
-QUESTION [12 upvotes]: Suppose that $X$ is a topological space with a sheaf of rings $\mathcal{O}_X$. In general, the stalk at a point $p \in X$ is the direct limit of the rings $\mathcal{O}_X(U)$ for all open sets $U$ containing $p$.
-Here are two questions on computing stalks - I think both should be true, since a direct limit should be some sort of "limiting process", but that's far from convincing for me.
-
-Can I compute the stalk of $\mathcal{O}_X$ at a point $p \in X$ by only limiting over basic open sets of $X$ containing $p$?
-Can I compute the stalk of $\mathcal{O}_X$ at a point $p \in X$ by excluding some finite number of "large" open sets around $p$, and then limiting over the remaining open sets around $p$?
-
-REPLY [6 votes]: I think there is a missing word in Akhil Mathew's answer: and it's "filtered".
-You can do that because stalks are filtered colimits (aka "direct limits").
-For filtered colimits $\varinjlim_i X_i$, you can take representatives of elements $x \in \varinjlim_i X_i$: there is some $i$ in the index set $I$ (in our case, the open sets $U$) and some $x_i \in X_i$ that goes to your $x$ through the universal arrow $X_i \longrightarrow \varinjlim_i X_i$. For instance, every element of the stalk $O_{X,p}$ can be represented by a section $f \in O_X(U)$ for some open set $U$.
-But this is not true for other kinds of colimits.
-For instance, take the push-out of two arrows $f: A \longrightarrow B$ and $g: A \longrightarrow C$ in the category of, say, abelian groups. Elements of this push-out $B \oplus_A C$ are classes of pairs $(b,c) \in B\oplus C$, where you quotient out elements of the form $(f(a), 0) - (0, g(a))$, for all $a\in A$. That is, $(f(a),0) = (0,g(a))$ in $B\oplus_A C$.
-Elements of $B\oplus_A C$ cannot be represented, in general, by elements coming from just $B$ or $C$, which are of the form $(b',0)$ or $(0,c')$, respectively: so, for a general $(b,c) \in B\oplus_A C$ there is no $b' \in B$, nor $c'\in C$, that represents it.<|endoftext|>
-TITLE: Why should I care about fields of positive characteristic?
-QUESTION [34 upvotes]: This is what I know about why someone might care about fields of positive characteristic:
-they are useful for number theory
-in algebraic geometry, a theory of "geometry" can be developed over them, and it's fun to see how this geometry works out
-Some people might read this and think, "What more could you need?" But I've never been able to make myself care about number theory, so (1) doesn't help me. (2) is nice for what it is, but I'm hoping there's something more. My understanding of (2) is that this is only geometry in a rather abstract sense and, for instance, there's no generally useful way to directly visually represent these fields or varieties over them the way we can over the reals or complex numbers. (Drawing a curve in R^2 and saying it's the curve over some other field may be helpful for some purposes, but it's not what I'm after here.)
-Is there anything else? The ideal (surely impossible) answer for me would be "Yes, such fields are very good models for these common and easy to understand physical systems: A, B, C. Also, we can visualize them and varieties over them quite easily by method D. Finally, here's a bunch of surprising and helpful applications to 500 other areas of mathematics."
-UPDATE: to answer Qiaochu's comment about what I do care about.
-Let's say I care about:
-algebraic & geometric topology
-differential geometry & topology
-applications to physics
-and I certainly care about algebraic geometry over C
-(this is to say I understand the motivations behind these subjects and the general idea, not necessarily that I know them in depth)
-
-REPLY [6 votes]: The construction of the real field from the rationals using Cauchy sequences can be mimicked to construct other (characteristic zero) complete fields not isometric to the reals. Namely, instead of starting out with the metric defined by the usual (archimedean) absolute value, one can consider the $p$-adic metric ($p$ a fixed prime) and proceed along the same lines. A theorem of Ostrowski says that up to equivalence these are in fact the only ways to complete $\Bbb Q$.
-Being complete, the fields ${\Bbb Q}_p$ thus obtained can be used to develop an analytic theory which is similar to the classical theory "over $\Bbb R$" but has some subtle differences.
-A source of difference is that the fields ${\Bbb Q}_p$ have (unlike $\Bbb R$) a natural subring, the closed ball of radius $1$ centered at $0$, a.k.a. the $p$-adic integers $\Bbb Z_p$, which is a local ring with residue field isomorphic to the field with $p$ elements.
-This is a clue to the fact that "$p$-adic analysis" is deeply intertwined with the theory of finite fields.<|endoftext|>
-TITLE: Algebra of Random Variables?
-QUESTION [14 upvotes]: I've been looking online (and in teaching journals) for a good introduction to Algebras of Random Variables (on an undergraduate level) and their usage, and have come up short. I know I can find the probability distribution of $h(z)$ where:
-\begin{equation*}
-z = x + y,
-\end{equation*}
-if $x$ and $y$ are from known independent probability distributions (the solution is simply a convolution). Two other operations, $z=xy$ and $z=y/x$, can be solved for quite easily as well.
-Does anyone know of any other, more complicated, uses for treating random variables as objects to be manipulated?
-
-REPLY [11 votes]: In the special case that the sample space is the non-negative integers (or a subset thereof), one can think of a probability distribution as a generating function $f(x) = \sum_{n \ge 0} a_n x^n$ where $a_n \ge 0$ and $f(1) = 1$. Then the sum of random variables corresponds to the product of generating functions, so one can bring generating function techniques (see, for example, Wilf) to bear on such random variables. For example, it is particularly easy to compute expected values this way: the expected value is $f'(1)$, and the product rule expresses the fact that expected value is additive. Similarly, the variance is $f''(1) + f'(1) - f'(1)^2$.
-I don't really know a place where these issues are discussed in detail, but one spectacular family of examples is the computation of the expected values and variances of certain statistics on permutations. For example, suppose we want to compute the expected number of fixed points that a permutation of $n$ elements has. By Burnside's lemma, the answer is $1$. But another way to do this computation is to construct the family of polynomials
-$\displaystyle P_n(x) = \frac{1}{n!} \sum_{\pi \in S_n} x^{c_1(\pi)}$
-where $c_1(\pi)$ is the number of fixed points. Then the number we want is $P_n'(1)$.
It turns out we can compute all these numbers at the same time because the bivariate generating function is
-$\displaystyle P(x, y) = \sum_{n \ge 0} P_n(x) y^n = \frac{1}{1 - y} \exp \left( xy - y \right).$
-Then the family of numbers we want is $\frac{\partial}{\partial x} P(x, y)$ evaluated at $x = 1$, which (as it is not hard to verify) is $\frac{y}{1 - y}$: the expected number of fixed points is $1$ for every $n \ge 1$. The same pattern holds for the second and all higher derivatives as well (the $k$-th partial derivative at $x = 1$ is $\frac{y^k}{1 - y}$, so every factorial moment equals $1$ once $n \ge k$); in particular, the variance of the number of fixed points is also $1$.
-What if we want to know the expected number and variance of, say, the total number of cycles? Now we want to look at the family of polynomials
-$\displaystyle Q_n(x) = \frac{1}{n!} \sum_{\pi \in S_n} x^{c(\pi)}$
-where $c(\pi)$ is the total number of cycles. Then the number we want is $Q_n'(1)$. Now it turns out that the bivariate generating function is
-$\displaystyle Q(x, y) = \sum_{n \ge 0} Q_n(x) y^n = \frac{1}{(1 - y)^x}$
-(which should be interpreted as $\exp \left( x \log \frac{1}{1 - y} \right)$). The partial derivative $\frac{\partial}{\partial x} Q(x, y)$ evaluated at $x = 1$ is now
-$\displaystyle \frac{1}{1 - y} \log \frac{1}{1 - y} = \sum_{n \ge 1} H_n y^n$
-where $H_n$ is the $n^{th}$ harmonic number. Thus the expected number of cycles of a permutation of $n$ elements is about $\log n$. (It actually turns out that the expected number of cycles of length $r$ is $\frac{1}{r}$, from which this result immediately follows.) The second partial derivative evaluated at $x = 1$ is
-$\displaystyle \frac{1}{1 - y} \log^2 \frac{1}{1 - y} = \sum_{n \ge 1} G_n y^n$
-where $\displaystyle G_n = \sum_{k=1}^{n-1} \frac{1}{k} H_{n-k}$; I'm not sure of the asymptotic growth of this sequence, but whatever it is, the variance of the total number of cycles is $G_n + H_n - H_n^2$. (In any case $G_n \le H_n^2$, so the variance is less than or equal to $H_n$, and this is probably about right asymptotically.) One can deduce asymptotics for these kinds of sequences using methods such as those in Flajolet and Sedgewick's Analytic Combinatorics, which is my best guess for more examples of using generating functions in this way. There are probably examples there related to statistics of trees.
-All the generating function identities I used above are a consequence of the exponential formula, one version of which is proven and discussed in this blog post.
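-As a quick empirical check of these permutation statistics (a sketch in Python; brute force over $S_n$ is fine up to $n$ around $8$):
-
-    from itertools import permutations
-
-    def cycle_count(p):
-        # number of cycles of the permutation p of {0, ..., n-1}
-        seen, cycles = set(), 0
-        for i in range(len(p)):
-            if i not in seen:
-                cycles += 1
-                j = i
-                while j not in seen:
-                    seen.add(j)
-                    j = p[j]
-        return cycles
-
-    n = 6
-    perms = list(permutations(range(n)))
-    fix = [sum(1 for i in range(n) if p[i] == i) for p in perms]
-    cyc = [cycle_count(p) for p in perms]
-
-    print(sum(fix) / len(perms))                     # 1.0 (mean fixed points)
-    print(sum(f * f for f in fix) / len(perms) - 1)  # 1.0 (their variance)
-    print(sum(cyc) / len(perms))                     # 2.45 (mean cycle count)
-
-The printed values match the generating-function computations above: mean and variance $1$ for the number of fixed points, and $H_6 = 1 + \frac12 + \cdots + \frac16 = 2.45$ expected cycles.<|endoftext|>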
-TITLE: Which continuous functions are polynomials?
-QUESTION [14 upvotes]: Suppose $f \in C(\mathbb{R}^n)$, the space of continuous $\mathbb{R}$-valued functions on $\mathbb{R}^n$. Are there conditions on $f$ that guarantee it is the pullback of a polynomial under some homeomorphism? That is, when can I find $\phi:\mathbb{R}^n \to \mathbb{R}^n$ such that $f \circ \phi \in \mathbb{R}[x_1,\ldots, x_n]$? I have tried playing around with the implicit function theorem but haven't gotten far. It feels like I may be missing something very obvious.
-Some related questions:
-A necessary condition in the case of $n = 1$ is that $f$ cannot attain the same value infinitely many times (since a polynomial has only finitely many roots). Is this sufficient?
-What if we replace $\mathbb{R}$ by $\mathbb{C}$?
-What if we look at smooth functions instead?
-What about the complex analytic case?
-
-REPLY [2 votes]: Since I can't leave comments I'm writing this here. I think this question is made difficult by the condition that $\phi$ is just required to be a homeomorphism versus, say, a diffeomorphism.
-In the case $n = 1$ you can certainly come up with continuous functions that are not differentiable on a discrete set but can be pulled back to yield a polynomial. As a baby example, consider the function $f$ that is $\sqrt{x}$ on the positive reals and $x$ on the negative reals. Consider the homeomorphism that is $x^2$ on the positive reals and $x$ on the negative reals; then $f$ pulls back to the polynomial $x$.
-I don't think it's sufficient that $f$ doesn't attain the same value infinitely many times. I don't have a counterexample, but I think a candidate might be contained in this article.$^1$ The gist is that there are functions everywhere continuous and strictly monotonic but with derivative 0 almost everywhere.
-I think you'd have more luck using the implicit function theorem if you required $\phi$ to be a diffeomorphism. Also I believe it's true that 'most' continuous functions from $\mathbb{R} \to \mathbb{R}$ are not very nice (nowhere differentiable), so a more tractable question might be the same question but requiring $f$ to be smooth.
-If you replace $\mathbb{R}$ with $\mathbb{C}$ and impose that $f$ and $\phi$ both be holomorphic, then I think it suffices that $f^{(n)}$ vanish for all sufficiently large $n$, because you can recover $f$ from its Taylor series.
-$^1$Hisashi Okamoto and Marcus Wunsch, "A geometric construction of continuous, strictly increasing singular functions," Proc. Japan Acad. Ser. A Math. Sci. 83 (7), 114-118, July 2007. https://doi.org/10.3792/pjaa.83.114<|endoftext|>
-TITLE: Number of finite simple groups of given order is at most $2$ - is a classification-free proof possible?
-QUESTION [20 upvotes]: This Wikipedia article states that the isomorphism type of a finite simple group is determined by its order, except that:
-$L_4(2)$ and $L_3(4)$ both have order $20160$
-$O_{2n+1}(q)$ and $S_{2n}(q)$ have the same order for $q$ odd, $n > 2$
-I think this means that for each integer $g$, there are $0$, $1$ or $2$ simple groups of order $g$.
-Do we need the full strength of the Classification of Finite Simple Groups to prove this, or is there a simpler way of proving it?
-
-REPLY [5 votes]: There are many mathematicians outside finite group theory who have asked whether important infinite fragments of the classification were possible without the entire classification. I believe the favorite question has always been: Can you prove the finiteness of the sporadics without the full classification?
-There is a good chance that third generation proof technology will reduce the entanglement between different portions of the classification, because one knows the unipotent primes during earlier arguments where current methods only reveal semi-simple structure. There has been one remarkable success in this direction:
-Theorem (Altinel, Borovik, Cherlin). A simple group of finite Morley rank containing an infinite elementary abelian 2-group is a Chevalley group over an algebraically closed field of characteristic 2.
-There is no known proof that simple groups of finite Morley rank even have an involution, much less that groups with odd-characteristic-looking Sylow 2-subgroups are also algebraic. In consequence, there is a conjecture by Borovik that basically proposes one might classify finite simple groups whose 2-rank vastly exceeds the $p$-rank for every other prime $p$.
-The final proof of [ABC] weighs in around 500 pages, but any finite analog would require many thousands of pages to deal with issues like twisted groups of Lie type and alternating groups, even assuming you find some trick for avoiding all the sporadics.
-In short, there are an awful lot of interesting results that depend upon the full CFSG for the foreseeable future, because only funky asymptotic fragments look even vaguely realistic as stand-alone results, and even those look extremely difficult.<|endoftext|>
-TITLE: Solve an equation with linear and exponential functions, $x=10^{x/10}$
-QUESTION [10 upvotes]: How can I solve this equation?
-$$
-x = 10^{x/10}
-$$
-
-REPLY [8 votes]: You can study and graph the two functions $y = x$ and $y = 10^{x/10}$.
-
-From the graph you can see that there are only two solutions.
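-To pin the two solutions down numerically, here is a small bisection sketch in Python (no special libraries; the bracketing intervals are read off the graph):
-
-    def f(x):
-        return x - 10 ** (x / 10)
-
-    def bisect(lo, hi, tol=1e-12):
-        while hi - lo > tol:
-            mid = (lo + hi) / 2
-            if f(lo) * f(mid) <= 0:
-                hi = mid
-            else:
-                lo = mid
-        return (lo + hi) / 2
-
-    print(bisect(1.0, 5.0))    # 1.37128857...
-    print(bisect(5.0, 20.0))   # 10.0, exact since 10^(10/10) = 10
-
-In closed form the two solutions are $x = -\frac{10}{\ln 10}\,W\!\left(-\frac{\ln 10}{10}\right)$ for the two real branches of the Lambert $W$ function.<|endoftext|>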
-Notice that the 3-adic relation '_____and_____sit to either side of_____on the sofa' also shares the commutative property.<|endoftext|> -TITLE: Intutive explanation of the PCP Theorem -QUESTION [10 upvotes]: The PCP theorem states that: - -Every decision problem in NP has - probabilistically checkable proofs of - constant query complexity and - logarithmic randomness complexity. - -Can anyone give an intuitive explanation of how this can be done? -Links - -This is a follow up from this -question. -There is a proof here. - -REPLY [8 votes]: A detailed explanation can be found in many places. I'll to provide an intuitive one. -By Cook-Levin theorem, the Boolean satisfiability problem is NP-complete - ie. every decision problem in NP can be reduced to it. We will ask of our prover to supply the input and output of every gate in the circuit and consider this as the proof that $x\in L$. -If we were to stop there, then the prover could cheat by changing a single bit in his proof, which would require us to check a large (non-constant) number of bits in it. So we must ask something more from the prover. This something more will be the encoding of his proof using some (special) error-correcting code. Intuitively, this "smudges" the false bits onto a large number of bits in the code-word. -The verifier receives a word from the prover, there are 2 ways in which the prover might try and cheat, this word might not be a code-word in the chosen code, or it might be the encoding of something which isn't a proof. We'll examine the (slightly different) separation of cases of the word either being far from every codeword (a large number of bits must be changed to reach a codeword) or that it is close to codeword which isn't an encoding of a proof. -To be able to detect these we demand that our code has the following 2 properties: -1) locally checkable - we can, by reading a constant number of bits, detect w.h.p if a word is far from any codeword. -2) locally decodable - we can, by reading a constant number of bits, decode w.h.p a bit from the encoded word. -(finding such codes is hard, hadamard has this properties but the code-word's size is exponential, RM codes have these properties as well (to a lesser degree) and they are the ones generally used. -So the verifier checks if the word is close to being a code-word (using 1), and if the test succeeds, picks a random gate and decodes it's outputs and output (using 2). this is sufficient to achieve constant query complexity and logarithmic randomness. -It should be mentioned that (Dinur) has a combinatorial proof of PCP which is quite different from what i've discribed, but her version is (for me, at least) less intuitive.<|endoftext|> -TITLE: Recreating an Integer Sequence After Convolution -QUESTION [5 upvotes]: ...and encoding it as a probability distribution. -Suppose we have a sequence of non-negative integers that is periodic with period $N$: -\begin{equation*} -A_{1},A_{2},...,A_{N},A_{1}... -\end{equation*} -Each $A_{k}$ takes on a value no greater than some constant $B$: -\begin{equation*} -0 \leq A_{k} \leq B -\end{equation*} -We then take this sequence and do a simple convolution, for some constant $L > 0$ and $1 \leq n \leq N$: -\begin{equation*} -S_{L}(n) = A_{n} + A_{n+1} +...+ A_{n+L-1}. -\end{equation*} -From $S_{L}(n)$ we then form a probability distribution $P(n)$ which gives the frequency of each of its values. Let $e_{j}(k) = 1$ if $j = k$ and $0$ otherwise. 
Then:
-\begin{equation*}
-P(n) = (e_{n}(S_{L}(1)) + e_{n}(S_{L}(2)) +...+ e_{n}(S_{L}(N))) / N.
-\end{equation*}
-What I would like to find out is the extent to which this process can be reversed. I have two data points:
-1) I know (pretty much) everything about the probability distribution $P(n)$: the distribution itself, its mean, range, variance, skewness, kurtosis, etc.
-2) I can tell you the frequency of values of $A_{k}$ in one period, so that if the sequence is 1,0,2,3,1,0, I can tell you there are two 0's, two 1's, one 2, and one 3.
-To what extent am I able to reconstruct the sequence $A_{k}$ from these two data points?
-
-REPLY [2 votes]: No, you cannot recover the sequence $A_k$. As a trivial example, note that any cyclic permutation of $(A_1, A_2, \dots, A_N)$ would result in the same distribution (and frequencies). But since this is a periodic sequence, you probably don't care about distinguishing between cyclic permutations of the same sequence, so here's another example.
-Consider $L=1$. Then your probability distribution is just equivalent to the frequencies, so any permutation of the $A_k$s would give the same distribution. If $L=1$ is too degenerate, here's another example with $L=2$.
-Say $L=2$ and $N=10$. Then both sequences $(1, 1, 0, 0, 0, 1, 1, 0, 0, 0)$ and $(1, 1, 0, 0, 0, 0, 1, 1, 0, 0)$ would have the same distribution of sums $S_L$: 2 twice, 1 four times, and 0 four times. You can easily extend this example to any $L$.<|endoftext|>
-TITLE: Why can a Venn diagram for $4+$ sets not be constructed using circles?
-QUESTION [210 upvotes]: This page gives a few examples of Venn diagrams for $4$ sets.
-Thinking about it for a little while, it is impossible to partition the plane into the $16$ segments required for a complete $4$-set Venn diagram using only circles as we could do for $<4$ sets. Yet it is doable with ellipses or rectangles, so we don't require non-convex shapes as Edwards uses.
-So what properties of a shape determine its suitability for $n$-set Venn diagrams? Specifically, why are circles not good enough for the case $n=4$?
-
-REPLY [2 votes]: I'm quite late to the party, but I'd like to add a very simple but much less rigorous answer for variety.
-Each region of a Venn diagram must represent some combination of true or false for each category. For instance, the two-circle Venn diagram has 4 regions: the intersection, the outside, and the last two circle parts. The intersection contains everything true for both categories, the outside neither category, and the last two only one of the two categories. (You could list them as TT, TF, FT, FF.)
-In order to fully cover all cases of true/false combinations, we need $2^n$ sections. Each new category we add doubles the number of regions, as it must split each existing region into 2 smaller regions. (One for the true case of the new category, the other for false, so the TT region is now split into TTT and TTF.)
-We can see how the third circle intersects all 4 regions, but there's no way to draw a 4th circle that intersects all 8 of the new regions, making it impossible to draw a 4-circle Venn diagram.
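-The "no way to draw a 4th circle" step can be made precise by counting: two distinct circles intersect in at most $2$ points, so a new circle is cut by the $k$ circles already present into at most $2k$ arcs, and each arc splits at most one existing region in two. Hence $n$ circles bound at most $n(n-1)+2$ regions. A tiny loop shows where this falls behind the required $2^n$ (a sketch in Python):
-
-    for n in range(1, 7):
-        max_regions = n * (n - 1) + 2   # best possible with n circles
-        needed = 2 ** n                 # regions a Venn diagram requires
-        print(n, max_regions, needed, max_regions >= needed)
-
-The comparison fails from $n = 4$ onward ($14 < 16$), which is exactly why circles stop working there.<|endoftext|>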
-TITLE: Proper Measurable subgroups of $\mathbb R$
-QUESTION [6 upvotes]: If $(\mathbb{R},+)$ is a group and $H$ is a proper subgroup of $\mathbb{R}$, prove that $H$ is of measure zero.
-
-REPLY [6 votes]: As was noted by Jason DeVito, if $H$ is measurable then the measure of $H$ is $0$.
-On the other hand, if we assume that the axiom of choice holds, it is possible that $H$ is not measurable. The proof is quite simple. $\mathbb{R}$ is a vector space over $\mathbb{Q}$, so it has a basis. Suppose $e$ is one of its elements and $H$ is the subspace of $\mathbb{R}$ generated by the others. Then $H$ is a subgroup of $(\mathbb{R}, +)$.
-Lemma: $H$ is not measurable.
-Suppose $H$ is measurable. Then, as was noted above, its measure is $m(H)=0$. Then every set of the form $H+q e = \{h+qe,\ h\in H\}$, where $q\in \mathbb{Q}$, has measure 0 (because it is just a shift of $H$). But $\mathbb{R}$ is equal to a union of countably many sets of measure 0: $\mathbb{R} = \cup_{q\in\mathbb{Q}}(H+qe)$. Therefore $m(\mathbb{R})=0$. We have come to a contradiction.
-There is also a simple proof of the fact that if $H$ is measurable, then $H$ is of measure 0.
-Lemma: If $H$ is a measurable proper subgroup of $\mathbb{R}$, then $m(H)=0$.
-If $H=\{0\}$ then the proposition of the lemma is obvious. Otherwise we can find a positive element $z$ in $H$. Let $H_0 = H\cap [0,z)$. If $m(H_0)=0$ then $m(H)=0$, since $H = \cup_{n\in\mathbb{Z}}(H_0+nz)$. Otherwise $m(H_0)=\delta>0$. Take an integer $N$ such that $\delta N > z+1$ (we will see why later).
-Note that if $x$ is not in $H$, then $x/n$ (for every positive integer $n$) and $-x$ are also not in $H$. Therefore, using the fact that $H$ is proper, we can find a positive $x<1$ such that $x\notin H$. Let $y=x/N!$. Then for $n=1,\dots,N$ the number $ny$ obeys the following properties:
-1. $1>ny>0$.
-2. $ny$ is not in $H$.
-Then the sets $H_0, H_0+y, \dots, H_0+(N-1)y$ are disjoint subsets of $[0, 1+z)$ (if two of them met, some difference $(j-i)y$ with $1 \leq j-i \leq N-1$ would lie in $H$). Therefore,
-$\displaystyle 1+z = m\Big( [0, 1+z) \Big) \geq m\Bigg(\bigcup_{n=0}^{N-1} (H_0 + n y)\Bigg) = N \delta.$ Here we have a contradiction with the definition of $N$.<|endoftext|>
-TITLE: Definition of a set
-QUESTION [10 upvotes]: What is a set? I know that results such as Russell's paradox mean that the definition isn't as straightforward as one might expect.
-
-REPLY [7 votes]: In very naive set theory (say in the late 19th century), a set was taken to be an arbitrary collection of objects. The difficulty is in telling which things that seem like they should be collections actually are well-defined collections. The paradoxes show that it's unclear whether this concept of set is coherent, although it is the natural-language meaning of the word "set".
-Because the concept seems poorly defined, almost all contemporary set theory deals with a more restrictive notion: pure, well-founded sets. These are the sets that can be constructed starting with the empty set and taking powersets and subsets. The only elements of these sets are other sets.
-These sets are defined in stages. At the first stage, you only have the empty set. At every larger stage, you add the powerset of every set that has already been constructed. There is one stage for every ordinal number, and the collection of all sets available after stage $\alpha$ is named $V_\alpha$. Symbolically, we have $V_0 = \emptyset$ and, in general,
-$$
-V_\alpha = \bigcup_{\beta < \alpha} P(V_\beta)
-$$
-For each ordinal $\alpha$, $V_\alpha$ is a set. The union $V = \bigcup_\alpha V_\alpha$ is a proper class. The sequence $( V_\alpha )$, indexed by ordinals, is known as the cumulative hierarchy.
-The "sets" that mathematicians study and that are formalized in Zermelo-Fraenkel set theory are exactly the sets in $V$. Moreover, the formalization of mathematics into set theory does not require any other sets than those in $V$ (which is why essentially all mathematics can be formalized into ZFC).
-So, for all practical mathematical purposes (outside of set theory), the answer to "what is a set" is "a set is an element of $V$".<|endoftext|>
-TITLE: Visualising $\mathbb CP^2$: a problem of attaching cells with a dimension gap >1
-QUESTION [13 upvotes]: For the uninitiated
-Morse theory, like many other early algebraic-topology widgets, leads to a picture of smooth manifolds as being built up from 'cells', copies of $\mathbb{D}^n$ for varying $n$, 'glued' to each other by the usual topological tools, giving rise to (in some sense) a more natural picture of homology as 'coming from' cellular homology.
-Example
-As an example, consider the torus $\mathbb{T}^2$: we begin with empty space, attach a 0-cell ($\mathbb{D}^0=$ a point), attach a 1-cell ($\mathbb{D}^1=$ a line) to your point (both ends of the line are attached to the point, creating a circle), attach another 1-cell (in the same way, to the same point, creating a sort of figure 8).
-The hardest bit to visualise is next: attaching a 2-cell ($\mathbb{D}^2=$ a disk, which we will think of as its homeomorphism equivalent, a square). Begin by twisting your figure 8 so that one circle is in the xy-plane, the other in the xz-plane; now attach the top and bottom of your square (coloured red in the picture below) to the xy circle (creating a 'curling round' tube) and the left and right edges (coloured blue below) of your square (now a tube) to either side of the xz circle, completing the torus.
-
-Problem
-The above takes some thinking, but a little reading around shows that this is fairly easy to see. What makes it so easy is that the cells we are attaching are of adjacent dimensions, that is, we may easily identify the boundary of one with the entirety of another. Where it gets harder to visualise is when the dimensions of the cells we are attaching to one another differ by more than 1 - the canonical example of this is the complex projective plane $\mathbb{CP}^2$, a 4-manifold built by attaching a disk to a point (making a sphere) and then attaching a 4-ball to that sphere.
-The latter attaching map (wherein points are identified with their images), I know, may be thought of as the Hopf fibration $\partial \mathbb{D}^4=S^3 \to \mathbb{CP}^1=S^2 $, but I have no way of visualising this, particularly with regard to the interior of the 4-disk.
-
-How does a 4-cell wrap around a 2-cell without producing a singularity of some kind? Is this analogous in some sense to Dehn surgery, in which one uses a thickening? Is there a right way to think about this or can it only really be thought of 'intellectually'?
-
-REPLY [5 votes]: Tom, I'm not sure I see how it's getting any harder in passing from a torus to a projective space. In your $\mathbb CP^2$ case, you have $\mathbb CP^1$ sitting inside of it, and the boundary of a regular (tubular) neighbourhood of the $\mathbb CP^1$ is $S^3$. And $D^4$ has $S^3$ as its boundary, so the attaching map is tautological. The normal bundle is the missing data and that's what your CW-decomposition is ignoring.
-This is essentially what always happens. Perhaps the conceptual hump you're dealing with is that you're asking for CW-decompositions of manifolds. Morse functions generically only build homotopy-equivalences to CW-complexes; they do not put CW-structures on the manifold without some significant work. Moreover, CW-decompositions ignore some of the most essential properties of the manifold, like smooth structures.
-If instead you work with handle decompositions, what I state in my first paragraph is basically a generality -- critical points amount to handle attachments, and the gluing instructions are always given in a direct way from the flow lines of the Morse function's (suitably normalized) gradient. So the handle decomposition is on the given manifold -- unlike the CW-case, where you only have a homotopy-equivalence to a CW-complex.<|endoftext|>
-TITLE: Need faster division technique for $4$ digit numbers.
-QUESTION [5 upvotes]: I have to divide $2860$ by $3186$. The question allows only $2$ minutes, and this division is only half of the question. I can't possibly do the division in $2$ minutes or less by applying traditional methods.
-Can anyone perform the division below using a faster technique?
-$2860/3186$
-Thanks for reading, hoping to get some answers. :)
-This is a multiple choice question, with answers $6/7$, $7/8$, $8/9$, and $9/10$.
-
-REPLY [2 votes]: This isn't so much a math answer as a test-taking answer: You don't have to compute the fraction, you just have to determine which is the right answer.
-
-2860/3186
-This is a multiple choice question, with answers 6/7, 7/8, 8/9 and 9/10.
-
-If you reduce the fraction to get $a/b$, then $b$ must be a divisor of $3186$. This allows you to immediately eliminate some choices. It can't be $9/10$ because $10$ doesn't divide $3186$. You can quickly check that $7$ and $8$ don't divide $3186$, but $9$ does, so the only one of the choices that has a shot at being the correct answer is $8/9$.
-Incidentally, none of those answers is correct; it's not $8/9$ either. The furthest you can reduce the fraction is $1430/1593$, so you must have an error in your question. Either the fraction is wrong or you're supposed to find the best approximation rather than the actual value.<|endoftext|>
-TITLE: Why is Euler's Gamma function the "best" extension of the factorial function to the reals?
-QUESTION [174 upvotes]: There are lots (an infinitude) of smooth functions that coincide with $f(n)=n!$ on the integers. Is there a simple reason why Euler's Gamma function $\Gamma (z) = \int_0^\infty t^{z-1} e^{-t} dt$ is the "best"? In particular, I'm looking for reasons that I can explain to first-year calculus students.
-
-REPLY [6 votes]: For me, another argument is the convincing one.
-Consider the log of the factorial resp. the gamma function; for integer arguments this is a sum of logarithms of integers. Now, for the interpolation of sums to fractional indices (which is required to extend the gamma function to noninteger arguments) there exists the concept of "indefinite summation", and the operator for that indefinite summation can be expressed by a power series. We find that the power series for the log of the Eulerian gamma function matches exactly that of the operator for the indefinite summation of the sum of logarithms.
-I've seen this argument elsewhere; I thought it had been here at MSE before (by the user "anixx"), but maybe it is at MO; I'm not aware of specific literature at the moment, but I've put that heuristic in a small amateurish article on my website; in essence the representation of that indefinite summation is fairly elementary and should exist in older mathematical articles. See "uncompleting the gamma", p. 13, if that seems interesting.
-Conclusion: the Eulerian gamma function is "the correct one", because it is coherent with the indefinite summation formula for the sums of consecutive logarithms.<|endoftext|>
-TITLE: G a finite group, M a maximal subgroup; M abelian implies G solvable?
-QUESTION [14 upvotes]: Here is a classic theorem of Herstein:
-$G$ is a finite group, $M$ a maximal subgroup, which is abelian. Then $G$ is solvable.
-The proof is pretty easy, but it uses character theory (specifically, Frobenius' theorem on Frobenius groups). Is there a character-theory-free proof?
-To get things going, note that we can reduce to the case where:
-i) $M$ is core-free and a Hall subgroup of $G$;
-ii) $Z(G)=1$.
-Steve
-
-REPLY [5 votes]: You can apply Burnside's normal $p$-complement theorem to get a normal complement $N$ of $M$.
-Then take an element $m$ of $M$ with prime order.
-Case 1: The centralizer $C_N(m)$ of $m$ in $N$ is nontrivial.
-As $M$ centralizes $m$, it acts on the fixed points of $m$ (in the action by conjugation on $N$), i.e., $C_N(m)$ is an $M$-invariant subgroup of $N$. If $C_N(m) = N$, then $m \in Z(G)$, contradicting ii). Otherwise $M < C_N(m)M < G$, contradicting the maximality of $M$.
-Case 2: $N$ admits the fixed-point-free automorphism $m$ of prime order.
-By Thompson's thesis $N$ is nilpotent, hence $G$ is solvable.<|endoftext|>
-TITLE: Example of non-isomorphic structures which are elementarily equivalent
-QUESTION [21 upvotes]: I just started learning model theory on my own, and I was wondering if there are any interesting examples of two structures of a language L which are not isomorphic, but are elementarily equivalent (this means that any L-sentence satisfied by one of them is satisfied by the second).
-I am using the notation of David Marker's book "Model theory: an introduction".
-
-REPLY [47 votes]: First, I'm glad you are reading my book! :)
-Let me make a couple of comments on Pete's answer--this is my first time here and I don't see how to leave comments.
-
-Any two dense linear orders without endpoints are elementarily equivalent.
-In particular $(Q,<)$ and $(R,<)$ are elementarily equivalent. So there is no first order way of expressing the completeness of the reals.
-
-Any two algebraically closed fields of the same characteristic are elementarily equivalent. So the field of algebraic numbers is elementarily equivalent to the field of complex numbers. This means you can prove first order things about the algebraic numbers using complex analysis, or about the complex numbers using Galois theory or countability.
-
-Similarly the real field is elementarily equivalent to the field of real algebraic numbers or to the field of real Puiseux series. One can for example use the Puiseux series to prove asymptotic properties of semialgebraic functions.
-
-Finally, Pete's comment 5) about infinite models of the theory of finite fields being elementarily equivalent isn't quite right. This is only true if the relative algebraic closures of the prime fields are isomorphic.
-For example,
-a) take an ultraproduct of finite fields $F_q$ with respect to an ultrafilter on the prime powers: if the ultrafilter contains $\{2,4,8,\ldots\}$ then the resulting model has characteristic $2$, while if the ultrafilter contains the set of primes, then the ultraproduct has characteristic $0$.
-b) if the ultrafilter contains the set of primes congruent to $1 \bmod 4$, then in the ultraproduct $-1$ is a square, while if the ultrafilter contains the set of primes congruent to $3 \bmod 4$, then in the ultraproduct $-1$ is not a square.<|endoftext|>
-TITLE: '(Pseudo)-random functions' by seeding of PRNGs?
-QUESTION [5 upvotes]: I have an application that wants controllable random functions from $\mathbb{Z}^2$ and $\mathbb{Z}^3$ to $\{0,\ldots,2^{32}-1\}$, where by controllable I basically mean seedable by some parameters (say, on the order of 3 to 5 32-bit integers) such that the same seeds will always produce the same functions. The most obvious way of doing this (for the two-dimensional case, say) would seem to be computing the value at some point $(x,y)$ by using $x$, $y$, and the seed parameters as seeds for something like an LFSR generator or a Mersenne Twister, then running the RNG for some fixed number of steps and taking the resultant value as the value of the function at that point.
-My question is, how can I be certain that this procedure won't keep too much correlation between adjacent 'seed points', and is there either a straightforward analysis or even just some general guideline for how many iterations would be necessary to eliminate that correlation? My first back-of-the-envelope guess would be that each iteration roughly doubles the decorrelation between given seed values, so that 32 iterations would be necessary to achieve the requisite decorrelation over a range of $2^{32}$ values (and in practice I'd probably double it to 64 iterations), but that's strictly a guess and any proper analysis would be welcome!
-Edited for clarification: To further outline the issue, I may be sampling this random function $f$ (for some given seed parameters) at arbitrary values, and need those samples to be identical between passes; so for instance, if a first application computes $f(0, 0)$, $f(437, 61)$, $f(-23, 129)$, and then $f(5,3)$, and a second (potentially concurrent) application computes $f(1,0)$ and then $f(5,3)$, both passes need to find the same value of $f$ at $(5,3)$. I may also be sampling $f$ at arbitrary points, so I'd like the evaluation to take constant time (and in particular, evaluating $f(x,y)$ shouldn't take time linear in $x+y$).
-
-REPLY [2 votes]: Note that when an LFSR is used to generate a sequence of $+1/-1$ values, the correlation between that sequence and any time-shifted version of that sequence is near zero: $\sum_{i=0}^{N-1} a[i]\,a[i+k]$ is either $N$ if $k$ is a multiple of $N$, or $-1$ otherwise.
-The initial state of an LFSR has a one-to-one mapping to the time shift -- this is analogous to a discrete logarithm problem in Galois fields; given a time shift $k$, it's easy to find the initial state of an LFSR ($\mathcal{O}(\log k)$, I think, as it's just computing exponentiation in the appropriate Galois field), but given the initial state of an LFSR, it's hard to find the time shift $k$ in anything shorter than $\mathcal{O}(k)$. So your PRNG could be an LFSR with a very long period, e.g. 64 bits or greater, and the "seed" could be a time shift used to derive a different initial state.
-As for the $2$- or $3$- or $k$-dimensional mapping, interleave the bits of the coordinates, and use a hash function to map those bits to an integer for use in deriving LFSR seeds.
-I'm not sure what kind of correlation you're looking to avoid between functions (e.g. correlation in a statistical sense, or cryptographic independence, e.g. one PRNG cannot be used to predict future output of another).
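-To make the last suggestion concrete, here is one constant-time construction along these lines (my sketch, not part of the answer above): Morton-interleave the coordinate bits, fold in the seed words, and push the result through a strong 64-bit bit mixer such as the SplitMix64 finalizer.
-
-    MASK64 = (1 << 64) - 1
-
-    def interleave(x, y, bits=32):
-        # Morton-interleave the low `bits` bits of x and y
-        z = 0
-        for i in range(bits):
-            z |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
-        return z
-
-    def mix64(z):
-        # SplitMix64 finalizer: a well-studied 64-bit mixing function
-        z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9 & MASK64
-        z = (z ^ (z >> 27)) * 0x94D049BB133111EB & MASK64
-        return z ^ (z >> 31)
-
-    def f(x, y, *seeds):
-        z = interleave(x & 0xFFFFFFFF, y & 0xFFFFFFFF)
-        for s in seeds:                      # fold in the seed parameters
-            z = mix64(z ^ (s & MASK64))
-        return mix64(z) & 0xFFFFFFFF         # a 32-bit value
-
-Evaluation is $O(1)$ per point, deterministic across passes for the same seeds, and adjacent lattice points give decorrelated outputs because the mixer has good avalanche behavior.<|endoftext|>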
-TITLE: disjoint union of Baire spaces which is a Baire space
-QUESTION [5 upvotes]: Say we have a family $\{A_\alpha\}$ of disjoint Baire spaces. Also suppose that each $A_\alpha$ is disjoint from the closure of the union of the other sets. Show that $\bigcup_{\alpha} A_\alpha$ is a Baire space.
-I think we can prove this by transfinite induction. Suppose the property holds for all $\beta < \alpha$; show it holds for $\alpha$.
-If $\alpha$ is a limit ordinal: we want to show that for every sequence of open dense subsets of $\bigcup_{\beta<\alpha} A_\beta$ their intersection is dense. This sequence of open dense sets in the union is defined as the union of open dense sets in each of the pieces, since they are all Baire spaces and, by the induction hypothesis, the union up to $\beta$ is a Baire space. Suppose this is not the case. Then there exists an open set $U$ such that the intersection of these open dense sets misses $U$. There is some ordinal $\beta$ such that $U$ meets $A_\beta$, but this is a contradiction, since we supposed that the union up to $\beta$ was a Baire space.
-Now how do I handle the successor case? It seems trickier.
-
-REPLY [4 votes]: I'm pretty sure it's simpler and doesn't require transfinite induction or anything nearly as difficult. In this case the $A_\alpha$ are open sets (as the complement of the closure of the union of the others). So suppose $U_1, \dots$ are open dense sets in this union. Then each intersection $U_i \cap A_\alpha$ is open and dense in $A_\alpha$. The countable collection has intersection which is dense in each $A_\alpha$ by the Baire property.
-This means that the countable intersection, call it $F$, is dense in the union $\bigcup A_\alpha$. (Any point in the union belongs to one of the $A_\alpha$, and there is a point of $F \cap A_\alpha$ nearby, hence a point of $F$ nearby.)<|endoftext|>
-TITLE: n-ary version of $\gcd(a,b)\,\operatorname{lcm}(a,b) = ab$
-QUESTION [11 upvotes]: This question was motivated by pondering this lcm identity.
-Consider that $\gcd(1,6,15) = 1$ and $\operatorname{lcm}(1,6,15)=30$, yet $1\cdot 6\cdot 15 = 90$. $(2,6,15)$ shows a similar phenomenon.
-So what is the correct identity for $n$-ary $\gcd$/$\operatorname{lcm}$?
-
-REPLY [9 votes]: Such generalizations of gcd & lcm identities occur frequently as problems. Below are a couple of examples - which include references to prior literature, e.g. to an inclusion-exclusion max/min proof in Polya & Szego's problem book. They're problem E2229 from the Monthly 77 1970 p. 402; problem 1344 from Math. Mag. 64 1990 p. 134.
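-For reference, the three-variable analogue that these problems ask for reads
-$$\operatorname{lcm}(a,b,c)=\frac{abc\,\gcd(a,b,c)}{\gcd(a,b)\,\gcd(b,c)\,\gcd(c,a)},$$
-which follows from inclusion-exclusion on the max/min of prime exponents. A quick check on the examples from the question (a sketch in Python; math.gcd and math.lcm accept several arguments on Python 3.9+):
-
-    from math import gcd, lcm
-
-    def lcm3_via_identity(a, b, c):
-        return a * b * c * gcd(a, b, c) // (gcd(a, b) * gcd(b, c) * gcd(c, a))
-
-    for triple in [(1, 6, 15), (2, 6, 15)]:
-        assert lcm3_via_identity(*triple) == lcm(*triple)   # both give 30
-
-Both triples from the question give $\operatorname{lcm} = 30$, as the identity predicts.<|endoftext|>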
-
-REPLY [3 votes]: This is rather scheme-y, but there's a really nice paper by Kempf (hopefully you have institutional access :() that gives a very basic and elementary proof that the higher cohomology of a quasi-coherent sheaf on an affine scheme is trivial. The first part of the paper uses nothing more than the basic properties (e.g. long exact sequence) of cohomology, and might be fun. I thought it was fun, anyway; it's also nice because it shows that Hartshorne is unnecessarily restrictive in sticking to noetherian affine schemes in chapter III (even if one wants to avoid anything fancy).
-OK, update: here is the proof explained (admittedly by a beginner :)).<|endoftext|>
-TITLE: Non-Linear Transformation
-QUESTION [6 upvotes]: Can someone explain to me in simple terms what a non-linear transformation is in maths?
-I know some single-variable calculus, but I read it has to do with multi-variable calculus, which I'm not familiar with.
-If someone could explain it in simple words, that would be helpful.
-
-REPLY [4 votes]: It's a mapping that fails to satisfy the defining properties of a linear map.
-That is, at least one of the following goes wrong: the transformation does not send the zero vector to the zero vector, it does not respect scalar multiplication ($T(cx) = cT(x)$), or it does not respect addition ($T(x+y) = T(x) + T(y)$).<|endoftext|>
-TITLE: Intuitive explanations for the concepts of divisor and genus
-QUESTION [44 upvotes]: When trying to explain AG-codes to computer scientists, the major points of contention I am faced with are the concepts of divisors, Riemann-Roch space and the genus of a function field. Are there any intuitive explanations for these concepts, preferably explanations that are less dependent on knowledge of algebraic-geometry/topology?
-
-REPLY [16 votes]: If your CS friends are like me, they might still find the answers above a little overwhelming. So you could start as follows:
-First, show them the Wiki's divisor page (it always works!). Then explain to them that by the fundamental theorem of arithmetic, any divisor is just a bunch of prime numbers with multiplicities.
-Next, tell them that, in complete analogy with $\mathbb Z$, the FTA works over $\mathbb C[x]$ (which you can think of as a line). Except that now each "divisor" (polynomial) can be thought of as a bunch of points (roots of the polynomial) on that line, with multiplicities.
-But why stop at a straight line? One can do the same thing for a (reasonably nice) curve on the plane, and a natural way to get a bunch of points is to intersect with another curve. By the way, a bunch of points "divide" the curve, further justifying the terminology!
-Back to $\mathbb C[x]$, you can point out that the total number of points (with mult.) is the degree of your polynomial $p(x)$, or the dimension of the vector space $\mathbb C[x]/(p(x))$.
-At this point, if they still follow you, show them Matt E's splendid answer (-:<|endoftext|>
-TITLE: Is the natural map $L^p(X) \otimes L^p(Y) \to L^p(X \times Y)$ injective?
-QUESTION [11 upvotes]: Let $X,Y$ be $\sigma$-finite measure spaces, and let $L^p(X) \otimes L^p(Y)$ be the algebraic tensor product. The product has a natural map into $L^p(X \times Y)$ which takes $\sum a_{ij} f_i \otimes g_j$ to the function $F(x,y) = \sum a_{ij} f_i(x) g_j(y)$. A moment's thought shows that this map is well-defined. Is it also injective?
-It seems that this should be true, but I can't see how to prove it. Intuitively, one needs to show that if $\sum a_{ij} f_i(x) g_j(y) = 0$ a.e., then one should be able to cancel all the terms in the sum using bilinearity.
It is not quite clear how to do this without knowing anything about the terms.
-
-REPLY [3 votes]: EDIT: Here is a cleaned-up and corrected version of this answer, based on Pierre-Yves' suggestion (thanks!). His answer above contains a much more complete version.
-If $\sum_{i=1}^n a_{i} f_i \otimes g_i$ is not the zero element of $L^p(X) \otimes L^p(Y)$, we may assume without loss of generality that the $f_i$ are linearly independent. We can also assume that $a_1 \ne 0$ and $g_1 \ne 0$.
-Suppose that the corresponding function $F(x,y) = \sum_{i=1}^n a_{i} f_i(x) g_i(y) = 0$ a.e. Since $g_1 \ne 0$, there is a measurable $B \subset Y$ of positive finite measure such that $\int_B g_1 \ne 0$ (the integral is finite by Hölder). Then by Fubini's theorem, for a.e. $x$ we have
-$$
-0 = \int_{B} F(x,y)dy = \sum_{i=1}^n a_i \left(\int_{B} g_i\right) f_i(x).
-$$
-This contradicts the assumed linear independence of the $f_i$.<|endoftext|>
-TITLE: Modular exponentiation using Euler’s theorem
-QUESTION [11 upvotes]: How can I calculate $27^{41}\ \mathrm{mod}\ 77$ as simply as possible?
-I already know that $27^{60}\ \mathrm{mod}\ 77 = 1$ because of Euler’s theorem:
-$$ a^{\phi(n)}\ \mathrm{mod}\ n = 1 $$
-and
-$$ \phi(77) = \phi(7 \cdot 11) = (7-1) \cdot (11-1) = 60 $$
-I also know from using modular exponentiation that $27^{10}\ \mathrm{mod}\ 77 = 1$ and thus
-$$ 27^{41}\ \mathrm{mod}\ 77 = 27^{10} \cdot 27^{10} \cdot 27^{10} \cdot 27^{10} \cdot 27^{1}\ \mathrm{mod}\ 77 = 1 \cdot 1 \cdot 1 \cdot 1 \cdot 27 = 27 $$
-But can I derive the result of $27^{41}\ \mathrm{mod}\ 77$ using $27^{60}\ \mathrm{mod}\ 77 = 1$ somehow?
-
-REPLY [3 votes]: By little Fermat: $\; 6,10\:|\:120\ \Rightarrow\ 3^{120} \equiv 1 \pmod{7, 11}\ \Rightarrow\ 3^{123} \equiv 3^3 \pmod{77}$
-See also these Fermat-Euler-Carmichael generalizations of little Fermat-Euler from my sci.math post on Apr 10 2009.
-THEOREM 1 $\ $ For naturals $\rm\: a,e,n\: $ with $\rm\: e,n>1 $
-$\rm\qquad\qquad\ n\ |\ a^e-a\ $ for all $\rm\:a\ \iff\ n\:$ is squarefree and prime $\rm\: p\:|\:n\: \Rightarrow\: p-1\ |\ e-1 $
-REMARK $\ $ The special case $\rm\:e \:= n\:$ is Korselt's criterion for Carmichael numbers.
-THEOREM 2 $\ $ For naturals $\rm\: a,e,n \:$ with $\rm\: e,n>1 $
-$\rm\qquad\qquad\ n\ |\ a^e-1\ $ for all $\rm\:a\:$ coprime to $\rm\:n\ \iff\ p\:$ prime, $\rm\ p^k\: |\: n\ \Rightarrow\ \lambda(p^k)\:|\:e $
-with $\rm\quad\ \lambda(p^k)\ =\ \phi(p^k)\ $ for odd primes $\rm\:p\:,\:$ or $\rm\:p=2,\ k \le 2 $
-and $\quad\ \ \rm \lambda(2^k)\ =\ 2^{k-2}\ $ for $\rm\: k>2 $
-The latter exception is due to $\rm\:\mathbb Z/2^k\:$ having multiplicative group $\rm\ C(2) \times C(2^{k-2})\ $ for $\rm\:k>2\:.$
-Note that the least such exponent $\rm\:e\:$ is given by $\rm\: \lambda(n)\: =\: lcm\ \{\lambda(\;{p_i}^{k_i})\}\;$ where $\rm\ n = \prod {p_i}^{k_i}\:.$
-$\rm\:\lambda(n)\:$ is called the (universal) exponent of the group $\rm\:\mathbb Z/n^*,\:$ a.k.a. the Carmichael function.
-See my post here for proofs and further discussion.<|endoftext|>
-TITLE: How do I figure out what kind of distribution this is?
-QUESTION [10 upvotes]: I've sampled a real-world process, network ping times. The "round-trip-time" is measured in milliseconds. Results are plotted in a histogram (image omitted):
-
-Ping times have a minimum value, but a long upper tail.
-I want to know what statistical distribution this is, and how to estimate its parameters.
-Even though the distribution is not a normal distribution, I can still show what I am trying to achieve.
-The normal distribution uses the density function
-$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
-with the two parameters
-
-$\mu$ (mean)
-$\sigma^2$ (variance)
-
-Parameter estimation
-The formulas for estimating the two parameters are
-$$\hat\mu = \frac{1}{n}\sum_{i=1}^n x_i, \qquad \hat\sigma^2 = \frac{1}{n-1}\sum_{i=1}^n \left(x_i - \hat\mu\right)^2.$$
-Applying these formulas against the data I have in Excel, I get:
-
-$\mu$ = 10.9558 (mean)
-$\sigma^2$ = 67.4578 (variance)
-
-With these parameters I can plot the "normal" distribution over top of my sampled data (plot omitted).
-Obviously it's not a normal distribution. A normal distribution has an infinite top and bottom tail, and is symmetrical. This distribution is not symmetrical.
-
-What principles, or what flowchart, would I apply to determine what kind of distribution this is?
-And cutting to the chase, what is the formula for that distribution, and what are the formulas to estimate its parameters?
-
-I want to get the distribution so I can get the "average" value, as well as the "spread". I am actually plotting the histogram in software, and I want to overlay the theoretical distribution.
-
-REPLY [2 votes]: From the comments on stats.stackexchange, it seems like you may not care too much about the distribution, but just a pretty curve to overlay on your graph. In which case, some kind of spline is your best bet. Use some kind of curves with asymptotes at $y=0$ for your upper- and lower-most segments, and whatever fits best in between.
-If you do actually care about the underlying distribution:
-The first step would be to use whatever outside knowledge you have to characterize the distribution. For example:
-Network ping is a sum of independent wait times (the individual nodes in the network). This would suggest a Gamma/Erlang distribution if each of these steps is identical, and a more complex distribution if they are not.
-Ping is a measure of time until the computer at the other end responds to your request, the likelihood of which is proportional to the time elapsed. This would suggest a Weibull distribution.
-Ping time is the accumulation of a large number of factors that all have a multiplicative effect on the result. Then a log-normal distribution would be best.
-I don't know enough about networking to say anything about the accuracy of any of the above models, and it's also perfectly likely that ping time follows some other model which I haven't thought of. I just wanted to demonstrate the idea: that you should think about what factors contribute to the thing you are trying to model, and how they interact.
-And, of course, the distribution does not necessarily have to be a known one! In which case the above won't get you very far! In this case you might want to come up with your own empirical distribution, for which a variety of methods exist. The most common are to take your measurements as the distribution (as long as you have a sufficiently large number) or to take each of those data points and treat it as the center of some uniform/normal/other distribution, and sum everything with appropriate scaling.
-After you know the type of distribution, you may also be able to use domain knowledge to estimate some of its parameters. For example, you might guess at the number of exponentials being summed based on the shape of the network. You can also use your measured mean and variance to form estimates of the distribution parameters. For example, if you thought that your distribution was a Gamma(3,θ), then you could use your measured variance to estimate θ=4.74182454 based on the known formula for variance of a Gamma Distribution.
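-The moment arithmetic for that Gamma suggestion is short enough to script. A minimal sketch in Python; the only inputs are the sample mean and variance quoted in the question, and the variable names are illustrative, not from any post:
-
-mean, var = 10.9558, 67.4578    # sample moments quoted in the question
-
-# Free shape and scale: for Gamma(k, theta), mean = k*theta and var = k*theta**2,
-# so matching moments gives:
-k_hat = mean**2 / var           # about 1.78
-theta_hat = var / mean          # about 6.16
-
-# Shape fixed at k = 3, as in the Gamma(3, theta) example: var = 3*theta**2.
-theta_fixed = (var / 3) ** 0.5  # about 4.7418, the estimate quoted above
-
-print(k_hat, theta_hat, theta_fixed)
-
-(A maximum-likelihood fit, e.g. scipy.stats.gamma.fit, would generally give different estimates; the method of moments is just the quickest check.)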
-Once you have your guess at a distribution, you will want to test its goodness of fit.
-For this, the standard method would be to apply the one-sample Kolmogorov-Smirnov test.
-Other potentially applicable tests are the Cramer-von-Mises, Anderson-Darling, or chi-square tests.
-This is incomplete, I will add more later.<|endoftext|>
-TITLE: Function behavior with very large variables
-QUESTION [7 upvotes]: Whenever I think about how a function behaves, I always try to identify a general pattern of behavior with some common numbers (somewhere between 5 and 100 maybe) and then I try to see if anything interesting happens around 1, 0 and into negative numbers if applicable.
-If that all works out, I essentially assume that I know that the function is going to behave similarly for very large numbers as it does for those relatively small numbers.
-Are there notable (famous, clever or common) functions where very large numbers would cause them to behave significantly differently than would initially be thought if I followed my regular experimental pattern? If so, are there any warning signs I should be aware of?
-
-REPLY [2 votes]: The Chebyshev bias behavior is properly understood only by examining numbers up to very, very large bounds.
-
-REPLY [2 votes]: The values of a function $f(x)$ for relatively small values of its argument $x$ are typically a very bad predictor of the asymptotic behavior of $f(x)$ for large $x$. This is true even when $f(x)$ is an analytic function which is uniquely determined by its values on any small interval $x\in[-\epsilon,\epsilon]$.
-Have a look at this excerpt from "Concrete Mathematics" for a (not the worst possible) example of how deceptive the "small argument values" intuition could be.
-
-It helps to cultivate an expansive attitude when we're doing asymptotic analysis: We should think big, when imagining a variable that approaches infinity. For example, the hierarchy says that $\log n\prec n^{0.0001}$; this might seem wrong if we limit our horizons to teeny-tiny numbers like one googol, $n = 10^{100}$. For in that case, $\log n = 100$, while $n^{0.0001}$ is only $10^{0.01}\approx 1.0233$. But if we go up to a googolplex, $n = 10^{10^{100}}$, then $\log n = 10^{100}$ pales in comparison with $n^{0.0001} = 10^{10^{96}}$.<|endoftext|>
-TITLE: Non-completeness of the space of bounded linear operators
-QUESTION [5 upvotes]: If $X$ and $Y$ are normed spaces, I know that the space $B(X,Y)$ of bounded linear functions from $X$ to $Y$ is complete if $Y$ is complete. Is there an example of a pair of normed spaces $X,Y$ s.t. $B(X,Y)$ is not complete?
-
-REPLY [4 votes]: Let $X = \mathbb{R}$ with the Euclidean norm and let $Y$ be a normed space which is not complete. You should find that $B(X, Y) \simeq Y$.<|endoftext|>
-TITLE: Rigorous synthetic geometry without Hilbert axiomatics
-QUESTION [7 upvotes]: Are there books or articles that develop (or sketch the main points of) Euclidean geometry without fudging the hard parts such as angle measure, but might at times use coordinates, calculus or other means so as to maintain rigor or avoid the detail involved in Hilbert-type axiomatizations?
-I am aware of Hilbert's foundations and the book by Moise. I was wondering if there is anything more modern that tries to stay (mostly) in the tradition of synthetic geometry.
-
-REPLY [2 votes]: There are some axiom systems, such as Birkhoff's, which assume the existence of a field from the beginning.
-For the synthetic approach the main axiom systems are those of Hilbert and Tarski.
-You can also use Tarski's axioms as described in W. Schwabhäuser,
-W. Szmielew, A. Tarski, Metamathematische Methoden in der Geometrie.<|endoftext|>
-TITLE: How did the notation "ln" for "log base e" become so pervasive?
-QUESTION [29 upvotes]: Wikipedia sez:
-
-The natural logarithm of $x$ is often written "$\ln(x)$", instead of $\log_e(x)$, especially in disciplines where it isn't written "$\log(x)$". However, some mathematicians disapprove of this notation. In his 1985 autobiography, Paul Halmos criticized what he considered the "childish $\ln$ notation," which he said no mathematician had ever used. In fact, the notation was invented by a mathematician, Irving Stringham, professor of mathematics at University of California, Berkeley, in 1893.
-
-Apparently the notation "$\ln$" first appears in Stringham's book Uniplanar algebra: being part I of a propædeutic to the higher mathematical analysis.
-But this doesn't explain why "$\ln$" has become so pervasive. I'm pretty sure that most high schools in the US at least still use the notation "$\ln$" today, since all of the calculus students I come into contact with at Berkeley seem to universally use "$\ln$".
-How did this happen?
-
-REPLY [24 votes]: As noted in the original question, Wikipedia claims that the ln notation was invented by Stringham in 1893. I have seen this claim in other places as well. However, I recently came across an earlier reference. In 1875, in his book Lehrbuch der Mathematik, Anton Steinhauser suggested denoting the natural logarithm of a number a by "log. nat. a (spoken: logarithmus naturalis a) or ln. a" (p. 277). This lends support to the theory that "ln" stands for "logarithmus naturalis."<|endoftext|>
-TITLE: Yoneda-Lemma as generalization of Cayley's theorem?
-QUESTION [49 upvotes]: I came across the statement that the Yoneda lemma is a generalization of Cayley's theorem, which states that every group is isomorphic to a group of permutations.
-How exactly is the Yoneda lemma a generalization of Cayley's theorem? Can Cayley's theorem be deduced from the Yoneda lemma, is it a generalization of a particular case of Yoneda, or is this, instead, a philosophical statement?
-To me, it seems that the Yoneda embedding is more canonical than Cayley's theorem because in the latter you have to choose whether the group acts from the left or from the right on itself. But maybe this is an optical illusion.
-
-REPLY [22 votes]: I can also add that both Yoneda and Cayley are results which follow from the general philosophy of investigating algebraic structures by letting them act on themselves.
-1) If you let a group $G$ act on itself, you realize it as a subgroup of the permutation group of the underlying set; in particular if $G$ is finite, as a subgroup of $\mathfrak{S}_n$.
-2) If you let a ring with unit act on itself, you realize it as a subring of $\operatorname{End}(E)$, where $E$ is the underlying additive group.
-3) Similarly, if you let a finite-dimensional $k$-algebra act on itself, you realize it as a subalgebra of the matrix algebra $\mathcal{M}_n(k)$. In particular this gives the classical realization of $\mathbb{C}$ as a matrix algebra over $\mathbb{R}$ and of the quaternions as a matrix algebra over $\mathbb{C}$ or $\mathbb{R}$.
-4) You can let a Lie algebra act on itself, but unfortunately this action need not be faithful (Lie algebras don't have units...). So you only obtain the easy first step of Ado's theorem about embedding Lie algebras into matrix Lie algebras.
-5) If you let a category $\mathcal{C}$ act on itself, you obtain an embedding into $Fun(\mathcal{C}^{op}, Set)$, which is the content of Yoneda's lemma.<|endoftext|>
-TITLE: When does the virtual cohomological dimension become a numerical invariant?
-QUESTION [5 upvotes]: In group cohomology theory, we know that the cohomological dimension $cd(G)$ for a profinite group is a fundamental numerical invariant. We say $G$ is of virtual cohomological dimension $n$ (denote it by $vcd(G)$) if there exists an open subgroup $H$ such that $cd(H)=n$.
-It seems that $vcd(G)$ is not always a fixed integer and my question is:
-Is there any criterion for when $vcd(G)$ becomes fixed, or say, when $cd(G)=cd(H)$ for all open subgroups $H$ of $G$? If there are any related references please let me know, thanks!
-
-REPLY [6 votes]: Although it is not obvious from the definition, in fact -- assuming that there exists at least one open subgroup of finite cohomological dimension, otherwise there is nothing to say -- the virtual cohomological dimension is always a fixed integer. In other words, if there exists a single open subgroup $H_0$ of $G$ such that $\operatorname{cd}(H_0) < \infty$, then for all open subgroups $H$ of $G$ with $\operatorname{cd}(H) < \infty$, we have
-$\operatorname{cd}(H) = \operatorname{cd}(H_0)$.
-(Moreover, this also holds for the $p$-cohomological dimension at any prime $p$.)
-I believe this result was first proved by Serre. In any case, it follows easily from Proposition I.14 in Serre's Galois Cohomology. In fact, a slightly stronger result is given as Proposition I.14$'$: again assuming that there exists at least one open subgroup $H_0$ of finite cohomological dimension, an open subgroup $H$ has infinite cohomological dimension if $H$ has nontrivial elements of finite order, and cohomological dimension equal to $\operatorname{cd}(H_0)$ otherwise.<|endoftext|>
-TITLE: Indefinite summation of polynomials
-QUESTION [7 upvotes]: I've been experimenting with the summation of polynomials. My line of attack is to treat the subject the way I would for calculus, but not using limits.
-By way of a very simple example, suppose I wish to add all the numbers between $10$ and $20$ inclusive, and find a polynomial which I can plug the numbers into to get my answer. I suspect it's some form of polynomial with degree $2$. So I do an integer 'differentiation':
-$$
-\mathrm{diff}\left(x^{2}\right)=x^{2}-\left(x-1\right)^{2}=2x-1
-$$
-I can see from this that I nearly have my answer, so assuming an inverse 'integration' operation and re-arranging:
-$$
-\frac{1}{2}\mathrm{diff}\left(x^{2}+\mathrm{int}\left(1\right)\right)=x
-$$
-Now, I know that the 'indefinite integral' of 1 is just x, from 'differentiating' $x-(x-1) = 1$. So ultimately:
-$$
-\frac{1}{2}\left(x^{2}+x\right)=\mathrm{int}\left(x\right)
-$$
-So to get my answer I take the 'definite' integral:
-$$
-\mathrm{int}\left(x\right):10,20=\frac{1}{2}\left(20^{2}+20\right)-\frac{1}{2}\left(9^{2}+9\right)=165
-$$
-(the lower bound needs decreasing by one)
-My question is, is there a general way I can 'integrate' any polynomial in this way?
-Please excuse my lack of rigour and the odd notation.
-
-REPLY [5 votes]: For any particular polynomial there is an easier way to do indefinite summation than using the Bernoulli numbers, going off of Greg Graviton's answer. Here we'll use the forward difference $\Delta f(x) = f(x+1) - f(x)$.
Then
-$\displaystyle \Delta {x \choose n} = {x \choose n-1}.$
-This implies that we can perform a "Taylor expansion" on any polynomial to write it in the form $f(x) = \sum a_n {x \choose n}$ by evaluating the iterated differences $\Delta^n f$ at zero. For any particular polynomial $f$ it is easy to write these finite differences down by constructing a table. In general, the formula is
-$\displaystyle a_n = \Delta^n f(0) = \sum_{k=0}^{n} (-1)^{n-k} {n \choose k} f(k)$
-as one can readily prove by writing $\Delta = S - I$ where $S$ is the shift operator $S f(x) = f(x+1)$ and $I$ is the identity operator $I f(x) = f(x)$. Then the indefinite sum of $f$ is just $\sum a_n {x \choose n+1}$. This is the easiest way I know how to do such computations by hand, and it also leads to a fairly easy method for polynomial interpolation given the values of a polynomial at consecutive integers.<|endoftext|>
-TITLE: How do you define functions for non-mathematicians?
-QUESTION [36 upvotes]: I'm teaching a College Algebra class in the upcoming semester, and only a small portion of the students will be moving on to further mathematics. The class is built around functions, so I need to start with the definition of one, yet many "official" definitions I have found too convoluted (or poorly written) for general use.
-Here's one of the better "light" definitions I've found:
-
-"A function is a relationship which assigns to each input (or domain) value, a unique output (or range) value."
-
-This sounds simple enough on the surface, but putting myself "in the head" of a student makes me pause. It's almost too compact, with potentially ambiguous words for the student (relationship? assigns? unique?).
-Here's my personal best attempt, in 3 parts. Each part of the definition would include a discussion and examples before moving to the next part.
-
-A relation is a set of links between two sets.
-Each link of a relation has an input (in the starting set) and an output (in the ending set).
-A function is a relation where every input has one and only one possible output.
-
-I'm somewhat happier here: starting with a relation gives some natural examples and makes it easier to impart the special importance of a function (which is "better behaved" than a relation in practical circumstances).
-But I'm also still uneasy ("links"? A set between sets?) and I want to see if anyone has a better solution.
-
-REPLY [2 votes]: All of the above state the usual metaphors with more or less flair; but none convey the idea of a function. Functions have nothing inherently to do with machines or monkeys or rules (whatever they are) or black boxes (whatever they are) or inputs and outputs or the equals sign. There are reasons why we want the invention of a function. There are reasons why we need the invention of a variable. There is a reason why we need the notion of dependence that leads to the idea of an "expression" to provide what we want. The expression is a brilliant device; and it's a shame not to see that explicitly. There are reasons why we consider a constant to be a special case of a function. There are reasons why we commonly limit the word "function" to single-valued functions. There are reasons why we expand the notion of a function to a function equation with an explicit dependent variable. And like other objects in mathematics, the same function can have different forms, algebraic and otherwise.
Stating conclusions without their sense makes mathematics a mystique instead of a rational, interesting science that we can think about and question and explore. One last point: The question should have been how to explain, not "define." No so-called "definition" comes out of nowhere. Presenting a "definition" like Athena springing full grown and armored from the head of Zeus is, to the great misfortune of most of us, standard practice in teaching mathematics.<|endoftext|>
-TITLE: If $\sum_{n=1}^{\infty} a_{n}^{3}$ converges does $\sum_{n=1}^{\infty} \frac{a_{n}}{n}$ converge?
-QUESTION [13 upvotes]: Suppose $a_{n}>0$ and the following series converges
-$\sum_{n=1}^{\infty} a_{n}^{3}$
-Does this imply that
-$\sum_{n=1}^{\infty} \frac{a_{n}}{n}$
-converges?
-I was able to prove that the second series also converges by using the limit comparison test. Is there another way to show the second series converges (e.g. root or ratio test)?
-
-REPLY [4 votes]: By Hölder's inequality you have
-$$ A_k=\sum_{n=1}^{k} \frac{a_n}{n} \leq \left( \sum_{n=1}^{k} a_n^3 \right)^{\frac{1}{3}} \left( \sum_{n=1}^{k} \frac{1}{n^{3/2}}\right)^{\frac{2}{3}}$$
-By taking $k \rightarrow \infty$, we see $\sum_{n=1}^{\infty} \frac{a_n}{n}$ is bounded, and since $\frac{a_n}{n} \geq 0$ we conclude it is convergent (since $A_k$ is bounded and increasing).<|endoftext|>
-TITLE: More convergent series
-QUESTION [7 upvotes]: This question just reminded me of a conundrum I posed myself in my first year of university. I never did get a satisfactory answer...
-
-Let $a_n$ be a null sequence. Does it follow that $\sum \frac{a_n}{n}$ converges?
-
-Any ideas?
-
-REPLY [13 votes]: If by null sequence you mean a sequence that converges to 0, then no. Try $a_n=1/\log n.$ By integral comparison, the series diverges:
-$$\sum_2^\infty\dfrac1{n\log n}\geq\int_2^\infty\dfrac{dx}{x\log x}=\int_{\log 2}^\infty\dfrac{du}u=\infty,$$
-where I've used the change of variables $u=\log x$.<|endoftext|>
-TITLE: Undergraduate/High-School-Olympiad Level Introductory Number Theory Books For Self-Learning
-QUESTION [14 upvotes]: I don't know whether the books mentioned in Best ever book on Number Theory are beyond undergraduate/high-school-olympiad level.
-Please recommend your favourite.
-
-REPLY [3 votes]: Davenport's The Higher Arithmetic was my first number theory book. I think it's very accessible to a high school student or beginning undergraduate student. It's quite short and very quickly readable.
-If you find this treatment too informal, Niven and Zuckerman's An Introduction to the Theory of Numbers is a standard text that I think is a very well written undergraduate text, but this has already been mentioned.<|endoftext|>
-TITLE: Can this gravitational field differential equation be solved, or does it not show what I intended?
-QUESTION [16 upvotes]: This is the equation I'm having trouble with:
-$$G \frac{M m}{r^2} = m \frac{d^2 r}{dt^2}$$
-That's the non-vector form of the universal law of gravitation on the left and Newton's second law of motion on the right. I assume that upon correctly modeling and solving this, I will have a function of time that gives the distance from a spherical mass in space (e.g. distance from the Earth from an initial condition of $r(0) = 10,000 \mathrm{km}$).
-However, WolframAlpha gives a hell of an answer, which leads me to believe that I'm modeling this equation completely wrong. Can anyone shed some light on this problem?
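-One quick way to see that the model itself is sound, apart from a sign, is to integrate $\ddot r = -\frac{GM}{r^2}$ numerically. A minimal Python sketch; the constants are illustrative Earth-like values, not taken from the question, and the scheme is ordinary velocity Verlet:
-
-G_M = 3.986e14          # G*M for the Earth, in m^3/s^2
-r, v = 1.0e7, 0.0       # released from rest 10,000 km from the centre
-dt = 0.5                # time step in seconds
-t = 0.0
-while r > 6.371e6:      # stop at the Earth's mean radius
-    a = -G_M / r**2             # inverse-square attraction (note the sign)
-    v_half = v + 0.5 * dt * a   # velocity Verlet: half kick ...
-    r += dt * v_half            # ... drift ...
-    v = v_half + 0.5 * dt * (-G_M / r**2)   # ... half kick
-    t += dt
-print(t, r, v)          # fall time to the surface, final radius and speed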
-
-REPLY [5 votes]: The correct differential equation is:
-$$ - \frac{G M m}{r^2} = \mu \ddot r $$
-where $ \mu = \frac{M m}{M+m} $ is the reduced mass.
-This can be simplified by dividing by $\mu$:
-$$ - \frac{G(M+m)}{r^2} =\ddot r $$
-The problem can be further simplified by setting $ G(M+m) $ equal to a constant, usually 1/2.
-$$ - \frac{1}{2} = r^2 \ddot r $$
-This is a 2nd-order quasilinear nonhomogeneous ordinary differential equation. By itself it does not have a unique solution; adding initial or boundary conditions will lead to a unique particular solution.
-Like the more general Kepler orbits, radial orbits can also be classified as elliptic, parabolic, or hyperbolic, corresponding to three forms of the particular solutions.
-In the parabolic case, setting $ G(M+m) = 2/9 $, with initial conditions $ r(1)=1, \ \dot r(1)=2/3 \ $ leads to a simple solution:$$ t = r^{3/2} $$
-In the elliptic case, setting $ G(M+m) = 1/2 $, with initial conditions $ r(\pi/2)=1, \ \dot r(\pi/2)=0 \ $, the particular solution is: $$ t = \arcsin(\sqrt{r})-\sqrt{r(1-r)} $$
-In the hyperbolic case, setting $ G(M+m) = 1/2 $, with initial conditions $ t_0 = \sqrt{2}-\operatorname{arcsinh}(1)$, $ r(t_0)=1 $, $ \dot r(t_0)=\sqrt{2} $, the particular solution is:$$ t = \sqrt{r(1+r)} - \operatorname{arcsinh}(\sqrt{r}) $$<|endoftext|>
-TITLE: Books on Lie Groups via nonstandard analysis?
-QUESTION [6 upvotes]: Is there any book or online note that covers the basics of Lie groups using nonstandard analysis? Another thing I would like is to see these things in category theory (along the lines of Algebra: Chapter 0, but for differential geometry)?
-
-REPLY [3 votes]: I'm not aware of any Lie theory being done with Robinson's nonstandard analysis, but a different approach to infinitesimals via Kock and Lawvere's smooth infinitesimal analysis has certainly been used to develop some Lie theory. This might be more fruitful to look into since it's an axiomatization of some of Grothendieck's methods in algebraic geometry (formal schemes and such), which are in mainstream use. There's some stuff on Lie groups and Lie algebras in Kock's Synthetic Differential Geometry and Kock's Synthetic Geometry of Manifolds which are both available for free. Lavendhomme's Basic Concepts of Synthetic Differential Geometry also covers some Lie theory.
-Smooth infinitesimal analysis is very nice and intuitive, more so than classical differential geometry in many ways (tangent vectors as actual curves and vector fields as infinitesimal transformations). It is certainly categorical in style, as you can probably guess since it was pioneered by Kock and Lawvere.<|endoftext|>
-TITLE: Tips for writing math solutions for others
-QUESTION [5 upvotes]: I am working a bit on a collection of Linear Algebra examples,
-as well as some examples on induction. This is what is taught freshman year at our university.
-I intend to release this to the public, either by selling printed copies or releasing it online.
-Since I do not have experience using such material myself, there are some questions I would like some opinions on:
-
-How much theory should I include? Are references to the course literature enough?
-Is there a format preference? Small text, so that the collection is more environmentally friendly, or big margins for notes?
-Best way to deal with misprints?
-Should induction and linear algebra be separate pieces?
-
-Please share your experience if you have done something similar.
-
-EDIT:
-An answer I seek is something along the lines of:
-"I am a "something" student, and I prefer "something", and would like to see more of "something"."
-
-REPLY [11 votes]: I am an undergrad student, and I just finished taking Application of Linear Algebra. Here is what I want from a linear algebra text:
-EXAMPLES WHICH ACTUALLY DEMONSTRATE THE POINT.
-An example of what I don't ever want to see: the first example in my textbook was to show how to do matrix multiplication. Simple, right? No. They used a square symmetric matrix multiplied by itself to demonstrate this, and didn't walk through the individual steps. It just gave some matrix A and said look! A.A = B. See? And of course we didn't.
-So please! Check that your examples can only be understood one way, and that this way is the way you intend.<|endoftext|>
-TITLE: Why is Gimbal Lock an issue?
-QUESTION [13 upvotes]: I understand what the problem with Gimbal Lock is: for example, at the North Pole all directions are south, and there's no concept of east and west. But what I don't understand is why this is such an issue for navigation systems. Surely if you find you're in Gimbal Lock, you can simply move a small amount in any direction, and then all directions are right again?
-Why does this cause such a problem for navigation?
-
-REPLY [4 votes]: One problem that I have come across is when roller coasters are being designed. If you have the pitch at +/-90 degrees (pointing straight up/down) then using normal Euler-based orientation you can't easily specify an angle of banking, as you have no reference to 'up'. To solve this, quaternions are often used.<|endoftext|>
-TITLE: Comparison of two convergence conditions for sequences of non-negative numbers
-QUESTION [6 upvotes]: Let $a_n\geq 0$ be a sequence of non-negative numbers. Consider the following two statements:
-$$
-\text{(I)}\qquad\qquad \lim_{n\to\infty} \frac{1}{n^2}\sum_{i=1}^n a_i =0,
-$$
-$$
-\text{(II)}\qquad\qquad\qquad \sum_{n=1}^\infty \frac{a_n}{n^2}<\infty.
-$$
-
-Questions: Does (I) imply (II)? Does (II) imply (I)? Otherwise, please provide counterexamples.
-
-Motivation: Both statements occur in the context of the law of large numbers for non-identically distributed random variables. With $a_n=\mathrm{Var}(X_n)$, one can conclude the weak LLN if the $X_n$ are pairwise uncorrelated and condition (I) holds. The strong LLN can be concluded if the $X_n$ are stochastically independent and condition (II) holds. Therefore, one might expect that (II) implies (I).
-
-REPLY [11 votes]: II implies I:
-(we deal with the partial sums here)
-One applies the Cauchy-Schwarz inequality to get:
-$\displaystyle \left( \sum_{i=1}^n \frac{a_i}{i^2} \right)\left(\sum_{i=1}^n i^2\right) \ge \left(\sum_{i=1}^n a_i\right)^2 $
-We can see that $ \sum_{i=1}^n i^2 $ is of magnitude $n^3$ as $n \rightarrow \infty$. So when we divide both sides by $n^4$, the LHS will converge to 0 as $n \rightarrow \infty$, which implies that $\displaystyle \frac{\sum a_n}{n^2}$ converges to 0.
-I does NOT imply II:
-e.g. take $a_1,a_2$ to be anything, $\displaystyle a_n = \frac{n}{\log n}$ for $n \ge 3$. This does not satisfy II (well known). But it does satisfy I, because asymptotically,
-$\displaystyle \frac{1}{n^2} \sum_{i=1}^n a_i \sim \frac{1}{n^2} \int_3^n \frac{x}{\log x} dx$.
-The logarithmic integral has a well known approximation
-$\displaystyle \int_3^n \frac{1}{\log x}dx = O\left(\frac{n}{\log n}\right)$.
-So $\displaystyle \frac{1}{n^2} \int_3^n \frac{x}{\log x} dx \leq \frac{n}{n^2} \int_3^n \frac{1}{\log x} dx = O\left(\frac{1}{\log n}\right) \rightarrow 0$ as $n \rightarrow \infty$.<|endoftext|>
-TITLE: How to calculate reflected light angle?
-QUESTION [7 upvotes]: On a two-dimensional plane, line $X$ is at an angle of $x$ radians and an incoming light ray travels at an angle of $y$ radians. How can I calculate the angle of the outgoing light reflected off of the line $X$? How can I cover all possible cases?
-Edit: I was trying to figure out Project Euler problem 144.
-
-REPLY [2 votes]: Ok, here's another way to look at it, using vector directions and kinda intuitive, but I think it is pretty close to a real proof.
-Start with the mirror line X horizontal, that is, its angle x = 0. Then it's clear that the vector angle, z, of the reflected light is the negative of the vector angle, y, of the light itself: z = -y.
-
-Now rotate line X by d around the point of reflection, either way, leaving the original light ray (angle y) fixed and letting the reflected ray (angle z) rotate around the point of reflection to maintain the angle of reflection equal to the angle of incidence. Assuming counterclockwise rotation, d > 0, this "pushes" the reflected line by 2d, one d for a direct push to maintain the angle of reflection, and another d because the angle of incidence also increases by d, so the reflected light must rotate that much more to keep the two angles of the light equal. Likewise when d < 0 for clockwise rotation.
-So we are increasing (or decreasing, for clockwise rotation) angle x by d, but angle z (vector angle) by 2d. Hence...
-$z = 2d - y = 2x - y$<|endoftext|>
-TITLE: What is the rigorous definition of polyhedral fan? What are some good resources to learn about them? What context do they arise naturally in?
-QUESTION [6 upvotes]: I've been reading about tropical geometry and many papers reference polyhedral fans. I feel like I have a decent intuitive picture of what they are from reading articles but I still haven't been able to guess the general definition. All the ones I've encountered have been systems of linear inequalities, so that is my best guess at a general definition.
-Any comments on where they appeared first historically or links/books to general resources on learning about them would be appreciated. Also, I'm curious to know what other areas of math these show up in?
-
-REPLY [11 votes]: A polyhedral cone is a subset of a real vector space which is the intersection of finitely many closed half spaces. (The defining planes of these half spaces must pass through $0$.) A fan is a finite set $F$ of polyhedral cones, all living in the same vector space, such that
-(1) if $\sigma$ is a cone in $F$, and $\tau$ is a face of $\sigma$, then $\tau$ is in $F$.
-(2) if $\sigma$ and $\sigma'$ are in $F$, then $\sigma \cap \sigma'$ is a face of both $\sigma$ and of $\sigma'$.
-This blog post of mine might help you visualize these definitions.
-Most mathematicians I know learned fans from Fulton's Toric Varieties. This would involve learning a lot of algebraic geometry on top of your combinatorics, although it is algebraic geometry that is very relevant to tropical geometry.
-For a pure combinatorics reference, have you tried Chapter 2 of De Loera, Rambau and Santos? They focus on polyhedral complexes, which is the more general setup where you don't require that the half spaces pass through $0$, but they talk about fans as well.
I haven't had a chance to look at it yet but, based on my knowledge of the authors, I expect it is very good.<|endoftext|>
-TITLE: On multiplying quaternion matrices
-QUESTION [6 upvotes]: Both matrix multiplication and quaternion multiplication are non-commutative; hence the use of terms like "premultiplication" and "postmultiplication". After encountering the concept of "quaternion matrices", I am a bit puzzled as to how one may multiply two of these things, since there are at least four ways to do this.
-Some searching has netted this paper, but not having any access to it, I have no way towards enlightenment except to ask this question here.
-If there are indeed these four ways to multiply quaternion matrices, how does one figure out which one to use in a situation, and what shorthand might be used to talk about a particular version of a multiplication?
-
-REPLY [2 votes]: I guess I should expand my comment into an answer. Given two matrices $a_{ij}$ and $b_{ij}$ with entries in any (associative) ring $R$, the natural definition of the product has entries
-$\displaystyle c_{ij} = \sum_k a_{ik} b_{kj}.$
-This multiplication is associative, and it also agrees with the multiplication one obtains from any finite-dimensional matrix representation of $R$ by replacing each entry by the corresponding matrix.
-I do not see any particular reason to consider a different notion of multiplication. Changing the order of some of the multiplications seems nonsensical to me, and multiplying in the opposite order gives you essentially the same multiplication.
-This definition does not agree with the definition in my first comment; multiplication by one of the above matrices does not define an $R$-module homomorphism when $R$ is noncommutative.<|endoftext|>
-TITLE: Presentation of the fundamental group of a manifold minus some points
-QUESTION [25 upvotes]: I recently noticed a few things in some recent questions on MO:
-1) the fundamental group of $S^2$ minus, say, 4 points, is $\langle a,b,c,d\ |\ abcd=1\rangle$.
-2) The fundamental group of a torus minus a point is $\langle a,b,c\ |\ [a,b]c=1\rangle$.
-I was just wondering if you have a manifold $M$, and you know a presentation of its fundamental group, can you quickly get a presentation of the fundamental group of $M$ minus $n$ points?
-Of course, this could merely be a coincidence. These are both surfaces, and both their fundamental groups only have one relation. But perhaps there is more to it than that?
-Any and all helpful comments appreciated,
-Steve
-
-REPLY [28 votes]: The basic tool here is Van Kampen's theorem. Let $M$ be the manifold, and let $M' = M\setminus \{x\}.$ Let $D$ be a small ball around $x$.
-Then $M = M' \cup D,$ and $M' \cap D = D\setminus \{x\}$ is homotopy equivalent to $S^{n-1}$ (here $n$
-is the dimension of $M$).
-So Van Kampen's theorem says that $\pi_1(M) = \pi_1(M')*_{\pi_1(S^{n-1})} \pi_1(D)$.
-If $n > 2,$ then $S^{n-1}$ is simply connected, and of course $\pi_1(D)$ is simply connected,
-and so this reduces to $\pi_1(M) = \pi_1(M')$; in other words, deleting points doesn't change
-$\pi_1$ in dimensions $> 2$. (You can probably convince yourself of this directly: imagine you have some loop contracting in a 3-dimensional space; then if you remove a point, you can always just perturb the contraction slightly so that it misses the point. Of course,
-this is not the case in two dimensions.)
-If $n = 2$, then $\pi_1(S^1) = \mathbb Z$ is infinite cyclic, while $\pi_1(D)$ is trivial
-again.
So we see that $\pi_1(M)$ is obtained from $\pi_1(M')$ by killing a loop. One can be more precise, of course (and I will restrict myself to the orientable case, so as to make life easier): if you have a genus $g$ surface, with $r>0$ punctures, and also -$g + r > 1$, then $\pi_1$ is a free group on $2 g + r - 1$ generators. (Every puncture after the first adds another independent loop, namely the loop around that puncture.) If $g \geq 1,$ and $r = 0$, then $\pi_1$ is obtained via Van Kampen as above: you -begin with a free group on $2g$ generators for $M'$, and then you kill off the loop -around the puncture when you fill it in (so you get the standard presentation for the -$\pi_1$ of a compact orientable surface of genus $g \geq 1$). (Conversely, going from -the compact surface $M$ to the once-punctured surface $M'$ does not add any generators -to $\pi_1$, but gets rid of a relation.) -If $g = 0$ and $r \leq 1,$ then you have either a disk ($r = 1$) or a sphere ($r = 0$) -and so $\pi_1$ is trivial.<|endoftext|> -TITLE: Uniform semi-continuity -QUESTION [10 upvotes]: Background -It is a standard and important fact in basic calculus/real analysis that a continuous function on a compact metric space is in fact uniformly continuous. That is, suppose $(X,d)$ is a compact metric space and $f\colon X \to\mathbb R$ is such that for every $x\in X$ and $\varepsilon>0$ there exists $\delta>0$ such that $d(x,y)<\delta$ implies $|f(x)-f(y)|<\varepsilon$. Then in fact, such a $\delta$ can be chosen independently of $x$. -Question -Does a similar statement hold regarding semi-continuous functions? For concreteness, let's consider upper semi-continuous functions, so suppose $(X,d)$ is compact and $f\colon X \to\mathbb R$ has the property that for every $x\in X$ and $\varepsilon >0$ there exists $\delta >0$ such that $d(x,y)<\delta$ implies $f(y) < f(x)+\varepsilon$. (Note the asymmetry of $x$ and $y$ in this definition.) Then is it true that $\delta=\delta(\varepsilon)$ can be chosen independently of $x$? -Reformulation -Given $\delta, \epsilon > 0$, consider the set -$$ -X_\delta^\epsilon := \lbrace x\in X \mid f(y) < f(x) + \epsilon \text{ for every } y\in B(x,\delta) \rbrace. -$$ -Then $f$ is upper semi-continuous if and only if $\displaystyle\bigcup_{\delta>0} X_\delta^\epsilon = X$ for every $\epsilon > 0$, and $f$ is uniformly upper semi-continuous if and only if this union stabilises -- that is, if for every $\epsilon > 0$ there exists $\delta>0$ such that $X_\delta^\epsilon = X$. - -REPLY [7 votes]: A little further thought reveals the following: uniform semi-continuity implies uniform continuity. Thus the answer to my question is a resounding "no", since any function that is upper semi-continuous but not continuous cannot be uniformly upper semi-continuous. -Proof. Let $f$ be uniformly upper semi-continuous. Then for every $\epsilon>0$ there exists $\delta>0$ such that for every $x\in X$, we have $f(y) < f(x) + \epsilon$ whenever $y\in B(x,\delta)$. However, since this statement holds for every $x$, it also holds with $x$ and $y$ reversed; in the language of the original post, both $x$ and $y$ are contained in the set $X_\delta^\epsilon = X$. Since $y$ is in this set and $x\in B(y,\delta)$, we also have $f(x) < f(y) + \epsilon$, and thus $|f(x) - f(y)| < \epsilon$. But this is just the definition of uniform continuity.<|endoftext|> -TITLE: Why is $22/7$ a better approximation for $\pi$ than $3.14$? 
-QUESTION [10 upvotes]: This seems counterintuitive, but $22/7$ is closer to $\pi$ than $3.14=314/100$ which has a significantly greater denominator.
-
-Why is $22/7$ a better approximation for $\pi$ than $3.14$?
-
-This has important implications: e.g. should "$\pi$-day" be the $14^{th}$ of March or the $22^{nd}$ of July?
-
-REPLY [23 votes]: Just for fun...
-Here is a proof that $\displaystyle \frac{22}{7}$ is a better approximation than $\displaystyle 3.14$.
-First we consider the amazing and well known integral formula for $\displaystyle \frac{22}{7} -\pi$ (for instance see this page: Proof that 22/7 exceeds pi).
-$$\int_{0}^{1}\frac{x^{4}(1-x)^{4}}{1+x^2}dx = \frac{22}{7} -\pi$$
-We wish to show that
-$$0 < \frac{22}{7} -\pi < \pi - 3.14$$
-That $\displaystyle 0 < \frac{22}{7} - \pi$ follows trivially from the above integral.
-We will now show that
-$$\int_{0}^{1}\frac{x^{4}(1-x)^{4}}{1+x^2}dx < \frac{1}{700}$$
-We split this up as
-$$\int_{0}^{1}\frac{x^{4}(1-x)^{4}}{1+x^2}dx = \int_{0}^{\frac{1}{2}}\frac{x^{4}(1-x)^{4}}{1+x^2}dx + \int_{\frac{1}{2}}^{1}\frac{x^{4}(1-x)^{4}}{1+x^2}dx$$
-The first integral can be upper-bounded by replacing $\displaystyle x$ in the denominator with $\displaystyle 0$ and the second integral can be upper-bounded by replacing $\displaystyle x$ in the denominator with $\displaystyle \frac{1}{2}$.
-Thus we have that
-$$\int_{0}^{1}\frac{x^{4}(1-x)^{4}}{1+x^2}dx < \int_{0}^{\frac{1}{2}}x^{4}(1-x)^{4}dx + \int_{\frac{1}{2}}^{1} \frac{4x^{4}(1-x)^{4}}{5}dx $$
-Now $$\int_{0}^{\frac{1}{2}}x^{4}(1-x)^{4}dx = \int_{\frac{1}{2}}^{1}x^{4}(1-x)^{4}dx$$ as $\displaystyle x^4(1-x)^4$ is symmetric about $\displaystyle x = \frac{1}{2}$.
-It is also known that $$\int_{0}^{1}x^{4}(1-x)^{4}dx = \frac{1}{630}$$ (see the above page again)
-Thus we have that
-$$\int_{0}^{1}\frac{x^{4}(1-x)^{4}}{1+x^2}dx < \frac{1}{2\cdot 630} + \frac{4}{5\cdot 2\cdot 630} = \frac{1}{700}$$
-Thus we have that
-$$\frac{22}{7} - \pi < \frac{1}{700}$$
-i.e.
-$$2\pi > 2\left(\frac{22}{7} - \frac{1}{700}\right)$$
-$$2\pi > \frac{22}{7} + \frac{22}{7} - \frac{2}{700}$$
-$$2\pi > \frac{22}{7} + \frac{2200}{700} - \frac{2}{700}$$
-$$2\pi > \frac{22}{7} + \frac{2198}{700}$$
-$$2\pi > \frac{22}{7} + \frac{314}{100}$$
-Thus we have that
-$$0 < \frac{22}{7} - \pi < \pi - \frac{314}{100}$$<|endoftext|>
-TITLE: Characterizing Dense Subgroups of the Reals
-QUESTION [13 upvotes]: Possible Duplicate:
-Subgroup of $\mathbb{R}$ either dense or has a least positive element?
-
-Let $(\mathbb{R},+)$ be the group of real numbers under addition. Let $H$ be a proper subgroup of $\mathbb{R}$. Prove that either $H$ is dense in $\mathbb{R}$ or there is an $a \in \mathbb{R}$ such that $H=\{ na : n=0, \pm{1},\pm{2},\dots\}$.
-I am not able to proceed.
-
-REPLY [18 votes]: If there is a smallest positive element, then we are done, since any positive element must be an integer multiple of it, or otherwise we could use a Euclidean-type algorithm to get a positive element with smaller value. (I.e., suppose $a$ is the smallest positive element, and $b$ a positive element which is not an integer multiple of $a$---keep subtracting copies of $a$ until you get something that is strictly between $0$ and $a$.)
-So assume there is a sequence $a_n$ contained in the group that consists of positive numbers tending to zero. Then the group contains each ${\mathbb{Z} a_n}$. This means that for each $n$, any number in $\mathbb{R}$ is within $|a_n|$ of an element of the group.
Since the $|a_n|$ can be small, we find that the group is dense.<|endoftext|>
-TITLE: minimal size of $2$-transitive groups
-QUESTION [7 upvotes]: Let $X$ be a set with $|X|=n$, and let $G$ be a group with a $2$-transitive action on $X$.
-What can be said about the size of $G$?
-
-REPLY [5 votes]: As a complement to Robin Chapman's answer:
-Since $G$ is $2$-transitive, its order is divisible by $n\times(n-1)$, not just bounded below by it, and so it would be interesting to bound $\dfrac{|G|}{(n\times(n-1))}.$
-If $n$ is not a prime power, then it is quite possible for the lower bound to be huge. The smallest reasonable non-prime, $n=6,$ has its smallest $2$-transitive group with order $60 = 6\times5\times2$. The next, $n=10,$ has its smallest $2$-transitive group with order $10\times9\times4$. For most $n$, the smallest multiple is $\dfrac{(n-2)!}2,$ that is, the alternating group on $n$ points is the smallest $2$-transitive group. This happens already at $n=22, 33, 34, 35,$ and asymptotically takes over. Cameron–Neumann–Teague showed this in their $1982$ paper $\text{MR661693}$, and I believe it is covered in Dixon–Mortimer's textbook.
-So on the one hand the lower bound for a $2$-transitive group on $n$ points is $n\times(n-1)$ for prime powers $n$, but for most $n$ the lower bound is $\dfrac{n!}2.$<|endoftext|>
-TITLE: Mathematical paradoxes?
-QUESTION [6 upvotes]: What are some interesting mathematical paradoxes?
-What I have in mind are things like the Banach-Tarski paradox, Paradox of Zeno of Elea, Russell's paradox, etc.
-Edit: As an additional restriction, let us focus on paradoxes that are not already in the list at:
-https://secure.wikimedia.org/wikipedia/en/wiki/Category:Mathematics_paradoxes
-
-REPLY [3 votes]: One of the consequences of Goedel's incompleteness theorem is that if $T$ is a finitely axiomatizable theory of arithmetic, then
-
-$T$ proves that $T$ is consistent
-
-if and only if
-
-$T$ is inconsistent!
-
-The reason is that an inconsistent theory proves anything, and a consistent theory never proves its own consistency.<|endoftext|>
-TITLE: Ignoring elements of small order in the simple group of order $60$
-QUESTION [38 upvotes]: The simple group of order $60$ can be generated by the permutations $(1,2)(3,4)$ and $(1,3,5)$, but all you need to do is square the first one and it becomes the identity. Can't we find a version of the simple group where the elements of small order can be ignored?
-For a group $H$, define $Ω_n(H)$ to be the subgroup generated by elements of order less than $n$. For instance, if $n=3$ and $H=\operatorname{SL}(2,5)$ is the perfect group of order $120$, then $Ω_n(H)$ has order $2$, and $H/Ω_n(H)$ is the simple group of order $60$. If $n=4$ and $H=\operatorname{SL}(2,5)⋅3^4$ is the perfect group of order $(60)⋅(162)$ whose $3$-core is not complemented, then $Ω_n(H)$ has order $162$ and $H/Ω_n(H)$ is again the simple group of order $60$.
-My first question is if there are smaller examples for $n=4$, since the jump $1$, $2$, $162$ seems a bit drastic for $n=2, 3, 4$.
-
-Is there a group $H$ of order less than $(60)⋅(162)$ such that $H/Ω_4(H)$ is the simple group of order $60$?
-
-Probably, for each positive integer $n$, there is a finite group $H$ such that $H/Ω_n(H)$ is the simple group of order $60$. I am interested in whether such $H$ can be chosen to be "small" somehow.
-
-Is there a sequence of finite groups $H_n$ and a constant $C$ such that $H_n/Ω_n(H_n)$ is the simple group of order $60$ and such that $|H_n| ≤ C⋅n$?
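-The $n=3$ example above ($\Omega_3(\operatorname{SL}(2,5))$ has order $2$) is small enough to check by brute force. A minimal Python sketch, enumerating $\operatorname{SL}(2,5)$ as $2\times 2$ matrices over $\mathbb{Z}/5$ and closing the set of elements of order less than $3$ under multiplication; the helper names are mine:
-
-from itertools import product
-
-p = 5
-I = (1, 0, 0, 1)
-
-def mul(A, B):  # 2x2 matrix product mod p, matrices stored as tuples (a, b, c, d)
-    return ((A[0]*B[0] + A[1]*B[2]) % p, (A[0]*B[1] + A[1]*B[3]) % p,
-            (A[2]*B[0] + A[3]*B[2]) % p, (A[2]*B[1] + A[3]*B[3]) % p)
-
-def order(A):
-    n, B = 1, A
-    while B != I:
-        B, n = mul(B, A), n + 1
-    return n
-
-G = [A for A in product(range(p), repeat=4) if (A[0]*A[3] - A[1]*A[2]) % p == 1]
-assert len(G) == 120  # |SL(2,5)|
-
-gens = [A for A in G if order(A) < 3]  # elements of order 1 or 2
-H = set(gens)                          # close under multiplication; a finite
-frontier = set(gens)                   # subsemigroup of a finite group is a subgroup
-while frontier:
-    new = {mul(a, b) for a in frontier for b in H} | {mul(b, a) for b in H for a in frontier}
-    frontier = new - H
-    H |= frontier
-print(len(H))
-
-This prints 2, matching $|\Omega_3(\operatorname{SL}(2,5))| = 2$ and hence a quotient of order $120/2 = 60$.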
- -I would also be fine with some references to where such a problem is discussed. It would be nice if there was some sort of analogue to the Schur multiplier describing the largest non-silly kernel, and a clear definition of what a silly kernel is (I think it is too much to ask for a non-silly kernel to be contained in the Frattini subgroup, and I think it might be unreasonable to ask for the maximum amongst minimal kernels). - -In case it helps, here are some reduced cases that I know can be handled: -A simpler example: if instead of the simple group of order $60$, we concentrate on the simple group of order $2$, then we can choose $H_n$ to be the cyclic group of order $2^{1+\operatorname{lg}(n−1)}$ when $n≥2$, and the order of $H_n$ is bounded above and below by multiples of $n$. We can create much larger $H_n$ for $n≥3$ by taking the direct product of our small $H_n$ with an elementary abelian $2$-group of large order, but then $Ω_n(H_n×2^n) = Ω_n(H_n)×2^n$ has just become silly since the entire elementary abelian $2$-group part, $2^n$, is unrelated and uses a lot of extra generators, that is, it is not contained within the Frattini subgroup. -A moderate example: if instead of the simple group of order $60$, we take the non-abelian group of order $6$, then I can find a natural choice of $H_n$ with $|H_n| ≤ C⋅n$, but I am not sure if there are other reasonable choices. My choice of $H_n$ has $Φ(H_n)=1$, which suggests to me that Frattini extensions may not be the right idea. - -REPLY [4 votes]: Regarding your first question about groups of order less than $162\cdot 60$ for which $H/\Omega_4(H)$ equals $A_5$, the simple group of order 60. Clearly, in any example the order of $H$ must be a multiple of 60; and also $H$ must be perfect. It is now a routine problem to write a GAP program, using the library of perfect groups, to find answers. Note that within the GAP system, each perfect group is identified by a pair $[n, i]$, where $n$ denotes the order, and the $i$ identifies which perfect group of that order is meant (in case there are multiple). -Using this, I verified that all perfect groups whose order is a multiple of 60 and less than $162\cdot 60$ satisfy $H/\Omega_4(H)=1$. So your example of order $9720=162\cdot 60$ is indeed the first where this quotient is non-trivial. And it is pretty special with that property, too; the next examples I found are of order $155520=2592\cdot60$ and $311040=5184\cdot60$. (But note that the database is incomplete for all orders $2^n\cdot60, n\geq 10$.) -The perfect group $[174960, 2]$ satisfies $H/\Omega_4(H)\cong A_6$. But for all other perfect groups up to the order $302400=5040\cdot 60$, the quotient is again trivial. -Then at order $311040=5184\cdot 60$ there are again a couple examples where the quotient is $A_5$. -And finally, the perfect group with id $[311040, 14]$ is the first (up to the gaps in the database!) group to satisfy $H/\Omega_5(H)\cong A_5$ (indeed, for all other groups before it, that quotient is trivial).<|endoftext|> -TITLE: Divisor -- line bundle correspondence in algebraic geometry -QUESTION [62 upvotes]: I know a little bit of the theory of compact Riemann surfaces, wherein there is a very nice divisor -- line bundle correspondence. -But when I take up the book of Hartshorne, the notion of Cartier divisor there is very confusing. It is certainly not a direct sum of points; perhaps it is better to understand it in terms of line bundles. But Cartier divisors do not seem to be quite the same thing as line bundles. 
The definition is hard to figure out. Can someone clear the misunderstanding for me and explain to me how best to understand Cartier divisors? - -REPLY [96 votes]: When discussing divisors, a helpful distinction to make at the beginning is effective divisors vs. all divisors. Normally effective divisors have a more geometric description; all divisors can then be obtained from the effective ones by allowing some minus signs to come into the picture. -An irreducible effective Weil divisor on a variety $X$ is the same thing as an irreducible codimension one subvariety, which in turn is the same thing as a height one point $\eta$ of $X$. (We get $\eta$ as the generic point of the irred. codim'n one subvariety, and we recover the subvariety as the closure of $\eta$.) - An effective Weil divisor is a non-negative integral linear combination of irreducible ones, so you can think of it as a non-negative integral linear combination of height one points $\eta$. -Typically, one restricts to normal varieties, so that all the local rings at height one points are DVRs. Then, given any pure codimension one subscheme $Z$ of $X$, you can attach a Weil -divisor to $Z$, in the following way: -because the local rings at height one points are DVRs, if $Z$ is any codimension one subscheme of $X$, cut out by an ideal sheaf $\mathcal I_Z$, and $\eta$ is a height one point, then the stalk $\mathcal I_{Z,\eta}$ is -an ideal in the DVR $\mathcal O_{X,\eta}$, thus is just some power of the maximal ideal -$\mathfrak m_{\eta}$ (using the DVR property), say $\mathcal I_{Z,\eta} = \mathfrak m_{\eta}^{m_{Z,\eta}},$ and so the multiplicity $m$ of $Z$ at $\eta$ is well-defined. -Thus the effective Weil divisor $$div(Z) := \sum_{\eta \text{ of height one}} m_{Z,\eta}\cdot \eta$$ -is well-defined. -Note that this recipe only goes one way: starting with the Weil divisor, we can't recover $Z$, because the Weil divisor does not remember all the scheme structure (i.e. the whole -structure sheaf, or equivalently, the whole ideal sheaf) of $Z$, but only its behaviour at its generic points (which amounts to the same thing as remembering the irreducible components and their multiplicities). -An effective Cartier divisor is actually a more directly geometric object, namely, -it is a locally principal pure codimension one subscheme, that is, -a subscheme, each component of which is codimension one, and which, locally around each point, is the zero locus of a section of the structure sheaf. Now in order to cut out -a pure codimension one subscheme as its zero locus, a section of the structure -sheaf should be regular (in the commutative algebra sense), i.e. a non-zero divisor. -Also, two regular sections will cut out the same zero locus if their ratio is a unit -in the structure sheaf. So if we let $\mathcal O_X^{reg}$ denote the subsheaf of -$\mathcal O_X$ whose sections are regular elements (i.e. non-zero divisors in each stalk), -then the equation of a Cartier divisor is a well-defined global section of the quotient -sheaf -$\mathcal O_X^{reg}/\mathcal O_X^{\times}$. -Now suppose that we are on a smooth variety. Then any irreducible codimension one subvariety -is in fact locally principal, and so given a Weil divisor -$$D = \sum_{\eta \text{ of height one}} m_{\eta} \cdot\eta,$$ -we can actually canonically attach a Cartier divisor to it, in the following way: -in a n.h. 
of some point $x$, let $f_{\eta}$ be a local equation for the Zariski closure -of $\eta$; then if $Z(D)$ is cut out locally by $\prod_{\eta} f_{\eta}^{m_{\eta}} = 0,$ -then $Z(D)$ is locally principal by construction, and, again by construction, -$div(Z(D)) = D.$ -So in the smooth setting, -we see that $Z \mapsto div(Z)$ and $D \mapsto Z(D)$ establish a bijection between -effective Cartier divisors and effective Weil divisors. -On the other hand, on a singular variety, it can happen that an irreducible codimension one subvariety need not be locally principal in the neighbourhood of a singular point (e.g. a generating line on the cone $x^2 +y^2 + z^2 = 0$ -in $\mathbb A^3$ is not locally -principal in any neighbourhood of the cone point). Thus there can be Weil divisors that -are not of the form $div(Z)$ for any Cartier divisor $Z$. -To go from effective Weil divisor to all Weil divisors, you just allow negative coefficients. -To go from effective Cartier divisors to all Cartier divisors, you have to allow yourself -to invert the functions $f$ that cut out the effective Cartier divisors, or equivalently, to go from the sheaf of monoids $\mathcal O_X^{reg}/\mathcal O_X^{\times}$ to the associated sheaf of groups, which is $\mathcal K_X^{reg}/\mathcal O_X^{\times}.$ -(Here, it helps to remember that $\mathcal K_X$ is obtained from $\mathcal O_X$ -by inverting non-zero divisors.) -Finally, for the connection with line bundles: if $\mathcal L$ is a line bundle, -and $s$ is a regular section (i.e. a section whose zero locus is pure codimension one, -or equivalently, a section which, when we choose a local isomorphism $\mathcal L_{| U} -\cong \mathcal O_U$, is not a zero divisor), then the zero locus $Z(s)$ of $s$ -is an effective Cartier divisor, essentially by definition. -So we have a map $(\mathcal L,s) \mapsto Z(s)$ which sends line bundles with regular sections to effective Cartier divisors. This is in fact an isomorphism of monoids -(where on the left we consider pairs $(\mathcal L,s)$ up to isomorphism of pairs): -given an effective Cartier divisor $D$, we can define $\mathcal O(D)$ to be the -subsheaf of $\mathcal K_X$ consisting (locally) of sections $f$ such that the locus of -poles of $f$ (a well-defined Cartier divisor) is contained (as a subscheme)in the Cartier -divisor $D$ (perhaps less intuitively, but more concretely: if $D$ is locally cut out -by the equation $g = 0$, then $\mathcal O(D)$ consists (locally) of sections $f$ -of $\mathcal K_X$ such that $fg$ is in fact a section of $\mathcal O_X$). -The constant function $1$ certainly lies in $\mathcal O(D)$, and (thought of as a section -of $\mathcal O(D)$ -- not as a function!) its zero locus is exactly $D$. -Thus $D \mapsto (\mathcal O(D), 1)$ is an inverse to the above map $(\mathcal L,s) \mapsto -Z(s)$. -Finally, if we choose two different regular sections of the same line bundle, -the corresponding Cartier divisors are linearly equivalent. Thus we are led to the -isomorphism "line bundles up to isomorphism = Cartier divisors up to linear equivalence". -But, just to emphasize, to understand this it is best to restrict first to line bundles -which admit a regular section, and then think of the corresponding Cartier divisor as being -the zero locus of that section. This brings out the geometric nature of the Cartier divisor quite clearly.<|endoftext|> -TITLE: Are the computable reals finitary? 
-QUESTION [7 upvotes]: In the comment thread of an answer, I said: - -The computable numbers are based on the intuitionistic continuum, and are not finitary. - -To which T.. replied: - -Computable numbers are not based on the intuitionistic continuum. - -This disagreement contains, I think, a good example of a philosophical question: are the computable reals within the scope of finitistic mathematics? -References - -Bendegem, 2010, Finitism in Geometry -Edalat, 2009, A computable approach to measure and integration theory -Zach, 2001, Hilbert's Finitism -Zach, 2003, Hilbert's Program - -REPLY [2 votes]: It depends on what exactly you mean by finitary. A computable real has a finite description (a Turing machine), so it is a finite object. -But many properties of computable real numbers are not finitary. -We can develop an interesting amount of analysis in very weak arithmetic theories (see this), theories which are much weaker than PRA (which is often associated with Hilbert's finitism).<|endoftext|> -TITLE: Use of noncommutative group cohomology -QUESTION [7 upvotes]: I have seen many places where group cohomology, for a group acting on a module, is used extensively. But beyond seeing the definition and some claims of partial results, I haven't seen any uses of the case when we replace the action on a module by an action on a noncommutative group. It seems hard to believe that much can be made out of it, as $H^1$ is then just a set, not even a group. Can somebody explain some uses of bringing up and studying this notion? - -REPLY [6 votes]: Matt E's response is the canonical first answer. As someone (else) who works with nonabelian group (in particular, Galois) cohomology frequently, let me give a second answer. -Certain interesting maps between commutative cohomology groups are defined via a non-commutative intermediary. The justification for this is that, though one does not in general have anything like a "long exact sequence" in non-commutative cohomology, if one has an extension of $\mathfrak{g}$-modules -$$1 \rightarrow Z \rightarrow G \rightarrow A \rightarrow 1$$ -where $A$ is commutative and $Z$ is central, then one gets a connecting map in cohomology -$$\Delta: H^1(\mathfrak{g},A) \rightarrow H^2(\mathfrak{g},Z).$$ -Often elements in an $H^2$ may be viewed as "obstructions" to something desirable happening at the $H^1$-level. In particular, this is the case for the period-index obstruction map in the Galois cohomology of abelian varieties. See for instance -http://math.uga.edu/~pete/wc1.pdf -and also publications [12], [14], [16] on -http://math.uga.edu/~pete/papers.html<|endoftext|> -TITLE: Explicitly proving invariance of curvatures under isometry -QUESTION [13 upvotes]: I would like to know how to explicitly prove that the Riemann curvature, Ricci curvature, sectional curvature and scalar curvature are left invariant under an isometry. -I can't see this explained in most books I have looked at. -They at most explain preservation of the connection. -I guess doing an explicit proof for the sectional curvature should be enough (and easiest?) since all the rest can be written in terms of it. - -Given Akhil's reply I think I should try to understand the connection invariance proof better, and here goes my partial attempt. -Let $\nabla$ be the connection on the manifold $(M,g)$ and $\nabla '$ be the Riemann connection on the manifold $(M',g')$, and between these two let $\phi$ be the isometry.
Then one wants to show two things: - -$D\phi [\nabla _ X Y] = \nabla ' _{D\phi[X]} D\phi [Y]$ -$R(X,Y)Z = R'(D\phi [X],D\phi [Y]) D\phi [Z]$ - -where $R$ and $R'$ are the Riemann curvature tensors of $(M,g)$ and $(M',g')$ respectively. -One defines the map $\nabla ''$ on $M$ which maps two vector fields on $M$ to another vector field using $\nabla '' _X Y = D\phi ^{-1} (\nabla' _{D\phi[X]} D\phi [Y])$. By the uniqueness of the Riemann connection the proof is complete if one can show that this $\nabla ''$ satisfies all the conditions of being a Riemann connection on $M$. -I am getting stuck after a few steps while trying to show the Leibniz property of $\nabla ''$. Let $f$ be some smooth function on $M$; then one would like to show that -$\nabla '' _X fY = X(f)Y + f\nabla '' _X Y$, which is equivalent to showing that -$D\phi ^{-1} (\nabla' _{D\phi[X]} D\phi [fY]) = X(f)Y + f D\phi ^{-1} (\nabla' _{D\phi[X]} D\phi [Y])$, knowing that $\nabla '$ satisfies the Leibniz property on $M'$. -Somehow I am not able to unwrap the above to prove this. -I can get the second term of the equation but not the first one. -Proving symmetry of $\nabla ''$ is easy, but I am again stuck proving metric compatibility. If $X,Y,Z$ are 3 vector fields on $M$ then one would want to show that -$Xg(Y,Z) = g(\nabla ''_X Y,Z) + g(Y, \nabla '' _X Z)$, -which is equivalent to showing that -$Xg(Y,Z) = g(D\phi ^{-1} (\nabla' _{D\phi[X]} D\phi [Y]),Z) + g(Y,D\phi ^{-1} (\nabla' _{D\phi[X]} D\phi [Z]) )$, -knowing that $\nabla'$ satisfies the metric compatibility equation on $M'$. -It would be helpful if someone could help me fill in the steps. -Then one is left with proving the curvature endomorphism equation. - -REPLY [2 votes]: Take a look at the last big displayed equation under "formal definition" here. It shows you Gauss's explicit form for a Levi-Civita connection in terms of the metric. Since you know how the metric transforms under an isometry, and how a Lie bracket transforms under a diffeomorphism, working out how the connection transforms under an isometry amounts to putting those ingredients together. -http://en.wikipedia.org/wiki/Levi-Civita_connection<|endoftext|> -TITLE: Lie algebras and infinitesimals -QUESTION [6 upvotes]: I have seen in many places the notion that Lie algebras are infinitesimal objects and that they look really closely at a point. But I never understood this. They are abstract algebraic objects different from rings in that they are equipped with a weird sort of product and a weird Jacobi identity. Any hints on how to make this connection to infinitesimals? - -REPLY [9 votes]: The direct infinitesimal analogue of a given Lie group is not the Lie algebra with its bracket operation and Jacobi identity, but the Lie algebra (thought of as just the space of tangent vectors at the identity, without the added structure of a bracket operation) with addition of elements being the group operation. Addition is commutative but the group generally is not. To capture the noncommutativity you need to squeeze additional information from the group down to the infinitesimal level of the Lie algebra (by taking second-order information in the series expansion of group elements near the identity; the tangent space is first-order). -The Lie multiplication $[x,y]$ is the second-order infinitesimal analogue of the commutator $x y x^{-1} y^{-1}$, and the Jacobi identity is an analogue of an identity for the commutator.
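-(To make the "second-order" statement concrete in the special case of matrix groups: expanding the exponentials as power series and multiplying out, one finds -$$e^{sX}e^{tY}e^{-sX}e^{-tY} = I + st\,(XY-YX) + (\text{terms of total degree} \geq 3 \text{ in } s,t),$$ -so the group commutator is trivial to first order, and its leading non-trivial term is exactly $st\,[X,Y]$.)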
Historically, the Jacobi identity for algebras (that is, for Lie algebras whose bracket is $XY - YX$ in an associative algebra) must have come first, and is used mainly in algebras, but you can think of it as coming from the group. -Locally, second-order information is enough: the Lie algebra determines the structure of the group, up to some questions of a different, "global" nature (topology) about connectivity and covering spaces. - -REPLY [7 votes]: This is all carefully explained in Chapter 8 of Fulton and Harris. The key fact is that if $G$ is a connected Lie group, then $G$ is generated by the elements in any neighborhood of the identity. This implies that a morphism out of $G$ is determined by what it does to elements arbitrarily close to the identity (and it is a general principle in category theory that an object is determined by the morphisms out of it). Since we have a tangent space at the identity, we can say even more: it turns out that a morphism $f : G \to H$ is determined by its differential $df : T_e G \to T_e H$, where $T_e$ is the tangent space at the identity, otherwise known as the Lie algebra. -This differential is just a linear map between finite-dimensional vector spaces, so it's much easier to handle than the original map $f$. The problem is then to characterize which linear maps can occur. As a necessary condition, $df$ must preserve the Lie bracket on $T_e G$, and if $G$ is simply connected this is both necessary and sufficient. So essentially the whole point of the wacky definition of a Lie algebra is to make this theorem true. -This means, roughly speaking, that Lie algebras capture the local, or infinitesimal, structure of a Lie group. The Lie algebra of a Lie group can't capture the global topological structure, but being able to separate out the easy part and the hard part of understanding a Lie group is very valuable. -(The connection to Charles' answer is that a tangent vector at the identity determines, by translation, a left-invariant vector field on $G$. You can think of such a vector field as specifying, at each element of $G$, a direction in which something can flow.)<|endoftext|> -TITLE: Cantor Set and Compact Metric Spaces -QUESTION [5 upvotes]: Can every compact metric space be realized as the continuous image of a Cantor set? - -REPLY [9 votes]: Yes (assuming it's nonempty, of course). Moreover, if you google "continuous image of the Cantor set", the first hit takes you to -http://en.wikipedia.org/wiki/Cantor_set -where you can read that this theorem is true and a reference is given to Willard's General Topology. (The article does not say this and perhaps it should, but it is specifically Theorem 30.7 on p. 217 of the Dover edition.) - -REPLY [7 votes]: Yes. -See: -https://mathoverflow.net/questions/5357/theorems-for-nothing-and-the-proofs-for-free/5388#5388 -And the comment of Harald Hanche-Olsen. - -Surprising, yes, but once you know about it, it seems easy enough to cook up a proof. Just write the set as a union of two closed subsets, decide to map the left half of the Cantor set onto one and the right half to the other, then do the same to each of these two sets, and so on. In the limit you have the map you want, provided you have arranged for the diameters of the parts to go to zero.<|endoftext|> -TITLE: Latest known result on Lindelöf hypothesis -QUESTION [7 upvotes]: The Phragmén-Lindelöf theorem gives a consequence of the Riemann hypothesis, viz, the Lindelöf hypothesis.
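-(To fix the statement in question: the Lindelöf hypothesis asserts that $\zeta(\tfrac{1}{2}+it) = O(|t|^{\varepsilon})$ as $|t| \to \infty$, for every $\varepsilon > 0$.)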
As such this is weaker than the Riemann hypothesis; but it is still considered that even a proof of this weaker result would be a breakthrough. -Question: - -What is the strongest known result yet on the Lindelöf hypothesis? - -REPLY [3 votes]: The Lindelöf hypothesis may have been proved -In work posted on the arXiv (latest version March 2018, previous version November 2017) Professor Athanassios Fokas of Cambridge University states that he has proved it. In the first version of his paper (August 2017), he offers a proof of a "slight variant"; in the second and third versions he says it is possible to get from there to a proof of the Lindelöf hypothesis itself. The publication of the required step is stated to be under preparation in a linked paper co-authored by himself and two other researchers. -Given the stature of the claimant, the claim is in a different category from the many made by people with no record of weighty peer-reviewed publications who write that they have proved famous hypotheses.<|endoftext|> -TITLE: Does a Person Need a Mathematics Degree in order to Publish in a Mathematics journal? -QUESTION [36 upvotes]: I am a neophyte amateur mathematician. I have been reading a lot about journals and the topic of peer review in mathematics journals. Does one have to have professional credentials or a doctorate in order to publish in peer-reviewed mathematics journals, or just the desire to competently solve mathematical problems? - -REPLY [2 votes]: In the math department at MIT, there's an annual award for the best published paper by an undergraduate majoring in math. Most undergraduates lack credentials other than a high-school diploma, and one must suspect that most undergraduates majoring in math, if they have professional credentials, are credentialed in other fields.<|endoftext|> -TITLE: Striking applications of Morera's theorem -QUESTION [14 upvotes]: Morera's theorem is an underappreciated theorem in complex analysis. I have been struck by the simplicity of its proof and some clever applications of it, and I have been interested in finding out more of them. Please contribute examples. One example is the Weierstrass theorem that if a sequence of holomorphic functions converges absolutely and uniformly on every compact subset of a domain, then the limit is also holomorphic. And there are numerous applications of this latter fact. -So, please come ahead and contribute clever and slick applications of Morera's theorem that will impress people! - -REPLY [5 votes]: Not sure if either of these applications qualifies as "striking," but here's my two cents: -In Rudin's "Real and Complex Analysis" (Third Edition), Morera's Theorem assists in the proofs of the following two interesting theorems. (Here, I'm paraphrasing, not quoting.) - -Müntz-Szász Theorem: Let $0 < \lambda_1 < \lambda_2 < \cdots$, and let $X = \{1, x^{\lambda_1}, x^{\lambda_2}, \ldots\} \subset C[0,1].$ Then $X$ is dense in $C[0,1]$ if and only if $\sum \frac{1}{\lambda_n} = \infty$. - -In proving the reverse implication $(\Leftarrow)$, Rudin invokes Morera's Theorem to show that the function $$f(z) = \int_0^1 t^z\,d\mu(t)$$ -is holomorphic in the right half-plane, where $\mu$ is a complex Borel measure concentrated on $(0,1]$. - -Theorem 16.8: Let $\Omega \subset \mathbb{C}$ be a region, $L$ a line or circular arc, and suppose $\Omega - L = \Omega_1 \cup \Omega_2$ is the union of two regions.
If $f\colon \Omega \to \mathbb{C}$ is continuous in $\Omega$, and is holomorphic in both $\Omega_1$ and $\Omega_2$, then $f$ is holomorphic on $\Omega$. - -Morera's Theorem is used (of course) to show that $f$ is holomorphic.<|endoftext|> -TITLE: Additive category that is not abelian -QUESTION [27 upvotes]: What is a simple example, without getting into the mess of triangulated categories, of an additive category that is not abelian? - -REPLY [19 votes]: There've been lots of mildly complicated examples given, but what about the category of even-dimensional vector spaces over a field?<|endoftext|> -TITLE: Meaning of Lagrange multiplier always being 0? -QUESTION [6 upvotes]: I'm optimizing a family of objectives over $x_1,\ldots,x_n$ with the single constraint that the $x$'s add up to $1$, and when using the method of Lagrange multipliers, the single multiplier always ends up $0$ for members of this family... does this tell us anything interesting about these objectives? -Edit: the answer was -- it means the constraint is inactive, i.e., removing the constraint doesn't change the answer of the optimization problem. - -REPLY [4 votes]: One of two things that I can think of is happening here. The global minimum (or max, whatever your goal is here) of your function may actually lie within the simplex defined by $\sum_{i=1}^{4} q_{i} = 1$. In this case your algorithm is finding the correct solution and the constraint is trivially satisfied. (Consider minimizing $x^2+y^2$ constrained to the line $x = 0$, so the Lagrange formulation is $L(x,y,\nu) = x^2 + y^2 + \nu x$. Solving $\nabla L=0$ demands that $\nu=0$.) -Alternatively, your algorithm could be sampling only points that already satisfy the constraint - so you are only checking those points (intentionally or otherwise). Then the algorithm is hopefully finding a local minimum (or max) along this constraint and your multiplier will always be zero. -I hope this helps you identify where a problem may be in your model - or clears up a conceptual issue and allows you to move forward with the solution you have. As far as interpreting the meaning of Lagrange multipliers, look here.<|endoftext|> -TITLE: Colimit of $\frac{1}{n} \mathbb{Z}$ -QUESTION [6 upvotes]: We should have $\displaystyle\mathbb{Q} = \lim_{\rightarrow} \frac 1n \mathbb{Z}$, but a few things are confusing me. Since the index category is a set, we should get the coproduct: $\bigsqcup \frac 1n \mathbb{Z}$. But in this coproduct we have $1/2 \neq 2/4$, which is clearly wrong. This makes me suspect that the index category is not an index set. What we probably should get is a union (and not a coproduct) in the category of rings (in which $1/2 = 2/4$). -Question: Help me understand what the index category is (e.g., what are its maps) and am I correct in believing we should get an ordinary union? - -REPLY [7 votes]: The index category is the category of positive integers with a morphism from $n$ to $m$ iff $n \mid m$.
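-(Concretely, when $n \mid m$, say $m = kn$, the transition morphism is the inclusion $\frac{1}{n}\mathbb{Z} \hookrightarrow \frac{1}{m}\mathbb{Z}$ sending $\frac{a}{n} = \frac{ka}{m}$; it is along these maps that $1/2$ and $2/4$ become identified in the colimit.)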
The colimit is not the coproduct but the union of $\frac{1}{n}\mathbb{Z}$ with the natural embeddings (which is the same as the union of $\frac{1}{n}\mathbb{Z}$ inside $\mathbb{Q}$ — hence the equality).<|endoftext|> -TITLE: Dirichlet's Divisor Problem -QUESTION [9 upvotes]: We know that if $ \displaystyle d(n)= \sum\limits_{d \mid n} 1$, then we have -$$ \sum\limits_{n \leq x} d(n)= x\log{x} + (2C-1)x + \mathcal{O}(\sqrt{x})$$ -I have referred to Apostol's "Analytic Number Theory" and I understood the first half of the proof, where the error term is $\mathcal{O}(x)$, but please tell me how to improve the error term to $\sqrt{x}$. - -REPLY [14 votes]: You can use inclusion/exclusion: -$$\sum_{n\leq x} d(n) = \sum_{mn\le x} 1 = \sum_{m\le \sqrt{x}} \ \sum_{n\le x/m} 1 + \sum_{n\le \sqrt{x}} \ \sum_{m\le x/n} 1 - \sum_{m\le\sqrt{x}} 1 \sum_{n\le\sqrt{x}} 1.$$ -Now the first two double sums are the same (with the roles of $m$ and $n$ interchanged). Hence -$$ \sum_{n\leq x} d(n) = 2 \sum_{n\le \sqrt{x}} \Big\lfloor\frac{x}{n}\Big\rfloor - \Big( \big\lfloor\sqrt{x}\big\rfloor \Big)^2$$ -where $\lfloor x \rfloor$ denotes the largest integer less than or equal to $x$. Now use the fact that $\lfloor x \rfloor=x+O(1)$ to finish the proof. You will have to use the identity -$$ \sum_{n\leq x} \frac{1}{n} = \log x +\gamma + O\Big(\frac{1}{x}\Big)$$ -where $\gamma$ is Euler's constant.<|endoftext|> -TITLE: Explanation of numeric experiment that approximates e? -QUESTION [6 upvotes]: Recently I found this post on Reddit. It describes the following algorithm to find e: - -Here is an example of e turning up - unexpectedly. Select a random number - between 0 and 1. Now select another - and add it to the first. Keep doing - this, piling on random numbers. How - many random numbers, on average, do - you need to make the total greater - than 1? The answer is e. - -This means that you need on average ~2.7 random real numbers to make the sum greater than 1. -However, a random number between 0 and 1 would on average be equal to 0.5. So intuitively I would think that, on average, only 2 random numbers would be required to have a sum > 1. -So where did I go wrong in my thinking? -Update -I just figured it out myself: You need at least two numbers to have a sum > 1, but often you'll need three, sometimes you'll need four, sometimes five, etc... So it is only natural that the average number required is above 2. -Thanks for the replies! - -REPLY [3 votes]: The other answers should have resolved why the number is >2, but for the exact reason why it is e: -The probability that $N$ random numbers have sum $<1$ is exactly the volume of the standard unit $N$-simplex, i.e. $1/N!$. -Therefore the probability that it takes exactly $N$ random numbers for the sum to reach $\geq 1$ is -$$ \frac1{(N-1)!} - \frac1{N!} = \frac1{N(N-2)!} \qquad (N \geq 2)$$ -Therefore the expected number is -$$ \sum_{N=2}^\infty \frac N{N(N-2)!} = \sum_{N=2}^\infty \frac 1{(N-2)!} = e $$<|endoftext|> -TITLE: Choosing subsets of a set with a specified amount of maximum overlap between them -QUESTION [6 upvotes]: How can I determine the size of the largest collection of $k$-element subsets of an $n$-element set such that each pair of subsets has at most $m$ elements in common?
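-For small parameters one can at least experiment; below is a minimal greedy sketch in Python (it scans candidates in lexicographic order and only produces some valid family, giving a lower bound, not necessarily the largest collection): -
-    from itertools import combinations
-
-    def greedy_family(n, k, m):
-        # collect k-subsets of {0,...,n-1} whose pairwise
-        # intersections have size at most m (greedy lower bound)
-        family = []
-        for cand in combinations(range(n), k):
-            s = set(cand)
-            if all(len(s & other) <= m for other in family):
-                family.append(s)
-        return family
-
-    print(len(greedy_family(8, 3, 1)))  # a valid family, maybe not maximum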
- -REPLY [5 votes]: I think this problem is still open, but the following might be useful: -Ray-Chaudhuri-Wilson Theorem: -If $L$ is a set of $m$ integers and $F$ is an $L$-intersecting $k$-uniform family of subsets of a set of $n$ elements, where $m \le k$, then -$|F| \le {n \choose m}$ -$\bullet$ -A $k$-uniform family is a set of subsets, each subset being of size $k$. -An $L$-intersecting family is such that the intersection size of any two distinct sets in the family is in $L$. -The following result of Frankl gives us a lower bound. -Frankl's Result: -For every $k \ge m \ge 1$ and $n \ge 2k^{2}$ there exists a $k$-uniform family $F$ of size $> (\frac{n}{2k})^{m}$ on $n$ points such that $|A \cap B| \le m-1$ for any two distinct sets $A,B \in F$. -$\bullet$ -For an algorithm for constructing such sets (based on Frankl's result) see: https://stackoverflow.com/questions/2955318/creating-combinations-that-have-no-more-one-intersecting-element/2955527#2955527<|endoftext|> -TITLE: How is moduli of curves relevant in physics? -QUESTION [8 upvotes]: From: Moduli space -we see that moduli of curves is a very algebro-geometric topic. -It is easy to understand its relevance and importance in algebraic geometry. But the mind boggles when we try to imagine how on earth such a topic from pure and abstruse mathematics is relevant in physics. -I will be thankful if somebody can give some explanation. - -REPLY [6 votes]: In quantum field theory many quantities are calculated as (formal) Feynman path integrals, that is, integrals "over the space of all paths". To make sense of this in dimensions higher than 2, one uses a perturbation expansion (in powers of Planck's constant) of the integral, which leads to a sequence of finite-dimensional integrals described by Feynman diagrams: finite graphs with (among other decorations) some "input" and "output" vertices --- think of a process where some particles collide and some are emitted. -In string theory, instead of a sum over paths of a point there is a sum over trajectories of a closed loop (a string), i.e., integration over some space of surfaces. The analogue of the finite-dimensional perturbative Feynman diagrams are surfaces connecting some oriented loops. Loops with positive orientation relative to the surface are the "inputs" and the others are the "outputs". The surface can also have empty input or output. Integration over all surfaces of this type is still an infinite-dimensional problem. To reduce to a finite-dimensional integral, one integrates over the moduli space of conformal structures on each topological type of bordered surface. This is finite-dimensional and there are standard measures on it (Weil-Petersson). For problems with conformal invariance ("conformal field theory") this recipe was given by Polyakov and is basic for string theory. As far as I understand it, the worldsheet is always governed by a conformal field theory, so the Polyakov recipe is "the" method for defining the observables in perturbative string theory. -Also, for open strings one might want the boundaries to be open circles, or punctures, thus cutting out closed discs or points from the surface. So the general moduli space is that of conformal structures on surfaces of a fixed finite genus with a given finite type of boundary, and one can at least define a measure on this space.
Rigorous calculations can be extremely complicated, as seen in the arXiv papers by D'Hoker and Phong, where they went through a huge tour de force to construct the perturbative superstring measure through 2 and 3 loops. -As Matt explained, the relation to algebraic geometry is that (at least for compact surfaces without boundary) the moduli space of conformal structures is the same as that of algebraic structures; in each conformal class there is an algebraic curve of that genus. I am less sure of this in the non-compact case of punctured surfaces, and for bordered Riemann surfaces where the boundary contains loops, the moduli space is of geometric/symplectic/complex-analytic nature and I don't know if it has an algebraic analogue. -(edit: I think that there is a basic difference between QFT and string theory in this analogy, because in string theory only the perturbative measure is known at present, while in QFT path-space integrals are thought of as an underlying non-perturbative theory and one can make sense of that to a large extent even if Feynman's "measure" on path space doesn't exist as an integration measure in the ordinary mathematical sense. In string theory the non-perturbative definition of the theory is presently unknown. There are objects in string theory that are considered non-perturbative and thus hints of the underlying theory, but even conjecturally or heuristically there is not a consensus as to what perturbative string theory is a perturbation of.) -(added: -According to this paper, for bosonic string theory in Polyakov's prescription, the moduli spaces in the perturbative integrals are those of smooth closed surfaces, without punctures or boundary loops. This is interesting because conformal field theory does use the more complicated surfaces, but I don't know the details of how CFT is used in string theory. -http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.cmp/1104116138 )<|endoftext|> -TITLE: Is It True that We Can Never Be Sure of Validity of a Mathematical Proof? -QUESTION [8 upvotes]: The reason I ask this is because difficult mathematical proofs are just plain not self-evident. You would need a few years of intensive study before you can get to the point of understanding the topics and the proofs. The problem is that for a difficult math theorem, maybe only a handful of mathematicians have studied and agreed on the validity of the proof. There is a chance, given that the topic is highly esoteric, that there are bugs in the proof that render it invalid. We just don't know. -This would give us a conundrum; we thought that mathematics is the most secure branch of knowledge, but it isn't, because a lot of the proofs are convoluted and esoteric to the point that only a handful of people can understand them. So we can never be 100% sure that a proof is always valid. -Am I right? - -REPLY [4 votes]: If I understand you correctly, you aren't bringing up the issues raised by Gödel. You are raising the point that since it's always possible a mistake has been made, we can't be sure of the validity of a mathematical proof. There is always the possibility that even our computational examination of a proof is forgetting some facet that renders it untrue. -One must understand, however, that this is not a flaw in mathematics but an inherent element of reality. Maths does remain the most secure branch of knowledge, except that the way we practically do math may not be as secure as previously imagined.
A fair point.<|endoftext|> -TITLE: Discontinuous at rationals and differentiable at irrationals? -QUESTION [18 upvotes]: We know that there exist real functions which are continuous at each irrational and discontinuous at each rational number. - -But does there exist a function $f: \mathbb{R} \to \mathbb{R}$ that is differentiable at every irrational and discontinuous at every rational? - -REPLY [6 votes]: I think that there can be no such function. -Suppose that $f$ is discontinuous at some point $q$. There must be constants $A,B$ so that $f(s) < A < B < f(t)$ holds for points $s,t$ arbitrarily near to $q$. -Suppose that $x_0 < q$ with $q - x_0 < B - A$, and choose sequences $s_n, t_n > x_0$ converging to $q$ and satisfying $f(s_n) < A < B < f(t_n)$. Then we have -$$\frac{f(t_n) - f(x_0)}{t_n - x_0} - \frac{f(s_n) - f(x_0)}{s_n - x_0} > \frac{B - f(x_0)}{t_n - x_0} - \frac{A - f(x_0)}{s_n - x_0} \to \frac{B-A}{q - x_0} > 1$$ -and we may conclude that there exist $s,t$ as close to $q$ as we like satisfying -$$\frac{f(t) - f(x_0)}{t - x_0} - \frac{f(s) - f(x_0)}{s - x_0} > 1$$ -Arguing similarly when $x_0 > q$, and trivially (since the Newton quotients will be unbounded as a result of the discontinuity) when $x_0 = q$, we get the same conclusion assuming only $|x_0 - q| < B-A$. -For $n$ a positive integer let $X_n$ denote all of the points $x_0$ in $\mathbf{R}$ for which there exist $s,t$ at distance less than $1/n$ from $x_0$ satisfying the preceding inequality. Our argument implies that, for every $n$, $X_n$ is a neighbourhood of every point $q$ at which $f$ is discontinuous (consider the points $x_0$ whose distance from $q$ is less than $1/n$ and $B-A = B_q - A_q$). If it happens that the points of discontinuity are dense (as in the case of $\mathbf{Q}$) then this implies that every $X_n$ contains an open dense set and thus $\bigcap_n X_n$ is second category in $\mathbf{R}$. -The point is of course that no point at which $f$ is differentiable can be in all the $X_n$, so if $f$ is discontinuous on the rationals then $f$ is differentiable on at most a set of 1st category in $\mathbf{R}$ (which the irrationals are not). - -REPLY [2 votes]: It came up in an answer that has been deleted that a solution to this problem can be found as "solution 2" in the following file: http://www.isibang.ac.in/~statmath/problems/soljan09.pdf. Another reference is "A theorem concerning functions discontinuous on a dense set" by Fort.<|endoftext|> -TITLE: Primes dividing the values of integer polynomials -QUESTION [13 upvotes]: Problem: Let $n$ be an integer and $p$ a prime dividing $5(n^2-n+\frac{3}{2})^2-\frac{1}{4}$. Prove that $p \equiv 1 \pmod{10}$. -The polynomial can be re-written as $(\sqrt{5}(n^2-n+\frac{3}{2})-\frac{1}{2})(\sqrt{5}(n^2-n+\frac{3}{2})+\frac{1}{2})$. If this vanishes mod $p$ then $5$ is a quadratic residue mod $p$, which shows that $p \equiv \pm 1 \pmod{5}$ (the primes 2 and 5 are easily ruled out). It feels like the problem should be solvable by understanding the splitting of primes in the splitting field of this polynomial, but I can't find an appropriate "reciprocity law". -The things I'm not sure about are: - -How does one rule out the primes congruent to $-1$ mod $5$? -Under what circumstances is it the case that the set {rational primes that split in the ring of integers of some number field} is the union of arithmetic progressions? This is a kind of generalized reciprocity law, but I don't know in what generality such laws are known to hold.
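-For what it's worth, the claim is easy to confirm numerically for small $n$: the quantity in question equals the integer polynomial $5n^4-10n^3+20n^2-15n+11$, and a quick sketch with sympy (assuming it is available) finds no counterexample: -
-    from sympy import factorint
-
-    def p(n):
-        # 5*(n**2 - n + 3/2)**2 - 1/4, expanded into an integer polynomial
-        return 5*n**4 - 10*n**3 + 20*n**2 - 15*n + 11
-
-    for n in range(-30, 31):
-        for q in factorint(p(n)):
-            assert q % 10 == 1, (n, q)
-    print("every prime divisor is 1 mod 10 for |n| <= 30")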
- -REPLY [8 votes]: HINT Your polynomial $p(n)$ splits over ${\mathbb Q}(w), w = \zeta_5$, namely -$ 125 \; p(x) = 125 \; (5 x^4-10 x^3+20 x^2-15 x+11) $ -$\quad\quad\quad\quad\quad\; = \;\; (5 x+3 w^3-4 w^2-w-3) (5 x+4 w^3+3 w^2+7 w+1)$ -$\quad\quad\quad\quad\quad\quad\; * \; (5 x-3 w^3+4 w^2+w-2) (5 x-4 w^3-3 w^2-7 w-6) $ -Regarding the other questions in the query and the comments: there has been much research on various ways of characterizing number fields by splitting behavior, norm sets, etc. - going all the way back to Kronecker. Searching on the terms "Kronecker equivalent" or "arithmetically equivalent" will find pertinent literature. E.g. below is one enlightening review -MR0485790 (58 #5595) 12A65 (12A75) -Gauthier, François -Ensembles de Kronecker et représentation des nombres premiers par une forme quadratique binaire. -Bull. Sci. Math. (2) 102 (1978), no. 2, 129--143. -L. Kronecker [Berlin Monatsber. 1880, 155--162; Jbuch 12, 65] first tried to characterize algebraic number fields by the decomposition behavior of primes. Recently, the Kronecker classes of algebraic number fields have been studied by W. Jehne [J. Number Theory 9 (1977), no. 2, 279--320; MR0447184 (56 #5499)] and others. -This article deals with the following types of questions: -(a) When does the set of primes having a given splitting type in an algebraic number field contain (up to a finite set) an arithmetic progression? -(b) When is this set a union of arithmetic progressions? -If $K$ is an algebraic number field, let $\text{spl}(K)$ denote the set of rational primes which split completely in $K$ and let $\text{spl}^1(K)$ denote the set of rational primes which have at least one linear factor in $K$. Moreover, if $K/Q$ is a Galois extension with Galois group $G$, let $\text{Art}_{K/Q}$ denote the Artin map which assigns a conjugacy class of $G$ to almost all rational primes $p$. If $C$ is a conjugacy class of $G$ then $\text{Art}_{K/Q}^{-1}(C)$ is the set of primes having Artin symbol $C$. Finally, a set $S$ of rational primes is said to contain an arithmetic progression or to be the union of arithmetic progressions if the set of primes in the arithmetic progression(s) differs from $S$ by at most a finite set. -Let $G'$ denote the commutator subgroup of the Galois group $G$. Two results proved in the article are: -Theorem A. The following statements are equivalent: -(a) $|C|=|G'|$; -(b) $\text{Art}_{K/Q}^{-1}(C)$ is the union of arithmetic progressions; -(c) $\text{Art}_{K/Q}^{-1}(C)$ contains an arithmetic progression. -Theorem B. The following statements are equivalent: -(a) $K/Q$ is abelian; -(b) $\text{spl}(K)$ contains an arithmetic progression; -(c) $\text{spl}(K)$ is the union of arithmetic progressions; -(d) there exist a modulus $m$ and a subgroup $\{r_1,\cdots,r_t\}$ of the multiplicative group modulo $m$ such that $\text{spl}(K)$ is the union of the arithmetic progressions $mx+r_i\ (i=1,\cdots,t)$. -When $K/Q$ is a non-Galois extension it is well known that $\text{spl}(K)=\text{spl}(\overline K)$ where $\overline K$ denotes the normal closure of $K$. It follows from Theorem B that $\text{spl}(K)$ cannot contain an arithmetic progression. However, the author gives two conditions, one necessary and the other sufficient, for $\text{spl}^1(K)$ to be the union of arithmetic progressions when $K/Q$ is non-Galois. As a final application of his result the author gives a necessary and sufficient condition for the set of primes represented by a quadratic form to be the union of arithmetic progressions.
-The proofs use class field theory, properties of the Artin map and the Čebotarev density theorem. -Reviewed by Charles J. Parry<|endoftext|> -TITLE: What's so special with small categories? -QUESTION [6 upvotes]: Sometimes one encounters the requirement that the objects of a category need to form a set. What if they do not? Could you provide examples of what could go wrong? (One example per answer). - -REPLY [10 votes]: (Arriving late on the scene.) One important thing about small categories which hasn't been mentioned yet is that they're what you use to define small limits and colimits. -You want to say, for instance, that the category $\mathbf{Top}$ of topological spaces is complete, in some sense. But if you tried to say "all (co)limits exist", then you could take something like "the coproduct of $\mathrm{ob}(\mathbf{Top})$-many copies of the point", which would get you into the territory of Russell's paradox. -What you can show is statements like "the category of small spaces has all small (co)limits". (Or replace small with "$n$-small" for the $n$th Grothendieck universe, or with "$\lambda$-bounded" if you prefer set-theoretic language, and so on.) -So this exemplifies an important thing about small categories: they're the categories which play nicely in algebraic constructions with small objects of other kinds. -Yet another reason for wanting to talk about small categories is that we obviously want to discuss some kind of category of categories, $\mathbf{Cat}$. But versions of Russell's paradox tell us that this category has to be bigger than all the categories we put into it; so to form it as a category itself, we have to accept a size limitation of some kind. -Or, from another point of view (a different logical formalism, to be precise), if we want to really talk about the category of all categories, we have to let ourselves talk about this as a proper class — in other words, step back to a perspective where we can see larger things than we could before; at which point, for want of a better term, we start calling the things we could see before "small".<|endoftext|> -TITLE: Permutations with duplicates -QUESTION [8 upvotes]: I have a data set $2\; 3\; 3\; 4\; 4\; 4\; 4$ -I want to find the number of unique $3$-digit numbers that can be formed using this. -I was thinking of doing $\large{\frac{^7P_3}{4!\times 2!}}$, but this doesn't seem right. - -REPLY [4 votes]: I answered a similar question here. -Your data set is a multiset that can be written as -$$ -S=(\underbrace{2,\cdots,2 }_{1},\underbrace{3,\cdots,3 }_{2},\underbrace{4,\cdots,4 }_{4}) -$$ -where the number under the brace is the multiplicity of the element above, which is the number of instances of the element in the multiset. Let's call the number of digits you need $l$ (in this case $l=3$).
Let's call the number of distinct elements of the multiset $n$ (in this case $n=3$) and let's call the multiplicity of the $k$-th element $m_k$ (in this case $m_1=1$, $m_2=2$ and $m_3=4$); what you need are all the combinations $(x_1,x_2,\dots,x_n)$ where $0 \leq x_k \leq m_k$ such that -$$ -\sum_{k=1}^n x_k = l -$$ -Let's list all the values that any $x_k$ can have: -$$ -x_1=(0,1) -$$ -$$ -x_2=(0,1,2) -$$ -$$ -x_3=(0,1,2,3,4) -$$ -and, with those values, let's list all their possible combinations, remembering that their sum must be equal to $l$: -$$ -C_1=(0,0,3) -$$ -$$ -C_2=(0,1,2) -$$ -$$ -C_3=(0,2,1) -$$ -$$ -C_4=(1,0,2) -$$ -$$ -C_5=(1,1,1) -$$ -$$ -C_6=(1,2,0) -$$ -Since the sum of the elements of each combination is equal to $l$, you can see each combination as a multiset in which $l$ is equal to the sum of the multiplicities of the elements, so the number of permutations for each combination is -$$ -\frac{l!}{x_1!x_2! \dots x_n!} -$$ -What remains is to sum the numbers of permutations of all the combinations to get the final answer, so -$$ -\frac{3!}{0!0!3!}+\frac{3!}{0!1!2!}+\frac{3!}{0!2!1!}+\frac{3!}{1!0!2!}+\frac{3!}{1!1!1!}+\frac{3!}{1!2!0!}=1+3+3+3+6+3=19 -$$ -If you need a deeper explanation of why this works you can check the link above. I came up with this method while working on a project and I'm not a mathematician myself, so it could be a bit messy. Sorry in advance.<|endoftext|> -TITLE: Division of Factorials [binomial coefficients are integers] -QUESTION [45 upvotes]: I have a partition of a positive integer $(p)$. How can I prove that the factorial of $p$ can always be divided by the product of the factorials of the parts? -As a quick example $\frac{9!}{(2!3!4!)} = 1260$ (no remainder), where $9=2+3+4$. -I can nearly see it by looking at factors, but I can't see a way to guarantee it. - -REPLY [3 votes]: Let's prove by induction the special case of two numbers, i.e., the statement that if $p, q \in \mathbb{N}$ then $p!q! \mid (p+q)!$. -(Assume any new variables introduced below to refer to natural numbers.) -First note that since $(p+1)! = (p+1)p!1!$, the statement is true for $q = 1$ and any $p$ (including $p = 1$). In particular, it is true for $p + q = 1 + 1 = 2$. Let us assume that the statement holds for $p, q$ such that $p + q = n$. -Now, for $p, q$ such that $p + q = n + 1$, we can write -$$(p + q)! = (p + q)(p + q - 1)!$$ -$$= p [\underbrace{(p - 1) + q}_n]! + q [\underbrace{p + (q - 1)}_n]!$$ -$$= p \underbrace{k_1 (p-1)!q!}_{\text{using induction assumption}} + q \underbrace{k_2 p!(q-1)!}_{\text{using induction assumption}}$$ -$$= (k_1 + k_2) p! q!$$ -Hence the principle of mathematical induction implies the truth of the statement. -Now it is easy to prove the analogous statement for three numbers, i.e. $p!q!r! \mid (p + q + r)!$, since (using the statement just proven) $(p + q + r)!$ is divisible by $p! (q+r)!$ and $(q + r)!$ is divisible by $q!r!$. -This can be generalised to any number of parts, by induction.<|endoftext|> -TITLE: A curious compactness confusion: space filling curves in the Hilbert cube that contradict a bona fide theorem? -QUESTION [7 upvotes]: Now, I may have only slept two hours last night and would currently struggle to discern a 'proof' by induction of FLT from a piece of genuine mathematics, but that doesn't stop mathematics from bugging me. At present I am puzzled by something I saw on MO this morning...
-The linked question concerns the Hilbert cube $[0,1]^\mathbb{N}$ (an infinite product of intervals) and the existence of space filling curves thereof, that is: continuous images of the unit circle that are surjections onto the Hilbert cube. The accepted answer, together with another answer (which actually constructs such a map) and various comments, seems to allude toward an answer in the affirmative. However, the linked theorem (the 'Hahn–Mazurkiewicz theorem') which states: - -A nonempty Hausdorff topological space is a continuous image of the unit interval if and only if it is a compact, connected, locally connected second-countable space. - -seems in direct contradiction to this since (and I may be mistaken for reasons explained above): - -The Hilbert cube is a subset of a normed space and hence a metric space -The sequence $(1,0,0...), (0,1,0...), (0,0,1,...)$ has no convergent subsequence -So the Hilbert cube is not sequentially compact, therefore non-compact (the two are equivalent in metric spaces). - -This seems at odds with the 'only if' portion of the theorem's statement. Maybe this is Wikipedia taking me for a ride. Maybe I am just hallucinating a portion of this argument. Either way, this is annoying me. Thanks in advance for clearing this up... - -REPLY [8 votes]: The sequence $(1,0,0,\ldots), (0,1,0,\ldots), \ldots$ converges to $(0,0,0,\ldots)$, so your example doesn't contradict the compactness of the Hilbert cube. -$[0,1]^{\mathbb N}$ is homeomorphic to $[0,1]\times[0,1/2]\times[0,1/3]\times\cdots$ with the $\ell_2$-norm. So the square of the norm of $(a_1,a_2,a_3,\ldots)$ is $\sum\left(\dfrac{a_n}{n}\right)^2$, and therefore your sequence converges to $(0,0,0,\ldots)$, since the squared norm of $(0,\ldots,0,1,0,\ldots)$ (with the 1 in the $n$th position) is $1/n^2$, which tends to 0. - -REPLY [7 votes]: The sequence you give converges to $0$ in the product topology. For instance, convergence in the product topology is equivalent to pointwise (or coordinatewise) convergence, and your sequence converges coordinatewise to $0$. See e.g. -http://en.wikipedia.org/wiki/Pointwise_convergence -(Added: Samuel's answer proceeds differently from mine -- via an explicit metric on the "Hilbert cube" -- and is also correct.)<|endoftext|> -TITLE: Graph Path Length Problem -QUESTION [5 upvotes]: Let $G$ be a graph such that $\delta(G) \geq k$. - -Prove that $G$ has a path of length at least $k$. -Solution: We know that -$\delta(G) = \min\lbrace \deg(v) \mid v \in V(G) \rbrace$ -If $\delta(G) = k$ then there exists some $v \in V(G)$ such that $\deg(v) = k$. This means all other vertices $u \in V(G)$ have $\deg(u) \geq k$. -Now I know this must be part of the proof. How would I prove that there exists a path of length at least $k$? -If $k \geq 2$, prove that $G$ has a cycle of length at least $k+1$. - -REPLY [2 votes]: a) Take a maximal path $P=v_0v_1 \ldots v_l$. As $P$ is maximal, all neighbours of $v_0$ are in $P$. As $v_0$ has at least $k$ neighbours, $l \geq k$.<|endoftext|> -TITLE: Cancellation of Direct Products -QUESTION [9 upvotes]: Given a finite group $G$ and its subgroups $H,K$ such that $$G \times H \cong G \times K$$ does it imply that $H=K$? -Clearly, one can see that this doesn't work out for all subgroups. Is there any condition by which this can remain true? - -REPLY [2 votes]: Concerning Steve D's interpretation of the question, here is a partial answer. -Consider the following property (P) of a group: for any two subgroups $H$ and $K$ of $G$, if $H \cong K$, then $H = K$.
-Claim: For a finite group $G$, the following are equivalent: -(i) $G$ has property (P). -(ii) $G$ is cyclic. -Cyclic groups are characterized among finite groups by having at most one subgroup of any given order, so certainly (ii) $\implies$ (i). -Conversely, assume $G$ has property (P). Then it is a Dedekind group: all of its subgroups are normal (for otherwise it has two subgroups which are conjugate -- hence isomorphic -- but unequal). -Case 1: $G$ is abelian, of exponent $N$. Then $G$ is isomorphic to $C_N \times G'$ for some subgroup $G'$ (this is a step in the classification of finite abelian groups; it also follows easily from the theorem). If $G$ is not cyclic, then $G'$ is not trivial, hence contains an element of order $p \mid N$, and thus all told $G$ contains at least two subgroups of order $p$. -Case 2: $G$ is a nonabelian Dedekind group, i.e., a Hamiltonian group. As Dedekind showed, a Hamiltonian group (finite or otherwise!) must contain a subgroup isomorphic to the quaternion group $Q_8$. But $Q_8$ contains three cyclic subgroups of order $4$, namely those generated by the elements $i$, $j$ and $k$. -There are some noncyclic infinite abelian groups with property (P), e.g. $\mathbb{Q}/\mathbb{Z}$, but probably one can classify them as well: I haven't thought much about it.<|endoftext|> -TITLE: Why is the Hilbert Cube homogeneous? -QUESTION [40 upvotes]: The Hilbert Cube $H$ is defined to be $[0,1]^{\mathbb{N}}$, i.e., a countable product of unit intervals, topologized with the product topology. -Now, I've read that the Hilbert Cube is homogeneous. That is, given two points $p, q\in H$, there is a homeomorphism $f:H\rightarrow H$ with $f(p)=q$. -What's confusing to me is that there seems to be a stratification of points. That is, there are - -Points contained in $(0,1)^{\mathbb{N}}$ -Points which have precisely $n$ coordinates equal to $0$ or $1$, for $n$ a fixed natural number. -Points which have countably many coordinates equaling $0$ or $1$ and countably many not, and -Points which have $n$ many coordinates NOT equal to $0$ or $1$. - -Now, for fixed $p$ and $q$ both in class $1$ or $3$ (or fix an $n$ and use class $2$ or $4$), it's clear to me that there is a homeomorphism taking $p$ to $q$, simply by swapping around factors and using the fact that $(0,1)$ is clearly homogeneous. -But what are the homeomorphisms which mix the classes? In particular, what homeomorphism takes $(0,0,0,\ldots )$ to $(1/2, 1/2,1/2,\ldots )$? -Said another way, for any natural number $n>1$, $[0,1]^n$ is NOT homogeneous, precisely because of these boundary points. What allows you to deal with the boundary points in the infinite product case? -As always, feel free to retag, and thanks in advance! -Edit: On the off chance that someone stumbles across this question, I just wanted to provide a rough idea of the answer, as garnered from the link Pete provided in his answer. -If one has a point of the form $(1,p)$ in $[0,1] \times [0,1]$, then there is a self homeomorphism of $[0,1]\times[0,1]$ taking $(1,p)$ to $(q,1)$ with $q\neq 0, 1$. For example, one can use a "square rotation". From here, the idea is simple: given a point in $H$ of the form $(1, p_2, p_3, p_4,\ldots )$, apply the square rotation on the first two factors to get a new point of the form $(q_1, 1, p_3, p_4,\ldots )$. Now, apply the square rotation on the second and third factors to get a new point of the form $(q_1, q_2, 1, p_4,\ldots )$. The point is that after $k$ iterations, the first $k$ coordinates are all in the interior.
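-(Heuristically, the infinite composition makes sense pointwise: the $k$th square rotation changes only coordinates $k$ and $k+1$, so any fixed coordinate is moved by at most two of the maps and the partial compositions stabilize coordinatewise.)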
-Now one proves a technical lemma that states that the infinite composition of these homeomorphisms is a well-defined homeomorphism. The infinite composition maps the point $(1, p_2, \ldots )$ to a point of the form $(q_1, q_2,\ldots )$ which lies in the "interior" of $H$. Finally, using the fact that $(0,1)$ is clearly homogeneous, one can easily map $(q_1, q_2,\ldots )$ to $(1/2,1/2,\ldots )$. - -REPLY [5 votes]: In the meantime, an elementary and self-contained proof of homogeneity of the Hilbert cube was given in -The Homogeneous Property of the Hilbert Cube, by Denise M. Halverson, David G. Wright, 2012.<|endoftext|> -TITLE: Bugs walking in a plane -QUESTION [12 upvotes]: There are $N$ bugs in a plane. All bugs are moving at the same constant (nonzero) speed, but no two bugs are moving in the same direction (velocity vectors are of the same speed, but no two are parallel). -Prove that at some point in time the $N$ bugs will form a convex polygon. -Edit: Can you loosen up any of the conditions so that the statement still holds? - -REPLY [10 votes]: The counterexample I thought I had here doesn't work. -Here is a proof. Since none of the bugs are moving in the same direction, any pair of lines determined by the velocity vectors intersect. Let $C$ denote the convex hull of these intersection points. Since after waiting a sufficiently long period of time the bugs will be arbitrarily far away from $C$, if we "zoom out" far enough $C$ will become arbitrarily small with respect to the convex hull of the locations of the bugs. It follows that we can assume that $C$ is arbitrarily small to begin with. -We now claim that the bugs eventually form a convex polygon in which the angle at each vertex is strictly less than $\pi$. To do this it suffices to examine a configuration of three bugs $a, b, c$ in consecutive counterclockwise order. Pick a coordinate system in which the centroid of $C$ is the origin and $b$ travels in the positive $y$-direction (hence $a$ travels to the right and $c$ travels to the left). Then it is easy to see that regardless of where $a, b, c$ initially begin along their routes, $b$ will eventually have $y$-coordinate greater than either $a$ or $c$, so angle $abc$ will eventually be strictly less than $\pi$. -It follows that by waiting sufficiently long the bugs will always form a convex polygon. In fact, the bugs are approximating the convex polygon whose vertices are the unit velocity vectors of the bugs. - -REPLY [8 votes]: Assuming no bugs get squashed in the process: -As $t\to\infty$ (where $t$ is time), the bugs' starting points $P$ shrink to $\frac{P}{t} \to 0$ as observed when zoomed out. This means the final position of each bug lies at distance $\sqrt{r^2 + (vt)^2}$ from the origin, i.e. on the edge of a huge circle. Thus, you can see that all the bugs make a convex polygon (or $N$-gon), since a circle can be thought of as a regular (and convex) $\infty$-gon. -Of course this isn't the most rigorous proof in the world, but it's written in the style that most humans can understand.<|endoftext|> -TITLE: undergraduate courses emphasizing theory building? -QUESTION [7 upvotes]: I was wondering if anyone had any experience with an undergraduate course that emphasized the building of mathematical theories, or if they'd ever heard of this being done? How did the class work (did the professor list axioms and definitions each week and see what the students were able to prove, did you discuss a specific mathematical theory and the various problems that arose during its creation?)
-Does this sort of thing work well for undergraduates? I'd be very interested in taking a class like this because I've never really had any experience with that aspect of mathematics. So if there are any classes that are commonly taught in this sort of style, I'd be interested to know that as well. -Thanks. - -REPLY [8 votes]: I have benefited from more than one professor who taught more or less according to the Moore Method. These professors happened to come out of SUNY Binghamton in the early 1980's. -The courses required lots of input from the students. One professor in particular would ask the class to make a conjecture about some construction he would put on the board and then name the conjecture after the student - we were then personally responsible for the success or failure of our ideas. After the first few humiliations, this became wonderfully empowering and fun. -The biggest impact I have noticed from taking these courses has been on my independent studying in other classes. I notice that I am able to replicate the types of attacks on my own reasoning that my professor used to - hence leading me closer toward the boundaries of my own understanding. -I don't really know how to respond to your idea of theory building... it seems to me that mathematics is not created upward from luckily chosen axioms, but in a highly nonlinear and back-and-forth fashion. The only example I know of to illustrate this clearly is the development of topology, starting in the late 1800's with Cantor and more or less culminating in the early 1900's with Hausdorff (excellent discussions with references appear in Engelking, General Topology). -I agree that it would be a great benefit to you to find courses which at least approach what you have in mind. Good luck to you.<|endoftext|> -TITLE: Convex sequences and Integral representation for the generating function -QUESTION [13 upvotes]: Suppose that $c_k$ is a decreasing sequence of non-negative real numbers such that $c_0=1$ -and $c_{k}\leq \frac{1}{2}(c_{k-1}+c_{k+1})$. -Is it true that the generating function of $c_k$ admits an integral representation as below, -$$ -\sum_{k=0}^{\infty}c_kz^k=\int_{\partial\mathbb D} \frac{1}{1-\zeta z}d\mu(\zeta), -$$ -where $\mu$ is a Borel probability measure on $\partial\mathbb D$? -Motivation: This question is related to a possible slightly different solution of the question -asked in https://math.stackexchange.com/questions/2188/complex-analysis-question whose answer, as pointed out by damiano, can be found at the IMC website. - -REPLY [2 votes]: I think this is not possible. We need a stronger condition on the $c_k$. -Assume such a measure exists; then $L:C(S^1) \to \mathbb{C}$ given by -$$L(f) = \int_{S^1} f \, d\mu$$ -is a linear functional, so we can extend it to $L^2(S^1)$. To see it is continuous note that -$$|L(f)| \leq \|f\|_1 \leq \|f\|_2$$ -Also write this extension as $L$. -By the Riesz representation theorem we know that $L(f) = \langle f, x \rangle$ for some $x \in L^2(S^1)$. -So $L(\sum \zeta^n z^n) = \sum b_n z^n$ where $b_n = \langle \zeta^n, x \rangle$, and we know that this is equal to $\sum c_n z^n$. So that $c_n = \langle \zeta^n, x \rangle$. -But we also know that $\sum |c_n|^2 < \infty$. This is a stronger condition than convexity, as George pointed out below. -I hope it is correct this time. -The problem looks a lot like the Hausdorff moment problem and the standard work on moment problems is probably the book by Akhiezer: The classical moment problem and some related questions in analysis.
<|endoftext|> -TITLE: Example of a function whose Fourier Series fails to converge at One point -QUESTION [6 upvotes]: Can one think of an example of a continuous $2\pi$-periodic function whose Fourier series fails to converge at some point of $\mathbb{R}$? -I found this on the Wikipedia page, but to no avail: It might be interesting to note that Jean-Pierre Kahane and Yitzhak Katznelson proved that for any given set E of measure zero, there exists a continuous function $f$ such that the Fourier series of $f$ fails to converge on any point of E. - -REPLY [3 votes]: All, please see this example. -Let $G_{n}$ denote the grouping of these $2n$ numbers: $$\frac{1}{2n-1},\frac{1}{2n-3},...,\frac{1}{3},1,-1,-\frac{1}{3},\cdots,-\frac{1}{2n-1}$$ -We take a strictly increasing sequence of positive integers $\{\lambda_n\}$ and consider the groups $G_{\lambda_1},G_{\lambda_2},\cdots$. We multiply each number of the group $G_{\lambda_n}$ by $n^{-2}$ and obtain the sequence $$\frac{1}{1^{2}(2\lambda_{1}-1)}, \cdots,-\frac{1}{1^{2}(2\lambda_{1}-1)}, \frac{1}{2^{2}(2\lambda_{2}-1)},...,-\frac{1}{2^2(2\lambda_{2}-1)},....,$$ -say $\alpha_{1},\alpha_{2},\cdots$. Our aim is to show that $$\sum\limits_{n=1}^{\infty} \alpha_{n} \cos{nx}$$ -is the Fourier series of a continuous function. We group the terms in the following way: $$\sum\limits_{n=1}^{2\lambda_{1}} \alpha_{n} \cos{nx} + \sum\limits_{n=2\lambda_{1}+1}^{2\lambda_{1}+2\lambda_{2}} \alpha_{n}\cos{nx} + \sum\limits_{n=2\lambda_{1}+2\lambda_{2}+1}^{2\lambda_{1}+2\lambda_{2}+2\lambda_{3}} \alpha_{n} \cos{nx}+\cdots$$ -The last series can be written as $$\sum\limits_{n=1}^{2\lambda_{1}} \alpha_{n} \cos{nx} + \sum\limits_{n=2}^{\infty} \frac{\phi(\lambda_{n},2\lambda_{1}+2\lambda_{2} + \cdots + 2\lambda_{n-1},x)}{n^2}$$ -where $$\phi(n,r,x)= \frac{\cos{(r+1)x}}{2n-1} + \frac{\cos{(r+2)x}}{2n-3} + \cdots + \frac{\cos{(r+n)x}}{1} - \frac{\cos{(r+n+1)x}}{1} - \cdots - \frac{\cos{(r+2n)x}}{2n-1}$$ -Now one can show that there is a constant $M$ (independent of $n,r$ and $x$) such that $|\phi(n,r,x)|\leq M$. From this it follows that the grouped series $$\sum\limits_{n=1}^{2\lambda_{1}} \alpha_{n} \cos{nx} + \sum\limits_{n=2}^{\infty} \frac{\phi(\lambda_{n},2\lambda_{1}+2\lambda_{2} + \cdots + 2\lambda_{n-1},x)}{n^2}$$ -converges absolutely on $\mathbb{R}$, say to $f(x)$, and $f$ is continuous on $\mathbb{R}$. It is also easy to check that $$f(x) \sim \sum\limits_{n=1}^{\infty} \alpha_{n} \cos{nx}$$ -We shall finally show that $\{\lambda_n\}$ can be chosen so that the above series diverges at zero, that is, $S_{n} = \alpha_{1} + \alpha_{2} + \cdots + \alpha_{n}$ diverges to infinity. -Since $$S_{2\lambda_{1}+2\lambda_{2}+ \cdots + 2 \lambda_{n-1} + \lambda_{n}} = \frac{1}{n^2} \Bigl( \frac{1}{2\lambda_{n}-1} + \cdots + \frac{1}{3} + 1 \Bigr)$$ behaves like $\frac{\ln{\lambda_{n}}}{2n^{2}}$ as $n \to \infty$, it is enough to take $\lambda_{n}=n^{n^2}$. Then the Fourier series does not converge to $f$ at $x=2k\pi, \ k\in \mathbb{Z}$.<|endoftext|> -TITLE: What is the commutative analogue of a $C^*$-subalgebra? -QUESTION [10 upvotes]: Using the duality between locally compact Hausdorff spaces and commutative $C^*$-algebras one can write down a vocabulary list translating topological notions regarding a locally compact Hausdorff space $X$ into algebraic notions regarding its ring of functions $C_0(X)$ (see Wegge-Olsen's book, for instance).
For example, we have the following correspondences: -\begin{align*} -\text{open subset of $X$}\quad &\longleftrightarrow\quad\text{ideal in $C_0(X)$}\newline -\text{dense open subset of $X$}\quad &\longleftrightarrow\quad\text{essential ideal in $C_0(X)$}\newline -\text{closed subset of $X$}\quad &\longleftrightarrow\quad\text{quotient of $C_0(X)$}\newline -\text{locally closed subset of $X$}\quad &\longleftrightarrow\quad\text{subquotient of $C_0(X)$}\newline -\text{???}\quad &\longleftrightarrow\quad\text{$C^*$-subalgebra in $C_0(X)$} -\end{align*} -By ideal I always mean a two-sided closed (and hence self-adjoint) ideal. -Well, I can't quite see how to reconvert a $C^*$-subalgebra in $C_0(X)$ into something topological involving only the space $X$. Can you come up with something handy? - -Example: A simple example of a subalgebra of a commutative $C^*$-algebra not being an ideal is -$$ -\mathbb C\cdot(1,1)\subset \mathbb C\oplus\mathbb C. -$$ - -(Alternatively, we could think about this question within the duality of affine algebraic varieties and finitely generated commutative reduced algebras or even within the duality between affine schemes and commutative rings.) - -Edit: Since I was not completely satisfied by the response I got here, I reposted this question on MO. - -REPLY [7 votes]: Roughly, the answer will be that closed $C^*$-subalgebras will correspond to quotient spaces -(via pull-back of functions). In your example, the quotient map is one which identifies -the two points into a single point. I haven't thought through, though, whether this is a completely correct statement as it stands, or whether one has to add additional caveats. -[Added to answer a question in comments:] The idea is that if $X$ surjects onto $Y$ -then we get an injection $C_0(Y) \to C_0(X)$, and conversely. -[Additional discussion added after more thought:] Let me say something about the analogous situation in algebraic geometry, where I am more comfortable with the technical issues: -Affine algebraic sets over $\mathbb C$ correspond to finite type reduced $\mathbb C$-algebras. -Giving an inclusion $A \hookrightarrow B$ of finite type reduced $\mathbb C$-algebras -corresponds to giving a map $X \to Y$ of algebraic sets which is dominant, i.e. the image is dense. -Now in your set-up: if $X \to Y$ is a map of locally compact Hausdorff spaces with dense image, then again the map $C_0(Y) \to C_0(X)$ will be injective; so I might have been too hasty when I asserted that we get a surjective map. On the other hand, perhaps the -image of $C_0(Y) \to C_0(X)$ will not be closed in this level of generality; it's a while since I've thought carefully about these sorts of things, so I don't think I can say more -right now with any certainty. -In particular, I'm not so used to working in the case of rings without unit, so my suggestion has more chance to be correct in the case when the spaces are compact. -So perhaps it would be easiest to think about the case when $X$ and $Y$ are compact first; note then a map with dense image will automatically be surjective, and so this case might be simpler to understand for this reason too. (In fact, thinking about your example of an ideal that you mention in comments, it might be easier to pass to one-point compactifications --- and thus add a unit --- before proceeding.
Because indeed I think -that in the ideal case, what will happen is that we will get a map from the 1 point compactification of the space to the one point compactification of the open set which crushes the complement of the open set down to the point at infinity, exactly as you suggest in your comment.)<|endoftext|> -TITLE: Importance of determining whether a number is squarefree, using geometry -QUESTION [13 upvotes]: Despite appearances, this is not a question on computational aspects of number theory. The background is as follows. I once asked a number theorist about what he considered to be the most important unsolved problems in arithmetic geometry. He told me about a few, but along with some well-known problems he told me the following one also: - -How to determine conceptually when a number is squarefree or not? - -When I protested that this sounded like a computational question, he told me that no, this is not so, and demonstrated that this has a rather nice solution for the ring of polynomials over a field, which is in many senses analogous to the ring of integers. Take the polynomial, take its derivative and compute the gcd using the Euclidean algorithm. But for the ring of integers there is nothing analogous to the derivative, and he wanted a solution of the problem by constructing a good notion of a differential in this case. -Question: What are the known investigations along this line? What well-known topics in arithmetic geometry are related to this? And what would be some other interesting consequences of a successful development of such a method? -Any other comments that might enlighten me further would be received with gratitude. - -REPLY [19 votes]: No feasible (polynomial time) algorithm is currently known for -recognizing squarefree integers or for computing the squarefree -part of an integer. In fact it may be the case that this problem -is no easier than the general problem of integer factorization. -This problem is important because one of the main tasks -of computational algebraic number theory reduces to it (in -deterministic polynomial time). Namely the problem of computing -the ring of integers of an algebraic number field depends upon -the square-free decomposition of the polynomial discriminant -when computing an integral basis, e.g. [2] S.7.3 p.429 or [1] -This is due to Chistov [0]. See also Problems 7,8, p.9 in [3], -which lists 36 open problems in number theoretic complexity. -The reason that such problems are simpler in function fields (e.g. polynomial rings) versus number fields -is due to the availability of derivatives. This opens up a powerful -toolbox that is not available in the number field case. For example -once derivatives are available so are Wronskians - which provide powerful -measures of dependence in transcendence theory and diophantine approximation. -A simple yet stunning example is the elementary proof of the polynomial case of Mason's ABC theorem, which yields as a very special case FLT for polynomials, cf. -my recent MO post and my old sci.math post [4]. -[0] A. L. Chistov. The complexity of constructing the ring of integers -of a global field. Dokl. Akad. Nauk. SSSR, 306:1063--1067, 1989. -English Translation: Soviet Math. Dokl., 39:597--600, 1989. 90g:11170 -http://citeseerx.ist.psu.edu/showciting?cid=854849 -[1] Lenstra, H. W., Jr. Algorithms in algebraic number theory. -Bull. Amer. Math. Soc. (N.S.) 26 (1992), no. 2, 211--244. 93g:11131 -http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.105.8382 -[2] Pohst, M.; Zassenhaus, H. 
Algorithmic algebraic number theory. -Cambridge University Press, Cambridge, 1997. -[3] Adleman, Leonard M.; McCurley, Kevin S. -Open problems in number-theoretic complexity. II. -Algorithmic number theory (Ithaca, NY, 1994), 291--322, -Lecture Notes in Comput. Sci., 877, Springer, Berlin, 1994. 95m:11142 -http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.4877 -[4] sci.math.research, 1996/07/17 -poly FLT, abc theorem, Wronskian formalism [was: Entire solutions of f^2+g^2=1] -http://groups.google.com/group/sci.math/msg/4a53c1e94f1705ed -http://google.com/groups?selm=WGD.96Jul17041312@berne.ai.mit.edu - -REPLY [7 votes]: The standard example is the proof, by differentiation, of Fermat's last theorem for polynomials. More generally, the proof of the ABC conjecture for polynomials (Mason's theorem) by differentiation. In light of the analogies between algebraic number theory and algebraic geometry, this suggests the hope that some kind of arithmetic differentiation, if it exists, could be a missing structure leading to breakthroughs such as a proof of the ABC conjecture or a simple proof of Fermat. -As somebody once said, "define PDE's over a number field and you'll be a rich man". -As far as I know the furthest-developed approach to this problem at present is in a series of works by Alexander Buium on arithmetic differential operators: -http://www.math.unm.edu/~buium/prebook.pdf<|endoftext|> -TITLE: Extending Automorphism of a Field -QUESTION [5 upvotes]: We know that $(\mathbb{Q},+,\times)$ is a subfield of $(\mathbb{R},+,\times)$. It is easy to see that the only automorphism of $\mathbb{Q}$ is the identity. For a quick proof, let's go through the main steps: - -$f(1)=1 \Longrightarrow f(n)=n$ for all $n \in \mathbb{N}$. -$f(-1)=-1$, which says that $f(x)=x$ for all $x \in \mathbb{Z}$. -$\displaystyle f \Bigl(\frac{p}{q}\Bigr) = \frac{p}{q}$, where $q \neq 0$. - -One then uses the continuity of $f$ and the denseness of $\mathbb{Q}$ to prove that the automorphism of $\mathbb{R}$ is also trivial. -My Question: Given a subfield $K$ of $\mathbb{C}$ and an automorphism of $K$, can it be extended to the whole of $\mathbb{C}$? - -REPLY [3 votes]: This and related questions about automorphism groups of algebraically closed fields (a topic I find interesting and have spent some time thinking about) are discussed in Section 9.1 of -http://math.uga.edu/~pete/FieldTheory.pdf -Specifically, Theorem 77 answers the OP's question affirmatively. -(These notes are still very rough. In particular there is not yet a bibliography. When this gets remedied, a citation to Paul Yale's paper will be in order: it was definitely something I read when writing these notes.)<|endoftext|> -TITLE: For any $n$, is there a prime factor of $2^n-1$ which is not a factor of $2^m-1$ for $m < n$? -QUESTION [11 upvotes]: Is it guaranteed that there will be some $p$ such that $p\mid2^n-1$ but $p\nmid 2^m-1$ for any $m < n$? - -REPLY: Yes for every $n$ other than $n=1$ and $n=6$: this is Zsigmondy's theorem on primitive prime divisors. There are far-reaching generalizations; for example, one such result states that for all $n > n_0,\,\ a^n - b^n\, $ has a prime ideal factor $P$ not dividing $\,a^m - b^m\, $ for all $ m < n$. - -Later (1993, same MR) "he generalized this to show that every -algebraic number which is not a root of unity satisfies only a -finite number of independent generalized cyclotomic equations -considered by the reviewer [in Structural properties of -polylogarithms, Chapter 11, see p. 236, Amer. Math. Soc., -Providence, RI, 1991; see MR 93b:11158]". -There are also elliptic and polynomial generalizations. - -REPLY [2 votes]: No. $2^6-1 = 3^2 \cdot 7$. But we have that $3|2^2-1$ and $7|2^3-1$.
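For what it's worth, a quick brute-force check (a sketch in Python; I'm assuming sympy is available for the factoring) confirms that $n=6$ is the only exception with $1 < n \le 30$:

    from sympy import factorint  # factorint(m) -> {prime: exponent}

    def has_primitive_prime_factor(n):
        """True if some prime divides 2**n - 1 but no 2**m - 1 with m < n."""
        earlier = set()
        for m in range(1, n):
            earlier |= set(factorint(2**m - 1))
        return any(p not in earlier for p in factorint(2**n - 1))

    print([n for n in range(2, 31) if not has_primitive_prime_factor(n)])  # [6]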
<|endoftext|> -TITLE: If two subgroups have a complete set of left coset representatives in common, then -QUESTION [5 upvotes]: Let $H,K$ be proper subgroups of a group $G$ having a complete set $S$ of representatives of left cosets in common, that is, -$$ -G = \bigsqcup_{s \in S} sH = \bigsqcup_{s \in S} sK -$$ -It seems in general one cannot expect any serious relation on $H,K.$ But I am afraid of overlooking some general result here. Any information on the subject will be warmly accepted. Regards, Olod - -REPLY [3 votes]: $|S| = [G:H] = [G:K]$, so at least their indices are equal. I don't think much else is true, because of the following example: -If $G = C_2 \times C_2$ is the Klein four group, then $H = C_2 \times 1$ and $K = 1 \times C_2$ are two subgroups with a common set of coset representatives: $S = \{ (0,0), (1,1) \}$. $H$ and $K$ are not conjugate. -In general if $G$ is a semi-direct product $H \ltimes N$, then $H$ has $S=N$ as a set of coset representatives. It is very possible for $N$ to have more than one "complement" $H$, that is, another subgroup $K$ such that $G=K \ltimes N$. In nice situations, like $G$ nonabelian of order 6, all complements are conjugate, but in general they need not be, as the dihedral 2-groups (including the Klein four group) show. -It would be nice to have an example where $H$ and $K$ are not even isomorphic, but semi-direct products won't do that. I don't believe $H$, $K$ need to be complemented to have a common transversal, but I guess that is another reasonable guess to rule out. -Edit: Well, the dihedral group $G$ of order 8 has a cyclic normal subgroup $H$ and a Klein four normal subgroup $K$, the union of which is not all of $G$. Since a set of coset representatives $S$ has only 2 elements, we just need to take the identity and an element neither in $H$ nor $K$. In particular, $H$ need not be isomorphic to $K$. -Also the dihedral group of order 16 has a similar pair ($H$ cyclic of order 4, $K$ a four-group), and so neither $H$ nor $K$ need be complemented. -One can also use Abelian examples as Arturo points out, and one can extend the $D_8$ example to $S_4$ as Steve points out. A fun problem. - -REPLY [2 votes]: If $S$ is a complete set of (left) coset representatives for $H$, then for every $x\in G$ there exists $s\in S$ such that $xH=sH$, and moreover, if $s_1,s_2\in S$ are such that $s_1H=s_2H$, then $s_1=s_2$. That is: there is one and exactly one representative from each coset of $H$ in $G$ in the set $S$. As such, your condition is trivially satisfied under the assumption that $S$ is a complete set of coset representatives for both $H$ and $K$, since for $s_1,s_2\in S$, you have -$$s_1H=s_2H \Longleftrightarrow s_1=s_2\Longleftrightarrow s_1K=s_2K.$$ -So either you meant something else, or you are just asking for conditions under which two subgroups $H$ and $K$ can have the same complete set of coset representatives. -Jack Schmidt already gave an example with $H$ and $K$ not conjugate, and asked if there is an example in which $H$ and $K$ are not isomorphic. I think this does it: take $G=\mathbb{Z}_2\times \mathbb{Z}_2\times \mathbb{Z}_4$, the product of two cyclic groups of order two and one of order four. Take $H=\{0\}\times\{0\}\times\mathbb{Z}_4$, and $K=\{0\}\times\mathbb{Z}_2\times\langle 2\rangle$ (so $H$ is cyclic of order $4$, and $K$ is the Klein $4$-group). Let $S=\{(0,0,1), (1,0,1), (1,1,0), (0,1,0)\}$. If I did not make some silly mistake, then this is a complete set of coset representatives for both $H$ and $K$.
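A quick brute-force verification of this example (a sketch in Python; elements of $G=\mathbb{Z}_2\times \mathbb{Z}_2\times \mathbb{Z}_4$ are encoded as tuples with componentwise modular addition):

    from itertools import product

    MODULI = (2, 2, 4)
    G = set(product(range(2), range(2), range(4)))
    H = [(0, 0, z) for z in range(4)]                  # cyclic of order 4
    K = [(0, y, z) for y in range(2) for z in (0, 2)]  # Klein four-group
    S = [(0, 0, 1), (1, 0, 1), (1, 1, 0), (0, 1, 0)]

    def add(s, u):
        return tuple((a + b) % m for a, b, m in zip(s, u, MODULI))

    def is_transversal(S, U):
        cosets = [frozenset(add(s, u) for u in U) for s in S]
        # The four cosets s+U must be distinct and cover all 16 elements;
        # by counting, that forces them to be pairwise disjoint as well.
        return len(set(cosets)) == len(S) and set().union(*cosets) == G

    print(is_transversal(S, H), is_transversal(S, K))  # True True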
<|endoftext|> -TITLE: Convergence of $\sum \limits_{n=1}^{\infty}\sin(n^k)/n$ -QUESTION [49 upvotes]: Does $S_k= \sum \limits_{n=1}^{\infty}\sin(n^k)/n$ converge for all $k>0$? -Motivation: I recently learned that $S_1$ converges. I think $S_2$ converges by the integral test. Is the answer known in general? - -REPLY [60 votes]: This is a replacement for my previous answer. The sum converges, and this fact needs even more math than I believed before. -Begin by using summation by parts. This gives -$$\sum_{n=1}^N \left(\sum_{m=1}^n \sin(m^k) \right) \left( \frac{1}{n}-\frac{1}{n+1}\right) + \frac{1}{N+1} \left(\sum_{m=1}^N \sin(m^k) \right).$$ -Write $S_n:= \left(\sum_{m=1}^n \sin(m^k) \right)$. So this is -$$\sum_{n=1}^N S_n/(n(n+1)) + S_N/(N+1).$$ -The second term goes to zero by Weyl's polynomial equidistribution theorem. So your question is equivalent to the question of whether $\sum S_n/(n(n+1))$ converges. We may as well clean this up a little: Since $|S_n| \leq n$, we know that $\sum S_n \left( 1/n(n+1) - 1/n^2 \right)$ converges. So the question is whether -$$\sum \frac{S_n}{n^2}$$ -converges. -I will show that $S_n$ is small enough that $\sum S_n/n^2$ converges absolutely. -The way I want to prove this is to use Weyl's inequality. Let $p_i/q_i$ be an infinite sequence of rational numbers such that $|1/(2 \pi) - p_i/q_i| < 1/q_i^2$. Such a sequence exists by a standard lemma. Weyl's inequality gives that -$$S_N = O\left(N^{1+\epsilon} (q_i^{-1} + N^{-1} + q_i N^{-k})^{1/2^{k-1}} \right)$$ -for any $\epsilon>0$. - -Thanks to George Lowther for pointing out the next step: According to Salikhov, for $q$ sufficiently large, we have -$$|\pi - p/q| > 1/q^{7.60631+\epsilon}.$$ -Since $x \mapsto 1/(2 \pi)$ is Lipschitz near $\pi$, and since $p/q$ near $\pi$ implies that $p$ and $q$ are nearly proportional, we also have the lower bound $|1/(2 \pi) - p/q|> 1/q^{7.60631+\epsilon}$. -Let $p_i/q_i$ be the convergents of the continued fraction of $1/(2 \pi)$. By a standard result, $|1/(2 \pi) - p_i/q_i| \leq 1/(q_i q_{i+1})$. Thus, $q_{i+1} \leq q_i^{6.60631 + \epsilon}$ for $i$ sufficiently large. Thus, the intervals $[q_i, q_i^{7}]$ contain all sufficiently large integers. -For any large enough $N$, choose $q_i$ such that $N^{k-1} \in [q_i, q_i^7]$. Then Weyl's inequality gives the bound -$$S_N = O \left( N^{1+\epsilon} \left(N^{-(k-1)/7} + N^{-1} + N^{-1} \right)^{1/2^{k-1}}\right)$$ -So $$S_N = \begin{cases} O(N^{1-(k-1)/(7\cdot 2^{k-1}) + \epsilon}) &\mbox{ if } \ k\leq 7, \\ -O(N^{1-1/(2^{k-1})+\epsilon}) &\mbox{ if } \ k\geq 8, \end{cases}$$ -which is enough to make sure the sum converges.<|endoftext|> -TITLE: Irrationality of powers of $\pi$ -QUESTION [14 upvotes]: Everyone knows that $\pi$ is an irrational number, and one can refer to this page for the proof that $\pi^{2}$ is also irrational. -What about the higher powers of $\pi$? That is, is $\pi^{n}$ irrational for all $n \in \mathbb{N}$, or does there exist an $m \in \mathbb{N}$ for which $\pi^{m}$ is rational? - -REPLY [31 votes]: What Robin hinted at: -If $\pi^{n}$ were rational, then $\pi$ would not be transcendental, as it would be a root of $ax^{n}-b = 0$ for some integers $a,b$.<|endoftext|> -TITLE: Power series expansion without Cauchy theorem -QUESTION [5 upvotes]: How is the power series expansion of an analytic function at a point constructed, without using Cauchy's theorem (or formula)?
- -REPLY [7 votes]: Below are a couple of possible approaches. -MR0123687 (23 #A1010) 30.20 -Porcelli, P.; Connell, E. H. -A proof of the power series expansion without Cauchy's formula. -Bull. Amer. Math. Soc. 67 1961 177--181. -http://projecteuclid.org/euclid.bams/1183524076 -Starting with the basic result of topological analysis that a differentiable function of a complex variable generates an open mapping, the authors succeed in establishing in remarkably simple fashion, independent of any integration theory, the validity of the power-series expansion of such a function. A series of lemmas leads up to the theorem that differentiability of a function $f(z)$ on $0<|z-z_0| < r $ plus continuity at $z_0$ implies differentiability also at $z_0$. Using this, along with existence of higher-ordered derivatives, it is then shown that if $f(z)$ is continuous and $|f(z)|\leq 1$ for $|z|\leq 1$ and differentiable for $|z|<1$, then $|f^{(n)}(0)|\leq n!2^n$, for all $n$, and the Taylor series for $f$ converges to $f$ for $|z|<1/2$. -Reviewed by G. T. Whyburn -MR0993637 (90m:30002a) 30A99 (26A39 30B10 42A20) -Shisha, Oved(1-RI) -Proof of power series and Laurent expansions of complex differentiable functions without use of Cauchy's integral formula or Cauchy's integral theorem. -J. Approx. Theory 57 (1989), no. 2, 117--135. -MR1006337 (90m:30002b) 30A99 (26A39 30B10 42A20) -Shisha, Oved -Erratum: ``Proof of power series and Laurent expansions of complex differentiable functions without use of Cauchy's integral formula or Cauchy's integral theorem''. -J. Approx. Theory 58 (1989), no. 2, 246. -The author shows how to establish the basic results of complex function theory in the context of real variable Fourier analysis. His basic tool is a generalization of the classical Riemann integral, which he proposes should become the standard integral of the working analyst in place of currently used integrals (including the Lebesgue integral), all of which it contains as a special case. A topological development, eliminating all use of integrals, is due to E. Connell, R. L. Plunket, P. Porcelli, A. H. Read and G. T. Whyburn [Whyburn, Topological analysis, revised edition, Princeton Univ. Press, Princeton, NJ, 1964; MR0165476 (29 #2758)]. The present development generalizes the work of P. R. Beesack [Canad. Math. Bull. 15 (1972), 473--480; MR0310199 (46 #9301)], who placed various conditions on the derivative to achieve his results. -The relevant property of the new integral is the fact that if $f$ is an arbitrary differentiable function on an interval $[a, b]$, then $\int^b_a f'(x)\,dx=f(b)-f(a)$. For any function $f$ on $[a,b]$, the new integral is defined as the number $I$ such that for every $\varepsilon >0$ there exists a positive function $\delta_\varepsilon(x)$ on $[a,b]$ such that for every partition $a=x_0 < x_1 < \cdots < x_n=b$ and every sequence $s_1,\cdots, s_n$ with $x_{k-1} \le s_k\le x_k$ and $x_k-x_{k-1}<\delta_{\varepsilon}(s_k)$ for $k=1,\cdots,n$, one has $|I-\sum ^n_{k=1}f(s_k)(x_k-x_{k-1})|<\varepsilon$. Let $f$ be a complex differentiable function on an annulus $0\le R' < |z| < R''<\infty$. Then by Dini's test, since $f$ is everywhere differentiable, for each $r$, $R' < r < R''$, the function $f(re^{i\phi})$, $0\le \phi\le 2\pi$, can be expanded in a Fourier series $f(re^{i\phi}) =\sum^{+\infty}_{k=-\infty}c_k(r)e^{ik\phi}$. The objective of the paper is achieved if it can be shown that the coefficients $c_k(r)$ are independent of $r$.
The basic lemma used states that $$ \iint_{\substack{ R_1\le r\le R \\ 0\le \varphi\le 2\pi }} f'(re^{i\varphi})\,dr\,d\varphi =\int_0^{2\pi}e^{-i\varphi}[f(Re^{i\varphi})-f(R_1e^{i\varphi})]\,d\varphi,$$ $$ \widehat{\iint}_{\substack{ R_1\le r\le R \\ 0\le \varphi\le 2\pi }} f'(re^{i\varphi})\,dr\,d\varphi =\int_{R_1}^{R}r^{-1}\int_0^{2\pi}e^{-i\varphi}f(re^{i\varphi})\,d\varphi\, dr,$$ where the integrals on the left are two-dimensional generalized Riemann integrals. The exposition is quite detailed and fully self-contained, including the development of all relevant characteristics of the generalized Riemann integral. -[Two minor errors are corrected in the erratum.] -Reviewed by Kenneth O. Leland<|endoftext|> -TITLE: Bounded Function Which is Not Riemann Integrable -QUESTION [5 upvotes]: This problem is taken from Problem 2.4.31 (page 84) of Problems in Mathematical Analysis: Integration by Wiesława J. Kaczor and Maria T. Nowak. - -Give an example of a bounded function $f:[0,1] \to \mathbb{R}$ which is not Riemann integrable, but is a derivative of some function $g$ on $[0,1]$. - -REPLY [8 votes]: I gave an answer to this question on Math Overflow some months ago: -Integrability of derivatives -See, in particular, this paper: Goffman, Casper, A bounded derivative which is not Riemann integrable. Amer. Math. Monthly 84 (1977), no. 3, 205--206.<|endoftext|> -TITLE: Invariant Subspaces (Hardy Space) -QUESTION [6 upvotes]: Suppose $M_1$ and $M_2$ are invariant subspaces of the unilateral shift $U$ such that $M_1 \subset M_2$ and $M_1$ is of codimension strictly larger than $1$ in $M_2$. Show that there exists $M$ invariant under $U$ satisfying $M_1 \subset M \subset M_2$, where the inclusions are strict. All subspaces are closed. -This problem is from the Springer GTM: "An introduction to operators on the Hardy-Hilbert space". -Edit: Perhaps I can take $M := U M_2$? Maybe I should give that some more thought. - -REPLY [4 votes]: To solve this you can use the function theoretic description of the invariant subspaces of the shift known as Beurling's theorem. Identify your Hilbert space with the Hardy space $H^2$ on the disk such that $U$ is identified with "multiplication by $z$". Beurling's theorem says that each invariant subspace of $U$ has the form $\phi H^2$ for a so-called inner function $\phi$, i.e., a bounded analytic function on the disk whose radial (or non-tangential) limit function has modulus 1 a.e. on the circle. -In your problem, there are inner functions $\phi_1$ and $\phi_2$ such that $M_1=\phi_1 H^2$ and $M_2=\phi_2 H^2$. You have $\phi_1\in M_1\subset M_2$, so $\phi_1=\phi_2 f$ for some $f\in H^2$. The modulus of $f$ on the circle is 1 a.e. because $f=\phi_1/\phi_2$ a.e., and thus $f$ is an inner function. Suppose, for the sake of argument, that you can write $f=gh$ for some nonconstant inner functions $g$ and $h$, and let $M=\phi_2 g H^2$. Then $M_1\subset M\subset M_2$. I claim that the inclusions are strict, and more specifically that $\phi_2 g$ is in $M\setminus M_1$ and $\phi_2$ is in $M_2\setminus M$. This amounts to the same thing as saying that $1/h$ and $1/g$ (respectively) are not in $H^2$. Note that $1/h$ and $1/g$ have modulus 1 a.e. on the circle, so if they were in $H^2$ they would in fact be in $H^\infty$ and bounded by 1 on the disk. But $h$ and $g$ are bounded by 1 on the disk and nonconstant, so this is impossible, showing that in fact $1/h$ and $1/g$ are not in $H^2$ as claimed.
-It remains to be seen why $f$ has such a factorization, and this is where the hypothesis about codimension is used. Every inner function has a factorization into a Blaschke product times a singular inner function. If the singular part of $f$ is nontrivial, it can be factored nontrivially by scaling the corresponding singular measure by numbers between 0 and 1. If the Blaschke part of $f$ has more than one factor, then it factors. Given that $M_1\neq M_2$, $f$ is not constant, so the only other possibility is that $f$ is a Blaschke product with a single factor, a.k.a. a holomorphic automorphism of the disk. I claim that this would imply that $M_1$ has codimension 1 in $M_2$, and more specifically $M_2=M_1 + \mathbb{C}\phi_2$. To see this, let $\alpha$ be the zero of $f$, and let $G=\phi_2 H$ be an element of $M_2$. Then $G=\phi_2 f\frac{H-H(\alpha)}{f}+\phi_2 H(\alpha)\in M_1+\mathbb{C}\phi_2$. Q.E.D. -For more details on factorizations of inner functions and more, see Chapter 2 of the book named in the question. -(As for the edit, you can't always take $M=UM_2$. Let $0<|\alpha|<1$, and let $\phi_\alpha$ be the holomorphic automorphism of the disk that swaps 0 and $\alpha$, $\phi_\alpha(z)=\frac{\alpha-z}{1-\overline{\alpha}z}$. Suppose that $M_2=H^2$ and $M_1=\phi_\alpha^2 H^2$. Then your hypotheses are met, but $M_1$ is not contained in $UM_2$ because $\phi_\alpha^2$ is not in $UM_2$.)<|endoftext|> -TITLE: Is the rank of a matrix the same of its transpose? If yes, how can I prove it? -QUESTION [64 upvotes]: I am auditing a Linear Algebra class, and today we were taught about the rank of a matrix. The definition was given from the row point of view: - -"The rank of a matrix A is the number - of non-zero rows in the reduced - row-echelon form of A". - -The lecturer then explained that if the matrix $A$ has size $m -\times n$, then $rank(A) \leq m$ and $rank(A) \leq n$. -The way I had been taught about rank was that it was the smallest of - -the number of rows bringing new information -the number of columns bringing new information. - -I don't see how that would change if we transposed the matrix, so I said in the lecture: -"then the rank of a matrix is the same of its transpose, right?" -And the lecturer said: -"oh, not so fast! Hang on, I have to think about it". -As the class has about 100 students and the lecturer was just substituting for the "normal" lecturer, he was probably a bit nervous, so he just went on with the lecture. -I have tested "my theory" with one matrix and it works, but even if I tried with 100 matrices and it worked, I wouldn't have proven that it always works because there might be a case where it doesn't. -So my question is first whether I am right, that is, whether the rank of a matrix is the same as the rank of its transpose, and second, if that is true, how can I prove it? -Thanks :) - -REPLY [23 votes]: There are several simple proofs of this result. Unfortunately, most textbooks use a rather complicated approach using row reduced echelon forms. Please see some elegant proofs in the Wikipedia page (contributed by myself): -http://en.wikipedia.org/wiki/Rank_%28linear_algebra%29 -or the page on rank factorization: -http://en.wikipedia.org/wiki/Rank_factorization -Another of my favorites is the following: -Define $\operatorname{rank}(A)$ to mean the column rank of A: $\operatorname{col rank}(A) = \dim \{Ax: x \in -\mathbb{R}^n\}$. Let $A^{t}$ denote the transpose of A. First show that $A^{t}Ax = 0$ -if and only if $Ax = 0$. 
This is standard linear algebra: one direction is -trivial, the other follows from: -$$A^{t}Ax=0 \implies x^{t}A^{t}Ax=0 \implies (Ax)^{t}(Ax) = 0 \implies Ax = 0$$ -Therefore, the columns of $A^{t}A$ satisfy the same linear relationships -as the columns of $A$. It doesn't matter that they have a different number -of rows. They have the same number of columns and they have the same -column rank. (This also follows from the rank+nullity theorem, if you -have proved that independently, i.e. without assuming row rank = column -rank.) -Therefore, $\operatorname{col rank}(A) = \operatorname{col rank}(A^{t}A) \leq \operatorname{col rank}(A^{t})$. (This -last inequality follows because each column of $A^{t}A$ is a linear -combination of the columns of $A^{t}$. So, $\operatorname{col sp}(A^{t}A)$ is a subset -of $\operatorname{col sp}(A^{t})$.) Now simply apply the argument to $A^{t}$ to get the -reverse inequality, proving $\operatorname{col rank}(A) = \operatorname{col rank}(A^{t})$. Since $\operatorname{col rank}(A^{t})$ is the row rank of A, we are done.<|endoftext|> -TITLE: A nicer proof of Lagrange's 'best approximations' law? -QUESTION [6 upvotes]: Let $p_N/q_N$ be the $N^\text{th}$ convergent of the continued fraction for some irrational number $\alpha$. It turns out that for any other approximation $p/q$ (with $q \le q_N$) which isn't a convergent, $|\alpha q - p| > |\alpha q_{N-1} - p_{N-1}|$. I'm wondering if there are any nice proofs for this result? - -In my book this is proved by picking $x,y$ that solve -$$ -\begin{pmatrix} p_N & p_{N-1} \\ q_N & q_{N-1} \end{pmatrix} -\begin{pmatrix} x \\ y \end{pmatrix} = -\begin{pmatrix} p \\ q \end{pmatrix} -$$ -Since $x$ and $y$ have opposite signs, and $\alpha q_N - p_N$ and $\alpha q_{N-1} - p_{N-1}$ also have opposite signs, we can conclude that $|\alpha q - p| = |x (\alpha q_N - p_N) + y (\alpha q_{N-1} - p_{N-1})| = |x| |\alpha q_N - p_N| + |y| |\alpha q_{N-1} - p_{N-1}|$, which proves the theorem. -I am looking for different proofs than this one. - -REPLY [8 votes]: Of course there is a nicer proof! In fact, it's almost obvious if one thinks about the geometric interpretation of continued fractions: consider the line $y=\alpha x$; then the best approximation (i.e. the approximation that minimizes $|\alpha q-p|=q|\alpha-\frac{p}{q}|$) is the point of the integer lattice nearest to this line; finally observe that convergents with even/odd numbers of the continued fraction give coordinates of [vertices of] the convex hull of [points of] the lattice lying over/under the line. - -One can also (as J. M. suggests) take a look at Lorentzen, Waadeland. Continued Fractions with Applications, I.2.1 (esp. figure 1 and the text near it; there are no words "convex hull" there but implicitly everything is explained, more or less). -Upd. One more reference is Davenport's "The Higher Arithmetic" (section IV.12). - -Finally, an illustration (from Arnold's book). - -Bold line $y=\alpha x$ (on the picture $\alpha=\frac{10}7$) is approximated by vectors $e_i$ corresponding to convergents (namely, $e_i$ ends at $(p_i,q_i)$); -each vector (starting from $e_3$) is a linear combination of the two preceding ones: $e_{i+2}=e_i+a_{i-1}e_{i+1}$, where $a_{i-1}$ is the maximal integer s.t. the new vector still doesn't intersect the line; -this is exactly the algorithm for representing $\alpha$ by a continued fraction: $\alpha=[a_0,a_1,\dots]$.
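The best-approximation property is easy to see numerically as well: scanning $q = 1, 2, \ldots$ and recording each new record value of $|\alpha q - p|$, the record-setting fractions should be exactly the convergents (a sketch in Python, with $\alpha = \pi$):

    from math import pi

    alpha = pi
    best, records = float("inf"), []
    for q in range(1, 40000):
        p = round(alpha * q)           # best integer p for this q
        err = abs(alpha * q - p)
        if err < best:                 # new record minimum of |alpha*q - p|
            best = err
            records.append((p, q))
    # Expected: the convergents of pi, i.e.
    # 3/1, 22/7, 333/106, 355/113, 103993/33102, 104348/33215
    print(records)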
<|endoftext|> -TITLE: Checking if Algebraic Groups are simply connected -QUESTION [7 upvotes]: I have recently been thinking some about algebraic groups and reading parts of Humphreys' book on them, and I was wondering if there is a general process to showing they are simply connected. In particular I was wondering about fields other than $\mathbb{C}$, but if the answer only works in $\mathbb{C}$ I will settle for that. -One idea I had was that using Borel-Weil-Bott one could make a slick argument for when the fundamental group is trivial. I would like to get away from the ad-hoc thinking process I am using. - -REPLY [3 votes]: Please see https://mathoverflow.net/questions/19262/simply-connectedness-of-algebraic-group<|endoftext|> -TITLE: Is $\sin(n^k) ≠ (\sin n)^k$ in general? -QUESTION [13 upvotes]: Is it true that $\sin(n^k) ≠ (\sin n)^k$ for any positive integers $n$ and integers $k ≠ 1$? -What if $n > 0, k ≠ 1$ are rational? - -REPLY [18 votes]: If $\sin(n^k)=\sin(n)^k$, then $\frac{e^{in^k}-e^{-in^k}}{2i}=(\frac{e^{in}-e^{-in}}{2i})^k$, so that $e^i$ would be algebraic (every term is an integer power of $e^{i}$). -But that directly contradicts the Lindemann-Weierstrass theorem, because $i$ is algebraic.<|endoftext|> -TITLE: Continuous function satisfying $f^{k}(x)=f(x^k)$ -QUESTION [14 upvotes]: How does one set out to find all continuous functions $f:\mathbb{R} \to \mathbb{R}$ which satisfy $f^{k}(x)=f(x^k)$, where $k \in \mathbb{N}$? -Motivation: Is $\sin(n^k) ≠ (\sin n)^k$ in general? - -REPLY [5 votes]: The case $k=1$ is trivial, so suppose $k>1$. -Suppose $k$ is odd. (For even $k$ one should correct this solution a bit.) -First note that $a^k=a$ is equivalent to $a\in\{-1,0,1\}$. We have this equation for $a=f(-1)$, $a=f(0)$, $a=f(1)$. -Note that for $x\in (-1,1)$ the sequence $x,x^k,(x^k)^k,\dots$ goes to 0. Denote this sequence by $b_n$: $b_n = x^{(k^n)}$. From continuity of $f$ we know that $f(b_n)\rightarrow f(0)$ and $(f(x))^{(k^n)}\rightarrow f(0)$. Therefore for $x\in(-1,1)$ we know that $f(x) \in (-1,1)$, or $f$ is constant on this interval and equal to 1 or -1. -Now suppose $x$ is in $(0,\infty)$. For the same reason the sequence $(f(x))^{(k^{-n})}$ goes to $f(1)$. This is true iff ($f(x)=0$ and $f(1)=0$) or ($f(x)$ is in $(-\infty,0)$ and $f(1)=-1$) or ($f(x)$ is in $(0,\infty)$ and $f(1)=1$). The same holds for $x$ in $(-\infty,0)$ and $f(-1)$. -So we have the following cases - -$f(0)=1$: then $f(x)=1$ for $x \in [-1,1]$. Other values of $f$ can be -uniquely determined by values on -$[-2^k,-2]\cup[2,2^k]$. These values can be chosen arbitrarily to form a continuous function -to $(0,\infty)$ such that -$f(-2^k)=(f(-2))^k$ and -$f(2^k)=(f(2))^k$. - -Now I will just list all other cases - without going into details, which are - similar to the first case. - -$f(0)=-1$ -$f(0)=0$, $f(-1)=1$, $f(1)=1$ -$f(0)=0$, $f(-1)=1$, $f(1)=-1$ -$f(0)=0$, $f(-1)=1$, $f(1)=0$ -$f(0)=0$, $f(-1)=-1$, $f(1)=1$ -$f(0)=0$, $f(-1)=-1$, $f(1)=-1$ -$f(0)=0$, $f(-1)=-1$, $f(1)=0$ -$f(0)=0$, $f(-1)=0$, $f(1)=1$ -$f(0)=0$, $f(-1)=0$, $f(1)=-1$ -$f(0)=0$, $f(-1)=0$, $f(1)=0$<|endoftext|> -TITLE: How to prove $(f \circ g)^{-1} = g^{-1} \circ f^{-1}$? (inverse of composition) -QUESTION [15 upvotes]: I'm doing exercises on discrete mathematics and I'm stuck on this question: - -If $f:Y\to Z$ is an invertible function, and $g:X\to Y$ is an invertible function, then the inverse of the composition $(f \circ g)$ is given by $(f \circ g)^{-1} = g^{-1} \circ f^{-1}$. - -I've no idea how to prove this; please help me by giving me some reference or hint to its solution.
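As a sanity check before attempting the proof, one can test the identity on a concrete invertible pair (a sketch in Python; the two functions are a toy example of my own choosing):

    # g : X -> Y, f : Y -> Z, here with X = Y = Z = (nonnegative reals).
    f = lambda y: 2 * y + 3          # invertible, f^{-1}(z) = (z - 3) / 2
    g = lambda x: x ** 3             # invertible, g^{-1}(y) = y ** (1/3)

    f_inv = lambda z: (z - 3) / 2
    g_inv = lambda y: y ** (1.0 / 3)

    for x in [0.0, 1.5, 2.0, 7.25]:
        z = f(g(x))                                # (f o g)(x)
        assert abs(g_inv(f_inv(z)) - x) < 1e-9     # (g^{-1} o f^{-1})(z) == x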
- -REPLY [3 votes]: $f:Y \to Z$ and $g:X \to Y$ are invertible functions. We need to prove $(f\circ g)^{-1}=g^{-1}\circ f^{-1}$, i.e. that $g^{-1}\circ f^{-1}$ is a two-sided inverse of $f\circ g$. -$$ -(g^{-1}\circ f^{-1})\circ(f\circ g)=((g^{-1}\circ f^{-1})\circ f)\circ g=(g^{-1}\circ (f^{-1}\circ f))\circ g=(g^{-1}\circ I_{Y})\circ g=g^{-1}\circ g=I_{X} -$$ -Similarly, -$$ -(f\circ g)\circ (g^{-1}\circ f^{-1})=f\circ(g\circ (g^{-1}\circ f^{-1}))=f\circ((g\circ g^{-1})\circ f^{-1})\\=f\circ (I_{Y}\circ f^{-1})=(f\circ I_{Y})\circ f^{-1}=f\circ f^{-1}=I_{Z} -$$ -$(g^{-1}\circ f^{-1})\circ(f\circ g)=I_{X}$ and $(f\circ g)\circ (g^{-1}\circ f^{-1})=I_{Z}$ prove that $f\circ g$ is invertible with $(f\circ g)^{-1}=g^{-1}\circ f^{-1}$.<|endoftext|> -TITLE: A Fourier series failing to converge on the Cantor Set -QUESTION [8 upvotes]: This is a strengthening of Chandru's question: -Example of a function whose Fourier Series fails to converge at One point -Is there a nice and concrete example of a Fourier series that fails to converge on some "big" set of measure zero, for instance on the Cantor ternary set? - -REPLY [2 votes]: I don't have an exact answer for this but I think this paper might help: -Pointwise convergence of multiple Fourier series using different convergence notions - -One more link: -In this paper the author shows that for any set of zero measure there exists a continuous function on the circle whose Fourier series diverges on that set. In French. -Sergei Vladimirovich Konyagin, On divergence of trigonometric Fourier series everywhere, C. R. Acad. Sci. Paris 329 (1999), 693-697.<|endoftext|> -TITLE: Is there a group with exactly 92 elements of order 3? -QUESTION [51 upvotes]: The number of elements of order 2 in a group is fairly restricted: 0, odd, or infinity. All such possibilities occur already in the trivial group and in dihedral groups. -The number of elements of order 3 in a group can be shown to be similarly restricted: 0, $2 \bmod 6$, or infinity. However, something strange happens: not all possibilities can be realized. -Even worse, there is a fairly small number that I cannot decide whether it is or is not the number of elements of order 3 in a finite group: - -Is there a group with exactly 92 elements of order 3? - -More boldly, I would like to know (but feel free to answer only the first question): - -Exactly which numbers occur as the number of elements of order 3 in a group? - -Background: Such questions were studied a bit by Sylow and more heavily by Frobenius. The theorem that the number of elements of order $p$ is equal to $-1 \bmod p$ is contained in one of Frobenius's 1903 papers. Since elements of order 3 come in pairs, this doubles to give $2 \bmod 6$ for $p=3$. -However, Frobenius's results were improved some 30 years later by P. Hall, who showed that if the Sylow $p$-subgroups are not cyclic, then the number of elements of order $p$ is $-1 \bmod p^2$. -If the Sylows are cyclic of order $p^n$, then the number of subgroups of order $p$ is congruent to $1 \bmod p^n$ by the standard counting method. If the Sylow itself is of order $p$, then the subgroup generated by the elements of order $p$ acts faithfully and transitively on the Sylow subgroups, so for small enough numbers, the subgroup can just be looked up. -In all cases, we can assume the group is finite since the subgroup generated by the elements of a fixed order is finite (assuming there are only finitely many elements of that fixed order).
-Easier example: For instance there is no group with exactly 68 elements of order 3, since such a group would have cyclic Sylow 3-subgroups by Hall, order 3 Sylows by the counting, but then would have 34 Sylow 3-subgroups, and so (the subgroup generated by the elements of order 3 would) be a primitive group of degree 34. One checks the list of primitive groups of degree 34 (that is, $A_{34}$ and $S_{34}$, both with ginormous Sylow 3-subgroups) to see no such group exists. -One could also try 140, but the action need not be primitive so the table lookup is harder. Such a group has Sylows of order 3, but is not solvable, so is somewhat restricted. - -REPLY [4 votes]: Here is my attempt, fixed by Septimus Harding: -If $H$ is a group with 92 elements of order 3, then let $G$ be the subgroup generated by those elements. $G$ has finite order by Dicman's lemma, and so $G$ has a Sylow 3-subgroup $P$. -$P$ must be cyclic, lest Hall's result imply $46\equiv 4 \bmod 9$. Since $P$ is cyclic, $46\equiv 1 \bmod |P|$, so $|P|$ divides 45, so $|P|$ is 1, 3, or 9. 1 is impossible since there would be 0 elements of order 3. 3 is impossible, since then $G$ would have 46 Sylow 3-subgroups, but no finite group has 46 Sylow $p$-subgroups. So $P$ is cyclic of order 9. -Now $G$ acts by conjugation on the 46 subgroups of order 3, and since the Sylow 3-subgroups are cyclic and (all of) their order 3 subgroups are conjugate, $G$ acts transitively on the 46 subgroups. Let $Q = \Omega(P) \leq P$ be one of the subgroups of order 3, and let $M = N_G(Q)$ be its normalizer. Then $N = N_G(P) \leq M$ and $[G:M] = 46$. If $M \leq X \leq G$, then both $[M:N]$ and $[X:N]$ are $\equiv 1 \bmod 3$ by Sylow, but $[X:N] = [X:M][M:N]$, so $[X:M] \equiv 1 \bmod 3$. However $[X:M]$ divides 46, so $[X:M] = 1$ or 46, so $X = M$ or $G$, and so $M$ is a maximal subgroup of $G$. So $G$ acts primitively on 46 points (the cosets of $M$, a.k.a. the subgroups of order 3), so $G/\operatorname{Core}(G,M)$ is a primitive group of degree 46, that is, $G/\operatorname{Core}(G,M)$ is the alternating or the symmetric group on 46 points. Since the Sylow 3-subgroups of the alternating and symmetric groups (on 6 or more points) are not cyclic, we have a contradiction, and no such $G$ exists. - -This works to prove that: if $n=pq$ for primes $p,q \equiv 5, 8 \bmod 9$, and if the only primitive groups of degree $pq$ have non-cyclic Sylow 3-subgroups (like $\operatorname{Alt}(n)$ and $\operatorname{Sym}(n)$), then there is no group with exactly $2n$ elements of order 3, even though $2n \equiv 2 \bmod 6$. I believe primitive groups of degree $pq$ are classified, so this could probably be made completely explicit. 140 elements is not immediately handled, since $70 = 2\cdot 5\cdot 7$, but I suspect the same idea will work.<|endoftext|> -TITLE: Family of functions with two horizontal asymptotes -QUESTION [9 upvotes]: I'm looking for the equation of a family of functions that roughly resembles the sketch below (with apologies for the crudeness of said sketch): -Properties I'm looking for: - -$\lim_{x\to-\infty}f(x)=y_1$ (i.e. approaches asymptote $y=y_1$) -$\lim_{x\to+\infty}f(x)=y_2$ (i.e. approaches asymptote $y=y_2$) -$|f'(x)|$ is at a maximum at $x=0$ - -I remember something involving $e^x/(??)$, but I can't remember exactly and can't seem to come up with the formula. I want to use it to weight tags in a tag cloud (i.e. the more recently a tag was used (smaller $x$), the more weight (larger $y$) the tag gets). The whole graph would be shifted up and to the right of what is shown in the sketch, but I know how to do that part once I have a formula. - -REPLY [7 votes]: Just take $-\tan^{-1} x$. Vary it appropriately for additional constraints.
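For instance (my own concretization of this suggestion, not part of the original answer), $f(x) = \frac{y_1+y_2}{2} + \frac{y_2-y_1}{\pi}\tan^{-1} x$ hits both asymptotes and has $|f'|$ maximal at $x=0$; as a tag-cloud weight in code:

    import math

    def tag_weight(x, y1, y2):
        """Smooth transition from y1 (as x -> -oo) to y2 (as x -> +oo),
        steepest at x = 0, since d/dx atan(x) = 1/(1 + x^2)."""
        return (y1 + y2) / 2 + (y2 - y1) / math.pi * math.atan(x)

    print(tag_weight(-50, 10, 1))  # ~10: recently used tag, heavy weight
    print(tag_weight(0, 10, 1))    # 5.5: midpoint
    print(tag_weight(50, 10, 1))   # ~1:  stale tag, light weight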
- -REPLY [5 votes]: Another famous family of functions that behaves as you describe consists of those of the form $y=\dfrac{x}{\sqrt{x^2+1}}$. (This function is actually the sine of the arctan function George suggested.) -Graph of $y=-\dfrac{x}{\sqrt{x^2+1}}$: - -For general $y_1$ and $y_2$, the formula would be $y=-\dfrac{y_1-y_2}{2}\cdot\dfrac{x}{\sqrt{x^2+1}}+\dfrac{y_1+y_2}{2}$<|endoftext|> -TITLE: A question on Hamilton Quaternions -QUESTION [5 upvotes]: How does one prove that the ring of Hamilton quaternions with coefficients coming from the field $\mathbb{Z}/p\mathbb{Z}$ is not a division ring? - -REPLY [3 votes]: This answer builds on damiano's comment. -Because of Wedderburn's theorem, we know that there are no finite-dimensional non-commutative division algebras over $\mathbb{F}_p$. In particular, every quaternion algebra over $\mathbb{F}_p$ is split, i.e., isomorphic to the algebra $M_2(\mathbb{F}_p)$ of $2 \times 2$ matrices over $\mathbb{F}_p$. Can we see this directly by arguments particular to quaternion algebras? -Yes. For instance, a quaternion algebra is split iff its associated norm form -$N(x) = x \overline{x} = q(x_1,x_2,x_3,x_4)$ -is isotropic -- i.e., there exists $(x_1,\ldots,x_4) \neq (0,\ldots,0)$ such that $q(x_1,\ldots,x_4) = 0$. -In the case of the Hamiltonian quaternion algebra, -$N(x) = x_1^2 + x_2^2 + x_3^2 + x_4^2$. -As Robin points out, one way to see that this form is isotropic over $\mathbb{F}_p$ is to -realize that by Lagrange's Four Squares Theorem, $p$ is a sum of four integral squares, and then reduce modulo $p$. -This is overkill. The fact that any quadratic form in at least three variables over a finite field is isotropic is a special case of the (easy to prove) Chevalley-Warning theorem: see e.g. -http://math.uga.edu/~pete/4400ChevalleyWarning.pdf -Even this is more than is necessary: it follows from a simple counting argument that any nondegenerate quadratic form in at least two variables over $\mathbb{F}_p$ is universal, i.e., represents every nonzero element of the field. For the details, see e.g. p. 5 of -http://math.uga.edu/~pete/quadraticforms2.pdf -From this we get that every nondegenerate quadratic form in at least three variables over a finite field is isotropic, and in particular the norm form of any quaternion algebra over a finite field is split. -In fact the universality of binary quadratic forms over finite fields is the key to this result. Assuming the characteristic is not $2$, the quaternion algebra may be represented as $\langle a, b \rangle$. For a quaternion algebra $Q = \langle a, b \rangle$ over any field $F$ (of characteristic not $2$), a necessary and sufficient criterion for $Q$ to be split is that $b$ is a norm in the extension $F(\sqrt{a})/F$. Thus the universality of the norm form of $\mathbb{F}_p(\sqrt{a})/\mathbb{F}_p$ implies that all quaternion algebras over $\mathbb{F}_p$ are split. -For that matter, from the perspective of Galois cohomology, Wedderburn's theorem is not especially deep or difficult to prove. Using standard properties of cohomology of cyclic groups, it comes down to showing that for any finite degree extension $\mathbb{F}_{q^n}/\mathbb{F}_q$ of finite fields, the norm map is surjective, and this may also be proved by an elementary counting argument (or simply by a straightforward calculation).<|endoftext|> -TITLE: Construction of $p^n$ field -QUESTION [5 upvotes]: I heard that one can construct a field with $p^n$ elements, where $p$ is prime. I tried with $p = n = 2$.
It seemed easy, as there are 2 groups which $\{0,1,2,3\}$ can form and 1 group formed by $\{1,2,3\}$. However, each time I ran into a contradiction. Is there anything I missed, or is it not possible to form a $p^n$-element field? - -REPLY [3 votes]: The very nice answer by Arturo Magidin describes explicit constructions of finite fields with $p^n$ elements. Let me work out a couple of explicit examples here. -For instance, you can construct a field $\mathbb{F}_4$ with $4$ elements by considering the quotient $\mathbb{F}_4=\mathbb{F}_2[x]/(x^2+x+1)$. Here $\mathbb{F}_2 = \mathbb{Z}/2\mathbb{Z}$ is a field with two elements, and notice that $x^2+x+1$ is irreducible over $\mathbb{F}_2$, because neither $0$ nor $1$ is a root. Indeed, you can check that the $4$ polynomials $\{0, 1, x, 1+x\}$ form a complete set of representatives for the quotient of $\mathbb{F}_2[x]$ modulo $(x^2+x+1)$, because any polynomial $q(x)$ of degree $2$ or higher is congruent to one of $0$, $1$, $x$ or $1+x$ modulo $x^2+x+1$ (just use long division of $q(x)$ by $x^2+x+1$). Here is the addition table for the field $\mathbb{F}_4$: -$$\begin{array}{c|cccc} - -+ & 0 & 1 & x & 1+x \\ -\hline -0 & 0 & 1 & x & 1+x \\ - -1 & 1 & 0 & 1+x & x \\ - -x & x & 1+x & 0 & 1 \\ - -1+x & 1+x & x & 1 & 0 \\ - -\end{array} $$ -Notice that the shape of the multiplication table agrees with the one given by Robin Chapman. Here is the multiplication table for $\mathbb{F}_4$: -$$\begin{array}{c|cccc} - -\times & 0 & 1 & x & 1+x \\ -\hline -0 & 0 & 0 & 0 & 0 \\ - -1 & 0 & 1 & x & 1+x \\ - -x & 0 & x & 1+x & 1 \\ - -1+x & 0 & 1+x & 1 & x \\ - -\end{array} $$ -For example, $x(1+x)=x+x^2\equiv 1+(1+x+x^2)\equiv 1 \bmod (1+x+x^2)$. -Finally, let's construct a field with $9$ elements. This can be done by considering $\mathbb{F}_3[x]/(x^2+1)$, since $x^2+1$ is irreducible over $\mathbb{F}_3$. Notice that $2\equiv -1\bmod 3$ is not a quadratic residue modulo $3$, thus $x^2+1\equiv x^2-2$ must be irreducible. Similarly, if $p$ is any prime with $p\equiv 3 \bmod 4$, then $x^2+1$ is irreducible and $\mathbb{F}_p[x]/(x^2+1)$ is a field with $p^2$ elements.<|endoftext|> -TITLE: Is the Subset Axiom Schema in ZF necessary? -QUESTION [12 upvotes]: I am learning the axioms of Zermelo-Fraenkel (ZF) set theory. -One axiom schema basically says that given any set S and any formula phi(x), there is a set T consisting of all those elements x of S such that phi(x). -I find this axiom schema unsatisfying because it only guarantees that subsets of S definable by a formula are really sets. It's like saying that a function from X to Y only exists if you can write down a formula for it rather than just allowing arbitrary single-valued subsets of X cross Y. Is there a way in the language of ZF to say "given a set S, if T is a subset of S then T is a set?" -Related is the power set axiom. Given a set S, there is a set P(S) consisting of exactly all the subsets of S. But in ZF the objects of the theory are sets (no urelements). Everything is a set. So, the elements of P(S) are all sets. Doesn't this mean that every subset of a given set S is a set? If so, why the need for the subset axioms? -I can anticipate some difficulty. I want to say that "if x is a member of P(S) then x is a set," but I cannot express the predicate "is a set" in the language of set theory, and strangely enough there is no need to since everything is a set! I attempted: -"(for all S)(for all x)[x subset S --> (there exists y)(x = y)]" -where "x subset S" is an abbreviation for "(for all z)[z in x --> z in S]."
-But this attempt is silly because when I say "for all x," x is automatically a set and there is no need to say it is a set. -I'm confused. - -Are the subset axioms necessary? -Is there a way to remove reference to a defining formula so that every subset of a given set is a set? -If the answer to 1 is "yes" and the answer to 2 is "no" then why is set theory so weird as to allow only definable subsets on the one hand and yet allow for a set of all subsets on the other? - -REPLY [4 votes]: Once upon a time, mathematicians used "naive set theory": for any logical predicate, there was a set of all objects satisfying that predicate: "unrestricted comprehension". The ability to do this was one of the main features of set theory -- the ability to gather all of the objects in a class up into a set, so that you could reason about the set. -Of course, the famous contradictions were discovered (e.g. Russell's paradox), and that put an end to naive set theory. -Now, comprehension was a rather important feature of set theory, and Zermelo kept it when he came up with his axioms for set theory. The trick was to replace unrestricted comprehension with restricted comprehension. Zermelo set theory uses a limited number of "safe" operations (e.g. pairs, power sets, unions) to build up sets, and then restricts the use of comprehension to merely selecting out elements of sets -- i.e. the axiom (scheme) of subsets -- rather than allowing it to collect elements out of the entire universe. -This turns out to be powerful enough to do mathematics, and so Zermelo set theory (and eventually ZFC) became a foundation of mathematics. -In the very closely related NBG set theory, the axiom of subsets simply becomes "If $\Phi$ is a class, and $S$ is a set satisfying $\Phi \subseteq S$, then $\Phi$ is a set." (or there is a set with the same elements as $\Phi$, depending on technical convention)<|endoftext|> -TITLE: Is it misleading to think of rank-2 tensors as matrices? -QUESTION [41 upvotes]: Having picked up a rudimentary understanding of tensors from reading mechanics papers and Wikipedia, I tend to think of rank-2 tensors simply as square matrices (along with appropriate transformation rules). Certainly, if the distinction between vectors and dual vectors is ignored, a rank 2 tensor $T$ seems to be simply a multilinear map $V \times V \rightarrow \mathbb{R}$, and (I think) any such map can be represented by a matrix $\mathbf{A}$ using the mapping $(\mathbf{v},\mathbf{w}) \mapsto \mathbf{v}^T\mathbf{Aw}$. -My question is this: Is this a reasonable way of thinking about things, at least as long as you're working in $\mathbb{R}^n$? Are there any obvious problems or subtle misunderstandings that this naive approach can cause? Does it break down when you deal with something other than $\mathbb{R}^n$? In short, is it "morally wrong"? - -REPLY [15 votes]: You're absolutely right. 
-Maybe someone will find useful a couple of remarks telling the same story in coordinate-free way: - -What happens here is indeed identification of space with its dual: so a bilinear map $T\colon V\times V\to\mathbb{R}$ is rewritten as $V\times V^*\to\mathbb{R}$ — which is exactly the same thing as a linear operator $A\colon V\to V$; -An identification of $V$ and $V^*$ is exactly the same thing as a scalar product on $V$, and using this scalar product one can write $T(v,w)=(v,Aw)$; -So orthogonal change of basis preserves this identification — in terms of Qiaochu Yuan's answer one can see this from the fact that for orthogonal matrix $B^T=B^{-1}$ (moral of the story: if you have a canonical scalar product, there is no difference between $T$ and $A$ whatsoever; and if you don't have one — see Qiaochu Yuan's answer.)<|endoftext|> -TITLE: Precise connection between Poincare Duality and Serre Duality -QUESTION [41 upvotes]: The statements of Poincare duality for manifolds and Serre Duality for coherent sheaves on algebraic varieties or analytic spaces look tantalizingly similar. I have heard tangential statements from some people that there is indeed some connection between the two. But I was never able to figure it out on myself. For instance for a naive attempt on a smooth complex manifold, the dimensions don't match. Can somebody help me out? - -REPLY [9 votes]: I'd like to point out that there is a generalization of Poincare duality which lives purely in the land of smooth manifolds and looks like Serre duality. Let $M$ be a compact connected smooth $n$-manifold. Let $E$ be a vector bundle on $M$ equipped with a flat (some people say integrable) connection $\nabla$. Let $T^{\ast}$ be the cotangent bundle to $M$. For a vector bundle $V$ on $M$, let $C^{\infty}(V)$ be the sheaf of smooth sections of $V$. So $C^{\infty}(V)(U)$ is smooth sections of $V$ over $U$. -The connection $\nabla$ induces maps $C^{\infty}(E \otimes \bigwedge^k T^{\ast}) \to C^{\infty}(E \otimes \bigwedge^{k+1} T^{\ast})$. These maps form a complex -$$0 \to C^{\infty}(E)(M) \to C^{\infty}(E \otimes T^{\ast})(M) \to \cdots \to C^{\infty}(E \otimes \bigwedge\nolimits^n T^{\ast})(M) \to 0.$$ -Define $H_{DR}^i(M, E, \nabla)$ (not standard notation), to be the cohomology groups of this complex. Then we have: -Relation to sheaf cohomology -Let $E_0$ be the subsheaf of $C^{\infty}(E)$ given by the kernel of $\nabla$. (The so-called flat sections of $E$.) Then -$$H^i_{sheaf}(M, E_0) \cong H^i_{DR}(M, E, \nabla).$$ -Proof sketch: -$$E_0 \to C^{\infty}(E) \to C^{\infty}(E \otimes T^{\ast}) \to \cdots \to C^{\infty}(E \otimes \bigwedge\nolimits^n T^{\ast})\to 0$$ -is a resolution of $E_0$ by acyclic sheaves. -Duality -We have $H^n(M, \bigwedge^n T^{\ast}) \cong \mathbb{R}$ and the cup product pairing -$$H_{DR}^q(M, E, \nabla) \otimes H_{DR}^{n-q}(M, E^{\vee} \otimes \bigwedge\nolimits^n T^{\ast}, \nabla') \longrightarrow H_{DR}^n(M, \bigwedge\nolimits^n T^{\ast}) \cong \mathbb{R}$$ -is perfect. Here $\nabla'$ is the connection on $E^{\vee} \otimes \bigwedge^n T^{\ast}$ which is adjoint to $\nabla$, in a sense I don't want to define. -Note that, if $M$ is orientable, then $\bigwedge^n T^{\ast}$ is trivial, which makes the statement simpler but look a bit less like Serre duality. -If $E$ is the trivial one dimensional bundle, and $\nabla$ is the standard connection $f \mapsto df$, then $H^i_{DR}(M, E, \nabla)$ is the standard DeRham cohomology $H^i_{DR}(M)$. 
So, if $E$ and $\nabla$ are as above, and $M$ is orientable, we recover Poincare duality. -I recall a good discussion of this in Voisin's book Hodge Theory and Complex Algebraic Geometry, volume I, chapter 5.3.2. I talked about this in my Hodge Theory course.<|endoftext|> -TITLE: Asimov quote about "eight million trillion" arrangements of amino acids -QUESTION [7 upvotes]: A friend of mine is subediting a book whose author died in 1999. The author, at some point, uses the word "trillion" which is, unfortunately, an ambiguous word in the UK: when I was at school it used to mean $10^{18}$ but nowadays it means $10^{12}$. My friend is faced with the following paragraph: - -According to Asimov, the amino acids of the proteins behave in a much freer way than our words do: they can be rearranged in any manner and always retain some meaning. A simple protein is made up of eight amino acids, which can be classified putting the numbers one to eight in a series, changing the order of sequence by one digit each time. Out of the same number of "words" we can construct a little over 40,000 organised "biological phrases" from the same genetic code, each one with its own meaning, which is the mission of every protein. But if the chains become longer, as in the case of more complicated molecules such as insulin, which consists of 30 amino acids, the tally rises to a staggering eight million trillion possibilities. - -The editor doesn't like "eight million trillion" and wants to replace it with "800000..000". The question, of course, is "how many zeros"? More precisely, the question is: is someone with a better understanding of chemistry than me able to reconstruct Asimov's calculation and see whether the answer is approximately $8 \times 10^{24}$ or $8 \times 10^{18}$? - -REPLY [4 votes]: The second sentence of that paragraph is meaningless, I think. The number of eight-amino-acid polypeptides (those are chains of amino acids) is 20^8, which is much larger than 40,000. And insulin isn't really big enough to be a good example of a typical protein; typical proteins have hundreds of amino acids. -That being said, Qiaochu's interpretation seems correct. I was able to track down the structure of human insulin -- note that the image is for bovine insulin, and human insulin differs by substituting Thr for Ala in the final position. -So the B-chain of human insulin has the sequence -Phe Val Asn Gln His Leu Cys Gly Ser His Leu Val Glu Ala Leu Tyr Leu Val Cys Gly Glu Arg Gly Phe Phe Tyr Thr Pro Lys Ala -which contains -4 Leu -3 each Phe, Val, Gly, -2 each His, Cys, Glu, Ala, Tyr, -1 each Asn, Gln, Ser, Arg, Thr, Pro , Lys -and the number of ways that all of these can be rearranged is -$$ \frac{30!}{4! 3!^3 2!^5} \approx 1.6 \times 10^{27} $$ -which is not all that close to either of the proposed replacements. Then again, it's possible that whoever did the calculation was working from the sequence of some insulin other than human, or just made a mistake.<|endoftext|> -TITLE: Finding $\lim_{x \to \infty} \left[ {x^{x+1} \over (x+1)^x} - { (x-1)^x\over x^{x-1}}\right]$ -QUESTION [8 upvotes]: The limit is $$\lim_{x \to \infty} \left[ {x^{x+1} \over (x+1)^x} - { (x-1)^x\over x^{x-1}}\right]$$ -Experimentally, this limit appears to converge to ${1 \over e}$, but I can't figure out how to solve it. 
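-(The experiment is easy to reproduce; here is a small Python sketch that evaluates each term through its logarithm to avoid overflow, and the printed values do settle near $1/e \approx 0.36788$.)
-
-import math
-
-def f(x):
-    # x^(x+1)/(x+1)^x = exp((x+1) log x - x log(x+1)), and likewise
-    # (x-1)^x/x^(x-1) = exp(x log(x-1) - (x-1) log x)
-    t1 = math.exp((x + 1) * math.log(x) - x * math.log(x + 1))
-    t2 = math.exp(x * math.log(x - 1) - (x - 1) * math.log(x))
-    return t1 - t2
-
-for x in [10, 100, 1000, 10000]:
-    print(x, f(x))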
-
-REPLY [8 votes]: A fairly mechanical approach is to write the limit as
-$$\lim_{x\to\infty}(f(x)-f(x-1))$$
-where
-$$f(x)=\frac{x^{x+1}}{(x+1)^x}.$$
-Then
-$$\log f(x)=(x+1)\log x - x\log(x+1)
-= \log x - x \log(1+1/x)$$
-and so
-$$\log f(x)= \log x - 1 + 1/(2x) + O(x^{-2})$$
-as $x\to\infty$ (using the Maclaurin series for $\log(1+t)$).
-Therefore
-$$f(x) = (x/e)(1+1/(2x)+O(x^{-2}))=x/e+1/(2e)+O(x^{-1})$$
-and so
-$$f(x-1) =(x-1)/e+1/(2e) +O((x-1)^{-1}).$$
-Subtracting,
-$$f(x)-f(x-1)= 1/e+O(x^{-1}).$$<|endoftext|>
-TITLE: Rotate a point in circle about an angle
-QUESTION [5 upvotes]: How should I rotate a point $(x,y)$ to location $(a,b)$ on the coordinate plane by any angle?
-
-REPLY [5 votes]: Assuming that you ask where will x end up after being rotated clockwise for an angle $\theta$ around the origin, then the answer is as given by George S. and J.M. I'd like to give a brief explanation of why that is the answer.
-The key is that rotation is a linear transformation. If we write Ax for the point x rotated, then $A(ax+by) = a(Ax) + b(Ay)$, where a and b are some real numbers and x and y are points. Here ax is a point obtained by stretching both coordinates, and lies on the line that contains x and the origin. It's a scaling operation. Also, $x+y$ is obtained by adding the coordinates, which you can imagine as the diagonal of a parallelogram whose edges are x and y. So the equation $A(ax+by) = a(Ax) + b(Ay)$ says that if you scale a figure with two points, then 'add' the points by taking the parallelogram diagonal, and then rotate, you get the same result you get if you start by rotating, and then you scale and add.
-
-Now, assuming that equation is OK when A represents a rotation, note that every vector $x=(a,b)$ can be written as $a(1,0)+b(0,1)$, so its rotation will be $a(A(1,0))+b(A(0,1))$. In other words, all you need to know is where the points (1,0) and (0,1) end up after being rotated (namely, you need the points A(1,0) and A(0,1), each having two coordinates). So four numbers are enough to figure out where all points end up after being rotated.
-Linear transformations are the reason matrices are important.<|endoftext|>
-TITLE: Is there a simple test for uniform distributions?
-QUESTION [28 upvotes]: I have a function that (more or less) is supposed to select a small number $m$ of random numbers from the range $[1,n]$ (for some $n \gg m$) and I need to test that it works. Is there an easy-to-implement test that will give good confidence that it is working correctly? (For reference the full-on chi-squared test is not simple enough.)
-Edit:
-
-$m$ in $[5,500]$, $n$ is small enough I can use normal int/float types for most math.
-I can get as many sets as I'm willing to wait for ($>100$s per second)
-
-The reason I'm looking for something simple is that the code I'm checking isn't that complicated. If the test code is more complex than the code under test, then the potential for the test being wrong becomes a real problem.
-Edit 2:
-I took another look at Chi-squared and it's not as bad as I remembered... as long as I have a fixed number of buckets and a fixed significance threshold. (Or a canned CDF, and I don't think I do.)
-
-REPLY [4 votes]: A good way to test for this is to note that the CDF for any continuous random variable transforms it to a uniform distribution, so you can transform a uniform distribution by the inverse CDF to get any distribution you like, and then compute statistics designed to test for that distribution.
-For instance, in section 2.3, p.
679 of Cook, Gelman & Rubin, they describe how to do this to transform a hypothetically uniform distribution to the standard normal, and then use the fact that the sum of squares of the normal variables follows a χ$^2$-distribution with N degrees of freedom, where N is the number of observations.<|endoftext|>
-TITLE: Assuming $P \neq NP$, do we know whether there are problems which are in $NP$, not in $P$ and are not $NP$ complete?
-QUESTION [11 upvotes]: Here's a question. Have there been any theoretical results showing that if $P \neq NP$, there must exist some problems in $NP$ which are not $NP$-complete and which are not in $P$ either? Just curious because I've never seen this question addressed.
-
-REPLY [12 votes]: Yes, this is known as Ladner's theorem, proved by Richard Ladner in 1975. The class of such problems is called NP-intermediate. Here's one proof that does "cut and paste" between some problem in P and some NP-complete problem, here are two, and there are others.
-There are also problems that are already believed to be neither NP-complete nor in P, like integer factorisation and graph isomorphism, as well as problems known to be in NP ∩ coNP but not known to be in P.<|endoftext|>
-TITLE: Measure of the Cantor set plus the Cantor set
-QUESTION [14 upvotes]: The sum of 2 sets of measure zero might well be very large; for example, the sum of the $x$-axis and the $y$-axis is nothing but the whole plane. Similarly one can ask this question about Cantor sets:
-If $C$ is the Cantor set, then what is the measure of $C+C$?
-
-REPLY [31 votes]: Here is the full question from Halmos's Problems for mathematicians, young and old, followed by his hint and his solution (given in the original answer as three scanned images).<|endoftext|>
-TITLE: Geometric Proof of the Formula for Simplex Numbers
-QUESTION [12 upvotes]: I was quite impressed by a beautiful proof [1] of the formula for the n'th triangular number — it's kind of a bijective proof built on top of a geometric summation idea.
-Question: does this argument generalize to higher dimensions? i.e. can someone prove the formula for the n-th «k-dimensional simplex number» this way?
-
-For example, the statement for k=3 is that the n-th pyramidal number is $\binom{n+2}{3}$.
-(One obvious fallback would be observing that any ball in the pyramid is characterized by its coordinates [2], i.e. balls are counted by a negative binomial coefficient — which is exactly the desired result. That's a good proof, but needless to say, that's not the proof I'm looking for.)
-[1] original post by Mariano Suárez-Alvarez, cited by Vaughn Climenhaga
-[2] a k-simplex is the subset $x_0+\dots+x_k=1$, $x_i\ge0$ in $\mathbb{R}^{k+1}$, and the n-th k-simplex number is the number of non-negative integer solutions of $x_0+\dots+x_k=n-1$
-
-REPLY [6 votes]: Place the balls so that the corners of the pyramid have coordinates (1,1,1), (n,1,1), (1,n,1), (1,1,n). Then for the ball at (x,y,z) we see that (x,x+y,x+y+z) is an ordered triple of distinct numbers between 1 and n+2 inclusive. There are $\binom{n+2}{3}$ such triples, and they are clearly in one-to-one correspondence with the balls. Geometrically, this corresponds to drawing planes x=constant, x+y=constant, and x+y+z=constant through a given ball, and seeing where these planes intersect the x axis.
-(To compare with the case $k=2$, "straighten" the picture above into a right triangle. Then the diagonal lines become lines x=constant and x+y=constant, and the bottom row of balls (the blue ones) can be identified with the points where these lines intersect the x axis.)
-This also works in higher dimensions, giving $\binom{n+k-1}{k}$.
-Whether this proof really is any different from the one you suggested in the question can of course be debated...<|endoftext|>
-TITLE: Multiples of 4 as sum or difference of 2 squares
-QUESTION [7 upvotes]: Is it true that for any $n \in \mathbb{N}$ we can have $4n = x^{2} + y^{2}$ or $4n = x^{2} - y^{2}$, for $x,y \in \mathbb{N} \cup \{0\}$?
-I was just working out a proof and this turns out to be true from $n=1$ to $n=20$. After that I didn't try, but I would like to see if a counterexample exists for a greater value of $n$.
-
-REPLY [7 votes]: It is true because
-$$ (n+1)^2 - (n-1)^2 = 4n $$
-
-REPLY [4 votes]: Here is a hint: the form $x^2-y^2$ factors as $(x+y)(x-y)$. Therefore, if you want to represent an integer $N$ as $x^2-y^2$, you can attempt to do so by choosing a factorization $N = ab$ and solving the linear system
-$x+y = a$
-$x-y = b$.
-This system has the unique rational solution $x = \frac{a+b}{2}$, $y = \frac{a-b}{2}$. This gives an integral solution iff $a$ and $b$ have the same parity. Use this to show:
-A positive integer $N$ is of the form $x^2 - y^2$ for $x,y \in \mathbb{Z}$ (possibly $0$) iff $N$ is odd or $N$ is divisible by $4$.
-In particular, $4n$ is always of the form $x^2-y^2$. You want a little more: that $x$ and $y$ are both nonzero. Clearly $x$ cannot be zero, so you need to analyze the case $y = 0$ and show that whenever all possible solutions to $4n = x^2 - y^2$ have $y = 0$, then there are nonzero $X$ and $Y$ such that $4n = X^2 + Y^2$. This is not so hard...<|endoftext|>
-TITLE: Has L'Hopital's Rule been studied as an operator?
-QUESTION [8 upvotes]: I discovered while teaching Calc 2 that if you apply L'Hopital's rule to $\frac{x}{\sqrt{x^2+1}}$ you get $\frac{\sqrt{x^2+1}}{x}$, and if you apply L'Hopital again you get $\frac{x}{\sqrt{x^2+1}}$ back. In other words the L'Hopital operator has a cycle of order two.
-EDIT (Thanks KennyTM): "I suppose the L'Hopital operator should be defined on equivalence classes of pairs $(f(x),g(x))$ of differentiable functions with the fractional equivalence: $(f(x),g(x))\equiv(h(x),k(x))$ if and only if $fk=gh$." This does not work. But it doesn't really take pairs of functions to pairs of functions, either. So the first problem is to find out how it is an operator.
-Has anyone ever studied this operator? Wikipedia tells me nothing.
-Yes, I know it is easier to find the limit by dividing through by $x$, but some students want to apply L'Hopital to everything.
-
-REPLY [2 votes]: I dunno if it's been studied as a differential operator, and I kind of doubt it, but I think you could define it in a way similar to the way that you suggested.
-You consider two pairs (f,g) and (h,k) equivalent if $f/g = h/k$ or rather $fk = gh$. Then the operator L acts on equivalence classes by the following operation:
-$ L(f,g) = (Df, Dg) $
-But for this operator to be well defined we must enforce that $L(h,k)$ is equivalent to $L(f,g)$. This means that we must have
-$Df/Dg = Dh/Dk$.
-This is not always true. Suppose that $(f,g) = (x, x^2)$ which is equivalent to $(h,k) = (1,x)$.
-$Df/Dg = \frac{1}{2x} \neq 0 = Dh/Dk$
-So the L'Hospital operator is well defined on the equivalence classes of pairs $(f,g)$ with $g$ non-constant with the equivalence relation:
-$(f,g)$ equiv $(h,k) \Leftrightarrow fk = hg $ and $ Df Dk = Dh Dg $.<|endoftext|>
-TITLE: Quadratic reciprocity via generalized Fibonacci numbers?
-QUESTION [60 upvotes]: This is a pet idea of mine which I thought I'd share.
Fix a prime $q$ congruent to $1 \bmod 4$ and define a sequence $F_n$ by $F_0 = 0, F_1 = 1$, and
-$\displaystyle F_{n+2} = F_{n+1} + \frac{q-1}{4} F_n.$
-Then $F_n = \frac{\alpha^n - \beta^n}{\alpha - \beta}$ where $\alpha, \beta$ are the two roots of $f(x) = x^2 - x - \frac{q-1}{4}$. When $q = 5$ we recover the ordinary Fibonacci numbers. The discriminant of $f(x)$ is $q$, so it splits $\bmod p$ if and only if $q$ is a quadratic residue $\bmod p$.
-If $\left( \frac{q}{p} \right) = -1$, then the Frobenius morphism $x \mapsto x^p$ swaps $\alpha$ and $\beta$ (working over $\mathbb{F}_p$), hence $F_p \equiv -1 \bmod p$. And if $\left( \frac{q}{p} \right) = 1$, then the Frobenius morphism fixes $\alpha$ and $\beta$, hence $F_p \equiv 1 \bmod p$. In other words,
-$\displaystyle F_p \equiv \left( \frac{q}{p} \right) \bmod p.$
-Quadratic reciprocity in this case is equivalent to the statement that
-$\displaystyle F_p \equiv \left( \frac{p}{q} \right) \bmod p.$
-Question: Does anyone have any ideas about how to prove this directly, thereby proving quadratic reciprocity in the case that $q \equiv 1 \bmod 4$?
-My pet approach is to think of $F_p$ as counting the number of ways to tile a row of length $p-1$ by tiles of size $1$ and $2$, where there is one type of tile of size $1$ and $\frac{q-1}{4}$ types of tiles of size $2$. The problem is that I don't see, say, an obvious action of the cyclic group $\mathbb{Z}/p\mathbb{Z}$ on this set. Any ideas?
-
-REPLY [25 votes]: The following paper seems to answer your question: P. T. Young, "Quadratic reciprocity via Lucas sequences", Fibonacci Quart. 33 (1995), no. 1, 78–81.
-Here's its MathSciNet Review by A. Grytczuk:
-
-Let $\{\gamma_n\}^\infty_{n=0}$ be a given Lucas sequence defined by $\gamma_0=0$, $\gamma_1=1$, $\gamma_{n+1}=\lambda \gamma_n+\mu \gamma_{n-1}$, $n\geq 1$, $\lambda, \mu\in{\bf Z}$, and let $q$ be an odd prime such that $D=(\frac{-1}q)q=\lambda^2+4\mu$. Then the author proves that there is a unique formal power series $\Phi$ with integer coefficients and constant term zero such that (1) $\sum^\infty_{n=1}\gamma_n\Phi^n(t)/n=\sum^\infty_{n=1}(\frac nq)t^n/n$ holds, where $(\frac nq)$ is the Legendre symbol.
- From this result follows the Gauss law of quadratic reciprocity in the following form: (2) $(\frac pq)=(\frac Dp)$, where $p$, $q$ are distinct odd primes and $D=(\frac{-1}q) q=\lambda^2+4\mu$.
-
-Here's the direct link to the paper.<|endoftext|>
-TITLE: What is the biggest number ever used in a mathematical proof?
-QUESTION [9 upvotes]: Probably a proof (if any exist) that calls upon Knuth's up-arrow notation or Busy Beaver.
-
-REPLY [7 votes]: In one of Friedman's posts on the FOM mailing list, he mentions a number called SCG(13) that is far larger than TREE(3): http://www.cs.nyu.edu/pipermail/fom/2006-April/010362.html
-I couldn't find a lot of other information about it, though.<|endoftext|>
-TITLE: What are large cardinals for?
-QUESTION [12 upvotes]: I've heard large cardinals talked about, and I (think I) understand a little about how you define them, but I don't understand why you would bother.
-What's the simplest proof or whatever that requires the use of large cardinals? Is there some branch of mathematics that makes particularly heavy use of them?
-
-REPLY [10 votes]: G. Rodrigues's specific answer gets at the general issue: large cardinals are used to examine how much more one can prove in ZFC set theory. The first time I discovered large cardinals (in Jech's 2000 book Set Theory), I was amazed.
A large cardinal is just a "very big" set, after all, but I did not realize that the existence of such a set changed the nature of what was mathematically provable. For example, there is, according to Jech, the event that started it all: Ulam's work on the problem of measure. It is well-known that Lebesgue measure over the reals is not defined for all sets, but it turns out to be undecidable in ZFC alone if any non-trivial measure defined on all subsets of the reals exists at all. In order to get such a measure, one must assume the existence of a large cardinal, which is now called a measurable cardinal. So I think of large cardinals as things that change the very nature of the mathematical "plumbing". Deep stuff.
-On a more practical level, I think it was Dudley who said that large cardinals can be useful for seeing why a proof is failing: if a proof is not working, seeing if it fails at a large cardinal can provide insight.<|endoftext|>
-TITLE: Elementary proof of the Prime Number Theorem - Need?
-QUESTION [15 upvotes]: Although I am very much new to "Analytic Number Theory", there are some non mathematical questions which puzzle me. First of all, why was G.H. Hardy so keen to have an elementary proof of the Prime Number Theorem. He also stated that producing such a proof will change the complexion of Mathematics, but nothing like that has happened. What was on Hardy's mind?
-Although the elementary proof has some intricate tricks involved, I am curious to know whether the methodology can be applied for attacking more complex problems. I have seen that the analytic proof has a continuation and is not over, and discusses some more interesting properties regarding the $\zeta$ function.
-I also saw this thread. Are there any such theorems which pique people's curiosity in getting an elementary proof. (While writing this FLT comes to my mind!!).
-
-REPLY [7 votes]: It's definitely late in the game to this question, but Martin kindly pointed me here, and I think there's something else to be said.
-The search for the elementary proof has two points of view I think are related to the soul of mathematics as it stood during the time when the preoccupation was great.
-
-The First
-Further Advancement of the subject through new techniques.
-
-The idea of an elementary proof meant that we would necessarily need to come up with new ideas that we had not had before in order to produce the proof that would prior only yield to techniques using analytic methods. It is very common in mathematics to have several proofs of the same results. Oftentimes new proofs simplify old ones--allowing easier transmission of the ideas--and oftentimes bring in new viewpoints which allow the theory to take a large step forward.
-Tate's thesis allows one to prove the functional equation for the $\zeta$ function. This is something we've known for some time, but the ideas present in the new proof allowed for the roots of incredibly important new mathematics to be developed because of how he did it. One can see very slick, elucidating proofs which somehow seem to be the "right" ones. Erdös would probably say they were "proofs from the book" which is the namesake and inspiring spirit of this book. The proof of the PNT equivalence with the non-vanishing statement mentioned in the previous answer comes from a proof of the PNT through the Wiener-Ikehara theorem, another idea that has more widespread applications than just the purpose to which it is put with the PNT.
-And this is not a phenomenon unique to number theory.
We have a proof of the classification of surfaces via Ricci flow, decades after the original proof, which motivated the idea that techniques using Ricci flow might give the classification of $3$-manifolds (Perelman's celebrated proof of the Poincaré conjecture proved this and more to be true.) Many old results in modular arithmetic are much easier to prove using the language of groups, such as the fact that $\left(\Bbb Z/p\Bbb Z\right)^*$ is a cyclic group or Wilson's theorem. The proof that $\Bbb R^1\not\cong\Bbb R^n$ for $n>1$ is easy and uses only that the image of a connected set is connected, however that method doesn't generalize nicely. Compare with the homology proof, and we can easily demonstrate $\Bbb R^n\not\cong\Bbb R^m$ (as topological spaces) for any $n\ne m$. The idea of "uniform distribution modulo $1$" gave rise to topological group dynamics, whose modern techniques have proven very powerful indeed.
-In short there are some techniques that are only able to go so far, it often takes a genuinely new idea to allow for a great surge forward in the theory, and in this spirit the elementary proof presented an opportunity for such advancement. If it proved to exist, as Hardy noted, it might give us greater insight into how to understand the theorem.
-It is of course true that we do not "need" that proof for the purposes of establishing the PNT, but that's too simplistic a way to think about the elementary proof, or indeed of any proof which uses different approaches from the original proofs. New proofs have almost inherent merit if they are qualitatively different, in that they encourage us to look in those directions for proving other results which might not yield to the standard techniques to which we are accustomed.
-
-The second
-A little bit of superstition
-
-Historically new discoveries in all of the sciences, mathematics included, have been intertwined with politics and superstition. In the ancient days the Pythagoreans had an entire cult dedicated to numbers, which--to the Greeks--meant rational numbers. Indeed, there is some historical evidence to suggest the man who discovered irrational numbers may have been killed over the fact! Other notable examples are complex numbers, which were shunned or ignored for the longest time.
-The biggest example I can think of is the axiom of choice. Once upon a time it was a much more central focus in mathematics. Proofs using it were sometimes rejected by some sectors for not being constructive. Brouwer even renounced his own fixed-point theorem proof for not being constructive. I imagine it must just not have sat well with a greater percentage of the mathematical population than it does today (I still know of a good handful who will argue vehemently on the subject). I think especially after the long time where we had a lot less precise formulations of theorems, and less air-tight proofs than we do today, this was a more valid concern, but experience seems to show we really shouldn't worry too much about such things.
-That is all to say that the relative importance attached to an elementary proof seems historically similar to desires for constructive proofs of things like fixed points, and to an extent I acknowledge they are useful to have for the purposes of illustration, and sometimes in practice where such things are useful objects. Cantor's work proves there are a lot of transcendental numbers, but Liouville, Lindemann, Baker, et al are those that give us our best examples.
At the same time, the community's interest moves on to bigger and ultimately more important things.
-
-In short
-I think Hardy was much like any other mathematician of his time: still close enough to the original proof to wonder about a simpler (or at least more elementary) proof, an interest which has greatly faded away with all the many proofs we have today. Even today there is a bit of mysticism with our subject, some things that feel like they ought to be true, even if we cannot prove them. I think Hardy was--mistakenly--in the camp that thought somehow an elementary proof would reveal some deep secret about primes in the way it was proved. It could certainly have seemed that way based on what was known and believed in his time, so it was not unreasonable, it just happened to be incorrect.
-Despite what some may think, a lot of what is studied comes from where the general interest lies--Kronecker vs Dedekind turned a lot of those in the latter's school to push down those in the former's, though that effect has naturally lessened over time. The personalities of the day played a large role in what was deemed "important" to discover. Hilbert's famous problems probably gave direction to thousands upon thousands of careers. Ultimately the field continues to evolve, and things like the elementary proof do have their place in the history and in the practice, it just depends on your personal views on whether "requiring complex analysis" really makes a result "fundamentally deeper" than one that does not.<|endoftext|>
-TITLE: Toward "integrals of rational functions along an algebraic curve"
-QUESTION [18 upvotes]: In a talk by V.I. Arnold, this is said:
-
-When I was a first-year student at the Faculty of Mechanics and Mathematics of the Moscow State University, the lectures on calculus were read by the set-theoretic topologist L.A. Tumarkin, who conscientiously retold the old classical calculus course of French type in the Goursat version. He told us that integrals of rational functions along an algebraic curve can be taken if the corresponding Riemann surface is a sphere and, generally speaking, cannot be taken if its genus is higher, and that for the sphericity it is enough to have a sufficiently large number of double points on the curve of a given degree (which forces the curve to be unicursal: it is possible to draw its real points on the projective plane with one stroke of a pen).
-
-I would like to understand the mathematical part of this. What do I need to know to see why this makes sense? Where can I get enough of the background to understand it fully?
-
-REPLY [20 votes]: What Arnold is describing (in a slightly oblique manner) is the theory of algebraic curves.
-The idea is that if one wants to integrate some rational expression $R(x,y)dx + S(x,y) dy$
-over the curve $f(x,y) = 0$, then the question of whether one can find an antiderivative
-in terms of elementary functions has a positive or negative answer depending on whether
-the geometric genus of the curve $f(x,y) = 0$ is zero or positive.
-One direction is not so hard to see directly: if $f(x,y) = 0$ has geometric genus zero,
-this means that we can trace out this curve in terms of a single parameter, i.e.
-we can find rational parametric expressions $x = x(t)$ and $y = y(t)$ so that $f(x(t),y(t)) = 0$.
-Then if we rewrite the integral in terms of the variable $t$, basic integral calculus (the substitution rule) lets us rewrite the integrand as a rational function of $t$, and we can
-always integrate a rational function in terms of elementary functions.
-What is less obvious is that if $f(x,y) = 0$ has positive geometric genus, then it is not possible to find such a parameterization of the curve (this is a non-trivial statement),
-and it is not possible to find an elementary antiderivative (this is related to the previous statement, but is another non-trivial deduction).
-The first example is the curve $y^2 = (1-x^2)(1-kx^2)$ (here $k$ is some constant, neither 0 nor 1),
-with the integral being $\int dx/y = \int dx/\sqrt{(1-x^2)(1 - kx^2)}$. This is what is called an elliptic integral, and (for more or less 150 years, beginning with the invention
-of calculus) people tried to find an elementary expression for it, until finally Abel and Jacobi showed that this wasn't possible, because this curve has geometric genus one.
-If you don't know any algebraic geometry, then a good place to start is Miles Reid's "Undergraduate algebraic geometry". The theorem you need is the one which says that
-there is no non-constant rational map from a genus zero curve to a positive genus curve,
-which I'm pretty sure is proved in that book, at least for genus one curves.
-(It is not so difficult to pass from this theorem in the case of smooth curves to the
-case of singular curves, but you will have more difficulty finding a treatment of the
-singular curve case, which is what Arnold is talking about when he mentions double points.)
-Depending on the level at which you're beginning, you might want to consult one of the algebraic geometry road-map questions on Mathoverflow; there is one asking for an undergraduate road-map for learning algebraic geometry, and a second asking for a graduate road-map.
-If you do already know some algebraic geometry, then what you want is a historical source
-that relates the geometry you know to its historical origins. Dieudonne wrote a history of algebraic geometry, which must surely discuss this. There will be many other historical
-sources too. For the best guide to the historical literature, you might want to ask on Mathoverflow, where there will probably be more people reading who are familiar with historical treatments of the theory.
-I should say that if you are beginning from a position of knowing no algebraic geometry, then it will take some time and effort to learn what is needed to fully understand what Arnold is discussing, especially from a standard textbook (which will likely not proceed in a straight line to where you want to go, but rather develop a more general theory, which will then be specialized to the situation you are interested in).
-So even if that is your situation, I recommend that you also do some historical reading,
-to help get a better feeling for exactly what parts of an algebraic geometry text-book
-you will need to read to satisfy your interest in Arnold's statement.<|endoftext|>
-TITLE: $100$ boxes of fruits - pick $51$ and get at least half of each type?
-QUESTION [11 upvotes]: A friend told me this one, I'm completely stuck but also completely fascinated:
-There are $100$ boxes with apples, oranges and bananas (mixed). How does one prove that you can pick $51$ boxes and get at least half of all apples, at least half of all oranges and at least half of all bananas?
-Edit: You can take a look in the boxes.
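-For a sanity check, here is a small brute-force Python sketch of the scaled-down claim ($2k$ boxes, pick $k+1$); with random contents it found no counterexample for $6$ boxes:
-
-import itertools, random
-
-def can_pick(boxes):
-    # boxes: list of (apples, oranges, bananas) triples
-    total = [sum(b[i] for b in boxes) for i in range(3)]
-    k = len(boxes) // 2
-    # does some (k+1)-subset contain at least half of each fruit?
-    for pick in itertools.combinations(boxes, k + 1):
-        if all(2 * sum(b[i] for b in pick) >= total[i] for i in range(3)):
-            return True
-    return False
-
-random.seed(0)
-for _ in range(1000):
-    boxes = [tuple(random.randint(0, 9) for _ in range(3)) for _ in range(6)]
-    assert can_pick(boxes)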
-
-REPLY [6 votes]: This is apparently a (hard) problem from the Russian Math Olympiad which no one in the exam solved.
-See here for a list of questions in that exam: http://www.artofproblemsolving.com/Forum/viewtopic.php?f=125&t=32171
-A solution for this problem is here: http://www.artofproblemsolving.com/Forum/viewtopic.php?p=1367869#p1367869
-A hint that was given (by Fedor Petrov):
-
-If we have $2k$ boxes, we may
- partition them into two groups of $k$
- boxes in such a way that number of
- apples in both groups differ by at
- most the maximal number of apples in a
- single box, and the same for oranges.<|endoftext|>
-TITLE: square root of symmetric matrix and transposition
-QUESTION [15 upvotes]: I have a symmetric matrix A. How do I compute a matrix B such that $B^tB=A$, where $B^t$ is the transpose of $B$? I cannot figure out if this is at all related to the square root of $A$.
-I've gone through the Wikipedia links on the square root of a matrix.
-
-REPLY [17 votes]: As J. M. says, you need your matrix $A$ to be positive semidefinite. Since $A$, being symmetric, is always diagonalizable, this is the same as saying that it has non-negative eigenvalues. If this is the case, you can adapt alex's comment almost literally for the real case: as we've said, $A$ is diagonalizable, but, also, there exists an orthonormal base of eigenvectors of $A$. That is, there is an invertible matrix $S$ and a diagonal matrix $D$ such that
-$$
-D = SAS^t , \quad \text{with} \quad SS^t = I \ .
-$$
-Since
-$$
-D = \mathrm{diag} (\lambda_1, \lambda_2, \dots , \lambda_n)
-$$
-is a diagonal matrix and has only non-negative eigenvalues $\lambda_i$, you can take its square root
-$$
-\sqrt{D} = \mathrm{diag} (\sqrt{\lambda_1}, \sqrt{\lambda_2}, \dots , \sqrt{\lambda_n} ) \ ,
-$$
-and then, on one hand, you have:
-$$
-\left( S^t \sqrt{D} S \right)^2 = \left( S^t \sqrt{D} S\right) \left(S^t \sqrt{D} S \right) = S^t \left( \sqrt{D}\right)^2 S = S^t D S = A \ .
-$$
-On the other hand, $S^t \sqrt{D} S$ is a symmetric matrix too:
-$$
-\left( S^t \sqrt{D} S \right)^t = S^t (\sqrt{D})^t S^{tt} = S^t \sqrt{D^t} S = S^t \sqrt{D} S \ ,
-$$
-so you have your $B = S^t \sqrt{D} S$ such that $B^t B = A$.<|endoftext|>
-TITLE: Are there higher-dimensional analogues of sectional curvature?
-QUESTION [15 upvotes]: I recently learned that on Riemannian manifolds, one can define the sectional curvature (http://en.wikipedia.org/wiki/Sectional_curvature) of a (2-dimensional) plane section. I was wondering if a similar concept exists for higher dimensional "space sections."
-Here is what got me thinking about this: For 2-dimensional manifolds (surfaces), the sectional curvature is equal to $\kappa_1\kappa_2$, where $\kappa_1$ and $\kappa_2$ are the principal curvatures. Is there a name for the quantity $\kappa_1\kappa_2\kappa_3$ for 3-manifolds, etc., and does it carry similar geometric significance?
-(Edit: Typesetting fixed)
-
-REPLY [4 votes]: In general the term "sectional curvature" is used in the n-dimensional setting. Basically one computes what is known in the theory of surfaces as "Gauss curvature" for the surface gotten by exponentiating every 2-plane inside the tangent space of the n-manifold. Then one can show the following things to build intuition:
-
-If two Riemann curvatures give the same sectional curvature on every 2-plane then the Riemann curvatures are equal as endomorphisms.
-Ricci curvature along a direction is the sum of the sectional curvatures of all possible 2-planes spanned by that direction and one more vector from a basis gotten by extending the given direction.
-Scalar curvature at a point is the sum of the sectional curvatures of all possible 2-planes spanned by a chosen basis.
-
-A theorem of Schur says that sectional curvature, if constant on all 2-planes at each point, is constant at all points of the manifold. Sectional curvature is the strongest notion of curvature and will be constant only under very strong conditions like maximal symmetry or locally geodesic reflecting isometry or transitive action of the isometry group on orthonormal frames.<|endoftext|>
-TITLE: Irrational painting device
-QUESTION [11 upvotes]: Part a) of the following problem appeared in one of the Putnam Exams (sorry, don't know which year exactly).
-If you want to solve Part a) don't read Part b).
-You have a painting device, which given the co-ordinates of a point in the 2D plane, will colour black all points on that plane which are at an irrational distance from the given point.
-Initially you start out with the 2D plane being white.
-a) You want to colour the whole plane black. What is the minimum number of points you need to feed to the painting device?
-b) Show that it is sufficient to feed $(0,0), (1,0), (\sqrt{2},0)$.
-
-REPLY [6 votes]: Each proof thus far is computational, some relying heavily on number theory. The top-rated answer has a noted flaw. Below, I've reproduced a proof from Halmos' "Problems for Mathematicians, Young and Old" (problem 4k):
-It's easy to see that two points will not suffice: take a point on the perpendicular bisector of the first two points at a rational distance from both (the two distances are equal there, so we may choose the common distance to be rational); such a point is left unpainted. To show that three points are sufficient, we note that each of the first two points leaves unpainted countably many circles of points (those with rational distance from the center). Thus, after two applications, we are left with countably many points, corresponding to intersection points of countably many circles. Take an arbitrary line in $\mathbb{R}^2$. Each remaining point is at rational distance from only countably many points of this line, so the points of the line at rational distance from some remaining point form a countable set; take a point on the line outside this set. Center our painting device here; then it is at an irrational distance from each remaining point.
-The upshot of this approach: full generality to higher dimensions.<|endoftext|>
-TITLE: the logarithm of quaternion
-QUESTION [14 upvotes]: I'm reading 3D math primer for graphics and game development by Fletcher Dunn and Ian Parberry. On page 170, the logarithm of quaternion is defined as
-\begin{align}
-\log \mathbf q &= \log \left( \begin{bmatrix} \cos \alpha & \mathbf n \sin \alpha \end{bmatrix} \right) \\\\
-&\equiv \begin{bmatrix} 0 & \alpha \mathbf n \end{bmatrix}
-\end{align}
-I don't see how $\log \left( \begin{bmatrix} \cos \alpha & \mathbf n \sin \alpha \end{bmatrix} \right)$ is equal to $\begin{bmatrix} 0 & \alpha \mathbf n \end{bmatrix}$. Can anyone help me out?
-Thanks.
-
-REPLY [4 votes]: Just recall that $\exp(\alpha i) = \cos \alpha + i \sin \alpha$ for complex numbers; the quaternion version (remember a quaternion is just 3 complex numbers which all have the same real part) is the direct analogue, $\exp(\alpha \mathbf n) = \cos \alpha + \mathbf n \sin \alpha$ for a unit vector $\mathbf n$ (since $\mathbf n^2 = -1$). Take the logarithm of both sides.<|endoftext|>
-TITLE: On applying the quadratic formula to a first-degree equation
-QUESTION [19 upvotes]: You're probably thinking, "Why?" Please let me explain...
-It is (very) well-known that
-$$ \forall (a,b,c,x) \in \mathbb{C}^* \times \mathbb{C}^3: ax^2 + bx + c = 0 \Leftrightarrow x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}. $$
-For some bizarre reason, I decided to try to solve $ bx + c = 0 $ using this formula by introducing a term $ \alpha x^2 $ and removing it in the limit $ \alpha \to 0 $. Doing so with L'Hopital's rule, I find these solutions:
-$$ \displaystyle x_1 = \lim_{\alpha \to 0} {\frac{-b + \sqrt{b^2 - 4c \alpha}}{2 \alpha}} = \lim_{\alpha \to 0} {\frac{-c}{\sqrt{b^2 - 4c \alpha}}} = \frac{-c}{b}, $$
-$$ \displaystyle x_2 = \lim_{\alpha \to 0} {\frac{-b - \sqrt{b^2 - 4c \alpha}}{2 \alpha}} = \infty. $$
-The first was to be expected, but I still haven't been able to explain the second cleanly (that is, in a way other than "since $ -c/b $ is gone, it couldn't be a true number").
-In addition, carrying out the analogous process one degree lower yields a root at either zero or infinity, depending on the constant. The latter possibility (which occurs when $ c \neq 0 $) corresponds to the unsolvable case, while the former (in which $ c = 0 $) corresponds to the trivially satisfied one, so a root at zero here appears to have a vastly different meaning from $ x_1 = 0 $ above, where $ x_1 $ gives the location of the unique, genuine root of $ bx + 0 = 0 $ provided $ b \neq 0 $.
-My question is
-
-why a solution at zero can have either of the two meanings just described, and
-whether the phantom root $ x_2 = \infty $ (obtained by treating the first-degree polynomial $ bx + c $ as a degenerate case of the second-degree one) has a meaningful interpretation.
-
-Thank you all in advance, and sorry if my typesetting doesn't render nicely (this is my first experience).
-
-REPLY [19 votes]: I think the answer is to work projectively. Rather than consider the solutions to $ax^2 + bx + c = 0$ in $\mathbb{C}$ one should think of the solutions to $aX^2 + bXY + cY^2 = 0$ in $\mathbb{P}^1(\mathbb{C})$. Then the $a = 0$ case is easy to explain; the corresponding equation $bXY + cY^2 = 0$ has one root $(c : -b)$ which is expected and another $(1 : 0)$ which is the point at infinity.
-This seems reasonable to me because the degeneration at $a = 0$ is something like a failure of Bezout's theorem, which is repaired precisely by working projectively.
-
-REPLY [18 votes]: Informally, the "quadratic" polynomial with a=0 has a second zero at the compactification point at infinity. Graphically (working in the reals), as a goes through zero, the "quadratic" goes through the linear special case, where the second zero goes through infinity, crossing between the positive and negative ends of the real axis.
-I believe that this parallels the more technical explanation given by Qiaochu Yuan.
-
-REPLY [17 votes]: Just a note on your attempt to solve a degenerate quadratic: remember that the quadratic formula can be derived in two ways: solving ax²+bx+c for x, or solving a+b/x+c/x² for 1/x and then reciprocating the result. Viewing it in this manner, one equation's "infinite root" is the reversed equation's 0 root.<|endoftext|>
-TITLE: Euler's remarkable prime-producing polynomial and quadratic UFDs
-QUESTION [14 upvotes]: A good example of a polynomial which produces a long run of primes is $$x^{2}+x+41$$ which produces primes for every integer $ 0 \leq x \leq 39$.
-In a paper H.
Stark proves the following result: $X_{n}$ (the ring of "algebraic integers" in $\mathbb Q(\sqrt{-n}))$ is a principal ideal domain for positive $n$ if and only if
-$n = 1,2,3,7,11,19,43,67,163. $ For a reference one can see:
-Harold Stark, A complete determination of the complex quadratic fields of class-number one, Michigan Math J., 14 (1967) 1-27.
-Consider in general the polynomial $x^{2}+x + K= (x+ \alpha)(x+ \bar{\alpha})$, which we can factorize as shown, where $\alpha$ is given by $$ \alpha = \frac{{1} + \sqrt{1-4K}}{2}, \quad \bar{\alpha} = \frac{1 - \sqrt{1-4K}}{2}.$$
-One can get some relationships between polynomials which produce primes in the field $\mathbb Q(\sqrt{-n})$.
-
-The question is: if such a polynomial produces primes, then will $X_{n}$ as defined above be a PID?
-
-REPLY [20 votes]: THEOREM $\ $ The polynomial $\rm\ f(x)\ =\ (x-\alpha)\:(x-\alpha')\ =\ x^2 + x + k\ $ assumes only prime values for $\rm\ 0\ \le\ x\ \le\ k-2 \ \iff\ \mathbb Z[\alpha]\ $ is a PID.
-HINT $\ (\Rightarrow)\ $ Show all primes $\rm\ p \le \sqrt{n},\; n = 1-4k\ $ satisfy $\rm\ (n/p) = -1\ $ so no primes split/ramify.
-For proofs, see e.g. Cohn, Advanced Number Theory, pp. 155-156, or Ribenboim, My numbers, my friends, 5.7 p.108. Note: both proofs employ the bound $\rm\ p < \sqrt{n}\ $ without explicitly mentioning that this is a consequence of the Minkowski bound - presumably assuming that is obvious to the reader based upon earlier results. Thus you'll need to read the prior sections on the Minkowski bound. Compare Stewart and Tall, Algebraic number theory and FLT, 3ed, Theorem 10.4 p.176 where the use of the Minkowski bound is mentioned explicitly.
-Alternatively see the self-contained paper [1] which proceeds a bit more simply, employing Dirichlet approximation to obtain a generalization of the Euclidean algorithm (the Dedekind-Rabinowitsch-Hasse criterion for a PID). If memory serves, this is close to the approach originally employed by Rabinowitsch when he first published this theorem in 1913.
-[1] Daniel Fendel, Prime-Producing Polynomials and Principal Ideal Domains,
-Mathematics Magazine, Vol. 58, 4, 1985, 204-210<|endoftext|>
-TITLE: About powers of irrational numbers
-QUESTION [7 upvotes]: The square of an irrational number can be a rational number, e.g. $\sqrt{2}$ is irrational but its square is 2, which is rational.
-But is there an irrational number whose square root is a rational number?
-Is it safe to assume, in general, that the $n^{th}$ root of an irrational number will always be irrational?
-
-REPLY [4 votes]: It's true precisely because the rationals $\mathbb Q$ comprise a multiplicative subsemigroup of the reals $\mathbb R$,
-i.e. the subset of rationals is closed under the multiplication operation of $\mathbb R$. Your statement arises by taking the contrapositive of this statement - which transfers it into an equivalent statement in the complement set $\mathbb R \backslash \mathbb Q$ of irrational reals.
-Thus $\rm\quad\quad\quad r_1,\ldots,r_n \in \mathbb Q \;\Rightarrow\; r_1 \cdots r_n \in \mathbb Q$
-Contra+ $\rm\quad\; r_1 r_2\cdots r_n \not\in \mathbb Q \;\Rightarrow\; r_1\not\in \mathbb Q \;\:$ or $\rm\;\cdots\;$ or $\rm\;r_n\not\in\mathbb Q$.
-Your case $\rm\;\;\; r^n\not\in \mathbb Q \;\Rightarrow\; r\not\in \mathbb Q \;$ is the special constant case $\rm r_i = r$.
-Obviously the same is true if we replace $\rm\mathbb Q\subset \mathbb R$ by any subsemigroup chain $\rm G\subset H$.
-The contrapositive form is important in algebra since it characterizes prime ideals in semigroups, rings, etc.<|endoftext|>
-TITLE: Real Numbers to Irrational Powers
-QUESTION [17 upvotes]: In a related question we discussed raising numbers to powers.
-I am interested if anybody knows any results for raising numbers to irrational powers.
-For instance, we can easily show that there exists an irrational number raised to an irrational power such that the result is a rational number. Observe ${\sqrt 2 ^ {\sqrt 2}}$. Since we do not know if ${\sqrt 2 ^ {\sqrt 2}}$ is rational or not, there are two cases.
-
-${\sqrt 2 ^ {\sqrt 2}}$ is rational, and we are finished.
-${\sqrt 2 ^ {\sqrt 2}}$ is irrational, but if we raise it by ${\sqrt 2}$ again, we can see that
-$$\left ( \sqrt 2 ^ \sqrt 2 \right ) ^ \sqrt 2 = \sqrt 2 ^ {\sqrt 2 \cdot \sqrt 2} = \sqrt 2 ^ 2 = 2.$$
-
-Either way, we have shown that there exists an irrational number raised to an irrational power such that the result is rational.
-Can more be said about raising irrational numbers to irrational powers?
-
-REPLY [3 votes]: If a is an algebraic number other than 0, 1, or -1, then whenever you raise it to an irrational algebraic number, the result is transcendental by the Gelfond–Schneider theorem.<|endoftext|>
-TITLE: Cutting sticks puzzle
-QUESTION [93 upvotes]: This was asked on sci.math ages ago, and never got a satisfactory answer.
-
-Given a number of sticks of integral length $ \ge n$ whose lengths
- add to $n(n+1)/2$. Can these always be broken (by cuts) into sticks of
- lengths $1,2,3, \ldots ,n$?
-
-You are not allowed to glue sticks back together. Assume you have an accurate measuring device.
-More formally, is the following conjecture true? (Taken from iwriteiam link below).
-
-Cutting Sticks Conjecture: For all natural numbers $n$, and any given sequence $a_1, .., a_k$ of
- natural numbers greater or equal $n$ of which the sum equals
- $n(n+1)/2$, there exists a partitioning $(P_1, .., P_k)$ of $\{1, .., n\}$
- such that sum of the numbers in $P_i$ equals $a_i$, for all $1 \leq i \leq k$.
-
-Some links which discuss this problem:
-
-http://www.iwriteiam.nl/cutsticks.html
-
-REPLY [8 votes]: I have implemented the suggestion I made in a comment to joriki's answer. For $3 \le n \le 18$, I have generated a list of subsets $S \subset \{1,2,...,n-1\}$ with the property that if a set of sticks with total length $n(n+1)/2$ takes all the lengths in $S$, together with any other lengths ≥n, then the sticks can always be cut into sticks of length $1,2,...,n$. It is available at this link (it's about 900K).
-I stared at it for a while, but nothing jumped out at me.
-Edited to add: I have changed the program to output the sets in a more human-friendly order: part 1 (n = 1 to 17) and part 2 (n = 18).<|endoftext|>
-TITLE: Solutions to the equation $y^{(n)} y = 1$ for even $n$
-QUESTION [8 upvotes]: A long time ago I was curious about the closed-form solutions to the equation:
-\begin{equation*}
-\frac{d^{n}y}{dx^n} y = 1.
-\end{equation*}
-For $n$ an odd number, try $y = A x^k$. Then $y^{(n)} = A k(k-1)...(k-n+1)x^{k-n}.$
This gives the formula
-\begin{equation*}
-A^2 k(k-1)...(k-n+1) x^{2k - n} = 1
-\end{equation*}
-which can only be true if $k = \frac{n}{2}$, and $k$ cannot be an integer for the formula to work (check this yourself), so $n$ must be odd. Furthermore one has to have that
-\begin{equation*}
-A = (k(k-1)...(k-n+1))^{-1/2}
-\end{equation*}
-which is real when the product $k(k-1)...(k-n+1)$ is positive (this happens precisely when $n \equiv 1 \pmod 4$).
-Thus there are closed-form solutions to my problem for such $n$, and my question is if anyone can find a closed-form solution for $n=2$ or in general if $n$ is even.
-
-REPLY [9 votes]: Let $y=y(x)$ and write primes for derivatives w.r.t $x$. In the equation $$y''y=1$$ the independent variable $x$ does not appear, so we can do a (somewhat magical) change of variables: in the new equation, the independent variable will be $y$ and the dependent variable will be $p=y'$. If we use dots to denote derivatives with respect to $y$, a little computation using the chain rule shows that $y''=p\dot p$, so the equation becomes $$\dot ppy=1.$$ This can be rewritten as $$p\,\mathrm dp=\frac{\mathrm dy}{y},$$ which has its variables separated. It can be integrated at once, to get the relation $$\frac{p^2}{2}=\log y+c$$ with $c$ a constant. Now we recall what $p$ was, that is $y'$, and this becomes a new (first order!) ODE on $y(x)$. We have «simplified» things... (We could have obtained the same reduction of order by multiplying the original equation by $y'$, dividing it by $y$ and integrating: the trick with the $p$ substitution is, though, a general method)
-The same approach will similarly simplify the general case.<|endoftext|>
-TITLE: Need to compute values of all the entries of a $3 \times 3$ matrix?
-QUESTION [7 upvotes]: I have an image processing application in which I have a matrix equation as below:
-$A \cdot R=I$
-where,
-$A = 3 \times 3$ matrix (constants)
-$R = 3 \times 1$ matrix (column vector). Let's call this the actual output.
-$I = 3 \times 1$ matrix. Let's call $I$ the ideal output.
-I know the values of matrix $I$, and $R$. I have to find what matrix $A$, if post-multiplied by $R$, would give me matrix $I$.
-How can I set this situation up in matrix algebra and solve it to compute $A$?
-Any pointers would be helpful.
-Thank You.
--AD.
-
-REPLY [2 votes]: Given some $A$, you can measure how well it performs on the ensemble of $R_1,...,R_{24}$ and $I_1,...,I_{24}$ by computing the error. For example:
-$$
-J = \sum_{i=1}^{24} ||A R_i - I_i ||^2
-$$
-Where we have used the standard Euclidean norm. The goal is to find the $A$ that makes $J$ as small as possible. We can express this more compactly by stacking the vectors $R_i$ and $I_i$ into larger 3x24 matrices. So if we let: $R = [R_1 R_2 \dots R_{24}]$ and similarly for $I$, we can rewrite our error as:
-$$
-J = ||AR - I||_F^2
-$$
-where we are now using the Frobenius norm. This optimization problem can be solved in many different ways. For example, you can differentiate J with respect to A and set the result equal to zero. This yields the optimality condition:
-$$
-ARR^T = IR^T
-$$
-So any A that satisfies the above equation will be optimal in the sense that it will minimize J. If R is a full-rank matrix (likely), then the optimal A is given by:
-$$
-A = IR^T(RR^T)^{-1}
-$$<|endoftext|>
-TITLE: Elementary number theory results that are not generalized by ring or group theory
-QUESTION [10 upvotes]: I've taken an undergraduate course in ring and group theory, but haven't studied number theory formally.
I've noticed that many important results in number theory have been generalized in group/ring theory (e.g. Lagrange's Theorem generalizing Fermat's Little Theorem).
-My question is, are there any results in elementary number theory that have not been generalized using elementary group and ring theory?
-
-REPLY [10 votes]: Problems for integers whose generalizations in algebraic number theory have not been developed (or at least considered) are rare. The only examples I know of are:
-
-Ancient problems without a strong relation to modern algebraic techniques. This includes the existence of odd perfect numbers, and compositeness of Fermat numbers $2^{2^k}+1$. Keep in mind, though, that most of the very old problems have been understood or finished off by modern techniques, such as the congruent number problem and Fermat's Last Theorem being reduced to facts about elliptic curves, or the probabilistic approach to the distribution of special types of primes (twin, Mersenne, k-term progressions, etc), and these better-understood problems are more amenable to generalization beyond the integers.
-Questions of combinatorial nature, such as covering congruences or arithmetic progressions (van der Waerden theorems). Here the problems can be generalized to other rings, but most interest has focused on the integer case.
-Uses of number theory in logic and computer science, such as Goedel coding, complexity of Presburger arithmetic, or other problems where iterated exponentiation appears. Algebraic number theory is primarily about problems described by polynomial equations, and the reduction of exponential Diophantine equations to polynomial ones (as in the solution of Hilbert's 10th problem) is not direct enough to allow the use of the standard algebraic tools.
-
-For everything else, where standard algebraic techniques or analytic ones (zeta functions, Diophantine approximation, sieves, transcendence theory, complex analysis, etc) are applicable to a problem over the integers, there has usually been an effort to search for analogues of those methods in algebraic number theory, p-adic number theory, geometry over finite fields, and complex geometry. This applies to all the major techniques and theories developed since 1800. So to find examples one needs to move away from "central core" number theory and look at questions that are not, so far, primarily studied by the methods of algebraic number theory and algebraic geometry.<|endoftext|>
-TITLE: Adjoint functors requiring a natural bijection
-QUESTION [13 upvotes]: When showing that two functors $F:A\rightarrow B$ and $G:B\rightarrow A$ are adjoint, one defines a natural bijection $\mathrm{Mor}(X,G(Y)) \rightarrow \mathrm{Mor}(F(X),Y)$. What if one does not require the bijection to be natural; what issues would arise?
-
-REPLY [3 votes]: Given a natural transformation
-$$\varphi_{} \colon \mathbf A(-,G(-)) \to \mathbf B(F(-),-) $$
-this is the same as a family of natural transformations between the functors
-$$\varphi_Y \colon \mathbf A(-,G(Y)) \to \mathbf B(F(-),Y)$$
-natural in $Y$.
-For every $Y \in \mathbf B$, by the Yoneda lemma we have that, if $\epsilon_Y=\varphi_Y(1_{G(Y)})$, then
-$$\varphi_Y(f)=\mathbf B(F(f),Y)(\epsilon_Y)=\epsilon_Y \circ F(f)$$
-for every $f \in \mathbf A(X,G(Y))$.
-The requirement that $\varphi$ is an isomorphism implies that the $\varphi_Y$ are all isomorphisms and so $\epsilon_Y$ must be universal (and that's why adjoints are important, because they give rise to universal objects).
-If you drop the naturality condition you cannot use Yoneda and so you cannot get the universal morphism.
-That's why we need naturality. :)
-Hope this helps.<|endoftext|>
-TITLE: Is there some universal sense of -ification (eg, groupification) in category theory
-QUESTION [15 upvotes]: I have three questions.
-1:
-Does the groupification of a semigroup always exist? I believe this should be yes because for every $x$ in the semigroup one could just define an element $x'$ that should work as its inverse. But what would then happen to the product $x'y$ for $x,y$ elements of the semigroup? It feels like we get choices (or maybe not) here that mess things up.
-2:
-When defining the groupification, $G$, of a semigroup $S$ one requires it to come with a morphism (of semigroups) $S \rightarrow G$ such that any other morphism (of semigroups) from $S$ to another group $G'$ factorizes through the previous map. Exactly which type of objects can be groupified? I guess one cannot groupify a topological space.
-3:
-This is a broad question but is there some sense of -ification? In the example one could replace "group" by "topological space" and talk about topologyfication. Now, no such word seems to exist so I guess one could not "topologyfy".
-We can (I think) consider the groupification functor from the category of semigroups to the category of groups and it should be adjoint to the forgetful functor from the category of groups to the category of semigroups. This would suggest that we need some sense of a forgetful functor in the first place to talk about a -ification.
-Apologies for this bad question, sometimes asking the right question is just as hard as answering it.
-
-REPLY [5 votes]: The really really general construction (which is indeed in many cases the left adjoint of the forgetful functor) is called the completion of an object. The basic idea is to find the 'most natural' or 'smallest' object having the original as a subobject, which means we can extend the idea to, say, the closure of a subset of a topological space.
-Here's the n-lab page if you're interested...<|endoftext|>
-TITLE: Modelling forces acting on a sail
-QUESTION [8 upvotes]: I'd like to create a basic model of the forces acting on a sail (wind sail, like a tall ship)
-A couple of things I was thinking about:
-1) Can create a very simple model where wind is 'one' force acting on a uniform body.
-2) Model wind as vectors. This is where I am a little confused on how to start.
-3) Adding in multiple sails.
-4) I know that this would be a differential equation but after that I can't really see how it would be modelled.
-Any pointers?
-I'm not looking for someone to actually do the modelling for me, just a place to start. Like maybe some Wikipedia articles, etc.
-Thanks
-
-REPLY [2 votes]: When your analysis is on a simple basis where the sail can be modeled as a plane, the force will be perpendicular to the plane. If you need a 3-D model of the sail, it's more complex.
-If you take what might be called the 20th century model in sailboat design, the force on the sail is resolved into a vector perpendicular to the approaching wind (lift) and a force parallel to the wind (drag). Likewise the forces on the hull are resolved into lift perpendicular to the direction of travel, and drag parallel to the direction of travel. You can find diagrams like this going back to Manfred Curry.
-I don't know about square riggers, but for the normal sloop rig, the jib and main interact too closely to consider them as independent.
-One math professor who wrote about sailing was C. Stanley Ogilvy. I wouldn't rush out to buy his books, but if you can find them in a library, you might find them interesting. The most widely available heavy duty treatments are by Marchaj (http://en.wikipedia.org/wiki/Czes%C5%82aw_Marchaj).<|endoftext|> -TITLE: An elementary way of simplifying a trigonometric triple integral? -QUESTION [7 upvotes]: By stressing my manipulative powers and a bit of help from Mathematica I was able to show that the triple integral -$$\frac1{8\pi}\int_0^{2\pi}\int_0^\pi\int_0^\pi\,(2\cos\theta\cos\varphi-\sin\theta\sin\varphi\cos\psi)^{2k}\sin\theta\sin\varphi \,\mathrm{d}\theta\mathrm{d}\varphi\mathrm{d}\psi $$ -where $k$ is a positive integer is equivalent to -$${\left(\frac{2^k}{2k+1}\right)^2}\sum_{j=0}^k\frac{\binom{2j}{j}}{\binom{2k}{k}}$$ -by making use of the binomial expansion and then using a hypergeometric identity that degenerates to a ratio of gamma functions. -For some reason I think there was no need for Mathematica to invoke the Gauss hypergeometric function ${}_2 F_1\left(a,b;c;z\right)$ in performing the triple integration, so I'm wondering if there might be a way to show that this identity holds, using nothing more complicated than a gamma function. - -REPLY [3 votes]: Here is a hint to another approach: Search for the key words "convolution formula" "Legendre polynomial" "addition formula" and you will be enlightened... ---- A little explanation and some spherical harmonics --- -@J. M. You are totally right, in particular where I was wrong in my suggestion! :/ -What I thought about was convolution on the double coset space $X=K\backslash G/K$ of $G= SO(3)$ where $K=SO(2)$. The space $X$ may be identified with $[-1,1]$ and the convolution (as induced from the translation operator on $G$) is then given by -$$f*g(x)= \frac{1}{4\pi}\int_{-1}^1\int_{0}^{2\pi}f(y)g(xy+\sqrt{1-x^2}\sqrt{1-y^2}\cos\theta) d\theta dy.$$ -An interesting fact is that $f*g=g*f$ even though $G$ is not Abelian. -Choosing $f(x)=1$ and $g(x)=x^{2k}$ we get -$$\frac{1}{8\pi}\int_{-1}^1\int_{-1}^1\int_{0}^{2\pi}(xy+\sqrt{1-x^2}\sqrt{1-y^2}\cos\theta)^{2k} d\theta dydx=$$ -$$\int_{-1}^1f*g(x)dx/2 = \int_{-1}^1 g*f(x)dx/2=\int_{-1}^1x^{2k}dx/2= 1/(2k+1).$$ -In our case the integrand is (after changing variables) -$$(2xy + \sqrt{1-x^2}\sqrt{1-y^2}\cos\theta)^{2k}=\sum_{j=0}^{2k}{2k \choose j}(xy)^j(xy+\sqrt{1-x^2}\sqrt{1-y^2}\cos\theta)^{2k-j}$$ -Using this we get another convolution - but I cannot see how this can be used directly. :( -Legendre polynomials might be good since they are the characters on the space $X$.<|endoftext|> -TITLE: Certain Liouville Numbers -QUESTION [5 upvotes]: A Liouville number is a number which can be approximated very closely by a sequence of rational numbers (here is the rigorous definition I am working off of: http://en.wikipedia.org/wiki/Liouville_number). -I'm looking for an example of a Liouville number which cannot be approximated by a sequence of rational numbers with denominators which are all a constant $c$ multiplied by powers of some number $a$. -For instance, the Liouville constant ($0.110001000000000000000001$...) can be approximated by the sequence $\frac{1}{10}$, $\frac{11}{10^2}$, $\frac{110001}{10^6}$, etc, which is not what I am looking for because in each case the denominator is a power of $10$. In this case, we would say that $c=1$, $a=10$, and the denominator is always of the form $c \cdot a^n$ for some positive $n$.
- -REPLY [4 votes]: Consider the continued fraction -$$x = \cfrac{1}{a_1 + \cfrac{1}{a_2+ \cfrac{1}{a_3 + \ldots}}}$$ -The $k$'th convergent is of the form $p_k/q_k$ where $q_k = a_k q_{k-1} + q_{k-2}$. -If $P_k$ is a prime that does not divide $q_{k-1}$, we can choose -$a_k \mod P_k$ so that $P_k$ divides $q_k$. Since $$\left|x - \frac{p_k}{q_k}\right| < \frac{1}{q_k q_{k+1}} < \frac{1}{a_{k+1} q_k^2}$$ taking $a_k > q_{k-1}^{k-2}$ will be enough to make $x$ a Liouville number. Moreover, all irreducible fractions $a/b$ such that $\left| x - \frac{a}{b} \right| < \frac{1}{2b^2}$ are convergents of $x$. Since $q_k$ is divisible by the prime $P_k$, for any given $c$ and $d$ only finitely many of these can be of the form $c d^m$.<|endoftext|> -TITLE: probability and statistics: Does having little correlation imply independence? -QUESTION [16 upvotes]: Suppose there are two correlated random variables having a very small correlation coefficient (of order $10^{-1}$). Is it valid to approximate them as independent random variables? - -REPLY [6 votes]: The correlation coefficient only measures linear dependence of two random variables. So if they depend on each other in a non-linear way, the correlation coefficient will not catch it. For example, if $X$ is standard normal and $Y = X^2$, then $X$ and $Y$ are uncorrelated even though $Y$ is completely determined by $X$.<|endoftext|> -TITLE: Sets of integers in which every element is the sum of two others -QUESTION [10 upvotes]: i) Does there exist a nonempty subset of integers $S$, such that for all $a \in S$, there exist $b,c \in S$ such that $a = b + c$, where $a, b, c$ are distinct integers? -Edited to add: -ii) Can an $S$ with these properties be finite, and also have the property that $a \in S$ implies $-a \notin S$? -This question is inspired by another question, which has not been fully answered yet. - -REPLY [2 votes]: A further answer, generalizing flagar's example: -For any N ≥ 10, there exists a set of the sort you ask for which contains at most eleven elements (with fewer than eleven in the case that N < 13), and with all elements having absolute value at most N: - -{−N, −N+2, −N+4, −2, 1, 3, 4, 5, N−7, N−6, N−5}. - -The shorter example presented by flagar corresponds to the case N = 10. -The key to this construction was to think in terms of small sequences of almost mutually-supporting subsequences. For instance, the sequence −N, −N+2, −N+4 consists of integers each of which is the sum of an element of the set and one of −2 or 4. We can include 4 in the set by including it in the sequence 3, 4, 5: we can support this by including 1 in the set (where 1 itself is supported as the sum of 3 and −2). We then only need a way of supporting −2: we do this with the other subsequence, which includes N−6. -Each time we have a short sequence, we have a collection of integers which is easy to "support" using only a small number of other integers. So one strategy for constructing such self-supporting sets of integers is to think in terms of one or more short sequences of integers, with each sequence working to support another.<|endoftext|> -TITLE: How to prove the optimal Towers of Hanoi strategy? -QUESTION [9 upvotes]: In the towers of Hanoi game, how do we know that we have the optimal algorithm for solving it? I thought about this and it seemed like any deviation from the standard strategies would be putting you back a step, but I had no idea how to demonstrate this rigorously. -All I know is that the proof involves the Lucas correspondence between the Hanoi graph and the odd coefficients in Pascal's triangle. How is Pascal's triangle turned into a graph?
I assume the coefficients are the vertices, but I don't see how you form the edges? -I was also wondering how to generalize the strategy to $n$ discs and $k$ rods because it seems like this correspondence argument wouldn't really work in the general case. -Basically, I am wondering how the odd coefficients of the Pascal triangle are turned into a graph and whether or not there is a similar method to find and prove an optimal strategy when we increase the number of rods. - -REPLY [14 votes]: I will address your first question, but not the one for a larger number of rods; as far as I know, it's still generally wide open what the optimal strategy might be even for $4$ rods and a smallish number of disks. -To show the optimal strategy for $n$ disks in $3$ rods is the "usual" one, you can do it by induction (which yields a recursive solution). I'm sure there are other ways of proving it, perhaps with Lucas numbers as you suggest. -Clearly, the optimal strategy with $n=1$ is to simply move the disk directly. -Assume you already have the optimal strategy for moving $k$ disks. To move $k+1$ disks, you need to move the largest disk from the initial rod to the terminal rod, but that is the only time it needs to move (it cannot help you with the other disks, since it must lie at the bottom at any given time, so any other moves only require further moves in the end); to move the bottom ($k+1$)st disk from the initial rod $I$ to the terminal rod $T$, you must first move the top $k$ disks out of the way; this requires moving the $k$ disks from the initial rod $I$ to the auxiliary rod $A$, and the optimal way of doing this is the optimal strategy you know for $k$ disks. Then you move the ($k+1$)st disk, and then you want to move the remaining $k$ disks from the auxiliary rod to the terminal one in as few moves as possible (the optimal way). So the optimal strategy for $k+1$ disks is to move the top $k$ using the optimal strategy for $k$ from $I$ to $A$, then move the largest disk from $I$ to $T$, then move the top $k$ disks using the optimal strategy for $k$ from $A$ to $T$. -By keeping track of the actual number of moves needed at each step, you can give the number. For $n=1$, the number is $1=2^1-1$. Assume that moving $k$ disks requires $2^k-1$ moves in the optimal strategy. The optimal strategy for $k+1$ described above takes $(2^k-1) + 1 + (2^k-1) = 2^k+2^k - 1 = 2^{k+1}-1$ steps; thus, the optimal solution for $n$ disks and $3$ rods requires $2^n-1$ moves. -(This does not generalize easily to more than $3$ rods for presumably obvious reasons). -A bit more interesting is trying to prove that the non-recursive solution gives an optimal solution; this solution only requires you to remember the last disk you moved at any given time (the recursive solution is more memory intensive, of course). Number the rods $0$, $1$, and $2$. We have three rules: - -Never move the same disk twice in succession. -You can only move a disk from the top of one rod to the top of another rod. -Moving a disk from rod $i$ to rod $j$ is only valid if $i\neq j$, and either rod $j$ is empty or the top disk in rod $j$ is larger than the top disk in rod $i$. - -With these rules, the non-recursive algorithm has two simple steps: - -If you are moving the disk from rod $i$, and the two other rods are valid destinations, then move it to rod $i+1\mod 3$. Otherwise, move it to the only valid destination. -If no move is possible, stop. Otherwise, continue. (A code sketch of this procedure follows below.)
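-Here is a minimal sketch of that non-recursive procedure in Python (the list-based rod representation and the stopping test are my own choices; the move rules are exactly the ones stated above):
-
-    def hanoi_iterative(n):
-        # Solve n-disk Towers of Hanoi on rods 0, 1, 2 using the two steps above.
-        rods = [list(range(n, 0, -1)), [], []]  # rod 0 holds n (bottom) .. 1 (top)
-        moves, last_moved = [], None
-        while len(rods[1]) != n and len(rods[2]) != n:
-            chosen = None
-            for i in range(3):
-                # rule 1: never move the disk that was just moved
-                if rods[i] and rods[i][-1] != last_moved:
-                    disk = rods[i][-1]
-                    # rules 2-3: valid destinations for the top disk of rod i
-                    dests = [j for j in range(3)
-                             if j != i and (not rods[j] or rods[j][-1] > disk)]
-                    if len(dests) == 2:            # step 1: prefer rod i+1 (mod 3)
-                        chosen = (i, (i + 1) % 3)
-                    elif len(dests) == 1 and chosen is None:
-                        chosen = (i, dests[0])
-            i, j = chosen
-            last_moved = rods[i].pop()
-            rods[j].append(last_moved)
-            moves.append((i, j))
-        return moves
-
-    print(len(hanoi_iterative(5)))  # 31 moves, i.e. 2**5 - 1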
- -This process will solve the puzzle with $3$ rods in the minimum number of moves.<|endoftext|> -TITLE: Is it possible to find the position of a prime number online? -QUESTION [8 upvotes]: $2$ is the first prime number. -$3$ is the second. -If I give a prime number such as $1151024046313875220631$, is there any software/website which can give the position of the prime number? -I know there are resources to find the $N$th prime. But I am having a hard time finding the reverse. - -REPLY [2 votes]: Andrew Booker's Nth prime page is excellent... but it can't handle your example number. -I have custom code that can calculate values up to about $2^{64}$, but your number is larger than that. -Thanks to Dusart [1], we can say that its rank is somewhere between 24244547260299402427 and 24247918127257270377. -If the Riemann Hypothesis is true, then we know by Schoenfeld [2] that its rank is somewhere between 24245911027060346607 and 24245911157987206331. -[1] Pierre Dusart, 'Estimates of Some Functions Over Primes without R.H.', preprint (2010), arXiv:1002.0442 -[2] Lowell Schoenfeld, 'Sharper Bounds for the Chebyshev Functions theta(x) and psi(x). II'. Mathematics of Computation, Vol 30, No 134 (Apr 1976), pp. 337-360.<|endoftext|> -TITLE: Is the closedness of the image of a Fredholm operator implied by the finiteness of the codimension of its image? -QUESTION [19 upvotes]: Let $X$ and $Y$ be Banach spaces. A bounded operator $T\colon X\to Y$ is called Fredholm iff - -The dimension of $\ker(T)$ is finite, -The codimension of the image $\mathrm{im}(T)$ is finite, -The image $\mathrm{im}(T)$ is closed in $Y$. - -Question: Is the third condition redundant? -Some lecture notes I'm working through claim that the third condition follows from the second one together with the open mapping theorem. I've checked some books on functional analysis without finding a proof of this. - -REPLY [18 votes]: Assume that $T$ is injective, because if it is not we know that $\textrm{ker}(T)$ is closed so we can replace $T$ by the induced map from $X/\textrm{ker}(T)$ (which is a Banach space). -Now define $T' \colon X \oplus C \to Y$ by $T'(x, c) = T(x) + c$ where $C$ is a closed complement for the range of $T$ (any algebraic complement works here, since it is finite-dimensional and hence closed). -This $T'$ is clearly bounded, linear and bijective (using that $T$ is injective and that $C$ complements the range), hence an isomorphism. So by the open mapping theorem $T'$ is open. Note that $\textrm{im}(T) = T'(X \oplus \{0\})$ which is closed.<|endoftext|> -TITLE: Why does the Mandelbrot set contain (slightly deformed) copies of itself? -QUESTION [86 upvotes]: The Mandelbrot set is the set of points of the complex plane whose orbits do not diverge. A point $c$'s orbit is defined as the sequence $z_0 = c$, $z_{n+1} = z_n^2 + c$. -The shape of this set is well known; why is it that if you zoom into parts of the filaments you will find slightly deformed copies of the original shape, for example: - - -I measured some points on the Mandelbrot set, and the corresponding points from one of these smaller Mu-molecules. Comparing the orbit sequences it was possible to find points on each sequence which were very close - but this experiment did not really help me to understand anything new. - -REPLY [2 votes]: See this paper: - -McMullen, Curtis T., - The Mandelbrot set is universal. In The Mandelbrot set, theme and variations, 1–17, - London Math. Soc. Lecture Note Ser., 274, Cambridge Univ. Press, 2000. - MR1765082 (2002f:37081) -PDF available at the author's site.
- -See also The significance of the Mandelbrot set.<|endoftext|> -TITLE: Constructing a seven digit number such that one divides the other -QUESTION [5 upvotes]: This was asked to me by a friend. I tried this problem for a long time, but I couldn't solve it. Seems interesting though. -Prove that it is impossible to construct two different seven digit numbers, one of which is divisible by the other, out of the digits 1,2,3,4,5,6,7 (all seven digits must be in each number). - -REPLY [2 votes]: $7654321/1234567 = 6.$something, -so $x = ky$ where $2 \le k \le 6$. -The digit sum of each number is $1+2+\cdots+7 = 28$, which is not divisible by $3$, so neither number is divisible by $3$; this rules out $k = 3$ and $k = 6$. -That leaves $x = ky$ with $k = 2, 4$ or $5$. But $28 \equiv 1 \pmod 9$, so $x \equiv y \equiv 1 \pmod 9$, and $x = ky$ would force $k \equiv 1 \pmod 9$, which fails for $k = 2, 4$ and $5$. Hence no such pair exists.<|endoftext|> -TITLE: Averaging 2 roots of a cubic polynomial -QUESTION [20 upvotes]: Consider a cubic polynomial, $p(x)=k(x-a)(x-b)(x-c)$ where $k$ is some constant and $a,b,c$ its $3$ roots (not necessarily distinct, not necessarily real). It is very simple to show that if you average two roots of a cubic polynomial and compute the tangent line at their average, it will intersect the cubic polynomial at the remaining root. I tried to generalize this result to other odd degree polynomials and it did not seem to work well. -What is so special about cubic polynomials that allows this to work that isn't true about higher degree odd polynomials? - -REPLY [12 votes]: This result does hold for higher-degree polynomials; you just didn't generalize it correctly. The general statement is as follows. Let $p(x) = (x - a) q(x)$ where $q(x)$ is any differentiable function, and let $r$ be such that $q'(r) = 0$. Then the tangent line at $(r, p(r))$ intersects the $x$-axis at $(a, 0)$. -This is a fairly simple computation. Since $p'(x) = (x - a) q'(x) + q(x)$, it follows that $p'(r) = q(r)$, so the tangent line at $(r, p(r))$ has slope $q(r)$ and hence it intersects the $x$-axis at $(r - \frac{p(r)}{q(r)}, 0) = (a, 0)$. In particular, the above result holds for $q(x)$ a polynomial of any degree greater than or equal to $2$. -(Intuitively, when $q'(r) = 0$, the linear approximation to $p(x)$ at $r$ is $(x - a) q(r)$, so of course it has to hit the $x$-axis at $(a, 0)$. In fact, this way of thinking about it tells you that the $r$ such that $q'(r) = 0$ are the only $r$ for which this occurs.) -In the special case that $q(x)$ is quadratic, $r$ happens to be equal to the average of the roots of $q$. For higher-degree polynomials this is no longer the case. Instead, all you know is that the roots of $q'(x)$ interlace between the roots of $q(x)$.<|endoftext|> -TITLE: Is there an elementary proof that $\sum \limits_{k=1}^n \frac1k$ is never an integer? -QUESTION [230 upvotes]: If $n>1$ is an integer, then $\sum \limits_{k=1}^n \frac1k$ is not an integer. -If you know Bertrand's Postulate, then you know there must be a prime $p$ between $n/2$ and $n$, so $\frac 1p$ appears in the sum, but $\frac{1}{2p}$ does not. Aside from $\frac 1p$, every other term $\frac 1k$ has $k$ divisible only by primes smaller than $p$. We can combine all those terms to get $\sum_{k=1}^n\frac 1k = \frac 1p + \frac ab$, where $b$ is not divisible by $p$. If this were an integer, then (multiplying by $b$) $\frac bp +a$ would also be an integer, which it isn't since $b$ isn't divisible by $p$. -Does anybody know an elementary proof of this which doesn't rely on Bertrand's Postulate? For a while, I was convinced I'd seen one, but now I'm starting to suspect whatever argument I saw was wrong.
- -REPLY [5 votes]: Here's a short proof: Let $H_n = \displaystyle \sum_{k=1}^n\dfrac{1}{k}.$ One can show that $\displaystyle\sum_{k=1}^{n}\dfrac{(-1)^{k-1}\binom{n}{k}}{k}= H_n.$ This can be rewritten as: $$\sum_{k=0}^{n}{(-1)^k\binom{n}{k}a_k} = b_n$$ -where $a_0 =0$ and $a_i = \dfrac{1}{i}$ for $i=1,\ldots, n$ and $b_n = -H_n$. -This answer shows that the $b_i$ are integers if and only if the $a_i$ are integers. Clearly for $i \geq 2 $ we can see that the $a_i$ are not integers, from which it follows that neither are the $b_i, i\geq 2.$<|endoftext|> -TITLE: Why can you turn clothing right-side-out? -QUESTION [609 upvotes]: My nephew was folding laundry, and turning the occasional shirt right-side-out. I showed him a "trick" where I turned it right-side-out by pulling the whole thing through a sleeve instead of the bottom or collar of the shirt. He thought it was really cool (kids are easily amused, and so am I). -So he learned that you can turn a shirt or pants right-side-out by pulling the material through any hole, not just certain ones. I told him that even if there was a rip in the shirt, you could use that to turn it inside-out or right-side-out, and he was fascinated by this and asked "why?" -I don't really know the answer to this. Why is this the case? What if the sleeves of a long-sleeve shirt were sewn together at the cuff, creating a continuous tube from one sleeve to the other? Would you still be able to turn it right-side-out? Why? What properties must a garment have so that it can be turned inside-out and right-side-out? -Sorry if this is a lame question, but I've always wondered. I wouldn't even know what to google for, so that is why I am asking here. -If you know the answer to this, could you please put it into layman's terms? -Update: Wow, I really appreciate all the participation. This is a really pleasant community and I have learned a lot here. It seems that the answer is that you need at least one puncture in the garment through which to push or pull the fabric. It appears that you can have certain handles, although it's not usually practical with clothing due to necessary stretching. -Accepted (a while ago actually -- sorry for not updating sooner) Dan's answer because among the answers that I understand, it is the highest ranked by this community. - -REPLY [187 votes]: I'm going to try to give a lighter-flavoured version of my previous answer. I'd rather not edit the previous one anymore so here goes another response. I want to make clear, this response is to you, not your 10-year-old nephew. How you translate this response to any person depends more on you and that person than anything else. -Take a look at the Wikipedia page for diffeomorphism. In particular, the lead image. -When I look at that image I see the standard Cartesian coordinate grid, but deformed a little. - -There's a "big theorem" in a subject called Manifold Theory and its name is the "Isotopy Extension Theorem". Moreover, it has a lot to do with these kinds of pictures. -The isotopy extension theorem is roughly this construction: say you have some rubber, and it's sitting in a medium of liquid epoxy that's near-set. Moreover, imagine the epoxy to be multi-coloured. So when you move the rubber bit around in the epoxy, the epoxy will "track" the rubber object. If your epoxy had a happy-face coloured into it originally, after you move the rubber, you'll see a deformed happy-face. - - -So you get images that look a lot like mixed paint. Stir various blotches of paint, and the paint gets distorted.
The more you stir, the more it mixes and it gets harder and harder to see the original image. The important thing is that the mixed paint is something of a "record" of how you moved your rubber object. And if your motion of the rubber object returns it to its initial position, there is a function -$$ f : X \to X $$ -where $X$ is all positions outside your rubber object. Given $x \in X$ you can ask where the particle of paint at position $x$ went after the mixing, and call that position $f(x)$. -All my talk about fibre bundles and homotopy-groups in the previous response was a "high level" encoding of the above idea. An intermediate step in the formalization of this idea is the solution of an ordinary differential equation, and that differential equation is essentially the "paint-mixing idea" above, in case you want to look at this subject in more detail later. -So what does this mean? A motion of an object from an initial position back to the initial position gives you an idea of how to "mix paint" outside the object. Or said another way, it gives you an automorphism of the complement; in our case that's a continuous bijective function between 3-dimensional space without the garment and itself. -You may find it odd but mathematicians have been studying "paint mixing" in all kinds of mathematical objects, including "the space outside of garments" and far more bizarre objects for well over 100 years. This is the subject of dynamical systems. "Garment complements" are a very special case, as these are subsets of 3-dimensional Euclidean space and so they're 3-manifolds. Over the past 40 years our understanding of 3-manifolds has changed and seriously altered our understanding of things. To give you a sense for what this understanding is, let's start with the basics. 3-manifolds are things that on small scales look just like "standard" 3-dimensional Euclidean space. So 3-manifolds are an instance of "the flat earth problem". Think about the idea that maybe the earth is like a flat sheet of paper that goes on forever. Some people (apparently) believed this at some point. And superficially, as an idea, it's got some things going for it. The evidence that the earth isn't flat requires some build-up. - -Anyhow, so 3-manifolds are the next step. Maybe all space isn't flat in some sense. That's a tricky concept to make sense of as space isn't "in" anything -- basically by definition whatever space is in we'd call space, no? Strangely, it's not this simple. A guy named Gauss discovered that there is a way to make sense of space being non-flat without space sitting in something larger. Meaning curvature is a relative thing, not something judged by some exterior absolute standard. This idea was a revelation and spawned the idea of an abstract manifold. To summarize the notion, here is a little thought experiment. -Imagine a rocket with a rope attached to its tail, the other end of the rope fixed to the earth. The rocket takes off and goes straight away from the earth. Years later, the rocket returns from some other direction, and we grab both loose ends of the rope and pull. We pull and pull, and soon the rope is tight. And the rope doesn't move, it's taut, as if it were stuck to something. But the rope isn't touching anything except your hands. Of course you can't see all the rope at one time as the rope is tracing out the (very long) path of the rocket.
But if you climb along the rope, after years you can verify: it's finite in length, it's not touching anything except where it's pinned down on the earth. And it can't be pulled in. -This is what a topologist might call a hole in the universe. We have abstract conceptions of these types of objects ("holes in the universe") but by their nature they're not terribly easy to visualize -- not impossible either, but it takes practice and some training. -In the 1970's, by the work of many mathematicians, we started to achieve an understanding of what we expected 3-manifolds to be like. In particular we had procedures to construct them all, and a rough idea of how many varieties of them there should be. The conjectural description of them was called the geometrization conjecture. It was a revelation in its day, since it implied that many of our traditional notions of geometry from studying surfaces in 3-dimensional space translate to the description of all 3-dimensional manifolds. The geometrization conjecture was recently proven in 2002. -The upshot of this theory is that in some sense 3-dimensional manifolds "crystallize" and break apart in certain standard ways. This forces any kind of dynamics on a 3-manifold (like "paint mixing outside of a garment") to respect that crystallization. -So how do I find a garment you can't turn inside-out? I manufacture one so that its exterior crystallizes in a way I understand. In particular I find a complement that won't allow for this kind of turning inside-out. The fact that these things exist is rather delicate and takes work to see. So it's not particularly easy to explain the proof. But that's the essential idea. -Edit: To say a tad more, there is a certain way in which this "crystallization" can be extremely beautiful. One of the simplest types of crystallizations happens when you're dealing with a finite-volume hyperbolic manifold. This happens more often than you might imagine -- and it's the key idea working in the example in my previous response. The decomposition in this case is very special as there's something called the "Epstein-Penner decomposition" which gives a canonical way to cut the complement into convex polytopes. Things like tetrahedra, octahedra, icosahedra, etc, very standard objects. So understanding the dynamics of "garments" frequently gets turned into (i.e. the problem "reduces to") the understanding of the geometry of convex polytopes -- the kind of things Euclid was very comfortable with. In particular there's software called "SnapPea" which allows for rather easy computations of these things. - -[images of Dirichlet domains; source: utk.edu] - -Images taken from Morwen Thistlethwaite's webpage. These are images of the closely-related notion of a "Dirichlet domain". -Here is an image of the Dirichlet domain for the complement of $8_{17}$, the key idea in the construction of my previous post. -[image: Dirichlet domain for the complement of $8_{17}$] -Technically, this is in the Poincaré model for hyperbolic space, which gives it the jagged/curvy appearance.<|endoftext|> -TITLE: What is the expected number of runs of same color in a standard deck of cards? -QUESTION [12 upvotes]: A standard deck has $52$ cards, $26$ Red and $26$ Black. A run is a maximal contiguous block of cards of the same color. -E.g. - -$(R,B,R,B,...,R,B)$ has $52$ runs. -$(R,R,R,...,R,B,B,B,...,B)$ has $2$ runs. - -What is the expected number of runs in a shuffled deck of cards?
- -REPLY [2 votes]: Note: This solution works because of the happy "coincidence" that the number of ways to get $k$ runs is the same as the number of ways to get $2n+2 - k$ runs. We will prove that first, then apply it. -Lemma: The number of ways to achieve $k$ runs is $ 2 \times { n-1\choose \lceil \frac{k}{2} \rceil-1} \times { n-1 \choose \lfloor \frac{k}{2} \rfloor - 1 }$. -Proof: A sequence of $k$ runs can be defined as having $a_1$ of the first color, $b_1$ of the second color, $a_2$ of the first color, $b_2$ of the second color, so on and so forth. For the final term, if $k$ is odd, we have $a_{ \lceil \frac{k}{2} \rceil }$ of the first color, and if $k$ is even, we have $b_{\lfloor \frac{k}{2} \rfloor}$ of the second color. -After choosing what the first color is, we have the constraint that $ \sum a_i = n$ and $ \sum b_i = n$, of which there are $ \lceil \frac{k}{2} \rceil $ terms in the first summation and $ \lfloor \frac{k}{2} \rfloor$ terms in the second summation. It is easy to see that this is the only constraint, since given any 2 such summations, we can form a sequence of $k$ runs. This establishes the bijection between a sequence of $k$ runs and solutions to these equations. -We now count the number of ways to get such solutions. By stars and bars, there are $ { n-1 \choose \lceil \frac{k}{2} \rceil - 1 } $ solutions to the first equation, and ${ n-1 \choose \lfloor \frac{k}{2} \rfloor - 1 }$ solutions to the second. Hence the lemma follows. -Corollary: Given $2n$ cards, the number of ways to get $r$ runs is the same as the number of ways to get $2n+2-r$ runs. -Corollary: By symmetry, the expected number of runs is $ \frac{ r + ( 2n+2 - r ) } { 2 } = n+ 1 $.<|endoftext|> -TITLE: Blow up of a solution -QUESTION [24 upvotes]: What exactly does blow up mean, when people say, for example, that a solution (to a PDE, say) blows up? -Thanks. - -REPLY [29 votes]: The meaning is, of course, context-dependent... -In the context of differential equations, that a solution to an equation with a "time" variable blows up usually means that the maximal interval on which it is defined is bounded, so that at the endpoint of that interval something `bad' happens: either the solution goes to infinity, or it stops being smooth (in a way that makes the differential equation stop having sense, maybe), or something. This is an important phenomenon, one which causes trouble. A couple of examples: - -Perelman's solution of the Poincaré conjecture—in a very vague sense—consists of a way to `work around' the fact that certain solutions of a (very complicated non-linear) PDE blow up; -the third `Millennium' Clay problem is (very roughly) the question «do the solutions of the Navier-Stokes equation blow up?». - -Consider, as a very simple example, the equation $$\frac{\mathrm dx}{\mathrm dt}=x^2.$$ This equation makes sense and satisfies the conditions for existence and uniqueness of solutions on all of the $(t,x)$-plane, but if you solve it (which is easy to do explicitly, as it has the two variables separated) you'll see that, apart from the zero solution, all of its solutions have a maximal interval which is a half-line (which is bounded on the left or on the right, depending on the initial condition) and that at the finite end of that interval the solutions become unbounded. We thus say that all nonzero solutions of our equation blow up in finite time. -There are also equations which have some solutions which blow up and some which live forever.
One example is $$\frac{\mathrm dx}{\mathrm dt}=\begin{cases}x^2&\text{if $x\geq0$}\\0&\text{if $x\leq0$}\end{cases}$$ and you'll surely find lots of fun in trying to concoct examples where even more interesting phenomena occur.<|endoftext|> -TITLE: Companions to Rudin? -QUESTION [17 upvotes]: I'm starting to read Baby Rudin (Principles of mathematical analysis) now and I wonder whether you know of any companions to it. Another supplementary book would do too. I tried Silvia's notes, but I found them a bit too "logical" so to speak. Are they good? What else do you recommend? - -REPLY [4 votes]: Okay, so as you haven't accepted any of the answers, I thought maybe I should give it a try and answer as best I can. -I'm currently doing Chapter 3 from Walter Rudin's text, and simultaneously following these wonderful notes, https://www.math.ucdavis.edu/~emsilvia/math127/math127.html by UC Davis. They include proofs of each and every theorem and lemma in the book, and even a statement that is left in the book for the readers to prove is proved in these notes. -If you want, you can follow only these notes and not do theory from the book at all, and directly come back to it to solve problems at the end.<|endoftext|> -TITLE: Binomial coefficients: how to prove an inequality on the $p$-adic valuation? -QUESTION [10 upvotes]: In section 4 of Alfred van der Poorten's article A Proof That Euler Missed ... the following inequality is used: -$$\nu_{p}\displaystyle\binom{n}{m}\leq\left\lfloor\dfrac{\ln n}{\ln p}\right\rfloor-\nu_{p}(m)\qquad(\ast)$$ -(In the original denoted $\text{ord}_{p}(\cdot)$ instead of $\nu_p(\cdot)$). -where $\nu_{p}(k)$ is the $p$-adic valuation of $k\in\mathbb{Q}$, i. e. the exponent of the prime $p$ in the prime factorization of $k$. -I know some properties of the floor function and that -$$\nu_{p}(a/b)=\nu_{p}(a)-\nu_{p}(b),$$ -$$\nu_{p}(a\cdot b)=\nu_{p}(a)+\nu_{p}(b)$$ -and -$$\nu_{p}(n!)=\displaystyle\sum_{i\geq 1}\left\lfloor \dfrac{n}{p^{i}}\right\rfloor $$ -but I didn't convince myself of the correct argument I should use to prove $(\ast )$. -Question: How can this inequality be proven? - -REPLY [7 votes]: Theorem (Kummer): $\nu_p {n \choose m}$ is the number of carries it takes to add $m$ and $n-m$ in base $p$. -Now note that $\lfloor \frac{\ln n}{\ln p} \rfloor$ is the maximum possible number of carries and the last $\nu_p(m)$ digits of $m$ cannot be associated to any carries. -Kummer's theorem itself is not hard to prove; it more or less follows from the last identity you list.<|endoftext|> -TITLE: Good 1st PDE book for self study -QUESTION [97 upvotes]: What is a good PDE book suitable for self study? I'm looking for a book that doesn't require much prerequisite knowledge beyond undergraduate-level analysis. My goal is to understand basic solution techniques as well as some basic theory. - -REPLY [2 votes]: I would also like to add Peter Olver's "Introduction to partial differential equations" to this growing list. It was published in 2014 and is very suitable for self-study. The preface does a good job of explaining the various topics and prerequisites required to understand PDEs, which I hadn't appreciated until I read it in the book - highly recommended.
Solutions to about 20% of the exercises are available on the author's website.<|endoftext|> -TITLE: Solving a linear equation given the solution of another -QUESTION [6 upvotes]: Suppose I have a matrix $S$ having a one-dimensional nullspace spanned by $e$, such that $S + ee^\top$ is a positive definite symmetric matrix. -Now let $b \in Range(S)$ and suppose I solve the equation $(S + ee^\top)x = b$; is there any way I can derive the solution $x'$ of the equation $Sx' = b$? I was trying a Sherman-Morrison-Woodbury-type formula, but this fails since the denominator is $0.$ -Any help would be appreciated. - -REPLY [7 votes]: Suppose $x$ is the solution to $(S+ee^T)x = b$. Since $b\in Range(S)$, we may write $b = Sz$ for some $z$. Now compute: -$$ -e^T(S+ee^T)x = e^Tb = e^TSz -$$ -Since $S$ is symmetric, and $e$ is in its nullspace, we have $e^TS = 0$. So the above equation simplifies to $e^Tx = 0$. But this implies -$$ -(S+ee^T)x = Sx -$$ -So $x$ is a solution to the equation $Sx=b$ as well. As noted above, the solution to $Sx=b$ is not unique; $x + \lambda e$ is also a solution for any real $\lambda$.<|endoftext|> -TITLE: Mapping natural numbers into prime-exponents space -QUESTION [27 upvotes]: Take any natural number $n$, and factor it as $n=2^{e_1} 3^{e_2} 5^{e_3} ... p^{e_i}$, -where $p$ is the $i$-th prime. -Now map $n$ to the point $n \mapsto (e_1,e_2,\ldots,e_i,0,\ldots)$, where the $i$-th prime is the last prime in the factorization -of $n$. -For example, -$$n=123456789 -\mapsto -(0,2,\ldots,1,\ldots,1,\ldots)$$ -because $123456789=2^0 3^2 \cdots 3607^1 \cdots 3803^1$. -So every number in $\mathbb{N}$ is mapped to a point in an arbitrarily high dimensional space. -This mapping has the property that addition of the vectors corresponds to multiplication of -the numbers. -My (extremely vague!) question is: Does this viewpoint help gain any insights into the -structure/properties of natural numbers? -Do lines/planes/curves in this space mark out numerically interesting regions? -Perhaps allowing real or complex numbers? -The numbers in $\mathbb{N}$ fill this infinite-dimensional space very sparsely. -This is (very!) far from my research expertise, so any comments/references/links would be appreciated. - -REPLY [2 votes]: A friend of mine and I were discussing this possibility some years ago. This infinite-dimensional vector space is what we called the "encoded space". There are some interesting facts, for instance, prime numbers in $\mathbb{N}$ are on the sphere of radius $1$ in the encoded space. Also, you can encode far beyond natural numbers. In fact, you can encode the whole set $\mathbb{Q}$ of rational numbers if you allow negative numbers in the coordinates of the vector. But the thing is that you can even encode any root of a rational number if you allow rational numbers inside the vector. For instance: -$$ 12 = 2^2\cdot3 \rightarrow (2, 1, 0, 0,...) = (2, 1 \rangle $$ -$$ \frac{12}{7} = 2^2\cdot3\cdot7^{-1} \rightarrow (2, 1, 0, -1 \rangle $$ -$$ \sqrt[7]{\frac{12}{5}} = (2^2\cdot3\cdot5^{-1})^{\frac{1}{7}} =2^{\frac{2}{7}}\cdot3^{\frac{1}{7}}\cdot5^{-\frac{1}{7}} \rightarrow (\frac{2}{7}, \frac{1}{7}, -\frac{1}{7} \rangle $$ -Also, some transcendental numbers can be encoded if you add to the vector some irrational algebraic numbers: -$$ 2^{\sqrt{2}} \rightarrow (\sqrt{2}\rangle $$ -There is no problem if you try to encode these kinds of numbers, BUT the really hard problem arrives when you try to encode the usual summation between real numbers.
For instance, you can encode $\frac{1}{2}$ and also $\frac{\sqrt{5}}{2}$, but you cannot encode this: -$$ \frac{1}{2} + \frac{ \sqrt{5}}{2} $$ -I've been trying to find a way to encode this but dealing with summation codification is difficult. - -I think that the most interesting thing is that it's easy to prove that the exponent-vectors and the encoded space behave as a vector space. It can easily be proved (we did so some years ago). The most important implication of this is that you can take all the theorems from linear algebra and apply them to the natural numbers or the rational numbers. -SPOILER: we tried to transfer the equivalent theorems from algebra to $\mathbb{N}$ and $\mathbb{Q}$, and it was a little bit disappointing (pretty obvious facts).<|endoftext|> -TITLE: Can this standard calculus result be explained "intuitively" -QUESTION [9 upvotes]: Recently I stumbled upon someone who said he wanted to understand why -$\arctan x = \int\dfrac{dx}{1+x^2}$ -At first I was confused. This is an easy result in any integral calculus course. But then he explained that although he understood the proof, he wanted to understand it "intuitively". He wanted to see why it was in terms of arclength and addition and subtraction. -My question is: Is there an "intuitive" way to explain this identity? - -REPLY [5 votes]: [figure-only answer; image not preserved]<|endoftext|> -TITLE: Period of 3 implies chaos -QUESTION [14 upvotes]: Let $f(x)$ be a continuous function from $\mathbb{R}\rightarrow\mathbb{R}$. -Let's denote the $k$-times repeated application of the function, $f(f(f(...f(x)...)))$, as $f^{(k)}(x)$. -Let's call any $x$ a periodic point with period $n$ if $f^{(n)}(x)=x$. -Is it true that if a point with period 3 exists, then points with all possible periods exist? -In other words, is it true that -$$\exists x:f^{(3)}(x)=x\Rightarrow \forall n>0 \exists y:f^{(n)}(y)=y$$ -and if so, why? - -REPLY [6 votes]: This is (a special case of) a famous theorem, and of course there are many published proofs. However, knowing the statement, you might be able to work out a proof yourself. -The simple key fact for the proof is: -Lemma. Suppose that $f$ is a continuous function on the compact interval $I=[a,b]$ such that the image of $f$ contains $I$. Then $f$ has a fixed point. -(Hint: intermediate value theorem.) -Now suppose you have a period 3 orbit. First of all think about how the three points $x_1 < x_2 < x_3$ of the orbit can be arranged.<|endoftext|> -TITLE: Why can't I combine complex powers -QUESTION [17 upvotes]: I came across this 'paradox' - -$$1=e^{2\pi i}\Rightarrow 1=(e^{2\pi i})^{2\pi i}=e^{2\pi i \cdot 2\pi i}=e^{-4\pi^2}$$ -I realized the fallacy lies in the fact that in general $(x^y)^z\ne x^{yz}$. Why doesn't it work with complex numbers even though it is valid in the real case? Is it related to the fact that the logarithm of a complex number is not unique? - -REPLY [5 votes]: This is related to a note by Euler, maybe he was the first to realize that $i^i$ is real. Actually, -$$i^i = (e^{i\pi/2})^i = e^{-\pi/2}$$ -on the other hand -$$i^i = (e^{i(\pi/2+2\pi n)})^i = e^{-\pi/2 -2\pi n},\ n\in\mathbb{Z}.$$ -So maybe it is better to say that $i^i$ is a subset of $\mathbb{R}$ and that certain equality signs are to be understood as congruences.<|endoftext|> -TITLE: Do convergence in distribution along with uniform integrability imply convergence in mean? -QUESTION [8 upvotes]: There are at least 2 places in Wikipedia saying that $X_n$ converges to $X$ in mean if and only if $X_n$ converges to $X$ in probability and $X_n$ is uniformly integrable.
See the following link for example: -http://en.wikipedia.org/wiki/Convergence_of_random_variables#Properties_4 -However, I found a reference in Billingsley's book saying that we only need convergence in distribution along with uniform integrability to imply the convergence in mean. Is Wikipedia wrong, or am I missing something here? -Thanks. - -REPLY [14 votes]: I don't have Billingsley's book here, but I guess what he says is that uniform integrability and convergence in distribution imply convergence of the means (and not in mean), i.e. $E[X_n]$ converges to $E[X]$. -One simple (though not truly elementary) way to see it is to use Skorokhod's theorem stating that convergence in distribution is equivalent to almost-sure convergence of copies of the random variables in some abstract probability space. Then we just apply the usual Vitali theorem about uniform integrability for these copies. It gives convergence in mean of the copies of $X_n$ to the copy of $X$, and therefore convergence of the mean of (the copy of) $X_n$ to that of (the copy of) $X$. I put parentheses since the mean only depends on the law and hence is the same for $X_n$ or a copy of it. This is the result.<|endoftext|> -TITLE: Schur–Zassenhaus exercise: coprime subgroup contained in a complement -QUESTION [10 upvotes]: Exercise 3B.4 on page 84 of Isaacs's Finite Group Theory asks one to prove a precursor of the Sylow theorems for $\pi$-subgroups, and I only know how to do it using the Sylow theorems, so I have missed the point. I actually needed a similar (false) statement in my research, so I thought it would be wise to get the point. - -Suppose $G$ is a finite group, and $N$ is a normal subgroup whose index and order are coprime. Let $U$ be a subgroup of $G$ whose order is coprime to the order of $N$. If either $N$ or $U$ is solvable, then show that $U$ is contained in some subgroup $H$ whose order is the index of $N$ in $G$. - -Available tools: We already have Schur–Zassenhaus, and so we know that $G$ has a subgroup whose order is the index of $N$. -In other words, if $G$ has a normal Hall $\pi$-subgroup, then every $\pi'$-subgroup is contained in some Hall $\pi'$-subgroup. However Hall subgroups are the next section, and containment and $\pi$-separability are even in the section after that, so appealing to Hall's generalization of Sylow's theorem will miss the point. -Attempt: Let $M$ be a maximal subgroup of $G$ containing $U$. If $N$ is not contained in $M$, then $M\cap N$ is a normal subgroup of $M$ whose order is coprime to its index, and so we can find $H \le M$ with $U \le H$ and $|H| = [M:M\cap N]=[G:N]$, and we are done. -If $N$ is contained in every maximal subgroup of $G$ containing $U$, then $\ldots$ . It'd be nice if $N$ were in the Frattini, but that's not true. Perhaps at this point it would be wise to use solvability? - -REPLY [8 votes]: Let $K$ be a complement to $N$ in $G$, which exists by Schur-Zassenhaus. Then $|K|=|G|/|N|$. Consider the subgroup $UN$. Since $K(UN)=G$, $|K\cap UN|=|K|\cdot |UN|/|G|=|K|\cdot|U|\cdot|N|/|G|=|U|$, since $K$ is a complement to $N$. Thus, $K\cap UN$ is a complement to $N$ in $UN$. Since either $U$ or $N$ is solvable, all complements to $N$ in $UN$ are conjugate, so $K\cap UN=U^g$. Thus, $K^{g^{-1}}$ contains $U$. -Steve<|endoftext|> -TITLE: Why do differential forms and integrands have different transformation behaviours under diffeomorphisms?
-QUESTION [6 upvotes]: Let $f$ be a diffeomorphism, say from $\mathbb R^n$ to $\mathbb R^n$, such as the transition map between two coordinate charts on a differentiable manifold. -A differential $n$-form (or rather its coefficient function which is obtained by using the canonical one-chart atlas on $\mathbb R^n$) then transforms essentially by multiplication with $\mathrm{det}(Df)$, while integrals transform essentially by multiplication of the integrand with $\lvert\mathrm{det}(Df)\rvert$. -(This is the reason for the necessity to choose an orientation in order to define the integral of a top form on a differentiable manifold.) -Question: What is an intuitive or conceptual reason for these different transformation behaviours of forms and integrands? - -REPLY [3 votes]: A manifold is not necessarily orientable. But the orientation bundle always exists. You can construe it as a real line bundle. Then the density bundle is obtained by tensoring the bundle of top forms with the orientation bundle. By definition, a nonvanishing section of the orientation bundle is an orientation, and a section of the density bundle is a density. Compactly supported densities can be integrated. -Locally, orientations "transform" by the sign of the determinant (of the Jacobian matrix), top forms by the determinant, and densities by the absolute value of the determinant. -It suffices to understand the case of the real line. -EDIT. I've just realized that, in your question, you mention manifolds only in a parenthesis, and concentrate on $\mathbb R^n$. Open subsets of $\mathbb R^n$ are of course always orientable. But the key point, again, is to understand what happens on the real line. -For the real line, it's natural to write a 1-form as $f\ dx$, and a density as $f\ |dx|$ (say with $f$ continuous). If you have a compact interval $I$, then you can integrate $f\ |dx|$ over $I$, and $f\ dx$ from one bound to the other. -EDIT 2. You can write the standard orientation of $\mathbb R$ as $|dx|/dx$, and define the integral of the form $f\ dx$ over the interval $I$ equipped with the orientation $\pm|dx|/dx$ as the integral over the nonoriented interval $I$ of the density $\pm f\ |dx|$ equal to the orientation times the form. -So you only need to define the integral of a density over a nonoriented (compact) interval. [See for instance de Rham's book on differentiable manifolds.] -EDIT 3. In the same spirit, to define the $L^2$ space of a manifold, you consider half-densities. More precisely, you complete the pre-Hilbert space of compactly supported half-densities. The half-densities "transform" by the square root of the absolute value of the determinant. -The general principle is this: -If $M$ is an $n$-manifold, then each action of $G:=GL_n(\mathbb R)$ on a manifold $F$ induces a bundle over $M$ with fiber $F$, called the associated bundle. If the action is an $r$-dimensional $\mathbb R$-linear representation, then the associated bundle is a rank $r$ vector bundle. -The most basic example is the tangent bundle, associated to the natural representation on $\mathbb R^n$. -Let $\chi$ be the character of $G$ given by the inverse of the determinant. The bundles of $n$-forms, orientations, densities, and half-densities, are the line bundles respectively associated to the following characters of $G$: $\chi$, the sign of $\chi$, its absolute value, and the square root of its absolute value.
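-As a concrete sanity check on the real line (my own summary of the dictionary above): if $t = \varphi(x)$ for a diffeomorphism $\varphi$ of $\mathbb R$, then
-$$f(t)\,dt = f(\varphi(x))\,\varphi'(x)\,dx, \qquad f(t)\,|dt| = f(\varphi(x))\,|\varphi'(x)|\,|dx|,$$
-$$\frac{|dt|}{dt} = \operatorname{sgn}\big(\varphi'(x)\big)\,\frac{|dx|}{dx}, \qquad f(t)\,|dt|^{1/2} = f(\varphi(x))\,|\varphi'(x)|^{1/2}\,|dx|^{1/2},$$
-so forms, densities, orientations, and half-densities pick up the factors $\varphi'$, $|\varphi'|$, $\operatorname{sgn}(\varphi')$, and $|\varphi'|^{1/2}$ respectively, matching the four characters just listed.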
-For the definition of the associated bundle, see -http://en.wikipedia.org/wiki/Associated_bundle#Fiber_bundle_associated_to_a_principal_bundle -In our case, the principal bundle is the frame bundle: -http://en.wikipedia.org/wiki/Frame_bundle<|endoftext|> -TITLE: Definition of cusp of a congruence group -QUESTION [11 upvotes]: I am reading p.22 of Dan Bump's Automorphic Forms and Representations. A cusp of the congruence group acting on the upper half plane is defined to be an orbit of the action of the congruence subgroup on $\mathbb{Q} \mathbb{P}^1$. It also says that intuitively, the cusps are the places where a fundamental domain of the congruence subgroup touches the boundary of the upper half plane. -Question: Why does the definition mean what the intuition says? -Thanks! - -REPLY [14 votes]: You should identify the upper half plane, as a subspace of $\mathbb{C}$, with a subspace of the Riemann sphere. In this identification $\mathbb{P}^1(\mathbb{R})$ is a great circle separating the upper half plane and lower half plane, and $\mathbb{P}^1(\mathbb{Q})$ is the orbit of $\infty$ under $\text{PSL}_2(\mathbb{Z})$. This orbit breaks up into a union of orbits under any congruence subgroup and these are the points at which a fundamental domain can touch the boundary because - -under the action of $\text{PSL}_2(\mathbb{Z})$ the fundamental domain touches the boundary only at $\infty$, and -a fundamental domain for any congruence subgroup is a union of translates of fundamental domains for $\text{PSL}_2(\mathbb{Z})$.<|endoftext|> -TITLE: Which one result in mathematics has surprised you the most? -QUESTION [198 upvotes]: A large part of my fascination in mathematics is because of some very surprising results that I have seen there. -One that I found very hard to swallow when I first encountered it is what is known as the Banach-Tarski Paradox. It states that you can separate a ball $x^2+y^2+z^2 \le 1$ into finitely many disjoint parts, rotate and translate them and rejoin (by taking disjoint union), and you end up with exactly two complete balls of the same radius! -So I ask you: what are your most surprising moments in maths? - -Chances are you will have more than one. May I request that you post multiple answers in that case, so the voting system will bring up the ones most people find surprising. Thanks! - -REPLY [4 votes]: The Feit-Thompson theorem, although I don't know yet how to prove it. It was impressive to me that a group must be solvable merely from the condition of having odd order.<|endoftext|> -TITLE: When is the weighted space $\ell^p(\mathbb{Z},\omega)$ a Banach algebra ($p>1$)? -QUESTION [8 upvotes]: Let $\omega:\mathbb{Z}\to (0,\infty)$ and let $1\leq p<\infty$. Consider the space $\ell^p(\mathbb{Z},\omega)$ of complex-valued sequences $f=(a_n)_{n \in \mathbb{Z}}$ such that -$$\|f\|=\|f\|_{\ell^p(\mathbb{Z},\omega)}:=\left(\sum_{n\in\mathbb{Z}}|a_n|^p\omega(n)^p\right)^{1/p}<\infty.$$ -Next, given two complex sequences $f=(a_n)_{n \in \mathbb{Z}}$ and $g=(b_n)_{n \in \mathbb{Z}}$ their formal convolution is defined by $f*g=(c_n)_{n\in\mathbb{Z}}$ where $c_n=\sum_{k\in\mathbb{Z}}a_kb_{n-k}$. -The problem is to find necessary and sufficient conditions on $\omega$ such that $\ell^p(\mathbb{Z},\omega)$ is a Banach algebra. -In other words, if $f,g\in\ell^p(\mathbb{Z},\omega)$ then $f*g\in \ell^p(\mathbb{Z},\omega)$ and $\|f*g\|\leq\|f\|\cdot\|g\|$. -For $p=1$ the condition is $\omega(n+k)\leq\omega(n)\omega(k)$.
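-For completeness, here is the one-line estimate (my own addition) showing that this condition suffices when $p=1$: using $\omega(n)\leq\omega(k)\,\omega(n-k)$ inside the sum,
-$$\|f*g\| \leq \sum_{n}\sum_{k}|a_k|\,|b_{n-k}|\,\omega(n) \leq \sum_{k}|a_k|\,\omega(k)\sum_{m}|b_{m}|\,\omega(m) = \|f\|\,\|g\|.$$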
--- Reformulated the problem so that $f\in\ell^p(\omega)$ is the same as $f\omega\in\ell^p$. -I believe this is an open problem; there are, however, sufficient conditions: $\omega^{-p'}*\omega^{-p'}\leq \omega^{-p'}$ where $1/p'+1/p=1$ (the history of the condition is hard to tell but it is given as Lemma 8.11 in Acta Mathematica Volume 174, Number 1, 1-84, "Completeness of translates in weighted spaces on the half-line" by Alexander Borichev and Håkan Hedenmalm). The proof is based on Hölder's inequality: -$$\|f*g\|^p =\sum_n \left|\sum_k a_kb_{n-k}\right|^p\omega(n)^p\leq $$ -$$\leq\sum_n \left(\sum_k |a_k|^p|b_{n-k}|^p\omega(n-k)^p\omega(k)^p\right)\left(\sum_k\frac{1}{\omega(n-k)^{p'}\omega(k)^{p'}}\right)^{\frac{p}{p'}}\omega(n)^p\leq $$ -$$\qquad\leq\|f\|^p\|g\|^p$$ - -REPLY [4 votes]: This is supposed to be a comment, but I don't seem to be able to write one: -I'm quite sure it is an open problem: it was a few years ago when I last checked up on it. It's a very difficult one. -I recommend you take a look at the recent work of Kuznetsova, Yu. N. on the subject. -Malik<|endoftext|> -TITLE: Tiling pentominoes into a 5x5x5 cube -QUESTION [22 upvotes]: I have this wooden puzzle composed of 25 y-shaped pentominoes, where the objective is to assemble them into a 5x5x5 cube. After spending quite a few hours unsuccessfully trying to solve the puzzle, I finally gave up and wrote a program to try all possible combinations using backtracking. Analyzing the results revealed that for every solution found, the computer made - on average - 50 million placements and removals of pieces. This is obviously beyond my capabilities as a human, even if I can see a few steps ahead that a partial solution leads to a "dead end". -So, my question is this: given that the puzzle is so symmetrical, is there some way to significantly prune the search tree (maybe even making it possible for me to solve the puzzle on my own)? - -[photo of the puzzle; sorry for the poor quality] - -REPLY [12 votes]: Get Burr Tools. It takes less than a minute to set up this problem, and then a few minutes to generate 1264 solutions. I'm not sure if that's all solutions; the solver tells me 22 minutes are now needed to completely check the solution space. (EDIT -- Total solving time = 24.8 minutes) -A slightly more interesting problem is 25 N-pentominoes in a cube. There are 4 solutions, found in 2.6 minutes. Many other interesting puzzles are compiled at Puzzles will be played<|endoftext|> -TITLE: Algorithms to compute the class number -QUESTION [10 upvotes]: Let the class number $h(d)$ denote the number of equivalence classes of binary quadratic forms with discriminant $d < 0$. -Is there a better algorithm for $h$ than brute force? -To be precise, by brute force I meant to generate enough forms to completely cover the space and then reduce them down to see how many equivalence classes there are. - -REPLY [5 votes]: We use the notation of this question. -Let $D$ be a negative non-square integer such that $D \equiv 0$ (mod $4$) or $D \equiv 1$ (mod $4$). -Let $\Gamma = SL_2(\mathbb{Z})$. -We denote the set of positive definite primitive binary quadratic forms of discriminant $D$ by $\mathfrak{F}^+_0(D)$. -By this question, $\mathfrak{F}^+_0(D)$ is $\Gamma$-invariant. -We denote the set of $\Gamma$-orbits on $\mathfrak{F}^+_0(D)$ by $\mathfrak{F}^+_0(D)/\Gamma$. -Let $h(D) = |\mathfrak{F}^+_0(D)/\Gamma|$. -Let $\mathcal{H} = \{z \in \mathbb{C}; Im(z) > 0\}$.
-We denote by $\mathcal{H}(D)$ the set of quadratic numbers of discriminant $D$ in $\mathcal{H}$. -By this question, $\mathcal{H}(D)$ is $\Gamma$-invariant. -We denote the set of $\Gamma$-orbits on $\mathcal{H}(D)$ by $\mathcal{H}(D)/\Gamma$. -Let $f = ax^2 + bxy + cy^2 \in \mathfrak{F}^+_0(D)$. -We denote $\phi(f) = (-b + \sqrt{D})/(2a)$, where $\sqrt{D} = i\sqrt{|D|}$. -It is clear that $\phi(f) \in \mathcal{H}(D)$. -Hence we get a map $\phi\colon \mathfrak{F}^+_0(D) \rightarrow \mathcal{H}(D)$. -By this question, $\phi$ is a bijection and induces a bijection $\mathfrak{F}^+_0(D)/\Gamma \rightarrow \mathcal{H}(D)/\Gamma$. -Hence, computing $h(D)$ is the same as computing $|\mathcal{H}(D)/\Gamma|$. -Let $G = \{ z \in \mathcal{H}\ |\ -1/2 \le Re(z) \lt 1/2, |z| \gt 1$ or $|z| = 1$ and $Re(z) \le 0 \}$. It is known that $G$ is a fundamental domain of $\mathcal{H}/\Gamma$ (e.g. Serre's A Course in Arithmetic). -Hence it suffices to count the number of $f \in \mathfrak{F}^+_0(D)$ such that $\phi(f) \in G$. -Let $f = ax^2 + bxy + cy^2$. -Then $\phi(f) \in G$ if and only if $|b| \le a \le c$ (and if $|b| = a$ or $a = c$, then $b \ge 0$). -Hence it suffices to count the number of $(a, b, c)$ which satisfy the following conditions. - -$a \gt 0$. -gcd$(a, b, c) = 1$. -$D = b^2 - 4ac$. -$|b| \le a \le c$; and if $|b| = a$ or $a = c$, then $b \ge 0$. - -The following observation suffices to determine $(a, b, c)$. -Since $D = b^2 - 4ac, 4ac = b^2 + |D|$. -Hence $c = (b^2 + |D|)/4a$. -Hence it suffices to determine $a$ and $b$. -Since $a \le c$, $a \le (b^2 + |D|)/4a$. -Hence $4a^2 \le b^2 + |D| \le a^2 + |D|$. -Hence $3a^2 \le |D|$. -Hence $a^2 \le |D|/3$. -Hence $a \le \sqrt{|D|/3}$. -As an example, we compute $h(D)$ when $D = -584 = -2^3\cdot73$ by our method. -This is the class number of $\mathbb{Q}(\sqrt {-146})$. -$a \le \sqrt{|D|/3} = \sqrt{\frac{584}{3}} = 13.95\cdots$. -Hence $1 \le a \le 13$. -$4ac = b^2 + |D| = b^2 + 584$. -Hence $b^2 \equiv 0$ (mod $2$). -Hence $b$ is even. -We compute $b^2 + 584$ for even $b$ with $0 \le b \le 12$. -$0^2 + 584 = 584 = 4\cdot146 = 4\cdot2\cdot73$ -$2^2 + 584 = 588 = 4\cdot147 = 4\cdot3\cdot7^2$ -$4^2 + 584 = 600 = 4\cdot150 = 4\cdot2\cdot3\cdot5^2$ -$6^2 + 584 = 620 = 4\cdot155 = 4\cdot5\cdot31$ -$8^2 + 584 = 648 = 4\cdot162 = 4\cdot2\cdot3^4$ -$10^2 + 584 = 684 = 4\cdot171 = 4\cdot3^2\cdot19$ -$12^2 + 584 = 728 = 4\cdot182 = 4\cdot2\cdot7\cdot13$ -Thus we get the following results. -$a = 1\colon\ |b| = 0, c = 2\cdot73 = 146, (a, b, c) = (1, 0, 146)$ -$a = 2\colon\ |b| = 0,c = 73, (a, b, c) = (2, 0, 73)$ -$a = 3\colon\ |b| = 2,c = 7^2 = 49,(a, b, c) = (3, \pm2, 49)$ -$a = 4\colon$ none -$a = 5\colon\ |b| = 4,c = 2\cdot3\cdot5 = 30,(a, b, c) = (5, \pm4, 30)$ -$a = 6\colon\ |b| = 4,c = 5^2 = 25,(a, b, c) = (6, \pm4, 25)$ -$a = 7\colon\ |b| = 2,c = 3\cdot7 = 21,(a, b, c) = (7, \pm2, 21)$ -$a = 8 \colon$ none -$a = 9 \colon\ |b| = 8,c = 2\cdot3^2 = 18,(a, b, c) = (9, \pm8, 18)$ -$a = 10 \colon\ |b| = 4,c = 3\cdot5 = 15,(a, b, c) = (10, \pm4, 15)$ -$a = 11\colon$ none -$a = 12\colon$ none -$a = 13\colon\ |b| = 12,c = 2\cdot7 = 14,(a,b,c) = (13, \pm12, 14)$ -Therefore $h(D) = 16$.<|endoftext|> -TITLE: Math behind a "fling"? (i.e. on a mobile touch device) -QUESTION [5 upvotes]: I'm working on a game which relies on "flinging" an object. That is, click and hold on the object, and then drag and release it, and it continues on the path along which you were dragging it.
-TITLE: Math behind a "fling"? (i.e. on a mobile touch device)
-QUESTION [5 upvotes]: I'm working on a game which relies on "flinging" an object. That is, click and hold on the object, and then drag and release it, and it continues on the path you were dragging it. Of course, the most well-known example of flinging is with iPhone and Android devices, where you can quickly scroll down a list by quickly swiping your finger upward, giving the illusion of "flinging" the list.
-I'm tracking mouse positions (x,y) and timestamps. But I'm drawing a blank as to how I can take a list of positions and times and get out of it a velocity or curve that an object should follow.
-What are my options? Right now I am looking only for a straight-line fling action, but if it's easy to implement some sort of curve that better fits the fling, that would be good information that I might be able to integrate into the design of the game.
-
-REPLY [7 votes]: In the physical world, once an object is released from all external forces, it will travel in a straight line. UI design strongly suggests that interfaces work better when the user is using previous knowledge of motor control, eye movement, physics, etc. Unless you have a legitimate reason to curve after the object is released, I would personally go with linear only.
-In that case, take the last few coordinates and timestamps, and average the velocity components together. Then run a quadratic or exponential function of time on each component until the change of position is negligibly small. The former corresponds to a constant deceleration, like friction bringing a shoved object to rest; the latter corresponds to a drag force proportional to velocity. Since the exponential function seems like the best for the job, here's an example.
-Find the last few velocities for the $x$ and $y$ components, and average them:
-$v_x = \frac{\Delta x}{\Delta t}; \qquad v_y = \frac{\Delta y}{\Delta t}$
-$\bar v_x = \frac{\sum v_x}{n}; \qquad \bar v_y = \frac{\sum v_y}{n}$
-Set $t$ as the time from release, $\beta$ as some friction constant, $\bar v_x$ as the average velocity for each component, and $x_0$ as the initial release point.
-$x = x_0 + \frac{\bar v_x}{\beta}\left(1 - e^{-\beta t}\right)$
-The purpose of the $1$ is to align the displacement to start at zero when time is zero: the velocity $\dot x = \bar v_x e^{-\beta t}$ decays from $\bar v_x$ toward zero, and the object glides to a stop at $x_0 + \bar v_x/\beta$.
-So I hope this helps. I'm interested in what you're doing, so you'll have to show me when you're done. ;)
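-In case it helps, here is a minimal Python sketch of that recipe (the names and the damping constant are my own choices, nothing canonical):
-
-```python
-import math
-
-def release_velocity(samples):
-    """Average the velocities over the last few (x, y, t) samples before release."""
-    vx = vy = 0.0
-    n = 0
-    for (x0, y0, t0), (x1, y1, t1) in zip(samples, samples[1:]):
-        dt = t1 - t0
-        if dt > 0:
-            vx += (x1 - x0) / dt
-            vy += (y1 - y0) / dt
-            n += 1
-    return vx / n, vy / n
-
-def fling_position(x0, y0, vx, vy, beta, t):
-    """Exponentially damped straight-line drift after release at (x0, y0)."""
-    s = (1.0 - math.exp(-beta * t)) / beta   # integral of the decaying speed factor
-    return x0 + vx * s, y0 + vy * s
-```
-
-Each animation frame just evaluates fling_position at the elapsed time; the object moves along a straight line and settles at $(x_0 + v_x/\beta,\ y_0 + v_y/\beta)$.<|endoftext|>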
-TITLE: Closing up the elementary functions under integration
-QUESTION [27 upvotes]: The elementary real-valued functions are not closed under integration. (Elementary function has a precise definition -- see Risch algorithm in Wikipedia). This means there are elementary functions whose integrals are not elementary. So we can construct a larger class of functions by adjoining all the integrals of elementary functions. You can repeat this process indefinitely. If I understand things correctly, the set of functions that is the countable closure of this process is closed under integration. Does any finite iteration of the process achieve closure under integration?
-My guess is no. Has anyone thought about this?
-
-REPLY [3 votes]: In a differential Galois field, integration is the antiderivative operation and not the infinite summation, but I will still call it integration.
-If we start out with simple elementary functions, say rational functions with integer coefficients, then there will be many functions that do not have integrals within the field. If the original set of functions is finite, then only a finite number of functions need to be added. The process can then be repeated at the next level. The only catch is that at every stage one has to deal with constants (think of the roots of the denominators). Most functions force the addition of several integrals. The number of functions added grows geometrically with the levels, so I would expect there is no easy way to define a countable closure.
-Are there initial sets for which the process stops? Yes, polynomials for example. Are there initial sets for which it goes on forever? Yes.<|endoftext|>
-TITLE: Not understanding Simple Modulus Congruency
-QUESTION [5 upvotes]: Hi, this is my first time posting on here... so please bear with me :P
-I was just wondering how I can solve something like this:
-$$25x \equiv 3 \pmod{109}.$$
-If someone can give a breakdown of how to do it, that would be appreciated (I'm a slow learner...)!
-Here is the proof that I've attempted:
-
-Using the definition of modulus we can rewrite $$25x \equiv 3 \pmod{109}$$ as $25x = 3 + 109y$ (for some integer $y$). We can rearrange that to $25x - 109y = 3$.
-We use the Extended Euclidean Algorithm (not sure about this part, I keep messing things up), so this is where I'm stuck.
-
-Thanks!
-
-REPLY [8 votes]: Here's an alternative method that is due to Gauss. Scale the congruence so as to reduce the leading coefficient. Hence we seek a multiple of $\:25\:$ that is smaller $\rm(mod\ 109)\:.\ $ Clearly $\,4 = \lfloor 109/25\rfloor\,$ works: $\; 4\cdot25\equiv 100 \equiv -9 \;$ has smaller absolute value than $25$. Scaling by $\,4\,$ yields $\rm\, -9\ x \equiv 12.\;$ Similarly, scaling this by $\,12 = \lfloor 109/9\rfloor$ yields $\rm\ x \equiv 144 \equiv 35$. See here for a vivid alternative presentation using fractions.
-This always works if the modulus is prime, i.e. it will terminate with leading coefficient $1$ (versus $0$, else the leading coefficient would properly divide the prime $\rm\:p\:$). It's a special case of the Euclidean algorithm that computes inverses mod $\:\rm p\:$ prime. This is the way that Gauss proved that irreducible integers are prime (i.e. that $\,\rm p\mid ab\Rightarrow p\mid a\,$ or $\,\rm p\mid b$), hence unique factorization; it's essentially Gauss, Disquisitiones Arithmeticae, Art. 13, 1801, which iterates $\rm (a,p) \to (p \;mod\; a, p)\;$ i.e. $\rm a\to a' \to a'' \to \cdots,\; n' = p \;mod\; n \;$ instead of $\rm (a,p) \to (p \;mod\; a,\: a)$ as in the Euclidean algorithm. It generates a descending chain of multiples of $\rm\ a\pmod{\!p}.\,$
-For further discussion see this answer and my sci.math post on 2002\12\9.
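-For what it's worth, the descent is easy to mechanize. A small Python sketch (my own transcription of the scaling step, assuming the modulus is prime):
-
-```python
-def gauss_solve(a, b, p):
-    """Solve a*x = b (mod p), p prime, by repeatedly scaling to shrink the coefficient."""
-    a, b = a % p, b % p
-    while a > 1:
-        q = p // a                     # the scaling factor floor(p/a)
-        a, b = a * q % p, b * q % p    # new coefficient is congruent to -(p mod a)
-        if a > p - a:                  # normalize to the smaller absolute value
-            a, b = p - a, -b % p
-    return b
-
-print(gauss_solve(25, 3, 109))  # 35
-```
-
-Each pass replaces the coefficient by $p \bmod a < a$, so it terminates; for the example it takes the same two steps as above, $25 \to 9 \to 1$.<|endoftext|>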
-TITLE: Is there something deep in the fact that an endomorphism of a finite dimensional complex vector space has an eigenvector?
-QUESTION [12 upvotes]: In my course of linear algebra I studied that if $V$ is a finite dimensional vector space over the complex field, then every endomorphism of $V$ has an eigenvector. The proof is simple: taking a polynomial $f$ that is null on the endomorphism $A$ (it exists because of the finite dimension of $V$) and exploiting the algebraic closure of $\mathbb{C}$, we are done.
-I gave the following geometrical interpretation: such a space can't be fully twisted by any of its linear operators; we always have at least an invariant subspace of dimension 1. This fact still appears a bit magical to me. Is there some deep or different reason for this in other parts of mathematics such as geometry or algebra? Are there other interesting geometrical consequences? Does it remain true if the space is infinite dimensional?
-
-REPLY [9 votes]: As to the depth and significance of the existence of eigenvectors for finite-dimensional linear operators over $\mathbb{C}$, I think you have already identified the core algebraic reason: it is that the field $\mathbb{C}$ is algebraically closed. I think it is fair to call this a "deep fact" -- the task of giving a rigorous proof was the topic of Gauss's thesis work, and in fact by modern standards Gauss's (first) proof is still not completely rigorous. (Some feel this way, anyway -- there is not universal agreement here.) Whole books have been written on various proofs of this result.
-Of course the depth here lies in the fact that the definition of the complex field is topological / analytic, ultimately relying on the completeness of $\mathbb{R}$. If you start with a field $K$, the following is not very deep:
-Proposition: For a field $K$, the following are equivalent:
-(i) Every linear endomorphism of a finite dimensional $K$-vector space has an eigenvector.
-(ii) $K$ is algebraically closed.
-To see (i) ⇒ (ii), use the fact that every monic polynomial of degree $n$ is the characteristic and minimal polynomial of its companion matrix.
-To answer your second question: no, there are linear operators on infinite dimensional $\mathbb{C}$-vector spaces without eigenvalues, even bounded linear operators. A fundamental example is the Hilbert space $L^2([0,1])$ of square integrable $\mathbb{C}$-valued functions on the unit interval. Then the multiplication operator $M: f \mapsto xf$ has norm $1$ but is easily seen to have no eigenvectors, since the equation $xf = \lambda f$ forces $f = 0$ almost everywhere.
-In functional analysis, there is a suitable generalization of the notion of the set of eigen*values* of a linear operator on certain infinite dimensional spaces, namely the spectrum.
-
-Addendum: inspired by damiano's comment, here is what is in some sense the simplest possible example of a linear operator on an infinite dimensional space without an eigenvector: Let $K$ be any field, and let $V$ be a vector space of countably infinite dimension with basis $\{e_n\}_{n=1}^{\infty}$. Then consider the shift operator $T$ on $V$, i.e., the unique linear operator such that for all $n \in \mathbb Z^+$, $T(e_n) = e_{n+1}$. $T$ is very simple and easy to visualize: it just happens never to "cycle back" on itself. Indeed, if $v = \sum_{n=1}^{\infty} a_n e_n \in V$ (with $a_n = 0$ for all but finitely many $n$), then assuming that $Tv = \lambda v$ gives
-$\sum_{n=1}^{\infty} \lambda a_n e_n = \sum_{n=2}^{\infty} a_{n-1} e_n$.
-In particular $\lambda a_1 = 0$. It is clear that the kernel of $T$ is $0$, so if $v \neq 0$, $\lambda \neq 0$ and thus $a_1 = 0$, and then the above equation implies that $a_n = 0$ for all $n$, i.e., $v = 0$. (If $V = K[x]$ is the space of polynomials with $K$-coefficients, then with respect to the natural basis $e_n = x^{n-1}$, multiplication by $x$ is the shift operator $T$.)
-
-REPLY [4 votes]: Topological argument
-Suppose $A$ has trivial kernel. Then it induces an automorphism of the projective space $\mathbb P(V)$. Now the fact that $A$ has an eigenvector follows from the (purely topological) Lefschetz formula (because $\mathbb P(V)$ has non-zero Euler characteristic and any linear transformation lies in the connected component of the identity).<|endoftext|>
-TITLE: Could you explain why $\frac{d}{dx} e^x = e^x$ "intuitively"?
-QUESTION [49 upvotes]: As the title implies, it seems that $e^x$ is the only function whose derivative is the same as itself.
-Thanks.
-
-REPLY [2 votes]: A one-liner: $e^x$ is an eigenvector of the operator $\frac{d}{dx}$, similar to the way a vector $x$ can be an eigenvector of a matrix operator $A$.
-By the definition of differentiation and the eigenvalue equation $Ax=\lambda x$, recursively one gets a representation (one of the known formulas) of $e^x$.
-What is an eigenvector of an operator? It is a vector (part of the space where the operator acts) which is left unchanged (except for a re-scaling) by the action of the operator. In other words, the eigenvector is aligned ("parallel") to the action of the operator, and thus not changed in any other way.
-How does this relate to differentiation and the exponential function? Even more intuitively, differentiation decimates the function it acts upon exponentially (consider $(x^n)'=nx^{n-1}$), so a function that is unchanged by such exponential decimation must be exponential itself (an eigenvector) -- to maintain the terminology used above, the function should be exponential to all orders, i.e. $e^x$.<|endoftext|>
-TITLE: Does the Gelfand transformation on $\ell^1(\mathbb Z)$ possess a continuous inverse on its image?
-QUESTION [9 upvotes]: I am interested in the Gelfand transformation
-$$
-\Phi\colon\ell^1(\mathbb Z)\to\mathcal C(\mathbb T),\quad a\mapsto\sum_{n\in\mathbb Z}a_n z^n.
-$$
-This is an injective homomorphism of Banach algebras. It is neither isometric nor surjective. However, its image---the Wiener algebra $W$ consisting of all continuous functions on $\mathbb T$ whose Fourier series is absolutely convergent---is a subalgebra of $\mathcal C(\mathbb T)$ which is dense in the sup-norm topology.
-Question: Can we prove or disprove that $\Phi$ has a continuous inverse on its image $W$?
-In other words: Is $\Phi\colon\ell^1(\mathbb Z)\to W$ an isomorphism of topological algebras?
-(Here $W$ carries the topology induced by the sup-norm from $\mathcal C(\mathbb T)$.)
-
-REPLY [7 votes]: No, because if there were a continuous inverse, then $\Phi$ would be bounded below, and from this it would follow that $W$ is complete. But it isn't, because there are continuous functions that have Fourier series that are uniformly but not absolutely convergent, and hence Cauchy sequences (the partial sums of the Fourier series) in $W$ with no limit in $W$. There are even examples of my claim from the last sentence in the disk algebra, i.e. functions in $C(\mathbb{T})$ whose negatively indexed Fourier coefficients vanish, as was first shown by Hardy and as I discovered when trying to answer a MathOverflow question here.<|endoftext|>
-TITLE: "Cayley's theorem" for Lie algebras?
-QUESTION [40 upvotes]: Groups can be defined abstractly as sets with a binary operation satisfying certain identities, or concretely as a collection of permutations of a set. Cayley's theorem ensures that these two definitions are equivalent: any abstract group acts as a collection of permutations of its underlying set, and this action is faithful.
-Similarly, rings can be defined abstractly as sets with a pair of binary operations satisfying certain identities, or concretely as a collection of endomorphisms of an abelian group. There is a "Cayley's theorem" here as well: any abstract ring acts as a collection of endomorphisms of its underlying abelian group, and this action is faithful.
-The situation for Lie algebras seems much less clear to me. The adjoint representation is not generally faithful, and Ado's theorem comes with qualifications and doesn't have the simplicity of the two theorems above. For me, the problem is that I don't have a good sense of what the concrete definition of a Lie algebra is supposed to be.
-I suspect that a good concrete definition of a Lie algebra is as a space of derivations on some algebra closed under commutator. In that case, is it correct to say that a Lie algebra acts faithfully as derivations on its universal enveloping algebra? Is this a good analogue of Cayley's theorem?
-(Motivation: in the books on Lie algebras I have read, the authors verify that Lie algebras which occur in nature satisfy alternativity and the Jacobi identity, but I have never seen any simple justification that these axioms are "enough" in the same way that Cayley's theorem tells you that the axioms for a group or a ring are "enough." There is just Ado's theorem which, again, comes with qualifications and is hard.)
-
-REPLY [22 votes]: Here is a way of avoiding Ado's theorem, at the expense of using the Poincare-Birkhoff-Witt Theorem. The PBW theorem has no finite dimensionality or characteristic hypotheses, so you may like this better. Note, however, that I will realize finite dimensional Lie algebras as endomorphisms of infinite dimensional vector spaces.
-Let's define a concrete Lie algebra to be a vector space $V$, and a vector subspace $\mathfrak{h}$ of $\mathrm{End}(V)$ closed under commutator.
-Theorem: Every Lie algebra is isomorphic to a concrete Lie algebra.
-Proof: Let $\mathfrak{g}$ be a Lie algebra and $U$ its universal enveloping algebra. The Lie algebra $\mathfrak{g}$ acts on $U$ by left multiplication, so this gives a map $\mathfrak{g} \to U$ taking bracket to commutator. We must prove this map is injective.
-Choose a basis $\{ v_i \}$ for $\mathfrak{g}$. Suppose that left multiplication by $\sum a_i v_i$ is $0$. Then $\left( \sum a_i v_i \right) \cdot 1 = \sum a_i v_i$ would be zero in $U$. But, by the PBW theorem, the $v_i$ are linearly independent in $U$, a contradiction. QED
-As far as I know, this special case of PBW is as hard as the whole theorem.<|endoftext|>
-TITLE: Binomial distribution and upper bound
-QUESTION [8 upvotes]: This is from Feller's Introduction to Probability Theory and Its Applications. In the context of Bernoulli trials, we define:
-$$b(k;n,p) = \binom{n}{k}p^kq^{n-k},$$
-$$P\{S_n \ge r\} = \sum_{v=0}^{\infty}b(r+v;n,p)$$
-(with the convention $b(k;n,p) = 0$ for $k > n$). The latter is the probability of having at least $r$ successes. Now, supposing $r \gt np$ and knowing that
-$$\frac{b(k; n,p)}{b(k-1;n,p)}=\frac{(n-k+1)p}{kq}=1+\frac{(n+1)p-k}{kq},$$
-show that
-$$P\{S_n \ge r\} \le b(r;n,p)\frac{rq}{r-np}.$$
-According to Feller, it follows from the obvious fact that the terms of the series decrease faster than the terms of a geometric series with ratio $1-\frac{r-np}{rq}$. However, it's not obvious to me and I don't see how the upper bound follows.
-
-REPLY [5 votes]: First notice that for $k > r$ we have
-$\frac{(n-k+1)p}{kq} \le \frac{(n-k+1)p}{rq}$ (since $k > r$).
-Now $\frac{(n-k+1)p}{rq} \leq \frac{(n-(r+1)+1)p}{rq} = \frac{(n-r)p}{rq}$, as the numerator is largest when $k = r+1$.
-Finally $1 - \frac{(n-r)p}{rq} = \frac{rq - np + rp}{rq} = \frac{r -np}{rq}$ (as $p+q=1$), so each ratio of successive terms is at most $1-\frac{r-np}{rq}$, and summing the dominating geometric series gives
-$$P\{S_n \ge r\} \le b(r;n,p)\sum_{j=0}^{\infty}\left(1-\frac{r-np}{rq}\right)^j = b(r;n,p)\,\frac{rq}{r-np}.$$
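-(Not part of Feller -- just a quick numerical sanity check of the bound, in Python:)
-
-```python
-from math import comb
-
-def tail(n, p, r):
-    """P(S_n >= r) for S_n ~ Binomial(n, p)."""
-    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r, n + 1))
-
-def feller_bound(n, p, r):
-    q = 1 - p
-    assert r > n * p              # the hypothesis of the inequality
-    return comb(n, r) * p**r * q**(n - r) * r * q / (r - n * p)
-
-n, p, r = 100, 0.3, 40
-print(tail(n, p, r) <= feller_bound(n, p, r))  # True
-```
-
-(The check passes for any $r > np$ you plug in, as the proof above guarantees.)<|endoftext|>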
-TITLE: Is the set of continuous functions from $\mathbb R \to \mathbb R$ path connected?
-QUESTION [8 upvotes]: Let $C^0$ be the set of all (edit: bounded) continuous functions from $\mathbb R \to \mathbb R$ with the sup norm. Then a "path" in $C^0$ is a continuous function $f \colon [0,1] \to C^0$. The question boils down to whether there is a path (or continuous deformation) which transforms any one bounded continuous function into another.
-
-REPLY [6 votes]: I think it's worth developing Akhil's interesting remark a little bit more.
-1. If instead of ${\cal C}^0(\mathbb{R}, \mathbb{R})$, you take ${\cal C}^0([a,b], \mathbb{R})$, where $[a,b] \subset \mathbb{R}$ is a compact interval, then something interesting happens: you can give ${\cal C}^0([a,b], \mathbb{R})$ the compact-open topology, and this topology coincides with the one induced by the sup norm (also called the topology of uniform convergence). More generally, the same is true for ${\cal C}^0 (X , Y)$, where $X$ is any compact topological space and $Y$ any metric space.
-2. Now, for psychological reasons, let's use the notation $Y^X$ instead of ${\cal C}^0 (X , Y) $, and let $X$, $Y$ be any pair of topological spaces. Put the compact-open topology on $Y^X$ and let $Z$ be a third topological space. To every map $F: X\times Z \longrightarrow Y$ you can associate the map $\Phi (F) : Z \longrightarrow Y^X$ defined by
-$$
-(\Phi (F) (z)) (x)= F(x,z) \ .
-$$
-Reciprocally, to every map $\gamma : Z \longrightarrow Y^X$ you can associate the map $\Psi (\gamma ): X \times Z \longrightarrow Y$ defined by
-$$
-\Psi (\gamma) (x,z) = (\gamma (z)) (x) \ .
-$$
-You can easily check that $\Phi$ and $\Psi$ are inverse to each other. Moreover, if $F$ is continuous, so is $\Phi (F)$. If $X$ is a locally compact Hausdorff space, the same is true for $\Psi$: $\gamma$ continuous implies $\Psi (\gamma )$ continuous. (See Munkres' "Topology", theorem 46.11.)
-So, when $X$ is a locally compact Hausdorff space, you have bijections, inverse to each other:
-$$
-\Phi : Y^{X \times Z} \longrightarrow (Y^X)^Z \qquad \text{and} \qquad \Psi: (Y^X)^Z \longrightarrow Y^{X \times Z}
-$$
-For instance, take $Z = I$, the unit interval. This bijection tells us that, when $X$ is a locally compact Hausdorff space, paths $\gamma : I \longrightarrow Y^X$ and homotopies $F : X \times I \longrightarrow Y$ are the same.
-3. An easy topological exercise: any map $f: X \longrightarrow Y$ is homotopic to a constant map and all the constant maps are homotopic, if either
-
-$X$ is contractible and $Y$ is path-connected, or
-$Y$ is contractible.
-
-So, coming back to your situation, we could have said that ${\cal C}^0 (\mathbb{R}, \mathbb{R})$, with the compact-open topology, is always path-connected because $X = \mathbb{R}$ is locally compact, Hausdorff and contractible, and $Y= \mathbb{R}$ is path-connected (or because $X=\mathbb{R}$ is locally compact and Hausdorff and $Y= \mathbb{R}$ is contractible).
-(Notice that Jason has used the second possibility: he has constructed a homotopy from the constant function $0$ to $f$. We could use the first one: $F(x,t) = f(xt)$ is a homotopy from the constant function $f(0)$ to $f$.)
-Of course, Jason's solution is more straightforward and doesn't need all this elementary machinery of point-set topology. But now we can say that also other function spaces, with the compact-open topology, are path-connected: every time you take as $X$ a locally compact Hausdorff space and either of the two previous possibilities, ${\cal C}^0(X,Y)$ is path-connected. For instance,
-$$
-{\cal C}^0(\mathbb{R}, Y) ,\ {\cal C}^0([a,b],Y) , \ {\cal C}^0(\mathbb{R}^n,Y) ,\ {\cal C}^0(D^2,Y) \dots
-$$
-for any path-connected space $Y$, are path-connected.
Also
-$$
-{\cal C}^0(X,\mathbb{R}) , \ {\cal C}^0(X, [a,b]) , \ {\cal C}^0(X,\mathbb{R}^n) ,\ {\cal C}^0(X,D^2) \dots
-$$
-for any locally compact Hausdorff space $X$, are path-connected.<|endoftext|>
-TITLE: What is the theory of non-linear forms (as contrasted to the theory of differential forms)?
-QUESTION [23 upvotes]: It is often said that differential forms (sections of an exterior power of the cotangent bundle) are the things that you can integrate. But unless I'm being thoroughly dense, differential forms are not the only things that you can integrate, cf. the arclength form (on a 2d manifold) $ds=\sqrt{dx^2+dy^2}$, the unsigned 1-d forms $|f(x,y)dx+g(x,y)dy|$, or the unsigned area forms $|h(x,y)dx\wedge dy|$.
-My question is:
-
-Where do the arclength form $ds=\sqrt{dx^2+dy^2}$, the unsigned 1-d forms $|f(x,y)dx+g(x,y)dy|$, and the unsigned area forms $|h(x,y)dx\wedge dy|$ live relative to the differentials $dx$ and $dy$, which I understand to live in the cotangent bundle of some 2-dimensional manifold?
-
-REPLY [2 votes]: In my opinion, you're looking for the notion of a cogerm.
-If I understand correctly, the fact that such things act on paths (and not just vectors) allows for "higher order" forms like $d^2 x$, and the fact that such things aren't assumed linear allows for "non-linear" forms like $ds := \sqrt{dx^2+dy^2}$. And yes, there is indeed a notion of integration for such forms; see the link.<|endoftext|>
-TITLE: Arrangement of Numbers
-QUESTION [10 upvotes]: How can we prove that it is possible to arrange the numbers $1,2,3,4,\ldots, n$ in a row so that the average of any two of these numbers never appears between them?
-
-REPLY [15 votes]: Assume the inductive hypothesis that it is true for lists smaller than $n$. To reorder $1 \ldots n$ this way as well, split it into evens and odds, and apply the function $\left \lceil{x/2}\right \rceil $ to these sets to create two half-problems of the same type, which by the hypothesis can be solved. Now unapply the transformation to each half-solution using the function $2x$ to recover the evens and the function $2x-1$ to recover the odds. These functions are affine, so they preserve averages and the condition is still satisfied. Now concatenate these two lists together to form the solution. This concatenation always works because the average of an even and an odd is not an integer.
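-The recursion is easy to run. Here is a small Python sketch of it (my own transcription of the construction above):
-
-```python
-def no_average_between(n):
-    """Arrange 1..n so that the average of two entries never appears between them."""
-    if n <= 2:
-        return list(range(1, n + 1))
-    half = no_average_between((n + 1) // 2)
-    odds = [2 * x - 1 for x in half]             # ceil(n/2) odd values
-    evens = [2 * x for x in half if 2 * x <= n]  # floor(n/2) even values
-    return odds + evens
-
-print(no_average_between(8))  # [1, 5, 3, 7, 2, 6, 4, 8]
-```
-
-The filter in the evens line is harmless: deleting entries from a valid arrangement cannot create a violation, since "appearing between" two entries in the shorter list implies the same in the longer one.<|endoftext|>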
-TITLE: How can we show there is a set whose cardinality is greater than $\cal P^n(\Bbb N)$ for every natural number $n$?
-QUESTION [7 upvotes]: I haven't properly studied the theory of infinities yet.
-Let $A_0$ denote the set of natural numbers. Let $A_{i+1}$ denote the set whose elements are all the subsets of $A_i$ for $i=0,...,n,...$
-I understand well that the cardinality of $A_{i+1}$ is always greater than the cardinality of $A_i$ for all $i \in \mathbb{N}$.
-What is the simplest argument which proves that there exists a set whose cardinality is greater than that of $A_i$ for all $i \in \mathbb{N}$?
-Thanks.
-
-REPLY [8 votes]: Let $X = \bigcup_{i=0}^\infty A_i$. Then $A_i\subseteq X$ for all $i$, so $|A_i|\leq |X|$ for all $i$. Now, consider the powerset of $X$. Then we have $|A_i|\leq |X|<|P(X)|$.
-
-REPLY [8 votes]: Your question is closely linked to Beth numbers; their definition is:
-
-$\beth_0 = \aleph_0 =$ the cardinality of the non-negative integers
-For a successor ordinal $\alpha +1$ put $\beth_{\alpha+1} =$ the cardinality of the power-set of $\beth_{\alpha}$
-For a limit ordinal $\delta$ put $\beth_\delta = \bigcup_{\alpha\lt\delta} \beth_\alpha$
-
-What you're asking about is $\beth_\omega$.
-Further reading: Wikipedia page on Beth number.<|endoftext|>
-TITLE: A comprehensive list of binomial identities?
-QUESTION [67 upvotes]: Is there a comprehensive resource listing binomial identities? I am more interested in combinatorial proofs of such identities, but even a list without proofs will do.
-
-REPLY [43 votes]: The most comprehensive list I know of is H.W. Gould's Combinatorial Identities. It is available directly from him if you contact him. He also has some pdf documents available for download from his web site. Although he says they do "NOT replace [Combinatorial Identities] which remains in print with supplements," they still contain many more binomial identities even than in Concrete Mathematics. In general, Gould's work is a great resource for this sort of thing; he has spent much of his career collecting and proving combinatorial identities.
-Added: Another useful reference is John Riordan's Combinatorial Identities. It's hard to pick one of its 250 pages at random and not find at least one binomial coefficient identity there. Unfortunately, the identities are not always organized in a way that makes it easy to find what you are looking for. Still it's a good resource.<|endoftext|>
-TITLE: Counting equivalence relations
-QUESTION [5 upvotes]: I am aware that on a finite set the number of equivalence relations is the $n$-th Bell number. On the other hand, the only reference I could find on the web for infinite sets was this: On counting equivalence relations by D.J. Baylis, in which he proves that the set of equivalence relations on $\mathbb{N}$ is uncountable. I am interested in any sort of theorem that gives some rule about counting equivalence relations on an infinite set -- something like $\beth_0 = \aleph_0$ and $\beth_n = 2^{\beth_{n-1}}$ -- if there are any.
-
-REPLY [5 votes]: Firstly, to address Beth numbers (as you noted, the math markup doesn't work with them; I'll use words instead):
-$\beth$ (Beth) numbers are cardinal numbers defined as follows:
-
-$\beth_0 = \aleph_0$,
-$\beth_{\alpha+1} = 2^{\beth_\alpha}$,
-$\beth_\delta=\sup_{\gamma<\delta}\beth_\gamma$, for a limit ordinal $\delta$.
-
-Now, given an infinite set $X$ and assuming the axiom of choice, we have that $|X\times X|=|X|$, therefore $|P(X\times X)| = |P(X)|$.
-Define $R(X)$ as the set of all equivalence relations on $X$; then clearly $R(X) \subset P(X\times X)$ and therefore $|R(X)| \le |P(X)|$.
-On the other hand there are $2^{|X|}$ partitions of $X$ into two sets (an easy exercise in cardinal arithmetic), so for every such partition $\{X', X\setminus X'\}$ we can define an equivalence relation with two equivalence classes, namely $X'$ and its complement in $X$. Therefore $|R(X)| \ge |P(X)|$.
-All in all we have that the number of equivalence relations on $X$ is $2^{|X|}$.
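-For the finite case mentioned at the top, the Bell numbers are easy to compute; here is a short Python sketch using the Bell triangle (added purely as an illustration):
-
-```python
-def bell(n):
-    """n-th Bell number: the number of equivalence relations on an n-element set."""
-    row = [1]
-    for _ in range(n):
-        nxt = [row[-1]]               # each row starts with the last entry of the previous one
-        for v in row:
-            nxt.append(nxt[-1] + v)
-        row = nxt
-    return row[0]
-
-print([bell(n) for n in range(6)])  # [1, 1, 2, 5, 15, 52]
-```
-
-So the finite and infinite counts behave very differently: $B_n$ grows super-exponentially, while the infinite answer is simply $2^{|X|}$.<|endoftext|>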
-TITLE: Derivative of a product and derivative of quotient of functions theorem: I don't understand its proof
-QUESTION [5 upvotes]: I'm studying for a math exam and one of the questions that often appears is related to the derivative of a product of two functions.
-The theorem says that $(f(x)g(x))'=f'(x)g(x)+f(x)g'(x)$.
-The proof goes like this:
-$f(x+h)g(x+h)-f(x)g(x)=(f(x+h)-f(x))g(x)+(g(x+h)-g(x))f(x)$
-After that we divide the equation by $h$ and let $h$ approach $0$.
-Now what I don't understand is how they got the right side of the proof.
-Same problem with the quotient:
-The theorem says:
-$\left(\frac{f(x)} {g(x)}\right)'=\frac{f'(x)g(x)-f(x)g'(x)} {g^2(x)}$
-The above comes from
-$\frac {f(x+h)} {g(x+h)} - \frac {f(x)} {g(x)} = \frac{(f(x+h)-f(x))g(x)-(g(x+h)-g(x))f(x)} {g(x+h)g(x)}$
-I can see where $f(x+h)g(x)$ comes from, but I can't see where $f(x)g(x)$ comes from.
-
-REPLY [2 votes]: As per AndrejaKo's comment, I am posting my comment as an answer. Note that there are very interesting and useful answers to this question motivating the formula for the derivative of a product: my answer is simply a technical explanation of a typo in the argument provided in the question.
-You should add and subtract the quantity $f(x)g(x+h)$ from the left hand side of your equation (and the equality you wrote is indeed incorrect). Once you collect terms appropriately, everything should be clear!<|endoftext|>
-TITLE: If $a \mid m$ and $(a + 1) \mid m$, prove $a(a + 1) \mid m$.
-QUESTION [29 upvotes]: Can anyone help me out here? Can't seem to find the right rules of divisibility to show this:
-If $a \mid m$ and $(a + 1) \mid m$, then $a(a + 1) \mid m$.
-
-REPLY [31 votes]: If $\rm\,\ a\mid m,\ a\!+\!1\mid m\ \,$ then it follows that $\rm\ \, \color{#90f}{a(a\!+\!1)\mid m}$
-
-${\bf Proof}\rm\quad\displaystyle \frac{m}{a},\; \frac{m}{a+1}\in\mathbb{Z} \ \,\Rightarrow\,\ \frac{m}{a} - \frac{m}{a\!+\!1} \; = \;\color{#90f}{\frac{m}{a(a\!+\!1)} \in \mathbb Z}.\quad$ QED
-${\bf Remark}\rm\ \, \text{More generally, if }\, \color{#c00}{n = bc \:\!-\:\! ad} \;$ is a linear combination of $\rm\, a, b\, $ then
-$\rm\text{we have}\quad\,\ \displaystyle \frac{m}{a},\; \frac{m}{b}\in\mathbb{Z} \;\;\Rightarrow\;\; \frac{m}{a}\frac{\color{#c00}{bc}}{b} - \frac{\color{#c00}{ad}}{a}\frac{m}{b} = \frac{m\:\!\color{#c00}n}{a\:\!b} \in \mathbb Z$
-By Bezout, $\rm\, \color{#c00}{n = \gcd(a,b)}\, $ is the least positive linear combination, so the above yields
-$\rm\qquad\qquad a,b\mid m \;\Rightarrow\; ab\mid m\gcd(a,b) \;\Rightarrow\; \mathfrak{m}_{a,b}\!\mid m\ \ $ for $\ \ \rm \mathfrak{m}_{a,b} := \dfrac{ab}{\gcd(a,b)}$
-i.e. $ $ every common multiple $\rm\, m\,$ of $\,\rm a,b\,$ is a multiple of $\;\rm \mathfrak{m}_{a,b},\,$ so $\rm\, \color{#0a0}{\mathfrak{m}_{a,b}\le m}.\,$ But $\rm\,\mathfrak{m}_{a,b}\,$ is also a common multiple, i.e. $\rm\ a,b\mid \mathfrak{m}_{a,b}\,$ viz. $\displaystyle \,\rm \frac{\mathfrak{m}_{a,b}}{a} = \;\frac{a}{a}\frac{b}{\gcd(a,b)}\in\mathbb Z\,$ $\,\Rightarrow\,$ $\rm\, a\mid \frak{m}_{a,b},\,$ and $\,\rm b\mid \mathfrak{m}_{a,b}\,$ by symmetry. Thus $\,\rm \mathfrak{m}_{a,b} = lcm(a,b)\,$ is the $\rm\color{#0a0}{least}$ common multiple of $\rm\,a,b.\,$ In fact we have proved the stronger statement that it is a common multiple that is divisibility-least, i.e. it divides every common multiple. This is the general definition of LCM in an arbitrary domain (ring without zero-divisors), i.e. we have the following universal dual definitions of LCM and GCD, which essentially say that LCM & GCD are $\,\sup\,$ & $\,\inf\,$ in the poset induced by divisibility order $\,a\preceq b\!\iff\! a\mid b$.
-
-Definition of LCM $\ \ $ If $\quad\rm a,b\mid c\,\iff\; d\mid c \ \ \,$ then $\rm\ d\approx lcm(a,b)$
-compare: $\, $ Def of $\rm\,\cap\ \ \,$ If $\rm\ \ \ a,b\supset c\iff d\supset c\,\ $ then $\,\ \rm d = a\cap b$
-Definition of GCD $\ \ $ If $\quad\rm c\mid a,b \;\iff\; c\mid d \,\ $ then $\,\ \rm d \approx \gcd(a,b)$
-compare: $\, $ Def of $\rm\,\cup\ \ \,$ If $\rm\ \ \ c\supset a,b\iff c\supset d\,\ $ then $\,\ \rm d = a\cup b$
-Note $\;\rm a,b\mid [a,b] \;$ follows by putting $\;\rm c = [a,b] \;$ in the definition. $ $ Dually $\;\rm (a,b)\mid a,b$.
-Above $\rm\,d\approx e\,$ means $\rm\,d,e\,$ are associate, i.e. $\rm\,d\mid e\mid d\,$ (equivalently $\rm\,d = u\!\: e\,$ for $\,\rm u\,$ a unit = invertible). In general domains gcds are defined only up to associates (unit multiples), but we can often normalize to rid such unit factors, e.g. normalizing the gcd to be $\ge 0$ in $\Bbb Z,\,$ and making it monic for polynomials over a field, e.g. see here and here.
-Such universal definitions enable slick unified proofs of both arrow directions, e.g.
-Theorem $\rm\;\; (a,b) = ab/[a,b] \;\;$ if $\;\rm\ [a,b] \;$ exists.
-Proof: $\rm\quad d\mid a,b \iff a,b\mid ab/d \iff [a,b]\mid ab/d \iff\ d\mid ab/[a,b] \quad$ QED
-The conciseness of the proof arises by exploiting to the hilt the $\:\!(\!\!\iff\!\!)\:\!$ definition of LCM, GCD. Implicit in the above proof is an innate cofactor duality. Brought to the fore, it clarifies LCM, GCD duality (analogous to DeMorgan's Laws), e.g. see here and here.
-By the theorem, GCDs exist if LCMs exist. But common multiples clearly comprise an ideal, being closed under subtraction and multiplication by any ring element. Hence in a PID the generator of an ideal of common multiples is clearly an LCM. In Euclidean domains this can be proved directly by a simple descent, e.g. in $\:\mathbb Z \;$ we have the following high-school level proof of the existence of LCMs (and, hence, of GCDs), after noting the set $\rm M$ of common multiples of $\rm a,b$ is closed under subtraction and contains $\:\rm ab \ne 0\:$:
-Lemma $\ $ If $\;\rm M\subset\mathbb Z \;$ is closed under subtraction and $\rm M$ contains a nonzero element $\rm\,k,\,$ then $\rm M \:$ has a positive element and the least such positive element of $\;\rm M$ divides every element.
-Proof $\, $ Note $\rm\, k-k = 0\in M\,\Rightarrow\, 0-k = -k\in M, \;$ therefore $\rm M$ contains a positive element. Let $\rm\, m\,$ be the least positive element in $\rm\, M.\,$ Since $\,\rm m\mid n \iff m\mid -n, \;$ if some $\rm\, n\in M\,$ is not divisible by $\,\rm m\,$ then we may assume that $\,\rm n > 0,\,$ and the least such. Then $\rm\,M\,$ contains $\rm\, n-m > 0\,$ also not divisible by $\rm m,\,$ and smaller than $\rm n$, contra leastness of $\,\rm n.\ \ $ QED<|endoftext|>
-TITLE: When did the term "tuple" get its current meaning?
-QUESTION [8 upvotes]: In a recent discussion, someone told me tuples in the modern meaning (in particular, tuples are heterogeneous: that is, different elements of a tuple can belong to different sets/have different types) first appeared in Codd's tuple calculus. I was surprised it would be so late, but searching Google Books before 1970, I can't see any clearly heterogeneous examples, and quite a few clearly homogeneous ones ("tuple of ones and zeros", "tuple of natural numbers", etc.)
-Can anybody confirm that Codd introduced heterogeneous tuples or point out an earlier appearance?
-
-REPLY [5 votes]: Tuples appear in essentially all formal treatments of set theory, and those go back to way before the 70s!<|endoftext|>
-TITLE: Find the coordinates in an isosceles triangle
-QUESTION [5 upvotes]: Given:
-$A = (0,0)$
-$B = (0,-10)$
-$AB = AC$
-Using the angle between $AB$ and $AC$, how are the coordinates at $C$ calculated?
-
-REPLY [2 votes]: Let $a,b$ and $c$ be the side lengths and $A,B$ and $C$ the angles.
-$a^{2}=x^{2}+\left( y+10\right) ^{2}$
-$b^{2}=x^{2}+y^{2}=10^{2}$
-$b=c=10$
-By the (Napier) theorem of tangents (a corollary of the law of tangents):
-$\tan \frac{A-B}{2}=\frac{a-b}{a+b}\cot \frac{C}{2}$
-On the other hand
-$\frac{A+B}{2}=\frac{\pi }{2}-\frac{C}{2}\quad C<\pi $
-and by the law of sines
-$c\sin A=a\sin C\iff \sqrt{x^{2}+\left( y+10\right) ^{2}}\:\sin C=10\sin A$
-Combining, we get:
-$\frac{A-B}{2}=\arctan (\frac{\sqrt{x^{2}+\left( y+10\right) ^{2}}-10}{\sqrt{x^{2}+\left( y+10\right) ^{2}}+10}\cot \frac{C}{2})$
-$\frac{A+B}{2}=\frac{\pi }{2}-\frac{C}{2}\quad C<\pi $
-$\sqrt{x^{2}+(\sqrt{100-x^{2}}+10)^{2}}\:\sin C=10\sin A$
-$y^{2}=10^{2}-x^{2}$
-We have to solve this system of four equations in four unknowns $x,y,A,B$.
-Edit: I started this approach before the question was updated.<|endoftext|>
-TITLE: Is whether a set is closed or not a local property?
-QUESTION [18 upvotes]: If I want to show a topological subspace is closed in an ambient space, does it suffice to know what happens on an open cover of the ambient space? More specifically,
-
-Let $X$ be a topological space with a given open cover $\{ U_i \}$. Suppose that $Z \subset X$ is a set such that $Z \cap U_i$ is closed in $U_i$ for all $i$. Does it follow that $Z$ is closed in $X$?
-
-This is clearly true if there are finitely many $U_i$. At first thought, it seems unlikely to be true in the infinite case, but I'm having trouble coming up with a suitable counter-example.
-
-REPLY [14 votes]: $U_i \setminus (Z \cap U_i)$ is open in $U_i$, thus also open in $X$. Then $X \setminus Z = \bigcup_{i \in I} U_i \setminus (Z \cap U_i)$ is open in $X$, i.e. $Z$ is closed in $X$.<|endoftext|>
-TITLE: What is the best way to solve an equation involving multiple absolute values?
-QUESTION [20 upvotes]: An absolute value expression such as $|ax-b|$ can be rewritten in two cases as $|ax-b|=\begin{cases}
-ax-b & \text{ if } x\ge \frac{b}{a} \\
-b-ax & \text{ if } x< \frac{b}{a}
-\end{cases}$, so an equation with $n$ separate absolute value expressions can be split up into $2^n$ cases, but is there a better way?
-For example, with $|2x-5|+|x-1|+|4x+3|=13$, is there a better way to handle all the possible combinations of $x\ge\frac{5}{2}$ versus $x<\frac{5}{2}$, $x\ge 1$ versus $x< 1$, and $x\ge-\frac{3}{4}$ versus $x<-\frac{3}{4}$?
-
-REPLY [9 votes]: This is merely a very special case of the powerful CAD (cylindrical algebraic decomposition) algorithm for quantifier elimination in real-closed fields, e.g. see Jirstrand's paper [1] for a nice introduction.
-[1] M. Jirstrand. Cylindrical algebraic decomposition - an introduction. 1995.
-Technical report S-58183, Automatic Control group, Department of Electrical Engineering,
-Linköping University, Linköping, Sweden.
-Freely available here or here.
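-For a single-variable equation like the example in the question, the decomposition is particularly simple: split $\mathbb{R}$ at the $n$ breakpoints and solve one linear equation per region ($n+1$ regions instead of $2^n$ sign patterns). A Python sketch with exact arithmetic (my own illustration):
-
-```python
-from fractions import Fraction as F
-
-def solve_abs(terms, rhs):
-    """Solve sum |a*x + b| = rhs for the terms [(a, b), ...], exactly."""
-    bps = sorted({F(-b, a) for a, b in terms})
-    samples = [bps[0] - 1] + [(p + q) / 2 for p, q in zip(bps, bps[1:])] + [bps[-1] + 1]
-    bounds = [(None, bps[0])] + list(zip(bps, bps[1:])) + [(bps[-1], None)]
-    sols = set()
-    for mid, (lo, hi) in zip(samples, bounds):
-        A = B = F(0)
-        for a, b in terms:                    # on this region |a x + b| = s*(a x + b)
-            s = 1 if a * mid + b >= 0 else -1
-            A, B = A + s * a, B + s * b
-        if A != 0:                            # (flat regions with A == 0 omitted for brevity)
-            x = (F(rhs) - B) / A
-            if (lo is None or lo <= x) and (hi is None or x <= hi):
-                sols.add(x)
-    return sorted(sols)
-
-print(solve_abs([(2, -5), (1, -1), (4, 3)], 13))  # [Fraction(-10, 7), Fraction(2, 1)]
-```
-
-For the example this reproduces the two solutions $x = -\frac{10}{7}$ and $x = 2$.<|endoftext|>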
-TITLE: What is the image near the essential singularity of z sin(1/z)?
-QUESTION [26 upvotes]: This was part of a homework problem from J.B. Conway's complex analysis text which I was assigned long ago but didn't get. A few years later I was a TA for a course where the problem was assigned. I still didn't know how to solve it, nor did any of my students. If you can give me a simple answer, my gratefulness will far outweigh my embarrassment.
-
-Define $f(z)=z\sin(\frac{1}{z})$ on the punctured plane. What is the image under $f$ of a (small) punctured disk at the origin?
-
-Remarks:
-
-The corresponding problem for $\sin(\frac{1}{z})$ is straightforward to solve using the definition of $\sin$ in terms of exponentials, the fact that $\exp$ is onto the punctured plane, and periodicity. The answer in this case is the whole plane.
-The corresponding problem for $z\cos(\frac{1}{z})$ is straightforward to solve using Picard's theorem, because the oddness of the function implies that 0 is the only possible excluded point, and it is easy to see that 0 is in the image by taking reciprocals of large zeros of $\cos$. Thus the answer in this case is also the whole plane. (The same argument could be applied to the previous remark, but there no big theorems are needed. Perhaps Picard's theorem isn't needed here either.)
-Another part of the problem I couldn't solve generalizes this, taking $z^n\sin(\frac{1}{z})$ when $n$ is a positive integer. When $n$ is even, there is no problem, because the argument from the previous remark applies. But I don't know how to solve it when $n$ is odd.
-By Picard's theorem, there is at most one point missing. I would be surprised if there is a point missing, but I have no argument to back up this expectation.
-Although an elementary argument would be nice, I do not care whether the argument is "appropriate" for what is covered up to that point in the text. This is pure curiosity.
-
-REPLY [11 votes]: The image of $f(z)=z^n\sin(\frac1z)$ (for $n$ odd) on any punctured disc about the origin is the whole of $\mathbb{C}$.
-The following is (I think) a simplified proof along the lines of that given in Moron's reference. I'll use proof by contradiction, so suppose that there is an $a$ such that $f(z^{-1})=z^{-n}\sin(z)\not=a$ for all large $z$. By symmetry, $f(z^{-1})=g(z^2)$ for an analytic function $g$ with, for $n>1$, a pole at the origin of order $(n-1)/2$. As $g(z)-a$ can have only finitely many zeros, we can decompose it as
-$$
-g(z)=a+z^{(1-n)/2}h(z)\prod_{i=1}^m(z-b_i)
-$$
-where $b_i$ are the zeros of $g(z)-a$ and $h$ is entire and non-vanishing.
-Writing out $f(z^{-1})=\frac{1}{2i}z^{-n}(e^{iz}-e^{-iz})$ shows that $f(z^{-1})$ is of order $1$. So $g$ and, hence, $h$ are of order 1/2. Any entire and nonzero function of order less than 1 is constant, so $h$ is constant and we see that $f$ is a rational function, which is false.<|endoftext|>
-TITLE: Why do we need noetherianness (or something like it) for Serre's criterion for affineness?
-QUESTION [14 upvotes]: Serre's criterion for affineness (Hartshorne III.3.7) states that:
-
-Let $X$ be a noetherian scheme. Suppose $H^1(X, \mathcal{F})= 0$ for every quasi-coherent sheaf on $X$. Then $X$ is affine.
-
-There is a more general statement (EGA II.5.2.1) that reaches the same conclusion with the hypothesis that $X$ is a separated quasi-compact scheme or one where the topological space is noetherian.
-I don't, however, understand why we need these hypotheses. The idea of the proof is to find a set of $g_i \in \Gamma(X, O_X)$ generating the unit ideal and such that the $X_{g_i}$ are affine. This can be done by showing the $X_{g_i}$ form a basis at each closed point using the cohomology statement (an argument which I'm pretty sure works without any conditions on the scheme).
Then, you take open affine sets of the form $X_f$ containing each closed point and take their union; this is an open set whose complement is closed and must contain no closed point, hence is empty. This part of the argument relies on the fact that every closed subset contains a closed point, a fact which is true under noetherian hypotheses since a scheme is a $T_0$-space.
-But isn't it true that every quasi-compact $T_0$ space has a closed point? A minimal closed set must be reduced to a point. So I'm unclear why "quasi-compactness" alone is not enough. (It shouldn't be enough. Grothendieck is not one to mince hypotheses!)
-Questions:
-
-Where does the standard argument break down for schemes which are only quasi-compact?
-Is this true if $X$ is only quasi-separated and quasi-compact? (Are there counterexamples otherwise?)
-
-REPLY [13 votes]: The short answer is that quasi-compactness is enough (for the statement you asked about): see lemma 3.1 in http://www.math.columbia.edu/algebraic_geometry/stacks-git/coherent.pdf (but it's essentially the argument in Hartshorne).
-This business of extra hypotheses comes up because Hartshorne and Grothendieck are proving an iff statement; that is, these hypotheses are needed to prove that if $X$ is affine then $H^1(X,F) = 0$ for every quasicoherent sheaf. In the case of Hartshorne you need the noetherian hypothesis to prove
-lemma II.3.3 If $I$ is an injective $A$-module and $f \in A$ then $I \to I_f$ is surjective.
-With this lemma you can then show that if $I$ is injective then $\tilde I$ is flasque, and so you can use them to calculate cohomology: $F$ is quasicoherent $\Rightarrow F = \widetilde{M}$. If $M \to I_0 \to I_1 \to ...$ (1) is an injective resolution then $\widetilde{M} \to \widetilde{I_0} \to \widetilde{I_1} \to ...$ is a flasque resolution and applying global sections just recovers (1), so all the higher cohomology vanishes.
-I'm not familiar with Grothendieck's argument and I don't know if you can replace separated with quasi-separated.<|endoftext|>
-TITLE: Why doesn't a simple mean give the position of a centroid in a polygon?
-QUESTION [21 upvotes]: I was having a look at this question on SO.
-From what I know, the centroid is the center of mass of an object. So, by definition its position is given by a simple mean of the positions of all the points in the object.
-For a polygon, it only has mass at the vertices. So, the centroid should be given by the arithmetic mean of the coordinates of the vertices.
-But Wikipedia says the centroid is given by
-$$C_x = \frac{1}{6A}\sum_{i=0}^{n-1}(x_i+x_{i+1})(x_i y_{i+1} - x_{i+1} y_i),\qquad C_y = \frac{1}{6A}\sum_{i=0}^{n-1}(y_i+y_{i+1})(x_i y_{i+1} - x_{i+1} y_i)$$
-where $A$ is
-$$A = \frac{1}{2}\sum_{i=0}^{n-1}(x_i y_{i+1} - x_{i+1} y_i).$$
-Why doesn't a simple arithmetic mean work?
-
-REPLY [7 votes]: Yes, yes, but why? The current answers (as well as Wikipedia) do not contain enough detail to understand these formulas immediately.
So let's start with the very basic definition of the area centroid:
-$$
-\vec{C} = \frac{\iint \vec{r}(x,y)\,dx\,dy}{\iint dx\,dy} =
-\frac{(\iint x\,dx\,dy,\iint y\,dx\,dy)}{\iint dx\,dy} = \frac{(m_x,m_y)}{A}=(C_x,C_y)
-$$
-Alright, let's get rid of the double integrals in the first place, by employing Green's theorem:
-$$
-\iint \left( \frac{\partial M}{\partial x} - \frac{\partial L}{\partial y} \right) dx\,dy
-= \oint \left( L\,dx + M\,dy \right)
-$$
-At the edges of the (convex) polygon we have:
-$$\begin{cases}
-x = x_i + (x_{i+}-x_i)\,t \\ y = y_i + (y_{i+}-y_i)\,t \end{cases}
-\quad \mbox{with} \quad \begin{cases} i = 0,1,2,\cdots,n-1 \\ i+=i+1\mod n \end{cases}
-\quad \mbox{and} \quad 0 \le t \le 1
-$$
-Then by substitution of $M(x,y) = x$ and $L(x,y) = 0$ we have:
-$$
-A = \iint dx\,dy = \oint x\,dy = \sum_{i=0}^{n-1} \int_0^1 \left[x_i + (x_{i+}-x_i)\,t\right](y_{i+}-y_i)\,dt=\\
-\sum_{i=0}^{n-1}(y_{i+}-y_i)\left[x_i\left.t\right|_0^1 + (x_{i+}-x_i)\frac{1}{2}\left.t^2\right|_0^1\right] =
-\frac{1}{2}\sum_{i=0}^{n-1}(x_{i+}+x_i)(y_{i+}-y_i)=\\
-\frac{1}{2}\sum_{i=0}^{n-1} (x_iy_{i+}-x_{i+}y_i)
-$$
-The last step is by telescoping.
-The main integral for the $x$-coordinate of the centroid is, with $M(x,y) = x^2/2$ and $L(x,y) = 0$:
-$$
-m_x = \iint x\,dx\,dy = \oint \frac{1}{2}x^2 \,dy = \frac{1}{2}\sum_{i=0}^{n-1}(y_{i+}-y_i)\int_0^1\left[x_i + (x_{i+}-x_i)\,t\right]^2\,dt=\\
-\frac{1}{2}\sum_{i=0}^{n-1}(y_{i+}-y_i)\left[x_i^2\left.t\right|_0^1+2x_i(x_{i+}-x_i)\frac{1}{2}\left.t^2\right|_0^1
-+(x_{i+}-x_i)^2\frac{1}{3}\left.t^3\right|_0^1\right]=\\
-\frac{1}{2}\sum_{i=0}^{n-1}(y_{i+}-y_i)\left[x_i^2+x_i(x_{i+}-x_i)+\frac{1}{3}(x_{i+}-x_i)^2\right]=\\
-\frac{1}{6}\sum_{i=0}^{n-1}(y_{i+}-y_i)\left[x_{i+}^2+x_ix_{i+}+x_i^2\right]=\\
-\frac{1}{6}\sum_{i=0}^{n-1}\left[x_ix_{i+}y_{i+}+x_i^2y_{i+}-x_{i+}^2y_i-x_ix_{i+}y_i\right]\quad\Longrightarrow\\
-m_x = \frac{1}{6}\sum_{i=0}^{n-1}(x_i+x_{i+})(x_iy_{i+}-x_{i+}y_i)
-$$
-The last two steps again use telescoping.
-The main integral for the $y$-coordinate of the area centroid is, with $M(x,y) = 0$ and $L(x,y) = -y^2/2$:
-$$
-m_y = \iint y\,dx\,dy = \oint -\frac{1}{2}y^2 \,dx = -\frac{1}{2}\sum_{i=0}^{n-1}(x_{i+}-x_i)\int_0^1\left[y_i + (y_{i+}-y_i)\,t\right]^2\,dt
-$$
-Which is similar to the main integral for the $x$-coordinate of the centroid:
-$$
-m_x = \iint x\,dx\,dy = \oint \frac{1}{2}x^2 \,dy = \frac{1}{2}\sum_{i=0}^{n-1}(y_{i+}-y_i)\int_0^1\left[x_i + (x_{i+}-x_i)\,t\right]^2\,dt
-$$
-It is seen that everything is the same if we just exchange $x$ and $y$, except for the minus sign, hence:
-$$
-m_y = -\frac{1}{6}\sum_{i=0}^{n-1}(y_i+y_{i+})(y_ix_{i+}-y_{i+}x_i)=\frac{1}{6}\sum_{i=0}^{n-1}(y_i+y_{i+})(x_iy_{i+}-x_{i+}y_i)
-$$
-Combining the partial results found gives the end result, as displayed in the question.
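-If you want to check the result numerically, here is a direct Python transcription of the final formulas (nothing beyond the sums above):
-
-```python
-def polygon_centroid(pts):
-    """Area centroid of a simple polygon given as a list of (x, y) vertices."""
-    n = len(pts)
-    A = mx = my = 0.0
-    for i in range(n):
-        x0, y0 = pts[i]
-        x1, y1 = pts[(i + 1) % n]
-        cross = x0 * y1 - x1 * y0          # x_i y_{i+} - x_{i+} y_i
-        A += cross
-        mx += (x0 + x1) * cross
-        my += (y0 + y1) * cross
-    A *= 0.5
-    return mx / (6 * A), my / (6 * A)
-
-# A 2 x 1 rectangle: the centroid is its center, (1.0, 0.5)
-print(polygon_centroid([(0, 0), (2, 0), (2, 1), (0, 1)]))
-```
-
-Inserting an extra vertex along one edge leaves this area centroid unchanged but shifts the plain vertex average, which is exactly the discrepancy the question asks about.<|endoftext|>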
-TITLE: Is the universal cover of an algebraic group an algebraic group?
-QUESTION [15 upvotes]: Here algebraic group means affine algebraic group in both instances. Also I'm mainly interested in groups over $\mathbb{C}$. In fact I'm taking $\pi_1(G)$ to mean the fundamental group of $G_{an}$, the analytification. So I guess my question only applies to the base field being either $\mathbb{C}$ or $\mathbb{R}$.
-In this case these groups are also Lie groups with Lie algebras. If $\mathfrak{g}$ is a semisimple Lie algebra then there is a connection with the weight and root lattices: there is a 1-1 correspondence between connected Lie groups with Lie algebra $\mathfrak{g}$ and lattices $\Lambda$ with $\Lambda_W \supset \Lambda \supset \Lambda_R$.
-The group corresponding to $\Lambda = \Lambda_R$ is always algebraic because it is the adjoint group. A slightly more general question is then: for $\Lambda_W \supset \Lambda \supset \Lambda_R$, is the corresponding group $G_\Lambda$ affine algebraic?
-
-REPLY [13 votes]: Over $\mathbf{C}$ I believe the answer is yes (the universal cover is algebraic), although I'm not really an expert. Here's the story as I understand it.
-A connected[*] semisimple linear algebraic group over a field $k$ is called simply connected if it admits no nontrivial isogeny from another connected group. (An isogeny is a surjective, flat homomorphism of algebraic $k$-groups with finite kernel.) Now just like in the Lie group story, nice algebraic groups are classified by combinatorial data. Precisely, a reductive $k$-group $G$ together with a split maximal torus $T$ is classified by a "root datum", which is roughly the roots and coroots of $G$ with respect to $T$, plus the lattices of characters and cocharacters of $T$. ("Roughly" because when $k$ is not algebraically closed, you need to keep track of the action of the Galois group of $k$ on the lattices, too.) Correspondingly, isogenies between such groups match bijectively with appropriate "morphisms" between the respective root data. These theorems are a bit involved, but can certainly be found in Borel's book on linear algebraic groups, for the case you care about. (Algebraically closed field of characteristic zero, namely $\mathbf{C}$.)
-A consequence of this theory is that for any connected semisimple linear algebraic $\mathbf{C}$-group, there exists an "algebraic universal cover", i.e. an isogeny $\tilde{G}\to G$ from a simply connected $\mathbf{C}$-group $\tilde{G}$. (In fact, we don't need to be working over $\mathbf{C}$ for this to be true.)
-Now here's the crux. I claim that if $G$ is a simply connected $\mathbf{C}$-group, then the closed points $G(\mathbf{C})$ are simply connected in the classical topology. The hypothesis that we are over $\mathbf{C}$ is crucial: $\mathrm{Sp}(2n)$ is a simply connected $\mathbf{R}$-group, but the universal cover of $\mathrm{Sp}(2n)(\mathbf{R})$ is the non-algebraic metaplectic group.
-Let me sketch the proof of the claim, which is surprisingly hard. (This was explained to me by Brian Conrad; any errors I introduce are, of course, my own.) First, it's a fact, although not a tautology, that since $G$ is connected, $G(\mathbf{C})$ is connected in the classical topology. So that's a good start. Next, by classical Lie theory, the complex Lie group $G(\mathbf{C})$ is homotopy equivalent to a maximal compact real Lie subgroup $K$. Since $K$ is a compact manifold, this implies that $H^1(G(\mathbf{C}),\mathbf{Z})$ is finitely generated. But this group is the abelianization of $\pi_1(G(\mathbf{C}))$ (Hurewicz); and the fundamental group is already abelian because this is true for all topological groups. So $\pi_1(G(\mathbf{C}))$ is finitely generated (and abelian). In particular, it has a finite quotient. So if $G(\mathbf{C})$ were not (classically) simply connected, there would be a finite covering map of complex Lie groups $G'\to G(\mathbf{C})$.
Now a hard theorem of Grauert (relating $\pi_1(G(\mathbf{C}))$ to the "étale fundamental group" of $G$, which classifies the algebraic analogue of finite covering maps) implies that $G'$, as well as its analytic group structure, can uniquely be given an algebraic structure. In other words, there is an algebraic $\mathbf{C}$-group $G_0'$ with $G'=G_0'(\mathbf{C})$, and an isogeny $G_0'\to G$, such that the induced map on $\mathbf{C}$-points is $G'\to G(\mathbf{C})$. In particular $G_0'\to G$ is a nontrivial isogeny, since on $\mathbf{C}$-points it is a nontrivial finite covering map. And this contradicts the (algebraic) simple-connectedness of $G$.
-Whew! So in summary, for connected semisimple linear algebraic $\mathbf{C}$-groups $G$, the universal cover of $G(\mathbf{C})$ is precisely $\tilde G(\mathbf{C})$ where $\tilde G$ is the simply connected form of $G$.
-[*] Warning: In this answer, "connected" means Zariski-connected. For algebraic groups over $\mathbf{R}$, this is VERY different from the $\mathbf{R}$-points being connected in the classical topology!<|endoftext|>
-TITLE: What are the units of cyclotomic integers?
-QUESTION [31 upvotes]: This question made me realize I had a misconception about the cyclotomic integers: I thought the units were exactly the roots of unity. There are only finitely many units but infinitely many integers, so the question is impossible to solve unless there are more units. So what are the units of cyclotomic integers?
-
-REPLY [2 votes]: We take the $ p^{\text{th}} $ cyclotomic ring of integers $ \mathbb{Z}[\zeta] $, $ p $ an odd prime, a primitive root $ \gamma\pmod{p} $ and the homomorphism $ \sigma\zeta=\zeta^\gamma $. Kummer took the units
-$$\tag{1} \varepsilon_{j}=\dfrac{\sigma^j\zeta-\sigma^{j}\zeta^{-1}}{\sigma^{j-1}\zeta-\sigma^{j-1}\zeta^{-1}}=\sigma^{j-1}\left(\dfrac{\sigma\zeta-\sigma\zeta^{-1}}{\zeta-\zeta^{-1}}\right),\quad 1\le j\le\mu-1, $$
-with $ \mu=(p-1)/2 $. These are real units and Kummer also proved that every unit can be expressed as a real unit times a root of unity $ (-\zeta)^n $. We denote the fundamental system of units with $ \hat\varepsilon_j $, $ 1\le j\le\mu-1 $. A unit of the fundamental system of units can also be expressed as a real unit times a root of unity. We choose real units $ \hat\varepsilon_j $ for the fundamental system of units.
-Assume that the multiplicative quotient module
-$$ [\hat\varepsilon_1,\dots,\hat\varepsilon_{\mu-1}]/[\pm1,\varepsilon_1,\dots,\varepsilon_{\mu-1}] $$
-is not trivial, i.e. has an order $ N $ greater than $ 1 $. Suppose the prime $ 2 $ divides the order $ N $. Then there exists a real unit $ E(\zeta) $ of order $ 2 $ in the quotient module and exponents $ x_j $ such that
-$$ (-1)^s\prod_{j=1}^{\mu-1}\varepsilon_j^{x_j}=E^2(\zeta),\quad x_j\in\mathbb{Z},\quad s\in\{0,1\}. $$
-Kummer observed that the left hand side is a positive number and it must also be a positive number for all conjugates $ \sigma^kE^2(\zeta) $. This allows us to search for possible exponents for which this criterion holds. We just solve the linear system of congruences:
-$$ \left(\dfrac{1-\text{sign}\;\sigma^i\varepsilon_j}{2}\right)_{i\times j}\cdot\begin{pmatrix}x_1 \\ \vdots \\ x_{\mu-1}\end{pmatrix}\equiv\begin{pmatrix}s \\ \vdots \\ s\end{pmatrix}\pmod{2}.
$$ -In the $ 163^{\text{rd}} $ cyclotomic ring of integers we obtain three candidates -$$ E_{0}^2=-\prod_{j=0}^{26}\varepsilon_{1+3j},\quad E_{1}^2=-\prod_{j=0}^{26}\varepsilon_{2+3j},\quad E_{2}^2=E_{0}^2\cdot E_{1}^2, $$ -with the primitive root $ \gamma=2\pmod{163} $. Let $ p-1=ef $. The units are invariant under the homomorphism $ \sigma^e $, $ e=3 $, with $ \sigma^{ef/2}\varepsilon_k=\sigma^{\mu}\varepsilon_k=\varepsilon_k $ for a real unit $ \varepsilon_k\equiv\varepsilon_k(\zeta+\zeta^{-1}) $ because $ \sigma^\mu\zeta=\zeta^{-1} $ and we should assume that the units can be expressed by the Gaussian periods -$$ \eta_j=\sigma^j\zeta+\sigma^{j+e}\zeta+\dots+\sigma^{j+(f-1)e}\zeta=\sum_{k=0}^{f-1}\sigma^{ke+j}\zeta,\quad j=0,\dots,e-1, $$ -thus, we assume that $ E_k=a_0\eta_0+a_1\eta_1+a_2\eta_2 $ holds for some integers $ a_j $. This gives with $ \sigma\eta_2=\eta_0 $ for periods $ \eta_j $ with length $ f=54 $ -\begin{align} -\pm\sigma^0 E_i&=a_0\eta_0+a_1\eta_1+a_2\eta_2 \\ \tag{2} -\pm\sigma^1 E_i&=a_0\eta_1+a_1\eta_2+a_2\eta_0 \\ -\pm\sigma^2 E_i&=a_0\eta_2+a_1\eta_0+a_2\eta_1 -\end{align} -Writing the units and periods as complex numbers, we can easily solve the linear system of equations and obtain $ E_{0}^2=(5+\eta_2)^2 $ and $ E_{1}^2=(5+\eta_1)^2 $ with $ 1+\eta_0+\eta_1+\eta_2=0 $. There is no unit with order $ 4 $ in the quotient module. The $ 349^{\text{th}} $ cyclotomic ring of integers has the four linearly independent units -\begin{align*} -E(1,3)&=(30 \eta_{0}+30 \eta_{1}+36 \eta_{2}+30 \eta_{3}+42 \eta_{4}+37 \eta_{5})^2, f=58 \\ -E(2,4)&=(37 \eta_{0}+30 \eta_{1}+30 \eta_{2}+36 \eta_{3}+30 \eta_{4}+42 \eta_{5})^2, f=58 \\ --E(2,3)&=(8 \eta_{0}+7 \eta_{1}+6 \eta_{2}+6 \eta_{3}+7 \eta_{4}+6 \eta_{5})^2, f=58 \\ --E(2,5)&=(7 \eta_{0}+7 \eta_{1}+6 \eta_{2})^2, f=116 -\end{align*} -with -$$ E(a,b)=\prod_{j=0}^{28}\varepsilon_{a+6j}\varepsilon_{b+6j} $$ -The periods were built with the primitive root $ \gamma=2\pmod{349} $. There is no unit with order $ 4 $ in the quotient module. -Kummer also had a method in store for computing units that have an odd order $ q\ne p $ in the multiplicative quotient module. In this case we have -$$\tag{3} \varepsilon_q(\zeta)=\prod_{j=1}^{\mu-1}\varepsilon_j^{x_j}=E^q(\zeta),\quad x_j\in\mathbb{Z}. $$ -If the sign on the left hand side is negative, we take the unit $ -E(\zeta) $ so that we could leave this sign out. Now, $ E^q(\zeta)=E(\zeta^q)+q\omega_q(\zeta) $ with some cyclotomic integer $ \omega_q(\zeta) $. This gives -$$ \varepsilon_q^q(\zeta)=\left\lbrace E(\zeta^q)+q\omega(\zeta)\right\rbrace^q\equiv E^q(\zeta^q)=\varepsilon_q(\zeta^q)\pmod{q^2} $$ -or $ \varepsilon_q^q(\zeta)\equiv\varepsilon_q(\zeta^q)\pmod{q^2} $. 
With $ \varepsilon_j\equiv\varepsilon_j(\zeta) $ for the units $ (1) $ we also have $ \varepsilon_j^q(\zeta)=\varepsilon_j(\zeta^q)+q\omega_j(\zeta) $ with some cyclotomic integer $ \omega_j(\zeta) $ and this gives
-$$ {\varepsilon_q^q(\zeta)}={\left\lbrace \prod_{j=1}^{\mu-1}\varepsilon_j^{x_j}(\zeta) \right\rbrace^q}
-={\prod_{j=1}^{\mu-1}\left\lbrace \varepsilon_j(\zeta^q)+q\omega_j(\zeta) \right\rbrace^{x_j}}
-={\prod_{j=1}^{\mu-1}\varepsilon_j^{x_j}(\zeta^q)\left\lbrace1+q\dfrac{\omega_j(\zeta)}{\varepsilon_j(\zeta^q)} \right\rbrace^{x_j}}
-\equiv{\left\lbrace \prod_{j=1}^{\mu-1}\varepsilon_j^{x_j}(\zeta^q) \right\rbrace\cdot\prod_{j=1}^{\mu-1}\left\lbrace1+qx_j\dfrac{\omega_j(\zeta)}{\varepsilon_j(\zeta^q)} \right\rbrace}
-\equiv{\left\lbrace \prod_{j=1}^{\mu-1}\varepsilon_j^{x_j}(\zeta^q) \right\rbrace\cdot\left\lbrace1+\sum_{j=1}^{\mu-1}qx_j\dfrac{\omega_j(\zeta)}{\varepsilon_j(\zeta^q)} \right\rbrace}
-\equiv{\varepsilon_q(\zeta^q)+\varepsilon_q(\zeta^q)\sum_{j=1}^{\mu-1}qx_j\dfrac{\omega_j(\zeta)}{\varepsilon_j(\zeta^q)}}
-\pmod{q^2} $$
-or
-$$ \sum_{j=1}^{\mu-1}x_j\dfrac{\omega_j(\zeta)}{\varepsilon_j(\zeta^q)}\equiv0\pmod{q} $$
-with $ \varepsilon_q^q(\zeta)\equiv\varepsilon_q(\zeta^q)\pmod{q^2} $ and $ \varepsilon_q(\zeta^q)\not\equiv0\pmod{q} $. This is a linear system of congruences, so we can search for possible exponents $ x_j $ and compute the units similarly to the system of equations $ (2) $. In the $ 401^{\text{st}} $ cyclotomic ring of integers, we obtain two linearly independent units
-$$ {\prod_{j=0}^{24}\varepsilon_{1+8j}^{2}\cdot\varepsilon_{3+8j}\cdot\varepsilon_{4+8j}\cdot\varepsilon_{5+8j}^{2}\cdot\varepsilon_{6+8j}}={(3836\eta_0+3637\eta_1+2718\eta_2+3877\eta_3+2915\eta_4+2835\eta_5+3116\eta_6+3559\eta_7)^3} $$
-and
-$$ {\prod_{j=0}^{24}\varepsilon_{1+8j}^{2}\cdot\varepsilon_{2+8j}^{2}\cdot\varepsilon_{3+8j}\cdot\varepsilon_{4+8j}^{2}\cdot\varepsilon_{7+8j}}={(-85\eta_0-118\eta_1-89\eta_2-89\eta_3-95\eta_4-107\eta_5-116\eta_6-111\eta_7)^3} $$
-with periods $ \eta_k $ of length $ f=50 $ and the primitive root $ \gamma=3\pmod{401} $. A more detailed approach to finding these units can be taken from sections $ 16.5 $ and $ 16.6 $, here.
-René Schoof states in his paper Class numbers of real cyclotomic fields of prime conductor, $ 2002 $, that he had found all factors dividing the number $ N $ (or the class number of the $ p^{\text{th}} $ real cyclotomic field) for primes $ p<10000 $ with a likelihood of $ 98\% $. If we could settle the remaining $ 2\% $, we would be able to compute many fundamental systems of units!<|endoftext|>
-TITLE: Gauge transformations in differential forms
-QUESTION [6 upvotes]: I am aware of gauge transformations and covariant derivatives as understood in Quantum Field Theory and I am also familiar with the deRham derivative for vector valued differential forms.
-I am thinking of the gauge field $A$ of the gauge group $G$ as a Lie$(G)$ valued 1-form on the manifold.
-But I can't see why a gauge transformation of $A$ by an element $g\in G$ amounts to the following change, $A \mapsto A^g = gAg^{-1} -dgg^{-1}$ (if $G$ is thought of as a matrix Lie group) or in general $A_g = Ad(g)A + g^* \omega$ (where $\omega$ is the left invariant Maurer-Cartan form on $G$ and I guess $g^*$ is the pull-back of $\omega$ along the left translation map by $g$).
-Curvature is defined as $F = dA + \frac{1}{2}[A,A]$ and using this one now wants to see why $F \mapsto F_g = gFg^{-1}$.
-Firstly, is the expression for $A_g$ a definition or is there a derivation for it?
-When I try proving this (assuming matrix Lie groups) I get stuck in multiple places; for example, what is $dA_g$? -I would be happy if someone could explain the explicit calculations and/or give a reference where such things are explained. The usual books which explain differential forms or connections on principal bundles don't seem to help with such calculations. - -REPLY [4 votes]: The method is to use the Leibniz rule in the differentiation and change the sign whenever the exterior derivative moves past an odd form. In addition the following identity must be used: $ dgg^{-1} + g dg^{-1} = 0$; and remember that the commutator is between odd forms, so it comes with a plus sign. -Here are the intermediate results: -$\frac{1}{2}[A_g, A_g] = \frac{1}{2}g[A, A] g^{-1} -[dgg^{-1}, g A g^{-1}] + dgg^{-1}\wedge dgg^{-1}$ -$dA_g = g dA g^{-1} + [dgg^{-1}, g A g^{-1}] - dgg^{-1}\wedge dgg^{-1}$ -Here are the required details: -$ d(gAg^{-1})$ - -Application of the Leibniz rule (please observe the minus sign in the last term): - -$ d(gAg^{-1}) = dg \wedge A g^{-1} + g dA g^{-1} - g A \wedge dg^{-1}$ - -Using the identities $g g^{-1} = 1$ in the first term and $ dg^{-1} = - g^{-1}dg g^{-1} $ -in the last term: - -$ = dg g^{-1} g \wedge A g^{-1} + g dA g^{-1} +g A g^{-1} \wedge dg g^{-1} $ - -Collecting the first and last term into a commutator: - -$ = g dA g^{-1} +[ dg g^{-1}, g A g^{-1} ]$ -$ d(dg g^{-1})$ - -Application of the Leibniz rule (please observe the minus sign in the last term): - -$ d(dg g^{-1}) = ddg g^{-1} - dg \wedge dg^{-1}$ - -Using the identities $dd = 0$ and again $ dg^{-1} = - g^{-1}dg g^{-1} $, we obtain: - -$ d(dg g^{-1}) = + dg g^{-1}\wedge dg g^{-1}$<|endoftext|> -TITLE: Taking trace of vector valued differential forms -QUESTION [5 upvotes]: Can anyone kindly give some reference on taking the trace of vector valued differential forms? -For example, if $A$ and $B$ are two vector valued forms, then I want to understand how/why this equation is true: -$dTr(A\wedge B) = Tr(dA\wedge B) - Tr(A\wedge dB)$ -One particular case in which I am interested is when $A$ is a Lie Algebra valued one-form on some 3-manifold. Then I would like to know what is the precise meaning/definition of $Tr(A)$ or $Tr(A\wedge dA)$ or $Tr(A\wedge A \wedge A)$? -In how general a situation is a trace of a vector valued differential form defined? -It would be great if someone could give a local coordinate expression for such traces. -Any references to learn this would be of great help. - -REPLY [3 votes]: You can define a trace on any vector space $V$ where there is a representation of $V$ on some other space $W$ just by picking a basis on $W$, defining the trace on $V$ as the matrix trace (every element of $V$ becomes a matrix with respect to the basis of $W$) and proving that under a change of bases, the trace stays the same. -Now what to do in the case of $V$-valued differential forms? First, let us assume that there is a representation of $V$ on some vector space. Without this assumption there is no meaning of a trace, I believe. And at least for finite dimensional Lie algebras, there always is a representation, the adjoint representation. -So we have a linear map $\operatorname{tr}: V \to \mathbb{R}$. -Recall that a $V$-valued differential form on $M$ is a smooth map $\omega : TM \to V$ such that $\omega$ restricted to any tangent space $T_p M$ is an element of the $V$-valued exterior algebra $\Lambda^n (T_p M, V)$ of $T_p M$.
-That is, the restriction $\omega_p$ is a completely antisymmetric map $\omega_p : T_p M \times T_p M \times \cdots \times T_p M \to V$. -By $\operatorname{tr}(\omega)$, we just mean the composition $\operatorname{tr} \circ \omega$. We just feed whatever the differential form gives us into the trace operation. It is a real valued differential form. -Now, if you also have a multiplication defined on $V$, as will be the case if there is a representation (just ordinary matrix multiplication), you can also define the wedge product $\wedge$ analogously to the real-valued case, just inserting the $V$-multiplication instead of the ordinary scalar multiplication. -As Mariano already explained, it satisfies the Leibniz equation. -Mariano also explained that tr is linear and therefore we can pull the $-$ through the trace. -To your special cases: Be careful with Lie algebra valued differential forms! There are at least two possible $\wedge$ products, depending on whether you define it via multiplication in the adjoint representation or via the Lie bracket! The difference is normally only a factor, but still one should be clear about which $\wedge$ one uses. So please clarify this.<|endoftext|> -TITLE: Principal bundles on 3-manifolds -QUESTION [8 upvotes]: If $G$ is a simply connected Lie Group then why is every $G$-bundle over an orientable 3-manifold trivial? (Why is orientability important?) - -REPLY [2 votes]: Are you sure that orientability is necessary? The result is proved in Lemma 4.1.1 here and I do not see where orientability is used.<|endoftext|> -TITLE: Proof that Pi is constant (the same for all circles), without using limits -QUESTION [65 upvotes]: Is there a proof that the ratio of a circle's diameter and the circumference is the same for all circles, that doesn't involve some kind of limiting process, e.g. a direct geometrical proof? - -REPLY [4 votes]: Let me start by claiming that this is simply "not true" (but see below if you want a proof). -In hyperbolic spaces, the ratio between the circumference and the radius is exponential. -In a round sphere, the ratio between the circumference and the radius is sinusoidal. -So, this means that the proportionality between the circumference and the radius is not something that can be easily established by means of simple geometric tools. For instance, the above examples show that you cannot prove it without the use of the fifth Euclid postulate. -Of course, the proportionality between the circumference and the radius is trivially true if you accept that the procedure of scaling a geometric figure by $\lambda$ scales all one-dimensional lengths by $\lambda$. -What you can easily do with all the standard geometric tools is prove it for polygons: for triangles that's just Thales' intercept theorem, and a polygon can be easily subdivided into triangles. -Now, if you want to use Thales' theorem for building a proportionality principle for circumferences, then you are forced to introduce limits. -Note that if you don't want to use limits, then your big problem is to define the length of a curve, rather than to prove that in Euclidean space this is scale-multiplicative. - -Finally, if you are more interested in a proof that "hides" limits (for instance for didactic purposes) here is a paper-and-scissors proof of the doubling property for circumferences: -Consider a disk of paper of radius R. Its circumference has some length L. Now, cut the disk in two halves: no one has problems in accepting that the two half circumferences have equal lengths L/2.
-If you glue the two radii of the half-disks, you get two identical cones. -Now you have to convince your audience that if you put one of these cones on the table and look at it "from the same level as the table" (i.e. you do a projection) then you see an equilateral triangle!!! -To do that, put your two cones on the table in two different ways: one with its base on the table, the other with a radius on the table: they will look the same. -This means that the circle at the base of the cone has radius R/2. -Since we know from the beginning that the base-circumference has length L/2, -we have "proved" that if the circumference of radius R has length L, then the circumference of radius R/2 has length L/2. -By changing the cone angle you get multiplicative constants different from 2 or 1/2, but now convincing your audience will be more tricky.<|endoftext|> -TITLE: Simple Complex Number Problem: $1 = -1$ -QUESTION [5 upvotes]: Possible Duplicate: --1 is not 1, so where is the mistake? - -I'm trying to understand the exact point of failure in the following reasoning: -\begin{equation*} -1 = \sqrt{1} = \sqrt{(-1)(-1)} = \sqrt{\sqrt{-1}^2\sqrt{-1}^2} = \sqrt{(\sqrt{-1}\sqrt{-1})^2} = \sqrt{-1}\sqrt{-1} = \sqrt{-1}^2 = -1. -\end{equation*} -I've been previously told that the problem is due to the square root not being a function in $\mathbb{C}$, which I found totally unhelpful. Could someone please explain the problem here in simpler terms. -Edit: -Thank you all for your comments in trying to help me understand this. I finally do. Following is the explanation of my problems in understanding this, in case it'll be of any help to anyone else. -My problem was really due to using an incorrect definition of $i$: $i = \sqrt{-1}$. While the correct definition would be: $i^2 = -1$. -My incorrect definition led me to reasoning such as (which superficially seemed to give expected results; I see now that this is incorrect, too): -\begin{equation*} -\sqrt{-9} = \sqrt{9 * (-1)} = \sqrt{\sqrt{9}^2 \sqrt{-1}^2} = \sqrt{(\sqrt{9} \sqrt{-1})^2} = \sqrt{9} \sqrt{-1} = 3i. -\end{equation*} -Instead, had I used the correct definition of $i$: -\begin{equation*} -{(xi)}^2 = x^2i^2 = -x^2 = -9, \\ -x^2 = 9, \\ -x = \pm 3. -\end{equation*} -Now, analyzing the equations in the original problem, I can see at least the following two errors: -1) In the third =, I'm relying on $-1 = {\sqrt{-1}}^2$, while I should be relying on: $-1 = (\pm\sqrt{-1})^2$ which would of course give two different branches. Hmm.. on the second reading, this isn't really a problem, as even with the two separate branches, both of them will lead to the result in the next step. -2) In the fifth =, I'm relying on $\sqrt{i^4} = i^2$, which would be correct if $i$ were a non-negative number in $\mathbb{R}$. But as $i$ is the imaginary unit, in $\mathbb{C}$ we have $\sqrt{i^4} = \sqrt{1} = 1$. - -REPLY [4 votes]: You need to pay attention to branches of multivalued functions, e.g. see the Wikipedia explanation here. Similar less-trivial questions often arise when symbolic mathematical software systems exhibit bugs due to failure to stay on principal branches, e.g. see this thread where John McKay asks what your favorite system returns for $(-1)^{5/9} - (-1)^{2/9} - (-1)^{8/9}$. You may find such discussions instructive. -For the reader who may be interested in algorithms see for example -Thomas Breuer. sam@math.rwth-aachen.de -Integral Bases for Subfields of Cyclotomic Fields. -AAECC 8, 1997, 279-289 -https://doi.org/10.1007/s002000050065 -Abstract.
Integral bases of cyclotomic fields are constructed that allow to -determine easily the smallest cyclotomic field in which a given sum of roots of -unity lies. For subfields of cyclotomic fields integral bases are constructed -that consist of orbit sums of Galois groups on roots of unity. These bases are -closely related to the bases of the enveloping cyclotomic fields mentioned above. -In both situations bases over the rationals and over cyclotomic fields are treated.<|endoftext|> -TITLE: Converse To Quotient Manifold Theorem [Exercise in Lee Smooth Manifolds] -QUESTION [10 upvotes]: I would like help with the following problem (chapter 9, #4) from Lee's Smooth Manifolds [it's not homework, I'm reading it and I got stuck on this one]: -If a Lie group $G$ acts smoothly and freely on a smooth manifold $M$ and the orbit space $M/G$ has a smooth manifold structure such that the quotient map $\pi: M\to M/G$ is a smooth submersion, then $G$ acts properly. -It's kind of a converse to the standard theorem about quotienting a manifold by a group action. -Any hints/help? - -REPLY [2 votes]: The following is an addendum to Sam's answer, meant to fill in missing details. -Part 1. Suppose that $G\times M\to M$ is such that the quotient $B=M/G$ has the structure of a smooth manifold such that the quotient map $q: M\to B$ is a submersion. Then $q: M\to B$ is a topological principal $G$-fiber bundle. -Proof. Pick a $G$-orbit $Gx\subset M$. Our goal is to find a $G$-invariant neighborhood $W$ of $Gx$ in $M$ such that $W$ is $G$-equivariantly homeomorphic to a product $U\times G$, where $U$ is an open subset of $B$, and $G$ acts trivially on the first factor and by left multiplication on the second factor. Equivariance of a homeomorphism $f: U\times G\to W$ means: -$$ -f(g(u,h))=g f(u,h) -$$ -for all $u\in U, g\in G, h\in G$. -Since $q$ is a submersion, there exists $V$, a neighborhood of $x$ in $M$, and a diffeomorphism $g: V\to U\times {\mathbb R}^n$, where $U=q(V)$ and $q|_V= p_U\circ g$, where $p_U: U\times {\mathbb R}^n\to U$ is the projection to the first factor, $g(x)=(q(x), 0)$. Here $n+dim(B)=dim(M)$. -Set $U':= g^{-1}(U\times \{0\})$. Such a subset $U'$ of $V$ is called a slice through $x$ for the $G$-action on $M$. By the construction, every $G$-orbit intersects $U'$ in at most one point and $U'$ is a smooth submanifold of dimension $d=dim(B)$ in $M$. Now, consider the orbit map -$$ -f: G\times U'\to M, \quad f(g,y)= gy. -$$ -This map is smooth and 1-1 (since each $G$-orbit intersects $U'$ at most once). Moreover, $dim(G\times U')=dim(M)$. Hence, by the invariance of domain theorem, $f$ is a homeomorphism to its image, which is an open subset $W\subset M$. Clearly, $W$ contains $Gx$. Moreover, by the construction, $f$ is $G$-equivariant. -Remark. i. With more work, one can prove that $f$ is also a diffeomorphism, but I do not need this. -ii. The map $f$ as above plays the role of "spreading $V$ out'' in Sam's answer. -This concludes the proof of Part 1. -Part 2. If $M\to B$ is a principal $G$-fiber bundle (where $G$ is a Lie group and $M, B$ are topological manifolds), then the $G$-action on $M$ is proper. -This properness property holds even in much greater generality, see Lemma B in my answer here.<|endoftext|> -TITLE: Does $9^{2^n} + 1$ always have a prime factor larger than $40$? -QUESTION [11 upvotes]: I'm trying to find out for which natural numbers $n$ the number $9^n + 1$ has all of its prime factors less than $40$.
If I can provide a positive answer to my title question, then I will have a proof that only $n = 1$ causes $9^n + 1$ to have all of its prime factors less than $40$. Thank you for any help. - -REPLY [19 votes]: Here is a proof with very few computations involved. -It is well known that -If $\displaystyle x^2 + 1 \equiv 0 \pmod{p}$ for some prime $\displaystyle p$, then $\displaystyle p \equiv 1 \pmod{4}$. -Perhaps a little less well known than the above is that -If $\displaystyle x^4 + 1 \equiv 0 \pmod{p}$ for some prime $\displaystyle p$, then $\displaystyle p \equiv 1 \pmod{8}$. -Since $\displaystyle 9^{2^n}+1 = (3^{2^{n-1}})^4 + 1$ we must have that the only possible odd prime < 40 which is a candidate is $\displaystyle 17$. -Thus we need to check the residues only for $\displaystyle 17$. -Now since $\displaystyle 3$ is a primitive root of $\displaystyle 17$, we have that $\displaystyle 3^{8} + 1 \equiv 0 \pmod{17}$. Since $\displaystyle 3^{2^3} \equiv -1 \pmod{17}$, we have that $\displaystyle 3^{2^n} + 1 \equiv 2 \pmod{17}$ for all $\displaystyle n > 3$. -Thus we only need to check $\displaystyle 3^{2^3} +1 = 9^{2^2} + 1 = 2\times17\times193$, which has a factor $\displaystyle > 40$, and we are done. - -Interestingly, for any number $N$, we can show that at most a finite number of numbers of the form $\displaystyle 9^{2^n}+1$ have all their prime factors < $\displaystyle N$. -This is because for $\displaystyle m < n$, $\displaystyle 9^{2^m}+1$ divides $\displaystyle 9^{2^{m+1}}-1$, which divides $\displaystyle 9^{2^n}-1$; since $\displaystyle \gcd(9^{2^n}-1, 9^{2^n}+1)=2$, this implies that $\displaystyle (9^{2^n} + 1, 9^{2^m} + 1) = 2$. -i.e. any two numbers in the sequence have a gcd of $\displaystyle 2$. -Thus the number of numbers with all prime factors < $\displaystyle N$ is finite. -(Another proof of the same fact is that if $\displaystyle 9^{2^m}+1 \equiv 0 \pmod{p}$ then $\displaystyle 9^{2^n}+1 \equiv 2 \pmod{p}$ for $n > m$) -In fact, this is a different proof that the number of primes is infinite!<|endoftext|> -TITLE: Minimum of the Gamma Function $\Gamma (x)$ for $x>0$. How to find $x_{\min}$? -QUESTION [21 upvotes]: The $\Gamma (x)$ function has just one minimum for $x>0$. This result uses some -properties of the gamma function: - -$\Gamma ^{\prime \prime }(x)>0$ and $\Gamma (x)>0$ for all $x>0$ -$\Gamma (1)=\Gamma (2)=1$. - -Observing the following graph (created in SWP) of $y=\Gamma (x)$, this minimum is near $x=3/2$, but likely $\min \Gamma (x)\neq \Gamma \left( 3/2\right) =\dfrac{1}{2}\Gamma \left( 1/2\right) =\dfrac{1}{2}\sqrt{\pi }$. - -I think that it is not possible to find analytically the exact value of $x_{\min }$, even by converting to an adequate problem in the interval $]0,1]$ and using the functional equation $\Gamma (x+1)=x\Gamma (x)$ and the reflection formula -$\Gamma (p)\Gamma (1-p)=\dfrac{\pi }{\sin p\pi }\qquad$ ($0\lt p\lt 1$) -Question: -a) Which is the best way to find $\min_{[1,2]}\Gamma (x)$ and does $x_{\min }$ lie in $[1,3/2]$ or in $[3/2,2]$? -b) Is there some useful series expansion of $\Gamma (x)$? -c) Which numerical method do you suggest? - -Edit: Due to the shape of $\Gamma (x)$ I thought of the one-dimensional Davies-Swann-Campey method of direct search for unconstrained optimization, which approximates a function near a minimum by successive approximating quadratic polynomials. - -REPLY [2 votes]: You have -$$\frac d {dx} \Gamma(x)=\Gamma (x)\, \psi (x)$$ Knowing that the solution is close to $\frac 32$, make a Taylor expansion around this value.
This would give -$$y=\psi (x)=\psi^{(0)} \left(\frac{3}{2}\right)+\left(\frac{\pi ^2}{2}-4\right) \left(x-\frac{3}{2}\right)+\frac{1}{2} \psi^{(2)}\left(\frac{3}{2}\right)\left(x-\frac{3}{2}\right)^2+O\left(\left(x-\frac{3}{2}\right)^3\right)$$ Now, using series reversion -$$x=\frac{3}{2}+t-\frac{ \psi ^{(2)}\left(\frac{3}{2}\right)}{\pi^2-8}t^2+O\left(t^3\right)\qquad \text{where} \qquad t=\frac{y-\psi ^{(0)}\left(\frac{3}{2}\right)}{\frac{\pi ^2}{2}-4}$$ Making $y=0$ gives $t=-\frac{2 (2-\gamma -2 \log (2))}{\pi ^2-8}$ leading to the estimate -$$x\sim\frac 32 +\frac{2(\gamma +2 \log (2)-2)}{\left(\pi ^2-8\right)^3} A$$ where -$$A=28 \zeta (3) (\gamma +2 \log (2)-2)+128-32 \gamma -16 \pi ^2+\pi ^4-64 \log (2)$$ -$$x \approx 1.461632068$$ while the "exact" solution is $1.461632145$.<|endoftext|> -TITLE: How do you formally prove that rotation is a linear transformation? -QUESTION [29 upvotes]: The fact that rotation about an angle is a linear transformation is both important (for example, this is used to prove the sine/cosine angle addition formulas; see How can I understand and prove the "sum and difference formulas" in trigonometry?) and somewhat intuitive geometrically. However, even if this fact seems fairly obvious (at least from a diagram), how does one turn the picture proof into a formal proof? On a related note, it seems likely that many formal proofs using a diagram will end up relying on Euclidean geometry (using angle/side congruence properties), but isn't one of the points of linear algebra to avoid using Euclidean geometry explicitly? - -REPLY [3 votes]: As an exercise in trigonometry. - -The transformation of rectangular co-ordinates $(x,y)$ of $P$ into the -rectangular co-ordinates $(x',y')$ of $P$ by a rotation of the axes is given by: -$x=x^{\prime }\cos \phi -y^{\prime }\sin \phi $ -$y=x^{\prime }\sin \phi +y^{\prime }\cos \phi $. -This transformation is invertible. -Edit: This way of proving the transformation of rectangular co-ordinates does not "avoid using Euclidean geometry explicitly" (in this case trigonometry), as written in the question. -However the same method can be used to derive the transformation of rectangular into spherical co-ordinates, which is a non-linear transformation.<|endoftext|> -TITLE: Approximation theorems -QUESTION [14 upvotes]: The Weierstrass approximation theorem, on approximating continuous functions on a compact space by polynomials, is well known. As far as I know, there are some variants of this theorem, e.g. Stone-Weierstrass, which refers not only to polynomials as approximating functions. Where could I find these Weierstrass-like approximation theorems? On-line references are OK, but one might also point to some books. -Thanks in advance, -Lucian - -REPLY [2 votes]: If you want some technical challenge (it certainly is one for me) you can have a look at A. Pinkus, N-widths in Approximation Theory, Springer-Verlag, New York, 1980.<|endoftext|> -TITLE: How to approximate/connect two continuous cubic Bézier curves with/to a single one? -QUESTION [10 upvotes]: I subdivide a cubic Bézier curve at a given t value using de Casteljau's algorithm, which yields two cubic Bézier curves. Afterwards I "scale" the second curve (proportionally). -I'd like to reconnect or approximate the two curves to/with a single curve in a third step. Is that possible? - -This illustrates what I'm intending to do. -I guess reversing de Casteljau's algorithm won't work because I don't have one of the intermediate points.
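-For concreteness, here is a minimal Python sketch of the two steps I'm performing (the helper names are my own, purely illustrative; the point names match the update below):
-
-    def lerp(a, b, t):
-        # linear interpolation between 2D points a and b
-        return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)
-
-    def subdivide(p0, p1, p2, p3, t):
-        # de Casteljau: split the cubic at parameter t into two cubics
-        q1, i1, r2 = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
-        q2, r1 = lerp(q1, i1, t), lerp(i1, r2, t)
-        k = lerp(q2, r1, t)  # the division point on the curve
-        return (p0, q1, q2, k), (k, r1, r2, p3)
-
-    def scale_about(k, p, factor):
-        # p' = k + (p - k) * factor
-        return (k[0] + (p[0] - k[0]) * factor, k[1] + (p[1] - k[1]) * factor)
-
-    left, right = subdivide((0, 0), (1, 2), (3, 2), (4, 0), 0.4)
-    k, r1, r2, p3 = right
-    scaled = (k,) + tuple(scale_about(k, p, 1.5) for p in (r1, r2, p3))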
-If there are multiple approaches, I'd favor a simpler (faster to compute) strategy. -Thanks in advance. -Update: -Maybe this figure makes it more clear; it shows all the points I have: - -The original cubic Bézier curve is defined by the points $ p_{0}, p_{1}, p_{2}, p_{3} $. -It is divided at a given $ t $ (timing) value using de Casteljau's algorithm, which yields the points $ q_{1}, r_{2}, i_{1}, q_{2}, r_{1}, k $ where $ k $ is the division point. -The two subcurves are defined by the control points $ p_{0}, q_{1}, q_{2}, k $ and $ k, r_{1}, r_{2}, p_{3} $, respectively. -The scaled second subcurve is defined by the points $ k, {r}' _{1}, {r}' _{2}, {r}' _{3} $. -Scaling is applied as follows: $ {p}' = k + (p - k) \cdot factor $ for $ r_{1}, r_{2}, p_{3} $. - -REPLY [2 votes]: You can calculate the distances $|q_2 k|$ and $|k r'_1|$ as leftLength and rightLength. -We can imagine we have got a cubic curve, which we subdivide at $k$; but what is the $t$ value? -We can calculate $t$ from de Casteljau's algorithm, that is $t = \text{leftLength} / (\text{leftLength} + \text{rightLength})$. -We also know that the new curve's $p_1'$ is along the $p_0 \to p_1$ direction, and $p_2'$ is along $p_3 \to p_2$, so we can use $t$ to calculate the new curve's $p_1'$ and $p_2'$.<|endoftext|> -TITLE: Is the Gamma function superadditive? -QUESTION [7 upvotes]: A function $f$ is superadditive if $f(x) + f(y) \le f(x+y)$. The question is: -Does a real number $a$ exist such that for all -real numbers $x, y\ \ge \ a $ -$$ \Gamma(x) + \Gamma(y) \le \Gamma(x+y) \quad ?$$ - -REPLY [9 votes]: $a = 2$ will do, because (letting $x \ge y$ wlg), -$\Gamma(x+y) \ge \Gamma(x+2) = (x+1)x \Gamma(x) \ge 6 \Gamma(x) \ge \Gamma(x) + \Gamma(x) \ge \Gamma(x) + \Gamma(y)$.<|endoftext|> -TITLE: Parametrizing implicit algebraic curves -QUESTION [25 upvotes]: Back in the day, I was absolutely enthralled by the study of plane curves and their properties (I have Lockwood and Zwikker to thank). I learned early on that for the purposes of generating plots on a computer (and for that matter deducing equations of "derived curves" and determining other special properties), one should try to find a representation in parametric equations for your plane curve. -As I recall, in dealing with algebraic curves represented by an implicit Cartesian equation, I knew of only three tricks to derive parametric equations from an implicit equation (listed in decreasing order of effectiveness; I note that I did all these investigations even before I knew computer algebra systems existed): -1: Convert to polar coordinates to express in the form $r=r(\theta)$; the parametric equations are then -$\begin{align*}x&=r(\theta)\cos\,\theta\\y&=r(\theta)\sin\,\theta\end{align*}$ -2: The $y=mx$ "trick" (I never did get to learn the formal name for this technique); to use the implicit equation for the folium of Descartes as an example: -$x^3+y^3=3xy$ -$x^3+(mx)^3=3x(mx)$ -and then by solving for $x$ and using the relation $y=mx$ again, -$\begin{align*}x&=\frac{3m}{1+m^3}\\y&=\frac{3m^2}{1+m^3}\end{align*}$ -(I remember this worked especially well for curves whose (only?) singular points are at the origin, but not very well for other curves; can anybody explain why?) -3: Randomly replacing $x$ or $y$ with any of the six trigonometric functions (maybe multiplied by a convenient constant), and hoping that I can easily solve for the other variable. For instance, I managed to derive the parametric equation for the bicorn and the Dürer conchoid in this way.
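-In case it is useful, the $y=mx$ substitution of trick 2 is easy to reproduce in a computer algebra system these days; a small sympy sketch for the folium (my own code, purely illustrative):
-
-    from sympy import symbols, solve, Eq, simplify
-
-    x, y, m = symbols('x y m')
-    folium = x**3 + y**3 - 3*x*y
-    # substitute y = m*x; the double root x = 0 comes from the singular
-    # point at the origin, the remaining root traces out the parametrization
-    roots = solve(Eq(folium.subs(y, m*x), 0), x)
-    x_m = [r for r in roots if r != 0][0]
-    print(simplify(x_m), simplify(m*x_m))  # 3*m/(m**3 + 1), 3*m**2/(m**3 + 1)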
-Probably the only other thing I learned way after I had moved on to other things was that elliptic curves can for instance be represented as parametric equations involving the Weierstrass ℘ function or the elliptic exponential, but this is apparently limited to elliptic curves only. -Now for my question: did I miss any other useful (general?) methods for turning an implicit Cartesian equation for an algebraic curve into parametric equations? - -Addendum, 8/7/2011 -I didn't want to ask a separate question, so: are there systematic methods for parametrizing a plane algebraic curve in terms of (Jacobi or Weierstrass) elliptic functions? For instance, we find here that the Fermat cubic $x^3+y^3=a^3$ can be parametrized in terms of Weierstrass functions, in addition to the elliptic curve example I gave previously. I've also encountered in my readings that the Cartesian ovals can also be parametrized with Weierstrass functions, but I have been unable to find an explicit construction of the parametric equations. - -REPLY [16 votes]: This is not intended as a full answer, but as a way to decide whether it is feasible to parameterize a plane curve in an easy way. -The two examples you give, namely the folium and the bicorn, are examples of "irreducible curves of geometric genus zero". For the purpose of this discussion you can take "irreducible curve" to mean the locus of zeros of an irreducible polynomial in two variables. Geometric genus zero is harder to define, though see this question and the various answers, especially the answer of Matt E. In the linked answer, the "genus" that is defined is the "arithmetic genus", whereas we are interested in the "geometric genus". -The two genera are not unrelated, and the discrepancy between them is concentrated at the singular points of the curve. A singular point of a plane curve $C$ given as the vanishing set of a polynomial $f(x,y)$ is a point $p$ in the plane that lies on the curve (i.e. $f(p)=0$) and where both partial derivatives of $f$ vanish (i.e. $\partial f/\partial x(p)=\partial f/\partial y(p)=0$). -Thus, while the arithmetic genus of an (irreducible) plane curve $C$ of degree $d$ is simply $p_a:=(d-1)(d-2)/2$, the geometric genus of $C$ is an integer less than or equal to $p_a$. I will not dwell too much on how to compute the geometric genus, for the moment I will simply say that every singular point decreases the arithmetic genus by at least one (counting also the "esoteric" singular points at infinity, as you need to explain the bicorn). There are algorithmic ways to decide the drop in genus that every singular point determines and thus the computation of the geometric genus can be taken as "easy". -Over the complex numbers. It is a fact that, whenever the geometric genus of an irreducible curve $C$ is zero then you can find a parameterization of $C$ by rational functions of a single variable $t$, provided that you are thinking about the complex solutions of your curve. This parameterization can be found algorithmically, and while there are various tricks to use, the substitution $y=mx$ is at the core: depending on context it might be called "projection" or "blow up". The other trick is that you can "reembed" your curve by considering the vector space whose coordinates are all the monomials of some given degree $n$ and take the image of the curve under the map that sends a point in the plane to the $N$-tuple of all the evaluations of the monomials you have chosen.
Instead of being more explicit, let me give an example: if the curve is the $x$-axis in the plane, and you choose the monomials $1,x,y,x^2,xy,y^2$ to reembed, then the image of the $x$-axis (i.e. the locus where $y=0$) is the set of points with coordinates $(1,x,0,x^2,0,0)$ in a 6-dimensional space. Forgetting the useless coordinates (this is an instance of projection) gives us a plane curve parameterized by $(x,x^2)$, namely the parabola $x_1^2=x_2$. In this example we started with a parameterization and ended with a parameterization. Being more refined you can also start with an implicit equation and end with (lots of) implicit equations in more variables. -Once you have projections and reembeddings you can play them against one another: embed your curve in a large enough space. Project away from singular points until you get back to the plane; reembed and reproject. There is a way of making sure that "singular points get resolved" by doing this, so that eventually you get to a curve having no singular points. Once you are there, further projections will do the trick of getting you to a line. -Over the real numbers. Of course things are somewhat trickier. Certainly there are two possibilities: the folium, where the rational parameterization you had over the complex numbers "just works" over the reals; and the slightly more elaborate case of the bicorn, where you have a rational parameterization in terms of trigonometric functions. Again, by modification of the tricks discussed above, you can get every irreducible curve of genus zero to become a plane conic. Once in this form, you will find a rational parameterization if and only if you can find a point, and deciding if there is a point reduces to the computation of whether a matrix constructed out of the coefficients of the equation has eigenvalues of different signs or not. The distinction between rational parameterization in terms of a variable or in terms of trigonometric functions has to do with personal taste. -So far, this has dealt with curves of genus zero and I will not say much about higher genus. If the genus is one, there is, as you mentioned, the possibility of using the Weierstrass $\wp$. Here things become more complicated: the function used to parameterize everything is less familiar, you will probably need some transcendental numbers to get this parameterization to work, and the topological description of the space of solutions gets more in the way. In the genus zero case, the only thing that mattered was: is it empty, is it not empty. In the genus one case there is "one more" possibility. Of course, higher genera become harder and harder, and I will stop here, given that this is already way too much for this!<|endoftext|> -TITLE: Positive definite Hessians from strictly convex functions -QUESTION [9 upvotes]: Let $f: D \to \mathbb{R}\ $ be a function on a non-singular, convex domain $D \subseteq \mathbb{R}^d$ and let us assume the second-order derivatives of $f$ exist. It is well known that $f$ is convex if and only if its Hessian $\nabla^2 f(x)$ is positive semi-definite for all $x \in D$. It is also known that if $\nabla^2 f(x)$ is positive definite for all $x \in D$, we may conclude that $f$ is strictly convex (for a reference, see Boyd and Vandenberghe, 2004). -On the other hand, if $f$ is strictly convex, we still merely know that $\nabla^2 f(x)$ is positive semi-definite for all $x \in D$. That is, there may be $x \in D$ such that $y^T \nabla^2 f(x) y = 0\ $ for some $y\not=0$. -As an example, consider $f(x)=x^4$.
In this case, $f$ is strictly convex, but $f''(x)=12x^2$ and, hence, $yf''(x)y=0$ for $x=0$ and $yf''(x)y>0$ for all $x\not=0$. -Yet, these points that ruin the complete positive definiteness seem to be very sparsely distributed within $D$. So, my question is as follows: - -If $f$ is strictly convex, how can we characterize the set of points $X$ for which - $\nabla^2f(x)$ is not positive definite for $x \in X$, and - $\nabla^2f(x)$ is positive definite for $x \in D \setminus X$? - -That is, has such a set $X$ been investigated before, what properties are known, and where can I learn more about it? Any reference is welcome. -In particular, my guess is that one can state the following: - -Conjecture 1: The set $X$ is merely a discrete subset of $D$. - - -EDIT: Since Conjecture 1 has obviously been disproved by George Lowther below, allow me to restate my guess as the following (less bold) statement: - -Conjecture 2: The set $X$ does not contain a non-empty open ball. - -or, even more cautious: - -Conjecture 3: The set $D \setminus X$ does contain a non-empty open ball. - -REPLY [7 votes]: I can't fully characterize such sets, but can say one thing - the conjecture is false. -For a counterexample in one dimension, choose a closed and nowhere dense set X (such as the Cantor middle thirds set, which is not discrete) and let $g(x)=\min\{\vert x-y\vert\colon y\in X\}$ be the distance of a point x from X. Integrate it twice, $f(x)=\int_0^x\int_0^yg(z)\,dz\,dy$, to get a strictly convex function whose second derivative $f^{\prime\prime}=g$ vanishes precisely on X. -In fact, you could take X to be the Smith-Volterra-Cantor set, which has positive Lebesgue measure. -This can be used to define a strictly convex function on $\mathbb{R}^2$, $\tilde f(x,y)=f(x)+f(y)$, whose Hessian vanishes completely on a set $X\times X$ of positive Lebesgue measure.<|endoftext|> -TITLE: Two ways of defining the gamma function $\Gamma (x)$. How to show they are equivalent? -QUESTION [5 upvotes]: In the Portuguese book Análise Matemática (Mathematical Analysis) by C. Sarrico, it is proved that there exists -$$\displaystyle\lim_{n\rightarrow +\infty}f_{n}(x)=\lim_{n\rightarrow +\infty }e^{\log f_{n}(x)},\qquad(1)$$ -where -$$f_{n}(x)=\dfrac{n!n^{x}}{x(x+1)(x+2)\cdots (x+n)}.\qquad(2)$$ -And it is stated there that -$$\displaystyle\lim_{n\rightarrow +\infty }f_{n}(x)=\Gamma (x)=\int_{0}^{+\infty}t^{x-1}e^{-t}dt\qquad x>0.\qquad(3)$$ -The author writes that a proof can be found in "Principles of Mathematical Analysis" by W. Rudin. Since I don't have it, I ask the following: -Question: What is a sketch of such a proof, or of other proofs of the same result? - -REPLY [5 votes]: I don't have a copy of Rudin, but can give you a proof of this. There are various ways of playing about with the integral form to get what you want, although I'm not sure what the cleanest method is. -One way is to use the fact that $1_{\{t\le n\}}(1-t/n)^n\to e^{-t}$ to write $\Gamma(x)$ as -$$ -\begin{align} -\Gamma(x)&=\lim_{n\to\infty}\int_0^nt^{x-1}(1-t/n)^{n}\,dt\\ -&=\lim_{n\to\infty}n^x\int_0^1s^{x-1}(1-s)^n\,ds. -\end{align} -$$ -To prove that the first integral commutes with the limit, you could use the dominated convergence theorem. The second integral is just using the substitution $t=ns$. -It needs to be shown that this integral is equal to your function $f_n(x)$. In fact, I recognize both the function $f_n$ and the integral as beta functions.
-$$ -\int_0^1s^{x-1}(1-s)^n\,ds=B(x,n+1) -$$ -To find the explicit form for this, you can either use $B(x,y)=\Gamma(x)\Gamma(y)/\Gamma(x+y)$ (a proof is given on the Wikipedia page) or repeatedly apply the identity $B(x,n+1)=B(x+1,n)n/x$ (which follows from integration by parts) and $B(x,1)=1/x$. -Note: You can also apply a very similar argument to the one above using the limit $(1+t/n)^{-n}\to e^{-t}$, which I did in my first version of this answer. Using $(1-t/n)^n\to e^{-t}$ seems cleaner though, so I edited the answer accordingly. -An alternative method is to write -$$ -\begin{align} -\Gamma(x) &= \frac{\Gamma(x+n+1)}{\Gamma(n+1)}\frac{\Gamma(n+1)\Gamma(x)}{\Gamma(x+n+1)}\\ -&= \frac{\Gamma(x+n+1)}{\Gamma(n+1)}\frac{n!}{x(x+1)\cdots(x+n)}. -\end{align} -$$ -Then, the limit you need would follow as long as it can be shown that $\Gamma(x+n+1)/\Gamma(n+1)$ approaches $n^x$ asymptotically as $n\to\infty$. -$$ -\Gamma(n+x+1)=\int_0^\infty t^x t^ne^{-t}\,dt -$$ -Then, the required expression depends on showing that, in the limit $n\to\infty$, to leading order, the integral only contributes for values of $t/n$ close to 1 (you can see that $t^ne^{-t}$ has its maximum at $t=n$).<|endoftext|> -TITLE: Uniqueness of $\sin x$ and $\cos x$? (Putnam Exam problem) -QUESTION [7 upvotes]: I'm struggling with this former Putnam Exam problem: -Suppose $f$ and $g$ are nonconstant, differentiable, real-valued functions on $R$. Furthermore, suppose that for each pair of real numbers $x$ and $y$, $f(x + y) = f(x)f(y) - g(x)g(y)$ and $g(x + y) = f(x)g(y) + g(x)f(y)$. If $f'(0) = 0$, prove that $(f(x))^2 + (g(x))^2 = 1$ for all $x$. -Right. So obviously, $f(x) = \cos x$ and $g(x) = \sin x$ satisfy the conditions and also the conclusion of the problem. But are these the unique such functions, and if so, how to prove it? And if not, then how to prove the conclusion otherwise? - -REPLY [11 votes]: Here is a different way. -We have -$f(x+y) = f(x) f(y) - g(x) g(y)$ -Differentiate wrt $y$ -$f'(x+y) = f(x)f'(y) - g(x) g'(y)$, then put $y = 0$ and use $f'(0) = 0$. -$f'(x) = -g(x) g'(0)$ -Similarly we get -$g'(x) = f(x) g'(0)$ -Thus $f(x)f'(x) + g(x)g'(x) = 0$ -Thus the function $f^{2}(x) + g^{2}(x)$ is constant, as its derivative is zero. -Now -$f(0) = f^2(0) - g^{2}(0)$ -and -$g(0) = 2f(0)g(0)$ -Squaring and adding both we get -$f^{2}(0) + g^{2}(0) = (f^{2}(0) + g^{2}(0))^2$ -Now if $f^{2}(0) + g^{2}(0) = 0$ then because $f^{2}(x) + g^{2}(x)$ is a constant, we get $f(x) = 0$ which implies $f$ is constant. -Thus $f^{2}(0) + g^{2}(0) = 1$ and hence $f^{2}(x) + g^{2}(x) = 1$<|endoftext|> -TITLE: the number of loops on lattice? -QUESTION [12 upvotes]: Walking on a lattice. The number of distinct paths from $(0,0)$ to $(m,n)$ using north and east steps is the binomial coefficient -$C(m+n,m)$. -Now suppose the walker needs to go back to $(0,0)$ using south and west steps, without passing through any previously visited point. Then what is the number of distinct loops walking from $(0,0)$ to $(m,n)$ and then returning to $(0,0)$? Is there an algebraic expression for this? - -btw: I asked this question before, but have not gotten an answer yet. Maybe I can get a good answer here. - -REPLY [12 votes]: The number of loops is just the number of pairs of non-intersecting paths s.t. the first one goes from (0,1) to (m-1,n) and the second one goes from (1,0) to (m,n-1). -Non-intersecting paths on a lattice are counted by some determinant formula.
In this case it's just $\det\left(\begin{matrix}\binom{m+n-2}{m-1}&\binom{m+n-2}{m-2}\\ \binom{m+n-2}{n-2}&\binom{m+n-2}{n-1}\end{matrix}\right)=\binom{m+n-2}{m-1}^2-\binom{m+n-2}{m-2}\binom{m+n-2}{n-2}$. -It's not hard to prove this formula directly: a pair (path from (0,1) to (m-1,n); path from (1,0) to (m,n-1)) either forms a loop without intersection or (if the paths intersect) can be (canonically) identified with a pair (path from (1,0) to (m-1,n); path from (0,1) to (m,n-1)). - -Upd. quantumelixir asked for a more detailed explanation. Here it is. - -The number of (monotonic) lattice paths from $(a,b)$ to $(a',b')$ is $\binom{(a'-a)+(b'-b)}{a'-a}$. -Any loop can be decomposed into 2 paths: the first one going from $(0,1)$ to $(m-1,n)$, and the second one going from $(1,0)$ to $(m,n-1)$. -There are $\binom{m+n-2}{m-1}$ paths of each type. -But not every such pair gives a loop: we need to count only pairs that don't intersect; or, equivalently, we need to count the number $I$ of pairs of such paths s.t. they do intersect — the answer to the original question will be $\binom{m+n-2}{m-1}^2-I$. -There is an obvious bijection between the set of intersecting pairs (path $(0,1)\to(m-1,n)$, path $(1,0)\to(m,n-1)$) and the set of intersecting pairs (path $(1,0)\to(m-1,n)$, path $(0,1)\to(m,n-1)$) — namely, “go by the first path (of the pair) till the (first) intersection point, then go by the second path”. -So $I$ is the number of intersecting pairs (path $(1,0)\to(m-1,n)$, path $(0,1)\to(m,n-1)$). But any such pair is intersecting! -So $I$ is just $\binom{m+n-2}{m-2}\binom{m+n-2}{n-2}$. And the final answer is $\binom{m+n-2}{m-1}^2-\binom{m+n-2}{m-2}\binom{m+n-2}{n-2}$.<|endoftext|> -TITLE: Topological properties preserved by continuous maps -QUESTION [26 upvotes]: A continuous function does not always map open sets to open sets, but a continuous function will map compact sets to compact sets. One could make a list of such preservations of topological -properties by a continuous function $f$: -$$ f( \mathrm{open} ) \neq \mathrm{open} \;,$$ -$$ f( \mathrm{closed} ) \neq \mathrm{closed} \;,$$ -$$ f( \mathrm{compact} ) = \mathrm{compact} \;,$$ -$$ f( \mathrm{convergent \; sequence} ) = \mathrm{convergent \; sequence} \;.$$ -Could you please help in extending this list? -(And correct the above if I've erred!) -Edit. Thanks for the several comments and answers extending my list. -I was hoping that I could see some common theme among the properties preserved by a continuous -mapping, separating those that are not preserved. But I don't see such a pattern. -If anyone does, I'd appreciate a remark. Thanks! - -REPLY [3 votes]: The continuous image of a connected set is connected. -The continuous image of a complete set need not be complete. -Continuity does not preserve Cauchy sequences, unless the map is uniformly continuous.<|endoftext|> -TITLE: When does the product of two polynomials = $x^{k}$? -QUESTION [15 upvotes]: Suppose $f$ and $g$ are two polynomials with complex coefficients (i.e. $f,g \in \mathbb{C}[x]$). -Let $m$ be the order of $f$ and let $n$ be the order of $g$. -Are there some general conditions where -$fg= \alpha x^{n+m}$ -for some non-zero $\alpha \in \mathbb{C}$? - -REPLY [12 votes]: We don't need the strong property of UFD. If $\rm D$ is a domain, then $\rm x$ is prime in $\rm D[x]$ (by $\rm D[x]/x \cong D$ a domain), and products of primes factor uniquely in every domain (same simple proof as in $\Bbb Z$).
In particular, the only factorizations of the prime power $\rm x^i$ are $\rm \,x^j x^k,\ i = j+k\ $ (up to associates as usual). This fails over non-domains, e.g. $\,\rm x = (2x+3)(3x+2) \in \mathbb Z/6[x].$ - -REPLY [10 votes]: Yes, the intuitively evident ones: all other terms in $f$ and $g$ must vanish. To see this, note that the product of the constant terms of $f$ and $g$ equals the constant term of $fg$, which is zero, whence at least one of these polynomials is a multiple of $x$. Without any loss of generality assume it is $f$. Then -$$fg = x\, \left(\frac{f}{x}\right) g,$$ -implying $(f/x) g$ is a multiple of $x^{n+m-1}$. By induction this reduces us to the case $n+m=0$, which is trivial (because $f$ and $g$ then have no other terms). QED.<|endoftext|> -TITLE: Continued fraction for $\frac{1}{e-2}$ -QUESTION [22 upvotes]: A couple of years ago I found the following continued fraction for $\frac1{e-2}$: -$$\frac{1}{e-2} = 1+\cfrac1{2 + \cfrac2{3 + \cfrac3{4 + \cfrac4{5 + \cfrac5{6 + \cfrac6{7 + \cfrac7{\cdots}}}}}}}$$ -from fooling around with the well-known continued fraction for $\phi$. Can anyone here help me figure out why this equality holds? - -REPLY [24 votes]: Euler proved in "De Transformatione Serierum in Fractiones Continuas" Reference: The Euler Archive, Index number E593 (On the Transformation of Infinite Series to Continued Fractions) [Theorem VI, §40 to §42] that -$$s=\cfrac{1}{1+\cfrac{2}{2+\cfrac{3}{3+\cdots }}}=\dfrac{1}{e-1}.$$ -Here is an explanation of how he proceeded. -New Edit in response to @A-Level Student's comment. I transcribed the following assertion from the available translation to English of Euler's article. Now I checked the original paper and corrected equation (1b). -He stated that "we are able to demonstrate without much difficulty, that if -$$\cfrac{a}{a+\cfrac{b}{b+\cfrac{c}{c+\cdots }}}=s,\tag{1a}$$ -then -$$a+\cfrac{a}{b+\cfrac{b}{c+\cfrac{c}{d+\cdots }}}=\dfrac{s}{1-s}.\text{"}\tag{1b}$$ -Since, in this case, we have $s=1/(e-1)$, $a=1,b=2,c=3,\ldots $ it follows -$$1+\cfrac{1}{2+\cfrac{2}{3+\cfrac{3}{4+\cdots }}}=\dfrac{1}{e-2}.$$ -Edit: Euler proves first how to form a continued fraction from an alternating series of a particular type [Theorem VI, §40] and then uses the expansion -$$e^{-1}=1-\dfrac{1}{1}+\dfrac{1}{1\cdot 2}-\dfrac{1}{1\cdot 2\cdot 3}+\ldots - .$$ - -REFERENCES -The Euler Archive, Index number E593, http://www.math.dartmouth.edu/~euler/ -Translation of Leonhard Euler's paper by Daniel W. File, The Ohio State University. - -REPLY [6 votes]: Another possibility: remember that the numerators and denominators of successive convergents of a continued fraction can be computed using a three term recurrence. -For a continued fraction -$$b_0+\cfrac{a_1}{b_1+\cfrac{a_2}{b_2+\dots}}$$ -with nth convergent $\frac{C_n}{D_n}$, the recurrence -$$\begin{bmatrix}C_n\\D_n\end{bmatrix}=b_n\begin{bmatrix}C_{n-1}\\D_{n-1}\end{bmatrix}+a_n\begin{bmatrix}C_{n-2}\\D_{n-2}\end{bmatrix}$$ -with starting values -$\begin{bmatrix}C_{-1}\\D_{-1}\end{bmatrix}=\begin{bmatrix}1\\0\end{bmatrix}$, $\begin{bmatrix}C_{0}\\D_{0}\end{bmatrix}=\begin{bmatrix}b_0\\1\end{bmatrix}$ -holds. -With $b_j=j+1$ and $a_j=j$, you now try to find a solution for those two difference equations. -Skipping details, one can check that -$$C_n=\frac{(n+3)!}{n+2}\sum_{j=0}^{n+3}\frac{(-1)^j}{j!}$$ -and -$$D_n=\frac{(n+3)!}{n+2}\left(1-2\sum_{j=0}^{n+3}\frac{(-1)^j}{j!}\right)$$ -are solutions to the two difference equations.
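-(As a quick numerical sanity check, my own addition and not part of the argument, one can iterate the recurrence directly in Python and watch the convergents approach $1/(e-2)$:)
-
-    from math import e
-
-    # convergents of 1 + 1/(2 + 2/(3 + 3/(4 + ...))): b_j = j + 1, a_j = j
-    C_prev, C = 1, 1   # C_{-1} and C_0 = b_0
-    D_prev, D = 0, 1   # D_{-1} and D_0
-    for n in range(1, 20):
-        b, a = n + 1, n
-        C, C_prev = b * C + a * C_prev, C
-        D, D_prev = b * D + a * D_prev, D
-    print(C / D, 1 / (e - 2))  # both print ~1.392211191...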
-Divide $C_n$ by $D_n$ and take the limit as $n\to\infty$; you should get the expected result.<|endoftext|> -TITLE: Characterizing non-constant entire functions with modulus $1$ on the unit circle -QUESTION [44 upvotes]: Is there a characterization of the nonconstant entire functions $f$ that satisfy $|f(z)|=1$ for all $|z|=1$? - -Clearly, $f(z)=z^n$ works for all $n$. Also, it's not difficult to show that if $f$ is such an entire function, then $f$ must vanish somewhere inside the unit disk. What else can be said about those functions? -Thank you. - -REPLY [4 votes]: For your $f$ we know there are no zeros on the boundary of the circle, and only a finite number of zeros inside. So you can form the Blaschke product $b$ for the non-zero zeros of $f$ inside the circle, and $f(z)/(z^{n}b(z))$ is non-vanishing and holomorphic on an open neighborhood of the closed unit disk for some natural number $n$, and has modulus $1$ on the unit circle. Of course, it may be that $f$ has no non-zero zeros inside the disk, in which case, define $b\equiv 1$. -So $\ln|f(z)/(z^{n}b(z))|$ is harmonic on an open neighborhood of the closed unit disk, and it vanishes on the unit circle. So this harmonic function is identically $0$, which means $|f(z)/(z^{n}b(z))|\equiv 1$ inside the disk. Now, by the maximum modulus principle for holomorphic functions, $f(z)=Cz^{n}b(z)$ for some unimodular constant $C$, and for all $|z| \le 1$. By the identity theorem, this must be true for all $z$ where $b(z)$ is finite, which means that the Blaschke factor $b$ must not be present in the factoring because $f$ is entire. Therefore, $f=Cz^{n}$ for some constant $C$ with $|C|=1$ and for some $n \ge 0$.<|endoftext|> -TITLE: Accessible Intro to Random Matrix Theory (RMT) -QUESTION [8 upvotes]: I read this fascinating article: -http://www.newscientist.com/article/mg20627550.200-enter-the-matrix-the-deep-law-that-shapes-our-reality.html -Unfortunately all the other papers I find googling are just not tangible to me :-( -Could anyone please point me to some material that bridges the gap from this popular science exposition to the hard core papers that seem to pile up in the Net? - -REPLY [10 votes]: Random matrix theory is a diverse area; and different people prefer different introductions. One question is whether you're mostly interested in the mathematical aspects or in applications in physics (and other areas). But I can recommend everything below. -Lecture notes -Topics in random matrix theory, -excellent lecture notes by Fields medallist Terence Tao -Introduction to the Random Matrix Theory: Gaussian Unitary Ensemble and Beyond, -lecture notes by Yan Fyodorov -Topics in random matrix theory -lectures mostly focused on QCD by Jac Verbaarschot -Books -Random Matrices, by Madan Lal Mehta -canonical on the orthogonal polynomial approach -An Introduction to Random Matrices, by Greg Anderson, Alice Guionnet and Ofer Zeitouni -Overview of applications -The Oxford Handbook of Random Matrix Theory, -Editors: Gernot Akemann, Jinho Baik, Philippe Di Francesco.<|endoftext|> -TITLE: Is it possible to represent every huge number in abbreviated form? -QUESTION [88 upvotes]: Consider the following expression. -$1631310734315390891207403279946696528907777175176794464896666909137684785971138$ -$2649033004075188224$ -This is a $98$ decimal digit number. -This can be represented as $424^{37}$ which has just 5 digits.
-Or consider this number: -$1690735149233357049107817709433863585132662626950821430178417760728661153414605$ -$2484795771275896190661372675631981127803129649521785142469703500691286617000071$ -$8058938908895318046488014239482587502405094704563355293891175819575253800433524$ -$5277559791129790156439596789351751130805731546751249418933225268643093524912185$ -$5914917866181252548011072665616976069886958296149475308550144566145651839224313$ -$3318400757678300223742779393224526956540729201436933362390428757552466287676706$ -$382965998179063631507434507871764226500558776264$ -This $200$ decimal digit number can be simply expressed as $\log_e 56$ when we discard the first $6$ digits and then consider the first $200$ digits. -Now the question is, is it possible to represent any and every huge random number using as few characters as possible, theoretically. -...Also, is there any standard way to reduce it mathematically? - -REPLY [5 votes]: As in other answers, at the very least a pigeon-hole principle shows that "most" numbers cannot be "described" in fewer characters than their decimal (or whatever-you-pick) expression... -To my mind, the relevant developed body of ideas is "Solomonoff-Chaitin-Kolmogorov" complexity, which is about descriptional (or program-length) complexity, rather than run-time "complexity". -This does remind one of the "smallest boring number" pseudo-paradox, which argues that the first non-interesting number has some interest because it is the first... -The bottom line is that "size" of numbers is not reliably comparable to "descriptional complexity", in general, although most large-ish numbers are also descriptionally complex, by pigeon-hole. -There is a book by Li and Vitanyi which is not only authoritative, but fascinatingly readable...<|endoftext|> -TITLE: A polynomial determined by two values -QUESTION [13 upvotes]: From a St. Petersburg school olympiad, 11th grade. -Prove or disprove: a nonconstant polynomial $P$ with non-negative integer coefficients is uniquely determined by its values $P(2)$ and $P(P(2))$. - -REPLY [16 votes]: True. If $P(x)=a_nx^n+\cdots+a_1x+a_0$ then each of the coefficients is less than $b\equiv P(2)$. Each of these coefficients can then be read off from the base-$b$ expansion of $P(b)=P(P(2))$. - -REPLY [11 votes]: Look at $P(P(2))$ in base $P(2)$. The $n$th place is the coefficient of $x^n$. -Steve<|endoftext|> -TITLE: Search square factors in Lucas–Lehmer sequence -QUESTION [5 upvotes]: I'm interested in some sequences which have no square factor. -$$ -s_i =\begin{cases} -4 & i=0; \\ -s_{i-1}^2 - 2 & \text{otherwise} -\end{cases}$$ -This is the Lucas–Lehmer primality test sequence, A003010 in the OEIS. -When does ${s_i}$ have a square factor? I have checked that the sequence has no square factor for $i=1,2,\ldots,7$. - -REPLY [4 votes]: It seems plausible that there may be no square factors other than $4$, or at least that there are only finitely many primes for which $p^2$ divides $s_n$. A sketchy argument follows: -Set $\tau=2 +\sqrt{3}$. Then $s_n = \tau^{2^n} + \tau^{-2^n}$. We work in the ring $R=\mathbb{Z}[\sqrt{3}]$, where $\tau$ is a unit. So $p^2$ divides $s_n$ if and only if $\tau^{2^{n+1}} \equiv -1 \bmod p^2$, where our congruence is in $R$. -Let $p$ be a prime $\geq 5$. Let $U_{p^2}$ be the unit group of $R/p^2 R$. You want to know whether the order of $\tau$ in $U_{p^2}$ is a power of $2$. -There are two cases. First, suppose there is a square root of $3$ mod $p$ (this happens when $p \equiv \pm 1 \mod 12$).
This square root will lift to a solution $s$ of $s^2 \equiv 3 \mod p^2$ (we can lift by Hensel's lemma). In this case $R/p^2 \cong \mathbb{Z}/p^2 \oplus \mathbb{Z}/p^2$ and we can identify $\tau$ with the ordered pair $(2 + s, 2 - s)$. Notice that $2-s = (2+s)^{-1}$, so the two halves of the ordered pair have the same order. The unit group of $\mathbb{Z}/p^2$ is cyclic of order $p(p-1)$, and the number of elements whose orders are powers of $2$ is $2^{v_2(p-1)}$, where $v_2(k)$ is the exponent to which $2$ divides $k$. So, heuristically, I would expect the number of $p$ for which $p^2$ divides $s_n$ and there is a square root of $3$ mod $p$ to be something like -$$\sum_{p \equiv \pm 1 \mod 12} \frac{2^{v_2(p-1)}}{p(p-1)}.$$ -The case where $p \equiv \pm 5 \mod 12$ is a little trickier but, if I haven't made any mistakes, the similar heuristic estimate is -$$\sum_{p \equiv \pm 5 \mod 12} \frac{2^{v_2(p+1)}}{p(p+1)}.$$ -Now, here is my point: those sums converge. In fact, -$$\sum_{n=1}^{\infty} \frac{2^{v_2(n)}}{n^2}$$ -converges to $\pi^2/4$ (exercise!) and the above sums are less than this because they are restricted to just summing over primes. Moreover, the sum numerically looks pretty small. -So the "expected" number of such primes is a small finite quantity, and it wouldn't surprise me if $2$ were the only one.<|endoftext|> -TITLE: Intuition for the definition of the Gamma function? -QUESTION [304 upvotes]: In these notes by Terence Tao is a proof of Stirling's formula. I really like most of it, but at a crucial step he uses the integral identity -$$n! = \int_{0}^{\infty} t^n e^{-t} dt,$$ -coming from the Gamma function. I have a mathematical confession to make: I have never "grokked" this identity. Why should I expect the integral on the right to give me the number of elements in the symmetric group on $n$ letters? -(It's not that I don't know how to prove it. It's quite fun to prove; my favorite proof observes that it is equivalent to the integral identity $\int_{0}^{\infty} e^{(x-1)t} dt = \frac{1}{1 - x}$. But if someone were to ask me, "Yes, but why, really?" I would have no idea what to say.) -So what are more intuitive ways of thinking about this identity? Is there a probabilistic interpretation? What kind of random variable has probability density function $\frac{t^n}{n!} e^{-t}$? (What does this all have to do with Tate's thesis?) -As a rough measure of what I'm looking for, your answer should make it obvious that $t^n e^{-t}$ attains its maximum at $t = n$. -Edit: The kind of explanation I'm looking for, as I described in the comments, is similar to this explanation of the beta integral. - -REPLY [15 votes]: Geometric approach -Note that $\frac{t^n}{n!}$ is the volume of the set $S_t=\{(t_1,t_2,\dots,t_n)\in\mathbb R^{n}\mid t_i\geq 0\text{ and } t_1+t_2+\cdots+t_n\leq t\}$. -So we can perform a change of variables in the integral, replacing $t=t_1+\dots + t_{n+1}$, as: -$$\begin{align}\int_{0}^\infty \frac{t^n}{n!}e^{-t}\,dt &= \int_{0}^{\infty}\left(\int_{(t_i)\in S_t}\,dt_1\dots dt_n\right)e^{-t}\,dt \\&= \int_{t_1,t_2,\dots,t_n,t_{n+1}} e^{-(t_1+\dots +t_{n+1})}\,dt_1\dots dt_{n+1} -\end{align}$$ -where all variables go from $0$ to $\infty$. But this can clearly be factored as -$$\left(\int_{0}^\infty e^{-t}\,dt\right)^{n+1}=1.$$ -So, what's happening is really in $n+1$ dimensions - the linear map $\mathbb R^{n+1}\to\mathbb R^{n+1}$ defined as $$(t_1,\dots,t_n,t_{n+1})\mapsto\left(t_1,\dots,t_n,\sum_{i=1}^{n+1}t_i\right)$$ -preserves volumes.
-And then using the property $e^{x+y}=e^xe^y$. - -Moment generating function approach -The exponential random variable $T$ with $P(T>t)=e^{-t}$ has moment generating function $E\left[e^{sT}\right]=\int_0^\infty e^{st}e^{-t}\,dt=\frac{1}{1-s}$ for $s<1$; expanding both sides in powers of $s$ and comparing coefficients gives $E\left[T^n\right]=\int_0^\infty t^n e^{-t}\,dt=n!$.<|endoftext|> -TITLE: What is Modern Mathematics? Is this an exact concept with a clear meaning? -QUESTION [7 upvotes]: Motivated by this question I would like to know whether there is an exact definition of modern mathematics. Which point in time (century, decade) does one have in mind when speaking about modern mathematics? Does it refer to abstract algebra? - -Edit: I got -1. If this is not a correct question, please state why. And it can be closed if the question is not proper. -This is a genuine doubt and ignorance of mine! -Edit 2: In my humble opinion, instead of closing, it would be better to tag it as a community wiki. Anyway I have no objections to closing it. -Edit 3: It is now a community wiki. -Edit 4. I have learned from the answers and comments, including the explanation for closing! - -REPLY [11 votes]: Further to the other answers, which are indeed correct: no technical definition exists- barks $\iff$ dog, frankly- but 'modern' is a well defined concept outside of mathematics; and to a certain extent it is one to which the barkings of modern mathematics agree. -It was once the case that mathematicians believed proofs to uncover the necessarily true- there must be numbers working thus, and those numbers must naturally have no zero divisors- there must be a geometry built thus, and this geometry must have angles in a triangle summing to $180^o$. With the exception of the work of Euclid (whose axioms were largely seen by others as immutable anyway), the theorems of mathematics were seen as universal truths, founded in pure logic- facts about platonic ideals. -Except none of this was true. -Perhaps the first chink in the armour of this classical mathematics came with the work of Bolyai and Gauss, constructing consistent geometries where triangles behaved unusually (turning, as we all know, into modern hyperbolic geometry), that seeped from a change in the hitherto 'immutable' axioms. -And from here the trickle began, which rushed and swelled with time, and burst the banks of mathematics as was: axioms became plastic, changeable at will, and with them the mathematics that followed from them. New concepts were created, and concepts of concepts, enriching and enlarging the mathematical landscape in ways that generations before could not have imagined. -Parallel to this explosion was the search for foundations for these axioms- the dying embers of platonism in the work of Frege, Russell and Whitehead; Hilbert's program, at first seeming promising, was spectacularly micturated on by Gödel's incompleteness theorems. And it soon became (quite) clear that any (provably) 'ultimate' description of mathematics was doomed to failure. -Modernism outside of mathematics is characterised by a certain relativism- an understanding that different perspectives can lead to different (equally valid) conclusions. In modern mathematics one has the reals and the p-adics, euclidean and non-euclidean geometries, topologies and metric spaces, groups, rings, algebras: sets and mereology- and we cannot claim one to be more valid than the others. -In modern mathematics, our truths are absolute but crucially contingent, the children of axioms in a pluralistic universe of possible postulates.
-Of course some would say that 'modern' just means 'with categories', but that's not quite as neat- perhaps we can fit categories to 'post-modern' somehow....<|endoftext|> -TITLE: Rules for rounding (positive and negative numbers) -QUESTION [31 votes]: I'm looking for clear mathematical rules on rounding a number to $n$ decimal places. -Everything seems perfectly clear for positive numbers. Here is for example what I found on math.about.com: - -Rule One Determine what your rounding digit is and look to the right side of it. If that digit is $4, 3, 2, 1,$ or $0$, simply drop all digits to the right of it. -Rule Two Determine what your rounding digit is and look to the right side of it. If that digit is $5, 6, 7, 8,$ or $9$ add $1$ to the rounding digit and drop all digits to the right of it. - -But what about negative numbers? Do I apply the same rules as above? -For instance, what is the correct result when rounding $-1.24$ to $1$ decimal place? $-1.3$ or $-1.2$? - -REPLY [7 votes]: Out of the six methods described in Ilmari's answer, one has two noticeable advantages: the "round away from zero" rule. - -We only need to look at a single place to determine which direction to round to. - -When faced with e.g. 0.15X where X could be any digit, we don't need to concern ourselves with what X might be when rounding to 1 decimal place. If X is zero then the rule tells us to round to 0.2 and if it is non-zero then we would round to 0.2 anyway. -This also applies with the "round up" rule, but only for positive numbers. Any of the other rules could require us to examine X to determine whether or not we should round to 0.1 or 0.2. -This advantage holds true for negative numbers with the "round away from zero" rule. -0.15X will always round to -0.2 regardless of X. This works with the "round down" and "round towards zero" rule for negative numbers, but not any other rule. -"Round away from zero" is the only rule that has this benefit for both positive and negative numbers. - -Lack of bias - -With the "round away from zero" rule, half of all numbers will be rounded up and half rounded down when the digit 5 is encountered. This means that for a random selection of numbers that you round to the same place, the expected average amount that you will round by is 0. This is because every digit that you round down is paired with a digit that you will round up (amount rounded in brackets): -1 9 (-1 +1) -2 8 (-2 +2) -3 7 (-3 +3) -4 6 (-4 +4) -5 5 (-5 +5) <-- for negative and positive numbers respectively - -This advantage exists with some of the other rules, but with the others you lose the first advantage. With "round up" or "round down" you introduce a bias because the digit 5 will always result in a +5 or a -5 respectively. -Note that this only works if you expect to encounter positive and negative numbers with equal probability.<|endoftext|> -TITLE: Why the name 'FACTORIAL'? -QUESTION [13 votes]: Factorial is defined as -$n! = n(n-1)(n-2)\cdots 1$ -But why did mathematicians name this thing FACTORIAL? -Has it got something to do with factors? - -REPLY [24 votes]: Below is the etymology, from Jeff Miller's Earliest Known Uses of Some of the Words of Mathematics (F). Perhaps a native French speaker can lend further insight. - -FACTORIAL. The earlier term faculty was introduced around 1798 by Christian Kramp (1760-1826). -Factorial was coined (in French as factorielle) by Louis François Antoine Arbogast (1759-1803). -Kramp withdrew his term in favor of Arbogast's term. In the Preface, pp.
xi-xii, of his "Éléments d'arithmétique universelle," Hansen, Cologne (1808), Kramp remarks: -...je leur avais donné le nom de facultés. Arbogast lui avait substitué la nomination plus nette et plus française de factorielles; j'ai reconnu l'avantage de cette nouvelle dénomination; et en adoptant son idée, je me suis félicité de pouvoir rendre hommage à la mémoire de mon ami. [...I had given them the name faculties. Arbogast had substituted for it the clearer and more French name factorials; I recognised the advantage of this new denomination, and in adopting his idea I was glad to be able to pay homage to the memory of my friend.]<|endoftext|> -TITLE: Example of a trigonometric series that is not a Fourier series? -QUESTION [8 votes]: My textbook doesn't give any example of this kind of series. Could you provide some? -A trigonometric series is defined in Wikipedia as: -$A_{0}+\sum_{n=1}^{\infty}(A_{n} \cos{nx} + B_{n} \sin{nx})$ -When -$A_{n}=\frac{1}{\pi} \int^{2 \pi}_0 f(x) \cos{nx} dx\qquad (n=0,1,2,3 \dots)$ -$B_{n}=\frac{1}{\pi} \int^{2 \pi}_0 f(x) \sin{nx} dx\qquad (n=1,2,3, \dots)$ -it is a Fourier series. -thanks. - -REPLY [2 votes]: Theorem. If $a_n>0$ and $\sum_{n>0}\frac{a_n}{n}=\infty$, then $\sum_1^\infty a_n\sin nt$ is not a Fourier series. (AN INTRODUCTION TO HARMONIC ANALYSIS, Yitzhak Katznelson)<|endoftext|> -TITLE: Do endomaps of sets have interesting properties? -QUESTION [7 votes]: I've been thinking about maps between sets. Injections, surjections and the rest. Often when thinking about some kind of map, it is interesting to say "what about the maps from a set to itself?" Call these maps endomaps. Permutations of elements of the set are a special case of endomaps: they are bijective endomaps. Permutations have lots of interesting properties: they form groups and so on. -But what about general endomaps that are not necessarily bijective? Do they have any interesting properties that people study? They don't necessarily have inverses, which rules them out as forming groups, but composition of endomaps is associative, so they aren't totally devoid of interesting properties. -It is also the case that for every endomap, there exists some subset of its domain (not necessarily unique) such that it is bijective on that subset. So these maps are permutations if restricted to a particular subset. Is this enough to make endomaps interesting in their own right, or are they only studied as a part of the study of maps between sets in general? -[It might be obvious that this question was motivated by thinking about category theory, but there's nothing particularly categorical about the question as such...] - -REPLY [5 votes]: The generalization of cycle decomposition to endomaps is quite interesting; rather than just cycles, endomaps break up into cycles in which each vertex is the root of a tree. Counting endomaps (of which there are $n^n$) is therefore relevant to counting trees, and this is the basis of a beautiful proof due to Joyal (which I believe can be found in Bergeron, Labelle, and Leroux's Combinatorial Species and Tree-like Structures) of Cayley's formula. -A set equipped with an endomap is just about the simplest kind of dynamical system. The category of sets equipped with endomaps is sometimes called the category of (discrete) dynamical systems and can be a useful example in category theory, e.g. it pops up in Lawvere and Schanuel's Conceptual Mathematics.<|endoftext|> -TITLE: How are gauge transformations of a $G$-bundle related to the adelic points of $G$?
-QUESTION [9 upvotes]: In a very interesting blog discussion at the $n$-category cafe, an anonymous poster made the following remark: "... using the dictionary between number fields and function fields, Weil suggested that G-with-adelic-entries is analogous to the group of gauge transformations of a principal G-bundle over a Riemann surface." -I would like to know how this particular piece of the number field / function field analogy is made precise. As a preliminary question, does anyone know a reference to where Weil (or someone else) explains it? I have a suspicion that it might be discussed in the notes from Weil's 1959-1960 lectures on "Adeles and algebraic groups", but the copy in my local library is checked out so I'm not sure. -My next question is for an explanation of the analogy in what I assume is the simplest case: holomorphic vector bundles on a compact Riemann surface. There are two obstacles here. First, I don't know what a gauge transformation is, and the "Mathematical Formalism" section -of the Wikipedia page seems nonsensical to me. What is being transformed, and what is the transformation itself? Can anyone provide a physics-free definition, purely in the language of geometry? -Now given a curve $X$ over the complex numbers (or any field), we can form a topological ring $\mathbf{A}_X$ as the restricted product of the completed local rings at the closed points of $X$. The second obstacle is that when the ground field is infinite, these completions are not locally compact, so it seems unlikely that $\mathbf{A}_X$ is a good thing to consider. -Nonetheless, can one put the constructions of the previous two paragraphs together to produce, for any rank $n$ holomorphic vector bundle $\mathcal{E}$ on $X$, a bijection as follows? -$${\text{gauge transformations of }\mathcal{E}}\stackrel{?}{\leftrightarrow} \mathrm{GL}_n(\mathbf{A}_X)$$ -My final question is, what is the correct version of this when the complex numbers are replaced by a finite field? Now we can consider algebraic vector bundles, and even better the adeles of $X$ are now a good thing to consider. So in trying to extend the putative bijection above to this case, the content seems to be in algebraizing the notion of gauge transform. Thus, I'm asking for a (second?) definition of gauge transformation which is native to algebraic geometry, if such exists. - -REPLY [7 votes]: If $X$ is a smooth projective curve over a finite field $k$, with field of rational functions $F$, then the quotient $GL_n(F)\backslash GL_n(\mathbf A_F)/GL_n(\mathcal O_F)$ (where -$\mathcal O_F$ denotes the subring of integral adeles in $\mathbf A_F$) is in natural bijection with the set of isomorphism classes of rank $n$ vector bundles on $X$, i.e. the -set $Bun_n(k)$ of $k$-valued points of the moduli stack of rank $n$ bundles on $X$. -(And the same should be true if we replace $GL_n$ by another reductive group $G$, -and replace "rank $n$ vector bundles" by principal $G$-bundles; recall that principal $GL_n$-bundles are "the same" as rank $n$ vector bundles; one goes from the latter to the former -by passing to the associate frame bundle.) -The passage from a bundle to an a point in the double coset space is made as follows: given the vector bundle, one chooses an affine open subset $U = X \setminus {x_1,\ldots,x_r}$ over which -the bundle is trivialized, and also trivialiations in a n.h. $U_i$ of each of the points $x_i$. 
The gluing data comparing the two trivializations on $U_i \cap U$ gives an element -$g_{x_i} \in GL_n(F_{x_i})$, where $F_{x_i}$ is the completion of $F$ at $x_i$. -If $x$ is a point distinct from the $x_i$, set $g_{x} = 1$. We can then put all -the $g_x$ together into an adele, and its class in -the double coset space $GL_n(F)\backslash GL_n(\mathbf A_F)/GL_n(\mathcal O_F)$ -is independent of all choices. -To see that this map is a bijection, one first observes -that any element in the double coset space can be represented as -$(g_x)$ with $g_x = 1$ for almost all $x$, and then one interprets these $g_x$ as formal -gluing data around the points $x_i$ at which $g_{x_i} \neq 1$ and uses them to -extend the trivial bundle on $X\setminus \{x_1,\ldots,x_r\}$ to a bundle on all of $X$. -The relationship with gauge transformations is that the $g_x$ are change of -basis matrices (gauge transformations) in formal punctured neighborhoods of the points $x$.<|endoftext|> -TITLE: What is "reform calculus"? -QUESTION [8 votes]: In an answer to another question I asked, Isaac suggested a book that is the standard "reform calculus" book. In a comment, I asked what the phrase "reform calculus" means, and Isaac provided a link to this summary page. It seems to be a new or revised teaching methodology. However, the page linked to doesn't provide a good explanation of what exactly the phrase "reform calculus" means or its history. -The summary describes things like courses being "leaner in terms of the number of topics in the syllabus" and says that "technology alters the relative importance of specific techniques, and methodology because technology offers opportunities for creating new learning environments". But what do these things actually mean for people, especially the calculus student and the professor teaching a calculus course? - -REPLY [8 votes]: It is hard to describe faithfully an entire movement or "ism": suppose for instance you had asked for a description of Buddhism, or Marxism, or post-modernism. For every principle that you put forward as a "plank" of the movement, there is someone to say that that's a misunderstanding/oversimplification, or there is a specific submovement formed out of the violation of that principle. -As a result, let me focus on a particular branch of calculus reform which is better defined (and which I have seen more of myself): the Harvard Calculus Consortium. A very clear description, review and critique of this movement is given by Oliver Knill (who is well placed to comment on it, having been involved in the teaching and administration of calculus at Harvard for many years) here: -http://www.math.harvard.edu/~knill///pedagogy/harvardcalculus/index.html -So I encourage you to read this first and then ask any further questions with at least one specific platform underneath your feet. - -REPLY [6 votes]: Here's an excerpt from Reviewing Reformed Calculus, by Lisa Murphy, 2006. - -The very beginnings of mathematics reform started in the 1960's, but the big push for Calculus - reform started in earnest in 1989 with the publication of the National Council of Teachers of Mathematics' - (NCTM) Principles and Standards for Mathematics Education. The NCTM published the - Principles and Standards in response to the apathy of students towards math and the lack of academic - success in the mathematics classroom. To combat these negative trends the NCTM outlined - five goals for "the processes of problem solving, reasoning and proof, connections, communication, - and representation" [1].
Through these goals it was hoped that students would be equipped with - the basic skills and understanding that they would need to be successful. - As the Principles and Standards inspired the reform of secondary mathematics education, thoughts - of reform began to surface in the collegiate mathematics arena, especially with regard to Calculus. - College Calculus courses were experiencing some of the same problems as secondary mathematics. - Of the roughly 300,000 college students that are annually enrolled in an engineering-based Calculus - course, only 140,000 earn a grade of D or higher [6]. Less than half of the students were performing - "well" in their Calculus courses. Armed with statistics such as this, reform-minded professors set - out to develop a new curriculum that would help raise the achievement level and stimulate student - interest in mathematics. -From the reform movement, numerous curricular designs have been generated. Calculus and - Mathematica; Calculus, Concepts, Computers and Cooperative Learning (C4L); and The Calculus - Consortium at Harvard (CCH) are a few of the commonly used curriculums. These new curriculums - cover the entire spectrum of reform. Some are grounded in traditional techniques but incorporate - snippets of reform, while others differ in most aspects from the traditional approach. Despite this - vast array, there are some basic elements that are common to all reform curriculums in varying - degrees that separate them from the traditional Calculus curriculum. - One of the most noticeable differences of reformed Calculus is the use of graphing calculators - and/or computers. The graphing calculator is a critical component in the reform classroom. Many - reform classes include a weekly lab session where students meet in a computer lab. The students - make use of calculators and math computer programs to investigate new topics and to graphically - see what they are working on. Most reform textbooks urge students to read through the text with a - calculator in hand to see directly what is discussed in the text. The idea behind the incorporation of - calculators and computers is to alleviate the heavy algebraic manipulation that students typically do - in a traditional Calculus setting. Reform supporters argue that the removal of manipulation allows - students to move beyond the drudgery of computation and start learning the fundamental ideas of - Calculus. They additionally argue that topics are discussed more fully with the use of graphical - representations. -A reformed Calculus class differs from a traditional course in methods of instruction. When - walking into a reform classroom it is immediately clear that it is indeed a reform classroom. Most - noticeably the teacher is no longer the central focus of the classroom experience. The lecture method - of instruction, a standard of traditional curriculum, has a lesser place within a reform setting. The - teacher still lectures occasionally and is available to answer questions from the students, but there is - greater emphasis placed on cooperative learning. Reform students often work in groups to determine - solutions or to explore concepts in a laboratory setting. This idea is rooted in the constructivist - learning theory. Each student constructs their own meaning as they learn. Students are given the - basic tools and from these discover how the pieces fit together to form the concept that they are - studying. 
One of the primary goals of the C4L curriculum is to "create situations which foster - students to make the necessary mental constructions to learn mathematics concepts" [10]. - Within the curriculum itself, the reformed method stresses the applications of Calculus. This - emphasis hopes to justify the topics of study, which in theory raises interest in the material. In - an effort to accomplish this, some of the mathematical rigor is removed from the curriculum. Most - reform textbooks are void of a single proof. In the introduction to Calculus from Graphical, Numerical - and Symbolic Point of View, the authors state that "proving theorems in full generality is - less valuable, we think, than helping students understand concretely what theorems say" [9]. As - a result of this change, a common question that arises from students new to reformed Calculus is - "Where is the math?" [5]. -Accompanying this application heavy curriculum is a different method of assessment. Reformed - Calculus courses emphasize the use of writing. Projects, reports and lengthy explanations of problem - solutions are common place within the reform classroom. In some cases, the students are graded - more on the thoroughness and completeness of written explanations as opposed to correctness of - answer. - [...] -The emphasis on correct explanation rather than correct answer is seen explicitly in the directions - for the midterm. The problem also provides an example of the type of application problems that - reformed Calculus students are accustomed to working with. This midterm question additionally - exhibits one of the flaws that traditional professors are quick to point out. The problem asks students - to determine when the population becomes infinite, which is a misuse of the word infinite. - The population may become uncontrollable but it will never become infinite. Traditional professors - argue that the misuse of mathematical terms, such as infinite, teaches students the wrong meaning - of or concept behind the term, which results in misunderstandings in future math work. - More generally, there is a trend in reformed Calculus moving away from individual study and - towards a social study of Calculus. The context of learning Calculus is now placed in a more social - setting. Students work primarily in groups to gain knowledge both from a textbook and from each - other.<|endoftext|> -TITLE: Hint on how to prove $\zeta ( 2) =\pi ^{2}/6$ using the complex Fourier series of $f(x)=x$ -QUESTION [5 upvotes]: I know how to prove $\zeta (2)=\pi ^{2}/6$ by using the trigonometric Fourier series expansion of $x^{2}/4$. How can one prove the same result using the complex Fourier series of $f(x)=x$ for $0\leq x\leq 1$? Any suggestion? 
- -REPLY [2 votes]: Extending off from Aryabhatta's answer: -For our situation: We have $f(x)=x ~~~{\text{ for }} 0\leq x\leq 1$ -$2L=1,\Rightarrow L=\frac{1}{2}$, so the complex Fourier series uses the exponentials $e^{2\pi inx}$. -So restating we have: -$f(x) = \displaystyle\sum_{n=-\infty}^{\infty} {c_{n} e^{2\pi inx}}, \text{ where }c_{n} = \displaystyle\int_{0}^{1}{f(x)e^{-2\pi inx}} ~\mathrm{d}x,~~~~~~n=0,~\pm 1,~\pm 2, \cdots~ $ -$ -\Rightarrow~~ c_{0} = \displaystyle\int_{0}^{1}{x}~\mathrm{d}x=\frac{1}{2}, \qquad c_{n} = \displaystyle\int_{0}^{1}{xe^{-2\pi inx}}~\mathrm{d}x~~~(n\neq 0) -$ -After integrating the complex Fourier coefficient by parts (and using $e^{-2\pi in}=1$) we see that we get the following: -$\Rightarrow~~~~\displaystyle c_n=\frac{i}{2\pi n},~~~\text{for }n \in \mathbb{Z},~n\neq 0$ -Lastly, plugging the $c_n$ into Parseval's identity $\displaystyle\int_{0}^{1}|f(x)|^{2}~\mathrm{d}x=\sum_{n=-\infty}^{\infty}|c_n|^{2}$ gives -$\displaystyle\frac{1}{3}=\frac{1}{4}+2\sum_{n=1}^{\infty}\frac{1}{4\pi^{2}n^{2}},$ -and solving for the sum we then get our desired result $\displaystyle\sum_{n=1}^{\infty}\frac{1}{n^{2}}=\frac{\pi^{2}}{6}$. -Please update if you see any mistakes with any of the work. It has been quite some time since I worked with Fourier series, and I went off from memory. Feel free to edit mistakes as necessary if willing. -Thanks.<|endoftext|> -TITLE: Algebra structure of tensor product of two Galois extensions -QUESTION [6 votes]: Sorry if this question is too basic. It is from Fröhlich and Taylor's "Algebraic Number Theory". -Let $E/F$ be a finite Galois extension of fields, with $G=Gal(E/F)$, and let $K$ and $L$ be two subfields of $E$, containing $F$, such that $K/F$ and $L/F$ are both Galois. Let $M=Gal(E/K)$ and $N=Gal(E/L)$ be normal subgroups of $G$. Suppose $\{\gamma_1,\ldots,\gamma_n\}$ is a transversal for $MN$ in $G$, with $n=[G:MN]$. If $C$ is the compositum $KL$ in $E$, how can I show the map -$$ k\otimes l\mapsto (k^{\gamma_1}l,\ldots,k^{\gamma_n}l) $$ -induces an isomorphism between $K\otimes_F L$ and $\prod_{i=1}^n C$? -It is clear to me that this map is an $F$-algebra homomorphism, and that they both have the same dimension over $F$. Thus surjectivity, or injectivity, would be enough. I have not been able to figure out what the idempotents of $\prod_{i=1}^n C$ should look like in $K\otimes_F L$, so I have not been able to show surjectivity. Meanwhile, I think injectivity should be easier to show, because if we have -$$ k_1\otimes l_1 + \cdots + k_m\otimes l_m\mapsto 0,$$ -then we get a system of equations -$$ k_1l_1 + \cdots + k_ml_m=0$$ -$$ \cdots$$ -$$ k_1^{\gamma_n}l_1 + \cdots + k_m^{\gamma_n}l_m=0.$$ -Summing up the columns, I get -$$ \sum_{i=1}^m(\sum_{j=1}^n k_i^{\gamma_j})l_i=0 $$ -and all this is happening in $L$. But I can't seem to finish this argument. Any help would be greatly appreciated. - -REPLY [2 votes]: To expand on Sam's awesome comment: -We can write $K$ as $F(\alpha)$, where the minimal polynomial of $\alpha$ is $f(x)$. Then we have a map from $L[x]$ to $K\otimes_F L$ given by -$$ a_0+a_1x+\cdots+a_nx^n\mapsto 1\otimes a_0 + \alpha\otimes a_1+\cdots + \alpha^n\otimes a_n.$$ -This induces an isomorphism between $L[x]/fL[x]$ and $K\otimes_F L$. Composing with the map I've given above, we then want to show the map -$$ a(x)\mapsto (a(\alpha^{\gamma_1}),\ldots,a(\alpha^{\gamma_n}))$$ -from $L[x]$ to $\prod_{i=1}^n C$, is surjective. -Set -$$ f_i=\prod_{n\in N} (x-\alpha^{\gamma_in});$$ -this is a polynomial in $L[x]$; now let -$$ g_i=\prod_{j\neq i} f_j; $$ -then $g_i$ maps to $(0,\ldots,g_i(\alpha^{\gamma_i}),0,\ldots)$. -Since $g_i(\alpha^{\gamma_i})$ is a non-zero element of $C$, we can write $g_i(\alpha^{\gamma_i})^{-1}=h_i(\alpha^{\gamma_i})$, and so $g_ih_i$ maps to $(0,\ldots,1,\ldots)$, where the $1$ is in the $i$th place.
Since $C=L(\alpha)$, this map is surjective.<|endoftext|> -TITLE: Summing the series $(-1)^k \frac{(2k)!!}{(2k+1)!!} a^{2k+1}$ -QUESTION [8 votes]: How does one sum the series $$ S = a -\frac{2}{3}a^{3} + \frac{2 \cdot 4}{3 \cdot 5} a^{5} - \frac{ 2 \cdot 4 \cdot 6}{ 3 \cdot 5 \cdot 7}a^{7} + \cdots $$ -This was asked to me by a high school student, and I am embarrassed that I couldn't solve it. Can anyone give me a hint?! - -REPLY [20 votes]: HINT $\quad \:\;\;\rm (a^2+1) \: S' = 1 - a \: S \;\:$ by transmuting the coefficient recurrence to a differential equation. -$\rm\;\Rightarrow\; 1 = (a^2+1) \: S' + a \: S \; = \; f \: (f \; S)' \;\;$ for $\rm\;\; f = (a^2+1)^{1/2}$ -$\rm\displaystyle\;\Rightarrow\; S = f^{-1} \int \; f^{-1} = \frac{\sinh^{-1}(a)}{(a^2+1)^{1/2}}$ - -REPLY [11 votes]: You can use the formula -$\displaystyle \int_{0}^{\frac{\pi}{2}}{\sin^{2k+1}(x) dx} = \frac{2 \cdot 4 \cdot 6 \cdots 2k}{3\cdot 5 \cdots (2k+1)}$ -This is known as Wallis's formula. -So we have $\displaystyle S(a) = \sum_{k=0}^{\infty} (-1)^k a^{2k+1} \int_{0}^{\frac{\pi}{2}}{\sin^{2k+1}(x) dx}$ -Interchanging the sum and the integral, -$\displaystyle S(a) = \int_{0}^{\frac{\pi}{2}}{\sum_{k=0}^{\infty}{(-1)^{k}(a\sin x)^{2k+1}} dx}$ -The sum inside the integral is a geometric series of the form -$\displaystyle x - x^3 + x^5 - \cdots = x(1 - x^2 + x^4 - \cdots) = \frac{x}{1+x^2}$ -Hence, -$\displaystyle S(a) = \int_{0}^{\frac{\pi}{2}}{\frac{a\sin x}{1 + (a\sin x)^2}}dx$ -Now substitute $\displaystyle t = a \cos x$. -The integral becomes -$\displaystyle \int_{0}^{a}{\frac{1}{1+a^2 - t^2}}dt = \frac{1}{2\sqrt{a^2+1}}\ln \left(\frac{\sqrt{a^2+1}+a}{\sqrt{a^2+1}-a} \right)$ -Now $\displaystyle \ln \left(\frac{\sqrt{a^2+1}+a}{\sqrt{a^2+1}-a}\right) = \ln \left(\frac{\left(\sqrt{a^2+1}+a \right)^2}{\left(\sqrt{a^2+1}-a \right)\left(\sqrt{a^2+1}+a \right)}\right) = 2\ln \left(\sqrt{a^2+1}+a \right)$ -So -$\displaystyle S(a) = \frac{1}{\sqrt{a^2+1}}\ln \left(\sqrt{a^2+1}+a \right) = \frac{\sinh^{-1}(a)}{\sqrt{a^2+1}}$ - -REPLY [3 votes]: Making my comments more explicit: -Your sum of interest is -$\sum_{j=0}^\infty {(-1)^j \frac{(2j)!!}{(2j+1)!!} a^{2j+1}}$ -where $(2j)!!=2\cdot 4\cdot 6\cdots (2j)$ and $(2j+1)!!=3\cdot 5\cdot 7\cdots (2j+1)$. -To simplify things a bit, we rearrange the series to -$a\sum_{j=0}^\infty {\frac{(2j)!!}{(2j+1)!!}\left(-a^2\right)^j}$ -The double factorials can be also expressed as -$(2j)!!=2^j j!=2^j (1)_j$ -and -$(2j+1)!!=2^j \left(\frac32\right)_j$ -where $(a)_j$ is a Pochhammer symbol. -Substitute both expressions into the series, and then note that the series now looks like a hypergeometric series. Now you can employ the formula here.<|endoftext|> -TITLE: Purely combinatorial proof that $(e^x)' = e^x$ -QUESTION [30 votes]: At the beginning of Week 300 of John Baez's blog, -Baez gives a proof that the "number" of finite sets (more specifically, the cardinality of the groupoid of all finite sets, where an object in the groupoid counts as $1/n!$ if it has $n!$ symmetries) equals $e$. -He then says that this leads to a purely combinatorial proof that $e^x$ is its own derivative. -Can anyone explain the purely combinatorial proof? - -REPLY [29 votes]: I am not quite sure how to translate this into groupoid cardinality language, but here is the standard proof. Suppose $A(x) = \sum_{n \ge 0} a_n \frac{x^n}{n!}$ is an exponential generating function. Then we should interpret $a_n$ as being the number of ways to put a certain structure on a set of size $n$.
For example, when $a_n = 1$ this is the structure of "being a set." When $a_n = n!$ this is the structure of "being a totally ordered set." And so forth. We will call this an $A$-structure. -Then $A'(x) = \sum_{n \ge 0} a_{n+1} \frac{x^n}{n!}$ can be interpreted as having coefficients $b_n = a_{n+1}$ which count the number of ways to add an element to a set of size $n$, then put an $A$-structure on the resulting set of size $n+1$. This is a purely combinatorial definition of differentiation. -With this definition, the proof is quite obvious: there is exactly one way for a set to be a set, and there is also exactly one way to add an element to a set and then make the result a set. So $\frac{d}{dx} e^x = e^x$. -This proof might seem contentless. Try to see how it generalizes to show that $\frac{d}{dx} e^{ax} = ae^{ax}$ for any positive integer $a$, and if you're up for a challenge see if you can generalize it all the way to this identity. -Vaguely the proof in groupoid cardinality language goes like this. For a finite set $X$ the groupoid of finite sets equipped with a function to $X$ has cardinality $e^{|X|}$. (The morphisms between two objects $A \to X, B \to X$ in this category are isomorphisms $A \simeq B$ such that the obvious triangle commutes.) One way to think about this groupoid is as the groupoid of "colored" sets, where $X$ is the set of colors and an isomorphism must respect color. Then it is easy to see that an isomorphism class of colored sets where there are $|X|$ colors is the same thing as a disjoint union of isomorphism classes of $|X|$ sets, one for each color. One gets a direct interpretation of the terms in the expansion $\left( \sum_{n \ge 0} \frac{1}{n!} \right)^{|X|}$ this way. -Differentiation replaces $|X|^n$ with $n|X|^{n-1}$, which means that we replace functions from an $n$-element set $S$ to $X$ with functions from $S - \{ s \}$ to $X$ where $s$ ranges over all elements of $S$. The resulting groupoid is still the groupoid of finite sets equipped with a function to $X$; in particular, it has the same cardinality. (Note that $X$ does not really have to be a finite set of a particular size for this argument to work; it can be a "formal" set in the same way that $x$ is a formal variable and the resulting groupoid cardinality is a generating function instead of a number. I think this is what the formal theory of "stuff types" is for, but I am not familiar with it.)<|endoftext|> -TITLE: Determinant of a polynomial matrix -QUESTION [5 votes]: A matrix determinant (naively) can be computed in $O(n!)$ steps, or with a proper LU decomposition $O(n^3)$ steps. This assumes that all the matrix elements are constant. If, however, the matrix elements are polynomials (say univariate of max order $p$), then at each step of the LU decomposition an element is multiplied by another element, producing (on average) ever larger polynomials. Each step therefore takes longer and longer - is the cost to perform the decomposition still polynomial? -EDIT: To clarify my reasoning a bit, if polynomial multiplication (using FFT as J.M. suggests in the comments) takes $O(m \log m)$, and we must perform $O(n^3)$ operations to get the LU decomposition, then the polynomials could effectively double in degree at each multiplication*.
The running time would look something like -$$ \approx O \left ( \sum_{k}^{n^3} (p \ln p)^{2k} \right ) .$$ -* (it doesn't quite do that, and this is where I'm stuck) - -REPLY [5 votes]: Suppose each entry of your $n \times n$ matrix is a polynomial of degree $d$ in the variable $t$. Appealing to the cofactor expansion, we see that the determinant will be a polynomial in $t$ of degree at most $dn$; let's call it $D(t)$. So you can do the following: evaluate the determinant at $t=0,1,2, \ldots, dn$. You will thus know $D(0),D(1),\ldots,D(dn)$, and since a polynomial of degree at most $dn$ is determined by its values at these $dn+1$ points, you can find out the coefficients of $D$ by interpolation. -This involves: -(i) $dn+1$ determinant evaluations, each of which takes $O(n^3)$ operations. -(ii) Interpolating a polynomial of degree $dn$. This can be done by solving a $(dn+1) \times (dn+1)$ system of equations (see https://en.wikipedia.org/wiki/Polynomial_interpolation under "Constructing the interpolating polynomial"). There are algorithms which will do this in $O(dn \log^2 dn)$. -Your final number of operations will be $O(d n^4 + dn \log^2 dn)$. -I would be interested in knowing if it's possible to do this faster. -Update: I corrected an error and incorporated a suggestion of J.M. in the comments.<|endoftext|> -TITLE: How to prove Euler's formula: $e^{i\varphi}=\cos(\varphi) +i\sin(\varphi)$? -QUESTION [173 upvotes]: Could you provide a proof of Euler's formula: $e^{i\varphi}=\cos(\varphi) +i\sin(\varphi)$? - -REPLY [3 votes]: Let $y=\cos \phi+i\sin \phi$ $...(1)$ -Differentiating both sides of equation (1) with respect to $\phi$, we get, -$\frac{dy}{d\phi}=-\sin \phi+i\cos \phi$ -$\implies \frac{dy}{d\phi}=i(\cos \phi-\frac{1}{i}\sin \phi)$ -$\implies \frac{dy}{d\phi}=i(\cos\phi+i\sin \phi)$ -$\implies \frac{dy}{d\phi}=iy$ -$\implies\frac{1}{y}dy=id\phi$ $...(2)$ -Integrating both sides of equation (2), we get, -$\int\frac{1}{y}dy=\int id\phi$ -$\implies \ln(y)=i\phi+c$ $...(3)$ -Substituting $\phi=0$ in equation (1), we get, -$y=\cos 0+i\sin 0$ -$\implies y=1$ -Substituting $\phi=0$ and $y=1$ in equation (3) we get, -$\ln(1)=c$ -$\implies c=0$ -Substituting $c=0$ in equation (3) we get, -$\ln(y)=i\phi$ -$e^{i\phi}=y$ -$\therefore e^{i\phi}=\cos \phi+i\sin \phi$<|endoftext|> -TITLE: Liouville's number revisited -QUESTION [5 votes]: Liouville's Number is defined as $L = \sum_{n=1}^{\infty}(10^{-n!})$. Does it have applications other than constructing a transcendental number? -(Personally, I would have defined it (as "Steven's Number" :-)) as binary: $S = \sum_{n=1}^{\infty}(2^{-n!})$, since each digit can only be "0" or "1": the corresponding power of 2 (instead of 10) included or not. Since according to Cantor most numbers are transcendental, one can conjecture that this is also the case for Steven's Number. Can a proof for this be devised based on the proof for Liouville's number?) -I'm not a mathematician, so please type slowly! :-) - -REPLY [2 votes]: Liouville's Number makes its next appearance in Making Transcendence Transparent - Edward B. Burger, Robert Tubbs. The transcendence of $e + L$ is proved. (This might be the end of its career, though.)<|endoftext|> -TITLE: A series expansion for $\cot (\pi z)$ -QUESTION [15 votes]: How to show the following identity holds?
-$$ -\displaystyle\sum_{n=1}^\infty\dfrac{2z}{z^2-n^2}=\pi\cot \pi z-\dfrac{1}{z}\qquad |z|<1 -$$ - -REPLY [7 votes]: I have found a link which deals with this problem: people.reed.edu/~jerry/311/cotan.pdf<|endoftext|> -TITLE: Beta function derivation -QUESTION [29 votes]: How do I derive the Beta function using the definition of the beta function as the normalizing constant of the Beta distribution and only common sense random experiments? -I'm pretty sure this is possible, but can't see how. -I can see that -$$\newcommand{\Beta}{\mathrm{Beta}}\sum_{a=0}^n {n \choose a} \Beta(a+1, n-a+1) = 1$$ -because we can imagine that we are flipping a coin $n$ times. The $2^n$ unique sequences of flips partition the probability space. The Beta distribution with parameters $a$ and $n-a$ can be defined as the posterior over the coin's bias probability $p$ given the observation of $a$ heads and $n-a$ tails. Since there are ${n \choose a}$ such sequences for any $n$ and $a$, that explains the scaling factor, and we know that it all sums to unity since the sequences partition the probability space, which has total measure 1. -What I can't figure out is why: -$${n \choose a} \Beta(a+1, n-a+1) = \frac{1}{n+1} \qquad \forall n \ge 0,\quad a \in \{0, \dots, n\}.$$ -If we knew that, we could easily see that -$$\Beta(a + 1,n - a + 1) = \frac{1}{(n+1){n \choose a}} = \frac{a!(n-a)!}{(n+1)!}.$$ - -REPLY [10 votes]: The multinomial generalization mentioned by Qiaochu is conceptually simple but getting the details right is messy. The goal is to compute $$\int_0^1 \int_0^{1-t_1} \ldots \int_0^{1-t_1-\ldots-t_{k-2}} t_1^{n_1} t_2^{n_2} \ldots t_{k-1}^{n_{k-1}} t_k^{n_k} dt_1 \ldots dt_{k-1},$$ where $t_k = 1 - t_1 - \ldots - t_{k-1},$ for nonnegative integers $n_1, \ldots, n_k$. -Draw $k-1 + \sum_{i = 1}^{k}n_i$ numbers $X_1, \ldots, X_{k-1 + \sum_{i = 1}^{k}n_i}$ independently from a uniform $[0,1]$ distribution. Define $X_0 = 0$ for convenience. Let $E$ be the event that the numbers $X_1$ through $X_{k-1}$ are in ascending order and that, for $j = 1, \ldots, k$, the numbers $X_{k + \sum_{i = 1}^{j-1} n_i}$ through $X_{k + \sum_{i = 1}^{j}n_i - 1}$ lie between $X_{j-1}$ and $X_j$ (where for $j = k$ the upper bound $X_k$ is taken to be $1$). -Define a linear transformation from $(X_1, \ldots, X_{k-1}) \to (T_1, \ldots, T_{k-1})$ by $T_i = X_i - X_{i-1}$ for $i = 1, \ldots, k-1$. Note that the determinant of this linear transformation is 1 and it is therefore measure-preserving. Given values of $X_1$ through $X_{k-1}$, the conditional probability of $E$ is -$$\mathbb{P}[E|(X_1, \ldots, X_{k-1}) = (x_1, \ldots, x_{k-1})] = \prod_{i = 1}^{k}(x_i - x_{i-1})^{n_i} \mathbf{1}_{\{x_i > x_{i-1}\}}$$ (with $x_k = 1$). Marginalizing with respect to the distribution of $X_1 \times \ldots \times X_{k-1}$ gives -$$\begin{aligned} -\mathbb{P}[E] &= \int_{0}^1 \ldots \int_{0}^1 \prod_{i = 1}^{k}(x_i - x_{i-1})^{n_i} \mathbf{1}_{\{x_i > x_{i-1}\}} p_{X_1 \times \ldots \times X_{k-1}}(x_1, \ldots, x_{k-1}) dx_{k-1} \ldots dx_{1} \\ -&= \int_{0}^1 \int_{-t_1}^{1-t_1} \ldots \int_{-t_1 - \ldots - t_{k-1}}^{1 -t_1 - \ldots - t_{k-1}} \prod_{i = 1}^{k} t_i^{n_i} \mathbf{1}_{\{t_i > 0\}} p_{T_1 \times \ldots \times T_{k-1}}(t_1, \ldots, t_{k-1}) dt_{k-1} \ldots dt_{1} \\ -&= \int_0^1 \int_0^{1-t_1} \ldots \int_0^{1-t_1-\ldots-t_{k-2}} t_1^{n_1} \ldots t_{k-1}^{n_{k-1}} t_k^{n_k} dt_{k-1} \ldots dt_{1}, -\end{aligned}$$ -so if we can compute $\mathbb{P}[E]$ combinatorially we will have evaluated the desired integral.
-Let $\{R_i\}_{i \in \{1, \ldots, k-1 + \sum_{j = 1}^{k}n_j\}}$ be the ranks that the numbers $\{X_i\}_{i \in \{1, \ldots, k-1 + \sum_{j = 1}^{k}n_j\}}$ would have if sorted in ascending order. (Note that the numbers are all distinct with probability 1.) Since the numbers were drawn independently from a uniform distribution, the ranks are a random permutation of the integers $1$ through $k-1 + \sum_{i = 1}^{k}n_i$. Note that $E$ is exactly the event that $R_j = j + \sum_{i = 1}^j n_i$ for $j \in \{1, \ldots, k-1\}$ and that for each $l \in \{1, \ldots, k\}$, $$R_j \in \{l + \sum_{i = 1}^{l-1} n_i, \ldots, l + \sum_{i=1}^{l}n_i - 1\}$$ for $$j \in \{k+\sum_{i = 1}^{l-1}n_i, \ldots, k + \sum_{i = 1}^{l}n_i - 1\}.$$ There are $n_1!\ldots n_k!$ possible permutations which satisfy these conditions out of $(\sum_{i=1}^{k}n_i+k-1)!$ total possible permutations, so $$\mathbb{P}[E] = \frac{n_1!\ldots n_k!}{(\sum_{i=1}^{k}n_i+k-1)!}.$$<|endoftext|> -TITLE: Minimal Ellipse Circumscribing A Right Triangle -QUESTION [10 votes]: Find the equation of the ellipse circumscribing a right triangle whose sides have lengths $3,4,5$ and such that its area is the minimum possible one. -You may choose the origin and orientation of the $x,y$ axes as you want. -Motivation: It can be proved [Problem of the Week, Problem No. 8 (Fall 2008 Series), Department of Mathematics, Purdue University] that the area of this ellipse is $8\pi /\sqrt{3}$, without the need of using its equation, but I am also interested in finding it. - -Edit: picture from this answer. - -REPLY [2 votes]: Not much to contribute at this point except this enhanced picture: -(dashed lines: axes; gray lines: medians) -and the Mathematica notebook used to produce it. -(Thanks Isaac!)<|endoftext|> -TITLE: How can angular velocity or angular momentum be a vector? -QUESTION [6 votes]: Rotations in 3 dimensions are not commutative; however they are in the plane. In classical mechanics, are we allowed to say that angular momentum is a vector because particles only rotate along a single axis? Or do we make some kind of argument that you can assign angular velocity to an object because you can approximate it locally as being along a single rotation axis and so it's "locally commutative" and hence appropriate to call it a vector? If that is the case what happens if we have a particle rotating in a way that isn't differentiable? Is there some physical reason that particles can't rotate in that sort of way? -I'm really confused by this and haven't taken physics since high school; any help would be appreciated. :) - -REPLY [9 votes]: There are a couple of things at play here, which coincidentally(?) mirror the difference between mathematicians who say "vector" to mean anything that can be added commutatively and multiplied by a scalar, and physicists who say "vector" to mean a 3- (or 4-)dimensional quantity that transforms properly under change of coordinates. - -While arbitrary rotations in 3 dimensions do not commute, infinitesimal rotations do. (In fact any "infinitesimal" transformations commute, as you can see by multiplying $I + \epsilon A$ and $I + \epsilon B$ and ignoring second-order terms in $\epsilon$.) Since angular velocity can be thought of as "infinitesimal rotation per infinitesimal time", it ends up being a vector (in the mathematical sense) even though rotation itself isn't. Similarly, angular momentum is the derivative of kinetic energy with respect to angular velocity, so it is a gradient, which is a (dual) vector (to angular velocity).
-There's an interesting mapping between these infinitesimal rotation matrices and vectors (in the physical sense) which only works in 3 dimensions. If you think of an infinitesimal rotation as $I + \epsilon A$, one can show that $A$ must be antisymmetric. This means its diagonal entries are 0, leaving only 3 degrees of freedom in the off-diagonal entries. Such a matrix can be associated with a vector using the cross product: - -$$\begin{bmatrix}0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0\end{bmatrix} \mathbf{r} = \mathbf{\omega} \times \mathbf{r}$$ -So angular velocity isn't "naturally" a vector in physical space, but rather lives in a different 3-dimensional vector space. That's why once you interpret it as a physical vector, it turns out to transform slightly differently, as J.M. has pointed out in the comments.<|endoftext|> -TITLE: Permutation groups and symmetric groups -QUESTION [19 votes]: Wikipedia has separate pages for symmetric group and permutation group, but I don't understand what the difference between them is. A symmetric group on a set is the set of all bijections from the set to itself with composition of functions as the group operation. A permutation group on a set is the set of all permutations of elements of the set. -Aren't these two things the same thing? -On one of the discussion pages, someone suggested that permutation groups don't have to include all permutations: they just have to be collections of permutations on the set, closed under composition etc. But this seems weird. First, that's not how I was taught it, and second (thanks to Cayley's theorem) it looks like it's redundant: all groups are "permutation groups" on this reading. -Is there some subtle difference I'm missing? - -REPLY [14 votes]: "Permutation group" usually refers to a group that is acting (faithfully) on a set; this includes the symmetric groups (which are the groups of all permutations of the set), but also every subgroup of a symmetric group. -Although all groups can be realized as permutation groups (by acting on themselves), this kind of action does not usually help in studying the group; special kinds of actions (irreducible, faithful, transitive, doubly transitive, etc), on the other hand, can give you a lot of information about a group. For example, Jordan proved that the only finite sharply five transitive groups are $A_7$, $S_6$, $S_5$, and the Mathieu group $M_{12}$. (A "sharply five transitive group" is a group $G$ acting on a set $X$ with five or more elements, such that for every ten elements $a_1,\ldots,a_5,b_1,\ldots,b_5\in X$, with $a_i\neq a_j$ for $i\neq j$ and $b_i\neq b_j$ for $i\neq j$, there exists one and only one $g\in G$ such that $g\cdot a_i = b_i$). (In fact, Jordan showed that the only finite sharply $k$-transitive groups for $k\geq 4$ are $S_k$, $S_{k+1}$, $A_{k+2}$, $M_{11}$, and $M_{12}$; see http://en.wikipedia.org/wiki/Mathieu_group.) -You can think of a permutation group as a group $G$, together with a faithful action $\sigma\colon G\times X\to X$ on a set $X$ (faithful here means that if $gx=x$ for all $x$, then $g=e$). Cayley's Theorem tells you that every group $G$ can be thought of as a permutation group, by taking $X$ to be the underlying set of $G$, and $\sigma$ to be multiplication. But this gives you an embedding of $G$ into a very large symmetric group, because the set on which it is acting is large. You usually get more information if the set you are acting on is "small"-ish.
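-To make the Cayley embedding concrete, here is a minimal sketch (in Python, chosen purely for illustration; the choice of the group $\mathbb{Z}/4$ and all names are mine): each element $g$ of $\mathbb{Z}/4$ acts on the underlying set $\{0,1,2,3\}$ by left addition, and the map $x \mapsto g + x$ is a permutation, which realizes the group as a subgroup of $S_4$.

    # Cayley embedding of Z/4: element g gives the permutation x -> (g + x) mod 4
    n = 4
    for g in range(n):
        perm = tuple((g + x) % n for x in range(n))  # images of 0, 1, ..., n-1
        print(g, "->", perm)
    # Distinct elements g give distinct permutations (the action is faithful),
    # so Z/4 embeds into the symmetric group S_4.

-Note that the set being acted on is as large as the group itself, which is precisely the sense in which the Cayley embedding is usually too "large" to be informative.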
-The reason for Cayley's Theorem is that, historically, people only considered permutation groups: collections of functions that acted on sets (the sets of roots of a polynomial, the points on the plane via symmetries, etc). Cayley was trying to abstract the notion of group; he then pointed out that his more abstract definition certainly included all the things that people were already considering, and that in fact it did not introduce any new ones in the sense that every abstract group could be considered as a permutation group. But, as he pointed out, it is sometimes more convenient or useful to consider the group abstractly, sometimes to consider it as a group of permutations. Having both viewpoints is better than having just one. I think Cayley's Theorem has more historical interest than practical interest these days, but your mileage may vary.<|endoftext|> -TITLE: Split up $n \in \mathbb{N}$ into sum of naturals with maximum LCM -QUESTION [12 votes]: Question: -Given some natural number, we can of course split it up into various sums of other naturals (e.g. $7 = 6 + 1 = 1 + 4 + 2 = \ldots$) -More precisely, for $n \in \mathbb{N}$, we can find a distribution sequence (or partition) $s_0,\ldots,s_u \in \mathbb{N}$ with -$$\sum_{i=0}^{u}s_i = n$$ -Now how can I find the partition $s$ for which the overall least common multiple of all elements is maximal? -Or, formulated differently: the maximum product of the distinct prime factors of all elements. -Example: -$$ 7 \mapsto (3, 4); \Pi_{lcm} = 12 $$ -$$ 7 \mapsto (1, 2, 4); \Pi_{lcm} = 4$$ -$$ \ldots $$ -Here, the first solution is the desired one. -Background: -The background of my question is: How can I split up a sequence/string into contiguous subsequences that, when repeated and zipped together, will yield the longest possible resulting sequence? -For the above example, I could split up a string of length $7$ in the following way: -7: abcdefg 7: abcdefg - - I II -1: aaaa 3: abcabcabcabc -2: bcbc 4: defgdefgdefg -4: defg - -Of course, using the second distribution, the resulting sequence has a much greater period. -So: -What algorithm/approach can I use to solve this problem and maximize the product? Is this some known problem and how complex would calculating a solution be? It's not NP, I hope?! -Edit: Partial solution -As @KennyTM pointed out in the comments, Landau's function $g$ describes the maximum LCM, i.e. $g(7) = 12$. -So this actually becomes: How to actually produce the partition? Does knowledge of $g(x)$ help here, maybe for a dynamic programming solution? - -REPLY [4 votes]: The blog post here describes somebody else's efforts at coding this up. He gets an improvement over the naive approach by using some nice internal structure of the problem.<|endoftext|> -TITLE: Interesting Taxicab Problem? -QUESTION [5 votes]: I came up with this problem after discussion of taxicab geometry in math class... I thought it was a simple problem, but still pretty neat; however, I am as yet unsure whether my answer is correct, or logical. -Let $[X]$ be the area of region $X$, and region $S_n$ be represented by the equation $|x-n|+|y-n|=k-n$ for all $n=0,1,2,\ldots,k-1$. Now let region $R_n$ be the region between $S_n$ and $S_{n+1}$ and $L=\displaystyle\sum_{n=0}^{k-2}{[R_n]}$. Find the smallest positive integer $k$ such that $L > A$. ($A$ is any number you can plug in) -Can anyone else verify my result of $L=\frac{5k^2-k-4}{2}$?
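-A quick numerical sanity check of that formula (a sketch in Python; reading "the region between $S_n$ and $S_{n+1}$" as the part of the square $S_n$ not inside $S_{n+1}$ is an assumption of this check) estimates each $[R_n]$ by Monte Carlo sampling:

    import random

    def in_S(x, y, n, k):
        # S_n is the (filled) taxicab circle |x - n| + |y - n| <= k - n
        return abs(x - n) + abs(y - n) <= k - n

    def L_estimate(k, samples=100_000):
        total = 0.0
        for n in range(k - 1):
            lo, hi = n - (k - n), n + (k - n)      # bounding box of S_n
            box_area = (hi - lo) ** 2
            hits = 0
            for _ in range(samples):
                x, y = random.uniform(lo, hi), random.uniform(lo, hi)
                if in_S(x, y, n, k) and not in_S(x, y, n + 1, k):
                    hits += 1
            total += box_area * hits / samples
        return total

    k = 6
    print(L_estimate(k), (5 * k * k - k - 4) / 2)  # the two values should roughly agree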
- -REPLY [4 votes]: (diagram for k=10) -Each region $S_n$ is a square with sides of slope $\pm 1$, center at $(n,n)$, and side length $\sqrt{2}(k-n)$. Each pair of successive squares is positioned such that $S_{n+1}$ mostly overlaps $S_n$, but not quite. As the top and right vertices of every square are, respectively, on the same horizontal and vertical line and 1 unit apart (the centers of successive squares are shifted 1 unit up and 1 unit to the right), the rectangular region of $S_{n+1}$ that is not inside $S_n$ sticks out by $\frac{1}{\sqrt{2}}$ and has length equal to the side length of $S_{n+1}$, which is $\sqrt{2}(k-(n+1))$, so this rectangular region has area $k-n-1$. Thus, -$$\begin{align} -[R_n]&=(\text{area of }S_n)-(\text{area of }S_{n+1})+(\text{area of rectangular region}) -\\ -&=2(k-n)^2-2(k-n-1)^2+k-n-1 -\\ -&=5k-5n-3. -\end{align}$$ -Now, -$$\begin{align} -\sum_{n=0}^{k-2}[R_n]&=\sum_{n=0}^{k-2}(5k-5n-3) -\\ -&=5k\sum_{n=0}^{k-2}1-5\sum_{n=0}^{k-2}n-3\sum_{n=0}^{k-2}1 -\\ -&=5k(k-1)-5\left(\frac{(k-2)(k-1)}{2}\right)-3(k-1) -\\ -&=\frac{10k^2-10k-5k^2+15k-10-6k+6}{2} -\\ -&=\frac{5k^2-k-4}{2}. -\end{align}$$<|endoftext|> -TITLE: Why are the only numbers $m$ for which $n^{m+1}\equiv n \pmod{m}$ is true also unique for $\displaystyle\sum_{n=1}^{m}{n^m}\equiv 1 \bmod m$? -QUESTION [10 votes]: It can be seen here that the only numbers for which $n^{m+1}\equiv n \pmod{m}$ is true for all $n$ are 1, 2, 6, 42, and 1806. Through experimentation, it has been found that $\displaystyle\sum_{n=1}^{m}{n^m}\equiv 1 \bmod m$ is true for those numbers, and (as yet unproven) no others. Why is this true? -If there is a simple relation between $n^{m+1} \bmod{m}$ and $n^m \bmod{m}$, that would probably make this problem make more sense. It is obvious that $n^m \equiv 1 \pmod{\frac{m}{d}}$ (dividing out $n$ from both sides gives this result) for all $n$ on the interval $[1,m]$ where $d$ is a divisor of $m$. As a result of this, $n^m \bmod{m}$ takes on only values of the form $1+k \frac{m}{d} \pmod m$ where $k = -1, 0, 1$. How can it be shown that the sum of those values is equivalent to $1 \bmod{m}$? - -REPLY [2 votes]: Well, I've made a full proof! Part 1 was solved here, and Part 2 was solved here. -Lemma 1: Any integer $m$ which satisfies the original problem also satisfies $n^{m+1} \equiv n \bmod{m}$ for all $n$. -Proof: Let $p$ be a prime dividing $m$. Then $\sum_{n=1}^mn^m\equiv1\pmod p$, so $(m/p)\sum_{n=1}^{p-1}n^m\equiv1\pmod p$, so $p^2$ doesn't divide $m$ (otherwise $m/p\equiv 0\pmod p$ and the left side would be $0$). Let $g$ be a primitive root mod $p$. Then $\sum_{n=1}^{p-1}n^m\equiv\sum_{r=0}^{p-2}g^{rm}$. That's a geometric series; it sums to $(1-g^{(p-1)m})/(1-g^m)$, which is zero mod $p$ - unless $g^m=1$, in which case it sums to $-1$ mod $p$. Since the sum must be nonzero mod $p$, we need $g^m=1$, that is, $p-1$ dividing $m$. Then for every $n$ we get $n^{m+1}\equiv n\pmod p$: for $n$ not divisible by $p$ this follows from Fermat's little theorem ($n^{p-1}\equiv 1 \pmod p$ and $p-1 \mid m$), and it is trivial when $p\mid n$. Since $m$ is squarefree and this congruence holds for every prime $p$ dividing $m$, the Chinese Remainder Theorem gives $n^{m+1} \equiv n \bmod{m}$ for every integer $n$. -Lemma 2: There are only finitely many integers $m$ which satisfy $n^{m+1} \equiv n \bmod{m}$ for all $n$. -Proof: For a prime $p$ dividing $m$, setting $n=p$ in $n^{m+1} \equiv n \bmod{m}$ shows that $p^2$ does not divide $m$, so we may let $m = p_1 \ldots p_r$ with $p_1 < p_2 < \ldots < p_r$, with $p_i$ prime. Setting $n$ equal to a primitive root mod $p_i$ shows that $p_i-1 \mid m$, and since every prime factor of $p_i-1$ is smaller than $p_i$ and $m$ is squarefree, it follows that $p_i-1|p_1 \ldots p_{i-1}$ for $i = 1, \ldots, r$.
If we take $i = 1$, this forces $p_1-1|1$, so if $r \ge 1$, $p_1 = 2$. If $i = 2$, $p_2-1|2$, so if $r \ge 2$, $p_2 = 3$. Continuing, if $r \ge 3$, then $(p_3-1)|p_1 p_2 = 6$, so $p_3 = 7$; if $r \ge 4$, $(p_4 - 1)|p_1 p_2 p_3 = 42$, so $p_4 = 43$, as the numbers $d+1$ are not prime for other divisors $d$ of 42 larger than 6. If $r \ge 5$, then $(p_5 -1)|p_1 p_2 p_3 p_4 = 1806$, but 1, 2, 6 and 42 are the only divisors of 1806 with $d+1$ prime, so $p_5$ cannot exist. Therefore, $r \le 4$ and $m \in \{1, 2, 6, 42, 1806\}$.<|endoftext|> -TITLE: Infinite processes riddle -QUESTION [5 votes]: A train with infinitely many seats, one for each rational number, stops in countably many villages, one for each positive integer, in increasing order, and then finally arrives at the city. -At the first village, two women board the train. -At the second village, one woman leaves the train to go visit her cousin, and two other women board the train. -At the third village, one woman leaves the train to go visit her cousin, and two other women board the train. -At the fourth village, and in fact at every later village, the same thing keeps happening: one woman off to visit her cousin, two new women on board the train. How many women arrive at the city? - -REPLY [4 votes]: The problem is not well posed, since you don't specify which woman leaves the train at each station. See Ross–Littlewood paradox.<|endoftext|> -TITLE: Can a non-abelian subgroup be such that the right cosets equal the left cosets? -QUESTION [7 votes]: So, I know that every abelian (commutative) group $G$ is such that, for any subgroup $H$, the left cosets of $H$ in $G$ are the right cosets. I guess this is true even if $G$ is not abelian, but $H$ is (but I'm not sure). Is this enough to characterise the subgroup as abelian, or are there examples of non-abelian subgroups with this property? - -REPLY [10 votes]: The left cosets of $H$ are the same as the right cosets -of $H$ if and only if $H$ is a normal subgroup of $G$. -There are non-abelian groups all subgroups of which are normal, -for example the quaternion group of order $8$. - -REPLY [6 votes]: For a subgroup of a group, the condition of being abelian is neither necessary nor sufficient for being normal: -If $G$ is non-abelian, then $H=G$ is a normal but non-abelian subgroup. As another example, the alternating group is a normal but non-abelian subgroup of the symmetric group. -For a counterexample in the converse direction, consider the free group with two generators. The subgroup generated by one of the generators is abelian (isomorphic to $\mathbb Z$) but not normal. (This means that your guess above is wrong.)<|endoftext|> -TITLE: Is there a natural model of Peano Arithmetic where Goodstein's theorem fails? -QUESTION [25 votes]: Goodstein's Theorem is the statement that every Goodstein sequence eventually hits 0. It is known to be independent of Peano Arithmetic (PA), and in fact, was the first such purely number theoretic result. It is provable in ZFC. -One way of phrasing this is that the theory "PA + Goodstein's theorem is false" is consistent (assuming PA is). -By Godel's completeness theorem, there must exist a model of PA in which Goodstein's theorem fails. In fact, applying the downward Lowenheim-Skolem theorem, we may assume this model is countable. -However, in the interest of speaking about this result to a group of grad students (of various interests), I'd like to run this backwards.
-So, - -is there some known, obvious, or easy to construct countable nonstandard model of $PA$ in which Goodstein's theorem fails? - -In answering this, I'm willing to accept the "foundational" first order logic theorems: Godel's completeness and compactness results, the Lowenheim-Skolem theorem. -Here is the kind of answer I'd really like: There is an explicit countable collection $\Sigma = \{\phi_n\}$ of first order sentences (possibly in a slightly larger language) such that $\mathbb{N}$ is a model of $PA + \Sigma_0$ for any finite $\Sigma_0\subseteq \Sigma$, and such that $PA + \Sigma$ implies Goodstein's theorem is false. - -One approach I've thought of is to first enlarge the language by adding a constant symbol c. Next, let $\phi_n$ be the statement "The Goodstein sequence for $c$ takes longer than "n" steps to terminate". (While I do not personally know how to encode "the Goodstein sequence for $c$" in first order language, I am confident it can be done, for otherwise, one could not even formulate "PA proves the Goodstein sequence converges".) -In this case, $\mathbb{N}$ is a model of any $PA + \Sigma_0$ by simply setting c = n+1, where n is the largest subscript of a $\phi_k$ in $\Sigma_0$ (which exists because $\Sigma_0$ is finite). -By Godel's and Lowenheim-Skolem theorems, $PA + \Sigma$ has a countable model $M$. Then the interpretation of $c$ in this model satisfies $\phi_n$ for all $n$, and hence the Goodstein sequence doesn't terminate for $c$ in this model. -However, since the independence of Goodstein's theorem was so difficult to prove, I'm quite certain there's a mistake in this line of reasoning (though I don't know where). I'd love for someone to patch this up into something correct. -As always, please feel free to retag as necessary, and thank you for the responses! - -REPLY [14 votes]: Here's the flaw in the compactness argument you sketched. In the model you construct, the interpretation of $c$ will indeed be a nonstandard number whose Goodstein sequence does not halt in a standard number of steps. But Goodstein's theorem has a universal quantifier for the number of steps, so nonstandard "Goodstein sequences" are also permitted. That is, when the theorem says "there is a finite sequence", in a particular model that "finite" sequence might have the length of a nonstandard number, which cannot be excluded by the sequence of sentences in your argument. -One way to find an extension of PA where Goodstein's theorem is provable is to add enough transfinite induction to PA to formalize the usual proof of Goodstein's theorem. The details of how to do that are not so bad, although they may take too much time for an elementary class.<|endoftext|> -TITLE: Algebra - Find the equation of the line perpendicular to 3x+2y-4=0 going through point (2,-3) -QUESTION [5 votes]: I was wondering if you could help... -I have Math homework, and I was hoping you could check my answer. -Find the equation of the line perpendicular to $3x+2y-4=0$ going through point $(2,-3)$. -$y=\large\frac{-3x+4}{2}$ -$y = 2$ -$x=-\large\frac{2(y-2)}{3}$ -$x=-3$ -Therefore my equation is correct? -thanks for your help in advance, guys. - -REPLY [5 votes]: If two lines are perpendicular, the product of their slopes is -1. This is often restated as: the slope of a line perpendicular to a given line is the opposite reciprocal of the slope of the given line. For example, the line $6x-15y+3=0$ has slope $\frac{2}{5}$, so a line perpendicular to it will have slope $-\frac{5}{2}$.
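-To spell out where that slope came from (a worked version of the example line above, not the one from the problem), put the equation into slope-intercept form: -$$6x-15y+3=0 \;\Longrightarrow\; y=\frac{6x+3}{15}=\frac{2}{5}x+\frac{1}{5},$$ -so its slope is $\frac{2}{5}$ and any line perpendicular to it has slope $-\frac{5}{2}$. Once you have a slope $m$ and a point $(x_1,y_1)$, the point-slope form $y-y_1=m(x-x_1)$ produces the equation of the line.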
-With that fact, you should be able to determine the slope of the line for which you are finding an equation, and you know a point on the line. Those two pieces of information should be enough for you to write an equation of the line perpendicular to $3x+2y-4=0$ going through point $(2,-3)$.<|endoftext|> -TITLE: Can I skip the first chapter in Rudin's Principles of Mathematical Analysis? -QUESTION [7 upvotes]: I am a statistician who wishes to learn real analysis in order to better understand the foundations of statistics. With that aim in mind I plan to go through Rudin's classic "Principles of Mathematical Analysis". -Given the above context, can I skip chapter 1? It seems to me that the material in chapter 1 is not as important to someone with my goals. I understand that it may help in establishing the need for rigor in mathematical arguments, but that is something I already appreciate. In particular, I wonder whether skipping chapter 1 will impede my understanding of the material in subsequent chapters? -Any advice will be appreciated. - -REPLY [25 votes]: I'll go through the sections of the third edition, motivating why you should/shouldn't consider them. -INTRODUCTION -You should carefully read this. This section is not so important for the subsequent development, but it's fundamental for your understanding of the utility of $\mathbb{R}$: it contains an enlightening example which shows you (with some weird algebraic trick) that $\mathbb{Q}$ contains gaps, and so we really need to "patch" it in order to do interesting things. -ORDERED SETS -You should read only the definitions of bound and least upper bound (or supremum). They really matter and are used, explicitly or not, in many theorems. In particular, the latter is subtle and you should practice with it, for example with exercises 4 and 5 at the end of the chapter. The rest of the section concerns ordered fields; you can skip it if you are used to working with $\mathbb{R}$ and its ordering, and this practical experience should suffice for you. Perhaps you will find it strange and interesting that $\mathbb{C}$ cannot be ordered without destroying its algebraic properties. -FIELDS -You can skip this. If your interest is real analysis, your only field will be $\mathbb{R}$ (perhaps $\mathbb{C}$ sometimes) and, as said above, the practical properties of fields and orderings should be enough for you (e.g. if $a,b,c \in \mathbb{R}$ and $a < b$ then $a + c < b + c$, you cannot divide by zero, etc.) -THE REAL FIELD -There are two ways to build $\mathbb{R}$. The first is axiomatic: you say "I'd like to work in a place that has such-and-such properties" and, magic! You have it by axiom. The second way is constructive: you take $\mathbb{Q}$, do something to it, and come up with a mathematical structure that acts as $\mathbb{R}$, has the properties of $\mathbb{R}$, and is eventually called $\mathbb{R}$. This is very subtle and not practically useful; you should skip the latter method, reported in the appendix of the chapter, and know that when you follow the axiomatic method you are speaking of something that exists, in some mathematical sense. You should also consider theorem 1.20 (Archimedean property and density of $\mathbb{Q}$ in $\mathbb{R}$); if you don't read the proof, at least carefully read the statement: it is used a lot and justifies some mysterious facts such as: if $a \in \mathbb{R}$ and $0 \leq a < \epsilon$ for all $\epsilon > 0$ then $a = 0$.
Jump over the existence of the n-th root of a positive real; it's intuitive and you can prove it later in different (and simpler) ways. -THE EXTENDED REAL NUMBER SYSTEM -Not only is it not very useful, I think it's dangerous to introduce symbols for infinity when someone still isn't completely conscious of what infinity is and how it acts in many theorems of analysis. Skip. -THE COMPLEX FIELD -As in the case of the real field, if you know what $\mathbb{C}$ is and how to work with it, you can safely skip this, or read it later if you need. The only things that you probably need are the triangle inequality in theorem 1.33 and theorem 1.35, known as the Cauchy–Schwarz inequality. -EUCLIDEAN SPACES -Read it; it's used in the following chapters. -APPENDIX -As said above, skip it. -EXERCISES -As said above, 4 and 5 are very useful. I also suggest you work on 6 and 7; they teach you what we mean when we say things like $3^{\pi}$ or when we talk about logarithms. - -REPLY [4 votes]: From a strategic point of view: given that mathematical terminology and formalism are not totally standardised, it's worth reading through the chapter quickly, if only to stop yourself later having to refer back to what the various terms and symbols mean. -This advice goes for any book, I guess. -From the point of view of content: no one but you is in a position to really decide whether you understand the material enough to skip the chapter.<|endoftext|> -TITLE: Solving an equation with irrational exponents -QUESTION [5 upvotes]: Is there any theory (analogous to Galois theory) for solving equations with irrational exponents like: -$ x^{\sqrt{2}}+x^{\sqrt{3}}=1$ -? - -REPLY [14 votes]: The study of such equations is not "abstract algebra" as it is usually understood. The reason is that to even define the function $x^{\sqrt{2}}$, for example, requires analysis; one has to prove certain properties of $\mathbb{R}$ to ensure that such a function exists. This is in marked contrast to the case of integer or rational powers, where one has a purely algebraic definition and the background theory is equational. To define the function $x^{\sqrt{2}}$ one has to either define $e^x$ and the logarithm or consider a limit of functions $x^{p_n}$ where the $p_n$ form a sequence of rational approximations to $\sqrt{2}$, and this is irreducibly non-algebraic stuff. -In particular, while polynomials can be studied in an absurdly general setting, transcendental equations like those you describe are more or less restricted to $\mathbb{R}$ (or $\mathbb{C}$ if you really want to pick a branch of the logarithm). The LHS is an increasing function of $x$, so there is at most one root, which probably one can really only compute numerically. (In fact a root exists in $(0,1)$: the LHS tends to $0$ as $x \to 0^{+}$ and equals $2$ at $x = 1$.) -This is another question which touches on a theme which has come up several times on math.SE, which is that exponentiation should really not be thought of as one operation. Instead, it is a collection of related operations with various degrees of generality and applicability which happen to share the same algebraic properties, and one should not infer too much about how similar these operations are.<|endoftext|> -TITLE: Book recommendation on the history of PDE/ODE? -QUESTION [5 upvotes]: I would like to know things like what the first PDE was, etc. -Could you recommend a book on the history of PDE/ODE? -Thanks.
- -REPLY [2 votes]: SIAM (The Society for Industrial and Applied Mathematics) has collected some oral interviews and other materials that have some bearing on the history of ordinary and partial differential equations: -http://history.siam.org/ -There is also a lot of material on this topic that is related to more specialized historical studies: -History of Hydraulics by Hunter Rouse and Simon Ince -A History of the Theory of Elasticity and of the Strength of Materials ... -by Isaac Todhunter -and work on the Euler equations and the Navier-Stokes equations.<|endoftext|> -TITLE: easy to implement method to fit a power function (regression) -QUESTION [14 upvotes]: I want to fit to a dataset a power function ($y=Ax^B$). What is the best and easiest method to do this? I need the $A$ and $B$ parameters too. -I'm generally using financial data in my project, which looks like this: -8553600 458.2 -17193600 373.6 -25833600 694.16 -34646400 738.33 -44064000 817.89 -54259200 1040.67 -67910400 1032.69 -76291200 1222.1 -84844800 1245.65 -94089600 1217.44 -102211200 1579.38 -110592000 1859.24 -118886400 1711.67 -127612800 2303.62 -136684800 2658.26 -219196800 3669.23 -225676800 3525.02 -225763200 3749.27 - -I need to implement the algorithm in a Java-like language called ActionScript. - -REPLY [17 votes]: There's the obvious approach of proceeding based on taking the logarithm of both sides of your regression formula: -$$\ln\;y=\ln\left(ax^b\right)$$ -$$\ln\;y=\ln\;a+b\ln\;x$$ -which can be seen to be an expression of the form -$$v=k+bu$$ -where $v=\ln\;y$, $u=\ln\;x$, and $k=\ln\;a$. -Now a linear regression in the variables $v$ and $u$ applies. -In particular, formula numbers 16, 20, 27 and 28 of this page now apply. -Once you have the slope $b$ and the intercept $k$ of the fitted line, $a$ is just the exponential of the intercept ($a=\exp\;k$), and $b$ is used directly. -I note here the possible caveat of the fit parameters being biased towards the data with ordinates small in magnitude. The rigorous way of going about it would be to treat the parameters from the linear regression as provisional and then apply a nonlinear least-squares algorithm like Levenberg-Marquardt to the data, using the parameters from the linear regression as a starting point. This may or may not be needed though; it really depends on the data you have. - -I'll polish off comments I gave earlier: again, the problem with using the logarithm to linearize your nonlinear function is that it tends to overemphasize the errors in the small values of $y$. Remember that the assumption of linear least squares is that the abscissas are accurate, but the ordinates are contaminated by error. -In other words, the $y_i$ are actually of the form $\hat{y}_i\pm\sigma_i$ where the $\hat{y}_i$ are the "true values" (presumably unknown), and the $\sigma_i$ are inherent uncertainties. If one takes the logarithms of the $y_i$, the uncertainties are also transformed, and we have to take this into account. -The key formula is that if the $y_i$ are transformed by a function $f(y)$, the $\sigma_i$ are transformed according to $f'(y_i)\sigma_i$.
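-Since the asker wants something directly implementable in an ActionScript/Java-style language, here is a minimal sketch in Python of this weighted log-linear fit; the weights $w_i = y_i^2$ and the formulas it implements are exactly those derived next, while the function name and structure are my own illustration and port line-for-line to ActionScript:
-import math
-
-def fit_power_law(xs, ys):
-    # Fit y = a * x^b by linear regression on (ln x, ln y),
-    # weighted by w_i = y_i^2 to compensate for the log transform.
-    u = [math.log(x) for x in xs]
-    v = [math.log(y) for y in ys]
-    w = [y * y for y in ys]
-    m = sum(w)
-    ubar = sum(wi * ui for wi, ui in zip(w, u)) / m
-    t = sum(wi * (ui - ubar) ** 2 for wi, ui in zip(w, u))
-    b = sum(wi * vi * (ui - ubar) for wi, ui, vi in zip(w, u, v)) / t
-    a = math.exp(sum(wi * vi for wi, vi in zip(w, v)) / m - b * ubar)
-    return a, b
-The returned $a, b$ are the provisional values; they can then be polished by the separable nonlinear step described below.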
-For the case at hand, the objective function we now have to minimize is of the form -$$f(a,b)=\sum_i{y_i^2\;(\ln\;y_i-\ln\;a-b\ln\;x_i)^2}$$ -and we have to modify the formulae for linear regression accordingly: -$$m=\sum_i y_i^2$$ -$$\bar{x}=\frac{\displaystyle \sum_i y_i^2 \ln\;x_i}{m}$$ -$$t=\sum_i y_i^2 (\ln\;x_i-\bar{x})^2$$ -whereupon -$$b=\frac{\displaystyle \sum_i y_i^2\ln\;y_i (\ln\;x_i-\bar{x})}{t}$$ -and -$$a=\exp\left(\frac{\displaystyle \sum_i y_i^2\ln\;y_i}{m}-b\bar{x}\right)$$ -These should be better provisional values for later polishing. - -It turns out that for a separable nonlinear fit (linear in one of the parameters), the NLLS problem greatly simplifies. -Remember that the actual quantity that we have to minimize is -$$F(a,b)=\sum_i{(y_i-a x_i^b)^2}$$ -If we take the gradient $\nabla F(a,b)$: -$$\nabla F(a,b)=(2a\sum_i x_i^{2b}-2\sum_i x_i^b y_i\quad 2a^2\sum_i \ln\left(x_i\right) x_i^{2b}-2a\sum_i \ln\left(x_i\right) x_i^b y_i)^T$$ -equate both components to 0 (dividing the second through by $a \neq 0$), and then eliminate the linear parameter $a$, we get the univariate nonlinear equation in $b$: -$$\left(\sum_i x_i^b y_i\right)\left(\sum_i \ln\left(x_i\right) x_i^{2b}\right)-\left(\sum_i x_i^{2b}\right)\left(\sum_i \ln\left(x_i\right) x_i^b y_i\right)=0$$ -which can be attacked with standard techniques, e.g. the secant method or Newton's method. But, how do we start the iteration? Why, with the provisional $b$ we got from the (weighted) linear regression earlier! -Having gotten $b$, it is now a simple matter to get $a$: -$$a=\left(\sum_i x_i^b y_i\right) / \left(\sum_i x_i^{2b}\right)$$ -and you now have your parameters.<|endoftext|> -TITLE: Diagonal update of the inverse of $XDX^T$ -QUESTION [8 upvotes]: I have a matrix $F=XDX^T$, where $D$ is $m\times m$ and diagonal and $X$ is $n\times m$. -Now, I compute $F^{-1}$. -Is there an efficient method to update $F^{-1}$ if $D$ is updated by $D'=D+G$, where $G$ is a sparse diagonal matrix? - -REPLY [6 votes]: Yes, there is a very efficient way to compute the updated $F^{-1}$. Begin by writing $X$ in terms of its columns: $X = [x_1\; x_2\; \dots\; x_m]$. It follows that: -$$ -XGX^T = \sum_{i=1}^{m} G_{ii} x_ix_i^T = \sum_{i\in S} G_{ii} x_ix_i^T -$$ -where $S \subset \{1,\dots,m\}$ is the set of indices for which $G_{ii}$ is nonzero. Suppose that $S$ contains $r$ indices. Since $G$ is sparse by assumption, $r$ is much smaller than $m$. If we assemble the columns of $X$ corresponding to indices in $S$, we obtain a new matrix $Y$, such that: -$$ -XGX^T = YHY^T -$$ -where $Y$ is $n \times r$, and $H$ is a diagonal matrix whose diagonal consists of all the nonzero entries along the diagonal of $G$. Now apply the Woodbury matrix identity: -$$ -(F+YHY^T)^{-1} = F^{-1} - F^{-1}Y(H^{-1}+Y^TF^{-1}Y)^{-1}Y^TF^{-1} -$$ -Computing the left-hand side directly is costly, as it requires inverting an $n\times n$ matrix. However, since $F^{-1}$ has already been computed, we can compute the right-hand side efficiently; the only inverse required is that of an $r\times r$ matrix, and $r$ is much smaller than $m$.<|endoftext|> -TITLE: On the functional square root of $x^2+1$ -QUESTION [55 upvotes]: There are some math quizzes like: -find a function $\phi:\mathbb{R}\rightarrow\mathbb{R}$ -such that $\phi(\phi(x)) = f(x) \equiv x^2 + 1.$ - -If such $\phi$ exists (it does in this example), $\phi$ can be viewed as a "square root" of $f$ in the sense of function composition because $\phi\circ\phi = f$. Is there a general theory on the mathematical properties of this kind of square root?
(For instance, for what $f$ will a real analytic $\phi$ exist?) - -REPLY [3 votes]: Using the same approach as in this answer, it is possible to produce the functional square root by using: -$$\phi_0(x)=|x|^{\sqrt2}$$ -$$\phi_{n+1}(x)=f^{-1}(\phi_n(f(x)))=\sqrt{\phi_n(x^2+1)-1}$$ -which converges uniformly to a function $\phi$ satisfying $\phi(\phi(x))=f(x)$, which can be seen by noting that -$$\phi_n(\phi_n(x))=f^{-n}(|f^n(f^{-n}(|f^n(x)|^{\sqrt2}))|^{\sqrt2})=f^{-n}(f^n(x)^2)$$ -is nearly equal to -$$\phi_n(\phi_n(x))\simeq f^{-n}(f^n(x)^2+1)=f(x)$$ -with the error decreasing with each application of $f^{-n}$ since $|f^{-1}(a+\epsilon)-f^{-1}(a)|\simeq\epsilon/\sqrt a\ll\epsilon/2$ implies that -$$|\phi_n(\phi_n(x))-f(x)|\ll1/2^n$$ -and in fact the error tends to zero much faster, since initially $a=f^n(x)^2\gg f^2(x)^{2^{n-1}}$ is insanely large. - -There is the additional caveat that technically this requires us to compute iterations of $f$ since we have -$$\phi_n(x)=f^{-n}(|f^n(x)|^{\sqrt2})$$ -where $f^n(x)$ is far too large to compute. We note two things at this point: - -The convergence is at least quadratic, so one should not need to use very large $n$ anyways. -If one does need to use larger $n$, it is better to use an improved $\phi_0$. Expanding on what is shown in the linked question, we can use the improved $$\phi_0(x)=|x|^{\sqrt2}+\frac1{\sqrt2}|x|^{\sqrt2-2}-\frac12|x|^{-\sqrt2}$$ to get faster convergence. Note that we only care about the behavior as $x\to\infty$ since $f^n(x)$ is large.<|endoftext|> -TITLE: Fundamental group of GL(n,C) is isomorphic to Z. How to learn to prove facts like this? -QUESTION [31 upvotes]: I know that the fundamental group of $GL(n,\mathbb{C})$ is isomorphic to $\mathbb{Z}$; it's stated on Wikipedia. Actually, I've succeeded in proving this, but my proof is two pages long and very technical. I want - -to find some better proofs of this fact (in order to compare them to mine); -to find some book or article from which I can learn how to calculate fundamental groups and, more generally, connected components of spaces of maps from one space to another; -to find something for reference, which I can use in order to learn how to write such proofs nicely, using standard terminology. - -REPLY [30 votes]: The first thing you have to do is to note that the inclusion $U(n)\to\mathrm{GL}(n,\mathbb C)$ induces an isomorphism on the fundamental groups. This can be done by noting that a loop in $\mathrm{GL}(n,\mathbb C)$ can be deformed to one in $U(n)$ by performing the Gram-Schmidt procedure at each point of the loop, and checking that this can be done continuously and so on. -Next, considering the beginning of the long exact sequence for the homotopy groups of the spaces appearing in the fibration $$U(n-1)\to U(n)\to S^{2n-1}$$ which arises from the transitive linear action of $U(n)$ on $S^{2n-1}\subseteq\mathbb C^{n}$, you can prove, by induction, that the inclusion $U(1)\to U(n)$ induces an isomorphism on fundamental groups. -Then you can explicitly describe $U(1)$ as a space homeomorphic to $S^1$. - -REPLY [26 votes]: The method Mariano discusses in his answer is absolutely the way that mathematicians compute fundamental groups (and also higher homotopy groups) of Lie groups. Here I just want to mention how his first step applies in a more general context. -1) Concerning $\operatorname{GL}_n(\mathbb{C})$: the unitary group $U(n)$ is a maximal compact subgroup of $\operatorname{GL}_n(\mathbb{C})$, and moreover any maximal compact subgroup is conjugate to $U(n)$.
(This can be seen by considering Hermitian forms, cf., e.g., Section 1 of http://math.uga.edu/~pete/8410Chapter9.pdf.) Moreover, the Gram-Schmidt process gives a deformation retraction from $\operatorname{GL}_n(\mathbb{C})$ to $U(n)$, hence these two spaces are homotopy equivalent. And in fact even more is true: there exists a finite-dimensional Euclidean space $E$ such that $\operatorname{GL}_n(\mathbb{C})$ is homeomorphic to $U(n) \times E$: this is the QR decomposition. Moreover: -2) Everything in 1) goes over verbatim for $\operatorname{GL}_n(\mathbb{R})$ with the unitary group $U(n)$ replaced by the orthogonal group $O(n)$. -3) For any reductive group $G$ over $\mathbb{R}$ or $\mathbb{C}$, there exists a maximal compact subgroup $K$, any two such are conjugate in $G$, and $G$ is homeomorphic to the product of $K$ with a finite-dimensional Euclidean space. This last fact is a consequence of the Iwasawa decomposition, a far-reaching generalization of the QR-decomposition. - -REPLY [3 votes]: For 2. and 3. I can recommend Allen Hatcher's Algebraic Topology book, which can be accessed free from the author's webpage: http://www.math.cornell.edu/~hatcher/AT/ATpage.html<|endoftext|> -TITLE: Recovering the two $SU(2)$ matrices from $SO(4)$ matrix -QUESTION [22 upvotes]: Since there is a $2$-$1$ homomorphism from $SU(2)\times SU(2)$ to $SO(4)$, there should be a way to recover the two $SU(2)$ matrices given an $SO(4)$ matrix. -I believe I could set this up as a system of equations using the map from above and solve for the coefficients of the $SU(2)$ matrices. However, I wonder if anyone has already done this or can point me to the formulas? - -REPLY [4 votes]: In terms of SU(2) = unit quaternions, the answer can be found in the sections "Isoclinic decomposition" and "Relation to quaternions" of the Wikipedia article about SO(4). -(This is equivalent to Robin Chapman's answer, I presume, but the formulas are a bit more explicit.)<|endoftext|> -TITLE: Units of $M_2(Z)$ -QUESTION [7 upvotes]: In one of my classes we discussed the ring of $2\times 2$ matrices $M_{2}(\mathbb{Z})$. We said that its group of units was $GL_{2}(\mathbb{Z})$, which means that it is the set of $2\times 2$ integer matrices with determinant equal to $\pm 1$. -Why can't we have a $2\times 2$ matrix with entries $a,b,c$, and $d$ such that $\frac{a}{ad-bc}$, $\frac{-b}{ad-bc}$, $\frac{-c}{ad-bc}$, and $\frac{d}{ad-bc}$ are all integers? -I'm sure it's a simple contradiction argument, but I couldn't see it. So if anyone knows a quick elementary argument, it'd be greatly appreciated. - -REPLY [4 votes]: HINT $\;\;$ Multiplicative maps preserve units: $\rm\; MN = 1 \;\;\Rightarrow\;\; d(M)\:d(N) = 1$ -NOTE $\rm\;\; d(1) = 1\;$ via applying $\rm d$ to $1\cdot 1 = 1\:$ then cancelling $\rm d(1)\ne 0,$ valid since $\mathbb Z$ has cancellation.<|endoftext|> -TITLE: What do all the $k$-cycles in $S_n$ generate? -QUESTION [18 upvotes]: Why don't $3$-cycles generate the symmetric group? was asked earlier today. The proof is essentially that $3$-cycles are even permutations, and products of even permutations are even. -So: do the $3$-cycles generate the alternating group? Similarly, do the $k$-cycles generate the alternating group when $k$ is odd? -And do the $k$-cycles generate the symmetric group when $k$ is even? I know that transpositions ($2$-cycles) generate the symmetric group. - -REPLY [34 votes]: If $n\geq5$, then the only normal subgroups of the symmetric group $S_n$ are the trivial group, the alternating group and the symmetric group itself.
Since the $k$-cycles form a full conjugacy class, it follows that the subgroup they generate is normal. This determines everything if $n \geq 5$. -More specifically: the $k$-cycles in $S_n$ generate the alternating group if $k$ is odd and $k \ne 1$; they generate the full symmetric group if $k$ is even.<|endoftext|> -TITLE: What is the value of $1^i$? -QUESTION [46 upvotes]: What is the value of $1^i$? $\,$ - -REPLY [2 votes]: $1^{i} = e^{i\log(1)} = e^{0}=1$ (taking the principal value $\log 1 = 0$)<|endoftext|> -TITLE: Is there a name (and use) for an average based on the unique values of a set of data? -QUESTION [7 upvotes]: Consider the following data points: $1, 1, 2, 3, 4$ -I understand that... -the average is the total of the numbers divided by the count of numbers in the set, the median is the central value based on location in the set, the mode is the value occurring most often in the set, and the midrange is the sum of the highest and lowest values divided by $2$. -Is there a name for the calculation of the average based on the unique values found in the set? So... $\frac{1 + 2 + 3 + 4}{4} = 2.5$? And is there a use for it? -Pardon a possibly elementary question: I'm a programmer but I'm not exactly a math-oriented person, and this has been bugging me recently. - -REPLY [2 votes]: You could conceive of your result as being a weighted average. In a batch of data consisting of $n_i \ge 1$ instances of $x_i$, $1 \le i \le n$, your average is -$$ \frac{ \sum_{i=1}^{n} x_i}{n} - = \frac{ \sum_{i=1}^{n} \left(\frac {1}{n_i} n_i x_i \right) }{n} - = \frac{ \sum_{i=1}^{n} \sum_{j=1}^{n_i} \left( \frac {1}{n_i} x_i \right) }{ \sum_{i=1}^{n} \sum_{j=1}^{n_i} \frac {1}{n_i}}$$ -exhibiting the weights as $1/n_i$.<|endoftext|> -TITLE: Pivoting in LU decomposition -QUESTION [6 upvotes]: I tried to find some reference on the net but couldn't find a good one. What is the advantage of pivoting in LU decomposition over regular LU decomposition? Is it something to do with whether the matrix is singular or not? - -REPLY [6 votes]: To further generalize Rahul's answer, any matrix that has a singular leading block cannot have an LU decomposition. By allowing pivoting (or in matrix factorization terms, allowing the multiplication of your original matrix by an appropriate permutation matrix), all matrices admit an LU decomposition. This is the explanation for pivoting in exact arithmetic. -In inexact arithmetic, the condition "singular" in the explanation above is replaced by the term "ill-conditioned". Every matrix has an associated condition number, which is defined as the product of the norm of the matrix and the norm of its inverse. In attempting to proceed with the LU decomposition of a matrix with an ill-conditioned leading block, you will hit a point where you have to divide by a small number (resulting in a number that may be much larger in magnitude than the matrix's original entries), which causes all sorts of trouble in the succeeding additions/subtractions that have to be done to finish the LU decomposition. -By pivoting, we avoid (or at the very least delay) the onset of encountering numbers much larger than the entries of the original matrix, which is one way precision is lost in the operations.
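-A tiny numerical illustration of this precision loss (a sketch of my own in Python; the $10^{-20}$ pivot is deliberately extreme): eliminating without row swaps on a matrix whose leading entry is tiny destroys the computed solution, while a pivoted solver recovers it.
-import numpy as np
-
-def solve_no_pivot(A, b):
-    # Gaussian elimination with no row swaps: the multiplier m blows up
-    # whenever a pivot A[k, k] is tiny, wiping out information.
-    A = A.astype(float).copy()
-    b = b.astype(float).copy()
-    n = len(b)
-    for k in range(n):
-        for i in range(k + 1, n):
-            m = A[i, k] / A[k, k]
-            A[i, k:] -= m * A[k, k:]
-            b[i] -= m * b[k]
-    x = np.zeros(n)
-    for i in range(n - 1, -1, -1):
-        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
-    return x
-
-A = np.array([[1e-20, 1.0], [1.0, 1.0]])
-b = np.array([1.0, 2.0])      # the true solution is essentially (1, 1)
-print(solve_no_pivot(A, b))   # [0. 1.]: the x-component is lost entirely
-print(np.linalg.solve(A, b))  # LAPACK uses partial pivoting: [1. 1.]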
-Of course, one could make the objection that the coefficients are badly scaled: for instance, if you have two equations in two unknowns, and you multiply both sides of either of the two equations by a number much smaller or much larger than the original coefficients, the solution to the system is still the same, but attempting to perform LU decomposition on the transformed system can be disastrous. This is where concepts like "scaled pivoting" come in, where relative instead of absolute magnitudes are taken into account in the selection of pivots. -Then there are applications where "partial pivoting" (swapping of rows) is not enough; rank determination of a matrix, for instance, requires "complete pivoting" (swapping of both rows and columns). -In short, LU decomposition behaves much better with pivoting. -(I have been intentionally vague in some parts; you would do well to read Golub and Van Loan, as already recommended by jmoy, or the books "Matrix Decompositions" by Stewart or "Applied Numerical Linear Algebra" by Demmel for more rigorous versions of my explanation.)<|endoftext|> -TITLE: Why doesn't $0$ being a prime ideal in $\mathbb Z$ imply that $0$ is a prime number? -QUESTION [46 upvotes]: I know that $1$ is not a prime number because $1\cdot\mathbb Z=\mathbb Z$ is, by convention, not a prime ideal in the ring $\mathbb Z$. -However, since $\mathbb Z$ is a domain, $0\cdot\mathbb Z=0$ is a prime ideal in $\mathbb Z$. Isn't $(p)$ being a prime ideal the very definition of $p$ being a prime element? -(I know that this would violate the Fundamental Theorem of Arithmetic.) - -Edit: -Apparently the answer is that a prime element in a ring is, by convention, a non-zero non-unit (see wikipedia). -This is strange because a prime ideal of a ring is, by convention, a proper ideal but not necessarily non-zero (see wikipedia). -So, my question is now: Why do we make this awkward convention? - -REPLY [38 votes]: You have a point here: absolutely we want to count $(0)$ as a prime ideal in $\mathbb{Z}$ -- because $\mathbb{Z}$ is an integral domain -- whereas we do not want to count $(1)$ as being a prime ideal -- because the zero ring is not an integral domain (which, to me, is much more a true fact than a convention: e.g., every integral domain has a field of fractions, and the zero ring does not). -I think we do not want to call $0$ a prime element because, in practice, we never want to include $0$ in divisibility arguments. Another way to say this is that we generally want to study factorization in integral domains, but once we have specified that a commutative ring $R$ is a domain, we know all there is to know about factoring $0$: $0 = x_1 \cdots x_n$ iff at least one $x_i = 0$. -Here is one way to make this "ignoring $0$" convention look more natural: the notions of factorization, prime element, irreducible element, and so forth in an integral domain $R$ depend entirely on the multiplicative structure of $R$. Thus we can think of factorization questions as taking place in the cancellative monoid $(R \setminus 0,\cdot)$. (Cancellative means: if $x \cdot y = x \cdot z$, then $y = z$.) In this context it is natural to exclude zero, because otherwise the monoid would not be cancellative. Contemporary algebraists often think about factorization as a property of monoids rather than integral domains per se. For a little more information about this, see e.g. Section 4.1 of http://math.uga.edu/~pete/factorization2010.pdf.
REPLY [17 votes]: There are good reasons behind the convention of including $(0)$ as a prime ideal -but excluding $(1).\ $ First, we include zero as a prime ideal because it facilitates many useful reductions. For example, in many ring theoretic problems involving an ideal $\, I\,$, one can reduce to the case $\,I = P\,$ prime, then reduce to $\,R/P,\,$ thus reducing to the case when the ring is a domain. In this case one simply says that we can factor out by the prime $ P\,$, so w.l.o.g. assume $\, P = 0\,$ is prime, so $\,R\,$ is a domain. For example, I've appended to the end of this post an excerpt from Kaplansky's classic textbook Commutative Rings, section $1\!\!-\!\!3\!:\,G$-Ideals, Hilbert Rings, and the Nullstellensatz. -Thus we have solid evidence for the utility of the convention that the zero ideal is prime. So why don't we adopt the same convention for the unit ideal $(1)$ or, equivalently, why don't we permit the zero ring as a domain? There are a number of reasons. First, in domains and fields it often proves very convenient to assume that one has a nonzero element available. This permits proofs by contradiction to conclude by deducing $\,1 = 0.\ $ More importantly, it implies that the unit group is nonempty, so unit groups always exist. It'd be very inconvenient to have to always add the proviso (except if $\, R = 0)\,$ to the many arguments involving units and unit groups. For a more general perspective it's worth emphasizing that the usual rules for equational logic are not complete for empty structures, and that is why groups and other algebraic structures are always axiomatized so as to exclude empty structures (see this thread for details). -Below is the promised Kaplansky excerpt on reduction to domains by factoring out prime ideals. I've explicitly emphasized the reductions, e.g. reduce to.... - -Let $\, I\,$ be any ideal in a ring $\, R.\,$ We write $\, R^{*}\,$ for the quotient ring $\, R/I.\,$ In the polynomial ring $\, R[x]\,$ there is a smallest extension $\, IR[x]\,$ of $\, I.\,$ The quotient ring $\, R[x]/IR[x]\,$ is in a natural way isomorphic to $\, R^*[x].\,$ In treating many problems, we can in this way reduce to the case $\, I = 0,\,$ -and we shall often do so. -THEOREM $28$. $\,$ Let $\, M\,$ be a maximal ideal in $\, R[x]\,$ and suppose that the contraction $\, M \cap R = N\,$ is maximal in $\, R.\ $ Then $\, M\,$ can be generated by $\, N\,$ and one more element $\, f.\ $ We can select $\, f\,$ to be a monic polynomial which maps $\!\bmod N\,$ into an irreducible polynomial over the field $\, R/N.\ $ -Proof. $\,$ We can reduce to the case $\, N = 0,\,$ i. e., $\, R\,$ a field, and then -the statement is immediate. -THEOREM $31$. $\,$ A commutative ring $\, R\,\,$ is a Hilbert ring if and only if the polynomial ring $\, R[x] \,\,$ is a Hilbert ring. -Proof. $\,$ If $\, R[x]\,$ is a Hilbert ring, so is its homomorphic image $\, R\,$. -Conversely, assume that $\, R\,$ is a Hilbert ring. Take a G-ideal $\, Q\,$ in -$\, R[x]\,$; we must prove that $\, Q\,$ is maximal. Let $\, P = Q \cap R\,$; we can reduce the problem to the case $\, P = 0,\,$ which, incidentally, makes $\, R\,$ a domain. -Let $\, u\,$ be the image of $\, x\,$ in the natural homomorphism $\, R[x] \to R[x]/Q.\,$ -Then $\, R[u]\,$ is a G-domain. By Theorem $23$, $\,u\,$ is algebraic over $\,R\,$ and $\,R\,$ is a G-domain. Since $\,R\,$ is both a G-domain and a Hilbert ring, $\,R\,$ is a field. But this makes $\, R[u] = R[x]/Q\,$ a field, proving $\, Q\,$ to be maximal.
- -REPLY [8 votes]: Generally we make nice conventions because they make the statements of theorems nice. The theorem relevant to prime ideals is that $P$ is a prime ideal of $R$ if and only if $R/P$ is an integral domain. The theorem relevant to prime elements is prime factorization (when it holds). -These two concepts almost coincide for principal ideals, but we must distinguish between the generic point $(0)$ and closed points, and there are good reasons for doing this. (The zero ideal, for example, can't occur in the factorization of a nonzero ideal in a Dedekind domain.)<|endoftext|> -TITLE: Integral Representation of Infinite series -QUESTION [14 upvotes]: Let's take a look at the following integrals: -1) $\displaystyle \int\limits_{0}^{1} \frac{\log{x}}{1+x} \ dx = -\frac{\pi^{2}}{12} = -\frac 1 2 \sum\limits_{n=1}^{\infty} \frac{1}{n^2}= -\frac 1 2 \zeta(2)$ -2) For $c<1$, $\displaystyle \int\limits_{0}^{\frac{\pi}{2}} \arcsin(c \cos{x}) \ dx = \frac{c}{1^2} + \frac{c^3}{3^2} + \frac{c^5}{5^2} + \cdots $ -3) Summing the series $(-1)^k \frac{(2k)!!}{(2k+1)!!} a^{2k+1}$ -I have seen integral representations of series and sums employed in ingenious ways to compute closed forms and deduce other interesting properties (e.g. asymptotics, recurrences, combinatorial interpretations, etc). Are there any general algorithms or theories behind such methods of integral representations? - -REPLY [28 votes]: There is a very powerful calculus of multidimensional residues that accomplishes what you seek; see for example the book G. P. Egorychev, Integral Representation and Computation of Combinatorial Sums, AMS Translations of Mathematical Monographs, Vol. 59, Providence, 2nd ed., 1989. Two illustrative examples of Egorychev's "method of coefficients" can be found in the survey by Egorychev and Zima in volume 5 of Hazewinkel's Handbook of Algebra.<|endoftext|> -TITLE: Sums of a set of symmetric matrices -QUESTION [5 upvotes]: Say we have a set of symmetric $n \times n$ matrices $M_i$ for $1 \leq i \leq k$, with elements in $\mathbb{R}$. Suppose that for every $\boldsymbol{\lambda} = (\lambda_1, \dots , \lambda_k) \in \mathbb{R}^k$ we have that the kernel of -\begin{equation*} -M_{\boldsymbol{\lambda}} = \sum_i \lambda_i M_i -\end{equation*} -is nontrivial. Does it follow that there exists some nonzero $n$-vector $\textbf{v}$ with $M_i \textbf{v} = 0$ for all $i$? - -REPLY [4 votes]: Still no. Counterexample: -$M = \lambda M_1 + \mu M_2 = \pmatrix{0 & \mu & \mu \cr \mu & \lambda & 0 \cr \mu & 0 & -\lambda}$. -Obviously $\det(M)\equiv 0$ for all $\lambda$ and $\mu$. However, the only $(x,y,z)$ that satisfies -$M\pmatrix{x\cr y\cr z} = \pmatrix{0 & \mu & \mu \cr \mu & \lambda & 0 \cr \mu & 0 & -\lambda}\pmatrix{x\cr y\cr z} = \pmatrix{\mu(y+z)\cr \mu x + \lambda y\cr \mu x - \lambda z} = \mathbf{0}\quad \forall \lambda, \mu$ -is the zero vector.<|endoftext|> -TITLE: Integrate product of Dirac delta and discontinuous function? -QUESTION [10 upvotes]: Consider the piecewise constant function $\psi:I=[-1,1] \rightarrow \mathbb{R}$ given by -$$\psi(x) = \begin{cases} \psi_1 & x \leq 0, \\ \psi_2 & x > 0 \end{cases}$$ -for some constants $\psi_1, \psi_2 \in \mathbb{R}$. I would like to evaluate the integral -$$\int_I \delta(x) \psi(x) dx$$ -where $\delta$ is the Dirac delta distribution centered at $x=0$.
Of course, if we think about distributions in the usual way then you might say that $\delta(\psi) = \langle \delta, \psi \rangle = \psi(0) = \psi_1.$ But then the result depends on a fairly arbitrary choice when defining $\psi$: should the left or right half of the interval be closed? This question doesn't seem to have a meaningful answer when dealing with a problem that arises from a physical system (say). -Instead, consider a family of distributions $\phi_\epsilon(x)$ such that - -$\phi_\epsilon(x)=\phi_\epsilon(-x)$, -$\int_I \phi_\epsilon(x)\,dx=1$ for all $\epsilon$, and -$\lim_{\epsilon \rightarrow 0} \phi_\epsilon = \delta$, - -i.e., any family of even distributions with unit mass that approaches the Dirac delta distribution as $\epsilon$ approaches zero. (For instance, you could use the family of Gaussians $\phi_\epsilon(x) = \frac{1}{\epsilon\sqrt{\pi}}e^{-x^2/\epsilon^2}$.) -I can now think of my integral as -$$\lim_{\epsilon \rightarrow 0} \int_I \phi_\epsilon(x) \psi(x) dx.$$ -For some $\epsilon > 0$, the integral inside the limit can be expressed as -$$u(\epsilon) = \psi_1 \int_{-1}^0 \phi_\epsilon(x) dx + \psi_2 \int_0^1 \phi_\epsilon(x) dx = \frac{1}{2}\left( \psi_1 + \psi_2 \right) = \bar{\psi},$$ -where $\bar{\psi}$ is the mean of the constant values. Can we say, then, that $\lim_{\epsilon \rightarrow 0} u(\epsilon) = \bar{\psi}$? It would seem so: for any $\mu > 0$ there exists an $\epsilon_0$ such that $\epsilon < \epsilon_0$ implies $|u(\epsilon) - \bar{\psi}|<\mu$ (namely, $\epsilon_0$ can be any positive constant!). But clearly I've got a problem somewhere, because $\bar{\psi} \ne \psi_1$, i.e., this result does not agree with my earlier interpretation. -So what's the right thing to do here? The latter answer ($\bar{\psi}$) agrees more with my "physical" intuition (because it's invariant with respect to deciding which half-interval is open), but I'm concerned about rigor. -Edit: Since the problem as stated is not well-posed ($\delta$ cannot be evaluated on discontinuous functions), let me give some motivation. Imagine that I have a pair of piecewise linear functions $f,g:I^2 \rightarrow \mathbb{R}$, which are again discontinuous only at $x=0$. I would like to integrate the wedge product of $df$ and $dg$ over the domain: -$$\int_{I^2} df \wedge dg = \int_I \int_I \frac{\partial f}{\partial x} \frac{\partial g}{\partial y} - \frac{\partial f}{\partial y}\frac{\partial g}{\partial x} \,dx\,dy.$$ -Consider just the first term $(\partial f/\partial x)(\partial g/\partial y)$ and consider just the inner integral $\int_I \cdot dx$. We now have (almost) the original problem: $\partial f/\partial x$ can be thought of as a $\delta$ (plus a piecewise constant), and $\partial g/\partial y$ is simply piecewise constant along the $x$-direction. -So, the problem could be restated as: how do I integrate the wedge product $df \wedge dg$ of piecewise linear 0-forms $f$ and $g$ defined over a planar region? Formally this problem may again be ill-posed, yet it is a real problem that comes up in the context of finite element analysis where basis functions are nonsmooth or even discontinuous. - -REPLY [5 votes]: The solution you have suggested is a perfectly good one. The only problem is that you are venturing outside the bounds of conventional distribution theory. Consequently, the onus is on you to prove whatever properties of your definition you use. -Actually, the Heaviside step function is more usually treated as a distribution itself.
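-As a quick numerical sanity check of the computation in the question, here is a small sketch (my own, in Python, using the suggested Gaussian family; the values $\psi_1 = 3$ and $\psi_2 = 7$ are arbitrary):
-import numpy as np
-
-psi1, psi2 = 3.0, 7.0
-
-def u(eps, n=200000):
-    # Trapezoid-rule approximation of the integral of phi_eps * psi on [-1, 1];
-    # n is even so x = 0 is not a grid point and psi is sampled symmetrically.
-    x = np.linspace(-1.0, 1.0, n)
-    dx = x[1] - x[0]
-    phi = np.exp(-(x / eps) ** 2) / (eps * np.sqrt(np.pi))
-    psi = np.where(x <= 0.0, psi1, psi2)
-    f = phi * psi
-    return dx * (f.sum() - 0.5 * (f[0] + f[-1]))
-
-for eps in [0.1, 0.01, 0.001]:
-    print(eps, u(eps))  # approaches (psi1 + psi2) / 2 = 5, as computed above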
Bracewell's book, "The Fourier Transform and its Applications," considers the problem of defining a value at the discontinuity and concludes that it mostly doesn't matter. -What you are doing seems somewhat similar to the idea of density in physics. I'd be surprised if the physicists or the electrical engineers hadn't already confronted this issue. However, after looking around a little bit I am unable to find anything specific.<|endoftext|> -TITLE: Finding the Units in the Ring $\mathbb{Z}[t][\sqrt{t^{2}-1}]$ -QUESTION [7 upvotes]: This is a problem taken from Problem 4. -I couldn't find the solution anywhere and I am curious to see a solution for this problem, as I can at least comprehend the question and it seems that the mechanism for the solution involved will be somewhat understandable. - -REPLY [2 votes]: Just a "geometric translation" of Matt's "algebraic" proof: -It is clear that the ring $\mathbb{Z}[t, \sqrt{t^2 - 1}]$ is equal to $A = \mathbb{Z}[x,y] / (x^2 - y^2 + 1)$. Consider the ring $B = A \otimes_\mathbb{Z} \mathbb{C} = \mathbb{C}[x,y] / (x^2 - y^2 + 1)$. -$B$ is the ring of regular functions of the hyperbola $X \subseteq \mathbb{A}^2_\mathbb{C}$ defined by the equation $x^2 - y^2 + 1 = 0$. The projection of $X$ into one of its asymptotes gives an isomorphism -$$ -(x,y) \mapsto x-y -$$ -which maps $X$ onto $\mathbb{A}^1_\mathbb{C} \setminus \{ 0 \}$. Therefore $B$ is isomorphic, as a $\mathbb{C}$-algebra, to $\mathbb{C}[u,u^{-1}]$, where $u = x-y$ is transcendental over $\mathbb{C}$. So every unit in $B$ is of the form $\lambda u^n$, for some $\lambda \in \mathbb{C}^*$ and $n \in \mathbb{Z}$. Since $A$ is flat over $\mathbb{Z}$, $A \subseteq B$. Therefore every unit in $A$ is of the form $\pm (x-y)^n$, for some $n \in \mathbb{Z}$.<|endoftext|> -TITLE: A criterion for the existence of a holomorphic logarithm of a holomorphic function -QUESTION [17 upvotes]: Suppose $\Omega$ is a domain of the complex plane (i.e. an open and connected subset of the plane). Suppose $f$ is holomorphic on $\Omega$, and $f$ is not identically zero. -Suppose $f$ has a holomorphic logarithm on $\Omega$, which means that there is a function $g$ holomorphic on $\Omega$ such that $e^g=f$. Then it is easy to show that $f$ has holomorphic $n$-th roots on $\Omega$ for each $n$, which means that for each integer $n$, there exists a function $g_n$ holomorphic on $\Omega$ such that $(g_n)^n = f$. -Is the converse true? i.e. if $f$ has holomorphic $n$-th roots on $\Omega$ for all $n$, then can we find a function $g$ holomorphic on $\Omega$ such that $f=e^g$? -A few remarks: -One can prove that if $f$ has holomorphic $n$-th roots on $\Omega$ for all $n$, then $f$ does not vanish on $\Omega$. Therefore, we can define a holomorphic logarithm locally, but is it possible to find a global holomorphic logarithm? -Furthermore, notice that $\Omega$ is not assumed to be simply connected (in the simply connected case the answer to my question is yes). - -REPLY [22 votes]: The condition that $f$ have a holomorphic logarithm is equivalent to -$df/f=f'(z)dz/f(z)$ being an exact differential. This is equivalent to -the integral of $df/f$ over all closed curves in $\Omega$ vanishing. -Let $C$ be a closed curve in $\Omega$. -If $f=g^n$ is an $n$-th power in $\Omega$ of a holomorphic $g$ then -$\int_C df/f=n\int_C dg/g$. But $\int_C dg/g$ is an integer multiple of $2\pi i$. -Hence $\int_C df/f$ is an integer multiple of $2\pi ni$. If this holds for -all $n$ then $\int_C df/f=0$.
It follows that $f$ has a holomorphic logarithm.<|endoftext|> -TITLE: Favourite open problem? -QUESTION [15 upvotes]: Do you have any favorite open problem? -Let me mention one of my favorites. Let $A(\mathbb{T})$ be the Wiener algebra, that is, the linear space of absolutely convergent Fourier series on the unit circle $\mathbb{T}$ $(=\mathbb{R}/2\pi\mathbb{Z}=(-\pi,\pi])$ carrying the norm -$$ f\mapsto \|f\|=\sum_{n\in\mathbb{Z}}|\hat{f}(n)|<\infty$$ -where $\hat{f}(n)=\int_{-\pi}^\pi f(t)e^{-int}dt/2\pi$ is the $n$-th Fourier coefficient of $f$. In fact $A(\mathbb{T})$ is a unital commutative Banach algebra. By absolute convergence it follows that each $f\in A(\mathbb{T})$ is continuous on $\mathbb{T}$. Moreover, if $f(t)\not=0$ for all $t\in\mathbb{T}$ then obviously $1/f$ is also continuous on $\mathbb{T}$; a famous theorem of Norbert Wiener (the Wiener Lemma) states that we also have $1/f\in A(\mathbb{T})$. -Next consider a possible quantitative refinement of the Wiener lemma: -Given $\delta>0$ let $$C_\delta = \sup_{f \in A_\delta}\|1/f\|$$ -where $A_\delta=\{f\in A(\mathbb{T}):|f(t)|>\delta,\ \|f\|\leq1\}$. -Problem: Find -$$\delta_{\inf}=\inf\{\delta>0:\ C_\delta<\infty\}.$$ -Remark 1: It is known that $\delta_{\inf}\leq 1/\sqrt{2}$ and that $\delta_{\inf}\geq 1/2$ (see [1,2]). -Remark 2: The problem can be treated in any commutative Banach algebra with unit. -[1] N. Nikolski, In search of the invisible spectrum, Annales de l'Institut Fourier 49, no. 6 (1999), pp. 1925–1998. -[2] H. S. Shapiro, A counterexample in harmonic analysis, in Approximation Theory, Banach Center Publications, Vol. 4, Warsaw (1979), pp. 233–236 (submitted 1975). - -REPLY [3 votes]: I find Brocard's problem very interesting. It asks for integer solutions to the equation -$$m!+1=n^2.$$ -Ramanujan considered, but could not solve, the problem. While very few solutions have been found: $(4,5),(5,11),(7,71)$, we do not yet know whether these are the only solutions, whether there are more, or whether infinitely many exist. Curiously, it follows from the $abc$ conjecture, if it is true, that there are only finitely many solutions.<|endoftext|> -TITLE: Why do we care about dual spaces? -QUESTION [238 upvotes]: When I first took linear algebra, we never learned about dual spaces. Today in lecture we discussed them and I understand what they are, but I don't really understand why we want to study them within linear algebra. -I was wondering if anyone knew a nice intuitive motivation for the study of dual spaces and whether or not they "show up" as often as other concepts in linear algebra? Is their usefulness something that just becomes more apparent as you learn more math and see them arise in different settings? - -Edit -I understand that dual spaces show up in functional analysis and multilinear algebra, but I still don't really understand the intuition/motivation behind their definition in the standard topics covered in a linear algebra course. (Hopefully, this clarifies my question) - -REPLY [4 votes]: To take the case in $ \mathbb{R}^n $ -If $ A $ is the Transformation Matrix from the Natural Basis to an Arbitrary Standard Basis then $ A^{-1} $ is the Map from the Natural Basis to the Dual Basis. That $ A^{-1} $ exists is guaranteed as long as the Standard Basis spans the same Vector Space as the Natural Basis and is Linearly Independent (which it by definition should be).
-Dual Basis <---> Natural Basis <---> Standard Basis -Let $ \vec{e}_{\alpha} = \vec{e}^{\ \alpha} $ be the Natural Basis -Let $ \vec{e}_{\widetilde{\alpha}} $ be the Standard Basis -Let $ \vec{e}^{\ \widetilde{\alpha}} $ be the Dual Basis -$ \vec{e}_{\widetilde{\alpha}} = \sum_{\alpha} A^{\alpha}_{\ \ \widetilde{\alpha}} \ \vec{e}_{\alpha} $ -$ \vec{e}_{\widetilde{\alpha}} = \sum_{\alpha} \sum_{\widetilde{\beta}} A^{\alpha}_{\ \ \widetilde{\alpha}} \ A^{\alpha}_{\ \ \widetilde{\beta}} \vec{e}^{\ \widetilde{\beta}} $ -Note that the two A's in the middle can be substituted with the Metric Tensor. -$ \vec{e}_{\widetilde{\alpha}} = \sum_{\widetilde{\beta}} g_{\widetilde{\alpha}\widetilde{\beta}} \vec{e}^{\ \widetilde{\beta}} $ -The "Engineer's Approach" to Dual Space would be: -1) Some Vector Components $ (x^{\alpha}) $ have the unit $[m]$ -2) Some Vector Components $ (x_{\alpha}) $ have the unit $[1/m] \ $ (e.g. gradient) -3) When Multiplying Components with Bases the resulting Vector lives in $ \mathbb{R}^n $ and has the unit $ [1] $ -4) Therefore: we need two different Bases<|endoftext|> -TITLE: Summing the series $ \frac{1}{2n+1} + \frac{1}{2} \cdot \frac{1}{2n+3} + \cdots \ \text{ad inf}$ -QUESTION [8 upvotes]: How does one sum the given series: $$ \frac{1}{2n+1} + \frac{1}{2} \cdot \frac{1}{2n+3} + \frac{1 \cdot 3}{2 \cdot 4} \frac{1}{2n+5} + \frac{ 1 \cdot 3 \cdot 5}{2 \cdot 4 \cdot 6} \frac{1}{2n+7} + \cdots \ \text{ad inf}$$ -Given such a series, how does one go about solving it? Getting an Integral Representation seems tough for me. -I thought of going along the lines of Summing the series $(-1)^k \frac{(2k)!!}{(2k+1)!!} a^{2k+1}$, but couldn't succeed. - -REPLY [10 votes]: Observe that -$$\displaystyle \frac{1}{4^n} {2n \choose n} = \frac{(2n-1)(2n-3)(2n-5)\cdots}{2n(2n-2)(2n-4)\cdots}$$ -and recall that -$$\displaystyle \frac{1}{ \sqrt{1 - x^2} } = \sum_{k \ge 0} \frac{1}{4^k} {2k \choose k} x^{2k}.$$ -It follows that the desired quantity is -$$\displaystyle \int_0^1 \frac{x^{2n}}{\sqrt{1 - x^2}} dx.$$ -But letting $x = \sin \theta$ this is just -$$\displaystyle \int_0^{ \frac{\pi}{2} } \sin^{2n} \theta d \theta = \frac{\pi}{2} \frac{1}{4^n} {2n \choose n}.$$ -This is equivalent to Mariano's closed form via the identities $\Gamma(x+1) = x \Gamma(x)$ and $\Gamma \left( \frac{1}{2} \right) = \sqrt{\pi}$. I should mention that the integral at the end of Moron's answer is quite doable; write it in terms of a cosine and use the substitution $\cos \theta = \frac{1 - t^2}{1 + t^2}$ where $t = \tan \frac{\theta}{2}$ to reduce the problem to the integral of a rational function, and then one can use one of several related methods (partial fractions, contour integration). -Remark: The last integral identity above happens to be one of my favorite identities. I describe a representation-theoretic and combinatorial proof of it in this blog post. Another approach implicitly occurs in this blog post.<|endoftext|> -TITLE: Generalized graph - edge weights are functions, not scalars -QUESTION [5 upvotes]: Can someone please point me in the direction of any theory on graphs where the edge weights are not scalars but represent some relation between the nodes that is a simple function of a single variable (simple, say piecewise linear). -In particular, I'm interested in various basic graph properties and also thinking of the graph as representing a network.
So, for example, if the graph represented a communication network over time, then the edge weights would be functions representing connectivity as a function of time; how do you find a valid path between nodes? I'm looking for help both on specific algorithms and on general theory, if it exists. -I'm aware of time-extended networks where you explicitly expand out the dependence on the variable but, from what I've read, this is incomplete and of limited applicability. - -REPLY [2 votes]: I came across a similar problem of using non-scalars as edge weights (vectors in my special case) and came to the conclusion -that this is valid as long as you can define a total order on the edge weights and they are elements from a vector space that implements a linear operator. -The linearity together with the total order gives you characteristics of an equivalence relation, e.g. the transitivity that we need to get the weight of paths, since we can't just 'add up' the weights of the edges in the path. -Algorithms rely on that order (which is inherent in scalars). E.g. Dijkstra's algorithm picks an unvisited node with the minimum distance. -If those requirements are met, one can easily adjust the algorithms to be applied to that graph instance. - -More specifically: -change the algorithms that you use to keep track of time (or introduce time steps that increase after each piece of solution your algorithm derives) and evaluate the edge functions each time you need them. If the functions do not take up too much computation time, you should be fine.<|endoftext|> -TITLE: Upper and Lower Bounds of $\emptyset$ -QUESTION [17 upvotes]: From some reading, I've noticed that $\sup(\emptyset)=\min(S)$, but $\inf(\emptyset)=\max(S)$, given that $\min(S)$ and $\max(S)$ exist, where $S$ is the universe in which one is working. Is there some inherent reasoning/proof as to why this is? It seems strange to me that an upper bound of a set would be smaller than a lower bound of the same set. - -REPLY [3 votes]: A very natural explanation of the correct definition for the values of min and max on empty sets arises from their dual universal definitions, analogous to the universal GCD, LCM definitions that I presented in a post here. -First some notation. $\;$ Write $\ \ \rm x \le S \;\iff\; x \le s,\;\: \forall\: s \in S,\;$ and dually for $\rm\; x \ge S$ -DEFINITION of $\:$ min $\quad$ $\quad\rm x \le S \;\iff\; x \le min\ S$ -DEFINITION of max $\:\quad$ $\quad\rm x \ge S \;\iff\; x \ge max\ S$ -For min, when $\;\rm S = \emptyset\;$ is empty, the first clause $\;\rm x \le S\;$ in the min definition is vacuously true, hence the definition reduces to $\;\rm x \le \min \emptyset\;$ for all $\;\rm x\;$. Hence $\;\rm \min \emptyset = \infty\;$. Dually $\;\rm \max\emptyset = -\infty\;$. -As I remarked in said GCD, LCM post, such universal definitions often facilitate slick proofs. For some nontrivial examples of min, max flavor, consider the slick proofs of the integrality of various products of binomial coefficients by employing the floor function, e.g. see Joe Roberts: Elementary Number Theory. Instead I close with an analogous slick GCD, LCM proof from my mentioned post (see it for further details).
-Generally, in any domain, we have the following dual universal definitions of LCM and GCD: -DEFINITION of LCM $\quad$ If $\quad\rm a,b\ |\ c \;\iff\; [a,b]\ |\ c \quad$ then $\quad\rm [a,b] \;\;$ is an LCM of $\;\rm a,b$ -DEFINITION of GCD $\quad$ If $\quad\rm c\ |\ a,b \;\iff\; c\ |\ (a,b) \quad$ then $\quad\rm (a,b) \;$ is a GCD of $\;\;\rm a,b$ -Note that $\;\rm a,b\ |\ [a,b] \;$ follows by putting $\;\rm c = [a,b] \;$ in the definition. Dually $\;\rm (a,b)\ |\ a,b \;$. -Such $\iff$ definitions provide slick unified proofs of both arrow directions, e.g. the fundamental -THEOREM $\rm\quad (a,b)\ =\ ab/[a,b] \;\;$ if $\;\rm\ [a,b] \;$ exists. -Proof: $\rm\quad\quad d\ |\ a,b \;\iff\; a,b\ |\ ab/d \;\iff\; [a,b]\ |\ ab/d \;\iff\; d\ |\ ab/[a,b] \quad\;\;$ QED -The conciseness of this proof arises by exploiting to the hilt the $\iff$ definition of LCM and GCD. Compare to less concise / general / illuminating proofs in many number theory textbooks.<|endoftext|> -TITLE: Limit of a particular variety of infinite product/series -QUESTION [7 upvotes]: I was musing about a particular limit, -$L = \prod\limits_{n > 0} \bigl(1 - 2^{-n}\bigr)$: -we may bound $0.288 < L < 0.308$, which we may show by taking the logarithm: -$\ln(L) = \ln \bigl( \frac{315}{1024}\bigr) + \sum\limits_{n > 4} \ln\bigl(1 - 2^{-n}\bigr) > \ln\bigl(\frac{315}{1024}\bigr) - \frac{32}{31}\sum\limits_{n > 4} 2^{-n} =\; \ln\bigl(\frac{315}{1024} \cdot \mathrm e^{-2/31}\bigr)$, using $\ln(1-x) \ge -\frac{x}{1-x} \ge -\frac{32}{31}x$ for $0 < x \le 2^{-5}$. -I was wondering if this type of infinite product (or the corresponding sum of logarithms) has a name, and whether there are techniques for obtaining a closed form expression for the limit. - -REPLY [2 votes]: Another approach that confirms the above extremely close approximation is to introduce -$$S = \log P = \log \prod_{n\ge 1} \left(1-\frac{1}{2^n}\right) -= \sum_{n\ge 1} \log \left(1-\frac{1}{2^n}\right)$$ -and observe that this sum is harmonic and may be evaluated by inverting its Mellin transform. To do this introduce -$$S(x) = \sum_{n\ge 1} \log \left(1-\frac{1}{2^{nx}}\right)$$ -Recall the harmonic sum identity -$$\mathfrak{M}\left(\sum_{k\ge 1} \lambda_k g(\mu_k x);s\right) = -\left(\sum_{k\ge 1} \frac{\lambda_k}{\mu_k^s} \right) g^*(s)$$ -where $g^*(s)$ is the Mellin transform of $g(x).$ -In the present case we have -$$\lambda_k = 1, \quad \mu_k = k \quad \text{and} \quad -g(x) = \log\left(1-\frac{1}{2^x}\right).$$ -We need the Mellin transform $g^*(s)$ of $g(x)$ which is -$$\int_0^\infty \log\left(1-\frac{1}{2^x}\right) x^{s-1} dx.$$ -The function $g(x)$ is well-behaved near zero where it is on the order of $\log x$ and vanishes faster than any polynomial at infinity. -To calculate the Mellin transform start with -$$\int_0^\infty \log\left(1-\frac{1}{2^x}\right) x^{s-1} dx -= - \int_0^\infty \sum_{q\ge 1} \frac{2^{-qx}}{q} x^{s-1} dx -= - \sum_{q\ge 1} \frac{1}{q} \int_0^\infty 2^{-qx} x^{s-1} dx.$$ -Observe that -$$\int_0^\infty 2^{-qx} x^{s-1} dx = -\frac{1}{(q \log 2)^s} \Gamma(s)$$ -by a straightforward substitution that turns the integral into a gamma function integral.
-This yields -$$g^*(s) = - \sum_{q\ge 1} \frac{1}{q} \frac{1}{(q \log 2)^s} \Gamma(s) -= -\frac{1}{(\log 2)^s} \Gamma(s) \sum_{q\ge 1} \frac{1}{q^{s+1}} -= -\frac{1}{(\log 2)^s} \Gamma(s) \zeta(s+1).$$ -By the harmonic sum identity we now have that the Mellin transform $Q(s)$ of $S(x)$ is given by $$Q(s) = -\frac{1}{(\log 2)^s} \Gamma(s) \zeta(s) \zeta(s+1)$$ -with the Mellin inversion integral being -$$\frac{1}{2\pi i} \int_{3/2-i\infty}^{3/2+i\infty} Q(s)/x^s ds$$ -which we evaluate by shifting it to the left for an expansion about zero. -Fortunately the two zeta function terms with their trivial zeros combine to cancel the poles of the gamma function and we are left with just three poles and residues. -We have -$$\mathrm{Res}(Q(s)/x^s; s=1) = -\frac{\pi^2}{6x\log 2}.$$ -Furthermore -$$\mathrm{Res}(Q(s)/x^s; s=0) = -\frac{1}{2}\left(\log\frac{2\pi}{\log 2}-\log x\right)$$ -and finally -$$\mathrm{Res}(Q(s)/x^s; s=-1) = \frac{1}{24} x \log 2.$$ -Putting $x=1$ we obtain the following approximation for $S(1):$ -$$S(1)\approx \frac{1}{24} \log 2 + \frac{1}{2} \log\frac{2\pi}{\log 2} -- \frac{\pi^2}{6\log 2}.$$ -This is $$-1.2420620948124149457978452979784311762117047031228$$ -while the exact value is -$$-1.2420620948124149457978454818946296689734039782504$$ -so this approximation is good to an amazing $25$ digits. -This gives for $P$ the approximation -$$P \approx 2^{1/24} \sqrt{\frac{2\pi}{\log 2}} -\exp\left(- \frac{\pi^2}{6\log 2}\right).$$ -This is also good to $25$ digits, confirming the observation from the other poster above.<|endoftext|> -TITLE: Is there a function with a removable discontinuity at every point? -QUESTION [81 upvotes]: If memory serves, ten years ago to the week (or so), I taught first semester freshman calculus for the first time. As many calculus instructors do, I decided I should ask some extra credit questions to get students to think more deeply about the material. The first one I asked was this: -1) Recall that a function $f: \mathbb{R} \rightarrow \mathbb{R}$ is said to have a removable discontinuity at a point $x_0 \in \mathbb{R}$ if $\lim_{x \rightarrow x_0} f(x)$ exists but does not equal $f(x_0)$. Does there exist a function $f$ which has a removable discontinuity at $x_0$ for every $x_0 \in \mathbb{R}$? -Commentary: if so, we could define a new function $\tilde{f}(x_0) = \lim_{x \rightarrow x_0} f(x)$ and it seems at least that $\tilde{f}$ has a fighting chance to be continuous on $\mathbb{R}$. Thus we have successfully "removed the discontinuities" of $f$, but in so doing we have changed the value at every point! -Remark: Lest you think this is too silly to even seriously contemplate, consider the function $f: \mathbb{Q} \rightarrow \mathbb{Q}$ given by $f(0) = 1$ and for a nonzero -rational number $\frac{p}{q}$ (written in lowest terms with $q > 0$), $f(\frac{p}{q}) = \frac{1}{q}$. It is easy to see that this function has limit $0$ at every (rational) point! -So I mentioned this problem to my students. A week later, the only person who asked me about it at all was my Teaching Assistant, who was an older undergraduate, not even a math major, I think. (I hasten to add that this was not in any sense an honors calculus class, i.e., I was pretty clueless back then.) Thinking about it a bit, I asked him if he knew about uncountable sets, and he said that he didn't. At that point I realized that I didn't have a solution in mind that he would understand (so still less so for the freshman calculus students) and I advised him to forget all about it.
- -So my actual question is: can you solve this problem using only the concepts in a non-honors freshman calculus textbook? (In particular, without using notions of un/countability?) -[Addendum: Let me say explicitly that I would welcome an answer that proceeds directly in terms of the least upper bound axiom. Most freshman calculus books do include this, albeit somewhere hidden from view of the casual readers, i.e., actual freshman calculus students.] - -If you can't figure out how to answer the question at all, I think the following related question helps. -2) Define a function $f: \mathbb{R} \rightarrow \mathbb{R}$ to be precontinuous if the limit exists at every point. For such a function, we can define $\tilde{f}$ as above. Prove/disprove that, as suggested above, $\tilde{f}$ is indeed continuous. [Then think about $f - \tilde{f}$.] -Now that I think about it, there is an entire little area here that I don't know anything about, e.g. -3) The set of discontinuities of an arbitrary function is known -- any $F_{\sigma}$ set inside $\mathbb{R}$ can serve. What can we say about the set of discontinuities of a "precontinuous function"? [Edit: from the link provided in Chandru1's answer, we see that it is countable. What else can we say? Note that taking the above example and extending by $0$ to the irrationals, we see that the set of points of discontinuity of a precontinuous function can be dense.] - -REPLY [39 votes]: I think the following works: -Here is a sketch; I will fill in the details later if required. -Let $g(x) = \lim_{t\rightarrow x} f(t)$. Then we can show that $g(x)$ is continuous. -Let $h(x) = f(x) - g(x)$. Then $\lim_{t \rightarrow x} h(t)$ exists and is $0$ everywhere. -We will now show that $h(c) = 0$ for some $c$. -This will imply that $f(x)$ is continuous at $c$ as then we will have $f(c) = g(c) = \lim_{t\rightarrow c} f(t)$. -Consider any point $x_0$. -By limit of $h$ at $x_0$ being $0$, there is a closed interval $I_0$ (of length > 0) such that $|h(x)| < 1$ for all $x \in I_0$. -This is because, given an $\epsilon > 0$ there is a $\delta > 0$ such that $|h(x)| < \epsilon$ for all $x$ such that $0 < |x - x_{0}| < \delta$. Pick $\epsilon = 1$ and pick $I_{0}$ to be any closed interval of non-zero length in $(x_{0}, x_{0} + \delta)$. -Now pick any point $x_1$ in $I_0$. -By limit of $h$ at $x_1$ being $0$, there is a closed interval $I_1 \subset I_0$ (of length > 0) such that $|h(x)| < 1/2$ for all $x \in I_1$, by an argument similar to the one above. -Continuing this way, we get a sequence of closed intervals $I_n$ such that -$|h(x)| < \frac{1}{n+1}$ for all $x \in I_n$. We also have that $I_{n+1} \subset I_n$ for each $n$, and that length $I_n$ > 0. We could also arrange so that length $I_n \rightarrow 0$. -Now there is a point $c$ (by completeness of $\mathbb{R}$) such that $c \in \bigcap_{n=0}^{\infty}I_{n}$. -Thus we have that $|h(c)| < \frac{1}{n+1}$ for all $n$ and so $h(c) = 0$ and $f(c) = g(c)$.<|endoftext|> -TITLE: Polynomials which satisfy $p^{2}(x)-1 = p(x^{2}+1)$ -QUESTION [7 upvotes]: Can we find a polynomial $p(x) \in \mathbb{R}[x]$ such that $\text{deg}\ p(x)>1$ and which satisfies $$p^{2}(x)-1=p(x^{2}+1)$$ for all $x \in \mathbb{R}$? -This question can be very well identified with my previous question. - -REPLY [14 votes]: There is no solution. Here is the proof: -First note that the highest coefficient of $p$ must be 1. In particular, $\lim_{x \to \infty} p(x) = \infty$. -Assume that there exists a real value $c$ such that $p(c) = 0$.
Then -$$ 0 = (p(c)^2 - 1)^2 - 1 = p(c^2 + 1)^2 - 1 = p((c^2 + 1)^2 + 1)$$ -But $(c^2 + 1)^2 + 1 > c$, so any real root of $p$ would lead to another, higher root of $p$, which is impossible. So $p$ has no real root and thus $p(x) > 0$ for all $x$. -But then, $p(x)^2 - 1 = p(x^2 + 1) > 0$ and thus $p(x) > 1$. -Define a sequence $x_n$ by $x_0 = 0$ and $x_{n + 1}^2 + 1 = x_n$ with imaginary part of $x_n$ positive for $n > 0$. -We first show that $p(x_n) \neq 0$ for all $x_n$: -clearly, $p(x_0) \neq 0$ and $p(i)^2 - 1 = p(0) > 1$ and so $p(x_1) = p(i) \neq 0$. -Assume that $p(x_{n+1}) = 0$ for some $n > 1$. Then $p(x_n) = -1$ and so $p(x_{n - 1}) = 0$. This is impossible. -From the functional equation, we obtain by differentiating -$$ -p(x) p'(x) = x p'(x^2 + 1) -$$ -We now show by induction that $p'(x_n) = 0$ for all $n$. -For $n = 0$, we obtain $p(0) p'(0) = 0$, and thus $p'(0) = 0$. -Assume that $p'(x_n) = 0$; then -$$ -p(x_{n+1}) p'(x_{n+1}) = x_{n + 1} p'(x_{n+1}^2+1) = x_{n+1} p'(x_n) = 0 -$$ -and so $p'(x_{n+1}) = 0$, as required. -But $x_n$ is a sequence converging to $\frac{1 + i \sqrt{3}}{2}$, and in particular consists of infinitely many distinct points, all of which are roots of $p'$, which is impossible.<|endoftext|> -TITLE: cardinal exponentiation, $k^{<\lambda}$ -QUESTION [9 upvotes]: I have the following well-known exercise in cardinal arithmetic: -If $\kappa, \lambda$ are cardinals such that $\lambda$ is infinite, then $\kappa^{<\lambda}$ equals the supremum of the $\kappa^{\theta}$, where $\theta < \lambda$ is a cardinal. -1) Isn't this false for $\kappa=1$? The left side is $\lambda$ and the right side $1$. Remark that $\kappa^{<\lambda}$ is defined to be the cardinality of the disjoint(?) union of the function sets $map(\alpha,\kappa)$, where $\alpha < \lambda$ is an ordinal. -2) Is this supremum understood as a cardinal supremum or an ordinal supremum? If the latter is the case, why should the supremum be a cardinal at all? -3) Anyway, I don't know how to solve this exercise. I've already seen it several times, but 1) and 2) remain obscure in the literature I know. - -REPLY [4 votes]: First, (2): if you define a cardinal to be an ordinal that is not bijectable with any strictly smaller ordinal, then suppose that $\alpha$ is the ordinal supremum of $\kappa^{\theta}$, $\theta\lt\lambda$. Let $\beta$ be a strictly smaller ordinal than $\alpha$. By the definition of supremum, there must exist a $\theta\lt\lambda$ such that $\beta\lt\kappa^{\theta}\leq\alpha$; in particular, $\kappa^{\theta}$ cannot be bijected with $\beta$ (being a cardinal), and therefore neither can $\alpha$ (that would give an embedding of $\kappa^{\theta}$ into $\beta$, and Cantor-Bernstein would give you $|\beta|=\kappa^{\theta}$, contradicting that the latter is not bijectable with any strictly smaller ordinal). Thus, $\alpha$ is not bijectable with any strictly smaller ordinal, and so must be a cardinal. So whether you define the supremum as the ordinal-sup or the cardinal-sup, you will still get a cardinal (this holds for any set of cardinals). -Second, (1): you are correct that the definition does not match this for $\kappa=1$ (or for $\kappa = 0$); as noted by Carl in the comments, this is likely an erratum or omission; it should hold for any $\kappa>1$. -Edit: The definition as a sum is done running over all ordinals, rather than all cardinals, so I'm fixing this below.
-Finally, (3): you are trying to show that the supremum of the $\kappa^{\theta}$ equals the sum over all ordinals $\alpha<\lambda$ of $|\kappa^{\alpha}|$, assuming $\kappa\geq 2$ (to prevent the problems noted). This is because $|\kappa^{\alpha}|=|map(\alpha,\kappa)|$ by definition, and the cardinality of the disjoint union is the cardinal sum. I believe this can be shown by transfinite induction on $\lambda$ as I do below, but there probably is a simpler method. -So, the proposition we want to show is that for any infinite ordinal $\lambda$, we have -$$\sup_{\alpha\lt\lambda}|\kappa^{\alpha}| = \sum_{\alpha\lt\lambda}|\kappa^{\alpha}|.$$ -First, the equality holds for $\lambda=\omega$: if $2\leq \kappa\lt\aleph_0$, then $\sup\{|\kappa^n|\,|\, n=0,1,2,3,\ldots\} = \aleph_0$ and $\sum_{n=0}^{\infty}|\kappa^n| = \aleph_0$; if $\aleph_0\leq\kappa$, then $|\kappa^n|=\kappa$ for all $n$, and $\sum_{n=0}^{\infty}|\kappa^n| = \sum_{n=0}^{\infty}\kappa = \kappa\aleph_0=\kappa$, so both sides agree. -Assume the result holds for $\lambda$; then the supremum of $|\kappa^{\alpha}|$ with $\alpha\lt\lambda^+$ is $|\kappa^{\lambda}|$; on the other hand, -$$\sum_{\alpha\lt\lambda^+}|\kappa^{\alpha}| = \left(\sum_{\alpha\lt\lambda}|\kappa^{\alpha}|\right) + \kappa^{\lambda} = \sup_{\alpha\lt\lambda}|\kappa^{\alpha}|+|\kappa^{\lambda}|=|\kappa^{\lambda}|,$$ -where the last equality holds because $|\kappa^{\alpha}|\leq |\kappa^{\lambda}|$ for each $\alpha\lt \lambda$, so the supremum is at most $|\kappa^{\lambda}|$, and the sum of two infinite cardinals is equal to their maximum. So again the two expressions agree. -Finally, we want to show that if $\lambda$ is a limit ordinal and the result holds for all $\beta\lt\lambda$, then it holds for $\lambda$. Then -$$\sup_{\alpha\lt\lambda}(|\kappa^{\alpha}|) = \sup_{\beta\lt\lambda}\left(\sup_{\alpha\lt\beta}|\kappa^{\alpha}|\right) = \sup_{\beta\lt\lambda}\sum_{\alpha\lt\beta}|\kappa^{\alpha}| = \sum_{\alpha\lt\lambda}(|\kappa^{\alpha}|).$$ -So the equality holds for $\lambda$ as well. This establishes the result by transfinite induction for all infinite ordinals $\lambda$. -Further Edit, 2 Sep 2010: Clarify how to finish it off. -So, the above shows that for any ordinal $\lambda$, $\sup_{\alpha\lt\lambda}|\kappa^{\alpha}| = \sum_{\alpha\lt\lambda}|\kappa^{\alpha}|$. To finish the exercise, we need to show that if $\lambda$ is a cardinal (that is, an ordinal that is not bijectable with any strictly smaller ordinal), then $\sup\{|\kappa^{\alpha}|\colon\alpha$ is an ordinal and $\alpha\lt\lambda\} = \sup\{|\kappa^{\theta}|\colon \theta$ is a cardinal and $\theta\lt\lambda\}$. To see this, note that $|\kappa^{\alpha}|=|\kappa|^{|\alpha|}$, and since $\lambda$ is assumed to be a cardinal, if $\alpha\lt\lambda$, then there exists a cardinal $\theta$, $\theta\lt\lambda$, such that $|\alpha|=|\theta|$, and hence $|\kappa^{\alpha}|=|\kappa^{\theta}|$. Thus, the two sets are equal, so their suprema are equal as well.<|endoftext|> -TITLE: Presentation of $D_{2n}$ -QUESTION [10 upvotes]: The presentation of the dihedral group $D_{2n}$ is -$$D_{2n}= \langle r,s \mid r^{n}=s^{2}=1, rs=sr^{-1} \rangle.$$ -Why is it incorrect to conclude that $r=s=1$? In other words, why doesn't this group presentation describe the trivial group? - -REPLY [2 votes]: In the question -"$r^{n}=s^{2}=1, rs=sr^{-1}$ ... why is it incorrect to conclude that $r=s=1$ ? ... why doesn't this group presentation describe the trivial group" -the key words are "conclude" and "describe".
You are in effect asking what logic applies to statements about $r$ and $s$. -The answer is that assertions about $r$ and $s$ are understood as valid in this context only if they hold in ALL groups. The assertion "if a pair of group elements $r$ and $s$ satisfies the defining relations of $D_{2n}$, then $r=s=1$" is false in some groups and true in others. It is true in groups with no subgroup of order $2n$, such as large cyclic groups of prime order. It is false in other groups, such as the dihedral group of order 2n, or the group of order 2 where $r=1, s=-1$ is a pair satisfying the equations. -Truth in all groups is a semantic definition and can be replaced by an equivalent, syntactic definition that gives the combinatorial rules for making all valid deductions from the defining relations. $r=s=1$ is not a valid algebraic consequence of the algebraic rules. As in all problems of logic, showing that something does NOT follow from some rules or axioms is a matter of constructing an example (a model) where the rules apply but the assertion is false. Here the models are groups with a pair of elements $r,s$ satisfying the relations, so for showing what is not true in the logic of generators and relations the algebraic approach doesn't provide an alternative to constructing models. For assertions that do follow from the defining relations, the algebraic definition is useful, because instead of surveying a possibly infinite collection of models to show that the statement is correct, one can perform a finite combinatorial derivation that works for all models. -(Added in response to comment from Matt E about normal forms: showing that any element can be placed in the normal form follows from the algebraic relations alone. However, to use normal forms as a method of calculating in the group or understanding it, you need to show that distinct elements in the group have different normal forms. This is again the non-algebraic (or not purely algebraic) problem of showing that relations do not hold in a structure, which is done by constructing models such as actions of the group on particular spaces.)<|endoftext|> -TITLE: Does a section that vanishes at every point vanish? -QUESTION [9 upvotes]: Let $R$ be the coordinate ring of an affine complex variety (i.e. finitely generated, commutative, reduced $\mathbb{C}$-algebra) and $M$ be an $R$-module. -Let $s\in M$ be an element such that $s\in \mathfrak{m}M$ for every maximal ideal $\mathfrak{m}$. Does this imply $s=0$? - -REPLY [9 votes]: Not in general, no. For example, if $R = \mathbb C[T]$ and $M$ is the field of fractions -of $R$, namely $\mathbb C(T)$, then (a) every maximal ideal of $R$ is principal; (b) every -element of $M$ is divisible by every non-zero element of $R$. Putting (a) and (b) together -we find that $M = \mathfrak m M$ for every maximal ideal $\mathfrak m$ of $R$, but certainly -$M \neq 0.$ -Here is a finitely generated example: again take $R = \mathbb C[T]$, and take $M = \mathbb C[T]/(T^2).$ Then $s = T \bmod T^2 \in \mathfrak m M$ for every $\mathfrak m$, because -$\mathfrak m M = M$ if $\mathfrak m$ is a maximal ideal other than $(T)$, and this -is clear from the choice of $s$ if $\mathfrak m = (T)$. -The answer is yes if $M$ is finitely generated and torsion free. For let $S$ be the total quotient ring of -$R$ (i.e. the product of function fields $K(X)$ for each irreducible component $X$ -of the variety attached to $R$).
-Then $M$ embeds into $S\otimes_R M$ (this is the torsion free condition), which in turn embeds into -a finite product of copies of $S$ (since it is finite type over $S$, which is just a product -of fields). -Clearing denominators, we find that in fact $M$ then embeds into $R^n$ for some $n$. -Thus it suffices to prove the result for $M = R^n$, and hence for $R$, in which case -it follows from the Nullstellensatz, together with the fact that $R$ is reduced. -Finally, note that for any finitely generated $R$-module, if $M = \mathfrak m M$ for all $\mathfrak m$ -then $M = 0$ (since Nakayama then implies that $M_{\mathfrak m} = 0$ for all -$\mathfrak m$). Thus if $M$ is non-zero it can't be that every section lies -in $\mathfrak m M$ for all $\mathfrak m$.<|endoftext|> -TITLE: Why does the set of all singleton sets not exist? -QUESTION [21 upvotes]: Proposition: For a set $X$ and its power set $P(X)$, any function $f\colon P(X)\to X$ has at least two sets $A\neq B\subseteq X$ such that $f(A)=f(B)$. -I can see how this would be true if $X$ is a finite set, since $|P(X)|\gt |X|$, so by the pigeonhole principle, at least two of the elements in $P(X)$ would have to map to the same element. -Does this proposition still hold for $X$ an infinite set? And if so, how does this show that the set of all singleton sets cannot exist? - -REPLY [3 votes]: Your presupposition is incorrect; there are a number of set theories (some of them provably equiconsistent with ZF) in which the set of all singletons exists; see Limiting set theory using symmetry, or Forster’s Oxford Logic Guide on the subject. (Disclaimer: Forster discusses my work in the book, so I’m hardly unbiased, but it is the standard work.)<|endoftext|> -TITLE: What is your favorite proof that $e^{ix}$ has a period of $2\pi$? -QUESTION [10 upvotes]: as a function of a real variable, apparently. -Part of the freedom in choosing a proof is that you get to choose what definition of $e^{ix}$ to start from -- do you use a differential equation? a power series? a definition in terms of trig functions? -Another bit of freedom is that you get to choose what definition of $\pi$ to start from. - -REPLY [11 votes]: My favorite has always been Walter Rudin's proof in the prologue to his "Real and Complex Analysis" (2nd Ed.). Here's a sketch: - -Define $\exp$ in terms of the power series. -By manipulating the series, deduce that $\exp$ is a homomorphism from the additive group to the group of complex units. -Show it satisfies the usual first order ODE. -Define $\cos z$ and $\sin z$ as the real and imaginary parts of $\exp(iz)$, respectively. -Define $\pi$ as twice the smallest positive real root of $\cos$. -Deduce that $\exp( i \pi / 2) = i$. -By multiplying, conclude that $2 \pi i$ is a period of $\exp$. -Show, by means of the preceding properties, that no smaller period exists.<|endoftext|> -TITLE: What is the intuitive relationship between SVD and PCA? -QUESTION [439 upvotes]: Singular value decomposition (SVD) and principal component analysis (PCA) are two eigenvalue methods used to reduce a high-dimensional data set into fewer dimensions while retaining important information. Online articles say that these methods are 'related' but never specify the exact relation. -What is the intuitive relationship between PCA and SVD? As PCA uses the SVD in its calculation, clearly there is some 'extra' analysis done. What does PCA 'pay attention' to differently than the SVD? What kinds of relationships does each method utilize more in its calculations?
Is one method 'blind' to a certain type of data that the other is not? - -REPLY [4 votes]: There is a way to do an SVD on a sparse matrix that treats missing features as missing (using gradient search). I don't know any way to do PCA on a sparse matrix except by treating missing features as zero.<|endoftext|> -TITLE: Find polynomials such that $(x-16)p(2x)=16(x-1)p(x)$ -QUESTION [7 upvotes]: Find all polynomials $p(x)$ such that for all $x$, we have $$(x-16)p(2x)=16(x-1)p(x)$$ -I tried working out with replacing $x$ by $\frac{x}{2},\frac{x}{4},\cdots$, to have $p(2x) \to p(0)$ but then factor terms seems to create a problem. -Link: http://web.mit.edu/rwbarton/Public/func-eq.pdf - -REPLY [14 votes]: Hint $\:$ Note the equation is $\displaystyle \rm\; \frac{\sigma\:f}f \:=\; \frac{\sigma^4 g}g\ $ for $\rm\ g = x-16 \:,\:\;$ shift $\rm\;\;\sigma\:f(x) = f(\sigma\:x) = f(2\:x)$ -Or, written additively $\rm\ (\sigma-1) \; f \;=\: (\sigma^4 - 1) \; g \quad\;$ [see Note below on this additive notation] -Thus $\rm\;\; f\, =\,\smash{\dfrac{\sigma^4-1}{\sigma-1}}\,g\, =\, (1+\sigma+\sigma^2\!+ \sigma^3) \: g \;=\: (x-16)\:(2x-16)\:(4x-16)\:(8x-16) \;$ -unique up to a factor of $\rm\;h \in ker(\sigma-1) = \{h: \sigma\:h = h\:\} = \;$ constants, i.e. $\rm deg\;h = 0 \ \ $ QED -Remark $\;$ This reformulation of my prior answer is intended to dramatically illustrate the innate symmetry. Its striking simplicity arises precisely from the simple structure of the orbits of the shift automorphism $\rm\: \sigma.\;$ Namely, by orbit decomposition the problem reduces to one on the single orbit of an irreducible polynomial. By exploiting polynomial structure there, we reduce the problem to a trivial polynomial division by $\rm\:\sigma - 1.\:$ This nicely illustrates the essence of the ideas employed to solve general difference equations (recurrences) over rational function fields - ideas at the foundation of algorithms employed in computer algebra systems. -Hopefully the symmetry is clearer in the above reformulation. Based on prior comments, and someone reposting my prior solution stripped of the symmetry, I fear my prior answer did not succeed in explicitly emphasizing the beautiful innate symmetry lying at the heart of this problem - a germ of the Galois theory of difference fields. Probably the problem was devised to help spur one to discover this beautiful structure. -Note $\;\;$ The point of the additive notation is to exploit the natural polynomial structure of the action of $\rm\:\sigma\:$ on the multiplicative group generated by the elements $\rm\: \sigma^n \,f\;$ in the orbit of $\rm\!\: f \!\:$ under $\rm\:\sigma.\:$ This action is best comprehended by examining it on a specific example.$ $ Recalling $\rm\,\sigma\,f(x) = f(2\:\!x)$ -$\quad\quad\begin{align}{} -\rm \sigma\:(\:f(2^{-2}\: x)^a \; f(2^3 x)^b \;\: f(2^5 x)^c)\; =& \rm\;\;\: f(\:2^{-1} x)^a \;\: f(2^4 x)^b \;\: f(2^6 x)^c \\\\ -\rm \iff \quad\;\;\: \sigma\;\:(\:(\:\sigma^{-2}\: f\;)^a \;\; (\:\sigma^3\: f\:)^b \;\; (\:\sigma^5\: f\:)^c)\; =& \rm\;\; \;\; (\:\sigma^{-1} \: f\:)^a \;\; (\:\sigma^4 \: f\:)^b \;\; (\:\sigma^6 \: f\;)^c \\\\ -\rm \iff \quad\;\; \sigma\; (\:a\:\sigma^{-2} \;+\:\;\; b\:\sigma^3 \;+\;\; c\:\sigma^5)\:\; f\quad\; =& \rm\; (\:a\:\sigma^{-1} \: + \:\;\: b\: \sigma^4 \; +\;\; c\: \sigma^6)\; f \\\\ -\end{align}$ -In the Galois theory literature the above action is frequently written in a highly suggestive exponential form. 
To illustrate this, below is the key identity in the problem at hand, expressed in this exponential notation. -$\quad\quad\begin{align}{} -\rm g^{\:\sigma^4-\,1} \;=\;& \rm g^{\:(1\:+\;\sigma\:+\;\sigma^2\:+\:\,\sigma^3)\:(\sigma\,-\,1)} \\\\ -\iff\quad\quad\rm \frac{\sigma^4 g}g \;=\;& \rm (g \;\: \sigma\:g \;\:\sigma^2 g \;\:\sigma^3 g)^{\sigma - 1} \;=\; \frac{\phantom{g\;\;\:} \sigma\:g \;\;\: \sigma^2 g \;\;\:\sigma^3 g \;\;\:\sigma^4 g}{g \;\;\:\sigma\:g \;\;\:\sigma^2 g \;\;\:\sigma^3 g\phantom{\;\;\:\sigma^4 g}} \\\\ -\end{align}$<|endoftext|> -TITLE: Evaluating a convergent improper triple integral over the unit sphere -QUESTION [6 upvotes]: In Exercise 5 (f) of Angus Taylor's Advanced calculus (p. 659) one is asked to find the value of the following integral if convergent: -$$I:=\underset{R}{\iiint}\dfrac{x^2 y^2 z^2}{r^{17/2}}\mathrm dV$$ -where $R$ is the unit sphere $x^2+y^2+z^2\leq 1$ and $r^2=x^2+y^2+z^2$. -Observing that $\dfrac{x^2 y^2 z^2}{r^{17/2}}\leq \dfrac{r^6}{r^{17/2}}=r^{-5/2}$ I proved that $I$ is convergent. -Using spherical co-ordinates $r$, $\theta $, $\phi $ i.e. -$$\begin{align*}x&=r\sin \phi \cos \theta\\y&=r\sin \phi \sin \theta\\z&=r\cos \theta\end{align*}$$ -I transformed the integral $I$ into -$$I=\int\nolimits_0^{2\pi }\left(\int_0^{\pi }\left(\lim_{\delta \to 0}\int_{\delta }^1\left(r^2 \sin \phi\right)\dfrac{x^2 y^2 z^2}{r^{17/2}}\;\mathrm dr\right)\;\mathrm d\phi \right)\;\mathrm d\theta$$ -$$=\lim_{\delta \to 0}\left( \int_{\delta }^1 r^{-1/2}\mathrm dr\right)\int_0^{2\pi }\cos^4 \theta \sin^2 \theta \mathrm d\theta\int_0^{\pi }\sin^5 \phi \;\mathrm d\phi $$ -$$=2\cdot \dfrac18 \pi \cdot \dfrac{16}{15}=\dfrac4{15}\pi $$ -In the solutions the answer is $\dfrac8{105}\pi$. Since sometimes there -are a few book typos (in the exercises) to prevent undue copying, I ask the -following -Question: What is the correct solution, $\dfrac4{15}\pi $ or $\dfrac8{105}\pi $? - -UPDATE (Correction): instead of $z=r\cos \theta $ it is -$z=r\cos \phi $ -See a comment from whuber. -The integral $I$ is transformed into -$$I=\int_0^{2\pi }\left(\int_0^{\pi }\left(\lim_{\delta \to 0}\int_{\delta }^1\left(r^2 \sin \phi\right)\dfrac{x^2 y^2 z^2}{r^{17/2}}\;\mathrm dr\right)\;\mathrm d\phi \right)\;\mathrm d\theta$$ -Since -$$(r^2 \sin \phi )\dfrac{x^2 y^2 z^2}{r^{17/2}}=(r^2\sin \phi )\dfrac1{r^{17/2}}\left( r\sin \phi \cos \theta \right) -^{2}\left( r\sin \phi \sin \theta \right) ^{2}\left( r\cos \phi \right) ^{2}$$ -$=r^{-1/2}\cos ^{2}\theta \cdot\sin ^{2}\theta \cdot\cos ^{2}\phi \cdot\sin ^{5}\phi $, -the transformed integral becomes (if I am right): -$$I=\left(\lim_{\delta \to 0} \int_{\delta -}^1 r^{-1/2}\mathrm dr\right)\int_0^{2\pi }\cos^2\theta\cdot\sin^2\theta \;\mathrm d\theta -\int_0^{\pi }\cos^2 \phi \cdot\sin^5 \phi \;\mathrm d\phi$$ -$$=2\cdot \dfrac14 \pi \cdot \dfrac{16}{105}=\dfrac8{105}\pi$$ -The correct solution will be $\dfrac8{105}\pi $ as in the book. - -REPLY [6 votes]: As a double check, let's perform the integral in a completely different manner (so that my mistakes are unlikely to overlap yours!). -It is handy to use a condensed notation. I will write $[i,j,k;a]$ for the value of $x^{2 i}y^{2 j}z^{2 k} \rho^{2 a}$ integrated over the unit sphere, where $i,j,k$ are integral and $a$ is real. These relations are easy to establish: - -$[i,j,k;a]$ is invariant under permutations of $(i,j,k)$.
-$6 [1,1,1;a] = [0,0,0;a+3] - 3[3,0,0;a] - 18[2,1,0;a]$ is a consequence of expanding $\rho^6 = (x^2+y^2+z^2)^3$ as a multinomial, using the first property to collect equal terms, and isolating $[1,1,1;a]$ on the lhs. -$[i,j,k;a+1] = [i+1,j,k;a] + [i,j+1,k;a] + [i,j,k+1;a]$ follows from $\rho^2 = x^2 + y^2 + z^2$. Use this to compute $[2,1,0;a]$ in terms of $[3,0,0;a]$ and $[2,0,0;a+1]$. -$[n,0,0;a] = \frac {4 \pi} {(2 n + 1) (2 n + 3 + 2 a)}$ can be obtained via integration by parts in cylindrical coordinates or by directly performing the integration of $ z^{2 n} \rho ^ {2 a}$ over the sphere, which is very easy to do (because the angular part only involves $\phi$ and the integrand is exact). (The numerator is the area of the unit sphere and the factors in the denominator come from integrating a $2 n$ power (in the angular part of $z$, which separates from the $\rho$ integral) and a $2 n + 2 a + 2$ power due to the $\rho^{2 n} \rho^{2 a} \rho^2 d \rho$ term in the integrand.) Even when $a \lt 0$, this is justified whenever $2 n + 3 + 2 a \gt 0$, because the integral still converges at the origin. - -From these we algebraically obtain -$$\eqalign{ -[1,1,1;a] = &\frac{1}{6} \left( [0,0,0;a+3] - 3 [3,0,0;a] - 18 [2,1,0;a] \right) \cr - -= &\frac{4 \pi} {6} \left( \frac{1}{2 a+9} - \frac{3}{7 (2 a+9)} - \frac{18}{35(2 a+9)} \right) \cr - -= &\frac{4 \pi}{105 (2 a+9)}. -}$$ -Setting $2 a = -17/2$ gives the textbook answer $\frac{8 \pi}{105}$.<|endoftext|> -TITLE: The right "weigh" to do integrals -QUESTION [33 upvotes]: Back in the day, before approximation methods like splines became vogue in my line of work, one way of computing the area under an empirically drawn curve was to painstakingly sketch it on a piece of graphing paper (usually with the assistance of a French curve) along with the axes, painstakingly cut along the curve, weigh the cut pieces, cut out a square equivalent to one square unit of the graphing paper, weigh that one as well, and reckon the area from the information now available. -One time, when we were faced with determining the area of a curve that crossed the horizontal axis thrice, I did the careful cutting of the paper, and made sure to separate the pieces above the horizontal axis from the pieces below the horizontal axis. Then, my boss suddenly scooped up all the pieces and weighed them all. -I argued that the grouped pieces should have been weighed separately, and then subtract the weights of the "below" group from the "above" group, while my boss argued that we were calculating the total area, and thus, not separating the pieces was justifiable. -Which one of us was correct? - -REPLY [15 votes]: In my opinion, this is not really a math question. Which procedure is correct depends what you're going to do with your calculation. -As a matter of definition, the integral indeed measures the signed area (positive area minus negative area), as you suggest. So your approach is computing an approximation to the definite integral $\int_a^b f$. -But maybe you want the total (unsigned) area. E.g. if you're going to lay concrete along (some real-world space corresponding to) the region bounded by the curve and the x-axis then surely you want the total area -- there's no such thing as negative concrete. -Without knowing what you're using the calculation for, it's impossible to say. (Essentially, you're asking us: "Which of these is mathematically correct: $A-B+C$ or $A+B+C$?" Of course it depends upon what you're trying to do.) 
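-To make the two conventions concrete, here is a small Python sketch (an illustration added in editing; the curve and interval are invented, and SciPy is assumed):
-    import numpy as np
-    from scipy.integrate import quad
-
-    f = np.sin                    # a curve that crosses the horizontal axis
-    a, b = 0.0, 2.0 * np.pi
-
-    signed, _ = quad(f, a, b)                      # "above minus below": about 0.0
-    total, _ = quad(lambda x: abs(f(x)), a, b)     # all cutouts on one scale: about 4.0
-The first number is the definite integral $\int_a^b f$; the second is what a single weighing of all the paper pieces together measures.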
I would like to think that your boss knew what the point of it all was, so without further information I guess I would trust him. -In fact, your story arouses my curiosity. I suppose you're not putting us on, but weighing paper cutouts is just about the last method I would ever think of for computing area (aren't you going to need a very sensitive scale or an awfully big piece of paper to get anywhere with this?). How long ago are we talking? What was the job? You don't have to answer these questions, but it would be interesting to know...<|endoftext|> -TITLE: Finding all normal subgroups of a group -QUESTION [11 upvotes]: On my homework today, we had to find all the normal subgroups of $D_{n}$, the dihedral group of order 2n. I solved the problem by looking at how the conjugacy classes change based on whether n is even or odd and then constructed the normal subgroups as unions of the conjugacy classes. -I have 2 questions: -(i) Is there a better way to approach the problem than looking at the conjugacy classes? -(ii) Could someone explain why we want to find all the normal subgroups of a particular group? How does this provide us additional insight into the structure of the group we are studying? (Right now, this exercise feels more like a 'computation' to me than a way of understanding $D_{n}$) -Thanks :) - -REPLY [8 votes]: I'm assuming you want to know more about group theory, so here is an answer to point you to some of the interesting things that you can learn if you are interested in computing with finite groups. -(i) If you have the conjugacy classes and there are not too many, then writing normal subgroups as unions of conjugacy classes is fast and easy. This is often the case when you only have access to a group through its character table, such as when the Suzuki and the sporadic simple groups were being discovered, and we wanted to understand some of their (non-simple) subgroups. -For a typical finite group given concretely as a permutation group, you use a special type of induction working along what is called a chief series, where you find a maximal chain of normal subgroups. It turns out there are some fairly easy ways to find these: for a solvable group, or any group G with an abelian quotient group, you can fairly easily and concretely find the derived subgroup, [G,G]. The quotient group is an abelian group, so every subgroup between the whole group and the derived subgroup is normal. You then take the derived subgroup of the derived subgroup, but now you only take subgroups that are normalized by G/[G,G]. The action of G/[G,G] is relatively plain and easy to understand, so these "invariant" subgroups are pretty easy to find. To finish the inductive step, you need to find the normal subgroups of G/[[G,G],[G,G]] that don't contain [G,G], and this can be done by a slightly easier version of the general subgroup lattice algorithm using what is called the first cohomology group, or "derivations" (basically derivatives for groups, and related in a way to the derived subgroup). When you reach the point where the group is its own derived subgroup, a perfect group, then a second algorithm begins, that identifies the simple groups involved in the top of the group, and then constructs the solvable group below them. Sometimes you even have to repeat those last two steps, but not for groups of order less than $60^{24} \approx 4.7\times 10^{42}$ or so. -(ii) Often in mathematics, you want to use "induction". A normal subgroup is one of the two main ways to do induction in group theory.
Usually it is not necessary to find all normal subgroups, but rather a single (nice) chief series will do. Some groups only have a single chief series (a fair number of dihedral groups are like this), and so finding the chief series and finding all normal subgroups is the same question. You might try the symmetric group of degree 4 and order 24 as an example.<|endoftext|> -TITLE: Rotations by degrees other than $90, 180,$ and $270$. -QUESTION [8 upvotes]: Say I have a triangle with vertices $(0,0), (2,4), (4,0)$ that I want to rotate about the origin. Rotation by multiples of $90^{\circ}$ is simple. However, I want to rotate by something a bit more complicated, such as $54^{\circ}$. How do I figure out where the vertices would be then? - -REPLY [13 votes]: One way is to use complex numbers. Multiplying by $\cos\theta+i\sin\theta$ rotates by $\theta$ about 0, so you could multiply $(2+4i)(\cos 54^\circ+i\sin 54^\circ)$ to get the rotation image of (2,4). - -REPLY [7 votes]: In the answer to this question, I mentioned the formula for the rotation matrix; one merely takes the product of the rotation matrix with the coordinates (treated as 2-vectors) to get the new rotated coordinates. Note that I gave the matrix for clockwise rotation; for anticlockwise rotation, negate the angle (thus switching the sign of the two sine components).<|endoftext|> -TITLE: The orthogonal complement of the space of row-null and column-null matrices -QUESTION [5 upvotes]: I propose the following lemma and its proof. It is related to row-null and column-null matrices - i.e. matrices whose rows and columns both sum to zero. Could you please give your opinion on the plausibility of the lemma, and the validity of the proof? -Lemma: Let $Z\in\text{GL}(n,\mathbb{R})$ be a general $n\times n$ real matrix, and let $Y\in\mathcal{S}(n,\mathbb{R})$, where $\mathcal{S}(n,\mathbb{R})$ is the space of row-null column-null $n\times n$ real matrices. Then $\text{Tr}(ZY)=0$ for all $Y$ in $\mathcal{S}(n,\mathbb{R})$ if and only if $Z$ has the form $$Z_{ij}=\left(p_{j}-p_{i}\right)+\left(q_{j}+q_{i}\right)$$. -Proof: -Consider the space of row-null and column-null matrices -$$\mathcal{S}(n,\mathbb{R})= \left\{ Y_{ij}\in GL(n,\mathbb{R}):\sum_{i}Y_{ij}=0,\sum_{j}Y_{ij}=0 \right\} $$ -Its dimension is -$$\text{dim}(S(n,\mathbb{R}))=N^{2}-2N+1$$ -since the row-nullness and column-nullness are defined by $2N$ equations, only $2N-1$ of which are linearly independent. -Consider the following space -$$\mathcal{G}(n,\mathbb{R})=\left\{ Z_{ij}\in GL(n,\mathbb{R}):Z_{ij}=\left(p_{j}-p_{i}\right)+\left(q_{j}+q_{i}\right)\right\}$$ -Its dimension is -$$\text{dim}(\mathcal{G}(n,\mathbb{R}))=2N-1$$ -where $N-1$ is the contribution from the antisymmetric part and $N$ is from the symmetric part. -Assume $Y\in\mathcal{S}$ and $Z\in\mathcal{G}$; then the Frobenius inner product of two such elements is -$$ -\text{Tr}(ZY) =\sum_{ij}\left[\left(p_{j}-p_{i}\right)Y_{ji}+\left(q_{j}+q_{i}\right)Y_{ji}\right] -$$ -$$ -=\sum_{j}(q_{j}+p_{j})\sum_{i}Y_{ji}+\sum_{i}(q_{i}-p_{i})\sum_{j}Y_{ji}=0 -$$ -Since $\text{dim}(\mathcal{G})+\text{dim}(\mathcal{S})=\text{dim}(GL)$ and $\mathcal{G}\perp\mathcal{S}$, then $\mathcal{G}$ and $\mathcal{S}$ must be complementary in $GL$. Therefore, if $Z$ is orthogonal to all the matrices in $\mathcal{S}$, it must lie in $\mathcal{G}$. -PS: How can I get the curly brackets {} to render in latex mode?
I'm not sure if it's any simpler than your proof -- but it's different, and hopefully interesting to some. -Let $S$ be the set of $n\times n$ matrices which are row-null and column-null. We can write this set as: -$$ -S = \left\{ Y\in \mathbb{R}^{n\times n} \,\mid\, Y1 = 0 \text{ and }1^TY=0\right\} -$$ -where $1$ is the $n\times 1$ vector of all-ones. The objective is to characterize the set $S^\perp$ of matrices orthogonal to every matrix in $S$, using the Frobenius inner product. -One approach is to vectorize. If $Y$ is any matrix in $S$, we can turn it into a vector by taking all of its columns and stacking them into one long vector, which is now in $\mathbb{R}^{n^2\times 1}$. Then $\mathop{\mathrm{vec}}(S)$ is also a subspace, satisfying: -$$ -\mathop{\mathrm{vec}}(S) = \left\{ y \in \mathbb{R}^{n^2\times 1} \,\mid\, (\mathbf{1}^T\otimes I)y = 0 \text{ and } (I \otimes \mathbf{1}^T)y = 0 \right\} -$$ -where $\otimes$ denotes the Kronecker product. In other words, -$$ -\mathop{\mathrm{vec}}(S) = \mathop{\mathrm{Null}}(A),\qquad\text{where: } -A = \left[ \begin{array}{c} \mathbf{1}^T\otimes I \\ I \otimes \mathbf{1}^T \end{array}\right] -$$ -Note that vectorization turns the Frobenius inner product into the standard Euclidean inner product. Namely: $\mathop{\mathrm{Trace}}(A^T B) = \mathop{\mathrm{vec}}(A)^T \mathop{\mathrm{vec}}(B)$. Therefore, we can apply the range-nullspace duality and obtain: -$$ -\mathop{\mathrm{vec}}(S^\perp) = -\mathop{\mathrm{vec}}(S)^\perp = -\mathop{\mathrm{Null}}(A)^\perp = -\mathop{\mathrm{Range}}(A^T) -$$ -So every vector in $\mathop{\mathrm{vec}}(S^\perp)$ is of the form $(\mathbf{1}\otimes I)a + (I\otimes \mathbf{1})b$ for some vectors $a$ and $b$ in $\mathbb{R}^{n\times 1}$. It follows that every matrix in $S^\perp$ is of the form $a1^T + 1b^T$. This parametrization is equivalent to the one you presented if you set $a_i = q_i-p_i$ and $b_j = q_j + p_j$. - -REPLY [2 votes]: I've checked your proof. It is correct. -I want to say an obvious thing: I know two methods for checking that a proof is correct. -1. Check the answer, if you can get it from something else. -1'. If you can't get the answer independently, check some of its properties. -2. Check the proof line by line. -2'. Divide your proof into parts and check each of them, using one of the above. -Applying the second method is obvious. If you want to apply the first, you can try to prove the following. -1. Every symmetric matrix, that is orthogonal to S(n,R), is of the form $q_i+q_j$. -2. Every antisymmetric matrix, that is orthogonal to S(n,R), is of the form $p_j-p_i$. -This can be done by using equations of the form Tr(ZY)=0 with matrices $Y\in S(n,\mathbb{R})$ of the form $Y_{ij}=a_i b_j$, where a and b are vectors with one coordinate equal to 1, another --- equal to -1, all other coordinates equal to 0 (like $(0,\dots,0,1,0,\dots,0,-1,0,\dots,0)$).<|endoftext|> -TITLE: Proving that this sum $\sum\limits_{0 < k < \frac{2p}{3}} { p \choose k}$ is divisible by $p^{2}$ -QUESTION [10 upvotes]: How does one prove that for a prime $p \geq 5$ the sum: $$\sum\limits_{0 < k < \frac{2p}{3}} { p \choose k}$$ is divisible by $p^{2}$? -Since each term of $\displaystyle \sum\limits_{0 < k < \frac{2p}{3}} { p \choose k}$ is divisible by $p$, the only thing that remains is to prove that the sum $$\sum\limits_{ 0 < k < \frac{2p}{3}} \frac{1}{p} { p \choose k}$$ is divisible by $p$.
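-(A quick numeric sanity check of this divisibility claim, added for illustration; Python 3.8+ is assumed for math.comb:)
-    from math import comb
-
-    for p in [5, 7, 11, 13, 17, 19, 23, 29, 31]:
-        s = sum(comb(p, k) for k in range(1, p) if 3 * k < 2 * p)   # 0 < k < 2p/3
-        assert s % (p * p) == 0   # the sum is indeed divisible by p^2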
-How to evaluate this sum: $\displaystyle \frac{1}{p} { p \choose k} = \frac{(p-1)(p-2) \cdots (p-k+1)}{1 \cdot 2 \cdot 3 \cdots k}$ - -REPLY [7 votes]: Since we are working in the field $\mathbb{F}_p$ we can write -$$\frac{(p-1)(p-2) \cdots (p-k+1)}{1 \cdot 2 \cdots k}$$ as -$$\frac{(-1)(-2) \cdots (-(k-1))}{1 \cdot2 \cdots k}$$ -= $$\frac{(-1)^{k-1}}{k}$$ -Let $N = [\frac{2p}{3}]$ and $M = [\frac{N}{2}]$. -Thus what we need is -$$ \sum_{k=1}^{N} \frac{(-1)^{k-1}}{k}$$ $$ = \sum_{k=1}^{N} \frac{1}{k} - 2 \sum_{k=1}^{M}\frac{1}{2k}$$ $$ =\sum_{k=M+1}^{N}\frac{1}{k}$$ -Now $N+M+1 =p$ so we can rewrite as -$$\frac{1}{N} + \frac{1}{M+1} + \frac{1}{N-1} + \frac{1}{M+2} + \cdots = $$ -$$\frac{p}{N(M+1)} + \frac{p}{(N-1)(M+2)} + \cdots $$ -which is $0$. -There are $N-M$ terms, which is even, so each term gets paired off. - -REPLY [3 votes]: From your last equation one gets -$$\frac1p{p\choose k}\equiv(-1)^{k-1}\frac 1k\pmod p.$$ -So the problem is equivalent to -$$\sum_{0 < k < 2p/3}(-1)^{k-1}\frac1k\equiv0\pmod{p}.$$ -The case $p=1979$ was set as the first problem at the 1979 IMO. -The method used for this extends to all primes, and no doubt -you can find solutions for the IMO problem out on the interweb.<|endoftext|> -TITLE: Collinear points theorem -QUESTION [5 upvotes]: I was doodling a bit and at a given moment I drew the following construction: - -It appears that the three blue intersections are collinear (red line), no matter how I draw the construction lines. If this is always true, I assume that this is a known fact [otherwise I have my first theorem! :-) erm conjecture, since I can't prove it :-( ]. -What's the theorem called? -TIA -Steven - -REPLY [9 votes]: Congratulations! Looks to me like you have rediscovered Pappus' Hexagon Theorem. -The image from the link:<|endoftext|> -TITLE: What's more general than category theory? -QUESTION [21 upvotes]: First there was arithmetic with numerical calculations (i.e., one unknown on one side of an equation). Then algebra with manipulations of variables (many unknowns anywhere in an equation). Then systems are studied that differ from ordinary arithmetic but share some of the same properties (equations where the unknowns represent all sorts of things - even functional equations) and then these properties are abstracted in abstract algebra and whole classes are studied such as groups and rings. Then category theory studies maps between structures (functorial equations), then n-category theory, then ... - -Where do we go now? Is category theory the end of the road for the foreseeable future? - Is the only way forward to go backwards and generalize in a different direction (like "generalized equations" of optimization or something)? - -REPLY [3 votes]: Final Answer: Type Theory and the Univalent Foundations -The Book -The paradox of categories as a foundational theory is that it aims to divorce itself from set theory but does so by building on intuitions and formalisms that are best explained in a set format. Thus, CT is the best way to think about sets and the structures that generalize from them, and very useful for the study of set-like structures. But the further we go away from the traditional attributes of a set, the more category theory becomes contentious or at least more difficult to work with. Normally, this is "swept under the rug": instead of focusing on objects and their morphisms we focus on classes and their morphisms, in a sense taking us away from the strictly quantitative nature of set theory into the more qualitative world of classes.
-But in my opinion, this is like drawing a bunch of ones and zeros on paper with charcoal, then smudging it with an eraser to look like a nondescript face, and then saying that you have progressed from the class of sets (you can count the number of smudges) to a class of faces (you can't count the faces the smudge represents), when in fact the face that you can represent will always be limited by the initial scoring of the ones and zeros in charcoal and thus still reflect a structurally set-based nature. -My opinions on the implicit presence of set theory within categories aside, the notion of a type of object, and then of what you can do on that type, is a construct that is familiar to every programmer in the world. So too are the notions of polymorphism and inheritance: child's play to an object oriented programmer, yet they are very difficult to deal with from a basic categorical perspective, since they require one object to have multiple identities, and this leads to constructs such as multi-categories, or colored categories, etc. But since type theory is built from day one, as it were, on the more abstract notion of a type (which for all intents and purposes can be considered the categorical class), it is better equipped to deal with structures that bear absolutely no relation to sets. -In effect, type theory allows you to draw the face exactly and then deal with that construct as its own mathematical identity, as its own "thing", instead of having to build up that identity digitally and be limited by the combinations of smudged ones and zeros.<|endoftext|> -TITLE: Expressing $\sin(2x)$ as a polynomial of $\sin{x}$ -QUESTION [13 upvotes]: Using trigonometric identities (double angle formulas) one can see that $\cos{2x} = 2 \: \cos^{2}{x} - 1$ can be expressed as a polynomial of $\cos{x}$, where $p(\cos{x})=2 \: \cos^{2}{x}-1$. Then it's natural to ask the same question for the sine function. - -Can we express $\sin{2x}$ as a polynomial of $\sin{x}$? - -REPLY [20 votes]: Here's a slightly different way of proving that $\sin nx$ is not -a polynomial in $\sin x$ when $n$ is even. Indeed $\sin nx$ -cannot be written as $f(\sin x)$ for any function $f$. -To prove this all one has to do is to write down two numbers $a$ -and $b$ such that $\sin a=\sin b$ but $\sin na\ne\sin nb$. -Let $a=\pi/(2n)$ and $b=\pi-a$. Then $\sin a=\sin b$. But -$na=\pi/2$ and $nb=n\pi-\pi/2$ so that $\sin na=1$ and $\sin nb=-1$ -(recalling that $n$ is even).<|endoftext|> -TITLE: A nice enumeration of $R(\omega)$ -QUESTION [6 upvotes]: Define $R(0)=0=\emptyset, R(n+1)=P(R(n))$ and $R(\omega) = \cup_{n < \omega} R(n)$. Thus $R(\omega)$ is the set of all sets which are built out of finitely many braces and $0$. -Consider the following relation $E$ on $\omega$: If and only if the $n$th digit in the binary representation of $m$ is $1$, then $n E m$. Now I want to construct an isomorphism $(R(\omega),\in) \cong (\omega,E)$ [This is an exercise in Kunen's set theory]. After some playing around I've come up with the following definition: -$g : R(\omega) \to \omega, g(x) = \sum_{y \in x} 2^{g(y)}$. -This is a well-defined recursion, since $rank(y) < rank(x)$. If $g$ is injective, then it is easy to see that $x \in y \Leftrightarrow g(x) E g(y)$. However I don't see this; neither why $g$ is surjective. - -REPLY [7 votes]: It is easier to build the isomorphism in the other direction, and indeed we can see that the map is forced upon us. There is a unique isomorphism.
-Specifically, you want a map $h:\mathbb{N}\to R(\omega)$ such that $n\mathrel{E} m\iff h(n)\in h(m)$. This very equivalence tells you that you must define $h(m)=\{ h(n) \mid n\mathrel{E} m\}$. This function is defined by recursion, and obviously preserves $E$ to $\in$. Thus it is injective. It is surjective because your function $g$ is the inverse. QED -If one knows about the Mostowski collapse, then you can see immediately that $h$ is precisely the Mostowski collapse of $(\mathbb{N},E)$, which is a well-founded extensional relation. This provides another way to see that $h$ is an isomorphism.<|endoftext|> -TITLE: On the binary decimal expansion of the reciprocal primes -QUESTION [6 upvotes]: I have been thinking a little bit about the binary decimal expansion of reciprocal prime numbers; and I have a few questions. -I found this neat table which lists the binary expansion of many fractions, and I was trying to find some patterns. -Here are my questions; for brevity I say a natural number has a period N if the binary decimal expansion of its reciprocal has period N: (for example, 1/7 = .001001... has period 3) - -Given an arbitrary natural number N, does there exist a prime number of minimum period N? - -(By minimum period I mean to exclude the case that one prime has a period which is a multiple of the period of another prime. For example, 1/3 = .0101... has period 2, and 1/5 = .00110011... has period 4; so while 1/3 has period 4, what I call its "minimum period" is 2) -2. - Can two prime numbers have the same minimum period? -A useful result which I believe is well known, is that a natural number has a period N if and only if it is a factor of $2^N - 1$. -Does anyone know of a good reference that describes some theory behind the relationship between the period of the reciprocal of a natural number, and the prime factorization of that number? - -REPLY [3 votes]: (In the below post there are several links with the apostrophes omitted; fill them in if the links don't work.) -The phenomenon you are studying is a phenomenon in modular arithmetic. If a prime $p$ has period $n$, this means that there is some numerator $N$ such that $\frac{N}{2^n - 1} = \frac{1}{p}$. This is equivalent to $Np = 2^n - 1$, or $p | 2^n - 1$, or $2^n \equiv 1 \bmod p$. The smallest $n$ for which this is true is called the order of $2 \bmod p$, sometimes denoted $\text{ord}_p(2)$ (although this is confusingly also used to denote the greatest power of $p$ which divides $2$...). -Fermat's little theorem guarantees that $\text{ord}_p(2)$ always divides $p-1$; you have already observed this yourself. However, predicting the exact order is very difficult to do in general. For example, knowing that the order is actually equal to $p-1$ is equivalent to knowing that $2$ is a primitive root, and it is not currently even known whether this is true infinitely often. In any case, you should be able to find basic information about order in any good textbook on elementary number theory. -With that background out of the way... -The answer to question 1 is no. The only exception is $n = 6$ by Zsigmondy's theorem. -The answer to question 2 is yes. If $p$ and $n$ are relatively prime, then $p$ has period $n$ if and only if $p$ divides $\Phi_n(2)$, where $\Phi_n(x)$ is the $n^{th}$ cyclotomic polynomial. (This is more or less a restatement of the condition that $p | 2^n - 1$ but $p$ doesn't divide $2^k - 1$ for $k < n$.) So it suffices to show that some number of this form has more than one prime factor relatively prime to $n$.
There are two cases here which are particularly classical: - -$n$ is a prime $q$. In this case $\Phi_q(2) = 2^q - 1$ is a Mersenne number, and $2^{11} - 1 = 23 \cdot 89$ is the smallest composite Mersenne number, hence $23$ and $89$ both have period $11$. -$n = 2^k$ for some $k$. In this case $\Phi_{2^k}(2) = 2^{2^{k-1}} + 1$ is a Fermat number, and $\Phi_{64}(2) = 2^{32} + 1 = 641 \cdot 6700417$ is the smallest composite Fermat number, hence $641$ and $6700417$ both have period $64$. - -The answer to question 3 is the following. -Lemma: If $n, m$ are relatively prime odd numbers, then $\text{ord}_{mn}(2) = \text{lcm}(\text{ord}_n(2), \text{ord}_m(2))$. -Proof. $\text{ord}_{mn}(2)$ is the order of the element $2$ in the multiplicative group of $\mathbb{Z}/mn\mathbb{Z}$, which we will denote $U(mn)$. By the Chinese remainder theorem, $U(mn)$ is isomorphic to the direct product $U(m) \times U(n)$, so the order of $2$ in $U(mn)$ must be the $\text{lcm}$ of the orders of $2$ in $U(m)$ and $U(n)$. -It follows that to compute $\text{ord}_m(2)$ for arbitrary $m$ it suffices to compute it for the odd prime power factors of $m$ and then to take the $\text{lcm}$ of the resulting numbers. (Note that if $m$ is divisible by a power of $2$ this only contributes a leading string of zeroes to the binary expansion of $\frac{1}{m}$ and hence does not affect the computation of the period.) Again, you can find a discussion of the Chinese remainder theorem in any good textbook on elementary number theory. (I am particularly bad at recommending textbooks on elementary number theory because I learned mine through a summer program, not a textbook...)<|endoftext|> -TITLE: 3D software like GeoGebra -QUESTION [31 upvotes]: Does there exist free interactive geometry software, like GeoGebra, that works for 3D geometry? I would like to be able to draw spheres, great circles, and so on. - -REPLY [2 votes]: I have developed a free 3d drawing program that runs in modern browsers (HTML5, based on Three.js). The English version is called Geoservant 3D, the original German version is called Geoknecht 3D. - -I would like to be able to draw spheres, great circles, and so on. - -You can easily draw: - -cube, cuboid, cylinder, line, line segment, plane, point, polygon, quadrangle (square), sphere, text, triangle, vector. - -Rotation is available for cuboid, cube, text, and cylinder as well. -After you have specified some 3d shapes you can also animate their properties by changing the number values. See the description on the page for how to change values: holding down the ALT key and using the cursor keys. Also have a look at the gallery on the page to see what things you can achieve with this program. It also calculates basic values of the 3d objects. -Currently I am using it a lot to solve vector and plane equations. By the visualization I can check if the result is correct. That is one use case, and it is super helpful for me. -Hope that helps you too. - -Example: - -If you search for a program for 2D drawings, you could use the new Geodrafter 2D. -Update: Geoservant is now available in German, English, Chinese, and Swedish.<|endoftext|> -TITLE: How to calculate the number of decimal digits for a binary number? -QUESTION [11 upvotes]: I was going to ask this on Stack Overflow, but finally decided this was more math than programming. I may still turn out to be wrong about that, but... -Given a number represented in binary, it's fairly easy to derive a decimal representation using integer division and remainders to extract digits in reverse order.
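-For concreteness, a minimal sketch of that extract-in-reverse loop (Python here, though the question itself is language-agnostic; it assumes n >= 1):
-    def decimal_digits_reversed(n):
-        """Extract the decimal digits of n, least significant first."""
-        digits = []
-        while n > 0:
-            n, r = divmod(n, 10)   # integer division and remainder
-            digits.append(r)
-        return digits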
You can then simply count how many digits you extracted. -However, I wondered about calculating an exact number of decimal digits more efficiently by avoiding calculating the actual digits. -At first sight this is easy. First, count the binary digits needed (i.e. determine the position of the highest set bit). Many processors have an operation to do this directly, and there are bit-fiddling algorithms to do it in O(log n) time where n varies with the width of a machine register. In an arbitrary-precision integer, you normally know how many machine words you have, so you can usually jump directly to the most significant word. -Once you know the size in binary digits, you can scale by $\log 2 \over \log 10$ to get the number of decimal digits. Sort of. -The problem is that this scale factor is (I think) an irrational number. If you have a maximum number of digits you need to worry about, you can use a rational approximation and you only need to worry about getting the rounding right. -The question is, therefore - is there a way to determine this number of decimal digits efficiently and precisely for numbers of any size? Is there (for example) a completely different approach to the calculation that I haven't thought of? - -REPLY [9 votes]: I would post this as a comment if I could, because it seems too simple for an answer. You already said that you can find the highest bit with a single instruction (if the number fits into a register). For every result $r$ you get (that is, your number is between $2^r$ and $2^{r+1}-1$), there are only two possible numbers $k$ and $k+1$ of decimal digits. So you could just use $r$ as an index into a lookup table that stores both $k$ and $10^k$, and output $k$ if your number is less than $10^k$, or $k+1$ otherwise.<|endoftext|> -TITLE: Is there a notion of basis for Banach spaces? -QUESTION [5 upvotes]: Consider the Banach space $\ell^1(\mathbb N)$. -The sequence $(e_n)_{n\in\mathbb N}$ feels like a kind of basis because every element $a\in\ell^1(\mathbb N)$ can be written as an absolutely convergent infinite linear combination $\sum_{n\in\mathbb N}a(n)e_n$ in a unique way. -(Here $e_n$ denotes the vector whose $n$th entry is 1 and all of whose other entries vanish.) -The same is true for the Banach space $c_0(\mathbb N)$. -Is the above property of the sequence $(e_n)_{n\in\mathbb N}$ appropriate in order to abstractly define a basis of a Banach space? Has this been considered? - -REPLY [5 votes]: As Soarer points out: Yes, it is called a Schauder basis.<|endoftext|> -TITLE: Maximal Ideals in the Ring of Complex Entire Functions -QUESTION [8 upvotes]: Let $X = \mathcal{C}([0,1],\mathbb{R})$ be the ring of all continuous real-valued functions $f:[0,1] \to \mathbb{R}$. For $x \in [0,1]$, let $M_{x} = \{ f \in X \ | \ f(x)=0\}$. One can show by using compactness of $[0,1]$ that every maximal ideal is of this form. -Extending the Question to Entire functions: Let $\mathsf{C}(z)$ be the ring of complex entire functions. For $ \lambda \in \mathsf{C}$ let $M_{\lambda}$ denote the set of all entire functions which have a zero at $\lambda$. Then is $M_{\lambda}$ a maximal ideal in $\mathsf{C}(z)$, and does every maximal ideal happen to be of this form? I don't know how to prove this!
- -REPLY [4 votes]: If I remember correctly from my student days, if D is a connected domain in the complex plane, then the ring O(D) of holomorphic functions on D has the property that every finitely generated ideal is actually principal, but there are ideals that cannot be finitely generated (such as in Robin's answer above).<|endoftext|> -TITLE: How to solve DE that relate values of derivatives at different points? -QUESTION [5 upvotes]: I try to solve for the specific function -$f(x) = \frac{2-2a}{x-1} \int_0^{x-1} f(y) dy + af(x-1)$ -It looks similar to the function used to find Renyi's parking constant because it came out from a simple generalization of that problem. -The skill I have gained in my differential equations class can't even solve -$f(x) = f'(x-1)$ -I'm not looking for anyone to solve it. I just want to know the techniques for solving DE where functions and their derivatives are evaluated at different points. (What's the terminology for this kind of DE?) - -REPLY [2 votes]: Let $u=x-1$. -Then $f(u+1)=\dfrac{2(1-a)}{u}\int_0^uf(y)~dy+af(u)$ -$\dfrac{2(a-1)}{u}\int_0^uf(y)~dy=af(u)-f(u+1)$ -$2(a-1)\int_0^uf(y)~dy=auf(u)-uf(u+1)$ -$2(a-1)f(u)=auf'(u)+af(u)-uf'(u+1)-f(u+1)$ -$uf'(u+1)+f(u+1)-auf'(u)+(a-2)f(u)=0$ -Let $f(u)=\int_Ce^{us}K(s)~ds$. -Then $u\int_Cse^{(u+1)s}K(s)~ds+\int_Ce^{(u+1)s}K(s)~ds-au\int_Cse^{us}K(s)~ds+(a-2)\int_Ce^{us}K(s)~ds=0$ -$\int_Cs(e^s-a)e^{us}K(s)~d(us)+\int_C(e^s+a-2)e^{us}K(s)~ds=0$ -$\int_Cs(e^s-a)K(s)~d(e^{us})+\int_C(e^s+a-2)e^{us}K(s)~ds=0$ -$[s(e^s-a)e^{us}K(s)]_C-\int_Ce^{us}~d(s(e^s-a)K(s))+\int_C(e^s+a-2)e^{us}K(s)~ds=0$ -$[s(e^s-a)e^{us}K(s)]_C-\int_C(s(e^s-a)K'(s)+((s+1)e^s-a)K(s))e^{us}~ds+\int_C(e^s+a-2)e^{us}K(s)~ds=0$ -$[s(e^s-a)e^{us}K(s)]_C-\int_C(s(e^s-a)K'(s)+(se^s-2(a-1))K(s))e^{us}~ds=0$ -$\therefore s(e^s-a)K'(s)+(se^s-2(a-1))K(s)=0$ -$s(e^s-a)K'(s)=-(se^s-2(a-1))K(s)$ -$\dfrac{K'(s)}{K(s)}=-\dfrac{e^s}{e^s-a}+\dfrac{2(a-1)}{s(e^s-a)}$ -$\int\dfrac{K'(s)}{K(s)}ds=\int\left(-\dfrac{e^s}{e^s-a}+\dfrac{2(a-1)}{s(e^s-a)}\right)ds$ -$\ln K(s)=-\ln(e^s-a)+\int_k^s\dfrac{2(a-1)}{r(e^r-a)}dr+c_1$ -$K(s)=\dfrac{ce^{\int_k^s\dfrac{2(a-1)}{r(e^r-a)}dr}}{e^s-a}$ -$\therefore f(u)=\int_C\dfrac{ce^{us+\int_k^s\dfrac{2(a-1)}{r(e^r-a)}dr}}{e^s-a}ds$ -$f(x)=\int_C\dfrac{ce^{xs+\int_k^s\dfrac{2(a-1)}{r(e^r-a)}dr}}{e^s-a}ds$ -But since the above procedure is in fact suitable for any complex number $s$, -$\therefore f_n(x)=\int_{a_n}^{b_n}\dfrac{c_ne^{x(p_n+q_ni)t+\int_{k_n}^{(p_n+q_ni)t}\dfrac{2(a-1)}{r(e^r-a)}dr}}{e^{(p_n+q_ni)t}-a}d((p_n+q_ni)t)$ -For some $x$-independent real number choices of $a_n$, $b_n$, $p_n$, $q_n$ and $k_n$ such that: -$\displaystyle\lim_{t\to a_n}(p_n+q_ni)te^{x(p_n+q_ni)t+\int_{k_n}^{(p_n+q_ni)t}\dfrac{2(a-1)}{r(e^r-a)}dr}=\lim_{t\to b_n}(p_n+q_ni)te^{x(p_n+q_ni)t+\int_{k_n}^{(p_n+q_ni)t}\dfrac{2(a-1)}{r(e^r-a)}dr}$ -$\int_{a_n}^{b_n}\dfrac{e^{x(p_n+q_ni)t+\int_{k_n}^{(p_n+q_ni)t}\dfrac{2(a-1)}{r(e^r-a)}dr}}{e^{(p_n+q_ni)t}-a}d((p_n+q_ni)t)$ converges<|endoftext|> -TITLE: Is there a (deep) relationship between these various applications of the exponential function? -QUESTION [18 upvotes]: Here is a list of some applications of the exponential function. -1) The exponential mapping in Lie theory. -I put this first because my intuition tells me that this must be the most fundamental, or deep, way of thinking about the exponential function. I have often been misled by my intuition however, and the main reason I feel strongly about this is because of how fundamental I consider Lie theory to be.
-2) Fourier Series -3) Roots of Unity -4) Gaussian Distribution -5) Boltzmann Distribution -There are certainly other applications, but it always kind of bothered me that I couldn't use symmetry methods to see how (all of) these applications are related. Is it possible that there is no way to do this, i.e. that it's just a happy accident that the exponential function has these applications and it is unrelated to any continuous symmetries? - -REPLY [7 votes]: Questions and Answers in MSE and two other references: - Ad 1) : -How to derive these Lie Series formulas & chain references - Ad 3) : -Ambiguous matrix representation of imaginary unit? idem - Extra : -Where the exponent in the Laplace Transform comes from - Ad 4) : -Gaussian Blur as a sample application of the Extra item - -Updates.Ad 2). The Fourier transform -is a special case of the double-sided Laplace transform: -$$ -F(p) = \int_{-\infty}^{+\infty} e^{-pt}\,f(t) dt \quad \Longrightarrow \quad -F(i\omega) = \int_{-\infty}^{+\infty} e^{-i\omega t} f(t)\,dt -$$ -The Fourier transform, in turn, is a generalization of the complex -Fourier series: -start with equation (20) in the Wolfram reference and read until the end. -Ad 5). In this reference - -Derivation of the Boltzmann Distribution - -it is argued on page 23 (with obviously a typo in it) that the Boltzmann probability distribution $f(E)$ must have the following form: -$$ -f(E_1) \times f(E_2) = h(E_1+E_2) -$$ -Let's elaborate on this a little bit: -$$ -f(E_1) \times f(E_2) = h(E_1+E_2) \quad \Longrightarrow \quad h(E) = h(E+0) = f(0)f(E) -$$ -Derivative: -$$ -h'(E) = f(0)f'(E) = \lim_{\delta\to 0} \frac{h(E+\delta)-h(E)}{\delta} = -\lim_{\delta\to 0} \frac{f(E)f(\delta)-f(0)f(E)}{\delta} =\\ -f(E)\,\lim_{\delta\to 0} \frac{f(\delta)-f(0)}{\delta} = -f(E) f'(0) \quad \Longrightarrow \quad f'(E) = \frac{f'(0)}{f(0)} f(E) -$$ -Continuing in terms of the article: -$$ -\frac{df(E)}{f(E)} = \frac{-dE}{E_c} \quad \Longrightarrow \quad f(E) = A e^{-E/E_c} -$$ -Which is the Ansatz for the Boltzmann distribution. -Ad 3). According to Wikipedia, De Moivre's formula is: -$$ -\left[cos(x)+i\sin(x)\right]^n = \cos(nx) + i\sin(nx) -$$ -And this can be proved for any integer $n$ , quite independent of Euler's formula. -It's rather the other way around: because of the pattern $\;f(x)^n = f(nx)\;$ , de Moivre's formula can be considered as a heuristics for Euler's formula. -But uniquesolution is quite right: Probably the most fundamental fact about it is that it is the only measurable function for which $f(x+y)=f(x)f(y)$ for all $x,y$ Can we mimic this behavior of $\,e^x$ with the function $f(x) = \cos(x)+i\sin(x)$ ? From trigonometry we know that: -$$ -\cos(x+y) = \cos(x)\cos(y) - \sin(x)\sin(y)\\ -\sin(x+y) = \sin(x)\cos(y) + \cos(x)\sin(y) -$$ -Hence: -$$ -f(x+y) = \cos(x+y) + i \sin(x+y) =\\ \left[\cos(x)\cos(y) - \sin(x)\sin(y)\right] + i \left[\sin(x)\cos(y) + \cos(x)\sin(y)\right] -=\\ \left[\cos(x) + i \sin(x)\right]\left[\cos(y) + i \sin(y)\right] = f(x)f(y) -$$ -So our $f(x)$ behaves like an exponential function.<|endoftext|> -TITLE: Proving that $||A-B||=||A+B||\Leftrightarrow AB=0$ -QUESTION [5 upvotes]: I have to prove that -\begin{equation*} -||A-B||=||A+B||\Leftrightarrow AB=0 -\end{equation*} -and I was wondering if this approach is correct, or if there's a better/more elegant way to prove this. -Given n-dimensional vectors A and B, we can write $||A-B||=||A+B||$ as: -\begin{equation*} -\sqrt{\sum\limits_{j=1}^{n}(a_j-b_j)^2}=\sqrt{\sum\limits_{j=1}^{n}(a_j+b_j)^2}. 
-\end{equation*} -Squaring both sides and expanding the binomials: -\begin{equation*} -\sum\limits_{j=1}^{n}a^2_j-2a_jb_j+b_j^2=\sum\limits_{j=1}^{n}a^2_j+2a_jb_j+b_j^2. -\end{equation*} -Simplifying: -\begin{equation*} --\sum\limits_{j=1}^{n}a_jb_j=\sum\limits_{j=1}^{n}a_jb_j,~\text{which holds true if and only if}~\sum\limits_{j=1}^{n}a_jb_j=0. -\end{equation*} -Since $AB$ is equivalent to $\sum\limits_{j=1}^{n}a_jb_j$, then $||A-B||=||A+B||\Leftrightarrow AB=0$ -Thanks in advance. - -REPLY [8 votes]: What you've done is correct, but I think it's better to work without coordinates; just with the definition of norm in terms of the dot product: -$$ -\| A \| = +\sqrt{A\cdot A} \ . -$$ -Then you may observe that, since $\|A \| \geq 0$, -$$ -\|A+B\| = \|A -B\| \ \Longleftrightarrow \ \|A +B\|^2 = \|A-B\|^2 . -$$ -Now, for instance, compute the difference -\begin{align} - \|A +B\|^2 - \|A-B\|^2 &= (A+B)\cdot (A+B) - (A-B)\cdot(A-B) \\ - &= A\cdot A + A\cdot B + B\cdot A + \cdots -\end{align} -EDIT. I forgot to point out an obvious geometric interpretation of this result: if you draw a parallelogram with sides $A$ and $B$, then $A+B$ and $A-B$ are the diagonals of the parallelogram, right? These diagonals are equal if and only if...?<|endoftext|> -TITLE: Continuous coloring of a Mandelbrot fractal -QUESTION [8 upvotes]: I've recently started making a small fractal app in Javascript using the famous Mandelbrot bulb $(z = z^2 + c)$. I've been trying to find the best method of coloring the points on the complex plane, and I've come across some very interesting ideas: -http://linas.org/art-gallery/escape/escape.html -http://en.wikibooks.org/wiki/Fractals/Iterations_in_the_complex_plane/Mandelbrot_set#Real_Escape_Time -So I've been focusing on this "normalized iteration count algorithm" and I've run into a small mental hurdle sorting out what needs to be calculated. Basically, I'm confused about the first equation that shows up in that second link. Assuming we have a complex number $Z$ which is the end result of $n$ iterations. I can't quite figure out what "Zx2" or "zy2" is and how how one might add them together or take the square root of their sum. Is that just shorthand for some basic complex operators, or is there something else going on? Basically, my problem is that I'm not particularly good at reading mathematical notation in this area. -Anyway...any directional guidance you can offer would be extremely helpful. Thanks in advance! - -REPLY [7 votes]: Based on the C code given there, Zx is just the real part $\Re z$ and Zy is just the imaginary part $\Im z$. -Remember that the complex iteration formula for the Mandelbrot set -$z=z^2+c$ -when expanded to real variables looks like -$x=x^2-y^2+u$ -$y=2xy+v$ -where $z=x+iy$ and $c=u+iv$. -The variables with a 2 appended are the squares, as can be seen from how they were defined, so for instance $|z|=\sqrt{x^2+y^2}$ is sqrt(Zx2+Zy2) which is used for determining how much the iteration diverges (escapes to infinity), which is used by the coloring functions. - -REPLY [6 votes]: edit: "Zx2" and "Zy2" are defined in the C code in the second code block at this point in your wikibooks link. They are the squares of the real and imaginary parts, respectively, of $z$. - -I'm not sure if it's exactly the same, but what I've used is: -$$n-\log\left(\frac{\log(|z|)}{\log(r)}\right)$$ -where $z$ is the result of $n$ iterations and has just escaped as noted by $|z|>r$ (where $r$ is the escape radius). 
This generates real numbers in $[0,n)$, so you could divide by $n$ to end up with a number in $[0,1)$.<|endoftext|> -TITLE: What are good resources for learning predicate logic / predicate calculus? -QUESTION [8 upvotes]: I'm trying to learn predicate logic. -And I'm looking for some good resources on it: -I've seen that I learn better when I can program -So I was wondering if there was a 'predicate logic' programning language? -Maybe an interactive tutorial? -Maybe lots of examples, something like: Predicate Logic by Example? -Or at least some pretty basic books? -TIA - -REPLY [2 votes]: There are lots of good books on symbolic logic, Understanding Symbolic Logic being one. -However, there aren't many of them using a "programming" approach, Haskell Road to Logic, Maths, and programming being such a gem.<|endoftext|> -TITLE: How do you show the ring of formal laurent series is well-defined? -QUESTION [5 upvotes]: The only place I've encountered well-definition is with proving an operation defined on an equivalence class is independent of the choice of representative. -On my homework, it asks us to show that the ring of formal Laurent series is well-defined, and I don't understand what exactly I need to show. However, I don't understand what I'm trying to prove. I've read the wikipedia article and don't understand how there is anything to prove in this case. -If anyone could either point me to some better references explaining well-definition or explain what I need to do to prove that a laurent series is well-defined, I would appreciate it. -Thanks, :) - -REPLY [7 votes]: I'll make my comment into an answer so that it can be marked off. -The issue is probably not whether a formal Laurent series is well-defined by itself, but rather whether the operations you are defining on formal Laurent series (and in particular the operation of multiplication of two formal Laurent series) is well-defined, in the sense that if you take any two formal Laurent series, then the definition of "product" will in fact yield a formal Laurent series. When you work with formal power series (only nonnegative exponents), one usually defines the product by: -$$\left(\sum_{n=0}^{\infty}a_nx^n\right)\left(\sum_{n=0}^{\infty}b_nx^n\right) = \sum_{n=0}^{\infty}\left(\sum_{i+j=n}a_i b_j\right)x^n.$$ -In this case, since $a_k=b_k=0$ if $k<0$, then the definition makes sense, as each term on the right hand side is a finite sum, which makes sense. If you try doing the same thing with formal Laurent series, where the index runs from $-\infty$ to $\infty$, then it is not obvious that this definition always results in something that you can call a formal Laurent series. So one needs to check that it does in fact yield a formal Laurent series, and that the product so defined makes the set into a ring. -So here, the issue of "well-defined"ness is not like the one when you define a function in terms of representatives (in terms of the "name" of an object when the object may have many different names), but rather in terms of whether the function actually makes sense for every input and yields an appropriate output. 
It is the same issue that arises if you try to define a map $f\colon(0,1)\to\mathbb{N}$ by taking a number in decimal expansion $0.a_1a_2a_3\ldots$ and "defining" $f(0.a_1a_2a_3\ldots) = \cdots a_3a_2a_1$; this map is not well-defined because for some inputs the output does not lie in the range or does not make sense.<|endoftext|> -TITLE: Proving that Ring of Complex Entire functions is neither Artinian nor Noetherian -QUESTION [9 upvotes]: Question: Prove that the Ring of Complex Entire functions is neither Artinian nor noetherian. -Proof: Clearly $R$ is not Artinian because it is a commutative integral domain which is not a field, and $R$ is not noetherian because it is not a factorisation domain. -Is there a proof of this theorem using the Ascending / Descending Chain condition for Artinian / Noetherian rings? - -REPLY [12 votes]: Let $J_n$ be the ideal of entire functions vanishing on the first $n$ positive integers, and let $I_n$ be the ideal of entire functions vanishing on positive integers greater than $n$. Then $$J_1\supset J_2\supset J_3\supset\cdots,$$ $$I_1\subset I_2\subset I_3\subset\cdots,$$ and all of these containments are proper. One way to see that the containments are proper is to use the fact that given any sequence of complex numbers $a_1,a_2,\ldots$, there is an entire function $f$ such that $f(n)=a_n$ for each positive integer $n$. For more on this, see these MathOverflow questions.<|endoftext|> -TITLE: Adding powers of $i$ -QUESTION [8 upvotes]: I've been struggling with figuring out how to add powers of $i$. -For example, the result of $i^3 + i^4 + i^5$ is $1$. But how do I get the result of $i^3 + i^4 + ... + i^{50}$? Writing it all down would be pretty mundane... -It has to do something with division by 4, since the "power cycle" of $i$ repeats every fourth power. -Thank you for any clues. - -REPLY [8 votes]: HINT $\rm\quad\quad i^3 + \: i^4 \; + \:\;\cdots\;\: + \; i^k = 0\ \:\iff\: k\:\equiv\: 2 \:\pmod 4$ -Generally, suppose that $\rm\: z \:$ has order $\rm m>1\:.$ Therefore $\rm\; z^n = 1 \iff\ m\:|n\;\;\:$ hence: -LEMMA $\quad\rm z^j + z^{j+1} + \:\cdots + z^k = 0\;\; \iff \rm\: k \:\equiv\;\; j \:-\: 1 \:\pmod m $ -Proof: $\;\;\;\;\rm \displaystyle z^j \ (1+z+\cdots + z^{k-j}) \;=\; z^j \: \frac{1-z^{k-j+1}}{1-z} = 0 \;\iff\; \rm m\:|\:k-j+1\quad\;$<|endoftext|> -TITLE: Rationals of the form $\frac{p}{q}$ where $p,q$ are primes in $[a,b]$ -QUESTION [16 upvotes]: Consider the closed interval $[0,1]$, there is $\frac{2}{3} \in [0,1]$ where $p=2$ and $q=3$. Similarly consider $[2,3]$, one can have $\frac{5}{2} \in [2,3]$ where $p=5$ and $q=2$. Does every interval of the form $[a,b]$, where $a,b \in \mathbb{R}$ contain a rational of this kind. If yes, how can we prove it? - -REPLY [20 votes]: Roughly speaking this asks whether the quotients of two primes -are dense in the positive reals. The answer is yes. -Let $0 < a < b$ and let $q$ be a prime. -Then there will a a prime $p$ with $a < p/q\le b$ if and only if -$\pi(bq) > \pi(aq)$ where $\pi$ is the prime-counting function. -But by the prime number theorem, as $q\to\infty$, -$$\frac{\pi(bq)}{\pi(aq)}\sim\frac{b\log(aq)}{a\log(bq)} -=\frac{b(\log q+\log a)}{a(\log q+\log b)}\sim\frac ba>1.$$ -For all large enough $q$, $\pi(bq)/\pi(aq) > 1$ as required.<|endoftext|> -TITLE: Explain why calculating this series could cause paradox? 
-QUESTION [5 upvotes]: $$\ln2 = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots - = (1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots) - 2(\frac{1}{2} + \frac{1}{4} + \cdots)$$ - $$= (1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots) - (1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots) = 0$$ -thanks. - -REPLY [3 votes]: In Matt's answer I began discussing with Matt E over several comments, which I think should be written out as an answer. -As Matt pointed out, this is a rearrangement of this conditionally convergent series which is why you have this sort of paradox. -However it was unclear about how this is exactly a rearrangement, as the equities seems perfectly legal - even for a conditionally convergent series. - -$1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \ldots = 1 + (\frac{1}{2} - 1) + \frac{1}{3} + (\frac{1}{4} - \frac{1}{2})$ is the first step, which is legal as you simply replace the negative terms by pairs of a positive and negative terms, but you don't change the order of summation from the original series which makes this exchange legit. -$1 + (\frac{1}{2} - 1) + \frac{1}{3} + (\frac{1}{4} - \frac{1}{2}) = 1 + \frac{1}{2} + \frac{1}{3} + \ldots - 1 - \frac{1}{2} - \ldots$ this is where things break, you've taken a conditionally convergent series and changed the order, basically we've performed infinitely many commutations in order to rearrange the series into this order, and that is what breaks the summation. - -The rearrangement wasn't very obvious, but it was hiding there with its big sharp pointy teeth... and when you stepped too close to its cave - it jumped out at you and bit your head off. -The series in the question is closely reminding me of the one my calculus teacher used when he first showed us what changing conditionally convergent series can do, although his was even less obvious.<|endoftext|> -TITLE: How do you show monotonicity of the $\ell^p$ norms? -QUESTION [64 upvotes]: I can't seem to work out the inequality $(\sum |x_n|^q)^{1/q} \leq (\sum |x_n|^p)^{1/p}$ for $p \leq q$ (which I'm assuming is the way to go about it). - -REPLY [2 votes]: For completeness I will add this as an answer (it is a slight adaptation of the argument from AD.): -For $a\in[0,1]$ and any $y_i\geq 0, i\in\mathbb N$, with at least one $y_i\neq0$ and the convention that $y^0=1$ for any $y\geq0$, \begin{equation}\label{*}\tag{*}\sum_{i=1}^\infty \frac{y_i^a}{\left(\sum_{j=1}^\infty y_j\right)^a}=\sum_{i=1}^\infty \left(\frac{y_i}{\sum_{j=1}^\infty y_j}\right)^a\geq \sum_{i=1}^\infty \frac{y_i}{\sum_{j=1}^\infty y_j}=1,\end{equation} -where I have used $y^a\geq y$ whenever $y\in[0,1]$ and $a\in[0,1]$. (This can be derived for instance from the concavity of $y\mapsto y^a$.) -For $p=q$, there is nothing to prove. For $1\le p< q\le\infty$ and $x=(x_i)_{i\in\mathbb N}\in \ell^q$, set $a\overset{\text{Def.}}=\frac pq\in[0,1]$ and $y_i\overset{\text{Def.}}=\lvert x_i\rvert^q\ge0$. Then \eqref{*} yields -\begin{equation*} -\sum_{i=1}^\infty \lvert x_i\rvert^p\geq\left(\sum_{i=1}^\infty \lvert x_i\rvert^{q}\right)^{\frac pq}, -\end{equation*} -i.e. -\begin{equation*} -\lVert x\rVert_{\ell^q}\le\lVert x\rVert_{\ell^p}. -\end{equation*}<|endoftext|> -TITLE: $5$-vertex graphs with vertices of degree $2$ -QUESTION [5 upvotes]: I'm trying to show that all graphs with $5$ vertices, each of degree $2,$ are isomorphic to each other. Is there a more clever way than simply listing them all out? 
- -REPLY [9 votes]: I'm assuming you do not allow multi-edges, as otherwise there is a trivial counterexample (the cycle of length $5$, versus a disconnected graph consisting of a triangle and two vertices joined by two edges). -Pick one vertex $v_0$; it must be joined to two other distinct vertices, which we may call $v_{-1}$ and $v_1$. Each of those must be joined to another vertex; can they be joined to each other? Can they both be joined to the same vertex that is not yet listed? Consider the possibilities. Then see where each of them leads you. - -REPLY [9 votes]: Consider this simple, powerful theorem: all vertices of a graph have even degree if and only if the set of its edges can be partitioned in cycles. -The number of edges of your graph $G$ is $e(G) = \frac 1 2 \sum_{x\in V(G)} d(x) = 5$. Observe that there are not isolated vertices and that every cycle has at least three edges. Therefore the only possible partition in cycles is $E(G)$ itself, i.e. all the graphs with the required property are 5-cycles and thus they are isomorphic.<|endoftext|> -TITLE: normalization factor for restricted density -QUESTION [5 upvotes]: Bounty update: this can be solved by change of basis, but I'm intrigued by David's solution relying on Fourier Transform of Dirac Delta function, so the bounty is for whoever finds a way to fix his solution to give the right result. -Suppose I have a non-negative real-valued function over $d$-dimensional real vectors as follows -$$f(\mathbf{x})=\exp(-\mathbf{x}' A \mathbf{x})$$ -Where $A$ is some symmetric positive definite $d\times d$ matrix. What is the normalization factor to turn this into a valid density over the following set? -$$S_d=\{(x_1,\ldots,x_d)\in \mathbf{R}^d | \sum_i x_i=0 \}$$ -Below, David Bar Moshe gives a general solution to computing that integral over space orthogonal to some vector $v$, but I suspect it has a mistake because the answer depends on the norm of $v$. -In particular, suppose $A$ is $d$-by-$d$ identity matrix. Let $v$ be a vector of all ones. Because of symmetry, integrating over space orthogonal to $v$ should be the same as as $d-1$ dimensional Gaussian integral, ie $\pi^{(d-1)/2}$, whereas David's solution gives -$$\frac{\pi^{(d-1)/2}}{d}$$ - -REPLY [3 votes]: This is a correction of the answer of David Bar Moshe edit: and generalizing it for the case, when $\mathbf{A}$ is degenerate. Using formulas for Fourier and inverse Fourier transforms we can write -$$f(0)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\mathrm{d}k\int_{-\infty}^{\infty} \mathrm{d}y f(y)e^{iky}. (1)$$ -Suppose $S_d(y)=$"$S_d$ shifted by $(y,0,\dots,0)$" and -$f(y)=\int_{\mathbf{x} \in S_d(y)} \mathrm{d}\mathbf{x} \exp(-\mathbf{x}^T \mathbf{A}\mathbf{x}).$ -Then using the formula (1) we will get -$$N^{-1}=f(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\mathrm{d}k\int_{-\infty}^{\infty} \mathrm{d}y f(y)e^{iky} =$$ -$$\frac{1}{2\pi}\int_{-\infty}^{\infty}\mathrm{d}k\int_{-\infty}^{\infty} \mathrm{d}y \int_{\mathbf{x} \in S_d(y)}\mathrm{d}\mathbf{x} \exp(-\mathbf{x}^T \mathbf{A}\mathbf{x} + iky) =$$ -$$\frac{\|\mathbf{v}\|}{2\pi}\int_{-\infty}^{\infty}\mathrm{d}k \int_{\mathbf{x} \in \mathbb{R}^d}\mathrm{d}\mathbf{x} \exp(-\mathbf{x}^T \mathbf{A}\mathbf{x} + ik \mathbf{v}^T \mathbf{x}),$$ -where $\mathbf{v}$ is vector, orthogonal to the plane we are interested in. For example, we can take $\mathbf{v}=(1,\dots,1)$. Note $\|\mathbf{v}\|$ in the last integral, appearing from change of coordinates from $y$ and coordinates in $S_d$ to coordinates in $\mathbb{R}^d$. 
-Now $N^{-1}=$ -$$\frac{\|\mathbf{v}\|}{2\pi}\int_{-\infty}^{\infty} \mathrm{d}k \int_{\mathbf{x} \in \mathbb{R}^d}\mathrm{d}\mathbf{x} \exp(-(\mathbf{x}+\frac{i}{2}k\mathbf{A}^{-1}\mathbf{v})^T \mathbf{A}(\mathbf{x}+\frac{i}{2}k\mathbf{A}^{-1}\mathbf{v}) - \frac{k^2}{4} \mathbf{v}^T \mathbf{A}^{-1}\mathbf{v})=$$ -$$\frac{\|\mathbf{v}\|}{2\pi} \frac{\pi^{d/2}}{\sqrt{\det{A}}} \frac{2\sqrt{\pi}}{\sqrt{\mathbf{v}^T \mathbf{A}^{-1}\mathbf{v}}}=\frac{\|\mathbf{v}\| \pi^{(d-1)/2}}{\sqrt{\det{A}}\sqrt{\mathbf{v}^T \mathbf{A}^{-1}\mathbf{v}}}=\frac{\|\mathbf{v}\| \pi^{(d-1)/2}}{\sqrt{\mathbf{v}^T \mathbf{C}\mathbf{v}}},$$ -where $\mathbf{C}$ is adjugate matrix to $\mathbf{A}$. Initial integral and the answer both are continuous in $\mathbf{A}$ when $A$ restricted to $S_d$ is not degenerate. From this we can conclude, that equation -$N^{-1}=\frac{\|\mathbf{v}\| \pi^{(d-1)/2}}{\sqrt{\mathbf{v}^T \mathbf{C}\mathbf{v}}}$ -is true whenever $\mathbf{A}$ restricted to $S_d$ is not degenerate (i.e. even if $\mathbf{A}$ is degenerate).<|endoftext|> -TITLE: What are some examples of theories stronger than Presburger Arithmetic but weaker than Peano Arithmetic? -QUESTION [22 upvotes]: What are some examples of theories stronger than Presburger Arithmetic but weaker than Peano Arithmetic? Are all such theories decidable? If not, by what methods other than Gödelization can undecidability be established? -EDIT: Both answers have indicated Robinson Arithmetic as an intermediate theory, but I don't understand how it can be "stronger" than Presburger Arithmetic in any reasonable way, because Robinson Arithmetic cannot prove commutativity of addition. I have attempted to define "stronger" in the comments. I would appreciate any suggestions for how to clarify the question and this definition. - -REPLY [6 votes]: The first-order theory of $(\mathbb{N},+,|_2)$ (here $x |_2 y$ means that $x$ is a power of $2$ and that $x$ divides $y$) is decidable. This can be proved using automata. Note that the set of powers of 2 is obviously definable as $x |_2 x$. -Moreover, there are many concrete decidable expansions. One can see this as follows: 1) this first-order theory is bi-interpretable with the weak monadic-second order theory of $(\mathbb{N},n \mapsto n+1)$. Thus decidable expansions of one induce decidable expansions of the other; 2) there are many concrete decidable expansions of the monadic-second order theory of $(\mathbb{N},n \mapsto n+1)$ by unary predicates. For instance, every morphic predicate (such as the Thue-Morse word). -For a characterisation of the decidable expansions see: -Alexander Rabinovich On decidability of monadic logic of order over the naturals extended by monadic predicates. Inf. Comput. 205(6): 870-889 (2007)<|endoftext|> -TITLE: $5^n+n$ is never prime? -QUESTION [60 upvotes]: In the comments to the question: If $(a^{n}+n ) \mid (b^{n}+n)$ for all $n$, then $ a=b$, there was a claim that $5^n+n$ is never prime (for integer $n>0$). -It does not look obvious to prove, nor have I found a counterexample. -Is this really true? -Update: $5^{7954} + 7954$ has been found to be prime by a computer: http://www.mersenneforum.org/showpost.php?p=233370&postcount=46 -Thanks to Douglas (and lavalamp)! - -REPLY [59 votes]: A general rule-of-thumb for "is there a prime of the form f(n)?" questions is, unless there exists a set of small divisors D, called a covering set, that divide every number of the form f(n), then there will eventually be a prime. See, e.g. Sierpinski numbers. 
-Running WinPFGW (it should be available from the primeform yahoo group http://tech.groups.yahoo.com/group/primeform/), it found that $5^n+n$ is 3-probable prime when n=7954. Moreover, for every n less than 7954, we have $5^n+n$ is composite. -To actually certify that $5^{7954}+7954$ is a prime, you could use Primo (available from http://www.ellipsa.eu/public/misc/downloads.html). I've begun running it (so it's passed a few more pseudo-primality tests), but I doubt I will continue until it's completed -- it could take a long time (e.g. a few months). -EDIT: $5^{7954}+7954$ is officially prime. A proof certificate was given by lavalamp at mersenneforum.org. - -REPLY [12 votes]: If $n$ is odd, then $5^n + n$ is always even because LSD of $5^n$ is always $5$ for $n \gt 0$. Hence, for odd $n ( n \gt 0)$, $5 ^n + n$ is composite. - -REPLY [7 votes]: After reading Douglas S. Stones comment I asked mathematica to check if $5^{2\times 3977} + 2\times 3977$ is prime and after about $27$ seconds, found that it is indeed prime. So the claim $5^n +n$ is never prime is false. -Edit: It turns out the function I used in mathematica is not a deterministic algorithm. However we can still say the claim $5^n +n$ is never prime is false is most likely true.<|endoftext|> -TITLE: Operations research book to start with -QUESTION [16 upvotes]: for somebody having a quite strong background in Mathematics, which are some good books for the domain of Operations research? I guess there are textbooks covering topics like linear and nonlinear optimization, convex optimization and quadratic programming, dynamic programming, multicriterial optimizations (did I miss something?) -Thanks, -Lucian - -REPLY [2 votes]: introduction to operation research-Hillier and Libermann -They give pretty good motivation for the material being presented<|endoftext|> -TITLE: Finding seven disjoint seven element subsets of $\{1,2, ..., 49\}$ with same sum -QUESTION [6 upvotes]: I have a set containing numbers $1$ to $49: \{1,2,3, \cdots, 49\}$. -Now, I want to divide the set into $7$ subsets such that each subset should contain $7$ elements and sum of the elements of each subset should be $175$. -Is it possible to prove that such subsets exist? - -REPLY [7 votes]: Magic squares are fine, but here any single 7x7 Latin square works. An $n\times n$ Latin square is a square with the property that all the integers $1\ldots n$ appear exactly once -on each row and column. There are several ways of constructing these. When $n=7$ we have for example the following -$$ -\begin{array}{ccccccc} -1&2&3&4&5&6&7\\ -7&1&2&3&4&5&6\\ -6&7&1&2&3&4&5\\ -5&6&7&1&2&3&4\\ -4&5&6&7&1&2&3\\ -3&4&5&6&7&1&2\\ -2&3&4&5&6&7&1\\ -\end{array} -$$ -Given an $n\times n$ Latin square we can construct a solution to this problem (or its generalization of partitioning the integers $\{1,2,\ldots,n^2\}$ into $n$ equal sum groups of $n$ each) as follows. -Add $n(i-1)$ to all the entries on row $\#i$. After that the sum of the entries on any column equals $\sum_{i=1}^ni+ n\left(\sum_{i=1}^{n}(i-1)\right)$, which is manifestly independent of the column. Furthermore, the integer $n(i-1)+j$, $1\le i,j\le n$, can only appear on row $\#i$, and does occur there exactly once (wherevere the entry $j$ of that row of the Latin square resides). The above 7x7 Latin square gives rise to the groups -$$\{1,7+7=14,6+14=20,5+21=26,4+28=32,3+35=38,2+42=44\}$$ from the first column, $\{2,8,21,27,33,39,45\}$ from the second, $\{3,9,15,28,34,40,46\}$ from the third column, -and so forth. 
-It is also possible to construct magic squares from Latin squares, but then you need richer structure. You need two so called mutually ortogonal Latin squares (=MOLS).<|endoftext|> -TITLE: How can I write this proof about partial derivatives better? -QUESTION [5 upvotes]: This question isn't about the actual substance of math so much as it is about style. I'm not going to school, so my feedback on these sorts of problems is limited. How would you improve this proof's readability? And if this isn't the sort of question that's suitable here, where would be a more appropriate venue? The problem is pretty straightforward (from little Spivak): -If $f:\mathbb{R}^2\rightarrow \mathbb{R}$ and $D_2f=0$, show that $f$ is independent of the second variable. If $D_1f=D_2f=0$, show that $f$ is constant. -The definition of "independent of the second variable" that we are using is: -A function $f:\mathbb{R}^2 \rightarrow \mathbb{R}$ is independent of the second variable if for each $x\in \mathbb{R}$ we have $f(x,y_1)=f(x,y_2)$ for all $y_1,y_2\in\mathbb{R}$. -My solution as I have written it is: -Assume $f:\mathbb{R}^2\rightarrow \mathbb{R}$ and $D_2f=0$. Let $x$ be arbitrary. By integration, $\int f_2(x,y)=\int 0=C$ for some constant $C$. It follows that $f(x,y_1)=f(x,y_2)=C$ for all $y_1,y_2\in\mathbb{R}$. Since $x$ was arbitrary, it follows from the definition that $f$ is independent of the second variable. -To show that $f$ is constant when $D_1f=D_2f=0$ we proceed by contradiction. Assume that $f$ is not constant. Then there exists some $x_1,y_1,x_2,y_2$ such that $f(x_1,y_1) \ne f(x_2,y_2)$. Since $f$ is independent of the second variable, it follows that $f(x_1,y_1)=f(x_1,y_2)$ and $f(x_1,y_2) \ne f(x_2,y_2)$. By the mean value theorem there exists $x\in(x_1,x_2)$ such that -$$Df_1(x,y_2)=\displaystyle\frac{f(x_2,y_2)-f(x_1,y_2)}{x_2-x_1} \ne 0$$ -which is the desired contradiction. - -REPLY [4 votes]: I've marked this as "community wiki" for myself... -Your argument is a bit vague in the first paragraph; you have an indefinite integral, but what you actually want is the function; that is, you want $f(x,y_1)=C$. Edit: The problem with this is that, if you recall, the indefinite integral does not produce a function, but produces a family of functions, namely all the antiderivatives of your function, and not a value of a particular function. -It would be better to use the Fundamental Theorem of Calculus, which you can use to get specific values of a function (or to be more precise, the difference between two values of a function), and write something along these lines: - -Let $f\colon\mathbb{R}^2\to\mathbb{R}$ be a function such that $D_2f=0$. Fix an arbitrary $x$; then for any $y$ we have: - $$f(x,y)-f(x,0) = \int_0^y D_2f(x,t)dt = \int_0^y 0dt = 0,$$ - so for fixed $x$, $f(x,y)=f(x,0)$ for all $y$. Therefore, $f$ is independent of the second variable. - -You may, of course, replace $0$ with another point above. -For the second part, why not use the first part? From $D_2f=0$ you know $f$ is independent of the second variable; the same argument, by exchanging $x$ and $y$, shows that from $D_1f=0$ you get that $f$ is independent of the first variable. So you have -$$f(x_1,y_1) = f(x_1,y_2) = f(x_2,y_2)$$ -for any $(x_1,y_1)$ and $(x_2,y_2)$; first equality because $f$ is independent of the second variable, second because it is independent of the first. Then you don't have to invoke something like the Mean Value Theorem. 
-But in any case, let me point out that you seem to be assuming that $(x_1,y_1)\neq(x_2,y_2)$ implies both that $x_1\neq x_2$ and $y_1\neq y_2$, but this is not true. You do not know a priori that $x_2\neq x_1$; all you know is that $(x_1,y_1)\neq(x_2,y_2)$, and therefore either $x_1\neq x_2$ or $y_1\neq y_2$ (or both). Your argument breaks down if $x_1=x_2$, and you need to consider that case separately. I would also add at some point, in the case $x_1\neq x_2$, that "we may assume without loss of generality that $x_1\lt x_2$", so that "$x\in(x_1,x_2)$" makes sense.<|endoftext|> -TITLE: Constructing an example that supports the Propositional Interpolation Theorem -QUESTION [5 upvotes]: I'm trying to construct a simple proof of the Propositional Interpolation Theorem. For the following, let $At(\phi)$ be the set of sentence symbols that occur in a sentence $\phi$. Suppose that $\psi$ is a tautological consequence of $\phi$, but neither $\neg\phi$ nor $\psi$ is a tautology, and so $\psi$ is not a tautological consequence of $\phi$ for trivial reasons. I want to show that there is some sentence $\gamma$ such that -i) $\gamma$ is a tautological consequence of $\phi$. -ii) $\psi$ is a tautological consequence of $\gamma$ -iii) $At(\gamma)\subseteq At(\phi)\cap At(\psi)$ -From iii) $\gamma$ must be constructed from the sentence symbols found both in $\phi$ and $\psi$, so I first showed that $At(\phi)\cap At(\psi)\neq\emptyset$. Since neither $\neg\phi$ nor $\psi$ is a tautology, there must exist truth assignments $V,W$ such that $V(\neg\phi)=F$ and $W(\psi)=F$ and from this $V(\phi)=T$ and $W(\phi)=F$. If $At(\phi)\cap At(\psi)=\emptyset$ we may define a truth assignment for sentence symbols $p_i$, -$U(p_i)=V(p_i)$ if $p_i\in At(\phi)$, $U(p_i)=W(p_i)$ if $p_i\in At(\psi)$. -But then $U(\phi)=V(\phi)=T$, but $U(\psi)=W(\psi)=F$, which contradicts the fact that $\psi$ is a tautological consequence of $\phi$. -My question now is, is there some sort of method to construct $\gamma$ such that conditions i) and ii) are satisfied? - -REPLY [4 votes]: You can find some other proofs of the Craig interpolation theorem in this paper by Krajíček.<|endoftext|> -TITLE: Beautiful identity: $\sum_{k=m}^n (-1)^{k-m} \binom{k}{m} \binom{n}{k} = \delta_{mn}$ -QUESTION [40 upvotes]: Let $m,n\ge 0$ be two integers. Prove that -$$\sum_{k=m}^n (-1)^{k-m} \binom{k}{m} \binom{n}{k} = \delta_{mn}$$ -where $\delta_{mn}$ stands for the Kronecker's delta (defined by $\delta_{mn} = \begin{cases} 1, & \text{if } m=n; \\ 0, & \text{if } m\neq n \end{cases}$). -Note: I put the tag "linear algebra" because i think there is an elegant way to attack the problem using a certain type of matrices. -I hope you will enjoy. :) - -REPLY [6 votes]: I will try to give an answer using basic complex variables here. -Suppose we are trying to show that -$$\sum_k {n\choose k} (-1)^{k-m} {k\choose m} = \delta_{mn}.$$ - -Introduce the integral representation -$${k\choose m} -= \frac{1}{2\pi i} -\int_{|z|=\epsilon} -\frac{(1+z)^{k}}{z^{m+1}} \; dz.$$ - -This gives for the sum the integral (the second binomial coefficent -enforces the range) -$$\frac{1}{2\pi i} -\int_{|z|=\epsilon} \frac{1}{z^{m+1}} (-1)^m -\sum_{k=0}^n {n\choose k} (-1)^k -(1+z)^k \; dz -\\ = \frac{1}{2\pi i} -\int_{|z|=\epsilon} \frac{1}{z^{m+1}} (-1)^m -(1-1-z)^n \; dz -\\ = \frac{1}{2\pi i} -\int_{|z|=\epsilon} \frac{z^n}{z^{m+1}} (-1)^{m+n} \; dz.$$ -This integral evaluates to $\delta_{mn}$ by inspection. 
- -We have not made use of the properties of complex integrals here so -this computation can also be presented using just algebra of -generating functions. - -Apparently this method is due to Egorychev although some of it is -probably folklore.<|endoftext|> -TITLE: Proving uniqueness in the structure theorem for finitely generated modules over a principal ideal domain -QUESTION [7 upvotes]: In an introductory algebra course, one proves the Structure theorem for finitely generated modules over a principal ideal domain. -http://en.wikipedia.org/wiki/Structure_theorem_for_finitely_generated_modules_over_a_principal_ideal_domain -In every book that I have looked in, the uniqueness part of the proof is a mess. -Can anyone give me a reference for a nicer proof of the uniqueness part? I am especially interested in a proof of uniqueness when the underlying ring is the ring of polynomials in one variable over a field. - -REPLY [4 votes]: I will give a proof of uniqueness that handles the torsion and torsion-free components simultaneously and that characterizes the invariant factors directly, rather than doing it indirectly through the elementary divisors $p^i$. Specifically, I prove the following result. - -Proposition. Let $A$ be a commutative ring and $I_1 \subseteq I_2 \subseteq \dots \subseteq I_n$ an increasing sequence of proper ideals of $A$ (some or all of which may be zero). Let $E$ be an $A$-module for which an isomorphism - \begin{equation} \tag{*} -E \cong A/I_1 \times \dots \times A/I_n -\end{equation} - holds. -$(1)$ The minimal number of generators of $E$ is $n$. -$(2)$ For $k = 1, \dots, n$, the ideal $I_k$ is equal to the set of all $x \in A$ such that the $A$-module $xE$ can be generated by fewer than $k$ elements. - -Proof of $(1)$. Since $E$ is in an obvious way the homomorphic image of $A^n$, it is clear that $E$ has a generating set consisting of $n$ elements. Conversely, assume that $E$ can be generated by $r$ elements for some $r < n$. Let $M$ be a maximal ideal containing $I_n$. We have an obvious $A$-linear surjection $E \to (A/M)^n$. Consequently, $(A/M)^n$ can be generated by $r$ elements as an $A$-module and hence also as an $A/M$-vector space, which is absurd. -Proof of $(2)$. Let $x \in A$, and let $k \leq n$. For any ideal $I$ of $A$, denote by $I_x$ the ideal of $A$ that is the inverse image of $I$ under the map $A \to A, \ y \mapsto xy$. We have -\begin{equation} -xE \cong A/{(I_1)}_x \times \dots \times A/{(I_n)}_x. -\end{equation} -This decomposition is of the type $\text{(*)}$ except for the fact that some of the factors $A/{(I_j)}_x$ at the end may be zero. By $(1)$, the module $xE$ can be generated by fewer than $k$ elements if and only if the $k$th factor $A/{(I_k)}_x$ is zero, or equivalently when $x \in I_k$. -Note. The existence of $M$ asserted in the proof of $(1)$ follows from Krull's Theorem. However, its existence is trivial when $A$ is a p.i.d., which is the case of interest here. If $A$ is a p.i.d. other than a field, we can let $M = pA$, where $p$ is an irreducible element dividing a generator of $I_n$. If $A$ is a field, let $M = {0}$. -The use of Krull's Theorem can also be avoided entirely by replacing $M$ with $I_n$ and using the fact that there can be no $B$-linear surjection $f \colon B^r \to B^n$, where $B = A/I_n$. For if there were, it would split, say $fg = \operatorname{id}_{B^n}$. 
This is absurd, because the characteristic polynomial of $fg$ is divisible by $X^{n-r}$, as shown here.<|endoftext|> -TITLE: Equations of a projective toric variety -QUESTION [10 upvotes]: Given complete fan $\Delta$ defining a projective toric variety (so that $\Delta$ is the normal fan of some polytope). How do one go on to find a defining ideal of the toric variety in projective space? - Or, going the other way, given binomial equations defining a projective toric variety, is there some way to recover the fan? -In my case, I am trying to recover the fan of the hypersurface $Z(x_0x_3-x_1x_2)$ in $\mathbb{P}^n$ ($n>3$). - -REPLY [12 votes]: To get the equations, one way to find this is to read David Cox's The Homogeneous Coordinate Ring of a Toric Variety, which you can find here. -If you are given the equations and you know the action of the torus on the variety, you can reconstruct the fan by looking at the way orbits are placed relative to each other. I think this is explained in Fulton's book, for example. -Everything should be explained in Cox's new book (together with Little and Schenk), aptly named Toric Varieties -Later. Let's do your example with $n=4$. There is a simply transitive action of the torus $T=(\mathbb C^\times)^3$ on the subset $\mathcal T$ of your variety $X$ of those points all of whose coordinates are non-zero such that $$(t_1,t_2,t_3)\cdot(x_0:x_1:x_2:x_3:x_4)=(x_0:t_1x_1:t_2x_2:t_1t_2x_3:t_3x_4).$$ -The set of group homomorphisms $\mathbb C^\times\to T$ is parametrized by the abelian group $\mathbb Z^3$, with $(a,b,c)\in\mathbb Z^2$ corresponding to the map $$\chi_{a,b,c}:t\in\mathbb C^\times\longmapsto (t^a,t^b,t^c)\in T.$$ -Now let $p=(1:1:1:1:1)$, a point in $%\mathcal T$, and the consider the limit $$\lim_{t\to0} \chi_{a,b,c}(t)\cdot p=\lim_{t\to0}(1:t^a:t^b:t^{a+b}:t^c)$$ Depending on the value if $(a,b,c)$ in $\mathbb Z^3$, this limit changes (and in some cases may not exists). This determies regions in $\mathbb R^3$ where the behaviour does not change: these regions are the relative interiors of the cones in the fan. -For example, in the regions - -$a>0, b>0,c>0$ -$a<0, a+b>0, c>0$ -$b<0, a+b>0, c>0$ - -the limits exist (and are $(1:0:0:0:0)$, $(0:1:0:0:0)$ and $(0:0:1:0:0)$, respectively) so this gives us three of the cones. You'll surely have fun determining all the rest :)<|endoftext|> -TITLE: On the definition of weakly compact cardinals -QUESTION [7 upvotes]: I am reading in Jech's Set Theory the chapter about large cardinals. After discussing measurable cardinals he moves on to weakly-compact cardinals, which have been discussed far earlier in the book. -I went back to the chapter dealing with weakly-compact cardinals and began retracing the definition. -Eventually, it came into this: -We denote $[k]^n = \{X \subseteq \kappa\mid |X| = n\}$. If $\lambda$ is a cardinal, we denote $\kappa \to (\lambda)^2$ when for every partition of $[\kappa]^2$ into $2$ we have $H \subseteq \kappa$ that is of cardinality $\lambda$, and for which $[H]^2$ is strictly in one part. -And we say that $\kappa$ is weakly-compact if it satisfies the property $\kappa \to (\kappa)^2$. -The problem is that I'm a bit lost in all those definitions, and not even sure about the $\kappa \to (\lambda)^2$ notation. -My questions are, if so, can someone help me make some sense into those definitions, and is there an equivalent definition for weakly-compact cardinals which can help me understand their properties better? 
- -REPLY [7 votes]: Let me add to Ivan's answer another equivalent -characterization of weak compactness, which might appeal to -you, as it makes them resemble miniature measurable -cardinals. -Namely, if $\kappa$ is a cardinal and -$\kappa^{<\kappa}=\kappa$, then $\kappa$ is weakly compact -if and only if for every transitive set $M$ of size -$\kappa$ with $\kappa\in M$, there is another transitive -set $N$ and an elementary embedding $j:M\to N$ having -critical point $\kappa$. -This embedding characterization admits myriad forms. For -example, one can insist that $M\models ZF^-$ or even -$M\models ZFC$, and that $M^{<\kappa}\subset M$, or that -every $A\subset \kappa$ can be placed into such an $M$, and so -on. One can even insist that $j\in N$, a property known as -the Hauser property. -These various embedding formulations of weak compactness -allow one to borrow many of the methods and techniques from -much larger large cardinals, which are most often described -in terms of embeddings, and apply them with weakly compact -cardinals. For example, using Easton support forcing -iterations, one can control the value of $2^\kappa$ while -preserving the weak compactness of $\kappa$.<|endoftext|> -TITLE: Singular and Sheaf Cohomology -QUESTION [9 upvotes]: Let $X$ be a complex manifold of dimension $n$. Thus, it's a real manifold of dimension $2n$. -Now cohomology is a topological concept so it should not depend upon the structure given on a topological space. -We know that $k^{th}$ Singular cohomology of $X$ is $0$ for $k \gt 2n$. We can also define a sheaf cohomology on that space using derived functor approach of Grothendieck. Then (by a result of Grothendieck) we know that k'th sheaf cohomology is $0$ for $k \gt n$. -Now, for constant sheaves [say R], the sheaf cohomology agrees with singular cohomology [with coefficient R]. Does this means that even the $k^{th}$ singular cohomology of $X$ vanishes for $k \gt n$? -[Edited] I now feel that the result which says that sheaf and singular agrees is actually this that $k^{th}$ sheaf cohomology [of a complex manifold and constant sheaf] will agree with $2k^{th}$ singular cohomology [of the underlying real manifold]. Is this correct?? I would still like others to comment. -Thanks. - -REPLY [8 votes]: The statement is the following: If you take a variety with the analytic topology (or a complex manifold), then yes, sheaf cohomology of the constant sheaf agrees with singular cohomology. More generally, sheaf cohomology of a constant sheaf on a locally contractible space agrees with singular cohomology of that space. I think there's an explanation of this somewhere in Warner's book "Foundations of Differentiable Manifolds and Lie Groups". -On the other hand, if you have an irreducible variety with the Zariski topology, then sheaf cohomology $H^i$ of the constant sheaf is zero for $i > 0$, because the constant sheaf is flasque. -Further remarks: If what you want is to be able to deal with singular cohomology in a more "algebraic" way, you can do so by using the Hodge decomposition $H^n(X;\mathbb{C}) = \bigoplus_{p+q=n}H^q(X,\Omega^p)$ which expresses singular cohomology in terms of sheaf cohomologies of $\Omega^p$'s. This works if your $X$ is a compact Kähler manifold, e.g., a smooth projective variety. 
If your $X$ is an algebraic variety, you can also use etale cohomology for an "algebraic" way of dealing with singular cohomology; see Milne's book/notes on etale cohomology for more.<|endoftext|> -TITLE: Induction on Real Numbers -QUESTION [164 upvotes]: One of my Fellows asked me whether total induction is applicable to real numbers, too ( or at least all real numbers ≥ 0) . We only used that for natural numbers so far. -Of course you have to change some things in the inductive step, when you want to use it on real numbers. -I guess that using induction on real numbers isn't really possible, since $[r,r+\epsilon]$ with $\epsilon > 0$, $r \in \mathbb R$ is never empty. -Can you either give a good reason, why it isn't possible or provide an example where it is used? - -REPLY [4 votes]: In the case of $[0,\infty[$, perhaps this looks like induction: -Let $A\subseteq [0,\infty[$ such that -$i)$ $[0,1]\subset A$ -$ii)$ if $x\in A$ then $x+1\in A$. -Let $x\in[0,\infty[$. Then $x - \lfloor x\rfloor\in A$ by $i)$, and by repeating $ii)$ some times, we get $$(x - \lfloor x\rfloor)+\underbrace{1+\cdots+1}_{\text{$\lfloor x\rfloor$ times}}=x\in A.$$ Therefore $A=[0,\infty[$.<|endoftext|> -TITLE: What is the $x$ in $\log_b x$ called? -QUESTION [15 upvotes]: In $b^a = x$, $b$ is the base, a is the exponent and $x$ is the result of the operation. But in its logarithm counterpart, $\log_{b}(x) = a$, $b$ is still the base, and $a$ is now the result. What is $x$ called here? The exponent? - -REPLY [18 votes]: Another name (that I've only ever seen when someone else asked this question) is "logarithmand". - -From page 36 of The Spirit of Mathematical Analysis by Martin Ohm, translated from the German by Alexander John Ellis, 1843:<|endoftext|> -TITLE: Can we slice an object into two pieces similar to the original? -QUESTION [7 upvotes]: I suspect it is impossible to split a (any) 3d solid into two, such that each of the pieces is identical in shape (but not volume) to the original. How can I prove this? - -REPLY [9 votes]: You can certainly take a rectangular box, $2^{1/3} \times 2^{2/3} \times 2$ and slice it into two boxes of size $1 \times 2^{1/3} \times 2^{2/3}$.<|endoftext|> -TITLE: Is this a known algebraic identity? -QUESTION [10 upvotes]: In the course of analyzing a certain Markov chain, I once had to prove the following algebraic identity. -Is there a slick or known proof? -For $n$-tuples $(x_1,x_2,\dots, x_n)$ of positive real numbers define -$$\mu(x_1,x_2,\dots, x_n)=\prod_{j=1}^n {x_j\over x_j+x_{j+1}+\cdots+x_n}.$$ -Then if $x^\ast$ is another positive real, and $1\leq k\leq n+1$, then -define $x^*_k$ to be the $(n+1)$-tuple $(x_1,x_2,\dots, x^*,\dots, x_n)$ where -$x^*$ is in the $k$th place. The identity is -$$\sum_{k=1}^{n+1}\ \mu(x^\ast_k)=\mu(x_1,x_2,\dots, x_n).$$ -For example, -$$ {xyz\over(x+y+z)(y+z)z} - + {yxz\over(y+x+z)(x+z)z} - + {yzx\over(y+z+x)(z+x)x}={yz\over(y+z)z}.$$ - -REPLY [16 votes]: Consider the following experiment: there is an interval of length $x_1 + \cdots + x_n$ divided into $n$ segments of sizes $x_1,\ldots,x_n$. We sample an infinite sequence of points from the interval, and write out the segments corresponding to the points. Then, we look at the order in which the segments are "discovered", i.e. which one was hit first, which was the next one, and so on. Then $\mu(x_1,\ldots,x_n)$ is the probability that the segments will be discovered in the order $x_1,\ldots,x_n$. -Now suppose we add an extra segment $x^* $, and repeat the same experiment. 
If we just forget about this extra segment, the new experiment is just like the old one. This is the same as running the new experiment and deleting $x^* $ from the order of discovery. Your identity follows.<|endoftext|> -TITLE: “Cartesian” dual vs. polar dual of convex polytope -QUESTION [5 upvotes]: Say $P$ is a convex Euclidean polytope, where the origin is not contained in any bounding hyperplane containing a facet of $P$, with $n$ facets given by $\langle f_i , x\rangle = 1$ and $m$ vertices $v_j$ with $1\leq i\leq n$ and $1\leq j\leq m$. (That is, each facet equation has been multiplied through as needed to obtain $1$ on the right side.) -Is it true that the polytope $P’$ with $m$ facets given by $\langle v_j , x\rangle = 1$ and $n$ vertices $f_i$ is combinatorially equivalent to the polar dual $P^*$ of $P$? - -REPLY [6 votes]: Yes. Your polytope $P'$ is exactly the polar $P^*$ of $P$ with respect to the standard unit sphere. Choosing another conic instead of the unit sphere will in general give another polar. See this wikpedia page for the two-dimensional case of polarity with respect to other conics. So your "Cartesian" dual corresponds to the unit sphere. -Polarity/duality of polytopes or more generally, convex bodies, is treated in many books on convexity. Unfortunately, many of them leave a lot of detail to the reader. I recommend Webster's Convexity, Brønsted's An Introduction to Convex Polytopes or Matousek's Lectures on Discrete Geometry.<|endoftext|> -TITLE: Examples of fields of characteristic 0? -QUESTION [8 upvotes]: I was preparing for an area exam in analysis and came across a problem in the book Real Analysis by Haaser & Sullivan. From p.34 Q 2.4.3, If the field F is isomorphic to the subset S' of F', show that S' is a subfield of F'. I would appreciate any hints on how to solve this problem as I'm stuck, but that's not my actual question. -I understand that for finite fields this implies that two sets of the same cardinality must have the same field structure, if any exists. The classification of finite fields answers the above question in a constructive manner. -What got me curious is the infinite case. Even in the finite case it's surprising to me that the field axioms are so "restrictive", in a sense, that alternate field structures are simply not possible on sets of equal cardinality. I then started looking for examples of fields with characteristic zero while thinking about this problem. I didn't find many. So far, I listed the rationals, algebraic numbers, real numbers, complex numbers and the p-adic fields. What are other examples? Is there an analogous classification for fields of characteristic zero? - -REPLY [7 votes]: Is there an analogous classification for fields of characteristic zero? - -Yes, but it is somewhat useless and nobody would call it a classification. -Every field of characteristic zero has the form $Quot(\mathbb{Q}[X]/S)$, where $X$ is a set of variables and $S$ is a set of polynomials in $\mathbb{Q}[X]$ (which you may replace by the ideal generated by $S$, which must be prime). This may be improved by the existence of transcendence bases: Every field of characteristic zero has the form $Quot(\mathbb{Q}[X])[T]/S$, where $X$ and $T$ are sets of variables and $S$ consists of polynomials, which have each only one variable of $T$.<|endoftext|> -TITLE: Convergence of $\sum\limits_{n=1}^{\infty} \frac{1}{nf(n)}$ -QUESTION [13 upvotes]: This problem is taken from Vojtěch Jarník International Mathematical Competition 2010, Category I, Problem 1. 
— edit by KennyTM - -On going through this post Does there exist a bijective $f:\mathbb{N} \to \mathbb{N}$ such that $\sum f(n)/n^2$ converges? i happened to get the following 2 problems into my mind: -Let $f: \mathbb{N} \to \mathbb{N}$ be a bijection. Then does the series $$\sum\limits_{n=1}^{\infty} \frac{1}{nf(n)}$$ converge? -Next, consider the series $$\sum\limits_{n=1}^{\infty} \frac{1}{n+f(n)}$$ where $f: \mathbb{N} \to \mathbb{N}$ is a bijection. Clearly by taking $f(n)=n$ we see that the series is divergent. Does there exist a bijection such that the sum above is convergent? - -REPLY [2 votes]: For question $2$, consider any $f(n)$ which is $2^n$ for $n$ which are not powers of $2.$ This lets us divide our sum into two parts, both of which converge.<|endoftext|> -TITLE: Support of a module with extended scalars -QUESTION [6 upvotes]: I have a question which should be pretty basic commutative algebra, but I can't find a reference and I'm stuck on proving a result myself, so here it goes: -Let $\varphi: S \to R$ be a morphism of commutative rings and let $M$ be a finitely generated $S$-module. Then, we can "extend the scalars of $M$" by considering $M \otimes_S R$, which is naturally an $R$-module. I would like to get a description of $\text{Supp}(M \otimes_S R)$ (as an $R$-module) involving $\text{Supp}(M)$ and $\varphi$. -Maybe this helps: by Eisenbud's Commutative algebra with a view toward algebraic geometry, Corollary 2.7, the problem can probably be solved by finding a description of $\text{Ann}(M \otimes_S R)$ and it is not difficult to see that $\varphi(\text{Ann}(M))R \subset \text{Ann}(M \otimes_S R)$, but I fail to prove the other inclusion. - -REPLY [2 votes]: Since you say that you can't find a reference, look at Atiyah Macdonald Chapter 3, Exercise 19(viii).<|endoftext|> -TITLE: The tree property for non-weakly compact $\kappa$ -QUESTION [6 upvotes]: In my previous question, Weakly-compact cardinals, I was asking about weakly-compact cardinals and equivalent definitions to the basic one, which is $\kappa \to (\kappa)^2_2$. -One of which was that $\kappa$ is inaccessible and has the tree property (that is if any tree of cardinality $\kappa$ for which every level is of cardinality $<\kappa$ then it has a branch (i.e. a maximal chain) of cardinality $\kappa$). -I can understand the property itself and what it means. However, since $\aleph_1$ or $\aleph_\omega$ are clearly not weakly-compact cardinals, there should be a tree which contradicts this property. -How do you build this sort of tree? - -REPLY [5 votes]: Some results supplementing Joel's answer: - -Shelah proved (around 1995) that if $\lambda$ has cofinality $\omega$ and is the supremum of strongly compact cardinals, then $\lambda^+$ has the tree property. See - - -Menachem Magidor, and Saharon Shelah. The tree property at successors of singular cardinals, Archive for Math Logic, 35 (5-6), (1996), 385-404. MR1420265 (97j:03093). - - -Neeman proved that, assuming the existence of $\omega$ supercompact cardinals, we can force a model where the tree property holds at all the $\aleph_n$ ($2\le n<\omega$) and at $\aleph_{\omega+1}$. See - - -Itay Neeman. The tree property up to $\aleph_{\omega+1}$, preprint. - -Neeman's result improves previous results, both in terms of the cardinals with the tree property, and in consistency strength: Magidor and Shelah had obtained the tree property at $\aleph_{\omega+1}$ from a huge cardinal with $\omega$ supercompact cardinals above. 
As mentioned in Joel's answer, Cummings and Foreman had obtained the tree property for the $\aleph_n$ ($2\le n<\omega$), also from $\omega$ supercompact cardinals. At the moment, Neeman's is the best current result in terms of intervals of regular cardinals with the tree property. At least in $\mathsf{ZFC}$. - -Arthur Apter proved (around 2009) that the following is consistent, relative to a proper class of supercompact cardinals: $\mathsf{ZF} + \mathsf{DC} +$ Every successor cardinal is regular and has the tree property, while every limit cardinal is singular. See these slides, and - - -Arthur W. Apter. A remark on the tree property in a choiceless context, Arch. Math. Logic, 50 (5-6), (2011), 585–590. MR2805298 (2012d:03115). - -The conclusion of Apter's result implies determinacy in $L(\mathbb R)$, and more. - -The upper bound in consistency strength for successive cardinals with the tree property is a supercompact cardinal with a weakly compact cardinal above it. Around 1983, Abraham forced, from these assumptions, that $2^{\aleph_0}=\aleph_2$, and both $\aleph_2$ and $\aleph_3$ have the tree property. All results on successive cardinals with the tree property build on Abraham's argument. See - - -Uri Abraham. Aronszajn trees on $\aleph_2$ and $\aleph_3$, Ann. Pure Appl. Logic, 24 (3), (1983), 213–230. MR0717829 (85d:03100). - - -The best known lower bound is due to Foreman, Magidor, and Schindler. They show that if all $\aleph_n$ ($2\le n<\omega$) have the tree property, and $\aleph_\omega$ is strong limit, then $\mathsf{PD}$ holds. See - - -Matthew Foreman, Menachem Magidor, and Ralf Schindler. The consistency strength of successive cardinals with the tree property, J. Symbolic Logic, 66 (4),(2001), 1837–1847. MR1877026 (2003m:03083). - -This result is frustrating in the sense that we expect two successive cardinals with the tree property should give us much more in consistency strength than this, beyond $\mathsf{AD}^{L(\mathbb R)}$, and likely beyond the current reach of descriptive inner model theory. Still, this would be frustratingly short of the best current upper bounds, which experts expect are much closer to the truth.<|endoftext|> -TITLE: Alternate definition of prime number -QUESTION [10 upvotes]: I know the definition of prime number when dealing with integers, but I can't understand why the following definition also works: - -A prime is a quantity $p$ such that whenever $p$ is a factor of some product $a\cdot b$, then either $p$ is a factor of $a$ or $p$ is a factor of $b$. - -For example, take $4$ (which clearly is not a prime): it is a factor of $16=8\cdot 2$, so I should check that either $4\mid 8$ or $4\mid 2$. But $4\mid 8$ is true. So $4$ is a prime, which is absurd. -Please note that English is not my first language, so I may have easily misunderstood the above definition. -Edit: Let me try formalize the definition as I understood it: $p$ is prime if and only if $\exists a\exists b(p\mid a\cdot b)\rightarrow p\mid a\lor p\mid b$. - -REPLY [6 votes]: You have encountered the general ring-theoretic definition of a prime (versus irreducible) element. For general rings one distinguishes between the inequivalent properties of being irreducible, i.e. having no nontrivial factors, and being prime, i.e a nonunit such that if it divides a product then it divides some factor of the product. The latter property is key to the uniqueness of factorizations into irreducibles since one easily proves by induction that products of primes factor uniquely in any domain (i.e. 
a ring such that $\rm\; ab = 0 \;\Rightarrow\; a=0\ \ or\ \ b=0)$.
-Here are the precise definitions. Let $\rm\; a,b,p \;$ be elements of a domain $\rm\,Z$.
-Definition $\rm\ \ \; a\;$ is a unit (invertible)$\ \ $ if $\rm\ \ a\: |\: 1\ \ \:\; [recall \ \:a\ |\ b\ :=\ a\ \ divides\ \ b\:]$
-Definition $\ \ $ nonunit $\rm\; p\;$ is $\quad\;$ prime $\quad\;$ if $\;\;\;\rm p\ \mid\ ab \;\;\Rightarrow\;\; p\mid a \;\;\; or \;\;\; p\mid b$
-Definition $\ \ $ nonunit $\rm\; p\;$ is $\:$ irreducible$\;$ if $\;\;\:\rm p = ab \;\;\Rightarrow\;\; p\mid a \;\;\; or \;\;\; p\mid b\ \ $ (a.k.a.$\:$ atom)
-Corollary $\ \ \;$ prime $\;\;\Rightarrow\;\;$ irreducible, $\ $ since $\;\;\;\rm p = ab \;\;\Rightarrow\;\; p\mid ab$
-Conversely $\:$ irreducible $\;\Rightarrow\;\;$ prime $\;\;$ iff $\;$ factorizations into irreducibles are unique (up to order and units). Indeed, it is very easy to prove that a factorization into primes is unique, i.e. any other factorization into irreducibles is the same (up to order and units), as in the classical proof of the Fundamental Theorem of Arithmetic.
-A common equivalent definition of an irreducible is a nonunit with only trivial (unit) factors:
-Definition $\ \ $ nonunit $\rm\; p\;$ is $\:$ irreducible$\;$ if $\;\;\:\rm p = ab \;\;\Rightarrow\;\; a\mid 1 \;\;\; or \;\;\; b\mid 1\ \ $
-To see this definition is equivalent to that above, simply observe that
-if $\rm \ p = ab\,$ then $\, \dfrac{a}p = \dfrac{1}b\ $ so $\ p\mid a\iff b\mid 1\iff b=1.\,$ Similarly $\,\rm p\mid b\iff a = 1,\,$ hence combining the two we infer that $\ p\mid a\,$ or $\,p\mid b\iff b=1\,$ or $\,a=1$.<|endoftext|>
-TITLE: Is a field (ring) an algebra over itself?
-QUESTION [8 upvotes]: I was wondering if a field is an algebra over itself (http://en.wikipedia.org/wiki/Algebra_over_a_field)?
-Also, is a ring an algebra over itself (http://en.wikipedia.org/wiki/Algebra_(ring_theory))? If not, does the ring need to be commutative?
-Thanks and regards!
-
-REPLY [5 votes]: The answer is clear if you know the universal view of $\rm\:R\:$-algebras, namely: they are simply those rings $\rm\:A\:$ in which one can evaluate every polynomial in $\rm R[x]$ at every $\rm\:a \in A\:$. More directly, an $\rm\:R$-algebra is just a ring $\rm A\:$ containing a central subring $\rm R'$ that's a ring image of $\:\rm R\:,\:$ i.e. $\rm\: R'\:$ is either an embedding of $\rm\:R\:$ or $\rm\:R/I\:$ for some ideal $\rm\;I\subseteq R\:.\;$ Being central is precisely the condition needed for elements of $\rm\:R'\:$ to serve as "coefficients" in the sense that this makes the polynomial ring $\rm\: R[x]\:$ a universal $\rm\:R\:$-algebra. Namely, the fact that the coefficients commute with all elements of $\rm\:A\:$ is precisely what is required to make the evaluation map be a ring homomorphism $\rm\: R[x]\to A\:,\:$
-viz. $\rm\;\; r\; x = x\: r\;$ in $\rm\:R[x]\;\Rightarrow\; r\: a = a\:r\;\;$ by evaluation $\rm\: x\to a\in A,\:$ i.e. by definition polynomial multiplication assumes that the coefficients commute with the indeterminates, so this property must remain true at values of the indeterminates if evaluation is to be a ring homomorphism.
-A simple but powerful application: $ $ factorizations of polynomials persist as operator factorizations ("operator algebra") when we evaluate them in an $R$-algebra of linear operators, e.g. matrix algebras, and linear differential and difference operators (recurrences), etc.<|endoftext|>
-TITLE: We can divide $7^{17} - 7^{15}$ by?
-QUESTION [6 upvotes]: We can divide $7^{17} - 7^{15}$ by?
-The answer is $6$, but how?
-Thanks in advance.
-
-REPLY [9 votes]: HINT $\rm\quad X^{n+2} - X^n \;=\; (X^2 - 1)\: X^n \;=\; (X-1)\: (X+1)\: X^n$
-Often number identities are more perceptively viewed as special cases of function or polynomial identities. For example, Aurifeuille, Le Lasseur and Lucas discovered so-called Aurifeuillian factorizations of cyclotomic polynomials $\rm\;\Phi_n(x) = C_n(x)^2 - n\ x\ D_n(x)^2\;$. These play a role in factoring numbers of the form $\rm\; b^n \pm 1\:$, cf. the Cunningham Project. Below are some simple examples of such factorizations:
-$$\begin{array}{rl}
-x^4 + 2^2 \quad=& (x^2 + 2x + 2)\;(x^2 - 2x + 2) \\
-\frac{x^6 + 3^2}{x^2 + 3} \quad=& (x^2 + 3x + 3)\;(x^2 - 3x + 3) \\
-\frac{x^{10} - 5^5}{x^2 - 5} \quad=& (x^4 + 5x^3 + 15x^2 + 25x + 25)\;(x^4 - 5x^3 + 15x^2 - 25x + 25) \\
-\frac{x^{12} + 6^6}{x^4 + 36} \quad=& (x^4 + 6x^3 + 18x^2 + 36x + 36)\;(x^4 - 6x^3 + 18x^2 - 36x + 36) \\
-\end{array}$$<|endoftext|>
-TITLE: Contravariant Grothendieck Spectral Sequence
-QUESTION [8 upvotes]: I'm currently getting confused about indices in some spectral sequences. Assume we work in the category of modules for simplicity. Let $A^\cdot$ be a (bounded on the right) complex and let $B^\cdot$ be another complex (I don't think we have to assume anything about the boundedness of $B$). I want to compute $Ext^n(A^\cdot,B^\cdot)$, which is classically called hyperext (and sometimes denoted by $\mathbb{E}xt$).
-Now, (perhaps assuming $B^\cdot$ to be bounded on the right), there exists a spectral sequence
-$$E^{p,q}_2 = Ext^p(A^\cdot,H^q(B^\cdot)) \Rightarrow Ext^{p+q}(A^\cdot,B^\cdot).$$
-There should be an analogous one obtained by switching $A$ and $B$, but I'm unsure of the indices, so my question is:
-
-is $$ E^{p,q}_2 = Ext^q(A^\cdot,H^{-p}(B^\cdot)) \Rightarrow Ext^{q-p}(A^\cdot,B^\cdot) $$
- the right thing?
-
-Thanks.
-
-REPLY [6 votes]: I think that you will want $B^{\bullet}$ to be bounded below (i.e. on the left), so that
-after replacing either $A^{\bullet}$ by a projective resolution or $B^{\bullet}$ by an injective resolution, the complex $Hom(\text{complex},\text{complex})$ that you compute
-will be bounded below.
-In any case, the spectral sequence you are looking for comes about by taking this $Hom$
-complex (the one you get after replacing $A^{\bullet}$ or $B^{\bullet}$ by a projective
-or injective resolution), writing it as the total complex of a double complex, and applying the standard
-machine. If you do this, you will find that the second spectral sequence that you want is:
-$$E_2^{p,q} = Ext^p(H^{-q}(A^{\bullet}),B^{\bullet}) \Rightarrow Ext^{p+q}(A^{\bullet},B^{\bullet}).$$
-(Your first spectral sequence comes from replacing $A^{\bullet}$ by a projective resolution.
-This one comes from replacing $B^{\bullet}$ by an injective resolution.)<|endoftext|>
-TITLE: Sparsest matrix with specified row and column sums
-QUESTION [7 upvotes]: Given a sequence of row sums $r_1, \ldots, r_m$ and column sums $c_1, \ldots, c_n$, all positive, I'd like to find a matrix $A_{m\times n}$ consistent with the given row and column sums that has the fewest nonzero entries. That is, we want to minimize $|\{a_{ij} \ne 0\}|$ under the constraint that $\sum_j a_{ij} = r_i$ and $\sum_i a_{ij} = c_j$.
-One can also think of this as a network flow problem, with $m$ sources and $n$ sinks that must be connected with the minimum number of edges to achieve the specified flow out of each source ($r_i$) or into each sink ($c_j$).
-Is this a known problem, or does it fit into a well-known class of problems?
What is the computational complexity of finding the optimal solution? According to the comments on this Reddit discussion, it could be related to the cutting stock problem, but it's not exactly the same.
-The motivation for this problem comes from settling debts. Suppose you have a set of people $P$ with debts between them. These debts can be settled by (1) finding the net debt $d(p)$ owed to each person $p \in P$, (2) dividing $P$ into two disjoint sets of people who are owed, $P_+ = \{p : d(p) > 0\}$, and people who owe, $P_- = \{p : d(p) < 0\}$, and (3) having the people in $P_-$ pay the ones in $P_+$. Then the problem of resolving debts with the fewest transactions is equivalent to the above optimization problem, with $a_{ij}$ being the amount that the $i$th person in $P_-$ must give the $j$th person in $P_+$.
-
-REPLY [4 votes]: It turns out that, as T.. had guessed, the problem is indeed NP-hard. I realised this after participating in some discussions in the comments of this blog post, where the same problem turned up again, and the corresponding Reddit discussion. The solution I am posting here is based on the above.
-First, a lemma: the optimal solution for $m+n$ people cannot have more than $m+n-1$ nonzero entries.
-The proof is by inductive construction of a solution with $m+n-1$ entries. For $m = n = 1$, the solution trivially has 1 entry. When $m+n > 2$, let $a_{11} = \min(r_1,c_1)$. This satisfies one row or column (or both, if $r_1 = c_1$), leaving a subproblem of size $m+n-1$ whose solution has at most $m+n-2$ entries. This completes the proof of the lemma.
-Suppose a solution exists with fewer than $m+n-1$ entries. Consider the network flow view of the problem, with $m+n$ nodes connected by edges. If there are fewer than $m+n-1$ edges, the underlying undirected graph is disconnected. Then each connected component is "self-sufficient" in that it has no net in- or out-flow. That means that the sum of the corresponding subset of $\{r_i\} \cup \{-c_j\}$ is zero. So, this problem includes the subset sum problem, which is NP-complete.
-More precisely, suppose an instance of the subset sum problem is given as a set of integers $X = \{x_1,\ldots,x_k\}$, and the problem is to determine whether there is a nonempty subset of $X$ whose sum is zero. Let these be the debts $d(p)$ among a set of people $P$ as in the question, introducing an additional person with $d(p) = -\sum X$ to balance the books. Then there is a solution with fewer than $|P|-1$ entries if and only if there is a subset of $X$ with sum zero. (The above lemma is useful for proving the "if" part.) Therefore the subset sum problem can be reduced to this problem.<|endoftext|>
-TITLE: Generalizing $\sum \limits_{n=1}^{\infty }n^{2}/x^{n}$ to $\sum \limits_{n=1}^{\infty }n^{p}/x^{n}$
-QUESTION [22 upvotes]: For computing the present worth of an infinite sequence of equally spaced payments $(n^{2})$ I needed to evaluate
-$$\displaystyle\sum_{n=1}^{\infty}\frac{n^{2}}{x^{n}}=\dfrac{x(x+1)}{(x-1)^{3}}\qquad x>1.$$
-The method I used was based on the geometric series $\displaystyle\sum_{n=1}^{\infty}x^{n}=\dfrac{x}{1-x}$: differentiating each side, multiplying by $x$, then differentiating a second time and multiplying by $x$ again. There is at least one other (more difficult) method: computing the partial sums and letting $n$ go to infinity.
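-(Editorial aside, not part of the original post: the closed form above is easy to sanity-check numerically. The sketch below is plain Python; the function names and the choice of test points are mine.)
-# Compare partial sums of sum_{n>=1} n^2 / x^n against the closed form x(x+1)/(x-1)^3.
-def partial_sum(x, terms=200):
-    return sum(n * n / x ** n for n in range(1, terms + 1))
-
-def closed_form(x):
-    return x * (x + 1) / (x - 1) ** 3
-
-for x in (1.5, 2.0, 3.0, 10.0):
-    print(x, partial_sum(x), closed_form(x))
-# For x = 2 both columns print 6.0; for x = 3 both print 1.5.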
-Question: Is there a closed form for
-$$\displaystyle\sum_{n=1}^{\infty }\dfrac{n^{p}}{x^{n}}\qquad x>1,p\in\mathbb{Z}^{+}\quad ?$$
-What is the sketch of its proof, in case it exists?
-
-REPLY [9 votes]: Here is a different look:
-Repeatedly differentiating and multiplying by $x$ gives rise to Stirling numbers of the second kind.
-Say you denote the operator of differentiating and then multiplying by $x$ as $D_{x}$.
-Then we have that
-$$(D_{x})^{n}f(x) = \sum_{k=1}^{n} s(n,k) f^{(k)}(x) x^{k}$$
-where $s(n,k)$ is the Stirling number of the second kind and $f^{(k)}(x)$ is the $k^{th}$ derivative of $f(x)$.
-This can easily be proven using the identity $$s(n,k) = s(n-1,k-1) + k \cdot s(n-1,k)$$
-Here is a table for the Stirling numbers of the second kind (from the wiki page):
-
-n/k 0 1 2 3 4 5 6 7 8 9
-0 1
-1 0 1
-2 0 1 1
-3 0 1 3 1
-4 0 1 7 6 1
-5 0 1 15 25 10 1
-6 0 1 31 90 65 15 1
-7 0 1 63 301 350 140 21 1
-8 0 1 127 966 1701 1050 266 28 1
-9 0 1 255 3025 7770 6951 2646 462 36 1
-
-
-So in your case, we can start with $ f(x) = \frac{1}{1-x}$ and obtain that
-$$ \sum_{k=0}^{\infty} k^{n} x^k = \sum_{r=1}^{n} r! \cdot s(n,r) \frac{x^{r}}{(1-x)^{r+1}}$$
-For example, in your case for $n=3$ we get
-$$\sum_{k=0}^{\infty} k^3 x^k = \frac{1! \cdot 1 \cdot x}{(1-x)^2} + \frac{2! \cdot 3 \cdot x^2}{(1-x)^3} + \frac{3! \cdot 1 \cdot x^3}{(1-x)^4}$$
-$$ = \frac{x(1-x)^2 + 6x^{2}(1-x) + 6x^3}{(1-x)^4} $$
-$$ = \frac{x^3 + 4x^2 + x}{(1-x)^4} $$<|endoftext|>
-TITLE: Check whether a point is within a 3D Triangle
-QUESTION [13 upvotes]: I have a triangle in 3D defined by three points: $P_0$, $P_1$ and $P_2$. How can I check whether a point $P$ lies in the plane of the triangle and inside the triangle itself?
-So, for example, if the triangle is defined by $({0,0,0})$, $({10,0,0})$ and $({0,10,0})$, then the point $({50,0,0})$ is considered not located within the triangle, whereas the point $({5,0,0})$ is.
-
-REPLY [2 votes]: Vectors $V_{01}=P_1-P_0$ and $V_{02}=P_2-P_0$ lie in the plane of the triangle, and $V_{01}\times V_{02}$ is normal to this plane. Let $V_{0p}=P-P_0$, and if $V_{0p} \cdot (V_{01}\times V_{02}) = 0$ then $P$ lies in the plane.
-Let $P=cP_0+aP_1+bP_2$ where $c=1-a-b$. Then $P=(1-a-b)P_0+aP_1+bP_2$ or $V_{0p}=aV_{01}+bV_{02}$. If $0 \le a,b,c \le 1$ then $P$ lies in the triangle or on its edge.
-Vector $V_{01} \times (V_{01}\times V_{02})$ is orthogonal to $V_{01}$ so $V_{02} \cdot(V_{01} \times (V_{01}\times V_{02}))b=V_{0p} \cdot(V_{01} \times (V_{01}\times V_{02}))$ can be solved for $b$.
-Likewise $V_{02} \times (V_{01}\times V_{02})$ is orthogonal to $V_{02}$ so $V_{01} \cdot(V_{02} \times (V_{01}\times V_{02}))a=V_{0p} \cdot(V_{02} \times (V_{01}\times V_{02}))$ can be solved for $a$.<|endoftext|>
-TITLE: Are all algebraic integers with absolute value 1 roots of unity?
-QUESTION [110 upvotes]: If we have an algebraic number $\alpha$ with (complex) absolute value $1$, it does not follow that $\alpha$ is a root of unity (i.e., that $\alpha^n = 1$ for some $n$). For example, $(3/5 + 4/5 i)$ is not a root of unity.
-But if we assume that $\alpha$ is an algebraic integer with absolute value $1$, does it follow that $\alpha$ is a root of unity?
-
-I know that if all conjugates of $\alpha$ have absolute value $1$, then $\alpha$ is a root of unity by the argument below:
-The minimal polynomial of $\alpha$ over $\mathbb{Z}$ is $\prod_{i=1}^d (x-\alpha_i)$, where the $\alpha_i$ are just the conjugates of $\alpha$. Then $\prod_{i=1}^d (x-\alpha_i^n)$ is a polynomial over $\mathbb{Z}$ with $\alpha^n$ as a root.
It also has degree $d$, and all roots have absolute value $1$. But there can only be finitely many such polynomials (since the coefficients are integers of bounded size), so we get that $\alpha^n=\sigma(\alpha)$ for some Galois conjugation $\sigma$. If $\sigma^m(\alpha) = \alpha$, then $\alpha^{n^m} = \alpha$.
-Thus $\alpha^{n^m - 1} = 1$.
-
-REPLY [2 votes]: As pointed out above, an algebraic integer with absolute value equal to 1 does not have to be a root of unity. But if all the conjugates of the algebraic integer have absolute value 1, then it is indeed the case. Let $\alpha$ be an algebraic integer with minimum polynomial $f\in \mathbb{Z}[X]$, which is monic and, say, of degree $n$. We can assume $n \geq 2$, since otherwise $\alpha \in \{-1,1\}$. Assume all the roots of $f$ have absolute value 1. Let us concentrate on the coefficient of $X^i$: the sum of all products of $i$ of the roots is in absolute value bounded by $\binom{n}{i}$ by the triangle inequality. Thus the coefficient of $X^i$ is in absolute value bounded by $\binom{n}{i}$; hence for any $n$ there are only finitely many algebraic integers of degree $n$ such that all conjugates have absolute value $1$, since there are only finitely many polynomials in $\mathbb{Z}[X]$ with coefficients bounded in this way.
-Next, consider the powers of $\alpha$. They are all algebraic integers of degree at most $n$, and furthermore all their conjugates also have absolute value $1$, since the Galois actions map powers of $\alpha$ to powers of its conjugates. Thus, the powers of $\alpha$ are elements of a finite set. This implies $\alpha$ must be a root of unity.
-In the case of a finite group $G$, $g \in G$ and complex (not necessarily irreducible) character $\chi$ with $|\chi(g)|=1$, all the Galois conjugates of the algebraic number $\chi(g)$ also have absolute value $1$. Let $n=o(g)$ and $K=\Omega_{\mathbb{Q}}^{X^n-1} \subseteq \mathbb{C}$ the splitting field. Let $\mathfrak{G}=Gal(K/\mathbb{Q}) \cong (\mathbb{Z}/n\mathbb{Z})^*$. If $\sigma \in \mathfrak{G}$ and $\varepsilon$ is an $n$th-root of unity, then $\sigma(\varepsilon)=\varepsilon^m$, for some $m \in \mathbb{Z}$, with gcd$(m,n)=1$. Now, $\chi(g)=\varepsilon_1 + \cdots + \varepsilon_{\chi(1)}$, a sum of $n$th-roots of unity. Hence, $\sigma(\chi(g))=\varepsilon_1^m + \cdots + \varepsilon_{\chi(1)}^m=\chi(g^m)$. Note that $\mathfrak{G}$ is abelian and that the restriction of complex conjugation to $K$ induces an element of $\mathfrak{G}$ of order $2$. Hence, $|\sigma(\chi(g))|^2=\sigma(\chi(g)) \cdot \overline{\sigma(\chi(g))}=\sigma(\chi(g)) \cdot \sigma(\overline{\chi(g)})=\sigma(|\chi(g)|^2)=\sigma(1)=1$, which yields $|\chi(g^m)|=1$.<|endoftext|>
-TITLE: Where to start learning Linear Algebra?
-QUESTION [147 upvotes]: I'm starting a very long quest to learn about math, so that I can program games. I'm mostly a corporate developer, and it's somewhat boring and unexciting. When I began my career, I chose it because I wanted to create games.
-I'm told that Linear Algebra is the best place to start. Where should I go?
-
-REPLY [4 votes]: There is a new online course about Linear Algebra: https://stepic.org/79
-The same platform (Stepic.org) was used in several Coursera courses and it seems pretty cool.
-The goal of the course is "to help you to find and understand the building blocks of linear algebra. Moreover, it is important to develop your personal vision and intuition about them. This cannot be achieved by solving 1000 similar typical exam problems.
In this course you will be provided with a sort of "minimal linear algebra kit", so that you can manage the saved time yourself."<|endoftext|>
-TITLE: Solving (quadratic) equations of iterated functions, such as $f(f(x))=f(x)+x$
-QUESTION [15 upvotes]: In this thread, the question was to find an $f: \mathbb{R} \to \mathbb{R}$ such that
-$$f(f(x)) = f(x) + x$$
-(which was revealed in the comments to be solved by $f(x) = \varphi x$ where $\varphi$ is the golden ratio $\frac{1+\sqrt{5}}{2}$).
-
-Having read about iterated functions shortly before though, I came up with this train of thought:
-$$f(f(x)) = f(x) + x$$
-$$\Leftrightarrow f^2 = f^1 + f^0$$
-$$f^2 - f - f^0 = 0$$
-where $f^n$ denotes the $n$'th iterate of $f$.
-Now I solved the resulting quadratic equation much as I would with plain numbers:
-$$f = \frac{1}{2} \pm \sqrt{\frac{1}{4} + 1}$$
-$$f = \frac{1 \pm \sqrt{1+4}}{2} = \frac{1 \pm \sqrt{5}}{2}\cdot f^0$$
-And finally the solution
-$$f(x) = \frac{1 \pm \sqrt{5}}{2} x .$$
-Now my question is: **Is it somehow allowed to work with functions in that way?** I know that in the above there are notational ambiguities, as $1$ is actually treated as $f^0 = id$ ... But since the result is correct, there seems to be something correct in this approach.
-So can I actually solve certain functional equations like this? And if so, what would the correct notation for the above be?
-
-REPLY [2 votes]: In fact this belongs to a family of functional equations of the form treated at http://eqworld.ipmnet.ru/en/solutions/fe/fe1220.pdf.
-Let $\begin{cases}x=u(t)\\f=u(t+1)\end{cases}$,
-then $u(t+2)=u(t+1)+u(t)$
-$u(t+2)-u(t+1)-u(t)=0$
-$u(t)=C_1(t)\left(\dfrac{1+\sqrt{5}}{2}\right)^t+C_2(t)\left(\dfrac{1-\sqrt{5}}{2}\right)^t$, where $C_1(t)$ and $C_2(t)$ are arbitrary periodic functions with unit period
-$\therefore\begin{cases}x=C_1(t)\left(\dfrac{1+\sqrt{5}}{2}\right)^t+C_2(t)\left(\dfrac{1-\sqrt{5}}{2}\right)^t\\f=C_1(t)\left(\dfrac{1+\sqrt{5}}{2}\right)^{t+1}+C_2(t)\left(\dfrac{1-\sqrt{5}}{2}\right)^{t+1}\end{cases}$, where $C_1(t)$ and $C_2(t)$ are arbitrary periodic functions with unit period<|endoftext|>
-TITLE: Solving recurrence relations that involve all previous terms
-QUESTION [8 upvotes]: I'm not sure if this is a proper recurrence relation per se, but I'd be interested in the methodology for solving a recurrence relation of the following form:
-$Z_0 = 1$
-$Z_1 = x_1$
-$Z_2 = x_1Z_1 + x_2 = x_1^2 + x_2$
-$Z_3 = x_1Z_2 + x_2Z_1 + x_3 = x_1^3 + x_1x_2 + x_1x_2 + x_3$
-$Z_n = x_1 Z_{n-1} + x_2 Z_{n-2} + ... + x_n Z_0$
-As written, each term requires knowledge of all the previous terms. Is it possible to write down a closed form for $Z_n$? For this particular recurrence, I can write down the result for $Z_n$, but I would be very interested in seeing how one can derive this from the recurrence relation itself. My gut feeling is that a generating function is lurking underneath all this.
-
-REPLY [9 votes]: Your gut feeling is right, but let me change your $x$s to $a$s to make the answer easier to read. Consider the generating functions
-$$Z(x) = \sum_{n \ge 0} Z_n x^n$$
-$$A(x) = \sum_{n \ge 1} a_n x^n.$$
-Then the recurrence relation you have written down is equivalent to
-$$Z(x) = Z(x) A(x) + 1$$
-which gives
-$$Z(x) = \frac{1}{1 - A(x)}.$$
-So if you know the generating function $A(x)$ in closed form, you then know the generating function $Z(x)$ in closed form.
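-(Editorial aside, not part of the answer: the identity is easy to check numerically. With $a_k = 1$ for all $k$ we get $A(x)=\frac{x}{1-x}$, so $Z(x)=\frac{1-x}{1-2x}$, whose coefficients are $1,1,2,4,8,\dots$, i.e. $Z_n = 2^{n-1}$ for $n\ge 1$. The recurrence reproduces exactly that; the Python sketch below is mine.)
-# Z_0 = 1 and Z_n = sum_{k=1}^{n} a_k * Z_{n-k}, here with a_k = 1 for all k.
-def Z(n_max):
-    zs = [1]
-    for n in range(1, n_max + 1):
-        zs.append(sum(zs[n - k] for k in range(1, n + 1)))
-    return zs
-
-print(Z(8))  # [1, 1, 2, 4, 8, 16, 32, 64, 128], the coefficients of (1-x)/(1-2x)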
This is one of the most basic manipulations to do with generating functions, and techniques like this are thoroughly covered in, for example, Wilf's generatingfunctionology. - -REPLY [6 votes]: To answer the general question in the title ("solving recurrence relations that involve all previous terms"), such relations can be thought of as path- or history-dependent calculations and one tries to replace them by equivalent state-dependent (Markovian) calculations that retain a finite amount of state information and keep transforming the state, but not remembering any additional information. -For example, if the recurrence is -$S_n = S_0 + S_1 + \dots S_{n-1}$ -this is equivalent, for $n>1$, to $S_n = S_{n-1} + S_{n-1} = 2S_{n-1}$, and remembering the previous term is enough to reproduce the recurrence as iterated doubling (after the first two terms). -The example in the question is also an iteration of a single transformation, but on an unbounded-dimensional state space.<|endoftext|> -TITLE: A recurrence that wiggles? -QUESTION [15 upvotes]: Consider the following sequence $a_n$: -$a_1 = 0$ -$a_n = 1 + \frac{1}{2^n-2} \sum_{i=1}^{n-1} \binom{n}{i} a_i$ -The first few terms are $0,1,\frac{3}{2},\frac{13}{7},\frac{15}{7}$. -The sequence comes out of the analysis of a certain process, whose asymptotics we expect to be about $\log_2 n$. Numerical computations show that indeed $a_n \approx \log_2 n - C$, where $C \approx 0.318$. It should be easy to show that $a_n = \log_2 n + O(1)$. - Is it true that $a_n = \log_2 n - C + o(1)$ for some constant $C$? -There are some sequences defined by similar recurrence relations for which the "constant" part actually wiggles (unfortunately the reference escapes me). - -REPLY [4 votes]: Noam Elkies gives an example here.<|endoftext|> -TITLE: Kernel of the tangent map -QUESTION [9 upvotes]: If $\varphi:U\subset \mathbb{R}^n \to \mathbb{R}^m$ is $C^1$, let $\mathrm{T}\varphi:\mathrm{T}U \to \mathrm{T}R^m$ be its tangent map. The inverse function theorem tells us that if $\ker(\mathrm{T}\varphi(x))$ is zero, $\varphi$ is injective in some neighborhood of $x$. If the kernel is non-zero, what can we say about $\varphi$ near $x$ provided we know the kernel? In particular, can we say anything about curves through $x$ whose tangents belong to this kernel? - -REPLY [3 votes]: As Wikipedia says: -"The inverse function theorem (and the implicit function theorem) can be seen as a special case of the constant rank theorem, which states that a smooth map with locally constant rank near a point can be put in a particular normal form near that point." -See http://en.wikipedia.org/wiki/Derivative_rule_for_inverses#Constant_rank_theorem -The Constant Rank Theorem is stated as Theorem (7.1) p. 47 of An Introduction to Differentiable Manifolds and Riemannian Geometry, Revised Second Edition, William M. Boothby, Academic Press. (This is the reference given by Wikipedia.) -Here is, for the reader's convenience, a statement of the Constant Rank Theorem. -Let $k,n$ and $r$ be positive integers, let $a$ be in $\mathbb R^n$, let $b$ be in $\mathbb R^k$, let $f$ be a smooth map from a neighborhood of $a$ to $\mathbb R^k$ sending $a$ to $b$, and let $\ell$ be the linear map from $\mathbb R^n$ to $\mathbb R^k$ sending $x$ to $(x_1,\dots,x_r,0,\dots,0)$. Assume that the rank of the tangent map to $f$ at $x$ is equal to $r$ for all $x$ in our neighborhood of $a$. 
-Then there is a diffeomorphism $g$ from a neighborhood of $a$ to a neighborhood of 0 in $\mathbb R^n$, and a diffeomorphism $h$ from a neighborhood of 0 in $\mathbb R^k$ to a neighborhood of $b$, such that the equality $f=h\circ\ell\circ g$ holds in some neighborhood of $a$.
-EDIT OF MARCH 19, 2011
-Here is a statement and a proof of the Constant Rank Theorem.
-
-Theorem. Let $U$ be open in $\mathbb{R}^n$, let $a$ be a point in $U$, and let $f$ be a $C^p$ map ($1\le p\le\infty$) of rank $r$ from $U$ to $\mathbb{R}^k$. Then there are open sets $U_1,U_2\subset\mathbb{R}^n$, $U_3\subset\mathbb{R}^k$ and $C^p$ diffeomorphisms $\varphi:U_1\to U_2$, $\psi:U_3\to U_3$ such that $a\in U_1$ and $(\psi\circ f\circ\varphi^{-1})(x)=(x_1,\dots,x_r,0,\dots,0)$ for all $x$ in $U_2$.
-
-Proof. For $$x\in\mathbb{R}^r,\quad y\in\mathbb{R}^{n-r},\quad(x,y)\in U$$ write
-$$f(x,y)=(f_1(x,y),f_2(x,y)),\quad f_1(x,y)\in\mathbb{R}^r,\quad f_2(x,y)\in\mathbb{R}^{k-r}.$$
-We can assume that $\partial f_1(x,y)/\partial x$ is invertible for all $(x,y)\in U$. Define $$\varphi:U\to\mathbb{R}^n,\quad(x,y)\mapsto(f_1(x,y),y).$$ By the Inverse Function Theorem, there are open sets
-$$U_1\subset\mathbb{R}^n,\quad U_4\subset\mathbb{R}^r,\quad U_5\subset\mathbb{R}^{n-r}$$ such that $a\in U_1\subset U$, $\varphi$ is a $C^p$ diffeomorphism from $U_1$ onto $U_2:=U_4\times U_5$, and $U_5$ is connected.
-Then $f(\varphi^{-1}(x,y))=(x,g(x,y))$ for some $C^p$ map $g$ from $U_2$ to $\mathbb{R}^{k-r}$. As $\partial g/\partial y=0$, we can write $g(x)$ for $g(x,y)$, and it suffices to set $U_3:=U_4\times\mathbb{R}^{k-r}$ and $\psi(u,v):=(u,v-g(u))$ for $ u\in U_4$ and $v\in\mathbb{R}^{k-r}.$<|endoftext|>
-TITLE: How to show that all even perfect numbers are obtained via Mersenne primes?
-QUESTION [12 upvotes]: A number $n$ is perfect if it's equal to the sum of its divisors (smaller than itself). A well-known theorem by Euler states that every even perfect number is of the form $2^{p-1}(2^p-1)$ where $2^p-1$ is prime (this is what is called a Mersenne prime).
-How is this theorem proved? The converse (showing that every number of that form is perfect) is simple (the divisors of such a number are powers of 2 and powers of 2 times the Mersenne prime, and it's an easy sum to calculate) but I couldn't find a proof for the theorem itself.
-
-REPLY [8 votes]: Many elementary number theory texts have the proof, e.g., Hardy & Wright's.
-Here is an online proof from Chris Caldwell's Prime Pages.<|endoftext|>
-TITLE: Computing the largest Eigenvalue of a very large sparse matrix?
-QUESTION [17 upvotes]: I am trying to compute the asymptotic growth rate in a specific combinatorial problem depending on a parameter $w$, using the Transfer-Matrix method. This amounts to computing the largest eigenvalue of the corresponding matrix.
-For small values of $w$, the corresponding matrix is small and I can use the so-called power method: start with some vector, and multiply it by the matrix over and over; under certain conditions you'll get the eigenvector corresponding to the largest eigenvalue. However, for the values of $w$ I'm interested in, the matrix becomes too large, and so the vector becomes too large - $n>10,000,000,000$ entries or so - so it can't be contained in the computer's memory anymore and I need extra programming tricks or a very powerful computer.
-As for the matrix itself, I don't need to store it in memory - I can access it as a black box, i.e. given $i,j$ I can return $A_{ij}$ via a simple computation.
Also, the matrix has only 0 and 1 entries, and I believe it to be sparse (i.e. only around $\log n$ of the entries are 1's, $n$ being the number of rows/columns). However, the matrix is not symmetric.
-Is there some more space-efficient method for computing eigenvalues in a case like this?
-
-REPLY [4 votes]: For a computer package that solves the large sparse matrix eigenvalue problem, use ARPACK.
-From the wiki:
-
-The package is designed to compute a
- few eigenvalues and corresponding
- eigenvectors of large sparse or
- structured matrices, using the
- Implicitly Restarted Arnoldi Method
- (IRAM) or, in the case of symmetric
- matrices, the corresponding variant of
- the Lanczos algorithm.<|endoftext|>
-TITLE: Prerequisites for learning (basic) Graph Theory
-QUESTION [15 upvotes]: I would like to learn Graph Theory from the beginning. It seems to me that one does not need to be familiar with many abstract subjects to be able to understand the more basic concepts of graphs.
-
-Which subjects should one know prior to learning Graph Theory at the introductory level?
-And which book or lecture notes would you advise for learning it?
-
-REPLY [3 votes]: I thought about this question for a graph theory course I'm teaching. Prerequisites would be mathematical proof technique (induction, proof by contradiction), and linear algebra (determinants, eigenvalues).
-The book I eventually chose was Bondy and Murty's Graph Theory. It's a bit dry, but it's mathematically very nice, it has a lot of material if you want to delve deeper (you won't throw it away after the "course" is over), and it's quite readable. It's also cheap, and in fact available for free online from Springer's website.<|endoftext|>
-TITLE: Why is median age a better statistic than mean age?
-QUESTION [5 upvotes]: If you look at Wolfram Alpha
-
-or this Wikipedia page List of countries by median age
-
-clearly the median seems to be the statistic of choice when it comes to ages.
-I am not able to explain to myself why the arithmetic mean would be a worse statistic. Why is it so?
-
-REPLY [2 votes]: The best statistic to summarize a distribution depends upon the distribution and what you want to use it for. For distributions that are nicely bell-shaped, the mean, median, and mode are close together and it doesn't matter. For skew distributions the mean is out on the skew side of the median, but it still represents the expected value of the average of a large number of samples. The median is closer to more of the individuals than the mean. For stranger distributions no one number can provide a useful summary.<|endoftext|>
-TITLE: Characterising Continuous functions
-QUESTION [14 upvotes]: We know that if $f : \mathbb{R} \to \mathbb{R}$ is a continuous function, then $f$ carries connected sets to connected sets and compact sets to compact sets. That is, if $A \subset \mathbb{R}$ is connected then $f(A)$ is connected, and if $A$ is compact then $f(A)$ is compact.
-Question: Suppose $f: \mathbb{R} \to \mathbb{R}$ is a function such that for every connected, compact subset $A \subset \mathbb{R}$, $f(A)$ is connected and compact; then is $f$ continuous? If yes, I would like to see a proof.
-
-Update: Does this result remain true for $f: \mathbb{R}^{2} \to \mathbb{R}$, or for any $f: \mathbb{R}^{m} \to \mathbb{R}^{n}$?
-
-REPLY [20 votes]: Much more general results are known: no statement of this form can characterize continuous functions, derivatives, Baire class 1 functions, Borel functions, measurable functions, etc.
-THEOREM $\;$ There do not exist families of sets of reals $\rm\: \cal A\:,\:\cal B\;$ such that the following statement is true:
-$\quad\quad$ for every function $\rm\; f : \mathbb R\to \mathbb R\:,\;\;\; f\:$ is continuous $\iff$ for every $\: A\in {\cal A}\:,\;\; {\rm f}\:(A) \in \cal B$
-For the elementary proof see this 1997 Monthly paper by Velleman, and for more general results on classes of functions characterizable by images of sets see this paper, whose abstract I have appended below:
-For non-empty topological spaces $X$ and $Y$ and arbitrary
-families $\cal{A}\subseteq\cal{P}(X)$ and $\cal{B}\subseteq\cal{P}(Y)$ we put
-$\cal{C}_{\cal{A},\cal{B}}=\{f\in Y^X\colon(\forall A\in\cal{A})(f[A]\in\cal{B})\}$.
-In this paper we will examine
-which classes of functions $\cal{F}\subseteq Y^X$
-can be represented as $\cal{C}_{\cal{A},\cal{B}}$. We will be mainly
-interested in the case
-when
-$\cal{F}=\cal{C}(X,Y)$ is the class of all continuous functions from $X$ into $Y$.
-We prove that for non-discrete Tychonoff space $X$ the class
-$\cal{F}=\cal{C}(X,\mathbb{R})$
-is not equal to $\cal{C}_{\cal{A},\cal{B}}$ for any
-$\cal{A}\subseteq \cal{P}(X)$ and $\cal{B}\subseteq\cal{P}(\mathbb{R})$. Thus, $\cal{C}(X,\mathbb{R})$
-cannot be characterized by images of sets.
-We also show that none of the
-following classes of real functions can be represented as
-$\cal{C}_{\cal{A},\cal{B}}$:
-upper (lower) semicontinuous functions,
-derivatives, approximately continuous functions,
-Baire class 1 functions, Borel functions, and measurable functions.<|endoftext|>
-TITLE: Can we define definitions as axioms in logic?
-QUESTION [6 upvotes]: In a proof or deduction system, there are some axioms and some inference rules. To keep the proof system small, axioms are defined for the primitives (for example, for negation and implication in propositional logic). Now, we want to have a proof system that proves all provable formulae, including the formulae containing non-primitive logical connectives (such as OR in propositional logic). The question is: should we add definitions for non-primitives as axioms to the proof system, or should we do something else? In other words, do logicians accept definitions as axioms?
-
-REPLY [3 votes]: Yes, absolutely you can introduce definitions as axioms. That's what Metamath does:
-http://us.metamath.org/mpegif/mmset.html#definitions
-See also
-http://www.hss.cmu.edu/philosophy/techreports/181_Avigad.pdf
-which defines a formal system with definitions:
-
-We extend the foundational framework in two ways. First, we allow for explicit definitions of new predicates and functions on the universe of sets. And,
- second, we allow function symbols to denote functions that are only partially
- defined, using a logic of partial terms. We call the resulting formal system DZFC.<|endoftext|>
-TITLE: Fourier Analysis textbook recommendation
-QUESTION [13 upvotes]: I am taking a Fourier analysis course at the graduate level and I am unhappy with the textbook (Stein and Shakarchi). What I am looking for is a book that is less conversational and more to the point. Further, I am not terribly interested in applications and would rather be exposed to how Fourier Analysis fits into the broader framework of analysis.
-For background, I used Baby Rudin for a one-year course in advanced calculus, I am currently taking a course from Kolmogorov and Fomin's Introductory Real Analysis and I have taken complex analysis (using Conway's text, Functions of One Complex Variable) as well as topology (using Munkres as well as Engelking) at the graduate level, but I have not yet been introduced to the Lebesgue integral. - -REPLY [7 votes]: I can't help but recommend G. Folland's Tata notes on PDE, which are light, but not conversational/sloppy. It becomes immediately clear how Fourier transforms help. -Rudin's "Functional Analysis" treats Fourier transforms carefully, but gives the impression that he doesn't care about them very much. Not inspirational. -Hormander's volume I of his expanded PDE books is (unlike the later volumes) readable by everyone, and very useful. -The case of Fourier series in one variable is treated in a fashion meant to be down-to-earth, but also forward-looking, in my notes - functions on circles, which includes discussion of Sobolev spaces and distributions on circles.<|endoftext|> -TITLE: Absolute continuity on an open interval of the real line? -QUESTION [6 upvotes]: In classical real analysis I've only seen absolute continuity defined for functions on compact interval $[a,b]$, where the two equivalent definitions are: $f:[a,b]\rightarrow\mathbb{R}$ is AC if -(1) Given $\epsilon > 0$ there is a $\delta > 0$ such that $\sum_{i=1}^n |f(y_i)-f(x_i)|< \epsilon$ for every finite collection of nonoverlapping intervals $( (x_i,y_i) )_{i=1}^n $ each contained in $[a,b]$ with $\sum_{i=1}^n |y_i-x_i|< \delta$. Or, -(2) $f'$ exists a.e. on $[a,b]$, $f'$ is integrable on $[a,b]$, and -$f(x) = \int_a^x f'(y) dy + f(a)$ for all $x \in [a,b]$. -Is there an accepted definition for absolute continuity of a function on an open, possibly unbounded, interval $(a,b)$ where $-\infty \leq a < b \leq \infty$? -It seems that definition (1) extends easily to this case if we replace $[a,b]$ by $(a,b)$. If we call this condition (1'), then it's easy to show (1') is equivalent to (see answer by Jonas below). A natural extension of (2) to this case would be -(2') $f$ is AC on an open set $U$ if, for all compact intervals $[c,d] \subset (a,b)$, $f$ is AC in the sense of (2) on $[c,d]$. -Which of these is the best extension of the definition? I don't know enough about the notion of absolute continuity of measures to know if my extended definitions are consistent with that generalization as well. - -REPLY [3 votes]: (1') is not equivalent to (2'). For example, $f(x)=x^2$ satisfies (2') on $(-\infty,\infty)$ but not (1'). It is not even uniformly continuous. -Condition (2'), being AC on all compact subintervals, is a condition I have at least seen used, and it is the right one if you want to extend the equivalence to being an indefinite integral. Namely, it is equivalent to: -(3) If $a\lt c \lt b$, then $f(x)=f(c)+ \int_c^x f'$ for all $x\in(a,b)$. -This is only a slight modification of (2), which must be made because $f$ may be unbounded or otherwise undefined at $a$ and $f'$ may not be integrable on $(a,x)$, even if $f$ satisfies the stronger condition (1') (e.g., $f(x)=x$ on $(-\infty,\infty)$). -But you asked if this is the "best" definition or if there is an "accepted" definition, and of that I am not sure. I have not seen anyone write "$f$ is AC on $(a,b)$" when they mean (2') holds. For the case $(-\infty,\infty)$, I have seen simply "f is AC on bounded intervals". 
On the Wikipedia page, they use the phrase "locally absolutely continuous" in the section on the relationship to measures.<|endoftext|>
-TITLE: Is the proof of this lemma really necessary?
-QUESTION [18 upvotes]: To prove the Cayley-Hamilton theorem in linear algebra, my professor said that a lemma was necessary:
-
-Lemma: Let $A \in M_n(\mathbb{K})$ be an $n\times n$ matrix over a field $\mathbb{K}$, let $b(t) \in M_n(\mathbb{K})[t]$ and $P(t) = b(t)[A-tI]$; then $P(A) = 0$.
-
-The theorem (which says that if $f$ is an endomorphism of $V$, then $f$ is a solution of its characteristic polynomial) was then proven thus:
-let $B(t) = \text{adj}[A-tI]$ and $P(t) = B(t)[A-tI]$; then $P(A)=0$, but also $P(t) = \delta I$ (where $\delta = \det(A-tI)$). Since $\delta = \chi_f(t)$, we get $P(A) = 0 \Rightarrow \chi_f(A) = 0$.
-My question is: since we interpret the $P(t)$ of the theorem as a polynomial with matrix coefficients, isn't the whole thing kind of obvious from the properties of a polynomial ring? (Assuming we all know how to switch between matrices and endomorphisms.)
-
-REPLY [4 votes]: I'd just like to make the following three observations as a minor complement to the other answers.
-OBSERVATION 1. What the Cayley-Hamilton Theorem says is $$\det
-\begin{pmatrix}
-a_{11}-A & a_{12}&\cdots&a_{1n}\\
-a_{21}&a_{22}-A&\cdots&a_{2n}\\
-\vdots&\vdots&\vdots&\vdots\\
-a_{n1}&a_{n2}&\cdots&a_{nn}-A
-\end{pmatrix}=0.$$
-OBSERVATION 2. The proof of the Cayley-Hamilton Theorem I like best (among the ones I know) is on page 21 (proof of Proposition 2.4) of Introduction to Commutative Algebra by Atiyah and MacDonald. The argument can be phrased as follows.
-Let $K$ be a commutative ring; let $n$ be a positive integer; let $A=(a_{ij})\in M_n(K)$ be an $n$ by $n$ matrix with entries in $K$; let $\chi$ be its characteristic polynomial; define $B=(b_{ij})\in M_n(K[A])$ by $b_{ij}:=\delta_{ij}\,A-a_{ij}$; observe $$\sum_i\ \ b_{ij}\ e_i=0,\quad\det B=\chi(A);$$ and write $(c_{ij})$ for the adjugate of $B$. Applying (a trivial case of) Fubini's Theorem to the double sum $\sum_{i,j}\ c_{jk}\ b_{ij}\ e_i$, we get $\chi(A)=0$.
-OBSERVATION 3. It's easy to define the ring $R[X]$ of polynomials in the indeterminate $X$ with coefficients in a noncommutative ring $R$. But when you think in terms of universal properties, you see that this construction is not very natural. So, it's better, I think, not to introduce it just to prove the Cayley-Hamilton Theorem.<|endoftext|>
-TITLE: Proving commutativity of convolution $(f \ast g)(x) = (g \ast f)(x)$
-QUESTION [19 upvotes]: From any textbook on Fourier analysis:
-"It is easily shown that for $f$ and $g$, both $2 \pi$-periodic functions on $[-\pi,\pi]$, we have $$(f \ast g)(x) = \int_{-\pi}^{\pi}f(x-y)g(y)\;dy = \int_{-\pi}^{\pi}f(z)g(x-z)\;dz = (g \ast f)(x),$$ by using the substitution $z = x-y.$"
-
-I don't doubt that this is true, but I cannot figure out what happened to the negative sign coming from $dy = -dz\;$ after the change of variable $z = x - y$. In particular, after the change of variable $z = x-y,\;$ I am coming up with $ -\int_{-\pi}^{\pi}f(z)g(x-z)\;dz$. What am I missing here?
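-(Editorial note, added for clarity and not part of the original exchange: the sign is absorbed by the reversed limits of integration, and periodicity moves the interval back. Explicitly,
-$$\int_{-\pi}^{\pi}f(x-y)g(y)\;dy = \int_{x+\pi}^{x-\pi}f(z)g(x-z)\,(-dz) = \int_{x-\pi}^{x+\pi}f(z)g(x-z)\;dz = \int_{-\pi}^{\pi}f(z)g(x-z)\;dz,$$
-where the last step uses that $z \mapsto f(z)g(x-z)$ is $2\pi$-periodic, so its integral over any interval of length $2\pi$ is the same.)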
-
-REPLY [6 votes]: \begin{align*}
-&\int_0^t f(t-u)\,g(u)\,\text du\\
-&\qquad\text{let } v=t-u,\quad \text dv=-\text du,\quad u=0\rightarrow v=t,\quad u=t\rightarrow v=0\\
-&=\int_t^0 f(v)\,g(t-v)\,(-\text dv)\\
-&=\int_0^t f(v)\,g(t-v)\,\text dv
-\end{align*}
-\begin{align*}
-\mathcal L\left\{\int\limits_0^t f(t-u)\,g(u)\,\text du\right\}
-&=\int\limits_0^\infty\int\limits_0^t f(t-u)\,g(u)\,\text du\; e^{-pt}\,\text dt\\
-&=\int\limits_0^\infty e^{-pt}\int\limits_0^t f(t-u)\,g(u)\,\text du\,\text dt\\
-&=\int\limits_0^\infty g(u)\int\limits_u^\infty e^{-pt}f(t-u)\,\text dt\,\text du
-&&\text{(interchanging the order over } 0\le u\le t<\infty)\\
-&=\int\limits_0^\infty g(u)\int\limits_{0}^\infty e^{-p(v+u)}f(v)\,\text dv\,\text du
-&&\text{(let } v=t-u,\ \text dv=\text dt)\\
-&=\int\limits_0^\infty g(u)\,e^{-up}\,\text du\times\int\limits_{0}^\infty f(v)\,e^{-pv}\,\text dv\\
-&=G(p)F(p)
-\end{align*}<|endoftext|>
-TITLE: Terminology for point in dent in surface?
-QUESTION [5 upvotes]: This is a simple terminology question. Let $S$ be a (let's say smooth)
-surface in $\mathbb{R}^3$, and $p$ a point
-on $S$. Suppose the principal curvatures $\kappa_1$ and $\kappa_2$ at $p$ are both negative.
-I am imagining $p$ sitting at the bottom of a dent in the surface.
-Is there an accepted term to describe such a point?
-The difficulty is that the Gaussian curvature $\kappa_1 \kappa_2$ is positive, so intrinsically
-$p$ is no different than if it were on a bump rather than a dent.
-I could make up my own term of course, e.g., valley point or cup point, but I'd rather follow convention.
-Thanks!
-
-REPLY [3 votes]: This question (does established terminology for a certain situation exist) has been answered in the comments (no). But for posterity--- ie, for students who may meet "principal curvature" terminology before they learn a lot of differential geometry, and who may run across this question on web searches for related topics--- I can't resist emphasizing a point hinted at in the question itself: having both principal curvatures negative at a point is not a well defined property of a point on a surface; the signs of the principal curvatures depend on the coordinate system you calculate them in.
-For example, if you pick a point $p$ on a sphere in $\mathbb{R}^3$ and a coordinate patch around $p$ in which to compute principal curvatures, you find that both principal curvatures are positive at $p$ if the normal vector field associated to the coordinate patch points inward (ie, toward the center of the sphere), and both negative if the normal vector field associated to the patch points outward. The choice of normal is yours to make; it doesn't come with the sphere. (I suppose this point might be better made with a complicated surface that does not correspond to a well-known shape--- e.g. the surface formed by a tangle of ribbon after you have unwrapped a gift. If you punch a hole in this ribbon and replace the missing disc with a hemisphere, whether this is a "valley" or a "cup" is entirely up to you. In choosing the normal direction, you are deciding whether or not curves in the hemisphere are "curving" "toward" the normal or "away from" it.)
-Now, this indeterminacy up to sign is the worst thing that can happen: if $S \subseteq \mathbb{R}^3$ is a smooth surface, and the principal curvatures at a point $p$ in $S$ are found in some coordinate system to be $\kappa_1$ and $\kappa_2$, then in any other coordinate system they will be found to be either exactly the same (that is, $\kappa_1$ and $\kappa_2$) or the same but with opposite sign (that is, $-\kappa_1$ and $-\kappa_2$). How you prove this depends on how you define the principal curvatures, but it arises e.g. from the fact that they are the maximum and minimum normal components of accelerations of unit-speed curves in the surface at $p$, and there are only two possible choices of unit normal at $p$, differing only by sign. So "both principal curvatures are negative" is a well-defined property of an oriented smooth surface in $\mathbb{R}^3$ (a property which, we learned in the comments, apparently does not have a well-established name; for what it's worth, I like "bump point").
-This same discussion shows that the slightly weaker condition that "both principal curvatures at $p$ have the same sign" is well-defined independent of a choice of normal. (As pointed out in the comments, it does have a well-established name: such a $p$ is said to be an elliptic point.) What the above discussion does not show, but is nevertheless true, is that this weaker condition is even independent of how one "embeds" $S$ in any "ambient space" (what the asker meant when he referred to the Gaussian curvature at $p$ as being "intrinsic"). For more, open up any book on the differential geometry of surfaces and look for the sections around Gauss's Theorema Egregium.<|endoftext|>
-TITLE: The UFD field lemma
-QUESTION [8 upvotes]: This page contains a result which it refers to as the UFD field lemma. I was wondering if anybody knew of any other references which discuss this result--this page is the only place I've seen it.
-The UFD field lemma appears to assert that if $R$ is a unique factorization domain containing infinitely many prime elements, if $F$ is the field of fractions of $R$, and if $A$ is a finitely generated $R$-algebra which is a field and is algebraic over $F$, then $A$ does not contain $F$.
-I'm looking for other sources because I find the exposition on that page a little hard to follow.
-
-REPLY [3 votes]: I think you might find it helpful to first comprehend the essence of the matter in a slightly simpler context, e.g. see the proof I gave on sci.math on 22 Apr 2009. The key ideas are already there. Generally, I recommend Kaplansky, Commutative Rings, for the circle of ideas around the generalized Nullstellensatz (Goldman, Krull, Zariski).<|endoftext|>
-TITLE: Bijection between twin primes and numbers $n$ such that $n^2-1$ has exactly four positive divisors
-QUESTION [7 upvotes]: I'm working my way through Niven's Introduction to Number Theory, and the wording of the following problem is making me unsure of my answer:
-Show that there is a one-to-one correspondence between twin primes and numbers $n$ such that $n^2-1$ has just four positive divisors.
-I felt an obvious bijection would be $f\colon A\rightarrow B\colon (p,p+2)\mapsto p+1$, where $A$ is the set of all pairs of twin primes, and $B$ is the set of all positive $n$ such that $n^2-1$ has only four positive divisors.
$f$ is injective, and for any such $n$, if $n^2-1=(n-1)(n+1)$ has only four positive divisors, they must be $1$, $(n-1)$, $(n+1)$, and $n^2-1$, implying that $n-1$ and $n+1$ are twin primes, and thus $(n-1,n+1)$ would be a suitable preimage.
-However, I assumed the wording of the problem meant I should show a bijection from unordered pairs of twin primes to positive integers, but I'm not sure if the problem intended for me to find a bijection that takes a single prime that happens to be a twin prime to any such $n$, not necessarily positive, which is problematic since both $n$ and $-n$ give the same $n^2-1$. Have I interpreted this problem correctly? If not, do other bijections exist with the differing domains and ranges I mentioned above?
-
-REPLY [2 votes]: I did this same problem for a class on number theory. The book omits the constraint that $n>3$: for $n=3$ we get $3^2-1 = 8$, whose divisors are $1,2,4,8$, and $4$ is not prime. The claim does, however, hold for all $n>3$. Intuitively this is a consequence of the fact that 2 is the only even prime and that the product of three consecutive integers is divisible by 6 ($n-1$ being divisible by two would typically eliminate its candidacy for being prime, except in the $n-1 = 2$ case). Proving that a one-to-one correspondence exists is done by proving an iff statement relating $(n-1),(n+1)$ being prime and $(n-1)(n+1)$ having four divisors. This would imply that the set of all $n$ such that $n^2-1$ has four divisors is equal to the set of $n$ such that $(n-1),(n+1)$ are prime, and the function mapping a set back to itself is a trivial bijective map.
-One direction is pretty easy: $(n-1),(n+1)$ prime $\Rightarrow$ $s = (n-1)(n+1) = p_1 p_2$, so $s$ has four divisors: $1$, $s$, $p_1$, and $p_2$.
-The other direction is left for you to figure out.<|endoftext|>
-TITLE: Parallel transport of a vector along two distinct curves
-QUESTION [7 upvotes]: Let $\mathcal{M}$ be an $n$-dimensional manifold endowed with an affine connection $\nabla$. Let $\gamma_1:[a,b]\rightarrow M$ and $\gamma_2:[c,d]\rightarrow \mathcal{M}$ be two curves with the same initial and final points, that is,
-$p=\gamma_1(a)=\gamma_2(c), q=\gamma_1(b)=\gamma_2(d)$. Take $X\in T_p\mathcal{M}$. Parallelly propagating $X$ along $\gamma_1$ and $\gamma_2$, we obtain two vectors $X_1, X_2\in T_q\mathcal{M}$, respectively. Let $R$ be the curvature tensor of the connection, $R(X,Y)Z=\nabla_X\nabla_Y Z - \nabla_Y\nabla_X Z -\nabla_{[X,Y]}Z$, and $\tau$ its torsion, $\tau(X,Y)=\nabla_X Y-\nabla_Y X -[X,Y]$.
-The question is: How can I compare the two vectors $X_1$ and $X_2$? Can I write the difference $(X_2-X_1)$ in terms of $R,\tau$ and the curves?
-
-REPLY [3 votes]: If your two paths are homotopic you can make a comparison between your two vectors. And yes, the comparison involves an integral over the homotopy of a function of curvature.
-See for example Theorem 13.6.4 in Pressley's "Elementary Differential Geometry" (Google Books will bring up the statement of the theorem).
-If your paths are not homotopic you're out of luck, as T describes. There are things you can say of course, but it's not clear what you're looking for. You should think of an example of a Riemannian manifold you're interested in, to get a sense for how bad it can get.<|endoftext|>
-TITLE: Can linear maps between infinite-dimensional spaces be represented as matrices?
-QUESTION [21 upvotes]: Any linear map between two finite-dimensional vector spaces can be represented as a matrix under the bases of the two spaces.
-But if one or all of the vector spaces is infinite-dimensional, is the linear map still represented as a matrix under their bases?
-If there is a matrix of infinite dimension, what is it used for if not as a representation of a linear map between vector spaces?
-Thanks and regards!
-
-REPLY [12 votes]: What must be understood, however, is that the role of matrices when one works with linear operators on infinite-dimensional vector spaces is a (very) marginal one. Familiar techniques such as the use of determinants, traces etc. no longer work. For instance, any determinant-like function on $\mathrm{GL}(V)$, where $V$ is an infinite-dimensional vector space over a field, is necessarily the trivial one, since any element of $\mathrm{GL}(V)$ is a product of commutators (the group $\mathrm{GL}(V)$ is perfect, in other words; a result by Alex Rosenberg of 1958).<|endoftext|>
-TITLE: The intuition behind generalized eigenvectors
-QUESTION [29 upvotes]: An ordinary eigenvector can be viewed as a vector on which the operator acts by only stretching (without rotating) it.
-Is there a similar intuition behind generalized eigenvectors?
-EDIT: By generalized eigenvectors I'm referring to vectors in the kernel of $(T-\lambda I)^k$ for some $k$.
-
-REPLY [2 votes]: Generalized eigenvectors also have an interpretation in system dynamics: if only generalized eigenvectors can be found, the dynamics components (= blocks in the Jordan canonical form) cannot be completely decoupled. Only if the respective matrix is fully diagonalizable is a full decoupling possible.
-See also this instructive video from Stanford University (from about the 1:00 hour mark on): http://academicearth.org/courses/introduction-to-linear-dynamical-systems<|endoftext|>
-TITLE: classification of small complete groups
-QUESTION [6 upvotes]: I take it there isn't a classification of finite complete groups yet. Has someone put together a classification of small complete groups? I.e. $S_4$, $\text{Aut}(G)$ for $G$ simple, and $\text{Hol}(C_p)$ for $p$ an odd prime are complete. One would then record the smallest complete group $H$ not of one of these forms, which perhaps generalises to a class $P$; then the smallest complete group $J$ not of any of these forms, and so on, until we run out of ideas.
-
-REPLY [2 votes]: This is not a classification, but it is interesting to notice that every finite complete group can be obtained by the following process: start with any finite centerless group $G$ and iterate the automorphism group operation to build the automorphism tower:
-
-$G\to\mathop{Aut}(G)\to \mathop{Aut}(\mathop{Aut}(G))\to\cdots$
-
-Each group maps into the next by the inner automorphisms. If $G$ is centerless, then so is $\mathop{Aut}(G)$, and all the groups in the tower are centerless. The tower terminates at a complete group, a centerless group that is isomorphic to $\mathop{Aut}(G)$ by the canonical map.
-It is a beautiful theorem of Wielandt (1939) that every finite centerless group has an automorphism tower that terminates in finitely many steps. Thus, every finite complete group arises in this fashion.
-This question is also considered at this classic MO question.
-Truly interesting things occur if one is willing to push things harder by continuing the iteration transfinitely.
That is, having built the finite part of the automorphism tower, which has an associated system of mappings, one may simply take the direct limit to produce the limit group $G_\omega$ at stage $\omega$ and continue the iteration transfinitely with
-
-$G_0\to G_1\to G_2\to\cdots G_\omega\to G_{\omega+1}\to\cdots\to G_\alpha\to\cdots$
-
-where one takes the automorphism group at successor ordinal stages and direct limits of the system at limit ordinal stages. Simon Thomas proved the wonderful theorem that every centerless group leads in this transfinite manner to a complete group, where the process stops (see Proceedings AMS article). I extended this result by proving:
-Theorem. Every group $G$ has a terminating transfinite automorphism tower.
-The proof proceeds by showing that every $G$ leads eventually to a centerless group in the transfinite tower, whose subsequent tower then terminates by Thomas' theorem. You can see my paper at the ArXiv, published in the Proceedings of the AMS.
-It is an open question how tall the automorphism tower of a finite group can be. Examples are known with nontrivial centers that do not terminate at any finite stage and make it out to $\omega+n$ for any desired $n$, but the smallest upper bound is on the order of the least inaccessible cardinal.
-Simon Thomas is writing a book on the subject of the automorphism tower problem, and the preliminary versions that I have seen are excellent.<|endoftext|>
-TITLE: Is Inner product continuous when one arg is fixed?
-QUESTION [37 upvotes]: In an inner product space with inner product $\langle\ ,\ \rangle$ and the real or complex line as its base field, for each point $x$ in the space, is $\langle x,-\rangle$ a continuous function of the second argument, and is $\langle - ,x\rangle$ a continuous function of the first argument? "Continuous" is defined with respect to the topology induced by the inner product.
-Thanks and regards!
-
-REPLY [5 votes]: I think the easiest way is to show that it is a convex function, and then use the theorem that says that if a convex function is defined on a convex set, then it is continuous at every interior point. Since we are talking about $\Bbb{R}^n$, this holds at all points because $\Bbb{R}^n$ is a convex and open set.<|endoftext|>
-TITLE: Are there infinitely many primes of the form $4n^{2}+3$?
-QUESTION [13 upvotes]: We know that there are infinitely many primes of the form $4n-1$, $4n+1$, $5n-1$, etc. I saw these things in Apostol's Introduction to Analytic Number Theory textbook. I would like to have an argument working for $n^{2}$. The first expression which came to my mind was $4n^{2}+3$, which gives a prime for $n=1,2,4,5$ (for $n=3$ it gives $39$, which is divisible by $3$). For $n=6$ it gives $147$, which is also divisible by 3. For $n=7$ it gives $199$, which is again a prime. Then for $n=11$ it gives $487$, which is again a prime. I would like to know whether there are infinitely many primes of the form $4n^{2}+3$? If yes, then a proof!
-
-REPLY [16 votes]: Many people "would like to have an argument working for $n^2$", but what is available at the moment (and for the last two centuries) are conjectures. For any list of integer polynomials there is a conjecture on how often all polynomials on the list are simultaneously prime:
-http://en.wikipedia.org/wiki/Bateman%E2%80%93Horn_conjecture
-It is extremely hard to prove that any natural set of integers of density 0 contains infinitely many primes. It is known for the set of values of $x^2 + y^4$ but not for the values of any single-variable polynomial of degree higher than one.
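-(Editorial illustration, not part of the reply: a quick script shows how often $4n^2+3$ is prime for small $n$; note that every $n$ divisible by $3$ fails, since then $3 \mid 4n^2+3$. The helper name and the bound $N$ are mine.)
-# Count n <= N with 4n^2 + 3 prime, by simple trial division.
-def is_prime(m):
-    if m < 2:
-        return False
-    d = 2
-    while d * d <= m:
-        if m % d == 0:
-            return False
-        d += 1
-    return True
-
-N = 2000
-hits = [n for n in range(1, N + 1) if is_prime(4 * n * n + 3)]
-print(len(hits), hits[:6])  # hits begin 1, 2, 4, 5, 7, 11; multiples of 3 never appear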
-The asymptotic formula in the Bateman-Horn conjecture isn't necessarily the most general expression of what people in the field believe to be true (and it is probably a lot older than Bateman and Horn's article that formally codified it), but it does subsume many earlier conjectures on primes of the form $n^2+1$, prime twins and k-tuplets, Schinzel's Hypothesis and Bunyakovsky's conjecture. You can calculate from the formula the predicted frequency of $n$ such that $4n^2 + 3$ is prime.<|endoftext|>
-TITLE: Finding the power series of a rational function
-QUESTION [12 upvotes]: In many combinatorial enumeration problems it is possible to find a rational generating function (i.e. the quotient of two polynomials) for the sequence in question. The question is - given the generating function, how can we find (algorithmically) the values of the sequence, i.e. the coefficients of the corresponding power series?
-I know that for a rational generating function, the sequence satisfies a recurrence relation given by the coefficients of the polynomial in the denominator, so it's really just the question of finding the finite initial values.

-REPLY [4 votes]: Sometimes partial fraction decomposition is useful, especially if there are only simple poles, since then each partial fraction can be expanded in a geometric series.<|endoftext|>
-TITLE: Euler class and Vandermonde polynomial
-QUESTION [5 upvotes]: I found the following in the wikipedia page for Euler class.
-«If the rank $r$ is even, then this cohomology class $e(E) \cup e(E)$ equals the top Pontryagin class $p_{r/2}(E)$.
-Under the splitting principle, this corresponds to the square of the Vandermonde polynomial equaling the discriminant: the Euler class corresponds to the Vandermonde polynomial, the basic alternating polynomial, while the top Pontryagin class corresponds to the discriminant, a symmetric polynomial.
-More formally, the Euler class of a direct sum of line bundles is the Vandermonde polynomial (orientation determines the order of the line bundles up to sign), while top Pontryagin class is the discriminant.»
-Can anybody clarify this for me? In particular I don't understand the sense of "the Euler class of a direct sum of line bundles is the Vandermonde polynomial". I checked the references but there is nothing about that.

-REPLY [6 votes]: According to the Chern-Weil theory (see for example these lecture notes by Johan Dupont), characteristic classes such as the Euler and Pontryagin classes correspond to invariant polynomials (under the bundle structure group) of the curvature form of the vector bundle. Basically, what the splitting principle tells us is that these polynomials can be "formally" written in terms of pullbacks of curvature forms of line bundles which correspond to the "eigenvalues" of the curvature form (viewed as an antisymmetric endomorphism). The invariant polynomial corresponding to the Euler class is the Vandermonde polynomial. For a given rank of the vector bundle, one can always write these polynomials in terms of the vector bundle curvature form, and the use of the line bundle curvature forms is for simplifying the algebraic work.<|endoftext|>
-TITLE: Proof: Series converges $\implies $ the limit of the sequence is zero
-QUESTION [11 upvotes]: I've been using the sentence:

-If a series converges then the limit of the sequence is zero

-as a criterion to prove that a series diverges (when $\lim \neq 0$) and I can understand the rationale behind it, but I can't find a formal proof.
-Can you help me?
-
-REPLY [13 votes]: Yes.
-$$\lim_{n \to \infty} \left ( \sum_{k = 1}^{n + 1} a_k - \sum_{k = 1}^{n} a_k \right ) = \lim_{n \to \infty} a_{n + 1} $$
-And both sums will converge to the same number, so the limit is zero. This is by far the easiest proof I know.
-This is the Cauchy criterion in disguise by the way, so you could use that too.

-REPLY [3 votes]: If we know that the sequence converges and merely wish to show it converges to zero, then a proof by contradiction gives a little more intuition here (although the direct proofs are simple and beautiful). Assume $a_n\to a$ with $a>0$; then for all $n>N$ for some large enough $N$ we have $a_n > a/2$ (take $\varepsilon = a/2$ in the definition of the limit). Now the sum diverges: $\sum_{n>N}a_n > \sum_{n>N}a/2 = \infty$. A similar argument works when $a<0$.<|endoftext|>
-TITLE: Does contractibility imply contractibility with a basepoint?
-QUESTION [5 upvotes]: Let $X$ be a contractible space. If $x_0 \in X$, it is not necessarily true that the pointed space $(X,x_0)$ is contractible (i.e., it is possible that any contracting homotopy will move $x_0$). An example is given in 1.4 of Spanier: the comb space. However, this space is contractible as a pointed space if the basepoint is in the bottom line.
-Is there a contractible space which is not contractible as a pointed space for any choice of basepoint?
-My guess is that this will have to be some kind of pathological space, because for CW complexes, we have the Whitehead theorem. (So I'm not completely sure that the Whitehead theorem is actually a statement about the pointed homotopy category, but hopefully I'm right.)
-
-REPLY [3 votes]: Yes. See exercise 7 here.<|endoftext|>
-TITLE: Solving randomized recurrence relation
-QUESTION [10 upvotes]: I'm looking at the random sequence $x_n$ with $x_0=x_1=1$ and
-\begin{equation}
-x_{n+1}=2x_n\pm x_{n-1}
-\end{equation}
-where we choose the $\pm$ sign independently with equal probability. Now considering the recurrence relations $x_{n+1}=2x_n+x_{n-1}$ and $x_{n+1}=2x_n-x_{n-1}$ separately, it is clear that $x_n\rightarrow\infty$ as $n\rightarrow\infty$. However, the sequence $x_n^{1/n}$ seems to tend to a limit near 1.91 (got this from numerical computation and some brute force by a Monte Carlo simulating the $\pm$ sign). Thus the sequence $x_n^{1/n}$ seems to converge almost surely. I was wondering if anyone could show that the sequence was indeed almost surely converging and/or work out the limit.
-Thanks in advance.
-Update:
-This comment shows that $\lim_{n\rightarrow\infty} |x_n|^{1/n}$ exists almost surely. Let $y_n=\frac{x_n}{2^n}$; then $2^{n+1}y_{n+1}=2^{n+1}y_n\pm 2^{n-1}y_{n-1}$. Hence
-$y_{n+1}=y_n\pm \frac{1}{4}y_{n-1}$
-Embree-Trefethen showed that $\lim_{n\rightarrow\infty} |y_n|^{1/n}$ converges almost surely. See Embree, M.; Trefethen, L. N. (1999), "Growth and decay of random Fibonacci sequences".
-However, finding the almost sure limit accurately or analytically is proving difficult at the moment.

-REPLY [3 votes]: If you replace 2 with 1 in your recurrence, you get the random Fibonacci sequence. Viswanath has proven that the random Fibonacci sequence almost surely has exponential growth (and gives the growth constant). I haven't carefully read the proof, though, so I can't comment on how it could be adapted to the case you're interested in.
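-(An illustrative aside of my own, not from the answers:) the Monte Carlo experiment the question describes is easy to reproduce. The renormalisation below is just a standard overflow-avoidance trick, not anything taken from the cited papers.
-
-import math, random
-
-# Hedged sketch: estimate lim |x_n|^(1/n) for x_{n+1} = 2 x_n +/- x_{n-1},
-# x_0 = x_1 = 1, with signs chosen independently and uniformly. The pair
-# (x_{n-1}, x_n) is rescaled at every step (the recurrence is linear, so
-# this is harmless) and the discarded scale is accumulated in log space.
-def growth_rate(n_steps=100_000, seed=0):
-    rng = random.Random(seed)
-    a, b = 1.0, 1.0          # (x_{n-1}, x_n)
-    log_scale = 0.0
-    for _ in range(n_steps):
-        a, b = b, 2.0 * b + rng.choice((-1.0, 1.0)) * a
-        m = max(abs(a), abs(b))
-        log_scale += math.log(m)
-        a, b = a / m, b / m
-    # log|x_n| is approximately log_scale + log|b|, so:
-    return math.exp((log_scale + math.log(abs(b))) / n_steps)
-
-print(growth_rate())   # prints a value near 1.91, matching the question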
-Kesten and Furstenberg showed quite a while ago that the limit you say is around 1.91 at least exists, by restating the problem as one about multiplication of random matrices.<|endoftext|>
-TITLE: Why is $e^{\pi \sqrt{163}}$ almost an integer?
-QUESTION [85 upvotes]: The fact that Ramanujan's Constant $e^{\pi \sqrt{163}}$ is almost an integer ($262 537 412 640 768 743.99999999999925...$) doesn't seem to be a coincidence, but has to do with the $163$ appearing in it. Can you explain why it's almost-but-not-quite an integer in layman's terms (I'm not a mathematician)?

-REPLY [2 votes]: I know it's super late but in case anyone has this question in the future, Richard Borcherds made a really digestible video on it in 2020: https://www.youtube.com/watch?v=a9k_QmZbwX8&list=PLar4u0v66vIodqt3KSZPsYyuULD5meoAo&index=85<|endoftext|>
-TITLE: How can I prove $\sup(A+B)=\sup A+\sup B$ if $A+B=\{a+b\mid a\in A, b\in B\}$
-QUESTION [46 upvotes]: If $A,B$ are non-empty, upper-bounded sets and $A+B=\{a+b\mid a\in A, b\in B\}$, how can I prove that $\sup(A+B)=\sup A+\sup B$?

-REPLY [3 votes]: $$\forall\, a\in A,\ b\in B:\quad a\le \sup A\ \text{ and }\ b\le \sup B \implies a+b \le \sup A +\sup B,$$
-hence
-$$\sup (A+B)\le \sup A +\sup B.$$
-Now we know there exist $a_n\in A$ and $b_n\in B$ such that
-$$\sup A=\lim_{n\to\infty} a_n$$
-and
-$$\sup B =\lim_{n\to\infty}b_n,$$
-then
-$$\sup A+\sup B =\lim_{n\to\infty} a_n+\lim_{n\to\infty} b_n =\lim_{n\to\infty} (a_n+b_n) \le \sup (A+B),$$
-whence
-$$\sup A+\sup B = \sup (A+B).$$<|endoftext|>
-TITLE: Is this proof of $a^{1/2}$ being either integer or irrational circular/incorrect?
-QUESTION [16 upvotes]: In the question How to prove: if $a,b \in \mathbb N$, then $a^{1/b}$ is an integer or an irrational number?
-there was an answer (revision 0) by Douglas S Stones which said, in full:
-"
-If $\sqrt{a}=x/y$ where $y$ does not divide $x$, then $a=(\sqrt{a})^2=x^2/y^2$ is not an integer (since $y^2$ does not divide $x^2$), giving a contradiction.
-"
-Two people whose opinions I respect claimed that this proof approach was "totally bogus/circular".
-I don't really see how this is circular, or bogus for that matter.
-Doesn't the result follow immediately from unique factorization?
-So my question is: What is wrong with that answer?

-REPLY [7 votes]: It seems to me that maybe the best way to describe the situation is this: Douglas Stones's original answer to the original question consisted of rephrasing the question in such a way as to make it accessible to a proof using basic properties of the integers (specifically, unique factorization).
-In my opinion, one thing which is being lost in all this discussion is just how important it can be to rephrase a question! Sure, the process of rephrasing contains "no math" as Qiaochu has pointed out. But that doesn't make it useless (and I wouldn't use the word circular here either).
-Finding ways to rephrase questions so that they become accessible to the methods available is a basic skill beginning students of mathematics need to learn. For example, much of the material in the early chapters of modern linear algebra books consists of teaching students how to rephrase questions in linear algebra so that they can be solved by row reduction.
-I wouldn't accept Douglas Stones's answer as a complete solution to the problem if it were turned in by a student in an elementary number theory class, just as in my linear algebra class, reducing a problem to a question of row reduction isn't a complete solution.
But if a student came to me and said he or she was stuck on the problem, the first thing I'd try to do is get them to rephrase the question in precisely the way Douglas Stones did.
-Pleasantly, the community has pointed out (both here and at the original question) exactly how to finish the proof after rephrasing it in this useful way.<|endoftext|>
-TITLE: Why does $K \leadsto K(X)$ preserve the degree of field extensions?
-QUESTION [18 upvotes]: The following is a problem in an algebra textbook, probably a well-known fact, but I just don't know how to Google it.

-Let $K/k$ be a finite field extension. Then $K(X)/k(X)$ is also finite with the same degree as $K/k$.

-Obviously if $v_1,...,v_n$ is a $k$-basis of $K$, any polynomial in $K(X)$ can be written as a $k(X)$-linear combination of $v_1,...,v_n$, but I have no idea what to do with a nontrivial denominator.
-Is there perhaps a more elegant way of proving this?

-REPLY [6 votes]: Let $K/k$ be a field extension. Then $K \otimes_k k(X)$ is a localization of the integral domain $K \otimes_k k[X] = K[X]$, thus the natural map $K \otimes_k k(X) \to K(X)$ is injective. Now we have the following equivalence:
-a) $K/k$ is algebraic
-b) Every polynomial over $K$ divides some polynomial over $k$.
-c) $K \otimes_k k(X) = K(X)$.
-$a \Rightarrow b$: Because of uniqueness of the division algorithm, it suffices to prove divisibility in an algebraic extension. So take a splitting field of the polynomial and reduce to the case of linear factors $x-a$. But they divide the minimal polynomial over $k$.
-$b \Rightarrow c$: $K \otimes_k k(X)$ is the localization of $K[X]$ at all nontrivial polynomials over $k$. Since we have $b$, we localize at all nontrivial polynomials over $K$, i.e. we get $K(X)$.
-$c \Rightarrow a$: Take $u \in K$. Then $1/(X-u) = p/q$ for $p \in K[X], q \in k[X]$, i.e. $p (X-u)$ is a polynomial over $k$ with root $u$. QED
-In particular, if $K/k$ is algebraic, we have $[K(X):k(X)] = [K:k]$.<|endoftext|>
-TITLE: Explanation of Maclaurin Series of $x^\pi$
-QUESTION [6 upvotes]: I am reviewing Calc $2$ material and I came across a problem which asked me to explain why $x^\pi$ does not have a Taylor Series expansion around $x=0$. To me it seems that it would have an expansion but it would just be $0$, so maybe it's not a suitable expansion. It doesn't have any holes and it is infinitely differentiable so I don't know why it couldn't have an expansion.

-REPLY [4 votes]: As a graphical supplement to Jonas's and WWright's answers:
-
-This is a plot of the real and imaginary parts of $(x+iy)^\pi$ in the complex plane. Note the cut running across the negative real axis. This cut is precisely the reason why you cannot have a Maclaurin expansion; polynomials cannot exhibit cuts, and a Maclaurin expansion amounts to approximating your function with a sequence of polynomials.<|endoftext|>
-TITLE: Question about the P versus NP Problem
-QUESTION [5 upvotes]: It seems to be an accepted belief based on decades of experience that naive algorithms are not adequate to solve NP-complete problems in a reasonable amount of time. Even those who believe P = NP seem to be looking for an algorithm with a very clever representation, so far without success.
-Can the observations above be formalized? Let's use the full, unrestricted Clique problem as an example. Suppose it could be shown that no clever representation exists for the Clique problem.
That is, suppose it could be proven for every algorithm that either:

-the algorithm is correct and uses an internal representation such that no two distinct input cliques lead to the same internal representation,
-or
-the algorithm is incorrect.

-Would this be a worthwhile result? How important would it be? Or is it wrong?
-Is there already a proof that CLIQUE or SAT, for example, cannot be solved in polynomial time by performing operations on cliques or Boolean expressions respectively?
-References with your answer would be helpful. Thanks

-REPLY [6 votes]: You ask:

-suppose it could be proven for every algorithm that either: the algorithm is correct and uses an internal representation such that no two distinct input cliques lead to the same internal representation, or the algorithm is incorrect. Would this be a worthwhile result? How important would it be? Or is it wrong?

-It's wrong. You can always take a correct algorithm for clique and add a preprocessing step that removes edges which are clearly not in any maximal clique. This will still be a correct algorithm for clique, and obviously does not satisfy either (1) or (2).<|endoftext|>
-TITLE: Why are quadratic equations called quadratics?
-QUESTION [29 upvotes]: The word "quad" generally means 4. Quadratics don't have 4 of anything. Can anyone explain where the name comes from?

-REPLY [24 votes]: From MathWorld:

-The Latin prefix quadri- is used to indicate the number 4, for example, quadrilateral, quadrant, etc. However, it is also very commonly used to denote objects involving the number 2. This is the case because quadratum is the Latin word for square, and since the area of a square of side length $x$ is given by $x^2$, a polynomial equation having exponent two is known as a quadratic ("square-like") equation. By extension, a quadratic surface is a second-order algebraic surface.<|endoftext|>
-TITLE: How can I (algorithmically) count the number of ways n m-sided dice can add up to a given number?
-QUESTION [9 upvotes]: I am trying to identify the general case algorithm for counting the different ways dice can add to a given number. For instance, there are six ways to roll a seven with two 6-dice.
-I've spent quite a bit of time working on this (for a while my friend and I were using figurate numbers, as the early items in the series match up) but at this point I'm tired and stumped, and would love some assistance.
-So far we've got something to this effect (apologies for the feeble attempt at mathematical notation - I usually reside on StackOverflow):
-count(x):
-  x = min(x, n*m-x+n)
-  if x = n
-    1
-  else
-    some sort of (recursive?) operation
-
-The first line simplifies the problem to just the lower numbers - where the count is increasing. Then, if we're looking for the count of the minimum possible (which is also now the max because of the previous line) there is only one way to do that, so it is 1, no matter the n or m.

-REPLY [2 votes]: The formula in the last answer (of Yuval Filmus) seems to be wrong: for m=4, n=9, S=6 one gets a nonzero result while it's impossible to have a sum of 6 with 9 dice!
-The formula we want, I guess, is the one corresponding to the coefficient "c" at http://mathworld.wolfram.com/Dice.html<|endoftext|>
-TITLE: Intuition behind the "large but finite search space" of the proof of the four colour theorem?
-QUESTION [8 upvotes]: I know the four colour theorem was solved by a computer checking a large number of cases.
What I don't understand is why there are only a large but finite number of cases. It seems like there should be infinitely many planar graphs... What is the intuition behind cutting this down to a finite number of configurations?

-REPLY [18 votes]: I'm not sure whether or not you understand how the proof of the $4$ color map theorem worked. I'll sketch that, and then make some philosophical comments.
-Let's start by proving the $6$ color map theorem. Without loss of generality, we may assume that all the faces of our graph are triangular, because adding edges just makes a graph harder to color. (We are coloring vertices.) Let $V$, $E$ and $F$ be the number of vertices, edges and faces. Every edge is in two faces, and every face has three edges, so $F=(2/3)E$. We have $V-E+F=2$ by Euler's relation, so $V=E/3 +2$. Since the sum of all vertex degrees is $2E$, the average degree of a vertex is $2E/V = (2E)/(E/3 + 2) < 6$. So the average degree is less than $6$ and there must be a vertex $v$ of degree $\leq 5$. Delete $v$, $6$-color the rest of the graph (by induction), and add $v$ back. Since the vertex has degree $\leq 5$, there is some color which does not border $v$; color $v$ that color.
-To prove the $5$ color map theorem requires one new idea. Find a vertex $v$ as before, color the rest of the graph and add it back. But we have a problem if $v$ has degree $5$ and its neighbors all have different colors. The idea to get around this is to consider the subgraphs $G_{ij}$ consisting of vertices of colors $i$ and $j$, for different pairs $(i,j)$. Because $G_{ij}$ and $G_{k \ell}$ cannot cross, for $i$, $j$, $k$ and $\ell$ distinct, some of these graphs must be disconnected. By considering the finitely many possible arrangements of these graphs, one sees that, in each case, one can take some component of $G_{ij}$ and switch the colors $i$ and $j$ on that component in order to make $v$ have only $4$ distinct colors of neighbor. See, for example, Wikipedia for details.
-The $4$ color map theorem requires the recoloring idea of the previous paragraph, plus one more idea. Instead of showing that there is always a vertex of degree $3$, $4$ or $5$, one shows that one of a finite list of larger configurations is unavoidable. This list tends to have 600-2000 members, depending on exactly which copy of the proof you read. The idea in showing that this list is unavoidable is an averaging argument like the one I used; the technical term to search for is "discharging". For each of these hundreds of configurations, one removes it from the graph, colors the rest, and analyzes the possible positions of the graphs $G_{ij}$. In each case one can recolor some of the components of the $G_{ij}$ in such a way that, when the configuration is put back in, the whole graph will be colorable.
-As I understand it, computers are used in two ways. The less interesting one is that there is a routine which verifies that the discharging argument is correct, finds the possible positions of the $G_{ij}$ and checks that they can be recolored.
-The more interesting one is that, when the routine in the previous paragraph fails, the computer perturbs the discharging argument, finds a new resulting list of configurations and tries again.
-It is basically a genetic process, where the computer tweaks the discharging algorithm until it finds a list of configurations which works.
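-(An illustration of my own, not from the proofs being described:) the inductive skeleton of the $6$-color argument above is short enough to write out as code. The sketch below assumes the input graph is planar (so a vertex of degree $\leq 5$ always exists) and is given as a dict from vertices to neighbor sets.
-
-# Hedged sketch of the inductive 6-coloring: strip a minimum-degree vertex,
-# color the rest recursively, then give the stripped vertex a free color.
-def six_color(adj):
-    if not adj:
-        return {}
-    v = min(adj, key=lambda u: len(adj[u]))      # degree <= 5 on planar input
-    rest = {u: adj[u] - {v} for u in adj if u != v}
-    coloring = six_color(rest)                   # induction hypothesis
-    used = {coloring[u] for u in adj[v]}         # colors already next to v
-    coloring[v] = next(c for c in range(6) if c not in used)
-    return coloring
-
-print(six_color({0: {1, 2}, 1: {0, 2}, 2: {0, 1}}))  # e.g. {2: 0, 1: 1, 0: 2}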
-The leap of faith which Appel and Haken took was to believe that there was some sufficiently complex discharging algorithm, and some list of configurations for which this procedure would work, and they just had to search long enough. Since then, this search has been implemented many times, with the details of the search algorithm different in each case, and many different solutions have been found.
-To my mind, the whole thing reminds me of parallel evolution. There are lots of ways to make a wing, and if you leave an organism for long enough in an environment that rewards flight then it will develop one. But all of those ways are so complex that a human intelligence couldn't engineer them. Similarly, there are lots of discharging algorithms that prove the $4$ color map theorem, but they are all too bizarre for a human mathematician to find directly.<|endoftext|>
-TITLE: The Hamiltonian problem on Polyominoes
-QUESTION [7 upvotes]: A polyomino is a connected subset of $\mathbb{Z}^2$ - a set of squares joined along their edges such that the resulting form is connected (or, put briefly, a generalized form of Tetris piece). A polyomino can easily be thought of as a graph with vertices being elements of $\mathbb{Z}^2$ and edges between two vertices at Hamming distance 1 (i.e. with one coordinate equal and the other differing by 1).
-The Hamiltonian path/cycle problem is the problem of determining for a given graph whether it contains a path/cycle that visits every vertex exactly once. It is known to be NP-complete for general graphs and even for planar graphs. My question is whether it's still NP-complete when restricting the graphs to be polyominoes (and if it is NP-complete, how it is shown).

-REPLY [9 votes]: This problem (both the path version and the circuit version) was shown to be NP-complete by Itai, Papadimitriou and Szwarcfiter [IPS82] by a reduction from a special case of the Hamiltonian circuit problem for planar graphs.
-[IPS82] Alon Itai, Christos H. Papadimitriou and Jayme Luiz Szwarcfiter. Hamilton paths in grid graphs. SIAM Journal on Computing, 11(4):676–686, 1982. http://dx.doi.org/10.1137/0211056<|endoftext|>
-TITLE: Can Robinson's Q prove Presburger arithmetic consistent?
-QUESTION [9 upvotes]: I made an assertion in What are some examples of theories stronger than Presburger Arithmetic but weaker than Peano Arithmetic? that Q has higher consistency strength than Pres, Presburger arithmetic; i.e., Q proves the consistency sentence for Pres.
-But in fact, I only know something weaker, that Q can formalise the provability predicate for Hilbert systems, and so prove, say, that Peano arithmetic proves Pres consistent.
-Is there a direct proof of the consistency of Pres in Q?

-REPLY [8 votes]: I believe that Theorem 1 of Bezboruah and Shepherdson 1976 [1] covers your question, at least in spirit. Their theory $T_0$ is a finite theory extending $Q$. Quoting their paper:

-Theorem 1. Let $L$ be any formal system with a recursive set of axioms, a finite number of finitary and recursive rules of inference including modus ponens and having $A \to A$ as a theorem for all sentences $A$. Let
- $$Con_L =_{df}\quad \lnot(\exists y,z)(Th_L(y) \land Th_L(z) \land neg(z,y))$$
- where $\text{Th}_L$, $\text{neg}$ are given in Definition 3 below. Then $\text{Con}_L$ is not provable in $T_0$.

-The authors, however, express the common doubt that consistency proofs in Q are philosophically meaningful.
-
-"We must agree with Kreisel that this is devoid of any philosophical interest and that in such a weak system this formula cannot be said to express consistency but only an algebraic property which in a stronger system (e.g. Peano arithmetic P) could reasonably be said to express the consistency of Q."

-The (well known?) difficulty here is that Q can formalize the provability predicate but cannot verify the Hilbert-Bernays derivability conditions for it.
-1: A. Bezboruah and J. C. Shepherdson, "Gödel's Second Incompleteness Theorem for Q", The Journal of Symbolic Logic, Vol. 41, No. 2 (Jun. 1976), pp. 503-512, JSTOR.<|endoftext|>
-TITLE: Why do lambda and pi go together?
-QUESTION [8 upvotes]: In measure theory, we have "lambda systems" and "pi systems". Pearl's message passing algorithm has "lambda messages" and "pi messages". Is there a reason that lambda and pi go together?

-REPLY [6 votes]: The measure theory ideas of $\pi$-system and $\lambda$-system were introduced by Dynkin in his book Die Grundlagen der Theorie der Markoffschen Prozesse (1961 German translation of 1959 Russian original); a note at the end of the book mentions they are new, but doesn't explain why they are so called. My guess has been that $\pi$ is for "product" and $\lambda$ is for "limit," or some Russian cognates thereof; I'm not sure there's any connection between the letters themselves. Although Professor Dynkin has recently retired, he still has an office here at Cornell; if I see him and I think of it, I may ask him about it.
-Unfortunately I don't know anything about message passing, so I can't say whether Pearl's terminology is related or a coincidence.<|endoftext|>
-TITLE: Quantifying dimensionality of combinatorial space
-QUESTION [5 upvotes]: Is it possible to quantify the number of dimensions in combinatorial spaces? The space I am particularly interested in is the set of all partitions of a set (its size bounded by the Bell number), where objects in this space are particular partitions.

-REPLY [6 votes]: It makes sense to consider some sets of combinatorial objects as spaces (or polytopes) and therefore discuss dimensionality (e.g. the set of n by n (-1,0,+1)-matrices). Although, perhaps the word "dimension" would be better described as "degrees of freedom".

-Mathematically, degrees of freedom is the dimension of the domain of a random vector, or essentially the number of 'free' components: how many components need to be known before the vector is fully determined.

-I suspect that it will be difficult to discuss dimensionality in many combinatorial settings. For example, imagine constructing a Latin square, starting from an empty matrix, placing symbols one-at-a-time in a non-clashing manner. After placing (say) half of the symbols, we might find: (a) there are still many completions of this partial Latin square, (b) there are no completions of this partial Latin square or (c) there is a unique completion of this partial Latin square. This seems to go against the notion of dimensionality -- the number of "components" required to determine the Latin square is not fixed.
-You could think of the set of partitions of a set of n elements as having dimension n. You require n pieces of information to determine the partition (which set each element is in). But as Qiaochu Yuan points out, who cares? There's no point in having a notion of "dimension" unless you can use it for something.<|endoftext|>
-TITLE: Example: Function sequence uniformly converges, its derivatives don't.
-QUESTION [9 upvotes]: Could anyone give an example of a sequence of differentiable (real) functions that uniformly converge to a differentiable function, but the derivatives of which don't converge to the derivative of the limit function?

-REPLY [17 votes]: This is from the book Counterexamples in Analysis by Gelbaum and Olmsted. (Google books link: http://books.google.com/books?id=cDAMh5n4lkkC)
-This is under:
-A sequence of infinitely differentiable functions converging uniformly to zero, the sequence of whose derivatives diverges everywhere.
-$$f_{n}(x) = \frac{\sin nx}{\sqrt{n}}$$
-This is on page 76, Chapter 7.

-REPLY [4 votes]: Consider the function $f_{n}: [0,2\pi] \to \mathbb{R}$ defined by $f_{n}(x)=n^{-1/2}\sin(nx)$ and let $f:[0,2\pi] \to \mathbb{R}$ be the zero function, that is $f(x)=0$.
-OK, here I work out the details. Since $\sin{x}$ oscillates between -1 and 1, we have $d_{\infty}(f_{n},f) \leq n^{-1/2}$, where $d_{\infty}$ is the uniform metric defined as $d_{\infty}(f,g)= \sup_{x \in [0, 2\pi]} |f(x)-g(x)|$. Since $\frac{1}{\sqrt{n}} \to 0$, using the squeeze test we see that $f_{n} \to f$ uniformly.
-On the other hand, $f_{n}'(x)= \sqrt{n}\cos(nx)$, so we have $|f_{n}'(0)-f'(0)|=\sqrt{n}$, which says that $f_{n}'$ does not converge pointwise to $f'$.<|endoftext|>
-TITLE: Simplification of expressions containing radicals
-QUESTION [23 upvotes]: As an example, consider the polynomial $f(x) = x^3 + x - 2 = (x - 1)(x^2 + x + 2)$ which clearly has a root $x = 1$.
-But we can also find the roots using Cardano's method, which leads to
-$$x = \sqrt[3]{\sqrt{28/27} + 1} - \sqrt[3]{\sqrt{28/27} - 1}$$
-and two other roots.
-It's easy to check numerically that this expression is really equal to $1$, but is there a way to derive it algebraically which isn't equivalent to showing that this expression satisfies $f(x) = 0$?

-REPLY [17 votes]: Pardon my skepticism, but has anyone so much as breadboarded Blömer '92 or Landau '93 in all these 18 years? For lack of same, people still publish ugly surdballs, e.g.,
-$$\vartheta _3\left(0,e^{-6 \pi }\right)=\frac{\sqrt[3]{-4+3 \sqrt{2}+3 \sqrt[4]{3}+2 \sqrt{3}-3^{3/4}+2 \sqrt{2}\, 3^{3/4}} \sqrt[4]{\pi }}{2\ 3^{3/8} \sqrt[6]{\left(\sqrt{2}-1\right) \left(\sqrt{3}-1\right)} \Gamma \left(\frac{3}{4}\right)}$$
-(J. Yi / J. Math. Anal. Appl. 292 (2004) 381–400, Thm 5.5 vi) instead of
-$$\vartheta _3\left(0,e^{-6 \pi }\right)=\frac{\sqrt{2+\sqrt{2}+\sqrt{2} \sqrt[4]{3}+\sqrt{6}} \,\sqrt[4]{\pi }}{2\ 3^{3/8} \Gamma \left(\frac{3}{4}\right)}\quad .$$
-And why do both papers trot out the same old Ramanujan denestings instead of new and interesting ones?
E.g.,
-$$\sqrt{2^{6/7}-1}=\frac{2^{8/7}-2^{6/7}+2^{5/7}+2^{3/7}-1}{\sqrt{7}}$$
-or
-$$\sqrt[3]{3^{3/5}-\sqrt[5]{2}}=\frac{2^{2/5}+\sqrt[5]{3}+2^{3/5} 3^{2/5}-\sqrt[5]{2}\, 3^{3/5}}{5^{2/3}}$$
-or
-$$\frac{\sqrt[3]{1+\sqrt{3}+\sqrt{2}\, 3^{3/4}}}{\sqrt[6]{\sqrt{3}-1}}=\frac{\sqrt{1+\sqrt{3}+\sqrt{2} \sqrt[4]{3}}}{\sqrt[6]{2}}\quad ?$$
-These results were found by two young students of mine who would very much like to know values of q and b in Bill Dubuque's structure theorem which effect the denesting
-$$\sqrt[3]{-\frac{106}{25}-\frac{369 \sqrt{3}}{125}+\frac{3 \sqrt{3} \left(388+268 \sqrt{3}\right)}{100 \sqrt[3]{2}\, 5^{2/3}}}=\frac{3}{5^{2/3}}-\frac{1+\sqrt{3}}{\sqrt[3]{10}}+\frac{1}{5} \sqrt[3]{2} \left(3+2 \sqrt{3}\right)\quad.$$
-Thanks in advance.<|endoftext|>
-TITLE: Continuous and bounded variation does not imply absolutely continuous
-QUESTION [15 upvotes]: I know that a continuous function which is BV may not be absolutely continuous. Is there an example of such a function? I was looking for a BV function whose derivative is not Lebesgue integrable but I couldn't find one.

-REPLY [18 votes]: The Devil's staircase function does the trick.
-Its derivative is zero almost everywhere with respect to Lebesgue measure, so the function is not absolutely continuous.
-See http://mathworld.wolfram.com/DevilsStaircase.html

-REPLY [9 votes]: Byron already answered your main question, but your last sentence is another matter. You want a BV function whose derivative is not integrable, but such things don't exist. In particular, if $f$ is monotone on $[a,b]$, then $f'$ exists a.e., is Lebesgue integrable, and $\int_a^b f' \leq f(b)-f(a)$. Thus half of the fundamental theorem of calculus holds, so to speak. General BV functions are differences of monotone functions, so their derivatives are also Lebesgue integrable.<|endoftext|>
-TITLE: What are some alternative definitions of vector addition and scalar multiplication?
-QUESTION [9 upvotes]: While teaching the concept of vector spaces, my professor mentioned that addition and multiplication aren't necessarily what we normally call addition and multiplication, but any other function that complies with the eight axioms needed by the definition of a vector space (for instance, associativity, commutativity of addition, etc.). Is there any widely used vector space in which alternative functions are used as addition/multiplication?

-REPLY [5 votes]: What do you "normally" call addition and multiplication?
-Just those operations with real numbers, or all kinds of addition and multiplication "derived" from the well-known operations with real numbers, or that "look like" these operations?
-Because, in the first case, you have plenty of elementary and widely used vector spaces with operations which are not those of real numbers:

-$\mathbb{R}^2$, the set of ordered pairs of real numbers $(x,y)$, is a real vector space, with addition and multiplication defined as $(x,y) + (u,v) = (x+u, y+v)$ and $\lambda (x,y) = (\lambda x , \lambda y)$. These operations are defined using the "normal" addition and multiplication of real numbers, but are not the "normal" addition and multiplication of real numbers just because $(x,y)$ is not a real number.
-${\cal C}^0 (\mathbb{R}, \mathbb{R})$, the set of continuous functions $f:\mathbb{R} \longrightarrow \mathbb{R}$, is a real vector space, with addition and multiplication defined point-wise; that is $(f+g)(x) = f(x) + g(x)$ and $(\lambda f)(x) = \lambda f(x)$.
Again, these operations are defined using the "normal" addition and multiplication of real numbers, but are not the "normal" addition and multiplication of real numbers, for the same reason.
-$\mathbb{Z}/2\mathbb{Z}$, the set of integers mod 2, is a $\mathbb{Z}/2\mathbb{Z}$-vector space, with addition and multiplication $\widetilde{m} + \widetilde{n} = \widetilde{n+m}$ and $\widetilde{\lambda}\widetilde{m} = \widetilde{\lambda m}$, where $\widetilde{m}$ denotes the class of $m$ mod 2. Ditto.
-$\mathbb{R}(x)$, the field of rational functions $\frac{p(x)}{q(x)}$, where $p(x), q(x) \in \mathbb{R}[x]$ are polynomials, $q(x) \neq 0$, is a $\mathbb{R}(x)$-vector space, with addition and multiplication $\frac{p(x)}{q(x)} + \frac{r(x)}{s(x)} = \frac{p(x) s(x) + r(x) q(x)}{q(x)s(x)} $ and $\frac{p(x)}{q(x)} \frac{r(x)}{s(x)} = \frac{p(x)r(x)}{q(x)s(x)}$. Ditto.
-$\mathbb{C}$, the set of complex numbers, is a $\mathbb{C}$-vector space, with the addition and multiplication of complex numbers. Ditto.
-$\mathbb{K}^n$, the set of ordered families $(x_1, \dots , x_n)$ of elements of any field $\mathbb{K}$, is a $\mathbb{K}$-vector space, with addition and multiplication defined as in example 1. Examples 3, 4 and 5 are particular cases of this one with $n=1$ and $\mathbb{K} =$ $\mathbb{Z}/2\mathbb{Z}$, $\mathbb{R}(x)$ and $\mathbb{C}$, respectively. Example 1 is also a particular case, with $n=2$ and $\mathbb{K} = \mathbb{R}$. Addition and multiplication in $\mathbb{K}$ may have nothing in common with the operations with real numbers.<|endoftext|>
-TITLE: limit inferior and superior for sets vs real numbers
-QUESTION [25 upvotes]: I am looking for an intuitive explanation of $\liminf$ and $\limsup$ for sequences of sets and how they correspond to $\liminf$ and $\limsup$ for sequences of real numbers. I researched online but cannot find a good comparison. Any link, reference or answer very much appreciated.
-For example, what are the $\liminf$ and $\limsup$ of the real number sequences $a_n=(-1)^n$ and $b_n=1/n$? Corresponding to this, what are the $\liminf$ and $\limsup$ of the sequences of sets $A_n=\{(-1)^n\}$ and $B_n=\{1/n\}$?

-REPLY [25 votes]: Let's deal with the examples first.
-The $\liminf$ of the real sequence $\{a_n\}$ with $a_n = (-1)^n$ is $-1$; the $\limsup$ of the same sequence is $1$. For the sequence $\{b_n\}$, with $b_n=\frac{1}{n}$, since the sequence converges to $0$, every subsequence converges to $0$ so both $\liminf$ and $\limsup$ are equal to $0$. You can think of the $\liminf$ as the infimum of all the limits of all converging subsequences of the sequence, and the $\limsup$ as the supremum of all the limits of all converging subsequences of the sequence.
-Using the definitions given by Jens, for $A_n=\{(-1)^n\}$ the $\limsup$ is $\{-1, 1\}$, because both elements show up in infinitely many of the $A_n$, and the $\liminf$ is empty, because no element occurs in all but finitely many of the $A_n$. For $B_n$, they are both empty because no element occurs in infinitely many of the sets, nor in all but finitely many (each element occurs in only one $B_n$). Again, $\liminf$ is the collection of all points that are in all but finitely many of the sets, while $\limsup$ is the collection of all points that are in infinitely many of the sets.
-But you are looking at the wrong sets if you want your sets to be related to your sequences. As Nate Eldredge points out, what you should be looking at is the set $A_n = (-\infty,a_n)$, or $A_n = (-\infty,(-1)^n)$.
Using that definition, you have that $\limsup A_n = (-\infty,1)$ (as expected, since $\limsup a_n = 1$; each of these numbers occurs in infinitely many of the $A_n$), and $\liminf A_n=(-\infty,-1)$ because those are the only ones that occur in all but finitely many of the sets (in fact, in all; every other number that occurs in any $A_n$ occurs only in the $A_n$ with even $n$, so is missing from infinitely many $A_n$); while if you let $B_n = (-\infty,\frac{1}{n})$, then $\liminf B_n=\limsup B_n = (-\infty,0]$ (again, as expected, since the limit inferior and limit superior of $b_n$ are both equal to $0$).
-Now, the reason you seem to be getting hung up is that there seems to be little relation between the limits inferior and superior of $a_n$, and the limits inferior and superior of the sequence of sets $\{a_n\}$. But the point that Nate Eldredge made is that these are not the sets you want to associate with the sequence $a_n$.
-You may remember that a sequence $\{a_n\}$ converges to $L$ if and only if every subsequence $\{a_{n_k}\}$ converges to $L$. Also, every sequence contains a monotone subsequence, so if we allow $\infty$ and $-\infty$ as "limits", it follows that every sequence will necessarily have a converging subsequence. So one can ask: "what are all the points $M$ for which there is a subsequence of $\{a_n\}$ that converges to $M$?" One can view the limits inferior and superior in terms of this set: the limit inferior of the sequence is the smallest number $\ell$ (including possibly $\infty$ or $-\infty$) for which there is a subsequence of $\{a_n\}$ converging to $\ell$. The limit superior is the largest number $L$ for which there is a subsequence of $\{a_n\}$ that converges to $L$. As it happens, the limit exists if and only if $\ell=L$. The limits inferior and superior can also be defined by
-$$\liminf a_n = \lim_{n\to\infty}(\inf\{a_m|m\geq n\}) = \sup_n\left(\inf\{a_m|m\geq n\}\right)$$
-and
-$$\limsup a_n = \lim_{n\to\infty}(\sup\{a_m|m\geq n\}) = \inf_n\left(\sup\{a_m|m\geq n\}\right).$$
-Viewed like this, you can perhaps see a bit more of a connection with the limits inferior and superior of a sequence of sets. If $\{A_n\}$ is a sequence of sets, then the limits inferior and superior are defined to be:
-$$\liminf A_n = \cup_{n=1}^{\infty}\left(\cap_{m=n}^{\infty} A_m\right)$$
-and
-$$\limsup A_n = \cap_{n=1}^{\infty}\left(\cup_{m=n}^{\infty} A_m\right).$$
-Think of an intersection as taking the "smallest" thing the sets have in common (so like an infimum), and think of a union as taking the "largest" thing the sets span together (so like a supremum). The limit inferior is the supremum of the infima, while the limit superior is the infimum of the suprema. It is now a nice exercise to verify that $\liminf A_n$ is the collection of all things which are in all but finitely many of the $A_i$, while $\limsup A_n$ is the collection of all things which are in infinitely many of the $A_i$ (as described by Jens).
-So, how do you connect a sequence $\{a_n\}$ to sets so that the limits inferior and superior correspond in some way? You can't just take $A_n = \{a_n\}$, because then each $A_n$ knows nothing about what came before or after; you lose all information that could tell you something about subsequences.
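-(A toy check of my own, before continuing; not part of the original answer:) the set formulas above can be verified mechanically on a finite horizon for the example $A_n=\{(-1)^n\}$; the horizon length below is an arbitrary choice.
-
-# Hedged sketch: approximate liminf/limsup of A_n = {(-1)^n} by truncating
-# liminf A_n = union over n of (intersection over m >= n of A_m) and
-# limsup A_n = intersection over n of (union over m >= n of A_m).
-HORIZON = 1000
-A = [frozenset({(-1) ** n}) for n in range(1, HORIZON + 1)]
-
-def tail_union(k):
-    return frozenset().union(*A[k:])
-
-def tail_intersection(k):
-    return frozenset.intersection(*A[k:])
-
-liminf = frozenset().union(*(tail_intersection(k) for k in range(HORIZON - 1)))
-limsup = frozenset.intersection(*(tail_union(k) for k in range(HORIZON - 1)))
-print(liminf, limsup)   # frozenset() and frozenset({1, -1}), matching the text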
You could try letting $A_n =\{a_m\mid m\geq n\}$, and that will even work in some instances, but the problem here is that you lose the following information: in the real numbers, a sequence may converge to a number even if no term in the sequence equals the limit; then the limit is never going to show up in any of the sets, and it's not going to show up in the limits inferior nor superior of the sets.
-What's the solution? Anything strictly smaller than the limit inferior of a sequence is going to be a lower bound for all but finitely many of the terms of the sequence (if infinitely many terms of the sequence were below some $k$ strictly smaller than $\liminf a_n$, then you would be able to get a subsequence from among them whose limit is at most $k$, strictly smaller than $\liminf a_n$, a contradiction). This suggests that what you want to do is let $A_n$ be the collection of all lower bounds to $a_n$; then the limit inferior of the $A_n$ will be the collection of all things that are lower bounds to all but finitely many terms of the sequence, exactly the set you want to consider to find $\liminf a_n$.
-What is the limit superior of $a_n$? You can define it dually, as the smallest of all numbers that are upper bounds for all but finitely many terms of the sequence (this will lead to the formula that says that $\limsup a_n = -\liminf(-a_n)$). Or you can try to define it in terms of the lower bounds again: any number $k$ strictly smaller than the limit superior must have infinitely many terms of the sequence larger than $k$ (otherwise, no subsequence could converge to something larger, so no subsequence could converge to the limit superior). That is: look at the collection of all things that are lower bounds for infinitely many of the $a_n$, and the supremum of that will be the limit superior. So we again look at $A_n=(-\infty,a_n)$ (the set of all lower bounds to $a_n$), and consider the limit superior of the $A_n$; this is the collection of all numbers that are lower bounds to infinitely many of the $a_n$, and so its supremum will be $\limsup a_n$. So that's why you consider the set $A_n=(-\infty,a_n)$ instead of the set $\{a_n\}$, and where they come from.
-There are other ways of associating to each $a_n$ an appropriate set; in this situation, note that $\sup A_n = a_n$ for each $n$, that $\sup(\liminf A_n) = \liminf (\sup A_n) = \liminf a_n$, and $\sup(\limsup A_n) = \limsup(\sup A_n) = \limsup a_n$, which makes this association pretty nice.
-I hope this helps clarify it further.<|endoftext|>
-TITLE: Covering for connected and locally path connected spaces
-QUESTION [5 upvotes]: Under the condition that the spaces (or maybe just the total space) are connected and locally path connected, is a covering then the same as a homeomorphism?

-REPLY [7 votes]: Dear Down, To prove the surjectivity, you will have to use connectivity of $E'$ (otherwise the result is not true). You want to show that $f(E)=E'$. Here are some hints: (i) think about what properties you need to prove for $f(E)$ to get this equality. (This is where connectedness will be used). (ii) Prove them using the covering space properties.

-– Matt E Sep 15 '10 at 18:49

-Incidentally, this statement (that $E\to B$ is surjective) is also true if we are working with a fibration. This is actually a hint: use path lifting.

-– Akhil Mathew Sep 16 '10 at 2:41<|endoftext|>
-TITLE: How can I pack $45-45-90$ triangles inside an arbitrary shape?
-QUESTION [5 upvotes]: If I have an arbitrary shape, I would like to fill it only with $45-45-90$
-The aim is to get a Tangram look, so it's related to this question. -Starting with $45-45-90$ triangles would be an amazing start. After -the shape if filled I imagine I could pick adjacent triangles, either -$2$ or $4$ and draw squares and parallelograms instead, but just getting -the outline estimated with packed $45-45-90$ triangles would be great. -How do I get started ? -EDIT -@J.M.'s comment makes perfect sense, which means I should first make sure my shape is suited for this. Here's a sketch I did to illustrate this: - -The black shape is the 'arbitrary shape', the blue path is the path I should be filling. -So far, I see the first step is to 'estimate' arbitrary paths with lines at right or 45 degree angles. The second step would be the initial question, packing 45-45-90 triangles into the shape. -Hints for estimating random angles lines with 45/90 degrees angled lines or 45-45-90 triangle packing ? -UPDATE2 -I've gone ahead with a naive approach to estimate an arbitrary path(outline) using only straight or diagonal(45 degrees) lines. -function estimate45(points:Vector.):Vector. { - var result:Vector. = new Vector.(); - var pNum:int = points.length,angle:Number,pi:Number = 3.141592653589793,qpi:Number = pi*.25,d:Number = 0; - for(var i:int = 0 ; i < pNum ; i++){ - if(i == 0) angle = Math.atan2(points[i].y,points[i].x); - else { - angle = Math.atan2(points[i].y-points[i-1].y,points[i].x-points[i-1].x); - d = Math.sqrt((points[i].x-points[i-1].x)*(points[i].x-points[i-1].x)+(points[i].y-points[i-1].y)*(points[i].y-points[i-1].y)) - } - //constraint to 45 (1. scale to values between 0,1 (/qpi) 2. round, 3. do whatever(convert to degrees/radians as needed) - angle = (Math.round(angle/qpi)) * 45 / 57.2957795; - if(i == 0) result.push(new Point(Math.cos(angle)*d,Math.s(angle)*d)); - else result.push(new Point(result[i-1].x+Math.cos(angle)*d,result[i-1].y+Math.s(angle)*d)); - } - return result; -} - -I loop trough the the path (an ordered list of points) and I calculate the angle and radius of each line(cartesian to polar I think). Then I 'round' the angle to 45 degrees, -and draw a line to a 45 degrees constrained version of the original angle, and keep the same length for the line. -This isn't very good way to do it, especially for consecutive lines with similar angles. -Here are some tests: - -The faded red is the original, the green is the estimation. - -@Américo Tavares's suggestion is great though. I could use this approach for bitmap graphics too, not just vector graphics. -If I would go with this approach, I imagine I would do something like: -get a mosaic(create a grid of boxes to cover the size of the shape) -boxes_xnum = floor(w/box_size) -boxes_ynum = floor(h/box_size) -for(y to boxes_ynum): - for(x to boxes_xnum): - grid.addBitmap(copyPixels(source,x*box_size,y*box_size,box_size,box_size));//copypixels(source,x,y,width,height) - -for box in grid: - if(box.nonAlphaPixels/box.totalPixels > .75): fullBox - else: - checkDiagonalType()//is it like this / or like this \ - checkFillSide()//which of the two sides should be filled - //which means I should check for something constrained to 45 degrees angles \| or _\ or |/ or /_ - //in the case of halfs go for random diagonal ? - -If I think about this better, -when I loop though the pixels of a box, keep a pixel count per box 'quadrant'(top-left). -If the non transparent pixels in one quadrant in larger than .5 or .75 it's marked as used. 
-Based on how many and which of the 4 quadrants are used in a box, a diagonal with direction is used.
-
-Does this make sense, or am I overcomplicating this?

-REPLY [4 votes]: I show here my interpretation of your question and comments so far:

-One has to define the triangle side lengths (one is of course enough) depending on the resolution one needs. In the figure I approximated the red shape by the green right triangles with two equal angles. The shape boundary determines the triangles over or close to it, the criterion being that at least 50% of the triangle area lies inside the shape. The inner triangles can be chosen more freely.
-Remark: there are a few mistakes in the sketch.<|endoftext|>
-TITLE: Direct proof that f(x)=x sin(1/x) does not satisfy Lusin N condition
-QUESTION [6 upvotes]: Let $f$ be defined as
-$$f(x)=\begin{cases} x \sin(\frac{1}{x}) & x\ne 0 \\ 0 & x=0 \end{cases}$$
-$f(x)$ is not absolutely continuous, so it might not satisfy the Lusin N condition.
-Is there a direct proof that it does not? I.e., I wanted to know how to construct a set of measure zero whose image under $f$ does not have measure zero.

-REPLY [3 votes]: The function sends each set of measure zero to a set of measure zero.
-Let $A\subset\mathbb R$ be a null set (i.e. $A$ has measure 0). Then $f(A)=f(A\cap \{0\})\cup f(A\cap(\mathbb{R}\setminus \{0\}))$. The first set in the union has at most one point, so we need only worry about the second. The set $A\cap(\mathbb{R}\setminus \{0\})$ can be expressed as a countable union of sets of the form $A\cap [a,b]$ with $0\lt a$ or $b\lt 0$, and therefore, since the image of a union is the union of the images, $f(A\cap(\mathbb{R}\setminus \{0\}))$ can be expressed as a countable union of sets of the form $f(A\cap [a,b])$ with $0\lt a$ or $b\lt 0$. For each such $a$ and $b$, $f$ is continuously differentiable in a neighborhood of $[a,b]$, and hence the restriction of $f$ to $[a,b]$ is absolutely continuous (e.g. by the fundamental theorem of calculus for $C^1$ functions). Therefore the image of the null set $A\cap[a,b]$ under $f$ is null. Countable unions of null sets are null, so this shows that $f(A)$ is null.<|endoftext|>
-TITLE: Packing disjoint family of discs with radii $\tfrac{1}{2}, \tfrac{1}{3}, \tfrac{1}{4},\ldots$ inside the unit disc
-QUESTION [13 upvotes]: Does there exist a family of discs $\lbrace D_{n}\rbrace_{n=1}^{\infty}$ in the Euclidean plane such that

-the radius of $D_{n}$ is $\frac{1}{n+1}$,
-each $D_{n}$ is contained in the unit disc, and
-$D_{n}\cap D_{m} = \emptyset$ for each $n\neq m$ ?

-(I'm not sure what tags are appropriate for this kind of question, so if you have any suggestions, you're welcome to inform me about it via comments)

-REPLY [5 votes]: The following is a rigorous construction of the desired packing.
-Consider the following picture:
-
-In this picture, discs with curvatures 2,2,3,3,6,6,6,6,11,11,11,11 are packed inside a unit disc. One can verify the correctness of this picture by solving quadratic equations. I want to cut circles with curvatures 2,3,4,5,6,... from these: use the already obtained circles with curvatures 2 and 3. Use the second circle with curvature 2 for cutting 4,6,8,10,12,... (all even curvatures, starting with 4), by repeating the same procedure scaled into the circle with curvature 2. Use the second 3 for cutting 5 and 9. Use 6 for cutting 7. Use 11 for 11.
-Now we have circles with curvatures 6, 6 and 6. Use the same procedure to obtain circles with curvatures 6*2, 6*3, 6*4, ...
= 12, 18, 24, ... from them, each repeated 3 times. Use 12, 12 and 12 for 13, 15, 17; 18 for 19, 21, 23; 24 for 25, 27, 29, and so on.
-One can check that the words "use the same procedure" are justified: if we use the scheme described above to cut circles one by one (2, then 3, then 4, then 5, then 6, then 7, in this order), immediately repeating the steps inside the smaller circles described, we will never use the result of a step before the step itself.<|endoftext|>
-TITLE: A good lower bound on the maximum curvature in a loop
-QUESTION [9 upvotes]: Suppose $\alpha: \mathbb{R} \rightarrow \mathbb{R}^3$ is a $C^\infty$ curve, parameterized by arc length ($\left\|\alpha'(t)\right\| = 1$), and with $\alpha(0) = \alpha(\ell)$. Show that there exists a $t_0 \in [0,\ell]$ such that
-$$\left\| \alpha''(t_0)\right\| \geq \frac{2\pi}{\ell}.$$
-
-If we had that $\alpha'(0) = \alpha'(\ell)$, then the result would follow from Fenchel's theorem, but we don't. Actually there is a nice extension to Fenchel's theorem described in Toponogov's book "Differential Geometry of Curves and Surfaces: A Concise Guide", under the name Fenchel-Reshetnyak, which gives lower bounds on the integral curvature of an open curve. This can be adapted to our situation to show that $\left\| \alpha''(t_0)\right\| \geq \frac{\pi}{\ell}$, but that bound is only half as good as it presumably could be. It makes sense that the integral curvature bound will be only half as good if we don't constrain $\alpha'(0)$ w.r.t. $\alpha'(\ell)$ (in particular, one may be the negative of the other), but nonetheless it seems that the original bound on the maximum curvature in the loop should still stand. How can we recover it?

-REPLY [3 votes]: This smells like a problem in the Calculus of Variations and can be solved as such. We therefore make a slight reformulation: among all $C^3$ curves of length $l$ originating and terminating at the same point, find the one(s) for which the maximum curvature is smallest.
-Let's begin by developing some intuition. In general, a minimax problem like this has a solution where the objective (the curvature in this case) is as constant as possible. A circle is an obvious candidate for a solution and indeed a circle of length $l$ has constant radius $l / (2 \pi)$ and constant curvature of $2 \pi / l$.
-(As a tiny simplification, by choosing our linear units of measurement let's henceforth assume $l=1$.)
-The technique of the Calculus of Variations is to assume we have a solution and perturb it a little, showing that any perturbation, no matter how small, cannot decrease the value of the objective. To motivate this in the present case, though, I find it more appealing to consider how one might go about reducing the maximum curvature of any unit-length loop. A point of maximum curvature is part of a relatively sharp "bump" on the curve. If we push that bump inwards a little, we should be able to decrease its sharpness and at the same time we slightly decrease the total length of the curve. To make up for the latter, uniformly dilate the entire curve relative to its point of origin until the dilated curve again has unit length. This uniformly decreases all curvatures along the loop. In this fashion the maximum curvature has strictly decreased. The only way this operation can fail is when there is no "bump": the curvature everywhere is constant.
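-(A numerical aside of my own, not part of the argument:) the claimed bound is easy to sanity-check on examples. The sketch below samples the curvature of an ellipse rescaled to unit length and compares the maximum with $2\pi$; numpy and the standard ellipse curvature formula are the only ingredients.
-
-import numpy as np
-
-# Hedged check: for an ellipse with semi-axes a, b rescaled to perimeter 1,
-# the maximum curvature should be >= 2*pi, with equality only for a circle.
-t = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
-a, b = 2.0, 1.0
-speed2 = (a * np.sin(t)) ** 2 + (b * np.cos(t)) ** 2
-L = np.sqrt(speed2).sum() * (t[1] - t[0])     # perimeter before rescaling
-kappa = a * b / speed2 ** 1.5                 # curvature of the ellipse
-print(L * kappa.max(), 2 * np.pi)             # rescaling by 1/L scales curvature by L
-# e.g. a = b = 1.0 gives exactly 2*pi; any other ellipse gives more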
-To make this go through rigorously, use an adapted orthonormal frame for the curve: $T(t) = \alpha'(t)$ is the tangent, $N(t)$ is the inward-pointing unit normal with $T'(t) = \kappa(t)N(t)$, and $B(t)$ is the unit binormal, with $N'(t) = -\kappa(t)T(t) + \tau(t)B(t)$ and $B'(t) = -\tau(t)N(t)$. ($\tau$ is the torsion. These Serret-Frenet formulae generalize readily to higher dimensions; in two dimensions, just forget about $B$.) Notice that $\kappa \ge 0$. Let $\epsilon > 0$ be small and let $\delta(t)$ be a smooth non-negative function in an open neighborhood of $[0,1]$, vanishing at $0$ and $1$ (in order to keep the endpoints of the new curve fixed). Set the "pushed" curve to be
-$$\psi(t) = \alpha(t) + \epsilon \delta(t) N(t).$$
-Compute the curvature of $\psi$ to first order in $\epsilon$ using the Serret-Frenet formulae. I find that its square becomes
-$$\kappa^2 + 2 \kappa \left( \delta'' - \delta \tau^2 \right) \epsilon + O(\epsilon^2).$$
-Suppose the curvature is not constant. Then there is a closed interval of maximum curvatures (perhaps reducing to a point) and within some neighborhood of that interval all other curvatures are strictly less than the maximum. We can make $\delta$ zero outside this larger neighborhood, let it increase slowly enough (and choose $\epsilon$ sufficiently small) so that $\delta''$ keeps the new curvature less than the maximum within the outer neighborhood, and make $\delta''$ strictly negative at all points within the interval of maximum curvature. This procedure strictly decreases the maximum curvature within the outer interval. You can also check, again to second order in $\epsilon$, that the length of $\psi'$ decreases by $\epsilon \delta \kappa \ge 0$, so we will be able to apply our dilation trick in order to keep the total loop length constant after this perturbation.
-A little more formally (and using less work, actually), the preceding equation shows that when $\kappa$ is constant and $\tau = 0$, we would need $\delta'' \lt \delta \tau^2 = 0$ everywhere in order to decrease the curvature, which implies $\delta$ is identically zero in the unit interval. Otherwise (namely, if $\kappa$ is not constant or $\tau \neq 0$), it will be possible to decrease the curvature subject to maintaining the unit length of the curve. Curves of constant curvature and zero torsion are circular arcs, and the only circular arc that returns to its starting point is a full circle. Therefore the maximum curvature in any sufficiently smooth unit-length loop can be no less than $2 \pi$ and is strictly greater than that if the loop is not a circle, QED.
-I believe the same technique works in higher dimensions, too, although I have not done the calculations. (There are additional vectors in the adapted frame, and therefore additional torsion terms, but this doesn't seem to present any obstacle to carrying out the same program.)<|endoftext|>
-TITLE: Formal power series coefficient multiplication
-QUESTION [7 upvotes]: Given that I have two formal power series:
-$$ A(x) = \sum_{k \ge 0} a_k x^k $$
-$$ B(x) = \sum_{k \ge 0} b_k x^k $$
-The Cauchy Product gives a series
-$$ C(x) = \sum_{k \ge 0} c_k x^k $$
-$$ c_k = \sum_{n=0}^k a_n b_{k-n} $$
-which comes from taking the product of the two series, $C(x)=A(x)B(x)$. What then, in terms of $A(x)$ and $B(x)$, is this series?
-$$Y(x) = \sum_{k \ge 0} a_k b_k x^k $$

-REPLY [2 votes]: For more on Hadamard products or the Hadamard multiplication theorem, look for papers by Louis R. Bragg in the American Mathematical Monthly, Jan.
1999, pp 36-42 or SIAM J. Math. Anal., Vol. 17, 1986, pp 220-230 for starters.<|endoftext|> -TITLE: Uncountable ordinals without power set axiom -QUESTION [13 upvotes]: Assume $M$ is a set, in which all axioms of $ZF - P + (V=L)$ hold. Does $M$ then believe that there exists an uncountable ordinal? I mean, why should the class of all countable ordinal numbers be a set? - -REPLY [12 votes]: No. One cannot prove that $\omega_1$ exists in ZF - P, with or without V = L. -The set $HC$ of hereditarily countable sets always satisfies ZF - P. (This is straightforward to check axiom per axiom.) Of course, $\omega_1 \notin HC$ but moreover every set in $HC$ is countable or finite as witnessed by a function in $HC$. Therefore, $HC$ knows that every set in $HC$ is countable and hence $HC$ is a model of ZF - P + "every set is at most countable." -Throwing V = L into the mix doesn't help. Indeed, $HC^L = L_{\omega_1^L}$ (see my answer to your recent MO question for a proof) is a model of ZF - P + V = L + "every set is at most countable."<|endoftext|> -TITLE: Complexity - why is RL=NL when omitting the demand for polynomial run-time? -QUESTION [5 upvotes]: The complexity class RL is described at the complexity zoo as: Has the same relation to L as RP does to P. The randomized machine must halt with probability 1 on any input. It must also run in polynomial time (since otherwise we would just get NL). -The question is - how can we get NL when omitting the demand for polynomial time? I have a solution of my own but it seems strange to me. -My solution: It suffices to solve ST-CON. We can use the same NL algorithm for ST-CON (guessing a path) with one major difference - we count the number of steps we make, and if the count exceeds the number of vertices in the graph, we restart the computation, without remembering ANYTHING. -This means we can play this game indefinitely. If the graph is not ST-connected, then we'll never halt, but if it's ST-connected we'll halt with probability 1 (this is the same as saying that a geometric random variable obtains a finite value with probability 1). However, since we do not halt for NO-instances, this solution "feels wrong" to me. -Is there another solution? And is my solution correct? - -REPLY [3 votes]: Your solution is almost correct. -In the definition of RL, we require that the algorithm halt with probability $1$ on any input. As you note, your algorithm runs forever on "No" instances. To fix this, we'll take advantage of the fact that we're allowed a small chance of error. In particular, we're allowed to sometimes reject "Yes" instances, as long as we never accept a "No" instance. -The basic idea is to keep a counter of how many iterations we've used so far. If a path exists, we should find it within $2N^N$ iterations (with probability at least $3/4$). So if we've gone that long without finding a valid path, we'll reject the graph. Note that we'll never accept a bad graph this way, and we'll rarely reject a good graph. -We can't naively count to $N^N$, however, since this would require $N\lg N$ space. Instead, we'll probabilistically "count" that high. -After each iteration, we'll flip $N\lg N + 2$ coins. If every coin lands heads, then we stop (and reject the graph). If at least one coin lands tails, then we begin another iteration. This can be done in log-space, since we only need to count how many flips have occurred.
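-A minimal sketch of this procedure in Python (an illustration, assuming the graph is given as adjacency lists; a real log-space machine would of course not store Python objects, and the expected running time is astronomical, mirroring the analysis rather than practice):
-import math
-import random
-
-def randomized_st_con(adj, s, t):
-    # Guess a path of at most n steps starting from s; restart on failure.
-    # After each failed attempt, flip about n*lg(n) + 2 coins and reject if
-    # all land heads, so the machine halts with probability 1 on "No"
-    # instances while only rarely rejecting a "Yes" instance.
-    n = len(adj)
-    coins = max(1, math.ceil(n * math.log2(n))) + 2
-    while True:
-        v = s
-        for _ in range(n):
-            if v == t:
-                return True          # found a path: accept
-            if not adj[v]:
-                break                # dead end: restart the walk
-            v = random.choice(adj[v])
-        if all(random.random() < 0.5 for _ in range(coins)):
-            return False             # the probabilistic "counter" ran out: reject
-
-For instance, randomized_st_con([[1], [2], []], 0, 2) accepts with high probability, while randomized_st_con([[1], [], []], 0, 2) always halts and (correctly) rejects.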
-The probability of getting all heads is $1/2^{N\lg N + 2}=1/(4N^N)$, so by a union bound the chance that this happens within the first $2N^N$ iterations is at most $1/2$. -Taking a union bound over both ways the algorithm could fail, we see that everything works with probability at least $1/4$, which can be amplified by repetition. -Finally, note that this halts with probability $1$, since we'll flip all heads eventually.<|endoftext|> -TITLE: Finding $\lim\limits_{n \to \infty} \sum\limits_{k=0}^n { n \choose k}^{-1}$ -QUESTION [11 upvotes]: We know that $$ 2^n= (1+1)^n = \sum_{k=0}^n {n \choose k}$$ I was asked to solve this limit, $$\lim_{n \to \infty} \ \sum_{k=0}^n {n \choose k}^{-1}=? \quad \text{for} \ n \geq 1$$ - -REPLY [13 votes]: This does not exactly answer the question (which was answered very nicely by Qiaochu), but I am adding this as a curiosity. -We can write $\displaystyle \sum_{k=0}^{n} {n \choose k}^{-1}$ as an integral (which can be generalized to other expressions involving the reciprocals of the binomial coefficients). -We have the following beta integral identity -$${n \choose k}^{-1} = (n+1)\int_{0}^{1} t^{n-k}(1-t)^k \ dt$$ -(For a nice application of Beta integrals, see this answer by Robin Chapman: Formula for the harmonic series due to Gregorio Fontana.) -We have -$$\sum_{k=0}^{n} {n \choose k}^{-1} = (n+1) \sum_{k=0}^{n} \int_{0}^{1} t^{n-k}(1-t)^k \ dt$$ -$$ = (n+1)\int_{0}^{1} \sum_{k=0}^{n} t^{n-k}(1-t)^k \ dt$$ -$$ = (n+1)\int_{0}^{1} \frac{t^{n+1} - (1-t)^{n+1}}{2t-1} \ dt$$ -We should be able to use the analytical tools available to investigate the properties of this integral. -For instance, at least two different proofs that the limit is $2$ can be found here: Different proofs of $\lim\limits_{n \rightarrow \infty} n \int_0^1 \frac{x^n - (1-x)^n}{2x-1} \mathrm dx= 2$ -For an example of a different use of this integral formula: -Setting $\displaystyle 2t-1 = y$ we get -$$\sum_{k=0}^{n} {n \choose k}^{-1} = \frac{n+1}{2^{n+2}} \int_{-1}^{1} \frac{(1+y)^{n+1} - (1-y)^{n+1}}{y} \ dy$$ -Expanding out the right side gives us this nice looking formula -$$\sum_{k=0}^{n} {n \choose k}^{-1} = \frac{n+1}{2^n}\left(\frac{{n+1 \choose 1}}{1} + \frac{{n+1 \choose 3}}{3} + \frac{{n+1 \choose 5}}{5} + \dots \right)$$<|endoftext|> -TITLE: How fundamental is the fundamental theorem of algebra? -QUESTION [25 upvotes]: Despite its name, it's often claimed that the fundamental theorem of algebra (which shows that the complex numbers are algebraically closed - this is not to be confused with the claim that a polynomial of degree $n$ has at most $n$ roots) is not considered fundamental by algebraists, as it's not needed for the development of modern algebra. My question is - what are the major uses of the theorem, and to what extent can they justify the claim that the theorem is fundamental for something? -An example I think of is the Jordan canonical form for matrices, but I don't think it suffices. - -REPLY [13 votes]: It's easy to be jaded about FTA after mathematical history has run for another two centuries. Sure, it is not the number one most important result any more, or the center of any research program (though understanding the algebraic closure of Q could be considered as half of number theory). But consider the situation around 1800. In addition to the solution of algebraic equations one had new methods of constructing numbers, using power series, integrals and other limits.
Algebra and number theory dealt with the first situation, to a limited extent, and analysis showed that the second type of construction could be iterated but still stay within the same realm of numbers. There was still the possibility that solving equations with $\pi$ and $e$ as coefficients could require an entirely new type of super-transcendental analysis. The Fundamental Theorem of Algebra is self-defeating in this sense: it shows that nothing more was needed than complex numbers. But this is not clear in a world where you don't know that FTA is true. -To get an idea what algebraic geometry looked like without complex numbers, look up Newton's classification of degree 3 algebraic curves in the plane, $P_3(x,y)=0$, using real coordinates. The reason this work is obscure today is that there are many dozens of cases compared to the complex projective version. As in Lie groups and topology, looking from the universal covering (C) downward, modulo some Galois-theoretic details (R), is usually easier than working from the bottom up. -Suppose you want to evaluate the integral, from $- \infty$ to $+\infty$, of a rational function (one with integer coefficients would illustrate the point). The answer will involve $\pi$ and the usual method for finding it will use the specific location of the roots in the complex plane, so is more specific to complex numbers than the existence of roots in an algebraic closure. There are some methods that stay entirely within the real numbers, but they are nonstandard because they are more complicated, and harder to understand and adapt to other problems. -Algebraic geometry in general has a transcendental part -- periods, Hodge theory, uniformization, etc -- that, in the present state of knowledge, cannot be fully substituted by algebraic methods over fields of characteristic 0 (or p). Sometimes the Lefschetz principle or reduction to positive characteristic can be used, sometimes not, or the theory is unknown.<|endoftext|> -TITLE: The partial fraction expansion of $\frac{1}{x^n - 1}$. -QUESTION [7 upvotes]: If $n$ is an integer, is there a nice way to write the partial fraction expansion of $\frac{1}{x^n - 1}$? I figure that if $\zeta$ is a primitive $n$-th root of unity, then for some coefficients $a_0, a_1, \ldots, a_{n-1}$ we may write -$$ -\frac1{x^n - 1} = \frac{a_0}{x - 1} + \frac{a_1}{x - \zeta} + \frac{a_2}{x - \zeta^2} + \ldots + \frac{a_{n-1}}{x - \zeta^{n-1}}. -$$ -Then for $0 \leq i \leq n -1$, -$$ -a_i = \lim_{x \to \zeta^i} \frac{x - \zeta^i}{x^n - 1} = \frac1{(\zeta^{i} - 1) \cdots (\zeta^i - \zeta^{i-1}) (\zeta^i - \zeta^{i + 1}) \cdots (\zeta^i - \zeta^{n-1})}. -$$ -Is there a simpler expression for this and if so, how could I see it easily? - -REPLY [12 votes]: Use l'Hopital's rule. $a_i = \lim_{x \to \zeta^i} \frac{1}{nx^{n-1}} = \frac{\zeta^i}{n}$. Note that the identity I am claiming is true is equivalent to the identity -$$\sum_{i=0}^{n-1} (\zeta^i)^k = \begin{cases} n \text{ if } n | k \\ 0 \text{ otherwise} \end{cases}$$ -by taking the generating function of both sides.
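-A quick numerical sanity check of the claim $a_i = \zeta^i/n$ (an added illustration using NumPy; the test point is arbitrary):
-import numpy as np
-
-n = 5
-zeta = np.exp(2j * np.pi * np.arange(n) / n)   # the n-th roots of unity
-residues = zeta / n                            # the claimed coefficients zeta^i / n
-
-x = 1.7 - 0.3j                                 # arbitrary test point away from the poles
-lhs = 1 / (x**n - 1)
-rhs = np.sum(residues / (x - zeta))
-print(abs(lhs - rhs))                          # ~ 1e-16: the two expansions agree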
More generally, the coefficient of $\frac{1}{x - a}$ in the partial fraction decomposition of $\frac{P(x)}{Q(x)}$ is (as long as $a$ is a simple root of $Q$) -$$\lim_{x \to a} \frac{(x - a) P(x)}{Q(x)} = \frac{P(a)}{Q'(a)}.$$ -This formula is tremendously useful (for example I recently used it on this math.SE question) but does not seem to be widely known.<|endoftext|> -TITLE: Fitting an infinite collection of balls in an infinite dimensional unit ball -QUESTION [7 upvotes]: Given an infinite dimensional normed linear space, how would one show that it is possible to fit an infinite collection of non-overlapping balls of radius $\frac{1}{4}$ in the unit ball? -I guess one can immediately reduce the problem to a normed linear space of countably infinite dimensions. The solution seems clear if the concept of orthogonality exists, but not every normed linear space has an inner product so it's not possible to apply something like Gram-Schmidt to produce an orthogonal basis. Is there any way around this, or is there another approach that can be used? - -REPLY [10 votes]: Here is a standard lemma you could use. If $M$ is a closed proper subspace of a normed linear space $E$, then for all $\epsilon\gt0$ there is an $x\in E$ of norm 1 whose distance to $M$ is greater than $1-\epsilon$. (E.g., here's a proof in Lemma 3-6.10 of Tsoy-Wo Ma's Classical analysis on normed spaces.) -Here's how you could use it. Let $B$ denote the unit ball of an infinite dimensional normed space $X$. Let $x_1\in X$ have norm 1, and let $M_1$ be the span of $\{x_1\}$. By the lemma there is an $x_2\in X$ of norm 1 whose distance to $M_1$ is greater than $\frac{2}{3}$. Let $M_2$ be the span of $\{x_1,x_2\}$, and let $x_3\in X$ have norm 1 and distance greater than $\frac{2}{3}$ to $M_2$. Repeat countably infinitely many times to obtain a sequence $x_1,x_2,\ldots$ of elements of $X$ of norm 1 with pairwise distances greater than $\frac{2}{3}$. Note that the lemma will always apply, because each $M_k=\operatorname{span}\{x_1,x_2,\ldots,x_k\}$ is finite dimensional, hence closed and proper. Then the balls of radius $\frac{1}{4}$ centered at the points $\frac{3}{4}x_1,\frac{3}{4}x_2,\ldots$ are disjoint and contained in $B$.<|endoftext|> -TITLE: Is there any relation about rational homology of X and X/G -QUESTION [6 upvotes]: If we know the rational homology of $X$ is $0,$ can we get some information about the rational homology of $X/G,$ where $G$ is a finite group? Thank you very much for the answers! - -REPLY [7 votes]: When $G$ is finite, the rational cohomology of $X/G$ is the subspace of invariants $H^*(X;\mathbb{Q})^G$. This is proven in Grothendieck's Tohoku paper (Theorem 5.3.1 and the Corollary to Proposition 5.3.2). -So if the rational cohomology of $X$ is trivial, the same is true for $X/G$. And rationally the cohomology and homology are isomorphic. -For paracompact Hausdorff spaces, these cohomology groups can be taken to be the Čech cohomology groups. Note that if $X$ is homotopy equivalent to a CW complex, then Čech cohomology agrees with singular cohomology. You might also want to look at Oscar Randall-Williams's comments here: https://mathoverflow.net/questions/18898/grothendiecks-tohoku-paper-and-combinatorial-topology/30015#30015.<|endoftext|> -TITLE: Harmonic function composed with conformal map is harmonic (in $\mathbb{R}^n$) -QUESTION [9 upvotes]: Here's the setup: Let $U,V$ open $\subset \mathbb{R}^n$, and let $u:V\rightarrow \mathbb{R}$ be harmonic, and $v:U\rightarrow V$ be conformal, i.e.
$v$ is $C^1$ and the Jacobian $J_v(x)$ is a scalar multiple of an orthogonal transformation for all $x\in U$. -I'm trying to prove $u\circ v$ is harmonic. [I've seen this stated as a fact in a few places without reference, namely here: http://en.wikipedia.org/wiki/Conformal_map#Uses , but maybe my hypotheses are slightly different and this is not true at all] -I've seen a proof that if $u$ is $C^2$ and $T$ is an orthogonal transformation then -$\Delta (u \circ T) = \Delta(u) \circ T$. -So I'm thinking that to show $u\circ v$ is harmonic, we can use the fact that $v$ acts locally as its Jacobian, which is an orthogonal transformation, and move the Laplacian onto $u$ and conclude $u\circ v$ is harmonic. -However, I'm having trouble making this idea precise. After glancing at my copy of baby Rudin, my hunch is to use the inverse function theorem or constant rank theorem, but I'm unsure how to apply those. Any suggestions? - -REPLY [16 votes]: Though this is an old question, it seems worthwhile to set this matter straight: - -The proof by @alext87 is flawed: it does not account for the dependency of $x_\epsilon$ on $x$. -The statement is false in dimensions $n>2$: the composition of a harmonic function with a Möbius transformation is not harmonic in general. Example in 3 dimensions: let $u(x)=x_1$, which is harmonic. Let $v(x)=x/|x|^2$ be the inversion in the unit sphere, which is conformal. Now $(u\circ v)(x)=x_1/|x|^2$, and computation yields $\Delta(u\circ v)=-2x_1/|x|^4$. -The proof for $n=2$ can be found in many places, e.g., in Composition of a harmonic function with a holomorphic function -One can retain some of the connection between conformality and harmonicity in higher dimension by replacing the Laplacian $\Delta u=\mathrm{div}\,\nabla u$ with the $n$-Laplacian $\Delta_n u=\mathrm{div}\,(|\nabla u|^{n-2}\nabla u)$. The composition of an $n$-harmonic function with a Möbius transformation is $n$-harmonic.<|endoftext|> -TITLE: Distributions of point charges -QUESTION [12 upvotes]: Problem -$N$ point charges are distributed in the unit ball in $\mathbb{R}^k$, $k=2,3$. Given locations of the particles $x_1,\ldots,x_N$ the potential energy is -$E=\sum_{j=1}^{N-1}\sum_{k=j+1}^N |x_j-x_k|^{-1}$ -where $|x_j-x_k|$ is Euclidean distance between $x_j$ and $x_k$. I'm interested in both the minimal value of $E$ over all possible locations of the particles in the unit ball and what this configuration looks like. -On the Unit Interval -For $k=1$ the $N$ charges are distributed on the interval $[-1,1]$ according to the roots of the $(N+1)$th Chebyshev polynomial. See: http://en.wikipedia.org/wiki/Chebyshev_polynomials#Roots_and_extrema - -REPLY [4 votes]: The canonical thing to do for a question like this is to look at Neal Sloane's home page. Sure enough, there is a table giving some good arrangements. -http://neilsloane.com/electrons/index.html -This was indeed one of the links on the page in wok's answer, but it may be the most complete resource.<|endoftext|> -TITLE: Can we turn the functor "category ring" into a 2-functor in a natural way? -QUESTION [9 upvotes]: Let $C$ be a small pre-additive category. Let $R(C)$ denote its category ring, that is, -$$ -R(C)=\bigoplus_{a,b\in \mathrm{Ob}(C)} C(a,b) -$$ -as Abelian group, where the direct sum runs over all objects $a$, $b$ of $C$. The multiplication in $R(C)$ is given by composition of composable morphisms and 0 for uncomposable morphisms (extended by bilinearity).
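-To make the definition concrete, here is a standard example (added as an illustration): if $C$ is the pre-additive category with two objects $1, 2$, hom-groups $C(1,1)=C(2,2)=C(1,2)=\mathbb{Z}$ and $C(2,1)=0$ (the $\mathbb{Z}$-linearization of the poset $1 \le 2$), then
-$$R(C) \cong \begin{pmatrix} \mathbb{Z} & \mathbb{Z} \\ 0 & \mathbb{Z} \end{pmatrix},$$
-the ring of triangular $2\times 2$ integer matrices (up to a transpose, depending on one's composition convention), with the two identity morphisms becoming the diagonal idempotents.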
-This construction is functorial: An additive functor $C\to D$ between small pre-additive categories induces a ring homomorphism $R(C)\to R(D)$ between the corresponding category rings in a canonical way. -Hence we have a functor $R$ from the category of small pre-additive categories (with additive functors as morphisms) to the category of rings (with ring homomorphisms as morphisms). -But the category of small pre-additive categories has a 2-categorical structure given by natural transformations. -Hence my question: Does this 2-categorical structure have a counterpart in the category of rings? More precisely, is there a natural notion of 2-morphisms between ring homomorphisms turning $R$ into a 2-functor? - -REPLY [3 votes]: This construction is not even a functor! -The problem is that two morphisms which are not composable in $C$ may become composable in $D$. The simplest example is to take $C$ to be the discrete category on two objects $a, b$, $D$ to be the discrete category on an object $d$, and $F : C \to D$ the unique functor between them (or rather the free preadditive category on these). Then $R(C)$ has two generators $\text{id}_a$ and $\text{id}_b$ which are not composable and hence whose product is $0$. But their image in $D$ is $\text{id}_d$ which squares to itself. So -$$R(F)(\text{id}_a \times \text{id}_b) = R(F)(0) = 0 \neq R(F)(\text{id}_a) \times R(F)(\text{id}_b) = \text{id}_d.$$ -So $R(F)$ is not a ring homomorphism. -This construction is not a functor, but it can be understood in terms of a functor as follows. I'll restrict my attention to the case that $C$ has finitely many objects. Given such a preadditive category you can talk about the category obtained by formally adjoining finite direct sums (this is a functor, even a 2-functor), and in this new category I claim that the category ring is just $\text{End}(\oplus_{c \in C} c)$. The significance of this ring is that it is Morita equivalent to $C$, meaning that right modules over it are naturally equivalent to presheaves on $C$ valued in abelian groups.<|endoftext|> -TITLE: Why is an integral of a complex function defined as a line integral? -QUESTION [12 upvotes]: In real analysis, we can define a line integral, but we also define (earlier) the regular definite integral. -Why is it that in complex analysis we are interested only in a line integral? - -REPLY [16 votes]: Let's suppose, imitating the real case, that we want to integrate the expression -$f(z) dz$, where $f$ is a function of the complex variable $z$. What would this mean? -Well, imitating the real case, we find that we have to form Riemann sums -$\sum_{i = 1}^{n-1} f(z_i) (z_{i+1} - z_i)$, and then let the absolute value of the differences $| z_{i+1} - z_i|$ tend to zero. -This sum is easily seen to be the line integral along the polygonal arc joining $z_1$ to -$z_n$ via $z_2, z_3, \ldots ,$ of the piecewise constant function whose value is $f(z_i)$ on -the arc joining $z_i$ to $z_{i+1}$. -If we can form the limit in any reasonable sense, we will get a line integral of $f(z) dz$ -along the curve which is the limit of these polygonal arcs as -$n \to \infty$. -In particular, the limit we get will depend not just on the end points $a$ and $b$, -but (at least a priori) on the particular path that arises as the limit of these polygonal arcs. -Seeing this, we now see that to define the integral $\int_a^b f(z) dz$, we should -choose a path joining $a$ to $b$, and choose the $z_i$ to lie along this path.
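-(A small numerical illustration, added here: approximating these Riemann sums for $f(z) = 1/z$ along two different paths from $1$ to $-1$ exhibits exactly the path dependence discussed next.)
-import numpy as np
-
-def riemann_line_integral(f, path, n=20000):
-    # sum of f(z_i) * (z_{i+1} - z_i) along a discretized path
-    t = np.linspace(0.0, 1.0, n + 1)
-    z = path(t)
-    return np.sum(f(z[:-1]) * np.diff(z))
-
-f = lambda z: 1 / z
-upper = lambda t: np.exp(1j * np.pi * t)     # from 1 to -1 through the upper half plane
-lower = lambda t: np.exp(-1j * np.pi * t)    # from 1 to -1 through the lower half plane
-
-print(riemann_line_integral(f, upper))       # approximately +i*pi
-print(riemann_line_integral(f, lower))       # approximately -i*pi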
-The resulting integral will depend on this choice of path a priori. -What we are seeing is that although passing from the real to the complex numbers involves -passing from a one-dimensional to a two-dimensional situation, taking an integral -of $f(z) dz$ still involves choosing a finite sequence of points and then passing to the -limit. In the real case, these points fill out an interval (in the limit), while in the complex case they will fill out a curve. -If you wanted to write an integral over an area in terms of complex variables, you would -need to do integrals like $\int f(z) dz\,d\overline{z}$; this now involves adding up Riemann sums in which the terms involve products $\Delta z\Delta\overline{z}$ of a small change in $z$ and an independent small change in $\overline{z}$. One computes that this -is the same as the integral $-2i\int f(z) dx\,dy$, which is a usual double integral in the plane. -In summary: If we try to generalize the one real variable case to the one complex variable case, -we find that we are forced to choose a path along which to take the integral, because -the complex numbers lie in a two-dimensional plane, while the idea of forming Riemann sums to compute the integral requires us to choose some arc along which the points $z_i$ lie. -Fundamentally, adding up small changes $\Delta z$ (weighted by values of the function $f$) -requires looking at changes in $z$ along some one-dimensional object. Thus $\int f(z) dz$ has to be a line integral. -Adding up products of independent small -changes $\Delta z$ and $ \Delta \overline{z}$ (weighted by values of the function $f$) requires moving in two independent directions, and so takes place over a two-dimensional -region. Thus $\int f(z) dz d \overline{z}$ has to be a double integral.<|endoftext|> -TITLE: Group theory proof of existence of a solution to $x^2\equiv -1\pmod p$ iff $p\equiv 1 \pmod 4$ -QUESTION [9 upvotes]: I've read through the elementary proof of why there exists a solution $x$ to $x^2\equiv -1\pmod p$ iff $p\equiv 1 \pmod 4$ for $p$ an odd prime. Is there a group theory generalization for this fact as well? - -REPLY [3 votes]: Probably the simplest way to state a corresponding group theory result is the generalization of Frobenius's theorem mentioned by Bill Dubuque. The generalization says that if $G$ is a finite group, then the number of solutions in $G$ to $x^n=1$ is a multiple of $\gcd(|G|,n)$. -In the case of the congruence $x^2\equiv -1\pmod{p}$, the solutions to this congruence are exactly the solutions to $x^4\equiv 1 \pmod{p}$ that are not solutions to $x^2\equiv 1\pmod{p}$. The order of the group here is $p-1$, so the number of solutions to the first is a multiple of $\gcd(4,p-1)$; the number of solutions of the second is a multiple of $2$, and we know the two: $1$ and $-1$. Since every solution to $x^2\equiv 1\pmod{p}$ is a solution to $x^4\equiv 1 \pmod{p}$, there are solutions to $x^2\equiv -1\pmod{p}$ if and only if $\gcd(4,p-1) = 4$.<|endoftext|> -TITLE: Spectrum of a convolution operator -QUESTION [6 upvotes]: Let $T$ be the operator from $L^2(\mathbb R^n)$ to $L^2(\mathbb R^n)$ that is given by $Tf := f * g$ where $g$ is in $L^2$. -How do I now find that the spectrum of $T$ is equal to the essential range of $\hat{g}$? How is the spectrum of $T$ related to the invertibility of the operator $G\hat{f} = \hat{f}\hat{g}$? -The hat denotes the Fourier transform.
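-A finite-dimensional analogue is easy to verify numerically (an added illustration using NumPy): on $\mathbb{Z}/N$, convolution by $g$ is a circulant matrix, the DFT diagonalizes it, and its eigenvalues are exactly the values of $\hat g$, the discrete counterpart of the essential range.
-import numpy as np
-
-rng = np.random.default_rng(0)
-N = 64
-g = rng.standard_normal(N)
-
-# Matrix of the circular convolution operator (T f)[i] = sum_j g[(i - j) mod N] f[j]
-T = np.array([[g[(i - j) % N] for j in range(N)] for i in range(N)])
-
-eigenvalues = np.sort_complex(np.linalg.eigvals(T))
-dft_values = np.sort_complex(np.fft.fft(g))
-print(np.allclose(eigenvalues, dft_values))   # True: spectrum of T = values of g-hat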
- -REPLY [7 votes]: The point is that the Fourier transform $\mathcal{F}$ -conjugates $T$ to the multiplication -operator $S:h\mapsto\hat g h$ (i.e. $\mathcal{F}(f\ast g)=\hat g\mathcal{F}(f)$). -Hence $T$ and $S$ have the same spectrum.<|endoftext|> -TITLE: Is a ball a polyhedron? -QUESTION [5 upvotes]: In the book Introduction to Linear Optimization by Dimitri Bertsimas, a polyhedron is defined as a set $ \lbrace x \in \mathbb{R}^n \mid Ax \geq b \rbrace $, where $A$ is an $m \times n$ matrix and $b$ is a vector in $\mathbb{R}^m$. What it means is that a polyhedron is the intersection of several halfspaces. -A ball can also be viewed as the intersection of infinitely many halfspaces. So I was wondering if a ball is also a polyhedron by that definition or by any other definition that you might use? -Thanks and regards! - -REPLY [2 votes]: Case $n=1$ -When $n=1$ the ball is a segment and it is indeed a polyhedron. -Case $n=2$ -Assume that the disc is a polyhedron. Think of the condition $A\mathbf{x}\ge\mathbf{b}$ as a system of linear inequalities, each of them defining a line and an associated halfplane. Since there are only finitely many lines, there must exist $(x_0,y_0)$ with $x_0^2 + y_0^2 = 1$ which does not lie on any of these lines. In particular, $(x_0,y_0)$ would satisfy all the inequalities with $>$ rather than $=$. This is a contradiction: by continuity, a whole neighborhood of $(x_0,y_0)$ would then satisfy all the inequalities, yet every neighborhood of this boundary point contains points outside the disc. -Case $n>2$ -Define $h\colon\mathbb{R}^2\to\mathbb{R}^n$ by $h(x,y)=(x,y,0,\dots,0)$. The pre-image by $h$ of the ball in $\mathbb{R}^n$ is the disc. And since the pre-image of a polyhedron by an affine transformation is a polyhedron, we conclude that the $n$-dimensional ball cannot be a polyhedron either.<|endoftext|> -TITLE: Why does symplectic geometry have many applications in mathematics -QUESTION [5 upvotes]: It is not quite intuitive, at least from its origin. Could anyone give me an intuitive explanation? Thank you! - -REPLY [6 votes]: I suppose a question to ask here is: what kinds of applications? -One of the areas where symplectic geometry is used is Hamiltonian dynamics, where the geometric framework allows one to generalize away from cotangent bundles, but this direction is directly related to the origins of the subject. -Another direction comes from algebraic geometry, where the fact that complex and Riemannian structures determine a symplectic one (see Aaron's answer) makes all complex projective (and affine) algebraic varieties symplectic. Viewing them as such allows one to use differential-geometric methods, while the symplectic form provides some control (in the theory of holomorphic curves, the symplectic form gives the crucial energy bounds without which things fall apart; this is why there is no Gromov-Witten theory of almost-complex manifolds). Add a bit of mirror symmetry to the mix and you have interaction with many other areas of math. -There is also the fact that the wedge product is anti-symmetric, which leads the space of connections on a principal bundle over a 2-d Riemannian manifold to be an (infinite dimensional) symplectic manifold (the gauge group action is Hamiltonian, and curvature is the moment map, leading to flat connections being the reduction!). This leads, for $G=SO(3)$ and the surface coming from a Heegaard splitting, to the Atiyah-Floer conjecture, and some related stuff leads to Heegaard-Floer theory. Thus connections to low-dimensional topology.
The "naive" connection here is the anti-symmetry underlying symplectic form and wedge product, but probably there is a deeper reason known to wise people...<|endoftext|> -TITLE: A question on FLT and Taniyama Shimura -QUESTION [11 upvotes]: Sometime back i watched the documentary of Andrew Wiles proving the Fermat's Last theorem. A truly inspiring video and i still watch it whenever i am in a depressed mood. There are certain things(infact many) which i couldn't follow and i would like it to be explained here. -The first is: - -The Taniyama-Shimura conjecture. In the video it's said that that an elliptic curve is a modular form in disguise. I would want someone to explain this statement. I have seen the definition of a Modular form in Wikipedia, but i can't correlate this with an elliptic curve. The definition of an elliptic curve is simply a cubic equation of the form $y^{2}=x^{3}+ax+b$. How can it be a modular form? - -Next, there was a mention of this Mathematician named "Gerhard Frey" who seems to considered this question of what could happen if there was a solution to the equation, $x^{n}+y^{n}=z^{n}$, and by considering this he constructed a curve which is not modular, contradicting Taniyama-Shimura. If he had constructed such a curve then what was the need for Prof. Ribet to actually prove the Epsilon conjecture. -Lastly, here i would like to know this answer: How many of you agree with Wiles, that possibly Fermat could have fooled himself by saying that he had a proof of this result? I certainly disagree with his statement. Well, the reason, is just instinct! - -REPLY [2 votes]: The starting point of the link between elliptic curves and modular forms is the following. -From a topological point of view, elliptic curves are just (2-dimensional) tori, i.e. products $S^1\times S^1$, where $S^1$ is a circle. -A torus has always an invariant never vanishing tangent field. Dually, one can find a non-vanishing invariant differential form $\omega$ on every elliptic curve $E$. It is a useful exercise to write $\omega$ in terms of the coordinates $x$ and $y$ when $E$ is given as a Weierstrass cubic. -If you have a modular parametrization $\pi:X_0(N)\rightarrow E$ you can pull-back the form $\omega$ to $X_0(N)$ and in terms of the coordinate $z$ in the complex upper halfplane $\pi^*(\omega)=f(z)dz$ for some holomorphic function $f(z)$. It is basically immediate that $f(z)$ is a modular form of weight $2$.<|endoftext|> -TITLE: Relationship between torsion modules and topology -QUESTION [5 upvotes]: I was reviewing my class notes and found the following: -"The name 'torsion' comes from topology and refers to spaces that are twisted, ex. Möbius band" -In our notes we used the following definition for torsion element and torsion module: -An element m of an R-module M is called a torsion element if $rm=0$ for some $r\in R$. -A torsion module is a module which consists solely of torsion elements -What is the relationship between torsion modules and twisted spaces? Was the definition of torsion module somehow motivated from topological considerations of twisted spaces? -I don't really see any obvious connection. I'm taking my first topology class this semester, so I apologize if this is something you learn about later in courses like algebraic topology, but I haven't been able to find any explanation of this. 
- -REPLY [8 votes]: When you compute the homology groups of "twisted" spaces (which are abelian groups), you (sometimes) find that they contain non-zero torsion elements; furthermore, the presence of these particular elements in the homology is due to the twisting (in that, when you compute the homology groups, you see that it is the twisting in the space that causes the calculation to give rise to torsion elements). -Since you haven't studied algebraic topology yet, I won't say more here. Hopefully, while necessarily vague, the above description gives you some feeling for the meaning of the remark in your notes. - -REPLY [7 votes]: The definition of torsion in modules is a generalization of the definition of torsion in $\mathbb{Z}$-modules, i.e. abelian groups. Torsion in abelian groups refers to elements of finite order, and this in turn relates to topology because to any topological space we can associate abelian groups called (integral) homology groups, and torsion in these groups is suggestive of a kind of "twistedness" in the space. The simplest example of this is in the first integral homology of closed surfaces; the group has torsion if and only if the surface is non-orientable, such as the Klein bottle.<|endoftext|> -TITLE: Are localized rings always flat as $R$-modules? -QUESTION [10 upvotes]: We know this is true for commutative rings, but if $S\subset R$ is a left and right Ore set, and $S^{-1}R$ its localization by this Ore set, is this always a flat $R$-module? - -REPLY [4 votes]: As I mentioned in a prior post here, there is a wealth of information on noncommutative localizations in Ranicki, A.(ed). Noncommutative localization in algebra and topology. ICMS 2002. In particular, there you will find an interesting paper on this very topic by Beachy: "On flatness and the Ore condition". Below is general reference information for flatness in the commutative case. -There is a very nice treatment of flatness in Bourbaki's "Commutative Algebra" - which begins with an excellent chapter on flat modules before turning to localizations in Chapter 2 (see Theorem 2.41. p. 68 for the result you seek). Also perhaps of interest is the following motivational remark from the introduction - -The study of the passage from a ring $\rm A$ to a local ring $\rm A_{\mathfrak p}$, or to a completion - $\rm \hat A$ brings to light a feature common to these two operations, the property of - flatness of the $\rm A$-modules $\rm A_{\mathfrak p}$ and $\rm \hat A$, which allows amongst other things - the use of tensor products of such $\rm A$-modules with arbitrary $\rm A$-modules somewhat - similar to that of tensor products of vector spaces, that is, without all the precautions - surrounding their use in the general case. The properties associated with this notion, which - are also applicable to modules over non-commutative rings, are the object of study in Chapter I. - -See also Atiyah and Macdonald, Corollary 3.6 and Proposition 3.10 pp. 40-41.<|endoftext|> -TITLE: Calculating Intersections of Lines and Algebraic Surfaces -QUESTION [5 upvotes]: For context I am developing a ray-tracer for a computer science class, and want to implement some more advanced shapes than just spheres. So while this is related to schoolwork, I'm not asking you to do my work for me; the work is implementing the programming, and it's the math I don't understand, so I'm just looking for help understanding how the math works. -I am trying to understand how to calculate the intersection point, and the normal vector from that point, of several algebraic surfaces.
I am at the very frustrating point of knowing what I need to do, and how it is theoretically done, but not really grasping how to actually do it. -I know that I need to take the equation for the line and substitute the x, y, and z variables in the surface equation for the equivalent portions of the line equation, but as soon as I sit down to do that, I immediately hit a mental brick wall. As for the normal calculations, I'm really lost; I'm not even sure there is a general-case way to calculate the normals. -So, I'd love some help on how to calculate the intersection and normal of some of these shapes, and any sort of general-case rules for these calculations would be fantastic. -Update -While real general-case solutions would be super awesome, it's ok to assume the shapes are in their standard orientation, not rotated or transformed at all - just positioned and (maybe) scaled. This makes the problem much simpler, I believe. If there are other limitations you can use to make the problem even simpler, that's likely fine. - -REPLY [12 votes]: Perhaps this more elementary description could help. -Let $e$ be the eye/camera, and $v$ a line-of-sight vector. -You want to solve simultaneously $e + t v$ with the surface you want to view, solving for $t$. -If you have two or more surfaces, don't try to intersect them with one another, -which can be algebraically complex, but rather let the ray tracing (effectively) do it for you. -Suppose you have a surface patch $S$ (perhaps a Bezier surface) parametrized by $a$ and $b$. -So now you want to solve simultaneously for $(t, a, b)$. If $S$ is a sphere or cylinder, -this amounts to quadratic equations. If $S$ is a cubic patch, it will reduce to solving -cubic equations. If $S$ is a torus, degree-4 equations. Once you have $(a,b)$, you can get the normal vector at that point from your parametric equations, as -J.M. describes.<|endoftext|> -TITLE: Applications of Probability Theory in pure mathematics -QUESTION [12 upvotes]: My (maybe wrong) impression is that while probability is widely used in science (for example, in statistical mechanics), it is rarely seen in pure mathematics. Which leads me to the question - -Are there some interesting applications of Probability Theory in pure mathematics, outside Probability Theory itself? - -REPLY [6 votes]: The Erdős–Kac theorem shows that the number of prime factors of a number $n$ is approximately normally distributed (indeed roughly Poisson), with mean and variance $\log \log n$.<|endoftext|> -TITLE: How could I calculate the rank of the elliptic curve $y^2 = x^3 - 432$? -QUESTION [13 upvotes]: The birational change of variables $(u,v) = (\frac{36+y}{6x},\frac{36-y}{6x})$ maps $u^3+v^3=1$ to $y^2 = x^3 - 432$ which has discriminant $-2^{12}\cdot 3^9$. -Using pari/gp we can compute the torsion subgroup: -? elltors(ellinit([0,0,0,0,-432])) -%1 = [3, [3], [[12, 36]]] - -This says the torsion subgroup has order 3, is $\mathbf{Z}/3\mathbf{Z}$ and is generated by $(12,36)$ (which corresponds to $1^3+0^3=1^3$). The reason it has order 3 is because this also includes the point at infinity $[1:-1:0]$ of $X^3+Y^3=Z^3$. - -Edit: By Nagell-Lutz one only needs to solve $y^2 = x^3 - 432$ in integers for $y=0$ and $y^2|2^{12}\cdot 3^9$ (which is a simple generate and test) to compute the elements of the torsion subgroup 'on paper'. - -The group of rational points for this curve is then (by Mordell's Theorem) of the form $\mathbf{Z}^r \times \mathbf{Z}/3\mathbf{Z}$ where $r$ is the rank of the curve. If we can show the rank is 0 then this would prove Fermat's Last Theorem for $n = 3$.
-How can it be shown directly that the rank of this curve is 0? - -REPLY [7 votes]: The approved answer has caused some risibility at mathoverflow, and I'll elaborate on Robin's -more reasonable comment (but I'm inclined to attribute the descent argument in this case to -Euler--at least he wrote it down). The version I give in an undergrad number theory class is this: First one develops the standard facts about $\mathbb{Z}[\omega]$ where $\omega^2+\omega+1=0$. (It has unique -factorization, $2$ is prime, the units are $1,-1,\omega,-\omega,\omega^2$ and $-\omega^2$, any element not $0$ or a unit -has absolute value $>1$, and each congruence class mod $2$ is represented by $0$ or a unit). Then one notes that it's enough to prove: -Theorem--There are no $a,b,c$ in $R=\mathbb{Z}[\omega]$ with $a+b+c=0$, $abc$ a non-zero cube and $a\equiv b\equiv 1 \pmod{2}$. -The proof of the theorem is a reductio. Let $H=\max(|a|,|b|,|c|)$ and choose a solution -$a,b,c$ with minimal $H$. ($H^2$ is an integer.) $a$, $b$ and $c$ are evidently pairwise prime. Since -their product is a non-zero cube, each is (unit)(cube). Since $a\equiv b\equiv 1 \pmod{2}$, $a=A^3$ and -$b=B^3$, and we may assume that $A\equiv B\equiv 1 \pmod{2}$. Since $abc$ is a cube, $c=C^3$ for some $C$ in $R$. Since -$2$ divides $c$, $2$ divides $C$ and $H$ is at least $8$. -Now let $S=A\omega+B\omega^2$, $T=A\omega^2+B\omega$, and $U=A+B$. Then $S+T+U=0$ while $STU$ is $A^3+B^3=-C^3$. Also $S\equiv T\equiv 1 \pmod{2}$, while $\max(|S|,|T|,|U|)$ is at most $2H^{1/3}$. This contradicts the minimality assumption.<|endoftext|> -TITLE: Solving separable differential equation -QUESTION [7 upvotes]: Seems straight-forward but I've been unable to get it right. Here are my steps: -$$y'(x) = \sqrt{-2y(x) + 28},\hspace{20 pt} y(-4)=-4$$ -$$\int {1 \over \sqrt{28-2y} }\hspace{2 pt}\text{d}y = \int \text{d}x$$ -$$-\sqrt{28-2y} = x + c$$ -$$(28-2y) = (x+c)^2$$ -$$y = -1/2x^2 - cx - c^2/2 + 14$$ -$$c \in \{-2, 10\}$$ -$$\Rightarrow y = -1/2x^2 - 10x - 36$$ -I've checked it over countless times but for the life of me I can't figure out why it won't work. I've tried plugging the result back into the original equation and it seems to me like it checks out if you take the negative of the square root. - -REPLY [2 votes]: So what we want here is a particular solution to our ODE given our condition: $y(-4) = -4$ -$$y'(x) = \sqrt{-2y(x) + 28},\hspace{20 pt} y(-4)=-4$$ -$$\Rightarrow \dfrac{dy}{dx} = \sqrt{-2y(x) + 28}$$ -$$\Rightarrow \int {1 \over \sqrt{28-2y} }\hspace{2 pt}\text{d}y = \int {1}~\text{d}x$$ -$u = 28-2y$ -$du = -2\,dy$ -$dy = -\dfrac{1}{2}\,du$ -$$\Rightarrow -\dfrac{1}{2}\int {1 \over \sqrt{u} }\hspace{2 pt}\text{d}u = \int {1}~\text{d}x$$ -$$\Rightarrow -\dfrac{1}{2}\int {u^{-\frac{1}{2}}} \hspace{2 pt}\text{d}u = \int {1}~\text{d}x$$ -$$\Rightarrow -\dfrac{1}{2}\cdot 2u^{\frac{1}{2}} = x + c$$ -$$\Rightarrow -\sqrt{28-2y} = x + c$$ -$$\Rightarrow \sqrt{28-2y} = -c - x,~~y(-4) = -4$$ -$$\Rightarrow \sqrt{28-2(-4)} = -c - (-4)$$ -$$\Rightarrow \sqrt{36} = -c + 4$$ -$$\Rightarrow 6 = -c + 4$$ -$$\Rightarrow c = -2$$ -$$\Rightarrow \left(\sqrt{28-2y}\right)^{2} = (-c-x)^2$$ -$$\Rightarrow 28-2y = (-(-2)-x)^2$$ -$$\Rightarrow 28-2y = (2-x)^{2}$$ -$$\Rightarrow 28-2y = 4-4x+x^{2}$$ -$$\Rightarrow 2y = 28-4+4x-x^{2}$$ -$$\Rightarrow y(x) = \dfrac{28-4+4x-x^{2}}{2}$$ -$$\Rightarrow y(x) = \dfrac{28}{2}-\dfrac{4}{2}+\dfrac{4x}{2}-\dfrac{x^{2}}{2}$$ -$$\Rightarrow y(x) = 14-2+2x-\dfrac{1}{2}x^{2}$$ -$$\Rightarrow y(x) = -\dfrac{1}{2}x^{2}+2x+12.$$ -Hence, -$~~~~~~~~~~~~~~~~~~~~~~~~y(x) = -\dfrac{1}{2}x^{2}+2x+12$ -is our particular solution to our original first-order separable ordinary differential equation.
$\blacksquare$ -I hope this helped out, and hopefully I did not make any mistakes to cause any type of confusion -here.<|endoftext|> -TITLE: Normal Families -QUESTION [7 upvotes]: Suppose $\mathcal{F}$ is a family of analytic functions on the unit disc. Suppose also that -$(\operatorname{Re} f(z))^2 \ne \operatorname{Im} f(z)$ for all $|z|<1$ and all $f \in \mathcal{F}$. -It follows from the Fundamental Normality Test that $\mathcal{F}$ is a normal family. -Is there a more elementary way of showing $\mathcal{F}$ is a normal family without invoking the Fundamental Normality Test? - -REPLY [5 votes]: Actually, you can avoid uniformisation, and just use the following simple normality criterion: -Theorem (Montel). If there is a nonempty open set $U$ such that all functions in $\mathcal{F}$ omit all points of $U$, then $\mathcal{F}$ is normal. -(By postcomposing with a Möbius transformation, we can assume that the family $\mathcal{F}$ is uniformly bounded, and the claim follows e.g. from Marty's theorem.) -Since each of your functions omits one of the two complementary domains of the mentioned parabola, your family is the union of two normal families, and hence itself normal.<|endoftext|> -TITLE: How to start with mathematics? -QUESTION [47 upvotes]: I fell in love with mathematics a bit too late, when I had already made decisions regarding my future, career-wise. Now I would like to learn math on my own, but I'm a bit confused as to where to start. My knowledge of mathematics is comparable to that of a 15-16 year old highschool freshman. I would like to know how would you (if you were in my position) start your learning adventure on your own. There are a lot of resources online, of that I'm sure, but I would like to follow a path. I'm pretty sure I can learn mathematics on my own and have been thinking about this decision for almost a year. -Thank you! - -REPLY [4 votes]: I wrote an essay based on my experiences learning math on my own over the past few years - you can read it here.<|endoftext|> -TITLE: Can two structures be embeddable in each other, but not isomorphic? -QUESTION [8 upvotes]: I was reading about isomorphisms and homomorphisms on general structures, and first came across the definition of an injective homomorphism, or an embedding. This made me curious: is it possible for two structures $A$ and $B$ to be embeddable in each other, yet no isomorphism exists between them? -After some looking around, I let the structures be $A=\mathbb{R}$ and $B=[-1,1]$ with $f\colon [-1,1]\to\mathbb{R}\colon r\mapsto r$ and $g\colon\mathbb{R}\to [-1,1]\colon r\mapsto \frac{2}{\pi}\arctan(r)$. If I refrain from defining any relations, functions, or distinguished elements in the universes of $A$ and $B$, then it is vacuously true that $f$ and $g$ are homomorphisms. Also, $[-1,1]$ and $\mathbb{R}$ would not be isomorphic since $[-1,1]$ has a maximum and minimum element. (Or would this require me to define $\lt$ on $[-1,1]$?). -Are there some other structures, even contrived ones, where such embeddings $f$ and $g$ exist, but $A$ and $B$ are still not isomorphic? - -REPLY [4 votes]: There are unenlightening examples if you don't ask for the image of the embedding to be large. -For example, let $A$ be a square and $B$ an annulus, considered as topological spaces. A small copy of each one can be placed inside the other but they are not isomorphic in the topological category.
-Bidirectional embedding is an equivalence relation, as is bidirectional dense (epimorphic) embedding, so at least linguistically the more natural problem in a given category is to ask what is the same about $A$ and $B$ if they are equivalent in this sense.<|endoftext|> -TITLE: Is there any formula for the series $1 + \frac12 + \frac13 + \cdots + \frac 1 n = ?$ -QUESTION [27 upvotes]: Is there any formula for this series? - -$$1 + \frac12 + \frac13 + \cdots + \frac 1 n .$$ - -REPLY [11 votes]: Here is a way to interpret the harmonic numbers combinatorially: -$$H_n=\dfrac {\genfrac{[}{]}{0pt}{}{n+1}{2}}{n!}$$ -where $\genfrac{[}{]}{0pt}{}{n+1}{2}$ is the absolute value of the Stirling number of the first kind, namely, the number of permutations of $\{1,2,\dots,n,n+1\}$ that have exactly $2$ cycles. -These satisfy the following recurrence: -$$\genfrac{[}{]}{0pt}{}{n}{k}=\genfrac{[}{]}{0pt}{}{n-1}{k-1}+(n-1)\genfrac{[}{]}{0pt}{}{n-1}{k}$$ which makes it algebraically obvious that they are related to the Harmonic numbers the way they are, though there is also a purely combinatorial proof. The virtue of this interpretation is that you can prove a whole host of crazy identities involving the Harmonic numbers by just translating the natural identities for the Stirling numbers. -The Stirling numbers of the first kind are also the coefficients of $x(x-1)(x-2)\cdots(x-(n-1))$, so their absolute values are the coefficients of $x(x+1)(x+2)\cdots (x+n-1)$, and so in particular - -The $n$th Harmonic number is the coefficient of $x^2$ in $\frac1{n!}x(x+1)(x+2)\dots(x+n)$. - -All of this can be found in Chapter 7 of Benjamin & Quinn's wonderful book Proofs That Really Count.<|endoftext|> -TITLE: Countability of disjoint intervals -QUESTION [9 upvotes]: According to this problem/solution set from an MIT class (http://ocw.mit.edu/courses/mathematics/18-100c-analysis-i-spring-2006/exams/exam1_sol.pdf), the assertion: -"Every collection of disjoint intervals in R is countable." -is True, because "every interval contains a rational number", and the rationals are countable. -It seems to me this should be False, with possible counterexample: -{ [x,x] | x is an element of R} -i.e. the set of all singleton intervals on R. Why isn't this a valid counterexample? - -REPLY [10 votes]: Your thinking is correct; the set of all singleton sets of R is certainly uncountable. -It seems that the question meant something like "Every collection of disjoint open intervals in R is countable." (In this case, the claim that each interval contains a rational number is valid.) -Maybe there was some convention in the course that "interval" meant open interval, or excluded singleton sets; perhaps it's simply a mistake. Either way, it's good that you noticed this detail! - -REPLY [2 votes]: Because "singleton interval" is usually not considered to be an interval.<|endoftext|> -TITLE: Formula for the harmonic series $H_n = \sum_{k=1}^n 1/k$ due to Gregorio Fontana. -QUESTION [23 upvotes]: My question was inspired by this stackexchange question. For the last 90 minutes I have been trying to prove this formula due to Gregorio Fontana: -$$H_n = \gamma + \log n + {1 \over 2n} - \sum_{k=2}^\infty { (k-1)! C_k \over n(n+1)\ldots(n+k-1)}, -\qquad \textrm{ for } n=1,2,3,\ldots,$$ -where $H_n = \sum\limits_{k=1}^n 1/k$ and the coefficients $C_k$ are the Gregory coefficients given by -$${ z \over \log(1-z)} = \sum_{k=0}^\infty C_k z^k \qquad \textrm{ for } |z|<1.$$ -It's a bit frustrating as it's something I recall proving as a student many years ago.
I have a vague recollection that I began with something like: -$$H_n = \int_0^1 {1-(1-x)^n \over x } \textrm{d}x,$$ -but my attempts to follow on from there have failed. Can you help? - -REPLY [15 votes]: Let's rewrite the formula, using $C_1=1/2$, as -$$S_n=\gamma+\log n-H_{n-1}$$ -where -$$S_n=\sum_{k=1}^\infty\frac{(k-1)!C_k}{n(n+1)\cdots(n+k-1)}.$$ -Use the formula -$$\frac{(k-1)!}{n(n+1)\cdots(n+k-1)}=\int_0^1 x^{n-1}(1-x)^{k-1}dx$$ -which is a special case of the beta integral. -Therefore -$$S_n=\int_0^1x^{n-1}\sum_{k=1}^\infty C_k(1-x)^{k-1} -dx=\int_0^1\left(\frac{1}{1-x}+\frac{1}{\log x}\right)x^{n-1}dx.$$ -When $n=1$ we get -$$S_1=\int_0^1\left(\frac{1}{1-x}+\frac{1}{\log x}\right)dx=\gamma$$ -by a well-known formula which can be derived from the asymptotic -$$\int_t^\infty\frac{e^{-x}}{x}dx=-\log t-\gamma+O(t)$$ -as $t\to0$, for the exponential integral. -By induction it suffices to consider the difference -$$S_n-S_{n+1}=\int_0^1\left(x^{n-1}+\frac{x^{n-1}-x^n}{\log x}\right)dx -=\frac1n-\int_0^\infty\frac{e^{-ny}-e^{-(n+1)y}}{y}dy.$$ -Using the identity -$$\int_0^\infty\frac{e^{-ay}-e^{-by}}{y}dy=\log\frac ba$$ -for $0 < a < b$ -which follows by integrating $e^{-xy}$ over the region $[a,b]\times[0,\infty)$ -we get -$$S_n-S_{n+1}=\frac1n-\log\frac{n+1}{n}.$$ -The desired formula for $S_n$ now follows by induction.<|endoftext|> -TITLE: How do I prove that $x^TAy = y^TAx$ if A is symmetric? -QUESTION [7 upvotes]: Ok this is for a HW but I'm not looking for a handout...just a hint to get me on the right track. -I have no idea where to begin proving this: -Show that if $A$ is a symmetric matrix, then -$$x^TAy = y^TAx$$ - -REPLY [5 votes]: If $A$ is symmetric then we know that $A_{ij} = A_{ji}$. -If you understand that $x^T A y = \sum_i\sum_j x_iA_{ij}y_j$, then swapping the indices of $A$ should directly lead you to the answer.<|endoftext|> -TITLE: Does a continuous scalar field on a sphere have continuous loop of "isothermic antipodes" -QUESTION [8 upvotes]: For a continuous scalar field on a circle, there is a diameter of the circle such that the endpoints of the diameter have the same value. If you think of the scalar field as "temperature", then what this says is that there are points on opposite sides of the circle that are the same temperature: isothermic antipodes. -So, for a continuous scalar field on a sphere, the same is true: there are isothermic antipodes. (Just consider any great circle.) -Now, is there more you can say? Can you say for example that there is a closed loop on the surface of the sphere such that every point on the loop has the same value as the other endpoint of its diameter? - -REPLY [6 votes]: [edit: as predicted, there is a counterexample with no continuous loop. See addition below.] -I think that you may not quite get a loop, only a topological continuum (a compact connected subset) on the sphere whose complement contains multiple components. The continuum can be gotten by elaborating Rahul's answer to choose a suitable component of the $g=0$ locus. -The existence of topologically wild continua such as the "Warsaw circle" suggests that you can draw such a creature on the sphere or projective plane and then extend to a continuous function that would give a counterexample. Or you could take a field that has the equator as the locus of isothermal antipodes ($g=0$) and try to perform an (antisymmetric) bending construction that modifies parts of the equator, turning it into a wild curve that cannot be traced by a continuous loop.
-[added: the extension construction would work as follows. Take two opposite points on the equator. Join them with a wild continuum in one hemisphere, and the antipode of that continuum in the opposite hemisphere. Define $f(x)$ to be the distance to the wild thing, in one hemisphere, and the negative of the distance to the wild thing, in the opposite hemisphere. Hence $f(x) = -f(-x)$ on the whole sphere, and $f=0$ only on the wild construction that cannot be traversed continuously by a path.]<|endoftext|> -TITLE: what does ∇ (upside down triangle) symbol mean in this problem -QUESTION [37 upvotes]: Given $f(x) = \frac{1}{2}x^TAx + b^Tx + \alpha $ -where $A$ is an $n \times n$ symmetric matrix, $b$ is an $n$-dimensional vector, and $\alpha$ a scalar. Show that -$\bigtriangledown _{x}f(x) = Ax + b$ -and -$H = \bigtriangledown ^{2}_{x}f(x) = A$ -Is this simply a matter of taking a derivative with respect to $x$? How would you attack this one? - -REPLY [37 votes]: $\nabla f = (\partial f/\partial x_1, \ldots, \partial f/\partial x_n)^t$ denotes the vector of partial derivatives of $f$ and is a completely standard notation. -On the other hand, $\nabla^2 f$ seems to be used here in an unusual way, namely to denote the Hessian (the matrix of all second order partial derivatives), $(\partial^2 f/\partial x_i \partial x_j)_{i,j=1}^n$. -(The usual meaning of $\nabla^2 f$ is the Laplacian, $\partial^2 f/\partial x_1^2 + \ldots + \partial^2 f/\partial x_n^2$.) - -REPLY [12 votes]: $\bigtriangledown f$ finds the direction of maximal change in $f$.<|endoftext|> -TITLE: Branches of mathematics not having a general method to solve -QUESTION [9 upvotes]: I studied applied math, so each course (except abstract algebra) was dedicated to solving similar problems. After those courses it seems that every branch of mathematics has a developed theory with a few unsolved problems. I believe that is wrong. I found out that there is no general approach to Diophantine equations (the 10th Hilbert problem), to problems like the Collatz conjecture, and to nonlinear differential equations. -And so the question is: what are the other branches of mathematics (collections of similar problems) without a general method to solve them? - -REPLY [4 votes]: Irrationality of certain numbers is still a tough problem for mathematics. Even though the irrationality of some numbers (such as square roots of integers that are not perfect squares, $\pi$, and $e$) has long been known, it is still unknown whether $\pi + e$, $2^e$, $\pi^{e}$, $\pi^{\sqrt{2}}$, $\pi \cdot e$, $\pi / e$, Catalan's constant, the Euler-Mascheroni constant, etc. are irrational, even though all are highly suspected to be.<|endoftext|> -TITLE: what are the product and coproduct in the category of topological groups -QUESTION [27 upvotes]: I know the limits in the categories of groups, abelian groups and topological spaces and was wondering about the same thing. - -REPLY [9 votes]: @Martin: @Agusti I just came across your answer, Martin, which rightly pointed out the use of Freyd representability to get a general result. In fact this was used in essence in the paper -R. Brown and J.P.L. Hardy, ``Topological groupoids I: universal -constructions'', Math. Nachr. 71 (1976) 273-286. -to give colimits of topological groupoids, and so in particular topological groups. -As you say, it is difficult to get hold of specific properties of this topology; the construction is defined by its universal property, and that is all.
-However, in the case of free topological groups, there has been some quite specific exploration of the properties, with papers by Markov, Graev, S.A. Morris, and P. Nickolas. -Here are two papers which should be relevant to the discussion: -Nickolas, Peter, -Coproducts of abelian topological groups. -Topology Appl. 120 (2002), no. 3, 403–426. -Nickolas, Peter, A Kurosh subgroup theorem for topological groups. Proc. London Math. Soc. (3) 42 (1981), no. 3, 461–477. -Another way of dealing with this problem is to work in a convenient category of spaces, usually the category of compactly generated spaces, in which the product of identification maps is an identification map.<|endoftext|> -TITLE: How is $\mathbb{C}$ different than $\mathbb{R}^2$? -QUESTION [45 upvotes]: I'm taking a course in Complex Analysis, and the teacher mentioned that if we do not restrict our attention to analytic functions, we would just be looking at functions from $\mathbb{R}^2$ to $\mathbb{R}^2$. -What I don't understand is why this is not true when we do restrict our attention to analytic functions. I understand that complex analytic functions have different properties than real functions on $\mathbb{R}^2$, but I don't understand why this is so. If I look at a complex number $z$ as a vector in $\mathbb{R}^2$, then isn't differentiability of $w=f(z)$ in $\mathbb{C}$ defined the same way -as differentiability of $(u,v)=F(x,y)$ in $\mathbb{R}^2$? - -REPLY [10 votes]: To go down a level from differentiability: the root of the difference between $\mathbb{C}$ and $\mathbb{R}^2$ comes from the multiplicative structure on $\mathbb{C}$. Look at the definition of differentiation itself: $\lim_{h\rightarrow 0} h^{-1}\cdot \left(f(z+h)-f(z)\right)$ - there's a multiplication here, by the multiplicative inverse of the (complex) number $h$, that simply can't be performed in $\mathbb{R}^2$ without giving it a field structure. There isn't 'a' derivative of a function from $\mathbb{R}^2$ to $\mathbb{R}^2$, just a matrix of partial derivatives; the multiplicative structure of $\mathbb{C}$ is then what forces the Cauchy-Riemann constraints on those partial derivatives and allows for a definition of the derivative as a single function from $\mathbb{C}$ to $\mathbb{C}$.<|endoftext|> -TITLE: Why does $1/x$ diverge? -QUESTION [26 upvotes]: So for the formula $\dfrac {1}{n}$, if you were to add up all $y$ values from $n=1$ to $n=\infty$, wouldn't the sum -$$\sum_{n=1}^\infty \frac{1}{n}$$ -approach a number because even though you are always adding, aren't you just adding smaller and smaller numbers? Wouldn't this mean that it approached a certain number? - -REPLY [13 votes]: As others have explained, the series diverges. But the divergence is very slow, indeed. -See below. -A recent related idea for a first year calculus exercise: -An intelligent robot named Marvin travelled back in time to the moment of the Big Bang 13.7 billion years ago. He started calculating the partial sums of the harmonic series -$$ -\frac11+\frac12+\frac13+\frac14+\frac15+\cdots -$$ -He added one term to the partial sum per second. Using the estimates leading to the so-called integral test, answer the following question: As of today, has Marvin's sum reached the value 42?<|endoftext|> -TITLE: Example where union of increasing sigma algebras is not a sigma algebra -QUESTION [41 upvotes]: If $\mathcal{F}_1 \subset \mathcal{F}_2 \subset \dotsb$ are sigma algebras, what is wrong with claiming that $\cup_i\mathcal{F}_i$ is a sigma algebra?
-It seems closed under complement since for all $x$ in the union, $x$ has to belong to some $\mathcal{F}_i$, and so must its complement.
-It seems closed under countable union, since for any countable unions of $x_i$ within it, each of the $x_i$ must be in some $\mathcal{F}_j$, and so we can stop the sequence at any point and take the highest $j$ and we know that all the $x_i$'s up to that point are in $\mathcal{F}_j$, and thus so must be their union. There must be some counterexample, but I don't see it.
-
-REPLY [25 votes]: Something more drastic is true: If $\langle \mathcal{F}_n: n \geq 1\rangle$ is a strictly increasing sequence of sigma algebras over some set $X$ then $ \bigcup_{n \geq 1} \mathcal{F_n}$ is not a sigma algebra. As a corollary, there is no countably infinite sigma algebra. See, for example, "A comment on unions of sigma fields", A. Broughton and B. Huff, American Mathematical Monthly, 1977, Vol. 84, No. 7, pp. 553-554.<|endoftext|>
-TITLE: Free resources to start learning Discrete Mathematics
-QUESTION [5 upvotes]: Can anyone recommend good, free online articles or books to learn Discrete Mathematics? When I google'd for them, I came across a few resources, but I don't know whether they are good to start learning with. It would be better to know a tutorial or a book that gets you started in Discrete Mathematics. Can readers of SE Mathematics suggest a good starting point?
-
-REPLY [3 votes]: The UVic Discrete Mathematics Study Guide is also good for concise notes on the basics:
-http://www.math.uvic.ca/faculty/gmacgill/guide/index.html<|endoftext|>
-TITLE: What are the subgroups of a semidirect product?
-QUESTION [15 upvotes]: Goursat's Lemma characterizes the subgroups of direct products. Is there a similar characterization for the subgroups of semidirect products? What about if I'm only interested in the normal subgroups?
-
-REPLY [16 votes]: The short answer is that there is nothing nearly so nice as Goursat's Lemma. You can certainly reduce easily to the case where $\pi_K(H)=K$, much like you can reduce Goursat's Lemma to the case of a subdirect product, but after that it gets complicated. To give you an idea, here are three references.
-A theorem of Rosenbaum (Die Untergruppen von halbdirekten Produkten, Rostock. Math. Kolloq. No. 35 (1988), 21-30) gives (from the MathSciNet Review MR991728 (90c:20032)):
-
-Theorem. A set $U$ of elements of the semidirect product $G=NK$ with $N\triangleleft G$ is a subgroup of $G$ if and only if
-
-$UN\cap K$ and $U\cap K$ are subgroups of $G$;
-$U\cap N$ is a subgroup and $UK\cap N$ is a collection of $U\cap N$-cosets in $N$; and
-There is a mapping $\varphi$ defined for all $g\in UK\cap N$ mapping $(U\cap K)g$ onto some coset $n(U\cap N)$, with $n\in N$, satisfying $\varphi(g_1g_2)=g_2^{-1}\varphi(g_1)g_2\varphi(g_2)$.
-
-
-The criterion was then used by Gutiérrez-Barrios to develop a criterion for a set of elements to be a normal subgroup of the semidirect product (Die Normalteiler von halbdirekten Produkten. Wiss. Z. Pädagog. Hochsch. Erfurt/Mühlhausen Math.-Natur. Reihe 25 (1989), no. 2, 108-114. MR1044548 (91b:20029))
-Usenko (Subgroups of semidirect products, English translation in
-Ukrainian Math. J. 43 (1991), no.
7-8, 982-988 (1992), MR1148867 (92k:20045)) uses crossed homomorphisms to study subgroups of semidirect products.<|endoftext|>
-TITLE: Group Law for an Elliptic curve
-QUESTION [12 upvotes]: I was reading the book "Rational points on Elliptic curves" by J. Silverman and J. Tate, two prominent figures in number theory, and was very intrigued after reading the first couple of pages.
-The connection between Algebra and Geometry is displayed in a professional manner in that book. One can generally ask this question:
-
-It's clear that mathematicians had to develop some laws in order to make the points on an elliptic curve into a group. The first axiom, adding two points on an elliptic curve, is clear. But I would like to know how one comes up with the mechanism of the remaining axioms (like associativity), which are actually intricate in nature. What is the motivation behind it? Can we define an axiom in another way so that the points on the elliptic curve still form a group? Also, why define the group law only for this type of curve?
-
-A very much related question was asked in MO link: https://mathoverflow.net/questions/6870/why-is-an-elliptic-curve-a-group but I actually expect more about the mechanism of the intricate axioms.
-
-REPLY [30 votes]: The group law on an elliptic curve was not discovered in a vacuum. It came up in the context
-of abelian integrals.
-Let $y^2 = f(x)$, where $f(x)$ is a cubic in $x$, be an elliptic curve; call it $E$.
-Elliptic integrals are integrals of the form $$\int_{a}^x dx/y = \int_a^x \frac{d x}{\sqrt{
-f(x)}}.$$ (Here $a$ is some fixed base-point.) They come up (in a slightly transformed manner) when computing the arclength of an ellipse (whence their name).
-It was realized at some point (in the 1600s or 1700s) (at least in special cases) that if
-you apply certain substitutions to $x$, you can double the value of the integral, or that
-if you apply certain substitutions of the form $x = \phi(x_1,x_2)$, the integral you compute is the sum of the individual integrals for $x_1$ and $x_2$.
-Real understanding came from the work of Abel and Jacobi. (Unfortunately I don't know
-the precise history or attributions here.)
-What they realized (in modern terms) is that, if we fix the base-point $a$ and let $b$
-vary, then the elliptic integral is giving a multivalued map from the elliptic curve $E$
-to $\mathbb C$,
-and that the formula $\phi(x_1,x_2)$ mentioned above shows us a way to add points on the
-elliptic curve, so that this map is a (multi-valued) group homomorphism. Taking the inverse of this multi-valued map gives a single-valued map (which is how we are
-more used to thinking about it)
-$$\mathbb C \to E,$$
-which is a homomorphism when we give $\mathbb C$ its additive structure and $E$ the
-group law coming from $(x_1,x_2)\mapsto \phi(x_1,x_2)$. The kernel of this map turns out
-to be a lattice $\Lambda$, so that we get an isomorphism $\mathbb C/\Lambda \cong E.$
-The formula $\phi(x_1,x_2)$ turns out to be precisely the formula describing addition
-on the elliptic curve via chords and tangents, and there are lots of theoretical explanations for it, as you can find on the linked MO page.
-For a higher genus curve $C$, it turns out that there is not just the one holomorphic differential $dx/y$, but $g$ linearly independent ones (if the curve has genus $g$),
-say $\omega_1, \ldots,\omega_g$.
-Furthermore, if we fix a particular differential $\omega_i$, then there is no formula
-$\phi(x_1,x_2)$ such that the sum of the integrals of $\omega$ for $x_1$ and $x_2$ is
-equal to the integral of $\omega$ for $\phi(x_1,x_2)$.
-However, what Abel and Jacobi found is that, if we consider the map
-$$(x_1,\ldots,x_g) \mapsto (\sum_{i = 1}^g \int_a^{x_i} \omega_1,
-\ldots,\sum_{i = 1}^g \int_a^{x_i} \omega_g),$$
-which gives a multi-valued map $$Sym^g C \to \mathbb C^g$$
-(here $Sym^g C$ denotes the $g$th symmetric power of $C$, so it is the product of
-$g$ copies of $C$, modulo the action of the symmetric group on the $g$ factors),
-then we can find a formula $\phi(x_{1,1},\ldots,x_{1,g},x_{2,1},\ldots,x_{2,g})$
-such that $\phi$ defines a group law on $Sym^g C$ (at least generically) and
-such that this map is a homomorphism.
-Again, this map becomes well-defined and (generically) single-valued if we quotient out the target by an appropriate lattice $\Lambda$ (the period lattice), to get a birational map
-$$Sym^g C \to \mathbb C^g /\Lambda.$$
-The target here is called the Jacobian of $C$, and can be identified with $Pic^0(C)$.
-In summary, to generalize the addition law on an elliptic curve to higher genus curves,
-you have to consider unordered $g$-tuples of points (where $g$ is the genus), and add those (not just individual
-points).
-(The relationship with $Pic^0(C)$ is that if $x_1,\ldots,x_g$ are $g$ points on $C$,
-then $x_1 + \ldots + x_g - g a$ is a degree zero divisor on $C$, and every degree zero
-divisor is linearly equivalent to a divisor of this form, and, generically, to a unique
-such divisor.)
-
-REPLY [11 votes]: I agree that from the chord-and-tangent definition of the group law it is far from clear why associativity should hold. This definition is the one presented in Silverman and Tate because all the other ways you might define the group law require much more background; however, these are the definitions from which associativity makes more sense.
-In one approach an elliptic curve is a group of the form $\mathbb{C}/\Lambda$ where $\Lambda$ is a lattice, i.e. a discrete subgroup of $\mathbb{C}$ isomorphic to $\mathbb{Z}^2$. (The motivation for looking at such quotients comes basically from the uniformization theorem). Such a quotient forms a compact Riemann surface, topologically a torus, hence it admits no nonconstant holomorphic functions; therefore we would like to describe the meromorphic functions. A natural way to do this is to write down a function and then try to average over $\Lambda$, which (if you do it right) will lead you to the definition of the Weierstrass elliptic function $\wp(z)$. By examining its poles and the poles of its derivative $\wp'(z)$ we can deduce the differential equation
-$$\wp'(z)^2 = 4 \wp(z)^3 - g_2 \wp(z) - g_3$$
-where $g_2, g_3$ are certain constants associated to $\Lambda$. In fact the map from $\mathbb{C}/\Lambda$ to the elliptic curve $y^2 = 4x^3 - g_2 x - g_3$ given by $z \mapsto (\wp(z), \wp'(z))$ is an isomorphism. This forms the connection between the complex-analytic picture and the algebraic picture in terms of Weierstrass normal forms. The group law here is just regular addition, e.g. the sum of the points $(\wp(a), \wp'(a))$ and $(\wp(b), \wp'(b))$ is just $(\wp(a+b), \wp'(a+b))$, and so it is obvious that associativity holds. The fact that if $a + b + c = 0$ then the corresponding points are collinear is then a relatively simple computation.
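-As a sanity check on the chord-and-tangent description, here is a minimal Python sketch (the curve $y^2 = x^3 + 17$ and the sample points are arbitrary illustrative choices, not anything singled out above) that implements the secant/tangent addition formulas with exact rational arithmetic and confirms associativity on one triple:
-
-    from fractions import Fraction as F
-
-    # Curve y^2 = x^3 + a*x + b over Q; a point is a pair (x, y),
-    # and None stands for the identity O (the point at infinity).
-    a, b = F(0), F(17)
-
-    def add(P, Q):
-        # Chord-and-tangent addition.
-        if P is None: return Q
-        if Q is None: return P
-        (x1, y1), (x2, y2) = P, Q
-        if x1 == x2 and y1 == -y2:
-            return None                     # vertical line: the sum is O
-        if P == Q:
-            lam = (3*x1**2 + a) / (2*y1)    # slope of the tangent at P
-        else:
-            lam = (y2 - y1) / (x2 - x1)     # slope of the chord through P, Q
-        x3 = lam**2 - x1 - x2
-        y3 = lam*(x1 - x3) - y1             # reflect the third intersection
-        return (x3, y3)
-
-    P, Q, R = (F(-2), F(3)), (F(-1), F(4)), (F(2), F(5))  # points on y^2 = x^3 + 17
-    assert add(add(P, Q), R) == add(P, add(Q, R))         # both equal (43, 282)
-
-A finite check of course proves nothing by itself; it merely illustrates the identity that the analytic argument above establishes in general.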
-(Historically, this is not how the first group laws on elliptic curves were discovered. For the elliptic-function perspective I recommend Stevenhagen's notes on the subject.)
-In another approach an elliptic curve (over $\mathbb{C}$, for simplicity) is a smooth projective curve of genus $1$. The category of smooth projective curves over $\mathbb{C}$ is known to be equivalent to the category of compact Riemann surfaces, which is part of the connection between this definition and the above definition. To any smooth projective curve $C$ one can associate its Jacobian variety, which is a group of the form $\mathbb{C}^g/\Lambda$ where $g$ is the genus which naturally occurs when one thinks about integration over such a curve. The Abel-Jacobi theorem asserts that the Jacobian variety of a curve is isomorphic to its Picard group (the group of divisors modulo principal divisors), and the points of the Picard group of a curve of genus $1$ can be put into natural bijection with the curve itself (once one chooses an identity), so the group law on the Jacobian variety (which is perfectly natural) then gives a group law on the original curve (again once one chooses an identity). The connection to collinear points comes from the fact that lines determine principal divisors, which are zero in the Picard group.
-As for your last question, elliptic curves are, as it turns out, the only projective curves which can be given the structure of algebraic groups; see the Wikipedia article on abelian varieties.
-
-REPLY [3 votes]: This is a failed attempt at constructing the group law with associativity built in; it is still interesting to consider, though - I will explain why at the end.
-
-Every line intersects an elliptic curve in 3 points (counted with multiplicity); call them $P,Q,R$.
-We can pick some designated point $\mathcal O$ as the identity on the curve and then set the ternary relation $P+Q+R = \mathcal O$ to hold for all triples which lie on a line (N.B. I could just write this $C(P,Q,R)$ but I use the suggestive notation of $+$ and $=$ instead).
-Inspired by the group axioms we have $P + \mathcal O + -P = \mathcal O$, so we know how to compute the inverse of a point.
-Given any two points $P,Q$ we can also easily find a third point $R$ such that $P + Q + R = \mathcal O$. We can now define the binary operation of addition as $P + Q = -R$.
-
-Now it is possible to reduce the proof of associativity of the group law to a purely combinatoric statement.
-
-$(P + Q) + R$ means that $P + Q + -PQ = \mathcal O$ and $PQ + R + -PQR = \mathcal O$
-$P + (Q + R)$ means that $Q + R + -QR = \mathcal O$ and $P + QR + -PQR' = \mathcal O$
-
-(and also the extra relations between positive and negative terms)
-These relations define an incidence structure:
-{(P,Q,-PQ),(PQ,O,-PQ),(PQ,R,-PQR),(PQR,O,-PQR),(Q,R,-QR),(QR,O,-QR),(P,QR,-PQR'),(PQR',O,-PQR')}
-Now for the interesting bit! We would hope that a Fano plane style situation would force PQR to be equal to PQR' but it doesn't happen. So we didn't get any proof out of this, but we do get a sense of the difficulty of this associativity problem: there is something special about the shape of the elliptic curve which forces these points to be equal.
That implies that any proof or construction of this group law must use the geometric properties of the curve in an essential way.<|endoftext|>
-TITLE: Find the radius of a circle based off of its intersection with another
-QUESTION [6 upvotes]: So I have some circles that look kind of like this:
-
-I'm given the radius of the circle with center point $A$ (which is also the distance $AB$, since $B$ lies on that circle), the distance $AB$ between the two center points on the $x$ axis (the centers share the same $y$ value), and the distance $CD$, which is the height of the shape created by the intersection.
-I'm looking for a way to find the distance $BD$, which is also the radius of the circle centered at the point $B$.
-
-REPLY [4 votes]: First, note that segment AB bisects segment CD. Call their point of intersection E, which is also the midpoint of CD. Since you know the length of CD, you know the length of CE. The measure of angle BAC (which could also be called EAC) is $\sin^{-1}\left(\frac{CE}{AC}\right)$. Apply the Law of Cosines to triangle ABC to find BC=BD:
-$$BC^2=AC^2+AB^2-2\cdot AC\cdot AB\cdot\cos\left(\sin^{-1}\left(\frac{CE}{AC}\right)\right).$$
-Fill in the known lengths and solve.<|endoftext|>
-TITLE: Is complex conjugation needed for valid inner product?
-QUESTION [26 upvotes]: What are the benefits of using a conjugate linear inner product in a complex vector space vs a simple linear inner product? That is, why do we demand that $(y,x) = \overline{(x,y)}$ as opposed to $(y,x)=(x,y)$? Of course, this ensures that $(x,x)$ is real and thus makes an easy definition of norm, but is that necessary?
-
-REPLY [36 votes]: It is in fact necessary. The inner product axioms without the conjugation are inconsistent:
-(Here $u$, $v$, $w$ are vectors and $c$ is a scalar)
-
-$\langle cu, v\rangle = c\langle u, v\rangle$
-$\langle u,v\rangle = \langle v,u\rangle$
-If $u \neq 0$, then $\langle u,u\rangle$ is a positive real number
-$\langle u+v,w\rangle = \langle u,w\rangle + \langle v,w\rangle$
-
-In fact, 1-3 alone are inconsistent. Indeed, let $u$ be any nonzero vector, so $\langle u,u\rangle > 0$ by condition 3. But if $i = \sqrt{-1}$, then $\langle iu, iu\rangle = i\langle u, iu\rangle$ (by 1) $ = i\langle iu, u\rangle$ (by 2) $ = i^2\langle u, u\rangle$ (by 1) $ = -\langle u, u\rangle < 0$, contradicting condition 3.
-The upshot is that you can choose: either conjugate one side of condition 2, giving you the axioms for an inner product, or get rid of condition 3, giving you the axioms for a symmetric bilinear form. You could also consider a weaker version of 3, like requiring that if $u\neq 0$, then $\langle u, v\rangle \neq 0$ for some $v$. That gives you nondegenerate symmetric bilinear forms.
-Note that there's nothing wrong with bilinear forms on complex vector spaces; they're just not inner products. They're disjoint concepts, unlike in real vector spaces, where inner products are just special symmetric bilinear forms. In some ways, bilinear forms are nicer than inner products, since you don't have to worry about complex conjugation. However, bilinear forms over the complex numbers do not give rise to norms, which means they don't endow vector spaces with good geometry. Inner products do, hence their ubiquity.
-
-REPLY [3 votes]: You need to define the inner product this way (in a complex vector space) to have a good definition of the norm of a vector. Let $V$ be this vector space, and take $X\in V$. Then, the norm of $X$ is defined as $|X|=\sqrt{(X,X)}$.
This definition is good ($|X|$ is well defined for all $X$) if $(y,x)=(x,y)^*$. If this is not the case, for example if $(y,x)=(x,y)$, then $(x,x)$ need not be positive. More than that: $(x,x)$ need not even be real.
-Another thing, related to the above: in metric spaces, given an inner product we define a norm (the same way we did above) and, from the norm, a distance function. This distance function determines the topology of the space (the topology induced by the inner product). To do this, we need to be able to define the norm of a vector in a good way, and the problem appears again.<|endoftext|>
-TITLE: What's the probability that a sum of dice is prime?
-QUESTION [14 upvotes]: Prompted by today's Minute Math question on the MAA site (http://amc.maa.org/mathclub/5-0,problems/T-problems/T-web,ia/2005web/tb05-12-ia.shtml), I started thinking about the probability that the sum of the numbers rolled on a set of $n$ dice is prime, particularly the asymptotics as $n\rightarrow\infty$. Heuristics strongly suggest that this is proportional to $1/\ln n$, and in fact, that it's $1/\ln n - O(1/\ln^2 n)$, but I'm wondering how I would go about getting better asymptotics on the second term.
-For the record, the heuristic argument goes something like this: assume for concreteness' sake that we're rolling 6-sided dice. Then the sum of the dice is closely approximated by a normal variable with mean $\mu=7n/2$ and variance $\sigma^2=35n/12$, and since the PNT says that the 'probability' of an integer $n$ being prime is roughly $1/\ln n$, we should be able to integrate that probability with respect to the normal distribution:
-$$p = {1\over \sqrt{2\pi\sigma^2}}\int_n^{6n} e^{-\left({(t-\mu)^2\over 2\sigma^2}\right)} {1\over\ln t} dt$$
-And since $1/\ln t$ is monotonic, the value of the integral is bounded by the values we get by replacing its term in the integral with its maximum and minimum values on the integration interval:
-$${1\over\ln 6n} {1\over \sqrt{2\pi\sigma^2}}\int_n^{6n} e^{-\left({(t-\mu)^2\over 2\sigma^2}\right)} dt < p < {1\over\ln n} {1\over \sqrt{2\pi\sigma^2}}\int_n^{6n} e^{-\left({(t-\mu)^2\over 2\sigma^2}\right)} dt$$
-Both of the integrals in the latter formula are essentially 1 (by definition), so we get ${1\over\ln 6n} < p < {1\over\ln n}$; replacing $\ln 6n$ by $(\ln 6 + \ln n)$ and expanding as a series gives the heuristic approximation I alluded to above. This leads me to a couple of questions:
-How safe is the heuristic argument above? I know that the PNT gives good bounds on the number of primes in an interval (on the order of $n^{1/2}$ here, which in particular means that the error from the prime-counting would be $O(n^{-1/2})$ and so much smaller than the inverse-log terms above), but my analytic number theory isn't good enough to know whether 'weighting' by the normal distribution would throw off the classical proofs.
-How would I go about evaluating the integral above? Obviously the bounds I use bring it into a fairly small range, but it seems as though to get a second term in my asymptotics I'd need to be able to at least approximate the integral, and there aren't any obvious tricks that look like they'd handle it well...
-
-REPLY [4 votes]: For (1.), Rosser and Schoenfeld's non-asymptotic estimates bound $\pi(x)$ between functions of the form $x/(\log x + C)$, and this should be enough.
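-To make this concrete, here is a minimal numerical sketch in Python (assuming sympy is available for the prime-counting function; the inequalities checked are the classical Rosser-Schoenfeld bounds $x/(\ln x - 1/2) < \pi(x) < x/(\ln x - 3/2)$, valid for all sufficiently large $x$):
-
-    from math import log
-    from sympy import primepi
-
-    # pi(x) is squeezed between x/(ln x - 1/2) and x/(ln x - 3/2),
-    # which is what makes the 1/ln-type heuristic above quantitative.
-    for x in [100, 10**4, 10**6]:
-        assert x / (log(x) - 0.5) < primepi(x) < x / (log(x) - 1.5)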
-For (2) your integrals are rapidly decaying and incredibly close to the integrals on the whole real line. Being $\Theta(\sqrt{n})$ standard deviations from the average is quite an unlikely event.<|endoftext|>
-TITLE: What structure does the alternating group preserve?
-QUESTION [54 upvotes]: A common way to define a group is as the group of structure-preserving transformations on some structured set. For example, the symmetric group on a set $X$ preserves no structure: or, in other words, it preserves only the structure of being a set. When $X$ is finite, what structure can the alternating group be said to preserve?
-As a way of making the question precise, is there a natural definition of a category $C$ equipped with a faithful functor to $\text{FinSet}$ such that the skeleton of the underlying groupoid of $C$ is the groupoid with objects $X_n$ such that $\text{Aut}(X_n) \simeq A_n$?
-Edit: I've been looking for a purely combinatorial answer, but upon reflection a geometric answer might be more appropriate. If someone can provide a convincing argument why a geometric answer is more natural than a combinatorial answer I will be happy to accept that answer (or Omar's answer).
-
-REPLY [2 votes]: Let us define an orientation of a finite set recursively.
-
-An orientation of the empty set is either $-1$ or $+1$.
-An orientation of a finite set $X$ is the data of an orientation for each $Y ⊆ X$ of cocardinality 1 (meaning $|Y|=|X|-1$), such that for each $Z ⊆ X$ of cocardinality 2, if $X = Z∪\{a,b\}$, then the two orientations induced on $Z$ by going through the two possible paths $X → Z∪\{a\} → Z$ and $X→Z∪\{b\}→Z$ are different ("opposites").
-
-We obtain, for each set, two possible orientations. This can be proved by showing that for any $Y ⊆ X$ of cocardinality $1$, there is a unique way of extending an orientation of $Y$ to an orientation of $X$. This also shows that each linear ordering of $X$ gives an orientation.
-We can visualize what an orientation is on a drawing of a triangle or of a tetrahedron: we orient the faces/edges and make it so that everything glues nicely along the edges/vertices.
-Each permutation of $X$ acts naturally on the set of orientations of $X$, and the permutations acting trivially are the even ones.
-But it is not clear enough to me how this interacts in a more formal way with the intuitions from simplicial homology or from the exterior algebra.<|endoftext|>
-TITLE: Relation between torsion of a curve and the curl of a vector field
-QUESTION [8 upvotes]: The torsion of a curve in $\mathbb{R}^3$ indicates how much it twists around. The curl of a vector field indicates how much the vector field twists around. Is there a relation between the curl of a vector field and the torsion of a curve through that vector field at a given point?
-
-REPLY [7 votes]: The paper "Curvature Measures of 3D Vector Fields and Their Applications" by Weinkauf et al. describes how to compute the curvature and torsion of the tangent curves in terms of the spatial derivatives of the underlying vector field.
-Going the other way and recovering any information about the vector field from its tangent curves is not necessarily possible. As a simple counterexample, consider that all vector fields of the form $\mathbf{v}(r,\theta,z) = f(r,z)\,\mathbf{e}_\theta$ in cylindrical coordinates have exactly the same tangent curves (circles around the $z$-axis) but completely different curls.
-In general, the difficulty is that the properties of the tangent curve can only tell you about the derivatives of the vector field along the parallel direction; you get no information about how the vector field varies in the plane normal to the curve.<|endoftext|>
-TITLE: What Are $4$ Sided Shapes Called Again?
-QUESTION [7 upvotes]: I apologise for the really basic question. This didn't really fit on any other StackExchange website so the Maths one was the closest one where I could ask.
-Really Basic Question- What are $4$ sided shapes called again?
-Like how triangles are $3$ sided shapes, octagons are $8$ sided shapes, ... What are the $4$ sided ones called then?
-
-REPLY [4 votes]: A shorter version is "quadrangle".<|endoftext|>
-TITLE: How do I prove this sum is not an integer
-QUESTION [16 upvotes]: Assume that $k,n\in\mathbb{Z}^+$. Prove that the sum
-\begin{equation*}
-\dfrac{1}{k+1}+\dfrac{1}{k+2}+\dfrac{1}{k+3}+\ldots +\dfrac{1}{k+n-1}+\dfrac{1}{k+n}
-\end{equation*}
-is not an integer.
-The case $k=0$ is proved in this question: "Is there an elementary proof that $\sum_{k=1}^{n}\frac{1}{k}$ is never an integer?"
-
-REPLY [15 votes]: The proof that I gave in that thread works just as well here. It depends only on the fact that in any contiguous sequence of integers (here denominators) the maximal power $\rm 2^k$ that divides any element occurs in precisely one element. Indeed, after the first (necessarily odd) multiple of $\rm 2^k$, the next multiple would, by contiguity, be an even multiple, so a multiple of $\rm 2^{k+1}$, contra maximality. Here is said proof:
-HINT $\;$ Since there is a unique denominator $\rm 2^k$ having maximal power of $2$, upon multiplying all terms through by $\rm 2^{k-1}$ one deduces the contradiction that $\rm\; a/2 = b/c \;$ with $\rm \; a,\ c \:$ odd. As an example:
-$\quad\quad\quad\quad\quad\quad m = \frac{1}{2} + \frac{1}{3} +\; \frac{1}{4} \;+\; \frac{1}{5} + \frac{1}{6} + \frac{1}{7} $
-$\quad\quad\Rightarrow\quad\;\; 2m = \; 1 + \frac{2}{3} +\; \frac{1}{2} \;+\; \frac{2}{5} + \frac{1}{3} + \frac{2}{7} $
-$\quad\quad\Rightarrow\quad -\frac{1}{2} = \; 1 + \frac{2}{3} - 2m + \frac{2}{5} + \frac{1}{3} + \frac{2}{7}$
-The prior sum has all odd denominators so reduces to a fraction with odd denominator $\rm d\:|3\cdot 5\cdot 7$.
-Note $\:$ I purposely avoided any use of valuation theory because Anton requested an "elementary" solution. The above proof can easily be made comprehensible to a high-school student. The proof is trivial to anyone who knows valuation theory: namely the sum has a lone dominant term with 2-adic value $\rm v_2<0\:$ so, by the domination principle, the sum has 2-adic value $\rm v_2<0\:,\:$ i.e. the sum has even denominator in lowest terms, so it is nonintegral.
-
-REPLY [4 votes]: Differences of harmonic numbers are not integers.
-Consider $\sum_{k=m}^n {1\over k}$. Find the
-largest power of $2$, say $2^r$, that divides one of the denominators between $m$ and $n$.
-There can be no even multiples of $2^r$ between $m$ and $n$, hence there
- is only one odd multiple.
-Therefore $2^{r-1}(1/m+\cdots+1/n)$ is equal to $\frac{1}{2o_1}$ plus a bunch of
-fractions with odd denominator. Adding them gives one fraction with
-an odd denominator, write it as $x/o_2$.
-Here $o_1$ and $o_2$ are odd integers.
-We have $2^{r-1}(1/m+\cdots+1/n)=\frac{1}{2o_1}+\frac{x}{o_2}$ so that
-$${1\over m}+\cdots+{1\over n}={2xo_1+o_2\over 2^r o_1 o_2}.$$
-Therefore $2^r$ divides the denominator.
-If $n\not=m$, then $r\geq1$ and the sum is not an integer.
-Cf.
Exercise 6.21 (page 311) of Graham, Knuth, and Patashnik.<|endoftext|> -TITLE: When is an affine group complete? -QUESTION [6 upvotes]: Call a finite group $A$ affine if it has a normal, self-centralizing, complemented, elementary abelian subgroup $V$. Such a group $A$ is a semi-direct product $G\ltimes V$ where $V$ is a vector space of dimension $n$ over $\mathbb{F}_p$ and $G$ is a group of matrices in $\operatorname{GL}(n,p)$. The elements of $A$ can be written as matrices $\left(\begin{smallmatrix}g& v \\ 0& 1 \end{smallmatrix}\right)$, where $g \in G$, $v \in V$ and $0,1$ are row vectors of the appropriate length. Conversely, given $G ≤ \operatorname{GL}(V)$, $V$ a vector space over $\mathbb{F}_p$, $A=G\ltimes V$ is an affine group. -An example is the full affine group, $\operatorname{AGL}(n,p)$ where $G$ is $\operatorname{GL}(n,p)$ and $V$ is $\mathbb{F}_p^n$, that is $A$ is all $(n+1)×(n+1)$ matrices of the form $\left(\begin{smallmatrix}g& v \\ 0& 1 \end{smallmatrix}\right)$ where $g$ is $n×n$ invertible, $v$ is anything, $0$ is a zero vector, and $1$ is just a $1×1$ identity matrix. -$V$ becomes a $G$-module and its $G$-module structure has a large influence on the group theoretic structure of $A$. In particular, $V$ contains no "trivial" (central) summand as a $G$-module iff $A$ is centerless. - -Supposing $A$ is centerless, what conditions on $V$ ensure $A$ is a complete group, that is, so that $A$ is also "outerless"? - -See the previous questions: - -4238: Finite non-abelian group with centre but no outer automorphism -4498: Classification of small complete groups - -REPLY [4 votes]: Derek Holt helped me figure this out. -Suppose $V$ is not just normal, but characteristic in $A$. Then $\operatorname{Out}(A)$ has a normal series with factors $N, S, H$ with: - -$N ≤ \operatorname{Out}(G)$ consisting of those automorphisms that take $V$ to an isomorphic $G$-module -$S$ a quotient of the group $\operatorname{Aut}_G(V)$ of $G$-module automorphisms of $V$ by the normal subgroup of automorphisms induced by the center of $G$ -$H = H^1(G,V)$, the first cohomology group - -For $A$ to be complete, $\operatorname{Out}(A) = 1$, so $N=S=H=1$. In particular, - -No (non-identity) outer automorphism can take $V$ to an isomorphic $G$-module -$V$ has to be multiplicity-free and split, and the center of $G$ needs to include all the (block) scalar matrices -The first cohomology group has to vanish - -Assuming then that $V$ is irreducible and characteristic, then the second condition is just that $V$ is absolutely irreducible and $G$ contains the scalar matrices $Z(\operatorname{GL}(V))$. -If $V$ is not characteristic (say if $G$ is very small and unipotent) then this analysis fails, but I think if $G$ starts out close to being complete, $V$ is likely to be characteristic. Certainly if $V$ is irreducible.<|endoftext|> -TITLE: Is there any difference between the notations $\int f(x)d\mu(x)$ and $\int f(x) \mu(dx)$? -QUESTION [32 upvotes]: Suppose $\mu$ is a measure. Is there any difference in meaning between the notation -$$\int f(x)d\mu(x)$$ -and the notation -$$\int f(x) \mu(dx)$$? - -REPLY [10 votes]: At times, I find the $\mu(dx)$ notation to be quite intuitive. Informally, if we think of $dx$ as representing an infinitesimally small "chunk" of the real line, then $\mu(dx)$ is its measure. -For a formal example, let $F$ be right-continuous and increasing and $f$ continuous. Let $\mu$ be the Lebesgue-Stieltjes measure associated to $F$, that is, $\mu((a,b])=F(b)-F(a)$. 
Let $\{x_j\}_{j=1}^n$ be a partition of some interval $I$ and let $\Delta x_j = (x_{j-1},x_j]$. Although it is customary to let $\Delta x_j$ denote the length of this interval, in cases where we may apply different notions of length to the same interval, it may make more sense to simply let $\Delta x_j$ denote the interval itself.
-In this case, we have
-$$
-\int_I f(x)\,\mu(dx) = \lim_{n\to\infty}\sum_{j=1}^n f(x_j)\mu(\Delta x_j),
-$$
-provided the mesh of the partition tends to zero. In this setting, the $\mu(dx)$ notation keeps both sides of the equality notationally consistent with one another.
-That said, though, whether you choose to use $\mu(dx)$ or $d\mu(x)$, the meaning is the same.<|endoftext|>
-TITLE: Evaluating the integral $\int_0^\infty \frac{\sin x} x \,\mathrm dx = \frac \pi 2$?
-QUESTION [259 upvotes]: A famous exercise which one encounters while doing Complex Analysis (Residue theory) is to prove that the given integral:
-$$\int\limits_0^\infty \frac{\sin x} x \,\mathrm dx = \frac \pi 2$$
-Well, can anyone prove this without using Residue theory? I actually thought of using the series representation of $\sin x$:
-$$\int\limits_0^\infty \frac{\sin x} x \, dx = \lim\limits_{n \to \infty} \int\limits_0^n \frac{1}{t} \left( t - \frac{t^3}{3!} + \frac{t^5}{5!} + \cdots \right) \,\mathrm dt$$
-but I don't see how $\pi$ comes here, since we need the answer to be equal to $\dfrac{\pi}{2}$.
-
-REPLY [2 votes]: By definition (Laplace Transform):
-\begin{equation*}
- F(s)=L\left[\frac{\sin(t)}{t}\right]=\int_{0}^{\infty}\frac{\sin(t)}{t}e^{-st}dt=\arctan\left(\frac{1}{s}\right)
-\end{equation*}
-Then, for $s=0$,
-\begin{equation*}
- F(0)=\int_{0}^{\infty}\frac{\sin(t)}{t}dt = \lim_{s\to0}\arctan\left(\frac{1}{s}\right)=\lim_{u\to\infty}\arctan(u)=\frac{\pi}{2}
-\end{equation*}
-(The passage to the limit $s\to0$ is the delicate point here; it can be justified by an Abelian theorem for Laplace transforms, since the integral converges.)<|endoftext|>
-TITLE: Is there an explanation for the patterns formed by the binomial coefficients $\binom{n+d-1}{d}\pmod{512}$?
-QUESTION [5 upvotes]: Simplicial sequences generalize the familiar "linear", "triangular", and "tetrahedral" number sequences. (A line segment is a $1$-simplex, a triangle is a $2$-simplex, a tetrahedron is a $3$-simplex, and so on, so I'm calling these simplicial sequences.)
-In general, the $n$-th $d$-simplicial number is the binomial coefficient $\binom{n+d-1}{d}$.
-The pictures below graphically show the $n$-th element $\pmod{512}$ of $512$ simplicial sequences side by side. The white dots in a given frame have coordinates $(d,\binom{n+d-1}{d} \pmod {512})$, where $d$ runs from $0$ through to $511$, and the value of $n$ is displayed at the bottom of the diagram.
-Also, if you are using a "modern browser" (i.e. not IE6,7,8), you can see an animation running through all values of $n$ by following this link. The animation allows you to start and stop, and even save individual frames.
-If you watch the animation, you'll see that binary visual patterns emerge, in ways that are threaded through the image and increase and decrease in strength. The images below for $n=5$ and $n=256$ indicate the extremes: for $n=5$ the points look randomly distributed, for $n=256$ they are strongly patterned.
-So my question is whether there are any explanations for the emergence and disappearance of visual binary patterns as we iterate through $n$.
-
-REPLY [5 votes]: Theorem (Kummer): The greatest power of $p$ dividing ${m+n \choose n}$ is the number of carries it takes to add $m$ and $n$ in base $p$.
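-For a hands-on check of Kummer's theorem, here is a minimal Python sketch (assuming Python 3.8+ for math.comb; the test values are arbitrary): carries(m, n, p) counts the carries when adding m and n in base p, and the assertion compares it with the exponent of p extracted directly from the binomial coefficient.
-
-    from math import comb
-
-    def carries(m, n, p):
-        # Number of carries when adding m and n in base p.
-        count, carry = 0, 0
-        while m or n or carry:
-            carry = 1 if m % p + n % p + carry >= p else 0
-            count += carry
-            m //= p
-            n //= p
-        return count
-
-    def valuation(x, p):
-        # Exponent of the prime p in the integer x.
-        v = 0
-        while x % p == 0:
-            x //= p
-            v += 1
-        return v
-
-    for m, n in [(7, 9), (100, 28), (255, 1), (300, 212)]:
-        for p in (2, 3, 5):
-            assert carries(m, n, p) == valuation(comb(m + n, n), p)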
-In particular the greatest power of $2$ dividing ${n+d-1 \choose d}$ is the number of carries it takes to add $n-1$ and $d$ in base $2$, and so depends entirely on the binary digits of both $n-1$ and $d$. (When $n = 256$, for example, there are likely to be a lot of carries, which is why the position of the dots is so restricted in that case. More generally the same happens when $n-1$ has lots of $1$s in its binary representation.) -The above theorem places a strong restriction on where the white dots can be, which I think will be clearer visually if you plot things in the shape of Pascal's triangle instead of whatever it is you're doing now (for example $\bmod 2$ you will get the Sierpinski triangle. More generally I think you should try plotting the entries of Pascal's triangle $\bmod 512$ where the modulus determines the color of a block. This won't be an animation but will allow you to consolidate all of the patterns you're seeing into one image which I think will ultimately be clearer). -See also Lucas' theorem for a partial explanation of patterns in the exact remainder $\bmod 512$.<|endoftext|> -TITLE: Approximating $\pi$ using Monte Carlo integration -QUESTION [5 upvotes]: I'm trying to approximate $\pi$ using Monte Carlo integration; I am approximating the integral -$$\int\limits_0^1\!\frac{4}{1+x^2}\;\mathrm{d}x=\pi$$ -This is working fine, and so is estimating the error (variance), $\sigma$. However, when I then try to use importance sampling with a Cauchy(0,1) distribution, things start to go wrong: -$$\frac{1}{n}\sum\limits_{i=0}^n\frac{f(x_i)}{p(x_i)}=\frac{1}{n}\sum\limits_{i=0}^n\frac{\frac{4}{1+x^2}}{\frac{1}{\pi(1+x^2)}}=\frac{1}{n}\sum\limits_{i=0}^n\frac{4\pi(1+x^2)}{1+x^2}=\frac{1}{n}\sum\limits_{i=0}^n4\pi=4\pi$$ -Obviously something's wrong, since the mean is computed independently of the random variables I generate. Where is this going wrong? Is the distribution too close to $f$? - -REPLY [7 votes]: This is quite a common error when doing Monte Carlo integration. The support of the random variable you choose must be equal to the range of integration. Though the Cauchy distribution has support on $\mathbb{R}$ we can adapt it slightly to make it work here. -Note that: $\int_0^1 \frac{1}{\pi(1+x^2)} dx = \frac{1}{\pi}\tan^{-1}(1)=\frac{1}{4}$ -Thus $g(x) = \frac{4}{\pi(1+x^2)}$ for $x\in(0,1)$ is a probability density with support on $(0,1)$. -using this we have -$\frac{1}{n}\sum_{i=0}^n \frac{f(x_i)}{g(x_i)} = \pi$ -This is not a problem! Since $g(x) = \frac{1}{\pi}f(x)$ you have found exactly the right probability distribution to use to evaluate $\int_0^1f(x)dx$! The error is zero for any sample, regardless of the size. -Note however, I needed to know how to integrate $\int_0^1\frac{1}{\pi(1+x^2)}$ in the first place to form the probability distribution. Thus for arbitrary functions it is impossible to get this situation. -Also note the Monte Carlo is not a very good way of approximating integrals in general. Far better deterministic methods are quadrature rules.<|endoftext|> -TITLE: Why is $\mathbb{Q}(t,\sqrt{t^3-t})$ not a purely transcendental extension of $\mathbb{Q}$? -QUESTION [27 upvotes]: This question is taken from Dummit and Foote (14.9 #6). Any help will be appreciated: - -Show that if $t$ is transcendental over $\mathbb{Q}$, then $\mathbb{Q}(t,\sqrt{t^3-t})$ is not a purely transcendental extension of $\mathbb{Q}$. - -Here's what I've got so far: -Abbreviate $\sqrt{t^3-t}$ as $u$. 
-I've shown that the transcendence degree is 1, so the problem boils down to showing that $\mathbb{Q}(t,u) \supset \mathbb{Q}(f(t,u)/g(t,u))$ strictly, for all polynomials $f,g$ in two variables.
-Suppose for contradiction that $\mathbb{Q}(t,u) = \mathbb{Q}(f(t,u)/g(t,u))$. Look at this field as $\mathbb{Q}(t)[x]/(x^2-(t^3-t))$, with $\bar{x}=u$.
-Then since $t$ and $u$ are generated by $f/g$, we have that for some polynomials $a,b,c,d$ in 1 variable, $a(\frac{f(t,x)}{g(t,x)})/b(\frac{f(t,x)}{g(t,x)})-t \in (x^2-(t^3-t))$, and $c(\frac{f(t,x)}{g(t,x)})/d(\frac{f(t,x)}{g(t,x)}) - x \in (x^2-(t^3-t))$.
-I then tried playing with degrees, but I haven't found a contradiction.
-
-REPLY [30 votes]: For convenience change the base field to $k=\mathbb{C}$
-(if the extension were purely transcendental before
-then it would be purely transcendental after).
-As the extension has transcendence degree $1$, then
-if the extension were purely transcendental it would equal $k(x)$
-for some $x$. Then there are nonconstant rational functions
-$f(x)$ and $g(x)$ such that
-$$g(x)^2=f(x)^3-f(x).$$
-It's easy to see we can write $f(x)=u(x)/w(x)^2$
-and $g(x)=v(x)/w(x)^3$ in lowest terms
-where $u$, $v$ and $w$ are polynomials.
-We find that
-$$\phi(x)=\frac{g'(x)}{3f(x)^2-1}=\frac{f'(x)}{2g(x)}.$$
-In fact $\phi(x)$ is a polynomial. Otherwise the denominator
-of $\phi$ has a factor $x-a$. Expressing $\phi$ in terms of $u$, $v$
-and $w$ (details?) we see that this implies that
-$$2g(a)=3f(a)^2-1=0$$
-as well as
-$$g(a)^2=f(a)^3-f(a).$$
-This is impossible. As $f$ and $g$ are nonconstant, $\phi$ is a nonzero
-polynomial.
-Now replace $f(x)$ and $g(x)$ by $f(1/x)$ and $g(1/x)$. Then $\phi(x)$
-is replaced by $-x^{-2}\phi(1/x)$ so that is also a polynomial, which
-it isn't. So we get the required contradiction.
-Added. The above was composed in a rush, and I cut a few corners,
-so some extra details. We have $f'(x)=\textrm{polynomial}/w(x)^3$ and so
-$$\phi(x)=\frac{f'(x)}{g(x)}=\frac{\textrm{polynomial}}{v(x)}.$$
-Similarly
-$$\phi(x)=\frac{\textrm{polynomial}}{3u(x)^2-w(x)^4}.$$
-If $x-a$ divides the denominator of $\phi(x)$ then $v(a)=0=3u(a)^2-w(a)^4$.
-We can't have $w(a)=0$ as then $x-a$ would divide $u(x)$ and $w(x)$
-contrary to $u(x)/w(x)^2$ being in lowest terms. Hence $f(a)$ and $g(a)$
-make sense and we get $g(a)=3f(a)^2-1=0$.
-This is really a geometric argument. On the elliptic curve
-$$E:\quad z^2=t^3-t$$
-the "invariant differential"
-$$\omega=\frac{dt}{2z}=\frac{dz}{3t^2-1}$$
-would have no poles on the (projective curve) $E$. But were $E$ rational
-then $\omega=\phi(x)dx$ has no poles on the projective line; but every
-nonzero differential on the projective line has a pole. The above is just
-a naive version of this argument, which works for all elliptic curves.<|endoftext|>
-TITLE: What concepts were most difficult for you to understand in Calculus?
-QUESTION [13 upvotes]: I'm developing some instructional material for a Calculus 1 class and I wanted to know, from your own experience, from tutoring others, and/or from helping people on this site, where the most difficulty in Calculus lies.
-If you had any good methods of helping people that would be very helpful.
-
-REPLY [2 votes]: The logic behind the $\epsilon$-$\delta$ definition.
-
-This might have been mentioned in an answer above but since you are a teacher (or are instructing one) I think I must express some frustration I have had with your standard freshman year Calculus course.
-My teacher - and many others I've heard - take the $\epsilon$ - $\delta$ definition of limits for granted. They just get away with repeating the statement "For any given $\epsilon \gt 0 $ $ \;\;\; \exists \delta \gt 0$ such that $|f(x) - L| \lt \epsilon $ whenever $|x - a| \lt \delta$". The students in my country (Sri Lanka) get entrance to university based on a very competitive final paper in school. Almost all of them are very capable of strong and mind-wrecking computations. But almost all of them have difficulty understanding limits in their first semester, and as a result they hate calculus and then despise pure mathematics in general. And I blame the teaching for this trend. The tutor fails to instill in the student the logic behind the definition. Not many know the fact that you are required to prove the existence of $\delta$ and not just an implication $|x - a| \lt \delta \implies |f(x) - L| \lt \epsilon $. It has been a couple of months since I joined this site and I repeatedly come across questions posted with similar dilemmas and all of them have flimsy logical foundation. And that is the issue.
-So my suggestion here is a better introduction to the logic behind the introductory calculus courses. One professor I know starts off by asking students to negate statements like "Every girl is pretty" and "Some parents are kind", which I think is an excellent approach.
-I found it very difficult to grasp the sense behind the limit statement, and it took me long hours and several books to get a hold of it. And I think this can and should be avoided. Yes, a student should work out problems on his own and do a lot of work on his own. But the foundation should be laid. Solidly. And I think the teacher is responsible for that.<|endoftext|>
-TITLE: How to find a suitable contour to integrate round?
-QUESTION [6 upvotes]: I am having trouble answering the following question:
-
-Problem
-By integrating the function:
-$$f(z) = \frac{R + z}{z(R - z)}$$
-round a suitable contour, prove that, for $0 \leq r < R$,
-$$\frac{1}{2\pi}\int_0^{2\pi} \frac{R^2-r^2}{R^2 - 2Rr\cos(\theta) + r^2}\mathrm{d}\theta = 1 $$
-
-My Attempt
-What I think they want me to do is to use the equation:
-$$\int_C f(z) \mathrm{d}z = \int_{z_1}^{z_2} f(z) \mathrm{d}z = \int_a^bf(z(t))\cdot z'(t)\mathrm{d}t $$
-where $C$ is a contour from $z_1$ to $z_2$ and $z(a) = z_1$ and $z(b) = z_2$. Then I should find a contour $z(t)$ in terms of $r$ and $t$ such that:
-$$f(z(t))\cdot z'(t) = \frac{R^2-r^2}{R^2 - 2Rr\cos(t) + r^2} $$
-And then with this contour I could find values for $z_1$ and $z_2$ meaning I could evaluate:
-$$ \int_{z_1}^{z_2} f(z) \mathrm{d}z $$
-which should equal $2\pi$ or something. However so far I have not been able to find a suitable $z(t)$. The closest I have come is $z(t) = re^{i\theta}$ which yields:
-$$f(z(t))\cdot z'(t) = \frac{(R^2 - r^2)\cdot i - 2Rr\sin(t)}{R^2 - 2Rr\cos(t) + r^2} $$
-However this entire approach could be wrong. I've got to the stage when you start to question your whole approach and feel like resorting to violence...According to Einstein, only insane people do the same thing over and over again and expect different results......Oh wait that's me...
-Please help me.
-
-Edit 1
-
-First of all a big thank you to everyone who has helped me here - your hints are like a life raft to a drowning man!
-To Mariano: I have not done anything on residues at this stage in the course - so the question should be answerable without using them.
That technique does look very powerful though and I hope I will get a good understanding of it in the future...
-To Hans Lundmark: Your link helped me to see the bigger picture and put this equation in context (wow, it's more than a figment of my prof's imagination!)... Thanks.
-To J. M: Your note helped a lot to remove the log jam in my head but there are still some issues that remain with my answer. I would very much appreciate your comments on my 2nd attempt.
-2nd Attempt
-(Step 1)
-The function $f(z)$ can be decomposed using partial fractions:
-$$f(z) = \frac{R + z}{z(R - z)} = \frac{1}{z} + \frac{2}{R - z}$$
-and then integrated with respect to $z$:
-$$\int f(z)dz = \log(z) - 2\log(R - z) $$
-(Step 2)
-With this information we can evaluate:
-$$\int_C f(z)dz $$
-where $C$ is the positively oriented circle $z = re^{i\theta}$. Because $\log(z)$ is not defined on its branch cut we cannot integrate around the whole circle but must split the circle into halves $C_1$ and $C_2$.
-Let $C_1$ denote the right half of the circle where $ -\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$ and consider the branch: $\log(z) = \ln(r) + i\theta$ where $-\pi \leq \theta \leq \pi$. Therefore:
-$$ \int_{C_1} f(z)dz = [\log(z) - 2\log(R - z)]_{-ri}^{ri} $$
-which equals:
-$$ \int_{C_1} f(z)dz = \pi{i} - 2\log(R - ri) + 2\log(R + ri) $$
-Let $C_2$ denote the left half of the circle where $\frac{\pi}{2} \leq \theta \leq \frac{3\pi}{2}$ and consider the branch: $\log(z) = \ln(r) + i\theta$ where $0 \leq \theta \leq 2\pi$. Therefore:
-$$ \int_{C_2} f(z)dz = [\log(z) - 2\log(R - z)]_{ri}^{-ri} $$
-which equals:
-$$ \int_{C_2} f(z)dz = \pi{i} - 2\log(R + ri) + 2\log(R - ri) $$
-and since:
-$$ \int_{C} f(z)dz = \int_{C_1} f(z)dz + \int_{C_2} f(z)dz $$
-it follows that:
-$$ \int_{C} f(z)dz = 2\pi{i} $$
-(Step 3)
-This is the part I am currently having difficulty with:
-Does the equation:
-$$\int_C f(z) \mathrm{d}z = \int_a^bf(z(t))\cdot z'(t) dt $$
-imply that:
-$$\Im\int_C f(z) \mathrm{d}z = \int_a^b\Im[{f(z(t))}\cdot z'(t)]\, dt $$
-Because if it did I could say that:
-$$ \Im(2\pi{i}) = \int_0^{2\pi} \frac{R^2-r^2}{R^2 - 2Rr\cos(\theta) + r^2}\mathrm{d}\theta $$
-and the proof would be complete... But I don't know if I can do this?
-Can someone help please?
-
-Edit 2
-
-Thanks to J.M., step 3 should be:
-Since the integral operator is linear:
-$$ \int z(t) \mathrm{d}t=\int \Re z(t) \mathrm{d}t+i\int \Im z(t) \mathrm{d}t$$
-therefore:
-$$ \int_C f(z) \mathrm{d}z = \int_a^b\Re[{f(z(t))}\cdot z'(t)]\, dt + i\int_a^b\Im[{f(z(t))}\cdot z'(t)]\, dt $$
-also we know that:
-$$ \int_{0}^{2\pi}\Re[{f(z(t))}\cdot z'(t)]\, dt = 0 $$
-and:
-$$ \int_C f(z) \mathrm{d}z = 2{\pi}i $$
-and:
-$$ \Im[f(z(t))\cdot z'(t)] = \frac{R^2-r^2}{R^2 - 2Rr\cos(t) + r^2} $$
-Therefore:
-$$ \int_C f(z) \mathrm{d}z = i\int_0^{2\pi}\Im[{f(z(t))}\cdot z'(t)]\, dt $$
-$$ 2{\pi}i = i\int_0^{2\pi}\Im[{f(z(t))}\cdot z'(t)]\, dt $$
-$$ 2{\pi} = \int_0^{2\pi}\Im[{f(z(t))}\cdot z'(t)]\, dt $$
-$$ 2{\pi} = \int_0^{2\pi}\frac{R^2-r^2}{R^2 - 2Rr\cos(t) + r^2}dt $$
-and the proof is complete. Awesome!
-I hope this helps some other dude out there who is trying to understand this stuff...
-
-REPLY [4 votes]: Note that
-$$\Im\left(\frac{R+r\exp(i\theta)}{r\exp(i\theta)(R-r\exp(i\theta))}(i r\exp(i\theta))\right)=\frac{R^2-r^2}{R^2+r^2-2Rr\cos\theta}$$
-
-As for your second attempt, remember that the integral is a linear operator, so
-$$\int z(t) \mathrm{d}t=\int \Re z(t) \mathrm{d}t+i\int \Im z(t) \mathrm{d}t$$<|endoftext|>
-TITLE: Why do statements which appear elementary have complicated proofs?
-QUESTION [16 upvotes]: The motivation for this question is: Rationals of the form $\frac{p}{q}$ where $p,q$ are primes in $[a,b]$ and some other problems in mathematics which look as if they are elementary but whose proofs are very sophisticated.
-I would like to consider two famous questions: First the "Fermat's Last Theorem" and next the unproven "Goldbach conjecture". These questions appear elementary in nature, but require a lot of mathematics for even comprehending the solution. Even the problem which I posed in the link is so elementary, but I don't see anyone even giving a proof without using the prime number theorem.
-Now the question is: Why is this happening? If I am able to understand the question, then I should be able to comprehend the solution as well. A mathematician once said: Mathematics is the understanding of how nature works. Is nature so complicated that a common person can't understand how it works, or is it that we are making it complicated? At the same time, I appreciate the beauty of mathematics also: Paul Erdős' proof of Bertrand's postulate is something which I admire so much because of its elementary nature. But at the same time I have my skepticism about FLT and other theorems.
-
-I have stated 2 examples of questions which appear elementary, but the proofs are intricate. I know some other problems in number theory which are of this type. Are there any other problems of this type, which are not number theoretical? If yes, I would like to see some of them.
-
-REPLY [2 votes]: Statements containing the word "every" are really much more complicated than they appear. For example the question "is every even integer n >= 4 the sum of two primes" looks easy, but if I ask "is 329872923459897823598798789723452396862359798797234597972798352 the sum of two primes", that's not really an easy question. Answering with "yes" in the obvious way would require verifying that some huge number is not composite, which isn't really easy. Or you'd have to show that one out of the possibly large set of huge numbers must be a prime number. Answering with "no" if that was the correct answer (which is unlikely but not impossible) would be a lot harder.<|endoftext|>
-TITLE: Topological group: Multiplying two loops is homotopic to linking these paths?
-QUESTION [13 upvotes]: Let G be a topological group and let $s_1$ and $s_2$ be loops in G (both loops are based at the identity e of G). Is it true that the loop $s_1s_2$ (where the multiplication is the one of the group structure of G) is equal, in $\pi_1(G,e)$, to the loop $s_1*s_2$ where this product is given by first going around $s_1$ and then $s_2$ (i.e., do we have $[s_1s_2] = [s_1*s_2]$)? If yes, what is the proof for this?
-
-REPLY [24 votes]: Instead of using the Eckmann-Hilton argument abstractly, you can make it explicit, as follows:
-We can reparameterize the loop $s_1$ to be constant (and equal to the neutral element) on $[1/2,1]$ and $s_2$ to be constant on $[0,1/2]$. Then $[s_1 \ast s_2] = [s_1 s_2]$ and since $s_1s_2 = s_2s_1$ we also get that the fundamental group is abelian.<|endoftext|>
-TITLE: Odd number of reals with equal partitions
-QUESTION [7 upvotes]: Consider the following problem:
-You are given a multiset (a set with repetitions allowed) of $2n+1$ real numbers, say $S = \{r_1, \dots, r_{2n+1}\}$.
-These numbers are such that for every $k$, the multiset $S - \{r_k\}$ can be split into two multisets of size $n$ each, such that the sum of numbers in one multiset is the same as the sum of numbers in the other.
-Show that all the numbers must be equal (i.e. $r_{i} = r_{j}$).
-Please stop reading further if you want to try and solve this problem.
-Spoiler:
-
- Now this problem can easily be solved using Linear Algebra. We have a set of $2n+1$ linear equations, which corresponds to a matrix equation $Ar = 0$. It can be shown that $A$ has rank at least $2n$ which implies the result.
-
-The question is, is there any solution to this problem which does not involve any linear algebra?
-
-REPLY [7 votes]: You can't avoid some sort of algebra, because the statement is false in a commutative group where $nx = 0$ has nontrivial solutions.
-If you allow use of the linear algebra fact that rank is the same over any field containing the coefficients of the equations, it is enough to consider rational (and thus integer) solutions and extra structure is available. One can then avoid use of determinants or matrices:
-If $\Sigma$ is the sum of all elements, $\Sigma - r_i$ is even and thus all $r_i$ have the same parity. We can replace each $r_i$ by $(r_i-r_k)/2$ and get a smaller solution, where $r_k$ is the smallest of the numbers. This process ends at the zero solution, and is reversible, so the original solution has all numbers equal.
-(added: you can consider this a use of either the real or 2-adic metric on integers, so this must correspond to a linear algebra argument using inequalities or reduction mod 2^(2n+1) on the system of equations or its determinant.)<|endoftext|>
-TITLE: On radial limits of Blaschke Products
-QUESTION [11 upvotes]: A Blaschke product is a function of the form
-$$B(z):=z^k\prod_{n=1}^{\infty}\frac{a_n-z}{1-\overline{a_n}z}\frac{|a_n|}{a_n}$$
-where the $a_n$ are the non-zero zeros of $B$, and satisfy $\sum_{n=1}^{\infty}(1-|a_n|) < \infty$.
-Blaschke products are holomorphic and bounded by 1 on the unit disk. A well known theorem asserts that $B$ has radial limits almost everywhere on the unit circle, i.e. that the limit
-$$\lim_{r \rightarrow 1} B(re^{i \theta})$$ exists for almost every $\theta$. I'm looking for an example of a Blaschke product such that the radial limit does not exist at a certain point, say $1$ for example. In particular, a Blaschke product with zeros in $(0,1)$ such that
-$$\limsup_{r \rightarrow 1}|B(r)| =1$$ would work.
-Does anyone have a construction or reference?
-Thank you,
-Malik
-
-REPLY [4 votes]: There is an exercise in Rudin's Real and complex analysis whose solution would answer your question, #14 in Chapter 15:
-
-Prove that there is a sequence $\{\alpha_n\}$ with $0\lt\alpha_n\lt1$, which tends to $1$ so rapidly that the Blaschke product with zeros at the points $\alpha_n$ satisfies the condition
- $$\limsup_{r\to1}|B(r)|=1.$$
- Hence this $B$ has no radial limit at $z=1$.
-
-(The previous exercise says that the limit is $0$ if $\alpha_n=1-n^{-2}$.)
-Instead of trying to solve it (with the guess of something like $\alpha_n=1-4^{-n}$), I found the article "On functions with Weierstrass boundary points everywhere" by Campbell and Hwang, which says on page 510 (page 4 of the pdf):
-
-Let $B(z)$ be the Blaschke product with zeros at $z=1-\exp(-n)$, for $n=1,2,\ldots$. Then $B(z)$ has no radial limit at $z=1$....
-The authors cite page 12 of "Sequential and continuous limits of meromorphic functions" by Bagemihl and Seidel for this fact, but I do not currently have access to that article. Hopefully you can track it down to get your question answered, or perhaps someone will take up the challenge of solving Rudin's problem.<|endoftext|>
-TITLE: On the generators of the Modular Group
-QUESTION [10 upvotes]: The modular group is the group $G$ consisting of all linear fractional transformations $\phi$ of the form
-$$\phi(z)=\frac{az+b}{cz+d}$$
-where $a,b,c,d$ are integers and $ad-bc=1$. I have read that $G$ is generated by the transformations $\tau(z)=z+1$ and $\sigma(z)=-1/z$. Is there an easy way to prove this? In particular, is there a proof that uses the relation between linear fractional transformations and matrices? Any good reference would be helpful.
-Thank you,
-Malik
-
-REPLY [2 votes]: Here is an elementary proof (not necessarily easy).
-We consider 3 cases:
-(i) If $a=0$, then $bc=-1$, so $b=-c=\pm 1$, therefore $\phi(z)=\frac{\pm 1}{\mp z+ d}=\sigma\tau^{\mp d}(z)$.
-(ii) If $|a|=1$. Since $\frac{-az-b}{-cz-d}=\frac{az+b}{cz+d}$, we may just assume $a=1$, so that $d-bc=1$. Notice that:
-$$\phi(z)=\frac{z+b}{cz+d}\stackrel{\sigma}{\longrightarrow}\frac{-cz-d}{z+b}\stackrel{\tau^c}{\longrightarrow}-\frac{1}{z+b}$$
-Therefore $\tau^c\sigma\phi(z)=-\frac{1}{z+b}$, which brings us back to case (i).
-(iii) If $|a|>1$, we will show that we can reduce it to case (ii).
-The idea is to modify $\phi$ to get a new $\phi_1(z)=\frac{a_1z+b_1}{c_1z+d_1}$ with $|a_1|<|a|$ and $|c_1|<|c|$. If this is possible, then $|a_1|$ gets closer to $1$ so, repeating the process, we eventually get $\phi_n$ with $|a_n|=1$.
-Let's define $\phi_1$. First, we assume $|a|>|c|$ (otherwise, just consider $\sigma\phi$ instead of $\phi$). Now take a convenient $n\in\mathbb{Z}$ so that $|a-nc|$ is smaller than both $|a|$ and $|c|$ (this is exactly the closest integer $n:=\left[\frac{a}{c}\right]$). The term $a-nc$ appears by applying $\tau^{-n}$:
-$$\tau^{-n}\phi(z)=\frac{(a-nc)z+(b-nd)}{cz+d}$$
-We then apply $\sigma$ so that $(a-nc)$ and $c$ switch places (modulo a $-1$ sign), so $\phi_1(z):=\sigma\tau^{-n}\phi(z)$ is exactly what we need.<|endoftext|>
-TITLE: Probability and Infinity
-QUESTION [5 upvotes]: If the probability of an event is $\frac{1}{\infty}$ and $\infty$ trials are conducted, how many times will the event occur — $0$, $1$, or $\infty$?
-
-REPLY [7 votes]: There are meaningful ways to work out probabilities on infinite spaces, but your question is not well-defined. We could define situations matching your question with the answer 0, 1, $\infty$, or any other finite answer.
-Probability distributions on finite sets are pretty easy to write down and work with. For rolling a single die, the probability distribution is {p(1)=1/6, p(2)=1/6, p(3)=1/6, p(4)=1/6, p(5)=1/6, p(6)=1/6}. From this, we can ask and answer many different questions-- for example, the probability of rolling a 3 three times in a row is 1/216. However, the probability of rolling a 7 is 0; and it doesn't matter how many times you roll, you'll never get a 7.
-Probability distributions on infinite sets require a lot more background to define and work with. You need a probability measure and you can compute integrals to find probabilities. (Measure theory is usually a graduate-level topic, and it requires very careful, abstract thought to get the details straight.)
-An instructive and fun problem to read about is the Random Walk.
Suppose I start at (0,0) on an infinite 2-dimensional grid, and each second I go up, down, left, or right one unit. (Each direction has probability 1/4 every second.) The probability space is the set of all possible walks, which is an infinite set. The probability measure on this space is determined by the fact that each direction has equal probability every second.
-Even though there are many walks in the probability space that never return to (0,0), the subset of walks that do return to (0,0) has measure 1. This implies that on a 2-dimensional random walk, the expected number of returns to (0,0) is infinite. However, the expected number of times I take a diagonal step is 0 (because, by my definition of the problem, it never happens).
-We can also consider the 3-dimensional random walk. Interestingly, on a 3-dimensional random walk, the expected number of returns to (0,0,0) is about 0.516385. (You can think of this as the average number of returns if you sample many 3-dimensional random walks.)
-These results require some work to obtain, and you always have to start with a precisely defined problem. If you think about them and read other examples on your own, it should help you gain some understanding of infinite probability spaces, probability 0 events, and probability 1 events.
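-As a minimal simulation sketch (my own illustration, not part of the original answer; the function names and parameters are mine): a finite simulation can only hint at these measure-theoretic statements, showing the average return count within a finite horizon creeping upward, consistent with recurrence.

    import random

    def returns_to_origin(steps):
        """Count how often one 2-D simple random walk revisits (0, 0)."""
        x = y = 0
        count = 0
        for _ in range(steps):
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
            if x == 0 and y == 0:
                count += 1
        return count

    # The average return count grows (roughly logarithmically) with the
    # horizon, in line with the expected number of returns being infinite.
    trials = 2000
    for steps in (100, 1000, 10000):
        avg = sum(returns_to_origin(steps) for _ in range(trials)) / trials
        print(steps, avg)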
<|endoftext|>
-TITLE: Set-theoretical description of the free product?
-QUESTION [5 upvotes]: There is something in the definition of the free product of two groups that annoys me, and it's this "word" thing:
-
-If $G$ and $H$ are groups, a word in $G$ and $H$ is a product of the form
-$$
- s_1 s_2 \dots s_m,
-$$
-where each $s_i$ is either an element of $G$ or an element of $H$.
-
-So what is this "word" guy? Does it come out of the blue? Does it come from some sort of new operation that I can perform with the two sets $G$ and $H$, in addition to the well-known ones of union, intersection, Cartesian product...?
-Fortunately, I think there is nothing new under the sun of set operations: it's easy to realise that words can be identified with elements of some Cartesian product (see below):
-$$
-(s_1, s_2, \dots , s_m ) \ .
-$$
-And Cartesian product is a well-established set-theoretical operation.
-So I tried to translate the rest of Wikipedia's definition
-
-Such a word may be reduced using the following operations:
-Remove an instance of the identity element (of either $G$ or $H$).
- Replace a pair of the form $g_1g_2$ by its product in $G$, or a pair $h_1h_2$ by its product in $H$.
-Every reduced word is an alternating product of elements of $G$ and elements of $H$, e.g.
-$$
- g_1 h_1 g_2 h_2 \dots g_r h_r.
- $$
-The free product $G ∗ H$ is the group whose elements are the reduced words in $G$ and $H$, under the operation of concatenation followed by reduction.
-
-in an elementary set setting. First, consider the set of "unreduced" tuples of elements of $G$ and $H$
-$$
-U = G \sqcup H \sqcup (G\times G) \sqcup (G\times H) \sqcup (H\times G) \sqcup (H\times H) \sqcup (G\times G \times G) \sqcup \dots
-$$
-More concisely:
-
-EDIT:
-I think the following formula may be less messy than the one I wrote previously:
-$$
-U = \bigsqcup_{r \geq 1} (S_1 \times \cdots \times S_r),
-$$
-where $S_i = G$ or $S_i = H$.
-
-So, elements of $U$ are ordered tuples (unreduced ones)
-$$
-(s_1, s_2, \dots , s_m),
-$$
-where each $s_i$ is either an element of $G$ or an element of $H$.
-The product of two unreduced tuples is defined by concatenation
-$$
-(s_1, \dots , s_m) \cdot (t_1, \dots , t_n) = (s_1, \dots , s_m, t_1 , \dots , t_n) \ .
-$$
-Now, consider the following equivalence relation on the set of unreduced tuples $U$:
-$$
-(s_1, s_2, \dots , s_{i-1}, 1, s_{i+1}, \dots , s_n) \sim (s_1, s_2, \dots, s_{i-1}, s_{i+1}, \dots , s_n) \ ,
-$$
-where $1$ is either the unit element of $G$ or the one of $H$. And
-$$
-(s_1, s_2, \dots , s_i,s_{i+1}, \dots , s_r) \sim (s_1, s_2, \dots , s_is_{i+1}, \dots , s_r )
-$$
-whenever two adjacent $s_i, s_{i+1} \in G$ or $s_i, s_{i+1} \in H$ at the same time.
-If you want, you may call the equivalence class of a tuple under this equivalence relation a reduced tuple. So every reduced tuple is represented by an alternating one,
-$$
-(g_1, h_1, \dots , g_r , h_r) \ ,
-$$
-with $g_i \in G$ and $h_i \in H$ for all $i = 1, \dots , r$.
-Define the free product of $G$ and $H$ as the quotient:
-$$
-G*H = U/\sim \ .
-$$
-Finally, one verifies that concatenation is well-defined on unreduced tuples and gives $G*H$ a group structure.
-After performing this elementary exercise I understand perfectly well why nobody defines the free product in this way, but I still wanted to ask:
-
-Is this correct?
-Is it written somewhere?
-
-REPLY [8 votes]: You can see essentially the same construction in two different ways in George Bergman's An Invitation to General Algebra and Universal Constructions in Chapter 2 (link is to a postscript file) for the free group.
-First, you define "the set of all terms in the elements of the set $X$ under the formal group operations $\mu$, $i$, $e$" to mean a set which is given with functions symb${}_T\colon X\to T$, $\mu_T\colon T^2\to T$, $i_T\colon T\to T$, and $e_T\colon T^0\to T$, such that each of these maps is one-to-one, their images are disjoint, and $T$ is the union of the images, and $T$ is generated by symb${}_T(X)$ under the operations $\mu_T$, $i_T$, and $e_T$. Such a set exists (it can be constructed inductively with enough care; this is given in Chapter 1 of the same notes). Then one defines an appropriate equivalence relation $\sim$ on $T$; the set $T/\sim$ gives the underlying set of the free group, and one defines the operations in the free group via representatives in the natural way. Bergman labels this "the logician's approach" (section 2.2).
-An alternative construction ("the classical construction", section 2.4) gives "free groups as groups of words". Again, you start with a set $X$, and let $T$ be the set of all group-theoretic terms of $X$; identify $X$ with its image under symb, define a subset $T_{red}$ of "reduced terms" (defining what this means appropriately), and then define operations $\otimes$, ${}^{(-)}$, and $e_T$ on this set to make it into a group. Proving it is a group can be done either in the straightforward but tedious way, or by using "van der Waerden's trick" (embed the set $T_{red}$ into a group of permutations, and check that the operations you defined correspond to the operations in the image, so that "group"-ness gets inherited).
-To get the free product, you let $X$ be the disjoint union of the underlying sets of $G$ and $H$, and either add to the equivalence relation (in the "logician's approach"), or restrict the definition of "reduced words" (in the "classical approach"), in essentially the way you did.
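-As a small concrete sketch of "concatenation followed by reduction" (my own illustration, not from the thread; the choice $G=\mathbb Z/2$, $H=\mathbb Z/3$ and the tagged-pair representation are mine):

    # Elements of the free product are lists of letters (tag, value), where
    # tag 0 means G = Z/2 and tag 1 means H = Z/3, with addition mod n as
    # the group law in each factor.
    ORDER = {0: 2, 1: 3}

    def reduce_word(word):
        """Merge adjacent letters from the same group and drop identities,
        producing the alternating normal form."""
        out = []
        for tag, val in word:
            val %= ORDER[tag]
            if out and out[-1][0] == tag:
                merged = (out[-1][1] + val) % ORDER[tag]
                out.pop()
                if merged:
                    out.append((tag, merged))
            elif val:
                out.append((tag, val))
        return out

    def multiply(w1, w2):
        """The group operation: concatenation followed by reduction."""
        return reduce_word(w1 + w2)

    g, h = (0, 1), (1, 1)
    print(multiply([g, h], [h, h, g]))  # (g h)(h h g) = g h^3 g = g^2 = e: prints []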
<|endoftext|>
-TITLE: Very simple question, but what is the proof that x.y mod m == ((x mod m).y) mod m?
-QUESTION [9 upvotes]: I apologise for this question, as it is no doubt very simple, but I've never been very confident with proofs. Our lecturer today (in a course related to maths but not mathematical itself) was playing with doing the modulus of powers, and used the above fact - $(x.y)\; mod\; m\; ==\; ((x\; mod\; m).y)\; mod\; m$ - to show a quick way to do it. She mentioned offhand that the proof was short and easy, and that we should try it. Well, I did. And I'm not really sure that what I've done is mathematically sound. So I'd very much appreciate it if anyone could give me some help with how to do this proof correctly - googling for it turned up many pages which just state it as simple fact. I would not call my proof short or easy, so I reckon I've just got completely the wrong end of the stick.
-My proof:
-When considering any number, denoted n, with regard to another number m, we can write it as $n_b + n_r$, where the former is some multiple of m, and the latter is the remainder when you take n mod m.
-If we expand the original formula, $(x.y)\; mod\; m$, like this, we get $((x_b + x_r)(y_b + y_r))\; mod\; m$. This then expands to $(x_b y_b + x_b y_r + x_r y_b + x_r y_r)\; mod\; m$. The first three terms in the brackets are all some multiple of m, and so when they are taken modulo m they are 0. The final term we do not know, so we are left with: $(x_r y_r)\; mod\; m$.
-Now we can consider the right-hand side. $((x\; mod\; m).y)\; mod\; m$ is equivalent to $((x_r).y)\; mod\; m$. Expanding y gives us: $((x_r)(y_b + y_r))\; mod\; m$. Multiplying out then gives $(x_r y_b + x_r y_r)\; mod\; m$. As before, the first term in the brackets is a multiple of m and so is 0 when taken mod m, and the last one we do not know, so we are left with: $(x_r y_r)\; mod\; m$.
-Since both the LHS and RHS boil down to the same thing, the equality holds.
-
-REPLY [7 votes]: Below are proofs of the product rule, expressed in both divisibility and congruence form, using the standard notation: $\rm\ \ a\ |\ b \ :=\ a\,$ divides $\rm\, b\:,\;$ and $\rm\ \; a\equiv b\ \ (mod\ m)\: \iff\: m\:|\:a-b$
-$\begin{eqnarray}
-\rm {\bf Lemma}\ \ &\rm m\ \ |&\rm\ \, X-x\quad\ and &&\rm m\ |\: Y-y \ \Rightarrow\ m\:|\!\!&&\rm XY - \: xy\\ \\
-\rm {\bf Proof}\ \ \ \ \ &\rm m\ \ |&\rm (X-\color{#C00}x)\:\color{#C00}Y\ \ \ + &&\rm\, \color{#C00}x\ (\color{#C00}Y-y)\ \ \ = &&\rm XY - \: xy \\
-\\
-\rm {\bf Lemma}\ \ & &\rm\ \, X\equiv x\quad\ \ and &&\rm\quad\ \ Y\equiv y \ \ \ \ \Rightarrow\ &&\rm XY\equiv xy\\ \\
-\rm {\bf Proof}\ \ \ \ \ &0\equiv& \rm (X-\color{#C00}x)\:\color{#C00}Y\ \ \ + &&\rm\, \color{#C00}x\ (\color{#C00}Y-y)\ \ \ \equiv &&\rm XY - \: xy \\
-\end{eqnarray}$
-Note how the congruence notation eliminates cumbersome manipulation of relations (divisibility). Indeed, the relations are replaced by a generalized equality (congruence) which, being compatible with multiplication (as above) and addition (similar proof), enables us to exploit our well-honed intuition manipulating integer equations - which immediately generalizes to manipulating congruences (mod m). When you study abstract algebra you'll learn that this is a very special case of a quotient or residue ring. This product rule arises in many analogous contexts, e.g. see my post on the product rule for derivatives.
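-As a small computational sketch of why this rule matters (my own addition, not part of the answer; the numbers and the routine are illustrative): reducing mod $m$ after every multiplication is exactly what makes fast modular exponentiation feasible.

    # (x*y) % m == ((x % m) * y) % m lets us reduce after every step.
    x, y, m = 123456789, 987654321, 1000
    assert (x * y) % m == ((x % m) * y) % m == ((x % m) * (y % m)) % m

    def pow_mod(a, e, m):
        """Compute a**e % m by repeated squaring, reducing at each step
        so intermediate numbers never exceed m**2."""
        result, a = 1, a % m
        while e:
            if e & 1:
                result = (result * a) % m
            a = (a * a) % m
            e >>= 1
        return result

    assert pow_mod(7, 9729, 1000) == pow(7, 9729, 1000) == 607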
<|endoftext|>
-TITLE: Types of infinity
-QUESTION [21 upvotes]: I understand that there are different types of infinity: one can (even intuitively) understand that the infinity of the reals is different from the infinity of the natural numbers. Or that the infinity of the even numbers is the same as that of the natural numbers. How many types of infinity are there? Or is this infinite itself?
-
-REPLY [2 votes]: I should point out that your examples are all the same type of infinite number: they are all cardinal numbers, measuring an aspect of sets that extends the notion of "how many?" There is a related notion of ordinal numbers which extend the notion of counting.
-But there are other types of infinite numbers. One family of related concepts includes the extended real numbers $+\infty$ and $-\infty$ that you might see in calculus. These two infinite numbers quantify something very different from what cardinal numbers quantify. The geometric idea they express is that $+\infty$ marks one "end" of the real number line, and $-\infty$ marks the other "end". There are a variety of other examples in this family, such as the projective real line, and the geometric concept of points at infinity (e.g. see the projective plane or read about elliptic curves).
-Another type of infinite number is probably better expressed in relation to infinitesimal numbers: objects that behave as if they were smaller in magnitude than any non-zero real number, but yet were non-zero themselves. In some such number systems (e.g. the hyperreal numbers), one can take the reciprocal of a non-zero infinitesimal number, and the result is an infinite number.
-All three of these types of infinite numbers behave very differently and are used for very different purposes.<|endoftext|>
-TITLE: Covering of a topological group is a topological group
-QUESTION [9 upvotes]: If we have a covering $p:H\rightarrow G$ where $G$ is a topological group, then $H$ is also a topological group. The multiplication function can be defined as follows. Consider the map $f:H\times H \rightarrow G$ which is a composition of the map $p\times p$ and the multiplication function on $G$. Choose $h\in p^{-1}(e)$ where $e$ is the identity element of $G$. If $$f_* (\pi_1(H\times H,(h,h))) \subset p_*(\pi_1(H,h)),$$ then $f$ can be lifted to a map $g:H\times H \rightarrow H$ such that $p\circ g = f$ and $g(h,h) = h$. Suppose we have shown the "if" part, then $g$ should function as our multiplication map on $H$. But given any $x\in H$, why do we know that $g(x,h) = x$ and that $g(x,h)$ does not equal any other element of $p^{-1}(p(x))$?
-
-REPLY [13 votes]: Consider the map $k: H\to H$ given by $k(x) = g(x,h)$. Then for any $x\in H$, we have $$p\circ k(x) = p\circ g(x,h) = m\circ (p\times p)(x,h) = m(p(x),e) = p(x),$$
-which implies that $k$ is a lift of $p\colon H\to G$. Note also that $k(h) = g(h,h) = h$. Thus $k$ and the identity are both lifts of $p$ that agree at a point, so they are equal. This implies $g(x,h)=x$ for all $x$.<|endoftext|>
-TITLE: Intuitive explanation of variance and moment in Probability
-QUESTION [21 upvotes]: While I understand the intuition behind expectation, I don't really understand the meaning of variance and moment.
-What is a good way to think of those two terms?
-
-REPLY [4 votes]: The probabilistic measures of localization and dispersion can be "seen" as the corresponding moments of mechanical systems of "probabilistic masses".
-The expectation has the following mechanical interpretation. Given that $F(x)$ is the "probabilistic mass" contained in the set $\{X \lt x\}$ (in one dimension), the mathematical expectation of the random variable $X$ is the static moment, with respect to the origin, of the system of "probabilistic masses".
-The variance is the mechanical analog of the moment of inertia of the "probabilistic masses" system with respect to the center of mass.
-The variance of the random variable $X$ is the second-order moment of $X-m$, where $m$ is the expectation of $X$, i.e. $m=E(X)=\displaystyle\int x\, dF(x)$ ($F(x)$ is the cumulative distribution function).
-The $k$-th order central moment is the expectation of $(X-m)^k$.<|endoftext|>
-TITLE: What is the modulus of a tensor on a Riemannian 3-manifold?
-QUESTION [5 upvotes]: Let $v^i$ be a vector on a Riemannian 3-manifold with metric $g_{ij}$ embedded inside a 3+1 space-time such that for some constant $N_M$ it satisfies the inequality $g_{ij}v^iv^j \leq N_M ^2$. Let $K$ be a symmetric rank-2 tensor on the 3-manifold. Then apparently the following holds:
-$$\vert K_{ij} v^i v^j \vert \leq \vert K \vert _g N_M ^2.$$
-This looks like some sort of a Cauchy-Schwarz inequality but given that $K$ is a tensor as described, I don't understand what the notation on the RHS means. For a rank-2 symmetric tensor $K$ what does $\vert K \vert _g$ mean?
-If one knows that for some function $N$, $g_{ij}v^iv^j \leq N^2,$ where the function $N$ is itself bounded between constants
-$$N_m \leq N \leq N_M,$$
-then using inequalities like the above one can apparently show the following bound:
-$$\int _{t_1} ^t \frac{1}{N} \Big(-v^i \partial _i N - \frac{dN}{dt} + K_{ij}v^iv^j\Big) dt
- \leq -2\log N_m + \frac{1}{N_m} \int _{t_1}^t (\vert \nabla N \vert _g N_M + \vert K \vert _g N_M ^2 )dt,$$
-for some fixed $t_1$ and $t$.
-I can't understand that first "log" term in the above.
-Also once the above bound is shown does it follow that the integral can be unbounded above or below depending solely on the property of the function $N$? If yes then what would be needed of $N$ to make the integral unbounded above or below?
-
-REPLY [4 votes]: I am just going to answer the first question. For a two-tensor we have
-$$|T|^2 = \left< T,T \right> = g^{ik}g^{jl}T_{ij}T_{kl},$$
-where $g$ is the metric, and the summation convention is understood. Note that this is the standard inner product structure induced by $g$, as mentioned by Jason DeVito. You might like to try and prove that this really does define a norm in the Riemannian case.
-More generally, for a $(k,l)$ ($k$ times contravariant, $l$ times covariant) tensor field $T$ (on the manifold) we have
-$$|T|^2 = \left< T,T \right> = g^{j_1q_1}g^{j_2q_2}\cdots g^{j_lq_l}g_{i_1p_1}g_{i_2p_2}\cdots g_{i_kp_k}T^{i_1i_2\cdots i_k}_{j_1j_2\cdots j_l}T^{p_1p_2\cdots p_k}_{q_1q_2\cdots q_l}.$$
-For your other questions, you could do worse than look in 'Einstein Manifolds' by Besse.<|endoftext|>
-TITLE: Iterative refinement algorithm for computing exp(x) with arbitrary precision
-QUESTION [13 upvotes]: I'm working on a multiple-precision library. I'd like to make it possible for users to ask for higher precision answers for results already computed at a fixed precision. My $\mathrm{sqrt}(x)$ can pick up where it left off, even if $x$ changes a bit, because it uses Newton-Raphson. But $\exp(x)$ computed using a Maclaurin series or continued fraction has to be computed from scratch.
-Is there an iterative refinement (e.g.
Newton-Raphson, gradient descent) method for computing $\exp(x)$ that uses only arithmetic and integer roots?
-(I know Newton-Raphson can solve $\log(y)-x=0$ to compute $\exp(x)$. I am specifically not asking for that. Newton-Raphson can also solve $\exp(y)-x=0$ to compute $\log(x)$. Note that each requires the other. I have neither right now as an arbitrary-precision function. I have arithmetic, integer roots, and equality/inequality tests.)
-
-REPLY [4 votes]: There is an algorithm for computing $\log_2(x)$ that might suit you.
-Combine that with the spigot algorithm for $e$, and you can get $\ln(x)$.
-From there, you can use Newton-Raphson to get $\exp(x)$.
-I don't know if this roundabout way ends up doing any better than just recomputing.<|endoftext|>
-TITLE: Formally proving that a function is $O(x^n)$
-QUESTION [5 upvotes]: Say I have a function
-\begin{equation*}
-f(x) = ax^3 + bx^2 + cx + d,\text{ where }a > 0.
-\end{equation*}
-It's clear that for a high enough value of $x$, the $x^3$ term will dominate and I can say $f(x) \in O(x^3)$, but this doesn't seem very formal.
-The formal definition is $f(x) \in O(g(x))$ if constants $k, x_0 > 0$ exist, such that $0 \le f(x) \le kg(x)$ for all $x > x_0$.
-My question is, what are appropriate values for $k$ and $x_0$? It's easy enough to find ones that apply (say $k = |a| + |b| + |c| + |d|$). By the formal definition, all I have to do is show that these numbers exist, so does it actually matter which numbers I use? For some value of $x$, $k$ could be anywhere from $1$ to $|a| + |b| + |c| + |d| + \dots$ From my understanding, it doesn't matter what numbers I pick as long as they 'work', but is this right? It seems too easy.
-Thanks
-
-REPLY [6 votes]: HINT $\quad\rm ax^3 + bx^2 + cx + d\ \le \ (|a|+|b|+|c|+|d|)\ x^3 \ $ for $\rm\ x > 1$<|endoftext|>
-TITLE: Supplementary reading for probability theory studies
-QUESTION [5 upvotes]: Can you advise some good books covering areas which are required for serious probability theory studies (e.g. measure theory, functional analysis)? Preferably this book should have some problem sets to work on. Thanks!
-
-REPLY [2 votes]: Jacod and Protter is great.
-Probability Theory: A Comprehensive Course by Klenke is worth looking at too.<|endoftext|>
-TITLE: How is the codomain for a function defined?
-QUESTION [15 upvotes]: Or, in other words, why aren't all functions surjective? Isn't any subset of the codomain which isn't part of the image rather arbitrary?
-
-REPLY [6 votes]: Here is an answer which directly addresses the question in the title: the codomain has to be given as part of the information telling you what the function is; it can't be deduced (or, in the language of the question, "defined") if all you know is the domain and values of the function. It is an extra piece of data. (This is why it seems arbitrary to you; you are thinking about how to determine the codomain from the other data, which can't be done! You have to be told what it is as part of the initial description of the function.)
-First note that this is incompatible with one traditional definition of a function as being a set of ordered pairs. The set of ordered pairs definition determines the domain and values of the function, but not the codomain. (I guess that some people do use this definition of function; for them, a function doesn't have a codomain separate from its image.)
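-To see concretely what gets lost (a small Python sketch of my own, not part of the original answer): the same set of ordered pairs is compatible with many codomains, so "is it surjective?" has no answer until the codomain is supplied as extra data.

    # The same graph (set of ordered pairs), viewed against two codomains:
    graph = {0: 0, 1: 1, 2: 4}   # the pairs (x, x**2) for x in {0, 1, 2}

    def is_surjective(graph, codomain):
        return set(graph.values()) == set(codomain)

    print(is_surjective(graph, {0, 1, 4}))        # True
    print(is_surjective(graph, {0, 1, 2, 3, 4}))  # False
    # The pairs alone cannot decide this: the codomain is extra data.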
-To define a function which has a domain and a codomain, one should instead use the scheme described by Ryan: a function is a triple (domain $A$, codomain $B$, set of elements of $A\times B$ determining its values).
-As to why we introduce the concept of codomain, Qiaochu's answer describes this.<|endoftext|>
-TITLE: Extension of previous problem, involving $\ell^p$ norm circles
-QUESTION [5 upvotes]: If you look at this previous problem, I asked how to find the sum of all the areas between two taxicab geometry circles. However, upon learning about $\ell^p$ norms, I thought it would be pretty interesting to extend the problem to all $\ell^p$ norm circles, not just $\ell^1$ (taxicab).
-If $p=1$, then the result has already been found (the total area is $\frac{5k^2-k-4}{2}$). If $p = \infty$, then each "circle" is just a square, and the area is also easily found (I'm too tired to think about it, but I think it would just be $4(k^2-1)$). Is there, however, a general formula for the area of each circle and the total area of the regions between circles in terms of $k$ and $p$; that is, what is the equation for the area of each overlapping region?
-The area of an individual circle, if I did it correctly, is the area of a Lamé curve with $r = p$ and a radius of $k-n$ (see the linked problem), which equals $\displaystyle 4(k-n)^2\frac{\Gamma(1+\frac{1}{p})^2}{\Gamma(1+\frac{2}{p})}$. This can be reduced to $\displaystyle 2(k-n)^2 \frac{\Gamma(\frac{1}{p})^2}{p \Gamma(\frac{2}{p})}$ (see equations 41 and 42 here).
-Here are some explanatory pictures:
-$k=5, p=1$
-
-$k=5, p=2$
-
-$k=5, p=3$
-
-REPLY [2 votes]: The superellipses would seem to fit your bill, as long as $p < \infty$.<|endoftext|>
-TITLE: How do I find the lowest $n$ for which $a^n \equiv 1 \pmod{b}$?
-QUESTION [7 upvotes]: This is mostly related to doing large modular exponentiation by hand. For example, a problem I was doing was to find the last 3 digits of $7^{9729}$; that is, find $7^{9729}\bmod{1000}$.
-Using the simplest consequence of Euler's theorem, I found that $7^{400}\equiv 1\pmod{1000}$, since $\varphi(1000)=400$. Using Carmichael's theorem, I found a smaller exponent, $7^{100}\equiv 1\pmod{1000}$, as $\lambda(1000)=100$. Now, by manually multiplying it out, I found that $7^{20}\equiv 1 \pmod{1000}$, and that is the first $n$ for which that is true, meaning I just need to find $7^9 \bmod{1000}$, making the answer 607.
-Is there a way to arrive at this answer without multiplying it out each time for every number I get? For example, could I do something like $13^{12937}\bmod{1000}$ without sitting around modding out multiples of $13^4$? (I know that the first $n$ for 13 would be 100, so no less than using Carmichael's theorem, but I want to know if there are other ways to find numbers lower than those given by Carmichael's theorem)
-
-REPLY [9 votes]: If $\rm\: gcd(a,10)=1\:$ then the order of $\rm\: a\:$ in $\:\mathbb Z/1000 \:$ must be a divisor of $100 = \lambda(1000)$. You can compute the order simply and quickly by computing in order $\rm a^2, a^4, a^5, a^{10}, a^{20}, a^{25}, a^{50}\:$ by squaring or multiplying previous entries. This requires at most 5 squarings and 2 multiplications $\rm (mod\ 1000)$. Obviously the same sort of optimized divisor lattice searching works for any modulus.
-Alternatively, one can use a well-known simple order algorithm for generic groups, based upon the order test.
-This thesis is a good reference on order algorithms in generic groups.
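-As a minimal Python sketch of this divisor search (my own illustration, not part of the answer; the built-in pow hides the explicit squaring chain, but the divisors tested are exactly the chain above, and the first divisor $d$ of $100$ with $a^d \equiv 1$ is the order, since the order divides $100$ and divides every such $d$):

    def order_mod_1000(a):
        """Order of a modulo 1000, for gcd(a, 10) = 1: test the divisors
        of lambda(1000) = 100 in increasing order."""
        for d in (1, 2, 4, 5, 10, 20, 25, 50, 100):
            if pow(a, d, 1000) == 1:
                return d

    print(order_mod_1000(7))    # 20, as found by hand in the question
    print(order_mod_1000(13))   # 100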
<|endoftext|>
-TITLE: Proving an upper bound for Prob[X>=E[X]]
-QUESTION [7 upvotes]: Let random variable $X\sim\text{Binomial}\left(a+b,\frac{a}{a+b}\right)$, where $a$ and $b$ are positive integers.
-I'm trying to prove that $\mathbb{P}[X\geq\mathbb{E}[X]]\leq\frac{3}{4}$, which appears to be true numerically.
-Does anyone have a suggestion on how to proceed?
-The complication in this particular problem is that the condition is not a strict inequality "$>$".
-I've tried the Chernoff bound but it's not tight enough.
-
-REPLY [4 votes]: Here is an answer giving the outline of a proof.
-The bound is tight
-If the bound is true then it is tight: if $a=1$, $b=1$ then
-$\mathbb{P}[X\geq \mathbb{E}[X]] = \mathbb{P}[X\geq 1] =1-\mathbb{P}[X=0] = 1-\frac{1}{2}\cdot\frac{1}{2}=\frac{3}{4}$
-Getting a feel for the problem
-As a rule of thumb (though this can be made precise), if $a>5$ and $b>5$ then we can approximate the Binomial by a normal random variable, by the central limit theorem. That is, $X\sim N(a,Var(X))$ approximately. The error in this approximation becomes increasingly small as $a$ and $b$ increase.
-Hence $\mathbb{P}[X\geq \mathbb{E}[X]] \approx \frac{1}{2}<\frac{3}{4}$ for $a>5$ and $b>5$.
-To make this precise, first read this: http://en.wikipedia.org/wiki/Binomial_distribution#Normal_approximation
-and then: Box, Hunter and Hunter (1978). Statistics for experimenters. Wiley. p. 130
-We are left with three cases: $a$ and $b$ both small, $a$ large and $b$ small, $b$ large and $a$ small.
-$b$ large and $a\leq 5$
-If $a+b>100$ then a Poisson approximation to the Binomial r.v. becomes appealing. That is, $X\sim Po(a)$ approximately.
-Then $\mathbb{P}(X\geq a)=1-\sum_{i=0}^{a-1}\frac{e^{-a}a^i}{i!}\leq 1-e^{-1}\approx 0.64<\frac{3}{4}$
-For explicit error bounds look in A bound on the Poisson-binomial relative error by Teerapabolarn (2007). Theorem 2.1 contains a good error bound.
-$b\leq 5$ and $a$ large
-Let $Y=a+b-X\sim \text{Bin}(a+b,\frac{b}{a+b})$; now if $a+b>100$ then $Y$ is approximately Poisson and
-$\mathbb{P}(X\geq a)\approx\mathbb{P}(Y\leq b) \leq \mathbb{P}(Po(1)\leq 1)=\frac{2}{e}\approx 0.74 < \frac{3}{4}$
-Remaining cases
-Note that the cases above are not quite mutually exclusive, but there are only finitely many cases left, namely
-1) $a\leq 5$ and $b\leq 100$
-2) $b\leq 5$ and $a\leq 100$
-This can be checked on a computer to give the bound.
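-For concreteness, the finite check can be done exactly in Python (my own sketch, not the author's code; it uses math.comb for exact binomial probabilities):

    from math import comb

    def prob_at_least_mean(a, b):
        """P[X >= E[X]] = P[X >= a] for X ~ Binomial(a+b, a/(a+b)), exactly."""
        n, p = a + b, a / (a + b)
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(a, n + 1))

    # Scan the remaining small cases; the maximum found is 3/4, at a = b = 1,
    # consistent with the tightness observation above.
    worst = max(((a, b) for a in range(1, 101) for b in range(1, 101)),
                key=lambda ab: prob_at_least_mean(*ab))
    print(worst, prob_at_least_mean(*worst))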
<|endoftext|>
-TITLE: Does projectivizing always fix problems at infinity? (Or, am I making a mistake somewhere?)
-QUESTION [10 upvotes]: This question is motivated by the following homework problem. I'm trying to explicitly compute the homeomorphism $f:S^2 \rightarrow \mathbb{CP}^1$ by using stereographic projection and considering $\mathbb{CP}^1 = \mathbb{C}\cup \{\infty\}$. I'll want to prove that this is an isometry, where $S^2$ has the standard angle metric and $\mathbb{CP}^1$ has the Fubini-Study metric given by $d(\overline{x},\overline{y})=2\cos^{-1}|(x,y)|$, where $x,y\in \mathbb{C}^2$ are unit vectors (and presumably $(-,-)$ is the usual Hermitian inner product). Later, I'll use this to explicitly compute the Lie group homomorphism $U(2)\rightarrow SO(3)$.
-My stereographic projection is from the north pole, takes the equator to the unit circle, and puts the south pole at the origin. What I've gotten so far is that for $z\not= 1$, \begin{equation*} f(x,y,z)=\left( \frac{x}{1-z} , \frac{y}{1-z} \right) = \frac{x+iy}{1-z} = [x+iy : 1-z ], \end{equation*} where these are coordinates in $\mathbb{R}^2$, $\mathbb{C}$, and $\mathbb{C}\subseteq \mathbb{CP}^1$ respectively.
-This is troublesome, because philosophically I'd expect that I should be able to define this for $(x,y,z)\not= (0,0,1)$ and then end up with a function to projective space that extends continuously over the north pole; that's sort of the point of projective space, to make $\infty$ into just another point. However, it is not immediately obvious that this works, although luckily \begin{equation*} \left| \frac{x+iy}{1-z} \right| = \sqrt{ \frac{|x+iy|^2}{(1-z)^2} } = \sqrt{ \frac{1-z^2}{(1-z)^2}}, \end{equation*} and the limit of this expression as $z\rightarrow 1^-$ is indeed $\infty$.
-So, fair enough. This ends up extending to a continuous function after all. But: Am I wrong in my philosophical understanding of projective space?
-(For what it's worth, I tried using my calculations to verify that $f$ is an isometry, and it didn't look like it was going to work out. So maybe I really am just doing something wrong.)
-
-REPLY [4 votes]: One complication in your situation is that you are mixing real and complex coordinates.
-If you were considering a map from a complex curve to $\mathbb{CP}^1$, then the kind of computation you are trying to make would work out more straightforwardly.
-Because you are looking at a map of real analytic manifolds, not complex analytic ones (concretely, you are working with the variables $x,y,z$, which are real coordinates), the point of view you have adopted is perhaps not quite as natural. Nevertheless, it can be made to work, as follows:
-$$[x+iy:1-z] \text{ (which is where you finished) } = [x^2 + y^2: (1-z)(x - i y)]$$
-$$ = [ 1 - z^2: (1-z)(x-iy)] = [1+z:(x-iy)].$$
-This rewriting of your map to $\mathbb{CP}^1$ is now well-defined in a neighbourhood of $(0,0,1)$ on the sphere. (The fact that I introduced a complex conjugate of $x + i y$ to facilitate the computation is related to the real vs. complex issue mentioned above. This is also essentially the same computation you made to check that your map tends to $\infty$ as $z \to 1$, just rewritten in homogeneous coordinates.)<|endoftext|>
-TITLE: Find the sum to n terms of the series $\frac{1} {1\cdot2\cdot3\cdot4} + \frac{1} {2\cdot3\cdot4\cdot5} + \frac{1} {3\cdot4\cdot5\cdot6}\ldots $
-QUESTION [8 upvotes]: Find the sum to n terms of the series $\frac{1} {1\cdot2\cdot3\cdot4} + \frac{1} {2\cdot3\cdot4\cdot5} + \frac{1} {3\cdot4\cdot5\cdot6}\ldots $
-Please suggest an approach for this task.
-
-REPLY [9 votes]: This is similar to what has been said by Branimir, but shows how we can extend the result to
-$$\sum_{k=1}^n {1 \over k(k+1) \cdots (k+m)}, \qquad m \in \mathbb{N}.$$
-We can build up the result from the identities
-$${1 \over k(k+1)} = {1 \over k} - { 1 \over k+1}, \qquad (1)$$
-$${1 \over k(k+1)(k+2)} = {1 \over 2} \left( {1 \over k(k+1)} - { 1 \over (k+1)(k+2)} \right),$$
-$${1 \over k(k+1)(k+2)(k+3)} = {1 \over 3} \left( {1 \over k(k+1)(k+2)} - { 1 \over (k+1)(k+2)(k+3)} \right), \quad \textrm{ etc...}$$
-Write $S_1 = \sum_{k=1}^n {1 \over k(k+1)},$
-$S_2 = \sum_{k=1}^n {1 \over k(k+1)(k+2)},$ etc.
-Summing for $S_1$ using (1), all terms on the RHS cancel, and we get the classic
-$$S_1 = \sum_{k=1}^n {1 \over k(k+1)} = 1 - {1 \over n+1} = {n \over n+1}.$$
-We then sum the series for $S_2$ using this result obtained for $S_1,$ and so on.
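-Carrying the telescoping one more step (my own addition, not part of the answer), the $m=3$ identity gives the closed form $\sum_{k=1}^n \frac{1}{k(k+1)(k+2)(k+3)} = \frac{1}{18} - \frac{1}{3(n+1)(n+2)(n+3)}$, which the following Python snippet verifies with exact rational arithmetic:

    from fractions import Fraction

    def partial_sum(n):
        return sum(Fraction(1, k*(k+1)*(k+2)*(k+3)) for k in range(1, n+1))

    def closed_form(n):
        return Fraction(1, 18) - Fraction(1, 3*(n+1)*(n+2)*(n+3))

    assert all(partial_sum(n) == closed_form(n) for n in range(1, 100))
    print(closed_form(3))   # 19/360, the sum of the three displayed terms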
-
-REPLY [5 votes]: HINT $\quad\rm\displaystyle\ \frac{1}{(k+1)(k+2)(k+3)(k+4)} = \frac{1}{6(k+1)} - \frac{1}{2(k+2)}+\frac{1}{2(k+3)}-\frac{1}{6(k+4)}$
-$\rm\ f(k+1)-f(k)\: = $ above $\rm\displaystyle\ \Rightarrow\ f(k) \:=\: c_0 + \frac{c_1}{k+1}\ \:+\:\ \frac{c_2}{k+2}\ \:+\:\ \frac{c_3}{k+3}$
-Calculating yields $\rm\ c_0,c_1,c_2,c_3 \ =\ 1/18,\ -1/6,\ 1/3,\ -1/6$.
-For remarks on the group theory behind rational indefinite summation see my post here<|endoftext|>
-TITLE: Why an inconsistent formal system can prove everything?
-QUESTION [31 upvotes]: I am reading a Set Theory book by Kunen. He presents first-order logic and claims that if a set of sentences is inconsistent, then it proves every possible sentence. Since he does not explicitly specify the inference rules, I became curious as to how fundamental this property of inconsistent systems is.
-So my question is what is the simplest proof, with the least use of assumptions, of the vague claim that "inconsistent systems can prove anything" - in particular I'm interested in the assumptions about the system needed to prove this - is it true only for first order logic? Only for first order logic with the "standard" rules of inference (Modus ponens and GEN)? Or is it such a basic truth that it can be proved for every "reasonable" proof system (and what is "reasonable")?
-
-REPLY [11 votes]: It doesn't have to: logics which don't are called paraconsistent.
-The most important paraconsistent logic is relevance logic, which repudiates the K axiom:
-$$\alpha \rightarrow (\beta \rightarrow \alpha)$$
-and replaces it by axioms that do not allow there to be unused assumptions. This amounts to rejecting weakening, the principle that if $\Gamma \vdash \alpha$ then $\Gamma'\vdash \alpha$ for $\Gamma\subset\Gamma'$. This blocks derivations such as Weltschmertz's, which appeals to the K axiom once, and Asaf's, which uses it twice; Francesco appeals to monotonicity in his proof, which is another name for weakening.
-It's not difficult to see that this also blocks proofs of everything from a contradictory pair of propositions in a logic satisfying compactness, since one can prove inductively about such proof systems that if $\alpha\rightarrow\beta$, then all positive atoms in $\beta$ must occur either negatively in $\beta$ or positively in $\alpha$. So if our contradictory pair (over an assumption) takes the form $\alpha\rightarrow\beta$ and $\alpha\rightarrow\neg\beta$, we need to prove for any $\gamma$ that $\alpha\rightarrow\gamma$. But if we choose $\gamma$ to be any positive atom not occurring in $\alpha$, our inductive proof tells us this cannot be done. We need compactness here to ensure that the basis for all contradictory pairs can be expressed by a finitary formula.<|endoftext|>
-TITLE: Why are some mathematical constants irrational by their continued fraction while others aren't?
-QUESTION [8 upvotes]: Catalan's Constant and quite a few other mathematical constants are known to have an infinite continued fraction (see the bottom of that webpage). On wikipedia (I'm sorry, I can't post any more hyperlinks because of my low rep.), a condition for irrationality of that continued fraction is given (see 'generalized continued fractions'). It is said that a given continued fraction converges to an irrational limit if $ b_n > a_n $ in the continued fraction $b_0 + a_0/(b_1 + a_1/(b_2 + a_2/(...b_n+a_n)))$ for all sufficiently large $n$.
In the webpage I provided you with, however, the degree of the polynomial $a_n$ of the continued fractions is bigger than that of $b_n$. Therefore $b_n$ will eventually be smaller than $a_n$, no matter how large $n$ gets.
-My question is: Why does the degree of $a_n$ need to be smaller than the degree of $b_n$ in order for a continued fraction representation of a constant to be irrational? I think I read somewhere it had to do with something like 'Tietze's criterion' (but I'm not sure). (bonus question: Does anyone know where a proof of this 'criterion for irrationality' can be found?)
-Thanks,
-Max Muller
-
-REPLY [2 votes]: Though infinite, Zudilin's continued fraction for Catalan's constant is not "simple". So, the irrationality theorem does not apply to Zudilin's result. The distinction between "simple" and "generalized" continued fractions can be found at
-http://en.wikipedia.org/wiki/Continued_fraction
-Best wishes,
-Dr. Fabio M. S. Lima<|endoftext|>
-TITLE: When does the topological boundary of an embedded manifold equal its manifold boundary?
-QUESTION [17 upvotes]: Suppose I embed a manifold-with-boundary $M$ in some $\mathbb{R}^n$. Are there conditions (necessary, sufficient, or both) that can help determine when the topological boundary of $M$ is equal to the manifold boundary?
-By "topological boundary," I'm referring to $\text{Bd } M$, which is the closure minus the interior (relative to $\mathbb{R}^n$).
-By "manifold boundary," I mean the boundary $\partial M$ that is specified in the definition of "manifold-with-boundary."
-
-REPLY [2 votes]: In order to complement and clarify Chapman's answer:
-For a topological manifold $M$, possibly with boundary, I will use the notation $\partial M$ to denote its boundary and $int(M)=M\setminus \partial M$ its interior (both are understood here in the sense of manifold topology).
-For a subset $Y$ of a topological space $X$, I will use the notation $Int(Y)$ to denote its topological interior, i.e. the union of all open subsets of $X$ contained in $Y$. Accordingly, I let $Fr(Y)$ denote the frontier of $Y$ in $X$, which is $cl(Y)\setminus Int(Y)$. (I prefer the terminology "frontier" to that of "topological boundary," which appears in the OP.)
-Suppose now that $M$ is an $m$-dimensional topological manifold, possibly with boundary, $m\le n$, and $f: M\to E^n$ is a topological embedding. Then the following proposition relates the boundary of $M$ and the frontier of $f(M)$ in $E^n$:
-Proposition. 1. Suppose that $m< n$. Then:
-(a) $f(M)\subset Fr(f(M))$.
-(b) If $M$ is compact, then $f(M)= Fr(f(M))$.
-
-2. Suppose that $m=n$. Then:
-
-(a) $f(int(M))= Int(f(M))$ and $f(\partial M)\subset Fr(f(M))$.
-(b) If $M$ is compact, then $f(\partial M)= Fr(f(M))$. If $M$ is noncompact, this equality might fail.
-Proof. 1a is an immediate consequence of the invariance of domain theorem: an open subset of $E^m$ cannot be homeomorphic to an open subset of $E^n$, implying that $Int(f(M))$ is empty.
-1b. Since $M$ is compact, $f(M)$ is closed in $E^n$, hence $Fr(f(M))= f(M)\setminus Int(f(M))= f(M)$.
-2a. $f(int(M))\subset Int(f(M))$ is a consequence of the invariance of domain theorem again. Suppose that $y=f(x)\in Int(f(M))$. Then there is an open ball neighborhood $V$ of $y$ contained in $f(M)$. By applying the invariance of domain theorem to $f^{-1}: f(M)\to M$, we see that $x$ is an interior point of $M$. Thus, $f(int(M))= Int(f(M))$. In other words, $f(\partial M)\cap Int(f(M))=\emptyset$.
It follows that
-$f(\partial M)\subset Fr(f(M))$.
-2b. Since $M$ is compact, $f(M)$ is closed in $E^n$, hence $Fr(f(M))=f(M)\setminus Int(f(M))= f(M)\setminus f(int(M))= f(\partial M)$.
-To get an example of a noncompact manifold where the equality fails, take, for instance, $M=[0,1)$ and $f$ the identity inclusion $M\to {\mathbb R}=E^1$. Then $Fr(M)= \{0, 1\}$, while $\partial M= \{0\}$. Similar examples exist in all dimensions. qed<|endoftext|>
-TITLE: Classification Theorem for Non-Compact 2-Manifolds? 2-Manifolds With Boundary?
-QUESTION [11 upvotes]: I recently learned about the Classification Theorem for compact 2-manifolds. Is there a similar classification theorem for ALL 2-manifolds, not just the compact ones?
-Moreover, is there a theorem which classifies the 2-manifolds with boundary?
-
-REPLY [17 votes]: Here's a summary of the situation regarding noncompact $2$-manifolds with boundary, thanks to Moishe Kohan and Jacques Darné.
-Originally, I posted an answer pointing to the 2007 paper Classification of noncompact surfaces with boundary by A. O. Prishlyak and K. I. Mischenko, Methods Funct. Anal. Topology 13 (2007), no. 1, 62–66. However, Kohan pointed out that the classification had actually been completed much earlier by E. Brown and R. Messer (The classification of two-dimensional manifolds, Trans. Amer. Math. Soc. 255 (1979), 377–402). Then more recently Darné pointed out that the theorem claimed by Prishlyak and Mischenko is false, because it contradicts that of Brown and Messer. See Darné's comment below for details.
-So the upshot is that the correct classification of noncompact $2$-manifolds with boundary was completed in 1979 by Brown and Messer in the paper cited above.<|endoftext|>
-TITLE: Real-measurable cardinals that are not measurable ones
-QUESTION [11 upvotes]: I'm reading Jech's Set Theory, and in the chapter about measurable cardinals there is a theorem that if $\kappa$ is real-measurable but not measurable then it is $\le 2^{\aleph_0}$ and so on. (Corollary 10.10)
-How can a cardinal number be real-measurable without being measurable? Can't a measure be "destructed" (as in the opposite of constructed) into a trivial $0,1$ measure?
-
-REPLY [6 votes]: It is a good question. The answer is that a real-valued measurable cardinal need not be strongly inaccessible, whilst every measurable cardinal is strongly inaccessible. Indeed, it is consistent that the continuum itself is a real-valued measurable cardinal, but the continuum can never be a measurable cardinal, since every measurable cardinal is strongly inaccessible.
-Nevertheless, part of what you claim is true: Solovay proved that every real-valued measurable cardinal $\kappa$ is fully measurable (with a two-valued measure) in an inner model of the universe. That is, if $\kappa$ is a real-valued measurable cardinal, then there is a definable transitive class $W$ satisfying ZFC in which $\kappa$ is an actual measurable cardinal. The class $W$ is defined directly from the real-valued measure on $\kappa$, and this provides a sense in which the measure is deconstructed to form a 2-valued measure.
-
-REPLY [6 votes]: Two-valued measures behave very differently from real-valued measures. For example, suppose $\mathcal{U}$ is a countably complete ultrafilter on a set $X$ and suppose that $f:X\to2^\omega$ is an injection. There is a $b \in 2^\omega$ such that
-$$B_n = \{ a \in X : f(a)(n) = b(n) \} \in \mathcal{U}$$
-for every $n < \omega$. By countable completeness, $B = \bigcap_{n<\omega} B_n \in \mathcal{U}$.
But $B$ contains exactly one element (namely $f^{-1}(b)$) since $f$ is an injection. Therefore, $\mathcal{U}$ is a principal ultrafilter.
-This argument shows that the first measurable cardinal is larger than $2^{\aleph_0}$. Indeed, a slightly more general argument can be used to show that a measurable cardinal must be inaccessible. However, this argument cannot be carried out with a real-valued measure. In fact, it is possible (assuming the consistency of a measurable cardinal) for Lebesgue measure to be extended to a measure defined on all subsets of $\mathbb{R}$.<|endoftext|>
-TITLE: Very ample sheaf on a blowup
-QUESTION [5 upvotes]: Suppose one wants to prove that a $1$-dimensional integral proper scheme over an algebraically closed field is projective. This is a step in how Hartshorne has you prove that any $1$-dimensional proper scheme (over an algebraically closed field) is projective.
-The method is first to note that the normalization is non-singular and hence projective. Consider $f: \tilde{X}\to X$, and let $\mathcal{L}$ be a very ample sheaf on $\tilde{X}$. The goal is to prove that there is an effective divisor $D=\sum P_i$ such that $\mathcal{L}(D)\simeq \mathcal{L}$ and such that the $f(P_i)$ are all non-singular points.
-The rest of the proof follows from the fact that there merely exists some very ample sheaf with that property, but the exercise seems to imply that any very ample sheaf satisfies this property.
-I think I'm missing something really obvious, but take a proper curve over $k$, say $C$, that has a single singularity. Blow up that singularity, $\pi: \tilde{C}\to C$; this is the normalization. Take a point $P\in \tilde{C}$ such that $\pi(P)$ is the singularity. It seems to me that $D=P$ is an effective divisor, so $\mathcal{L}(D)$ is an invertible sheaf. If the above is correct, then $\mathcal{L}(D)$ cannot be very ample. Does anyone have a simple reason for this?
-
-REPLY [4 votes]: Notice that in your 2nd paragraph you say the goal is to prove that «there is an effective divisor $D=\sum P_i$ such that $\mathcal{L}(D)\simeq \mathcal{L}$ and such that the $f(P_i)$ are all non-singular points», but in your 4th paragraph you are picking a specific point and asking if there is an effective divisor with a specific support.
-The two things have quantifiers in different places. In particular, the claim in your 2nd paragraph does not say that you can pick the support of the divisor as you want!<|endoftext|>
-TITLE: Surjection on composed function?
-QUESTION [5 upvotes]: Helping a buddy with his "intro to math" course for CompSci. I'm afraid my 'leet math skills are already giving out, so any help would be appreciated!
-If f and g are both surjective functions, then so is the composition f o g. (so far so good).
-I would expect the converse to be true too: If f o g is a surjective function, does that mean f and g are surjective?
-From my research so far it seems that in this case f is indeed surjective but g is not necessarily so. Can anybody explain why, please? An example would rock!
-
-REPLY [2 votes]: For a treatment of these basic facts about what happens to injectivity and surjectivity under composition, see Section 2.5 of these lecture notes from a "transition to upper level mathematics" course that I taught twice at UGA. Note that elsewhere in the course there were exercises asking students to prove that certain other statements which are not listed here are not always true: e.g.
if $g \circ f$ is surjective, then $g$ must be surjective (as shown in the notes) but $f$ need not be. Similarly, if $g \circ f$ is injective, then $f$ must be injective (...) but $g$ need not be.
-Given how fundamental and ubiquitous these results are, it's surprising that they are not always covered in standard texts. (I guess the student is supposed to figure them out for herself, but in my experience most actual undergraduates do much better if they are exposed to these concepts in the context of an actual formal course.)<|endoftext|>
-TITLE: The Power of Lambda Calculi
-QUESTION [14 upvotes]: A simple question here, which likely demands a somewhat complex answer... Or rather, a set of related questions.
-
-What are the advantages of typed lambda calculus over untyped lambda calculus in terms of proof theory?
-Specifically, Church's original Lambda Calculus was untyped and allows arbitrarily high-order functions. What are the limitations with respect to constructing a proof calculus from it?
-Are not untyped and typed lambda calculi inherently higher-order formal systems?
-What are the reasons for using complex (e.g. polymorphic/dependent) type theories over the simple type theory in lambda calculus? Are they more 'powerful' in some sense; if so, how exactly?
-Do semantics (interpretation) have anything to say here, with respect to typed and untyped theories, especially in terms of soundness and completeness?
-The well-known proof verifier Coq (I believe) uses a language of higher-order complex-typed lambda calculus to represent proofs in constructive (intuitionistic) mathematics. I have read that the theory behind it (the Calculus of Constructions) is essentially an extension of the Curry-Howard isomorphism to higher-order logic. Are there any elaborations/clarifications I should be aware of here?
-
-REPLY [9 votes]: What are the advantages of typed lambda calculus over untyped lambda calculus in terms of proof theory? and also
-Specifically, Church's original Lambda Calculus was untyped and allows arbitrarily high-order functions. What are the limitations with respect to constructing a proof calculus from it?
-
-Simply, typed lambda calculus has a proof theory, and untyped lambda calculus doesn't because it lacks a normal-form theorem.
-
-Are not untyped and typed lambda calculi inherently higher-order formal systems?
-
-This is tricky, because there are two ways of looking at higher-orderness. They are higher-order when you look at the terms, since the definition of higher order is abstraction over higher-order entities like functions, and lambda abstraction is abstraction. But note, under the formulae-as-types correspondence, the propositions in logic are associated with the types of the lambda terms, and in, say, the simply-typed lambda calculus, there is no abstraction over anything. This point confused me when I first studied Church's simple theory of types, because it is a higher-order calculus based on simply-typed lambda calculus, where the propositions are formed using the lambda terms, rather than, as with the formulae-as-types correspondence, being the types.
-
-What are the reasons for using complex (e.g. polymorphic/dependent) type theories over the simple type theory in lambda calculus? Are they more 'powerful' in some sense; if so, how exactly?
-
-They do add power.
Under the formulae-as-types correspondence, the simply-typed lambda calculus has propositional calculus as its matching logic; adding dependent types adds universal and existential quantification at higher finite types to its logic. Polymorphic types allow all the usual mathematical entities to be constructed (although without the usual theory of provability) and have high proof-theoretic strength: strong normalisation of system F has (over a base theory such as $\mathrm{RCA}_0$) the same strength as the consistency of second-order arithmetic, even though it lacks an induction principle.
-
-Do semantics (interpretation) have anything to say here, with respect to typed and untyped theories, especially in terms of soundness and completeness?
-
-They are much less useful with these sorts of theories than model theory is with classical logic, although there are good applications of category theory to the semantics of type theory.
-
-The well-known proof verifier Coq (I believe) uses a language of higher-order complex-typed lambda calculus to represent proofs in constructive (intuitionistic) mathematics. I have read that the theory behind it (the Calculus of Constructions) is essentially an extension of the Curry-Howard isomorphism to higher-order logic. Are there any elaborations/clarifications I should be aware of here?
-
-Yes: polymorphic types are tricky, and non-conservative over the base theory. I'd recommend starting an exploration of formulae-as-types with Martin-Löf's dependent type theory. If you want to work with a proof assistant, there is Agda, which is a functional programming language whose type system is Martin-Löf's type theory.<|endoftext|>
-TITLE: Is addition continuous?
-QUESTION [16 upvotes]: I'm going to ask a very silly question, so I'm begging you to be understanding if it is absolutely trivial, or if it's an exercise in some Bourbaki. I'm afraid of asking you, because the question entails this particular case: is the addition of (a finite, but variable quantity of) real numbers a continuous function?
-Ok, let me try to explain what I mean.
-I'm insisting on doing exercises about topological groups, to see if I can understand something about those previous questions: question 1, question 2 and question 3. This one should be more elementary.
-Let $G$ be a topological Abelian group, written additively, and let $I$ be an arbitrary set. I don't mind if you take $G = \mathbb{R}$, with the usual Euclidean topology and the usual addition of real numbers, but you cannot assume $I$ is finite, nor countable. Let
-$$
-G^I = \prod G = \prod_\alpha G_\alpha , \qquad \text{where}\ G_\alpha = G \qquad \text{for all}\ \alpha \in I \ .
-$$
-Let us denote by $(x_\alpha)$ the elements of $\prod G$. This is an Abelian topological group with the usual product topology and operations defined component-wise:
-$$
-\begin{align}
-(x_\alpha) + (y_\alpha) &= (x_\alpha + y_\alpha) \\\
--(x_\alpha) &= (-x_\alpha)
-\end{align}
-$$
-Consider also the weak product $\prod' G$ (aka, direct sum). That is, elements of $\prod' G$ are those tuples $(x_\alpha) \in \prod G$ such that $x_\alpha = 0$, except for a finite number of indexes $\alpha \in I$.
-$\prod'G $ is of course a subgroup of $\prod G$, $\prod' G \subset \prod G$. Let's consider the subspace topology on $\prod' G$, induced from the product topology on $\prod G$.
-Then we have a well-defined addition map that we did not have on $\prod G$:
-$$
-\sum : \prod ' G \longrightarrow G \ , \qquad (x_\alpha ) \mapsto \sum_\alpha x_\alpha \ .
-$$
-This makes sense even though $I$ is not necessarily an ordered set: given an element $(x_\alpha) \in \prod'G$ with non-zero components $x_{\alpha_1}, \dots , x_{\alpha_n}$, we have no canonical order in which to perform the addition $x_{\alpha_1} + \cdots + x_{\alpha_n}$. Nevertheless, since the sum is associative and commutative, we may do it as we please: the result will always be the same. Hence $\sum_\alpha x_\alpha$ is well-defined.
-Question. Is this map $\sum$ continuous?
-That is, if you don't like too much abstraction in your life, but prefer to be very specific: is the sum of real numbers
-$$
-\sum: \prod' \mathbb{R} \longrightarrow \mathbb{R} \ , \qquad (x_\alpha) \mapsto \sum_\alpha x_\alpha
-$$
-continuous?
-Remark 1. The question seems silly, doesn't it? Well, maybe it is, but the first answer that came to my mind is wrong, as far as I can see: you cannot say "this $\sum$ is continuous because it is a composition of continuous maps"; namely, the iteration of the addition of $G$. Yeah, but: which composition? Notice that there is no canonical choice for doing the addition in a specific order (there could be no order at all on $I$). So every time you compute $\sum_\alpha x_\alpha$ you can change the order in which you perform the (continuous) sums $x_\alpha + x_\beta$. So, $\sum$ is not a composition of a finite number of specific maps.
-(Edit. Previous Remark 2 was wrong.)
-
-REPLY [2 votes]: It might be added that the example shows that the true coproduct of a non-finite number of copies of $\mathbb R$ does not have the subspace topology from the corresponding product. Indeed, on the usual categorical grounds, there is only (at most) one topology that fulfills the requirement (with regard to all possible mappings from the coproduct to all other topological groups/vector spaces). As the example shows, the "correct" coproduct topology is considerably finer than the restriction of the product topology: for [Edit:] locally convex topological vector spaces it is essentially a "diamond" topology.
-[Edit 2:] The diamond topology on a coproduct/sum of $V_i$ has local basis at 0 given by convex hulls of (images of) opens at $0$ in $V_i$. In the locally convex category, it is pretty clear that this has the desired property, thus constructing the coproduct.
-[Edit: corrections related to "locally convex" modifier and "uncountable" modifier...]
-The situation is not as trivial/boring as one might imagine, since uncountable coproducts of copies of $\mathbb R$ in the category of not-necessarily locally convex topological vector spaces are themselves not locally convex, due to the existence of not-locally-convex topological vector spaces, the $\ell^p(I)$ spaces with $0<p<1$. Indeed, given a convex neighborhood $N$ of $0$ in the coproduct, for each $i\in I$ there is $\delta_i>0$ such that $N\cap \mathbb R_i\supset (-\delta_i,\delta_i)$. For $I$ uncountable, there is some $n_o$ such that there are infinitely many $i_1,i_2,\ldots \in I$ with $\delta_{i_j}\ge 1/n_o$. Then the $p$-norms of the ever-larger convex combinations of the $i_j$-th inclusions of the $\delta_{i_j}$ are $\delta_{i_1}^p/n^p+\ldots+\delta_{i_n}^p/n^p$. These are bounded below by $n/(n_o^p n^p)=n^{1-p}/n_o^p$, which goes to $+\infty$. This contradicts any hope for a continuous induced map to $\ell^p(I)$ with $0<p<1$.<|endoftext|>
-TITLE: ZF is almost finitely axiomatizable
-QUESTION [9 upvotes]: I want to show that there is a finite conjunction $\phi$ of axioms of $ZF$, such that every transitive proper class $M$ that satisfies $\phi$ is already a model of $ZF$.
-This is an exercise in Kunen's set theory.
There is a hint: it seems to be useful to apply the Reflection principle to the union $M = \bigcup_{\alpha} (M \cap R(\alpha))$. But I don't know with which axioms we can do that (we can only use finitely many!), and why this yields an ordinal which is independent of $M$. Please give me only a hint, because basically I want to solve this on my own, but I don't know how to start with the hint above.
-Also, what is the "philosophical" reason that we cannot deduce from this that $ZF$ is finitely axiomatizable (which is wrong)? I mean I cannot prove that this $\phi$ above proves every axiom, but is there also a deeper reason for this?
-EDIT: There was an answer with some hints, but it was deleted...
-I still don't know how to produce this strange sentence $\phi$.
-
-REPLY [8 votes]: Suppose that $M$ satisfies all the easy things like extensionality, pairing, etc., plus $\Delta_0$ separation, and suppose also that $M$ thinks for every ordinal $\alpha$ that $V_\alpha$ exists (that is, $V_\alpha^M$ exists in $M$). All this is expressible by a single axiom, but when $M$ is a proper class, it ensures that $M$ satisfies the entire Collection scheme, because for any formula $\varphi(x,y)$ and set $A\in M$ for which we want to collect, we may apply Collection in $V$ to find an ordinal $\alpha$ such that whenever $a\in A$ has $\exists y\varphi(a,y)$ in $M$, then such a $y$ may be found in $V_\alpha$ and hence in $V_\alpha^M$. Thus, $V_\alpha^M$ serves as a collection set inside $M$. Similarly, one can get full Separation in $M$ from $\Delta_0$ Separation (or much less, if you care to optimize it), by applying the Reflection theorem, which allows you to bound all the quantifiers by a suitably large $V_\alpha^M$.
-The argument works only when $M$ is a proper class, however, because it appeals to Collection in $V$, and uses that $M$ grows taller than the resulting collection set. This method would break down completely when $M$ is a set, since the collection set in $V$ could be unbounded in $M$.<|endoftext|>
-TITLE: Sequence of measurable functions
-QUESTION [6 upvotes]: If $h(x,t)$ is measurable and the measures involved are $\sigma$-finite, does there exist a sequence of functions
-$$h_n(x,t) = \sum_{j=1}^{N_n} f_{j,n}(x) 1_{F_{j,n}}(t)$$
-where the sets are pairwise disjoint (in $j$) such that $h_n \to h$ pointwise almost everywhere?
-I know that if we fix $x$ we can get a sequence like that, but we cannot just make the coefficients depend on $x$ since the sets $F_{j,n}$ will be different. Can I somehow combine them? Do I need the $\sigma$-finiteness?
-If the answer is positive, we can use this to prove Minkowski's inequality for integrals quite easily.
-
-REPLY [2 votes]: In contrast to William, I think the answer is yes. I had to prove something similar once; here is a suitably adapted argument. It might be more complicated than necessary.
-First, reduce to the case that $(\Omega_1, \mu_1), (\Omega_2, \mu_2)$ are finite measure spaces and $h$ is bounded. Let $\mathcal{P}$ be the set of all functions on $\Omega_1 \times \Omega_2$ of the form $F(x,y) = f(x) g(y)$ where $f$ is bounded and measurable and $g$ is simple; such functions can be written in the form you seek. Let $\mathcal{Q}$ be the linear span of $\mathcal{P}$; these functions can also be written in the desired form. Let $\mathcal{L}$ be the closure of $\mathcal{Q}$ in $L^1(\mu_1 \times \mu_2)$ and let $\mathcal{L}_b$ be the bounded functions from $\mathcal{L}$.
-Now $\mathcal{L}_b$ is a vector space which is closed under bounded convergence (by the dominated convergence theorem), contains the constants, and contains $\mathcal{P}$. $\mathcal{P}$ is closed under multiplication and contains all functions of the form $1_{A \times B}$ with $A \subset \Omega_1$, $B \subset \Omega_2$ measurable; the collection of such sets $A \times B$ generates the product $\sigma$-algebra. By the functional version of the Dynkin $\pi$-$\lambda$ theorem (references below), $\mathcal{L}_b$ contains all bounded measurable functions; in particular it contains $h$.
-Since $h$ is in $\mathcal{L}$, which is the $L^1$ closure of $\mathcal{Q}$, there is a sequence $\mathcal{Q} \ni h_n \to h$ in $L^1$. Then some subsequence converges almost everywhere.
-For the functional $\pi$-$\lambda$ theorem, see Theorem 8.2 of Bruce Driver's probability notes. A reference is also given to C. Dellacherie, Capacités et processus stochastiques, page 14.<|endoftext|>
-TITLE: Generalizing values which Euler's-totient function does not take
-QUESTION [16 upvotes]: I was reading about Euler's totient function on wikipedia, and it eventually led me to this book on google:
-Page 74 of the book, Prime numbers: the most mysterious figures in math - By David G. Wells.
-Anyway, the book lists many assertions without proof, only references which I can't find. One I could not solve for myself said that not all even numbers are values of $\phi(n)$. The sequence of even non-values of $\phi(n)$ starts:
-$14, 26, 34, 38, 50, 62,\dots$
-After thinking about it for a while, I've made little headway. Looking at 14 specifically, I suppose if such a solution $a$ to $\phi(x)=14$ were to exist, then $(a,m_i)=1$ for $1\leq i\leq 14$, for some $m_i\lt a$, and so there must also exist inverses $\overline{a_i}$ such that $\overline{a_i}a\equiv 1 \pmod {m_i}$ for each $i$. I'm looking for some contradiction, possibly via the Chinese Remainder Theorem, to show such $a$ cannot exist.
-Is there some way to generalize which values are not taken by $\phi$, or at least explain why this is the case? I was hoping to see why $14$ is the least such integer for which this is true, while values $1$ to $13$ must indeed be taken. I suppose this would also explain why 26 is the next such value that is not taken, while $15$ to $25$ are.
-
-REPLY [17 votes]: It's just casework on the prime factorization of $n$. If $\phi(n) = 14$ then $n$ can't be divisible by any primes larger than $7$ (because $p-1$ cannot divide $14$ for such primes), and it can't be divisible by either of $5, 7$ because $4, 6$ don't divide $14$. So it can only be divisible by $2$ or $3$. Since $14$ is not divisible by $3$, $3$ can divide $n$ at most once, so $n$ is of the form $2^k 3$. But then $\phi(n) = 2^k$; contradiction.
-Similarly, if $\phi(n) = 26$ then $n$ can't be divisible by any primes larger than $13$, and it can't be divisible by any of $5, 7, 11, 13$ because $4, 6, 10, 12$ don't divide $26$. So it can only be divisible by $2$ or $3$, and then the same argument as above works.
-For any particular potential value of $\phi(n)$ a similar, but longer, argument should work; in particular it should be possible to test in finite time whether a particular candidate works.
-
-REPLY [7 votes]: From the comments to A005277:
-If p is prime then the following two statements are true. I. 2p is in the sequence iff 2p+1 is composite (p is not a Sophie Germain prime). II. 4p is in the sequence iff 2p+1 and 4p+1 are composite.
- Farideh Firoozbakht (mymontain(AT)yahoo.com), Dec 30 2005
-This covers most of your cases; 50 is covered by the next comment.<|endoftext|>
-TITLE: Zero-Sum Game Theory
-QUESTION [6 upvotes]: A team of programmers and I are programming a robot to play a game in a competition. We did not create the hardware, and all teams will use the same type of robot.
-
-GAME DESCRIPTION
-The game has two players opposing each other, both trying to achieve an objective. This objective is to move to a position to pick up a payload, then return to another position to drop off the payload. Each team has their own payload and their own target to bring it to, and it is impossible to pick up the other team's payload. The team that brings their payload to the target first wins and the game is ended. There is a time limit of 210 seconds, and if the game times out, the team who held the payload the longest wins.
-However, it is a little more complicated than that. The robots also have the ability to face the opponent and "push them back" from any range (the closer they are, the more forcefully they push). Also, if a robot is pushed out of the bounds of the playing field, they drop their payload and it is moved back inside the playing field.
-
-GAME THEORY QUESTIONS
-First of all, is this a zero-sum game or not? I am new to game theory math and I am not completely sure if it is.
-Also, how does the minimax theorem apply to this specific game? I understand how it works, but I do not know what the values to account for would be in this game (would it be the difference in how long each team has held the payload, somehow combined with how close they are to bringing it to the target?)
-I really am not sure at all how to calculate any of this, but if anyone can at least point me in the right direction for coming up with an effective strategy system for this, I would be very appreciative.
-Thank you so much for your time, this project is very important to me and your help means a lot. If I need to clarify anything please ask, I am not sure if I included all the information needed for this problem.
-
-REPLY [4 votes]: This is a zero-sum differential game.
-Your strategy is a function from the state of the board (everyone's position and velocity and how long they have held their payload) to your control inputs. Your payoff is 1 if you win and 0 if you lose. [Which makes this in effect a zero-sum game, since the sum of payoffs over all players always equals 1.]
-Having said that, the actual solved examples of differential games I have seen have been for simple two-player problems. You may be better off using some kind of heuristic for the contest.<|endoftext|>
-TITLE: How closely can we estimate $\sum_{i=0}^n \sqrt{i}$
-QUESTION [21 upvotes]: By looking at an integral and bounding the error?
-
-REPLY [2 votes]: By the Euler-Maclaurin summation formula, we have
-\begin{align}\sum_{k=1}^n\sqrt k&=\frac23n^{3/2}+\frac12n^{1/2}+\zeta(-1/2)+\sum_{k=1}^\infty\frac{B_{2k}\Gamma(\frac32)}{(2k)!\Gamma(\frac52-2k)}n^{\frac32-2k}\\&=\frac23n^{3/2}+\frac12n^{1/2}+\zeta(-1/2)+\frac1{24}n^{-1/2}-\frac1{1920}n^{-5/2}+\mathcal O(n^{-9/2})\end{align}
-The remainder term in the expansion goes to zero for $n\ge1$, and the collected constant terms add up to the value $\zeta(-1/2)$.
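-For readers who want to sanity-check the expansion numerically, here is a minimal Python sketch (not part of the original answer; the hard-coded value of $\zeta(-1/2)$ and the helper name are mine):

```python
import math

ZETA_MINUS_HALF = -0.2078862249773545  # zeta(-1/2), e.g. from mpmath.zeta(-0.5)

def sqrt_sum_expansion(n):
    """Euler-Maclaurin approximation of sum_{k=1}^n sqrt(k), through the n^(-5/2) term."""
    return ((2 / 3) * n ** 1.5 + 0.5 * n ** 0.5 + ZETA_MINUS_HALF
            + n ** -0.5 / 24 - n ** -2.5 / 1920)

for n in (10, 100, 1000):
    exact = sum(math.sqrt(k) for k in range(1, n + 1))
    print(n, exact - sqrt_sum_expansion(n))  # error decays roughly like n^(-9/2)
```

-Already at $n=10$ the truncated expansion agrees with the exact partial sum to about six decimal places.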
-A graph of the terms as far as expanded is shown below:
-
-More generally, we have, for $s\ne-1$ and large enough $n$,
-$$\sum_{k=1}^nk^s=\zeta(-s)+\frac1{s+1}n^{s+1}+\frac12n^s+\sum_{k=1}^\infty\frac{B_{2k}\Gamma(s+1)}{(2k)!\Gamma(s+2-2k)}n^{s+1-2k}$$
-as implemented in this graph.<|endoftext|>
-TITLE: The product of two Riemann integrable functions is integrable
-QUESTION [14 upvotes]: The goal is to show that the product of two Riemann integrable functions is integrable.
-The first step is to use the identity $f\cdot g = \frac{1}{4} \left[(f+g)^2 - (f-g)^2\right]$ so that we only need to consider squares of functions.
-The second step is to reduce to positive valued functions because $f(x)^2=\left|f(x)\right|^2$.
-The third step is to use that if $0 \leq f(x) \leq M$ on $\left[a,b\right]$, then $$f^2(x) - f^2(y) \leq 2M \left(\,f(x)-f(y)\right)$$
-How should I go about implementing the above steps?
-
-REPLY [5 votes]: For the sake of completeness I will prove the statement in a more elementary way. Let's start by showing that $f^2$ is Riemann-integrable:
-We know that $f:[a,b]\to \mathbb{R}$ is Riemann-integrable, so it follows that $|f|$ is Riemann-integrable as well. It holds that $|f|^2=f^2$, so if $\int\limits_a^b f(x)^2dx $ exists then we have the equality $\int\limits_a^b |f(x)|^2dx =\int\limits_a^b f(x)^2dx$.
-As $|f|$ is Riemann-integrable (and hence bounded), it follows that for an arbitrary $\epsilon>0$ there exists a partition $P:=\{t_0,t_1,\cdots, t_n\}$ of $[a,b]$ such that:
-$$
-\sum\limits_{i=1}^{n}(M_i-m_i)(t_i-t_{i-1})<\frac{\epsilon}{2\sup(|f|)},\\
-\text{where } M_i:=\sup\{|f(x)|\mid x\in[t_{i-1},t_i]\}\\
-\text{and } m_i:=\inf\{|f(x)|\mid x\in[t_{i-1},t_i]\}.
-$$
-Since $|f|\geq 0$ on $[a,b]$, we know that $\sup(|f|^2)=\sup(|f|)^2$ and $\inf(|f|^2)=\inf(|f|)^2$, respectively.
-We use these results to give an upper bound for the difference of the Darboux sums of $f^2$:
-$$
-\sum\limits_{i=1}^{n}(M'_i-m'_i)(t_i-t_{i-1})=\sum\limits_{i=1}^{n}(M_i^2-m_i^2)(t_i-t_{i-1})=\sum\limits_{i=1}^{n}(M_i-m_i)(M_i+m_i)(t_i-t_{i-1})\\ \leq 2\sup(|f|)\sum\limits_{i=1}^{n}(M_i-m_i)(t_i-t_{i-1})<2\sup(|f|)\frac{\epsilon}{2\sup(|f|)}=\epsilon\\
-\text{where } M'_i:=\sup\{f(x)^2\mid x\in[t_{i-1},t_i]\}\\
-\text{and } m'_i:=\inf\{f(x)^2\mid x\in[t_{i-1},t_i]\}.
-$$
-This shows that $f^2$ is Riemann-integrable. Applying the ordinary rules of Riemann-integrable functions (regarding addition of functions and multiplying factors) and the hint that $fg=\frac{1}{4}((f+g)^2-(f-g)^2)$, one immediately sees that $fg$ is Riemann-integrable.<|endoftext|>
-TITLE: Connections between K-Theory and PDEs?
-QUESTION [18 upvotes]: I've recently spent some time learning (the very basics of) K-theory for $C^*$-algebras and topological K-theory. Actually, my main fields of interest are PDEs and related topics, in particular functional calculus for unbounded operators, Sobolev/Bessel potential/Besov spaces, interpolation theory, semigroups and so on. You get the idea.
-Now I would like to deepen (and broaden) my knowledge of the machinery of K-theory, ideally by learning some interesting connection to said PDE-related topics, such as some PDE-relevant result admitting a proof with a K-theoretic flavour, or some general idea of how K-theory might provide insight into (or an interesting point of view on) some PDE-related results or concepts.
-So I'd be very thankful, and I hope this request is not too broad, if you could provide me with some examples of interesting relations, if they exist, between K-theory and the PDE-related topics mentioned above.
-
-REPLY [4 votes]: Here is another way that K-theory will show up when studying PDEs. Say you have an elliptic operator $D$ on a manifold with boundary $M$ and you want to know if you can impose local boundary conditions $B$ so that the boundary value problem
-$Du=f, \quad Bu\vert_{\partial M}=0$ is well behaved.
-Here well-behaved means: there are finitely many linear conditions on $f$ that guarantee the existence of a solution and, when there is a solution, the solutions form a finite dimensional space. In short, you want the operator with boundary conditions to be Fredholm.
-A necessary and sufficient condition for the existence of such a $B$ is given by a triviality condition on the K-theory class of the symbol of $D$ restricted to the boundary. (It should be a pull-back from the K-theory of the boundary.) This was proven by Atiyah and Bott in their paper on the index theorem on manifolds with boundary.<|endoftext|>
-TITLE: What is an example of a finite model in first order logic having a unique undefinable element?
-QUESTION [7 upvotes]: This is (a slight paraphrase of) question 1.3.14 in Chang and Keisler's Model Theory book.
-"Show that for each natural number $n$, there is a language $L_n$ and finite model $M_n$ of $L_n$ such that $M_n$ has precisely $n$ undefinable elements."
-Here, an element $x\in M$ is definable if there is a (first order) formula in $L$, called $\phi$, such that $x$ is the unique element of $M$ satisfying $\phi$. Of course, "undefinable" means "not definable".
-It is starred, indicating that it is more difficult than a standard problem in that book.
-Chang and Keisler remark that $n=1$ is the only difficult case. In that spirit, here is the proof for all $n\neq 1$.
-Let $L_n$ have a single 2-place predicate symbol (I'm thinking of $L_n = \{ < \}$). Let $M_n$ be the partial order with minimum $a$ and with elements $b_1,...,b_{n}$ with $a < b_i$ for all $i$ and the $b_i$ pairwise incomparable.
-First note that $a$ is definable: it uniquely satisfies $\phi(x) =$ "for all $y$, $x\leq y$".
-Now, if $n =0$, there are no $b_i$, and hence in this model we have 0 undefinable elements.
-If $n > 1$, then I claim that all the $b_i$ are undefinable. The short answer is that any permutation of the $b_i$ can be extended to a unique automorphism of $M_n$. Hence, for any formula $\phi$, we have $\phi(b_i)$ iff $\phi(b_j)$ for all $b_j$. Thus, no $\phi$ can single out any particular $b_i$, so each $b_i$ is undefinable.
-This proof fails completely for $n=1$, for then $b_1$ is the unique element that satisfies "$b_1$ is not $a$". Or, more in the spirit of first order logic, $b_1$ is the unique element satisfying $\phi(x) =$ "there is a $y$ such that for all $z$, $y\leq z$, and $x$ is not equal to $y$." Incidentally, this proves that any such model that works for $n=1$ must have at least 3 elements.
-So, my question is:
-
-What is an example of a language with a finite model having precisely one undefinable element? Is the smallest cardinality of such a model known?
-
-Thanks in advance!
-
-REPLY [15 votes]: If your languages necessarily have the = relation, which is a common assumption, then the case n=1 is impossible in a finite model. The reason is that if a model $M$ has $k$ elements $x_1$, $x_2$, ...
$x_k$ for finite $k\gt 1$ and each $x_i$ is defined by $\varphi_i(x)$ for $i\lt k$, then the remaining element $x_k$ is defined by the formula $\neg\varphi_1(x)\wedge\cdots\wedge \neg\varphi_{k-1}(x)$. If $M$ has only one element, then it is defined by the formula $x=x$.
-But if we do not insist that $=$ is in the language, then let $L$ be the empty language, having no relations at all. In this case, there are no atomic formulas and hence no well-formed formulas to define elements, so a one-point model has exactly one undefinable element.
-If we allow infinite models, then we can easily arrange to have exactly one undefinable element, even in a language with $=$. For example, consider the language with infinitely many constant symbols, and take a model where all these constants are interpreted by different elements and there is one extra unnamed object. That extra object will be the only non-definable element, even when $=$ is in the language.
-
-REPLY [6 votes]: You can't have equality as a relation in the language that is interpreted as equality in the model; otherwise you can define the undefinable element as that (one) which is not equal to any of the definable ones. Most languages in common use have equality, so it is hard to think of a natural example that describes, e.g., algebraic structures or combinatorial objects.<|endoftext|>
-TITLE: What is the relationship between non-Archimedean places of infinite extensions of number fields and primes in the ring of integers?
-QUESTION [12 upvotes]: Let $K$ be a number field and $L$ an infinite algebraic extension of $K$. Fix a non-trivial absolute value $v$ on $K$ (so $v$ is induced either by an embedding into the complex numbers or by a prime ideal in the ring of integers of $K$). If $K_v$ is the corresponding completion and $\overline{K}_v$ a choice of algebraic closure of $K_v$, then the extensions of $v$ to an absolute value on $L$ are in bijection with the $Gal(\overline{K}_v/K_v)$-orbits of $Hom_{K-alg}(L,\overline{K}_v)$ (this is described in detail in, e.g., Neukirch's Algebraic Number Theory).
-My question regards the case when $v$ is non-Archimedean, arising from a prime ideal $\mathfrak{p}$ with residue characteristic $p$. In this case, is there a bijection between prime ideals of $\mathscr{O}_L$ lying above $\mathfrak{p}$ and places of $L$ above $v$? Since $\mathscr{O}_L$ is not generally going to be Dedekind (though it is a one-dimensional, integrally closed domain), we don't get an (additive) discrete valuation in the usual way from a prime ideal of $\mathscr{O}_L$, but if $w$ is a non-Archimedean absolute value on $L$ extending $v$, then by restricting to finite sub-extensions $L_i$ of $L/K$ we get a sort of coherent sequence of prime ideals $\mathfrak{p}_i$ in the integer rings of the $L_i$. Perhaps there is a way to convert this sequence of primes into a single prime ideal of $\mathscr{O}_L$ lying above each of the $\mathfrak{p}_i$? I thought maybe some kind of compactness/inverse limit type argument might work, but I'm not sure anymore... even so, if such an argument were to work, it would probably just show the existence of such a prime, as opposed to determining it uniquely. Alternatively, if I look at the maximal ideal of the valuation ring of $w$ in $L$ and intersect it with $\mathscr{O}_L$, I should get a (hopefully) non-zero prime ideal of $\mathscr{O}_L$ that maybe fits the bill.
-This might be the wrong approach entirely (and maybe the answer to my question is just "no").
The reason I'm interested is that, for example, in Washington's book on cyclotomic fields, he (in an appendix) defines the decomposition group of a prime ideal in an infinite Galois extension of number fields, but makes no mention of the decomposition group of a place of an infinite algebraic extension of a number field. When one starts considering objects like the maximal unramified abelian $p$-extension of a $\mathbb{Z}_p$-extension of a number field, surely this (possibly more general) notion becomes relevant.
-I would greatly appreciate it if anyone could set me straight on this issue or point me in the direction of a reference where it's discussed. Thanks.
-
-REPLY [12 votes]: Since $\mathcal O_L$ is the union of the $\mathcal O_{L_i}$, where $L_i$ runs over all the finite subextensions, giving a prime ideal $\mathfrak p$ in $\mathcal O_L$ is the same as giving a compatible collection of prime ideals $\mathfrak p_i$ in each $\mathcal O_{L_i}$. (We set $\mathfrak p_i := \mathfrak p \cap \mathcal O_{L_i}$, and $\mathfrak p = \cup_i \mathfrak p_i$.)
-Since $L$ is the union of the $L_i$, giving an absolute value $v$ on $L$ is the same as giving compatible absolute values $v_i$ on the various $L_i$. (Take $v_i$ to be the restriction of $v$ to $L_i$.)
-Combining the preceding two remarks, we see that the bijection between prime ideals and non-Archimedean valuations in the case of finite extensions extends to a corresponding bijection in the case of infinite extensions.<|endoftext|>
-TITLE: Correspondence between ideals, quadratic irrationals and binary quadratic forms
-QUESTION [5 upvotes]: In the literature it is stated that to each quadratic irrational $\gamma=\frac{P+\sqrt{D}}{Q}$ there is a corresponding ideal $I=[|Q|/\sigma , (P+\sqrt{D})/\sigma]$, where $\sigma=1$ if $\Delta \equiv 0 \pmod 4$ and $\sigma=2$ otherwise.
-Thus, in the case of $\frac{2+\sqrt{13}}{3}$ the associated ideal must be $I=[3/2, (2+\sqrt{13})/2]$, which makes no sense, as $N(I)=3/2$ is supposed to be a rational integer.
-What am I doing wrong here?
-
-REPLY [5 votes]: Below is a proof of the standard equivalences between forms, ideals and numbers, excerpted from section 5.2, p. 225 of Henri Cohen's book A course in computational algebraic number theory.
-Note that your quadratic number is not of the form specified in this equivalence, viz. $\rm\ \tau = (-b+\sqrt{D})/(2a)\:$ with $\rm\: 4a\:|\:(D-b^2),\,$ i.e. $\rm\ a\:|\:N(a\tau),\,$ a condition that is equivalent to the $\Bbb Z$-module $\rm\, a\:\mathbb Z + a\tau\ \mathbb Z\,$ being an ideal when $\rm\,D\,$ and $\rm\,b\,$ have the same parity; e.g. see Proposition 2.8, p. 18 in Franz Lemmermeyer's notes.<|endoftext|>
-TITLE: How can you find the complex roots of i?
-QUESTION [6 upvotes]: A variation of the Root of Unity problem.
-I want to find all possible answers to this:
-$$z^n = i$$
-where $$i^2 = -1$$
-
-REPLY [2 votes]: Also, observe that if $z^n=i$ then $z^{4n}=1$. Thus, the complex numbers you're looking for are particular $4n$-th roots of $1$.
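-(A quick numerical illustration, not part of the original answer: the $n$ solutions are $z_k=\exp\left(i\,\frac{\pi/2+2\pi k}{n}\right)$ for $k=0,\dots,n-1$, and the Python sketch below checks that each is indeed a $4n$-th root of unity.)

```python
from math import pi
import cmath

def roots_of_i(n):
    """All complex solutions of z**n = i: z = exp(i*(pi/2 + 2*pi*k)/n), k = 0..n-1."""
    return [cmath.exp(1j * (pi / 2 + 2 * pi * k) / n) for k in range(n)]

for z in roots_of_i(3):
    # each root satisfies z**3 = i and is in particular a 12th root of unity
    print(z, abs(z**3 - 1j) < 1e-12, abs(z**12 - 1) < 1e-12)
```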
-If you know that the $m$-th roots of 1 (any $m$) can be written as powers of a single well-chosen one (a primitive root), it shouldn't be too hard to find exactly which $4n$-th roots have the desired property.<|endoftext|>
-TITLE: Bounding higher moments of truncated normal
-QUESTION [7 upvotes]: I'm looking for a convenient upper bound on the integral
-\begin{equation*}
-\int_y^\infty x^k \exp(-(x-\mu)^2/2) dx
-\end{equation*}
-for (possibly large) positive integer $k.$ This is equivalent to finding higher moments of a truncated normal distribution. A bound that works for non-integer $k$ as well would be even better.
-Of course "convenient" is in the eye of the beholder, but I'd like some sort of fairly simple expression that I can use in further calculations. For example, an upper bound of the form $f(x) \exp( -g(x))$ where $f$ and $g$ are low-degree polynomials would be great. I'm more interested in simplicity of form than in obtaining the tightest possible bound.
-
-REPLY [3 votes]: I assume $y \gt 0$ and $y \gg \mu$. If you replace $x^k$ by $y^k \exp {\left( (k/y)x - k \right) }$ you will overestimate the $x^k$ term (because this is the exponential of the first two terms of the Taylor series of $\log(x^k)$ expanded around $x=y$; the series is alternating, and the remainder term is negative). Completing the square yields a closed-form formula for an upper bound, one of whose factors is a Gaussian integral:
-$$\sqrt{2\pi }y^k \exp \left( {\frac{k (k+2 y (-y+\mu ))}{2 y^2}} \right) \Phi \left(\frac{k}{y}-y+\mu \right)$$
-This will work extremely well when $k$ is large compared to $y$ and $\mu$, because then most of the mass of the integral is concentrated at its lower limit, where the exponential upper bound to $x^k$ is a good approximation. To avoid exponential overflow, use logarithms to compute the product.<|endoftext|>
-TITLE: How big are transitively reduced graphs?
-QUESTION [9 upvotes]: What is the largest transitively reduced acyclic connected graph on $n$ vertices, for every $n$?
-How many edges $e$ does it have? How does $e$ grow as a function of $n$? Faster than $n^2/4$?
-
-REPLY [6 votes]: Since the transitively reduced graph $G$ is acyclic, there is a corresponding undirected simple graph $G'$, which can be formed by ignoring the directions of the edges. The number of edges in $G$ is the same as the number of edges in $G'$.
-For $G$ to be transitively reduced, a necessary condition is that $G'$ must not contain any triangles.
-It is well known (Turan's/Mantel's theorem) that any simple undirected graph on $n$ vertices and more than $n^2/4$ edges has a triangle.
-It is also known that the undirected graph on $n$ vertices with maximum edges and no triangles is the complete bipartite graph $K_{[n/2],[n/2]}$ (for instance check out: http://books.google.com/books?id=SbZKSZ-1qrwC&pg=PA28, exercise 4).
-From your comments, it looks like you found a $G$ whose corresponding $G'$ is $K_{[n/2],[n/2]}$, and so that proves it.<|endoftext|>
-TITLE: Characterizing continuous functions based on the graph of the function
-QUESTION [8 upvotes]: I had asked this question: Characterising Continuous functions some time back, and this question is more or less related to that question.
-Suppose we have a function $f: \mathbb{R} \to \mathbb{R}$ and suppose the set $G = \{ (x,f(x)) : x \in \mathbb{R}\}$ is connected and closed in $\mathbb{R}^{2}$; does it then imply that $f$ is continuous?
-REPLY [14 votes]: This Monthly paper has short simple proofs of the following
-THEOREM$\ $ TFAE if $\rm\ f: \mathbb R\to \mathbb R\ $ has a closed graph in $\:\mathbb R^2$
-(a)$\rm\ \ f\ $ is continuous.
-(b)$\rm\ \ f\ $ is locally bounded.
-(c)$\rm\ \ f\ $ has the intermediate value property.
-(d)$\rm\ \ f\ $ has a connected graph in $\rm\mathbb R^2$.
-More generally, the result is merely a special case of R. L. Moore's 1920 characterization of a topological line as a locally compact metric space that is separated into two connected sets by each of its points.
-Per request, I've appended the proof of the theorem below.<|endoftext|>
-TITLE: Prove that $\beta \rightarrow \neg \neg \beta$ is a theorem using standard axioms 1,2,3 and MP
-QUESTION [9 upvotes]: I've proven that $\neg \neg \beta \rightarrow \beta$ is a theorem, but I can't figure out a way to do the same for $\beta \rightarrow \neg \neg \beta$.
-It seems the proof would use Axiom 2 and the deduction theorem (which allows $\beta$ to be used as an axiom), but I've endlessly tried values to no avail.
-Axiom 1: $A \rightarrow ( B \rightarrow A )$.
-Axiom 2: $( A \rightarrow ( B \rightarrow C ) ) \rightarrow ( ( A \rightarrow B ) \rightarrow (A \rightarrow C) ) $.
-Axiom 3: $( \neg B \rightarrow \neg A) \rightarrow ( ( \neg B \rightarrow A) \rightarrow B )$.
-To clarify: A, B, C, $\alpha$, and $\beta$ are propositions (i.e. assigned True or False). $\rightarrow$ and $\neg$ have the standard logical meanings.
-Note: $\TeX$ification does not work in the IE9 beta.
-
-REPLY [12 votes]: $A \rightarrow \lnot \lnot A$ is intuitionistically valid. Therefore you should be able to prove it already from MP plus Axiom 1 and Axiom 2 plus Ex Falso Quodlibet (an axiom that amounts to $\bot \rightarrow A$). No need to use Axiom 3.
-But there is a twist: you need to represent $\lnot A$ as $A \rightarrow \bot$.
Here you see a Hilbert style proof of $A \rightarrow ((A \rightarrow \bot) \rightarrow \bot)$:
-First we need a little lemma, namely that $A \rightarrow A$ is derivable:
-1: $(A \rightarrow ((B \rightarrow A) \rightarrow A)) \rightarrow ((A \rightarrow (B \rightarrow A)) \rightarrow (A \rightarrow A))$ (Axiom 2)
-2: $A \rightarrow ((B \rightarrow A) \rightarrow A)$ (Axiom 1)
-3: $(A \rightarrow (B \rightarrow A)) \rightarrow (A \rightarrow A)$ (MP 1, 2)
-4: $A \rightarrow (B \rightarrow A)$ (Axiom 1)
-5: $A \rightarrow A$ (MP 3, 4)
-Now we can prove what we want:
-1: $(A \rightarrow (((A \rightarrow \bot) \rightarrow A) \rightarrow ((A \rightarrow \bot) \rightarrow \bot))) \rightarrow ((A \rightarrow ((A \rightarrow \bot) \rightarrow A)) \rightarrow (A \rightarrow ((A \rightarrow \bot) \rightarrow \bot)))$ (Axiom 2)
-2: $(((A \rightarrow \bot) \rightarrow A) \rightarrow ((A \rightarrow \bot) \rightarrow \bot)) \rightarrow (A \rightarrow (((A \rightarrow \bot) \rightarrow A) \rightarrow ((A \rightarrow \bot) \rightarrow \bot)))$ (Axiom 1)
-3: $((A \rightarrow \bot) \rightarrow (A \rightarrow \bot)) \rightarrow (((A \rightarrow \bot) \rightarrow A) \rightarrow ((A \rightarrow \bot) \rightarrow \bot))$ (Axiom 2)
-4: $(A \rightarrow \bot) \rightarrow (A \rightarrow \bot)$ (Lemma)
-5: $((A \rightarrow \bot) \rightarrow A) \rightarrow ((A \rightarrow \bot) \rightarrow \bot)$ (MP 3, 4)
-6: $A \rightarrow (((A \rightarrow \bot) \rightarrow A) \rightarrow ((A \rightarrow \bot) \rightarrow \bot))$ (MP 2, 5)
-7: $(A \rightarrow ((A \rightarrow \bot) \rightarrow A)) \rightarrow (A \rightarrow ((A \rightarrow \bot) \rightarrow \bot))$ (MP 1, 6)
-8: $A \rightarrow ((A \rightarrow \bot) \rightarrow A)$ (Axiom 1)
-9: $A \rightarrow ((A \rightarrow \bot) \rightarrow \bot)$ (MP 7, 8)
-The given proof shows something even stronger: $A \rightarrow \lnot \lnot A$ is not only intuitionistically valid, it is already valid in minimal logic, since we did not make use of Ex Falso Quodlibet. We were able to derive it from MP plus Axiom 1 and Axiom 2.
-Best Regards<|endoftext|>
-TITLE: How many bins do random numbers fill?
-QUESTION [12 upvotes]: We are given a sequence $\langle a_1,a_2,\ldots,a_n\rangle$ over the alphabet $\{1,2,\ldots,m\}$, chosen uniformly at random among the $m^n$ possibilities. What is the expected size of the set $\{a_1,a_2,\ldots,a_n\}$?
-If $m=n$ it seems the answer tends to $(1-1/e)n$ as $n\to\infty$, but I don't know why.
-I bumped into this while benchmarking some code for hashtables, so I wouldn't be surprised if it is a standard result in the hash world.
-
-REPLY [3 votes]: This is dealt with in depth at http://www.math.uah.edu/stat/urn/Birthday.html.<|endoftext|>
-TITLE: weak sequential continuity of linear operators
-QUESTION [25 upvotes]: Suppose I have a weakly sequentially continuous linear operator $T$ between two normed linear spaces $X$ and $Y$ (i.e. $x_n \stackrel {w}{\rightharpoonup} x$ in $X$ $\Rightarrow$ $T(x_n) \stackrel {w}{\rightharpoonup} T(x)$ in $Y$). Does this imply that my operator $T$ must be bounded?
-
-REPLY [22 votes]: In my original answer I only mentioned that it works for $Y$ complete, but as Nate pointed out in a comment, I never actually used completeness of $Y$.
-The answer is yes. Weakly convergent sequences in a normed space are bounded, as a consequence of the uniform boundedness principle applied to the dual space (which is a Banach space) and the fact that a convergent sequence of real (or complex) numbers is bounded.
If $T$ is unbounded, then there is a sequence $x_1,x_2,\ldots$ in $X$ converging in norm (and hence weakly) to 0 such that $\|T(x_n)\|\to\infty$, so by the previous sentence this implies that $T(x_1),T(x_2),\ldots$ does not converge weakly.<|endoftext|>
-TITLE: Integer translates of a scaling function
-QUESTION [8 upvotes]: I think this is asked as a standard exercise in books about wavelets (e.g. exercise 7.2 in Mallat's book), but I couldn't find a proof. Let $\phi$ be a scaling function (see the definition below). I would like to learn why
-$$\sum_{k\in\mathbb Z} \phi(x-k) = 1 $$
-almost everywhere.
-Definition. A sequence of subspaces $\{V_j: j\in \mathbb{Z}\}$ of $L^2(\mathbb R)$ is called a multiresolution analysis if it satisfies the following:
-
-$V_j \subset V_{j+1}$
-$\bigcap_{j}V_j = \{0\}$
-$\overline{\bigcup_jV_j} = L^2(\mathbb R)$
-$f(x)\in V_j$ if and only if $f(2x) \in V_{j+1}$
-There exists a function $\phi \in V_0$ such that $\{\phi(x-k)\}_{k\in\mathbb Z}$ is an orthogonal basis for $V_0$
-
-The function $\phi$ here is called a scaling function.
-
-REPLY [2 votes]: I think you will find the proof for this in Mallat 1989, 'Multiresolution approximations and wavelet orthonormal bases of L^2'. Theorem 1 (in particular Equations (23), (36)) is what you are after. It is not trivial; it takes longer to prove than I initially thought. Perhaps there is a very fast proof, but I can't think of it now.<|endoftext|>
-TITLE: Why is $T_1$ required for a topological space to be $T_4$?
-QUESTION [15 upvotes]: Let's say we have some topological space.
-Axiom $T_1$ states that for any two points $y \neq x$, there is an open neighborhood $U_y$ of $y$ such that $x \notin U_y$.
-Then we say that a topological space is $T_4$ if it is $T_1$ and also satisfies that for any two closed, non-intersecting sets $A,B$, there are open neighborhoods $U_A \supseteq A$ and $U_B \supseteq B$ such that $U_A\cap U_B = \emptyset$.
-Could anyone give an example of a topological space which satisfies the second condition of $T_4$, but which is not $T_1$?
-
-REPLY [3 votes]: As an addition: the separating-closed-disjoint-sets part is often called normality (a space is normal if it satisfies this), so $T_4$ is normal plus $T_1$, and similarly for regular: $T_3$ is regular plus $T_1$. Spaces that have no disjoint non-empty closed sets (besides the trivial topology, we have examples like $\mathbf{N}$ with the topology generated by the sets of the form $U(n) = \{ k : k \ge n \}$, e.g.) trivially satisfy normality. $T_4$ is meant to avoid these pathologies: the extra $T_1$ ensures that at least all finite sets are closed, so we have some "relevant" closed sets to apply normality to...<|endoftext|>
-TITLE: How many disconnected graphs of the Rubik's cube exist?
-QUESTION [12 upvotes]: Let us say that a Rubik's cube in a particular configuration is in a particular "state". All other configurations of this cube (other "states") which can be achieved by rotations of the cube can be thought of as connected to each other... rotations are like walking the edges of the graph where the different states are the vertices.
-Now, if our Rubik's cube is of the type where we can peel off the stickers easily and put them back too, then we can swap 2 of the stickers in a corner.
-I understand that the cube cannot be solved any more. Our cube has moved to a new state which is disconnected from the previous graph built above. Also, all the states reachable from this new state are disconnected from the other graph (otherwise the cube would be solvable).
So now we have 2 graphs which are disconnected from each other. A third set of states might be disconnected from both of the above.
-For all possible reasonable color assignments (by reasonable, I mean that each color appears on exactly 9 tiles), how many such disconnected graphs exist for a 3x3x3 Rubik's cube?
-
-REPLY [3 votes]: All connected components are isomorphic, so you can consider them as equivalence classes of the same size.
-Now we know the size of one of the classes (i.e. the one containing the solved cube). From wikipedia:
-$$|N_1| = 8! \times 3^7 \times \frac{12!}{2} \times 2^{11}$$
-Let $|states|$ be the total number of states you consider. Then the number of equivalence classes or connected components is:
-$$ \frac{|states|}{|N_1|}$$
-First, if you only consider the cases where you can move individual little cubes, then the number of states is: $$|states_1| = 8! \times 3^8 \times 12! \times 2^{12}$$ And hence $$ \frac{|states_1|}{|N_1|} = 3 \times 2 \times 2 = 12$$
-In your case, where you allow stickers to be moved around, the number of states is much higher. I think it's computed by the following:
-$$ |states_2| = \frac{\text{total permutations}}{\text{cube rotational symmetries}} = \frac{6! \times 24! \times 24! }{24}$$
-Thus your answer should be:
-$$ \frac{|states_2|}{|N_1|} = \frac {6! \times 24! \times 23! } {8! \times 3^7 \times 12! \times 2^{12}}$$<|endoftext|>
-TITLE: Measurable function remaining constant
-QUESTION [6 upvotes]: This is a problem which appeared in one of my tests, which I wasn't able to solve.
-Let $\Omega$ be an uncountable set. Let $S$ be the collection of subsets of $\Omega$ given by: $A \in S$ if and only if $A$ is countable or $A^{c}$ is countable. Suppose $f: \Omega \to \mathbb{R}$ is a real measurable function. Prove that there exists a $y \in \mathbb{R}$ and a countable set $B$ such that $f(x)=y$ on $B^{c}$.
-
-REPLY [8 votes]: Here's a fun way to write the solution.
-Define on $(\Omega, \mathcal{S})$ the probability measure $P(A) = 0$ if $A$ is countable, $P(A)=1$ if $A^c$ is countable. Then $f$ can be seen as a random variable $X$. Since all events have probability $0$ or $1$, all events, and hence all random variables, are independent. Now there must be some $N$ with $P(|X| \le N) > 0$ (since $\Omega = \bigcup_{N=1}^\infty \{|X| \le N\}$ and $P$ is countably additive), hence $P(|X| \le N) = 1$. So $X$ is a.s. bounded and in particular is $L^2$. But $Var(X) = Cov(X,X) = 0$ since $X$ is independent of itself! So $X = EX$ a.s., i.e. except on a countable set.<|endoftext|>
-TITLE: A good, free, graphics package for mathematics?
-QUESTION [8 upvotes]: A student of mine has cooked up a new graphical notation for computing with knots on surfaces. The trouble is, writing up his results is difficult due to his new notation. Is there a good "drawing tool" for mathematics that anyone can suggest?
-
-REPLY [2 votes]: GeoGebra is a nice tool that is worth a try. I recommend using it.<|endoftext|>
-TITLE: Slick constructions of conditional expectation
-QUESTION [9 upvotes]: Let $(\Omega, \mathcal{F}, P)$ be a probability space, $X$ an integrable random variable, $\mathcal{G} \subset \mathcal{F}$ a $\sigma$-field. The conditional expectation of $X$ given $\mathcal{G}$ is by definition the unique random variable $Y$ which is $\mathcal{G}$-measurable and satisfies $E[Y;A] = E[X;A]$ for all $A \in \mathcal{G}$. Proving the uniqueness of $Y$ is easy, but existence is harder.
I am looking for a nice existence proof with minimal prerequisites.
-The traditional proof is to invoke the Radon-Nikodym theorem: the signed measure $\nu(A) = E[X;A]$ on $(\Omega, \mathcal{G})$ is absolutely continuous with respect to $\mu = P|_\mathcal{G}$, so take $Y$ to be the Radon-Nikodym derivative, and it clearly has the desired properties. But the proofs I know of the Radon-Nikodym theorem, while elementary, are somewhat involved (at least 2 pages, even if you only do the absolutely continuous case).
-Another proof is to first take $X$ with finite variance, and note that $K = L^2(\Omega, \mathcal{G}, P)$ is a closed subspace of the Hilbert space $H = L^2(\Omega, \mathcal{F}, P)$; then take $Y$ to be the orthogonal projection of $X$ onto $K$. Again, it is then easy to see that $Y$ has the desired properties. But this is not as suitable for students with no functional analysis background. You can develop the necessary facts from scratch, but it's a little tedious.
-So I am wondering if anyone knows of a simple proof, preferably using only basic measure theory and probability facts.
-
-REPLY [3 votes]: For the basic case:
-Assume that $X$ and $Y$ are random variables on a probability space $(\Omega, \mathcal{F}, P)$ where $E[|Y|] < \infty$. Further assume that $X$ and $Y$ have joint probability density $f_{X, Y}(x,y)$. Define:
-$$g(x) = \int_{\mathbb{R}} y\, \frac{f_{X,Y}(x,y)}{f_X(x)} \, dy$$
-where $f_X$ is the marginal density of $X$. Now $g$ is the conditional expectation $E[Y|X = x]$ from elementary probability theory, and we can see that $E[Y|X] = g(X)$. Now $g(X)$ is $\sigma(X)$-measurable, so we need to check:
-$$\int_A g(X) \, dP = \int_A Y \, dP \textrm{ for $A$ in $\sigma(X)$}$$
-This is the partial-averaging property, so we get the conditional expectation. Well, this is just some syntax manipulation so I'll skip it. I can add it if you want.<|endoftext|>
-TITLE: Efficiently finding two squares which sum to a prime
-QUESTION [39 upvotes]: The web is littered with any number of pages (example) giving an existence and uniqueness proof that a pair of squares can be found summing to primes congruent to 1 mod 4 (and also that there are no such pairs for primes congruent to 3 mod 4).
-However, none of the stuff I've read on the topic offers any help with actually efficiently finding (i.e. other than a straight search up to $\sqrt p$) the concrete values of such squares.
-What's the best way to actually find them?
-
-REPLY [12 votes]: I must add Gauss's construction, which I learned about from p. 64 of Stark's "An Introduction to Number Theory". If by "best" you mean computationally most efficient, then certainly the following is not the best way. But there are other ways to measure quality...
-Quoting Stark:
-"In 1825 Gauss gave the following construction for writing a prime congruent to $1 \pmod{4}$ as a sum of two squares: Let $p=4k+1$ be a prime number. Determine $x$ (this is uniquely possible...) so that
-$$ x \equiv \frac{(2k)!}{2(k!)^2} \pmod{p}, \quad |x| < \frac{p}{2}$$
-Now determine $y$ so that
-$$ y \equiv x \cdot (2k)! \pmod{p}, \quad |y| < \frac{p}{2}$$
-Gauss showed that $x^2+y^2=p$."<|endoftext|>
-TITLE: GRE past papers
-QUESTION [38 upvotes]: As it is required for most students who wish to do a Ph.D in maths in the US to sit the GRE subject-specific mathematics exam, I hope this question will be of interest to the mathematical community and will not be closed.
-Essentially, the exam was "rescaled" (made more difficult) in 2001 and I have only been able to find 2 past "rescaled" papers, one of which is available on the official website, the other available here (this link being invalid, another is GRE9768.pdf). Are other past papers available elsewhere? Thanks.
-EDIT: In actual fact, the exam was not made more difficult (see comments), so any past paper would be of interest.
-
-REPLY [4 votes]: [EDIT: updated the link] I hope I understand the exact question. The version you have provided is "after October 1, 2001". On the very same site, you could find the following links (which do not work anymore, only kept for the record):
-
-a version with copyright in 1990, 1991, 1993: http://www.math.ucsb.edu/mathclub/GRE/GRE9367.pdf
-a version with copyright in 1986: http://www.math.ucsb.edu/mathclub/GRE/GRE8767.pdf
-
-They were working at the time of answering. I leave the initial links as they can provide clues to find other copies. Indeed, looking for 'GRE8767.pdf', I obtained new links:
-
-https://web.math.rochester.edu/people/faculty/abeeson/GRE/GRE8767.pdf
-https://www.geneseo.edu/~johannes/GRE8767.pdf
-
-Same for "GRE9367.pdf" -- https://web.math.rochester.edu/people/faculty/abeeson/GRE/GRE9367.pdf<|endoftext|>
-TITLE: An interesting series
-QUESTION [7 upvotes]: $\sum_{n=1}^{\infty} \frac{\varphi(n)}{n}$ where $\varphi(n)$ is $1$ if the integer $n$ has the digit $7$ in its usual base-$10$ representation, and $0$ otherwise.
-I am supposed to find out if this series converges or diverges. I think it diverges, and here is why.
-We can see that there is a series whose partial sums are always below our series, but which diverges. Compare some of the terms of each sequence:
-$\frac{1}{7} > \frac{1}{8}$
-$\frac{1}{70} > \frac{1}{80}$
-$\frac{1}{71} > \frac{1}{80}$
-$\frac{1}{72} > \frac{1}{80}$
-$\vdots$
-$\frac{1}{79} > \frac{1}{80}$
-$\vdots$
-$\frac{1}{700} > \frac{1}{800}$
-$\vdots$
-And continue in this way.
-Obviously some terms are left out of the sequence on the left, which is fine since our sequence of terms on the left is already greater than the right side. Notice the right side can be grouped into $\frac{1}{8} + \frac{1}{8} + \cdots$ because we will have $10$ $\frac{1}{80}$s, $100$ $\frac{1}{800}$s, etc. Thus we are adding up infinitely many $\frac18$s. This is similar to the idea of the divergence of the harmonic series. So, my conclusion is that it diverges. A bunch of other students in my real analysis class have come to the conclusion that it is, in fact, convergent, and launched into a detailed verbal explanation about comparison with a geometric series that I couldn't follow without seeing their work. Is my reasoning, like they suspect, flawed? I can't see how.
-Sorry about the poor format; I'm new to TeX and couldn't figure out how to format a piecewise function (it was telling me my \left delimiter wasn't recognized).
-
-REPLY [3 votes]: Here's another proof that your sum diverges.
-Consider the sum $\sum_{n=1}^\infty (1-\phi(n))/n$. This is the sum of the reciprocals of integers which don't have a 7 in their decimal expansion.
-The number of integers $n$ with $1-\phi(n)=1$ and $1 \le n < 10^k$ is $9^k - 1$. (We can choose each of $k$ decimal digits to be anything but 7, except we don't want to choose all zeroes.) Thus the number of integers $n$ with $1-\phi(n) = 1$ and $10^{k-1} \le n < 10^k$ is $(9^k - 1) - (9^{k-1}-1) = 8 \times 9^{k-1}$.
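-(Not part of the original argument: a quick brute-force check of these counts, as a small Python sketch.)

```python
def count_no_seven(k):
    """Count the integers 1 <= n < 10**k whose decimal expansion avoids the digit 7."""
    return sum(1 for n in range(1, 10 ** k) if '7' not in str(n))

for k in range(1, 5):
    assert count_no_seven(k) == 9 ** k - 1  # matches the formula above
    print(k, count_no_seven(k))
```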
-Therefore
-$$ S_k = \sum_{n=10^{k-1}}^{10^k - 1} {1-\phi(n) \over n} $$
-has $8 \times 9^{k-1}$ nonzero terms; they are each at most $10^{-(k-1)}$. So $S_k \le 8 \times (9/10)^{k-1}$.
-The infinite sum is then
-$$ \sum_{k=1}^\infty S_k \le 8 \times \sum_{k=1}^\infty (9/10)^{k-1} = 8 \times 10 = 80 $$
-and in particular it's finite.
-Now, the harmonic series $\sum_{n=1}^\infty 1/n$ diverges; removing terms whose sum converges (like the sum that I just showed to converge) won't change that. So your series diverges.<|endoftext|>
-TITLE: How to show that you can find two subspaces that don't intersect
-QUESTION [6 upvotes]: Suppose $V, V'$ are subspaces of dimension $d$ of a vector space $X$. Then there is a subspace $W$ of $X$ of codimension $d$ such that $W \cap V = W \cap V' = \{ 0 \}$.
-This can be proved by choosing an explicit basis for $X$ which contains a basis for $V$ and a basis for $V'$ and a basis for $V \cap V'$. On the other hand, there should be a nice way to do this without choosing a basis. Can anyone explain this?
-Disclosure: This came up when I was doing a homework problem. However, I'm just going to use the non-basis-free approach when I write up my answer.
-
-REPLY [5 votes]: Suppose first that $V+ V' = X$. Then the natural map $X/(V\cap V') \to X/V \times X/V'$ is an isomorphism. Choose a subspace of the target of this isomorphism that projects isomorphically onto each factor (i.e. the graph of an isomorphism between the two factors; such an isomorphism exists since the two factors have the same dimension). Its preimage under the natural map is a subspace of $X/(V\cap V')$ which meets each of $V$ and $V'$ trivially. Now choose any subspace $W$ of $X$ that projects isomorphically onto this preimage; this is then a subspace of $X$ that maps isomorphically onto each of $X/V$ and $X/V'$, and hence is a codimension $d$ subspace with the desired property.
-In general (i.e. if $V + V' \neq X$) the above gives a codimension $d$ subspace $W'$ of $V + V'$ meeting $V$ and $V'$ trivially. Choose $W''$ to be any subspace of $X$ which maps isomorphically onto $X/(V + V')$. The sum $W' + W''$ is then a codimension $d$ subspace of $X$ meeting $V$ and $V'$ trivially.<|endoftext|>
-TITLE: geometry and topology
-QUESTION [16 upvotes]: I was wondering what are the differences and relations:
-between geometry and topology;
-between differential geometry and differential topology;
-between algebraic geometry and algebraic topology?
-For example:
-Are they studying different objects? Such as different mathematical structures/spaces?
-Or do they study the same objects, but different aspects/properties of them?
-...
-Reading their wikipedia pages really confuses me.
-Thanks and regards!
-
-REPLY [2 votes]: I had the same question when I first heard the word "topology", while I was studying how to render a mesh on a computer. Here is my current understanding of the difference between topology and geometry. Without loss of generality, take a triangular mesh as an example, since spaces/complexes admit triangulations.
-Topology is a structure, or framework, between the elements that can be found on a complex (e.g. a 2D surface). The complex's skeleton is a set of elements too (e.g. vertices, edges, faces). I always keep in mind that topology is the study of neighborhoods for geometry. This is what I knew at the very beginning; the construction of spaces, manifolds, etc. is a more advanced topic.
-Geometry is the study of the realization of the skeleton, as the small sketch below illustrates.
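-(A tiny Python sketch of this split, not from the original post: the face list is the topology, the vertex coordinates are the geometry.)

```python
# Topology: which vertices bound which triangle (pure combinatorics, no coordinates)
faces = [(0, 1, 2), (0, 2, 3)]

# Geometry: a realization assigning each abstract vertex a point in R^3
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]

# Moving the points changes the geometry (the shape), while the topology
# (which vertices, edges and faces are incident) stays exactly the same
triangles = [[vertices[i] for i in face] for face in faces]
print(triangles)
```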
Realizations are maps from the abstract skeleton to your real-life $\mathbb{R}^3$. The simplest example is the triangular mesh, which is widely used in many industries: the realization assigns to each abstract face the plane (triangle) it occupies. All skeletons exist in the same space simultaneously.<|endoftext|>
-TITLE: is triangle a manifold?
-QUESTION [11 upvotes]: Is a triangle (its sides and the region enclosed by its sides) in a 2D Euclidean space $\mathbb{E}^2$ a manifold? I was thinking of using the identity mapping as its charts, but for each point on the sides of the triangle, there is no neighborhood of it that can be mapped to an open subset of $\mathbb{E}^2$.
-Thanks!
-
-REPLY [18 votes]: Here's a variant of my comment:
-A triangle is a topological manifold with boundary (and so can be made abstractly into a smooth manifold with boundary). But as a subspace of Euclidean space it is not a smooth manifold with boundary. On the other hand, a triangle is a "smooth manifold with corners".
-There are a variety of stratified enhancements of the manifold concept. Manifold with no boundary is the "base" manifold concept. You can add boundary or various other stratifications, and at some point you can let your space degenerate to the point that anything is more or less a "manifold with enough degeneracies..." Common terms for highly-degenerate manifold types are things like "stratified spaces" and "manifolds with singularities" or "pseudomanifolds". Orbifolds are another variation on this thread of ideas.<|endoftext|>
-TITLE: When can a (finite) group be written as the quotient of some other group by its center?
-QUESTION [17 upvotes]: So if I'm given $H$, when can I conclude there is a group $G$ such that $H\cong G/Z(G)$? It's easy to show that non-trivial cyclic groups are not of this form. More generally, any group with the property that all finitely generated subgroups are cyclic (e.g. $\mathbb{Q},\mathbb{Q}/\mathbb{Z}$) cannot be of this form. Are there any others?
-
-REPLY [24 votes]: The groups you are looking for are called capable groups. The determination of all capable groups (even all finite capable groups) is very far from done.
-The earliest result is due to Baer, who characterized exactly which groups that are direct sums of cyclic groups are capable (Baer, Reinhold. Groups with preassigned central and central quotient groups, Trans. Amer. Math. Soc. 44 (1938), 387-412; in fact Baer considered the question of when a group has a specified center and central quotient, each of which is a direct sum of cyclic groups. You obtain the characterization of capable groups as a corollary). For finitely generated abelian groups, this becomes:
-
-Theorem. Let $G$ be a finitely generated abelian group, written as a direct sum of cyclic groups
- $$G = C_{a_1}\oplus\cdots\oplus C_{a_n}$$
- with $a_1|a_2|\cdots|a_n$ ($C_0$ denotes the infinite cyclic group). Then $G$ is capable if and only if $n\gt 1$ and $a_{n-1}=a_n$.
-
-There is a generalization of this result, replacing abelian groups with $p$-groups and replacing the direct sum with the nilpotent product (Capability of nilpotent products of cyclic groups and Capability of nilpotent products of cyclic groups II, J. Group Theory 8, no. 4 (2005), 431-452; and J. Group Theory 10 no. 4 (2007), 441-451):
-
-Theorem. Let $p$ be a prime and let $c$ be a positive integer, $c\leq p$.
If $G$ is the $c$-nilpotent product of cyclic groups
- $$G = C_{p^{a_1}}\coprod^{c}\cdots\coprod^{c} C_{p^{a_n}}$$
- where $\coprod^{c}$ represents the $c$-nilpotent product and $1\leq a_1\leq\cdots\leq a_n$, then $G$ is capable if and only if $n\gt 1$ and $a_n\leq a_{n-1}+\lfloor\frac{c-1}{p-1}\rfloor$.
-
-The last condition is in fact necessary in general for $p$-groups:
-
-Theorem (2005). Let $G$ be a nilpotent $p$-group of class $c\gt 0$, and let $x_1,\ldots,x_n$ be a generating set for $G$ with $x_i$ of order $p^{a_i}$, $a_1\leq a_2\leq\cdots\leq a_n$. If $G$ is capable, then $n\gt 1$ and $a_n\leq a_{n-1}+\lfloor\frac{c-1}{p-1}\rfloor$.
-
-The extra-special capable $p$-groups were not classified until 40 years after Baer (F.R. Beyl, U. Felgner, and P. Schmid. On groups occurring as central factor groups, J. Algebra 61 (1979), 161-177). The only extra-special capable $p$-groups are the nonabelian groups of order $p^3$ and exponent $p$.
-Aside from these classes, there is a full classification of the capable $2$-generated $p$-groups of class two. You can find it in the paper (with Robert F. Morse) Certain homological functors for $2$-generated $p$-groups of class 2, in Computational Group Theory and the Theory of Groups II, Contemporary Mathematics 511 (2010), pp 127-166, American Mathematical Society.
-There are lots of known necessary conditions, but in general sufficient conditions are harder to find. Phillip Hall commented 70 years ago (The classification of prime-power groups, J. Reine Angew. Math. 182 (1940) 130-141) that:
-
-The question of what conditions a group $G$ must fulfil in order that it may be the central quotient of another group $H$, $G\cong H/Z(H)$, is an interesting one. But while it is easy to write down a number of necessary conditions, it is not so easy to be sure that they are sufficient.<|endoftext|>
-TITLE: Why are these two definitions of the Mandelbrot set equivalent?
-QUESTION [22 upvotes]: The definition of the Mandelbrot set that most enthusiasts first encounter is that of the set of all complex numbers $c$ for which the sequence $z_{n+1} = z_n^2 + c$ starting from $z_0 = 0$ does not diverge. For convenience, let us name this familiar quadratic map $P_c(z) = z^2 + c$.
-I have read that an equivalent definition of the Mandelbrot set is as the "connectedness locus" of the Julia sets of $P_c$. That is, $c$ is in the Mandelbrot set if and only if its corresponding filled Julia set $K_c = \{z : P_c^n(z) \not\rightarrow \infty\}$ is connected.
-Why are these definitions equivalent?
-I understand that the equivalence boils down to the statement that $K_c$ is connected if and only if it contains $0$, but I don't know how this is proved.
-
-REPLY [16 votes]: Let $S_1$ be a circle of radius $100$ in the complex plane centered at the origin. Clearly everything outside of $S_1$ diverges to $\infty$ under iteration of $P_c$, so the filled Julia set lies entirely inside of $S_1$.
-Now take the preimage $S_2$ of $S_1$ under $P_c$. This will be a smaller curve (close to a circle, with a radius of approximately $\sqrt{100} = 10$), which again must contain the entire filled Julia set. Iterating this process, we obtain a sequence $S_1,S_2,\ldots$ of closed curves, each of which contains the filled Julia set in its interior. In fact, since any point outside of the filled Julia set goes to $\infty$ under iteration of $P_c$, the intersection of the interiors of the curves $S_n$ is precisely the filled Julia set. (See this picture for an example.
The curves separating the different shades of orange are the iterated preimages of some large circle.)
-Unfortunately, this reasoning is not quite correct, because the preimage of a closed curve under $P_c$ is not always a single closed curve. Sometimes it is one closed curve, and sometimes it is two closed curves with disjoint interiors. If we repeatedly take the preimages of $S_1$, we may find that $S_n$ is a union of a very large number of closed curves! Note, however, that these curves still contain the filled Julia set entirely inside of them.
-Now here is the key bit: the preimage of a curve will have one component if and only if $0$ lies in the interior of the preimage. This is because $0$ is the critical point of the map $P_c$. Thus the preimage of a curve is either a single curve that surrounds $0$, or two curves, neither of which surrounds $0$. (In the latter case, the two curves are actually negatives of one another, i.e. symmetric across the origin.) Therefore, there are exactly two cases:
-
-The point $0$ lies in the filled Julia set. In this case, each preimage $S_n$ must be a single curve, so the intersection of the interiors is connected.
-The point $0$ lies outside the filled Julia set. In this case, some preimage $S_n$ does not encircle $0$, so it must have two components. Then each successive preimage will have twice as many curves as the last, and the resulting filled Julia set is homeomorphic to the Cantor set. (See this picture for an example. The curves separating the different shades of blue are the iterated preimages of some large circle. You ought to be able to see the first step at which the curve separates into two components.)<|endoftext|>
-TITLE: If $A\subseteq\mathbb N$ and $\sum\limits_{a\in A}\frac1a$ converges then $A$ has natural density $0$
-QUESTION [15 upvotes]: In this answer to a question about a series, a theorem was stated:
-
-If $A= \{a_i \}$ is a set such that $\sum_{i = 1}^{\infty} \frac{1}{a_i}$ converges, then $d(A) = 0$, where $d(A)$ is the natural density of the set.
-
-My background in number theory is basically zero and all my attempts to prove this have been utterly unsuccessful; would anyone please help me, or at least provide me with a hint to prove it?
-
-REPLY [2 votes]: Let $c_n = |A\cap [1,n]|$ and $S_n=\sum_{a\in A\cap [1,n]}\frac 1a$. Our hypothesis is that $S_n$ converges.
-Let $S_0=0$.
-Note that $$c_n = \sum_{k=1}^n 1_A(k)=\sum_{k=1}^n k\left(1_A(k)\frac 1k\right)=\sum_{k=1}^n k(S_k-S_{k-1}) = nS_n-\sum_{k=1}^{n-1}S_k$$
-Thus $$\frac{c_n}n = S_n - \frac 1n \sum_{k=1}^{n-1}S_k$$
-By the Cesàro mean theorem, $\frac 1n \sum_{k=1}^{n-1}S_k$ converges to the same limit as $S_n$, hence $\lim_n \dfrac{c_n}n= 0$, which is what needed to be proved.<|endoftext|>
-TITLE: Factorial equaling a polynomial
-QUESTION [5 upvotes]: Are there any positive integer solutions $(n,x)$ to the equation $x(x+1)=n!$ except $(2,1)$ and $(3,2)$?
-If not (as I suspect is the case), how do you prove that?
-In general, is there a way to approach the Diophantine equation $n!=P(x)$ where $P$ is a polynomial?
-
-REPLY [7 votes]: This question was discussed on mathoverflow, and the question is open.<|endoftext|>
-TITLE: Is the failure of MaxSpec to be functorial due to homomorphisms which take non-units to units?
-QUESTION [5 upvotes]: I suspect this is easy, but I'm missing something obvious:
-
-If a ring homomorphism $f:R\rightarrow S$ is such that a maximal ideal $\mathfrak m$ in $S$ does not have $f^{-1}(\mathfrak m)$ maximal in $R$, must there be some non-unit element $x\in R$ whose image under $f$ is a unit in $S$?
-
-My intuition is just that some "localization" must have occurred. In the classic example of $\mathbb{Z}\hookrightarrow\mathbb{Q}$, the zero ideal becomes maximal because 2, 3, 5, ... all become units.
-
-REPLY [11 votes]: Maybe these remarks will be useful:
-The map $R/f^{-1}(m) \to S/m$ is a map from an integral domain to a field, and if the source is not also a field (i.e. if $f^{-1}(m)$ is not itself maximal), then there are certainly non-units in $R/f^{-1}(m)$ that map to units in $S/m$.
-One interpretation of your question is whether, in this context, there must necessarily be a non-unit of $R$ that maps to a unit in $S$.
-The answer is "no" in general. Here is a counterexample:
-Let $R$ be a domain which is not a field, and let $I$ be any subset of $R\setminus \{0\}$ which generates $R\setminus \{0\}$ as a monoid under multiplication. Let $S$ be the polynomial ring $R[\{x_i\}_{i \in I}].$
-If $Q$ denotes the fraction field of $R$, then there is a natural surjection $S \to Q$ given by mapping $x_i$ to $1/i$. (This is where we use the assumption that $I$ generates $R\setminus \{0\}$, so as to get a surjection.)
-The kernel of this surjection is a maximal ideal $m$ of $S$, whose preimage in $R$ is the zero ideal (hence not maximal).
-On the other hand, since $S$ is a polynomial ring over $R$, any non-unit in $R$ remains a non-unit in $S$.
-Some additional comments: if $R$ is Jacobson (e.g. $\mathbb Z$, or a polynomial ring over a field), then any map between finite type $R$-algebras preserves the corresponding MaxSpecs.
-On the other hand, for such domains, the set $I$ is necessarily infinite, and so $S$ is necessarily infinitely generated as an $R$-algebra (and hence no contradiction ensues!).
-But we can get finite type examples by taking $R$ to be e.g. a DVR. Then $I$ can be taken to consist of a single element (the uniformizer), and so $S = R[x]$ is a polynomial ring in a single variable. For example, if $R = \mathbb Z_p$ (the $p$-adic integers) then we have
-$$\mathbb Z_p \hookrightarrow \mathbb Z_p[x] \twoheadrightarrow \mathbb Z_p[x]/(p x - 1) = \mathbb Q_p,$$
-and so $p x - 1$ generates a maximal ideal in $\mathbb Z_p[x]$ whose preimage is not maximal, but every non-unit in $\mathbb Z_p$ remains a non-unit in $\mathbb Z_p[x]$.<|endoftext|>
-TITLE: Product of two cyclic groups is cyclic iff their orders are co-prime
-QUESTION [32 upvotes]: Say you have two groups $G = \langle g \rangle$ with order $n$ and $H = \langle h \rangle$ with order $m$. Then the product $G \times H$ is a cyclic group if and only if $\gcd(n,m)=1$.
-I can't seem to figure out how to start proving this. I have tried with some examples, where I pick $(g,h)$ as a candidate generator of $G \times H$. I see that what we want is for the cycles of $g$ and $h$, as we take powers of $(g,h)$, to interleave such that we do not get $(1,1)$ until the $(mn)$-th power. However, I am having a hard time formalizing this and relating it to the greatest common divisor.
-Any hints are much appreciated!
- -REPLY [34 votes]: $\begin{align}{\bf Hint}\ \ \ - & \Bbb Z_m \times \Bbb Z_n\ \text{is noncyclic}\\[.2em] -\iff\ & \Bbb Z_m \times \Bbb Z_n\ \text{has all elts of order} < mn\\[.2em] -\iff\ & {\rm lcm}(m,n) < mn\\[.2em] -\iff\ & \!\gcd(m,n) > 1 -\end{align}$<|endoftext|> -TITLE: Fitting a parameter dependent matrix to its eigenvalues -QUESTION [7 upvotes]: The essence of my question is, if I have a Hermitian matrix that depends linearly on a set of parameters and I have an estimate of its eigenvalues, is there a "simple" way to determine the values of the parameters? Ideally, I would also like to have some measure of the goodness of the fit and the degree of variation within the parameters. - -As a materials physicist, I often have to create a simple quantum mechanical model from either experimental data or a more complex calculation. For the smaller problems (8x8, with 10 params), the parameters can be found by painstakingly working through the various relationships among the parameters, due to symmetry, etc. But, this method is specific to each problem and does not scale well to larger problems. For instance, one system I'm looking at would require a 20x20 matrix with 21 parameters, and that is without including spin! Alternatively, there is the brute force method of simulated annealing, which involves taking a random walk through the parameter space, and slowly decreasing the step size in the hopes that the calculation will get stuck in the global minimum. Neither of these methods is particularly appealing, so I'd like some ideas on how to approach this in a consistent manner. - -REPLY [2 votes]: Given: -$ M = \sum_{i=1}^{n} t_i M_i$ -where $t_i$ are the unknown parameters, $M_i$ are known matrices, and $n$ is the number of parameters. -Let $N$ be the size of the matrix $M$ and $\lambda_j$, $j = 1, \ldots, N$ its eigenvalues; then one can use the identities: -$ \operatorname{tr}(M^k) = \sum_{j=1}^{N} \lambda_j^k$ -to obtain $N$ polynomial equations in the parameters. For example, for $k=2$ the equation has the form: -$ \sum_{i=1}^{n} \sum_{j=1}^{n} t_i t_j \operatorname{tr}(M_i M_j) = \sum_{j=1}^{N} \lambda_j^2$ -The right-hand side is known, and the polynomial coefficients of the left-hand side are traces of products of the known submatrices. -Now, the parameters can be found in principle from a numerical solution of the polynomial equation system.<|endoftext|> -TITLE: Solving a differential equation related to $\log (1+t)$ -QUESTION [5 upvotes]: How does one find the solution of -$$\dfrac{dy}{dx}\left( 1-\left( 1-t\right) x-x^{2}\right) -\left( 1+h\left( 1+t\right) +x\right) y=0\quad ?$$ -where $h$ is an integer constant and $t$ is a real constant between $0$ and $1$. -(In Roger Apéry, Interpolations de Fractions Continues et Irrationalité de certaines Constantes, Bull. section des sciences du C.T.H.S., n.º3, pp. 37-53, the solution is -$$y=(1-x)^{-1-h}(1+tx)^{h}.)$$ -Note: The sequence $(v_{h,n})$ in $y=f_{h}(x)=\displaystyle\sum_{n\ge 0}v_{h,n}x^n$ satisfies a recurrence related to $\log (1+t)$. - -Added: Copy of the original with the equation and solution - - -Addendum 2: I transcribe the comment in the 1st answer: "the corrected differential equation above agrees with the recurrence in your excerpt so there is clearly a typo in the printed differential equation." - -REPLY [7 votes]: If the given solution is correct then the posted differential equation is wrong.
Instead it should be as follows, with the corrected terms underlined: -$$y^{\prime }\ \left( 1-\left( 1-t\right) x - \underline{tx^{2}}\right) -\left( 1+h\left( 1+t\right) + \underline{tx}\right)\ y\ =\ 0$$ -which of course is trivially integrable since -$$ \frac{y'}y\ =\ \frac{1+h}{1-x}\ +\: \frac{ht}{1+tx} $$ -Update: the corrected differential equation above agrees with the recurrence in your excerpt so there is clearly a typo in the printed differential equation.<|endoftext|> -TITLE: "Cat" modulo natural isomorphism? -QUESTION [14 upvotes]: I'm learning category theory by self-study. I have a couple of texts, and they both talk about how we ought to try not to think so much about the equality between objects in categories. Rather, the important relation is isomorphism. -Okay, sure. -Then they go on to talk about "Cat", the category of (small) categories. One of them even segues into it by considering "categories themselves as structured objects. The `morphisms' between them that preserve their structure are called functors." But then it becomes clear that equality of functors depends on equality of objects, which isn't part of the structure of categories! (Or at least, a part we're not supposed to think about.) -There are natural transformations, which inherit equality from the underlying equality of morphisms in the codomain category. They give rise to natural isomorphisms between functors. Treating functors, modulo natural isomorphism, as morphisms in a category of categories, seems to me to be the obvious "Cat". -In fact, not only is this not "Cat", but I couldn't find it among the loads of exotic examples of categories I'm given. Apparently it wasn't even deemed worthy of mention. This is a bit disappointing. Doesn't it at least have a standard name and snappy acronym? Can someone point me somewhere where I can learn about its basic properties? Or is there something defective about it that I've missed? - -REPLY [9 votes]: You are correct that in practice one does not care so much whether two functors are equal, but rather whether they are naturally isomorphic. The difficulty is that typically one does not want to completely forget about the natural isomorphism either! (Which is what happens if one quotients out by natural isomorphisms as you suggest.) -A typical example (which I hope will make sense to you) is to consider for each topological space $X$ the category of vector bundles on $X$. So we have a category $Vect_X$. If $f: X \to Y$ is a continuous map of spaces, -and $\mathcal V$ is a vector bundle on $Y$, then one can pull back $\mathcal V$ to form -a vector bundle $f^* \mathcal V$ on $X$. So we get a functor $f^*: Vect_Y \to Vect_X$. -If now $g: Y \to Z$ as well, then one sees that $(g f)^* $ is naturally isomorphic to -$f^* g^*$, say by some natural isomorphism $c_{f,g}: (g f)^* \cong f^* g^*.$ -Morally, one would like to say that $X \mapsto Vect_X$ and $f \mapsto f^* $ gives a contravariant functor from the category of topological spaces to the category of categories Cat. In practice, because we don't have equality between $(gf)^* $ and $f^* g^* $, we don't get such a functor, although we do get a functor into your suggested category "Cat".
-The problem is that in practice, one wants to remember the natural isomorphisms $c_{f,g}$, -which satisfy some important properties: for example, if $h: Z \to W$ is a third map, -then $f^* c_{g,h} \circ c_{f, h g} = c_{f,g}h^* \circ c_{gf, h}.$ (If you write this out, it is a commutative square that relates the various ways to pull back $\mathcal V$ by -$h$, $g$, and $f$, taking into account the associative law $(hg)f = h(g f)$.) -So really, one wants to work in a more sophisticated structure than either Cat or "Cat", -namely a structure in which the objects are categories, the morphisms are functors, -and in which we add an explicit extra layer of structure, so-called 2-morphisms, which are natural isomorphisms between functors. One then develops a theory in which one morally -regards functors as equal if they coincide up to natural isomorphism (i.e. up to a diagram involving a 2-morphism), but one also keeps track of the 2-morphisms. -The resulting structure is called a 2-category, and is part of the study of higher category theory. (Everything written above should provide some background for understanding Martin Brandenburg's answer.) -This theory has a lot in common with homotopy theory: in topology, one can pass to a category in which objects are spaces and maps are continuous maps modulo homotopy, but in lots of applications one wants to remember not just that maps are homotopic, but one actually wants to remember the homotopy; often then there are homotopies between homotopies, -homotopies between homotopies between homotopies, and so on. Similarly in category theory one can introduce not just the notion of 2-category, but notions of n-categories, in which -there are 3-morphisms between the 2-morphisms, etc., up to n-morphisms. -Your structure "Cat" is analogous to passing to the homotopy category in topology; it is interesting, but forgets information one often wants to remember. -If you search for "higher category theory", you will find an enormous amount of material. -Good references are the n-Category Café and the nLab. Another place to look is at the -various manuscripts on Jacob Lurie's web-page (at Harvard). What you will find is a rather intricate and rapidly evolving theory, blending category theory and homotopy theory in a fascinating (although sometimes daunting!) way. -In summary, your idea and your question are far from misguided, but in fact are pointing at one of the most active areas of modern research in category theory and related areas!<|endoftext|> -TITLE: Vortex Voronoi diagram? -QUESTION [9 upvotes]: Suppose there are a finite number of disjoint unit-radius disks in the -plane, each spinning clockwise or counterclockwise at the same -angular velocity. -The plane is filled with a thin fluid layer, -and the disks can be viewed as spinning fan blades -determining vectors of fluid motion tangent to the disks. -Are the resulting flow and vector field throughout the plane known? -My initial intuition is that there should be something like a -Voronoi diagram demarcating boundaries of regions of influence. -But in exploring a bit I find it may even be nontrivial to -determine the flow between just two counter-rotating vortices. -For example, the following image was -computed by Paul Nylander -based on a paper by -O.S. Kerr and J.W. Dold, -"Periodic Steady Vortices in a Stagnation Point Flow," -J. Fluid Mech., 276, 307-325 (1994). - -As I am quite unschooled in this topic, pointers to -relevant literature might suffice. Thanks! -Edit1.
I've now asked a revised version of this question on Math Overflow, -incorporating the clarifying suggestions of Rahul. I might hit a fluid dynamics expert there. -Edit2. Thanks to Rahul and David Bar Moshe here, and Willie Wong and Bob Terrell -on MO, I have a much broader understanding of the problem, and could likely compute a numerical -solution if needed. I appreciate the help! - -REPLY [5 votes]: Edit: It appears an identical idea has, with far greater detail, already been given to you by jvkersch. I am humbled. I should also point out that my example below, which was only meant as an illustration, would not be a steady-state solution in a physical fluid, because the interaction between the vortices themselves would cause them to move. -David Bar Moshe's idea reminded me of some work in vector field visualization which does indeed use stagnation points to divide the fluid domain into something like "regions of influence". I believe the initial paper which introduced the idea was Helman and Hesselink's "Visualizing Vector Field Topology in Fluid Flows" (PDF copy). -In our case, because we assumed that the flow is incompressible and irrotational, in a generic configuration the velocity can only be zero at saddle points, where the flow points inward along two directions and outward along two directions. Streamlines along these directions are called the separatrices of the saddle point. If you place two particles close together on different sides of a separatrix and let them follow the flow, they will diverge at the saddle point and follow disparate long-term trajectories. So these separatrices divide space into regions where the global topology of the streamlines is different. -Here's an example I cooked up in Matlab because I thought it would look pretty. There are four point vortices in a square, whose circulations are -1, -1, -2 and 1 going clockwise from top left. Here's what the direction of the velocity field looks like: - -In the diagram below, the separatrices divide space into regions around vortices (and clusters thereof). In each distinct region, the streamlines wind around a particular set of vortices. You can see the saddle points as the points where four arcs come together. I believe this sort of diagram is what you were hoping for when you asked for something like a Voronoi diagram around the vortices. - -(For a complete picture, there ought to be arrows on the separatrices to indicate the direction of flow, but I couldn't figure out how to do that in Matlab.)<|endoftext|> -TITLE: Orientability of $\mathbb{RP}^3$ -QUESTION [11 upvotes]: I was wondering if there is a nice way to see that $\mathbb{RP}^{3}$ is orientable without using tools of algebraic topology, like homology. -The only thing I could think of was to argue that $\mathbb{RP}^{3}=\mathbb{R}^3 \cup \mathbb{RP}^{2}$ and perhaps you could argue that to get back to any starting position you have to cross the $\mathbb{RP}^{2}$ boundary, but I'm pretty sure that what I'm thinking is nonsense. -This was a question on the homework for one of my topics courses and I plan on asking the professor about it tomorrow, but I was curious to see if anyone had any interesting ways of thinking about or picturing this space. - -REPLY [5 votes]: $RP^{2k-1}$ is a quotient of a codimension 1 submanifold with induced orientation ($S^{2k-1}$) in $R^{2k}$, by an orientation-preserving transformation of $R^{2k}$.
Because there is a consistent notion of normal vector (i.e., of "inside" and "outside") on the submanifold, preserving orientation of the surrounding manifold also preserves orientation on the submanifold. Transporting the orientation along a path between two points of the submanifold that are identified in the quotient can be viewed as transporting the orientation in the surrounding manifold, and this has to be consistent (have positive determinant on the local frames) because of the orientability of the surrounding manifold.<|endoftext|> -TITLE: On Zeta function zeros in the critical strip -QUESTION [6 upvotes]: I have been reading about the Riemann zeta function and have been thinking about it for some time. -Has anything been published regarding an upper bound for the real part of zeta function zeros as the imaginary part of the zeros tends to infinity? -Thanks - -REPLY [3 votes]: De la Vallée-Poussin's theorem was improved by Korobov and Vinogradov in the 1950s, and I believe their result is the strongest known asymptotic zero-free region; cf. The Riemann zeta-function: Theory and applications by Aleksandar Ivić. One can find more recent papers, but my impression is that they don't touch the main exponents, so the relatively tractable problems seem to be improving the constants or giving explicit bounds on them.<|endoftext|> -TITLE: Integral classes in de Rham cohomology -QUESTION [23 upvotes]: If $M$ is a differentiable manifold, De Rham's theorem gives for each positive integer $k$ an isomorphism -$Rh^k : H^k_{DR}(M,\mathbb R) \to H^k_{singular}(M,\mathbb R)$. On the other hand, we have a canonical map $H^k_{singular}(M,\mathbb Z) \to H^k_{singular}(M,\mathbb R)$. Allow me to denote (this is not standard) its image by $\tilde {H}^k_{singular }(M,\mathbb Z)$. My question is: how do you recognize if, given a closed differential $k$-form $\omega$ on $M$, its image -$Rh^k([\omega]) \in H^k_{singular}(M,\mathbb R) $ is actually in -$\tilde {H}^k_{singular}(M,\mathbb Z)$. -I would very much appreciate a concrete answer, ideally backed up by one or more explicit calculations. Thank you for your attention. - -REPLY [2 votes]: This has nothing to do with Poincaré duality or even de Rham cohomology. -Let $X$ be any space. By the universal coefficient theorem the canonical map $$H^p(X,\mathbb{Z}) \to \operatorname{Hom}_{\mathsf{Ab}}(H_p(X,\mathbb{Z}),\mathbb{Z})$$ -is surjective. Moreover (also by the universal coefficient theorem if you like) we have -$$H^p(X,\mathbb{R}) \cong H_p(X,\mathbb{R})^*$$ -Together we get that $\alpha \in H^p(X,\mathbb{R})$ comes from an integral cohomology class iff the induced linear form $\tilde{\alpha}:H_p(X,\mathbb{R}) \to \mathbb{R}$ is a linear extension of a form $H_p(X,\mathbb{Z}) \to \mathbb{Z}$. In other words: - -A real cohomology class is integral iff the corresponding linear form takes integer values when restricted to the lattice $H_p(X,\mathbb{Z})/{\operatorname{Torsion}} \subset H_p(X,\mathbb{R})$<|endoftext|> -TITLE: What are the most important questions or areas of study in the philosophy of mathematics? -QUESTION [11 upvotes]: This question is intended to complement What mathematical questions or areas have philosophical implications outside of mathematics? - -REPLY [3 votes]: George Lakoff and Rafael Núñez started studying embodied mathematics pretty recently.
The idea is that our ideas of mathematics are inextricable from our humanity as opposed to being Platonic truths, and can all be understood in terms of metaphors for real-world concepts and our learning/acquisition of these metaphors. To take a simple example, whenever we add numbers, our brains will always essentially be adding things to a pile, because that's what addition is to us, if I've understood it correctly. -Their theories really haven't caught on among mathematicians, in part because they're not mathematicians themselves, which is honestly a pretty big weakness among philosophers of math. In my opinion, a lot of what they say is really silly (my brain has nothing to do with $\mathbb{R}$ being the unique completion of $\mathbb{Q}$ as well as the largest Archimedean field), but there is a kernel of truth there: we don't study math by taking logical step after logical step; we study it by thinking about things we have ways of comprehending. For instance, we live in three-dimensional space, and so it's hard to study higher-dimensional things that we can't visualize. I think it would be a useful philosophical project to understand how the mathematicians who study such objects conceptualize them, and how that relates to our human nature, but that would obviously require a larger mathematical background. -Another important question is the origin of mathematical taste and beauty. This is probably more important to mathematicians than to outsiders who don't have as much of a sense of it. (Yet I'm convinced that a cool proof can be appreciated by anyone, if it's explained properly!) I don't know who, if anyone, has written about this, but it definitely exists and is worthy of study.<|endoftext|> -TITLE: Is the notion of density really needed to define integration on nonorientable manifolds? -QUESTION [8 upvotes]: I am trying to understand, in as simple terms as possible: - -How to define integration for non-orientable manifolds, and -why it is impossible to do so using only differential forms. - -In particular, I've seen some discussion of using "densities" instead of $n$-forms for integration, but am not really clear on why densities are required. In other words, is it really impossible to define integration on nonorientable manifolds using forms alone? -I am of course aware that any $n$-form must vanish somewhere on a nonorientable manifold, so we cannot find a volume form, hence cannot use the standard definition of integration. I think the reason I'm not finding this answer satisfying is that it is a bit tautological: we can't define integration with respect to volume forms because there are no volume forms. But why must we define integration with respect to a (global) volume form in the first place? Is there really no other way to do it using locally-defined forms? Thinking of a manifold as a collection of local charts is common in geometry, and I'm having trouble understanding why this approach doesn't work in the case of integration. - -REPLY [3 votes]: Differential forms inherently measure orientation. The value of a differential form $\omega \in \bigwedge^n(M)$ on an $n$-parallelotope, i.e. $\omega(X_1, \ldots, X_n)$, is interpreted as the oriented volume of the parallelotope spanned by $X_1, \ldots, X_n$. The orientation is a necessary part of the interpretation, since differential forms are alternating: -$$ -\omega(X_1, X_2, \ldots, X_n) = - \omega(X_2, X_1, \ldots, X_n).
-$$ -Hence, when working with differential forms, we should expect things to go wrong if we throw the notion of orientation out the window. -Concretely, say we want to integrate a differential form $\omega$ over the image $\phi(U)$ of a single coordinate chart. Say that $\omega$ is written in coordinates on $U$ as $\omega = a\, dx_1 \wedge \cdots \wedge dx_n$. The usual way to define the integral is by pulling back into Euclidean space: -\begin{align*} -\int_{\phi(U)}\omega -&= \int_{\phi(U)} a \, dx_1 \wedge \cdots \wedge dx_n \\ -&= \int_U \phi^*(a \, dx_1 \wedge \cdots \wedge dx_n) \\ -&= \int_U (a \circ \phi) \det d\phi \, dx_1 \cdots dx_n. -\end{align*} -This final expression involves the Jacobian determinant of the coordinate transform --- the factor picked up by change-of-variable --- and its sign depends on whether $\phi$ is orientation-preserving. Hence, if we flip orientations, we flip the sign of the integral. So we must have an orientation on our manifold in order for integration to be well-defined! -Clearly, if we want to be able to integrate without worrying about orientation, we either need to -(a) change the definition of $\int_{\phi(U)} \omega$, or -(b) integrate against something besides differential forms. -It seems you are arguing that we should try (a). But as long as you want your definition of integration to make any sense (e.g. be independent of things like choice of charts or partition of unity), by pursuing (a) you'll probably end up arriving at something that is morally more like (b), since we invented differential forms to be orientation-measuring objects in the first place. In fact, you may end up reinventing the exact concept of density that you were trying to avoid! -On that note, it may comfort you to know that any differential form yields an $s$-density $|\omega|^s$ in the natural way, as -$$ -|\omega|^s(X_1, \ldots, X_n) = |\omega(X_1, \ldots, X_n)|^s. -$$ -So, moving from differential forms to densities is really quite natural --- they're just an orientation-forgetting generalization of differential forms. The machinery involved in defining them is a bit more complicated, but that's the price we pay for dropping orientation. - -Looking at it another way, it may be helpful to replace the word "possible" in your question with the word "useful". After all, any construction (that is not logically inconsistent) is possible in mathematics, but most constructions are not useful. Making that substitution: - -In other words, is it really not useful to define integration on nonorientable manifolds using forms alone? - -No, it's not particularly useful. See above --- orientation is baked into the definition of differential forms and their integration. Attempting to wrangle forms into playing nicely with orientation-less structures won't be pretty. We'll cause a lot more problems than we solve by trying to do that. If we want to forget about orientation, we should integrate against something else. - -I've seen some discussion of using "densities" instead of n-forms for integration, but am not really clear on why densities are useful. - -Densities are useful precisely because they solve the problem we're talking about here --- they are the closest thing to differential forms that we can integrate without having to worry about orientation. -I hope this clears things up!<|endoftext|> -TITLE: If a graph can be colored with max $4$ colors, is it planar?
-QUESTION [6 upvotes]: There's a theorem that every planar graph can be colored with $4$ colors in such a way that no $2$ adjacent vertices have the same color. Is the converse true as well? - -REPLY [20 votes]: No. Consider $K_{3,3}$, the graph with two sets of 3 vertices each such that every vertex in one set is connected to every vertex in the other. It's not planar but can be colored with just 2 colors. More generally, take any dense bipartite graph - it's still 2-colorable, but far from planar. -A picture of $K_{3,3}$ (along with $K_5$):<|endoftext|> -TITLE: Definitions for limsup and liminf -QUESTION [6 upvotes]: I was wondering what general spaces the concepts of limsup and liminf can apply to. -Is a complete lattice one of them? And how about a metric space? -What are limsup and liminf specified with respect to? A subset? A sequence/net/filter base? -How many kinds of definitions of limsup and liminf are there in these various cases? Are they equivalent? If not, what are the conditions for them to be equivalent? - -REPLY [4 votes]: The notion of a $\limsup$ of a filtered directed set makes sense. Namely, let $A$ be a filtered directed set and $x_\alpha, \alpha \in A$ be an $A$-indexed family in $\mathbb{R}$. Then one can define the $\limsup$ as the infimum of $\sup_{\beta > \alpha} x_{\beta}$ over all $\alpha$. This makes sense for the $\liminf$ as well. -One needs the ordering of the range set to define the limsup, though.<|endoftext|> -TITLE: Can you find a 2-form not written as the wedge of two 1-forms? -QUESTION [7 upvotes]: I was under the impression that all 2-forms are the wedge $(\wedge)$ of two 1-forms. Is it possible to have a 2-form that you can't write as $A\wedge B$ with $A,B$ 1-forms? - -REPLY [11 votes]: Yes, it is possible. (And you should find an example yourself: I will not deprive you of the joy of finding it :) )<|endoftext|> -TITLE: How does hocolim relate to Hom? -QUESTION [8 upvotes]: In a usual category $\mathcal{C}$ one can pull the colim out of the Hom like this:$\DeclareMathOperator{\Hom}{Hom}\DeclareMathOperator{\colim}{colim}\DeclareMathOperator{\hocolim}{hocolim}\DeclareMathOperator{\Ho}{Ho}\DeclareMathOperator{\holim}{holim}$ $$\Hom\nolimits_\mathcal{C}(\colim A_i,B)=\lim \Hom\nolimits_\mathcal{C}(A_i,B)$$ -I am looking for a corresponding statement for hocolims - let's say in simplicial sets, but if there are more general statements, that's even better. -E.g. I could imagine -$$\Hom\nolimits_{\mathcal{C}}(\hocolim A_i,B)=\lim \Hom\nolimits_{\Ho(\mathcal{C})}(A_i,B)$$ -- maybe one needs to have $B$ fibrant and the $A_i$ cofibrant here, i.e. that the Homs on the right are $\mathbb{R}\mathrm{Hom}$s. -Using the internal Hom in simplicial sets I could also imagine versions like this: -$$\Hom(\colim A_i,B)=\holim \Hom(A_i,B)$$ -$$\Hom(\colim A_i,B)=\holim \mathbb{R}\!\Hom(A_i,B)$$ -What is the right statement and what is the place to learn this hocolim-yoga? -Thanks! -N.B. - -REPLY [7 votes]: A formula like the one you are asking for could be the following (Bousfield-Kan, "Homotopy limits, completions and localizations", chapter XII, proposition 4.1): -$$ -\mathrm{hom}_* (\mathrm{hocolim}\ \mathbf{A}, B) \cong \mathrm{holim}\ \mathrm{hom}_* (\mathbf{A}, B) \ .
-$$ -Here $B$ is a pointed simplicial set, $\mathbf{A} : I \longrightarrow \Delta^{\mathrm{o}}\mathbf{Set}_*$ a functor from a small category $I$ to the category of pointed simplicial sets, and for pointed simplicial sets $A, B$ -$$ -\mathrm{hom}_* (A,B) \in \Delta^{\mathrm{o}}\mathbf{Set}_* -$$ -is the pointed simplicial function space whose $n$-simplices are maps in $\Delta^{\mathrm{o}}\mathbf{Set}_* $ -$$ -\left( \Delta [n] \times A \right) / \left( \Delta [n] \times * \right) \longrightarrow B -$$ -(op.cit., chapter VIII, 4.8). -I didn't go through the details, but, if it annoys you, it seems to me that you can drop the "pointed" thing everywhere by just deleting "pointed", $*$, and $\Delta [n] \times * $ in what Bousfield-Kan say.<|endoftext|> -TITLE: A question related to the card game "Set" -QUESTION [8 upvotes]: The card game Set led to the following question. Let's call a subset $A$ of $(\mathbb{Z}/3)^n$ dependent if there is $\{x,y,z\}\subset A$ with $x+y+z=0$. (So unlike the case of linear dependence we are not allowing any coefficients here). -Let $f(n)$ denote the maximal size of an independent subset of $(\mathbb{Z}/3)^n$. -Is there an explicit expression / a recursion for $f(n)$? Can anything be said about its asymptotic behaviour (like $f\in O(c^n)$ for some minimal $c$)? -As $\{0,1\}^n$ is independent, we know that $f(n)\ge 2^n$. And so $c\ge 2$. -The card game Set deals with the case $n=4$, and $f(4)=20$ if I remember correctly. - -REPLY [2 votes]: Recently, Jordan Ellenberg and Dion Gijswijt have proved, using the ideas of Croot-Lev-Pach, that $f(n) \in O(2.756^n)$. -This is the best bound we have so far. See the preprints https://quomodocumque.files.wordpress.com/2016/05/cap-set.pdf and http://homepage.tudelft.nl/64a8q/progressions.pdf. Also see the blog post "Mind Boggling: Following the work of Croot, Lev, and Pach, Jordan Ellenberg settled the cap set problem!" by Gil Kalai. -The upper bound they have obtained can also be stated as "a cap set has size bounded above by the number of monomials $x_1^{e_1}x_2^{e_2}\cdots x_n^{e_n}$ of total degree at most $2n/3$ that satisfy $0 \leq e_i \leq 2$ for all $i$". -The beautiful proof can be summarised as follows (take $q = 3$ for cap sets). -Let $q$ be an odd prime power and let $\mathcal{P}_d(n, q)$ denote the set of all polynomials $f$ in $\mathbb{F}_q[x_1, \dots, x_n]$ that satisfy $\deg f \leq d$ and $\deg_{x_i} f \leq q - 1$ for all $i$, i.e., the set of reduced polynomials of degree at most $d$. The set of all reduced polynomials with no restriction on the total degree is simply denoted by $\mathcal{P}(n, q)$. There is a vector space isomorphism between $\mathcal{P}(n, q)$ and the space of all $\mathbb{F}_q$-valued functions on $\mathbb{F}_q^n$ given by evaluating the polynomial. -For a $3$-term arithmetic progression-free subset $A$ of $\mathbb{F}_q^n$, the sets $A + A = \{a + a' : a, a' \in A, a \neq a'\}$ and $2A = \{a + a : a \in A\}$ are disjoint. -Let $U$ be the subspace of $\mathcal{P}(n, q)$ consisting of all the polynomials that vanish on the complement of $2A$. -Then $\dim U = |2A| = |A|$ (since $q$ is odd) and thus we try to find upper bounds on $\dim U$. -For any integer $d \in \{0, 1, \dots, n(q - 1)\}$ let $U_d$ be the intersection of $\mathcal{P}_d(n, q)$ with $U$. -Then since $\langle U, \mathcal{P}_d(n, q) \rangle \leq \mathcal{P}(n, q)$ we have $\dim U \leq \dim \mathcal{P}(n, q) - \dim \mathcal{P}_d(n, q) + \dim U_d$.
Now the crux of the proof is Proposition 1 in Jordan's preprint, which says that every (reduced) polynomial of degree at most $d$ which vanishes on $A + A$ has at most $2 \dim \mathcal{P}_{d/2}(n, q)$ non-zeros in $2A$. -Since $A + A$ is a subset of the complement of $2A$, we see that every element of $U_d$, when seen as an element of $\mathbb{F}_q^{\mathbb{F}_q^n}$ via the evaluation isomorphism, has at most $2 \dim \mathcal{P}_{d/2} (n, q)$ non-zero coordinates, and thus $\dim U_d \leq 2 \dim \mathcal{P}_{d/2}(n, q)$. -Therefore, we get $|A| = \dim U \leq q^n - \dim \mathcal{P}_d(n, q) + 2 \dim \mathcal{P}_{d/2}(n, q) = \dim \mathcal{P}_{n(q - 1) - d} (n, q) + 2\dim \mathcal{P}_{d/2}(n, q)$. -Finally observe that $n(q - 1) - d = d/2$ when $d = 2(q - 1)n/3$ which gives us the bound $$|A| \leq 3 \dim \mathcal{P}_{(q - 1)n/3}(n, q)$$ (whenever $(q - 1)n/3$ is an integer). -The reason why this is a good bound is that $\dim \mathcal{P}_{d}(n, q)$ is bounded above by $q^{\lambda n}$ for some $\lambda < 1$ when $d < (q - 1)n/2$ (see the preprints).<|endoftext|> -TITLE: Why is this coin-flipping probability problem unsolved? -QUESTION [50 upvotes]: You play a game flipping a fair coin. You may stop after any trial, at which point you are paid in dollars -the percentage of heads flipped. So if on the first trial you flip a head, you should stop and earn \$100 -because you have 100% heads. If you flip a tail then a head, you could either stop and earn \$50, -or continue on, hoping the ratio will exceed 1/2. This second strategy is superior. -A paper by Medina and Zeilberger (arXiv:0907.0032v2 [math.PR]) says that it is an unsolved -problem to determine if it is better to continue or stop after you have flipped 5 heads in 8 trials: accept \$62.50 or hope for more. It is easy to simulate this problem and it is clear -from even limited experimental data that it is better to continue (perhaps more than 70% chance you'll improve over \$62.50). -My question is basically: Why is this difficult to prove? Presumably it is not that difficult -to write out an expression for the expectation of exceeding 5/8 in terms of the cumulative binomial distribution. -Update (5 Dec 2013). -A paper on this topic was just published: -Olle Häggström, Johan Wästlund. -"Rigorous computer analysis of the Chow-Robbins game." -(pre-journal arXiv link). -The American Mathematical Monthly, Vol. 120, No. 10, December 2013. -(Jstor link). -From the Abstract: - -"In particular, we confirm that with 5 heads and 3 tails, stopping is optimal." - -REPLY [5 votes]: This seems to be related to Gittins Indices. Gittins Indices are a way of solving these kinds of optimal stopping problems for some classes of problems, and basically give you a way of balancing how much you are expected to gain given your current knowledge and how much more you could gain by risking obtaining more information about the process (or probability of flipping heads, etc). -Bruno<|endoftext|> -TITLE: Lower hemicontinuity of the intersection of lower hemicontinuous correspondences -QUESTION [6 upvotes]: I have been stumped for a long time by this exercise (3.12(d)) from Stokey and Lucas's Recursive Methods in Economic Dynamics. Would greatly appreciate any hints.
-Let $\phi: X \to Y$ and $\psi: X \to Y$ be lower hemicontinuous correspondences (set-valued functions), and suppose that for all $x \in X$ -$$\Gamma(x)=\{y \in Y: y \in \phi(x) \cap \psi(x)\}\neq \emptyset$$ -Show that if $\phi$ and $\psi$ are both convex-valued, and if $\mathrm{int} \phi(x) \cap \mathrm{int} \psi(x) \neq \emptyset$, then $\Gamma(x)$ is lower hemicontinuous at $x$. -[A correspondence $\Gamma: X \to Y$ is said to be lower hemicontinuous at $x \in X$ if $\Gamma(x)$ is nonempty and if, for every $y \in \Gamma(x)$ and every sequence $x_n \to x$, there exists $N \geq 1$ and a sequence $\{y_n\}_{n=N}^\infty$ such that $y_n \to y$ and $y_n \in \Gamma(x_n)$, all $n \geq N$. -Intuitively this means that the graph of $\Gamma(x)$ cannot suddenly broaden out.] -EDIT: We can assume that $X$ and $Y$ are subsets of $\mathbf{R}^n$. - -REPLY [2 votes]: Here is a somewhat detailed outline of an argument that I think works. As this is a homework problem, some of the pieces of the argument do need to be filled in. -Assume we're in $\mathbb{R}^m$. For fixed $y \in \Gamma(x)$, the fact that $\Gamma(x)$ has a nonempty interior means that there are $m$ points $z_1, z_2, \ldots, z_m$ in the interior of $\Gamma(x)$ such that these $m$ points and $y$ together are affinely independent. Thus you can take sufficiently small balls around each of these $m+1$ points such that the balls do not intersect and that any set consisting of one point from each ball is also affinely independent. Let $z_0 = y$. Since $\phi$ is lower hemicontinuous, for each $z_i$ there exists a sequence $z_{i_n} \to z_i$ and an $N_i$ such that $z_{i_n} \in \phi(x_n)$ and $z_{i_n}$ is inside that small ball around $z_i$ for all $n \geq N_i$. For each $n \geq \max \{N_i\}$, construct the convex hull $C_n$ of $\{z_{0_n}, z_{1_n}, z_{2_n}, \ldots, z_{m_n}\}$. Since $\phi$ is convex-valued, $C_n$ is a subset of $\phi(x_n)$. Do the same thing for each $n$ for the $\psi$ function to obtain sets $D_n$. Let $S_n = C_n \cap D_n$. The intersection of the convex hulls of two sets of $m+1$ affinely independent points in $\mathbb{R}^m$ that are pairwise close to each other must be nonempty. (Consider the supporting hyperplanes.) Let $y_n$ be the point in $S_n$ closest to $y$. Since the extreme points of $S_n$ converge to the extreme points of the convex hull of $\{z_0, z_1, z_2, \ldots, z_m\}$, $S_n$ converges as a set to the convex hull of $\{z_0, z_1, z_2, \ldots, z_m\}$. Thus the point in $S_n$ closest to $y$ $(= z_0)$ must converge to $y$; i.e., $y_n \to y$.<|endoftext|> -TITLE: Isolated zeros on closure of a domain -QUESTION [10 upvotes]: Let $f$ be an analytic function on the open unit disk $D$. Suppose also that $f$ is bounded. -Since $f$ is bounded I believe that $f$ can be continuously extended to the closed unit disk. -I know that the zeros of $f$ in the open disk $D$ are isolated. Are the zeros of $f$ in the closed unit disk also necessarily isolated? - -REPLY [5 votes]: Jonas Meyer's answer goes much deeper, but let me say the following as well. -Even if your (nonzero -- you didn't say that, but of course you meant it!) function does extend continuously to the closed unit disk it need not have isolated zeros. Indeed, suppose that $f$ has infinitely many zeros in the open unit disk. Then the zero set of the (putative) extension of $f$ to the closed unit disk is an infinite subset of a compact space, so must have an accumulation point: i.e., there will be at least one point on the boundary which is a nonisolated zero.
-In fact, by the Weierstrass Factorization Theorem, the zero set of an analytic function on the open unit disk can be any discrete subset without accumulation points in the open disk. You can choose such a set to have the entire boundary $|z| = 1$ of the disk contained in its closure. We conclude that there exists a nonzero analytic function $f$ on the open unit disk such that -- if it admits a continuous extension to the closed unit disk -- this extension is identically zero on the boundary. -I would be interested to know if this construction can be made unconditional, i.e., whether any analytic function with such a zero set can be extended continuously to the boundary.<|endoftext|> -TITLE: Calculate combinations of characters -QUESTION [9 upvotes]: My first post here...not really a math expert, but certainly enjoy the challenge. -I am writing a random string generator and would like to know how to calculate how many possible combinations there are for a particular pattern. -I am generating a string of 2 numbers followed by 2 letters (lowercase), e.g. 12ab -I think the calculation would be (breaking it down): -number combinations: 10*10=100 -letter combinations: 26*26=676 -So the number of possible combinations is 100*676=67600, but this seems a lot to me so I'm thinking I am off on my calculations!! -Could someone please point me in the right direction? -Thx - -REPLY [3 votes]: You are right. That is the most basic/fundamental procedure for counting in combinatorics. -It's sometimes called the rule of product, the multiplication principle, or the fundamental counting principle, and it can be visualized as a tree - -REPLY [2 votes]: Some nomenclature: when you say "2 numbers", you really mean "2 digits". Also, you need to specify if the digits can be anything or not (for example, do you allow leading zeroes?). -If each of the two digits can be anything, 0 through 9, and each of the letters can be anything, a through z, then your computation is correct. If you think about it, you can see why the number is not off: any particular pair of letters has 100 numbers that can go before them to make the string. Each particular letter going first has 26 possible "second letters", and each of them has 100 possible pairs of digits to go before them. So there are already 2600 possible strings of the form xxxa. Another batch for xxxb, etc. They add up very quickly.<|endoftext|> -TITLE: Can this sum be simplified: $ \sum_{k=0}^{n-1} { n -1 \choose k } (-2)^{k} (2n - k)! $? -QUESTION [9 upvotes]: Can this expression be further simplified: $ \sum_{k=0}^{n-1} { n -1 \choose k } (-2)^{k} (2n - k)! $? This is the coefficient of $x^{2n}$ in the formal power series expansion of $(1-2x)^{n-1} \times \sum_{ k \geq 0} k! x^k$. -Motivation: I came across this when trying to solve a problem using the inclusion-exclusion principle; I am not mentioning the original problem because I am interested in this sum as an independent problem.
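-A quick numerical cross-check of the claimed coefficient interpretation (a Python sketch, not part of the original post; the function names are mine): it compares the sum with the coefficient of $x^{2n}$ extracted from the truncated product by direct convolution.
-    from math import comb, factorial
-
-    def direct_sum(n):
-        # sum_{k=0}^{n-1} C(n-1, k) * (-2)^k * (2n - k)!
-        return sum(comb(n - 1, k) * (-2) ** k * factorial(2 * n - k)
-                   for k in range(n))
-
-    def coeff_x2n(n):
-        N = 2 * n
-        # Coefficients of (1-2x)^(n-1) and of sum_k k! x^k, truncated at degree N.
-        p = [comb(n - 1, j) * (-2) ** j if j < n else 0 for j in range(N + 1)]
-        q = [factorial(k) for k in range(N + 1)]
-        # Coefficient of x^N in the product, by Cauchy convolution.
-        return sum(p[j] * q[N - j] for j in range(N + 1))
-
-    assert all(direct_sum(n) == coeff_x2n(n) for n in range(1, 9))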
- -REPLY [8 votes]: As a way of restating Bill's answer, what you have can in fact be expressed in terms of the so-called (generalized) Bessel polynomial: -$$y_n(x;a)=(n+a-1)_n \left(\frac{x}{2}\right)^n {}_1 F_1 \left(-n;-2n-a+2;\frac{2}{x}\right)$$ -(the Kummer hypergeometric series degenerates to a polynomial here because both the numerator and denominator parameters are negative integers) -Your original expression, then, in terms of the (generalized) Bessel polynomial, is -$$(-2)^{n-1}(n+1)!y_{n-1}(-1;4)$$ -The references in the DLMF can point you to papers where the (generalized) Bessel polynomials have been studied; for an elementary treatment, see Chihara's An Introduction to Orthogonal Polynomials.<|endoftext|> -TITLE: What's the value of this Viète-style product involving the golden ratio? -QUESTION [57 upvotes]: One way of looking at the Viète (Viete?) product -$${2\over\pi} = {\sqrt{2}\over 2}\times{\sqrt{2+\sqrt{2}}\over 2}\times{\sqrt{2+\sqrt{2+\sqrt{2}}}\over 2}\times\dots$$ -is as the infinite product of a series of successive 'approximations' to 2, defined by $a_0 = \sqrt{2}$, $a_{i+1} = \sqrt{2+a_i}$ (or more accurately, their ratio to their limit 2). This allows one to see that the product converges; if $|a_i-2|=\epsilon$, then $|a_{i+1}-2|\approx\epsilon/4$, and so the terms of the product differ from $1$ by roughly $4^{-i}$. -Now, the sequence of infinite radicals $a_0=1$, $a_{i+1} = \sqrt{1+a_i}$ converges exponentially to the golden ratio $\phi$, and so the same sort of infinite product can be formed: -$$\Phi = {\sqrt{1}\over\phi}\times{\sqrt{1+\sqrt{1}}\over\phi}\times{\sqrt{1+\sqrt{1+\sqrt{1}}}\over\phi}\times\dots$$ -and an equivalent proof of convergence goes through. The question is, what's the value of $\Phi$? The usual proof of Viète's product by way of the double-angle formula for $\sin$ doesn't translate over, and from what I know of the logistic map it seems immensely unlikely that there's any function conjugate to the iteration map here in the same way that the trig functions are suitably conjugate to the version in the Viète product. Is there any other approach that's likely to work, or is $\Phi$ just unlikely to have any formula more explicit than its infinite product? - -REPLY [10 votes]: What you're basically looking for is a function $f(x)$ such that $f(2x)=f(x)^2-1$ and $f(0)=\phi$, from there: -\begin{align} -2f'(2x)&=2f(x)f'(x)\\ -\frac{f'(2x)}{f'(x)}&=f(x)\\ -\frac{f'(x)}{f'(x/2)}&=f(x/2)\\ -\frac{f'(x)}{f'(x/2^n)}&=\prod_{k=1}^n f(x/2^k) -\end{align} -and, given a value $x_0$ such that $f(x_0)=1$, -\begin{align} -\Phi&=\prod_{k=1}^{\infty} \frac{f(x_0/2^k)}{\phi}\\ - &=\lim_{n\rightarrow\infty}\phi^{-n} \prod_{k=1}^n f(x_0/2^k)\\ - &=\lim_{n\rightarrow\infty}\phi^{-n} \frac{f'(x_0)}{f'(x_0/2^n)}\\ - &=\lim_{h\rightarrow0}h^\alpha \frac{f'(x_0)}{f'(hx_0)} -\end{align} -where $\alpha=\frac{\ln(\phi)}{\ln(2)}$. Unfortunately, I have no idea how to get $f(x)$, and the fact that $f(x)=1+O(x^{1+\alpha})$ does not make finding this function look easy.<|endoftext|> -TITLE: Is there a quick proof as to why the vector space of $\mathbb{R}$ over $\mathbb{Q}$ is infinite-dimensional? -QUESTION [163 upvotes]: It would seem that one way of proving this would be to show the existence of non-algebraic numbers. Is there a simpler way to show this? - -REPLY [5 votes]: Since $\pi$ is transcendental over $\mathbb{Q}$,
the set $\{1, \pi, \pi^{2},\cdots\}$ is linearly independent over $\mathbb{Q}$.<|endoftext|> -TITLE: Good resources (book or otherwise) to learn/study basic Combinatorics -QUESTION [20 upvotes]: I'm currently studying basic Combinatorics for a college course and my professor is awful (and that is being generous). -Therefore I'm looking for good resources to learn basic Combinatorics so that I can prepare for his exams and ace the course. -Thanks for any help - -REPLY [12 votes]: IMO there are two great references at two different levels. The first is Brualdi. This book is awesome. It is super clear, super straightforward, and very readable by yourself. -The second is much more advanced, but even better! It is Aigner's GTM on the subject. This book is fairly advanced, but rarely have I seen such a beautiful exposition of any mathematical subject. His proofs are impressively elegant, and his exercises are very interesting. -I hope this helps.<|endoftext|> -TITLE: Change of Variable (conformal map) -QUESTION [5 upvotes]: Suppose $f$ is an analytic function defined on the unit disk, $D$. I want to evaluate -$\int_{D} f(\omega) dA(\omega)$ -using a change of variable. Suppose $\phi$ is a conformal map of $D$ onto itself. -Does -$\int_{D} f(\omega) dA(\omega) = \int_{D} f(\phi(z)) |\phi'(z)|^{2} dA(z) $? -Here $\phi'$ is the derivative of $\phi$. - -REPLY [4 votes]: Because $|\phi'(z)|^2$ is the determinant of the Jacobian of $\phi$, this follows from the substitution formula for open sets in $\mathbb{R}^n$, as seen for example in this Wikipedia article, which contains further references for the more general result. Technically, to apply the result directly you would break up each side into real and imaginary parts.<|endoftext|> -TITLE: Matrices commute if and only if they share a common basis of eigenvectors? -QUESTION [63 upvotes]: I've come across a paper that mentions the fact that matrices commute if and only if they share a common basis of eigenvectors. Where can I find a proof of this statement? - -REPLY [2 votes]: An elementary argument. -Summary: show that each eigenspace of $A$ has a basis such that each basis vector is contained in one of the eigenspaces of $B$. This basis is then the common basis we are looking for. -Suppose $A,B$ are both diagonalizable and they commute. -Now let $E_{\lambda_i}$ be eigenspaces of $A$ for each distinct eigenvalue $\lambda_i$ of $A$. -Now let $F_{s_i}$ be eigenspaces of $B$ for each distinct eigenvalue $s_i$ of $B$. -Now I claim that $E_{\lambda_i}$ (of say dimension $m$) has a basis $v_1^i,...,v_m^i\in E_{\lambda_i}$ such that each $v_r^i$ is in one of $B$'s eigenspaces $F_{s_j}$--this would imply these $v_r^i$ are eigenvectors of $B$ and $A$ simultaneously. Apply this to all eigenspaces $E_{\lambda_i}, i=1,...,n$. The collection of all $v_r^i$ then becomes a common basis for $A$ and $B$ as required. -To show this claim, first pick an arbitrary basis $w_1,...,w_m$ of $E_{\lambda_i}$. Each $w_i$ can be written as a sum of vectors where each vector is in one of $B$'s eigenspaces $F_{s_j}$. This is a subtle point so let me repeat: for each $i=1,...,m,$ $w_i=z_1^i+...+z_{l_i}^i, l_i\le m$ and $z_k^i\in F_{s_j}$ for some $j$. This is trivially true because the direct sum of $B$'s eigenspaces is the entire space. -Now we make a second claim that all $z_k^i\in E_{\lambda_i}$.
Then the collection of all $z_k^i$ spans $E_{\lambda_i}$ and thus the collection can be reduced to a basis $v_1,...,v_m$ where each $v_j$ is contained in one of $B$'s eigenspaces, as required by the first claim. -Note that $E_{\lambda_i}$ is invariant under $B$ since $A,B$ commute. The second claim follows from the following fact: if $\sum_{i=1}^N z_i \in S$, where the $z_i$ are eigenvectors of $B$ with distinct eigenvalues and $S$ is a $B$-invariant subspace, then $z_i\in S,\forall i$. We check this by induction on $N$. It is trivially true for $N=1$. Then suppose $Bz_1=\lambda z_1$. Since $\lambda(z_1+...+z_N)\in S$ and $B(z_1+...+z_N)\in S$, we have $B(z_1+...+z_N)-\lambda(z_1+...+z_N)=a_2z_2+...+a_Nz_N\in S$ for some constants $a_i\neq 0$--the constants are non-zero because we assumed the $z_i$ all have distinct eigenvalues. Then applying the inductive hypothesis gives $z_2,...,z_N\in S$. This would imply $z_1\in S$ as well. This finishes the proof.<|endoftext|> -TITLE: Conditional convergence, Mertens theorem -QUESTION [7 upvotes]: If $\sum a_n$ and $\sum b_n$ both converge and one of them converges absolutely, then the Cauchy product $\sum c_n$ (where $c_n = \sum_{k = 0}^n a_k b_{n - k}$) converges to $\sum a_n \sum b_n$, by Mertens' theorem. -Now, if both converge conditionally then the product does not have to converge, as $a_n = b_n = (-1)^n/n$ shows. -My question now is: What if $\sum a_n$ and $\sum b_n$ both converge conditionally and $\sum c_n$ converges: is it always true that $\sum c_n$ converges to the product? -By the way, this is not homework; I'm already past the real analysis part. - -REPLY [7 votes]: This follows readily from Abel's convergence theorem: if $\sum_0^\infty a_n$ -converges then -$$\sum_0^\infty a_n=\lim_{x\to1^-}\sum_0^\infty a_n x^n.$$<|endoftext|> -TITLE: Does a continuous and 1-1 function map Borel sets to Borel sets? -QUESTION [12 upvotes]: Suppose $f: \mathbb{R} \to \mathbb{R}$ is a continuous function which is 1-1, then does $f$ map Borel sets onto Borel sets? - -REPLY [2 votes]: Note, you also need the fact that $f(\mathbb{R})$ is Borel. This fortunately is so, as $\mathbb{R}$ is $\sigma$-compact, and the continuous image of a compact set is compact, thus closed (since we are working in Hausdorff spaces), thus Borel.<|endoftext|> -TITLE: What does E mean in 9.0122222900391E-5? -QUESTION [58 upvotes]: I often find this at the bottom of pages. - -Page generated in 0.00013899803161621 - -Sometimes, I come across - -Page generated in 9.0122222900391E-5 - -What does that time mean? -I tried searching Wikipedia for E and maths but found the e mathematical constant. My guess is E stands for Exponential and -5 is the power it is raised to. And the displayed time is a really small number. But that doesn't make sense when compared to the other time in the question. 0.00013899803161621 is bigger than 9.0122222900391E-5. -If it means $x$ times $10^{-5}$, then 9.0122222900391E-5 will be 0.000090122222900391 which is smaller than 0.00013899803161621. What does E stand for? - -REPLY [7 votes]: I have always taken the E or e to mean "exponent of 10." This construction parses in all modern computing languages as an IEEE754 double or single precision number.<|endoftext|> -TITLE: Geometric multiplicity of an eigenvalue -QUESTION [12 upvotes]: Geometric multiplicity of an eigenvalue of a matrix is the dimension of the corresponding eigenspace. The algebraic multiplicity is its multiplicity as a root of the characteristic polynomial.
-It is known that the geometric multiplicity of an eigenvalue cannot be greater than the algebraic multiplicity. This fact can be shown easily using the Jordan normal form of a matrix. -I was wondering if there is a more elementary way to prove this fact, possibly longer but without using the Jordan normal form? (This is an exercise in Kreyszig's book on functional analysis, and given the author's style, I suspect that he did not intend the solution to use Jordan form, because otherwise I guess he would have given a hint about that. But I might be wrong.) - -REPLY [20 votes]: You don't need the Jordan form: suppose the geometric multiplicity of $\lambda$ is $k$, and let $\gamma=\{\mathbf{v}_1,\ldots,\mathbf{v}_k\}$ be a basis for the corresponding eigenspace. Extend the basis $\gamma$ to a basis $\beta$ for $F^n$, and let $Q$ be the change-of-basis matrix. Then the characteristic polynomials of $A$ and $Q^{-1}AQ$ are the same. The upper left $k\times k$ block of $Q^{-1}AQ$ is simply $\lambda I_k$, and the $(n-k)\times k$ block under it is all zeroes. So the characteristic polynomial of $Q^{-1}AQ$ is a multiple of $(\lambda - t)^k$, hence the algebraic multiplicity of $\lambda$ is at least $k$.<|endoftext|> -TITLE: $n \mid (a^{n}-b^{n}) \ \Longrightarrow$ $n \mid \frac{a^{n}-b^{n}}{a-b}$ -QUESTION [7 upvotes]: How does one prove that if $n \mid (a^{n}-b^{n})$ then $ \displaystyle n \mid \frac{a^{n}-b^{n}}{a-b}$, where $a,b, n \in \mathbb{N}$? -What I thought of is to consider $$(a-b)^{n} \equiv a^{n} + (-1)^{n}b^{n} \ (\text{mod} \ n)$$ and if we suppose that $n$ is odd then we have $$(a-b)^{n} \equiv a^{n} -b^{n} \ (\text{mod} \ n)$$ and since $n \mid (a^{n} - b^{n})$ we have $$(a-b)^{n} \equiv 0 \ (\text{mod} \ n) $$ -I think I am far away from the conclusion of the problem, but this is what I could work on regarding the problem. - -REPLY [14 votes]: Let $\,c = (a^n\!-b^n)/(a\!-\!b).\,$ To show $\,n\mid c\,$ it suffices to show $\,p^k\mid n\Rightarrow\, p^k\mid c\,$ for all primes $p$. -If $\,\ p\nmid a\!-\!b\ $ then $\ p^k\mid n\mid a^n\!-b^n\!= (a\!-\!b)\:\!c\,\Rightarrow\ p^k\mid c\:$ by iterating Euclid's Lemma, -else $\, p\mid a\!-\!b\ $ so $\ p^k{\,\LARGE \mid}\, \dfrac{\color{#90f}{a^{\large p}\!-b^{\large p}}}{\color{#0a0}{a-b}}\,\dfrac{a^{\large p^2}\!\!-b^{\large p^2}\!\!}{\color{#90f}{a^{\large p}-b^{\large p}}}\cdots \dfrac{\color{#c00}{a^{\large p^k}\!\!-b^{\large p^k}}}{a^{\large p^{k-1}}\!\!-b^{\large p^{k-1}}}\, \dfrac{\color{#0a0}{a^{\large n}\!-b^{\large n}}}{\color{#c00}{a^{\large p^k}-b^{\large p^k}}} = \color{#0a0}{\dfrac{a^{\large n}-b^{\large n}}{a-b}} = c$ -because the first $\,k\,$ factors have the form $\,Q= \dfrac{A^{\large p}\!-B^{\large p}\!\!}{A-B}\,$ so each is divisible by $\,p,\,$ since $\,p\mid A\!-\!B,\,$ thus -$\qquad\ \ \ \bmod p\!:\ \color{#c00}{A}\equiv B\,\Rightarrow\, Q = \color{#c00}A^{p-1}\!+\color{#c00}A^{p-2}B+\cdots+\!B^{p-1}\!\equiv\ pB^{p-1}\!\equiv 0$ -Remark $ $ For generalizations of the above (multiplicative telescopic) lifting of $p$-divisibility see LTE = Lifting The Exponent and related results.<|endoftext|> -TITLE: Is the identity functor the terminal object of the category of endofunctors on $C$? -QUESTION [6 upvotes]: It seems to me not, since this would seem to imply that for all functors $F$ and all objects $A$ in $C$ there exists a morphism $F(A) \to A$ (making all functors co-pointed?).
However, intuitively it seems like the identity functor acts like a terminal object; a monad $M$ on $C$ is a monoid in $[C, C]$ where the "unit" is a natural transformation $\eta : I \to M$, while for a monoid $S$ in Set the unit is a function $e : 1 \to S$. So am I misunderstanding something, or are my intuitions leading me astray? - -REPLY [7 votes]: The difference between those two examples is that in $\textbf{Set}$ the monoidal operation is the categorical product (so the identity object is the terminal object), whereas this is not true in the category of endofunctors on $\mathbf{C}$. (I believe the latter has a product if and only if $\mathbf{C}$ does, and then it is the pointwise product. It follows that the terminal object, if it exists, is the functor which sends all objects to $\mathbf{1}$ and all morphisms to the unique morphism $\mathbf{1} \to \mathbf{1}$. In particular, it's not the identity functor.)<|endoftext|> -TITLE: Have all numbers with "sufficiently many zeros" been proven transcendental? -QUESTION [23 upvotes]: Any number between $0$ and $1$ can be expressed in base g as $\sum _{k=1}^\infty {\frac {D_k}{g^k}}$, where $D_k$ is the value of the $k^{th}$ digit. If we were interested in only the non-zero digits of this number, we could equivalently express it as $\sum _{k=1}^\infty {\frac {C_k}{g^{Z(k)}}}$, where $Z(k)$ is the position of the $k^{th}$ non-zero digit base $g$ and $C_k$ is the value of that digit (i.e. $C_k = D_{Z(k)}$). -Now, consider all the numbers of this form $(\sum _{k=1}^\infty {\frac {C_k}{g^{Z(k)}}})$ where the function $Z(k)$ eventually dominates any polynomial. Is there a proof that any number of this form is transcendental? -So far, I have found a paper demonstrating this result for the case $g=2$; it can be found here. - -REPLY [22 votes]: The answer to your question is yes. All numbers of the form $x=\sum_{k\ge1}\frac{C_k}{g^{Z(k)}}$ for Z(k) eventually dominating any polynomial are indeed transcendental. As in the question, $g$ and $C_k$ are integers with $1 \le C_k \le g-1$. In fact, the methods used by the paper linked in the question generalize in a quite straightforward way to handle this situation. I don't know of this result appearing in any published paper, but note the following points. - -We can say straight-away that x is irrational. This follows from Z(n) eventually dominating any linear function of n, so its base-g expansion is not eventually periodic. -If Z(n+1)/Z(n) is unbounded then, as noted in the comments, x will be a Liouville number so, by Liouville's theorem, it is transcendental. For any N > 0, Z(n+1) ≥ NZ(n) for infinitely many n. The fact that it is a Liouville number follows from taking $p=\sum_{k=1}^nC_kg^{Z(n)-Z(k)}$ and $q=g^{Z(n)}$, giving the rational approximation $|x-p/q|< g^{1+Z(n)-Z(n+1)}\le gq^{-N}$. -By the Thue–Siegel–Roth theorem, if Z(n+1)/Z(n) ≥ 2+ε infinitely often (any ε > 0) then x will be transcendental. The theorem says that an irrational algebraic number has only finitely many rational approximations $\vert x-p/q\vert\le cq^{-2-\epsilon}$ for any fixed c,ε > 0. That x has infinitely many such rational approximations follows in the same way as for point 2 above. Every Z(n+1) ≥ (2+ε)Z(n) gives a rational approximation $\vert x-p/q\vert< gq^{-2-\epsilon}$, so x cannot be algebraic. This covers the case where Z(n) grows exponentially at rate $a^n$ for any $a > 2$, but is not strong enough to cover cases such as $Z(n) = 2^n$. -If Z(n+1)/Z(n) > 1+ε infinitely often (any ε > 0) then x will be transcendental.
This is a consequence of the Roth-Ridout theorem, from the 1957 paper Rational approximations to algebraic numbers (not free access, but is also quoted in the freely available paper An explicit version of the theorem of Roth-Ridout, Theorem 2). The Roth-Ridout theorem strengthens the Thue-Siegel-Roth theorem, implying in particular that, for irrational algebraic x, there are only finitely many rational approximations $\vert x-p/q\vert\le cq^{-1-\epsilon}$ when the prime factors of q all belong to some fixed finite set P. In our case, we can let P be the set of prime factors of g and the result follows in the same way as for point 3 above. This shows that x is transcendental if Z(n) grows exponentially. (Thanks to Mike Bennett over at mathoverflow for pointing out the Roth-Ridout theorem.) -A paper by Bugeaud, On the b-ary expansion of an algebraic number, shows that, if x is irrational and algebraic then for large enough n, there are at least (log n)^(1+1/(ω+4)) (loglog n)^(-1/4) nonzero digits among the first n digits of the base g expansion. Here, ω is the number of prime divisors of g. This shows that, if Z(n) ≥ exp(cn^α) for large n and any fixed c > 0, α > 1/(1+1/(ω+4)) then x is transcendental. -After reading through the details of the paper linked in the original question, I note that they do generalize to the base g ≥ 2 case. So x is transcendental as long as Z(n) eventually dominates any polynomial. I don't know of any published paper proving this, but posted my proof on mathoverflow where this question was also asked. I have re-read through this proof a few times to be sure, and am now confident that it is correct (modulo small typos, etc). Also, Bugeaud posted an answer to the question agreeing that the method generalizes. Using #(x,n) to denote the number of non-zero digits in the first n digits of the base g expansion of x, the precise statement is as follows. - -If x is irrational and satisfies a rational polynomial of degree D then #(x,n) ≥ cn^(1/D) for a positive constant c and all large enough n. - - -In fact, you can easily remove the "large enough" from this statement, although I find it convenient stated in this way. The proof I wrote out is a generalization of the methods used in the paper linked in the question. There is one change worth noting though. Whereas the paper made use of the Thue-Siegel-Roth theorem at one point (Theorem 3.1), I used Liouville's theorem. This means that the constant c appearing in the statement above is not quite as good (if you go through the proof and work it out explicitly) although, in any case, the paper linked in the question could have obtained a better value by using the Roth-Ridout theorem instead. Using Liouville's theorem does have two advantages though. Firstly, it is elementary. A proof of Liouville's theorem is given in the linked Wikipedia article. Secondly, it is effective. That is, not only can the constant c be calculated but you can also work out exactly what "large enough" means for n in the statement above (which will depend on the polynomial satisfied by x). The strengthened versions of Liouville's theorem such as Thue-Siegel-Roth and Roth-Ridout are not effective.<|endoftext|> -TITLE: Are there integer solutions to $9^x - 8^y = 1$? -QUESTION [17 upvotes]: This came up in proving non-regularity of a certain language (powers of 2 over the ternary alphabet). Any clue to the above equation could help me move forward. -Edit: -Of course, $x = 1, y = 1$ is a solution. I am looking for non-trivial solutions.
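-A quick exhaustive check (a sketch in Python; the search bound 200 is an arbitrary choice, not something from the thread) agrees that nothing beyond the trivial solution appears in a small range, using Python's exact big-integer arithmetic:
-    # Brute-force search for integer solutions of 9^x - 8^y = 1 in a box.
-    solutions = [(x, y)
-                 for x in range(1, 200)
-                 for y in range(1, 200)
-                 if 9**x - 8**y == 1]
-    print(solutions)  # within this range: [(1, 1)] only
-Of course this proves nothing by itself; the reply below settles the question.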
- -REPLY [7 votes]: Equation $\rm\ 3^{2x}-2^{3y}=1\ $ is an instance of various special cases of Catalan's Conjecture. -First,$\ $ making the specialization $\rm\ \ \: z,\:p^n = 3^x,2^{3y}\ $ below yields $\rm\ x = 1 = y\ $ as desired. -LEMMA$\ \ $ $\rm z^2 - p^n = 1\ \ \Rightarrow\ \ z,\:p^n = \:3\ ,\:2^3\ $ or $\ 2,\:3\ $ for $\rm\ \ z,\:p\:,n\in \mathbb N,\ \ p\: $ prime -Proof $\rm\ \ \ (z+1)\:(z-1)\: =\: p^n\ \ \Rightarrow\ \ z+1 = p^{\:j},\ \ z-1 = p^k\ $ for some $\rm\ j,\:k\in \mathbb N$ -$\rm\quad \:\Rightarrow\ \ \ \ 2\ =\ p^{\:j} - p^k\ =\ p^k\: (p^{\:j-k}-1) \ \Rightarrow\ p^k=2\ $ or $\rm\ p^k = 1 \ \Rightarrow\ \ldots$ -Second, it's simply the special case $\rm\: X = 3^x,\ Y = 2^y\: $ of $\rm\ X^2 - Y^3 = 1\:,\: $ solved by Euler in 1738. Nowadays one can present this solution quite easily using elementary properties of $\rm\ \mathbb Z[\sqrt[3]{2}]\:$, e.g. see p. 44 of Metsankyla: Catalan's Conjecture: another old diophantine problem solved. See also this MO thread and this MO thread and Schoof: Catalan's Conjecture. Note also that Catalan equations are a special case of the theory of generalized Fermat (FLT) equations, e.g. see Darmon's exposition.<|endoftext|> -TITLE: discrete version of Laplacian -QUESTION [5 upvotes]: Suppose $P(x,y)$ gives transition probabilities of a random walk. I've seen $Pf=f$ being called the discrete version of Laplace's equation. In what sense are they analogous? - -REPLY [8 votes]: Here is a low-brow answer. If you accept that the discrete version of the second derivative of a function $f$ is $f(x+1) - 2f(x) + f(x-1)$, then the discrete Laplacian, say in two dimensions, is $\Delta f(x, y) = \frac{f(x+1, y) + f(x-1, y) + f(x,y+1) + f(x,y-1)}{4} - f(x,y)$. But the first term is just the transition probabilities of a random walk on $\mathbb{Z}^2$ where one moves to each of the four horizontally or vertically adjacent neighbors with equal probability. A similar statement is true in $n$ dimensions. -More generally one can define a discrete Laplacian on any graph which mimics the usual Laplacian. In fact there is an entire textbook by Doyle and Snell dedicated to working out potential theory on graphs. - -REPLY [6 votes]: Some analogies between discrete and continuous Laplacians were discussed in MO: -https://mathoverflow.net/questions/33602/what-is-a-reasonable-finitary-analogue-of-the-statement-that-harmonic-functions-a/ -If you search for "discrete harmonic" or "discrete Laplacian" there may be more there. -(added: the analogy is not as simple as the fact that the lattice Laplace operator converges to the continuous Laplacian as the spacing shrinks to zero. There is a rotational symmetry of the continuous Laplacian which is an essential part of the geometric theory. For the analogies to be meaningful, at least some of the geometry of the continuum story should be visible in the lattice or graph Laplacians, and for this eigenvalue inequalities, Hodge theory, vector bundles, zeta functions and other constructs are considered for graphs, in addition to the more easily perceived analogies with harmonic theory and Brownian motion.) - -REPLY [5 votes]: The infinitesimal generator for Brownian motion in $\mathbb R^n$ (and, in general, in a Riemannian manifold) is $\tfrac12\Delta$. Your matrix $P$ plays exactly the same rôle in the discrete version. -As for the equation: the equation $\tfrac12\Delta f=0$ is, in the case of Brownian motion in $\mathbb R^n$, the equation for stationary states.
Your equation is the same thing in the discrete version.<|endoftext|> -TITLE: What does the "potential" in "Gromov Witten potential" mean -QUESTION [6 upvotes]: In "Gromov Witten potential", what does "potential" mean here? What does the whole thing mean in physics? Thanks! - -REPLY [3 votes]: The Gromov-Witten potential is a generating function for Gromov-Witten numbers, which are integrals of Gromov-Witten classes over the moduli space. According to Kontsevich and Manin these numbers define all the classes under some rather general assumptions. This potential defines a structure of formal Frobenius manifold on the cohomology. This is an example of a cohomological field theory. The standard reference is the book Frobenius manifolds, quantum cohomology and moduli spaces, written by Yuri Manin.<|endoftext|> -TITLE: Bell numbers and moments of the Poisson distribution -QUESTION [12 upvotes]: Using generating functions one can see that the $n^{th}$ Bell number, i.e., the number of all possible partitions of a set of $n$ elements, is equal to $E(X^n)$ where $X$ is a Poisson random variable with mean 1. Is there a way to explain this connection intuitively? - -REPLY [14 votes]: One way may be to use these facts. You can decide if this is intuitive enough or not. :) - -$B_n = \sum_{k=0}^n \left\{n \atop k \right\}$, where $\left\{n \atop k \right\}$ is a Stirling number of the second kind. (The number $\left\{ n \atop k \right\}$ counts the number of ways to partition a set of $n$ elements into $k$ sets.) -Stirling numbers of the second kind are used to convert ordinary powers to falling powers via $x^n = \sum_{k=0}^n x^{\underline{k}} \left\{n \atop k \right\}$, where $x^{\underline n} = x(x-1)(x-2) \cdots (x-n+1)$. -The factorial moments of a Poisson$(1)$ distribution are all $1$; i.e., $E[X^{\underline{n}}] = 1$. - -Putting them together yields -$$E[X^n] = \sum_{k=0}^n E[X^{\underline{k}}] \left\{n \atop k \right\} = \sum_{k=0}^n \left\{n \atop k \right\} = B_n.$$ -Facts 1 and 2 are well-known properties of the Bell and Stirling numbers. Here is a quick proof of #3. The second step is the definition of expected value, using the Poisson probability mass function. The second-to-last step is the Maclaurin series expansion for $e^x$ evaluated at $1$. -$$E[X^{\underline{n}}] = E[X(X-1)(X-2) \cdots (X-n+1)] = \sum_{x=0}^{\infty} x(x-1) \cdots (x-n+1) \frac{e^{-1}}{x!}$$ -$$= \sum_{x=n}^{\infty} x(x-1) \cdots (x-n+1) \frac{e^{-1}}{x!} = \sum_{x=n}^{\infty} \frac{x!}{(x-n)!} \frac{e^{-1}}{x!} = \sum_{y=0}^{\infty} \frac{e^{-1}}{y!} = e/e = 1.$$<|endoftext|> -TITLE: Quotient topologies and equivalence classes -QUESTION [14 upvotes]: I'm currently studying the notion of a quotient topology. The one thing I'm having trouble with understanding is what we're actually doing to the points as we're identifying them. Say we have $X = [0,1] \times [0,1]$ in $E^2$ (Euclidean $2$-space) with the subspace topology and we partition $X$ into: - -The set $\{(0,0),(1,0),(0,1),(1,1)\}$ of four corner points -sets of pairs of points $(x,0),(x,1)$ where $0 < x < 1$ -TITLE: Are there addition formulas for the Riemann Zeta function? -QUESTION [8 upvotes]: In particular for two real numbers $a$ and $b$, I'd like to know if there are formulas for $\zeta (a+b)$ and $\zeta (a-b)$ as a function of $\zeta (a)$ and $\zeta (b)$.
-The closest I could find online is a paper by Harry Yosh "General Addition Formula for Meromorphic Functions Derived from Residue Theorem" in some little-known journal, but unfortunately I have no access to it and don't know if it would answer my question. Maybe this is well-known and I didn't search correctly... -Any help appreciated, thanks! - -REPLY [8 votes]: There is a functional equation relating values of $\zeta$ at $s$ and $(1-s)$. -If there were formulas relating values along a continuous family of 1-dimensional curves, such as $x + y = C$, one would get a differential equation for $\zeta$, or some comparably strong constraint. It is known that $\zeta$ does not satisfy any ODE's with algebraic functions as coefficients. Of course there could be gamma functions or other more complicated coefficients but prospects for this kind of additional structure in $\zeta$ seem dim. There are no extra functional equations for finite field zetas, for example.<|endoftext|> -TITLE: Find all positive integers such that $\lfloor\sqrt{n}\rfloor \mid n$ -QUESTION [7 upvotes]: How does one find all positive integers $n$ such that $$\lfloor\sqrt{n}\rfloor \mid n\ ?$$ -What I did was to substitute $n=t^{2}$ so that the equation becomes $\lfloor t\rfloor \mid t^{2}$. But this means that we want $t^{2} = k \lfloor t\rfloor$, where $k \in \mathbb{N}$. I don't really know what to do from here. By the way, this problem is in Apostol. - -REPLY [20 votes]: I suppose you want to find all possible numbers such that $\displaystyle [\sqrt{n}] \mid n$. -Assume $\displaystyle n$ is not a perfect square; then there must be some $\displaystyle k$ such that -$\displaystyle k^2 < n < (k+1)^2$. We have that $\displaystyle k = [\sqrt{n}]$. -The only numbers in the range $\displaystyle k^2 < n < (k+1)^2$ which are divisible by $k$ are $\displaystyle k^2+k$ and $\displaystyle k^2+2k$. -Thus the numbers $\displaystyle n$ such that $\displaystyle [\sqrt{n}] \mid n$ are of the form -$\displaystyle k^2, k^2+k, k^2+2k$<|endoftext|> -TITLE: How can I read this mathematical sentence aloud in English? -QUESTION [9 upvotes]: A map $s : \mathbb{N} \to X$ is a computable sequence in $(X,\nu_X)$ when there exists a computable map $f : \mathbb{N} \to \mathbb{N}$ such that $s(n) = \nu_X(f(n))$ for all $n \in \mathrm{dom}(\nu_X)$. -My best guess would be, "A map s taking N onto X is a computable sequence in the ??? (X, nu??) when there exists a computable map f taking N onto N such that s at n equals ??? of f ??? for all n elements of the domain of nu ???." -I am searching for a way to read it aloud that encodes all the elements of the sentence into speech. - -REPLY [5 votes]: David Speyer wrote how I would say it in practice, in a context where I was writing it on a black/whiteboard. Here's how I would say it in a pub or walking down the street: -"Let's define a 'representation map for X' [or your own preferred jargon] to be just some partial function nu, from the natural numbers to X. Then we can define a computable sequence for that representation map nu to be any function s, from the natural numbers to X, which [is consistent with / agrees with / extends] the composition of nu with a computable function f on the natural numbers." -When using natural language, choose your nouns wisely and characterize them. Do you care about the ordered pair $(X,\nu_X)$, or really just the map $\nu_X$ (for which $X$ is just the background against which the idea is presented)?
What is the role of the partial map $\nu_X$ in the idea you are communicating? Do you care about the integers $f(n) \in \mathop{\mathrm{dom}}(\nu_X)$ over which you quantify, or really just the domain of the composite function $\nu_X \circ f$? -Identify the main characters in the synopsis of your play, and their roles: you will have a better chance of transporting the objects and morphisms of your idea faithfully to your interlocutors.<|endoftext|> -TITLE: Normal closure in groups -QUESTION [7 upvotes]: For instance, say $G = \langle x , y \ | \ x^{12}y=yx^{18} \rangle$. I want to know what is the normal closure of $y$ in $G$. -In general, what are the standard approaches to compute the normal closure of a subset of a finitely presented group? Are there algorithms? - -REPLY [12 votes]: You can compute the normal closure by computing the quotient, and then considering the kernel of the quotient homomorphism. -For the example you gave, let $N$ be the normal closure of $y$ in $G$. Then $G/N$ has presentation -$$ -\langle x,y \mid x^{12}y = yx^{18},y=1\rangle -$$ -This presentation reduces to $\langle x \mid x^{12} = x^{18}\rangle$, which is the same as $\langle x \mid x^6 = 1\rangle$. -Thus $G/N$ is a cyclic group of order 6, and $N$ is the kernel of the homomorphism $G\to G/N$. In particular, the normal closure of $y$ consists of all words for which the total power of $x$ is a multiple of 6.<|endoftext|> -TITLE: Are homotopic maps over a cofibration homotopic relative to the cofibration? -QUESTION [11 upvotes]: Let $X$ be a Hausdorff space and $A$ a closed subspace. Suppose the inclusion $A \hookrightarrow X$ is a cofibration. Let $f, g: X \to Y$ be maps that agree on $A$ and which are homotopic. Are they homotopic relative to $A$? -My motivation for asking this question comes from the following result: - -Let $i: A \to X, j: A \to Y$ be cofibrations. Suppose $f: X \to Y$ is a map which makes the natural triangle commutative. Suppose $f$ is a homotopy equivalence. Then $f$ is a cofiber homotopy equivalence. - -On the other hand, I'm having trouble adapting the proof of this in Peter May's book to the question I asked. Nonetheless, the standard examples of pairs of maps which are homotopic but not homotopic relative to some subset on which they agree (say, the identity map of a comb space and its collapsing to a suitably chosen point), don't seem to involve NDR pairs. - -REPLY [5 votes]: I think the machinery of obstruction theory deals with the special case where the spaces are skeleta of a CW-complex. Here's the setup (I'm basically just copying from pp. 6-7 of Mosher & Tangora here): -Let $Y$ be simply-connected for simplicity. First, let $B$ be a complex and $A$ be a subcomplex. Let $f:A\cup B^n\rightarrow Y$. Then we get an obstruction cochain $c(f)\in C^{n+1}(B,A;\pi_n(Y))$ (i.e. a function on relative $(n+1)$-cells with values in $\pi_n(Y)$). Similarly, let $K$ be a complex; then for any two maps $f,g:K\rightarrow Y$ that agree on $K^{n-1}$, we similarly get a difference cochain $d(f,g)\in C^n(K;\pi_n(Y))$. -Here are the two results. - -Theorem: There is a map $g:A\cup B^{n+1}\rightarrow Y$ agreeing with $f$ on $A\cup B^{n-1}$ iff $[c(f)]=0 \in H^{n+1}(B,A;\pi_n(Y))$. -Theorem: The restrictions of $f$ and $g$ to $K^n$ are homotopic rel $K^{n-1}$ iff $d(f,g)=0 \in C^n(K;\pi_n(Y))$. They are homotopic rel $K^{n-2}$ iff $[d(f,g)]=0 \in H^n(K;\pi_n(Y))$.<|endoftext|> -TITLE: How can I prove $\underbrace{\int \ldots \int}_{n} |x| dx = \frac{x^n |x|}{(n+1)!}+C$?
-QUESTION [6 upvotes]: So I was bored and decided to figure out the indefinite integral of the absolute value function, $|x|$. Using integration by parts ($u=|x|$, $dv=dx$, $du = \text{sgn}(x) \, dx$ where $\text{sgn}(x)=\frac{|x|}{x}$), it can be shown that $\displaystyle\int |x| dx = \frac{x |x|}{2}+C$. -Now I decided to take the integral again, finding that $\displaystyle\int\left(\int |x| dx \right) dx=\frac{x^2 |x|}{6}+C$. Continuing, I found the pattern in the title, that the $n$th indefinite integral of $|x|$ is $\displaystyle\frac{x^n |x|}{(n+1)!}+C$. Is there a way to prove this general result? - -REPLY [7 votes]: Make the observation that $|x| = \theta(x) x - \theta(-x) x$ for all real $x$, where $\theta$ is the Heaviside function (evaluating to $1$ if the argument is positive and $0$ otherwise). It is known that -\begin{eqnarray} -\int \theta(x) \ x \ dx = \theta(x) \frac{x^{2}}{2} + C \quad \text{and} \quad \int \theta(-x) \ x \ dx = \theta(-x) \frac{x^{2}}{2} + C^{\prime}, -\end{eqnarray} -where $C$ and $C^{\prime}$ are constants of integration. The identity for $n = 1$ follows by subtraction and the representation of $|x|$ above. With $n$ integrations, we have -\begin{eqnarray} -\int \cdots \int |x| \ dx = \theta(x) \int \cdots \int x \ dx - \theta(-x) \int \cdots \int x \ dx = \frac{|x| x^{n}}{(n+1)!} + P_{n}, -\end{eqnarray} -where $P_{n}$ is a polynomial in $x$, as claimed.<|endoftext|> -TITLE: Left/Right Cosets -QUESTION [11 upvotes]: I am trying to understand left/right cosets in group theory. -Here is the example in my text: -Let $G = \lbrace 1, a, b, c, d ,e \rbrace$ -Let's define the group operation $.$ by the following table, where the entry at row $x$ and column $y$ gives $x.y$ -e.g. $d.e = b$ - 1 a b c d e -1 1 a b c d e -a a b 1 d e c -b b 1 a e c d -c c e d 1 b a -d d c e a 1 b -e e d c b a 1 - -This is no problem; I understand this. But then we get the left and right cosets. -Let $G$ be a group and let $H \leq G$. A left coset of $H$ in $G$ ($G / H$) is a set of the form $gH = \lbrace gh : h \in H \rbrace$ for some $g \in G$. A right coset of $H$ in $G$ ($H$ \ $G$) is a set of the form $Hg = \lbrace hg : h \in H \rbrace$ for some $g \in G$. -Here are some examples; I am trying to figure out how they are generated. I guess I just do not understand the theory fully. I would like a little help explaining, and possibly a few more examples. -$\lbrace 1, a, b, c, d, e \rbrace / \lbrace 1, a, b \rbrace = \lbrace \lbrace1, a, b\rbrace , \lbrace c, d, e \rbrace\rbrace$ -$\lbrace 1, a, b \rbrace$ \ $\lbrace 1, a, b, c, d, e\rbrace = \lbrace \lbrace 1, a, b \rbrace, \lbrace c, d, e \rbrace \rbrace$ -$\lbrace 1, a, b, c, d, e \rbrace / \lbrace 1, c \rbrace = \lbrace \lbrace 1, c \rbrace, \lbrace a, d \rbrace, \lbrace b, e \rbrace \rbrace$ -A few more examples which I want to figure out are: -$\lbrace 1, a, b, c, d, e \rbrace / \lbrace 1, e \rbrace = ?$ -$\lbrace 1, d \rbrace$ \ $\lbrace 1, a, b, c, d, e \rbrace = ?$ -Thanks! - -REPLY [2 votes]: Cosets arise when you want to model the idea that certain elements of a group are effectively equal. -To see this, rather than looking at one coset at a time it is best to think of all possible cosets of a subgroup. Then you will find that cosets partition a group $G$ into equivalence classes such that two elements of a class differ by an element of the subgroup $H$. -For example, sometimes in number theory we do not want to distinguish two numbers that differ by a multiple of a given number $n$. Say $n=4$.
Then let $G$ be the set of integers under addition and $H$ be the set of multiples of $4$. The cosets of $H$ are -$$H+0=\{\ldots,-4,0,4,8,\ldots\}$$ -$$H+1=\{\ldots,-3,1,5,9,\ldots\}$$ -$$H+2=\{\ldots,-2,2,6,10,\ldots\}$$ -$$H+3=\{\ldots,-1,3,7,11,\ldots\}$$ -$$H+4=\{\ldots,0,4,8,12,\ldots\}$$ -$$H+5=\{\ldots,1,5,9,13,\ldots\}$$ -Note that $H+0 =H+4$ and $H+1 =H+5$. In fact there are only $4$ distinct cosets, each corresponding to one congruence class of integers modulo $4$. If we care only about the remainder of an integer after division by $4$ then all the elements in a coset are equivalent and we can think of them as a single entity. -To take another example, let $G$ be the group of possible rotations of a point on a unit circle, where we care only about where a point ends up on the circle after the rotation. Then let $H$ be rotations that are a multiple of $2\pi$. The cosets of $H$ will now be rotations which differ by multiples of $2\pi$ and which therefore have the same effect on the final position of the point rotated. -In general, two elements $g_1$ and $g_2$ of $G$ are defined to be equivalent ($g_1 \equiv g_2$) if $g_1 \cdot g_2^{-1} \in H$. -You can prove the following - -For all $g$ in $G$, $g \equiv g$ since the identity element belongs to $H$ as it is a subgroup. -If $g_1 \equiv g_2$ then $g_2 \equiv g_1$ since if an element belongs to $H$ then so does its inverse. -If $g_1 \equiv g_2$ and $g_2 \equiv g_3$ then $g_1 \equiv g_3$ since $H$ is closed under multiplication. - -This shows that $\equiv$ is a genuine equivalence relation. We define the equivalence class of an element $x$ of $G$ as -$$[x]=\{y \in G\mid x \equiv y\}$$ -You can show that - -$x \in [x]$ -For any $x,y \in G$, either $[x] \cap [y] =\emptyset$ or $[x]=[y]$. - -So the equivalence classes are either equal or disjoint and they cover $G$. -Finally, back to cosets. The equivalence classes we have defined above are the same as the right cosets. If $y \equiv x$ then $yx^{-1}=h$ for some $h \in H$ or $y=hx$. So $[x]=Hx$. If we had instead defined our equivalence relation by the condition $x^{-1}y \in H$ then we would have got the left cosets.<|endoftext|> -TITLE: Is there anywhere we use a fibration which is not a fiber bundle -QUESTION [6 upvotes]: All the fibrations I currently meet are fiber bundles. - -REPLY [6 votes]: There's a canonical family of fibrations used in algebraic topology which are not fibre bundles. That is, given a continuous function $f : X \to Y$ there is a homotopy-equivalence $\phi : X' \to X$ and a fibration $f' : X' \to Y$ such that $f \circ \phi$ is homotopic to $f'$. I believe this idea goes back to Serre (or perhaps earlier). The fibre of $f'$ is called the homotopy-fibre of $f$. -A common usage of this construction is with the Postnikov Tower of a space. In his dissertation Jean-Pierre Serre used this (and some "closure" observations about the Serre spectral sequence of a fibration; at the time this was called Serre $\mathcal C$-theory, and nowadays this technology is subsumed in localization) to show that most of the homotopy-groups of the spheres are finite.<|endoftext|> -TITLE: Hairy Ball theorem and its applications -QUESTION [5 upvotes]: While searching a question about fibre bundles, which was asked here, I got directed to Vector bundles. I noticed the term "Hairy Ball", which sounded eccentric, and searched Wikipedia. -How is the hairy ball theorem related to this statement: You can't comb the hair on a coconut.
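-The first reply below exhibits an explicit nonvanishing tangent field on odd-dimensional spheres (rotate each coordinate pair); here is a minimal numeric sanity check of that formula on $S^3$, a sketch in Python where the sample count and tolerance are arbitrary choices:
-    import math, random
-    # On S^(2k-1) in R^(2k), the field (x1,y1,...,xk,yk) -> (y1,-x1,...,yk,-xk)
-    # is orthogonal to the position vector (hence tangent) and has length 1.
-    def field(p):
-        v = []
-        for i in range(0, len(p), 2):
-            v += [p[i + 1], -p[i]]
-        return v
-    for _ in range(1000):
-        p = [random.gauss(0, 1) for _ in range(4)]
-        norm = math.sqrt(sum(c * c for c in p))
-        p = [c / norm for c in p]                          # random point on S^3
-        v = field(p)
-        assert abs(sum(a * b for a, b in zip(p, v))) < 1e-12  # tangency
-        assert abs(sum(c * c for c in v) - 1) < 1e-12         # never vanishes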
- -REPLY [7 votes]: More of a comment than an answer, but it didn't fit: -In fact, the hairy ball theorem works on any even-dimensional sphere. However, odd-dimensional spheres do admit nonvanishing vector fields: if you use the usual embedding of $S^{2k-1}$ in $\mathbb{R}^{2k}$, then at the point $(x_1,y_1,\ldots,x_k,y_k)$ you can put the (nonzero) tangent vector $(y_1,-x_1,\ldots,y_k,-x_k)$. -The natural generalization is: How many linearly independent tangent vector fields can exist on $S^{n-1}$? As it turns out, the answer only depends on how many factors of 2 there are in $n$. So e.g. $S^3$ and $S^{35}$ both admit exactly 3 linearly independent tangent vector fields, because $3+1=2^2$ and $35+1=2^2\cdot 9$. This is crazy!<|endoftext|> -TITLE: Möbius strip with edge identified - constructing map? -QUESTION [8 upvotes]: I'm trying to show that the Möbius strip with boundary circle identified to a point is homeomorphic to $P^2$ (the real projective plane). I get geometrically why this is so, but how does one generally construct maps between these spaces to show that they're homeomorphic? - -REPLY [4 votes]: Maybe you're going to tell me that if all you've got is a hammer, all problems look like nails, but here goes my proposal -following the same strategy as in my answers to this question. -First of all, take as a model for the projective plane the disk -$$ -D^2 = \{ (x,y) \in \mathbb{R}^2\ \vert \ x^2 + y^2 \leq 1 \} -$$ -and quotient by the equivalence relation among antipodal points on $S^1$: $(x,y) \sim -(x,y) $ if $x^2 + y^2 = 1$. So -$$ -\mathbb{RP}^2 = D^2/\sim \ . -$$ -Put the Moebius strip inside $\mathbb{RP}^2$ as the vertical strip -$$ -M = \left \{\widetilde{(x,y)} \in \mathbb{RP}^2 \ \vert \ -\frac{1}{2} \leq x \leq \frac{1}{2} \right \} \ . -$$ -Let's denote by $M'$ the same set of points inside $D^2$, without any quotient at all. That is, ordinary points of $\mathbb{R}^2$. Thus -$$ -M = M'/\sim \ . -$$ -Now, you want to prove that, when you quotient out the boundary of the Moebius band -$$ -\partial M = \left \{\widetilde{(x,y)} \in \mathbb{RP}^2 \ \vert \ x = -\frac{1}{2} \ \quad \text{or} \quad x = \frac{1}{2} \right \} -$$ -you get the projective plane: -$$ -M/\partial M \ \cong \ \mathbb{RP}^2 \ . -$$ -Ok, the homeomorphism is easily described in some kind of "elliptical" coordinates: for every point $(x,y) \in D^2$ there is one and only one ellipse of the form -$$ -x^2 + \frac{y^2}{b^2} = 1 \qquad \text{for} \qquad -1 \leq b \leq 1 \ . -$$ -All these ellipses have a common major axis, namely the segment $-1\leq x \leq 1, y = 0$. We make the abuse of considering $b=0$ too as the degenerate "ellipse" $y=0$ and in order to distinguish the "positive" $y\geq 0$ and "negative" $y\leq 0$ branches of the ellipse, we put a sign in $b$: the same as $y$. If all this doesn't sound rigorous to you, it's ok: look at it as just a motivation. Wait for the resulting formulae. -So a point inside the disk is determined once you know its $x$-coordinate and the ellipse to which it belongs; that is, its $b$-coordinate. -Our homeomorphism is going to do the following: points inside the Moebius strip are going to "travel" along the ellipse to which they belong. How far? Well, we want all the points in $\partial M$, those with $x=-1/2$ or $x= 1/2$, to reach points $(-1,0)$ or $(1,0)$, depending on the sign of $x$. Hence, we try something like -$$ -\widetilde{\varphi }: M \longrightarrow \mathbb{RP}^2 \ , \qquad \widetilde{\varphi} \widetilde{(x,b)} = \widetilde{(2x, b)} \ .
-$$ -So, let's write a decent $\varphi$ in Cartesian coordinates. We want our $(2x, b)$ to belong to the $b$-ellipse all the time, hence we have -$$ -(2x)^2 + \frac{y^2}{b^2} = 1 \qquad \Longrightarrow \qquad b = \frac{y}{\sqrt{1-(2x)^2}} \ . -$$ -Hence we begin with a map -$$ -\varphi : M' \longrightarrow D^2 -$$ -defined as -$$ -\begin{cases} -\varphi (x,y) = \left( 2x , \frac{y}{\sqrt{1-(2x)^2}}\right) & \text{if} \ x \neq \frac{\pm 1}{2} \\\ -\varphi (\pm 1/2, y) = (\pm 1,0) & {} -\end{cases} -$$ -Exercise. Check that $\varphi$ is continuous, surjective and compatible with the antipodal equivalence relation. -Hence, $\varphi$ induces a well-defined map -$$ -M \longrightarrow \mathbb{RP}^2 \ , -$$ -which is continuous, because of the universal property of the quotient topology, and surjective. It is "almost" injective too, except for all the points in the boundary of the Moebius band $x=-1/2$ and $x=1/2$ that go to $\widetilde{(-1,0)} = \widetilde{(1,0)}$. So, if we quotient them out, we get a homeomorphism -$$ -\widetilde{\varphi} : M/\partial M \longrightarrow \mathbb{RP}^2 \ , -$$ -with the help, as usual, of the GTET (see the link above, taking into account that, despite being a quotient, $\mathbb{RP}^2$ is a Hausdorff space -see Rotman, "An introduction to Algebraic Topology", theorem 8.4, Hausdorff).<|endoftext|> -TITLE: Generating function of words in a binary alphabet counting blocks and appearances -QUESTION [8 upvotes]: Given the binary alphabet {a,b}, I'm trying to find the generating function that distinguishes, for all words of fixed length $n$, the count of blocks of a's and the number of a's. Let $x^p$ count the number of blocks of size $p$ and $y^q$ count the number of a's in a word. As a first example, take the word: -$$abbaaabaabaabba$$ -This has nine $a$'s broken up into two blocks of length one, two blocks of length two and a block of length three. This would contribute to the generating function a term of $y^9(2x + 2x^2 + x^3)$. -As a second example, with $n=3$ there are eight words and the term associated with each of them is: -$bbb = 1$ -$bba = xy$ -$bab = xy$ -$baa = (xy)^2$ -$abb = xy$ -$aba = 2xy^2$ -$aab = (xy)^2$ -$aaa = (xy)^3$ -Note that the term $aba = 2xy^2$ is not a typo, there are two blocks of length one. With $A$ as the generating function, this would give: $A_3(x,y) = 1 + 3xy + (2x + 2x^2)y^2 + y^3x^3$. I've been able to come up with a special case of this, when we don't care about the counts of the $a$'s (equivalent to setting all $y=1$). This restricted form $B_n(x) = A_n(x,y=1)$ is: -$$ B_n = B_{n-1} + x^n + \sum_{j=0}^{n-2} ( 2^j x^{n-j-1} -1 + B_j ) $$ -with $B_0 = 1$. Is there such a formula for $A_n$, or even better, a non-recursive closed form of $A_{npq}$? - -REPLY [3 votes]: I know this is a very old question, BUT here's an answer: -First, let me define four functions... -Let $\text{part}_{k}(n)$ return the $k$th lexicographic partition of $n$. For instance, $\text{part}_{0}(5) = \langle 5 \rangle$, $\text{part}_{1}(5) = \langle 4,1 \rangle$, and $\text{part}_{4}(5) = \langle 2,2,1 \rangle$. -Furthermore, I define $\text{part}_{k,p}(n)$ to be the $p$th element of $\text{part}_{k}(n)$. For example, $\text{part}_{4,2}(5) = 1$. -Next, let $\text{len}(\text{part}_{k}(n))$ return the number of elements in $\text{part}_{k}(n)$. Hence, $\text{len}(\text{part}_{4}(5)) = 3$. -Finally, let us define $\text{count}(\text{part}_{k}(n), x)$ to be the number of elements in $\text{part}_{k}(n)$ with value $x$.
Thus, $\text{count}(\text{part}_{4}(5), 2) = 2$. -Also, here are a couple of variable definitions. -$w = $ number of characters in the word. -$n_k = $ number of partitions of $k$. -Now we can get started on the formula! :P -I chose to count by partitions, as I'm sure my function definitions have implied. I noticed that the way you put your problem meant that as long as there was at least one b between each block of a's, then there could be any number of b's elsewhere and the function for that particular situation would be unchanged. In other words, the function in x and y of "aaabbabbba" would be the exact same as for "baaabbbaba". -First, I counted the number of ways to arrange the blocks of a's. This is equivalent to having a row of b's and blanks and counting how many ways to choose which blanks to put the blocks of a into. For example, two blocks of a's ("aaa" and "aa", say) can be put into 6 blanks with 5 b's between them ("_b_b_b_b_b_") in $\binom{6}{2} = 15$ ways (6 blanks, 2 blocks). In general, the number of blanks is the number of b's plus one, i.e. $w-k+1$, so the count is -$\displaystyle \binom{w-k+1}{\text{len}(\text{part}_{i}(k))}$ -where $i$ is the index of the particular partition and $k$ is the number of a's. -Next, we need to multiply by the number of ways to arrange the blocks amongst themselves. Disregarding duplicates, this is the factorial of the number of blocks, which is $\text{len}(\text{part}_{i}(k))!$. To account for the duplicates, we need to divide by the product of the factorials of the number of times each element repeats (wow, that was wordy). In all, it looks like this: -$\displaystyle \frac{\text{len}(\text{part}_{i}(k))!}{\displaystyle \prod_{j=0}^{k} \Bigl( \text{count}(\text{part}_{i}(k),j)! \Bigr)}$ -Multiplying the two gives us the coefficient for each piece. The x's are given by $\displaystyle \sum_{q=0}^{\text{len}(\text{part}_{i}(k))-1} x^{\text{part}_{i,q}(k)}$ (one term per block, using the element accessor defined above) and the y's are given by $y^k$. Thus, the whole equation is: -$A_{w}(x,y) = 1 + \displaystyle \sum_{k=1}^{w} \left( y^k \; \sum_{i=0}^{n_k - 1} \left( \binom{w-k+1}{\text{len}(\text{part}_{i}(k))} \frac{\text{len}(\text{part}_{i}(k))!}{\displaystyle \prod_{j=0}^{k} \Bigl( \text{count}(\text{part}_{i}(k),j)! \Bigr)} \right) \left( \sum_{q=0}^{\text{len}(\text{part}_{i}(k))-1} x^{\text{part}_{i,q}(k)} \right) \right)$ -(the leading $1$ accounts for the all-b word, which has no blocks). Yep. Quite a monster. But it's not that hard to program, although it definitely would be quite tedious to do by hand.<|endoftext|> -TITLE: Calculate how many ways can you paint the corners of a Pentagon -QUESTION [6 upvotes]: I'm studying for an exam I have on combinatorics next week and I'm stuck with the solution to this question: -Imagine you have a pentagon and you want to color the corners of the pentagon so that no two adjacent corners have the same color. If you have 'q' colors, how many ways of coloring the pentagon are possible? -I say that the answer is $q^2*(q-1)^2*(q-2) + q*(q-1)*(q-2)^2*(q-3)$. -My friend says the answer is $MyAnswer + q*(q-1)*(q-2)*(q-3)*(q-4)$. -Who is right? (or are we both wrong). -Further, if you could point out a resource that explains how to calculate this question for other numbers of corners (say a hexagon) we would really appreciate it. -EDIT: -Answering some comments in the answers: rotations of the pentagon should not be counted as different.
What I mean is that if we have 5 colors (a,b,c,d,e) the pentagon with corners {(1,a),(2,b),(3,c),(4,d),(5,e)} is exactly the same as {(1,c),(2,d),(3,e),(4,a),(5,b)}. - -REPLY [5 votes]: Here I am going to assume that we are not identifying symmetric configurations; that is, we count a rotation or reflection of the pentagon as a distinct possibility. -You both have incorrect terms. Consider having only $3$ colors, i.e. $q = 3$. In this case there must be a vertex which lies between two vertices of the same color. Fixing that middle vertex, there are $3 \times 2 = 6$ ways to color these three vertices, leaving $2$ ways to color the remaining two, for $12$ colorings; summing over the $5$ choices of middle vertex and dividing by $2$ (each valid coloring has exactly two such middle vertices) gives $30$ possibilities in total. But $3^2 \times 2^2 \times 1 + 0 + (0) = 36$, which is too many. -Hint: If you are coloring a pentagon, there are three possible cases: - -The pentagon can be colored with $k = 5$ distinct colors chosen from the $q$ possibilities. -The pentagon can be colored with $k = 4$ distinct colors and one repeat. -The pentagon can be colored with $k = 3$ distinct colors where two colors appear twice. (We also have that one color could appear three times, but then it would be impossible to color a pentagon without the same color being adjacent.) - -It is easy to see that it is impossible to color the vertices of a pentagon with only 2 colors such that the same color is not adjacent anywhere. -In your question, your friend is accounting for the $q(q-1)(q-2)(q-3)(q-4)$ ways to color a pentagon using $5$ distinct colors. To find the final solution, we need to count how many ways we can color a pentagon in each of the given cases and then we sum all the possibilities together. -Now, for each of the three cases, choose $k$ of the $q$ colors ($\binom{q}{k}$ ways) to fix the $k$ colors you are working with, then consider how many ways you can color a pentagon using those $k$ colors. -Any more assistance will be moving towards an outright solution so I believe I should stop here. - -REPLY [4 votes]: You need to specify the problem better. Do you count rotations and reflections as different? For example, imagine a regular pentagon drawn on a sheet of paper with a corner up. Starting from the top corner and going clockwise, maybe we have 0,1,2,3,4 (using numbers instead of colors). Is this different from 1,2,3,4,0? How about from 0,4,3,2,1? -The easiest situation is if you say these are all different. Then the naive approach would be to say you have $q$ choices for the top corner, $q-1$ choices for each of the next three corners (as you can't have neighboring corners that match) and $q-2$ choices for the last, giving $q*(q-1)^3*(q-2)$, but this ignores the fact that the first and fourth corners could be the same color, giving $q-1$ choices for the last. -One way to deal with this situation is to separate into classes based on which corners match the top one. If we designate the top corner 0 and count clockwise, you can match the top color only in no corner, corner 2, or corner 3. So matching none, we have $q*(q-1)*(q-2)^3 $ since at corner 1 you can use any color but the color at corner 0, while at each other corner you have two colors eliminated. Matching corner 2, we have $q*(q-1)*1*(q-1)*(q-2)$ and matching corner 3 we have $q*(q-1)*(q-1)*1*(q-2)$. Add them all up and you have your answer. -In general, the answer has to be a fifth degree polynomial. For large $q$ the constraint of not matching colors won't reduce the count very much. So if you count the solutions for six different q (I suggest 0 through 5), you can fit a fifth degree polynomial through them.
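-Since the thread is about checking candidate formulas, here is a brute-force count in Python (a sketch; it enumerates all $q^5$ colorings and, as in the replies above, treats rotations and reflections as distinct):
-    from itertools import product
-    # Count colorings of the 5 corners with q colors in which no two
-    # adjacent corners (around the cycle) get the same color.
-    def count(q):
-        return sum(
-            all(c[i] != c[(i + 1) % 5] for i in range(5))
-            for c in product(range(q), repeat=5)
-        )
-    print([count(q) for q in range(6)])  # [0, 0, 0, 30, 240, 1020]
-These values fit the fifth degree polynomial $(q-1)^5 - (q-1)$, the chromatic polynomial of a 5-cycle, which is consistent with the interpolation suggestion above.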
-If you want to count rotations and reflections as the same there are two possibilities. One way is to define a standard position and only count those cases in standard position. In this case this is easy. You would say that the smallest number has to be at the top, and that corner 2 is less than corner 3. A pitfall would be to say the smallest is at the top and corner 1 is less than corner 4. It could be that corner 1 and 4 are the same, and it could be that one of corner 2 and 3 is the same as corner 0. But counting how many configurations satisfy this constraint may not be easy. It is again a fifth degree polynomial. The other case is to list the configurations of matching corners and see how many times they each get counted. So if no corners match there are $q*(q-1)*(q-2)*(q-3)*(q-4)$ possibilities, but you have counted each one 10 times, so divide these by 10. Then you can have one pair or two pairs of corners matching. These are probably easier to count by the standard position approach. If one pair matches, say they have to be positions 0 and 2. So we have $q*(q-1)*1*(q-1)*(q-2)$ choices for this. And so on. -Sorry for taking out the asterisks from my expressions, but it rendered in italics and ran over itself when they were there. I think they made it easier to read.<|endoftext|> -TITLE: Identities with Div, Grad, Curl -QUESTION [8 upvotes]: In physics there are lots of identities like: -$$\nabla \times (\nabla \times A) = \nabla (\nabla \cdot A) - (\nabla \cdot \nabla) A$$ -I'm wondering if there is an algorithmic algebraic method to prove and/or derive these identities (something like using $e^{i\theta}$ to prove trigonometric identities)? - -REPLY [9 votes]: The most straightforward route I know of is through Einstein tensor notation, which renders the derivation of such identities largely mechanical by virtue of suppressing distracting notational noise. (I should warn you that I'm largely self-taught on this subject, so the following may contain errors of rigour, but it "works" as long as you're working in Cartesian coordinates with the Euclidean metric.) -Let me work through your example to illustrate. In tensor notation, the $i$th component of the curl of a vector field $v_i$ is given by $\varepsilon_{ijk}\partial_j v_k$, where $\varepsilon_{ijk}$ is the Levi-Civita symbol, that takes the values $1$, $-1$ and $0$ depending on the order in which the coordinates appear in the subscripts $ijk$. You can think of this as a notational shorthand for the plus and minus signs in the curl formula. So the left-hand side of the desired identity is -$$\varepsilon_{ijk}\partial_j (\varepsilon_{k\ell m}\partial_\ell A_m)$$ -$$= \varepsilon_{ijk}\varepsilon_{k\ell m}\partial_j\partial_\ell A_m$$ -because derivatives commute with constants and with each other. -At this point, to simplify $\varepsilon_{ijk}\varepsilon_{k\ell m}$, I just looked up the relevant identity of the Levi-Civita symbol, but it should be possible to derive it simply by algebraic manipulation from the purely combinatorial definition of the symbol. It turns out that $\varepsilon_{ijk}\varepsilon_{k\ell m} = \delta_{i\ell}\delta_{jm} - \delta_{im}\delta_{j\ell}$, where $\delta_{ij}$ is the Kronecker delta which is $1$ if and only if $i = j$, and $0$ otherwise. This symbol acts like a substitution operator: $\delta_{ij}v_j = v_i$.
-So we have -$$\varepsilon_{ijk}\varepsilon_{k\ell m}\partial_j\partial_\ell A_m$$ -$$= (\delta_{i\ell}\delta_{jm} - \delta_{im}\delta_{j\ell})\partial_j\partial_\ell A_m$$ -$$= \partial_m\partial_i A_m - \partial_\ell\partial_\ell A_i$$ -$$= \partial_i (\partial_m A_m) - (\partial_\ell\partial_\ell) A_i.$$ -The first term is $\nabla(\nabla\cdot A)$, while the second is $-(\nabla\cdot\nabla)A$.<|endoftext|> -TITLE: Subgroups - Klein bottle -QUESTION [9 upvotes]: Let $G$ be the fundamental group of the Klein bottle, -$G = \langle x,y \ ; \ yxy^{-1}=x^{-1} \rangle = {\mathbb Z} \rtimes {\mathbb Z} \ .$ -What are the nilpotent subgroups of $G$? -I was only able to find a normal series of abelian subgroups with cyclic quotients in $G$, namely -$1\leq \langle y^2 \ ; \ \ \rangle\leq \langle x,y^2 \ ; \ xy^2=y^2x \rangle\leq G \ .$ -Since I'm not an algebraist, I'm sorry if this is a silly question. Thanks! - -REPLY [7 votes]: Here's a topological version of Arturo's argument. (I'd leave this as a comment, but I don't have enough rep.) -Every covering space of the Klein bottle is a surface, and it's fairly easy to convince yourself that the infinite-degree ones have cyclic, hence nilpotent, fundamental group. -It remains to consider the finite-degree covering spaces. Well, these are all closed surfaces of Euler characteristic zero (because Euler characteristic is multiplicative under finite covers), so by the classification of surfaces they are either tori or Klein bottles. As already noted, the Klein-bottle group itself is not nilpotent, so only the abelian subgroups are left.<|endoftext|> -TITLE: Limits of 2 variable functions -QUESTION [9 upvotes]: How would I prove that if $f: \mathbb R^2 \to \mathbb R$ is a function such that -$$\lim_{(x,y)\to(a,b)} f(x,y) = L$$ -and for every $y_0 \in \mathbb R$ -$$ \lim_{x\to a} f(x,y_0) = L'_{y_0}$$ -and for every $x_0 \in \mathbb R$ -$$ \lim_{y\to b} f(x_0,y) = L''_{x_0}$$ -then -$$ -\lim_{x\to a}\left(\lim_{y\to b} f(x,y)\right) = \lim_{y\to b}\left(\lim_{x\to a} f(x,y)\right) = L$$ - -REPLY [11 votes]: This is a rejoinder to Arturo's comment that if the two-variable limit -exists then both iterated limits exist. -Define -$$f(x,y)=\left\{\begin{array}{cl} -(x^2+y^2)\sin(1/x+1/y)&\textrm{if $xy\ne0$,}\\ -0&\textrm{if $xy=0$.} -\end{array}\right.$$ -Then certainly $f$ is continuous at $(0,0)$; that is, -$$\lim_{(x,y)\to(0,0)}f(x,y)=0.$$ -But for any nonzero $x$, -$$\lim_{y\to0}f(x,y)$$ -doesn't exist (the function is wildly oscillatory for $y$ near zero). So -$$\lim_{x\to0}\left(\lim_{y\to0}f(x,y)\right)$$ -is meaningless.<|endoftext|> -TITLE: Can you explain the "Axiom of choice" in simple terms? -QUESTION [139 upvotes]: As I'm sure many of you do, I read the XKCD webcomic regularly. The most recent one involves a joke about the Axiom of Choice, which I didn't get. - -I went to Wikipedia to see what the Axiom of Choice is, but as often happens with things like this, the Wikipedia entry is not in plain, simple, understandable language. Can someone give me a nice simple explanation of what this axiom is, and perhaps explain the XKCD joke as well? - -REPLY [12 votes]: You have some boxes, at least one. There may be only one box, seven boxes, or infinitely many boxes. -In each and every box, there are some items, at least one. There may be arbitrarily many items. - -Question: Can you pick, at least in principle, exactly one item from each box, without necessarily describing the details or rules of how you select them?
-The axiom of choice says that the answer to this question is affirmative. - -It might seem quite innocuous, but it turns out that the axiom of choice has groundbreaking consequences in virtually all subfields of mathematics. Some of these consequences are fundamental results, others are mind-boggling paradoxes. The long list includes: - -[Geometry] Banach–Tarski paradox. (The axiom of choice makes it possible to cut an object into a finite number of pieces in such a weird way that you can reassemble two copies of the same object of the same size!) -[Topology] Tychonoff’s theorem. -[Algebra] Every non-trivial vector space has a Hamel basis. -[Order theory] Zorn’s lemma. -[Order theory] Well-ordering theorem and transfinite induction. -[Set theory] Countable union of countable sets is countable. -[Set theory] Every infinite set has a proper subset with the same cardinality. -[Measure theory] Existence of sets that are not Lebesgue measurable. -[Functional analysis] Baire’s category theorem. -[Functional analysis] Hahn–Banach theorem. -[Functional analysis] Every infinite-dimensional Banach space admits linear maps that are nowhere continuous. -[Functional analysis] Krein–Milman theorem.<|endoftext|> -TITLE: Example of a non measurable function! -QUESTION [5 upvotes]: Can we have a measurable function $f$ whose inverse is not measurable? - -REPLY [8 votes]: My previous answer is wrong. -Take $f(x) = x$ ($f: \mathbb{R} \to \mathbb{R}$) where the domain has the Borel $\sigma$-algebra and the range the trivial $\sigma$-algebra. Then $f$ is clearly measurable but $f$ maps for example $[0,1]$ to $[0,1]$ which is not measurable. -Or do you want both $\sigma$-algebras to be Borel? In that case I should modify my first answer (which was wrong because not every subset of a measure zero set is Borel measurable). -Another idea would be the following (based on the one below me): -Let $g:[0,1] \to \mathbb{R}$ be the Cantor function. Extend it to all of $\mathbb{R}$ by defining $g(x) = 0$ for $x < 0$ and $g(x) = 1$ for $x > 1$. Now define $h(x) = x + g(x)$ (this is standard); this function has a lot of nice properties like: $h$ is a bijection and the measure of $h(C)$ is $1$, where $C$ is the Cantor set. -Now we know that a set of positive measure contains a non-measurable subset. So let $A$ be a non-measurable subset of $h(C)$. So $M = h^{-1}(A)$ is a subset of the Cantor set and thus has zero Lebesgue measure. -So define $u := h^{-1}$. This function is measurable (Lebesgue/Borel). But $u^{-1}(M) = A$ is non-measurable.<|endoftext|> -TITLE: Is there a known well ordering of the reals? -QUESTION [129 upvotes]: So, from what I understand, the axiom of choice is equivalent to the claim that every set can be well ordered. A set is well ordered by a relation $R$ if every nonempty subset has a least element. My question is: Has anyone constructed a well ordering on the reals? -First, I was going to ask this question about the rationals, but then I realised that if you pick your favourite bijection between the rationals and the naturals, this determines a well ordering on the rationals through the natural well order on $\mathbb{N}$ . So it's not the denseness of the reals that makes it hard to well order them. So is it just the size of $\mathbb{R}$ that makes it difficult to find a well order for it? Why should that be? -To reiterate: - -Is there a known well order on the Reals? -If there is, does a similar construction work for larger cardinalities? -Is there a largest cardinality for which the construction works?
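-The remark in the question about enumerations is easy to make concrete: any bijection with $\mathbb{N}$ induces a well order (declare $x \prec y$ when $x$ is enumerated first; every nonempty subset then has a least element, namely its earliest-enumerated member). A sketch in Python, where the Calkin-Wilf enumeration of the positive rationals is just one convenient choice, not something from the thread:
-    from fractions import Fraction
-    # The Calkin-Wilf sequence lists every positive rational exactly once;
-    # interleaving 0 and negatives would extend it to all of Q.
-    def calkin_wilf():
-        q = Fraction(1, 1)
-        while True:
-            yield q
-            q = 1 / (2 * (q.numerator // q.denominator) + 1 - q)
-    gen = calkin_wilf()
-    print([str(next(gen)) for _ in range(10)])
-    # ['1', '1/2', '2', '1/3', '3/2', '2/3', '3', '1/4', '4/3', '3/5']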
- -REPLY [116 votes]: I assume you know the general theorem that, using the axiom of choice, every set can be well ordered. Given that, I think you're asking how hard it is to actually define the well ordering. This is a natural question but it turns out that the answer may be unsatisfying. -First, of course, without the axiom of choice it's consistent with ZF set theory that there is no well ordering of the reals. So you can't just write down a formula of set theory akin to the quadratic formula that will "obviously" define a well ordering. Any formula that does define a well-ordering of the reals is going to require a nontrivial proof to verify that it's correct. -However, there is not even a formula that unequivocally defines a well ordering of the reals in ZFC. - -The theorem of "Borel determinacy" implies that there is no well ordering of the reals whose graph is a Borel set. This is provable in ZFC. The stronger hypothesis of "projective determinacy" implies there is no well ordering of the reals definable by a formula in the projective hierarchy. This is consistent with ZFC but not provable in ZFC. -Worse, it's even consistent with ZFC that no formula in the language of set theory defines a well ordering of the reals (even though one exists). That is, there is a model of ZFC in which no formula defines a well ordering of the reals. - -A set theorist could tell you more about these results. They are in the set theoretic literature but not in the undergraduate literature. -Here is a positive result. If you work in $L$ (that is, you assume the axiom of constructibility) then a specific formula is known that defines a well ordering of the reals in that context. However, the axiom of constructibility is not provable in ZFC (although it is consistent with ZFC), and the formula in question does not define a well ordering of the reals in arbitrary models of ZFC. -A second positive result, for relative definability. By looking at the standard proof of the well ordering principle (Zermelo's proof), we see that there is a single formula $\phi(x,y,z)$ in the language of set theory such that if we have any choice function $F$ on the powerset of the reals then the formula $\psi(x,y) = \phi(x,y,F)$ defines a well ordering of the reals, in any model of ZF that happens to have such a choice function. Informally, this says that the reason the usual proof can't explicitly construct a well ordering is because we can't explicitly construct the choice function that the proof takes as an input. - -REPLY [30 votes]: No, it's not just the size. One can constructively prove the existence of large well-ordered sets, but for example even when one has the first uncountable ordinal in hand, one can't show that it is in bijection with $\mathbb{R}$ without the continuum hypothesis. -All the difficulty in the problem has to do with what you mean by "constructed." If one has a well-ordering on $\mathbb{R}$ then it is possible to carry out the construction of a Vitali set, which is a non-measurable subset of $[0, 1]$. And it is known that the existence of non-measurable subsets of $\mathbb{R}$ is independent of ZF. In other words, it is impossible to write down a well-ordering of $\mathbb{R}$ in ZF. -On the other hand given AC one can obviously write down a well-ordering in a non-constructive way (choose the first element, then the second element, then...). This is probably not what you meant by "construct," though.<|endoftext|> -TITLE: Precedence of set union, intersect, and difference? 
-QUESTION [13 upvotes]: Online, I have read contradicting opinions on whether intersect should take precedence over union (by analogy to logical "and" and "or"), or whether all set operators should have equal precedence. -Which way makes more sense and why? -And where does difference fit in? (I'd say it should have the same precedence as intersect because A - B = A intersect B'.) - -REPLY [12 votes]: There is no sensible way of preferring one of intersection and union over the other, since complement switches them. I don't think you should assume an order at all and you should always use parentheses. - -REPLY [10 votes]: Since there are contradicting opinions, I recommend assuming that whoever is reading your work assumes a different precedence order than you do, and using lots of parentheses.<|endoftext|> -TITLE: Are there broad or powerful theorems of rings that do not involve the familiar numerical operations (+) and (*) in some fundamental way? -QUESTION [6 upvotes]: I am of, and I would like to retain, a mindset that mathematics does not have to have numbers as the central object of interest. With that in mind, I have done a fair amount of self-study on topics in undergrad modern algebra, looking for examples to show this is the case. -For objects like groups, it is often stated explicitly, at some point, that group elements can be thought of as automorphisms on a particular set. I find this pretty interesting, that numbers do not come into play at all here. This makes groups pretty useful for problems that have symmetries. -However, for objects like fields and rings, it seems that their application often ends up dealing with numbers somehow. I'm aware there are probably a handful of cases where rings and fields do not show up as numbers, but those tend to be side-cases that are not of main study. I'm only vaguely aware of Galois theory and its uses for fields as extensions of groups, so perhaps that is the direction I should follow there. -But as for rings, aside from the example I just gave, I still come up a bit short on any powerful or broad theorems that make use of those two operations but do not involve numbers in some fundamental way. Perhaps math.stackexchange can prove me wrong? -P.S. Answers with really good links to books or other resources will at least get an upvote. -An update for those who pointed out my lack of precision. -So admittedly, the choice of words was bad. I did use the phrase "involve numbers in some fundamental way", which, in my mind, means "uses facts about numbers". Perhaps I can rephrase it better. -Suppose I have a group (G, *). There are many examples of groups if I want to let G be the integers, rationals, reals, complex, etc, where the group operation (*) can be addition, multiplication, or some other operation. -However, each of these is a "numerical" example. There are groups which do not behave like numbers: if we let G be the set of all bijections on a set S and let (*) be composition, we may not be able to find a group where the set S is a collection of numbers and (*) is a "numerical" operation on them. Our group operation (*) may end up just being some random rearrangement of numbers that does not take advantage of the properties of numbers. -In this way, and as others often point out, all groups are just specific cases of bijections & composition. -Now suppose I have a ring (R, +, *). The ring of functions on integers, rationals, reals, etc. takes advantage of the properties of numbers when defining (+) and (*).
As would the ring of polynomials, or the ring of matrices. In each of these examples, while the set R is not a set of numbers, the ring ends up being an abstraction of numbers, and the operations (+) and (*) make use of this fact. -An example of what I mean -To give an example (counterexample?) of what I mean, try Boolean rings. The operations (+) and (*) correspond to the logical operations "xor" and "and". There are also examples of lattices with join and meet. One could try to squeeze the natural numbers into this with lcm and gcd, but these operations are not unique to numbers so they are not exactly "numerical". -If you want to add more structure, say an additive and multiplicative identity, you can use Stone's Representation Theorem to show it is isomorphic to a collection of sets with symmetric difference and intersection. - -REPLY [11 votes]: I think this is a legitimate observation -- it is possible to do lots of group theory without knowing much about integers or real numbers, but these number systems seem to come up immediately whenever one is dealing with rings. -It seems to me that the main reason is that every ring has a characteristic, and therefore a ring with unity necessarily includes a copy of either the integers or some $\mathbb{Z}_n$. Furthermore, any element of a finitely-generated ring can be written as a (possibly noncommutative) polynomial in the generators, where the coefficients of the polynomial are integers (or perhaps some elements of $\mathbb{Z}_n$). Adding and multiplying these polynomials inherently requires addition and multiplication of integers. -Once you get past this basic "number-ness", there are many rings that are otherwise unrelated to either integers or real numbers. For example, it's possible to develop the theory of polynomial rings quite a bit without using numbers for anything other than coefficients. The same goes for group rings, whose elements are hardly more number-like than typical group elements. For a more sophisticated example, the elements of the (integer) cohomology ring of a topological space are essentially geometric objects, not numbers. -Fields are the same way. Every field inherently contains either the rational numbers $\mathbb{Q}$ or a prime field $\mathbb{Z}_p$, but aside from that one could argue that many fields have very little to do with numbers.<|endoftext|> -TITLE: Powers of random matrices -QUESTION [18 upvotes]: Let $M$ be an $n \times n$ matrix whose elements are random reals in [0,1]. -Two questions. - - What is the growth rate of the magnitude of the elements of $M^k$ as a function of $k$? It is definitely -exponential, but maybe the exponent is known? - Is it the case that eventually one element of $M^k$ dominates, as $k \rightarrow \infty$? -I have some ambiguous experimental evidence that this is the case, but because -of the exponential growth, exact computation is difficult, rendering my "evidence" tenuous -at best and perhaps worthless. - -One can ask the same question for matrices whose elements are random reals in [-1,1], -or random 0's and 1's, or random choices among $\lbrace -1, 0, 1\rbrace$, ... -These questions have likely been studied. Thanks for pointers and/or ideas! - -REPLY [5 votes]: I don't have enough reputation to leave a comment, so here goes.... -As others have observed, the growth rate of $M^k$ is determined by the largest eigenvalue of $M$. I just want to note that for a nonnegative matrix, the largest eigenvalue is always -between the largest and smallest column sum.
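-A quick numeric illustration of that column-sum bound (a sketch; numpy is assumed, and the matrix size and trial count are arbitrary choices):
-    import numpy as np
-    # For a nonnegative matrix, the spectral radius lies between the
-    # smallest and largest column sum (a Perron-Frobenius-type bound).
-    rng = np.random.default_rng(0)
-    for _ in range(100):
-        M = rng.random((6, 6))                # entries uniform in [0, 1]
-        lam = max(abs(np.linalg.eigvals(M)))  # spectral radius
-        s = M.sum(axis=0)                     # column sums
-        assert s.min() - 1e-9 <= lam <= s.max() + 1e-9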
-That, in principle, could allow you to obtain upper and lower bounds by trying to bound how big or small the column sums usually get for the random matrix you described.<|endoftext|> -TITLE: Partitioning sets such that the sum of 2 elements is Prime -QUESTION [6 upvotes]: Given an $n >0$ is it possible to partition the set $\mathcal{P} = \{1,2, \cdots, 2n\}$ into $n$ pairs $(a_{i},b_{i})$ such that $a_{i} + b_{i}$ is a prime? - -REPLY [13 votes]: Yes. The proof is by strong induction. The base case is obvious. By Bertrand's postulate there exists a prime $p$ between $2n+1$ and $4n$, so pick the pairs $\{ p-2n, 2n \}, \{ p-2n+1, 2n-1 \}, ...$ and so forth. Now it remains to pair up the numbers $\{ 1, 2, ... p-2n-1 \}$, which is possible by the inductive hypothesis.<|endoftext|> -TITLE: Different proofs of $\lim\limits_{n \rightarrow \infty} n \int_0^1 \frac{x^n - (1-x)^n}{2x-1} \mathrm dx= 2$ -QUESTION [20 upvotes]: It can be shown that -$$ n \int_0^1 \frac{x^n - (1-x)^n}{2x-1} \mathrm dx = \sum_{k=0}^{n-1} {n-1 \choose k}^{-1}$$ -(For instance see my answer here.) -It can also be shown that $$\lim_{n \to \infty} \ \sum_{k=0}^{n-1} {n-1 \choose k}^{-1} = 2$$ -(For instance see Qiaochu's answer here.) -Combining those two shows that -$$ \lim_{n \to \infty} \ n \int_0^1 \frac{x^n - (1-x)^n}{2x-1} \mathrm dx = 2$$ -Is there a different, (preferably analytic) proof of this fact? Please do feel free to add a proof which is not analytic. - -REPLY [7 votes]: Here's another idea. -Let $P_n(x)$ be the polynomial defined by -$$P_n(x) = \frac{x^n – (1-x)^n}{2x-1},$$ -and note that $P_n(x)=(1-x)P_{n-1}(x) + x^{n-1} \qquad (1), $ where $P_1(x)=1.$ -Define $I_n = \int_0^{1} P_n(x) dx.$ Integrating (1) between 0 and 1 we obtain -$$I_n = \int_0^{1} (1-x)P_{n-1}(x) dx + \frac{1}{n},$$ -and using the symmetry of $P_n(x)$ about 1/2 we note that -$\int_0^{1} (1-x)P_{n-1}(x) dx = \frac{1}{2}I_{n-1}.$ Hence we obtain the recurrence relation for $n > 1$ -$$I_n = \frac{1}{2}I_{n-1} + \frac{1}{n}, \textrm{ where } I_1=1. \qquad (2)$$ -From which it follows that if $\lim_{n \rightarrow \infty} n I_n$ exists then it must equal 2. -To show that the limit exists we can solve (2) to obtain -$$I_n = \sum_{k=1}^n \frac {1}{k2^{n-k}},$$ -and straightforward algebra shows that $(n-1)I_{n-1} – nI_n > 0$ for sufficiently large $n$ (in fact, $n>6$), and since $nI_n$ is bounded below by 0 the result follows. -Just for completeness: -$$(n-1)I_{n-1} – nI_n = \left\lbrace \sum_{k=1}^{n-2} \frac{k}{2^k(n-k)(n-k-1)} \right\rbrace - \frac{n}{2^{n-1}} \quad \textrm{ for } n>2.$$ -Comparing the first term in the summation with the negative term we have -$$ \frac{1}{2(n-1)(n-2)} > \frac{n}{2^{n-1}} \quad \textrm{ for } n \ge 13.$$ -Hence $\lbrace nI_n \rbrace$ eventually forms a strictly decreasing sequence.<|endoftext|> -TITLE: Area of a sector between a point and a function determined by an angle -QUESTION [5 upvotes]: I've been trying to find a way to do this: given a point $P(\alpha,\beta)$, a function $f(x)$, and an angle $\theta$, find the area of the sector determined by extending a horizontal line from $P$ to $f$, and then another one from $P$ to $f$ at the angle $\theta$ from the first line. If you look at this picture: - -it's a little more clear what I'm talking about. I want to find the area of that sector $S$, which is determined by that triangle $R_1$ and the other region $R_2$. Let the value of $x$ where the horizontal line from $P$ hits $f$ be $q$ (that is, $q=f^{-1}(\beta)$). 
Now, the line $\overline{PA}$ has the equation $y=\tan\theta(x-\alpha)+\beta$. The intersection of that and $f$, the point $A$ (the first intersection point), is the beginning of the interval determining $R_2$; I'll call that value of $x$ the root $p$, so $p$ satisfies $f(p)=\tan\theta(p-\alpha)+\beta$. The area of $R_1$ is simply the area of the triangle: $R_1=\frac{1}{2}(p-\alpha)(f(p)-\beta)$. The area of $R_2$ is the area under the curve from $p$ to $q$, minus the initial height $\beta$: $R_2=\displaystyle\int_p^q (f(x)-\beta) dx$. Therefore, the total area is $$S=\frac{1}{2}\tan\theta(p-\alpha)^2-\beta(q-p)+\displaystyle\int_p^q f(x) dx$$
-This is the way I did it, but there has to be a simpler way; is there one involving polar equations? Is there a general formula when $f$ is non-invertible?
-Edit: If the function $f$ is translated so that $P$ is now the origin, the area is just $\displaystyle\frac{1}{2}\int_0^\theta r^2 d\phi$ where $r(\phi)$ is $f(x)$ in polar form. However, translating $f$ over to where $P$ is on the origin turns it into $f(x+\alpha)-\beta$. Is there a nice way of putting that into polar form?
-
-REPLY [2 votes]: Yes, it is easier using polar coordinates. If you have a region in the plane determined by two straight lines $OA$ and $OB$ starting in a point $O$, like those in your picture, forming angles $\theta_1$ and $\theta_2$ with respect to the $x$-axis ($OB$ doesn't need to be horizontal), and you close your region with a curve whose polar equation is $r = r(\theta )$, then you apply the change of variables theorem, and you have
-$$A(D) = \int_D dxdy = \int_{\theta_1}^{\theta_2} \int_0^{r(\theta)} rdrd\theta = \frac{1}{2} \int_{\theta_1}^{\theta_2} r(\theta)^2 d\theta \ .$$
-Here $D$ is your sector and $A(D)$ its area.
-For instance, if you want to compute the area of a circle of radius $R$ (you already know, but just in case...), $r(\theta ) = R$. Hence
-$$A(D) = \frac{1}{2}\int_0^{2\pi} R^2d\theta = \pi R^2 \ .$$
-Amazing, isn't it? :-)
-So the only problem could be that your data necessarily start with the equation of the curve given in Cartesian coordinates $y = f(x)$ and you have to translate it into polar coordinates.<|endoftext|>
-TITLE: The zeros of a multivariable polynomial
-QUESTION [11 upvotes]: The other day I came across the following statement:
- A polynomial $f(x,y)$ of degree at most $3$ that vanishes at $8$ of the $9$ points $(x,y)$ with $x, y \in \{-1,0,1\}$ must also vanish at the $9$th point.
-I am wondering about how this statement generalizes. Specifically, I am looking for a theorem of the form
- Suppose a polynomial $f(x_1, \ldots, x_n)$ of degree $d$ vanishes on some discrete set $S \subset R^n$, which satisfies _______. Then, defining a discrete set $U$ as ______, the polynomial $f$ must also vanish on $U$.
-Can anyone fill in the blanks?
-
-REPLY [7 votes]: The statement you mention is a special case of the Cayley-Bacharach theorem.
-For one way of generalizing the theorem, see the beautiful article "Cayley-Bacharach theorems and conjectures" by Eisenbud-Green-Harris.<|endoftext|>
-TITLE: relation between set operations
-QUESTION [8 upvotes]: I was wondering about the relation between the complement of a subset, the difference between two subsets, and the union and intersection of subsets. Can we reduce the set of the above operations in a minimal way so that the other operations can be represented by these "independent" operations? And how many ways are there to do this?
-For example, the intersection (or union) can be represented by complement and union (or intersection) in the same way as in De Morgan's laws.
-But can we represent complement or difference in terms of union and intersection?
-Thanks!
-
-REPLY [3 votes]: Going the other way, we can also recover intersection from set difference alone: $A \cap B = A - (A - B)$. (I do also prefer $A - B$ to $A \setminus B$.)
-This expression comes in handy when you are dealing with cardinality.<|endoftext|>
-TITLE: How to prove $a^n < n!$ for all $n$ sufficiently large, and $n! \leq n^n$ for all $n$, by induction?
-QUESTION [6 upvotes]: I have a couple of things I want to prove. I'm pretty sure a proof by induction is the best route for these.
-
-First, I need to show that $5^n < n!$ for all $n$ beyond some $n_{0} > 0$. I'm choosing $n_{0} = 12$ since that's the smallest positive integer where $n! > 5^n$.
-So: $P(k) : 5^k < k!$
-
-Show $P(12)$: $5^{12} < 12!$
-Assume $P(k)$, show $P(k+1)$:
-
-$P(k+1):$
-$5^{k+1} < (k+1)!$
-$5*5^k < (k+1)*k!$
-This is where I'm stuck. Since our previous assumption was that $5^k < k!$, couldn't we conclude the proof by showing:
-$5 \leq (k+1)$ for $k \geq 12$?
-
-The second one is:
-$P(n): n! \leq n^n$
-
-Show $P(1)$: $1 \leq 1$
-Assume $P(n)$, show $P(n+1)$:
-
-$P(n+1):$
-$(n+1)! \leq (n+1)^{n+1}$
-$(n+1)*n! \leq (n+1)*(n+1)^{n}$
-If we cancel out the common factor (we can do that, right?) $(n+1)$ we get:
-$n! \leq (n+1)^n$ which is true for $n \geq1$.
-Therefore, $n! \leq n^n$ for $n \geq 1 $
-
-So in conclusion, I'm wondering if these proofs are sufficient (and correct). I kinda feel like I'm missing something in the last few steps - that it's "turtles all the way down"
-
-REPLY [9 votes]: HINT $\ $ Rewrite them as products so as to make the induction completely trivial.
-Namely $\quad\quad\displaystyle \frac{n^n}{n\:!}\ =\ \ \: \frac{n}1\ \ \ \frac{n}2\: \ \ \frac{n}3\ \:\cdots\ \frac{n}n\ \ge\ 1\ \ $ since each factor is $\:\ge 1$
-$\quad$ and $\rm\quad\quad\displaystyle\ \frac{n\:!}{5^n}\ =\ \frac{12\:!}{5^{12}}\ \frac{13}{5}\ \frac{14}5\ \cdots\ \frac{n}5\ >\ 1\ \ $ since each factor is $\: > 1$
-Now you need only inductively prove the lemma that a product is $\ge 1\:$ or $> 1\:$ if each factor is - which is completely trivial. Many inductive proofs can be similarly drastically simplified by conceptual preprocessing. Here we've effectively employed multiplicative telescopy to reduce generally intractable exponential inequalities to tractable polynomial inequalities. $\: $ This is a powerful technique with widespread applications. See also the analogous case of additive telescopy and the fundamental theorem of difference calculus.<|endoftext|>
-TITLE: Alternatives to arxiv
-QUESTION [7 upvotes]: I am an amateur mathematician (but I do have degrees in computer science (with mathematics)). Anyways, I have written this paper, where I have proved that for $\zeta(\rho) = 0$ if $\Im(\rho) \to \infty$ then $\Re(\rho) \leq \log_2(3) - 1$.
-Now this is a big claim, and I am a cranky fella ;) who has a number of withdrawn papers on arxiv. So, chances of me being taken seriously $\to 0$ even though I do put my sweat in for checking any paper I write, before releasing it.
-So, the problem is how do I get my paper discussed and verified, before I put it on arxiv (because I do not want to withdraw it again), and also I do not know any local mathematician working on analytic number theory.
-So here's my question:
-
-Before posting on arxiv are there any
- other avenues through which I can get
- my paper checked (on the internet)?
-
-REPLY [13 votes]: There are other places to post papers, but most are either harder to get into (most online journals) or are less reputable than the arXiv (like viXra). Finding math forums and posting there may be your best bet.
-
-http://mymathforum.com/
-http://mersenneforum.org/
-http://physicsforums.com/
-
-to name just a few.
-
-On this specific point:
-Your result seems wrong. Aren't there vertical asymptotes and, in particular, values unbounded above for $\Re(\zeta(x+iy))$ for any fixed $y$?
-If you have (or think you have) an effective proof, can you give bounds Y and Z > 0 where, for any $y>Y$, $\Re(\zeta(x+iy)) < Z$?
-REPLY [10 votes]: No one on the internet (or elsewhere) is obliged to check your paper. So you have to catch people's interest and minimize the amount of investment they have to make.
-Here are some suggestions based not on personal experience but from being a spectator round the Net:
-
-Make your work easy to understand: Present your work in a theorem-lemma format, clearly specifying links to existing literature and pinpointing your own specific contributions.
-Make your work publicly accessible: Put it up on your website. Or on Google Documents. Or on http://vixra.org
-Publicise the key points: Make a newsgroup posting, or create a question here or on Mathoverflow that explains your main insight or innovation and link to your paper.
-
-I was a bit hesitant in writing (3) since if you do it wrong you are just creating spam. No one is obliged to read on if you say "I have proved X. Can you please check?". Much better to say "So far people did not succeed in approach A to problem B because of C. But I think things can be made to work if you do D. Here's an attempt."<|endoftext|>
-TITLE: Magnet Mandelbrot Set
-QUESTION [8 upvotes]: We know that the Mandelbrot set is derived from the iterations of z^2 + c.
-Does anyone know something about the magnet Mandelbrot? I found it in the software UltraFractal, and it is much more beautiful than the original Mandelbrot, in my opinion.
-Do you know anything about it? What is it iterating?
-EDIT: Here's a picture of a magnet Julia set
-
-REPLY [9 votes]: Apparently, the idea is roughly this:
-
-At low temperatures, the metallic crystal lattice is completely ordered, and the metal is magnetic.
-At high temperatures, the metallic crystal lattice is completely disordered, and the metal is non-magnetic.
-As you warm up a low temperature lattice, small, isolated pockets of disorder appear in the otherwise ordered lattice.
-As you cool down a hot lattice, small pockets of order appear in the otherwise chaotic arrangement.
-You can write a "magnetic phase renormalisation transform" which represents "zooming out" of the lattice.
-At lowish temperatures, when you zoom out, you can't "see" the small pockets of disorder any more, and the lattice looks totally ordered. This tells you that at lowish temperatures, the metal is still magnetic.
-In short, to figure out whether the metal is magnetic or non-magnetic at temperature $x$, you just feed $x$ through this renormalisation transform, again and again, until $x$ settles at a very high or very low value.
-The actual renormalisation transform varies depending on the properties of the lattice. FractInt (God rest its soul) implements two such lattices, as per Rahul's answer.
Notice that each lattice has a parameter, $c$, which represents the number of possible "quantum spins" the metal ions may have.
-The guys were struggling to figure out how the properties of the lattice behave as a function of $c$. So they tried replacing the temperature $x$ with a complex number $z$. Now, obviously, the real world does not have complex-number temperatures. But they thought it might be illuminating to see what the maths does...
-...It was illuminating. They couldn't figure it out because it's a damned fractal!
-If you look at a magnetic Julia set, the region across the line $\Re(z)=0$ corresponds to real-world temperatures. (I think there's some linear transformation between actual temperatures and function coordinates; I don't recall what it is off-hand.)
-
-P.S. Apparently in the real world, $c=2$ (the "Ising spin model"), but they tried letting that be a complex number too. The results are the beautiful magnetic Mandelbrot fractals, to go with the magnetic Julias.<|endoftext|>
-TITLE: Unary intersection of the empty set
-QUESTION [13 upvotes]: In MK (Morse-Kelley) set theory life is easy: $\forall X\forall y\left(y\in\bigcap X\leftrightarrow\forall x\left(x\in X\rightarrow y\in x\right)\right)$. If $X=\left\{\right\}$ then $\bigcap X=U$, where $U$ is the universal class. So the (unary) intersection of the empty set is the class that contains all sets as elements. In ZF (Zermelo-Fraenkel) set theory, instead, proper classes are not allowed. So, how can I define $\bigcap X$ in ZF? I tried with the following definitions:
-
-$\forall X\left(X\not=\left\{\right\}\rightarrow\forall y\left(y\in\bigcap X\leftrightarrow\forall x\left(x\in X\rightarrow y\in x\right)\right)\right)$. This means that $\bigcap\left\{\right\}$ is undefined, which is not that good.
-$\forall X\forall y\left(y\in\bigcap X\leftrightarrow X\not=\left\{\right\}\land\forall x\left(x\in X\rightarrow y\in x\right)\right)$. This means that $\bigcap\left\{\right\}=\left\{\right\}$, which is the opposite of MK.
-
-I couldn't find any other valuable definition. Any ideas? Thank you.
-
-REPLY [7 votes]: The way I learned it, in ZF, we define the unary union by
-$$\forall y \left(y\in\cup X \Leftrightarrow \exists z(z\in X\wedge y\in z)\right).$$
-The unary intersection is defined using the unary union and the Axiom of Separation:
-$$\cap X = \left\{ y\in\cup X\,|\, \forall z(z\in X\rightarrow y\in z)\right\}.$$
-Using this definition, since $\cup\emptyset = \emptyset$, then $\cap\emptyset=\emptyset$ as well.<|endoftext|>
-TITLE: Metric and Topological structures induced by a norm
-QUESTION [12 upvotes]: While proving that some normed spaces were complete, two questions came to my mind. They relate the topological and the metric structures induced by a norm.
-
-Is it possible to find two equivalent norms $\|\cdot\|_1$ and $\|\cdot\|_2$ on a vector space $V \ $ such that $(V \ ,\|\cdot\|_1)$ is complete and $(V \ ,\|\cdot\|_2)$ is not?
-Is there a vector space $V \ $ and two non-equivalent norms such that $V \ $ is complete relative to both?
-
-Here I'm assuming $V \ $ is a vector space over a subfield of $\mathbb{C}$. Also I know that the answer is no if we only consider finite-dimensional vector spaces.
-[Edit: I'm considering two norms equivalent if they define the same topology. I think it's the usual notion Jonas referred to in his comment.]
-
-REPLY [11 votes]: No. It is straightforward to show that equivalent norms yield both the same convergent sequences and the same Cauchy sequences.
(Written before Rasmus's answer was posted, but posted afterward.)
-
-Yes. One way to see this is to note that isomorphism classes of vector spaces depend only on linear dimension, so the question amounts to finding 2 nonisomorphic Banach spaces of the same linear dimension. There are lots of examples of these. Every infinite dimensional separable Banach space has linear dimension $2^{\aleph_0}$. However, for example, $\ell^1$ and $c_0$ are separable Banach spaces that are not isomorphic (as Banach spaces).
-
-
-Actually, "amounts to" wasn't quite accurate. It is certainly sufficient that the 2 Banach spaces are not isomorphic, but it is not necessary because you are only asking that one particular map (the identity in the original formulation) is not an isomorphism. So what I gave above is actually stronger. To just answer 2), you could just take any infinite dimensional Banach space and induce a new norm via an unbounded linear isomorphism with itself.
-
-The answer above was assuming that equivalent norms are defined as in this PlanetMath article. If instead you meant only that the spaces are homeomorphic in the norm topologies, as Jyotirmoy Bhattacharya suspected, then the examples alluded to above won't work. However, there are also examples of pairs of Banach spaces that have the same linear dimension but are not homeomorphic, and this will work in either case. For example, $\ell^\infty$ and $c_0$ are not homeomorphic because $\ell^\infty$ is nonseparable. Both spaces have linear dimension $2^{\aleph_0}$. This was already mentioned for $c_0$, and for $\ell^\infty$ it follows because $c_0$ embeds in $\ell^\infty$ (which gives the lower bound on dimension) and because the cardinality of $\ell^\infty$ is $2^{\aleph_0}$ (which gives the upper bound).
-(I'm now pretty sure this isn't what you want, based on your edit, but this still gives another example for the actual question as well as an answer to Jyotirmoy's comment.)
-Incidentally, another way to see that $2^{\aleph_0}$ is a lower bound for the linear dimensions of $\ell^1$ and friends is to consider the linearly independent set $\{(1,t,t^2,t^3,\ldots):0\lt t\lt 1\}$. Cardinality of the spaces gives an upper bound.<|endoftext|>
-TITLE: Levin's u-transformation
-QUESTION [9 upvotes]: Suppose I'm given a very slowly converging series $\sum_k a_k$. In the literature, the Levin u-transformation is mentioned as a good universal technique for convergence acceleration.
-I have difficulty in understanding how this method actually works and what the transformation actually is. Can anyone help?
-After understanding how the method works I would like to try it out on a computer. Does anyone know any software packages which have black-boxed this process? I have access to Mathematica, Maple, MATLAB and C++.
-Thank you very much in advance.
-
-REPLY [8 votes]: As a reminder for everybody, the general Levin transformation for a sequence of partial sums $S_n=\sum\limits_{j=0}^n a_j$ looks like this:
-$$\mathcal{L}_k^{(n)}=\frac{\Delta^k\left((n+b)^{k-1}\frac{S_n}{g(n)}\right)}{\Delta^k\left((n+b)^{k-1}\frac1{g(n)}\right)}$$
-where $\Delta^k$ is the usual $k$-th forward difference operator, $g(n)$ is an auxiliary sequence, and $b$ is an adjustable real parameter that is not a nonpositive integer; more explicitly, we have
-$$\mathcal{L}_k^{(n)}=\frac{\sum\limits_{j=0}^k (-1)^j\binom{k}{j}(n+j+b)^{k-1}\frac{S_{n+j}}{g(n+j)}}{\sum\limits_{j=0}^k (-1)^j\binom{k}{j}(n+j+b)^{k-1}\frac1{g(n+j)}}$$
-Often, $b$ is taken to be $1$, and $n$ is taken to be $0$.
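-In code, the explicit form above is only a few lines. Here is an illustrative Python sketch (mine, not Weniger's routine), using exact rationals and taking the auxiliary sequence $g$ as a function argument:
-from fractions import Fraction
-from math import comb
-
-def levin(S, g, k, n=0, b=1):
-    # L_k^(n) computed directly from the explicit sum formula above
-    num = sum((-1)**j * comb(k, j) * Fraction(n + j + b)**(k - 1) * S[n + j] / g(n + j)
-              for j in range(k + 1))
-    den = sum((-1)**j * comb(k, j) * Fraction(n + j + b)**(k - 1) / g(n + j)
-              for j in range(k + 1))
-    return num / den
-
-a = [Fraction((-1)**j, j + 1) for j in range(11)]   # alternating series for log 2
-S = [sum(a[:m + 1]) for m in range(11)]             # partial sums S_0, ..., S_10
-print(float(levin(S, lambda m: a[m], k=10)))        # ~0.6931 (log 2); S_10 itself is only ~0.7365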
-The various Levin transformations correspond to different choices of the auxiliary sequence $g(n)$; to give two of the simplest cases, the $u$-transformation, for instance, takes $g(n)=(n+b)a_n$, while the $t$-transformation uses $g(n)=a_n$.
-
-The article you should be looking at, apart from David Levin's original paper, is E.J. Weniger's Nonlinear sequence transformations for the acceleration of convergence and the summation of divergent series; in there, he gives a short FORTRAN routine for implementing Levin's transformations. A more elaborate implementation by Fessler, Ford, and Smith is available at Netlib.
-Just to show how easy it is to implement the Levin transformation, here's a demonstration program I wrote for the TI-83 Plus calculator for summing the alternating harmonic series $\sum\limits_{k=1}^\infty \frac{(-1)^{k+1}}{k}$, based on Weniger's short FORTRAN routine (comments are delimited by a backslash):
-PROGRAM:XTRPOL
-Prompt N \\ number of terms of the series to use
-1→P:0→S \\ P: alternating sign S: stores partial sums
-For(K,1,N)
-P/K→U \\ K-th term of the series, change to sum a different series
-S+U→S
-1→B \\ adjustable parameter for the Levin transformation
-(B+K-1)U→T \\ Levin u-transform; for t version, remove the (B+K-1)
-prgmLEVINT
-Pause Y
--P→P
-End
-Y
-
-which uses the subroutine
-PROGRAM:LEVINT
-(B+K-1)⁻¹→W
-W/T→U
-If K=1
-ClrList ∟DL,∟NL
-U→∟DL(K)
-SU→∟NL(K)
-1-W→V
-For(J,K-1,1,-1)
-(B+J-1)W→U
-∟NL(J+1)-U∟NL(J)→∟NL(J)
-∟DL(J+1)-U∟DL(J)→∟DL(J)
-WV→W
-End
-10^(99)→Y
-If abs(∟DL(1))≥10^-99
-∟NL(1)/∟DL(1)→Y
-Y
-I haven't bothered to implement the stopping rules described by Fessler, Ford, and Smith in that demo program, but it's doable. Translating that short routine to your language of choice should be straightforward.
-As a note, the algorithm here looks very simple, due to the exploitation of the recursive identities satisfied by the forward differences.<|endoftext|>
-TITLE: Number of Pythagorean Triples under a given Quantity
-QUESTION [7 upvotes]: Consider the function $Pt(n)$. It tells us how many primitive Pythagorean triples there are (below $n$) when any argument $n \in \mathbb{N}$ is plugged in. Is there an 'exact formula'; i.e. an elementary function or even a combination of known special functions like the Gamma and Error Function, that describes $Pt(n)$?
-Max
-Edit: I'm also interested in the exact value of the limit of $Pt(n)/n$ when $n$ tends to infinity.
-
-REPLY [4 votes]: To answer the sharpened version of the question I suggested (the number of primitive Pythagorean triples with largest element ${\lt}n$ ): by the parametrization of Pythagorean triples as sums of two squares, this is (essentially) equal to the question of how many ways there are of expressing all the odd numbers ${\lt}n$ as a sum of two squares. Mathworld's page on the sum-of-two-squares function at http://mathworld.wolfram.com/SumofSquaresFunction.html indicates that this is proportional to $n$ (though it might take some work to explicitly work out the constant of proportionality for the odd case), and so in fact your intuition is wrong; the limit you suggest tends to a finite positive value.<|endoftext|>
-TITLE: License Plate Statistics
-QUESTION [10 upvotes]: California issues license plates in numeric order (if we turn the letters into numbers). I have fun noticing the latest plate I have seen. I am interested in what you can derive from a series of these observations.
I understand that sampling from $\{1,2,3...n\}$ the only useful data is the highest value you have seen. -Let's oversimplify the problem. Assume the highest plate issued is $N_0+n*t$, $n$ in plates/day and $t$ in days. Assume a similar number of low valued plates come off the road each day. I don't observe a consistent number of plates each day, but it averages out. Over a long time, the increase in highest plate seen should give a measure of $n$. The only other data I have is how frequently I see a new highest plate. Does that give some measure of how far my highest plate is from the highest issued? -As we are asked to cite the source of a question, I made it up. You probably guessed. - -REPLY [4 votes]: Joseph Gallian has decrypted many of the US state license plate and driver's license codes. -http://books.google.com/books?id=PD0clAlF8O4C&pg=PA27 -I think he used Markov chain models. As whuber mentioned your problem is similar to the German tanks for which the subject reference is "extreme value statistics".<|endoftext|> -TITLE: Geometric intuition behind convergence of Fourier series -QUESTION [13 upvotes]: I've been trying to work out the best way to understand why Fourier series converge, and it's a little embarrassing but I don't even know a rigorous proof. Can someone please help put me on the right track to thinking about these issue's in the proper way? I am especially interested any geometric ways to think about the convergence issue (something I suppose which takes advantage of the fact that each component $e^{in\theta}$ corresponds to some point along the unit circle). -Thanks! - -REPLY [5 votes]: Since your question was about the geometry behind convergence, I'll chime in with a very geometric way to think about these concepts. However, as Qiaochu Yuan mentions, in order to do so, we must first nail down in what sense we mean convergence. I'll discuss the "big three" types of convergence: pointwise, uniform, and mean-square (also called $L^2$) convergence. -Let's begin with defining a notion of $error$ between $f(x)$ and the $N$th partial sum of its Fourier series, denoted by $F_N(x)$, on $-\ell -TITLE: $b \mid ac\Rightarrow b \mid (a,b)(c,b)\,$ for integers $\,a,b,c$ -QUESTION [9 upvotes]: I'm trying to prove "the following generalization of Theorem 5 [ Th.5: if $a|bc$ and $(a,b)=1$ then $a | c$ ], which uses the same argument for its proof" (Sierpinski, The Theory of Numbers): if $a$, $b$, and $c$ are integers such that $b | ac$, then $b | (a,b)(b,c)$. -I haven't been able to prove it without any reference to prime numbers (which the author introduces way later), using only divisibility and facts like $(a,b)[a,b]=ab$. Here's what I've done so far: -Let $(a,b)=d_a$, $(b,c)=d_c$, $a=a'd_a$, $c=c'd_c$. Since $b | ac$ and $a | ac$ $\Rightarrow [b,a] | ac$ (this is the argument used in Th5's proof) $\Rightarrow ab/(a,b) | ac \Rightarrow a'b | ac \Rightarrow b | d_a c$. Doing the same for $c$, we get $b | ad_c$. From this we also have $b | (ad_c,d_a c)$. -Thank you for your help. - -REPLY [8 votes]: Lemma $\rm\,\ a\mid bc\Rightarrow a\mid (a,b)(a,c)\ \ $ [my notation swaps $\,\rm a,b\,$ vs yours] -Proof $\ \ \rm \color{#c00}{ad = bc} \ \Rightarrow\ (a,b)\,(a,c)\, =\ (aa,ab,ac,\color{#c00}{bc})\, =\, \color{#c00}a\,(a,b,c,\color{#c00}d)\ \ $ $\small\bf QED$ -The OP has $\rm\,(a,b)=1\,\Rightarrow\,(a,b,c,d)=1\,$ so the above is $\rm\, (a,c) = a,\,$ so $\rm\ a\mid c$ -The proof used only basic GCD arithmetic (distributive, commutative, associative laws). 
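-A quick numerical spot-check of the Lemma, if you want one (an illustrative Python sketch; the random construction of triples with $\rm\,a\mid bc\,$ is ad hoc):
-from math import gcd
-from random import randint
-
-for _ in range(10_000):
-    u, v, w, t = (randint(1, 1000) for _ in range(4))
-    a, b, c = u * v, u * w, v * t                # a | bc by construction
-    assert (b * c) % a == 0
-    assert (gcd(a, b) * gcd(a, c)) % a == 0      # a | (a,b)(a,c), as the Lemma asserts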
-Alternatively $\rm\,(a,bc) = (a\,(1,c),bc) = (a,ac,bc) = (a,(a,b)c)\ [\,= (a,c)\ \ if\ \ (a,b) = 1]$
-See here for much more on this proof, esp. on how to view it in analogy with integer arithmetic.
-Alternatively, if you know the LCM $\cdot$ GCD law $\rm\ [a,b]\, (a,b)\, =\, ab\ $ then, employing this law,
-we have $\rm\ \ a,b\mid bc \,\Rightarrow \, [a,b]\mid bc\, \Rightarrow\, ab\mid (a,b)\,bc\, \Rightarrow\, a\mid (a,b)\,c,\ $ so $\rm\,a\,|\,c\ $ if $\rm\ (a,b)= 1.$
-This appears to be the proof that Sierpinski has in mind since his prior proof is merely the special case where $\rm\ \ (a,b)= 1,\, $ and it employs the consequent specialization of the above $\ $ LCM $\cdot$ GCD $\ $ law, explicitly that $\rm\ (a,b) = 1\ \Rightarrow\ [a,b] = ab\,$.
-For a proof of the LCM $\cdot$ GCD law simpler than Sierpinski's see the one line universal proof of the Theorem here. Not only is this proof simpler but it is also more general - it works in any domain.
-Note also that the result that you seek is a special case of the powerful Euler four number theorem (Vierzahlensatz), or Riesz interpolation, or Schreier refinement. For another example of the simplicity of proofs founded upon the fundamental GCD laws (associative, commutative, distributive, and absorptive laws), see this post on the Freshman's Dream $\rm\, (A+B)^n =\, A^n + B^n\ $ for GCDs / Ideals, $\,$ if $\rm\, A+B\ $ is cancellative. It's advantageous to present gcd proofs using these basic laws (vs. the Bezout linear form) since such proofs will generalize better (e.g. to ideal arithmetic) and, moreover, since these laws are so similar to integer arithmetic, we can reuse our well-honed expertise manipulating expressions obeying said well-known arithmetic laws. For examples see said Freshman's Dream post.
-
-See also below (merged for preservation from a deleted question).
-Note $\rm\ \ (n,ab)\ =\ (n,nb,ab)\ =\ (n,(n,a)\:b)\ =\ (n,b)\ =\ 1\ $ using prior said GCD laws.
-Such exercises are easy using the basic GCD laws that I mentioned in your prior questions, viz. the associative, commutative, distributive and modular law $\rm\:(a,b+c\:a) = (a,b).\,$ In fact, to make such proofs more intuitive one can write $\rm\:gcd(a,b)\:$ as $\rm\:a\dot+ b\:$ and then use familiar arithmetic laws, e.g. see this proof of the GCD Freshman's Dream $\rm\:(a\:\dot+\: b)^n =\: a^n\: \dot+\: b^n\:.$
-Note $\ $ Also worth emphasis is that not only are proofs using GCD laws more general, they are also more efficient notationally, hence more easily comprehensible. As an example, below is a proof using the GCD laws, followed by a proof using the Bezout identity (from Gerry's answer).
-$\begin{eqnarray}
-\qquad 1&=& &\rm(a\:,\ \ n)\ &\rm (b\:,\ \ n)&=&\rm\:(ab,\ &\rm n\:(a\:,\ &\rm b\:,\ &\rm n))\ \ =\ \ (ab,n) \\
-1&=&\rm &\rm (ar\!\!+\!\!ns)\:&\rm(bt\!\!+\!\!nu)&=&\rm\ \ ab\:(rt)\!\!+\!\!&\rm n\:(aru\!\!+\!\!&\rm bst\!\!+\!\!&\rm nsu)\ \ so\ \ (ab,n)=1
-\end{eqnarray}$
-Notice how the first proof using GCD laws avoids all the extraneous Bezout variables $\rm\:r,s,t,u\:,\:$ which play no conceptual role but, rather, only serve to obfuscate the true essence of the matter. Further, without such noise obscuring our view, we can immediately see a natural generalization of the GCD-law based proof, namely
-$$\rm\ (a,\ b,\ n)\ =\ 1\ \ \Rightarrow\ \ (ab,\:n)\ =\ (a,\ n)\:(b,\ n) $$
-This quickly leads to various refinement-based views of unique factorizations, e.g.
the Euclid-Euler Four Number Theorem (Vierzahlensatz) or, more generally, Schreier refinement and Riesz interpolation. See also Paul Cohn's excellent 1973 Monthly survey Unique Factorization Domains.<|endoftext|>
-TITLE: Laplace transformations for dummies
-QUESTION [51 upvotes]: Is there a simple explanation of what the Laplace transformations do exactly and how they work? Reading my math book has left me in a foggy haze of proofs that I don't completely understand. I'm looking for an explanation in layman's terms so that I understand what it is doing as I make these seemingly magical transformations.
-I searched the site and the closest to an answer was this. However, it is too complicated for me.
-
-REPLY [5 votes]: Refer to http://www.dspguide.com/CH32.PDF for an excellent explanation of Laplace transforms in the electrical domain.<|endoftext|>
-TITLE: Definition of $C_0$
-QUESTION [13 upvotes]: This is probably a silly question, but a couple of people that I have talked to have had different responses.
-Does $C_0$ denote the set of continuous functions with compact support or the set of continuous functions which vanish at infinity?
-
-REPLY [13 votes]: I have always seen $C_0(X)$ denoting the continuous functions vanishing at infinity, and $C_c(X)$ or $C_{00}(X)$ denoting the continuous functions with compact support, where $X$ is usually a locally compact Hausdorff space.
-A special case is $c_0$, which is shorthand for $C_0(\mathbb{N})$, and $c_{00}$ means $C_{00}(\mathbb{N})$. In this case $\mathbb{N}$ has the discrete topology, and "continuous" is redundant.
-By analogy, sometimes the compact operators on a Hilbert space are denoted by $B_0(H)$, and the finite rank operators by $B_{00}(H)$.
-See this Springer Online Reference Works article.
-
-REPLY [10 votes]: The reason people have different responses is that the notation is not completely standardized. For example, Reed & Simon use $C_0^{\infty}(X)$ for smooth functions with compact support in a space $X$, and $C_{\infty}(X)$ for continuous functions vanishing at infinity. (But instead of $C_0(X)$ for continuous functions with compact support, they write $\kappa(X)$ for some reason...)
-So you just have to check in every case which convention the text that you're reading uses.<|endoftext|>
-TITLE: The probability theory around a candy bag
-QUESTION [5 upvotes]: Consider a candy bag that contains $N=100$ candies. There are only two types of candy in the bag, say caramel candy and chocolate candy. Nothing more is known about the contents of the bag.
-Now, you are going to draw (randomly) one candy at a time from the bag until the first caramel appears. Suppose that the first caramel appeared at the $k=7$th drawing.
-At this moment, what can we say about the number of caramel candies in the bag?
-
-REPLY [5 votes]: The question you are asking here is the classical question of inferential statistics: "Given the outcome of an experiment, what can be said about the underlying probability distribution?"
-You could, for example, give an estimator for the unknown quantity "number of caramels" (called $a$ from here on). The one most often used (since it's easy to calculate) would be the maximum likelihood estimator, where you estimate $a$ to be the value that maximizes the probability of the outcome.
-In this case, you'd choose $a$ to maximize $P_a(7)$ (the probability of drawing the first caramel in the seventh draw, assuming there are $a$ of them).
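-For instance, a brute-force maximization over $a$ takes only a few lines (a quick Python sketch under the setup above; the exact-probability bookkeeping is illustrative, not from Isaac's answer):
-from fractions import Fraction
-
-def p_first_caramel_at_7(a, N=100):
-    # P(first caramel appears on draw 7) when the bag holds a caramels
-    p = Fraction(1)
-    for i in range(6):                     # draws 1 through 6 must all be chocolate
-        p *= Fraction(N - a - i, N - i)
-    return p * Fraction(a, N - 6)          # draw 7 must be caramel
-
-print(max(range(1, 95), key=p_first_caramel_at_7))   # prints 14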
-A little Excel calculation, along with Isaac's way to calculate $P_a(7)$, results in $a$ being estimated as 14.
-To judge what this result is worth, you'd need to calculate the mean squared error of this estimator, which is not as easily done.
-If you already had a hypothesis about $a$ (say $a$ < 20), you could use your experimental result to test it, using statistical hypothesis testing, too.<|endoftext|>
-TITLE: Ways to evaluate $\int \sec \theta \, \mathrm d \theta$
-QUESTION [103 upvotes]: The standard approach for showing $\int \sec \theta \, \mathrm d \theta = \ln|\sec \theta + \tan \theta| + C$ is to multiply by $\dfrac{\sec \theta + \tan \theta}{\sec \theta + \tan \theta}$ and then do a substitution with $u = \sec \theta + \tan \theta$.
-I like the fact that this trick leads to a fast and clean derivation, but I also find it unsatisfying: It's not very intuitive, nor does it seem to have applicability to any integration problem other than $\int \csc \theta \,\mathrm d \theta$. Does anyone know of another way to evaluate $\int \sec \theta \, \mathrm d \theta$?
-
-REPLY [2 votes]: My favorite way:
-$$\int\frac{d\theta}{\cos\theta}=\int\frac{\cos\theta\,d\theta}{\cos^2\theta}=\int\frac{d\sin\theta}{1-\sin^2\theta}=\text{artanh}(\sin\theta).$$<|endoftext|>
-TITLE: Normal closure in groups II
-QUESTION [6 upvotes]: Let $G = \langle x , y \ | \ x^{12}y=yx^{18} \rangle$. I want to continue a discussion here on the normal closure of $y$ in $G$.
-How can we determine a group presentation for $N$?
-
-REPLY [5 votes]: This is a much harder question than your last one, and involves two separate parts: finding generators for $N$ and finding relations for $N$. I'll try to sketch the answers.
-For context, recall from the previous thread that every element of $G$ can be written as a word in $x$ and $y$, and $N$ is the subgroup consisting of all words for which the total power of $x$ is a multiple of $6$. In particular, $N$ is a subgroup of $G$ with index $6$.
-Generators
-Finding generators for $N$ involves something called a Schreier graph. This is a directed graph defined as follows:
-
-It has one vertex for each coset of $N$ in $G$ (so six vertices total)
-There is a directed edge labeled $x$ from each coset $aN$ to the coset $xaN$, and a directed edge labeled $y$ from each coset $aN$ to the coset $yaN$. (Thus there will be $12$ edges total, $6$ labeled $x$ and $6$ labeled $y$.)
-
-For the case you are looking at, the six vertices are $N,xN,x^2N,\ldots,x^5N$. These six vertices are arranged in a directed hexagon of $x$ edges, and each vertex has a loop labeled $y$.
-
-Note that each path in this graph corresponds to a word in $x$ and $y$, with backwards travel along directed edges corresponding to inverse generators. In particular, the elements of $N$ are the paths that start and end at the vertex $N$.
-Now, the key fact is that generators for $N$ are the same as generators for the fundamental group of this graph, using $N$ as the basepoint. In this case, what that means is that $N$ is generated by the following elements:
-$$
-A=x^6,\quad B_0 = y,\quad B_1 = xyx^{-1},\quad B_2 = x^2yx^{-2},\quad\ldots,\quad B_5 = x^5yx^{-5}
-$$
-(In general, one can obtain generators for the fundamental group of a graph by first choosing a spanning tree for the graph, and then considering all closed paths starting at the basepoint that pass through exactly one edge not in the spanning tree. In this case, the spanning tree consists of the first five edges of the hexagon.)
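-(If you want to verify this mechanically, here is a small Python sketch of the Schreier-generator bookkeeping, not a library routine; the coset action is the one described above, and "X" stands for $x^{-1}$.)
-def schreier_generators(n=6):
-    rep = {i: "x" * i for i in range(n)}            # coset representatives x^i
-    act = {"x": lambda i: (i + 1) % n,              # x cycles the six cosets
-           "y": lambda i: i}                        # y fixes each coset
-    tree = {("x", i) for i in range(n - 1)}         # spanning tree: x-edges 0->1->...->5
-    gens = []
-    for g, f in act.items():
-        for i in range(n):
-            if (g, i) in tree:
-                continue                            # tree edges give trivial generators
-            j = f(i)
-            gens.append(rep[i] + g + "X" * len(rep[j]))
-    return gens
-
-print(schreier_generators())
-# ['xxxxxx', 'y', 'xyX', 'xxyXX', 'xxxyXXX', 'xxxxyXXXX', 'xxxxxyXXXXX'], i.e. A and B_0, ..., B_5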
-Relations -Next, the relations for $N$ come from the relations for $G$. There is one relation for $G$, namely: -$$ -x^{12}yx^{-18}y^{-1} = 1. -$$ -This will lead to six different relations for $N$, obtained by conjugating this relation by representatives for the six cosets of $N$. Thus, the relations in $N$ can be written as -$$ -x^{12}yx^{-18}y^{-1} = 1, \quad x(x^{12}yx^{-18}y^{-1})x^{-1} = 1, \quad\ldots,\quad x^5(x^{12}yx^{-18}y^{-1})x^{-5}=1. -$$ -Of course, these are expressed in terms of $x$ and $y$, and we must rewrite them in terms of the generators $A,B_0,\ldots,B_5$: -$$ -A^2B_0 A^{-3} B_0^{-1} = 1,\quad A^2B_1 A^{-3} B_1^{-1} = 1,\quad\ldots,\quad A^2B_5A^{-3}B_5^{-1} = 1 -$$ -(The justification for this comes from algebraic topology. Each group $G$ has an associated 2-complex, which has a single vertex, one loop at the vertex for each generator, and one 2-cell for each relation. The fundamental group of this 2-complex is equal to $G$. The finite-index subgroup $N$ corresponds to a finite-sheeted cover of this 2-complex. This cover has a Schreier graph as its 1-skeleton, and the six 2-cells of the cover are attached along the six words listed above.) -Final Result -In any case, we have now obtained a presentation of the group $N$: -$$ -N = \langle A,B_0,\ldots,B_5 \mid A^2B_k = B_kA^3\text{ for each $k$}\rangle -$$ -By the way, the group $G$ that you seem to be interested in is called a Baumslag-Solitar group, and is usually denoted $B(12,18)$.<|endoftext|> -TITLE: Do functions defined on global elements give rise to arrows in a well-pointed topos? -QUESTION [6 upvotes]: Suppose that $\mathcal{E}$ is a well-pointed elementary topos, that $X$ and $Y$ are objects of $\mathcal{E}$, and that $F$ is a function which maps global elements $p: 1 \to X$ to global elements $F(p): 1 \to Y$ (here $1$ is the terminal object of $\mathcal{E}$). Does there exist a (necessarily unique) arrow $f: X \to Y$ in $\mathcal{E}$ such that $fp = F(p)$ for all $p$? Equivalently, is any object in a well-pointed topos the coproduct over its global elements of $1$? It's easy to show that the answer is "yes" if the coproduct exists since the induced map $\coprod_{p \in \Gamma X} 1 \to X$ is iso. But I don't know whether the coproduct exists in general. -(Could somebody with enough reputation create a "topos-theory" tag and add it to this? Thanks) - -REPLY [2 votes]: Perhaps consider a full subcategory of Set with just the functions you are allowed to create given ZF. Then find some function that in normal set theory requires AC. This should do the trick. Or use the topos of constructible sets and functions, same thing.<|endoftext|> -TITLE: Covering ten dots on a table with ten equal-sized coins: explanation of proof -QUESTION [28 upvotes]: Note: This question has been posted on StackOverflow. I have moved it here because: - -I am curious about the answer -The OP has not shown any interest in moving it himself - - -In the Communications of the ACM, August 2008 "Puzzled" column, Peter Winkler asked the following question: - -On the table before us are 10 dots, - and in our pocket are 10 $1 coins. - Prove the coins can be placed on the - table (no two overlapping) in such a - way that all dots are covered. Figure - 2 shows a valid placement of the coins - for this particular set of dots; they - are transparent so we can see them. - The three coins at the bottom are not - needed. 
- -In the following issue, he presented his proof: - -We had to show that any 10 dots on a - table can be covered by - non-overlapping $1 coins, in a problem - devised by Naoki Inaba and sent to me - by his friend, Hirokazu Iwasawa, both - puzzle mavens in Japan. -The key is to note that packing disks - arranged in a honeycomb pattern cover - more than 90% of the plane. But how do - we know they do? A disk of radius one - fits inside a regular hexagon made up - of six equilateral triangles of - altitude one. Since each such triangle - has area $\frac{\sqrt{3}}{3}$, the hexagon - itself has area $2 \sqrt{3}$; since the - hexagons tile the plane in a honeycomb - pattern, the disks, each with area $\pi$, - cover $\frac{\pi}{2\sqrt{3}}\approx .9069$ of the - plane's surface. -It follows that if the disks are - placed randomly on the plane, the - probability that any particular point - is covered is .9069. Therefore, if we - randomly place lots of $1 coins - (borrowed) on the table in a hexagonal - pattern, on average, 9.069 of our 10 - points will be covered, meaning at - least some of the time all 10 will be - covered. (We need at most only 10 - coins so give back the rest.) -What does it mean that the disks cover - 90.69% of the infinite plane? The easiest way to answer is to say, - perhaps, that the percentage of any - large square covered by the disks - approaches this value as the square - expands. What is "random" about the - placement of the disks? One way to - think it through is to fix any packing - and any disk within it, then pick a - point uniformly at random from the - honeycomb hexagon containing the disk - and move the disk so its center is at - the chosen point. - -I don't understand. Doesn't the probabilistic nature of this proof simply mean that in the majority of configurations, all 10 dots can be covered. Can't we still come up with a configuration involving 10 (or less) dots where one of the dots can't be covered? - -REPLY [24 votes]: Nice! The above proof proves that any configuration of 10 dots can be covered. What you have here is an example of the probabilistic method, which uses probability but gives a certain (not a probabilistic) conclusion (an example of probabilistic proofs of non-probabilistic theorems). This proof also implicitly uses the linearity of expectation, a fact that seem counter-intuitive in some cases until you get used to it. -To clarify the proof: given any configuration of 10 dots, fix the configuration, and consider placing honeycomb-pattern disks randomly. Now, what is the expected number $X$ of dots covered? Let $X_i$ be 1 if dot $i$ is covered, and $0$ otherwise. We know that $E[X] = E[X_1] + \dots + E[X_{10}]$, and also that $E[X_i] = \Pr(X_i = 1) \approx 0.9069$ as explained above, for all $i$. So $E[X] = 9.069$. (Note that we have obtained this result using linearity of expectation, even though it would be hard to argue about the events of covering the dots being independent.) -Now, since the average over placements of the disks (for the fixed configuration of points!) is 9.069, not all placements can cover ≤9 dots — at least one placement must cover all 10 dots. - -REPLY [16 votes]: The key point is that the 90.69% probability is with respect to "the disks [being] placed randomly on the plane", not the points being placed randomly on the plane. That is, the set of points on the plane is fixed, but the honeycomb arrangement of the disks is placed over it at a random displacement. 
Since the probability that any such placement covers a given point is 0.9069, a random placement of the honeycomb will cover, on average, 9.069 points (this follows from linearity of expectation; I can expand on this if you like). Now the only way random placements can cover 9.069 points on average is if some of these placements cover 10 points -- if all placements covered 9 points or less, the average number of points covered would be at most 9. Therefore, there exists a placement of the honeycomb arrangement that covers 10 points (though this proof doesn't tell you what it is, or how to find it). - -REPLY [3 votes]: If you read carefully, this proof is for an arbitrary placement of dots. So given any dot arrangement, if we just place the coins randomly (in the honeycomb arrangement,) then on average we will cover slightly more than 9 of the dots. But since we can't cover "part" of a dot (in this problem) then that means that there exists a random placement of the coins that covers all 10 dots. So no matter the configuration of dots, we know that there is always a way to cover the dots with at most 10 coins :)<|endoftext|> -TITLE: Boundedness of Continuous Bilinear Operators -QUESTION [6 upvotes]: Let $T:X \times X \to \mathbb{R}$ be a continuous bilinear operator defined on a normed linear space $X$ s.t. -$T(\alpha x + \beta y,z) = \alpha T(x,z) + \beta T(y,z)$) and $T(x,y) = T(y,x)$. -Does there exist a constant $C$ s.t. $||T(x,y)|| \leq C$ $||x||$ $||y|| \forall x,y$? -I know that the result is true if $X$ and $Y$ are complete spaces, by using the uniform boundedness principle on $T$ as a continuous function of x for fixed y (and/or the other way around). -However, I'm not sure if completeness is necessary, since it is true that a continuous linear operator $T: X \to \mathbb{R}$ has the property $||T(x)|| \leq C ||x|| \forall x$ on any normed linear space $X$ (although linear and bilinear operators are not exactly the same). - -REPLY [6 votes]: I think you can show this using the same argument as in the continuous linear operators. -Since T is continuous, then $U=T^{-1}( (-1,1) )$ is open and contains $(0,0)$. find a c>0 small enough such that if $|x|,|y|\leq c$ then $(x,y)\in U$ and then for a general point (x,y) you have -$|T(x,y)| = |T(\frac{c x}{|x|} \frac {|x|}{c}, \frac{c y}{|y|} \frac {|y|}{c} )| = \frac {|x||y|}{c^2} |T(\frac{c x}{|x|} , \frac{c y}{|y|})| \leq \frac {|x||y|}{c^2}$ -if x=0 or y=0 then T(x,y)=0 so you can use the argument above for $x,y \neq 0$.<|endoftext|> -TITLE: Adjoint functors -QUESTION [7 upvotes]: I'm trying to wrap my brain around adjoint functors. One of the examples I've seen is the categories $\bf IntLE \bf = (\mathbb{Z}, ≤)$ and $\bf RealLE \bf = (\mathbb{R}, ≤)$, where the ceiling functor $ceil : \bf RealLE \rightarrow IntLE$ is left adjoint to the inclusion functor $incl : \bf IntLE \rightarrow RealLE$. I want to check that the following are true, as they seem to be: - -$floor : \bf RealLE \rightarrow IntLE$ would be right adjoint to $incl$ -Between the dual categories of $\bf IntGE \bf = (\mathbb{Z}, ≥)$ and $\bf RealGE \bf = (\mathbb{R}, ≥)$, $ceil$ would be right adjoint to $incl$ -Between $\bf RealGE$ and $\bf IntGE$, $floor$ would be left adjoint to $incl$ - -Is my understanding correct on these points? - -REPLY [2 votes]: Arturo has already posted a nice answer. I'd merely like to emphasize that such universal definitions often enable slick proofs, e.g. see below. 
For a much more striking example see the theorem in my post here, which presents a slick one-line proof of the LCM * GCD law via their universal definitions.
-LEMMA $\rm\: \ \lfloor x/(mn)\rfloor\ =\ \lfloor{\lfloor x/m\rfloor}/n\rfloor\ \ $
-for $\rm\ \ n > 0$
-Proof $\rm\quad\quad\quad\quad\quad\quad\quad k\ \le \lfloor{\lfloor x/m\rfloor}/n\rfloor$
-$\rm\quad\quad\quad\quad\quad\iff\quad\ \ k\ \le\ \:{\lfloor x/m\rfloor}/n$
-$\rm\quad\quad\quad\quad\quad\iff\ \ nk\ \le\ \ \lfloor x/m\rfloor$
-$\rm\quad\quad\quad\quad\quad\iff\ \ nk\ \le\:\ \ \ x/m$
-$\rm\quad\quad\quad\quad\quad\iff\ \ \ \ k\ \le\:\ \ \ x/(mn)$
-$\rm\quad\quad\quad\quad\quad\iff\ \ \ \ k\ \le\ \ \lfloor x/(mn)\rfloor $
-Compare the above trivial proof to more traditional proofs, e.g. the special case $\rm\ m = 1\ $ here.<|endoftext|>
-TITLE: The Prime Polynomial : Generating Prime Numbers
-QUESTION [8 upvotes]: First of all, I'll confess I'm no math geek. I'm from Stackoverflow, but this question seemed more apt here, so I decided to ask you guys :)
-Now, I know no one has discovered (or ever will) a polynomial that generates prime numbers. But I've read about curve fitting (or polynomial fitting), so I was wondering if there is a way we could have a simple n-degree polynomial that could generate the first 1000 (or X) primes accurately.
-I don't need it to generate all the primes, maybe just up to 1 million; since we already have the data, can we deduce that polynomial?
-How big would the polynomial have to be for it to be accurate? Could you give an example for the first 100 primes? Am I just plain naive?
-Thanks in Advance. :)
-
-REPLY [9 votes]: From MathWorld:
-However, there exists a polynomial in 10 variables with integer coefficients such that the set of primes equals the set of positive values of this polynomial obtained as the variables run through all nonnegative integers, although it is really a set of Diophantine equations in disguise (Ribenboim 1991). Jones, Sato, Wada, and Wiens have also found a polynomial of degree 25 in 26 variables whose positive values are exactly the prime numbers (Flannery and Flannery 2000, p. 51).
-Unfortunately, the primes do not come out in order, so this will not help for what you want. But it is interesting.<|endoftext|>
-TITLE: Continued Fraction of an Infinite Sum
-QUESTION [5 upvotes]: What is the continued fraction for $\displaystyle\sum_{i=1}^n\frac{1}{2^{2^i}}$?
-It seems to be "almost" periodic, but I can't figure out the exact way to express it.
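-For reference, the expansions can be computed exactly with a quick Python sketch (using the standard fractions module):
-from fractions import Fraction
-
-def contfrac(x, max_terms=30):
-    # simple continued fraction [a0; a1, a2, ...] of a rational x
-    terms = []
-    for _ in range(max_terms):
-        q = x.numerator // x.denominator
-        terms.append(q)
-        x -= q
-        if x == 0:
-            break
-        x = 1 / x
-    return terms
-
-for n in range(2, 7):
-    print(n, contfrac(sum(Fraction(1, 2**(2**i)) for i in range(1, n + 1))))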
-REPLY [10 votes]: We can apply the following general transformation formula of a series into a continued fraction, which one can justify (see Addendum 1 and Addendum 2) by comparing the continued fraction fundamental recurrence relations with the series partial sums recurrence:
-$$\sum_{n=1}^{N}\dfrac{u_{n}}{v_{n}}=\dfrac{u_{1}}{v_{1}+\underset{n=1}{\overset{N-1}{\mathbb{K}}}\left(\left( -\dfrac{u_{n+1}}{u_{n}}v_{n}^{2}\right) /\left( v_{n+1}+\dfrac{u_{n+1}}{u_{n}}v_{n}\right)\right) }.$$
-In this case, we have $u_{n}=1$, $v_{n}=2^{\left( 2^{n}\right) }$:
-$$\sum_{n=1}^{N}\dfrac{1}{v_{n}}=\dfrac{1}{4+\underset{n=1}{\overset{N-1}{\mathbb{K}}}\left( \left( -v_{n}^{2}\right) /\left( v_{n+1}+v_{n}\right)\right) }$$
-$$\sum_{n=1}^{N}\dfrac{1}{2^{2^{n}}}=\dfrac{1}{4+\underset{n=1}{\overset{N-1}{\mathbb{K}}}\left(\left( -2^{2^{n+1}}\right)/\left( 2^{2^{n+1}}+2^{2^{n}}\right)\right) }$$
-$$=\dfrac{1}{4+}\dfrac{-16}{20+}\cdots \dfrac{-2^{2^{n+1}}}{2^{2^{n+1}}+2^{2^{n}}+}{\cdots }\dfrac{-2^{2^{N}}}{2^{2^{N}}+2^{2^{N-1}}}.$$
-The transformation of the series into a continued fraction is
-$$\sum_{n=1}^{\infty}\dfrac{1}{2^{2^{n}}}=\dfrac{1}{4+\underset{n=1}{\overset{\infty}{\mathbb{K}}}\left(\left( -2^{2^{n+1}}\right)/\left( 2^{2^{n+1}}+2^{2^{n}}\right) \right) }.$$
-
-Addendum 1: The series partial sums
-$$s_{n}=\sum_{k=1}^{n}\frac{u_{k}}{v_{k}}=\frac{A_{n}}{B_{n}}$$
-satisfy, for $n\geq 2$,
-$$s_{n}=s_{n-1}+\frac{u_{n}}{v_{n}}=\frac{A_{n-1}}{B_{n-1}}+\frac{u_{n}}{v_{n}}=\frac{v_{n}A_{n-1}+u_{n}B_{n-1}}{v_{n}B_{n-1}}=\frac{A_{n}}{B_{n}}$$
-which means that
-$$A_{n}=v_{n}\;A_{n-1}+u_{n}\;B_{n-1}$$
-$$B_{n}=v_{n}\;B_{n-1}.$$
-The truncated continued fraction
-$$\underset{k=1}{\overset{n}{\mathbb{K}}}\left( a_{k}/b_{k}\right) =\frac{A_{n}}{B_{n}}$$
-satisfies:
-$$A_{n}=b_{n}\;A_{n-1}+a_{n}\;A_{n-2}\qquad A_{0}=0$$
-$$B_{n}=b_{n}\;B_{n-1}+a_{n}\;B_{n-2}\qquad B_{0}=1.$$
-
-Addendum 2: Detailed algebraic computation.
For $n=1$ we have -$$\frac{u_{1}}{v_{1}}=\frac{a_{1}}{b_{1}}=\frac{A_{1}}{B_{1}}\qquad u_{1}=a_{1}\qquad v_{1}=b_{1}.$$ -Replacing $n-1$ for $n$ in the first recurrence we get for $n\geq 3$ -$$A_{n-1}=v_{n-1}\;A_{n-2}+u_{n-1}\;B_{n-2}$$ -$$B_{n-1}=v_{n-1}\;B_{n-2}$$ -which in turn gives: -$$A_{n}=v_{n}\;A_{n-1}+u_{n}\;B_{n-1}$$ -$$=v_{n}\;\left( v_{n-1}\;A_{n-2}+u_{n-1}\;B_{n-2}\right) +u_{n}\;\left( v_{n-1}\;B_{n-2}\right) $$ -$$=v_{n}\;v_{n-1}\;A_{n-2}+\left( v_{n}\;u_{n-1}+u_{n}\;v_{n-1}\right)\;B_{n-2}$$ -and -$$B_{n}=v_{n}\;B_{n-1}=v_{n}\;v_{n-1}\;B_{n-2}.$$ -The same substitution in the second recurrence yields (for $n\geq 3$): -$$A_{n-1}=b_{n-1}\;A_{n-2}+a_{n-1}\;A_{n-3}$$ -$$B_{n-1}=b_{n-1}\;B_{n-2}+a_{n-1}\;B_{n-3}.$$ -Combining everything we obtain: -$$A_{n}=b_{n}\;A_{n-1}+a_{n}\;A_{n-2}$$ -$$=b_{n}\;\left( v_{n-1}\;A_{n-2}+u_{n-1}\;B_{n-2}\right) +a_{n}\;A_{n-2}$$ -$$=\left( b_{n}\;v_{n-1}+a_{n}\;\right) \;A_{n-2}+b_{n}\;u_{n-1}\;B_{n-2}$$ -and -$$B_{n}=b_{n}\;B_{n-1}+a_{n}\;B_{n-2}$$ -$$=b_{n}\;\left( v_{n-1}\;B_{n-2}\right) +a_{n}\;B_{n-2}$$ -$$=\left( b_{n}\;v_{n-1}+a_{n}\right) \;B_{n-2}$$ -Comparing both $A_{n}$ and $B_{n}$ formulae -$$A_{n}=v_{n}\;v_{n-1}\;A_{n-2}+\left( v_{n}\;u_{n-1}+u_{n}\;v_{n-1}\right)\;B_{n-2}$$ -$$A_{n}=\left( b_{n}\;v_{n-1}+a_{n}\;\right) \;A_{n-2}+b_{n}\;u_{n-1}\;B_{n-2}$$ -and -$$B_{n}=v_{n}\;v_{n-1}\;B_{n-2}$$ -$$B_{n}=\left( b_{n}\;v_{n-1}+a_{n}\right) \;B_{n-2}$$ -one concludes that -$$v_{n}\;u_{n-1}+u_{n}\;v_{n-1}=b_{n}\;u_{n-1}$$ -$$v_{n}\;v_{n-1}=b_{n}\;v_{n-1}+a_{n}.$$ -Hence -$$a_{n}=v_{n}\;v_{n-1}-b_{n}\;v_{n-1}$$ -$$=v_{n}\;v_{n-1}-\left( v_{n}\;u_{n-1}+u_{n}\;v_{n-1}\right)\;v_{n-1}/u_{n-1}$$ -$$=v_{n}\;v_{n-1}-v_{n}\;v_{n-1}-u_{n}\;v_{n-1}\;v_{n-1}/u_{n-1}$$ -$$=-\frac{u_{n}}{u_{n-1}}v_{n-1}^{2},$$ -and -$$b_{n}\;u_{n-1}=v_{n}\;u_{n-1}+u_{n}\;v_{n-1}$$ -$$b_{n}=v_{n}+\frac{u_{n}}{u_{n-1}}v_{n-1}.$$ -Thus for $n\geq 2$ -$$a_{n}=-\frac{u_{n}}{u_{n-1}}v_{n-1}^{2}$$ -$$b_{n}=v_{n}+\frac{u_{n}}{u_{n-1}}v_{n-1}.$$<|endoftext|> -TITLE: Theorems in Measure Theory: Fatou's Lemma, Lebesgue DCT, Monotone CT -QUESTION [11 upvotes]: In measure theory there are three fundamentally related theorems about exchanging limits and integrals: Fatou's lemma, Lebesgue's Dominated Convergence Theorem, and Monotone Convergence Theorem. It is difficult to prove any of these from scratch, but once you have one, the others are easier. -My question is, for those who have learned these theorems: which one do you prefer to prove first? Difficulty, length, and, perhaps most importantly, how enlightening each path is are the key considerations. I suppose you could also phrase the question: if you were teaching a class in what order would you prove these theorems. -I've read through all of the proofs and there doesn't seem to be a big difference, but perhaps someone can shed some new light on this question. - -REPLY [9 votes]: I've generally seen MCT -> Fatou -> DCT. MCT is nice if the integral is defined as the supremum of the integrals of all simple functions less than $f$. Fatou points out that you can lose mass when passing to the limit, but cannot gain it. And DCT is nice to prove with two applications of Fatou, since turning your head upside down shows that you cannot gain mass either positively or negatively. -I disagree with Jonas's idea that DCT is the "biggest" one, since it doesn't speak about functions not in $L^1$, which the others do; this is often very important. Also, I see the hypothesis of the DCT as somewhat ad hoc. 
To my mind, the "biggest" one is the Vitali convergence theorem, whose hypothesis is uniform integrability, which is necessary and sufficient. But since it is more complicated it is often skipped.<|endoftext|>
-TITLE: Local homeomorphism and coverings
-QUESTION [5 upvotes]: Let $f: X \rightarrow Y$ be a local homeomorphism with X, Y connected, locally path connected, Hausdorff and with X also compact. Then f is also a covering with finite fibers.
-I know how to show that the fibers are finite.
-Given that f is a surjection, I know how to show that f is a covering map.
-How do I show that f is surjective?
-
-REPLY [7 votes]: I think I have it. The image of X under f is open (use "local homeomorphism") and closed (use "compact" and "Hausdorff") in Y and since Y is connected this shows that the image is the whole of Y.<|endoftext|>
-TITLE: Motivation behind the definition of complete metric space
-QUESTION [15 upvotes]: What is the motivation behind the definition of a complete metric space?
-Intuitively, a metric space is complete if there are no points missing from it.
-How does the definition of completeness (in terms of convergence of Cauchy sequences) capture that?
-
-REPLY [3 votes]: This answer only applies to the order version of completeness rather than the metric version, but I've found it quite a nice way to think about what completeness means intuitively: consider the real numbers. There the completeness property is what guarantees that the space is connected. The rationals can be split into disjoint non-empty open subsets, for example the set of all positive rationals whose squares are greater than two, and its complement, and the reason this works is because, roughly speaking, there is a "hole" in between the two sets which lets you pull them apart. In the reals this is not possible; there are always points at the ends of intervals, so whenever you partition the reals into two non-empty subsets, one of them will always fail to be open.<|endoftext|>
-TITLE: Fekete's conjecture on repeated applications of the tangent function
-QUESTION [33 upvotes]: A high-school student named Erna Fekete made a conjecture to me via email three years ago,
-which I could not answer. I've since lost touch with her.
-I repeat her interesting conjecture here, in case anyone can provide updated
-information on it.
-Here is how she phrased it. Let $b(0) = 1$ and $b(n)= \tan( b(n-1) )$.
-In other words, $b(n)$ is the repeated application of $\tan(\;)$ to 1:
-$$\tan(1) = 1.56, \; \tan(\tan(1)) = 74.7, \; \tan^3(1) = -0.9, \; \ldots $$
-Let $a(n) = \lfloor b(n) \rfloor$.
-Her conjecture is:
-
-Every integer eventually appears in the $a(n)$ sequence.
-
-This sequence is not unknown; it
-is A000319 in Sloane's integer sequences.
-Essentially hers is a question about the orbit of 1 under repeated $\tan(\;)$-applications.
-Her investigations and mine at the time led us to believe it was an open problem.
-
-REPLY [16 votes]: I had made the same conjecture as Fekete, apparently around the same time -- mid-2007. In 2008 I verified that the first twenty million terms do not include 319. (I actually pushed the verification further, but I can't find the more recent records at the moment.)
-Because $\tan(x) - x = x^3/3 + O(x^5)$, the function spends a lot of its time in a small neighborhood around $0$. It escapes when it nears $\pi/2$ and quickly returns for many iterations.
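-If you want to play with the orbit yourself, something like the following Python sketch works (not the program I used back then; it assumes mpmath, and the working precision has to grow with the number of iterates, since fixed-precision floats lose the true orbit almost immediately):
-from mpmath import mp, tan, floor
-
-mp.dps = 1000                  # decimal digits of working precision; raise to go deeper
-b = mp.mpf(1)
-for k in range(1, 21):
-    b = tan(b)
-    print(k, int(floor(b)))    # 1, 74, -1, ... matching the values quoted above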
-A mostly-unexplained phenomenon presumably related to the above: there are long spans of small numbers followed by short, 'productive' spans with large numbers. $\tan^k(1)$ is "below 20 or so" (according to a 2008 email I sent) for $360110\le k\le1392490$ but in the next 2000 numbers there are five which are above 20.
-More theory is needed!<|endoftext|>
-TITLE: Why can't Cantor sets cover $\mathbb{R}$?
-QUESTION [16 upvotes]: The Cantor set is uncountable so I expect countably many of them to be able to cover $\mathbb R$, but the set has measure $0$ so countably many of them also have total measure $0$ and thus can't cover the real line.
-Why/where is my intuition broken?
-
-REPLY [33 votes]: You can't cover a complete metric space with a countable union of closed nowhere dense subsets, by Baire's theorem. Cardinality tells us very little. Even infinite dimensional but separable Banach spaces have the same cardinality as lines, but perhaps here it is more intuitive that you will never cover such a space with countably many lines. Similarly, you can't cover the plane with countably many lines, polygons, conic sections, etc.
-To see how Baire's theorem is more fundamental than measure here, note that the same is true for "fat" Cantor sets. That is, you take out smaller intervals to obtain Cantor-like sets with positive measure. These sets are still closed and nowhere dense, so by Baire's theorem the line is not a countable union of such sets, even though comparing measures wouldn't tell you this.
-
-REPLY [13 votes]: It's not just that the Cantor set has measure 0, it's also that the reals are a Baire space and the Cantor set is nowhere dense. E.g. take a fat Cantor set, $C$, and look at $C + \mathbb{Z}$, which has infinite measure but is still nowhere dense.
-UPDATE: There is a related discussion on MO that could be of interest.<|endoftext|>
-TITLE: How do I determine the possible number of combinations of two ordered sets?
-QUESTION [6 upvotes]: I'm not quite sure what the mathematical term for what I'm asking is, so let me just describe what I'm trying to figure out. Let's say that I have two ordered sets of numbers $\{1, 2\}$ and $\{3, 4\}$. I'm trying to figure out the number of possible ways to combine these two sets into one without breaking the ordering of the two sets.
-So for instance, $\{1, 2, 3, 4\}$, $\{3, 4, 1, 2\}$, and $\{1, 3, 2, 4\}$ are valid combinations, but $\{2, 1, 4, 3\}$ isn't. How do I figure out the number of valid combinations? This feels like something I should remember from college, but I'm drawing a blank. It feels somewhere in between a combination and a permutation. Maybe I'm looking for a partially-ordered permutation (which seems to be a somewhat difficult concept if Google is to be believed)?
-
-REPLY [3 votes]: Here is another way of visualising Moron's answer:
-Let $S$ be the first set, and $T$ be the second set. Now, imagine the elements of $S$ are listed, horizontally, in their order; we will not disturb their order, but will only "place" the elements of $T$ among them.
-The possible locations for any element of $T$ are (i) before the first element of $S$; (ii) between two elements of $S$; and (iii) after the last element of $S$. If $S$ has $s$ elements, then this gives $s+1$ possible locations. We can put more than one element of $T$ in each location, though.
For example, if $S=\{1,2,3\}$ and $T=\{a,b,c,d\}$, then we can place elements of $T$ either before $1$, between $1$ and $2$, between $2$ and $3$, or after $3$ (four locations); and we could place one element before $1$, none between $1$ and $2$, two between $2$ and $3$, and one after $3$. But note that once we decide where to place the elements of $T$, the order in which they will appear is completely determined. If we place them the way I just described, we would end up with $a,1,2,b, c, 3, d$, and that is the only way to place elements of $T$ as described while preserving the order.
-If we are placing $t$ elements, then, we just need to "select" $t$ locations from $s+1$ possibilities. The order in which we pick them doesn't matter, because in the end we will just put the elements of $T$ in their appropriate order in those locations. And we can select the same location more than once. So we need to compute "combinations with repetitions" (order does not matter, repetitions allowed). The formula for making $n$ selections, with repetitions allowed but where order does not matter, from among $m$ possibilities is $\binom{n+m-1}{n}$ (see for example here) so here we have $n=t$ and $m=s+1$, giving $\binom{t+s}{t} = \binom{t+s}{s}$ possibilities.<|endoftext|>
-TITLE: sign reversing involution proof of a combinatorial identity
-QUESTION [11 upvotes]: This is from an exercise in Aigner's book where one has to evaluate $\sum_{k\ge 0} (-1)^k \binom{n}{k}^2$ using sign reversing involutions. When $n$ is odd, the problem is trivial: let $[n] = \{1,2,\dots,n\}$ and consider all pairs of subsets $\{ (A,B) \in 2^{[n]} \times 2^{[n]} : |A| = |B| \}$, where $(A,B)$ has sign $(-1)^{|A|}$ and the sign-reversing involution is $(A,B) \to ([n]-A,[n]-B)$. Any hints on how to approach this problem for even $n$ will be appreciated.
-
-REPLY [10 votes]: Consider the set of pairs $(A,B)$ of subsets of $\{1,\ldots,n\}$
-such that $|A|+|B|=n$ with weight $(-1)^{|A|}$.
-The exceptional pairs where the involution is not defined are those
-where $A=B$. For each other pair $(A,B)$
-take the smallest element of the symmetric difference of $A$ and $B$
-and move it from one set to the other. The symmetric difference of
-the two new sets is the same, so this is an involution, and it is
-weight-reversing.<|endoftext|>
-TITLE: Where in the analytic hierarchy does V=L start having consequences?
-QUESTION [5 upvotes]: I note that the ordinals of L are the same as V, so I guess that it has no $\Pi_1^1$ consequences. On the other hand Wikipedia tells me that it asserts the existence of a $\Delta_2^1$ non-measurable set of reals. "Measurable" involves third-order concepts but I know that there's often a "coding trick" that gets around that sort of thing, so I guess it has some analytic consequences.
-Of course I am guessing -- I am not very good at this stuff. But I am curious. What's the least point in the analytic hierarchy where V=L matters (if any)?
-
-REPLY [8 votes]: Shoenfield's absoluteness theorem ensures that any $\Sigma^1_2$ statement (and therefore, any $\Pi^1_2$ one) has the same truth value in $L$ as in $V$.
-On the other hand, $\Sigma^1_3$ statements in general differ in truth value between $V$ and $L$. For a silly example, $\exists x\in{\mathbb R}\,(x\notin L)$ can be rewritten as a $\Sigma^1_3$ statement, which is false in $L$ but true in general. Given a $\Sigma^1_2$ formula $\phi(x,y)$, the statement ``$\phi$ defines a well-ordering of the reals'' is $\Pi^1_3$.
Again, there is a specific such $\phi$ for which this is true in $L$ and false in general.
-("In general" means here that it can be made false by forcing. There are deeper discrepancies if one allows large cardinals into the picture.)<|endoftext|>
-TITLE: Testing the series $\sum\limits_{n=1}^{\infty} \frac{1}{n^{k + \cos{n}}}$
-QUESTION [17 upvotes]: We know that the "Harmonic Series" $$ \sum \frac{1}{n}$$ diverges. And for $p >1$ we have the result that the series $$\sum \frac{1}{n^{p}}$$ converges.
-One can then ask about the convergence of the following two series:
-$$\sum\limits_{n=1}^{\infty} \frac{1}{n^{k + \cos{n}}}, \quad \sum\limits_{n=1}^{\infty} \frac{1}{n^{k + \sin{n}}}$$ where $ k \in (0,2)$.
-The only tool I have for this problem is the inequality $| \sin{n} | \leq 1$, and I am not sure whether it is applicable or not.
-
-REPLY [3 votes]: There is a very interesting paper by Bernard Brighi on the divergence of the series
-$\sum\limits_{n=1}^{\infty} \frac{1}{n^{2+ \cos{(a+n)}}}$ for any real $a$, which I would like to attach but I don't know how to do it. Maybe most people who like the series already know it.
-The link is here:
-http://www.artofproblemsolving.com/Forum/viewtopic.php?f=67&t=361997&p=1983209#p1983209<|endoftext|>
-TITLE: Proving $\sqrt{1-x^2}\ge \operatorname{erf}(\sqrt{-\log x})$
-QUESTION [13 upvotes]: Can anyone see a nice way to prove the following for $0\le x \le 1$?
-$$\sqrt{1-x^2}\ge \operatorname{erf}(\sqrt{-\log x})$$
-$\operatorname{erf}$ is defined as
-$$\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}}\int_{0}^{z} e^{-t^2} \, dt$$
-
-REPLY [22 votes]: Let $y = \sqrt{-\log x}$. Then the inequality reduces to $\text{erf}(y) \leq \sqrt{(1-e^{-2y^2})}$ or equivalently $\text{erf}^2(y) + e^{-2y^2} \leq 1$. Now $\text{erf}^2(y)$ can be written as a double integral $\text{erf}^2(y) = \frac{4}{\pi} \int_{0}^y \int_{0}^{y} e^{- (a^2+b^2)} da db$ (as Qiaochu Yuan points out, the functions involved are well behaved and the double integral is well defined). Enlarge the region of integration from the square of side $y$ in the first quadrant to a quarter-circle of radius $y\sqrt{2}$ in the first quadrant and switch to polar co-ordinates. This gives the inequality $\text{erf}^2(y) \leq 1-e^{-2y^2}$, which is what we wanted.<|endoftext|>
-TITLE: Maximum order of integers coprime to a prime $p$
-QUESTION [6 upvotes]: The following is a lemma I read online, but I don't understand part of the proof.
-Let $d$ be the maximum possible order among integers $a$ prime to $p$. Then for any integer $a$ not divisible by $p$, the order of $a$ divides $d$.
-Proof: Let $b$ be an element of order $d$, let $k$ be the order of $a$, and let $g=\gcd(k,d)$. Then $a^g$ has order $k/(k,g)=k/g$, and $k/g$ is relatively prime to $d$. Therefore $ba^g$ has order $d\cdot k/g$, which by maximality of $d$ implies that $k/g=1$. Hence $k|d$.
-The one part I don't understand is how $k/g$ and $d$ are relatively prime. If I take $d=6$, $k=4$, $g=2$, then $k/g$ and $d$ are not relatively prime. However, I've gone through a few cases for small primes, and such a situation has never arisen, so there must be some constraints on which values $k$ and $d$ can take. Could someone please explain this?
-
-REPLY [2 votes]: Below is the key theorem as it applies to arbitrary finite abelian groups. See below for an example of how it is applied to deduce the more general result that a finite multiplicative subgroup of a domain is cyclic.
The lemma is famous as "Herstein's hardest problem" - see the note below.
-$\!\begin{align}{\bf Theorem}\quad\rm maxord(G)\ &\rm =\ expt(G)\ \ \text{for a finite abelian group $\rm\, G,\, $ i.e.}\\[.5em]
-\rm \max\,\{ ord(g) : \: g \in G\}\ &=\rm\ \min\, \{ n>0 : \: g^n = 1\ \:\forall\ g \in G\} \end{align}$
-Proof $\:$ By the lemma below, $\rm\: S = \{ ord(g) : \: g \in G \}$ is a finite set of naturals closed under $\rm lcm$.
-Hence every element $\rm\ s \in S\:$ is a
-divisor of the maximal element $\rm\: m\: $ [else $\rm\: lcm(s,m) > m\:$],$\ $ so $\rm\ m = expt(G)\:$.
-Lemma $\ $ A finite abelian group $\rm\:G\:$ has an lcm-closed order set, i.e. with $\rm\: o(X) = $ order of $\rm\: X$
-$\rm\quad\quad\quad\quad\ \ X,Y \in G\ \Rightarrow\ \exists\ Z \in G:\ o(Z) = lcm(o(X),o(Y))$
-Proof$\ \ $ By induction on $\rm o(X)\: o(Y)\:.\ $ If it's $\:1\:$ then trivially $\rm\:Z = 1\:$. $\ $ Otherwise
-write $\rm\ o(X) =\: AP,\: \ o(Y) = BP',\ \ P'|P = p^m > 1\:,\ $ prime $\rm\: p\:$ coprime to $\rm\: A,B$
-Then $\rm\: o(X^P) = A,\ o(Y^{P'}) = B\:.\ $ By induction there's a $\rm\: Z\:$ with $\rm \: o(Z) = lcm(A,B)$
-so $\rm\ o(X^A\: Z)\: =\: P\ lcm(A,B)\: =\: lcm(AP,BP')\: =\: lcm(o(X),o(Y))\:$.
-Note $ $ This lemma was presented as problem 2.5.11, p. 41 in the first edition of Herstein's popular textbook "Topics in Algebra". In the 2nd edition Herstein added the following note (problem 2.5.26, p. 48)
-
-Don't be discouraged if you don't get this problem with what you know of group theory up to this stage. I don't know anybody, including myself, who has done it subject to the restriction of using material developed so far in the text. But it is fun to try. I've had more correspondence about this problem than about any other point in the whole book."
-
-
-Below is excerpted from my sci.math post on Apr 29, 2002, as is the above Lemma.
-Theorem $ $ A subgroup $G$ of the multiplicative group of a field is cyclic.
-Proof $\ X^m = 1\,$ has $\#G$ roots by the above Lemma, with $\,m = {\rm maxord}(G) = {\rm expt}(G).\,$ Since a polynomial $\,P\,$ over a field satisfies
-$\,\#{\rm roots}(P) \le \deg(P)\,$ we have $\,\#G \le m.\,$ But $\,m \le \#G\,$ because every $\,g \in G\,$ satisfies $\,g^{\#G} = 1\,$ (Lagrange's theorem), so every order is at most $\,\#G.\,$ So $\,m = \#G = {\rm maxord}(G) \Rightarrow G\,$ has an element of order $\,\#G,\,$ so $G$ is cyclic.<|endoftext|>
-TITLE: Fiber product of varieties vs schemes reference
-QUESTION [5 upvotes]: Given two complex varieties over a common base, I can take their fiber product in the category of varieties, or I can take their fiber product in the category of schemes and then take the reduced subscheme. I have heard that these two operations yield the same result.
-Does someone have a reference?
-
-REPLY [8 votes]: Here is a proof of this fact:
-Let $\mathcal C$ be the category of (not necessarily irreducible) complex varieties.
-Then $\mathcal C$ can be identified with the category of reduced finite type $\mathbb C$-schemes.
-Let $\mathcal D$ be the category of all finite type $\mathbb C$-schemes.
-Then obviously $\mathcal C$ is a full subcategory of $\mathcal D$, and the inclusion
-$\mathcal C \subset \mathcal D$ has a right adjoint, namely passage to the underlying reduced subscheme. General nonsense (i.e. an easy categorical argument) then shows that if $X\to S$ and $Y \to S$ in $\mathcal C$ are two morphisms, the fibre product in $\mathcal C$
-can be computed by first computing the fibre product in the bigger category $\mathcal D$,
-and then applying the right adjoint to the inclusion.
That is,
-the fibre product in the category $\mathcal C$ of varieties is equal to
-the reduced subscheme of the fibre product in $\mathcal D$ (which coincides with
-the fibre product in the category of all schemes, just because the fibre product
-of morphisms of finite type $\mathbb C$-schemes is again finite type over $\mathbb C$).
-I'm not sure of a reference. Because the proof is easy when you have the right
-categorical framework, it is the kind of thing that is well-known to experts but whose proof
-is not necessarily written down explicitly.<|endoftext|>
-TITLE: How to Slice the Cheese
-QUESTION [8 upvotes]: I encountered a problem recently stated as below:
-How many pieces of cheese can we obtain from a single thick piece by making five straight slices? (We can't move the cheese when slicing.) If we want to maximize the number of pieces, denoted by $P(n)$, is there any recurrence relation for $P(n)$, where $n$ is the number of slices?
-Any hints will be highly appreciated.
-
-REPLY [2 votes]: This is the lazy caterer's sequence. As others have mentioned, arbitrary dimensional analogues are called hyperplane arrangements.<|endoftext|>
-TITLE: In written mathematics, is $f(x)$ a function or a number?
-QUESTION [23 upvotes]: I often see notation/wording like "let $f(x)$ be a continuous function" or "let $f(x) \in C^0(\mathbb{R})$".
-I would say that $\sin$ and $x \mapsto \sin(x)$ are functions, while $\sin(x)$ is a real number.
-Are there any correct or best practices in this regard?
-
-REPLY [4 votes]: Strictly speaking, I would say that $f(x)$ is the value of the function $f$ evaluated at $x$. However, $\frac{x^3-2}{x+1}$ might be used as a function; it is probably because $x\mapsto\frac{x^3-2}{x+1}$ is harder to write and takes up more space.<|endoftext|>
-TITLE: Critical exponents and point-wise convergence
-QUESTION [6 upvotes]: A phase change is only possible in a physical system which obeys the laws of statistical mechanics if the infinite series for the partition function of that system converges non-uniformly (i.e. converges point-wise). This is because in order for a phase change to occur, the partition function must converge to two different continuous functions in different regions of the phase plane; and this can only happen if the number of terms in the partition function is infinite.
-It might be possible, using the analogy with phase changes, to make some general statements about partition functions which converge non-uniformly to two different continuous functions. This is interesting to me because point-wise convergence is a very weak condition, and the analogy with phase changes seems to suggest a path towards a deeper understanding of this form of convergence.
-One of the most interesting subjects in the statistical mechanics of phase transitions is the so-called "renormalization group" (which is not a group) and the critical exponents it predicts (these are non-integer numbers which are associated with a "critical point" in the phase plane). Another powerful result from renormalization group theory is the concept of universality, in which the thermodynamic variables of the system are symmetric across all length scales at a certain critical point.
Universality is directly related to the critical exponents and critical point behavior, but as is fairly standard for theoretical physics, the mathematics used is non-rigorous (I recommend the book "Scaling and Renormalization in Statistical Physics" by Cardy).
-For the point-wise convergence of a partition function, can we put the foundations of critical exponents on a rigorous mathematical basis, in the sense that we can somehow predict their values from properties (such as the partial derivatives) of the partition function?
-
-REPLY [2 votes]: Although this is not entirely what you ask, there are results showing that in general one cannot recover every property of the system from the partition function. A bit like the type of results on whether "one can hear the shape of a drum", but in this case unfortunately negative. Even more, it fails even for Markov systems, so unless further hypotheses are made, nothing of a general nature can be said about recovering information from the partition function.<|endoftext|>
-TITLE: Archimedean field $K$ has LUB property iff it's complete requires DC?
-QUESTION [6 upvotes]: The Setting
-Let $K$ be an Archimedean field. TFAE:
-
-$K$ has the least upper bound property.
-Every Cauchy sequence in $K$'s additive group converges.
-
-Now proving that 1 implies 2 is easy, but the other direction is slightly harder. Not that that's a problem. Rather the problem is that I can't see a route that doesn't invoke at least dependent choice at some point.
-Strategy 1
-Starting with a nonempty set $A$ that's bounded above, you could construct a monotonically non-decreasing Cauchy sequence of upper bounds that has the supremum as its limit. Here's a short sketch: Pick an upper bound $B_0$ of $A$. Pick an $a_0 \in A$. Recursively define
-$$ B_{i+1} = \begin{cases}
-\frac{B_i+a_i}{2}, & \text{ if that's an upper bound for } A \\
-B_i, & \text{ otherwise}
-\end{cases} $$
-and
-$$ a_{i+1} = \begin{cases}
-a_i, & \text{ if $\tfrac{a_i+B_i}{2}$ is an upper bound for $A$}\\
-\text{choose any } a \in A \text{ s.t. } \frac{a_i+B_i}{2} < a, & \text{ otherwise.}
-\end{cases} $$
-I can't see a way to get rid of the choice because you really want the $a_i$ to be in $A$ for the argument to go through.
-Strategy 2
-Okay, let's go the long way instead! First we show that $K$ complete implies $[a,b]$ compact. Then we show that that implies that closed and bounded subsets are compact ("Heine-Borel property"). And finally we show that not(Heine-Borel property) implies not(least upper bound property).
-But I already get stumped on the first part. Clearly it's easy to show that $K$ complete implies $[a,b]$ is sequentially compact. And from here it'd be nice to use that $K$ is 2nd countable (the intervals with rational endpoints are a basis) to get that $[a,b]$ is in fact compact. So you start with an open covering $U_\alpha$ of $[a,b]$. 2nd countable spaces are Lindelöf... wait... let's make sure and prove that. Let $\{B_i\}$ be a countable basis. Then for each $B_i$ you choose a $U_\alpha$... oh. Choice crept up again.
-So my question is this: Does this really require (an admittedly weak form of) choice? Or is there a way to do without?
-
-REPLY [5 votes]: It seems to me that you can modify strategy 1 as follows. Given a nonempty set $A$ that's bounded above, let $x_0$ be a non-upper-bound for $A$ and let $y_0$ be an upper bound for $A$.
Recursively define
-$$
-y_{n+1} \;=\; \begin{cases}(x_n+y_n)/2 & \text{if this is an upper bound for $A$} \\
- y_n & \text{otherwise.} \end{cases}
-$$
-and
-$$
-x_{n+1} \;=\; \begin{cases}(x_n+y_n)/2 & \text{if this is a non-upper-bound for $A$} \\
- x_n & \text{otherwise.} \end{cases}
-$$
-Note that $x_1 \leq x_2 \leq \cdots \leq y_2 \leq y_1$. Furthermore, each $y_n$ is an upper bound for $A$, and each $x_n$ is a non-upper-bound for $A$. It is easy to prove that both sequences are Cauchy sequences, and that they converge to the same limit $L$. We claim that $L$ is the least upper bound for $A$.
-Both directions are fairly easy. If $a \in A$, then $a \leq y_n$ for all $n$, which proves that $a \leq L$. Thus $L$ is in fact an upper bound for $A$. Next, if $u$ is any upper bound for $A$, then $u \geq x_n$ for all $n$, and therefore $u \geq L$. Thus $L$ is the least upper bound for $A$.<|endoftext|>
-TITLE: Why is it called Sylvester's Law of Inertia?
-QUESTION [26 upvotes]: By "Sylvester's Law of Inertia," I mean:
-http://en.wikipedia.org/wiki/Sylvester%27s_law_of_inertia
-How does the name "Law of Inertia" fit with the statement of the theorem? I guess it's from physics, but... I just don't see the connection.
-
-REPLY [25 votes]: The quote in Mariano's answer is from the introduction to Sylvester's paper. Typical of Sylvester's mathematical papers, he used so many nonstandard terms in that paper that he appended a five-page "Glossary of new or unusual Terms, or of Terms used in a new or unusual sense in the preceding Memoir". There he lists:
-
-Inertia. -- The unchangeable number of integers in the excess of positive over negative signs which adheres to a quadratic form expressed as the sum of positive and negative squares, notwithstanding any real linear transformations impressed upon such form.
-
-Sylvester did similarly for many mathematical terms, i.e. coined them or used them in new or unusual ways mathematically. You can find many such examples in Jeff Miller's Earliest Known Uses of Some of the Words of Mathematics, including: allotrious factor, anallagmatic, Bezoutiant, catalecticant, combinant, covariant, cumulant, cyclotomy, cyclotomic, dialytic, discriminant, Hessian, invariant, isomorphic, Jacobian, latent, law of inertia of quadratic forms, matrix, minor, nullity, plagiograph, quintic, Schur complement, sequence, syzygy, totient, tree, umbral calculus, umbral notation, universal algebra, x/y/z-coordinate, zero matrix, zetaic multiplication. Please see each entry for Sylvester's role - some are major, others are minor.
-Apparently Sylvester's penchant for colorfully naming mathematical objects arose from his love of language and poetry. Indeed, Karen Parshall wrote:
-
-Sylvester's love of poetry and language manifested itself in notable ways even in his mathematical writings. His mastery of French, German, Italian, and Greek was often reflected in the mathematical neologisms - like "meicatecticizant" and "tamisage" - for which he gained a certain notoriety. Moreover, literary allusions, poetic quotations, and unfettered hyperbole spiced his published papers and lectures.
-
-Sylvester himself wrote:
-
-Perhaps I may without immodesty lay claim to the appellation of Mathematical Adam, as I believe that I have given more names (passed into general circulation) of the creatures of mathematical reason than all the other mathematicians of the age combined. -- James Joseph Sylvester, Nature 37 (1888), p. 152.
- -You can find a short Sylvester biography here.<|endoftext|> -TITLE: Geometric interpretation of the multiplication of complex numbers? -QUESTION [21 upvotes]: I've always been taught that one way to look at complex numbers is as a Cartesian space, where the real part is the $x$ component and the imaginary part is the $y$ component. -In this sense, these complex numbers are like vectors, and they can be added geometrically like normal vectors can. -However, is there a geometric interpretation for the multiplication of two complex numbers? -I tried out two test ones, $3+i$ and $-2+3i$, which multiply to $-9+7i$. But no geometrical significance seems to be found. -Is there a geometric significance for the multiplication of complex numbers? - -REPLY [22 votes]: Suppose we multiply the complex numbers $z_1$ and $z_2$. If these numbers are written in the polar form as $r_1 e^{i \theta_1}$ and $r_2 e^{i \theta_2}$, the product will be $r_1 r_2 e^{i (\theta_1 + \theta_2)}$. Equivalently, we are stretching the first complex number $z_1$ by a factor equal to the magnitude of the second complex number $z_2$ and then rotating the stretched $z_1$ counter-clockwise by an angle $\theta_2$ to arrive at the product. There are several websites that expand upon this intuition with graphics and more explanation. See this site for example - http://www.suitcaseofdreams.net/Geometric_multiplication.htm - -REPLY [19 votes]: Add the angles and multiply the lengths. - -REPLY [5 votes]: Yes, there is a simple geometric meaning, but you need to convert to the polar form of the complex numbers to see it clearly. $3+i$ has magnitude $\sqrt{10}$ and angle about $18^\circ$; -$-2+3i$ has magnitude $\sqrt{13}$ and angle about $124^\circ$. -Multiplication of the complex numbers multiplies the two magnitudes, resulting in $\sqrt{130}$, -and adds the two angles, $142^\circ$. -In other words, you can view the second number as scaling and rotating the first (or the first scaling and rotating the second).<|endoftext|> -TITLE: Free groups in some classes? -QUESTION [10 upvotes]: I understand that the only free groups that are abelian are 1 and Z, hence a difference between 'free abelian groups' and 'abelian free groups'. -Can someone please tell me what are the solvable free groups? -How can I construct 'free solvable groups'? -What is known about free groups in other classes of groups like polycyclic, nilpotent ...? -Thanks. - -REPLY [8 votes]: There is no "free solvable group", but there are "free solvable groups of length $n$" for any $n$. There are also free nilpotent groups of class $n$, free Burnside groups of exponent $k$; all groups that are abelian-by-exponent $k$; and more. -More precisely: a variety of groups is a class of groups that is closed under taking isomorphisms, subgroups, quotients, and arbitrary direct products (that is, if $G\in\mathcal{C}$ and $K\cong G$, then $K\in\mathcal{C}$; if $G\in\mathcal{C}$ and $H\lt G$, then $H\in \mathcal{C}$; if $G\in\mathcal{C}$ and $N\triangleleft G$, then $G/N\in\mathcal{C}$; and if $\{G_i\}$ is an arbitrary family of groups with $G_i\in\mathcal{C}$ for every $i$, then $\prod G_i\in\mathcal{C}$). Examples of varieties include "all abelian groups", "all groups of solvability length at most $n$"; "all nilpotent groups of class at most $c$"; "all groups such that every element is of exponent $k$"; and more. 
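-(Equivalently, by a classical theorem of Birkhoff, a variety is exactly the class of all groups satisfying some fixed set of laws: the abelian groups are the groups satisfying the law $x^{-1}y^{-1}xy=1$, the groups of exponent dividing $k$ are those satisfying $x^k=1$, and so on.)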
-If $\mathcal{C}$ is a variety of groups, and $G$ is any group, then there is a smallest normal subgroup $\mathcal{C}(G)$ of $G$ such that $G/\mathcal{C}(G)\in\mathcal{C}$ (this normal subgroup is, in fact, not merely normal, but fully invariant). For example, when $\mathcal{C}$ is the class of all abelian groups, $\mathcal{C}(G)$ is none other than the commutator subgroup of $G$. Proof of the existence of $\mathcal{C}(G)$: Let $\{N_i\}_{i\in I}$ be the collection of all normal subgroups of $G$ such that $G/N_i\in\mathcal{C}$; the class is nonempty, since $G/G$ is always in any variety. Then consider the obvious map from $G$ to $\prod G/N_i$; the product must be in $\mathcal{C}$ (it is a product of groups in $\mathcal{C}$), so the image of $G$ is in $\mathcal{C}$; the kernel of this map is the group $\mathcal{C}(G)$. -So if $X$ is any set, and $F(X)$ is the free group on $X$, then there is a smallest subgroup $\mathcal{C}(F(X))$ such that $F(X)/\mathcal{C}(F(X))\in\mathcal{C}$. The resulting quotient can be seen to have the same universal property relative to groups in $\mathcal{C}$ as $F(X)$ does relative to all groups. So we say that this quotient is the "relatively free $\mathcal{C}$-group on $X$." We call the original group $F(X)$ the "absolutely free group on $X$" to distinguish it. -Added: Just in case, remember that if $\mathcal{D}$ is a category in which the objects are sets and the arrows are maps of sets, then given a set $X$, the free $\mathcal{D}$-object on $X$ is an object $F(X)$ in $\mathcal{D}$, together with a set-theoretic inclusion $i\colon X\to F(X)$, such that for every object $D\in\mathcal{D}$ and every set-theoretic map $f\colon X\to D$ there exists a unique $\mathcal{D}$-morphism $\varphi\colon F(X)\to D$ such that $f=\varphi\circ i$. For the "absolutely free groups", the category is the category of all groups; for "free abelian groups", the category is the category of abelian groups. For "relatively free $\mathcal{C}$-group", the category is the category of all groups in $\mathcal{C}$. -The reason there is no "free solvable group" is that the class of all solvable groups is not a variety: it is not closed under arbitrary direct products. But if you put a bound on the solvability length then you do get a variety, and you can construct the corresponding "free solvable group of length $n$". -In addition you can take the pro-$\mathcal{C}$ approach that Matt E mentions. -If you are interested in learning more about varieties, these lie at the intersection of Universal Algebra and Group Theory. 
I would recommend Hanna Neumann's book Varieties of Groups (a bit old, but still the standard reference); also George Bergman's Universal Algebra notes (available here).<|endoftext|>
-TITLE: mystery regarding power series of $\frac{1}{\sqrt{1+x^{x}}}$
-QUESTION [20 upvotes]: In the course of playing around with $\sum_{n=1}^{\infty} \frac{1}{\sqrt{1+n^{n}}}$, I used w|α to obtain the power series for $f(x)=\frac{1}{\sqrt{1+x^{x}}}$,
-which is
-\begin{align*}
-\frac{1}{\sqrt{1+x^{x}}} =& \frac{1}{\sqrt{2}} - \frac{x\log(x)}{4\sqrt{2}} -\frac{x^{2}\log^{2}(x)}{32\sqrt{2}}+ \frac{5x^{3}\log^{3}(x)}{384\sqrt{2}}\\
-&+ \frac{17x^{4}\log^{4}(x)}{6144\sqrt{2}} - \frac{121x^{5}\log^{5}(x)}{122880\sqrt{2}} - \frac{721x^{6}\log^{6}(x)}{2949120\sqrt{2}} \ldots
-\end{align*}
-Before I realized that I couldn't really use this to help me with the sum, I found that the denominators (ignoring the $\sqrt{2}$, because all of them have it in common) correspond to $4^{n}n!$. What is baffling is that the numerators appear to correspond to the coefficients in the exponential generating function for $f(x)=e^{\tanh^{-1}(\tan(x))}$ (I believe that the 7th entry should be 1369 and not 6845), and I'm curious what the explanation is, because $f(x)=e^{\tanh^{-1}(\tan(x))}$ is a mighty weird looking function.
-
-REPLY [3 votes]: It's simple: $\rm\quad\quad f(x)\ =\ \cos x - \sin x \ =\ \frac{1+i}2 e^{ix} + \frac{1-i}2 e^{-ix}$
-$\rm\quad\displaystyle \Rightarrow\quad\quad\quad\quad\ f(\tan^{-1} x)\ =\ \frac{1-x}{\sqrt{x^2+1}}\quad $ via $\rm\displaystyle\quad e^{\:i\:\tan^{-1} x}\ =\ \frac{1+ x\: i}{\sqrt{x^2 + 1}}$
-$\rm\quad\displaystyle \Rightarrow\quad f(\tan^{-1}\tanh x)\ =\ \ \frac{\sqrt 2}{\sqrt{e^{4x}+1}}\quad\ $ via $\rm\displaystyle\quad\ \tanh{x}\ =\ 1 - \frac{2}{e^{2x}+1}$<|endoftext|>
-TITLE: bézier to f(x) polynomial function
-QUESTION [6 upvotes]: I've got a 2D quadratic Bézier curve which, by construction, is an $f(x)$ function: no loops, a single solution for each defined $x$.
-Is there a common method to convert this curve to a 3rd degree polynomial? 3 should be enough since there can only be two "bumps".
-Thanks!
-
-REPLY [2 votes]: I assume you want a polynomial function $y=f(x)$ that has the same shape as your original parametric cubic Bezier curve.
-Well, you already have $y$ as a function of $x$, but the function is unpleasant. For a given value of $x$, there is no closed form formula for the corresponding $y$ -- you have to find $y$ numerically (by intersecting a vertical line with the given Bezier curve).
-Let's denote this unpleasant function by $y=g(x)$. So, now the problem is just to approximate $g$ by a polynomial $f$. There are lots of standard ways to do this, depending on how you want to measure the approximation error and what extra conditions you want $f$ to satisfy. You say you want exact matching of end points and end tangents.
-So, you choose a degree $m$ for $f$. As you say, $m$ will have to be bigger than 4. Then $f$ will have $m+1$ coefficients, which you can determine by solving a system of $m+1$ linear equations. You get 4 equations from the end-point constraints. Pick another $m-3$ points $(x_i,y_i)$ in the interior of the curve, and, for each $i$, write down the equation that expresses the requirement that $y_i = f(x_i)$. Solve the system of linear equations.
-One subtlety is that the extra $m-3$ points have to be chosen fairly carefully. You can't just distribute them uniformly, or else $f$ might wiggle badly.
They have to be more dense towards the ends of the curve, and less dense in the middle. The "Chebyshev nodes" are a good choice.
-This process is called Hermite interpolation if you use derivative information, and Lagrange interpolation if you don't. Both are covered in Wikipedia articles.<|endoftext|>
-TITLE: Finding an angle within an 80-80-20 isosceles triangle
-QUESTION [27 upvotes]: The following is a geometry puzzle from a math school book. Even though it has been a long time since I finished school, I remember this puzzle quite well, and I don't have a nice solution to it.
-So here is the puzzle:
-
-The triangle $ABC$ is known to be isosceles, that is, $AC=BC$. The labelled angles are known to be $\alpha=\gamma=20°$, $\beta=30°$. The task is to find the angle labelled "?".
-The only solution that I know of is to use the sine formula and cosine formula several times. From this one can obtain a numerical solution. Moreover this number can be algebraically shown to be correct (all sines and cosines are contained in the real subfield of the 36th cyclotomic field). So in this sense I solved the problem, but the solution is kind of a brute force attack (for example, some of the polynomials that show up in the computation have coefficients > 1000000). Since the puzzle originates from a book that deals only with elementary geometry (and not even trigonometry if I remember correctly) there has to be a more elegant solution.
-
-REPLY [2 votes]: Any other solutions (advice) are welcome.<|endoftext|>
-TITLE: How do I factorize a polynomial over a Galois field?
-QUESTION [5 upvotes]: How does the factoring of polynomials over Galois fields work? I cannot seem to understand the basic concept.
-For example: How do I factorize $x^6 - 1$ over $\operatorname{GF}(3)$? I know that the result is $(x+1)^3 (x+2)^3$, but I'm unable to compute it myself.
-I've studied the articles on Wikipedia:
-
-http://en.wikipedia.org/wiki/Galois_field
-http://en.wikipedia.org/wiki/Factorization_of_polynomials
-
-but I found it very difficult to understand. Is there some algorithm that would help me factorize polynomials like $x^n - 1$ over $\operatorname{GF}(k)$?
-
-REPLY [6 votes]: If $n=pm$ is a multiple of $p$ then over $GF(p)$ one has
-$$x^n-1=(x^m-1)^p$$
-so the problem reduces swiftly to the case where $n$ is coprime to $p$.
-If $p$ is not a factor of $n$ then over an algebraic closure of
-$GF(p)$
-$$x^n-1=\prod_{k=0}^{n-1}(x-\zeta^k)$$
-where $\zeta$ is a primitive $n$-th root of unity.
-One makes this into a factorization over $GF(p)$ by combining conjugate
-factors together. For each $k$, the polynomial
-$$(x-\zeta^k)(x-\zeta^{pk})(x-\zeta^{p^2k})\cdots(x-\zeta^{p^{r-1}k})$$
-has coefficients in, and is irreducible over, $GF(p)$ where $r$
-is the least positive integer with $p^r k\equiv k$ (mod $n$).
-Using this, it's easy to work out the degrees of the irreducible
-factors of $x^n-1$, but to find the factors themselves needs a bit more
-work, using for instance Berlekamp's algorithm.
-
-REPLY [4 votes]: Factoring the polynomials $x^n - 1$ is very different from factoring general polynomials, since you already know what the roots are; they are, in a suitably large finite extension, precisely the elements of order dividing $n$. Since you know that the multiplicative group of a finite field is cyclic, the conclusion follows from here.<|endoftext|>
-TITLE: Areas versus volumes of revolution: why does the area require approximation by a cone?
-QUESTION [55 upvotes]: Suppose we rotate the graph of $y = f(x)$ about the $x$-axis from $a$ to $b$. Then (using the disk method) the volume is $$\int_a^b \pi f(x)^2 dx$$ since we approximate a little piece as a cylinder. However, if we want to find the surface area, then we approximate it as part of a cone and the formula is $$\int_a^b 2\pi f(x)\sqrt{1+f'(x)^2} dx.$$ But if approximated it by a circle with thickness $dx$ we would get $$\int_a^b 2\pi f(x) dx.$$ -So my question is how come for volume we can make the cruder approximation of a disk but for surface area we can't. - -REPLY [26 votes]: The problem here is that volume behaves nicely under small deformations of 3D regions in 3D, but surface area does not. Similarly, area behaves nicely under small deformations of 2D regions in 2D, but circumference / arc length does not. You can see the essence of the 2D problem, hence the essence of the 3D problem, in the following: the length of the diagonal from $(0, 0)$ to $(1, 1)$ is $\sqrt{2}$, but if we approximate the diagonal by a "staircase" of horizontal and vertical lines of length $\frac{1}{n}$ and let $n \to \infty$ we get a length of $2$ instead.<|endoftext|> -TITLE: Is the scalar curvature the only isometric invariant of a Riemannian 2-manifold? -QUESTION [11 upvotes]: Given two Riemannian Manifolds of dimension 2, and a point on each. If the scalar curvatures are isomorphic (as functions) in some neighbourhoods of these points, are then the manifolds necessarily locally isometric? - -REPLY [16 votes]: The answer is 'no' for simple reasons: If the two Riemannian surfaces $(M_1,g_1)$ and $(M_2,g_2)$ have Gauss curvatures $K_1:M_1\to\mathbb{R}$ and $K_2:M_2\to\mathbb{R}$ respectively and there happen to be points $p_i\in M_i$ such that $K_1(p_1) = K_2(p_2)$ and such that $dK_i$ is nonvanishing at $p_i$, then there will always be $p_i$-neighborhoods $U_i\subset M_i$ and a local diffeomorphism $\phi:U_1\to U_2$ with $\phi(p_1)=p_2$ such that $K_1 = K_2\circ\phi$ on $U_1$. This is just basic calculus. Of course, there is no reason for $\phi$ to be a local isometry. -However, now set $L_i = |dK_i|^2_{g_i}$ and suppose, in addition, that $L_1(p_1)=L_2(p_2)$ and that $dK_i\wedge dL_i$ is nonvanishing in a neighborhood of $p_i$ for $i=1,2$. Then there will be $p_i$-neighborhoods $U_i\subset M_i$ and a unique local diffeomorphism $\phi:U_1\to U_2$ with $\phi(p_1)=p_2$ such that $K_1 = K_2\circ\phi$ and $L_1 = L_2\circ\phi$ on $U_1$. Using $\phi$, you can now compare $g_1$ with $\phi^*(g_2)$. If these are equal on $U_1$, then the two metrics are (obviously) locally isometric. If these are not equal, then there is no local isometry between the two metrics that carries $p_1$ to $p_2$. -If you happen to have the bad luck that $dK_i\wedge dL_i$ vanishes identically on a neighborhood of $p_i$ (while, still $dK_i$ is nonvanishing), then, locally, one can write $L_i = f_i\circ K_i$ for some (essentially unique) functions $f_i$, and these will have to be equal in a neighborhood of $K_i(p_i)$ or there is no isometry. However, even if this does hold, there might still be no isometry, you have to go to higher order. 
-For example, if $K_1(p_1) = K_2(p_2)$, each $dK_i$ is nonvanishing on a neighborhood of $p_i$, and there exists a function $f$ on a neighborhood of $K_1(p_1)$ such that $L_i = f\circ K_i$ on some neighborhood of $p_i$, then you might consider the function $J_i = \Delta K_i$ and hope that this is independent of $K_i$, and, if so, ask whether the map $\phi$ that satisfies $(K_1,J_1) = (K_2\circ\phi, J_2\circ\phi)$ is an isometry. Etc.
-The point is, though, that, after a finite number of tests of this kind (assuming that some mild nondegeneracy conditions hold), you will be able to completely determine whether the metrics are locally isometric in neighborhoods of the points $p_i$.<|endoftext|>
-TITLE: Schwarzian Derivative and One-Dimensional Dynamics - how are they connected?
-QUESTION [7 upvotes]: During the summer, I did an REU where we focused primarily on one-dimensional dynamics and more specifically kneading theory. One thing that I was always confused about is why the Schwarzian derivatives always seem to pop up in discussions of iterated dynamics on the real line. I understand what a Schwarzian derivative is, but I don't see any intuitive reason that it should show up in this area.
-I was wondering if anyone could explain or provide me with a reference that makes the appearance of Schwarzian derivatives in one-dimensional dynamics on the real line seem natural.
-Another question I have: is there an intuitive motivation for the Schwarzian derivative itself?
-
-REPLY [3 votes]: Here is another theorem, due to Singer, relating the Schwarzian derivative and dynamical systems.
-Let $I$ be a closed interval and $f:I \to I$ be of class $C^3$ with $S(f)(x)<0$ for all $x \in I$, where $S(f)(x)$ denotes the Schwarzian derivative. If $f$ has $n$ critical points, then $f$ has at most $n+2$ attracting periodic orbits.
-This is the full version of the theorem. I hope it is useful for you. Regards.
-
-Edit:
-The Schwarzian derivative was introduced into real dynamics by Singer in
-David Singer,
-Stable Orbits and Bifurcation of Maps of the Interval,
-SIAM Journal on Applied Mathematics
-Vol. 35, No. 2 (Sep., 1978), pp. 260–267.<|endoftext|>
-TITLE: How badly can Krull's Hauptidealsatz fail for non-Noetherian rings?
-QUESTION [14 upvotes]: Krull's Hauptidealsatz (principal ideal theorem) says that for a Noetherian ring $R$ and any $r\in R$ which is not a unit or zero-divisor, all primes minimal over $(r)$ are of height 1. How badly can this fail if $R$ is a non-Noetherian ring? For example, if $R$ is non-Noetherian, is it possible for there to be a minimal prime over $(r)$ of infinite height?
-EDIT: The answer is yes. See https://mathoverflow.net/questions/42510/how-badly-can-krulls-hauptidealsatz-fail-for-non-noetherian-rings
-
-REPLY [5 votes]: Valuation rings demonstrate quite clearly the failure of Krull's principal ideal theorem:
-take a valuation ring $O$ of finite dimension. The prime ideals then form a chain
-$p_0:=0\subset p_1\subset\ldots\subset p_d$
-so that for every $i\in\{1,\ldots ,d\}$ there exists $r_i\in p_i\setminus p_{i-1}$. Obviously $p_i$ is a minimal prime over $r_iO$.
-For valuation domains of infinite dimension one has to consider the so-called limit-primes: a prime ideal $p$ of a commutative ring $R$ is called a limit-prime if
-$p=\bigcup\limits_{q\in\mathrm{Spec} (R): q\subset p}q$.
-There exist valuation domains $O$ of infinite Krull dimension such that the maximal ideal $m$ of $O$ is not a limit-prime.
For example take a valuation ring such that the corresponding value group is
-$\mathbb{Z}\times\mathbb{Z}\times\ldots$ (countably many factors ordered lexicographically).
-Then one can find $r\in m$ such that $m$ is minimal over $rO$.<|endoftext|>
-TITLE: Expectation of squared time-scaled Brownian process
-QUESTION [5 upvotes]: According to an article I'm studying ("Time series, self-similarity
-and network traffic" by Mark Crovella) the expectation of the square of
-a time-scaled Brownian motion process $E[ B(ct)^2 ]$ where $c$ is the time
-scaling is equal to $ct$.
-I'd appreciate help proving this; i.e.
-$E[ B(ct)^2 ] = c t$
-
-REPLY [7 votes]: This follows from the fact that $B(t)$ has a Gaussian$(0,t)$ distribution. Therefore, $B(ct)$ has a Gaussian$(0,ct)$ distribution. Thus $E[B(ct)^2] = Var[B(ct)] + (E[B(ct)])^2 = ct + 0 = ct$.<|endoftext|>
-TITLE: An inequality by Hardy
-QUESTION [7 upvotes]: Young's inequality for convolutions states that if $1 \leq p, q, r \leq \infty$ satisfy
-$$\frac{1}{q} + 1 = \frac{1}{p} + \frac{1}{r}$$
-then for all $f \in L^p(G)$ and all $g \in L^r(G)$, where $g$ and $g'$ have the same $L^r$-norm and $g'(x) = g(x^{-1})$ ($G$ is a topological group), we have that:
-$$\|f * g\|_q \leq \|g\|_r \|f\|_p.$$
-Now Grafakos claims we can use this to prove the following inequality due to Hardy:
-$$\left ( \int_0^\infty \left ( \frac{1}{x} \int_0^x |f(t)| \, dt \right )^p \, dx \right )^{1/p} \leq \frac{p}{p - 1} \|f\|_{L^p(0, \infty)}$$
-The hint is to consider on the multiplicative group $(\mathbb{R}^+, \frac{dt}{t})$ the convolution of $|f(x)| x^{1/p}$ and ${x^{-1/p'}} 1_{[1, \infty)}$. So if we use this, the RHS is no problem, it is just a direct computation (I can add it if someone wants it for future reference). However, if I compute the convolution I get:
-$$\int_0^{x - 1} |f(t)| (y(t - y))^{1/p'} \, dt$$ but I don't see how this is larger (or equal) to the inner integral on the LHS of the inequality. Any suggestions?
-Edit: As Willie Wong points out below, the convolution is wrong. It is a multiplicative group, not an additive one.
-
-REPLY [9 votes]: You are doing the convolution wrong. On the multiplicative group $(\mathbb{R}_+, dt/t)$, the convolution is
-$$f * g(x) = \int_0^\infty f(y) g(x/y) dy/y$$
-(for harmonic analysis on an abelian group, you need to re-interpret the $+$ and $-$ signs in formulae to be the group binary operator). If you plug in, as $g$, the weight function Grafakos suggested, you should get exactly the LHS.
-Just to be more general: let $(G,\mu)$ be an Abelian group with an invariant measure $\mu$, where $\cdot$ denotes the group binary operator; then the convolution of two functions $f,g: G\to \mathbb{R}$ is defined as the function $G\to \mathbb{R}$
-$$ f*g(y) = \int_G f(x) g(y\cdot x^{-1})\mu(dx) $$<|endoftext|>
-TITLE: Integral involving translates of $\{x\}$.
-QUESTION [6 upvotes]: We have two functions:
-$F(x)$ and $G(x)$
-Suppose the improper integral has the value 1.
-$$\int_{1}^{\infty} F(x) G(x) \mathrm{d}x = 1$$
-Can we find the value of the integral, where $c$ is a constant?
-$$\int_{1}^{\infty} F(x+c) G(x) \mathrm{d}x$$
-I am interested in generalized answers and not specific examples.
-Or, if any links/books' info are given where I can read similar problems, that would be highly appreciated.
-EDIT: Due to the highly generalized nature of this question, I am giving the examples I am working upon.
-$$F(x) = \{x\}$$
-where $\{x\}$ denotes the fractional part of $x$, and
-$$G(x) = \frac{\sin(x\log(x))}{x^{h+1}}$$
-where the constants satisfy $0< h < 1$ and $c = 1/2$.
-
-REPLY [2 votes]: Now that you have updated your question with the fact that $F(x) = \{x\}$ (the standard notation for the fractional part of $x$), there is quite a bit more that can be said. In particular, you can give an interpretation to $\int_1^{\infty} \{x + c\} G(x) dx$. Let $0 < c < 1$.
-Then
-$$\{x + c\} = \begin{cases} \{x\} + c, &0 \leq \{x\} < 1 - c; \\
-\{x\} + c - 1, &1-c \leq \{x\} < 1. \end{cases}$$
-We have
-$$\begin{align}
-&\int_1^{\infty} \{x + c\} G(x) dx = \int_{1 \leq x < \infty, 0 \leq \{x\} < 1 - c} (\{x \} + c) G(x) dx + \int_{1 \leq x < \infty, 1-c \leq \{x\} < 1 } (\{x \} + c - 1) G(x) dx \\
- &= \int_1^{\infty} \{x \} G(x) dx + c \int_{1 \leq x < \infty, 0 \leq \{x\} < 1 - c} G(x) dx + (c-1) \int_{1 \leq x < \infty, 1-c \leq \{x\} < 1 } G(x) dx \\
-&= 1 + c \int_{1 \leq x < \infty, 0 \leq \{x\} < 1 - c} G(x) dx - (1-c) \int_{1 \leq x < \infty, 1-c \leq \{x\} < 1 } G(x) dx.
-\end{align}$$
-The two remaining integrals constitute an average of sorts, weighted to account for the fact that they are being taken over different percentages of the interval $[1,\infty)$. The first integral gets weighted by $c$ but includes $1-c$ of the interval $[1,\infty)$, as it is being taken over the set $\cup_{i=1}^{\infty} [i,i+1-c)$. (Remember that $c$ is a fraction between $0$ and $1$.) The second integral gets weighted by $1-c$ but includes $c$ of the interval $[1,\infty)$, as it is being taken over the set $\cup_{i=2}^{\infty} [i-c,i)$. So $\int_1^{\infty} \{x + c\} G(x) dx$ just shifts the weights on the values of $G(x)$ in $\int_1^{\infty} \{x \} G(x) dx$ in the manner I just described. The resulting value for $\int_1^{\infty} \{x + c\} G(x) dx$ will be either greater or smaller than $1$, depending on whether the larger values of $G(x)$ over $[1, \infty)$ tend to clump just above each integer value of $x$ or just below.
-Other than this, I think Willie Wong's answer still applies. In particular, you still can't get an exact answer for $\int_1^{\infty} \{x + c\} G(x) dx$ -- just an interpretation of it.
-
-You also asked for references for problems similar to yours. One such is the convolution of two functions $f$ and $g$, one form of which is
-$$(f*g)(t) = \int_{-\infty}^{\infty} f(\tau) g(t - \tau) d \tau.$$
-Convolutions have lots of interesting properties and interpretations. See MathWorld's article on convolutions for more information.<|endoftext|>
-TITLE: Prove that the sequence $c_1 = 1$, $c_{n+1} = 4/(1 + 5c_n)$, $n \geq 1$ is convergent and find its limit
-QUESTION [11 upvotes]: Prove that the sequence $c_{1} = 1$, $c_{n+1}= 4/(1 + 5c_{n})$, $n \geq 1$ is convergent and find its limit.
-Ok so up to now I've worked out a couple of things.
-$c_1 = 1$
-$c_2 = 2/3$
-$c_3 = 12/13$
-$c_4 = 52/73$
-So the odd $c_n$ are decreasing and the even $c_n$ are increasing. Intuitively, it's clear that the two sequences for odd and even $c_n$ are decreasing/increasing less and less.
-Therefore it seems like the sequence may converge to some limit $L$.
-If the sequence has a limit, let $L=\underset{n\rightarrow \infty }{\lim }c_{n}.$ Then $L = 4/(1+5L).$
-Solving this yields $L = 4/5$ and $L = -1$. But since the even sequence is increasing and >0, then $L$ must be $4/5$.
-Ok, here I am stuck.
I'm not sure how to go ahead and show that the sequence converges to this limit (I tried using the definition of the limit but I didn't manage), and I'm not sure how I would go about showing the limits of the separate sequences.
-A few notes:
-I am in 2nd year calculus.
-This is a bonus question, but I enjoy the challenge and would love the extra marks.
-Note: Once again I apologize I don't know how to use the HTML code to make it nice.
-
-REPLY [2 votes]: As shown by the other answers, there are a few nice ways to approach this problem.
-You could concentrate only on $C_{2n-1},$ say, since if you establish that $C_{2n-1}$ tends to a limit you automatically nail $C_{2n}$ as well, because
-$$C_{2n} = \frac{4}{1+5C_{2n-1}} \textrm { so } \lim C_{2n} = \lim \frac{4}{1+5C_{2n-1}}.$$
-Now it's easy to show $C_{2n-1} > 4/5$ and so expand
-$$(5C_{2n-1}-4)(C_{2n-1}+1)>0$$
-and manipulate (add $20C_{2n-1}$ to both sides and take the 4 to the RHS) to obtain
-$$ C_{2n-1} > \frac{4+20C_{2n-1}}{21+5C_{2n-1}} = C_{2n+1},$$
-and since $C_{2n-1}$ is bounded below by 4/5 the result follows immediately.<|endoftext|>
-TITLE: Conditional and Total Variance
-QUESTION [10 upvotes]: Why does $ \text{Var}(Y) = E(\text{Var}(Y|X))+ \text{Var}(E(Y|X))$? What is the intuitive explanation for this? In layman's terms it seems to say that the variance of $Y$ equals the expected value of the conditional variance plus the variance of the conditional expectation.
-
-REPLY [3 votes]: Geometrically it's just the Pythagorean theorem. We may measure the "length" of random variables by standard deviation.
-We start with a random variable Y. E(Y|X) is the projection of this Y onto the set of random variables which may be expressed as a deterministic function of X.
-We have a hypotenuse Y with squared length Var(Y).
-The first leg is E(Y|X) with squared length Var(E(Y|X)).
-The second leg is Y-E(Y|X) with squared length Var(Y-E(Y|X))=...=E(Var(Y|X)).<|endoftext|>
-TITLE: How to assess a non-natively English speaking high-schooler's mathematical ability?
-QUESTION [10 upvotes]: I'm a math PhD who has been asked to interview a high school student and determine what he/she is interested in and how strong the student is.
-Usually I would want them to talk as much as possible about what classes they've taken and what parts of the classes they like the best. Generally I'd like to allow them to get comfortable talking about what they like rather than what they think I'd like. However, that might not work this time as the student is a non-native English speaker and may get less comfortable as I try to get him/her to talk more.
-Thus far I'm thinking about watching how he/she solves different problems in different areas of math, but I would like to know if anyone has done this before or has any suggestions for evaluating a student who might not be comfortable talking much.
-
-REPLY [2 votes]: Seeing textbooks or other material, in any language, from which this person has learned mathematics, would be useful and relatively easy to interpret for an Anglophone mathematician. If these are not available then equivalents may exist on the internet. Pointing to material in books in English and seeing if it is familiar may also reveal something.
-At the high end, the International Mathematical Olympiad problems are available online in dozens of languages, probably including that of the student's native country.
This could at least tell you whether the student has heard of some concept used in a problem, and you would have a translation into English so you know you are discussing the same thing.<|endoftext|> -TITLE: Transfinite Induction and the Axiom of Choice -QUESTION [25 upvotes]: My question is essentially this: Why does the principle of transfinite induction not suffice to show the axiom of choice when the sets to be chosen from are indexed by a well ordered set? -I have read that one can prove the axiom of finite choice from simple induction. You induct on the size of the system of sets you are choosing from and pick an element from each set. I understand this. However, my grasp of the details is sketchy. -1) Why does standard induction alone not suffice to show the axiom of choice for systems of countable sets? Doesn't induction show the truth of the statement for all natural numbers, and therefore for any system of sets that can be indexed by the natural numbers (countable sets)? I know this to be false, but I do not know why. -2) Why can't the above "proof" that induction implies the AoC for countable sets not be repaired by using transfinite induction? Isn't this the purpose of transfinite induction, to allow one to induct on sets of infinite size? Shouldn't transfinite induction suffice to prove the axiom of choice for any system of sets indexed by a well-ordered set? -I am reading Jech right now, but my knowledge of ordinals and transfinite induction is very, very poor, so I would greatly prefer answers with a great amount of explanation and hand-holding. - -REPLY [12 votes]: 1) Why does standard induction alone not suffice to show the axiom of choice for systems of countable sets? Doesn't induction show the truth of the statement for all natural numbers, and therefore for any system of sets that can be indexed by the natural numbers (countable sets)? I know this to be false, but I do not know why. - -As Jason DeVito points out in the comments to his answer, just because something holds for all natural numbers $n$ doesn't mean that it holds for the set $\mathbb{N}$ of natural numbers itself: For a trivial example, every natural number $n$ is finite, yet $\mathbb{N}$ itself is infinite. - -2) Why can't the above "proof" that induction implies the AoC for countable sets not be repaired by using transfinite induction? Isn't this the purpose of transfinite induction, to allow one to induct on sets of infinite size? Shouldn't transfinite induction suffice to prove the axiom of choice for any system of sets indexed by a well-ordered set? - -A proof by ordinary induction has two parts: a base case and a successor step. A proof by transfinite induction has three parts: a base case, a successor step, and a limit step. The limit step says that for an infinite stage (technically, an ordinal number) $\lambda$, if the property $P$ holds at every stage $\alpha < \lambda$, then it holds at stage $\lambda$. For many properties $P$, the base case and successor step hold but the limit step fails. To revisit the trivial example from above, let's try to prove that every set is finite. Let $P(\alpha)$ be the statement "every set of size $\alpha$ is finite," and let us try to prove that $P(\alpha)$ holds for all $\alpha$ by transfinite induction. In this example the base case and successor step hold trivially, but the induction will fail at the very first limit step. 
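-(Concretely, the failure happens at the first limit stage $\omega$: the inductive hypothesis tells us that every set of size $n$ is finite for each $n<\omega$, but this gives no way to conclude $P(\omega)$, and indeed $P(\omega)$ is false, since $\mathbb{N}$ itself is a set of size $\omega$.)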
-For the less trivial example of the axiom of choice, let $P(\alpha)$ be the statement "every sequence of sets of length $\alpha$ has a choice function." As suggested in the question, the base case and induction step are trivial. But the limit step fails again: Let $(A_i : i \in \mathbb{N})$ be an infinite sequence of sets and suppose by the induction hypothesis that for every finite $n$, the proper initial segment $(A_i : i < n)$ of the sequence has a choice function.
-The natural attempt to define a choice function $f$ for $(A_i : i \in \mathbb{N})$ would be to take choice functions $f_n$ for $(A_i : i < n)$ and combine them in some way. But first we would have to choose a choice function $f_n$ for each finite $n$, which is just as hard as the original problem! So transfinite induction doesn't help us at all.<|endoftext|>
-TITLE: How to find the sum of this cos series
-QUESTION [5 upvotes]: $$S = \sum_{k=1}^{\infty} \frac{\cos(\theta\log(k))}{k^a}$$
-How do I go about finding the value of S, given that $\theta \to \infty$ and $0 < a < 1$?
-
-Any special techniques that might be helpful in calculating this sum?
-
-EDIT:
-Just to give some background,
-I was actually trying to figure out
-$$\sum_{k=1}^{\infty} \frac{\cos(\theta\log(k))}{k^a} - \sum_{k=1}^{\infty} \frac{\cos(\theta\log(k + 0.5))}{(k+0.5)^a}$$
-Since that expression was a bit complicated, I decided to write the common version...
-
-REPLY [4 votes]: Your value is basically the value of the Riemann zeta function:
-$$
-S = \sum_{k=1}^{\infty} \frac{Re (e^{i \theta \log(k)})}{k^a} = Re( \sum_{k=1}^{\infty} k^{i \theta - a} ) = Re( \zeta ( a - i \theta ))
-$$
-You want to evaluate this on the critical strip $0 < a < 1$.
-The good news is that there is an enormous amount of literature on the Riemann zeta. The bad news is that this function is nasty on the critical strip.<|endoftext|>
-TITLE: Polynomial fitting - how to fit and what is _polynomial fitting_
-QUESTION [5 upvotes]: I don't understand what polynomial fitting is.
-Can anyone explain to me how to fit a curve to given points?
-
-REPLY [19 votes]: There are two concepts frequently conflated: interpolation and fitting. I'll discuss both since you don't seem to know what you really want.
-Let's say you have a set of $n$ points
-$$(x_i,y_i)\qquad i=1\dots n$$
-Interpolation is the problem of finding the (unique) polynomial $p(x)$ that passes through all your given points (under the assumption that no two points have the same abscissa), i.e. $p(x_i)=y_i \qquad i=1\dots n$. Your resulting polynomial is of degree at most $n-1$. The usual techniques for finding the interpolating polynomial are the methods of Lagrange, Newton, and Neville-Aitken.
-Fitting on the other hand assumes your data is contaminated with error, and you want the polynomial that is the "best approximation" to your data. Here polynomial interpolation does not make much sense since you do not want your function to be reproducing the inherent errors in your data as well. Least-squares is a common technique: it finds the polynomial $f(x)$ such that the quantity
-$$\sum_{i=1}^{n}\left(f(x_i)-y_i\right)^2$$
-which measures the departure of your polynomial from the ordinates is minimized (here the assumption is that your abscissas are error-free, and the error in your ordinates is normally distributed). The degree of $f(x)$ can be (and is often) less than $n$. 
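-For a concrete contrast between the two concepts, here is a small sketch in Python with NumPy (an editorial illustration, not part of the original answer; the "true" quadratic and the noise level are assumptions made for the demonstration):
-
-import numpy as np
-
-rng = np.random.default_rng(0)
-x = np.linspace(0.0, 1.0, 20)
-# Noisy samples of an assumed underlying quadratic 1 + 2x - 3x^2:
-y = 1 + 2*x - 3*x**2 + rng.normal(scale=0.05, size=x.size)
-
-# Least-squares fitting: a degree-2 polynomial (degree much less than n = 20)
-# smooths the noise rather than reproducing it.
-coeffs = np.polyfit(x, y, deg=2)          # highest power first
-print(coeffs)                             # approximately [-3, 2, 1]
-
-# Interpolation, for comparison: through 4 of the points, a degree-3 polynomial
-# passes exactly -- noise included.
-p = np.polyfit(x[::5], y[::5], deg=3)
-print(np.polyval(p, x[::5]) - y[::5])     # ~0 up to rounding: it reproduces the data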
A number of techniques for this are used as well: there are the normal equations, and then there are special matrix decompositions that can be used to efficiently solve this problem.<|endoftext|>
-TITLE: A numerical optimization problem with a convolution in the constraint
-QUESTION [9 upvotes]: I have a problem of the following form:
-minimize $\|Dx\|_2$
-subject to $\|x*x\|_2 = 1$
-where $x\in\mathbb R^n$, $D$ is a given diagonal matrix of positive entries, and $*$ represents convolution, i.e., $(x*x)_n = \sum \limits_{i+j=n}x_ix_j$ and $x*x\in\mathbb R^{2n-1}$.
-What approach could be used in dealing with this problem numerically? Could this problem be converted to one of the known problem classes that have available solvers?
-
-REPLY [2 votes]: Square the cost function and solve the equivalent problem using SOCP algorithms. And you can lose the convolution by using the DFT matrix and Parseval's theorem:
-$$
-\|x * x\|_2 = 1 \Rightarrow (Ax)^T (Ax) = x^T A^T A x = 1
-$$
-where $A$ is the DFT matrix.<|endoftext|>
-TITLE: Generators and Relations for $A_4$
-QUESTION [7 upvotes]: Let $G=\langle x,y\mid x^2=y^3=(xy)^3=1\rangle$.
-I would like to show that $G$ is isomorphic to $A_4.$
-Let $f:\mathbf{F}_{2} \to A_4$ be the surjective homomorphism from the free group on two elements mapping $x \mapsto (12)(34)$ and $y \mapsto (123)$. I'm not sure how to show that these elements generate the kernel of $f$. If they do generate the kernel, how do I conclude that the order of $G$ is $12?$
-Once I have that the group is of order 12, then I can show that $G$ contains $V$ (the Klein four group) as a subgroup, or that $A_4$ is generated by the image of $x$ and $y$.
-
-REPLY [4 votes]: Perhaps this answer will use too much technology. Still, I think it's pretty.
-Consider $A_4$ as the group of orientation-preserving symmetries of a tetrahedron $S$. The quotient $X=S/A_4$ is a 2-dimensional orbifold. Let's try to analyse it.
-Two-dimensional orbifolds have three different sorts of singularities that set them apart from surfaces: cone points, reflector lines and corners where reflector lines meet. Because $A_4$ acts preserving orientation, all the singularities of $X$ are cone points, and we can write them down: they're precisely the images of the points of $S$ that are fixed by non-trivial elements of $A_4$, and to give $X$ its orbifold structure you just have to label them with their stabilisers.
-So what are these points? There are the vertices of $S$, which are fixed by a rotation of order 3; there are the midpoints of the edges of $S$, which are fixed by a rotation of order 2; and finally, the midpoints of faces, which are fixed by a rotation of order 3.
-A fundamental domain for the action of $A_4$ is given by a third of one of the faces, and if you're careful about which sides get identified you can check that $X$ is a sphere with three cone points, one labelled with the cyclic group $C_2$ and the other two labelled with the cyclic group $C_3$.
-Finally, we can compute a presentation for $A_4$ by thinking of it as the orbifold fundamental group of $X$ and applying van Kampen's Theorem. This works just as well for orbifolds, as long as you remember to consider each cone point as a space with fundamental group equal to its label.
-The complement of the cone points is a 3-punctured sphere, whose fundamental group is free on $x,y$. The boundary loops correspond to the elements $x$, $y$ and $xy$. 
Next, we take account of each cone point labelled $C_n$ by inserting a relation that makes the $n$th power of the appropriate boundary loop equal to $1$. So we get the presentation
-$\langle x,y\mid x^2=y^3=(xy)^3=1\rangle$
-as required.<|endoftext|>
-TITLE: Median of distinct numbers
-QUESTION [13 upvotes]: What is the least number of comparisons we need to find the median of 6 distinct numbers?
-I am able to find the answer to the median of 5 distinct numbers to be 6 comparisons, and it makes sense, however in the case of 6 numbers I can't find an answer.
-The best I was able to do by hand was 9 comparisons. Can that be minimized further?
-Edit: In this case we take the median to mean the lower median.
-
-REPLY [6 votes]: It looks like the answer is 8.
-My Knuth Volume Three (1970s—not as dusty as you think) reports an upper bound of 8, which, paired with Moron's lower bound of 8, ...
-The general form of this question is called the Selection Problem. If you google that phrase you will get lots of useful results.
-Edit: Knuth doesn't give an explicit algorithm for finding the median of 6 elements in at most 8 steps (at least in the first edition). However, in exercise 12 of section 5.3.3, he does give the explicit method for finding the median of 7 elements using at most 10 comparisons, which may be of some help.<|endoftext|>
-TITLE: What are some deep questions that are applicable to first graders in regards to adding zero?
-QUESTION [6 upvotes]: I'm trying to come up with some math problems (word or otherwise) that get to the meaning of adding zero, but I'm getting stuck because it seems just too simple to me.
-I have come up with questions like "John is having a birthday party and he invites five friends over but nobody shows up. How many people are at the party?". While this question gets to the problem at hand, it might be too simple to understand the meaning of adding zero. This problem also suffers from the possibility that first graders will answer like so, "Well there would be three people at the party because John's mom and dad would be there too."
-Any ideas?
-
-REPLY [4 votes]: I suggest you modify your birthday party idea to include different types of people, such as friends from school, friends from soccer, etc. Then in general you need to add together the number of attendees from the various categories, and sometimes that will entail adding zero.<|endoftext|>
-TITLE: Injective functions with intermediate-value property are continuous. Better proof?
-QUESTION [19 upvotes]: A function $f: \mathbb{R} \to \mathbb{R}$ is said to have the intermediate-value property if for any $a$, $b$ and $\lambda \in [f(a),f(b)]$ there is $x \in [a,b]$ such that $f(x)=\lambda$.
-A function $f$ is injective if $f(x)=f(y) \Rightarrow x=y$.
-Now it is the case that every injective function with the intermediate-value property is continuous. I can prove this using the following steps:
-
-An injective function with the intermediate-value property must be monotonic.
-A monotonic function possesses left- and right-handed limits at each point.
-For a function with the intermediate-value property the left- and right-handed limits at $x$, if they exist, equal $f(x)$.
-
-I am not really happy with this proof. Particularly I don't like having to invoke the intermediate-value property twice.
-Can there be a shorter or more elegant proof?
-
-REPLY [15 votes]: [I thought of another proof that uses the IVP and injectiveness once. Putting it as a community wiki answer.] 
-Assume on the contrary that $f$ is not continuous at $x$. Then there is a sequence $x_n$ converging to $x$ such that $f(x_n)$ does not converge to $f(x)$. Then there is $\epsilon>0$ and a subsequence $x_{n_k}$ such that $f(x_{n_k}) \notin (f(x)-\epsilon,f(x)+\epsilon)$.
-There must either be a further subsequence $x_{n_j}$ such that $f(x_{n_j}) \leq f(x)-\epsilon$ or a subsequence $x_{n_q}$ such that $f(x_{n_q}) \geq f(x)+\epsilon$ (or both). Assume without loss of generality the former.
-Since $f(x_{n_j}) \leq f(x)-\epsilon < f(x)$, by the IVP for every $j$ there is a $y_j$ such that $x_{n_j}\leq y_j < x$ and $f(y_j)=f(x)-\epsilon$.
-Because $f$ is injective, all the $y_j$ must be the same, say $y$. Because $x_{n_j}$ converges to $x$, $y=x$ by the sandwich theorem. But $f(y)\neq f(x)$. Hence a contradiction.<|endoftext|>
-TITLE: Topology of a cone of $\mathbb R\mathbb P^2$.
-QUESTION [10 upvotes]: I had already posted this on mathoverflow and was advised to post the same here. So here it goes:
-$X=\{(x,y,z)|x^2+y^2+z^2\le 1$ and $z≥0\}$ i.e. $X$ is the top half of a $3$-Disk.
-$Z=X/E$, where $E$ is the equivalence relation on the plane $z = 0$ which is as follows:
-$(x,y,0)∼(−x,−y,0)$.
-I was told that this space is equivalent to a cone of $\mathbb R\mathbb P^2$ (Real Projective Plane).
-I want to know the following facts about "$Z$"
-1) Is this a manifold with a boundary?
-2) If it is a manifold with a boundary, what are the points of $Z$ that make the boundary?
-3) Is it simply connected?
-4) What is the minimum Euclidean dimension in which $Z$ can be embedded?
-Thank you very much for your help. I am new to topology and this problem came up as a part of my project. Any help is appreciated.
-Thank you. Will.
-
-REPLY [2 votes]: This should be a comment somewhere, but it got too long.
-Will, the cone on $P^2_\mathbb R$ is not a manifold. Consider the long exact sequence for integral reduced homology of the pair $(C,C')$ where $C=C(P^2_\mathbb R)$ is the cone over $P^2_\mathbb R$ and $C'=C\setminus\{a\}$ is the complement of the apex of the cone. Since $C$ is contractible and $C'$ deformation-retracts onto $P^2_\mathbb R$, you get isomorphisms $H^\sharp_2(C,C')\cong H^\sharp_{1}(P^2_\mathbb R)\cong\mathbb Z/2\mathbb Z$.
-It follows that $C$ is not a manifold: in a manifold $M$, for every point $p\in M$ we have that the integral reduced homology $H^\#_\bullet(M,M\setminus\{p\})$ is that of a sphere.
-(Generalizing this reasoning, you get a rather strong condition on a manifold for its cone to be also a manifold. I'm sure the topologists among our fellow M.SEers know of a precise characterization.)
-In the same way, we see that $C$ is not a manifold with boundary, because in such a space $H^\#_\bullet(M,M\setminus\{p\})$ is, for every $p$, either identically zero or that of a sphere.<|endoftext|>
-TITLE: Why is the co-free module defined as the right adjoint to the forgetful functor to Ab rather than Set?
-QUESTION [7 upvotes]: I'm currently reading Hilton & Stammbach's A First Course in Homological Algebra, and the following point has stumped me:
-In section 1.8, they construct co-free modules ("left module" over some ring) as essentially coming from the right adjoint to the forgetful functor from $\Lambda$-Modules to Abelian Groups. On the other hand, the free module is constructed as the left adjoint to the forgetful functor from $\Lambda$-modules to Sets. 
This turns out to be equivalent to requiring free modules to be direct sums of copies of $\Lambda$ considered as a module over itself, and to requiring co-free modules to be direct products of copies of $\Lambda^*=Hom_\mathbb{Z}(\Lambda, \mathbb{Q}/\mathbb{Z})$.
-So I guess my question is: what does the right adjoint to the forgetful functor to Set look like, and why is the right adjoint to the forgetful functor to Abelian Groups more useful?
-
-REPLY [11 votes]: If a functor has a right adjoint, it preserves colimits; but the
-forgetful functor from $R$-Mod to Set doesn't. For instance it
-doesn't preserve binary coproducts. So there isn't a right adjoint.<|endoftext|>
-TITLE: How can one intuitively think about quaternions?
-QUESTION [37 upvotes]: Quaternions came up while I was interning not too long ago and it seemed like no one really knew how they worked. While eventually certain people were tracked down and were able to help with the issue, it piqued my interest in quaternions.
-After reading many articles and a couple books on them, I began to know the formulas associated with them, but still have no clue how they work (why they allow rotations in 3D space to be specific). I back-tracked a little bit and looked at normal complex numbers with just one imaginary component and asked myself if I even understood how they allow rotations in 2D space. After a couple awesome moments of understanding, I understood it for imaginary numbers, but I'm still having trouble extending the thoughts to quaternions.
-How can someone intuitively think about quaternions and how they allow for rotations in 3D space?
-
-REPLY [2 votes]: Thinking about quaternions as 4D is misleading. Quaternions are the union of a scalar and a 3-vector. Think: time and space. Space is a 3-vector. You can point in directions in space. Time is a scalar. There is a past (negative time) and a future (positive time) and now (0), but no ability to point in the direction of time.
-Think of a blinking light on a train, each event having a time and location. These events can be written as quaternions. If the train travels at a constant velocity, you are seeing the addition of quaternions.
-One can make movies out of quaternions. Examples are available on my web site, http://visualphysics.org
-The calculations for rotations in 3D space ignore time.<|endoftext|>
-TITLE: Evaluating the nested radical $ \sqrt{1 + 2 \sqrt{1 + 3 \sqrt{1 + \cdots}}} $.
-QUESTION [54 upvotes]: How does one prove the following limit?
-$$
- \lim_{n \to \infty}
- \sqrt{1 + 2 \sqrt{1 + 3 \sqrt{1 + \cdots \sqrt{1 + (n - 1) \sqrt{1 + n}}}}}
-= 3.
-$$
-
-REPLY [33 votes]: Let me provide a full and simple proof here (6 years later)
-Set, for $m
-TITLE: How do we solve $a \le b^{r}-r$ for $r$?
-QUESTION [12 upvotes]: Given two values $a$ and $b$, how should one go about solving the following inequality for $r$:
-$$a \le b^r -r .$$
-Applying $\log_b$ on both sides of the inequality doesn't help me much since that yields the following:
-$$\log_b a \le \log_b (b^r-r) .$$
-I know that
-$$\log_z x - \log_z y = \log_z \frac{x}{y} .$$
-But that doesn't help me to eliminate the exponentiation and solve for $r$.
-It's been ages since I've done algebra, and now I'm back in grad school, and I'm finding this equation in a homework. Can't remember ever seeing a rule of logarithms involving such expressions.
-The actual homework (after simplification) involves $9 \le 2^r - r$ which is trivially solvable by just eyeballing it. 
-But is there a methodical way to solve $a \le b^r-r$ for $r$ given any arbitrary numbers $a$ and $b$? - -REPLY [9 votes]: The case of equality can be solved in terms of the Lambert W function. -It can be used to solve equations of the form $$p^{ax+b} = cx + d$$ -(quoted from the wiki page). -$b^{r} -r$ is 'mostly' monotonic, so I suppose solving the equality will be enough to find the solutions to your inequality. - -REPLY [5 votes]: You can also get a quick-and-dirty approximation for the equality case with the following approach, which should hold in most situations when $a, b > 1$. -Rearrange the problem and take logarithms to rewrite it as $$\ln\left(1+ \frac{r}{a}\right) = r \ln b - \ln a.$$ -Now, if $\frac{r}{a}$ is less than 1 (and it should be in most cases where $a, b > 1$), -$$\ln\left(1+ \frac{r}{a}\right) \approx \frac{r}{a},$$ -which gives you $$r \approx \frac{- \ln a}{1/a - \ln b}.$$ -For your specific problem, this method yields $r \approx 3.78$, whereas the actual answer is closer to $3.66$. -You can improve this by using the quadratic approximation -$$\ln\left(1+ \frac{r}{a}\right) \approx \frac{r}{a} - \frac{r^2}{2a^2},$$ -which requires solving a quadratic equation. -(These approximations come from the Taylor expansion of $\ln (1+x)$.) - -REPLY [3 votes]: If you don't have the W function, you need an iterative root finding type of solution. Note that for your example r is somewhere between 3 and 4. This equation is nicely behaved, so any of them will work. See Root finding for a start<|endoftext|> -TITLE: Which metric spaces are totally bounded? -QUESTION [19 upvotes]: A subset $S$ of a metric space $X$ is totally bounded if for any $r>0$, $S$ can be covered by a finite number of $X$-balls of radius $r$. -A metric space $X$ is totally bounded if it is a totally bounded subset of itself. -For example, bounded subsets of $\mathbb{R}^n$ are totally bounded. -Are there any interesting necessary and/or sufficient conditions for a metric space or its subsets to be totally bounded? -[Background: I was trying to generalize problem 4.8 of baby Rudin which asks you to prove that a real uniformly continuous function on a bounded subset $E$ of the real line is bounded. It seems after a little googling that a more general true statement would require $E$ to be a totally bounded subset of some metric space. But where might we meet such subsets?] - -REPLY [4 votes]: The question asked "where we might meet" total boundedness. One situation where you do not meet it is classical analysis or geometry in finite-dimensional spaces; in $R^n$, bounded and totally bounded are equivalent concepts. So the intuition and conditions for appearance are necessarily more subtle. -Geometrically, a space fails to be totally bounded if and only if it contains an infinite set of points with all pairwise distances at least $d$ for some $d > 0$. This is just the negation of the definition of totally bounded: if for some $\epsilon > 0$ there is no finite set of points whose $\epsilon$-neighborhood covers the space, then we can build an infinite set of mutually $\epsilon$-separated points by placing in the set any point $p_1$ and for $n>1$ inductively adding to the set any point $p_n$ not inside the $\epsilon$-neighborhood of the preceding points. For finite-dimensional spaces this can happen only for a sequence of points that escapes to infinity. But in infinite dimensions an infinite set of $d$-separated points can be packed into a ball of finite radius. 
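-To make the last geometric point concrete, here is a small sketch in Python (my own illustration, not part of the original answer): the standard "basis" sequences $e_1, e_2, \dots$ of $\ell^1$ all lie in the closed unit ball, yet every pair is at distance exactly $2$, so the ball contains an infinite $2$-separated set and cannot be totally bounded.
-
-import numpy as np
-
-# Truncate e_1, ..., e_6 of l^1 to six coordinates; this suffices to compute
-# the pairwise distances, since the remaining coordinates are all zero.
-N = 6
-E = np.eye(N)
-
-for i in range(N):
-    for j in range(i + 1, N):
-        # l^1 distance ||e_i - e_j||_1 equals 2 for every i != j
-        assert np.abs(E[i] - E[j]).sum() == 2.0
-print("every pair of basis vectors is 2-separated inside the unit ball")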
-Boundedness and total boundedness are both properties of the completion of the space. A metric space has either boundedness property if and only if its completion does. Considering complete metric spaces is convenient because total boundedness (for a complete metric space) is equivalent to the space being compact. So one might want a topological characterization of the difference between bounded and totally bounded, for complete spaces. However, any metric space can be converted into a bounded one (without affecting total boundedness, completeness, or the topology) by replacing the metric $d$ with $d/(1+d)$ or some other bounded monotonic function that preserves the property of being a metric. So a topological characterization suitable for understanding the difference between bounded and totally bounded (complete) spaces is more elusive. Maybe one should consider uniform spaces instead.
-Returning to the question of situations where total boundedness appears, there are at least two:
-
-Insofar as total boundedness requires infinite-dimensionality and a metric, it comes up naturally in functional analysis.
-It also appears in so-called "constructive analysis", i.e., analysis using direct and generally negation-free arguments, as in Errett Bishop's book. The concept of total boundedness first arose from a (classical) logical analysis of the Heine-Borel theorem and what was required to extend it to general metric spaces. In a situation where nonconstructive arguments, such as determining whether a given sequence has infinitely many positive terms, are not present, spaces like the real numbers are not all that different from general metric spaces, and an analysis of which hypotheses were "really at work" in the classical proofs is useful for building the theory in the more general and direct fashion that Bishop advocated. So where boundedness might be used in a finite-dimensional classical argument, one might use total boundedness in the constructive one. This is similar in flavor to the question above in the comments, of what characterizations are true without the axiom of choice.<|endoftext|>
-TITLE: What is the Lebesgue mean of the fat Cantor set?
-QUESTION [8 upvotes]: Everything that follows takes place in the Borel $\sigma$-algebra with Lebesgue measure.
-The Lebesgue mean of $f$ at $x$ is defined as $\displaystyle \lim_{\epsilon\to 0} \int_{x-\epsilon}^{x+\epsilon} \frac{f}{2\epsilon}$. The function defined by $[f](x) =$ "Lebesgue mean of $f$ at $x$" is equal to $f$ almost everywhere, so integrals of it will be equal to integrals of $f$; call this property $P$.
-Starting with the interval $[0,1]$ and removing the middle $1/4$ then the two middle $1/8$-ths then the four middle $1/16$-ths and so on produces the fat Cantor set which has measure $1/2$ but does not contain any interval (Since measures are countably additive this informs us that the set contains uncountably many points). Let $I$ be the indicator function for the fat Cantor set - it is equal to $1$ when applied to a point of the set and $0$ otherwise.
-Intuitively I would have thought that the Lebesgue mean $[I]$ of the indicator for the fat Cantor set would be the zero function. Since that contradicts $P$, it seems more plausible that $[I] = I$ but I cannot prove this.
-How can we construct the function $[I]$?
-
-REPLY [2 votes]: The Lebesgue mean does not exist for "boundary points". 
Let $A_n$ be the union of the intervals of step $n$, that is, $A_1=[0,3/8]\cup [5/8,1]$, $A_2=[0,9/64]\cup [15/64,3/8]\cup [5/8,49/64]\cup [55/64,1]$ etc, and $A=\cap_n A_n$ is the fat Cantor set. Consider a boundary point $x$ of a set $A_n$. Consider the sequence $\varepsilon_k:=(3/8)^{n+k}$. Then by symmetry we have $I_k=|(x-\varepsilon_k,x+\varepsilon_k)\cap A|=1/4\cdot(1/2)^{n+k}$ for $k$ large enough, thus $I_k/\varepsilon_k \sim (4/3)^{n+k} \to \infty$.
-You can think of elements of $A$ as infinite sequences of {Left,Right}. Those sequences which are eventually constant correspond to "boundary points". My guess is that the points for which the mean is 1 correspond to sequences that are normal in some sense (for example, all patterns of {Left,Right} appear with the same asymptotic frequency).<|endoftext|>
-TITLE: Is the Riemann sphere compact even though the complex plane isn't?
-QUESTION [6 upvotes]: The complex plane, the set of all $z=x+iy$ where $x$ and $y$ are real, has surface area equal to the cross product of $x$ and $y$, equal to aleph-something (that's not the question). Projecting the plane onto a sphere, and adding the complex infinity to the set (without a value for $\arg(z)$), gives the Riemann sphere, which, being a sphere, is compact.
-Does the addition of infinity, as some sort of bounder, make the complex plane compact? Is the Riemann sphere without infinity compact, since it has an open boundary? Also, if the hyperbolic plane can be shown in a disk, what does that mean? The answer to this question might primarily clarify the definitions in my new studies of topology of manifolds, etc.
-thanks.
-
-REPLY [3 votes]: The complex plane $\mathbb{C}$ is not compact. For instance, here is an open cover that does not admit any finite subcover:
-$$
-{\mathcal U} = \left\{ U_n\right\} \qquad \text{with} \qquad U_n = \left\{ z \in \mathbb{C} \ \vert \ \vert z \vert < n \right\} \ .
-$$
-The reason why ${\mathcal U}$ does not admit any finite subcover is easy: if you pick just a finite number of $U_n$'s, then their union would be equal to the biggest one of them, which is not the whole $\mathbb{C}$.
-Now, thanks to the stereographic projection, think of these $U_n$'s inside the Riemann sphere $S^2$. They do not form a cover of $S^2$, since $\infty$ (= the North Pole) is not included in any of them. So, we need to add an open neighborhood of the infinity. For instance, the image of
-$$
-V = \left\{ z \in \mathbb{C} \ \vert \ \vert z \vert > 394 \right\}
-$$
-by the stereographic projection. Then, we would have an open cover of $S^2$:
-$$
-{\mathcal U}' = \left\{ U_n \ \vert \ n \in \mathbb{N}\right\} \cup \left\{ V\right\} \ ,
-$$
-which admits a finite subcover. For instance,
-$$
-U_{395} \ , \quad V \ .
-$$<|endoftext|>
-TITLE: What does a compact set look like?
-QUESTION [6 upvotes]: So for a writing assignment in one of my classes we are asked to discuss and prove some basic results about compact sets in general topological spaces. I like proving these things, but they don't help me understand what a compact (locally compact, paracompact, ...) set in a topological space "looks like." That said I'm asking for some examples of compact (locally compact, ...) sets in a variety of topological spaces. I'm also interested in some explanation of what extra "benefits" are brought about by singling out these compact sets. For example the p-adic numbers are locally compact and locally compact things (abelian groups to be precise) are a good setting in which to carry out Fourier analysis. 
REPLY [6 votes]: You could try reading Terence Tao's notes on the subject; I found them very informative.
-As for examples, here is an example and a non-example which I think are informative. The example is that, by Tychonoff's theorem, $[0, 1]^I$ is compact for any index set $I$. The non-example is that the closed unit ball in any infinite-dimensional Banach space, say $\ell_1(\mathbb{Z})$, is not compact.
-Edit: Here is another perspective from which to think about compact Hausdorff spaces (and LCH spaces). A compact Hausdorff space $X$ is completely determined by the ring of continuous functions $C(X, \mathbb{R})$; this is a standard exercise which is worked out, for example, here. One gets the points of $X$ as the maximal ideals of $C(X, \mathbb{R})$ and the topology as the initial topology making every function in $C(X, \mathbb{R})$ continuous. Slightly generalized, this is the commutative Gelfand-Naimark theorem, and it says that studying commutative C*-algebras with unit is the same thing as studying compact Hausdorff spaces. Removing the requirement that we have a unit is the same thing as studying locally compact Hausdorff spaces.
-So one can write down compact Hausdorff spaces by writing down commutative C*-algebras with unit. One choice is to take the space $C_b(X, \mathbb{R})$, the space of bounded continuous functions on an arbitrary topological space $X$, with the sup norm. If $X$ is completely regular Hausdorff, this gives the Stone-Cech compactification of $X$. (Which is hopeless to think about in general; even for $X = \mathbb{N}$ it's very weird, as the Wikipedia article describes.)<|endoftext|>
-TITLE: Differentiability of Convolutions
-QUESTION [10 upvotes]: Let $f(x) \in L^p(\mathbb{R})$ and $K \in C^m(\mathbb{R})$. Can I then say that $(f \ast K) (x) = \int_{\mathbb{R}} f(t) K(x-t) dt$ is in $C^m$?
-I know that this is true if $K$ has compact support, but I was wondering if it is possible to have a stronger result (perhaps $K$ vanishing at $\infty$?).
-
-REPLY [2 votes]: I wrote an article on analysis recently and I included the following relevant result (with proof) in the article; I hope it is helpful:
-Theorem Let $f\in L^1(\mathbb{R}^n)\cap L^p(\mathbb{R}^n)$ for some $1\leq p \leq \infty$. Also, let $g\in L^1(\mathbb{R}^n)$ be a function all of whose partial derivatives of the first order exist and are such that $\frac{\partial g}{\partial x_i}$ is bounded on $\mathbb{R}^n$ for all $1\leq i\leq n$. We conclude that the partial derivatives of the convolution $f\ast g$ of the first order exist on $\mathbb{R}^n$. In fact, $\frac{\partial (f\ast g)}{\partial x_i}=f\ast (\frac{\partial g}{\partial x_i})$ for all $1\leq i\leq n$.
-Proof. First note that the convolution $f\ast g\in L^1(\mathbb{R}^n)\cap L^p(\mathbb{R}^n)$ by Minkowski's inequality
-and is therefore finite (and well-defined) a.e. Let us fix $1\leq i\leq n$. 
-Note that
-$\frac{\left(f\ast g\right)\left(x+he_i\right)-\left(f\ast g\right)(x)}{h} - \left(f\ast \left(\frac{\partial g}{\partial x_i}\right)\right)\left(x\right)$
-$= \int_{\mathbb{R}^{n}} f\left(y\right)\left[\frac{g\left(x-y+he_i\right)-g\left(x-y\right)}{h} - \left(\frac{\partial g}{\partial x_i}\right)\left(x-y\right) \right]dy$
-In particular,
-$\frac{\partial \left(f\ast g\right)}{\partial x_i}\left(x\right)$
-$=\lim_{h\to 0} \int_{\mathbb{R}^{n}} f\left(y\right)\left[\frac{g\left(x-y+he_i\right)-g\left(x-y\right)}{h}\right]dy$
-$= \int_{\mathbb{R}^{n}} \left[\lim_{h\to 0} f\left(y\right)\left[\frac{g\left(x-y+he_i\right)-g\left(x-y\right)}{h}\right]\right]dy$
-$= \int_{\mathbb{R}^n} f\left(y\right)\frac{\partial g}{\partial x_i}\left(x-y\right) dy$
-$= \left(f\ast \frac{\partial g}{\partial x_i}\right)\left(x\right)$
-We will justify this computation using the
-Lebesgue dominated convergence theorem. In particular, we will show that if $x\in \mathbb{R}^n$ is fixed,
-the expression $\left|\frac{g\left(x-y+he_i\right)-g\left(x-y\right)}{h} - \left(\frac{\partial g}{\partial x_i}\right)\left(x-y\right)\right|$ is bounded by
-an $L^1(f)$ function in $y$ for all $h>0$ sufficiently small. (Let us recall that $L^1(f)$ is the $L^1$ space
-associated to the complex measure $\mu_f$ defined by $\mu_f(E)=\int_{E} f$ for every measurable $E\subseteq \mathbb{R}^n$.
-Clearly, every constant function is in $L^1(f)$.)
-However, this is an easy consequence of the mean value theorem: we know that there exists
-$\delta>0$ such that $0
-TITLE: Question about square-wheeled cars
-QUESTION [9 upvotes]: It's kind of an infamous problem in differential equations to find the correct road surface so that a car with square wheels (and an axle located in the center) keeps its axle level as it drives along. I hope I won't offend anybody by saying that one smooth piece of the solution (for a wheel with sides of length 2) is $y = -\cosh(x)$.
-If you actually take this solution and describe the position of the axle at any given point, unless I have calculated incorrectly you find that the axle is always positioned directly over the point where the wheel makes contact with the road. I've been unable to come up with a physical justification of this phenomenon and it seems fairly non-obvious to me.
-Is there a straightforward reason why this must be true? Is it specific to this wheel shape?
-
-REPLY [3 votes]: The physical justification is quite simple. The point of contact between the wheel and the road is instantaneously stationary. Since the wheel moves rigidly, the velocity of any point on the wheel is as though the wheel were rotating about the point of contact. In particular, the velocity of the axle is perpendicular to the line joining it to the point of contact. Since we require that the axle stays level, its velocity must be purely horizontal, so it has to be directly above the contact point.<|endoftext|>
-TITLE: Expansion of $ (a_1 + a_2 + \cdots + a_k)^n $
-QUESTION [7 upvotes]: Is there an expansion for the following expression?
-$$ (a_1 + a_2 + \cdots + a_k)^n $$
-
-REPLY [10 votes]: http://en.wikipedia.org/wiki/Multinomial_theorem
-This is what you seek.<|endoftext|>
-TITLE: Why does this process, when iterated, tend towards a certain number? (the golden ratio?)
-QUESTION [13 upvotes]: 
-1. Take any number $x$ (edit: $x$ should be positive, heh)
-2. Add 1 to it: $x+1$
-3. Find its reciprocal: $1/(x+1)$
-4. Repeat from 2
-
-So, taking $x = 1$ to start:
-
-1
-2 (the + 1)
-0.5 (the reciprocal)
-1.5 (the + 1)
-0.666...
(the reciprocal)
-1.666... (the + 1)
-0.6 (the reciprocal)
-1.6 (the + 1)
-0.625
-1.625
-0.61584...
-1.61584...
-0.619047...
-1.619047...
-0.617647058823..
-
-etc.
-If we look at just the "step 3"'s (the reciprocals), we get:
-
-1
-0.5
-0.666...
-0.6
-0.625
-0.61584...
-0.619047...
-0.617647058823..
-
-This appears to always converge to 0.61803399... no matter where you start from. I looked up this number and it is often called "The golden ratio" - 1, or $\frac{1+\sqrt{5}}{2}-1$.
-
-Is there any "mathematical" way to represent the above procedure (or the terms of the second series, of "only reciprocals") as a limit or series?
-Why does this converge to what it does for every starting point $x$?
-
-
-edit: darn, I just realized that the golden ratio is actually 1.618... and not 0.618...; I edited my question to change what the result is apparently (golden ratio - 1).
-However, I think I could easily make it the golden ratio by taking the +1 "steps" of the original series, instead of the reciprocation steps of the original series:
-
-2
-1.5
-1.666...
-1.6
-1.625
-1.61584...
-1.619047...
-1.617647058823..
-
-which does converge to $\frac{1+\sqrt{5}}{2}$.
-Explaining either of these series is adequate as I believe that explaining one also explains the other.
-
-REPLY [2 votes]: Here is another way of looking at it.
-First consider the special case of starting with $1$.
-Consider what happens when $\displaystyle x = \frac{f_n}{f_{n+1}}$ where $f_n$ is the $n^{th}$ Fibonacci number.
-You get $$\frac{1}{\frac{f_n}{f_{n+1}} + 1} = \frac{f_{n+1}}{f_n + f_{n+1}} = \frac{f_{n+1}}{f_{n+2}}$$
-Since $\displaystyle 1 = \frac{f_1}{f_2}$
-we see that after $n$ iterations, $\displaystyle x = \frac{f_{n+1}}{f_{n+2}}$
-This can be generalized to any other starting value, by using a Fibonacci-like sequence, which satisfies the recurrence $\displaystyle a_{n+2} = a_{n+1} + a_{n}$ and choosing appropriate $a_{2}$ and $a_{1}$ so that $\displaystyle \frac{a_1}{a_2}$ is the initial guess for $x$.
-The $n^{th}$ value for $x$ will be given by $\displaystyle \frac{a_n}{a_{n+1}}$
-The general formula for such sequences is given by $a_{n} = A\alpha^n + B\beta^n$ where $\alpha,\beta$ are roots of $\displaystyle z^2 = z + 1$ and thus the limit of $\displaystyle \frac{a_n}{a_{n+1}}$ can be easily found, which will be one of $1/\alpha$ or $1/\beta$ (which you can also see, by assuming there is a limit $1/L$ and setting $\displaystyle 1/L = \frac{1}{1+1/L}$).<|endoftext|>
-TITLE: Does the quantile function uniquely determine the distribution function?
-QUESTION [7 upvotes]: For a probability distribution, its quantile function is defined in terms of its distribution function as
-
-$$ Q(p)=F^{-1}(p) = \inf \{ x\in R : p \le F(x) \} $$
-
-I was wondering if, conversely, a quantile function can uniquely determine a distribution and therefore fully describe the probability distribution just as a distribution function does?
-Thanks and regards!
-
-UPDATE:
-Please let me be more specific. Because a CDF is nondecreasing, right-continuous, and its limit is $0$ when $x \to -\infty$ and $1$ when $x \to \infty$, its quantile function is nondecreasing, left-continuous and a map from $(0,1)$ into $R$. If a function is nondecreasing, left-continuous and a map from $(0,1)$ into $R$, can it become a quantile function of some CDF? When it can, is there a way to represent the CDF in terms of the quantile function using infimum or supremum, similarly to how the quantile function is expressed in terms of the CDF? 
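-(To experiment with the last question numerically, here is a rough sketch in Python, my own addition: it uses an assumed toy two-point distribution, builds $Q$ from $F$ by the infimum definition on a grid, and then tests the natural supremum candidate for recovering $F$.)
-
-import numpy as np
-
-def F(x):  # assumed example: P(X=0) = 0.3, P(X=1) = 0.7
-    return 0.0 if x < 0 else (0.3 if x < 1 else 1.0)
-
-xs = np.linspace(-1.0, 2.0, 301)
-ps = np.linspace(0.01, 0.99, 99)
-
-def Q(p):  # Q(p) = inf { x : p <= F(x) }, evaluated on the grid
-    return xs[np.argmax([p <= F(x) for x in xs])]
-
-def F_hat(x):  # candidate: sup { p : Q(p) <= x }
-    vals = [p for p in ps if Q(p) <= x]
-    return max(vals) if vals else 0.0
-
-for x in [-0.5, 0.0, 0.5, 1.0, 1.5]:
-    print(x, F(x), F_hat(x))  # the two columns agree up to the grid resolution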
REPLY [2 votes]: I believe the following definition for a CDF is consistent with the definition of a quantile function in your original post:
-$F(x) = \sup \{ p\in (0,1) : x \ge Q(p) \}$
-This definition indeed makes the quantile function left-continuous as you proposed.<|endoftext|>
-TITLE: Algebraization of integral calculus
-QUESTION [7 upvotes]: It is well known that the differential calculus has a nice algebraization in terms of the differential rings but what about integral calculus? Of course, one sometimes defines an integral in a differential ring $R$ with a derivation $\partial$ as a projection $\pi: R\rightarrow \tilde R$, where $\tilde R$ is a quotient of $R$ w.r.t. the following equivalence relation: $f\sim g$ iff $f-g$ is in the image of $\partial$, but this is not very intuitive and apparently corresponds to the idea of definite integral over a fixed domain rather than to that of an indefinite one. So my question is:
-
-Are there algebraic counterparts for the concept of an indefinite integral?
-
-REPLY [5 votes]: If you find the Rota-Baxter algebra viewpoint of interest then an excellent entry point into the literature on differential-algebraic aspects is the recent work of Guo, e.g. On differential Rota-Baxter algebras and Baxter algebras and differential algebras. See also other papers listed on his home page. The early papers don't focus so much on these aspects so I would not recommend reading them initially.
-Also you may find of interest this paper on classification of related operator identities.
-Freeman, J. M. On the classification of operator identities. Studies in Appl. Math. 51 (1972), 73-84.<|endoftext|>
-TITLE: Is there a list of all connected $T_0$-spaces with 5 points?
-QUESTION [5 upvotes]: Is there some place (on the internet or elsewhere) where I can find the number and preferably a list of all (isomorphism classes of) finite connected $T_0$-spaces with, say, 5 points?
-I know that a $T_0$-topology on a finite set is equivalent to a partial ordering, and Wikipedia tells me that there are, up to isomorphism, 63 partially ordered sets with precisely 5 elements. However, I am only interested in connected spaces, and I'd love to have a list (most preferably in terms of Hasse diagrams).
-
-REPLY [2 votes]: This question was answered here.<|endoftext|>
-TITLE: Applications of Fractional Calculus
-QUESTION [18 upvotes]: I've seen recently for the first time in Special Functions (by G. Andrews, R. Askey and R. Roy) the definitions of fractional integral
-$$(I_{\alpha }f)(x)=\frac{1}{\Gamma (\alpha )}\int_{a}^{x}(x-t)^{\alpha -1}f(t)dt\qquad \text{Re}\alpha >0$$
-and fractional derivative
-$$\frac{d^{\nu }w^{\mu }}{dw^{\nu }}=\frac{\Gamma (\mu +1)}{\Gamma (\mu -\nu +1)}w^{\mu -\nu },$$
-in The Hypergeometric Functions Chapter.
-I would like to know some applications for Fractional Calculus and/or which results can only be obtained by it, if any.
-
-REPLY [3 votes]: I wrote my undergraduate thesis on a very concrete application of the fractional calculus to Lagrangian Mechanics. For those of you who aren't physicists, Lagrangian Mechanics is a reformulation of classical mechanics that is valid for all coordinates $(q,\dot{q},t)$. Lagrangian Mechanics gives us the same results as Newton's Laws while being much more flexible. It is also the starting point for both quantum mechanics and general relativity.
-This is how it works. Let $L(q,\dot{q}) = T - V$ where $T$ and $V$ respectively denote the kinetic and potential energy of the system. 
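-For instance (a standard concrete example, my addition rather than the original poster's): for a mass $m$ on a spring of stiffness $k$, $T = \frac{1}{2}m\dot{q}^2$ and $V = \frac{1}{2}kq^2$, so $L = \frac{1}{2}m\dot{q}^2 - \frac{1}{2}kq^2$.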
-Given a functional of the form -$$ -S[L(q,\dot{q},t)] = \int_a^b L(q,\dot{q})dt -$$ -and applying the calculus of variations, we arrive at the Euler-Lagrange equation -$$ -\frac{\partial L}{\partial q} - \frac{d}{dt}\Big(\frac{\partial L}{\partial \dot{q}}\Big) = 0 -$$ -which you can then integrate to find the equations of motion of the system. There is a problem, however, and that is that you cannot extremize the action if $L$ has a term in it that is explicitly time-dependent. So? Historically this has meant that Lagrangian mechanics--and by extension quantum mechanics--has only been done for a very special kind of interactions, those we call conservative. -The good news is that using fractional derivatives, it is possible to rederive a version of the Euler-Lagrange equation that is valid for nonconservative systems, e.g. anything involving dissipation. It looks like this: -$$ -\frac{\partial L}{\partial q} + {_bD_t^{\alpha}}\Big[\frac{\partial L} {\partial {_bq_t^{\alpha}}} \Big] + {_bD_t^1}\Big[\frac{\partial L}{\partial {_bq_t^1}}\Big] = 0 -$$ -The physical implication is that fractional mechanics give us a notion of path memory for dynamical systems.<|endoftext|> -TITLE: Connection between Fourier transform and Taylor series -QUESTION [149 upvotes]: Both Fourier transform and Taylor series are means to represent functions in a different form. -What is the connection between these two? Is there a way to get from one to the other (and back again)? Is there an overall, connecting (geometric?) intuition? - -REPLY [11 votes]: Taylor series at $t=0$ of some function $\textrm{f}(t)$ is defined as -$$ \textrm{f}\left(t\right) =\sum_{j=0}^{\infty} - { - h_j\cdotp\frac{d^{j}}{dt^{j}}\textrm{f}(0)\cdotp t^{j} - } -$$ -where $ h_j=1/{j!}$ and $\frac{d^0}{dt^0}\textrm{f}\left(t\right)=\textrm{f}\left(t\right)$ -Fourier series is defined as -$$ -\textrm{f}\left(t\right) = - \sum_{n=1}^{\infty} - { \left( - a_n\cdot\cos \left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)+ - b_n\cdot\sin \left({\frac{2\pi\cdotp n \cdotp t}{T}}\right) - \right) - } -$$ - with coefficients: - $$ - \begin{align} - a_n&=\frac{2}{T}\cdotp\int_{t_1}^{t_2}{\textrm{f}(t)\cdotp\cos\left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)\,dt}\\ - b_n&=\frac{2}{T}\cdotp\int_{t_1}^{t_2}{\textrm{f}(t)\cdotp\sin\left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)\,dt} - \end{align} -$$ -For full-wave function $t_1=-{T}/{2}$ and $t_2=+{T}/{2}$ for any positive period $T$. 
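-(Before developing the expansion, a quick numerical sanity check of these coefficient formulas; this is an editorial sketch in Python, where the parabola $\textrm{g}(t)=t^2$ and the target value $(-1)^n/(\pi n)^2$ are the ones derived for this very example later in this answer.)
-
-import numpy as np
-from scipy.integrate import quad
-
-T, A = 1.0, 1.0
-g = lambda t: A * t**2   # the parabolic signal used later, with B = C = 0
-
-for n in range(1, 5):
-    # a_n = (2/T) * integral of g(t) cos(2 pi n t / T) over one full period
-    val, _ = quad(lambda t: g(t) * np.cos(2*np.pi*n*t/T), -T/2, T/2)
-    a_n = (2.0/T) * val
-    print(n, a_n, A * (T/(np.pi*n))**2 * (-1)**n)   # the two columns agree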
-Let us find the Taylor series of the cosine and sine functions:
-$$
-\begin{align}
- \cos\left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)&=\sum_{k=0}^{\infty}{\frac{(-1)^k}{\left(2\cdotp k \right)!}
- \cdotp \left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)^{2\cdotp k}}\\
- \sin\left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)&=\sum_{k=0}^{\infty}{\frac{(-1)^k}{\left(2\cdotp k + 1\right)!}
- \cdotp \left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)^{\left(2\cdotp k +1\right)}}
- \end{align}
-$$
-and substitute these expansions into the Fourier coefficients:
-$$
-\begin{align}
- a_n&=\frac{2}{T}\cdotp
- \int_{t_1}^{t_2}
- {
- \underbrace
- {
- \left(
- \sum_{j=0}^{\infty}
- {
- h_j\cdotp\frac{d^{j}}{dt^{j}}\textrm{f}(0)\cdotp t^{j}
- }
- \right)\cdotp
- \left(\sum_{k=0}^{\infty}
- {
- \frac{(-1)^k}{\left(2\cdotp k \right)!}
- \cdotp \left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)^{2\cdotp k}
- }\right)
- }_{\mbox{$\textrm{Tc}(t)$}}
- \,dt
- }\\
- b_n&=\frac{2}{T}\cdotp
- \int_{t_1}^{t_2}
- {
- \underbrace
- {
- \left(
- \sum_{j=0}^{\infty}
- {
- h_j\cdotp\frac{d^{j}}{dt^{j}}\textrm{f}(0)\cdotp t^{j}
- }
- \right)
- \cdotp
- \left(\sum_{k=0}^{\infty}
- {\frac{(-1)^k}{\left(2\cdotp k + 1\right)!}
- \cdotp \left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)^{\left(2\cdotp k +1\right)}
- }
- \right)
- }_{\mbox{$\textrm{Ts}(t)$}}
- \,dt
- }
- \end{align}
-$$
-Now consider $\textrm{Tc}(t)$:
-For the first few indices $j$ and $k$ the brackets can be expanded and the terms multiplied out in sequence, as shown:
-$$
- \begin{align*}
- \textrm{Tc}(t)&=&\textrm{f}(0)+\frac{d}{dt}\textrm{f}(0)\cdotp t + \left(-2\cdotp\textrm{f}(0)\cdotp\left(\frac{\pi\cdotp n}{T}\right)^{2}+\frac{1}{2}\cdotp\frac{d^2}{dt^2}\textrm{f}(0)\right)\cdotp t^{2}+\\&&
- +\left(-2\cdotp\frac{d}{dt}\textrm{f}(0)\cdotp\left(\frac{\pi\cdotp n}{T}\right)^{2}+\frac{1}{6}\cdotp\frac{d^3}{dt^3}\textrm{f}(0)\right)\cdotp t^{3}+\\&&
- +\left(\frac{2}{3}\cdotp\textrm{f}(0)\cdotp\left(\frac{\pi\cdotp n}{T}\right)^{4}-\frac{d^2}{dt^2}\textrm{f}(0)\cdotp\left(\frac{\pi\cdotp n}{T}\right)^{2}+\frac{1}{24}\cdotp\frac{d^4}{dt^4}\textrm{f}(0)\right)\cdotp t^{4}+\dots
- \end{align*}
-$$
-Now integrate this function term by term:
-$$
-\begin{align*}
- \int_{t_1}^{t_2}{\textrm{Tc}(t)\,dt}&=&\textrm{f}(0)\left(t_2-t_1\right)+\frac{1}{2}\cdotp\frac{d}{dt}\textrm{f}(0)\cdotp\left(t_2^2-t_1^2\right) + \\&& + \frac{1}{3}\cdot\left(-2\cdotp\textrm{f}(0)\cdotp\left(\frac{\pi\cdotp n}{T}\right)^{2}+\frac{1}{2}\cdotp\frac{d^2}{dt^2}\textrm{f}(0)\right)\cdotp\left(t_2^3-t_1^3\right) +\\&&
- +\frac{1}{4}\cdotp\left(-2\cdotp\frac{d}{dt}\textrm{f}(0)\cdotp\left(\frac{\pi\cdotp n}{T}\right)^{2}+\frac{1}{6}\cdotp\frac{d^3}{dt^3}\textrm{f}(0)\right)\cdotp\left(t_2^4-t_1^4\right)+\\&&
- +\frac{1}{5}\left(\frac{2}{3}\cdotp\textrm{f}(0)\cdotp\left(\frac{\pi\cdotp n}{T}\right)^{4}-\frac{d^2}{dt^2}\textrm{f}(0)\cdotp\left(\frac{\pi\cdotp n}{T}\right)^{2}+\frac{1}{24}\cdotp\frac{d^4}{dt^4}\textrm{f}(0)\right)\cdotp\left(t_2^5-t_1^5\right)+\dots
- \end{align*}
-$$
-Collect, in the previous integral, the coefficients of $\frac{d^i}{dt^i}\textrm{f}(0)$:
-$$
- \begin{align*}
- \int_{t_1}^{t_2}{\textrm{Tc}(t)\,dt}&=&\textrm{f}(0)\cdotp\left( \left(t_2-t_1\right)-
- \frac{2}{3}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^2\cdotp\left(t_2^3-t_1^3\right)+\frac{2}{15}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^4\cdotp\left(t_2^5-t_1^5\right)+\dots\right) +
- \\&&
- +\frac{d}{dt}\textrm{f}(0)\cdotp\left(\frac{1}{2}\cdotp \left(t_2^2-t_1^2\right)-
- \frac{1}{2}\cdotp\left(\frac{\pi\cdotp
n}{T}\right)^2\cdotp\left(t_2^4-t_1^4\right)+\frac{1}{9}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^4\cdotp\left(t_2^6-t_1^6\right)+\dots\right)+
- \\&&
- +\frac{d^2}{dt^2}\textrm{f}(0)\cdotp\left(\frac{1}{6}\cdotp \left(t_2^3-t_1^3\right)-
- \frac{1}{5}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^2\cdotp\left(t_2^5-t_1^5\right)+\frac{1}{21}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^4\cdotp\left(t_2^7-t_1^7\right)+\dots\right)+
- \\&&
- +\frac{d^3}{dt^3}\textrm{f}(0)\cdotp\left(\frac{1}{24}\cdotp \left(t_2^4-t_1^4\right)-
- \frac{1}{18}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^2\cdotp\left(t_2^6-t_1^6\right)+\frac{1}{72}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^4\cdotp\left(t_2^8-t_1^8\right)+\dots\right)+\dots
- \end{align*}
-$$
-Now it is easy to recognize the sequences in the brackets (the right-hand side expression below is multiplied by $2/T$).
- For $\textrm{f}(0)$:
-$$
-\begin{align*}
-\left(t_2-t_1\right)-
- \frac{2}{3}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^2\cdotp\left(t_2^3-t_1^3\right)+\dots &:& \frac{\left(-1 \right)^i\cdot 2^ \left(2\cdot i+1 \right)\cdot n^ \left(2\cdot i \right)}{ 1\cdot\left(2\cdot i+1 \right)\cdot \left(2\cdot i \right)! }\cdot \frac {\pi^ \left(2\cdot i \right)\cdot \left(t_2^ \left(2\cdot i+1 \right) -t_1^ \left(2\cdot i+1 \right)\right) }{T^ \left(2\cdot i+1 \right)}
-\end{align*}
-$$
-For $\frac{d}{dt}\textrm{f}(0)$:
-$$
-\begin{align*}
-\frac{1}{2}\cdotp \left(t_2^2-t_1^2\right)-\frac{1}{2}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^2\cdotp\left(t_2^4-t_1^4\right)+\dots &:&\frac{\left(-1 \right)^i\cdot 2^ \left(2\cdot i+1 \right)\cdot n^ \left(2\cdot i \right)}{ 1\cdot \left(2\cdot i+2 \right)\cdot \left(2\cdot i \right)! }\cdot \frac {\pi^ \left(2\cdot i \right)\cdot \left(t_2^ \left(2\cdot i+2 \right) -t_1^ \left(2\cdot i+2 \right)\right) }{T^ \left(2\cdot i+1 \right)}
-\end{align*}
-$$
-For $\frac{d^2}{dt^2}\textrm{f}(0)$:
-$$
-\begin{align*}
-\frac{1}{6}\cdotp \left(t_2^3-t_1^3\right)-\frac{1}{5}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^2\cdotp\left(t_2^5-t_1^5\right)+\dots &:&\frac{\left(-1 \right)^i\cdot 2^ \left(2\cdot i+1 \right)\cdot n^ \left(2\cdot i \right)}{ 1\cdot2\cdot \left(2\cdot i+3 \right)\cdot \left(2\cdot i \right)! }\cdot \frac {\pi^ \left(2\cdot i \right)\cdot \left(t_2^ \left(2\cdot i+3 \right) -t_1^ \left(2\cdot i+3 \right)\right) }{T^ \left(2\cdot i+1 \right)}
-\end{align*}
-$$
-For $\frac{d^3}{dt^3}\textrm{f}(0)$:
-$$
-\begin{align*}
-\frac{1}{24}\cdotp \left(t_2^4-t_1^4\right)-
- \frac{1}{18}\cdotp\left(\frac{\pi\cdotp n}{T}\right)^2\cdotp\left(t_2^6-t_1^6\right)+\dots&:&\frac{\left(-1 \right)^i\cdot 2^ \left(2\cdot i+1 \right)\cdot n^ \left(2\cdot i \right)}{ 1\cdot 2\cdot 3\cdot\left(2\cdot i+4 \right)\cdot \left(2\cdot i \right)!
}\cdot \frac {\pi^ \left(2\cdot i \right)\cdot \left(t_2^ \left(2\cdot i+4 \right) -t_1^ \left(2\cdot i+4 \right)\right) }{T^ \left(2\cdot i+1 \right)}
-\end{align*}
-$$
-and so on.
-Finally, the overall sequence for $\frac{d^m}{dt^m}$ is computed as:
-$$
-\frac{\left(-1 \right)^i\cdot 2^ \left(2\cdot i+1 \right)\cdot n^ \left(2\cdot i \right)}{ m!\cdot\left(1+m+2\cdot i\right)\cdot \left(2\cdot i \right)! }\cdot \frac {\pi^ \left(2\cdot i \right)\cdot \left(t_2^ \left(2\cdot i+m+1 \right) -t_1^ \left(2\cdot i+m+1 \right)\right) }{T^ \left(2\cdot i+1 \right)}
-$$
-Now we can find the sum using a CAS:
-$$
-\textrm{Ct}(n,m)=\sum_{i=0}^{\infty}{\left(\frac{\left(-1 \right)^i\cdot 2^ \left(2\cdot i+1 \right)\cdot n^ \left(2\cdot i \right)}{ m!\cdot\left(1+m+2\cdot i\right)\cdot \left(2\cdot i \right)! }\cdot \frac {\pi^ \left(2\cdot i \right)\cdot \left(t_2^ \left(2\cdot i+m+1 \right) -t_1^ \left(2\cdot i+m+1 \right)\right) }{T^ \left(2\cdot i+1 \right)}\right)}
-$$
-and $\textrm{Ct}(n,m)$ becomes a rather complex expression containing Lommel or hypergeometric functions.
-In particular, when $m=0$ the function becomes
-$$
-\textrm{Ct}(n,0)=\frac{\sin\left(\frac{2\pi n\cdot t_2}{T}\right)-\sin\left(\frac{2\pi n \cdot t_1}{T}\right)}{\pi n}
-$$
-for $m=1$:
-$$
- \textrm{Ct}(n,1)= \frac
-{2\pi n
- \left(
- \sin\left(\frac{2\pi n\cdot t_2}{T}\right)\cdot t_2-\sin\left(\frac{2\pi n\cdot t_1}{T}\right)\cdot t_1
- \right)
-+T\cdot
-\left(
-\cos\left(\frac{2\pi n\cdot t_2}{T}\right)-\cos\left(\frac{2\pi n\cdot t_1}{T}\right)
-\right)
-}
-{ 2\cdot(\pi n)^2 }
-$$
-and so on.
-So we can write an expression for $a_n$:
-$$
-a_n=\sum_{m=0}^{\infty}{\frac{d^m}{dt^m}\textrm{f}(0)\cdot\textrm{Ct}(n,m)}
-$$
-or
-$$
-a_n=\sum_{m=0}^{\infty}{\frac{1}{m!}\cdot\frac{d^m}{dt^m}\textrm{f}(0)\cdot\left(\sum_{i=0}^{\infty}{\left(\frac{\left(-1 \right)^i\cdot 2^ \left(2\cdot i+1 \right)\cdot n^ \left(2\cdot i \right)}{ \left(1+m+2\cdot i\right)\cdot \left(2\cdot i \right)! }\cdot \frac {\pi^ \left(2\cdot i \right)\cdot \left(t_2^ \left(2\cdot i+m+1 \right) -t_1^ \left(2\cdot i+m+1 \right)\right) }{T^ \left(2\cdot i+1 \right)}\right)}\right)}
-$$
-We can easily see that $\frac{1}{m!}$ is the Taylor coefficient; thus a relationship between the Fourier coefficients and the coefficients in the Taylor expansion, via special functions, is established.
-In the particular case of the full-wave function $t_1=-{T}/{2}$ and $t_2=+{T}/{2}$ we can write a simpler closed form for $\textrm{Ct}(n,m)$:
-$$
-\frac{
-\left(-1 \right)^n T^m
-\left(
- \left(\pi n\right)^ \left(-m-\frac{1}{2} \right)\left(\textrm{I}\cdot\textrm{L}_{\textrm{S1}}\left(m+\frac{3}{2}, \frac{1}{2}, -\pi n \right)-\textrm{L}_{\textrm{S1}}\left(m+\frac{3}{2}, \frac{1}{2}, \pi n \right)\right)+1+\left(-1 \right)^m \right)
-}
-{2^ {m}\cdot\left(m+1 \right)!}
-$$
-where $\textrm{L}_{\textrm{S1}}(\mu,\nu,z)=\textrm{s}_{\mu,\nu}(z)$ is the first Lommel function and $\textrm{I}=(-1)^{2^{-1}}$ (complex $a_n$ coefficient).
-For example, let us consider a parabolic signal with period $T$: $\textrm{g}(t)=A\cdot t^2+B\cdot t + C$.
-Coefficients $a_n$ can be found using the Fourier formula:
-$$
-a_n=\frac{2}{T}\cdotp\int_{-T/2}^{T/2}{\left(A\cdot t^2+B\cdot t + C\right)\cdotp\cos\left({\frac{2\pi\cdotp n \cdotp t}{T}}\right)\,dt} = A\cdot\left(\frac{T}{\pi n}\right)^2\cdot(-1)^n
-$$
-For the function $\textrm{g}(t)$: $\textrm{g}'(t)=2A\cdot t + B$, $\textrm{g}''(t)=2A$, and derivatives of order greater than two are zero. So we can use $\textrm{Ct}(n,m)$ for $m=0,1,2$. It is easy to check that if $t_1=-{T}/{2}$ and $t_2=+{T}/{2}$ then $\textrm{Ct}(n,0) = 0$ and $\textrm{Ct}(n,1)=0$; we get zero values for odd $m$ and nonzero values for even $m$. In particular, for $m=2$:
-$$
-\textrm{Ct}(n,2) = \frac{1}{2}\left(\frac{T}{\pi n}\right)^2\cdot(-1)^n
-$$
-and, as a further example, for $m=4$: $\textrm{Ct}(n,4) = \frac{1}{48}\left(\frac{T}{\pi n}\right)^4\cdot\left((\pi n)^2-6\right)\cdot(-1)^n$. 
-Finally, we can obtain, for example for $m$ up to 4:
-$$
-\begin{align*}
-a_n=\sum_{m=0}^{4}{\frac{d^m}{dt^m}\textrm{g}(0)\cdot\textrm{Ct}(n,m)}=&\\
-=\left(A\cdot 0^2+B\cdot 0 + C\right)\cdot 0+\left(2A\cdot 0+B\right)\cdot 0+\frac{1}{2}\left(\frac{T}{\pi n}\right)^2\cdot(-1)^n \cdot 2A+0\cdot0+...\cdot 0=\\
-=A\cdot\left(\frac{T}{\pi n}\right)^2\cdot(-1)^n
-\end{align*}
-$$
-the same result as from the Fourier integral. There is an interesting result for non-integer harmonics of $\textrm{g}(t)$:
-$$
-a_n=\frac{1}{120}A\cdot T^2 \cdot (-1)^{n+1} \left({}_{2}F_1\left( 1, 3; \frac{7}{2}; -\left(\frac{\pi n}{2}\right)^2\right)\cdot(\pi n)^2-20\right)
-$$
-where ${}_{2}F_1\left(a,b;c;z\right)$ is the hypergeometric function.
-So we can plot the coefficients calculated from the Fourier integral together with this special result (for $T=1,\,A=1$):
-[Figure: cosine series coefficients. Red circles and red line: the Fourier cosine coefficients (for the real part of the non-integer power of $-1$); solid blue line: real part of the expression with the hypergeometric function; dashed green line: imaginary part.] For integer $n$ the imaginary part is zero and $a_n$ is real.
-Similar expressions can be obtained for the sine $b_n$ series.<|endoftext|>
-TITLE: Example of functions where linear dependence isn't obvious
-QUESTION [5 upvotes]: The Wronskian lets us determine whether a set of functions (possibly the solutions to a differential equation) is linearly dependent or not. But, for every example in the book, it is very obvious whether one of the functions is a linear combination of the others. The examples in the book use 3-5 functions. What would be an example of a small number of functions where this isn't obvious?
-Or is the application of the Wronskian mostly to deal with large sets of functions... where the sheer number makes it hard to tell if they are dependent or not?
-
-REPLY [3 votes]: Have you tried to prove by hand (i.e., only using the definition of linear independence) that $\sin \theta$ and $ \cos \theta$ are linearly independent? Of course, this can be done with the help of the Wronskian.
-And what about $e^{i\theta}$ and $ e^{i (\theta + \frac{\pi}{2})}$? This is geometrically clear, but: can you see the difference between linear independence over the real and complex numbers?
-EDIT. Just to add a still more elementary example: what about $\sin^2\theta $ and $\cos^2\theta$?<|endoftext|>
-TITLE: Concerning sets of (Lebesgue) measure zero
-QUESTION [6 upvotes]: Perhaps this has a simple answer, but I don't know (I wouldn't be asking if I did). Every set of outer measure zero is Lebesgue measurable with Lebesgue measure zero. Is the converse true? That is, if a set has Lebesgue measure zero, does it necessarily have outer measure zero?
-And I suppose I should specify that I'm thinking in $\mathbb{R}^n$.
-
-REPLY [8 votes]: If $\mu^{*}$ is an outer measure, then we define the measurable sets in terms of $\mu^{*}$; the measure is then defined to be the restriction of the outer measure to the measurable sets.
-To be more explicit: if you have a $\sigma$-algebra and an outer measure $\mu^{*}$ on the algebra, then we say that a set $E$ is $\mu^{*}$-measurable if and only if for every $A$ in the $\sigma$-algebra,
-$$\mu^{*}(A) = \mu^{*}(A\cap E) + \mu^{*}(A\cap E').$$
-As Halmos says in his book Measure Theory,
-
-It is rather difficult to get an intuitive understanding of the meaning of $\mu^{*}$-measurability except through familiarity with its implications. 
-
-Once you have the definition of $\mu^{*}$-measurable, then let $S$ be the set of all measurable sets, and you define the measure $\mu$ on $S$ by $\mu(E) = \mu^{*}(E)$ for all $E\in S$.
-In particular, this holds for the Lebesgue measure: if $E$ is Lebesgue measurable, then the Lebesgue measure of $E$ is equal to the outer measure of $E$, because the Lebesgue measure of $E$ is defined to be the outer measure of $E$. This holds for any value of the measure, not just for measure $0$.
-The point of the theorem you state earlier is that having outer measure zero implies that the set is measurable; that's the nontrivial part of the statement (not that the Lebesgue measure of the set will then be zero).<|endoftext|>
-TITLE: How to find rectangle intersection on a coordinate plane
-QUESTION [7 upvotes]: Given the coordinates of two rectangles on a coordinate plane, what would be the easiest way to find the coordinates of the intersecting rectangle of the two?
-I am trying to do this programmatically.
-
-REPLY [5 votes]: Working from Zwarmapapa's solution, you probably want to check that the rectangles actually overlap, and optionally that the overlap has a non-zero area.
-When there is no overlap, the two coordinates will be reversed (top left will actually be bottom right and vice-versa).
-If you want to allow rectangles with zero area (edge/corner intersection), change the two less-than checks to less-than-or-equal.
-// Assuming java.awt.Rectangle, whose x, y, width and height fields are public ints
-Rectangle r1 = rect1;
-Rectangle r2 = rect2;
-Rectangle intersectionRect = null;
-
-// The overlap is bounded by the larger of the two left/top edges
-// and the smaller of the two right/bottom edges
-int leftX   = Math.max( r1.x, r2.x );
-int rightX  = Math.min( r1.x + r1.width, r2.x + r2.width );
-int topY    = Math.max( r1.y, r2.y );
-int bottomY = Math.min( r1.y + r1.height, r2.y + r2.height );
-
-if ( leftX < rightX && topY < bottomY ) {
-    intersectionRect = new Rectangle( leftX, topY, rightX - leftX, bottomY - topY );
-} else {
-    // Rectangles do not overlap, or overlap has an area of zero (edge/corner overlap)
-}<|endoftext|>
-TITLE: Ideal consisting of zero divisors
-QUESTION [6 upvotes]: Let $I$ be a finitely generated ideal of a commutative ring $R$. Assume every element of $I$ is a zero divisor. Does there then exist an $x \neq 0$ in $R$ with $xI=0$?
-This is true if $0$ is a decomposable ideal, for example if $R$ is noetherian. I wonder if we actually need this. Doesn't it sound plausible? The problem is that we cannot just multiply together the elements which kill the generators of $I$; the product can vanish.
-
-REPLY [4 votes]: No, for a counterexample see Exercises 2-2-6,7 pp. 62-63 in Kaplansky: Commutative Rings (excerpted below). See also the discussion following Theorem 82 p.56.<|endoftext|>
-TITLE: Exploring the quadratic equation $x^2 + \lvert x\rvert - 6 = 0$
-QUESTION [7 upvotes]: This question and the described solution are copied from a test paper:
-For the equation $x^2 + |x| - 6 = 0$, analyze the four statements below for correctness.
-
-there is only one root
-sum of the roots is $+1$
-sum of the roots is zero
-the product of the roots is $+4$
-
-Answer : (3)
-Answer Explanation :
-If $x > 0$, $|x| = x$.
-The given equation will be $x^2 + x - 6 = 0 \Rightarrow x = 2, -3 \Rightarrow x = 2$.
-If $x < 0$, $|x| = -x$.
-The given equation will be $x^2 - x - 6 = 0 \Rightarrow x = -2, 3 \Rightarrow x = -2$.
-Sum of the roots is $2 - 2 = 0$.
-Now I have a doubt about the statement "If $x < 0$, $|x| = -x$." I think modulus means that $|x|$ is always positive?! Also, I can see that (2) seems to be the correct option, isn't it?!
-Please post your views.
-
-REPLY [6 votes]: $f(x)=x^2+|x|-6$ is an even function (that is, $f(x)=f(-x)$ for all $x$; equivalently, the graph of $y=f(x)$ is symmetric about the $y$-axis), so if $f(c)=0$ then $f(-c)=0$, and hence the sum of the zeros of $f$ must be $0$.<|endoftext|>
-TITLE: Why does a convex set have the same interior points as its closure?
-QUESTION [26 upvotes]: Let $C$ be a convex subset of $\mathbb{R}^n$. I've been trying for hours to prove that $\dot{\overline{C}}=\dot{C}$. Somehow my intuition completely fails me. I found a proof in a textbook, but just got stuck on another statement the author considered obvious. Could someone please give a proof that uses little more than elementary linear algebra, topology, and the definition of a convex set?
-Edit:
-The proof mentioned above is from Blackwell and Girshick:
-Let $y\in\dot{\overline{C}}$ and $T$ be a ball around $y$ contained in $\overline{C}$. Then $C\cap T$ has an inner point, as otherwise $C\cap T$ would be contained in a hyperplane and $\overline{C\cap T}=\overline{T}$ would be contained in the same hyperplane. The problematic statement is "as otherwise $C\cap T$ would be contained in a hyperplane".
-Another thing: I would be interested in a proof that doesn't use the theorem about separating a convex set from a point by a hyperplane, as I came across this problem in a proof of that very theorem (in the appendix of Stochastic Finance by Föllmer and Schied). To be more precise, it occurs in the case of the point in question being in the boundary of $C$, when it is tacitly assumed that it is also in the boundary of $\overline{C}$. I know this isn't strictly necessary, as I could use another proof, e.g. the one referred to by Mike, but now I'm curious.
-
-REPLY [9 votes]: This is from a set of class notes.
-You need a lemma (interior and closure operators are $\mathrm{int},\mathrm{cl}$, resp.):
-Lemma. If $C$ is a convex subset of a topological vector space, and if $x\in \mathrm{int} C$ and $y\in \mathrm{cl} C$, then $[x,y)\subset \mathrm{int} C$.
-The half-open segment $[x,y)$ is a right-open convex combination. If $\mathrm{int}C$ is empty, then the above lemma is vacuously true. Try to prove why it holds when the interior is not empty.
-Proposition. $\mathrm{int} C = \mathrm{int}\;\mathrm{cl}\;C$.
-Let $y\in \mathrm{int\; cl\;} C$; position it in an open ball, noting that the interior of the closure of $C$ is open, and hence $\exists r>0$ such that $B_r(y)\subset \mathrm{cl\;} C$. Pick a $y'\in \mathrm{int}\;C$. Then there exists an $\epsilon>0$ such that $y'+(1+\epsilon)(y-y') = y+\epsilon(y-y')\in B_r(y)\subset \mathrm{cl}\;C$. At the same time, $y$ belongs to the segment $[y',y+\epsilon(y-y'))$, so that, by the previous lemma, $y$ belongs to $\mathrm{int\;}C$. The reverse inclusion is straightforward, since $C\subseteq \mathrm{cl\;}C$, and thus we obtain the result.<|endoftext|>
-TITLE: Why are very large prime numbers important in cryptography?
-QUESTION [40 upvotes]: Firstly, you guys are awesome, and I learn quite a bit just from reading the questions of others.
-Secondly, a friend asked me recently why large primes are important for data security, and I was unable to give him an answer with which I myself was satisfied. Various Wikipedia articles have mostly pointed out an embarrassing paucity in mathematical knowledge on my part, and since this happens to be a very math-related question (and not a programming-related question) I was hoping someone could shed some light.
-tl;dr: question reads as title.
-
-REPLY [52 votes]: There is a whole class of cryptographic/security systems which rely on what are called "trap-door functions". The idea is that they are functions which are generally easy to compute, but for which finding the inverse is very hard (here, "easy" and "hard" refer to how quickly we know how to do it), but such that if you have an extra piece of information, then finding the inverse is easy as well. Primes play a very important role in many such systems.
-One such example is the function that takes two integers and multiplies them together (something we can do very easily), versus the "inverse", which is a function that takes an integer and gives you proper factors (given $n$, two numbers $p$ and $q$ such that $pq=n$ and $1\lt p,q\lt n$). If $n$ is the product of two primes, then there is one and only one such pair.
-Another example is the discrete logarithm. To consider a simple example, look at the integers modulo, say, $7$. The integers between $1$ and $6$, inclusive, form a group under multiplication, and in fact every number between $1$ and $6$ is a power of $3$. The "discrete logarithm problem" would be, given a number $x$ between $1$ and $6$, to find a number $a$ such that $3^a$ equals $x$ modulo $7$. In this case, you can just try powers of $3$ until you hit the right answer. But if the modulus is very large, then this would take too much time.
-One method for exchanging information over an open channel relies on the fact that we do not have very good methods of finding discrete logarithms in general, but we do have very good methods for computing modular powers. The idea is: suppose you and I need to exchange information. We want to use some very secure cryptographic system that relies on a complicated key. But how can we agree on a key? If we have some secure way of communicating so that when we agree on the key nobody will overhear us, then why bother with the entire exercise? We should just communicate using that secure way. So instead we need to communicate at a place where we can be overheard. How can we agree on a secret key if everyone can hear us? Well, Diffie and Hellman proposed the following method:
-Pick a very large prime $p$, and a number $r$ such that every number between $1$ and $p-1$ is a power of $r$ modulo $p$ (such numbers $r$ are known to exist for every prime; they are called primitive roots). Everyone knows $p$ and everyone knows $r$. Then I pick a secret number $a$, and you pick a secret number $b$. I cannot tell you my secret number (it's secret). But I tell you what $r^a \mod p$ is. Because computing modular powers is easy, I can do this computation easily enough; but because we don't know how to do discrete logarithms easily, we are hoping that nobody will be able to figure out $a$ just from knowing $r^a$... at least, not very quickly. Likewise, you tell me $r^b \mod p$. Now, you know $r^a$, and you know what $b$ is, so you compute $(r^a)^b \mod p$. By the laws of exponents, you now know (secretly!) the number $r^{ab} \mod p$. I, on the other hand, know $r^b$ (because you told me that number) and I know what $a$ is. So I compute $(r^b)^a\mod p$. But this is the same as $r^{ab} \mod p$. So now we both have a piece of information, namely the number $r^{ab}\mod p$. This is going to be our "secret key".
-Now, if someone can figure out either $a$ or $b$, then since they also know $r^a$ and $r^b$, they'll be able to figure out our secret key.
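-To make the mechanics concrete, here is a minimal toy sketch of the exchange in Java, using BigInteger's modPow for the modular powers; the prime $p=23$ and primitive root $r=5$ are of course absurdly small, and a real implementation would use a prime of thousands of bits with secret exponents drawn from a secure random source:
-import java.math.BigInteger;
-
-public class DiffieHellmanToy {
-    public static void main(String[] args) {
-        BigInteger p = BigInteger.valueOf(23); // public prime (toy-sized)
-        BigInteger r = BigInteger.valueOf(5);  // public primitive root mod 23
-        BigInteger a = BigInteger.valueOf(6);  // my secret exponent
-        BigInteger b = BigInteger.valueOf(15); // your secret exponent
-
-        BigInteger ra = r.modPow(a, p);        // I announce r^a mod p
-        BigInteger rb = r.modPow(b, p);        // you announce r^b mod p
-
-        BigInteger myKey   = rb.modPow(a, p);  // I compute (r^b)^a mod p
-        BigInteger yourKey = ra.modPow(b, p);  // you compute (r^a)^b mod p
-
-        // both are r^(ab) mod p, the shared secret key
-        System.out.println(myKey + " == " + yourKey);
-    }
-}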
We hope this is hard, but we certainly need $p$ to be very big: otherwise, they can just try all powers of $r$ until they hit the right one. We need the "search space" to be very big, so we need $p$ to be very big. Added: As jug points out, having $p$ big is not sufficient. There are algorithms for computing discrete logarithms that are particularly good with certain kinds of primes, so we generally also require that $p$ satisfy some additional "good" properties relative to the cryptographic application. You generally want $p$ and $(p-1)/2$ to be both primes, for example. On the other hand, in practice one does not really need $r$ to be a primitive root. Instead, it is enough that it generate a "large" subgroup of the multiplicative group, which one generally wants to be of prime order. -(Note: figuring out $a$ or $b$ is just one way in which they could figure out our secret key $r^{ab}$, since everyone knows $p$, $r$, $r^a$, and $r^b$. It is not known whether this is essentially the only way to break this "key exchange" method; the method really relies on whether one can figure out $r^{ab}$ from knowing $r$, $p$, $r^a$, and $r^b$; this is called the Diffie-Hellman problem; the Diffie-Hellman problem is at most as hard as the Discrete Logarithm Problem, but we do not know if it is just as hard (it could be easier); and we don't know just how hard the Discrete Logarithm Problem is, we just know that we don't have any easy ways of doing it yet). -So key exchange is one place where big primes are very important. (Diffie-Hellman is not the only way to do key exchanges). Another place where big primes play a big role is in RSA which is a cryptosystem that also relies on big primes (this time, two big primes $p$ and $q$, and we do arithmetic modulo $n=pq$). -Added: Might as well add a quick overview of RSA and how the primes come into play. Here, once again modular exponentiation is part of the process. This is an "public key" system: I will tell everyone how to send me secret messages, which hopefully only I can decode. (In Diffie-Hellman, we did not exchange a message; we agreed on a secret key that we will use with a separate system that requires a secret key; for example, AES). I pick two large primes $p$ and $q$, and compute $n=pq$. I also pick a number $e$ that is relatively prime to $(p-1)(q-1)$ (I can do that because I know $p$ and $q$). Then I use the Euclidean algorithm, which is pretty quick, to find a $d$ such that $ed\equiv 1 \pmod{(p-1)(q-1)}$. Finally, I tell everyone what $n$ and $e$ are. If you want to send me a message, you first convert it to a number $M$ using some standard mechanism. Then you compute $M^e \mod n$, and you tell me what $M^e\mod n$ is. I will take $M^e$ and compute $(M^e)^d = M^{ed}\mod n$. Because $ed\equiv 1 \pmod{(p-1)(q-1)}$, then $M^{ed}\equiv M\pmod{n}$, so that is how I recover $M$. The security of the system relies in hoping that from knowing $n$ and $e$, it is difficult to figure out $d$ (it is easy if I know $p$ and $q$; this is why this is believed to be a "trap-door function" as described in the first paragraph). The problem is at most as hard as factoring $n$, because if you can factor $n$ then you can find $d$ the same way I did; it is not known if the problem of finding $M$ from $n$, $M^e$, and $e$ is at least as hard as factoring (it has been shown that some variants are at least as hard as factoring), and again we don't know just how hard factoring is. 
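-Here is the same recipe as a minimal toy sketch in Java, using the classic textbook primes $p=61$ and $q=53$; real keys use primes hundreds of digits long, with padding schemes layered on top of the raw arithmetic:
-import java.math.BigInteger;
-
-public class RsaToy {
-    public static void main(String[] args) {
-        BigInteger p = BigInteger.valueOf(61), q = BigInteger.valueOf(53);
-        BigInteger n = p.multiply(q);                          // n = 3233, made public
-        BigInteger phi = p.subtract(BigInteger.ONE)
-                .multiply(q.subtract(BigInteger.ONE));         // (p-1)(q-1) = 3120
-        BigInteger e = BigInteger.valueOf(17);                 // public, coprime to phi
-        BigInteger d = e.modInverse(phi);                      // secret: e*d = 1 mod phi
-
-        BigInteger M = BigInteger.valueOf(65);                 // message encoded as a number
-        BigInteger C = M.modPow(e, n);                         // sender computes M^e mod n
-        BigInteger back = C.modPow(d, n);                      // receiver computes C^d = M mod n
-        System.out.println(C + " decrypts back to " + back);   // 2790 decrypts back to 65
-    }
-}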
But: because we know that if you can factor $n$ then you can read the message, we want to make $n$ difficult to factor. It only has two factors, but you don't want them to be easy to find, so you want $p$ and $q$ to be large for sure. (Again, there are other conditions one usually puts on $e$, $p$, and $q$ to make sure that certain special attacks do not succeed easily, but at least we need $p$ and $q$ to be very big).<|endoftext|>
-TITLE: What are valent vertices?
-QUESTION [6 upvotes]: Page 13 of Tropical Algebraic Geometry by Itenberg, Mikhalkin, and Shustin mentions 1-valent vertices, but I haven't been able to find a source that actually defines this term or managed to guess the definition myself. Either a definition or reference would be greatly appreciated. Thanks!
-
-REPLY [8 votes]: In graph theory, the valency of a vertex is sometimes used to mean its degree, i.e. the number of edges incident on it. Both Wikipedia and MathWorld mention both terms as synonyms.<|endoftext|>
-TITLE: A non-noetherian ring with noetherian spectrum
-QUESTION [33 upvotes]: Question 1: Does such a ring exist?
-Note: The definition of a noetherian topological space is similar to that in rings or sets: every descending chain of closed subsets stops after a finite number of steps.
-(Question 2: is this equivalent to saying that every descending chain of opens stops?)
-
-REPLY [2 votes]: Yet another example from the $p$-adic world: Let $p$ be prime and $F = ℚ_p^{\mathrm{tr}}$ the maximal totally ramified extension of $ℚ_p$. Then $F$ is a non-archimedean valued field with a value $\lvert\, ·\,\rvert \colon F → [0..∞)$ that extends every value of every finite totally ramified extension of $ℚ_p$.
-Hence, $\mathfrak o = \{x ∈ F;~\lvert x \rvert ≤ 1 \}$ is a local ring with its maximal ideal given by $\mathfrak m = \{x ∈ \mathfrak o;~\lvert x \rvert < 1 \}$. Let $\mathfrak a = p\mathfrak o = \{x ∈ \mathfrak o;~\lvert x \rvert ≤ 1/p \}$. Then $\operatorname{rad} \mathfrak a = \mathfrak m$, and so $\operatorname{Spec} \mathfrak o / \mathfrak a = \operatorname{Spec} \mathfrak o / \mathfrak m$ is a point. However, $\mathfrak o / \mathfrak a$ is not Noetherian, since there are infinite strictly ascending chains of ideals above $\mathfrak a$, for instance
-$$\mathfrak a = (p) \subsetneq (\sqrt p) \subsetneq (\sqrt[4] p) \subsetneq (\sqrt[8] p) \subsetneq ….$$<|endoftext|>
-TITLE: Geometric Distribution versus Negative Binomial Distribution
-QUESTION [5 upvotes]: The pdf of the geometric distribution is the following: $f(x) = (1-p)^{x}p$. Also $E[X] = \frac{1-p}{p}$ and $\text{Var}[X] = \frac{1-p}{p^2}$. The pdf of the negative binomial distribution is: $f(x) = \binom{r+x-1}{x}p^{r}(1-p)^{x}$. Also $E[X] = \frac{r(1-p)}{p}$ and $\text{Var}[X] = \frac{r(1-p)}{p^2}$. So is the geometric distribution really a special case of the negative binomial? In each case, $X$ is the number of failures until a success occurs?
-Also, does the geometric distribution have a different version as well? The $\binom{r+x-1}{x}$ seems analogous to $\binom{n+k-1}{k}$.
-
-REPLY [13 votes]: The geometric distribution describes the probability of making $x$ trials before a success, and the negative binomial distribution describes that of making $x$ trials before $r$ successes are obtained, where $r$ is fixed. So you see that the former is a particular case of the latter, namely, when $r=1$.
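-To see the reduction concretely, here is a minimal numerical check in Java that the negative binomial pmf at $r=1$ agrees term by term with the geometric pmf:
-public class NegBinCheck {
-    // binomial coefficient C(n, k), computed as a double (fine for small arguments)
-    static double binom(int n, int k) {
-        double c = 1;
-        for (int i = 1; i <= k; i++) c = c * (n - k + i) / i;
-        return c;
-    }
-    public static void main(String[] args) {
-        double p = 0.3;
-        int r = 1;
-        for (int x = 0; x <= 5; x++) {
-            double geometric   = Math.pow(1 - p, x) * p;
-            double negBinomial = binom(r + x - 1, x) * Math.pow(p, r) * Math.pow(1 - p, x);
-            System.out.println(x + ": " + geometric + " vs " + negBinomial); // identical
-        }
-    }
-}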
-As to your last question, it doesn't matter whether you call the variable $x$ or $k$, as long as you keep in mind that it takes on the values $0,1,2,\ldots$<|endoftext|>
-TITLE: Lebesgue integral basics
-QUESTION [108 upvotes]: I'm having trouble finding a good explanation of the Lebesgue integral. As per the definition, it is the expectation of a random variable. Then how does it model the area under the curve? Let's take for example a function $f(x) = x^2$. How do we find the integral of $f(x)$ over $[0,1]$ using the Lebesgue integral?
-
-REPLY [2 votes]: I first understood Lebesgue integration as integration over the range rather than over the domain, but it never seemed plausible to me that this alone could resolve the problem of discontinuity. I think the trick is that a measure captures the "width" of a region (where the function value is the same) better than Riemann's $\Delta x \rightarrow 0$ does. The Lebesgue integral is basically
-$$\int_S f(s)d\mu = \sum {f(s) \mu(s)}$$
-You may think of it as scanning over $f(s)$ -- all the values that you may have in the domain $S = \cup(s)$ -- and summing up all rectangles of size $height(s) \times width(s) = f(s) \mu(s)$.
-You can say that you integrate over the range of values $f(s)$. However, there are also other differences. Riemann implies that the width of every interval is the same $dx$; that is, if you have the function
-$$f(1) => 1\\
-f(2) => 1\\
-f(3) => 2\\
-f(4) => 1$$
-the integral will accumulate the sum as $0+1+1+2+1$. So, if you have 4 coins, you can compute the sum by simply iterating over them. You see that the speed of growth is proportional only to the coin value $f(x)$ and not to the coin index $x$. This is not the case for Lebesgue. For Lebesgue, you group the coins by value (I come from Scala), List(1,1,2,1) => Occurrences(1->3, 2->1), and you see immediately that you have two heaps of coins: ones in the first heap and twos in the second. These are two rectangles (coin value $\times$ heap size) that you need to add together. So, you integrate by range here, but you do not simply add together the measures (the number of coins) that exceed the current value; you add coin value $\times$ count. This is different from Riemann. This was my second stage of understanding the integral.
-This enables integrating point-charge-like discontinuities. Consider a step function. It jumps instantly at some points. It is an integral of some infinitely narrow spikes, and the integral grows instantly at the spike point, as if there were a finite amount of charge/mass concentrated at an infinitesimal point of space, in contrast to continuously distributed charge/mass in space. The amount of charge at that point determines the height of the step. It seems that Riemann has difficulty with integrating the spike because no matter how infinitesimal you make your $dx$, it fails to break the interval into rectangles of constant (and infinite) height $f(x)$ to sum up. On the other hand, we can say that there is charge $n$ at $x=x_0$, and when we integrate over the axis of values, we suddenly pass through the value $n$, which is measured (by counting measure) to have a width of $k$ (coins), even though it is confined to a single real point of width $0$.
-
-Now, how is this related to expectations? Your 4 coins, $3\cdot 1 + 1\cdot 2$, add up to 5 euros and, when you draw an arbitrary coin, you expect its value to be $(3\cdot 1+1\cdot 2)/4 = 5/4 = 1.25$ euros. That is, one coin contributes 1.25 euros on average. The Lebesgue integral is equal to the expectation when the total measure of the coins (the number of coins) is one.
That is, each of the $n$ coins is not actually a whole coin but $1/n$th of one. Now the Lebesgue integral is $3/4\cdot 1 + 1/4\cdot 2 = 5/4$, which is the expectation. That is not surprising, because integrating every value times its probability is exactly what the expectation is.
-I hesitated over whether or not to post my undergraduate garbage reflections, but this video persuaded me that I am on the right track.<|endoftext|>
-TITLE: Summation of $\sum\limits_{n=1}^{\infty} \frac{x(x+1) \cdots (x+n-1)}{y(y+1) \cdots (y+n-1)}$
-QUESTION [8 upvotes]: For $x>0$ and $y>x+1$, how do we prove that $$\sum\limits_{n=1}^{\infty} \frac{x(x+1) \cdots (x+n-1)}{y(y+1) \cdots (y+n-1)} = \frac{x}{y-x-1}$$
-
-REPLY [9 votes]: This is based on Robert Smith's observation above and Robin Chapman's beta trick in some previous problem.
-$\frac{\Gamma{(x+n)}\Gamma{(y-x)}}{\Gamma{(y+n)}} = \int_{0}^{1} t^{x+n-1} (1-t)^{y-x-1} dt$,
-summing over $n$ we get,
-$\Gamma{(y-x)}\sum_{n \geq 1}\frac{\Gamma{(x+n)}}{\Gamma{(y+n)}} = \int_{0}^{1} \sum_{n \geq 1} t^{x+n-1} (1-t)^{y-x-1} dt = \int_{0}^{1} t^x (1-t)^{y-x-2}dt, $
-or,
-$\sum_{n \geq 1}\frac{\Gamma{(x+n)}}{\Gamma{(y+n)}} = \frac{1}{\Gamma(y-x)}\int_{0}^{1} t^{x+1-1}(1-t)^{y-x-1-1}dt = \frac{\Gamma{(x+1)}\Gamma{(y-x-1)}}{\Gamma{(y-x)}\Gamma(y)} = \frac{\Gamma{(x+1)}}{(y-x-1)\Gamma(y)}$
-and hence,
-$\frac{\Gamma(y)}{\Gamma(x)} \sum_{n \geq 1}\frac{\Gamma{(x+n)}}{\Gamma{(y+n)}} = \frac{\Gamma(x+1)}{(y-x-1)\Gamma(x)} = \frac{x}{y-x-1}.$<|endoftext|>
-TITLE: Is "locally linear" an appropriate description of a differentiable function?
-QUESTION [15 upvotes]: In this answer on meta, Pete L. Clark said:
-
-I think the question concerns the idea that a differentiable curve becomes more and more like a straight line segment the closer one zooms in on its graph. (And I must say that I regard part of this confusion as an artifact of badly written recent calculus books who describe this phenomenon as "local linearity". Ugh!)
-
-So, what's wrong with calling it "local linearity"? (Examples of the specific language from some relatively recent books follow.)
-
-From Finney, Demana, Waits, and Kennedy's Calculus: Graphical, Numerical, Algebraic, 1st ed, p107:
-
-A good way to think of differentiable functions is that they are locally linear; that is, a function that is differentiable at a closely resembles its own tangent line very close to a.
-
-
-From Hughes-Hallett, Gleason, et al's Calculus: Single Variable, 2nd ed, pp138-9:
-
-When we zoom in on the graph of a differentiable function, it looks like a straight line. In fact, the graph is not exactly a straight line when we zoom in; however, its deviation from straightness is so small that it can't be detected by the naked eye.
-
-Following that, there is discussion of the tangent line approximation, then a theorem titled "Differentiability and Local Linearity" (the first time "local linearity"/"locally linear" appears) stating that if a function f is differentiable at a, then the limit as x goes to a of the quotient of the error in the tangent line approximation and the difference between x and a goes to 0.
-
-Ostebee and Zorn's Calculus from Graphical, Numerical, and Symbolic Points of View, 1st ed, p110:
-
-Remarkably, the just-illustrated strategy of zooming in to estimate slope almost always works. Zooming in on the graph of almost any calculus function $f$, at almost any point $(a,f(a))$, eventually produces what looks like a straight line with slope $f'(a)$. A function with this property is sometimes called locally linear (or locally straight) at $x=a$.
[Margin note: These aren't formal definitions, just descriptive phrases.] Local linearity says, in effect, that $f$ "looks like a line" near $x=a$ and therefore has a well-defined slope at $x=a$. - - -(I did not find the term "local linearity" or "locally linear" at a quick glance in Stewart's Calculus: Concepts and Contexts, 2nd ed, or Leithold's The Calculus 7; the rest of the calculus books I have on hand predate the inclusion of graphing calculators/software in textbooks, so are not suitable for comparison.) - -REPLY [5 votes]: I see that I never answered or commented on this question, probably because I was curious to see if others would agree and be able to divine my objections without my explicit input. -This indeed happened: my main objection was that this use of "locally" is at odds with any other nearby use of the term in mathematics. Especially, a topological space $X$ should be locally P if every point admits a base of neighborhoods each having property P. Under this definition, a locally linear function on an interval must actually be linear! -I didn't mean to imply that there was no rigorous mathematical concept behind this terminology. Indeed the "infinite zooming in" can be made precise, as T.. did in his answer above. This really does give a (not completely trivial) characterization of differentiable functions. -However, is this an important or useful intuition for beginning calculus students? I don't think so. I don't think it is meant to be taken very seriously, because if you start taking it seriously you'll find yourself asking what else can happen to a continuous curve upon "infinite zooming in", and then you're well on your way to self-similarity and fractal geometry. This is fascinating mathematics, but it is of course not part of the story of differential calculus. -In fact, as you can see from the above quotes, most of the texts which use the term "local linearity" take some care to emphasize its informality: it is a way of thinking about differentiable functions. However, what they don't explain -- and what is not apparent, even to many people who are both research mathematicians and veteran calculus teachers -- is why we are introducing this (mathematically valid, but not directly mathematically relevant) analogy at the very beginning of the story of differential calculus. Speculating from what I've seen, this is part of a relatively recent wave of calculus texts (I believe the trend started after I myself took calculus, in the early 1990's) which (i) want to start talking about derivatives right at the beginning of the text, but (ii) don't want to get bogged down in any of the attendant technicalities or look like they are giving incomplete explanations. The text that I used as a graduate student teaching calculus in the late 1990's did something similar to this: they talked about "slope-predictors" in Chapter 1 and saved "tangent lines" for Chapter 2. -Perhaps this pedagogical choice is defensible, but it certainly does require defending. Purely as it lies in the calculus text, I do not agree with it at all. Differential calculus has a rich enough cast of characters and ideas; it does not seem wise to make up more terminology and introduce other themes and ideas which will not be followed up on later in the course. When I teach freshman calculus (please note: despite the fact that I have taught freshman calculus more than a few times, if you trust the student evaluations, I am less good at it than the average graduate student in my department: caveat emptor!) 
I like to get the main ideas out into the open as soon as possible, so I spend the entire first lecture talking about tangent lines and instantaneous velocity. I start here because students have been trained in both of these concepts in their precalculus mathematics (and physics), so generally do have some intuition about these things. However, their intuition falls rather short of an acceptable general definition of either of these important concepts (if things seem to be going well, I may ask for various definitions of tangent lines and then draw counterexamples to them!). What one soon sees is that there is in each case a closely related, but technically much simpler definition -- secant lines and average velocity -- and the matter of it is to explain e.g. how we get from a bunch of average velocities to the instantaneous velocity. -In this way, on the first day I try to set up about the first third to half of the course: we want to compute tangent lines (I try to say a little bit about why, e.g. minimizing and maximizing functions and graphing), and for that we need to learn a little bit about limits. These concepts will get reinforced again and again throughout the rest of the course. Infinite zooming in will never come up again, so why bring it up at all, and especially right at the beginning?<|endoftext|> -TITLE: What is the probability that every pair of students studies together at some point? -QUESTION [15 upvotes]: A cohort in a school consists of 75 students who study for 6 years. Each year, the students are randomly distributed into 3 classrooms of 25 students each. What is the probability that, after 6 years, each student has at some point been in a classroom with every other student? -More generally: Starting with an edgeless (undirected) graph on cn vertices, a round consists of first randomly partitioning the vertices into c disjoint sets of n vertices each, then adding an edge between every pair of not-yet-joined vertices that lie in the same set. What is the probability that, after y rounds, the result is a complete graph on cn vertices? -I have estimates and solutions to special cases, and it's straightforward to find -the probability that a single given student sees all the others, but I don't know how to tackle the question in general. (I do have a very pretty but completely useless expression for the exact answer, which I can supply if there's interest.) In the case c=3, n=25, y=6 it's clear that the answer is "so close to zero that nobody can tell the difference" but I was hoping for a more precise result. Any guidance appreciated. - -REPLY [2 votes]: The only help I can give is to suggest a lower bound. You have cn students and the study together graph could then have cn(cn-1)/2 edges. Each session you pick $\frac{cn(n-1)}{2}$ edges to color in and you ask whether after y sessions all the edges are colored. If you ignore the class grouping, you can do the same problem randomly choosing $\frac{ycn(n-1)}{2}$ edges independently with replacement to color. I think your case will color the whole graph with higher probability, as no edge can claim more than $y$ of the colorings, but it should be close. Now this is a nice Poisson distribution. Each edge is colored with probability $1-\exp(-\lambda)$, where $\lambda$ is the average number of colorings each edge receives, here $\frac{ycn(n-1)}{cn(cn-1)}$, or just about $\frac{y}{c}$. 
The chance that all edges are colored is then $(1-\exp(-\lambda))^{\frac{cn(cn-1)}{2}}$, which as you say is very small.<|endoftext|>
-TITLE: Ergodic flow in tori
-QUESTION [12 upvotes]: Let $\mathbb{T}^n = \{ (z_1,\ldots,z_n) \in \mathbb{C}^n : |z_l| = 1, \; 1 \leq l \leq n \}$ denote the $n$-torus, and let $t_1, \ldots, t_n$ be arbitrary real numbers. Then it can be shown that the topological closure $H$ in $\mathbb{T}^n$ of the one-parameter subgroup
-$$H' = \{(e^{2\pi i t_1 y}, \ldots, e^{2\pi i t_n y}) \in \mathbb{T}^n : y \in \mathbb{R}\}$$
-is an $r$-dimensional subtorus of $\mathbb{T}^n$, where $0 \leq r \leq n$ is the dimension over $\mathbb{Q}$ of the span of $t_1, \ldots, t_n$, and furthermore that
-$$\lim_{Y \to \infty} \frac{1}{Y} \int^{Y}_{0}{g(e^{2\pi i t_1 y}, \ldots, e^{2\pi i t_n y}) \: dy} = \int_{H}{g(z) \: d\mu_H(z)}$$
-for any continuous function $g : \mathbb{T}^n \to \mathbb{C}$, where $\mu_H$ is the normalised Haar measure on $H$.
-My question is, for which sets $B \subset \mathbb{T}^n$ do we have that
-$$\lim_{Y \to \infty} \frac{1}{Y} \int\limits_{\{y \in [0,Y] : (e^{2\pi i t_1 y}, \ldots, e^{2\pi i t_n y}) \in B\}}{dy} = \mu_H(B).
-$$
-I have a feeling that some sort of approximation by continuous functions should tell you, but I can't seem to make it work for some reason.
-
-REPLY [4 votes]: I guess now that I know the answer to this question, I shouldn't leave it unanswered! Basically, we define the probability measure $\mu_Y$ for each $Y > 0$ by
-$$\mu_Y(B) = \frac{1}{Y} \int\limits_{\{y \in [0,Y] : (e^{2\pi i t_1 y}, \ldots, e^{2\pi i t_n y}) \in B\}}{dy}$$
-for each Borel set $B \subset \mathbb{T}^n$. The fact that
-$$\lim_{Y \to \infty} \frac{1}{Y} \int^{Y}_{0}{g(e^{2\pi i t_1 y}, \ldots, e^{2\pi i t_n y}) \: dy} = \int_{H}{g(z) \: d\mu_H(z)}$$
-for any continuous function $g : \mathbb{T}^n \to \mathbb{C}$, where $\mu_H$ is the normalised Haar measure on $H$, implies that the probability measures $\mu_Y$ are converging weakly to $\mu_H$. By the Portmanteau theorem, this is equivalent to
-$$\mu_H(B) = \lim_{Y \to \infty} \mu_Y(B) = \lim_{Y \to \infty} \frac{1}{Y} \int\limits_{\{y \in [0,Y] : (e^{2\pi i t_1 y}, \ldots, e^{2\pi i t_n y}) \in B\}}{dy}$$
-for every continuity set $B \subset \mathbb{T}^n$; that is, for every Borel set $B$ whose boundary in $\mathbb{T}^n$ has $\mu_H$-measure zero.<|endoftext|>
-TITLE: Fejér's Theorem (Problem in Rudin)
-QUESTION [6 upvotes]: Can you solve Problem 19 from Chapter 8 of Rudin's Principles of Mathematical Analysis? I'm having a lot of difficulty with it.
-I've proven the first part, namely
-$$\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^N \exp(ik(x+n\alpha))=\frac{1}{2\pi}\int_{-\pi}^\pi(\cdots) = \begin{cases} 1\text{ if }k=0\\0\text{ otherwise}\end{cases}$$
-Now I want to prove that if $f$ is continuous in $\mathbb{R}$ and $f(x+2\pi)=f(x)$ for all $x$ then
-$$\lim_{N\to\infty} \sum_{n=1}^{N} \frac{1}{N} f(x+n\alpha)=\frac{1}{2\pi} \int\limits_{-\pi}^{\pi}f(t)\mathrm dt$$
-for any $x$, where $\alpha/\pi$ is irrational.
-I've tried writing it as
-$$\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^N \sum_{k=0}^N\frac{1}{2\pi}\int_{-\pi}^\pi e^{ikt}f(x+n\alpha) $$
-but that was not helpful.
-
-REPLY [6 votes]: For the sake of completeness, here's a solution. First I'll prove the lemma that the asker could already do.
-$$\lim_{N\to \infty}\frac{1}{N}\sum_{n=1}^N \exp(ik(x+n\alpha)) = \lim_{N\to \infty} \exp(ikx)\frac{1}{N}\sum_{n=1}^N \exp(ikn\alpha)$$
-If $k=0$, the right hand side evaluates to $1\cdot\frac{1}{N}\cdot N = 1$.
-If $k\neq 0$, the sum is geometric, so we know how to evaluate it:
-$$ \exp(ikx)\lim_{N\to \infty}\frac{1}{N} \frac{\exp((N+1)ik\alpha)-\exp(ik\alpha)}{\exp(ik\alpha)-1} = \frac{\exp(ikx)}{\exp(ik\alpha)-1}\lim_{N\to\infty}\frac{1}{N}(\exp((N+1)ik\alpha) -\exp(ik\alpha))$$
-Because $\alpha$ is an irrational multiple of $\pi$, $k\alpha$ is never an integer multiple of $2\pi$, so the denominator is nonzero. $\exp((N+1)ik\alpha)-\exp(ik\alpha)$ is bounded, so the limit evaluates to zero.
-This means that
-$$\lim_{N\to \infty} \frac{1}{N}\sum_{n=1}^N \exp(ik(x+n\alpha)) = \delta_{k0} = \frac{1}{2\pi}\int_{-\pi}^\pi \exp(ikt)dt$$
-where $\delta$ is the Kronecker delta.
-Now the main problem asks us to show that
-$$\lim_{N\to \infty} \frac{1}{N}\sum_{n=1}^N f(x+n\alpha) = \frac{1}{2\pi} \int_{-\pi}^\pi f(t)dt$$
-for any continuous $2\pi$-periodic function $f$ on the reals. If $f$ is a trigonometric polynomial, it follows easily from the result for $\exp(ikx)$. But we know that every continuous $2\pi$-periodic function is a uniform limit of trigonometric polynomials. If $f_1, f_2, \ldots$ is a sequence of trigonometric polynomials that converges uniformly to $f$, we know that for each $i$,
-$$\lim_{N\to \infty}\frac{1}{N}\sum_{n=1}^N f_i(x+n\alpha) = \frac{1}{2\pi}\int_{-\pi}^\pi f_i(t)dt$$
-Standard theorems about uniform convergence then tell us that
-$$\frac{1}{2\pi}\int_{-\pi}^\pi f(t)dt = \frac{1}{2\pi}\int_{-\pi}^\pi \lim_{i\to \infty}f_i(t)dt = \lim_{i\to \infty} \frac{1}{2\pi}\int_{-\pi}^\pi f_i(t)dt$$
-$$= \lim_{i\to \infty} \lim_{N\to \infty} \frac{1}{N}\sum_{n=1}^N f_i(x+n\alpha) = \lim_{N\to \infty} \frac{1}{N}\sum_{n=1}^N \lim_{i\to\infty}f_i(x+n\alpha) = \lim_{N\to\infty} \frac{1}{N}\sum_{n=1}^N f(x+n\alpha)$$<|endoftext|>
-TITLE: What is wrong in my proof that 90 = 95? Or is it correct?
-QUESTION [13 upvotes]: Hi, I have just found a proof that 90 equals 95 and was wondering if I have made some mistake. If so, which step in my proof is not true?
-Definitions:
- 1. $\angle ABC=90^{\circ}$
- 2. $\angle BCD=95^{\circ}$
- 3. $|AB|=|CD|$
- 4. $M:=$ the midpoint of $BC$
- 5. $N:=$ the midpoint of $AD$
- 6. $l:=$ a line perpendicular to $BC$ passing through $M$
- 7. $m:=$ a line perpendicular to $AD$ passing through $N$
- 8. $S:=$ the intersection point of $l$ and $m$
-Based on definitions 1 through 8 we can draw the following image:
-
-Based on the definitions we can derive the following:
- 9. $\triangle BSC$ is isosceles (follows from 4, 6 and 8)
- 10. $\triangle ASD$ is isosceles (follows from 5, 7 and 8)
- 11. $|BS|=|CS|$ (follows from 9)
- 12. $|AS|=|DS|$ (follows from 10)
- 13. $\triangle ABS\cong\triangle DCS$ (follows from 3, 11 and 12)
- 14. $\angle ABS=\angle DCS$ (follows from 13)
- 15. $\angle CBS=\angle BCS$ (follows from 9)
- 16. $\angle ABC=\angle ABS - \angle CBS=\angle DCS-\angle BCS=\angle BCD$ (follows from 14 and 15)
- 17. $90^{\circ}=95^{\circ}$ (follows from 1, 2 and 16)
-Note: point $S$ is indeed lying above $BC$. If, however, it were below $BC$, then the 'minus' in step 16 would simply have to be changed into a 'plus'.
-Also note: The image is not drawn to scale. It only serves to provide the reader with an intuitive view of the proof.
-Also also note: before posting, I first investigated whether this type of question would be appropriate. Based on How come 32.5 = 31.5? and the following meta Questions about math jokes I have decided to post this question.
- -REPLY [52 votes]: $$ -\begin{array}{l} -0\in\mathbb N\\ -\forall n\in \mathbb N : n'\in \mathbb N\\ -\hline -0'\in\mathbb N\\ -\forall n\in \mathbb N : n'\in \mathbb N\\ -\hline -0''\in\mathbb N\\ -\forall n\in \mathbb N : n'\in \mathbb N\\ -\hline -0'''\in\mathbb N\\ -\forall n\in \mathbb N : n'\in \mathbb N\\ -\hline -0''''\in\mathbb N\\ -\forall n\in \mathbb N : n'\neq0\\ -\hline -0'''''\neq0\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''\neq0'\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''\neq0''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''\neq0'''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''\neq0''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''\neq0'''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''\neq0''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''\neq0'''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''\neq0''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''\neq0'''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''\neq0''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''\neq0'''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''\neq0''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''\neq0'''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''\neq0''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''\neq0'''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''\neq0''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''\neq0'''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''\neq0''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''\neq0'''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''\neq0''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''\neq0'''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''\neq0''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq 
n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -\phantom{0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''} -\end{array} -$$ -$$ -\begin{array}{l} -0'''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline 
-0''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -\phantom{0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''} -\end{array} -$$ -$$ -\begin{array}{l} -0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow 
m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -\phantom{0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''} -\end{array} -$$ -$$ -\begin{array}{l} -0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline 
-0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\forall m,n\in \mathbb N:m\neq n\rightarrow m'\neq n' -\\ -\hline -0'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\neq0''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''\\ -\\ -\square -\end{array} -$$<|endoftext|> -TITLE: Functions that are their Own nth Derivatives for Real $n$ -QUESTION [20 upvotes]: Consider (non-trivial) functions that are their own nth derivatives. For instance -$\frac{\mathrm{d}}{\mathrm{d}x} e^x = e^x$ -$\frac{\mathrm{d}^2}{\mathrm{d}x^2} e^{-x} = e^{-x}$ -$\frac{\mathrm{d}^3}{\mathrm{d}x^3} e^{\frac{-x}{2}}\sin(\frac{\sqrt{3}x}{2}) = e^{\frac{-x}{2}}\sin(\frac{\sqrt{3}x}{2})$ -$\frac{\mathrm{d}^4}{\mathrm{d}x^4} \sin x = \sin x$ -$\cdots$ -Let $f_n(x)$ be the function that is it's own nth derivative. 
I believe (but I'm not sure) that for nonnegative integer $n$, this function can be written as the following infinite polynomial:
-$f_n(x) = 1 + \cos(\frac{2\pi}{n})x + \cos(\frac{4\pi}{n})\frac{x^2}{2!} + \cos(\frac{6\pi}{n})\frac{x^3}{3!} + \cdots + \cos(\frac{2t\pi}{n})\frac{x^t}{t!} + \cdots$
-Is there some sense in which this function can be extended to real $n$ using fractional derivatives? Would it then be possible to graph $z(n, x) = f_n(x)$, and would this function be smooth and continuous on both $n$ and $x$ axes? Or would it have many discontinuities?
-
-REPLY [2 votes]: Sorry to give yet another answer that does not address the issue of fractional $n$ [it seems that fractional derivatives are not such a familiar topic to many research mathematicians; certainly they're not to me], but:
-There is a little issue here which has not been addressed. By the context of the OP's question, I gather s/he is looking for real-valued functions which are equal to their $n$th derivative (and not their $k$th derivative for $k < n$). Several answerers have mentioned that the set of solutions to $f^{(n)} = f$ forms an $n$-dimensional vector space. But over what field? It is easier to identify the space of such complex-valued functions, i.e., $f: \mathbb{R} \rightarrow \mathbb{C}$: namely, a $\mathbb{C}$-basis is given by $f(x) = e^{e^{2 \pi i k/n} x}$ for $0 \leq k < n$. But what does this tell us about the $\mathbb{R}$-vector space of real-valued solutions to this differential equation?
-The answer is that it is $n$-dimensional as an $\mathbb{R}$-vector space, though it does not have such an immediately obvious and nice basis.
-Let $W$ be the $\mathbb{R}$-vector space of real-valued functions $f$ with $f^{(n)} = f$ and $V$ the $\mathbb{C}$-vector space of $\mathbb{C}$-valued functions $f$ with $f^{(n)} = f$.
-There is a natural inclusion map $W \hookrightarrow V$. Phrased algebraically, the question is whether the induced map $L: W \otimes_{\mathbb{R}} \mathbb{C} \rightarrow V$ is an isomorphism of $\mathbb{C}$-vector spaces. In other words, this means that any given $\mathbb{R}$-basis of $W$ is also a $\mathbb{C}$-basis of $V$. This is certainly not automatic. For instance, viewing the Euclidean plane as first $\mathbb{R}^2$ and second as $\mathbb{C}$ gives a map $\mathbb{R}^2 \rightarrow \mathbb{C}$ which certainly does not induce an isomorphism upon tensoring with $\mathbb{C}$, since the first space has (real) dimension $2$ but the second space has (complex) dimension $1$.
-For more on this, see Theorem 1.6 of
-http://www.math.uconn.edu/~kconrad/blurbs/galoistheory/galoisdescent.pdf
-It turns out that this is actually a problem in Galois descent: according to Theorem 2.14 of the notes of Keith Conrad already cited above, the map $L$ is an isomorphism iff there exists a conjugate-linear involution $r: V \rightarrow V$, i.e., a map which is self-inverse and satisfies, for any $z \in \mathbb{C}$ and $v \in V$, $r(zv) = \overline{z}\, r(v)$.
-But indeed we have such a thing: an element of $V$ is just a complex-valued function $f$,
-so we put $r(f) = \overline{f}$. Note that this stabilizes $V$ since the differential equation $f^{(n)} = f$ "is defined over $\mathbb{R}$": or more simply, the complex conjugate of the $n$th derivative is the $n$th derivative of the complex conjugate. Thus we have "descent data" (or, in Keith Conrad's terminology, a G-structure) and the real solution space has the same dimension as the complex solution space.
-It is a nice exercise to use these ideas to construct an explicit real basis of $W$.<|endoftext|>
-TITLE: Good book for self study of functional analysis
-QUESTION [93 upvotes]: I am an EE grad student who has had one undergraduate course in real analysis (which was pretty much the only pure math course that I have ever done). I would like to do a self study of some basic functional analysis so that I can be better prepared to take a graduate course in that material at my university. I plan to do that next fall, so I do have some time to work through a book fully. Could someone recommend some good books to start working on this?
-Thanks in advance
-
-REPLY [2 votes]: Consider the book by Haase: Functional Analysis: An Elementary Introduction. It's published in the Graduate Studies in Mathematics series, but it only assumes a background in linear algebra and elementary analysis (i.e., it builds the basics of Lebesgue theory for you) and has a lot of the functional analysis relevant to applied mathematics.
-MAA review<|endoftext|>
-TITLE: Kindle as a Tool for Mathematicians?
-QUESTION [29 upvotes]: UPDATED: 1/15/2014
-I originally wrote this post in 2010, when I was looking for alternative ways to store and transport papers. I had my laptop, but due to its weight, limited battery life, and the LCD screen, an e-reader such as the Kindle seemed like a good idea at the time. (Also, at the time of the original post, I had never owned a smartphone, let alone a tablet.)
-Aside from the Kindle, are there any other electronic tablets, pads, or other devices that you'd recommend for this type of purpose? What are some of your experiences?
-As for myself, I purchased a Pocketbook Reader 602 from PocketBook International four years ago. At the time, it seemed like a smart purchase, as the PBR handles a whole array of file formats without needing to convert anything. However, I don't use the device nearly as much as I thought I would.
-The device itself works the way it should. It is a bit slow with the page rotation. And as user641 points out, the PBR can be a bit sluggish with larger files. The text-to-speech feature is completely useless when it comes to reading math. I tried utilizing the internet connectivity. While it is amusing to see websites in e-ink, in the end it's too sluggish to be of any use.
-Here are a few things that I had originally thought would be convenient, but wound up being an annoyance instead.
-
-Lack of Touch Screen Capability. I purposely chose the model that didn't have touch screen capability. Reports of glare and the idea of getting smudges on the screen led me to that. However, the alternative is an extremely painful, unintuitive, tedious navigation.
-Small Size. I opted for the small size of the PBR 602 for its portability. While I have no problem reading novels, for math this is just unbearable. The slow page turn / search interface / zoom makes the problem even worse. Basically, reading anything that requires jumping from one part to another, or looking up an index or keywords, is extremely painful.
-
-Note that the small size wouldn't be as big of an issue if the interface were quick and seamless. For instance, I don't find reading math on my smartphone as painful.
-I still use my PBR 602 from time to time. In fact, I go through periods where I use it extensively. Unfortunately, its inconveniences prevent it from seeing regular usage.
-
-REPLY [3 votes]: I will share my experience here. 
-At first I also bought a Pocketbook e-reader, specifically the 902, which has a larger screen. It handled both DJVU and PDF files, which was important to me. The DJVU rendering was slow, however, and it had serious lag issues with large files. But all the math was perfectly rendered, zoom worked, etc.
-I eventually got rid of my Pocketbook because I found the device somewhat unreliable. It was also clear that reading DJVU files on an e-reader is not the joy you might think it is. Specifically, these are normal (A4) sized pages being shrunk to fit the screen. I downloaded a few programs to help with the margins, but still, it was difficult/not worth it to read DJVU files this way.
-I then purchased a Kindle DX, which can handle PDF files, but not DJVUs. The Kindle is an incredible product, and works very well. Still, it will lag on larger PDF files, and several times I have had to trim margins on my laptop before putting them on the Kindle. This really has nothing to do with math, however: I don't think there are any e-readers out there which handle PDF files like this well.
-Now there are a couple of options. One is to convert the PDF files to an AZW (or even MOBI) file. In my experience, this fails miserably. Even without any formulas/math type, the spacing and sentence structure are essentially ruined. With math? Disastrous results. And if there are pictures, forget it.
-The other option, which isn't always available, is to get your hands on the TeX source. I use this option for any arXiv papers I download, for example. There you can resize the font, margins, etc., to optimize for Kindle viewing. There are measurements out there which give exact dimensions for the viewing window of a Kindle. This option (which again, isn't always a possibility) is certainly the best. Reading a paper on the Kindle this way is just as enjoyable as reading a hard copy.
-In the end, the books I do have as PDF files I trim myself (using pdftk and a Java program called Briss), and then put them on the Kindle as PDF files. I view them rotated 90 degrees, because this increases the magnification.
-The articles I can get as TeX files, I adjust the margins/font size etc. (this is based on the paper itself; for example, large figures require different processing than a short, all-text paper). Then I convert to PDF and put on my Kindle. Again, these are the nicest PDFs on my Kindle.
-Finally, I would like to mention that several prominent math books (a few by John Stillwell for example) are now available in Kindle format on Amazon. I haven't purchased any of these, but presumably the publisher simply changed the margins/text size in the TeX file, as I do for the articles.<|endoftext|>
-TITLE: Find an infinite set of positive integers such that the sum of any two distinct elements has an even number of distinct prime factors
-QUESTION [8 upvotes]: I have been attempting to solve this using the infinite Ramsey theorem, with a colouring based on whether the sum of two vertices has an even or odd number of distinct prime factors.
-This is leading to an infinite recursion.
-Is this ok? At the end of all time I will be done.
-
-REPLY [6 votes]: For what it's worth, such a sequence constructed by a greedy algorithm begins:
-2, 4, 8, 10, 16, 18, 36, 199, 208, 1131, 1347, 3984, 5751, 7310, 27315, 129313, 134101, 169400, 589570
-That is, we start with $A_1 = 2$, and then each $A_n$ is the smallest number greater than $A_{n-1}$ with $A_i + A_n$ having an even number of distinct prime factors for each $i < n$. 
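-Here is a minimal sketch of that greedy search (my own Python transcription of it, using sympy for the factoring):
-
-from sympy import primefactors  # primefactors(m) lists the distinct prime factors of m
-
-def greedy_even_sums(start, count):
-    # Greedily extend [start] so that every pairwise sum of distinct
-    # elements has an even number of distinct prime factors.
-    seq = [start]
-    candidate = start
-    while len(seq) < count:
-        candidate += 1
-        if all(len(primefactors(a + candidate)) % 2 == 0 for a in seq):
-            seq.append(candidate)
-    return seq
-
-print(greedy_even_sums(2, 8))  # [2, 4, 8, 10, 16, 18, 36, 199]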
(So, for example, $A_3 = 8$ because $5+4, 6+2, 7+2$ each have an odd number of distinct prime factors, but $8+2$ and $8+4$ both have even numbers.)
-Another such sequence, starting with 1, begins
-1, 5, 9, 13, 35, 39, 286, 290, 381, 385, 866, 4376
-and one starting with 3 begins
-3, 7, 11, 15, 33, 41, 47, 65, 101, 203, 4102, 6392, 8507, 18608.
-From these it seems that these sequences grow roughly exponentially; that is, $A_n \approx k^n$ for some constant $k$. This makes sense. Since approximately half of all integers have an even number of distinct prime factors, once you have $n$ numbers in such a sequence it should take about $2^n$ tries to find the next one.
-Of course this isn't a proof, but it's at least an argument why such sequences should exist. And judging from the irregularity of the greedily constructed sequences, a greedy method probably isn't the best way to go here even if you want to explicitly construct the sequence.<|endoftext|>
-TITLE: Finding all complex zeros of a high-degree polynomial
-QUESTION [6 upvotes]: Given a large univariate polynomial, say of degree 200 or more, is there a procedural way of finding all the complex roots? By "roots", I mean complex decimal approximations to the roots, though the multiplicity of the root is important. I have access to MAPLE and the closest function I've seen is:
-with(RootFinding):
-Analytic(Z,x,-(2+2*I)..2+2*I);
-
-but this chokes if Z is of high degree (in fact it fails to complete even if deg(Z)>15).
-
-REPLY [2 votes]: I think one of the biggest problems is approximating multiple roots. The approach described in
-L. Brugnano, D. Trigiante, "Polynomial Roots: the Ultimate Answer?", Linear Algebra and its Applications 225 (1995) 207-219
-relies on the approximation of eigenvalues of a tridiagonal matrix, obtained via the application of Euclid's GCD algorithm to the original polynomial, and seems to work pretty well.
-I couldn't find the pdf for the article though, sorry.<|endoftext|>
-TITLE: Two metrics induce the same topology, but one is complete and the other isn't
-QUESTION [41 upvotes]: I'm looking for an example of two metrics that induce the same topology, but so that one metric is complete and the other is not (since it is known that completeness isn't a topological invariant).
-Thanks in advance for any hints or ideas.
-
-REPLY [2 votes]: With the usual metric, $(0,1]$ is not complete.
-Let us define another metric $\rho$ on this space using the homeomorphism $\phi: [1,\infty) \to (0,1]$, $\phi(x) = 1/x$. Define $\rho(x,y) = \vert\frac{1}{x}-\frac{1}{y}\vert$. It is easy to check that $\rho$ is indeed a metric. Also, note that the usual metric and $\rho$ are equivalent and hence induce the same topology.
-Now consider a Cauchy sequence in $((0,1],\rho)$:
-$\rho(x_n,x_m) < \epsilon$ for all $n, m \geq N$, that is,
-$\vert\frac{1}{x_n}-\frac{1}{x_m}\vert < \epsilon$ for all $n, m \geq N$.
-Hence $(\frac{1}{x_n})$ is a Cauchy sequence in $[1,\infty)$ with the usual metric, a complete metric space. Let us say it converges to $y \in [1,\infty)$. It is easy to check that $(x_n)$ converges to $x = \frac{1}{y}$ in $((0,1],\rho)$.<|endoftext|>
-TITLE: An Identity Concerning the Riemann Zeta Function
-QUESTION [5 upvotes]: Let $\zeta$ be the Riemann zeta function. For any integer $n \geq 2$, how to prove $$\zeta(2) \zeta(2n-2) + \zeta(4)\zeta(2n-4) + \cdots + \zeta(2n-2)\zeta(2) = \Bigl(n + \frac{1}{2}\Bigr)\zeta(2n)$$
-
-REPLY [14 votes]: This is too nice an exercise to give away. 
Your starting point should be:
-$$\frac{t}{e^t-1} = 1 - t/2 + \frac{2 \zeta(2)}{(2 \pi)^2} t^2 - \frac{2 \zeta(4)}{(2 \pi)^4} t^4 + \frac{2 \zeta(6)}{(2 \pi)^6} t^6 - \cdots.$$<|endoftext|>
-TITLE: A plane algebraic curve with all four kinds of double points
-QUESTION [11 upvotes]: During my study of plane algebraic curves, I got curious whether there is a nontrivial example of a plane algebraic curve that has a node, a cusp (for my purposes I do not care which of the two kinds of cusps the example exhibits), a tacnode, and an isolated point. By "nontrivial" I mean a curve that was not constructed as a chimera of two or more simpler curves, e.g. $(x-y)(x^2+y^2-1)=0$. Of course, it would be a quintic at the very least (i.e. the algebraic degree should be 5 at the minimum).
-Apart from an explicit example, I would also be interested in a general procedure for constructing algebraic curves with a prescribed number and type of singular points.
-
-After trying out Qiaochu's and T..'s suggestions, I have a follow-up question: does the problem become more difficult if the requirement that the curve be bounded (i.e. one can draw a circle such that the whole curve, including the isolated point, is within the circle) is imposed?
-
-REPLY [4 votes]: I'm setting this answer community wiki so other people can attach their own examples of curves constructed with the methods in this thread.
-Now, Qiaochu's suggestion for building a curve $q(y)=p(x)$ with collinear singular points on the horizontal axis amounts to the construction of an appropriate Hermite interpolation problem. More explicitly, one wants to find a polynomial (or rational function) whose first few derivatives at preset points vanish.
-In Mathematica for instance, the function InterpolatingPolynomial can be used to generate a Hermite interpolant. (For systems that do not have such a function handy, the Hermite interpolation problem is solved through either an appropriate modification of the Newton divided differences scheme or by solving an associated confluent Vandermonde system.) The rational interpolant case is a bit tougher, and I am still experimenting with algorithms for the rational Hermite problem, so I won't be considering them for now (but might include them in a later edit).
-Thus, taking the conditions in Qiaochu's answer, here is how one builds a curve with a node at $(-1,0)$, a tacnode at $(0,0)$, and a cusp at $(1,0)$:
-Expand[InterpolatingPolynomial[{{-1, {0, 0, 1}}, {0, {0, 0, 0, 0, 1}}, {1, {0, 0, 0, 1}}}, x]]
-
-The result has fractional coefficients, but you can multiply by an appropriate factor so that all the coefficients are integers, resulting in the polynomial
-$$3x^{11}+2x^{10}-15x^9+21x^7-6x^6-9x^5+4x^4$$
-Here for instance is a plot of $$y^2=3x^{11}+2x^{10}-15x^9+21x^7-6x^6-9x^5+4x^4$$ (plot omitted in this archive),
-and a more complicated curve, $$y^2-y^3=(x^2+xy+y^2)(3x^{11}+2x^{10}-15x^9+21x^7-6x^6-9x^5+4x^4)$$ (plot likewise omitted).
-I have yet to make the prescription for getting an isolated point to work, since the curves generated only manage to have separate branches passing through the desired point, but no isolated points at all.
-
-As for T..'s suggestion, the Mathematica code I have requires some serious cleanup, so I shall be editing this answer later to include his parametric construction.
-Currently I am trying to find a curve with four-fold symmetry that has four of each of the types of double points. 
If I manage to find it, I shall be naming it after Qiaochu and T.<|endoftext|>
-TITLE: Decomposing the plane into intervals
-QUESTION [13 upvotes]: A recent Missouri State problem stated that it is easy to decompose the plane into half-open intervals and asked us to do so with intervals pointing in every direction. That got me trying to decompose the plane into closed or open intervals. The best I could do was to make a square with two sides missing (which you can do out of either type) and form a checkerboard with the white squares missing the top and bottom and the black squares missing the left and right. That gets the whole plane except the lattice points. This seems like it must be a standard problem, but I couldn't find it on the web. So can the plane be decomposed into unit open intervals? Closed intervals?
-
-REPLY [6 votes]: I posted this to Math Overflow and Jeff Strom gave the following answer:
-Conway and Croft show it can be done for closed intervals and cannot be done for open intervals in the paper:
-Covering a sphere with congruent great-circle arcs. Proc. Cambridge Philos. Soc. 60 (1964), 787-800.<|endoftext|>
-TITLE: Distribution of Functions of Random Variables
-QUESTION [5 upvotes]: In general, how would one find the distribution of $f(X)$ where $X$ is a random variable? Or consider the inverse problem of finding the distribution of $X$ given the distribution of $f(X)$. For example, what is the distribution of $\max(X_1, X_2, X_3)$ if $X_1, X_2$ and $X_3$ have the same distribution? Likewise, if one is given the distribution of $Y = \log X$, then the distribution of $X$ is deduced by looking at $\exp(Y)$?
-
-REPLY [12 votes]: Let me take the risk of mitigating Qiaochu's healthy skepticism and mention that a wand I find often quite useful to wave is explained on this page. There, I argue that: The simplest and surest way to compute the distribution density or probability of a random variable is often to compute the means of functions of this random variable.
-For example, the fact that $Y=\log X$ is normal $N(2,4)$ is equivalent to the fact that, for every bounded measurable function $g$,
-$$
-\mathrm E(g(Y))=\int g(y) f_Y(y)\mathrm{d}y,
-$$
-for a density $f_Y$ everybody knows and whose precise form will not interest us. Likewise, the fact that the distribution of $X$ has density $f_X$ is equivalent to the fact that, for every bounded measurable function $g$,
-$$
-\mathrm E(g(X))=\int g(x) f_X(x)\mathrm{d}x.
-$$
-Hence our task is simply to pass from one formula to the other. But this is easy since $g(X)=g(\mathrm{e}^Y)$ is also a function of $Y$. As such,
-$$
-\mathrm E(g(X))=\int g(\mathrm{e}^y) f_Y(y)\mathrm{d}y,
-$$
-and our task is to solve for $f_X$ the equation
-$$
-\int g(x) f_X(x)\mathrm{d}x=\int g(\mathrm{e}^y) f_Y(y)\mathrm{d}y.
-$$
-We have no choice for our next step but to use the change of variable $x\leftarrow \mathrm{e}^y$. That is, $y\leftarrow \log x$ and $\mathrm{d}y=x^{-1}\mathrm{d}x$, which yields
-$$
-\int g(\mathrm{e}^y) f_Y(y)\mathrm{d}y=\int g(x) f_Y(\log x)x^{-1}\mathrm{d}x.
-$$
-By identification, $f_X(x)=f_Y(\log x)x^{-1}$.
-In a nutshell the idea is that the very notations of integration help us to get the result and that during the proof we have no choice but to use the right path. We leave as an exercise the computation of the density of each random variable $Z=\varphi(Y)$, for some regular enough function $\varphi$. 
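-As a quick numerical sanity check of $f_X(x)=f_Y(\log x)\,x^{-1}$ (a sketch only, assuming NumPy/SciPy, and reading the $N(2,4)$ above as mean $2$, variance $4$):
-
-import numpy as np
-from scipy import stats
-
-rng = np.random.default_rng(0)
-y = rng.normal(loc=2.0, scale=2.0, size=1_000_000)  # Y ~ N(2, 4), so std = 2
-x = np.exp(y)                                       # X = exp(Y)
-
-# Empirical histogram of X versus the predicted density f_Y(log x) / x.
-hist, edges = np.histogram(x, bins=200, range=(0.5, 60.0), density=True)
-centers = 0.5 * (edges[:-1] + edges[1:])
-predicted = stats.norm.pdf(np.log(centers), loc=2.0, scale=2.0) / centers
-print(np.max(np.abs(hist - predicted)))  # small, and shrinks as the sample grows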
-Note that maxima and minima of independent random variables should be dealt with by a specific, different method, explained on this page.<|endoftext|>
-TITLE: Expressions unchanged by permuting the roots of a polynomial
-QUESTION [7 upvotes]: I am trying to self-read through Ian Stewart's Galois theory and am stuck at this paragraph. I quote - "Lagrange observed that all methods for solving polynomial equations by radicals involve constructing rational functions of the roots that take a small number of values when the roots $\alpha_j$ are permuted. Prominent among these is the expression
-$$\delta = \prod_{j< k} (\alpha_j - \alpha_k)$$
-which takes just two values, $\pm \delta$, plus for even permutations and minus for the odd ones. Therefore, $\Delta = \delta^{2}$ is a rational function of the coefficients. This gets us started and it yields a complete solution for the quadratic."
-My question is as follows: Why should an expression that is unchanged by permuting the roots of a polynomial be expressible as a rational function of the polynomial's coefficients? Is this statement even true or have I misunderstood the book?
-
-REPLY [9 votes]: This is a straightforward consequence of the Fundamental Theorem on Symmetric Functions. Indeed, the coefficients of the polynomial are (up to sign) the evaluations of the so-called elementary symmetric functions at the roots.<|endoftext|>
-TITLE: Accuracy of approximation to inclusion-exclusion formula in prime sieve
-QUESTION [10 upvotes]: This thing came up in a combinatorics course I am taking.
-Choose a fixed set of primes $p_1,p_2,\dots,p_k$ and let $A_n$ be the number of integers in $\{1,2,\dots,n\}$ which are not divisible by any of the $p_i$'s. $A_n$ is given by $ n - \sum_{1\leq i_1 \leq k} \lfloor \frac{n}{p_{i_1}} \rfloor + \sum_{1\leq i_1 < i_2 \leq k} \lfloor \frac{n}{p_{i_1}p_{i_2}}\rfloor - \dots $. Now, if $n$ is divisible by each of the $p_i$'s then we have the simpler expression $A_n = n \prod_{i=1}^{k}(1-\frac{1}{p_i})$ (added later: note I am not assuming $n = \prod_{i=1}^{k}p_i$ here; this holds true whenever $n = c \prod_{i=1}^{k} p_i $ for some integer $c$, since each $\lfloor \frac{n}{p_{i_1} \dots p_{i_j}} \rfloor = \frac{n}{p_{i_1} \dots p_{i_j}}$).
-Another student pointed out that even if $n$ does not have $\prod_{i=1}^{k} p_i$ as a factor, the approximation $B_n = n \prod_{i=1}^{k}(1-\frac{1}{p_i})$ to $A_n$ is quite close in some specific cases. It is easy to see that $\lim_{n\to\infty}\frac{A_n - B_n}{n}=0$ as $\lim_{n\to\infty}\frac{1}{n}\lfloor \frac{n}{p_{i_1}p_{i_2}\dots p_{i_j}} \rfloor = \frac{1}{p_{i_1}p_{i_2} \dots p_{i_j}}$; however, that is not strong enough to imply that $A_n - B_n$ will always be close.
-Is there any way to analyze how well this approximation will perform in general? I am interested in the worst case for small to moderately sized $n$.
-Just to give a feel of how close $A_n$ and $B_n$ can get, assuming the set of primes is $\{2,3,7\}$ we have (assuming my program is correct):
-n A(n) B(n)
-17 5 4.86
-27 8 7.71
-37 11 10.57
-107 31 30.57
-1111 318 317.43
-3001 858 857.43
-4007 1145 1144.86
-5000 1429 1428.57
-
-REPLY [3 votes]: This might be slightly off tangent, but it's definitely in the spirit of your post, and is found at the beginning of most texts in sieve theory. 
-When your fixed set of primes is the first $k$ primes and $n = p_{k}^2$, then $B_{n}$ is the main term of the sieve of Eratosthenes-Legendre, which gives an approximation for the number of primes in the interval $[p_{k}, p_{k}^2]$.
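-As a quick computational check of the two formulas against the table in the question (a sketch, in Python):
-
-from itertools import combinations
-from math import prod
-
-def A(n, primes):
-    # Inclusion-exclusion count of integers in 1..n divisible by none of `primes`.
-    return sum((-1) ** r * (n // prod(c))
-               for r in range(len(primes) + 1)
-               for c in combinations(primes, r))
-
-def B(n, primes):
-    # The multiplicative approximation n * prod(1 - 1/p).
-    return n * prod(1 - 1 / p for p in primes)
-
-for n in (17, 27, 37, 107, 1111, 3001, 4007, 5000):
-    print(n, A(n, (2, 3, 7)), round(B(n, (2, 3, 7)), 2))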
In this setting Mertens' theorem gives an asymptotic for $B_{n}$ (it is called Mertens' third theorem on the Wikipedia page).
-However, an appeal to the prime number theorem shows that this is a reasonably bad approximation, and that $B_{n}$ overestimates the number of primes by a factor of $2e^{-\gamma}$, where $\gamma$ is the Euler-Mascheroni constant.
-Terry Tao has a good blog posting on why repeated use of the inclusion-exclusion principle yields non-optimal results.<|endoftext|>
-TITLE: I want to graph an equilateral triangle on graph paper
-QUESTION [6 upvotes]: I want to graph an equilateral triangle. It would be ideal if I had a set of three points: $(x_1, y_1), (x_2, y_2), (x_3, y_3)$ with $x_i, y_i \in \mathbb{Z}$ as the vertices. However, this is impossible. I am willing to "settle" for a triangle that is off by the width of a pencil lead. (That is, if we draw circles of radius $\delta$ around all of the lattice points, the true locations of the vertices of the equilateral triangle are inside of these error circles.)
-For a given $\delta$, what is the smallest such triangle?
-My instinct is to let the computer guess; is there a more elegant way?
-
-REPLY [2 votes]: If you are interested in fast sketching of nearly equilateral triangles by hand, based on $5$mm graph paper and assuming a $0.5$mm pencil, then the triangle with corners on the grid coordinates $(0,0)$, $(1,4)$ and $(4,1)$ has an error of less than $0.7$mm (based on an upper bound of two times the difference of the circumscribed circles based on the shortest and longest edges of this triangle). Even if you sketch without a ruler, the result is nearly indistinguishable from an equilateral triangle.<|endoftext|>
-TITLE: Bott periodicity and algebraic geometry
-QUESTION [10 upvotes]: It is a theorem that every locally free coherent sheaf on $\mathbb{P}^1$ over an algebraically closed field is isomorphic to a unique sum of sheaves $\mathcal{O}(n)$ for various integers $n$. In particular, the K-ring of locally free coherent sheaves (or all coherent sheaves, $\mathbb{P}^1$ being nonsingular) is isomorphic to $\mathbb{Z}[t, t^{-1}]$.
-The topological K-ring of vector bundles on $S^2$ is, by Bott periodicity, isomorphic to $\mathbb{Z}[H]/(H-1)^2$, where $H$ is the canonical bundle. But $S^2$ is homeomorphic to $\mathbb{P}^1_{\mathbb{C}}$.
-Every locally free sheaf corresponds to a vector bundle on $S^2$. It follows that the map on the K-groups from locally free sheaves to vector bundles is surjective but not injective.
-Questions:
-
-What goes wrong?
-Is there a version of Bott periodicity for algebraic varieties (or schemes)? (I.e., relating K-groups of $X$ and $X \times \mathbb{P}^1$.) I understand that there is one for the Picard groups.
-
-REPLY [15 votes]: It is not true that the Grothendieck ring of coherent sheaves on $\mathbb{P}^1$ is isomorphic to $\mathbb{Z}[t, t^{-1}]$. Although $\mathcal{O} \oplus \mathcal{O}(2)$ is not isomorphic to $\mathcal{O}(1) \oplus \mathcal{O}(1)$, they do have the same class in $K^0$.
-The definition of the Grothendieck group of coherent sheaves on a scheme $X$ is that it is generated by isomorphism classes of coherent sheaves, modulo the relation that $[A] + [C] = [B]$ whenever there is a short exact sequence
-$$0 \to A \to B \to C \to 0.$$
-In particular, we have the short exact sequence
-$$0 \to \mathcal{O} \to \mathcal{O}(1)^2 \to \mathcal{O}(2) \to 0,$$
-where the maps are given by $(x \ y)$ and $\binom{-y}{x}$. 
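-In $K^0$ the second of these sequences says precisely that $[\mathcal{O}] + [\mathcal{O}(2)] = 2[\mathcal{O}(1)]$; writing $t = [\mathcal{O}(1)]$, that is the relation
-$$
-t^2 - 2t + 1 = 0.
-$$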
-This makes $K^0$ into $\mathbb{Z}[t, t^{-1}]/(t^2 - 2t +1) \cong \mathbb{Z}[u]/u^2$, just like you wanted.
-When working in the categories of smooth or of topological vector bundles, all short exact sequences split, so you can get away with defining $K$-theory with direct sums. You can't do that in the coherent or the algebraic categories.<|endoftext|>
-TITLE: You are standing at the origin of an "infinite forest" holding an "infinite bb-gun"
-QUESTION [25 upvotes]: I use stories like these to develop intuition... or perhaps to destroy it. I have my own answers in mind, but I want to see if I have made any mistakes...
-You are standing at the origin of an "infinite forest" holding an "infinite bb-gun." The "trees" in this forest are at the lattice points all around you. (The lattice points are like those on graph paper and they align with the cardinal directions: N, S, E, W.) The "forest" is Euclidean in the sense that the trees have no width. To hit a tree with your bb-gun you must aim perfectly at it.
-You would, for example, hit a tree if you fired the gun due north, south, east or west. (Your bullets also have no width.)
-A. You fire the gun in an arbitrary direction without bothering to aim. What happens?
-B. You get a new bb-gun and the bullets have a little width to them. ($\delta$?) You fire the gun in an arbitrary direction without bothering to aim. What happens?
-C. All of the trees are removed that have coordinates whose absolute values are not perfect squares. (So, only points such as $(25, 100)$ and $(4,-1600)$ remain.) Again you use width-less bullets. You fire the gun in an arbitrary direction without bothering to aim. What happens?
-D. Again, only with perfect squares, but now the bullets have width. What happens?
-
-REPLY [3 votes]: The problem with trees of finite width at all lattice points is called "Polya's Orchard Problem". Other variants, such as higher dimensions and different shapes of the orchard, are in the literature under the same name, or under variations on visibility, lattice point, and orchard. Sample search result:
-http://www.rose-hulman.edu/mathjournal/archives/2006/vol7-n2/paper9/v7n2-9pd.pdf
-For the problem with zero-width trees and visibility from the origin (i.e., primitive lattice points) I think the $6/\pi^2$ density is attributed to Chebyshev in the 1800's, but it seems like the type of result that could have been known considerably earlier and frequently rediscovered.<|endoftext|>
-TITLE: Produce an explicit bijection between rationals and naturals?
-QUESTION [107 upvotes]: I remember my professor in college challenging me with this question, which I failed to answer satisfactorily: I know there exists a bijection between the rational numbers and the natural numbers, but can anyone produce an explicit formula for such a bijection?
-
-REPLY [3 votes]: Set $\Bbb N = \{1,2,3,4,\dots\}$.
-Here we will define a bijective mapping between $\Bbb N$ and the subset of rational numbers
-$\quad \Bbb Q_{\gt 0}^{\lt 1} = \{q \in \Bbb Q \mid 0 \lt q \lt 1\}$
-For integer $n \ge 1$ define the set
-$\tag 1 F_n = \{s/t\in \Bbb Q \mid [s, t \in \Bbb N] \land [t \le n] \land [s \lt t]\}$
-We have a chain of inclusions
-$\tag 2 \emptyset = F_1 \subset F_2 \subset F_3 \subset \dots \subset F_k \subset \dots $
-and denote the union of these sets by $F$; observe that $F = \Bbb Q_{\gt 0}^{\lt 1}$.
-Let the number of elements in the set $F_k$ be denoted by $f_k$.
-If $k \gt 1$ let $G_k = F_k \setminus F_{k-1}$; observe that $G_k$ has an ordering relation $\le$ defined on it. 
-We define a function $\Gamma: \Bbb N \to F$ by the specification
-$\quad$ Given $m \in \Bbb N$
-$\quad$ Find $k \in \Bbb N$ such that $f_k \lt m \le f_{k+1}$
-$\quad$ Set $\Gamma(m)$ to the
-$\tag 3 [m-f_k]^{\text{th}} \text{ element of } G_{k+1}$
-Exercise: Show that $\Gamma$ is a well-defined function that puts the sets into a 1:1 correspondence.
-See also the wikipedia article Farey sequence.<|endoftext|>
-TITLE: Why is $(x^2-2)/(2y^2+3)$ never an integer for any integers $x$ and $y$?
-QUESTION [11 upvotes]: I've started a little reading on quadratic reciprocity, and a reason for this has eluded me. Here's a little of what I came up with so far. I decided I want to show that for all primes $p$, if $p|x^2-2$, then $p$ does not divide $2y^2+3$. Then, by way of contradiction, if $(x^2-2)/(2y^2+3)$ is an integer, then any $p$ such that $p|2y^2+3$ would have to divide $x^2-2$, a contradiction. I see this is true for $p=2$. I want to find all $p$ such that $x^2\equiv 2\pmod{p}$, and since for any odd $p$,
-$$\left(\frac{2}{p}\right)=(-1)^{(p^2-1)/8}$$
-I see $(2|p)=1$ iff $p\equiv 1,7\pmod{8}$. So only primes of the form $8k+1$ or $8k+7$ divide $x^2-2$. However, I don't see a way to show that primes of the form $8k+1$ or $8k+7$ do not divide $2y^2+3$, so maybe I'm completely off the mark. Does anyone know how to resolve this, or have a better idea of what to do? Thanks!
-
-REPLY [4 votes]: Let's suppose it is an integer:
-$x^2 - 2 = k(2y^2+3)$, so
-$x^2 = (2k)y^2 + 3k + 2$.
-Since $x$ is an integer, the l.h.s. is a perfect square, and the r.h.s. is a quadratic in $y$;
-that means the roots of the equation are equal, and the discriminant is $0$:
-$b^2-4ac = 0$ $\ \Rightarrow\ $ $0-(3k + 2)(2k) = 0$ $\ \Rightarrow\ $ $k = -3/2$ or $k = 0$;
-and $k = 0$ only for $x^2 = 2$,
-which is a contradiction...<|endoftext|>
-TITLE: Induction on two integer variables
-QUESTION [21 upvotes]: Assume you want to prove an identity such as
-$$\sum_{k=m+1}^{n}A(k,m)-B(k,m)=S(m)+T(n,m)\qquad\text{for } n,m\in
-\mathbb{Z},\ n,m\geq 0.$$
-Added: I applied mathematical induction on $m,n$ to prove it. I am unsure because up to now I have seen it applied to properties depending on a single variable only.
-Question: does application of two inductive arguments, one on $m$ and the
-other on $n$, guarantee the validity of such a proof?
-
-REPLY [19 votes]: Suppose you are trying to prove a family of statements $P(x, y)$. This is the same as proving the family of statements $F(x)$, where $F(x) = \forall y : P(x, y)$. Each statement $F(x)$ can be proven by induction on $y$ (for fixed $x$), and then you can prove $P(x, y)$ by induction on $x$. You might want to try proving
-$${n+1 \choose k+1} = {n \choose k+1} + {n \choose k}$$
-this way.
-But actually you can be much trickier than this. Sometimes it suffices to induct on $x + y$, for example.<|endoftext|>
-TITLE: How to show directly that two elements become equal in Grothendieck group?
-QUESTION [12 upvotes]: Consider a commutative semigroup $S$ and its Grothendieck completion group
-$G(S)$. Suppose I insist on defining $G(S)$ as the free abelian group on basis $[a]$ (with $a\in S$) divided out by the relations $[a+b]-[a]-[b]$. How do I show with that definition that if the images of $a,b\in S$ become equal in $G(S)$, then necessarily there existed $c\in S$ with $a+c=b+c$? I know it is true because I can prove it with the other construction of the Grothendieck group, but I should like a direct proof with the above free abelian group construction. 
-REPLY [11 votes]: Well your notation is a little bit confusing. Since we are considering free abelian groups, we should avoid additive notation for the commutative semigroup $S$. Let's talk of $(S, \ast )$. If $F(S)$ is the free abelian group with basis $S$, we can assume $S \subseteq F(S)$. So the set of relations is $$R = \{ (a \ast b) -a - b : a,b \in S \}.$$ Then, the Grothendieck completion is the abelian group $$G(S) = F(S)/ \langle R \rangle.$$ If $a \in S$, I'll denote the coset $a + \langle R \rangle$ by $[a]$.
-The proof isn't hard. You'll just need to remember that in free abelian groups the expression in terms of a basis is unique. You can find one proof, with an ugly notation, in Rotman's 'Advanced Modern Algebra'. I'll reproduce below the proof in Magurn's book 'An Algebraic Introduction to K-theory'.
-Suppose that $[a]=[b]$. Then, $a-b \in \langle R \rangle$. That is, $$a-b = \sum_{i=1}^n n_i((a_i \ast b_i) - a_i - b_i),$$ where $n_i = 1$ or $-1$ and $a_i, b_i \in S$. Bringing terms with negative coefficients to the other side, $$ a + \sum_{n_i = -1}(a_i \ast b_i) + \sum_{n_i = 1}(a_i + b_i) = b + \sum_{n_i = 1}(a_i \ast b_i) + \sum_{n_i = -1}(a_i + b_i).$$ Since $S$ is a basis of $F(S)$, the terms on one side of the equation are a permutation of those on the other side. Since $(S, \ast )$ is commutative, it follows that, in $S$, $$ a \ast \prod_{n_i = -1}(a_i \ast b_i) \ast \prod_{n_i = 1}(a_i \ast b_i) = b \ast \prod_{n_i = 1}(a_i \ast b_i) \ast \prod_{n_i = -1}(a_i \ast b_i). $$ So $a \ast c = b \ast c$, where $$ c = \prod_{i = 1}^n (a_i \ast b_i).$$<|endoftext|>
-TITLE: Have I got the right definition of formal smoothness?
-QUESTION [7 upvotes]: I'm trying to work out a basic example where formal smoothness should fail.
-I'm considering $\mathbb{R} \to \mathbb{R}[x,y]/(x^2-y^2)$.
-The idea is that not every $\mathbb{R}$-homomorphism $\mathbb{R}[x,y]/(x^2-y^2) \to \mathbb{R}$ should lift to a homomorphism $\mathbb{R}[x,y]/(x^2-y^2) \to \mathbb{R}[\varepsilon]/(\varepsilon^2)$. But I can't see that ever being possible: after all, I can just take the exact same homomorphism, with image $\mathbb{R} \subset \mathbb{R}[\varepsilon]/(\varepsilon^2)$; this gives a valid lift.
-Do I need to consider something other than $R = \mathbb{R}[\varepsilon]/(\varepsilon^2)$ with $I = (\varepsilon)$ in order to witness the failure of formal smoothness? I would think that was enough, given that it should fully explain lifting of points to tangent vectors.
-
-REPLY [8 votes]: Indeed, you need to consider a different artinian ring to witness the failure of formal smoothness.
-Look instead at $\mathbb{R}[\epsilon]/\epsilon^3 \to \mathbb{R}[\epsilon]/\epsilon^2$ and map $\mathbb{R}[x,y]/(x^2-y^2)$ to $\mathbb{R}[\epsilon]/\epsilon^2$ by $x \mapsto \epsilon$, $y \mapsto 0$. If you wanted to lift this to a map to $\mathbb{R}[\epsilon]/\epsilon^3$, then you would have $x \mapsto \epsilon + a \epsilon^2$ and $y \mapsto b \epsilon^2$, but then $x^2 = \epsilon^2$ and $y^2=0$.
-The "geometric" intuition here is that the Zariski tangent space to $x^2=y^2$ at $0$ is two-dimensional, but those tangent vectors which have slope other than $\pm 1$ cannot be extended to higher order jets.<|endoftext|>
-TITLE: Construction of an infinite set such that any two numbers from the set are relatively prime
-QUESTION [6 votes]: This question is taken from a contest in India. 
-
-Prove that we can construct an infinite set of positive integers of the form $2^{n}-3$, where $n \in \mathbb{N}$, such that any two numbers from the set are relatively prime
-
-I would like to have an answer for this question, and I would also like to know why $2^{n}-3$ has importance here. Can this question be generalized? That is:
-
-Can we construct an infinite set of positive integers of the form $k^{n} -(k+1)$ such that any two numbers from the set are relatively prime?
-
-REPLY [4 votes]: Further to Moron's answer, consider any sequence of the form $x_n = a^n + b$ for fixed integers $a, b$. Is there an infinite set of the $x_n$ for which any two are coprime?
-First, any common factor of $a$ and $b$ will divide every $x_n$. Furthermore, for any $n$,
-$$
-x_n=(a^{n-1}+a^{n-2}+\cdots+a+1)(a-1)+(b+1),
-$$
-so any common factor of $a - 1$ and $b + 1$ will divide every $x_n$. The answer to the question is no if either $a, b$ or $a - 1, b + 1$ have a common prime factor.
-If $a, b$ are coprime and every prime factor of $b + 1$ divides $a$, then the answer is yes.
-Let $S = \{p_1,\dots,p_m\}$ be a finite set of primes not dividing $a$. Taking $n$ to be a multiple of $\prod_k (p_k - 1)$ gives $x_n \equiv 1 + b \not\equiv 0 \pmod{p_k}$, so $x_n$ is not a multiple of $p_k$. If $S$ is the set of prime factors of any finite collection of the $x_n$, this shows that we can find a new $x_n$ coprime to each of the ones so far and, by induction, we can find an infinite set.
-This shows that the answer is yes in the cases you mention, $a = 2$, $b = -3$ and $a = k$, $b = -(k + 1)$. It does, however, leave open the cases where $a, b$ are coprime and $b + 1$ has prime factors not dividing $a$ or $a - 1$.<|endoftext|>
-TITLE: Implicit Differentiation
-QUESTION [8 upvotes]: I was just wondering where the $y'$ (that is, the $dy/dx$) in implicit differentiation comes from.
-$$
-x^2 + y^2 = 25
-$$
-$$
-\frac{d}{dx} x^2 + \frac{d}{dy} y^2 \cdot \boldsymbol{\frac{dy}{dx}} = \frac{d}{dx} 25
-$$
-$$
-2x + 2y (dy/dx) = 0
-$$
-$$
-(dy/dx) = -x/y
-$$
-Where does the bold part come from? Wikipedia says it's a byproduct of the chain rule, but it's just not clicking for me.
-
-REPLY [9 votes]: Isaac and Ryan have already answered your question in words. Now, in symbols, the chain rule gives:
-$$\frac{d(y^2)}{dx} = \frac{d(y^2)}{dy}\frac{dy}{dx} = 2y\frac{dy}{dx}$$<|endoftext|>
-TITLE: Difference between axioms, theorems, postulates, corollaries, and hypotheses
-QUESTION [66 upvotes]: I've heard all these terms thrown about in proofs and in geometry, but what are the differences and relationships between them? Examples would be awesome! :)
-
-REPLY [2 votes]: Axiom: Not proven and known to be unprovable using other axioms
-Postulate: Not proven but not known if it can be proven from axioms (and theorems derived only from axioms)
-Theorem: Proved using axioms and postulates
-For example -- the parallel postulate of Euclid was used unproven, but for many millennia a proof was thought to exist for it in terms of other axioms. Later it was definitively shown that it could not be so proven (by, e.g., exhibiting other, consistent geometries). At that point it could be converted to axiom status for the Euclidean geometric system.
-I think everything being marked as postulates is a bit of a disservice, but I also recognize that it would be almost impossible to track whether any nontrivial theorem somewhere depends on a postulate rather than an axiom; also, standards for what constitutes 'proof' change over time.
-But I do think the triple structure is helpful for teaching beginning students. E.g. 
you can prove congruence of triangles via SSS with some axioms, but it can be damnably hard and confusing/circular/nit-picky, so it makes sense to teach it as a postulate at first, use it, and then come back and show a proof.<|endoftext|>
-TITLE: Finite group with isomorphic normal subgroups and non-isomorphic quotients?
-QUESTION [29 upvotes]: I know it is possible for a group $G$ to have normal subgroups $H, K$, such that $H\cong K$ but $G/H\not\cong G/K$, but I couldn't think of any examples with $G$ finite. What is an illustrative example?
-
-REPLY [37 votes]: Take $G = \mathbb{Z}_4 \times \mathbb{Z}_2$, $H$ generated by $(0,1)$, $K$ generated by $(2,0)$. Then $H \cong K \cong \mathbb{Z}_2$ but $G/H \cong \mathbb{Z}_4$ while $G/K \cong \mathbb{Z}_2 \times \mathbb{Z}_2$.<|endoftext|>
-TITLE: sigma algebra generated by random variable
-QUESTION [5 upvotes]: Consider the probability space $([0,1], B[0,1], L)$, where $B[0,1]$ consists of the Borel sets intersected with $[0,1]$ and $L$ is the Lebesgue measure. How do I find the sigma algebra generated by a random variable defined on this space, $X = 1_{[0,1/2]}$? Secondly, how do I determine whether random variables defined on this space are independent or not, e.g., are $X = 1_{[0,1/2]}$ and $Y = 1_{[1/4,3/4]}$ independent?
-For finding the sigma algebra generated by $X$, I look at the preimages $X^{-1}(B)$ of Borel sets $B$, but what will these preimages be? For showing independence, it should be sufficient to show the sigma algebras generated by $X$ and $Y$ are independent, right?
-
-REPLY [10 votes]: The sigma-algebra generated by $1_{[0,1/2]}$ is simply
-$$
-\bigl\{\emptyset,[0,1],[0,1/2],(1/2,1]\bigr\}.
-$$
-It consists of the preimages under the function $1_{[0,1/2]}$ of all Borel sets in the codomain of the function $1_{[0,1/2]}$, namely, $(\mathbb R,B(\mathbb R))$. (Notice that the preimage $1_{[0,1/2]}^{-1}(M)$ is completely determined by the information of whether 0 and 1 do or do not belong to $M$ respectively.)
-The situation for $1_{[1/4,3/4]}$ is similar.
-The random variables $1_{[0,1/2]}$ and $1_{[1/4,3/4]}$ on $([0,1],B[0,1],L)$ are indeed independent: For this you have to check that $L(A\cap B)=L(A)\cdot L(B)$ for all $A\in 1_{[0,1/2]}^{-1}(B(\mathbb R))$ and $B\in 1_{[1/4,3/4]}^{-1}(B(\mathbb R))$.
-The most interesting case is $L([0,1/2]\cap [1/4,3/4])=L([0,1/2])\cdot L([1/4,3/4])$.
-Check that both sides are equal!
-Also think about the following question: Are the random variables $1_{[0,1/2]}$ and $1_{[1/4,1]}$ on $([0,1],B[0,1],L)$ also independent?<|endoftext|>
-TITLE: Lebesgue Dominated Convergence example
-QUESTION [9 upvotes]: How do I use the Lebesgue Dominated Convergence Theorem to evaluate
-$$\lim_{n \to \infty}\int_{[0,1]}\frac{n\sin(x)}{1+n^2\sqrt x}dx$$
-What dominating function to use here? 
-REPLY [2 votes]: $\newcommand{\dd}{{\rm d}}\newcommand{\pars}[1]{\left( #1 \right)}$
-
-Since $\sqrt{x\,} \geq x$ for $x \in \left[0,1\right]$,
-$$
-0 < \int_{0}^{1}{n\sin\pars{x} \over 1 + n^{2}\,\sqrt{x\,}}\,\dd x
-<
-\int_{0}^{1}{n \over 1 + n^{2}\,\sqrt{x\,}\,}\,\dd x
-\leq
-\int_{0}^{1}{n \over 1 + n^{2}x}\,\dd x
-=
-{1 \over n}\,\ln\pars{1 + n^{2}}
-$$
-
-$$
-\lim_{n \to \infty}{1 \over n}\,\ln\pars{1 + n^{2}}
-=
-\lim_{n \to \infty}{2n/\pars{1 + n^{2}} \over 1} = 0
-$$
-
-$$
-\lim_{n \to \infty}\int_{0}^{1}{n\sin\pars{x} \over 1 + n^{2}\,\sqrt{x\,}}\,\dd x
-=
-0
-$$<|endoftext|>
-TITLE: Getting better at proofs
-QUESTION [118 upvotes]: So, I don't like proofs.
-To me building a proof feels like constructing a steel trap out of arguments to make true what you're trying to assert.
-Oftentimes the proof in the book is something that I can follow if I study it, but would find hard to come up with on my own. In other words I can't make steel traps, but I feel fine buying them from others.
-How does one acquire the ability to create steel traps with fluency and ease? Are there any particular reference books that you found helped you really get how to construct a proof fluently? Or is it just practice?
-
-REPLY [129 votes]: I'd like to second one part of Qiaochu Yuan's answer: the recommendation to read Polya's book. Unlike many other books I've seen (albeit none of the others recommended above), it actually does contain guidance on how to construct a proof "out of nothing".
-And that's one problem with the "practise, practise, practise" mantra. Practise what? Where are the lists of similar-but-not-quite-identical things to prove to practise on? I can find lists of integrals to do and lists of matrices to solve, but it's hard coming up with lists of things to prove.
-Of course, practise is correct. But just as with anything else in mathematics, there are guidelines to help get you started.
-The first thing to realise is that reading others' proofs is not guaranteed to give you any insight as to how the proof was developed. A proof is meant to convince someone of a result, so a proof points to the theorem (or whatever) and knowing how the proof was constructed does not (or at least, should not) lend any extra weight to our confidence in the theorem. Proofs can be written in this way, and when teaching we should make sure to present some proofs in this way, but to do it every time would be tedious.
-So, what are the guidelines for constructing a proof? You'll probably get different answers from different mathematicians so these should be construed as being my opinion and not a(n attempt at a) definitive answer.
-My recommendation is that you take the statement that you want to prove and apply the following steps to it as often as you can:
-
-Expand out unfamiliar terms.
-Replace generic statements by statements about generic objects.
-Include implicit information.
-
-Once you've done all that, the hope is that the proof will be much clearer.
-Here's an example. 
-
-Original statement:
-
-The composition of linear transformations is again linear.
-
-Replace generic statements:
-
-If $S$ and $T$ are two composable linear transformations then their composition, $S T$, is again linear.
-
-It is important to be precise here. The word "composable" could have been left out, as the statement only makes sense if $S$ and $T$ are composable, but until you are completely familiar with this kind of process, it is better to be overly precise than otherwise. In this case, leaving in the word "composable" reminds us that there is a restriction on the domains and codomains which will be useful later. (However, one has to draw the line somewhere: even the word "composable" is not quite enough since it leaves open the question as to whether it is $S T$ or $T S$!)
-Include implicit information:
-
-If $S \colon V \to W$ and $T \colon U \to V$ are linear transformations then $S T \colon U \to W$ is again linear.
-
-Here's where remembering that $S$ and $T$ are composable in the previous step helps keep things clear. As $S$ and $T$ are composable, we only need $3$ vector spaces. Then, since we explicitly have the vector spaces, the fact that $S$ and $T$ are composable is plain, though some may prefer to keep that fact in the statement. Also, some may like to have the fact that $U$, $V$, and $W$ are vector spaces explicitly stated.
-Expand out definitions:
-
-If $S \colon V \to W$ and $T \colon U \to V$ are such that $S(v_1 + \lambda v_2) = S(v_1) + \lambda S(v_2)$ and $T(u_1 + \mu u_2) = T(u_1) + \mu T(u_2)$ for all $v_1, v_2 \in V$, $u_1, u_2 \in U$, and $\lambda, \mu \in \mathbb{R}$, then $S T(x_1 + \eta x_2) = S T(x_1) + \eta S T(x_2)$ for all $x_1, x_2 \in U$ and $\eta \in \mathbb{R}$.
-
-Note that I have been careful not to repeat myself with the newly introduced symbols. It would be technically alright to reuse $u_1$ and $u_2$ in place of $x_1$ and $x_2$ since these are local declarations (restricted by the phrases "for all ..."). However, humans are not good at differentiating between local and global declarations so it is best not to reuse symbols unless the scope is very clear.
-Replace generic statements:
-
-If $S \colon V \to W$ and $T \colon U \to V$ are such that $S(v_1 + \lambda v_2) = S(v_1) + \lambda S(v_2)$ and $T(u_1 + \mu u_2) = T(u_1) + \mu T(u_2)$ for all $v_1, v_2 \in V$, $u_1, u_2 \in U$, and $\lambda, \mu \in \mathbb{R}$, then whenever $x_1, x_2 \in U$ and $\eta \in \mathbb{R}$, $S T(x_1 + \eta x_2) = S T(x_1) + \eta S T(x_2)$.
-
-Up to now, the rephrasing has not taken into account the fact that there is a conclusion and a hypothesis. This rephrasing modifies a part of the conclusion to turn it from a generic statement "$P(p)$ is true for all $p \in Q$" to a statement about a generic object "whenever $p \in Q$ then $P(p)$ is true". We do not do this for the similar statements in the hypothesis. This is because these two pieces are treated differently in the proof.
-Replace generic statements, and reorganise to bring choices to the fore:
-
-Let $S \colon V \to W$ and $T \colon U \to V$ be such that $S(v_1 + \lambda v_2) = S(v_1) + \lambda S(v_2)$ and $T(u_1 + \mu u_2) = T(u_1) + \mu T(u_2)$ for all $v_1, v_2 \in V$, $u_1, u_2 \in U$, and $\lambda, \mu \in \mathbb{R}$. Let $x_1, x_2 \in U$ and $\eta \in \mathbb{R}$. Then $S T(x_1 + \eta x_2) = S T(x_1) + \eta S T(x_2)$.
-
-In this form, the distinction between hypothesis and conclusion is all the clearer. Parts of the hypothesis use the word "Let", parts of the conclusion use the word "Then".
-
-With this formulation, the proof essentially writes itself. 
With all its gory details:
-
-Proof
-
-Let $S \colon V \to W$ and $T \colon U \to V$ be such that $S(v_1 + \lambda v_2) = S(v_1) + \lambda S(v_2)$ and $T(u_1 + \mu u_2) = T(u_1) + \mu T(u_2)$ for all $v_1, v_2 \in V$, $u_1, u_2 \in U$, and $\lambda, \mu \in \mathbb{R}$. Let $x_1, x_2 \in U$ and $\eta \in \mathbb{R}$. Then:
-$$
-S T(x_1 + \eta x_2) = S \big( T(x_1) + \eta T(x_2)\big)
-$$
-using the hypothesis on $T$ as $x_1, x_2 \in U$ and $\eta \in \mathbb{R}$. So:
-$$
-S T(x_1 + \eta x_2) = S T(x_1) + \eta S T(x_2)
-$$
-using the hypothesis on $S$ as $T(x_1), T(x_2) \in V$ and $\eta \in \mathbb{R}$. Hence the conclusion is true.
-Notes:
-1. This could be condensed, but the important thing here is how to find it, not what the final form should be.
-2. Notice that I wrote "as $x_1, x_2 \in U$" rather than "with $u_1 = x_1$ and $u_2 = x_2$". This is partly style, and partly because in the statement of linearity, $u_1$ and $u_2$ are placeholders into which we put $x_1$ and $x_2$. So saying $u_1 = x_1$ is semantically incorrect as it equates a virtual vector with an actual vector. This is a very minor point, though.
-
-Finally, I would like to disagree with one part of Qiaochu's answer. I actually like the imagery of a steel trap. A proof is a bit like a trap: we want to capture the theorem in a trap so that it can't wriggle out. We construct the proof so that there is no possibility of escape. Eventually, yes, we want the proof to be beautiful but when it's first constructed we just want it to do the job. Only once the theorem is caught can we spend a little time decorating the cage to make it look pretty and set it off to its best advantage. So build the trap because theorems can be dangerous! An escaped theorem can do untold damage, rampaging across the countryside, laying waste like an unchecked viking.
-(Okay, not quite finally. The step-by-step proof above was taken from a page I wrote for my students on the nature of proof. The original can be found here.)<|endoftext|>
-TITLE: How to prove this binomial identity $\sum_{r=0}^n {r {n \choose r}} = n2^{n-1}$?
-QUESTION [18 upvotes]: I am trying to prove this binomial identity $\sum_{r=0}^n {r {n \choose r}} = n2^{n-1}$ but am not able to think of anything except induction, which is of course not necessary (I think) here, so I am curious to prove this in a more general way.
-The left side of the identity occurs while solving another problem (concerning the binomial theorem), so I am more interested in deriving the right side from the left side; otherwise I have to remember it from now on.
-EDIT: I am more interested in an algebraic proof rather than a combinatorial argument or something involving calculus (however, I liked svenkatr's and Bill Dubuque's solutions), hence I am removing the combinatorics tag.
-
-REPLY [8 votes]: Here's a dirty trick. $\frac{1}{2^n} \sum_{r \ge 0} r {n \choose r}$ is the expected size of a random subset of an $n$-element set. But by linearity of expectation, this is $n$ times the probability that any given element is in a subset, which is $\frac{1}{2}$. So
-$$\frac{1}{2^n} \sum_{r \ge 0} r {n \choose r} = \frac{n}{2}.$$
-Edit: And as long as I have this written up somewhere, I might as well use it. In this math.SE answer I prove the following. If $a_n, b_n$ are sequences satisfying $b_n = \sum_{k=0}^n {n \choose k} a_k$, and if $A(x) = \sum_{n \ge 0} a_n x^n, B(x) = \sum_{n \ge 0} b_n x^n$, then
-$$B(x) = \frac{1}{1 - x} A \left( \frac{x}{1 - x} \right).$$
-In this example $a_n = n$. 
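-One quick way to compute the corresponding $A(x)$ is to differentiate the geometric series:
-$$
-\sum_{n \ge 0} x^n = \frac{1}{1-x}
-\quad\Longrightarrow\quad
-\sum_{n \ge 0} n x^n = x\,\frac{d}{dx}\left(\frac{1}{1-x}\right) = \frac{x}{(1-x)^2}.
-$$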
This, among other possible arguments, gives $A(x) = \frac{x}{(1 - x)^2}$ in this case; it is a special case of an identity I use in the above answer. It follows that
-$$B(x) = \frac{1}{1 - x} \left( \frac{ \frac{x}{1-x} }{ \left( 1 - \frac{x}{1-x} \right)^2 } \right) = \frac{x}{(1 - 2x)^2} = \frac{1}{2} \frac{2x}{(1 - 2x)^2} = \frac{1}{2} \sum_{n \ge 0} n \cdot 2^n x^n$$
-as desired.<|endoftext|>
-TITLE: Upper-triangular matrix is invertible iff its diagonal is invertible: C*-algebra case
-QUESTION [11 upvotes]: Exercise 1.14 of the book by Rørdam, Larsen and Laustsen, "An Introduction to K-theory for C*-algebras", asks to prove that an upper triangular matrix with elements from some C*-algebra $A$ is invertible in $M_n(A)$ iff all diagonal entries are invertible in $A$.
-Trying to solve this I've found that if $a$ is invertible and $\delta$ is such that $(a^{-1}\delta)^n=0$ then $a+\delta$ is invertible too and its inverse is given by $(a+\delta)^{-1}=\sum_{k=0}^{n} (-a^{-1}\delta)^ka^{-1}$. Using this fact I can show that if the diagonal is invertible, then an upper-triangular matrix with this diagonal is invertible too, and also that if an upper-triangular matrix has an upper-triangular inverse, then its diagonal is invertible. So all I need to prove is that if an upper-triangular matrix is invertible, then its inverse is upper-triangular. I've failed to prove this.
-Also there is a hint for this exercise: "Solve the equation $ab=1$ where $a$ is as above [i.e. an upper-triangular matrix] and where $b$ is an unknown upper triangular matrix". Solution of this equation follows from my reasoning above, but this doesn't help.
-Update (counterexample attempt): I've made one more attempt and it looks to me like I have found a counterexample. However I think there is a mistake in it (because otherwise there is a mistake in the book). Here it is. Let $A=B(l^2(\mathbb{N}))$ --- the algebra of bounded operators on sequences $x=\{x_i\}_{i=1}^\infty$ with $\|x\|^2=\sum_{i=1}^{\infty}|x_i|^2<\infty$. Let $z\in A$ be defined by $(zx)_{2n-1}=0$, $(zx)_{2n}=x_n$, and $t\in A$ be defined by $(tx)_{2n-1}=x_n$, $(tx)_{2n}=0$. Then we have $t^*t=z^*z=tt^*+zz^*=1$, $t^*z=z^*t=0$. From these we have that
-$$\begin{pmatrix}z&tz^* \\ 0&t^* \end{pmatrix}\begin{pmatrix}z^* &0\\ zt^* &t\end{pmatrix}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}$$ and
-$$\begin{pmatrix}z^* &0\\ zt^* &t\end{pmatrix}\begin{pmatrix}z&tz^* \\ 0&t^* \end{pmatrix}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}.$$
-So now my question should say "Where am I wrong?".
-
-REPLY [3 votes]: Hi, sorry to resurrect this post. I have a relevant example which I want to remember, and this seems like an appropriate place to put it.
-
-Let $H$ be a separable Hilbert space with orthonormal basis $\{e_0,e_1,e_2,\ldots\}$. Let $S \in B(H)$ be the unilateral shift determined by $S(e_i) = e_{i+1}$ for all $i$. Let $P_0 \in B(H)$ be the rank-1 projection onto the span of $e_0$. Then, $U = \begin{pmatrix} S & P_0 \\ 0 & S^* \end{pmatrix}$ is a unitary in $M_2(B(H))$, despite being nondiagonal, upper-triangular, and having both diagonal entries noninvertible. 
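-Spelling out the direct check, using the standard relations $S^*S = 1$, $SS^* = 1 - P_0$, $P_0S = S^*P_0 = 0$, and $P_0^2 = P_0$:
-$$
-U^*U=\begin{pmatrix} S^* & 0\\ P_0 & S\end{pmatrix}\begin{pmatrix} S & P_0\\ 0 & S^*\end{pmatrix}=\begin{pmatrix} S^*S & S^*P_0\\ P_0S & P_0^2+SS^*\end{pmatrix}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix},
-$$
-$$
-UU^*=\begin{pmatrix} S & P_0\\ 0 & S^*\end{pmatrix}\begin{pmatrix} S^* & 0\\ P_0 & S\end{pmatrix}=\begin{pmatrix} SS^*+P_0^2 & P_0S\\ S^*P_0 & S^*S\end{pmatrix}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}.
-$$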
-So $U^* U = UU^* = 1$; the "real" reason this works is that, under the isomorphism $M_2(B(H)) \cong B(H^2)$, $U$ works on the natural basis of $H^2$ by
-$$ \cdots \mapsto (0,e_2) \mapsto (0,e_1) \mapsto (0,e_0) \mapsto (e_0,0) \mapsto (e_1,0) \mapsto (e_2,0) \mapsto (e_3,0) \mapsto \cdots $$
-so $U$ is actually (up to a unitary conjugacy) the bilateral shift.<|endoftext|>
-TITLE: Bounding the integral $\int_{2}^{x} \frac{\mathrm dt}{\log^{n}{t}}$
-QUESTION [11 upvotes]: If $x \geq 2$, then how do we prove that $$\int_{2}^{x} \frac{\mathrm dt}{\log^{n}{t}} = O\Bigl(\frac{x}{\log^{n}{x}}\Bigr)?$$
-
-REPLY [2 votes]: Another way of dealing with this integral is to let $t = e^u$, so you are trying to bound
-$$\int_{\ln 2}^{\ln x}{e^u \over u^n}\,du$$
-Clearly it suffices to replace $\ln 2$ with any fixed constant $c$ since the difference is a constant that doesn't affect asymptotics.
-Integrating by parts gives
-$$\int_c^{\ln x}{e^u \over u^n}\,du = {x \over (\ln{x})^n} - {e^c \over c^n}+\int_c^{\ln x}{ne^u \over u^{n+1}}\,du$$
-If $c$ is large enough (any $c \geq 2n$ works, since then ${n \over u^{n+1}} \leq {1 \over 2}\,{1 \over u^{n}}$ on the range of integration), the integrand on the right is at most half that on the left, so the integral on the right is at most half of the left integral. Subtracting gives
-$$\int_c^{\ln x}{e^u \over u^n}\,du \leq {2x \over (\ln{x})^n} - {2e^c \over c^n}$$
-$$< {2x \over (\ln{x})^n}$$<|endoftext|>
-TITLE: A limit and a coordinate trigonometric transformation of the interior points of a square into the interior points of a triangle
-QUESTION [11 upvotes]: The coordinate transformation (due to Beukers, Calabi and Kolk)
-$$x=\frac{\sin u}{\cos v}$$
-$$y=\frac{\sin v}{\cos u}$$
-transforms the square domain $0\lt x\lt 1$ and $0\lt y\lt 1$ into the triangle domain $u,v>0,u+v<\pi /2$ (in Proofs from the BOOK by M. Aigner and G. Ziegler).
-Since the inverse transformation is
-$$u=\arccos \sqrt{\dfrac{1-x^{2}}{1-x^{2}y^{2}}}$$
-$$v=\arccos \sqrt{\dfrac{1-y^{2}}{1-x^{2}y^{2}}}$$
-it is easy to see that three of the vertices (although not belonging to the domain) are transformed as follows:
-$$(x,y)=(0,0)\mapsto (0,0)=(u,v),$$
-$$(x,y)=(1,0)\mapsto (\pi /2,0)=(u,v),$$
-$$(x,y)=(0,1)\mapsto (0,\pi /2)=(u,v).$$
-Question 1 - But how is the fourth vertex $(x,y)=(1,1)$ transformed? A plot of $\dfrac{1-x^{2}}{1-x^{2}y^{2}}$ suggests that the following limit does not exist:
-$$\underset{(x,y)\rightarrow (1,1)}{\lim }\sqrt{\dfrac{1-x^{2}}{1-x^{2}y^{2}}}.$$
-
-Question 2 - As a second question, I would like to know how one can "discover" a
-transformation of a square into a triangle such as this. Is there any systematic study of this kind of transformation?
-
-REPLY [6 votes]: Question 1: Setting $u=\pi/2-v$ in your first set of formulas gives $(x,y)=(1,1)$, so that point corresponds to the whole hypotenuse $u+v=\pi/2$.
-Question 2: I have no idea. (Luck? Inspiration? Trial and error?)<|endoftext|>
-TITLE: Derived subgroup where not every element is a commutator
-QUESTION [27 upvotes]: Let $G$ be a group and let $G'$ be the derived subgroup, defined as the subgroup generated by the commutators of $G$.
-Is there an example of a finite group $G$ where not every element of $G'$ is a commutator? $G'$ is only generated by commutators, but with all of the properties of commutators (i.e., what happens under conjugation, exponentiation, etc.) I can't think of an example. 
- -REPLY [28 votes]: For any prime $p$ and $n>1$, there are nilpotent groups $G$ of class 2 and order $p^{n(n+1)/2}$ with generators $a_i$ $(1 \le i \le n)$, $b_{ij}$ $(1 \le i < j \le n)$, such that $[a_i,a_j] = b_{ij}$, the $b_{ij}$ are all central in the group, and all generators have order $p$. -Then $G'$ is the group of order $p^{n(n-1)/2}$ generated by the $b_{ij}$. -In any group, we have $[ax,by] = [a,b]$ when $x,y$ are central in the group, so $G$ has at most $p^{2n}$ distinct elements that are commutators. -Hence, for any fixed $k>0$, by choosing $n$ sufficiently large we can find $G$ such that not all elements of $G'$ are products of at most $k$ commutators.<|endoftext|> -TITLE: Constructing Continuous functions at given points -QUESTION [25 votes]: Ok. This question may sound very easy, but actually I am in great need of an answer. I have been facing trouble in constructing functions which are continuous only at some particular sets. -For example, the standard example of a function which is continuous at only one point is the function $f(x) = x, \ x \in \mathbb{Q}$ and $f(x) = -x, x \in \mathbb{R} \setminus \mathbb{Q}$. Similarly, I would like to know how to construct a function which is: - -Continuous at exactly $2,3,4$ points. -Continuous exactly at the integers. -Continuous exactly at the natural numbers. -Continuous exactly at the rationals. - -I would like to see many examples (with proof!), so that I won't struggle when somebody asks me to construct such functions. - -REPLY [32 votes]: One simple way of constructing a function which is continuous only at a finite number of points, $x=a_1,\ldots,a_n$, is to do a slight modification to the function you give: take a polynomial $p(x)$ that has roots exactly at $x=a_1,\ldots,a_n$ (e.g., $p(x) = (x-a_1)\cdots(x-a_n)$), and then define -$$ g(x) = \left\{\begin{array}{ll} -p(x) & \text{if $x\in\mathbb{Q}$;}\\ -0 & \text{if $x\notin\mathbb{Q}$.} -\end{array}\right.$$ -The function is continuous at $a_1,\ldots,a_n$, and since $p(x)\neq 0$ for any $x\notin\{a_1,\ldots,a_n\}$ then $g(x)$ is not continuous at any point other than $a_1,\ldots,a_n$. Other possibilities should suggest themselves easily enough. -A function that is continuous exactly at the integers: a similar idea will work: find a function that has zeros exactly at the integers, for example $f(x)=\sin(\pi x)$, and then take -$$g(x) = \left\{\begin{array}{ll} -\sin(\pi x) & \text{if $x\in\mathbb{Q}$;}\\ -0 & \text{if $x\notin\mathbb{Q}$.} -\end{array}\right.$$ -A function continuous exactly at the natural numbers: take a function that is continuous at the integers, and redefine it as the characteristic function of the rationals in appropriate places (what happens at $0$ depends on whether you believe $0$ is in the natural numbers or not). Assuming that $0\in\mathbb{N}$, one possibility is: -$$g(x) = \left\{\begin{array}{ll} -\sin(\pi x)&\text{if $x\in\mathbb{Q}$ and $x\geq 0$;}\\ -x & \text{if $x\in\mathbb{Q}$ and $-\frac{1}{2}\lt x\leq 0$;}\\ -1 & \text{if $x\in\mathbb{Q}$ and $x\leq -\frac{1}{2}$;}\\ -0 & \text{if $x\notin\mathbb{Q}$.} -\end{array}\right.$$ -A function continuous exactly on the rationals. This one is a bit trickier. There is no such function. This follows because the set of discontinuities of a real valued function must be a countable union of closed sets. -Perhaps then, we might anticipate the next question: -A function that is continuous exactly on the irrationals.
An example is the following: let $s\colon\mathbb{N}\to\mathbb{Q}$ be an enumeration of the rationals (that is, a bijection from $\mathbb{N}$ to $\mathbb{Q}$). Define $f(x)$ as follows: -$$f(x) = \sum_{\stackrel{n\in\mathbb{N}}{s_n\leq x}} \frac{1}{2^n}.$$ -The function has a jump at every rational, so it is not continuous at any rational. However, if $x$ is irrational, let $\epsilon\gt 0$. Then there exists $N$ such that $\sum_{k\geq N}\frac{1}{2^k}\lt \epsilon$. Find a neighborhood of $x$ which excludes every $s_m$ with $m\leq N$, and conclude that the difference between the value of $f$ at $x$ and at any point in the neighborhood is at most $\sum_{k\geq N}\frac{1}{2^k}$. -Edit: As I was reminded in the comments by jake, in fact the "standard example" of a function that is continuous at every irrational and discontinuous at every rational is Thomae's function. The example I give is a monotone function, and although it is discontinuous at every rational, it is continuous from the right at every number. - -REPLY [12 votes]: Continuous at 2, 3, 4: $f(x)=(x-2)(x-3)(x-4)$ if $x$ is rational, $f(x)=0$ if $x$ is irrational. -Continuous at the integers: $f(x)=\sin(\pi x)$ if $x$ is rational, 0 if $x$ is irrational. -Continuous at the natural numbers: $f(x)=\sin(\pi x)$ if $x$ is rational and not a nonpositive integer, 0 if $x$ is irrational, 1 if $x$ is a nonpositive integer. -Continuous exactly at the rationals: Impossible, because the set of rational numbers is not a $G_\delta$.<|endoftext|> -TITLE: understanding intervals in trigonometry -QUESTION [5 votes]: So this is something I partially understand; I'm not sure what I do and don't understand, because most of my understanding is based on assumptions... sorry if I sound a little stupid! -The equation $6 \cos x - \sin x = 5 $ needs to be turned into the form $ \Re \cos (x + \alpha) $ then solved for in the interval $ -1/2\pi < x < 1/2\pi $. -I turned it into this: $ \sqrt{37} \cos( x + 0.165) $ then - $ \sqrt{37} \cos( x + 0.165) = 5$ -$ \cos( x + 0.165) = \frac{5}{\sqrt{37}}$ -$ x + 0.165= \arccos( \frac{5}{\sqrt{37}} )$ -$ x = \arccos( \frac{5}{\sqrt{37}} ) - 0.165$ -$ x = 0.44$ YAY! but... the answer is 0.44 and -0.771 -I'm thinking it's asking what other value of $x$ would make the above equation equal 5. Correct? How do I do this? Could someone explain what is meant by "solve this equation for that interval", and how does one go about it? -A problem I think might be related that I JUST cannot get my head around is this one: - -The angle made by a wasp's wings horizontally is given by the equation $ \theta = 0.4 \sin 600t $, where $t$ is time in seconds. How many times a second does its wing oscillate? - -I tried solving this, honest, but I do not know where to begin! - -REPLY [2 votes]: As the cosine is periodic, there are many values of $\theta$ which have the same $\cos(\theta)$. So they are just asking for all the values between $-\pi/2$ and $\pi/2$ that solve the equation. Your solution got one of them: $\cos(0.605)$ does equal $5/\sqrt{37}$. But so does $\cos(-0.605)$. You were supposed to find that one, too. It leads to the solution $-0.771$ when the $0.165$ is subtracted. -For your second problem: how much does the argument of the sine function (the thing you take the sine of) have to increase to go through one cycle? How much does $t$ have to increase to go through one cycle? This gives you the period.
The frequency is one divided by this.<|endoftext|> -TITLE: Argument of the sum of two complex numbers -QUESTION [5 votes]: Let $r$, $s$ be positive real numbers and $\theta$, $\phi$ real numbers with $|\theta -\phi|<\pi$. Then an argument of $re^{i\theta}+se^{i\phi}$ lies between $\theta$ and $\phi$. -Can someone give a short, clean proof of the statement above that doesn't rely on geometric intuition? Of course the proof may use the not so trivial fact that every nonzero complex number can be written in polar form, as well as trigonometric functions. -I have tried, but I have no idea how to start, probably because it seems so obvious. - -REPLY [7 votes]: You want to show that if you write $re^{i\theta} + se^{i\phi}$ as $te^{i\chi}$, then $\chi$ lies between $\theta$ and $\phi$. Without loss of generality, assume $\theta\leq\phi$. Factoring out $e^{i\theta}$ you get $re^{i\theta}+se^{i\phi} = e^{i\theta}(r + se^{i(\phi-\theta)})$. Since multiplying by $e^{i\theta}$ is just a rotation by an angle $\theta$, it is enough to consider the case where $\theta=0$ and $0\lt \phi\lt \pi$. -In that case, you have $r + se^{i\phi} = r + s(\cos(\phi)+i\sin(\phi))$, and you want to express it in the form $t(\cos(\chi) + i \sin(\chi))$. Looking at real and imaginary parts, you see that $t\sin(\chi)=s\sin(\phi)$, and $r+s\cos(\phi) = t\cos(\chi)$. -Assume first that $0\lt \phi\lt \frac{\pi}{2}$. Then $\chi$ must also lie in the first quadrant, since we need both $\sin(\chi)$ and $\cos(\chi)$ to be positive (since $t$, $s\sin(\phi)$, and $r+s\cos(\phi)$ are all positive). If, on the other hand, $\frac{\pi}{2}\lt \phi\lt \pi$, then $\cos(\phi)$ is negative. If $r+s\cos(\phi)\gt 0$, then we need $\cos(\chi)\gt 0$, so $\chi$ is in the first quadrant and automatically smaller than $\phi$ and we are done. If $r+s\cos(\phi)$ is negative then we need $\chi$ in the second quadrant. -In the former case, where $0\lt \chi,\phi\lt \frac{\pi}{2}$, from $\sin(\chi)=\frac{s}{t}\sin(\phi)$ and the fact that $\sin(x)$ is increasing on $0\leq x\leq\frac{\pi}{2}$, we get $0\lt\chi\lt\phi$ if and only if $\frac{s}{t}\lt 1$, if and only if $s\lt t$. -Now note that $t^2 = ||r+se^{i\phi}||^2 = r^2 + s^2 + 2rs\cos(\phi) \gt s^2$, since all of $r$, $s$, and $\cos(\phi)$ are positive. Since $s$ and $t$ are both positive, $s\lt t$, which shows that $0\lt\chi\lt\phi$, as desired. -In the other case, where $\frac{\pi}{2}\lt\phi\lt \pi$ and $r+s\cos(\phi)\lt 0$, we know $\chi$ is also in the second quadrant where $\sin(x)$ is decreasing, so from $\sin(\chi)=\frac{s}{t}\sin(\phi)$ we get that $\frac{\pi}{2}\lt \chi\lt \phi$ if and only if $\frac{s}{t}\gt 1$, if and only if $s\gt t$. Here, since $\cos(\phi)\lt 0$, you have again $t^2 = ||r+se^{i\phi}||^2 = s^2 + r(r + 2s\cos(\phi)) \lt s^2$ (since $r+s\cos(\phi)\lt 0$ and $s\cos(\phi)\lt 0$, so $r+2s\cos(\phi)\lt 0$ in this situation), so you get $t\lt s$ and hence $\frac{\pi}{2}\lt\chi\lt\phi$, as desired. - -REPLY [5 votes]: It's enough to do the case $\theta=0$ (just factor out $e^{i\theta}$). Assume $0<\phi<\pi/2$. Then -$$0 < \tan\arg(r+s e^{i\phi}) = \frac{s\sin\phi}{r+s\cos\phi} < \frac{s\sin\phi}{s\cos\phi} = \tan\phi,$$ -so the argument of the sum is between $0$ and $\phi$ as desired. Something similar should work when $\phi$ is obtuse.<|endoftext|> -TITLE: Finding all n×n permutation matrices -QUESTION [6 votes]: If I have a doubly stochastic matrix, how can I find the set of all basic feasible solutions? -Here's Wikipedia on doubly stochastic matrices.
- -REPLY [9 votes]: Don Knuth's Volume 4, Fascicle 2, of The Art of Computer Programming has a long section on generating all permutations, including algorithms for doing so. I found a draft here online. (Update: The link still works, but it is now to a zipped file. However, Knuth has since published Volume 4A: Combinatorial Algorithms, Part 1, which includes this material on generating permutations as Section 7.2.1.2. ) -Then, going from a permutation to a permutation matrix is fairly straightforward. For example, suppose you have the permutation 1342 of the numbers 1, 2, 3, and 4. That can be represented in two-line form as -$$\begin{matrix}1&2&3&4\\1&4&2&3\end{matrix}$$ -because the permutation sends 1 to the first position, 2 to the fourth position, etc. -Then the permutation matrix is the matrix with 1's in entries (1,1), (2,4), (3,2), (4,3), and 0's elsewhere; i.e., -$$\begin{pmatrix}1&0&0&0\\0&0&0&1\\0&1&0&0\\0&0&1&0\end{pmatrix}$$<|endoftext|> -TITLE: Balancing a Latin Square -QUESTION [6 upvotes]: I'm searching for an algorithm that forms a balanced (or quasi-complete) latin square, in which every element is a horizontal neighbor to every other element exactly twice, and a vertical neighbor to every other element exactly twice. -I've found one example (for n = 5), but am not clear about a couple of steps: -Step 1: Write 1 2 ... n as the first row -12345 - -Step 2: For i even in the first row, fill in the diagonal from upper left to lower right starting with i and alternating with i - 1 -12345 - 1 3 - 2 - 1 - -Step 3: For i odd and less than n in the first row fill in the diagonal from upper right to lower left beginning with i and alternating with i + 1 -12345 - 41 3 -3 2 - 1 - -Step 4: Fill in n for the main off diagonal -12345 - 4153 -3 52 - 5 1 -5 - -Step 5: For the last column write n n-2 n-1 n-3 ... 1 2 -12345 - 4153 -3 524 - 5 1 -5 2 - -Step 6: For the even entries of the last column, fill in the diagonal from upper left to lower right beginning with i and alternating with i - 1 -? - -Step 7: Complete by symmetry about the main diagonal -? - -For n = 5, the following is obtained: -12345 -24153 -31524 -45231 -53412 - -Questions... -[RE: Step 5] I was able to fill in that portion of the square, but am not sure what would come after n-3 in a larger sequence, as it would go n n-2 n-1 n-3 ??? ... 1 2 -[RE: Step 6] Are even entries even numbers, or does that refer to even positions in the column? Is "i" in this case the number at the upper left or is it the even entry? I don't see how the completed square corresponds to the instructions in this step. -[RE: Step 7] What does "complete by symmetry about the main diagonal" mean? -Otherwise if there's a less involved algorithm for balancing a latin square, I'd like to know how it goes. - -REPLY [2 votes]: The Latin square L: -12345 -24153 -31524 -45231 -53412 - -is isomorphic to the Cayley table of the cyclic group of order 5 via the isomorphism (3,5,4). That is, if you permute the rows, columns and symbols of L according to the permutation (3,5,4) you will generate -12345 -23451 -34512 -45123 -51234 - -The n=7 case gives rise to the Latin square -1234567 -2416375 -3152746 -4627153 -5371624 -6745231 -7563412 - -which is isomorphic to the Cayley table of the cyclic group of order 7 via the isomorphism (3,7,5,6,4). 
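-
-A quick computational check of this isomorphism claim for the $n=5$ square (my own addition, not part of the original answer; "permuting rows, columns and symbols by $\sigma$" is read here as $M_{\sigma(i),\sigma(j)} = \sigma(L_{i,j})$, one common convention):
-
-L = [[1, 2, 3, 4, 5],
-     [2, 4, 1, 5, 3],
-     [3, 1, 5, 2, 4],
-     [4, 5, 2, 3, 1],
-     [5, 3, 4, 1, 2]]
-sigma = {1: 1, 2: 2, 3: 5, 4: 3, 5: 4}   # the cycle (3,5,4)
-
-n = 5
-M = [[0] * n for _ in range(n)]
-for i in range(1, n + 1):
-    for j in range(1, n + 1):
-        # permute rows, columns and symbols simultaneously
-        M[sigma[i] - 1][sigma[j] - 1] = sigma[L[i - 1][j - 1]]
-
-cayley = [[(i + j) % n + 1 for j in range(n)] for i in range(n)]
-print(M == cayley)   # True: we recover the cyclic-group table
-
-Swapping in the $n=7$ square and the cycle (3,7,5,6,4) checks the second claim the same way (possibly with the inverse cycle, depending on which permuting convention is intended).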
-With this in mind, my suspicion is that these Latin squares are special cases of those generated in the constructions described by Rosemary Bailey in Quasi-complete Latin squares: construction and randomization, although I haven't gone into the details. -As for the smaller questions: - -In papers, by an entry I exclusively refer to a triple $(i,j,l_{ij})$, where $i$ is the row index, $j$ is the column index and $l_{ij}$ is the symbol in the $(i,j)$-th cell. For example, $(2,3,1)$ would be an entry of the first Latin square. Importantly, in an entry, the position in the Latin square is important, whereas in a symbol, there is no notion of position (aside from being somewhere in the Latin square). Consequently, I would use the term "even symbol" (although note that the symbols you use make no difference to whether or not you have a quasi-complete Latin square). -Latin squares $L$ such that $L^T=L$ are called symmetric Latin squares (just as for symmetric matrices). When giving details of a construction of these types of Latin squares, I find myself writing "...then $L$ admits a unique completion to a symmetric Latin square" or similar. There's no need to identify every entry since some entries are determined by other entries.<|endoftext|> -TITLE: More Cantor-like constructions -QUESTION [5 votes]: Two questions: -(1.) Construct a subset of $[0,1]$ in the same manner as the Cantor set, except that at the $k$-th stage, each interval removed has length $\delta 3^{-k}$, $0<\delta <1$. Show that the resulting set is perfect, has measure $1-\delta$, and contains no intervals. -Showing that it's perfect is not difficult. The resulting set is an intersection of closed sets, so is closed. Any point in the set is a limit point of endpoints of intervals (obviously I have more details in my written solution, but that's the idea.) But I don't see how the measure is $1-\delta$... At stage $k$, we remove $2^{k-1}$ intervals of length $\delta 3^{-k}$, so the measure of the resulting set is $$1-\sum\limits_{k=0}^{\infty}{2^{k}\cdot\dfrac{\delta}{3^{k+1}}} = 1-\dfrac{\delta}{3}\cdot\sum\limits_{k=0}^{\infty}{\dfrac{2^k}{3^k}} = 1-\dfrac{2\delta}{3} .$$ -Thoughts? -(2.) Construct a Cantor-type subset of $[0,1]$ by removing from each interval remaining at the $k$-th stage a subinterval of relative length $\theta_k$, $0<\theta_k<1$. Show that the remainder has measure zero if and only if $\sum{\theta_k}=\infty$. (Use the fact that for $a_k>0$, $\prod\limits_{k=1}^{\infty}{a_k}$ converges, in the sense that $\lim\limits_{N\to\infty}{\prod\limits_{k=1}^N{a_k}}$ exists and is not zero, if and only if $\sum\limits_{k=1}^{\infty}{\log{a_k}}$ converges.) -I don't really understand the construction. Suppose we have our $\theta_1$ and we remove an interval of that length. So then we choose $\theta_2$ so that $\theta_2 < \dfrac{1-\theta_1}{2}$, or something like that? Or... ? The phrasing is weird to me. Secondly, the hint does not help at all; it is completely opaque to me, so any insight you can offer would be great. - -REPLY [5 votes]: Your last step in the computation is incorrect. The sum $\sum_{k=0}^{\infty}\frac{2^k}{3^k}$ is: -$$\sum_{k=0}^{\infty}\frac{2^k}{3^k} = \sum_{k=0}^{\infty}\left(\frac{2}{3}\right)^k = \frac{1}{1-\frac{2}{3}} = 3$$ -so when you multiply it by $\frac{\delta}{3}$ you get $\delta$, exactly what you expect it to be. I'm not sure why you thought you get $1- \frac{2\delta}{3}$. (Recall that if $|x|<1$, then $\sum_{n=0}^{\infty}x^n = \frac{1}{1-x}$).
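-
-As a quick numerical sanity check of that corrected sum (a small Python sketch, my own addition rather than part of the original answer):
-
-from math import isclose
-
-delta = 0.7   # any value in (0, 1) works here
-removed = sum(2**k * delta / 3**(k + 1) for k in range(200))
-print(isclose(removed, delta))   # True: the removed lengths total delta,
-                                 # so the remaining set has measure 1 - delta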
-Notice that it says "relative length", not "length". So at each step, you are taking out an interval which is $\theta_k\cdot\ell$, where $\ell$ is the length of the interval you are looking at. In the Cantor set, you take $\theta_k=\frac{1}{3}$ for every $k$; in the example in (1), you are setting $\theta_k = \frac{\delta}{3}$ for every $k$. If you take out the central $\theta_k\ell$ portion of an interval of length $\ell$, then you are left with two intervals of length $\frac{\ell}{2}(1-\theta_k)$. -Added: As per your question in the comments, in this question, all the $\theta_k$ are constant and equal to $\theta$, much like in the regular Cantor set all $\theta_k$ are equal to $\frac{1}{3}$. To see an example, suppose that you set $\theta_k = \frac{1}{2^{k+1}}$, $k=0,1,\ldots$. First you remove $\theta_0 = \frac{1}{2}$ of the interval, leaving you with $[0,\frac{1}{4}]\cup[\frac{3}{4},1]$. Then you remove $\theta_1=\frac{1}{4}$th of each remaining interval; each interval is of length $\frac{1}{4}$, so you are removing the central $\frac{1}{16}$th part of the interval; this leaves you with $[0,\frac{3}{32}]\cup[\frac{5}{32},\frac{1}{4}]\cup[\frac{3}{4},\frac{27}{32}]\cup[\frac{29}{32},1]$. At the next step, you are removing the central $\theta_2=\frac{1}{8}$th of each interval; the intervals are of length $\frac{3}{32}$, so you are removing the central $\frac{3}{256}$th portion. Etc.<|endoftext|> -TITLE: Preimage of generated $\sigma$-algebra -QUESTION [51 votes]: For some collection of sets $A$, let $\sigma(A)$ denote the $\sigma$-algebra generated by $A$. -Let $C$ be some collection of subsets of a set $Y$, and let $f$ be a function from some set $X$ to $Y$. I want to prove: -$$f^{-1}(\sigma(C))=\sigma(f^{-1}(C))$$ -I could prove that -$$\sigma(f^{-1}(C)) \subset f^{-1}(\sigma(C))$$ -since complements and unions are 'preserved' by function inverse. But how do I go the other way? -EDIT: One way to go the other way would be to argue that any set in $\sigma(C)$ must be built by repeatedly applying the complement, union and intersection operations to elements of $C$ and all these operations are preserved when taking the inverse. The problem I am facing with the approach is formalizing the word "repeatedly". -[not-homework] - -REPLY [3 votes]: This is just a summary of Carl Mummert's answer in a slightly more systematic way. -We are given $f:X \rightarrow Y$. Also, I will use lower-case letters (e.g., $x$) for members of $X$ or $Y$, upper-case letters (e.g., $A$) for subsets of $X$ or $Y$, and script font (e.g., $\mathscr{A}$) for sets of subsets of $X$ or $Y$. - -Let's revisit the following definitions: - -$f(A) = \{ f(x) \mid x\in A\} \text{ for } A \subset X$. -$f^{-1}(B) = \{ x \mid f(x) \in B\} \text{ for } B \subset Y$. -$f(\mathscr A) = \{ f(A) \mid A\in \mathscr{A} \}$, for $\mathscr A$ a class of subsets of $X$. -$f^{-1}(\mathscr B) = \{ f^{-1}(B) \mid B \in \mathscr{B} \}$, for $\mathscr B$ a class of subsets of $Y$. - -And let's also introduce a new definition: - -$f^{*}(\mathscr A) = \{ f(A) \mid A \in \mathscr{A} \text{ and } f^{-1}(f(A)) \in \mathscr A \}$, for $\mathscr A$ a class of subsets of $X$. In general, $f^{*}(\mathscr A) \subset f(\mathscr A)$, but the reverse inclusion can fail. - -We need to prove the following statements: - -(3.1) If $\mathscr A$ is a $\sigma$-algebra, then so is $f^{*}(\mathscr A)$. -(3.2) If $\mathscr B$ is a $\sigma$-algebra, then so is $f^{-1}(\mathscr B)$.
-(3.3) $f^{*}(f^{-1}(\mathscr B)) = \mathscr B.$ -(3.4) $f^{-1}(f^{*}(\mathscr A)) \subset \mathscr A.$ - - -Now, to prove that $f^{-1}(\sigma(\mathscr A)) \subset \sigma(f^{-1}(\mathscr A))$ for a class $\mathscr A$ of subsets of $Y$, we can write: -$\begin{aligned} -&\mathscr A \subset \mathscr A \Rightarrow \\ -\text{from (3.3): } &\mathscr A \subset f^{*}(f^{-1}(\mathscr A)) \Rightarrow \\ -\text{since } (\cdot) \subset \sigma(\cdot)\text{: } &\mathscr A \subset f^{*}(\sigma(f^{-1}(\mathscr A))) \Rightarrow \\ -\text{from (3.1): }&\sigma(\mathscr A) \subset f^{*}(\sigma(f^{-1}(\mathscr A))) \Rightarrow \\ -\text{from (3.4): }&f^{-1} (\sigma(\mathscr A)) \subset f^{-1}(f^{*}(\sigma(f^{-1}(\mathscr A)))) \subset \sigma(f^{-1}(\mathscr A)). \quad \blacksquare -\end{aligned}$<|endoftext|> -TITLE: Let $\psi$ be a wavelet. Can its Fourier transform $\hat{\psi}$ also be a wavelet? -QUESTION [5 votes]: Let $\psi$ be a wavelet. Can its Fourier transform $\hat{\psi}$ also be a wavelet? Produce an example or prove that it is not possible. A wavelet is a function $\psi:\mathbb R\to\mathbb R$ such that (i) $\psi \in L^1(R) \cap L^2(R)$, (ii) $\int_{-\infty}^{\infty} \psi(t) dt = 0$; the Fourier transform is $\hat f(\omega) =\int_{-\infty}^{\infty} f(t)e^{-i\omega t} dt$. - -REPLY [6 votes]: $f(x)=\sin(x)\cdot\exp(-x^2)$ should do, because: - -The decay of $f$ ensures $f,\hat{f}\in L^1\cap L^\infty$. -$f$ is odd: $f(-x)=-f(x)$. -$\hat{f}(-\xi)=\int_{-\infty}^\infty e^{-ix(-\xi)}f(x)dx=\int_{-\infty}^\infty e^{-i(-x)\xi}f(x)dx=\int_{\infty}^{-\infty} e^{-it\xi}f(-t)(-dt)=-\hat{f}(\xi)$ -where in the last step we used that $f$ is odd. - -EDIT: -If $g$ is integrable and odd then $$\int_{-\infty}^0g(x)dx =\int_{-\infty}^0-g(-x)dx=\int_{+\infty}^0g(t)dt=-\int_0^{+\infty}g(t)dt$$ -hence -$$\int_{-\infty}^\infty g(t)dt = 0.$$ -This implies that $$\int_{-\infty}^\infty f dx= \int_{-\infty}^\infty\hat{f}d\xi=0$$<|endoftext|> -TITLE: Subgroups of finitely generated groups are not necessarily finitely generated -QUESTION [51 votes]: I was wondering this today, and my algebra professor didn't know the answer. - -Are subgroups of finitely generated groups also finitely generated? - -I suppose it is necessarily true for finitely generated abelian groups, but is it true in general? -And if not, is there a simple example of a finitely generated group with a non-finitely generated subgroup? -NOTE: This question has been merged with another question, asked by an undergraduate. For an example not involving free groups, please see Andreas Caranti's answer, which was the accepted answer on the merged question. - -REPLY [16 votes]: One of the easiest (counter)examples is in Hungerford's Algebra. -Let $G$ be the multiplicative group generated by the real matrices -$$a = \left(\begin{array}{l l} -1 & 1\\ -0 & 1 -\end{array}\right), -b = \left(\begin{array}{l l} -2 & 0\\ -0 & 1 -\end{array}\right) -$$ -Let $H$ be the subgroup of $G$ consisting of matrices that have $1$s on the main diagonal. Then $H$ is not finitely generated.<|endoftext|> -TITLE: Plane determined by $2$ vectors. -QUESTION [9 votes]: I have $2$ perpendicular vectors in space. How can I determine the plane determined by the $2$ vectors? - -REPLY [8 votes]: The plane determined by two noncollinear vectors $\mathbf{v}_1$ and $\mathbf{v}_2$ is the collection of all vectors of the form $\alpha\mathbf{v}_1 + \beta\mathbf{v}_2$, with $\alpha$ and $\beta$ scalars.
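-
-A tiny numerical sketch of this parametric description (my own addition; the two example vectors are made up, chosen perpendicular as in the question):
-
-import numpy as np
-
-v1 = np.array([1.0, 2.0, 0.0])
-v2 = np.array([-2.0, 1.0, 0.0])
-
-# All points alpha*v1 + beta*v2 for a small grid of scalar pairs:
-alphas, betas = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
-patch = alphas[..., None] * v1 + betas[..., None] * v2
-print(patch.shape)   # (5, 5, 3): a sampled patch of the plane through the origin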
- -If by "space" you happen to mean $\mathbb{R}^3$, $\mathbf{v}_1=(a,b,c)$ and $\mathbf{v}_2=(r,s,t)$, then $(a,b,c)\times(r,s,t)$ (the cross product) is perpendicular to both $(a,b,c)$ and $(r,s,t)$, hence perpendicular to the plane they determine, so it will be the normal to the plane. Once you have the normal, presumably you know how to get the usual equation of the plane. - -REPLY [7 votes]: Take their cross product. This will be perpendicular to the plane they determine.<|endoftext|> -TITLE: Are Continuous Functions Always Differentiable? -QUESTION [23 votes]: Are continuous functions always differentiable? Are there any examples in dimension $n > 1$? - -REPLY [2 votes]: Continuity requires -$$\lim_{h\to0}f(x+h)-f(x)=0.$$ Differentiability is stronger: -$$\lim_{h\to0}\frac{f(x+h)-f(x)}h$$ must exist. -Hence you can find counter-examples of the form -$$f(x)=x g(x)$$ where $$\lim_{x\to0}g(x)$$ does not exist. -E.g. $g(x)=\text{sgn}(x)$ or $g(x)=\sin\dfrac1x$ or $g(x)=|x|^{-1/2}$ (taking $f(0)=0$ in each case, so that $f$ is continuous at $0$).<|endoftext|> -TITLE: point below a plane -QUESTION [5 votes]: In $\mathbb{R}^3$ (3D), having a vector perpendicular to a plane (so we know which way is "up"), how do we determine if a certain point is below our plane? -Regards, -Alexandru Badescu - -REPLY [10 votes]: If $v$ is the vector that points 'up' and $p_0$ is some point on your plane, and -finally $p$ is the point that might be below the plane, compute the dot product -$v \cdot (p-p_0)$. This projects the vector from $p_0$ to $p$ onto the up-direction. -The sign of this product is $-$, $0$, or $+$ according as $p$ is below, on, or above the plane, respectively.
In particular, there are no more below $31!$.<|endoftext|> -TITLE: On the relationship between the commutators of a Lie group and its Lie algebra -QUESTION [13 votes]: I was trying to teach myself some basic Lie theory, and I came across this statement on Mathworld, relating the commutator of a group, $\alpha\beta\alpha^{-1}\beta^{-1}$, to the commutator of its Lie algebra, $[A,B] = AB-BA$: - -For instance, let $A$ and $B$ be square matrices, and let $\alpha(s)$ and $\beta(t)$ be paths in the Lie group of nonsingular matrices which satisfy - $$\begin{align} - \alpha(0)=\beta(0) &= I \\ - \left.\frac{\partial\alpha}{\partial s}\right|_{s=0} &= A \\ - \left.\frac{\partial\beta}{\partial t}\right|_{t=0} &= B, - \end{align}$$ - then - $$\left.\frac{\partial}{\partial s}\frac{\partial}{\partial t}\alpha(s)\beta(t)\alpha^{-1}(s)\beta^{-1}(t)\right|_{(s=0,t=0)}=2[A,B].$$ - -When I tried to derive this for myself, using the fact that -$$\left.\frac{\partial\alpha^{-1}}{\partial s}\right|_{s=0} = \left.-\alpha^{-1}\frac{\partial\alpha}{\partial s}\alpha^{-1}\right|_{s=0} = -A,$$ -I expanded the expression to get -$$\left.\frac{\partial}{\partial s}\frac{\partial}{\partial t}\alpha\beta\alpha^{-1}\beta^{-1}\right|_{(s=0,t=0)}$$ -$$=\left.\left( \frac{\partial\alpha}{\partial s}\frac{\partial\beta}{\partial t}\alpha^{-1}\beta^{-1} + \alpha\frac{\partial\beta}{\partial t}\frac{\partial\alpha^{-1}}{\partial s}\beta^{-1} + \frac{\partial\alpha}{\partial s}\beta\alpha^{-1}\frac{\partial\beta^{-1}}{\partial t} + \alpha\beta\frac{\partial\alpha^{-1}}{\partial s}\frac{\partial\beta^{-1}}{\partial t} \right)\right|_{(s=0,t=0)}$$ -$$=AB - BA - AB + AB$$ -$$=[A,B].$$ -The difference is that the factor of 2 is missing. This seems to agree with the lecture notes I found on MIT OCW, which state (in Ch. 2, PDF 1) that if $X, Y \in \mathfrak{g}$, - -$\exp(-tX)\exp(-tY)\exp(tX)\exp(tY) = \exp\{t^2[X,Y]+O(t^3)\}$. - -Since this is not my area of expertise, I wanted to make sure I got things right before I contacted MathWorld about a typo. Have I done something wrong somewhere, or is the MathWorld statement actually an error? - -REPLY [6 votes]: Ignoring terms of order $3$ and higher, and inverting $1+X$ using the power series, -$$\begin{align}&(1+A)(1+B)(1-A+A^2)(1-B+B^2)\\ -=\;& (1 + (A+B) + AB) (1 - (A+B) + (A^2 + AB + B^2))\\ -=\;& 1 - (A+B)^2 + AB + A^2 + AB + B^2 \\ -=\;& 1 - (A^2 + AB + BA + B^2) + 2AB + A^2 + B^2\\ -=\;& 1 + AB - BA \\ -=\;& 1 + [A,B]\end{align}$$<|endoftext|> -TITLE: Limits in the category of exact sequences -QUESTION [11 votes]: Let $\mathbf C$ be an abelian category admitting projective limits. Let's consider the category whose objects are those of the form -$$ -0\to A\to B\to C\to 0 -$$ -and whose morphisms are triples of morphisms of $\mathbf C$ such that the diagram -$$ -\begin{array}{ccccc} -A&\hookrightarrow & B & \to & C \\ -\downarrow &&\downarrow && \downarrow \\ -A' &\hookrightarrow & B' &\to & C' -\end{array} -$$ -commutes in all its parts. Call this category $\boldsymbol\Sigma(\mathbf C)$. -How could one characterize the limits in $\boldsymbol\Sigma(\mathbf C)$?
A little meditation shows that inverse systems are objects of the form -$$ -\mathcal E_i\colon 0\rightarrow A_i\rightarrow B_i\rightarrow C_i\to 0 -$$ -(every object in the sequence can be thought of as an element in a separate inverse system), so the universal property of $\varprojlim_\mathbf J \mathcal E_i$, whatever it turns out to be, must be enjoyed by -$$ -\textstyle \varprojlim_\mathbf J \mathcal E_i : 0\to \varprojlim_\mathbf J A_i\to \varprojlim_\mathbf J B_i \to \varprojlim_\mathbf J C_i\to 0 -$$ -as soon as one looks at ${A_i},{B_i},{C_i}$ as three inverse systems. -What condition(s?) has(ve?) to be imposed on them to make sure the sequence is exact? - -REPLY [3 votes]: Consider the category $\Sigma_R(C)$ of exact sequences of the form $0\to A\to B\to C$; this category is complete, and the inclusion $\Sigma(C) \subset \Sigma _R(C)$ is coreflective (the coreflector comes from the coker of the last arrow), so limits in $\Sigma(C)$ exist and are given by the limit in $\Sigma_R(C)$ followed by the coreflection. -The question of when the inclusion $\Sigma(C) \subset \Sigma _R(C)$ preserves (directed) limits is the content of the Mittag-Leffler theorem.<|endoftext|> -TITLE: Nil-Radical equals Jacobson Radical even though not every prime ideal is maximal? -QUESTION [15 votes]: Let's assume we have a commutative ring with identity. Can the Nil-Radical and the Jacobson Radical be equal in a non-trivial case (i.e. not every nonzero prime ideal in said ring is maximal)? -Are there any interesting examples of this case? - -REPLY [4 votes]: Theorem 5.1 in T.Y. Lam's book "A First Course in Noncommutative Rings" states that every polynomial ring $R[T]$ over a commutative ring $R$ satisfies -$$\operatorname{rad} R[T] = \operatorname{Nil}(R[T]) = (\operatorname{Nil} R)[T]$$<|endoftext|> -TITLE: Use of FFT in the multiplication of multinomials -QUESTION [12 votes]: I'm aware that one can use a Fast Fourier Transform (FFT) to take the cost of multiplication of two polynomials of degree N from $O(N^2)$ to $O(N \ln N)$ (which is an amazing reduction when dealing with large polynomials!). Does a similar transformation procedure exist for multinomials? -I'm interested in the special case where the number of independent variables is only two, i.e. $h(x,y) = f(x,y)g(x,y)$, but I'd love to read up on the general procedure. - -REPLY [3 votes]: Community wiki answer so the question can be resolved: -As pointed out in the comments, this can be done using multidimensional FFT, with the exponents of the variables serving as coordinates.<|endoftext|> -TITLE: Proof that the rank of a skew-symmetric matrix is at least $2$ -QUESTION [20 votes]: Is there a succinct proof for the fact that the rank of a non-zero skew-symmetric matrix ($A = -A^T$) is at least 2? I can think of a proof by contradiction: Assume rank is 1. Then you express all other rows as multiples of the first row. Using the skew-symmetric property, this matrix has to be a zero matrix. -Why does such a matrix have at least 2 non-zero eigenvalues? - -REPLY [12 votes]: The following answers the first part of the OP's question, without using the concept of eigenvalues. It works over all fields (including $\mathbb{R}$) with characteristic $\ne2$. -Every rank-$1$ matrix can be written as $A=uv^\top$ for some nonzero vectors $u$ and $v$ (so that every row of $A$ is a scalar multiple of $v^\top$). If $A$ is skew-symmetric, we have $A=-A^\top=-vu^\top$. Hence every row of $A$ is also a scalar multiple of $u^\top$. It follows that $v=ku$ for some nonzero scalar $k$.
But then $vu^\top=-uv^\top$ implies that $kuu^\top=-kuu^\top$ or $2kuu^\top=0$, which is impossible because both $k$ and $u$ are nonzero and the characteristic of the field is not $2$. Therefore, skew-symmetric matrices cannot be rank-1 matrices, and vice versa. -When the underlying field has characteristic 2, the notions of symmetric matrices and skew-symmetric matrices coincide. Hence every nonzero matrix of the form $uu^\top$ with nonzero vector $u$ is a rank-1 skew-symmetric matrix. -Remark. In most modern textbooks, a matrix $A$ over a field of characteristic $2$ is said to be skew-symmetric if $A$ has a zero diagonal and $A^T=-A$. This modern definition is better because the discrepancy between skew-symmetric matrix and alternating bilinear form now vanishes. With this definition, symmetric matrices and skew-symmetric matrices are different notions and a matrix of the form $uu^\top$ cannot be skew-symmetric unless it is zero.<|endoftext|> -TITLE: Can someone please explain the Riemann Hypothesis to me... in English? -QUESTION [68 votes]: I've read so much about it but none of it makes a lot of sense. Also, what's so unsolvable about it? - -REPLY [2 votes]: It is straightforward. We have the zeta function, 'analytically continued', which satisfies -$\zeta(s)\left[1-2/2^s\right] = 1 - 1/2^s + 1/3^s - 1/4^s + \cdots$ Here $s$ is a complex variable. Thus $s=\sigma + i\omega$, where $\sigma$ is the real part and $\omega$ is the imaginary part. The above series converges in the region of our interest, which is $0 < \sigma < 1.$ -To find the zeroes of $\zeta(s)$ we set $\zeta(s) = 0$ and solve for $s$, that is, for the sigmas and omegas. -That is, solve $0= 1 - 1/2^s + 1/3^s -1/4^s + \cdots$ for the sigmas and omegas. Riemann hypothesized that the zeros will have their sigmas equal to 1/2 while the omegas are distinct. To this date, after 150 years, no one has any clue why sigma takes a single value of 1/2 in the critical strip $0 < \sigma < 1.$ Apart from the consequences, I hope I explained it well. Wikipedia on the Riemann Hypothesis is a good source for reading up.<|endoftext|> -TITLE: Integration by Parts implies U-substitution? -QUESTION [13 votes]: So I feel a bit strange asking a Calculus question, but this came up today while teaching. -One can check that if you start with some integral, which can be seen as an "obvious u-substitution problem", you can instead use integration by parts, and wind up with the scenario where you have the original integral on both sides of your equation, so you solve for the integral. -Example: Given $I=\int g^n(x)g'(x)dx$ we can clearly use u-substitution, but if we use integration by parts we get the equation $I=-nI+g^{n+1}$. This is nothing exciting or surprising, but it yields the observation that u-sub leads to one of these int by parts equations. - -Question Is the opposite true? - -What I mean to say is, if you do integration by parts and you wind up with an equation of this type, does it mean that you could have used some very clever u-substitution? -I feel like I should know this, but I have thought about it today, and asked a friend or two, and we don't see an immediate proof of this. -Thanks! - -REPLY [7 votes]: This expands on a comment that might not have been perfectly clear (perhaps because of its obviousness, for which I apologize). -Suppose you have in hand some magic method of integration (such as the one in the question).
By this I mean any procedure that takes a functional expression $f(x)$ and returns some expression $F(x)$ for its indefinite integral. (I also assume you accept $F(x)$ as a valid solution to your integration; e.g., some people might not accept an elliptic function, or imaginary values are verboten, or they might not like power series, etc. I leave the criterion of acceptability to you as a matter of taste. All I require is that you know how to differentiate $F$ and are able to compare that result to $f$ in order to check the validity of your magic method.) A trivial application of the Fundamental Theorem of Calculus asserts that the "very clever" substitution -$$u = F(x)$$ -will enable you to perform the integral. This happens, of course, because you can calculate that $du = F'(x)dx = f(x)dx$, whence the substitution converts the original integral into $\int{du}$ with general solution $u + C = F(x) + C$ by back-substitution. In other words, the question as posed merely asks whether the FTC is true in a special case.<|endoftext|> -TITLE: The function that draws a figure eight -QUESTION [6 votes]: I'm trying to describe a counterexample for a theorem which includes the figure eight or "infinity" symbol, but I'm having trouble finding a good piecewise function to draw it. I need it to be the symbol, except at the "crossing point" the function jumps (not continuous) so that we still have a manifold. - -REPLY [13 votes]: You can use the function $$t\in(-\tfrac12\pi,\tfrac32\pi)\mapsto(\cos t,\sin t\cos t)\in\mathbb R^2.$$ -The resulting curve traces a figure eight (plot omitted; I'm leaving out a little bit on the left and a little bit on the right of the domain in the picture). - -REPLY [4 votes]: There are no functions that describe the "lemniscate", but there are parametric and polar equations for the lemniscate of Bernoulli and the lemniscate of Gerono, to name two of the more famous lemniscates. - -Might as well share this, since the question already has an accepted answer. Here is my favorite way of generating the lemniscate of Bernoulli: as an envelope of circles centered on a rectangular hyperbola, and passing through the origin (picture omitted). - -REPLY [3 votes]: Is the lemniscate what you want? I don't know what you need as a jump at the crossing point, but maybe you can get that with a trigonometric parameterization.<|endoftext|> -TITLE: Roots of a polynomial in an integral domain -QUESTION [9 votes]: Let $R$ be a ring and $f(X) \in R[X]$ be a non-constant polynomial. We know that the number of roots of $f(X)$ in $R$ has no relation to its degree if $R$ is not commutative, or commutative but not a domain. But, - -The number of roots of a non-zero polynomial over a commutative integral domain is at most its degree. - -How does one prove the above result? - -REPLY [6 votes]: This is a consequence of the fact that over a commutative ring with identity $A$, an element $a \in A$ is a zero of a polynomial $f \in A[x]$ if and only if $f(x) = (x - a)q(x)$ for some polynomial $q(x) \in A[x]$. - -REPLY [2 votes]: It follows from the division algorithm, the fact that evaluation gives a homomorphism from $R[x]$ to $R^R$ (functions from $R$ to $R$ with pointwise operations), and that $R$ is a domain.
It is the exact same argument as for fields, with the division algorithm suitably restricted to certain kinds of polynomials over $R$ as divisors.<|endoftext|> -TITLE: Identity involving Euler's totient function: $\sum \limits_{k=1}^n \left\lfloor \frac{n}{k} \right\rfloor \varphi(k) = \frac{n(n+1)}{2}$ -QUESTION [48 upvotes]: Let $\varphi(n)$ be Euler's totient function, the number of positive integers less than or equal to $n$ and relatively prime to $n$. -Challenge: Prove -$$\sum_{k=1}^n \left\lfloor \frac{n}{k} \right\rfloor \varphi(k) = \frac{n(n+1)}{2}.$$ -I have two proofs, one of which is partially combinatorial. -I'm posing this problem partly because I think some folks on this site would be interested in working on it and partly because I would like to see a purely combinatorial proof. (But please post any proofs; I would be interested in noncombinatorial ones, too. I've learned a lot on this site by reading alternative proofs of results I already know.) -I'll wait a few days to give others a chance to respond before posting my proofs. -EDIT: The two proofs in full are now given among the answers. - -REPLY [18 votes]: In case anyone is interested, here are the full versions of my two proofs. (I constructed the combinatorial one from my original partially combinatorial one after I posted the question.) - -The non-combinatorial proof -As Derek Jennings observes, $\lfloor \frac{n+1}{k} \rfloor - \lfloor \frac{n}{k} \rfloor$ is $1$ if $k|(n+1)$ and $0$ otherwise. Thus, if $$f(n) = \sum_{k=1}^n \left\lfloor\frac{n}{k} \right\rfloor \varphi (k),$$ -then $$\Delta f(n) = f(n+1) - f(n) = \sum_{k|(n+1)} \phi(k) = n+1,$$ -where the last equality follows from the well-known formula Aryabhata cites. -Then -$$\sum_{k=1}^n \left\lfloor\frac{n}{k} \right\rfloor \varphi (k) = f(n) = \sum_{k=0}^{n-1} \Delta f(k) = \sum_{k=0}^{n-1} (k+1) = \frac{n(n+1)}{2}.$$ - -The combinatorial proof -Both sides count the number of fractions (reducible or irreducible) in the interval (0,1] with denominator $n$ or smaller. -For the right side, the number of ways to pick a numerator and a denominator is the number of ways to choose two numbers with replacement from the set $\{1, 2, \ldots, n\}$. This is known to be -$$\binom{n+2-1}{2} = \frac{n(n+1)}{2}.$$ -Now for the left side. The number of irreducible fractions in $(0,1]$ with denominator $k$ is equal to the number of positive integers less than or equal to $k$ and relatively prime to $k$; i.e., $\varphi(k)$. Then, for a given irreducible fraction $\frac{a}{k}$, there are $\left\lfloor \frac{n}{k} \right\rfloor$ total fractions with denominators $n$ or smaller in its equivalence class. (For example, if $n = 20$ and $\frac{a}{k} = \frac{1}{6}$, then the fractions $\frac{1}{6}, \frac{2}{12}$, and $\frac{3}{18}$ are those in its equivalence class.) Thus the sum -$$\sum_{k=1}^n \left\lfloor\frac{n}{k} \right\rfloor \varphi (k)$$ -also gives the desired quantity.<|endoftext|> -TITLE: Verifying a Closure Operation -QUESTION [7 upvotes]: I'm just starting to learn some basic topology, and I've mostly encountered definitions. For instance, I read that for a topological space $X$, for any $E\subseteq X$, define its closure $\overline{E}$ as the set of points $p\in X$ such that each neighborhood $N$ of $p$ has a nonempty intersection with $E$. -I wanted to verify that this is indeed a closure operation, and here's what I have: -(1) Suppose $p\in E$, then for any $N$, $p\in N$, so $N\cap E\neq\emptyset$, so $p\in\overline{E}$, and thus $E\subseteq\overline{E}$. 
- -(2) Suppose $p\in\overline{E\cup F}$. Then for any $N$, $N\cap(E\cup F)\neq\emptyset$, so $(N\cap E)\cup(N\cap F)\neq\emptyset$, and thus either $N\cap E\neq\emptyset$ or $N\cap F\neq\emptyset$, so $p\in\overline{E}$ or $p\in\overline{F}$, and thus $\overline{E\cup F}\subseteq\overline{E}\cup\overline{F}$. -(3) For any $p\in X$, the intersection of any $N$ and $\emptyset$ is empty, so there are no points such that every neighborhood has nonempty intersection with $\emptyset$. Hence $\overline{\emptyset}=\emptyset$. -(4) I'm stuck showing that $\overline{\overline{E}}=\overline{E}$. From (1), I know that $\overline{E}\subseteq\overline{\overline{E}}$, but I can't show the other containment. I took $p\in\overline{\overline{E}}$, and so every $N\cap\overline{E}\neq\emptyset$. I want to show $p\in N\cap\overline{E}$, and I think this would be easy if $X$ contains isolated points. But then I found out there are such things as perfect sets or dense-in-itself sets, so that can't work. Is there some trick to show this containment? -Thanks, I know this is probably very simple, but I'm going a little mad staring at it. - -REPLY [3 votes]: Your proof of (2) is incorrect. The quantifiers are wrong. -You have shown that for any $N$ (neighbourhood of $x$, in the remainder): $N$ intersects $E$ or $N$ intersects $F$. -But you have to show (for any $N$: $N$ intersects $E$) or (for any $N$, $N$ intersects $F$), i.e. that you intersect the same set for any $N$. -So you have to argue by contradiction: suppose $x \in \overline{E \cup F}$, but $x \notin \overline{E}$ and $x \notin \overline{F}$. The latter 2 imply that there is a neighbourhood $N_1$ of $x$ that misses $E$ and a neighbourhood $N_2$ of $x$ that misses $F$. But then $N_1 \cap N_2$ is a neighbourhood of $x$ as well and this misses $E \cup F$, contradicting that $x \in \overline{E \cup F}$. Done.<|endoftext|> -TITLE: Do the equations used in Stargate make sense or are they gibberish? -QUESTION [7 votes]: Just wondering if the equations used as props in SG-1, Atlantis and most recently Universe are just random and cool looking, or real/meaningful and cool looking. -The most recent episode of Stargate Universe, S02E04, has a "corridor of equations" where Dr. Nick Rush is trying to decipher something. Has anyone familiar with the show got an opinion on this? - -REPLY [5 votes]: Awww, if you want screenshots of my Crazy Hall, all you have to do is ask! Yes, it's all legit math, and all internally consistent throughout the seasons. The core science is usually from peer-reviewed publications, just blended together in ways that don't typically make sense in the Real World (when would you ever feed the energy of a solar flare into a black hole, except with Stargate?). -I frequently use the established alien alphabets for variables, making it a bit more squiggle-loving, and anything that's going to be completed-by-an-actor is always simplified down to "write the last 1-3 characters," with the assumption that we're catching them at the end of a big scribble-fest. Thus, it wasn't really "Chloe knows how to integrate & Rush doesn't" so much as "Chloe has thought of a new approach to this problem that Rush didn't." -Each character has different and distinct handwriting associated with them (usually based off writing samples from the actor), but admittedly that'd be almost impossible to spot in the fuzzy darkness that is Destiny's hallways.
For example, when going through the "How do we collect Lt. Scott from that shuttlecraft before we jump?!" problem, an entire wall on orbital dynamics, with particular focus on energy needs for intersecting orbits, got added in.<|endoftext|> -TITLE: Are there uncountably infinite orders of infinity? -QUESTION [21 votes]: Given a set $S$, one can easily find a set with greater cardinality -- just take the power set of $S$. In this way, one can construct a sequence of sets, each with greater cardinality than the last. Hence there are at least countably many orders of infinity. -But do there exist uncountably many orders of infinity? -To be precise, does there exist an uncountable set of sets whose elements all have distinct cardinalities? -The first answer to Types of infinity suggests the answer is "yes", but only establishes a countable number of cardinalities (which, to be fair, was what the question was asking about). -I've been exposed to enough mathematical logic to realize that I'm walking in a minefield; let me know if I've already mis-stepped. - -REPLY [16 votes]: Pete's excellent notes have correctly explained that there is no set containing sets of unboundedly large size in the infinite cardinalities, because from any proposed such family, we can produce a set of strictly larger size than any in that family. -This observation by itself, however, doesn't actually prove that there are uncountably many infinities. For example, Pete's argument can be carried out in the classical Zermelo set theory (known as Z, or ZC, if you add the axiom of choice), but to prove that there are uncountably many infinities requires the axiom of Replacement. In particular, it is actually consistent with ZC that there are only countably many infinities, although this is not consistent with ZFC, and this fact was the historical reason for the switch from ZC to ZFC. -The way it happened was this. Zermelo had produced sets of size $\aleph_0$, $\aleph_1,\ldots,\aleph_n,\ldots$ for each natural number $n$, and wanted to say that therefore he had produced a set of size $\aleph_\omega=\text{sup}_n\aleph_n$. Fraenkel objected that none of the Zermelo axioms actually ensured that $\{\aleph_n\mid n\in\omega\}$ forms a set, and indeed, it is now known that in the least Zermelo universe, this class does not form a set, and there are in fact only countably many infinite cardinalities in that universe; they cannot be collected together there into a single set and thereby avoid contradicting Pete's observation. One can see something like this by considering the universe $V_{\omega+\omega}$, a rank initial segment of the von Neumann hierarchy, which satisfies all the Zermelo axioms but not ZFC, and in which no set has size $\beth_\omega$. -By adding the Replacement axiom, however, the Zermelo axioms are extended to the ZFC axioms, from which one can prove that $\{\aleph_n\mid n\in\omega\}$ does indeed form a set as we want, and everything works out great. In particular, in ZFC using the Replacement axiom in the form of transfinite recursion, there are huge uncountable sets of different infinite cardinalities. -The infinities $\aleph_\alpha$, for example, are defined by transfinite recursion: - -$\aleph_0$ is the first infinite cardinality, or $\omega$. -$\aleph_{\alpha+1}$ is the next (well-ordered) cardinal after $\aleph_\alpha$. (This exists by Hartogs' theorem.) -$\aleph_\lambda$, for limit ordinals $\lambda$, is the supremum of the $\aleph_\beta$ for $\beta\lt\lambda$.
- -Now, for any ordinal $\beta$, the set $\{\aleph_\alpha\mid\alpha\lt\beta\}$ exists by the axiom of Replacement, and this is a set containing $\beta$ many infinite cardinals. In particular, for any cardinal $\beta$, including uncountable cardinals, there are at least $\beta$ many infinite cardinals, and indeed, strictly more. -The cardinal $\aleph_{\omega_1}$ is the smallest cardinal having uncountably many infinite cardinals below it.<|endoftext|> -TITLE: A Question on RH relating to Prime Number theorem -QUESTION [8 votes]: Well, in a previous post regarding the explanation of the Riemann Hypothesis, Matt answered that: - -The prime number theorem states that the number of primes less than or equal to $x$ is approximately equal to $\int_2^x \dfrac{dt}{\log t}.$ The Riemann hypothesis gives a precise answer to how good this approximation is; namely, it states that the difference between the exact number of primes below $x$, and the given integral, is (essentially) $\sqrt{x} \log x$. - -What I have heard about RH is: - -The non-trivial zeros of the Riemann $\zeta$-function have real part $\frac{1}{2}$. - -Can anyone tell me how these two statements are related? - -REPLY [9 votes]: The point is that there is an explicit formula (due to Riemann) relating -$\pi(x)$ to the zeroes of the zeta-function. (The proof is via a kind of Fourier transform.) -The rough shape is that -$$\pi(x) = \mathrm{Li}(x) + \sum_{\rho} \mathrm{Li}(x^{\rho}) + \text{ lower order terms},$$ -where the sum is over zeroes $\rho$ of $\zeta(s)$ in the critical strip (i.e. with real -parts between $0$ and $1$). -(See the Wikipedia entry for a more precise statement; this is the same link as in Qiaochu's comment above.) -Now the (simple but) key fact to remember is that -$| x^{\rho}| = x^{\Re \rho}$, for a positive real number $x$. -So to get asymptotics on $\pi(x)$ from this, one has to give -upper bounds on $\Re \rho$. For example, to get the prime number theorem, one has to show that $\Re \rho < 1$ for all $\rho$ (i.e. that $\zeta(s)$ -has no zeroes on the line $\Re s = 1$). -The best possible estimate comes if you assume RH. Then $\Re \rho = 1/2$ -for all $\rho$, so $| x^{\rho}| = x^{1/2}$, and (careful) estimates -give the error term $\sqrt{x} \log x$ for the difference between -$\pi(x)$ and $\mathrm{Li}(x)$.<|endoftext|> -TITLE: Difficulty in Mathematical Writing -QUESTION [17 votes]: Lots of people (including myself) face a lot of problems in tackling mathematical problems which appear as if we can solve them, but then writing out a solution becomes difficult. -Let us consider some examples: - -I was asked this question some time back in an exam: give an example of a continuous function on $(a,b)$ which is not uniformly continuous. Well, one's obvious choice is $$f(x) = \frac{1}{x-a} \quad \text{or} \ \frac{1}{x-b}$$ I knew this as soon as I saw the problem, and started proving it. One actually has to make the observation that as $x \to a$, $\frac{1}{x-a}$ becomes arbitrarily large. But I found that I couldn't actually formally prove it. - -Similarly, to prove that $f(x)=x^{2}$ is not uniformly continuous on $\mathbb{R}$, one again has to play with the quantifiers to get the contradiction part. - - -So, these are two instances where the problem appeared to me as if I could solve it, but writing out a formal solution became difficult. -How can students improve upon this? Have any instances like this happened to you?
- -REPLY [2 votes]: I know this may be a little late, but I'm in the middle of an extremely good book, "How To Prove It - A Structured Approach". I'm just starting a compSci degree and I've found the book very practically helpful in understanding what it takes to write a good proof and what strategies to use when approaching writing a proof.<|endoftext|> -TITLE: An inequality like Riemann sum involving $\sqrt{1-x^2}$ -QUESTION [7 votes]: How can I prove that for every positive integer $n$ we have -\begin{equation*} -\frac{n\pi}{4}-\frac{1}{\sqrt{8n}}<\frac{1}{2}+\sum_{k=1}^{n-1}\sqrt{1-\frac{k^2}{n^2}}? -\end{equation*} - -REPLY [10 votes]: Write the inequality as -$$\frac{\pi}{4} < \frac{1}{2n} + \frac{1}{n} \sum_{k=1}^{n-1} \sqrt{1-\left(\frac{k}{n}\right)^2} + \frac{1}{2n} \sqrt{\frac{1}{2n}}.$$ -The left-hand side $\pi/4$ is the area of the part of the unit circle that -lies in the first quadrant (below the curve $y=f(x)=\sqrt{1-x^2}$). -We want to interpret the right-hand side as the area of a region $D$ which -covers that quarter circle. -Note that $f$ is concave, so that its graph lies below any tangent line. -Thus the trapezoid bounded by the lines $x=a-\epsilon$ and $x=a+\epsilon$ -and by the $x$ axis and the tangent line through $(a,f(a))$ will cover the -corresponding part of the circle: -$$\int_{a-\epsilon}^{a+\epsilon} f(x) dx < 2\epsilon f(a).$$ -Thus, taking $D$ to be the union of the following pieces does the trick: - -A rectangle of height 1 between $x=0$ and $x=1/2n$. -Trapezoids as above, of width $\frac{1}{n}$ and centered at $x=k/n$ for $k=1,\ldots,n-1$. -A trapezoid as above, of width $\frac{1}{2n}$ and centered at $x=1-1/4n$. This last one has area -$$\frac{1}{2n} f(1-1/4n) = \frac{1}{2n} \sqrt{\frac{1}{2n} - \frac{1}{16n^2}} < \frac{1}{2n} \sqrt{\frac{1}{2n}}.$$
-
-Is any of this real? Or are they just messing with me? I can't make heads or tails of this humongous equation. Can anyone break it down for me, or at least explain the principle behind it? I don't get how these integrals, derivatives, Euler functions and multiplication sums yield anything meaningful...
-
-REPLY [30 votes]: It looks like an ordinary statistical calculation. The numerator with $\Pi_{g=1}^G$ is a likelihood or probability density, presumably of some outcomes for games $1$ to $G$. The denominator with $\int \Pi \dots$ is the integral of the numerator over all outcomes; it is a normalization constant to ensure the total probability of all results is $1$. Everything in the formulas is a calculation of (z-scores in) independent normal distributions, so they have a fairly simple probability model for how a player's ranking parameters drive the game outcomes.
-The goal of the calculation might be to calculate a player's set of ranking parameters $y$ (a vector of numbers measuring strength, speed, skill, wins, or whatever interpretation the quantities have for the game) that maximize the conditional probability $P(g_j | y)$ of having observed the game outcomes $g_i$ for $i = 1$ to $G$. In other words, Maximum Likelihood Estimation of a player's parameters from game data. I can't read everything in the formula -- can you post a larger magnification? -- but the $\theta_{1,g} - \theta_{2,g}$ look like a measure of how one side of the game performed relative to the other, such as a difference in number of points, or a measure of how the sides were expected to perform relative to each other, given their ratings. Alternatively, $P(g_j | y)$ could be a Bayesian "posterior" distribution on $y$ in light of the game outcomes, so that the formula is a rule for updating the rankings given some game results. Here $\Phi(\theta_0 + \gamma_i + \psi_{i,0})$ can be understood as implying an initial rating, where the distribution of skills in the player population is assumed to be normal.
-One can also infer from the formula that either they are doing the wrong calculation (after $G$ new games), or the big formula is actually a summary of what has happened after $G$ separate re-estimation steps, one after each game (so that in a single step there is no product involved, and the formula would involve only the ratings just before the game, and the game outcome). The probabilities they are computing for the $G$ games are of the form "what is the chance the player had performance at least $x$ in game 1, and at least $y$ in game 2, $z$ or better in game 3, ...". This is not the correct way to assess the probability that the whole set of $G$ game results is, collectively, above a certain level of performance. But if the parameters are re-estimated after every game, and the older game outcomes forgotten, then "chances of a victory at least $X$ big" is the only thing you can do, so this would account for the shape of their formula.
-Now, as the old joke goes: "and by the way, what's StarCraft?".<|endoftext|>
-TITLE: Iterated polynomial problem
-QUESTION [7 upvotes]: A polynomial $P$ with integral coefficients satisfies $P(n)>n$ for all positive integers $n$. Every positive integer $m$ is a factor of some number of the form $P(1),\, P(P(1)),\, P(P(P(1))),\dots $. Prove that $P(x)=x+1$.
-
-REPLY [5 votes]: Denote the iterates by $x_0 = 1, x_{n+1} = P(x_n)$.
-Assume that the coefficients of $P$ are integral.
-If at some point $P(x_n) > 2x_n$, then I claim that $m = P(x_n)-x_n$ does not divide any iterate. First, $x_n < m$, so $x_0,\ldots,x_n$ cannot be divisible by $m$. Second, we prove by induction that for $k \geq n$, $x_k \equiv x_n \pmod{m}$:
-$x_{k+1} = P(x_k) \equiv P(x_n) \equiv x_n \pmod{m}$.
-Since $0 < x_n < m$, we see that $m$ doesn't divide any of the iterates.
-We conclude that always $P(x_n) \leq 2x_n$. Thus $P(x) = ax+b$ with $a \leq 2$. On the one hand $P(1) > 1$, and on the other hand $P(1) \leq 2$. Thus $P(1) = 2$, and therefore either $a = 1$ or $a = 2$. If $a = 2$ then $P(x) = 2x$, and we generate only powers of $2$. Thus $a = 1$ and $P(x) = x + 1$.<|endoftext|>
-TITLE: Measure different volumes based on specified capacities
-QUESTION [5 upvotes]: Given three containers of specified volume, how many different volumes can you measure?
-For example, suppose we have cans with capacities 2, 3, 4 litres.
-We can measure:
-
-1 litre by filling the 3 litre can and pouring it into the 2 litre can.
-2, 3, 4 litres are trivial.
-5 litres by filling both the 2 and 3 litre cans.
-6 litres by filling the 4 litre can with 3 litres and filling the 3 litre can fully, or filling the 2 and 4 litre cans.
-7 litres by filling the 3 and 4 litre cans.
-8 litres by getting 6 litres and filling the 2 litre can.
-9 litres by filling all the cans.
-The answer for this case is 9.
-Is there a general way to answer this for three capacities a, b, c?
-
-REPLY [2 votes]: For the case where we have cans with $2,3,4$ litres, consider the polynomial
-\begin{equation}
-(1+x^2)^3 (1+x^3)^2(1+x^4)^2\left(1+\frac{1}{x^2}\right)^2\left(1+\frac{1}{x^3}\right) \left(1+\frac{1}{x^4}\right)
-\end{equation}
-Look at the terms $x^k$ where $k\geq 1$. I believe all those powers of $x$ are the capacities you can measure using the three cans.
-Reasoning:
-Any term of the form $1+x^a$ allows for two choices - you either fill the can of volume $a$ or you don't fill it. If we multiplied three terms and wrote $(1+x^2)(1+x^3)(1+x^4)$, this would allow you to find out how many different volumes you can measure if you are allowed to fill each can at most once and if a can once filled is never emptied.
-Also, in the case of cans of size $2,3$ and $4$, it is possible to fill the 2 litre can once, empty it into the 4 litre can, fill the 2 litre can again, pour it into the 4 litre can again and fill the 2 litre can a third time. This is why we have $(1+x^2)^3$. Similarly for the 3 litre can. The 4 litre can could be emptied into the 2 and 3 litre cans combined and then refilled, so we have the term $(1+x^4)^2$ as well.
-We should also allow for subtracting of volumes - this implies that terms of the form $(1+\frac{1}{x^a})$ should be present. If we look at the 2 litre can, you could fill it (and empty it) from the 4 litre can twice. Therefore, we have a term $(1+1/x^2)^2$. Similarly we get the term $(1+1/x^3)$. The 4 litre can could also be emptied once after filling it from the other two cans. Combining all of this, we get
-\begin{equation}
-(1+x^2)^3 (1+x^3)^2(1+x^4)^2\left(1+\frac{1}{x^2}\right)^2\left(1+\frac{1}{x^3}\right) \left(1+\frac{1}{x^4}\right)
-\end{equation}
-This idea can be extended to other values of $a,b$ and $c$. I tried it for a few other values (using Wolfram Alpha to expand the expressions) and it seems to work.<|endoftext|>
-TITLE: A short way to say f(f(f(f(x))))
-QUESTION [5 upvotes]: Is there a short way to say $f(f(f(f(x))))$?
-
-I know you can use recursion:
-$g(x,y)=\begin{cases} f(g(x,y-1)) & \text{if } y > 0, \\ x & \text{if } y = 0. \end{cases}$
-
-REPLY [11 votes]: I personally prefer
-$f^{\circ n} = f \circ f^{\circ n-1} = \dotsb = \underbrace{f \circ \dotsb \circ f}_{n-1\text{ function compositions}}$<|endoftext|>
-TITLE: does $\sum_{n=1}^{\infty} \frac{1}{\zeta(1+\frac{1}{n})}$ diverge or converge?
-QUESTION [9 upvotes]: I'm asking because numerical tests seem to give nonsensical answers, and I thought I would check if there was an analytic way of checking for divergence, but I couldn't think of one offhand.
-
-REPLY [14 votes]: $(s-1)\zeta(s) = 1 + a(s-1) + b(s-1)^2 + \dots$ is analytic near $1$ (in fact entire, but we don't need that for this problem).
-For $s=1+1/n$ this gives $\zeta(1+1/n)=n + a + b/n + \dots =n + O(1)$, so the sum diverges.
-
-REPLY [14 votes]: You could use an integral comparison to get a bound on $\zeta(1+1/n)$:
-$$\zeta(1+1/n)\lt 1+\int_1^\infty x^{-1-\frac{1}{n}}\,dx=1+n.$$
-More generally, if $0\lt a\lt 1$, then
-$$\frac{1}{a}=\int_1^\infty x^{-1-a}\,dx\lt\zeta(1+a)\lt1+\int_1^\infty x^{-1-a}\,dx=1+\frac{1}{a}<\frac{2}{a}.$$
-Thus if $a_1,a_2,\ldots$ is a sequence of positive numbers converging to $0$, then
- $\sum_{n=1}^{\infty} \frac{1}{\zeta(1+a_n)}$ converges if and only if $\sum_{n=1}^\infty a_n$ does.<|endoftext|>
-TITLE: Basic counterexample re: preimages of ideals
-QUESTION [18 upvotes]: I'm trying to think of an example of a homomorphism of commutative rings $f:A\rightarrow B$ and ideals $I,J$ of $B$ such that $f^{-1}(I)+f^{-1}(J)$ is not a preimage of any ideal of $B$. I can't seem to come up with one... anyone know one?
-Edit: To clear up some basic facts / head off some mistakes:
-As Arturo points out, we can assume $f$ is an inclusion. Perhaps I should have written the question in terms of inclusions in the first place, but, eh.
-No, $f^{-1}(I)+f^{-1}(J)$ is not equal to $f^{-1}(I+J)$ in general. A counterexample would be the inclusion of $\mathbb{C}$ in $\mathbb{C}[x]$; consider $(x)$ and $(1-x)$.
-To show an ideal $K\subseteq A$ is not a preimage of any ideal of $B$, it suffices to show that it's not equal to $f^{-1}(Bf(K))$.
-
-REPLY [3 votes]: Let $k$ be a field of characteristic $\neq 2$, let $A=k[x,y]$ and $B=k[x,y,x^{-1},y^{-1}]$, and let $f:A\hookrightarrow B$ be the inclusion. An example of what you want is $I=(x+y)B$, $J=(x-y)B$.
-Then $f^{-1}I=(x+y)A$, $f^{-1}J=(x-y)A$, and $f^{-1}I+f^{-1}J=(x,y)A$. Since $(x,y)B=B$, the sum $f^{-1}I+f^{-1}J$ cannot be $f^{-1}$ of any ideal of $B$.<|endoftext|>
-TITLE: An inverse for Euler's zeta function product formula
-QUESTION [10 upvotes]: Of course, Euler proved that the Riemann zeta function can be defined as the analytic continuation of a product over all primes.
-$$\zeta(s) = \prod_{p \in \mathbb{P}}\frac1{1-p^{-s}}$$
-It is well known (but not something I understand) that the positions of zeros of the zeta function allow one to make inferences about the asymptotic behavior of primes. Is this a general phenomenon? Does Euler's transform generalize to products over other subsets of the natural numbers $\mathbb{A}$?
-$$\alpha(s) = \prod_{a \in (\mathbb{A} \subset \mathbb{N})}\frac1{1-a^{-s}}$$
-Can one then reverse Euler's transform and derive the generating subset $\mathbb{A}$ completely from the new function's zero set? More generally, how do properties of the derived function's zeros translate to properties of the generating subset?
-
-And, specifically for the standard Riemann zeta function, if it were shown that exactly one zero existed off the critical line, what would its position say about the distribution of primes?
-
-REPLY [5 votes]: As you noted, to even talk about the "zero set" of the Riemann zeta function, one needs to have an analytic continuation to the left of the half-plane $\{\Re(s)>1\}$ in which the Euler product converges. For a generic set $A\subset{\mathbb N}$, it's not even clear that the corresponding Euler product has an analytic continuation at all. In this case, questions concerning the "zero set" of the corresponding function are ill-posed.
-You can search the relevant literature with the terms "Beurling primes" or "generalized primes".<|endoftext|>
-TITLE: Complex-number inequality $| z_1 z_2 \ldots z_m - 1 | \leq e^{|z_1 - 1| + \ldots + |z_m - 1|} - 1$
-QUESTION [13 upvotes]: Let $z_1, z_2 \ldots z_m$ be complex numbers, $m \in \mathbb{N}$. Can anybody tell me how to prove the following inequality?
-$| z_1 z_2 \ldots z_m - 1 | \leq e^{|z_1 - 1| + \ldots + |z_m - 1|} - 1$
-In case you're wondering, this is asserted without proof in a paper by Von Neumann, about infinite tensor products of Hilbert spaces.
-
-REPLY [17 votes]: The inequality in question bounds how far you can get from $1$ by multiplying several complex numbers that may individually not be far from $1$. So it makes sense to try to derive a bound for the product of just two complex numbers, and then proceed by induction.
-Lemma: Suppose $\lvert z_1 - 1\rvert = \alpha_1$ and $\lvert z_2 - 1\rvert = \alpha_2$. Then $\lvert z_1 z_2 - 1\rvert \le (1 + \alpha_1)(1 + \alpha_2) - 1$.
-Proof: We know $\alpha_1\alpha_2 = \lvert z_1z_2 - z_1 - z_2 + 1\rvert$. By the triangle inequality on the three points $z_1 z_2$, $z_1 + z_2 - 1$, and $1$, we have
-$$\begin{align}
-\lvert z_1 z_2 - 1\rvert &\le \lvert z_1 z_2 - z_1 - z_2 + 1\rvert + \lvert z_1 + z_2 - 2\rvert \\
-&\le \alpha_1\alpha_2 + \alpha_1 + \alpha_2 \\
-&= (1 + \alpha_1)(1 + \alpha_2) - 1.
-\end{align}$$
-Now, for several numbers,
-$$\begin{align}
-\lvert z_1 z_2 \cdots z_m - 1\rvert &\le (1 + \alpha_1)(1 + \alpha_{2,\ldots,m}) - 1 \\
-&\le (1 + \alpha_1)(1 + \alpha_2)(1 + \alpha_{3,\ldots,m}) - 1 \\
-&\vdots \\
-&\le (1 + \alpha_1)(1 + \alpha_2)\cdots(1 + \alpha_m) - 1,
-\end{align}$$ where $\alpha_{2,\ldots,m}$, for example, is my hopefully transparent abuse of notation to denote $\lvert z_2\cdots z_m - 1\rvert.$ Finally, since $1+x \le e^x$ for real $x$, the desired inequality follows.<|endoftext|>
-TITLE: Lower bound on product of distances from points on a circle
-QUESTION [8 upvotes]: Let $C$ be a circle of radius $r$ with $n$ points. Prove that there is a point on the circle such that the product of the distances from this point to the other $n$ points is greater than $r^n$. So we seem to be looking at chords that are of length $\leq 2r$. To show the existence of such a point, would you use a constructive argument? Or more of an indirect argument? Likewise, to show that the inequality holds, would we invoke certain "famous" inequalities such as AM-GM or Cauchy-Schwarz?
-
-REPLY [2 votes]: Here is an approach using complex numbers, but one more elementary (and constructive) than the Maximum Modulus theorem.
-Consider the unit circle. It is enough if we show that the product of distances is greater than $1$.
-In fact, we will show that the product of distances of some point is at least $2$.
-Let the points be $\displaystyle z_1, z_2, \dots z_n$.
Rotate the circle so that $\displaystyle (-1)^n \prod_{j=1}^{n} z_j = 1$ -Let $\displaystyle P(z) = \prod_{j=1}^{n}(z-z_j)$. -The product of distances from $\displaystyle z$ to $\displaystyle z_j$ is given by $\displaystyle |P(z)|$. -Now let $\displaystyle \omega_j$ be the $\displaystyle n$ $n^{th}$ roots of unity. -Since for $\displaystyle 1 \le k < n$, we have that $\displaystyle \sum_{j=1}^{n} (\omega_j)^k = 0$ -we have that -$\displaystyle \sum_{j=1}^{n} P(w_j) = 2n$ -Thus $\displaystyle \sum_{j=1}^{n} |P(w_j)| \ge |\sum_{j=1}^{n} P(w_j)| = 2n$ -Hence there is some $j$ for which $|P(w_j)| \ge 2$. -So we have shown that there is a point whose product of distances from the $n$ points is at least $2r^n$. -In fact, I believe we can also show that if for all points, the product of distances is $\leq 2r^n$, then the $n$ points must be equally spaced! (I will leave that for you :-)).<|endoftext|> -TITLE: Points of discontinuity of a bijective function $f:\mathbb{R} \to [0,\infty)$ -QUESTION [8 upvotes]: We know that the points of discontinuity of a monotone function on an interval $[a,b]$ are countable. Using this can we prov that: - -Any bijection $f: \mathbb{R} \to [0,\infty)$ has infinitely many points of discontinuity. - -If yes, how or otherwise how to prove the above result? - -REPLY [17 votes]: Suppose to the contrary that $f$ has a finite number $n$ of discontinuities, at $x_1, x_2, \ldots, x_n$. Then $\mathbb{R} - \{ x_1, \ldots, x_n \} $ is a union of open intervals $I_1, \ldots I_{n + 1}$. $f$ restricted to each $I_m$ is continuous and injective, and therefore monotone, so each image $f|_{I_m}(I_m)$ is an open (in $\mathbb{R}$) interval $J_m$. The $J_m$ are non-empty, pairwise disjoint and contained in $(0, \infty)$; suppose that $J_{m_1} < J_{m_2} < \ldots < J_{m_{n + 1}}$. Then between each $J_{m_p}$ and $J_{m_{p + 1}}$ there is at least one point, which must be the image of some $x_q$. But this exhausts all the $x$'s, so that $0$ is not in the image of $f$.<|endoftext|> -TITLE: Does localization preserve reducedness? -QUESTION [17 upvotes]: Is the localization of a reduced ring (no nilpotents) still reduced? - -REPLY [34 votes]: Let $A$ be a ring, $S\subset A$ a multiplicatively closed subset, and suppose that $0\neq a/b\in A_S$ is nilpotent. Then there exists $n$ such that $(a/b)^n=0$, i.e., such that there exists $t\in S$ with $ta^n=0$. But then $ta$ is nilpotent in $A$. If it is zero, then $a/b=0$ in $A_S$, which it isn't. - -REPLY [2 votes]: EDIT: This argument is incorrect but I feel others could learn from Mariano Suárez-Alvarez's comments so I've made the post CW. -If I understand correctly you are asking does reduce ring imply reduced localization. -Argue by contrapositive, assume that localization is not reduced, i.e. contains nilpotents. Since elements of the localization of our ring are of the form $\frac{r}{s}$ where s is a subset of our ring that does not contain 0. Then choose a nilpotent element in the localization $(\frac{r}{s})^{n}=0$ Since 0 is not in S it must be the case that $r^{n}=0$, i.e. r is nilpotent. Hence the original ring is nilpotent.<|endoftext|> -TITLE: What is an example of a lambda-system that is not a sigma algebra? -QUESTION [30 upvotes]: What is an example of a lambda-system that is not a sigma algebra? - -REPLY [6 votes]: For another example, let $(\Omega, \mathcal{F}, P)$ be a probability space and fix some event $A \in \mathcal{F}$. Let $\mathcal{L}$ be the collection of all events which are independent of $A$, i.e. 
-$$\mathcal{L} = \{ B \in \mathcal{F} : P(A \cap B) = P(A) P(B)\}.$$ -It is not hard to check that $\mathcal{L}$ is a $\lambda$-system. To see it need not be a $\sigma$-algebra, take as in my other answer a probability space $\Omega = \{HH, HT, TH, TT\}$, $\mathcal{F} = 2^\Omega$, $P(A) = \frac{1}{4} |A|$ consisting of two independent fair coin flips. Set $A = \{HH, HT\}$, the event that the first coin is heads. Then $\{HH, TH\}, \{HH, TT\}$ are in $\mathcal{L}$ but their union $\{HH, TH, TT\}$ is not. -Incidentally, this is really of the same form as my other answer if we take $Q$ to be the conditional probability measure $Q(E) = P (E \mid A) = P(E \cap A)/P(A)$. (Except when $P(A)=0$, but that case is trivial.)<|endoftext|> -TITLE: The determinant is the integral of algebra. The integral is the determinant of analysis -QUESTION [9 upvotes]: This is probably an obvious parallel that most people are aware of, but I only just noticed it the other day and it made me quite excited. The determinant in algebra has a lot in common with the integral in analysis. For example: - -They are both applied to functions, the integral to integrable functions, the determinant to linear transformations $T:V \rightarrow V$. -They are both "sums of products." -They both can be used to give a scalar result. (Not always, of course, but this is how they are first developed.) -They are both important major structures in algebra and analysis. -They are both defined in ways that feel 'backwards'-- the formal definition isn't always useful for calculating them-- then they come to represent multiple important concepts acting as a fulcrum in their fields. (ie. AREA is connected to ANTI-DERIVATIVES... or that SOLUTIONS TO AX=B are connected to LINEAR TRANSFORMATIONS.) -They can both be used to give area and volume. (under a curve, or of a parallelepiped) - -Question: What mathematical structure encompasses both? (If the answer is category theory, please go slowly with me, I don't understand that stuff yet.) -What else could we add to this list? Are there any problems or proofs that bring these parallels in to the light? -Are there any other mathematical structures that follow the pattern established by these two structures? -Were they developed independently (what I suspect) or is the determinant in some way patterned after the integral or vice versa? (I know my math history and have not come across anything about this.) - -REPLY [8 votes]: It's an interesting question. If there were any strong and formalizable analogy it probably would have been developed a long time ago and inscribed in the textbooks. A few observations. - -The linear algebra analogue of integration is a trace. Determinants are an exponentiated trace. For example $\det \exp A = \exp Tr(A)$ for matrices. -If you view integration as solution of differential equations rather than measure, then determinant appears as the Wronskian. -Integration (as measure) and determinants are closely related in the theory behind the change of variables formula in integrals: differential forms. -The integral is a trace on an infinite-dimensional space (the commutative algebra of functions on, e.g., a closed interval or the real line) while the determinant is specifically finite-dimensional. The Lebesgue measure used in ordinary $n$-dimensional integrals is in some sense defined using determinants (volume), which is why it does not generalize well to infinite dimensional spaces. 
-Thinking about integrals and determinants in terms of formal properties they satisfy leads to "K-theory", specifically $K_1$, but I don't think this produces any deep or striking analogies between the concepts.<|endoftext|> -TITLE: Is there a trick to prove injectivity for maps out of tensor products? -QUESTION [9 upvotes]: For a ring $R$, an $R$-right-module $M$, an $R$-left-module $N$, and an abelian group $P$, one can use the universal property of the tensor product to construct maps -$$ -M\otimes_R N\to P. -$$ -It concrete cases, it is often easy to see that the constructed map is surjective by just writing down pre-images. -It seems to be harder to verify that the map is injective, because then one has to consider general sums of elementary tensors in $M\otimes_R N$. -Are there any tricks to avoid this, and to achieve injectivity more elegantly? - -More concretely, an example I have in mind is the following: I have a pre-additive category $C$ with finitely many objects. I consider modules over this category (that is, functors from $C$ to Abelian groups; or, equivalently, modules over the category ring of $C$). Now, I consider the "free $C$-right-module $Q_Y$ over an object $Y$ in $C$" which is the hom functor $C(\bullet,Y)$. I want to show that for an arbitrary left-module $M$: -$$ -Q_Y\otimes_C M\cong M(Y) -$$ -as abelian groups. Using the module structure of $M$, I can accomplish a natural epimorphism $Q_Y\otimes_C M\to M(Y)$ which should be injective. -Currently I think that the formula -$$ -M(Y)\to Q_Y\otimes_C M;\quad m\mapsto\mathrm{id}_Y\otimes m -$$ -gives indeed a (two-sided) inverse. Is this correct? - -REPLY [4 votes]: If the image $Q$ is easy to find and well understood, then you can try to construct an inverse from $Q\to M\otimes N$. If $Q$ is well understood, then you can define homomorphisms from it (say by defining the images of some generators, and checking that all the relations are sent to $0$ in $M\otimes N$). Then you just check that the composition is the identity (say by sending generators of $Q$ into $M\otimes N$ and then back into $P$, and checking relations to check equality). -This depends heavily on the image being well understood. It works well to prove an abstract tensor product is equal to some concretely understood module. If the image is confusing, then checking that the homomorphism is well defined (or even just defining it) can be hard, and even checking equality of elements in $P$ can be hard in some cases. -There are some other tricks using embeddings and exact sequences, but they also have limited applicability (usually when the entire problem has exact sequences floating around). Feel free to give some more concrete examples for more concrete tricks.<|endoftext|> -TITLE: Function of two variables as a function of a function -QUESTION [5 upvotes]: Consider a function $f : \mathcal X \times \mathcal Y \mapsto \mathbb R$. I want to define $g_x(y) = f(x,y) : \mathcal Y \mapsto \mathbb R$. I want to say that - -$g_x$ is a ___ of function $f$. - -What is the appropriate word for _____ - -REPLY [8 votes]: I've seen your $g_x$ called the $x$-section of $f$. E.g. Folland's Real Analysis, section 2.5. -Edit: Another notation that's often useful is to write $f(x, \cdot)$ instead of $g_x$. - -REPLY [6 votes]: This is sometimes called currying. It is closely related to the notion of an exponential object. But you don't really need to use either of these terms to perform this construction. -Edit: Ah, I was assuming you were varying $x$. 
If $x$ is fixed, you might want to call $g_x$ a restriction of $f$.<|endoftext|>
-TITLE: Logistic function passing through two points?
-QUESTION [5 upvotes]: Quick formulation of the problem:
-Given two points: $(x_l, y_l)$ and $(x_u, y_u)$
-with: $x_l < x_u$ and $y_l < y_u$,
-and given lower asymptote=0 and higher asymptote=1,
-what's the logistic function that passes through the two points?
-Explanatory image:
-
-Other details:
-I'm given two points in the form of Pareto 90/10 (green in the example above) or 80/20 (blue in the example above), and I know that the upper bound is one and the lower bound is zero.
-How do I get the formula of a sigmoid function (such as the logistic function) that has a lower asymptote on the left and higher asymptote on the right and passes through the two points?
-
-REPLY [2 votes]: To elaborate on the accepted answer, if we have a logistic function using the common notation:
-$$f(x) = \frac{1}{1 + e^{-k(x-x_0)}}$$
-... and we want to solve for $k$ and $x_0$ given two points, $(x_l, y_l)$ and $(x_u, y_u)$:
-First we can group the unknowns in a single term $b \equiv k(x-x_0)$. So:
-$$y = \frac{1}{1 + e^{-b}}$$
-$$y(1 + e^{-b}) = 1$$
-$$e^{-b} = \frac{1-y}{y}$$
-$$-b = \log\left(\frac{1-y}{y}\right)$$
-$$ b = \log\left(\frac{y}{1-y}\right)$$
-Now we expand $b$:
-$$k(x-x_0) = \log\left(\frac{y}{1-y}\right)$$
-... which gives us a linear system to solve for $k$ and $x_0$ given the values of two $(x, y)$ coordinates.<|endoftext|>
-TITLE: A tree of convex sets?
-QUESTION [13 upvotes]: This was suggested by a problem on FreeNode's #math a little while ago...
-Construct a directed graph $\Gamma$ with vertex set the set of compact convex sets in $\mathbb R^2$, and an arrow $A\to B$ if $A$ and $B$ are disjoint and there is a point $a$ in $A$ and a $t>0$ such that $a+(t,0)$ is in $B$.
-I guess there are no oriented cycles in $\Gamma$, so it is a directed acyclic graph. Can you give a non-ugly proof?
-Later: Rahul's example below shows that this is too much to hope for. In fact, what I really wanted is to know if the subgraph spanned by every finite set of disjoint convex sets has no cycles.
-
-REPLY [2 votes]: Here is a simple proof that the directed graph induced by disjoint convex sets is acyclic.
-Suppose there is a counterexample; then there must be one with the smallest number of convex sets involved. Suppose such a counterexample is given by the convex sets $A_1,A_2,\dots,A_n$. Then the directed cycle must contain all the $A_i$'s by the minimality assumption (otherwise just throw the ones not in the cycle away to get a smaller counterexample). WLOG, assume the graph contains $A_1\to A_2\to \cdots \to A_n\to A_1$.
-Lemma: There are no other edges in this graph.
-Proof: If we had $A_k\to A_r$ with $k\neq r-1 \pmod{n}$, then $A_k\to A_r\to A_{r+1}\to \cdots \to A_k$ would be a shorter cycle, which contradicts our minimality assumption.
-So far we haven't used that these are convex sets. Now the trick is that if there is no $A_k$ which intersects the convex hull of $A_r \cup A_{r+1}$ then substituting $A_r$ and $A_{r+1}$ with their convex hull gives a smaller collection of convex sets with a cycle.
-Now by applying the lemma it follows that to show that no such $A_k$ can exist, it is enough to prove that if $A_k$ intersects the convex hull of $A_r \cup A_{r+1}$ then either $A_r \to A_k$ or $A_k \to A_{r+1}$, but this is obvious if you draw the picture, so I'll leave it as an exercise :) -Since there are no two convex sets $A\to B \to A$, we are done.<|endoftext|> -TITLE: Solving a quadratic congruence: $4x^2 \equiv 2 \ (\text{mod} \ 7)$ -QUESTION [5 upvotes]: How does one go about solving the following quadratic congruence? -$4x^2 \equiv 2 \ (\text{mod} \ 7)$ - -REPLY [3 votes]: You can take the square roots of both sides: $$4x^2 \equiv 2 \pmod 7$$ $$2x \equiv 3, 4 \pmod 7$$ Then halve both sides: $$x \equiv 5, 2 \pmod 7.$$ -Example: $4 \times 12^2 = 576 = 82 \times 7 + 2$.<|endoftext|> -TITLE: A multi-dimensional Frobenius problem -QUESTION [11 upvotes]: Inspired by - this question. -Let $A$ be a subset of ${\mathbb Z}^d$ that generates the whole additive group ${\mathbb Z}^d$, and let $S$ be the additive semigroup generated by $A$. -Prove that there is a $d$-dimensional convex cone $C\subset {\mathbb R}^d$ so that -all elements of $C \cap {\mathbb Z}^d$ with sufficiently large norm are contained in $S$. -P.S. Feel free to retag! - -REPLY [6 votes]: Take $\mathcal{E}=(e_1,\ldots,e_d) \in A^d$ that is a basis of $\mathbb{Q}^d$. -We choose the norm $|| \cdot ||$ to be $|| \cdot ||_{\infty,\mathcal{E}}$ (sup-norm of the coefficients in the basis $\mathcal{E}$). -$\mathbb{Z}e_1 + \ldots + \mathbb{Z}e_d$ contains $N \mathbb{Z}^d$ for some positive integer $N$. -Let $C_0$ be the cone spanned by the $e_i$'s. -Take $C$ to be the cone spanned by $f_1,\ldots,f_n$ where $f_i = e_i + \epsilon \sum_{j \neq i} e_j$, choosing $\epsilon>0$ small enough so that $(f_1,\ldots,f_n)$ is free. -There exists $\eta>0$ (depending on $\epsilon$) such that for every $x \in C$, the coefficients of $x$ in the basis $\mathcal{E}$ are all $\geq \eta ||x||$, so $B(x,\eta ||x||) \subset C_0$ (draw a picture!). -Now choose $s_1,\ldots,s_k \in S$ exhausting $\mathbb{Z}^d/N\mathbb{Z}^d$. -Let $M=\max_i ||s_i||$. -Take $x \in C \cap \mathbb{Z}^d$, with $||x||>M/\eta$. -There exists $i$ such that $x-s_i \in N \mathbb{Z}^d$, and $x-s_i \in B(x,M) \subset C_0$, so the coefficients of $x-s_i$ in $\mathcal{E}$ are non-negative integers. -As a consequence, $x=s_i+(x-s_i)$ is in $S$. -Edit: -Just to add rigor to the statement with the "picture": if $x = \sum_i x_i f_i \in C$, and if $x_{i_0} = \max_i x_i$, then $x = \sum_i y_i e_i$ with $y_{i_0} = \max_i y_i \leq (1+(d-1)\epsilon ) x_{i_0}$ and $y_i \geq \epsilon x_{i_0}$ for each $i$, so that $y_i \geq \epsilon / (1+(d-1)\epsilon) ||x||$.<|endoftext|> -TITLE: Cartesian Product of Two Complete Metric Spaces is Complete -QUESTION [7 upvotes]: This is a problem I'm stuck on that our professor gave us for additional practice (not homework, but its recommended that we understand how to prove it). -We know X and Y are complete metric spaces, and we need to show that $X \times Y$ is complete. I'm really lost on the proof technique. We were given an outline as follows, but I could only fully figure out (1). Part 3 is what we've been really stuck on though. I was wondering whether someone could give an proof for say a more specific space where $X = \mathbb{R}, Y = \mathbb{R}$, so I could understand the principle. -Outlined: -1) Show that $d_{X \times Y} ( (a_1,b_1) , (a_2,b_2)) = \max \{ d_X (a_1,a_2) , d_Y (b_1, b_2)\}$ is a metric. 
-2) Prove that this gives the product topology on $X \times Y$. -3) Prove that if $a_n, b_n$ are Cauchy sequences, where $a_n \in X$ and $b_n \in Y$, then $(a_n,b_n )$ is Cauchy. - -REPLY [2 votes]: Actually if we want to prove that a product space is complete then we have to take a Cauchy sequence from the product space and then show that it converges to a point in it. -Observe that for all $a_1, a_2 \in X$ and $b_1, b_2 \in Y,$$d_X (a_1,a_2) \leq \max \{ d_X (a_1,a_2) , d_Y (b_1, b_2)\}\tag1$ $d_Y (b_1,b_2) \leq \max \{ d_X (a_1,a_2) , d_Y (b_1, b_2)\} \tag2$. -Suppose we have a Cauchy sequence $((a_n, b_n))$ in $X \times Y $. Then given $\epsilon > 0$ there exist $N$ such that for all $n,m \geq N$ we have $\max \{ d_X (a_n,a_m) , d_Y (b_n, b_m)\} < \epsilon$ -Due to $(1)$ and $(2)$ we have $d_X (a_n,a_m) < \epsilon$ and $d_Y (b_n,b_m) < \epsilon$ for all $n,m \geq N$ implying that $(a_n)$ and $(b_n)$ are Cauchy in $X$ and $Y$ respectively. Since $X$ and $Y$ are both complete $(a_n)$ converges to some $a \in X$ and $(b_n)$ converges to some $b \in Y$ and thus $(a,b) \in X \times Y.$ -Now it is a matter of showing that $((a_n, b_n))$ converges to $(a,b)$ in $X \times Y$ which is easy to do since $\lim_{n \to \infty }d_{X \times Y} ( (a_n,b_n) , (a,b)) = \lim_{n \to \infty } \max \{ d_X (a_n,a) , d_Y (b_n, b)\} = 0.$<|endoftext|> -TITLE: Generating integers from a linear combination of integers -QUESTION [10 upvotes]: Given are some positive integers $t_1, ..., t_n$ such that $\mathrm{gcd}(t_1, ..., t_n) = 1$. Now we create a linear combination of these: $k_1 t_1 + ... + k_n t_n$ where $k_1, ..., k_n$ are nonnegative integers that you can choose to generate different integers. -What is the smallest $N$ such that all integers greater than or equal to $N$ can be generated by the linear combination? -Example: $k_1 3 + k_2 4$ can generate $0, 3, 4, 6, 7, 8, 9, ...$. So here $N = 6$ and $k_1 = 2, k_2 = 0$. - -REPLY [8 votes]: This is a famous problem, sometimes called the coin problem of Frobenius. -If $n=2$ the answer to your question is known to be $N = t_1 t_2 - (t_1 + t_2) + 1$. A proof of this can be found in the answer to this recent question. -For three or more integers, there is no known closed-form solution for $N$. There are some bounds on the values of $N$ in the $n = 3$ case, as well as some algorithms for determining $N$. For more information and references, see the Wikipedia and MathWorld pages on the Frobenius coin problem. -Basically, the problem is considered solved when $n = 2$, partially solved (because of the bounds and algorithms) when $n = 3$, and unsolved when $n \geq 4$. -Update: Guy's Unsolved Problems in Number Theory says, "The case $n = 3$ was first solved explicitly by Selmer and Beyer, using a continued fraction algorithm." So I guess the $n=3$ case has been solved. I suppose you have would have to dig up their paper (it's in the MathWorld references) to see their solution.<|endoftext|> -TITLE: Farkas’ lemma: purely algebraic intuition -QUESTION [21 upvotes]: Here is a statement of Farkas Lemma from the Wikipedia. Let $A$ be an $m \times n$ matrix and $b$ an $m$-dimensional vector. Then, exactly one of the following two statements is true: - -There exists an $x \in \mathbb{R}^n$ such that $Ax = b$ and $x \geq 0$. -There exists a $y \in \mathbb{R}^m$ such that $A^T y \geq 0$ and $b^T y < 0$. 
-
-This result has a simple geometric interpretation: either $b$ lies in the cone formed by the columns of $A$ or it is possible to find a vector $y$ such that $y$ forms an acute angle with all columns of $A$ and an obtuse angle with $b$.
-I was wondering if there is a way to make the result intuitive purely at the algebraic level of solving linear equations and inequalities?
-The lemma is important in finance, and the geometric intuition is not much help there, since the vectors are payoffs and prices of financial assets, which have no natural geometric meaning. Gale's Theory of Linear Economic Models has a purely algebraic proof, but that is an opaque induction argument.
-EDIT: A little more about my application. We have $m$ assets and $n$ possible future states of nature. $A_{ij}$ is the payoff by asset $i$ in state $j$. $b_i$ is the price of asset $i$. $x_j$ is the value of one dollar in state $j$. $y_i$ is the amount of asset $i$ in a portfolio.
-So Farkas' lemma tells us that either (1) there is a way of assigning a non-negative price to a dollar in each state in such a way that the price of each asset is just the sum total of the value of its payoffs, or (2) there is a portfolio whose price is negative, so you get paid for holding it, but whose payoffs are non-negative, which means that you do not have to pay anything back. I want to understand, in terms which make economic sense, why (1) and (2) should be mutually exclusive. Acute and obtuse angles are no help here.
-
-REPLY [26 votes]: If you want an intuition that explains why Farkas' Lemma should be true, you will have to use the geometric interpretation; there's no way around that.
-If you want an intuition that shows what Farkas' Lemma achieves (but not why it should be true), there is indeed an algebraic explanation.
-Namely, Farkas' Lemma answers the following question: what is a necessary and sufficient condition for the system of equations
-$$\begin{array}{rcl}a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n &=& b_1 \\
-a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n &=& b_2 \\
-&\vdots& \\
-a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n &=& b_m
-\end{array}$$
-to have a nonnegative solution $x_i\ge 0$?
-Obviously, if one of the equations has the form
-$$ c_1x_1 + c_2x_2 + \dots + c_nx_n = b $$
-where all the $c_i$ are positive $c_i > 0$, but the right-hand side $b$ is negative $b<0$, then it's clearly impossible to choose all the $x_i$ nonnegative and have them satisfy the equation, because the left-hand side would always be nonnegative.
-This was only for a single equation, but it is equally obvious that if any linear combination of the equations of your system
-$$ \left(\sum y_i a_{i1}\right)x_1 + \left(\sum y_i a_{i2}\right)x_2 + \dots + \left(\sum y_i a_{in}\right) x_n = \sum y_i b_i $$
-has this form with nonnegative coefficients on the left- and a negative number on the right-hand side, then you can't solve it for $x_i \ge 0$.
-Now, what Farkas' Lemma says is that this is in fact the only bad thing that can happen. If the system
-$$Ax = b$$
-doesn't have a solution $x\ge0$, then this must be because there exists a nasty linear combination of some equations
-$$(y^TA)x = y^Tb$$
-that obstructs solvability because it satisfies $y^TA \ge 0$ but $y^Tb < 0$.
-In a sense, Farkas' Lemma shows that the non-solvability of the system can be "certified"; the vector $y$ is a "certificate" for the fact that the system cannot be solved.
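-For instance, take
-$$A=\begin{pmatrix}1 & 1\\ 1 & 1\end{pmatrix},\qquad b=\begin{pmatrix}1\\ 2\end{pmatrix}.$$
-The system demands $x_1+x_2=1$ and $x_1+x_2=2$ simultaneously, which no $x\ge 0$ (indeed no $x$ at all) satisfies, and $y=(1,-1)^T$ certifies this: $y^TA=(0,0)\ge 0$ while $y^Tb=-1<0$, i.e. subtracting the second equation from the first produces $0=-1$ with nonnegative coefficients on the left.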
Such certificates, commonly called obstructions, are common in other branches of mathematics, like algebraic geometry (Hilbert's Nullstellensatz) or algebraic topology (homology, cohomology).<|endoftext|>
-TITLE: Adding integers to an infinite continued fraction expansion doesn't change the value?
-QUESTION [5 upvotes]: I'm learning about continued fractions, and I've enjoyed them so far, but I'm unsure if I've done the following correctly. I have no real experience with analysis, so I'm not sure if my reasoning is formal enough, or correct. Any feedback would be appreciated.
-Let $\xi$ be an irrational number with continued fraction expansion $\langle a_0,a_1,a_2,a_3\dots\rangle$. Let $b_1,b_2,b_3,\cdots$ be any sequence of positive integers, either finite or infinite. Prove that $\lim_{n\to\infty}\langle a_0,a_1,a_2,\dots,a_n,b_1,b_2,b_3\dots\rangle=\xi$.
-I let $r_n=\langle a_0,a_1,\dots,a_n\rangle$ and $\xi'=\langle b_1,b_2,b_3,\dots\rangle$, and let $\beta_n=\langle a_0,a_1,a_2,a_3\dots,a_n,\xi'\rangle$. So*
-$$\beta_n-r_n=\beta_n-\frac{h_n}{k_n}=\frac{\xi'h_n+h_{n-1}}{\xi'k_n+k_{n-1}}-\frac{h_n}{k_n}$$
-$$=\frac{-(h_nk_{n-1}-h_{n-1}k_n)}{k_n(\xi'k_n+k_{n-1})}=\frac{(-1)^n}{k_n(\xi'k_n+k_{n-1})}$$
-But $\{k_n\}$ is a positive increasing sequence and $\xi'$ is a positive real number, so as $n$ approaches $\infty$, the denominator goes to $\infty$ while the numerator alternates between $-1$ and $1$, so the fraction tends to $0$. Hence we have that $\lim_{n\to\infty}(\beta_n-r_n)=0$, so $$\lim_{n\to\infty}\beta_n=\lim_{n\to\infty}r_n=\lim_{n\to\infty}\langle a_0,a_1,a_2,\dots,a_n\rangle=\langle a_0,a_1,a_2\dots\rangle=\xi.$$ Hence $\lim_{n\to\infty}\langle a_0,a_1,a_2,\dots,a_n,b_1,b_2,b_3\dots\rangle=\xi$.
-Is this the correct route to go? As a small side question, how does it make sense to have integers $b_i$ at the end of this sequence, if the sequence is infinite? Thanks!
-*If it's not well known, $\{h_n\}$ is the sequence defined by $h_{-2}=0,h_{-1}=1,h_i=a_ih_{i-1}+h_{i-2}$ and $\{k_n\}$ is defined as $k_{-2}=1,k_{-1}=0, k_i=a_ik_{i-1}+k_{i-2}$, and $r_n=\langle a_0,a_1,\dots,a_n\rangle$, for any sequence of integers $a_0,a_1,a_2\dots$ all positive except perhaps $a_0$.
-
-REPLY [5 votes]: You are on the right track, but some statements do not make any sense at all.
-1) "Since $\xi'$ is a sequence of positive integers": This does not make any sense. $\xi'$ is a constant real number.
-2) You are using confusing notation. $\beta = \langle a_0, a_1, \dots, a_n, \xi' \rangle$ varies with $n$, so you need to talk about $\beta_n$.
-3) "Hence we have $\lim_{n \to \infty} (\xi' -r_n) = 0$" is not right. This implies that any two real numbers are equal (think about it)! You need to consider $\lim_{n \to \infty} \beta_n$.
-As to your side question, it is not one sequence.
-It is a sequence of sequences!
-$a_0, a_1, b_1, b_2, b_3, \dots$
-$a_0, a_1, a_2, b_1, b_2, b_3, \dots$
-$\vdots$
-$a_0, a_1, a_2, \dots, a_n, b_1, b_2, b_3 \dots$
-$\vdots$
-Notice that in each sequence, we only had a finite number of the $a_i$.
-Each sequence corresponds to a real number (the $\beta_n$ above).
-Thus we get a new sequence
-$\beta_1, \beta_2, \dots$
-And the problem is asking you to prove that $\lim_{n \to \infty} \beta_n = \xi$.<|endoftext|>
-TITLE: Confidence band for Brownian Motion with uniformly distributed hitting position
-QUESTION [8 upvotes]: Let $(B_t)$ denote the standard Brownian motion on the interval $[0,1]$.
For a given confidence level $\alpha \in (0,1)$ a confidence band on $[0,1]$ is any function $u$ with the property that -$$ -P(\omega; |B_t(\omega)| < u(t), \quad \forall t\in [0,1])=\alpha. -$$ -In other words, the probability that a path of the Brownian motion stays within a confidence band is $\alpha$. Additionally the boundary hitting position for those paths leaving the band must be uniformly distributed on $[0,1]$. This condition can be stated using the stopping time -$$\tau(\omega) = \inf [ t \in [0,1], |B_t(\omega)|=u(t) ]. -$$ -Then $\tau $ is the time of the first hitting, and one asks that $\tau$ is uniformly distributed on $[0,1]$ conditionally on the event that $\tau$ is finite. -I am interested in - -References and links to literature or papers considering this or similar problems -Thoughts, ideas, discussion - -The context of the problem is a rather boring one, so will not state it here. The problem itself seem to be non-trivial and interesting. - -REPLY [2 votes]: If I understand you correctly, you are looking for a curve $u(t)$ with $t \in [0,1]$ so that the probability the absolute value of a standard Wiener process does not cross the curve is $\alpha$ and that the probability density of the first crossing is a constant $1-\alpha$. -The following simulation in R may help indicate the shape of $u_\alpha(t)$: -##simulated boundary for standard Wiener process -##time for absolute value to cross boundary first time -##uniformly distributed on [0,1] given crosses boundary -steps <- 100 #how many steps in (0,1] -cases <- 100000 #how many processes to simulate -alpha <- 0.00 #probability does not cross boundary -normmat <- matrix(rnorm(steps*cases), ncol=steps) -brown <- normmat/sqrt(steps) #for var=1 after all steps -for (i in 2:steps){brown[,i] <- brown[,i-1] + brown[,i]} #cumulative sum -absbrown <- abs(brown) -boundary <- rep(0,steps) -for (i in 1:steps){ - boundary[i] <- quantile(absbrown[,i], - probs = (steps-i*(1-alpha))/(steps-(i-1)*(1-alpha)), - names = FALSE) - absbrown <- absbrown[!(absbrown[,i] > boundary[i]), ] #del crossed - } -plot( c(0,(1:steps)/steps), c(0,boundary), type="l", xlab="t", - ylab="boundary", main=paste("simulated boundary for alpha =",alpha) ) -abline(h=0) -abline(v=0) - -Here are an example with $\alpha =0$. The actual curve will be smoother. - -Here is another. If George Lowther is correct then this is simply the first half of the previous curve stretched upwards. - -Added for comment: Taking the left hand half of the first graph (black below) and taking a shrunken version of the second (red below, dividing $t$ by $2$ and the boundary by $\sqrt{2}$), there is a very good match, except for the $y$ axis which may be a rounding effect in the simulation. So George Lowther looks correct.<|endoftext|> -TITLE: Splitting equilateral triangle into 5 equal parts -QUESTION [15 upvotes]: Is it possible to divide an equilateral triangle into 5 equal (i.e., obtainable -from each other by a rigid motion) parts? - -REPLY [19 votes]: The answer is "yes", it is possible to divide equilateral triangle into $5$ equal parts, see the picture below which comes from here: https://ru-math.livejournal.com/831851.html<|endoftext|> -TITLE: Discriminant of a monic irreducible integer polynomial vs. discriminant of its splitting field -QUESTION [16 upvotes]: Let $f\in\mathbb{Z}[x]$ be monic and irreducible, let $K=$ splitting field of $f$ over $\mathbb{Q}$. What can we say about the relationship between $disc(f)$ and $\Delta_K$? 
I seem to remember that one differs from the other by a multiple of a square, but I don't know which is which. On a more philosophical note: why are these quantities related at all? Is there an explanation for why they can be different, i.e. some information that one keeps track of that the other doesn't?
-
-REPLY [13 votes]: I think there was some confusion about the splitting field and the field $\mathbb{Q}[x]/(f(x))$, which is isomorphic to the field generated by one root of $f(x)$. (We always assume that $f(x)$ is monic irreducible.)
-Let $\alpha$ be a root of $f(x)$, let $L=\mathbb{Q}(\alpha)$ be the field generated by $\alpha$, and let $\mathbb{Z}[\alpha]$ be the subring of $\mathcal{O}_L$ generated by $\alpha$. Then the discriminant of $f(x)$ is the discriminant of the lattice $\mathbb{Z}[\alpha]$. So $\mathrm{Disc}(f)/\mathrm{Disc}(\mathcal{O}_L)$ is the square of $[\mathcal{O}_L: \mathbb{Z}[\alpha]]$. (See III.3 of Lang's "Algebraic number theory".)
-However, for splitting fields, these things hardly compare. For example, take $f(x)=x^4-x+1$; then the discriminant of $f(x)$ is 229 (a prime, which coincides with the discriminant of the field $L$ in this case), but the discriminant of the splitting field of $f(x)$ is $229^{12}$ (calculated using Pari), which has 29 digits. (Well, it is not hard to show that the discriminant of the splitting field of $f(x)$ shares the same prime divisors as that of the field $L$.)
-Sorry about bringing up a really old question. It is just that I asked myself the same thing today.<|endoftext|>
-TITLE: Series representation for 1/cos x
-QUESTION [8 upvotes]: Why does the summation
-\begin{equation*}
-\frac{1}{\cos x}=\sum_{n=1}^\infty \frac{(-1)^n(2n-1)\pi }{x^2-\left (n-\frac{1}{2}\right )^2\pi^2}
-\end{equation*}
-hold?
-
-REPLY [3 votes]: You can find such partial fraction expansions given as examples in many texts on complex analysis. For example, you will find a derivation of this series in Example 1 at this link to Markushevich and Silverman's book. It uses "Cauchy's theorem on partial fraction expansions", given a few pages earlier at this link.
-Example 2 gives the series for $\cot$, although from the preview I can't see whether $\tan$ and $\csc$ are included. The analogous series for $\tan$ is derived on Wikipedia. Each of these series can be useful for evaluating numerical series. For example, taking $x=0$ in your series you get
-$$\frac{\pi}{4}=1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\cdots.$$
-And although it is not directly related to your question, I thought I'd add in light of the recent question on evaluating the sum of the reciprocals of the squares that if you take the partial fraction series
-$$\tan(x) = \sum_{k=0}^{\infty} \frac{-2x}{x^2 - \left(k + \frac{1}{2}\right)^2\pi^2},$$
-divide by $x$ and let $x$ go to $0$, rearranging yields
-$$\frac{\pi^2}{8}=1+\frac{1}{9}+\frac{1}{25}+\frac{1}{49}+\cdots,$$
-which in turn (because the sum over the evens is $\frac{1}{4}$ the total sum) leads to
-$$\frac{\pi^2}{6}=1+\frac{1}{4}+\frac{1}{9}+\frac{1}{16}+\cdots.$$<|endoftext|>
-TITLE: A Torus and the Weierstrass P function?
-QUESTION [5 upvotes]: Let $\wp$ be the Weierstrass function.
-From what I understand, $\wp$ maps the torus to $CP^1 \times CP^1$ in the following way: -$a \mapsto (\wp(a),\wp'(a)) = (z,w)$ -Furthermore, the image of this map lies on the zero set of the polynomial $P(z,w) = 4(z-e_1)(z-e_2)(z-e_3) - w^2.$ -What I don't get is the description of the inverse to this map, which is supposedly the integral of the differential form $\frac{dz}{w}$ from $\infty$ to a point $Q$ along a path $c$. - -I don't understand what $\infty$ means here. -I don't understand why this is would be the inverse. - -Heuristically, I see that -\begin{equation} -\int_\infty^Q \frac{dz}{w} = \int_0^z \frac{\wp'(u)}{\wp'(u)} du = \int_0^a du = a. -\end{equation} -But unfortunately, the computation above makes little sense. I suppose I am attempting pull-back by setting $z=\wp$ and $w=\wp'$. But isn't this a map into the complex plane, and not the Torus? -Also, why is infinity the branch point? -More generally, let $w^2 = p(z)$ with degree of $p$ odd. Why is infinity one of the branch points, and why isn't it a branch point when the degree of $p$ is even? -Thank you for your time! - -REPLY [7 votes]: It is better to think of mapping to $\mathbb C P^2$ rather than a product of $\mathbb C P^1$s. The point $\infty$ is then the unique point at infinity on the cubic $y^2 = 4(x-e_1)(x-e_2)(x-e_3).$ -Also, the integral depends on the path, not just the endpoint; the ambiguity in the value of the integral if you just specify the endpoints is given by integrating over closed loops, i.e. over representatives of the 1st homology. These integrals span a lattice $\Lambda$ in $\mathbb C$ (the original lattice with respect to which you defined $\wp$), and so the integrals really take values in $\mathbb C/\Lambda$, not in $\mathbb C$.<|endoftext|> -TITLE: A Banach Manifold with a Riemannian Metric? -QUESTION [11 upvotes]: Given an infinite dimensional manifold modeled on a Banach space, what does it mean for it to have a Riemannian metric? Does it necessarily mean that it is actually a Hilbert manifold? -My understanding is that Hilbert Spaces have inner products, whereas Banach Spaces just have norms. If a Banach manifold has a Riemannian metric, it means that at each tangent space, which is a Banach space, it has an inner product... so wouldn't that make any Banach manifold a Hilbert Manifold? - -REPLY [7 votes]: There are two notions of what it means for a(n infinite dimensional) manifold to have a Riemannian structure. A strong Riemannian structure means a (smooth) choice of inner product on each tangent space which induces an isomorphism of each tangent space with its corresponding cotangent space. A weak Riemannian structure simply means a (smooth) choice of inner product on each tangent space. -Strong Riemannian structures only exist if the manifold is modelled on a Hilbert space, and even then they have to be chosen correctly (so the usual $L^2$-metric on the space of $L^{1,2}$-loops is not a strong structure, even though the manifold is Hilbertian). Weak Riemannian structures exist much more widely. For a weak Riemannian structure you only need to know that the manifold admits smooth partitions of unity and that the model spaces admit continuous inner products. So, for example, continuous loops in a smooth manifold admit a weak Riemannian structure but not a strong one. 
-
-Although strong Riemannian structures are very good for generalising much of ordinary differential geometry to infinite dimensions, there are occasions where the requirement of having a Hilbert manifold is too strong, and one can get away with merely having a weak Riemannian structure. I've written an article where having a weak Riemannian structure on the space of smooth loops was an essential step and where the construction would not have worked on a Hilbertian manifold (though actually it was a co-Riemannian structure that I needed).
-(Declaration of interests: I've actually proposed a refinement of the "weak/strong" classification as I found it too harsh. See my article here for this, and the above-mentioned result, and a load of examples of spaces with different types of Riemannian structure.)<|endoftext|>
-TITLE: Does Stirling's formula give the correct number of digits for $n!\phantom{}$?
-QUESTION [13 upvotes]: It is known that the number of digits of a natural number $n > 0$, which we represent by $d(n)$, is given by:
-$d(n)= 1 + \lfloor\log n\rfloor\qquad (\text{I})$
-($\log$ indicates $\log$ base $10$)
-Well, the classical Stirling approximation to the factorial of a natural number $n > 1$ is given by:
-$$n! \approx f(n) = [(2n\pi) ^{1/2}] [(n / e) ^ n]$$
-The number of digits of $n!$, according to equality (I), is:
-$d(n!) = 1 + \lfloor\log n!\rfloor$
-It seems to me that for all natural $n> 1$, $\log n!$ and $\log [f (n)]$ have the same floor:
-$$\lfloor\log(n!)\rfloor = \lfloor\log(f(n))\rfloor$$
-Here's my big question!
-Therefore, we could write:
-$d (n!) = 1 + \lfloor\log(f(n))\rfloor$
-I hope someone has a little time for this topic.
-
-REPLY [20 votes]: 6561101970383 is a counterexample, and the first such if I computed correctly. See my answer in https://mathoverflow.net/questions/19170 for more information.<|endoftext|>
-TITLE: Exercise from Eisenbud & Harris's The Geometry of Schemes
-QUESTION [9 upvotes]: I've just started learning about schemes, so maybe I'm missing something basic.
-This is exercise I-24(a):
-
-Take Z = Spec$\mathbb{C}[x]$, let $X$ be the result of identifying the two closed points (x) and (x-1) of |Z|, and let $\phi: Z \to X$ be the natural projection. Let $\mathcal{O}$ be $\phi_* \mathcal{O}_Z$, a sheaf of rings on
-$X$. Show that $(X, \mathcal{O})$ satisfies condition (i) above for all elements $f \in \mathcal{O}(X) = \mathbb{C}[x]$.
-The condition (i) referred to: For any $f \in \mathbb{C}[x]$ define $U_f \subset X$ as the set of points $x \in X$ such that $f$ maps to a unit of the stalk $\mathcal{O}_x$. (i) means that $\mathcal{O}(U_f) = \mathbb{C}[x][f^{-1}]$ for all $f$.
-
-But how can this be? Put $f = x$. Then
-$U_f = X \setminus \{(x)\}$
-$\phi^{-1}(U_f) = Z \setminus \{ (x), (x-1) \}$
-$\mathcal{O}(U_f) = \mathcal{O}_Z(\phi^{-1}(U_f)) = \mathbb{C}[x][ ((x)(x-1))^{-1} ]$.
-And that is not $\mathbb{C}[x][f^{-1}]$.
-Edit: Regarding the answer and comments.
-evgeniamerkulova's answer reassures me that I'm not out of my mind, but obviously Matt E and Mariano know what they're talking about, so I don't know what to think.
-Both Mariano and Matt E imply that $\mathcal{O}(X)$ is not $\mathbb{C}[x]$, but that seems obviously wrong (and contradicts the book itself).
-Here's my reasoning, spelled out. $\mathcal{O}(X)$ is $\mathbb{C}[x]$. This is because $\mathcal{O}_Z(\phi^{-1}(X)) = \mathcal{O}_Z(Z) = \mathbb{C}[x]$. In order for the condition to be satisfied, we need $\mathcal{O}(U) = \mathbb{C}[x,x^{-1}]$ for some open $U$ in $X$.
So we need $\mathcal{O}_Z(\phi^{-1}(U)) = \mathbb{C}[x,x^{-1}]$. For that to happen we need $\phi^{-1}(U) = Z \setminus \{ (x) \}$. But all the inverse images of sets in $X$ either include both $(x)$ and $(x-1)$ or neither of them, so this can never happen.
-
-REPLY [4 votes]: Everything you say is right and Eisenbud and Harris are wrong. I don't understand Matt E's comments either, because you made no mistake:
-a) That "$x$" is not a function on $X$ has nothing to do with the problem, and you never said it was a function.
-b) He writes "To compute correctly on $X$, you need to figure out what $\mathcal O(X)$ is": you have computed that it is $\mathbb C[x]$, and you are right.
-For completeness, the stalk $\mathcal O_a$ of $\mathcal O$ at the quotient point $a\in X$ corresponding to $0,1$ [you write $x$, but you may not, because $x$ is already a polynomial] is the ring $S \subset \mathbb C(x)$ of all fractions $f(x)/g(x)$ (with $f(x), g(x) \in \mathbb C[x]$) such that $g(0)\neq 0$ and $g(1)\neq 0$.<|endoftext|>
-TITLE: Roots of unity and field extensions
-QUESTION [8 upvotes]: Can we always break an arbitrary field extension $L/K$ into an extension $F/K$ in which the only roots of unity of $F$ are those in $K$, followed by an extension $L/F$ which is of the form $L=F(\{\omega_i\})$ where the $\omega_i$ are roots of unity? What if we restrict to $L/K$ separable, or finite?
-
-REPLY [2 votes]: The question has been answered on MathOverflow: https://mathoverflow.net/questions/49913/factoring-a-field-extension-into-one-which-adds-no-roots-of-unity-followed-by-on/49914#49914<|endoftext|>
-TITLE: Generalized Feedback Shift Registers
-QUESTION [5 upvotes]: I find some of the examples I have seen confusing. Maybe you can help me determine what is going on with them.
-A Generalized Feedback Shift Register (GFSR) sequence is a sequence $\{W_{i}\}$ satisfying the equation
-$$W_{k+p}=c_{0}W_{k}\bigoplus c_{1}W_{k+1}\bigoplus...\bigoplus c_{p-1}W_{k+p-1} \qquad \qquad (1)$$
-where $\bigoplus$ is the binary exclusive-or operation.
-If the polynomial $f(x)=c_{0}+c_{1}x+c_{2}x^{2}+...+c_{p-1}x^{p-1}+x^{p}$ is a primitive polynomial over $GF(2)$, then the sequence $\{W_{i}\}$ will have maximal period $2^{p}-1$.
-Example 1: Let's consider the trinomial $1+x+x^{4}$ and a bit sequence $1, 0, 1, 0$. For the polynomial we have $c_{0}=1$, $c_{1}=1$, $c_{2}=0$, $c_{3}=0$ and $p=4$. Therefore, the equation $(1)$ becomes $W_{k+4}=W_{k}\bigoplus W_{k+1}$. According to this, we calculate the values for $W_{5}, W_{6},...$ etc. (since we already know that $W_{1}=1, W_{2}=0, W_{3}=1, W_{4}=0$).
-This procedure generates the following sequence
-$$1,0,1,0,1,1,1,1,0,0,0,1,0,0,1$$
-Then the example takes 4-bit chunks (changing to decimal representation):
-$1010=10, 1111=15, 0001=1, 0011=3, 1010+1111=0101=5, 0001+0011=0010=2$ and so on. So a '4-wise decimation' using the recurrence yields the numbers
-$$10, 15, 1, 3, 5, 2, ...$$
-Is this a standard way to generate a bigger sequence?
-Example 2:
-
-
-By using the bit stream from the trinomial $1+x+x^{4}$ and the starting sequence $1,0,1,0$, and... forming 4-bit words by putting the bits into a fixed binary position with a delay of 3 between binary positions, we have
- $$1010=10, 1110=14, 0011=3, 0101=5, 1111=15, 0001 = 1, 0010=2, 0111=7,...$$
-
-
-Well, both examples are dealing with exactly the same problem. However, they lead to different sequences.
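-
-(To make Example 1 concrete before going on, here is a small R sketch that reproduces its bit stream and its '4-wise decimation'; the code and variable names are illustrative, not from any of the sources above.)
-## recurrence W(k+4) = W(k) XOR W(k+1), seed 1,0,1,0
-w <- c(1, 0, 1, 0)
-for (k in 1:12) w[k + 4] <- bitwXor(w[k], w[k + 1])
-w[1:15]  # 1 0 1 0 1 1 1 1 0 0 0 1 0 0 1  -- the bit stream quoted above
-## non-overlapping 4-bit chunks read as decimal numbers
-sapply(seq(1, 13, by = 4), function(i) sum(w[i:(i + 3)] * c(8, 4, 2, 1)))
-## 10 15 1 3  -- the start of the decimated sequence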
I don't even know how the second example generates its sequence (it looks like it is taking the first bit sequence $1, 0, 1, 0$ and applying the binary exclusive-or operation to the first two terms, $1 \bigoplus 0 = 1$, which is the first term of the following bit sequence, then taking the second and third terms, $0 \bigoplus 1=1$, which is the second term of the new sequence, and so on). However, I don't know how it gets the last term. Such a pattern works for all the sequences of Example 2, which makes me think that I'm not seeing the full picture.
-Ideas?
-
-REPLY [2 votes]: I just read "Generalized Feedback Shift Register Pseudorandom Number Algorithm" by T. G. Lewis and W. H. Payne.
-I think that paper settles the question I was raising (going to the source, right?). In essence, the question is
-"What is the correct procedure to use the Generalized Feedback Shift Register Algorithm (GFSR)?".
-1.- Start with a sequence and a primitive polynomial $x^{p}+x^{q}+1$.
-For example, $a_{0}=a_{1}=a_{2}=a_{3}=a_{4}=1$ and $x^{5}+x^{2}+1$.
-2.- Elements of the sequence follow $a_{k}=a_{k-p+q}\bigoplus a_{k-p}$ with $k=p, p+1,...$.
-In this example, since we have the first 5 elements of the sequence and according to the polynomial, we are given that $p=5, q=2$.
-Therefore, we can compute the next elements of the sequence
-\begin{matrix} a_{5}=a_{2}\bigoplus a_{0}=0 \\ a_{6}=a_{3}\bigoplus a_{1}=0 \\ a_{7}=a_{4}\bigoplus a_{2}=0
-\\ a_{8}=a_{5}\bigoplus a_{3}=1 \\ a_{9}=a_{6}\bigoplus a_{4}=1 \\ ... \\ \end{matrix}
-So, in this way we construct the rest of the sequence:
-$\{a_{i}\}_{0}^{30}={1111100011011101010000100101100}$
-In order to produce a better random sequence, we apply Kendall's algorithm. Although there are several variations of Kendall's algorithm, the point is to shift the original sequence $1111100011011101010000100|101100$ forwards by 6 bits, that is, $1011001111100011011101010|000100$.
-And again three more times (until we are back at the original sequence). This process gives the following sequence
-\begin{matrix}
-\text{Key} & \text{Sequence} \\
-0 & \|11111\|00011011101010000100|101100\\
-1 & 1011001111100011011101010|000100\\
-2 & 0001001011001111100011011|101010\\
-3 & 1010100001001011001111100|011011\\
-4 & 0110111010100001001011001|111100
-\end{matrix}
-Finally, we take n-tuples (in this example, 5-tuples are used) which are positioned as the columns of a new array:
-\begin{matrix}
-W_{0}: & \|1\|1010 & W_{10}: & 01001& W_{20}: & 00111\\
-W_{1}: & \|1\|0001 & W_{11}: & 10000& W_{21}: & 01111\\
-W_{2}: & \|1\|1011 & W_{12}:& 10110& W_{22}: & 10010\\
-W_{3}: & \|1\|1100 & W_{13}:& 10100& W_{23}: & 01100\\
-W_{4}: & \|1\|0011 & W_{14}:& 01110& W_{24}: & 00101\\
-W_{5}: & 00001 & W_{15}:& 11111& W_{25}: & 10101\\
-W_{6}: & 01101 & W_{16}:& 00100& W_{26}: & 00011\\
-W_{7}: & 01000 & W_{17}:& 11000& W_{27}: & 10111\\
-W_{8}: & 11101 & W_{18}:& 01011& W_{28}: & 11001\\
-W_{9}: & 11110 & W_{19}:& 01010& W_{29}: & 00110
-\end{matrix}
-Each $W_{i}$ is called a 'word'.
-
-Since each column obeys the recurrence $a_{k}=a_{k-p+q}\bigoplus a_{k-p}$, each word must also obey
-$W_{k}=W_{k-p+q}\bigoplus W_{k-p}$.
-
-As far as I know, that's the correct procedure for using the GFSR algorithm.
-Corrections or comments will be appreciated.<|endoftext|>
-TITLE: What requirements should a CRC polynomial satisfy?
-QUESTION [9 upvotes]: What requirements should a CRC polynomial of a given degree satisfy to make the CRC catch a maximum of errors?
-Edit: I'm talking about GF(2) polynomials.
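-For concreteness, what I mean by "the CRC" is the remainder of polynomial long division over GF(2) (a sketch in Python; the degree-3 polynomial $x^3+x+1$ below is just an arbitrary example, not a recommendation):
-def crc_remainder(data_bits, poly_bits):
-    # Divide data(x) * x^deg by the check polynomial over GF(2), MSB first;
-    # the remainder gives the check bits.
-    data = list(data_bits) + [0] * (len(poly_bits) - 1)
-    for i in range(len(data_bits)):
-        if data[i]:
-            for j, p in enumerate(poly_bits):
-                data[i + j] ^= p
-    return data[-(len(poly_bits) - 1):]
-print(crc_remainder([1, 0, 1, 1, 0, 1], [1, 0, 1, 1]))   # poly = x^3 + x + 1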
-As an example of the kind of requirements I'm looking for: I can imagine (but don't know for sure) that a prime polynomial catches more errors than a composite polynomial.
-I'm not a mathematician, so please type slowly :-)
-
-REPLY [6 votes]: Probably the link given by starblue gives you more than enough information. A few general remarks may help give a new reader an overview, so here goes:
-1) An irreducible check polynomial $p(D)\in F_2[D]$ (Edit: of degree $m$) catches all error patterns of weight $\le 2$ provided that the length of the data block + CRC-check is at most the order $k$ of $D$ modulo the polynomial $p(D)$. IOW, $k$ is the smallest positive integer such that $D^k\equiv 1\pmod{p(D)}$. Here the game is to maximize $k$ (maximize the range of usability of this CRC). The maximum $2^m-1$ is reached exactly when $p(D)$ is a so-called primitive polynomial (or: its root generates the multiplicative group of the field $GF(2^m)$).
-2) If you want a guarantee that the CRC catches more than 3 bit errors for certain, then you have to use a product of irreducible polynomials. Typically (but not necessarily) they would have the same degree. If you are familiar with the theory of BCH-codes, then you see that a cyclic code generated by a product of the minimal polynomial $p_1(D)$ of a primitive element $\alpha$ and the minimal polynomial $p_3(D)$ of $\alpha^3$ gives rise to a CRC-polynomial guaranteed to catch all the error patterns of weights $\le4$. BUT the price you pay for this is that the usable length of the CRC-polynomial $p_1(D)p_3(D)$ is only $2^{\deg p_1(D)}-1$, not $2^{\deg p_1(D)p_3(D)}-1$ as you might have hoped. This is because the polynomial $D^{2^m-1}+1, m=\deg p_1(D)$ is divisible by both $p_1(D)$ and $p_3(D)$ and creates "uncatchable" weight-2 patterns if you make the block too long.
-3) Generator polynomials of cyclic codes other than BCH-codes are often used. There are several pairs of irreducible polynomials that give rise to the same guaranteed error-detection probability as the generator polynomial of a BCH-code. Secondary design criteria often tip the scale in favor of these; also, other choices give rise to check polynomials with slightly differing length limits. I have seen generator polynomials of Melas codes and Zetterberg codes used as CRC-polynomials.
-4) You can always make sure that your CRC catches all the odd-weight error patterns by multiplying the polynomial with $1+D$, if you can spare that single extra check bit.<|endoftext|>
-TITLE: $\binom{n}{k} : \binom{n}{k+1} : \binom{n}{k+2} = a : b : c$
-QUESTION [8 upvotes]: It is a rather surprising fact (to me, at least) that $\displaystyle \binom{14}{4} = 1001$; $\displaystyle \binom{14}{5} = 2002$; $\displaystyle \binom{14}{6} = 3003$.
-Actually, this is the only instance where three consecutive binomial coefficients are in the ratio $\displaystyle 1 : 2 : 3$. I found it quite interesting to investigate under what conditions consecutive members of a row of Pascal’s triangle can be in the ratio $\displaystyle a : b : c$, where $\displaystyle a,b,c$ are positive integers with $\displaystyle \mathrm{gcd}(a,b,c) = 1$ and $\displaystyle a < b < c$, except where otherwise stated. That is, only the left-hand side of the triangle will be considered.
-$$\binom{n}{k} : \binom{n}{k+1} : \binom{n}{k+2} = a : b : c$$
-Cancelling and rearranging,
-$\displaystyle b(k + 1) = a(n - k)\qquad$ [1]
-$\displaystyle c(k + 2) = b(n - k - 1)\qquad$ [2]
-$$n = \frac{a(b + c) + c(a + b)}{(b^2 - ac)}$$
-$$k = \frac{a(b + c)}{b^2 - ac} - 1 $$
-Therefore $n$ and $k$ are integers iff $\displaystyle b^2-ac$ divides both $\displaystyle a(b + c)$ and $\displaystyle c(a + b)$.
-$\displaystyle n > 0$ implies $\displaystyle b^2 > ac$
-$\displaystyle k\ge 0$ implies $\displaystyle c \ge \frac{b(b - a)}{2a}$
-Hence a third condition is $\displaystyle \frac{b^2}{a} > c \ge \frac{b(b - a)}{2a}$
-Perhaps the most interesting special case is $\displaystyle c = a + b$, when for $\displaystyle a,b < 100$ there are only five solutions. Namely,
-$\displaystyle (a,b,c) = (1,2,3)$ gives $\displaystyle \{n,k\} = \{14,4\}$
-$\displaystyle (a,b,c) = (3,5,8)$ gives $\displaystyle \{n,k\} = \{103,38\} $
-$\displaystyle (a,b,c) = (8,13,21)$ gives $\displaystyle \{n,k\} = \{713,271\}$
-$\displaystyle (a,b,c) = (21,34,55)$ gives $\displaystyle \{n,k\} = \{4894,1868\}$
-$\displaystyle (a,b,c) = (55,89,144)$ gives $\displaystyle \{n,k\} = \{33551,12814\}$
-That is, there are solutions only when $\displaystyle (a,b,c) = (F(2m), F(2m + 1), F(2m + 2))$, $\displaystyle m = 1,2,3,\ldots$, where $\displaystyle F(m)$ is the $\displaystyle m^{th}$ Fibonacci number.
-Generally,
-$\displaystyle \{n,k\} = \{F(2m + 2)F(2m + 3) - 1, F(2m)F(2m + 3) - 1\} $
-All solutions satisfy $\displaystyle F(2m)n = F(2m+2)k + F(2m+1) $
-Where possible I have been able to derive formulae for $\displaystyle \{n,k\}$ for all special cases I could think of (e.g. $\displaystyle a,b,c$ in arithmetic progression) except for the case $\displaystyle c = a^2$.
-For $\displaystyle a,b < 3000$ there are only three solutions:
-$\displaystyle (a,b,c) = (1,2,1)$ gives $\displaystyle \{n,k\} = \{2,0\}$
-$\displaystyle (a,b,c) = (2,3,4)$ gives $\displaystyle \{n,k\} = \{34,13\}$
-$\displaystyle (a,b,c) = (13,47,169)$ gives $\displaystyle \{n,k\} = \{1079,233\}$
-Letting $\displaystyle c = a^2$ and dividing equation [2] by [1] leads to
-$\displaystyle a(k + 1)(k + 2) = (n - k)(n - k - 1)$
-Rearranging, all solutions satisfy $\displaystyle n^2 - (2k + 1)n - (k + 1)[(a - 1)k + 2a] = 0$
-The discriminant $\displaystyle D$ of the above quadratic is $\displaystyle 4a(k + 1)(k + 2) + 1$, and a necessary and sufficient condition for $\displaystyle n$ to be an integer is that this expression be a perfect square.
-$\displaystyle a = 1, k = 0$ gives $\displaystyle D = 9 = 3^2$
-$\displaystyle a = 2, k = 13$ gives $\displaystyle D = 1681 = 41^2$
-$\displaystyle a = 13, k = 233$ gives $\displaystyle D = 2859481 = 1691^2$
-And my question is: can one prove (or disprove) that there are no more solutions?
-
-REPLY [3 votes]: Sketch:
-Suppose $\binom{n}{k},\binom{n}{k+1},\binom{n}{k+2}$ are in ratio $1:2:3$. Then $\frac{n-k}{k+1}=2$ and $\frac{n-k-1}{k+2}=\frac{3}{2}$. Solving these two equations gives $n=14$ and $k=4$.<|endoftext|>
-TITLE: $|G|>2$ implies $G$ has non-trivial automorphism
-QUESTION [52 upvotes]: Well, this is an exercise problem from Herstein which sounds difficult:
-
-How does one prove that if $|G|>2$, then $G$ has a non-trivial automorphism?
-
-The only thing I know which connects a group with its automorphisms is the theorem, $$G/Z(G) \cong \mathcal{I}(G)$$ where $\mathcal{I}(G)$ denotes the Inner-Automorphism group of $G$.
So for a group with $Z(G)=(e)$, we can conclude that it has a non-trivial automorphism, but what about groups with non-trivial center?
-
-REPLY [6 votes]: The other two answers assume the axiom of choice:
-
-Arturo Magidin uses choice when he forms the direct sum ("...it is isomorphic to a (possibly infinite) sum of copies of $C_2$...")
-HJRW uses choice when he fixes a basis (the proof that every vector space has a basis requires the axiom of choice).
-
-If we do not assume the axiom of choice then it is consistent that there exists a group $G$ of order greater than two such that $\operatorname{Aut}(G)$ is trivial. This is explained in this answer of Asaf Karagila.<|endoftext|>
-TITLE: Prove that $\prod_{k=1}^{n-1}\sin\frac{k \pi}{n} = \frac{n}{2^{n-1}}$
-QUESTION [74 upvotes]: Using the $n^{\text{th}}$ roots of unity
-$$\large\left(e^{\frac{2ki\pi}{n}}\right)^{n} = 1$$
-Prove that
-$$\prod_{k=1}^{n-1}\sin\frac{k \pi}{n} = \frac{n}{2^{n-1}}$$
-
-REPLY [18 votes]: Consider $z^n=1$; each root is
-$$\xi_k = \cos\frac{2k\pi}{n} + i\sin\frac{2k\pi}{n} = e^{i\frac{2k\pi}{n}}, \quad k=0,1,2,...,n-1 $$
-So, we have
-$$ z^n -1 = \prod_{k=0}^{n-1}(z-\xi_k)$$
-$$\Longrightarrow (z-1)(z^{n-1}+...+z^2+z+1) = (z-\xi_0)\prod_{k=1}^{n-1}(z-\xi_k)$$
-$$\Longrightarrow (z-1)(z^{n-1}+...+z^2+z+1) = (z-1)\prod_{k=1}^{n-1}(z-\xi_k)$$
-$$\Longrightarrow z^{n-1}+...+z^2+z+1 = \prod_{k=1}^{n-1}(z-\xi_k)$$
-By substituting $z=1$, $$\Longrightarrow n = \prod_{k=1}^{n-1}(1-\xi_k) $$
-Next, take the modulus on both sides,
-$$ |n| = n = |\prod_{k=1}^{n-1}(1-\xi_k)| = \prod_{k=1}^{n-1}|(1-\xi_k)|$$
-$$ 1 - \xi_k = 1-(\cos\frac{2k\pi}{n} + i\sin\frac{2k\pi}{n}) = 2\sin\frac{k\pi}{n}(\sin\frac{k\pi}{n} -i\cos\frac{k\pi}{n})$$
-$$ |1 - \xi_k| = 2\sin\frac{k\pi}{n} $$
-So,
-$$ n = 2^{n-1}\prod_{k=1}^{n-1}\sin\frac{k\pi}{n}$$
-$$\prod_{k=1}^{n-1}\sin\frac{k\pi}{n} = \frac{n}{2^{n-1}} $$<|endoftext|>
-TITLE: A normal subgroup intersects the center of the $p$-group nontrivially
-QUESTION [31 upvotes]: If $G$ is a finite $p$-group with a nontrivial normal subgroup $H$, then the intersection of $H$ and the center of $G$ is not trivial.
-
-REPLY [2 votes]: Can we try induction on $n$ by taking quotients like $G/Z$, where $Z$ is the center?
-More technically, it goes like this.
-We take the quotient group $G/Z$ where $Z$ is the center; as $G$ is a $p$-group, $Z$ is non-trivial. Hence $o(G/Z)=o(G)/o(Z)$ is less than $o(G)$, which makes it ideal for an application of induction. Now look at the set $S=\{Zx \mid x\in N\}$. My claim is that this set is a subgroup, and as a matter of fact a normal subgroup. Indeed $(Zx_1)(Zx_2)=Zx_1x_2$, and hence the closure and inverse properties are immediate. Again, for any $x \in G$ and $n \in N$, $(Zx)(Zn)(Zx)^{-1}=Zxnx^{-1}$ and, $N$ being normal, that belongs to $S$. Hence $S$ is a normal subgroup of $G/Z$. Now, due to the non-triviality of $Z$, we can apply induction to claim that the intersection of $S$ with the center of $G/Z$ is non-trivial. That is, there is an $a \in N$, not in $Z$, such that $Zax=Zxa$ for all $x \in G$. Hence there is an $a \in N$ such that $axa^{-1}x^{-1}$ belongs to $Z$ for all $x \in G$. Now, as $a^{-1}$ belongs to $N$, clearly $xa^{-1}x^{-1}$ is in $N$ and thus $axa^{-1}x^{-1}$ belongs to $N$. But if $N$ and $Z$ have trivial intersection, then $ax=xa$ for all $x \in G$, which makes $a \in Z$, a contradiction.<|endoftext|>
-TITLE: Degree of $\sqrt{2}+\sqrt[3]{5}$ over $\mathbb{Q}(\sqrt{2})$ and $\mathbb{Q}(\sqrt[3]{5})$
-QUESTION [9 upvotes]: I'm self-studying field extensions.
I ran into an exercise which I can't completely solve. (I haven't yet started studying Galois theory, and I think this exercise isn't meant to be solved using it, just in case):
-The problem is:
-
-a) Prove $\sqrt{2}+\sqrt[3]{5}$ is algebraic over $\mathbb{Q}$ of degree 6.
-
-Done: I know it has degree $\leq 6$ because $\mathbb{Q}\subset \mathbb{Q}(\sqrt{2}+\sqrt[3]{5})\subset \mathbb{Q}(\sqrt{2},\sqrt[3]{5})$ which has degree 6; then I explicitly found the polynomial by solving a 6-equation linear system, and Wolfram Alpha proved it irreducible (btw: how can I prove it by hand?). The polynomial is $t^6-6t^4-10t^3+12t^2-60t+17$.
-
-b) What's its degree over $\mathbb{Q}(\sqrt{2})$ and $\mathbb{Q}(\sqrt[3]{5})$?
-
-It is this part b) which I can't solve. Of course its degree is $\leq 6$ in both cases, but I don't know what else to do.
-
-REPLY [4 votes]: For part b, we can find the minimal polynomial in both cases.
-Consider $a =\sqrt{2} + \sqrt[3]{5}$ over $\mathbb{Q}(\sqrt[3]{5})$. Notice that
-$ (x - \sqrt[3]{5})^2 -2 = 0$
-is a degree two polynomial in $\mathbb{Q}(\sqrt[3]{5})[x]$ that $a$ is a root of. We know that $a \notin \mathbb{Q}(\sqrt[3]{5})$, so the degree of its minimal polynomial must be greater than 1, and now we have a polynomial of degree 2 that it is a root of, so its minimal polynomial must be the one above. Therefore $a$ has degree two over $\mathbb{Q}(\sqrt[3]{5})$.
-Similarly, consider $a$ over $\mathbb{Q}(\sqrt{2})$. Notice that
-$ (x - \sqrt{2})^3 - 5 = 0 $
-is a degree 3 polynomial in $\mathbb{Q}(\sqrt{2})[x]$ that $a$ is a root of, and hence the minimal polynomial, which must divide this polynomial, can have degree either 2 or 3. But, if it has degree two, then it is attained from the above one by dividing by $x - a$. We get by polynomial division that
-$ (x - \sqrt{2})^3 - 5 = (x - \sqrt{2} - \sqrt[3]{5})(2 - \sqrt{2} \sqrt[3]{5} + \sqrt[3]{25} + (-2\sqrt{2} + \sqrt[3]{5})x + x^2) $
-and $2 - \sqrt{2} \sqrt[3]{5} + \sqrt[3]{25} + (-2\sqrt{2} + \sqrt[3]{5})x + x^2 \notin \mathbb{Q}(\sqrt{2})[x]$ because the $x$ coefficient is not in $\mathbb{Q}(\sqrt{2})$. If it were, then as $2\sqrt{2} \in \mathbb{Q}(\sqrt{2})$, we would get that
-$2 \sqrt{2} + (-2\sqrt{2} + \sqrt[3]{5}) = \sqrt[3]{5} \in \mathbb{Q}(\sqrt{2})$
-which is impossible. Thus that quadratic polynomial cannot be the minimal polynomial for $a$. Hence, the minimal polynomial of $a$ has degree 3.<|endoftext|>
-TITLE: Are the smooth functions dense in either $\mathcal L_2$ or $\mathcal L_1$?
-QUESTION [10 upvotes]: Is the subset consisting of all integrable (or square integrable) smooth functions of the set of all integrable (or square integrable) functions dense under the usual Euclidean or integral of absolute difference metric?
-By smooth I mean derivatives of all orders exist.
-
-REPLY [17 votes]: Jonas's argument is good. Another proof is: given $f \in L^p$ (here $p=1,2$), take the convolution of $f$ with a sequence of mollifiers $\eta_\epsilon$. Using properties of convolutions, it's easy to check that $f * \eta_\epsilon$ is a smooth function, and that $f * \eta_\epsilon \to f$ in $L^p$ as $\epsilon \to 0$. This has the advantage of being a little more direct.
-Edit: For a reference, see Folland's Real Analysis, section 8.2.
-The smoothness of $f * \eta_\epsilon$ is Proposition 8.10 and comes from differentiating under the integral sign in the convolution (with justification!), and choosing to put the derivative on $\eta_\epsilon$.
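-(A quick numerical illustration of the smoothing, as a sketch of my own with numpy, not from Folland; the mollifier is the standard bump and the grid convolution only approximates the integral:)
-import numpy as np
-x = np.linspace(-2, 2, 2001); dx = x[1] - x[0]
-f = np.abs(x)                        # continuous, but has a kink at 0
-eps = 0.3
-s = x[np.abs(x) < eps] / eps
-eta = np.exp(-1.0 / (1.0 - s**2))    # C^infinity bump supported in (-eps, eps)
-eta /= eta.sum() * dx                # normalize: discrete integral = 1
-g = np.convolve(f, eta, mode="same") * dx    # approximates f * eta_eps
-# Second differences stay bounded for g, but blow up like 2/dx for f at the kink:
-print(abs(np.diff(g, 2)).max() / dx**2, 2 / dx)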
Intuitively, it comes from the idea that convolution is an "averaging" operation and tends to smooth, smear, or blur rough areas of $f$ together, and so should be a smoothing operation. (The wikipedia article has a nice animation illustrating this.) -The fact that $f * \eta_\epsilon \to f$ in $L^p$ is Folland's Theorem 8.14 (a), and it's pretty elementary. He also has Proposition 8.17 which proves that $C^\infty_c$ is dense in $L^p$, but it sort of inexplicably starts by using the fact that $C_c$ is dense in $L^p$. I suppose this is used to get a compactly supported function, so that you can approximate $f \in L^p$ by functions which are not only smooth (which $f * \eta_\epsilon$ is) but also compactly supported (which $f * \eta_\epsilon$ need not be, although $\eta_\epsilon$ is). But an easier argument would be to first approximate $f$ in $L^p$ norm by a function $g$ which is compactly supported but not necessarily continuous; for example, $g = f 1_{[-N,N]}$ for large $N$ (this works by dominated convergence), and then apply mollifiers to $g$. Unless, of course, there is some subtlety that I've missed. -Edit 2: Indeed there is. Folland's 8.14 (a) relies upon the fact that translation is strongly continuous in $L^p$, which uses the density of $C_c$. So apparently it is not so easy to bypass this step, and that destroys a lot of the "directness" of my argument.<|endoftext|> -TITLE: Expected number of (0,1) distributed continuous random variables required to sum upto 1 -QUESTION [7 upvotes]: I define $X_i$ as a random variable that is uniformly distributed between (0,1). What is the expected number of such variables I require to make the sum go just higher than 1. -Thanks - -REPLY [5 votes]: This problem is well-known. For an answer, see, for example, the first part of Section 2 in http://myweb.facstaff.wwu.edu/curgus/Papers/27Unexpected.pdf, or Equations (7)-(10) in http://mathworld.wolfram.com/UniformSumDistribution.html<|endoftext|> -TITLE: On the Math Mindset -QUESTION [18 upvotes]: I am a soon-to-be high school graduate with inclination towards mathematics. I enjoy doing math, and am relatively good at it, but I dislike the way I am being taught. I feel like I am being taught methods for solving a lot of problems, but I don't see how any of this fits together. -I was considering pursuing a career in mathematics, but I realized I don't really know much about it as a whole as opposed to how to solve specific problems that will appear on some test. -Are there any good books out there about the "math mindset", or whatever it is that it's called? - -REPLY [4 votes]: Here are some further classics that you may find inspiring and enlightening: Polya: How to solve it; Rademacher and Toeplitz: The enjoyment of mathematics; Kac and Ulam: Mathematics and Logic.<|endoftext|> -TITLE: There is a 5 by 5 matrix of points on a plane. How many triangles can be formed using points on this matrix? -QUESTION [5 upvotes]: There is a 5 by 5 matrix of points on a plane. How many triangles can be formed using points on this matrix? - -REPLY [7 votes]: This is the obvious approach (to me, at least), and is mostly a matter of bookkeeping. To begin with, there are ${{5^2} \choose 3}$ sets of three points. However, some of these are colinear, which we don't want to count. -There are $2 \times 5 \times {5 \choose 3}$ horizontal or vertical sets of three colinear points. E.g. - .xxx. ..... - ..... ...x. - ..... ...x. - ..... ..... - ..... ...x. 
-
-There are $3+3+3+3$ sets of three colinear points with rise/run $\in \{\pm 2, \pm 1/2\}$.
- x.... ..x..
- ..x.. .....
- ....x ...x.
- ..... .....
- ..... ....x
-
- ...x. .....
- ..... ....x
- ..x.. ..x..
- ..... x....
- .x... .....
-
-There are $2 \times {5 \choose 3}$ sets of three colinear points with rise/run $\in \{\pm 1\}$ along the main diagonal or main anti-diagonal.
- x.... .....
- .x... ...x.
- ..x.. ..x..
- ..... .....
- ..... x....
-
-There are $4 \times {4 \choose 3}$ sets of three colinear points along the "shunted" main diagonal and main anti-diagonal (is there a better name for these diagonals?).
- .x... ...x. ..... .....
- ..... ..x.. x.... .....
- ...x. ..... .x... ...x.
- ....x x.... ..x.. ..x..
- ..... ..... ..... .x...
-
-Finally, there are these $4$ remaining:
- ..x.. ..x.. ..... .....
- .x... ...x. ..... .....
- x.... ....x x.... ....x
- ..... ..... .x... ...x.
- ..... ..... ..x.. ..x..
-
-So there are \[{{5^2} \choose 3}-2 \times 5 \times {5 \choose 3}-(3+3+3+3)-2{5 \choose 3}-4{4 \choose 3}-4=2148\] triangles that can be formed. I also checked this answer with some code in GAP.
-[Here, I have assumed that you want to count congruent triangles separately.]<|endoftext|>
-TITLE: Explicit solutions to this nonlinear system of two differential equations
-QUESTION [5 upvotes]: I am interested in a system of differential equations that is non-linear, but it doesn't seem to be too crazy. I'm not very good at non-linear stuff, so I thought I'd throw it out there.
-The actual equations I'm looking at have several parameters that I'd like to tweak eventually.
-q' = k - m / r
-r' = i - n r - j q
-
-i, j, k, m and n are all real-valued constants. I'm guessing that this system would be cyclical in nature, but I'm not sure if it has any explicit solution, so I have produced a version of it with the constants removed to see if that can be solved:
-q' = 1 - 1 / r
-r' = 1 - r - q
-
-Anyone know if either of these are solvable and what kind of techniques would be needed to solve them if so?
-The first equation is based on a polar coordinate system where Q (or theta) is the angle and r is the radius, and I've made a number of simplifications to make it somewhat tractable.
-
-REPLY [5 votes]: Taking that second system,
-$r' = i - nr - jq$
-and differentiating gives
-$r'' = -nr' - jq' = -nr' - j(k-\frac{m}{r})$
-or in other words
-$r'' + ar' + \frac{b}{r} = c$
-which is a much simpler differential equation in only one variable. I think that you could probably solve this with power series or clever guessing, but it needs to be worked out.<|endoftext|>
-TITLE: Duality with a stochastic matrix
-QUESTION [6 upvotes]: If I have a stochastic matrix $X$: the sum of each row is equal to $1$ and all elements are non-negative.
-Given this property, how can I show that:
-$x'X=x'$ , $x\geq 0$
-has a non-zero solution?
-I'm assuming this has something to do with proving a feasible dual, but I may be wrong...
-
-REPLY [7 votes]: Here's the linear programming approach (with considerable help from another user, Fanfan). Consider the LP
-$$\min 0^T y$$
-subject to
-$$(X - I)y \geq 1,$$
-where $0$ and $1$ are, respectively, vectors containing all 0's and all 1's.
-By Fanfan's answer to my question this LP is infeasible. Thus its dual,
-$$\max 1^T x$$
-subject to
-$$x^T (X-I) = 0^T,$$
-$$x \geq 0,$$
-is either infeasible or unbounded. But $x = 0$ is a solution, and so this dual problem is feasible. Thus it must be unbounded. But that means there must be a nonzero solution $x_1$ in its feasible region.
Thus we have $x_1^T X = x_1^T$ with $x_1 \geq 0$, $x_1 \neq 0$.
-(Fanfan's answer to my question also includes another answer to your question - one that uses Farkas's Lemma rather than LP duality. It ends up being quite similar to my answer here, as, of course, Farkas's Lemma and LP duality are basically equivalent.)<|endoftext|>
-TITLE: What are the prerequisites for learning category theory?
-QUESTION [26 upvotes]: Is category theory worth learning for the sake of learning it? Can it be used in applied mathematics/probability? I am currently perusing Categories for the Working Mathematician by Mac Lane.
-
-REPLY [12 votes]: It depends on whether you are talking about Category Theory as a topic in mathematics (on a par with Geometry or Probability) or Category Theory as a viewpoint on mathematics as a whole.
-If the former, the main prerequisite is that you should have encountered a situation where you wanted to move from one type of "thing" to another type of "thing": say from a group to its group ring, or from a space to its ring of functions, or from a manifold to its differential graded algebra.
-If the latter, then there are no prerequisites and it is a Very Good thing to do! But if the latter, then reading Mac Lane isn't necessarily the best way to go. However, I'm not sure if there is a textbook (or other) that tries to teach elementary mathematics (of any flavour) from a categorical viewpoint. I try to teach this way, but I've not written a textbook! I wrote a bit more on this in response to a question on MO, I copied my answer here.<|endoftext|>
-TITLE: Let $f$ be a continuous but nowhere differentiable function. Is $f$ convolved with a mollifier a smooth function?
-QUESTION [5 upvotes]: Let $f$ be a continuous but nowhere differentiable function. Is $f$ convolved with a mollifier a smooth function?
-
-REPLY [5 votes]: Yes.
-The key is that when you (as you say in the comments) get the two scenarios:
-$$
-f \star (D g) = D(f \star g) = (D f) \star g
-$$
-then you get to choose which!
-So if $D f$ doesn't make sense, then you can ignore it and choose to use the identity $D(f \star g) = f \star (D g)$.
-Taking this to the extreme, you get the - bizarre, in my opinion - result that if $p$ is a polynomial, then $f \star p$ is always a polynomial.<|endoftext|>
-TITLE: Positive integers $k = p_{1}^{r_{1}} \cdots p_{n}^{r_{n}} > 1$ satisfying $\sum_{i = 1}^{n} p_{i}^{-r_{i}} < 1$
-QUESTION [10 upvotes]: A divisor $d$ of $k = p_{1}^{r_{1}} \cdots p_{n}^{r_{n}}$ is unitary if and only if $d = p_{1}^{\varepsilon_{1}} \cdots p_{n}^{\varepsilon_{n}}$, where each exponent $\varepsilon_{i}$ is either $0$ or $r_{i}$. Let $D_{k} = \{ d \}$ be the subset of unitary divisors of an integer $k > 1$ satisfying $\omega(d) = \omega(k) - 1$.
-Definition. A positive integer $k = p_{1}^{r_{1}} \cdots p_{n}^{r_{n}} > 1$ is hyperbolic if and only if $\sum_{i = 1}^{n} p_{i}^{-r_{i}} < 1$ or, equivalently, $\sum_{d \in D_{k}} d < k$.
-See my OEIS entry.
-For example, $3$, $10$ and $20$ are hyperbolic, but $30$ and $510510 = 2 \cdot 3 \cdot 5 \cdot 7 \cdot 11 \cdot 13 \cdot 17$ are not.
-Assuming my calculations are correct, I'll state the following with some confidence:
-Indeed, many positive integers are hyperbolic. Of the first $10^{8}$ integers, $70334760$ are hyperbolic. Non-trivial prime powers, squares or higher, are also hyperbolic.
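-(Here is the check I used, for what it's worth; a sketch in Python, where sympy's factorint is just one convenient way to get the factorization:)
-from fractions import Fraction
-from sympy import factorint
-def is_hyperbolic(k):
-    # k = p1^r1 ... pn^rn is hyperbolic iff sum of 1/pi^ri < 1 (exact arithmetic)
-    return sum(Fraction(1, p**r) for p, r in factorint(k).items()) < 1
-print([k for k in range(2, 31) if not is_hyperbolic(k)])   # [30], as in the examples above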
If $k$ is a hyperbolic integer, then so are its proper, non-trivial unitary divisors; however, the same cannot be inferred for all divisors (consider the hyperbolic integer $900$ and its non-hyperbolic divisor $30$). In fact, an arbitrary product of any number of non-trivial unitary divisors of a hyperbolic integer is again hyperbolic. -The set of hyperbolic integers is closed under exponentiation, but not under addition (e.g., $10 + 20 = 30$) or under multiplication (e.g., $3 \times 10 = 30$). -Define the Prime zeta function, $\zeta_{P}(s) = \sum_{p \text{ prime}} p^{-s}$, which converges absolutely for $\mathsf{Re}(s) > 1$. Recall that the multiplicity of a prime divisor $p$ of $k$ is the largest exponent $r$ such that $p^{r}$ divides $k$ but $p^{r+1}$ does not. If the minimum multiplicity of an integer $k$ is $2$ or greater, then $k$ is hyperbolic as can be seen by the elementary bound -\begin{eqnarray} -\sum_{i = 1}^{n} p_{i}^{-r_{i}} < \zeta_{P}(2) \approx 0.452247 .... -\end{eqnarray} -Thus, the question of hyperbolicity is non-trivial only for integers with minimum multiplicity $1$. -Numerical evidence suggests that the natural density of the hyperbolic integers is greater than $0.988284 \dots$, and I conjecture that almost all integers are indeed hyperbolic (i.e., the natural density is 1). -Question: Is anything presently known about such integers? (References welcome!) -Question: Is there a simple proof showing (or refuting) that almost all integers are hyperbolic? -Thanks! - -REPLY [4 votes]: [[EDIT: Let $ k = p_1^{r_1} \cdots p_n^{r_n} $ as in your post. Then define the hyperbolicity $ H(k) = \sum_{i=1}^n p_i^{-r_i} $.]] -We can refute the idea in your second question in a much more general manner than you might expect. Note that since $ \zeta_P(1) $ diverges, numbers can have arbitrarily large hyperbolicity; similarly, since $ H(2^n) = 2^{-n} $, it follows that numbers can have arbitrarily small hyperbolicity. Define $ d(x) $ to be the asymptotic density of natural numbers satisfying $ H(n) > x $. We will demonstrate the strict inequality $ d(x) > 0 $ holds for all $ x > 0 $, and even deduce a satisfying lower bound for sufficiently, erm... not-small $x $. -Fix $ x \in (0,\infty) $. Let $ k $ be the smallest natural number such that $ 1/2 + 1/3 + 1/5 + \dots + 1/p_k > x $. Then for $ n = 2(3)(5)\cdots (p_k) $, $ H(n) > x $. Note that hyperbolicity, taken as an arithmetic function, is multiplicative, though not completely. In fact, by categorizing which prime factors belong where and when they're duplicated, one can see the following formula: -$$ H(ab) = H(a) + H(b) - H(\gcd(a,b)^2) + H(\gcd(a,b)) $$ -This isn't necessary for our argument, I'm just putting that out there for posterity. -At any rate, we may conclude that for any number $ m $ which is coprime with $ n $, we can obtain another sufficently hyperbolic number via multiplication: $ H(nm) > -H(n) > x $. Note that the density of numbers coprime with $ n $ can be expressed as -$$ y = (1-1/2)(1-1/3)(1-1/5)\cdots (1-1/p_k). $$ -This formula echoes the Sieve of Eratosthenes: it follows from the fact that the probability a number is divisible by a prime is independent of the probability it is divisible by a different prime. -Finally, if you take the set of all numbers $ m $ coprime with $n $, and multiply them all by $ n $, they'll become $ n $ times more sparse, hence the density of all numbers of the form $ n m $ is $ y / n $. 
This proves $ d(x) \ge y/n $, which is a pretty good lower bound so long as $ x $ isn't too small (this is my subjective impression). For $ x = 1 $, as in your original question, we have the close lower bound $ (1/30)(1/2)(2/3)(4/5) $ or about 0.89%.
-So far I haven't been able to determine whether or not there exists any $ \epsilon > 0 $ such that $ d(\epsilon) = 1 $. If there is one, then there would exist a maximum $ u $ such that $ d(\epsilon) = 1 $ for any $ \epsilon \in (0,u) $, which would make an interesting new constant. Note that $ d(x)$ is decreasing, though I'm not sure if it's continuous. What would really be interesting is if $ d(x) $ were differentiable on some interval.<|endoftext|>
-TITLE: Continued Fraction expansion of $\tan(1)$
-QUESTION [11 upvotes]: Prove that the continued fraction of $\tan(1)$ is $[1;1,1,3,1,5,1,7,1,9,1,11,...]$. I tried using the same sort of trick used for finding continued fractions of quadratic irrationals and trying to find a recurrence relation, but that didn't seem to work.
-
-REPLY [12 votes]: We use the formula given here: Gauss' continued fraction for $\tan z$ and see that
-$$\tan(1) = \cfrac{1}{1 - \cfrac{1}{3 - \cfrac{1}{5 -\dots}}}$$
-Now use the identity
-$$\cfrac{1}{a-\cfrac{1}{x}} = \cfrac{1}{a-1 + \cfrac{1}{1 + \cfrac{1}{x-1}}}$$
-To transform $$\cfrac{1}{a - \cfrac{1}{b - \cfrac{1}{c - \dots}}}$$ to
-$$\cfrac{1}{a-1 + \cfrac{1}{1 + \cfrac{1}{b-2 + \cfrac{1}{1 + \cfrac{1}{c-2 + \dots}}}}}$$
-to get the expansion for $\displaystyle \tan(1)$.
-The above expansion for $\tan(1)$ becomes
-$$ \cfrac{1}{1-1 + \cfrac{1}{1 + \cfrac{1}{3-2 + \cfrac{1}{1 + \cfrac{1}{5-2 + \dots}}}}}$$
-$$ = 1 + \cfrac{1}{3-2 + \cfrac{1}{1 + \cfrac{1}{5-2 + \dots}}}$$
-$$= 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{3 + \cfrac{1}{1 + \cfrac{1}{5 + \dots}}}}}$$
-To prove the transformation,
-let $\displaystyle x = b - \cfrac{1}{c - \dots}$
-Then
-$$ \cfrac{1}{a-\cfrac{1}{x}} = \cfrac{1}{a-1 + \cfrac{1}{1 + \cfrac{1}{x-1}}}$$
-$$ = \cfrac{1}{a-1 + \cfrac{1}{1 + \cfrac{1}{b-1 - \cfrac{1}{c - \dots}}}}$$
-Applying the identity again to
-$$\cfrac{1}{b-1 - \cfrac{1}{c - \dots}}$$
-we see that
-$$\cfrac{1}{a-\cfrac{1}{x}} = \cfrac{1}{a-1 + \cfrac{1}{1 + \cfrac{1}{b-2 + \cfrac{1}{1 + \cfrac{1}{c-1 - \cfrac{1}{d - \dots}}}}}}$$
-Applying again to $\cfrac{1}{c-1 - \cfrac{1}{d - \dots}}$ etc. gives the required CF.<|endoftext|>
-TITLE: Number of common divisors between two given numbers
-QUESTION [11 upvotes]: How can I compute the number of common divisors of two naturals?
-For example, if we consider (12,24) the answer is 6, i.e. {1, 12, 2, 6, 3, 4}.
-EDIT: I got an answer here. The solution boils down to finding the number of divisors of the GCD of the two numbers.
-
-REPLY [4 votes]: In response to user2969:
-First you need to compute the gcd; it is always best to use the Euclidean algorithm for this. My C++ implementation:
-int E_GCD(int a, int b) {
-    // Euclidean algorithm: gcd(a, b) = gcd(b, a mod b), with gcd(a, 0) = a.
-    return b ? E_GCD(b, a % b) : a;
-}
-
-Now we need to compute the number of divisors of N = E_GCD(A,B). This can be done efficiently by first computing the factorization using any fast algorithm, like Pollard rho integer factorization or ECM, and then using the standard number theory $\text{trick}^*$ for finding the number of factors. However, if in this problem the values of A and B are small (say $0 \lt A,B \le 10^6$), then we don't really need any such sophisticated algorithm; instead you can build up your own algorithm based simply on the fact that the factors of a number occur in pairs, that is, if $i$ is a factor of $N$ then so is $N/i$. Using this key observation I built up the algorithm; here is my C++ implementation of this idea:
-int ans = 0;
-int N = E_GCD(A, B);
-int sqt = (int)sqrt((double)N);
-for (int i = 1; i <= sqt; i++) {
-    if (N % i == 0) {
-        ans += 2;               // count the pair (i, N/i)
-        if (i == N / i) ans--;  // i = sqrt(N): don't count it twice
-    }
-}
-printf("%d\n", ans);
-
-Notice that in each divisor pair $(i, N/i)$ the smaller member is at most the square root of $N$; the rest of the code is, I believe, self-explanatory.
-*If $N = a^p \cdot b^q \cdot c^r \cdots$, where $a, b, c, \ldots$ are distinct primes, the number of factors of $N$ is $(p+1) \cdot (q+1)\cdot(r+1)\cdots$<|endoftext|>
-TITLE: Permutations of a set that keep at least one integer fixed
-QUESTION [10 upvotes]: What is the number of permutations of a set, say {1, 2, 3, 4, 5}, that keep at least one integer fixed?
-
-REPLY [11 votes]: This is equal to $n!$ minus the number of permutations which keep no elements fixed. These are known as derangements and you can find all the relevant formulas at the Wikipedia article. You can count them, for example, using inclusion-exclusion.
-If you just want the number of derangements of an $n$-element set, then a nice way to compute it is to round $\frac{n!}{e}$ to the nearest integer. For example, when $n = 5$ we want to round $\frac{120}{e} \approx 44.1$, so there are $44$ derangements, hence $120 - 44 = 76$ permutations which fix at least one element.<|endoftext|>
-TITLE: The 2nd part of the "Fundamental Theorem of Calculus."
-QUESTION [5 upvotes]: The 2nd part of the "Fundamental Theorem of Calculus" has never seemed as earth shaking or as fundamental as the first to me. Why is it "fundamental" -- I mean, the mean value theorem, and the intermediate value theorems are both pretty exciting by comparison. And after the joyful union of integration and the derivative that we find in the first part, the 2nd part just seems like a yawn. So, what am I missing?
-To be clear I'm talking about this:
-
-Let ƒ be a real-valued function defined on a closed interval [a, b] that admits an antiderivative F on [a,b]. That is, ƒ and F are functions such that for all x in [a, b],
-$f(x) = F'(x)$
-If ƒ is integrable on [a, b] then
-$\int_a^b f(x)dx = F(b) - F(a).$
-
-I've been through the proof a few times. It makes sense to me. But, it didn't help me to see the light. To me it just looks like "OK here is how you do the definite integral." Which doesn't seem like such a big deal, especially when indefinite integrals can be more interesting.
-
-REPLY [8 votes]: The names "first" and "second" for the two parts of the theorem are meaningless. More correct names would be existence and uniqueness. It also is not unreasonable to separate the uniqueness statement from the formula relating definite integrals to antiderivatives, which is an algebraic consequence of the (analytic) uniqueness statement.
The formula could be considered as a third part of the theorem, but numbering pieces of a theorem in a particular order is an uninformative nomenclature -- as is calling theorems "fundamental".
-The fundamental theorem of calculus asserts existence and uniqueness of antiderivatives (solutions of the differential equation $y' = f(x)$ with given value of $y(x_0)$ at one point). Apart from purely logical considerations there are several reasons the uniqueness theorem is important.
-
-Indefinite integrals of the form $\int_p^x f(t) dt$, which are what appear in most presentations of the existence part of the theorem, in some cases do not account for all antiderivatives of $f(x)$ as the basepoint $p$ is varied over all real numbers.
-In the more precise presentation $y(x) = y(a) + \int_a^x f(t) dt$ there is still the possibility that other processes, even more magical than integration, might be related to anti-differentiation. So it is of interest to either find these exotic species, or show that integrals give everything.
-An explicit analysis of uniqueness becomes more pressing when integrating functions with singularities, as in $\int dx/|x|^p$ for $p=1$ and $p=1/2$ (the number of integration constants changes, so this is needed for writing down solution formulas in full generality).
-The algebraic formula implied by uniqueness, $\int_a^b f = F(b)-F(a)$, is important both as a means of computing integrals and as the basis of the notation supporting changes of integration variable (substitutions).<|endoftext|>
-TITLE: Does it ever make sense NOT to go to the most prestigious graduate school you can get into?
-QUESTION [64 upvotes]: I'm a senior undergrad at a top-ish (say, top 15) math school. I'm a solid, not stellar, student. This year I'm taking the qualifying exam grad courses in algebra and analysis and have been taken aback by the "pressure cooker" atmosphere among grad students here. That is, even more so than in the undergraduate program.
-If I'm self driven, could going to a "less prestigious" school afford me more space (I mean in a psychological sense) to produce a more solid contribution to math? By "less prestigious", I mean a school "ranked" significantly lower than the range of schools that I could comfortably get into. For me, "less prestigious" would be ranked around 40-60 on, say, USNews or NRC.
-My reasoning is that at such a school, I would be more able to learn the fundamentals at my own pace, as opposed to a pace dictated to me by the program. I know I want to do math, and I think my learning style may be better suited to going at my own pace. Thoughts?
-
-REPLY [4 votes]: There aren't anywhere near enough faculty positions for all the PhDs we churn out, not even close. Academia might dramatically contract over the next couple of decades even.
-Industry knows almost nothing about your PhD subject, but they recruit heavily amongst the Ivy League graduates. Also, one finds vastly more parochialism amongst the smaller schools.
-Imho, you should select the best school possible because (a) it'll make you a broader, more flexible mathematician and (b) it'll maximize your chances of doing the most interesting mathematics you can if/when you leave academia for industry.
-You should however realize that school rankings aren't the entire story. There are schools that seriously overwork their teaching assistants, like Purdue for example.
Avoid such places even if it costs you institutional prestige.<|endoftext|>
-TITLE: Why is the following evaluation of Apery's Constant wrong and do you have suggestions on how, if at all, this method could be improved?
-QUESTION [9 upvotes]: Please let me summarize the method by which L. Euler solved the Basel Problem and how he found the exact value of $\zeta(2n)$ up to $n=13$. Euler used the infinite product
-$$
-\displaystyle f(x) = \frac{\sin(x)}{x} = \prod_{n=1}^{\infty} \Big(1-\frac{x^2}{n^2\pi^2}\Big) ,
-$$
-Newton's identities and the (Taylor) Series Expansion (at $x=0$) of the sine function divided by $x$ to arrive at
-$$
-1 - \frac{x^2}{\pi^2} \cdot (1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + ... + \frac{1}{n^2}) + x^4(...) = 1 - \frac{x^2}{6} + \frac{x^4}{120} - ...
-$$
-Upon subtracting 'one' from both sides, equating the $x^2$ terms to each other and multiplying both sides by $ - \pi^2$, one finds that
-$$
-\zeta(2)=\frac{\pi^2}{6}.
-$$
-When I first saw this proof and the way it was extended to find the values of the other even zeta-constants, I couldn't help thinking: "How could this method be strengthened to find the values of the odd zeta-constants?" (And, a little while later, "why hasn't this been done before?")
-I started looking for a similar-looking infinite product, only now I focussed on one of the form
-$$
-\displaystyle f(x) = \prod_{n=1}^{\infty} \Big(1-\frac{x^a}{n^3 \cdot q}\Big)
-$$
-(for some $ a \in \mathbb{N} , q \in \mathbb{R} $). A little while later I stumbled upon this website and fixated my eyeballs on equation (27). If we take $n=3$, Prudnikov et al. tell us that
-$$
-\prod_{n=1}^{\infty} \Big(1-\frac{x^3}{n^3}\Big) = - \frac{1}{x^3} \cdot \prod_{k=1}^{2} \frac{1}{\Gamma(-e^{2/3 \pi i k} \cdot x)}.
-$$
-Now, I thought that if we could use Newton's Identities again on the left side of the equation and find out what the Taylor Series Expansion of the right-hand side would be, we could find out what the exact value of Apery's Constant and other odd zeta-constants would be. In this answer by Robert Smith, I was told the Series Expansion. So we have
-$$
-1 - x^3(1 + \frac{1}{8} + \frac{1}{27} + ... + \frac{1}{n^3}) = -1 - 2 \cdot \gamma x - 2 \gamma^2 x^2 + \frac{1}{6}x^3(-8\gamma^3 - \psi^{(2)} (1)) - x^4(...)
-$$
-Notice that on the left side we only have 'one minus a term with an $x^3$ coefficient', while on the other side we see 'minus one plus $x$, $x^2$, $x^3$ coefficients with their terms'. This is important, because it probably answers the question why the following will not work, but I don't know why and I really would like to know.
-I guess you know what I will attempt to do now. We equate the $x^3$ terms with each other, set $x=1$, multiply by minus one and 'find' that
-$$
-\zeta(3) = \frac{1}{6}(8\gamma^3 + \psi^{(2)} (1)).
-$$
-By combining this with the already known result
-$$
-\zeta(3) = -\frac{1}{2} \psi^{(2)}(1),
-$$
-we 'find' that
-$$
-\zeta(3) '=' \gamma^3.
-$$
-Obviously, this is wrong. Apery's constant is larger than one, and this value is clearly smaller than one. Could someone please elaborate on where I went wrong? And does anybody have any suggestions and/or ideas related to the discussion from above with which we could find "better" values for Apery's Constant and the other odd zeta constants? (For example by pointing out a similar infinite product relation, and by showing that that infinite product has a nicer Series Expansion?)
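-(For what it's worth, a quick numerical check with mpmath makes the gap explicit:)
-from mpmath import zeta, psi, euler
-print(zeta(3))         # 1.2020569... (Apery's constant)
-print(-psi(2, 1) / 2)  # the same value, via the known polygamma identity
-print(euler**3)        # 0.1923..., nowhere near zeta(3)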
Or could someone point out to me why this approach to finding nicer closed-form representations for these constants clearly won't lead to any results?
-Thanks in advance,
-Max Muller
-(Moderators: If you find any spelling mistakes or grammar errors, feel free to correct them. To the rest: $\gamma$ is the Euler-Mascheroni Constant, and it amounts to approximately $0.5772$. The $\psi^{(2)}(x)$ stands for the second logarithmic derivative of the Gamma-function. As usual, Wikipedia is a pretty good reference for this sort of thing.)
-
-REPLY [7 votes]: Your previous question was about the wrong function. Instead of $\Gamma(x)$ in the denominator you should have $\Gamma(-x)$. If you fix this you'll probably end up with the same polygamma identity you already knew.
-In any case, proving anything about $\zeta(2k+1)$ is known to be quite hard. If anything simple worked, Euler would have done it, or somebody in the last few centuries anyway.
-Let me also mention that Prudnikov's identity is, if you are willing to accept Euler-style manipulations, trivial. It is equivalent to a product formula for $\frac{1}{\Gamma(x)}$ which follows (again if you are willing to accept Euler-style manipulations) from an investigation of its roots and does not really tell you anything deep about zeta values.<|endoftext|>
-TITLE: Separation in direct limits of closed inclusions
-QUESTION [14 upvotes]: Suppose $X$ is a space and $A_1\subseteq A_2\subseteq A_3\subseteq ...\subset X$ is a sequence of subspaces each of which is closed in $X$ and such that $X\cong \varinjlim_{n}A_n$ (i.e. $U$ is open in $X$ if and only if $U\cap A_n$ is open in $A_n$ for each $n$). This topology on $X$ has many names (direct limit, inductive limit, weak topology, maybe more) but I can't seem to find much dealing with separation properties in this general setting. Specifically, I am asking:
-If $A_n$ is Hausdorff for each $n$, then must $X$ also be Hausdorff?
-
-REPLY [9 votes]: The answer is no. H. Herrlich showed, in 1969, that even if you consider each $A_n$ a completely regular space, the direct limit may fail to be Hausdorff. However, if all $A_n$ are T$_4$-spaces then $X$ is a T$_4$-space (it's not hard to prove this).
-A comment about the definition of direct limit. Usually, in category theory, we call a direct limit a colimit of a directed family of objects. Using this terminology it's well known that the category of Hausdorff spaces isn't closed under direct limits. You can find some examples in Dugundji's 'Topology' (a shame it's out of print). The definition you are using is very particular, so Herrlich's example is special.
-In this paper D. Hajek and G. Strecker exhibit sufficient conditions for the Hausdorff property to be preserved under direct limits.<|endoftext|>
-TITLE: Eigenvalues and Eigenvectors of $2 \times 2$ Matrix
-QUESTION [11 upvotes]: Let's say I have a $2 \times 2$ matrix (actually the structure tensor of a discrete image - I):
-$$ \begin{bmatrix}
- \frac{\partial I}{\partial x}\frac{\partial I}{\partial x} & \frac{\partial I}{\partial x}\frac{\partial I}{\partial y} \\
- \frac{\partial I}{\partial y}\frac{\partial I}{\partial x} & \frac{\partial I}{\partial y}\frac{\partial I}{\partial y}
-\end{bmatrix}$$
-It has 2 properties:
-
-Symmetric.
-Positive Semidefinite.
-
-Given those properties, what would be the easiest method to numerically compute its eigenvectors (orthogonal) and eigenvalues?
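-Edit: for future readers, the closed-form recipe from the reply below can be coded directly (a minimal sketch in Python; the eigenvectors here are normalized, so they are orthonormal, and nothing is done about numerical edge cases beyond $b = 0$):
-import math
-def eig_sym_2x2(a, b, c):
-    # Eigenpairs of [[a, b], [b, c]]; delta = sqrt((a-c)^2 + 4 b^2).
-    delta = math.hypot(a - c, 2.0 * b)
-    lam1, lam2 = 0.5 * (a + c - delta), 0.5 * (a + c + delta)
-    if b == 0.0:                     # already diagonal
-        return (a, (1.0, 0.0)), (c, (0.0, 1.0))
-    pairs = []
-    for lam in (lam1, lam2):
-        vx, vy = b, lam - a          # solves (M - lam I) v = 0
-        n = math.hypot(vx, vy)
-        pairs.append((lam, (vx / n, vy / n)))
-    return pairs[0], pairs[1]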
-
-REPLY [5 votes]: Despite other answers, I thought it might benefit the impatient to see the explicit answer below.
-Let $$M =
-\left(
-\begin{array}{cc}
- a & b \\
- b & c
-\end{array}
-\right)$$
-be the input matrix.
-Define the discriminant: $\Delta = \sqrt{a^2+4 b^2-2 a c+c^2}$
-Then, the eigenvalues of $M$ are given by:
-$\lambda_1 = 0.5(a+c-\Delta)$ and $\lambda_2 = 0.5(a+c+\Delta)$
-Now, you can find a matrix $V$ such that
-$$M = V^{-1} \begin{pmatrix}
-\lambda_1 & 0\\
-0 & \lambda_2
-\end{pmatrix}V.$$
-Mathematica says that the matrix $V$ is given by
-$$
-V = \begin{pmatrix}
-\frac{a-c-\Delta}{2b} & 1\\
-\frac{a-c+\Delta}{2b} & 1
-\end{pmatrix}
-$$
-If you are looking for orthogonal $V$, then the above calculations need some changes.<|endoftext|>
-TITLE: Algorithms to find simple closed curves on a surface
-QUESTION [8 upvotes]: A closed curve on a surface can be represented by the boundaries it intersects. Suppose the original surface is created by glueing $a$ to $A$, $b$ to $B$ etc. Pick an orientation: each time the curve intersects a boundary, add that boundary to the representation. For example, in the image (figure not reproduced here), the word is read off by tracing the curve.
-The black dot is the starting point to trace the curve.
-The sequence of letters is called a word.
-The length of a word is defined as the amount of letters in the word.
-A word $W$ represents a simple closed curve if there exists a simple closed curve that can be represented by $W$. Let's call those words simple words.
-For a given surface, are there algorithms to find simple words with length up to $L$ in polynomial time?
-The naive algorithm is to generate every word and test which ones are simple, but that is $O(c^L)$ for some constant $c$. The number of simple words is $O(L^k)$ for some constant $k$, as shown in Growth of the number of simple closed geodesics on hyperbolic surfaces by Maryam Mirzakhani; the bound given in that paper can be converted into a bound on words.
-
-REPLY [6 votes]: Fix a triangulation $T = (t_i)_{i=0}^N$ of the surface $S$. An arc $\alpha$ embedded in a triangle $t_i$ is normal if the two points $\partial \alpha$ lie in distinct edges of $t_i$. A single triangle $t_i$ admits three different normal arcs (up to isotopy).
-A simple closed curve $\alpha \subset S$ is normal with respect to $T$ if and only if $\alpha \cap t_i$ is a collection of normal arcs for each $i$. Exercise: every essential simple curve may be isotoped to be normal. We may replace any normal curve by a vector of length $3N$ by counting the number of each type of normal arc. These vectors have non-negative integer entries and satisfy certain "matching equations". Conversely, any such vector represents a simple closed multicurve. (Note that a single isotopy class of curve may have many different representations as a normal curve!)
-The matching equations cut a cone out of the positive orthant. Thus to enumerate all curves up to a fixed length (where here I count the number of normal arcs as the length) it suffices to find a Hilbert basis for this cone and take all combinations up to a fixed size, and so on, and so on.
-The result is an algorithm that enumerates all essential simple closed curves up to length $N$ in time polynomial in $N$. However the precomputation (of the Hilbert basis for the cone, etc) is no joke.
-You can find a discussion starting on page 13 of Cameron Gordon's notes for a course on normal surfaces in three-manifolds.
I'll end with a remark: in the case of the (once-holed) torus you can get away with exactly two triangles, the cone is very simple, the basis is very pretty, and everything can be done by hand. (In fact you can in this way rediscover for yourself the classification of simple closed curves in the torus.)
-Edit: I'll add one more remark. You are representing curves on the surface via "cutting sequences" which are very closely related to writing out words in terms of the fundamental group. When working with simple curves (and the simplicity is crucial here) it can be exponentially more efficient to use the "normal coordinates" I described. These are very similar to Thurston's "train track" coordinates for simple closed curves. There is a third way to represent simple curves: Fix a small collection of curves (i.e. the very short curves) and then act on those by the mapping class group of the surface, as generated by Dehn twists (say). These "mapping class group coordinates" can again be exponentially more efficient than using cutting sequences.<|endoftext|>
-TITLE: Is there a limit of $\cos(n!)$?
-QUESTION [28 upvotes]: I encountered a problem today to prove that $(X_n)$ with $X_n = \cos(n!)$ does not have a limit (when $n$ approaches infinity). I have no idea how to do it formally. Could someone help? The simpler the proof (by that I mean less complex theorems are used) the better. Thanks
-
-REPLY [2 votes]: Above posters seem to be right; I thought I had this solved for the limit = 1, but realized I was wrong.
-If it's of any use, here is my faulty reasoning.
-
-As we get more and more factors we will only get closer and closer to a factor that is a multiple of pi. We know that n! is even for any n >= 2, so we get arbitrarily close to a multiple of 2*pi as n approaches infinity.
-Sorry if the code is out of form, but:
-
-import math
-eps = .00001
-for i in xrange(2, 800000, 2):
-    if i*math.pi - math.floor(i*math.pi) < eps: print i
-# prints: 431230, 530762, 630294
-math.cos(math.floor(431230*math.pi)) = 0.99999999997167566
-So any n! for n > 431230*pi will be at least this close, or closer, to 1
-
-The problem is that the product of the factors NOT close to pi is going to grow faster than that epsilon shrinks. And obviously no integer is going to be an exact multiple of pi (as it is irrational). Therefore, I am led to agree with other posters that this limit is unlikely.
-PS - I know this isn't rigorous by any means, but was hoping maybe better minds would have some insightful comments.<|endoftext|>
-TITLE: No solutions to a matrix inequality?
-QUESTION [10 upvotes]: If $A$ is a stochastic matrix, then $A$ is entry-wise nonnegative and $Ae = e$, i.e., $(1,e)$ is a right eigenpair for $A$.
-Is it true that there exists a vector $b$ such that
-$$(A - I)x \geq b$$
-has no solutions in $x$? If so, is there a simple proof?
-Motivation: I've been trying to construct an answer to another question using linear programming duality (as the OP implies he is interested in). If my reasoning is correct, this is the only step I need to complete the argument. I feel like this should be an easy question to answer, but I've been working on it for a while with no success.
-
-REPLY [10 votes]: Your inequality $(A-I)x \ge b$ has no solutions in $x$ as soon as $b>0$. Indeed, any potential solution would have to satisfy $A x \ge x + b$ and, since rows of $A$ are nonnegative and sum to one, each element of vector $Ax$ is a convex combination of the components of $x$, which must be less than $x_{max}$, the largest component of $x$.
On the other hand, at least one element of $x+b$ is greater than $x_{max}$, which proves the impossibility.
-By the way, applying Farkas' Lemma to this impossible system shows that the following always admits solutions in $y$
-$$y^T (A-I) = 0, y \ge 0 \text{ and } y^T b > 0$$
-which expresses the fact that $A$ necessarily admits a nonnegative left eigenvector with eigenvalue $1$ (the last inequality ensures that $y$ is nonzero).<|endoftext|>
-TITLE: Eleven unit squares inside a larger square
-QUESTION [5 upvotes]: What is the smallest square which contains 11 non-overlapping (except boundary) unit squares?
-This question is open but I would like to know a method to verify the best known answer at the moment.
-I'm reading the paper here. In Figure 2, it is said that the best known packing has side length about 3.8772 and tilt angle about 40.182 degrees. How can I verify those values, as another source (Which Way did the Bicycle Go, page 105) says that the side length is about 3.877083? I think I have to denote the tilt angle by $\alpha$, a side of the square by $x$ and form two equations but I don't see how to make those equations.
-
-REPLY [2 votes]: I asked Walter Stromquist about this via email. Here are our emails:
-Hello,
-I am a mathematician from Finland. I read from the book "Which way did the bicycle go" that you have found a way to pack $11$ squares with side length 1 into a square with side length $3.8772$ or $3.877083$. Do you know which one is correct? I have asked this on Eleven unit squares inside a larger square but got no response. Is there any computation available for the side length?
-Best wishes,
-Jaakko Seppälä
-Hello, Jaakko,
-Thanks for writing!
-I didn't invent the nice packing for 11 squares. I just proved that you can't do better (or as well) with "45-degree packings." The nice packing appeared in a Martin Gardner column, and he attributed it to Walter Trump. Even though my paper came close, I don't think that anyone has proved that Trump's packing is optimal.
-Here is one way to do the calculation. I doubt that it is the best way, but it worked for me.
-Let's use the diagram of Figure 2 in my paper. (My paper is the one you linked to in Stack Exchange. The figure in Friedman's paper is reflected in a diagonal.)
-Call the lower left corner $(0,0)$. Use $(0,y)$ for where a tilted square touches the left edge, and $(x,0)$ for where another tilted square touches the bottom edge. Use $d$ for the tiny vertical distance between the two squares on the right edge.
-Use $\theta$ for the angle (about 40 degrees) from the lower edge to the lower-right edge of the tilted square.
-The component in the direction $\theta$ of the vector from $(0,y)$ to $(s-2,s-1)$ is $2$. This gives the equation
-$[ (s-2,s-1) - (0,y) ] \cdot ( \cos \theta, \sin \theta) = 2$.
-Looking at some other vectors gives some more equations:
-$[ (s-1,s-2) - (1,1) ] \cdot (\cos \theta, \sin \theta) = 2$
-$[ (s-1, s-3-d) - (x,0) ] \cdot (\cos \theta, \sin \theta) = 1$
-$[ (1, s-1) - (s-1, s-2-d) ] \cdot ( -\sin \theta, \cos \theta) = 2$
-$[ (0,y) - (x, 0) ] \cdot (-\sin \theta, \cos \theta) = 3$.
-Playing the first two equations against each other gives $y = 2$. (That's cute; I never noticed that!) Then it isn't too hard to get equations for $x$ and $s$ in terms of $\theta$, and then an equation for $d$ in terms of $\theta$, $x$, and $s$. That leaves us one equation to test. So we try various values of $\theta$, compute $x$, $s$, and $d$, and then see whether the test equation holds. (Like I said, there must be a better way.)
-Anyway, when I do that now I get
-$\theta = 40.1819372903297$
-$\textrm{side} = 3.877083590022810$.
-I did this using Excel, so I wouldn't want to bet too much on the accuracy of the last two digits. But it is pretty clear that Stan Wagon is right, and the number in my paper is wrong. (Stan Wagon is a pretty careful guy, and he is a genius at computation.)
-Welcome to the world of square packing! I hope you find many interesting things.
-Walter
-Hello,
-Thank you very much! It was nice to see the computations. So finally I know which one is correct. It would be nice to share this on Stack Exchange, but I think there might be some copyright issues.
-Jaakko
-No copyright issues at this end. Share as you like!<|endoftext|>
-TITLE: Derivative commuting over integral
-QUESTION [9 upvotes]: Can a derivative operation commute over an integral operation irrespective of the properties of the function under the integral?
-
-REPLY [11 votes]: Not in general. I recommend Gelbaum and Olmsted's Counterexamples in Analysis, which is where I turned to find a counterexample to your question. Namely, example 15 on page 123 is titled
-
-A function $f$ for which $d/dx\int_a^b f(x,y)dy\neq\int_a^b[\partial/\partial x f(x,y)]dy$, although each integral is proper.
-
-The example is
-$$f(x,y) = \left\{
-  \begin{array}{lr}
-  \frac{x^3}{y^2}e^{-x^2/y} & : y>0, \\
-  0 & : y=0,
-  \end{array}
-  \right.
-$$
-integrated with respect to $y$ from $0$ to $1$. Actually, differentiating under the integral sign works here except where $x=0$.
-The function and its partial derivative are not jointly continuous. When they are jointly continuous, differentiation and integration commute.<|endoftext|>
-TITLE: What is this operation called?
-QUESTION [14 upvotes]: This operation is similar to discrete convolution and cross-correlation, but has binomial coefficients:
-$$f(n)\star g(n)=\sum_{k=0}^n \binom{n}{k}f(n-k)g(k) $$
-Particularly,
-$$a^n\star b^n=(a+b)^n$$
-following the binomial theorem.
-I just wonder if there is a name for such an operation and where I can read about its properties.
-
-REPLY [3 votes]: I came across this same binomial convolution in the following curious setting: consider the shift operator $S(a_n) = (a_{n+1})$
-which maps
-$(a_0, a_1, a_2, \ldots) \mapsto (a_1, a_2, a_3, \ldots)$.
-It is easy to check that $S$ is a derivation of this convolution, that is:
-$$ S ((a_n) \star (b_n)) = S (a_n) \star (b_n) + (a_n) \star S(b_n) $$
-just by using Pascal's rule
-$ {n+1 \choose k} = {n \choose k} + {n \choose k-1} $.
-This can be used to give a proof of the form of the general solution to a linear recurrence (homogeneous, with constant coefficients): just repeat the same linear algebra one does to give a proof of the form of the general solution to a linear homogeneous differential equation with constant coefficients, exchanging the derivative operator $D$, functions, and exponential functions for the shift operator $S$, sequences, and geometric sequences.
-I have not seen this used elsewhere and I do not know if it has applications other than the one sketched above.<|endoftext|>
-TITLE: Riemann Zeta Function and Analytic Continuation
-QUESTION [7 upvotes]: The Riemann Zeta Function is defined as $ \displaystyle \zeta(s) = \sum\limits_{n=1}^{\infty} \frac{1}{n^s}$. It is not absolutely convergent or conditionally convergent for $\text{Re}(s) \leq 1$. Using analytic continuation, one can derive the fact that $\displaystyle \zeta(-s) = -\frac{B_{s+1}}{s+1}$ where $B_{s+1}$ are the Bernoulli numbers.
Can one obtain this result without resorting to analytic continuation?
-
-REPLY [15 votes]: Using Euler--MacLaurin summation, one can obtain the following formula for
-$\zeta(s)$:
-$$
-\zeta(s) = \frac{1}{s-1}+\frac{1}{2} + \frac{B_2}{2} s + \cdots + \frac{B_{2k}}{(2k)!}s(s+1)\cdots (s + 2k-2) + \frac{s(s+1)\cdots(s+2k-1)}{(2k)!}f(s),
-$$
-where $f(s)$ is an integral involving $s$ which converges when $\Re(s) > -2k$.
-(My favourite reference for $\zeta(s)$ is Edwards's book Riemann's zeta function. This particular formula is obtained by setting $N = 1$ in formula
-(1) on p. 114.)
-So this gives a formula for $\zeta(s)$ which is defined when
-$\Re(s) > -2k$. If you substitute in $s = -2k+1$, you will get (after some
-rearrangement) that
-$\zeta(-(2k-1)) = -B_{2k}/2k.$
-Of course this is a form of analytic continuation (as others have noted, it is hard to make sense of what $\zeta(-(2k-1))$ would mean otherwise). But it is perhaps a little different from the standard approach.
-
-ADDED: An approach which seems quite different from analytic continuation --- at least at first --- is the
-Abelian regularization approach used by Euler. (Please excuse the anachronism
-of labelling Euler's method with Abel's name!) This is discussed in some of the answers to this question.
-The idea is first to multiply by $(1-2^{-s+1})$, which eliminates the pole at $s=1$, and replaces $\zeta(s)$ by the function $\eta(s):= \sum_{n=1}^{\infty} (-1)^{n-1} n^{-s}$. (Clearly if we can evaluate $\eta(s)$, we can evaluate $\zeta(s)$,
-simply by dividing through by $(1-2^{-s+1})$.)
-Then, one computes $\eta(-k)$ via the following formula:
-$$\eta(-k) = \lim_{T \to 1} \sum_{n=1}^{\infty}(-1)^{n-1}n^k T^n.$$
-The point is that the series in $T$ converges (when $|T| < 1$) to a rational function of $T$,
-which we can then evaluate at $T = 1$.
-This method can be seen directly to lead to the usual formula in terms of
-Bernoulli numbers. One can also relate it to the usual description of $\zeta(s)$ (or --- equivalently --- $\eta(s)$) via analytic continuation (by considering the two-variable function
-$\sum_n (-1)^n n^{-s} T^n$), but it is the approach I know which is a priori furthest removed from analytic continuation.<|endoftext|>
-TITLE: Algorithm for shortest path on manifold
-QUESTION [5 upvotes]: What are some algorithms used for finding the shortest path between 2 points on a (Riemannian) manifold (the manifold may have a smooth boundary)?
-So far I've had 3 ideas, none of which seem that good:
-1) (Rubber band principle) Find any path between the two points, then draw the path taut like a stretched rubber band with some damping.
-2) (Ray-tracing analog) Shoot out geodesic rays from one point in all directions. If a ray hits the target, we're done. If the ray hits the boundary and is not close to tangent, ignore it. If the ray hits the boundary approximately tangent, then follow the geodesic on the boundary in that direction, shooting tangent rays off occasionally.
-3) (Discretization) Triangulate/tetrahedralize/etc. the manifold, and construct a weighted graph where the nodes correspond to the centerpoints of each tetrahedron, the edges correspond to tetrahedra that touch, and the weights correspond to the distance between tetrahedra centers. Then compute the shortest path in the graph.
-Most of the stuff I've seen in the literature is concerned with the theoretical existence of the path rather than actually computing it.
How do you find the shortest path in practice?
-
-REPLY [4 votes]: There are many algorithms for computing shortest paths on polyhedral 2-manifolds.
-With a student I computed the shortest paths shown below with one of them, the Chen-Han algorithm.
-The algorithms fan out shortest paths to a frontier, mimicking the structure
-(but not the details) of Dijkstra's algorithm for shortest paths in a graph. Sometimes the method is called the "continuous Dijkstra" method. These algorithms (all in $\mathbb{R}^3$) are described in many places, including
-the book Geometric Folding Algorithms: Linkages, Origami, Polyhedra, Chapter 24.
-
-
-I would assume the same approach will work for triangulated manifolds in arbitrary dimensions, but of course it will be significantly more complicated to implement.<|endoftext|>
-TITLE: Verifying some basics of the metric topology
-QUESTION [6 upvotes]: While reading about general topology, I started looking at the metric topology. At one point, the interior of a subset $E$ of a metric space $X$ is defined as follows:
-$${E}^{\circ}=\{p\in E \mid B_p(\epsilon)\subseteq E \text{ for some } \epsilon>0\}$$
-Here, $B_p(\epsilon)$ is a ball with center $p$ and radius $\epsilon$. I tried verifying that this met the general properties of an interior, and didn't have much trouble showing ${E}^{\circ}\subseteq E$, $(E\cap F)^{\circ}={E}^{\circ}\cap {F}^{\circ}$, and ${X}^{\circ}=X$. However, I don't see how ${{E}^{\circ}}^{\circ}={E}^{\circ}$. I see that ${{E}^{\circ}}^{\circ}\subseteq {E}^{\circ}$, but tried without success to see the other containment. Is there a reason why this follows simply from the axioms of what a metric is?
-Furthermore, apparently another way to approach this topology is to define the family of open sets $\mathcal{T}$ as those which are the union of a family of balls. Again, I tried verifying this for myself. I see that for any $\mathcal{U}\subseteq\mathcal{T}$, $\cup\mathcal{U}$ is a union of sets which each may be represented as the union of a family of balls, so the whole union again may be represented as the union of a family of balls, namely those in each set in $\mathcal{U}$. Also, $X$ may be represented as the union of balls where I take one ball with center $p$ for all $p\in X$. Finally $\emptyset$ is just an empty union of balls. However, I was unable to show that for finite $\mathcal{U}\subseteq\mathcal{T}$, $\cap\mathcal{U}$ is again open. Is there some way to see that the finite intersection could be represented as a union of balls from the properties of the metric?
-
-REPLY [4 votes]: For the first problem, show that for every point in $E^{\circ}$ there is a ball around it which is also in $E^{\circ}$. In other words, show that interiors are open.
-For the second problem, do the same: show that for every point in the intersection there is a ball around it which is also in the intersection.
-
-REPLY [3 votes]: A hint for your first question: show that if $p \in E^{\circ}$, then there exists an $\epsilon > 0$ such that $B_p(\epsilon)$ is contained in $E^{\circ}$. This shows that $p$ lies in the interior of $E^{\circ}$, i.e., $p \in (E^{\circ})^{\circ}$. This comes down to showing that if you have a point $p$ in an open ball $B$, you can find a smaller open ball $B'$ around $p$ which is entirely contained in $B$.
-A hint for your second question: it's enough to show that if $B_1$ and $B_2$ are two open balls and $p \in B_1 \cap B_2$ (i.e., $p$ lies in both balls), then there is a small ball $B_3$ around $p$ such that $B_3$ is contained in both $B_1$ and $B_2$.
This is actually quite similar to the exercise of the previous paragraph.<|endoftext|> -TITLE: Are functions of independent variables also independent? -QUESTION [57 upvotes]: It's a really simple question. However I didn't see it in books and I tried to find the answer on the web but failed. -If I have two independent random variables, $X_1$ and $X_2$, then I define two other random variables $Y_1$ and $Y_2$, where $Y_1$ = $f_1(X_1)$ and $Y_2$ = $f_2(X_2)$. -Intuitively, $Y_1$ and $Y_2$ should be independent, and I can't find a counter example, but I am not sure. Could anyone tell me whether they are independent? Does it depend on some properties of $f_1$ and $f_2$? -Thank you. - -REPLY [5 votes]: I'll add another proof here, the continuous analog of Fang-Yi Yu's proof: -Assume $Y_1$ and $Y_2$ are continuous. For real numbers $y_1$ and $y_2$, we can define: -$S_{y_1} = \{{x_1: g(x_1)\le y_1} \}$ and -$S_{y_2} = \{{x_2: h(x_2)\le y_2} \}$. -We can then write the joint cumulative distribution function of $Y_1$ and $Y_2$ as: -\begin{eqnarray*} -F_{Y_{1},Y_{2}}(y_{1},y_{2}) & = & P(Y_{1}\le y_{1},Y_{2}\le y_{2})\\ - & = & P(X_{1}\in S_{y_{1}},X_{2}\in S_{y_{2}})\\ - & = & P(X_{1}\in S_{y_{1}})P(X_{2}\in S_{y_{2}}) -\end{eqnarray*} -Then the joint probability density function of $Y_{1}$ and $Y_{2}$ -is given by: -\begin{eqnarray*} -f_{Y_{1},Y_{2}}(y_{1},y_{2}) & = & \frac{\partial^{2}}{\partial y_{1}\partial y_{2}}F_{Y_{1},Y_{2}}(y_{1},y_{2})\\ - & = & \frac{d}{dy_{1}}P(X_{1}\in S_{y_{1}})\frac{d}{dy_{2}}P(X_{2}\in S_{y_{2}}) -\end{eqnarray*} -Since the first factor is a function only of $y_{1}$ and the second -is a function only of $y_{2}$, then we know $Y_{1}$ and $Y_{2}$ -are independent (recall that random variables $U$ and $V$ are independent -random variables if and only if there exists functions $g_{U}(u)$ -and $h_{V}(v)$ such that for every real $u$ and $v$, $f_{U,V}(u,v)=g_{U}(u)h_{V}(v)$).<|endoftext|> -TITLE: Reuleaux Rollers -QUESTION [22 upvotes]: The Reuleaux polygons are analogs of the regular polygons, except that the "sides" are composed of circle arcs instead of lines. It is known that for an odd number of sides, e.g. the Reuleaux triangle, the polygon has constant width. -After reading the paper Roads and Wheels by Stan Wagon and Leon Hall, I got curious on how one might construct the appropriate "road" for Reuleaux wheels; i.e., finding the curve such that when a Reuleaux polygon rolls on it, the axle at the centroid of the polygon experiences no vertical displacement. -My problem is that it does not seem straightforward, at least to me, how to construct the corresponding differential equation for the road, as presented in the paper. Since circles roll on horizontal lines, and equiangular spirals roll on inclined lines, I would suppose that the road needed for a rolling Reuleaux would not be piecewise linear. This demonstrates that the "road" cannot be a horizontal line for a Reuleaux triangle, as the axle does not remain level when the curve is rolling. -So, how does one construct the road? A solution for just the Reuleaux triangle would be fine, but a general solution is much better. - -REPLY [2 votes]: I refer to the above figure. The triangle has side length $1$, and the culmination point of the road is at the origin $O$. Let the road $s\to (x(s),y(s))$ be parametrized by arc length and let $\theta:=-\arg(\dot x,\dot y)$. 
Then the point $A$ in the figure is given by $A=(x+\sin\theta, y+\cos\theta)$, and the centroid $Z$ of the triangle by $Z=(\ldots,\ y+\cos\theta-\cos(\theta + s)/\sqrt{3})$. As $Z$ should keep its second coordinate constant we get the condition
-$$-\sin\theta-\sin\theta\cdot\dot \theta +\sin(\theta + s)\cdot(\dot\theta+1)/\sqrt{3} =0$$
-(note that $\dot y=-\sin\theta$) from which we obtain
-$$(\dot\theta+1)(\sin(\theta+s)-\sqrt{3} \sin\theta)=0.$$
-It follows that here the second factor has to vanish identically. This leads to
-$$\tan\theta(s)={\sin s\over\sqrt{3} -\cos s}$$ resp. to
-$$\dot x(s)=\cos\theta={\sqrt{3}-\cos s\over\sqrt{4-2\sqrt{3}\cos s}},\qquad
-\dot y(s)=-\sin\theta={-\sin s\over \sqrt{4-2\sqrt{3}\cos s}}.$$
-Unfortunately $x(s)$ is not an elementary function.<|endoftext|>
-TITLE: Human checkable proof of the Four Color Theorem?
-QUESTION [25 upvotes]: The Four Color Theorem is equivalent to the statement: "Every cubic planar bridgeless graph is 3-edge colorable". There is a computer-assisted proof given by Appel and Haken. Dick Lipton in one of his beautiful blogs posed the following open problem:
-
-Are there non-computer based proofs of the Four Color Theorem?
-
-Surprisingly, while I was reading this paper,
-Anshelevich and Karagiozova, Terminal backup, 3D matching, and covering cubic graphs, the authors state that Cahit proved that "every 2-connected cubic planar graph is edge-3-colorable" which is equivalent to the Four Color Theorem (I. Cahit, Spiral Chains: The Proofs of Tait's and Tutte's Three-Edge-Coloring Conjectures. arXiv preprint, math CO/0507127 v1, July 6, 2005).
-
-Does Cahit's proof resolve the open problem in Lipton's blog by providing a non-computer based proof for the Four Color Theorem? Why isn't Cahit's proof widely known and accepted?
-
-Cross posted on MathOverflow as Human checkable proof of the Four Color Theorem?
-
-REPLY [5 votes]: After reading the papers by Rufus Isaacs [1] and George Spencer-Brown [2], I have reached the conclusion that the spiral chain edge coloring algorithm [3] answers the question in the affirmative.
-[1] Rufus Isaacs, "Infinite families of nontrivial trivalent graphs which are not Tait colorable", American Math. Monthly 82 (1975) 221-239.
-[2] George Spencer-Brown, "Uncolorable trivalent graphs", Thirteenth European Meeting on Cybernetics and Systems Research, University of Vienna, April 10, 1996.
-[3] I. Cahit, Spiral Chains: The Proofs of Tait's and Tutte's Three-Edge-Coloring Conjectures. arXiv preprint, math CO/0507127 v1, July 6, 2005.<|endoftext|>
-TITLE: Eigenvalues of companion matrix of $4x^3 - 3x^2 + 9x - 1$
-QUESTION [6 upvotes]: I want to find all the roots of a polynomial and decided to compute the eigenvalues of its companion matrix.
-How do I do that?
-For example, if I have this polynomial: $4x^3 - 3x^2 + 9x - 1$, I compute the companion matrix:
-$$\begin{bmatrix} 0&0&\frac{3}{4} \\ 1&0&-\frac{9}{4} \\ 0&1&\frac{1}{4} \end{bmatrix}$$
-Now how can I find the eigenvalues of this matrix? Thanks in advance.
-
-REPLY [4 votes]: Hey there! If I understand your case correctly, you want to find the eigenvalues of this matrix, which amounts to solving for the roots of its characteristic polynomial, obtained by evaluating a determinant. To build on Robert's idea, we want to use the equation
-$\det(A-\lambda I) = 0$ (into which we can plug the given coefficient matrix).
-$\det(A-\lambda I) =
-\left|\begin{array}{ccc}
--\lambda & 0 & \dfrac{3}{4} \\
-1 & -\lambda & -\dfrac{9}{4} \\
-0 & 1 & \dfrac{1}{4}-\lambda
-\end{array} \right| = 0$, where $A$ is your coefficient matrix and $I$ is the identity matrix
-$I = \left[\begin{array}{ccr}
-1 & 0 & 0 \\
-0 & 1 & 0 \\
-0 & 0 & 1
-\end{array} \right].$
-From here we can now find the eigenvalues of the matrix $A$ as follows.
-$\underline{\text{Evaluating the determinant by expanding along the minors of column 1:}}$
-$-\lambda
-\left|\begin{array}{cc}
--\lambda & -\dfrac{9}{4} \\
-1 & \dfrac{1}{4}-\lambda
-\end{array} \right|
--1
-\left|\begin{array}{cc}
-0 & \dfrac{3}{4} \\
-1 & \dfrac{1}{4}-\lambda
-\end{array} \right|
-+0
-\left|\begin{array}{cc}
-0 & \dfrac{3}{4} \\
--\lambda & -\dfrac{9}{4}
-\end{array} \right|
-$
-$\Rightarrow ~ -\lambda\left(\lambda^{2} -\dfrac{1}{4}\lambda + \dfrac{9}{4}\right) - 1\left(0 - \dfrac{3}{4}\right) + 0\left(0 + \dfrac{3}{4}\lambda\right)$
-$\Rightarrow ~ -\lambda^3+\dfrac{1}{4}\lambda^2-\dfrac{9}{4}\lambda+\dfrac{3}{4}$. Hence our characteristic polynomial is obtained:
-$$P(\lambda)=-\lambda^3+\dfrac{1}{4}\lambda^2-\dfrac{9}{4}\lambda+\dfrac{3}{4}$$
-If you need assistance on how to find the characteristic polynomial by evaluating the determinant, here is a reference: Computing Determinants
-After solving this polynomial for its roots (the eigenvalues) we get the following:
-$\lambda \approx 0.329$, $\lambda \approx -0.040 - 1.508i$, $\lambda \approx -0.040 + 1.508i$.
-All the roots except for $\lambda \approx 0.329$ form a complex conjugate pair. Can someone else please verify that those are all of the roots of this polynomial and that the ones I provided are correct? Thanks. I hope this helps, if this explanation is what you were looking for.<|endoftext|>
-TITLE: An incorrect method to sum the first $n$ squares which nevertheless works
-QUESTION [15 upvotes]: Start with the identity
-$\sum_{i=1}^n i^3 = \left( \sum_{i = 1}^n i \right)^2 = \left(\frac{n(n+1)}{2}\right)^2$.
-Differentiate the left-most term with respect to $i$ to get
-$\frac{d}{di} \sum_{i=1}^n i^3 = 3 \sum_{i = 1}^n i^2$.
-Differentiate the right-most term with respect to $n$ to get
-$\frac{d}{dn} \left(\frac{n(n+1)}{2}\right)^2 = \frac{1}{2}n(n+1)(2n+1)$.
-Equate the derivatives, obtaining
-$\sum_{i=1}^n i^2 = \frac{1}{6}n(n+1)(2n+1)$,
-which is known to be correct.
-Is there any neat reason why this method happens to get lucky and work for this case?
-
-REPLY [5 votes]: This is quite obvious if expressed most naturally in terms of the Bernoulli polynomials, viz.
-$$\sum n^k = \frac{1}{k+1}\left(B_{k+1}(n+1) - B_{k+1}(0)\right) \quad\text{and}\quad B_k'(x) = k\,B_{k-1}(x),$$
-thus
-$$\left(\sum n^k\right)' = k \sum n^{k-1} + B_k.$$
-According to Knuth's exposition these identities go all the way back to Jacobi - with special cases known to Faulhaber. Note that the identities in Robin Chapman's post are simply special cases of these well-known identities for Bernoulli polynomials.
-The Bernoulli polynomials are a special case of polynomials amenable to study by the powerful techniques of the Umbral Calculus. This is the best way to understand the genesis of their many interesting properties.
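-(For a concrete check of the two identities above, here is a minimal sketch of mine; it assumes SymPy is available, and power_sum is just an illustrative name for Faulhaber's formula.)
-from sympy import symbols, bernoulli, diff, expand
-n = symbols('n')
-def power_sum(k):
-    # sum_{j=1}^n j^k = (B_{k+1}(n+1) - B_{k+1}(0)) / (k+1)
-    return expand((bernoulli(k + 1, n + 1) - bernoulli(k + 1, 0)) / (k + 1))
-print(expand(diff(power_sum(3), n) - 3*power_sum(2)))  # prints 0 = B_3: the trick works for cubes
-print(expand(diff(power_sum(2), n) - 2*power_sum(1)))  # prints 1/6 = B_2: a correction term appears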
For a nice introduction see Steven Roman's book.<|endoftext|>
-TITLE: Characterization of convergence in measure
-QUESTION [6 upvotes]: Prove that $f_n\to f$ in measure on $E$ if and only if given $\varepsilon>0$, there exists $K$ such that |{$x\in E : |f(x)-f_k(x)|>\varepsilon$}|$<\varepsilon$ for $k\ge K$.
-The "only if" direction of this is immediate from the definition of convergence in measure, but the other direction is less obvious to me.
-Conversely, we suppose that given $\varepsilon >0$, there is a $K$ such that |{$x\in E : |f(x)-f_k(x)|>\varepsilon$}|$<\varepsilon$ for $k\ge K$. My initial thought was to bound the measure of the set in question by, say, $\frac{1}{k}$. But I'm not sure I can do that because $\varepsilon$ not only bounds the measure of the set, but the set also depends on the choice of $\varepsilon$. To show convergence in measure, I need to show that for every $\varepsilon$ the limit as $k\to\infty$ of the measures of those sets is zero...
-[Subquestion: is the use of |$\cdot$| standard for denoting Lebesgue measure? I had never seen it until this course. I had always seen $m(-)$.]
-[Sub-subquestion: is there any particular reason that set brackets don't display in math mode? The commands \ { and \ } didn't do anything...]
-
-REPLY [4 votes]: Convergence in measure means that for all $\varepsilon\gt0$, $|\{x\in E : |f(x)-f_k(x)|>\varepsilon\}|$ goes to $0$, which means that for all $\varepsilon\gt0$, for all $\delta\gt0$, there exists $K$ such that $|\{x\in E : |f(x)-f_k(x)|>\varepsilon\}|<\delta$ for all $k\geq K$. Taking $\delta=\varepsilon$ gives the "only if". To see "if", given positive $\varepsilon$ and $\delta$, take $K$ such that $k\geq K$ implies that $|\{x\in E : |f(x)-f_k(x)|>\min(\varepsilon,\delta)\}|<\min(\varepsilon,\delta)$.<|endoftext|>
-TITLE: PDEs on Manifolds
-QUESTION [27 upvotes]: I am wondering if there is a general coordinate-independent way to define a Partial Differential Equation on a smooth manifold.
-It is definitely true that in each coordinate neighborhood you could define a function to satisfy a differential equation, but when you change coordinates then the differential equation will most likely have a different form.
-For example, in $\mathbb{R}^3$ one says a function is harmonic if it satisfies
-$ \frac{\partial^2}{\partial x^2} f + \frac{\partial^2}{\partial y^2} f + \frac{\partial^2}{\partial z^2} f = 0 $
-but it is not true that in spherical coordinates a function is harmonic if
-$ \frac{\partial^2}{\partial r^2} f + \frac{\partial^2}{\partial \theta^2} f + \frac{\partial^2}{\partial \phi^2} f = 0 $
-The change of variables to spherical coordinates gives you a much different PDE. So basically, how can you define a differential operator which is coordinate-independent on a smooth manifold, so that you can have some notion of a PDE on a manifold?
-
-REPLY [29 votes]: Ryan already gave a very good answer. I just want to give a few clarifications on how one should think about PDEs in general. Most importantly: you should not think of a differential operator as defined by "a formula". What do I mean by that? In your question, you wrote that the Laplacian in Euclidean coordinates is $\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}$, but in spherical coordinates it is not $\frac{\partial^2}{\partial r^2} + \frac{\partial^2}{\partial \theta^2} + \frac{\partial^2}{\partial \phi^2}$. But why should one have thought that the two expressions are the same?
-To illustrate, think of the function on the plane $f(x,y) = x^2 + y^2$. This represents the square of the distance of a point from the origin. In radial coordinates, the same function is written as $f(r,\theta) = r^2$. Not $r^2 + \theta^2$, which is a very different beast. Just like how the direct replacement $x \to r$ and $y\to \theta$ changes the function you are considering, the replacements $\frac{\partial}{\partial x} \to \frac{\partial}{\partial r}$ etc. change the operator you are considering.
-Now: to think geometrically (on manifolds), a (real-valued) function is an assignment of a number to each point on the manifold. Then in any coordinate system there is a representation of the function by a formula. Since we like to work with formulas, we can interpret the notion of a function as a "map". This map takes as its input a coordinate system, and outputs a formula that represents your function in that coordinate system. (If you are familiar with category theory, this is similar to how one can think in terms of arrows instead of dots.)
-Similarly, a partial differential operator can be thought of as an object in itself. Our usual way of writing it as partial derivatives in some coordinate system with coefficients is just a convenient representation of the real object upstairs. Therefore, analogous to how a function is merely a "map" from coordinate system to the formulaic representation, you can think of a partial differential operator also as a "map" which takes as its input a coordinate system and outputs an expression which is a sum of partial derivatives with some coefficients.
-The jet bundle formulation is just a sophisticated, rigorous way of formulating this idea. The main difficulty is making sure the intuition above is compatible with the change of variables formulas. Suppose we are given a coordinate system $A$, and an expression in terms of partial derivatives which we'll call $L_A$. The question becomes: does there exist some abstract partial differential operator $L$ such that its representation in the coordinate system $A$ is precisely $L_A$? And if there is such an operator, is the fact that, for some function $f$ (written in coordinate system $A$ as $f_A$), $L_A f_A = 0$ invariant under change of coordinate system? (That given any other coordinate system $B$, $L_Bf_B = 0$ also.) The tools of jet bundles allow you to make such a consistent definition/description of partial differential equations.<|endoftext|>
-TITLE: Intuitive explanation of the difference between waves in odd and even dimensions
-QUESTION [48 upvotes]: Motivation: In odd dimensions, solutions to the wave equation: $$u_{tt}(x,t)=\Delta u(x,t), \qquad u_t(x,0)=0, \qquad u(x,0)=f(x)$$ where $t \geq 0$ and $x \in \mathbb{R}^n$, have the nice property that the value of $u(x,t)$ only depends on the values $f(y)$ with $|y-x|=t$. For even dimensions, the value $u(x,t)$ depends on all the values $f(y)$ with $|y-x|\leq t$. A consequence of this is that when you switch a light bulb on and then off (in 3D), there will be a light wave traveling at the speed of light and behind the wave, there will be total darkness. But when you throw a rock into a pond (with a 2D surface), there will be lots of waves traveling outwards from where the rock hit the water and, in theory, the water will never be still again.
-Question: Can anyone give an intuitive explanation of this difference between odd and even dimensions?
-
-REPLY [18 votes]: This is a very good question.
Kevin Brown once gave a very detailed answer to it which I will not bother to reproduce here.
-A not-exactly-precise way to think about it is this: a wave propagating in $n$ dimensions can be thought of as a wave propagating in $n+1$ dimensions, but with one degree of freedom removed. More precisely: if $f$ solves the wave equation $(\partial_t^2 - \triangle) f(t,x) = 0$ on $\mathbb{R}^d$, then it also solves the equation $(\partial_t^2 - \partial_y^2 - \triangle) f(t,y,x) = 0$ on $\mathbb{R}^{d+1}$, if you assume that $f(t,y,x) = f(t,0,x) = f(t,x)$ for all $y$. (It is constant in the $y$ direction, so any derivative in that direction is 0.)
-Now suppose a wave were to propagate in, say, 5 dimensions with the property you mentioned. Then points "inside the light cone" in 4 dimensions are reachable in 5 dimensions as points on the light cone. Or, in other words, assume $f(t,x)$ is a solution to the wave equation in 4 dimensions. And let $f(t,y,x)$ be the trivial extension with one dimension added. So $f(t,y,x)$ solves the wave equation in 5 dimensions. Suppose at time $t = 0$, a light bulb turns on at the origin $x = 0$ in 4-dimensional space. This corresponds to a bank of lightbulbs turning on in 5-dimensional space along the $y$ axis. The five-dimensional principle says that a point at coordinates $(y,x)$ will be illuminated by a lightbulb at $(y_0,0)$ at time $t$ if $t^2 = x^2 + (y-y_0)^2$. But for every $x, y$ such that $|x| < |t|$, you can find two values of $y_0$ such that $t^2 = x^2 + (y-y_0)^2$. And so some light must lag behind. (In the study of partial differential equations, this is also known as the method of descent.)
-But by this argument, there should still be some light lagging behind for 3 dimensions also! The claim, however, is this: due to the nature of the propagation, there is a destructive interference when you drop two dimensions. That you have destructive interference when you drop two dimensions, but no such problem when you drop only one dimension, is not at all obvious intuitively (at least to me). It however does fall out of the expression for the fundamental solution to the wave equation.<|endoftext|>
-TITLE: Semigroup with "transitive" operation is a group?
-QUESTION [6 upvotes]: I have a semigroup $G$ (a set with an associative binary operation) such that for all $a,b\in G$ there exist $x,y\in G$ such that $ax=ya=b$. Is this property enough to show that $G$ is a group, and if so, how?
-
-REPLY [9 votes]: Yes. This is a standard exercise.
-First, let $a\in G$ be arbitrary. Then there exists $e\in G$ such that $ae=a$ (using the first condition with $a=b$). Now let $b\in G$. Then there exists $z\in G$ such that $za=b$. Therefore, $be=(za)e=z(ae) = za = b$. Thus, $e$ is a right identity for $G$.
-From the fact that $ax=e$ is always solvable it follows that every element has a right inverse. Since your semigroup has a right identity and right inverses, then it is a group.<|endoftext|>
-TITLE: Why does the polynomial equation $1 + x + x^2 + \cdots + x^n = S$ have at most two solutions in $x$?
-QUESTION [11 upvotes]: Américo Tavares pointed out in his answer to this question that finding the ratio of a geometric progression only from knowledge of the sum of its first $n+1$ terms $S = 1+x+x^2+\cdots+x^n$ amounts to solving a polynomial of degree $n$. This suggested to me that there might be up to $n$ real solutions of $x$ for a given sum, but I could not find an example with more than two.
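-(A quick numerical experiment, a sketch of mine assuming NumPy, with an illustrative helper name, agrees with this observation:)
-import numpy as np
-def real_root_count(n, S, tol=1e-9):
-    # roots of x^n + x^(n-1) + ... + x + (1 - S), coefficients listed from highest degree down
-    roots = np.roots(np.append(np.ones(n), 1.0 - S))
-    return int(np.sum(np.abs(roots.imag) < tol))
-for n in (2, 3, 4, 5, 10, 11):
-    print(n, [real_root_count(n, S) for S in (-3.0, 0.5, 2.0, 7.0)])  # counts never exceed 2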
In fact, it turned out that the following fact is true:
-For $n \ge 1$ and $S \in \mathbb{R}$, the polynomial equation $x^n + x^{n-1} + \cdots + x + 1 = S$ has at most two real solutions.
-A corollary is that if $n$ is odd, there is exactly one real solution. I was only able to prove this using a rather contrived geometric argument based on the shape of the graph of $y = x^{n+1}$. Is there a simple, direct (and ideally, intuitive) proof of this fact?
-
-REPLY [13 votes]: In between the previous two answers - one very specific and one very general - there is the following one:
-Rolle's theorem states that between any two zeros of the function lies a zero of the derivative. So each time you differentiate, the number of real roots cannot decrease by more than 1. This actually holds with multiplicity as well - if a root is of multiplicity $k$, then it is of multiplicity $k-1$ for the derivative, and there are still roots of the derivative strictly between consecutive distinct roots of the original function. So $x^{n+1}-Sx+S-1$ has at most one real root more than $(n+1)x^n-S$, and at most two more than $n(n+1)x^{n-1}$; since the latter has $x=0$ as its only real root, there are not more than three in total (counted with multiplicity). One of those is $x=1$, so there are no more than two for $1+x+\cdots+x^n=S$. QED.<|endoftext|>
-TITLE: Picture of spec?
-QUESTION [14 upvotes]: I recall seeing a hand-drawn picture of spec of a ring (maybe of $\mathbb Z$?) that had been passed around in the early days of the Zariski topology. Does anyone know where I can find a copy?
-
-REPLY [6 votes]: Be sure not to miss this very interesting paper (in English) on "Mumford's treasure map" and related concepts: Lieven Le Bruyn: Un dessins d'enfants<|endoftext|>
-TITLE: Is speed an important quality in a mathematician?
-QUESTION [42 upvotes]: Is solving problems quickly an important trait for a mathematician to have? Is solving textbook/olympiad style problems quickly necessary to succeed in math? To make an analogy, is it better to be a sprinter or a marathon runner in mathematics?
-
-REPLY [8 votes]: You can find all the quotes here and I recommend reading them in full. But for the completeness of the answer I quote some of them which I find most important.
-
-In The Map of My Life mathematician Goro Shimura said,
-
-I discovered that many of the exam problems were artificial and required some clever tricks. I avoided such types, and chose more straightforward problems, which one could solve with standard techniques and basic knowledge. There is a competition called the Mathematical Olympic, in which a competitor is asked to solve some problems, which are difficult and of the type I avoided. Though such a competition may have its raison d'être, I think those younger people who are seriously interested in mathematics will lose nothing by ignoring it.
-
-In his lecture at the 2001 International Mathematics Olympiad, Andrew Wiles gave further description of how math competitions are unrepresentative of mathematical practice,
-
-[...]The two principal differences I believe are of scale and novelty. First of scale: in a mathematics contest such as the one you have just entered, you are competing against time and against each other. While there have been periods, notably in the thirteenth, fourteenth and fifteenth centuries when mathematicians would engage in timed duels with each other, nowadays this is not the custom. In fact time is very much on your side. However the transition from a sprint to a marathon requires a new kind of stamina and a profoundly different test of character.
We admire someone who can win a gold medal in five successive Olympic games not so much for the raw talent as for the strength of will and determination to pursue a goal over such a sustained period of time. Real mathematical theorems will require the same stamina whether you measure the effort in months or in years [...]
-
-In his Mathematical Education essay, Fields Medalist William Thurston said,
-
-Quickness is helpful in mathematics, but it is only one of the qualities which is helpful. For people who do not become mathematicians, the skills of contest math are probably even less relevant. These contests are a bit like spelling bees. There is some connection between good spelling and good writing, but the winner of the state spelling bee does not necessarily have the talent to become a good writer, and some fine writers are not good spellers. If there was a popular confusion between good spelling and good writing, many potential writers would be unnecessarily discouraged.
-
-In his book Mathematics: A Very Short Introduction, Fields Medalist Timothy Gowers writes,
-
-While the negative portrayal of mathematicians may be damaging, by putting off people who would otherwise enjoy the subject and be good at it, the damage done by the word genius is more insidious and possibly greater. Here is a rough and ready definition of genius: somebody who can do easily, and at a young age, something that almost nobody else can do except after years of practice, if at all. The achievements of geniuses have some sort of magic quality about them - it is as if their brains work not just more efficiently than ours, but in a completely different way. Every year or two a mathematics undergraduate arrives at Cambridge who regularly manages to solve in a few minutes problems that take most people, including those who are supposed to be teaching them, several hours or more. When faced with such a person, all one can do is stand back and admire.
-And yet, these extraordinary people are not always the most successful research mathematicians. If you want to solve a problem that other professional mathematicians have tried and failed to solve before you, then, of the many qualities you will need, genius as I have defined it is neither necessary nor sufficient. To illustrate with an extreme example, Andrew Wiles, who (at the age of just over forty) proved Fermat's Last Theorem (which states that if $x$, $y$, $z$, and $n$ are all positive integers and $n$ is greater than $2$, then $x^n + y^n$ cannot equal $z^n$) and thereby solved the world's most famous unsolved mathematics problem, is undoubtedly very clever, but he is not a genius in my sense.
-How, you might ask, could he possibly have done what he did without some sort of mysterious extra brainpower? The answer is that, remarkable though his achievement was, it is not so remarkable as to defy explanation. I do not know precisely what enabled him to succeed, but he would have needed great courage, determination, and patience, a wide knowledge of some very difficult work done by others, the good fortune to be in the right mathematical area at the right time, and an exceptional strategic ability.
-This last quality is, ultimately, more important than freakish mental speed: the most profound contributions to mathematics are often made by tortoises rather than hares. As mathematicians develop, they learn various tricks of the trade, partly from the work of other mathematicians and partly as a result of many hours spent thinking about mathematics.
What determines whether they can use their expertise to solve notorious problems is, in large measure, a matter of careful planning: attempting problems that are likely to be fruitful, knowing when to give up a line of thought (a difficult judgement to make), being able to sketch broad outlines of arguments before, just occasionally, managing to fill in the details. This demands a level of maturity which is by no means incompatible with genius but which does not always accompany it.
-
-Fields Medalist Alexander Grothendieck describes his own relevant experience in Récoltes et Semailles,
-
-Since then I’ve had the chance in the world of mathematics that bid me welcome, to meet quite a number of people, both among my “elders” and among young people in my general age group who were more brilliant, much more ‘gifted’ than I was. I admired the facility with which they picked up, as if at play, new ideas, juggling them as if familiar with them from the cradle–while for myself I felt clumsy, even oafish, wandering painfully up an arduous track, like a dumb ox faced with an amorphous mountain of things I had to learn (so I was assured) things I felt incapable of understanding the essentials or following through to the end. Indeed, there was little about me that identified the kind of bright student who wins at prestigious competitions or assimilates almost by sleight of hand, the most forbidding subjects.
-In fact, most of these comrades who I gauged to be more brilliant than I have gone on to become distinguished mathematicians. Still from the perspective of thirty or thirty-five years, I can state that their imprint upon the mathematics of our time has not been very profound. They’ve done all things, often beautiful things in a context that was already set out before them, which they had no inclination to disturb. Without being aware of it, they’ve remained prisoners of those invisible and despotic circles which delimit the universe of a certain milieu in a given era. To have broken these bounds they would have to rediscover in themselves that capability which was their birthright, as it was mine: The capacity to be alone.
-
-See also: This question and this.<|endoftext|>
-TITLE: Developing the unit circle in geometries with different metrics: beyond taxi cabs
-QUESTION [23 upvotes]: My class had a good time redeveloping the unit circle under the taxicab metric. Now some of them want to do it again with another similar metric. I want to give this to some of my "honors" high-school* calculus I students to work on independently. As such, it can't be too difficult or require too many advanced concepts.
-What is another metric that I could introduce them to where one can describe a "unit circle" and try to make sense of the sine, cosine and tangent functions?
-*I have a group of high school students enrolled in a college course. The course, as planned, is far too easy for them. (At the same time, it is quite hard for the college students enrolled. Go figure.) After the high school students do what has been assigned by the online learning system I have them work on other stuff, which I allow to be pretty self-guided.
-
-REPLY [8 votes]: It is a lovely result of Hermann Minkowski that any plane centrally symmetric bounded convex set with nonempty interior can serve as the "unit ball" of a distance function.<|endoftext|>
-TITLE: Efficiently calculating the logarithmic integral with complex argument
-QUESTION [5 upvotes]: My number theory library of choice doesn't implement the logarithmic integral for complex values.
I thought that I might take a crack at coding it, but I thought I'd ask here first for algorithmic advice and/or references. I'm sure there are better methods than naively calculating the integral. - -REPLY [10 votes]: For this answer, I'm assuming the definition -$$\mathrm{li}(z):=\mathrm{PV}\int_0^z\frac{\mathrm{d}u}{\ln\;u}$$ -where we assume the Cauchy principal value (the more common definition in number theory that has a lower limit of 2 differs merely by a constant). -Well, the first thing you have to note is the identity -$$\mathrm{li}(z)=\mathrm{Ei}(\ln\;z)$$ -where $\mathrm{Ei}(z)$ is an exponential integral. (Again, I repeat my advice to people who encounter strange functions: you would really do well to check the DLMF first for identities and references.) -Now, $\mathrm{Ei}(z)$ is a slightly more tractable beastie to numerically evaluate, since the singularity at $z=0$ can be confined to a logarithmic part; to wit: -$\mathrm{Ei}(z)=\gamma+\ln\;z+\int_0^z \frac{\exp(u)-1}{u}\mathrm{d}u$ -where the last portion is an entire function. -Now, depending on which of the left or right half-planes should the exponential integral be evaluated, your strategy will differ (it is a common fact that most special function routines are polyalgorithms, since their behavior can markedly differ in different regions of the complex plane). I will be vague for the rest of this answer since you did not clarify your region of interest. Suffice it to say that one usually uses a continued fraction for arguments in the left half-plane, a power series for small to medium-sized arguments, and asymptotic expansions for large arguments. -If implementing it yourself is starting to sound daunting (because it is), I will have to point out this paper and the corresponding FORTRAN subroutine. -Hope this helps a bit.<|endoftext|> -TITLE: The least prime greater than 2000 -QUESTION [12 upvotes]: I'm a bit curious as to how "real" mathematicians would solve this problem. -"Find the least prime number greater than 2000." -Of course, I can always go brute force: -IF N IS PRIME, OUTPUT N -ELSE INCREASE N BY 1 - -But that's no fun. (Especially when the number is really high.) -Are there any algorithms/tricks/etc. that can help me solve this quickly and efficiently, especially if I were given large values of N? - -REPLY [3 votes]: All prime numbers, except for $2$ and $3$, are of the form $6n\pm1$. Now, what's the nearest multiple of $6$ to $2000$ ? It can't be $2000$ itself, since, despite being even, the sum of its digits is not a multiple of $3$. The same holds for $2002$ as well. Then $2004$ is the answer. Let's check $2004-1=2003$. Indeed, it's prime.<|endoftext|> -TITLE: derivative of characteristic function -QUESTION [5 upvotes]: I came across an interesting problem but unable to see how to approach it. How do I use the dominated convergence theorem (LDCT), to show that first derivative of the characteristic function of the probability distribution at $t = 0$, $\phi^′(0)=iE[X]$? Any ideas? -References: -http://en.wikipedia.org/wiki/Characteristic_function_(probability_theory) -http://en.wikipedia.org/wiki/Dominated_convergence_theorem - -REPLY [5 votes]: First, one has $$e^{ir}-1=O(|r|) $$ for all $r\in \mathbb{R}.$ This means that there exist constant (independent of $r$) $A>0$ such that $$|e^{ir}-1|\le A|r| \mbox{ for all } r\in \mathbb{R} $$ (In fact, you can take $A=1$, as $|e^{ir}-1|$ is the length of the chord from 1 to $e^{ir}$ and $r$ is the corresponding arc length on the unit circle.) 
Let us prove the claim in the case $X$ is continuous. (The discrete case can be proved similarly.) Suppose $f(x)$ is the probability density function of $X$. Then
-$$ \phi(t)=E(e^{itX})=\int_{-\infty}^\infty f(x)e^{itx} dx$$ and $\phi(0)=\int_{-\infty}^\infty f(x) dx =1$. So by linearity of the integral
-$$\phi'(0)=\lim_{t\to 0}\frac{\phi(t)-\phi(0)}{t}=\lim_{t\to 0}\int_{-\infty}^\infty f(x)\frac{e^{itx}-1}{t} dx.$$ Since
-$$\left|f(x)\frac{e^{itx}-1}{t}\right|\le f(x) A \frac{|tx|}{|t|}=A f(x)|x|,$$ and $$\lim_{t\to 0}\frac{e^{itx}-1}{t}=\frac{d}{dt} e^{itx}|_{t=0}=ix e^0=ix,$$
-it follows from DCT that if $E(|X|)<\infty$, then $$\phi'(0)= \int_{-\infty}^\infty f(x)ix dx=i E(X).$$<|endoftext|>
-TITLE: Summing $\frac{1}{e^{2\pi}-1} + \frac{2}{e^{4\pi}-1} + \frac{3}{e^{6\pi}-1} + \cdots \text{ad inf}$
-QUESTION [14 upvotes]: In this post, David Speyer gave an expression for $\displaystyle \frac{t}{e^{t}-1}$.
-The question is: can we sum the given series using that expression? If not, how does one sum this series? $$\sum\limits_{n=1}^{\infty} \frac{n}{e^{2\pi n}-1}=\frac{1}{e^{2\pi}-1} + \frac{2}{e^{4\pi}-1} + \frac{3}{e^{6\pi}-1} + \cdots \text{ad inf}$$
-
-REPLY [25 votes]: What you require here are the Eisenstein series. In particular the evaluation of
-$$E_2(\tau) = 1 - 24\sum_{n=1}^\infty \frac{ne^{2\pi i n \tau} }{1 - e^{2\pi i n \tau}},$$
-at $\tau = i$. Rearrange to get
-$$\sum_{n=1}^\infty \frac{ne^{2\pi i n \tau} }{1 - e^{2\pi i n \tau} } = \frac{1}{24}(1 - E_2(i) ).$$
-See Lambert series for additional information.
-EDIT: The function
-$$G_2(\tau) = \zeta(2) \left( 1 - 24\sum_{n=1}^\infty \frac{ne^{2\pi i n \tau} }{1 - e^{2\pi i n \tau}} \right)
-=\zeta(2)E_2(\tau)$$
-satisfies the quasimodular transformation
-$$G_2\left( \frac{a\tau+b}{c\tau+d} \right) = (c\tau+d)^2G_2(\tau) - \pi i c (c\tau + d).$$
-And so with $a=d=0,$ $c=1$ and $b=-1$ we find $G_2(i) = \pi/2.$ Therefore
-$$E_2(i) = \frac{G_2(i)}{\zeta(2)} = \frac{\pi}{2}\cdot\frac{6}{\pi^2} = \frac{3}{\pi}.$$
-Hence we obtain
-$$\sum_{n=1}^\infty \frac{n}{e^{2\pi n} - 1} = \frac{1}{24} - \frac{1}{8\pi},$$
-as given in the comment to the question by Slowsolver.
-EDIT:
-There is a very nice generalisation of the sum in the question.
-For odd $ m > 1 $ we have
-$$\sum_{n=1}^\infty \frac{n^{2m-1} }{ e^{2\pi n} -1 } = \frac{B_{2m}}{4m},$$
-where $B_k$ are the Bernoulli numbers defined by
-$$\frac{z}{e^z - 1} = \sum_{k=0}^\infty \frac{B_k}{k!} z^k \quad \textrm{ for }
-|z| < 2 \pi.$$<|endoftext|>
-TITLE: Volume bounded by cylinders $x^2 + y^2 = r^2$ and $z^2 + y^2 = r^2$
-QUESTION [17 upvotes]: I am having trouble expressing the titular question as iterated integrals over a given region. I have tried narrowing down the problem, and have concluded that the simplest way to approach this is to integrate over the XZ plane in the positive octant and multiply by 8, but I am having trouble identifying the bounding functions.
-
-REPLY [19 votes]: This is one of those results in calculus which were anticipated by
-Archimedes. He gave a correct formula for the volume but it is not known exactly
-how Archimedes solved this problem. There is, however, a simple way to
-obtain the answer without much calculus.
Let me quote from late Gardner's The Unexpected Hanging and Other Mathematical Diversions (Gardner considers the case $r=1$ but this is not essential, of course): - -Imagine a sphere of unit radius inside the volume common - to the two cylinders and having as its center the point - where the axes of the cylinders intersect. Suppose that the - cylinders and sphere are sliced in half by a plane through - the sphere's center and both axes of the cylinders. - The cross section of the volume common to the cylinders will be a square. - The cross section of the sphere will be a circle that fills the square. - - - -Now suppose that the cylinders and sphere are sliced by a - plane that is parallel to the previous one but that shaves off - only a small portion of each cylinder (have a look at the picture on the left). - This will produce parallel tracks on each cylinder, - which intersect as before to form a square cross section of - the volume common to both cylinders. Also as before, the - cross section of the sphere will be a circle inside the square. - It is not hard to see (with a little imagination and pencil - doodling) that any plane section through the cylinders, parallel - to the cylinders' axes, will always have the same result: - a square cross section of the volume common to the cylinders, - enclosing a circular cross section of the sphere. - Think of all these plane sections as being packed together - like the leaves of a book. Clearly, the volume of the sphere - will be the sum of all circular cross sections, and the volume - of the solid common to both cylinders will be the sum of all - the square cross sections. We conclude, therefore, that the - ratio of the volume of the sphere to the volume of the solid - common to the cylinders is the same as the ratio of the area - of a circle to the area of a circumscribed square. A brief calculation - shows that the latter ratio is $\pi/4$. This allows the - following equation, in which $x$ is the volume we seek: - -$$\frac{4\pi r^3/3}{x}=\frac{\pi}{4}.$$ - -The $\pi$'s drop out, giving $x$ a value of $16r^3/3$. The radius in - this case is 1, so the volume common to both cylinders is - $16/3$. As Archimedes pointed out, it is exactly $2/3$ the volume - of a cube that encloses the sphere; that is, a cube with - an edge equal to the diameter of each cylinder.<|endoftext|> -TITLE: Metrization of the weak*-topology on a set of probability measures -QUESTION [9 upvotes]: Let $X$ denote a metric space. One can assume that $X$ is Polish if that helps, but I was trying to avoid to assume that $X$ is compact. Let $P(X)$ denote the set of Borel probability measures on $X$. The weak*-topology on $P(X)$ is defined as usual: a net $(p_{\alpha})$ in $P(X)$ converges to $p \in P(X)$ iff $|\int fdp_{\alpha}-\int fdp|$ converges to $0$. -Question: Is there any metric that metrizes the weak*-topology on $P(X)$ and has convex open balls? - -REPLY [6 votes]: On a separable metric space $(X,d)$, weak convergence of probability measures is equivalent to convergence with respect to the Lévy–Prokhorov metric defined by -$$ \beta(\mu,\nu) = \sup \left( \int_X f\ d(\mu-\nu): \|f\|_{BL}\leq 1\right),$$ -where $$\|f||_{BL}=\sup_x |f(x)|+\sup_{x\neq y}|f(x)-f(y)|/d(x,y).$$ -See Theorem 11.3.3 of R. M. Dudley's Real Analysis and Probability. - -Let's show that $B=\{\nu: \beta(\mu,\nu)\leq\varepsilon \}$ is a convex set. Let $\nu_1,\nu_2\in B$, $\alpha\in(0,1)$, and $f$ with $\|f\|_{BL}\leq 1$. 
Then
-\begin{eqnarray*}
-\int f d(\mu-[\alpha\nu_1+(1-\alpha)\nu_2])&=&\alpha\int f d(\mu-\nu_1)+(1-\alpha)\int f d(\mu-\nu_2)\\[5pt]
-&\leq&\alpha\beta(\mu,\nu_1)+(1-\alpha)\beta(\mu,\nu_2)\\[5pt]
-&\leq &\alpha \varepsilon +(1-\alpha)\varepsilon\\[5pt]
-&=&\varepsilon.
-\end{eqnarray*}
-Taking the supremum over such $f$ gives $\beta(\mu,\alpha\nu_1+(1-\alpha)\nu_2)\leq \varepsilon$, so the closed ball is convex.
-The open ball $\{\nu: \beta(\mu,\nu)<\varepsilon\}$ is the increasing union
-of the convex sets $\{\nu: \beta(\mu,\nu)\leq\varepsilon-1/n\}$, for $n>1/\varepsilon$, and so is itself convex.<|endoftext|>
-TITLE: what size is a "unit torus"?
-QUESTION [6 upvotes]: Wikipedia articles on "unit sphere" and "unit circle" say the radius is 1. Articles on the "unit square" and "unit cube" say the length of the side is 1. Would you expect a unit torus to have major radius 1 or major diameter 1?
-Admittedly, a torus is a different animal than a sphere, but... It feels most natural to me that the "unit" length should apply to the (major) radius, not the major diameter. Yet I recently came across open source code where someone generated a "unit torus" of major diameter 1.
-Is that "wrong enough" that I should change it (in a package of related changes that I'm already preparing to submit)? Can you give me a more solid mathematical basis for advocating that change? Or should I accept the status quo as just a different but legitimate interpretation of "unit torus"?
-Edit:
-I see from search hits like the following
-
-Spectral Analysis of Virus Spreading in Random Geometric Graphs
-Unconditional Proof of the Boltzmann-Sinai Ergodic Hypothesis
-The cover time of random geometric graphs
-Dimers and amoebae
-
-that the term "unit torus" is used in some fields, like dynamical systems and discrete algorithms. But I'm unable to tell from these papers or abstracts what the authors mean exactly by "unit torus". Dimers and amoebae actually gives this definition:
-
-the unit torus $T^2 = \{(z,w) \in \mathbb{C}^2 : |z| = |w| = 1\}$
-
-This definition appears to give a definite size. But if it's in the two-dimensional vector space over the complex numbers, I don't know how to apply it to $\mathbb{R}^3$.
-If "unit torus" (in $\mathbb{R}^3$) actually means something that does not have any particular size, then that would be important to know.
-My question is really not one of programming, but of what this term means in mathematics... including, to what degree is it actually defined (or not) in math?
-I will base my software decisions on that information.
-(Would tag this "torus" if I could create the tag.)
-
-REPLY [3 votes]: The unit torus (in these cases) refers to a torus of major radius $R$ and minor radius $r$ with surface area $4\pi^2 R r=1$. As such it is the "unit square" with periodic boundary conditions.
-In random geometric graphs, the boundary of the domain plays an important role in much graph-theoretic behaviour. Sometimes it is useful to analyse graphs not inside the unit square, but rather on the surface of a torus with unit surface area (remembering that the torus is constructed by sewing the parallel edges of a square together). This removes boundary effects on the random geometric graphs.
-Below is the unit square with two small obstacle-like regions removed from the domain. We might ask how the obstacles affect the connectivity of the graphs, but this can be difficult when there are "outer" boundary effects that obscure the effects of the obstacles.
- -We can solve this problem by working on the surface of a torus, since the only impact on the connectivity is (to some extent) the obstacles.<|endoftext|> -TITLE: Proof that there is no closed form solution to $2^x + 3^x = 10$ -QUESTION [14 upvotes]: How can I prove that there is no closed form solution to this equation? $$2^x + 3^x = 10$$ - -REPLY [6 votes]: A closed-form solution is a solution that can be expressed as a closed-form expression. -A mathematical expression is a closed-form expression iff it contains only finite numbers of only constants, explicit functions, operations and/or variables. -Sensefully, all the constants, functions and operations in a given closed-form expression should be from allowed sets. -$\ $ -The following parts of the answer are only for closed-form solutions that are expressions of elementary functions. According to Liouville and Ritt, the elementary functions can be represented in a finite number of steps by performing only algebraic operations and / or taking exponentials and / or logarithms. -$\ $ -1) -$2^x+3^x=10$ is a transcendental equation: $e^{\ln(2)x}+e^{\ln(3)x}=10$. The left-hand side of this equation is the functional term of an elementary function. Because $\ln(2)$ and $\ln(3)$ are linearly independent over $\mathbb{Q}$, the expressions $2^x$ and $3^x$ are algebraically independent: MathStackExchange: Algebraic independence of functions. Therefore one can prove with help of the theorem of [Ritt 1925] (which is also proved in [Risch 1979]) that a function $x\mapsto 2^x+3^x$ cannot have a partial inverse over an open domain $D\subseteq\mathbb{C}$ that is an elementary function. It is not possible therefore to rearrange the equation according to $x$ only by applying only elementary operations (elementary functions) one can read from the equation. -2) -The question of solvability of some special kinds of equations by elementary functions is treated in [Rosenlicht 1969]. -3) -$2^x=e^{\ln(2)x}$, $3^x=e^{\ln(3)x}$ -$2^x$ and $3^x$ are $\begin{cases} -\text{algebraic}&\text{if }x\text{ is rational}\\ -\text{transcendental}&\text{if }x\text{ is algebraic and irrational}\\ -&\text{(Gelfond-Schneider theorem)}\\ -\text{transcendental}&\text{otherwise(?)} -\end{cases}$ -Let $x_0$ be a solution of your equation. If $x_0$ is not rational, $2^{x_0}$ and $3^{x_0}$ are algebraically independent: MathStackExchange: Algebraic independence of functions. That means, $2^{x_0}$ and $3^{x_0}$ cannot fulfill together an algebraic equation, in particular not the equation $2^{x_0}+3^{x_0}=10$. That means, the equation can have only rational solutions. -4) -The existence or non-existence of elementary solutions (that are elementary numbers) could possibly be proved by the methods of [Lin 1983] and [Chow 1999]. But both methods need the Schanuel conjecture that is unproven. -$\ $ -[Chow 1999] Chow, T. Y.: What is a Closed-Form Number? Amer. Math. Monthly 106 (1999) (5) 440-448 or https://arxiv.org/abs/math/9805045 -[Lin 1983] Ferng-Ching Lin: Schanuel's Conjecture Implies Ritt's Conjectures. Chin. J. Math. 11 (1983) (1) 41-50 -[Risch 1979] Risch, R. H.: Algebraic Properties of the Elementary Functions of Analysis. Amer. J. Math 101 (1979) (4) 743-759 -[Ritt 1925] Ritt, J. F.: Elementary functions and their inverses. Trans. Amer. Math. Soc. 27 (1925) (1) 68-90 -[Rosenlicht 1969] Rosenlicht, M.: On the explicit solvability of certain transcendental equations. Publications mathématiques de l'IHÉS 36 (1969) 15-22<|endoftext|> -TITLE: Why is the Artin-Rees lemma used here? 
-QUESTION [10 upvotes]: I am currently engaged in independent study of algebraic geometry, using Dan Bump's book. One of the exercises in it outlines a proof of the Krull Intersection Theorem, which [here] is the following:
-
-Let $A$ be a Noetherian local ring with maximal ideal $\mathfrak{m}$, and let $M$ be the intersection of all of the $\mathfrak{m}^n$. Then $M = 0$.
-
-The hints direct me to use the Artin-Rees lemma to show that $\mathfrak{m} M = M$, then use Nakayama's lemma to show that $M = 0$ (this second step is easy). I showed this to a professor and he accused the book of using big machinery for no reason, arguing that
-$$\mathfrak{m} M = \mathfrak{m} \bigcap_{n \ge 0} \mathfrak{m}^n = \bigcap_{n \ge 1} \mathfrak{m}^n = M.$$
-Does this argument work? Does Bump apply Artin-Rees because that argument works in some broader context where the above argument fails?
-
-REPLY [2 votes]: See Theorem 3.6 in Chapter 6 of the CRing project for another elementary argument due to Perdry (American Math. Monthly, 2004) using only the Hilbert basis theorem. In fact, it shows more: if $R$ is a noetherian domain, $I \subset R$ a proper ideal, then the intersection of the powers of $I$ is trivial.<|endoftext|>
-TITLE: Prove that $\sum\limits_{n=1}^{\infty}(-1)^n(2^{1/n} - 1)$ is convergent
-QUESTION [9 upvotes]: I want to prove that the series
-$$ \sum_{n=1}^{\infty}(-1)^n\left(2^{1/n}-1\right)$$
-converges.
-I am fairly certain that this converges. Using the ratio or root test does not seem to work. Therefore the other tests I am left with are the alternating series test, integral test, comparison test and limit comparison test (that we know about).
-Seems like it would be a messy integral that would not work, or that I would not be able to solve.
-I think I could show this using the alternating series test IF the terms tend to 0. My problem is formally showing that the terms $2^{1/n}-1$ tend to 0. I can't quite get past that.
-My only option left if the alternating series test does not work is the limit comparison test, since the terms are not all $> 0$, and therefore we cannot use the comparison test. If this is the case I honestly don't have a clue what other series I should compare this with.
-Thank you for reading!
-
-REPLY [3 votes]: A general suggestion: There are a few problems asking for convergence or divergence of series of the form $\sum_n (-1)^ng(n)$ or $\sum_n g(n)$ for some function $g$ for which one can easily rewrite $g(n)$ as
-$$f(1/n)-f(0)$$ for some $f$ differentiable at 0.
-In all these cases one can try to replace $g(n)=f(1/n)-f(0)$ with $f'(0)/n$ and use elementary estimates and inequalities for the remainder error, if needed (sometimes a limit comparison test will already suffice).
-For example, in the problem here, $g(n)=2^{1/n}-1$, so the obvious choice is $f(x)=2^x$. We have $f'(0)=\ln 2$, which is the estimate that appears in acarchau's answer.
-For another example, $\sum_n\sin(1/n)$ diverges: Take $f(x)=\sin(x)$.<|endoftext|>
-TITLE: Evaluating the integral $\int\limits_{0}^{\infty} \Bigl\lfloor{\frac{n}{e^{x}}\Bigr\rfloor} \ dx $
-QUESTION [7 upvotes]: How to evaluate this integral: $$\int_{0}^{\infty} \biggl\lfloor{\frac{n}{e^{x}}\biggr\rfloor} \ dx, $$where $n \in \mathbb{N}$.
-The same integral, when asked to evaluate for $n=2$ (say), I can do by splitting the limits from $x = 0$ to $x = \log{2}$, where the integrand takes the value 1, and then from $x= \log{2}$ to $x = \infty$. But how to do this for the general case $n$?
I thought of two ways:
-
-Using induction on $n$. This will not work!
-Splitting the limits from $x =0$ to $x = \log{n}$ also seems to cause some problems for me.
-
-REPLY [12 votes]: The integrand only takes on integer values $m$, and each only on a set of finite measure. The interval of $x$ values on which the integrand equals exactly $m$ is $ (- \ln \frac{m+1}{n} , - \ln \frac{m}{n} ] $; intersecting with $[0,\infty)$, the values $m = 1, \dots, n-1$ each contribute a full interval of length $\ln\frac{m+1}{m}$ (for $m=n$ the interval degenerates to the single point $x=0$). Split the integral over these intervals, evaluate (the length of each interval times $m$) and turn them into a summation as $$ \sum_{m=1}^{n-1} m \left( \ln \frac{m+1}{m} \right) = \ln \prod_{m=1}^{n-1} \left( 1 + \frac{1}{m} \right)^m = \ln \frac{n^n}{n!} $$ (For $n=2$ this gives $\ln 2$, matching the direct computation obtained by splitting at $x=\log 2$.)<|endoftext|>
-TITLE: What is the combinatoric significance of an integral related to the exponential generating function?
-QUESTION [15 upvotes]: Suppose that you have an exponential generating function: $E(z)=\sum_{n=0}^{\infty} \frac{a_{n}z^{n}}{n!}$, and that the definition of $a_{n}$ can be reasonably extended to noninteger arguments (the Catalan numbers $C_{n}$ would be written in terms of the Gamma function thusly: $C_{n} = \frac{\Gamma(2n+1)}{\Gamma(n+2)\Gamma(n+1)}$, for instance). What then is the combinatorial significance of this integral:
-$$U(z)=\int_{0}^{\infty} \frac{a_{v}z^{v}dv}{\Gamma(v+1)}$$
-?
-
-REPLY [3 votes]: 'Combinatorial' usually refers to finite or discrete objects, so using an integral is really going in quite the other direction, to an analytic one. That is, a combinatorial interpretation of an integral, as is, will most likely just be forced or convoluted (I'm sure it is possible under some intellectual circumstances, it's just not obvious how).
-So maybe one would want just an unqualified interpretation of what it means when one takes a gf (of an already combinatorial situation) and replaces the summation symbol with an integral to get an analytic situation (which accords more with an analytic continuation or interpretation).
-Then the question really might be 'What is the analytic significance of the integral?' or possibly 'What is an analytic analog of a generating function? If a gf corresponds to a particular interpretation, what can we say, interpretively, about the symbolic transform from a summation to an integral?'
-For the above example, this just isn't obvious, and the integral you give seems to be unknown (even if $a_n = 1$). So I'll try to say something in general about gfs that might help.
-A gf is a way to capture a function on the naturals 'by other means'; the natural number exponents of the gf mark the values of the original function. So by trying to use an integral, you're trying to interpret the exponent as a real (rather than an integer). The furthest I've seen gfs generalized is to Puiseux series, which allow rational exponents (but with some restrictions akin to a gf, some kind of arithmetical progression in the exponent).
-The only appropriate 'conversion' I can think of is something like a Fourier or Laplace transform; those will convert a function to another 'domain' where manipulation can occur, and then convert back to the functional domain.
-
-Edited out (the following is not what the OP is looking for):
-The coefficient of $x^n/n!$ in an egf is usually combinatorially interpreted as the number of labeled sets of size $n$ (with a given structure).
-A more likely integral related to an egf is to integrate (formally, forget the constant) the entire egf with respect to the egf variable, which has an immediate calculation:
-$$ \int E(z) \ dz = \sum_{n=0}^{\infty} \int\frac{a_{n}z^{n}}{n!}\ dz = \sum_{n=0}^{\infty} \frac{a_{n}z^{n+1}}{(n+1)!} = \sum_{n=1}^{\infty} \frac{a_{n-1}z^n}{n!}.$$
-All this does is, in a sense, shift the function: $\int E(z)$ is the egf of $a_{n-1}$.
-Of course, the derivative of the egf shifts the other direction.
-For ordinary generating functions (ogf), the derivative usually means combinatorially that you are 'pointing to' (distinguishing, choosing) one particular element out of a structure of size $n$ (and the integral 'undoes' the pointing). For egfs, the labeling provides a sort of 'pointing' already for every object, so the $n$ multiplier is irrelevant.
-I realize I have answered a different question than you have asked, in the hopes that it is what you are really after. You are asking, given an egf for a function over naturals, to presume the existence of an analytic continuation of the original function, and then integrate over that analytic continuation (along with $z^n/n!$ also interpreted over reals). For the moment, I really can't say anything there other than it looks like some kind of fractional derivative.<|endoftext|>
-TITLE: Euler angles and gimbal lock
-QUESTION [21 upvotes]: Can someone show mathematically how gimbal lock happens when doing matrix rotation with Euler angles for yaw, pitch, roll? I'm having a hard time understanding what is going on even after reading several articles on Google.
-Is the only work-around to use quaternions?
-
-REPLY [4 votes]: Let's say we have an object in 3D space located at $P(1,1,1)$. If we want to rotate this object within any of the planes XY, XZ, YZ about any of the 3 axes of a basic right-handed 3D Cartesian system, the rotation matrices $R_n$ applied to $P$ will look like these:
-
-$$ R_x \space|| \space R_y \space || \space R_z $$
-$$R_x(\theta) P =
-\begin{bmatrix}
- 1 & 0 & 0 \\
- 0 & \cos\theta & -\sin\theta \\
- 0 & \sin\theta & \space\space\cos\theta \\
-\end{bmatrix}
-\begin{bmatrix}
- P_x \\
- P_y \\
- P_z \\
-\end{bmatrix},$$
-$$R_y(\theta) P =
-\begin{bmatrix}
- \space\cos\theta & 0 & \sin\theta \\
- 0 & 1 & 0 \\
- -\sin\theta & 0 & \cos\theta \\
-\end{bmatrix}
-\begin{bmatrix}
- P_x \\
- P_y \\
- P_z \\
-\end{bmatrix},$$
-$$R_z(\theta) P=
-\begin{bmatrix}
- \cos\theta & -\sin \theta & 0 \\
- \sin\theta & \space\space\cos \theta & 0 \\
- 0 & 0 & 1 \\
-\end{bmatrix}
-\begin{bmatrix}
- P_x \\
- P_y \\
- P_z \\
-\end{bmatrix}$$
-
-When doing rotations in 3D about arbitrary axes, the order of the rotations, the handedness of the system, the direction of the rotations, and whether rotations are performed about multiple axes all matter. To demonstrate this, let the angle be $\theta = 90°$; applying such rotations consecutively about different axes, we will see that we eventually end up with gimbal lock.
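-(If you want to check the arithmetic below by machine, here is a short Mathematica sketch of the same computation. The names Rx, Ry, Rz are ad-hoc helper definitions mirroring the matrices above, not built-in functions.)
-Rx[t_] := {{1, 0, 0}, {0, Cos[t], -Sin[t]}, {0, Sin[t], Cos[t]}};
-Ry[t_] := {{Cos[t], 0, Sin[t]}, {0, 1, 0}, {-Sin[t], 0, Cos[t]}};
-Rz[t_] := {{Cos[t], -Sin[t], 0}, {Sin[t], Cos[t], 0}, {0, 0, 1}};
-p = {1, 1, 1};
-Rx[Pi/2].p                     (* {1, -1, 1} *)
-Ry[Pi/2].Rx[Pi/2].p            (* {1, -1, -1} *)
-Rz[Pi/2].Ry[Pi/2].Rx[Pi/2].p   (* {1, 1, -1} *)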
First we will do $R_x$ by 90°, then $R_y$, and finally $R_z$.
-Here we apply a 90° rotation about the X axis to the point or vector $P(1,1,1)$:
-$R_x(90°)
-\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$ =
-$\begin{bmatrix}
- 1 & 0 & 0 \\
- 0 & \cos 90° & -\sin 90° \\
- 0 & \sin 90° & \space\space\cos 90° \\
-\end{bmatrix}
-\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$ =
-$\begin{bmatrix}
- 1 & 0 & \space\space 0 \\
- 0 & 0 & -1 \\
- 0 & 1 & \space\space 0 \\
-\end{bmatrix}
-\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$ =
-$\begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}$
-Now that our vector $P$ has been transformed, we apply another 90° rotation, but this time about the Y axis, using the new values.
-$R_y(90°)
-\begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}$ =
-$\begin{bmatrix}
- \cos 90° & 0 & \sin 90°\\
- 0 & 1 & 0 \\
- -\sin 90° & 0 & \cos 90°\\
-\end{bmatrix}
-\begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}$ =
-$\begin{bmatrix}
- 0 & 0 & 1 \\
- 0 & 1 & 0 \\
- -1 & 0 & 0\\
-\end{bmatrix}
-\begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}$ =
-$\begin{bmatrix} 1 \\ -1 \\ -1 \end{bmatrix}$
-We can now finish with $R_z$:
-$R_z(90°)
-\begin{bmatrix} 1 \\ -1 \\ -1 \end{bmatrix}$ =
-$\begin{bmatrix}
- \cos 90° & -\sin 90° & 0 \\
- \sin 90° & \cos 90° & 0 \\
- 0 & 0 & 1 \\
-\end{bmatrix}
-\begin{bmatrix} 1 \\ -1 \\ -1 \end{bmatrix}$ =
-$\begin{bmatrix}
- 0 & -1 & 0 \\
- 1 & 0 & 0 \\
- 0 & 0 & 1 \\
-\end{bmatrix}
-\begin{bmatrix} 1 \\ -1 \\ -1 \end{bmatrix}$ =
-$\begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix}$
-
-As you can see from the matrix calculations, there is a change of direction each time we rotate by 90 degrees, and we have lost a degree of freedom of rotation. We first rotated by 90° about the X axis, which is perpendicular (orthogonal) to both the Y and Z axes, as is evident from the fact that $\cos(90°) = 0$. When we then rotate by 90° about the Y axis, which is likewise perpendicular to both the X and Z axes, two axes of rotation become aligned; so when we try to rotate in the third dimension we have lost a degree of freedom, because we can no longer distinguish between X and Y: they both rotate simultaneously and there is no way to separate them. This can be seen from the matrix calculations above. It may not be completely evident yet, but if you carry out all 6 permutations of the order of the axis rotations you will see the pattern emerge. Rotations of this kind are called Euler angles.
-
-It also doesn't matter which combination of axes you rotate about: gimbal lock happens for every ordering once two axes of rotation become parallel, as the six orderings listed below show.
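-(Continuing the Mathematica sketch from above, all six orderings can be checked at once; the output matches the final vectors in the chains displayed next. Permutations enumerates the orders x-y-z, x-z-y, y-x-z, y-z-x, z-x-y, z-y-x.)
-orders = Permutations[{Rx[Pi/2], Ry[Pi/2], Rz[Pi/2]}];
-(#[[3]].#[[2]].#[[1]].p) & /@ orders
-(* {{1,1,-1}, {1,1,-1}, {-1,1,1}, {-1,1,1}, {1,-1,1}, {1,-1,1}} *)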
-$$R_x(90°)P \to R_y(90°)P \to R_z(90°)P \implies \text{Gimbal Lock}$$
-$$R_x(90°)P \to R_z(90°)P \to R_y(90°)P \implies \text{Gimbal Lock}$$
-$$R_y(90°)P \to R_x(90°)P \to R_z(90°)P \implies \text{Gimbal Lock}$$
-$$R_y(90°)P \to R_z(90°)P \to R_x(90°)P \implies \text{Gimbal Lock}$$
-$$R_z(90°)P \to R_x(90°)P \to R_y(90°)P \implies \text{Gimbal Lock}$$
-$$R_z(90°)P \to R_y(90°)P \to R_x(90°)P \implies \text{Gimbal Lock}$$
-
-If I simplify this by showing all 6 combinations with the resulting transformed vectors for that same point, you should see the pattern. These transformations are:
-$$R_x(90°) P(1,1,1) \to
-\begin{bmatrix} 1 \\ -1 \\ 1 \\ \end{bmatrix}
-R_y(90°) \to
-\begin{bmatrix} 1 \\ -1 \\ -1 \\ \end{bmatrix}
-R_z(90°) \to
-\begin{bmatrix} 1 \\ 1 \\ -1 \\ \end{bmatrix}$$
-$$R_x(90°) P(1,1,1) \to
-\begin{bmatrix} 1 \\ -1 \\ 1 \\ \end{bmatrix}
-R_z(90°) \to
-\begin{bmatrix} 1 \\ 1 \\ 1 \\ \end{bmatrix}
-R_y(90°) \to
-\begin{bmatrix} 1 \\ 1 \\ -1 \\ \end{bmatrix}$$
-$$R_y(90°) P(1,1,1) \to
-\begin{bmatrix} 1 \\ 1 \\ -1 \\ \end{bmatrix}
-R_x(90°) \to
-\begin{bmatrix} 1 \\ 1 \\ 1 \\ \end{bmatrix}
-R_z(90°) \to
-\begin{bmatrix} -1 \\ 1 \\ 1 \\ \end{bmatrix}$$
-$$R_y(90°) P(1,1,1) \to
-\begin{bmatrix} 1 \\ 1 \\ -1 \\ \end{bmatrix}
-R_z(90°) \to
-\begin{bmatrix} -1 \\ 1 \\ -1 \\ \end{bmatrix}
-R_x(90°) \to
-\begin{bmatrix} -1 \\ 1 \\ 1 \\ \end{bmatrix}$$
-$$R_z(90°) P(1,1,1) \to
-\begin{bmatrix} -1 \\ 1 \\ 1 \\ \end{bmatrix}
-R_x(90°) \to
-\begin{bmatrix} -1 \\ -1 \\ 1 \\ \end{bmatrix}
-R_y(90°) \to
-\begin{bmatrix} 1 \\ -1 \\ 1 \\ \end{bmatrix}$$
-$$R_z(90°) P(1,1,1) \to
-\begin{bmatrix} -1 \\ 1 \\ 1 \\ \end{bmatrix}
-R_y(90°) \to
-\begin{bmatrix} 1 \\ 1 \\ 1 \\ \end{bmatrix}
-R_x(90°) \to
-\begin{bmatrix} 1 \\ -1 \\ 1 \\ \end{bmatrix}$$
-
-If you look at which axis the sequence starts with, the order of the two axes that follow does not matter: the result is the same for a given starting axis of rotation. So, intuitively, we can say the following about using Euler angles of rotation in 3D, keeping in mind the handedness of the coordinate system (handedness matters because it changes the rotation matrices: the trig functions and their signs will be different, and so will your results). For this particular coordinate system we can visually conclude this about Euler angles:
-$$R_x(\theta) P \implies \begin{bmatrix} a \\ a \\ -a \\ \end{bmatrix}$$
-$$R_y(\theta) P \implies \begin{bmatrix} -a \\ a \\ a \\ \end{bmatrix}$$
-$$R_z(\theta) P \implies \begin{bmatrix} a \\ -a \\ a \\ \end{bmatrix}$$
-
-It may not be quite apparent from the numbers alone what exactly is causing the gimbal lock, but the results of the transformations should give you some insight into what is going on. It is easier to visualize than to see purely in the math, so I have provided a link to a good video below. If you are interested in proofs, you have plenty of work ahead of you, for there are other contributing relationships, such as the fact that the $\cos(\theta)$ between two vectors equals the dot product of those vectors divided by the product of their magnitudes:
-$$\cos(\theta) = \frac{ V_1 \cdot V_2}{ \lvert V_1 \rvert \lvert V_2 \rvert } $$
-Other contributing factors are the rules of calculus on the trigonometric functions, especially the $\sin$ and $\cos$ functions.
-$$(\sin{x})' = \cos{x}$$
-$$(\cos{x})' = -\sin{x}$$
-$$\int \sin{ax} \space dx = -\frac{1}{a}\cos{ax} + C$$
-$$\int \cos{ax} \space dx = \frac{1}{a}\sin{ax} + C$$
-There is another interesting fact that I think bears on the reasoning behind gimbal lock, but that is a topic for another day, as it would merit its own page. Do forgive me if the math formatting isn't perfect; I am new to this particular Stack Exchange site and am learning the math tags and formatting as I go.
-Here is an excellent video illustrating Gimbal Lock: Youtube : Gimbal Lock<|endoftext|>
-TITLE: Hom of finitely generated modules over a noetherian ring
-QUESTION [14 upvotes]: This is an exercise from Rotman, An Introduction to Homological Algebra, which I've been thinking about on and off for a few days and haven't solved yet. I've decided to ask here because it is bugging me and I don't have any friends also studying this subject (in case it is of any use, I'm an undergraduate).
-
-Let $R$ be a commutative noetherian ring. If $A$, $B$ are finitely generated $R$-modules, then $\operatorname{Hom}_R(A,B)$ is a finitely generated $R$-module.
-
-Here's what I've thought as of yet (maybe the problem is that I haven't been trying to apply the useful theorems...):
-First of all, since $R$ is noetherian, it suffices to inject $\operatorname{Hom}_R(A,B)$ into some finitely generated module (over noetherian rings, submodules of f.g. modules are f.g.).
-Now, since $R$ is commutative (or since $R$ is noetherian), a finitely generated $R$-module is a quotient of $R^n$ for some $n$. Then let us write $A\simeq \frac{R^n}{I}$ and $B\simeq \frac{R^m}{J}$. Now we've got:
-$\operatorname{Hom}_R(A,B)\simeq \operatorname{Hom}_R\left(\frac{R^n}{I},\frac{R^m}{J}\right)$.
-If I had $\operatorname{Hom}_R\left(\frac{R^n}{I},\frac{R^m}{J}\right) \hookrightarrow \operatorname{Hom}_R(R^n, R^m)$ (this is what I tend to think is the wrong direction, but an interesting question-raising path nevertheless) then it would be over, since $\operatorname{Hom}_R(R^n, R^m)\simeq R^{nm}$ which is finitely generated.
-So I ask myself: can I see $\frac{R^n}{I}$ as a submodule of $R^n$? Because if I could, then doing the same thing for $R^m$ and passing to the hom would finish the proof. Now, this would be true if the sequence $$0\to I\hookrightarrow R^n\to \frac{R^n}{I} \to 0$$ split. But this (of course) doesn't always happen. But if I somehow had the third module to be projective, or the first one to be injective, then it would happen.
-But why would, for example, $\frac{R^n}{I}$ be projective?
-And this is where I got stuck. I'm sorry if this is overly detailed, but I remember reading that it is always good, when asking about a textbook question, to add what you've thought up to that moment. Also, even if my thoughts don't lead to the solution, I would like to know if they are correct!
-
-REPLY [12 votes]: You're nearly there: with your notations, $\mathrm{Hom}_R(A,B)$ injects into $\mathrm{Hom}_R(R^n,B) \simeq B^n$, and $B^n$ is finitely generated.
-Indeed $\mathrm{Hom}_R(A,B) = \left\{f \in \mathrm{Hom}_R(R^n,B)\ |\ I \subset \ker f \right\}$.<|endoftext|>
-TITLE: When is $G/H = G/K$? In particular, when are two Lie groups with isomorphic Lie algebras isomorphic?
-QUESTION [5 upvotes]: Suppose $G_1$ and $G_2$ are Lie groups with isomorphic Lie algebras. Then from standard Lie theory we know that there is a simply connected Lie group $G$ such that $G/H_i = G_i$ where $H_i$ is a discrete subgroup of the center of $G$.
I am curious if there is a nice condition on $H_1,H_2$ and how they sit in $G$ that implies $G_1$ and $G_2$ are isomorphic.
-More generally I guess, when is $G/H = G/K$ for a general group $G$ and normal subgroups $H$ and $K$ (maybe having $H$ and $K$ be in the center makes things easier?).
-
-REPLY [3 votes]: Copied RB's comment as an answer:
-The condition is that there is an automorphism $f:G→G$ such that $f(H_1)=H_2$. The idea is: if $h:G_1→G_2$ is an isomorphism, lift it to a map $G→G$.<|endoftext|>
-TITLE: Harmonic mean and logarithmic mean
-QUESTION [7 upvotes]: The harmonic mean of a finite set of positive real numbers $\{x_1, x_2, \ldots, x_n\}$ is defined to be $$H(\{x_1, x_2, \ldots, x_n\}) = \frac{n}{\frac{1}{x_1} + \frac{1}{x_2} + \cdots + \frac{1}{x_n}}.$$
-The logarithmic mean of two distinct positive real numbers $a$ and $b$ is defined to be $$L(a,b) = \frac{b - a}{\ln b - \ln a}.$$
-One of the first applications of integration that students often see is the extension of the arithmetic mean of a finite set of real numbers to the arithmetic mean of a function $f(x)$ on a continuous interval $[a,b]$ via $\frac{1}{b-a} \int_a^b f(x) dx.$
-In the same way, you can extend the harmonic mean so that it applies to a positive function $f(x)$ over a continuous interval $[a,b]$. You get
-$$\frac{b-a}{\int_a^b \frac{dx}{f(x)}} .$$
-Thus, if you take $f(x) = x$ you obtain the harmonic mean of the continuous interval $[a,b]$. This is
-$$H([a,b]) = \frac{b-a}{\int_a^b \frac{1}{x} dx} = \frac{b-a}{\ln b - \ln a} = L(a,b).$$
-
-My question is this: Is there an intuitive reason why $H([a,b]) = L(a,b)$?
-
-For comparison purposes, note that if $A$ denotes the arithmetic mean, then $A([a,b]) = A(a,b) = \frac{a+b}{2}$.
-
-REPLY [5 votes]: The harmonic mean can be generalized to an arbitrary invertible function $w$: define the mean of $x_1,\ldots,x_n$ to be
-$w^{-1}\left(\frac{w(x_1) + \cdots + w(x_n)}{n}\right)$
-The logarithmic mean can also be generalized via its mean-value interpretation for an arbitrary function $f$ such that $f'$ is invertible: define the mean of $x,y$ as
-$(f')^{-1}\left(\frac{f(x) - f(y)}{x - y}\right)$
-We can generalize your observation by taking $w = f'$. The corresponding mean, before taking $w^{-1}$, is
-$\frac{1}{y-x}\int_x^y f'(t) \, dt = \frac{f(y)-f(x)}{y-x}$
-and so the (former) mean of the interval with weight $f'$ is the same as the (latter) mean with respect to $f$.<|endoftext|>
-TITLE: If $F$ is strictly increasing with closed image, then $F$ is continuous
-QUESTION [6 upvotes]: Let $F$ be a strictly increasing function on $S$, a subset of the real line. If you know that $F(S)$ is closed, prove that $F$ is continuous.
-
-REPLY [3 votes]: Let $f$ be any strictly increasing function on $S$. To show that $f$ is continuous on $S$, it is enough to show that it is continuous at $x$ for every $x \in S$. If $x$ is an isolated point of $S$, every function is continuous at $x$, so assume otherwise.
-The key here is that monotone functions can only be discontinuous in a very particular, and simple, way. Namely, the one-sided limits $f(x-)$ and $f(x+)$ always exist (or rather, the first exists when $x$ is not left-isolated and the second exists when $x$ is not right-isolated): it is easy to see for instance that
-$f(x-) = \sup_{y < x, \ y \in S} f(y)$.
-Therefore a discontinuity occurs when $f(x-) \neq f(x)$ or $f(x+) \neq f(x)$. In the first case we have that for all $y < x$, $f(y) < f(x-)$ and for all $y \geq x$, $f(y) > f(x-)$.
Therefore $f(x-)$ is not in $f(S)$. But by the above expression for $f(x-)$, it is certainly a limit point of $f(S)$. So $f(S)$ is not closed. The other case is similar.
-Other nice, related properties of monotone functions include: a monotone function has at most countably many points of discontinuity, and a monotone function is a regulated function in the sense of Dieudonné. In particular the theoretical aspects of integration are especially simple for such functions.
-Added: As Myke notes in the comments below, the conclusion need not be true if $f$
-is merely increasing (i.e., $x_1 \leq x_2$ implies $f(x_1) \leq f(x_2)$). A counterexample
-is given by the characteristic function of $[0,\infty)$.<|endoftext|>
-TITLE: What is the difference between a variety and a manifold?
-QUESTION [33 upvotes]: I hear people use these words relatively interchangeably. I'd believe that any differentiable manifold can also be made into a variety (which data, if I understand correctly, implicitly includes an ambient space?), but it's unclear to me whether the only non-varietable manifolds should be those that don't admit smooth structures. I'd hope there's more to it than that.
-I've heard too that affine schemes are to schemes as local coordinates are to manifolds, so maybe my question should be about schemes instead -- I don't even know enough to know...
-
-REPLY [31 votes]: In English (as opposed to French, in which language variety and manifold are synonyms) the word variety is short for algebraic variety. The main differences, then, between (algebraic) varieties and (smooth) manifolds are that:
-(i) Varieties are cut out in their ambient (affine or projective) space as the zero loci of polynomial functions, rather than simply as the zero loci of smooth functions. This gives them a more rigid structure. (Here I am thinking just of quasi-projective varieties; there are objects that people call varieties which can't be immersed into projective space, but there is no need to think about them when you are just learning the subject. Also, a manifold need not be regarded as lying in an ambient Euclidean space, but can always be immersed into one, and can then be thought of as being cut out as the zero locus of smooth
-functions.)
-The rigidity of varieties is reflected in the definition of isomorphism: we define two varieties to be isomorphic if we can find polynomial maps giving rise to mutually inverse bijections from one to the other, while two manifolds are isomorphic (i.e. diffeomorphic) if we can find smooth maps giving rise to mutually inverse bijections between them. It turns out, for example, that
-the only diffeomorphism invariant of a compact connected orientable surface is its genus $g$, while if we look at smooth connected projective curves over the complex numbers (which, when we forget the variety structure and just think of them as manifolds, are
-compact connected surfaces --- note that one complex dimension gives two real
-dimensions) then the genus is not a complete invariant. For a fixed genus $g \geq 2$,
-there is a $6g-6$-dimensional family of non-isomorphic curves of genus $g$.
-(When $g = 1$ there is a $2$-dimensional family, and when $g = 0$, the
-variety structure is actually uniquely determined. Also, by "dimension" here
-I mean real dimension; but these families have their own natural algebraic variety structures, of half the dimension --- i.e. there is a $3 g - 3$ dimensional variety parameterizing isomorphism classes of genus $g$ curves
-when $g \geq 2$.
Again the halving of dimension reflects the difference between real and complex dimension.)
-(ii) Varieties can admit singularities, whereas we stipulate that manifolds be non-singular (i.e. locally Euclidean). Here it is useful to think about the fact that the critical locus of a (collection of) smooth function(s) can be pretty nasty, and so if we consider the zero loci of smooth functions and allow singularities, we will allow extremely nasty singularities. On the other hand,
-the critical locus of a (collection) of polynomial(s) is not so bad (e.g. it is always of codimension at least one in the zero locus), and so allowing singularities in the theory turns out to be okay (and in fact to be more than okay; it turns out to be one of the more powerful features of algebraic geometry).<|endoftext|>
-TITLE: Inconsistent naming of elliptic integrals
-QUESTION [5 upvotes]: This may be a question whose answer is lost in the mists of time, but why is the elliptic integral of the first kind denoted as $F(\pi/2,m)=K(m)$ when that of the second kind has $E(\pi/2,m)=E(m)$? It's not very consistent! Aside from convention, is there anything stopping us from rationalising these names a little?
-
-REPLY [3 votes]: This discrepancy seems to come from the various conventions in defining the nome $q$ of the complex lattice $\Lambda_{\tau} = \mathbb{Z} \oplus \tau \mathbb{Z}$ as one of the four $e^{2 \pi i \tau}$, $e^{\pi i \tau}$, $e^{2 i \tau}$ or $e^{i \tau}$, where $\tau \in \mathbb{H}$ is the lattice parameter. Investigate the vast literature on theta functions, and you'll see exactly what I mean by the problem of too many "conventions".<|endoftext|>
-TITLE: How do you handle the floor and ceiling function in an equation?
-QUESTION [25 upvotes]: I tried to do some math in a blog post of mine and came upon an equation with a floor function. I wasn't sure how to deal with it, so I just ignored it, and then added the ceiling function in my final equation, as that seemed to give me the result I wanted. I'm wondering what the correct way of handling these functions in equations is.
-What I did was this:
-$$\begin{align}
-G(n) &= \left\lfloor n\log{\varphi}-\dfrac{\log{5}}{2}\right\rfloor+1 \\\\
-n\log{\varphi} &= G(n)+\dfrac{\log{5}}{2}-1 \\\\
-n &= \left\lceil\dfrac{G(n)+\dfrac{\log{5}}{2}-1}{\log\varphi}\right\rceil
-\end{align}$$
-How should I have done this in a correct way? How do I work with the ceiling and floor functions when I shuffle equations around?
-
-REPLY [3 votes]: Observe that
-\begin{eqnarray}
-G(n) = \left \lfloor n \log \varphi - \log \sqrt{5} \right \rfloor + 1 = \left \lceil n \log \varphi - \log \sqrt{5} \right \rceil
-\end{eqnarray}
-and write
-\begin{eqnarray}
-\left \lceil \frac{G(n)}{\log \varphi} + \log_{\varphi} \sqrt{5} \right \rceil & = & \left \lceil \tfrac{1}{\log \varphi} \left \lceil n \log \varphi - \log \sqrt{5} \right \rceil + \log_{\varphi} \sqrt{5} \right \rceil = n.
-\end{eqnarray}<|endoftext|>
-TITLE: Results related to The Happy Ending Problem
-QUESTION [11 upvotes]: I'm giving a small talk for a combinatorics class on the Erdős–Szekeres conjecture regarding the Happy Ending problem (the paper is focused on recent work regarding the conjecture).
I always find that when the audience is not up for a long technical proof and isn't familiar with the field (in this case combinatorial geometry), it is important to provide strong motivation for why the problem matters, and perhaps to mention a couple of interesting related results that have applications in other fields or to other problems.
-For the Happy Ending problem, I am aware of connections to the $n$-hole problem for points in general position, Ramsey theory (via the original paper by Erdős and Szekeres in 1935), and the generalization of the E&S theorem for convex bodies. But I'm not sure if these are good things to mention, since the class does not know any Ramsey theory, and convex bodies are probably too abstract to mention offhand.
-So my question is: does anyone know of other interesting results related to the Happy Ending problem and the corresponding theorem and conjecture? Or can you tell me succinctly why this conjecture is or is not important in mathematics?
-I will accept anything that is geometrically or combinatorially similar, directly related, or mentioned in a paper involving the E&S conjecture.
-The original paper from 1935 is a goldmine of things like the ordered pigeonhole principle, but I'm interested in things I haven't read yet.
-
-REPLY [4 votes]: You might look at this for some recent related work, and investigate its 14 references:
-"Every Large Point Set contains Many Collinear Points or an Empty Pentagon." CCCG 2009, 99-102. The main result is exactly what is stated in the title.<|endoftext|>
-TITLE: Surreal and ordinal numbers
-QUESTION [13 upvotes]: Is there a surjective map between the (class of) ordinal numbers On and the class No (Conway's surreal numbers), and is it constructible? In Conway's system we have, for example:
-$\omega_0 = < 0,1,2,3,... | > $
-and:
-$\epsilon = < 0 | 1, 1/2, 1/4, 1/8, ... > $
-(where $\epsilon$ is not the first uncountable ordinal, but the "reciprocal" of $\omega_0$). My question is thus: can you devise this for every ordinal, or does Conway's system eventually run "out of space"?
-
-REPLY [25 votes]: The existence of a bijection between the class of ordinals $On$ and the class of surreal numbers $No$ is independent of the axioms of set theory. There are several interesting possibilities:
-
-If ZFC is consistent, then there is a model of ZFC in which there is a definable such bijection. This is true in Goedel's constructible universe $L$, for example, for in $L$ there is a definable well-ordering of the universe, and we can use this well-ordering to well-order the surreals, which provides the desired bijection.
-More generally, there is a first-order definable bijection between $On$ and $No$ if and only if the axiom known as $V=HOD$ holds. For the one direction, if $V=HOD$ holds, then there is a definable well-ordering of the universe and hence in particular a definable well-ordering of the surreals. Conversely, under ZFC if there is a definable bijection between $On$ and $No$, then there is a definable well-ordering of $No$. This allows us to construct a definable well-ordering of the class of sets of ordinals, since any set of ordinals determines a transfinite binary sequence of some ordinal length, and we can interpret this sequence as a $\pm 1$ sequence, which determines a unique surreal number by climbing through the tree of left-right cuts. Thus, we can well-order the class of sets of ordinals.
But in ZFC every set is coded by a set of ordinals, and so we can construct a well-ordering of the entire universe, by looking for the least ordinal mapping to a surreal whose $\pm 1$ representation codes that set. So in this case, V=HOD holds.
-Another way to summarize this argument is to say that if you can well-order $No$---and this is what your bijection to $On$ amounts to---then you can well-order every class.
-If you drop the requirement that the bijection be definable, then we should move to the Goedel-Bernays context, in order to treat classes. The assertion that there is a bijection between $On$ and $No$ is equivalent over ZFC+GB to the axiom of Global Choice, which asserts that there is a well-ordering of the universe. This is by the same argument as above. (Note, we need AC for sets in order to make the last step of the argument; the class bijection in effect allows us to sew the set-sized well-orderings together into a class well-ordering.) Thus, the theory ZFC+GB+(your bijection) is equivalent to GBC.
-Because of this, if ZFC is consistent, then there are models of ZFC that have no bijection between $On$ and $No$, either definable or definable-from-parameters or otherwise. This is because it is known that ZFC does not imply global choice. One can construct such models by performing a class forcing iteration, adding a Cohen subset to every regular cardinal.
-Meanwhile, every model of ZFC has a class forcing extension in which there is a class well-ordering of the universe, simply by forcing to add a global well-ordering, and this forcing extension adds no new sets, only classes. In this sense, it is compatible with every model of ZFC set theory to have the desired bijection as a class, without adding any new sets.
-Further, every model of ZFC has a class forcing extension in which there is a definable bijection between $On$ and $No$, since we can force $V=HOD$. (This forcing, however, does add new sets.)
-
-Lastly, upon reading your question again, I see that you asked for a surjection from $On$ onto $No$, rather than a bijection. But these are equivalent, since if there is a surjection, then we can remove the redundant ordinals from the domain by only using the least ordinal that maps to a given surreal, and this gives a bijection from a proper class of ordinals to $No$. But every proper class of ordinals is in bijection with $On$ simply by collapsing to the order type of the predecessors.<|endoftext|>
-TITLE: Characterization of Almost-Everywhere convergence
-QUESTION [13 upvotes]: Given a $\sigma$-finite measure $\mu$ on a set $X$, is it possible to formulate a topology on the space of functions $f:X \rightarrow \mathbb{R}$ that gives convergence $\mu$-almost everywhere?
-I can't seem to find any way to write this and am suspecting that no such topology exists! Is this true? If so, is there some generalisation of a topological space where one can make sense of convergence without having open sets?
-Any comments, references or tips would be greatly appreciated.
-
-REPLY [2 votes]: I was looking for a proof myself and found this standard example.
-Let $[0,1]$ be given the Lebesgue measure. Almost-everywhere convergence on $L^\infty([0,1])$ cannot be induced by any topology, in particular not by a topological vector space structure.
-We construct a sequence $(f_n)$ of functions which converges in measure to zero but fails to converge a.e.: define the sequence $f_1^1, f_1^2, f_2^2, f_1^3, f_2^3, \dotsc$ where
- $$ f_m^n(x) =
- \begin{cases}
- 1 & \frac{m-1}{n} \leq x \leq \frac{m}{n} \\
- 0 & \text{otherwise}
- \end{cases} $$
-and $m$ is enumerated from 1 to $n$.
- So we have
- \begin{align*}
- f_1 &= 1_{[0,1]} \\
- f_2 &= 1_{[0,\frac{1}{2}]} , \quad f_3 = 1_{[\frac{1}{2}, 1]} \\
- f_4 &= 1_{[0,\frac{1}{3}]} , \quad f_5 = 1_{[\frac{1}{3}, \frac{2}{3}]}, \quad \dotsc
- \end{align*} Therefore $\mu(f_n \neq 0) \rightarrow 0$ as $n\rightarrow \infty$, hence $f_n$ converges in measure. Note that $\mu$ is a probability measure and $\sum_n \mu(f_n \neq 0) = \infty$; in fact every $x \in [0,1]$ lies in infinitely many of the intervals $[\frac{m-1}{n}, \frac{m}{n}]$, so $\mu(f_n \neq 0 \ \text{i.o.}) = 1$. Hence $f_n$ does not converge to zero a.e.
-Suppose a topology exists for a.e. convergence. Since $(f_n)$ fails to converge to zero, there must be a neighborhood $U_0$ of $0$ outside of which $f_n$ lies i.o. Let $(f_{n_k})$ be a subsequence of terms outside of $U_0$. Since this subsequence still converges in measure, it has a further subsequence that converges a.e. to zero. But this further subsequence is then eventually in $U_0$, contradicting the choice of $(f_{n_k})$ outside $U_0$. Therefore the topology cannot exist.<|endoftext|>
-TITLE: What is the simplest way to fathom the Monster Group?
-QUESTION [15 upvotes]: Can someone explain how to picture or construct the Monster Group?
-
-REPLY [19 votes]: I think the answer really depends on what you mean by "fathom".
-If you want a short construction of the Monster, there is a sketch by Conway in one of the later chapters of Sphere Packings, Lattices, and Groups (occasionally available on Google Books). The construction there goes through a nice progression of increasingly complicated exceptional objects, like the Golay code and the Leech lattice.
-If you want to understand a "natural" object on which the Monster acts by symmetries, you should read up about vertex algebras. At our current state of knowledge (and depending on who you ask), the most natural object on which the monster acts is the monster vertex algebra $V^\natural$ (also known as the moonshine module, or the monster VOA), which is a graded vector space, together with some extra structure like a rather complicated multiplication operation $V^\natural \otimes V^\natural \to V^\natural((z))$. The construction of $V^\natural$ is given in the book Vertex operator algebras and the Monster by I. Frenkel, Lepowsky, and Meurman. A string theorist might say that it is given by orbifolding the Leech lattice CFT.
-If you want to consider some basic facts about the Monster, you can have a look at the ATLAS of finite groups, or play around with the software GAP. Both have the character table, and orders of centralizers of elements, etc. Wilson showed that the monster is a Hurwitz group, so Ryan Budney's comment about acting on a Riemann surface holds for the minimum possible genus (about $10^{52}$).
-If you want to understand finer points about the structure of the Monster group, you're pretty much out of luck. It's big enough that there are plenty of explicit questions whose answers we don't know. For example, the conjugacy classes of homomorphisms from $\mathbb{Z} \times \mathbb{Z}$ (i.e., pairs of commuting elements) are not classified, and $H^4$ with coefficients in $\mathbb{Z}$ is still unknown (very annoying).<|endoftext|>
-TITLE: What happens to the 0 element in a Finite Group?
-QUESTION [7 upvotes]: So, I'm relearning Group Theory. And I got the axioms down, I think. So let's make a concrete example:
-
-The collection of positive integers less than 7: 1,2,3,4,5,6
-The • operation will be multiplication mod 7.
-Associativity holds.
-The identity e is 1.
-Every element has an inverse:
-
-1*? mod 7 = 1 --> 1
-2*? mod 7 = 1 --> 4
-3*? mod 7 = 1 --> 5
-4*? mod 7 = 1 --> 2
-5*? mod 7 = 1 --> 3
-6*? mod 7 = 1 --> 6
-
-But! What is the order of the group?! I thought the order would be 7. But there are 6 elements! So maybe I was wrong and 0 should be in the group.
-But 0 does not have an inverse! There is no x such that 0*x mod 7 = 1.
-So what am I misunderstanding here? Is it the definition of order? Is it some other trick about groups?
-
-REPLY [14 votes]: The only error is your belief that the order "should" be 7. The order of a finite group is just the number of elements in the group. Your group consists of the positive integers that are smaller than, and relatively prime to, 7. There are six of them, so your group has order 6.
-(I'm not sure why you thought the order should be 7...)
-Indeed, you cannot add $0$ to the mix and still have a group. If you consider the numbers $0,1,\ldots,6$ under multiplication modulo $7$ you do not get a group, you get a semigroup.
-Added: Ah, Jonas Meyer's reply suggests what is going on; since you say you are relearning Group Theory, you might have vague memories of the "group of integers modulo $n$" as having order $n$. The group of integers modulo $n$ under addition has order $n$; but the multiplicative group modulo $n$ consists of the positive integers less than, and relatively prime to, $n$, with the operation being multiplication modulo $n$, and has $\varphi(n)$ elements (Euler's phi function). When $n=7$ (the case you are looking at), the group has $\varphi(7)=6$ elements, as you observed.
-
-REPLY [10 votes]: You're right, the group has order 6 because it has six elements. You can make {0,1,2,3,4,5,6} a group with addition mod 7. This would be a group of order 7.
-
-REPLY [7 votes]: Good thinking. 0 is not in the group, so the order is 6. Under multiplication mod 7, 0 sits in another group by itself inside $\mathbb{Z}/7\mathbb{Z}$, namely the trivial group $\{0\}$ (one element). The two groups are disjoint.<|endoftext|>
-TITLE: Conditions for 2D random walk to return to origin
-QUESTION [5 upvotes]: I am seeking conditions on the distribution of the step sizes that guarantee that a random walk on the 2D lattice will return to the origin (with probability 1). Essentially, under what conditions can Pólya's theorem be proved? Certainly it holds if the steps are of size 1 (and equally probable in all directions), and I believe if the variance of the step sizes is finite, the return probability is still 1.
-But what about distributions with infinite variance, like the Lévy distribution?
-Does "Lévy flight" return to the origin? And more generally, are there conditions on the distribution that guarantee this return?
-This is likely all well known, and if so, pointers to the relevant literature would be appreciated.
-Thanks!
-
-REPLY [3 votes]: You are correct. The "crude" criterion for recurrence of a 2d random walk is $\mu=0$ and $\sigma^2<\infty$ for the jump distribution. The jump sizes are otherwise unrestricted.
-The "detailed" criterion involves the characteristic function $\phi$ of the jump distribution, i.e., its Fourier transform.
It says that a 2d random walk is transient or recurrent according as the real part of $(1-\phi(\theta))^{-1}$ is or is not Lebesgue integrable on a neighborhood of the origin.
-These results are from Section 8 of Spitzer's Principles of Random Walk (2e).
-Spitzer gives a detailed example of symmetric one-dimensional random walks, and shows that their recurrence or transience depends on the size of the tail of the jump distribution. That is, he supposes that
-$$0<\lim_{|x|\to\infty} |x|^{1+\alpha}P(0,x)=c<\infty,$$ and concludes that this walk is recurrent when $\alpha\geq 1$ and transient when $\alpha<1$.
-So, somewhat unexpectedly, there exist symmetric transient random walks in one dimension. Their jump distribution has such heavy tails that the walk leaps back and forth with large jumps and satisfies $\liminf_n X_n=-\infty$ and $\limsup_n X_n=+\infty$ without a guarantee of returning to the origin.
-It should be possible to adapt his arguments to the two-dimensional case.<|endoftext|>
-TITLE: Integrals of the square root of a cubic polynomial
-QUESTION [6 upvotes]: Say I have a function
-$$V(x)=A(x-x_1)(x-x_2)(x-x_3)$$
-where $x_1$, $x_2$, $x_3$ are the three roots in increasing order and $A$ is positive. Clearly $V(x)$ is positive at large $x > x_3$, negative between $x_2$ and $x_3$, and so on.
-Now, say I wish to evaluate
-$$\int_{x_2}^{x_3}\sqrt{-V(x)}\mathrm dx$$
-How do I do it?
-
-REPLY [15 votes]: Well, you can't integrate it in terms of elementary functions like the arctangent or the logarithm in general (unless there is some special configuration of your roots). Having an integrand that contains the square root of a cubic will result in the use of so-called "elliptic integrals".
-
-I'll evaluate the integral explicitly here for reference; most of the current computing environments seem to be unable to return reasonable expressions for elliptic integrals.
-For this solution, I will assume $A=1$; you can multiply the result of the following by $\sqrt{A}$ afterwards.
-Now, we consider first the indefinite integral (let's worry about the limits later)
-$$\int\sqrt{(x_3-x)(x-x_2)(x-x_1)}\;\mathrm dx,\qquad x_3 > x_2 > x_1$$
-The first step here is to perform an appropriate Möbius (rational) transformation, which is equivalent to projecting your cubic to a quartic or quadratic, which hopefully has a nice factorization. I'll skip the details on how to obtain it, and point out that the required substitution is
-$$x=\frac{x_2-x_1\frac{x_3-x_2}{x_3-x_1}u}{1-\frac{x_3-x_2}{x_3-x_1}u}$$
-Performing this substitution yields
-$$\frac{(x_3-x_2)^2(x_2-x_1)^2}{(x_3-x_1)\sqrt{x_3-x_1}}\int\frac1{\left(1-\frac{x_3-x_2}{x_3-x_1}u\right)^3}\sqrt{\frac{u(1-u)}{1-\frac{x_3-x_2}{x_3-x_1}u}}\mathrm du$$
-It is at this point that we require the services of the Jacobian elliptic functions $\mathrm{sn}(v|m)$, $\mathrm{cn}(v|m)$, and $\mathrm{dn}(v|m)$.
The elliptic function identities pertinent to this problem are the two Pythagorean relations
-$$\mathrm{cn}^2(v|m)+\mathrm{sn}^2(v|m)=1,\qquad \mathrm{dn}^2(v|m)+m\,\mathrm{sn}^2(v|m)=1$$
-and the differential relation
-$$\frac{\mathrm d}{\mathrm dv}\mathrm{sn}(v|m)=\mathrm{cn}(v|m)\mathrm{dn}(v|m)$$
-thus, letting $u=\mathrm{sn}^2(v|m)$ (and letting $\Delta$ be the constant in front to avoid clutter), we have
-$$2\Delta\int\frac{\mathrm{sn}(v|m)\mathrm{cn}(v|m)\mathrm{dn}(v|m)}{\left(1-\frac{x_3-x_2}{x_3-x_1}\mathrm{sn}^2(v|m)\right)^3}\sqrt{\frac{\mathrm{sn}^2(v|m)(1-\mathrm{sn}^2(v|m))}{1-\frac{x_3-x_2}{x_3-x_1}\mathrm{sn}^2(v|m)}}\mathrm dv$$
-and if we let $m=\frac{x_3-x_2}{x_3-x_1}$, both Pythagorean relations can be applied to yield
-$$2\Delta\int\frac{\mathrm{sn}^2(v|m)\mathrm{cn}^2(v|m)}{\mathrm{dn}^6(v|m)}\mathrm dv$$
-At this point, we now insert the proper limits for the definite integral by performing the inverses of the last two transformations on the limits $x=x_2$ and $x=x_3$. The new limits are seen to be $v=0$ and $v=K(m)=K\left(\frac{x_3-x_2}{x_3-x_1}\right)$, where $K(m)$ is the complete elliptic integral of the first kind. The definite integral is now
-$$2\Delta\int_0^{K(m)}\frac{\mathrm{sn}^2(v|m)\mathrm{cn}^2(v|m)}{\mathrm{dn}^6(v|m)}\mathrm dv$$
-Using formula 361.18 in Byrd/Friedman (and after much algebra and many tears), we finally arrive at the result
-$$\begin{align*}
-\frac2{15}\sqrt{x_3-x_1}&\left(2(x_1^2+x_2^2+x_3^2-x_1 x_2-x_2 x_3-x_1 x_3)E\left(\frac{x_3-x_2}{x_3-x_1}\right)-\right.\\
- &\left.\quad(x_2-x_1)(x_3-x_1+x_2-x_1)K\left(\frac{x_3-x_2}{x_3-x_1}\right)\right)
-\end{align*}$$
-where $E(m)$ is the complete elliptic integral of the second kind.
-
-Byrd/Friedman would be one of the best references on this subject; they have a comprehensive listing of formulae for reducing elliptic integrals to the Legendre-Jacobi forms.
-
-REPLY [3 votes]: This is an elliptic integral and can be expressed
-in terms of the complete elliptic integral of the second kind.<|endoftext|>
-TITLE: Solve trigonometric equation: $1 = m \; \text{cos}(\alpha) + \text{sin}(\alpha)$
-QUESTION [14 upvotes]: Dealing with a physics problem, I get the following equation to solve for $\alpha$:
-$1 = m \; \text{cos}(\alpha) + \text{sin}(\alpha)$
-Putting this in Mathematica gives the result:
-$a==2 \text{ArcTan}\left[\frac{1-m}{1+m}\right]$
-However I am unable to get this result myself. No matter what I try, whether ordinary algebraic manipulation or rewriting the equation with the complex exponential function, everything fails. Even going the other direction, from Mathematica's solution back to my original equation, resulted in nothing sensible.
-Any help with how to do this transformation is very much appreciated.
-Thanks in advance
-
-REPLY [5 votes]: Another:
-$$
-\begin{eqnarray}
-\frac{1-\sin\alpha}{\cos\alpha} &=& m \\
-\frac{1-\cos\left(\frac{\pi}{2}-\alpha\right)}{\sin\left(\frac{\pi}{2}-\alpha\right)} &=&m \\
-\tan\frac{\frac{\pi}{2}-\alpha}{2} = \tan\left(\frac{\pi}{4}-\frac{\alpha}{2}\right) &=& m \\
-\frac{\tan\frac{\pi}{4}-\tan\frac{\alpha}{2}}{1+\tan\frac{\pi}{4} \tan\frac{\alpha}{2}} = \frac{1-\tan\frac{\alpha}{2}}{1+\tan\frac{\alpha}{2}} &=& m \\
-\Longrightarrow \tan\frac{\alpha}{2} &=& \frac{1-m}{1+m}
-\end{eqnarray}
-$$<|endoftext|>
-TITLE: How to parameterize an orange peel
-QUESTION [8 upvotes]: I'm trying to parameterize the space curve determined by the boundary of a standard orange peel: for example, the one on this photo:
-
-For example, the ideal curve would be inside the unit cube; have only one point of intersection with every horizontal plane $z=k$, when $k\in [-1,1]$; would start at $(0, 0, -1)$ and end at $(0, 0, 1)$, wrapping itself around them; and touch the boundary of the cube when $z=0$.
-It's sort of a standard helix, compressed. I hope I was clear.
-
-REPLY [11 votes]: Well, you seem to have a lot of options; there are a number of spherical spirals that would do. The loxodrome is one (the spherical analogue of the equiangular spiral), and Seiffert's spiral is another.<|endoftext|>
-TITLE: Measurable set on which a function is bounded
-QUESTION [5 upvotes]: Let $f$ be in $L^{1}(\mathbb{R})$, $\mathbb{R}$ the real numbers. Show that for every $\varepsilon > 0$ there exists $A \subseteq \mathbb{R}$, measurable, such that $m(A) < \infty$, $f$ is bounded on $A$ and $ \int_{\mathbb{R}} |f| < \int_{A} |f| + \varepsilon$.
-If we take $A$ as the support of the simple function which approximates $f$ in the $L^{1}$ norm then this has finite measure and it satisfies the other conditions. But I don't see why $f$ must be bounded on it. Any ideas?
-Thank you.
-
-REPLY [7 votes]: You can do this as a "$2\epsilon$-proof" (or $\epsilon/2$ if you prefer). First, since $(\int_{-n}^n|f|)_{n=1}^\infty$ converges to $\int_{\mathbb{R}}|f|$, there is an $n$ such that $\int_{\mathbb{R}}|f|\lt\int_{-n}^n|f|+\epsilon/2$. Then, since $(\int_{-n}^n|f|\cdot\chi_{\{|f|\leq m\}})_{m=1}^\infty$ converges to $\int_{-n}^n|f|$, there is an $m$ such that $\int_{-n}^n|f|\lt\int_{-n}^n|f|\cdot\chi_{\{|f|\leq m\}}+\epsilon/2$. Take $A=[-n,n]\cap\{x:|f(x)|\leq m\}$.
-Rather than using simple functions to show this, I would use this as a first step to showing that $f$ can be approximated by simple functions, because now $f$ can be uniformly approximated by simple functions on $A$.
-
-REPLY [3 votes]: $|f|\,I_{|f|\leq n}$ increases to $|f|\,I_{|f| < \infty} = |f|$ a.e. (as $f$ is in $L^{1}(\mathbb{R})$), and the result follows from the monotone convergence theorem.
-Update: This answer is incomplete, please see Jonas's answer.<|endoftext|>
-TITLE: Probability of cumulative dice rolls hitting a number
-QUESTION [7 upvotes]: Is there a general formula to determine the probability of unbounded, cumulative dice rolls hitting a specified number?
-For example, with a D6 and 14:
-5 + 2 + 3 + 4 = 14 : success
-1 + 1 + 1 + 6 + 5 + 4 = 17 : failure
-
-REPLY [8 votes]: Assuming the order matters (i.e. 1+2 is a different outcome from 2+1),
-the probability of the cumulative total ever equalling $n$, with a die numbered $1,2,\dots,6$, is the coefficient of $x^n$ in
-$$\sum_{j=0}^{\infty}\left(\frac{x+x^2+x^3+x^4+x^5+x^6}{6}\right)^j = \frac{6}{6-x-x^2-x^3-x^4-x^5-x^6}$$
-Writing it as partial fractions (using the roots of $6-x-x^2-x^3-x^4-x^5-x^6=0$) or using Cauchy's integral formula to find the coefficient of $x^n$, Taylor series, etc. should work.<|endoftext|>
-TITLE: Prove a Levi-Civita connection gives $\nabla_XY(p)=\partial_t|_{t_0}[P^{-1}_{c_0,t_0,t}(Y(c(t)))]$ with $P$ parallel transport
-QUESTION [18 upvotes]: I'm having trouble with the following exercise in do Carmo's Riemannian geometry.
-Let $X$ and $Y$ be differentiable vector fields on a Riemannian manifold $M$. Let $p \in M$ and let $c: I \to M$ be an integral curve of $X$ through $p$, i.e. $c(t_0) = p$ and $\frac{dc}{dt} = X(c(t))$.
-Prove that the Riemannian connection of $M$ is
-$(\nabla_XY)(p) = \frac{d}{dt}\left(P^{-1}_{c,t_0,t}(Y(c(t)))\right)\Big|_{t=t_0}$
-where $P_{c,t_0,t}: T_{c(t_0)}M \to T_{c(t)}M$ is the parallel transport along $c$ from $t_0$ to $t$ (so $P^{-1}_{c,t_0,t}$ carries $T_{c(t)}M$ back to $T_{c(t_0)}M = T_pM$).
-I guess I don't have enough understanding of how to handle the parallel transport (since it is only given as the unique solution to a differential equation).
-Any hints would be greatly appreciated!
-Thank you,
-S. L.
-Edit: Do Carmo first introduced the notion of an affine connection
-$\nabla: \text{Vect}(M) \times \text{Vect}(M) \to \text{Vect}(M)$, $(X,Y) \mapsto \nabla_XY$,
-with the following properties:
-
-$\nabla_{fX + gY}Z = f\nabla_XZ + g \nabla_YZ$
-$\nabla_X(Y+Z) = \nabla_XY + \nabla_XZ$
-$\nabla_X(fY) = f\nabla_XY + X(f)Y$
-
-for $X,Y,Z \in \text{Vect}(M)$ and $f,g \in C^\infty(M)$.
-And then showed that there is a unique correspondence which associates to a vector field $V$ along the differentiable curve $c: I \to M$ another vector field $\frac{DV}{dt}$ along $c$, called the covariant derivative of $V$ along $c$, with three more properties:
-
-$\frac{D}{dt}(V+W) = \frac{D}{dt}V + \frac{D}{dt}W$
-$\frac{D}{dt}(fV) = \frac{df}{dt}V + f\frac{D}{dt}V$
-If $V$ is induced by a vector field $Y \in \text{Vect}(M)$, then $\frac{D}{dt}V = \nabla_{dc/dt}Y$
-
-Then he showed existence and uniqueness of the parallel transport along a curve, and went on to prove existence and uniqueness of the Levi-Civita connection.
-I hope this makes things clearer? Thanks for the quick answer!
-
-REPLY [17 votes]: Let $\{e_i\}\subseteq T_{c(t_0)}M$ be a basis. Define $E_i(t)$ as the parallel translation of $e_i$ along $c(t)$. Prove to yourself that $\{E_i(t)\}$ forms a basis of $T_{c(t)}M$ for all $t$ (hint: use the uniqueness part of solving linear ODEs).
-Now, we can write $Y(c(t)) = \sum_i a_i(t) E_i(t)$. Then, since the $E_i(t)$ are parallel, and since $P_{c,t_0, t}$ is linear, we have $P^{-1}_{c,t_0, t}(Y(c(t))) = \sum_i a_i(t) e_i$.
-From here, by actually computing the limit, it's not too hard to see that $\frac{d}{dt} P^{-1}_{c,t_0,t}Y(c(t))|_{t=t_0} = \sum_i a_i'(t_0)e_i$.
-Thus, the goal is to show that $\nabla_X Y(p)$ can also be written like this.
-But $\nabla_X Y(p) = \nabla_X \sum_i a_i(t)E_i(t)|_{t=t_0} = \sum_i a_i(t_0) \nabla_X E_i(t)|_{t=t_0} + \sum_i a_i'(t_0) E_i(t_0)$. (The second equality is just the Leibniz rule every connection must satisfy.) However, $\nabla_X E_i(t)$ is $\frac{D}{dt} E_i(t)$ and this is 0 since the $E_i$ are parallel.
Hence, the first term of the sum vanishes, so we get the desired result.<|endoftext|>
-TITLE: What are examples of mathematicians who don't take many notes?
-QUESTION [13 upvotes]: I see people like Terry Tao and others take extensive notes. But is this really necessary? When I do this I feel like I am rewriting a textbook.
-
-REPLY [34 votes]: About notes and rewriting: if the topic is important and non-trivial, it is all too easy to delude oneself, optimistically, into believing that one grasps all the points... if one is at all passive. To avoid this (as well as to have a perhaps more convenient personal archive), I think there is no substitute for an intensely (self-) critical rewrite.
-Yes, this is "expensive" in time and effort, so should NOT be allocated to trivial matters, to "exercises", or to things one doesn't care much about.
-Personally, if I've not rewritten something, I don't feel I understand it, except perhaps distantly, passively, as with gossip or hearsay. By the end, the ideal is to perceive the thing that cost effort as being, in fact, as trivial as possible, once one has adjusted one's viewpoint. It is important to remind oneself that this is not an indicator of wasted time or of foolishness, but of success in rewriting/rethinking. Then the significance of things that don't quite become trivial is vastly clearer.<|endoftext|>
-TITLE: Packing powder series, suspect it is easy, yet
-QUESTION [5 upvotes]: A powder can be compressed by packing it down. Each time it is packed down it loses $\frac{1}{2}$ then $\frac{1}{4}$ then $\frac{1}{8}$ ... etc. of its total volume.
-This powder is placed into a container of unit volume and packed down. Then the remaining space is filled with fresh powder and packed down again. The packing action acts on the powder multiple times. So, for example, here is the volume of powder after the first few packings:
-$(1)(\frac12) = \frac12$
-$(1)(\frac12)(\frac34) + (\frac12)(\frac12) = \frac58$
-$(1)(\frac12)(\frac34)(\frac78) + (\frac12)(\frac12)(\frac34) + (\frac38)(\frac12) = \frac{45}{64}$
-$(1)(\frac12)(\frac34)(\frac78)(\frac{15}{16}) + (\frac12)(\frac12)(\frac34)(\frac78) + (\frac38)(\frac12)(\frac34) + (\frac{19}{64})(\frac12)$
-$\vdots$
-I would like to know how much powder is in the container after it has been packed n times, in terms of n.
-
-REPLY [4 votes]: The fractions of its volume that a given batch of powder retains under successive packings are $(1-1/2),(1-1/4),\ldots$, and so the constant you're looking for (in the limit) is
-$\prod_{n=1}^{\infty} (1 - 2^{-n}) \approx 0.288788095086602$
-This is also the limiting probability that an $n \times n$ matrix over $GF(2)$ be regular.
-Denote the exact result for $n$ by $R_n$. We want to estimate the error $R_n - R_\infty$. Note that
-$R_n = \frac{R_\infty}{\prod_{m=n+1}^{\infty} (1 - 2^{-m})}$
-The logarithm of the denominator is
-$-\Theta(\sum_{m=n+1}^\infty 2^{-m}) = -\Theta(2^{-n})$
-and so the denominator itself is $1 - \Theta(2^{-n})$. Thus
-$R_n = R_\infty (1 + \Theta(2^{-n})) = R_\infty + \Theta(2^{-n})$
-
-The product can be converted to a sum (exercise!)
-$R_\infty = \sum_{n=2}^\infty \frac{(-1)^n}{\prod_{k=2}^n (2^k-1)} = \frac{1}{3} - \frac{1}{3\cdot 7} + \frac{1}{3 \cdot 7 \cdot 15} - \cdots$
-Using this sum, it's easy to prove that this constant is irrational (hint: same as the usual proof that $e$ is irrational using its series representation $\sum_{n=0}^\infty 1/n!$).
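-(As a numeric sanity check, here is a small Mathematica sketch; the helper s[k] is ad hoc, just the $k$-th partial sum of the series above.)
-N[Product[1 - 2^-n, {n, 1, Infinity}], 15]   (* 0.288788095086602 *)
-s[k_] := Sum[(-1)^n/Product[2^j - 1, {j, 2, n}], {n, 2, k}]
-Table[s[k], {k, 2, 6}]   (* {1/3, 2/7, 13/45, 188/651, 5731/19845} *)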
-You can also obtain excellent rational approximations this way: -$1/3, 2/7, 13/45, 188/651, 5731/19845, \ldots$ -Since the series is Leibniz, you know that the error after summing $k$ terms is at most the next term. For example, $5731/19845$ is at most $1/78129765$ too large than the actual constant. - -REPLY [2 votes]: Not much help, but the limiting packing density is a factor 3.462746619. The inverse symbolic calculator finds the compression (limiting volume of the initial unit volume) as 0.2887880950866024 with the correct formula. - -REPLY [2 votes]: The best I can come up with is horribly recursive (not so bad if you store intermediary values, as mine does below), but it would work to use Mathematica or some other software to calculate. -Define $f(0) = 0$. -$$f(n)=\displaystyle\sum_{i=0}^{n-1}\bigg(\displaystyle\prod_{j=1}^{n-i}(1-\frac{1}{2^j})\bigg)(1-f(i))$$ -Mathematica code: -f[0] = 0; -f[n_] := f[n] = Sum[Product[(1 - 1/2^j), {j, 1, n - i}] (1 - f[i]), {i, 0, n - 1}]<|endoftext|> -TITLE: Counterintuitive PDE -QUESTION [7 upvotes]: After thinking about it for a while and consulting other students, no one seems to be able to find an example of the following: -Given the PDE -$\dfrac{\partial f}{\partial x} = 0 \quad $ on $U = { (x,y) \in \mathbb R^2 ; y>0, 1 < x^2 + y^2 < 4}$ -I am looking for a solution $f$ that does not only depend on $y$. -How can this be?! -The exercise is taken form Lee's "Introduction to smooth manifolds", p. 517 at the end of the chapter on the Frobenius theorem. -(Note: According to the errata, the condition on $U$ is $y > 0$, not $x > 0$ as your copy of the book might state). -Thanks in advance! -S. L. - -REPLY [5 votes]: I will use the corrected version mentioned by Douglas, i.e. $U$ will be the domain defined by $y>0$ and $1< x^2 +y^2< 4$. Consider a $C^\infty$ function $\phi (y)$ which is equal to $1$ for negative $y$, $0$ for $y\geq 1/2$ and strictly decreasing for $0 < y < 1/2$. -The required function $f$ is then defined by: -$f(x,y) =-\phi (y)$ if $(x,y)\in U$ and $x\leq 0$ -$f(x,y) =+\phi (y)$ if $(x,y)\in U$ and $x\geq 0$. -It does not only depend on $y$ since $f(-3/2,1/4)<0$ and $f(+3/2,1/4)>0$. Nevertheless we do have $\dfrac {\partial f}{\partial x}=0$ -PS The Mean Value Theorem doesn't apply because the segment joining the two points $(-3/2,1/4)$ and $(+3/2,1/4)$ (for example) is not entirely included in $U$.<|endoftext|> -TITLE: continuous rubik's cube circle drawing -QUESTION [6 upvotes]: Imagine that we have something like the rubik's cube on $U=I^{3}=[0,1]^{3}$, and that we say that all the points on the xy, yz, and zx faces are black, and that all the points on the other three faces are white. -Define a legal move to be a rotation by an integral multiple of $\frac{\pi}{4}$ around one of the lines going through the center of each pair of opposite faces, for any square subset of $U$ orthogonal to the respective line. Note that there are an uncountable number of them on each axes. Points retain their color after legal moves. -Is there a sequence of legal moves that will produce a circle -- some points one color and some points another color? -- on one of the faces of $U'$? (where $U'$ is meant to denote $U$ after said sequence of moves has been performed?) 
-addition: I strongly suspect that no single countably infinite set of legal moves works, just on the basis of cardinality arguments -- a circle would have a continuum of points, and any countably infinite sequence of legal moves would only be able to move a countably infinite set of points. - -REPLY [5 votes]: What a great question! -Here is my solution. I am aiming for a black circle inscribed on a -white face (where the boundary edge points are invisible, -as mentioned in the comments). -Pick one of the white faces to be the desired final face F. -Now, consider any given horizontal row on that face, which -is all white. We want to add two black dots to such a row (except for top/bottom/center, which are handled already by the edge point color condition). These rows come in upper/lower symmetric dual -pairs. Call such a pair the -current working pair of rows. We make a quarter turn to -each of these working rows to the side. Now, operating on -that side face S, we make two moves parallel to F, which -will not upset F, in order to make the working rows on S -have exactly two black dots each in the correct positions. -This is possible since the squares containing the working -rows have not been moved previously, and the corresponding -columns containing the desired black dots have also not -been moved previously, since those dots will only be moved -for this pair of working rows. That is, it is precisely the -black dot opposite the dual white dot that we desire, and -by working in dual pairs, we put all four black dots into -position with two moves. (Is it clear?) Thus, on the side -face, we have the two working rows looking exactly like we -want. So we turn them each a quarter turn back to the main -face F, and proceed with another pair of working rows. The -center row and top/bottom rows do not need adjusting, -because of the invisible edge points. So this seems to do -it! -That said, there are problematic issues in setting up your -problem. This kind of problem, where one wants to describe -a task involving infinitely many steps, is known as a -supertask, and is -open to a number of interesting paradoxes and problematic -issue, some of which I describe in this MathOverflow -answer. -In this problem, we can imagine carrying out my solution in -a transfinite sequence of moves. The fact that each row on the desired face is -handled with finitely many moves makes my particular -solution less problematic than other situations one can -imagine. In general, for example, if a square has been -moved infinitely many times, what position is it in at the -limit? You haven't said, and it isn't clear what it should -be. But solutions in which every given square is moved only -finitely often seem to avoid this problem, particularly when the order that the finite sequences of moves are made in doesn't matter. -Edit. I notice now that you didn't say that the diameter of the circle should be the same as the diameter of the square. You can easily modify my solution for any size circle on any face. In this case, you do have to handle the top/bottom/middle rows of the circle, but this is no problem and is handled the same as the other rows. 
The top/bottom rows of the circle form a dual pair of working rows just like the others, and the center row has no dual, and is handled by itself.<|endoftext|> -TITLE: Find the sum of all the multiples of 3 or 5 below 1000 -QUESTION [47 upvotes]: How to solve this problem, I can not figure it out: -If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. -Find the sum of all the multiples of 3 or 5 below 1000. - -REPLY [31 votes]: First of all, stop thinking on the number $1000$ and turn your - attention to the number $990$ instead. If you solve the problem - for $990$ you just have to add $993, 995, 996$ & $999$ to it for - the final answer. This sum is $(a)=3983$ -Count all the #s divisible by $3$: From $3$... to $990$ there are - $330$ terms. The sum is $330(990+3)/2$, so $(b)=163845$ -Count all the #s divisible by $5$: From $5$... to $990$ there are - $198$ terms. The sum is $198(990+5)/2$, so $(c)=98505$ -Now, the GCD (greatest common divisor) of $3$ & $5$ is $1$, so the - LCM (least common multiple) should be $3\times 5 = 15$. -This means every number that divides by $15$ was counted twice, - and it should be done only once. Because of this, you have an - extra set of numbers started with $15$ all the way to $990$ that - has to be removed from (b)&(c). -Then, from $15$... to $990$ there are $66$ terms and their sum is - $66(990+15)/2$, so $(d)=33165$ -The answer for the problem is: $(a)+(b)+(c)-(d) = 233168$ -Simple but very fun problem.<|endoftext|> -TITLE: Stalks on Projective Scheme -QUESTION [7 upvotes]: Let $k$ be an algebraic closed field. Let $x$ be a point in $X=P_k^1$. What is $O_{X,x}$? -For example, if I have $x=(t-a)\in \text{Spec }k[t]$. Looking $x$ inside $P_k^1$, does $O_{X,x}=k[t]_{(t-a)}$? I'm confused when I have to deal with the sheaf of rings. - -REPLY [5 votes]: For topological space $X$, open subset $U \subset X$ and point $x\in U$ we have for all sheaf $F$ on $X$ : $F_x = (F|U)_x$. Apply to $X=\mathbb P ^1, U=\mathbb A^1, x=(t-a)$<|endoftext|> -TITLE: Proving $\int_{0}^{\infty} \mathrm{e}^{-x^2} dx = \frac{\sqrt \pi}{2}$ -QUESTION [208 upvotes]: How to prove - $$\int_{0}^{\infty} \mathrm{e}^{-x^2}\, dx = \frac{\sqrt \pi}{2}$$ - -REPLY [3 votes]: Here's a proof that only requires elementary, but clever, calculus manipulations. -Let $I_A = \int_0^A e^{-x^2}dx$. 
-$$\begin{split} I_A^2 &= \int_0^A \int_0^A e^{-(x^2+y^2)}dxdy\\ -&= \int_0^A \int_0^A \sum_{n\geq 0} \frac{(-1)^n}{n!}(x^2+y^2)^ndxdy\\ -&=\sum_{n\geq 0} \frac{(-1)^n}{n!}\int_0^A \int_0^A\sum_{k=0}^n{n \choose k}x^{2k}y^{2n-2k}dxdy\\ -&=\sum_{n\geq 0} \frac{(-1)^n}{n!}\sum_{k=0}^n{n \choose k}\frac{A^{2k+1}}{2k+1}\frac{A^{2n-2k+1}}{2n-2k+1}\\ -&= \sum_{n\geq 0} \frac{(-1)^n}{n!}A^{2n+2}\sum_{k=0}^n{n \choose k}\frac{1}{2k+1}\frac{1}{2n-2k+1} -\end{split}$$ -Now, note that -$$\begin{split} -\sum_{k=0}^n{n \choose k}\frac{1}{2k+1}\frac{1}{2n-2k+1} &= \frac 1 {2n+2}\sum_{k=0}^n{n \choose k}\left (\frac{1}{2k+1}+\frac{1}{2n-2k+1}\right)\\ -& =\frac 1 {n+1}\sum_{k=0}^n{n \choose k}\frac{1}{2k+1} -\end{split} -$$ -Thus, -$$\begin{split} I_A^2 &= -\sum_{n\geq 0} \frac{(-1)^n}{(n+1)!}A^{2n+2}\sum_{k=0}^n{n \choose k}\frac{1}{2k+1}\\ -&= \sum_{n\geq 0} \frac{(-1)^n}{(n+1)!}A^{2n+2}\sum_{k=0}^n{n \choose k}\int_0^1x^{2k}dx\\ -&= \sum_{n\geq 0} \frac{(-1)^n}{(n+1)!}A^{2n+2}\int_0^1(1+x^2)^ndx\\ -&= \int_0^1\frac 1 {1+x^2}\sum_{n\geq 0}\frac{(-1)^n}{(n+1)!}A^{2n+2}(1+x^2)^{n+1}dx\\ -&= \int_0^1\frac{1-e^{-A^2(1+x^2)}}{1+x^2}dx\\ -&= \frac{\pi}4 + \mathcal O\left(e^{-A^2}\right) -\end{split}$$ -Taking the limit as $A\rightarrow+\infty$ yields the result.<|endoftext|> -TITLE: Norm of a symmetric matrix? -QUESTION [16 upvotes]: Say I have a symmetric matrix. I have the concept of 2-norm as defined on wikipedia. Now I want to prove (disprove?) that the norm of a symmetric matrix is maximum absolute value of its eigenvalue. I would really appreciate if this can be done only using simple concepts of linear algebra. -I am quite new to mathematics. - -REPLY [14 votes]: Here is a simple explanation not necessarily from linear algebra. We have -$$\|A\|_2=\max_{\|x\|=1}\|Ax\|$$ -where $\|\cdot\|$ is simple euclidean norm. This is a constrained optimisation problem with Lagrange function: -$$L(x,\lambda)=\|Ax\|^2-\lambda(\|x\|^2-1)=x'A^2x-\lambda(x'x-1)$$ -here I took squares which do not change anything, but makes the following step easier. -Taking derivative with respect to $x$ and equating it to zero we get -$$A^2x-\lambda x=0$$ -the solution for this problem is the eigenvector of $A^2$. Since $A^2$ is symmetric, all its eigenvalues are real. So $x'A^2x$ will achieve maximum on set $\|x\|^2=1$ with maximal eigenvalue of $A^2$. Now since $A$ is symmetric it admits representation -$$A=Q\Lambda Q'$$ -with $Q$ the orthogonal matrix and $\Lambda$ diagonal with eigenvalues in diagonals. For $A^2$ we get -$$A^2=Q\Lambda^2 Q'$$ -so the eigenvalues of $A^2$ are squares of eigenvalues of $A$. The norm $\|A\|_2$ is the square root taken from maximum $x'A^2x$ on $x'x=1$, which will be the square root of maximal eigenvalue of $A^2$ which is the maximal absolute eigenvalue of $A$.<|endoftext|> -TITLE: How can I determine asymptotic growth of binomial coefficients? -QUESTION [6 upvotes]: Say I have a binomial coefficient $y=\binom{5n+3}{n+2}$ or $y=\binom{n^2+4}{3n}$ something of the sorts in terms of the variable $n$. How can I determine $f$ so that $y = O(f)$? -Is there a general method for this sort of thing. I have only ever seen $O$-notation in a computer science setting and its treatment was not concerned with deriving anything except the bounds for "simple" functions like logarithms and polynomials. As a side question, what would be a good resource to learn further methods to find $O$ of more complex functions? I understand the formal definition, but Im unsure how to use it practically. 
-My apologies if this is a stupid question. Its not homework. - -REPLY [2 votes]: The answer to your second question is to read Graham, Knuth, and Patashnik's Concrete Mathematics, at least for simpler practical examples. For complicated sequences it can be extremely difficult and in many cases open to find asymptotics, e.g. for Ramsey-theoretic sequences, so you'll have to narrow the scope of your question a little to get a reasonable answer.<|endoftext|> -TITLE: $\sin(n)$ subsequence limits set -QUESTION [13 upvotes]: We were given a challenge by our calculus professor and I've been stuck on it for a while now. -Show that the set of subsequence limits of $A_n=\sin(n)$ is $[-1, 1]$ (another way to phrase this would be: $\forall r\in [-1,1]$ there exists a subsequence of $A_n=\sin(n)$ that converges to $r$). -What would be a good way to start? - -REPLY [2 votes]: Suppose you want to find positive integers $n, k$ such that $|n - 2 \pi k - \arcsin r| < \frac{1}{m}$ for some large positive integer $m$ and $r \in [-1, 1]$. Then I claim that some value of $n$ less than $2\pi m$ works. This is the pattern I was trying to get you to see, although I wasn't doing a very good job of it (I should've asked you to plot $k$ instead of $n$). -Unfortunately I think this is a little hard to prove except when $r = 0$; fortunately, using the second half of Marek's argument you can prove the same result with a weaker bound on $n$ for general $r$ using the case of $r = 0$, so I would encourage you to try that case first. To do this, try plotting the fractional parts of the numbers $\frac{n}{2\pi}$ in $[0, 1)$.<|endoftext|> -TITLE: equilibrium distribution, steady-state distribution, stationary distribution and limiting distribution -QUESTION [16 upvotes]: I was wondering if equilibrium distribution, steady-state distribution, stationary distribution and limiting distribution mean the same thing, or there are differences between them? -I learned them in the context of Discrete-time Markov Chain, as far as I know. Or do they also appear in other situations of stochastic processes and probability? -From the Wikipedia page for Markov Chain, it seems not very clear to me how to define and use these concepts. -Thanks! - -REPLY [10 votes]: (1) You forgot one! In the index to Gregory Lawler's book Introduction to Stochastic Processes (2nd edition) we find - -equilibrium distribution, see invariant distribution -stationary distribution, see invariant distribution -steady state distribution, see invariant distribution - -All this terminology is for one concept; a probability distribution that satisfies $\pi=\pi P$. In other words, if you choose the initial state of the Markov chain with distribution $\pi$, then the process is stationary. I mean if $X_0$ is given distribution $\pi$, then $X_n$ has distribution $\pi$ for all $n\geq 0$. Such a $\pi$ exists if and only if the chain has a positive recurrent state. An invariant distribution need not be unique. For example, if the Markov chain has $n<\infty$ states, the collection $\{\pi: \pi=\pi P\ \}$ is a non-empty simplex in $\mathbb{R}^n$ whose extreme points (corners) correspond to recurrent classes. -(2) The concept of a limiting distribution is related, but not exactly the same. Suppose that $\pi_j:=\lim_n P_{ij}^n$ exists and doesn't depend on $i$. These are called limiting probabilities and the vector $\pi:=(\pi_1,\dots,\pi_n)$ will satisfy $\pi=\pi P$. So a limiting distribution (if it exists) is always invariant. 
Limiting probabilities exist when the chain is irreducible, positive recurrent, and aperiodic. -A typical case when the limiting distribution fails to exist is when the chain is periodic. For instance, for the two state chain with transition matrix $P=\pmatrix{0&1\cr 1&0}$ the unique invariant distribution is $\pi=(1/2,1/2)$, but -$P_{ij}^n$ alternates between $0$ and $1$ so fails to converge. -I'm not sure that all authors use these terms in the same way, so you want to be careful when reading other books.<|endoftext|> -TITLE: What is special about elementary recursive arithmetic? -QUESTION [6 upvotes]: In several "proof-theory-for-dummies"-type texts I have read, $I\Delta_0+Exp$ or a theory equivalent to it shows up as a "base" theory, though in what sense it is minimal is not clearly addressed. -I realise this is a vague question but if I had the knowledge to make it more specific, I wouldn't need to ask it in the first place. - -REPLY [6 votes]: The main use of this theory is that it is strong enough to prove many truly basic facts about natural numbers, while still being weak enough that it carries little "baggage". -$I\Delta_0 + \text{Exp}$ is strong enough to define functions that encode and decode finite sequence of numbers (Goedel numbering) and, similarly, it can encode and decode finite sets of numbers as single numbers. This makes it possible to formalize many basic number-theoretic and proof-theoretic results in this theory. Those results, in turn, make it possible to use $I\Delta_0 + \text{Exp}$ as a base over which we can prove stronger statements are equivalent. -Harvey Friedman has made a somewhat polemical "grand conjecture" that every fact of elementary number theory proven in Annals of Mathematics can be proven in $I\Delta_0 + \text{Exp}$. This includes, for example, Fermat's Last Theorem. This is not to say that the original proofs will work - the proofs in $I\Delta_0 + \text{Exp}$ may be much more lengthy and complex. But the fact that this conjecture is even plausible speaks to the strength of $I\Delta_0 + \text{Exp}$. (The status of FLT here is not known.) -$I\Delta_0 + \text{Exp}$ is very weak in other senses. Its proof theoretic ordinal is $\omega^3$, which is extremely smaller than the ordinal of Peano arithmetic and even much smaller than the ordinal of the "next" canonical theory, $I\Sigma_1$. This is philosophically relevant in certain instances where we wish to keep our assumptions relatively weak (in terms of their proof-theoretic ordinal). -$I\Delta_0 + \text{Exp}$ is not "minimal" in a formal sense. We could work with ad hoc theories that are weaker, but still include the facts of number theory we need (perhaps as axioms). However, $I\Delta_0 + \text{Exp}$ has a relatively concrete and natural definition, which makes it more attractive to work with than some concocted system. (Of course, it may be that $I\Delta_0 + \text{Exp}$ is equivalent to some axiom $T$ over some even weaker base theory, in which case $I\Delta_0 + \text{Exp}$ would be minimal as an extension of the base theory that proves $T$.) -Another commonly used weak theory is $I\Sigma_1$. This theory extends $I\Delta_0 + \text{Exp}$ (modulo some definitional extensions) but $I\Sigma_1$ is still relatively weak (its ordinal is $\omega^\omega$). 
Many basic facts of number theory are easier to prove in $I\Sigma_1$, making it more convenient than $I\Delta_0 + \text{Exp}$ for various purposes, at the cost of a larger proof theoretic ordinal.<|endoftext|> -TITLE: About a weighted sum of hitting times for random walks on graphs -QUESTION [8 upvotes]: Consider a random walk on an undirected, non-bipartite graph. Let $\pi$ be the stationary distribution of this process, and let the hitting time $H(s,t)$ be the expected time until a walk beginning at node $s$ reaches node $t$. -I learned from Random walks on graphs: a survey, by L. Lovasz, that the quantity -$$ \sum_{t} \pi(t) H(s,t)$$ -is independent of $s$. In the Lovasz survey, this falls out as a byproduct of a lengthy calculation. I have two questions: -(i) Is there a simple (calculation-free, combinatorial) proof of this statement? -(ii) In what generality does this statement hold? Random walks on undirected graphs are a very particular kind of Markov process. What conditions on the probability transition matrix of the process do you need for the above to hold? - -REPLY [6 votes]: This holds for any irreducible finite state Markov chain. I'm not sure about a pure combinatorial proof, but it can be shown quite quickly. -Let $M(t)$ be the expected time the chain takes to return to its initial state, if started at $t$. As it spends, on average, a time $M(t)-1$ outside the state $t$ after each visit to $t$, the fraction of its time spent at $t$ is $1/M(t)$. So, the stationary distribution is $\pi(t)=1/M(t)$ (this is standard, see Wikipedia). -Now $X_n$ be a Markov chain with the given transition matrix and, for a fixed state $t$, consider the process $H(X_n,t)$. This is just the expected time remaining before it next hits $t$. On average, at each step, this will decrease by 1 when $X_n\not=t$ and increase by $M(t)-1=1/\pi(t)-1$ when $X_n=t$. So, $Y_n\equiv\sum_tH(X_n,t)\pi(t)$ stays the same on average across each step, regardless of the value of $X_n$. That, is it is a martingale. Consider what happens when $Y_n$ reaches its maximum value. As it can't increase further and its expected value remains constant, $Y_n$ must remain constant. So, $\sum_tH(s,t)\pi(t)$ is constant. - -The bit above the line answers your question. However, the calculations in the linked paper follow through for all irreducible Markov chains. In particular the formula (3.3) holds. This is the product of the lengthy calculation that you mention. I would like to show this now, which will involve going through some calculations. Let $M_{ij}=\mathbb{P}(X_{n+1}=j\vert X_n=i)$ be the transition matrix, $H_{ij}=H(i,j)$ be the hitting time matrix, $I$ be the identity matrix, $J_{ij}=1$ be the matrix of ones, and $F$ be the diagonal matrix with entries being the mean return times $F_{ii}=1/\pi(i)$. The description above of the average change in $H(X_n,t)$ across each time step can be written in matrix form -$$ -(M-I)H=F-J.\qquad\qquad{\rm(1)} -$$ -Note that $F\pi=J\pi={\bf 1}$. So, (1) gives $(M-I)H\pi=0$, which is just the martingale property described above. The only eigenvalues of M with eigenvector 1 are proportional to ${\bf 1}$. So, $H\pi$ is proportional to ${\bf 1}$. -We can solve (1). Let $M^*={\bf 1}\pi^{\rm T}$. Then $MM^*=M^*M=M^*$ and $M^*F=M^*J=J$. Trying $A=(I-M+M^*)^{-1}(J-F)$ in place of H in (1), -$$ -\begin{align} -(M-I)A &=(I-M+M^*)^{-1}(M-I)(J-F)\\\\ -&=(I-M+M^*)^{-1}(M-M^*-I)(J-F)\\\\ -&=F-J. 
-\end{align} -$$ -Also, the only square matrices satisfying $(M-I)A=0$ are the ones with constant columns, so of the form ${\bf 1}v^{\rm T}$ for a vector v. So, (1) has the general solution -$$ -H = (I-M+M^*)^{-1}(J-F)+{\bf 1}v^{\rm T} -$$ -and, using the condition $H_{ii}=0$ gives -$$ -v_i=\left((1-M+M^*)^{-1}(F-J)\right)_{ii}. -$$ -Next, $J\pi=F\pi={\bf 1}$ so, -$$ -H\pi = (v^{\rm T}\pi){\bf 1} -$$ -has constant entries of value -$$ -\begin{align} -v^{\rm T}\pi &= \sum_i\left((I-M+M^*)^{-1}(F-J)\right)_{ii}\pi_i\\\\ -&= {\rm Tr}\left[(I-M+M^*)^{-1}(F-J)F^{-1}\right]\\\\ -&= {\rm Tr}\left[(I-M+M^*)^{-1}(I-M^*)\right]. -\end{align} -$$ -If the eigenvalues of M are $\lambda_1,\ldots,\lambda_n$ with $\lambda_1=1$, then it can be seen that the corresponding eigenvalues of $(I-M+M^*)^{-1}(I-M^*)$ are $0,(1-\lambda_2)^{-1},(1-\lambda_3)^{-1},\ldots,(1-\lambda_n)^{-1}$ giving -$$ -(H\pi)_i=\sum_{j=2}^n\frac{1}{1-\lambda_j}. -$$ -This is independent of i, and is the same as equation (3.3) in the linked paper.<|endoftext|> -TITLE: How to solve a system of linear equations modulo $8$? -QUESTION [6 upvotes]: I encountered a set of linear equations with modulo in only $2$ variables -$$(a_{11}x + a_{12}y) \mod 8 = b_1$$ -$$(a_{21}x + a_{22}y) \mod 8 = b_2$$ -In the case of simple equations without modulo, the solution of $Ax=b$ is $x=A^{-1}b$, but how to handle this modulo? Any clue or pointers will be very helpful. - -REPLY [3 votes]: You can apply Cramer's rule if the determinant is odd (so invertible mod 8). For an introduction to linear algebra over commutative rings see Wm. C. Brown: Matrices over commutative rings.<|endoftext|> -TITLE: Number of terms in a trinomial expansion -QUESTION [6 upvotes]: According to Wikipedia, the number of terms in $(x+y+z)^{30}$ is $496$. I'm assuming this is before like terms are added up. How many terms would there be if like terms were combined? How would I go about figuring that out? - -REPLY [11 votes]: No, the 496 is the number of terms after like terms are combined. Before like terms are combined there are $3^{30}$ terms. This is because you have 30 different factors, and so the number of terms you get before combining is the number of ways to choose 30 elements when there are three choices for each. -Zaricuse's answer is hinting at how to derive the formula on the Wikipedia page. -Here's another way to look at the formula on the Wikipedia page: The number of terms in the expansion of $(x+y+z)^n$ after combining is the number of ways to choose $n$ elements with replacement (since you can choose $x,y,z$ more than once) in which order does not matter from a set of 3 elements. This formula is known to be -$$\binom{3+n-1}{n} = \binom{n+2}{n} = \frac{(n+1)(n+2)}{2}.$$ -See, for example, MathWorld's entry on Ball Picking. - -REPLY [5 votes]: First note that every $x^ay^bz^c$ with $a + b + c = 30$ appears. One way of seeing this is observing ${\partial^{30}(x+y+z)^{30} \over \partial_x^a\partial_y^b\partial_z^c} $ at $(0,0,0)$ is $30!$, which is not zero. So for a given value of $a$, every possible $(b,c)$ with $b + c = 30 - a$ can happen. Since $b$ goes from $0$ to $30 - a$, there are $31 - a$ possibilities for $b$, each of which forces $c$ to have the single value $30 - a - b$. Thus for a given $a$ there are -$31 - a$ different possible $(b,c)$. Adding this over all $a$ from 0 to 31 this gives the sum of $31 + 30 + ... + 0 = 496$.<|endoftext|> -TITLE: Find the second derivative of some implicit function? 
-QUESTION [5 upvotes]: I have a function given implicitly, you know. X and Y on both sides. Then it says, assume y = y(x). That's fine. I should be able to find y'(0), but what about y''(0)? How do you treat the dy/dx parts when taking the second derivative? -Edit: I would also like to follow the tip in the book, that says when I'm after actual values. We can just insert the value instead solving for dy/dx. - -REPLY [10 votes]: If you can use partial derivatives, then you can do the following: -First you find $dy/dx$, say $$\frac{dy}{dx}=g(x,y).$$ Then by chain rule -$$\frac{d^2y}{dx^2}=\frac{\partial g}{\partial x}+\frac{\partial g}{\partial y}\frac{dy}{dx}=\frac{\partial g}{\partial x}+\frac{\partial g}{\partial y}g(x,y).$$<|endoftext|> -TITLE: Calculating the integral $\int_0^\infty \frac{\cos x}{1+x^2}\, \mathrm{d}x$ without using complex analysis -QUESTION [109 upvotes]: Suppose that we do not know anything about the complex analysis (numbers). In this case, how to calculate the following integral in closed form? -$$\int_0^\infty\frac{\cos x}{1+x^2}\,\mathrm{d}x$$ - -REPLY [2 votes]: I am surprised nobody has written about Schwinger's method, which can be used to evaluate loop integrals in quantum field theory, and which nicely computes this integral in combination with Glasser's master theorem. First notice that the integral can be re-written as -$$\int_0^\infty \cos(x)\int_0^\infty e^{-u(1+x^2)}\,du\,dx.$$ -Now, switch the order of integration and extend the range of one of the integral to the entire real line, gaining a factor of $1/2$: -$$\frac{1}{2}\int_0^\infty e^{-u}\int_{-\infty}^\infty e^{-x^2+ix}\,dx\,du.$$ -Complete the square in the exponential, and evaluate the now Gaussian integral (use the analyticity of $e^{-z^2}$ to bring the line of integration back to the real line) to obtain -$$\sqrt{\pi}\int_0^\infty \frac{du}{2\sqrt{u}}\,\exp-\left(u-\frac{1}{4u}\right).$$ -The time is ripe for a substitution $x=\sqrt{u}$, bringing us to -$$\frac{\sqrt{\pi}}{2}e^{-1}\int_{-\infty}^\infty dx\, \exp-\left(x-\frac{1}{2x}\right)^2.$$ -Glasser's master theorem now tells us how to compute this last integral, reducing it to a Gaussian, and the original integral is just -$$\frac{\pi}{2}e^{-1}.$$<|endoftext|> -TITLE: An intuitive explanation of the Taylor expansion -QUESTION [27 upvotes]: Could you provide a geometric explanation of the Taylor series expansion? - -REPLY [2 votes]: Performing an $n$-th finite Taylor expansion can be thought of as making the approximation that the function's $n$-th derivative is constant. -Try it yourself: let $f$ be a $n$ times differentiable function whose $n$-th derivative is constant, and suppose you know the values of $f^{(i)}(0)$, $0\leq i\leq n$. By integrating repeatedly, you'll find that this uniquely determines $f$ and produces the formula for a Taylor's expansion. -Intuitively, I find it plausible that neglecting higher order derivatives of a function shouldn't cause too large of an error. Taylor's theorem confirms this intuition.<|endoftext|> -TITLE: Definition of direct image? -QUESTION [5 upvotes]: I read a little remark in a topology text that the direct image of an open set under a continuous mapping is not necessarily open. -What is the definition of direct image in this case? I tried googling to no avail, and only found references to sheaf theory on wikipedia, which I don't believe to be what I'm looking for. Thanks. - -REPLY [5 votes]: They just mean "image". 
An example here is the image of the set $(0, 2\pi)$ under the map $\sin(x)$, which is $(0, 1]$.<|endoftext|> -TITLE: Every group is the quotient of a free group by a normal subgroup -QUESTION [21 upvotes]: Why is every group the quotient of a free group by a normal subgroup? - -REPLY [33 votes]: This is one of the most intuitive observations in all of group theory, and it illustrates the quotient operation in the most fundamental way. -I'll provide two separate answers. The first is fully intuitive; the second is a formalized version of the first. -First answer: Take a group $G$. A relation on $G$ is an equation satisfied by some of the elements. For instance, $eg = g$ where $e$ is the identity is a relation satisfied by all group elements $g \in G$. Because we can always multiply by inverses in a group, we can rewrite this relation as $egg^{-1} = gg^{-1} = e$, i.e., $e = e$. This can be applied to any relation. If $G$ is abelian, then $ab = ba$ for all $a,b \in G$, and we can rewrite this as $aba^{-1}b^{-1} = e$. -In other words, a relation asserts that some product of group elements coincides with the identity, so the only information we need to understand the relation is the product which occurs on the left side of the equals sign. -Now every group has a few relations which are implied directly by the group axioms. $aa^{-1} = e$ is one of them. We can ask whether the group has any extra relations which are not implied by the group axioms. If no such relations exist, i.e., if the only relations which hold are those which must hold by virtue of the group axioms, then the group is said to be free; the group is "free of additional relations." -If you have a group $G$, one natural thing to do is to introduce new relations into it and to thereby create new groups. But you can't just introduce completely random relations because (a) the relations can't contradict each other or pre-exising relations and (b) the resulting structure must again be a group. Now we saw earlier that a relation can be specified as a product of group elements. In order that the relations satisfy (a) and (b), it turns out it is necessary and sufficient that the corresponding products form a normal subgroup $N$. The result of introducing the collection of relations $N$ into the group $G$ is the quotient $G/N$. -Any group $G$ can be obtained in this manner. You start with the free group $F$ whose generators are elements of $G$ considered as a set. And then you look at all the additional relations satisfied by elements of $G$ and assemble them into a normal subgroup $N$. Then $G = F/N$ by the above. -Second answer: Given any set $S$, the free group on $S$ is that group $F(S)$ for which every function $f : S \rightarrow G$ from $S$ to an arbitrary group $G$ extends to a unique homomorphism $\tilde{f} : F(S) \rightarrow G$. There are various ways of constructing $F(S)$ explicitly. For instance, you may take $F(S)$ to consist of words over the alphabet whose letters are elements of $S$ and $S'$, where $S'$ has the letter $s^{-1}$ (a symbol at the moment) for each symbol $s \in S$. It's important to notice that $F(S)$ actually contains equivalence classes of words, because we introduce the obvious cancellation rules; e.g., $abb^{-1}c$ can be reduced via cancellation to $ac$. It must be proved that all possible algorithms for reduction yield the same reduced word; I'll omit that step. -You also have to prove that this group $F(S)$ satisfies the stated universal property. 
I won't prove this in detail, but it is more or less intuitive. Since $\tilde{f}$ has to be a homomorphism, we find, for instance, that $\tilde{f}(ab) = \tilde{f}(a) \tilde{f}(b) = f(a)f(b)$. In general, since $f$ is defined for all elements of $S$, $\tilde{f}$ is thereby defined uniquely for all elements of $F(S)$. [It is via similar reasoning that you may determine that it is sufficient to know the values of a linear operator on the elements of a basis of a vector space.] -So we start with our group $G$ which we would like to write as a quotient of a free group. Which free group? That free group whose generators are the symbols from $G$. So we pick $F(G)$. Now we need to introduce the needed relations in order to collapse $F(G)$ into $G$. How do we carry it out? By the first answer, we could easily accomplish this if only we knew the normal subgroup $N$ of relations, but it seems that in this general case we don't really know $N$ concretely. -In fact, we can figure out $N$ as follows. We can take the identity map $f : G \rightarrow G$ and extend it to a homomorphism $\tilde{f} : F(G) \rightarrow G$. The extension $\tilde{f}$ is in general not injective, and its kernel is precisely the group of relations $N$! (Formally this is an application of one of the standard theorems on homomorphisms.) Then $G = F(G)/N$ as before.<|endoftext|> -TITLE: function asymptotic where $f(x) = \frac{a + O(\frac{1}{\sqrt{x}})}{b + O(\frac{1}{\sqrt{x}})}$ -QUESTION [5 upvotes]: If $a$ and $b$ are positive real numbers, and if $f(x)$ has the following asymptotic property -$f(x) = \frac{a + O(\frac{1}{\sqrt{x}})}{b + O(\frac{1}{\sqrt{x}})}$ -then is the following true? -$f(x) = \frac{a}{b} + O(\frac{1}{\sqrt{x}})$ -This might look like homework but it isn't. - -REPLY [7 votes]: Yes. One way to see this is to actually do the long division (like the kind you learned in elementary school)! Unfortunately, typesetting that in full on this forum will overtax my LaTeX powers. -Anyway, dividing $b + O\left(\frac{1}{\sqrt{x}}\right)$ into $a + O\left(\frac{1}{\sqrt{x}}\right)$ yields $\frac{a}{b}$ with a remainder of $O\left(\frac{1}{\sqrt{x}}\right)$. So we have -$$\frac{a + O\left(\frac{1}{\sqrt{x}}\right)}{b + O\left(\frac{1}{\sqrt{x}}\right)} = \frac{a}{b} + \frac{O\left(\frac{1}{\sqrt{x}}\right)}{b + O\left(\frac{1}{\sqrt{x}}\right)} = \frac{a}{b} + O\left(\frac{1}{\sqrt{x}}\right),$$ -since $b + O\left(\frac{1}{\sqrt{x}}\right) = O(1).$ - -REPLY [6 votes]: It is true. In the spirit of epsilon/delta, you are challenged to prove that $|f(x)-\frac{a}{b}| \lt \frac{M}{\sqrt{x}}$ for $x\gt x_0$ where your challenger gives M and you have to find an $x_0$ that works. But you get to challenge back saying the numerator should be within $\frac{N}{\sqrt{x}}$ of $a$ and similarly the denominator should be within $\frac{P}{\sqrt{x}}$ of $b$. So take $N=\frac{M}{2b}$ and $P=\frac{aM}{2b^2}$ and take the larger of the $x_0$'s that come back.<|endoftext|> -TITLE: $x^y = y^x$ for integers $x$ and $y$ -QUESTION [91 upvotes]: We know that $2^4 = 4^2$ and $(-2)^{-4} = (-4)^{-2}$. Is there another pair of integers $x, y$ ($x\neq y$) which satisfies the equality $x^y = y^x$? - -REPLY [10 votes]: Well I finally found an answer relating to some number theory I suppose ! 
-Assume that : $x={p_1}^{\alpha _ 1}.{p_2}^{\alpha _ 2}...{p_k}^{\alpha _ k}$ it is clear that number y prime factors are the same as number x but with different powers i.e: $y={p_1}^{\beta _ 1}.{p_2}^{\beta _ 2}...{p_k}^{\beta _ k}$ replacing the first equation we get: -${({p_1}^{\alpha _ 1}.{p_2}^{\alpha _ 2}...{p_k}^{\alpha _ k})}^y={({p_1}^{\beta _ 1}.{p_2}^{\beta _ 2}...{p_k}^{\beta _ k})}^x$ i.e: ${p_1}^{{\alpha _ 1}y}.{p_2}^{{\alpha _ 2}y}...{p_k}^{{\alpha _ k}y}={p_1}^{{\beta _ 1}x}.{p_2}^{{\beta _ 2}x}...{p_k}^{{\beta _ k}x}$ -Since the the powers ought to be equal we know for each $1\le i \le k$ we have:${\alpha_i}y={\beta_i}x$ i.e: ${\alpha_i}/{\beta_i}=x/y$ -Considering that the equation is symmetric we can assume that $x \le y$ but we have ${\alpha_i}/{\beta_i} = x/y \ge 1$ hence ${\alpha_i} \ge {\beta_i}$ -Assume this obvious,easy-to-prove theorem: -Theorem #1 -Consider $x,y \in \mathbb{N}$ such that $x={p_1}^{\alpha _ 1}.{p_2}^{\alpha _ 2}...{p_k}^{\alpha _ k}$ $y={p_1}^{\beta _ 1}.{p_2}^{\beta _ 2}...{p_k}^{\beta _ k}$ for each $1\le i \le k$ we have: -$y|x \to {\alpha_i}\ge{\beta_i}$ or vice versa - -Using the Theorem #1 we can get that $y|x$ i.e $x=yt$ replacing in the main equation we get: -$x^y=y^x \to ({yt})^y=y^{({yt})} \to yt=y^t$ -Now we must find the answers to the equation $yt=y^t$ for $t=1$ it is obvious that for every $y \in \mathbb{N}$ the equation is valid.so one answer is $x=y$ -Yet again for $t=2$ we must have $2y=y^2$ i.e $y=2$ and we can conclude that $x=4$ (using the equation $x=yt$)so another answer is $x=4$ $\land$ y=2$ (or vice versa) -We show that for $t\ge3$ the equation is not valid anymore. -If $t\ge3$ then $y\gt2$ we prove that with these terms the inequality $y^t \gt yt$ stands. -$y^t={(y-1+1)}^t={(y-1)}^t+...+\binom{t}{2} {(y-1)}^2 + \binom{t}{1}(y-1) +1 \gt \binom{t}{2} {(y-1)}^2 + t(y-1) +1$ -But we have $y-1\gt1$ so: -$y^t \gt \binom{t}{2} {(y-1)}^2 + t(y-1) +1= \frac {t(t-1)}{2} -t +1 +yt= \frac {(t-2)(t-1)}{2} + yt \gt yt$ -So it is proved that for $t\ge3$ is not valid anymore.$\bullet$ - -P.S: The equation is solved for positive integers yet the solution for all the integers is quite the same!(took me an hour to write this all,hope you like my solution)<|endoftext|> -TITLE: Is this a known special function? -QUESTION [5 upvotes]: Is this a known special function: -$$\int\nolimits_0^1 a^p(1-a)^{1-p}\\,b^{1-p}\\,(1-b)^p dp\qquad ?$$ -I am really only interested in maximizing this over $(a,b)$ in $[0,1] \times [0,1]$, so a pointer to a nice numerical evaluation is appreciated as much or more so than an unstable exact formula. -Thanks for any help - -REPLY [8 votes]: You can get a closed-form answer. -$$\int_0^1 a^p (1-a)^{1-p} b^{1-p} (1-b)^p dp = b(1-a) \int_0^1 \left(\frac{a(1-b)}{b(1-a)}\right)^p dp = \left. \frac{b(1-a)}{\ln \frac{a(1-b)}{b(1-a)}} \left(\frac{a(1-b)}{b(1-a)}\right)^p \right|_0^1 $$ -$$= \frac{a(1-b) - b(1-a)}{\ln a + \ln (1-b) - \ln b - \ln (1-a)} = \frac{a-b}{\ln a + \ln (1-b) - \ln b - \ln (1-a)}.$$ -This holds if $a \neq b$ and if neither of $a$ or $b$ is 0 or 1. If $a = b$, then instead we have $$a(1-a) \int_0^1 dp = a - a^{2}.$$ -And, of course, if $a$ or $b$ is 0 or 1 then the value of the integral is 0. -So, as far as maximizing, you can use the usual approach of finding where both partial derivatives are 0. 
I haven't worked through the calculations, but I strongly suspect that because of the symmetry in $a$ and $b$ that the maximum value will occur at $a = b$.<|endoftext|> -TITLE: Expected number of neighbors -QUESTION [7 upvotes]: Given a row of 16 houses where 10 are red and 6 are blue, what is the expected number of neigbors of a different color? - -REPLY [3 votes]: The chances of a particular neighbour pair being the same colour is -$$ \frac{{14 \choose 8} + {14 \choose 10}}{{16 \choose 6}} = \frac{4004}{8008} = \frac{1}{2}$$ -Hence the answer is $\displaystyle 7.5$ -This is happening because $\displaystyle {14 \choose 8}, {14 \choose 9}, {14 \choose 10}$ are in arithmetic progession: The number of ways of being same colour is $\displaystyle {14 \choose 8} + {14 \choose 10}$ and the number of ways of being different is $\displaystyle 2{14 \choose 9}$. The probability is $\displaystyle \frac{1}{2}$ if these two are equal. -Interestingly, $\displaystyle {n \choose r}, {n \choose r+1}, {n \choose r+2}$ are in arithmetic progression if and only if $\displaystyle n+2$ is a perfect square and $\displaystyle r$ is given by $\displaystyle r = \frac{n-2 \pm \sqrt{n+2}}{2}$ (see the end of the answer for a proof). -So for instance, the whole bunch of problems: - -15 red, 10 blue -21 red, 15 blue - -etc give rise to this neat probability of being $\displaystyle \frac{1}{2}$. - -Proof that n+2 is a perfect square -$\displaystyle {n \choose r}, {n \choose r+1}, {n \choose r+2}$ are in arithmetic progression iff -$\displaystyle 2{n \choose r+1} = {n \choose r} + {n \choose r+2}$ -i.e -$\displaystyle 2 = \frac{r+1}{n-r} + \frac{n-r-1}{r+2}$ -Doing some manipulations gives us -$\displaystyle (n-2r-2)^2 = n+2$ -Hence -$\displaystyle r = \frac{n-2 \pm \sqrt{n+2}}{2}$ -Which has an integer solution iff $\displaystyle n+2$ is a perfect square.<|endoftext|> -TITLE: Motivating infinite series -QUESTION [25 upvotes]: What are some good ways to motivate the material on infinite series that appears at the end of a typical American Calculus II course? -My students in this course are generally from biochemistry, computer science, economics, business, and physics (with a few humanities folks taking the course for fun) - not just math majors. -I have struggled some in the past to motivate the infinite series material to these students. For one, it doesn't fit with the rest of Calc II, which is on the integral. Over the years I have "converged" on telling them that the main point of the unit is Taylor series and that the rest of the material is there primarily so that we have the tools we need in order to understand Taylor series. Then I illustrate some of the many uses of Taylor series (mainly function approximation, at this level). This approach works better than anything I've come up with thus far with respect to getting my students to care about infinite series, but I feel a little like I'm selling the rest of the material short by subordinating it to Taylor series. Does anyone have other ways of motivating infinite series that they would like to share? (Again, only a small percentage of the students in my class are math majors.) -Background: The material in this unit typically consists of sequences, basic series (like geometric and telescoping ones), a slew of tests for convergence (e.g., integral test, ratio test, root test), an introduction to power series, Taylor and Maclaurin series, and maybe binomial series. 
- -REPLY [2 votes]: From the point of view of a student that is struggling with the abstract concepts of maths, I found that using Zeno's Paradox was an interesting approach to infinite series.<|endoftext|> -TITLE: Link between a Dense subset and a Continuous mapping -QUESTION [9 upvotes]: arising out of comment made by Yuval Filmus in what is the cardinality of set of all smooth functions in $L^1$? I got this idea (forgive me for my ignorance for if it is nothing but an elementary definition/result in real analysis). The idea is like this Let $f:X\to Y$ is a mapping, where $X$ is a complete metric space (not sure if its strictly needed or whether a looser condition would do). If $f$ is a continuous mapping then $f$ is uniquely specified by a mapping $g:E\to Y$ where $E$ is a dense subset of $X$. What are the condition under which it is valid ? also the validity of the converse statement. - -REPLY [23 votes]: If $Y$ is a Hausdorff topological space, then the value of a continuous function $f\colon X\to Y$ is completely determined by the value of $f$ on a dense subset $E$ of $X$; to see this, suppose $f$ and $g$ are two functions that agree on a dense subset $E$ of $X$, and let $u\in X\setminus E$. If $f(u)\neq g(u)$, then there are open neighborhoods $U$ and $V$ of $f(u)$ and $g(u)$, respectively, such that $U\cap V=\emptyset$. Then $f^{-1}(U)$ is an open neighborhood of $u$, as is $g^{-1}(V)$. Their intersection is an open neighborhood of $u$, and therefore must contain elements of $E$; but then any $e\in E$ in the intersection has $f(e)=g(e)$, with $f(e)\in U$ and $g(e)\in V$, contradicting that $U\cap V=\emptyset$. Therefore, $f(u)=g(u)$, hence $f=g$. -In this situation, it does not matter if $X$ is a complete metric space (or even a metric space); the key is $Y$. -To see that the key is $Y$, consider the extreme case in which $Y$ has the indiscrete topology (the only open sets are the empty set and the set $Y$). Then any function into $Y$ is continuous, so you can make your $X$ anything you want, and have two functions that agree on any subspace you care to specify and yet differ somewhere else. -Added: Note that any metric space is necessarily Hausdorff, so if your maps are between metric spaces, then the property holds (as in the case of Yuval's answer). This because you have the property that $d(x,y)\geq 0$ and $d(x,y)=0$ if and only if $x=y$. Thus, given $x,y\in Y$, $x\neq y$, let $\epsilon=d(x,y)\gt 0$. Then $B(x,\frac{\epsilon}{2})$ and $B(y,\frac{\epsilon}{2})$ are open neighborhoods of $x$ and $y$ (respectively) that are disjoint: if $z$ were in the intersection, then $d(x,z)\lt \frac{\epsilon}{2}$, $d(z,y)\lt\frac{\epsilon}{2}$, and by the triangle inequality we would conclude that $d(x,y)\lt\epsilon$, a contradiction. -Final Addition: See here for a discussion of the converse. - -Added 2: A result towards a possible converse: certainly we need some separation on $Y$. Suppose that there exist points $x,y\in Y$, $x\neq y$, such that every open subset that contains $x$ also contains $y$ (so $Y$ could be $T_0$, but cannot be $T_1$). Let $X$ be the Sierpinski space, $X=\{a,b\}$ with topology $\tau = \{\emptyset, \{b\}, X\}$. Let $f,g\colon X\to Y$ be the following maps: $f$ is the constant function that maps everything to $x$; $g$ is the function that maps $a\mapsto x$, $b\mapsto y$. The constantfunction is certainly continuous. For $g$, if $U$ is an open subset that contains $y$ but not $x$, then $g^{-1}(U)=\{b\}$, which is open; so $g$ is continuous. 
And $g$ and $f$ agree on the dense set $\{a\}$, but are not equal. -I'm still trying to figure out the $T_1$ case (for all $x,y\in Y$, $x\neq y$, there exist open sets $U,V$ such that $x\in U-V$ and $y\in V-U$). -Added 3: Another step: $T_1$ is not sufficient; a colleague came up with this one (I kept trying the cofinite topology on $\mathbb{N}$ and not getting anywhere): take two copies of the real line and identify every point except the origin; this is $Y$. The result is a $T_1$ space, but not Hausdorff since no neighborhoods of the two copies of the origin are disjoint. Now let $X$ be the real line, and let $E = (-\infty,0)\cup(0,\infty)$ be the dense subset. The two obvious injections, one mapping $0$ to the first copy in $Y$ and the other mapping it to the other copy, are both continuous and agree on $E$ but not on all of $X$, so $T_1$ does not suffice for the property. At least, then, in the hierarchy of $T$-spaces, the first level at which we are guaranteed the property is Hausdorff. This does not, however, establish whether the converse property characterises Hausdorff-ness. -Added 4. I almost have the following: -Conjecture. Assuming the Axiom of Choice, the following are equivalent for a topological space $Y$: - -$Y$ is Hausdorff. -For every topological space $X$, every dense subset $E$ of $X$, and every pair of continuous maps $f,g\colon X\to Y$, if $f$ and $g$ agree on $E$, then $f=g$. - -Argument. Suppose that $Y$ is not Hausdorff. If there exist points $x,y\in Y$, $x\neq y$, such that every open neighborhood of $x$ contains $y$, then the map from the Sierpinski space indicated above shows that $Y$ does not have property 2. So we may assume that $Y$ is at least a $T_1$ space. But since $Y$ is not Hausdorff, there exist points $s,t\in Y$, $s\neq t$, such that for every neighborhoods $U$ of $s$ and $V$ of $t$ such that $s\in U-V$ and $t\in V-U$, we have $U\cap V\neq \emptyset$. Let $\mathfrak{U}\_s$ be the family of open neighborhoods of $s$ that do not contain $t$, and let $\mathfrak{V}\_t$ be the family of open neighborhoods of $t$ that do not contain $s$. Let $P=\mathfrak{U}\_s\times \mathfrak{V}\_t$, and partially order $P$ by letting $(U,V)\leq (U',V')$ if and only if $U'\subseteq U$ and $V'\subseteq V$. This makes $P$ into a directed partially ordered set (given any $(U,V),(R,S)\in P$, there exists $(U',V')\in P$ such that $(U,V)\leq (U',V')$ and $(R,S)\leq (U',V')$. Now, for each $(U,V)\in P$, we are assuming that $U\cap V\neq\emptyset$, so using the Axiom of Choice let $y_{(U,V)}\in Y$ be an element of $U\cap V$. Note that $y_{(U,V)}\neq s$ and $y_{(U,V)}\neq t$ for all $(U,V)\in P$. -Now let $X = \{y_{(U,V)}\mid (U,V)\in P\}\cup\{s,t\}$, and give $X$ the induced topology from $Y$, so that the inclusion maps $X\hookrightarrow Y$ is continuous. Note that the set $E=\{y_{(U,V)}\mid (U,V)\in P\}$ is dense in $X$: for every open neighborhood $B$ of $s$ in $X$, there exists an open set $\mathcal{O}\_B\in Y$ such that $\mathcal{O}\_B\cap X = B$; in particular, $\mathcal{O}\_B$ is a neighborhood of $s$; let $\mathcal{V}$ be any open neighborhood of $t$ that does not contain $S$, and let $B'=\mathcal{V}\cap X$; let $\mathcal{U}$ be any open neighborhood of $s$ that does not contain $t$. Then $\mathcal{U}\cap\mathcal{O}\_B$ is open, hence $B'=\mathcal{U}\cap\mathcal{O}\_B\cap X$ is an open subset of $X$ that is contained in $B$. Consider now $y_{(\mathcal{U}\cap\mathcal{O}\_B,\mathcal{V})}$. 
This is in $\mathcal{U}\cap\mathcal{O}_B\cap\mathcal{V}\cap X \subseteq B'$, and is plainly in $E$. In particular, in $X$ we have that $B'\cap E\neq\emptyset$, and hence $B\cap E\neq \emptyset$. Thus, every open neighborhood of $s$ in $X$ contains points of $E$, hence $x$ lies in the closure of $E$. A symmetric argument holds for $t$. Thus, $E$ is dense in $X$. -Now consider the maps $f,g\colon X\to Y$ defined as follows: $f$ and $g$, restricted to $E$, are the identity; $f(s)=f(t)=s$; and $g(s)=g(t)=t$. I claim that $f$ and $g$ are both continuous. Indeed, let $\mathcal{O}$ be an open set in $Y$. If $\mathcal{O}\cap\{s,t\}=\emptyset$ or $\{s,t\}\subseteq\mathcal{O}$, there is nothing to do: the inverse image under both $f$ and $g$ is just the intersection with $X$, hence open in $X$. So assume without loss of generality that $s\in\mathcal{O}$ but $t\notin \mathcal{O}$. Note that $g^{-1}(\mathcal{O}) = (\mathcal{O}-\{s\})\cap X$, and since $Y$ is $T_1$ removing a single point from an open set results in an open set, so $g^{-1}(\mathcal{O})$ is open. So we just need to show that $f^{-1}(\mathcal{O}) = (\mathcal{O}\cap X)\cup\{t\}$ is open in $X$. -And that is where I am a bit stuck at present. Can anyone either verify or falsify this?<|endoftext|> -TITLE: How can I write the Axiom of Specification as a sentence? -QUESTION [10 upvotes]: I began reading Paul Halmos' "Naive Set Theory", and encountered the "Axiom of Specification". - To every set $A$ and to every condition $S(x)$ there corresponds a set $B$ whose elements are exactly those elements $x$ of $A$ for which $S(x)$ holds. - -Earlier in the same section, I learned that statements in set theory should be "sentences". A sentence was defined by - There are two basic types of sentences, namely, assertions of belonging, -$x \in A$, -and assertions of equality, -$A = B$; -all other sentences are obtained from such atomic sentences by repeated applications of the usual logical operators... -A more complete definition of a sentence follows, which can be read on Google Books here: http://goo.gl/XvK2B -I tried to translate the axioms and theorems in the book into sentences, but it seems like the Axiom of Specification is not a sentence. It refers to "every condition", but I have no way to build a sentence that refers to "every condition" because the atomic sentences only refer to sets. -Is the Axiom of Specification a sentence? If not, does that mean that statements about set theory do not need to be sentences? What other sorts of statements are allowed? (I'm using "statement" colloquially since I don't know the technical term.) - -REPLY [15 votes]: The axiom of specification is not a sentence. It's an "axiom scheme", which is to say that it is a family of sentences. This is one of the things that becomes more clear when you move to axiomatic set theory, instead of naive set theory. -For each sentence $S(x)$ that does not mention $B$, the axiom of specification includes the axiom -$$ -\forall A \exists B \forall x ( x \in B \Leftrightarrow x \in A \land S(x)). -$$ -What that axiom says, informally, is that given a set $A$ and a definition $S$ of a subset of $A$, that subset actually exists. The scheme is slightly more general than my previous formula, because the scheme allows sentences with "parameters". -The restriction that $S$ does not mention $B$ is to avoid paradoxes. 
Otherwise we would have as an axiom (letting $A = \{0\}$ and letting $S$ be "$x \not \in B$") -$$ -\exists B \forall x ( x \in B \Leftrightarrow x \in \{0\} \land x \not \in B). -$$ -That set is paradoxical - it contains 0 if and only if it doesn't contain $0$. -The reason that we cannot quantify over sentences is that set theory is formalized using the logical system of "first order logic". That system is not able to quantify over sentences. This isn't an arbitrary choice; the inability to quantify over sentences is a necessary result of certain logical properties of first-order logic that are desirable. There are other logics in which one can quantify over sentences, but these logics do not have nice properties (and some have argued these logics themselves include set theory). -All of this is explained, in great detail, in books on axiomatic set theory. One reasonable book is Levy's Basic set theory. The standard graduate textbook is Kunen's Set theory: an introduction to independence proofs, and it can be used to learn axiomatic set theory, but it is somewhat terse at the beginning and is better as a second book on axiomatic set theory in my opinion.<|endoftext|> -TITLE: General expression of $f(a, b)$ if $f(a, b)=f(a-1,b) + f(a, b-1) + f(a-1, b-1)$? -QUESTION [5 upvotes]: $f(a,b) = f(a-1, b) + f(a-1, b-1) + f(a, b-1), ab \neq 0$ -$f(a,b) = 1, ab = 0$ -So what is $f(a, b)$? - -REPLY [4 votes]: Following Robin Chapman's answer, $$\sum_{a,b \geq 0} f(a,b)x^a y^b = \frac{1}{1-x-y-xy} = \sum_{n \geq 0} (x+y+xy)^n = \sum_{n \geq 0} \sum_{i+j+k=n} \frac{n!}{i!j!k!} x^i y^j (xy)^k$$ -so $$f(a,b)=\sum_{k=0}^{\min (a,b)} \frac{(a+b-k)!}{(a-k)!(b-k)!k!}$$<|endoftext|> -TITLE: What's the difference between predicate and propositional logic? -QUESTION [128 upvotes]: I'd heard of propositional logic for years, but until I came across this question, I'd never heard of predicate logic. Moreover, the fact that Introduction to Logic: Predicate Logic and Introduction to Logic: Propositional Logic (both by Howard Pospesel) are distinct books leads me to believe there are significant differences between the two fields. What distinguishes predicate logic from propositional logic? - -REPLY [5 votes]: I think this example from "Overview of proposition and predicate logic -" by Jan Kuper gives a suitable explanation. -In proposition logic, we can express statements as a whole, and combinations -of them. Intuitively, a statement is a sentence in which something is -told about some reality, and which can be true or false about that reality. -For example, if p is the statement ”Albert is at home”, and q means ”the -door is locked”, then q→¬p says: ”if the door is locked, then Albert is not -at home”. -In first-order predicate logic, a statement has a specific inner structure, -consisting of terms and predicates. Terms denote objects in some reality, -and predicates express properties of, or relations between those objects. -For example, the same example as before might be expressed as -Locked(d) → ¬AtHome(a). Here, Locked and AtHome are predicates, -and d and a are terms, all with obvious meanings.<|endoftext|> -TITLE: Every planar graph can be colored with 4 colors max -QUESTION [6 upvotes]: Ok, that's a theorem. As it says in this question. -I just cannot understand how mathematics can prove such a thing; what does exactly this theorem say? How to prove it mathematically? -Thanks! - -REPLY [6 votes]: The first proof was due to Appel and Haken in 1977. 
However, as pointed out by Willie Wong, it is not really readable. Very roughly, it first uses a classification of almost 1500 "unavoidable configurations" of planar triangulations. Next, using computers, it is shown that each of these configurations "leads" to a 4-coloring. -Appel and Haken published an algorithmic approach to this problem in K. Appel and W. Haken, Every Planar Map is Four-Colorable, American Mathematical Society 1989. -In 1997, another proof was published, but it still used the computer in a similar way. This was due to N. Robertson, D. Sanders, P.D. Seymour and R. Thomas. -To answer your question How can math prove such a thing?, we see that the computer plays a fundamental role in this particular problem. In fact, computer science plays a very fundamental part in today's research in finite structures. For instance, the Classification of the Finite Simple Groups is another very famous example. Computers were needed in several parts of the proof as they still are. -Now, these results are not "uninteresting" simply because computers are needed to prove them. To the contrary, the effort needed for the conception and implementation of such algorithms is in itself very remarkable work. -In conclusion, mathematics can prove such things through algorithmic approaches and clever implementations on computers. -As a final remark, note also the following result due to Grötzsch, 1959: -Every planar graph not containing a triangle is 3-colorable.<|endoftext|> -TITLE: Is there a continuous function from $[0, 1]$ onto $(0, 1)$? -QUESTION [10 upvotes]: If there is none, why? -And for the other side, what about open set $(0, 1)$ to closed set $[0, 1]$ with a continuous function? -Thanks - -REPLY [12 votes]: HINT: For the first one use the fact that the continuous image of a compact set is compact. - -REPLY [9 votes]: For the other side consider $f: (0,1) \to [0,1]$ defined as $f(x)= |\cos(2\pi x)|^{2}$<|endoftext|> -TITLE: Card shuffling to Even Decks -QUESTION [6 upvotes]: I have an interesting math question I've developed while playing cards with my family and wonder if anyone has a way to approach the problem. -Say I have n people who each have a deck of cards. Each deck has a different number of cards in it. We are shuffling the cards for a game and in the end we all want to have the same number of cards to shuffle. Each individual can do one of two actions per round of shuffling. They can shuffle their deck, or they can split their deck in half and swap one half of their deck with half of another person's deck. My question is, is there an equation based on n to describe a method of swapping that will eventually give all n players the same amount of cards? Furthermore, how many rounds does it take? (In my mind, I define a round as a period of time where everyone performs one action. A player can only perform one action per round and the round ends when all players have performed one action) -To make the problem simpler, you can assume that each individual can only do one action: split their deck and swap with any other player. -The problem is trivial when n is a power of 2. For example, if we have 2 people, they split their decks, swap, and they have the same number of cards. It takes one round. -For 4 people, you split them into 2 groups of 2 people and perform the swap internally in the group. Then you make 2 new groups, each containing one person from each of the round-one groups, and have them swap. This takes 2 rounds. 
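-Here is a minimal sketch of the power-of-two scheme in Python (the helper name is made up, and it assumes every deck splits exactly evenly and the total is divisible by the number of players):
-    def swap_halves(decks, i, j):
-        # players i and j each split their deck and trade one half
-        half_i, half_j = decks[i] // 2, decks[j] // 2
-        decks[i] = decks[i] - half_i + half_j
-        decks[j] = decks[j] - half_j + half_i
-
-    decks = [40, 8, 20, 36]                             # n = 4
-    swap_halves(decks, 0, 1); swap_halves(decks, 2, 3)  # round 1
-    swap_halves(decks, 0, 2); swap_halves(decks, 1, 3)  # round 2
-    print(decks)                                        # [26, 26, 26, 26]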
-I would like to see if there is a simple, elegant solution when n is not a power of 2. -(You can also assume that each deck splits evenly every time, which is a big assumption but should give a good basis for how to solve the problem. A similar problem would be to replace the cards with cups of liquid, which gets rid of passing half of a card around.) - -REPLY [7 votes]: There is a standard problem (I often assign it in linear algebra, and I was assigned the problem in representation theory); seems like it might be related. -Suppose you have $n$ people sitting at a circular table, each of them with two neighbors. They each have a bowl of porridge in front of them, and a spoon in each hand. Unhappy with their individual portions, they all simultaneously grab half the porridge from their left-hand neighbor, half the porridge from the right-hand neighbor, and they put it in their bowl (which has been emptied by their neighbors doing the same). What happens to the distribution of porridge over time? -Edit: More explicitly, applied to the cards. So, let's consider the case of three eaters/shufflers. I'm assuming for convenience now that the cards are, as Moron suggests, "liquid"; we'll get back to the discrete case presently. Let $x_i(n)$ be the number of cards that the $i$th person has after $n$ steps, with $\mathbf{x}(0) = (a_0,b_0,c_0)$ being the initial distribution of cards (so $a_0,b_0,c_0\geq 0$). Then the procedure can be represented as -$$\begin{array}{rcrcrcl} x_1(n+1) &=& & &\frac{1}{2}x_2(n) & + & \frac{1}{2}x_3(n)\\ x_2(n+1) & = & \frac{1}{2}x_1(n) & & &+& \frac{1}{2}x_3(n)\\ x_3(n+1) & = & \frac{1}{2}x_1(n) & + & \frac{1}{2}x_2(n) \end{array}$$ -So we can view this as the linear system $\mathbf{x}(n+1) = A\mathbf{x}(n)$, where -$$A = \left(\begin{array}{lll} 0 & 0.5 & 0.5\\ 0.5 & 0 & 0.5\\ 0.5 & 0.5 & 0 \end{array}\right).$$ -The characteristic polynomial of $A$ is $-(t-1)(t+0.5)^2$. Since the rows all add up to $1$, $(1,1,1)^T$ is an eigenvector for $\lambda=1$ (that is, if everyone starts with $1$, then after applying the procedure they all end up again with $1$: $(1,1,1)^T$ is mapped to one times itself). A basis for the eigenspace corresponding to $\lambda=-\frac{1}{2}$ is $(1,-1,0)^T$ and $(1,0,-1)^T$. If, for instance, player one began with one card, and player two with one "anti-card" (a card made of antimatter, say), then after the first step, player three got half a card from 1 and half an anti-card from 2, which vanished in a puff of smoke; player 2 now has half a card he got from 1, and player 1 has half an anti-card he got from 2. So $(1,-1,0)^T$ is mapped to $(-\frac{1}{2},\frac{1}{2},0)$. Similarly with $(1,0,-1)$. -Now, because $(1,1,1), (1,-1,0), (1,0,-1)$ is a basis for $\mathbb{R}^3$, our original distribution $\mathbf{x}(0)$ can be written uniquely as -\[ \mathbf{x}(0) = a(1,1,1) + b(1,-1,0) + c(1,0,-1),\] -for some $a$, $b$, and $c$. Note that the number of cards among all three players is $3a$. If we apply the procedure $n$ times to get $\mathbf{x}(n)$, we will get: -\begin{align*} \mathbf{x}(n) &= A^n\mathbf{x}(0)\\ &= A^n(a(1,1,1)^T + b(1,-1,0)^T + c(1,0,-1)^T)\\ &= (a,a,a)^T + (-0.5)^n(b,-b,0)^T + (-0.5)^n(c,0,-c)^T\\ &= (a + (-0.5)^n(b+c), a-(-0.5)^nb, a-(-0.5)^nc)^T. \end{align*} -Now, in reality we are interested in the floor of these quantities (since we are really dealing with actual cards, no negative or fractional cards). 
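-(As a quick numerical sanity check of the continuous model before we worry about rounding, here is a minimal sketch, assuming Python; the starting vector is the explicit example worked out below.)
-    # iterate x(n+1) = A x(n) for the 3-person porridge/card matrix
-    A = [[0.0, 0.5, 0.5],
-         [0.5, 0.0, 0.5],
-         [0.5, 0.5, 0.0]]
-
-    def step(x):
-        return [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
-
-    x = [30.0, 16.0, 22.0]
-    for n in range(10):
-        x = step(x)
-    print(x)  # every entry is within about 0.01 of 68/3 = 22.666...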
As soon as $(0.5)^nb$ and $(0.5)^nc$ are both smaller than $\frac{1}{4}$, the given distribution will be essentially the even distribution $(a,a,a)$. -For an explicit example: suppose that we start with Player 1 having $30$ cards, Player $2$ with $16$ cards, and player $3$ with $22$ cards (I picked the numbers at random). First, we write $(30,16,22)$ as a linear combination of $(1,1,1)$, $(1,-1,0)$, and $(1,0,-1)$. Solving the corresponding system we get: -$$(30,16,22) = \frac{68}{3}(1,1,1) + \frac{20}{3}(1,-1,0) + \frac{2}{3}(1,0,-1).$$ -So after $n$ applications of the procedure, we will have -$$\mathbf{x}(n) = \frac{68}{3}(1,1,1) + \frac{(-1)^n20}{3(2^n)}(1,-1,0) + \frac{(-1)^n2}{3(2^n)}(1,0,-1).$$ -So each player will have exactly one third of the cards, modified by a bit. -If $n=2$, then the coefficient of $(1,-1,0)$ is $1.67$ and the coefficient of $(1,0,-1)$ is $0.167$; so you would expect the first player to have the nearest integer to $\frac{68}{3}+1.67$ cards, namely about $24$; the second player should have the nearest integer to $\frac{68}{3} -1.67$ cards, so about $21$ cards; and the third player the nearest integer to $\frac{68}{3}-0.167$ cards; this is almost $22.5$, so he'll have $23$ cards (to complete the tally). -If $n=3$, you get about $22$ cards for the first player, $23$ for the second, and $23$ for the third. If $n=4$, the first player should have approximately $23$ cards; the second player should have approximately $22$ cards; and the third player should have approximately $23$ cards. After this point, you'll just be shuffling around who is one card short. -A similar phenomenon occurs with any odd number of people. This may not be the fastest procedure for dividing them among an odd number of players, though. -If you try to do this with $n=4$ people, then the matrix you get is: -$$ A = \left(\begin{array}{llll} 0 & 0.5 & 0 & 0.5\\ 0.5 & 0 & 0.5 & 0\\ 0 & 0.5 & 0 & 0.5\\ 0.5 & 0 & 0.5 & 0 \end{array}\right).$$ -This time, the characteristic polynomial is $t^2(t-1)(t+1)$, so you have one eigenvalue $\lambda=1$ (with corresponding eigenvector $(1,1,1,1)$, which is associated to "the cards are evenly distributed"); you get one eigenvalue $\lambda=-1$, with corresponding eigenvector $(1,-1,1,-1)$; and you get two eigenvectors mapping to $0$; for instance $(0,1,0,-1)$ and $(1,0,-1,0)$. -Once you express your original distribution as a linear combination of these, -\[ (x_1(0),x_2(0),x_3(0),x_4(0)) = \alpha(1,1,1,1) + \beta(1,-1,1,-1) + \gamma(0,1,0,-1) + \delta(1,0,-1,0),\] -the last two terms don't matter after the first go-around, so you end up with -\[ \mathbf{x}(n) = \bigl( \alpha + (-1)^n\beta, \alpha - (-1)^n\beta, \alpha+(-1)^n\beta, \alpha-(-1)^n\beta\bigr).\] -So, depending on the initial distribution, you are going to have a certain portion of the cards evenly distributed (corresponding to $\alpha$), and a certain portion of the cards (corresponding to $\beta$) swapping between even- and odd-numbered players at each step. -Edit: Another way of looking at the $n=4$ case: A better way of seeing what is happening in this case is to replace the vector $(1,-1,1,-1)$ with the vector $(2,0,2,0)$ (the sum of the eigenvector and $(1,1,1,1)$); we still have a basis, so we can write any initial distribution as -$$(x_1(0),x_2(0),x_3(0),x_4(0)) = \alpha(1,1,1,1) + \beta(2,0,2,0) + \gamma(0,1,0,-1) + \delta(1,0,-1,0).$$ -But now, when you apply $A$, the vector $(2,0,2,0)$ transforms into the vector $(0,2,0,2)$, and this vector is in turn transformed back into $(2,0,2,0)$. 
The total number of cards is $4(\alpha+\beta)$. Of these, $4\alpha$ of the cards will eventually end up evenly distributed among the players, but the remaining $4\beta$ cards will end up evenly distributed between two non-adjacent players, and then swapped over to the other two players, then swapped back. If you imagine you are at a bridge table, first the North-South partnership has the extra $4\beta$ cards, $2\beta$ cards each, then they go to East-West, then back to North-South, then back to East-West, etc. -A similar phenomenon occurs with any even number of players. -Added: Back to the discrete case. Okay, above we were dealing with the "liquid cards"/"porridge" case, where the cards are divisible any way we want. What happens if we are playing actual cards that we don't want to split up? Notice that each player is only required to divide his cards in half; so the only problem arises if a player has an odd number of cards. In that case, the player can just give the extra card to whichever of his neighbors he likes the most, or randomly; the final distribution in this case is very close to the ideal "continuous" distribution: each of his neighbors was supposed to get $x$-and-a-half cards from him, but got either $x$ or $x+1$; so the most that any one person can be off from the "ideal" result of applying the procedure is $1$ card (half a card from one neighbor, half a card from the other). This vector will be fairly close to the "ideal" continuous vector, and so the result of applying the procedure to this distribution will also result in a distribution which is fairly close to what the ideal distribution is. (Linear maps are continuous, and in this case the eigenvalues are all of absolute value less than or equal to $1$, so the approximation will not get worse as time goes on; it will either remain the same, or get better). You can see this in the example I worked out for $n=3$. If you start with $(30,16,22)$, then after the first step you get $(19,26,23)$; at the next step, the "predicted" distribution is $(24,21,23)$. The actual distributions you get are either $(24,22,22)$, $(25,21,22)$, $(24,21,23)$, or $(25,20,23)$, depending on how the extra cards get distributed. The sup-distance to the predicted distribution is 1 in all cases. Do it a third time, and depending on how things break you will end up in either $(22,24,22)$, $(21,24,23)$, $(22,23,23)$, $(21,23,24)$, $(23,23,22)$, or $(21,25,22)$. The "worst" distribution (the last one) is still better than the worst distribution in the previous step. And so on. -In summary: the "give half your cards to each of your neighbors and get half of each of their cards" procedure will yield an even distribution after a fairly small-ish number of steps if there is an odd number of players, but will not necessarily yield an even distribution if there is an even number of players. In the latter case, part of the deck will get evenly distributed, while the other part ends up swapping between adjacent "teams". One advantage of this, at least for an odd number of players, is that it is a single procedure that is repeated a number of times (much like riffle-shuffling a deck) in order to achieve uniformity, rather than a more complex algorithm that involves different actions at different steps.<|endoftext|> -TITLE: How many ways can I make six moves on a Rubik's cube? -QUESTION [5 upvotes]: I am writing a program to solve a Rubik's cube, and would like to know the answer to this question. -There are 12 ways to make one move on a Rubik's cube. 
How many ways are there to make a sequence of six moves? -From my project's specification: up to six moves may be used to scramble the cube. My job is to write a program that can return the cube to the solved state. I am allowed to use up to 90 moves to solve it. Currently, I can solve the cube, but it takes me over 100 moves (which fails the objective)... so I ask this question to figure out if a brute force method is applicable to this situation. -If the number of ways to make six moves is not overly excessive, I can just make six random moves, then check to see if the cube is solved. Repeat if necessary. - -REPLY [2 votes]: There are 7,618,438 different positions after six moves according to this site, but they use face moves. By the way, they show that the Rubik's cube can be solved in 20 face moves or fewer.<|endoftext|> -TITLE: Cohomology of a tensor product of sheaves -QUESTION [11 upvotes]: Say I have two locally free sheaves $F,G$ on a projective variety $X$. I know the cohomology groups $H^i(X,F)$ and $H^i(X,G)$. Is this enough to give me information about $H^i(X,F\otimes G)$? In particular, if $H^i(X,F)=0$, what conditions on $G$ guarantee that also $H^i(X,F\otimes G)=0$? - -REPLY [3 votes]: You need to make some positivity assumptions on $E$ and $F$, as your conclusion is just not true in general. The only case I know of is Le Potier's vanishing theorem, which says that if $E \otimes F \otimes \omega_X^{-1}$ is ample on a smooth projective variety $X$, then $H^i(X,E \otimes F)=0$ for $i \geq rs$, where $rk(E)=r$ and $rk(F)=s$. This is satisfied for instance if $\omega_X^{-1}$ is nef, and $E$ and $F$ are both ample on $X$, but even here you need $rs$ to be small relative to the dimension of $X$ if you want to say something meaningful.<|endoftext|> -TITLE: Parenthesis vs brackets for matrices -QUESTION [31 upvotes]: When I first learned linear algebra, both the professor and the book used brackets like [ and ] to enclose matrices. However, in my current differential equations textbook, matrices are enclosed by parentheses, and I suddenly realize everybody else is using them too. -So are brackets/parentheses for enclosing matrices always totally interchangeable? - -REPLY [3 votes]: This is a common question that I've been asked from time to time. Whether or not you choose to use: -\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} -or -\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} -is up to you. -The difference between the notations is that the parenthesis notation -\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} -is mostly used by mathematicians. -However, the square bracket notation -\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} -is mostly used by engineers or physicists (i.e. science disciplines other than mathematics). This is analogous to the reason why the spherical coordinate system used by physicists and engineers has the two angles labelled the other way, compared to that used by mathematicians. As a mathematician, I tend to stick with using the parenthesis notation. -Having said that, it is worth noting that the $|\ |$ notation denotes the determinant of a matrix, not the matrix itself.<|endoftext|> -TITLE: A particular case of the quadratic reciprocity law -QUESTION [15 upvotes]: To motivate my question, recall the following well-known fact: Suppose that $p\equiv 1\pmod 4$ is a prime number. Then the equation $x^2\equiv -1\pmod p$ has a solution. 
-One can show this as follows: Consider the following polynomial in ${\mathbb Z}_p[x]$: $x^{4k}-1$, where $p=4k+1$. The roots of this polynomial are precisely the elements of ${\mathbb Z}_p^*$, each one with multiplicity 1. The polynomial factors as $(x^{2k}-1)((x^k)^2+1)$, and it follows that if $a$ is any element of ${\mathbb Z}_p^*$ with $a^{2k}\ne 1$ (and there are precisely $(p-1)/2$ possible such $a$), then $b=a^k$ satisfies $b^2\equiv -1\pmod p$. -Of course, there are other arguments, but I am interested in pursuing this line of reasoning. My question is the following: - -From the quadratic reciprocity law, we - have that $x^2\equiv -2\pmod p$ has a - solution iff $p\equiv 1$ or $3\pmod 8$. Is there a proof of the - right-to-left implication using some - polynomial and appropriate counting of - roots, as in the case shown above? - -REPLY [6 votes]: If I remember correctly, this approach with a Gauss sum (associated to an 8th root of 1 in a finite field) is used in the first page or two of Serre's Cours d'Arithmetique to determine the "supplementary law" for $\left( \frac{2}{p} \right)$. The magical algebraic identity in this case is that if $a^4 = -1$, then $(a \pm 1/a)^2 = \pm 2$.<|endoftext|> -TITLE: If $b_n$ is a bounded sequence and $\lim a_n = 0$, show that $\lim(a_nb_n) = 0$ -QUESTION [11 upvotes]: This is a real-analysis homework question so I of course have to be very precise and justify anything or any theorem I use. -If $b_n$ is a bounded sequence and $\lim(a_n) = 0$, show that $\lim(a_nb_n) = 0$ -Intuitively, since $b_n$ is bounded, $\sup(b_n)$ is some finite number, and therefore we can take a natural number $N$ as large as we need so that for all $n\gt N$, $b_na_n$ is close to $0$. -At first I thought to use the limit theorems, but since $b_n$ need not converge, the general limit theorems do not apply. (I am referring to $\lim(X + Y) = \lim X + \lim Y$ for $X,Y$ sequences etc). -I was thinking then to use the definition of the limit somehow to show that since $b_n$ is bounded we can take as intuitively stated above $N$ large enough to show the statement is true. I'm not sure how to proceed with this. -Thank you for your replies in advance! - -REPLY [3 votes]: I think this old question deserves an answer straight from the definition of a limit. -The sequence $(b_n)$ is bounded, so take an $M$ such that $\left|b_n\right|<M$ for all $n$, and let $\varepsilon>0$. Since $(a_n)$ converges to $0$ there is an $N$ such that $|a_n|<\frac\varepsilon M$ for all $n\ge N$. Then $|a_n b_n|<\frac\varepsilon M\cdot M=\varepsilon$ for all $n\ge N$. This shows that $(a_nb_n)$ converges to $0$.<|endoftext|> -TITLE: $\frac{\mathrm d^2 \log(\Gamma (z))}{\mathrm dz^2} = \sum\limits_{n = 0}^{\infty} \frac{1}{(z+n)^2}$ -QUESTION [10 upvotes]: How do I show -$$\frac{\mathrm d^2 \log(\Gamma(z))}{\mathrm dz^2} = \sum_{n = 0}^{\infty} \frac{1}{(z+n)^2}$$? -$\Gamma(z)$ is the gamma function. - -REPLY [10 votes]: Use the Hadamard product formula -$\Gamma(z) = \frac{e^{-\gamma z}}{z} \prod_{k=1}^\infty \left( 1 + \frac{z}{k} \right)^{-1} e^{z/k} $ -Then, note that -$\frac{d \log(\Gamma(z))} {dz} = \frac{\Gamma'(z)}{\Gamma(z)} $ -For an infinite product, there is an easy way to compute this expression. 
If -$ f(z) = \prod f_n(z)$ -then it is not hard to prove that -$ \frac{f'(z)}{f(z)} = \sum \frac{f_n'(z)}{f_n(z)} $ -Applying this to the Gamma function gives -$ \frac{\Gamma'(z)}{\Gamma(z)} = -\gamma - \frac{1}{z} + \sum_{k=1}^\infty \left( \frac{-1}{k(1 + z/k)} + \frac{1}{k} \right) $ -Then we have to take one more derivative to get -$ \frac{d^2 \log(\Gamma(z))} {dz^2} = \frac{1}{z^2} + \sum_{k=1}^\infty \frac{1}{(k + z)^2} = \sum_{n=0}^\infty \frac{1}{(z+n)^2} $ - -REPLY [8 votes]: Let's assume the Gauss formula -$$\frac{\Gamma'(a)}{\Gamma(a)}+\gamma=\int_{0}^{1}\frac{1-t^{a-1}}{1-t}dt$$ -holds (where $\gamma$ is the Euler–Mascheroni constant). Integrating the identity -$$\frac{1-t^{a-1}}{1-t}=\sum_{k=0}^{\infty}(t^{k}-t^{a+k-1})$$ -yields the series -$$\frac{d\ln \Gamma(a)}{da}+\gamma=\sum_{k=0}^{\infty}\left(\frac{1}{k+1}-\frac{1}{a+k}\right)$$ -which converges uniformly on finite intervals $a\in[0,A]$. Now we can differentiate the latter series in $a$ to obtain that -$$\frac{d^2\ln\Gamma(a)}{da^2}=\sum_{k=0}^{\infty}\frac{1}{(a+k)^2}.$$ -The differentiation is valid since the resulting series converges uniformly for $a\geq 0.$ - -Derivation of the Gauss formula. -Using the basic properties of the beta function, we get -$$\Gamma(b)-B(a,b)=\Gamma(b)-\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}=\frac{b\Gamma(b)(\Gamma(a+b)-\Gamma(a))}{b\Gamma(a+b)}$$ -$$=\frac{\Gamma(b+1)}{\Gamma(a+b)}\cdot\frac{\Gamma(a+b)-\Gamma(a)}{b}.$$ -Passing to the limit $b\to0$ yields -$$\frac{d\ln \Gamma(a)}{da}=\frac{\Gamma'(a)}{\Gamma(a)}=\lim\limits_{b\to 0}(\Gamma(b)-B(a,b)),$$ -or -$$\frac{\Gamma'(a)}{\Gamma(a)}=\lim\limits_{b\to 0}\int_{0}^{\infty}x^{b-1}\left(e^{-x}-\frac{1}{(1+x)^{a+b}}\right)dx=\int_{0}^{\infty}\left(e^{-x}-\frac{1}{(1+x)^{a}}\right)\frac{dx}{x}.\qquad(1)$$ -Identity (1) can be used to define the Euler constant $\gamma$: -$$-\gamma:=\frac{\Gamma'(1)}{\Gamma(1)}=\int_{0}^{\infty}\left(e^{-x}-\frac{1}{1+x}\right)\frac{dx}{x}.\qquad(2)$$ -Subtracting (2) from (1) and using the substitution $t=\frac{1}{1+x}$ we obtain that -$$\frac{\Gamma'(a)}{\Gamma(a)}-\frac{\Gamma'(1)}{\Gamma(1)}=\int_{0}^{1}\frac{1-t^{a-1}}{1-t}dt.$$<|endoftext|> -TITLE: When $L_p = L_q$? -QUESTION [5 upvotes]: As we know, $L_q \subseteq L_p$ when $0 < p < q$ for a probability measure; I was wondering when $L_p = L_q$ is true and why. Does one have to impose some restriction on the underlying space? Thanks! - -REPLY [17 votes]: This is essentially Exercise 6.5 in Folland's Real Analysis. Let $(X,\mathcal{F},\mu)$ be a measure space, and let $$m = \inf\{\mu(A) : A \in \mathcal{F}, \mu(A) > 0\}$$ -$$M = \sup\{\mu(A) : A \in \mathcal{F}, \mu(A) < \infty\}$$ -For $0 < p < q < \infty$, it is a fact that $L^p(\mu) \subset L^q(\mu)$ iff $m > 0$, and $L^q(\mu) \subset L^p(\mu)$ iff $M < \infty$ (which in particular holds when $\mu$ is finite). -So for a finite measure $\mu$, it is necessary and sufficient for $L^p = L^q$ that for some $m > 0$ every set either has measure 0 or has measure at least $m$. This is going to force your space to be "effectively" finite in some sense.<|endoftext|> -TITLE: Finding subgroups of a free group with a specific index -QUESTION [19 upvotes]: How many subgroups with index two are there of a free group on two generators? What are their generators? - -All I know is that the subgroups should have $(2 \times 2) + 1 - 2 = 3$ generators. - -REPLY [27 votes]: Mariano's answer is quite right, but relies on the 'accident' that all subgroups of index two are normal. 
If you wanted to find all the subgroups of index three, say, you would need to try another approach. Here's one idea. -The free group of rank two, $F$, is the fundamental group of the figure-eight graph $\Gamma$, which has one vertex and two edges. Fix the vertex as the base point. The subgroups of index $k$ correspond, via covering-space theory, to connected covering spaces $\widehat{\Gamma}\to\Gamma$ of degree $k$, with a choice of base vertex $\hat{v}\in\widehat{\Gamma}$. (If, on the other hand, you only wanted to count conjugacy classes of subgroups, you could forget about $\hat{v}$.) Because the covering map $p:\widehat{\Gamma}\to\Gamma$ has degree $k$, the graph $\widehat{\Gamma}$ has exactly $k$ vertices. -Let's decorate the graph $\Gamma$ so that we can see $F=\langle a,b\rangle$ more clearly. Label each edge with the corresponding generator $a$ or $b$. Furthermore, orient each edge to indicate in which direction the generator goes around it. -You can use the covering map $p$ to pull the labels and orientations back from $\Gamma$ to $\widehat{\Gamma}$. That is, if $\hat{e}$ is an edge of $\widehat{\Gamma}$ and $p(\hat{e})=e$ is labelled $a$, then you should label $\hat{e}$ with $a$ also, and orient $\hat{e}$ so that $p$ sends the orientation on $\hat{e}$ to the orientation on $e$. -With a little thought, it's not too hard to see that this decoration on $\widehat{\Gamma}$---the labels and orientation---are enough to reconstruct the map $p$: they tell you where to send each edge, and there's only one choice of where to send each vertex. The statement that $p$ is a covering map translates into a nice condition on the decoration of $\widehat{\Gamma}$: for each vertex $\hat{u}$, you see exactly one edge with each label going into $\hat{u}$, and exactly one edge with each label going out of $\hat{u}$. I'll call this condition 'the covering condition'. -This discussion turns counting the subgroups of index $k$ into the combinatorial problem of counting the connected, decorated, based graphs $\widehat{\Gamma}$ with $k$ vertices that satisfy the covering condition. With a little thought, one can write down a formula for this. Marshall Hall Jr did just this (though I don't think he thought in terms of the covering spaces of graphs) and came up with the following formula. Let $N(k,r)$ be the number of subgroups of index $k$ in the free group of rank $r$. Then -$N(k,r)=k(k!)^{r-1}-\sum_{i=1}^{k-1}((k-i)!)^{r-1}N(i,r)$. -For some related things, I wrote a blog post about proving theorems about free groups by thinking about covering spaces of graphs here. -Alternatively, if you don't like covering spaces, an equivalent point of view is to count transitive actions by permutation groups with $r$ generators on based sets of size $k$. This turns out to be the same combinatorial problem.<|endoftext|> -TITLE: Prove by induction $T(n) = 2T(\frac{n}{2}) + 2$ -QUESTION [6 upvotes]: I'm stuck with this induction proof: -So far, given: -$\begin{align*} T(1) & = 2 \\ T(n) & = 2T(n/2)+2 \\ & = 2(2T(n/[2^2])+2) + 2 \\ & = [2^2]T(n/[2^2]) + [2^2] + 2 \\ & = [2^2](2T(n/[2^3])+2) + [2^2] + 2 \\ & = [2^3]T(n/[2^3]) + [2^3] + [2^2] + 2 \\ & = [2^3]T(n/[2^3]) + 2\{[2^2] + [2^1] + 1\} \\ & \vdots \\ & = [2^k]T(n/[2^k]) + 2\{2^{k} - 1\} \end{align*}$ -How then do I show this to be correct (the proof)? So far I have: -Let $(n/[2^k]) = 1$ -$\Rightarrow n = 2^k$ -So, $T(n) = nT(1) + 2(n - 1)$ -$T(n) = 4n - 2$ //This is where I'm stuck. -Proof (by induction): -When $n = 1$, $T(1) = 2$. 
-Assume $T(k)$ is true [$T(n) = 4n - 2$] //This is where I am stuck. - -REPLY [3 votes]: HINT $\: $ From the first few values we guess $\rm\ T(2^n)\ =\ 2^{n+2}-2\ $ and induction confirms it: -$$\rm T(2^{n+1})\ =\ 2\ T(2^n) + 2 \ =\ 2\ (2^{n+2}-2) +\ 2\ =\ 2^{n+3} - 2$$ -One can extend $\rm\:T\:$ to $\:\mathbb N\:$ by defining $\rm\ T(2k+1) = 2\ T(k+1)-2 $ and now one easily proves by induction that $\rm\ T(k) = 4\:k-2\ $ since -$\rm\quad\quad\quad\quad\quad\quad\quad T(2k+1)\ =\ 2\ T(k+1)-2\ =\ 2\ (4k+2)-2\ =\ 4(2k+1)-2 $ -$\rm\quad\quad\quad\quad\quad\quad\quad\quad\quad\ T(2k)\ =\ 2\ \ \ \ T(k)\ \ +\ \ \: 2\ =\ 2\ (4k-2) + 2\ =\ 4 (2k) - 2 $ - -REPLY [3 votes]: Here is how I attempt this from the point where you left off: -We know that $\rm T(1) = 2$. We are trying to prove that $\rm T(n) = 4n-2$. -This is trivially true for $n = 1$: -$ \rm \begin{eqnarray*} T(1) &=& 4(1) - 2\\ &=& 4 - 2\\ &=& 2 \\ \end{eqnarray*} $ -Assume -$\rm T(k) = 4k - 2 $ -From the original definition: -$\rm T(k+1) = 2T( [k+1] / 2 ) + 2 $ -//since we assumed the formula is correct up to $T(k)$, and $(k+1)/2$ is at most $k$, we substitute: -So, we have -$ \rm \begin{eqnarray*} &2&( 4( (k+1) /2) - 2 ) + 2 \\ &=& 4(k+1)- 4 + 2\\ &=& 4(k+1) - 2 \end{eqnarray*} $ -Proven.<|endoftext|> -TITLE: Applications of the Mean Value Theorem -QUESTION [18 upvotes]: What are some interesting applications of the Mean Value Theorem for derivatives? Both the 'extended' and 'non-extended' versions as seen here are of interest. -So far I've seen some trivial applications like finding the number of roots of a polynomial equation. What are some more interesting applications of it? -I'm asking this as I'm not exactly sure why MVT is so important - so examples which focus on explaining that would be appreciated. - -REPLY [2 votes]: MVT is very important. In calculus and analysis, of course. But it's important in other areas too, like applied mathematics and even number theory. For example, for showing Liouville numbers are transcendental.<|endoftext|> -TITLE: Expected number of steps before three counters reach N modulo 2N at the same time -QUESTION [13 upvotes]: We have three counters, $i, j, k$, all initialized to zero. Each step consists of adding or subtracting one from one of the counters, so $(\Delta i, \Delta j, \Delta k)$ is selected among $(\pm1, 0, 0), (0, \pm1, 0), (0, 0, \pm1)$, each with probability 1/6. What is the expected number of steps before all the counters are N modulo 2N at the same time? - -REPLY [5 votes]: Here's a sketch of another derivation of the hitting time for one counter (re Mike's answer). -Consider the usual $\pm 1$ random walk on the integer line. It is well known that the expected time it takes to reach distance $N$ is $N^2$. In our case reaching distance $N$ and reaching the value $N$ modulo $2N$ is the same, since we can't get to $\pm 3N$ without moving through $\pm N$. -We move the specified counter in expectation once every three steps, and therefore it's $3N^2$ and not $N^2$ (because the direction of movement and this "waiting for our turn" are independent). -This argument shows that the case of one counter is rather special.<|endoftext|> -TITLE: Volume of a geodesic ball -QUESTION [12 upvotes]: This may be embarrassingly simple, but I can't see it. -Let $M$ be a Riemannian manifold of dimension $n$; fix $x \in M$, and let $B(x,r)$ denote the geodesic ball in $M$ of radius $r$ centered at $x$. Let $V(r) = \operatorname{Vol}(B(x,r))$ be the Riemannian volume of $B(x,r)$. 
It seems to be the case that for small $r$, $V(r) \sim r^n$, i.e. $V(r)/r^n \to c$ with $0 < c < \infty$. How is this proved, and where can I find it? -Given a neighborhood $U \ni x$ and a chart $\phi : U \to \mathbb{R}^n$, certainly $\phi$ has nonvanishing Jacobian, hence (making $U$ smaller if necessary) bounded away from 0. So $\operatorname{Vol}(\phi^{-1}(B_{\mathbb{R}^n}(\phi(x), r))) \sim r^n$. But I do not see how to relate the pullback $\phi^{-1}(B_{\mathbb{R}^n}(\phi(x), r))$ of a Euclidean ball to a geodesic ball in $M$. - -REPLY [7 votes]: It's simple, all right. As I realized not long after posting (and as Hans also suggested), the key is the exponential map. The tangent space $T_x M$ gets an inner product space structure from the Riemannian metric; we can isometrically identify it with $\mathbb{R}^n$. Now $\exp_x : \mathbb{R}^n \to M$ is a diffeomorphism on some small ball $B_{\mathbb{R}^n}(0,\epsilon)$; on this ball, straight lines map to length-minimizing geodesics (see Do Carmo, Riemannian Geometry, Proposition 3.6), and thus Euclidean balls map to geodesic balls of the same radius. Taking $\epsilon$ smaller if necessary, we can assume the Jacobian of $\exp_x$ is bounded away from $0$ and $\infty$ on $B_{\mathbb{R}^n}(0, \epsilon)$; thus for $r < \epsilon$ we have that $\operatorname{Vol}(B(x,r))$ is comparable to $\operatorname{Vol}(B_{\mathbb{R}^n}(0,r)) \sim r^n$.<|endoftext|> -TITLE: How to raise a complex number to the power of another complex number? -QUESTION [15 upvotes]: How do I calculate the outcome of taking one complex number to the power of another, i.e. $\displaystyle {(a + bi)}^{(c + di)}$? - -REPLY [5 votes]: I transcribe part of my answer to this question. -The complex exponential $e^z$ for complex $z=x+iy$ preserves the law of exponents of the real exponential and satisfies $e^0=1$. -By definition -$$e^z=e^{x+iy}=e^xe^{iy}=e^x(\cos y+i\sin y)$$ -which agrees with the real exponential function when $y=0$. The principal logarithm of $z$ is the complex number -$$w=\text{Log }z=\log |z|+i\arg z$$ -so that $e^w=z$, where $\arg z$ (the principal argument of $z$) is the real number in $-\pi\lt \arg z\le \pi$, with $x=|z|\cos (\arg z)$ and $y=|z|\sin (\arg z)$. -The complex power is -$$z^w=e^{w\text{ Log} z}.$$ -In your case you have: $z=a+bi,w=c+di$ -$$\begin{eqnarray*} \left( a+bi\right) ^{c+di} &=&e^{(c+di)\text{Log }(a+bi)} \\ &=&e^{(c+di)\left( \ln |a+bi|+i\arg (a+bi)\right) } \\ &=&e^{c\ln \left\vert a+ib\right\vert -d\arg \left( a+ib\right) +i\left( c\arg \left( a+ib\right) +d\ln \left\vert a+ib\right\vert \right) } \\ &=&e^{c\ln \left\vert a+ib\right\vert -d\arg(a+bi)}\times \\ &&\left( \cos \left( c\arg \left( a+ib\right) +d\ln \left\vert a+ib\right\vert \right) +i\sin \left( c\arg \left( a+ib\right) +d\ln \left\vert a+ib\right\vert \right) \right). \end{eqnarray*}$$<|endoftext|> -TITLE: Spherical geometry: Arbitrary point between two points -QUESTION [7 upvotes]: If A and B are two points on the earth, how could I find any arbitrary point between them along the shorter arc of their great-circle path? -Points are in radians -longitude = $0$ to $2\pi$ -latitude = $0$ to $\pi$, $0$ being at the north pole -Points are not antipodal -I desire something where I specify a range $0.0$ to $1.0$, with $0.0$ being point A and $1.0$ being point B and $0.5$ being the midpoint between them, with all other values being their corresponding points. Thanks! -Note: This is not homework. I'm 41yrs old and this is for a personal project I'm working on. 
- -REPLY [3 votes]: This aviation website has the information that you were looking for. The formula presented there returns the latitude and longitude of a point that is a fraction $f$ between points A and B except when they are antipodal just as you mentioned in the question.<|endoftext|> -TITLE: Upper Bounds on the Number of Lattice Points in an $n$-Simplex -QUESTION [5 upvotes]: Let $\Omega = $ {$\omega_{i}$} be an ordered set of $n$ positive reals in the unit interval, $\omega_{1} \leq \cdots \leq \omega_{n} \leq 1$. Define the $n$-simplex $\Delta(\Omega; (\mathbb{R}^{+})^{n})$ by the non-negative points $(x_{1}, \dots, x_{n}) \subset (\mathbb{R}^{+})^{n}$ which satisfy the inequality -\begin{eqnarray} \omega_{1} x_{1} + \cdots + \omega_{n} x_{n} \leq 1. \end{eqnarray} -Let $X$ be a non-trivial subset of the integers $\mathbb{Z}^{n}$. Define $\Delta(\Omega, X) = X \cap \Delta(\Omega, (\mathbb{R}^{+})^{n})$. It is well-known that -\begin{eqnarray} |\Delta(\Omega, \mathbb{N}^{n})| \leq \frac{1}{n!} \prod_{i = 1}^{n} \frac{1}{\omega_{i}} \quad \text{and} \quad |\Delta(\Omega, (\mathbb{Z}^{+})^{n})| \leq \frac{1}{n!} \left(1 + \sum_{i = 1}^{n} \omega_{i} \right)^{n} \prod_{i = 1}^{n} \frac{1}{\omega_{i}}, \end{eqnarray} -where $\mathbb{Z}^{+}$ denotes the set of non-negative integers. -Question(s): For the given bounds above, are any sharper bounds known? Given the similarity in form, are there formulas for other $X$ sets, say for integers greater than some arbitrary integer $c$ or integers satisfying some congruence condition (e.g., $a \equiv b$ mod $d$)? -(Update) The theory of Ehrhart polynomials is relevant to the question above. -Question: Suppose I'd like to use the Ehrhart machinery to count the number of non-negative integer solutions of $a_{1} x_{1} + \cdots + a_{n} x_{n} \leq r$ for a non-negative integer $r$ and positive integers {$a_{i}$}. How does one proceed? -Thanks! - -REPLY [4 votes]: In answer to your question on how one uses the Ehrhart machinery to count the number of lattice points in $a_1 x_1 + \cdots + a_n y_n \le r$ with integer $r$ and $a_i$ see these papers of Matt Beck on counting lattice points in rational simplices (and that simplex in particular.) Other papers his are probably also relevant. I am pretty sure that it is also covered in Computing the Continuous Discretely (It is worth reading regardless of whether it answers your question.)<|endoftext|> -TITLE: Banach Tarski Paradox -QUESTION [9 upvotes]: I know that Banach Tarski Paradox gives that "A three-dimensional Euclidean ball is equidecomposable with two copies of itself" and I have read that doubling the ball can be accomplished with five pieces. My question is : -"Is it possible to construct (mathematically) these five sets or is the proof more of an existence result and not a construction?" If it is possible to construct, I would really appreciate if someone can show the construction here. - -REPLY [15 votes]: No, it is not possible to explicitly construct the pieces. The pieces are, by necessity, non-measurable. This means that they cannot be Borel sets, which covers essentially every "explicit construction" technique you might think of. -Moreover, there are models of ZF set theory (without the axiom of choice) in which the Banach-Tarski paradox fails. So the construction must, necessarily, make use of some form of the axiom of choice. This means that an even wider range of construction techniques - those that can be carried out in ZF - are insufficient to form the decomposition. 
-Edit: -After clarification, it seems that one part of the question is to find an explicit proof that (using the axiom of choice) it is possible to get a decomposition using exactly 5 pieces. A proof of this is given in Francis Su's thesis on the paradox (PDF). Theorem 20 gives the proof of the five-piece decomposition, and by tracing back the previous results you can work out exactly what the pieces are. That thesis is, by the way, a wonderful reference for many other aspects of the paradox as well.<|endoftext|> -TITLE: Quasiseparated if finitely covered by affines in appropriate way -QUESTION [11 upvotes]: I've been reading Vakil's notes on algebraic geometry (on my own -- this is not part of a class), and I'm stuck on one problem (number 6.1.H). It goes as follows. -Let $X$ be a scheme. Prove that $X$ is quasicompact and quasiseparated if and only if $X$ can be covered by a finite number of affine open subsets, any two of which have intersection also covered by a finite number of affine open subsets. -It's not hard to show one direction, namely that if $X$ is quasicompact and quasiseparated then it has a cover of the indicated form. It's also not hard to prove that if $X$ has a cover of the indicated form, then $X$ is quasicompact. I'm having difficulty with the "quasiseparated" part. -Thank you very much for any help. - -REPLY [7 votes]: I also wanted a topological proof, since it's nice to know what the proofs are in each language; a proof translated from another language (e.g. quasi-separatedness via diagonal morphisms) usually says things in a different way, which is not always as enlightening. I cleaned up moji's proof (because I am not lazy ;)!) - -A scheme $X$ is qcqs (short for quasi-compact and quasi-separated) if and only if there exists a finite open affine cover $\{U_1,\cdots,U_n\}$ such that each intersection $U_i \cap U_j$ admits a finite open affine cover $\{V_{ij1},\cdots,V_{ijk_{ij}}\}$ (where $k_{ij} \in \mathbb N$ depends on $i$ and $j$). - -Proof : ($\Rightarrow$) Pick a finite open affine cover $\{U_1,\cdots,U_n\}$ of $X$ by quasi-compactness. Since $X$ is quasi-separated and the $U_i$ are quasi-compact open subsets, the intersections $U_i \cap U_j$ are quasi-compact and therefore admit a finite open affine cover $\{V_{ij1},\cdots,V_{ijk_{ij}}\}$. -($\Leftarrow$) Let $U \subseteq X$ be a quasi-compact open subset. We claim that for each $\alpha=1,\cdots,n$, $U \cap U_{\alpha}$ is quasi-compact. It suffices to deal with the case of $\alpha=1$. Because $U$ is a scheme, its topology admits a basis consisting of quasi-compact open neighborhoods (take a finite open affine cover and the basis of distinguished open subsets of each of those affines). Write - $$U = \bigcup_{j=1}^n U \cap U_j = \bigcup_{j=1}^n \bigcup_{\ell \in L_j} W_{j\ell}$$ - where $W_{j\ell} \subseteq U \cap U_j$ is a quasi-compact open subset. Since $U$ is quasi-compact, choose finite subsets $M_1 \subseteq L_1, \cdots, M_n \subseteq L_n$ such that the above equality still holds. Intersecting this with $U_1$, we get - $$U \cap U_1 = \bigcup_{j=1}^n \bigcup_{\ell \in M_j} W_{j\ell} \cap U_1.$$ - Pick $j > 1$ and $\ell \in M_j$, so that for any $1 \le k \le k_{1j}$, the open subsets $V_{1jk}, W_{j\ell} \subseteq U_j$ are quasi-compact. Because $U_j$ is quasi-separated, $V_{1jk} \cap W_{j\ell}$ is quasi-compact. 
This means that - $$U \cap U_1 = \bigcup_{j=1}^n \bigcup_{\ell \in M_j} W_{j\ell} \cap U_1 \overset{(!)}= \bigcup_{j=1}^n \bigcup_{\ell \in M_j} W_{j\ell} \cap U_1 \cap U_j = \bigcup_{j=1}^n \bigcup_{\ell \in M_j} \bigcup_{k=1}^{k_{1j}} W_{j\ell} \cap V_{1jk}$$ - is quasi-compact. (The $(!)$ is because $W_{j\ell} \subseteq U_j$ for each $j$. This seemed to be the cause of many incorrect edits to my proof.) -With this lemma in hand, if $U, U' \subseteq X$ are quasi-compact, then for $i=1,\cdots,n$, we see that $U \cap U' \cap U_i = (U \cap U_i) \cap (U' \cap U_i)$ is quasi-compact by the quasi-separatedness of $U_i$ and the quasi-compactness of $U \cap U_i$ and $U' \cap U_i$, so $X$ is quasi-separated. -Hope that helps,<|endoftext|> -TITLE: Area of a spherical triangle -QUESTION [10 upvotes]: Consider a spherical triangle with vertices $A, B$ and $C$, respectively. How to determine its area? -I know the formula: - -$A = E R^2$, - -where $R$ is the radius of the sphere, and $E$ is the spherical excess $(a + b + c - \pi)$, but how do I determine the angles $\angle ABC$, $\angle ACB$ and $\angle BAC$? - -REPLY [17 votes]: If you know the angular distance between the points, L'Huilier's Formula (cited in Derek Jennings' answer) gives -$$\tan\left(\frac{E}{4}\right)=\sqrt{\tan\left(\frac{s}{2}\right)\tan\left(\frac{s-a}{2}\right)\tan\left(\frac{s-b}{2}\right)\tan\left(\frac{s-c}{2}\right)}\tag{1}$$ -where $a=\operatorname{ang}(B,C)$, $b=\operatorname{ang}(C,A)$, $c=\operatorname{ang}(A,B)$, and $s=\frac{a+b+c}{2}$. -If $A$, $B$, and $C$ are given as points in $\mathbb{R}^3$, then you can use $(1)$ with -$$\operatorname{ang}(A,B)=\cos^{-1}\left(\frac{A\cdot B}{|A||B|}\right)\tag{2}$$ -or, to answer the question you asked, you can compute -$$\angle CAB=\cos^{-1}\left(\frac{(C\;A\cdot A-C\cdot A\;A)\cdot(B\;A\cdot A-B\cdot A\;A)}{|C\;A\cdot A-C\cdot A\;A||B\;A\cdot A-B\cdot A\;A|}\right)\tag{3}$$ -and simply compute $E=\angle CAB + \angle ABC + \angle BCA - \pi$. -If $A$, $B$, and $C$ are given in other formats, there are probably ways to handle those, too, but more specific information would be needed. -Addition: Using cross products simplifies $(3)$ a bit: -$$\angle CAB=\cos^{-1}\left(\frac{(C\times A)\cdot(B\times A)}{|C\times A||B\times A|}\right)\tag{4}$$<|endoftext|> -TITLE: Does this property characterize a space as Hausdorff? -QUESTION [105 upvotes]: As a result of this question, I've been thinking about the following condition on a topological space $Y$: - -For every topological space $X$, $E\subseteq X$, and continuous maps $f,g\colon X\to Y$, if $E$ is dense in $X$, and $f$ and $g$ agree on $E$ (that is, $f(e)=g(e)$ for all $e\in E$), then $f=g$. - -If $Y$ is Hausdorff, then $Y$ satisfies this condition. The question is whether the converse holds: if $Y$ satisfies the above condition, will it necessarily be Hausdorff? -If $Y$ is not at least $T_1$, then $Y$ does not have the property: if $u,v\in Y$ are such that $u\neq v$ and every open neighborhood of $u$ contains $v$, then let $X$ be the Sierpinski space, $X=\{a,b\}$, $a\neq b$, with topology $\tau=\{\emptyset,\{b\},X\}$, $E=\{b\}$, let $f,g\colon X\to Y$ be given by $f(a)=f(b)=v$, and $g(a)=u$, $g(b)=v$. Then both $f$ and $g$ are continuous, agree on the dense subset $E$, but are distinct. 
-My attempt at a proof of the converse assumes the Axiom of Choice and proceeds as follows: assume $Y$ is $T_1$ but not $T_2$; let $s$ and $t$ be witnesses to the fact that $Y$ is not $T_2$; let $\mathcal{U}_s$ and $\mathcal{V}_t$ be the collections of all open nbds of $s$ that do not contain $t$, and of all open nbds of $t$ that do not contain $s$, respectively. Construct a net with index set $\mathcal{U}_s\times\mathcal{V}_t$ (ordered by $(U,V)\leq (U',V')$ if and only if $U'\subseteq U$ and $V'\subseteq V$) by letting $y_{(U,V)}$ be a point in $U\cap V$ (this is where AC comes in). Let $E=\{y_{(U,V)}\mid (U,V)\in\mathcal{U}_s\times\mathcal{V}_t\}$, and let $X=E\cup\{s\}$. Give $X$ the induced topology; let $f\colon X\to Y$ be the inclusion map, and let $g\colon X\to Y$ be the map that maps $E$ to itself identically, but maps $s$ to $t$. -The only problem is I cannot quite prove that $g$ is continuous; the difficulty arises if I take an open set $\mathcal{O}\in \mathcal{V}_t$; the inverse image under $g$ is equal to $((\mathcal{O}\cap X)-\{t\})\cup\{s\}$, and I have not been able to show that this is open in $X$. -So: - -Does the condition above characterize Hausdorff spaces? - -If not, I would appreciate a counterexample. If it does characterize Hausdorff, then ideally I would like a way to finish off my proof, but if the proof is unsalvageable (or nobody else can figure out how to finish it off either) then any proof will do. - -Added: A little digging turned up this question raised in the Problem Section of the American Mathematical Monthly back in 1964 by Alan Weinstein. The solution by Sim Lasher gives a one-paragraph proof that does not require one to consider $T_1$ and non-$T_1$ spaces separately. - -REPLY [13 votes]: This problem has a very straightforward solution when you conceptualize convergence of nets in terms of continuity of maps. Given a directed set $I$, the statement that a net $(y_i)_{i\in I}$ in a space $Y$ converges to a point $y$ can be expressed in terms of continuity of a map. Namely, let $X=I\cup\{\infty\}$, topologized such that a set is open iff either it does not contain $\infty$ or it contains the set $\{j\in I:j\geq i\}$ for some $i\in I$. Then $(y_i)$ converges to $y$ iff the map $i\mapsto y_i$, $\infty\mapsto y$ is a continuous map from $X$ to $Y$. -Now we use the characterization of Hausdorff spaces in terms of nets: a space is Hausdorff iff every net has at most one limit. So if $Y$ is not Hausdorff, there is some net $(y_i)_{i\in I}$ in $Y$ which converges to two distinct points $y$ and $y'$, which means that the map $I\to Y$ sending $i$ to $y_i$ can be extended continuously to $X$ by sending $\infty$ to either $y$ or $y'$. Since $I$ is dense in $X$, this is exactly what we're looking for.<|endoftext|> -TITLE: If $(I-T)^{-1}$ exists, can it always be written in a series representation? -QUESTION [5 upvotes]: If $X$ is a Banach space, and $T:X \to X$ is a bounded linear operator with norm $<1$, then $I-T$ has a bounded inverse defined by $(I-T)^{-1} = \sum_{n=0}^\infty T^n$. -Thinking in terms of a converse, if $T$ is any bounded linear operator defined on $X$, then does the existence of a bounded inverse $S=(I-T)^{-1}$ imply that $S$ can be represented as $S=\sum_{n=0}^\infty T^n$? - -REPLY [13 votes]: No, not even in the finite-dimensional case. If $T$ is a linear map -from $\mathbb{R}^n$ to itself with all eigenvalues $>1$ in absolute -value, then $I-T$ is invertible, but $\sum T^n$ certainly does not -converge. 
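-For a concrete instance of this (a minimal sketch, assuming Python with NumPy; here $T=2I$ on $\mathbb{R}^2$, so every eigenvalue is $2$):
-    import numpy as np
-
-    I = np.eye(2)
-    T = 2 * np.eye(2)              # all eigenvalues equal 2 > 1
-    print(np.linalg.inv(I - T))    # (I-T)^{-1} = -I exists just fine
-
-    # ...but the Neumann partial sums I + T + T^2 + ... blow up:
-    S, P = np.zeros((2, 2)), np.eye(2)
-    for n in range(20):
-        S, P = S + P, P @ T
-    print(S[0, 0])                 # 2^20 - 1 = 1048575, diverging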
-REPLY [5 votes]: Well, there is a slightly weaker condition that improves the result you cite. -Suppose $T$ is a bounded operator with spectral radius $r(T)< 1$. -Then by the spectral radius formula we have -$$\lim_{n\to\infty}\|T^n\|^{1/n} =\inf_n\|T^n\|^{1/n}=r(T)< 1$$ -which ensures the convergence of -$$\sum_{n=0}^\infty\|T^n\|$$ -which in turn, by the triangle inequality, bounds -$$\left\|\sum_{n=0}^\infty T^n\right\|$$ -Now, it is a standard exercise to show that -$$\sum_{n=0}^\infty T^n=(I-T)^{-1}$$ -If there is any doubt at all do not hesitate to ask. - -**Edit:** By the Banach algebra inequality we have -$\|T^n\|\le\|T\|^n$, which means that $$r(T)=\inf_k\|T^k\|^{1/k}\le\|T^n\|^{1/n}\le\|T\|$$ - Hence $\|T\|<1$ implies not only $r(T)<1$, but also $r(T)\le\|T\|<1$. - -Also, this is not the case in the example of Robin above, because we also have $r(T)=\sup\{|\lambda|:\lambda\in\sigma(T)\}$ where $\sigma(T)$ is the spectrum of $T$ (the set of all $\lambda\in\mathbb{C}$ such that $\lambda I-T$ is not invertible) and the eigenvalues of $T$ are certainly in the spectrum. (Note that $r(T)$ is the radius of the smallest closed disc that contains $\sigma(T)$ - hence the name spectral radius).<|endoftext|> -TITLE: Which tessellation of the sphere yields a constant density of vertices? -QUESTION [23 upvotes]: One way to tessellate a 3D sphere is by iterated subdivision of an icosahedron. I am wondering whether this method gives a homogeneous surface density of vertices. To the eye, it seems to do so, and logic indicates that too (each face has the same area in the icosahedron, and faces in each subdivision are created of equal area), but is there some bias that I'm not thinking of? -Otherwise, what tessellation method can yield a constant density of vertices? - -REPLY [23 votes]: There are several possible ways of defining "density on a sphere", each one giving somewhat different results. -Alas, most of them have some "maximum number of vertices" that gives exactly equal density. -Above that maximum number, further tessellation can at best approximate constant density. -(That approximation is more than adequate for many purposes). -"Unfortunately, it is a well-known group theoretical result that there are no completely regular point distributions on the sphere for N > 20." -- Max Tegmark -Equal density as equal areas of the triangles formed by the vertices: -You can tessellate a sphere to give a geodesic sphere such that every triangle has exactly equal area, to any desired resolution, using any equal-area projection such as the Snyder equal area projection. -(A few people use geodesic grids based on this principle). -Equal density as congruent triangles formed by the vertices: -When a person builds a geodesic dome out of panels, it would be super-convenient if every panel were identically the same size and shape. -Alas, the maximum "size" is the 120 identical faces of the -hexakis icosahedron (aka disdyakis triacontahedron). -Any convex solid with more than 120 faces must necessarily have 2 or more kinds of faces. -Equal density as minimum-energy configurations of charged particles: -Min-Energy Configurations of Electrons On A Sphere. -You can put any integer number of repelling particles on a sphere, and calculate some minimum-energy configuration. -Equal density as equal distance from every vertex to the N nearby vertices: -When a person builds a geodesic dome out of struts, it would be super-convenient if all the struts were the same length. 
-Most "naive" methods of dividing the large triangles of the icosahedron into smaller triangles generates lots of different edge lengths; -but there are ways to "tweak" the tessellation subdivision in order to minimize the number of different lengths of edges. -(Fewer unique lengths requires fewer jigs in manufacturing and fewer spares needed to replace any damaged strut). -Alas, the maximum "size" of a strictly convex polyhedron made entirely of equilateral triangles (convex deltahedron) is the 30 edges of the icosahedron. -(You could try to make the pentakis dodecahedron out of 60 equilateral triangles, giving 90 equal-length edges, but then it would be slightly concave). -Any strictly convex solid made of triangles with more than 30 edges must necessarily have 2 or more lengths of edges. -A few more notes on sphere approximation.<|endoftext|> -TITLE: Can the phase of a function be extracted from only its absolute value and its Fourier transform's absolute value? -QUESTION [13 upvotes]: If for a function $f(x)$ only its absolute value $|f(x)|$ and the absolute value $|\tilde f(k)|$ of its Fourier transform $\tilde f(k)=N\int f(x)e^{-ikx} dx$ is known, can $f(x) = |f(x)|e^{i\phi(x)}$ and thus the phase function $\phi(x)$ be extracted? (with e.g. $N=1/(2\pi)$) -As Marek already stated, this is even not uniquely possible for $f(x)=c\in\mathbb C$, since the global phase cannot be re-determined. So please let me extend the question to - -Under what circumstances is the phase-retrieval (up to a global phase) uniquely possible, and what ambiguities could arise otherwise? - -REPLY [5 votes]: The right way to ask the question is: given a function $f\in L²(\mathbb{R})$, can $f$ be determined from $|f|$ and $|\widehat{f}|$ up to a multiplicative constant $c$ of modulus $|c|=1$. -This question dates back to Pauli and the answer is no. One can construct counter examples of the form $a\gamma(x-x_0)+b\gamma(x)+c\gamma(x+x_0)$ with $a,b,c$ properly chose ($\gamma(x)=e^{-\pi x²}$ the standard gaussian so that it is ots own Fourier transform). An other construction is as follows: -take $\chi=\mathbf{1}_{[0,1/2]}$ $(a_j)_{j\in\mathbb{Z}}$ a sequence with finite support (to simplify) and $f(x)=\sum_j a_j\chi(x-j)$ so that -$\hat f(\xi)=\sum_j a_je^{2i\pi j\xi}\hat\chi(\xi)$. -Now we want to construct a sequence $(b_j)$ such that $|a_j|=|b_j|$ and -$\left|\sum_j a_je^{2i\pi j\xi}\right|=\left|\sum_j b_je^{2i\pi j\xi}\right|$. -This can be done via a Riesz product: take $\alpha_1,\ldots,\alpha_N$ a finite real sequence, $\varepsilon_1,\ldots,\varepsilon_N$ a finite sequence of $\pm1$ and consider -$$ -\prod_{k=1}^N (1+i\alpha_j\varepsilon_j\sin 2\pi 3^j\xi)=\sum a_j^{(\varepsilon)}e^{2i\pi j\xi}. -$$ -Changing a $\varepsilon_j$ from $+1$ to $-1$ conjugates one of the factors on the left hand side, so it does not change the modulus. Now the same happens for the $a_j^{(\varepsilon)}$: each of them is either $0$ or a product of $i\alpha_j\varepsilon_j$ (up to a constant) -- the point is that it is not the sum of products of $i\alpha_j\varepsilon_j$'s, this is why we took the $3^j$ in the sine!.<|endoftext|> -TITLE: Convergence/Divergence of $\sum_{n=1}^{\infty} \sin(1/n)$ -QUESTION [7 upvotes]: it is a question Convergence/Divergence of calculus II! Please give me a hand! -Determine convergence or divergence using any method covered so far. 
-$$\sum_{n = 1}^{\infty} \sin (1/n)$$ - -REPLY [2 votes]: We can prove this is divergent through direct comparison, though the way I know uses a Taylor Series expansion, which is not normally covered when students first see this problem. For others, this might shed some new light on the intuition that is lost from the limit comparison test. Sometimes it can feel like the limit comparison test just works "by magic," at least for me. -If you are unfamiliar with a Taylor Series expansion, I encourage you to research it. It becomes quite useful in most portions of applied math. -The first term of the Taylor Series can often hint at the convergence or divergence of a function, as you will see below. For this problem, expand $\sin{x}$ about $x=0$. -$$\sin{x}=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots$$ -If we truncated the Taylor series at the second term, we'd have the following positive Lagrange remainder: -$$\sin{x}=x-\frac{x^3}{3!}+f^{(5)}(c)\frac{x^5}{5!},$$ -where $c$ is between $0$ and $x$. Note that $f^{(5)} (x)=\cos{x}$, so we can replace $f^{(5)}(c)$ with $\cos{(c)}$ from this point on. -Now, if we borrow this same Taylor expansion, but let $x=1/n$ for some positive integer n, we'd get -$$\sin{(\frac{1}{n})}=\frac{1}{n}-\frac{1}{6n^3}+\frac{\cos{(c)}}{5!}\frac{1}{n^5}.$$ -As stated before, looking at the first term $1/n$ hints that this function might diverge. As a similar example, if we were instead observing the function $1-\cos{\frac{1}{n}}$, we'd find the first term to be $\frac{1}{2n^2}$, which hints that it converges. -(This expansion is quite weird when you think about it. We built it centered around $x=0$ which would mean that with this substitution, we centered it around $\lim_{n\to\infty}\frac{1}{n}=0$, an odd concept for Taylor expansions, but we'll work with it.) -If you note that $1/n$ is always going to be positive and $0<\frac{1}{n}\leq1$, this means that $0<c<1$, and hence $\cos(c)>0$, regardless of the exact value of $c$. The resulting remainder will be positive, so removing it will result in an under-approximation. To show this, bring over the other two terms to the left, as seen below: -$$\sin{(\frac{1}{n})}-\frac{1}{n}+\frac{1}{6n^3}=\frac{\cos{(c)}}{5!}\frac{1}{n^5}>0$$ -$$\implies \sin{(\frac{1}{n})}>\frac{1}{n}-\frac{1}{6n^3}$$ -We can use this for our summation. Observing the corresponding series on the right-hand side, we have a divergent series: -$$\sum_{n=1}^{\infty}{\frac{1}{n}-\frac{1}{6n^3}}=\sum_{n=1}^{\infty}{\frac{1}{n}}-\sum_{n=1}^{\infty}{\frac{1}{6n^3}}.$$ -The second series above would be convergent, but the harmonic series $\sum_{n=1}^{\infty}{1/n}$ is famously divergent. By the direct comparison test, because the above series is divergent and all terms of the above series are less than the corresponding terms of -$$\sum_{n=1}^{\infty}{\sin{(\frac{1}{n})}}$$ -we have the following for our sine series -$$\sum_{n=1}^{\infty}{\sin{(\frac{1}{n})}}>\sum_{n=1}^{\infty}{\frac{1}{n}-\frac{1}{6n^3}}$$ -meaning it too is divergent.<|endoftext|> -TITLE: inverse limit of isomorphic vector spaces -QUESTION [6 upvotes]: Let $$\cdots \rightarrow A_{n+1}\xrightarrow{f_{n+1}} A_n \xrightarrow{f_{n}} A_{n-1}\rightarrow \cdots $$ be an inverse system of finite dimensional vector spaces with the property that the $A_i$ are 'eventually constant', i.e., there is an $m$ such that the maps $f_i$ are isomorphisms for every $i\ge m$. 
Does it follow that $$\lim_{\leftarrow} A_n \simeq A_m?$$

-REPLY [5 votes]: Remember that if your directed set has a maximum, then the inverse limit is just the maximum; here this is "essentially" what you have, since after a certain point you are not really getting anything new: you can identify all $A_i$ with $i\geq m$, collapsing the "left tail" of your inverse system into a terminating one, so the inverse limit is just that maximum.
-In fact, this works whether you are working with finite dimensional vector spaces or any kind of structure.
-But of course, "essentially" is not the same as "exactly." So you use the universal property. I claim that $A_m$ has the desired property. Let $f_{ij}$ with $i\geq j$ be defined as the identity if $i=j$, and as the composition $f_i\circ f_{i-1}\circ\cdots\circ f_j$ if $i\gt j$. Then the projection maps $\pi_j\colon A_m\to A_j$ are defined by $\pi_j=f_{mj}$ if $j\leq m$, and as $\pi_j = f_{jm}^{-1}$ if $j\gt m$; note that this makes sense since $f_{jm}$ is a composition of isomorphisms when $j\gt m$, so it is itself an isomorphism and has an inverse. Note that for any $i\gt j$ we have $f_{ij}\circ \pi_i = \pi_j$, since you have $f_{ij}\circ f_{jk} = f_{ik}$ whenever $i\geq j \geq k$.
-Now let $P$ be any object together with maps $p_j\colon P\to A_j$ such that for all $i\gt j$ you have $f_{ij}\circ p_i = p_j$. Then $p=p_m\colon P\to A_m$ has the appropriate property, that is, $p_j = \pi_j\circ p$ for all $j$: if $j\leq m$, then this is because $\pi_j = f_{mj}$, so $\pi_j\circ p = f_{mj}\circ p_m = p_j$, and if $j\gt m$ then $f_{jm}\circ p_j = p_m$, so pre-composing with $f_{jm}^{-1}$ we get $p_j = (f_{jm})^{-1}\circ p_m = \pi_j\circ p$. The map is unique: if $f\colon P\to A_m$ has the same property, then $f=p_m = p$.
-Thus, $(A_m, \{\pi_j\})$ has the desired universal property, and so "is" the inverse limit.
-You'll note that the fact that we are dealing with finite dimensional vector spaces is completely immaterial; what matters is the properties of the functions in play. This is typical of universal constructions.<|endoftext|>
-TITLE: Morphisms in the category of natural transformations?
-QUESTION [22 upvotes]: I am learning the basics of category theory, so this question is probably obvious to anyone who knows the subject.
-The resources I've seen all take the following approach:
-0) A category is a collection of objects and morphisms between those objects that satisfy some rules.
-1) A functor is a morphism in the category of categories.
-2) A natural transformation is a morphism in the category of functors.
-But they all stop right there. What about:
-3) the morphisms in the category of natural transformations?
-4) Or the "morphisms in the category of the morphisms in the category of natural transformations"
-5) ...
-Are these uninteresting? Why does the "meta-ness" stop at 2 levels deep?

-REPLY [37 votes]: I want to point out something potentially misleading about Marek's answer. The n-categories he mentions are not categories, but generalizations of them, so the question still remains: why do categories only form a 2-category, that is, why do people stop after categories, functors, and natural transformations? Why don't people define modifications of natural transformations?
-I think it is good to realize that categories really are in an essential way only 2-categorical: if you want interesting higher morphisms, you do need to define something like a higher category.
One way to think about it is this: natural transformations are basically homotopies. To make this precise, take I to be the category with two objects, 0 and 1, one morphism from 0 to 1 and the identity morphisms. Then, it is easy to check that to specify a natural transformation between two functors F and G (both functors C → D) is the same as specifying a functor H : C × I → D which agrees with F on C × {0} and with G on C × {1}.
-So then we could get higher morphisms by saying they are homotopies of homotopies, i.e., functors C × I × I → D with appropriate restrictions. This works, and we indeed get some definition of modification, but it is not interesting as it reduces to just a commuting square of natural transformations, i.e., it can be described simply in terms of the structure we already had.
-This is similar to what happens for, say, groups: you can think of a group as a category with a single object where all the morphisms are invertible (the morphisms are the group elements and the composition law is the group product). Then group homomorphisms are simply functors. This makes it sound as if groups now magically have a higher sort of morphisms: natural transformations between functors! And indeed they do, they are even useful in certain contexts, but they're not terribly interesting: a natural transformation between two group homomorphisms f and g is simply a group element y such that f(x) = y g(x) y^{-1}. Again, this is described in terms of things we already knew about (the group element y and conjugation), and is not really a brand new concept.<|endoftext|>
-TITLE: Convergence of the series $\sum \limits_{n=2}^{\infty} \frac{1}{n\log^s n}$
-QUESTION [12 upvotes]: We all know that $\displaystyle \sum_{n=1}^{\infty} \frac{1}{n^s}$ converges for $s>1$ and diverges for $s \leq 1$ (assume $s \in \mathbb{R}$).
-I was curious to see to what extent I can push the denominator so that the series still diverges.
-So I took $\displaystyle \sum_{n=2}^{\infty} \frac{1}{n\log n}$ and found that it still diverges. (This can be checked by using the well known test that if we have a monotone decreasing sequence, then $\displaystyle \sum_{n=2}^{\infty} a_n$ converges iff $\displaystyle \sum_{n=2}^{\infty} 2^na_{2^n}$ converges.)
-No surprises here. I expected it to diverge since $\log n$ grows more slowly than any power of $n$.
-However, when I take $\displaystyle \sum_{n=2}^{\infty} \frac{1}{n(\log n)^s}$, I find that it converges $\forall s>1$.
-(By the same argument as before.)
-This doesn't make sense to me though.
-If this were to converge, then I should be able to find an $s_1 > 1$ such that
-$\displaystyle \sum_{n=2}^{\infty} \frac{1}{n^{s_1}}$ is greater than $\displaystyle \sum_{n=2}^{\infty} \frac{1}{n (\log n)^s}$.
-Doesn't this mean that in some sense $\log n$ grows faster than a power of $n$?
-(or)
-How should I make sense of (or) interpret this result?
-(I am assuming that my convergence and divergence conclusions are right.)

-REPLY [4 votes]: Another test that applies to series of positive decreasing terms (and to this particular one in a rather elegant fashion) is the following:
-$$
-\sum_{n=1}^\infty a_n<\infty \quad\Longleftrightarrow\quad\sum_{k=1}^\infty 2^ka_{2^k}<\infty.
-$$
-In our case
-$$
-\sum_{k=1}^\infty 2^ka_{2^k}=\sum_{k=1}^\infty 2^k\frac{1}{2^k(\log 2^k)^s}=
-\frac{1}{(\log 2)^s}\sum_{k=1}^\infty
-\frac{1}{k^s},
-$$
-and thus
-$$
-\sum_{n=2}^\infty \frac{1}{n(\log n)^s}<\infty\quad\Longleftrightarrow\quad
-\sum_{k=1}^\infty
-\frac{1}{k^s}<\infty\quad\Longleftrightarrow\quad s>1.
-
$$<|endoftext|>
-TITLE: If $H$ is a subgroup of index $2$ in a finite group $G$, then every left coset of $H$ is a right coset as well
-QUESTION [7 upvotes]: I know this means that there are two cosets. I also know that one must be $H$ itself.
-This means that the remaining coset(s) must be equal for right and left. Also, since there is only one other possible coset, this means that for all elements $a,b$ in $G$, $aH = bH$ and $Ha = Hb$, since each element of $G$ acting on the subgroup must produce the same set. This means $aH = Ha$, I believe. From there $H = aHa^{-1}$, which, given the information, I am not sure is good or how to use it. Any help would be good.

-REPLY [2 votes]: For a problem like this one, it helps to know the principle that
-Any two left (right) cosets are either disjoint or equal.
-This may be proved as follows: suppose $g_1H$ and $g_2H$ are left cosets of $H$ in $G$, and $g_1H \cap g_2 H \ne \emptyset$; then there must be some $h_1, h_2 \in H$ with
-$g_1 h_1 = g_2 h_2; \tag 1$
-thus
-$g_1 = g_2 h_2 h_1^{-1}; \tag 2$
-i.e., we have
-$h = h_2 h_1^{-1} \in H \tag 3$
-with
-$g_1 = g_2 h; \tag 4$
-now if
-$g_1 h_3 \in g_1H, \tag 5$
-then by (4),
-$g_1 h_3 = g_2 h h_3 \in g_2 H, \tag 6$
-which shows that
-$g_1 H \subset g_2 H; \tag 7$
-likewise, the roles of $g_1$ and $g_2$ may be reversed in the above to show that
-$g_2 H \subset g_1 H; \tag 8$
-from (7) and (8) we conclude that
-$g_1 H = g_2 H. \tag 9$
-A similar demonstration works for right cosets. We note that this Principle holds for any group $G$ and subgroup $H$, whether finite or not, and whether $[G:H]$ is finite or not.
-We exploit said Principle in the present question as follows: if $e \in G$ is the identity element, then clearly
-$eH = He = H, \tag{10}$
-and if
-$g \in G \setminus H, \tag{11}$
-then
-$gH \ne H, \tag{12}$
-since
-$g = ge \in gH, \tag{13}$
-but by (11),
-$g \notin H; \tag{14}$
-therefore,
-$gH \ne H, \tag{15}$
-and it thus follows from our Principle that
-$gH \cap H = \emptyset. \tag{16}$
-Since $[G:H] = 2$, it follows that $H = eH$ and $gH$ are the only two left cosets of $H$ in $G$; thus
-$G = H \cup gH; \tag{17}$
-in a similar manner we may also see that
-$G = H \cup Hg, \; H \cap Hg = \emptyset, \tag{18}$
-and we conclude from (16)-(18) that
-$gH = Hg. \tag{19}$
-We see that the above demonstration uses notions of basic set theory as well as the fact that there are only two distinct left or right cosets, which are disjoint, and one of which is always $eH = He = H$. And obviously, a good amount of logic.<|endoftext|>
-TITLE: Groups having at most one subgroup of any given finite index
-QUESTION [14 upvotes]: Cyclic groups have at most one subgroup of any given finite index. Can we describe the class of all groups having this property?
-Thank you!

-REPLY [3 votes]: Let $G$ be a group. The canonical residually finite quotient of $G$ is $R(G)=G/K$ where $K$ is the intersection of all the finite-index subgroups of $G$.
-Lemma: If $G$ is finitely generated (update) then $G$ has at most one subgroup of each index if and only if $R(G)$ is cyclic.
-Proof: First, note that $R(G)$ is residually finite. If every finite quotient of $R(G)$ is cyclic then $R(G)$ is residually cyclic, and it follows that $R(G)$ is abelian. So $R(G)$ has a non-cyclic finite quotient unless $R(G)$ is cyclic. Therefore, if $R(G)$ is not cyclic then $R(G)$, and hence $G$, has a finite non-cyclic quotient, and hence, by Arturo's answer, has two distinct finite-index subgroups of the same index.
-Conversely, suppose that $R(G)$ is cyclic.
Every finite-index subgroup of $G$ contains $K$, so the quotient map $G\to R(G)$ maps finite-index subgroups to finite-index subgroups bijectively and preserves the index. Therefore, if $R(G)$ is cyclic then $G$ has at most one subgroup of each index. QED
-I believe that it is an open question whether or not there is an algorithm to determine whether a finitely presented group has a proper finite-index subgroup, i.e., whether or not $R(G)$ is non-trivial. So it may be open whether or not it is possible to determine if $R(G)$ is cyclic, too.
-Note: Earlier, I forgot to mention that I had implicitly assumed that $G$ is finitely generated. This assumption is clearly necessary; otherwise the additive group of the rationals is a counterexample. If $G$ is not finitely generated, then the same argument shows that if $G$ has at most one subgroup of each finite index then $R(G)$ is residually cyclic. But it's not clear to me that the converse of this statement is true. So I'll finish with a question:

-If $G$ is residually cyclic, does $G$ have at most one subgroup of each finite index?<|endoftext|>
-TITLE: Does $\frac{(30n)!n!}{(15n)!(10n)!(6n)!}$ count something?
-QUESTION [8 upvotes]: I know a proof that $\frac{(30n)!n!}{(15n)!(10n)!(6n)!}$ is an integer.
-The proof goes as follows:
-for every prime $p$, the power of $p$ dividing $(15n)!(10n)!(6n)!$ is at most the power of $p$ dividing $(30n)!\,n!$. It is relatively easy to prove this.
-$\textbf{My question: is there a counting argument to prove that this is an integer?}$
-By this, I mean does $\frac{(30n)!n!}{(15n)!(10n)!(6n)!}$ count something?
-(I have a gut feeling that this should count something but I have not thought in depth about this.)
-For instance $\frac{n!}{r!(n-r)!}$ counts the number of ways of choosing $r$ objects out of $n$.
-Also, are there other examples similar to this? (I tried searching for other such examples in vain.)

-REPLY [4 votes]: This MO thread has a lot of good information. I think we should be pessimistic. Pietro Majer's answer and the comments therein suggest that this would be hard, at least in the sense that one should not expect a proof anything like the proof for binomial coefficients.<|endoftext|>
-TITLE: If $aH = bH$ then $Ha^{-1} = Hb^{-1}$, prove or find a counterexample.
-QUESTION [12 upvotes]: $H$ is a subgroup of $G$ and $a,b$ are in $G$.
-I was rather lost for a while but I think I may have
-actually proved it, though I am unsure… Does this make sense?
-\begin{aligned}
-aH &= bH \\
- &\implies b \in aH \\
- &\implies \exists h \in H : ah = b \\
- &\implies a^{-1}ah = a^{-1}b \\
- &\implies h = a^{-1}b \\
- &\implies hb^{-1} = a^{-1}bb^{-1} \\
- &\implies hb^{-1} = a^{-1} \\
-\end{aligned}
-This means $a^{-1}$ is an element of $Hb^{-1}$, so they are equal.
-I wasn't quite sure I could do those operations.
-Can anyone tell me where to learn to do the special symbols, like quantifiers and relations, because it would make it simpler.

-REPLY [3 votes]: For formatting, see here and some of the links there. You need to know a bit of LaTeX.
-As to your argument, it is (mostly) correct. My only quibbles would be of presentation. From $aH=bH$ you conclude that there must exist $h\in H$ such that $b=ah$. That's fine. Your next step should be that "there exists $h\in H$ such that $a^{-1}b=a^{-1}ah = h$". Then "There exists $h\in H$ such that $a^{-1}=a^{-1}bb^{-1}=hb^{-1}$", hence $a^{-1}$ is in the coset $Hb^{-1}$.
Your final statement "hence they are equal" is technically wrong, since $a^{-1}$ is not equal to the coset $Hb^{-1}$; what you meant to say is that since $a^{-1}$ is in $Hb^{-1}$, the coset $Ha^{-1}$ is equal to the coset $Hb^{-1}$. Other than these quibbles, the argument seems correct to me.
-Now, to see if you understand it well, try to figure out which, if any, of your implications are reversible, to see if the converse also holds.<|endoftext|>
-TITLE: When is a graph planar?
-QUESTION [5 upvotes]: A graph G is planar if and only if xxx.
-What can xxx be substituted for? Note that this is from a topological POV, so a graph is a 1-dimensional CW complex, and I guess the fundamental group should be used somehow.

-REPLY [2 votes]: A very good treatment of when a graph can be embedded in the plane and more generally into other surfaces is given by the excellent book:
-Graphs on Surfaces by Bojan Mohar and Carsten Thomassen (Johns Hopkins U. Press, 2001)<|endoftext|>
-TITLE: $\frac{\prod \mathbb{Z_p}}{\bigoplus \mathbb{Z_p}}$ is a divisible abelian group
-QUESTION [5 upvotes]: I'm trying to prove that $\frac{\prod \mathbb{Z_p}}{\bigoplus \mathbb{Z_p}}$ is a divisible $\mathbb{Z}$-module ($p$ is prime, and the direct sum and direct product are taken over the set of all primes). It is an exercise from Rotman, An Introduction to Homological Algebra. Here's what I've done so far:
-$\bigoplus \mathbb{Z_p}$ is the torsion submodule of $\prod \mathbb{Z_p}$, so the quotient is torsion-free. Since $\mathbb{Z}$ is a PID, the quotient is flat.
-How can flatness help me prove divisibility? Well, since $\frac{\prod \mathbb{Z_p}}{\bigoplus \mathbb{Z_p}}$ is flat then $Hom_\mathbb{Z} \left( \frac{\prod \mathbb{Z_p}}{\bigoplus \mathbb{Z_p}}, \frac{\mathbb{Q}}{\mathbb{Z}}\right)$, the character module, is injective.
-And I don't know how to continue. I don't think this is the way to go, but it's what I've tried. Another thing I've observed is that again, since $\mathbb{Z}$ is a PID, $\frac{\prod \mathbb{Z_p}}{\bigoplus \mathbb{Z_p}}$ is divisible iff it is injective, but once again, I don't know what to do with this.

-REPLY [12 votes]: What about the direct approach?
-Suppose you have $(a_p)\in \prod \mathbb{Z}_p$ and an integer $n$; then modulo $\bigoplus \mathbb{Z_p}$ you may assume that $a_p=0$ for all the primes dividing $n$. $n$ is invertible in all the remaining $\mathbb{Z}_p$, so you can find $\frac{a_p}{n}$ in them.
-So define $b_p = 0 $ for $p\mid n$ and $b_p = \frac{a_p}{n}$ for $p\not\mid n$, and then
-$n (b_p)+\bigoplus \mathbb{Z_p} = (a_p) + \bigoplus \mathbb{Z_p}$<|endoftext|>
-TITLE: Convergence of $a_{0} = 0, a_{n}=f(a_{n-1})$ when $|f'(x)|\leq \frac{5}{6}$
-QUESTION [6 upvotes]: By the mean value theorem it's easy to show that $|a_{n+1}-a_{n}| \leq \frac{5}{6}|a_{n}-a_{n-1}|$ for every $n$.
-Next, I thought of saying $|a_{n+1}-a_{n}| \leq ... \leq (\frac{5}{6})^{n}|a_{1}| \to 0$ and somehow show that ** if $M_{n}$ is the closed interval whose end points are $a_{n}$ and $a_{n-1}$ then $a_{n+1} \in M_{n}$, which implies $M_{n+1} \subseteq M_{n}$, and then to finish with Cantor's intersection theorem, which gives us convergence of $a_{n}$.
-But I'm not even sure if ** is correct and I haven't even used the fact that $a_{0} = 0$.
-
-EDIT: Following the tip and some more thought, I've come up with the following:
-For every $m\gt n$: $|a_{m}-a_{n}|\leq|a_{m}-a_{m-1}+a_{m-1}-\cdots+a_{n+1}-a_{n}|\leq$
-$\leq\sum_{k=n}^{m-1}|a_{k+1}-a_{k}|\leq|a_{1}|\sum_{k=n}^{m-1}(\frac{5}{6})^{k}\le$
-$\le|a_{1}|\sum_{k=n}^{\infty}(\frac{5}{6})^{k}=|a_{1}|\frac{(\frac{5}{6})^{n}}{\frac{1}{6}}=6|a_{1}|(\frac{5}{6})^{n} \to 0$ and from here it's easy to show that the sequence is Cauchy.
-Please correct me if I made an error.

-REPLY [4 votes]: You presumably want $|f'(x)|\le 5/6$. It's not the case that $a_{n+1}$
-need lie in the interval between $a_{n-1}$ and $a_n$. What can you say
-about $|a_n-a_m|$? (The value of $a_0$ isn't relevant.)<|endoftext|>
-TITLE: For what manifold is the boundary given by odd-dimensional projective space?
-QUESTION [8 upvotes]: Take real projective space $\mathbb P_n (\mathbb R)$ of ODD dimension. It is easy to prove that all its Stiefel-Whitney numbers are zero. So according to Thom's theorem
-there must exist a manifold $M$ with boundary such that
-$\partial M= \mathbb P_n (\mathbb R)$. I should like to see such an $M$ directly, without using Thom's theorem. For example, if $n=1$ an evident choice is $M=$ the closed disk.
-I have no idea in the general case. Can someone help, please?

-REPLY [5 votes]: See the equivalent question on mathoverflow: What manifolds are bounded by RP^odd?
-(Since it seems there is a good reason to have the answer recorded as such (and, borrowing from the suggestion here), I'm moving my comment here. Since all I'm doing is linking to another place, I don't want to gain reputation for this, so I'm making it community wiki.)
-However, in an effort to personally gain something from this, I'll provide a link to a similar question I asked on MO which still hasn't been answered. The question is: What manifold has $\mathbb{H}P^{odd}$ as a boundary? Incidentally, the case of $\mathbb{C}P^{odd}$ is covered in my question.<|endoftext|>
-TITLE: Exercise from Comtet's Advanced Combinatorics: prove $27\sum_{n=1}^{\infty }1/\binom{2n}{n}=9+2\pi \sqrt{3}$
-QUESTION [24 upvotes]: In exercise 36 Miscellaneous Taylor Coefficients using Bernoulli
-numbers on pages 88-89 of Louis Comtet's Advanced Combinatorics, 1974,
-one is asked to obtain the following explicit formula for the Bernoulli numbers:
-$$B_{2n}=(-1)^{n-1}\dfrac{1+\left[ \varphi _{n}\right] }{2(2^{2n}-1)},$$
-where
-$$\varphi _{n}=\dfrac{2(2^{2n}-1)(2n)!}{2^{2n-1}\pi ^{2n}}\displaystyle\sum_{k=1}^{3n}\dfrac{1}{k^{2n}}$$
-(with $\displaystyle\sum_{n\geq 0}B_{n}\dfrac{t^{n}}{n!}=\dfrac{t}{e^{t}-1}$), and to prove, among other sums, that
-$$\displaystyle\sum_{n=1}^{\infty }\dfrac{1}{\dbinom{2n}{n}}=\dfrac{1}{3}+\dfrac{2\pi\sqrt{3}}{27}.\qquad (\ast )$$
-Alfred van der Poorten wrote here (section 10): "seeing that
-$$\displaystyle\sum_{n=1}^{\infty}\dfrac{x^{2n}}{n^{2}\dbinom{2n}{n}}=2\arcsin^{2}\left( \dfrac{x}{2}\right) \qquad (\ast \ast )$$
-(...) formula [ $(\ast )$ ] become[s] quite accessible to proof."
-I am not able to show formula $(\ast \ast )$, nor how it can be used to
-prove $(\ast )$.

-Question: Could you provide (a) more detailed hint(s) on how and/or
-different ways in which formula $(\ast )$ can be derived?
-
-Added: For information, the other sums are:
-$$\displaystyle\sum_{n=1}^{\infty }\dfrac{1}{n\dbinom{2n}{n}}=\dfrac{\pi \sqrt{3}}{9},\quad\displaystyle\sum_{n=1}^{\infty }\dfrac{1}{n^{2}\dbinom{2n}{n}}=\dfrac{\pi ^{2}}{18},\quad\displaystyle\sum_{n=1}^{\infty }\dfrac{1}{n^{4}\dbinom{2n}{n}}=\dfrac{17\pi ^{4}}{3240}.$$

-REPLY [8 votes]: I'm surprised no one has yet posted the generating function approach. An excellent reference is Sprugnoli, "Sums of Reciprocals of the Central Binomial Coefficients," Integers, Article A27, 2006.
-He proves that the generating function of $4^n \binom{2n}{n}^{-1}$ is
-$$Z(t) = \frac{1}{1-t} \sqrt{\frac{t}{1-t}} \arctan \sqrt{\frac{t}{1-t}} + \frac{1}{1-t}.$$
-Substituting $t=1/4$ immediately yields $$\sum_{n=0}^{\infty} \binom{2n}{n}^{-1} = \dfrac{4}{3}+\dfrac{2\pi\sqrt{3}}{27}.$$
-He also derives generating functions for $\frac{4^n}{n} \binom{2n}{n}^{-1}$, $\frac{4^n}{n^2} \binom{2n}{n}^{-1}$, and several other similar expressions involving the reciprocals of the central binomial coefficients, which allows him to deduce a few dozen expressions for various finite and infinite sums involving $\binom{2n}{n}^{-1}$. Again, it's an excellent reference.
-Incidentally, in the introduction the author states that his motivation for writing the paper was the very exercise in Comtet that motivated this question.<|endoftext|>
-TITLE: Can every element in the stalk be represented by a section in the top space?
-QUESTION [5 upvotes]: Let $S$ be a sheaf over $X$ and $r$ an element in $S_x$ for some $x$ in $X$. Must there exist a section $s$ in $S(X)$ such that $s$ equals $r$ when mapped to $S_x$ by the canonical map?

-REPLY [4 votes]: This is only true in pathological cases. It fails both in complex analysis (see above) and in algebraic geometry: if we take the structure sheaf $\mathcal{O}_{Spec(A)}$ of an affine scheme, then the question is whether the localization maps $A \to A_{\mathfrak{p}}$ are surjective, which is almost never true.<|endoftext|>
-TITLE: What is the largest prime less than 2^31?
-QUESTION [9 upvotes]: I'm sorry for this kind of specific question; I'd love it if you could link to resources (prime lists, etc.) that can answer similar questions more generically.

-REPLY [8 votes]: http://www.prime-numbers.org/prime-number-2147480000-2147485000.htm tells you that it's 2147483647 (about 2/3rds of the way down, third column). This website seems like a good resource if you're looking for lots of primes.<|endoftext|>
-TITLE: Given $a_{1}=1, \ a_{n+1}=a_{n}+\frac{1}{a_{n}}$, find $\lim \limits_{n\to\infty}\frac{a_{n}}{n}$
-QUESTION [10 upvotes]: I started by showing that $1\leq a_{n} \leq n$ (by induction) and then $\frac{1}{n}\leq \frac{a_{n}}{n} \leq 1$, which doesn't really get me anywhere.
-On a different path I showed that $a_{n} \to \infty$ but can't see how that helps me.

-REPLY [12 votes]: I completely forgot about the Stolz-Cesàro theorem, from which we get:
-$$\lim_{n\to \infty} \frac{a_n}{n}=\lim_{n\to\infty} \frac{a_{n+1}-a_{n}}{(n+1)-n}=\lim_{n\to \infty}\frac{\frac{1}{a_{n}}}{1}=\lim_{n\to \infty}\frac{1}{a_{n}}=0. $$ The same technique works for $\displaystyle \frac{a_{n}^2}{n}.$<|endoftext|>
-TITLE: Dense Open Sets and Cocountability
-QUESTION [5 upvotes]: Are all dense open sets in R cocountable?
-That is, are all dense open sets in R such that their complements are at most countable?
-It would seem like they must be, since the closed sets that are uncountable are all intervals, so their complements are not dense.
-
-REPLY [2 votes]: Here is something even more spectacular. Let $\epsilon >0$. Let $z_n$ be an enumeration of all points in $R^d$ with all rational coordinates. We know this sequence's range is dense in $R^d$.
-About each $z_n$ choose an open ball $B_n$ of small radius, whose volume is less than $\epsilon/2^n$. Now write
-$$U_\epsilon = \bigcup_{n=1}^\infty B_n.$$
-This is a union of open balls and is therefore open. Its total Lebesgue measure (volume) is less than $\epsilon$. It is an open dense subset of $R^d$. An open dense subset of $R^d$ can be very small indeed.<|endoftext|>
-TITLE: Examples of sets whose cardinalities are $\aleph_{n}$, or any large cardinal. (not assuming GCH)
-QUESTION [7 upvotes]: One of the answers to this question indicates that large cardinals are useful for destructive testing of set theory. That aside, and not assuming GCH, are there any sets known that have a cardinality of $\aleph_{n}$, $n>0$, or that of any of the large cardinals? There are a few examples of sets that have these cardinalities on the wikipedia page, but they are meager and few compared to the examples on the page for Beth cardinalities.

-REPLY [5 votes]: There are very few examples where you directly prove in ZFC that a certain set must have size $\aleph_n$. This is because most of the sets we construct are defined in terms of power sets. This means that we can compute the size of these sets in terms of ℶ numbers pretty easily, but we can't compute them in terms of ℵ numbers. The difficulty is related to the unprovability of the continuum hypothesis. It turns out that ZFC can say very, very little about where the ℶ numbers fall in the ℵ hierarchy.
-One way to get sets of a fixed cardinality is to talk directly about well orderings. For example, $\aleph_1$ is exactly the set of order types of well orderings of $\omega$ (regardless of what $\beth_1$ is).
-For large cardinals, there is no way to explicitly compute their cardinality. For example, any inaccessible cardinal number $\kappa$ will have the property that $|\kappa| = \aleph_\kappa$, so you will not be able to make progress by trying to compute its ℵ number.<|endoftext|>
-TITLE: Where do we need the axiom of choice in Riemannian geometry?
-QUESTION [18 upvotes]: A friend of mine is a differential geometer, and keeps insisting that he doesn't need the axiom of choice for the things he does. I'm fairly certain that's not true, though I haven't dug into the details of Riemannian geometry (or the real analysis it's based on, or the topology, or the theory of vector spaces, etc...) to try and find a theorem or construction that uses the axiom of choice, or one of its logical equivalents.
-So do you know of any result in Riemannian geometry that needs the axiom of choice? They should be there somewhere; I particularly suspect that one or more is hidden in the basic topology results one uses.

-REPLY [9 votes]: Your friend will probably weasel out of this example by claiming that it isn't strictly "Riemannian" geometry, but it's definitely differential geometry. The example is the Hodge theorem, which asserts that every cohomology class of a Riemannian manifold is represented by a unique harmonic form (where "harmonic" is with respect to the Laplace-Beltrami operator). I highly doubt that a proof can be crafted without some reasonably serious functional analysis; the standard approach uses elliptic theory and Sobolev theory, which requires the Banach-Alaoglu theorem, which in turn requires the Tychonoff theorem.
In general I would wager that any result which involves geometric PDE theory (including some results in, say, minimal surface theory which genuinely are Riemannian) is going to demand the axiom of choice at some level.
-Other than that, you might check out the work of Alexander Nabutovsky. He has obtained some really serious results about the structure of geodesics and on the moduli space of Riemannian metrics using techniques from logic and computability theory - I wouldn't be surprised if AC is hiding somewhere.

-REPLY [8 votes]: It looks to me like the Arzelà--Ascoli theorem needs at least some weak form of choice. (I have started an MO question to clarify this.) One often uses this in geometry; for example, to guarantee the existence of minimizing geodesics connecting pairs of points.
-Edit: See Andres Caicedo's answer on MO (at above link). The answer is affirmative. Also, the database list of equivalents he mentions contains some very innocuous-looking statements that I bet your friend has never thought twice about using.<|endoftext|>
-TITLE: About finding advisors on the internet
-QUESTION [5 upvotes]: The thing is, since I have a job and work on mathematics in my spare time, I do not have any connection with academia. Secondly, the internet is the only way I can interact and reach out to other mathematicians or math enthusiasts like me.
-If I need to discuss my paper with someone, where do I do so over the internet? (This is not about peer review, or about publishing my paper.)
-That's my question, more clearly:
-Where do I find people on the internet who can advise me on the papers that I write?
-OR
-Where can I discuss my papers on the internet?
-More specifically, to give some background: I recently released a paper about a new zero-free region for the Riemann zeta function http://arxiv.org/pdf/0911.5572v14. I wish to get feedback about that. Do you know of any forums/groups, etc. where I can get a good and guiding response? Any links, forums, groups relevant to this topic would be highly appreciated.
-Thanks,
-Roupam

-REPLY [4 votes]: I think randomly emailing professors is not going to earn any replies. Professors usually have a lot on their plate and they get hundreds of emails each day. Unsolicited emails are usually ignored.
-One possibility you can explore is trying to establish email contact with a few graduate students in related areas. Once you have done this, you may send copies of your work to some of them and try to get feedback. You are more likely to find graduate students who are responsive, and some of the advanced ones are actually used to reviewing papers.
-EDIT: I found another useful piece on this subject by Henry Cohn:
-http://research.microsoft.com/en-us/um/people/cohn/Thoughts/advice.html<|endoftext|>
-TITLE: How to calculate the expected number of distinct items when drawing pairs?
-QUESTION [6 upvotes]: Suppose I have a set $\mathcal{S}$ of $N$ distinct items. Now consider the set $\mathcal{P}$ of all possible pairs that I can draw from $\mathcal{S}$. Naturally, $|\mathcal{P}| = \binom{N}{2}$. Now when I draw $k$ items (pairs) from $\mathcal{P}$ with a uniform distribution, what is the expected number of distinct items from $\mathcal{S}$ in those $k$ pairs?
-P.S.: I also asked this question over at stats, but got no answers so far, so I am trying here. Thanks for your time!
-Edit: I pick the pairs without replacement.

-REPLY [6 votes]: For choosing without replacement, here is an exact answer.
Assuming $n \geq 2$, so that there is at least one pair, and $1 \leq k \leq \binom{n}{2}$, so that you're choosing at least one pair but not more than the total number of pairs, the expected value is
-$$n - \left(\frac{n^2 - 3n - 2k + 4}{n-1}\right) \frac{\binom{\binom{n}{2} - n + 1}{k-1}}{\binom{\binom{n}{2} - 1}{k-1}}.$$
-We can assume that we are choosing pairs in order. Let $X_k$ be the number of distinct items from $S$ through $k$ pairs. Let $Y_i$ be the number of items in the $i$th pair that did not appear in any of the previous pairs. So $X_k = \sum_{i=1}^k Y_i$.
-Now, $Y_i$ is either 0, 1, or 2. Since there are $\binom{n}{2} - n + 1$ pairs that do not contain a given item and $\binom{n}{2} - 2n + 3$ pairs that do not contain either of two given items, we have
-$$P(Y_i = 1) = \frac{\binom{\binom{n}{2} - n + 1}{i-1} + \binom{\binom{n}{2} - n + 1}{i-1} - 2 \binom{\binom{n}{2} - 2n + 3}{i-1}}{\binom{\binom{n}{2} - 1}{i-1}}$$
-and
-$$P(Y_i = 2) = \frac{\binom{\binom{n}{2} - 2n + 3}{i-1}}{\binom{\binom{n}{2} - 1}{i-1}}.$$
-Thus
-$$E[Y_i] = 2\frac{\binom{\binom{n}{2} - n + 1}{i-1}}{\binom{\binom{n}{2} - 1}{i-1}}.$$
-It can be proved by induction that
-$$\sum_{i=0}^k \frac{\binom{M}{i}}{\binom{N}{i}} = \frac{(N+1)\binom{N}{k} - (M-k)\binom{M}{k}}{(N+1-M)\binom{N}{k}}.$$
-Thus
-$$E[X_k] = \sum_{i=1}^k E[Y_i] = 2\sum_{i=1}^k \frac{\binom{\binom{n}{2} - n + 1}{i-1}}{\binom{\binom{n}{2} - 1}{i-1}} $$
-$$= 2\frac{(\frac{n(n-1)}{2}-1+1)\binom{\binom{n}{2}}{k-1} - (\frac{n(n-1)}{2} - n + 1 - k + 1)\binom{\binom{n}{2} - n + 1}{k-1}}{(\binom{n}{2} - 1+1-\binom{n}{2} + n - 1)\binom{\binom{n}{2} - 1}{k-1}}$$
-$$= \frac{n(n-1)\binom{\binom{n}{2}}{k-1} - (n^2 - 3n - 2k + 4)\binom{\binom{n}{2} - n + 1}{k-1}}{(n - 1)\binom{\binom{n}{2} - 1}{k-1}}$$
-$$= n -\frac{(n^2 - 3n - 2k + 4)\binom{\binom{n}{2} - n + 1}{k-1}}{(n - 1)\binom{\binom{n}{2} - 1}{k-1}}.$$<|endoftext|>
-TITLE: Intuition behind the ILATE rule
-QUESTION [7 upvotes]: Often I have wondered about this question, but today I had a chance to recollect it and hence I am posting it here. During high-school days one generally learns integration, and I still love doing problems on integration. But I have never understood this idea of integration by parts. Why do we adopt the ILATE or LIATE rule to do problems, and what was the reason behind thinking of such a rule?
-Reference: http://en.wikipedia.org/wiki/Integration_by_parts

-REPLY [16 votes]: The way I see it, when you differentiate an inverse trigonometric function, you don't get another inverse trigonometric function. Instead you get "simpler" functions like $1/(1 + x^2)$ or $1/\sqrt{1-x^2}$. This does not typically happen with the antiderivative of such functions.
-Similarly, when you differentiate a logarithmic function, the logarithm disappears.
-So, when using integration by parts $\int u\, dv = uv - \int v\, du$, it makes sense to select the inverse trigonometric or logarithmic function to be the one that is the $u$ term.
-In the case of algebraic, trigonometric and exponential functions, both integration and differentiation don't change the nature of the function, so they come later in the ILATE order.
-Of course, this is just intuition, and there are examples where you can violate this so-called rule and still integrate by parts without any problems.

-REPLY [5 votes]: As a technique for explicitly integrating functions given by formulas as usually seen in calculus classes, integration by parts works because $u'v$ can be easier to integrate than $uv'$.
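-As a quick illustration of this heuristic (a sketch of my own, not part of the original answers — it assumes the sympy library is available), take $\int x\ln x\,dx$: ILATE puts L (logarithmic) before A (algebraic), so we pick $u=\ln x$, and the leftover integral becomes elementary:

-    import sympy as sp
-
-    x = sp.symbols('x', positive=True)
-
-    # ILATE: for the integrand x*log(x), L outranks A, so u = log(x), dv = x dx.
-    u = sp.log(x)
-    v = sp.integrate(x, x)  # v = x**2/2
-    by_parts = u*v - sp.integrate(v * sp.diff(u, x), x)
-
-    # The leftover integrand v*du/dx = x/2 is trivial, and the result agrees
-    # with sympy's direct antiderivative: x**2*log(x)/2 - x**2/4.
-    assert sp.simplify(by_parts - sp.integrate(x*sp.log(x), x)) == 0
-    print(by_parts)

-Had we instead chosen $u=x$ and $dv=\ln x\,dx$, then $v=x\ln x - x$ and the leftover integral $\int v\,du$ would contain $x\ln x$ again — no simpler than what we started with.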
There are multiple ways an integrand can be considered as a product of the form $u'v$, but some choices lead to $uv'$ being of no use, perhaps even harder to integrate. The ILATE mnemonic you mention (or whatever it is) gives some rules that approximate the best guesses as to what will work well. Better than this mnemonic (which I've never thought about) is to just have enough experience doing integrals to be able to see what will work and why (at least for the type of integrals one typically sees in a calculus class).<|endoftext|>
-TITLE: What is operator calculus?
-QUESTION [13 upvotes]: I watched the excellent interview with Richard Feynman: http://www.youtube.com/watch?v=PsgBtOVzHKI In the interview Feynman mentions that at a young age he re-invented operator calculus.
-I have searched for "operator calculus" and have not found any accessible references that introduce the topic. Maybe operator calculus goes under another name today than it did at the time of the interview?
-Can you give me a reasonably simple explanation of operator calculus, and also give some references to books on the subject?

-REPLY [7 votes]: The operator calculus that Feynman is talking about first came to the attention of mathematicians when Freeman Dyson (who learned it from Feynman) used it to prove that the Feynman and the Schwinger formulations of QED (quantum electrodynamics) were equivalent.
-The basic idea is to let time take on its natural role as a director of physical processes, so that operators acting at different times commute. To do this, one views the evolution of a physical system as a motion picture and lays out its history as on a film. This means that the mathematical convention of position on paper is replaced by position in time.
-The actual mathematical foundations for this theory are of relatively recent vintage. I would suggest two references. The first is more mathematical, while the second is directed to the physics inclined and includes quite a lot of history.

-Feynman operator calculus: The constructive theory, Expositiones Mathematicae 29 (2011) 165–203. (by T. L. Gill and W. W. Zachary)
-Foundations for relativistic quantum theory I: Feynman's operator calculus and the Dyson
-conjectures, J. Math. Phys. 43 (2002) 69–93. (by T. L. Gill and W. W. Zachary)<|endoftext|>
-TITLE: How do you show that the Laplacian is the square of the (Euclidean) Dirac operator?
-QUESTION [6 upvotes]: If I understand correctly, the Euclidean Dirac operator is given by
-$$D=\sum_{i=1}^n e_i \frac{\partial}{\partial x_i},$$
-where the $e_i$ are the standard generators of $Cl_{0,n}(\mathbb{R})$, i.e., the $n$-dimensional Clifford algebra with negative-definite signature over the reals (so $e_i^2=-1$), and $x_i$ are the corresponding coordinates.
Several sources state that $D^2 = -\Delta_n$ where $\Delta_n$ is the standard Euclidean Laplace operator
-$$\Delta_n = \sum_{i=1}^n \frac{\partial^2}{\partial x_i^2}.$$
-When I write out $D^2 f$ explicitly for some function $f:\mathbb{R}^n \rightarrow \mathbb{R}$, scalar terms from the Laplacian certainly appear, e.g.,
-$$e_1 \frac{\partial}{\partial x_1}\left( e_1 \frac{\partial}{\partial x_1} f \right) = e_1 \left( e_1 \frac{\partial^2}{\partial x_1^2}f + \left(\frac{\partial}{\partial x_1}e_1\right)\frac{\partial}{\partial x_1}f \right)=e_1^2 \frac{\partial^2}{\partial x_1^2}f = -\frac{\partial^2}{\partial x_1^2}f.$$
-But I also end up with bivector cross terms that shouldn't be there:
-$$e_1 \frac{\partial}{\partial x_1}\left( e_2 \frac{\partial}{\partial x_2} f \right) = e_1 \left( e_2 \frac{\partial^2}{\partial x_1 \partial x_2}f + \left(\frac{\partial}{\partial x_1}e_2\right)\frac{\partial}{\partial x_2}f \right)=e_1 e_2 \frac{\partial^2}{\partial x_1 \partial x_2}f = e_{12}\frac{\partial^2}{\partial x_1 \partial x_2}f.$$
-Should I only be considering the scalar part of $D^2$, or am I simply doing something wrong here?

-REPLY [4 votes]: Note that
-\begin{align*}
-D^2 &= \left(\sum_{i=1}^ne_i\frac{\partial}{\partial x_i}\right)^2\\
-&= \left(\sum_{i=1}^ne_i\frac{\partial}{\partial x_i}\right)\left(\sum_{j=1}^ne_j\frac{\partial}{\partial x_j}\right)\\
-&= \sum_{i=1}^ne_i\frac{\partial}{\partial x_i}\left(\sum_{j=1}^ne_j\frac{\partial}{\partial x_j}\right)\\
-&= \sum_{i=1}^ne_i\sum_{j=1}^n\frac{\partial}{\partial x_i}\left(e_j\frac{\partial}{\partial x_j}\right)\\
-&= \sum_{i=1}^ne_i\sum_{j=1}^ne_j\frac{\partial^2}{\partial x_i\partial x_j}\\
-&= \sum_{i=1}^n\sum_{j=1}^ne_ie_j\frac{\partial^2}{\partial x_i\partial x_j}\\
-&= \sum_{i=1}^n\left(\sum_{j<i}e_ie_j\frac{\partial^2}{\partial x_i\partial x_j}+e_i^2\frac{\partial^2}{\partial x_i^2}+\sum_{j>i}e_ie_j\frac{\partial^2}{\partial x_i\partial x_j}\right)\\
-&= \sum_{i=1}^ne_i^2\frac{\partial^2}{\partial x_i^2}+\sum_{i=1}^n\sum_{j<i}\left(e_ie_j+e_je_i\right)\frac{\partial^2}{\partial x_i\partial x_j}\\
-&= -\sum_{i=1}^n\frac{\partial^2}{\partial x_i^2} = -\Delta_n,
-\end{align*}
-since $e_i^2=-1$, mixed partial derivatives commute, and $e_ie_j=-e_je_i$ for $i\neq j$, so the bivector cross terms cancel in pairs. In particular, nothing needs to be discarded: the cross terms in your computation really do appear, but the term $e_1e_2\,\partial^2/\partial x_1\partial x_2$ is cancelled by the matching term $e_2e_1\,\partial^2/\partial x_2\partial x_1$.
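-For readers who like to sanity-check this sign bookkeeping numerically, here is a small sketch (my own addition, not part of the answer above). For $n=2$, $Cl_{0,2}(\mathbb{R})$ can be modeled by the quaternions with $e_1 = i$ and $e_2 = j$, and the two facts the computation rests on — $e_i^2=-1$ and $e_ie_j=-e_je_i$ — can be checked in the standard $4\times 4$ real matrix representation:

-    import numpy as np
-
-    def quat(a, b, c, d):
-        """4x4 real matrix of the quaternion a + b*i + c*j + d*k
-        (acting by left multiplication)."""
-        return np.array([[a, -b, -c, -d],
-                         [b,  a, -d,  c],
-                         [c,  d,  a, -b],
-                         [d, -c,  b,  a]], dtype=float)
-
-    one = quat(1, 0, 0, 0)
-    e1  = quat(0, 1, 0, 0)   # plays the role of e_1 in Cl_{0,2}
-    e2  = quat(0, 0, 1, 0)   # plays the role of e_2
-
-    assert np.allclose(e1 @ e1, -one)             # e_1^2 = -1
-    assert np.allclose(e2 @ e2, -one)             # e_2^2 = -1
-    assert np.allclose(e1 @ e2 + e2 @ e1, 0*one)  # e_1 e_2 = -e_2 e_1

-Since the mixed partials $\partial^2/\partial x_1\partial x_2$ and $\partial^2/\partial x_2\partial x_1$ are equal while $e_1e_2+e_2e_1=0$, the coefficient of the mixed derivative in $D^2$ vanishes — exactly the cancellation used above.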
By Alexander duality, $$\tilde{H}_0(S^{n+1} \backslash M ; \mathbb{Z}/2) \cong \tilde{H}^{(n+1)-0-1}(M;\mathbb{Z}/2) = \tilde{H}^n(M;\mathbb{Z}/2)=\mathbb{Z}/2,$$ so $H_0(S^{n+1}\backslash M;\mathbb{Z}/2)=\mathbb{Z}/2 \oplus \mathbb{Z}/2$. Hence $M$ separates $S^{n+1}$.<|endoftext|> -TITLE: $\prod_{k=3}^{\infty} 1 - \tan( \pi/2^k)^4$ -QUESTION [5 upvotes]: So I found this -$$\prod_{k=3}^{\infty} 1 - \tan( \pi/2^k)^4$$ -here. -I have only ever done tests for convergence of infinite sums. -At this link it shows a way to convert but -in this case an is less than one for all k. I can see that $ f(k) = 1 - \tan( \pi/2^k)^4 $ is a strictly increasing function and in fact it is increasing very quickly however it is bound [ $ 1 - \tan( \pi/2^3)^4$, 1) where the lower bound is quite nearly one. -This seems to mean it will converge to something slightly less than the lower bound of the function. -I see that is converges. I want to know if there analytic method to provide a closed from of the solution such as with an alternating series. I figure I can simply do it computationally with MatLab and it would be very simple but I thought there must another way. - -REPLY [9 votes]: Write $\displaystyle 1- \tan^4 \theta$ as $\dfrac{\cos^2 \theta - \sin^2 \theta}{\cos^4 \theta} = \dfrac{\cos 2\theta}{\cos^4 \theta}$ and make repeated use of the following trick: -To evaluate $\displaystyle \prod_{k=1}^{n}\cos \dfrac{\theta}{2^k}$, multiply and divide by $\sin \dfrac{\theta}{2^n}$ and use $\displaystyle 2\sin \theta \cos \theta = \sin 2\theta$. -This should give you a closed form formula for the product of first $\displaystyle n$ terms, which I believe can be evaluated easily as $\displaystyle n \to \infty$. -For more information on this you can refer: http://en.wikipedia.org/wiki/Vi%C3%A8te's_formula<|endoftext|> -TITLE: Is it legitimate to write nested big-Os in an asymptotic formula for a multivariable function? -QUESTION [8 upvotes]: Suppose I have a 2-variable function $g(k,n)$ and I know that $g(k,n)=O(n^{f(k)})$, for fixed $k$ as $n \rightarrow \infty$, for some function $f=f(k)$. Suppose I also know that $f(k)=O(\log k)$. - -Is it legitimate to write $g(k,n)=O(n^{O(\log k)})$? - -REPLY [2 votes]: No (if you mean what I think you mean by that statement). If you have more than one variable in play it's always a good idea to keep track of which variables the implied constant for each big-O depends on. In your example, the implied constant for the outer big-O depends on $k$, but the implied constant for the inner big-O doesn't, so it seems misleading to use notation which pretends they are the same. And if the implied constant grows fast enough, the final statement you want is false (if an unadormed big-O means it doesn't depend on any of the variables which appear on the LHS). Consider, as I said in my comment to Yuval Filmus, $g(k, n) = 2^k n^{\log k}, f(k) = \log k$. In addition to $f(k) = O(\log k)$ you need to know that the implied constant doesn't depend on $n$. -Edit: The above was nonsense. Actually you are fine. You have a constant $C_1$ such that $f(k) \le C_1 \log k$ and another constant $C_2$ such that $g(k) \le C_2 n^{f(k)}$, hence $g(k) \le C_2 n^{C_1 \log k}$, or as Yuval Filmus points out, $g(k) = n^{O(\log k)}$.<|endoftext|> -TITLE: Is there a basis-independent proof of Abel's identity? 
-QUESTION [8 upvotes]: Abel's identity states that if $X(t)$ and $A(t)$ are $n\times n$ matrix-valued functions such that $X'(t)=A(t)X(t)$, then $\frac{d}{dt}(\det X(t)) = \mathrm{tr}\,A(t) \cdot \det X(t)$. -The question is whether there's a nice high-brow basis-independent way to see this. Given that you can state the problem without ever referring to matrix entries, I want prove it without ever referring to matrix entries. I'd expect the identity $\det e^A = e^{\mathrm{tr}(A)}$ to play a central role in the proof, but I have yet to come up with such an argument. - -You can prove this without too much trouble in a bare-hands way, but I don't see how to turn this into a basis-free proof. Suppose $X_i(t)$ is the matrix you get from $X(t)$ by taking the derivative of every entry in the $i$-th row. Then $\frac{d}{dt}(\det X(t)) = \sum_{i=1}^n \det(X_i(t))$. Using the relation $X'(t)=A(t)X(t)$ you can express the derivative of the $i$-th row of $X'(t)$ in terms of the entries of $A$ and $X$. When you do this, you find that $\det X_i(t)$ is simply $\det X(t)$ multiplied by the $(i,i)$-th entry of $A(t)$. - -REPLY [7 votes]: Please forgive this argument that plays rather fast and loose with infinitesimals. If you ignore second-order terms, an infinitesimal change $\delta t$ in $t$ takes $X(t)$ to $\big(I + \delta t A(t)\big) X(t)$. Since the determinant distributes over multiplication, all you need to show is that $\det\big(I + \delta t A(t)\big)$ is $1 + \delta t \operatorname{tr}A$ for infinitesimal $\delta t $. More formally, you need $\frac{d}{d\tau} \det(I + \tau A) = \operatorname{tr}A$. Since $I + \tau A$ agrees with $e^{\tau A}$ to first order (according to the Taylor series), you can differentiate the identity $\det e^{\tau A} = e^{\operatorname{tr} \tau A}$ with respect to $\tau$ and get the desired result. -Update: For a high-brow view, I think it makes much more sense to cast this in a geometric light, as it has much to do with the geometric interpretation of the trace. Consider how the vector ODE $x'(t) = A(t) x(t)$ acts on Euclidean $n$-space. The identity $\det e^A = e^{\operatorname{tr}A}$ is a statement of the fact that for constant $A$, the volume of any region grows (or shrinks) exponentially at the rate $\operatorname{tr}A$ under this ODE. (This is fine for varying $A(t)$ too as long as we don't care about second-order effects.) Now think of $X(t) $ as defining a parallelepiped in $n$-space as it flows around under the action of the ODE. The volume of the parallelepiped is $\det X(t)$, and its rate of change is therefore $\operatorname{tr}A $ times that. I think this is the answer I should have given in the first place.<|endoftext|> -TITLE: The characteristic and minimal polynomial of a companion matrix -QUESTION [32 upvotes]: The companion matrix of a monic polynomial $f \in \mathbb F\left[x\right]$ in $1$ variable $x$ over a field $\mathbb F$ plays an important role in understanding the structure of finite dimensional $\mathbb F[x]$-modules. -It is an important fact that the characteristic polynomial and the minimal polynomial of $C(f)$ are both equal to $f$. This can be seen quite easily by induction on the degree of $f$. -Does anyone know a different proof of this fact? I would love to see a graph theoretic proof or a non inductive algebraic proof, but I would be happy with anything that makes it seem like more than a coincidence! 
- -REPLY [8 votes]: Surprisingly, the following (in my opinion) quite elegant proof is still missing: -Look at the $F$-vector space $F[x]/(f)$. The map -$$\phi : F[x]/(f)\to F[x]/(f),\quad g + (f)\mapsto x\cdot g + (f)$$ -is well-defined and $F$-linear. -Let $m_\phi = \sum_{i=0}^d a_i x^i\in F[x]$ be the minimal polynomial and $\chi_\phi\in F[x]$ the characteristic polynomial of $\phi$. Then $m_\phi(\phi)$ is the zero map in $\operatorname {End}(F[x]/(f))$. Thus -$$0 + (f) = m_\phi(\phi)(1 + (f)) = \sum_{i=0}^d a_i \phi^i(1 + (f)) = \left(\sum_{i=0}^d a_i x^i\right) + (f) = m_\phi + (f).$$ -So -$$f\mid m_\phi \mid \chi_{\phi},$$ -where the scecond divisibility follows from Cayley-Hamilton. -Because of $m_\phi \neq 0$ and $\deg(f) = \dim_F(K[x]/(f)) = \deg(\chi_\phi)$ and because all the polynomials are monic, this forces -$$ f = m_\phi = \chi_\phi.$$ -With respect to the basis $(1 + (f), x + (f),\ldots, x^{n-1} + (f))$, the transformation matrix of $\phi$ is the companion matrix $C(f)$ of $f$. So the minimal polynomial of $C(f)$ equals $m_\phi$ and the characteristic polynomial of $C(f)$ equals $\chi_\phi$.<|endoftext|> -TITLE: Uses of quadratic reciprocity theorem -QUESTION [46 upvotes]: I want to motivate the quadratic reciprocity theorem, which at first glance does not look too important to justify it being one of Gauss' favorites. So far I can think of two uses that are basic enough to be shown immediately when presenting the theorem: -1) With the QRT, it is immediate to give a simple, efficient algorithm (that can be done even by hand) for computing Legendre symbols. -2) In Euler's proof of Fermat's claim on the conditions in which a prime $p$ is of the form $x^2+ny^2$ (for certain small values of $n$) the proof is reduced to finding the conditions under which $p$ divides $x^2+ny^2$ for some $x,y$, hence to the question under which conditions is $-n$ a quadratic residue modulo $p$, which leads immediately to the QTR (for example, for $n=3$, where we get that $p\equiv_3 1$). I really like this example since it begins with an "historic" problem and proceeds to "discover" the QTR through special cases (which is what Euler did in practice - see Cox's book on "Primes of the form $x^2+ny^2$"). -However, I am sure there are many more examples (and I'm especially curious as to how Gauss reached the theorem himself). I'd love to hear about them and receive references for further reading. - -REPLY [9 votes]: The quadratic reciprocity law in any of its forms shows that there is an un-obvious correlation between different primes. The $(p,q)$ symbol constrains the $(q,p)$ symbol. This is astonishing compared to other more "linear" theorems about congruences or unique factorization. In its 20th-century reformulations quadratic reciprocity is seen as an avatar of other reciprocity laws in geometry (reciprocity for tame symbols) and even geometric topology (linking numbers where knots play the role of primes) and although these other theorems are in some ways easier to prove, the analogies between all of them are mysterious. -Basically, if you are not shocked by this theorem, you don't completely understand it. Historically it was a hard-won, prize result and not an inevitable universal discovery like the Pythagorean formula or other theorems that were difficult in their time but found independently in many times and places. 
Many cultures had knowledge of basic number theoretic facts but quadratic reciprocity is one of the first signs of number theory as a science.<|endoftext|> -TITLE: Tractrix-like curves -QUESTION [7 upvotes]: Is there a common name for curves, obtained from dragging a point along another curve, similar to how tractrix is obtained by dragging a point along a line? -What is a parametric equation of such curve given the parametric equation of a curve along which the master goes? - -REPLY [9 votes]: Generalized tractrices (tractories) exist, see this or this or this. (the last two are in French, but with slightly more detail than the first one.) -This old book ought to be of interest as well. -This book describes Euler's treatment of the problem of the tractory. -As a note, the problem of finding the generating curve, given the tractory, is a much easier problem (hint: use the tangent vector of a curve) than finding the tractory corresponding to a generating curve. - -For those who have difficulty reading French, the third link I mentioned gives the prescription for generating the corresponding tractory from a generating curve with parametric equations $(f(t)\quad g(t))$. -The parametric equations for the tractory of $(f(t)\quad g(t))$ (in vector form) is -$$\begin{pmatrix}f(t)\\g(t)\end{pmatrix}-\frac{a}{\sqrt{f^{\prime}(t)^2+g^{\prime}(t)^2}}\begin{pmatrix}\cos\;\alpha(t)&\sin\;\alpha(t)\\-\sin\;\alpha(t)&\cos\;\alpha(t)\end{pmatrix}\cdot\begin{pmatrix}f^{\prime}(t)\\g^{\prime}(t)\end{pmatrix}$$ -or explicitly -$$\begin{align*}x&=f(t)-\frac{a}{\sqrt{f^{\prime}(t)^2+g^{\prime}(t)^2}}(f^{\prime}(t)\cos\;\alpha(t)+g^{\prime}(t)\sin\;\alpha(t))\\y&=g(t)-\frac{a}{\sqrt{f^{\prime}(t)^2+g^{\prime}(t)^2}}(g^{\prime}(t)\cos\;\alpha(t)-f^{\prime}(t)\sin\;\alpha(t))\end{align*}$$ -where the function $\alpha(t)$ satisfies the differential equation -$$\frac{\mathrm d\alpha}{\mathrm dt}=\frac{f^{\prime}(t)g^{\prime\prime}(t)-g^{\prime}(t)f^{\prime\prime}(t)}{f^{\prime}(t)^2+g^{\prime}(t)^2}-\frac{\sin\alpha}{a}\sqrt{f^{\prime}(t)^2+g^{\prime}(t)^2}$$ -and $a$ is the length of the segment running through the generating curve. -As an example, here is an animation showing the curve $(3\cos\;t-2\cos^3 t+\cos 2t\quad 2\sin^3 t+\sin 2t)$ and its tractory with segment length 1: - -(This is a less fancy version of the last bicycle animation in the third French link.)<|endoftext|> -TITLE: How much a càdlàg (i.e., right-continuous with left limits) function can jump? -QUESTION [10 upvotes]: I have changed the title (replaced "well-behaved" by "càdlàg"), since it seems that "a well-behaved function" might be interpreted as "a function of bounded variation" (rather than "a càdlàg function", which I actually meant). -Let $f:[0,1] \to {\bf R}$ have a following property: $f$ is continuous except at the points $x_{k,n} = \frac{{2k - 1}}{{2^n }}$, $k=1,\ldots,2^{n-1}$, $n=1,2,3,\ldots$, where $\lim _{x \downarrow x_{k,n} } f(x) = f(x_{k,n} )$ but $f(x_{k,n} ) - -\lim _{x \uparrow x_{k,n} } f(x) = a_n > 0$. Can such a function exist if $a_n > 1/ \log(n)$ for all sufficiently large $n$? - -REPLY [12 votes]: If $f\colon[0,1]\to\mathbb{R}$ is cadlag (continu à droite, limites à gauche) then we can ask what the possible jumps are. That is, for what functions $g\colon(0,1]\to\mathbb{R}$ is there a cadlag function $f$ with $g(x)=\Delta f(x)\equiv f(x)-f(x-)$? -The answer is that $g$ occurs as the jumps of a cadlag function if and only if the set $\{x\in(0,1]\colon\vert g(x)\vert > \epsilon\}$ is finite for each $\epsilon > 0$. 
In particular, there is a cadlag function with jumps as you describe. -First, the necessity: If $S=\{x\in(0,1]\colon\vert g(x)\vert > \epsilon\}$ was not finite, then it would contain a strictly increasing or strictly decreasing sequence $x_n$. Then, $\vert f(x_n)-f(y_n)\vert > \epsilon$ for some $y_n$ chosen arbitrarily close to $x_n$. Replacing $x_n$ by $y_n$ where necessary gives a strictly increasing or strictly decreasing sequence $x_n\in[0,1]$ such that $\vert f(x_{n+1})-f(x_n)\vert > \epsilon/2$. However, as f is cadlag and has left and right limits everywhere, this contradicts the requirement that $f(x_n)$ tends to a limit. -Now, we can show sufficiency: Set $\epsilon_n=2^{-n}$ for $n\ge1$ and $\epsilon_0=\infty$. For each $n\ge1$, consider the finite set $S_n=\{x\in(0,1]\colon \epsilon_{n-1}\ge\vert g(x)\vert > \epsilon_n\}$. We can construct a function $f_n\colon[0,1]\to\mathbb{R}$ such that $\Delta f_n(x)=1_{\{x\in S_n\}}g(x)$ and $\vert f_n(x)\vert\le\epsilon_{n-1}$. The idea is to take $f_n(x)=g(x)$ and $f_n(x-)=0$ for points in $S_n$, and linearly interpolate between these. -More precisely, if $S_n$ is empty we set $f_n=0$. Otherwise, -$$ -f_n(x)= g(a)(b-x)/(b-a) -$$ -for $a\le x < b$. Here, $a < b$ are consecutive points of $S_n$. Also set $f_n(x)=0$ for $x$ less than the minimum of $S_n$ and $f_n(x)=g(c)$ for $x\ge c=\max S_n$. This has the required jumps $\Delta f_n(x)=1_{\{x\in S_n\}}g(x)$. -Finally set $f(x)=\sum_{n=1}^\infty f_n(x)$. As $\vert f_n\vert \le 2^{1-n}$ for $n\ge 2$ this converges uniformly and, -$$ -\Delta f(x)=\sum_n\Delta f_n(x)=\sum_n1_{\{x\in S_n\}}g(x)=g(x). -$$<|endoftext|> -TITLE: Inverse of $y=xe^x$ -QUESTION [12 upvotes]: I feel like finding the inverse of $y=xe^x$ should have an easy answer but can't find it. - -REPLY [2 votes]: As this answer shows, the Lambert W function is not absolutely necessary. First, notice that $y<\ln(x)$ for large $x$ since $xe^x>e^x$ for $x>1$. -We may then start rewriting as follows: -$$x=ye^y\implies e^y=\frac xy\implies y=\ln\left(\frac xy\right)$$ -It then follows that since $y<\ln(x)$, we have -$$y>\ln\left(\frac x{\ln(x)}\right)=\ln(x)-\ln(\ln(x))$$ -We've reached Yuval Filmus' short answer at this point. We may then find an upper bound by noting that -$$y=\ln\left(\frac x{\ln\left(\frac x{y}\right)}\right)<\ln\left(\frac x{\ln\left(\frac x{\ln(x)}\right)}\right)=\ln(x)-\ln(\ln(x)-\ln(\ln(x)))$$ -Here is a graph, the dotted line being $y$ and the other two being the bounds: - -Note that we can repeat this process indefinitiely, stacking more and more logarithms. The end result is the following image: - -Indeed, this converges for all $x\ge e$. -We may evaluate the inverse for small values of $x$ as well. Going back to the beginning, we could've had the following: -$$x=ye^y\implies y=xe^{-y}$$ -Using $y\le x$ as our initial statement, we can bound small values of $x$: -$$y\ge xe^{-x}$$ -And likewise, -$$y=xe^{-xe^{-y}}\le xe^{-xe^{-x}}$$ -Here's a graph of the inequalities, the dotted line being the actual inverse again: - -Notice the inequalities fail when $x<0$, which is natural. Again, continuing this process, we end up with the following graph: - -which converges to the primary branch for $-e^{-1}\le x\le e$. To get the pesky other branch is much harder, and I don't know of a neat way to find it off the top of my head :-(<|endoftext|> -TITLE: Does the series $\sum\limits_{n=1}^{\infty}\frac{\sin(n-\sqrt{n^2+n})}{n}$ converge? 
-QUESTION [32 upvotes]: I'm just reviewing for my exam tomorrow looking at old exams, unfortunately I don't have solutions. Here is a question I found: determine if the series converges or diverges. If it converges, find its limit. -$$\displaystyle \sum\limits_{n=1}^{\infty}\dfrac{\sin(n-\sqrt{n^2+n})}{n}$$ -I've narrowed the possible tests down to the limit comparison test, but I feel like I've made a mistake somewhere. -divergence test - limit is 0 by the squeeze theorem -integral test - who knows how to solve this -comparison test - series is not positive -ratio and root tests - on the absolute value of the series, this wouldn't work out -alternating series test - would not work, the series is not decreasing or alternating -Any ideas what to compare this series with, or where the mistake is in my reasoning above? - -REPLY [39 votes]: The key here is that $n - \sqrt{n^2 + n}$ converges to $-{1 \over 2}$ as $n$ goes to infinity: -$$n - \sqrt{n^2 + n}= (n - \sqrt{n^2 + n}) \times {n + \sqrt{n^2 + n} \over n + \sqrt{n^2 + n}}$$ -$$= {n^2 - (n^2 + n) \over n + \sqrt{n^2 + n}} = -{n \over n + \sqrt{n^2 + n}}$$ -$$= -{1 \over 1 + \sqrt{1 + {1 \over n}}}$$ -Take limits as $n$ goes to infinity to get $-{1 \over 2}$. -Hence $\sin(n - \sqrt{n^2 + n})$ converges to $\sin(-{1 \over 2})$, and the series diverges similarly to ${1 \over n}$, using the limit comparison test for example.<|endoftext|> -TITLE: Congruency and Congruent Classes -QUESTION [5 upvotes]: so studying for my midterm on Tuesday (intro to abstract algebra). The topics on the exam are Division Algorithm, Divisibility, Prime Numbers, FTA, Congruency, Congruent Classes and very brief introduction to rings. -I was reading a few theorems about Congruency and have a couple of questions. -I want to know what a "congruent class" is. My notes say "the congruence class of a modulo n" is a set: -$ \left\{ \text{all } b \in \mathbb{Z} | b \equiv a \pmod{n} \right\} $ which is also saying -$ \left\{ \text{all } a + kn \in \mathbb{Z} | k \in \mathbb{Z} \right\} $ -okay so got that. I just wrote it for some people who might need a refresher (it is a 3rd undergrad course after all). -So in my notes our professor has the following example: -$\left[ 60 \right]_{17} = \left[ 43\right]_{17}$ -1) so the way I figured this out is that to check if they are equivalent, we subtract 60-43 and see if that is a multiple of n = 17. Is this how you can check if they are equal classes? If not, is there a better way to do so? -2) A certain theorem states: Let $n \in \mathbb{Z}_+; a, b \in \mathbb{Z}$ and $gcd(a,n) = d$ then $[a]x=[b]$ has exactly $d$ solutions. My question here is: is $x$ a congruence class or just an integer? What is x and how do I solve for it? -3) Is it true that if we are in $\mathbb{Z}_{12}$ then $[7]x=[11]$ can be rewritten as $ 7x \equiv 11 \pmod{12}$? If so, would finding the solution be similar to the solution in this question -Thank you. I am just very confused about congruency and stuff. I understand the theorems but I am hoping someone would give me an "easy" explanation of what is going on. I still don't know the difference between circle plus and regular plus except that circle plus has to satisfy certain axioms. Am I right? - -REPLY [4 votes]: Okay, let's start from the beginning. You define the congruence class of $a$ as: -\begin{equation*} -[a]_n = \{ b\in\mathbb{Z}\mid b\equiv a\pmod{n}\}. -\end{equation*} -Now, what does $a\equiv b\pmod{n}$ mean?
It means that $n$ divides $a-b$, or equivalently, that $a$ and $b$ leave the same remainder when you divide them by $n$. Considering the latter condition, notice that: - -Since each number leaves the same remainder as itself when divided by $n$, $a\equiv a \pmod{n}$ for all $a$. -If $a$ leaves the same remainder as $b$ when divided by $n$, then $b$ leaves the same remainder as $a$ when divided by $n$. So if $a\equiv b\pmod{n}$, then $b\equiv a \pmod{n}$. -If $a$ leaves the same remainder as $b$, and $b$ leaves the same remainder as $c$, then $a$ and $c$ leave the same remainder as well. So if $a\equiv b\pmod{n}$ and $b\equiv c\pmod{n}$, then $a\equiv c\pmod{n}$. - -That means that: -\begin{equation*} -[a]_{n} = [b]_n\Longleftrightarrow a\equiv b\pmod{n}. -\end{equation*} -Why? Well, suppose the congruence classes are the same. Since $a\in[a]_n=[b]_n$, that means that $b\equiv a \pmod{n}$ (since $a\in[b]_n$), so $a\equiv b\pmod {n}$. That proves that if the classes are equal, then $a\equiv b\pmod{n}$. -Conversely, suppose that $a\equiv b\pmod{n}$. How do we prove that $[a]_{n}=[b]_{n}$? Since they are sets, the usual way is to show that each is contained in the other. If $c\in[a]_n$, then $a\equiv c\pmod{n}$ by definition; since we also have $b\equiv a\pmod{n}$ (since $a\equiv b\pmod{n}$ is our assumption), then $b\equiv c\pmod{n}$, so $c\in[b]_n$. Therefore, everything that is in $[a]_{n}$ is also in $[b]_n$, so $[a]_n\subseteq [b]_n$. For the converse inclusion, if $c\in[b]_n$, then $b\equiv c\pmod{n}$, and since we also have $a\equiv b\pmod{n}$, then $a\equiv c \pmod{n}$ so $c\in[a]_n$. Therefore, $[b]_n\subseteq [a]_n$. Since we have the two inclusions, we conclude that $[a]_n=[b]_n$. -We also have the following, which is perhaps more surprising: -\begin{equation*} -[a]_n\cap[b]_n\neq\emptyset\Longleftrightarrow [a]_n=[b]_n. -\end{equation*} -That is: the only way the class of $a$ and the class of $b$ have anything in common is if they are identical. Why? Well, if they are identical they certainly have nonempty intersection, since the class of $a$ always includes at least $a$. And conversely, if $c\in[a]_n\cap[b]_n$, then $a\equiv c\pmod{n}$ and $b\equiv c\pmod{n}$, from which we conclude that $a\equiv b\pmod{n}$, so $[a]_n=[b]_n$ by what we just finished proving. -On to your questions: -(1) How do we check if $[a]_{17} = [b]_{17}$? Precisely the way you did it: the classes are equal if and only if $a\equiv b \pmod{17}$. How do you check if $a\equiv b\pmod{17}$? By checking to see if $17$ divides $a-b$. So how do you check in general whether $[a]_n=[b]_n$? You check to see if $a-b$ is a multiple of $n$. If it is, then the classes are equal. If $a-b$ is not a multiple of $n$, then they are not equal, and in fact they are disjoint. -There are other ways: for example, if you could show that $[60]_{17}$ and $[43]_{17}$ have any element in common, then you would be able to conclude that they are equal. Sometimes it may be simpler to see that $[a]_n$ and $[b]_n$ have some element in common than to check if $n$ divides $a-b$; but the standard way of checking is to see whether $n$ divides $a-b$, just as you did. -(2) What you have is an equation in which $x$ is an unknown. You are really looking for all solutions to the congruence -\begin{equation*} -ax \equiv b \pmod {n} -\end{equation*} -That is, all integers $x$ that make the congruence true. For example, $x=3$ is a solution to -\begin{equation*} -2x \equiv 1 \pmod{5} -\end{equation*} -because $(2)(3)= 6\equiv 1 \pmod{5}$.
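-(Aside, not part of the original answer: congruences this small can always be sanity-checked by brute force. Below is a minimal Python sketch; the helper name is hypothetical.)
-    # residues x in {0, 1, ..., n-1} for which n divides a*x - b
-    def solve_congruence(a, b, n):
-        return [x for x in range(n) if (a * x - b) % n == 0]
-
-    print(solve_congruence(2, 1, 5))     # [3], matching the example above
-    print(solve_congruence(2, 1, 4))     # [], cf. the gcd obstruction discussed below
-    print(solve_congruence(21, 14, 77))  # seven residues, cf. the worked example below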
In fact, because $3$ is a solution, so is $3+5k$ for any integer $k$; that is, any element in $[3]_5$ is a solution, since $3$ is a solution. So the solutions will actually be a collection of congruence classes. -Moreover, if we replace $2$ in the equation with $7$, so that we are trying to solve $7x\equiv 1 \pmod{5}$, then anything which was a solution to $2x\equiv 1\pmod{5}$ is still a solution to the new congruence, and any solution to the new congruence is a solution to the first one; why? because $2x\equiv 7x\pmod{5}$ for all $x$, as $2x-7x = -5(x)$ is always a multiple of $5$. So we can replace $2$ with any element of $[2]_5$ and not change the solutions. Likewise, we can replace $1$ with any element of $[1]_5$ and not change the solutions. So it's almost as if instead of trying to solve the single congruence -\begin{equation*} -2x \equiv 1 \pmod{5} -\end{equation*} -we are trying to solve the equation -\begin{equation*} -[2]_5 x = [1]_5 -\end{equation*} -So that's why you have written $[a]x=[b]$. That really means the congruence $ax\equiv b \pmod{n}$. -What you wrote, however, is incorrect. You are missing a clause: the congruence will have $d$ solutions if $d$ divides $b$. Otherwise, it's not going to have any. For an easy example, consider the congruence $2x\equiv 1 \pmod{4}$. Then $\gcd(2,4)=2$, but there are no solutions to the congruence, because $2x$ is always even, so $2x-1$ is always odd, so $4$ never divides $2x-1$. -Now, how do you solve a congruence $ax\equiv b\pmod{n}$ when $\gcd(a,n)$ divides $b$? Let $d=\gcd(a,n)$. Then we can write $a=da'$, $b=db'$, and $n=dn'$. Then $ax\equiv b\pmod{n}$ if and only if $n$ divides $ax-b$. But $ax-b = d(a'x - b')$, and $dn'$ divides $d(a'x-b')$ if and only if $n'$ divides $a'x-b'$, if and only if $a'x\equiv b'\pmod{n'}$. So if we can solve a congruence when $\gcd(a,n)=1$, then we can solve any congruence where $\gcd(a,n)$ divides $b$. -So, how do you solve a congruence $ax \equiv b\pmod{n}$ when $\gcd(a,n)=1$ (which of course divides $b$)? Since $\gcd(a,n)=1$, then we can write $1$ as a linear combination of $a$ and $n$: $1=ar+ns$ for some integers $r$ and $s$ (for example, using the Euclidean algorithm). Multiplying through by $b$ you get $b=a(rb)+ n(sb)$. That means that $n(sb) = b-a(rb)$, so $a(rb)\equiv b \pmod{n}$, which means that $x=rb$ is a solution. If $y$ is any other solution, then $ay\equiv b\pmod{n}$, and $ax\equiv b\pmod{n}$, so $ax\equiv ay\pmod{n}$, hence $n$ divides $ax-ay=a(x-y)$, and since $\gcd(a,n)=1$, then $n$ divides $x-y$; so if $y$ is any other solution, then $x\equiv y\pmod{n}$. So the only congruence class that is a solution is $[x]_n$. -Now, what about general congruences? Suppose you have $[a]_nx=[b]_n$ and $\gcd(a,n)=d$ divides $b$. Write $a=da'$, $b=db'$, and $n=dn'$. Notice that $\gcd(a',n')=1$. Find a solution $x_0$ to $[a']_{n'}x=[b']_{n'}$. Then $n'$ divides $a'x_0 -b'$, so $dn'=n$ divides $d(a'x_0-b') = da'x_0-db' = ax_0-b$. So $x_0$ is also a solution to the original problem. If $y$ is any other solution, then as before we get that $n$ divides $a(x_0-y)$; so $dn'$ divides $da'(x_0-y)$, hence $n'$ divides $a'(x_0-y)$, and since $\gcd(a',n')=1$, then $n'$ divides $x_0 - y$; that is, $y\equiv x_0\pmod{n'}$, so $y=x_0+kn'$ for some $k$. Conversely, if $y = x_0+\ell n'$ for some $\ell$, then -\begin{equation*} -ay = a(x_0+\ell n') = ax_0 + \ell an' = ax_0 + \ell (da')n' = ax_0 + \ell a'(dn') = ax_0 + (\ell a')n, -\end{equation*} -which says that $ay \equiv ax_0 \pmod{n}$, so $y$ is also a solution.
So what are the different solutions? Well, we have $x_0$, $x_0+n'$, $x_0+2n'$, $x_0+3n',\ldots, x_0+(d-1)n'$. No two of these are congruent to one another modulo $n$; the "next" one, however, will be $x_0+dn' = x_0+n$, which is congruent to $x_0$ modulo $n$. So the only distinct congruence classes that are solutions are $[x_0]_n$, $[x_0+n']_n,\ldots,[x_0+(d-1)n']_n$, giving you exactly $d$ distinct classes that are solutions to $[a]_nx=[b]_n$. -To see this in practice, take $n = 77$, $a=21$, and $b=14$. We want to solve the system $[21]_{77} x = [14]_{77}$. Since $\gcd(77,21)=7$ divides $b$, the system has solutions; in fact, it has $7$ different congruence classes modulo $77$ as solutions. -So, we write $21 = 7\cdot 3$, $14 = 7\cdot 2$, $77 = 7\cdot 11$. And we first solve the system $[3]_{11}x=[2]_{11}$. To do this, we write $1$ as a linear combination of $3$ and $11$, like so: $1 = 3(4) - 11(1)$. Then multiply by $2$ to get $2 = 3(8)-11(2)$. So $x_0=8$ is a solution (indeed, $3(8)=24\equiv 2\pmod{11}$). Now going back to the original congruence, we take $x_0$ and we add multiples of $11$ (why $11$? Because $n=77$ is $7\cdot 11$, and $7$ is the $d$ from before) until we have our $7$ different solutions. So the solutions are: $[8]_{77}$, $[19]_{77}$, $[30]_{77}$, $[41]_{77}$, $[52]_{77}$, $[63]_{77}$, and $[74]_{77}$. You can verify that they all are solutions to $[21]_{77}x = [14]_{77}$. For example, $21\cdot 41 = 861$, and $[861]_{77}=[14]_{77}$ because $861-14 = 847 = 77\cdot 11$. -(3) Yes: that is exactly the meaning of $[a]_nx=[b]_n$; it means $ax\equiv b\pmod{n}$. -Hope this helps, despite the length.<|endoftext|> -TITLE: Sum of two independent geometric random variables -QUESTION [6 upvotes]: Let X and Y be independent random variables, -$ P(X = k) = P(Y = k) = p(1 - p)^{k-1} $ -How do you show that the pmf of $ Z = X + Y $, is negative binomial, and how do you find -$ P(X = Y) $? - -REPLY [17 votes]: While Timothy Wagner's answer is correct, I thought you might like to see another way to answer your first question. -Often the simplest way to prove that the sum of independent random variables has a particular distribution is to use moment-generating functions. This is because 1) if $X$ and $Y$ are independent with mgf's $M_X(t)$ and $M_Y(t)$, then $M_{X+Y}(t) = M_X(t) M_Y(t)$ and 2) moment-generating functions (when they exist) characterize distributions. -Applying this to your problem, a geometric $(p)$ random variable has mgf $$\frac{pe^t}{1 - (1-p)e^t}.$$ -Thus $$M_{X+Y}(t) = \left(\frac{pe^t}{1 - (1-p)e^t}\right)^{2}.$$ -Since this is the mgf of a negative binomial random variable, $X+Y$ must have a negative binomial distribution. -(There are different conventions for defining negative binomial and geometric random variables, so depending on the convention used in a particular reference the mgf's there may be slightly different from the ones I give here.)<|endoftext|> -TITLE: Determining Coefficients of a Finite Degree Polynomial $f$ from the Sequence $\{f(k)\}_{k \in \mathbb{N}}$ -QUESTION [5 upvotes]: Suppose $f$ is an unknown polynomial of degree $n$ (in one indeterminate) but the sequence $\{ f(k) \}_{k \in \mathbb{N}}$ is given. It is a nice exercise to show that one needs only the first $n+1$ terms of the sequence to determine the coefficients of $f$.
That is, simply solve the matrix equation $A\mathbf{x} = \mathbf{b}$, where $\mathbf{b} = (f(0), \dots, f(n))^{\top}$, $A$ is the Vandermonde matrix of $(i^{j})_{i,j = 0, \dots, n}$ and $\mathbf{x} = (c_{0}, \dots, c_{n})^{\top}$ (the unknown coefficients of $f$). -Question: Is there a closed form expression for the coefficients of a finite degree polynomial $f$ in terms of the sequence $\{ f(k) \}_{k \in \mathbb{N}}$ that doesn't involve matrix inversion or differentiation or explicitly calculating the polynomial in question? -(Motivation) The Ehrhart polynomial counts the number of integer lattice points in a dilate of a polytope and can be calculated by the residue of an associated complex rational function (see M. Beck's articles on the subject). Some of the coefficients of the Ehrhart polynomial can be related to an $n$-volume, a relative area and the Euler characteristic of said polytope. However, computing coefficients of the Ehrhart polynomial is not a particularly easy task. Having simple formulas for them, say in terms of the residues above, would be nice to have at one's disposal. A reasonable starting point is answering the question above. -Thanks! - -REPLY [7 votes]: Yes; you can take finite differences. Let $\Delta f(x) = f(x+1) - f(x)$. -Theorem: $$f(x) = \sum_{k=0}^{n} \Delta^k f(0) {x \choose k}$$ -where ${x \choose k}$ is the polynomial $\frac{x(x-1)...(x-(k-1))}{k!}$. -Proof. Observe that $\Delta {x \choose k} = {x \choose k-1}$. Take $k$ finite differences of both sides and set $x = 0$. -In practice, this means you can work out what $f$ is by writing down a finite difference table. This is really easy to do by hand, and then the top row of the table tells you what the coefficients $\Delta^k f(0)$ are above. In linear algebra terms, what we are doing is using a basis with respect to which the matrix $A$ is upper-triangular, and this makes life much easier. -There is also a known explicit formula for the inverse of $A$ which is equivalent to the Lagrange interpolation formula. This is sometimes useful for theoretical reasons (e.g. as a way to control the behavior of the interpolation polynomial or to deduce certain identities). - -You can extract a "closed form" for the coefficients of $f$ using either of the two methods above, although I think using finite differences is slightly nicer. We have -$${x \choose k} = \frac{1}{k!} \sum_{i=0}^{k} s(k, i) x^i$$ -where $s(k, i)$ are the Stirling numbers of the first kind. This gives -$$[x^i] f(x) = \sum_{k=i}^{n} \Delta^k f(0) \frac{s(k, i)}{k!}$$ -where -$$\Delta^k f(0) = \sum_{j=0}^{k} (-1)^{k-j} {k \choose j} f(j).$$ -So $[x^i] f(x) = \sum_{j=0}^{n} a_{i,j} f(j)$ where -$$a_{i,j} = \sum_{k=j}^{n} (-1)^{k-j} {k \choose j} \frac{s(k, i)}{k!}.$$ -I don't know whether this sum can be simplified further.<|endoftext|> -TITLE: 0.246810121416...: Is it an algebraic number? -QUESTION [12 upvotes]: Is the number $0.2468101214\ldots$ algebraic? (After the decimal point, the even numbers $2, 4, 6, 8, 10, 12, 14, \ldots$ are juxtaposed.) - -REPLY [17 votes]: No, this number is transcendental. The proof by Mahler mentioned in a comment shows this. -A good reference to learn about basic transcendental number theory is the book "Making transcendence transparent: an intuitive approach to classical transcendental number theory", by Edward Burger and Robert Tubbs, Springer-Verlag (2004). -In chapter 1 of the book the proof of the transcendence of Mahler's constant $0.1234\dots$ is discussed.
The idea is to show that the "obvious" rational approximations actually are very very close, to the point that they would contradict easy estimates (due to Liouville) for how quickly rational numbers can approximate irrational algebraic numbers. The Wikipedia entry on Liouville numbers discusses Liouville's approximation theorem and related results: -If $\alpha$ is algebraic of degree $d\ge 2$ then there is a constant $C$ such that for any rational $p/q$ with $q>0$, we have $$ \left|\alpha-\frac pq\right|>\frac{C}{q^d}. $$ -Actually, there is a bit of work needed here. The estimates the book discusses together with a strengthening of Liouville's theorem give the proof for Mahler's constant, and the same argument works for the number you are asking about. -The strengthening we need is due to Klaus Roth in 1955, and he was awarded the Fields medal in 1958 for this result.<|endoftext|> -TITLE: Invertibility of compact operators in infinite-dimensional Banach spaces -QUESTION [16 upvotes]: Let $X$ be an infinite-dimensional Banach space, and $T$ a compact operator from $X$ to $X$. Why must $0$ then be a spectral value for $T$? -I believe this is equivalent to saying that $T$ is not bijective, but I am not sure how to show that injectivity implies the absence of surjectivity and the other way around (or if this is even the right way to approach the problem). - -REPLY [21 votes]: You are correct that it is the same as saying that $T$ is not bijective, because it follows from the open mapping theorem that a bounded operator on a Banach space has a bounded inverse if it is bijective. However, more straightforward answers for your question can be given without explicitly thinking in these terms. -You can show that if $T$ is compact and $S$ is bounded, then $ST$ is compact. If $T$ were invertible, this would imply that $I=T^{-1}T$ is compact. This in turn translates to saying that the closed unit ball of $X$ is compact. One way to see that this is impossible in the infinite dimensional case is implicit in this question. There is an infinite sequence of points in the unit ball whose pairwise distances are bounded below, no subsequence of which is Cauchy. -Compact operators on infinite dimensional spaces can be injective, but they can never be surjective. The closed subspaces of the range of a compact operator are finite dimensional. -Also, what Qiaochu said in his comment above.<|endoftext|> -TITLE: Quotient spaces and equivariant cohomology -QUESTION [5 upvotes]: Consider a $G$-equivariant map $\pi:X\to Y$ for $G$ an affine algebraic group, such that $\pi$ is a good categorical quotient. Is there any relationship between $H^*_G(X)$ and $H^*(Y)$? Is there if $\pi$ is a good geometric quotient, or if the quotient space is smooth? -EDIT: A categorical quotient is an equivariant map $\pi:X\to Y$ that is constant on $G$-orbits. It's good if the topology on $Y$ is induced by $X$ ($\pi$ is a surjective open submersion) and the map from the functions on any affine $V \subset Y$ to $G$-invariant functions on $\pi^{-1}(V)$ is an isomorphism. It's a good geometric quotient if the $G$-orbits are closed in $Y$. - -REPLY [3 votes]: I don't know what a good categorical quotient is, but if $X$ is a CW complex, and the $G$ action is free and cellular, then $H^*_G(X)\cong H^*(X/G)$.
See Brown's "Cohomology of Groups," p.173.<|endoftext|> -TITLE: Non-probabilistic proofs of a binomial coefficient identity from a probability question -QUESTION [8 upvotes]: Combining the answers given by me and Ralth to the probability question at Probability of getting three zeros when sampling a set, we get the following identity: -$$ -\sum\limits_{k = m}^n {{n \choose k}p^k (1 - p)^{n - k} {k \choose m} p_j^m (1 - p_j )^{k - m} } = {n \choose m} (p p_j)^m (1 - p p_j)^{n-m}, -$$ -where $m \geq 0$ and $n \geq m$ are arbitrary integers, and $p$ and $p_j$ are arbitrary probabilities (in Ralth's notation $m$ is $k$, $p$ is $p_s$, and $p_j$ is $p_h$). Can you prove this identity directly (i.e., in a non-probabilistic setting)? Whether you'll find this identity interesting or not, at least Ralth's answer may now gain its due recognition. - -REPLY [7 votes]: We can prove this using the Multinomial Theorem. -We have, using the Multinomial Theorem, that -$$\displaystyle (a+b+c)^n = \sum_{m=0}^{n} \sum_{k=m}^{n} {n \choose k}{k \choose m} a^m b^{k-m} c^{n-k}$$ -Set $\displaystyle a = pp_j x$, $b = p(1-p_j)$, $c = 1-p$. -We get -$$\displaystyle (pp_jx + p(1-p_j) + 1-p )^n = \sum_{m=0}^{n} \sum_{k=m}^{n} {n \choose k}{k \choose m} (pp_jx)^m (p(1-p_j))^{k-m} (1-p)^{n-k}$$ -i.e. -$$\displaystyle (pp_jx + 1 - pp_j)^n = \sum_{m=0}^{n} \sum_{k=m}^{n} {n \choose k}{k \choose m} p_j^m(1-p_j)^{k-m} p^{k}(1-p)^{n-k} x^m$$ -Expanding the left hand side using the binomial theorem and comparing the coefficients of $\displaystyle x^m$ gives the result.<|endoftext|> -TITLE: What is an example of a topological vector space which contains a non-absorbent or non-balanced open convex set around the origin? -QUESTION [7 upvotes]: There are two equivalent definitions of a locally convex topological vector space. Note that since the vector space is topological, addition by a fixed vector $x$ and multiplication by a fixed nonzero scalar $r$ are both homeomorphisms, which means it is enough to give a neighborhood basis at the vector 0. -The first definition is that $0$ has a local base consisting of open sets that are convex (if $x$ and $y$ are in $S$, then $\lambda x+(1-\lambda)y$ is also in $S$ for every $\lambda\in[0,1]$), balanced, (if $x\in S$, then $r x\in S$ for every $r\in\mathbb{F}$ with $|r|=1$, where $\mathbb F$ is either the real or complex numbers), and absorbing (for any vector $y$ there exists a $\lambda\in (0,\infty)$ such that $\lambda S$ contains $y$, i.e. $S$ absorbs $y$). -The second definition is that $0$ has a local base given by the ''balls'' of radius $r$ for each $r$ of each semi-norm in some fixed collection of semi-norms (a semi-norm is just like a norm except that non-zero vectors can have semi-norm zero). The two definitions are equivalent in one direction because "balls" of semi-norms are convex, balanced and absorbing, and in the other because we can define for any convex, balanced, absorbing set $S$ the semi-norm $f_S$ by assigning to $x$ the "least" (infimum) $\lambda$ for which $x$ is in $\lambda S$. -My question is: what are some examples of topological vector spaces for which $0$ does have a local base of convex open sets, but no local base consists of convex sets that are also balanced and absorbing? - -REPLY [12 votes]: Unless I am misunderstanding the situation, there is no such example.
That is, if $V$ is a topological vector space (over $\mathbb R$ or $\mathbb C$) which has a basis of neighbourhoods of the origin consisting of convex sets, then it also has a basis of neighbourhoods of the origin consisting of balanced convex sets. -Also, any neighbourhood of $0$ is automatically absorbent (because of continuity of scalar multiplication). -My reference is Robertson and Robertson, Topological Vector Spaces (Cambridge Tracts in Mathematics and Mathematical Physics, 53), which is a kind of summary of Bourbaki's Topological Vector Spaces volume. -The proof is not hard: one first uses continuity of scalar multiplication to show -that any neighbourhood of $0$ contains a balanced neighbourhood. Now if $U$ is a convex n.h. of $0$, and $W$ is a balanced n.h. of $0$ contained in $U$, then $V:= \cap_{|x| \geq 1} x U$ contains $W$, so is a n.h., is convex (being the intersection of convex sets), is balanced (by construction), and lies in $U$ (again by construction --- consider $x = 1$). Thus $V$ is a convex balanced n.h. of $0$ contained in $U$.<|endoftext|> -TITLE: Where is the well-pointedness assumption of ETCS used in everyday math? -QUESTION [6 upvotes]: Where is the well-pointedness assumption of the Elementary theory of the category of sets (Lawvere's category-theoretic axiomatization of set theory) used in everyday math? -Specifically, if you have a topos with natural numbers object (assume choice if you want to), what familiar theorems don't hold? I've heard that showing the Dedekind reals are the same as the Cauchy reals is one. Where in the arguments is well-pointedness used? It seems hard to find examples of this. - -REPLY [3 votes]: A topos with an NNO has intuitionistic internal logic in general (if you assume Choice then Diaconescu's theorem tells you that the internal logic is classical), so any proof that relies on proof by contradiction will not work (not to say the results won't, but the proof needs to be fixed, or your concepts altered).<|endoftext|> -TITLE: Methods to find the limit of a sequence defined by a recurrence -QUESTION [12 upvotes]: For a sequence defined by a formula, normally the usual limit rules allow -one to find its limit. But for a sequence defined by a recurrence, up to now -I have only seen some refined ad hoc methods, mostly in Problems. -The Tricki explains that "There is one trick that is (...) first to prove -that a limit exists, and then to use the recurrence to determine what the -limit must be" illustrated by the example of the recurrence $ -a_{n+1}=a_{n}/2+1/a_{n}$, $a_{0}=2$. -Question: Are there relatively general methods to find the limit of a -sequence defined by a recurrence? - -REPLY [11 votes]: Not really. Even once you've shown that a limit exists, a limit of, say, $x_{n+1} = f(x_n)$ is the same as a fixed point of $f$, or a solution to $f(x) - x = 0$. Needless to say it does not make much sense to ask whether there are general methods to find solutions to equations. For example, one can write down solutions to differential equations as fixed points, and there are no general methods to solve differential equations. -In that sense all methods to find limits of recurrences are "ad hoc" in the same way that all methods to solve differential equations or Diophantine equations are "ad hoc."
We do what we can.<|endoftext|> -TITLE: Recurrence trouble: $T(n)=2T(n/2)+T(n/3)+\theta(n^2)$ -QUESTION [5 upvotes]: I have to solve the following recurrence: $\displaystyle T(n)=2T(n/2)+T(n/3)+\theta(n^2)$ -I have done the whole tree analysis and now I have to prove that $\displaystyle T(n) \leq dn^{2}\log_{2}(n)$ but I cannot find a mathematical way to prove it; is there any trick with logarithms that I can use? - -REPLY [4 votes]: Do it by induction. First, let's set the constant in $\theta$ to 1, so we get -$T(n)=2T(n/2)+T(n/3)+n^2$ -Now, assume $T(m) \leq k m^2$ for some constant $k$ for all $m < n$. Then -$T(n) = 2 T(n/2) + T(n/3) + n^2 \leq k n^2/2 + k n^2/9 + n^2 \leq (11 k /18 +1) n^2 $ -and if $k = 3$, say, we have -$T(n) \leq (11/6+1) n^2 \leq 3 n^2.$ -so if our bound holds for all $m < n$, then our bound holds for $n$. -To make this a real proof, you would need to take the case when $n$ is not a multiple of 6 into account, but the details will work out when you do them carefully.<|endoftext|> -TITLE: How to find the logical formula for a given truth table? -QUESTION [14 upvotes]: Let's say I have a truth table like this: -X Y A -t t f -t f t -f t t -f f f - -Now, I have to find the formula for A. This case is rather easy because I can immediately see that this looks like the inverted values of X~Y, thus ¬(X~Y). Now, for a more complicated problem I do not know the method. -X Y Z A -t t t f -t t f f -t f t t -t f f f -f t t f -f t f f -f f t t -f f f f - -Is there an approach I'm missing here? - -REPLY [3 votes]: Pick out the rows where a t appears in the rightmost column, and write down a disjunctive normal form. In your example, there are only two rows with a t and your expression will have two terms: -$ (X \cdot \bar{Y} \cdot Z) + (\bar{X} \cdot \bar{Y} \cdot Z) $ -Now you have a logical formula for your truth table. You can stop there, or you can use the laws of Boolean algebra to get a simpler expression. In this case, you can use the distributive law: -$ (X \cdot \bar{Y} \cdot Z) + (\bar{X} \cdot \bar{Y} \cdot Z) = (X + \bar{X})\cdot (\bar{Y} \cdot Z) = 1 \cdot (\bar{Y} \cdot Z) = \bar{Y} \cdot Z$<|endoftext|> -TITLE: Comaximal ideals in a commutative ring -QUESTION [19 upvotes]: Let $R$ be a commutative ring and $I_1, \dots, I_n$ pairwise comaximal ideals in $R$, i.e., $I_i + I_j = R$ for $i \neq j$. Why are the ideals $I_1^{k_1}, \dots , I_n^{k_n}$ (for any $k_1,\dots,k_n \in\mathbb N$) also comaximal? - -REPLY [6 votes]: A slight variation on other proofs. Suppose $I+J=R$, so $a+b=1$ for some $a\in I, b \in J$. I want to replace $a$ by $a^n$, so I write $a^n + (a-a^n) + b = 1$. If I knew that $a-a^n$ is in $J$, I could group it together with $b$ and get $a^n+b' = 1$, so $I^n+J = R$. But $a-a^n = a(1-a^{n-1})$ is divisible by $1-a = b \in J$, so it is in fact in $J$. From $I^n+J=R$ the general case $I^n+J^m=R$ follows by another application of the same argument.<|endoftext|> -TITLE: Is a power set null because it contains an empty set? -QUESTION [7 upvotes]: All power sets contain an empty set, so does this make the power set itself an empty set? Or am I mistaken? - -REPLY [3 votes]: An empty set contains nothing. A power set of some set $A$, $\mathscr{P}(A)$ always contains something, at least $A$ itself, thus $\mathscr{P}(A)$ is not empty. -The reason I decided to add my two cents is to point out the frequent source of confusion, which is associating the empty set with zero, and then associating zero with "nothing".
Then, as the (faulty) reasoning goes, the set that contains only the empty set contains nothing, and therefore it is the empty set. The major cause of such confusion is mostly due to notation: designating the empty set as $\emptyset$, which looks almost like $0$. -My rule of thumb: if in doubt, replace $\emptyset$ with $\{ \, \}$ and then think of $\{ \, \}$ as a container that contains nothing. Empty container. Clearly, the empty set. But, say, we have some other set $A$ that contains only the empty set. Again, $A$ is a container, but now it contains another container that contains nothing: $\{ \ \{ \, \} \ \}$. Clearly, the set $A$ contains something, so it's not empty. However writing the same thing as $\{ \ \emptyset \ \}$ may cause some confusion to those exploring set theory: is that zero the set $A$ contains? Looks like zero, anyway! But zero is nothing! So the set contains nothing. So it is the empty set.<|endoftext|> -TITLE: How are eigenvectors/eigenvalues and differential equations connected? -QUESTION [18 upvotes]: In school and at university we never had eigenvalues or differential equations, so these concepts were really giving me a hard time. Now I developed some intuition for both concepts. -I learned that both are connected in some way, i.e. there is an eigenvector/eigenvalue approach to solve differential equations. Unfortunately most of the texts I find are way beyond my understanding and seem to require very extensive study. -Is it possible to provide me with an intuition of the connection anyway and perhaps give an easy example of how to use this approach to solve a differential equation? - -REPLY [2 votes]: This is the best explanation I have seen so far. This short paper not only explains the connection between eigenvalues, eigenvectors and differential equations using very clear, undergraduate math but also has lots of visualizations... and nice examples concerning the development of Romeo's and Juliet's love: -http://wcherry.math.unt.edu/math2700/diffeq.pdf<|endoftext|> -TITLE: What is the length of a continued fraction expansion of a rational number? -QUESTION [7 upvotes]: I was reviewing quantum factorization and am slightly unclear on a classical detail of order-finding. -Given a (suitably nice) periodic function $f$ with unknown period $r$ and a power of two $N > r^2$, the quantum subroutine yields (with bounded probability) a result of the form $\lfloor jN/r \rceil$ for $1 \le j \le r-1$. -Moreover, $\left \lvert \lfloor jN/r \rceil/N - j/r \right \rvert \le 1/2N$, so since $N > r^2$ a theorem assures us that $j/r$ is a convergent in the continued fraction expansion of $\lfloor jN/r \rceil/N$. Now I see lots of hand-waving at this point in most discussions of the algorithm. If the length of the CFE is $poly(\log(N))$, then this hand-waving can be trivially dispensed with (perhaps at the cost of a slight inefficiency). -So, does the CFE of $M/N$ have length $poly(\log(N))$? - -REPLY [7 votes]: The length of the CF of a rational number $\displaystyle \dfrac{a}{b}$ is closely related to the number of steps it takes in the Euclidean algorithm for finding the gcd of $\displaystyle a,b$, which can, in the worst case, be shown to be proportional to the number of digits in $b$. -So the claim is true.
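-(A quick empirical check — my own Python sketch, not taken from the references below: the number of partial quotients of $a/b$ equals the number of division steps of the Euclidean algorithm, and even in the Fibonacci worst case it grows only linearly in the number of digits of $b$.)
-    def cf_length(a, b):
-        # division steps of the Euclidean algorithm = number of partial quotients of a/b
-        steps = 0
-        while b:
-            a, b = b, a % b
-            steps += 1
-        return steps
-
-    fib = [1, 2]  # consecutive Fibonacci numbers give the worst case (Lame's theorem)
-    while len(fib) < 45:
-        fib.append(fib[-1] + fib[-2])
-    print(cf_length(fib[-1], fib[-2]), "steps for a", len(str(fib[-2])), "digit denominator")
-    print(cf_length(10**12 + 39, 10**6 + 3), "steps for a", len(str(10**6 + 3)), "digit denominator")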
-See: http://www.albany.edu/~hammond/gellmu/examples/confrac.pdf -Also: http://en.wikipedia.org/wiki/Euclidean_algorithm#Number_of_steps<|endoftext|> -TITLE: Why Steenrod squares commute with transgression -QUESTION [6 upvotes]: I'm reading Hatcher's notes on spectral sequences and he mentions that Steenrod squares commute with the coboundary operator for pairs (X,A) which would then explain why these operations commute with the transgression. It says it's because -that coboundary operator can be defined in terms of suspension and we know Steenrod operations commute with suspension. Does anyone know the details of this reasoning? -So... -Assuming the standard axioms of Steenrod operations, how do we prove that they commute with the coboundary operator for pairs? -Thank you, - -REPLY [3 votes]: I realized that your question wasn't exactly about the Steenrod axioms themselves, but about the definition of the coboundary operator involving suspension. In reduced homology, the boundary operator $\partial$ for the pair $(X,A)$ (where the inclusion $i:A\rightarrow X$ is a cofibration) can be defined to come from the "topological boundary map" $\partial^!$ followed by the inverse of the suspension isomorphism. The former is itself a composition -$$ \partial^! = \pi \circ \psi^{-1}: X/A \rightarrow Ci \rightarrow \Sigma A, $$ -where $Ci$ is the mapping cone of $i$, $\psi^{-1}$ is a homotopy inverse of the quotient $\psi: Ci \rightarrow Ci/CA=X/A$, and $\pi: Ci \rightarrow Ci/X=\Sigma A$. So -$$ \partial = (\Sigma_*)^{-1} \circ \partial^!_* : \tilde{H}_q(X/A) \rightarrow \tilde{H}_q(\Sigma A) \rightarrow \tilde{H}_{q-1}(A) .$$ -In fact, this is true for any reduced homology theory. See May's "Concise Course" for details, pp. 106-7. I'm pretty sure that the situation for cohomology is very similar. -Bottom line: In this formulation, the coboundary operator is the composition of a map induced from an actual map on spaces and the (inverse of the (?)) suspension isomorphism. Steenrod squares commute with both of these, so they commute with the coboundary operator.<|endoftext|> -TITLE: Can an integral scheme have closed points of both positive and zero characteristic? -QUESTION [24 upvotes]: Background -Recall that an integral scheme $X$ is a scheme which is both irreducible and reduced; equivalently, its ring of functions is an integral domain on every open subset. -Given any point $p$, there is a local ring $R_p$ at $p$, which is given by localizing the ring of functions $R_U$ on any affine neighborhood $U$ of $p$ at the prime corresponding to $p$. This local ring then has a residue field $K_p = R_p/pR_p$. The characteristic of $p$ is then the characteristic of the residue field at $p$. -Question -It's not too hard to come up with integral schemes where the characteristic of points jumps around. The standard example of $Spec(\mathbb{Z})$ has a closed point of every positive characteristic, and the unique non-closed point (the generic point) has characteristic 0 (since the residue field is $\mathbb{Q}$). -If we don't require connectedness, then the disjoint union of $Spec(\mathbb{Q})$ and $Spec(\mathbb{Z}/2)$ has closed points of characteristic 0 and 2, respectively. -However, I have been playing with examples, and I can't seem to come up with an example of an integral scheme that has closed points of both kinds. Can an integral scheme have closed points of both positive and zero characteristic?
-I would be curious to see such an example, since there is a wide gap in my intuition between schemes whose closed points have positive characteristic (which are inherently arithmetic) and schemes whose closed points have characteristic zero (which are either geometric, or certain arithmetic localizations). -Algebraic Version of Question (No Schemes Needed) -I should mention, though my motivation and curiosity are geometric in origin, the problem has an algebraic version. Is there an integral domain $R$, and two maximal ideals $m$ and $n$, such that $R/m$ has positive characteristic, and $R/n$ has zero characteristic? - -REPLY [26 votes]: Yes, but you're not going to like it. Let $R = \mathbb{Z}[x_1, x_2, ... ]$. Let $q_1, q_2, ... $ be an enumeration of the rationals; then the map $R \to \mathbb{Q}$ which sends $x_i$ to $q_i$ is surjective, so its kernel is a maximal ideal $n$ such that $R/n$ has characteristic zero. On the other hand, let $m = (p, x_1, x_2, ...)$. Then $R/m \simeq \mathbb{F}_p$. -Edit: Okay, so here is a Noetherian example. Let $R'$ denote the localization of $\mathbb{Z}$ at $p$ (since $\mathbb{Z}_p$ generally means something else) and let $R = R'[x]$. Then the map $R \to \mathbb{Q}$ which sends $x$ to $\frac{1}{p}$ is surjective, and so is the map $R \to \mathbb{F}_p$ which sends $x$ to, say, $1$. It seems interesting to try to picture $\text{Spec } R$; it might help to stare at Mumford's picture of $\text{Spec } \mathbb{Z}[x]$.<|endoftext|> -TITLE: Why is the cohomology of a $K(G,1)$ group cohomology? -QUESTION [22 upvotes]: Let $G$ be a (finite?) group. By definition, the Eilenberg-MacLane space $K(G,1)$ is a CW complex such that $\pi_1(K(G,1)) = G$ while the higher homotopy groups are zero. One can consider the singular cohomology of $K(G,1)$, and it is a theorem that this is isomorphic to the group cohomology $H^*(G, \mathbb{Z})$. According to one of my teachers, this can be proved by an explicit construction of $K(G, 1)$. -On the other hand, it seems like there ought to be a categorical argument. $K(G, 1)$ is the object that represents the functor $X \to H^1(X, G)$ in the category of pointed CW complexes, say, while the group cohomology consists of the universal $\delta$-functor that begins with $M \to M^G$ for $M$ a $G$-module. In particular, I would be interested in a deeper explanation of this "coincidence" that singular cohomology on this universal object happens to equal group cohomology. -Is there one? - -REPLY [6 votes]: Recall that group cohomology is not just about the trivial module, but is something you can compute for all $G$-modules. The corresponding thing on the topological side is to consider local systems on $K(G, 1)$ and their cohomology; in fact the category of local systems on $K(G, 1)$ is equivalent to the category of $G$-modules. Moreover, just as group cohomology is about taking derived invariants, cohomology of local systems is about taking derived global sections, and happily taking global sections of a local system is the same thing as taking invariants of the corresponding $G$-module. So there's reason to believe that the derived functors also match. -Now, as written this argument can't possibly work, because as it turns out the category of local systems on any reasonable path-connected space $X$ is equivalent to the category of $\pi_1(X)$-modules, but the cohomology of local systems on $X$ is sensitive to the higher homotopy of $X$ while the group cohomology of $\pi_1(X)$ is not.
The difference in the case of general $X$ is that the resolutions needed to compute cohomology of local systems won't themselves be made of local systems; in the standard story these resolutions can be computed in the category of sheaves, but there's a much more interesting place to compute these resolutions, namely the (higher) category of derived local systems. -Roughly speaking, a derived local system on a space $X$ is an $\infty$-functor from the fundamental $\infty$-groupoid $\Pi_{\infty}(X)$ to, say, chain complexes. Unlike a local system, a derived local system is sensitive to the higher homotopy of $X$. Taking the derived global sections of such a thing (by which I mean taking the derived pushforward to a point, by which I mean some homotopy Kan extension) generalizes taking the cohomology of local systems, and in particular ordinary local systems on $X$ should possess resolutions in this category (in a suitable sense) allowing you to compute their cohomologies. If $X$ is a $K(G, 1)$ then this is just the category of chain complexes of $G$-modules and the familiar story from homological algebra takes over. (This generalizes the fact that to compute the cohomology of local systems on a $K(G, 1)$ it suffices to write down resolutions which are chain complexes of $G$-modules and it's unnecessary to consider more general sheaves.)<|endoftext|> -TITLE: Counting Lattice Points with Ehrhart Polynomials -QUESTION [7 upvotes]: Let $\bar{\mathcal{P}}$ denote the closed, convex polytope with vertices at the origin and the positive rational points $(b_{1}, \dots, 0), \dots, (0, \dots, b_{n})$. Define the Ehrhart quasi-polynomial $L_{\mathcal{P}}(t) = |t \bar{\mathcal{P}} \cap \mathbb{Z}^{n}|$, which has the form: $\sum c_{k}(t) \ t^{k}$ with periodic functions $c_{k}(t)$. -Question 1: When is the Ehrhart quasi-polynomial a polynomial, i.e., the functions $c_{k}(t)$ are constants? Does this phenomenon occur only when vertices are at integral lattice points (i.e., $b_{i}$ are positive integers)? -Question 2: Suppose I have an Ehrhart polynomial in an indeterminate $t$. What is the significance of the value of the function at rational (non-integral) $t$? -Question 3: Suppose I'd like to count positive solutions (instead of non-negative solutions) of $\sum \frac{x_i}{b_i} \leq t$ with $t$ positive and fixed. Assuming that $b_{i}$ are positive integers, what is the corresponding "Ehrhart-like" polynomial in $t$ which enumerates the (positive) integral points in the $t$-dilate $t\bar{\mathcal{P}}$? Does it follow from a simple variable change in $t$ or $b_{i}$? -(Update) Here is an example of what I'm trying to do. Suppose I'd like to calculate the number of non-negative integer solutions of -\begin{eqnarray} -21 x_{1} + 14 x_{2} + 6 x_{3} \leq 1 -\end{eqnarray} -(corresponding to the number of positive integer solutions of $21 x_{1} + 14 x_{2} + 6 x_{3} \leq 42$). Equivalently, by dividing through by the product $6 \cdot 14 \cdot 21 = 1764$, we can consider -\begin{eqnarray} -\frac{x_{1}}{84} + \frac{x_2}{126} + \frac{x_3}{294} \leq \frac{1}{1764}. -\end{eqnarray} -Here, $\mathbf{b} = (84, 126,294)$, so the corresponding polytope is integral. The Ehrhart polynomial for $t$-dilates is -\begin{eqnarray} -L_{\bar{\mathcal{P}}}(t) = 1 + 231 t + 18522 t^{2} + 518616 t^{3}, -\end{eqnarray} -but setting $t = \frac{1}{1764}$ gives a meaningless answer.
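-(Sanity check — my own Python aside, not part of the original question: compare the claimed polynomial at $t=1$ with a direct count of the nonnegative integer solutions of $21x_1+14x_2+6x_3 \le 1764$, which is the $t=1$ dilate; the two printed numbers should agree if the polynomial is right.)
-    # brute-force count of lattice points in the t = 1 dilate
-    count = sum(1
-                for x1 in range(1764 // 21 + 1)
-                for x2 in range((1764 - 21 * x1) // 14 + 1)
-                for x3 in range((1764 - 21 * x1 - 14 * x2) // 6 + 1))
-
-    def ehrhart(t):
-        return 1 + 231 * t + 18522 * t**2 + 518616 * t**3
-
-    print(count, ehrhart(1))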
My initial impression is that along with violating the requirement that $t$ must be an integer, what I am actually calculating is the number of lattice points in the $t$-dilate of the polytope defined by $\frac{1}{1764}$ replaced with $1$. Is there an interpolation scheme to correctly calculate the number of non-negative solutions of the first equation by finding the values of $L_{\bar{\mathcal{P}}}(0)$ and $L_{\bar{\mathcal{P}}}(1)$? Thoughts? -Thanks! - -REPLY [2 votes]: (Partial Answer to Question 1) No, this phenomenon isn't restricted to integral convex polytopes. There are examples of non-integral rational convex polytopes with Ehrhart quasi-polynomials that are polynomials. See this reference.<|endoftext|> -TITLE: Constraints on sum of rows and columns of matrix -QUESTION [5 upvotes]: Suppose $r_i$, $1 \le i \le n$, and $c_j$, $1 \le j \le m$, are nonnegative integers. When does there exist an $n \times m$ matrix in $\text{Mat}_{n \times m} (\mathbb{Z}^+)$, i.e. nonnegative entries, such that $r_i$ is the sum of the entries in its $i$th row and $c_j$ is the sum of the entries in its $j$th column? - -REPLY [6 votes]: Non-negative integer solutions exist whenever the (also necessary) "balance condition" is met: -$$ \sum_i r_i = \sum_j c_j $$ -which simply sums the total matrix entries in two ways. -There is a survey paper by Alexander Barvinok on this: -http://www.math.lsa.umich.edu/~barvinok/linalg.pdf<|endoftext|> -TITLE: Books that develop interest & critical thinking among high school students -QUESTION [10 upvotes]: I heard about Yakov Perelman and his books. I just finished reading his two volumes of Physics for Entertainment. What a delightful read! What a splendid author. This is the exact book I've been searching for. I can use it to develop interest in science (math & physics) in my students. -His math books: - -Mathematics Can Be Fun -Figures for Fun -Arithmetic for entertainment -Geometry for Entertainment -Lively Mathematics -Fun with Maths & Physics - -His physics books: - -Physics for Entertainment (1913) -Physics Everywhere -Mechanics for entertainment -Astronomy for entertainment -Tricks and Amusements - -I want to get all the above books. Books from an author like this cannot be disappointing. But unfortunately not all of them are available. :( -I also read another amazing book How to Solve It: A New Aspect of Mathematical Method by G. Polya. This book actually teaches you how to think. -Along similar lines, if you have any book suggestions (with very practical problems & case studies) for physics & math, please contribute. (Please don't differentiate between math & physics here. If I can develop interest in one of the subjects, they'll gain interest in the other.) - -REPLY [2 votes]: As a kid I really enjoyed The Mathematical Tourist by Ivars Peterson (at the time, the 1988 edition). The best part is that it gave just enough detail (for example, on how to compute a particular type of fractal) to inspire me to implement a computer program of my own to carry out the calculation or simulation. In this way it really was a good guidebook for my own mathematical/computational explorations.<|endoftext|> -TITLE: Derivatives distribution -QUESTION [9 upvotes]: Let $f$ be a distribution on $\mathbf{R}^n$ (in the Schwartz sense) such that -$$\frac{\partial f}{\partial x_i} = 0 \text{ for $i = 1, \ldots, n$.}$$ -How can one prove that $f$ is a constant? I had this exercise in a class last year but I couldn't find how to do the induction step (for $n = 1$ it is clear of course).
-REPLY [6 votes]: If $f(x)$ is a smooth function, one gets that $f(x)$ is constant relatively easily as in Shai Covo's comment. For a general distribution, one can convolve with approximations to the identity: Let $\phi(x)$ be a fixed smooth function with compact support such that $\int \phi(x)\,dx = 1$, and let $\phi_k(x) = k^n\phi(kx)$. -Recall $f \ast \phi_k$ is a smooth function (its value at $y$ is $\langle f, \phi_y \rangle$, where $\phi_y(x) = \phi_k(y-x)$ is the reflected translate of $\phi_k$ by $y$) and is defined by $\langle f \ast \phi_k, s \rangle = \langle f, \tilde{\phi}_k \ast s \rangle$, with $\tilde{\phi}_k(x) = \phi_k(-x)$, for $C_c^{\infty}(\mathbb{R}^n)$ functions $s(x)$. Since $s \ast \phi_k(x)$ converges to $s(x)$ uniformly, with the same property holding for derivatives of $s(x)$, $f \ast \phi_k$ converges to $f$ as distributions. -But each $f \ast \phi_k(x)$ is a smooth function, and $\langle \partial_i (f \ast \phi_k), s \rangle$ $=$ $\langle f \ast \phi_k, -\partial_i s \rangle$ $=$ $\langle f, \tilde{\phi}_k \ast (-\partial_i s) \rangle$ $=$ $\langle \partial_i f, \tilde{\phi}_k \ast s \rangle$ $= 0$ by assumption. Hence by the smooth function case $f \ast \phi_k(x)$ is a constant. Thus $f(x)$ is the distributional limit of constants and must be constant itself.<|endoftext|> -TITLE: Existence of circuit passing through each vertex in a directed graph -QUESTION [6 upvotes]: Are there necessary and sufficient conditions for the existence of a circuit, or a disjoint set of circuits, that passes through each vertex once in a directed graph? - -REPLY [4 votes]: According to the classic text, "Computers and Intractability, A Guide to the Theory of NP-Completeness" by Garey and Johnson, the following problem: -Partition into Hamiltonian Subgraphs - -Given a directed graph $G=(V,A)$, can - the vertices be partitioned into - disjoint sets $V_1, V_2, \dots, V_k$ - for some $k$, such that each $V_i$ - contains at least three vertices and - induces a subgraph of $G$ that - contains a Hamiltonian Circuit? - -is NP-Complete, by a reduction from 3SAT. -The book also mentions that if we allow each $V_i$ to contain at least two vertices, then this is solvable in polynomial time using Matching techniques.<|endoftext|> -TITLE: Proof that Frechet-metric generates same topology as the semi-norms -QUESTION [5 upvotes]: Given a countable family of semi-norms $p_i$, we can define a metric -$d(f,g) = \sum \limits_{i=0}^{\infty} 2^{-i} \frac{ p_i(f-g) }{ 1 + p_i(f-g) }$ -We have the locally convex topology induced by the semi-norms as above, as well as the topology induced by the metric. -How does the proof work to show their equality? -I know the proof in Rudin (Functional Analysis), but it utilizes a different metric: -$d(f,g) = \max \limits_{i \in \mathbb N} 2^{-i} \frac{ p_i(f-g) }{ 1 + p_i(f-g) }$ -The proof for this metric is fairly easy - you can handle each value of the sequence on its own - but I do not see how a similar proof might work for the first metric. -One guess would be to show equivalence of both metrics, but I don't even see that, as on $l^1$, the sum-norm-topology is strictly finer than the max-norm-topology. -Can you help me? - -REPLY [6 votes]: To prove equivalence of the topologies you can use the fact that both topologies are characterized by their convergent sequences. This is true in any first countable space. Metric spaces are first countable, and the topology induced by a countable family of seminorms is first countable. Here, since you have translation invariance, it is enough to check sequences converging to 0 (although this simplification is not essential). That is, you can show that if $x_1,x_2,\ldots$ is a sequence in your vector space, then $d(x_n,0)\to 0$ as $n\to\infty$ if and only if for all $i$, $p_i(x_n)\to0$ as $n\to\infty$.
For the left to right direction, you can use the fact that $\frac{p_i}{1+p_i}\leq 2^id$. For the other direction you can first bound the tail, then work with the remaining finitely many terms. -There is no actual need to consider sequences. You basically want to show that the identity map is a homeomorphism, and this reduces to showing it is continuous at 0 in both directions, which in any case involves the same estimates you would make when working with sequences. (In other words, considering first countability a priori isn't necessary. If you really want to, you could instead work with nets.) Note that Jonas T's answer suggests a complementary approach (that was given around the same time), working directly with neighborhoods of $0$ instead of with sequences.<|endoftext|> -TITLE: Representation theory of the additive group of the rationals? -QUESTION [26 upvotes]: What do the finite-dimensional continuous complex representations of the additive group $\mathbb{Q}$ with the usual topology look like? With the discrete topology? Which representations are indecomposable? Irreducible? -The only ones I can think of are of the form $t \mapsto e^{tA}$ for some $A \in \mathcal{M}_n(\mathbb{C})$. I would be willing to believe that these are the only ones in the first case, but I'm less sure in the second case. - -REPLY [28 votes]: One way to think of $\mathbb Q$ is the direct limit over positive integers $n$ of $\frac{1}{n} \mathbb Z$. Thus giving a character of $\mathbb Q$ is the same -as giving an element in the projective limit of the character groups of -$\frac{1}{n}\mathbb Z$. In particular, if we restrict to unitary characters, -we find that $\mathbb Q^{\vee}$ is the projective limit of circle groups $S^1$ under the $n$th power maps. This object is (I think) called a solenoid; to number theorists it is better known as the adele class group $\mathbb A/\mathbb Q$. (Here and throughout I am using the discrete topology; if one instead considers the -induced topology from $\mathbb R$, then, as Robin explains, one just gets -characters of $\mathbb R$.) -The exact sequence $0 \to \hat{\mathbb Z} \to \mathbb Q^{\vee} \to S^1 \to 0$ -in Pete's answer arises from the map taking the solenoid to the base $S^1$; the fibres of this map are copies of $\hat{\mathbb Z}$. -If we wanted not necessarily unitary characters, we would instead get -the projective limit of copies of $\mathbb C^{\times}$ under the $n$th power maps. Since $\mathbb C^{\times} = \mathbb R_{> 0} \times S^1$, and since -$\mathbb R_{> 0}$ is uniquely divisible, this projective limit is simply -$\mathbb R_{> 0}$ times the solenoid. -On a slightly tangential note, let me remark that -the relationship with the adeles is important (e.g. it is the first step in Tate's thesis): -Since the adeles are the (restricted) product of $\mathbb R$ and each $\mathbb Q_p$, and since these are all self-dual, it is easy to see that $\mathbb A$ is self-dual. -One then has the exact sequence -$$0 \to \mathbb Q \to \mathbb A \to \mathbb A/\mathbb Q \to 0$$ -which is again self-dual (the duality swaps $\mathbb Q$ and the solenoid -$\mathbb A/\mathbb Q$). -One should compare this with the exact sequence -$$0 \to \mathbb Z \to \mathbb R \to \mathbb R/\mathbb Z = S^1 \to 0.$$ -This is again self-dual ($\mathbb R$ is self-dual, -and duality swaps the integers and the circle). 
-This brings out the important intuition that the adeles are to $\mathbb Q$ as -$\mathbb R$ is to $\mathbb Z$.<|endoftext|> -TITLE: Every real sequence is the derivative sequence of some function -QUESTION [10 upvotes]: I am looking for the proof of the following theorem: - -Let $(a_n)$ be a sequence of real - numbers. Then there exists a function - $f$ which is infinitely differentiable - at 0, and $$ \frac{d^nf}{dx^n}(0) = - a_n, \ \ \text{for all } n.$$ - -I would appreciate either a sketch of the proof or an online reference to it. A general case is listed as Borel's lemma in Wikipedia, without proof. -The hard part is when the power series $\sum_n \frac{a_n}{n!}x^n$ has a zero radius of convergence. -Edit: Thanks for the answers! - -REPLY [10 votes]: As you mentioned, this is a result of Borel. Proofs can be found in several sources, see for example the book on "Complex variables" by Berenstein and Gay (it may just be an exercise there, but at least there is a "hint"). The idea is to try a power series. Of course, this may not converge, so you use the kind of functions that come in constructions of smooth partitions of unity to help the convergence; the point is that you can ensure these functions decay to zero sufficiently fast. -Edit: There is a proof in Wikipedia, actually. See here. -And there is another question where proofs are sketched. In particular, in my answer I sketch a proof due to Peano that is different from the standard proof I reference above. - -REPLY [5 votes]: This is a famous theorem of Borel. If you have Hormander's "The analysis of linear partial differential operators I" it's Theorem 1.2.6 there. He proves the result in any dimension. The basic idea is to write a sum of functions $\sum_n {a_n \over n!}\phi(m_n x)x^n$ where $\phi(x)$ is $1$ near $x = 0$ and is equal to zero outside of a small set containing $0$. If the $m_n$ are chosen carefully then the sum will have the desired properties.<|endoftext|> -TITLE: A space is regular if each closed set $Z$ is the intersection of all open sets containing $Z$? -QUESTION [9 upvotes]: I've been trying to prove the following statement, but the converse has been giving me trouble. - -A topological space $X$ is regular if and only if every closed subset $Z\subseteq X$ is the intersection of all open sets $U\subseteq X$ which contain it. - -Here, the definition of regular is that for any point $p\in X$ and any closed subset $Z\subset X$ not containing $p$, there exist open sets $E,F\subset X$ such that $p\in E$, $Z\subseteq F$, and $E\cap F=\emptyset$. The space does not necessarily have to be a $T_1$ space. -I believe I showed the forward statement correctly. I suppose $X$ is regular, and clearly $Z\subseteq\cap\mathcal{U}$, where $\mathcal{U}$ is the family of all open sets containing $Z$. Then if I take $p\in\cap\mathcal{U}$, then $p$ is in every open subset containing $Z$. Towards a contradiction, I assume that $p$ is not in $Z$. Then since $X$ is regular, there exist open sets $E,F\subset X$ such that $p\in E$, $Z\subseteq F$, and $E\cap F=\emptyset$. Thus $p\not\in F$, but $F$ is an open set containing $Z$, and thus must contain $p$, a contradiction. Thus $p\in Z$, and so $Z=\cap\mathcal{U}$. -The backwards step has stumped me for a while. I take any point $p$ and a closed set $Z$ which does not contain $p$. So $Z=\cap\mathcal{U}$, and since $p\not\in Z$, there exists an open set $F$ containing $Z$ which does not contain $p$.
The only observation I've been able to make is that $F^c$ is a closed set containing $p$, and then $F^c=\cap\mathcal{W}$ where $\mathcal{W}$ is every open set containing $F^c$. Hence $p$ is in every open set containing $F^c$. Taking any open sets $E$ and $F$ such that $p\in E$ and $Z\subseteq F$, I haven't been able to see a way to choose two such open sets that are disjoint, which would show the space is regular. Showing $F^c$ is an open set would work, but I'm not sure if that's even true. I was hoping someone could point out what I'm missing, thank you. - -REPLY [9 votes]: Ok, let's try another counterexample. -For simplicity, say a space with your property (any closed set is the intersection of its neighborhoods) is "Cromarty". -Let $X$ be an infinite set with the cofinite topology (the closed sets are all the finite sets and $X$ itself). If $Z \subset X$ is closed (hence finite), let $\mathcal{U}$ be the collection of all open sets containing $Z$. For any $x \notin Z$, $\{x\}^c$ is open (because it is cofinite) and contains $Z$, so $\{x\}^c \in \mathcal{U}$, and thus $x \notin \bigcap \mathcal{U}$; hence $\bigcap \mathcal{U} \subset Z$, so $\bigcap \mathcal{U}=Z$. So $X$ is Cromarty. -However, $X$ is not regular, since every pair of nonempty open sets has nonempty (in fact, infinite) intersection. -Edit: To amplify, it looks like the Cromarty property is equivalent to being $R_0$, which means: if $x$ has a neighborhood not containing $y$, then $y$ has a neighborhood not containing $x$. This is of course strictly weaker than being regular (as the cofinite topology shows). -Suppose $X$ is $R_0$, and $Z$ is closed. Suppose $x \notin Z$, $y \in Z$. Then $Z^c$ is a neighborhood of $x$ not containing $y$, so $y$ has a neighborhood $U_y$ that does not contain $x$. Now $U = \bigcup_{y \in Z} U_y$ is a neighborhood of $Z$ that does not contain $x$. As above, it follows that $Z$ is the intersection of its neighborhoods, so $X$ is Cromarty. -Conversely, suppose $X$ is Cromarty. Suppose a point $x$ has a neighborhood $U$ not containing $y$. $U^c$ is closed, hence the intersection of its neighborhoods, so $U^c$ has a neighborhood $V$ not containing $x$. But $V$ is also a neighborhood of $y$, so $X$ is $R_0$.<|endoftext|> -TITLE: Monkeys and Typewriters -QUESTION [7 upvotes]: Suppose that there is a certain collected works of plays that is N symbols long in the following sense: a "symbol" is one of the 26 letters of the alphabet, a line break, period, space, or a colon; in other words there are 30 possible symbols. -If "a monkey" randomly "types" 1 of these 30 symbols at a rate of one per second, how long will it take M monkeys working at this rate, on average, for one of them to randomly write this specific N symbol long collected works? -For clarity let me state that I am assuming each monkey ceaselessly types random symbols at this rate, and unless a monkey immediately types the right things, the collected works will be preceded by gibberish. - -REPLY [5 votes]: We will estimate the probability for a "generic" string. The number of occurrences of the string in any given monkey's output is roughly distributed Poisson with $\lambda = 30^{-N}$. The time until the first event happens is thus roughly distributed exponentially with rate $\lambda = 30^{-N}$. The minimum of $M$ such processes is also distributed exponentially with rate $\lambda = M/30^N$. Thus the expected time is roughly $30^N/M$.
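A scaled-down simulation makes the $30^N/M$ heuristic easy to test; here is a sketch of mine in Python with an alphabet of $q=3$ symbols, a non-self-overlapping target of length $N=2$ and a single monkey, so the predicted mean waiting time is $3^2=9$:

import random

def first_occurrence(target, q, rng):
    # type random symbols until `target` appears; return how many were typed
    window, t = "", 0
    while True:
        window = (window + str(rng.randrange(q)))[-len(target):]
        t += 1
        if window == target:
            return t

rng = random.Random(0)
times = [first_occurrence("01", 3, rng) for _ in range(100000)]
print(sum(times) / len(times))  # close to 9, since "01" has no self-overlap

(For a self-overlapping target such as "00" the mean is $q^2+q$ instead, which is exactly the kind of correction the generating function mentioned below accounts for.)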
The same estimate can be obtained if we calculate the expected number of appearances. The expected number of appearances in any given monkey's stream for the first $t+N-1$ characters is $t/30^N$. For $M$ monkeys, it is $tM/30^N$. This is $1$ for $t = 30^N/M$, and this gives a rough estimate for the actual expectation. -In fact, assuming that the string "doesn't overlap with itself", we can get an exact expression for the expectation (depending only on $N$ and $M$) using Theorem 1.1 in "String overlaps, pattern matching, and nontransitive games" (Guibas and Odlyzko '81), which gives a generating function for the probability that a given monkey is not done after $t$ steps. -The paper also gives an expression for "non-generic" strings and for multiple strings, but the collected works are not going to overlap themselves; even if they do, it will probably have only a slight effect on the probabilities.<|endoftext|> -TITLE: Convergence of Series $\sum 1/(1+ n^2 x)$ Uniformly (Homework) -QUESTION [5 upvotes]: This is from the book Principles of Mathematical Analysis by Rudin, number 4 of chapter 7. It says consider -$$ f(x) = \sum\limits_{n=1}^{\infty}{ 1/(1+ n^2 x) } $$ -The question asks: -(1) For what values of $x$ does the series converge absolutely? We got that the series converges when $x \not= 0$ and $x \not= -1/k^2$ for every integer $k$, since the term with $n = k$ is undefined at $x = -1/k^2$. -However we don't understand how to do any of the following questions asked. Any hints would be greatly appreciated. We were told that this problem was supposed to be fairly hard for its position in the problem set (ie. 4th question in the Rudin book). -(2) On what intervals does it converge uniformly? -(3) On what intervals does it fail to converge uniformly? -(4) Is f continuous wherever the series converges? -(5) Is f bounded? - -REPLY [4 votes]: For strictly positive $x$ you can already see that each term of the series, except the first one, is bounded by: -$$\frac{1}{1+n^2 x} < \frac{1}{(n-1)^2 x}$$ -The series formed by the bounds converges, so by the Weierstrass M-test, you have uniform convergence for $x>0$. -For $x<-1$ the same argument works: each term except the first satisfies -$$\left|\frac{1}{1+n^2 x}\right| < \frac{1}{(n-1)^2 |x|}$$ -The series formed by the bounds converges, so by the Weierstrass M-test, you have uniform convergence for $x<-1$ as well. -The hard part is of course what happens on the open interval $]-1,0[$, you have already excluded the endpoints as well as all the negative inverses of squares of integers within that interval. - -REPLY [2 votes]: HINT: -You have seen a number of discontinuities at certain points. - -Can you use the M-test between points of discontinuity? -1.a If you can then it is good. -1.b If you can not do that - why? Can you do estimates if you remove a part around these points? -Can you show that the convergence is NOT uniform near these points?<|endoftext|> -TITLE: Asymmetric planar cubic graphs -QUESTION [6 upvotes]: At Wikipedia I found that "according to a strengthened version of Frucht's theorem, there are infinitely many asymmetric cubic graphs". - -Are there infinitely many asymmetric -planar cubic graphs, too? -If so, does it follow that there is an infinite asymmetric planar cubic graph? -If so, how could this graph be characterized? - -Background -I am looking for a homogeneous and isotropic (in the large) regular graph that could "mimic" a discretized plane (without distinguished directions as in a grid). So an infinite asymmetric 4-regular graph would be even better (reflecting dimension 2).
- -REPLY [7 votes]: Unless I'm mistaken, you can add a sufficiently large "cyclic ladder" to the Frucht graph as in: [figure omitted] -I would argue that, in an automorphism, the right-hand side (the attached ladder) would need to be mapped to itself. Therefore, an automorphism of the above graph would imply an automorphism of the Frucht graph (and therefore must be trivial). -An infinite version could be constructed by appending two infinite ladders instead (actually, I'm pretty sure you could attach one infinite ladder).<|endoftext|> -TITLE: Vague convergence of measures -QUESTION [5 upvotes]: Define a subprobability measure to be a measure on the Borel sigma algebra of the real line $\mathbb{R}$ with the measure of the whole real line less than or equal to 1. -I was wondering about the definition of vague convergence of a sequence of subprobability measures $\{ \mu_n, n\geq 1 \}$ to another subprobability measure $\mu$. The convergence can be defined in two slightly different ways as in Chung's probability theory book p85 and p90: -(1) if there exists a dense subset D of the real line $\mathbb{R}$ so that $\forall a \text{ and } b \in D$ with $a < b$, $\mu_n((a,b]) \to \mu((a,b])$.<|endoftext|> -TITLE: Quotient geometries known in popular culture, such as "flat torus = Asteroids video game" -QUESTION [37 upvotes]: In answering a question I mentioned the Asteroids video game as an example -- at one time, the canonical example -- of a locally flat geometry that is globally different from the Euclidean plane. It might be out of date in 2010. This raises its own question: -are there other real-life examples of geometries formed by identifications? We know the cylinder and Moebius strip and there are probably some interesting equivalents of those. Origami are coverings of a punctured torus and I heard that there are crocheted examples of complicated 2-d and 3-d objects. Are there simple, Asteroids-like conversational examples for flat surfaces formed as quotients? Punctures and orbifold points and higher dimensional examples all would be interesting, but I am looking less for mathematically sophisticated than conversationally relevant examples, such as a famous game or gadget that makes a cellphone function as a torus instead of a rectangle. -(edit: Pac-Man, board games such as Chutes and Ladders, or any game with magic portals that transport you between different locations, all illustrate identification of points or pieces of the space, but they lead to non-homogeneous geometries. The nice thing about Asteroids was that it was clearly the whole uniform geometry of the torus. -edit-2: the Flying Toasters screen-saver would have been an example of what I mean, except that video of it exists online and shows it to be a square window onto motion in the ordinary Euclidean plane.) - -REPLY [3 votes]: Take a look at this version of Asteroids that runs in other quotient spaces besides the flat torus: -http://olhar3d.impa.br/nave/asteroids/ -This short paper describes the approach: -http://www.visgraf.impa.br/Data/RefBib/PS_PDF/sib2015filipe/wip-filipe.pdf<|endoftext|> -TITLE: A vector space over $R$ is not a countable union of proper subspaces -QUESTION [13 upvotes]: I was looking for alternate proofs of the theorem that "a vector space $V$ of dimension greater than $1$ over an infinite field $\mathbf{F}$ is not a union of fewer than $|\mathbf{F}|$ proper subspaces" and possible generalizations.
-A simple measure-theoretic proof over $\mathbb{R}$ is as follows: every proper subspace of $\mathbb{R}^n$ has Lebesgue measure zero, so by countable subadditivity a countable union of proper subspaces also has measure zero and cannot be all of $\mathbb{R}^n$, which is a contradiction. -I would like to look at proofs over arbitrary infinite fields and would like to know if similar statements hold for, say, modules (finitely generated or otherwise) over infinite rings. - -REPLY [8 votes]: I wrote a short note on exactly this problem. It will appear in the Monthly one of these years. -Note in particular that your quoted statement is not necessarily true for infinite-dimensional vector spaces: an infinite-dimensional vector space over any field can be covered by $\aleph_0$-many proper subspaces. -You ask also about modules over rings. Yes, there has been work on that. I believe that Apoorva Khare, a mathematician who independently proved most of the results in my note, also has some work on the case of modules over various rings. If you google for "covering numbers", you'll find that many, many papers on this topic have been written.<|endoftext|> -TITLE: Sets of Constant Irrationality Measure -QUESTION [5 upvotes]: Let $\mu (r)>2$ be the irrationality measure of a transcendental number $r$, and consider the following set of points $P \subseteq \mathbb{R}$: -$P=\{r\in \mathbb{R}: \mu(r)=\text{Constant}\}$ -Is this set a fractal, and if so, then what is its dimension? - -REPLY [3 votes]: It is a fractal much like the Cantor set, with dimension $2/\mu$, where $\mu$ is the prescribed constant value of the irrationality measure. That is Jarník's theorem. You can find a proof in the Falconer book Fractal Geometry: Mathematical Foundations and Applications.<|endoftext|> -TITLE: How do I compute the eigenfunctions of the Fourier Transform? -QUESTION [10 upvotes]: In Andy's answer to the question "What are fixed points of the Fourier Transform" on Math Overflow, he shows that the Fourier Transform has eigenvalues $\{+1, +i, -1, -i \}$ and that the projections of any function onto the corresponding four eigenspaces may be found through some simple linear algebra. -I would like to get a better feeling for these four eigenspaces of the Fourier transform. - -How can I find some interesting members of each of these eigenspaces? -How can I show that Hermite-Gaussians are in one (or more?) of the eigenspaces? -How can one define usable projection operators onto these eigenspaces? -The Wikipedia article on the Fourier Transform mentions that Wiener defined the Fourier Transform via these projections. What exactly was Wiener's approach? - -REPLY [6 votes]: One method is to apply the Bargmann transform: -$$B_f(z) = \int_{-\infty}^{\infty} f(x) \exp \left(2 \pi x z - \pi x^2 - \frac{\pi}{2}z^2 \right) dx$$ -to the eigenvalue equation: -$$f(x) = \lambda \int_{-\infty}^{\infty}f(y) \exp(-2 \pi ix y) dy$$ -to obtain the following relation for the Bargmann transformed eigenfunctions: -$$B_f(z) = \lambda B_f(-iz) $$ -whose solutions are the monomials: -$B_f(z) = z^n$, corresponding to the eigenvalues $i^{n \bmod{4}}$. -Now, the inverse Bargmann transform of the monomials are just the Hermite functions. -Remark: The application of the Bargmann transform to the eigenvalue equation entails a change of the integration order which is possible because the Hermite functions are bounded.
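The eigenfunction statement is also easy to check numerically. A sketch of my own in Python (with the convention $\hat f(\xi)=\int f(x)e^{-2\pi i x\xi}\,dx$, under which $h_n(x)=H_n(\sqrt{2\pi}\,x)e^{-\pi x^2}$ satisfies $\hat h_n=(-i)^n h_n$, i.e. $\lambda = i^n$ in the eigenvalue equation above):

import numpy as np
from numpy.polynomial.hermite import hermval

def h(n, x):
    # H_n(sqrt(2 pi) x) * exp(-pi x^2), an eigenfunction of the Fourier transform
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermval(np.sqrt(2 * np.pi) * x, c) * np.exp(-np.pi * x ** 2)

x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
xi = np.linspace(-2.0, 2.0, 9)
for n in range(4):
    f = h(n, x)
    # numerical Fourier transform at the sample frequencies xi
    fhat = np.array([np.sum(f * np.exp(-2j * np.pi * x * s)) * dx for s in xi])
    print(n, np.max(np.abs(fhat - (-1j) ** n * h(n, xi))))  # errors are tiny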
-The projection operators onto the $n$th subspace in the Bargmann -representation are given by the kernels -$$ P(v,\bar{z})=\frac{(v\bar{z})^n}{n!} ,$$ -which act on the Bargmann transformed functions according to: -$$ Pf(v) = \int_{\mathbb{C}}P(v,\bar{z}) f(z) \exp(-z\bar{z})dzd\bar{z} .$$ -In the time domain the projection kernels are just $P(t, \tau) = H_n(\tau) H_n(t)$ due to the orthonormality of the Hermite functions.<|endoftext|> -TITLE: Moments and non-negative random variables? -QUESTION [7 upvotes]: I want to prove that for non-negative random variables with distribution F: -$$E(X^{n}) = \int_0^\infty n x^{n-1} P(\{X\geq x\}) dx$$ -Is the following proof correct? -$$R.H.S = \int_0^\infty n x^{n-1} P(\{X\geq x\}) dx = \int_0^\infty n x^{n-1} (1-F(x)) dx$$ -using integration by parts: -$$R.H.S = [x^{n}(1-F(x))]_0^\infty + \int_0^\infty x^{n} f(x) dx = 0 + \int_0^\infty x^{n} f(x) dx = E(X^{n})$$ -If not correct, then how to prove it? - -REPLY [3 votes]: If ${\rm E}(X^n) < \infty$, then $\int_0^\infty {x^n f(x){\rm d}x} < \infty $, and in turn, $\int_M^\infty {x^n f(x){\rm d}x} \to 0$ as $M \to \infty$. Since $\int_M^\infty {x^n f(x){\rm d}x} \geq \int_M^\infty {M^n f(x){\rm d}x} = M^n [1 - F(M)]$, we have that $M^n [1 - F(M)] \to 0$ as $M \to \infty$. Hence, your solution is correct (assuming that ${\rm E}(X^n) < \infty$).<|endoftext|> -TITLE: Convergence of a Fourier series -QUESTION [7 upvotes]: Let $f$ be the $2\pi$ periodic function which is the even extension of $$x^{1/n}, 0 \le x \le \pi,$$ where $n \ge 2$. -I am looking for a general theorem that implies that the Fourier series of $f$ converges to $f$, pointwise, uniformly or absolutely. - -REPLY [5 votes]: I found the following theorems in the book "Introduction to classical real analysis" by Karl R. Stromberg, 1981. - -(Zygmund) If $f$ satisfies a Hölder (also called Lipschitz) condition of order $\alpha\gt 0$ and $f$ is of bounded variation on $[0,2\pi]$, then the Fourier series of $f$ converges absolutely (and hence uniformly). p. 521. -This applies to the example in my question. -If $f$ is absolutely continuous, then the Fourier series of $f$ converges uniformly but not necessarily absolutely. p. 519 Exercise 6(d) and p.520 Exercise 7c. -(Bernstein) If $f$ satisfies a Hölder condition of order $\alpha\gt 1/2$, then the Fourier series of $f$ converges absolutely (and hence uniformly). p.520 Exercise 8 (f) -(Hille) For each $0<\alpha\le 1/2$, there exists a function that satisfies a Hölder condition of order $\alpha$ whose Fourier series converges uniformly, but not absolutely. p.520 Exercise 8 (f)<|endoftext|> -TITLE: Quadratic minimization in a Hilbert space -QUESTION [6 upvotes]: If $A$ is a positive definite matrix, then the minimizer of $(1/2)x^TAx - b^Tx$ is given by $A^{-1}b$. I'm interested in the generalization of this to a Hilbert space. What conditions on $A$ should be required for this to be true in a Hilbert space? Also, is there a reference that includes this? I searched Google books, but couldn't find it. - -REPLY [6 votes]: What you need is to look at some texts on functional analysis, especially about "Direct Methods" in "Calculus of Variations". I'll give a quick primer here to flesh out what Rahul Narain said in the comments, but it will be by no means complete. -First let us review how the minimization problem is solved in finite dimensional spaces. We will consider a finite dimensional vector space $V$ with an inner product $\langle,\rangle$.
Let $A$ be a self-adjoint linear operator $V\to V$, and $b$ some element of $V$. We consider the minimization problem with the non-linear functional $J(x) = \frac12 \langle x,Ax\rangle - \langle b,x\rangle$. There are two steps used in finding the minimizer. -Step 1) Existence of a minimizer. First we need to show that $J(x)$ is bounded from below. This is where we first use that $A$ has only non-negative eigenvalues. If $A$ were to have negative eigenvalues, let $v$ be its eigenvector, and look at $J(kv)$ for $k$ a scalar. Letting $k\nearrow\infty$ we see that $J(kv)\searrow -\infty$ and there cannot be a minimum. Next, if $A$ were to have a nontrivial kernel, so that there exists some vector $v$ with the property $Av = 0$ and $\langle b,v\rangle \neq 0$, then again taking $J(kv)$ and sending $k$ to either $\pm\infty$, we will have that the functional $J$ is not bounded below. -So $A$ must only have positive eigenvalues. Now, let $\lambda$ be the smallest of $A$'s eigenvalues. Then we have $\langle x,Ax\rangle \geq \lambda \|x\|^2$. Using the Cauchy-Schwarz inequality, we then have -$$ J(x) \geq \frac12 \lambda \|x\|^2 - \|b\|\|x\| $$ -So we see that for $\|x\| > 2\|b\| / \lambda$, $J(x) > 0$; and by continuity, $J$ is bounded below on the compact ball $\|x\| \leq 2\|b\|/\lambda$, hence bounded below everywhere. -Now, let $j = \inf J < 0$. (The infimum is negative since by choosing $w$ a sufficiently small multiple of $b$, we see that $J(w) < 0$.) By the definition of infimum, there exists a sequence of vectors $(v_k)$ such that $J(v_k) \searrow j$. Since the infimum is less than 0, by our previous computations we know that the vectors $v_k$ must eventually satisfy $\|v_k\| \leq 2\|b\|/\lambda$. Now, this set is a bounded closed ball in a finite dimensional vector space. In particular, it is compact. Hence up to a subsequence $v_k$ must converge. Now using continuity of $J$, you have that the limit element $v_\infty$ of the sequence attains the minimum. That is $J(v_\infty) = j$. -Step 2) Computing the minimizer. Here we just use the Euler-Lagrange equations. At the minimizer, the functional is stationary. That is $(\frac{\delta}{\delta x}J)(v_\infty) = 0$. Taking the variation of the functional $J$, we get, using again that $A$ is self-adjoint, that -$$ \frac{\delta}{\delta x}J = 0 \iff Ax - b = 0 $$ -Since $A$ only has positive eigenvalues, it is invertible, so we can solve $v_\infty = A^{-1}b$.
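As a concrete finite-dimensional check (a sketch of my own in Python; the randomly generated positive definite $A$ is just an example), one can verify numerically that $A^{-1}b$ does beat nearby points:

import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)        # self-adjoint and positive definite
b = rng.normal(size=n)

J = lambda v: 0.5 * v @ A @ v - b @ v
x_star = np.linalg.solve(A, b)     # the claimed minimizer A^{-1} b

# J increases under every random perturbation of x_star
samples = [J(x_star + 0.1 * rng.normal(size=n)) for _ in range(1000)]
print(J(x_star), min(samples))     # J(x_star) is strictly smaller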
- -Formally, we can try to do the same thing in infinite dimension. But there are a few caveats. First is that it is not enough that $A$ only has positive eigenvalues. You need that $A$ is coercive in the following sense: -$$ \langle x, Ax\rangle \geq \lambda \|x\|^2$$ -for some $\lambda > 0$. In finite dimensions, positivity of $A$ implies the existence of such a $\lambda$. In infinite dimensions this is not true. Imagine the $\ell^2$ Hilbert space with standard inner product and the standard basis $(e_k)$. Let $Ae_k = \frac1k e_k$. Then $A$ has only positive eigenvalues. But by taking larger and larger $k$'s, we can make the eigenvalue approach 0 as closely as we want, and violate coercivity. Coercivity allows us to guarantee that the functional $J(x)$ has a bottom. -The second problem is that a closed bounded set in a Hilbert space is not necessarily compact. So we cannot conclude immediately that a minimizing sequence converges in the norm topology. However, using the Banach-Alaoglu theorem, the minimizing sequence must have a convergent subsequence in the weak topology of your Hilbert space. So it is sufficient to show that your functional $J(x)$ is what is called "sequentially weakly lower-semicontinuous" to get the desired conclusion. (That condition just guarantees that for any weakly converging sequence $w_k\to_w w_\infty$, $\lim\inf J(w_k) \geq J(w_\infty)$.) -In our particular case, things can be made slightly easier. If we in addition assume that $A$ is a bounded linear operator, then we can use $(x,y) = \langle x,Ay\rangle$ as an equivalent inner product on our Hilbert space. Then $A^{-1}b$ exists by the Riesz representation theorem, and our functional can be written as $J(x) = \frac12 (x,x) - (A^{-1}b, x)$. Now we run through our previous arguments again. Now $w_k\to_w w_\infty$, and therefore $(A^{-1}b,w_k) \to (A^{-1}b,w_\infty)$ by the definition of weak limits. And we also use the property of weak limits that -$$ \lim\inf (w_k,w_k) \geq (w_\infty,w_\infty)$$ -which guarantees lower semi-continuity as desired. So now we've shown that the minimizer exists, and computing the variation we see that the minimizer must satisfy $w_\infty = A^{-1}b$. -So to repeat, a sufficient condition is that $A$ is self-adjoint, bounded, and coercive.<|endoftext|> -TITLE: Pointwise but not uniform convergence of a Fourier series -QUESTION [8 upvotes]: What is an example of a continuous, or even better, differentiable, $2\pi$ (or 1) periodic function whose Fourier series converges pointwise but not uniformly? (Such function cannot be of Hölder class, or absolutely continuous.) - -REPLY [10 votes]: Consider: -$$f_{n,N}(x) = \sin(Nx) \sum_{k = 1}^n \frac{\sin(kx)}{k}$$ -Now consider -$$\sum_k \frac{1}{k^2} f_{2^{k^3}, 2^{k^3 - 1}}(x)$$ -Now for $x = \pi / (4n)$ and $N = 2n$ we get that -$$\sin(\pi/4) \sum_1^n \frac{1}{k} > \frac{1}{\sqrt{2}} \log n$$ -So we have for some $x$ that -$$|s_{n_k + 1} - s_{n_k}| \geq \frac{1}{\sqrt{2}} \frac{1}{k^2} \log n_k$$ -So we cannot have uniform convergence. I believe this is due to Hugo Steinhaus. -I hope I didn't make a mistake, but it is along these lines, I can correct it if I made an error.<|endoftext|> -TITLE: Does rapid decay of Fourier coefficients imply smoothness? -QUESTION [11 upvotes]: Under the isomorphism of Hilbert spaces $L^2(S^1)\to\ell^2(\mathbb Z),\quad e^{2\pi i n t}\mapsto e_n$, smooth functions on the circle are mapped to rapidly decaying sequences (see Wikipedia). -Is the converse also true? That is, does every rapidly decaying sequence in $\ell^2(\mathbb Z)$ represent a smooth function? -Motivation: For certain arguments, it would be really nice to read off smoothness from the Fourier coefficients. - -REPLY [15 votes]: Yes. Smoothness is equivalent to the Fourier coefficients forming a sequence that decays rapidly (faster than any polynomial). To see the direction you asked about, note that if $\{c_n\}$ is a rapidly decaying sequence, then the sum $\sum c_n e^{in x}$ will converge uniformly, as will every termwise differentiated series. So the sum will represent a smooth function. (The continuous analog of this is that the Fourier transform induces an isomorphism of the Schwartz space onto itself.)
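The dichotomy is visible numerically; here is an illustration of mine comparing a smooth periodic function with a merely Lipschitz one (the particular functions and grid size are arbitrary choices):

import numpy as np

N = 2048
x = 2 * np.pi * np.arange(N) / N
for name, f in [("exp(cos x), smooth", np.exp(np.cos(x))),
                ("|sin x|, not smooth", np.abs(np.sin(x)))]:
    c = np.abs(np.fft.rfft(f)) / N      # |c_n| for n = 0 .. N/2
    print(name, [float(c[n]) for n in (4, 16, 64)])
# exp(cos x): coefficients fall off faster than any power of n (they reach
# the floating-point noise floor almost immediately); |sin x|: only ~ n**(-2)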
-In general, one can use the Fourier coefficients to give an $L^2$ definition of differentiability. Morally, a function ought to be $C^k$ if its Fourier coefficients decay like $|n|^{-k}$. This is not quite accurate, but one can use this to define the Sobolev spaces -on the circle (more generally on the torus, or using the Fourier transform on Euclidean space, and by an extension process on compact manifolds) that describe how many "weak" derivatives a function has. This sort of argument shows that $L^2$ differentiability is roughly comparable with normal differentiability, and using such Sobolev spaces turns out to be essential when one tries to prove explicit bounds about things such as elliptic regularity. There, working with Fourier coefficients (or transforms) and a multiplication operator is significantly easier than working in the original function space with the correspondingly more complicated differential operator. (I am using the fact that constant-coefficient differential operators correspond under the Fourier transform to multiplication by a polynomial.)<|endoftext|> -TITLE: matrix representation of $ax+b$ group -QUESTION [5 upvotes]: In "A Short Course on Spectral Theory", page 10, William Arveson asserts that the "$ax+b$ group", i.e. the group generated by all dilations and translations of the real line, is isomorphic to the group of all (real) $2\times 2$ matrices of the form -$$ \begin{bmatrix} a & b\\ 0 & \frac{1}{a}\end{bmatrix}, \,\,\,\,\,\, a>0 \mbox{ and } b \mbox{ real}$$ -It is very easy to check that the $ax+b$ group is isomorphic to the group of all matrices of the form -$$ \begin{bmatrix} a & b\\ 0 & 1\end{bmatrix}, \,\,\,\,\,\, a>0 \mbox{ and } b \mbox{ real}$$ -So these two matrix groups should be isomorphic. Is this correct? Can someone give me the isomorphism? I've tried for a while and can't seem to get it. - -REPLY [2 votes]: The isomorphism arises simply from factoring $\rm\ \ aa\ X + b\ \ =\ \ a\ (aX+b/a)\:,\ $ namely -$\rm\ \ \ (aa\ X + b)\ \: \circ\: \ (AA\ X+B)\quad =\quad a\ a\ (A\ A\ \ X\ \ +\ \ B)\ \ +\ \ b\quad\ \ \ =\quad\ \ \ aa\ \ AA\ X\ +\ aaB + b$ -$\rm\displaystyle\ a\bigg(aX+\frac{b}a\bigg)\circ \:A\:\bigg(AX+\frac{B}A\bigg)\ =\ aA\:\bigg(aX+\frac{b}{aA}\bigg)\circ\bigg(AX+\frac{B}A\bigg)\ =\ aA\bigg(aA\ X\ + \frac{aaB + b}{aA}\bigg)\ $<|endoftext|> -TITLE: Analytic method for determining if a function is one-to-one -QUESTION [9 upvotes]: In algebra, we learn that if a function $ f(x) $ has a one-to-one mapping, then we can find the inverse function $ f^{-1}(x) $. The method that I have seen taught is the "horizontal line test": if any horizontal line touches the graph of the function more than once, then it must not be one-to-one. This method is not exactly rigorous, since any function with a non-finite range cannot be completely viewed on a graph. -Is there an analytic method to determine if a function is one-to-one? Is this possible in elementary algebra or calculus? - -REPLY [26 votes]: First, let me answer your question; but please keep reading, because there's lots more to say. -Yes: there is an analytic way to see if a function is one-to-one. For this, you need the "analytic definition" of being one-to-one. The definition is: -$$f\text{ is one-to-one if and only if for all }a,b\text{ if }a\neq b\text{ then } f(a)\neq f(b).$$ -Logically, this is equivalent to: -$$f\text{ is one-to-one if and only if for all }a,b\text{ if }f(a)=f(b)\text{ then }a=b.$$ -So this provides a way to check if the function is one-to-one: if you can find $a\neq b$ such that $f(a)=f(b)$, then $f$ is not one-to-one; and this is essentially the best way of doing it: exhibit a pair of distinct numbers that map to the same thing.
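(If you like, a computer can hunt for such a witness pair over a sample of points; this is an illustrative aside of mine, not part of the analytic method: finding a pair settles non-injectivity, while finding none proves nothing.)

def find_collision(f, xs, tol=1e-12):
    # brute-force search for a, b with a != b but f(a) == f(b) (up to tol)
    seen = {}
    for a in xs:
        key = round(f(a) / tol)
        if key in seen and seen[key] != a:
            return seen[key], a
        seen[key] = a
    return None

xs = [k / 100 for k in range(-500, 501)]
print(find_collision(lambda t: t * t - 3 * t, xs))  # prints some pair (a, b)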
To prove that $\sin(x)$ is not one-to-one, all I need to do is say: "Look, $0$ and $\pi$ are different, but $\sin(0)=\sin(\pi)$." -To prove that a function is one-to-one, you can do it in either of two (equivalent) ways: show that if $a$ and $b$ are any numbers with the property that $f(a)=f(b)$, then it must be the case that $a=b$; or show that if $a\neq b$, then $f(a)$ must be different from $f(b)$. -For instance, to show that $f(x)=x^3$ is one to one, we can note that if $f(a)=f(b)$, then $a^3=b^3$, and taking cube roots we conclude that $a=\sqrt[3]{a^3} = \sqrt[3]{b^3} = b$. So if $f(a)=f(b)$, then $a=b$. QED. -Or we can argue that if $a\neq b$, then either $a\lt b$ or $a\gt b$. If $a\lt b$, then $a^3\lt b^3$ (you can prove this easily using the properties of real numbers and inequalities), so $f(a)\neq f(b)$. If $a\gt b$, then $f(a)=a^3\gt b^3=f(b)$, hence $f(a)\neq f(b)$. Either way, $a\neq b$ implies $f(a)\neq f(b)$, so $f$ is one-to-one. -This is the standard way of showing, analytically, that a function is one-to-one. How one establishes the implication will depend on the function. -That said, there are some things you may want to remember: -First: to specify a function, we usually need to specify at least two things: the domain of the function, and the value of the function at any point in the domain. -More often, we are interested in functions between two specific sets. In that case, we actually need to specify three things: the domain, the set in which the images will lie, and the value of the function at any point of the domain. -So, if we say that we have a function $f\colon X\to Y$ between two sets, then we mean that: - -For every $x\in X$, there is an element $y\in Y$ such that $f(x)=y$; and -For each $x\in X$, there is only one $y\in Y$ with $f(x)=y$ (unique value). - -So, we need every element of $X$ to have a unique image. Different elements of $X$ may have the same image, but a single element should not have multiple images. We call $X$ the domain of $f$, and we call $Y$ the codomain of $f$. -In calculus, however, we are usually a bit sloppy. We almost never mention either the domain or the codomain! Instead, we agree that we will either specify the domain explicitly, or else mean "the natural domain". And we almost never mention the codomain at all. -Now, if you have a function $f\colon X\to Y$, then we say that a function $g\colon Y\to X$ "going the other way" is "the inverse of $f$" if and only if two things happen: - -For every $x\in X$, $g(f(x)) = x$ ($g$ "un-does" what $f$ does); and -For every $y\in Y$, $f(g(y)) = y$ ($f$ "un-does" what $g$ does). - -Now, for this to actually work, we need two things: - -Given an element $y\in Y$, there can be at most one $x\in X$ such that $y=f(x)$. This is the "one-to-one" condition. If you think in terms of calculus and the graph, it is precisely the "horizontal line test": for each value of $y$ (each horizontal line), there is at most one point in the domain where $f$ takes value $y$ (it cuts the graph in at most one point). - -Reason: If you had $x\neq x'$ but with $y=f(x)=f(x')$, then you would need $g(y)=x$ so that $x=g(y)=g(f(x))$, but you would also need $g(y)=x'$ because $x'=g(y) = g(f(x'))$. But a function cannot have two different values at the same point, so this is an impossible situation for $g$. The only way to solve this quandary is for either $g$ not to exist, or for $x$ and $x'$ not to exist. - -Given any element $y\in Y$, there is at least one $x\in X$ such that $y=f(x)$.
This is the "onto" or "surjective" part people have mentioned. - -Reason: Since we also want $f(g(y))=y$ for every $y\in Y$, we need there to be an element of $X$, namely $g(y)$, that maps to $y$. - - -So: if $f\colon X\to Y$ has an inverse, then it must be both one-to-one, and onto the set $Y$. This is necessary. In fact, it is also sufficient, and one can show that if there is an inverse, then there is one and only one inverse, so we call it $f^{-1}$ instead of $g$. -Now, here's the thing: if your function satisfies the first condition (one-to-one), but not the second, then you can "cheat": instead of thinking of $f$ as a function from $X$ to $Y$, we let $Y'=\mathrm{Image}(f)$, and then look at the function $\mathfrak{f}\colon X\to Y'$, with $\mathfrak{f}(x)=f(x)$ for every $x$; the only thing we changed is what we want the set $Y$ to be. This is not really the same as $f$: $\mathfrak{f}$ one is both one-to-one and onto, so $\mathfrak{f}$ does have an inverse, even though $f$ does not. -For instance, if you think of the function $f\colon\mathbb{R}\to \mathbb{R}$ given by $f(x)=e^x$, then $f$ is one-to-one, but is not onto ($f(x)$ does not take negative values). So this function is not invertible. However, if we tweak the function and think of it instead as $\mathfrak{f}\colon\mathbb{R}\to (0,\infty)$, given by $\mathfrak{f}(x)=e^x$, then $\mathfrak{f}$ is onto, is also one-to-one, so $\mathfrak{f}$ is invertible. The inverse is a function $f^{-1}\colon (0,\infty)\to\mathbb{R}$, so the only valid inputs are positive numbers. (You may know who $f^{-1}$ is: it's the natural logarithm). -Now, notice that to check if a function has an inverse, you need to know both what $X$ and what $Y$ is. But in Calculus we almost never mention $Y$. So what can we do? -Well, we agree that we will take $Y$ to be "the image of $f$"; that is, the collection -$$\{ f(x) \mid x\text{ is in the domain of }f\}.$$ -That means that we always "automatically" assume our functions are "onto". We say this by saying that the function is "onto its image". -Given that agreement, in order to figure out if $f$ has an inverse, we just need to know if $f$ is one-to-one, which is why your calculus book says things like "a function has an inverse if and only if it is one to one, if and only if it passes the horizontal line test." They are referring exclusively to functions whose codomain is always taken to be the image.<|endoftext|> -TITLE: Recommended Reading on Regression Analysis? -QUESTION [6 upvotes]: For a university project, I am implementing an automated regression analysis tool. -However, I have very little background in statistics. -So what books / articles / material would you suggest I could use to brush up on this topic, based on your experiences? -Thanks - -REPLY [4 votes]: Thisted, Elements of Statistical -Computing -Rao, Linear Statistical -Inference and Its Applications -Chapter 3 and 4 of Hastie, Tibshirani and Friedman, Elements of Statistical Learning (downloadable)<|endoftext|> -TITLE: Can somebody explain the plate trick to me? -QUESTION [11 upvotes]: I learned of the plate trick via Wikipedia, which states that this is a demonstration of the fact that SU(2) double-covers SO(3). It also offers a link to an animation of the "belt trick" which is apparently equivalent to the plate trick. Since I've thought most about the belt version, I'll phrase my question in terms of the belt trick. -I am not clear on how the plate/belt trick relates to the double covering. 
Specifically, I am looking for a sort of translation of each step of the belt trick into the Lie group setting. For example, am I correct in interpreting the initial twisting of the belt as corresponding to the action of a point in SU(2)? Which point? Do I have the group right? - -REPLY [11 votes]: A 360 degree rotation of the arm results in a very sore arm unless you unwind it or wind it a second time. Think about the rotation of the hand as being a path in the space of all rotations of 3-space. Or if you really want to rotate something, tape a belt to a soccer ball and tape the other end of the belt to the wall. Just rotate the ball a full rotation; the belt gets a twist. Now rotate again; the belt has two full twists. The twists in the belt can be undone by looping the ball back through the belt as in the animation. -The diagram in the lower right of that animation shows the paths in the space $\operatorname{SO}(3)$, which is the $3$-ball with its antipodal points on its boundary identified. That is why the doubled path appears to be broken; it is passing through antipodal points. -You might play with a Möbius band and the pencil loop which maps twice around to gain more intuition. The point of thinking of the Möbius band is that it is embedded in the space of rotations. -It is really worth thinking this through while going to sleep or while riding the bus!<|endoftext|> -TITLE: When do the Freshman's dream product and quotient rules for differentiation hold? -QUESTION [17 upvotes]: This is motivated by looking at the calculus exams of some of my undergraduate students. A recurring mistake is assuming that the derivative of the product of functions is a product of derivatives and the derivative of the quotient of two functions is the quotient of their derivatives. -This might be an ill-formed question, but do there exist a pair of functions such that these rules hold? Explicitly, do there exist $f(x),g(x)$ satisfying all of: -\begin{align*} -f&\neq 0;\\ -\frac{d(fg)}{dx}&=\frac{df}{dx}\cdot \frac{dg}{dx}\\ -\text{and }\frac{d(f/g)}{dx}&=\frac{df}{dx}/ \frac{dg}{dx}\text{ ?} -\end{align*} -EDIT: Just to clarify my question, looking at the responses so far. I am looking for functions satisfying all three conditions above simultaneously. - -REPLY [21 votes]: Another answer to your question from a different angle is this: They hold in a different kind of calculus. -If you swap subtraction for division and division for taking roots in the usual definition of the derivative, you get -$$f^*(x) = \lim_{\Delta x \to 0} \left(\frac{f(x+\Delta x)}{f(x)}\right)^{\frac{1}{\Delta x}}.$$ -It's not too hard to prove that $(fg)^* = f^* g^*$ and $(f/g)^* = f^*/g^*$ under this definition. -The calculus that results from this definition of the derivative (and the corresponding definition of the integral) goes by various names, such as "multiplicative calculus," "non-Newtonian calculus," and "product calculus." Many of the standard results in the usual calculus (e.g., Fundamental Theorem, Mean Value Theorem, l'Hopital-type rules, Taylor's Theorem) can be redone for this multiplicative calculus. The idea goes back at least to Vito Volterra in the late 1800s. -According to the Wikipedia page, "Opinions differ as to the usefulness of the multiplicative calculi," but there are some applications. For instance, the product derivative measures the multiplicative rate of change of some function and so is useful when you're interested in looking at, say, growth rates of stock prices.
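A quick numerical check of the limit definition against the closed form $e^{f'/f}$ mentioned further down (a sketch of mine; the test function $f(x)=e^{x^2}$, for which $f^*(x)=e^{2x}$, is an arbitrary choice):

import math

f = lambda x: math.exp(x * x)   # f'(x)/f(x) = 2x, so f*(x) should be exp(2x)
x, h = 0.7, 1e-6
print((f(x + h) / f(x)) ** (1 / h), math.exp(2 * x))  # agree to ~6 digits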
The product integral has some corresponding applications; my favorite is that it can be used to calculate geometric means (in the same way that the usual integral can find arithmetic means). -Multiplicative calculus also has a close relationship to the usual calculus. For instance, the derivatives are related via -$$f^*(x) = \exp\left(\frac{d}{dx}\ln |f(x)|\right) = e^{f'(x)/f(x)}.$$ -The product integral and the usual integral have a similar relationship. In fact, I think part of the reason some folks don't find this multiplicative calculus all that interesting is that product derivatives and integrals can so easily be expressed in terms of the usual ones. The question, I suppose, is whether there is really anything new and different going on here. (Note also that this relationship I just gave means that the product derivative is just $\exp$ of the logarithmic derivative of $f$.) -Anyway, a few years ago I wrote a survey paper and had a student do a summer research project on this before I realized how much was already out there and before the Wikipedia page appeared. There are lots of other references and information on the Wikipedia page, so if you find this interesting you should check it out. The page on the product integral is also a good source. - -REPLY [16 votes]: I don't think there is a solution. -$f'g'=f'g+g'f$ implies $\frac{f'}{f}=\frac{g'}{g'-g}$ where the denominators are nonzero. -$\frac{f'}{g'}=\frac{f'g-g'f}{g^2}$ implies $\frac{f'}{f}=\frac{(g')^2}{g(g'-g)}$ where the denominators are nonzero. -If $f$ and $g$ are nonzero and $g'-g$ is nonzero, this implies that $\frac{g'}{g'-g}=\frac{(g')^2}{g(g'-g)}$, i.e. $gg'=(g')^2$; since $g'$ cannot vanish (otherwise $f'/g'$ is undefined), this implies $g'=g$. Thus $g=g'$ if $f$ and $g$ are nonzero, so $g(x)=ce^x$. But now from $f'g'=f'g+g'f$ we have $ce^xf(x)=0$, which implies $f=0$. - -For solutions to just the product rule, the equations above suggest taking $g$ such that $\frac{g'}{g'-g}$ is integrable, and -$$f(x)=\exp\left(\int_a^x\frac{g'(t)}{g'(t)-g(t)}dt\right),$$ which in particular works for $f(x)=e^{bx}$ and $g(x)=e^{cx}$ with $bc=b+c$ (as seen in a now deleted answer). Similarly, for just the quotient rule, one could try finding $g$ such that $\frac{(g')^2}{g(g'-g)}$ is integrable, and then take -$$f(x)=\exp\left(\int_a^x\frac{g'(t)^2}{g(t)(g'(t)-g(t))}dt\right).$$ -For example, $f(x)=e^{bx}$ and $g(x)=e^{cx}$ with $bc=b+c^2$. - -REPLY [6 votes]: The product rule equality gives $f'g+g'f = f'g'$, or $-fg' = f'g-f'g'$. From this, using the quotient rule equality we get -$$ -\frac{f'}{g'} = \frac{gf'-fg'}{g^2} = \frac{gf' +f'g - f'g'}{g^2} -= f'\left(\frac{2g-g'}{g^2}\right). -$$ -If $f$ is not constant, then $f'\neq 0$. So we get $\frac{1}{g'} = \frac{2g-g'}{g^2}$, or $g^2 - 2gg'+(g')^2 = 0$, or $(g-g')^2 = 0$; hence $g=g'$. -Therefore, from the product rule again we have $f'g'=f'g+fg'$, or $f'g = f'g+fg$, so $fg=0$. This requires $g=0$, which makes the quotient rule impossible. -If $f$ is constant, then the product rule equality requires $g$ constant, and then the quotient rule equality cannot hold.
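For what it's worth, the exponential examples above are easy to confirm with a computer algebra system (a quick sympy check of mine: $b=3$, $c=3/2$ satisfies $bc=b+c$, and $b=4$, $c=2$ satisfies $bc=b+c^2$):

import sympy as sp

x = sp.symbols('x')
# product rule only: bc = b + c with b = 3, c = 3/2
f, g = sp.exp(3 * x), sp.exp(sp.Rational(3, 2) * x)
print(sp.simplify(sp.diff(f * g, x) - sp.diff(f, x) * sp.diff(g, x)))  # 0

# quotient rule only: bc = b + c^2 with b = 4, c = 2
f, g = sp.exp(4 * x), sp.exp(2 * x)
print(sp.simplify(sp.diff(f / g, x) - sp.diff(f, x) / sp.diff(g, x)))  # 0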
<|endoftext|> -TITLE: showing a set is totally disconnected -QUESTION [6 upvotes]: So, we are considering the subset -$$ S = \{(x, y) \in \mathbb{R^2} | (x \text{ and } y \in \mathbb{Q}) \text{ or } (x \text{ and } y \notin \mathbb{Q})\} $$ -and consider its complement $$ T = \mathbb{R^2} \backslash S $$ -The set $T$ is disconnected; actually, I am fairly certain it is totally disconnected. I am just having problems showing that rigorously. I was trying to show it using straight lines but I don't think I was getting anywhere. I know that a totally disconnected set's only connected subsets are the one-point sets. I've been trying to show that given two arbitrary points, a separation exists between them. It is more difficult since this is in the plane. -Any hints at all would be a great help. Maybe I'm making mountains out of molehills. - -REPLY [5 votes]: I had written out a full solution, but since this is homework, I've removed it and replaced it with this suggestion. -One way to prove that $T$ is totally disconnected is to show that whenever $p$ and $q$ are distinct points of $T$, then there are open sets $A$ and $B$ of $\mathbb{R}^2$ such that $T\subseteq A\cup B$, $(A\cap B)\cap T=\emptyset$, $p\in A$, and $q\in B$. This will show that there is a disconnection of $T$ in which $p$ and $q$ are in distinct components. In particular, $p$ and $q$ cannot be in the same connected component of $T$. If this holds for all pairs of points $p$ and $q$, then the connected components of $T$ must be single points. -So, pick two distinct points $p$ and $q$ in $T$. Try to find a line that is completely contained in $S$ and which separates $p$ and $q$. One way to achieve a line completely contained in $S$ is to have it go through a rational point with nonzero rational slope. Then throw away the line to get your two sets $A$ and $B$.<|endoftext|> -TITLE: What's the name of this stochastic process? -QUESTION [7 upvotes]: I heard about it sometime somewhere and want to read about it now, but I can't recall what the name is: -Start with $a_1 = \ldots =a_n=1$. Choose a number between 1 and $n$, choosing $i$ with probability $a_i/(a_1+ \ldots + a_n)$. If $i_0$ is the number chosen, increase $a_{i_0}$ by 1; now choose another number in the same way, and so on indefinitely. - -REPLY [2 votes]: Thanks for the pointer, Shai Covo. The name of this specific process is preferential attachment.<|endoftext|> -TITLE: Finite abelian groups as class groups -QUESTION [20 upvotes]: Is it known whether every finite abelian group is isomorphic to the ideal class group of the ring of integers in some number field? If so, is it still true if we consider only imaginary quadratic fields? - -REPLY [17 votes]: The smallest abelian group which is not the class group of an imaginary quadratic field is $(\mathbf{Z}/3 \mathbf{Z})^3$. There are six other groups of order -$< 100$ which do not occur in this way, of orders -$32$, $27$, $64$, $64$, $81$, and $81$ respectively. -The groups $(\mathbf{Z}/3 \mathbf{Z})^2$ and $(\mathbf{Z}/2 \mathbf{Z})^4$ occur as class groups of imaginary quadratic fields exactly once, for $D = -4027$ and $-5460$ respectively. -These results follow from the "class number $100$" problem, solved by Mark Watkins. -If you restrict to the $p$-part of the class group, then the answer (for general number fields) is positive. That is, for any abelian $p$-group $A$, there exists a number field $K$ with class group $C$ such that $C \otimes \mathbf{Z}_p = A$. -There is even a non-abelian analog of this. Namely, for any finite $p$-group $G$, there exists a number field $K$ such that the maximal Galois $p$-extension $L/K$ unramified everywhere has Galois group $G$. This is a very recent result of Ozaki.<|endoftext|> -TITLE: Problem from Victor Prasolov's Polynomials -- Finding the number of real roots of $nx^{n}-x^{n-1}-\cdots -1$ -QUESTION [7 upvotes]: In Chapter 1 of Polynomials by Victor Prasolov, Springer, 2001, the following theorem is proved. (p.3) - -Theorem 1.1.4 (Ostrovsky).
Let -$f(x)=x^{n}-b_{1}x^{n-1}-\cdots -b_{n}$, -where all the numbers $b_{i}$ are non-negative and at least one of them -is nonzero. If the greatest common -divisor of the indices of the positive -coefficients $b_{i}$ is equal to -$1$, then $f$ has a unique positive -root $p$ and the absolute values of -the other roots are less than $p$. - -The following is one of the Problems to Chapter 1 (p.41). - -Problem 1.5 - Find the number of real -roots of the following polynomials -a) ... -b) $nx^{n}-x^{n-1}-\cdots -1$ - -Question: How to solve this problem? - -Added: $nx^{n}-x^{n-1}-\cdots -1=0$ $\Leftrightarrow x^{n}-\dfrac{1}{n}x^{n-1}-\cdots -\dfrac{1}{n}=0$ -Added 2: Sturm's Theorem. - -REPLY [15 votes]: I think there is some value here in knowing how to do such problems "by hand." The proof in this case is quite simple. If $|x| > 1$, then $|x^n| > |x^k|$ for $k < n$, hence -$$n |x|^n > |x|^{n-1} + |x|^{n-2} + ... + |1| \ge |x^{n-1} + x^{n-2} + ... + 1|$$ -by the triangle inequality, so this polynomial $f(x)$ has no roots of absolute value greater than $1$. It follows that any real roots lie in $[-1, 1]$. By inspection $x = 1$ is a root and $x = 0, -1$ are not, so any remaining roots lie in $(0, 1)$ or $(-1, 0)$. If $x \in (0, 1)$, then -$$x^{n-1} + x^{n-2} + ... + 1 > nx^n$$ -so there are no roots in $(0, 1)$. To find any remaining roots in $(-1, 0)$, let -$$g(x) = f(x) (x - 1) = nx^n(x - 1) - x^n + 1 = nx^{n+1} - (n+1) x^n + 1.$$ -Then $g'(x) = n(n+1) x^n - n(n+1) x^{n-1} = n(n+1) x^{n-1}(x - 1)$ has roots $x = 0, 1$, hence $g$ is monotonic on $(-1, 0)$, so to determine if there are roots on this interval it suffices to compute $g(-1)$ and $g(0)$. We have $g(0) = 1$ and $g(-1) = -2n$ if $n$ is even and $g(-1) = 2n+2$ if $n$ is odd. In the first case there is one real root in $(-1, 0)$ by the IVT and in the second case there are none.<|endoftext|> -TITLE: Integral with Tanh: $\int_{0}^{b} \tanh(x)/x \mathrm{d} x$. What would the solution be when 'b' does not tend to infinity but is large? -QUESTION [9 upvotes]: Here are two integrals that got my attention because I really don't know how to solve them. They are a solution to the CDW equation below critical temperature of a 1D strongly correlated electron-phonon system. The second one is used in the theory of superconductivity, while the first is a more complex variation in lower dimensions. I know the result for the second one, but without the whole calculation, it is meaningless. -$$ \int_0^b \frac{\tanh(c(x^2-b^2))}{x-b}\mathrm{d}x $$ -$$ \int_0^b \frac{\tanh(x)}{x}\mathrm{d}x \approx \ln\frac{4e^\gamma b}{\pi} \quad\text{as } b \to \infty$$ -where $\gamma = 0.57721...$ is Euler's constant - -REPLY [12 votes]: The constant $C$ given in Aryabhata's answer, as suspected, is exactly -$$\gamma + \log \frac{4}{\pi},$$ -which, together with Aryabhata's answer, nicely rounds off the second part of this question.
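A numerical sanity check before the derivation (a sketch of mine using scipy; nothing here is part of the original argument): $\int_0^b \frac{\tanh x}{x}\mathrm{d}x-\ln b$ should approach $\gamma+\log\frac{4}{\pi}\approx 0.81878$.

import numpy as np
from scipy.integrate import quad

target = np.euler_gamma + np.log(4 / np.pi)   # ~ 0.8187801
f = lambda t: np.tanh(t) / t if t != 0.0 else 1.0
for b in (10.0, 100.0, 1000.0):
    val, _ = quad(f, 0, b, limit=200)
    print(b, val - np.log(b), target)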
Since $ \text{sech} x = 2(e^{-x} - e^{-3x} + e^{-5x} + \cdots ) \qquad (1)$ we have -$$\int_0^1 \frac{\tanh x}{x}\mathrm dx = -2\int_0^1 \frac{\sinh x}{x}(e^{-x} - e^{-3x} + e^{-5x} + \cdots )\mathrm dx$$ -Now -$$2\int_0^1 \frac{\sinh x}{x} e^{-x}\mathrm dx = - \mathrm{Ei}(-2) + \gamma + \log 2$$ -$$2\int_0^1 \frac{\sinh x}{x} e^{-3x}\mathrm dx = - \mathrm{Ei}(-4) + \mathrm{Ei}(-2) + \log 2$$ -$$2\int_0^1 \frac{\sinh x}{x} e^{-5x}\mathrm dx = - \mathrm{Ei}(-6) + \mathrm{Ei}(-4) + \log (3/2)$$ -$$2\int_0^1 \frac{\sinh x}{x} e^{-7x}\mathrm dx = - \mathrm{Ei}(-8) + \mathrm{Ei}(-6) + \log (4/3)$$ -and so on, where $\mathrm{Ei}(x)$ is the exponential integral. -Thus, interchanging the order of summation, summing and using Wallis's product -we obtain -$$\int_0^1 \frac{\tanh x}{x}\mathrm dx = \gamma + \log \frac{4}{\pi} --2\mathrm{Ei}(-2)+2\mathrm{Ei}(-4)-2\mathrm{Ei}(-6) + \cdots. \qquad (2)$$ -Using $(1)$ for $\mathrm{sech} x$ we also have -$$\int_0^1 \frac{2}{x(e^{2/x}+1) }\mathrm dx = 2 \int_1^\infty \frac{1}{x(e^{2x}+1)}\mathrm dx$$ -$$= \int_1^\infty \frac{\text{sech} x}{x} e^{-x}\mathrm dx -= 2 \int_1^\infty \frac{e^{-2x}}{x} - \frac{e^{-4x}}{x} + \frac{e^{-6x}}{x} - \cdots\mathrm dx $$ -$$= -2\mathrm{Ei}(-2)+2\mathrm{Ei}(-4)-2\mathrm{Ei}(-6) + \cdots.$$ -And so the result follows from $(2).$<|endoftext|> -TITLE: Ring with finitely many prime ideals with an extra condition. Are they maximal? -QUESTION [5 upvotes]: Let $A$ be a commutative ring with identity. If $A$ has a finite number of prime ideals $p_1,\ldots,p_n$ and moreover $\prod_{i=1}^n p_i^{k_i} = 0$ for some $k_i$, are the prime ideals necessarily maximal? - -REPLY [4 votes]: No, but the counterexample is trivial. Take any integral domain with finitely many prime ideals which is not a field. For example, the localization $\mathbb{Z}_{(p)}$ of the integers at a prime $p$. The zero ideal is non-maximal and prime so, trivially, $\prod_{i=1}^np_i=0$. Maybe this isn't exactly what you were meaning to ask?<|endoftext|> -TITLE: Analogue of rank for finite abelian groups? -QUESTION [7 upvotes]: The rank of a finitely-generated abelian group is the size of the maximal $\mathbb{Z}$-linearly-independent subset; loosely this means the number of distinct copies of $\mathbb{Z}$ contained as direct summands. I have come across a situation where a notion somewhat like rank may be useful for finite abelian groups (which obviously all have rank zero). I'm looking at graphs embedded on surfaces and I have a procedure for associating certain finite abelian groups to a graph embedding. I've found that the groups I'm getting this way are all quotients of $\mathbb{Z}^{2g}$, where $g$ is the genus of the surface. So for example, I can realize $(\mathbb{Z}/2\mathbb{Z})^2$ as a group coming from a graph embedded on a torus, but I can't get $(\mathbb{Z}/2\mathbb{Z})^3$. -Here's my question: for a finite abelian group $G$, define the "finite rank" of $G$ to be the minimal rank of a free abelian group that surjects onto $G$. Is this a common notion? Although there is the possibility this could be an argumentative question, is it a useful notion (or conversely, can you make the case for it being useless)? - -REPLY [8 votes]: Unless I am mistaken, the "finite rank" is just the minimal size of a generating set for $G$. Indeed, if $X$ is a generating set for $G$, then the free abelian group on $X$ surjects onto $G$ by the map induced by the embedding of $X$ into $G$, so the "finite rank" is at most the smallest size of a generating set. And if a free abelian group $F$ of rank $k$ surjects onto $G$, then the image of a free generating set for $F$ maps to a generating set for $G$ (possibly not injectively), so $G$ has a generating set with at most $k$ elements. -This number, in an arbitrary group, is usually written $d(G)$; I don't know of any special name for it, and none of my books seems to have any name attached to it. In $p$-groups, it is determined by the index of the Frattini subgroup: $[G:\Phi(G)] = p^{d(G)}$.<|endoftext|> -TITLE: Can I draw a net for a Steinmetz solid with a compass? -QUESTION [5 upvotes]: Can I draw a net for a Steinmetz solid from two cylinders with a compass? -That is, can we flatten the net?
I often make a model using paper and a compass-- it looks about right... is it really a valid method of construction? - -REPLY [2 votes]: Imagine keeping one of the cylinders intact, but merely marking on it the curves that would have been the edges of the Steinmetz solid. If you unfold the cylinder, the curves must be periodic in the azimuthal direction, so they cannot be circles. -If you solve $(x,y,z) = (\cos \theta, \sin \theta, z) = (x, \cos \phi, \sin \phi)$, you'll find that the curves are actually sinusoids, $z = \pm \cos \theta$.<|endoftext|> -TITLE: Please explain inequality $|x^{p}-y^{p}| \leq |x-y|^p$ -QUESTION [17 upvotes]: Suppose $x \geq 0$, $y \geq 0$ and $0 < p \leq 1$.<|endoftext|> -TITLE: Probability that a vector in $\mathbb{Z}^n$ is primitive -QUESTION [8 upvotes]: A vector $v \in \mathbb{Z}^n$ is primitive if there does not exist some vector $v' \in \mathbb{Z}^n$ and some $k \in \mathbb{Z}$ such that $v = k v'$ and $k \geq 2$. -For a paper I'm writing right now, I'd like to know that a "random" vector in $\mathbb{Z}^n$ is primitive. Let me make this precise. -Let $\|\cdot\|_{1}$ be the $L^{1}$ norm on $\mathbb{Z}^n$, so $\|v\|_1 = \sum_{i=1}^n |v_i|$, where the $v_i$ are the components of $v$. Define $\mathcal{V}_k$ to be the number of vectors $v$ in $\mathbb{Z}^n$ such that $\|v\|_1 \leq k$. Define $\mathcal{P}_k$ to be the number of primitive vectors $v$ in $\mathbb{Z}^n$ such that $\|v\|_1 \leq k$. -I then want -$$\lim_{k \rightarrow \infty} \frac{\mathcal{P}_k}{\mathcal{V}_k} = 1.$$ -Assuming this is true, is there any nice estimate as to how fast it approaches $1$? - -REPLY [13 votes]: In the case where $n = 2$, you're asking for the "probability" that two integers are relatively prime; this is well-known to be $6/\pi^2$, not 1. In the general-$n$ case, the probability that $n$ integers are relatively prime is $1/\zeta(n)$. - -Reference: http://en.wikipedia.org/wiki/Coprime - -REPLY [10 votes]: Further to Michael's answer, not only does $\mathcal{P}_k/\mathcal{V}_k\to1/\zeta(n)$, but we can calculate a bound for the rate of convergence. I'll also give an argument which is a little different from the one given in his Wikipedia reference. -Noting that the set $\lbrace v\in\mathbb{R}^n\colon\Vert v\Vert_1\le k\rbrace$ has volume $ck^n$ (for a constant c depending only on the dimension n) and surface area proportional to $k^{n-1}$ gives -$$ -\mathcal{V}_{k}-1=ck^n + O(k^{n-1}).\qquad\qquad{\rm(1)} -$$ -The '-1' on the left hand side is not relevant for large k as it can be absorbed into the $O(k^{n-1})$ error term, and is just there so that (1) is also valid for small k < 1. -Noting that every nonzero $v\in\mathbb{Z}^n$ decomposes uniquely as $v=mv^\prime$ for integer $m\ge1$ and primitive $v^\prime\in\mathbb{Z}^n$ leads to the following relation between $\mathcal{P}_k$ and $\mathcal{V}_k$, -$$ -\mathcal{V}_k-1=\sum_{m=1}^\infty\mathcal{P}_{\frac{k}{m}}. -$$ -This can be inverted via the Möbius function $\mu$, -$$ -\mathcal{P}_k=\sum_{m=1}^\infty\mu(m)(\mathcal{V}_{\frac{k}{m}}-1). -$$ -In dimension $n > 2$, substituting (1) into this expression gives -$$ -\mathcal{P}_k=\sum_{m=1}^\infty \mu(m)c k^n m^{-n} + O(k^{n-1}).\qquad\qquad{(2)} -$$ -The $O(k^{n-1})$ comes from the sum $\sum_m (k/m)^{n-1}$ from the remainder term of (1) which, for $n > 2$, gives $k^{n-1}$ multiplied by a convergent sum. -Dividing through by $\mathcal{V}_k$, -$$ -\mathcal{P}_k/\mathcal{V}_k=\sum_{m=1}^\infty\mu(m)m^{-n}+O(1/k)=1/\zeta(n)+O(1/k).
-$$ -Edit: The case for $n=2$ is actually a little bit different, and we do not obtain such a good convergence rate. As the sum $\sum_m(k/m)^{n-1}$ does not converge, the error term in (2) does not apply. Instead, we can use $O(1_{\lbrace k\ge1\rbrace}k+1_{\lbrace k < 1\rbrace}k^2)$ for the error term in (1). This leads to an error of order $k\sum_{m\le k}m^{-1}+k^2\sum_{m > k}m^{-2}\sim k\log k$ in (2), giving -$$ -\mathcal{P}_k/\mathcal{V}_k=1/\zeta(2)+O(\log k/k). -$$ -You can also look at the paper On the probability that k positive integers are relatively prime.<|endoftext|> -TITLE: Probability for twelve dice -QUESTION [7 upvotes]: In 'An Introduction to Probability Theory and Applications' by W. Feller I encountered this apparently innocuous problem. - -A throw of twelve dice can result in - $6^{12}$ different outcomes, to all of - which we attribute equal - probabilities. The event that each - face appears twice can occur in as - many ways as twelve dice can be - arranged in six groups of two each. - Hence the probability of the event is - $\displaystyle \frac{12!}{2^{6}6^{12}}=0.003438$. - -The reasoning is that by doing that you're grouping two 1's, two 2's, ..., two 6's (each group using one partition in the multinomial) and the result is the number of different partitions that can be found with that particular characteristic. However, I had doubts about that answer. To understand better that problem, I did a simpler example with $4$ dice instead of $12$ (in this case, the event is the number of ways in which two faces appear twice). -Using the same result I get as the probability $\displaystyle \frac{4!}{2^{2}6^{4}}=0.46$. Then, to see if that's true I ran a little simulation of this case in Mathematica: -dice = Table[Table[Random[Integer, {1, 6}], {i, 1, 4}], -{j, 1, 1000000}]; i=0; -(* tally the throws in which two distinct faces each appear twice *) -f[{a_, b_, c_, d_}] := Which[a === b && c === d && a != c, -i++, a === c && b === d && a != b, -i++, a === d && b === c && a != b, i++;]; -Map[f, dice, {1}]; - -After $1000000$ steps I got $69687$ cases in which two faces appear twice. This is equivalent to a probability of $0.069687$. Far smaller than what I expected based on the calculation above. -Since this latter example is much more manageable than the one with twelve dice, I did the following. -With four dice we have the following pairings: - -Partition $r_{1}$ contains the first and second dice and partition $r_{2}$ contains the third and fourth dice. -Partition $r_{1}$ contains the second and fourth dice and partition $r_{2}$ contains the first and third dice. -Partition $r_{1}$ contains the first and fourth dice and partition $r_{2}$ contains the second and third dice. -Partition $r_{2}$ contains the first and second dice and partition $r_{1}$ contains the third and fourth dice. -Partition $r_{2}$ contains the second and fourth dice and partition $r_{1}$ contains the first and third dice. -Partition $r_{2}$ contains the first and fourth dice and partition $r_{1}$ contains the second and third dice. - -For each case, we can have $30$ outcomes in which two faces appear two times. For example, for the first case we have $1122, 1133, 1144, 1155, 1166, 2211, 2233, 2244, 2255, 2266, \ldots, 6611, 6622, 6633, 6644, 6655$. However, such outcomes are repeated twice (in particular, the outcomes of case 1 repeat the outcomes of case 4, etc). Therefore, the number of different outcomes which produce two faces appearing two times is $\frac{6}{2}\cdot 6\cdot 5=90$.
Since there are $6^{4}$ outcomes, we have a probability of $\frac{90}{6^{4}}=0.0694444$, which is the result that the simulation in Mathematica produces. -Is the first reasoning wrong? If so, is there a general approach to using the multinomial coefficient to solve this kind of problem? For instance, it appears that this only happens for $r_{1}=r_{2}=\cdots=r_{k}$. Otherwise, there are no repeated outcomes. - -REPLY [12 votes]: There are two things going on here. First, $\displaystyle \frac{4!}{2^{2}6^{4}}$ is about $0.0046$, not $0.46$. -Second, the problem described by Feller and your simplification to four dice are not the same problem. Feller's problem requires that each face appear twice. Your four-dice problem requires that some two faces appear twice. There's only one way that Feller's event can be satisfied; namely, that each of 1, 2, 3, 4, 5, and 6 appears exactly twice. There are many more ways that some two faces could appear twice. For instance, you could have two 1's and two 2's, two 1's and two 3's, etc. In fact, there are $\binom{6}{2} = 15$ different ways to choose the two faces that will appear. -And that solves your problem. Taking the correct result from Feller, $\displaystyle \frac{4!}{2^{2}6^{4}} \approx 0.0046$, and multiplying it by the $15$ different ways to choose the faces gives $\displaystyle \frac{90}{6^4} \approx 0.0694444$. - -With respect to your question about the multinomial coefficient, $\displaystyle \binom{n}{k_1, k_2, \ldots, k_m}$ calculates the number of ways to partition a set of $n$ items into $m$ groups with $k_1$ items in the first group, $k_2$ items in the second group, and so forth. So in Feller's problem $\displaystyle \binom{12}{2, 2, 2, 2, 2, 2}$ is calculating the number of ways to put two dice in the 1's group, two dice in the 2's group, and so forth. -In the four-dice problem you describe, you haven't specified a particular partition of dice into groups; there are several partitions that will satisfy "two faces appearing twice." So, in the four-dice problem, $\displaystyle \binom{4}{2, 2}$ counts the number of ways to put two dice in a 1's group and two dice in a 2's group. It also counts the number of ways to put two dice in a 1's group and two dice in a 3's group, and it counts the number of ways to put two dice in a 1's group and two dice in a 4's group, and so forth. So to calculate the total number of ways of obtaining "two faces appearing twice" you need to multiply $\displaystyle \binom{4}{2, 2}$ by the number of ways to pick two of the faces out of six possible faces, which is $\binom{6}{2}$. Then you multiply by $\frac{1}{6^4}$ to get the probability you want. -If you want a general answer, the probability of throwing $n$ $d$-sided dice and having exactly $m$ faces appear $\frac{n}{m}$ times would be -$$\binom{d}{m} \frac{n!}{((\frac{n}{m})!)^m d^n} .$$ - -REPLY [3 votes]: The case of four dice is not so hard. You require XXYY in some order, where X != Y. So let the first throw be X. Then there are three possibilities: XXYY, XYXY, and XYYX. Each has probability 1/6 x 5/6 x 1/6 = 5/216. So we get 5/72, which is about 0.0694. This agrees very well with your simulation! -The question is, where did you go wrong in your calculation? You calculated the probability of a specific XXYY -- say, two ones and two fives. Now you have to multiply by the number of such (X,Y) pairs, which is 15. And 15 x 4!
/ (2^2 x 6^4) is 5/72.<|endoftext|> -TITLE: Prime factors of $n^2+1$ -QUESTION [7 upvotes]: I know it is unknown if there are infinitely many primes of the form $n^2+1$. Is it known if there is a positive integer $k$ such that $|\{n\in\mathbb{Z}:n^2+1 \text{ has at most k prime factors}\}|=\infty$? - -REPLY [18 votes]: Yes, Iwaniec, "Almost-primes represented by quadratic polynomials", Inventiones Math., 47:171–188, 1978, proves that there exist infinitely many -integers $n$ such that $n^2 + 1$ is either prime or the product of two -primes. - -REPLY [4 votes]: Please see Iwaniec, "Almost-primes represented by quadratic polynomials", Inventiones Math., 47:171–188, 1978.<|endoftext|> -TITLE: Convergence of integrals in $L^p$ -QUESTION [16 upvotes]: Stuck with this problem from Zygmund's book. -Suppose that $f_{n} \rightarrow f$ almost everywhere and that $f_{n}, f \in L^{p}$ where $1 \leq p < \infty$.<|endoftext|> -TITLE: Cardinality of a set that consists of all existing cardinalities -QUESTION [6 upvotes]: What is the easiest way to prove (if possible, without using ordinals etc. as my current math understanding of set theory counts only cardinals, and countable & uncountable sets) that the number of cardinalities that exist is not countable (that is, can't be put into bijection with $\mathbb{N}$)? -What exactly does it mean that the set of all cardinals is so big that it's not even a set, but a class? Where does the contradiction that does not allow it to be a set arise? I have read Pete Clark's notes, but am not quite sure how #20 leads up to that conclusion. - -I have taken a look at the following topics: - -number of infinite sets with different cardinalities -Cardinality of all cardinalities -Are there uncountably infinite orders of infinity? -Types of infinity - -But still can't quite find/understand the answer. - -REPLY [10 votes]: There is no "number of cardinalities". As you say, there are so many that they cannot form a set. -Suppose that ${\mathcal A}$ is a set whose elements are sets, with the property that if $A,B\in{\mathcal A}$ and $A\ne B$, then $|A|\ne|B|$, i.e., $A,B$ have different cardinalities. Let $C=\bigcup{\mathcal A}$, i.e., $C=\bigcup_{A\in{\mathcal A}}A$. Clearly, $|C|\ge|A|$ for each $A\in{\mathcal A}$. Let $D={\mathcal P}(C)$ be the power set of $C$. Then $|D|>|C|$ so $|D|>|A|$ for any $A\in{\mathcal A}$. This proves that there cannot be a set of all cardinalities, because given any such set, we just found a new cardinality different from all the ones in the set. -Of course, if ${\mathcal A}$ is countable, this shows that the ''number'' of cardinalities is not countable. - -There is a small remark that may be worth making. The argument works just as well if we do not require that all sets in ${\mathcal A}$ have different cardinalities, but simply that for any prescribed cardinality we want to consider, there is at least one set in ${\mathcal A}$ of that size (but there may be more than one). This is slightly more general, but there is also a technical advantage, namely, in this form, the argument does not depend on any version of the axiom of choice. -(Finally: I just checked Pete's nice note that is linked to in the body of the question. His fact 20 there is essentially the argument I've shown here.)<|endoftext|> -TITLE: Ulam spiral: Is there an "unusual amount of clumping" in prime-rich quadratic polynomials? -QUESTION [15 upvotes]: I was reading Martin Gardner's Mathematical Games column on the Ulam spiral which appeared in the March 1964 issue of Scientific American.
(The spiral actually featured on the cover of that issue.) Gardner makes the following statement: - -The grid on the cover suggests that throughout the entire number series expressions of this form are likely to vary markedly from those "poor" in primes to those that are "rich," and that on the rich lines an unusual amount of clumping occurs. - -By "this form" Gardner means the form $4x^2+bx+c$. I'm curious - and a little bit skeptical - about his last statement concerning clumping. I know that the existence of prime-rich and prime-poor polynomials is a longstanding conjecture, going back to Euler's discovery that polynomials such as $x^2-x+41$ generate unusually many primes, and that Hardy and Littlewood and also Bateman and Horn made concrete proposals as to what the density of primes in such polynomials ought to be. -My question is whether there is any evidence, either numerical or heuristic, that there should be a large amount of clumping in the primes of the form $x^2-x+41$. Famously, the first 40 values of $x$ all give primes, but if one goes to higher values of $x$ are there more long clusters of primes than one would expect if the primes were randomly distributed? -Rephrasing the question: I am aware of the conjecture that $x^2-x+41$ has more primes than other, similar lines. The question is whether there is a conjecture saying that $x^2-x+41$ has more dense clusters of primes than expected. - -REPLY [3 votes]: Clumps of primes from such quadratic equations should behave similarly to variations in the gaps between all primes. In general, quadratics that are richer in primes are more likely to produce such clusters. There are other quadratics that generate streaks of consecutive outputs that are prime, such as: -$2n^2 - 272431$: Prime for $n = 371$ to $393$ (23 in a row) -$2n^2 + 144251$: Two streaks, $n = 34$ to $50$ (17) and $n = 583$ to $602$ (20). -$n(n+1) - 1776433$: Prime for $n = 1424$ to $1443$ (20); my JVM has calculated -the prime density of this one at about 8.32 (versus Euler's ~6.64). Its smallest -dividing primes are 41, 59, 97, and 101. It's nowhere near the records for the richest functions, though.<|endoftext|> -TITLE: Lights out game on hexagonal grid -QUESTION [13 upvotes]: I greatly enjoyed the Lights Out game described here (I am sorry I had to link to an older page because some wikidiot keeps deleting most of the page). -Its mathematical analysis is here (it's just linear algebra). -Now I just discovered a hexagonal version: - -http://cwynn.com/games/lightsoutmobile.htm - (and hopefully soon) https://sites.google.com/site/beyondlightsout/ - -Are there any mathematical results for this version of the game, before I dig in and start the analysis myself? I have the major references, including - -Turning Lights Out with Linear Algebra, by M. Anderson and T. Feil (1998). Mathematics Magazine, vol. 71, no. 4, October 1998, pp. 300-303. It's here (thanks to J.M.). - -Update: I've been playing with the iPhone T-Lights hexagonal versions. I can get most of them, except for the ones where the "template" is a Y shape. Any ideas? - -REPLY [3 votes]: I tried (and tried again) to improve the Wikipedia article but the edits were promptly reverted as if well-known solutions of Lights Out are "original research", despite providing several citations. -If any Stack Exchange members are Wikipedians who can assist in restoring the article, it would be greatly appreciated. Also, I would direct this request more appropriately if I could, but don't see a way to contact John Smith directly.
Feel free to delete it.<|endoftext|> -TITLE: Is $L^2(\mathbb{R})$ with convolution a Banach Algebra? -QUESTION [11 upvotes]: Is $L^2(\mathbb{R})$ a Banach algebra, with convolution? -I am pretty sure the answer is no, because I think that -$f,g \in L^2(\mathbb{R})$ does not imply that $f*g \in L^2(\mathbb{R})$. However, I can't seem to find a counterexample, or a proof that this is not the case, using Fourier Transform for example. -Can someone give a hint? -Thank you - -REPLY [5 votes]: As seen in the other answers $L^2(\mathbb{R})$ is not a Banach algebra. Instead we may consider a weighted $L^2$-space; let $w$ be a positive continuous function defined on $\mathbb{R}$ and $L^2_w= L^2_w(\mathbb{R})$ be the space of all measurable functions $f$ such that -$fw\in L^2(\mathbb{R})$ - the norm is defined by -$$\|f\|_{L^2_w}=\|fw\|_{L^2}.$$ -It is an exercise to show that $L^2_w$ is a Banach space. If we define convolution as in the non-weighted space, that is -$$f*g(x)=\int_\mathbb{R} f(y) g(x-y)dy\qquad\text{(note no weight here)} $$ -we will get a Banach algebra if $w^{-2}*w^{-2}\le w^{-2}$. Unfortunately, I do not know if this condition can be improved. See this thread for more details, and a proof of the last statement: When is the weighted space $\ell^p(\mathbb{Z},\omega)$ a Banach algebra ($p>1$)?<|endoftext|> -TITLE: One dimensional quotients of formal power series rings -QUESTION [7 upvotes]: Suppose $R=k[[x_1,...,x_n]]$ is a formal power series ring over the field $k$; what can we say about the structure of $R/p$ if $p$ is a prime ideal of $R$ such that dim$(R/p)=1$? In particular, are these subrings of power series rings in one variable? -Motivation: Given a subring of a power series ring in one variable which can be written as $k[[x^{a_1},...,x^{a_n}]]$ where $a_1,...,a_n$ are distinct positive integers, we have a homomorphism $k[[x_1,...,x_n]]\to k[[x^{a_1},...,x^{a_n}]]$ which sends $x_i\to x^{a_i}$. The kernel of this homomorphism must be a prime ideal since $k[[x^{a_1},...,x^{a_n}]]$ is a domain. Moreover, the quotient modulo the kernel must be one dimensional since the image ring is one dimensional. I was wondering if we have a (partial) converse to this result. (I do realize the second paragraph would analogously work with polynomial rings, but it seems power series rings are much nicer (they are regular and local), so I was hoping for a nicer description of the quotient in the power series case). - -REPLY [5 votes]: The integral closure of a complete Noetherian local domain is also complete, local and Noetherian. So the integral closure of $R/p$ is a complete regular local ring containing a field (normal rings in dimension $1$ are regular by Serre's criterion), and you win! -EDIT: add more details, by Tymothy's comments (now deleted): So let $S$ be the integral closure of $R/p$ in its quotient field. Then by definition, $S$ is integrally closed (normal), thus regular because $\dim S=1$. Clearly $R/p$ is a subring of $S$. By the Cohen structure theorem, $S$ is a power series ring in one variable. -The fact about integral closure in my first sentence is well-known. Check out Theorem 4.3.4 of the book available online by Huneke-Swanson, or Eisenbud Chapter 13, or Matsumura somewhere (probably the finiteness of integral closure is true for excellent rings as well).<|endoftext|> -TITLE: Does every Latin square contain a diagonal in which no symbol appears thrice?
-QUESTION [7 upvotes]: A diagonal of a Latin square is a selection of $n$ entries in which no two entries occur in the same row or column. For example: the entries marked with an asterisk below form a diagonal. -1 2* 3 4 -2 3 4 1* -3 4 1* 2 -4* 1 2 3 - -Theorem: Every Latin square contains a diagonal in which no symbol appears thrice (or more). -The asterisked diagonal in the above example is a diagonal in which no symbol appears thrice. - -Problem: Prove the above theorem. - -This is quite a fun problem to solve, but there is a trap. - -REPLY [2 votes]: You can do better: Cameron and Wanless showed that every Latin square -possesses a diagonal in which no symbol appears more than twice. - -We also show that every Latin square contains a set of entries which meets each row and column exactly once while using no symbol more than twice. - -For the paper, see Covering radius for sets of permutations<|endoftext|> -TITLE: A different proof of the insolubility of the quintic? -QUESTION [12 upvotes]: I'm familiar with the "standard" proof using Galois theory that there is no general formula for solving an equation of fifth (or higher) degree using radicals (i.e. arithmetic and root-taking). However, now I'm wondering if other proofs of a different nature were found (in particular ones relying on analysis rather than algebra). -What sparked my interest was seeing a description of the solution of the 2nd, 3rd and 4th degree equations via something that looked like discrete Fourier transform. - -REPLY [12 votes]: This is the content of the Abel-Ruffini theorem (whose proof predates Galois') -http://en.wikipedia.org/wiki/Abel%E2%80%93Ruffini_theorem -A topological proof may be found in this paper. -https://www.tmna.ncu.pl/static/files/v16n2-02.pdf<|endoftext|> -TITLE: Is the Serre spectral sequence a special case of the Leray spectral sequence? -QUESTION [14 upvotes]: Let $F \to E \to B$ be a fibration with $B$ simply connected (more generally, such that $\pi_1(B)$ acts trivially on the homology of $F$). Then there is a Serre spectral sequence $H_p(B, H_q(F)) \to H_{p+q}(E)$. One can do the same for singular cohomology. However, for reasonable spaces (specifically, locally contractible spaces, e.g. CW complexes), singular cohomology is the same as sheaf cohomology of the constant sheaf $\mathbb{Z}$. -But there is another spectral sequence for sheaf cohomology: the Leray spectral sequence. Given spaces $X, Y$ and $f: X \to Y$, and a sheaf $\mathcal{F}$ on $X$, there is a spectral sequence $H^p(Y, R^q f_*(\mathcal{F})) \to H^{p+q}(X, \mathcal{F})$. -The Wikipedia article hints that the topological implications of this include in particular the Serre spectral sequence. I would be interested in this, because I like the machinery of the Grothendieck spectral sequence (from which the Leray spectral sequence easily follows), and would be curious if the Serre spectral sequence could be obtained as a corollary. -Is this possible? - -REPLY [8 votes]: Yes. In fact, the result is basically obvious if you use Čech cohomology on the base. -Serre really had two key insights. First, sheaf cohomology is a pain to compute, but if there is no fundamental group then for fiber bundles the Leray spectral sequence is really just using normal old-fashioned untwisted cohomology. Second, you don't really need to work with fiber bundles -- all you need are Serre fibrations, and those are easy to construct.
In particular, you have the standard Serre fibration $\Omega X \rightarrow PX \rightarrow X$, where $\Omega X$ is the loop space of $X$ and $PX$ is the space of paths starting at the basepoint of $X$ and the map $PX \rightarrow X$ is "evaluation at the endpoint". Clearly $PX$ is contractible! An amazing amount of mileage can be had from this silly observation! -Serre also really developed many of the key algebraic tricks one needs to work with spectral sequences. For instance, he had the amazing idea that one can work modulo "Serre classes", and thus ignore things like torsion. It's like pretending to localize spaces long before Sullivan and Quillen realized you could do so for real!<|endoftext|> -TITLE: What are good books/other readings for elementary set theory? -QUESTION [54 upvotes]: I am looking to expand my knowledge on set theory (which is pretty poor right now -- basic understanding of sets, power sets, and different (infinite) cardinalities). Are there any books that come to your mind that would be useful for an undergrad math student who hasn't taken a set theory course yet? -Thanks a lot for your suggestions! - -REPLY [31 votes]: I am going to go out on a limb and recommend a more elementary book than (I think) any of the ones others have mentioned. -I claim that as a pure mathematician who is not a set theorist, all the set theory I have ever needed to know I learned from Irving Kaplansky's Set Theory and Metric Spaces. (And, you know, I also enjoyed the part about metric spaces). Kaplansky spent most of his career at the University of Chicago. Although he had left for MSRI by the time I got there in the mid 1990's, nevertheless his text was still used for the one undergraduate set theory course they offered there. (Not that I actually took that course, but I digress...) -In fact I think that if you work through this book carefully -- it's beautifully written and reads easily, but is not always as innocuous as it appears -- you will actually come out with more set theory than the average pure mathematician knows. -Apologies if you actually do need or want to know some more serious stuff: there's nothing about, say, cofinalities in there, let alone forcing and whatever else comes later on. But maybe this answer will be appropriate for someone else, if not for you. - -Added: I suppose I might as well mention my own lecture notes, available online here (scroll down to Set Theory). I think it is fair to say that these are a digest version of Kaplansky's book, even though they were for the most part not written with that book in hand. [However, last week David Speyer emailed me to kindly point out that I had completely screwed up (not his words!) one of the proofs. He also suggested the correct fix, but I didn't feel sanguine about it until I went back to Kaplansky to see how he did it.] -The description All the set theory I have ever needed to know on the main page is not meant to be offensive to set theorists (and I hope it isn't) but rather an honest admission: here is the little bit of material that goes a very long way indeed. Note especially the word need: this is not to say that these 40 pages contain all the set theory I want to know. For instance, I own Cohen's book on forcing and the Continuum Hypothesis, and I would certainly like to know how that stuff goes...
-[Come to think of it: I would be highly amused and interested to read 40 pages of notes entitled All the number theory I have ever needed to know written by one of the several eminent set theorists / logicians who frequent this site and MO. What would make the cut?]<|endoftext|> -TITLE: Uniform convergence and $L^p$-convergence -QUESTION [5 upvotes]: Let $f_n,f \in L^p$ and $f_n \to f$ uniformly. Further let $g \in L^q$. Now, clearly $\int (f_n - f) g$ is defined. How do I show that -$$\int |f_n - f||g|$$ -can be made arbitrarily small if I choose $n$ large enough? If we just naively apply Hölder we will not be able to make it small but just finite. Further, we cannot just replace $|f_n - f|$ by the bound "$\epsilon$", because $|g|$ need not be integrable. Does someone have a hint? - -REPLY [6 votes]: Outside of a sufficiently large compact set, $\|g\|_q <\epsilon$. Use Hölder naively, and on that portion the integral is bounded above by $(\|f_n\|_p + \|f\|_p)\epsilon$. On the inside of the compact set $\|g\|_1 \leq C \|g\|_q$ where $C$ is given by the volume of the compact set. Then you can replace $|f_n - f|$ by $\epsilon$. -Note that this requires that $\|f_n\|_p$ be uniformly bounded, which would be true if you additionally assume $f_n \to f$ in $L^p$. If you don't assume that: -Working over $\mathbb{R}$, take $f_n = \frac1n \chi_{[2^n,2^{n+1}]}$, where $\chi$ denotes the characteristic function. $f_n \to 0$ uniformly as a sequence of functions (i.e. $f_n\to f$ in $L^\infty$). For this sequence $\|f_n\|_p = \frac1n \cdot 2^{n/p} \nearrow \infty$ (unless $p = \infty$). (Note that since it is not a bounded sequence, you cannot assert that it has a weakly converging subsequence.) -Now take $g$ to be the function such that $g(x) = 2^{-n/(q-\delta)}$ if $2^n < |x| \leq 2^{n+1}$. By construction it is a function in $L^q$. -But $|f_n - f| |g| = \frac1n \cdot 2^{-n / (q - \delta)} \chi_{[2^n,2^{n+1}]}$. If you integrate it, you get -$$ \frac1n \cdot 2^{n( 1 - \frac{1}{q - \delta}) }$$ -So unless $q = 1$ and $p = \infty$, you can always choose $\delta$ sufficiently small such that the exponent in the integrated expression is positive, and provide a counterexample to what you want proved. (And of course, in the case where $p = \infty$, you can just replace $|f-f_n|$ by $\epsilon$ and be done with it.)<|endoftext|> -TITLE: Probability in single-lane traffic flow: What are the odds of "choke points" being encountered? -QUESTION [10 upvotes]: Let's say you have a single-lane road (single in this case meaning single lane in each direction, or what you could also call two-lane). Let's say you have a random number of vehicles on a given 10-mile stretch of road. The speed limit is 45 mph, but some people go faster and some go slower. -Is there any math that describes the likelihood that faster vehicles will overtake slower vehicles at "choke points" (points where oncoming traffic makes passing impossible)? (Assume a straight, flat road where safe passing is always allowed.) EDIT: Choke points are points where oncoming cars make it impossible to pass without the overtaking vehicle having to slow down and wait for the oncoming car (or cars) to pass. -Obviously, the answer has to be some kind of curve, because if there is only one vehicle on the road the probability is 0 that a choke point will be encountered, and if the road is completely filled with vehicles in both directions (or even one) the probability is 1 that a choke point will be encountered at any given time. -Sorry if this is a stupid question.
But it seems like there should be a way to express this mathematically. -EDIT 2: Limiting the scope of the problem -Thanks to Mike Spivey, and I hope this helps narrow the problem so that it may be in some way answerable. If not, I'll just declare defeat and retire from the field. -Let's say that we have two cars going in one direction: a slow vehicle (Vehicle A) traveling at 30mph and a faster vehicle (Vehicle B) traveling at 60mph. At the start of the problem A is somewhere between mile 2 and mile 4 on the 10-mile road, and B is at mile 0. Additionally, there will be four oncoming vehicles randomly distributed, and these vehicles are randomly traveling at 30 or 60mph. The overtaking vehicle requires .5 miles of "clear" space to pass safely if the oncoming vehicle is traveling at 60mph, and .25 if it is traveling at 30mph. -Assume further that the car lengths do not matter, but that a "choke point" (or bottleneck, if you prefer) is not reached until B gets within .05 miles of the rear bumper of A, at which point B has to pass A or decelerate. We will call the latter condition a "deceleration event" (DE), and I would like to be able to calculate the probability that at least one DE will occur for Vehicle A along the 10-mile route. -Perhaps this clarification is still insufficient to render the problem solvable, but I appreciate anyone taking the time to consider it. - -REPLY [13 votes]: Your question is an interesting one, but I think the reason you haven't received an answer to it yet is that it's ill-constrained. That's not a criticism but a technical term. Basically it means that the problem isn't well-defined enough for there to be a clear, distinct answer. (This is what the second part of Rahul Narain's comment is getting at.) To borrow from the Math Overflow FAQ: "The site works best for well-defined questions: math questions that actually have a specific answer." -Instead, your question would be great in my mathematical modeling course. I would pose it and then ask the students what assumptions would need to be made in order to get a well-defined mathematical model, or what aspects of the problem would be parameters to the model that we could vary and then see how the solution changes (such as those Rahul Narain mentions), or what existing mathematical tools we could use to help understand traffic behavior (such as queuing theory, as Yuval Filmus mentions). An answer to the question would then require some student taking this on as an extended assignment. He or she would have to make and justify assumptions, create a model (or two or three, which might require some programming), test it (or them), refine the model(s), vary parameters, and then interpret the output from the model(s). There probably wouldn't be a single answer but a range of answers along the lines of "Based on these assumptions our model says this." -(I've actually just described the process of mathematical modeling.) -Another aspect of your question is that traffic problems are hard. They frequently show up as problems in the Mathematical Contest in Modeling (see, for example, the 2009 and 2005 contests). -Traffic problems are an active area of research, too, because there's a lot we don't understand about the way traffic behaves. There's not even a consensus about the best way to go about modeling traffic. Fluid flow models seem to be the most popular, but some people use queuing, and I've also seen discrete dynamical system and even cellular automata models.
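-To make this concrete, here is a minimal Monte Carlo sketch (in Python) of the narrowed problem from EDIT 2 above. The speeds, starting positions, and clear-space distances are taken from the question; everything else, such as the uniform placement of the oncoming cars, the one-second time step, and stopping after the first encounter, is an assumption of this sketch rather than a standard traffic model:
-import random
-
-def p_deceleration_event(trials=100000):
-    """Monte Carlo estimate of P(at least one deceleration event for B)."""
-    hits = 0
-    for _ in range(trials):
-        a = random.uniform(2.0, 4.0)       # Vehicle A: starts at mile 2-4, 30 mph
-        b = 0.0                            # Vehicle B: starts at mile 0, 60 mph
-        oncoming = [[random.uniform(0.0, 10.0), random.choice([30, 60])]
-                    for _ in range(4)]     # [position, speed], driving toward mile 0
-        dt = 1.0 / 3600.0                  # one-second time step, in hours
-        while b < 10.0:
-            a += 30 * dt
-            b += 60 * dt
-            for car in oncoming:
-                car[0] -= car[1] * dt
-            if a - b <= 0.05:              # B is on A's rear bumper
-                # clear space needed: 0.5 mi for 60 mph traffic, 0.25 for 30 mph
-                blocked = any(0.0 <= p - b <= (0.5 if s == 60 else 0.25)
-                              for p, s in oncoming)
-                hits += blocked
-                break                      # B either passes or decelerates; done
-    return hits / trials
-
-print(p_deceleration_event())
-Running a sketch like this for different numbers of oncoming cars or different clear-space requirements is exactly the kind of parameter study described above.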
-If you're interested in learning more about research on traffic, you could also check out this article ("Traffic jam mystery solved by mathematicians") or the book Mathematical Theories of Traffic Flow.<|endoftext|> -TITLE: Almost identical map -QUESTION [12 upvotes]: Let $f: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ be a bijective map with the following properties: -1) $f|_{\mathbb{Q}^2}=id$; -2) the image of any line under the map $f$ is again a line. -Is it true that $f=id$? - -REPLY [5 votes]: (Posting this as CW so Alex can accept the answer.) -Trutheality re-asked the question on MathOverflow, and it turns out the answer is given by what is known as the "Fundamental Theorem of Affine Geometry". See https://mathoverflow.net/questions/46854/continuity-in-terms-of-lines<|endoftext|> -TITLE: Continuity of Rational Functions on the Riemann Sphere $\hat{\mathbb{C}}$ -QUESTION [5 upvotes]: Let $\hat{\mathbb{C}} = \mathbb{C} \cup \{ \infty \}$ denote the Riemann Sphere. While reading the Wikipedia article about it I found a passage that said that every rational function on the complex plane can be extended to a continuous function on the Riemann Sphere. -The particular construction is as follows: let $R(z) = \frac{f(z)}{g(z)} \in \mathbb{C}(z)$ be a rational function. Let's assume for simplicity that $f(z)$ and $g(z)$ share no common factor. Then for any point $a \in \mathbb{C}$ such that $g(a) = 0$ but $f(a) \neq 0$ we define $R(a) = \infty$. Also we define $R(\infty) := \lim_{z \to \infty} R(z)$. -So, to quote Wikipedia, with these definitions $R(z)$ becomes a continuous function from the Riemann Sphere to itself. The problem for me is that the article doesn't add any details as to how one may go about showing that in fact $R(z)$ is continuous. So my question is exactly that; to make it simple, if I have for instance $R(z) = \frac{z-1}{z+1}$ or even $R(z) = \frac{1}{z}$, how do I show that it is a continuous function on $\hat{\mathbb{C}}$? I think I can build up from an easy example such as this (assuming this is easy). -Also, I'm a little bit confused about how to interpret this continuity: how should I see the continuity on the Riemann sphere? Does it involve an argument with stereographic projection? -I added in the Riemann Surface tag just in case they are involved, which I'm not sure about. Thank you very much in advance. - -REPLY [2 votes]: As a lowly high school teacher, my response will probably appear to be comical; nonetheless, I've always imagined rational functions to be continuous. As $x\to\infty$, I imagine the $x$-axis as a great circle of a unit sphere. The $y$-axis would also be a great circle. In spherical geometry, two lines intersect at two poles. If the origin is located at one pole, and infinity at the other, the function can be envisioned as being continuous at infinity. As timur mentions, then infinity would have coordinate $0$.<|endoftext|> -TITLE: How to prove that: $\tan(3\pi/11) + 4\sin(2\pi/11) = \sqrt{11}$ -QUESTION [36 upvotes]: How can we prove the following trigonometric identity?
-$$\displaystyle \tan(3\pi/11) + 4\sin(2\pi/11) =\sqrt{11}$$ - -REPLY [6 votes]: Since $\tan\frac{3\pi}{11}+4\sin\frac{2\pi}{11}>0$, it's enough to prove that -$$\left(\sin\frac{3\pi}{11}+4\sin\frac{2\pi}{11}\cos\frac{3\pi}{11}\right)^2=11\cos^2\frac{3\pi}{11}$$ or -$$\left(\sin\frac{3\pi}{11}+2\sin\frac{5\pi}{11}-2\sin\frac{\pi}{11}\right)^2=11\cos^2\frac{3\pi}{11}$$ or -$$1-\cos\frac{6\pi}{11}+4-4\cos\frac{10\pi}{11}+4-4\cos\frac{2\pi}{11}+4\cos\frac{2\pi}{11}-4\cos\frac{8\pi}{11}-$$ -$$-4\cos\frac{2\pi}{11}+4\cos\frac{4\pi}{11}-8\cos\frac{4\pi}{11}+8\cos\frac{6\pi}{11}=11+11\cos\frac{6\pi}{11}$$ or -$$\sum_{k=1}^5\cos\frac{2k\pi}{11}=-\frac{1}{2}$$ or -$$\sum_{k=1}^52\sin\frac{\pi}{11}\cos\frac{2k\pi}{11}=-\sin\frac{\pi}{11}$$ or -$$\sum_{k=1}^5\left(\sin\frac{(2k+1)\pi}{11}-\sin\frac{(2k-1)\pi}{11}\right)=-\sin\frac{\pi}{11}$$ or -$$\sin\frac{11\pi}{11}-\sin\frac{\pi}{11}=-\sin\frac{\pi}{11}.$$ -Done!<|endoftext|> -TITLE: Is a map a homotopy equivalence if its suspension is so? -QUESTION [9 upvotes]: Let $X$, $Y$ be pointed CW complexes, $Y$ connected and $f:X\to Y$ a mapping. -Does the assertion '$\Sigma f:\Sigma X\to\Sigma Y$ is a homotopy equivalence' imply that $f$ is a homotopy equivalence? '$\Sigma$' is the reduced suspension. -If not, is it true with some additional hypotheses on $Y$? -Addition: -Does the assertion '$\Omega f:\Omega X\to\Omega Y$ is a homotopy equivalence' imply that $f$ is a homotopy equivalence? '$\Omega$' is the loopspace. - -REPLY [15 votes]: I believe the answer to your first question is no. Let $X$ be any connected acyclic CW-complex with non-trivial fundamental group, for example the space constructed as example 2.38 in Hatcher. Such a space has the property that $H_i(X) = 0$ for $i>0$ and $H_0(X) = \mathbb{Z}$, but $\pi_1(X) \neq 0$. (In particular $\pi_1(X)$ must be perfect). Consider the projection map $f: X \to pt$. By looking at $\pi_1$, we see that $f$ cannot be a homotopy equivalence. -However, $\Sigma f: \Sigma X \to \Sigma pt$ is a homotopy equivalence. To see this, note that suspension increases the connectivity, which implies that both spaces are simply connected. Hence the homology Whitehead theorem applies, which says that a map between simply connected CW-complexes is a homotopy equivalence if and only if it induces isomorphisms on all homology groups. Using the suspension axiom in homology, we see that $H_i(\Sigma X)$ and $H_i(\Sigma pt)$ are zero for all $i>0$ and are $\mathbb{Z}$ for $i=0$. It is then easy to check that $\Sigma f$ is an isomorphism in all degrees. -edit: in your addition, I think the answer is yes, if we replace homotopy equivalence by weak homotopy equivalence. The Whitehead theorem says that $f: X \to Y$ is a homotopy equivalence if and only if it induces an isomorphism on all $\pi_i$. Because $\Omega X$ and $\Omega Y$ have the homotopy type of CW-complexes, we can replace them by CW-complexes at the price of replacing homotopy equivalence with weak homotopy equivalence. Now note that $Map_+(S^n,\Omega X) \cong Map_+(S^{n+1},X)$ and similarly $Map_+(S^n,\Omega Y) \cong Map_+(S^{n+1},Y)$. Under this isomorphism $(\Omega f)_*$ corresponds to $f_*$.<|endoftext|> -TITLE: why does a certain formula in Lang's book on modular forms hold? -QUESTION [8 upvotes]: Background: Let $k$ be an even integer. The Eisenstein series are defined by $$E_{k} = 1 - \frac{2k}{B_{k}}\sum_{n=1}^{\infty} \sigma_{k-1}(n)q^{n}$$ where -$$\sigma_{k-1}(n)= \sum\limits_{d \mid n,\;d \geq 1} d^{k-1}$$ and $B_{k}$ is the $k$-th Bernoulli number.
If $k \gt 2$ then $E_{k}$ is a modular form of weight $k$. In the case when $k = 2$, $E_{k}$ is not a modular form. Let $P = E_{2}$. Thus $P = 1 - 24\sum_{n = 1}^{\infty} \sigma_{1}(n)q^{n}$. -Let $\theta$ be the differential operator defined by $\theta = q \frac{d}{dq}$. If $f$ is a modular form of weight $k$, define $\Delta f = 12 \theta f - kPf$. (Note that the symbol $\circ$ means composition.) -Let -$$\alpha = \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) \in \textrm{SL}_{2}(\mathbf{R}).$$ -Define -$$\alpha \tau = \frac{a \tau + b}{c \tau + d}.$$ -Define $f \circ [\alpha]_{k}(\tau) = f(\alpha \tau)(c \tau + d)^{-k}$. -My question: -Lang claims (as a lemma on page 161 of his "Introduction to Modular Forms") that -$\Delta(f \circ [\alpha]_{k}) = (\Delta f) \circ [\alpha]_{k + 2}$. My question is why is this true? - -REPLY [10 votes]: Note to begin with that the symbol $\circ$ does not mean composition; rather, it means the operation given by the formula further on in your question, namely: -$$f\circ[\alpha]_k(\tau) = (c\tau + d)^{-k} f(\alpha \tau).$$ -Now to say that $f$ is a modular form of weight $k$ is to say that -$$f\circ[\alpha]_k(\tau) = f(\tau),$$ -for all $\alpha \in SL_2(\mathbb Z),$ -or equivalently, -that $$f(\alpha \tau) = (c\tau + d)^k f(\tau).$$ -Differentiating both sides with respect to $\tau,$ -we find that -$$f'(\alpha \tau) (c\tau + d)^{-2} = (c\tau + d)^k f'(\tau) + c k (c\tau + d)^{k-1}f(\tau).$$ -Since $\theta = \dfrac{1}{2\pi i} \dfrac{d}{d\tau},$ -we may rewrite this as -$$(c\tau + d)^{-(k+2)}(\theta f)(\alpha\tau) = (\theta f)(\tau) + \dfrac{c k}{2 \pi i(c\tau + d)} f(\tau),$$ -or equivalently, -that $$(\theta f)\circ [\alpha]_{k+2}(\tau) = (\theta f)(\tau) + \dfrac{c k}{2 \pi i(c \tau + d)} f(\tau).$$ -On the other hand, -$$(k P f)\circ[\alpha]_{k+2}(\tau) = k (P\circ[\alpha]_2)(f\circ [\alpha]_k)(\tau),$$ -and so the formula you ask about is equivalent to the formula -$$P\circ [\alpha]_2(\tau) = P(\tau) + \dfrac{12 c}{2 \pi i(c\tau + d)}.$$ -This is a standard formula for $P$ (which shows that although $P$ is not -a modular form of weight 2, it is not so far off). -One way to obtain it is to consider the weight 12 cuspform $\Delta = q\prod_{n = 1}^{\infty}(1 - q^n)^{24}.$ -(Sorry for the conflict in notation, but $\Delta$ is the standard notation here. -The operator that you have labelled $\Delta$ is often denoted $\delta$, perhaps -to avoid precisely this notational conflict.) -A computation (using the product formula for $\Delta$) shows that $\theta(\log \Delta) = P.$ -Now writing down the modularity equation $\Delta(\alpha \tau) = (c\tau + d)^{12} \Delta$ and applying $\theta$ to the log of each side, -we find that indeed $P$ satisfies the required functional equation, -and thus the formula you asked about holds. -As an aside, note that one can also reverse this last argument: suppose that we know that the formula -in the question is true. Applying it to the modular form -$\Delta$, we find that $\theta \Delta - P \Delta$ is a cuspform of weight $14$, but there are no non-zero such cuspforms.
Thus we find that $\theta\Delta = P\Delta,$ or equivalently, that $\theta(\log \Delta) = P.$ If you look in Serre's Course in arithmetic, -you will see that this is how he proves the product formula for $\Delta$: -he verifies the functional equation for $P$ directly (in the special case -when $\alpha = \begin{pmatrix}0 & 1 \\ -1 & 0\end{pmatrix}$; but since -the functional equation is obvious for $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},$ and since these two matrices together generate $PSL_2({\mathbb Z})$, verifying it for this single choice of $\alpha$ is enough), and then -essentially applies the argument I just gave to deduce the product formula for -$\Delta$. -Incidentally, I highly recommend Serre's book; as an introduction to modular forms it is both shorter and clearer than Lang's. If you then want to go further with the theory, there are other, better sources. (For example, after -reading Serre's book you could try directly reading Atkin and Lehner's paper on newforms.)<|endoftext|> -TITLE: How do I do Combinatorics / Counting? -QUESTION [7 upvotes]: If someone could give answers and explain, it would be greatly appreciated. Help required studying for a final. -One hundred tickets, numbered 1,2,3,…,100, are sold to 100 different people for a drawing. -Four different prizes are awarded, including a grand prize (a trip to Tahiti). -A) How many ways are there to award the prizes? -B) How many ways are there to award the prizes if the person holding ticket 47 wins the grand prize? -C) How many ways are there to award the prizes if the person holding ticket 47 wins one of the prizes? -D) How many ways are there to award the prizes if the person holding ticket 47 does not win a prize? -E) How many ways are there to award the prizes if the people holding tickets 19 and 47 both win prizes? -F) How many ways are there to award the prizes if the people holding tickets 19, 47, 73, and 97 all win prizes? -G) How many ways are there to award the prizes if none of the people holding tickets 19, 47, 73, and 97 wins a prize? -H) How many ways are there to award the prizes if the grand prize winner is a person holding ticket 19, 47, 73, or 97? -I) How many ways are there to award the prizes if the people holding tickets 19 and 47 win prizes, but the people holding tickets 73 and 97 do not win prizes? - -REPLY [33 votes]: The two basic rules of counting are: - -Sum rule: If one event can occur in $n$ different ways, and another, mutually exclusive, event can occur in $m$ different ways, then the number of different ways in which either one or the other event can occur is $n+m$. -Product rule: If one event can occur in $n$ different ways, and another, independent, event can occur in $m$ different ways, then the number of different ways in which both events can occur is $nm$. - -When making choices, you have two basic kinds: if the order in which you make the choices matters, they are called permutations. (For example, choosing the Caesar salad as an appetizer and the shrimp cocktail as the main course is different from choosing the shrimp cocktail as an appetizer and the Caesar salad as a main course). If the order in which you make the choices does not matter, you have combinations. (For example, when choosing teams, it doesn't matter if Bill is chosen first to join team A and Clara is chosen second also for team A, or if Clara is chosen first to join team A and Bill is chosen second also for team A; all that matters is that both Bill and Clara are in team A).
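-To see the permutation/combination distinction concretely, here is a tiny Python sketch (illustrative only; it just brute-forces the two counts with the standard itertools module):
-from itertools import combinations, permutations
-
-# Choosing 2 of 4 distinct prizes: order matters for permutations, not for combinations.
-prizes = ["grand", "second", "third", "fourth"]
-print(len(list(permutations(prizes, 2))))   # 12 = 4!/(4-2)!, ordered choices
-print(len(list(combinations(prizes, 2))))   # 6 = 4!/(2!2!), unordered choices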
Combining these, you get some basic formulas: - -The number of ways in which you can make $n$ choices, with $m$ options for each, allowing repetitions but where the order of the choice matters (permutations with repetitions), is $m^n$. -The number of ways in which you can make $n$ choices, with $m$ possibilities, where the order matters but with repetitions not allowed (permutations without repetitions) is $m(m-1)(m-2)\cdots(m-n+1) = \frac{m!}{(m-n)!}$. -The number of ways in which you can make $n$ choices out of $m$ options, if the order does not matter and repetitions are not allowed, is $\binom{m}{n} = \frac{m!}{n!(m-n)!}$ (called "$m$-choose-$n$", because you are choosing $n$ out of $m$). These are "combinations". -The number of ways in which you can make $n$ choices, with $m$ options, if the order does not matter and repetitions are allowed is $\binom{m+n-1}{n}$. (Combinations with repetitions). - -You probably knew all that, but still... -That said: -A. You need to choose 4 tickets, out of 100 possibilities; you are not allowed to choose the same ticket twice (no repetitions). Since the prizes are all different, the order in which you make the choices matters (draw for the 4th prize, then the 3rd prize, then the 2nd prize, then the Grand Prize; or in whichever order you want). So, which formula above applies, and what is the answer? -B. If ticket 47 will win the Grand Prize, you still need to assign the remaining three prizes. They have to be awarded to people other than ticket 47, no repetitions, but order still matters. How many options do you have, and how many choices do you need to make? Order matters, no repetitions. -C. This is similar to the above, but this time, ticket 47 may win the Grand Prize, the second, the third, or the fourth place prize. Count each of the four outcomes separately, then apply the Sum rule. -D. Well, if you are going to exclude ticket 47, you have to choose from among the remaining 99 tickets. Order still matters, repetitions are not allowed. So the only difference is the number of options you have. -E. Well, you have two prizes awarded, and two more to award. First award the other two prizes (i.e., count how many ways you can do that). Then decide which prizes go to tickets 19 and 47: you have four prizes you can award, so you need to choose two prizes (to give to 19 and 47); order matters (first prize chosen goes to 19, second to 47), no repetitions. Count that. In total, you need to (i) pick the other two winners (pick them in order, so the first winner gets the top prize not awarded to 19 or 47, and the second gets the lower prize not awarded); and (ii) pick which prizes to give to 19 and to 47. You need both things to happen, so you should then use the Product rule. -F. Now you know who the four winners are; you just need to figure out which prizes they get. Count how many ways you can distribute the prizes among the four winners. -G. Similar to D, but now you have to exclude four people instead of one. -H. First decide who wins the Grand Prize. Then pick the other three winners. Then use the Product rule. -I. This is a combination of the ideas from E and D. Use the same method as in E to figure out how many ways there are to award prizes to 19 and 47; then figure out how many ways there are to assign the remaining two prizes to the remaining people if you exclude 73 and 97. Then combine the two answers using an appropriate rule mentioned above.<|endoftext|> -TITLE: One-to-one function from the interval $[0,1]$ to $\mathbb{R}\setminus\mathbb{Q}$?
-QUESTION [6 upvotes]: This question might turn out to be really trivial. -$f$ is a one-to-one function from the interval $[0,1]$ to $\mathbb{R}$. Is it necessary that $\exists q \in \mathbb{Q}$ such that $f(x) = q$ for some $x \in [0,1]$ i.e. is it necessary that the image of $f$ contains a rational number? -I came across this question when I was browsing through some website. -I think this is false. But I am unable to come up with a counterexample. - -REPLY [12 votes]: If you want an explicit counterexample, define $f:[0,1]\to\mathbb{R}$ by $f(x)=x$ if $x$ is irrational, $f(x)=\sqrt2+x$ if $x$ is rational. - -REPLY [3 votes]: It is obviously false since $[0,1]$ and $\mathbb{R}\setminus\mathbb{Q}$ have the same cardinality, so there exists a function $f: [0,1] \to \mathbb{R}\setminus\mathbb{Q}$ that is bijective. Do codomain expansion and you get an injective map from the unit interval to $\mathbb{R}$ that has no rationals in its range.<|endoftext|> -TITLE: How does the Chinese Remainder Theorem behave with polynomials with multiple variables? -QUESTION [6 upvotes]: I'm interested because I want to show that $x^2-34y^2\equiv -1\pmod{m}$ has solutions for all integers $m$. I started by using the following reasoning: -If $3\nmid m$, then $gcd(m,3)=1$. Then there exists a multiplicative inverse $\bar{3}$ modulo $m$. I note that $5^2-34=-(3^2)$, and thus $\bar{3}^2(5^2-34)\equiv (\bar{3}\cdot 5)^2-34(\bar{3}^2)\equiv -(\bar{3})^2(3^2) \equiv -1\pmod{m}$. And thus $(\bar{3}\cdot 5, \bar{3})$ is a solution modulo $m$. -Similarly, if $5\nmid m$, then $(m,5)=1$. Since $3^2-34=-(5^2)$, I also have $\bar{5}^2(3^2-34)\equiv (\bar{5}\cdot 3)^2-34(\bar{5}^2)\equiv -(\bar{5})^2(5^2)\equiv -1\pmod{m}$. -So for any $m$ not divisible by $3$ or $5$, there exists a solution. Then for $m$ such that $3|m$ and $5|m$, $m$ has prime factorization $m=3^a5^b{p_1}^{q_1}\cdots {p_r}^{q_r}$. This would give the system of congruences $x^2-34y^2 \equiv -1 \pmod{3^a}$, $x^2-34y^2 \equiv -1 \pmod{5^b}$, $x^2-34y^2 \equiv -1 \pmod{{p_i}^{q_i}}$. -Then $5\nmid 3^a$, $3\nmid 5^b$, and $3\nmid {p_i}^{q_i}$ and $5\nmid {p_i}^{q_i}$, so each of the congruences has a solution. Does the Chinese Remainder Theorem then imply that there is a solution modulo $m$? I know that it holds for polynomials in one variable $x$, and that the number of solutions is the product of the number of solutions for each prime power modulus. Would the same result hold now that there are two variables in the polynomial? I haven't found any proofs to support or contradict the result. Thanks! - -REPLY [5 votes]: The Chinese Remainder Theorem says that if the moduli $m_1,\ldots,m_k$ are pairwise relatively prime, and $a_1,\ldots,a_k$ are arbitrary, then there is a solution $x$ to the congruence system -\begin{align*} x &\equiv a_1 &\pmod{m_1}\\ x &\equiv a_2 &\pmod{m_2}\\ &\vdots\\ x &\equiv a_k &\pmod{m_k} \end{align*} -and the solution is unique modulo $m_1\cdots m_k$. -So, for each $i$ you find a value $r_i$ and a value $s_i$ such that $r_i^2 - 34s_i^2 \equiv -1 \pmod{p_i^{q_i}}$, and values $r$ and $s$ for the congruence modulo $3^a$, and values $r'$ and $s'$ for the congruence modulo $5^b$. Apply the Chinese Remainder Theorem to the $r_i$, $r$, and $r'$ (with appropriate moduli) to get a single value of $x$ that is congruent to what you want for each congruence. Do the same with the $s_i$, $s$, and $s'$ to get a value for $y$. The single value of $x$ and $y$ is the solution you want.
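-Here is a minimal Python sketch of this recipe (the modulus $m = 225 = 9 \cdot 25$ and the brute-force search for the local solutions are my own choices for illustration; the Chinese Remainder step is written out so that the sketch is self-contained):
-def crt(residues, moduli):
-    """Combine x = r_i (mod m_i), for pairwise coprime m_i, into one value."""
-    x, m = 0, 1
-    for r, mi in zip(residues, moduli):
-        # choose t with x + m*t = r (mod mi); pow(m, -1, mi) is the inverse of m mod mi (Python 3.8+)
-        t = ((r - x) * pow(m, -1, mi)) % mi
-        x, m = x + m * t, m * mi
-    return x % m
-
-moduli = [9, 25]                 # m = 3^2 * 5^2 = 225
-xs, ys = [], []
-for mod in moduli:
-    # brute-force one local solution of x^2 - 34*y^2 = -1 (mod mod)
-    x0, y0 = next((x, y) for x in range(mod) for y in range(mod)
-                  if (x * x - 34 * y * y) % mod == mod - 1)
-    xs.append(x0)
-    ys.append(y0)
-
-x, y = crt(xs, moduli), crt(ys, moduli)    # glue each coordinate separately
-assert (x * x - 34 * y * y) % 225 == 225 - 1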
-Notice that you are not applying the Chinese Remainder Theorem to a 2-variable polynomial (or to a one-variable polynomial other than $p(x)=x$, for that matter): rather, you are finding the values you want for each variable for each prime power, and then you are using the Chinese Remainder Theorem to find a single value that will have the appropriate remainders for each of the variables separately.<|endoftext|> -TITLE: Dirichlet's theorem on primes in arithmetic progression -QUESTION [11 upvotes]: Is there a proof in the spirit of Euclid to prove Dirichlet's theorem on primes in arithmetic progression? (By the spirit of Euclid, I mean assuming a finite number of primes we try to construct another number which has a prime factor which falls in the same equivalence class as the other primes but the number is not divisible by any of the primes we considered in the initial list.) -I am aware of the proof using $L$ functions but I am curious to know if Euclid's "simple" idea can be extended to all other cases as well. I tried googling but am unable to find a proof other than the ones relying on $L$ functions. If it is not possible, to what cases can Euclid's idea be extended? -Any other proofs are welcome as well. - -REPLY [12 votes]: As Jonas says, Keith Conrad's paper will tell you that the answer is essentially no. He defines a certain notion of "Euclidean" proof coming from writing down a polynomial with certain properties and gives two classic results which say that these proofs exist for primes in arithmetic progression $a \bmod n$ if and only if $a^2 \equiv 1 \bmod n$. -The first case these proofs can't handle is primes congruent to $2 \bmod 5$ (equivalently, primes congruent to $3 \bmod 5$). The basic problem is that there is no way to force a positive integer to have factors congruent to $2 \bmod 5$ and not congruent to $3 \bmod 5$ (or the other way around) solely by controlling its residue class $\bmod 5$. In more sophisticated language, $2$ and $3$ lie in all the same subgroups of $(\mathbb{Z}/5\mathbb{Z})^{\ast}$. - -REPLY [6 votes]: A couple more references: -Ram Murty and Nithum Thain, -Prime numbers in certain arithmetic progressions, -Paul Pollack, -Hypothesis H and an impossibility theorem of Ram Murty.<|endoftext|> -TITLE: How to compute the angle between two vectors expressed in spherical coordinates? -QUESTION [13 upvotes]: Given two vectors $u, v \in \mathbb{R}^d$ represented in spherical coordinates, is there a simple formula to compute the angle between the two vectors? Without loss of generality, we can assume that the vectors $u$ and $v$ have unit norm. -I am not sure that the following notation is standard, but I assume that the vector $u$ is defined with $\rho = 1$ and the angular components $\theta_1, \ldots, \theta_{d-1}$. We can obtain the Euclidean components as follows: $$u_{x_1} = \cos \theta_1$$ $$u_{x_2} = \sin \theta_1 \cos \theta_2$$ $$\ldots$$ $$u_{x_{d-1}} = \sin \theta_1 \ldots \sin \theta_{d-2} \cos \theta_{d-1}$$ $$u_{x_d} = \sin \theta_1 \ldots \sin \theta_{d-2} \sin \theta_{d-1}.$$ -One way to find the angle is to represent the two vectors in Euclidean coordinates and compute arccos of the dot product. Is there a simpler way? -As pointed out in comments, is there a generalization of the Haversine formula?
-REPLY [3 votes]: Simple, alternative derivation of Ralf's formula, without the guesswork: -Angle between the vectors: -$$\alpha = \arccos\left({{u \cdot v}\over{|u| |v|}}\right)$$ -$$u \cdot v = \sum_i u_i v_i $$ -in hyper-spherical coordinates ($n+1$ dimensions, hence $n$ angles): -$$u_i(n) = |u| \cos(\theta_i) \prod_{j=1}^{i-1}{\sin(\theta_j)} $$ -except when $i=n$: -$$u_n(n) = |u| \prod_{j=1}^{n}{\sin(\theta_j)} $$ -and similarly for $v$ (I will use $\phi$ for the angles of $v$). -With unit vectors we combine the three formulae to: -$$\cos(\alpha) = \sum_{i=1}^{n-1}{\cos(\theta_i)\cos(\phi_i) \prod_{j=1}^{i-1}{\sin(\theta_j)\sin(\phi_j)}} + \prod_{j=1}^{n}{\sin(\theta_j)\sin(\phi_j)} $$ -which can be written as a recursive rule: -$$\cos(\alpha)_n = \cos(\theta_{n-1})\cos(\phi_{n-1}) \cos(\alpha)_{n-1} + \prod_{j=1}^{n}{\sin(\theta_j)\sin(\phi_j)} $$ -Which is similar to what Ralf's answer contains (cos/sin switched).<|endoftext|> -TITLE: Compactification of Manifolds -QUESTION [21 upvotes]: It is known that for any locally compact Hausdorff space X, we can define a Hausdorff one-point compactification containing X. -In the case of the (differentiable) manifold $\mathbb R^n$ this one-point compactification turns out to be (homeomorphic to) $\mathbb S^n$, which is again a (differentiable) manifold. -This leads to the following question: -What does the picture look like in the general case for compactifications of an arbitrary manifold $M$? -Although the one-point compactification of $M$ is not a manifold in general (e.g. $\mathbb R^n - 0$); is it possible to view every manifold as an open (dense?) subset of a compact manifold by taking some other kind of compactification? -In the differentiable case? In the $C^0$-case? - -I had thought along the following lines at first: -By the Whitney embedding theorem, every manifold $M$ can be thought of as a closed submanifold of $\mathbb R^n$ for some $n$. And by embedding $\mathbb R^n$ into $\mathbb S^n$, we can think of $M$ as an embedded submanifold of a compact manifold. -But I guess taking the closure of $M$ in $\mathbb S^n$ will not in general leave us with a manifold anymore (?), so this does not answer my question... - -Has this been looked into? -Thanks for any thoughts. -S.L. - -REPLY [6 votes]: Just to take this question from the "unanswered" list. (It was actually answered in comments.) -(1). The simplest example of a manifold which is not homeomorphic to an open subset of a compact manifold is an infinite disjoint union of circles. [Edit: I stand corrected: the simplest example is ${\mathbb N}$ with discrete topology. I was only thinking about manifolds of positive dimension.] -(2). If you want a connected example, then it first appears in dimension 2: If a connected surface $S$ has infinite genus then it is not homeomorphic to an open subset of a compact surface. This is intuitively clear, but I will nevertheless give a proof which works in all dimensions. -Since $S$ has infinite genus, the image of the natural map -$$\phi: H^1_c(S; {\mathbb R})\to H^1(S; {\mathbb R})$$ -has infinite rank (each "handle" in $S$ contributes a 2-dimensional subspace). Suppose that $S\to T$ is an open embedding of $S$ into a compact surface $T$.
Then we have the commutative diagram -$$\begin{array}{ccc} H^1_c(S; {\mathbb R}) & \stackrel{\phi}{\to} & H^1(S; {\mathbb R})\\ \psi\downarrow & ~ & \eta\uparrow \\ H^1_c(T; {\mathbb R}) & \stackrel{\cong}{\to} & H^1(T; {\mathbb R}) \end{array}$$ -(Note that the induced maps of ordinary and of compactly supported cohomology groups go in opposite directions; this is what is used in the proof.) Since $H^1(T, {\mathbb R})$ is finite-dimensional, the image of $\eta\circ \psi$ is also finite-dimensional, which is a contradiction. -Edit. There are less trivial examples in dimension 3. Haken proved that a certain open contractible 3-manifold does not embed in any compact 3-manifold: -W. Haken, Some results on surfaces in 3-manifolds, Studies in Modern Topology, Math. Assoc. Amer. (distributed by Prentice-Hall, Englewood Cliffs, N. J.), 1968, 39-98. -This result was generalized in -R. Messer, A. Wright, Embedding open 3-manifolds in compact 3-manifolds. -Pacific J. Math. 82 (1979), no. 1, 163–177, -who found necessary and sufficient conditions for embedding in compact 3-manifolds of open 3-manifolds of the form -$$\bigcup_{n \in {\mathbb N}} M_n,$$ -where for all $n$, $M_n$ is a compact submanifold with toral boundary and $M_n\subset int(M_{n+1})$.<|endoftext|> -TITLE: Do the Möbius function, totient function, sum of divisors and number of divisors uniquely specify a number? -QUESTION [48 upvotes]: Let $\mu\left(n\right)$ be the Möbius function. Let $\phi\left(n\right)$ be Euler's totient function. Let $\sigma\left(n\right)$ be the sum of divisors and $\tau\left(n\right)$ be the number of divisors functions. I am curious to know whether or not the system: -$\mu\left(n\right)=a$ -$\phi\left(n\right)=b$ -$\sigma\left(n\right)=c$ -$\tau\left(n\right)=d$ -has at most one solution. -Motivation: I remember a number theory assignment I had where we were given particular values for each of these functions and asked to recover the original number. I can't for the life of me remember how (or if) I managed to solve this problem. I tried to work out a general proof, but couldn't. I also wrote a loop in Maple to check for counterexamples, but haven't found any yet. I feel like this is something I should know, but probably have forgotten the relevant facts for approaching this problem. - -REPLY [31 votes]: The answer is No. The smallest counterexamples I could find are {1836, 1824}, {5236, 4960}, {5742, 5112}, {6764, 6368}, {9180, 9120} and {9724, 9184}. I think those are all the pairs in which both numbers are less than 10,000. -For example, both $n=1836$ and $n=1824$ satisfy $\mu(n)=0$, $\varphi(n)=576$, $\sigma(n)=5040$ and $\tau(n)=24$. -EDIT: here's the code of the program I used in GAP. -vec := function(n) - return [MoebiusMu(n), Phi(n), Sigma(n), Tau(n)]; -end; - -dic:=NewDictionary([1,2,3,4], true); - -for i in [2..10000] do - v:=vec(i); - if (LookupDictionary(dic, v) <> fail) then Print(i," <=> ", LookupDictionary(dic, v), "\n"); fi; - AddDictionary(dic, v, i); -od;<|endoftext|> -TITLE: Why would I want to multiply two polynomials? -QUESTION [100 upvotes]: I'm hoping that this isn't such a basic question that it gets completely laughed off the site, but why would I want to multiply two polynomials together? -I flipped through some algebra books and have googled around a bit, and whenever they introduce polynomial multiplication they just say 'Suppose you have two polynomials you wish to multiply', or sometimes it's just as simple as 'find the product'.
I even looked for some example story problems, hoping that might let me in on the secret, but no dice.
-I understand that a polynomial is basically a set of numbers (or, if you'd rather, a mapping of one set of numbers to another), or, in another way of thinking about it, two polynomials are functions, and the product of the two functions is a new function that lets you apply the function once, provided you were planning on applying the original functions to the number and then multiplying the results together.
-Elementary multiplication can be described as 'add $X$ to itself $Y$ times', where $Y$ is a nice integer number of times. When $Y$ is not a whole number, it doesn't seem to make as much sense.
-Any ideas?
-
-REPLY [2 votes]: In digital signal processing, a signal of finite length may be represented as a polynomial (its Z transform). So the convolution of two finite-length signals is simply the multiplication of the Z transform polynomials.<|endoftext|>
-TITLE: Topological vs. Algebraic $K$-Theory
-QUESTION [24 upvotes]: Suppose I can calculate the extraordinary cohomology encoded in topological $K$-groups of a topological space $X$ with CW structure. What information does this give me about $C^{*}$-algebras associated with $X$? What is the algebraic analogue of topological suspension or the algebraic version of Bott Periodicity?
-Bigger Question: More generally, what is the deep connection between topological $K$-theory and algebraic $K$-theory?
-Motivation: I've computed the reduced topological $K$-groups of a wedge sum of $m$ $n$-spheres. They are given by
-\begin{eqnarray}
-\tilde{K}^{p}_{\text{top}} \left(\bigvee^{m} S^{n} \right) \cong
-\mathbb{Z}^{m} \quad \text{if} \quad p + n \quad \text{is even}
-\end{eqnarray}
-and trivial otherwise.
-Question: Is the positive integer $m$ the dimension of an algebra associated with the wedge sum? If so, is there an intuitive description of this algebra?
-Thanks!
-
-REPLY [26 votes]: I cannot fully answer your questions, but hopefully I can help orient you. I know a little about C*-algebra K-theory, and less about topological K-theory, so take this with a grain of salt.
-The $K_0$ group of a (complex) C*-algebra $A$ is defined as the Grothendieck group of a commutative monoid of equivalence classes of projections in the matrix algebras over $A$. The $K_1$ group of $A$ can be defined as a group of equivalence classes of unitary elements of matrix algebras over $A$. (The subscripts are used instead of superscripts because the corresponding functors are covariant rather than contravariant.) To define higher $K_n$ groups, "suspensions" are used. In this setting, the suspension $SA$ of $A$ is defined to be $C_0(0,1)\otimes A\cong C_0((0,1),A)$, which can be thought of as the C*-algebra of continuous $A$-valued functions on the unit interval that vanish at $0$ and $1$. When $n$ is greater than $1$, $K_n(A)$ is defined to be $K_{n-1}(SA)$. It turns out that this is consistent with the $n=1$ case, i.e., $K_1(A)$ is isomorphic to $K_0(SA)$. Bott periodicity says that $K_{n+2}(A)$ is isomorphic to $K_n(A)$ for all $n$.
-In the commutative case, there is a locally compact Hausdorff space $X$ such that $A$ is isomorphic to $C_0(X)$, the algebra of complex-valued continuous functions on $X$ vanishing at infinity. The homeomorphism class of $X$ is determined by $A$, and the isomorphism class of $A$ is determined by $X$. One can consider either the operator K-theory of $C_0(X)$ or the topological K-theory of $X$.
It turns out that the groups are isomorphic: $K_n(C_0(X))\cong K^n(X)$.
-The K-groups don't tell you anything literally about the dimension of the C*-algebra. For example, the infinite dimensional C*-algebra $C(S^2)$ of continuous complex-valued functions on the $2$-sphere has K-groups $K_0(C(S^2))=\mathbb{Z}^2$ and $K_1(C(S^2))=0$. These are the same as the K-groups of the algebra $\mathbb{C}\oplus\mathbb{C}$ (which is the algebra of continuous complex-valued functions on a $2$-point discrete space).
-As a terminological aside, usually a distinction is made between algebraic K-theory and K-theory of operator algebras.
-Here are some references. For topological K-theory, see Atiyah and Karoubi. For operator K-theory, in increasing order of difficulty and assumed background, see Wegge-Olsen, Rørdam et al., and Blackadar. Blackadar's book is a wonderful reference and the most comprehensive, while the other two serve as good introductions. (Rørdam et al. is the one I've studied the most.)
-Some insight into the relationship between the two theories can be gained in light of Swan's theorem on the equivalence between finitely generated projective modules over $C(X)$ and vector bundles over $X$. Roughly speaking, since projective modules are direct summands of free modules, they correspond to idempotents in the endomorphism rings of free modules, and in the C*-algebra case this corresponds to the projections in matrix algebras mentioned above. Each of the operator K-theory books mentioned above includes at least some material on the relationship to topological K-theory.<|endoftext|>
-TITLE: The order of the number of integer pairs satisfying certain arithmetical function relationships
-QUESTION [26 upvotes]: This question is a follow-up to this excellent mathematics stackexchange question.
-Let $\mu(n)$ be the Möbius function, $\phi(n)$ Euler's totient function, $\sigma(n)$ the sum of divisors function and $\tau(n)$ the number of divisors function. Define the set $S_N,$ for a natural number $N,$ by
-$$S_N = \lbrace (m,n) \in \mathbb{N} \times \mathbb{N} \mid m \ne n, \,
-\mu(m)=\mu(n), \, \phi(m)=\phi(n),$$
-$$\sigma(m)=\sigma(n), \, \tau(m)=\tau(n) \textrm{ and }
-\text{max} \lbrace m,n \rbrace \le N \rbrace .$$
-How large is the set $S_N$?
-
-REPLY [8 votes]: As all of $\mu$, $\tau$, $\phi$, and $\sigma$ are multiplicative, if $(a,b)$ is such a pair and $n$ is relatively prime to $ab$ then $(an,bn)$ is another pair. So, as there is at least one pair, there are an infinite number of them, and for $N$ large enough there is a constant $C$ such that there are at least $CN$ such pairs. Also, if $(c,d)$ is another such pair, with $cd$ relatively prime to $ab$, then $(ac,bd)$ and $(ad,bc)$ are more pairs.
-If we define a primitive pair as a pair that cannot be decomposed as above, then I suspect that there are an infinite number of primitive pairs, but I have no idea how to prove it.
-Edit: In the answer to this question the pairs {1836, 1824}, {5236, 4960}, {5742, 5112}, {6764, 6368}, {9180, 9120} and {9724, 9184} are found, so there is at least one such pair.
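-(A quick sanity check of the first pair, sketched in Python; the sympy number-theory helpers are an assumption of the sketch, and any CAS with these arithmetic functions would do:
-
-    from sympy.ntheory import mobius, totient, divisor_sigma, divisor_count
-
-    for n in (1836, 1824):
-        # print the 4-tuple (mu, phi, sigma, tau) for each member of the pair
-        print(n, mobius(n), totient(n), divisor_sigma(n), divisor_count(n))
-
-Both lines print the same values $\mu = 0$, $\phi = 576$, $\sigma = 5040$, $\tau = 24$, confirming the pair.)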
-
-Added
-To motivate the intuition that there are probably an infinite number of primitive pairs, here is a simple algorithm for computing relatively prime pairs:
-
-Choose a set of small primes $B$ (for example the primes less than 20)
-Compute a set of primes $P$ such that for $p\in P$ both $\sigma(p)=p+1$ and $\phi(p)=p-1$ factor completely using the primes in $B$
-For each prime in $P$ create a variable $X_p$, where $X_p=1$ if $p|m$, $X_p=-1$ if $p|n$ and $X_p=0$ otherwise.
-Each member of $B$ creates 2 linear equality constraints on the $X_p$ and we add another constraint $\sum_{p\in P}X_p = 0$ constraining $m$ and $n$ to have the same number of prime factors. Since they are squarefree and have the same number of prime factors, both $\mu$ and $\tau$ will be equal.
-Enumerate the $\{-1,0,1\}^k$ lattice points that satisfy the constraints. These will give the prime factors of $m$ and $n$.
-
-In theory step 5 can be problematic; in practice it is easy to find many pairs. For example, letting $B$ be the primes less than 20, and letting $P$ be the qualifying primes less than 1000, we quickly get hundreds of pairs, the smaller of which are:
-(15265, 15169), (27962, 26355), (30199, 30217), (64255, 63791), (66526, 62535), (72713, 72703), (89089, 89585), (149739, 145915), (166315, 165319), (182942, 171795), (233597, 235135), (307021, 307951), (344137, 344129), (392227, 391859), (483769, 485317), (622599, 605815), (873301, 876211), (967759, 968297).
-It is left as an exercise to the reader to extend this to non-squarefree pairs.
-As the size of $P$ seems to grow much faster than $B$ it is plausible that we will generate more new pairs as we grow $B$.<|endoftext|>
-TITLE: Existence of k-regular graph
-QUESTION [6 upvotes]: In a few examples I noted the following about the existence of a $k$-regular graph on $n$ vertices:
-
-True, for $k$ or $n$ even.
-False, for $k$ and $n$ odd. But we can find a graph in which $n-1$ vertices have degree $k$ and one vertex has degree $k-1$. There doesn't exist a $k$-regular graph for $k$ and $n$ odd because $k=\deg(G) = 2|E(G)|/|V(G)|$, so $|E(G)| = kn/2$, and $|E(G)| = m$ is not a natural number if $n$ and $k$ are both odd.
-
-Any proof idea?
-
-REPLY [2 votes]: A lot easier way: the sum of the degrees is $2|E|$. Therefore the sum of the degrees must be an even number. Since an odd number times an odd number is always odd, and the sum of the degrees of a $k$-regular graph is $k\cdot n$, $n$ and $k$ cannot both be odd.<|endoftext|>
-TITLE: Intuition behind $dx \wedge dy=-dy \wedge dx$
-QUESTION [17 upvotes]: I was re-reading this old book of mine, and I noticed that in defining the rules of differential forms, it "makes sense" that we have the rule $dx \wedge dx=0$ because if $dx$ is infinitesimal, then to first order approximations we can ignore powers of $dx$. Similarly, the definition for the exterior derivative $d$, of a differential form $\omega=Adx+Bdy+Cdz$, $d\omega=\frac{dA}{dx}dx + \frac{dB}{dy}dy + \frac{dC}{dz}dz $ "makes sense" because it feels like we are just multiplying the top and bottom by the differentials $dx,dy,$ and $dz$.
-But it is practically a miracle that by introducing the simple anti-symmetrical commutation relations for differential forms, and applying very elementary operations, we can arrive at all the results of vector calculus such as gradient and cross product, among a large number of other well-known results.
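-(To make the cross-product claim concrete, here is a small symbolic sketch, my own illustration in Python with sympy rather than anything from the book, expanding the wedge of two 1-forms on $\mathbb{R}^3$ using only the rules $dx_i \wedge dx_i = 0$ and $dx_i \wedge dx_j = -dx_j \wedge dx_i$:
-
-    import itertools
-    import sympy as sp
-
-    a = sp.symbols('a1:4')   # coefficients of a1*dx1 + a2*dx2 + a3*dx3
-    b = sp.symbols('b1:4')   # coefficients of b1*dx1 + b2*dx2 + b3*dx3
-    coeff = {}               # resulting coefficient of dx_i ^ dx_j, i < j
-    for (i, ai), (j, bj) in itertools.product(enumerate(a), enumerate(b)):
-        if i == j:
-            continue                       # dx_i ^ dx_i = 0
-        sign = 1 if i < j else -1          # dx_j ^ dx_i = -dx_i ^ dx_j
-        key = (min(i, j), max(i, j))
-        coeff[key] = coeff.get(key, 0) + sign * ai * bj
-    print(coeff)  # {(0, 1): a1*b2 - a2*b1, (0, 2): a1*b3 - a3*b1, (1, 2): a2*b3 - a3*b2}
-
-The surviving coefficients are exactly the components of the cross product $a \times b$.)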
-In this particular book, the authors motivate the anti-symmetry condition by properties of determinants and Jacobians for change of variables in integration. But I was wondering if there are other ways to think about why differential forms should commute anti-symmetrically which might provide some more intuition on just why this "miracle" works.
-Thanks!
-
-REPLY [21 votes]: One way of looking at the antisymmetric relation is as a consequence of $dx∧dx=0$ (which feels intuitive to you). Applied to $(dx+dy)∧(dx+dy)=0$, we get $(dx∧dx)+(dx∧dy)+(dy∧dx)+(dy∧dy)=0$. So $(dx∧dy)+(dy∧dx)=0$, hence $(dx∧dy)=-(dy∧dx)$.
-
-REPLY [10 votes]: I like the motivation given by Jack Lee's book Introduction to Smooth Manifolds. Roughly, we want to capture volume by the exterior algebra: say $\omega$ is a tensor that we want to apply to $n$ vectors to get the $n$-dimensional volume of the parallelepiped they form. In the case $n=2$ for example, we should have $\omega(X,X) = 0$ since we get a line and not a 2-d region (so 0 area). Now by multilinearity, $\omega(X,X) = 0$ forces $\omega$ to be alternating (as in Timothy's answer).
-The algebra of forms is the algebra of alternating tensors.<|endoftext|>
-TITLE: Any two points in a Stone space can be disconnected by clopen sets
-QUESTION [28 upvotes]: Let $B$ be a Stone space (compact, Hausdorff, and totally disconnected). Then I am basically certain (because of Stone's representation theorem) that if $a, b \in B$ are two distinct points in $B$, then $B$ can be written as a disjoint union $U \cup V$ of open sets where $a \in U, b \in V$.
-However, I can't seem to prove this directly. The proof should be fairly straightforward, so I am sure I'm missing something obvious. (As an exercise for myself, I'm trying to prove Stone's representation theorem, and I need this as a lemma.)
-
-REPLY [13 votes]: The point of this answer is mostly to provide some terminology with which to express the distinction here.
-A topological space is totally disconnected if the only nonempty connected subsets are the singleton sets. Equivalently, the connected components are all singleton sets.
-A topological space is zero-dimensional if it admits a base of clopen [i.e., both open and closed] sets.
-A topological space is separated (more traditionally: $T_1$) if all the singleton sets are closed.
-First easy observation: A totally disconnected space is separated.
-(Proof: Otherwise the closure of a point would give a larger connected component.)
-Second easy observation: In a separated zero-dimensional space $X$, for any two distinct points $x_1$, $x_2$, there exists a separation $X = U_1 \coprod U_2$ with $x_i \in U_i$.
-(In the terminology of Nuno's answer -- which is relatively standard if not well-known -- the conclusion is that the quasi-components are singleton sets.) In particular, $X$ is totally disconnected.
-(Proof: Let $x_1$, $x_2$ be distinct points in $X$. Since $X$ is separated, $N = X \setminus \{x_2\}$ is open. By definition of a base, there exists a clopen set $U_1$ with $x_1 \in U_1 \subset N$ and then $U_1$, $U_2 = X \setminus U_1$ is the desired separation.)
-By contrast, we have the following
-Nontrivial result: A locally compact Hausdorff space is zero-dimensional iff it is totally disconnected. For instance this wikipedia article gives a reference.
-Note that Nuno's answer (which should be the accepted one, IMO) gives a proof of this in the compact case.
-Perhaps a cleaner approach is to "redefine" a Stone space to be a space which is compact, Hausdorff and zero-dimensional.
Then when you are given a space that someone says is a Stone space, the first thing you'll do is look for a base of clopen sets, which tends to be a good idea anyway. For instance, given any profinite space (i.e., an inverse limit of finite discrete spaces, yet another term for the same class of spaces!) it is very easy to do this.<|endoftext|>
-TITLE: When functions commute under composition
-QUESTION [34 upvotes]: Today I was thinking about composition of functions. It has nice properties: it's always associative, there is an identity, and if we restrict to bijective functions then we have an inverse.
-But then I thought about commutativity. My first intuition was that bijective self-maps of a space should commute but then I saw some counter-examples. The symmetric group is only abelian if $n \le 2$ so clearly there need to be more restrictions on functions than bijectivity for them to commute.
-The only examples I could think of were boring things like multiplying by a constant or maximal tori of groups like $O(n)$ (maybe less boring).
-My question: In a Euclidean space, what are (edit) some nice characterizations of sets of functions that commute? What about in a more general space?
-Bonus: Is this notion of commutativity important anywhere in analysis?
-
-REPLY [5 votes]: This question may also be related to how certain functions behave under functions of their variables. In this context, the property of commuting with binary operators, such as addition and multiplication, can be used to define classes of functions:
-
-additive commutation: if $g(x, y) = x + y$, then $f\big(g(x, y)\big) = g\big(f(x),\ f(y)\big)$ if and only if $f(x + y) = f(x) + f(y)$; thus $f$ is a homogeneous linear function of the form $f(x; a) \equiv ax$
-multiplicative commutation: if $g(x, y) = xy$, then $f\big( g(x, y) \big) = g\big(f(x),\ f(y)\big)$ if and only if $f(xy) = f(x)f(y)$; thus $f$ is "scale invariant" i.e. a power law of the form $f(x; a) \equiv x^a$
-log-additive commutation: if $g(x, y) = x + y$, then $\log f\big( g(x, y) \big) = g\big( \log f(x),\ \log f(y) \big)$ if and only if $f(x + y) = f(x)f(y)$; thus $f$ is an exponential function of the form $f(x; a) \equiv \exp(ax)$
-
-The last item (3) involves a third function (the logarithm) which, when denoted as $h$, gives
-$h\big(f[g(x, y)]\big) = g\big(h[f(x)],\ h[f(y)]\big)$
-or
-$h \circ f \circ g(x, y) = g\big(h \circ f(x),\ h \circ f(y)\big).$
-Since $h \circ f$ occurs on both sides, we can denote this as $\tilde f$ to get
-$\tilde f \big( g(x, y) \big) = g \big( \tilde f(x), \tilde f(y) \big)$
-which has the same form as item (1) above. From this perspective, items (1) and (3) above can be seen as being isomorphic under the $\exp$ and $\log$ pair of invertible mappings.<|endoftext|>
-TITLE: Product of Polytopes
-QUESTION [5 upvotes]: Question: What is the correct notion of a product of integral (or rational) polytopes which induces a factorization of its Ehrhart (quasi-)polynomial into two primitive Ehrhart (quasi-)polynomials corresponding to its constituent polytopes, viz., $L_{P \times Q}(t) = L_{P}(t) L_{Q}(t)$?
-(Motivation) Given two closed integral polytopes $P$ and $Q$ each with vertices at $\{ \mathbf{0} , b_{1} \mathbf{e}_{1}, \dots, b_{n} \mathbf{e}_{n} \}$ and $\{ \mathbf{0} , d_{1} \mathbf{e}_{1}, \dots, d_{m} \mathbf{e}_{m} \}$, respectively, where $n, b_{i}, m, d_{j} \in \mathbb{N}$, define the integral polytope $R$ with vertices at $\{ \mathbf{0}, b_{1} \mathbf{e}_{1}, \dots, b_{n} \mathbf{e}_{n}, d_{1} \mathbf{e}_{n+1}, \dots, d_{m} \mathbf{e}_{n+m} \}$.
-The above construction cannot be the sought-after product $P \times Q$. Suppose $P$ and $Q$ are defined by $b_{1} = b_{2} = d_{1} = d_{2} = 2$. It is easy to show that $L_{P}(1) = L_{Q}(1) = 6$. Define $R$ as above with the vertices of $P$ and $Q$. It is true that $L_{R}(1) = 15 \neq 6^{2}$.
-Question: What is $R$ in terms of $P$ and $Q$? Is it special in some way?
-Thanks!
-
-REPLY [3 votes]: The (Motivation) description appears to be similar to a direct sum (a.k.a. tegum product) as defined in Section 1.2 of this paper:
-http://people.reed.edu/~davidp/homepage/seniors/mcmahan.pdf
-The difference is that for a direct sum, 0 is in the relative interior of both of the factor polytopes, rather than a common vertex.
-The same article answers your question about what the $nm$ vertices of a Cartesian product are.<|endoftext|>
-TITLE: Equilateral polygon in a plane
-QUESTION [17 upvotes]: Let $n$ be a positive integer. Suppose we have an equilateral polygon in the Euclidean plane such that all angles except possibly two consecutive ones are an integral multiple of $\frac{\pi}{n}$. Then all angles are an integral multiple of $\frac{\pi}{n}$.
-This problem is #28 on page 61 in these notes, restated here for convenience:
-http://websites.math.leidenuniv.nl/algebra/ant.pdf
-I have seen a number-theoretic proof of this. I was wondering if there are any geometric (or at least non number-theoretic) proofs of this result.
-
-REPLY [2 votes]: Maybe this is a start: Given: an $n$-step walk in the plane with each step of length 1 that begins with a step to $(1,0)$ and ends at the origin, and all angles between steps being a multiple of $\pi/n$ except perhaps those between three adjacent steps (i.e. two corners). Show that all the angles are multiples of $\pi/n$.
-Label the angles by $\{\alpha_k|1\le k \le n\}$ with the angles that are not multiples of $\pi/n$ being $\alpha_{n-1}$ and $\alpha_n$. Define each consecutive step in vector form, that is, relative to this coordinate system. We have steps of the form:
-$$(\cos(\beta_k),\sin(\beta_k))$$
-where $\beta_k = \Sigma_{j\le k}\;\alpha_j$ and $\beta_1=0$. So $\beta_k$ is a multiple of $\pi/n$ except for $k=n-1$ or $n$.
-That the path returns to the origin requires:
-$$\Sigma_k(\cos(\beta_k),\sin(\beta_k)) = (0,0).$$
-Now note that $\cos(k\pi/n)$ can be written as a polynomial over $\mathbb{Z}$ in $\cos(\pi/n)$ and that $\sin(k\pi/n)$ can be written as $\sin(\pi/n)$ times a polynomial over $\mathbb{Z}$ in $\cos(\pi/n)$. These apply to all the $\beta_k$ except the last two.
-So it looks like a problem in $\mathbb{Z}[\cos(\pi/n)]$. And maybe you should divide the "y" restriction by $\sin(\pi/n)$.<|endoftext|>
-TITLE: Computing the Chern-Simons invariant of $SO(3)$
-QUESTION [44 upvotes]: I am an undergraduate learning about gauge theory and I have been tasked with working through the two examples given on pages 65 and 66 of "Characteristic forms and geometric invariants" by Chern and Simons. I will recount the examples and my progress at a solution. For ease, here is the relevant text:
-
-Example 1.
Let $M = \mathbb{R}P^3 = SO(3)$ together with the standard metric of constant curvature 1. Let $E_1, E_2, E_3$ be an orthonormal basis of left invariant fields on $M$, oriented positively. Then it is easily seen that $\nabla_{E_1}E_2 = E_3, \nabla_{E_1}E_3 = - E_2, \text{ and } \nabla_{E_2}E_3 = E_1$. Let $\chi : M \rightarrow F(M)$ be the cross-section determined by this frame.
-$$\Phi(SO(3)) = \frac{1}{2}.$$
-Example 2. Again let $M = SO(3)$, but this time with left invariant metric $g_{\lambda}$, with respect to which $\lambda E_1, E_2, E_3$ is an orthonormal frame. Direct calculation shows
-$$\Phi(SO(3),g_{\lambda}) = \frac{2\lambda^2 - 1}{2\lambda^4}.$$
-
-For each of these examples I am expected to calculate
-$$\Phi(M) = \int_{\chi(M)} \frac{1}{2} TP_1(\theta)$$
-which lies in $\mathbb{R}/\mathbb{Z}$. Previously in the paper they give an explicit formulation of $TP_1(\theta)$ in terms of the "component" forms of the connection $\theta$ and its curvature $\Omega$,
-$$TP_1(\theta) = \frac{1}{4\pi^2}\left( \theta_{12}\wedge\theta_{13}\wedge\theta_{23} + \theta_{12}\wedge\Omega_{12} + \theta_{13}\wedge\Omega_{13} + \theta_{23}\wedge\Omega_{23}\right).$$
-I have verified this formula for myself given the information in the paper. Using the structural equation $\Omega = d\theta + \theta\wedge\theta$ I am able to reduce the expression for $TP_1(\theta)$ to
-$$TP_1(\theta) = \frac{-1}{2\pi^2}\left( \theta_{12}\wedge\theta_{13}\wedge\theta_{23} \right).$$
-I don't believe I have assumed anything about the structure of $M$ during that reduction, so I believe it should hold for both examples. I continue by claiming that since $E_1, E_2, E_3 \in so(3)$, the Lie algebra of $SO(3)$, I should be able to compute $\theta$ by considering
-$$\nabla_{E_i}E_j := (\nabla E_j)(E_i) = \sum_k E_k \otimes \theta^{k}{}_{ij}(E_i)$$
-and comparing it with the given derivatives.
-For example one, this yielded for me $\theta_{12} = E^3, \theta_{13} = -E^2, \theta_{23} = E^1$ where $E^i$ are the 1-forms dual to the basis $E_i$. Then I think that $\chi^*$ should act trivially on $TP_1(\theta)$ as it is a horizontal form in $\Lambda^*(T^*F(M))$. Therefore I find that $\chi^*(TP_1(\theta)) = \frac{1}{2\pi^2}\omega$, where $\omega$ is the volume form of $M$, and when integrated this yields the correct answer of $\frac{1}{2}$ for the first example.
-However, my approach fails completely for the second example. I assume that the set $\lambda E_1, E_2, E_3$ obeys the same derivative relationships as given in the first example, but this does not seem to give me enough factors of $\lambda$. I suspect that I am not handling the computation of the $\theta_{ij}$ forms or the application of $\chi^*$ correctly; however, I am uncertain what my exact issue is. Is there a fundamental flaw in my understanding? I am hoping someone with more experience can point me in the right direction.
-
-REPLY [3 votes]: I will use co-frames. If $\bar{\theta}_1$, $\bar{\theta}_2$ and $\bar{\theta}_3$ is the co-frame dual to $E_1$, $E_2$ and $E_3$, then the co-frame dual to $\lambda E_1$, $E_2$ and $E_3$ is:
-$\theta_1 = \lambda^{-1} \bar{\theta}_1$, $\theta_2 = \bar{\theta}_2$ and $\theta_3 = \bar{\theta}_3$.
-We then have:
-$d \theta_1 + 2\lambda^{-1} \theta_2 \wedge \theta_3 = 0$,
-$d \theta_2 + 2 \lambda \theta_3 \wedge \theta_1 = 0$ and
-$d \theta_3 + 2 \lambda \theta_1 \wedge \theta_2 = 0$.
-From Cartan's first structure equation $d \theta_i + \sum_j \theta_{ij} \wedge \theta_j = 0$, we then deduce that
-$\theta_{12} = -\lambda^{-1} \theta_3$, $\theta_{13} = \lambda^{-1} \theta_2$ and $\theta_{23} = (\lambda^{-1}-2\lambda) \theta_1$.
-We then compute the curvature $2$-forms $\Omega_{ij} = d\theta_{ij} + \sum_k \theta_{ik} \wedge \theta_{kj}$. We obtain
-$\Omega_{12} = \lambda^{-2} \theta_1 \wedge \theta_2$, $\Omega_{13} = \lambda^{-2} \theta_1 \wedge \theta_3$ and $\Omega_{23} = (4-3\lambda^{-2}) \theta_2 \wedge \theta_3$.
-We then get that
-$\frac{1}{2} TP_1(\theta) = \frac{1}{2\pi^2}(-\lambda^{-4}+2\lambda^{-2}-2)Vol_{SO(3)}$, where $Vol_{SO(3)}$ denotes the volume form of the round $SO(3)$ (i.e. corresponding to $\lambda = 1$).
-Since the volume of the round $SO(3)$ is $\pi^2$, the formula for the Chern-Simons invariant of $g_\lambda$ now follows.
-Edit 1: I will provide more details. Using the formula (I think $6.1$ in that Chern-Simons paper):
-$$TP_1(\theta) = \frac{1}{4\pi^2}\left( \theta_{12}\wedge\theta_{13}\wedge\theta_{23} + \theta_{12}\wedge\Omega_{12} + \theta_{13}\wedge\Omega_{13} + \theta_{23}\wedge\Omega_{23}\right),$$
-we get that
-\begin{align*} \frac{1}{2} TP_1(\theta) = &\frac{1}{8 \pi^2} \left(-\lambda^{-1}\lambda^{-1}(\lambda^{-1}-2\lambda) \, \theta_3 \wedge \theta_2 \wedge \theta_1 \right. \\
-&-\lambda^{-1} \lambda^{-2} \, \theta_3 \wedge \theta_1 \wedge \theta_2 + \lambda^{-1} \lambda^{-2} \, \theta_2 \wedge \theta_1 \wedge \theta_3 \\
-&\left.\,+(\lambda^{-1} - 2\lambda)(4 - 3\lambda^{-2}) \, \theta_1 \wedge \theta_2 \wedge \theta_3\right) \\
-& = \frac{1}{2 \pi^2}(-\lambda^{-3} + 2\lambda^{-1} - 2\lambda) \, \theta_1 \wedge \theta_2 \wedge \theta_3
-\end{align*}
-But remember the formulas
-$\theta_1 = \lambda^{-1} \bar{\theta}_1$, $\theta_2 = \bar{\theta}_2$ and $\theta_3 = \bar{\theta}_3$.
-We thus pick up an extra factor of $\lambda^{-1}$ when written in terms of the standard volume form on $SO(3)$. This is how I obtained:
-$$\frac{1}{2} TP_1(\theta) = \frac{1}{2\pi^2}(-\lambda^{-4}+2\lambda^{-2}-2)\,Vol_{SO(3)}.$$
-Finally, since the volume of $SO(3)$ is $\pi^2$, we get that
-$$\Phi(g_{\lambda}) \equiv \frac{1}{2}(-\lambda^{-4} + 2 \lambda^{-2} -2) \equiv -\frac{1}{2} \lambda^{-4} + \lambda^{-2} \quad \text{(mod $\mathbb{Z}$)},$$
-which is what Chern and Simons wrote, up to a minor algebraic manipulation. Yes, it is a tricky calculation!
-Edit 2: I will explain how to obtain the connection $1$-forms $\theta_{ij}$. We should have
-$$d\theta_1 + \theta_{12} \wedge \theta_2 + \theta_{13} \wedge \theta_3 = 0.$$
-We have that $d\theta_1 + 2 \lambda^{-1} \theta_2 \wedge \theta_3 = 0$.
-It can be shown that $\theta_{12} = h \theta_3$. Basically, if $\theta_{12}$ had a non-zero $\theta_1$ component, then this would lead to a non-zero $\theta_1 \wedge \theta_2$ component of $d\theta_1$, which is a contradiction. It can similarly be shown that $\theta_{12}$ has no $\theta_2$ component either.
-A similar reasoning gives that:
-$$\theta_{12} = h\, \theta_3 \text{ , } \theta_{31} = g \,\theta_2 \text{ and } \theta_{23} = f \,\theta_1.$$
-We therefore have, using the first two equations in edit 2, that:
-$$-g - h = 2 \lambda^{-1}.$$
-Using similar equations for $d\theta_2$ and $d\theta_3$, we get that:
-$$-f - h = 2 \lambda,$$
-$$-f - g = 2 \lambda.
$$
-Solving these $3$ linear equations in $f, g$ and $h$ yields
-$$f = \lambda^{-1} - 2\lambda \text{ , } g = -\lambda^{-1} \text{ , } h = -\lambda^{-1}.$$
-Therefore, we have that
-$\theta_{12} = -\lambda^{-1} \theta_3$, $\theta_{13} = \lambda^{-1} \theta_2$ and $\theta_{23} = (\lambda^{-1}-2\lambda) \theta_1.$<|endoftext|>
-TITLE: Fibonacci addition law $F_{n+m} = F_{n-1}F_m + F_n F_{m+1}$
-QUESTION [24 upvotes]: Question: Let $F_n$ be the sequence of Fibonacci numbers, given by $F_0 = 0, F_1 = 1$ and $F_n = F_{n-1} + F_{n-2}$ for $n \geq 2$. Show for $n, m \in \mathbb{N}$: $$F_{n+m} = F_{n-1}F_m + F_n F_{m+1}$$
-My (very limited) attempt so far: after creating a small list of the values $F_0=0, F_1=1, F_2=1, F_3=2, F_4=3, F_5=5, F_6=8, F_7=13, F_8=21, F_9=34, F_{10}=55$ I can see that yes, it does seem to work; for instance $F_{6+3}=F_5 F_3 +F_6 F_4 = 10 +24 = 34 = F_9$. However, I really don't know where to begin in showing that this must hold in general terms. Should I be looking to use limits? Or perhaps induction? What is the best way to solve this?
-
-REPLY [5 votes]: There are several good answers already, but I thought I would add the following derivation because it is one of the few uses I know for the sum property of permanents; namely,
-If $A$, $B$, and $C$ are matrices with identical entries except that one row (column) of $C$, say the $k^{th}$, is the sum of the $k^{th}$ rows (columns) of $A$ and $B$, then $\text{ per } A + \text{ per } B = \text{per } C$.
-Start with the matrices $\begin{bmatrix} F_n & F_{n-1} \\ F_0 & F_1 \end{bmatrix}$ and $\begin{bmatrix} F_n & F_{n-1} \\ F_1 & F_2 \end{bmatrix}$. Since $F_0 = 0$ and $F_1 = F_2 = 1$, they have permanents $F_n$ and $F_n + F_{n-1} = F_{n+1}$, respectively. Applying the Fibonacci recurrence and the permanent sum property, we have $\text{ per } \begin{bmatrix} F_n & F_{n-1} \\ F_2 & F_3 \end{bmatrix} = F_{n+2}$. By continuing to construct new matrices whose second rows are the sums of the second rows of the previous two matrices, this process continues until we have $F_n F_{m+1} + F_{n-1}F_m = \text{ per} \begin{bmatrix} F_n & F_{n-1} \\ F_m & F_{m+1} \end{bmatrix} = F_{n+m}.$
-For more on this approach (but with determinants), see this paper I wrote a few years ago: "Fibonacci Identities via the Determinant Sum Property," The College Mathematics Journal, 37 (4): 286-289, 2006.<|endoftext|>
-TITLE: Create a trapping region for Lorenz Attractor
-QUESTION [5 upvotes]: I would like to show that the quantity:
-$-2\sigma\left(rx^{2}+y^{2}+b\left(z-r\right)^{2}-br^{2}\right)$
-is negative on the surface:
-$rx^{2}+\sigma y^{2}+\sigma\left(z-2r\right)^{2}=C$
-for some sufficiently large value of $C$.
-I was not able to massage the first quantity any more in order to make it look like the second. I also considered a change of coordinates, but had no luck.
-$\sigma, b, r$ are positive parameters.
-This is a step in exercise 9.2.2 from Strogatz, Nonlinear Dynamics and Chaos.
-
-REPLY [3 votes]: Since the parameter $\sigma$ is positive, the quantity
-$$
--2\sigma \left ( rx^2 + y^2 + b(z-r)^2 - br^2 \right )
-$$
-is negative if
-$$
-rx^2 + y^2 + b(z-r)^2 > br^{2}.
-$$
-This inequality defines the exterior of an ellipsoid (call it $E_1$); note that the size of this ellipsoid is fixed by the parameters (that's a hint).
-Now the equation
-$$
-rx^2 + \sigma y^2 + \sigma \left ( z - 2r \right )^2 = C
-$$
-defines a different ellipsoid, $E_2$, the size of which is determined by your choice of $C$ (another hint).
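-(A quick numerical sanity check, sketched in Python; the parameter values $\sigma=10$, $b=8/3$, $r=28$ and the choice $C=10^6$ are my own illustrative assumptions, not from the question. It samples points exactly on $E_2$ and verifies the quantity is negative there:
-
-    import numpy as np
-
-    sigma, b, r, C = 10.0, 8.0/3.0, 28.0, 1.0e6   # illustrative values only
-    rng = np.random.default_rng(0)
-
-    # random directions on the unit sphere, scaled onto the ellipsoid E2
-    u = rng.normal(size=(100_000, 3))
-    u /= np.linalg.norm(u, axis=1, keepdims=True)
-    x = np.sqrt(C / r) * u[:, 0]
-    y = np.sqrt(C / sigma) * u[:, 1]
-    z = 2 * r + np.sqrt(C / sigma) * u[:, 2]
-
-    q = -2 * sigma * (r * x**2 + y**2 + b * (z - r)**2 - b * r**2)
-    print(q.max())   # strictly negative at every sampled point
-
-For $C$ this large the maximum comes out around $-2 \times 10^6$, i.e. safely negative, which is consistent with $E_2$ enclosing $E_1$.)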
-At this point, remind yourself what it is that you want to show. Typically, the goal is to show that there exists a $C$ such that $E_2$ defines a trapping region for the Lorenz equations, in which case it suffices to show that $E_2$ can be made large enough so that it contains $E_1$. There's really no additional calculation necessary to do this -- you just need to understand what you've done so far.
-A related (but different) question is to find an explicit lower bound on $C$ in terms of the parameters. In this case, you can find bounds on each of $x$, $y$, and $z$ separately for points inside of $E_1$. This will then give you a bound on the quantity
-$$
-rx^2 + \sigma y^2 + \sigma \left ( z - 2r \right )^2
-$$
-which then defines $C$.<|endoftext|>
-TITLE: What is a simple loop?
-QUESTION [5 upvotes]: I'm asking here because no textbook or website that I know of gives a definition of the above-mentioned term.
-Since there's no obvious way (that I can think of) to define a normal subloop, I don't see how the definition of a simple group can be modified for the case of loops.
-So - what is the definition of a simple loop? (And what is the motivation behind the definition?)
-
-REPLY [3 votes]: Without referencing homomorphisms, a normal subloop $K$ is one such that, as sets:
-$$\begin{align}
-xK &= Kx \\
-(xy)K &= x(yK) \\
-K(xy) &= (Kx) y
-\end{align}$$
-Each of these can also be read as $\forall x, y, k \, \exists k'(x, y, k)$ such that $xk = k'x$, etc. The first equation is just the normality condition for groups. The second two assert that "next to" elements of the normal subloop, associativity holds up to choosing new elements from the normal subloop. This is enough to get Lagrange's theorem to hold, and for right and left cosets to be equal.<|endoftext|>
-TITLE: Is there an algebraic homomorphism between two Banach algebras which is not continuous?
-QUESTION [17 upvotes]: According to wikipedia, you need the Axiom of Choice to find a discontinuous linear map between two Banach spaces.
-Does this procedure also apply for Banach algebras, yielding a discontinuous multiplicative linear map?
-Or, is there some obstruction ensuring that every algebraic homomorphism between two Banach algebras is continuous? (I know that this is true for $*$-homomorphisms between C*-algebras.)
-
-REPLY [3 votes]: Let $X$ be a Banach space and denote by $B(X)$ the Banach algebra of all bounded linear operators on $X$. It is well-known that if $X$ is isomorphic to $X\oplus X$, then any homomorphism from $B(X)$ into any Banach algebra is continuous. However, Charles Read constructed an example of a space $X_R$ such that $B(X_R)$ admits a discontinuous homomorphism.
-
-C. J. Read, Discontinuous derivations on the algebra of bounded operators on a Banach space, J. London Math. Soc. 40 (1989), 305–326.
-
-A recent PhD thesis by Skillicorn contains a synthesis of the subject (in particular, Lemma 1.3.3 therein gives a description of such a homomorphism).
-
-R. Skillicorn, Discontinuous Homomorphisms from Banach Algebras of Operators, PhD thesis, Lancaster 2016.
-
-You may also like the paper
-
-R. Skillicorn, The uniqueness-of-norm problem for Calkin algebras, Math. Proc. R. Ir. Acad. 115 A (2015), 145–152.
-
-which describes an example of a discontinuous homomorphism on $B(X) / K(X)$, the Calkin algebra of $X$.<|endoftext|>
-TITLE: Intuition for uniform continuity of a function on $\mathbb{R}$
-QUESTION [24 upvotes]: I understand the formal definition of uniform continuity of a function, and how it is different from standard continuity.
-My question is: Is there an intuitive way to classify a function on $\mathbb{R}$ as being uniformly continuous, just like there is for regular continuity?
-I mean that for a "nice" function $f:\mathbb{R} \to \mathbb{R}$, it is usually easy to tell if it is continuous on an interval by looking at or thinking of the graph of the function on the interval, or on all of $\mathbb{R}$. Can I do the same for uniform continuity? Can I tell that a function is uniformly continuous just by what it looks like? Ideally, this intuition would fit with the Heine-Cantor theorem for compact sets on $\mathbb{R}$.
-
-REPLY [17 votes]: I like to think of the following fact: a $C^1$ function on $\mathbb{R}$ with bounded derivative is uniformly continuous. So in order for a function not to be uniformly continuous, there have to be places where its graph is "arbitrarily steep".
-Because of the theorem that any continuous function on a compact set is uniformly continuous, this can only happen if you make the function steeper and steeper as you go off to $\infty$, either unboundedly (like $x^2$) or boundedly (like $\sin(x^2)$).
-Edit: As pointed out, the converse is false: a function with unbounded derivative can still be uniformly continuous. $\sin(x^4)/(1+x^2)$ is such an example, I believe. So this may not be great intuition after all.
-
-REPLY [11 votes]: This is slightly tricky because your intuitive picture for "continuous" probably more closely matches "uniformly continuous." That is, if you can actually draw a graph without lifting your pencil then it's uniformly continuous. In order to get something continuous but not uniformly continuous you have to do something that you can't actually draw, like going off to infinity on an open interval or oscillating really wildly (as explained in the other two answers).
-
-REPLY [10 votes]: One possible intuition for uniform continuity is that a function shouldn't oscillate arbitrarily wildly. If the function is differentiable, e.g. on an open interval, then the derivative should be bounded in absolute value. That's something that you can see by looking at the graph. For example, if you consider the function $\sin(1/x)$ on the open interval $(0,1)$, then just by looking at it, you will realise that it's not uniformly continuous, since its slope becomes arbitrarily steep as you approach $0$ from the right.
-The way this intuition ties in with the Heine-Cantor theorem is that on compact sets, continuous functions are bounded and attain their extrema. If your function is continuously differentiable, then this is in particular true for its derivative, so the slope of the function cannot get arbitrarily steep on a compact interval.<|endoftext|>
-TITLE: Combinatorial Group Theory / Lyndon & Schupp, Lemma 4.7. Automorphisms of free groups
-QUESTION [8 upvotes]: I'm trying to understand a proof in "Combinatorial Group Theory" by Lyndon & Schupp, page 26, Lemma 4.7.
-Here's my problem:
-$F$ is a free group with $X$ as basis. $x \in X$. $\alpha \in IA(F)$ - meaning it's an automorphism of $F$ which induces the identity automorphism on $F/[F,F]$. $k \in F_n$ - a term in the descending central series for $F$. $x\alpha$ and $xk$ are conjugates.
-In addition: for some $N\geq1$, $\alpha^N \in JA(F)$ - the group of inner automorphisms of $F$.
-"Elementary Fact" used in the proof: $\alpha \in IA(F)$ and $k \in F_n$ implies $k \alpha = k$ (modulo $F_{n+1}$).
-Right after mentioning this "elementary fact" they say it implies: $x\alpha^N = xk^N$ (modulo $F_{n+1}$). Why is that?
-Edit: A little clarification: $x\alpha$ is the image of $x$ under the automorphism $\alpha$, following the notation in this book. $xk$ is just $x$ times $k$, multiplication in the group $F$.
-
-REPLY [7 votes]: For the first assertion, I will work with $F_n$ being the $n$th term of the lower central series; but the argument should be about the same for any central series.
-We show that if $\alpha$ is the identity on $F/[F,F]$, then it induces the identity on $F_n/F_{n+1}$ for all $n$. The proof is by induction.
-The assertion for $n=1$ is the original assumption. Assume now that the result holds for $k$, so that if $w\in F_k$, then $w\alpha\equiv w\pmod{F_{k+1}}$. We want to show that the result holds for $k+1$. The generators of $F_{k+1}$ are contained in the set $[F,F_k]$, so let $u\in F$ and $w\in F_k$; if we prove the result for $[u,w]$, then it will follow for $F_{k+1}$.
-We will use the following commutator identity: in any group, for all $x,y,z,t$, we have:
-$$[xy,zt] = [x,t]^y[y,t][x,z]^{yt}[y,z]^t,$$
-where $a^b = b^{-1}ab$ is conjugation.
-Now, since $u\alpha\equiv u\pmod{F_2}$, then $u\alpha=uc$ with $c\in F_2$. And since $w\alpha\equiv w\pmod{F_{k+1}}$, then $w\alpha = wd$ with $d\in F_{k+1}$. Therefore,
-$$[u,w]\alpha = [u\alpha,w\alpha]= [uc,wd]
-= [u,d]^{c}[c,d][u,w]^{cd}[c,w]^{d} \equiv [u,w]\pmod{F_{k+2}},$$
-because $[u,d]\in[F_1,F_{k+1}]$, $[c,d]\in[F_2,F_{k+1}]$, $[c,w]\in[F_2,F_k]$, and $[u,w]^{cd}=[u,w][[u,w],cd]\equiv [u,w]\pmod{F_{k+2}}$.
-Therefore, $\alpha$ acts as the identity on $F_{k+1}/F_{k+2}$, as desired.
-(Alternatively, the basic commutators of weight $k$ give a basis for the free abelian group $F_{k}/F_{k+1}$, and it is an easy matter to check that if $\alpha$ induces the identity on $F/F_{2}$, then it maps a basic commutator of weight $c$ to the product of itself times commutators of weight greater than $c$).
-I'm still working on the second assertion, but for instance, for $n=1$, we have that $x\alpha= xk$ for some $k\in F_2$, hence $x\alpha\equiv xk\pmod{F_3}$, and since $k\alpha\equiv k\pmod{F_3}$, you get that $x\alpha^2 \equiv x\alpha k \equiv x k^2\pmod{F_3}$, and so inductively $x\alpha^N \equiv x k^{N}\pmod{F_3}$.
-Added: I double-checked the book on my way out of the office and to the gym, and I noticed that the book did not have $x\alpha^N\equiv xk^N\pmod{F_{n+1}}$, but only $x\alpha^N\equiv xk^N$. I can show that under the given assumptions, $x\alpha^N$ is conjugate, modulo $F_{n+1}$, to $xk^N$, and that should be enough for the purposes of the argument, since the next step of the proof is to note that since $\alpha^N\in JA(F)$, then this means that $uxu^{-1}\equiv xk^N\pmod{F_{n+1}}$ for some $u$, and this will also hold if all you have is that $x\alpha^N$ is conjugate to $xk^N$ modulo $F_{n+1}$.
-To show this weaker claim, I proceed by induction on $m$. We know that $x\alpha$ is conjugate to $xk$, hence $x\alpha \equiv (xk)^u\pmod{F_{n+1}}$ for some $u$. Assume that $x\alpha^m\equiv (xk^m)^v\pmod{F_{n+1}}$ for some $v\in F$, where $a^b = b^{-1}ab$ is just conjugation.
Applying $\alpha$ on both sides, we have $x\alpha^{m+1}$ on the left hand side, and
-\begin{align*}
-(xk^m)^v\alpha &= ((x\alpha)(k^m\alpha))^{v\alpha}\\
-& \equiv \bigl((xk)^u k^m\bigr)^{v\alpha}\pmod{F_{n+1}}\\
-&\equiv \bigl( (xk^{m+1})^u\bigr)^{v\alpha}\pmod{F_{n+1}} &\text{(as $k$ is central in $F/F_{n+1}$)}\\
-&\equiv (xk^{m+1})^{u(v\alpha)}\pmod{F_{n+1}}.
-\end{align*}
-Therefore, $x\alpha^{m+1}\equiv (xk^{m+1})^w\pmod{F_{n+1}}$ for some $w\in F$; that is, $x\alpha^{m+1}$ is conjugate, modulo $F_{n+1}$, to $xk^{m+1}$. In particular, $x\alpha^N$ is conjugate, modulo $F_{n+1}$, to $xk^{N}$, and since $x$ is conjugate to $x\alpha^N$, then $x$ is conjugate to $xk^N$ modulo $F_{n+1}$. Now I think the argument given in the book can be continued from this point.
-
-
-So, continuing on with the proof of the proposition. We know that $x\alpha^N$ is conjugate to $xk^N$ modulo $F_{n+1}$, and that $\alpha^N$ is equal to conjugation by some element. So there exists $u$ such that $x\alpha^N = u^{-1}xu = x[x,u]$ is equal, modulo $F_{n+1}$, to $xk^N$, and therefore $[x,u]\equiv k^N \pmod{F_{n+1}}$ by cancelling $x$.
-Now, $F_n/F_{n+1}$ is a finitely generated free abelian group generated by the basic commutators; $k\in F_n$, so we can write $k$ as a product of basic commutators, and the same is true for $[x,u]$. Because $[x,u]$ is an $N$th power, commutator calculus tells us that in fact $u$ must be an $N$th power, so that in fact we can find a $v$ such that $[x,v]\equiv k \pmod{F_{n+1}}$ (essentially, every commutator that shows up is an $N$th power, so dividing the exponent by $N$ we can construct an appropriate $v$). So now we have $[x,v]\equiv k\pmod{F_{n+1}}$.
-Now: $x\alpha$ is conjugate to $xk$, so $v^{-1}x\alpha v$ is conjugate to $vxkv^{-1}$ modulo $F_{n+1}$. But $vxkv^{-1} \equiv vxv^{-1}k\pmod{F_{n+1}}$ (since $k$ is central in $F/F_{n+1}$), so
-$$vxkv^{-1} \equiv vxv^{-1}k \equiv x[x,v^{-1}]k \equiv x[x,v]^{-1}k \equiv xk^{-1}k\pmod{F_{n+1}}$$
-(the next-to-last congruence by using commutator identities, since we are in the lowest term of the lower central series of $F/F_{n+1}$). In summary, $x\alpha$ is conjugate to $x$ modulo $F_{n+1}$. That means that there exists some $k'\in F_{n+1}$ and some $w\in F$ such that $w^{-1}(x\alpha)w\equiv x \pmod{F_{n+1}}$. But this means that $w^{-1}(x\alpha)w =xk'$ for some $k'\in F_{n+1}$, which is precisely what we wanted to prove to complete the inductive step.<|endoftext|>
-TITLE: Size of the closure of a set
-QUESTION [23 upvotes]: Why is it that in a Hausdorff sequentially compact space, the size of the closure of a countable subset is less than or equal to $c$? I can see why this is true when the space is first countable, but we are not assuming so.
-
-REPLY [4 votes]: Here is some progress in the positive direction.
-Theorem. In any Hausdorff space, the closure of a countable set has size at most $2^{\mathfrak{c}}$, where $\mathfrak{c}$ is the continuum.
-Proof. Suppose that $X$ is a Hausdorff topological space with a countable set $D$. For any point $a$ in the closure $\bar D$, we may consider the collection of open sets containing $a$, and their trace on $D$. That is, consider $F_a=\{U\cap D\mid a\in U\text{ open }\}$. Since there are only continuum many subsets of $D$, there are therefore at most $2^{\mathfrak{c}}$ many possible such families.
-If the closure $\bar D$ had size larger than $2^{\mathfrak{c}}$, then there would be at least two (in fact many) distinct points $a$ and $b$ in $\bar D$ for which $F_a=F_b$.
Let $U$ and $V$ be disjoint neighborhoods of $a$ and $b$. Let $U_1$ be another neighborhood of $a$ such that $U_1\cap D=V\cap D$, which must exist since $F_a=F_b$. Thus, $U\cap U_1$ is a neighborhood of $a$ that is disjoint from $D$, contradicting $a\in\bar D$. QED
-The bound is sharp, even for compact Hausdorff spaces, in the sense that the Stone–Čech compactification $\beta\mathbb{N}$ is a Hausdorff topological space of size $2^c$ with a countable dense set. But $\beta\mathbb{N}$ is not sequentially compact, so this is not actually a counterexample to your question.
-What the argument actually shows is that if $a$ and $b$ are in the closure of the countable set $D$, then $F_b$ is not a subset of $F_a$. If it were, we could find disjoint neighborhoods $U$ and $V$ of $a$ and $b$, respectively, and then $V\cap D\in F_b$, and so there is a neighborhood $U_1$ of $a$ with $U_1\cap D=V\cap D$, making $U\cap U_1$ a neighborhood of $a$ having no points from $D$, a contradiction. By symmetry, we conclude $F_a$ and $F_b$ are incomparable with respect to $\subset$ for all $a,b\in\bar D$.
-I suspect that such a line of reasoning could be improved when there is sequential compactness, perhaps by using a cardinal characteristic, such as the splitting number.<|endoftext|>
-TITLE: $\gcd(b^x - 1, b^y - 1, b^ z- 1,\dots) = b^{\gcd(x, y, z,\dots)} -1$
-QUESTION [19 upvotes]: Possible Duplicate: Prove that $\gcd(a^n - 1, a^m - 1) = a^{\gcd(n, m)} - 1$
-
-$b$, $x$, $y$, $z$, $\ldots$ are integers greater than 1. How can we prove that
-$$
-\gcd (b ^ x - 1, b ^ y - 1, b ^ z - 1 ,\dots)= b ^ {\gcd (x, y, z, \dots)} - 1\ ?
-$$
-
-REPLY [12 votes]: Hint $ $ By below $\, a^M\!-\!1,\:a^N\!-\!1\ $ and $\, a^{(M,N)}\!-\!1\ $ have the same set $\,S$ of common divisors $\,d,\, $ so they have the same greatest common divisor $\ (= \max\ S),\,$ i.e. using $\,\rm\color{#90f}{R\! = }$mod order reduction
-$$\begin{eqnarray}\ {\rm mod}\:\ d\!:\ a^M\!\equiv 1\equiv a^N&\!\iff\!& {\rm ord}(a)\ |\ M,N\!\color{#c00}\iff\! {\rm ord}(a)\ |\ (M,N)\!\!\!\overset{\rm\color{#90f}R\!\!}\iff\! \color{#0a0}{a^{(M,N)}\!\equiv 1}\\[.3em]
- {\rm i.e.}\ \ \ d\ |\ a^M\!-\!1,\:a^N\!-\!1\! &\!\iff\!\!&\ d\ |\ \color{#0a0}{a^{(M,N)}\!-\!1},\qquad\,\ {\rm where} \quad\! (M,N)\, :=\, \gcd(M,N)
-\end{eqnarray}\ \ \ \ \ $$
-Note $ $ We used the GCD universal property $\ a\mid b,c \color{#c00}\iff a\mid (b,c)\ $ [which is the definition of a gcd in more general rings]. $ $ Compare that with $\ a
-TITLE: Limit of $S(n) = \sum_{k=1}^{\infty} \left(1 - \prod_{j=1}^{n-1}\left(1-\frac{j}{2^k}\right)\right)$
-QUESTION [5 upvotes]: This is a follow-up of Upper bound binomial sum
-I was working on the problem in the above thread and noticed an interesting thing. I wanted to try and improve the bound Derek had (which was a quadratic in $n$).
-If we reformulate the problem as Derek has (since for this we need $2^k \geq n$, we forget the original problem and take the problem given to be as follows), i.e.
-Let $S(n) = \displaystyle \sum_{k=1}^{\infty} \left(1 - \prod_{j=1}^{n-1}\left(1-\frac{j}{2^k}\right)\right)$.
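-(For the record, the values below can be reproduced numerically by truncating the series; a quick Python sketch, where the cutoff $K$ is an arbitrary choice of mine, large enough that the geometrically decaying tail is negligible:
-
-    def S(n, K=200):
-        total = 0.0
-        for k in range(1, K + 1):
-            prod = 1.0
-            for j in range(1, n):
-                prod *= 1.0 - j / 2.0**k   # terms with 2^k <= n-1 hit a zero factor
-            total += 1.0 - prod
-        return total
-
-    print(S(4))   # 3.190476..., i.e. 67/21
-
-The output matches the exact values listed next.)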
-We see that
-$S(1) = 0$
-$S(2) = 1$
-$S(3) = \frac{7}{3} \approx 2.3333$
-$S(4) = \frac{67}{21} \approx 3.1904$
-$S(5) = \frac{407}{105} \approx 3.8762$
-$S(6) = \frac{4789}{1085} \approx 4.4138$
-$S(7) = \frac{5289}{1085} \approx 4.8747$
-$S(8) = \frac{726093}{137795} \approx 5.2694$
-$S(9) = \frac{118399669}{21082635} \approx 5.61598$
-$S(10) = \frac{9120486643}{1539032355} \approx 5.92612$
-$S(11) = \frac{105065594573}{16929355905} \approx 6.20612$
-$S(12) = \frac{31986101239583}{4950627362505} \approx 6.4610$
-We see that the growth is slow as $n$ increases. My question is whether this converges to some limit or diverges. I have been working on it for the past couple of hours but am unable to come to a conclusion.
-So the question is what is $\displaystyle \lim_{n \rightarrow \infty} S(n)$?
-If it converges, can we find the limit?
-or
-If it diverges, how fast does it diverge?
-
-$\textbf{EDIT:}$
-So Mike has proved that $S(n)$ in fact diverges.
-The conjecture is now that $\lim_{n \rightarrow \infty} (2 \log_{2}(n) - S(n)) = \alpha$ where $\alpha \approx 0.667$.
-Look at Limit of $S(n) = \sum_{k=1}^{\infty} \left(1 - \prod_{j=1}^{n-1}\left(1-\frac{j}{2^k}\right)\right)$ - Part II for further discussions.
-
-REPLY [4 votes]: $S(n)$ diverges at a rate at least as large as $\log_2 n$.
-Suppose $n > 2^k$. Then, for some $1 \leq j \leq n-1$, $j = 2^k$. Thus
-$$\prod_{j=1}^{n-1}\left(1-\frac{j}{2^k}\right) = 0.$$
-Therefore,
-$$S(n) = \sum_{k=1}^{\infty} \left(1 - \prod_{j=1}^{n-1}\left(1-\frac{j}{2^k}\right)\right) > \sum_{k=1}^{\log_2 (n-1)} \left(1 - \prod_{j=1}^{n-1}\left(1-\frac{j}{2^k}\right)\right) = \sum_{k=1}^{\log_2 (n-1)} 1 = \lfloor \log_2 (n-1) \rfloor.$$
-
-As far as an upper bound, the following graph is of $2 \log_2 n - S(n)$ for $n \leq 300$. I conjecture that there exists some $\alpha \approx \frac{2}{3}$ such that $S(n) \leq 2 \log_2 n - \alpha$ for $n \geq 2$ and that $\lim_{n \to \infty} (2 \log_2 n - S(n)) = \alpha$.
-
-(More numerical evidence: The value of $2 \log_2 n - S(n)$ is, for $n = 1000$, $2000$, and $3000$, respectively, $0.667734$, $0.667494$, and $0.667413$.)<|endoftext|>
-TITLE: When is $\frac1{n}\sin(nt)=\frac1{n+2}\sin((n+2)t)$?
-QUESTION [5 upvotes]: I'm trying to classify (at least as fully as possible) the values of $t$ in $(0,\pi/2)$ for which the following equation has a solution for some natural number $n$.
-$$\frac1{n}\sin(nt)=\frac1{n+2}\sin((n+2)t)$$
-Does anyone know of any results that will help me?
-
-REPLY [2 votes]: EDIT: See J.M.'s comment to Rahul's answer: it's better to use the "other" Chebyshev polynomials.
-Square your identity: $$(n+2)^2 \sin^2 nt = n^2 \sin^2 (n+2)t.$$ Convert from sines to cosines: $$(n+2)^2 + n^2 \cos^2 (n+2)t = n^2 + (n+2)^2 \cos^2 nt.$$ Now let $x = \cos t$ to get $$(n+2)^2 + n^2 T_{n+2}(x)^2 = n^2 + (n+2)^2 T_n(x)^2,$$ where the $T_m$ are the Chebyshev polynomials. We get a polynomial equation of degree $2(n+2)$ in $x$ which you can solve to find all solutions (you will need to check that the signs of the sines match).<|endoftext|>
-TITLE: A proof system for GCD?
-QUESTION [5 upvotes]: As is well known, if $x,y$ are natural numbers then their GCD $(x,y)$ has a representation $(x,y) = ax + by$ where $a,b$ are integers. Now let's prove that if $(x,y)=1$ then $(xy,x+y) = 1$. Starting from $ax+by=1$, $$1 = (ax+by)^2 = a^2x^2 + b^2y^2 + 2abxy = (a^2x+b^2y)(x+y) - (a-b)^2xy.$$ The reverse implication is trivial in this method.
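-(A quick symbolic check of that expansion, sketched with sympy, though any CAS works:
-
-    from sympy import expand, symbols
-
-    a, b, x, y = symbols('a b x y')
-    lhs = (a*x + b*y)**2
-    rhs = (a**2*x + b**2*y)*(x + y) - (a - b)**2*x*y
-    print(expand(lhs - rhs))   # prints 0
-
-which confirms the rearrangement.)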
The proof can be generalized to show that if $(x,y,z)=1$ then $(xyz,xy+xz+yz,x+y+z)=1$ and vice versa.
-Can this idea be molded into a reasonably strong formal proof system for some fragment of number theory?
-We're competing with divisibility proofs, which we'd like to forbid somehow. For example, the original statement can be proven as follows. Suppose $p$ is a prime dividing $(xy,x+y)$. Since $p|xy$ and $p$ is prime, either $p|x$ or $p|y$. From $p|x+y$ we get that $p|(x,y)$.
-
-REPLY [7 votes]: First, work with ideals, which provides much more power and transparency. For example, in terms of ideals your first proof is simply:
-$\rm\quad\quad (x,y) \ \supset\ (xy,\ x+y)\ \supset\ (x,y)^2\ \ $ therefore $\rm\ \ (xy,\ x+y)\ =\ 1\ \iff\ (x,y)\ =\ 1$
-Notice how employing ideals has eliminated obfuscatory information such as the extraneous coefficients $\ a,\:b\ $ in the original proof. That done, the innate structure becomes much clearer.
-Second, there are various generalizations of the Gröbner basis algorithm over Euclidean domains. Whether or not they will suffice for your application is hard to say without knowing further details.<|endoftext|>
-TITLE: Mathematical difference between white and black notes in a piano
-QUESTION [575 upvotes]: The division of the chromatic scale into $7$ natural notes (white keys in a piano) and $5$ accidental ones (black) seems a bit arbitrary to me.
-Apparently, adjacent notes in a piano (including white or black) are always separated by a semitone. Why the distinction, then? Why not just have scales with $12$ notes? (apparently there's a musical scale called Swara that does just that)
-I've asked several musician friends, but they lack the math skills to give me a valid answer. "Notes are like that because they are like that."
-I need some mathematician with musical knowledge (or a musician with mathematical knowledge) to help me out with this.
-Mathematically, is there any difference between white and black notes, or do we make the distinction for historical reasons only?
-
-REPLY [16 votes]: Many, many answers to this one already, but, in the framework of Pythagorean tuning, there actually is a clear mathematical distinction between black keys and white keys that has not yet, I think, been explicitly stated.
-
-The division of the chromatic scale into $7$ natural notes (white keys in a piano) and $5$ accidental ones (black) seems a bit arbitrary to me.
-Apparently, adjacent notes in a piano (including white or black) are always separated by a semitone. Why the distinction, then?
-
-In equal temperament, the ratio of the frequencies of two pitches separated by one semitone is $\sqrt[12]{2}$, no matter what the pitches are. But in other tunings, the ratio cannot be kept equal. In Pythagorean tuning, which tries to make fifths perfect as far as possible, there are two different types of semitone: a wider semitone when the higher pitch is a black key, and a narrower semitone when the higher pitch is a white key. Hence, in Pythagorean tuning at least, there is a clear mathematical distinction between white keys and black keys.
-Of course, which notes are white keys and which are black keys depends on which note is used to start building the scale. Starting from $F$ produces the traditional names for the keys; the short computational sketch below previews the numbers that the rest of this answer derives step by step.
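-Here is that sketch in Python (my own illustration, not part of the original paper or discussion; only the note names and the starting pitch $F$ are taken from the construction below). It stacks eleven perfect fifths on $F$, folds every pitch back into one octave, and measures the twelve resulting semitones in cents:
-
-    from math import log2
-
-    names = ['F', 'C', 'G', 'D', 'A', 'E', 'B',
-             'F#', 'C#', 'G#', 'D#', 'A#']   # ascending fifths from F
-    freq, f = {}, 1.0
-    for name in names:
-        freq[name] = f
-        f *= 1.5                  # up a perfect fifth (ratio 3/2)
-        while f >= 2.0:           # fold back into the octave [1, 2)
-            f /= 2.0
-
-    scale = sorted(freq.items(), key=lambda kv: kv[1])
-    scale.append(('F', 2.0))      # close the octave
-    for (lo, f1), (hi, f2) in zip(scale, scale[1:]):
-        print(f'{lo:>2} -> {hi:>2}: {1200 * log2(f2 / f1):6.1f} cents')
-
-Seven of the steps come out to about $90.2$ cents, each ending on a white key, and five to about $113.7$ cents, each ending on a black key: exactly the two semitones described in what follows.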
-To see how this works, start from $F$ and generate ascending fifths, -$$ -F,\ C,\ G,\ D,\ A,\ E,\ B,\ F\sharp,\ C\sharp,\ G\sharp,\ D\sharp,\ A\sharp, -$$ -with frequencies in exact $\frac{3}{2}$ ratios (dividing by $2$ as needed to keep all pitches within an octave of the starting $F$). You find that you cannot add the $13^\text{th}$ note, $E\sharp$, without coming awfully close to the base note, $F$. The separation between $F$ and $E\sharp$ is called the Pythagorean comma, and is roughly a quarter of a semitone. So if you stop with $A\sharp$, you have divided the octave into $12$ semitones, which you discover are not all the same. Five of the $12$ semitones are slightly wider than the other seven. These two distinct semitones are called the Pythagorean diatonic semitone, with a frequency ratio of $\frac{256}{243}$ or about $90.2$ cents, and the Pythagorean chromatic semitone, with a frequency ratio of $\frac{2187}{2048}$ or about $113.7$ cents. (In equal temperament, a semitone is exactly $100$ cents. The number of cents separating $f_1$ and $f_2$ is defined to be $1200\log_2f_2/f_1$.) The Pythagorean diatonic semitone and the Pythagorean chromatic semitone differ from each other by a Pythagorean comma (about $23.5$ cents). -You find that the semitone ending at $F$, that is, the interval between $E$ and $F$, is a diatonic semitone, whereas the semitone ending at $F\sharp$, that is, the semitone between $F$ and $F\sharp$, is a chromatic semitone. The other diatonic semitones end at $G$, $A$, $B$, $C$, $D$, and $E$, while the other chromatic semitones end at $G\sharp$, $A\sharp$, $C\sharp$, and $D\sharp$. -Some things to note: - -If you start with a note other than $F$, the diatonic and chromatic semitones will be situated differently, but you will always end up with seven diatonic ones and five chromatic semitones, with the chromatic semitones appearing in a group of three and a group of two as in the traditional keyboard layout. -A great many tuning systems have been devised, which play with the definitions of the semitones or introduce new ones. It is only in equal temperament that the distinction between the two semitones is completely erased. - -Some additional detail: starting from the octave, one can progressively subdivide larger intervals into smaller ones by adding notes from the progression of fifths. At the initial stage you have the octave. -$$ -\begin{array}{c|c|c|c} -\text{note} & \text{freq.} & \text{ratio to prev.} & \text{ratio in cents}\\ -\hline -F & 1 & \\ -F & 2 & 2 & 1200 -\end{array} -$$ -Interpolating a note a fifth higher than $F$ divides the octave into two unequal intervals, a fifth and a fourth. (Added notes will be shown in red.) -$$ -\begin{array}{c|c|c|c} -\text{note} & \text{freq.} & \text{ratio to prev.} & \text{ratio in cents}\\ -\hline -F & 1 & \\ -\color{red}{C} & \color{red}{\frac{3}{2}} & \color{red}{\frac{3}{2}} & \color{red}{702.0}\\ -F & 2 & \frac{4}{3} & 498.0 -\end{array} -$$ -Adding a third note, the note a fifth above $C$, splits the fifth into a whole tone (ratio $\frac{9}{8})$ and a fourth. -$$ -\begin{array}{c|c|c|c} -\text{note} & \text{freq.} & \text{ratio to prev.} & \text{ratio in cents}\\ -\hline -F & 1 & \\ -\color{red}{G} & \color{red}{\frac{9}{8}} & \color{red}{\frac{9}{8}} & \color{red}{203.9}\\ -C & \frac{3}{2} & \frac{4}{3} & 498.0\\ -F & 2 & \frac{4}{3} & 498.0 -\end{array} -$$ -Two more additions split the fourths and produce the pentatonic scale, which is built of whole tones and minor thirds. 
-$$ -\begin{array}{c|c|c|c} -\text{note} & \text{freq.} & \text{ratio to prev.} & \text{ratio in cents}\\ -\hline -F & 1 & \\ -G & \frac{9}{8} & \frac{9}{8} & 203.9\\ -\color{red}{A} & \color{red}{\frac{81}{64}} & \color{red}{\frac{9}{8}} & \color{red}{203.9}\\ -C & \frac{3}{2} & \frac{32}{27} & 294.1\\ -\color{red}{D} & \color{red}{\frac{27}{16}} & \color{red}{\frac{9}{8}} & \color{red}{203.9}\\ -F & 2 & \frac{32}{27} & 294.1 -\end{array} -$$ -We may split each of the minor thirds into a whole tone and a (diatonic) semitone, which produces the diatonic scale. -$$ -\begin{array}{c|c|c|c} -\text{note} & \text{freq.} & \text{ratio to prev.} & \text{ratio in cents}\\ -\hline -F & 1 & \\ -G & \frac{9}{8} & \frac{9}{8} & 203.9\\ -A & \frac{81}{64} & \frac{9}{8} & 203.9\\ -\color{red}{B} & \color{red}{\frac{729}{512}} & \color{red}{\frac{9}{8}} & \color{red}{203.9}\\ -C & \frac{3}{2} & \frac{256}{243} & 90.2\\ -D & \frac{27}{16} & \frac{9}{8} & 203.9\\ -\color{red}{E} & \color{red}{\frac{243}{128}} & \color{red}{\frac{9}{8}} & \color{red}{203.9}\\ -F & 2 & \frac{256}{243} & 90.2 -\end{array} -$$ -Adding five more fifths splits each of the five whole tones into a chromatic semitone and a diatonic semitone to produce the chromatic scale. -$$ -\begin{array}{c|c|c|c} -\text{note} & \text{freq.} & \text{ratio to prev.} & \text{ratio in cents}\\ -\hline -F & 1 & \\ -\color{red}{F\sharp} & \color{red}{\frac{2187}{2048}} & \color{red}{\frac{2187}{2048}} & \color{red}{113.7}\\ -G & \frac{9}{8} & \frac{256}{243} & 90.2\\ -\color{red}{G\sharp} & \color{red}{\frac{19683}{16384}} & \color{red}{\frac{2187}{2048}} & \color{red}{113.7}\\ -A & \frac{81}{64} & \frac{256}{243} & 90.2\\ -\color{red}{A\sharp} & \color{red}{\frac{177147}{131072}} & \color{red}{\frac{2187}{2048}} & \color{red}{113.7}\\ -B & \frac{729}{512} & \frac{256}{243} & 90.2\\ -C & \frac{3}{2} & \frac{256}{243} & 90.2\\ -\color{red}{C\sharp} & \color{red}{\frac{6561}{4096}} & \color{red}{\frac{2187}{2048}} & \color{red}{113.7}\\ -D & \frac{27}{16} & \frac{256}{243} & 90.2\\ -\color{red}{D\sharp} & \color{red}{\frac{59049}{32768}} & \color{red}{\frac{2187}{2048}} & \color{red}{113.7}\\ -E & \frac{243}{128} & \frac{256}{243} & 90.2\\ -F & 2 & \frac{256}{243} & 90.2 -\end{array} -$$ -There is no fundamental reason to stop here. Adding five more fifths creates a $17$-note scale by dividing each of the wider chromatic semitones into a new small interval, the Pythagorean comma (frequency ratio $531441/524288=3^{12}/2^{19}$ or about $23.5$ cents), and a diatonic semitone. We call the new notes $E\sharp$, $B\sharp$, $F\sharp\sharp$, $C\sharp\sharp$, $G\sharp\sharp$. Note that $E\sharp$ is a Pythagorean comma higher than its enharmonic equivalent $F$, $B\sharp$ is a Pythagorean comma higher than its enharmonic equivalent $C$, $F\sharp\sharp$ is a Pythagorean comma higher than its enharmonic equivalent $G$, and so on. 
-$$ -\begin{array}{c|c|c|c} -\text{note} & \text{freq.} & \text{ratio to prev.} & \text{ratio in cents}\\ -\hline -F & 1 & & \\ -\color{red}{E\sharp} & \color{red}{\frac{531441}{524288}} & \color{red}{\frac{531441}{524288}} & \color{red}{23.5}\\ -F\sharp & \frac{2187}{2048} & \frac{256}{243} & 90.2\\ -G & \frac{9}{8} & \frac{256}{243} & 90.2\\ -\color{red}{F\sharp\sharp} & \color{red}{\frac{4782969}{4194304}} & \color{red}{\frac{531441}{524288}} & \color{red}{23.5}\\ -G\sharp & \frac{19683}{16384} & \frac{256}{243} & 90.2\\ -A & \frac{81}{64} & \frac{256}{243} & 90.2\\ -\color{red}{G\sharp\sharp} & \color{red}{\frac{43046721}{33554432}} & \color{red}{\frac{531441}{524288}} & \color{red}{23.5}\\ -A\sharp & \frac{177147}{131072} & \frac{256}{243} & 90.2\\ -B & \frac{729}{512} & \frac{256}{243} & 90.2\\ -C & \frac{3}{2} & \frac{256}{243} & 90.2\\ -\color{red}{B\sharp} & \color{red}{\frac{1594323}{1048576}} & \color{red}{\frac{531441}{524288}} & \color{red}{23.5}\\ -C\sharp & \frac{6561}{4096} & \frac{256}{243} & 90.2\\ -D & \frac{27}{16} & \frac{256}{243} & 90.2\\ -\color{red}{C\sharp\sharp} & \color{red}{\frac{14348907}{8388608}} & \color{red}{\frac{531441}{524288}} & \color{red}{23.5}\\ -D\sharp & \frac{59049}{32768} & \frac{256}{243} & 90.2\\ -E & \frac{243}{128} & \frac{256}{243} & 90.2\\ -F & 2 & \frac{256}{243} & 90.2 -\end{array} -$$ -In the next few iterations, - -$12$ fifths are added, shaving a Pythagorean comma off of each diatonic semitone, thereby producing a $29$-note scale with $17$ Pythagorean commas ($23.5$ cents) and $12$ intervals of $66.8$ cents; -$12$ more fifths are added, shaving a Pythagorean comma off of each $66.8$ cent interval, thereby producing a $41$-note scale with $29$ Pythagorean commas ($23.5$ cents) and $12$ intervals of $43.3$ cents; -$12$ further fifths are added, shaving a Pythagorean comma off of each $43.3$ cent interval, thereby producing a $53$-note scale with $41$ Pythagorean commas ($23.5$ cents) and $12$ intervals of $19.8$ cents. - -Note that at some steps in this process the two intervals obtained are more nearly equal than at others, and that those scales whose intervals are nearly equal are very well approximated by an equal-tempered scale. The lengths of the scales where this happens coincide with denominators of convergents of the continued fraction expansion of $\log_2 3$, that is, at $2$, $5$, $12$, $41$, $53$, $306$, $665$, etc. A spectacular improvement is seen in the $665$-note scale, where the two intervals are $1.85$ cents and $1.77$ cents. In contrast, the intervals in the $306$-note scale are relatively far apart: $5.38$ cents and $3.62$ cents. From this perspective, the $12$-note scale is remarkably good. -I should emphasize that this is only the barest beginning of a discussion of tuning systems. It is desirable to accommodate small whole number ratios other than $\frac{3}{2}$ such as $\frac{5}{4}$ (the major third) and $\frac{6}{5}$ (the minor third), which necessitates various adjustments. It is also desirable to be able to play music in different keys, which forces other compromises. Many of these issues are discussed in the other answers.<|endoftext|> -TITLE: Why is width of critical strip what it is? -QUESTION [7 upvotes]: For Riemann zeta function and $L$-functions of number fields, the width of critical strip is $1$. For $L$-functions of modular forms of weight $k$, the width of the critical strip is $k$. -Why is there a variation in the width of the critical strip for various $L$-functions? 
Is there a conceptual explanation or an underlying heuristic?
-
-REPLY [7 votes]: Roughly speaking, the critical strip is where the $L$-function
-is hard to compute/understand. Its right edge is the boundary
-of the region where the Dirichlet series/Euler product converges
-nicely (absolutely, locally uniformly, etc.). Its left edge
-is the image of the right edge under the functional equation
-(to compute it to the left of the critical strip you can use
-the functional equation to reduce it to computing a nice series/product).<|endoftext|>
-TITLE: Applications of Gauss sums
-QUESTION [28 upvotes]: For proving quadratic reciprocity, Gauss sums are very useful. However, this seems like an ad hoc construction. Is this useful in a wider context? What are some other uses for Gauss sums?
-
-REPLY [5 votes]: A small additional note, in line with an earlier answer: Gauss sums are, literally, the Lagrange resolvents obtained in the course of expressing roots of unity in terms of radicals. (Yes, then the Kummer-Stickelberger business can be used to effectively obtain the actual radical expressions...: here.)<|endoftext|>
-TITLE: Motivation for Hecke characters
-QUESTION [21 upvotes]: The context is the definition of the Hecke Größencharakter:
-http://en.wikipedia.org/wiki/Hecke_character
-This is supposed to generalize the Dirichlet $L$-series to number fields. Dirichlet characters are characters of the multiplicative groups of $\mathbb Z/p\mathbb Z$. An appropriate generalization would be instead to consider characters of the multiplicative group of $\mathcal O_K/\mathcal P$ where $\mathcal P$ is a prime ideal in the ring of integers of a number field $K$.
-But the Hecke Größencharakter goes to more trouble than this. It brings in ideles and such for a more complicated generalization. Why is this necessary?
-
-REPLY [26 votes]: It's natural to think that the correct analogue of the groups $({\mathbf Z}/m{\mathbf Z})^\times$ in a number field $K$ should be the groups $({\mathcal O}_K/{\mathfrak a})^\times$ where $\mathfrak a$ is a nonzero ideal in $\mathcal O_K$, and for some purposes that is true. But for other purposes it is not, and one instance where it is not is your question.
-Hecke's motivation for creating "his" characters was to produce $L$-functions of them as Euler products over (nonzero) prime ideals in $\mathcal O_K$ that generalize Dirichlet $L$-functions. If you start off with a character $\chi$ on a unit group $({\mathcal O}_K/{\mathfrak a})^\times$, how do you make it into a function of ideals? You want some kind of series like
-$$
-\sum_{\mathfrak a} \frac{\chi({\mathfrak a})}{{\rm N}(\mathfrak a)^s}
-$$
-running over (nonzero) integral ideals ${\mathfrak a}$ of ${\mathcal O}_K$, and if $K$ has class number greater than 1, it's hard to imagine how to get a function of ideals out of a function on $({\mathcal O}_K/{\mathfrak a})^\times$. Even if $K$ has class number 1 you'd have pretty serious problems making such a transition if there are units of infinite order in $\mathcal O_K$, which there are except when $K$ is ${\mathbf Q}$ or an imaginary quadratic field.
-The key to understanding how Hecke generalized Dirichlet characters is to reinterpret the group $({\mathbf Z}/m{\mathbf Z})^\times$ as the multiplicative quotient group of fractional ideals $I_{(m)\infty}/P_{(m)\infty}$, not as a group of units in a quotient ring.
That leads to generalized ideal class groups $I_{\mathfrak m}/P_{\mathfrak m}$ for a generalized modulus $\mathfrak m$ in a number field, and it is characters of generalized ideal class groups $I_{\mathfrak m}/P_{\mathfrak m}$, not characters of the groups $(\mathcal O_K/\mathfrak a)^\times$, that are examples of Hecke's definition of his characters. The characters of generalized ideal class groups all have finite order, but Hecke's definition is much broader: it allows for infinite-order characters that are not closely related to any finite order characters in any way. Generalized ideal class groups are the original way in which class field theory was developed, and you're not going to find anyone telling you that the formalism of class field theory is easy to grasp the first time through it. -Hecke's original definition of his characters did not make any use of ideles, which in fact weren't created until later (by Chevalley). His paper introducing his characters came in two parts in Mathematische Zeitschrift (vol. 1 in 1918 and vol. 6 in 1920) and he gives explicit examples of his characters for real and imaginary quadratic fields. The classical definition is discussed on the Wikipedia page you link to in your question, although the definition there is (at the moment) all largely in words and is kind of opaque. I think you would find the classical formulas defining Hecke characters in general pretty frightening. You can find them in, for instance, Narkiewicz's book on algebraic number theory. Hecke's original papers do not offer much in the way of gentle motivation for his definition. These developments, at the time, were not at all obvious. In the 1940s, Matchett showed in her thesis how to interpret Hecke's characters more conceptually as the characters of the idele class group, and that is often how they are viewed today because it is a cleaner and more conceptual definition.<|endoftext|> -TITLE: Generality of the Manin-Drinfel'd theorem -QUESTION [13 upvotes]: The Manin-Drinfeld theorem asserts that for a modular curve $X_0(N)$ and Jacobian $J_0(N)$ with the former being embdedded in the latter under the map that takes $i\infty$ to $0$, the cusps are torsion. -The proof Manin-Drinfeld theorem seems to use Hecke operators and thus seems to be valid only for congruence subgroups of $PSL_2(\mathbb Z)$. Is there possibly a finite-index subgroup of $PSL_2(\mathbb Z)$ such that the statement of Manin-Drinfeld theorem is still true for the quotient? - -REPLY [3 votes]: I like your question! I wish I could give a full answer, but I'm not familiar enough with the Manin-Drinfeld theorem. After I finish a couple weeks of work, I hope I can return and think about it in detail. -I would guess the answer is yes, and I have some candidates you could try. The Fermat curves can be identified with compactified quotients $\Gamma\backslash\mathcal{H}$ for finite index subgroups $\Gamma$ in the following way: -The modular function $\lambda(z) = \frac{\theta_2^4(q)}{\theta_3^4(q)}=16 q^{1/2} -128q +704q^{3/2} + \ldots$ -is a Hauptmodul of the genus 0 congruence subgroup $\Gamma(2)$. (I.e., it is invariant under the action -of $\Gamma(2)$ by fractional linear transformation, and parametrizes the genus 0 modular surface $X_{\Gamma(2)} = \Gamma(2) \backslash \mathcal{H} \cup \{0,1,i_\infty\}$.) -We define Hauptmoduln $x:= \sqrt[n]{\lambda}$ and $y:= \sqrt[n]{1-\lambda}$, which determine finite index genus 0 subgroups $\Gamma_x$ and $\Gamma_y$. 
Then $\Gamma:= \Gamma_x \cap \Gamma_y$ is a subgroup with compactified quotient $X_{\Gamma}: x^n + y^n = 1$, and $\Gamma$ is a noncongruence subgroup for $n \neq 1,2,4,8$.
-This construction is from a survey paper by Ling Long. We can use essentially the same idea to obtain quotient curves of the Fermat curves: If we set $x:= -\sqrt[5]{\lambda}$ and $y:= \sqrt{1-\lambda}$, we obtain a noncongruence subgroup $\Gamma$ with model $y^2=x^5+1$, which is a nice genus 2 curve, and its Jacobian has CM.
-If you can verify what happens in this case, I would be interested to hear about it.
-
-Added later:
-I just did a search and found a relevant paper: "The Manin-Drinfeld theorem and Ramanujan sums" by V. Kumar Murty and Dinakar Ramakrishnan.
-They give the same construction of the Fermat curves as I gave above, attributing it to Fricke and Klein. They also cite a result of Rohrlich which establishes that the cusps of $\Gamma$ map to torsion points of the Jacobian.
-So the answer is yes; the noncongruence groups $\Gamma$ corresponding to the Fermat curves also satisfy the statement of the Manin-Drinfeld theorem.<|endoftext|>
-TITLE: Proof of the formula $1+x+x^2+x^3+ \cdots +x^n =\frac{x^{n+1}-1}{x-1}$
-QUESTION [5 upvotes]: Possible Duplicate:
-Value of $\sum x^n$
-
-Proof of the formula
-$$1+x+x^2+x^3+\cdots+x^n = \frac{x^{n+1}-1}{x-1}.$$
-
-REPLY [9 votes]: Since $1-x^{n+1}$ has $1$ as a root, the quotient $\frac{1-x^{n+1}}{1-x}$ is a polynomial.
-If $\mathbb F_q$ is a finite field with $q$ elements and $V$ is an $\mathbb F_q$-vector space of dimension $n+1$, then $\frac{1-q^{n+1}}{1-q}=|P(V)|$ is the cardinality of the projective space attached to $V$. Now $P(V)$ can be described as a disjoint union $$P(V)=\mathbb A^0\sqcup\mathbb A^1\sqcup \mathbb A^2\sqcup\cdots\sqcup\mathbb A^n$$ where $\mathbb A^k$ is, for each $k$, an affine space of dimension $k$ over $\mathbb F_q$ (which is a complicated way of saying, as far as our purposes go, a vector space over $\mathbb F_q$ of dimension $k$). Since $|\mathbb A^k|=q^k$, we find that
-$$\frac{1-q^{n+1}}{1-q}=1+q+q^2+q^3+\cdots+q^n$$
-for all numbers $q$ which are powers of prime numbers. It follows that
-$$\frac{1-x^{n+1}}{1-x}=1+x+x^2+x^3+\cdots+x^n$$
-as polynomials, because the equality holds for infinitely many values of $x$ (and we are working over $\mathbb Z$...)<|endoftext|>
-TITLE: Theories of $p$-adic integration
-QUESTION [18 upvotes]: What is the compelling need for introducing a theory of $p$-adic integration?
-Do the existing theories of $p$-adic integration use some kind of analogues of Lebesgue measures? That is, do we put a Lebesgue measure on $p$-adic spaces, and just integrate real or complex valued functions on $p$-adic spaces, or is something more possible, like integrating $p$-adic valued functions on $p$-adic spaces? What is the machinery used?
-Then again, does the integration on spaces like $\mathbb C_p$ give something more than the usual integration in real analysis? I mean, the integration of complex valued functions of complex variables, or more precisely holomorphic functions, is a much more interesting topic than measure theory. Is a similar analogue true in $p$-adic cases?
-I have also seen mentioned that Grothendieck's cohomology theories like etale cohomology, crystalline cohomology etc., fit into such $p$-adic integration theories. What could possibly be the connection?
-
-REPLY [10 votes]: I would normally take $p$-adic integration to mean "integration of $p$-adic valued functions" or "integration of differential forms with some kind of $p$-adic valued functions as coefficients", where the integration is also taking place over some kind of $p$-adic space or manifold.
-The reasons for wanting such theories are various. One reason is indicated in George S.'s answer: there are known analogues of classical Hodge theory, known as $p$-adic Hodge theory, whose proofs however are not analytic, but rather proceed via arithmetic geometry. One would like to have more analytic ways of thinking about them, and this is one goal of Robert Coleman's theory. (In a recent volume of Asterisque, namely vol. 331, Coleman and Iovita have an article, "Hidden structures on semistable curves", related to this problem.) (Note also that $p$-adic Hodge theory relates $p$-adic etale cohomology to crystalline cohomology, which gives an answer to your question of how $p$-adic integration might be related to those topics.)
-Another reason is that many integral formulas (involving usual archimedean integrals) appear in the theory of classical $L$-functions attached
-to automorphic forms, and one would like, at least in certain contexts, to be able to write down $p$-adic analogues so as to construct $p$-adic $L$-functions.
-As for what machinery is used: in the theory of $p$-adic $L$-functions and related contexts in Iwasawa theory, often nothing more is used than basic computations with Riemann sums. In the material related to $p$-adic Hodge theory, much more substantial theoretical foundations are used: tools from arithmetic geometry, rigid analysis, possibly Berkovich spaces, and related topics.<|endoftext|>
-TITLE: Limit of $S(n) = \sum_{k=1}^{\infty} \left(1 - \prod_{j=1}^{n-1}\left(1-\frac{j}{2^k}\right)\right)$ - Part II
-QUESTION [7 upvotes]: This is a follow-up of Limit of $S(n) = \sum_{k=1}^{\infty} \left(1 - \prod_{j=1}^{n-1}\left(1-\frac{j}{2^k}\right)\right)$
-More details can be found in the above thread.
-Let $S(n) = \displaystyle \sum_{k=1}^{\infty} \left(1 - \prod_{j=1}^{n-1}\left(1-\frac{j}{2^k}\right)\right)$
-Mike has proved that $S(n)$ in fact diverges, at least as fast as $\lfloor \log_2 (n-1) \rfloor$.
-Now, based on what Mike has worked out, this conjecture arises:
-$\displaystyle \lim_{n \rightarrow \infty} (2 \log_{2}(n) - S(n)) = \alpha$.
-Also, can $\alpha$ be expressed in terms of other familiar constants? $\frac{\pi \gamma}{e}$ seems to be a close guess.
-The numerical evidence seems to suggest this is true. For example, we have the following graph of $2 \log_2 n - S(n)$ for $n \leq 300$.
-
-(More numerical evidence: the value of $2 \log_2 n - S(n)$ is, for $n = 1000$, $2000$, and $3000$, respectively, $0.667734$, $0.667494$, and $0.667413$.)
-An alternative expression for $S(n)$ was worked out by Moron in the previously-mentioned question:
-$$S(n) = - \sum_{k=1}^{n-1} \frac{s(n,k)}{2^{n-k}-1},$$
-where $s(n,k)$ is a Stirling number of the first kind.
-
-REPLY [9 votes]: Here's a proof. The value of $\alpha$ is the value of the infinite sum $$\sum_{m= -\infty}^{\infty} (e^{-2^{-m-1}} - [m \geq 1]),$$ where $[m \geq 1]$ is 1 if $m \geq 1$ and 0 otherwise. Mathematica gives this value (to 6 decimal places) as $0.667253$.
-The full argument in all its rigor is too long to post on this site, so I'm only going to give an extended outline. There are a couple of strange claims in here, but bear with me.
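-Before the outline, here is a short Python sketch (an illustrative check, not part of the proof; the helper name and the truncation bounds are ad hoc) that recomputes both $\alpha$ and $2 \log_2 n - S(n)$ directly from the definitions:
-
-    from math import exp, log2
-
-    def S(n, kmax=400):
-        # direct evaluation of S(n); for small k the product hits a factor
-        # that is exactly zero (j = 2^k), so we can break early there
-        total = 0.0
-        for k in range(1, kmax + 1):
-            prod = 1.0
-            for j in range(1, n):
-                prod *= 1.0 - j / 2.0 ** k
-                if prod == 0.0:
-                    break
-            total += 1.0 - prod
-        return total
-
-    # alpha = sum over all integers m of (e^{-2^{-m-1}} - [m >= 1]),
-    # truncated to a range where the omitted terms are negligible
-    alpha = sum(exp(-(2.0 ** (-m - 1))) - (1 if m >= 1 else 0)
-                for m in range(-60, 200))
-    print(alpha)
-    for n in (1000, 2000, 3000):
-        print(n, 2 * log2(n) - S(n))
-
-The printed values should be approximately $0.667253$ for $\alpha$, and approximately $0.667734$, $0.667494$, $0.667413$ for $n = 1000, 2000, 3000$, matching the figures quoted in the question.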
- -Part 1 -From my previous post we know that $1 - \prod_{j=1}^{n-1} \left(1 - \frac{j}{2^k}\right) = 1$ when $k \leq \lfloor \log_2 (n-1) \rfloor$. -Let $p = 2 \log_2 n - \lfloor 2 \log_2 n \rfloor$. Thus $2 \log_2 n - S(n)$ is $$2 \log_2 n - \lfloor 2 \log_2 n \rfloor + \sum_{k= \lfloor \log_2 (n-1) \rfloor +1}^{\lfloor 2 \log_2 n \rfloor} \prod_{j=1}^{n-1} \left(1 - \frac{j}{2^k}\right) + \sum_{k= \lfloor 2 \log_2 n \rfloor + 1}^{\infty} \left(\prod_{j=1}^{n-1} \left(1 - \frac{j}{2^k}\right) - 1\right)$$ -$$= p + \sum_{k= \lfloor \log_2 (n-1) \rfloor +1}^{2 \log_2 n - p} \prod_{j=1}^{n-1} \left(1 - \frac{j}{2^k}\right) + \sum_{k= 2 \log_2 n - p + 1}^{\infty} \left(\prod_{j=1}^{n-1} \left(1 - \frac{j}{2^k}\right) - 1\right) .$$ -Now, the expression $$\prod_{j=1}^{n-1} \left(1 - \frac{j}{2^k}\right)$$ is very close to 0 when $k < 2 \log_2 n$ and very close to 1 when $k > 2 \log_2 n$. So most of the contribution to $2 \log_2 n - S(n)$ occurs when $k$ is close to $2 \log_2 n$. The next step, then, is to reindex with $m = k - \lfloor 2\log_2 n \rfloor$. Now we basically have -$$2 \log_2 n - S(n) = p + \sum_{m= - \log_2 n}^0 \prod_{j=1}^{n-1} \left(1 - \frac{j}{2^{m-p} n^2}\right) + \sum_{m= 1}^{\infty} \left(\prod_{j=1}^{n-1} \left(1 - \frac{j}{2^{m-p} n^2}\right) - 1\right).$$ - -Part 2 -Next, we need a good approximation to $\prod_{j=1}^{n-1} \left(1 - \frac{j}{2^{m-p} n^2}\right)$. It turns out that $e^{-2^{-m+p-1}}$ is an excellent approximation (which surprises me some - despite the fact that I have verified it numerically - as it is independent of $n$). To see this, rewrite $\prod_{j=1}^{n-1} \left(1 - \frac{j}{2^{m-p} n^2}\right)$ as $$\exp \left( \sum_{j=1}^{n-1} \ln\left(1 - \frac{j}{2^{m-p} n^2}\right)\right)$$ and then expand the $\log$ expression with the Maclaurin series for $\ln (1+x)$. The first term in the expansion dominates when $m$ is positive or constant, and we get -$$\prod_{j=1}^{n-1} \left(1 - \frac{j}{2^{m-p} n^2}\right) = \exp \left(\sum_{j=1}^{n-1} \left(- \frac{j}{2^{m-p} n^2} \right) + O\left(\frac{j^2}{4^{m-p} n^4}\right) \right)$$ -$$=\exp \left( - \frac{1}{2^{m-p+1}} + O\left(\frac{1}{2^m n}\right) \right) = \exp \left( - \frac{1}{2^{m-p+1}}\right) + O\left(\frac{1}{2^m n}\right)$$ -Thus $$\sum_{m= 1}^{\infty} \left(\prod_{j=1}^{n-1} \left(1 - \frac{j}{2^{m-p} n^2}\right) - 1\right) = \sum_{m= 1}^{\infty} \left(\exp \left( - \frac{1}{2^{m-p+1}} \right) - 1\right) + O\left(\frac{1}{n}\right).$$ -When $m$ is negative and not constant, things are a little trickier, as the higher-order Maclaurin series terms make $\exp \left( - \frac{1}{2^{m-p+1} } \right)$ not a good relative approximation to the product. However, all the terms in the Maclaurin series are negative, so truncating after the first term does yield an upper bound. In addition, $\exp \left( - \frac{1}{2^{m-p+1}} \right)$ goes to zero extremely fast as $m \to -\infty$. (For example, if $m = - \log_2 (\log n)$ (which heads to $-\infty$ very slowly), we still have $\exp \left( - \frac{1}{2^{m-p+1}} \right) = \frac{1}{n^{2^{p-1}}}$.) Thus $\exp \left( - \frac{1}{2^{m-p+1}} \right)$ is still an excellent absolute approximation for the product when $m$ is negative. Since there are only $\log_2 n$ negative terms in the sum we are considering, we have -$$\sum_{m= - \log_2 n}^0 \prod_{j=1}^{n-1} \left(1 - \frac{j}{2^{m-p} n^2}\right) = \sum_{m = - \log_2 n}^0 \exp \left( - \frac{1}{2^{m-p+1}} \right) + E(n),$$ -where $E(n) \to 0$ as $n \to \infty$. 
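-Before moving on to Part 3, the approximation at the heart of Part 2 is easy to test numerically. The following ad hoc Python sketch (an illustrative check, not part of the proof) compares $\prod_{j=1}^{n-1}\left(1 - \frac{j}{2^{m-p} n^2}\right)$ with $e^{-2^{-m+p-1}}$ for $n = 3000$ and several values of $m$:
-
-    from math import exp, floor, log2
-
-    def compare(n, m):
-        # with k = floor(2 log2 n) + m we have 2^(m-p) n^2 = 2^k
-        p = 2 * log2(n) - floor(2 * log2(n))
-        k = floor(2 * log2(n)) + m
-        prod = 1.0
-        for j in range(1, n):
-            prod *= 1.0 - j / 2.0 ** k
-        approx = exp(-(2.0 ** (-m + p - 1)))
-        print(m, prod, approx)
-
-    for m in (-2, -1, 0, 1, 2, 3):
-        compare(3000, m)
-
-The two printed columns should agree closely, consistent with the claimed error term $O\left(\frac{1}{2^m n}\right)$.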
-
-Part 3
-Now, let $$f(p) = \sum_{m= - \infty}^0 e^{-2^{-m+p-1}} + \sum_{m= 1}^{\infty} (e^{-2^{-m+p-1}} - 1).$$
-The function $f$ is linear in $p$! (Despite having verified this numerically and despite the argument below, this still seems weird to me!) The slope is $-1$. To see this, differentiate to get $$f'(p) = - \ln 2 \sum_{m= - \infty}^{\infty} 2^{-m+p-1} e^{-2^{-m+p-1}}.$$
-Now, apply the Euler-Maclaurin summation formula. Because we have the product of two exponentials, $f^{(k)}(p)$ will have the expression $2^{-m+p-1} e^{-2^{-m+p-1}}$ in every term. As $m \to \infty$, $2^{-m+p-1} \to 0$ and $e^{-2^{-m+p-1}} \to 1$. As $m \to -\infty$, $2^{-m+p-1} \to \infty$ and $e^{-2^{-m+p-1}} \to 0$, but the latter expression dominates. Thus $f^{(k)}(p) \to 0$ as $m \to \infty$ and as $m \to -\infty$. Thus the Euler-Maclaurin formula says that
-$$f'(p) = - \ln 2 \int_{-\infty}^{\infty} 2^{-m+p-1} e^{-2^{-m+p-1}} \, dm = -\left. e^{-2^{-m+p-1}}\right|_{m=-\infty}^{m=\infty} = -1.$$
-Therefore, $$\lim_{n \to \infty} \left(2 \log_2 n - S(n)\right) = p + \sum_{m= - \infty}^0 e^{-2^{-m+p-1}} + \sum_{m= 1}^{\infty} (e^{-2^{-m+p-1}} - 1) $$
-$$= p - p + \sum_{m= - \infty}^0 e^{-2^{-m-1}} + \sum_{m= 1}^{\infty} (e^{-2^{-m-1}} - 1) = \sum_{m= - \infty}^0 e^{-2^{-m-1}} + \sum_{m= 1}^{\infty} (e^{-2^{-m-1}} - 1).$$
-Again, this value is approximately $0.667253$.<|endoftext|>
-TITLE: How to prove that a torus has the same volume as a cylinder (with the height equal to the torus' perimeter)
-QUESTION [12 upvotes]: I want to find the volume of a torus with a given thickness and a given radius.
-Let $r$ be the radius of a circle with its center at $M(0,b)$ ($b \geq r$). Now I want to rotate this circle about the x-axis, that is to say, about a circular path of length $2 \pi \cdot |b|$. So I thought I'd simply integrate:
-$V = \int\limits_0^{2 \pi \cdot |b|} \pi r^2 \, dz = 2 \pi^2 r^2 |b|$, which turns out to be the correct result.
-However, I don't find it trivial that the volume of this torus is the same as the volume of a cylinder with the corresponding height. I read the article on Wikipedia about the torus and it said that this was due to Cavalieri's theorem, which to my mind doesn't really have a lot to do with the torus vs. the cylinder...
-Is there some easy way to prove that a torus has the same volume as a cylinder with the height equal to the torus' perimeter?
-
-REPLY [5 votes]: To use Cavalieri's theorem, lay the torus and cylinder on a table and slice them with planes parallel to the table. Then it suffices to show that the torus slices (annuli) have the same area as the cylinder slices (rectangles).
-At height $h$ (measured from the centre of the torus or cylinder), the annulus has inner and outer radii $$R_\pm = |b| \pm \sqrt{r^2-h^2},$$ so its area is $$A_1 = \pi R_+^2 - \pi R_-^2 = \pi(R_+ + R_-)(R_+ - R_-) = \pi \cdot 2|b| \cdot 2\sqrt{r^2-h^2}.$$ The rectangle has width $2\sqrt{r^2-h^2}$ and length $2\pi |b|$, so its area is $$A_2 = 2\sqrt{r^2-h^2} \cdot 2\pi|b|.$$ The areas are equal, so we're done!
-I prefer Pappus's centroid theorem though...<|endoftext|>
-TITLE: Summing up the series $a_{3k}$ where $\log(1-x+x^2) = \sum a_k x^k$
-QUESTION [5 upvotes]: If $\ln(1-x+x^2) = a_1x+a_2x^2 + \cdots \text{ then } a_3+a_6+a_9+a_{12} + \cdots = $ ?
-My approach is to write $1-x+x^2 = \frac{1+x^3}{1+x}$; then, expanding the respective logarithms, I got a series (of coefficients) which is nothing but $\frac{2}{3}\ln 2$. But this approach took some time for me (I couldn't solve it during the test, but after the test I solved it)... Any other quick method?
-
-REPLY [4 votes]: Qiaochu Yuan gives an excellent exposition of the general method. Sometimes in specific cases you can get the answer via some hands-on calculations as well, and expanding the logs as you mentioned can actually be done pretty quickly: using that $1 - x + x^2 = {1 + x^3 \over 1 + x}$ one gets that $\ln(1 - x + x^2) = \ln(1 + x^3) - \ln(1 + x)$. You now can use the fact that the sum $a_3 + a_6 + ...$ is obtained by subtracting the corresponding sums for $\ln(1 + x^3)$ and $\ln(1 + x)$. Since the power series for $\ln(1 + x^3)$ only contains powers of $x^3$, the contribution to $a_3 + a_6 + ...$ coming from that term is what you get from plugging in $x = 1$, namely $\ln(2)$.
-The power series of $\ln(1 + x)$ may be written as $-\sum_{n > 0} {(-x)^n \over n}$.
-Taking every third term gives $-\sum_{n > 0} {(-x)^{3n} \over 3n} = -{1 \over 3}\sum_{n > 0} {(-x^3)^n \over n}$. Note this series is again the series of $\ln(1 + x)$, but applied to $x^3$ in place of $x$. Plugging in $x = 1$ gives ${1 \over 3}\ln(2)$.
-So the answer you want is $\ln(2) - {1 \over 3}\ln(2) = {2 \over 3}\ln(2)$.<|endoftext|>
-TITLE: Status of mixed motives
-QUESTION [7 upvotes]: From the Wikipedia page:
-http://en.wikipedia.org/wiki/Motive_(algebraic_geometry)
-it appears that the category of mixed motives $MM(k)$ over a field $k$ is still conjectural; but there is a good derived category $DMM(k)$ already constructed for this.
-What is a good reference for this construction, and why is this derived category the "suitable" one?
-
-REPLY [6 votes]: As a supplement to T..'s nice answer, let me write something that is more of a cultural remark than a direct answer to the question:
-Many people learnt about the yoga of mixed motives by reading Deligne's article
-on the thrice punctured sphere. (This was certainly the case for me; and a hat-tip to Quomodocumque for providing the link.)
-This article helps explain what one should expect from the category of mixed motives, and thus helps one recognize the (or perhaps a) correct construction when one sees it.<|endoftext|>
-TITLE: Meaning of relative homology
-QUESTION [42 upvotes]: It is a bit easier to understand the homology $H_1(X, \mathbb Z)$ for various compact surfaces in analogy with handles and so on. There seems to be a nice intuitive picture with handles, holes, etc. to think of the first homology group, and similar heuristics for higher homology groups.
-But almost all axiomatic treatments of homology groups use relative homology instead. Yet it is not so intuitively clear how to visualize the relative homology groups.
-What are some intuitive crutches for dealing with these relative homology groups, particularly for surfaces?
-
-REPLY [21 votes]: This question was asked a long time ago. But, it may still be relevant.
-Intuitively, the relative homology $H(K, K_0)$ is the homology of the space obtained from $K$ by identifying all the points of $K_0$ to a single point. Here $K_0$ is a subcomplex of $K$.
-(Figure from the Relative Homology chapter of Edelsbrunner's book, p. 107.)
-For example, a relative $1$-cycle can result from:
-
-All of its edges reside in the space $K-K_0$. This cycle is not affected by the relative homology computation.
-It is not a cycle in $K-K_0$, but its two endpoints lie in $K_0$.<|endoftext|>
-TITLE: Do all manifolds have a densely defined chart?
-QUESTION [11 upvotes]: Let $M$ be a smooth connected manifold. Is it always possible to find a connected dense open subset $U$ of $M$ which is diffeomorphic to an open subset of $\mathbb{R}^n$?
-If we don't require $U$ to be connected, the answer is yes: it is enough to construct a countable collection of disjoint open "affines" whose union is dense, and this is not terribly difficult.
-
-REPLY [7 votes]: If the manifold is compact, the answer is yes. The geometric idea is to put a small balloon in the manifold, and to inflate it. Since the manifold is compact, you eventually fill the manifold, so you're left with the interior of the balloon (an open ball) with identifications on the boundary, so it's dense.
-You can make this precise by putting a Riemannian metric on the manifold and using the Hopf-Rinow Theorem.
-The relation between my answer and Jason's comment would be to take a handle decomposition of the manifold and to consider a maximal tree in the dual 1-skeleton as giving a "tree-like ball".
-edit: responding to Qiaochu's comment, I believe the answer is affirmative for connected, non-compact manifolds as well. The technique of the proof has to be adapted some. Step 1: construct a proper Morse function on $M$, meaning a Morse function $f : M \to [0,\infty)$ such that it has only one local minimum -- which is the global minimum, $0$, and demand that $f^{-1}[0,c]$ is connected and compact for all $c\geq 0$. Step 2: you construct the charts on $f^{-1}[c_i,c_{i+1}]$ for a suitable increasing sequence $c_1 < c_2 < c_3 < \cdots$ of regular values.<|endoftext|>
-TITLE: Why is $|Y^{\emptyset}|=1$ but $|\emptyset^Y|=0$ where $Y\neq \emptyset$
-QUESTION [10 upvotes]: I have a question about the set of functions from a set to another set. I am wondering about the degenerate cases. Suppose $X^Y$ denotes the set of functions from a set $Y$ to a set $X$, why is $|Y^{\emptyset}|=1$ but $|\emptyset^Y|=0$ where $Y\neq \emptyset$?
-
-REPLY [12 votes]: The definition of $A^B$ is "the set of all functions with domain $B$ and codomain $A$".
-A function $f$ from $B$ to $A$ is a set of ordered pairs such that:
-
-If $(x,y)\in f$, then $x\in B$ and $y\in A$.
-For every $b\in B$ there exists $a\in A$ such that $(b,a)\in f$.
-If $(b,a)$ and $(b,a')$ are in $f$, then $a=a'$.
-
-Now, what happens if $B=\emptyset$? Well, then there can be no pair in $f$, because you cannot have $x\in B$. But notice that in that case, 2 is satisfied "by vacuity" (if it were false, you would be able to exhibit a $b\in\emptyset$ for which there is no $a\in A$ with $(b,a)\in f$; but there are no $b\in\emptyset$, so you cannot make such an exhibition; the statement is true because the premise, "$b\in\emptyset$", can never hold). Likewise 3 holds by vacuity. So it turns out that if we take $f=\emptyset$, then $f$ satisfies 1, 2, and 3, and therefore it is by all rights a "function from $\emptyset$ to $A$". But this is the only possible function from $\emptyset$ to $A$, because only the empty set works.
-By contrast, if $A=\emptyset$, but $B\neq\emptyset$, then no set $f$ can satisfy both 1 and 2, so no set can be a function from $B$ to $A$.
-That means that $Y^{\emptyset}$ always contains exactly one element, namely the "empty function", $\emptyset$. But if $Y\neq\emptyset$, then $\emptyset^Y$ contains no elements; that is, it is empty.
-Therefore, since $Y^{\emptyset}$ has exactly one element, $|Y^{\emptyset}|=1$ regardless of what $Y$ is.
But if $Y\neq\emptyset$, then $\emptyset^{Y}$ is empty, so $|\emptyset^{Y}| = 0$.
-
-REPLY [7 votes]: Because the empty function is the unique function from the empty set to an arbitrary set $Y$, while if $Y\neq\emptyset$, then there exists $y\in Y$, but there's no place in $\emptyset$ for a function to send $y$ to.<|endoftext|>
-TITLE: Why are cubics separable over fields that are not characteristic 2 or 3
-QUESTION [6 upvotes]: Why is it that cubics are separable over fields that are not of characteristic $2$ or $3$?
-This is the starting point for a discussion of the Galois group of a cubic, but I seem to be stuck right off the bat.
-
-REPLY [4 votes]: You can actually do this "by hand" too. Our goal is to show that a monic irreducible polynomial $f(x)$ of degree $2$ or $3$ over $F$ splits into distinct linear factors in an algebraic closure $E$, as long as the characteristic of $F$ is not $2$ or $3$. Suppose the irreducible $f(x)$ does not split into distinct linear factors in $E$; we will get a contradiction. (In the degree $2$ case the only possibility is $f(x) = (x - a)^2 = x^2 - 2ax + a^2$, so $2a \in F$ and hence $a \in F$ since the characteristic is not $2$, contradicting irreducibility; so suppose $f(x)$ is a cubic.)
-First, if the polynomial is of the form $(x - a)^3 = x^3 - 3ax^2 + 3a^2x - a^3$, then the $x^2$ coefficient $-3a$ must be in $F$. So $a$ is in $F$ too, using that $F$ is not of characteristic $3$, contradicting the irreducibility of $f(x)$.
-The remaining case is that the polynomial is of the form $(x - a)^2(x - b)$.
-For this we use the fact that for any polynomial $p(x)$, ${p'(x) \over p(x)} = \sum {1 \over x - r}$, where the sum is taken over the roots of $p(x)$ (with multiplicity). Applying this to $f(x)$ gives that ${f'(x) \over f(x)} = {2 \over (x - a)} + {1 \over x - b}$. Note that $f(x) = x^3 - (b + 2a)x^2 + ...$, so that $b + 2a \in F$. Hence ${b + 2a \over 3} \in F$ too. Substituting this value of $x$ into ${f'(x) \over f(x)}$ gives ${9 \over 2(b - a)}$, which must be in $F$. Thus $b - a\in F$; note that we use that the characteristic is not $2$ or $3$ here. So $b = {2 \over 3}(b - a) + {1 \over 3}(b + 2a)$ is in $F$ as well, again contradicting the irreducibility of $f(x)$.<|endoftext|>
-TITLE: What is the difference between Categories and Relations?
-QUESTION [11 upvotes]: For a common basis, I'll state basic definitions of a category and the relation type I'm thinking of. They're here for quick clarity, not precision, so feel free to revise for an answer.
-Category:
-A collection of objects, a collection of arrows each with a source and target among said objects, identity arrows, and a composition operator satisfying an associative law over arrows.
-Reflexive relation on $A\times A$:
-A collection of objects, a labelled multi-set of pairs of said objects satisfying the reflexive law, and the standard relation combinator operation (which satisfies the associative law).
-Note that labelled multi-set relations are commonly expressed and utilized in discrete math as multi-graphs. I collapse graphs and relations here for unified axiomatic treatment. This is not to treat, e.g., paths over graphs as different from relation composition.
-What, exactly, separates these two? They appear essentially identical in definition and meaning to me. Relations immediately appear more general, given that categories only analogize to a certain restricted class of relations.
-I note some notational differences, such as how categories name each arrow; I don't see how that would change mathematical power. Similarly with the explicit treatment of composition. While such contrivances can come in handy for certain proofs or explorations, I don't see how it justifies treating the two as separate branches rather than syntactic shims on identical concepts.
-[EDITS: fixed associativity statement, extended relations to multi-set representation with graph analogy]
-
-REPLY [2 votes]: This is an example of how the axioms of a theory don't say much about the intentions behind the theory. The purpose of relations has never been to study universal properties, for example.<|endoftext|>
-TITLE: Why can't erf be expressed in terms of elementary functions?
-QUESTION [5 upvotes]: I have seen this claim on Wikipedia and other places. Which branch of mathematics does this result come from?
-
-REPLY [6 votes]: This is a consequence of the Liouville-Ritt theory of integration in finite terms. You can find a brief sketch and references in my post here.
-
-REPLY [4 votes]: I am not aware of any easy proof of this. Here is a proof that $\int e^{x^2}$ cannot be expressed in terms of elementary functions:
-http://math.hunter.cuny.edu/ksda/papers/rick_09_02.pdf
-
-REPLY [3 votes]: Differential Galois theory.<|endoftext|>
-TITLE: Khayyam's work on cubic equations
-QUESTION [11 upvotes]: Omar Khayyam is known for his significant progress in solving cubic polynomial equations. For example, his biography on www-history.mcs.st-andrews.ac.uk says
-
-(...) This problem in turn led Khayyam to
-solve the cubic equation x^3 + 200x =
-20x^2 + 2000 and he found a positive
-root of this cubic by considering the
-intersection of a rectangular
-hyperbola and a circle.
-(...) Indeed Khayyam did produce such a work, the Treatise
-on Demonstration of Problems of
-Algebra which contained a complete
-classification of cubic equations with
-geometric solutions found by means of
-intersecting conic sections.
-
-But I still can't see the big picture of those days. I'm possibly omitting something about the idea of geometric solutions of algebraic equations, but why were they trying hard to find intersections of conic sections, and building large classification schemes for it? If the idea was to get a numerical value out of these constructions by measuring lengths on paper, they could just have prepared a careful template for the function $y = x^3$, and then solved all the cubic equations by intersecting it with a parabola, like in the figure below for the mentioned equation.
-I would appreciate answers that would clarify my confusion. Was it that they did not conceive of $y=x^3$ as a curve, if they were interested in getting a numerical value? Or was it a conceptual challenge to show that all cubic equations can be represented as an intersection of two conic sections?
-
-REPLY [10 votes]: There's a brief note in this book on how Khayyam bumped into having to solve a cubic.
-I'll only make the note that you should remember the context of the time: there was no concept of negative, much less complex, solutions. Corresponding to our current Cartesian system, Khayyam only looked at intersections in the first quadrant.
-Another note should be made that the curves of the time were constructed with geometric tools (straightedge, compass, and a bunch of other contraptions), and $y=x^3$ isn't really a sort of curve that easily lends itself to such a construction (but it is now easily constructed thanks to our current knowledge of coordinate geometry).
-Here is a more explicit mention of the hyperbola-circle intersection problem Khayyam studied, which was mentioned in the OP.
-Here is a (more or less) complete table of all the intersection cases Khayyam studied. (The book has an appendix containing a (translated) section of Khayyam's work.)
-Here is yet another reference.
-(I'll keep updating this answer as I comb through more books; watch this space! As an aside, it's funny that my attempts to look for answers to this question are leading me to references for this question!)<|endoftext|>
-TITLE: Probability distribution of the maximum of random variables
-QUESTION [14 upvotes]: Given $N$ iid random variables, $X_1, \ldots, X_N$, with a uniform probability distribution on $[0, 1)$, what is the distribution of $\displaystyle \max_{i = 1 \ldots N}(X_i)$?
-
-REPLY [12 votes]: Hint: $\max \lbrace X_1,...,X_N \rbrace \leq x$ if and only if $X_i \leq x$ for all $i=1,...,N$.<|endoftext|>
-TITLE: What does "$f\in C^2[a,b]$" mean?
-QUESTION [12 upvotes]: What does this expression mean?
-$$f\in C^2[a,b]$$
-More specifically, I don't know what $C$ means.
-
-REPLY [3 votes]: The $C$ stands for "continuous": $f\in C^2[a,b]$ means that $f:[a,b]\to \mathbb{R}$ (or $\mathbb{C}$) and that $f$, together with its first and second derivatives, is continuous on $[a,b]$.<|endoftext|>
-TITLE: What is $\text{Re}(f(z))=c$ if $f$ is a holomorphic function?
-QUESTION [5 upvotes]: Suppose that $f:U\subset\mathbb{C}\to\mathbb{C}$, where $U$ is a region in the complex plane, is a holomorphic function.
-If $c\in\mathbb{R}$ is a regular value for $\text{Re}(f(z))$, then it follows from the implicit function theorem that
-$\text{Re}(f(z))^{-1}(c)$ is at least locally a differentiable curve in the plane.
-Question:
-1. If $c$ is a regular value, is every connected component of $\text{Re}(f(z))^{-1}(c)$ a global differentiable curve?
-2. If $c$ is not a regular value and $\text{Re}(f(z))^{-1}(c)$ has at least one cluster point, is this set locally a curve?
-
-REPLY [2 votes]: (1) To use your notation, if $c \in \mathbb{R}$ is a regular value, then the level set $Re(f(z))^{-1}(c)$ will be a 1-dimensional embedded submanifold of $U \subset \mathbb{R}^2$. Therefore, every connected component will be a connected 1-manifold.
-Now, I'm not sure what exactly you mean by "global differentiable curve."
-If you mean "something of the form $f(t) = (x(t), y(t))$," then it follows from the comments in this question that yes, every connected component can be put in that form.
-If you mean "something of the form $y = f(x)$ or $x = f(y)$" then the answer is (I think) no. For example, consider $f(z) = \log z$ on $U = \mathbb{R}^2 - \{x \geq 0, y = 0\}$, i.e. the plane with the non-negative x-axis deleted. Then $Re(f(z)) = \log(\sqrt{x^2+y^2})$, so the level set $Re(f(z))^{-1}(1)$ is the circle $x^2 + y^2 = e^2$ minus the point $(e,0)$. So, it doesn't seem like you'd be able to represent this curve by a single function $x = f(y)$ or $y = f(x)$.<|endoftext|>
-TITLE: Image of a math problem that was stated in Cuneiform, Arabic, Latin, and finally in modern math notation
-QUESTION [29 upvotes]: Many years ago a lecturer of mine had a photocopy of a page from a book containing a math problem (I think it was a simple quadratic equation) that was stated/solved in Cuneiform, Arabic, and Latin scripts, and finally in modern math notation.
-I have contacted my lecturer but he has no idea where it was from, nor have I been able to find it using Google Books searches etc.
-Does anyone know where to find it?
-Thank you in advance
-
-REPLY [2 votes]: Click on Historical overview in the section Solving polynomials at the webpage:
-http://www.math.harvard.edu/~ctm/gallery/index.html
-This is not quite what you were asking for, as it is the solution to the quadratic, cubic, quartic, quintic,...
But I think you will still like it!<|endoftext|>
-TITLE: Convergence of Series
-QUESTION [6 upvotes]: At university, we are currently being introduced to various methods of proving that a series converges, for example the Comparison Test, the Ratio Test, or the Root Test. However, we aren't told how to calculate the value a series converges to. The only series whose limit we know how to compute is the geometric series.
-I believe this to be a nontrivial task in general, but there have to be at least some methods to calculate the limit for simple series, don't there?
-If you could point me in the direction of some of these methods I would very much appreciate it.
-Thanks in advance
-
-REPLY [7 votes]: The short answer is that there are no general methods and we use whatever's available. The first thing you can do is learn the Taylor series of common functions; then evaluation of the Taylor series at a point where the series converges to the function corresponds to evaluation of the function. For example, $e^x = \sum \frac{x^n}{n!}$, hence $e = \sum \frac{1}{n!}$. Less trivially, $\ln (1 + x) = \sum \frac{(-1)^{n-1} x^n}{n}$, hence
-$$\ln(2) = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} \pm ....$$
-The more Taylor series you know, the more series you can evaluate "explicitly" in some sense.
-Beyond that, life gets difficult. Mastering the use of Taylor series is already highly nontrivial - especially recognizing when the method is applicable - so worry about learning how to do that properly first.<|endoftext|>
-TITLE: Derivative of an expectation (using integrals)
-QUESTION [6 upvotes]: I am working through an economics paper and I need to take the derivative of the following function:
-$h\left(\overline{\omega}\right) = \int^{\infty}_{\overline{\omega}} \omega \Phi \left(d\omega\right)$
-Even though I don't understand it well, I can do the derivative for the case
-$g\left(\overline{\omega}\right) = \int^{\overline{\omega}}_{0} \omega \Phi \left(d\omega\right)$
-where the derivative is simply
-$g'\left(\overline{\omega}\right) = \overline{\omega} \phi \left(\overline{\omega}\right)$
-But for $h\left(\overline{\omega}\right)$, where the upper bound is $\infty$, I really have no idea what to do.
-Can anyone help me? Any explanation or even a pointer to where I can learn these things would be greatly appreciated.
-
-REPLY [5 votes]: If the integral is sufficiently nice, by which I mean it doesn't blow up to infinity, we can write $h(\bar{\omega}) = \int_{l}^{\infty} \omega \, d(\Phi(\omega)) - \int_{l}^{\bar{\omega}} \omega \, d(\Phi(\omega))$,
-where $l$ is some constant. It could be $0$ or $-\infty$ depending on your problem.
-Now the first integral is just a constant, and the second integral is similar to $g(\bar{\omega})$.
-
-REPLY [4 votes]: Note that $h(\bar \omega ) + g(\bar \omega )$ is constant, and use the result for $g(\bar \omega )$.
-
-REPLY [3 votes]: Since $$h(\overline{\omega})=-\int_{\infty}^{\overline{\omega}}\omega\Phi(d\omega),$$
-$h'(\overline{\omega})=-\overline{\omega}\phi(\overline{\omega})$.<|endoftext|>
-TITLE: Completion of rational numbers via Cauchy sequences
-QUESTION [75 upvotes]: Can anyone recommend a good self-contained reference for the completion of the rationals to get the reals using Cauchy sequences?
-
-REPLY [18 votes]: These are the handout articles given to me by our professor when I took a Real Analysis class. I hope you find them useful. (TeXing it up was tough work. Phew!!)
-Often I have omitted proofs, since I felt that one can do them with a little bit of thinking.
But if you find that you need a proof of a theorem or a lemma, please let me know; I shall be happy to add it.
-Definition 1. Let $\mathbb{Q}$ be the set of all rational numbers. A sequence $(x_{n})$, $x_{n} \in \mathbb{Q}$, is said to be Cauchy if for every $\epsilon \in \mathbb{Q}^{+}$, there exists a positive integer $n_{0}$ such that $|x_{n}-x_{m}|<\epsilon$ for all $n,m \geq n_{0}$.
-Definition 2. A sequence $(x_{n})$ in $\mathbb{Q}$ is said to be convergent in $\mathbb{Q}$, to a rational number $a$, if for every $\epsilon \in \mathbb{Q}^{+}$, there exists an $n_{0} \in \mathbb{N}$ such that $|x_{n}-a| < \epsilon$ for all $n \geq n_{0}$. We denote this by $\lim{x_{n}}=a$.
-Notations: Let $\mathcal{C}$ be the set of all Cauchy sequences in $\mathbb{Q}$ and let $\mathcal{N}$ be the set of all $(x_{n}) \in \mathcal{C}$ such that $\lim{x_{n}}=0$. Elements of $\mathcal{N}$ are called null sequences. We define addition and multiplication of two Cauchy sequences $(x_{n})$ and $(y_{n})$ in $\mathcal{C}$ as follows: $(x_{n}) + (y_{n})= (x_{n}+y_{n})$ and $(x_{n}) \cdot (y_{n}) = (x_{n}y_{n})$.
-Lemma 1. $\mathcal{C}$ is a commutative ring under the addition and multiplication defined above.
-Definition 3. If $R$ is a ring then a non empty subset $I \subseteq R$ is said to be an ideal of $R$, if for all $x,y \in I$ and $r \in R$, $x+y \in I$ and $rx \in I$.
-Lemma 2. $\mathcal{N}$ is an ideal of $\mathcal{C}$.
-Definition 4. We define a relation $\sim$ on $\mathcal{C}$ as follows: For $(x_{n}), (y_{n}) \in \mathcal{C}$, we say that $(x_{n}) \sim (y_{n})$ iff $(x_{n})-(y_{n})= (x_{n}-y_{n}) \in \mathcal{N}$. This $\sim$ is an equivalence relation. We define the quotient set $\mathcal{C}/\mathcal{N}= \{(x_{n}) + \mathcal{N} \ | \ (x_{n}) \in \mathcal{C}\}$ to be the set of real numbers, denoted by $\mathbb{R}$.
-We now make $\mathcal{C}/\mathcal{N}$ into a ring. This is a very common result in algebra: if $R$ is a ring and $I$ is an ideal of $R$, then $R/I$ can be made into a ring. We define addition and multiplication in $\mathcal{C}/\mathcal{N}$ as follows:
-
-For $(x_{n}) + \mathcal{N}, (y_{n}) +\mathcal{N} \in \mathcal{C}/\mathcal{N}$,
-$((x_{n})+\mathcal{N}) + ((y_{n})+\mathcal{N}) = (x_{n}+y_{n})+\mathcal{N}$, and $((x_{n})+\mathcal{N})\cdot ((y_{n})+\mathcal{N}) = (x_{n}y_{n})+\mathcal{N}$.
-
-Lemma 3. Let $R$ be a commutative ring and $I$ be an ideal in $R$. Let $R/I$ be the quotient of $R$ w.r.t. the equivalence relation $\sim$ on $R$ given by: $x \sim y$ iff $x-y \in I$. If $x \sim x_{1}$ and $y \sim y_{1}$, then $x+y+I=x_{1}+y_{1}+I$ and $xy+I=x_{1}y_{1}+I$.
-Lemma 4. Let $(x_{n}) \in \mathcal{C} \setminus \mathcal{N}$. There exist $\epsilon>0$ and $n_{0} \in \mathbb{N}$ such that $|x_{n}|>\epsilon$ for all $n \geq n_{0}$. In fact, there exist $\epsilon >0$ and $n_{0} \in \mathbb{N}$ such that exactly one of the following is true:
-
-Either $x_{n} \geq \epsilon$, for all $n \geq n_{0}$, or
-$x_{n} \leq - \epsilon$, for all $n \geq n_{0}$.
-
-Theorem. $\mathbb{R} = \mathcal{C}/\mathcal{N}$ is a field.
-Proof. It is easy to show that $\mathbb{R}$ is a commutative ring with the zero element $\mathcal{N}$ and the identity element $1+\mathcal{N}$. We need to check that if $x+\mathcal{N} \in \mathcal{C}/\mathcal{N}$ and $x \notin \mathcal{N}$, then it is invertible. That is, there exists $y+\mathcal{N}$ such that $(x+\mathcal{N})\cdot(y+\mathcal{N})=1+\mathcal{N}$.
-Let $x+\mathcal{N} \in \mathcal{C}/\mathcal{N}$ and $x \notin \mathcal{N}$.
By Lemma 4 there exist $\epsilon >0$ and $N \in \mathbb{N}$ such that $x_{n}>\epsilon$ for all $n \geq N$ (we treat the case $x_{n} \geq \epsilon$; the case $x_{n} \leq -\epsilon$ is similar). Define $y=(y_{1},y_{2},\cdots,y_{N},0,...)$ such that $x_{i}+y_{i} \neq 0$, for $1 \leq i \leq N$. Note that $x+\mathcal{N}=(x+y)+\mathcal{N}$. Define $$(x+y)^{-1}=\biggl( \frac{1}{x_{1}+y_{1}},\cdots,\frac{1}{x_{N}+y_{N}},\cdots\biggr)$$ We claim that $(x+y)^{-1} \in\mathcal{C}$. Let $\delta \in\mathbb{Q}^{+}$ be given. For all $m,n \geq N$, $$\biggl|\frac{1}{x_{n}}-\frac{1}{x_{m}}\biggr|=\frac{|x_{m}-x_{n}|}{|x_{n}|\cdot |x_{m}|}<\frac{|x_{n}-x_{m}|}{\epsilon^{2}}$$
-Since $(x_{n})\in\mathcal{C}$, for the above $\delta$ there exists $n_{1} \in \mathbb{N}$ such that $|x_{m}-x_{n}| < \delta \epsilon^{2}$ for all $n,m \geq n_{1}$. Choose $n_{0}=\max (N,n_{1})$, and conclude the result.
-Definition 5. An ideal $I$ of a ring $R$ is said to be a maximal ideal if whenever $J$ is an ideal containing $I$ properly, then $J=R$.
-Remark 1. In fact we have proved that $\mathcal{N}$ is a maximal ideal of $\mathcal{C}$.
-Definition 6. A Cauchy sequence $(x_{n})$ in $\mathbb{Q}$ is said to be positive if there exist $\epsilon \in \mathbb{Q}^{+}$ and $N \in \mathbb{N}$ such that $x_{n} > \epsilon$, for all $n \geq N$.
-Definition 7. A real number $\alpha \in \mathbb{R}$ is said to be positive if every $(x_{n}) \in \alpha$ is a positive sequence in $\mathbb{Q}$.
-Theorem. If $(x_{n})$ is a positive sequence in $\mathcal{C}$, and $(z_{n}) \in \mathcal{N}$, then $(x_{n}+z_{n})$ is a positive sequence in $\mathcal{C}$. If $(x_{n})$ and $(y_{n})$ are positive sequences in $\mathcal{C}$, then $(x_{n}+y_{n})$ and $(x_{n}y_{n})$ are positive sequences in $\mathcal{C}$.
-We denote by $\mathbb{R}^{+}$ the set of all elements of $\mathcal{C}/\mathcal{N}$ represented by positive sequences.
-Definition 8. Let $\mathbb{F}$ be a field. By an order on $\mathbb{F}$, we mean a subset $\mathbb{F}^{+}$ of $\mathbb{F}$, with the following properties:
-
-Any $x \in \mathbb{F}$ lies in exactly one of the sets $\mathbb{F}^{+},\{0\},$ and $\mathbb{F}^{-}=-\mathbb{F}^{+}$.
-For any $x,y \in \mathbb{F}^{+}$ their sum $x+y$ and the product $xy$ again lie in $\mathbb{F}^{+}$.
-
-Theorem. $\mathbb{R}$ is an ordered field with an order $\mathbb{R}^{+}$.
-Definition 9. Let $\overline{x},\overline{y} \in \mathbb{R}$. We say that $\overline{x}>\overline{y}$ if $\overline{x}-\overline{y} \in \mathbb{R}^{+}$.
-Theorem. $\mathbb{R}$ has the Archimedean property, that is, if $\overline{x} \in \mathbb{R}^{+}$ and $\overline{y}\in\mathbb{R}$, then there exists $n \in \mathbb{N}$ such that $n\overline{x}>\overline{y}$.
-Corollary. $\mathbb{N}$ is not bounded in $\mathbb{R}$.
-Theorem. $\mathbb{R}=\mathcal{C}/\mathcal{N}$ has the l.u.b. property, that is, if $S$ is a non empty subset of $\mathbb{R}$ which is bounded above, then there exists a real number which is the least upper bound for $S$.
-Proof. Let $S \subseteq \mathbb{R}$ be non empty and bounded above. Let $M \in \mathbb{R}$ be such that $x \leq M$ for all $x \in S$. Without loss of generality we can assume that $M \in \mathbb{Z}$. Fix $x \in S$. We claim that there exists $m \in \mathbb{Z}$ such that $m \leq x$.
-For otherwise, $m>x$ for all $m \in \mathbb{Z}$, which implies that $-m<-x$ for all $m \in \mathbb{Z}$. This implies that $\mathbb{N}$ is bounded, which is a contradiction. Hence $m \leq x \leq M$ for some $m, M \in \mathbb{Z}$. Since $S$ is bounded above by $M$, if the l.u.b. exists at all, it has to lie in $[m,M]$.
For each $n \in \mathbb{N}$, consider the set $$B_{n} = \Bigl\{ \frac{c}{2^{n}} \ | \ m \leq \frac{c}{2^{n}} \leq M, \ c \in \mathbb{Z}\Bigr\}$$ Note that, since $M= \frac{M\cdot 2^{n}}{2^{n}}$, $M \in B_{n}$. Hence $B_{n}$ is not empty. Also, if $\frac{c}{2^{n}} \in B_{n}$, then $\frac{2c}{2^{n+1}} \in B_{n+1}$. Hence $B_{n} \subseteq B_{n+1}$ for all $n \in \mathbb{N}$. Since there are only finitely many integers between $m2^{n}$ and $M2^{n}$, $B_{n}$ is finite. Hence there are only finitely many upper bounds for $S$ in $B_{n}$, and at least one, namely $M$. Let $a_{n}$ be the smallest upper bound for $S$ in $B_{n}$. Since for $n \geq m$, $B_{m} \subseteq B_{n}$, it follows that $a_{m} \in B_{n}$ and hence $a_{n} \leq a_{m}$ for all $n \geq m$.
-Now we claim that for each $n \in \mathbb{N}$, $a_{n} - \frac{1}{2^{n}}$ is not an upper bound for $S$. If $m \leq a_{n} - \frac{1}{2^{n}}$, then $a_{n} - \frac{1}{2^{n}} \in B_{n}$ and $a_{n}-\frac{1}{2^{n}} < a_{n}$, so $a_{n}-\frac{1}{2^{n}}$ cannot be an upper bound for $S$, since $a_{n}$ is the smallest upper bound for $S$ in $B_{n}$. Otherwise $a_{n}-\frac{1}{2^{n}} < m$; since $m \leq x$ for our fixed $x \in S$, $a_{n}-\frac{1}{2^{n}}< m\leq x$, and again $a_{n}-\frac{1}{2^{n}}$ cannot be an upper bound for $S$.
-Now let $n \geq m$. Since $a_{m}-\frac{1}{2^{m}}$ is not an upper bound for $S$, there exists $x' \in S$ with $a_{m}-\frac{1}{2^{m}} < x' \leq a_{n}$ (the last inequality because $a_{n}$ is an upper bound for $S$). It follows that $0 \leq a_{m}-a_{n}<\frac{1}{2^{m}}$. Hence $(a_{n})$ is a Cauchy sequence. Define $\alpha = (a_{n}) + \mathcal{N}$. First of all we claim that $a_{n}-\frac{1}{2^{n}} < \alpha \leq a_{n}$ for each $n$, where a rational $q$ is identified with the real number given by the constant sequence $(q,q,q,\cdots)$. Since $(a_{k})$ is a decreasing sequence, $a_{n} - a_{k} \geq 0$ for all $k \geq n$. This shows that $\alpha \leq a_{n}$. Next, $\alpha-\bigl(a_{n}-\frac{1}{2^{n}}\bigr)$ is represented by the sequence $\bigl(a_{k}-a_{n}+\frac{1}{2^{n}}\bigr)_{k}$. Since $a_{n}-\frac{1}{2^{n}}$ is not an upper bound for $S$, there is some element of $S$ above it, and every $a_{k}$ is an upper bound for $S$; hence $a_{k}-\bigl(a_{n}-\frac{1}{2^{n}}\bigr)$ is bounded away from $0$ for all $k$. Hence $\alpha-\bigl(a_{n}-\frac{1}{2^{n}}\bigr)$ is positive, and so $\alpha > a_{n}-\frac{1}{2^{n}}$.
-Finally we claim that $\alpha$ is the least upper bound for $S$. First of all we have to show that $\alpha$ is an upper bound for $S$. Suppose not. Then there exists $x \in S$ such that $x > \alpha$, hence $x - \alpha >0$. Using the Archimedean property there exists $N\in\mathbb{N}$ such that $N(x-\alpha)>1$. Hence $x-\alpha>\frac{1}{N}>\frac{1}{2^{N}}$. Thus we see that $x>\alpha+\frac{1}{2^{N}}>a_{N}-\frac{1}{2^{N}}+\frac{1}{2^{N}}=a_{N}$. This is a contradiction, since $a_{N}$ is an upper bound for $S$.
-Next, we show that $\alpha$ is the least upper bound. Suppose not. Then there exists $b \in \mathbb{R}$ such that $b$ is an upper bound for $S$ and $b < \alpha$. By the Archimedean property there exists $N \in \mathbb{N}$ such that $N(\alpha - b)>1$. Hence $\alpha -b > \frac{1}{N}>\frac{1}{2^{N}}$. This implies that $\alpha -\frac{1}{2^{N}}>b$, i.e., $b < \alpha-\frac{1}{2^{N}} \leq a_{N}-\frac{1}{2^{N}}$, as $\alpha \leq a_{n}$ for all $n$. But $a_{N}-\frac{1}{2^{N}}$ is not an upper bound for $S$. Hence anything less than it cannot be an upper bound for $S$. In particular, $b$ is not an upper bound for $S$, which is a contradiction. Hence $\alpha$ is the least upper bound for $S$.<|endoftext|>
-TITLE: What is the standard method to estimate probabilities of events based on observations?
-QUESTION [5 upvotes]: I start from a simple example. I toss a (possibly unfair) coin 20 times.
<|endoftext|>
-TITLE: What is the standard method to estimate probabilities of events based on observations?
-QUESTION [5 upvotes]: I start from a simple example. I toss a (possibly unfair) coin 20 times. I got "eagle" and "tail" 15 and 5 times, respectively. Now I need to estimate the probabilities for eagle and tail. For eagle I get 15/20 = 0.75 and for tail I get 0.25.
-However, I know that the probabilities that I got are not accurate. It is still possible that I have a fair coin (with probability for eagle equal to 0.5) and I got more eagles in my experiment just by chance.
-Now I want to estimate probabilities of probabilities. In other words, I want to know how likely it is that the probability for eagle is 0.5 (or 0.3, or 0.9). I can solve this problem, but I would like to know if there is a name for this problem. I am also interested in the generalization of the problem to the case of more than two events (not just "eagle" and "tail").
-
-REPLY [2 votes]: We want to find $p,0\le p\le 1,$ such that $${20\choose 15}p^{15}(1-p)^5$$ is maximum.
-By simple calculus, this $p$ turns out to be 15/20, as expected.
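-Remark: the estimate above is the maximum-likelihood estimate; the "probabilities of probabilities" the asker wants go under the name Bayesian inference (here, the Beta-Binomial model: a uniform prior on $p$ and 15 heads in 20 tosses give a Beta(16,6) posterior, and a Dirichlet prior plays the same role for more than two events). A short sketch assuming SciPy is available; the prior and the interval queried are our illustration, not part of the answer:
-
-    from scipy.stats import beta
-
-    posterior = beta(15 + 1, 5 + 1)   # Beta(16, 6): uniform prior times binomial likelihood
-    print(posterior.mean())           # ~0.727, close to the MLE 0.75
-    print(posterior.cdf(0.55) - posterior.cdf(0.45))   # how plausible "p is near 0.5" remains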
<|endoftext|>
-TITLE: Cyclic Sylow subgroup of order 9 with 3-core of order 3
-QUESTION [5 upvotes]: Can a finite group G have a cyclic Sylow 3-subgroup of order 9, such that the intersection of the Sylow 3-subgroups of G has order exactly 3, without having non-identity normal subgroups of order coprime to 3?
-
-In classifying the finite groups with cyclic Sylow 3-subgroups of order 9, it seems reasonable to split into cases based on the size of the 3-core (the intersection of the Sylow 3-subgroups) mod the 3'-core (the largest normal subgroup of order coprime to 3). 3-cores of sizes 1 and 9 are easy to handle, but size 3 is devolving into a number of cases, none of which seem to work, but for no systematic reason.
-Is there some systematic reason it cannot occur, or is there an example I've overlooked?
-Edit: The original left out the important condition on normal subgroups of order coprime to 3, without which the classification is intractable. Alex's answer shows how to use such normal subgroups to get fairly arbitrary behavior (for any type of Sylow).
-Edit 2: If the perfect residuum X of the group is nontrivial, it contains Ω(P) of the cyclic Sylow p-subgroup P, and so Op(G) ≤ Op(X). In this case, Op(X) = 1. Hence the group is solvable, and Fit(G) = Op(G), so G/Fit(G) ≤ Aut(P) has order dividing p−1, which is the wrong order.
-
-REPLY [4 votes]: Such groups are easy to construct: let $C_9$ act on something through a $C_3$-quotient. E.g. construct the semi-direct product $C_7\rtimes C_9$ such that $C_9$ acts on $C_7$ through the unique automorphism of order 3. Then the Sylow 3-subgroups are not normal, so there is more than one of them, but there is a unique $C_3$, which is the centre of $G$.<|endoftext|>
-TITLE: Reconstructing a Monthly problem: tree growth on the 2D integer lattice
-QUESTION [12 upvotes]: I'm trying to reconstruct a problem I saw in the Monthly, years ago. Perhaps it'll look familiar to someone.
-In the integer lattice in the plane, we grow a tree in the following natural way: Initially the tree is just the origin. At each step, we find the set of lattice points that are neighbors (distance 1) to precisely one vertex of our tree, and add them (simultaneously) to the tree.
-Thus on day 0 the tree is $\{(0,0)\}$; on day 1 it contains $\{(0,0), (1,0), (-1,0),(0,1),(0,-1)\}$; on day 2 it contains those vertices along with $(2,0),(-2,0),(0,2)$ and $(0,-2)$ (note that $(1,1)$ is not added because it has two neighbors already in the tree), and on day 3 we add 12 new vertices. It looks like a pretty familiar fractal.
-The thing I'm not sure of is what exactly was asked of that tree... Possible candidates include its asymptotic density, some sort of simple formula to determine which lattice points ultimately make it into the tree, and the number of vertices added on day $n$. There are lots of interesting questions and I'm happy to try and solve them, but I prefer to work on the ones that were actually posed!
-
-REPLY [10 votes]: I think I got it. It seems to be problem 10360, originally published in 1994 (volume 101, issue 1, page 76), proposed by Richard Stanley. I'm looking at the JSTOR page of the solution (by Robin Chapman), which appeared in October 1998 (volume 105, no. 8, pages 769-771).
-Here is the statement of the problem:
-Let $L$ be the integer lattice in $\mathbb{R}^d$, i.e., $L$ is the set of points $(x_1,x_2,\ldots,x_d)$ with all $x_j\in\mathbb{Z}$. Consider $L$ as a graph by declaring two lattice points to be adjacent if the distance between them is $1$. Define a sequence $S_0$, $S_1,\ldots$ of subsets of $L$ inductively as follows:
-\begin{align*}
-S_0 &= \Bigl\{ (0,0,\ldots,0)\Bigr\}\\\
-S_{n} &= \Bigl\{ P\in L-\mathop{\cup}\limits_{0\leq k\lt n}S_k\ \Bigm|\ \text{$P$ is adjacent to exactly one element of $\mathop{\cup}\limits_{0\leq k\lt n}S_k$}\Bigr\}.
-\end{align*}
-Let $S$ be the subgraph of $L$ whose vertices are $\cup S_n$. Thus, $P\in S$ is adjacent to $P'\in S$ if the distance between $P$ and $P'$ is $1$.
-
-Find a simple condition for a point of $L$ to belong to $S$.
-For $P\in S$, find a simple rule to determine $i$ such that $P\in S_i$.
-How many elements are in $S_i$?
-How many $P\in S_i$ are adjacent to no points in $S_{i+1}$?
-Show that $S$ is a tree.
-Investigate the (vertex) density of $S$ in $L$, and compare it to the largest density of a subset of $L$ for which the induced subgraph is a tree.<|endoftext|>
-TITLE: What does smooth curve mean?
-QUESTION [17 upvotes]: In this problem, I know that the hypothesis of Green's theorem must ensure that the simple closed curve is smooth, but what is smooth? Could you give a definition and an intuitive explanation?
-
-REPLY [4 votes]: I stumbled upon this old question and I'd like to add something: there is a difference of perspective on smoothness depending whether you look at the geometric object or its parametrization.
-Look at the standard example: the real cusp. It is a curve in the real plane parametrized by $f:t\to (t^2,t^3)$. Of course, the mapping $f$ is smooth (of any order), and the graph of $f$ is a smooth manifold in $\mathbb{R}^3$, but its image is singular: it is the zero set of $x^3-y^2$. It is "worse than a corner"!
-So you need to be always clear about what you want: do you need only differentiability of the parametrization, or do you want the image to be a differentiable manifold (typically in such a case you would assume that the derivative of $f$ does not vanish)?<|endoftext|>
-TITLE: What are the conditions for existence of the Fourier series expansion of a function $f\colon\mathbb{R}\to\mathbb{R}$
-QUESTION [8 upvotes]: What are the conditions for existence of the Fourier series expansion of a function $f\colon\mathbb{R}\to\mathbb{R}$?
-
-REPLY [3 votes]: In addition to Carleson's theorem (stated by AD above), which gives a sufficient condition for pointwise convergence almost everywhere, one might also consider the following theorem about uniform convergence:
-
-Suppose $f$ is periodic.
Then, if $f$ is $\mathcal{C}^0$ and piecewise $\mathcal{C}^1$, $S_N(f)$ converges uniformly to $f$ on $\mathbb{R}$.<|endoftext|>
-TITLE: Recurrence relation satisfied by $\lfloor(1+\sqrt{5})^n\rfloor$
-QUESTION [6 upvotes]: Let $L(n)=\lfloor(1+\sqrt{5})^n\rfloor$. What kind of a linear recurrence is satisfied by $L(n)$? I have no idea how to go about this, because of the presence of the greatest integer function.
-Please feel free to retag it as I kept getting an error on every tag I thought was appropriate.
-
-REPLY [4 votes]: Do you have any reason to suspect $L(n)$ should satisfy a linear recurrence? Here's a way you can prove that it doesn't satisfy a linear recurrence of depth 2 (which can be generalised to any depth).
-Step 1: Compute $L(n)$ for small $n$: 3, 10, 33, 109, 354, 1148, 3716, ...
-Step 2: Assume $b L(n-2)+a L(n-1)=L(n)$ for some $a,b$. Using the data from Step 1 we obtain the system of linear equations:
-
-$3b+10a=33$,
-$10b+33a=109$,
-$33b+109a=354$.
-
-In fact, we can keep going forever, adding equations $109b+354a=1148$, and so on.
-Step 3: Solve the system of linear equations (or get your computer to do it for you (using e.g. WolframAlpha)). In this case there are no solutions, so $L(n)$ does not satisfy a linear recurrence of depth 2. If you feel that the starting point shouldn't be $L(1)$, you can use the same argument starting later in the sequence.
-Assuming I coded things correctly, I have checked that $L(n)$ doesn't satisfy a linear recurrence of depth 10 (or less). [It's probably a good idea to check this yourself if you end up relying on this result.] I also attempted to find a linear recurrence with polynomial coefficients for $L(n)$ to a limited extent (see A=B about Sister Celine's Technique for more info).
-Finally, if you're allowed to use an auxiliary function, then let $s(1)=1+\sqrt{5}$ and for $n \geq 2$ let $s(n)=(1+\sqrt{5}) \cdot s(n-1)$. Then $L(n)=\lfloor s(n) \rfloor$ for all $n \geq 1$.
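-Remark: Steps 1-3 are easy to mechanize. The Python sketch below (our illustration, not the answerer's code; it assumes Python 3.8+ for math.isqrt) computes $L(n)$ exactly in integer arithmetic by carrying $(1+\sqrt{5})^n=a+b\sqrt{5}$ with $a,b\in\mathbb{Z}$, then runs the depth-2 test by solving the first two equations and checking the rest:
-
-    from fractions import Fraction
-    from math import isqrt
-
-    def L(n):
-        a, b = 1, 0                      # (1 + sqrt(5))^0 = 1 + 0*sqrt(5)
-        for _ in range(n):
-            a, b = a + 5 * b, a + b      # multiply by (1 + sqrt(5))
-        return a + isqrt(5 * b * b)      # floor(a + b*sqrt(5)), exactly
-
-    s = [L(n) for n in range(1, 8)]      # [3, 10, 33, 109, 354, 1148, 3716]
-    det = s[1] * s[1] - s[0] * s[2]      # solve s[2]=a*s[1]+b*s[0], s[3]=a*s[2]+b*s[1]; det = 1 here
-    a = Fraction(s[1] * s[2] - s[0] * s[3], det)   # a = 3
-    b = Fraction(s[1] * s[3] - s[2] * s[2], det)   # b = 1
-    print([s[n] == a * s[n - 1] + b * s[n - 2] for n in range(2, 7)])
-    # [True, True, False, False, False]: the third equation already fails, as in Step 3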
<|endoftext|>
-TITLE: Series $\sum_{n=1}^{\infty} (\sqrt[3]{n+1} - \sqrt[3]{n-1})^{\alpha}$ converge or diverge?
-QUESTION [7 upvotes]: Given the following series: $\sum_{n=1}^{\infty} (\sqrt[3]{n+1} - \sqrt[3]{n-1})^{\alpha}$
-where $\alpha \in \mathbb{R}$. Does the series converge or diverge?
-Attempts to solve the problem:
-1) $\lim_{n\to\infty} (\sqrt[3]{n+1} - \sqrt[3]{n-1})^{\alpha } = 0$ - not helpful.
-2) Used the formula $a^{3} - b^{3} = (a-b)(a^{2} + ab + b^{2})$ - not helpful.
-3) The ratio test is not helpful either.
-
-REPLY [10 votes]: The formula $a^{3} - b^{3} = (a-b)(a^{2} + ab + b^{2})$ actually can be quite helpful. Note that it implies that
-$$\sqrt[3]{n+1}-\sqrt[3]{n-1}=\frac{2}{(n+1)^{2/3}+(n^2-1)^{1/3}+(n-1)^{2/3}}.$$
-For sufficiently large $n$ ($n\gt2$ will do for sure, and you can get tighter bounds by going further), that denominator is squeezed between $\frac{3}{2}n^{2/3}$ and $6n^{2/3}$, and therefore the convergence question is the same as for the series $\sum_{n=1}^\infty n^{-2\alpha/3}$.<|endoftext|>
-TITLE: How to use the Lagrange's remainder to prove that log(1+x) = sum(...)?
-QUESTION [7 upvotes]: Using Lagrange's remainder, I have to prove that:
-$\log(1+x) = \sum\limits_{n=1}^\infty (-1)^{n+1} \cdot \frac{x^n}{n}, \; \forall |x| < 1$
-I am not quite sure how to do this. I started with the Taylor series for $x_0 = 0$:
-$f(x) = \sum\limits_{n=0}^N \frac{f^{(n)}(x_0)}{n!} \cdot x^n + r_N$, where $r_N$ is the remainder. Then, I used induction to prove that the $n$-th derivative of $\log(1+x)$ can be written as:
-$f^{(n)}(x) = (-1)^{n+1} \cdot \frac{(n-1)!}{(1+x)^n}, \forall n \in \mathbb{N}$
-I plugged this formula into the Taylor series for $\log(1+x)$ and ended up with:
-$f(x) = \sum\limits_{n=1}^N (-1)^{n+1} \cdot \frac{x^n}{n} + r_N$, which already looked quite promising.
-As the formula which I have to prove doesn't have that remainder $r_N$, I tried to show that $\lim_{N \to \infty} r_N = 0$, using Lagrange's remainder formula (for $x_0 = 0$ and $|x| < 1$).
-So now I basically showed that the formula is valid for $x \to x_0 = 0$. I also showed that the radius of convergence of this power series is $r = 1$, that is to say the power series converges $\forall |x| < 1$.
-What is bugging me is the fact that, in my opinion, the formula is only valid for $x \to 0$. I mean sure, the radius of convergence is 1, but does this actually tell me that the formula is valid within $(-1,1)$? I've never done something like this before, thus the insecurity. I'd be delighted if someone could help me out and tell me whether the things I've shown are already sufficient or whether I still need to prove something.
-
-REPLY [4 votes]: I think there is a problem with the above solution. In the estimate, $$|f^{(k+1)}(x)| \leq \left(\frac{1}{1-r}\right)^{k+1},$$ there is a dropped $k!$. Indeed, it should read,
-$$|f^{(k+1)}(x)| \leq \frac{k!}{(1-r)^{k+1}},$$
-and thus
-$$
-|r_k(x)| \leq \left( \frac{r}{1-r} \right)^{k+1} \cdot \frac{k!}{(k+1)!} = \left( \frac{r}{1-r} \right)^{k+1} \cdot \frac{1}{k+1}.
-$$
-Unfortunately now this expression won't go to $0$ if $r>\frac{1}{2}$ (the exponential term then dominates $\frac{1}{k+1}$).
-The above solution does work for $x \in (-1/2,1)$. Here's a way to handle the remaining cases. In fact, let's just take $x \in (-1,0)$. Now the integral form of the remainder gives:
-$$
-r_k(x) = \int_0^x \frac{f^{(k+1)}(t)}{k!} (x-t)^{k} dt
-= \int_0^x \frac{(-1)^k}{(1+t)^{k+1}} (x-t)^{k} dt
-$$
-Note that for $x<0$, the above integrand has the same sign for every $t$. In particular,
-$$
-|r_k(x)|
-= \int_x^0 \frac{1}{(1+t)^{k+1}} (t-x)^{k} dt.
-$$
-Consider $\frac{t-x}{1+t}$ as a function in $t$ with $x$ fixed. It is then an increasing function on $[x,0]$ with maximal value $-x$ at $t=0$. Thus,
-$$
-|r_k(x)| \leq \int_x^0 (-x)^k \frac{1}{1+t} dt \leq \int_x^0 (-x)^k \frac{1}{1+x} dt = \frac{(-x)^{k+1}}{1+x}.
-$$
-As desired, this last expression does go to $0$ as $k \to \infty$, since $-1<x<0$.<|endoftext|>
-TITLE: How to prove this inequality without use of computers?
-QUESTION [8 upvotes]: With help from Maple, I got
-$$\left(\frac{ax+by+cz}{x-y}\right)^2+\left(\frac{ay+bz+cx}{y-z}\right)^2+\left(\frac{az+bx+cy}{z-x}\right)^2-(c-a)^2-(c-b)^2$$
-equal to
-$$\frac{(c(x^3+y^3+z^3)+(a-c)(x^2y+y^2z+z^2x)+(b-c)(x^2z+y^2x+z^2y)-3(a+b-c)xyz)^2}{(x-y)^2(y-z)^2(x-z)^2}$$ which of course is $\ge 0$.
-But with no help from a computer algebra, how would one prove:$$\left(\frac{ax+by+cz}{x-y}\right)^2+\left(\frac{ay+bz+cx}{y-z}\right)^2+\left(\frac{az+bx+cy}{z-x}\right)^2\ge (c-a)^2+(c-b)^2 ?$$
-
-REPLY [2 votes]: We may express it in matrix form. Let $u = [a, \ b, \ c]^\mathsf{T}$,
-$p_1 = [x, \ y, \ z]^\mathsf{T}$, $p_2 = [y, \ z, \ x]^\mathsf{T}$,
-$p_3 = [z, \ x, \ y]^\mathsf{T}$,
-$q_1 = [-1, \ 0, \ 1]^\mathsf{T}$ and $q_2 = [0, \ -1, \ 1]^\mathsf{T}$.
-Let
-$$S = \frac{1}{(x-y)^2} p_1p_1^\mathsf{T}
-+ \frac{1}{(y-z)^2} p_2p_2^\mathsf{T} + \frac{1}{(z-x)^2} p_3p_3^\mathsf{T}
-- q_1q_1^\mathsf{T} - q_2q_2^\mathsf{T}.$$
-We have $\mathrm{LHS} - \mathrm{RHS} = u^\mathsf{T}S u$.
-It suffices to prove that $S$ is positive semidefinite.
-Note that all $2\times 2$ minors of $S$ are zero
-(e.g., $S_{1,1} S_{2,2}-S_{1,2}S_{2,1} = 0$, etc.). Thus, $\mathrm{rank}(S)\le 1$.
-Also, $S_{1,1} = x^2/(x-y)^2+y^2/(y-z)^2+z^2/(z-x)^2-1 > 0$
-(the proof is not difficult).
-Thus, $S$ is positive semidefinite. We are done.<|endoftext|>
-TITLE: The product of $n$ consecutive integers is divisible by $n$ factorial
-QUESTION [46 upvotes]: How can we prove that the product of $n$ consecutive integers is divisible by $n$ factorial?
-
-Note: In this subsequent question and the comments here the OP has clarified that he seeks a proof that "does not use the properties of binomial coefficients". Please post answers in said newer thread so that this incorrectly-posed question may be closed as a duplicate.
-
-REPLY [6 votes]: This answer completely formalizes the argument of Nurdin Takenov in a manner sufficient to easily be expressed in an automated theorem prover such as PVS. Note that this proof uses strong induction on the sum m+k to avoid any nasty double inductions, and is explicit about all assumptions on the arguments:
-DEFINITION: Product of k consecutive positive integers starting at m (m>=1, k>=1)
-i.e. P(m,k) ==def== m...(m+k-1)
-LEMMA: P(m,k) = k*P(m,k-1) + P(m-1,k) if m>=2 and k>=2
-PROOF: P(m,k) = m...(m+k-1)
- = m...(m+k-2)[ k + (m-1) ]
-
- = k*(m)...(m+k-2) + (m-1)...(m+k-2)
-
- = k*P(m,k-1) + P(m-1,k) QED
-
-THEOREM: The product of k consecutive positive integers starting with m is divisible by k factorial
-i.e. k! | P(m,k)
-PROOF (by strong induction on all sums m+k <= n):
-(i) BASIS: If n = 2 then clearly m=k=1 and we have k! = 1! clearly divides P(m,k) = 1
-(ii) INDUCTION STEP: Assume k! | P(m,k) for all m+k<=n. Now to show that k! | P(m,k) for all m+k <= n+1
-If m=1 we are done since P(1,k) = 1...k = k! and if k=1 then k! = 1! clearly divides P(m,k). So in the remainder
-we may assume that m >= 2 and k >= 2. Also if m+k<=n we are done vacuously, so consider only that m+k = n+1.
-By the lemma we have P(m,k) = k*P(m,k-1) + P(m-1,k) so by the induction hypothesis we have (k-1)! | P(m,k-1)
-and thus also k! | k*P(m,k-1) and also by the induction hypothesis k! | P(m-1,k) and finally k! | P(m,k) QED<|endoftext|>
-TITLE: Closure of a subset in a metric space
-QUESTION [7 upvotes]: Let $(X,d)$ be a metric space and $S \subset X$. Show that $d_S(x):=\text{inf}\{d(x,s): s \in S\}=0 \Leftrightarrow x \in \overline S .$
-Notes: $\overline S$ is the closure of S. Maybe you can use that a closed set is also closed for sequences in the set? I think the difficult part is when S is open; otherwise it's trivial, as the closure would equal S.
-
-REPLY [8 votes]: If $x\notin \overline{S}$, then there exists $r>0$ such that $B_r(x)\cap S=\emptyset$. It follows that $d(x,s)\ge r>0$ for all $s\in S$ and hence $d_S(x)>0$.
-And these steps can be reversed, i.e. the steps above are actually "if and only if". So you get the proof.
-
-REPLY [3 votes]: If $x \in \overline{S}$, then there is a sequence $(s_n)$ in $S$ converging to $x$ (e.g. take $s_n$ to be some element in $S \cap B(x;\tfrac{1}{n})$). Then $d_S(x) \leq \inf\lbrace d(x,s_n) \;\vert\; n \in \mathbb{N}\rbrace = 0$ (why?).
-For the other direction, we prove the contrapositive: If $x \notin \overline{S}$, then there is some ball around $x$ disjoint from $S$. This gives that the infimum defining $d_S(x)$ must be greater than $0$ (why?).<|endoftext|>
-TITLE: Field reductions
-QUESTION [11 upvotes]: If there is a field $F$ that is a field reduction of the real numbers, that is $F(a)=\mathbb{R}$ for some $a$, let's also denote this $F=\mathbb{R}(\setminus a)$, then given $x \in \mathbb{R}$ is there a general method to determine whether $x$ is in $F$ or $x$ is in $\mathbb{R}\setminus F$?
-
-REPLY [11 votes]: In fact there are no nontrivial "field reductions" of $\mathbb{R}$: if $\mathbb{R} = F(a)$, then $a \in F$ and $F = \mathbb{R}$.
-Case 1: $a$ is algebraic over $F$, hence $F(a) = F[a]$, so $d = [F[a]:F]$ is finite. Then $[\mathbb{C}:F] = 2d$, i.e., "the" algebraic closure of $F$ has finite degree over $F$.
-
-Theorem: Let $K$ be a field and $\overline{K}$ any algebraic closure. If $[\overline{K}:K] = d$ is finite, then $d = 1$ or $d = 2$.
-Proof: This is essentially the Grand Artin-Schreier Theorem (see e.g. Section 12.5 of http://math.uga.edu/~pete/FieldTheory.pdf for a proof of that.) Namely, Artin-Schreier says that if $\overline{K}/K$ is a finite Galois extension, then the degree is either $1$ or $2$. Certainly $\overline{K}/K$ is normal. And for the purposes of this question we are in characteristic zero, so everything is separable. Therefore $\overline{K}/K$ is Galois. (But here is a proof of the separability in the general case: if $\overline{K}/K$ is not separable, then there is a nontrivial subextension $L$ such that $[\overline{K}:L] = p^n$ and $\overline{K} = L(a^{p^{-n}})$. But this is impossible: the polynomial $t^p - a$ is irreducible over $L$ iff all of the polynomials $t^{p^n} - a$ are irreducible over $L$. So $\overline{K}/K$ is separable.)
-
-Thus we must have $d = 1$, i.e., $F = \mathbb{R}$.
-Case 2: $a$ is transcendental over $F$. But then the field $F(a)$ is isomorphic to the rational function field $F(t)$. Such a field cannot be isomorphic to $\mathbb{R}$, because it admits finite extensions of degree $n$ for all $n \in \mathbb{Z}^+$, e.g.
-$F(t^{\frac{1}{n}})$.<|endoftext|>
-TITLE: Counting number of moves on a grid
-QUESTION [10 upvotes]: Imagine a two-dimensional grid consisting of 20 points along the x-axis and
-10 points along the y-axis. Suppose the origin (0,0) is in the bottom-left corner and the
-point (20,10) is the top-right corner. A path on the grid consists of a series of moves in
-which each move is either one unit to the right or one unit up. Diagonal moves are not
-allowed. How many different ways are there to construct a path starting at (0,0) and
-ending at (20,10)?
-I'm a little stuck on this one. I feel I'm headed in the right direction, but I'm not sure if I'm doing this right. For every move, there are 2 possible choices. If we want to get to (20,10), then there are 200 points from the origin to this point. And I think order matters here, so we would use permutations, so I come up with 200 P 2, which is 39,800.
-
-REPLY [15 votes]: There are two ways of doing this. One is Ross Millikan's: you will make ten "up" moves, and 20 "right" moves; the only question is which order you make them in. Imagine placing the "right" moves on a row; now you need to decide where to do the "up" moves: you do so by inserting them "in between" (or before, or after) the "right" moves.
So you need to choose ten places to put "up" moves: there are 21 locations for them (nineteen in between the "right" moves, one before all of them, one after), and you are allowed to choose the same location more than once.
-This is combinations with repetition: the formula is $\binom{n+r-1}{r}$, where you have $n$ possibilities, and must make $r$ choices with repetitions allowed. In this case, $n=21$, $r=10$, so you get $\binom{30}{10}$.
-There is another way of doing it, which is more graphical. I'll do it with a 4 by 3 array so you see how it works. You have this array:
-$$\begin{array}{cccc}
-\cdot & \cdot & \cdot & \cdot\\
-\cdot & \cdot & \cdot & \cdot\\
-\cdot & \cdot & \cdot & \cdot
-\end{array}$$
-Now, you start at the bottom left, so there is only one way to get there; we put a $1$ next to it.
-$$\begin{array}{llll}
-\cdot & \cdot & \cdot & \cdot\\
-\cdot & \cdot & \cdot & \cdot\\
-\cdot\;1& \cdot & \cdot & \cdot
-\end{array}$$
-Then, you can go either up or right; there is only one way to get to those points (via the first move); we put a $1$ next to them:
-$$\begin{array}{llll}
-\cdot & \cdot & \cdot & \cdot\\
-\cdot\;1 & \cdot & \cdot & \cdot\\
-\cdot\;1& \cdot\;1 & \cdot & \cdot
-\end{array}$$
-Now: to get to $(1,1)$, you can either get to it from $(1,0)$ or from $(0,1)$; since there is only one way to get to each of those, there are two ways to get to $(1,1)$. On the other hand, only one way to get to $(2,0)$ or to $(0,2)$:
-$$\begin{array}{llll}
-\cdot\;1 & \cdot & \cdot & \cdot\\
-\cdot\;1 & \cdot\;2 & \cdot & \cdot\\
-\cdot\;1& \cdot\;1 & \cdot\;1 & \cdot
-\end{array}$$
-Next: to get to $(1,2)$, you can arrive either from $(0,2)$ (one way of being there), or from $(1,1)$ (two ways of getting there); so in total, three ways. Likewise, you have three ways to get to $(2,1)$, because you can either go up from $(2,0)$, and there is only one way to do all of that, or you can go right from $(1,1)$ (and there are two ways of doing that, corresponding to the two ways there are to get to $(1,1)$); so we have:
-$$\begin{array}{llll}
-\cdot \;1 &\cdot\;3 & \cdot & \cdot\\
-\cdot\;1 & \cdot\;2 & \cdot\;3 & \cdot\\
-\cdot\;1& \cdot\;1 & \cdot\;1 & \cdot
-\end{array}$$
-Continuing this way, we get:
-$$\begin{array}{llll}
-\cdot\;1 & \cdot\;3 & \cdot\;6 & \cdot\;10\\
-\cdot\;1 & \cdot\;2 & \cdot\;3 & \cdot\;4\\
-\cdot\;1 & \cdot\;1 & \cdot\;1 & \cdot\;1
-\end{array}$$
-So there are $10$ ways to get to the top right corner in the 4 by 3 case.
-You may even recognize that these numbers are just Pascal's triangle lying on its side! Well, there is a combinatorial formula for the entries of Pascal's triangle: the $r$th entry in the $m$th row corresponds to the coefficient of $a^{m-r}b^{r-1}$ in the binomial expansion of $(a+b)^{m-1}$, so it equals $\binom{m-1}{r-1}$. To figure out the entry that corresponds to the top right corner, note that you go "down" one row for each position on the $x$-axis, and another one for each step up. So here we have gone to the 4th row on the horizontal steps, and then to the 6th row by going up twice. Each step up is a move "right" on the row. So with a 4 by 3, we are in row $4+(3-1)=6$, and we are in position $3$ of that row. According to the formula above, that corresponds to $\binom{4+2-1}{3-1}=\binom{5}{2}$. This corresponds to the need to make $3$ moves right and two moves up, so we need to choose where to place the two up moves among the three right moves; there are four places to put them in (before the three, after the three, or in the two spaces in between), so the formula I gave above gives this answer as well.
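-Remark: the additive fill-in rule above is a textbook dynamic program and is easy to mechanize. A short Python sketch (ours, not the answerer's) that reproduces both the 4-by-3 example and the original grid:
-
-    def count_paths(right_steps, up_steps):
-        # ways[y][x] = number of monotone paths from (0,0) to (x,y)
-        ways = [[0] * (right_steps + 1) for _ in range(up_steps + 1)]
-        ways[0][0] = 1
-        for y in range(up_steps + 1):
-            for x in range(right_steps + 1):
-                if x or y:
-                    ways[y][x] = (ways[y][x - 1] if x else 0) + (ways[y - 1][x] if y else 0)
-        return ways[up_steps][right_steps]
-
-    print(count_paths(3, 2))     # 10, the top-right entry of the 4-by-3 array above
-    print(count_paths(20, 10))   # 30045015 = C(30, 10)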
<|endoftext|>
-TITLE: Probability of the maximum (Levy Stable) random variable in a list being greater than the sum of the rest?
-QUESTION [10 upvotes]: Original post on Mathoverflow here.
-Given a list of identical and independently distributed Levy Stable random variables, $(X_0, X_1, \dots, X_{n-1})$, what is the probability that the maximum exceeds the sum of the rest? i.e.:
-$$ M = \text{Max}(X_0, X_1, \dots, X_{n-1}) $$
-$$ \text{Pr}( M > \sum_{j=0}^{n-1} X_j - M ) $$
-Where, in Nolan's notation, $X_j \in S(\alpha, \beta=1, \gamma, \delta=0 ; 0)$, where $\alpha$ is the critical exponent, $\beta$ is the skew, $\gamma$ is the scale parameter and $\delta$ is the shift. For simplicity, I have taken the skew parameter, $\beta$, to be 1 (maximally skewed to the right) and $\delta=0$ so everything has its mode centered in an interval near 0.
-From numerical simulations, it appears that for the region of $0 < \alpha < 1$, the probability converges to a constant, regardless of $n$ or $\gamma$. Below is a plot of this region for $n=500$, $0< \alpha < 1$, where each point represents the result of 10,000 random draws. The graph looks exactly the same for $n=100, 200, 300$ and $400$.
-
-For $1 < \alpha < 2$ it appears to go as $O(1/n^{\alpha - 1})$ (maybe?), regardless of $n$ or $\gamma$. Below is a plot of the probability for $\alpha \in (1.125, 1.3125)$ as a function of $n$. Note that it is a log-log plot and I have provided the graphs $1/x^{.125}$ and $1/x^{.3125}$ for reference. It's hard to tell from the graph unless you line them up, but the fit for each is a bit off, and it appears as if the (log-log) slope of the actual data is steeper than my guess for each. Each point represents 10,000 iterations.
-
-For $\alpha=1$ it's not clear (to me) what's going on, but it appears to be a decreasing function dependent on $n$ and $\gamma$.
-I have tried making a heuristic argument in the form of:
-$$\text{Pr}( M > \sum_{j=0}^{n-1} X_j - M) \le n \text{Pr}( X_0 - \sum_{j=1}^{n-1} X_j > 0 )$$
-Then using formulas provided by Nolan (p. 27) for the parameters of the implicit r.v. $ U = X_0 - \sum_{j=1}^{n-1} X_j$ combined with the tail approximation:
-$$ \text{Pr}( X > x ) \sim \gamma^{\alpha} c_{\alpha} ( 1 + \beta ) x^{-\alpha} $$
-$$ c_{\alpha} = \sin( \pi \alpha / 2) \Gamma(\alpha) / \pi $$
-but this leaves me nervous and a bit unsatisfied.
-Just for comparison, if $X_j$ were taken to be uniform r.v.'s on the unit interval, this function would decrease exponentially quickly. I imagine similar results hold were the $X_j$'s Gaussian, though any clarification on that point would be appreciated.
-Getting closed form solutions for this is probably out of the question, as there isn't even a closed form solution for the pdf of Levy-Stable random variables, but getting bounds on what the probability is would be helpful. I would appreciate any help with regards to how to analyze these types of questions in general, such as general methods or references to other work in this area.
-If this problem is elementary, I would greatly appreciate any reference to a textbook, tutorial or paper that would help me solve problems of this sort.
-UPDATE: George Lowther and Shai Covo have answered this question below. I just wanted to give a few more pictures that compare their answers to some of the numerical experiments that I did.
-Below is the probability of the maximum element being larger than the rest for a list size of $n=100$ as a function of $\alpha$, $\alpha \in (0,1)$. Each point represents 10,000 simulations.
-
-Below are two graphs for two values of $\alpha \in \{1.53125, 1.875\}$. Both have the function $(2/\pi) \sin(\pi \alpha / 2)\, \Gamma(\alpha)\, n \left( \tan(\pi \alpha/2)\, (n^{1/\alpha} - n)\right)^{-\alpha}$, with different prefactors in front of them to get them to line up ($1/4$ and $1/37$, respectively), superimposed for reference.
-
-
-As George Lowther correctly pointed out, for the relatively small $n$ being considered here, the effect of the extra $n^{1/\alpha}$ term (when $1 < \alpha < 2$) is non-negligible and this is why my original reference plots did not line up with the results of the simulations. Once the full approximation is put in, the fit is much better.
-When I get around to it, I will try and post some more pictures for the case when $\alpha=1$ as a function of $n$ and $\gamma$.
-
-REPLY [4 votes]: My previous answer was useless. The new answer below completes George's answer for the $0 < \alpha < 1$ case, illustrated in the first graph above.
-For this case, George provided the asymptotic approximation $P_n \sim {\rm P}(2 \Delta Z^*_1 - Z_1 > 0)$ as $n \to \infty$, where $P_n$ is the probability the OP asked for, $Z = \lbrace Z_t : t \geq 0 \rbrace$ is a strictly stable Lévy process of index $\alpha$, and $\Delta Z^*_1$ is the largest jump of $Z$ in the time interval $[0,1]$. It is not easy to obtain
-${\rm P}(2 \Delta Z^*_1 - Z_1 > 0)$ in closed form, but indeed possible, and in fact the following more general result is known:
-$$
-{\rm P}\bigg(\frac{{\Delta Z_1^* }}{{Z_1 }} > \frac{1}{{1 + y}}\bigg) = \frac{{y^\alpha }}{{\Gamma (1 - \alpha )\Gamma (1 + \alpha )}}, \;\; y \in [0,1],
-$$
-where $\Gamma$ is the gamma function. Letting $y = 1$ thus gives $P_n \sim 1/[\Gamma(1 - \alpha) \Gamma(1 + \alpha)]$.
-This asymptotic expression agrees very well with the graph above, as I confirmed by checking various values of $\alpha \in (0,1)$. (For example, $\alpha = 1/2$ gives $P_n \sim 2/\pi \approx 0.6366$.)
-A generalization. Instead of just considering the probability ${\rm P}(M > \sum\nolimits_{i = 0}^{n - 1} {X_i } - M)$, we can consider the probability ${\rm P}(M > y^{-1}[\sum\nolimits_{i = 0}^{n - 1} {X_i } - M])$, $y \in (0,1]$. In view of George's answer, this should correspond to the formula given above for ${\rm P}(\Delta Z_1^* / Z_1 > 1/(1+y))$.
-This can be further generalized, in various directions, based on known results from the literature (three useful references in this context are indicated somewhere in the comments above/below).
-EDIT:
-As George observed, the term $\Gamma(1 - \alpha) \Gamma(1 + \alpha)$ can be simplified to $(\pi\alpha) / \sin(\pi\alpha)$. The former expression corresponds to $[\Gamma(1 - \alpha)]^k \Gamma(1 + k \alpha)$, which is incorporated in the explicit formula for ${\rm P}(\Delta Z_1^* / Z_1 > 1/(1+y))$, $y > 0$, of the form
-$\sum\nolimits_{k = 1}^{\left\lceil y \right\rceil } {a_k \varphi _k (y)} $. The function $\varphi _k (y)$ is some $(k-1)$-dimensional integral, hence that formula is computationally very expensive already for moderate values of $y$.
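-Remark: the limiting constant is easy to probe by simulation. A sketch assuming SciPy's levy_stable sampler (the differences between Nolan's S0 parametrization and SciPy's default are glossed over here, which should be fine for this qualitative check):
-
-    import numpy as np
-    from scipy.stats import levy_stable
-
-    def prob_max_exceeds_rest(alpha, n=100, trials=2000, seed=0):
-        x = levy_stable.rvs(alpha, 1.0, size=(trials, n),
-                            random_state=np.random.default_rng(seed))
-        m = x.max(axis=1)
-        return np.mean(m > x.sum(axis=1) - m)
-
-    # predicted limit sin(pi*alpha)/(pi*alpha); for alpha = 0.5 that is 2/pi ~ 0.6366
-    print(prob_max_exceeds_rest(0.5))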
<|endoftext|>
-TITLE: Drawing heart in Mathematica
-QUESTION [14 upvotes]: It's not really a typical math question. Today, while studying graphs, I suddenly got inquisitive about whether there exists a function that could possibly draw a heart-shaped graph. Out of sheer curiosity, I clicked on Google, which took me to this page.
-The page seems informative, and I am glad to learn certain new things! Now I am interested in drawing them on my own using Mathematica. So my question is: is it possible to draw them in Mathematica? If yes, please show me how.
-
-REPLY [2 votes]: A three-dimensional space curve with the shape of a red heart:
-
-The Mathematica code for the image above is:
-ParametricPlot3D[{Cos[u]*(4*Sqrt[1 - v^2]*Sin[Abs[u]]^Abs[u]), v,
-  Sin[u]*(4*Sqrt[1 - v^2]*Sin[Abs[u]]^Abs[u])},
- {u, -Pi, Pi}, {v, -1, 1}, Axes -> None, Mesh -> False,
- Boxed -> False,
- PlotStyle -> {Red, Specularity[White, 10]}]
-
-3D red heart with Mesh and lines:
-
-Mathematica code for the image above:
-ParametricPlot3D[{Cos[u]*(4*Sqrt[1 - v^2]*Sin[Abs[u]]^Abs[u]), v,
-  Sin[u]*(4*Sqrt[1 - v^2]*Sin[Abs[u]]^Abs[u])}, {u, -Pi,
-  Pi}, {v, -0.97, 0.97}, PlotPoints -> 50, Axes -> None,
- Boxed -> False,
- PlotStyle ->
-  Directive[Glow[Red], Specularity[White, 30], Opacity[0.15]],
- Mesh -> 50, Background -> Black, MeshStyle -> {Blue, Red},
- Lighting -> {{"Directional", Yellow, {{1.5, 1.5, 5}, {1.5, 1.5, 0}},
-   Pi/6}}]
-
-A variation on the use of the Taubin heart surface with hue:
-
-Mathematica code for the last image above:
-ContourPlot3D[(-1/10) x^2 z^3 -
-   y^2 z^3 + (2 x^2 + y^2 + z^2 - 1)^3 == 0, {x, -1.2, 1.2}, {y, -1.4,
-  1.4}, {z, -1.5, 1.5}, Mesh -> False, PlotPoints -> 60,
- Axes -> None, Boxed -> False,
- ContourStyle -> Directive[Opacity[0.5], Red],
- ColorFunction -> Function[{x, y, z, f}, Hue[z]]]
-
-For more customized heart images, see the post in my website/blog:
-https://knowledgemix.wordpress.com/2014/02/14/heart-to-heart-with-3d-math/<|endoftext|>
-TITLE: Factorial of a non-integer number
-QUESTION [13 upvotes]: My TI-83 calculator doesn't allow me to do this, but using the Windows calculator, I can compute the factorial of, say, 5.8. What does this mean and how does it work?
-
-REPLY [19 votes]: Perhaps you're looking for the Gamma function, which agrees with the usual factorial on natural numbers (shifted by one: $\Gamma(n+1)=n!$).
-Wikipedia's article on Gamma function
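-Remark: concretely, this is what the Windows calculator is computing. A two-line sketch in Python (our illustration):
-
-    import math
-
-    print(math.factorial(5))     # 120: the ordinary factorial, integers only
-    print(math.gamma(5.8 + 1))   # ~496.6, i.e. "5.8!" via Gamma(n+1) = n!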
<|endoftext|>
-TITLE: What is the double multiplication sign for?
-QUESTION [8 upvotes]: I accidentally typed 4 ** 5 into Google and received 1,024 instead of 20. But 4^5 is also 1024...
-Does the double multiplication sign also mean the power sign? Why two symbols?
-
-REPLY [6 votes]: Caveat: The symbol ^ in C family languages is bitwise xor. This is true in Python, too. The ** symbol is also seen in Python.<|endoftext|>
-TITLE: Show that $V = \mbox{ker}(f) \oplus \mbox{im}(f)$ for a linear map with $f \circ f = f$
-QUESTION [7 upvotes]: Question: Let $V$ be a $K$-vector space and $f: V \rightarrow V$ be linear. It holds that $f \circ f = f$. Show that $V = \mbox{ker}(f) \oplus \mbox{im}(f)$.
-My attempt: So I guess that the $\oplus$ denotes a direct sum, which means I have to show that (i) $V = \mbox{ker}(f) + \mbox{im}(f)$ and (ii) $\mbox{ker}(f) \cap \mbox{im}(f) = \{0\}$.
-I tried to do (ii) first: Let $v \in \mbox{im}(f) \cap \mbox{ker}(f)$
-$\Rightarrow \exists u: f(u)=v \wedge f(v) = 0$
-(can I put a "Rightarrow" here?) $(f \circ f)(u)=f(f(u)) = f(v) = 0$
-As for (i), I am having difficulty with an approach to showing that $V = \mbox{ker}(f) + \mbox{im}(f)$. Should I even be trying to do this in the first place? If so, any advice as to how?
-
-REPLY [9 votes]: Let's see (ii) first: (Yes, your use of the implication symbol $\Rightarrow$ is appropriate.) You want to use that $f\circ f=f$, which you do not mention explicitly. The point is: In (ii) you need to show that the $v$ you are considering is $0$. You have: $f\circ f=f$, so $v=f(u)=(f\circ f)(u)$ etc, as you did. This gives $v=0$.
-For (i), let $v$ be any element of $V$. What can we say about $v-f(v)$? Let's call it $w$. We have $f(w)=f(v-f(v))=f(v)-f(f(v))=f(v)-f(v)=0$. Here, I used that $f$ is linear and (again) that $f\circ f=f$. This shows that $w\in{\rm ker}(f)$. So, $v=(v-f(v))+f(v)=w+f(v)$ is the sum of an element of the kernel (namely, $w$) and one of the image (namely, $f(v)$), and that is precisely what you needed.
-(A meta-question here is how would one think of considering $v-f(v)$. Ok, let's try to work backwards. Suppose we already know that $v=a+b$ with $a$ in the kernel and $b$ in the range. Then $f(v)=f(a)+f(b)=f(b)$, since $a$ is in the kernel. Also, $f(v)=f(f(v))$, so it is reasonable, as a first approach, to let $b=f(v)$, and see if we can directly prove that $a=v-b$ is in the kernel. And this is precisely what we did.)
-
-REPLY [3 votes]: $v = (v - f(v)) + f(v)$<|endoftext|>
-TITLE: Freyd-Mitchell's embedding theorem
-QUESTION [6 upvotes]: Freyd-Mitchell's embedding theorem states that: if A is a small abelian category, then there exists a ring R and a full, faithful and exact functor F: A → R-Mod.
-This is quite the theorem and has several useful applications (it allows one to do diagram chasing in abstract abelian categories, etc.)
-I have been asked to state and prove the theorem in class (a homological algebra course). However, having read the texts I was recommended, I'm about to give in:
-Freyd's Abelian Categories says that the text, excepting the exercises, tries to be a geodesic leading to the theorem. If you take out the exercises, the text is probably 120 pages long. Impossible to do in 2:30 hours. To give you an idea, the course I'm taking is based on Rotman's "An Introduction to Homological Algebra", which works in R-Mod...
-Mitchell's Theory of Categories is very hard to read, and to prove the theorem you have tons of definitions, propositions and lemmas to work through.
-Weibel's An Introduction to Homological Algebra redirects me to Swan, The Theory of Sheaves, a book which is unavailable in my university's library. I've leafed through Swan's Algebraic K-Theory: the theorem is proved, but it is also long, hard and painful to read, and assumes a lot of knowledge I don't have (I had never seen a weakly effaceable functor, or a Serre subcategory; and it certainly is not well known to me that the category of additive functors from a small abelian category to the category of abelian groups is well-powered, right complete, and has injective envelopes!)
-I'm starting to believe it's an impossible task. But maybe there are more modern proofs which require less heavy machinery and technicalities?
-
-REPLY [2 votes]: Since it has been some time since this question was posted (in fact, it's four days away from its second-year birthday), I believe it's time to answer it with
-the link to its MO counterpart,
-which has an awesome answer by Theo Buehler.<|endoftext|>
-TITLE: Roots of Legendre Polynomial
-QUESTION [28 upvotes]: I was wondering if the following properties of the Legendre polynomials are true in general. They hold for the first ten or fifteen polynomials.
-
-Are the roots always simple (i.e., multiplicity $1$)?
-Except for low-degree cases, the roots can't be calculated exactly, only approximated (unlike Chebyshev polynomials).
-Are roots of the entire family of Legendre Polynomials dense in the interval $[0,1]$ (i.e., it's not possible to find a subinterval, no matter how small, that doesn't contain at least one root of one polynomial)?
-
-If anyone knows of an article/text that proves any of the above, please let me know. The definition of these polynomials can be found on Wikipedia.
-
-REPLY [10 votes]: The density of the roots of any family of orthogonal polynomials follows from this result:
-
-If $\{p_n\}$ is a family of orthogonal polynomials with roots in $[-1,1]$ and $N(a,b,n)$ represents the number of roots of $p_n$ in $[\cos(b),\cos(a)]$ then
-$$\lim_{n\to \infty} \frac{N(a,b,n)}{n} = \frac{b-a}{\pi}$$
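-Remark: both the simplicity and the density are easy to see numerically. A sketch assuming NumPy, whose Gauss-Legendre nodes are exactly the roots of $P_n$ (the subinterval $[0.45,0.55]$ is our arbitrary choice):
-
-    import numpy as np
-
-    a, b = np.arccos(0.55), np.arccos(0.45)   # so that [0.45, 0.55] = [cos(b), cos(a)]
-    for n in (10, 50, 200):
-        roots, _ = np.polynomial.legendre.leggauss(n)
-        assert len(set(roots)) == n           # all n roots distinct, i.e. simple
-        frac = np.mean((roots >= 0.45) & (roots <= 0.55))
-        print(n, frac, (b - a) / np.pi)       # fraction of roots there vs. the (b-a)/pi limit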
<|endoftext|>
-TITLE: Calculate $\lim\limits_{y\to{b}}\frac{y-b}{\ln{y}-\ln{b}}$
-QUESTION [10 upvotes]: How can we find $\displaystyle \lim_{y\to{b}}\frac{y-b}{\ln{y}-\ln{b}}$ without using:
-(a) L'Hôpital's rule, (b) the limit $\displaystyle \lim_{h \to 0}\frac{e^h-1}{h} = 1$, and (c) the fact that $\displaystyle \frac{d}{dx}\left(e^x\right) = e^x$.
-The reason for the conditions is that with this limit I'm trying to prove (c), and I've done so with (b) and I gather it would be circular to use (a). So that's that. Also, I would appreciate it if you could share one or more ways of proving that the derivative of $e^x$ is $e^x$. Thanks a lot for your time.
-
-REPLY [2 votes]: If $f$ is convex on $[a,b]$ or $[b,a]$ then
-$$ f\Big(\frac{a+b}2\Big) \le \frac1{b-a}\int_a^b f(x)\,dx \le \frac{f(a)+f(b)}2 \tag{$\ast$} $$
-(The first inequality is by Jensen, since $\frac{a+b}2=\frac1{b-a}\int_a^b x\,dx$; the second comes from the change of variables $x=(1-t)a+tb=a+t(b-a)$ and applying the convexity of $f$ pointwise.)
-Taking $f(x)=\frac1x$ and taking reciprocals throughout yields
-$$ \frac1{\frac12(\frac1a+\frac1b)} \le \frac{b-a}{\log b-\log a} \le \frac{a+b}2 $$
-which is the inequality of the harmonic, logarithmic, and arithmetic means. By squeezing,
-$$ \lim_{a\to b} \frac{b-a}{\log b-\log a} = b $$
-Notes:
-
-In fact the logarithmic mean is also bounded by the geometric mean, i.e.,
-$$ \sqrt{ab} \le \frac{b-a}{\log b - \log a} $$
-This is stronger than the bound by the harmonic mean proved above, but the only proof I know is to take $f(x)=e^x$ in ($\ast$), and to evaluate the resulting integral we need to use $\frac{d}{dx} e^x = e^x$.
-An advantage (?) of this proof is that it doesn't need the fundamental theorem of calculus. (In fact it verifies FTC for $\int\frac1x\,dx$.) That's assuming you define $\log$ as the integral of $\frac1x$, that you prove a change of variables theorem for linear changes of variables directly by Riemann sums, and that you evaluate $\int_a^b x\,dx$ directly by Riemann sums.<|endoftext|>
-TITLE: Numbers of circles around a circle
-QUESTION [11 upvotes]: "When you draw a circle in a plane of radius $1$ you can perfectly surround it with $6$ other circles of the same radius."
-BUT when you draw a circle in a plane of radius $1$ and try to perfectly surround the central circle with $7$ circles, you have to change the radius of the surrounding circles.
-How can I find the radius of the surrounding circles if I want to use more than $6$ circles?
-ex:
-$7$ circles of radius $0.4$
-$8$ circles of radius $0.2$
-
-REPLY [5 votes]: Let us recall a very simple fact about the situation of $n$ identical circles, $n\geq3$, perfectly surrounding the unit circle. If we connect all the points where these circles touch the unit circle, we get a regular polygon inscribed in the unit circle. The side length of a regular polygon inscribed in a circle of radius $R$ is $l=2R\sin(\frac{\pi}{n})$, where $n$ is the number of sides of the polygon, here equal to the number of surrounding circles. In our case $l=2\sin(\frac{\pi}{n})$. Now imagine that of the whole picture there remain only the unit circle and two of the surrounding circles that are next to each other. Connect the three centers of these circles and also consider the corresponding side of the inscribed polygon; the similar triangles obtained, by Thales' theorem, yield:
-$$ \frac{l}{2r}=\frac{1}{1+r};\qquad r=\frac{2\sin(\frac{\pi}{n})}{2-2\sin(\frac{\pi}{n})}=\frac{\sin(\frac{\pi}{n})}{1-\sin(\frac{\pi}{n})}.$$
-The proof is complete.
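-Remark: evaluating the formula is a one-liner, and it also shows what the radii in the question's example should have been (our quick check):
-
-    from math import pi, sin
-
-    r = lambda n: sin(pi / n) / (1 - sin(pi / n))
-    print(r(6))   # 1.0000...: six unit circles, as in the quoted fact
-    print(r(7))   # 0.7665...: seven surrounding circles (not 0.4)
-    print(r(8))   # 0.6199...: eight surrounding circles (not 0.2)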
<|endoftext|>
-TITLE: The set of rationals has the same cardinality as the set of integers
-QUESTION [6 upvotes]: The set of rationals $\mathbb{Q}$ has the same cardinality as
-the set of integers $\mathbb{Z}$. True or false?
-This was a question on an old exam for our class. The correct answer is true. However, I did some additional reading and came across Cantor's transfinite numbers. In the book I'm reading, it says that "there are more real numbers (which include rational and irrational numbers) than there are integers". So can it also be said that there are more rational numbers than integers? And so can we say that the above statement is false?
-
-REPLY [3 votes]: Just because the integers are a proper subset of the rationals doesn't mean that the rationals have a higher cardinality than the integers. Actually, there is a theorem that says that a set is infinite if and only if it has the same cardinality as a proper subset of itself (so your logic would only apply to a finite set).
-A set is countable, or has the same cardinality as the integers, if you can count the elements. In other words, you can label each element by a unique positive integer. We can see from the diagonals argument (see this image on Wikipedia for a good illustration) that this holds for the rational numbers. Once you get the hang of it, you can see that a lot of sets that seem to be a lot bigger than the set of integers are in fact the same "size" (have the same cardinality) as the integers. See Hilbert's Paradox of the Grand Hotel for a good example of this.<|endoftext|>
-TITLE: How to diagonalize a large sparse symmetric matrix to get the eigenvalues and eigenvectors
-QUESTION [7 upvotes]: How does one diagonalize a large sparse symmetric matrix to get the eigenvalues and the eigenvectors?
-The problem is that the matrix could be very large (though it is sparse), at most $2500\times 2500$. Is there a good algorithm to do that and, most importantly, one that I can implement in my own code? Thanks a lot!
-
-REPLY [8 votes]: $2500 \times 2500$ is a small matrix by current standards. The standard eig command of MATLAB should be able to handle this size with ease. Iterative sparse-matrix eigensolvers like those implemented in ARPACK or SLEPc become preferable if the matrix is much larger.
-Also, if you want to implement an eigensolver in your own code, just use the LAPACK library, which comes with very well developed routines for this purpose. MATLAB also ultimately invokes LAPACK routines for most of its numerical linear algebra.
-Semi-related note: the matrix need not be explicitly available for the large sparse solvers, because they usually just depend on being able to compute $A*x$ and $A'*x$.<|endoftext|>
-TITLE: Arc Length of Bézier Curves
-QUESTION [19 upvotes]: See also: answers with code on GameDev.SE
-
-How can I find out the arc length of a Bézier curve? For instance, the arc length of a linear Bézier curve is simply:
-$$s = \sqrt{(x_1 - x_0)^2 + (y_1 - y_0)^2}$$
-But what of quadratic, cubic, or nth-degree Bézier curves?
-$$\mathbf{B}(t) = \sum_{i=0}^n {n\choose i}(1-t)^{n-i}t^i\mathbf{P}_i$$
-
-REPLY [2 votes]: I worked out the closed form expression of length for a 3 point (quadratic) Bezier (below). I've not attempted to work out a closed form for 4+ points. This would most likely be difficult or complicated to represent and handle. However, a numerical approximation technique such as a Runge-Kutta integration algorithm would work quite well by integrating using the arc length formula.
-Here is some Java code for the arc length of a 3 point Bezier, with points a, b, and c (completed here with the declarations the original snippet left implicit; it assumes a small Point class with public float fields x and y):
-    static float quadBezierLength(Point a, Point b, Point c) {
-        Point v = new Point(), w = new Point();
-        v.x = 2*(b.x - a.x);
-        v.y = 2*(b.y - a.y);
-        w.x = c.x - 2*b.x + a.x;
-        w.y = c.y - 2*b.y + a.y;
-
-        float uu = 4*(w.x*w.x + w.y*w.y);
-
-        if (uu < 0.00001f) {
-            // degenerate case: the control points are (nearly) collinear
-            return (float) Math.sqrt((c.x - a.x)*(c.x - a.x) + (c.y - a.y)*(c.y - a.y));
-        }
-
-        float vv = 4*(v.x*w.x + v.y*w.y);
-        float ww = v.x*v.x + v.y*v.y;
-
-        float t1 = (float) (2*Math.sqrt(uu*(uu + vv + ww)));
-        float t2 = 2*uu + vv;
-        float t3 = vv*vv - 4*uu*ww;
-        float t4 = (float) (2*Math.sqrt(uu*ww));
-
-        return (float) ((t1*t2 - t3*Math.log(t2 + t1) - (vv*t4 - t3*Math.log(vv + t4))) / (8*Math.pow(uu, 1.5)));
-    }<|endoftext|>
-TITLE: Solving the equation $x e^x = e$
-QUESTION [7 upvotes]: I know that $x e^x = e$ means $x = 1$, but how do you solve for it?
-
-REPLY [7 votes]: With logarithms.
-Firstly $x>0$ (since $xe^{x}=e>0$ and $e^{x}>0$ force $x>0$). Then:
-$xe^{x}=e \Rightarrow \ln (xe^{x})=\ln e \Rightarrow \ln x + x = 1 $.
-So:
-If $x>1$ then $ \ln x >0$, thus $\ln x +x > 1+0 =1$ (contradiction).
-If $x<1$ then $ \ln x < 0$, thus $\ln x +x < 1+0 =1$ (contradiction).
-Thus $x=1$.<|endoftext|>
-TITLE: Can we reduce the number of states of a Turing Machine?
-QUESTION [9 upvotes]: My friend claims that one could reduce the number of states of a given Turing machine by somehow blowing up the tape alphabet. He does not have any algorithm, though; he only has the intuition.
-But I say it's not possible.
Else one could arbitrarily keep decreasing the states via the same algorithm and arrive at some constant-sized machine.
-Who is right?
-
-REPLY [7 votes]: Yes, it is possible to reduce the number of states a Turing machine uses to decide a problem $X$ by increasing the number of symbols.
-However, it gets very tricky when you are close to the minimum number of states/symbols needed to solve $X$.
-Here is a nice survey paper about the efforts to find the minimum number of states and symbols Turing machines need for universality:
-http://portal.acm.org/citation.cfm?id=1498068
-"The complexity of small universal Turing machines: A survey"
-Damien Woods and Turlough Neary,
-Theoretical Computer Science archive
-Volume 410, Issue 4-5, February 2009<|endoftext|>
-TITLE: Can a collection of subsets of $\mathbb{N}$ such that no one set contains another be uncountable?
-QUESTION [6 upvotes]: Let C be a collection of subsets of $\mathbb{N}$ such that $A,B\in C$, $A \neq B \Rightarrow A \not\subseteq B$. Can C be uncountable?
-
-REPLY [6 votes]: Here is another one, perhaps due to Donald J Newman.
-For each real number $x$ consider an (infinite) sequence of distinct rationals $R_x = \{r_{x1}, r_{x2}, \cdots \}$ which converges to $x$.
-For $x \neq y$, $R_x$ and $R_y$ have at most a finite intersection.
-Thus the set $S = \{R_x : x \in \mathbb{R}\}$ is an uncountable collection of infinite subsets of a countable set, none of which is a subset of another. In fact, any two members of the collection have only a finite intersection.<|endoftext|>
-TITLE: Operators commuting with translations
-QUESTION [5 upvotes]: Let $T$ be a bounded linear operator on $L^2(\mathbb R)$, and assume that $T$ commutes with the translations $\tau_x$. How do I show that $T$ is then given by convolution with a distribution?
-By the way, I know I can probably find the proof somewhere in one of Stein's books, but I would like to prove it myself without knowing what it should be, and I'm struggling a bit. So I would like some hints. In particular, I would like a method of deriving the result without knowing what it should be. If that is not possible, an intuitive argument for why it should be true would be nice as well.
-
-REPLY [2 votes]: Some suggestions:
-Even though the delta function is not in $L^2$, heuristically, what does your condition on $T$ say when applied to the delta function? Now, given the answer to the previous question and the fact that every function is an average of translated delta functions, what does one expect $Tf$ to be for an arbitrary $L^2$ function $f(x)$?
-You can also do this on the Fourier transform side, i.e., consider $G = F \circ T \circ F^{-1}$ as Plop suggested, and then look at how $G$ behaves on a given $\delta(x - a)$. Then again use the idea that an $L^2$ function is an average of translated delta functions.<|endoftext|>
-TITLE: Associativity of logical connectives
-QUESTION [31 upvotes]: According to the precedence of logical connectives, the operator $\rightarrow$ gets higher precedence than the $\leftrightarrow$ operator. But what about the associativity of the $\rightarrow$ operator?
-The implies operator ($\rightarrow$) does not have the associative property. That means that $(p \rightarrow q) \rightarrow r$ is not equivalent to $p \rightarrow (q \rightarrow r)$. Because of that, the question comes up how $p \rightarrow q \rightarrow r$ should be interpreted.
-The proposition $p \rightarrow q \rightarrow r$ can be defined in multiple ways that make sense:
-
-$(p \rightarrow q) \rightarrow r$ (left associativity)
-$p \rightarrow (q \rightarrow r)$ (right associativity)
-$(p \rightarrow q) \land (q \rightarrow r)$
-
-Which one of these definitions is used?
-I could not locate any book/webpage that mentions the associativity of logical operators in discrete mathematics.
-Please also cite the reference (book/reliable webpage) that you use to answer my question (as I'm planning to add this to the Wikipedia page about 'logical connectives').
-Thanks.
-PS: I got this question when I saw this problem:
-Check whether the following compound proposition is a tautology or not:
-$$ \mathrm{p} \leftrightarrow (\mathrm{q} \wedge \mathrm{r}) \rightarrow \neg\mathrm{r} \rightarrow \neg\mathrm{p}$$
-
-REPLY [2 votes]: The proposition $p \rightarrow q \rightarrow r$ can be defined in
-multiple ways that make sense:
-
-$(p \rightarrow q) \rightarrow r$ (left associativity)
-$p \rightarrow (q \rightarrow r)$ (right associativity)
-$(p \rightarrow q) \land (q \rightarrow r)$
-
-Which one of these definitions is used?
-
-In the absence of parentheses, a logician would read $$p \rightarrow q \rightarrow r$$ as $$p \rightarrow (q \rightarrow r);$$ however, in informal logic (including mathematics), $$p \Rightarrow q \Rightarrow r$$ is commonly read as $$(p \Rightarrow q) \;\text{ and }\; (q \Rightarrow r),$$ which isn't logically equivalent to $$p \Rightarrow (q \Rightarrow r).$$
-
-Similarly, $$p \Leftrightarrow q \Leftrightarrow r$$ is commonly read as $$(p \Leftrightarrow q) \;\text{ and }\; (q \Leftrightarrow r),$$ although a logician would read $$p \leftrightarrow q \leftrightarrow r$$ as $$p \leftrightarrow (q \leftrightarrow r),$$ which isn't logically equivalent to $$(p \leftrightarrow q) \;\text{ and }\; (q \leftrightarrow r).$$
-
-Here's a clear illustration: $$(p \Leftrightarrow q) \;\text{ and }\; (q \Leftrightarrow r)$$ asserts that $p,q,r$ all have the same truth value; however, both $$p \Leftrightarrow ( q \Leftrightarrow r)$$ and $$(p \Leftrightarrow q) \Leftrightarrow r $$ admit this truth assignment: $(p,q,r)=$ (false, false, true).
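-Remark: the non-equivalence of the two groupings is quickly confirmed by enumerating all eight truth assignments. A small Python sketch (ours):
-
-    from itertools import product
-
-    impl = lambda p, q: (not p) or q
-    for p, q, r in product([False, True], repeat=3):
-        left, right = impl(impl(p, q), r), impl(p, impl(q, r))
-        if left != right:
-            print(p, q, r)   # prints (False, False, False) and (False, True, False)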
<|endoftext|>
-TITLE: Stuck at the proof of the existence of the partial fraction expansion
-QUESTION [6 upvotes]: Let $P$ and $Q$ be complex polynomials with $\deg(P) < \deg(Q)$.
-Let $Q(z) = (z-z_1)^{k_1} (z-z_2)^{k_2}...(z-z_m)^{k_m}, z_i \in \mathbb{C} \text{ and } k_i \in \mathbb{N}$ be a complete decomposition. Then I have to prove that there exists a unique expansion of the following form:
-$\frac{P(z)}{Q(z)} = \sum\limits_{i=1}^m \sum\limits_{j=1}^{k_i} \frac{a_{ij}}{(z-z_i)^j}, a_{ij} \in \mathbb{C}$
-I started with the proof of the uniqueness of this expansion and succeeded. However, I don't feel like I am doing well now, while trying to prove its existence. Basically, I want to do two inductions:
-First I want to do an induction over $\deg(Q)$ with $\deg(P) = 0$, then I want to go on with an induction over $\deg(P)$ with an arbitrary $\deg(Q)$. The start of the first induction is easy, but I can't get to an end at the induction step. I have something like:
-$\frac{P}{Q} = \frac{P}{(z-z^*)Q'} = \frac{1}{z-z^*} \frac{P}{Q'}$, where $Q'$ is a polynomial with $\deg(Q') = n$.
-What do I do next? I know that a unique expansion exists for the latter fraction, and if $z^* = z_i$ for some $i$, it's proven, I guess. But what if this $z^*$ is a completely new complex number?
-
-REPLY [3 votes]: HINT $\rm\quad\displaystyle 1 = gcd(F,G)\ \ \Rightarrow\ \ 1 = AF+BG\ \ \Rightarrow\ \ \frac{1}{FG}\ = \ \frac{A}G + \frac{B}F$<|endoftext|>
-TITLE: Connection between Hecke operators and Hecke algebras
-QUESTION [12 upvotes]: Hecke operators are things that act on modular forms and give rise to a lot of interesting arithmetical results:
-http://en.wikipedia.org/wiki/Hecke_operator
-On the other hand, on the Wikipedia page for Hecke algebras, which should naively be an algebra of Hecke operators, the term seems to acquire very different meanings, such as deformations of the group algebra of a Coxeter group.
-What is the connection between the two?
-
-REPLY [13 votes]: There are basically 3 senses of "Hecke algebra", and they are related to each other. The modular-form sense is a special case of all three.
-The oldest version is that motivated by modular forms, if we think of modular forms as functions on (homothety classes of) lattices: the operator $T_p$ takes the average of a $\mathbb C$-valued function over lattices of index $p$ inside a given lattice. Viewing a point $z$ in the upper half-plane as giving the lattice $\mathbb Z z + \mathbb Z$ makes the connection to modular forms of a complex variable.
-One important generalization of this idea is through repn theory, realizing that when modular forms are recast as functions on adele groups, the p-adic group $GL_2(\mathbb Q_p)$ acts on modular forms $f$. To say that $p$ does not divide the level becomes the assertion that $f$ is invariant under the (maximal) compact subgroup $GL_2(\mathbb Z_p)$ of $GL_2(\mathbb Q_p)$. Some "conversion" computations show that $T_p$ and its powers become integral operators (often mis-named "convolution operators"... despite several technical reasons not to call them this) of the form $f(g) \rightarrow \int_{GL_2(\mathbb Q_p)} \eta(h)\,f(gh)\,dh$, where $\eta$ is a left-and-right $GL_2(\mathbb Z_p)$-invariant compactly-supported function on $GL_2(\mathbb Q_p)$. The convolution algebra (yes!) of such functions $\eta$ is the (spherical) Hecke algebra on $GL_2(\mathbb Q_p)$.
-A slightly larger, non-commutative convolution algebra of functions on $GL_2(\mathbb Q_p)$ consists of those left-and-right invariant by the Iwahori subgroup of matrices $\pmatrix{a & b \cr pc & d}$ in $GL_2(\mathbb Z_p)$, that is, where the lower left entry is divisible by $p$. This algebra of operators still has clear structure, with structure constants depending on the residue field cardinality, here just $p$. (The Iwahori subgroup corresponds to "level" divisible by $p$, but not by $p^2$.) This is the Hecke algebra attached to the affine Coxeter group $\hat{A}_1$.
-Replacing $p$ by $q$, and letting it be a "variable" or "indeterminate", gives an example of another generalization of "Hecke algebra".
-The latter situation also connects to "quantum" stuff, but I'm not competent to discuss that.
-Edit: by now, there are several references for the relation between "classical Hecke operators" (on modular forms) and the group-theoretic, or representation-theoretic, version. Gelbart's 1974 book may have been the first generally-accessible source, though Gelfand-Piatetski-Shapiro's 1964 book on automorphic forms certainly contains some form of this. Since that time, Dan Bump's book on automorphic forms certainly contains a discussion of the two notions, and the transition between the two.
My old book on Hilbert modular forms contains such a comparison, also, but the book is out of print and was created in a time prior to reasonable electronic files, unfortunately.<|endoftext|> -TITLE: Diffeomorphisms and Stokes' theorem -QUESTION [9 upvotes]: Problem: -Let $\omega\in\Omega^r(M^n)$ suppose that $\int_\sum \omega = 0$ for every oriented smooth manifold $\sum \subseteq M^n$ that is diffeomorphic to $S^r$. Show that $d\omega = 0$. -Proof: -Assume $d\omega \neq 0$. Then there exists $v_1, \ldots, v_{r+1}\in T_pM$ such that $d\omega_p(v_1, \ldots, v_{r+1}) \neq 0$. -$D^{r+1}\subseteq \mathbb{R}^{r+1}$ a smooth submanifold of $\mathbb{R}^{n}$ with boundary $S^r$. Let $(h,U)$ be a chart around $p$ such that $D^{r+1}$ (with some radius) is mapped to $N = h^{-1}(D^{r+1})$ around $p$. Then $N$ is a smooth submanifold of $M^n$ with boundary equal to $\partial N = h^{-1}(S^r)$ (diffeomorphic to $S^r$). -By definition of the integral and Stokes' theorem: -$\int_{\partial N} \omega = \int_N d\omega = \int_{D^{r+1}}(h^{-1})^* (d\omega)$. -Now let $\alpha = (h^{-1})^*(d\omega)$. Then $\alpha = f(x)dx_1\wedge\ldots\wedge dx_{r+1}$(topform in $\mathbb{R}^{r+1}$). Since $f(x) \neq 0$, it has to be different from zero on a small domain. Assume that $f(x) > 0$. Then -$\int_{D^{r+1}}\alpha = \int_{D^{r+1}}f(x)d\mu_{r+1} > 0$. -Contradiction. --- I feel my idea is correct, but I'm not fully sure this is a full good proof. Could this have been done easier? I'm grateful for any feedback. - -REPLY [3 votes]: As suggested by mixedmath since the proof was correct to begin with.<|endoftext|> -TITLE: Why is Top a model category? -QUESTION [13 upvotes]: Recall that a model category is a complete and cocomplete category with classes of morphisms called cofibrations, fibrations, and weak equivalences. These are closed under composition and satisfy certain axioms, such as lifting properties. Furthermore, according to Mark Hovey (but this apparently varies in the literature), any morphism must admit a functorial factorization into the composite of a cofibration and a trivial fibration (i.e. a fibration which is a weak equivalence), as well as a composite of a trivial cofibration and a fibration. -The standard examples of model categories are simplicial sets and chain complexes. Yet the words "fibration" and "cofibration" suggest not category theory but the topological homotopy lifting property. A morphism of topological spaces $f: X \to Y$ is called a (Hurewicz) fibration if whenever $p: T \times [0,1] \to Y$ is a map and $\widetilde{p}: T \to X$ is a map lifting $p|_{T \times \{0\}}$, $\widetilde{p}$ can be extended to a lifting of $p$. Cofibrations can be defined by an analogous homotopy extension property, and for Hausdorff spaces this is equivalent to being a deformation retract of a suitable neighborhood (defined as $\{x: u(x)<1\}$ where $u$ is a suitable continuous function). -So, although this is not explicitly stated anywhere I see, I take it that topological spaces with the usual notions of fibrations and cofibrations (with weak equivalences the homotopy equivalences) do indeed form a model category. -Question: Am I right in assuming this? -The axioms are slightly tricky to check in general. Topological spaces admit limits and colimits. 
I believe that the lifting properties of cofibrations with respect to trivial fibrations (or trivial cofibrations with respect to fibrations) follow directly from the definition of a cofibration (namely, if you can extend something up to homotopy, you can do it exactly for a cofibration).
-I'm getting slightly confused about the functorial factorizations. Let $f: X \to Y$ be a morphism. We need a canonical way of factoring this (actually, two canonical ways). One way is to use the inclusion of $X$ in the mapping cylinder $M_f$ (which is a cofibration because $(M_f, X)$ is checked to be an NDR-pair).
-Moreover, $M_f \to Y$ is a homotopy equivalence.
-I don't see why this is a fibration though (it's true that the fibers have the same homotopy type at least).
-Correction: As Aaron observes in the comments, there are easy counterexamples for when the map from the mapping cylinder is not a fibration. Does this mean that another approach is needed to construct the functorial factorization of a map into a cofibration and a trivial fibration?
-Could someone clarify?
-
-REPLY [9 votes]: This is in many places. I think that the whole point of model categories is that people saw you could get a lot of mileage out of the adjunction between Top and sSet (geometric realization and singular simplices). Just how good is the combinatorial model for topological spaces? The answer is that it is a Quillen equivalence (whatever that means). But for that sentence to exist we need to talk about model categories. I would say that Top is one of the most basic examples, before simplicial sets and chain complexes. For one thing the model structure on sSet requires use of the model structure on Top (the standard definition of a weak equivalence of simplicial sets is that its geometric realization is a weak equivalence in Top). The model structure on chain complexes sort of came out of homological algebra, and I don't know how early it was realized that this model category stuff shed light on the homological algebra that was going on.
-When thinking about this stuff I find the model-categorical approach helps me understand what is going on in Top. Also, the weak equivalences are the maps that induce isos in homotopy for all choices of basepoints.
-PS: Peter May has a write-up of a verification of the model category axioms for Top under Misc notes on his website. (There is tons of great stuff on there)
-PPS: Dwyer and Spalinski is a great resource as well http://www.nd.edu/~wgd/ number 75<|endoftext|>
-TITLE: Minimum multi-subset sum to a target
-QUESTION [8 upvotes]: This is some sort of standard puzzle, which many solve using trial and error or a brute-force method. The question goes like this: given the numbers 11, 13, 31, 33, 42, 44, 46, what choice of numbers (a number can be chosen more than once) makes the sum add up to 100?
-Is there a formula-like way or approach to this other than brute force?
-
-REPLY [6 votes]: Assuming that the numbers are positive integers,
-This is closely related to the Frobenius Coin Problem which says that there is a maximum number $\displaystyle F$ (called the Frobenius number) which is not representable. It is NP-hard to compute the Frobenius number when there are at least $\displaystyle 3$ numbers.
-For a formula-like approach to determine if such a representation is possible or not, you can use generating functions, which can be used to give a pseudo-polynomial time algorithm, polynomial in the size $\displaystyle W = n_1 + n_2 + \cdots + n_k$.
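-Before the generating-function formula below, here is a minimal dynamic-programming sketch (my own illustration; the function name is made up) that decides reachability for the stated instance and recovers one witness combination:
-
-# Unbounded "coin problem" reachability: can[t] says whether total t is
-# attainable using the given numbers with repetition; parent[t] records
-# one number used to reach t, so a witness can be read off backwards.
-def reachable_sum(numbers, target):
-    can = [True] + [False] * target
-    parent = [None] * (target + 1)
-    for t in range(1, target + 1):
-        for n in numbers:
-            if n <= t and can[t - n]:
-                can[t], parent[t] = True, n
-                break
-    if not can[target]:
-        return None
-    combo, t = [], target
-    while t:
-        combo.append(parent[t])
-        t -= parent[t]
-    return combo
-
-# e.g. two 11s and six 13s: 2*11 + 6*13 = 100
-print(reachable_sum([11, 13, 31, 33, 42, 44, 46], 100))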
-If the numbers are $\displaystyle n_1, n_2, \dots, n_k$ and you need to see if they can be summed to $\displaystyle S$ then the number of ways it can be done is the coefficient of $\displaystyle x^S$ in
-$$\displaystyle (1+x^{n_1} + x^{2n_1} + x^{3n_1} + \cdots )(1+ x^{n_2} + x^{2n_2} + x^{3n_2} + \cdots ) \cdots (1 + x^{n_k} + x^{2n_k} + x^{3n_k} + \cdots )$$
-$$\displaystyle = \dfrac{1}{(1-x^{n_1})(1-x^{n_2}) \cdots (1-x^{n_k})}$$
-Using partial fractions this can be written as
-$$\displaystyle \sum_{j=1}^{m} \dfrac{C_j}{c_j - x}$$
-where $\displaystyle C_j$ and $\displaystyle c_j$ are appropriate complex numbers and $\displaystyle m \le n_1 + n_2 + \cdots + n_k$. (Strictly speaking, a repeated pole $c_j$ contributes additional terms of the form $C_{j,r}/(c_j - x)^r$ with $r > 1$; for instance, $x = 1$ is a root of every factor. These add polynomial-in-$S$ factors to the formula below, but do not change the overall approach.)
-The coefficient of $\displaystyle x^S$ is thus given by
-$$\displaystyle \sum_{j=1}^{m} \dfrac{C_j}{c_j^{S+1}}$$
-which you need to check is zero or not.
-Of course, this might require quite precise floating point operations and does not actually tell you what numbers to choose.<|endoftext|>
-TITLE: Writing $f(x,y)$ as $\Phi(g(x) + h(y))$
-QUESTION [12 upvotes]: Could you prove or disprove the following statement?
-
-Let $f\colon[0,1]^2\rightarrow \mathbb R$ be a continuous function. Then there are continuous functions $g,\ h\colon [0,1]\rightarrow \mathbb R$ and $\Phi\colon \mathbb R \to \mathbb R$ such that $$ f(x,y) = \Phi(g(x) + h(y)).$$
-
-(This problem popped up in my mind while I was thinking about this related one on MO. I couldn't find an easy proof or a disproof. This version is much weaker than the one asked at MO, since $g$ and $h$ do depend on $f$ here.)
-
-REPLY [8 votes]: The statement is not true.
-To make the counterexample simpler, I will take $f:[-1,1]^2 \rightarrow \mathbb{R}$. Let $f(x,y)=xy$. So we have
----0+++
----0+++
-0000000
-+++0---
-+++0---
-
-Consider the image of $xy=0$ in $g(x)+h(y)$. Since it's a connected, compact set, the image is connected and compact, so an interval $[m,n]$, with $\Phi([m,n])=0$. We may assume that $\Phi(n+\epsilon)>0$ and $\Phi(m-\epsilon)<0$. Let $A=g(1)$, $a=g(-1)$, $B=h(1)$, $b=h(-1)$.
-Then we have $A+B>n, \; a+b>n, \; A+b\ldots$
-TITLE: Is there any connection between Green's Theorem and the Cauchy-Riemann equations?
-QUESTION [43 upvotes]: Green's Theorem has the form:
-$$\oint P(x,y)\,dx = - \iint \frac{\partial P}{\partial y}\,dx\,dy , \qquad \oint Q(x,y)\,dy = \iint \frac{\partial Q}{\partial x}\,dx\,dy $$
-The Cauchy-Riemann equations have the following form (assuming $f(z) = P(x,y) + iQ(x,y)$):
-$$\frac{\partial P}{\partial x} = \frac{\partial Q}{\partial y}, \qquad \frac{\partial P}{\partial y} = - \frac{\partial Q}{\partial x}$$
-Is there any connection between these two equations?
-
-REPLY [3 votes]: That Green's Theorem can be used to prove Cauchy's Theorem (or a corollary of it, depending on your text) is shown in A First Course in Complex Analysis by Matthias Beck, Gerald Marchesi, Dennis Pixton, and Lucas Sabalka, Exer 4.25
-
-Note that the $f'$ assumption is superfluous because $f$ is holomorphic, but this is not proven until Ch5.
-
-Appendix
-Green's Theorem is stated as:
-(statement given as an image in the original; not reproduced here)
-Cor 4.20 is a corollary of Cauchy's Thm 4.18 for the authors and is stated as:
-(statement given as an image in the original; not reproduced here)
-Cauchy's Thm 4.18 is stated as:
-(statement given as an image in the original; not reproduced here)
-The authors acknowledge that Cauchy's Theorem is sometimes designated to be the statement in Cor 4.20 instead of their Thm 4.18<|endoftext|>
-TITLE: Isometry in compact metric spaces
-QUESTION [20 upvotes]: Why is the following true?
-
-If $(X,d)$ is a compact metric space and $f: X \rightarrow X$ is non-expansive (i.e. $d(f(x),f(y)) \leq d(x,y)$) and surjective then $f$ is an isometry.
-
-REPLY [4 votes]: Here's a different argument. Recall that an $\epsilon$-covering $S$ of a metric space is a set such that for every point $x$ there exists $s\in S$ with $d(x,s)\leq \epsilon.$
-
-Lemma. Let $X$ be a metric space with a finite $\epsilon/4$-cover. If $f:X\to X$ is a non-expansive surjection then $d(f(x),f(y))\geq d(x,y)-\epsilon$ for all $x,y\in X.$
-Proof: Set $D=d(x,y)-\epsilon/2.$ Let $S$ be a finite $\epsilon/4$-cover minimizing the quantity $N(S)=|\{(s_1,s_2)\in S\times S\mid d(s_1,s_2)\geq D\}|.$
- Note that $f(S)$ is also an $\epsilon/4$-cover, and whenever $d(s_1,s_2)< D$ we have $d(f(s_1),f(s_2))< D.$ But $N(f(S))\geq N(S),$ so no pair can go from distance $\geq D$ to
-TITLE: Approaching to zero, but not equal to zero, then why do the points get overlapped?
-QUESTION [14 upvotes]: This is a question I had when I was in senior high school. Up to now, I have no idea about the correct explanation; I just accept it by faith :D
-Here is the question:
-
-If the symbol $\Delta x \to 0$ does not mean $\Delta x =0$, how can the points $A$ and $B$ overlap, and in turn, how can the line joining $A$ and $B$ become $C$, the tangent line to the curve $y=f(x)$ at the point $A$?
-Thank you in advance.
-
-REPLY [29 votes]: Basically, what you have put your finger on is the original difficulty with infinitesimals, and the basis on which Bishop Berkeley made some of his famous objections (mind you, Berkeley didn't really care about the logical foundations of Calculus; he was engaged in a theological debate at the time1): if infinitesimals are not zero, then what you have is not a tangent but a secant, because the line touches the curve at two distinct points; but if the two points are the same, then they don't define a tangent because you cannot determine a line with a single point.
-The answer is that when we talk about limits, we are talking about what the quantities are approaching, not what the quantities are. The point $B$ never "gets overlapped" with $A$, it just approaches $A$; the line between $A$ and $B$ never "becomes" the tangent (which in your diagram is $C$), but its slope approaches the slope of $C$. These "approaches" have a very precise meaning (made formal by Weierstrass).
-First, what would it mean to say that the values of a function $g(x)$ "approach" the value $L$ as $x$ approaches $a$? It means that you can make all the values of $g(x)$ be arbitrarily close to $L$ provided that $x$ is "close enough" to $a$. If you specify a narrow horizontal band around the value $y=L$, then by specifying a narrow vertical band around the value $a$ we can guarantee that the graph of $y=g(x)$ for values of $x$ inside the narrow vertical band will necessarily lie inside the horizontal band.
-The reason this is sensible is that if the values of $g(x)$ approach some other number $M$ as well, then by specifying a band around $L$ which is smaller than the distance from $M$ to $L$ you will always run into problems: the graph of $y=g(x)$ will always end up with parts outside this band, because $M$ is outside the band and you are also approaching $M$.
-Formally, we say it like this: the limit of $g(x)$ as $x$ approaches $a$ is $L$, which is written:
-$$\lim_{x\to a}g(x) = L$$
-if and only if for every $\epsilon\gt 0$ there exists a $\delta\gt 0$ such that if $0\lt |x-a|\lt \delta$, then $|g(x)-L|\lt \epsilon$; $\epsilon$ is how wide the horizontal band around $L$ is, and $\delta$ is how thin the vertical band around $a$ needs to be to make sure the graph is completely inside the rectangle.
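-To see the definition in action numerically, here is a quick illustration (my own toy example, not part of the original explanation):
-
-# g(x) = sin(x)/x approaches L = 1 as x approaches a = 0, even though
-# g(0) itself is undefined: shrinking the vertical band (delta) forces
-# the sampled values of g into ever narrower horizontal bands around 1.
-import math
-
-def g(x):
-    return math.sin(x) / x
-
-for delta in [0.5, 0.1, 0.01, 0.001]:
-    xs = [s * delta for s in (-0.9, -0.5, 0.5, 0.9)]  # 0 < |x - a| < delta
-    worst = max(abs(g(x) - 1.0) for x in xs)
-    print(f"delta = {delta}: max |g(x) - 1| over samples = {worst:.2e}")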
-What the picture suggests is that as $\Delta x$ approaches $0$, the slope of the line joining $A$ and $B$ should approach the slope of the tangent at $a$; however, the line joining $A$ and $B$ never actually "becomes" the tangent $C$, and the point $B$ never actually "becomes" the point $A$. They are just approaching. -Now, does the line joining $A$ and $B$ have a slope that really approaches the slope of the tangent? The key is that we can characterize the tangent algebraically: the tangent is the unique line that affords the best approximation to $y=f(x)$ near the point $x_0$, where by "best approximation" we mean one in which the relative error approaches zero. That is: if we take a line $mx+b$ that goes through $(x_0,f(x_0))$, then we can ask for the "relative error" in approximating using $mx+b$ instead of $f(x)$: -$$\frac{f(x) - (mx+b)}{x-x_0}.$$ -The tangent is the unique line through $(x_0,f(x_0))$ for which the relative error approaches $0$ as $x$ approaches $x_0$ (if such a line exists at all). Since $y=mx+b$ goes through $(x_0,f(x_0))$ if and only if $f(x_0) = mx_0 + b$, then $b= f(x_0)-mx_0$; so we have that the relative error will be -$$\frac{f(x) - mx - f(x_0)+mx_0}{x-x_0} = \frac{f(x)-f(x_0) - m(x-x_0)}{x-x_0} = \frac{f(x)-f(x_0)}{x-x_0} - m.$$ -So if we want the relative error to approach $0$, then we need -$\frac{f(x)-f(x_0)}{x-x_0}$ -to approach $m$; that is, the slope of the tangent must be whatever quantity the numbers -$$\frac{f(x)-f(x_0)}{x-x_0}$$ -are approaching as $x$ approaches $x_0$ (if they are approaching any particular number). Setting $\Delta x = x-x_0$, so that $x = x_0+\Delta x$, the above fraction is the same as -$$\frac{f(x_0 + \Delta x) - f(x_0)}{\Delta x}.$$ -And $x$ approaches $x_0$ if and only if $\Delta x$ approaches $0$. So the slope of the tangent needs to be whatever quantity this fraction approaches as $\Delta x$ approaches $0$; that is, the slope of the tangent at $x_0$ is $m$ if and only if -$$ \lim_{\Delta x \to 0} \frac{f(x_0+\Delta x)-f(x_0)}{\Delta x} = m.$$ -Turning this assertion into a picture gives precisely the picture you have. - -1 In case anyone is interested: many people viewed Newtonian mechanics and its consequent "clockwork universe" as a direct attack on the Christian notion of a deity that was directly involved in and modifying his creation, and also on the notion of free will. Some deists were in fact using the new physics as evidence in favor of the deist God, who created the universe, set it motion, but does not actively participate in it, in contrast to the theistic deity. By attacking the mathematical foundation of the new physics, Berkeley was defending the notions of free will and of the active deity. If you read Augustus de Morgan's A Budget of Paradoxes, he reviews many pamphlets and booklets written during those years which attack Newton and Calculus because they view the latter as an attack on religion. The "morality" arguments raised against Calculus are eerily similar (when not downright identical) to those raised against Darwin and the Theory of Evolution in later days.<|endoftext|> -TITLE: System of parameters which have linear independent images in the cotangent space -QUESTION [8 upvotes]: Given a Noetherian, local ring $(R,m)$, can we always find a system of parameters whose images in the cotangent space $m/m^2$ are linearly independent? - -We can do this in the regular case, by just choosing a basis for the cotangent space and looking at their preimages in $m$ under the canonical map. 
By Nakayama's lemma these generate $m$ and hence form a system of parameters. Can we do this for any general Noetherian, local ring?
-
-REPLY [4 votes]: By induction let's just worry about picking one parameter element $x$. We need $x$ to be in $m$, but outside all the minimal primes of $R$ and $m^2$. This smells exactly like Prime Avoidance (note that they don't have to be all primes!). Now replace $R$ by $R/(x)$ and repeat.<|endoftext|>
-TITLE: Examples of manifolds that cannot be embedded in $\mathbb R^4$
-QUESTION [9 upvotes]: Could someone give me an example of a (smooth) $n$-manifold $(n=2, 3)$ which cannot be embedded (or immersed) in $\mathbb R^4$?
-Thanks in advance!
-S. L.
-
-REPLY [7 votes]: No compact nonorientable $(n-1)$-manifold embeds in $\mathbb R^n$:
-this follows from the Alexander duality theorem.
-Immersibility is a harder problem....<|endoftext|>
-TITLE: Proof that if group $G/Z(G)$ is cyclic, then $G$ is commutative
-QUESTION [6 upvotes]: I am looking for a correct proof of this statement: If $G$ is a group such that $G/Z(G)$ is cyclic, then $G$ is commutative.
-Proof: $G/Z(G)$ is isomorphic to $\operatorname{Inn}(G)$ and is cyclic, and then for every $a$ and $b$ in $G$ the inner automorphisms $\gamma_a$ and $\gamma_b$ satisfy $\gamma_a \gamma_b = \gamma_{ab} = \gamma_{ba} = \gamma_b \gamma_a$, and therefore for every $a,b \in G$, $ab = ba$.
-Is that proof complete, or am I missing something? Thanks a lot for the help.
-
-REPLY [3 votes]: A problem with your attempt is that you seem to want to use only the fact that the inner automorphism group is abelian, but this does not suffice. There are nonabelian groups $G$ such that $G/Z(G)$ is abelian, like the group of symmetries of the square. Thus, it does not follow from $\gamma_{ab}=\gamma_{ba}$ that $ab=ba$.
-So you need to use the hypothesis that $G/Z(G)$ has a single generator. Robin Chapman pointed out to you here what you can conclude from this, so I might as well quote him:
-
-Each element of $G$ has the form $a^nz$ where $a$ is fixed and $z\in Z(G)$...<|endoftext|>
-TITLE: How to find finite groups
-QUESTION [6 upvotes]: What possible methods do we have for finding the groups of a given order?
-The Sylow theorems - and what next? We know something about the center of a group, and about the possible orders of its elements and subgroups. But what is the most effective way to find all non-isomorphic groups of a given order? Is there any other strong "weapon" for this (like the Sylow theorems)?
-For example, I took groups of order 15.
-The only commutative group is $\mathbb{Z}_{15}$.
-About the non-commutative ones we know that they have one Sylow 3-subgroup and one Sylow 5-subgroup, and moreover that $\langle M_3 \cup M_5 \rangle = G$ (if $M_3$, resp. $M_5$, is the Sylow 3-subgroup, resp. Sylow 5-subgroup).
-$M_3$ and $M_5$ are cyclic, so we can choose elements $f$ and $g$ satisfying $\langle f \rangle = M_3$ and $\langle g \rangle = M_5$. Because these subgroups are characteristic (in particular normal), we get these equations: $f^{-1}gf=g^m$, where $m \in \{1,2,3,4\}$, and $g^{-1}fg=f^n$, where $n \in \{1,2\}$. Here $m$ and $n$ must satisfy $|g|=|g^m|$ and $|f|=|f^n|$.
-But here my way ends, and I don't know how to continue... Can anyone help? (Sorry for bad English)
-
-REPLY [3 votes]: Here's how to finish without assuming the Sylow 5-subgroup is normal.
-(Assume by way of contradiction:) If $n=2$, so $g^{-1} f g = ff$, then: $$g^{-2} f g^2 = g^{-1}( g^{-1} f g ) g = g^{-1}( ff )g = g^{-1}( fg g^{-1}f)g = (g^{-1}fg) (g^{-1}fg) = (ff) (ff) = f^4.$$ Every time you apply $g^{-1}( * )g$, you double the number of $f$, so $g^{-i} f g^i = f^{(2^i)}$.
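-(A small computational aside, my own sketch and not part of the original answer, anticipating the computation finished below: for the relation $g^{-1} f g = f^k$ with $|f| = p$ and $|g| = q$ to be consistent, conjugating $q$ times must give back $f$, i.e. $k^q \equiv 1 \pmod p$. One can tabulate which $k$ survive:)
-
-# Which exponents k give a consistent relation g^-1 f g = f^k when
-# |f| = p and |g| = q?  Consistency forces k^q = 1 (mod p); a
-# nontrivial k exists exactly when q divides p - 1.
-for p, q in [(3, 5), (3, 2), (5, 2), (7, 3)]:
-    ks = [k for k in range(1, p) if pow(k, q, p) == 1]
-    print(f"|f| = {p}, |g| = {q}: consistent k = {ks}")
-# |f| = 3, |g| = 5: consistent k = [1]     -> order 15 is forced abelian
-# |f| = 3, |g| = 2: consistent k = [1, 2]  -> the dihedral group of order 6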
-Now we use $g^5 = 1$. If you apply $g^{-1}( * )g$ five times, you get $g^{-5} f g^5 = 1 f 1 = f$ on the one hand, but $f^{(2^5)} = f^{32} = f^2$ on the other, using the fact that $32 \equiv 2 \bmod 3$.
-How can $f = f^2$? It cannot be, since $f$ has order 3. Assuming $n=2$ gives a contradiction, so $n=1$, and $g^{-1} f g = f$. Multiply on the left by $g$ to get $f g = g f$. The group must be abelian, since all of its generators commute.
-
-Now try this with "5" replaced by "2" and you will still get that the Sylow 3-subgroup is normal, but the Sylow 2-subgroup need not be normal. The same argument will get you that both $n=1$ (the cyclic group of order six) and $n=2$ (the dihedral group of order six) are possible.
-More precisely, you'll check that $g^{-2} f g^2 = f^{(2^2)} = f^4 = f$ just as it should be.
-
-Congratulations, you have just analyzed some semi-direct products using automorphisms! With cyclic Sylow subgroups it makes a lot of sense. We didn't even need to call "$n$" an automorphism, nor did we need to call $M_3 \cup M_5$ a semi-direct product.
-If the subgroups are not Sylow subgroups, then it is harder to check (1) that they are normal or (2) that $M\cap N = 1$, but there are more general techniques for this.
-If the subgroups are not cyclic, then you cannot just use one "$n$" for the Sylow 3-subgroup. You get a matrix of "$m$"s and "$n$"s. Not just any matrix will do, but the ones that do work are called automorphisms.
-In other words, the general case of semi-direct products is similar to what you are now doing. Don't give up.<|endoftext|>
-TITLE: Help understanding the definition of tangent vector or tangent plane?
-QUESTION [6 upvotes]: Here is what my textbook told me:
-Assume the formula for the surface $\Sigma$ is $$F(x,y,z) = 0$$
-Suppose $X_0 = (x_0, y_0, z_0)$ is a point on the surface $\Sigma$; we assume $F(x,y,z)$ is differentiable and $$\mathbf{J}F(X_0) = (\frac{\partial F(X_0)}{\partial x}, \frac{\partial F(X_0)}{\partial y}, \frac{\partial F(X_0)}{\partial z}) \neq 0$$
-Draw a curve $\Gamma$ in the surface $\Sigma$ passing through the point $X_0$, and assume the equations for $\Gamma$ are $$x = x(t), y = y(t), z = z(t)$$ where $t = t_0$ corresponds to the point $X_0$ and $x'(t_0), y'(t_0), z'(t_0)$ do not all vanish. Because the curve $\Gamma$ lies on the surface $\Sigma$, $$F(x(t), y(t), z(t)) = 0$$ So $$ \frac{dF}{dt}\mid_{t=t_0} = {F_x}'(X_0)x'(t_0) + {F_y}'(X_0)y'(t_0) + {F_z}'(X_0)z'(t_0) = 0 $$ So $$ ({F_x}'(X_0), {F_y}'(X_0), {F_z}'(X_0))\cdot(x'(t_0), y'(t_0), z'(t_0)) = 0 $$
-We know the vector $\mathbf{T} = (x'(t_0), y'(t_0), z'(t_0))$ is the tangent vector of the curve $\Gamma$ at the point $X_0$.
-My questions are
-
-Why is the vector $\mathbf{T}$ the tangent vector of the curve $\Gamma$ at the point $X_0$?
-Why should $\mathbf{J}F(X_0)$ not equal zero? What if it is zero?
-Why should $x'(t_0), y'(t_0), z'(t_0)$ not all vanish? What if they all vanish?
-
-REPLY [5 votes]: This is intended as a complement to Alex Bartel's answer.
-(1) Consider the 2D case given by $x = x(t)$, $y = y(t)$. Here is a graph of a parametrized function with $(x(t_0),y(t_0))$ (in red) and $(x(t_0 + \delta), y(t_0 + \delta))$ (in yellow) marked on it. The vector $(x(t_0 + \delta), y(t_0 + \delta)) - (x(t_0),y(t_0))$ (from the numerator of the derivative definition) is turquoise, sits almost on top of the graph of the function, and thus is very close to being tangent to the graph of the function at $(x(t_0), y(t_0))$. Dividing by $\delta$ only changes the length of that vector, not its direction.
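-(For the numerically inclined, here is a small companion sketch of this picture, my own toy example using the circle $x(t)=\cos t$, $y(t)=\sin t$:)
-
-# The normalized secant direction approaches the unit tangent
-# (-sin t0, cos t0) as delta shrinks.
-import math
-
-t0 = 1.0
-print("tangent:", (-math.sin(t0), math.cos(t0)))
-for delta in [0.5, 0.1, 0.01]:
-    dx = math.cos(t0 + delta) - math.cos(t0)
-    dy = math.sin(t0 + delta) - math.sin(t0)
-    norm = math.hypot(dx, dy)
-    print(f"delta = {delta}: secant direction = ({dx/norm:.4f}, {dy/norm:.4f})")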
Now imagine letting $\delta \to 0$ and watching the yellow and turquoise vectors change. Graphically, the turquoise vector is getting closer and closer to being a tangent vector at $t_0$. In the limit, it should actually be a tangent vector.
-
-(1) (from another perspective) If $(x(t), y(t), z(t))$ describes the position of some object moving through space, then $(x'(t), y'(t), z'(t))$ is its velocity. Since the direction of the velocity vector gives the direction the object is moving at an instant in time, $(x'(t_0), y'(t_0), z'(t_0))$ will be tangent to the graph of the position at $t_0$.
-(3) Here's an example in the 2D case where $x'(t)$ and $y'(t)$ both vanish. The parametrized function is $x(t) = 1+t^3$ and $y(t) = t^2$, with $x'(0) = y'(0) = 0$. Graphically, we have a cusp, and so there is no well-defined tangent vector. This is not the only situation that can occur (see Alex's answer and J.M.'s comment), but it does show why an assumption like this is needed.<|endoftext|>
-TITLE: What is limit of $\sum \limits_{n=0}^{\infty}\frac{1}{(2n)!} $?
-QUESTION [25 upvotes]: What is the limit of the series $1 \over (2n)!$ for $n$ in $[0, \infty)$?
-$$ \sum_{n = 0}^{\infty}{1 \over (2n)!}$$
-I've ground out the sum of the 1st 1000 terms to 1000 digits using Python,
-(see here ), but how would a mathematician calculate the limit? And what is it?
-No, this isn't homework. I'm 73. Just curious.
-Thanks
-
-REPLY [36 votes]: It's half the sum of $e^1=\sum 1/n!$ and $e^{-1}=\sum (-1)^{n}/n!$ (or $\cosh 1$, in other words).<|endoftext|>
-TITLE: What is the Fourier transform of the product of two functions?
-QUESTION [34 upvotes]: Given $x(t) = f(t) \cdot g(t)$, what is the Fourier transform of $x(t)$? If possible, please explain your answer.
-
-The motivation behind the question is homework, but this is a basic principle in the class that I never quite grasped properly. My current homework builds upon the principle of the question. So answering this question by no means will be doing my homework for me. This is why I am asking the general case.
-
-REPLY [16 votes]: Transforms such as the Fourier transform or the Laplace transform take a product of two functions to the convolution of the integral transforms, and vice versa.
-This is called the Convolution Theorem, and is available with proof at Wikipedia.
-
-REPLY [7 votes]: The Fourier transform of a product is the convolution of the corresponding transforms. For details on conditions on the functions, refer to the links below
-http://en.wikipedia.org/wiki/Fourier_transform
-http://en.wikipedia.org/wiki/Fourier_transform#Convolution_theorem<|endoftext|>
-TITLE: Relative Cohomology Isomorphic to Cohomology of Quotient
-QUESTION [6 upvotes]: Given a topological space (with nice enough conditions, maybe Hausdorff, compactly generated, or CW complex, I'm not sure) $X$ and a subspace $A\subset X$, is it true that $H^n(X,A)\cong H^n(X\backslash A)$ whenever there is an open set containing $A$ which can be retracted to $A$? Can this be shown easily using the Mayer-Vietoris Sequence, or something else similar to that? (I'm basically assuming simplicial or singular cohomology here, with integer coefficients.)
-
-REPLY [8 votes]: It should say reduced cohomology of $X/A$, I believe.
-In general, it is always true that the reduced cohomology of $X \cup CA$ (for $CA$ the cone) is isomorphic to the relative cohomology. To see this, one removes a small contractible piece near the top.
-The exact sequence shows that $\widetilde{H}^*(X \cup CA) \simeq \widetilde{H}^*(X \cup CA, C A)$ because the cone is contractible. Then, excise the top of the cone. The pair then becomes homotopy equivalent to the pair $(X, A)$. So this proves the claim.
-The next fact that one needs to show is that $X/A$ has the same cohomology as $X \cup CA$. This is true if the inclusion $A \to X$ is a cofibration. In general, it is a fact that if $B \subset Y$ is a cofibration, and $B$ is contractible, then $Y$ and $Y/B$ have the same homotopy type. (Apply this with $Y = X \cup CA$, $B = CA$ to see that $X \cup CA$ and $X/A$ have the same homotopy type under cofibration conditions.)
-To see this, one takes a contracting homotopy $B \times I \to B$ of the identity to a point, and extends it to a homotopy $\phi: Y \times I \to Y$ by the cofibration property. This sends $B \times \{1\}$ into a point. So $\phi(., 1)$ induces a map $Y/B \to Y$. The composite of $\phi(.,1)$ and $Y \to Y/B$ is homotopic to $1_{Y/B}$ by $\phi$ (since $\phi(.,t)$ always sends $B$ into itself).<|endoftext|>
-TITLE: Determine the matrix relative to a given basis
-QUESTION [14 upvotes]: Question: (a) Let $f: V \rightarrow W$ with $ V,W \simeq \mathbb{R}^{3}$ given by: $$f(x_1, x_2, x_3) = (x_1 - x_3, 2x_1 -5x_2 -x_3, x_2 + x_3).$$
-Determine the matrix of $f$ relative to the basis $\{(0,2,1),(-1,1,1),(2,-1,1)\}$ of $V$ and $\{(-1,-1,0),(1,-1,2),(0,2,0)\}$ of $W$.
-(b) Let $n \in \mathbb{N}$ and $U_n$ the vector space of real polynomials of degree $\leq n$. The linear map $f: U_n \rightarrow U_n$ is given by $f(p) = p'$. Determine the matrix of $f$ relative to the basis $\{1,t,t^{2},...,t^{n}\}$ of $U_n$.
-My attempt so far:
-(a): First, relative to the basis of $W$, I found the coordinates of an arbitrary vector: $\left( \begin{array}{r} a \\ b \\ c \end{array} \right) = x \left( \begin{array}{r} -1 \\ -1 \\ 0 \end{array} \right) + y \left( \begin{array}{r} 1 \\ -1 \\ 2 \end{array} \right) + z \left( \begin{array}{c} 0 \\ 2 \\ 0 \end{array} \right)$
-$\begin{array}{l} a = -x + y \\ b = - x - y + 2z \\ c = 2y \end{array}$ or $\begin{array}{l} x = -a + \frac{1}{2}c \\ z = -\frac{1}{2}a + \frac{1}{2}b + \frac{1}{2}c \\ y = \frac{1}{2}c \end{array}$
-At this point I believe I have the linear combinations of the given basis in $W$ for an arbitrary vector, so next I take the vectors from $V$ and send them to $W$ using the given function:
-$\begin{array}{l} f(v_1) = f(0,2,1) = (-1,-11,3) = (1 + \frac{3}{2})w_1 + \frac{3}{2}w_2 + (\frac{1}{2} - \frac{11}{2} + \frac{3}{2})w_3 \\ f(v_2) = f(-1,1,1) = (-2,-8,2) = (2+1)w_1 + w_2 + (1 - 4 +1)w_3 \\ f(v_3) = f(2,-1,1) = (1,8,0) = w_1 + (-\frac{1}{2} + 4)w_3 \end{array}$
-or $\left( \begin{array}{rrc} \frac{5}{2} & 3 & 1 \\ \frac{3}{2} & 1 & 0 \\ -\frac{7}{2} & -2 & \frac{7}{2}\end{array} \right)$
-Was I taking the correct steps? I didn't really do anything differently based on the fact that $V,W$ were isomorphic... Is there a particular significance or interpretation for the resulting matrix?
-(b): Not really sure here...
-$f(p) = p'$
-would it make sense to write something like:
-$f(1,t,t^{2},\dots, t^{n}) = (0,1,2t, \dots, nt^{n-1})$?
-and if a basis for $(1,t,t^{2},\dots, t^{n})$ would be $A = \left( \begin{array}{ccccc} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\0 & \cdots & \cdots & 0 & 1 \end{array} \right)$ -could i write: -$A' = \left( \begin{array}{ccccc} 0 & 0 & 0 & \cdots & 0 \\ 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\0 & 0 & 0 & 1 & 0 \end{array} \right)$? - -REPLY [4 votes]: Here's a suggestion for seeing the bigger picture in your first question: - -The key idea below is that the column vectors of a matrix $M$ are the images of the standard ordered basis when the matrix $M$ is viewed as a transformation -$M: \mathbb{R}^n \to \mathbb{R}^n$. -It is easy to find a matrix of f with respect to the standard ordered basis of $\mathbb{R}^3$: it is given by a matrix with columns $\{1, 2, 0\}$, $\{0, -5, 1\}$, and $\{-1, -1, 1\}$. Call this matrix $C$. -The matrix which transforms the given ordered basis of $V$ to the standard basis of $\mathbb{R}^3$ is given by writing the basis vectors as columns of this matrix. Similarly, this can be done for the basis of $W$. Let's call these matrices $A$ and $B$. -You desire a matrix which represents $f$. Interpreting the matrices above as transformations, consider the following diagram: - -$$\begin{array}{ccc} -V & \to & \mathbb{R}^3\\ -\downarrow & \qquad & \downarrow\\ -W & \to & \mathbb{R}^3 -\end{array}$$ -where the horizontal maps are $A$ and $B$ and the right vertical map is $C$. -Your desired matrix is the left vertical arrow. It is now clear that it is given by the matrix $B^{-1} C A$.<|endoftext|> -TITLE: Particle moving at constant speed with Poisson setbacks -QUESTION [5 upvotes]: Consider a particle starting at the the origin and moving along the positive real line at a constant speed of 1. Suppose there is a counter which clicks at random time intervals following the exponential distribution with parameter $\lambda$ and whenever the counter clicks, the position $x > 0$ of the particle at that time instantaneously changes to the position $x/2$. We wish to calculate the expected average speed of the particle. -I don't really have any idea of how to go about solving this. Here are a couple of related problems which seem even more difficult to me: - -Modify the puzzle so that when the counter clicks, the particle moves from $x$ to a point chosen uniformly at random from $[0,x]$. -The particle starts moving as above but whenever the counter clicks, its speed increases by 1 (the initial speed was 1). What is the expected time when the particle hits the position 1? What is the expected speed when the particle hits the position 1? - -This is not a homework problem. Any solutions, hints, thoughts will be appreciated. -Thanks, - -REPLY [3 votes]: Just for the record, I like Shai Covo's answer better. But the OP asked me to post my solution as well, so here it is. -Let $X_t$ be the position of the object at time $t$. Given $N$ clicks in $[0,t]$, let $\tau_1, \tau_2, \ldots \tau_N$ be the times of those clicks. Let $T_i$ be the $i$th interarrival time, so that $T_1 = \tau_1$, $T_{N+1} = t - \tau_N$, and $T_i = \tau_i - \tau_{i-1}$ otherwise. Thus $t = \sum_{i=1}^{N+1} T_i$. -By properties of the exponential distribution, $E[T_i|N] = E[T_j|N]$ for all $i, j$. Thus $$t = \sum_{i=1}^{N+1} E[T_i|N] = E[T_i|N] (N+1) \Rightarrow E[T_i|N] = \frac{t}{N+1}.$$ -If $N=0$, then $X_t = T_1$. If $N = 1$, $X_t = \frac{1}{2}T_1 + T_2$. 
If $N = 2$, $X_t = \frac{1}{4}T_1 + \frac{1}{2}T_2 + T_3$, and, in general, $X_t = \sum_{i=0}^N \frac{T_{N+1-i}}{2^i} $. Thus
-$$E[X_t|N] = \sum_{i=0}^N \frac{E[T_{N+1-i}|N]}{2^i} = \frac{t}{N+1}\left(2 - \frac{1}{2^N}\right).$$
-Since $E[X_t] = E[E[X_t|N]]$, we just have to calculate $E\left[\frac{1}{N+1}\right]$ and $E\left[\frac{1}{(N+1)2^N}\right]$. Since $N$ is Poisson$(\lambda t)$, we have
-$$E\left[\frac{1}{N+1}\right] = \sum_{n=0}^{\infty} \frac{(\lambda t)^{n} e^{-\lambda t}}{(n+1) n!} = e^{-\lambda t} \sum_{n=0}^{\infty} \frac{(\lambda t)^{n} }{(n+1)!} = \frac{e^{-\lambda t}}{\lambda t} \sum_{n=0}^{\infty} \frac{(\lambda t)^{n+1} }{(n+1)!} $$
-$$= \frac{e^{-\lambda t}}{\lambda t} \left(\sum_{n=0}^{\infty} \frac{(\lambda t)^n }{n!} - 1 \right) = \frac{e^{-\lambda t}}{\lambda t} \left(e^{\lambda t} - 1 \right) = \frac{1}{\lambda t} - \frac{e^{-\lambda t}}{\lambda t}.$$
-Similarly,
-$$E\left[\frac{1}{(N+1)2^N}\right] = \frac{2e^{-\lambda t}}{\lambda t} \sum_{n=0}^{\infty} \frac{(\frac{\lambda t}{2})^{n+1} }{(n+1)!} = \frac{2e^{-\lambda t}}{\lambda t} \left(e^{\lambda t/2} - 1 \right) = \frac{2e^{-\lambda t/2}}{\lambda t} - \frac{2e^{-\lambda t}}{\lambda t}.$$
-Therefore,
-$$E[X_t] = E\left[\frac{2t}{N+1}\right] - E\left[\frac{t}{(N+1)2^N}\right] = \frac{2}{\lambda}\left(1 - e^{-\lambda t/2}\right),$$
-which is exactly what you get if you solve the differential equation in Shai Covo's answer.
-So the expected average speed (velocity, actually, since the average speed is technically 1) is
-$$\frac{2}{\lambda t}\left(1 - e^{-\lambda t/2}\right).$$<|endoftext|>
-TITLE: Is there an empty set in the complement of an empty set?
-QUESTION [11 upvotes]: Currently taking a logic class and trying to understand this.
-You have two sets $A$ and $B$.
-Both sets are empty sets.
-Is set $A$ a subset of the complement of set $B$?
-Assume the context is the universal set.
-
-REPLY [21 votes]: The answer is yes. But there are several comments that need to be made:
-
-There is only one empty set. So it is better to say that $A=B=\emptyset$ rather than saying that "both $A$ and $B$ are empty sets", as the latter erroneously suggests that there is more than one. This is because two sets are equal precisely if they have the same elements. So any two empty sets are equal, since they have precisely the same elements (namely, none).
-I assume by "context" you mean that the complement of $B$ is computed with respect to the "universal set." In the standard system of set theory, there can be no "universal set", as assuming its existence leads to problems (Russell paradox). [Though, yes, it is usual to talk of a "universal set" as a way to delineate what objects we are interested in.]
-
-The reason why $A$ is contained in the complement of $B$ is that $A$ (being the empty set) is a subset of any set. This is because we define "$A$ is contained in $C$" to mean that any element of $A$ is also an element of $C$. Now, since nothing is an element of $A$, this condition is satisfied in this case (one typically says that it is satisfied vacuously.)
-
-REPLY [3 votes]: The empty set is a subset of any set.<|endoftext|>
-TITLE: Recurrence relation satisfied by $\lfloor(1+\sqrt{3})^n\rfloor$
-QUESTION [7 upvotes]: This is a follow-up to a question I had asked earlier about a linear recurrence relationship satisfied by $\lfloor(1+\sqrt{5})^n\rfloor$. I messed up there, and I actually meant to ask about $L(n)=\lfloor(1+\sqrt{3})^n\rfloor$.
-Following Douglas' suggestion I have determined that the values (at least the first 1000) satisfy the following recurrence: -$L(2n+5)=8L(2n+3)-4L(2n+1)$ -The question is how do I prove something like this. I can prove the recurrence for the values inside the floor function, but floor function in general does not commute with addition and multiplication. -Explicitly, it's easy to show -$(1+\sqrt{3})^{2n+5}=8(1+\sqrt{3})^{2n+3}-4(1+\sqrt{3})^{2n+1}$ -but I am not sure how to prove the recurrence from here. - -REPLY [10 votes]: Prove the same relation for $1-\sqrt 3$, and check that the resulting powers are all smaller than 1, all negative, and when added to the same powers of $1+\sqrt 3$ you end up with an integer: For all $k=1,2,\dots$, $$(1+\sqrt 3)^k+(1-\sqrt 3)^k $$ is an integer. -There are several ways of checking this fact. For example, by induction. Or by using the Binomial theorem.<|endoftext|> -TITLE: Supremum length of space curves contained in the open unit ball having always less than unity curvature -QUESTION [10 upvotes]: I am in the process of proving that if a space curve (in $R^3$) has infinite length and the curvature tends towards $0$ as the natural parameter $s$ tends to infinity, the curve must be unbounded - i.e. not contained in any sphere of finite radius. This seems correct intuitively, but I have no guarantee it is correct, unless I am missing something obvious. One way to prove my hunch, I have deduced, is to use a lemma that any curve contained in the open unit ball with curvature always less than one must have a finite upper bound on its length (possibly $2π$, but it could be greater for all I know). -How might one go about proving such an upper bound exists, or if it exists? It might also be nice to know what the bound specifically is, too. I've thought it might be possible to pose this as a variational problem - maximizing length - and then reducing it into a simpler problem, but that appears to be hellishly complicated. Thoughts? - -REPLY [5 votes]: Let ${\mathbf{x}}(s)$ be a curve in ${\mathbb{R}}^3$ with natural parameter $s$. We will need the following lemma, the proof of which is given at the end of this answer. - -Lemma: Choose a fixed point ${\mathbf{y}}$. Then the curvature $\kappa$ satisfies - $$ -\kappa \ge \left|\dot{\theta} + \frac{1}{r}\sin{\theta}\right|, -$$ - where $r = \lVert\mathbf{x} - \mathbf{y}\rVert$ and $\theta$ is the angle between the - velocity $\dot{\mathbf{x}}$ and $\mathbf{x} - \mathbf{y}$. - -Take $\mathbf{y} = \mathbf{x}(0)$; then $r(0)=0$ and $\theta(0)=0$. The function $r(s)$ is monotonically increasing as long as $0 \le \theta < \pi/2$ (since $\dot{r}(s) = \cos{\theta}$), so $\theta$ and $\kappa$ can be considered as single-valued functions of $r$ until that point. The above lemma then gives -$$ -\kappa(r) \ge \left|{\theta}'(r)\dot{r} + \frac{1}{r}\sin{\theta(r)}\right| = \left|{\theta}'(r)\cos{\theta(r)} + \frac{1}{r}\sin{\theta(r)}\right| = \left|\frac{1}{r}(r \sin\theta(r) )'\right|. -$$ -Until the first turning point of the motion (where the velocity becomes perpendicular to the radius), we have -$$ -R\sin\theta(R) \le \int_{0}^{R} r \kappa(r) dr. -$$ -If the curvature is strictly below a fixed value (say, $\kappa < K$), then the integral is less than $\frac{1}{2}KR^2$ for $R>0$, and we have the result that -$$ -\sin{\theta(r)} < \frac{1}{2}Kr -$$ -for $r>0$. 
A turning point is reached when $\theta=\pi/2$; this equation shows that the first such turning point must be at a radius greater than $2/K$, and hence the curve cannot be confined within a ball of diameter $2/K$. Finally, the arclength before reaching a given radius $R$ is bounded by -$$ -\begin{eqnarray} -s(R) &=& \int_{0}^{R} \frac{ds}{dr}dr \\ &=& \int_{0}^{R} \frac{dr}{\cos\theta(r)} \\ &<& \int_{0}^{R} \frac{dr}{\sqrt{1 - \frac{1}{4}K^2 r^2}} \\ &=& \frac{2}{K}\sin^{-1}\left(\frac{1}{2}KR\right) -\end{eqnarray} -$$ -for $R \le 2/K$. We conclude that any curve contained in the open unit ball with curvature $\kappa < 1$ must have length $s(2) < 2\sin^{-1}(1) = \pi$. Moreover, this bound is tight, since a circular arc joining the points at $\pm (1-\epsilon^2)\hat{\mathbf{z}}$ and the point at $(1-\epsilon)\hat{\mathbf{x}}$ has length approaching $\pi$ as $\epsilon \rightarrow 0$. - -Proof of Lemma: -We will work in spherical coordinates centered at $\mathbf{y}$; then -$$ -\begin{eqnarray} -{\mathbf{x}} - &=& r\hat{\mathbf{r}}, \\ -{\dot{\mathbf{x}}} - &=& \dot{r}{\hat{\mathbf{r}}} + r\dot{\hat{\mathbf{r}}} \\ - &=& \dot{r}{\hat{\mathbf{r}}} + r v_{\perp} \hat{\mathbf{v}}_{\perp}. -\end{eqnarray} -$$ -Here $\hat{\mathbf{r}}$ is the unit vector from the origin to ${\mathbf{x}}$, and $\dot{\hat{\mathbf{r}}} = v_{\perp} \hat{\mathbf{v}}_{\perp}$ is its rate of change. Because $\hat{\mathbf{r}}$ has constant length, we have $\hat{\mathbf{v}}_{\perp}\cdot \hat{\mathbf{r}} = 0$. Taking the time derivative of this gives -$$ -0 = \dot{\hat{\mathbf{v}}}_{\perp}\cdot \hat{\mathbf{r}} + \hat{\mathbf{v}}_{\perp}\cdot \dot{\hat{\mathbf{r}}} = v_{\perp} + \dot{\hat{\mathbf{v}}}_{\perp}\cdot \hat{\mathbf{r}}, -$$ -which we will use later. Now, because $s$ is a natural parameter, $$\lVert\dot{\mathbf{x}}\rVert^2 = \left(\dot{r}\right)^2 + \left(rv_{\perp}\right)^2 = 1;$$ -so we can define $\theta \in [0,\pi]$ such that $\dot{r} = \cos{\theta}$ and $rv_{\perp} = \sin{\theta}$. The velocity and acceleration become -$$ -\begin{eqnarray} -{\dot{\mathbf{x}}} - &=& \left(\cos{\theta}\right){\hat{\mathbf{r}}} + \left(\sin{\theta}\right)\hat{\mathbf{v}}_{\perp}, \\ -{\ddot{\mathbf{x}}} - &=& -\left(\sin{\theta}\dot{\theta}\right){\hat{\mathbf{r}}} + \left(\cos{\theta}\right)\dot{\hat{\mathbf{r}}} + \left(\cos{\theta}\dot{\theta}\right)\hat{\mathbf{v}}_{\perp} + \left(\sin{\theta}\right)\dot{\hat{\mathbf{v}}}_{\perp} \\ -&=& -\left(\sin{\theta}\dot{\theta}\right){\hat{\mathbf{r}}} + \left(\cos{\theta}\right)\left(\dot{\theta} + \frac{1}{r}\sin{\theta}\right)\hat{\mathbf{v}}_{\perp} + \left(\sin{\theta}\right)\dot{\hat{\mathbf{v}}}_{\perp}, -\end{eqnarray} -$$ -and the acceleration has (two of its three) components -$$ -\begin{eqnarray} -{\ddot{\mathbf{x}}}\cdot\hat{\mathbf{r}} &=& -\left(\sin{\theta}\dot{\theta}\right) + \left(\sin{\theta}\right)\left(\dot{\hat{\mathbf{v}}}_{\perp} \cdot \hat{\mathbf{r}}\right) \\ -&=& -\left(\sin{\theta}\right)\left(\dot{\theta} + \frac{1}{r}\sin{\theta}\right), \\ -{\ddot{\mathbf{x}}}\cdot\hat{\mathbf{v}}_{\perp} &=& +\left(\cos{\theta}\right)\left(\dot{\theta} + \frac{1}{r}\sin{\theta}\right). 
-\end{eqnarray} -$$ -This brings us to the result that the squared curvature -$$ -\kappa^2 = \lVert\ddot{\mathbf{x}}\rVert^2 \ge \left({\ddot{\mathbf{x}}}\cdot\hat{\mathbf{r}}\right)^{2} + -\left({\ddot{\mathbf{x}}}\cdot\hat{\mathbf{v}}_{\perp}\right)^{2} = \left(\dot{\theta} + \frac{1}{r}\sin{\theta}\right)^{2}, -$$ -where $\theta$ is the angle between the velocity and the outward radial vector. The lemma follows by taking the square root of both sides.<|endoftext|> -TITLE: Probability of dice sum just greater than 100 -QUESTION [17 upvotes]: Can someone please guide me to a way by which I can solve the following problem. -There is a die and 2 players. Rolling stops as soon as some exceeds 100(not including 100 itself). Hence you have the following choices: 101, 102, 103, 104, 105, 106. -Which should I choose given first choice. -I'm thinking Markov chains, but is there a simpler way? -Thanks. -EDIT: I wrote dice instead of die. There is just one die being rolled - -REPLY [2 votes]: I see it's a very old question, but let me add my two cents. It's only an approximate solution and at times it involves some guesswork, but it turns out to be quite good. (Also, it doesn't need a computer and the math is pretty elementary.) -Let $a_n$ denote the probability of rolling total of $n$ in any number of rolls (now without the "stop when > 100" condition). After some thinking, we get a recurrence relation for these: -$$ a_n = (a_{n-1} + a_{n-2} + \ldots + a_{n-6})/6,\quad a_{-5} = a_{-4} = \ldots = a_{-1} = 0, a_0 = 1. $$ -This is a linear recurrence, which can be solved "easily" by forming the characteristic equation $6\lambda^6 - \lambda^5 - \lambda^4 - \lambda^3 - \lambda^2 - \lambda - 1 = 0$. If we denote its roots by $l_1$ to $l_6$, the explicit formula for the recurrence has the form of -$$ a_n = \sum_{0 < i < 7} C_i l_i^n. $$ -(The $C$'s are obtained from the boundary conditions.) Since the $a_n$'s represent probabilities, they should be in the interval of $<0;1>$. From this it seems to be reasonable that $|l_i| \leq 1$. If there were any that don't satisfy this condition, the $a_n$'s would be unbounded (since the powers, and even differences of two of them with different bases, would be unbounded). -Now we see it has a root of $\lambda = 1$. So, $a_n$'s eventually converge to $C_1$ (which we don't know, but we don't care), since all other $C_i l_i^n$ converge to 0 (because of the $|l_i|<1$ condition). -So probability of getting 101 is $a_{101}$. Getting 102 has a probability of $a_{102} - a_{101}/6$, since we can't obtain it by rolling 101+1. Similarly, rolling 103 has a probability of $a_{103} - (a_{102} + a_{101})/6$ (no 101+2 nor 102+1), etc. Now we guess that the $a_n$'s converge so well that $a_{101}$ through $a_{106}$ are essentially equal. That gives the 6:5:4:3:2:1 ratio, and furthermore, the (correct) limit of $a_n$'s, 2/7. -I know this is somewhat estimatory (some may say, "physicist's") approach, but even with that, I hope it could have some value.<|endoftext|> -TITLE: What does $H=GL(2,\mathbb{R})/(Z(GL(2,\mathbb{R}))\cdot O(2,\mathbb{R}))$ mean? -QUESTION [5 upvotes]: Let $H=\left\{ z\in\mathbb{C}\mid\Im\left(z\right)>0\right\}$ be the upper-half Poincare plane. Let $GL\left(2,\mathbb{R}\right)$ be the general linear group, $Z\left(GL\left(2,\mathbb{R}\right)\right)$ be the center of the general linear group and $O\left(2,\mathbb{R}\right)$ be the orthogonal subgroup of $GL\left(2,\mathbb{R}\right)$. 
-What does it mean to say $H=GL\left(2,\mathbb{R}\right)/\left(Z\left(GL\left(2,\mathbb{R}\right)\right)\cdot O\left(2,\mathbb{R}\right)\right)$? The left-hand side is a metric space and the right hand side is a set of cosets of $GL\left(2,\mathbb{R}\right)$. So I'm confused about what it means to write that they are equal or to say "the upper half plane is..." It seems like this would be the group of orientation preserving isometries of H, but I still find the terminology confusing. -I've been trying to figure out what this could possibly mean, but my searches on the internet have not been fruitful. I've also looked at 2 sources on standard modular groups but they make no mention of this fact. An explanation or reference would be greatly appreciated. -Motivation: I am reading a paper titled "On Modular Functions in characteristic p" by Wen-Ch'ing Winnie Li which can be found at http://www.jstor.org/stable/1997973. The claim appears on page 3 of the pdf (page 232 of the journal). It is also stated on the wikipedia page: http://en.wikipedia.org/wiki/Poincar%C3%A9_half-plane_model - -REPLY [5 votes]: My comments were getting too long, so I will post this as an answer: the isomorphism referred to is an isomorphism of $G$-sets. This works in great generality: let $H$ be a set, let $G$ act on $H$ transitively. Pick an arbitrary point $z\in H$ and let $K=\text{Stab}_G(z)$ be the point stabiliser in $G$. Note that $K$ is not normal in general, since the group $gKg^{-1}$ stabilises the point $g(z)$ (my action is on the left). So, $K$ is normal if and only if it acts trivially on $H$. -Nevertheless, the set of cosets $G/K$ is always a $G$-set, i.e. a set with an action of $G$: -$$g: hK\mapsto (gh)K$$ -for all $g\in G$ and $hK\in G/K$. This is the usual coset action. Now, check that the map -$$\phi:G/K\rightarrow H,\;gK\mapsto g(z)$$ -is a bijection of $G$-sets, i.e. a bijection of sets that respects the $G$-action.<|endoftext|> -TITLE: How is $\operatorname{GL}(1,\mathbb{C})$ related to $\operatorname{GL}(2,\mathbb{R})$? -QUESTION [19 upvotes]: I am trying to get a grasp on what a representation is, and a professor gave me a simple example of representing the group $Z_{12}$ as the twelve roots of unity, or corresponding $2\times 2$ matrices. Now I am wondering how $\operatorname{GL}(1,\mathbb{C})$ and $\operatorname{GL}(2,\mathbb{R})$ are related, since the elements of both groups are automorphisms of the complex numbers. $\operatorname{GL}(\mathbb{C})$, the group of automorphisms of C, is (to my understanding) isomorphic to both $\operatorname{GL}(1,\mathbb{C})$ and $\operatorname{GL}(2,\mathbb{R})$ since the complex numbers are a two-dimensional vector space over $\mathbb{R}$. But it doesn't seem like these two groups are isomorphic to each other. - -REPLY [14 votes]: The one group is commutative, the other is not. So they can not be isomorphic.<|endoftext|> -TITLE: Is there an easy way to show which spheres can be Lie groups? -QUESTION [80 upvotes]: I heard that using some relatively basic differential geometry, you can show that the only spheres which are Lie groups are $S^0$, $S^1$, and $S^3$. My friend who told me this thought that it involved de Rham cohomology, but I don't really know anything about the cohomology of Lie groups so this doesn't help me much. 
Presumably there are some pretty strict conditions we can get from talking about invariant differential forms -- if you can tell me anything about this it will be a much-appreciated bonus :)
-(A necessary condition for a manifold to be a Lie group is that it must be parallelizable, since any Lie group is parallelized (?) by the left-invariant vector fields generated by a basis of the Lie algebra. Which happens to mean, by some pretty fancy tricks, that the only spheres that even have a chance are the ones listed above plus $S^7$. The usual parallelization of this last one comes from viewing it as the set of unit octonions, which don't form a group since their multiplication isn't associative; of course this doesn't immediately preclude $S^7$ from admitting the structure of a Lie group. Whatever. I'd like to avoid having to appeal to this whole parallelizability business, if possible.)
-
-REPLY [120 votes]: Here is the sketch of the proof.
-Start with a compact connected Lie group G. Let's break into 2 cases - either $G$ is abelian or not.
-If $G$ is abelian, then one can easily show the Lie algebra is abelian, i.e., $[x,y]=0$ for any $x$ and $y$ in $\mathfrak{g}$. Since $\mathbb{R}^n$ is simply connected and has the same Lie algebra as $G$, it must be the universal cover of $G$.
-So, if $G$ is a sphere, it's $S^1$, since all the others are simply connected, and hence are their own universal covers.
-Next, we move onto the case where $G$ is nonabelian. For $x,y,$ and $z$ in the Lie algebra, consider the map $t(x,y,z) = \langle [x,y], z\rangle$. This map is clearly multilinear. It obviously changes sign if we swap $x$ and $y$. What's a bit more surprising is that it changes sign if we swap $y$ and $z$ or $x$ and $z$. Said another way, $t$ is a 3-form! I believe $t$ is called the Cartan 3-form. Since $G$ is nonabelian, there are some $x$ and $y$ with $[x,y]\neq 0$. Then $t(x,y,[x,y]) = ||[x,y]||^2 \neq 0$ so $t$ is not the 0 form.
-Next, use left translation on $G$ to move $t$ around: define $t$ at the point $g\in G$ to be $L_{g^{-1}}^*t$, where $L_{g^{-1}}:G\rightarrow G$ is given by $L_{g^{-1}}(h) = g^{-1}h$.
-This differential 3-form is automatically left invariant from the way you've defined it everywhere. It takes a bit more work (but is not too hard) to show that it's also right invariant as well.
-Next one argues that a biinvariant form is automatically closed. This means $t$ defines an element in the 3rd de Rham cohomology of $G$. It must be nonzero, for if $ds = t$, then we may assume wlog that $s$ is biinvariant, in which case $ds = 0 = t$, but $t$ is not $0$ as we argued above.
-Thus, for a nonabelian Lie group, $H^3_{\text{de Rham}}(G)\neq 0$. But this is isomorphic to singular cohomology (with real coefficients). Hence, for a sphere to have a nonabelian Lie group structure, it must satisfy $H^3(S^n)\neq 0$. This tells you $n=3$.<|endoftext|>
-TITLE: Is Knopp's "Theory and Application of Infinite Series" out of date?
-QUESTION [10 upvotes]: Is Knopp's Theory and Application of Infinite Series out of date? It looks terrific to me, but the Dover edition I bought new maybe a year ago: http://preview.tinyurl.com/2eprqps seems to be the same as an edition published in 1951 and may go back as far as 1921. 60 or 90 years is a lot of math years. How about it? Does my book leave out some important developments? Is it old-fashioned in some other ways?
-I've seen this question: what is the current state of the art in methods of summing "exotic" series? but it doesn't have a full answer yet.
-Thanks
-
-REPLY [7 votes]: I realize this is an old thread, but in case anybody else looks here I will share my thoughts. This book is not out of date; if any math graduate student can find the time to read it, they definitely should. Had it been written 20 years earlier then it would be too old to read today, but happily the notation has been pretty well locked in since the 1920's. He even gives nice histories of the development of the terminology and notation, usually in footnotes.
-The real problem I see with this book is the first couple of chapters, where he lays down the foundations. He talks at extreme length, almost philosophically, about the construction of the real numbers in an almost Shakespearean style. It could be cut down by 2/3 and be much more readable. Also, he uses "nests" instead of Cauchy sequences to complete $\Bbb Q$. In modern treatments it's almost always Cauchy sequences. But once he gets past this, the rest of the book reads just like modern math; the meat and potatoes of it is great and wouldn't need to be modified for a current student of the 21st century.
-So recently I undertook the task of rewriting the first couple of chapters in modern exposition, and then I plan to just transcribe the rest of the book nearly verbatim. I assume this book is no longer under copyright protection; if it is, then my version will just have to wait until it becomes public domain. But I would think 90 years would be enough. In any case this book deserves to be read by future generations for a long time to come.<|endoftext|>
-TITLE: Does anyone know an interesting introductory topic involving vector spaces over the rationals
-QUESTION [6 upvotes]: Many introductory books on vector spaces mention that the scalars need not be reals, and might even have sections discussing complex vector spaces or vector spaces over the integers mod 2. I have never seen any such book mention that all of the theory goes through as well if one restricts the scalars to be just rational numbers. Perhaps this is because there is a dearth of interesting problems about such vector spaces accessible at this level that couldn't simply be discussed in the context of real scalars.
-I wonder if there is an interesting introductory-level problem or topic about vector spaces that would be most naturally conducted by allowing rational number scalars. Does anyone know of such, perhaps one with a number-theoretic aspect?
-(By introductory: I envision a first course on linear algebra, including non-math majors. They would be seeing vector spaces (and that level of abstraction) for the first time. Perhaps they would be seeing matrix multiplication for the first time. Usually, in my experience, such courses primarily use the real numbers as scalars.)
-
-REPLY [5 votes]: Continuing Akhil's answer, let's prove a theorem of Dehn: if a rectangle is tiled by squares, then the ratio of the lengths of its sides is rational.
-Suppose to the contrary that the sides of the rectangle $x,y$ are not rationally dependent. Then we can find a $\mathbb{Q}$-linear map $f\colon \mathbb{R} \rightarrow \mathbb{Q}$ (viewing $\mathbb{R}$ as a vector space over $\mathbb{Q}$) such that $f(x) = 1$ and $f(y) = -1$.
-We define the $f$-area $A(R)$ of a rectangle $R$ with edge lengths $h,v$ to be $f(h)f(v)$. If a rectangle $R$ is tiled by rectangles $R_i$ forming a grid, then from linearity it immediately follows that $$A(R) = \sum_i A(R_i).$$
-Denote the big rectangle by $R$ and the squares by $S_i$. Take your tiling and extend all the lines to form a grid inside the rectangle. Denote the grid rectangles by $G_j$.
Then $$A(R) = \sum_j A(G_j) = \sum_i A(S_i),$$ where the second equality holds because each square $S_i$ is itself tiled by grid rectangles, so $A(S_i)=\sum_{G_j\subseteq S_i}A(G_j)$ by the same linearity. Since a square has both sides equal, $A(S_i) = f(s_i)^2 \geq 0$, where $s_i$ is the side length of $S_i$. On the other hand, by construction $A(R) = f(x)f(y) = -1 < 0$. This contradiction shows that the two sides of the big rectangle are, in fact, rationally dependent.
-Instead of taking a linear mapping from $\mathbb{R}$ to $\mathbb{Q}$ we could take a linear mapping from a smaller, finite dimensional domain by only considering the lengths in the grid $G_j$ - then everything becomes beginner's linear algebra.<|endoftext|>
-TITLE: What's known about recurrences involving $(a_n)^2$?
-QUESTION [5 upvotes]: I've run across the recurrence $a_{n+1} = (a_n)^2 + 1$ in the past. Unfortunately, the reference escapes me. However, my impression was that recurrences involving the product of previous terms (such as $a_{n+1} = (a_n)(a_{n-1})$) are difficult to solve. I'm wondering what is known for this very general problem.
-(1) Is there a known way to solve recurrences involving the product of previous terms? Or, what is known about these?
-(2) What are these recurrences called? Do they have a general name (such as non-linear recurrences)?
-(3) Where can I find more literature on the subject?
-(4) Who are some experts that have dealt with this?
-
-REPLY [2 votes]: You may want to take a look at Kelley and Peterson's textbook [1]. The problem you are looking at here is a recurrence relation as you have stated. You also hear them called difference equations. If you are familiar with differential equations, these difference equations are the discrete analog of differential equations [2].
-Kelley and Peterson go very in depth with solving different types of difference equations. They give a nice overview of several different methods.
-I hope this helps you! If you have any other questions, feel free to email me or contact me through my blog http://www.tylerclark12.com/blog.
-[1] Kelley, W. & Peterson, A. (2001). Difference Equations: An Introduction with Applications (2nd Ed.). San Diego, CA: Academic Press.
-[2] Weisstein, Eric W. "Recurrence Equation." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/RecurrenceEquation.html<|endoftext|>
-TITLE: Calculus of Variations and Lagrange Multipliers
-QUESTION [11 upvotes]: A general problem for the Calculus of Variations asks us to minimize the value of a functional $A[f]$, where $f$ is usually a differentiable function defined on $\mathbb{R}^n$.
-What if, however, the domain of $A$ is not actually all differentiable functions? Suppose there is a constraint equation on $f$, such as (for example):
-$L[f] = \int_{-1}^1 \sqrt{1 + f'(x)^2} dx = \pi$
-and we want to minimize, over functions satisfying the above together with the property that $f(-1)=f(1)=0$, the functional
-$A[f] = \int_{-1}^1 f(x) dx $
-This sort of problem seems to me to be very similar to the problem in multivariate calculus of minimizing a function $f(x)$ with respect to a constraint equation $g(x) = 0$. In this case we are trying to minimize a functional $A[f]$ with respect to a functional constraint equation $L[f] = \pi$.
-In the former, one can use Lagrange multipliers to reduce the problem to that of solving a system of equations. Is there such a technique for the variational version?
-
-REPLY [10 votes]: Yes. For the general theorem, see this Wikipedia page.
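-In rough form (only a sketch, with the smoothness and regularity hypotheses suppressed): to find extremals of $A[f]=\int_a^b F(x,f,f')\,dx$ subject to the constraint $L[f]=\int_a^b G(x,f,f')\,dx=c$, one introduces a constant multiplier $\lambda$ and writes down the Euler-Lagrange equation of the combined integrand,
-$$\frac{\partial (F+\lambda G)}{\partial f}-\frac{d}{dx}\,\frac{\partial (F+\lambda G)}{\partial f'}=0,$$
-with $\lambda$ determined afterwards by the constraint $L[f]=c$.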
In the particular case which you consider, you can actually consider the minimization problem
-$$ \int_{-1}^1 f(x) + \lambda (\sqrt{ 1 + f'(x)^2} - \pi)\, dx $$
-which leads to the Euler-Lagrange equation
-$$ \left( \frac{f'}{\sqrt{1 + (f')^2}}\right)' = \lambda^{-1} $$
-which leads to a somewhat unappealing looking Lagrangian with $f''$ in the denominator when you plug $\lambda$ back in.
-(This is, of course, not completely unexpected, as your functional $A$ is not bounded below on the set $L[f] = \pi$. To see this, it suffices to note that $L[f + c] = L[f]$ by definition, while $A[f + c] = A[f] + 2c$. So as $c\searrow -\infty$, $A[f + c]$ decreases without bound.)<|endoftext|>
-TITLE: For a Planar Graph, Find the Algorithm that Constructs A Cycle Basis, with each Edge Shared by At Most 2 Cycles
-QUESTION [8 upvotes]: In a planar graph $G$, one can easily find a cycle basis by first finding a spanning tree (any spanning tree would do), and then using the remaining edges to complete cycles. Given $V$ vertices and $E$ edges, there are $C=E-V+1$ cycles in the basis, and there are $C$ edges that are inside the graph, but not inside the spanning tree.
-Now, there always exists a set of cycle basis such that each and every edge inside $G$ is shared by at most 2 cycles. My question is, is there any algorithm that allows me to find such a set of cycle basis? The above procedure I outlined only guarantees to find a set of cycle basis, but doesn't guarantee that all the edges in the cycle basis are shared by at most two cycles.
-Note: Coordinates for each vertex are not known, even though we do know that the graph must be planar.
-
-REPLY [5 votes]: To supplement Aryabhata's comment: yes, using a planar embedding algorithm will do. One such algorithm is Boyer and Myrvold's.<|endoftext|>
-TITLE: Are translations of a polynomial linearly independent?
-QUESTION [12 upvotes]: I've been wondering about the following question:
-Suppose that $P$ is a polynomial of degree $n$ with complex coefficients. Assume that $a_0, a_1, \dots, a_n \in \mathbb{C}$ are distinct. Are the polynomials $$P(x + a_0), P(x + a_1), \dots, P(x + a_n)$$ linearly independent?
-
-REPLY [3 votes]: Here is a fancy proof that uses operators and differentiation.
-Let $T_a$ be the translation operator which maps polynomials to their translates
-$$ T_a p(x) = p(x + a) .$$
-Restricting attention to polynomials of degree $n$ or lower, this is a map from an $n+1$-dimensional vector space to itself. The question to be resolved is whether the $n+1$ translation operators $T_{a_0}, T_{a_1},\dots, T_{a_n}$ are linearly independent (when applied to polynomials of degree $n$).
-It is well-known that the translation operator can be written as the exponentiation of the differentiation operator $Dp(x) := p'(x)$ as follows
-$$ T_a = e^{aD} = 1 + \frac1{1!} aD + \frac1{2!} a^2D^2 + \dots + \frac1{n!} a^n D^n .$$
-This is Taylor's theorem. Note that since $D$ is nilpotent on the space of polynomials of degree $n$, $D^{n+1} = 0$, the series is actually a finite sum.
-Now, the point is that for any polynomial $p(x)$ of degree $n$, we know that its derivatives $p(x), Dp(x), D^2p(x), \dots, D^n p(x)$ are linearly independent because the degree always decreases by 1. Hence, they form a basis of our vector space of polynomials. The expansion of the $T_a$ in terms of this new basis is given by the expression above.
-To show that the translates $T_{a_k} p(x)$ are linearly independent, we only have to check that their matrix with respect to the new basis is nonsingular. It reads
-$$\begin{pmatrix} 1 & 1 & \dots & 1 \\ a_0 & a_1 & \dots & a_n \\ \frac1{2!} a_0^2 & \frac1{2!} a_1^2 & \dots & \frac1{2!} a_n^2 \\ \vdots \\ \frac1{n!} a_0^n & \frac1{n!} a_1^n & \dots & \frac1{n!} a_n^n \end{pmatrix}$$
-But its determinant is clearly a non-zero multiple of the Vandermonde determinant, which is non-zero if and only if the values $a_0,a_1,\dots,a_n$ are pairwise distinct.
-(Another way to see that this matrix is non-singular is to note that the Vandermonde determinant is related to the interpolation problem, whose solution is unique.)<|endoftext|>
-TITLE: Are the actuarial exams hard?
-QUESTION [8 upvotes]: I heard that they are difficult. Is this true? Are they like the qualifying exams in grad school? For example, are the probability exam and the financial math exam comparable to qualifying exams (e.g. do they need months and months of study)?
-
-REPLY [3 votes]: I have passed some of the earlier actuarial exams in the U.S. system, where they are jointly administered by the Society of Actuaries (SOA) and the Casualty Actuarial Society (CAS). I have passed two qualifying exams in graduate school. The qualifying exams were much more difficult, especially the exam I took on analysis, which is largely on Lebesgue measure and more general measures.
-Now that I have gone through most of graduate school, I am very certain that studying for the actuarial exams will be easy. What I mean is that the difficulty of math I have seen in graduate school is so much greater than the actuarial exams, that I will be able to understand everything I read, and will be able to apply it as well.
-BUT, it will still take a lot of study time. It is often said you will need to study for 150-300 hours to pass an actuarial exam. This might go down a bit after graduate school, but you still have to read through 10-15 chapters of a textbook and learn pretty much everything in it, all the definitions and formulas and theorems. And, then, you will want to practice as many old exam problems as you can, because they are most likely more difficult than the problems from the textbook you learned the material from.
-Let me put it this way. If you were a good math student at a top 80 grad school, and you only studied for actuarial exams for three or four months (no other job), you could probably pass 2 or 3 or 4 exams all at once. But, I wouldn't do more than 2 or 3 because companies don't want to pay people a lot of money when they don't have any experience.<|endoftext|>
-TITLE: Category of sets
-QUESTION [5 upvotes]: Let C be a category of sets, which has objects all sets and arrows all functions, with the usual identity functions and the usual composition of functions. For any set S, the assignment $s\mapsto s$ for all s in S describes the identity function. If S is a subset of Y, the same assignment also describes the inclusion function from S to Y. These functions are different unless S is equal to Y. My question is why these functions are different since they have the same domain and assignment.
-
-REPLY [10 votes]: In Categories, we have two operations called Domain and Codomain from the collection of all arrows to the collection of all objects. Each arrow $f$ belongs to the collection $C(\mathrm{dom}f,\mathrm{codom} f)$.
-So in the Category of Sets, we actually are interested not just in the domain and the assignment of each function (which is what the arrows are), but also in the codomain. That is, you want to think of a function in the category $\mathcal{S}et$ as a triple, consisting of the domain, the codomain, and the actual set of pairs that make up the function (the assignment), $(A,B,f)$ for a function $f\colon A\to B$. Then the operator Domain will just give you the first component, $A$; the operator Codomain will give you the second component $B$. -Here, two functions are equal if and only if they have the same domain, the same assignment, and the same codomain. The codomain is important because it is an important attribute of arrows in categories. -So the inclusion function from $S$ to $Y$ has domain $S$, codomain $Y$, and rule $s\mapsto s$. The identity function from $S$ to itself has domain $S$, codomain $S$, and rule $s\mapsto s$. So the first function corresponds to -$$\Bigl(S, Y, \{(s,s)\in S\times S\}\Bigr)$$ -while the second function is -$$\Bigl(S,S,\{(s,s)\in S\times S\}\Bigr).$$ -If the two arrows are equal, then the value of Domain and of Codomain must be the same in both; that is, we must have $S=Y$. And of course, if $S=Y$, then they are the same function. - -REPLY [3 votes]: The two functions have different codomains. Any map $S\to S$ lies in $Mor(S,S)$ while any map $S\to Y$ lies in $Mor(S,Y)$.<|endoftext|> -TITLE: What is the spectral theorem for compact self-adjoint operators on a Hilbert space actually for? -QUESTION [43 upvotes]: Please excuse the naive question. I have had two classes now in which this theorem was taught and proven, but I have only ever seen a single (indirect?) application involving the quantum harmonic oscillator. Even if this is not the strongest spectral theorem, it still seems useful enough that there should be many nice examples illustrating its utility. So... what are some of those examples? -(I couldn't readily find any nice examples looking through a few functional analysis textbooks, either. Maybe I have the wrong books.) - -REPLY [8 votes]: The only way I know to prove "discreteness" of some piece of a spectrum is to find one or more compact operators on it, suitably separating points. That is, somehow the only tractable operators are those closely related to compact ones. -Even to discuss the spectral theory of self-adjoint differential operators $T$, the happiest cases are where $T$ has compact resolvent $(T-\lambda)^{-1}$. -In particular instances, the Schwartz kernel theorem depends on the compactness of the inclusions of Sobolev spaces into each other (Rellich's lemma). -In automorphic forms: to prove the discreteness of spaces of cuspforms, one shows that the natural integral operators (after Selberg, Gelfand, Langlands et alia) restricted to the space of $L^2$ cuspforms are compact. -One of Selberg's arguments, Bernstein's sketch, Colin de Verdiere's proof, and (apparently) the proof in Moeglin-Waldspurger's book (credited to Jacquet, credited to Colin de Verdiere!?) of meromorphic continuation of Eisenstein series of various sorts depends ultimately on proving compactness of an operator.<|endoftext|> -TITLE: $e$ to 50 billion decimal places -QUESTION [41 upvotes]: Sorry if this is a really naive question, but in my reading of a lot of textbooks and articles, there is a lot of mention of how many decimals we know of a certain number today, such as $\pi$ or $e$. 
An excerpt from my textbook:
-
-In 1748, Leonhard Euler used the sum of the infinite series of $e$ (mentioned in the book in a section about Taylor Series) to find the value of $e$ to 23 digits. In 2003, Shigeru Kondo, again using the series, computed $e$ to 50 billion decimal places
-
-My question is why does it matter how many decimals we know? Isn't this just a huge waste of time? What could we ever do with so many decimal places? And, if $e$ can be represented as the sum of the infinite series $\sum 1/n!$, can't we just plug that into a computer that just loops the same equation but increasing $n$ every iteration, and find as many decimals of $e$ as we like?
-(Once again, I realize this may be an ignorant/naive question, but I've always been curious about this)
-
-REPLY [20 votes]: I'd just like to give two quotes from the book The SIAM 100-Digit Challenge: A Study in High-Accuracy Numerical Computing that might help explain motivation. Here is the one from their chapter on computing constants to 10,000 digits:
-
-While such an exercise might seem frivolous, the fact is we learned a lot from the continual refinement of our algorithms to work efficiently at ultrahigh precision. The reward is a deeper understanding of the theory, and often a better algorithm for low-precision cases.
-
-and here is something from the foreword written by David Bailey, one of the pioneers of experimental mathematics:
-
-Some may question why anyone would care about such prodigious precision, when in the "real" physical world, hardly any quantities are known to an accuracy beyond about 12 decimal digits. For instance, a value of π correct to 20 decimal digits would suffice to calculate the circumference of a circle around the sun at the orbit of the earth to within the width of an atom. So why should anyone care about finding any answers to 10,000 digit accuracy?
-In fact, recent work in experimental mathematics has provided an important venue where numerical results are needed to very high numerical precision, in some cases to thousands of decimal digit accuracy. In particular, precision of this scale is often required when applying integer relation algorithms to discover new mathematical identities. An integer relation algorithm is an algorithm that, given $n$ real numbers ($x_i,\quad 1\leq i\leq n$), in the form of high-precision floating-point numerical values, produces $n$ integers, not all zero, such that $a_1x_1+a_2x_2+\cdots+a_n x_n=0$.
-The best known example of this sort is the discovery in 1995 of a new formula for π:
-$$\pi=\sum_{k=0}^{\infty}\frac1{16^k}\left(\frac{4}{8k+1}-\frac{2}{8k+4}-\frac1{8k+5}-\frac1{8k+6}\right)$$
-This formula was found by a computer program implementing the PSLQ integer relation algorithm, using (in this case) a numerical precision of approximately 200 digits. This computation also required, as an input real vector, more than 25 mathematical constants, each computed to 200-digit accuracy.
The mathematical significance of this particular formula is that it permits one to directly calculate binary or hexadecimal digits of π beginning at any arbitrary position, using an algorithm that is very simple, requires almost no memory, and does not require multiple-precision arithmetic.<|endoftext|>
-TITLE: Solving systems of linear equations over a finite ring
-QUESTION [11 upvotes]: I want to solve equations like this (mod $2^n$):
-$$\begin{array}{rcrcrcrl} 3x&+&4y&+&13z&=&3&\pmod{16} \\ x&+&5y&+&3z&=&5&\pmod{16} \\ 4x&+&7y&+&11z&=&12&\pmod{16}\end{array}$$
-Since we are working over a ring and not a field, Gaussian elimination doesn't work. So how can I still solve these types of equations?
-
-REPLY [7 votes]: You can still use Gaussian elimination as long as you don't "divide" by things that are not relatively prime to the modulus. In this case, you can "divide" by any odd number, and perform all the usual computations. Here the elimination goes through pretty well:
-\begin{align*}
-\left(\begin{array}{ccc|c} 3 & 4 & 13 & 3\\ 1 & 5 & 3 & 5\\ 4 & 7 & 11 & 12 \end{array}\right) &\rightarrow \left(\begin{array}{ccc|c} 1 & 5 & 3 & 5\\ 3 & 4 & 13 & 3\\ 4 & 7 & 11 & 12 \end{array}\right) && \rightarrow \left(\begin{array}{ccr|c} 1 & 5 & 3 & 5\\ 0 & 5 & 4 & 4\\ 0 & 3 & -1 & 8 \end{array}\right)\\
-&\rightarrow \left(\begin{array}{ccr|c} 1 & 5 & 3 & 5\\ 0 & 1 & 4 & 4\\ 0 & 3 & -1 & 8 \end{array}\right) &&\rightarrow \left(\begin{array}{ccr|c} 1 & 5 & 3 & 5\\ 0 & 1 & 4 & 4\\ 0 & 0 & 3 & 12 \end{array}\right).
-\end{align*}
-(First swap the first two rows; then subtract $3$ times row one from row two and $4$ times row one from row three; then multiply row two by $5^{-1}\equiv 13\pmod{16}$; finally subtract $3$ times row two from row three.)
-So here you get that $3z\equiv 12 \pmod{16}$. Since $3^{-1} \equiv 11\pmod{16}$, this means $z \equiv 132 \equiv 4\pmod{16}$. Then you can backsubstitute and solve. (Assuming I didn't make any mistakes with my modular arithmetic, anyway...)
-If you are unlucky enough to get a congruence in which all the coefficients are even, then you can divide through by $2$ and get a congruence modulo $8$ (instead of $16$); that will lead to two solutions modulo $16$ (if you get $x\equiv 4 \pmod{8}$, that means $x\equiv 4 \pmod{16}$ or $x\equiv 8+4=12\pmod{16}$, for instance).
-Basically, so long as you are careful, you can certainly do Gaussian elimination. You can even do it over more general rings, though in that case you have other restrictions on what you can or cannot conclude.
-
-REPLY [3 votes]: You basically do Gaussian elimination as usual, although you can get stuck if at some point all coefficients of a variable are, say, even. This just means that you'll have two solutions for that variable. In general, you'll pick the row in which the coefficient has the least power of $2$.<|endoftext|>
-TITLE: Proving $\sum\limits_{p \leq x} \frac{1}{\sqrt{p}} \geq \frac{1}{2}\log{x} -\log{\log{x}}$
-QUESTION [14 upvotes]: How to prove this: $$\sum\limits_{p \leq x} \frac{1}{\sqrt{p}} \geq \frac{1}{2}\log{x} -\log{\log{x}}$$
-From Apostol's number theory text I know that $$\sum\limits_{p \leq x} \frac{1}{p} = \log{\log{x}} + A + \mathcal{O}\Bigl(\frac{1}{\log{x}}\Bigr)$$ But how can I use this to prove my claim?
-
-REPLY [15 votes]: I wasn't sure that you wanted a proof that used that fact from Apostol.
-One easy method not using the result you quote from Apostol is as follows:
-$$
-\sum_{p\le x} \frac{1}{\sqrt{p}} > \sum_{p \le x} \frac{1}{\sqrt{x}} = \frac{\pi(x)}{\sqrt{x}} > c \frac{\sqrt{x}}{\log{x}}
-$$
-where $c$ is a constant you can get from Chebyshev or Rosser and Schoenfeld, and maybe do a little computation to take care of small $x$, and you've got it. This is a much better lower bound than what you are trying to prove.<|endoftext|>
-TITLE: How do I Generate Doubly-Stochastic Matrices Uniform Randomly?
-QUESTION [13 upvotes]: A doubly-stochastic matrix is an $n\times n$ matrix $P$ such that
-$\displaystyle\sum_{i=1}^n{p_{ij}}=1$
-and
-$\displaystyle\sum_{j=1}^n{p_{ij}}=1$
-where $p_{ij}\ge 0$.
-Can someone suggest an algorithm for generating these matrices uniformly at random?
-
-REPLY [3 votes]: You can first generate a random unitary matrix and then square the absolute values of all entries. Here is Mathematica code that does this:
-(* Random real and complex numbers with normal distribution *)
-RR := RandomReal[NormalDistribution[0, 1]];
-RC := RR + I*RR;
-(* Random matrix from Ginibre ensemble *)
-RG[n_] := Table[RC, {n}, {n}];
-(* Random unitary matrix *)
-RU[n_] := Orthogonalize[RG[n]];
-(* Random doubly stochastic matrix *)
-RDS[n_] := Abs[RU[n]]^2;
-
-Run RDS[5] to generate a $5 \times 5$ random doubly stochastic matrix.<|endoftext|>
-TITLE: Finite sub cover for $(0,1)$
-QUESTION [13 upvotes]: While learning topology one learns about compact sets. The standard definition is:
-
-A set $X$ is said to be compact if every open cover has a finite subcover.
-
-Since $[0,1]$ is compact, if we take an open cover for this we should be able to get a finite subcover. I know that $(0,1)$ is not compact, so there must exist some open cover for $(0,1)$ which doesn't admit any finite subcover. But how does one prove this fact?
-
-REPLY [6 votes]: Alternatively, use the fact that every compact subspace of a Hausdorff space is closed. Since $(0,1)$ is not closed in $\mathbb{R}$, it cannot be compact. (This is Theorem 26.3 in Munkres.) For an explicit witness, the open cover $\{(1/n, 1) : n \geq 2\}$ of $(0,1)$ has no finite subcover: a finite subfamily only covers $(1/N, 1)$, where $N$ is the largest index chosen.<|endoftext|>
-TITLE: Ramification index and inertia degree
-QUESTION [8 upvotes]: Let $L,K$ be number fields and $L|K$ a Galois extension.
-Let $(0)\neq Q$ be a prime ideal in $\mathcal O_L$ (=ring of integers in $L$) and $P=Q \cap \mathcal O_K$.
-$Z_Q $ denotes the decomposition field of $Q$ and $T_Q$ denotes the inertia field of $Q$.
-Now put $Q' :=Q\cap Z_Q$ and $Q'' :=Q\cap T_Q$.
-How does one prove, that $e(Q|Q'')=e(Q|P)$ and $f(Q''|Q')=f(Q|P)$, if $e$ denotes the ramification index and $f$ the inertia degree?
-
-REPLY [5 votes]: This is essentially the argument in Daniel Marcus's Number Fields (which I very highly recommend, especially for its exercises), Theorem 28 on page 100.
-Let $G=\mathrm{Gal}(L/K)$ be the Galois group. The decomposition group of $Q$ is
-$$D = \{ \sigma\in G\mid \sigma Q = Q\}$$
-and the inertia group of $Q$ is
-$$E = \{\sigma\in G\mid \sigma(a)\equiv a\pmod{Q}\text{ for all $a\in \mathcal{O}_L$}\}.$$
-Then $Z_Q$ is defined to be the fixed field of $D$, and $T_Q$ is the fixed field of $E$.
-As usual, let $[L:K]=n = efr$, where $e$ is the ramification degree, $f$ is the inertia degree, and $r$ is the number of primes of $\mathcal{O}_L$ lying over $P$.
-First, I claim that $[Z_Q:K]=r$. Indeed, by the Fundamental theorem of Galois Theory, $[Z_Q:K] = [G:D]$. If $\tau\in G$, then every element of the coset $\tau D$ maps $Q$ to $\tau Q$; moreover, if $\tau Q=\rho Q$, then $\rho^{-1}\tau \in D$.
So we have a one-to-one correspondence between the cosets of $D$ in $G$, and the primes over $P$ of the form $\tau Q$ with $\tau\in G$. Since $L$ is Galois over $K$, the action of $G$ is transitive on the primes lying above $P$, and there are $r$ of them. So the index $[G:D]$ equals $r$, hence $[Z_Q:K]=r$, as claimed.
-Next, we show that $e(Q'|P)=f(Q'|P)=1$. Notice that $Q$ is the only prime of $\mathcal{O}_L$ that lies over $Q'$: because $\mathrm{Gal}(L/Z_Q)=D$ by the Fundamental Theorem of Galois Theory, and $D$ acts transitively on the primes of $\mathcal{O}_L$ lying over $Q'$; but every element of $D$ maps $Q$ to itself, so $Q$ is the only prime lying over $Q'$. Since $[L:Z_Q]=e(Q|Q')f(Q|Q')r(Q|Q') = e(Q|Q')f(Q|Q')$, and since $erf=[L:K]=[L:Z_Q][Z_Q:K]=[L:Z_Q]r$, then $[L:Z_Q]=ef$. And since $e(Q|Q')\leq e$ and $f(Q|Q')\leq f$, it follows that we must have $e(Q|Q')=e$, $f(Q|Q')=f$. And since $e=e(Q|P) = e(Q|Q')e(Q'|P)$, and $f=f(Q|P) = f(Q|Q')f(Q'|P)$, then $e(Q'|P)=f(Q'|P)=1$.
-Next we check that $e(Q''|Q')=1$. The quotient $D/E$ is isomorphic to the Galois group of the residue field extension $(\mathcal{O}_L/Q)/(\mathcal{O}_K/P)$ (this is the standard computation, also in Marcus), which has order $f$; hence $[T_Q:Z_Q]=|D|/|E|=f$. Granting the final paragraph below, $f(Q''|Q')=f$, and since $e(Q''|Q')f(Q''|Q')\leq [T_Q:Z_Q]=f$, this forces $e(Q''|Q')=1$. Thus, $e(Q|Q'') = e(Q|Q')/e(Q''|Q') = e = e(Q|P)$, which proves the first equality.
-Finally, we show that $f(Q|Q'') = 1$, or equivalently that $\mathcal{O}_{T_Q}/Q''$ is equal to $\mathcal{O}_L/Q$. For this it suffices to show the corresponding Galois group is trivial. If we can establish this, then we will have
-$f = f(Q|Q') = f(Q|Q'')f(Q''|Q') = f(Q''|Q')$, which will give the second equality you want.
-To show that $\mathcal{O}_L/Q$ is the trivial extension of $\mathcal{O}_{T_Q}/Q''$, it is enough to show that for every $\overline{a}\in\mathcal{O}_L/Q$, the polynomial $(x-\overline{a})^m$ lies in $(\mathcal{O}_{T_Q}/Q'')[x]$ for some positive $m$. If this is the case, then every element of the Galois group must send $\overline{a}$ to itself. Pick any preimage $a\in \mathcal{O}_L$ of $\overline{a}$. Then the polynomial
-$$\prod_{\sigma\in E}(x - \sigma(a))$$
-has coefficients in the fixed field of $E$, that is, in $T_Q$; reduce modulo $Q''$ to get a polynomial with coefficients in $\mathcal{O}_{T_Q}/Q''$; since $\sigma(a)\equiv a \pmod{Q}$ for all $\sigma\in E$, the reduced polynomial is of the form $(x-\overline{a})^m$, with $m=|E|$. This proves that every element of $\mathcal{O}_L/Q$ is fixed by every element of the Galois group, so the extension is trivial, hence the inertia degree is $1$. This proves the second equality, as outlined above.<|endoftext|>
-TITLE: If G is fully residually cyclic, does G have at most one subgroup of each finite index?
-QUESTION [7 upvotes]: Update: Steve points out in comments that many direct products of residually cyclic groups will be counterexamples. Indeed, I should have spotted this: every finitely generated abelian group is residually cyclic! I should have asked about fully residually cyclic groups, i.e. groups $G$ such that for any finite subset $X\subseteq G\smallsetminus 1$, there is a homomorphism from $G$ to a cyclic group that doesn't kill any elements of $X$. I suspect the question is now quite easy, though I haven't had a chance to think about it yet.
-
-This question is motivated by this recent question, which asked for a characterisation of groups with at most one subgroup of each finite index. Arturo Magidin's answer showed that every such finite group is cyclic, but the questioner, Louis Burkill, added in comments that he is really interested in the infinite case.
-In my answer to that question, I argue that a finitely generated group has at most one subgroup of each finite index if and only if its canonical residually finite quotient, $R(G)$, is cyclic. The `finitely generated' hypothesis is necessary - otherwise the additive group of the rationals provides a counterexample.
-In the general case, one can reduce from the case of $G$ to $R(G)$ as before (this gets rid of pathological examples like infinite simple groups), and the same argument shows that if $R(G)$ has at most one subgroup of each finite index then $R(G)$ is fully residually cyclic, in particular abelian. But the converse is not clear to me. Hence the question in the title, which I'll reiterate here, with some extra hypotheses that should indicate where the difficulty lies.
-
-If $G$ is an infinitely generated, fully residually cyclic (in particular, abelian) group, must $G$ have at most one subgroup of each finite index?
-
-REPLY [3 votes]: The odds are against us. Even "fully" is not enough.
-With the "fully" hypothesis, probably it is easy: If $G$ has two subgroups $H,K$ of finite index $n$, then their intersection has finite index. Take $X$ to be a set of (non-identity) coset reps of $G/(H \cap K)$; then there is a quotient where no non-identity element of $G/(H \cap K)$ is sent to the identity (and WLOG, $G/(H \cap K)$ is that quotient), but since $G$ is fully residually cyclic, that quotient is cyclic, and the two subgroups must be identical by the lattice homomorphism theorem.
-In other words, you seem to have exactly answered your own question! :)
-At least $K_4$ is no longer a counterexample.<|endoftext|>
-TITLE: Stone–Čech compactification problem
-QUESTION [7 upvotes]: How do we show the following?
-Let $X$ be a topological space and let $x \in X$. Show that if $x$ has a countable neighborhood basis in $X$ then $x$ has a countable neighborhood basis in $\beta X$. Here $\beta X$ denotes the Stone–Čech compactification of $X$.
-
-REPLY [4 votes]: If $X$ is locally compact, then it is easy, since $X$ is open in $\beta X$, hence the basis of $x$ in $X$ is also a basis in $\beta X$.
-Here's an argument for normal spaces (using my favorite of the many characterizations of $\beta X$ (for normal spaces)):
-We're going to define $\beta X$ as the space of ultrafilters in the algebra of closed subsets of $X$. That is, an element of $\beta X$ is a maximal set $\mathfrak{a}$ of closed sets that is closed under finite intersections and such that if $f\in\mathfrak{a}$ and $g\supseteq f$ is closed, then $g\in\mathfrak{a}$.
-A base for the closed sets consists of sets of the form
-$F_f=\{\mathfrak{a}\in\beta X : f\in\mathfrak{a}\}$ for closed sets $f\subseteq X$ (meaning the closed sets are intersections of such $F_f$s.) Finally, define the embedding $X\to\beta X$ by letting $\hat{x}=\{f\subseteq X:x\in f\}$.
-Now, suppose that $\{U_n\}_{n\in\omega}$ is a local base at $x$ in $X$. Then
-$\{Z_n\}_{n\in\omega}$ where $Z_n=X\setminus U_n$ is a local closed base at $x$, meaning that $x\notin Z_n$ for all $n$ and if $f\subseteq X$ is any closed set with $x\notin f$ then there is an $n$ with $f\subseteq Z_n$.
-Now I claim that $\{F_{Z_n}\}_{n\in\omega}$ is a closed base at $\hat{x}$ in $\beta X$. For, let $A\subseteq\beta X$ be closed with $\hat{x}\notin A$. Thus there is some closed $f\subseteq X$ with $\hat{x}\notin F_f$ and $A\subseteq F_f$. This implies that $x\notin f$ hence $f\subseteq Z_n$ for some $n$.
But $x\notin Z_n$ so $\hat x\notin F_{Z_n}$ and $F_{Z_n}\supseteq F_f\supseteq A$.<|endoftext|>
-TITLE: Understanding Limits of Integration in Integration-by-Parts
-QUESTION [10 upvotes]: My understanding of integration-by-parts is a little shaky. In particular, I'm not totally certain that I understand how to properly calculate the limits of integration.
-For example, the formula I have is:
-$\int_{v_1}^{v_2}{u dv} = (u_2 v_2 - u_1 v_1) - \int_{u_1}^{u_2}{v du}$
-I'd like to see how to calculate $u_1$ and $u_2$, preferably in a complete example (that solves a definite integral.) I'm really interested in an example where the limits of integration change; i.e. $u_1$ and $u_2$ are different than $v_1$ and $v_2$, if possible.
-
-REPLY [15 votes]: A more precise notation is this one:
-$$\int_{x_{1}}^{x_{2}}u(x)v^{\prime }(x)dx=\left( u(x_{2})v(x_{2})-u(x_{1})v(x_{1})\right) -\int_{x_{1}}^{x_{2}}u^{\prime }(x)v(x)dx$$
-which is derived from the derivative rule for the product
-$$(u(x)v(x))^{\prime }=u^{\prime }(x)v(x)+u(x)v^{\prime }(x)$$
-or
-$$u(x)v^{\prime }(x)=(u(x)v(x))^{\prime }-u^{\prime }(x)v(x).$$
-So
-$$\begin{eqnarray*} \int_{x_{1}}^{x_{2}}u(x)v^{\prime }(x)dx &=&\int_{x_{1}}^{x_{2}}(u(x)v(x))^{\prime }dx-\int_{x_{1}}^{x_{2}}u^{\prime }(x)v(x)dx \\ &=&\left. (u(x)v(x))\right\vert _{x=x_{1}}^{x=x_{2}}-\int_{x_{1}}^{x_{2}}u^{\prime }(x)v(x)dx \\ &=&\left( u(x_{2})v(x_{2})-u(x_{1})v(x_{1})\right) -\int_{x_{1}}^{x_{2}}u^{\prime }(x)v(x)dx. \end{eqnarray*}$$
-If you write $dv=v^{\prime }(x)dx$ and $du=u^{\prime }(x)dx$, you get your formula but with $u,v$ as functions of $x$:
-$$\int_{v_{1}(x)}^{v_{2}(x)}u(x)dv=\left( u(x_{2})v(x_{2})-u(x_{1})v(x_{1})\right) -\int_{u_{1}(x)}^{u_{2}(x)}v(x)du$$
-Example: Assume you want to evaluate $\int_{x_{1}}^{x_{2}}\log xdx=\int_{x_{1}}^{x_{2}}1\cdot \log xdx$. You can choose $v^{\prime }(x)=1$ and $u(x)=\log x$. Then $v(x)=x$ (omitting the constant of integration) and $u^{\prime }(x)=\frac{1}{x}$. Hence
-$$\begin{eqnarray*} \int_{x_{1}}^{x_{2}}\log xdx &=&\int_{x_{1}}^{x_{2}}1\cdot \log xdx \\ &=&\left( \log x_{2}\cdot x_{2}-\log x_{1}\cdot x_{1}\right) -\int_{x_{1}}^{x_{2}}\frac{1}{x}\cdot xdx \\ &=&\left( \log x_{2}\cdot x_{2}-\log x_{1}\cdot x_{1}\right) -\int_{x_{1}}^{x_{2}}dx \\ &=&\left( \log x_{2}\cdot x_{2}-\log x_{1}\cdot x_{1}\right) -\left( x_{2}-x_{1}\right) \end{eqnarray*}$$
-
-The same example with your formula:
-$$u=\log x,\quad dv=dx,\quad v=x,\quad du=\frac{1}{x}dx$$
-$$u_{2}=\log x_{2},\quad u_{1}=\log x_{1},\quad v_{2}=x_{2},\quad v_{1}=x_{1}$$
-$$\begin{eqnarray*} \int_{v_{1}}^{v_{2}}udv &=&\left( u_{2}v_{2}-u_{1}v_{1}\right) -\int_{u_{1}}^{u_{2}}vdu \\ \int_{x_{1}}^{x_{2}}\log xdx &=&\left( \log x_{2}\cdot x_{2}-\log x_{1}\cdot x_{1}\right) -\int_{\log x_{1}}^{\log x_{2}}xdu \\ &=&\left( \log x_{2}\cdot x_{2}-\log x_{1}\cdot x_{1}\right) -\int_{x_{1}}^{x_{2}}x\cdot \frac{1}{x}dx \\ &=&\left( \log x_{2}\cdot x_{2}-\log x_{1}\cdot x_{1}\right) -\left( x_{2}-x_{1}\right). \end{eqnarray*}$$
-Note: The limits of integration, although different when written in terms of $u$ and $v$, are the same on both sides once everything is expressed in terms of the single variable $x$ of the functions $u(x),v(x)$.
-For a strategy on how to choose the $u$ and $v$ terms see this question.<|endoftext|>
-TITLE: Is $\mathbb{C}^*$ modulo the roots of unity isomorphic to $\mathbb{R}^+$?
-QUESTION [12 upvotes]: A student came to me showing a question from his exam in basic group theory, in which they are asked to prove that $\mathbb{C}^*$ modulo the subgroup of roots of unity is isomorphic to $\mathbb{R}^+$ (in both cases we mean the multiplicative groups).
-Now this seems to me to be a simple error in the question. I believe they meant to ask to prove that $\mathbb{C}^*$ modulo all the elements of absolute value 1 is isomorphic to $\mathbb{R}^+$, which is very easy to prove (take the homomorphism mapping $z$ to $|z|$ and use the first homomorphism theorem). However, I couldn't show that the claim about the roots of unity is wrong; is there an easy way to settle this?
-
-REPLY [14 votes]: For the sake of completeness: $\mathbb{C}^{\ast}$ is isomorphic to $\mathbb{R}^{+} \oplus \mathbb{R}/\mathbb{Z}$ in the obvious way, and $\mathbb{C}^{\ast}$ modulo the roots of unity is then isomorphic to $\mathbb{R}^{+} \oplus \mathbb{R}/\mathbb{Q}$. As a $\mathbb{Q}$-vector space this is abstractly isomorphic to $\mathbb{R}^{+}$, but the construction of such an isomorphism is likely to require the axiom of choice.<|endoftext|>
-TITLE: Algebraic structure cheat sheet anyone?
-QUESTION [18 upvotes]: Has anyone ever come across a good cheat sheet for a list of definitions for the various algebraic structures out there, e.g. groups, fields, rings, etc.? Every time I come across the name of some structure, I have to look it up on Wikipedia just to be sure I'm thinking of the right one; I figured it would be cool to print out a cheat sheet and hang it on a wall nearby.
-The table on the algebraic structure article on Wikipedia is almost what I want, however it's a bit cryptic and lacks some structures, e.g. that of a vector space or a module.
-
-REPLY [2 votes]: I was just in the process of typing out this exact question when I saw that somebody else had already asked it.
-Wikipedia has this:
-http://en.wikipedia.org/wiki/File:Magma_to_group2.svg
-Unfortunately, the exact page it's linked from keeps moving, and it only "goes up to" groups. But other than that, this is exactly the sort of cheat-sheet I'm looking for.
-Does anybody have a chart like this that goes all the way up to fields? Did the OP ever make that chart?<|endoftext|>
-TITLE: knapsack algorithm that looks too good to be true
-QUESTION [6 upvotes]: I have an idea for solving the knapsack problem, but it looks too good to be true. I would like someone to explain potential problems with this approach. I'll give an example: I want to find a subset of {2,7,11} which has elements that sum to 13. Here is the algorithm for solving this:
-In binary notation 2=0010, 7=0111, 11=1011, 13=1101. Suppose that 2x+7y+11z=13, where $x,y,z \in \{0,1\}$.
-Then y+z=1 (mod 2), since the last bits of 7, 11, and 13 are 1 and the last bit of 2 is 0.
-And (y+z-1)+2*(x+y+z)=0 (mod 4), since the second to last bits of 2, 7, and 11 are 1 and the second to last bit of 13 is 0. (The y+z-1 is the carry bit from before.)
-And ((y+z-1)+2*(x+y+z)-0)+4y=4 (mod 8), since the third to last bits of 7 and 13 are 1 and the third to last bit of 2 and 11 are 0. (The (y+z-1)+2*(x+y+z)-0 is the carry bit from before.)
-And finally (((y+z-1)+2*(x+y+z)-0)+4y-4)+8z=8 (mod 16), since the first bits of 11 and 13 are 1 and the first bit of 2 and 7 are 0. (The ((y+z-1)+2*(x+y+z)-0)+4y-4 is the carry bit from before.)
-So we have four linear equations over a finite ring, if we change each one to mod 16.
-8y+8z=8 (mod 16)
-8x+12y+12z=4 (mod 16)
-4x+14y+6z=10 (mod 16)
-2x+7y+11z=13 (mod 16)
-As we can see, a solution is x=1, y=0, z=1, as one would expect, since the set {2,11} has elements that sum to 13. We could have gotten this through Gaussian elimination (according to answers to my last question on this site).
-By request, here is the general algorithm that I just wrote (which can be induced from the above example): We want to solve a_1*x_1 + ... + a_n*x_n = b, where each a_j and b is an integer and each x_j is in {0,1}. Putting each a_j and b in binary, we have a_j=(a_{mj} ... a_{1j}) and b=(b_m ... b_1), for suitable m. For instance, a_j=5 in binary is (a_{4j} ... a_{1j})=(0 1 0 1) - highest bit on the left side, lowest on the right side.
-Here is the simple algorithm for constructing a new matrix A=(a_{ij}) from this original (a_{ij}) and a new b=(b_i) from this original b, written out as runnable Python (row i is 2^{m-1} times the i-th bits plus half of row i-1):
-def build_system(a, b, m):
-    # a: the list of integers a_j; b: the target; m: the number of bits.
-    # Returns the new m-by-n matrix A and the new right-hand side B, mod 2**m.
-    bit = lambda x, i: (x >> i) & 1          # bit i of x, for i = 0, ..., m-1
-    n, top = len(a), 2 ** (m - 1)
-    A = [[top * bit(aj, 0) for aj in a]]     # first row: 2^{m-1} times the low bits
-    B = [top * bit(b, 0)]
-    for i in range(1, m):
-        A.append([(top * bit(a[j], i) + A[-1][j] // 2) % 2 ** m for j in range(n)])
-        B.append((top * bit(b, i) + B[-1] // 2) % 2 ** m)
-    return A, B
-# build_system([2, 7, 11], 13, 4) reproduces the four equations above.
-Next solve the matrix equation Ax=b (mod 2^m), for the new A=(a_{ij}) and b=(b_i). If there is a solution to the knapsack problem, there should be a solution for x in {0,1} in Ax=b (mod 2^m) and vice versa (since the last row is just the original a_j and b). I'm curious to see if it works for large m and n. Anyone want to give it a shot?
-What's wrong with this approach? It shouldn't work, but I don't know why.
-
-REPLY [9 votes]: The problem is that the equations generated are all linearly dependent on the original equation, so finding solutions of the matrix is just as hard as finding solutions to just the original equation. It works in the given example only because the matrix is so small that trial-and-error makes a solution apparent.
-To use your example, let $S = \{2,7,11\}$ and $N = 13$. We want to find a subset of $S$ whose elements sum to $N$. Looking at your generated equations modulo $2^m$, notice that
-$$2x + 7y + 11z \equiv 13 \pmod{16} \, \Rightarrow \, 4x + 14y + 6z \equiv 10 \pmod{16}$$
-$$4x + 14y + 6z \equiv 10 \pmod{16} \, \Rightarrow \, 8x + 12y + 12z \equiv 4 \pmod{16}$$
-$$8x + 12y + 12z \equiv 4 \pmod{16} \, \Rightarrow \, 8y + 8z \equiv 8 \pmod{16}$$
-Since the reverse implications are not true in general, "simplifying" the equation in this manner does no good, as solving a simpler equation does nothing more than rule out possibilities (and as the problem scales it becomes infeasible to keep track of all remaining possibilities).
-
-REPLY [3 votes]: I'm not sure how one sees that finding solutions to the linear system over {0,1} can be accomplished in polynomial time. Reducing the matrix to row-echelon form is not sufficient for finding such solutions. Suppose that the reduced system has a parametrized solution set, as happens in your example. Is there any way, short of plugging in 0 and 1 for the free parameters in all possible ways and then checking the values of the remaining variables, of finding {0,1} solutions? Maybe, but I don't think this is immediately clear. It seems possible that there might be many free parameters, and that there would then be a combinatorial explosion in the number of cases that needed to be checked in order to find {0,1} solutions.<|endoftext|>
-TITLE: Soft Question Hilbert Space Geometry
-QUESTION [9 upvotes]: Just a quick question about the geometry of Hilbert spaces from an intuitive standpoint. Maybe just assuming we're working with $L^2$ would simplify the situation.
Basically, in something like $\mathbb{R}^2$ we have the situation that $\cos(\theta)=\frac{\langle a,b\rangle}{\vert a\vert\cdot\vert b\vert}$, and the idea of an angle between vectors is very meaningful, geometrically. We can easily extend this idea to $\mathbb{R}^n$ because when we talk about the angle between two vectors, we mean we are choosing the plane that both of them lie in, and measuring the angle in there. But what does this really mean in a Hilbert space like $L^2$? I have a good intuition about functions, and about geometry (topology) separately, but not really the "geometry of functions".
-Now, there may be no visualization of this in $L^2$, and I'm not asking for one, but is there any sense to doing geometry (e.g. actual polygons, things like that) in a space like $L^2$? Also, what sort of applications do ideas like this have in functional analysis? Are we ever interested in ideas like "planes" of functions, polygons, surfaces, solids, etc.? What do we really mean by angles, projections, normal vectors? And do these sorts of things ever have any sort of interesting relationships?
-I'm primarily asking for an intuitive idea here. It's easy to just do the math, prove theorems about inner products, norms, etc. Maybe geometry gives some clues or intuitive ideas when doing functional analysis?
-
-REPLY [2 votes]: One motivation for the $L^2$ inner product is as follows: Imagine sampling your functions f and g at N equally-spaced points, and putting those values into vectors $\vec{f}$ and $\vec{g}$.
-
-Then (modulo certain technical assumptions) in the limit as you take more and more samples, the angle between the approximating vectors with a standard dot product approaches the angle between the functions with the $L^2$ inner product.
-
-It's not quite rigorous, but a useful mental model is to think of each different x in the domain as representing a different orthogonal direction in your function space, and then the value f(x) is the "coordinate" of f in the x direction.<|endoftext|>
-TITLE: Finding the least element of the greatest elements of certain subsets of the natural numbers
-QUESTION [9 upvotes]: This is a problem that a professor proposed for the highschool mathematical olympiad in Costa Rica, and that we haven't been able to solve. Therefore it cannot be posed at the olympiad, since we don't have a general solution yet.
-Let $\mathcal{F}_{k}$ for a fixed $k \in \mathbb{N}$ be the family of subsets $A_i \subset \mathbb{N}$ that satisfy the following conditions:
-1) The cardinality of $A_i$ is $k$ for every index $i$.
-2) For every $A_i$ it holds that given any two different two-element subsets $ \{ x, y \} \neq \{ z, w \}$, the absolute values of the differences between the elements of each subset are different:
-$$ |x - y| \neq |z - w|$$
-Now we define a function $f: \mathcal{F}_k \rightarrow \mathbb{N}$ given by $$f(A_i) = \max{A_i}$$
-The problem is to find the minimum of the image of $f$, that is to find
-$$\min{f(\mathcal{F}_k)}$$
-For instance, we know that for $k = 4$ the answer is $\min{f(\mathcal{F}_4)} = 7$ and for $k = 3 $ the answer is $\min{f(\mathcal{F}_3)} = 4$ but we don't have a general pattern for the solution; basically these were done by "brute force" (see the sketch below).
-We would very much appreciate any help with this problem. Thanks a lot in advance.
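-Added: for concreteness, the "brute force" can be organized as a small backtracking search. The following is only a sketch of that idea (Python, unoptimized, and only feasible for small $k$; the function name is made up):
-def least_max(k):
-    # Smallest possible max(A) over k-element sets A of positive integers
-    # whose pairwise differences are all distinct.  Translating A preserves
-    # the differences, so we may assume its least element is 1.
-    def extend(ruler, diffs, limit):
-        if len(ruler) == k:
-            return True
-        for nxt in range(ruler[-1] + 1, limit + 1):
-            new = {nxt - m for m in ruler}   # differences created by adding nxt
-            if not (new & diffs):
-                if extend(ruler + [nxt], diffs | new, limit):
-                    return True
-        return False
-    limit = k                                # the maximum is at least k
-    while not extend([1], set(), limit):
-        limit += 1
-    return limit
-print([least_max(k) for k in range(1, 6)])   # prints [1, 2, 4, 7, 12]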
- -REPLY [10 votes]: Here are some more values, starting from $\mathcal{F}_1$: $$1,2,4,7,12,18,26,35,45,56,73,86,107,128,152,178,200,217,247,284,334,357,373,426,481,493,\ldots$$ -You're trying to find the shortest Golomb ruler of given size. I believe that the exact optimal values are unknown in general. -EDIT: This is A003022 (in which all numbers are $1$ less, e.g. $(0,)1,3,6,\ldots$).<|endoftext|> -TITLE: These two group theory statements are "the same"? -QUESTION [10 upvotes]: Let $G$ be a group and $G'$ its commutator subgroup. Let $\pi: G\to G/G'$ be the natural projection. -Statement 1: $G/G'$ is the largest Abelian quotient of $G$ in the sense that if $H\unlhd G$ and $G/H$ is Abelian, then $G'\le H$. Conversely, if $G'\le H$, then $H\unlhd G$ and $G/H$ is Abelian. -Statement 2: If $\varphi:G\to A$ is any homomorphism of $G$ into an Abelian group $A$, then $\varphi$ factors through $G'$; i.e., $G'\le \ker{\varphi}$ and there is a homomorphism $\hat{\varphi}:G/G'\to A$ such that $\varphi(g) = (\hat{\varphi}\circ \pi)(g)$. (That is, we have a fancy commutative diagram.) -This is from Dummit and Foote, p.169, Proposition 7. -The proof of (1) is very straightforward. However, the authors claim that (1) is a restatement of (2) in terms of homomorphisms. Can anyone explain this? Because it is not clear to me. Also, if I wanted to prove (2) outright, what should the map $\hat{\varphi}$ be? My first thought was defining it as $\hat{\varphi}(aG')= \varphi(a)$, but I don't think this works. -Thanks! - -REPLY [8 votes]: By the Homomorphism Theorem, any homomorphism $f\colon G\to K$ factors through $G/\mathrm{ker}f$, meaning that there is a map $\hat{f}\colon G/\mathrm{ker}f \to K$ such that $ f = \hat{f}\pi$. The map is indeed $\hat{f}(g\,\mathrm{ker}f) = f(g)$. This applies to the specific case given in Statement 2. -Edit: In fact, the full Statement 1 is not equivalent to Statement 2, in the sense that if you replace $G'$ with an arbitrary subgroup $M$ of $G$ in both statements, then Statement 1 characterizes $G'$, but Statement 2 does not. That is, if you have - -Statement 1': If $H\triangleleft G$ and $G/H$ is abelian, then $M\subseteq H$; and if $M\subseteq H$, then $H\triangleleft G$ and $G/H$ is abelian. -Statement 2': If $\varphi\colon G\to A$ is any homomorphism of $G$ into an abelian group $A$, then $\varphi$ factors through $M$; that is, $M\subseteq \ker\varphi$ and there is a homomorphism $\hat{\varphi}\colon G/M\to A$ such that $\varphi(g) = \hat{\varphi}\circ\pi(g)$. - -The only subgroup $M$ of $G$ that satisfies Statement 1' is $M=G'$. However, any subgroup of $G'$ that is normal in $G$ will satisfy Statement 2'. -In fact, Statement 2 is equivalent to the first clause of Statement 1, namely that if $H\triangleleft G$ and $G/H$ is abelian, then $G'\subseteq H$, plus the implicit assertion that $G'$ itself is normal in $G$. -Assuming the first clause of Statement 1 plus the fact that $G'\triangleleft G$, if $\varphi\colon G\to A$ is a homomorphism, then by the Homomorphism Theorem, letting $H=\ker\varphi$, then $G/H$ is (isomorphic to) a subgroup of $A$, hence abelian, so we must have $G'\subseteq H = \mathrm{ker}\varphi$; this is Statement 2 (with the final clause of 2 given by the homomorphism theorem as above). -Assuming Statement 2, (which implicitly asserts that $G'$ is normal) suppose that $H$ is a normal subgroup of $G$ such that $G/H$ is abelian. Then considering $\pi\colon G\to G/H$ and applying 2, you conclude that $G'\subseteq \mathrm{ker}\pi = H$. 
And normality of $G'$ follows from the statement of 2, which requires it.
-That is, they aren't quite equivalent, because Statement 1 has another clause, namely the "Conversely..." clause, which is not a consequence of assuming Statement 2. But the first part of Statement 1 (plus "$G'\triangleleft G$") is equivalent to Statement 2.
-Added earlier: To see that the two are not quite equivalent as stated, let me give you an example of a subgroup $M$ of $G$ that satisfies Statement 2' but not Statement 1': consider the case of $G=S_4$; then $G' = A_4$. Now let $M = \{ 1, (12)(34), (13)(24), (14)(23)\}$. Then $M\triangleleft G$, and the statement in 2 holds for $M$: given any homomorphism $f\colon G\to A$ with $A$ abelian, the map $f$ factors through $G/M$ and there exists a homomorphism $\hat{f}\colon G/M\to A$ such that $f=\hat{f}\pi$. However, $M$ is not the commutator subgroup of $G$. What is missing in Statement 2 for it to be a true equivalent of Statement 1 is some statement that corresponds to the assertion that $G/G'$ is itself abelian, which is what follows from the "Conversely..." clause in Statement 1. One way to do it is to simply state that $G/G'$ is itself abelian. Another is to consider the intersection of all kernels of all homomorphisms into abelian groups, and say that $G'$ must be equal to that intersection.<|endoftext|>
-TITLE: Series inequality $\sum _{k=n}^{\infty } \frac{1}{k!}\leq \frac{2}{n!}$
-QUESTION [6 upvotes]: Show that: $\displaystyle\sum _{k=n}^{\infty } \frac{1}{k!}\leq \frac{2}{n!}$
-I am clueless here. I tried to multiply both sides by $n!$, but it doesn't make things better. I know that the left side converges to $e$ for $n=0$, but I would rather not use its numerical value.
-
-REPLY [8 votes]: Hint:
-$$\frac{1}{(n+1)!} + \frac{1}{(n+2)!} + \cdots $$
-$$= \frac{1}{n!} \left( \frac{1}{(n+1)} + \frac{1}{(n+1)(n+2)} + \cdots \right) $$
-$$ < \frac{1}{n!} \times \text{some geometric series}$$<|endoftext|>
-TITLE: What is "Approximation Theory"?
-QUESTION [7 upvotes]: What exactly is "Approximation Theory"? Reading the Wikipedia article doesn't make it much clearer. Why are "pure" mathematicians interested in it? I see that a lot of people who do harmonic analysis also do approximation theory.
-
-REPLY [8 votes]: Approximation theory includes many subject areas of analysis, but the common idea is how well a target in a topological space (often a metric space) can be approximated by the points of a narrower subspace.
-Some examples will illustrate the breadth of this topic. Given a real number x of a certain kind (e.g. algebraic), what are the rational numbers of bounded denominator that best approximate x, using the least absolute value of their difference as the objective?
-Given a real function $f$ on $[0,1]$ of a certain kind (e.g. twice continuously differentiable), what are the polynomials of bounded degree that best approximate $f$? There are a variety of objectives that might be used, such as minimizing the square integral of the difference or minimizing the maximum difference.
-Problems involving function approximation can be extended to higher dimensions, and the scope of approximating candidates can be varied endlessly (splines, trigonometric series, rational functions, etc.) and subject to many different restrictions (monotonicity, analyticity, symmetry, etc.).
-In a strong sense all of analysis uses approximation theory.<|endoftext|>
-TITLE: Probability on spreading of rumors
-QUESTION [7 upvotes]: A little help here. Exercise 21, Ch.
2 from Feller's book reads
-
-In a town of $n+1$ inhabitants, a person tells a rumor to a second person, who in turn repeats it to a third person, etc. At each step, the recipient of the rumor is chosen at random from the $n$ people available. Find the probability that the rumor will be told $r$ times without: a) returning to the originator, b) being repeated to any person. Do the same problem when at each step the rumor is told by one person to a gathering of $N$ randomly chosen people. (The first question is the special case N=1).
-
-I already did a) and b) for the first description of the problem and a) for the case when the rumor is spreading through a gathering of $N$ people; however, my solution for b) in this second case is not correct.
-I reasoned in the following way: At the first step there are $n$ people who can receive the rumor; however, the rumor has to be spread to a gathering of $N$ of them, so there are $\displaystyle n \choose N$ ways to choose that gathering. Once one of these people is chosen, he/she chooses another gathering of $N$ people, taking care not to choose someone who already knows the rumor; that is, there are $\displaystyle n-1 \choose N$ choices, and so on, until we reach the $r$-th step in this process. Therefore, the probability I get is:
-$$\frac{\displaystyle {n \choose N} {n-1 \choose N} {n-2 \choose N} ... {n-r+1 \choose N}}{\displaystyle {n \choose N}^{r}}$$
-According to the book, the solution must be $\displaystyle \frac{(n)_{Nr}}{(n_{N})^{r}}$ (which is not equivalent to the first expression).
-I will appreciate any help.
-
-REPLY [4 votes]: Liberalkid is right. Using his suggestion, you get $$\frac{\binom{n}{N}\binom{n-N}{N}\cdots\binom{n-(r-1)N}{N}}{\binom{n}{N}^r} = \frac{n_N (n-N)_N \cdots (n-(r-1)N)_N}{(n_N)^r} = \frac{n_{Nr}}{(n_N)^r}.$$ In the first step you cancel $N!$ from each side $r$ times.<|endoftext|>
-TITLE: Why is $\mathbb{Z}[x]/(1-x,p)$ isomorphic to $\mathbb{Z}_{p}$, where $p$ is a prime integer.
-QUESTION [12 upvotes]: I want to know why $\mathbb{Z}[x]/(1-x,p)$ is isomorphic to $\mathbb{Z}_{p}$, where $p$ is a prime integer?
-
-Here's what I have so far, but I am unsure if I am correct. Every $f\in \mathbb{Z}[x]$ can be written as $(1-x)q+ r$ where $q\in \mathbb{Z}[x]$ and $r$ is in $\mathbb{Z}$. Does it follow that there are $p$ cosets of $(1-x,p)$ (namely $0+(1-x,p)$, $1+(1-x,p)$, $2+(1-x,p)$, etc.)?
-That would imply $\mathbb{Z}[x]/(1-x,p)$ is isomorphic to $\mathbb{Z}_{p}$
-
-REPLY [2 votes]: LEMMA $\rm\quad S\ :=\ R[x]/(x-a,b)\ \cong\ R/b\ \ $ for $\rm\ a,b\in R\ $ any ring.
-Proof $\ \ $ In $\rm\:S\ $ we have $\rm\ x = a\ $ hence $\rm\ f(x) = f(a)\:.\ $ Therefore the natural map of $\rm R $ into $\rm S$ is onto, with kernel $\rm K \supseteq b\:R\:.\ $ If $\rm\ c\in K\ $ then $\rm\ c = (x-a)\ f(x) + b\ g(x)\ \Rightarrow\ c\in b\:R\ \ $ via $\ $ eval $\: $ at $\rm\ x = a\:.$ Therefore $\rm\ \ \ K = b\:R\ \ $ so $\rm\: \ S \cong R/K = R/b\:.$<|endoftext|>
-TITLE: Definition of $\mathbb{Z}[\omega]$ where $\omega$ is a primitive root of unity
-QUESTION [5 upvotes]: What does $\mathbb{Z}[\omega]$ usually mean when $\omega$ is a primitive root of unity?
-
-REPLY [6 votes]: $\mathbb{Z}[\omega]$ is the ring generated by $\mathbb{Z}$ and $\omega$ (inside, say, $\mathbb{C}$). You can think of this as a ring whose elements are polynomials in $\omega$ with coefficients in $\mathbb{Z}$. Since $\omega^n=1$ for the appropriate $n$, you need to only consider polynomials of degree less than $n$.
Then the usual addition and multiplication of polynomials give you the ring structure.<|endoftext|>
-TITLE: My Daughter's 4th grade math question got me thinking
-QUESTION [12 upvotes]: Given a number of 3in squares and 2in squares, how many of each are needed to get a total area of 35 in^2?
-Through quick trial and error (the method they wanted, I believe) you find that you need 3 3in squares and 2 2in squares, but I got to thinking on how to solve this exactly.
-You have 2 unknowns and the following info:
-4x + 9y = 35
-x >= 0, y >= 0, x and y are both integers.
-It also follows then that x <= 8 and y <= 3
-I'm not sure how to use the inequalities or the integer-only info to form a direct 2nd equation in order to solve the system of equations. How would you do this without trial and error?
-
-REPLY [7 votes]: There is an algorithmic way to solve this which works when you have two types of squares.
-If $\displaystyle \text{gcd}(a,b) = 1$, then for any integer $c$ the linear diophantine equation $\displaystyle ax + by = c$ has an infinite number of solutions, with integer $\displaystyle x,y$.
-In fact if $\displaystyle x_0, y_0$ are such that $\displaystyle a x_0 - b y_0 = 1$, then all the solutions of $\displaystyle ax + by = c$ are given by
-$\displaystyle x = -tb + cx_0$, $\displaystyle y = ta - cy_0$, where $\displaystyle t$ is an arbitrary integer.
-$\displaystyle x_0 , y_0$ can be found using the Extended Euclidean Algorithm.
-Since you also need $\displaystyle x \ge 0$ and $\displaystyle y \ge 0$ you must pick a $\displaystyle t$ such that
-$\displaystyle c x_0 \ge tb$ and $ta \ge cy_0$.
-If there is no such $\displaystyle t$, then you do not have a solution.
-In your case, $\displaystyle a= 9, b= 4$, we need a solution of $\displaystyle ax + by = 35$.
-We can easily see that $\displaystyle x_0 = 1, y_0 = 2$ gives us $\displaystyle a x_0 - by_0 = 1$.
-Thus we need to find a $\displaystyle t$ such that $ 35 \ge t\times 4$ and $ t\times 9 \ge 35\times 2$.
-i.e.
-$\displaystyle 35/4 \ge t \ge 35\times 2/9$
-i.e.
-$\displaystyle 8.75 \ge t \ge 7.77\dots$
-Thus $t = 8$.
-This gives us $\displaystyle x = cx_0 - tb = 3$, $\displaystyle y = ta- cy_0 = 2$.
-(Note: I have swapped your x and y).<|endoftext|>
-TITLE: Finding the least positive root
-QUESTION [5 upvotes]: How to find the least positive root of the equation $\cos 3x + \sin 5x = 0$?
-My approach so far is to represent $\sin 5x$ as $\cos \biggl(\frac{\pi}{2} - 5x\biggr)$; then the whole equation reduces to $$2\cos \biggl(\frac{\pi}{4} - x\biggr)\cdot \cos \biggl(\frac{\pi}{4} - 4x\biggr) = 0$$
-From here we can write:
-$$\biggl(\frac{\pi}{4} - x\biggr) = n\pi + \frac{\pi}{2} , n \in \mathbb{Z}$$
-$$\biggl(\frac{\pi}{4} - 4x\biggr) = n\pi + \frac{\pi}{2} , n \in \mathbb{Z}$$
-Now there can be infinitely many solutions for this; what I am not getting is how to compute the minimum among them. And what if I am asked to find the maximum?
-
-REPLY [5 votes]: You've already done the difficult part! Now just find all the solutions of $\cos(\pi/4-x)=0$ and $\cos(\pi/4-4x)=0$, and pick the smallest positive one.<|endoftext|>
-TITLE: How do you calculate the unit vector between two points?
-QUESTION [9 upvotes]: I'm reading a paper on fluid dynamics and it references a unit vector between two particles i and j. I'm not clear what it means by a unit vector in this instance. How do I calculate the unit vector between the two particles?
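-(In code, I would guess it is just something like the following sketch, in Python with numpy and made-up positions, but I want to be sure I have the math right.)
-import numpy as np
-ri = np.array([1.0, 2.0, 0.5])  # position of particle i
-rj = np.array([0.0, 1.0, 3.0])  # position of particle j
-u_ij = (rj - ri) / np.linalg.norm(rj - ri)  # my guess: unit vector pointing from i to j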
-
-REPLY [12 votes]: Two particles i, j are located in some reference frame at vectorial positions $\vec{r}_i$ and $\vec{r}_j$. Therefore the vector which starts at the position of i and ends at j is just the difference $\vec{r}_j-\vec{r}_i$; its modulus $||\vec{r}_j-\vec{r}_i||$ is the distance between the particles, so one can construct the unit vector in that direction (from i to j) by just
-$$\vec{u}_{ij}=\frac{1}{||\vec{r}_j-\vec{r}_i||}(\vec{r}_j-\vec{r}_i)$$
-Indeed this is a unit vector, for it is a multiple of the original with unit modulus, since $||\vec{u}_{ij}||=\left|\frac{1}{||\vec{r}_j-\vec{r}_i||}\right|\cdot||\vec{r}_j-\vec{r}_i||=1$ using the property $||\lambda\cdot\vec{v}||=|\lambda|\cdot ||\vec{v}||$.
-
-REPLY [4 votes]: If particle $i$'s position is described by a position vector $\vec{r}_i$ and particle $j$'s position is described by a position vector $\vec{r}_j$, then you can define the position of $j$ relative to $i$ as
-$$\vec{r}_{ji}= \vec{r}_j-\vec{r}_i$$
-Now, if you divide this vector by its length:
-$$\frac{\vec{r}_{ji}}{\|\vec{r}_{ji}\|}=\frac{\vec{r}_j-\vec{r}_i}{\|\vec{r}_j-\vec{r}_i\|}$$
-you get a vector with unit length and aligned along the direction of the line through particles $i$ and $j$, pointing towards $j$.<|endoftext|>
-TITLE: if $\int_1^{\infty}f(x)\ \mathrm dx$ converges, must $\int_1^{\infty}f(x)\sin x\ \mathrm dx$ converge?
-QUESTION [11 upvotes]: I can't use any of the convergence tests I learned because I have no information on $f(x)$; in particular I don't know if it's continuous or positive.
-The only thing I could think of was that if $\displaystyle \int_{1}^{\infty}f(x)\ \mathrm dx$ was absolutely convergent, then $|f(x)\sin x| \leq |f(x)|$ would imply by the comparison test that $\displaystyle \int_{1}^{\infty}f(x)\sin x\ \mathrm dx$ converges.
-So if I want to find a counter-example I have to pick $f(x)$ so that $\displaystyle \int_{1}^{\infty}f(x)\ \mathrm dx$ conditionally converges, but I can't think of one.
-
-REPLY [18 votes]: Consider $f(x)=\sin(x) / x$.<|endoftext|>
-TITLE: Estimates for Stirling's formula remainder
-QUESTION [9 upvotes]: It is proved in Advanced Calculus by Angus Taylor, § 20.8, that
-$$\log n!=\log \left( \left( \frac{n}{e}\right) ^{n}\sqrt{2\pi n}\right) +r_{n},$$
-where
-$$r_{n}=\sum_{k=1}^{\infty }S_{k}$$
-with
-$$S_{k}=\sum_{p=n+1}^{\infty }\frac{k}{2(k+1)(k+2)p^{k+1}}.$$
-This formula for $r_{n}$ provides a method for finding the estimate
-$$\frac{1}{12\left( n+1\right) } < r_{n} < \frac{1}{12n}.$$<|endoftext|>
-TITLE: What's the geometrical interpretation of the magnitude of gradient generally?
-QUESTION [9 upvotes]: In the following picture, the author of Field and Wave Electromagnetics shows the geometrical meaning of the direction of the gradient. That is, only by following the direction of the normal vector to the curve at that point could the rate of change be the maximum.
-
-But what about the geometrical interpretation of the magnitude of the gradient generally, or is there even a geometrical interpretation of the magnitude of the gradient?
-thanks.
-
-REPLY [9 votes]: Short version of answer:
-
-The gradient defines a direction; the magnitude of the gradient is the slope of your surface in that direction.
-
-This direction just so happens to be the one in which you have to go to get the maximum slope.
-
-
-
-Long version:
-Let's say you take the gradient of an $N$-dimensional surface in $(N+1)$-dimensional space. For instance, the gradient of a 2D surface in 3D space.
The gradient will point in the direction that you have to go in to get the biggest increase in "height" (that +1 dimension). So, in other words, if you go in the direction in which the gradient points, you'll see the largest increase.
-The magnitude of the gradient is the rate at which that increase happens. Literally, it is the slope of the surface at that point along the axis defined by the gradient's direction. Consequently, the magnitude of the gradient of some point on a surface is the steepest slope you can find on that surface!
-Proof in 3D:
-Gradients are actually defined to behave as described above, but perhaps you are like me and want a little bit of mathematical proof of this.
-Let's start with the tangent plane to your surface at some point. If the above is true, then the magnitude of your gradient should equal the slope of the plane along the direction defined by your gradient! To start, let's quickly define the slope of a plane in a certain direction:
-If we want to know the slope of a plane in a certain direction, we simply find the slope of said plane between the point (0, 0) and the point represented by our arbitrary direction. To simplify this procedure, we can shift our plane so that it passes through the point (0, 0, 0), since shifting a plane does not change its slope.
-To recap, the slope of a plane in some 2D direction D is the same as the slope of a similar plane (which passes through (0,0,0)) between the points (0,0) and D.
-The slope of a plane between 2D points (0,0) and D is given by:
-$$\frac {\frac{\partial z}{\partial x}D_x + \frac{\partial z}{\partial y}D_y}{||D||}$$
-Or:
-$$\frac {\frac{\partial z}{\partial x}D_x + \frac{\partial z}{\partial y}D_y}{\sqrt{D_x^2 + D_y^2}}$$
-Since this slope-in-a-direction is defined only in terms of the partial derivatives of our plane, and since the partial derivatives of any tangent plane to a surface are the same as those of the surface at the tangent point, we may make the claim that:
-The slope of a surface at some point (x, y) in the direction D is given by the expression:
-$$\frac {\frac{\partial f(x,y)}{\partial x}D_x + \frac{\partial f(x,y)}{\partial y}D_y}{\sqrt{D_x^2 + D_y^2}}$$
-Meanwhile, the gradient of our surface at this point is given by the expression:
-$${\frac{\partial f(x,y)}{\partial x} \hat{x} + \frac{\partial f(x,y)}{\partial y} \hat{y}}$$
-which means that the slope of our surface at some point (x, y) in the direction of our gradient at that point is (as defined by our slope-in-direction argument earlier) given by:
-$$\frac {\frac{\partial f(x,y)}{\partial x} \cdot \frac{\partial f(x,y)}{\partial x} + \frac{\partial f(x,y)}{\partial y} \cdot \frac{\partial f(x,y)}{\partial y}}{\sqrt{\left(\frac{\partial f(x,y)}{\partial x}\right)^2 + \left(\frac{\partial f(x,y)}{\partial y} \right)^2}}$$
-This, of course, simplifies to:
-$$\frac {\left(\frac{\partial f(x,y)}{\partial x}\right)^2 + \left(\frac{\partial f(x,y)}{\partial y} \right)^2}{\sqrt{\left(\frac{\partial f(x,y)}{\partial x}\right)^2 + \left(\frac{\partial f(x,y)}{\partial y} \right)^2}}$$
-Now for the fun part!
Let's define some temporary variable J as:
-$$J = \left(\frac{\partial f(x,y)}{\partial x}\right)^2 + \left(\frac{\partial f(x,y)}{\partial y} \right)^2$$
-Then our "simplified" expression for the slope of our surface at any point (x,y) along its gradient at (x,y) becomes:
-$$\frac {J}{\sqrt{J}}$$
-Which is the same thing as simply
-$$\sqrt{J}$$
-Which is the same thing as
-$$\sqrt{\left(\frac{\partial f(x,y)}{\partial x}\right)^2 + \left(\frac{\partial f(x,y)}{\partial y} \right)^2}$$
-which, of course, by the Pythagorean theorem, is the same expression as the magnitude of our gradient at the point (x,y)!
-And so, we have just proven that, yes, the magnitude of the gradient of a surface at some point is the same as the slope of that surface along said gradient.
-I hope this helped!
-Side note:
-Hans Lundmark's answer touches on this a bit; those curves made from the intersections of your volume in 4D space with planes at evenly spaced positions on the 4th axis (equivalent to the contour lines on a surface in 3D space) will, indeed, be closer together when the slope of your volume is steeper; this is because, as one might expect, we cross through more height-surfaces (and therefore more "vertical" distance) in an area which is more steep than in an area which is less steep. Hopefully the slope explanation above makes it a bit clearer why that is (specifically, how it relates to the magnitude of the gradient).<|endoftext|>
-TITLE: Cohomological decomposition of tensor sheaves?
-QUESTION [6 upvotes]: My question is similar to this, but not identical. I believe the following to be true, but I'd like a reference.
-Given (quasicoherent?) sheaves of $\mathcal O_X$-modules $E$ and $F$ on a projective variety $X$,
-$$H^n(X, E\otimes F) \cong \bigoplus_{p+q=n} H^p(X, E) \otimes H^q(X, F).$$
-
-Is this true, and what is a good reference or counterexample? If true, is quasicoherent necessary?
-
-REPLY [3 votes]: It is false. Take $E = \mathcal O_{\mathbb P^n}(2)$ and $F= \mathcal O_{\mathbb P^n}(-2)$. Then,
-$$H^0(\mathbb P^n, \mathcal O_{\mathbb P^n}) = H^0(\mathbb P^n, E\otimes F) \ne H^0(\mathbb P^n, \mathcal O_{\mathbb P^n}(2)) \otimes H^0(\mathbb P^n, \mathcal O_{\mathbb P^n}(-2)).$$<|endoftext|>
-TITLE: What does -1.13 times faster mean?
-QUESTION [8 upvotes]: I'm reading High Performance JavaScript, and I think the graphs in one chapter are just plain wrong. Here is one on Google Books.
-The y axis is "Times faster", and it runs from -1.5 to +4.0. Now, I would have thought that "1 times faster" means "no faster", "2 times faster" means "twice as fast", and "0.5 times faster" means "half as fast"/"twice as slow". Have they just got completely confused in that graph, or is it me?
-
-REPLY [9 votes]: Sorry for the confusion, but you got it absolutely right.
-2 times faster means two times as fast
-A takes 100 ms
-B takes 200 ms
-so A is two times faster than B
-This particular graph is probably the most confusing taken out of the flow of the previous ones in the chapter. This one shows an optimization that most often you shouldn't bother with. Especially when there are cases of stuff being 100 times faster. And especially when it's not consistent among browsers.
-The point was not to worry about micro-optimizations (unless it's critical to you and you've done all else).
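-As a quick sanity check of this arithmetic (a minimal sketch in Python; the timings are made-up numbers, not real benchmarks):
-
-    # "X times faster" here means the ratio of the two running times.
-    timings_ms = {"A": 100.0, "B": 200.0}   # hypothetical measurements
-
-    def times_faster(fast_ms, slow_ms):
-        """100 ms vs 200 ms -> 2.0, i.e. A is two times as fast as B."""
-        return slow_ms / fast_ms
-
-    ratio = times_faster(timings_ms["A"], timings_ms["B"])
-    print(ratio)              # 2.0
-    print(1 - 1 / ratio)      # 0.5, i.e. A saves 50% of the time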
So 20% faster (or 1.2 times faster) is probably not worth it most of the time when there are other optimizations that will make something 10 times faster.<|endoftext|>
-TITLE: Finding roots of the fourth degree polynomial: $2x^4 + 3x^3 - 11x^2 - 9x + 15 = 0$.
-QUESTION [7 upvotes]: My son is taking algebra and I'm a little rusty. Not using a calculator or the internet, how would you find the roots of $2x^4 + 3x^3 - 11x^2 - 9x + 15 = 0$? Please list the steps. Thanks, Brian
-
-REPLY [21 votes]: Guessing one root sometimes opens up the whole equation for you.
-First notice that $\displaystyle x=1$ gives 0. So $\displaystyle x-1$ is a factor.
-Next, rewrite as
-$\displaystyle 2x^4 - 2x^3 + 5x^3 - 5x^2 -6x^2 + 6x - 15x + 15$
-This is to try and get $x-1$ as a factor.
-This gives us
-$\displaystyle 2x^3(x-1) + 5x^2(x-1) - 6x(x-1) - 15(x-1) = (x-1)(2x^3 + 5x^2 - 6x - 15)$
-Now notice that $\displaystyle 2x^3 - 6x = 2x(x^2-3)$ and $5x^2 - 15 = 5(x^2 - 3)$
-Thus
-$\displaystyle (x-1)(2x^3 + 5x^2 - 6x - 15) = (x-1)(2x(x^2 - 3) + 5(x^2 - 3)) = (x-1)(x^2-3)(2x+5)$
-and so the roots are $\displaystyle 1, \pm\sqrt{3}, -\frac{5}{2}$
-
-REPLY [9 votes]: You can try first finding the rational roots using the rational root theorem in combination with the factor theorem in order to reduce the degree of the polynomial until you get to a quadratic, which can be solved by means of the quadratic formula or by completing the square.
-For example, to complement a little bit on Aryabhata's answer, the first solution he found, $x = 1$, can be guessed by using the rational root theorem, since the theorem tells you to look for rational solutions only in the set of fractions $\frac{\pm a}{b}$ where $a, b \in \mathbb{Z}$ are integers such that $a$ divides $15$ and $b$ divides $2$. Thus you list all divisors of $15$, which are $\pm 1, \pm 3, \pm 5, \pm 15$, and the divisors of $2$, which are $\pm 1, \pm 2$. Then your list of possible rational roots would be $\pm \frac{1}{1}, \pm \frac{1}{2}, \pm \frac{3}{1}, \pm \frac{3}{2}, \pm \frac{5}{1}, \pm \frac{5}{2}, \pm \frac{15}{1}, \pm \frac{15}{2}$, and you have to start testing to see whether any of those is a root of your polynomial.<|endoftext|>
-TITLE: Motivating implications of the axiom of choice?
-QUESTION [7 upvotes]: What are some motivating consequences of the axiom of choice (or its omission)? I know that weak forms of choice are sometimes required for interesting results like Banach-Tarski; what are some important consequences of a strong formulation of the axiom of choice?
-
-REPLY [5 votes]: I think that one of the most important implications of the axiom of choice is actually the equivalence of continuity between the Cauchy definition of $\epsilon$-$\delta$ and the Heine definition using sequences.<|endoftext|>
-TITLE: Riemann zeta function at odd positive integers
-QUESTION [52 upvotes]: Starting with the famous Basel problem, Euler evaluated the Riemann zeta function for all even positive integers and the result is a compact expression involving Bernoulli numbers. However, the evaluation of the zeta function at odd positive integers (in terms of getting a closed form sum) is still open. There has been some progress in the form of Apéry's theorem and other results such as "infinitely many of $\zeta(2n+1)$ are irrational" or "at least one of $\zeta(5),\zeta(7),\zeta(9)$ or $\zeta(11)$ is irrational".
-Question(s): Is there a high level understanding for this disparity between even and odd integers?
Is it a case of there being a simple expression for $\zeta(3)$ that is out there waiting for an ingenious attack like Euler did with $\zeta(2)$? Or is the belief that such a closed form summation is unlikely? Where do the many, many proofs powerful enough to evaluate $\zeta(2n)$ stumble when it comes to evaluating $\zeta(2n+1)$?
-Motivation: The Basel problem and Euler's solution are my all-time favorites for the sheer surprise factor and ingenuity of proof (what do $\pi$ and $\frac{\sin(x)}{x}$ have to do with $\zeta(2)$??). However, I currently lack the more advanced analytical tools to appreciate the deeper results of this area. I have wondered for a while about the questions above and Internet search hasn't helped much. I would greatly appreciate any answers/references. Thanks.
-
-REPLY [2 votes]: To piggy-back off of the discussion from @Matt E above, I would like to add that there is probably good reason to believe that all odd integer values of $\zeta(s)$ are algebraically independent of each other, but not necessarily of $\pi$. Euler evaluated the sinc(x) function to obtain a closed form value for $\zeta(2)$. You can also see this function in the product formula:
-$f(x) = \prod_{k=1}^\infty \left(1 -\frac{x^2}{k^2 \pi^2}\right) = \frac{\sin(x)}{x}$
-If we look at a similar function for $\zeta(4)$, we obtain:
-$f(x) = \prod_{k=1}^\infty \left(1 -\frac{x^4}{k^4 \pi^4}\right) = \frac{\sin(x) \sinh(x)}{x^2}$
-There are other transcendental functions associated with the higher, even integer values of $\zeta(s)$. Given these transcendental functions, i.e. the sinc(x) function and the function involving the hyperbolic sine associated with $\zeta(4)$, you can simply take the absolute value of the second term in each of their Taylor series expansions and set $x = \pi$. You then obtain the closed form values of $\zeta(2)$ and $\zeta(4)$. You can similarly perform this for the higher, even values of $\zeta(s)$.
-The question: is there a transcendental function that we can similarly use to obtain the value of $\zeta(3)$? We do know this, using the gamma function:
-$f(x) = \prod_{k=1}^\infty \left(1 -\frac{x^3}{k^3 \pi^3}\right) = \frac{1}{\Gamma(1 - \frac{x}{\pi}) \Gamma(1 + \frac{(-1)^{1/3} x}{\pi}) \Gamma(1 - \frac{(-1)^{2/3} x}{\pi})}$
-If we look at the second term of the series expansion for this inverted "multi-gamma" function above, we obtain:
-$\frac{x^3 \psi^{(2)}(1)}{2 \pi^3}$
-where $\psi^{(n)}(x)$ is the $n$-th derivative of the digamma function.
-When you set $x = \pi$, you can see that it cancels out with the $\pi^3$ in the denominator and you're left with $\psi^{(2)}(1)/2$, whose absolute value is $\zeta(3)$. So it seems that $\pi^3$ is kind of related to $\zeta(3)$, but not in a way that's tangible. My opinion is that there is most likely another transcendental quantity involved along with $\pi$, one that may need to be discovered, or one that is in lieu of $\pi$. That is, $\pi$ might be a stepping stone, so to speak, to reach another transcendental which is then involved with a closed form value for $\zeta(3)$.<|endoftext|>
-TITLE: Cohomology of $\mathcal O_X$ for toric varieties
-QUESTION [5 upvotes]: Motivated by my ignorance here, if $X$ is a projective toric variety, is
-$$H^m(X, \mathcal O_X) \cong \begin{cases} 0 & m > 0 \\ \mathbb C & m = 0 \end{cases}$$
-as for $\mathbb P^n$?
-
-REPLY [10 votes]: Yes, this is true, at least for varieties over the complex numbers $\mathbb{C}$. Indeed, a toric variety over an algebraically closed field is rational (i.e., birational to projective space). In characteristic zero, rational connectedness is a birational invariant, so toric varieties are rationally connected.
Finally, any rationally connected variety is $\mathcal{O}$-acyclic, which is the name for the conclusion that you want. See e.g. here for this last implication.
-The conclusion might well hold more generally; I am not an expert in these matters. You may want to ask your question on MathOverflow if you are not satisfied with this answer.<|endoftext|>
-TITLE: Do people study "ring presentations"? Is this a dumb question?
-QUESTION [11 upvotes]: So one way to define a group presentation is to say, well, let's generate the free group with some number of generators, and then quotient by saying certain elements (relators) cancel just as $aa^{-1}$ does (and other things, you have to take the normal subgroup generated by the elements, but the basic idea works, I think). That is, we have for some words $R_1, R_2, \ldots$ the relations $R_1 = R_2 = \ldots = e$.
-A simple example is the cyclic group of order $n$: one presentation is $\langle g \mid g^n \rangle$.
-My question is: can and do we do this for rings? I'm imagining you would do something like take the "free ring" consisting of all sums of products of generators, and perhaps instead of having all the relators equal to the trivial group they would equal the zero ideal, because then if you wanted some word $R$ to equal the multiplicative identity you would just say $R - 1 = 0$. (In particular, the comments on the vaguely related question that got me thinking about this are here: could you formulate a rigorous proof by "dividing by the relations"?)
-
-REPLY [2 votes]: You can consider presentations for any algebraic system given by special constants, operations and equations. For example, groups are given by the special constant 1 (for the multiplicative identity), the unary operation of inverse, $x^{-1}$, and the binary operation of multiplication, $xy$, plus the equations that these must satisfy. In that case, you can specify every object with such operations in terms of a set of generators, and the relations that the generators must satisfy. The study of these kinds of constructions is known as universal algebra.
-For the specific case of commutative rings over a field, there is an explicit algorithm for working with presentations, known as the Groebner basis algorithm. It's widely implemented in symbolic algebra packages. (In the case of commutative rings, you're basically just working with systems of polynomials.)<|endoftext|>
-TITLE: density of roots of a family of polynomials: $(1-x^2)^{v+n}$
-QUESTION [10 upvotes]: My research has brought me to the following, very general problem.
-Given a fixed, but arbitrary, natural number, $\displaystyle v$, consider the following family of polynomials: the $\displaystyle (n-1)^{th}$ derivative of
-$$\displaystyle (1-x^2)^{v+n} \ \ \forall n \in \mathbb{N} $$
-I would like to prove (or disprove) that the roots of this entire family of polynomials form a dense subset of the interval $\displaystyle [0,1]$ for any value of $\displaystyle v$ (I am not interested in roots outside the interval $\displaystyle [0,1]$).
-In other words, given any subinterval, $\displaystyle [a,b]$, no matter how small, at least one of these polynomials has at least one root in the interval $\displaystyle [a,b]$ (for any fixed value of $\displaystyle v$).
-I realize my question is very general and will happily accept any partial solutions.
-
-REPLY [2 votes]: First note that this family of polynomials is orthogonal, on the interval $[-1,1]$, with the weight factor $(1-x^2)^{-(v+1)}$.
This is not much of a surprise since the definition is very similar to that of the traditional Legendre polynomials, which are orthogonal. Next, we use the following deep result involving orthogonal polynomials:
-If $\{p_n\}$ is a family of orthogonal polynomials with roots in $[-1,1]$ and $N(a,b,n)$ represents the number of roots of $p_n$ in [$\cos(b),\cos(a)$] then
-$$\lim_{n\to \infty}\frac1{n} N(a,b,n)=\frac{b-a}{\pi}$$
-Thus for any small subinterval [$\cos(b),\cos(a)$], there exists $n$ sufficiently large such that $N(a,b,n)>1$, implying that the roots of these polynomials do form a dense subset of $[-1,1]$.<|endoftext|>
-TITLE: Alexandroff compactification question
-QUESTION [9 upvotes]: If $X$ is a locally compact and metrizable space such that its Alexandroff compactification is not first countable, does this imply that no other compactification of $X$ can be first countable? Why?
-
-REPLY [6 votes]: There can be locally-compact metrizable spaces with non-first-countable Alexandroff compactifications but with other first-countable compactifications.
-This follows from a theorem of Banakh and Leiderman in "Uniform Eberlein compactifications of metrizable spaces" which states that a metrizable space $X$ has a first-countable uniform Eberlein compactification if and only if $|X|\leq\mathfrak{c}$.
-This implies that the discrete space on $\omega_1$ (which is trivially locally-compact and metrizable) has a first-countable compactification. But the Alexandroff compactification is not first-countable, since the open neighbourhoods of the "point at infinity" correspond to the co-finite subsets of $\omega_1$, and any (sub-)basis must be uncountable.<|endoftext|>
-TITLE: Ten people in a room
-QUESTION [10 upvotes]: There are ten people in a room:
-Person $1$ knows $9$ people. Person $2$ knows $8$ people. Etc. Person $9$ knows $1$ person.
-How many people does person $10$ know?
-
-What is the answer and what is the principle behind this question?
-
-REPLY [7 votes]: You can think of this as a problem in graph theory. You have a graph $G$ on ten vertices (the ten people in the room $\{1, 2, 3, \ldots, 10\}$) where two people are connected by an edge if and only if they know each other. The degree or valence of a vertex $v \in G$, denoted $d_v$, is the number of edges connected to it (the number of people someone knows). You are given that $d_1 = 9, d_2 = 8, \ldots, d_9 = 1$ and you want to know $d_{10}$ (so the problem gives you information here: it tells you that $d_{10}$ is uniquely determined by these conditions).
-Here's one way to solve it, in steps.
-
-Person $1$ must know everyone else; that is, he must be connected to all of the other vertices (including person $10$). In particular, he must be the only person that person $9$ knows.
-Person $2$ must know everyone else except one person, and since person $9$ only knows person $1$, person $2$ must know everyone else except person $9$ (including person $10$). In particular, he must be the only other person that person $8$ knows.
-Person $3$ must know everyone else except two people, and since persons $8$ and $9$ have all their friends accounted for, person $3$ must know everyone else except persons $8$ and $9$ (including person $10$). In particular, he must be the only other person that person $7$ knows.
-... and so forth. By induction we conclude that person $10$ knows persons $1, 2, 3, 4, 5$ but not persons $6, 7, 8, 9$. So $d_{10} = 5$.<|endoftext|>
-TITLE: Is there an infinite set of strings whose Kolmogorov complexities are computable?
-QUESTION [8 upvotes]: Is there an infinite set of strings whose Kolmogorov complexities are computable?
-
-REPLY [15 votes]: I think you are asking this: is there an infinite r.e. set of pairs $(\sigma,n)$ where $\sigma \in 2^{<\omega}$ is a string of Kolmogorov complexity $n$. The answer to that is no.
-For a contradiction, assume such a list is r.e. - then there are arbitrarily long strings in it, and thus strings of arbitrarily high Kolmogorov complexity. Define a function $P$ that takes input $\tau \in 2^{<\omega}$ and does the following. First, it effectively enumerates that list until it finds a pair $(\sigma, n)$ where $n > |\tau|$. Then it prints out $\sigma$.
-The assumptions we have made ensure that $P$ is a total computable function. Therefore, applying Kleene's recursion theorem to $P$ gives a program $e_0$ that, when run with no input, computes $P(e_0)$. Thus the output of program $e_0$, run with no input, is a string of Kolmogorov complexity larger than $|e_0|$, which is impossible.<|endoftext|>
-TITLE: Find a function from values
-QUESTION [6 upvotes]: Is there any way to find a function, even just an approximate one, from a set of values?
-I get these pairs of values from two sensors and would like to find a simple function that describes the relationship between these pairs of numbers, and then to estimate the values without having to take a measurement each time.
-I have a "black box" sensor, connected to a potentiometer, that gives me a value for the direction of a servo. I noticed that the value is offset by a number (e.g. 200 -> 11, 72 -> 5) and I wanted to understand whether it is possible to correct this error from outside the box by approximating the error value returned for a given parameter.
-example:
-I have some values (x, y):
-{(200, 11), (72, 5), (36,3), (28,3), (18,2), (12,2), ...}
-
-what is the function that can return these values, and all others that follow this trend on a graph, as easily and as well as possible?
-thank you, hello!
-
-REPLY [8 votes]: Assuming those values came from a polynomial, what you want to do is polynomial interpolation.
-In this case, since you have six points, the degree of the polynomial is at most five, since six coefficients are needed to uniquely determine a quintic.
-Wolfram Alpha is able to determine interpolating polynomials from given data, e.g. this.<|endoftext|>
-TITLE: Multiplying Cardinal Numbers
-QUESTION [8 upvotes]: I was just reading a proof of the dimension theorem in Steven Roman's Advanced Linear Algebra. In addressing the cases of infinite bases, Roman proceeds to show that if $\mathcal{B}$ and $\mathcal{C}$ are bases of a space $V$, then $|\mathcal{B}|\leq |\mathcal{C}|$, working up to an application of the BSC-Theorem. Anyway, he uses the string
-$$|\mathcal{B}|\leq\aleph_0|\mathcal{C}|=|\mathcal{C}|.$$
-Sorry if it's an elementary question, but why does the equality follow? Here $\mathcal{C}$ is any infinite basis. Is it definition? I tried looking up multiplication of ordinals, but didn't find anything useful. Thanks.
-
-REPLY [3 votes]: Seeing how the essential question was answered, I want to stress something else in your post which needs to be pointed out:
-It is true that cardinals (namely, Aleph numbers) are usually treated as ordinals; however, the multiplication and addition of cardinals and ordinals are very different, and most of all, exponentiation is different as well.
-For ordinals $\alpha$ and $\beta$ we define the sum to be:
-
-$\alpha + 0 = \alpha$
-$\alpha + (\beta + 1) = (\alpha + \beta) + 1$ (where $+1$ is the successor ordinal)
-$\alpha + \beta$ for a limit ordinal $\beta$ is the limit of $\alpha+\gamma$ for $\gamma<\beta$
-
-One can notice that it is usually non-commutative, as $2+\omega = \sup\{2+n\colon n<\omega\} = \omega \neq \omega+2$.
-Ordinal multiplication is defined in a similar way, as is exponentiation (namely, a simple rule for zero and for successors, and a limit at limit ordinals), and an interesting result is that $\omega^\omega$ is countable when dealing with ordinal exponentiation.
-In contrast, if $\lambda$ and $\mu$ are infinite cardinals then $\lambda + \mu = \mu + \lambda = \lambda \cdot \mu = \mu \cdot \lambda = \max \{\lambda, \mu\}$, and exponentiation is defined as $\lambda^\mu = |\{f \mid f\colon\mu\to\lambda\}|$ - that is, the cardinality of the collection of functions from $\mu$ into $\lambda$.
-For further information and definitions you can see this wikipedia link.
-So since you were dealing with the cardinality of the basis, you were looking for cardinal arithmetic and not ordinal arithmetic, which are two different things.<|endoftext|>
-TITLE: On what interval does a Taylor series approximate (or equal?) its function?
-QUESTION [20 upvotes]: Suppose I have a function $f$ that is infinitely differentiable on some interval $I$.
-When I construct a Taylor series $P$ for it, using some point $a$ in $I$, does $f(x) = P(x)$ for all $x$ in $I$?
-I'm confused as to whether Taylor series approximate (or equal - I'm not sure about that either) functions at one point, or on an interval.
-
-REPLY [5 votes]: Since you speak about intervals (on the real line), perhaps it should also be mentioned that the "natural habitat" for power series is really the complex plane; computing a power series involves only +, -, *, /, and limits, which are well defined operations on complex numbers. And for so-called "complex analytic" (or "holomorphic") functions, which includes most functions that you encounter in calculus, it is a fact that the Taylor series at any point in the complex plane is convergent (equal to the function) in a circle around that point. The size of this circle is such that it exactly reaches out to the nearest singularity of the function. (Circles in the complex plane are the counterparts to intervals on the real line in this context.)
-A simple example is $f(x)=1/(1+x^2)$. If you just look at the graph of this function (for $x$ real) it looks perfectly nice, and there seems to be no reason why the Taylor series at $x=0$ only manages to converge to the function in the interval $(-1,1)$. But if you think of $x$ as a complex variable, it's clear that there are singularities (division by zero) at the points $x=\pm i$, which lie at distance one from the origin, and that explains why the Taylor series converges inside the circle with radius one. (Note that the intersection of this disk with the real axis is just the interval $(-1,1)$.)
-The same thing goes for $g(x)=\arctan x$. Its derivative is $f(x)$ above, and where the derivative has singularities, the function has too. So the Taylor series for $g(x)$ at $x=0$ is also convergent inside the unit circle.<|endoftext|>
-TITLE: What is the resistance between two points a knight's move away on an infinite grid of 1-ohm resistors
-QUESTION [20 upvotes]: On an infinite grid of ideal one-ohm resistors, what's the equivalent resistance between two nodes a knight's move away?
-
-(please fix the tags, I didn't really know where to put it)
-
-REPLY [4 votes]: The correct answer is indeed $\frac{4}{\pi}-\frac{1}{2}$. You can find a complete solution in the book of R. Lyons and Y. Peres, "Probability on trees and networks", section 4.3, p. 124-127. This mainly uses Fourier analysis and the symmetry of the grid.<|endoftext|>
-TITLE: Lebesgue Decomposition Theorem
-QUESTION [7 upvotes]: The usual statement of the Lebesgue Decomposition Theorem says that given two $\sigma$-finite measures $\mu$ and $\nu$ on a measure space, we can decompose $\nu = \nu_1 + \nu_2$, where $\nu_1$ is absolutely continuous with respect to $\mu$ and $\nu_2$ and $\mu$ are mutually singular.
-Wikipedia says that there is a "refinement" of this result, where the singular part $\nu_2$ can be further decomposed into a discrete measure and a singular continuous measure.
-I understand what a discrete measure is, but what exactly is the definition of a singular continuous measure? I was also wondering if anyone knew of a reference for this refined result, since I haven't been able to find it anywhere.
-
-REPLY [8 votes]: That decomposition is commonly encountered in probability theory. In a more general setting, suppose that $\rho$ is a $\sigma$-finite measure on $\mathcal{B}(\mathbb{R}^n)$. Then there is a unique decomposition $\rho = \rho_{ac} + \rho_d + \rho_{cs}$, such that: 1) $\rho_{ac}$ is absolutely continuous, that is, $\rho_{ac}$ is zero on sets of Lebesgue measure zero; 2) $\rho_d$ is discrete, that is, $\rho_d$ is zero on the complement of some countable set $C$; 3) $\rho_{cs}$ is continuous singular, that is, $\rho_{cs}$ is zero at every point $x \in \mathbb{R}^n$ (= continuous measure), and is zero on the complement of some set $B$ of Lebesgue measure zero (= singular measure).<|endoftext|>
-TITLE: Distinction between 'adjoint' and 'formal adjoint'
-QUESTION [24 upvotes]: In functional analysis, you encounter the terms 'adjoint' and 'formal adjoint'.
-What does 'formal' mean in that case? It sounds like a hint that 'formal adjoints' lack a certain property that would make them a 'true' adjoint.
-I haven't found a definition anywhere, and would be eager to know.
-
-REPLY [41 votes]: Are you talking about differential operators on functions over a domain in $\mathbb{R}^d$? (This is the context in which the phrase "formal adjoint" usually comes up.)
-The idea is that working with, say, smooth functions with compact support, we have the integration by parts formula
-$$ \int Du\cdot v ~dx + \int u\cdot Dv~ dx = 0 $$
-So if a linear partial differential operator is defined as $P = \sum A_\alpha D^\alpha$ where $\alpha$ are multi-indices, you can write $P'$ as a linear partial differential operator $P'\phi = \sum (-1)^{|\alpha|} D^\alpha(A_\alpha \phi)$ and generalize the integration by parts formula
-$$ \int Pu \cdot v~dx = \int u \cdot P'v~ dx $$
-which looks, in form, suspiciously like the adjoint with respect to the $L^2$ inner product. That is, writing $\langle\cdot,\cdot\rangle$ for the $L^2$ inner product of real valued functions,
-$$ \langle Pu,v\rangle = \langle u, P'v\rangle $$
-The reason that we call this a formal adjoint is because, technically, to take an adjoint (in the Hilbert space sense; there is also a different notion for Banach spaces) of an operator, you need to specify which Hilbert space you are working over.
In the case of the formal adjoint, it is left unspecified: indeed, the formula only really holds for sufficiently smooth functions decaying sufficiently fast at infinity, and not in general for arbitrary functions $u,v\in L^2$.
-In general for differential operators, the operator itself will not be bounded on an $L^2$ Hilbert space, and so the operator is only densely defined on your Hilbert space. Therefore the adjoint can only be defined on another subset of the Hilbert space, the domain of the adjoint. (In the most general cases, the domain of the adjoint can be a much, much smaller set [even finite dimensional], so it does not make much sense as an operator on the original Hilbert space. For differential operators, the adjoint is still densely defined using the density of $C^\infty_0$ in $L^2$.) (Note also that if the spatial domain has a boundary, the integration by parts formula picks up a boundary term in general, so you pick up a further problem with the notion of adjoints, related to the fact that $C^\infty_0(\Omega)$ is not dense in the Sobolev space $W^{1,2}(\Omega)$ when $\Omega$ has boundary.)
-While the word "formal" is, I think, not mentioned explicitly, a lot of the problems that can arise when you deal with unbounded operators are discussed in chapter 8 of Reed-Simon, "Methods of mathematical physics".<|endoftext|>
-TITLE: If $(a_{n})$ is increasing, is $u_{n}=\frac{a_{1}+\cdots+a_{n}}{n}$ increasing as well?
-QUESTION [10 upvotes]: And what about the other direction? If $(u_{n})$ is increasing, what about $(a_{n})$?
-I'm guessing the former is true since we know that if $(a_{n})$ converges, $(u_{n})$ converges to the same limit. That tells me that around infinity the sequences behave roughly the same. I tried proof by induction but got stuck.
-
-REPLY [7 votes]: Here is a "proof by physics":
-The average of $a_1, \dots, a_n$ is the center of mass of $n$ point masses, each with unit mass, having positions (along the $x$-axis) given by their values, i.e. $a_i$ has mass 1 and position $x = a_i$.
-Given $(n+1)$ unit point masses, $a_1 \leq \dots \leq a_{n+1}$, let $u_n$ be the center of mass of the first $n$ points. The center of mass $u_{n+1}$ can be computed by replacing the first $n$ points by a mass of $n$ with position $u_{n}$. Since $a_{n+1} \geq a_n \geq u_n$, the addition of $a_{n+1}$ to the system shifts the center of mass to the right. Hence, $u_{n+1} \geq u_n$.
-On the other hand, if $n$ points of unit mass are distributed evenly along the $x$-axis, it is easy to see that the addition of a unit mass at a position to the right of $u_n$ but to the left of $a_n$ will shift the center of mass to the right (even though the position of the additional mass is to the left of $a_n$). Hence, the converse is false.<|endoftext|>
-TITLE: Showing $1,e^{x}$ and $\sin{x}$ are linearly independent in $\mathcal{C}[0,1]$
-QUESTION [10 upvotes]: How do I show that $f_{1}(x)=1$, $f_{2}(x)=e^{x}$ and $f_{3}(x)=\sin{x}$ are linearly independent, as elements of the vector space of continuous functions $\mathcal{C}[0,1]$?
-So for showing these elements are linearly independent, one needs to show that if $$ a_{1} \cdot 1 + a_{2} \cdot e^{x} + a_{3} \cdot \sin{x}=0$$ then from this we should conclude that $a_{1}=a_{2}=a_{3}=0$. But I am not able to deduce this.
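-
-(Before the analytic argument in the reply below, a quick numeric sanity check is possible; this sketch assumes NumPy, and the sample points are arbitrary choices.)
-
-    # If a_1*1 + a_2*e^x + a_3*sin(x) = 0 for all x in [0,1], it holds in
-    # particular at three sample points, giving a 3x3 linear system M a = 0.
-    # A nonzero determinant forces a_1 = a_2 = a_3 = 0.
-    import numpy as np
-
-    xs = np.array([0.0, 0.5, 1.0])   # arbitrary sample points in [0,1]
-    M = np.column_stack([np.ones_like(xs), np.exp(xs), np.sin(xs)])
-
-    print(np.linalg.det(M))   # about -0.28, nonzero, so only a = 0 works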
-
-REPLY [2 votes]: You have $a + b \sum_{k\geq 0} \frac{x^k}{k!} + c \sum_{k\geq 0} (-1)^{k} \frac{x^{2k+1}}{(2k+1)!} = 0$.
-Since this power series vanishes identically, all of its coefficients must be 0. So for the constant term we have $a+b = 0$; comparing coefficients of $x$ we have $b+c=0$; and comparing the coefficient of $x^2$ we have $b=0$, from which it follows that $a=b=c=0.$<|endoftext|>
-TITLE: Will moving differentiation from inside, to outside an integral, change the result?
-QUESTION [40 upvotes]: I'm interested in the potential of such a technique. I got the idea from Moron's answer to this question, which uses the technique of differentiation under the integral.
-Now, I'd like to consider this integral:
-$$\int_{-\pi}^\pi \cos{(y(1-e^{i\cdot n \cdot t}))}\mathrm dt$$
-I'd like to differentiate with respect to $y$. This will give the integral:
-$$\int_{-\pi}^\pi -(1-e^{i\cdot n \cdot t})\sin{(y(1-e^{i\cdot n \cdot t}))}\mathrm dt$$
-...if I'm correct. Anyway, I'm interested in obtaining the results of this second integral, using this technique. So I'm wondering if solving the first integral can help give results for the second integral. I'm thinking of setting $y=1$ in the second integral. This should eliminate $y$ from the result, and give me the integral involving $x$.
-The trouble is, I'm not sure I can use the technique of differentiation under the integral. I want to know how I can apply this technique to the integrals above. Any pointers are appreciated.
-For instance, for what values of $y$ is this valid?
-
-REPLY [5 votes]: The general theorem written above by Qiaochu Yuan is formulated in the German and French Wikipedias with proofs.
-They also give links to some literature, but I wasn't able to find the French book, while in the German books I haven't found the statement in its full generality.<|endoftext|>
-TITLE: Miller-Rabin Primality Testing failure and a subgroup
-QUESTION [6 upvotes]: Let $n$ be composite. I'm trying to figure out if the set $H$ of $a$ such that
-1) $a$ is relatively prime to $n$ and
-2) the Miller-Rabin test fails to show compositeness of $n$ with $a$
-is a subgroup of the multiplicative group mod $n$.
-My instinct is no: I am trying to show that the set is not closed. But then again we won't get any factors of $n$ by multiplication because of condition 1).
-Update: I've run some experiments and it seems that the number of strong liars relatively prime to $n$ divides the order of the multiplicative group. I wanted to get a contradiction with Lagrange's theorem.
-
-REPLY [7 votes]: The strong liars form a subgroup of $(\mathbb{Z}/n\mathbb{Z})^{\times}$ iff $n$ isn't of the form $n=\prod_{j=1}^{k}{p_{j}}^{\alpha_{j}}$, where $k\geq2$, $p_{1},\ldots,p_{k}$ are pairwise different primes such that $p_{j}\equiv 1\pmod4$ and $\alpha_{j}\in\mathbb{N}$, $j\in\{1,\ldots,k\}$. Here's the proof:
-For $n=1$ everything's trivial so let's assume $n>1$ and set $n-1=2^{x}y$, where $x,y\in\mathbb{Z}$ and $2\nmid y$. Also let $L(n)$ denote the set of strong liars $\bmod n$, i.e.
-$$L(n)=\big\{a\in(\mathbb{Z}/n\mathbb{Z})^{\times}\colon a^{y}=1\text{ or }(\exists t\in\mathbb{Z},0\leq t<x)\ a^{2^{t}y}=-1\big\}.$$
-If $n$ is even then $x=0$, so the second condition is vacuous and $L(n)=\big\{a\in(\mathbb{Z}/n\mathbb{Z})^{\times}\colon a^{y}=1\big\}$, which is a subgroup of $(\mathbb{Z}/n\mathbb{Z})^{\times}$.
-If $q\mid n$ for some prime $q\equiv3\pmod4$ then $-1$ is not a square mod $q$, so $a^{2^{t}y}=-1$ is impossible for $t\geq1$; hence (as $n$ is odd here) it's $x>0$ and
-$$L(n)=\big\{a\in(\mathbb{Z}/n\mathbb{Z})^{\times}\colon a^y=\pm1\big\},$$
-which is again a subgroup of $(\mathbb{Z}/n\mathbb{Z})^{\times}$.
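-(The remaining cases are treated below. As an empirical cross-check of these subgroup claims, here is a minimal sketch in Python that enumerates the strong liars for small odd $n$ and tests closure; the sample moduli are arbitrary.)
-
-    from math import gcd
-
-    def strong_liars(n):                    # odd composite n > 2
-        x, y = 0, n - 1
-        while y % 2 == 0:                   # write n - 1 = 2^x * y with y odd
-            x, y = x + 1, y // 2
-        liars = set()
-        for a in range(1, n):
-            if gcd(a, n) != 1:
-                continue
-            b = pow(a, y, n)
-            if b == 1 or b == n - 1:        # a^y = +-1
-                liars.add(a)
-                continue
-            for _ in range(x - 1):          # check a^(2^t * y) = -1, t < x
-                b = b * b % n
-                if b == n - 1:
-                    liars.add(a)
-                    break
-        return liars
-
-    def is_subgroup(n, liars):
-        return all(a * b % n in liars for a in liars for b in liars)
-
-    # 9, 15, 21, 49 have a prime factor 3 mod 4; 65 = 5*13 and 85 = 5*17 are
-    # products of distinct primes that are all 1 mod 4, the "bad" shape above.
-    for n in [9, 15, 21, 25, 49, 65, 85]:
-        L = strong_liars(n)
-        print(n, len(L), is_subgroup(n, L))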
-If $n=p^\alpha$, where $p$ is a prime such that $p\equiv 1\pmod4$ and $\alpha\in\mathbb{N}$, then we have $a^{n-1}-1=(a^{y}-1)\prod_{j=0}^{x-1}(a^{2^{j}y}+1)$, and since at most one of the factors on the RHS of this equation can be $0\bmod p$, we have
-$$L(n)=\big\{a\in(\mathbb{Z}/n\mathbb{Z})^{\times}\colon a^{n-1}=1\big\},$$
-which is once again a subgroup of $(\mathbb{Z}/n\mathbb{Z})^{\times}$.
-Now let $n=\prod_{j=1}^{k}{p_{j}}^{\alpha_{j}}$, where $k\geq2$, $p_{1},\ldots,p_{k}$ are pairwise different primes such that $p_{j}\equiv 1\pmod4$ and $\alpha_{j}\in\mathbb{N}$, $j\in\{1,\ldots,k\}$. Let $G_{j}=(\mathbb{Z}/{p_{j}}^{\alpha_{j}}\mathbb{Z})^{\times}$, let $H_{j}={G_{j}}^y$ be the subgroup of $y$-th powers in $G_{j}$ and let $K_{j}=\{a\in G_{j}\colon a^{y}=1\}$ be the $y$-torsion subgroup of $G_{j}$. Then $|G_{j}|=\varphi({p_{j}}^{\alpha_{j}})=(p_{j}-1){p_{j}}^{\alpha_{j}-1}$, $|K_{j}|=\gcd(\varphi({p_{j}}^{\alpha_{j}}),y)$ (because $G_{j}$ is cyclic) and $H_{j}\cong G_{j}/K_{j}$, from which it follows that $4\mid|H_{j}|$. $H_{j}$ is cyclic (it's a subgroup of $G_{j}$), so it contains a unique cyclic subgroup of order $4$. For every $j\in\{1,\ldots,k\}$ let's fix $r_{j}\in G_{j}$ such that ${r_{j}}^{y}$ has order $4$ (such elements exist by the above reasoning). By CRT we have $(\mathbb{Z}/n\mathbb{Z})^{\times}\cong\prod_{j=1}^{k}G_{j}$; let $f_{j}\colon(\mathbb{Z}/n\mathbb{Z})^{\times}\to G_{j}$ be the natural projection. Then there exist $a,b\in(\mathbb{Z}/n\mathbb{Z})^{\times}$ such that $f_{1}(a)=r_1$, $f_{1}(b)=-r_1$ and $f_{j}(a)=f_{j}(b)=r_j$ for $j\geq2$. Then for every $j$ it's $f_{j}(a^{2y})=f_{j}(b^{2y})=-1$, hence $a^{2y}=b^{2y}=-1$ and $a,b\in L(n)$ (because $x\geq2$). But $(ab)^{2y}=1$ so it can't be $(ab)^{2^{t}y}=-1$ for $t\geq1$, and also $f_{1}((ab)^{y})=-{r_{1}}^{2y}=1$, $f_{2}((ab)^{y})={r_{2}}^{2y}=-1$, so it's $(ab)^{y}\neq\pm1$, hence $ab\notin L(n)$, which means that $L(n)$ isn't a subgroup of $(\mathbb{Z}/n\mathbb{Z})^{\times}$ in this case.
-Concerning the previous post: If $q\mid n$ for some prime $q$ such that $q\equiv 2\text{ or }3\pmod4$ then it's easy to see that there are no primitive Pythagorean triangles with hypotenuse $n$, and if $n=\prod_{j=1}^{k}{p_{j}}^{\alpha_{j}}$, where $p_{1},\ldots,p_{k}$ are pairwise different primes such that $p_{j}\equiv 1\pmod4$ and $\alpha_{j}\in\mathbb{N}$, then it's not difficult to show that there are $2^{k-1}$ different (up to isomorphism) primitive Pythagorean triangles with hypotenuse $n$, so the "bad" $n$'s are really exactly the ones for which there's more than one such triangle. :)<|endoftext|>
-TITLE: An inequality on Cevians
-QUESTION [7 upvotes]: Let $\displaystyle AD$, $\displaystyle BE$, $\displaystyle CF$ be three cevians concurrent at $\displaystyle P$ inside the $\displaystyle \Delta ABC$.
-Prove or disprove that:
-$$\displaystyle \dfrac{AD}{AP} + \dfrac{BE}{BP} + \dfrac{CF}{CP} \ge \dfrac{9}{2}$$
-
-REPLY [7 votes]: The inequality is true!
-
-It can be shown that (see proof at the end of the answer)
-$$\displaystyle \frac{PD}{AD} + \frac{PE}{BE} + \frac{PF}{CF} = 1$$
-Note that this implies that
-$$\displaystyle \frac{AP}{AD} + \frac{BP}{BE} + \frac{CP}{CF} = 2$$
-as $\displaystyle 1 - \frac{PD}{AD} = \frac{AP}{AD}$ etc.
-Now we have the inequality (easily shown using $\text{AM} \ge \text{GM}$) that
-$$\displaystyle (a_1 + a_2 + a_3)(\frac{1}{a_1} + \frac{1}{a_2} + \frac{1}{a_3}) \ge 9$$
-This shows that
-$$\displaystyle (\frac{AP}{AD} + \frac{BP}{BE} + \frac{CP}{CF})(\frac{AD}{AP} + \frac{BE}{BP} + \frac{CF}{CP}) \ge 9$$
-and so
-$$\displaystyle 2(\frac{AD}{AP} + \frac{BE}{BP} + \frac{CF}{CP}) \ge 9$$
-i.e.
-$$\displaystyle \frac{AD}{AP} + \frac{BE}{BP} + \frac{CF}{CP} \ge \frac{9}{2}$$
-Note that the equality occurs only when $\displaystyle \frac{AP}{AD} = \frac{BP}{BE} = \frac{CP}{CF} = \frac{2}{3}$, which implies that $\displaystyle P$ is the centroid.
-
-Proof
-Let us try showing that
-$$\displaystyle \frac{PD}{AD} + \frac{PE}{BE} + \frac{PF}{CF} = 1$$
-Consider the figure (repeated from above for convenience).
-
-Note, if you are worried about acute triangle vs obtuse etc, a simple affine transformation will do to transform the triangle into an equilateral triangle.
-Let $X$ be the foot of the perpendicular from $A$ to $BC$ and $Y$ be the foot of the perpendicular from $P$ to $BC$.
-$\displaystyle \triangle AXD$ and $\triangle PYD$ are similar and thus
-$\displaystyle \frac{PY}{AX} = \frac{PD}{AD}$.
-Now $\displaystyle \frac{PY}{AX} = \frac{|\triangle PBC|}{|\triangle ABC|}$
-where $\displaystyle |\triangle MNO|$ is the area of $\displaystyle \triangle MNO$.
-Thus $\displaystyle \frac{PD}{AD} = \frac{|\triangle PBC|}{|\triangle ABC|}$
-Similarly
-$\displaystyle \frac{PE}{BE} = \frac{|\triangle PAC|}{|\triangle ABC|}$
-$\displaystyle \frac{PF}{CF} = \frac{|\triangle PAB|}{|\triangle ABC|}$
-Adding gives us
-$$\displaystyle \frac{PD}{AD} + \frac{PE}{BE} + \frac{PF}{CF} = \frac{|\triangle PAB| + |\triangle PAC| + |\triangle PBC|}{|\triangle ABC|} = \frac{|\triangle ABC|}{|\triangle ABC|} = 1$$<|endoftext|>
-TITLE: What is the product of a Dirac delta function with itself?
-QUESTION [11 upvotes]: What is the product of a Dirac delta function with itself? What is the dot product with itself?
-
-REPLY [5 votes]: This answer is primarily to expand on this comment. From that comment and the following, it seems to me draks thinks of the delta as a function, and from the title, it seems the OP also does. Or at least, this was true at the time of posting the comment and the question respectively. Eradicate this misconception from your minds at once, if it is there. The delta is not a function, although it is sometimes called "Delta function".
-Let me give you a bit of background, a little timeline of my relationship with the Delta.
-
-I first heard of it from my father, a Physics professor and physicist, who introduced it to me as a function equalling 0 outside 0 and infinity at 0. Such a function seemed abstruse to me, but I had other worries on my mind so I didn't bother investigating. This is how Dirac originally thought of the Delta when introducing it, but, as we shall see, this definition is useless because it doesn't yield the most-used identity involving this "function";
-Then I had Measure theory, and voilà, a Dirac Delta again, this time a measure, which gives a set measure 0 if 0 is not in it, and 1 if 0 is in it. More precisely, $\delta_0$ is a measure on $\mathbb{R}$, and if $A\subseteq\mathbb{R}$, then $\delta_0(A)=0$ if $0\not\in A$, and 1 otherwise. Actually, I was introduced to uncountably many Deltas, one for each $x\in\mathbb{R}$.
$\delta_x$, for $x\in\mathbb{R}$, was a measure on the real line, giving measure 0 to a set $A\subseteq\mathbb{R}$ with $x\not\in A$, and 1 to a set containing $x$;
-Then I had Physics 2 and Quantum Mechanics, and this Delta popped up as a function, and I was like, WTF! It's a measure, not a function! Both courses did say it was a distribution, and not a function, so I was like, what in the world is a distribution? But both courses, when using it, always treated it like a function;
-Then I had Mathematical Physics, including a part of Distribution theory, and I finally was like, oh OK, that is what a distribution is! The measure and the distribution are close relatives, since the distribution is nothing but the integral with respect to the measure of the function this distribution is given as an argument.
-
-In both settings, it is a priori meaningless to multiply two deltas. Well, one could make a product measure, but that would just be another delta on a Cartesian product, no need for special attention. In the distribution setting, we have what this answer says, which gives us an answer as to what the product might be defined as, and what problems we might run into.
-So what is the product of deltas? And what is the comment's statement all about?
-The answer to the first question is: there is no product of deltas. Or rather, to multiply distributions you need convolutions, and those need some restrictions to be associative.
-The second question can be answered as follows. That statement is a formal abbreviation. You will typically use it inside a double integral like:
-$$\int_{\mathbb{R}}f(\xi)\int_{\mathbb{R}}\delta(\xi-x)\delta(x-\eta)dxd\xi,$$
-which with the formal statement reduces to $f(\eta)$. I have seen such integrals in Quantum Mechanics, IIRC. I remember some kind of spectral theorem for some kind of operators where there was a part of the spectrum, the discrete spectrum, which yielded an orthonormal system of eigenvectors, and the continuous spectrum somehow yielded deltas, but I will come back here to clarify after searching what I have of those lessons for details.
-Edit:
-$\newcommand{\braket}[1]{\left|#1\right\rangle}\newcommand{\xbraket}[1]{|#1\rangle}$
-I have sifted a bit, and found the following:
-
-Spectral theorem - Given a self-adjoint operator $A$, the set of eigenvectors $\braket{n}$ of $A$ can be completed with a family of distributions $\braket{a}$, indexed by a continuous parameter $a$, which satisfy:
-\begin{align*}
-A\braket{n}={}&a_n\braket{n} && \braket{n}\in H, \\
-A\braket{a}={}&a\braket{a} && \braket{a}\text{ distribution},
-\end{align*}
-in such a way as to form a "generalized" basis of $H$, in the sense that all the vectors of $H$ can be written as an infinite linear combination:
-$$\braket{\psi}=\sum c_n\braket{n}+\int da\,c(a)\braket{a}.$$
-The set of eigenvalues (proper and generalized) of $A$ is called the spectrum of $A$ and is a subset of $\mathbb{R}$.
-
-What happens to the Parseval identity? Naturally:
-$$\langle\psi,\psi\rangle=\sum|c_n|^2+\int da\,|c(a)|^2.$$
-So this "basis" is orthonormal in the sense that the eigenvectors are, the distributions have as product a family of deltas, or:
-$$\langle a,a'\rangle=\delta(a-a'),$$
-and multiplying the eigenvectors by the distributions also yields a nice big 0.
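-A side note before moving on: the defining ("sifting") identity discussed in the next paragraph can be checked numerically by replacing the delta with a narrow Gaussian (a "nascent delta"). A minimal sketch, assuming NumPy/SciPy; the test function and center are made-up choices:
-
-    import numpy as np
-    from scipy.integrate import quad
-
-    def delta_eps(x, x0, eps):
-        # Gaussian of width eps centered at x0; its total integral is 1.
-        return np.exp(-(x - x0) ** 2 / (2 * eps ** 2)) / (eps * np.sqrt(2 * np.pi))
-
-    f, x0 = np.cos, 0.7
-    for eps in [0.5, 0.1, 0.01]:
-        val, _ = quad(lambda x: f(x) * delta_eps(x, x0, eps),
-                      x0 - 1.0, x0 + 1.0, points=[x0])
-        print(eps, val)   # approaches cos(0.7) ~ 0.7648 as eps shrinks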
-The famous identity I mentioned in the timeline above and then forgot to expand upon is actually what defines the delta, or at least what the QM teacher used to define it:
-$$\int_{\mathbb{R}}f(x)\delta(x-x_0)\,\mathrm{d}x=f(x_0),$$
-for any function $f:\mathbb{R}\to\mathbb{R}$ and $x_0\in\mathbb{R}$. If the $\delta$ were a function, it would have to be zero outside 0, but I'm sure you know all too well that altering the value of a function in a single point doesn't alter the integral, and the integral in the identity above would be an integral of a function that is 0 save for a point, so it would be 0, and if $f(x_0)\neq0$ the identity wouldn't hold.
-Notice how this formal statement is much like an analogous statement for Kronecker deltas:
-$$\sum_n\delta_{nm}\delta_{nl}=\delta_{ml}.$$
-Imagine taking this to the continuum: the sums become integrals, and what can $\delta_{nm}$ become if not $\delta(n-m)$? So the statement is just a formal analog of the true statement with Kronecker Deltas when going into the continuum. Of course, distributionally it makes no sense, nor in terms of measure.
-I have no idea how integrals with two deltas may be useful, and I have found none in my sifting. I will sift more, and perhaps Google, and if I find anything interesting, I'll be back.
-Update:
-$\newcommand{\lbar}{\overline}\newcommand{\pa}[1]{\left(#1\right)}$
-I decided I'd just stop the sifting and concentrate on my exams. I googled though, and found this.
-Another argument I thought up myself in favor of the statement is the following. Let $\phi$ be a function. It is pretty natural to say:
-$$\phi=\int_{\mathbb{R}}\phi(a)\delta(x-a)da,$$
-since for any $x$ this yields $\phi(x)$. Now what happens to the $L^2$-norm?
-$$N:=\|\phi\|_{L^2}^2=\int_{\mathbb{R}}\lbar{\phi(x)}\phi(x)dx=\int_{\mathbb{R}}\lbar{\int_{\mathbb{R}}\phi(a')\delta(x-a')da'}\cdot\pa{\int_{\mathbb{R}}\phi(a)\delta(x-a)da}dx.$$
-The complex conjugation can be brought inside the first integral. Now to a physicist integrals that don't swap are evil, and we surely don't want any evil around, so we assume we can reorder the three integrals the way we want, and get:
-$$N=\int_{\mathbb{R}}da\,\phi(a)\cdot\pa{\int_{\mathbb{R}}da'\,\lbar{\phi(a')}\cdot\pa{\int_{\mathbb{R}}dx\,\delta(x-a)\delta(x-a')}}.$$
-Suppose the formal statement holds. Then the innermost integral yields $\delta(a-a')$, and the second innermost one yields $\lbar{\phi(a)}$, which then combines with $\phi(a)$ outside it to form $|\phi(a)|^2$, which integrated gives the $L^2$ norm of $\phi$, squared. If the statement doesn't hold, it seems unreasonable to think we can still get the squared norm out of that mess. So the statement must hold, otherwise the integrals won't swap.<|endoftext|>
-TITLE: Is every CW complex homotopic to a Delta-Complex?
-QUESTION [7 upvotes]: Both answers to this question seem equally reasonable to me.
-If the answer is positive, I have no idea what the construction of such a space would look like....
-If the answer is negative, I assume one would try to subdivide the cells somehow... but I don't really know how that would go.
-I guess this came up because I was trying to think of an example of a CW-complex that wasn't homeomorphic to a Delta-complex... and figured the easiest way to make such a thing would be to make one that is not homotopic. This, of course, doesn't seem much easier to build, but at least easier to prove once you're done building.
-
-REPLY [2 votes]: This question was answered in a comment:
-
-By "homotopic" you mean "homotopy-equivalent" yes? CW complexes all have the homotopy type of simplicial complexes, so also of delta complexes. You can make the argument inductively -- argue that if you attach a cell to a simplicial complex, you get something with the homotopy-type of a simplicial complex. Have you read (for example) the proof of excision for singular homology? – Ryan Budney Dec 4 '10 at 0:22<|endoftext|>
-TITLE: Question about Brownian motion
-QUESTION [8 upvotes]: Let $\{B(t), t \in \mathbb{R} \}$ be a two-sided Brownian motion defined as
-$$B(t) = \begin{cases} B_1(t),\quad t >0 \\ 0, \quad t = 0 \\ B_2(-t), \quad t < 0 \end{cases}$$
-where $B_1$ and $B_2$ are independent standard Brownian motions on $\mathbb{R}^+$.
-Fix $x_0 > 0$ and let $x_k = B(x_{k-1})$ for $k=1,2,\dots$. What can we say about $\lim_{k\rightarrow \infty} x_k$?
-If it converges, then I imagine the limit would have to be $0$ a.s., since the limit $y$ would satisfy $B(y) = y$ a.s. But I don't know how to show this sequence converges. Any ideas? (this isn't homework, just a problem my friend and I thought up)
-
-REPLY [4 votes]: For any fixed $x_0$, there is positive probability that the sequence starting from $x_0$ fails to converge. To see why, take, say, $x_0 = 15$. There is positive probability that $50 < B_t < 60$ for $10 < t < 20$ and $10 < B_t < 20$ for $50 < t < 60$. On this event the sequence oscillates between the intervals $(10,20)$ and $(50,60)$.
-On the other hand, almost surely there exist infinitely many $t$ in any interval $(0,\epsilon)$ with $B_t = t$, so even when the sequence does converge the limit need not be 0. For a proof, note that $P(B_t > t) = P(N > \sqrt{t}) \ge 1/4$ for sufficiently small $t$ (here $N$ is a standard normal random variable). So for any sequence $t_n$ decreasing to $0$, we have $P(B_{t_n} > t_n \text{ i.o.}) \ge 1/4$; by the Blumenthal 0-1 law, $P(B_{t_n} > t_n \text{ i.o.}) =1$. However, we also have $P(B_{t_n} < t_n) \ge P(B_{t_n} < 0) = 1/2$ so by a similar argument $P(B_{t_n} < t_n \text{ i.o.}) =1$. The result follows by continuity.
-One could ask some other questions:
-
-For a fixed $x_0$, what is the probability that the sequence starting from $x_0$ converges? My guess is $0$ but I don't see a proof offhand.
-Consider the (random) set $C$ of $x$ such that the sequence starting from $x$ converges. What is the Lebesgue measure of $C$? My guess is that $m(C) = 0$ a.s. but again no proof.
-
-Edit: Another interesting fact is that almost surely, for every starting point $x_0$, the sequence $x_k$ is bounded, and hence has a convergent subsequence. Let $M_r = \sup_{t \in [-r,r]} |B_t|$. By the strong law of large numbers, $B_t/t \to 0$ a.s. as $t \to \pm \infty$, and it follows that $M_r / r \to 0$ a.s. as $r \to \infty$. In particular, a.s. there exists $r > x_0$ with $M_r < r$, and then $|x_k| \le M_r$ for all $k \ge 1$.<|endoftext|>
-TITLE: Involution centralizer of perfect group with quaternion Sylow 2-subgroups
-QUESTION [15 upvotes]: Exercise 7.11 of Kurzweil–Stellmacher asks me to:
-
-Prove that the centralizer of an involution is nonsolvable inside any perfect group with generalized quaternion Sylow 2-subgroups of order at least 16.
-
-I would like to oblige, but I'm having trouble getting much of a handle on the centralizer itself, rather than just its action.
- 
-The center $T=\langle t\rangle$ of a quaternion group is weakly closed, and so the normalizer $X=N_G(T)=C_G(t)$ of $T$ in that perfect group $G$ controls $G$-fusion within any Sylow 2-subgroup $P$ that happens to contain $T$.
-The only fusion possible on $P$ in a perfect group is the "full" fusion where all (both classes of) quaternion subgroups of order 8 are acted on by their full automorphism group (and otherwise no fusion beyond that induced by $P$ itself).
-So I know $X$ contains at least two copies of $\operatorname{SL}(2,3)$ and $X/Z$ contains at least two copies of $S_4$. Since $G$ is perfect, $P = [P,X] \leq [X,X]$. I think the copies of $\operatorname{SL}(2,3)$ pairwise intersect in a particular cyclic subgroup of $P$ of order 4.
-More precisely: If $Q$ is a quaternion subgroup of $P$ of order 8, then $Y=N_X(Q)$ contains $\operatorname{SL}(2,3)$ and $Y/Z$ contains $S_4$. $[Y,Q] \geq Q$. $P$ is generated by its $[T:Q]$ quaternion subgroups of order 8, so also by its $[Y,Q]$s. The intersection of any pair of distinct $Q$s is the order 4 cyclic subgroup of the (unique) maximal subgroup that happens to be cyclic.
-So it is a special configuration that has to fit lots of solvable groups inside, but I don't see any particular reason the result could not be solvable. The closest thing to nonsolvable I get is that the derived length of $N$ has to be at least 3.
-I get roughly the same results for a quaternion Sylow of order 8, except one only has the single $\operatorname{SL}(2,3)$. I have not found a perfect $G$ with solvable $X$ in this case, but of course I can if I only require the same fusion pattern, since $\operatorname{SL}(2,3)$ and $\operatorname{SL}(2,5)$ have the same 2-fusion.
-
-More: With a little reminder from reading the work of Bender's students, I can now do this using Brauer–Suzuki or Z*, but this does not seem much like a homework solution.
-The truth seems to be $X\cdot O(G)=G$, and every non-trivial quotient of a perfect group is perfect, so the homomorphic image $G/O(G)$ of $X$ is perfect. Since $G$ has non-trivial Sylow 2-subgroups, $G/O(G)$ is non-identity, so $X$ has a non-identity perfect quotient, so $X$ is nonsolvable.
-I'm not sure how to show $X\cdot O(G)=G$ without using tools significantly beyond chapter 7.
-
-Smaller question: I vaguely recall Suzuki proved that if $G$ is a perfect group with no non-identity normal subgroups of odd order and with quaternion Sylow 2-subgroups, then each coset of $C_G(t)$ contains exactly one conjugate of $t$, where $t$ is an involution. In other words, if the product of two involutions centralizes $t$, then that product is the identity.
-
-Can someone prove this with ideas from K–S up to chapter 7?
-
-This is hard for me to check since in reality such a group has exactly one involution $t$ and $G = C_G(t)$, but that comes later in my solution.
-
-REPLY [7 votes]: Let $P$ be generated by $x$ of order $2^n > 4$ and $y$ of order $4$, and let $z = x^{2^{n-2}}$ be a power of $x$ of order 4. Then the two classes of $Q_8$ in $P$ are represented by $\langle z,y \rangle$ and $\langle z, xy \rangle$.
-From what you say, you know that both of these subgroups are normalized in $X$ by subgroups isomorphic to ${\rm SL}(2,3)$. That means that $z$ is conjugate in $X$ to both $y$ and to $xy$. I don't think that's possible in a solvable group $X$.
-Assume that $X$ is solvable, so it has a chief series with elementary abelian factors. So $z \in M \setminus N$ for one of these factors $M/N$. If $z$ is conjugate to both $y$ and $xy$, then we have $y, xy \in M \setminus N$. But then $x \in M$ and so $x^2 \in N$ and hence $z \in N$, contradiction.
-ADDED: For your smaller question: if, for conjugates $u$ and $v$ of $t$, $x=uv$ is nontrivial and centralizes $t$, then $x$ has odd order and both $t$ and $u$ lie in the normalizer $N$ of $\langle x \rangle$ and hence are conjugate in $N$, but that is impossible because $t$ centralizes and $u$ inverts $x$.<|endoftext|>
-TITLE: Axiom of choice - to use or not to use
-QUESTION [31 upvotes]: I was wondering if there are examples of results in mathematics that were first proven using the axiom of choice and later someone found a proof of the result without using the axiom of choice.
-
-REPLY [3 votes]: If $G$ is a locally compact abelian group, then there exists a measure $\mu$ on the Borel sets of $G$, unique up to a positive scalar multiple, such that:
-
-$\mu$ is not identically zero and $\mu(K)<\infty$ for every compact $K$,
-For every measurable $A$ and $x\in G$: $\mu(A)=\mu(A+x)$.
-
-(When $G$ is compact, one can normalize so that $\mu(G)=1$.) We can slightly enlarge the Borel sets to include all the subsets of measure zero sets, and thus have a complete measure.
-This measure is called Haar measure. In the case of $\mathbb R$ this is exactly the Lebesgue measure (up to a scalar).
-Haar (the mathematician) proved its existence on separable compact groups in 1933, and von Neumann proved the uniqueness shortly after.
-The general case for locally compact Abelian groups was proved by Weil and relied on the axiom of choice. Henri Cartan later proved both existence and uniqueness of the general case without the axiom of choice.<|endoftext|>
-TITLE: Intuition about the Central Limit Theorem
-QUESTION [17 upvotes]: I'm studying statistics, and would like to better understand the Central Limit Theorem. The proof I found on Wikipedia requires some previous knowledge I do not currently possess.
-Is there a quick intuitive explanation you can give as to why this theorem is correct?
-
-REPLY [2 votes]: This answer gives an outline of how to use the Fourier Transform to prove that the $n$-fold convolution of any probability distribution with a finite variance, contracted by a factor of $\sqrt{n}$, converges weakly to the normal distribution.
-However, in his answer, Qiaochu Yuan mentions that one can use the Principle of Maximum Entropy to get a normal distribution. Below, I have endeavored to do just that using the Calculus of Variations.
-
-Applying the Principle of Maximum Entropy
-Suppose we want to maximize the entropy
-$$-\int_{\mathbb{R}}\log(f(x))f(x)\,\mathrm{d}x\tag1$$
-over all $f$ whose mean is $0$ and variance is $\sigma^2$, that is,
-$$\int_{\mathbb{R}}\left(1,x,x^2\right)f(x)\,\mathrm{d}x=\left(1,0,\sigma^2\right)\tag2$$
-That is, we want the variation of $(1)$ to vanish,
-$$\int_{\mathbb{R}}(1+\log(f(x)))\,\delta f(x)\,\mathrm{d}x=0\tag3$$
-for all variations of $f$, $\delta f(x)$, so that the variation of $(2)$ vanishes:
-$$\int_{\mathbb{R}}\left(1,x,x^2\right)\delta f(x)\,\mathrm{d}x=(0,0,0)\tag4$$
-$(3)$, $(4)$, and orthogonality require
-$$\log(f(x))=c_0+c_1x+c_2x^2\tag5$$
-To satisfy $(2)$, we need $c_0=-\frac12\log\left(2\pi\sigma^2\right)$, $c_1=0$, and $c_2=-\frac1{2\sigma^2}$. That is,
-$$\bbox[5px,border:2px solid #C0A000]{f(x)=\frac1{\sigma\sqrt{2\pi}}\,e^{-\frac{x^2}{2\sigma^2}}}\tag6$$<|endoftext|>
-TITLE: A smooth function's domain of being non-analytic
-QUESTION [26 upvotes]: I am wondering how much a smooth function may be non-analytic, because in proofs, whilst there are non-analytic smooth functions, it would suffice if a smooth function were analytic on only a "small set". More exactly:
-Let $U \subseteq \mathbb R^n$ be open, $C^\infty = C^\infty(U,\mathbb R)$.
A smooth function in $C^\infty$ is analytic at $a \in U$ iff there exists $\epsilon > 0$ s.t. the function is equal to its own Taylor series in $B_\epsilon(a)$. There exist smooth functions that are non-analytic, i.e. there exist $f \in C^\infty, b \in U, \epsilon > 0$ s.t. the function is not its Taylor series at $b$ in $B_\epsilon (b)$.
-Let $A$ be the union of all $\epsilon$-balls in $U$ where $f$ is analytic. By definition, $A$ is open. Its complement $C = A^c$ is the closed set of points where $f$ is non-analytic.
-Does $C$ have an interior?
-
-REPLY [36 votes]: Yes, that can happen. The canonical example (as far as I know) is the Fabius function which is smooth and nowhere analytic.
-
-I never really liked this answer because it's hard, if not outright impossible, to find references online on how to prove the nonanalyticity of $Fb$. So here is a short exposition on a different example adapted from the discussion archived in this text file mirrored on archive.org.
-
-Introduction
-Students often see
-$$ f(x) = \begin{cases} \exp(-\tfrac{1}{x}) & \text{for } x > 0 \\ 0 & \text{for } x \leq 0 \end{cases}$$
-as an example of a smooth function that's not analytic at $0$, but this as well as the other usual examples makes it very easy to intuit that smooth functions are "mostly analytic", i.e. everywhere analytic except possibly at some isolated points. And given that this is the only example that one typically sees, this is in fact a reasonable thing to believe. But nonetheless there are plenty of examples out there, for example the following:
-Example
-Define $F: \mathbb{R} \to \mathbb{C}$ by
-$$ F(x) := \sum_{n=0}^\infty \frac{\exp(i2^nx)}{n!}. $$
-Theorem: $F$ and $\Re F$, the real part of $F$, are smooth nowhere analytic functions.
-Proof: Computing the derivatives of $F$ we get
-$$ F^{(k)}(x) = \sum_{n=0}^\infty \frac{\exp(i2^nx)(i2^n)^k}{n!}, $$
-which is uniformly convergent everywhere, so it's continuous and there are no issues with moving the differentiation in under the summation. In other words $F$ and $\Re F$ are smooth. Furthermore we have that
-$$ F^{(k)}(0) = \sum_{n=0}^\infty \frac{(i2^n)^k}{n!} = i^k \exp(2^k). $$
-The Cauchy-Hadamard formula for the radius of convergence, $r$, of the Taylor series for $F$ gives at $x=0$, with the usual conventions about infinities, that
-$$ \frac{1}{r} = \limsup_{k \to \infty} \left\lvert \frac{F^{(k)}(0)}{k!}\right\rvert^{1/k} = \limsup_{k \to \infty} \left(\frac{\exp(2^k)}{k!} \right)^{1/k} = \infty, $$
-and so $r = 0$. The same will be true for $\Re F$, as the even-order derivatives of $\Re F$ at $0$ have the same magnitude as those of $F$. $F$ is $2\pi$-periodic, so the same is true for every $x$ which is an integer multiple of $2\pi$. Moreover we see that if we throw away the first $p$ terms, we get a function with period $\omega = \frac{2\pi}{2^p}$ and with points of non-analyticity at all integer multiples of $\omega$. Hence $F$ and $\Re F$ must also be nonanalytic at these points and we conclude that the set of points where the two functions are nonanalytic is dense in $\mathbb{R}$.
-Finally observe that if a function is analytic at a point it must be analytic on a neighbourhood of that point, thus there are no possible open sets where $F$ and $\Re F$ can be analytic, i.e. they are smooth nowhere analytic functions. $\;\square$
-Good Questions to Ponder
-Suppose $\Omega \subseteq \mathbb{R}^n$ is open.
-Is there an $f \in C^\infty(\mathbb{R}^n)$ such that $\Omega$ is its locus of analyticity?
-Are the functions that are analytic at some point of the first category in $C^\infty(\Omega)$?<|endoftext|>
-TITLE: Parabolic subalgebra
-QUESTION [6 upvotes]: Let $R$ be a root system and $\Delta$ be a simple system of roots of a Lie algebra $\mathfrak g$, $\Delta'\subset \Delta$ and $R(\Delta')=R\cap \mathbb Z(\Delta')$. Define
-$$p(\Delta')=\mathfrak h \bigoplus_{\alpha \in R(\Delta')} \mathfrak g_{\alpha} \bigoplus_{\alpha \in R^+ \setminus R^+(\Delta')}\mathfrak g_{\alpha}$$
-the parabolic subalgebra associated to $\Delta'$.
-If $\alpha$ is a simple root in $R^+(\Delta)\setminus R^+ (\Delta')$, is it true that $\beta(h_\alpha)=0$ for all $\beta$ in $R(\Delta')$?
-
-REPLY [5 votes]: The answer is NO.
-Take $\mathfrak g=\mathfrak{sl}(3)$ and $\Delta=\{a_1,a_2\}$. The unique choice for $\Delta'$ is $\{a_2\}$.
-The result is clearly false in this scenario.<|endoftext|>
-TITLE: Parallelograms & Axis of Symmetry
-QUESTION [6 upvotes]: Consider a parallelogram which is neither a rhombus nor a rectangle. It is well-known that such shapes do not have an axis of symmetry.
-
-Is there a simple proof for this?
-
-I prefer a proof that I can give to a kid (~12 year old), but more involved proofs will do as well.
-
-REPLY [3 votes]: If a symmetry pairs two vertices, then the angles at those vertices must be congruent. A symmetry that pairs adjacent vertices of a parallelogram requires that the figure be a rectangle: adjacent angles are supplementary; if they're also congruent, then they must be right angles.
-Having ruled out rectangles, your proposed symmetry must pair each vertex (a) with its non-adjacent counterpart, or (b) with itself. In (a), the axis would be the perpendicular bisector of the diagonal between the paired vertices; in (b) the axis would contain the vertex. If both (a) and (b) occur, then we have a vertex (from (b)) on the perpendicular bisector of the diagonal joining two non-adjacent others (from (a)); this implies that we have two congruent adjacent edges, which in turn implies that the figure is a rhombus.
-To avoid both rectangles and rhombi, we must have that the axis provides only-(a) symmetry, or only-(b) symmetry. An axis of only-(a) symmetry must be the perpendicular bisector of both diagonals; the diagonals (each being perpendicular to that axis and each passing through their common midpoint, which lies on the axis) must therefore lie on the same line: the vertices of the figure are collinear. An axis of only-(b) symmetry contains all the vertices, making them collinear (but in a different arrangement than required by (a)).
-If you disallow "degenerate" parallelograms with all vertices collinear, then you have your impossibility argument. However, I prefer to recognize degenerate figures as legitimate, and I try to avoid stigmatizing them whenever possible. (After all, they come in handy as intermediate steps in transforming one figure smoothly into another. Besides, it's not like the "Parallelogram Law of Vector Addition" becomes invalid when the vectors involved are linearly dependent.) So, I would go so far as to observe that an axis of only-(a) symmetry implies the existence of a (separate) axis of only-(b) symmetry, and vice versa; that is, we've determined a third class of parallelograms --along with rectangles and rhombi-- that have two axes of symmetry.
-(Consideration of parallelograms whose vertices coincide is left to the reader.)
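-Incidentally, the case analysis above is easy to test numerically: for a parallelogram's four vertices, any axis of symmetry must either pass through two of the vertices or be the perpendicular bisector of some pair of them, so one can enumerate those candidate lines and test each reflection. Here is a minimal sketch, assuming NumPy (function and variable names are just illustrative):
-import numpy as np
-
-def reflect(pts, p, d):
-    # Reflect each row of pts across the line through p with direction d.
-    d = d / np.linalg.norm(d)
-    v = pts - p
-    return p + 2 * np.outer(v @ d, d) - v
-
-def symmetry_axes(verts, tol=1e-6):
-    # Candidate axes: lines through vertex pairs and perpendicular
-    # bisectors of vertex pairs; keep those mapping the set to itself.
-    n, found, cands = len(verts), [], []
-    for i in range(n):
-        for j in range(i + 1, n):
-            a, b = verts[i], verts[j]
-            cands.append((a, b - a))                    # line through a, b
-            m, d = (a + b) / 2, b - a
-            cands.append((m, np.array([-d[1], d[0]])))  # perp. bisector
-    for p, d in cands:
-        r = reflect(verts, p, d)
-        used, ok = set(), True                # match vertices one to one
-        for q in r:
-            hits = [k for k in range(n)
-                    if k not in used and np.linalg.norm(q - verts[k]) < tol]
-            if not hits:
-                ok = False
-                break
-            used.add(hits[0])
-        if ok:
-            found.append((p, d))
-    return found
-
-# Generic parallelogram (neither rectangle nor rhombus): expect no axes.
-P = np.array([[0., 0.], [3., 0.], [4., 2.], [1., 2.]])
-print(len(symmetry_axes(P)))   # 0
-# Rectangle: expect axes (candidates may repeat, so just count > 0).
-R = np.array([[0., 0.], [3., 0.], [3., 2.], [0., 2.]])
-print(len(symmetry_axes(R)))   # > 0
-Running this on a generic parallelogram finds no axis, while a rectangle or rhombus yields its two axes (each reported once per generating pair), in line with the argument above.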
-Note: An only-(a) axial symmetry can also be realized with an axis perpendicular to the plane of the parallelogram (which need not be degenerate), but you've ruled this out.<|endoftext|>
-TITLE: What field is $\mathbb{F}_p(\zeta_{p-1}^{1/n})$
-QUESTION [5 upvotes]: What order is the field $\mathbb{F}_p(\zeta_{p-1}^{1/n})$?
-$\mathbb{F}_p^{\times}$ has size $p-1$ and is cyclic (with a generator we shall call $\zeta_{p-1}$). Naively, it seems that by taking the $n^{th}$ root of $\zeta_{p-1}$ we get a group of units of size $n(p-1)$, i.e. the cyclic group $C_{n(p-1)}$. This, of course, doesn't make sense. What should be true is that $C_{n(p-1)}$ is some subgroup of the group of units of the $\mathbb{F}_{p^r}$ we get.
-Question
-What is the order of the field $\mathbb{F}_p[x]/(m_{\zeta_{p-1}^{1/n}})$ where $m_{\zeta_{p-1}^{1/n}}$ is the minimal polynomial of $\zeta_{p-1}^{1/n}$ (which, unless I'm mistaken, is the same as the splitting field of $x^n-\zeta_{p-1}$)? Does it depend on whether $p|n$ or not?
-
-REPLY [2 votes]: Every nonzero element of a finite field is a root of unity. Roots of unity behave very nicely, since the other roots of unity of the same order are just powers of the first.
-For instance $i$ is a primitive 4th root of unity, and the other one, $-i$, is just a power: $i, -1, -i, 1$.
-The roots of unity form these nice cyclic subgroups of the group of units of a field.
-If you want one primitive $k$th root of unity, you get all of them for free, so we are just interested in the smallest finite field of characteristic $p$ with a primitive $k$th root of unity.
-Now you get into trouble if $p$ divides $k$: a finite field of characteristic $p$ has size $p^m$, so its group of units has order $p^m-1$, and so $k$ has to divide $p^m-1$. Such numbers are coprime to $p$.
-In your question, you'll want to assume $n$ is coprime to $p$, otherwise you won't get very good roots. For instance, when $p=2$, the 8th roots of unity are exactly $\{1\}$; none of them are primitive.
-Groups of units of finite fields are very well behaved: they are all cyclic. So to find a primitive $k$th root of unity, you just want to find the smallest $m$ such that $k$ divides $p^m-1$:
-
-The field obtained by adjoining a primitive $k=n(p-1)$st root of unity is the field of size $p^m$, where $m$ is the order of $p$ in the group of units of the ring $\mathbb{Z}/k\mathbb{Z}$.
-
-Note that $k$ has to be coprime to $p$ in order for a primitive $k$th root of unity to exist. Just divide $k$ by $p$ until it is coprime if you want to adjoin the "closest to primitive" root.<|endoftext|>
-TITLE: Tensors as matrices vs. Tensors as multi-linear maps
-QUESTION [10 upvotes]: So I read the answers in this question, and don't feel that much closer to an answer about how tensors as multi-linear maps and tensors as "multi-dimensional" matrices are truly related. For instance, it seems that a (1,1)-tensor should be able to be realized as a 2 by 2 matrix, as something that looks like $(x_1,x_2)\otimes(y_1,y_2)$ and as a multi-linear map from $\mathbb{R}^2\times(\mathbb{R}^2)^\ast$ to $\mathbb{R}$. Is there a formal isomorphism between these three ideas?
-My motivation for asking this is that I'm trying to work with deRham cohomology without just doing the mechanics of it. I'd like to actually know what's going on, and so I'm trying to see
-1) How is $\mathrm{Alt}^p(\mathbb{R}^n)$ actually the same as $\wedge^p(\mathbb{R}^n)$ where the former is the space of alternating p-linear maps and the latter is the algebraically defined quotient of $\mathbb{R}^n\otimes\ldots\otimes\mathbb{R}^n$, the tensor product of $p$ copies of $\mathbb{R}^n$?
-2) How is $\Omega^p(\mathbb{R}^n)$ actually the dual of $p$-vector fields (if in fact it is...)?
-Thanks!
-
-REPLY [5 votes]: This interpretation makes judicious use of duality. Recall that $Hom_R(A\otimes B, C)\cong Hom_R(A,Hom_R(B,C))$ (the proof of this fact is extremely simple; please give it a try). With finite dimensional vector spaces, we have a canonical isomorphism $V\cong V^{**}=Hom_R(Hom_R(V,R),R)$. Then to give a map $A\to B$, this is the same as giving a map $A\to B^{**}$ by composition, and by the earlier note about the hom-tensor adjunction, this is identical to giving a map $A\otimes B^*\to R$.
-With regards to your two questions, they fall out from these elementary observations. I urge you to try to work them out.<|endoftext|>
-TITLE: Some questions about the gamma function
-QUESTION [6 upvotes]: Show that $\Gamma(y) = \int_0^{\infty}{e^{-x}x^{y-1}\,dx}$ is finite for $y>0$ both as an improper Riemann integral and as a Lebesgue integral.
-Show $\Gamma'(y) = \int_0^{\infty}{e^{-x}x^{y-1}\ln{x}\,dx}$ for $y>0$.
-
-For one: I've tried simply integrating it as an improper Riemann integral, but you always end up with another integral of the "same type" (which is how you eventually show $\Gamma(y+1)=y\Gamma(y)$ ). How do I get around this? As for the Lebesgue integral, I think it'd be easiest to compare the integrand to a larger function whose integral converges, but I haven't come up with a good candidate.
-For two: Fix $y_0>0$. Write $$\Gamma'(y_0) = \lim_{y\to y_0}{\int_0^{\infty}{\frac{e^{-x}x^{y-1}-e^{-x}x^{y_0-1}}{y-y_0}\,dx}}\,.$$ By the MVT, there exists $\eta$ between $y$ and $y_0$ such that the above limit is equal to $$\lim_{y\to y_0}{\int_0^{\infty}{e^{-x}x^{\eta-1}\ln{x}\,dx}}\,.$$ But now I'm not sure what to do. This is similar to a previous question I posted; for that problem we knew the derivative of the original integrand was bounded, so we applied the bounded convergence theorem. Would it be enough to prove that the derivative of my integrand is bounded, and apply BCT?
-
-REPLY [5 votes]: A bit late to the game, but here is an answer:
-1: To show it is finite: Write $e^{-x}=e^{-x/2}\cdot e^{-x/2}$. For every $y>0$ there exists $N$ such that $e^{-x/2}x^{y-1}<1$ when $x>N$, so that $$\int_N^\infty e^{-x} x^{y-1}dx\leq \int_N^\infty e^{-x/2}dx <\infty.$$ For $x$ between $0$ and $1$, compare to $\int x^{y-1}dx$, which converges whenever $y-1>-1$, so for all $y>0$. Lastly, bounding $\int_1^N e^{-x}x^{y-1}dx$ I leave to you.
-2: For this part, what you have done so far is good. All we need to do is switch the last integral with the limit. To do this, notice that in some neighborhood of radius $\delta$ around $y_0$ we can bound $$\int_0^\infty e^{-x} x^{y-1}\ln x\,dx$$ by similar methods as in 1. Then, applying the Dominated Convergence Theorem to that neighborhood, we can switch the limit and the integral, solving the problem.
-Remark: For that last part, notice we cannot bound $$\int_0^\infty e^{-x} x^{y-1}\ln x\,dx$$ uniformly for all $y\in(0,\infty)$ since the function blows up at $0$ and at $\infty$. But fortunately, we can bound it uniformly for all $y$ in any compact subset of $(0,\infty)$, allowing us to use the DCT.<|endoftext|>
-TITLE: Finitely axiomatizable theories
-QUESTION [10 upvotes]: Let $T_1$ and $T_2$ be two theories having the same set of symbols.
-Assume that any interpretation of $T_1$ is a model of $T_1$ if and only if it is not a model of $T_2$. Then:
-$T_1$ and $T_2$ are finitely axiomatizable.
-(i.e.
there are finite sets of sentences $A_1$ and $A_2$ such that, for any sentence $S$: -$T_1$ proves $S$ if and only if $A_1$ proves $S$, and $T_2$ proves $S$ if and only if $A_2$ proves $S$). -/The proof will be by contradiction; assume $T_1$ or $T_2$ are not finitely axiomatizable, then .....?/ -Any one have any idea of how to prove this argument? - -REPLY [12 votes]: The union $T_1\cup T_2$ has no models, and so by the Compactness theorem there is a finite subtheory with no models. This amounts to finite $A_1\subset T_1$ and $A_2\subset T_2$ such that $A_1\cup A_2$ has no models. Any model $M$ of $A_1$ is therefore not a model of $A_2$ and so $M$ is not a model of $T_2$ and hence by your assumption it is a model of $T_1$. So $A_1\vdash T_1$ and similarly $A_2\vdash T_2$, so both theories are finitely axiomatizable.<|endoftext|> -TITLE: Calculate the surface area of a solid of revolution -QUESTION [5 upvotes]: I have to calculate the surface area of the solid of revolution which is produced from rotating $f: (-1,1) \rightarrow \mathbb{R}$, $f(x) = 1-x^2$ about the $x$-axis. I do know there is a formula: -$$S=2 \pi \int_{a}^b f(x) \sqrt{1+f'(x)^2}\, \mathrm dx$$ -Which will work very well. However, I am not very comfortable with the integral -$$\int_{-1}^1 (1-x^2)\sqrt{1+4x^2}\, \mathrm dx$$ -which I would have to calculate in order to get to the surface area (I tried to substitute $x=\frac{1}{2} \sinh(u)$, but it did not work out too well). Thus, I had the idea to apply Pappus' centroid theorem. I first found the centroid of the area between the parabola and the x-axis to be at $y=\frac{2}{5}$, hence the surface area of the solid of revolution would be: -$$S = 2 \pi \frac{2}{5} \int_{-1}^1 \sqrt{1+4x^2}\, \mathrm dx$$ -But this leads me to a different result than I should get (I calculated the value of the first integral with the help of wolframalpha, it's about ~11...). -What did I do wrong? My best guess is that I misunderstood Pappus' centroid theorem, but what's the mistake? How can I fix it? - -REPLY [5 votes]: You did misinterpret Pappus' Theorem. You used the geometric centroid of the region between $1-x^2$ and the $x$-axis, whereas Pappus' Theorem wants you to use the geometric centroid of the curve $1-x^2$. The geometric centroid of the latter is not $\frac{2}{5}$ but (by definition) -$$\frac{\int_{-1}^1 (1-x^2) \sqrt{1+4x^2} dx}{\int_{-1}^1 \sqrt{1+4x^2} dx} \approx 0.59002.$$ -Unfortunately, multiplying this by $2\pi$ times the arc length just gives you the integral you started with. So it doesn't appear that Pappus' Theorem is an easier route to take. You could also try switching to an integral in $dy$, but I doubt that will be any better. I would try Joe's suggestion on your first integral. -For more on finding geometric centroids of curves, see this. - -REPLY [4 votes]: If you want to try the original integral again, you can use the trig substitution x = 0.5 tan(t). This will give you an integral involving tan and sec which can be worked out.<|endoftext|> -TITLE: Ranges and the Fundamental Theorem of Calculus 1 -QUESTION [10 upvotes]: I'm going over a chapter by chapter review for my calculus final and discovered this problem: -$$y=\int_{\sqrt{x}}^{x^3}\sqrt{t}\sin{t}\;\mathrm dt$$ -They split it up so that it became: -$$-\int_1^{\sqrt{x}}\sqrt{t}\sin{t}\;\mathrm dt + \int_1^{x^3}\sqrt{t}\sin{t}\;\mathrm dt $$ -Why 1? How was that determined? - -REPLY [13 votes]: It doesn't have to be $1$; all it needs to be is in the domain of the function. 
-What is going on is that you have the Fundamental Theorem of Calculus, which tells you that if $f(x)$ is continuous on an interval containing $a$, then the function $F(x)$ defined on that interval by:
-$$F(x) = \int_a^x f(t)\,dt$$
-is differentiable, and in fact $F'(x) = f(x)$ for all $x$ in the interval. But this requires (i) the lower limit to be constant; and (ii) the upper limit to be the variable with respect to which you are taking the derivative.
-So the first question is: what does one do if you have the variable in the lower limit instead of the upper limit? Well, that only requires you to use the property of the integral that says
-$$\int_a^b f(t)\,dt = -\int_b^a f(t)\,dt,$$
-and the fact that $(-G(x))' = -G'(x)$ for any $G$. So if
-$$F_2(x) = \int_x^a f(t)\,dt$$
-then
-$$F_2'(x) = \frac{d}{dx}\int_x^a f(t)\,dt = \frac{d}{dx}\left(-\int_a^x f(t)\,dt\right) = -\frac{d}{dx}\int_a^x f(t)\,dt = -f(x),$$
-with the last equality holding by the FTC.
-Next, what if the upper limit is not $x$, but a function of $x$, say
-$$F_3(x) = \int_a^{g(x)} f(t)\,dt\ ?$$
-Then we use the Chain Rule:
-$$\frac{dF_3}{dx} = \frac{dF_3}{dg}\frac{dg}{dx} = \left(\frac{d}{dg}\int_a^{g(x)}f(t)\,dt\right)\frac{dg}{dx} = f(g(x))g'(x),$$
-with the last equality again by the FTC.
-What if the lower limit is the function and the upper limit the constant? We combine the two "tricks" above to get the derivative.
-And finally, what if both upper and lower limit are functions? Then we use the property of the integrals mentioned by Huy:
-$$\int_a^b f(t)\,dt = \int_a^cf(t)\,dt + \int_c^b f(t)\,dt$$
-for any $c$ such that $f(x)$ is defined and continuous on an interval containing $a$, $c$, and $b$. In the case at hand, they pick $1$ simply because it's an easy point; you can use any point on $[0,\infty)$ (cannot be a negative point because you have $\sqrt{t}$ in the integrand). Pick your favorite $a$ on $[0,\infty)$, and you have
-$$F(x) = \int_{\sqrt{x}}^{x^3}f(t)\,dt = \int_{\sqrt{x}}^a f(t)\,dt + \int_a^{x^3}f(t)\,dt = -\int_a^{\sqrt{x}}f(t)\,dt + \int_a^{x^3}f(t)\,dt$$
-and each of the integrals on the right is an integral we know how to do using the FTC. So if we let the first integral be some $G(x)$ and the second be $H(x)$, then $F(x)=G(x)+H(x)$, so $F'(x) = G'(x)+H'(x)$, and we can find $G'(x)$ and $H'(x)$ using the methods outlined above.
-They did it with $a=1$, but you could do it with $a=\pi$, or $a=4$, or any nonnegative $a$.
-
-REPLY [5 votes]: You can take any lower limit; it doesn't need to be 1.
-$\int_b^c f(x) dx = \int_a^c f(x) dx - \int_a^b f(x) dx$
-Just think of it as the area under the curve.<|endoftext|>
-TITLE: $e^{e^{e^{79}}}$ and ultrafinitism
-QUESTION [40 upvotes]: I was reading the following article on Ultrafinitism, and it mentions that one of the reasons ultrafinitists believe that N is not infinite is because the floor of $e^{e^{e^{79}}}$ is not computable. I was wondering if that's the case because of technological limitations, or whether there is another reason we cannot find a floor of this number.
-
-REPLY [48 votes]: In the formal meaning of "computable" the floor of that number is indeed computable. This is to say that a patient immortal human with access to unlimited paper and pencil could, in principle, work out the answer. (Here I assume, for technical reasons, that the number in question is not an integer - I assume someone who knows enough number theory will be able to cite a result that implies this.)
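-As a sanity check on "computable in principle": the floor of the much smaller tower $e^{e^e}$ can be computed in a couple of lines. Below is a minimal sketch, assuming the Python mpmath arbitrary-precision library (the variable name is just illustrative):
-import mpmath
-
-mpmath.mp.dps = 50                     # work with 50 decimal digits
-x = mpmath.exp(mpmath.exp(mpmath.e))   # e^(e^e), roughly 3.8 million
-print(mpmath.floor(x))                 # its integer part prints on one line
-For $e^{e^{e^{79}}}$ the same two lines fail for physical rather than logical reasons: the result has about $e^{e^{79}}\log_{10}e$ decimal digits, far more than could ever be written down in the observable universe.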
-The article linked makes the weaker claim that the value has not yet been calculated, which seems likely to me. The issue they are concerned about is that humans are not immortal and that our supply of paper is very limited. If the number of decimal digits in the value is too large, it would be impossible to actually represent it in any physical way within our universe. -In general, I think it is more accurate to say that ultrafinitists don't accept that the set of all natural numbers is a coherent entity - not that they think it is finite. However, as the article you linked alludes, it is very difficult to find a coherent but non-arbitrary way to say what natural numbers are without accepting that there are an infinite number of them. -Addendum Here is why I am worried whether $e^{e^{e^{79}}}$ is an integer. It's certainly correct that no matter what, the floor of that number is an integer and is therefore computable. That part of my argument is fine. -On the other hand, if $e^{e^{e^{79}}}$ is not an integer, then I can tell you a specific algorithm to use to compute it. Namely, compute better and better upper and lower bounds until they fall strictly between two consecutive integers (which they must, since their limit is not an integer) and then pick the smaller of those two integers. -If $e^{e^{e^{79}}}$ is an integer, then that algorithm won't work, because it will never stop. But if we knew that $e^{e^{e^{79}}}$ was an integer then we could take better and better upper and lower approximations until they straddle a single integer, and then pick that. -So the reason that I am interested whether the number is an integer is that, beyond merely knowing that the floor is an integer, I'd like to know which algorithm could be used to compute it. -In any case, I don't think that the point of the example was to pick a number that is not known to be integer or known to not be an integer. The point of the example should be to pick a number which is simply too large to represent physically. I was hoping that someone would have a quick answer that confirms $e^{e^{e^{79}}}$ is not an integer, so I could edit my response with that info. But the non-integer property seems more difficult than I thought.<|endoftext|> -TITLE: How to show $e^{e^{e^{79}}}$ is not an integer -QUESTION [199 upvotes]: In this question, I needed to assume in my answer that $e^{e^{e^{79}}}$ is not an integer. Is there some standard result in number theory that applies to situations like this? -After several years, it appears this is an open problem. As a non-number theorist, I had assumed there would be known results that would answer the question. I was aware of the difficulty in proving various constants to be transcendental -- such as $e + \pi$, which is not known to be transcendental at present. -However, I was looking at a question that seems simpler, naively: whether a number is an integer, rather than whether it is transcendental. It seems that what appeared to be possibly simpler is actually not, with current techniques. -The main motivation for asking about this particular number is that it is very large. It is certainly possible to find a pair of very large numbers, at least one of which is transcendental. But the current lack of knowledge about this particular number is even an integer shows just how much progress remains to be made, in my opinion. Any answers that describe techniques that would suffice to solve the problem (perhaps with other, unproven assumptions) would be very welcome. 
- 
-REPLY [45 votes]: The paper Chuangxun Cheng, Brian Dietel, Mathilde Herblot, Jingjing Huang, Holly Krieger, Diego Marques, Jonathan Mason, Martin Mereb, and S. Robert Wilson, Some consequences of Schanuel's conjecture, Journal of Number Theory 129 (2009) 1464–1467, shows that $e,e^e,e^{e^e},\dots$ is an algebraically independent set, on the assumption of Schanuel's Conjecture. Maybe a close reading of that paper will suggest a way of applying the result to the 79-question.<|endoftext|>
-TITLE: deadline in math jobs application
-QUESTION [6 upvotes]: In some math job advertisements, it says "the deadline is...", but some schools review applications before that. Does that mean we should submit applications as early as possible to be in a better position? (Because I normally apply when the date is close to the deadline.) (I just want to make sure I understand things correctly.) Thanks!
-
-REPLY [3 votes]: My only experience is applying for research postdocs last year, and a lot of schools started looking at applications shortly after Thanksgiving. Some of my friends had offers the first week of December. So I would say apply by mid-November to be safe. This might only apply to research postdocs though.
-I don't like the blanket advice "apply as early as possible" because for many people, including me, the late summer and early fall of application season is a very productive time, and you might get a lot of research done. If you're hot on the trail of something good it's better to finish it off than apply early, especially if you're a grad student with not so many results. Of course if you're at a natural stopping point in your research (e.g. because you're essentially stuck) then it's better not to procrastinate and get the applications in early. Professors have access to your file on mathjobs as soon as you apply, and maybe someone in the department where you're applying is bored and browsing...<|endoftext|>
-TITLE: A limit related to the Gibbs phenomenon
-QUESTION [5 upvotes]: Let $$D_N(t)=\frac{\sin [(N+(1/2))t]}{\sin (t/2)}$$ be the Dirichlet kernel. Let $x(N)$ be the number in $0<|endoftext|>
-TITLE: subgroups of finitely generated groups with a finite index
-QUESTION [35 upvotes]: Let $G$ be a finitely generated group and $H$ a subgroup of $G$. If the index of $H$ in $G$ is finite, show that $H$ is also finitely generated.
-
-REPLY [6 votes]: If $G$ is finitely generated by $A\subset G$, equipping $G$ with the word metric, we can look at $G$ as a proper metric space $(G,d_{A})$. Then $H$ acts continuously (by isometries) on $(G,d_{A})$. Invoking the Švarc–Milnor lemma, we get that $H$ is f.g. and quasi-isometric to $G$ (both $G$ and $H$ look alike when viewed from a far distance).<|endoftext|>
-TITLE: A question regarding non-archimedean absolute values
-QUESTION [7 upvotes]: I don't understand an equation I am reading in my notes:
-Suppose $|\cdot|$ is a non-archimedean absolute value on a field $K$ complete with respect to this absolute value. Suppose $|a_0|>|a_i|$ for all $i>0$. Then $|a_0+a_1+a_2+\dots|=|a_0|$. I understand that $|a_0+a_1+a_2+\dots|\leq \max\{|a_i|:i\geq 0\}=|a_0|$ by the ultrametric inequality. I am not sure how to get the reverse inequality.
- 
-REPLY [2 votes]: $\newcommand{\set}[1]{\left\{#1\right\}}
-\newcommand{\jleq}[1]{\justifyed{#1}{\leq}}
-\newcommand{\jeqtxt}[1]{\jeq{\text{#1}}}
-\newcommand{\jl}[1]{\justifyed{#1}{<}}
-\newcommand{\justifyed}[2]{\stackrel{#1}{#2}}
-\newcommand{\jeq}[1]{\justifyed{#1}{=}}
-\newcommand{\jleqref}[1]{\jleqtxt{\ref{#1}}}
-\newcommand{\jeqref}[1]{\jeqtxt{\ref{#1}}}
-\newcommand{\jleqtxt}[1]{\jleq{\text{#1}}}
-\newcommand{\df}{\mathrel{\mathop:}=}
-\newcommand{\norm}[1]{\left|#1\right|}$
-Isosceles triangle principle: Let $\norm{\cdot}$ be a non-Archimedean norm on a field $\mathbb{F}$. Then $\forall\ x,y\in\mathbb{F}$ with $\norm{x}<\norm{y}$ it holds that $\norm{x+y}=\norm{y}$.
-Proof: Let $x,y\in\mathbb{F}$ with $\norm{x}<\norm{y}$. We now calculate
-\begin{align}\norm{x-y}\jleqtxt{non-Arch.}\max\set{\norm{x},\norm{y}}=\norm{y}=\norm{x-(x-y)}\jleqtxt{non-Arch.}\max\set{\norm{x},\norm{x-y}}.\end{align}
-From this equation we get that $\norm{y}\leq\max\set{\norm{x},\norm{x-y}}$, but since we assumed $\norm{x}<\norm{y}$, we have that $\norm{y}\nleq\norm{x}$, so actually $\max\set{\norm{x},\norm{x-y}}=\norm{x-y}$. Together with the equation above, we get that $\norm{x-y}=\norm{y}$; since $\norm{-x}=\norm{x}$, replacing $x$ by $-x$ gives $\norm{x+y}=\norm{y}$ as claimed.
-Remark: The name "Isosceles Triangle Principle" comes from the geometric interpretation: if $x$ and $y$ are two points of a triangle which has the origin $0$ as its third point, the sides have the lengths $\norm{x}$, $\norm{y}$ and $\norm{x-y}$. The principle then says that the third side is as long as the longer one of the others, so every "triangle" is isosceles with respect to a non-Archimedean norm.
-Lemma: Let $\norm{\cdot}: \mathbb{F}\rightarrow\mathbb{R}_{\geq 0}$ be a non-Archimedean norm and let $\forall\ i\in \set{0, \ldots, n}: a_i\in\mathbb{F}$. If $\norm{a_0}>\norm{a_i}$ for all $0 < i \leq n$, it holds that $\norm{\sum_{i=0}^n a_i}=\norm{a_0}$.
-Proof: We prove this statement by induction. For $n=0$, the statement is trivial. Now assume that the statement is proven for $n$ and let $a_{n+1}\in\mathbb{F}$ with $\norm{a_0}>\norm{a_{n+1}}$. Now set $x\df a_{n+1}$ and $y\df\sum_{i=0}^n a_i$ and note that:
-$$ \norm{y}\jeqtxt{I.H.}\norm{a_0}>\norm{a_{n+1}}=\norm{x} $$
-and we can apply the Isosceles Triangle Principle (ITP):
-$$ \norm{\sum_{i=0}^{n+1} a_i}=\norm{\sum_{i=0}^{n} a_i + a_{n+1}} = \norm{x+y}\jeqtxt{ITP}\norm{y}=\norm{\sum_{i=0}^n a_i}\jeqtxt{I.H.}\norm{a_0}.$$<|endoftext|>
-TITLE: What is a special function?
-QUESTION [18 upvotes]: When I read some issues here I see from time to time incorrect references to the field of special functions; it might e.g. be a discussion around Dirac's $\delta$-function which is tagged (special-functions), or a discussion around some function reminding one of the Weierstrass nowhere differentiable continuous function. These examples make me think: what would classify a special function?
-A vague, bad definition could be "A function is a special function if it has some resemblance to some hypergeometric function" or "A function is a special function if it fits into the Bateman manuscript project."
-To me the Gamma function and the Zeta function are definitively special functions.
-Also, I have worked on Legendre functions $P_\lambda$ and $Q_\lambda$ of the first and second kind, which I would call special functions, but not individually however (except perhaps $P_{-1/2}$).
-I would not say that elementary functions (such as trigonometric functions and the exponential function) are special functions - but I am not totally convinced about this...
-I do not agree with Wikipedia; it says:
-"Special functions are particular mathematical functions which have more or less established names and notations due to their importance in mathematical analysis, functional analysis, physics, or other applications.
-There is no general formal definition, but the list of mathematical functions contains functions which are commonly accepted as special. In particular, elementary functions are also considered as special functions."
-Also, looking at the Wikipedia list (linked above), the indicator function, step functions, the absolute value function and the sign are special functions -- this sounds very wrong to me.
-So what is a special function and what should be under the (special-function) tag?
-
-REPLY [2 votes]: I (a physicist) associate the term mostly with this huge family of often indexed functions which happen to bear some magical relations to each other. My handwavy explanation of why these things exist goes as follows:
-In physics, we're dealing with the dynamics of certain degrees of freedom. The dynamics, given by differential equations, often employ smooth symmetries; that is, we're dealing with Lie groups, which are also manifolds in themselves. Take e.g. the Laplacian $\Delta=\nabla\cdot\nabla$ and the associated symmetries $R$ acting as $\nabla\to R\nabla$ in such a way that $R\nabla\cdot R\nabla=\nabla\cdot\nabla$. Now in case one is dealing with a "rotation" in the broadest sense of the word, the $R$'s often form a compact manifold, where we can savagely define things like integration on the group, and these symmetry groups also permit pretty unitary matrix representations. That is, there are necessarily matrices $U$ with
-$$\sum_kU_{kn}^*U_{km}=\delta_{mn},$$
-and well, the matrix coefficients $U_{kn}$ must be some complex functions. You see the direct relation to special functions if you take the abstract Lie group theory and actually sit down and write down the matrices in some basis. E.g. for the rotation group matrices $D$, you find
-$$D^j_{m'm}(\alpha,\beta,\gamma)= e^{-im'\alpha }[(j+m')!(j-m')!(j+m)!(j-m)!]^{1/2}\cdot\sum\limits_s \left[\frac{(-1)^{m'-m+s}}{(j+m-s)!s!(m'-m+s)!(j-m'-s)!}\left(\cos\frac{\beta}{2}\right)^{2j+m-m'-2s}\left(\sin\frac{\beta}{2}\right)^{m'-m+2s} \right] e^{-i m\gamma}.$$
-Very sweet, right? Now here you have the Legendre polynomials $P_\ell^m$:
-$$D^{\ell}_{m 0}(\alpha,\beta,0) = \sqrt{\frac{4\pi}{2\ell+1}} Y_{\ell}^{m*} (\beta, \alpha ) = \sqrt{\frac{(\ell-m)!}{(\ell+m)!}} \, P_\ell^m ( \cos{\beta} ) \, e^{-i m \alpha },$$
-so that you have the orthogonality relation relating these functions:
-$$ \int_0^{2\pi} d\alpha \int_0^\pi \sin \beta\, d\beta \int_0^{2\pi} d\gamma \,\, D^{j'}_{m'k'}(\alpha,\beta,\gamma)^\ast D^j_{mk}(\alpha,\beta,\gamma) = \frac{8\pi^2}{2j+1} \delta_{m'm}\delta_{k'k}\delta_{j'j},$$
-which I read as $U^* U=1$.
-Since there are representable symmetries in the geometric structure of spaces, there must be functions which have some magical interrelating properties.<|endoftext|>
-TITLE: coupon collector and Markov chains
-QUESTION [6 upvotes]: I need some help with my homework in probability.
-I need to prove that if
-$X(n) =$ the number of different coupons that the collector has at time $n$,
-then $X(n)$ represents a Markov chain.
-I proved that
-$$P(X(m+1)=j)= \cases{\frac{X(m)}{n}&\text{if $j=X(m)$}\\ 1-\frac{X(m)}{n}&\text{if $j=X(m)+1$}\\ 0 &\text{otherwise}.}$$
-Now I need to show from there that $P(X(m+1)=j\mid X(m)=K_m,\ldots ,X(0)=K_0)=P(X(m+1)=j\mid X(m)=K_m)$.
-Thanks for the help.
-benny.
-
-REPLY [8 votes]: This answer is incomplete and potentially misleading, so I'm posting an extended comment.
-First of all, there is something wrong with the OP's formula. The left hand side is a probability, which is a non-random number, while the right hand side depends on the random variable $X(m)$. Probably what he means is
-$$P\big(X(m+1)=j\,|\, X(m)\big)= \cases{\frac{X(m)}{n}&\text{if $X(m)=j$}\\[3pt] 1-\frac{X(m)}{n}&\text{if $X(m)=j-1$}\\[5pt] 0 &\text{otherwise}.}$$
-This formula describes the conditional distribution of $X(m+1)$ given $X(m)$. By definition, this depends only on $X(m)$ and not $X(m-1),\dots, X(1), X(0)$, and this is true whether or not the process $X$ satisfies the Markov property.
-The OP is correct in asserting that to prove the Markov property, you must consider conditional probabilities of the form
-$$P(X(m+1)=j\,|\,X(m)=K_m,\ldots ,X(0)=K_0),$$
-or alternatively joint probabilities of the form
-$$P(X(m)=K_m,\ldots ,X(0)=K_0).$$
-For the coupon collector's problem, this is a bit of a pain, but it is not terribly difficult and it has to be done to prove that $X$ is Markov.
-
-Added: Here is a simple example that maybe explains my objection a bit better.
-Draw cards one at a time, without replacement, from a well-shuffled deck. For $1\leq n\leq 52$, let $X_n$ be the color of the card drawn at time $n$. This is a stochastic process with state space ${\cal S}=\{R,B\}$.
-For $1\leq n <52$, using exchangeability you find that $\mathbb{P}(X_{n+1}=j\,|\,X_n=i)$ is the $(i,j)$th entry in the matrix
-$$P= \pmatrix{{25\over 51}&{26\over 51}\\[3pt] {26\over 51}&{25\over 51}}.$$
-So $(X_n)$ is time homogeneous, and has a "transition matrix" $P$ but, nevertheless, it is not Markov.
-Why not? Well, for example
-$$\mathbb{P}(X_3=B\,|\,X_2=R,X_1=R)={26\over 50}\neq {26\over 51}=\mathbb{P}(X_3=B\,|\,X_2=R).$$<|endoftext|>
-TITLE: What is the maximum number of primes generated consecutively by a polynomial of degree $a$?
-QUESTION [9 upvotes]: Let $p(n)$ be a polynomial of degree $a$. Start off by plugging in arguments from zero and going up one integer at a time. Go on until you arrive at an integer argument $n$ for which $p(n)$'s value is not prime, and count the number of distinct primes your polynomial has generated.
-
-Question: what is the maximum number of distinct primes a polynomial of degree $a$ can generate by the process described above? Furthermore, what is the general form of such a polynomial $p(n)$?
-
-This question was inspired by this article.
-Thanks,
-Max
-[Please note that your polynomial does not need to generate consecutive primes, only primes at consecutive positive integer arguments.]
-
-REPLY [12 votes]: The Green-Tao Theorem states that there are arbitrarily long arithmetic progressions of primes; that is, sequences of primes of the form
-$$ b , b+a, b+2a, b+3a,... ,b+na $$
-Since such a progression will be the first $n$ values of the polynomial $ax+b$, this implies that even for degree 1, there is no upper bound to how many primes in a row a polynomial can generate.
-
-REPLY [4 votes]: Here is a result by Rabinowitsch for quadratic polynomials.
- -$n^2+n+A$ is prime for $n=0,1,2,...,A-2$ if and only if $d=1-4A$ is squarefree and the class number of $\mathbb{Q}[\sqrt{d}]$ is $1$. - -See this article for details. -http://matwbn.icm.edu.pl/ksiazki/aa/aa89/aa8911.pdf -Also here is a list of imaginary quadratic fields with class number $1$ -http://en.wikipedia.org/wiki/List_of_number_fields_with_class_number_one#Imaginary_quadratic_fields -There are many other articles about prime generating (quadratic) polynomials that you can google.<|endoftext|> -TITLE: What is the 0-norm? -QUESTION [22 upvotes]: On $\mathbb{R}^n$ and $p\ge 1$ the $p$-norm is defined as $$\|x\|_p=\left ( \sum _{j=1} ^n |x_j| ^p \right ) ^{1/p}$$ -and there is the $\infty$-norm which is $\|x\|_\infty=\max _j |x_j|$. It's called the $\infty$ norm because it is the limit of $\|\cdot\|_p$ for $p\to \infty$. -Now we can use the definition above for $p<1$ as well and define a $p$-"norm" for these $p$. The triangle inequality is not satisfied, but I will use the term "norm" nonetheless. For $p\to 0$ the limit of $\|x\|_p$ is obviously $\infty$ if there are at least two nonzero entries in $x$, but if we use the following modified definition -$$\|x\|_p=\left ( \frac{1}{n} \sum _{j=1} ^n |x_j| ^p \right ) ^{1/p}$$ -then this should have a limit for $p\to 0$, which should be called 0-norm. What is this limit? - -REPLY [27 votes]: When $p$ is small, $$x^p = \exp(p \log x) \approx 1 + p\log x.$$ Therefore $$\frac{1}{n} \sum_{j=1}^n x_j^p \approx 1 + p\frac{1}{n} \sum_{j=1}^n \log x_j = 1 + p \log \sqrt[n]{\prod_{i=1}^n x_j}.$$ On the other hand, we have $$(1+py)^{1/p} \longrightarrow \exp(y),$$ and so we easily get that the norm approaches the geometric mean, as Raskolnikov commented.<|endoftext|> -TITLE: Automorphism of the Field of rational functions -QUESTION [35 upvotes]: Let $K$ be a field and let $K(x)$ be the field of rational functions in $x$ whose coefficients are in $K$. Let $\theta(x)$ $\in \operatorname{Aut}(K(x))$ such that $\theta|_K = \operatorname{id}_K$. Show that $\theta(x) =\frac{ax+b}{cx+d}$, with $ad\neq bc$. -Here is my attempt. -Let $\theta(x) = \frac{f}{g}$, $f,g \in K[x]$, with $\gcd(f, g)=1$. let $h \in K(x)$. Then $h(\frac{f}{g}) = x$ and $\frac{f(x)}{g(x)}\neq\frac{f(y)}{g(y)}$ if $x\neq y$ . Suppose $\deg f \gt 1$ or $\deg g \gt 1$, then the equation $g(x)b=f(x)$, $b\in K$, will have more than one solution for $x$. Hence $f$ and $g$ have degrees at most $1$ and thus $\theta(x)=\frac{ax+b}{cx+d}$ with inverse function $\theta(x) =\frac{dx-b}{a-cx}$. -What I'd like to know is if Ive approached the problem in the right manner. Please any input will be very much appreciated. Thanks. - -REPLY [14 votes]: This and related results are frequently called "Lüroth's Theorem". Searching on that term will turn up much of interest, e.g. this stimulating exercise set of George Bergman, which I append below for those who may not have easy access to a postscript viewer. See also the MO thread on elementary proofs.<|endoftext|> -TITLE: Starting digits of $2^n$. -QUESTION [16 upvotes]: Prove that for any finite sequence of decimal digits, there exists an $n$ such that the decimal expansion of $2^n$ begins with these digits. - -REPLY [24 votes]: Take $\log_{10} (2^n) = n \log_{10} 2$, note that $\log_{10} 2$ is irrational, and use the equidistribution theorem (or prove what you want directly using the pigeonhole principle).<|endoftext|> -TITLE: If A is a subset of R with Lebesgue measure > 0 then are there a,b such that the measure of $[a,b]\cap A$ is b-a? 
-QUESTION [7 upvotes]: If $A$ is a subset of $\mathbb{R}$ with Lebesgue measure strictly greater than $0$, does it follow then that there are $a$ and $b$ such that the measure of $[a,b]\cap A$ is $b-a$?
-Thank you.
-
-REPLY [4 votes]: What is actually true is this: for every set $C$ of positive measure and every $\epsilon < 1$ there is some open interval $(a,b)$ such that $\mu(C \cap (a,b)) \geq \epsilon (b-a)$.
-I have always viewed this as an instance of one of Littlewood's three principles for analysis: a measurable set is almost an open set.<|endoftext|>
-TITLE: extracting rotation, scale values from 2d transformation matrix
-QUESTION [25 upvotes]: How can I extract rotation and scale values from a 2D transformation matrix?
-matrix = [1, 0, 0, 1, 0, 0]
-
-matrix.rotate(45 / 180 * PI)
-matrix.scale(3, 4)
-matrix.translate(50, 100)
-matrix.rotate(30 / 180 * PI)
-matrix.scale(-2, 4)
-
-Now my matrix has values [a, b, c, d, tx, ty]. Let's forget about the processes above and imagine that we have only the values a, b, c, d, tx, and ty. How can I find the final rotation and scale values?
-
-REPLY [2 votes]: The term for this is matrix decomposition. Here is a solution that includes skew as described by Frédéric Wang.
-It operates on a 2d matrix defined as such:
-$$\left[\begin{array}{ccc} \mathrm{a} & \mathrm{c} & \mathrm{tx}\\ \mathrm{b} & \mathrm{d} & \mathrm{ty}\end{array}\right]$$
-function decompose_2d_matrix(mat) {
-  // mat = [a, b, c, d, e, f]: the six entries of the 2x3 affine
-  // matrix above, read column by column (e, f are tx, ty).
-  var a = mat[0];
-  var b = mat[1];
-  var c = mat[2];
-  var d = mat[3];
-  var e = mat[4];
-  var f = mat[5];
-
-  var delta = a * d - b * c;
-
-  let result = {
-    translation: [e, f],
-    rotation: 0,
-    scale: [0, 0],
-    skew: [0, 0],
-  };
-
-  // Apply the QR-like decomposition.
-  if (a != 0 || b != 0) {
-    var r = Math.sqrt(a * a + b * b);
-    result.rotation = b > 0 ? Math.acos(a / r) : -Math.acos(a / r);
-    result.scale = [r, delta / r];
-    result.skew = [Math.atan((a * c + b * d) / (r * r)), 0];
-  } else if (c != 0 || d != 0) {
-    var s = Math.sqrt(c * c + d * d);
-    result.rotation =
-      Math.PI / 2 - (d > 0 ? Math.acos(-c / s) : -Math.acos(c / s));
-    result.scale = [delta / s, s];
-    result.skew = [0, Math.atan((a * c + b * d) / (s * s))];
-  } else {
-    // a = b = c = d = 0
-  }
-
-  return result;
-}<|endoftext|>
-TITLE: Calculate the rank of the following matrices
-QUESTION [5 upvotes]: Question: Calculate the rank of the following matrices:
-$A = \left( \begin{array}{cc} 1 & n \\ n & 1 \end{array} \right), n \in \mathbb{Z}$ and $B = \left( \begin{array}{ccc} 1 & x & x^{2} \\ 1 & y & y^{2} \\ 1 & z & z^{2} \end{array} \right)$, $x,y,z \in \mathbb{R}$.
-So the way I understand rank($A$), it is the number of pivots in an echelon form of $A$. To put $A$ into echelon form I would subtract $n$ times the first row from the second row: $A \sim \left( \begin{array}{cc} 1 & n \\ n & 1 \end{array} \right) \sim \left( \begin{array}{cc} 1 & n \\ 0 & 1 - n^{2} \end{array} \right) \Rightarrow $rank$(A) = 2$.
-With $B$ I would have done pretty much the same thing, subtracting row 1 from both row 2 and row 3: $B \sim \left( \begin{array}{ccc} 1 & x & x^{2} \\ 1 & y & y^{2} \\ 1 & z & z^{2} \end{array} \right) \sim \left( \begin{array}{ccc} 1 & x & x^{2} \\ 0 & y - x & y^{2} - x^{2} \\ 0 & z - x & z^{2} - x^{2} \end{array} \right)$ (at this point I could multiply row 2 by $-(\frac{z-x}{y-x})$ and add it to row 3, which ends up being a long polynomial....) However, with both parts, I am pretty confident that it is not so simple and that I am missing the point of this exercise.
Could somebody please help point me in the right direction?
-
-REPLY [8 votes]: You seem to be assuming that because "$1-n^2$" doesn't look like $0$, then it cannot be zero. That is a common, but often fatal, mistake.
-Remember that $n$ stands for some integer. Once you get to
-$$A = \left(\begin{array}{cc} 1 & n\\ 0 & 1-n^2 \end{array}\right),$$
-you cannot just jump to saying there are two pivots: your next step would be to divide the second row by $1-n^2$ to make the second pivot, but whenever you divide by something, that little voice in your head should be whispering in your ear: "Wait! Are you sure you are not dividing by zero?" (remember, if you divide by zero, the universe explodes!). And the thing is, you aren't sure you are not dividing by zero. It depends on what $n$ is! So, your answer should be that it will be rank $2$ if $1-n^2\neq 0$, and rank $1$ if $1-n^2 = 0$. But you don't want the person who is grading/reading to have to figure out when that will happen. You want them to be able to glance at the original matrix, and then be able to immediately say (correctly) "Rank is 1" or "Rank is 2". So you should express the conditions in terms of $n$ alone, not in terms of some computation involving $n$. So your final answer should be something like "$\mathrm{rank}(A)=2$ if $n=\text{something}$, and $\mathrm{rank}(A)=1$ if $n=\text{something else}$."
-The same thing happens with the second matrix: in order to be able to multiply by $-(\frac{z-x}{y-x})$, that little voice in your head will whisper "Wait! Are you sure you are not dividing by zero?", which leads you to consider what happens when $y-x=0$. But more: even if you are sure that $y-x\neq 0$, that meddlesome little voice should be whispering "Wait! Are you sure you are not multiplying the row by zero?" (because, remember, multiplying a row by zero is not an elementary row operation). (And be careful: if you don't pay attention to that voice, it's going to start yelling instead of whispering...) So that means that you also need to worry about what happens when $z-x=0$. The answer on the rank of $B$, then, will depend on how $x$, $y$, and $z$ relate, and so your solution should reflect that.
-
-REPLY [2 votes]: For your first matrix, the rank could be 1 if $n=1$ or $n=-1$ (because there would only be one pivot column). For your second example, the rank could be 1, 2, or 3 depending on $x$, $y$, and $z$. For instance, if $x=y=z$ there are only non-zero entries in the first row of the reduced matrix. You may want to look at the invertible matrix theorem to help you with this second example.
-http://www.ams.sunysb.edu/~yan2000/ams210_f2005/TheInvertibleMatrixTheorem.pdf
-In particular, a square matrix has "full rank" iff it is invertible. This makes your first question trivial. For the second one, think about the values of $x,y,z$ that make the matrix singular, then classify these as rank 1 or 2. Any combination of $x,y,z$ making the matrix invertible implies the resulting matrix has rank 3.<|endoftext|>
-TITLE: Why study "curves" instead of 1-manifolds?
-QUESTION [21 upvotes]: In most undergraduate differential geometry courses -- I am thinking of do Carmo's "Differential Geometry of Curves and Surfaces" -- the topic of study is curves and surfaces in $\mathbb{R}^3$. However, the definitions of "curve" and "surface" are usually presented in very different ways.
-A curve is defined simply as a differentiable map $\gamma\colon I \to \mathbb{R}^3$, where $I \subset \mathbb{R}$ is an interval.
-Of course, some authors prefer to define a curve as the image of such a map, and others require piecewise-differentiability, but the general concept is the same.
-On the other hand, surfaces are essentially defined as 2-manifolds.
-Similarly, in graduate courses on manifolds -- I am thinking of John Lee's "Introduction to Smooth Manifolds" -- one talks about curves $\gamma\colon I \to M$ in a manifold, and can do line integrals over such curves, but talks separately about embedded/immersed 1-dimensional submanifolds.
-My question, then, is:
-
-Why make (parametrized) curves the object of study rather than 1-manifolds?
-
-Earlier, I asked a question that was perhaps meant to hint at this one, though I didn't say so explicitly.
-Ultimately, I would simply like to say "curves are 1-manifolds and surfaces are 2-manifolds," and am looking for reasons why this is correct/incorrect or at least a good/bad idea. (So, yes, I'm looking for a standard definition of "curve.")
-
-REPLY [7 votes]: Also, in agreement with what kahen said:
-For a regular parametrization, you can always find a metric that is constant along the curve, leading to zero intrinsic curvature. This is also a reason why it might be somehow not very enlightening to discuss curves as one-dimensional manifolds in their own right.
-Curves, on the other hand, can have extrinsic curvature, which, for a body moving along a trajectory, can be interpreted as the acting force.
-Greets
-Robert<|endoftext|>
-TITLE: What are Diophantine equations REALLY?
-QUESTION [8 upvotes]: Sometimes when you want to solve an equation you can just use algebra and rearrange it, and then you are done. But sometimes no amount of algebra can ever solve the equation, and then you need an idea; here is what I mean by the idea:
-
-parity (or modular arithmetic for higher numbers)
-complex integers
-the irrational number
-a special function
-a strange curve
-
-A lot of equations can be proved impossible if the left hand side is odd and the right hand side is even; sometimes I had a pair of equations that turned out to be easy if you read it as a real and an imaginary equation. I read that to solve Pell's equation you need a fraction close to the square root of $d$, and to solve $ax + by = 1$ you have to use the greatest common divisor function. To solve Fermat's Last Theorem you need elliptic curves.
-I have two questions: where can I find more examples of this (an equation and then what it is really, which lets you prove it)? And how do you find out what is behind the equation, aka what is the idea (because I only see the + - * ^ and variables..)?
-
-REPLY [12 votes]: I interpret the question as "how do I know how to attack a Diophantine problem". The question is in fact not easy and it is this skill that algebraic number theorists hone. Moreover, the methods that you listed are where the story was about 300 years ago. Today, there are more sophisticated and more general techniques available. If you are more interested in the motivation, you should have a look at this MO question, which somewhat goes in your direction.
-Before I start listing the modern methods, let me directly address your questions "what is behind a given equation" and "how do I know what to try?", starting with the latter: you don't. After a while you develop some intuition, but still, the basic way to find an approach that works is to try them all out in the order of decreasing likelihood of success.
It is this likelihood that you can estimate better and better as you become more proficient, but you will never know for sure until you try an approach. As for the question "what is behind a Diophantine equation", there is no good way of making sense of this question at present. Some people will view the equation as describing a geometric object (see the last paragraph of this post), some people will look at it from a "modular angle" (see the penultimate paragraph). But at the end of the day, when you are interested in integral or rational solutions, the equation is just that: a Diophantine equation. If you must categorise equations, then to categorise them according to the geometry and topology of the set of complex solutions is probably the most sensible thing to do (see the last paragraph). -There are two or three rather broad themes in modern research, where modern means everything from the last 150-odd years. -First, a classical method that you have hinted at but that has much more potential is that often, to understand integer solutions, you are forced to work in a bigger ring. You have listed square roots of integers, but the technique is more general than that. To master it, you need to learn some classical algebraic number theory, as it was developed at the end of the 19th and the beginning of the 20th century, and that's also where I would recommend you start reading. Have a look at an introductory book on algebraic number theory, such as the book by Ian Stewart, which I personally quite like. -Another broad theme is the one successfully used by Ribet, Frey, Wiles and several others along the way to prove Fermat's Last Theorem. It is nowadays subsumed under the mysterious blanket term "modularity". To start understanding what this is about, you first need to learn about modular forms and elliptic curves. The basic idea is that the Shimura-Taniyama-Weil conjecture, which was the actual result Wiles proved, relates two seemingly unrelated objects: rational elliptic curves and modular forms. This is extremely useful, because modular forms are extremely well-behaved. The "modularity" idea of solving Diophantine equations then is to construct an elliptic curve out of a putative solution to your given equation that has such strange properties that it cannot possibly be modular. That would then contradict Wiles's theorem, so there cannot be such a solution. The places to start reading about elliptic curves and modular forms are (after you have completely read an introductory book on algebraic number theory and done all the exercises) Silverman - the classic on elliptic curves, and maybe the book by Diamond and Shurman for modular forms. -Finally, a very broad theme is that often, the geometry or the topology of the complex solutions of the equation controls its arithmetic (i.e. Diophantine) behaviour. It is difficult to point to one place to learn about this, but elliptic curves are definitely the right place to start. I think that once you have read a book on algebraic number theory and one on elliptic curves, you should just come back here and ask this question again with your new background.<|endoftext|> -TITLE: What is $\operatorname{Spec}\mathbf{C}[[x,y]]/(y^{2} - x^{3} - x^{2})$? -QUESTION [20 upvotes]: Let $X = \operatorname{Spec} \mathbf{C}[[x,y]]/(y^{2} - x^{3} - x^{2})$. I would like to describe $X$ set-theoretically. My questions are: Can one explicitly say what the elements in $X$ are? Is it possible to interpret them geometrically? And is $X$ irreducible?
I am not really sure where to get started. Any help would be much appreciated. - -REPLY [31 votes]: Geometrically, you are considering the germ of a singular node at the origin, that is, at the singularity. The node is $ N = \textrm{Spec} ( \mathbf{C}[x,y]/(y^{2} - x^{3} - x^{2})) $ and is an irreducible affine curve, because the polynomial $y^{2} - x^{3} - x^{2}$ is irreducible in the polynomial ring $\mathbf{C}[x,y]$. However, your germ $X$ is reducible: this is because the polynomial $y^{2} - x^{3} - x^{2}$ becomes reducible in the formal power series ring: $\; y^{2} - x^{3} - x^{2}=(y-x\sqrt{1+x})(y+x\sqrt{1+x})\in \mathbf{C}[[x,y]]$, where $\sqrt{1+x}= 1+\frac{1}{2}x+\dots$ can be developed by the binomial series in $\mathbf{C}[[x,y]]$. -So in the scheme sense $X$ contains three points: the origin (= the singularity) and the two generic points of the two irreducible components of $X$. -This is very interesting because the node remains irreducible in every neighbourhood of the origin in the affine plane $\textrm{Spec} ( \mathbf{C}[x,y])$, but by passing to formal power series you force the curve to split into two components. So the intuition should be that passing to formal power series is a strong form of localization.<|endoftext|> -TITLE: Solving a peculiar system of equations -QUESTION [22 upvotes]: I have the following system of equations where the $m$'s are known but $a, b, c, x, y, z$ are unknown. How does one go about solving this system? All the usual linear algebra tricks I know don't apply and I don't want to do it through tedious substitutions. -\begin{aligned} -a + b + c &= m_0 \\ -ax + by + cz &= m_1 \\ -ax^2 + by^2 + cz^2 &= m_2 \\ -ax^3 + by^3 + cz^3 &= m_3 \\ -ax^4 + by^4 + cz^4 &= m_4 \\ -ax^5 + by^5 + cz^5 &= m_5 -\end{aligned} - -REPLY [15 votes]: It seems that I am a wee bit late to this particular party, but I thought I'd show yet another way to resolve the OP's algebraic system, which I'll recast here in different notation: -$$\begin{aligned} -w_1+w_2+w_3&=m_0\\ -w_1 x_1+w_2 x_2+w_3 x_3&=m_1\\ -w_1 x_1^2+w_2 x_2^2+w_3 x_3^2&=m_2\\ -w_1 x_1^3+w_2 x_2^3+w_3 x_3^3&=m_3\\ -w_1 x_1^4+w_2 x_2^4+w_3 x_3^4&=m_4\\ -w_1 x_1^5+w_2 x_2^5+w_3 x_3^5&=m_5 -\end{aligned}$$ -and the problem is to find the $x_i$ and the $w_i$ satisfying the system of equations. -The key here is to recognize that this is exactly the problem of recovering the nodes $x_i$ and weights $w_i$ of an $n$-point Gaussian quadrature rule with some weight function $w(u)$ and some support interval $[a,b]$, given the moments $m_j=\int_a^b w(u)u^j \mathrm du,\quad j=0\dots2n-1$. Recall that $n$-point rules are designed to exactly integrate functions of the form $w(u)p(u)$, where $p(u)$ is a polynomial of degree at most $2n-1$, and this is the reason why we have $2n$ equations. -As is well known, the nodes and the weights of a Gaussian quadrature can be obtained if we know the orthogonal polynomials $P_k(u)$ associated with the weight function $w(u)$. This is due to the fact that the nodes of an $n$-point Gaussian rule are the roots of the orthogonal polynomial $P_n(u)$. The first phase of the problem, now, is determining the set of orthogonal polynomials from the moments. -Luckily, in 1859(!), Chebyshev obtained a method for determining the recursion coefficients $a_k$, $b_k$ for the orthogonal polynomial recurrence -$$P_{k+1}(u)=(u-a_k)P_k(u)-b_k P_{k-1}(u)\quad P_{-1}(u)=0,P_0(u)=1$$ -when given the $m_j$.
Chebyshev's algorithm goes as follows: initialize the quantities -$$\sigma_{-1,l}=0,\quad l=1,\dots,2n-2$$ -$$\sigma_{0,l}=m_l,\quad l=0,\dots,2n-1$$ -$$a_0=\frac{m_1}{m_0}$$ -$$b_0=m_0$$ -and then perform the recursion -$$\sigma_{k,l}=\sigma_{k-1,l+1}-a_{k-1}\sigma_{k-1,l}-b_{k-1}\sigma_{k-2,l}$$ -for $l=k,\dots,2n-k-1$ and $k=1,\dots,n-1$, from which the recursion coefficients for $k=1,\dots,n-1$ are given by -$$a_k=\frac{\sigma_{k,k+1}}{\sigma_{k,k}}-\frac{\sigma_{k-1,k}}{\sigma_{k-1,k-1}}$$ -$$b_k=\frac{\sigma_{k,k}}{\sigma_{k-1,k-1}}$$ -I'll skip the details of how the algorithm was obtained, and will instead tell you to look at this paper by Walter Gautschi where he discusses these things. -Once the $a_k$ and $b_k$ have been obtained, solving the original set of equations can be done through the Golub-Welsch algorithm; essentially, one solves a symmetric tridiagonal eigenproblem where the $a_k$ are the diagonal entries and the $\sqrt{b_k}$ are the off-diagonal entries (the characteristic polynomial of this symmetric tridiagonal matrix is $P_n(x)$). The $x_i$ are the eigenvalues of this matrix, and the $w_i$ can be obtained from the first components of the normalized eigenvectors by multiplying the squares of those quantities with $m_0$. -I have been wholly theoretical up to this point, and you and most other people would rather have code to play with. I thus offer up the following Mathematica implementation of the theory discussed earlier: -(* Chebyshev's algorithm *) - -chebAlgo[mom_?VectorQ, prec_: MachinePrecision] := - Module[{n = Quotient[Length[mom], 2], si = mom, ak, bk, np, sp, s, v}, - np = Precision[mom]; If[np === Infinity, np = prec]; - ak[1] = mom[[2]]/First[mom]; bk[1] = First[mom]; - sp = PadRight[{First[mom]}, 2 n - 1]; - Do[ - sp[[k - 1]] = si[[k - 1]]; - Do[ - v = sp[[j]]; - sp[[j]] = s = si[[j]]; - si[[j]] = si[[j + 1]] - ak[k - 1] s - bk[k - 1] v; - , {j, k, 2 n - k + 1}]; - ak[k] = si[[k + 1]]/si[[k]] - sp[[k]]/sp[[k - 1]]; - bk[k] = si[[k]]/sp[[k - 1]]; - , {k, 2, n}]; - N[{Table[ak[k], {k, n}], Table[bk[k], {k, n}]}, np] - ] - -(* Golub-Welsch algorithm *) - -golubWelsch[d_?VectorQ, e_?VectorQ] := - Transpose[ - MapAt[(First[e] Map[First, #]^2) &, - Eigensystem[ - SparseArray[{Band[{1, 1}] -> d, Band[{1, 2}] -> Sqrt[Rest[e]], - Band[{2, 1}] -> Sqrt[Rest[e]]}, {Length[d], Length[d]}]], {2}]] - -(I note that the implementation here of Chebyshev's algorithm was optimized to use two vectors instead of a two-dimensional array.) -Let's try an example. Let $m_j=j!$ and take the system given earlier ($n=3$): -{d, e} = chebAlgo[Range[0, 5]!] -{{1., 3., 5.}, {1., 1., 4.}} - -xw = golubWelsch[d, e] -{{6.2899450829374794, 0.010389256501586145}, {2.2942803602790467, 0.27851773356923976}, {0.4157745567834814, 0.7110930099291743}} - -We have here the equivalence xw[[i, 1]]$=x_i$ and xw[[i, 2]]$=w_i$; let's see if the original equations are satisfied: -Chop[Table[Sum[xw[[j, 2]] xw[[j, 1]]^i, {j, 3}] - i!, {i, 0, 5}]] -{0, 0, 0, 0, 0, 0} - -and they are. -(This example corresponds to generating the three-point Gauss-Laguerre rule.) - -As a final aside, the solution given by Aryabhata is an acceptable way of generating Gaussian quadrature rules from moments, though it will require $O(n^3)$ effort in solving the linear equations, as opposed to the $O(n^2)$ effort required for the combination of Chebyshev and Golub-Welsch. Hildebrand gives a discussion of this approach in his book.
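-In formulas, the elimination behind Aryabhata's proposal can be made explicit (this paragraph is my paraphrase of what the code below implements). Write the nodes as the roots of a monic polynomial $\pi(t)=t^n+c_{n-1}t^{n-1}+\cdots+c_0$. Since $\pi(x_i)=0$, the moment equations force $\sum_i w_i x_i^k \pi(x_i)=0$, i.e. $\sum_{j=0}^{n}c_j m_{k+j}=0$ with $c_n=1$, for $k=0,\dots,n-1$; this is the Hankel system
-$$\sum_{j=0}^{n-1} m_{k+j}\,c_j = -m_{k+n},\qquad k=0,\dots,n-1.$$
-Once the $c_j$ are known, the $x_i$ are the roots of $\pi$, and the $w_i$ then follow from the Vandermonde system $\sum_i w_i x_i^k = m_k$, $k=0,\dots,n-1$.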
-Here is Aryabhata's proposal in Mathematica code (after having done the elimination of the appropriate variables in the background): -gaussPolyGen[mom_?VectorQ, t_] := - Module[{n = Quotient[Length[mom], 2]}, - Expand[Fold[(#1 t + #2) &, 1, Reverse[LinearSolve[ - Apply[HankelMatrix, Partition[mom, n, n-1]], -Take[mom, -n]]]]]] - -and compare, using the earlier example: -gaussPolyGen[Range[0, 5]!, t] --6 + 18*t - 9*t^2 + t^3 - -% == -6 LaguerreL[3, t] // Simplify -True - -Having found the roots of the polynomial generated by gaussPolyGen[], one merely solves an appropriate Vandermonde linear system to get the weights. -nodes = t /. NSolve[gaussPolyGen[Range[0, 5]!, t], t, 20] -{0.4157745567834791, 2.294280360279042, 6.2899450829374794} - -weights = LinearAlgebra`VandermondeSolve[nodes, Range[0, 2]!] -{0.711093009929173, 0.27851773356924087, 0.010389256501586128} - -The results here and from the previous method are comparable.<|endoftext|> -TITLE: A challenging question about T1 spaces and countable compactness -QUESTION [9 upvotes]: I was working through a textbook on topology and I came across a problem I couldn't solve. -1) It is known that if a space is T1, it is countably compact if and only if every countable open cover has a finite subcover. (See below for the definition of countably compact, which might be different from the conventional definition.) -2) It is also true that if a space is T1, it is countably compact if and only if every infinite open cover has a proper subcover. (why?) -Intuitively, both properties seem to talk about how open covers can be stripped of unnecessary elements and still work as a cover, under conditions where points are sufficiently close together. However, I cannot figure out a proof for the second statement. Because the cover may contain uncountably many sets, it is very hard to deal with. -This question appears in "Elements of Point Set Topology" by John D. Baum as exercise 3.33. The question and related hint can be viewed here. -Terminology used in this text: -A T1 space is a topological space such that, if x is an element of the space, the set {x} is closed. -A countably compact space is a space such that every infinite subset of the space has a limit point in the space. I am under the impression that other texts refer to this property as limit point compactness. -An infinite open cover is a collection of infinitely many open sets which cover the space. - -REPLY [9 votes]: We shall show that if $X$ is a $T_1$ space, then it is countably compact if and only if every infinite open cover has a proper subcover. The key idea is to proceed by contrapositive. -($\Rightarrow$ Needs $T_1$) Suppose that $ \ \mathcal{U} \ $ is an infinite open cover of $X$ with no proper subcover. Then for each $U \in \mathcal{U}$, there is a point $p_U \in U$ that doesn't belong to any other member of $ \ \mathcal{U} \ $ (such a point must exist, since otherwise $\mathcal{U} - \{U\}$ would be a proper subcover). The set $A = \{p_U : U \in \mathcal{U} \}$ is infinite and doesn't have a limit point. Indeed, if $x \in X$, then there is $U \in \mathcal{U}$ containing $x$ and $ U \cap A = \{ p_U \}$. If $x = p_U$, then it isn't a limit point of $A$. If $x \ne p_U$, then, using $T_1$, $U- \{ p_U \}$ is open and $(U- \{ p_U \}) \cap A =\emptyset$. So $x$ isn't a limit point of $A$. -George Lowther gave an example showing that the $T_1$ hypothesis is fundamental. Since there are many comments, I'll reproduce it here: - -"Consider the example of the real numbers where the open sets are unions of intervals $[n,a)$ for integer $n$ and real $a>n$. This is $T_0$ but not $T_1$.
It is also countably compact, but the infinite cover $\mathcal{U}=\{[n,n+1)\colon n\in\mathbb{Z}\}$ has no proper subcovers. So, $T_1$ is needed." - -A direct approach to prove this implication was suggested by Carl Mummert. Let $ \ \mathcal{U} \ $ be an infinite open cover of $X$ and $ \ \mathcal{U}_0 $ be a countably infinite subset of $ \ \mathcal{U} $. Now consider $ \ \mathcal{U}_1 = \mathcal{U} - \mathcal{U}_0$ and let $V \ $ be the union of all sets in $ \ \mathcal{U}_1$. Then $ \ \mathcal{U}_0 \cup \{ V \ \}$ is a countable open cover of $X$, and it follows from $(1)$ that it has a finite subcover $W_1, \ldots, W_n$. Now the set consisting of all the $W_i$, with the possible exception of $V$, adjoined with the sets $ U \in \mathcal{U}_1$, is a proper subcover of $ \ \mathcal{U}$. -($\Leftarrow$ Doesn't need $T_1$) Now, suppose that $X$ isn't countably compact. Then there is an infinite set $A$ with no limit points. Thus $A$ is closed. By the definition of limit point, for each $x \in A$, there is an open set $U_x$ containing $x$ such that $ U_x \cap A = \{ x \} $. Consider $ \ \mathcal{U} \ $, the set of all the $U_x$ plus, if necessary, $X-A$. Then $ \ \mathcal{U} \ $ is an infinite cover of $X$ with no proper subcovers. -P.S. Thanks Carl Mummert, George Lowther and Mark for your efforts to clarify and improve this answer.<|endoftext|> -TITLE: Why is $\mathbb{C}[x,y]$ not isomorphic to $\mathbb{C}[x] \otimes _{\mathbb{Z}} \mathbb{C}[y]$ as rings? -QUESTION [9 upvotes]: I would like to know why $\mathbb{C}[x,y]$ is not isomorphic to $\mathbb{C}[x] \otimes _{\mathbb{Z}} \mathbb{C}[y]$ as rings. -Thank you! - -REPLY [9 votes]: To show that $1\otimes 1+i\otimes i$ is nonzero in $\mathbb{C}[X]\otimes_{\mathbb{Z}}\mathbb{C}[X]$ note that it maps to $1\otimes 1+i\otimes i$ in $\mathbb{C}[X]\otimes_{\mathbb{R}}\mathbb{C}[X]$. -This is a tensor product over a field, so a basis as an $\mathbb{R}$-vector space is obtained by tensoring together bases on each side. Now, as the elements of the forms $X^n$ and $iX^n$ form a basis of $\mathbb{C}[X]$ over $\mathbb{R}$, the elements $1\otimes 1$ and $i\otimes i$ are linearly independent over $\mathbb{R}$. -As -$$(1\otimes 1+i\otimes i)(1\otimes 1-i\otimes i)=0,$$ -$\mathbb{C}[X]\otimes_{\mathbb{Z}}\mathbb{C}[X]$ isn't an integral domain, unlike $\mathbb{C}[X]\otimes_{\mathbb{C}}\mathbb{C}[X]$.<|endoftext|> -TITLE: Arithmetic function to return lowest in-parameter -QUESTION [9 upvotes]: Is there a mathematical function such that: -f(3, 5) = 3 -f(10, 2) = 2 -f(14, 15) = 14 -f(9, 9) = 9 - -It would be even more cool if there's a function that takes three (3) parameters, but that one could be solved by using recursive functionality: -f( f(3, 5), 4) = 3 - -REPLY [16 votes]: $$f(x,y)=\frac{x+y-|x-y|}{2}$$ - -Oscar gave a nice interpretation of the above formula in his follow-up question, but I'll give a dumb derivation here for completeness. -Making use of Iversonian brackets, we have -$$\min(x,y)=x[y \geq x]+y[y < x]$$ -and since $[\neg p]=1-[p]$, -$$\min(x,y)=x[y \geq x]+y(1-[y \geq x])=y-(y-x)[y-x \geq 0]$$ -Now, there is the identity -$$\frac{u+|u|}{2}=u[u \geq 0]$$ -and so we have -$$\min(x,y)=y+\frac{x-y-|x-y|}{2}$$ -which simplifies to the desired expression. -The extension to more than two arguments is no longer as compact, though, since one now has to contend with products of Iversonian brackets ($[p \land q]=[p]\cdot[q]$).<|endoftext|> -TITLE: Nice expression for minimum of three variables?
-QUESTION [65 upvotes]: As we saw here, the minimum of two quantities can be written using elementary functions and the absolute value function. -$\min(a,b)=\frac{a+b}{2} - \frac{|a-b|}{2}$ -There's even a nice intuitive explanation to go along with this: If we go to the point half way between two numbers, then going down by half their difference will take us to the smaller one. So my question is: "Is there a similar formula for three numbers?" -Obviously $\min(a,\min(b,c))$ will work, but this gives us the expression: -$$\frac{a+\left(\frac{b+c}{2} - \frac{|b-c|}{2}\right)}{2} - \frac{\left|a-\left(\frac{b+c}{2} - \frac{|b-c|}{2}\right)\right|}{2},$$ -which isn't intuitively the minimum of three numbers, and isn't even symmetrical in the variables, even though its output is. Is there some nicer way of expressing this function? - -REPLY [5 votes]: Based on Christian Blatter's answer and the trigonometric solution of the cubic equation, we can derive the following unusual solution. -Let $a$, $b$ and $c$ be real numbers. Then they are the roots of the equation -$$(x-a)(x-b)(x-c)=0$$ -which can also be written as -$$x^3-(a+b+c)x^2+(ab+bc+ca)x-abc=0.$$ -The trigonometric formula gives the solutions of a cubic equation in terms of its coefficients. If we substitute in the above coefficients and simplify we get an expression for the three roots, but now their size order is clear. Let $M$ be the average of $a$, $b$ and $c$, and let $P$ and $Q$ be the quadratic and geometric means of the quantities $a+b-2c$, $a-2b+c$ and $-2a+b+c$. That is: -$$M=\frac{a+b+c}3,$$ -$$P=\sqrt{\frac{(a+b-2c)^2+(a-2b+c)^2+(-2a+b+c)^2}3},$$ -$$Q=\sqrt[3]{(a+b-2c)(a-2b+c)(-2a+b+c)}.$$ -Then we have -$$\max(a,b,c)=\frac{\sqrt 2}3P\cos\left(\frac 13\arccos\left(\sqrt 2\left(\frac QP\right)^3\right)\right)+M,$$ -$$\mathrm{median}(a,b,c)=\frac{\sqrt 2}3P\cos\left(\frac 13\arccos\left(\sqrt 2\left(\frac QP\right)^3\right)+\frac{2\pi}3\right)+M,$$ -$$\min(a,b,c)=\frac{\sqrt 2}3P\cos\left(\frac 13\arccos\left(\sqrt 2\left(\frac QP\right)^3\right)+\frac{4\pi}3\right)+M.$$<|endoftext|> -TITLE: Homology and Euler characteristics of the classical Lie groups -QUESTION [12 upvotes]: I'm interested in methods of computing the homology groups and Euler characteristics of the classical Lie groups ($GL(n,\mathbb{R}), SL(n,\mathbb{R})$, etc.). (But I'd be interested in techniques which are more generally applicable to arbitrary Lie groups as well.) -Here's the current state of my knowledge on this issue with some explicit questions interspersed: - -A compact Lie group of positive dimension admits a nowhere vanishing vector field and hence has Euler characteristic zero. -I remember reading in the past that a connected Lie group deformation retracts onto its maximal compact subgroup, which by the foregoing has Euler characteristic zero. Given this, we can say that a connected Lie group is either contractible or has Euler characteristic zero. However, my knowledge of Lie theory is (very) weak at best, so I am not fully comfortable accepting this fact. Is there a simple proof? (I feel that there should be but I'm just missing it.) -The Euler characteristic respects fiber bundle structures: if $F \rightarrow E \rightarrow B$ is a fiber bundle with total space $E$, base $B$, and fiber $F$, then $\chi(E) = \chi(F)\cdot\chi(B)$. Some suitable conditions are needed; I think $B$ path-connected suffices, but I'm not 100% certain. 
I can think of one explicit example of this: Identify $S^3$ with $SU(2)$ and implement the Hopf fibration $S^1 \rightarrow S^3 \rightarrow S^2$. The subgroup $U(1)$ is realized as $S^1$ and the quotient $SU(2)/U(1)$ is realized as $S^2$. So we have a fibration $SU(2) \rightarrow SU(2)/U(1)$ with fiber $U(1)$; hence $\chi(SU(2)) = \chi(SU(2)/U(1)) \cdot \chi(U(1)) = 0$. But this is already known since $SU(2)$ is compact. -The most direct approach to computing both the homology and the Euler characteristics would possibly be to find an explicit cellular decomposition of these groups. Again, my knowledge of basic Lie theory is failing me here: I don't see how to find such decompositions of the matrix groups. -Surely there are other methods of computing the homology which would exploit the Lie group structure, but I don't know of any. - -REPLY [9 votes]: So we know that $GL_n(\mathbb{R})$ and $SL_n(\mathbb{R})$ are homotopy equivalent to $O_n$ and $SO_n$ respectively (similar things are true when working over $\mathbb{C}$). Now you can work inductively using a Serre SS. Look at the fibration coming from the inclusion of one classical group into the next higher one; if you set it up right, the homogeneous space will be a sphere. This should get most of the work done for you, and it doesn't require Lie theory. I don't think it will generalize to exceptional Lie groups though; you need to understand the homogeneous spaces in order for this to work. -Here is another way using the Serre SS: Use the fact that we know what $H^*(BG)$ is for $G$ unitary or orthogonal, special or not (with integer or mod 2 coefficients depending on whether or not it is $U$ or $O$). We know these rings because they give us Chern and Stiefel-Whitney classes respectively (I believe the effect of looking at the special groups is just to kill $c_1$ or $w_1$). Also, the dimension of the vector space the group is acting on tells you the index of the top characteristic class. Now use our good old friend the loop path space fibration: $\Omega BG \to PBG \to BG$ and remember that $\Omega BG$ is homotopy equivalent to $G$ and $PBG$ is contractible. -Both of these approaches give you the ring structure. -Then use a Bockstein SS to get the integral cohomology for the stuff with mod two coefficients ($sq^1$ is the differential!). -Once you get the cohomology you can compute the Euler characteristic by taking the alternating sums of the ranks.<|endoftext|> -TITLE: Determining whether a symmetric matrix is positive-definite (algorithm) -QUESTION [20 upvotes]: I'm trying to create a program that will decompose a matrix using the Cholesky decomposition. -The decomposition itself isn't a difficult algorithm, but a matrix, to be eligible for Cholesky decomposition, must be symmetric and positive-definite. Checking whether a matrix is symmetric is easy, but the positive-definite part proves to be more complex. -I've read about Sylvester's criterion, but that leads to determinants, and based on what I found on the web, those are quite expensive and hard on computers. -In a nutshell - is there something I might be missing? Due to the fact that the matrix is square, or something like that, is there possibly a simpler way to determine whether it's positive-definite? -Regards, -Paul - -REPLY [3 votes]: Other possibilities include using the conjugate gradient algorithm to check positive-definiteness. In theory, this method terminates after at most $n$ iterations ($n$ being the dimension of your matrix). In practice, it may have to run a bit longer.
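-For concreteness, here is one way such a conjugate-gradient test might look in NumPy. This is only a sketch under assumptions of my own (the function name, the tolerance, and the random right-hand side are all arbitrary choices, and in floating point the verdict is numerical rather than exact):
-
-import numpy as np
-
-def looks_positive_definite(A, tol=1e-12):
-    # Run conjugate gradients on A x = b for a random b; if some search
-    # direction p ever has nonpositive curvature p^T A p, then A is not
-    # positive definite. A is assumed symmetric.
-    n = A.shape[0]
-    b = np.random.default_rng(0).standard_normal(n)
-    x = np.zeros(n)
-    r = b.copy()                      # residual of the initial guess x = 0
-    p = r.copy()
-    for _ in range(n):
-        Ap = A @ p
-        curvature = p @ Ap
-        if curvature <= tol * (p @ p):
-            return False              # nonpositive curvature direction found
-        alpha = (r @ r) / curvature
-        x = x + alpha * p
-        r_new = r - alpha * Ap
-        if np.sqrt(r_new @ r_new) < tol:
-            return True               # converged without hitting bad curvature
-        beta = (r_new @ r_new) / (r @ r)
-        p = r_new + beta * p
-        r = r_new
-    return True
-
-(The more common test in practice is simply to attempt the Cholesky factorization itself and report failure, which costs nothing extra since the factorization is what was wanted anyway.)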
It is trivial to implement. You can also use a variant of the Lanczos method to estimate the smallest eigenvalue of your matrix (which is much easier than computing all eigenvalues!). Pick up a book on numerical linear algebra (check the SIAM collection). -At any rate, recall that such methods (and the Cholesky decomposition) check numerical positive-definiteness. It is possible for the smallest eigenvalue of your matrix to be, say, 1.0e-16 and for cancellation errors due to finite-precision arithmetic to cause your Cholesky (or conjugate gradient) to break down.<|endoftext|> -TITLE: Big $\mathcal{O}$ Notation question while estimating $\sum \frac{\log n}{n}$ -QUESTION [7 upvotes]: I have a function $S(x)$ which is bounded above and below as follows: -$f(x) + C_1 + \mathcal{O}(g(x)) < S(x) < f(x) + C_2 + \mathcal{O}(g(x))$ as $x \rightarrow \infty$ -Can I conclude that $$S(x) = f(x) + C + \mathcal{O}(g(x))$$ as $x \rightarrow \infty$? Can anyone give a short proof for this fact? -EDIT: -(To clear up the confusion, I am stating the problem below. I am wondering if what I did was right.) -Prove that as $x \rightarrow \infty$ -$$\displaystyle \sum_{n \leq x} \frac{\log n}{n} = \frac{(\log x)^2}{2} + C + \mathcal{O}(\frac{\log(x)}{x})$$ -where $C$ is a constant. -This is how I did it. -Approximate the summation as an integral $\int \frac{\log(x)}{x} dx$ from above and from below. (The usual way of coming up with bounds for $p$-series.) -Then we will get the lower bound as $\frac{\log([x])^2}{2} + C_1$ and the upper bound as $\frac{\log([x])^2}{2} + C_2$. -Then, $\log([x]) = \log(x) + \log(\frac{[x]}{x}) = \log(x) + \log(1-\frac{\{x\}}{x}) = \log(x) + \mathcal{O}(\frac{\{x\}}{x}) = \log(x) + \mathcal{O}(\frac{1}{x})$. -Plugging this into the above, I get the question I asked. -I did this when it was asked on a timed exam since I could not think of other ways to get to the answer. Other answers and suggestions welcome as always. - -REPLY [8 votes]: Since you wanted a different proof approach, you can try using Abel's Identity, which has turned out to be quite useful in analytic number theory. -For instance, see this: An estimate for sum of reciprocals of square roots of prime numbers. -To apply this to your question: -We know that -$\displaystyle \sum_{1 \le n \le x} \frac1{n} = \log x + \gamma + R(x)$, where $\displaystyle R(x) = \mathcal{O}\left(\frac1{x}\right)$. -Using Abel's identity we have that -$\displaystyle \sum_{1 \le n \le x} \frac{\log n}{n} = (\log x+ \gamma + R(x))\log x - \int_{1}^{x} \frac{\log t+ \gamma + R(t)}{t} \ \text dt$ -i.e. -$\displaystyle \sum_{1 \le n \le x} \frac{\log n}{n} = \frac{\log^2 x}{2} + R(x)\log x - \int_{1}^{x} \frac{R(t)}{t} \ \text dt$ -Since $\displaystyle R(x) = \mathcal{O}\left(\frac1{x}\right)$ we have that $\displaystyle \int_{1}^{\infty} \frac{R(t)}{t} \ \text dt = \eta$ exists. We also have that $\displaystyle R(x) \log x \to 0 \ \text{as} \ x \to \infty$. -Thus we have that -$\displaystyle \sum_{1 \le n \le x} \frac{\log n}{n} = \frac{\log^2 x}{2} -\eta + \mathcal{O}\left(\frac{\log x}{x}\right) \ \ \text{as} \ \ x \to \infty$ -Another useful approach is to try using the Euler-Maclaurin Summation formula.<|endoftext|> -TITLE: What local system really is -QUESTION [32 upvotes]: I know a local system is a locally constant sheaf. But why does a local system on the topological space $X$ correspond to $\tilde{X}\times_G V$, where $G$ is the fundamental group of $X$, $\tilde{X}$ is the universal covering space of $X$, and $V$ is a $G$-module?
How do you recover the locally constant sheaf from $\tilde{X} \times_G V$? - -REPLY [33 votes]: The group $G$ acts properly discontinuously on $\tilde{X}$, and so if $x$ is any point of $\tilde{X}$, it admits a neighbourhood $U$ such that $Ug$ is disjoint from $U$ if $g \in G$ is non-trivial. Thus the natural map from $U$ to $\tilde{X}/G = X$ is an embedding. -Thus the natural map from $U \times V$ to $\tilde{X}\times_G V$ is also an embedding, and so $\tilde{X}\times_G V$ is locally constant (i.e. locally a product). -More detailed remarks: - -We should equip $V$ with its discrete topology -The object $\tilde{X}\times_G V$ is not itself actually a sheaf, but is rather the espace etale of a sheaf. To get the actual sheaf we consider the natural projection $\tilde{X}\times_G V \to \tilde{X}/G = X$, and form the associated sheaf of sections. Over the open set $U \hookrightarrow X,$ this restricts to the sheaf of sections of the projection $U\times V \to U$, which is precisely the constant sheaf on $U$ attached to the vector space $V$. (Here is where we see that it is important to equip $V$ with the discrete topology.) Thus our original sheaf of sections is locally constant, as claimed.<|endoftext|> -TITLE: Shortest distance between a point and a helix -QUESTION [8 upvotes]: I have a helix in parametric equations that wraps around the Z axis and a point in space. I want to determine the shortest distance between this helix and the point; how would I go about doing that? -I've tried using the Pythagorean theorem to get the distance and then taking the derivative of the distance function to find the zeros, but I can't seem to get an explicit equation for $t$, and I'm stuck at that. -(I apologize for the tags, not sure how to tag it and I can't create new ones either) - -REPLY [9 votes]: Let the helix be given by $(\cos t, \sin t, ht)$ (after scaling). If $P$ is your point $(a,b,c)$, and $Q = (\cos t, \sin t, ht)$ is the nearest point on the helix, then $PQ$ is perpendicular to the tangent at $Q$, which is just $(-\sin t, \cos t, h)$: -$-(\cos t - a)\sin t + (\sin t - b)\cos t + (ht - c)h = 0 $ -This simplifies to $A \sin(t+B) + Ct + D = 0$ for some constants $A,B,C,D,$ as Moron said. But then you have to solve this numerically. There will be more than one solution in general, but (as Jonas Kibelbek pointed out in the comments) you only need to check the solutions with $z$-coordinate in the interval $[c-\pi h, c+\pi h)$.<|endoftext|> -TITLE: Doubts on Mutually exclusive and Independent events -QUESTION [6 upvotes]: Problem: - -In a school competition, the probability of hitting the target by Dick is $\frac{1}{2}$, by Betty is $\frac{1}{3}$, and by Joe is $\frac{3}{5}$. If all of them fire independently, calculate the probability that the target will be hit. - -The general approach for solving this is to find the complement of the probability that the target is not hit. -If the three events are mutually exclusive/independent, then why does summing the three probabilities not give the correct answer? - -REPLY [13 votes]: "Mutually exclusive" and "independent" mean different things. -Two events are mutually exclusive if they can't both happen. For example, "my first name is Steve" and "my first name is Fred" are mutually exclusive. When events are mutually exclusive, you are allowed to add their probabilities to get the probability that one of them occurs. -Independent events are events where finding out about one doesn't change the probability of the other.
Finding out that "it is raining" doesn't tell you anything about "my car is red", so those events are independent. -In the question the events "Dick hits the target", "Betty hits the target" and "Joe hits the target" are independent, but are not mutually exclusive. For instance, Betty and Dick could both hit the target. Since they are not mutually exclusive, adding the probabilities will not give the correct answer.<|endoftext|> -TITLE: Intersection of neighborhoods of 0. Subgroup? -QUESTION [7 upvotes]: Revising for my exam in commutative algebra. -Let G be a topological abelian group, i.e. such that the mappings $+:G\times G \to G$ and $-:G\to G$ are continuous. Then we have the following Lemma: -Let H be the intersection of all neighborhoods of $0$ in $G$. Then $H$ is a subgroup. -The proof in the books is the following one-liner: "follows from continuity of the group operations". (This is from "Introduction to Commutative Algebra" by Atiyah-MacDonald.) -I must admit that I don't really see how that "follows". If there is an easy explanation aimed at someone who has not encountered topological groups to any extent, I'd be happy to read it. - -REPLY [8 votes]: If $U$ is a neighbourhood of $0$ then so is $-U=\{-x:x\in U\}$. -This shows that if $x\in H$ then $-x\in H$. -To show that $H$ is closed under addition, use the fact that if $U$ is a neighbourhood of $0$ then there is another neighbourhood $V$ of $0$ with $V+V\subseteq U$. The existence of $V$ follows from the continuity of addition at $(0,0)$.<|endoftext|> -TITLE: Smooth functions for which $f(x)$ is rational if and only if $x$ is rational -QUESTION [27 upvotes]: A friend of mine introduced me to the following question: -Does there exist a smooth function $f: \mathbb{R} \to \mathbb{R}$ ($f \in C^{\infty}$), such that $f$ maps rationals to rationals and irrationals to irrationals and is nonlinear? -He has been able to prove that such a polynomial (with degree at least 2) doesn't exist. -The problem has been asked before at least at http://www.artofproblemsolving.com. - -REPLY [7 votes]: Sergei Ivanov has given a positive answer for the existence of such functions on MO: -https://mathoverflow.net/questions/48910/smooth-functions-for-which-fx-is-rational-if-and-only-if-x-is-rational.<|endoftext|> -TITLE: Have I made a straight line, or a circle? -QUESTION [28 upvotes]: (Disclaimer: I'm an engineer) -Hi everybody, I found this “riddle” posted on the internet: - -It's meant as a joke, but I do think it deserves an answer :) -A bit of background: the orange and blue ellipses are a reference to the videogame Portal. They are "portals" connected so that everything that goes into one of them comes out of the other, maintaining its momentum (this last part is the foundation for the game). For further info, you can also watch the trailer. -So... is it a circle, a straight line, or something else? -(also, feel free to retag) - -REPLY [2 votes]: Is the equator of the Earth a line or a circle? -Well, it's a circle of course. But as you know by now, from Willie Wong's answer and from others, it's also a geodesic, which is very much like a straight line: unless you are ready for some serious digging, the equator is the shortest path between any two places on it. -Your Portal picture is nothing like the Earth, mind you. But in my kitchen I have a cutting board that's a flat rectangle with a thin rectangular hole cut out near one side, as a handle for carrying. The corners are rounded somewhat.
Now an ant walking along the top side, if it passed through the handle, would wind up on the bottom side with its momentum reflected. The same if it walked over an edge. -Imagine an orthographic projection of my cutting board. (By the way, I'm also an engineer. I learned drafting the old-school way, with a T-square.) I draw a top view, which is a rectangle with a thin vertical rectangular hole near the right side. To the right I place a side view, which is just a vertical rectangle with a couple of hidden lines, the thickness of the cutting board. It's not normal "third-angle" to also draw the bottom view, but I will, so it is a rectangle with a thin vertical rectangular hole near the left side. -An ant walking over the right edge would, on the drawing, pass from the top view to the side view to the bottom view with its momentum unchanged. (From a 2-D perspective, the edges are not actually special. The corners are another matter.) If it walked through the handle, it would pass from one rectangle on the top view through an "interspace" - hidden lines on the side view - to the other rectangle on the bottom view, again with its momentum unchanged. Imagine if we tied a string through the handle and around the right edge of the cutting board. On the drawing, it would be a straight line between the two rectangles passing through the three views. -So the area around the handle and right edge of my cutting board is very much like a 2-D version of Portal. 3-D Portal can then be thought of as being played on the hypersurface of a higher dimensional cutting board. (To visualise a higher dimensional cutting board, ponder an ordinary cutting board while enjoying a "herbal cigarette".)<|endoftext|> -TITLE: Sequence converging to the supremum -QUESTION [7 upvotes]: Please help me, I can't find the theorem anywhere that states something like: - -For a non-empty bounded set $U\subset\mathbb R$ there exists a non-decreasing sequence $(a_n)_{n\in \mathbb N}$ with $a_n \in U$ such that $\lim_{n\to\infty}a_n=\sup U$. - -Thank you! - -REPLY [8 votes]: Let $$M:=\left\{a\in\mathbb R:a\ge x\text{ for all }x\in U\right\}\;.$$ By definition, $$s:=\sup U=\min M\;.$$ Let $n\in\mathbb N$. Then $s-1/n$ is not an upper bound for $U$, so there exists $a_n\in U$ with $s-1/n<a_n\le s$. Replacing $a_n$ by $\max\{a_1,\dots,a_n\}$ if necessary, we may take the sequence to be non-decreasing, and clearly $a_n\to s$.<|endoftext|> -TITLE: An elementary doubt on Inequalities -QUESTION [8 upvotes]: Given the following expression: -$$(7y-1)(y-7) \le 0$$ -To me this inequality implies $y \le 7$ and $y \le \frac{1}{7}$, but the correct expression (from my module) happens to be $\frac{1}{7} \le y \le 7$. -Where exactly am I wrong? - -REPLY [2 votes]: The Zero-Product Property ($ab=0\implies a=0\text{ or }b=0$) is the mechanics behind going from $(7y-1)(y-7)=0$ to $y=\frac{1}{7}$ or $y=7$. There is no directly analogous property for inequalities. -The method that I'd suggest for examining inequalities that compare an expression to 0 is the boundary algorithm: find the values that make the expression equal to 0, which are the boundary points of a set of intervals on the number line, then test each interval to see if it satisfies the original inequality. -In your specific example, the boundary points are $y=\frac{1}{7}$ and $y=7$, so test some value of $y$ below $y=\frac{1}{7}$ (for example, $y=0$), some value of $y$ between $y=\frac{1}{7}$ and $y=7$ (for example, $y=\frac{1}{2}$), and some value of $y$ above $y=7$ (for example, $y=10$). The original inequality is false below $y=\frac{1}{7}$ and above $y=7$ and true between $y=\frac{1}{7}$ and $y=7$.
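-This testing procedure is easy to mechanize; a throwaway sketch (assuming SymPy, with the sample points picked by hand as above) might look like this:
-
-import sympy as sp
-
-y = sp.symbols('y')
-expr = (7*y - 1)*(y - 7)
-
-# Boundary points: the solutions of the corresponding equation.
-print(sp.solve(sp.Eq(expr, 0), y))        # [1/7, 7]
-
-# One sample point per interval: below 1/7, between the two points, above 7.
-for s in [0, sp.Rational(1, 2), 10]:
-    print(s, bool(expr.subs(y, s) <= 0))  # False, True, False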
Since the original inequality included = (it was ≤), the solution includes the boundary points that solved the corresponding equation, so the solution is $\frac{1}{7}\le y\le 7$.<|endoftext|> -TITLE: Is this definite integral really independent of a parameter? How can it be shown? -QUESTION [18 upvotes]: I want to find a nice simple expression for the definite integral -$$\int_0^\infty \frac{x^2\,dx}{(x^2-a^2)^2 + x^2}$$ -Now, I can numerically compute this integral, and it seems to converge to $\pi/2$ for all real values of $a$. Is this integral actually always equal to $\pi/2$? How can I show this? -Also, why does Wolfram Alpha give me something that appears to depend on $a$? Is there a good reason it doesn't eliminate $a$? - -REPLY [5 votes]: Aryabhata's solution is nice. -The method of residue is standard in complex function theory. -Here is a simple elementary derivation. -We may assume that $a\ge 0$. -$$ -\int_0^\infty \frac{x^2\,dx}{(x^2-a^2)^2 + x^2}=\int_0^\infty \frac{1}{1+\left( x-\frac{a^2}{x} \right)^2}\,dx. -$$ -If we had -$$ -\int_0^\infty \frac{1}{1+t^2}\,dt -$$ -then we could calculate it easily. -This motivates the substitution -$$ -x-\frac{a^2}{x}=:t\, \qquad(1). -$$ -Here -$$ -D_x\left( x-\frac{a^2}{x} \right)=1+\frac{a^2}{x^2}\gt 0, \qquad (x\gt 0). -$$ -From $(1)$ we obtain -$$ -x=\frac{t}{2}+\frac{1}{2}\sqrt{t^2+4 a^2} -$$ -because $x\gt0$. -From this -$$ -dx=\left( \frac{1}{2}+\frac{1}{2}\cdot\frac{t}{\sqrt{t^2+4a^2}} \right)\,dt. -$$ -Substituting back into the integral we get -$$ -\int_0^\infty \frac{1}{1+\left( x-\frac{a^2}{x} \right)^2}\,dx= -\int_{-\infty}^\infty \left(\frac{1}{2}+\frac{1}{2}\cdot\frac{t}{\sqrt{t^2+4a^2}}\right)\frac{1}{1+t^2} \,dt -$$ -Here the second term of the integrand is an odd function, so the result is -$$ -\int_{-\infty}^\infty \frac{1}{2}\cdot\frac{1}{1+t^2}\,dt=\frac{\pi}{2}. -$$<|endoftext|> -TITLE: application of strong vs weak law of large numbers -QUESTION [5 upvotes]: By definition, the weak law states that for a specified large $n$, the average is likely to be near $\mu$. Thus, it leaves open the possibility that $|\bar{X_n}-\mu| \gt \eta$ happens an infinite number of times, although at infrequent intervals. -The strong law shows that this almost surely will not occur. In particular, it implies that with probability 1, we have that for any $\eta > 0$ the inequality $|\bar{X_n}-\mu| \lt \eta$ holds for all large enough $n$. -Now my question is about the application of these laws. How do I know which distributions satisfy the strong law vs. the weak law? For example, suppose the $X_n$ are iid with finite variances and zero means. Does the mean $\frac{\sum_{k=1}^{n} X_k}{n}$ converge to $0$ almost surely (strong law of large numbers) or only in probability (weak law of large numbers)? - -REPLY [6 votes]: From section 7.4 of Grimmett and Stirzaker's Probability and Random Processes (3rd edition). -The independent and identically distributed sequence $(X_n)$, with common distribution function $F$, satisfies $${1\over n} \sum_{i=1}^n X_i\to \mu$$ in probability for some constant $\mu$ if and only if the characteristic function $\phi$ of $X_n$ is differentiable at $t=0$ and $\phi^\prime(0)=i \mu$. -For instance, the weak law holds but the strong law fails for $\mu=0$ and symmetric random variables with $1-F(x)\sim 1/(x\log(x))$ as $x\to\infty$.<|endoftext|> -TITLE: Eigenvalues of representations -QUESTION [5 upvotes]: Let $\rho$ be a representation of $G$ on $V$. Why are its eigenvalues roots of unity?
-REPLY [10 votes]: I assume $G$ is finite. In that case any $g \in G$ has some finite order $n$, hence $\rho(g)^n = 1$. It follows that the minimal polynomial of $\rho(g)$ divides $x^n - 1$, so every eigenvalue of $\rho(g)$ is an $n$-th root of unity.<|endoftext|> -TITLE: how to read a mathematical paper? -QUESTION [67 upvotes]: I hope that this question is on-topic, though it is not quite technical. -I am curious to hear from people how they approach reading a mathematical paper. -I am not asking specific questions on purpose, though at first I had a few. But I want to keep it rather open-ended. - -REPLY [7 votes]: I generally have a "top down approach" to reading papers. More often than not, I am driven to a particular paper looking for a particular result. So I would start by looking at the statement of the result and try to see what background is needed in order to parse the statements which appear there. Once I have understood the philosophy of the result, I think about/look at the proof. Again, if I come across something unfamiliar, I try to acquaint myself with the relevant material on an as-needed basis. I find this approach works better for me if I am interested in a paper for isolated results. Reading such papers from introduction to conclusion might not be of immediate (or even future) value. -The other kinds of papers I read would include survey papers (exposition of a particular new idea) and papers which essentially develop a new idea/concept/structure for the first time. A lot of these papers, I find, can (and eventually should) be read from front matter to bibliography. So I would essentially read these papers as I would read a textbook. -One thing which I find especially important while reading papers is to look at the cross references. This can be useful for motivation, alternate proofs and related ideas. It also gets me acquainted with who else is working in the area, and the exposure to ideas from different mathematicians achieved thus is a nice side effect. -Finally, I feel it is important to ask a lot of questions while I read papers. Why is hypothesis X necessary? Can condition Y be weakened? Can this result be generalized to Z structure? This is eventually useful when I write my own results, to conceive the most general form that I can.<|endoftext|> -TITLE: "Abstract nonsense" proof that the fundamental group of a topological group is abelian -QUESTION [5 upvotes]: I seem to remember reading somewhere an "abstract nonsense" proof of the well-known fact that the fundamental group of a connected topological group is abelian. By "abstract nonsense" I mean that the proof used little more than the fact that topological groups are the group objects in the category of topological spaces and the fact that $\pi_1$ is a homotopy functor. Does anybody remember how this works? References are fine. - -REPLY [5 votes]: I just found this old question and happen to know a very nice "abstract nonsense" argument of a completely different flavor. Let $G$ be a topological group and consider the fundamental groupoid $\Pi(G)$ of $G$, which becomes a monoidal category under the group operation. The unit object is of course the identity.
But it is well known that the endomorphism monoid of the unit object, which in this case is the fundamental group at the identity, embeds into the Bernstein center (the monoid of endomorphisms of the identity functor on $\Pi(G)$), which is always commutative.<|endoftext|> -TITLE: Riesz representation and vector-valued functions -QUESTION [14 upvotes]: A version of the Riesz Representation Theorem says that a continuous linear functional on the space of continuous real-valued mappings on a compact metric space, $C(X)$, can be identified with a signed Borel measure on the set $X$. Are there any similar results when we replace $C(X)$ by the space of continuous functions of $X$ (compact metric) into $Y$ when (1) $Y=R^N$ or in general (2) $Y$ is a Banach space? I suspect the answer is yes, but I would like to find the right reference to start looking at. Thanks. - -REPLY [13 votes]: Yes, there are similar results in the vector-valued case. Dunford and Schwartz is a standard reference for this kind of thing. For further information, see this. -Some notation: $X$ is a fixed compact Hausdorff space. For a Banach space $Y$, the space of continuous functions from $X$ to $Y$, endowed with the supremum norm coming from the norm of $Y$, I denote by $C(X,Y)$. For a Banach space $Z$, I denote its dual by $Z'$. -Here is one way to think about the dual of $C(X,Y)$ for $Y$ a Banach space. -An element $\phi$ of $C(X,Y)'$ gives rise to a family of measures on $X$ parametrized by $Y$ in the following way. Fixing $\xi \in Y$, one can define a linear functional $L_{\phi,\xi}$ on $C(X)$ by sending the function $f$ on $X$ to the value of $\phi$ on the function $X \to Y$ given by $x \mapsto f(x) \xi$. In symbols: -$$ -L_{\phi,\xi}(f) = \phi(x \mapsto f(x) \xi). -$$ -From the usual Riesz theorem, there is then a measure $m_{\phi, \xi}$ defined on the Borel subsets of $X$ satisfying -$$ -L_{\phi,\xi}(f) = \int_X f \, dm_{\phi, \xi}. -$$ -So from $\phi$ we have produced a family of measures on $X$, one for each $\xi$ in $Y$. -Now define a map $m_{\phi}$ from the Borel subsets of $X$ to $Y'$ as follows: for any Borel subset $E$ of $X$, define $m_{\phi}(E)$ to be the linear functional on $Y$ given by -$$ -m_{\phi}(E)(\xi) = \int_E 1 \, dm_{\phi, \xi}. -$$ -The map $m_{\phi}$ has various nice properties (it is a $Y'$-valued analogue of a regular signed Borel measure on $X$). Since the linear span of the functions of the form $x \mapsto f(x) \xi$, with $f \in C(X)$ and $\xi \in Y$, is dense in $C(X,Y)$, it is easy to show that $\phi$ is uniquely determined by $m_{\phi}$. (The intuition is to think of $\phi$ as coming from $m_{\phi}$ as follows: for each $f \in C(X,Y)$, the number $\phi(f)$ is obtained "by integrating, over $X$, the values of $f$ with respect to the $Y'$-valued measure $m_{\phi}$, so that $\phi(f) = \int_X f \, dm_{\phi}$." You can think of this just as a formal thing, or, think enough about the integration of vector-valued functions with respect to vector-valued set mappings like $m_{\phi}$ to formalize this and remove the quotation marks.) -Anyway, you can reverse this whole chain of reasoning: starting with a map from the Borel sets of $X$ to $Y'$ with nice enough properties, you can show that it must be $m_{\phi}$ for some $\phi$ in $C(X,Y)'$. There is a natural notion of norm for these things (a "variation" norm) and it turns out to coincide with the norm you'd get from $C(X,Y)'$.
So the dual of $C(X,Y)$, in this picture, is a space of nicely behaved $Y'$-valued mappings on the Borel subsets of $X$, with a certain variation norm. When $Y$ is the scalars this turns into the original Riesz theorem. -More generally you can think of any bounded linear map from $C(X,Y)$ into a Banach space $Z$ in similar terms, but things get more complicated (the "measure-like" things you integrate over $X$ to represent maps $C(X,Y) \to Z$ take values in the linear operators from $Y$ to $Z''$). You can also weaken various hypotheses here (e.g. you can drop the compactness hypothesis on $X$, or replace $Y$ with a more general topological linear space, provided that you are willing to make additional complicated hypotheses in order to state a decent theorem). -There is another direction you can go. If $X$ is compact Hausdorff and $Y$ is a Banach space, the space $C(X,Y)$ is isometrically isomorphic to a certain tensor product, namely the injective Banach space tensor product, of $C(X)$ with $Y$. So identifying the dual of $C(X,Y)$ is a special case of identifying the dual of an injective tensor product $A \otimes_i B$ of Banach spaces $A$ and $B$. The dual of this tensor product has various characterizations. One is in terms of Borel measures on the Cartesian product of the compact topological spaces $(A')_1$ and $(B')_1$ (the unit balls of the duals of $A$ and $B$, given the weak-$*$ topology). Any book on Banach spaces that discusses the tensor product theory will have theorems about the injective Banach space tensor product and how duality interacts with it.<|endoftext|> -TITLE: Manifold Explained -QUESTION [6 upvotes]: Is there a good explanation of a manifold on the web somewhere? The Wikipedia article isn't really working for me. I was actually hoping for a whiteboard lecture on youtube, but can't find one. -My math experience is calculus through differential equations, twenty-five years ago. I also have some computer science related math; discrete structures and numerical methods. -My problem with the Wikipedia article is that I just can't visualize what they're saying. -Any sources (not just web based) are appreciated. -Edit 2 -I stumbled upon the Wikipedia article while trying to brush up on my math. I had been talking to my daughter about primes; we wandered there from her 4th grade homework. We ventured as far as Mersenne primes and perfect numbers. It was at this point that I knew I had to brush up. -Somehow I chased a link to Riemannian manifolds and this was completely new to me. -Having looked at some of the suggested material from the answers, the Wikipedia article makes a lot more sense. In fact I'm trying to figure out why I found it unclear; maybe it can be improved. - -REPLY [6 votes]: This isn't in any way a thorough or rigorous introduction to manifolds, but as far as visualization goes it's quite nice: Weeks' The Shape of Space.<|endoftext|> -TITLE: Negativity in a CIR model discretized by Ito-Taylor expansion -QUESTION [8 upvotes]: Let $X = (X_t: t \in [0,T])$ be a stochastic process satisfying a CIR model -$$ -dX_t = \beta (X_t - \gamma) dt + \sigma\sqrt{X_t} dB_t, -$$ -where $B_t$ is a standard Brownian motion, $\beta$ is a negative constant, and $\gamma, \sigma$ are positive constants. In order for the SDE to make sense, assume that $X_t > 0$ for all $t \in [0,T]$.
-Consider the following two ways to simulate the model, based on discretization of $t$ with an Ito-Taylor expansion: - -the Euler scheme: -$$ -X_{t + \Delta} \approx X_t + \beta(X_t - \gamma)\Delta + \sigma \sqrt{X_t} Z \sqrt{\Delta}, -$$ -where $Z$ is a $N(0, 1)$ Gaussian variable. -the Milstein scheme -$$ -X_{t + \Delta} \approx X_t + \beta(X_t - \gamma)\Delta + \sigma \sqrt{X_t}Z\sqrt{\Delta} + \frac{1}{4}\sigma^2 \Delta (Z^2-1) -$$ -where $Z$ is a $N(0, 1)$ Gaussian variable. - -I was wondering why these two schemes have a positive probability of generating negative values of $X_t$ and therefore cannot be used without suitable modifications? -References (book, tutorial and/or paper) will be helpful too! -Thanks and regards! - -REPLY [3 votes]: Take the Euler scheme. Let's assume that you start with $X_0=\epsilon$. Let's take $\beta=-1$, $\gamma =1$ and $\sigma=1$ for the sake of definiteness. Then in the next step you'll have -$$X_{\Delta} = \epsilon + (1-\epsilon) \Delta + \sqrt{\epsilon} Z \sqrt{\Delta}$$ -What's the probability that this is smaller than zero? -$$\mathbb{P}\left[X_{\Delta}<0\right]= \mathbb{P}\left[\epsilon + (1-\epsilon) \Delta + \sqrt{\epsilon} Z \sqrt{\Delta} < 0\right]$$ -Rearranging, you'll get: -$$\mathbb{P}\left[X_{\Delta}<0\right]= \mathbb{P}\left[ Z < -\frac{\sqrt{\epsilon}}{\sqrt{\Delta}}\left(1+\left(\frac{1}{\epsilon}-1\right)\Delta\right)\right]$$ -Using Chebyshev (and the symmetry of $Z$) for an estimate, this is smaller than -$$\mathbb{P}\left[X_{\Delta}<0\right] \leq \frac{\Delta}{2\epsilon}$$ -So the closer you are to zero, the higher the probability of crossing that line. But taking time steps sufficiently small lowers the chance of crossing, linearly in the step size. That should give you an idea of how to show it for the other scheme.<|endoftext|> -TITLE: Density of set of differences of cubes and primes -QUESTION [9 upvotes]: Consider the set $A$ of natural numbers which are of the form $k^3-p$, for $k$ a positive integer and $p$ a positive prime. Does $A$ have a density (of any of the usual kinds for sets of natural numbers) and if so, what is it? -$A$ contains a lot of numbers, and it seems somewhat difficult to me to prove that a particular number is not in $A$, except that most cubes are not in $A$. - -REPLY [4 votes]: A naive approach would be to say that the chance $n$ is equal to $k^3-p$ for a given $k$ is about $\frac{1}{\ln(k^3-n)}$. Then the chance $n$ is not equal to $k^3-p$ for any $k$ is $$\prod_{k=\sqrt[3]{n}}^\infty {1-\frac{1}{\ln(k^3-n)}}$$ Since this product goes to zero, "all" numbers should be in $A$.<|endoftext|> -TITLE: Proving that the sequence $F_{n}(x)=\sum\limits_{k=1}^{n} \frac{\sin{kx}}{k}$ is boundedly convergent on $\mathbb{R}$ -QUESTION [42 upvotes]: Here is an exercise on analysis on which I am stuck. - -How do I prove that if $F_{n}(x)=\sum\limits_{k=1}^{n} \frac{\sin{kx}}{k}$, then the sequence $\{F_{n}(x)\}$ is boundedly convergent on $\mathbb{R}$? - -REPLY [3 votes]: For any $n\geq 1$, we know where the stationary points of $F_n(x)$ occur since $F_n'(x)$ has a simple closed form. -It follows that -$$ \sup_{x\in\mathbb{R}} |F_n(x)| = \sum_{k=1}^{n}\frac{1}{k}\sin\left(\frac{2\pi k}{2n+1}\right) $$ -and over $[0,\pi]$ we have $\sin(x)\leq \frac{4}{\pi^2}x(\pi-x)$ by concavity, therefore -$$ \sup_{x\in\mathbb{R}} |F_n(x)| \leq \frac{8n^2}{(2n+1)^2} <2.$$ -We may also prove that the sequence -$$ A_n = \sum_{k=1}^{n}\frac{1}{k}\sin\left(\frac{2\pi k}{2n+1}\right) $$ -is increasing and convergent to -$$ \int_{0}^{\pi}\frac{\sin x}{x}\,dx = \text{Si}(\pi) \approx 1.85194.
$$ -Ultimately we may check that $\sum_{k\geq 1}\frac{\sin(kx)}{k}$ is the Fourier series of the sawtooth wave, i.e. the $2\pi$-periodic extension of $\frac{\pi-x}{2}$ defined over $(0,2\pi)$. This is enough to ensure convergence in $L^2$, plus we have a uniform bound for $|F_n(x)|$.<|endoftext|> -TITLE: How to prove a manifold is simply connected?... using geometry -QUESTION [15 upvotes]: I was looking at another question's title, and given the tag of DG, I thought it would read a little more like this one. Or at least that answers to this question would be answers to that question. -There are many different techniques one might use to show a manifold is simply connected; I am only interested in a specific brand of them: those involving Differential Geometry. I borrowed a book from my friend's physics professor, I think it was called Comparison theorems in Riemannian Geometry by Cheeger and Ebin, and the content was foreign to me. The goal of the book, if I recall properly, seemed to be proving theorems about $\pi_1(M)$ by examining geometric quantities like curvature. This is amazing to me, and I am curious what the main idea could be. I currently only know of one relationship between geometric quantities and homotopy invariants, that is the fabled Chern-Weil theory. (fabled because I don't understand it... yet) - -What are the precise statements of such types of results? -Which parts of the hypothesis do what? (for example, what does compactness help you do here, since it doesn't do a whole lot on the homotopy side (maybe that is wrong)) -Other than other evidence/theorems, like Gauss-Bonnet, why would someone expect such results? -Can we get interesting results going the other way? How does $\pi_1$ affect things like curvature? -Can we say anything about higher homotopy groups? - -thanks for your patience! - -REPLY [15 votes]: One also has the classical Bonnet-Myers and Synge theorems. -The Bonnet-Myers theorem states that if $M$ is complete with Ricci curvature bounded below by $\delta > 0$, then $M$ is compact with finite fundamental group. -The idea of the proof is the opposite of the one in Matt E's answer. In positive (sectional) curvature, geodesics tend to get stuck together. One uses this to show that geodesics past a certain length (something like $\pi/\delta^2$) cannot continue to minimize. -It follows that if $B_r(0)\subseteq T_p M$ is any closed ball with radius $r > \pi/\delta^2$, then $\exp(B_r(0)) = M$. In particular, you've written $M$ as the continuous image of a compact set, so it's compact. -The theorem about the fundamental group is an immediate corollary: look at the universal cover $\tilde{M}$. One can "pull back" the metric so that the covering map is a local isometry. Hence, the same curvature estimates apply to $\tilde{M}$, so it, too, must be compact. But a compact manifold can only finitely cover something, so the fundamental group of $M$ is finite. -To push this to Ricci, one uses a nice trick: if $\text{Ric} > 0$, then the sectional curvature in some directions must be $> 0$, and one uses these directions only to show geodesics eventually stop minimizing. -The Synge theorems have a different style of proof. Here's one version of the theorem: suppose $M$ is compact with positive curvature, and $f:M\rightarrow M$ is an isometry. Suppose $\text{dim}(M)$ is even and $f$ is orientation preserving OR $\text{dim}(M)$ is odd and $f$ is orientation reversing. Then $f$ has a fixed point. -The proof is straightforward: suppose not.
Choose $p\in M$ with $d(p,f(p))$ as small as possible. Choose a minimal geodesic $\gamma$ from $p$ to $f(p)$. One then computes variations of the geodesic (this uses the parity of the dimension, if I recall correctly), and shows that some nearby geodesic is smaller, contradicting your choice of $\gamma$.
-As a corollary, one learns that the fundamental group of an even-dimensional positively curved compact manifold is either trivial or $\mathbb{Z}/2\mathbb{Z}$. For, suppose $M$ satisfies all the hypotheses. Look at the deck group action on $\tilde{M}$. If it's orientation preserving, it has a fixed point, but the only deck transformation with a fixed point is the identity. If it's orientation reversing, then its square is orientation preserving, and hence is the identity, so all elements of $\pi_1$ have order at most 2. If there are two orientation-reversing maps, then their product is orientation preserving and hence the identity, so there is at most one element of order 2.
-Likewise, in odd dimensions, the corollary is that $M$ must be orientable, since the deck group must act on $\tilde{M}$ by orientation preserving maps.
-Finally, to just randomly answer one of your other questions, curvature can affect higher homotopy groups. As Matt E pointed out, in negative (really, nonpositive) curvature, the universal cover is $\mathbb{R}^n$, implying all higher homotopy groups vanish. In positive curvature, understanding the higher homotopy groups is a very important and currently unsolved problem.<|endoftext|>
-TITLE: Does $R[x] \cong S[x]$ imply $R \cong S$?
-QUESTION [141 upvotes]: This is a very simple question but I believe it's nontrivial.
-I would like to know if the following is true:
-
-If $R$ and $S$ are rings and $R[x]$ and $S[x]$ are isomorphic as rings, then $R$ and $S$ are isomorphic.
-
-Thanks!
-If there isn't a proof (or disproof) of the general result, I would be interested to know if there are particular cases when this claim is true.
-
-REPLY [23 votes]: There's been much work on this problem since the mentioned seminal work in the early seventies. Searching on the buzzword "stably equivalent" should help locate most of it. Below is a helpful introduction from Jon L Johnson: Cancellation and Prime Spectra<|endoftext|>
-TITLE: Third degree Diophantine equation
-QUESTION [6 upvotes]: I saw an exercise which asked to find all integers $m,n$ satisfying $2m^2 + 5n^2 = 11(mn-11)$. I found them using the factorization $(m-5n)(2m-n)=-11\cdot 11$. However, what methods are there to solve the original problem: find all $(m,n)\in\mathbb{Z}\times\mathbb{Z}$ satisfying $2m^2+5n^3=11(mn-11)$? I haven't solved many cubic Diophantine equations, so I was just wondering if there is some birational transformation to convert the equation to a Weierstrass form of an elliptic curve.
-
-REPLY [3 votes]: The equation $2y^2+5x^3=11(xy−11)$ describes an elliptic curve. Given any elliptic curve, you can perform a change of variables to put it into Weierstrass form, and if the field of definition has characteristic different from 2 or 3, you can even put it into the form $y^2 = x^3 + ax + b$. In this particular case, to find the transformation is easy: first scale $x$ and $y$ to make the coefficients of $x^3$ and of $y^2$ equal to $1$. Then write $y' = y + \alpha x$ to get rid of the $xy$-term. This will introduce an $x^2$ term. So now, translate $x$ to get rid of the $x^2$ term (I haven't actually done the computation).
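-For concreteness, here is a small computer-algebra sketch of such a computation (my own addition; the scaling $y \mapsto y'/20$, $x \mapsto -x'/10$ is just one choice that happens to clear the coefficients for this particular curve, and it reappears in the edit below):
-
-    import sympy as sp
-
-    x, y, xp, yp = sp.symbols("x y xp yp")   # xp, yp play the role of x', y'
-    curve = 2*y**2 + 5*x**3 - 11*(x*y - 11)
-
-    # Scale the variables and clear denominators; the factor -200 makes
-    # the coefficient of xp**3 equal to +1.
-    model = sp.expand(curve.subs({y: yp/20, x: -xp/10}) * (-200))
-    print(model)   # xp**3 - 11*xp*yp - yp**2 - 24200
-
-    # Check that P = (22, -88) lies on the resulting Weierstrass model.
-    print(model.subs({xp: 22, yp: -88}))   # 0
-
-In other words, the model is $y'^2 + 11x'y' = x'^3 - 24200$; compare the edit below.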
-An important thing to note is that while the notion of rational points does not depend on the model over $\mathbb{Q}$ that you are working with (as long as you only change variables over $\mathbb{Q}$), the notion of integral points does depend on the integral model. A theorem of Siegel says that on any Weierstrass model of an elliptic curve, there are only finitely many integral points, and there are bounds on their height in terms of the coefficients of the model. But as far as I know, actually finding the points comes down to brute-force methods: e.g. if you manage to compute generators of the Mordell-Weil group, i.e. of the group of the rational points, and the torsion subgroup, then you just check all possible linear combinations up to the given bound. There are better methods using elliptic logarithms, but they are theoretically more involved.
-Edit: some more info on your concrete curve:
-To get a Weierstrass model you can replace $y$ by $y'/20$ and $x$ by $-x'/10$ in the original equation, resulting in $y'^2+11x'y' = x'^3 - 2^3\cdot 5^2\cdot 11^2$, confirming your hint. The elliptic curve has rank 1, as you say, and no torsion. The point $P=(22,-88)$ generates all the rational points on the curve under the group law. Integral solutions of the original equation correspond to points on the Weierstrass model with integral $x$-coordinate divisible by 10 and integral $y$-coordinate divisible by 20 (see our transformation). So the naive way of ruling out integral solutions to the original equation is to check all multiples of the generator $P=(22,-88)$ under the group law up to the given bound and convince yourself that no point satisfies the divisibility criteria. However, the bound of Baker, referred to in the Wikipedia article, is huge and the computation might not actually be feasible. A possibly more promising approach is to write down the polynomial that computes $Q\mapsto Q\oplus P$ and see whether this operation always introduces higher and higher denominators that never cancel the numerators.<|endoftext|>
-TITLE: School project in knot theory
-QUESTION [6 upvotes]: Can someone suggest an idea for a school project in knot theory for a 13-year-old?
-Thanks
-
-REPLY [5 votes]: You may wish to take a look at the readable and well-illustrated Why Knot?.
-Something to consider about knots is how 'basic' they are -- children will often learn to construct simple knots before learning arithmetic. This demonstrates an ability to manipulate (and distinguish between classes of) closed curves in three dimensions, and amounts to an intuitive theory of topology. We can use this as a staging ground to ask more interesting and difficult questions about knots and their classification.<|endoftext|>
-TITLE: Is there a connection between length of sentence and length of proof?
-QUESTION [14 upvotes]: My basic question is: "Do longer tautologies take longer to prove?" But obviously this is underdetermined. If you are allowed an inference rule "Tautological Implication" then any tautology has a one-line proof.
-But let's say we're working in natural deduction: is it true that longer tautologies (tautologies with more letters and connectives in them) take more proof steps?
-But this is still not quite a good enough question, since we can add superfluous steps to a proof and still prove the result. So the question is:
-In Natural Deduction, is the shortest proof of a tautology related to its length? Or perhaps: does a shorter tautology always admit a shorter proof?
-And how robust is this result across different sets of connectives/inference rules?
-
-REPLY [9 votes]: This question is one of the main topics studied in proof complexity. The measure you are talking about is called the number of steps in the proof, or proof length.
-The size of a tautology is related to the number of steps of its shortest proof, but this does not mean that the number of steps is a monotone function of the size of the tautology. As mentioned by others, the proof of a long tautology can be very short. Coming up with arbitrarily long tautologies having proofs with a constant (i.e. independent of the size of the tautology) number of steps is not difficult.
-On the other hand, the task of coming up with tautologies requiring a large number of steps is not obvious and is open for many propositional proof systems. Note that it follows from the (constructive) proof of the completeness theorem that every tautology of size $s$ has a proof of height $O(s)$ with $O(2^s)$ steps and size $O(s2^s)$.
-We have some very weak lower bounds for the maximum number of steps needed for a tautology of size $s$, and the question of whether this upper bound is tight is wide open for sequent calculus and also for natural deduction, which are essentially equivalent to Hilbert-style proof systems. The bound is tight if we remove the cut rule from sequent calculus, i.e. there are tautologies which require an exponential number of steps.
-You can find out more about the relation between the different measures studied in proof complexity and the relation between different propositional proof systems by checking surveys/books on the topic.
-For first-order logic, note that first-order logic is undecidable, i.e. given a formula there is no algorithm to check if it is a valid formula. Any computable upper bound on the proof size would give an algorithm for deciding first-order logic: check all possible proofs up to that size. (Note that an upper bound on proof length in natural deduction/sequent calculus will give an upper bound on the size because of normalization/cut elimination.) Since there is no such algorithm, there cannot be any computable upper bound.<|endoftext|>
-TITLE: Fixed point iteration for analytic functions on the unit disc
-QUESTION [5 upvotes]: Suppose that $f(z)$ is complex analytic on $|z| \leq 1$ and satisfies
-$|f(z)| < 1$ for $|z|=1$.
-(a) Prove that the equation $f(z)=z$ has exactly one root
-(counting multiplicities) in $|z|<1$.
-(b) Prove that if $|z_0| \leq 1$, then the sequence $z_n$ defined recursively
-by $z_n= f(z_{n-1}) , n=1,2,\ldots$, converges to the fixed point of $f$.
-I was able to prove (a) using Rouche's theorem, but (b) stumps me.
-I know that (b) is true for analytic functions such that $f(0)=0$ or
-$|f'(z)|<1$ on the disc, neither of which are necessarily true in general.
-The farthest I was able to get was $|f(z)-z^*|<\frac{1}{1-|z^*|}|z-z^*|$,
-where $z^*$ is the fixed point of $f$, but
-$\frac{1}{1-|z^*|}>1$, so I don't think this helps me.
-Can someone please point me in the right direction?
-
-REPLY [5 votes]: One can reduce it to the case of $f(0)=0$ (i.e. $0$ is the fixed point) by making a suitable linear fractional transformation of the disk. Namely, if $z^*$ is the fixed point, apply the above argument to $L \circ f \circ L^{-1}$ where $L$ sends $z^* \to 0$ and is an LFT.<|endoftext|>
-TITLE: When one writes $\zeta_n$, which of the $n$ roots of unity is meant here?
-QUESTION [6 upvotes]: When one writes $\zeta_n$, which of the $n$ roots of unity is meant here? Does it matter?
-
-REPLY [10 votes]: The context is important here.
-It is rather standard for $\zeta_n$ to at least denote a primitive $n$th root of unity in the algebraic closure of the field $k$ which is currently being considered (this is possible iff the characteristic of $k$ does not divide $n$; in particular it is true for all fields of characteristic zero). When the characteristic of $k$ does not divide $n$ (i.e., when any primitive $n$th roots of unity exist) there are precisely $\varphi(n)$ primitive $n$th roots of unity, where $\varphi$ is Euler's phi function.
-In a context in which $k$ is a subfield of the complex numbers, it is also rather standard for $\zeta_n$ to denote the specific primitive $n$th root of unity $e^{\frac{2 \pi i}{n}}$, i.e., the one of minimal argument in the complex plane.
-Does it matter? For algebraic purposes, probably not: the primitive $n$th roots of unity in $\mathbb{C}$ are algebraically conjugate over $\mathbb{Q}$: i.e., different roots of a common irreducible polynomial over $\mathbb{Q}$, the cyclotomic polynomial $\Phi_n(t)$. Sometimes in number theory one considers systems of $n$th roots of unity for varying $n$, and in this case it is necessary to make a consistent (in a certain sense) choice of $\zeta_n$'s. Taking $\zeta_n = e^{\frac{2 \pi i}{n}}$ for all positive integers $n$ is such a consistent choice, but there are (many!) others.
-Of course there are always situations when confusing one complex number for another would lead to trouble, so yes, in principle it might matter, especially in analytic or metric arguments.
-
-REPLY [5 votes]: Usually this means $e^{ \frac{2\pi i}{n} }$. Depending on the context, it can just mean any primitive $n^{th}$ root of unity (and sometimes it doesn't matter which one, since they're all taken to each other under the action of the Galois group).<|endoftext|>
-TITLE: Does every l.e.s. "in homology" come from a s.e.s. of complexes?
-QUESTION [18 upvotes]: Given a long exact sequence of the form
-$$
-\dots\to A'_n \to B'_n \to C'_n
-\,\xrightarrow{\omega_n}\,
-A'_{n-1} \to B'_{n-1} \to C'_{n-1}\to \dots\qquad (*)
-$$
-is there a way to recover a short exact sequence of complexes $\mathcal A=\{A_n,\partial_n^A\}$, $\mathcal B=\{B_n,\partial_n^B\}$, $\mathcal C=\{C_n,\partial_n^C\}$ such that the sequence (*) "is" the long exact sequence in homology induced by
-$$
-0\to \mathcal A\to \mathcal B\to \mathcal C \to 0
-$$
-and the morphisms $\omega_n$ are in fact the connecting morphisms of that homology? I mean $A'_n\cong H_n(\mathcal A)$ for all $n\ge 0$ and similarly for $B'_n$, $C'_n$.
-I expect the answer will be "obviously no", but then is there a case in which it is possible?
-
-REPLY [6 votes]: This is not a direct answer, but related. This paper by Jan Stovicek addresses the question "which long exact sequences can arise from the snake lemma", so it might be of help here.
-http://arxiv.org/abs/0906.1286<|endoftext|>
-TITLE: Teaching myself differential topology and differential geometry
-QUESTION [208 upvotes]: I have a hazy notion of some stuff in differential geometry and a better, but still not quite rigorous, understanding of the basics of differential topology.
-I have decided to fix this lacuna once and for all. Unfortunately I cannot attend a course right now. I must teach myself all the stuff by reading books.
-Towards this purpose I want to know what are the most important basic theorems in differential geometry and differential topology.
For a start, for differential topology, I think I must read Stokes' theorem and the de Rham theorem with complete proofs.
-Differential geometry is a bit more difficult. What is a connection? Which notion should I use? I want to know about parallel transport and holonomy. What are the most important and basic theorems here? Are there concise books which can teach me the stuff faster than the voluminous Spivak books?
-Also, finally, I want to read into some algebraic geometry and Hodge/Kähler stuff.
-Suggestions about important theorems and concepts to learn, and book references, will be most helpful.
-
-REPLY [5 votes]: My take on it is like this:
-Basics of Smooth manifolds
-
-Loring Tu, Introduction to Manifolds - elementary introduction,
-Jeffrey Lee, Manifolds and Differential Geometry, chapters 1-11 - cover the basics (tangent bundle, immersions/submersions, Lie group basics, vector bundles, differential forms, Frobenius theorem) at a relatively slow pace and very deep level.
-Will Merry, Differential Geometry - beautifully written notes (with problem sheets!), where lectures 1-27 cover pretty much the same stuff as the above book of Jeffrey Lee
-
-Basic notions of differential geometry
-
-Jeffrey Lee, Manifolds and Differential Geometry, chapters 12 and 13 - center around the notions of metric and connection.
-Will Merry, Differential Geometry - lectures 28-53 also center around metrics and connections, but the notion of parallel transport is worked out much more thoroughly than in Jeffrey Lee's book.
-Sundararaman Ramanan, Global Calculus - a high-brow exposition of basic notions in differential geometry. A unifying topic is that of differential operators (done in a coordinate-free way!) and their symbols.
-
-What I find most valuable about these books is that they try to avoid using indices and local coordinates for developing the theory as much as possible, and only use them for concrete computations with examples.
-However, the above books only lay out the general notions and do not develop any deep theorems about the geometry of a manifold you may wish to study. At this point the tree of differential geometry branches out into various topics like Riemannian geometry, symplectic geometry, complex differential geometry, index theory, etc.
-I will only mention one book here for the breadth of topics discussed
-
-Arthur Besse, Einstein Manifolds - reviews Riemannian geometry and tells about (more or less) the state of the art as of 1980 of the differential geometry of Kähler and Einstein manifolds.
-
-Differential topology
-
-Amiya Mukherjee, Differential Topology - first five chapters overlap a bit with the above titles, but chapters 6-10 discuss differential topology proper - transversality, intersection theory, jets, Morse theory, culminating in the h-cobordism theorem.
-Raoul Bott and Loring Tu, Differential Forms in Algebraic Topology - a famous classic; not a book on differential topology - as the title suggests, this is a treatment of algebraic topology of manifolds using analytic methods<|endoftext|>
-TITLE: What's the nth integral of $\frac1{x}$?
-QUESTION [29 upvotes]: It can be shown by simple induction that $\dfrac{\mathrm d^n}{\mathrm dx^n}\left(\dfrac1{x}\right) = \dfrac{(-1)^n n!}{x^{n+1}}$.
-But what about the $n$th integral of $\dfrac1{x}$? Finding the first few primitives, I can't discern a pattern.
-
-REPLY [4 votes]: We can solve this problem by using fractional derivatives.
-$\forall q < 0$, using the Riemann-Liouville formula and substituting $v = \frac{x-y}{x}$:
-$$
-\frac{d^q}{dx^q}\log x = \\
-\frac{1}{\Gamma(-q)} \int_0^x \frac{\log (y)}{(x-y)^{q+1}}\, dy=\\
-\frac{x^{-q} \log x}{\Gamma(-q)}\int_0^1 \frac{dv}{v^{q+1}} + \frac{x^{-q}}{\Gamma(-q)}\int_0^1 \frac{\log(1-v)}{v^{q+1}}\, dv
-$$
-The first integral equals $-\frac{1}{q}$, while the second one is evaluated by parts:
-$$\int_0^1 \frac{\log(1-v)}{v^{q+1}}\, dv = \\
-\frac{1}{q} \int_0^1 \log(1-v)\,d(1-v^{-q}) = \\
-\frac{(1-v^{-q}) \log(1-v)}{q}\bigg|_0^1 + \frac{1}{q} \int_0^1 \frac{1-v^{-q}}{1-v}\, dv = \\
-\frac{\psi(1-q)+\gamma}{q}
-$$
-Then
-$$\frac{d^q}{dx^q}\log x = \frac{x^{-q}}{\Gamma(1-q)}(\log x - \gamma - \psi(1-q))$$
-Letting $q \mapsto -q$, letting $q$ be an integer, and differentiating once, we obtain:
-$$\boxed{I^q \frac{1}{x} = \frac{x^{q-1}}{q!}\left(q\log x + q\sum_{n=1}^q \frac{1}{n}+1\right)+P_{q-1}(x)}$$
-where $I^q$ represents the $q^{th}$ integral and $P_{q-1}(x)$ represents a polynomial of degree $q-1$ (which absorbs the constants of integration). Note that the sum above is the $q^{th}$ harmonic number.
-Reference: The Fractional Calculus (Oldham & Spanier)<|endoftext|>
-TITLE: martingale and filtration
-QUESTION [10 upvotes]: As I understand, a martingale is a stochastic process (i.e., a sequence of random variables) such that the conditional expected value of an observation at some time $t$, given all the observations up to some earlier time $s$, is equal to the observation at that earlier time $s$.
-A sequence $Y_1, Y_2, Y_3, \ldots$ is said to be a martingale with respect to another sequence $X_1, X_2, X_3, \ldots$ if for all $n$:
-$E(Y_{n+1}|X_1,...,X_n) = Y_n$
-Now I don't understand how it is defined in terms of a filtration. Does a filtration discretize the time space of a stochastic process so that we can analyze the process as a martingale? A simple explanation or an example of what a filtration is and how it relates to martingale theory would be very helpful. I can then read more detailed content.
-
-REPLY [12 votes]: A filtration is a growing sequence of sigma algebras
-$$\mathcal{F}_1\subseteq \mathcal{F}_2\subseteq\ldots \subseteq \mathcal{F}_n.$$
-Now when talking of martingales we need to talk of conditional expectations, and in particular conditional expectations w.r.t. $\sigma$-algebras. So whenever we write
-$$ E[Y_{n+1}|X_1,X_2,\ldots,X_n]$$
-we can alternatively write it as
-$$E[Y_{n+1}| \mathcal{F}_{n}],$$
-where $\mathcal{F}_{n}$ is the sigma algebra that makes the random variables $X_1,\ldots,X_n$ measurable. Finally a filtration $\mathcal{F}_1,\ldots, \mathcal{F}_n$ is simply an increasing sequence of sigma algebras. That is, we are conditioning on growing amounts of information.<|endoftext|>
-TITLE: Show that $\gcd(7^{79}+5,7^{78}+3) = 4$
-QUESTION [6 upvotes]: How can I prove that $\gcd(7^{79}+5,7^{78}+3) = 4$? This was a question on a past exam, so the naive Euclidean algorithm doesn't seem to suffice.
-I'm not really sure where to start with this.
-Note: This is exam prep, not homework.
-
-REPLY [8 votes]: Mod the gcd $\rm\,d\!:\,$ $\rm\ \color{#0a0}{7^{78} \equiv -3}\ $ so $\ 0\ \equiv\ 7(\color{#0a0}{7^{78}})+5\ \equiv\ 7(\color{#0a0}{-3})+5\ \equiv -16,\ $ so $\rm\ d\mid16$
-Mod $8$ both args are $\equiv 4\,$ by $\,7\equiv -1$ so the gcd $\rm \,d = \smash{(4\!+\!8i,4\!+\!8j) = 4\,\underbrace{(1\!+\!2i,1\!+\!2j)}_{\rm\color{#c00}{\large =\ k\ odd}}}$
-So for $\,\rm\color{#c00}{odd\ k}\,$ the gcd $\rm\, d = 4\,\color{#c00}k\mid 16,\ $ so $\rm\ k = 1\,$ so $\rm\,d = 4$.
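-A quick sanity check (my own addition; Python integers are exact, so this genuinely verifies the numerical claim, though of course it is no substitute for the proof):
-
-    from math import gcd
-
-    print(gcd(7**79 + 5, 7**78 + 3))   # 4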
-Remark $\ $ Note how the calculations become more intuitive by employing modular arithmetic. Doing so allows us to reuse our well-honed intuition for arithmetic operations (ring laws), versus the much more cumbersome and much less intuitive divisibility relation. In other words, calculating in equational algebras is simpler than calculating in relational algebras, so whenever a problem can be converted from relational to equational form it usually yields a simplification.<|endoftext|>
-TITLE: How to Characterize Clumps in a Large, Semi-Random Graph
-QUESTION [8 upvotes]: Consider large (100,000+ vertices, say) graphs, which we think of as representing some population with edges representing some form of symmetric relation. They might be the Friend graph of Facebook, mathematicians with the collaboration relation, or a large computer network.
-These networks have the property that they are neither highly structured nor totally random. No information about other edges on the graph can tell me for certain whether a given pair of vertices is connected. That said, if a given pair of vertices have many common neighbors, then it is considerably more likely that they are connected by an edge (so it is not entirely random). I've seen some lectures on graphs like this, and I understand they are a productive area of research (see, for instance, Kleinberg or Lovasz).
-I am curious about the following phenomenon (my description is vague, but part of my question is asking for a good definition). These networks tend to have subsets (which I will call 'clumps') which are significantly more connected to each other than to the average vertex in the graph. Consider a college in Facebook or a research group in mathematics, for example. If the graphs were small enough to draw in a reasonable way, such clumps would be obvious to the naked eye. For very large graphs, this is impractical; so instead, I ask,
-
-1) What is a graph-theoretic way to characterize these clumps?
-
-Clearly, there won't be a yes-no criterion, but I am hoping for some quantity that measures how much a given subset is a clump. This should also factor in the statistical significance of the clump. Very small subsets which are highly connected will happen even in totally random graphs, whereas a large subset which is even moderately well-connected is unlikely in a random graph, and would be interesting to find.
-
-2) Given a graph (and a definition of a clump), how does one find the clumps?
-
-Is there a definition and an algorithm so robust that it can take networks like Facebook or the collaboration graph, and return the clumps that we know are there, like colleges or research disciplines?
-
-Oh, and I am not looking for the Szemeredi partition of a graph, which has some similarities to the kinds of partitions I am looking for, but is explicitly a partition of the graph into similar-sized chunks. The clumps in a graph don't have to be the same size, disjoint, or contain every element.
-
-REPLY [4 votes]: Here is one example I stumbled upon.
-Random walks have been used to identify clustering in networks in Rosvall and Bergstrom, Maps of random walks on complex networks reveal community structure. In particular, this technique was used by Eigenfactor (who publish journal rankings) to deduce clusterings in research communities.<|endoftext|>
-TITLE: Why is it harder to prove which integers are sums of three squares rather than sums of two squares or four squares?
-QUESTION [37 upvotes]: Background: Let $n$ be an integer and let $p$ be a prime.
If $p^{e} || n$, we write $v_{p}(n) = e$. A natural number $n$ is a sum of two integer squares if and only if for each prime $p \equiv 3 \pmod 4$, $v_{p}(n)$ is even. Every natural number is a sum of four squares. A natural number $n$ is a sum of three squares if and only if it is not of the form $4^{k}u$ where $u \equiv 7 \pmod 8$.
-I would like to know why it is harder to prove the above result for sums of three squares as opposed to sums of two squares or four squares.
-I've heard somewhere that one way to see this involves modular forms... but I don't remember any details. I would also like to know if there is a formula for the number of ways of representing a natural number $n$ as a sum of three squares (or more generally, $m$ squares) that is similar in spirit to the formulas for the number of ways of representing a natural number as the sum of two squares and four squares.
-
-REPLY [32 votes]: The modular forms explanation is basically due to the fact
-that $3$ is odd and so the generating function for representations
-of sums of three squares is a modular form of half-integer weight.
-In general if $r_k(n)$ is the number of representations of $n$
-as a sum of $k$ squares then
-$$\sum_{n=0}^\infty r_k(n)q^n=\theta(z)^k$$
-where $q=\exp(\pi i z)$ and
-$$\theta(z)=1+2\sum_{n=1}^\infty q^{n^2}.$$
-Then $f_k(z)=\theta(z)^k$ is a modular form of weight $k/2$ for the group
-$\Gamma_0(4)$. This means that
-$$f_k((az+b)/(cz+d))=(cz+d)^{k/2}f_k(z)$$
-whenever the matrix $\begin{pmatrix}a&b\\\\c&d\end{pmatrix}$
-lies in $\Gamma_0(4)$, that is $a$, $b$, $c$ and $d$ are integers, $4\mid c$
-and $ad-bc=1$.
-This definition is easy to understand when $k$ is even, but for odd
-$k$ one needs to take the correct branch of $(cz+d)^{k/2}$, and
-this is awkward. The space of modular forms of weight $k/2$ is
-finite-dimensional for all $k$, and is one-dimensional for small enough $k$.
-For these small $k$ the space is spanned by an "Eisenstein series".
-Computing the Eisenstein series isn't too hard for even $k$, but is
-much nastier for odd $k$ where again square roots need to be
-dealt with. See Koblitz's book on modular forms and elliptic functions
-for the calculation for $k\ge5$ odd. The calculation for $k=3$
-is even nastier as the Eisenstein series does not converge absolutely.
-In fact the cases where $k$ is divisible by $4$ are even easier, as
-even weight modular forms behave more nicely.
-For large $k$, Eisenstein series are no longer enough; one also needs
-"cusp forms". While fascinating, cusp forms have coefficients
-which aren't given by nice formulae, unlike Eisenstein series.
-Of course there is a formula for $r_3(n)$, due to Gauss in his
-Disquisitiones Arithmeticae. It involves class numbers of quadratic
-fields (or, to Gauss, numbers of classes of integral quadratic forms).
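-Since the statement of the three-square criterion itself is completely elementary, here is a short sketch of it in code (my own addition), together with a brute-force cross-check:
-
-    def is_sum_of_three_squares(n: int) -> bool:
-        # Legendre's three-square theorem: n is a sum of three squares
-        # iff n is not of the form 4^k * (8m + 7).
-        while n > 0 and n % 4 == 0:
-            n //= 4
-        return n % 8 != 7
-
-    def brute_force(n: int) -> bool:
-        r = int(n ** 0.5)
-        return any(a*a + b*b + c*c == n
-                   for a in range(r + 1)
-                   for b in range(a, r + 1)
-                   for c in range(b, r + 1))
-
-    assert all(is_sum_of_three_squares(n) == brute_force(n) for n in range(1000))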
<|endoftext|>
-TITLE: Weakening paracompactness condition
-QUESTION [5 upvotes]: Let $X$ be a topological space such that every open cover has a finite refinement. Then is $X$ compact, or is there a counterexample?
-Let $X$ be a topological space such that every open cover has a locally finite subcover. Then is $X$ compact, or is there a counterexample?
-
-REPLY [3 votes]: For 1., clearly yes: if a cover has a finite refinement, then the sets it is a refinement of also cover $X$, and form a finite subcover.
-For 2., see Nuno's comment on the other answer.<|endoftext|>
-TITLE: Example of a smooth function with zero derivative that is not constant
-QUESTION [6 upvotes]: One of the false beliefs in this question on Math Overflow is "If $f$ is a smooth function with $df=0$, then $f$ is constant". What is a counterexample to this statement? Can it be made correct by adding some restriction, e.g. that $f$ is a function from the reals to the reals?
-
-REPLY [12 votes]: This is described in the comments; $f$ need only be locally constant, so as a counterexample take $f : (0, 1) \cup (2, 3) \to \mathbb{R}$ which is equal to $0$ on the first component and $1$ on the second. It is certainly true for functions from $\mathbb{R}$ to $\mathbb{R}$, for example by the mean value theorem.
-This seems trite, but it's still an important point to keep in mind. If you ever feel like you're missing something, then of course you should attempt to carefully prove the result that you thought was true and see where it fails (if it does).<|endoftext|>
-TITLE: Subbundles and subsheaves
-QUESTION [6 upvotes]: Let $E \rightarrow X$ be a vector bundle on a manifold $X$. Let $\cal E$ be the sheaf of sections of $E$. Let $\cal F$ be a subsheaf of $\cal E$, and let $F$ be the etale space of $\cal F$. What is an example showing that the map $F \rightarrow E$ might not be an injection on all fibers?
-
-REPLY [3 votes]: Take $E$ to be the trivial bundle $X \times \mathbb{R}$. Let $f: X \to \mathbb{R}$ be a function vanishing only at $x_0$. Then multiplication by the global function $f$ defines a morphism $E \to E$ which is injective as a morphism of sheaves (because it is injective on $E(U)$ for any open set $U$), but the map on fibers at $x_0$ is not injective.
-(To avoid uninteresting cases, assume $x_0$ is not an isolated point of $X$.)
-The problem is that for an injection of sheaves $E \to F$, the map on stalks $E_x \to F_x$ is always injective. The map on fibers $E(x) \to F(x)$ obtained by tensoring the map on stalks with the residue field is generally not, because tensoring is not an exact functor. If the map on stalks is a split injection, then the map on fibers will be an injection still.<|endoftext|>
-TITLE: classification up to similarity of complex n-by-n matrices
-QUESTION [5 upvotes]: Classify up to similarity all $3 \times 3$ complex matrices $A$ such that $A^3 = I$.
-
-REPLY [3 votes]: In fact, one does not need to know the characteristic polynomial in this case. Let the minimal polynomial be $p$; then $p\mid (x^3-1)$. It is important to see that $x^3-1$ has three distinct roots in $\mathbb{C}$. Hence $p$ cannot have repeated roots in $\mathbb{C}$. Thus $A$ must be diagonalizable over $\mathbb{C}$, with each diagonal entry a root of $x^3-1$. Hence $A$ is similar to
-\begin{equation}
-\begin{pmatrix}
-a & 0 & 0 \\
-0 & b & 0 \\
-0 & 0 & c
-\end{pmatrix},
-\qquad
-a,b,c \in \{1,e^{i2\pi/3},e^{-i2\pi/3}\}
-\end{equation}<|endoftext|>
-TITLE: An orthonormal set cannot be a basis in an infinite-dimensional vector space?
-QUESTION [28 upvotes]: I'm reading the Algebra book by Knapp and he mentions in passing that an orthonormal set in an infinite-dimensional vector space is "never large enough" to be a vector-space basis (i.e. that every vector can be written as a finite sum of vectors from the basis; such bases do exist for infinite-dimensional vector spaces by virtue of the axiom of choice, but usually one works with orthonormal sets for which infinite sums yield all the elements of the vector space).
-So my question is: how can we prove this claim? (That an orthonormal set in an infinite-dimensional vector space is not a vector space basis.)
-
-Edit (by Jonas Meyer): Knapp says in a footnote on page 92 of Basic algebra:
-
-In the infinite-dimensional theory the term "orthonormal basis" is used for an orthonormal set that spans $V$ when limits of finite sums are allowed, in addition to finite sums themselves; when $V$ is infinite-dimensional, an orthonormal basis is never large enough to be a vector-space basis.
-
-Thus, without explicitly using the word, Knapp is referring only to complete infinite-dimensional inner product spaces.
-
-REPLY [10 votes]: As already mentioned in the other answers and comments, this is true if the space is assumed to be complete (see Andrey Rekalo's answer), but not necessarily otherwise (see Carl Mummert's answer). In the complete case, more can be said. If the Hilbert space dimension (cardinality of a maximal orthonormal set) is infinite, then the linear dimension (cardinality of a maximal linearly independent set) is at least $\mathfrak{c}=2^{\aleph_0}=|\mathbb{R}|$. Of course, this doesn't directly tell you anything if the Hilbert space dimension is $\mathfrak{c}$ or greater, but in the case of separable Hilbert spaces like $\ell^2$ (as defined here), it tells you that not only does an orthonormal set not span, but no subset of cardinality less than $\mathfrak{c}$ can span. One way to see this is to consider the linearly independent set $\{(1,t,t^2,t^3,\ldots):-1\lt t\lt 1\}\subset\ell^2$. Since $\ell^2$ imbeds into every infinite-dimensional Hilbert space as the closed linear span of any countably infinite orthonormal set, this also demonstrates the general fact. This and other proofs can be found in the solutions to Problem 7 of Halmos's Hilbert Space Problem Book, which I highly recommend. This statement also extends to Banach spaces.
-Strictly speaking, what I have said so far hasn't answered your question in the case of Hilbert space dimension greater than or equal to $\mathfrak{c}$. However, you can get an overkill solution to your question by taking a countably infinite subset $A$ of an orthonormal subset of an arbitrary Hilbert space $H$. If $M_0$ is the linear span of $A$ and $M=\overline{M_0}$ is the closed linear span of $A$, then $H=M\oplus M^\perp$, while the linear span of your orthonormal set is contained in $M_0\oplus M^\perp$, which is properly contained in $H$ because $M$ has higher dimension than $M_0$. ($M^\perp$ denotes the set of vectors orthogonal to every element of $M$, and $\oplus$ is being used here to denote internal direct sums.) All that was really needed here is that the linear dimension of $M$ is not countable, and this also follows from Baire's theorem.
-Andrey Rekalo gives a better, nonoverkill answer for the complete case.<|endoftext|>
-TITLE: Proving $\prod_{j=1}^n \left(4-\frac2{j}\right)$ is an integer
-QUESTION [21 upvotes]: How do I show that the product $$\biggl(4 - \frac21\biggr) \cdot \biggl(4 - \frac22\biggr) \cdot \biggl(4 - \frac23\biggr) \cdots \biggl(4 - \frac2{n}\biggr)$$ is an integer for any $n \in \mathbb{N}$?
-
-
-Source: www.math.muni.cz/~bulik/vyuka/pen-20070711.pdf
-
-REPLY [8 votes]: Using the identity
-$$\displaystyle\binom{2n}{n}=\left(4-\frac{2}{n}\right)\cdot\binom{2n-2}{n-1}$$
-one gets for your product
-$$\displaystyle\frac{\binom{2}{1}}{\binom{0}{0}}\cdot\frac{\binom{4}{2}}{\binom{2}{1}}\cdots\frac{\binom{2n-2}{n-1}}{\binom{2n-4}{n-2}}\cdot\frac{\binom{2n}{n}}{\binom{2n-2}{n-1}}=\binom{2n}{n}$$
-and this is an integer as required.<|endoftext|>
-TITLE: Is a Gödel sentence logically valid?
-QUESTION [19 upvotes]: This might be an elementary question, but I am just beginning to learn logic theory.
-From the Wikipedia article on Gödel's incompleteness theorems
-
-Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory (Kleene 1967, p. 250).
- The true but unprovable statement referred to by the theorem is often referred to as “the Gödel sentence” for the theory.
-
-My question: Is a Gödel statement logically valid?
-Edit: As Carl answers below, if the Gödel statement is valid, then by the completeness theorem, it is provable, which leads to a contradiction. So there exists a model in which the statement is false. Can we construct such a model?
-
-REPLY [21 votes]: No, a Gödel sentence is not logically valid. Because the Gödel sentence for a theory $T$ is unprovable from $T$, it follows from the completeness theorem for first-order logic that there is a model of $T$ in which the Gödel sentence is false.
-When the text you quoted says "true" you should read that as "true in the standard model of arithmetic". Logical validity would correspond to truth in all models. An example of a logically valid sentence is $(\forall x) (x=x)$.<|endoftext|>
-TITLE: Does $\lim_{n \to \infty}\frac{\mathrm d^n}{\mathrm dx^n} f(x)$ converge?
-QUESTION [5 upvotes]: Is there a general way to show whether the limit $$\lim_{n \to \infty}\frac{\mathrm d^n}{\mathrm dx^n} f(x)$$ converges to some expression?
-What about repeatedly integrating an expression $n$ times as $n \to \infty$?
-
-REPLY [2 votes]: In terms of pointwise convergence, you cannot ensure that the limit exists at a given point, even if all the derivatives of $f$ exist (everywhere).
-For example, by a theorem of Borel, any sequence $a_0,a_1,\dots$ is $f(0),f'(0),f''(0),\dots$ for some smooth function $f$; see this question.<|endoftext|>
-TITLE: Mathematical Telescoping
-QUESTION [12 upvotes]: Bill Dubuque has answered several questions by indicating that some form of "telescoping" is taking place. See this post and the links provided by Bill for more information.
-I had never heard of "telescoping" until I read a few answers on here by Bill which refer to this notion. It seems fairly straightforward: basically you expand some expression using basic arithmetic, there is a minor miracle, lots of terms cancel out in a systematic way, and we are then able to solve the problem.
-I suppose "telescoping" in this sense was something I always took for granted, and considered a low-level "trick" to keep in my back pocket. However, considering the importance Bill seems to attach to this notion of telescoping, and considering that I have a great deal of respect for Bill based on the posts by him I have read, I was wondering if I'm not missing something about telescoping.
There is no Wikipedia article on the subject, and a Google search directs me to Bill's answers on SE!
-Therefore I would like to ask:
-1) What unintuitive results can I achieve with telescoping?
-2) Is there a good reference which only discusses telescoping and applications, or is this concept too narrow for anyone to write a paper/book like this?
-3) More trivially, am I missing something about what telescoping actually means? If not, then why is this called telescoping? I don't see what this has to do with a telescope.
-
-REPLY [11 votes]: 1, 2) Telescoping is one of the ideas behind modern algorithms to automatically prove hypergeometric identities. These algorithms allow you, for example, to automatically prove binomial coefficient identities. The standard reference here is Petkovsek, Wilf, and Zeilberger's A=B.
-3) The name comes from the process of collapsing a telescope, which is analogous to the collapsing of a telescoping sum.
-Philosophically telescoping is the same as "discrete integration": telescoping a sum $\sum f(n)$ is the same as finding $g(n)$ such that $f(n) = g(n+1) - g(n)$. In that sense it is part of the theory of finite differences, although people probably don't call it "telescoping" in this context. The context in which I hear the term "telescoping" being used is high school math competitions. It's one of those basic ideas that everyone has in the back of their head, I suppose. It's elementary and effective when it applies, but usually there are more sophisticated methods available.
-Edit: Some specific examples. The ur-example of a telescoping sum is probably
-$$\sum_{k=1}^n \frac{1}{k(k+1)} = \sum_{k=1}^{n} \left( \frac{1}{k} - \frac{1}{k+1} \right) = 1 - \frac{1}{n+1}$$
-and many people have seen this application, but probably far fewer have seen its generalization:
-$$\sum_{k=1}^n \frac{1}{k(k+1)\cdots(k+r)} = \frac{1}{r} \sum_{k=1}^n \left( \frac{1}{k(k+1)\cdots(k+r-1)} - \frac{1}{(k+1)\cdots(k+r)} \right) = \frac{1}{r} \left( \frac{1}{r!} - \frac{n!}{(n+r)!} \right).$$
-The other classic example I remember from my competition days is
-$$\sum_{k=1}^n \frac{k}{k^4 - k^2 + 1} = \frac{1}{2} \sum_{k=1}^n \left( \frac{1}{k^2 - k + 1} - \frac{1}{(k+1)^2 - (k+1) + 1} \right) = \frac{1}{2} \left( 1 - \frac{1}{n^2 + n + 1} \right)$$
-although I have to admit I always found it a little contrived. Finally, telescoping was put to good use to solve this math.SE question I posed.
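-As a throwaway check of the first two identities (my own addition), exact rational arithmetic makes the telescoping easy to confirm numerically:
-
-    from fractions import Fraction
-    from math import factorial
-
-    n, r = 40, 3
-
-    s1 = sum(Fraction(1, k*(k+1)) for k in range(1, n+1))
-    assert s1 == 1 - Fraction(1, n+1)
-
-    def run_prod(k, r):
-        # the product k (k+1) ... (k+r)
-        out = 1
-        for j in range(r + 1):
-            out *= k + j
-        return out
-
-    s2 = sum(Fraction(1, run_prod(k, r)) for k in range(1, n+1))
-    assert s2 == Fraction(1, r) * (Fraction(1, factorial(r))
-                                   - Fraction(factorial(n), factorial(n+r)))
-    print("both identities check out")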
-
-REPLY [5 votes]: The book A=B by Petkovsek, Wilf and Zeilberger is an extended exposition of the Wilf-Zeilberger method for summing series, often called "creative telescoping".<|endoftext|>
-TITLE: What is the pullback in the category of commutative algebras?
-QUESTION [6 upvotes]: The pullback is a subset of the Cartesian product in the category of commutative rings with unit.
-What is the pullback in the category of commutative $k$-algebras? Is it the same set as in rings?
-
-REPLY [4 votes]: Yes.
-More generally, fix a category $\mathcal{C}$ that contains finite limits. Given an object $A \in \mathcal{C}$, we can construct the category $\mathcal{C}_A$ of objects "under $A$." That is, an object of this category $\mathcal{C}_A$ is a morphism $A \to B$ and a morphism is a commutative triangle.
-The claim is that $\mathcal{C}_A$ admits finite limits, and the limits are the same as in $\mathcal{C}$. This is basically formal. Given a functor $F:I \to \mathcal{C}_A$ from a finite category $I$, we get a functor $G: I \to \mathcal{C}$ which has a limit by assumption.
-Moreover, for each $i \in I$, we have a morphism $A \to Gi$. By the universal property, this becomes a morphism $A \to \lim G$.
-I claim that the object $A \to \lim G$ in $\mathcal{C}_A$ is a limit of $F$. This can be checked directly from the definitions.<|endoftext|>
-TITLE: Geometry or Topology
-QUESTION [6 upvotes]: So, I am a graduate student who is certain that he does not want to do analysis (I think...). What are the most exciting fields in mathematics right now? It seems to me that very generally, they are algebraic geometry and algebraic topology, with the latter being specifically concerned with homotopy theory (something I find very interesting). Perhaps this is not the correct place to post such a question and I'm welcome to redirection on that front.
-Ultimately, I would like to do something highly abstract and categorical. I love the ideas of sheaves, homological algebra, and various categorical concepts. Does anyone have any good knowledge of where to begin? Also, do algebraic geometry and homotopy theory have a great deal of common ground?
-Thanks, and sorry if this question belongs somewhere else.
-
-REPLY [9 votes]: Wow, this is great! I would love to expound a bit on some of the things you have mentioned. While I agree with Ryan that what is exciting and hot is entirely subjective, I too find homotopy theory and algebraic geometry enthralling. I really only know things about homotopy theory though, so I can only talk on that.
-There are loads of places to go abstract crazy under the general heading of homotopy theory. A lot of the model category theory is being put to use in many many places, like Algebraic Geometry! (This is the work of Dugger-Isaksen that Ryan mentioned above). There is a lot of beautiful abstract framework that people work with that falls under the umbrella of homotopy theory. Personally I am a bit more on the computational side, or rather old-fashioned "how do we compute $\pi_n^S(\mathbb{S})$?" So I am interested in different computational aspects of the stable homotopy category of spectra. It really is a big field.
-I just realized I did not address your question about where to begin. There are some amazing deep results that are really cool that you can get to in a finite amount of time. For homotopy theory I would work on trying to get to some classical results of Adams, like vector fields on spheres or Hopf invariant one. Both of these are addressed in Mosher and Tangora (now a Dover book). It is a good book, but you should skip certain bits; for example, you don't need their construction of the Steenrod operations. There is also the theory of formal group laws and how those relate to stable homotopy theory; that stuff is awesome. Being at JHU, I would begin by asking Jack Morava or Andrew Salch. They are both super nice guys that know a whole lot, but they are really smart and might be hard to keep up with. So maybe ask them what got them started in being interested in such things, or what they think is something that is really cool to work towards. I also think that Boardman and Wilson would be excellent people to ask, but I have had no interaction with them. They are also deep vats of information. I learned so much going through the first third of Boardman's CCSS paper, it was great!
-As far as the interaction, it is large! As Mathew pointed out there is a whole "new" field called motivic homotopy theory that asks "what can homotopy theory tell us about schemes?"
There are other interactions though: a lot of number theory and stackiness is interesting via the chromatic picture. A lot of people study the moduli stack of formal group (laws) in order to get at the stable homotopy groups of spheres. That moduli stack necessarily has a lot of AG in it.
-I think the best thing to do would be to talk to people around about things that you are learning/want to learn. In fact, feel free to drop me a line at first initial last name at wayne.edu. Seriously, drop me a line if you want to get into a bit more detail; it is unclear what will be beneficial for me to say without a bit more background.
-PS: there are great people at JHU that know loads about homotopy theory, as well as its interactions with algebraic geometry. Morava pioneered the relationship with class field theory, and now it seems like Salch is taking it all the way to Langlands Land. It would be hard to be at a better place to study homotopy theory!<|endoftext|>
-TITLE: Graph-theoretic interpretation of determinant?
-QUESTION [5 upvotes]: The permanent can be interpreted as the number of perfect matchings in bipartite graphs.
-
-Is there a similar graph-theoretic interpretation of the determinant?
-
-REPLY [13 votes]: I'm aware of a few. There is the Lindström-Gessel-Viennot lemma, and there is also the matrix-tree theorem. If $A$ is the adjacency matrix of a finite graph $G$ then $\frac{1}{\det(I - At)}$ describes a kind of "zeta function" of $G$. I describe some of how this works in this blog post. You may also be interested in Kuperberg's An exploration of the permanent-determinant method.<|endoftext|>
-TITLE: Elliptic Curves and Points at Infinity
-QUESTION [27 upvotes]: My undergraduate number theory class decided to dip into a bit of algebraic geometry to finish up the semester. I'm having trouble understanding this bit of information that the instructor presented in his notes.
-Here it is in paraphrase (assume we are over an abstract field $k$):
-We take a polynomial over $k$, $f = Y^2 - X^3 - aX - b$, and homogenize the polynomial to $F = Y^2Z - X^3 - aXZ^2 - bZ^3$. Note that the points at infinity of $V(F)$ consist of triples $[\alpha : \beta : 0]$ s.t. $-\alpha^3 = 0$, hence the only point at infinity is $[0 : 1 : 0]$.
-The part I'm confused about is in italics. He introduces the term "points at infinity" without defining it. After some Google time, I understand what a point at infinity means in the context of a projective space/projective line, but am having trouble understanding how the professor came to his conclusion about the point at infinity in this particular example.
-Here is my question. In general, are all points in the locus of vanishing points for a homogeneous polynomial considered points at infinity? If not, is there a general procedure for calculating these points if we are given an arbitrary polynomial?
-More abstractly, how do I understand that a finite point in the projective space is a "point at infinity" for this polynomial?
-
-REPLY [48 votes]: Here's another way to think about the "line at infinity" and the "points at infinity"...
-Think of the usual $XY$-plane as sitting inside of $3$-space, but instead of it sitting in its usual place, $\{(x,y,0) : x,y\in\mathbb{R}\}$, shift it up by $1$ so that it sits as the $z=1$ plane.
-Now, you are sitting at the origin with a very powerful laser pointer. Whenever you want a point on the $XY$-plane, you shine your laser pointer at that point.
So, if you want the point $(x,y)$, you are actually pointing your laser pointer at the point $(x,y,1)$; since you are sitting at the origin, the laser beam describes a (half)-line, joining $(0,0,0)$ to $(x,y,1)$.
-Now, for example, look at the point $(x,0,1)$, and imagine $x$ getting larger. The angle your laser pointer makes with the $z=0$ plane gets smaller and smaller, until "as $x$ goes to infinity", your laser pointer is just pointing along the $x$-axis (at the point $(1,0,0)$), and the same thing happens if you let $x$ go to $-\infty$. More generally, if you start pointing to points that are further and further away from the "origin" in your plane (away from $(0,0,1)$), the laser beam's angle with $Z=0$ gets smaller and smaller, until, "at the limit" as $||(x,y)||\to\infty$, you end up with the laser beam pointing along the $z=0$ plane in some direction. We can represent the direction with the slope of the line, so that we are pointing at $(1,m,0)$ for some $m$ (or perhaps to $(-1,-m,0)$, but that's the same direction), or perhaps to the point $(0,1,0)$. So we "add" these "points at infinity" (so called because we get them by letting the point we are shining the laser beam on "go to infinity"), one for each direction away from the "origin": $(1,m,0)$ for arbitrary $m$ for nonvertical lines, and $(0,1,0)$ corresponding to the direction of $x=0$, $y\to\pm\infty$.
-So: the "usual", affine points, are the ones in the $z=1$ plane, and they correspond to laser beams coming from the origin; they are each of the form $(x,y,1)$ for some $x,y$ in $\mathbb{R}$. In addition, for each "direction" we want to include that limiting laser beam which does not intersect the plane $z=1$; those correspond to points $(1,m,0)$, or the point $(0,1,0)$ when you do it with the line $x=0$. So we get one point for every real $m$, $(1,m,0)$, and another for $(0,1,0)$. You are adding one point for every direction of lines through the origin; these points are the "points at infinity", and together they make the "line at infinity".
-Now, take your elliptic curve/polynomial $f=Y^2 - X^3 - aX-b$, and draw the points that correspond to it on the $z=1$ plane; that's the "affine piece" of the curve. But do you also get any of those "points at infinity"?
-Well, even though we are thinking of the points as being on the $XY$-plane, they "really" are in the $Z=1$ plane; so our equation actually has a "hidden" $Z$ that we lost sight of when we evaluated at $Z=1$. We use the homogenization $F = Y^2Z - X^3 - aXZ^2 - bZ^3$ to find it. Why that? Well, for any fixed point $(x,y,1)$ in our "$XY$-plane", the laser pointer points to all points of the form $(\alpha x,\alpha y,\alpha)$. If we were to shift up our copy of the plane from $Z=1$ to $Z=\alpha$, we'll want to scale everything so that it still corresponds to what I'm tracing from the origin; this requires that every monomial have the same total degree, which is why we put in factors of $Z$ to complete them to degree $3$, the smallest we can (making it bigger would give you the point $(0,0,0)$ as a solution, and we do need to stay away from that because we cannot point the laser pointer in our eye).
-Once we do that, we find the "directions" that also correspond to our curve by setting $Z=0$ and solving, to find those points $(1,m,0)$ and $(0,1,0)$ that may also lie in our curve. But the only one that works is $(0,1,0)$, which is why the elliptic curve $F$ only has one "point at infinity".
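-(A quick symbolic footnote, my own addition: the "set $Z=0$ and solve" step can be done mechanically, e.g.
-
-    import sympy as sp
-
-    X, Y, Z, a, b = sp.symbols("X Y Z a b")
-    F = Y**2*Z - X**3 - a*X*Z**2 - b*Z**3
-
-    print(F.subs(Z, 0))               # -X**3
-    print(sp.solve(F.subs(Z, 0), X))  # [0]
-
-so on the line at infinity the equation forces $X=0$, and after scaling $Y$ to $1$ the only point left is $(0:1:0)$.)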
-
-REPLY [14 votes]: The term "point at infinity" is not actually a well-defined term from the point of view of your projective model. You should think of it this way:
-Suppose you are given a homogeneous polynomial and you are interested in its zero set as a subset of projective space.
-The projective space is made up of several so-called affine pieces, and accordingly your zero set is made up of several affine pieces. Here is how it works: let's take the concrete polynomial $F=Y^2Z - X^3-aXZ^2 - bZ^3$. Since you can take any point $(x:y:z)$ satisfying this polynomial and scale the coordinates by any non-zero scalar, $\alpha$ say, and the result still represents the same point, you may scale the point by $\alpha=z^{-1}$ and write it as $(x/z:y/z:1)$. But now, you have to be careful: this only works for points at which $z\neq 0$. For those points, you are setting $z=1$, so the resulting affine model is given by your polynomial $f$ (just set $Z=1$ in $F$).
-If on the other hand $z$ happened to be 0, then you have attempted to divide by 0, so in this sense, you obtained a "point at infinity". Now, what is this point at infinity? Set $Z=0$ in $F$ and see what points satisfy the resulting polynomial. For the polynomial to be 0, you also need $-X^3=0$, so $X=0$. Now, since we are in projective space, $Y$ mustn't be 0 (recall that $(0:0:0)$ is not a point in projective space), so you can scale the coordinates so that $Y=1$. That's your "point at infinity" for this particular affine piece: $(X:Y:Z)=(0:1:0)$.
-But the procedure was not canonical. Instead, you might have chosen to look at the affine piece $Y\neq 0$, say. Then, you would have written all points in this affine model as $(x/y:1:z/y)$ and the dehomogenised polynomial would have been different. Also, the "points at infinity" would have been different ones, namely all those projective points for which $Y=0$.
-I hope this makes sense. Otherwise, feel free to ask for clarifications.<|endoftext|>
-TITLE: Quotient of a regular language
-QUESTION [6 upvotes]: According to Wikipedia, the right quotient of a regular language with ANY other language is regular. I have not been able to find a proof of this fact. All the sources talk about quotient with another regular language. Can somebody point me to a proof?
-P.S. Definition from Wikipedia: The right quotient (or simply quotient) of a formal language $L_1$ with a formal language $L_2$ is the language consisting of strings $w$ such that $wx$ is in $L_1$ for some string $x$ in $L_2$. In symbols, we write:
-$$L_1 / L_2 = \{w \ | \ \exists x ((x \in L_2) \land (wx \in L_1)) \}$$
-In other words, each string in $L_1 / L_2$ is the prefix of a string $wx$ in $L_1$, with the remainder of the word being a string in $L_2$.
-
-REPLY [7 votes]: You can find a proof in this book: Introduction to Automata Theory, Languages and Computation by Hopcroft and Ullman, 1979 edition.
-The short and non-constructive proof appears on page 63.
-Here is a snapshot (which I got from Google Books by searching for "quotient" in that book):
-
-[snapshot of the proof not reproduced here]
-
-Only the last line of the proof is incomplete in the above snapshot, which reads:
-
-Thus $M'$ accepts $R/L$.
-
-Here $R$ is a regular language and $L$ is an arbitrary language.<|endoftext|>
-TITLE: What is the value of $1^x$?
-QUESTION [5 upvotes]: I am trying to understand why $1^{x}=1$ for any $x\in\mathbb{R}$.
-Is it OK to write $1^{x}$? After all, the base of an exponential function is not supposed to equal $1$, so can $1^{x}$ be an exponential function at all?
-Is $1^{x}=1$ just because it is defined to be so?
-If possible, please refer me to a book or article that discusses this topic.
-
-REPLY [6 votes]: Once you move past elementary topics, definitions become much more fundamental in mathematics. So, in a formal sense, you're right that the reason $1^x = 1$ for all $x \in \mathbb{R}$ is that the definition of $1^x$ makes this so. The emphasis on definitions comes from the use of mathematical proofs; the only way to make a rigorous proof about exponentiation is to start with a rigorous definition. So, in a sense, all formal mathematical propositions are true because the definitions have been chosen to make them true.
-However, we have a clear motivation behind exponentiation, and if the definition of real number exponentiation did not make $1^x = 1$ for all real $x$, then the definition would have been changed. We don't make up mathematical definitions at random - they are motivated by our informal ideas about the objects we are studying.
-The role of this motivation can be seen more clearly by considering complex exponentiation. Unlike natural number exponentiation, complex number exponentiation is not based on repeated multiplication; it's based on logarithms and the function $\exp(z)$. So the definition of complex exponentiation does not imply that $1^i = 1$, and mathematicians are OK with that. There are several possible values for $1^i$, only one of which is $1$. I explained this in this answer.
-One thing that is often confusing at first is that there are really several different exponentiation functions, with different domains, all of which are denoted with the notation $x^y$.
-Finally, you asked whether $f(x) = 1^x$, as a function from $\mathbb{R}$ to itself, is an exponential function. Many calculus books seem to include a special clause in their definitions that makes this not be an exponential function. However, things would work just as well if you did call it an exponential function. It's just a question of terminology. The only downside to calling $1^x$ an exponential function is that, when stating some results, you might have to add an exception to get rid of $1^x$. Instead of saying "Every exponential function" you would say "Every exponential function except $1^x$". Of course, students in a class need to adopt the conventions of the class so that everyone can understand them. But if you were writing a math book alone on a desert island you could adopt whatever terminology you wanted.<|endoftext|>
-TITLE: Transfinite series: Uncountable sums
-QUESTION [10 upvotes]: If you sum an expression over an uncountable set
-$\sum_{x\in \mathbb{R}}f(x)$, then do we need $f(x)=0$ on all but a countable subset in order for the sum to have a finite value?
-If not, can you give an example of a function everywhere nonzero that has a transfinite sum with a finite value?
-Possible keywords: Transseries, Écalle–Borel Summation, analyzable function
-Transseries for beginners, GA Edgar, 2009
-
-REPLY [9 votes]: HINT
-Given $\varepsilon>0$, can you measure the set $\{x:f(x)>\varepsilon\}$?
-
-Edit: Giving some background to the somewhat short hint.
-Suppose $E\subset\mathbb{R}$ is uncountable, and let $f$ be a non-negative function defined on $E$. The usual definition of the expression $\sum_{x\in E}f(x)$ is given by
-$$
-\sum_{x\in E}f(x) =\sup_{F\subset E,\,|F|<\infty}\sum_{x\in F}f(x) \tag{1}
-$$
-i.e. we take the supremum over finite sets.
-
-Now, let us choose $\varepsilon>0$ and consider the set
-$$E_\varepsilon = \{x\in E :f(x)>\varepsilon\}.$$
-This set must be finite in order for the sum in (1) to be finite, because otherwise we may for each positive integer $n$ choose subsets $F_n\subset E_\varepsilon$ such that $|F_n|=n$ and then $$\sum_{x\in E}f(x)\ge \sum_{x\in F_n}f(x)>\sum_{F_n}\varepsilon =n\varepsilon.$$
-Since $$\bigcup_{\varepsilon>0} E_\varepsilon =\bigcup_{n=1}^\infty E_{1/n} =\{x\in E:f(x)>0\}$$
-the conclusion follows.
-Note: I added the assumption $f\ge0$. In fact I do not think there is a reasonable definition for conditional convergence of this kind.<|endoftext|>
-TITLE: Finding index of a Fibonacci number: any mathematical solution possible?
-QUESTION [5 upvotes]: The problem:
-
- Given a Fibonacci number, find its index.
-
-
-I am aware of the standard solution 'generate-hash-find'. I am just curious if there is some inverse system of matrix exponentiation or some other mathematical method that gives the solution.
-
-REPLY [2 votes]: Building on this other answer of mine:
-Using integers only, I would use binary search. Certainly you can compute $F_n$ using only integers; the simplest way is matrix exponentiation. Using binary search you can find numbers "near" your number $x$ and you will find $x = F_n$ (and $n$). I suppose this method is generic for anything monotone you can compute fast. To initialize the binary search, just keep doubling the index, computing $F_{2n}$, until you overshoot $x$.
-Binary search allows you to search for a number $x$ in a sorted "array" F[] (in the programming sense). Use this method to search for your number. When you need F[n] just compute $F_n$. This will work because the sequence is strictly increasing except for the initial 1,1.<|endoftext|>
-TITLE: Are the rationals minus a point homeomorphic to the rationals?
-QUESTION [38 upvotes]: A while ago I was dreaming up point-set topology exam questions, and this one came to mind:
-Is $\mathbb Q\setminus \{0\}$ homeomorphic to $\mathbb Q$? (Where both sets have the subspace topology induced from the standard topology on $\mathbb R$.)
-However, I couldn't figure this out at the time, and I'm curious to see whether anyone has a nice argument. I'm not even willing to take a guess as to whether they are or aren't homeomorphic.
-
-REPLY [20 votes]: The ordered approach is fine. A classical theorem by Sierpinski says that all countable metric spaces without isolated points are homeomorphic. $\mathbf{Q}$ is such a space.
-It also implies $\mathbf{Q} \setminus \{0\}$ is homeomorphic to $\mathbf{Q}$ and $\mathbf{Q} \times \mathbf{Q}$ e.g., or any finite product for that matter.
-A proof is at the Topology Atlas, Topology Explained.<|endoftext|>
-TITLE: "Converting" equivalence relations to partitions
-QUESTION [5 upvotes]: There is a direct relationship between equivalence relations and partitions.
-Is there a way to simply use an equivalence relation's definition to get the matching partition? And what about the other way around?
-
-REPLY [9 votes]: Most of the time, an equivalence relation is hiding an "equality" somewhere; that is, $x\sim y$ if and only if $x$ and $y$ have something that you are trying to isolate which is "equal". With integers, the equivalence $a\equiv b\pmod{m}$ means that $a$ and $b$ have equal remainder when divided by $m$.
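-To make the relation-to-partition direction concrete, here is a minimal Python sketch; the function name partition and the predicate equiv are made-up names for illustration. It groups elements into classes by testing each new element against one representative per existing class, which suffices precisely because the relation is reflexive, symmetric and transitive:
-
-    def partition(elements, equiv):
-        """Group elements into equivalence classes under the relation equiv."""
-        classes = []
-        for x in elements:
-            for cls in classes:
-                if equiv(x, cls[0]):  # one representative suffices, by transitivity
-                    cls.append(x)
-                    break
-            else:
-                classes.append([x])
-        return classes
-
-    # Congruence mod 4 on {0, ..., 11} yields the four residue classes:
-    print(partition(range(12), lambda a, b: (a - b) % 4 == 0))
-    # [[0, 4, 8], [1, 5, 9], [2, 6, 10], [3, 7, 11]]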
We usually start from a notion of the thing that we want to be "the same", and define the equivalence relation accordingly, which makes it easier to think about just what the equivalence classes are: they correspond to all objects with the same "thingie" that we are focusing on in the first place.
-But suppose you were walking down the street and you found an equivalence relation lying on the ground. Just how easy is it to figure out what the "equality" is that is hiding behind it? How easy is it to describe all elements of the equivalence class of a given $x$?
-The answer is that it depends greatly on the equivalence relation you find, and sometimes on the $x$. For some equivalence relations, it is fairly easy to figure it out, and then to partition the objects into equivalence classes. But for others, it may be more mysterious.
-For example, suppose you only know about positive integers, their order, and their addition and multiplication. You don't know about negative numbers (as most of humanity did not for most of its history). I can define an equivalence relation on pairs of positive integers by
-$$(a,b)\sim (r,s) \Longleftrightarrow a+s = b+r.$$
-It is easy to verify that this is an equivalence relation. But what is the partition it induces? What is that "equality" that is hiding behind that equivalence relation?
-It's perhaps not so obvious. In fact, it comes from thinking of an ordered pair $(a,b)$ as corresponding to the equation $a+x=b$, so that the pair $(a,b)$ represents what we want to be a solution to this equation, which may or may not exist in the positive integers. Then $(a,b)\sim(r,s)$ means that the solution of $a+x=b$ should be "the same" as the solution of $r+x=s$. It seems obvious, then, that since $x=b-a=s-r$, this gives the condition I give, but the point is that one makes this definition among positive integers, where $b-a$ may be undefined, e.g., if $a\geq b$. It corresponds to a way of trying to think about negative numbers without having to use subtraction. But if you've never even thought about negative numbers, it would be rather difficult to figure it out, though it would be pretty easy to check, given two pairs, whether they are equivalent or not; and you may even be able to describe all the pairs that are equivalent to a given one, at least sometimes.
-But you really have little hope of having some general method that will work easily and always. The reason is precisely because equivalence relations correspond to partitions. Pick any partition, and define an equivalence relation by "$a\sim b$ if and only if they are in the same piece of the partition." That will be an equivalence relation, but if you find it lying on the quad you would be hard pressed to figure out where it could have come from, precisely because it came from an arbitrary partition that happened to catch your fancy at the moment.
-So, the moral is that most of the time equivalence relations come from a very specific idea we are trying to isolate, or a specific issue that is somewhat troublesome and we want to avoid (for instance, two functions being identical except for some negligible set of points where they differ, which may lead to some 'obvious' conclusions being technically false but true in 'spirit', so we define an equivalence relation that puts two such functions into the same class so that our "true in spirit" becomes "technically true" by now referring to equivalence classes of functions instead of functions themselves).
In such cases, it is very often easy to figure out the equivalence classes, or at least the bulk of each equivalence class (with some outliers being a bit troublesome from time to time). Practice and experience will let you spot them as they show up.<|endoftext|>
-TITLE: Nonnegativity of the quadratic Dirichlet L-function $L(\tfrac{1}{2},\chi)$ under GRH
-QUESTION [7 upvotes]: I have been looking for a proof of the statement:
-"Assume the Generalized Riemann Hypothesis. Let $d$ be a fundamental discriminant and $\chi_d$ the associated primitive quadratic character. Then, $$L(\tfrac{1}{2},\chi_d)\geq 0."$$
-Can anyone point me in the right direction or give a reference? Thanks a lot!
-
-REPLY [7 votes]: For real $s$, $L(s,\chi_d)$ is real, and it is certainly positive for large $s$. It is nonzero for $s>1$ by the Euler product and $L(1,\chi_d)\ne0$.
-If $L(1/2,\chi_d)<0$, then by continuity $L(s,\chi_d)$ would vanish at some real $s$ with $1/2<s<1$, and such a real zero off the critical line would contradict GRH.<|endoftext|>
-TITLE: Diverging improper integral
-QUESTION [6 upvotes]: When asked to evaluate $\int_{a}^{\infty}f(x)dx$, you split the interval based on the improper points.
-If there is another improper point other than $\infty$, at $b$, we will write: $\int_{a}^{\infty}f(x)dx=\int_{a}^{b}f(x)dx+\int_{b}^{c}f(x)dx+\int_{c}^{\infty}f(x)dx$ and ask whether each of the integrals on the right hand side converge. If they all converge, so does the original one.
-But if at least one of them diverges, the original one doesn't. What is the justification for this conclusion?
-I can see that if $\int_{b}^{c}f(x)dx$ diverges, we can assume that the original integral converges and move the integrals around like this: $\int_{a}^{\infty}f(x)dx-\int_{a}^{b}f(x)dx-\int_{c}^{\infty}f(x)dx=\int_{b}^{c}f(x)dx$ then we'll get a contradiction. But what if both $\int_{b}^{c}f(x)dx$ and $\int_{a}^{b}f(x)dx$ diverge? What is the argument then?
-
-REPLY [9 votes]: This is a good question. One way to think about the issue is that the "convergence" of your integral is not just about whether the value is finite, but also about whether the finite value is well defined. This is somewhat related to indeterminate forms where $+\infty + (-\infty)$ is not a well-defined quantity. (You can't say they cancel out, since, heuristically, $\infty - \infty = (1+\infty)-\infty = 1+\infty -\infty = 1 + (\infty - \infty)$...)
-So as long as one of the integrals on the right hand side diverges, the entire algebraic expression becomes indeterminate, and hence we say the integral diverges. (Diverges doesn't necessarily mean that the value must run off to infinity; it can just mean that the value does not converge to a definite number/expression.)
-(A similar issue also crops up when summing infinite series that don't converge absolutely. The Riemann rearrangement theorem tells you that, depending on "how" you sum the series, you can get the final number to be anything you want.)
-Sometimes, however, it is advantageous to try to make sense of an integral which can be split into two divergent improper integrals, but also where one can argue that there should be some natural cancellation. For example, one may want to argue that $\int_{-a}^a \frac{1}{x^3} dx$ evaluates to 0 since it is the integral of an odd function. For this kind of situation, the notion of Cauchy principal value is useful. But notice that the definition is taken in the sense of a limit that relies on some cancellation, and so, much as with the Riemann rearrangement theorem, "how" you take the limit can affect what value you get as the end result.
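-For instance, for the odd integrand just mentioned, the symmetric limit gives
-$$\lim_{\epsilon\to0^+}\left(\int_{-a}^{-\epsilon}\frac{dx}{x^3}+\int_{\epsilon}^{a}\frac{dx}{x^3}\right)=0,$$
-while pairing $-\epsilon$ with $2\epsilon$ instead gives
-$$\lim_{\epsilon\to0^+}\left(\int_{-a}^{-\epsilon}\frac{dx}{x^3}+\int_{2\epsilon}^{a}\frac{dx}{x^3}\right)=\lim_{\epsilon\to0^+}\left(-\frac{3}{8\epsilon^2}\right)=-\infty,$$
-so the "value" really does depend on how the two endpoints approach the singularity together.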
(This is compatible with the notion that the integral diverges; as I said above, divergence should be taken to mean the lack of a well-defined, unique convergence.) -Edit. Let me add another example of an integral that remains finite but does not converge. What is the value of $\int_{0}^\infty \sin(x) dx$? For every fixed $a > 0$, $\int_0^a\sin(x) dx = 1 - \cos(a) $ is a number between 0 and 2. But the limit as $a\to \infty$ doesn't exist! If you pick a certain way to approach $\infty$, say choose a sequence $a_n = 2\pi n$, then you'll come to the conclusion that the "limit" is 0; but if you choose $a_n = (2n + 1)\pi$, then you get the conclusion that the limit is $2$. The idea here is roughly similar: you take a left limit and a right limit approaching the improper point, and depending on how you choose your representative points (by an algebraic relation between the speed at which the left and right limits approach the improper point, say), you can get different answers.<|endoftext|> -TITLE: Fundamental group of quotient spaces of $SO(3)$ -QUESTION [16 upvotes]: I am trying to figure out the fundamental group (actually simply connected or not will suffice) of the following quotient space of $SO(3)$: -Let $X = SO(3)/E$, where $E$ is the equivalence relation defined as follows: -$E \equiv M \sim {S_{A}}^{i} * M * {S_{B}}^{j}$ where ${S_{A}}^{i} \in $ Crystallographic point group $A$ and ${S_{B}}^{j} \in $ Crystallographic point group $B$, $M \in SO(3)$. -$*$ represents the multiplication operation (matrix multiplication if rotations are represented as $3 \times 3$ special orthogonal matrices) -Crystallographic point group is defined here: http://en.wikipedia.org/wiki/Crystallographic_point_group -"In crystallography, a crystallographic point group is a set of symmetry operations, like rotations or reflections, that leave the crystal invariant (hence a symmetry)." -Only consider the point groups with rotational symmetries. There exist 11 crystallographic point groups for three-dimensional crystals. Let us start with just considering cyclic point groups. They are the following: -(a) $C_1 = \{I \}$ where I is the identity rotation. -If $Z_{\omega}$ is a rotation of angle $\omega$ about the $Z-$axis. -(b) $C_2 = \{I, Z_{\pi} \}$ -(c) $C_3 = \{I, Z_{\frac{2\pi}{3}}, Z_{\frac{4\pi}{3}} \}$ -(d) $C_4 = \{I, Z_{\frac{\pi}{2}}, Z_{\pi}, Z_{\frac{3\pi}{2}} \}$ -(e) $C_6 = \{I, Z_{\frac{\pi}{3}}, Z_{\frac{2\pi}{3}}, Z_{\pi}, Z_{\frac{4\pi}{3}}, Z_{\frac{5\pi}{3}} \}$ -For example if Crystallographic point group of A is $C_2$ and B is $C_3$, the equivalence relations are: -$ M \sim M * Z_{\frac{2\pi}{3}} \sim M * Z_{\frac{4\pi}{3}} \sim Z_{\pi}* M \sim Z_{\pi} * M * Z_{\frac{2\pi}{3}} \sim Z_{\pi} * M * Z_{\frac{4\pi}{3}} $ -There exists literature for cases when one of the point groups is $C_1 = \{ I \}$. In this case it is a group action on $SO(3)$ and the space $X = SO(3) / G$ where $G$ is one of the 11 crystallographic point groups. These spaces fall under the so-called spherical 3-manifolds ( http://en.wikipedia.org/wiki/Spherical_3-manifold ). When $G$ is one of the cyclic groups above, $X$ is a lens space. $L(2n,1) \cong SO(3)/ C_{n}$. I am not able to figure out how to think of these spaces when there are two point groups involved. 
-
-Progress so far: Even when there are two point groups acting on $SO(3)$, in which case we write the quotient as $ G_{1} \backslash SO(3)/ G_{2}$ where $G_{1}$ is the crystallographic point group of $A$ and $G_2$ refers to system $B$, there exists a finite subgroup $\Gamma$ of $SO(4)$ such that $ G_{1} \backslash SO(3)/ G_{2} \cong S^{3}/ \Gamma$. In the case $G_{1} = C_{1}$, it turns out that $\Gamma$ acts properly discontinuously and hence the fundamental group of that space is $\Gamma$ itself (from literature). But I am not sure how to check if $\Gamma$ acts "properly discontinuously" or not. And I am not sure how to check if the space is simply connected or not if $\Gamma$ does not act properly discontinuously.
-Any help appreciated.
-Thank you.
-To answer Aaron's question: do you know how to obtain this group $\Gamma$ explicitly?
-I use the quaternion representation for rotations. Let $M = (q_0, q_1, q_2, q_3)$. I also use the following fact: Each 4D rotation $R$ is in two ways the product of left- and right-isoclinic rotations $R_L$ and $R_R$. $R_L$ and $R_R$ are together determined up to the central inversion, i.e. when both $R_L$ and $R_R$ are multiplied by the central inversion their product is $R$ again. (from wiki: http://en.wikipedia.org/wiki/SO(4) ). So for any operation ${S_{A}}^{i} * M * {S_{B}}^{j}$, I can find $\Gamma_{ij}$ such that
-${S_{A}}^{i} * M * {S_{B}}^{j} = \Gamma_{ij}*[q_0 \ q_1 \ q_2 \ q_3]'$.
-The collection of all such $\Gamma_{ij}$ (and $\Gamma_{ij} * -I_4$, where $I_4$ is the $4 \times 4$ identity matrix) forms a finite subgroup $\Gamma$ of $SO(4)$. I used the definition provided by Aaron below for a free action and found that at least for the cases where the crystallographic point groups of $A$ and $B$ are the same (e.g. $C_2 \backslash SO(3) / C_2$), the action of the finite subgroup $\Gamma_{C_2,C_2}$ is not properly discontinuous. Even though I tried to generalize the problem, I am more interested in the cases when the crystallographic point groups of $A$ and $B$ are the same. So, how can I say whether this space $C_2 \backslash SO(3) / C_2$ is simply connected or not, especially now that I know that the action of $\Gamma$ is not properly discontinuous?
-Any ideas?
-
-REPLY [6 votes]: This is cool. Are you doing chemistry?
-So, a slightly more standard notation might be $G_1\backslash SO(3)/G_2$ -- this indicates that you're left-multiplying by elements of $G_1$ and right-multiplying by elements of $G_2$.
-A "properly discontinuous" action of a group $G$ on a space $X$ is one where every point $x\in X$ has a neighborhood $U$ such that $g(U)\cap U=\emptyset$ unless $g=1$; that is, $G$ not only acts freely (no nonidentity element has fixed points), but the nonidentity elements of $G$ take every point "sufficiently far away from itself". This may seem vacuous, but in fact when you're working with Kleinian groups and things this can be an important condition. However, in your case what is immediately true (since your space $SO(3)$ is Hausdorff and your groups are finite) is that any free action is automatically properly discontinuous. So the only thing you need to check is that your actions have no fixed points.
-So I guess the question becomes, do you know how to obtain this group $\Gamma$ explicitly? If it's cyclic, it probably will be taken to be a subgroup of the circle, which acts freely on $S^3$. (This is $S^1\subseteq \mathbb{C}$ acting on $S^3\subseteq \mathbb{C}^2$ by multiplication.)
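-Since the groups here are finite, the fixed-point check is a finite computation: list $\Gamma$ as pairs of unit quaternions $(\ell, r)$ acting on $S^3$ by $x\mapsto \ell x r$, and note that an orthogonal map of $\mathbb{R}^4$ fixes a point of $S^3$ exactly when it has eigenvalue $1$. Here is a minimal Python sketch along those lines; the helper names and the two sample groups at the end are made up for illustration (a cyclic group acting by left multiplication, which is free, versus a conjugation action, which fixes $1$):
-
-    import numpy as np
-
-    def qmul(a, b):
-        """Hamilton product of quaternions a = (w, x, y, z) and b."""
-        w1, x1, y1, z1 = a
-        w2, x2, y2, z2 = b
-        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
-                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
-                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
-                         w1*z2 + x1*y2 - y1*x2 + z1*w2])
-
-    def action_matrix(l, r):
-        """4x4 matrix of the linear map x -> l * x * r, one basis vector per column."""
-        return np.column_stack([qmul(qmul(l, e), r) for e in np.eye(4)])
-
-    def acts_freely(pairs):
-        """True if no non-identity element in the listed group fixes a point of S^3."""
-        for l, r in pairs:
-            M = action_matrix(np.asarray(l, float), np.asarray(r, float))
-            if np.allclose(M, np.eye(4)):
-                continue  # the identity element may fix everything
-            if np.any(np.abs(np.linalg.eigvals(M) - 1.0) < 1e-9):
-                return False  # eigenvalue 1 means a fixed point on the sphere
-        return True
-
-    e, k = (1, 0, 0, 0), (0, 0, 0, 1)
-    left_mult = [(e, e), (k, e), ((-1, 0, 0, 0), e), ((0, 0, 0, -1), e)]
-    conjugation = [(e, e), (k, (0, 0, 0, -1)), ((-1, 0, 0, 0), (-1, 0, 0, 0))]
-    print(acts_freely(left_mult), acts_freely(conjugation))  # True False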
-
-If $\Gamma$ doesn't act properly discontinuously, things are harder.<|endoftext|>
-TITLE: Connections and differential equations
-QUESTION [7 upvotes]: I was trying to understand the notion of a connection. I have heard in seminars that a connection is more or less a differential equation. I read the definition of Koszul connection and I am trying to assimilate it. So far I cannot see why a connection is a differential equation. Please help me with some clarification.
-
-REPLY [6 votes]: I'm going to be bold and go against the wisdom of the rest of the posters here.
-A connection can be regarded as a differential equation, through the notion of parallel transport and geodesics. In the parallel transport formulation, a connection can be associated with the following map: given a point $p$ in your manifold $M$, a tangent vector $v \in T_pM$ in the tangent space at $p$, and a curve $\gamma:[0,1] \to M$ such that $\gamma(0) = p$, you have
-$$ (p,v,\gamma) \mapsto \tilde{v}$$
-where $\tilde{v}$ is a vector field on $M$ defined along $\gamma$. That is, $\tilde{v}(\gamma(s)) \in T_{\gamma(s)}M$ for every $s\in [0,1]$. The specification of $\tilde{v}$ is given by the ordinary differential equation
-$$ \nabla_{\dot{\gamma}}\tilde{v} = 0, \qquad \tilde{v}(0) = v$$
-or, writing $D/ds$ for the covariant derivative along $\gamma$,
-$$ \frac{D}{ds} \tilde{v}(\gamma(s)) = 0, \qquad \tilde{v}(\gamma(0)) = v$$
-In terms of the geodesic formulation, in local coordinates, a specification of a parallel transport is equivalent to the specification of the geodesics for that connection. More precisely, fixing a coordinate system $\{x_1, \ldots, x_n\}$ for your manifold $M$, the connection can be represented by its Christoffel symbols (relative to the coordinate system) $\Gamma_{ij}^k$, such that the geodesics satisfy Newton's equations
-$$ \frac{d^2 x_i}{dt^2} = -\sum_{k,j} \Gamma^i_{jk} \frac{d x_j}{dt} \frac{d x_k}{dt} $$
-and in this way the connection is associated precisely to a particular set of second order differential equations, such that for each prescribed initial data $x_i(0) = x_{i,0}$ and $dx_i/dt (0) = y_{i,0}$ you have a corresponding geodesic.
-
-In a more abstract setting, a connection on the tangent space $TM$ of a manifold $M$ can be associated to a vector field $v$ defined over the total space $TM$ (so that $v$ is a section of $TTM$) such that the natural projection satisfies $\pi_*v(x) = x$, where $\pi$ is the projection map from $TM$ to $M$. The vector $v$ defines what is known as the geodesic spray of the connection, which can be written as a first order ordinary differential equation on $TM$ (or a second order ordinary differential equation on $M$). So yes, it is possible to associate a connection to a differential equation. (The reverse association, however, is more delicate.)<|endoftext|>
-TITLE: Is $\mathbb{Q}_p(\zeta_p)$ the same as $\mathbb{Q}_p(p^{\frac{1}{p-1}})$?
-QUESTION [8 upvotes]: It seems so. $\mathbb{Q}_p(\zeta_p)$ is a degree-$(p-1)$ extension of $\mathbb{Q}_p$ which doesn't extend the residue field; and so is $\mathbb{Q}_p(p^{\frac{1}{p-1}})$. However I can't see how to express $\zeta_p$ in $\mathbb{Q}_p(p^{\frac{1}{p-1}})$ or how to express $p^{\frac{1}{p-1}}$ in $\mathbb{Q}_p(\zeta_p)$.
-Can you help?
-
-REPLY [12 votes]: The right result is that $\mathbb{Q}_p(\zeta_p) = \mathbb{Q}_p((-p)^{1/(p-1)})$.
-I will show that $\zeta_p$ is in $\mathbb{Q}_p((-p)^{1/(p-1)})$. Since both fields are of dimension $p-1$ over $\mathbb{Q}_p$ (compute ramification degrees), this shows the fields are equal.
-
-Let $K=\mathbb{Q}_p((-p)^{1/(p-1)})$ and let $\pi$ be a $(p-1)$st root of $-p$ in $K$. We want to show that the equation $x^p=1$ has $p$ roots in $K$. Make the change of variable $x=1+\pi y$. So $\pi^{p} y^{p} + p \pi^{p-1} y^{p-1} + \cdots + p \pi y + 1 =1$.
-Dividing out $\pi^{p} = -p \pi$, we get
-$$y^{p} - \pi^{p-2} y^{p-1} - \frac{1}{p} \binom{p}{2} \pi^{p-3} y^{p-2} - \frac{1}{p} \binom{p}{3} \pi^{p-4} y^{p-3} - \cdots - y=0 \quad (*).$$
-Notice that $(1/p) \binom{p}{k}$ is an integer for $1 \leq k \leq p-1$. So, when we reduce $(*)$ modulo $\pi$, we get
-$$y^p - y =0.$$
-The equation $y^p-y=0$ has $p$ distinct roots in $\mathbb{F}_p$. So, by Hensel's lemma, equation $(*)$ has $p$ roots in $K$.
-We have shown that $K$ contains a nontrivial $p$-th root of unity, as desired.
-
-Here is a direct argument for the other direction, when $p >2$. Let $1-\zeta = \lambda$ (where $\zeta$ is a primitive $p$th root of unity). We know that
-$$\prod_{j=1}^{p-1} (1-\zeta^j) = p.$$
-The left hand side is
-$$\prod_{j=1}^{p-1} (j \lambda - \binom{j}{2} \lambda^2 + \cdots) = (p-1)! \ \lambda^{p-1} \cdot \prod_{j=1}^{p-1} \left( 1 - \frac{j-1}{2} \lambda + \cdots \right) =$$
-$$(p-1)! \cdot \lambda^{p-1} \cdot \left( 1 - \frac{(p-1)(p-2)}{4} \lambda + \cdots \right)$$
-By Wilson's theorem, $(p-1)! \equiv -1 \mod p$, and $p \equiv 0 \mod \lambda^{p-1}$, so we deduce that
-$$p = (-1) \lambda^{p-1} \left( 1+ \mbox{something divisible by } \lambda \right).$$
-Anything of the form $ \left( 1+ \mbox{something divisible by } \lambda \right)$ has a $(p-1)$st root in $\mathbb{Q}_p(\zeta)$: since $p-1$ is prime to $p$, the exponent $1/(p-1)$ lies in $\mathbb{Z}_p$, so the binomial series for $(1+u)^{1/(p-1)}$ has coefficients in $\mathbb{Z}_p$ and converges for $|u|<1$. So we see that
-$$(-p)^{1/(p-1)} = \lambda \left( 1+ \mbox{something divisible by } \lambda \right)^{1/(p-1)}$$
-is in $\mathbb{Q}_p(\zeta)$ as desired.<|endoftext|>
-TITLE: Lebesgue Measure of the Cartesian Product
-QUESTION [7 upvotes]: If $E$ is Lebesgue measurable in $\mathbb{R}^n$ and $I=[a,b]$ how do I show that $E\times I$ is measurable in $\mathbb{R}^{n+1}$?
-Jonas:
-I'm using $\mu^*(E)=\inf \{ \sum \mathrm{Vol}(I_k) \mid E\subseteq \cup I_k\}$ and for every $\epsilon \gt 0$ there exists an open set $G$ containing $E$ such that $\mu^*(G-E)\lt\epsilon$ ($\mu^*$ is the outer measure).
-I tried using the first definition since I think it would be easier, but I don't know how to make it fit together.
-
-REPLY [11 votes]: An equivalent criterion for measurability of a set $E$ is the existence of a $G_\delta$ set $G$ containing $E$ such that $\mu^*(G\setminus E)=0$. (If you haven't already seen this, you can prove it.) You can use this along with the fact that $(G_1\times G_2)\setminus(E_1\times E_2)=((G_1\setminus E_1)\times G_2)\cup(G_1\times(G_2\setminus E_2))$ to show that if $E_1$ is measurable in $\mathbb{R^n}$ and $E_2$ is measurable in $\mathbb{R^m}$, then $E_1\times E_2$ is measurable in $\mathbb{R^{n+m}}$.<|endoftext|>
-TITLE: What "general" in "general topology" refers to
-QUESTION [9 upvotes]: What does "general" in "general topology" really refer to? We use the term all the time without thinking about its origin.
-
-REPLY [4 votes]: Wikipedia says:
-In mathematics, general topology or point-set topology is the branch of topology which studies properties of topological spaces and structures defined on them. It is distinct from other branches of topology in that the topological spaces may be very general, and do not have to be at all similar to manifolds.
-...
-
-Other main branches of topology are algebraic topology, geometric topology, and differential topology. As the name implies, general topology provides the common foundation for these areas.<|endoftext|>
-TITLE: Why does dust gather in corners?
-QUESTION [26 upvotes]: I've noticed when sweeping the floor that dust gathers particularly in the corners. I assume there is a fluid mechanics reason for this. Does anyone know what it is?
-
-Edit: No, really, this is a mathematical question. Air blows around the room, which constitutes a vector field. Let's say it blows through a square room with doors near bottom-left and top-right. Air blows from bottom-left to top-right.
-There are dust particles in the room, let's say uniformly distributed at first. But after application of this flow they are not uniformly distributed. They pile in the corners. Maybe that's because vortices form more in the corners, maybe some other reason.
-It's not because of where I start sweeping.
-It is a physics question, but obviously knowledge of Newtonian mechanics won't solve it. It comes down to fluid flow and vector fields, which is math.
-
-REPLY [3 votes]: First, we must assume a more or less constant airflow in the room, for as everyone knows, in a room that is closed up for a long time with no airflow, dust accumulates evenly throughout the room. The simplest air flow type is random. Like water, or anything that flows, air flows through the path of least resistance. For air, the path of least resistance is a straight line. Resistance to flow is proportional to the angular change of direction produced by an obstacle; in this case a corner, which does not necessarily imply a 90 degree corner. There are two types of corners: inside corners (less than 180 degrees) and outside corners (greater than 180 degrees). Outside corners do not gather dust, simply because they do not force the air flow to change its direction.
-Inside corners, however, present a physical barrier to straight-line air flow. The smaller the angle of deflection, the greater the resistance. The air simply avoids the highly flow-resistant corner by curving around it. In doing so, it loses some of its energy, slowing its flow rate or speed. Of course, some of the air particles pass closer to the corner than others. The closer particles have to turn more sharply to avoid the black hole of the corner, thus slowing down more, thus losing more of their ability to hold a dust particle aloft, thus losing more dust particles before returning to the (assumed) random air flow of the room. In a simple rectangular or square room, this action results in dusty corners.
-This is the simplest and least technical answer I can think of. Such things as Navier–Stokes equations, Laplace equations and pressure derivatives have their place, but are superfluous, irrelevant overkill for answering such a simple question. "KISS" always applies. If you want a solution to the dusty corner problem (and seeking the cause of a problem is, after all, the first step in seeking a solution), simply place a constantly running vacuum inlet in each corner with the (filtered) outlet in the center of the room, directed in all directions, and you will have constantly self-dusting corners. Then, (not so simply) eliminate all dust generators and dust attractors, and replace all objects that have inside corners (or modify them so that they have inside curves instead), and you will have a constantly self-dusting room.
However, you should not complain if, upon entering this room, your friends declare, "This room really sucks!". I hope this clears the air (and the corners, of course).<|endoftext|>
-TITLE: Path (Feynman) Integrals over Graphs
-QUESTION [11 upvotes]: I was thinking about Feynman integrals the other day and in particular about discretizing the paths.
-Does anyone know the lay of the land about what happens when you do path integrals over, say, a lattice or graph?
-
-REPLY [4 votes]: There's an entire approach to non-perturbative numerical calculations for quantum field theory, lattice gauge theory, which is based on path integrals over lattices. I did my diploma in that field, so I might be able to answer some questions, but it's been a long time :-).<|endoftext|>
-TITLE: How to sum $\frac1{1\cdot 2\cdot 3\cdot 4} + \frac4{3\cdot 4\cdot 5\cdot 6} + \frac9{5\cdot 6\cdot 7\cdot 8} + \cdots$ quickly?
-QUESTION [8 upvotes]: The Problem:
-$$\frac1{1\cdot 2\cdot 3\cdot 4} + \frac4{3\cdot 4\cdot 5\cdot 6} + \frac9{5\cdot 6\cdot 7\cdot 8} + \frac{16}{7 \cdot 8 \cdot 9 \cdot 10} + \cdots$$
-Is there a smarter way to solve this, say within a minute or two?
-
-I am adding my solution. If you are a student, please don't read on; I find this particular problem rather interesting, so just try it once :-)
-I started off by trying to find the $n$th term
-$$T_n = \frac{n^2}{(2n-1)(2n)(2n+1)(2n+2)}$$
-$$ = \frac1{24} \times \biggl( \frac1{2n-1} + \frac3{2n+1} - \frac4{2n+2} \biggr)$$
-$$ = \frac1{24} \times \biggl[\biggl( \frac1{2n-1} - \frac1{2n+1}\biggr) + 4 \times \biggl(\frac1{2n+1} - \frac1{2n+2} \biggr) \biggr]$$
-After this it becomes really easy,
- $$ S_\infty = \frac1{24} \times \biggl[( 1 - \frac13 + \frac13 - \frac15 + \frac15 + \cdots ) + 4 \times (\ln 2 - \frac12)\biggr]$$
-$$ = \frac16 \log_e 2 - \frac1{24}$$
-But as you can see this approach is a bit tedious and it took me some 30 minutes to reduce the $T_n$ to that form. Any other smart approaches?
-
-REPLY [3 votes]: Again this is trivial employing telescopy. After partial fraction decomposition the summand has the form $\rm\ a_1(n-1) - a_1(n) + c\ (a_1(n)+a_0(n))\ $ where $\rm\ a_0(n),\ a_1(n)\ $ are the coefficients of the even, odd part of the $\rm log\ x\ $ power series, for $\rm\ x = 2\:.\:$ The sum of the first part $\rm\ a_1(n-1) - a_1(n)\ $ telescopes to $\rm\:a_1(0)\:$ and the sum of the second part combines all the odd and even terms into the complete power series for $\rm\: log\ 2\:.\ $ Notice how very simple this approach is. In particular it does not require any knowledge of asymptotics of harmonic series - as in the approach in Moron's answer.
-Generally no ingenuity is required to find such telescopy. It can be done mechanically as follows. Let $\rm\:S\:$ be the shift $\rm\: S\ a(n)\: \to\: a(n+1)\:,\: $ and let $\rm\ P(S)\ $ be a polynomial in $\rm\:S\:$ whose coefficients $\rm\:c\:$ are constant w.r.t. $\rm\:n\:,\ $ i.e. $\rm\ S\ c = c\:.\: $ Then $\rm\ \sum\ P(S)\ a(n)\ =\ \sum\ (P(S)-P(1))\ a(n)\ +\ P(1)\: \sum\: a(n)\:.\ $ The first sum telescopes since the Factor Theorem $\rm\: \Rightarrow\ S-1 \:|\: P(S)-P(1)\:.\ $ Therefore the sum computation reduces to that of the simpler $\rm\ \sum\: a(n)\:.$<|endoftext|>
-TITLE: Motivating Cohomology
-QUESTION [29 upvotes]: Question: Are there intuitive ways to introduce cohomology? Pretend you're talking to a high school student; how could we use pictures and easy (even trivial!) examples to illustrate cohomology?
-
-Why do I care: For a number of math kids I know, doing algebraic topology is fine until we get to homology, and then it begins to get a bit hazy: why does all this quotienting out work, why do we make spaces up from other spaces, how do we define attaching maps, etc, etc. I try to help my peers do basic homological calculations through a sequence of easy examples (much like the ones Hatcher begins with: taking a circle and showing how "filling it in" with a disk will make the "hole" disappear) and then begin talking about what kinds of axioms would be nice to have in a theory like this. I have attempted to begin studying cohomology through "From Calculus to Cohomology" and Hatcher's text, but I cannot see the "picture" or imagine easy examples of cohomology to start with.
-
-REPLY [5 votes]: There is a nice and, in my opinion, more natural way to motivate cohomology - a geometric one, rather than an analytical one. Please read carefully the following question and answer in math.stackexchange:
-Intuitive Approach to de Rham Cohomology<|endoftext|>
-TITLE: Finite groups: $H \leq A \times B$. Is $H \cong C \times D$ for some $C \leq A$, $D \leq B$?
-QUESTION [8 upvotes]: $A$ and $B$ are finite groups.
-$H \leq A \times B$.
-Can we find some $C \leq A$, $D \leq B$ such that $H \cong C \times D$?
-In case the statement is not true: is it true under further assumptions about $A$ and $B$, such as solvability, nilpotency, etc?
-Special cases I can prove:
-
-$A$ and $B$ are abelian (following ideas from another discussion: $G$ finite abelian. $A \times B$ embedded in $G$. Is $G=C \times D$ such that $A$ embedded in $C$, $B$ embedded in $D$?)
-$(|A|,|B|)=1$. In this case we even have $H = C \times D$. By using the Chinese remainder theorem for instance.
-
-REPLY [5 votes]: You might be interested in the expository article "Subgroups of direct products of groups, ideals and subrings of direct products of rings, and Goursat's lemma" by Anderson and Camillo. A couple of excerpts:<|endoftext|>
-TITLE: How to prove that the derivative of Heaviside's unit step function is the Dirac delta?
-QUESTION [30 upvotes]: Here is a problem from Griffiths's book Introduction to E&M.
-
-Let $\theta(x)$ be the step function
- $$\theta = \begin{cases}
-0, & x \le 0, \\
-1, & x \gt 0.
-\end{cases}
-$$
-The question is how to prove $\frac{d\theta}{dx} = \delta(x)$.
-
-I think since the function is discontinuous at $x = 0$, there is no definition of $\frac{d\theta}{dx}$ at the point $x = 0$ at all. Thus, how could we prove such an equation if its left-hand side is not defined at the point $x = 0$?
-
-REPLY [4 votes]: Of course, as it was stated above, the equality holds in the sense of distributions. In order to approach this same issue, i.e. showing that the equality follows from the theory of distributions, one can define the derivative of a function-type distribution (a distribution which in fact is a function in $L^1_{\text{loc}}(\mathbb R)$) as the limit, in the sense of distributions, of the usual incremental ratio, as follows:
-$$
-\theta(x) = \begin{cases}
-0\quad \text{ if } x\le 0\\
-1\quad \text{ if }x>0.
-
-\end{cases}
-$$
-Then the functions
-$$
-\delta_\varepsilon (x) \equiv \frac{\theta(x)-\theta(x-\varepsilon)}{\varepsilon}
-$$
-for, say, $\varepsilon>0$ are in fact
-$$
-\delta_\varepsilon(x) =
-\begin{cases}
-0\quad\text{ if }x>\varepsilon\\
-\dfrac{1}{\varepsilon}\quad\text{ if }0< x\le\varepsilon\\
-0\quad\text{ if }x\le0
-\end{cases}
-=\varepsilon^{-1}\chi_{(0,\varepsilon]}(x)
-=\varepsilon^{-1}\chi_{(0,1]}\left(\frac{x}{\varepsilon}\right),
-$$
-where $\chi_{B}$ is the characteristic function of $B$.
-Now, for any $\varphi\in\mathscr D(\mathbb R)$,
-$$
-\lim_{\varepsilon\to0}\langle \delta_\varepsilon, \varphi\rangle
-=\lim_{\varepsilon\to0}\int_{0}^{1}\varphi(y\varepsilon)dy=\varphi(0) = \langle\delta,\varphi\rangle.
-$$
-The same holds when $\varepsilon<0$, as one can check explicitly.
-Thus $\theta'=\lim_{\varepsilon\to0}\delta_\varepsilon=\delta$, in the sense of distributions $\mathscr D'$.<|endoftext|>
-TITLE: The fractional parts of the powers of the golden ratio are not equidistributed in [0,1]
-QUESTION [6 upvotes]: Let $$a_n=\left(\frac{1+\sqrt{5}}{2}\right)^n.$$
-For a real number $r$, denote by $\langle r\rangle$ the fractional part of $r$.
-Why is the sequence $$\langle a_n\rangle$$ not equidistributed in $[0,1]$?
-
-REPLY [17 votes]: Let $\phi'=(1-\sqrt 5)/2$ denote the Galois conjugate of the golden mean $\phi$.
-Then $\phi^n+\phi'^{n}$ is an integer for every $n\in\mathbb N$, i.e.
-$$\phi^n+\phi'^{n}\equiv 0\ (\mbox{mod}\ 1).$$
-But $|\phi'|<1$, so $\phi'^{n}\to 0$. This implies that $\phi^n\to 0\ (\mbox{mod}\ 1)$.
-The property that the sequence $\{ \mbox{frac} (x^n)\}$ is not equidistributed is shared by other Pisot numbers. There are quite a lot of research publications devoted to them.<|endoftext|>
-TITLE: What does the endomorphism group of an object tell us about the object in question?
-QUESTION [6 upvotes]: For example:
-What conclusions can be drawn about the relations between two objects with the same group of endomorphisms?
-Can we tell from End(A) if A is Abelian or not?
-Does End(A) contain information about the sub-objects of A?
-Any information or references to information about this is highly appreciated.
-
-REPLY [8 votes]: For abelian groups, the ring End(A) is very important. As far as non-abelian groups A go, End(A) is not even (usually considered) a group.
-"Adding" homomorphisms doesn't work in the non-abelian case.
-If you define (f+g)(x) = f(x) + g(x), then (f+g)(x+y) = f(x+y) + g(x+y) = f(x) + f(y) + g(x) + g(y), but (f+g)(x) + (f+g)(y) = f(x) + g(x) + f(y) + g(y). To conclude that these two expressions are equal, you need
-    f(y) + g(x) = g(x) + f(y),
-i.e. you use that + is commutative, that A is abelian. More precisely, if you take f=g to be the identity endomorphism, then f+g is an endomorphism iff A is abelian.
-"Composing" homomorphisms doesn't work to form a group, since they are not invertible.
-Aut(A), the group of invertible endomorphisms, does form a group. Aut(A) does not determine if a group is abelian or not: the groups $\mathbb{Z}_4 \times \mathbb{Z}_2$ and the dihedral group of order 8 have isomorphic automorphism groups.
-Instead of a ring, End(A) sits inside the "near-ring" of self-maps. See the wikipedia article on nearring for an explanation.<|endoftext|>
-TITLE: Domain of the Gamma function
-QUESTION [5 upvotes]: I need to find the domain of the Gamma function, that is to say all $z \in \mathbb{C}$ for which the integral
-$$\Gamma(z) = \int_0^\infty t^{z-1} e^{-t} \mathrm dt$$
-converges.
I started by splitting up the integral into an integral running from $0$ to $1$ and another one from $1$ to $\infty$. I first tried to figure out for what $z \in \mathbb{C}$ the integral from $0$ to $1$ converges and I came to the conclusion that $\Re(z) > 0$ is the condition.
-The other integral, I believe, converges for every $z$, as the exponential function dominates the monomial eventually. So I concluded:
-$$\exists \Gamma(z) \iff \Re(z) > 0$$
-However, I just learned that this is wrong. I found out that the integral only diverges for non-positive integers. What did I do wrong or what is a better way to find the domain of the Gamma function?
-
-REPLY [5 votes]: Elaborating on Andrey's answer:
-There are some similar questions (Q1 and Q2, e.g.) related to the convergence/divergence of the standard integral rep of $ \Gamma (s)$ on this site, so I thought it might be helpful for those familiar with basic complex analysis to look at an extended integral rep for $\Gamma (s)$ that is related to the function's singularities, i.e., the Cauchy–Saalschütz integral, which Andrey highlights.
-In understanding the convergence of real Taylor series, you need to look at the complex domain and the singularities (hopefully, only simple poles at most) of the function it represents. The same can apply to integrals over the real line, so first do a partial fraction expansion of the Euler/Gauss rep (Eqn. 6.1.2 in Abramowitz and Stegun, pg. 255; also EOM article) of $\Gamma (s)$ in the complex plane and note the simple poles at $s=0, -1, -2, ...$ , consistent with the identity $\frac{1}{s!(-s)!}=\frac{\sin(\pi s)}{\pi s}$. Then if you are comfortable with the Mellin transform, you can easily write down a Hadamard finite part integral representation for sections between the poles:
-For $-(n+1)<\Re(s)=\sigma<-n$, the inverse Mellin transform gives
-$$\frac{1}{2\pi i}\int_{\sigma -i\infty }^{\sigma +i\infty } \Gamma(s) x^{-s}ds=\frac{1}{2\pi i}\int_{\sigma -i\infty }^{\sigma +i\infty }\frac{\pi }{\sin \left ( \pi s \right )}\frac{x^{-s}}{(-s)!}ds$$
-$$=\exp(-x)-\left(1-x+\frac{x^2}{2!}+ ... + \frac{(-x)^n}{n!}\right),$$
-and, therefore, the associated Mellin transform gives
-$$\Gamma (s)=\mathrm{FP}\int_{0}^{\infty }x^{s-1}\exp(-x)dx = \int_{0}^{\infty }x^{s-1}\left[\exp(-x)-\left(1-x+\frac{x^2}{2!}+ ... + \frac{(-x)^n}{n!}\right)\right]dx.$$
-Roughly speaking, the singularities at the lower limit $x=0$ of $\frac{x^{s+m}}{s+m}$ for $m=0, 1, \ldots, n$ are being subtracted out.<|endoftext|>
-TITLE: Why does the structure theorem for finitely generated modules over PIDs fail for arbitrary modules over a PID?
-QUESTION [13 upvotes]: The proof that I know of the theorem goes like this:
-
-Any module $M$ is a quotient of a free module $F$ (over any ring).
-Any submodule $K$ of a free module $F$ over a PID $R$ is a free module, so in particular the kernel of the above quotient map is free.
-For any free submodule $K$ of a free module $F$ over a PID $R$ we can find $y\in K$, $x\in F$ and $a\in R$ such that $y=ax$, $F=\left<x\right>\oplus F'$ and $K=\left<y\right>\oplus K'$ with $K'=K\cap F'$. Furthermore, $\left<a\right>$ is an ideal maximal among images of $K$ under homomorphisms $F\to R$.
-If $K$ is finitely generated (which would follow from $F$ being finitely generated), we iterate this construction, which gives us sequences $x_i\in F$, $y_i\in K$ and $a_i\in R$ such that $F=F'\oplus\bigoplus_{i} \left<x_i\right>$ and $K=\bigoplus_{i}\left<y_i\right>$ where $a_ix_i=y_i$ and $a_i$ divides $a_j$ for $i<j$ (by $\left<a_i\right>$ being maximal among images of $\bigoplus_{i\leq j}a_j\left<x_j\right>$ under homomorphisms from $\bigoplus_{i\leq j}\left<x_j\right>\oplus F'$ to $R$).
-Taking quotients, we obtain that $M=F'\oplus\bigoplus_i R/(a_i)$ where $a_i$ divides $a_j$ if $i<j$.<|endoftext|>
-TITLE: $\left \{ 0,1 \right \}^{\mathbb{N}}\sim \left \{ 0,1,2,3 \right \}^{\mathbb{N}}$ bijection function
-QUESTION [6 upvotes]: Prove that $\left \{ 0,1 \right \}^{\mathbb{N}}\sim \left \{ 0,1,2,3 \right \}^{\mathbb{N}}$ and find a direct bijection function.
-I got the first part by showing that $\left \{ 0,1 \right \}^{\mathbb{N}} \subseteq \left \{ 0,1,2,3 \right \}^{\mathbb{N}} \subseteq {\mathbb{N}}^{\mathbb{N}}$, which implies that $|\left \{ 0,1 \right \}^{\mathbb{N}}| \leq |\left \{ 0,1,2,3 \right \}^{\mathbb{N}}| \leq |{\mathbb{N}}^{\mathbb{N}}|$ and since $|{\mathbb{N}}^{\mathbb{N}}| = |\left \{ 0,1 \right \}^{\mathbb{N}} | = 2^{\aleph_0} $ and Cantor-Bernstein you get that $\left \{ 0,1 \right \}^{\mathbb{N}}\sim \left \{ 0,1,2,3 \right \}^{\mathbb{N}}$.
-But I'm stuck with formulating a bijection function. More generally, what approach do you use when you need to formulate an exact function?
-
-REPLY [3 votes]: $(b_1,b_2,b_3,b_4,\ldots)\mapsto(b_1+3^{b_2}-1,b_3+3^{b_4}-1,\ldots)$ answers the question.
-@Andres: "As a slightly more challenging exercise, pick any two positive integers $n<m$ ..."<|endoftext|>
-TITLE: How to solve this rational equation for y
-QUESTION [6 upvotes]: Greetings!
-On a test recently I ended up having to solve this for $y$:
-$$ x = \frac{2y}{y + 1} $$
-But I kept getting stuck in circles...
-$$
-\begin{aligned}
-x(y + 1 ) = 2y \\
-xy + x = 2y \\
-\frac{xy + x}{2} = y \\
-\end{aligned}
-$$
-That didn't get me anywhere, so then I started over and tried multiplying both sides by the reciprocal:
-$$
-\begin{aligned}
-(\frac{y+1}{2y})(x) = 1 \\
-\frac{xy+x}{2y} = 1 \\
-\end{aligned}
-$$
-And still I can't see a way to isolate y.
-The worst part of this is that I remember being specifically taught a trick for this particular conundrum, but I can't remember the trick!
-Wolfram gives the answer as
-$$
-y = - \frac{x}{x-2}
-$$
-but it doesn't show the steps.
-
-REPLY [4 votes]: Another way to arrive at the conclusion, if you don't "see" factors jumping out at you, is this: When you see a fraction like $\frac{2y}{y+1}$, I am sure you feel the urge to split into summands. Now, of course, you cannot do it with the sum in the denominator, so that suggests inverting the whole thing (assuming that everything that you want to be non-zero is non-zero):
-$$
-\frac{1}{x} = \frac{y+1}{2y} = \frac{y}{2y} + \frac{1}{2y} = \frac{1}{2} + \frac{1}{2y}.
-$$
-Now subtract 1/2 and invert back again to get the result.<|endoftext|>
-TITLE: Combinatorics and analysis
-QUESTION [8 upvotes]: A lot of "big" names in analysis (and other fields) seem to be doing some form of combinatorics (without any order, some examples are Tim Gowers, Terence Tao and Jean Bourgain).
-So, looking a bit around makes me conclude that combinatorics is a huge field. There must be one "kind" which is the most fruitful in analysis. What kind is this? What is a good introduction to this?
-Edit: I forgot, analysis is also a big field. I mean more in the direction of harmonic analysis and PDE.
- -REPLY [2 votes]: With regard to combinatorics & harmonic analysis, you might find this interesting. This work by Terence Tao (whom you mentioned) sheds light on all three of combinatorics, analysis, and PDEs. Hope this helps!<|endoftext|> -TITLE: Convergence of series involving iterated $ \sin $ -QUESTION [19 upvotes]: I've been trying to show the convergence or divergence of -$$ \sum_{n=1}^\infty \frac{\sin^n 1}{n} = \frac{\sin 1}{1} + \frac{\sin \sin 1}{2} + \frac{\sin \sin \sin 1}{3} + \ \cdots $$ -where the superscript means iteration (not multiplication, so it's not simply less than a geometric series -- I couldn't find the standard notation for this). -Problem is, - -$ \sin^n 1 \to 0 $ as $ n \to \infty $ (which I eventually proved by assuming a positive limit and having $ \sin^n 1 $ fall below it, after getting its existence) helps the series to converge, - -but at the same time - -$ \sin^{n+1} 1 = \sin \sin^n 1 \approx \sin^n 1 $ for large $ n $ makes it resemble the divergent harmonic series. - -I would appreciate it if someone knows a helpful convergence test or a proof (or any kind of advice, for that matter). -In case it's useful, here are some things I've tried: - -Show $ \sin^n 1 = O(n^{-\epsilon}) $ and use the p-series. I'm not sure that's even true. -Computer tests and looking at partial sums. Unfortunately, $ \sum 1/n $ diverges very slowly, which is hard to distinguish from convergence. -Somehow work in the related series -$$ \sum_{n=1}^\infty \frac{\cos^n 1}{n} = \frac{\cos 1}{1} + \frac{\cos \cos 1}{2} + \frac{\cos \cos \cos 1}{3} + \ \cdots $$ -which I know diverges since the numerators approach a fixed point. - -REPLY [18 votes]: A Google search has turned up an analysis of the asymptotic behavior of the iterates of $\sin$ on page 157 of de Bruijn's Asymptotic methods in analysis. Namely, -$$\sin^n(1)=\frac{\sqrt{3}}{\sqrt{n}}\left(1+O\left(\frac{\log(n)}{n}\right)\right),$$ -which implies that your series converges. -Edit: Aryabhata has pointed out in a comment that the problem of showing that $\sqrt{n}\sin^n(1)$ converges to $\sqrt{3}$ already appeared in the question Convergence of $\sqrt{n}x_{n}$ where $x_{n+1} = \sin(x_{n})$ (asked by Aryabhata in August). I had missed or forgot about it. David Speyer gave a great self contained answer, and he also referenced de Bruijn's book. De Bruijn gives a reference to a 1945 work of Pólya and Szegő for this result.<|endoftext|> -TITLE: Known bounds and values for Ramsey Numbers -QUESTION [6 upvotes]: Is there a good online reference that lists known bounds on Ramsey numbers (and is relatively up to date)? The wikipedia page only has numbers for $R_2(n,m)$. -I am specifically interested in known bounds and values for hypergraph Ramsey Numbers, i.e. 2 colorings of k-subsets (or 2 colorings of the edges of a complete k-uniform hypergraph). These are commonly denoted $R_k(m,n)$. A shallow internet search has yielded only a couple sets of papers and notes on bounds. - -REPLY [10 votes]: The best resource online is the (frequently updated) survey Small Ramsey Numbers by Stanisław Radziszowski, in The Electronic Journal of Combinatorics. Go to http://www.combinatorics.org/ and click on the link for "dynamic surveys". When I first wrote this answer (December 12, 2010), the survey version dated August 2009. As of this edit (June 8, 2014), the most recent update on the Ramsey numbers paper is dated January 12, 2014. The paper gives extensive references where one can find complete proofs or details of the computations involved. 
-Radziszowski himself is responsible for several improvements to the bounds listed there, and you may want to check his page for recent results not yet included. Although the emphasis of the paper is on exact values, it also includes references for asymptotics and general upper bounds. With respect to the latter, there have been significant recent advances (particularly, by Conlon and his collaborators), and you may want to check the pages of the authors listed on page 9 of the survey, for possible improvements. -I found through another answer in this site a link to Geoffrey Exoo's page (somewhat under construction, it seems), which contains additional improvements due to Exoo (mostly unpublished).<|endoftext|> -TITLE: Computer Programs for Pure Mathematicians -QUESTION [15 upvotes]: Question: Which computer programs are useful for a pure mathematician to familiarize themselves with? -Less Briefly: I was once told that, now-a-days, any new mathematician worth his beans knows how to TeX up work; when I began my graduate work one of the fourth year students told me he couldn't TeX, I was horrified! Similarly, a number of my peers were horrified when I told them I'd never used Matlab or Mathematica before. Currently, I can "get by" in all three of these programs, but it made me think: what programs are popular with pure mathematicians? I don't mean to limit this to computing things: programs which help to draw pictures and things can also be included. -Lest I Start A Flame War: This is not meant to be a "what is your favorite computer language" question or a poll on who thinks Mathematica is worse than Sage. This is meant to be a survey on what programs and resources are currently being used by pure mathematicians and may be good to look at and read up on. -I'm also unsure of how to tag this. - -REPLY [2 votes]: GAP (http://www.gap-system.org), which is, as it is written on the GAP website, an open source "system for computational discrete algebra, with particular emphasis on Computational Group Theory. GAP provides a programming language, a library of thousands of functions implementing algebraic algorithms written in the GAP language as well as large data libraries of algebraic objects. GAP is used in research and teaching for studying groups and their representations, rings, vector spaces, algebras, combinatorial structures, and more." -The core GAP system is redistributed with its GAP Packages, which extend further the functionality of the system and improve its performance. Some packages include binaries and require separate compilation. -To obtain GAP, go to Downloads . Windows users may use .exe installer with precompiled binaries. Unix and OS X users would have to compile GAP and packages.<|endoftext|> -TITLE: Trilateration with unknown fixed points -QUESTION [5 upvotes]: I am able to measure my distance to a set of (about 6 or 7) fixed but unknown points from many positions. -The difference in position between measurements is also unknown. -I believe that I should be able to work out the relative position of the fixed points, and therefore where I measured from and the path I took. -I have looked at the wiki page for trilateration, but it only gives examples working from known points. -Any help? - -REPLY [2 votes]: I'm elaborating on the answer Isaac already gave, so I'll adopt his notation. 
I'm posting a second answer to elaborate on the point of isometric transformations, but also because the question has been asked again and the author of that newer question found the existing answer insufficient.
-Suppose you have $n$ fixed points and perform $k$ distance measurements to all of these, taken from different points. Also suppose that you are working in the plane, although the ideas can be easily generalized to 3D space. Let's assume the fixed point with index $i$ is at position $(a_i,b_i)$ and the measurements with index $j$ were taken from position $(x_j,y_j)$; then the equations are of the form
-$$(a_i-x_j)^2 + (b_i-y_j)^2 = d_{ij}^2$$
-where $d_{ij}$ is the distance you measured to fixed point $i$ from position $j$ along the trajectory. So you have $2n$ variables $a_i,b_i$ and $2k$ variables $x_j,y_j$ together with $n\cdot k$ equations.
-But the whole setup is invariant under isometric transformations, which means you can't know the origin or orientation of your coordinate system since that coordinate system is completely arbitrary. Therefore you might simply fix some coordinate system, by choosing $x_1=y_1=y_2=0$. So the origin of the coordinate system is defined as your starting point, and the $x$ axis of the coordinate system is the direction from the first to the second point of your trajectory. This adds three more equations. (If you were operating in 3D instead of 2D, then you'd end up with 6 parameters you may choose arbitrarily, three for position and three for orientation.)
-A system of $n\cdot k+3$ equations with $2n+2k$ unknowns will often have a finite number of solutions if $2n+2k=n\cdot k+3$. At least if the equations are sufficiently independent. If the equations were linear, that solution would be unique, but as the equations are non-linear, there may be multiple solutions. If you have more than the required number of equations, that might help you pick one of these. I'll not discuss techniques of how to solve systems of non-linear polynomial equations, but I suggest you let some computer algebra system or some numerical tool designed for the task handle that.
-Of course, if you have more equations than absolutely required, and if your measurements are subject to some error (measurement error, rounding error, …) then you may end up with no matching solution at all. So you'll want the solution which is closest to your measurements, even though it doesn't exactly reproduce the observed measurements. This would place the problem in the domain of unconstrained non-linear optimization. You'd definitely want to tackle this numerically, and would be well advised to use a tool designed for that task.<|endoftext|>
-TITLE: Constructor And\Or-graph on function transition of the alternating automata
-QUESTION [7 upvotes]: In an And/Or-graph induced by the transition function, each node of $G$ corresponds to a state $q$ belonging to the set $Q$ of states of the automaton; for $q$ with $\delta(q,a)=q_1*q_2$, the node is a $*$-node with two successors $q_1$ and $q_2$. For $q\in\{true,false\}$, the node $q$ is a sink-node. Hence, if $*$ is $\vee$ then $q$ is a $\vee$-node, else $q$ is a $\wedge$-node. My problem is this: since the result of the transition function of an alternating automaton can combine several nodes with $\wedge$ and $\vee$ (example: $\delta(q_0,a)=(q_1 \vee q_2) \wedge q_3 \wedge q_4$), how does one build the And/Or graph from the transition function of an alternating automaton?
-
-The And/Or graph is defined as follows: A form of graph or tree used in problem solving and problem decomposition. The nodes of the graph represent states or goals and their successors are labeled as either AND or OR branches. The AND successors are subgoals that must all be achieved to satisfy the parent goal, while OR branches indicate alternative subgoals, any one of which could satisfy the parent goal.
-An alternating automaton, on the other hand, is an automaton with a transition function defined as follows: $\delta: S \times \Sigma \longrightarrow B^+(S)$ where $S$ is the set of states of the automaton, $\Sigma$ the alphabet and $B^+(S)$ is the set of positive Boolean formulas over $S$.
-
-REPLY [2 votes]: Your definition of the transition function $\delta$ is different from the usual definition for an alternating finite automaton (AFA). The usual definition is $\delta:S\times\Sigma\longrightarrow2^S$.
-If you use the usual definition, the problem you mention does not occur since each state $q\in S=(Q_\exists \cup Q_\forall)$ is either a $\vee$-node (element of $Q_\exists$) or a $\wedge$-node (element of $Q_\forall$). If $q_0$ is a $\vee$-node, then the boolean value of $q_0$ is $\bigvee_{q\in \delta(q_0,a)} q$. If $q_0$ is a $\wedge$-node, then the boolean value of $q_0$ is $\bigwedge_{q\in \delta(q_0,a)} q$. It should be clear how to construct an and-or tree from this definition.
-If instead you choose to use your definition, a solution would be to add more states when presented with an example like $\delta(q,a)=((q_1 \vee q_2) \wedge q_3 \wedge q_4 )$. We can convert each state so that it does not have both $\wedge$ and $\vee$ in the boolean expression. Consider adding a new state $q_5$ such that $\delta(q_5,a)=(q_1\vee q_2)$ and amending $\delta$ so that $\delta(q,a)=(q_5 \wedge q_3 \wedge q_4 )$. Now it is clear that $q$ is a $\wedge$-node (and that $q_5$ is a $\vee$-node). This effectively aligns your transition definition with the usual definition and your problem is removed.<|endoftext|>
-TITLE: Is $\mathbf{Grp}$ a concrete category?
-QUESTION [10 upvotes]: Is $\mathbf{Grp}$ a concrete category? I thought it was, but then the group of symmetries of a square and the quaternion group are both of order 8, and they are not isomorphic as groups. But sets of the same cardinality are isomorphic as sets, so the standard forgetful functor $(G, \cdot) \mapsto G$ is not faithful! Am I missing something here?
-
-REPLY [21 votes]: Yes, you have the wrong definition of a faithful functor. A functor F is faithful precisely when F sends morphisms injectively (on each hom-set), but nowhere is any kind of injectivity on objects required.<|endoftext|>
-TITLE: Trick to showing for which primes $p$ is $34$ a square?
-QUESTION [7 upvotes]: I'm going through an old final from 2003 on MIT's OpenCourseWare, and problem 6b is giving me a little trouble.
-It asks for which primes $p$ is $34$ a square modulo $p$. I approached it like this:
-$$\left(\frac{34}{p}\right)=\left(\frac{2}{p}\right)\left(\frac{17}{p}\right)=(-1)^{(p^2-1)/8}\left(\frac{p}{17}\right).$$
-I figure I can break it down into cases where $p\equiv 1,3,5,7\pmod{8}$. So if $p\equiv 1\pmod{8}$, then $(2|p)=1$, and thus I want $(p|17)=1$ as well. I calculated all the squares modulo $17$, and found them to be $1,2,4,8,9,13,15,16$.
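-A quick sanity check in Python reproduces this list of quadratic residues:
-
-    # squares of 1..16 reduced mod 17, deduplicated and sorted
-    print(sorted({pow(a, 2, 17) for a in range(1, 17)}))
-    # prints [1, 2, 4, 8, 9, 13, 15, 16]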
I suppose I could then go through all cases where $p\equiv 1\pmod{8}$, and $p\equiv 1,2,4,8,\dots\pmod{17}$, and then use the Chinese remainder theorem to find what $p$ is congruent to modulo $8\cdot 17$, but this seems very tedious to do for each case. First of all, is my method correct, and also, is there a better way to solve this question? Thank you.
-
-REPLY [8 votes]: You have all that you need right there:
-$$\left(\frac{34}{p}\right)=\left(\frac{2}{p}\right)\left(\frac{17}{p}\right)=(-1)^{(p^2-1)/8}\left(\frac{p}{17}\right).$$
-So if $p \equiv \pm 1 \pmod{8}$ then $p$ must be a quadratic residue mod 17, otherwise it must be a quadratic non-residue mod 17.<|endoftext|>
-TITLE: Symmetric polynomials and the Newton identities
-QUESTION [9 upvotes]: I want to write
-$P(x,y,z)=yx^{3}+zx^{3}+xy^{3}+zy^{3}+xz^{3}+yz^{3}$
-in terms of elementary symmetric polynomials, but I'm getting stuck at the first step. I know I should follow the proof of the fundamental theorem of symmetric polynomials using the Newton identities.
-First I pick out the 'biggest' monomial according to the lexicographical ordering: $yz^{3}$. Now I want to rewrite this as a polynomial in the elementary symmetric polynomials. I don't quite understand how to do this.
-
-REPLY [15 votes]: By Gauss's algorithm, if $\rm\ z^a\ y^b\ x^c\ $ is the highest w.r.t. lex order $\rm\ z > y > x\ $ then you subtract $\rm\ s_1^{a-b}\ s_2^{b-c}\ s_3^c\:.\:$ Thus since $\rm\ z^3\ y $ is highest you subtract $\rm s_1^{3-1}\ s_2^{1-0}\ s_3^0\ = (x+y+z)^2\ (xy+yz+zx)$ from $\rm\:P\:$. The result is smaller in lex order, so iterating this reduction yields a representation of $\rm\:P\:$ in terms of elementary symmetric polynomials $\rm\:s_i\:.\:$ Here the algorithm terminates in two more steps.
-As I mentioned in a prior post, Gauss's algorithm is the earliest known example of using lex-order reduction as in the Gröbner basis algorithm. For a nice exposition see Chapter 7 of Cox, Little, O'Shea: Ideals, Varieties and Algorithms. They also give generalizations to the ring of invariants of a finite matrix group $\rm G \subset GL(n,k)$. Here's an excerpt which, coincidentally, presents this example. You might find it helpful to first read the example at the end before reading the proof.<|endoftext|>
-TITLE: Direct sum of complexes
-QUESTION [7 upvotes]: How can I figure out the classical constructions (direct sum, product, pullbacks, and in general direct and inverse limits) in the category of chain complexes and chain maps (of abelian groups or any abelian stuff)? Because this category is abelian it must have (co)limits, mustn't it?
-In particular take
-$$
-\begin{gather*}
-\dots\to A_n\to A_{n-1}\to\dots\\
-\dots\to B_n\to B_{n-1}\to\dots
-\end{gather*}
-$$
-I would like to say that $\mathcal A\oplus\mathcal B$ is "what I want it to be", $A_n\oplus B_n$ with the obvious maps. But a standard argument doesn't allow me to conclude this: maybe it is false? A single word with the right reference will be enough to close the topic; I am now reading Hilton & Stammbach.
-Edit: I would like to add what I've tried to do, but it seems difficult not to invoke some diagrams, which I'm not able to draw without a suitable package here. However, first of all both complexes inject into the sum by maps which are part of a chain map, $\iota_n^A,\iota_n^B$. Then, consider another complex $\{C_n,\partial_n^C\}$ and a couple of chain maps $\{a_n\colon A_n\to C_n\}$, $\{b_n\colon B_n\to C_n\}$.
For each $n$ there exists a map $\alpha_n\colon A_n\oplus B_n\to C_n$ factoring the $a_n$s and the $b_n$s. Then I would like to show that the maps $\alpha_n$ are part of a chain map between the sum and the complex $\mathcal C$, but trying to prove it I can only conclude that in the diagram (the vertical arrows are the $\alpha_n$s)
-$$
-\begin{array}{ccc}
-A_n\oplus B_n &\xrightarrow{\partial_n^\oplus}& A_{n-1}\oplus B_{n-1} \\
-\downarrow && \downarrow\\
-C_n &\xrightarrow[\partial_n^C]{}& C_{n-1}
-\end{array}
-$$
-which I want to be commutative, aka $\alpha_{n-1}\partial_n^\oplus=\partial_n^C\alpha_n$, I have $\alpha_{n-1}\partial_n^\oplus\iota_n^A=\partial_n^C\alpha_n\iota_n^A$. How can I remove the iotas?
-
-REPLY [4 votes]: The category of complexes in an abelian category $\mathcal{A}$ is a full subcategory of $\text{Fun}({\mathbb{Z}},\mathcal{A})$, where $\mathbb{Z}$ is partially ordered under reverse inequality. So if we know how these constructions are performed in the category of functors from $\mathbb{Z}$ to $\mathcal{A}$, we'll have the natural candidates for the category of complexes. The standard result is that (co)limits in $\text{Fun}({\mathcal{D}},\mathcal{C})$, where $\mathcal{D}$ is a small category and $\mathcal{C}$ is a category, are computed pointwise. Take a look at Borceux's 'Handbook of Categorical Algebra, Vol. $1$', section $2.15$. There he explains the precise meaning of being computed pointwise.
-Since any abelian category is finitely (co)complete, we can compute any finite (co)limit in $\text{Fun}({\mathbb{Z}},\mathcal{A})$ pointwise. If we consider a (co)complete category, e.g., the category of modules over a ring, we can compute any (co)limit pointwise.
-If you think Borceux's book is too terse, there is a similar discussion in Rotman's 'An Introduction to Homological Algebra' on page $317$.
-Added: In order to remove the iotas you will need to prove that $\alpha_{n-1}\partial_n^\oplus\iota_n^B=\partial_n^C\alpha_n\iota_n^B$ as well. Now use the fact that there is only one morphism $\varphi: A_n \oplus B_n \rightarrow C_{n-1}$ such that $\varphi \iota_n^A = \alpha_{n-1}\partial_n^\oplus\iota_n^A$ and $\varphi \iota_n^B = \alpha_{n-1}\partial_n^\oplus\iota_n^B $.<|endoftext|>
-TITLE: Proof of the Euler Generalisation of Fermat's Little Theorem using modular arithmetic
-QUESTION [6 upvotes]: Fermat's Little Theorem, in Fermat-Euler form, states that: if $\gcd(a,m)=1$, then $a^{\varphi(m)} \equiv 1 \pmod{m}$.
-Now, I've been asked to prove it via modular arithmetic. In order to do this I understand that I'm to use two lemmas:
-
-$\varphi(p^n) = p^{n} - p^{n-1}$. This I can prove by working out that there are $p^n$ numbers up to $p^n$, of which $p^{n-1}$ are divisible by $p$.
-Given $\gcd(r,s) = 1$, $\varphi(rs) = \varphi(r)\varphi(s)$. This I'm having problems with.
-
-My lecturer suggested the proposition that $\varphi(n) = n\prod_{p|n}\left(1-\frac{1}{p}\right)$. I can re-arrange this to equal lemma 1, but I can't understand how that proposition proves 2, which my lecturer claims it does, nor why these two points together prove the theorem.
-I suspect I might follow the argument correctly if I fully understood lemma 2.
-
-REPLY [6 votes]: Here is another way to prove Euler's generalization. You do not need to know the formula for $\varphi(n)$ for this method, which I think makes it elegant.
-Consider the set of all numbers less than $n$ and relatively prime to it. Let $\{a_1,a_2,...,a_{\varphi(n)}\}$ be this set.
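-Before the general argument, the key mechanism is easy to check on a tiny case, say $n=10$ and $c=3$; a minimal Python sketch (the particular values are chosen purely for illustration):
-    from math import gcd
-    n, c = 10, 3                                   # any c coprime to n works
-    residues = [a for a in range(1, n) if gcd(a, n) == 1]
-    print(residues)                                # [1, 3, 7, 9]
-    print(sorted(c * a % n for a in residues))     # [1, 3, 7, 9]: the same set again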
-Consider a number $c < n$ relatively prime to it, i.e. $c \in \{a_1,a_2,\ldots,a_{\varphi(n)}\}$.
-First observe that for any $a_i$, $c a_{i} \equiv a_{j} \pmod{n}$ for some $j$.
-(True since $c$ and $a_i$ are themselves relatively prime to $n$, so their product has to be relatively prime to $n$.)
-And if $c a_{i} \equiv c a_{j} \pmod{n}$ then $a_i = a_j$.
-(True as cancellation can be done since $c$ is relatively prime to $n$.)
-Hence, if we now consider the set $\{ca_1,ca_2,...,ca_{\varphi(n)}\}$, this is just a permutation of the set $\{a_1,a_2,...,a_{\varphi(n)}\}$.
-Thereby, we have $\displaystyle \prod_{k=1}^{\varphi(n)} ca_k \equiv \prod_{k=1}^{\varphi(n)} a_k \pmod{n}$.
-Hence, we get $\displaystyle c^{\varphi(n)} \prod_{k=1}^{\varphi(n)} a_k \equiv \prod_{k=1}^{\varphi(n)} a_k \pmod{n}$.
-Now, note that $\displaystyle \prod_{k=1}^{\varphi(n)} a_k$ is relatively prime to $n$ and hence you can cancel it on both sides to get
-$$c^{\varphi(n)} \equiv 1 \pmod{n}$$ whenever $(c,n) = 1$.<|endoftext|>
-TITLE: Ring structure of $H^2(S^2 \vee S^4)$
-QUESTION [6 upvotes]: We know that $H^p(S^2 \vee S^4) = H^p(S^2)\oplus H^p(S^4)$ for $p\neq 0$. I want to show that this space has a different ring structure than $CP^2$. So, given a generator in $H^2(S^2 \vee S^4)$ I want to cup it with itself and get 0. My idea is to use the generator from $H^2(S^2)$ (which obviously is zero when squared). How should I go from here? Is this even the correct idea?
-($\vee$ here is the one-point union.)
-
-REPLY [3 votes]: Yes, that is the right idea. Use that the cup product is "natural" with respect to pull-backs.<|endoftext|>
-TITLE: Condition on degrees for existence of a tree
-QUESTION [13 upvotes]: Here is what I need to prove:
-
-Let $d_1,d_2,...,d_n$ be a sequence of natural numbers (>0). Show that $d_i$ is a degree sequence of some tree if and only if $\sum d_i = 2(n-1)$.
-
-I know that:
-1. for any graph $\sum_{v \in V}\ deg(v) = 2e$;
-2. for any tree $e=v-1$.
-From 1 and 2 it follows that for any tree $\sum_{v \in V}\ deg(v) = 2(v-1)$.
-If I understand it correctly, this is only half of the proof ($\rightarrow$), isn't it?
-Any hints on how to prove it the other way?
-Edit (induction attempt):
-
-$n=1$: we have $d_1 = 2(1-1) = 0$ and $d_1$ is a degree sequence of a tree.
-Let's assume the theorem holds for all $k<n$.<|endoftext|>
-TITLE: Can two different roots of an irreducible polynomial generate the same extension?
-QUESTION [10 upvotes]: Let $K$ be a field and $f(x)$ be an irreducible polynomial over $K$. Suppose $f(x)$ has degree at least $2$. Is it possible that if $a,b$ are two roots of $f(x)$ with $a\neq b$, then $K(a)=K(b)$? Note I need equality, not isomorphism.
-
-REPLY [11 votes]: I think it's worth elaborating on this distinction between equality and isomorphism. The problem occurs if the extension you're considering isn't normal and can therefore embed into an algebraic closure in more than one way. For example, the abstract field $\mathbb{Q}[x]/(x^3 - 2)$ embeds in three ways into $\mathbb{C}$, corresponding to the three roots of $x^3 - 2$. So it doesn't make sense to ask whether $\mathbb{Q}[x]/(x^3 - 2)$ is "equal to" $\mathbb{Q}[y]/(y^3 - 2)$ without specifying an embedding into a larger field.
-This is a fairly subtle point which I don't think is addressed particularly well in introductions to Galois theory (at least the ones I've seen). There are three categories one might work in when studying fields (where $K$ is a fixed field):
-
-The category of fields.
Here it is "evil" (really, impossible) to speak of the "equality" of two fields, and one can only speak of isomorphism.
-The category of (say, algebraic) field extensions $K \to L$. (The morphisms are morphisms $\phi : L \to L'$ making the obvious triangle commute.) Here it is still "evil" to speak of the "equality" of two extensions, and one can only speak of isomorphism, but the isomorphism type of an extension $K \to L$ is not determined by the isomorphism type of $L$ (that is, $L$ can be an extension of $K$ in more than one way).
-The category of subfields of $\bar{K}$ containing $K$ (for a fixed embedding $K \to \bar{K}$). This is the category in which it makes sense to take intersections and composita of fields; you can't do either of these constructions in the above category. Here, at last, one can speak of equality (as subfields of $\bar{K}$), and it is not the same as isomorphism of $K$-extensions, which is in turn not the same as isomorphism of abstract fields.
-
-People sometimes don't specify which of the above categories they're working in, and until you do this you can't be precise: there are three different notions of equality or isomorphism, one for each of the three categories.<|endoftext|>
-TITLE: Show the $R$-module $R$ is isomorphic to $Rb \times R(1-b)$ where $b$ is an idempotent of a commutative ring with unity
-QUESTION [5 upvotes]: Let $R$ be a commutative ring with unity and let $B(R)$ be the set of all idempotent elements in $R$.
-Show for $b\in B(R)$, the $R$-modules $R$ and $Rb \times R(1-b)$ are isomorphic to one another.
-
-REPLY [3 votes]: Let $b\in R$ be idempotent. Let $c=1-b$ for convenience. Then $b+c=1$ and $bc=0=cb$. Consider now, as you have suggested, the natural map $f:R \to Rb \times Rc$ given by $f(x) = (xb,xc)$. It is easy to see that $f$ is an $R$-module homomorphism.
-If $f(x)=0$ then $xb=0=xc$ and so $0=xb+xc=x(b+c)=x\cdot1=x$, which means that $f$ is injective.
-Given $(ub,vc) \in Rb \times Rc$, we want $x\in R$ such that $xb=ub$ and $xc=vc$. If this is the case, then $x=x\cdot1=x(b+c)=xb+xc=ub+vc$. So, take $x=ub+vc$. Then $xb=ub^2+vcb=ub$, because $b^2=b$ and $cb=0$. Similarly, $xc=vc$. Thus, $(ub,vc)=f(ub+vc)$ and $f$ is surjective.
-We have proved that $f$ is an isomorphism. (BTW, I don't think commutativity of $R$ is used here.)<|endoftext|>
-TITLE: When does the modular law apply to ideals in a commutative ring
-QUESTION [26 upvotes]: Let $R$ be a commutative ring with identity and $I,J,K$ be ideals of $R$. If $I\supseteq J$ or $I\supseteq K$, we have the following modular law
-$$ I\cap (J+K)=I\cap J + I\cap K$$
-I was wondering if there are situations in which the modular law holds with the hypothesis that $I$ contains at least one of $J,K$ relaxed.
-One example is when $R$ is a polynomial ring or power series ring and $I,J,K$ are monomial ideals.
-Of course one containment always holds: $I\cap (J+K)\supseteq I\cap J +I\cap K$. In what other situations does the other containment hold?
-
-REPLY [34 votes]: Such domains are known as Prüfer domains. They are non-Noetherian generalizations of Dedekind domains. Their ubiquity stems from a remarkable confluence of interesting characterizations. For example, they are those domains satisfying either the Chinese Remainder Theorem for ideals, or Gauss's Lemma for polynomial content ideals, or, for ideals: $\rm\ A\cap (B + C) = A\cap B + A\cap C\:,\ $ or $\rm\ (A + B)\ (A \cap B) = A\ B\:,\ $ or $\rm\ A\supset B\ \Rightarrow\ A\:|\:B\ $ for fin. gen. $\rm\:A\:$ etc.
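-In $\mathbb Z$ (a PID, hence a Prüfer domain) the identity $\rm\ (A + B)\ (A \cap B) = A\ B\ $ specializes to the familiar law that gcd times lcm equals the product, which a few lines of Python illustrate; a minimal sketch on principal ideals:
-    from math import gcd
-    def lcm(x, y):
-        return x * y // gcd(x, y)
-    a, b = 12, 18    # the ideals (12) and (18) of Z, chosen arbitrarily
-    # (a)+(b) = (gcd(a,b)), the intersection of (a) and (b) = (lcm(a,b)), (a)(b) = (ab)
-    assert gcd(a, b) * lcm(a, b) == a * b
-    print(gcd(a, b), lcm(a, b), a * b)   # 6 36 216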
It's been estimated that there are close to 100 such characterizations known, e.g. see my sci.math post for 30 odd characterizations. Below is an excerpt: -THEOREM $\ \ $ Let $\rm\:D\:$ be a domain. The following are equivalent: -(1) $\rm\:D\:$ is a Prüfer domain, i.e. every nonzero f.g. (finitely generated) ideal is invertible. -(2) Every nonzero two-generated ideal of $\rm\:D\:$ is invertible. -(3) $\rm\:D_P\:$ is a Prufer domain for every prime ideal $\rm\:P\:$ of $\rm\:D.\:$ -(4) $\rm\:D_P\:$ is a valuation domain for every prime ideal $\rm\:P\:$ of $\rm\:D.\:$ -(5) $\rm\:D_P\:$ is a valuation domain for every maximal ideal $\rm\:P\:$ of $\rm\:D.\:$ -(6) Every nonzero f.g. ideal $\rm\:I\:$ of $\rm\:D\:$ is cancellable, i.e. $\rm\:I\:J = I\:K\ \Rightarrow\ J = K\:$ -(7) $\: $ (6) restricted to f.g. $\rm\:J,K.$ -(8) $\rm\:D\:$ is integrally closed and there is an $\rm\:n > 1\:$ such that for all $\rm\: a,b \in D,\ (a,b)^n = (a^n,b^n).$ -(9) $\rm\:D\:$ is integrally closed and there is an $\rm\: n > 1\:$ such that for all $\rm\:a,b \in D,\ a^{n-1} b \ \in\ (a^n, b^n).$ -(10) Each ideal $\rm\:I\:$ of $\rm\:D\:$ is complete, i.e. $\rm\:I = \cap\ I\: V_j\:$ as $\rm\:V_j\:$ run over all the valuation overrings of $\rm\:D.\:$ -(11) Each f.g. ideal of $\rm\:D\:$ is an intersection of valuation ideals. -(12) If $\rm\:I,J,K\:$ are nonzero ideals of $\rm\:D,\:$ then $\rm\:I \cap (J + K) = I\cap J + I\cap K.$ -(13) If $\rm\:I,J,K\:$ are nonzero ideals of $\rm\:D,\:$ then $\rm\:I\ (J \cap K) = I\:J\cap I\:K.$ -(14) If $\rm\:I,J\:$ are nonzero ideals of $\rm\:D,\:$ then $\rm\:(I + J)\ (I \cap J) = I\:J.\ $ ($\rm LCM\times GCD$ law) -(15) If $\rm\:I,J,K\:$ are nonzero ideals of $\rm\:D,\:$ with $\rm\:K\:$ f.g. then $\rm\:(I + J):K = I:K + J:K.$ -(16) For any two elements $\rm\:a,b \in D,\ (a:b) + (b:a) = D.$ -(17) If $\rm\:I,J,K\:$ are nonzero ideals of $\rm\:D\:$ with $\rm\:I,J\:$ f.g. then $\rm\:K:(I \cap J) = K:I + K:J.$ -(18) $\rm\:D\:$ is integrally closed and each overring of $\rm\:D\:$ is the intersection of localizations of $\rm\:D.\:$ -(19) $\rm\:D\:$ is integrally closed and each overring of $\rm\:D\:$ is the intersection of quotient rings of $\rm\:D.\:$ -(20) Each overring of $\rm\:D\:$ is integrally closed. -(21) Each overring of $\rm\:D\:$ is flat over $\rm\:D.\:$ -(22) $\rm\:D\:$ is integrally closed and prime ideals of overrings of are extensions of prime ideals of $\rm\:D.$ -(23) $\rm\:D\:$ is integrally closed and for each prime ideal $\rm\:P\:$ of $\rm\:D,\:$ and each overring $\rm\:S\:$ of $\rm\:D,\:$ there is at most one prime ideal of $\rm\:S\:$ lying over $\rm\:P.\:$ -(24) For polynomials $\rm\:f,g \in D[x],\ c(fg) = c(f)\: c(g)\:$ where for a polynomial $\rm\:h \in D[x],\ c(h)\:$ denotes the "content" ideal of $\rm\:D\:$ generated by the coefficients of $\rm\:h.\:$ (Gauss' Lemma) -(25) Ideals in $\rm\:D\:$ are integrally closed. -(26) If $\rm\:I,J\:$ are ideals with $\rm\:I\:$ f.g. then $\rm\: I\supset J\ \Rightarrow\ I|J.$ (contains $\:\Rightarrow\:$ divides) -(27) the Chinese Remainder Theorem $\rm(CRT)$ holds true in $\rm\:D\:,\:$ i.e. 
a system of congruences $\rm\:x\equiv x_j\ (mod\ I_j)\:$ is solvable iff $\rm\:x_j\equiv x_k\ (mod\ I_j + I_k).$
-(28) Each finitely generated torsion-free $\rm\,D$-module is projective.<|endoftext|>
-TITLE: Motivation for Ramanujan's mysterious $\pi$ formula
-QUESTION [111 upvotes]: The following formula for $\pi$ was discovered by Ramanujan:
-$$\frac1{\pi} = \frac{2\sqrt{2}}{9801} \sum_{k=0}^\infty \frac{(4k)!(1103+26390k)}{(k!)^4 396^{4k}}\!$$
-Does anyone know how it works, or what the motivation for it is?
-
-REPLY [25 votes]: This is one of the most interesting results Ramanujan gave, and it has a very deep and beautiful theory behind it. Most references regarding this formula try to treat it in a high-handed manner using modular forms. Ramanujan himself got this formula by remaining within the limits of real analysis, and I have presented these ideas along with proofs in my blog post.
-Please note that the actual calculation to obtain the numbers 1103 and 26390 in the formula is difficult. In particular, no one knows how Ramanujan got 1103, and the modern approach to obtaining 1103 is based on numerical calculations.
-By Ramanujan's theory (explained in my blog post linked above) we can find infinitely many series of the form $$\frac{1}{\pi} = \sum_{n = 0}^{\infty}(a + bn)d_{n}c^{n}\tag{1}$$ where $a, b, c$ are certain specific algebraic numbers and $d_{n}$ is some sequence of rationals usually expressed in terms of factorials. The modern theory of modular forms allows us to get more details about their algebraic nature (say, for example, we can get the degree of the minimal polynomials of $a, b, c$). In the case of the current formula it can be shown that both $a, b$ must be quadratic irrationals and $c$ turns out to be a rational number. The calculation of $b, c$ is possible by formulas given by Ramanujan. It is the value of $a$ (related to $1103$) which is difficult to obtain. Now the modern approach goes like this. Since we know the values of $b, c$ and $\pi$ (via some other series calculation) we can find the numerical value of $a$. Knowing that it is a quadratic irrational, we can search for integers $p, q, r$ such that $a$ is a root of $px^{2} + qx + r = 0$. This way the quadratic equation is found and the root $a$ is then evaluated in algebraic form.
-There are direct formulas to calculate $a, b, c$, and we have two forms of such formulas. One of the forms is a finite formula which may require computations of an algebraic nature (so that effectively the value is expressible as a radical expression). The other formula is based on an infinite series/product approach which can give numerical values of $a, b, c$. While the algebraic formula for $b, c$ is easy to calculate, the algebraic formula for $a$ is very difficult to compute. Hence the modern approach relies on numerical calculation of $a$. But I very strongly suspect that Ramanujan, being an expert in radical manipulation, must have found the algebraic value of $a$ using a direct radical manipulation.
-In this regard also try to read the book "Pi and the AGM" by the Borwein brothers, who were the first to prove this formula of Ramanujan. Also see this answer on mathoverflow for the calculation of the constant $1103$.
-@Derek Jennings
-The general series given in MathWorld is the one discovered by the Chudnovsky brothers; it is a different series based on Ramanujan's ideas, but the series in the question under discussion cannot be obtained from this general formula of Chudnovsky.
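-As a numerical aside, the astonishing rate of convergence of Ramanujan's series is easy to witness: each term contributes roughly eight further correct digits. A minimal sketch using the mpmath library:
-    from mpmath import mp, mpf, sqrt, factorial
-    mp.dps = 30
-    s = sum(factorial(4*k) * (1103 + 26390*k) / (factorial(k)**4 * mpf(396)**(4*k))
-            for k in range(3))
-    print(9801 / (2 * sqrt(2) * s))   # agrees with pi to roughly 24 digits after only 3 terms
-    print(mp.pi)                      # for comparison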
A proof of this general series of Chudnovsky is presented in my blog post.<|endoftext|>
-TITLE: Clarify why all logarithms differ by a constant
-QUESTION [6 upvotes]: One of the rules of logarithms that I've been taught is:
-$\log_b(x)$ is equivalent to $\dfrac{\log_k(x)}{\log_k(b)}$.
-Recently I've also seen another rule that says:
-$\log_b(x)$ is equivalent to $\log_b(k)\log_k(x)$.
-Are these equivalent (perhaps via some refactoring)?
-
-REPLY [6 votes]: HINT $\: $ The first equation is $\rm\ log_K\ $ of $\rm\ B^{log_B\ X}\ =\ X$
-and the $\:$ second $\:$ equation is $\rm\ log_B\ $ of $\rm\ K^{log_K\ X}\ =\ X$<|endoftext|>
-TITLE: Generalizing Cauchy-Riemann Equations to Arbitrary Algebraic Fields
-QUESTION [7 upvotes]: Can it be done?
-For an arbitrary quadratic field $Q[\sqrt{d}]$, it's easy to show the equations are simply $ f_x = -\sqrt{d} f_y $, where $ f : Q[\sqrt{d}] \to Q[\sqrt{d}]$. I'm working on the case of $Q[\theta]$, when $\theta$ is a root of $\theta^3 - a\theta - b$, but I'm not sure if it's even possible. Has there been any mathematical research done on this topic? What do you think about it?
-
-REPLY [12 votes]: If you want to take derivatives in a rather general context, you can. For instance, let $K$ be any topological field. Then for any function $f: K \rightarrow K$ and any $x \in K$, we say the derivative of $f$ exists at $x$ if the usual limit
-$f'(x) = \lim_{h \rightarrow 0} \frac{f(x+h)-f(x)}{h}$
-exists. So if your topology on $\mathbb{Q}$ is the usual Archimedean one coming from restricting the Euclidean metric on $\mathbb{R}$, you can speak of continuous and differentiable functions $f: \mathbb{Q} \rightarrow \mathbb{Q}$.
-However, these functions lack most of the nice properties of the corresponding functions on $\mathbb{R}$ or $\mathbb{C}$, due to the lack of completeness of $\mathbb{Q}$. That is, a continuous (and even differentiable) function on a closed interval in $\mathbb{Q}$ need not satisfy the intermediate value property, need not be bounded, need not (if it is bounded) assume a maximum or minimum value, need not be uniformly continuous, need not satisfy the Mean Value Theorem or Taylor's Theorem, and so forth. So it is fair to ask why one would want to study differentiable functions on $\mathbb{Q}$.
-(I should say that it's not completely clear that there is no good answer to this. For instance, in the case of the $p$-adic field $\mathbb{Q}_p$, it is not so common to speak of or study differentiable functions. However, there is a nontrivial theory here, as I learned from Alain Robert's book on $p$-adic analysis. While it is not as essential as in the real or complex case, it has definitely been studied and written about.)
-The issue of defining partial derivatives over number fields is a bit more subtle, and here I think there are problems that the OP has yet to appreciate. Think about the Cauchy-Riemann equations on $\mathbb{C}$: Step 0 here is identifying $\mathbb{C}$ with $\mathbb{R}^2$ and a function $f: \mathbb{C} \rightarrow \mathbb{C}$ as a function $f: \mathbb{R}^2 \rightarrow \mathbb{R}^2$, i.e., a real function in several variables.
-More generally, let $(K,| \ |)$ be a normed field, i.e., $| \ |: K \rightarrow \mathbb{R}^{\geq 0}$ is such that $\rho(x,y) := |x-y|$ gives a metric on $K$ with the additional property that $|xy| = |x||y|$ for all $x,y \in K$.
And let $V$ be a finite-dimensional normed $K$-vector space, say of dimension $n$. Then a basic but perhaps underappreciated result is that the completeness of $K$ is essential to make an identification of normed $K$-linear spaces $V \cong K^n$. So, for example, the Euclidean norm $| \ |$ on $\mathbb{C}$ is equivalent to any product metric on $\mathbb{R}^2$.
-This property does not hold in general when we extend the Archimedean norm $| \ |$ on $\mathbb{Q}$ to an arbitrary number field $K$. In fact, things work out okay exactly if $K = \mathbb{Q}$ or $K$ is an imaginary quadratic field. So let's look at the next simplest case, that of a real quadratic field $K = \mathbb{Q}(\sqrt{D})$. In this case there are two norms on $K$ extending the Euclidean norm, say $| \ |_1$ and $| \ |_2$, corresponding to the two different embeddings of $K$ into $\mathbb{R}$. (So, for instance, if $|\sqrt{D}|_1$ is the positive square root of $D$, $|\sqrt{D}|_2$ is the negative square root of $D$.) Neither $(K,| \ |_1)$ nor $(K,| \ |_2)$ is equivalent, as a normed $\mathbb{Q}$-vector space, to $\mathbb{Q}^2$ with the product norm. Indeed, here is an even stronger statement: consider the (unique) embedding $\iota$ of $\mathbb{Q}$ into $K$. Then, with respect to the topology induced by either $| \ |_1$ or $|\ |_2$, $\iota(\mathbb{Q})$ is dense, since indeed both are dense in their completions, which are isomorphic to $\mathbb{R}$. On the other hand, the embedding of $\mathbb{Q}$ into $\mathbb{Q}^2$ via the diagonal, $x \mapsto (x,x)$, has closed image.
-So the very idea of partial derivatives here makes me nervous. An upshot of the above discussion is that, choosing the basis $\{1,\sqrt{D} \}$ for $\mathbb{Q}(\sqrt{D})$ over $\mathbb{Q}$, the ``directions'' $1$ and $\sqrt{D}$ are not metrically/topologically independent, even though they are independent in the sense of linear algebra.
-A final remark to make is that, to a number theorist like myself, it is very unnatural to choose a particular Archimedean norm $| \ |$ on a number field $K$. Rather, there is a finite set of equivalence classes of such norms ("Archimedean places") which can be determined by looking at the factorization of any polynomial $P(t) \in \mathbb{Q}[t]$ such that $K \cong \mathbb{Q}[t]/(P)$ over $\mathbb{R}$: if $P$ has $r$ real roots and $s$ complex-conjugate pairs of complex roots, then there are $r + s$ Archimedean places of $K$, and one needs to work with all of them at once in order to do topologically useful things. In particular, the natural embedding here is really from $K$ into $K \otimes_{\mathbb{Q}} \mathbb{R} \cong \mathbb{R}^r \oplus \mathbb{C}^s$. Note that this latter object is a field in exactly two cases: when $(r,s) = (1,0)$ (i.e., $K = \mathbb{Q})$ or when $(r,s) = (0,1)$ (i.e., $K$ is an imaginary quadratic field).<|endoftext|>
-TITLE: Probability of picking a random natural number
-QUESTION [15 upvotes]: I randomly pick a natural number n. Assuming that I would have picked each number with the same probability, what was the probability for me to pick n before I did it?
-
-REPLY [3 votes]: Hi :) I've always had a nagging suspicion about drawing (uniform) random numbers from an infinite set; I'm not convinced it's possible. Here's an intuition about why I'm sceptical, and it would be nice if someone could explain where I go wrong. My counterargument is structured as follows: if it's possible to draw random natural numbers (i.e.
from the whole infinite set) then there must be a probability of selecting an EVEN number; I argue that $Pr(even) = \frac{1}{2}$ AND $Pr(even) = 1$, causing a contradiction.
-First we need to define:
-URD: A "uniform random draw" for a set $S$ ($S = \mathbb{N} = \{1, 2, 3, ... \}$ in this case) is a random selection $x$ from $S$ such that every $y$ in $S$ has an equal chance of being picked. This is what we're doing in the video, and it's what we usually mean when we just say "random".
-Consider the statement '$Pr(\texttt{n even}) = 0.5$', where $n$ is a random element from $\mathbb{N}$ -- this is what I assume most people would agree with. I disagree that this is true; I think it depends on arbitrary selection rules.
-Firstly, the intuition we normally have in mind: if you represent a natural number as a rounded scalar from 1 to infinity, or as a finite string of bits defining a binary number (like your computer does), then it's true that $Pr(\texttt{n even}) = 0.5$: it boils down to whether the last bit is 0 (even) or 1 (odd), and since both 0 and 1 bits are equally likely, it follows that Pr(even) = Pr(not even), which must sum to 1, so Pr(even) = 1/2. You can jiggle this for $Pr(\texttt{n is a multiple of m})$ to get $\frac{1}{m}$. It's easy to see that this way of drawing numbers satisfies URD.
-There is another way to uniquely represent numbers, not just as a line or scale: we can represent $n$ as a product of primes via the fundamental theorem of arithmetic.
-Consider $n = 2^{a_1} \cdot 3^{a_2} \cdot 5^{a_3} \cdot 7^{a_4} \cdot 11^{a_5} \cdots$
-If we start by drawing random URD numbers $a_1, a_2, a_3, \ldots$ over $\{0, 1, 2, ....\}$ and then define $n$ as above, we have drawn a random $n$. In fact, since every different sequence $\{a_1, a_2, a_3, \ldots \}$ defines one, and EXACTLY one, natural number $n$, it also follows that this method of drawing $n$ should satisfy URD.
-But under this last method, $prob(\texttt{n is even}) = prob(\texttt{n not odd}) = 1 - prob(a_1 = 0) = 1 - 0 = 1$.
-If you want to be more formal, you can note the following observation: suppose you have $N$ (non-empty) sets $S_1, S_2, ...., S_N$. We want to choose a random element $X$ from the cartesian product set $S = S_1 \times S_2 \times S_3 .... \times S_N$. Then drawing $X$ is equivalent to drawing $N$ elements $s_i$ from $S_i$, and then setting $X = (s_1, s_2, ..., s_N)$.
-In effect, I've used this result in my argument above that $Pr(even) = 1$, because I've quoted the fundamental theorem of arithmetic and defined $S_i = \{\texttt{powers of the }i\texttt{th prime}\}$, e.g. $S_2 = \{1, 3, 9, 27, 81, ....\}$.
-Since I've shown that Pr(even) = 1/2 and shown that Pr(even) = 1, it follows that it makes no sense to even assign a probability to drawing random numbers from the infinite set of natural numbers.
-I assume I'm wrong somewhere, but I don't know where I've gone wrong?<|endoftext|>
-TITLE: What's Combinatorial Proof/Object/etc.?
-QUESTION [6 upvotes]: In high school, when we talked about "combinatorics," we solely meant "mathematics of choice." For instance:
-
-There are 10 people who want to sit around a table. In how many ways is this possible?
-We have 50 balls and 20 boxes. In how many ways can we distribute balls into the boxes?
-There are 10 apples and 5 oranges. In how many ways can we select 7 fruits?
-...
-
-I had this somewhat narrow view of combinatorics until I read about combinatorial objects (lists, sets, graphs, etc.) and combinatorial proofs.
I tried reading several sources (Wikipedia, books, papers), but still don't have a clear understanding of when something is called combinatorial.
-
-Could you please elaborate?
-
-REPLY [2 votes]: A combinatorial argument often consists in giving a bijection between two sets, or at least has such an observation as its key step. More precisely, one often starts with one set $A$ whose elements one wishes to count. You then find a bijection from $A$ to some other set $B$ whose elements are easily counted, leading to some expression for the number of elements of $A$.
-A very simple example would be the problem of counting the number of subsets of some finite set of size $n$; call this set of subsets $A$. We first notice there's a bijection from $A$ to the set $B$ of length-$n$ binary strings. We can easily count the number of elements of $B$; it's just $2^n$. Hence $A$ also has $2^n$ elements.
-More generally, a combinatorial argument proceeds by noticing that some statement to be proved is equivalent to the assertion that two sets are in bijection, and then by constructing an explicit bijection.<|endoftext|>
-TITLE: Help with calculating the angle to turn towards a target in a coordinate system
-QUESTION [6 upvotes]: I know the following:
-
-my own position
-my own facing (the angle I'm turned)
-my target's position
-
-What I would like some help with is how I calculate the shortest way to turn and the angle to turn. If my position was (4,4) and I faced 90 degrees and my target was at (1,1), I know by drawing it on some paper that the shortest way to rotate is counter-clockwise 135 degrees.
-But how do I calculate this? I've tried using tangent, but using sine seems just as easy. Any ideas?
-
-REPLY [10 votes]: Most computer systems have a function Atan2(x,y) which returns the polar angle of a point in the range $(-\pi,\pi]$ (check the endpoints: I think you get $+\pi$ for $(-1,0)$). If you subtract your position from the target position and take Atan2 of the difference, it will be the angle from you to the target. Now subtract your heading and you have the angle to turn. If the result is outside $(-\pi,\pi]$, add or subtract $2\pi$ to get into the range. The nice thing about Atan2, as opposed to the regular atan, is that it sorts out for you the signs and the branch of the tangent.
-You can use arcsin as well, but that requires measuring the distance to the target, which has an extra square root in it.<|endoftext|>
-TITLE: What are or where can I find style guidelines for writing math?
-QUESTION [20 upvotes]: I am a scientist writing my first manuscript with a substantial amount of mathematical methodological documentation.
-I am using LaTeX, but this is not my question.
-I would like to find a list of usage rules similar to Strunk and White. I can follow the practices that I see in my field, but it would be helpful to know the underlying rules and recommendations that will help me explain mathematical concepts in a reproducible and understandable way.
-Update: In addition to the excellent answers and resources below, this recent (2018) paper directly and comprehensively answers my question:
-Edwards, Andrew M., and Marie Auger‐Méthé. "Some guidance on using mathematical notation in ecology." Methods in Ecology and Evolution.
-
-REPLY [19 votes]: Here's what is on my shelf, in the order in which I picked them up:
-
-Mathematics into Type (Updated Edition). American Mathematical Society, Providence, RI, 1999, ISBN 0-8218-1961-5.
It includes what the usual submission/refereeing process is, how to mark manuscripts, how to space symbols, in-line equations, and display equations (and how to break long equations across lines). It does not discuss matters of style (how a mathematician usually says certain things), however. Available online. - -Handbook of Writing for the Mathematical Sciences by Nicholas J. Higham. Society for Industrial and Applied Mathematics, Philadelphia PA, 1993. ISBN 0-89871-314-5. It includes a section on Mathematical Writing (Chapter 3) and one on English Usage (Chapter 4), as well as tips on organizing a paper and the like. I'll add that Chapter 2 ("Writer's Tools and Recommended Reading") contains a wealth of references. - -A Primer of Mathematical Writing. Being a disquisition on having your ideas recorded, typeset, published, read, and appreciated by Steven G. Krantz. American Mathematical Society, Providence RI, 1997, ISBN 0-8218-0635-1. One full chapter devoted specifically to mathematical writing ("How to State a Theorem", "How to Prove a Theorem", "How to State a Definition", etc). - -Paul R. Halmos. How to write mathematics. Enseign. Math. 16 (1970), pp. 123-152. A very good read, cited by every other work mentioned above. Also available on the web, as a quick google search will reveal; e.g., here. - -How to write mathematics, corrected edition, by Norman E. Steenrod, Paul R. Halmos, Menahem M. Schiffer, and Jean A. Dieudonné. American Mathematical Society, Providence RI 1981, ISBN 0-8218-0055-8. A reprint of four papers on writing mathematics, including Halmos's paper mentioned above. - -Writing Mathematics Well: A Manual for Authors by Leonard Gillman. Mathematical Association of America, 1987, ISBN 0-88385-443-0 (page ix reads: "This manuscript was prepared by the author on an Apple Macintosh with the help of Mac$\Sigma$qn, a program for symbols and equations"; how times have changed...) - -Mathematical Writing, by Donald E. Knuth, Tracy Larrabee, and Paul M. Roberts. MAA Notes no. 14, The Mathematical Association of America, 1989, ISBN 0-88385-063-X; this is a bit of an odd duck, in my opinion. They are the class notes for a course on mathematical writing taught by Knuth. - - -There are other, more general guides; I keep a copy of Fowler's "Modern English Usage" and the "Oxford Guide to English Usage" always available, as well as a copy of the Chicago Manual of Style (14th edition) and of Strunk and White at home (recently joined by Eats, Shoots and Leaves: the zero tolerance approach to punctuation by Lynne Truss; now I can tell my students that the semi-colon was invented by the same guy who invented italics). Mary-Claire van Leunen's A Handbook for Scholars (revised edition) is a classic as well. But these are not specific to mathematics, and a lot of the advice has to be actively disobeyed to follow standard mathematical phraseology or uses (many a copywriter unaccustomed to mathematics has choked on "a Green's function"). -Added: I don't think you will necessarily pick up the traditional jargon/cadence of mathematics from any of the above sources, however. That's something that is picked up by osmosis, through hearing and reading a lot of mathematics. But this even happens across fields, not just across disciplines; one can often spot that someone is new to a particular field within mathematics by how he or she phrases certain statements, which is at odds with the usual practices of the sub-field in question. 
But the advice you get in any of the above (my recommendations, in no particular order, are Higham, Krantz, and Halmos for writing content, and Mathematics into Type for final preparation of the manuscript for submission, especially if you are submitting to a mathematics journal) should carry you through.<|endoftext|>
-TITLE: How to prove the error estimate of the Newton-iteration?
-QUESTION [6 upvotes]: I'm trying to get familiar with the Newton-iteration over here, but I got stuck at the proof of the error estimate.
-Let $f: [a,b] \rightarrow \mathbb{R}$ be twice continuously differentiable, concave or convex, with $f' \neq 0 \;\; \forall x \in [a,b]$. Let $\xi$ be the root of $f$. We define the Newton-iteration for $k \in \mathbb{Z}_{\geq 0}$:
-$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}$$
-Also, we assume $x_1 \in [a,b]$ for $x_0 = a$ and $x_0 = b$.
-I already showed that the sequence $(x_k)_{k \in \mathbb{N}}$ converges to $\xi$. Now, I want to show the following error estimate:
-$$|\xi - x_{k+1}| \leq \frac{\max_{a \leq x \leq b} |f''(x)|}{2 \min_{a \leq x \leq b} |f'(x)|} (x_{k+1}-x_k)^2$$
-I am quite sure I will have to combine the mean value theorem and Taylor's theorem (and Lagrange's remainder), but I have no idea how. I don't quite know at what point I should use Taylor's theorem; also, I don't know between which two points I should apply the mean value theorem.
-I'd be very happy if somebody could give me a little hint so that I can proceed.
-
-REPLY [2 votes]: We first apply the Taylor expansion of $f(x_{k+1})$ around $x_k$:
-$$f(x_{k+1})=f(x_k) + f'(x_k)(x_{k+1}-x_k)+R_2$$
-where $R_2$ is the remainder. We'll take Lagrange's remainder and we get to:
-$$f(x_{k+1}) = f'(x_k) \cdot \left(\frac{f(x_k)}{f'(x_k)} - x_k + x_{k+1}\right) + \frac{1}{2} f''(\eta)(x_{k+1} - x_k)^2$$
-where by definition $\frac{f(x_k)}{f'(x_k)}-x_k = -x_{k+1}$, and $\eta$ is some point between $x_{k+1}$ and $x_k$.
-So the first term vanishes and we can write:
-$$|f(x_{k+1})| \leq \frac{1}{2} \max_{a \leq x \leq b} |f''(x)| (x_{k+1}-x_k)^2$$
-Also, from the mean value theorem we know that:
-$$\min_{a \leq x \leq b} |f'(x)| \leq \frac{|f(x_{k+1}) - f(\xi)|}{|x_{k+1} - \xi|} = \frac{|f(x_{k+1})|}{|x_{k+1} - \xi|}$$
-It follows:
-$$|\xi - x_{k+1}| \leq \frac{1}{2} \frac{\max_{a \leq x \leq b} |f''(x)|}{\min_{a \leq x \leq b} |f'(x)|} (x_{k+1}-x_k)^2$$
-q.e.d.<|endoftext|>
-TITLE: The free group $F_3$ being a quotient of $F_2$
-QUESTION [9 upvotes]: Every finitely generated free group is a subgroup of $F_2$, the free group on two generators. This is an elementary fact, as is the fact that $G$, finitely presented, is a quotient of $F(|S|)$, the free group on some set of generators $S$ for $G$.
-My question is whether $F_3$, and hence any finitely presented group, is a quotient of $F_2$.
-
-REPLY [6 votes]: Here's a slightly different way, perhaps a little more sophisticated, to see this.
-Free groups are Hopfian, meaning that every surjective endomorphism is an isomorphism. There are a variety of ways to prove this. It's proved in Lyndon & Schupp, using Nielsen transformations. Alternatively, you can appeal to an (easy) result of Malcev, which states that every finitely generated, residually finite group is Hopfian.
-Now, there is an obvious epimorphism $F_3\to F_2$ with non-trivial kernel, given by killing a generator. If $F_3$ were a quotient of $F_2$, the composition of these two maps would give an epimorphism $F_2\to F_2$ with non-trivial kernel.<|endoftext|>
-TITLE: Array of numbers, how many solutions/ways?
-QUESTION [5 upvotes]: Let's say we have an array/matrix $n \times m$ and we need to find how many ways we can fill this array with numbers from $\{ 1, \ldots , m\cdot n \}$, such that:
-1) every number can be used only one time;
-2) every column and every row should be sorted in increasing order.
-For example, if we have $n = m = 3$, there are 42 different ways to do this.
-I was thinking about this problem, but I can't find any simple formula to compute this for every $n, m$.
-We can solve this problem by writing a program that uses the backtracking method/algorithm [1], but it has huge complexity, the problem is then solved by computer, and I want to know how to do this by hand.
-Somebody told me that in this problem I can use some kind of Catalan numbers [2] [3], but I'm not sure about this method.
-So, what do you think about this problem and its solution?
-[1] Backtracking (Wikipedia): http://en.wikipedia.org/wiki/Backtracking
-[2] Catalan numbers (Wikipedia): http://en.wikipedia.org/wiki/Catalan_number
-[3] Catalan numbers (OEIS): http://oeis.org/A000108
-
-REPLY [6 votes]: These are Young tableaux. OEIS has the numbers. For the square case, see A039622. For the rectangular case, see A060854.
-
-REPLY [5 votes]: You are counting the number of standard Young tableaux of shape $(m, m, m, ...)$ (where $m$ occurs $n$ times). The formula that counts these is called the hook length formula. It correctly reproduces the answer
-$$\frac{9!}{5 \cdot 4 \cdot 4 \cdot 3 \cdot 3 \cdot 3 \cdot 2 \cdot 2} = 42$$
-for the example you gave. The general formula seems annoying to write down. In the special case $n = 2$ you get Catalan numbers; this is a nice exercise, and it is worth trying to prove this first with and then without the hook length formula.<|endoftext|>
-TITLE: Getting my number theoretic series straight
-QUESTION [6 upvotes]: There are Artin $L$-series and Dirichlet $L$-series, and zeta functions for varieties and for number fields; there are a slew of objects named after Hecke... There are also various kinds of characters in these areas.
-I've always had a hard time keeping all of them straight. I would be very grateful if someone could put them in some sort of perspective that would make it clear what the role of each of those objects is with respect to the rest.
-If you need to use the language of Langlands, that would be acceptable. (Perhaps it is best to see it in those terms?)
-
-REPLY [15 votes]: First of all, one shouldn't distinguish too much between the terms $L$-series and $\zeta$-function; it is more or less a matter of history which term you use in a given context.
-The true distinction is between objects on the "motivic" (Diophantine equation or algebraic number theory) side, and objects on the "automorphic" side.
-So, Artin $L$-functions, and $\zeta$-functions and $L$-functions attached to varieties, are on the motivic side. They don't have obvious analytic continuations or functional equations.
-Dirichlet $L$-functions (i.e. the ones built out of characters of the multiplicative group mod $N$ for some $N$), Hecke $L$-functions (built out of ideal class group characters), $L$-functions of modular forms, or more generally automorphic forms, are automorphic $L$-functions. Except for the last (general automorphic $L$-functions) these can be proved to have analytic continuation with functional equations.
The natural limit of these analytic continuation/functional equation arguments (which begin with Riemann, with Tate's thesis being another major point en route) is the fact that standard $L$-functions for automorphic forms on $GL_n$ have analytic continuation/functional equations.
-Langlands's functoriality conjecture in particular conjectures that any (a priori more general) automorphic $L$-function is in fact a standard $L$-function. Many special cases are known (and Ngô got the Fields medal for proving the fundamental lemma, which is one tool in proving certain cases, namely those arising from endoscopy), but it is wide open in general.
-The reciprocity conjecture then states that any of the motivic $L$-functions are also actually standard $L$-functions. (E.g. for $L$-functions of elliptic curves over $\mathbb Q$, this becomes the statement that any such elliptic curve is associated to a weight 2 modular form, which was proved by Wiles et al.) This is also wide open in general.
-The special case of Artin $L$-functions, and some other special cases, are actually covered by both conjectures. (This is because, if you broaden your perspective sufficiently, as Langlands did, by allowing arbitrary reductive algebraic groups into the picture, then Artin $L$-functions can be thought of as a particular kind of automorphic $L$-functions. This is not so easy to see when you are just entering the subject, but one can think of just the Riemann $\zeta$-function: this is certainly motivic, being the $\zeta$-function of the variety Spec $\mathbb Q$, but is also automorphic, being the Dirichlet $L$-function for the trivial character.)
-But in general, the conjectures have a slightly different nature: functoriality can be thought of (if you want) purely in terms of harmonic analysis on adelic groups, whereas by its very nature reciprocity involves arithmetic geometry. In practice, at least at the moment, the two seem to be fairly intertwined. Functoriality certainly is a tool that can be very helpful in proving reciprocity; also, of those cases of the Artin conjecture which are known, some are proved using the functoriality view-point (e.g. Langlands--Tunnell) and some using the reciprocity view-point (e.g. Buzzard--Dickinson--Shepherd-Barron--Taylor).<|endoftext|>
-TITLE: Braid group: $B_4$ onto $S_4$; do I know the kernel is $P_4$, the pure braid group?
-QUESTION [7 upvotes]: I have an epimorphism $f:B_4\longrightarrow S_4$, from the braid group on 4 strands onto the symmetric group on 4 elements. Is it possible the kernel is not isomorphic to $P_4$, the pure braid group on 4 strands?
-
-REPLY [7 votes]: Following Ryan Budney's suggestion, I went ahead and proved the general case: whenever $B_n$ surjects onto $S_n$, the kernel is isomorphic to $P_n$.
-A proof sketch is this: the relations for the Artin generators in $B_n$ must be satisfied in the image. The relations $b_ib_{i+1}b_i=b_{i+1}b_ib_{i+1}$ can be rewritten in terms of conjugation, so that every $b_i$ has image of a fixed cycle structure.
-The relations which impose commutativity of non-adjacent generators imply that non-adjacent generators get sent to permutations with cycles either coincidental or disjoint.
-It can be shown that for $n>4$ the images of non-adjacent generators must actually be disjoint: mildly technical, but not difficult. (The $n=4$ case is easily solved by hand or with GAP.)
-Then counting every other generator $b_1,b_3,\ldots$, of which there are $\lceil \frac n 2\rceil$, we have that they must be transpositions, since $3\lceil \frac n 2\rceil>n$.
Essentially that's it: up to isomorphism of $S_n$ the images of the generators for $B_n$ are the usual transpositions they induce, so the kernel is $P_n$.<|endoftext|>
-TITLE: How can I prove this identity involving the digamma function?
-QUESTION [5 upvotes]: I'm trying to prove an identity involving the digamma function $\psi(z)$, but I can't seem to figure out a way to do it. Can anyone help me out? The identity is
-$$\psi\left(\frac{m}{2} + iy\right) + \psi\left(\frac{m}{2} - iy\right) = \psi\left(\frac{n}{2} + iy\right) + \psi\left(\frac{n}{2} - iy\right)$$
-where $m$ and $n$ are integers satisfying $m + n = 2$, and $y$ is any nonzero real number.
-I've tried looking at a couple of the integral representations of $\psi(z)$ listed on the Wikipedia page, but I haven't been able to figure it out from those. I think there's probably some simple integral identity I'm forgetting that would make the whole thing work out; for instance, if I could show that
-$$\int_0^\infty \frac{e^{nt/2}\cos(yt)}{\sinh(t/2)}\mathrm{d}t = \int_0^\infty \frac{e^{-nt/2}\cos(yt)}{\sinh(t/2)}\mathrm{d}t$$
-for $n \in \mathbb{Z}, y \neq 0$, I think I'd be set. So what I'm hoping to get is a pointer to some relation like that which I could use. Of course I'd be happy with a full proof if you prefer ;-)
-P.S. I know this sounds kind of "homework-y," but it's not for a homework assignment; I'm trying to verify a calculation in a physics paper.
-
-REPLY [8 votes]: The first relation can be rewritten as
-$$\psi\left(\frac{m}{2} + iy\right) + \psi\left(\frac{m}{2} - iy\right) = \psi\left(\frac{2-m}{2} + iy\right) + \psi\left(\frac{2-m}{2} - iy\right)$$
-or
-$$\psi\left(\frac{m}{2} + iy\right) + \psi\left(\frac{m}{2} - iy\right) = \psi\left(1-\left(\frac{m}{2} + iy\right)\right) + \psi\left(1-\left(\frac{m}{2} - iy\right)\right)$$
-If we use the reflection formula for the digamma function (which can be derived from the reflection formula for the gamma function)
-$$\psi(1-z)=\psi(z)+\pi\cot\pi z$$
-your equation simplifies to
-$$\cot\left(\frac{m\pi}{2}+i\pi y\right)+\cot\left(\frac{m\pi}{2}-i\pi y\right)=0$$
-or, since the cotangent is an odd function,
-$$\cot\left(\frac{m\pi}{2}+i\pi y\right)=\cot\left(-\frac{m\pi}{2}+i\pi y\right)$$
-which can be rewritten as
-$$\cot\left(\frac{m\pi}{2}+i\pi y\right)=\cot\left(\frac{m\pi}{2}+i\pi y-m\pi\right)$$
-and it is clear that both sides of the equation will agree only if $m\in\mathbb Z$, due to the periodicity of the cotangent.<|endoftext|>
-TITLE: Does a motive capture everything about an algebraic variety?
-QUESTION [6 upvotes]: Is the functor from the category of projective varieties over a field $k$ to the category of pure motives over $k$ faithful? (Perhaps it is not full.)
-Ditto:
-Is the functor from the category of affine varieties over a field $k$ to the category of mixed motives over $k$ faithful? (Perhaps it is not full.)
-
-REPLY [2 votes]: If you work with Grothendieck motives modulo rational equivalence, you even have simpler examples: the motive of any split (projective) quadric of odd dimension $n$ is the same as the motive of $\mathbb{P}^n$.<|endoftext|>
-TITLE: What are norms of sub-matrices invariant under a block diagonal similarity transformation of a block matrix?
-QUESTION [5 upvotes]: Say $M := \begin{pmatrix} A & B\\ C & D \end{pmatrix}$ is a block matrix with $A, D$ being square matrices, and thus $B$ and $C^T$ having the same shape.
Is there any norm characterizing the collection of $B$ and $C$ that is invariant under all block diagonal similarity transformations $M\to S M S^{-1}$ with $S=\begin{pmatrix}E & 0\\ 0 & F\end{pmatrix}$ of the whole block matrix? Were $B$ and $C$ square matrices I'd use e.g. the sum of squared eigenvalues, but is there something similar for non-square matrices?
-In brief, the requirements for the block norms I seek are:
-
-the norm of the square blocks $A$ and $D$ must be invariant (under these similarity transformations of $M$), respectively
-either there are individual norms for $B$ and $C$ which are invariant, or there is one combined norm depending on $B$ and $C$ which is invariant
-
-Is the latter requirement possible?
-
-Due to the block diagonal structure of $S$, all blocks transform independently, so there should be independent norms for $B$ and $C$. But since $B\to F B E^{-1}$ and $C\to E C F^{-1}$ are not similarity transformations, I couldn't even use eigenvalues if $B$ and $C$ were square matrices. However, $BC \to F BC F^{-1}$ and $CB \to E CB E^{-1}$ are similarity transformations; the remaining question is what kind of norm to use, and maybe also how to decide whether $BC$ or $CB$ or both are "significant"...
-
-REPLY [2 votes]: Let $X$ be an $m \times n$ matrix. A norm $\|X\|$ is called unitarily invariant if $\|UXV\| = \|X\|$ for appropriately shaped unitary matrices $U$ and $V$ (i.e., $U^*U=I_m$ and $V^*V=I_n$).
-Now, the singular value decomposition of $X$ tells us that if $\|\cdot\|$ is a unitarily invariant norm, it will be a function only of the singular values of $X$ (a so-called spectral function).
-A simple observation is that if $X$ is square and diagonalizable, then we can select $S=V$, the matrix of eigenvectors of $X$, and consequently the norms that you are looking for again must be unitarily (here orthogonally) invariant.
-Please have a look in the book Matrix Analysis by Horn and Johnson, who devote an entire chapter (or more) to matrix norms.<|endoftext|>
-TITLE: Best Strategy for a die game
-QUESTION [13 upvotes]: You are allowed to roll a die up to six times. Anytime you stop, you get the dollar amount of the face value of your last roll.
-Question: What is the best strategy?
-According to my calculation, for the strategy 6,5,5,4,4 the expected value is $142/27\approx 5.26$, which I consider quite high. So this might be the best strategy.
-Here, 6,5,5,4,4 means: in the first roll you stop only when you get a 6; if you did not get a 6 in the first roll, then in the second roll you stop only when you roll a number 5 or higher (i.e. 5 or 6), etc.
-
-REPLY [4 votes]: Let $X_n$ be your winnings in a game of length $n$ (in your case $n = 6$), if you are playing optimally. Here, "optimally" means that at roll $m$ you will accept if the value is greater than $\mathbb{E} X_{n-m}$, which is your expected winnings if you continued to play with this strategy.
-Let $X \sim \mathrm{Unif}(1,2,3,4,5,6)$ (you can also insert any distribution you like here). Then $X_n$ can be defined as $X_1 = X$ and, for $n \geq 2$,
-$$ X_n = \begin{cases} X_{n-1}, \quad \mathrm{if} \quad X < \mathbb{E}X_{n-1} \\
-X \quad \mathrm{if} \quad X \geq \mathbb{E}X_{n-1} \end{cases} $$
-So your decisions can be determined by computing $\mathbb{E} X_n$ for each $n$ recursively.
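-This backward recursion is short enough to carry out mechanically. Here is a minimal sketch in Python; the helper name die_game and the use of exact fractions are illustrative choices:
-    from fractions import Fraction
-
-    def die_game(n_rolls=6, sides=6):
-        # Backward induction: e tracks E[X_m], the value of an optimal game with m rolls.
-        e = Fraction(sides + 1, 2)       # E[X_1]: on the last roll, accept anything
-        cutoffs = [1]                    # smallest value accepted on the last roll
-        for _ in range(2, n_rolls + 1):
-            accept = [v for v in range(1, sides + 1) if v > e]
-            cutoffs.append(min(accept))  # smallest value worth stopping on
-            e = e * (sides - len(accept)) / sides + Fraction(sum(accept), sides)
-        return e, cutoffs[::-1]          # cutoffs listed from the first roll to the last
-
-    value, strategy = die_game()
-    print(strategy)                      # [6, 5, 5, 5, 4, 1]
-    print(value, float(value))           # 1709/324 5.2746913...
-With the default arguments this gives the value $1709/324 \approx 5.2747$ and the cutoffs $6,5,5,5,4$ followed by the forced final roll, consistent with the recursion just described.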
For the dice case, $\mathbb{E} X_1 = \mathbb{E}X = 7/2$ (meaning on the fifth roll, accept if you get more than $7/2$, i.e. 4, 5 or 6), and so
-$$\mathbb{E} X_2 = \mathbb{E} X_1 \mathrm{P}[X = 1,2,3] + \mathbb{E} [X | X \geq 4] \mathrm{P}[X = 4,5,6]$$
-$$ = (7/2)(3/6) + (4 + 5 + 6)/3 (1/2) = 17/4 $$
-So on the fourth roll, accept if you get more than $17/4$, i.e. 5 or 6, and so on (you need to round the answer up at each step, which makes it hard to give a closed form for $\mathbb{E} X_n$, unfortunately).<|endoftext|>
-TITLE: Partitioning the integers $1$ through $n$ so that the product of the elements in one set is equal to the sum of the elements in the other
-QUESTION [42 upvotes]: It is known that, for $n \geq 5$, it is possible to partition the integers $\{1, 2, \ldots, n\}$ into two disjoint subsets such that the product of the elements in one set equals the sum of the elements in the other. One solution is the following:
-Let $N = \{1, 2, \ldots, n\}$.
-If $n$ is even, take $P = \{1, \frac{n-2}{2}, n\}$ and $N-P$ as the two sets.
-If $n$ is odd, take $P = \{1, \frac{n-1}{2}, n-1\}$ and $N-P$ as the two sets.
-My question is this:
-
-Is this partition unique for infinitely many $n$?
-
-One might be able to prove an even stronger result, as I don't know of any other solutions.
-Update on progress: Derek Jennings has found another family of solutions for the case where $n$ is a multiple of 4, except for $n=8$, $28$, or $36$; see his answer below. And Matthew Conroy has verified that, for $n \leq 100$, the partition given above is unique only for $n = 5,6,7,8,9,13,18,$ and $34$.
-Background: The problem of proving that the partition is possible was posed several years ago as Problem 2826 in the journal Crux Mathematicorum, with solutions in the April 2004 issue. Every one of the 20 or so solvers (including me, which is why I'm interested in the question) came up with the partition given here. The person who originally posed the problem also asked if the partition is unique for infinitely many $n$. I don't think anyone ever submitted an answer to that latter question to Crux (although I cannot verify that, as I no longer have a subscription). I thought someone here might be able to give an answer.
-
-REPLY [2 votes]: There are no families of 2 elements of the form $\{an+b,cn+d\}$. Otherwise we would have $(an+b)(cn+d)+an+b+cn+d=\frac{n^{2}+n}{2}$, hence $bd+b+d=0$, $d(b+1)=-b$, $d=\frac{-b}{b+1}$, $ac=\frac{1}{2}$ and $bc+ad+a+c=\frac{1}{2}$, and substitution gives a quadratic in $b$ for which no solution yields an infinite solution set.
-Suppose a family of solutions of 3 elements were given by $\{an+b,cn+d,fn+e\}$, where $a,b,c,d,e,f$ are rational; it would apply for all $n$ such that $an+b,cn+d,fn+e$ are distinct integers $>0$ and $\leq{n}$.
-Their product plus their sum is to be $\frac{n^2+n}{2}$, therefore the parametric equations of the solution families would be:
-$acf=0$ (generalising, a family can have at most 2 elements with a non-zero $n$ coefficient, and the rest of the elements must be constant, so we can assume $f = 0$)
-$ebd+e+b+d=0$
-$eac=\frac{1}{2}$
-$ebc+eda+a+c=\frac{1}{2}$
-With 4 elements any family of solutions $\{an+b,cn+d,f,g\}$ would require:
-$acfg=\frac{1}{2}$
-$a+c+adgf+gcbf=\frac{1}{2}$
-$bdfg+b+d+f+g=0$<|endoftext|>
-TITLE: Asymptotic behavior of the first step in a best strategy
-QUESTION [8 upvotes]: Consider the game described here, but for a sequence $X_1,\ldots,X_n$ of i.i.d. uniform rv's on $\lbrace 1,\ldots,n \rbrace$ (in the original game $n=6$).
Using the original notation, let $a_n$ denote the first element in the best strategy $a_n,\ldots,a_2$. We saw that for $n=6$, $a_n = n$. Can you provide a heuristic explanation as to why $a_n < n$ for all sufficiently large $n$ (this is indicated by numerical results), or even much better, can you determine the behavior of $n - a_n$ as $n \to \infty$? No rigorous proof is required, only heuristic ideas. - -REPLY [3 votes]: Update: See the last section for a possible proof (**) that $a_n < n$ for all $n \geq 18$. - -Here are some bounds on $E(n,m)$ that turn out to be useful. -Using $E_{n,m}$ for $E(n,m)$ in Ross Millikan's notation, let $R_{n,m} = \lceil E_{n,m} \rceil - E_{n,m}$. -Simplifying the expression for $E_{n,m}$ in Ross's first answer yields (thanks to some nice cancellation) -$$E_{n,m} = \frac{n+1}{2} + \frac{E_{n,m-1}(E_{n,m-1}-1)}{2n} - \frac{R_{n,m-1}(R_{n,m-1}-1)}{2n}.$$ -Since $0 \leq R_{n,m-1} < 1$, we have $$0 \leq - \frac{R_{n,m-1}(R_{n,m-1}-1)}{2n} \leq \frac{1}{8n}.$$ This can be seen easily by the fact that the expression being bounded is quadratic in $R_{n,m-1}$ with vertex at $R_{n,m-1} = \frac{1}{2}$. -Therefore, $F_{n,m} \leq E_{n,m} \leq G_{n,m}$, where $F_{n,1} = G_{n,1} = \frac{n+1}{2}$, and -$$F_{n,m} = \frac{n+1}{2} + \frac{F_{n,m-1}(F_{n,m-1}-1)}{2n},$$ -$$G_{n,m} = \frac{n+1}{2} + \frac{G_{n,m-1}(G_{n,m-1}-1)}{2n} + \frac{1}{8n}.$$ -Thus we have recurrences that give upper and lower bounds on $E_{n,m}$ without having to deal with the problem of taking ceilings. -Numerical experiments indicate that - -$F_{n,n}$ and $G_{n,n}$ are very close to each other, -$G_{n,n} - F_{n,n}$ is decreasing, -$n - G_{n,n}$ is increasing, -$n - G_{n,n} > 1$ for $n \geq 15$. - -A proof of 3 or 4 would imply $a_n < n$ for $n \geq 15$. A close analysis of $G_{n,n} - F_{n,n}$, together with an asymptotic estimate of $F_{n,n}$ or $G_{n,n}$, would help with the requested behavior of $n - a_n$. -Also, it is easy to see that $F_{n,m} = n$ is an equilibrium solution for the $F_{n,m}$ recurrence. That should be helpful as well. - -It turns out that $G_{n,m} = \frac{1}{2} + n c_m$, where $c_1 = \frac{1}{2}$ and $c_m$ satisfies the recurrence $$c_m = \frac{c^2_{m-1}+1}{2}$$ (written $c_m$ here to avoid a clash with the $a_n$ of the question). This is easy to verify once one has the conjectured expression. -It also turns out that the $c_m$ recurrence has been studied (**), with the following bounds: -$$ 1 - \frac{2}{m} + \frac{2}{m^2} \ln \frac{m}{3} + \frac{417}{128m^2} \leq c_m \leq 1 - \frac{2}{m} + \frac{5 \ln m + 3}{2m^2}$$ -The upper bound implies -$$G_{n,n} \leq \frac{1}{2} + n \left(1 - \frac{2}{n} + \frac{5 \ln n + 3}{2n^2}\right) = n - \frac{3}{2} + \frac{5 \ln n + 3}{2n} \quad (*) $$ -Now, since $E_{n,n} \leq G_{n,n}$, $G_{n,n} < n-1$ implies $a_n < n$. The expression on the right in $(*)$ is less than $n-1$ when $$\frac{5 \ln n + 3}{2n} < \frac{1}{2},$$ -which is true for all $n \geq 18$. -(**) The bounds required for my argument are given in a post in the "Real Analysis Unsolved and Proposed Problems" forum at the Art of Problem Solving. I cannot tell whether the bounds are conjectured and the poster is asking for a proof, or whether the poster has a proof and is merely posing the problem for others to solve. So I cannot claim that this is a complete proof.<|endoftext|> -TITLE: A Laskerian non-Noetherian ring -QUESTION [33 upvotes]: A Laskerian ring is a ring in which every ideal has a primary decomposition.
The Lasker-Noether theorem states that every commutative Noetherian ring is a Laskerian ring (as an easy consequence of the ascending chain condition). - -And I've found the statement that there are non-Noetherian Laskerian rings, but I can't find an example. Any ideas? - -Edit. As the tag already suggested, I'm particularly interested in a commutative Laskerian non-Noetherian ring, but noncommutative examples are also welcome. It never hurts to know more counterexamples. - -REPLY [5 votes]: See I. Armeanu, On a class of Laskerian rings, Revue Roum. Math. Pures et Appl. XXII, -8, 1033–1036, Bucharest, 1977.<|endoftext|> -TITLE: Example of nonlinear regular function with constant nonzero Jacobian -QUESTION [5 upvotes]: Can anyone give a nonlinear regular function from $\mathbb{C}^2$ to $\mathbb{C}^2$ with a constant nonzero Jacobian? It seems to me that the only such functions are linear. -According to the Jacobian conjecture, a function from $\mathbb{C}^2$ to $\mathbb{C}^2$ with a constant nonzero Jacobian must have an inverse. - -REPLY [3 votes]: Any linear function has constant Jacobian determinant, as does any map of the form $(z_1,z_2) \rightarrow (z_1, z_2 - f(z_1))$ or $(z_1,z_2) \rightarrow (z_1 - f(z_2), z_2)$. As a result, any finite composition of maps of these forms will have constant Jacobian. These include the examples that Shai Covo and Andrew Marshall listed. I forget if there are known examples outside this category.<|endoftext|> -TITLE: Twin, cousin, sexy, ... primes -QUESTION [21 upvotes]: Twin, cousin, and sexy primes are of the forms $(p,p+2)$, $(p,p+4)$, $(p,p+6)$ respectively, for $p$ a prime. -The Wikipedia article on cousin primes says that, -"It follows from the first Hardy–Littlewood conjecture that cousin primes have the same asymptotic density as twin primes," but the analogous article on sexy primes does not make a similar claim. - -Q1. Are the sexy primes expected to have the same density as twin primes? -Q2. Is it conjectured that there are an infinite number of cousin and sexy prime - pairs? -Q3. Have prime pairs of the form $(p,p+2k)$ been studied for $k>3$? - If so, what are the conjectures? - -Thanks for information or pointers! - -REPLY [3 votes]: Q1. Are the sexy primes expected to have the same density as twin - primes? - -No, they are expected to have twice the density of the twin primes. This is because the gap $6$ is divisible by the odd prime $3$, which contributes an extra factor $\frac{3-1}{3-2}=2$ to the Hardy-Littlewood constant (the gaps $2$ and $4$ have no odd prime factor). The Hardy-Littlewood $k$-tuple conjecture provides a way to estimate the number of primes $p$ below a positive integer $x$ such that $p+6$ is also prime. If we denote this number by $\pi(x)_{(p,p+6)}$, we have: -$$ -\pi(x)_{(p,p+6)} \sim 4 \prod_{p \geq 3} \frac{p(p-2)}{(p-1)^2} \int_2^x \frac{dt}{(\log t)^2}. -$$ - -Q2. Is it conjectured that there are an infinite number of cousin and sexy prime pairs? - -Yes. - -Q3. Have prime pairs of the form $(p,p+2k)$ been studied for $k>3$? If so, what are the conjectures? - -Yes. In particular, the already mentioned Hardy-Littlewood conjecture provides a way to calculate an asymptotic density for such constellations, if indeed there are an infinite number.<|endoftext|> -TITLE: Collections of points containing only isosceles triangles -QUESTION [6 upvotes]: I've just been thinking about for what values of $n$ we can place $n$ points in the plane so that any three of those points define an isosceles triangle. An isosceles triangle, a square and a regular pentagon work for 3, 4 and 5, and to get 6 just place a point in the centre of the pentagon. -The question is can we do this for 7 points in the plane?
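-(A candidate configuration can be tested numerically by checking every triple for two equal squared side lengths. Here is a rough sketch; the tolerance and the pentagon-plus-centre test case are only illustrative:)
-    from itertools import combinations
-    from math import cos, sin, pi, isclose
-
-    def all_isosceles(points, tol=1e-9):
-        for p, q, r in combinations(points, 3):
-            d = sorted((a[0] - b[0])**2 + (a[1] - b[1])**2
-                       for a, b in ((p, q), (p, r), (q, r)))
-            # isosceles (or equilateral): at least two squared distances coincide
-            if not (isclose(d[0], d[1], abs_tol=tol) or isclose(d[1], d[2], abs_tol=tol)):
-                return False
-        return True
-
-    pentagon = [(cos(2*pi*k/5), sin(2*pi*k/5)) for k in range(5)]
-    print(all_isosceles(pentagon + [(0.0, 0.0)]))  # True: the 6-point example above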
- -REPLY [2 votes]: Here is a recent paper related to this topic: -http://www.cims.nyu.edu/~pach/publications/isosceles.ps<|endoftext|> -TITLE: Does the fact that every interval in $\mathbb{R}$ is connected imply that $\mathbb{R}$ is order-complete? -QUESTION [6 upvotes]: Suppose that every open interval in $\mathbb{R}$ is a connected set. Does this imply the least upper bound axiom? (i.e. every non-empty subset of $\mathbb{R}$ which is bounded above has a least upper bound) Is this true? In that case, how would you prove this? - -REPLY [2 votes]: Suppose not. -Take two intervals which witness that fact - namely $(a,b)$ and $(c,d)$ such that $b<c$.<|endoftext|> -TITLE: How many right angled triangles can a circle have? -QUESTION [5 upvotes]: Here's what I recall of the question from CNML Grade 11, 2010/2011 Contest #3, Question 7: - -There are 2010 points on a circle, - evenly spaced. Ford Prefect will* - randomly choose three points on the - circle. He will* connect these points - to form a shape. What is the - probability that the resulting shape - will* form a right angled triangle? - - -I answered $\frac{1}{4} = 25\%$, but that's probably incorrect. (Right?) -When I got home, I thought it out in my head, and I got this: -$\frac{2010 * (2010/1005)}{2010 \choose 3}$ -$\frac{2020050}{1351414120} = \frac{3015}{2017036} = 0.149476756984010201\%$ -I'm probably wrong ...again. Can anyone tell how to get the right answer (if I'm not wrong :) )? -*in the past of the future of the perfect present present time double into ripple fluctuater byer doininger of the past future continuum... -EDIT: Realized my mistake in copying the question. - -REPLY [2 votes]: Simplify it. There are 2010 points, but if you start with four points instead and pick 3 at random, you see the probability of getting a right triangle is 1 (draw it out, it helps). You can do it with six points too, and if you look at the probability for both figures you can create a formula to give the probability regardless of the number of points. 3/(n-1), where n is the number of points (I think it has to be even for this to work). With four points, 3/(4-1) = 1, six points 3/(6-1) = 3/5. If you have 2010 points then the probability would be 3/(2010-1) = 3/2009.<|endoftext|> -TITLE: Do your friends on average have more friends than you do? -QUESTION [17 upvotes]: I was watching this TED talk, which suggested that on average your friends tend to individually have more friends than you do. To define this more formally, we are comparing the average number of friends with: -average over each person p of: - friend popularity, defined as: - average over each friend f of p: - number of friends f has - -Intuitively, this seems to make sense. After all, if someone has a high number of friends, they will tend to increase friend popularity and affect a high number of people, while those people who decrease friend popularity only affect a low number of people. Does this result hold for all graphs? -Given a person p, let t stand for: -sum over each friend f of p: - number of friends f has - -It is pretty clear that sum(t)=sum(f^2), as a person with f friends contributes f to the t value of each of their f friends. -We are then trying to determine whether: sum(t/f)>sum(f) holds for all graphs. - -REPLY [16 votes]: The answer is yes, this holds for any graph (with weak inequality, as Jon points out). -Let's set up some notation. The graph of friendships is $G$. The set of vertices of $G$ (the people) is $V$; the set of edges (the friendships) is $E$.
For a person $v$, the number of friends that person has is $\deg v$. The total number of people is $n$. -We want to show that -$$\frac{1}{n} \sum_{v \in V} \deg v \leq \frac{1}{n} \sum_{v \in V} \frac{1}{\deg v} \sum_{(u,v) \in E} \deg(u).$$ -Cancel the $1/n$'s from both sides. After a little rewriting, we want to show that -$$\sum_{v \in V} \sum_{(u,v) \in E} 1 \leq \sum_{v \in V} \sum_{(u,v) \in E} \frac{\deg u}{\deg v}. \quad (*)$$ -Let's consider what a given edge $(u,v)$ contributes to each side of $(*)$. On the left, it contributes $1+1=2$. On the right, it contributes $(\deg u)/(\deg v) + (\deg v)/(\deg u)$. For any two positive numbers $x$ and $y$, we have $2 \leq x/y+y/x$. So every edge contributes at least as much to the right hand side of $(*)$ as to the left, and we have the claimed result.<|endoftext|> -TITLE: Simplify $\sum \limits_{k=0}^{n} \binom{n}{k} 2^{\sqrt{k}}$ -QUESTION [10 upvotes]: Can this sum be simplified: $\sum \limits_{k=0}^{n} \binom{n}{k} 2^{\sqrt{k}}$ -Or at least is there a simple fairly tight upperbound? -EDIT -So I think this sum is more easily bounded than I previously thought: -Clearly, $$\sum_{k=0}^{n} \binom{n}{k} 2^{\sqrt{k}} \leq 2^{\sqrt{n}} \sum_{k=0}^{n} \binom{n}{k} = 2^{n+\sqrt{n}} .$$ -Also, along the same lines as Shai Covo's answer, $\sum \limits_{k=0}^{n} \binom{n}{k} 2^{\sqrt{k}} \geq \binom{n}{n/2} 2^{\sqrt{n/2}}$. The central binomial coefficient $\binom{n}{n/2}$ is at least $\frac{2^n}{\sqrt{2n}}$, hence $$\sum_{k=0}^{n} \binom{n}{k} 2^{\sqrt{k}} \geq 2^{\sqrt{n/2}} \cdot \frac{2^n}{\sqrt{2n}} = \frac{2^{n + \sqrt{n/2}}}{\sqrt{2n}}$$ -So $\sum \limits_{k=0}^{n} \binom{n}{k} 2^{\sqrt{k}}$ is $O(2^{n + \sqrt{n}})$ and $\Omega(\frac{2^{n+\sqrt{n/2}}}{\sqrt{n}})$ -What about expressions of the form: -$\sum \limits_{k=0}^{n} \binom{n}{k} a^{\sqrt{k}} b^{\sqrt{n-k}}$? - -REPLY [2 votes]: The mass in the sum is approximately proportional to a (shifted) Gaussian centered at $k = n/2 + C\sqrt{n}$ and with standard deviation $A \sqrt{n}$ for explicitly calculable constants $C$ and $A$. This implies asymptotics of the form $(M + o(1))2^{n + \sqrt{n/2}}$ where $M$ is another computable constant. -The same is true for the $a,b$ version of the problem. -[edit: my calculations give $M = 2^{(\ln 2)/8} = 1.0618966...$. -[edit-2: similar calculations give $M=(a/b)^{\ln (a/b)/8}$ and asymptotics $(M+o(1))2^n (ab)^{\sqrt{n/2}}$ for the $a,b$ sum.]<|endoftext|> -TITLE: Entire "periodic" function -QUESTION [13 upvotes]: I am studying for exams and am stuck on this problem. - -Suppose $f$ is an entire function s.t. $f(z) =f(z+1)$ and $|f(z)| < e^{|z|}$. -Show $f$ is constant. - -I've deduced so far that: -a) $f$ is bounded on every horizontal strip -b) for every bounded horizontal strip of length greater than 1 a maximum modulus must occur on a horizontal boundary. - -REPLY [2 votes]: Consider -$$ -g(z) = \frac{f(z) - f(0)}{\sin(\pi z)} -$$ -This is an entire function, since $\sin(\pi z)$ has simple zeros at the integers, which are cancelled by the zeros of $f(z) - f(0)$, which also occur at every integer (by periodicity, $f(n) = f(0)$ for every integer $n$). We have $g(z + 2\ell) = g(z)$ for every integer $\ell$, and also $g(x + iy) \rightarrow 0$ when $|y| \rightarrow \infty$ and $|x| \leq B$ for any fixed $B$. Therefore $g$ is bounded. Hence by Liouville $g = C$ with $C$ constant. We must have $C = 0$ because otherwise $f$ is of order greater than $e^{|z|}$ (i.e. it would be of order at least $e^{\pi |z|}$). Therefore $C = 0$ and $f(z) = f(0)$ as desired.<|endoftext|> -TITLE: What's the cardinality of all sequences with coefficients in an infinite set?
-QUESTION [10 upvotes]: My motivation for asking this question is that a classmate of mine asked me some kind of question that made me think of this one. I can't recall his exact question because he is kind of messy (both when talking about math and when thinking about math). -I'm kind of stuck though. I feel like the set $A^{\mathbb{N}} = \{f: \mathbb{N} \rightarrow A, f \text{ is a function} \}$ should have the same cardinality as the power set of $A$, if $A$ is infinite. On the other hand, in this post, it is stated that the sequences with real coefficients have the same cardinality as the reals. -It's easy to see that $A^{\mathbb{N}} \subseteq P(A)$, but (obviously) I got stuck on the other inclusion. Is there any general result that says anything else? References would be appreciated. -EDIT To clarify the intention of this question: I want to know if there are any general results on the cardinality of $A^{\mathbb{N}}$ other than it is strictly less than that of the power set of $A$. -Also, I was aware that the other inclusion isn't true in general (as the post on here I linked to gave a counterexample), but thanks for pointing out why too. :) - -REPLY [5 votes]: Arturo Magidin's answer has the general theorem. Here are two more facts that can be useful: - -If $\aleph_0 \leq \lambda$ and $2 \leq \kappa \leq \lambda$ then $\kappa^\lambda = 2^\lambda = |P(\lambda)|$ -If $\aleph_0 \leq \lambda \leq \kappa$ then $\kappa^\lambda = |\{ X \subseteq \kappa : |X| = \lambda \}|$<|endoftext|> -TITLE: Can the tensor product of two non-free abelian groups be non-zero free? -QUESTION [8 upvotes]: It's pretty easy to construct an ($R$-$S$) bi-module $M$ and a left $S$-module $N$ such that neither $M_S$ nor $N$ is a projective $S$-module, but the tensor product $M \otimes_S N$ is a non-zero projective $R$-module. -However, taking $R=S= \mathbb{Z}$ defies my zoo of examples. Here projective = free, a class closed under direct sums and summands, and so it seems like $M$ and $N$ can be chosen indecomposable. The indecomposable abelian groups I know don't seem to fit. I don't know how to "uninvert" using a tensor product. - -Can the tensor product of two non-free abelian groups be non-zero free? - -Inspired by an AoPS question and finals week. - -REPLY [9 votes]: The tensor product $M\otimes N$ of abelian groups cannot be non-zero and free unless both $M$ and $N$ are free. This follows from the following facts. - -Any subgroup of a free abelian group is free. This holds even in the infinitely generated case. Wikipedia has a proof in its article on free abelian groups. -Under certain conditions on $M$ and $N$, for non-zero $n\in N$ the homomorphism -$$ -\begin{align} -&M\to M\otimes N,\\ -&m\mapsto m\otimes n, -\end{align} -$$ -is injective. It is easy to see that this holds when $N$ is free and $n$ is a basis element. It also holds when $M$ and $N$ are both torsion-free. You could prove this by extension of scalars to reduce it to the case of vector spaces over the rationals (and vector spaces are always free). This does not hold if you merely assume that $N$ is torsion-free, as pointed out by Jack Schmidt in a comment (and I must apologize for making the mistake of assuming this in my original answer). -Tensor products are right exact. That is, if $0\to A\to B\to C\to 0$ is an exact sequence of abelian groups, then -$$ -M\otimes A\to M\otimes B\to M\otimes C\to 0 -$$ -is exact.
In particular, if $M\otimes A$ maps to zero, this shows that $M\otimes B$ and $M\otimes C$ are isomorphic, which I will use to quotient out the torsion subgroups before applying 2. - -Now, suppose that $M\otimes N$ is non-zero and free. -Let $T$ be the torsion subgroup of $N$, so that $N/T$ is torsion-free and $0\to T\to N\to N/T\to 0$ is an exact sequence. By 3, -$$ -M\otimes T\to M\otimes N\to M\otimes(N/T)\to 0 -$$ -is exact. However, as $T$ is torsion, $M\otimes T$ will also be torsion, so its image in the torsion-free group $M\otimes N$ is zero. This gives an isomorphism $M\otimes N\cong M\otimes(N/T)$. Applying the same argument with the torsion subgroup $S$ of $M$ shows that $(M/S)\otimes(N/T)$ is isomorphic to the free group $M\otimes N$. Then, as $M/S$, $N/T$ are torsion-free, picking any non-zero $n\in N/T$ and using 2 gives an injection -$$ -\begin{align} -&M/S\to (M/S)\otimes(N/T),\\ -& m\mapsto m\otimes n. -\end{align} -$$ -So $M/S$ is isomorphic to a subgroup of the free group $(M/S)\otimes(N/T)$ which, as stated in 1, means that $M/S$ is free. Similarly, $N/T$ is free. Applying 2 again, the homomorphism -$$ -\begin{align} -&M\to M\otimes(N/T),\\ -& m\mapsto m\otimes n -\end{align} -$$ -is injective for any basis element $n\in N/T$. So $M$ is isomorphic to a subgroup of the free abelian group $M\otimes(N/T)\cong M\otimes N$ and, using 1 once more, must be free. Similarly, $N$ is free.<|endoftext|> -TITLE: Dual of Sobolev space $W^{1,p}(U)$ for $U$ an arbitrary subset of $\mathbb R^n$ -QUESTION [9 upvotes]: This question may be shameful, but nevertheless I can't help myself. -Let $U \subset \mathbb R^n$ be arbitrary, in particular not the whole of the space itself. I wonder about the dual of the space $W^{1,p}(U)$, for $p < \infty$. -For $U = \mathbb R^n$, we have $(W^{1,p})' = W^{-1,p'}$ with $p' = \frac{p}{p-1}$. How about different $U$? -For example, in case $U = B_1(0)$ being the closed unit ball, it seems the dual is not a function space. Just recall that the trace is well-defined, linear and continuous on $W^{1,p}(U)$ and, with $S_1$ the boundary of $B_1(0)$ and $w \in L^{p'}(S_1)$, we are given a continuous linear functional by -$ W^{1,p}(B_1(0)) \longrightarrow \mathbb C \, , f \mapsto \int_{S_1} w \cdot \operatorname{tr} f \, dx $. -In fact, I wouldn't be surprised if the above example were somehow prototypical, but I have no clue how to proceed from this point. I regard this as relevant, as these spaces are ubiquitous in analysis. -Thank you! - -REPLY [3 votes]: How about embedding $W^{1,p}$ in $L^p\times (L^p)^n$ and using Hahn-Banach and Riesz' representation theorem to get a nice characterization of elements in the dual?<|endoftext|> -TITLE: Proving $V_{\kappa}$ is a model of ZFC for inaccessible $\kappa$ -QUESTION [11 upvotes]: Prove that if $\kappa$ is an inaccessible cardinal, then $V_{\kappa}$ satisfies all the axioms of ZFC. -How is this done for the axiom of choice and for regularity? - -REPLY [18 votes]: Let's solve the general problem surrounding the question, -with a few observations, each of them easy to see. - -Every $V_\alpha$ for any ordinal $\alpha$ satisfies -Extensionality and Foundation, since all transitive sets satisfy -Extensionality and Foundation. -Every $V_\alpha$ satisfies Separation, for the simple reason that -$A\subset B\in V_\alpha\implies A\in V_\alpha$. -Every $V_\alpha$ satisfies Union, since $A\in - V_\alpha\implies \bigcup A\in V_\alpha$.
-Every $V_\lambda$ for a limit ordinal $\lambda$ -satisfies Pairing and Powerset, since the required set is -added at the next stage, still below $\lambda$. -Every $V_\alpha$ satisfies the Axiom of Choice (assuming -this holds in $V$) in the choice-set version, since if -${\cal A}\in V_\alpha$ is a family of disjoint sets, then all choice -sets $B\subset \bigcup {\cal A}$ selecting one element from each $A\in{\cal A}$ -have rank no higher than that of ${\cal A}$, and so $B\in - V_\alpha$. -Every $V_\alpha$ with $\omega\lt\alpha$ satisfies -Infinity, since $\omega\in V_\alpha$. -The only remaining axiom is Replacement, and this is the -only one that makes use of inaccessibility. But if -$\kappa$ is inaccessible, then $V_\kappa$ satisfies -Replacement, since if $A\in V_\kappa$ and $F:A\to - V_\kappa$ is definable over $V_\kappa$, then $F''A$ has -bounded rank below $\kappa$, since $|A|\lt\kappa$ and -$\kappa$ is regular. Thus, $F''A\in V_\kappa$ as desired. - -Finally, one can also consider the question about -$H_\delta$, the sets of hereditary size less than $\delta$, -and things are a bit nicer here in several ways. - -For any regular uncountable cardinal $\delta$, the set $H_\delta$ of -sets having hereditary size less than $\delta$ satisfies -$ZFC^-$, that is, all of ZFC except the Powerset axiom. -One gets the easy axioms easily; Separation is easy since the subset also has small hereditary size; -and Replacement follows from the fact that the union of fewer -than $\delta$ many sets in $H_\delta$ still has size less -than $\delta$ by the regularity of $\delta$. - -In particular, this shows that ZFC proves that there are -numerous transitive models of $ZFC^-$. - -If $\kappa$ is inaccessible, then $V_\kappa=H_\kappa$, -and this satisfies full ZFC, since we get $ZFC^-$ in -$H_\kappa$, and we get power set since $\kappa$ is a -strong limit.<|endoftext|> -TITLE: Determine if the coordinates of a point are within an irregular quadrilateral whose corners are defined by coordinates -QUESTION [6 upvotes]: Given four coordinates that define the corners of an irregular quadrilateral and a point defined by its coordinates, what is the simplest way to determine if the point is within or outside of the quadrilateral? - -REPLY [4 votes]: Although the links provided in some sense answer the question, the specific question can be answered without the full force of a point-in-polygon computation. I would recommend this. -Compute whether each angle of your quad $(a,b,c,d)$ is convex or reflex. If one is reflex -(say $a$), connect it to the opposite vertex $c$. If all are convex, choose any diagonal; e.g., $(a,c)$. Now you have partitioned your quad into two triangles. Check if your point is in either triangle, by checking if it is left-of-or-on each of its three edges.<|endoftext|> -TITLE: Showing properties of discontinuous points of a strictly increasing function -QUESTION [5 upvotes]: Let $f: \mathbb{R} \rightarrow \mathbb{R}$ be strictly monotonically increasing. -(i) If $f$ is not continuous at $p \in \mathbb{R}$, there exists a non-empty, open interval $(a_p, b_p) \subset \mathbb{R}$ such that $f(x)\leq a_p$ for all $x < p$ and $f(x) \geq b_p$ for all $x > p$. -(ii) The set of discontinuity points $$ \{ p \in \mathbb{R} | f \; \mbox{is not continuous at} \; p \}$$ is countable. -For (i) I could be way off, but I am picturing a graph with $p$ on the $x$-axis for which the value $f(p)$ on the $y$-axis is undefined.
Am I correct to interpret the open interval $(a_p, b_p)$ as an interval on the $y$-axis which should be contained within the distance between two $f(x)$'s (one for $x<p$ and one for $x>p$)? If that is so far correct, there could be 2 types of discontinuous points $p$, a jump or removable type. For the jump it would be easier to show that somehow the interval $(a_p, b_p)$ is smaller than the vertical jump... For a removable discontinuity $c$, I would think that the interval could contain just the $y$-axis value $\displaystyle \lim_{x\to c}f(x)$ but I don't really know how to express that the upper and lower bounds would be just above and below that... -With (ii) I am currently trying to understand a proof, what exactly does the notation $f(p-)$ or $f(p+)$ mean in this context? Is it simply the value when approached from the left or the right (respectively)? - -REPLY [2 votes]: Revised answer: -(i) $x < p \Rightarrow f(x) < f(p)$ (because $f$ is strictly monotonically increasing). -Because this set of values is bounded from above there exists a supremum, namely $f(p-)$. -Likewise, $x > p \Rightarrow f(x) > f(p)$ (because $f$ is strictly monotonically increasing). -Because this set of values is bounded from below there exists an infimum, namely $f(p+)$. -Let $a_p = f(p-)$ and $b_p = f(p+)$. -$\Rightarrow f(x) < a_p \ \forall\ x < p$ and $f(x) > b_p \ \forall\ x > p$. -Furthermore, $p$ discontinuous $\Rightarrow f(p-) \neq f(p+)$, $f$ strictly monotonically increasing $\Rightarrow f(p+) - f(p-) > 0 \Rightarrow (a_p, b_p) \neq \emptyset$. -(ii) For every point of discontinuity $p_i$, there exists (by (i)) an interval $(a_{p_i}, b_{p_i})$. These intervals are pairwise disjoint, since $f$ is increasing. Because $\mathbb{Q}$ is dense in $\mathbb{R}$, every such interval contains an element of $\mathbb{Q} \Rightarrow$ the set of discontinuity points has cardinality at most equal to that of $\mathbb{Q} \Rightarrow$ the set of discontinuity points is countable.<|endoftext|> -TITLE: Number of terms in a monomial symmetric polynomial -QUESTION [7 upvotes]: Is there a closed form expression for the number of terms in a monomial symmetric polynomial in a given number of variables for a particular partition of exponents, in terms of which/how many exponents are distinct? -I feel the answer should be straightforward, and it's probably just a contrived statement of a more elementary number theoretic question, but I'm just drawing a blank at the moment. I at first thought that the answer might be -$$\frac{N!}{(N-k+1)!}$$ -for $N$ variables and $k$ distinct exponents, which works for all $N=2$ and $N=3$ cases, but predicts 4 terms for a partition $\alpha=(1,1,0,0)$, while there are in fact 6: -$$m_\alpha(a,b,c,d) = ab+ac+ad+bc+bd+cd$$ -Am I being an idiot? - -REPLY [6 votes]: If the smallest exponent appears $k_1$ times, the second smallest appears $k_2$ times, etc., then the answer is the multinomial coefficient ${n \choose k_1,k_2,\ldots,k_r}$, where $r$ is the number of distinct exponents and $k_1+\cdots+k_r=n$. -The multinomial coefficient ${n \choose k_1,k_2,\ldots,k_r}$ counts how many ways there are to partition $n$ objects into disjoint subsets of sizes $k_1,\ldots,k_r$, which is just what you need. - -REPLY [5 votes]: Suppose the number of indeterminates is $n$. Suppose the monomial symmetric polynomial is of type $(a_1,a_2,...,a_n)$ where the first $b_1$ exponents are identical, the next $b_2$ are identical and so on until $b_r$, where $1\leq b_j \leq n$ and $\sum_{1}^{r} b_j = n$. Then the number of terms in a monomial symmetric polynomial is the same as the number of permutations of the set of exponents.
The number of permutations when they are all distinct is $n!$. If not, then permutations of any collection of identical exponents amongst themselves should be treated as the same. So factoring out such permutations, we get the total number to be $\frac{n!}{b_1!\cdot b_2!\cdot...\cdot b_r!}$ -For example, consider the examples on the wikipedia page. -We have $n=3$. In the first case, we have 2 identical exponents, so the number of terms is $\frac{3!}{2!}=3$. -In the second case, all exponents are distinct, so the number of terms is $3!=6$.<|endoftext|> -TITLE: Poincare Duality Reference -QUESTION [29 upvotes]: In Hatcher's "Algebraic Topology" in the Poincaré Duality section he introduces the subject by doing orientable surfaces. He shows that there is a dual cell structure to each cell structure and it's easy to see that the first structure gives the cellular chain complex, while the other gives the cellular cochain complex. He goes on to say that this generalizes for manifolds of higher dimension, but that "requires a certain amount of manifold theory". Is there a good book or paper where I can read about this formulation of Poincaré Duality? - -REPLY [2 votes]: Another German textbook including full details of the geometric proof of Poincaré duality is: -Ralph Stöcker, Heiner Zieschang: Algebraische Topologie (1988) -Contrary to the claim frequently found in sketches of the argument, the authors stress that the dual ‘cells’ are not in general cells in the topological sense: - -“Examples and sketches in dimensions $\leq$ 3 suggest that for any $q$-simplex $\sigma$ [the dual ‘cell’] is an $(n-q)$-ball and [its ‘boundary’] is an $(n-q-1)$-sphere [...] (if this were the case, then [...] the dual decomposition would be a CW decomposition). The question whether this is the case was one of the open, difficult and interesting problems in topology for several decades, until it was answered negatively by Edwards in 1975; [...].” - ${}_{\text{(my translation)}}$ - -As pointed out by Ryan Budney in his comment, this technical difficulty can be circumvented by restricting attention to PL triangulations. “Most” manifolds, in particular all differentiable manifolds, admit such triangulations.<|endoftext|> -TITLE: Proof of subfactorial formula $!n = n!- \sum_{i=1}^{n} {{n} \choose {i}}\,!(n-i)$ -QUESTION [10 upvotes]: Any hints about how to prove -$$!n = n!- \sum_{i=1}^{n} {{n} \choose {i}}\,!(n-i)$$ -from -Wikipedia's article on derangements? -Here, $!n$ is the number of derangements of a set with $n$ elements. -I am not looking for proofs, just nudges in the right direction. - -REPLY [10 votes]: Hint: ${{n} \choose {i}} \cdot!(n-i)$ counts the number of permutations that fix exactly $i$ elements. - -REPLY [5 votes]: I actually prove a generalization of this in my paper "Deranged Exams" (College Mathematics Journal, 41 (3): 197-202, 2010). See Theorem 7. -The generalization is the following. Let $S_{n,k}$ be the number of permutations on $n$ elements in which none of the first $k$ elements remains in its original position. Thus $S_{n,0} = n!$, and the number of derangements on $n$ elements, $D_n$, is $S_{n,n}$. -$$S_{n+k,k} = \sum_{j=0}^n \binom{n}{j} D_{k+j}.$$ -The OP's question is the case $k = 0$. -I'll extract the essence of the proof and post it in the next few minutes. -Since you want hints rather than a full proof, I'll just leave this as a reference in case you (or anyone else reading this) is interested. Jonas Meyer's answer gives a good hint.
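-For anyone who wants a numerical sanity check of the identity, here is a quick sketch (it generates derangement numbers via the standard recurrence $d_k = (k-1)(d_{k-1}+d_{k-2})$, which is taken for granted here rather than derived):
-    from math import comb, factorial
-
-    def derangement_numbers(n):
-        d = [1, 0]  # d[0] = 1, d[1] = 0
-        for k in range(2, n + 1):
-            d.append((k - 1) * (d[k - 1] + d[k - 2]))
-        return d
-
-    n = 8
-    d = derangement_numbers(n)
-    rhs = factorial(n) - sum(comb(n, i) * d[n - i] for i in range(1, n + 1))
-    print(d[n], rhs)  # both equal 14833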
- -REPLY [3 votes]: Here's a proof, obscured using spoiler space. - - If $d_n$ is the number of derangements on $n$ elements, then the number of permutations on $n$ elements with exactly $i$ fixed points is ${n \choose i} d_{n-i}$ (choose $i$ points to fix, then any permutation that fixes exactly those $i$ points (and nothing else) determines a derangement on the non-fixed points, and there are $d_{n-i}$ such derangements). Hence, $n!=\sum_{i=0}^n {n \choose i} d_{n-i}$, which can be rearranged to give the above formula. - -PS. I'm not a fan of the $!n$ notation, I'm pretty sure it's not standard in combinatorics.<|endoftext|> -TITLE: Highly composite number -QUESTION [7 upvotes]: Definition: $n$ is said to be a highly composite number if and only if $d(n)>d(m)$ for all $m<n$.<|endoftext|> -TITLE: Finitely additive measures on $2^\mathbb{N}$ -QUESTION [12 upvotes]: In our analysis course, the following question came up and could, up to now, not be solved: - -Let $a: \mathbb{N} \to \mathbb{C}$ be a sequence of complex numbers. What are necessary and sufficient conditions for the existence of a function $\mu: 2^\mathbb{N} \to \mathbb{C}$ satisfying the properties - -$\mu(\{i\}) = a_i$ -$\mu$ is finitely additive, i.e. $A\cap B = \emptyset\implies \mu(A\cup B) = \mu(A)+\mu(B)$? - - -Partial Results -If we are given such a function defined on a subset of $2^\mathbb{N}$, we can of course extend it to all sets of the form $A\cup B, A\cap B = \emptyset$ and $A\setminus B, B\subset A$. This invites the use of Zorn's Lemma; but it seems impossible to prove that a maximal set closed under these operations must be $2^\mathbb{N}$. However, this approach strongly suggests that $\mu$ exists for all $(a_i)$, as the problem only depends on $2^\mathbb{N}$. -On the other hand, if $a_i$ converges absolutely, one can set $\mu(I) = \sum_{i\in I} a_i$ which fulfills the required properties, but this approach does not generalise at all. - -REPLY [8 votes]: Given any such sequence $a_i$, you can use it to define a finitely additive measure on the collection of finite subsets of $\mathbb N$. Now consider the vector -space $V:= \mathbb C^{\oplus \mathbb N}$, i.e. the direct sum of countably many copies of $\mathbb C$, with the copies of $\mathbb C$ being indexed by elements $i \in \mathbb N$. Alternatively, this is the space of sequences $(z_i)_{i \in \mathbb N}$ with $z_i =0$ for all but finitely many $i$. -Or, if you want to think a little more analytically, you can think -of this as the space of $\mathbb C$-valued functions on $\mathbb N$ with finite support, i.e. which vanish outside a finite set. (The function attached to a sequence is just $i \mapsto z_i$, of course.) -Our choice of finitely additive measure defines a functional on $V$, given by -$(z_i) \mapsto \sum_{i \in \mathbb N} a_i z_i.$ (More analytically, this is integration of the finitely supported function corresponding to $(z_i)$ against our finitely additive measure.) -Now let $W$ be the vector space of all $\mathbb C$-valued functions on $\mathbb N$. -Certainly $V \subset W$, and we may always extend linear functionals from a subspace to the whole space; thus we may extend our given functional to a -functional $I: W \to \mathbb C$. (The label $I$ is chosen to suggest integration.) -Now if $S$ is any subset of $\mathbb N$, let $\chi_S$ be the characteristic function of $S$. Define $\mu(S) = I(\chi_S)$. Then $\mu$ is a finitely additive measure on $\mathbb N$ satisfying the required properties.
-(This is essentially the Zorn's lemma argument suggested in the original posting, but reformulated in terms of extending functionals on vector spaces, -which makes it more transparent.)<|endoftext|> -TITLE: Please explain how Conditionally Convergent can be valid? -QUESTION [9 upvotes]: I understand the basic idea of Conditionally Convergent (some infinitely long series can be made to converge to any value by reordering the series). I just do not understand how this could possibly be true. I think it defies common sense and seems like a clear violation of the Commutative Law. - -REPLY [4 votes]: It deserves to be better known that there are simple cases where one can give closed forms for some rearrangements of alternating series. Here are a couple of interesting examples based on results of Schlömilch in 1873. Many further results can be found in classical textbooks on infinite series, e.g. those by Bromwich and Knopp. -Let $\rm\ H^m_n\ $ be the rearrangement of the alternating harmonic series $\rm\ 1 - 1/2 + 1/3 - 1/4 +\: \cdots\ $ obtained by taking consecutive groups of $\rm\:m\:$ positive terms and $\rm\:n\:$ negative terms. Then -$$\rm H^m_n\ =\ \log\ 2 + \frac{1}2\ \lim_{k\to\infty}\ \int^{\:mk}_{nk}\frac{1}x\ dx\ =\ \log 2 + \frac{1}2 \log\frac{m}n $$ -Similarly rearranging Leibniz's sum $\rm\ L\ =\ \pi/4\ =\ 1 - 1/3 + 1/5 - 1/7 +\: \cdots\ $ yields -$$\rm L^m_n\ =\ \frac{\pi}4 + \frac{1}2\ \lim_{k\to\infty}\ \int^{\:mk}_{nk}\frac{1}{2x-1}\ dx\ =\ \frac{\pi}4 + \frac{1}4 \log\frac{m}n $$ -Thus as $\rm\:m\:$ varies we obtain infinitely many rearrangements with distinct sums. -The proof of the general theorem underlying these results is quite simple - using nothing deeper than the integral test. See Beigel: Rearranging Terms in Alternating Series, Math. Mag. 1981.<|endoftext|> -TITLE: Help with proof about maximum number of eigenvalues -QUESTION [5 upvotes]: I'm working my way through Linear Algebra Done Right. To help with one proof, I want to prove the following: -Given $\mathbf{V}$, a vector space and $T$, a linear operator on it, then: -If $\mathbf{W}_1$ and $\mathbf{W}_2$ are subspaces of $\mathbf{V}$ such that: - -$\mathbf{V}$ is a direct sum of $\mathbf{W}_1$ and $\mathbf{W}_2$. -$\mathbf{W}_1$ and $\mathbf{W}_2$ are invariant under $T$. -The restriction of $T$ to $\mathbf{W}_1$ has at most $k$ eigenvalues. -The restriction of $T$ to $\mathbf{W}_2$ has at most $p$ eigenvalues. - -Then $T$ has at most $k+p$ eigenvalues. -I've done a sketch of a proof using determinants, but it was based on old knowledge about the properties of determinants with regards to eigenvalues, so it may not be correct. The book doesn't emphasize using them though, and maybe there's a proof of this without using determinants. -I've tried a proof by contradiction, trying to find something weird by assuming that $T$ can have more than $k+p$ eigenvalues, but I haven't been able to find anything. -Any help would be appreciated. - -REPLY [4 votes]: Hint: $\ker(T) = \ker(T|_{\mathbf{W}_1}) \oplus \ker(T|_{\mathbf{W}_2})$. -Here $\ker$ stands for kernel, $T|_{\mathbf{W}_i}$ stands for $T$ restricted to $\mathbf{W}_i$, etc. Actually you have to do this with the transformation $T -\lambda I$ instead of $T$. - -REPLY [4 votes]: Hint: Suppose $\lambda$ is an eigenvalue. Then there exists an eigenvector $\mathbf{v}$ corresponding to $\lambda$. Write $\mathbf{v}=\mathbf{w}_1+\mathbf{w}_2$; since $\mathbf{v}$ is nonzero, at least one of $\mathbf{w}_1$ and $\mathbf{w}_2$ is nonzero.
-Now, evaluate $T(\mathbf{v})$; since $\mathbf{V}$ is a direct sum, and each $\mathbf{W}_i$ is invariant, what can you say about $T(\mathbf{w}_1)$ and $T(\mathbf{w}_2)$?<|endoftext|> -TITLE: The $5n+1$ Problem -QUESTION [30 upvotes]: The Collatz Conjecture is a famous conjecture in mathematics that has lasted for over 70 years. It goes as follows: -Define $f(n)$ as a function on the natural numbers by: -$f(n) = n/2$ if $n$ is even and -$f(n) = 3n+1$ if $n$ is odd -The conjecture is that for all $n \in \mathbb{N}$, $n$ eventually converges under iteration by $f$ to $1$. -I was wondering if the "5n+1" problem has been solved. This problem is the same as the Collatz problem except that in the above one replaces $3n+1$ with $5n+1$. - -REPLY [47 votes]: You shouldn't expect this to be true. Here is a nonrigorous argument. Let $n_k$ be the sequence of odd numbers you obtain. So (heuristically), with probability $1/2$, we have $n_{k+1} = (5n_k+1)/2$, with probability $1/4$, we have $n_{k+1} = (5 n_k+1)/4$, with probability $1/8$, we have $n_{k+1} = (5 n_k+1)/8$ and so forth. Setting $x_k = \log n_k$, we approximately have $x_{k+1} \approx x_k + \log 5 - \log 2$ with probability $1/2$, $x_{k+1} \approx x_k + \log 5 - 2 \log 2$ with probability $1/4$, $x_{k+1} \approx x_k + \log 5 - 3 \log 2$ with probability $1/8$ and so forth. -So the expected change from $x_{k}$ to $x_{k+1}$ is -$$\sum_{j=1}^{\infty} \frac{ \log 5 - j \log 2}{2^j} = \log 5 - 2 \log 2.$$ -This is positive! So, heuristically, I expect this sequence to run off to $\infty$. This is different from the $3n+1$ problem, where $\log 3 - 2 \log 2 <0$, and so you heuristically expect the sequence to decrease over time. -Here is a numerical example. I started with $n=25$ and generated $25$ odd numbers. Here is a plot of $(k, \log n_k)$, versus the linear growth predicted by my heuristic. Notice that we are up to 4 digit numbers and show no signs of dropping down.<|endoftext|> -TITLE: Proving a certain type of poker game can always be won -QUESTION [6 upvotes]: Ok, so I just finished a Discrete Math class where we learned about all types of proofs and counting and graphs, etc. So I know the basics, but never thought I'd need to use it in the real world (not anytime soon at least). However, a friend of mine showed me a puzzle game, and he said that it's always possible to win this game, even though it seems unlikely to me. I will explain the rules, which are pretty simple, and I want to know how I could prove this, or if the answer is evident using combinatorics. -Here's how the game works: -The game is a slight variation of a game called 'Poker Squares'. Anyway, you lay down (all at once) 25 random cards arranged in a 5x5 grid (no jokers). The goal of the game is to arrange all the cards so that each row is a poker hand of the following: - -A straight: 5 cards arranged in order of their number (i.e. 9,10,J,Q,K, regardless of their suit.) -A flush: any 5 cards in any order of the same suit (i.e. five hearts) -A full house: A triple and a double (i.e. three 10's and two J's.) -A straight flush: 5 cards of the same suit arranged in increasing or decreasing order (i.e. 4,5,6,7,8 all diamonds) - -And that's it. The claim is that given any random 25 cards, these cards can ALWAYS be arranged so that each row is a winning poker hand (of the above mentioned hands only). -Any ideas?
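-(To pin down the rules in code, here is a small checker for a single row - a sketch of my own, which counts aces high only; that is an assumption the rules above don't settle:)
-    from collections import Counter
-
-    def winning_row(cards):
-        # cards: five (rank, suit) pairs with ranks 2..14 (J=11, Q=12, K=13, A=14)
-        ranks = sorted(r for r, s in cards)
-        suits = {s for r, s in cards}
-        straight = all(ranks[i + 1] - ranks[i] == 1 for i in range(4))
-        flush = len(suits) == 1
-        full_house = sorted(Counter(ranks).values()) == [2, 3]
-        return straight or flush or full_house  # a straight flush is straight and flush
-
-    print(winning_row([(9, 'h'), (10, 's'), (11, 'h'), (12, 'd'), (13, 'c')]))  # True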
- -REPLY [12 votes]: Suppose you have one two of clubs, and the rest of the cards are all four and up, and all diamonds, hearts and spades (there are 33 such cards, which is more than enough). You can't use the two of clubs in any winning hand. So the claim is not true. In fact, this counterexample works even if you allow four of a kind.<|endoftext|> -TITLE: Does the order, lattice of subgroups, and lattice of factor groups, uniquely determine a group up to isomorphism? -QUESTION [16 upvotes]: If we have two lattices (partially ordered) - one for subgroups, one for factor groups - and we know the order of the group that is to have these subgroup and factor group lattices, is such a group unique up to isomorphism (if it exists)? Or is there a counterexample? -If that's true, are there sufficient conditions on the order and subgroup lattices to guarantee uniqueness? Another way: what if we now know the lattice of subgroups and the automorphism group of the group; is that group uniquely determined by that information? -Thanks for help. (sorry for English) - -REPLY [25 votes]: No, the lattice of subgroups, the lattice of normal subgroups, the order of the group, and the automorphism group do not (even taken together) determine the isomorphism type of a finite group. - -Take G = SmallGroup(243,19) and H = SmallGroup(243,20). There is a bijection f:L(G)→L(H) between their lattices of subgroups such that: - -|X| = |f(X)| -X ≅ f(X) unless X = G -X ≤ Y iff f(X) ≤ f(Y) -X ⊴ G iff f(X) ⊴ f(G) = H -G/X ≅ H/f(X) whenever X≠1 is normal - -Additionally Aut(G) ≅ Aut(H). The fourth bullet shows, in particular, that f induces an isomorphism between the lattice of quotient groups of G and the lattice of quotient groups of H. The second and fifth bullets show the isomorphism respects everything about the subgroups' properties as abstract groups. -The groups have presentations: -\begin{align*} - G &= \bigl\langle a,b,c \mid a^{27} = b^{3} = c^{3} = 1,\ ba = abc,\ ca = acz,\ cb = bcz \bigr\rangle\text{ where }z = a^9\\ -H &= \bigl\langle a,b,c \mid a^{27} = b^3 = c^{3} = 1,\ ba = abc,\ ca = acz,\ cb = bcz \bigr\rangle\text{ where }z = a^{-9} -\end{align*} -The function is induced by a bijection of the underlying sets: - -$f(a^i b^j c^k) = a^i b^j c^k$ - -There are no such groups of order dividing 64 (even just having an isomorphism of subgroup lattices respecting normal subgroups).<|endoftext|> -TITLE: chain rule using tree diagram, why does it work? -QUESTION [11 upvotes]: In multivariable calculus, I was taught to compute the chain rule by drawing a "tree diagram" (a directed acyclic graph) representing the dependence of one variable on the others. I now want to understand the theory behind it. -Examples: -Let $y$ and $x$ both be functions of $t$. -Let $z$ be a function of both $x$ and $y$. -The derivative of $z$ with respect to $t$ is: -$$\frac{dz}{dt} = \frac{\partial z}{\partial x} \frac{dx}{dt} + \frac{\partial z}{\partial y} \frac{dy}{dt}$$ -To compute this derivative, I was taught to draw a graph with the following edges: -$x \to z$, $y \to z$, $t \to x$, and $t \to y$. -Source: http://www.math.hmc.edu/calculus/tutorials/multichainrule/ - -These tree diagrams can be constructed for arbitrarily complex functions with many variables. -In general, to find a derivative of a dependent variable with respect to an independent variable, you need to take the sum over all of the different paths to reach the dependent variable from the independent variable. Traveling down a path, you multiply the functions (e.g.
$\frac{\partial z}{\partial x} \cdot \frac{dx}{dt}$). -Why does this work? - -REPLY [2 votes]: This video will certainly clarify things: http://www.youtube.com/watch?v=2bF6H_xu0ao. -Although it may take a bit longer, I personally find that computing the total differential is substantially easier and more intuitive than a tree diagram.<|endoftext|> -TITLE: Projective closure -QUESTION [7 upvotes]: Is the projective closure of an infinite affine variety (over an algebraically closed field, I only care about the classical case right now) always strictly larger than the affine variety? I know it is an open dense subset of its projective closure, but I don't think it can actually be its own projective closure unless it is finite. -I guess my intuition has me worried about cases like the plane curve $X^2 + Y^2 - 1$, since the real part is compact, but such a curve must still "escape to infinity" over an algebraically closed field, right? - -REPLY [10 votes]: One can show that if the dimension of the affine variety is positive (equivalently, if it is infinite in the sense of our question) then the projective closure is strictly larger. -There are probably lots of ways to prove this, but one is by a version of Noether normalization. If you would like a detailed proof, let me know by a comment and I can give one. -(Another way to phrase this result is to say that a variety that is simultaneously affine and projective is necessarily finite. Scheme-theory mavens will recognize this as a special case of the more general statement that a morphism which is simultaneously affine and proper is finite.) -Added: Here is a sketch of a proof, as promised: -Let me begin with Noether normalization in a geometric form. Fix an affine variety $V$ contained in $\mathbb A^n$, with projective closure $\overline{V}$. (Here and below I am always working over an algebraically closed field $k$.) -Assuming that $V$ is not all of $\mathbb A^n$, we see that $\overline{V}$ does not contain the hyperplane at infinity, and so we may choose a point $P$ lying in the hyperplane at infinity, but not lying in $\overline{V}$. We may also choose a different hyperplane $H$ (i.e. not the hyperplane at infinity) which doesn't contain $P$. -With $P$ and $H$ in hand, we may define the projection map $\pi: \mathbb P^n \setminus P \to H$, which maps any $Q \neq P$ to the intersection of the line $\ell$ joining $P$ and $Q$ with the hyperplane $H$. Restricting $\pi$ to $\overline{V}$, we obtain a map $\overline{V} \to H$. -Now since $P$ is not contained in $\overline{V}$, none of the lines $\ell$ appearing in the projection map are contained in $\overline{V}$, and so each of them meets $\overline{V}$ in only finitely many points. In particular, if $\overline{V}$ is infinite, so is its image under $\pi$. -Now by elimination theory, i.e. the fact that projective varieties are proper, we know that $\pi(\overline{V})$ is closed in $H$, i.e. is a projective variety in $H$, -which is a projective space of dimension $n-1$. -Also, our choice of $P$ ensures that for any $Q \in \overline{V}$, the image $\pi(Q)$ lies at infinity if and only if $Q$ itself does. So $\pi(V)$ is an affine variety (in the affine space $H \cap \mathbb A^n$ of dimension $n-1$), and $\pi(\overline{V})$ is its projective closure. -What I have just done is prove Noether normalization, in a geometric form. -The result we want now follows immediately, by induction on $n$.
(Basically, the case when -$V = \mathbb A^n$ is clear, and if $V$ is not all of $\mathbb A^n$, the preceding argument allows us to reduce the dimension of the ambient affine space -by one.)<|endoftext|> -TITLE: Intuition for not-so-smooth manifolds -QUESTION [6 upvotes]: In standard textbooks on (smooth) manifolds, for example the well-known series by John M. Lee or Jeffrey Lee, you either deal with continuous manifolds, or with smooth manifolds. -However, neither in these books nor in lectures have I encountered real examples where a manifold is $C^k$, but not $C^{k+1}$. -Intuitively, I would suppose the $|\cdot|_\infty$-ball with radius $1$ to be a merely continuous, non-smooth manifold, because smoothness fails at the edges of the cube. In contrast to this, polar coordinates show the $|\cdot|_{2}$-ball with radius $1$ is in fact a smooth manifold. -I'd be thankful for some examples, with clues to basic techniques, showing how the different degrees of smoothness manifest 'in real life'. - -REPLY [12 votes]: One can show that any $C^k$-manifold, for $k \geq 1$, has a unique enrichment to a $C^{\infty}$-manifold. (I.e. given $M$ with its $C^k$-atlas, we can find a $C^{\infty}$-atlas on $M$, compatible with the given $C^k$-atlas, and this $C^{\infty}$-atlas is unique up to equivalence; see wikipedia for more details.) -So there is not much point in considering $C^k$-manifolds other than for $k = 0$ or $\infty$. -With regard to your unit ball examples, note that the $| \cdot |_{\infty}$-unit ball, although it has -corners, is homeomorphic to the $| \cdot |_2$-unit ball; one says that it can be smoothed. There are topological manifolds that cannot be smoothed (in dimension 4 and higher), in the sense that they are not homeomorphic to a smooth manifold. There are also smooth manifolds that are homeomorphic, but not diffeomorphic. (E.g. when $n \geq 7,$ one can find smooth manifolds that are homeomorphic to $S^n$, but not diffeomorphic to it; these are so-called exotic spheres.) Again, the wikipedia entry has more details. - -REPLY [11 votes]: Take a look at Morris Hirsch's text "Differential Topology", in particular Theorem 2.9 on page 51. The upshot is that if $M$ is a $C^k$-manifold for $k \geq 1$, then it admits the structure of a $C^j$-manifold for any $j \geq k$, and that structure is unique up to diffeomorphism. So in that sense the different degrees of smoothness are largely just an artefact of how your manifold is constructed -- you can always "do better" provided the manifold is at least $C^1$. -I finished this as Emerton's answer appeared, anyhow, take this as a reference for the result.<|endoftext|> -TITLE: Proving the continued fraction representation of $\sqrt{2}$ -QUESTION [12 upvotes]: There's a question in Spivak's Calculus (I don't happen to have the question number in front of me; in the 2nd Edition, it's Chapter 21, Problem 7) that develops the concept of continued fraction, specifically the continued fraction representation of $\sqrt{2}$. Having only recently become aware of continued fractions, I'm trying to work my way through this problem, but I'm stuck at the very last stage. -Here's what I've got so far: Let $\{a_n\}$ be the sequence defined recursively as $$a_1 = 1, a_{n + 1} = 1 + \frac{1}{1 + a_n}$$ Consider the two subsequences $\{a_{2n}\}$ and $\{a_{2n - 1}\}$. I've already shown that $\{a_{2n}\}$ is monotonic, strictly decreasing, and bounded below by $\sqrt{2}$, and similarly I've shown that $\{a_{2n - 1}\}$ is monotonic, strictly increasing, and bounded above by $\sqrt{2}$.
Obviously, both of these subsequences converge. -Although of course in general, if two subsequences of a sequence happen to converge to the same value, that doesn't guarantee that the sequence itself converges at all (much less to that same value), in the case where the subsequences are $\{a_{2n}\}$ and $\{a_{2n - 1}\}$, it's easy to show that if they both converge to the same value, then so will $\{a_n\}$ (since every term of $\{a_n\}$ is a term of one of the two subsequences). So no problem there. -In other words, it remains only to show that not only do the subsequences converge, but they converge to $\sqrt{2}$ in particular. Take, for starters, $\{a_{2n - 1}\}$ (if I can get $\{a_{2n - 1}\}$ to converge to $\sqrt{2}$, I'm sure getting $\{a_{2n}\}$ to converge to $\sqrt{2}$ won't be very different). Because it's strictly increasing and bounded above by $\sqrt{2}$, it converges to some number $x \leq \sqrt{2}$. Suppose that $x < \sqrt{2}$. We want to show this doesn't happen. -But this is where I'm getting stuck. I feel like I want to take $\epsilon = \sqrt{2} - x$ and show that there exists some $N$ such that $a_{2N - 1} > x$, which would finish the problem due to the monotonicity of $\{a_{2n - 1}\}$. But this isn't working. -Any hints? Thanks a ton. - -REPLY [3 votes]: I don't think you need to break it up into two cases. Instead we can do this: - -Establish that the sequence $\langle a_i \rangle$ is Cauchy and therefore convergent to some $L$. For this, establish that the sequence $\langle f^i(0)\rangle$ converges to zero, where $f^i$ is the $i$th iterate of $\frac{1}{2+x}$. -Notice that the relation $a_{n+1}=1+\frac{1}{1+a_n}$ is the same as $a_{n+1}a_n + a_{n+1} - a_n -2 = 0$. -Take limits at both ends to get: $L^2+L-L-2=0$.<|endoftext|> -TITLE: The construction of knotted surfaces in $\mathbb{R}^4$ -QUESTION [6 upvotes]: For a two-sphere embedded in $\mathbb{R}^4$, how can you check whether or not there is an ambient isotopy to the "standard" 2-sphere (the set of points $(x,y,z,0)$ in $\mathbb{R}^4$ distance 1 from the origin)? -Knot theory was discussed in an intro topology course I took and I'm wondering about further generalizations of the concept. It seems to me that such "knotted" 2-spheres could be created by rotating a knotted arc about a plane in $\mathbb{R}^4$; and knotted tori by rotating a standard knot about a nonintersecting plane. However I cannot think of a way to show which of these constructions, if any, are indeed nonisotopic to their standard counterparts. - -REPLY [3 votes]: Kawauchi's and Hillman's books are indeed good references. Many known knotted surfaces have non-trivial and easily computable fundamental groups. You should also look at R.H. Fox's "Quick Trip through Knot Theory," in Fort's Georgia Topology Conference Proceedings, republished by Dover. In addition, there are three books dedicated to studying knotted surfaces from a diagrammatic point of view. Kamada's book on Surface Braids published by the AMS, my book with Saito, "Knotted Surfaces and Their Diagrams," and my book with Kamada and Saito, "Surfaces in 4-space." -The quandle cocycle invariants discovered by Jelsovsky, Langford, Kamada, and me are known to be powerful invariants of knotted surfaces.<|endoftext|> -TITLE: ultrahyperbolic PDE -QUESTION [5 upvotes]: Just wondering: - -How to solve ultrahyperbolic PDEs? Is there any analytical solution for linear ultrahyperbolic PDEs? -If there are only numerical solutions, are the solutions' behavior similar to those of nonlinear eqns?
I mean, like an anharmonic oscillator, chaotic but deterministic? - -Information on reference books is also welcome - but preferably not too specialized in math. I am an engineering student with very very limited math skills. -Thanks :) - -REPLY [7 votes]: The answer depends on what you mean by "solving" the PDE. The initial value problem for ultrahyperbolic PDEs is ill-posed. In particular, there is a theorem (the version I know is due to Hormander, you can find it in his Analysis of Linear Partial Differential Operators; but presumably some versions go back earlier) which states that: - -Theorem Let $L$ be a linear partial differential operator with smooth coefficients of order $m$ on $\mathbb{R}^{1+n}$. Consider the Cauchy problem for $Lu = F$ on the upper-half-space $\{ x_0 \geq 0 \}$ with initial data $u_0, u_1, \ldots, u_{m-1}$ (such that $(\partial_0)^k u|_{x_0 = 0} = u_k$). Then the following are equivalent. (a) The Cauchy problem has a unique smooth solution $u$ for every prescribed smooth data $u_0,\ldots,u_{m-1}$ and source $F$, and (b) $L$ is a hyperbolic operator. - -So in particular, in general there cannot be well-defined solutions to the initial value problem for ultrahyperbolic PDEs. (They either don't exist or aren't unique. And in the case you do have a solution, the solution is unstable.) -Now, you cannot even construct approximate solutions to the initial value problem reliably using numerical methods, since ultrahyperbolic equations do not have finite speed of propagation, so you cannot "localise" the problem, and small changes around a point $x$ may almost instantaneously affect the solution at a far-away point $y$. -In certain special cases you can produce some semblance of a solution. In the constant coefficient case in second order, where the equation can be written as $(\triangle_X - \triangle_Y)u = 0$, you have what is known as Asgeirsson's Mean Value Theorem, which is sort of a generalisation of the mean value theorem for harmonic functions, and also a generalisation of the Green's function formula for the linear, constant coefficient wave equation. In this particular case you can also consider solutions using Fourier analytic methods. From there one sees that if one were to assume certain restrictions on the allowed wave-numbers (which leads to a non-local constraint on the initial data), one can recover well-posedness of the initial value problem. -As to references, perhaps an easy way is to do a reverse search on the classical paper of Fritz John on the subject. John's various textbooks also contain some information about it; in particular you may want to consult his Partial Differential Equations book, and his book on Plane Waves and Spherical Means Applied to Partial Differential Equations.<|endoftext|> -TITLE: Given $n$ identical resistors $R$, find combinations of series, parallel, and series-parallel arrangements -QUESTION [8 upvotes]: Is there an algorithm to find out all possible resistance values of series, parallel, and series-parallel arrangements given $n$ identical resistors, $R$? All of them must be used. -This might even extend to differently-valued resistors, but I'll just focus on identical resistors. - -REPLY [4 votes]: There is a newer (2012) pair of academic papers on this topic by -Sameen Ahmed Khan - -"How many equivalent resistances?" An older but longer preprint/version of it can be found on arxiv.
-"Farey sequences and resistor networks" - -He proves a theoretical upper bound for the following four sequences: - -A048211, which he denotes by $A(n)$, i.e. for the "number of distinct resistances that can be produced from a circuit of n equal resistors using only series and parallel combinations", -A153588, "the total number of equivalent resistances obtained using one or more of the $n$ equal resistors [for series-parallel arrangement]", which he denotes by $C(n)$, and also -a slightly larger set than $A(n)$, denoted by $B(n)$ and "containing bridge circuits (in addition to the configurations produced by series and/or parallel)"; no OEIS number is given for this one in the paper, but I see it's at http://oeis.org/A174283 now. -$D(n)$ is defined similarly to $C(n)$ but with $B(n)$ substituting for $A(n)$. This is now http://oeis.org/A174284 - -The upper bound proved is the same for all four above sequences and is denoted by -$$ G(n) = 2\cdot(1-\frac{1}{n})\cdot \text{Farey} (F_{n+1}) -1 $$ -where $\text{Farey}(n)$ is the number of fractions in the Farey sequence of order $n$ [A005728] and $F_{n}$ is the usual Fibonacci number [A000045]. -Also this bit is perhaps worth noting: - -A set $A(n)$ of higher order does not necessarily contain the complete sets of lower orders. For example, $2/3$ is present in the set $A(3)$, but - it is not present in the sets $A(4)$ and $A(5)$. - -and - -Farey sequence is the most exhaustive set of fractions, so it is - sure to contain some terms absent in the actual circuit configurations. - -$G(n)$ itself is asymptotically $2.618^n$, which is consistent with experimental/asymptotic result from the paper by Amengual mentioned in the other answer. Also $G(n)$ got its own http://oeis.org/A176502 now. -He also proved a lower bound $ \frac{1}{4}(1+\sqrt{2})^n < A(n)$, which makes the enumeration of solutions guaranteed exponential.<|endoftext|> -TITLE: Geometric intuition for the Householder transformation -QUESTION [7 upvotes]: I am studying QR decomposition. -Could you explain the geometric intuition for what the Householder transformation does in that context, and why it's sometimes referred to as the Householder reflection. - -REPLY [5 votes]: We start with a square matrix $M$ of dimension $n$. We can think of its $n$ columns as vectors in $\mathbb{R}^n$. We consider the hyperplane generated by the first column (for example the orthogonal complement of that vector). Next, we reflect each of the columns about this hyperplane. In symbols: $H_1M= [ H_1(v_1) \ldots H_1(v_n)]$, where on the RHS we use functional notation for $H_1$. Now, because $v_1$ is normal to the hyperplane, $H_1(v_1)$ looks simple. The rest of the vectors transform like: -That is we subtract twice their projections onto $v_1$ (this gives me the formula for householder reflections). -Then we consider the $n-1$ dimensional submatrix of $H_1M:=M_2$, and repeat. The submatrix takes me into the hyperplane, since the first reflection leaves that plane invariant. What we are doing is changing the basis (since Reflections have $det \neq 0$) of the underlying space progressively so that the vectors have a nice representation (Thats what QR decomposition is, The Q contains the orthonormal vectors, while the R tracks all the changes we have made).<|endoftext|> -TITLE: An example of computing Ext -QUESTION [15 upvotes]: I've been looking for less trivial examples of computing Ext than finitely generated abelian groups, which tends to be the standard example (and often the only example). 
Here's an interesting exercise I found in some notes: -Let $M = \mathbb{C}[x,y] / (x,y), N = \mathbb{C}[x,y] / (x-1)$. My question is how to compute $\text{Ext}_*(M,N)$ in the category of $\mathbb{C}[x,y]$-modules. -Well, first of all $\text{Ext}_0(M,N) = \text{Hom}(M,N)$. However, I'm not sure how to identify what this $\text{Hom}$ is! More generally, we have the short exact sequence -$0\rightarrow K \rightarrow \mathbb{C}[x,y] \rightarrow M \rightarrow 0$ -where the second map is the inclusion, the third map is the quotient projection, and $K$ is the kernel of the projection. This sequence gives the exact sequence (a piece of the long exact sequence) -$0\rightarrow \text{Ext}_1(M,N) \rightarrow \text{Hom}(K,N) \rightarrow \text{Hom}(\mathbb{C}[x,y],N) \rightarrow \text{Hom}(M,N) \rightarrow 0$, -which means that $\text{Ext}_1(M,N)$ is the kernel of the map $\text{Hom}(K,N) \rightarrow \text{Hom}(\mathbb{C}[x,y], N)$. But again I'm having trouble determining this kernel. -Finally I think the projective resolution $0 \rightarrow \mathbb{C}[x,y] \rightarrow \mathbb{C}[x,y] \rightarrow \mathbb{C}[x,y]/(x-1) \rightarrow 0$ shows that the higher Ext's are zero. -Any help would be greatly appreciated. - -REPLY [7 votes]: First, as Aaron mentioned, a homomorphism from $M$ to $N$ is uniquely defined by its image on the generator 1 of $M$, and it must commute with the action $x\cdot 1=y\cdot 1=0$. In particular, if $f$ is such a homomorphism and $\bar{f(1)}$ is a coset representative of $f(1)$, then you must have $x\cdot \bar{f(1)} \in (x-1)$. Since $\mathbb{C}[x,y]$ is a UFD and $x$ and $x-1$ are both irreducible, this implies that $\bar{f(1)}\in (x-1)$, i.e. that $f(1) = 0\in N$. So, that hom-space is 0, and that's good news for the next computation (see below). -As for your computation of $\text{Ext}^1$, you have forgotten that $\text{Hom}(-,N)$ is a -contravariant functor and you have to turn your long exact sequence around accordingly (indeed, -there is no obvious way of defining the map $\text{Hom}(K,N)\rightarrow \text{Hom}(\mathbb{C}[x,y],N)$, -while it's perfectly clear how to define the map the other way round - namely by restriction; similarly $\text{Hom}(M,N)\rightarrow \text{Hom}(\mathbb{C}[x,y],N)$ should be defined by composing homs with the projection). See if that gets you anywhere -and feel free to report back.<|endoftext|> -TITLE: Counting nested integer partitions -QUESTION [5 upvotes]: One partition of 8 is 5 + 3, but if we then partition each of the 5 and 3 we could get (3+2) + (2+1), and then partition again to get ((2+1)+(1+1)) + ((1+1)+1) and finally (((1+1)+1)+(1+1)) + ((1+1)+1). 5+3 could also be expanded as (4+1)+(2+1), then ((2+2)+1)+((1+1)+1), then (((1+1)+(1+1))+1)+((1+1)+1). -This question is about viewing "+" as a binary operation, so 1+1+1+1 would have to be written as either (1+1)+(1+1) or ((1+1)+1)+1. -Every partition can be written as such a nested partition of 1s. It is still order independent, but associativity dependent. -For a given number $n$, how many associativity-dependent binary-operation nested partitions of 1s are there of $n$? -What is known about this function? - -REPLY [6 votes]: Marc's certainly correct that the Catalan numbers count full binary trees. But while every nested partition defines a full binary tree, not every full binary tree corresponds to a nested partition. -Smallest example: 1+(1+1) would come from 1+2, not a partition of 3 following the standard convention of listing parts in non-increasing order.
As in Roy's example, the only two binary-operation nested partitions of four 1's are (1+1)+(1+1) from 2+2 or ((1+1)+1)+1 from 3+1. -Working out more terms, the number of such sums (from 1 to 10) is 1, 1, 1, 2, 3, 6, 11, 24, 47, 103. This is http://oeis.org/A000992, and Callan's description of restricted binary trees there matches this context. The sequence has a convolution recurrence similar to that of the Catalan numbers, only going "half-way." (If Roy's question were about compositions [partitions where "order matters"], allowing 1+2 and then 1+(1+1), the recurrence formula would go the whole way and the answer would be the Catalan numbers.)<|endoftext|> -TITLE: Number of permutations of $n$ elements where no number $i$ is in position $i$ -QUESTION [25 upvotes]: I am trying to figure out how many permutations exist in a set where none of the numbers equal their own position in the set; for example, $3,1,5,2,4$ is an acceptable permutation whereas $3,1,2,4,5$ is not because 5 is in position 5. I know that the number of total permutations is $n!$. Is there a formula for how many are acceptable given the case that no position holds its own number? - -REPLY [26 votes]: What you are looking for is known as a derangement. However, to count the number of derangements of, say, $n$ elements you can use a trick: compute $\frac{n!}{e}$ and then round to the nearest integer; this will give you the desired result. -This is actually another application of $e$, which shows up in the problem of derangements, also known as the hat check problem, studied by Nicholas Bernoulli and de Montmort.<|endoftext|> -TITLE: Square-free Integers Factorization -QUESTION [6 upvotes]: Suggest an algorithm for factorizing an integer such that each of its factors is square-free. -Thanks, - -REPLY [11 votes]: It's trivial given the complete factorization into primes. Currently we do not know any other way to compute squarefree parts, and it is widely suspected that they cannot be computed in any simpler way. -This problem is important because one of the main tasks of computational algebraic number theory reduces to it (in deterministic polynomial time). Namely, the problem of computing the ring of integers of an algebraic number field depends upon the square-free decomposition of the polynomial discriminant when computing an integral basis. -Contrast this difficulty with the trivial squarefree decomposition of a polynomial by way of its gcd with its derivative. The availability of derivatives for polynomials opens up a powerful toolbox that is not available for integers. For example, once derivatives are available so are Wronskians - which provide powerful measures of dependence in transcendence theory and Diophantine approximation. A simple yet stunning example is the elementary proof of the polynomial case of Mason's ABC theorem, which yields as a very special case a high-school-level proof of FLT for polynomials. -For references see my post here.<|endoftext|> -TITLE: What's the map $BU \times \mathbb{Z} \to \prod K(\mathbb{Z},n)$ representing the total Chern class? -QUESTION [9 upvotes]: Recall that complex topological $K$-theory is representable on reasonable spaces by the space $BU \times \mathbb{Z}$ (where $BU$ is a colimit of various infinite Grassmannians), and that the total Chern class provides a natural map $\mathrm{Vect}(B) \to H^*(B)$ for every such space $B$. By the multiplicativity property, this map factors through the K-group and leads to a natural transformation $K(B) \to H^*(B)$.
$H^*(B)$ is also representable by a product of Eilenberg-MacLane spaces $K(\mathbb{Z}, n)$ over all $n$. There is thus a map, unique up to homotopy -$$BU \times\mathbb{Z} \to \prod_n K(\mathbb{Z},n).$$ -What is this map? -As Mariano observes, one can simply define the individual Chern classes on the $K$-group as well, albeit not immediately through the universal property, so the question reduces to the determination of the (homotopy class of the) map $BU \times \mathbb{Z} \to K(\mathbb{Z},n)$ induced by each Chern class. - -REPLY [6 votes]: One way, my favorite, of defining the Chern classes goes as follows: first compute $H^*(BU; \mathbb{Z})=\mathbb{Z}[c_1, c_2, ...]$ by using your favorite method. I know two methods: one is using some cellular description of the Grassmannians (see Milnor and Stasheff), the other is to first compute the cohomology of the Lie groups $U(n)$ and then use a path-loop space fibration and the Serre SS to get at the cohomology $H^*(BU(n);\mathbb{Z})$ (see Homology and Euler characteristics of the classical Lie groups). Next you notice that $BU$ and $BU(n)$ have the same cell structure through a range and so you essentially take the limit (colimit, and there are some subtleties here; the place I recall seeing these discussed is in Jacob Lurie's survey on elliptic cohomology, at the beginning). -Next suppose you have a complex vector bundle $\xi : E \to B$; then it is classified by a map $f: B \to BU(n)$ where $n$ is the dimension of $\xi$. You can get a map on all of $BU$ by just adding on trivial bundles to get a map out of $BU(k)$ for all $k$ larger than $n$. This gives a map out of $BU$ (although technically you don't really need this, $BU(n)$ works fine for defining the Chern classes of an $n$ dimensional complex vector bundle). Now define $c_n(\xi):=f^*(c_n)$. -From this perspective we started with the universal case, so maybe it is a bit of a cheat. For me this even clarifies how I should think about characteristic classes in general. Suppose you want to look at $G$-bundles and see what you can tell about them from $E$-theory ($E$ some ring spectrum and $G$ some compact Lie group or whatever you need for $BG$ to be nice, I am not sure if there are other restrictions for this to work). Now compute $E^*BG$ and use the fact that $G$-bundles over $X$ are classified by homotopy classes $X \to BG$. You should check out some of the threads on MO about characteristic classes; I think I can learn something from each of Rezk's answers. -Please let me know if I can make some of the above clearer or if there are any mistakes.<|endoftext|> -TITLE: Thought experiment for dice game -QUESTION [6 upvotes]: Two players play a dice game to see who can roll a total of 60 first, taking turns, each rolling 2 dice. -For one player, one die is 4 sided and one is 6 sided. Therefore the average roll for this player will be a 6. -The other player has two 6 sided dice. So this player's average roll will be a 7. -If they take turns, each rolling both dice, what % of the time will the first player (the 4-6 dice combo) get to 60 first and what % of the time will the second player (the 6-6 dice combo) get to 60 first? If they get to 60 on the same roll, the player who goes the furthest wins. - -REPLY [7 votes]: Here is the exact answer, which I used Mathematica to compute.
-$P(player\ 1\ wins) = \frac{118598889714523902216022358617928917633636253614645787377890526452587094871641057161197}{778560366535929033488842048429259732340012411410736514701931793474234814991339289051136}$ -$P(player\ 2\ wins) = \frac{858777828556435297558093986193070674512712644399583464104126713012616584348576395258605}{1038080488714572044651789397905679643120016548547648686269242391298979753321785718734848}$ -$P(players\ tie) = \frac{63512421616314632416996800666111235287366697985612516983784929048741127433063741783941}{3114241466143716133955368193717038929360049645642946058807727173896939259965357156204544}$ -Numerically, that's -.15233101351178373415... -.82727479987591082853... -.02039418661230543732... - -Here are the Mathematica rules I used to compute the probability of player 1 winning. The other two cases are similar. - -[image of the four Mathematica rules, not reproduced here] (sorry for the image) - -With these rules in place I then computed p[60,60]. I thought it was simpler to start the players at 60 and count down to 0, rather than the other way round. -The first rule just says that if player 1 has reached 0 and player 2 hasn't, then the probability of player 1 winning is 1. The second rule is similar for player 2. -The third rule handles the case where both players have crossed the finish line. Note that despite the game description, there aren't really turns involved; there are rounds. It doesn't matter which player goes first, so we might as well consider all four dice being rolled simultaneously. -The fourth and final rule is where all the action is. The recursive sum is over all possible outcomes of the four dice (two for each player). The constant $864$ is merely $4\cdot6\cdot 6\cdot 6$. The p[x,y] = thing in the center is a kind of memoization (caching). Without it, the simplistic recursion would take forever. -EDIT: -Here is a plot of $P(n)$, the probability that the weaker player wins, when playing to a total of $n$ (instead of $60$), for $n = 1\dots80$. To my surprise, it's not quite monotonic! But a little thought will explain why.<|endoftext|> -TITLE: What is the difference between Gödel's completeness and incompleteness theorems? -QUESTION [26 upvotes]: What is the difference between Gödel's completeness and incompleteness theorems? - -REPLY [11 votes]: I'll add some comments... -It is useful to state Gödel's Completeness Theorem in this form: - -if a wff $A$ of a first-order theory $T$ is logically implied by the axioms of $T$, then it is provable in $T$, where "$T$ logically implies $A$" means that $A$ is true in every model of $T$. - -The problem is that most first-order mathematical theories have more than one model; in particular, this happens for $\mathsf {PA}$ and related systems (to which Gödel's (First) Incompleteness Theorem applies). -When we "see" (with insight) that the unprovable formula of Gödel's Incompleteness Theorem is true, we refer to our "natural reading" of it in the intended interpretation of $\mathsf {PA}$ (the structure consisting of the natural numbers with addition and multiplication). -So there exists some "unintended interpretation" that is also a model of $\mathsf {PA}$ in which the aforesaid formula is not true.
This in turn implies that the unprovable formula isn't logically implied by the axioms of $\mathsf {PA}$.<|endoftext|> -TITLE: Characteristic Functions and motivations -QUESTION [8 upvotes]: I've recently studied characteristic functions in my probability course and I can't get why we define it to be the Fourier transform of the distribution (if the random variable is continuous). -I mean that if $X$ is a random variable, $\varphi_X (t) = \mathbb{E}(e^{i t X}) = \int_{-\infty}^{+\infty} e^{i t x}f_X(x) dx$ where $f_X(x)$ is the density function of $X$, and I can't see any motivation for doing this. I asked my professor but he wasn't clear at all; he said something like this: -"Since we proved the theorem that if $\varphi_X (t) = \varphi_Y (t)$ then $X \sim Y$ (or $P_X \equiv P_Y)$, it is natural to define it this way". -But of course, to prove that we need the definition! So I couldn't really make up my mind about it; if you could provide some help in this sense (motivation for defining the characteristic function of a random variable as the Fourier transform of its distribution) it would be much appreciated. - -REPLY [7 votes]: Practically speaking, the short answer is that it's convenient. The characteristic function has better analytic properties than the moment generating function, lets you study all of the moments of a random variable at once, and has the extremely convenient property that $\phi_{X+Y}(t) = \phi_X(t) \phi_Y(t)$ if $X, Y$ are independent. This makes the characteristic function an amazing tool for understanding sums of independent random variables, and indeed a standard proof of the central limit theorem proceeds via a computation of characteristic functions. -Many constructions in mathematics translate problems in one domain (understanding distribution functions) to problems in another domain (understanding characteristic functions), and these constructions are useful because different tools apply in the second domain. That is exactly what happens in the Fourier-theoretic proof of the central limit theorem.<|endoftext|> -TITLE: 3 Utilities | 3 Houses puzzle? -QUESTION [11 upvotes]: There's a puzzle where you have 3 houses and 3 utilities. You must draw lines so that each house is connected to all three utilities, but the lines cannot overlap. However, I'm fairly sure that the puzzle is impossible. How is this proved? - -REPLY [3 votes]: Here is a fun solution to this problem: -[image of the trick solution, not reproduced here] -(Gas is Internet in this case) - -(As pointed out, this is not what the OP intended)<|endoftext|> -TITLE: Is a uniformly continuous function vanishing at $0$ bounded by $a|x|+c$? -QUESTION [10 upvotes]: Let $g: \mathbb{R} \rightarrow \mathbb{R}$ be uniformly continuous with $g(0)=0$, and let $c\geq 0$, $c \in \mathbb{R}$. Show: $$\exists a\geq 0 \in \mathbb{R}: \forall x \in \mathbb{R}: |g(x)| \leq a \cdot |x|+c$$ - -I could also say $g(x) \in \mathcal{O}(x)$. -Notes: I could not make up any counterexample, so I guess it could be true; all uniformly continuous functions I know grow too slowly. -My approach: -Given $\epsilon > 0$, we have that: $$\exists \delta(\epsilon): |x-y|<\delta \Rightarrow |g(x)-g(y)|<\epsilon$$ -because of the uniform continuity of $g$. Now choose $n=\text{max}\{n \in \mathbb{N}: (n-1)\delta/2\leq|x|\}$. Obviously, such an $n$ exists, and $n > 0$. We also easily see that an upper bound for $n$ is $n \leq \frac{2}{\delta}|x|+1$.
-Now we use this to separate $|x|$ into $n-1$ distinct parts of size $s<\delta/2$, and the last part, which is smaller than $\delta$: -$$|x|=|x_1-x_0|+|x_2-x_1|+|x_3-x_2|+...+|x_n-x_{n-1}| < (n-1)\delta/2 + \delta = (n+1)\delta/2.$$ -$$\begin{align} -\Rightarrow |g(x)| & =|g(x_1)-g(x_0)+g(x_2)-g(x_1)+g(x_3)-g(x_2)+...+g(x_n)-g(x_{n-1})| \\ -& \leq |g(x_1)-g(x_0)|+|g(x_2)-g(x_1)|+|g(x_3)-g(x_2)|+...+|g(x_n)-g(x_{n-1})| \\ -& \lt n \cdot \epsilon \leq (\frac{2}{\delta}|x|+1) \cdot \epsilon = \frac{2\epsilon}{\delta} \cdot |x|+\epsilon -\end{align}$$ -So the given constant $c$ can play the role of $\epsilon$ (set $\epsilon := c$), and that is also the reason why, generally speaking, we need $c>0$. Then we can choose $a := \frac{2\epsilon}{\delta}$, as our $\delta$ only depends on the $\epsilon$, and we have that $|g(x)| \leq a \cdot |x| + c$ for $c > 0$. $\quad \square$ - -REPLY [7 votes]: It is false if $c=0$. To see this, try to think of a continuous function that grows very rapidly near $0$. -It is true if $c\gt 0$. One way to show it is by taking a number of very small steps from $0$ to $x$, small enough to guarantee (using uniform continuity) that the function changes no more than a certain fixed amount at each step. Trying to write out the details should lead you to what this fixed amount is, and to what value of $a$ will work.<|endoftext|> -TITLE: Group of order 12 -QUESTION [8 upvotes]: Is it true or false that a group of order 12 always has a normal 2-Sylow subgroup? -I have a hunch it is false. - -REPLY [4 votes]: The 5 groups of order 12 are -$C_{12}$, $C_6 \times C_2$ in the abelian case, and -$A_4$ (the group of all even permutations of 4 letters), -$D_6$ (the group of all symmetries of the regular hexagon), -$C_3 \rtimes C_4$ in the nonabelian case. (Of these, $D_6$ already gives a counterexample: it has three Sylow 2-subgroups, so its 2-Sylow subgroup is not normal.)<|endoftext|> -TITLE: Fiction "Division by Zero" By Ted Chiang -QUESTION [9 upvotes]: Fiction "Division by Zero" By Ted Chiang -I read the fiction story "Division by Zero" By Ted Chiang -My interpretation is the character finds a proof that arithmetic is inconsistent. -Is there a formal proof the fiction can't come true? (I don't suggest the fiction can come true). -EDIT: I see someone tried - -REPLY [7 votes]: Is there a formal proof the fiction can't come true? -No, by Gödel's second incompleteness theorem, (sufficiently strong) formal systems can prove their own consistency if and only if they are inconsistent. So given that arithmetic is consistent, we'll never be able to prove that it is. (EDIT: Actually not quite true; see Alon's clarification below.) -As an aside, if you liked "Division by Zero," you might also like Greg Egan's pair of stories in which arithmetic isn't consistent: "Luminous" and "Dark Integers".<|endoftext|> -TITLE: Going to the Movies! -QUESTION [26 upvotes]: I was looking at movie times today and was struck by the oddly-spaced showing times. For example, at the local Loew's Theater "Tron: Legacy 3D" (127 min.) is playing on two screens at the following interlaced times: 1:00 pm, 1:45 pm, 4:00 pm, 4:45 pm, 7:00 pm, 7:45 pm, 10:00 pm and 10:45 pm. Why not space the times equally? Is there an algorithm at work here? Other than optimizing food sales by cleverly keeping a pool of waiters, the strange times might have to do with overbooking and accommodating johnnies-come-lately. -Consider the following idealized scenario. Suppose only $1$ movie is playing at a theater with $n$ screens, and free popcorn and refreshments are given upon sitting in the theater, so no other factors are relevant for spacing movie times but ticket sales.
Suppose each showing can accommodate at most $N$ people. Suppose $N \pm M$ arrive at the kiosk reasonably before any particular showing time, where $0 < M < N$, and $0 < L < M$ people show up just a little too late for any particular show -- the same number of latecomers come by each time. If any person has to wait for more than some fraction $0 < R < 1$ of the time $t$ of the movie in question to watch the next movie in the queue, then he/she returns the ticket and goes home. Suppose the $\pm$ sign above is governed by tossing a fair coin, $+$ for heads, $-$ for tails. -Question: Given the above data, what is the optimal spacing of $X$ movie times, each movie of the same length $t$, on $n$ different screens that maximizes the total number of ticket purchases and (happy) moviegoers? -If this question is too easy, then generalize the above scenario to multiple movies showing at the same theater. If this question is too hard, then simplify it. -(Of course, feel free to edit and improve.) -(Added Thoughts) The constraints above are in place to try to model the scenario as closely as possible while keeping the mathematics simple. -I'd like to account for a little randomness, and the simplest truly non-trivial random event is the tossing of a fair coin. If $N - M$ or $N + M$ people come every time, then the problem is trivial or cumulatively impossible, respectively. What makes this problem tractable is that there are some occasions when some people are left out of a showing. These people are either at the end of a long queue or literally late; either way they must wait, but few, if any, will wait longer than the length of the movie. I believe the answer for the spacing depends heavily on the amount of wait time. That is, if $R = 0$, $L + M > 0$ people go home every time (not optimal). If $R = 1$, then any reasonable spacing should suffice to accommodate the extremely patient moviegoers. I think this possibility oversimplifies the problem, unless I'm missing something crucial or obvious. -I suppose also that the condition $L < M$ could be relaxed to $L < N$, but my reasoning is that latecomers seem to be rarer than overbookers. Are these constraints reasonable? - -REPLY [3 votes]: I heard a question like this years ago and heard that the theaters are running the same film through several projectors on several screens. First the film goes through projector one on screen one, then through projector two on screen two, and so on. It is called a "platter system" (see link below for a wiki article on it). I guess it takes a little time for it to get to the second projector, and maybe there is some sort of delay they can add so there is a little time between showings. -http://en.wikipedia.org/wiki/Movie_projector#Single_reel_system<|endoftext|> -TITLE: Finding irreducible polynomials over GF(2) with the fewest terms -QUESTION [11 upvotes]: I'm investigating an idea in cryptography that requires irreducible polynomials with coefficients of either 0 or 1 (i.e., over GF(2)). Essentially I am mapping bytes to polynomials. For this reason, the degree of the polynomial will always be an integer multiple of 8. -I would like to build a table of such irreducible polynomials for various degrees (e.g. degrees from 8, 16, 24, ..., 1024). Because there are multiple irreducible polynomials for a given degree, I'd like the one with the fewest terms since I will hard code the non-zero terms.
For example, for degree 16, both of these polynomials are irreducible: -$x^{16} + x^{14} + x^{12} + x^7 + x^6 + x^4 + x^2 + x + 1$ -and -$x^{16} + x^5 + x^3 + x + 1$ -Obviously, the latter one is preferred because it requires less space in code and is more likely to be right (e.g. that I wouldn't have made a copy/paste error). -Furthermore, I've noticed that up to at least degree 1024, where the degree is a multiple of 8, there are irreducible polynomials of the form: -$x^n + x^i + x^j + x^k + 1$ where $n = 8*m$ and $0 < i,j,k < 25$ -Is there a good algorithmic way of finding these polynomials (or ones that have even fewer terms)? Again, the purpose is to keep the non-zero terms in a look-up table in code. -Thanks in advance for any help! -UPDATE: -This Mathematica code generates all pentanomials for degrees that are multiples of 8 up to degree 1024: -IrreducibleInGF2[x_] := IrreduciblePolynomialQ[x, Modulus -> 2] - -ParallelTable[ - Select[ - Sort[ - Flatten[ - Table[ - x^n + x^a + x^b + x^c + 1, - {a, 1, Min[25, n - 1]}, {b, 1, a - 1}, {c, 1, b - 1} - ] - ] - ], - IrreducibleInGF2, 1], - {n, 8, 1024, 8}] - -(I sorted the list of polynomials to make sure I always got the one with the overall smallest degrees first). However, it takes quite a bit of time to run. For example, it took over 26 minutes for the case of $x^{984} + x^{24} + x^9 + x^3 + 1$. -UPDATE #2 -The HP paper "Table of Low-Weight Binary Irreducible Polynomials" has been incredibly helpful. It lists polynomials up to $x^{10000}$ and reiterates a proof by Swan that there are no irreducible trinomials when the degree is a multiple of 8 (which matches my findings). I've spot checked that their results match mine up to $x^{1024}$, so I'll just need to double check their results up to 10000, which should be much easier than finding them myself. - -REPLY [10 votes]: According to a paper "Optimal Irreducible Polynomials for $GF(2^m)$ Arithmetic" by M. Scott, -"it is in practice always possible to choose as an irreducible polynomial either a trinomial... or a pentanomial." [talk slides] [PDF link] -In random number generators, irreducible trinomials (three nonzero binary coefficients) of various degrees are associated with the names Tausworthe-Lewis-Payne. -Added: It has been known since Gauss that there are lots of irreducible polynomials over a finite field, basically the analog of the Prime Number Theorem for such polynomials. Among the $2^m$ (monic) polynomials over Z/2Z of degree $m$, approximately $1/m$ of them are irreducible. -We can eliminate the possibility of first degree factors by inspection, for divisibility by $x$ or $x+1$ would imply respectively a zero constant term or an even number of nonzero terms in the polynomial. So the first case to test for irreducibility is the trinomials of degree $m$. With leading and constant coefficients accounting for two of the three nonzero terms, there are but $m-1$ possibilities to test, and by symmetry of the $x$ → $1/x$ substitution, we can restrict the middle terms to degree ≤ $m/2$. -If none of those pan out, we have the richer supply of pentanomials to test. Indeed you seem to have hit upon a seam of cases where trinomials will never work out, namely degree $m$ a multiple of 8 [PS] (Swan, 1962). -The work then comes down to testing all the $\binom{m-1}{3}$ binary pentanomials $p(x)$ until we find one that's irreducible. Your application might make other conditions, perhaps similar to those considered in Scott's paper above, attractive.
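-As a concrete illustration (my addition, not part of the original answer), here is a minimal Python sketch of the key ingredient of such a search: an irreducibility test for a single GF(2) polynomial, done by the trial division discussed just below, with polynomials represented as integer bitmasks (bit $k$ holds the coefficient of $x^k$). The test case is the degree-16 pentanomial from the question.
-    def poly_mod(a, b):
-        """Remainder of a modulo b in GF(2)[x]; polynomials are int bitmasks."""
-        db = b.bit_length() - 1                        # deg b
-        while a and a.bit_length() - 1 >= db:
-            a ^= b << (a.bit_length() - 1 - db)        # cancel a's leading term
-        return a
-
-    def is_irreducible(p):
-        """Trial division by every q with 1 <= deg q <= deg(p)/2."""
-        m = p.bit_length() - 1
-        return all(poly_mod(p, q) != 0
-                   for d in range(1, m // 2 + 1)
-                   for q in range(1 << d, 1 << (d + 1)))
-
-    # x^16 + x^5 + x^3 + x + 1, the pentanomial from the question
-    p = (1 << 16) | (1 << 5) | (1 << 3) | (1 << 1) | 1
-    print(is_irreducible(p))                           # expect True
-The bitmask representation makes each reduction step a few shifts and XORs; the same arithmetic is what one would build on to implement the repeated-squaring test described below.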
Given the modest degrees you are working with, trial division (taking $p(x) \; mod \; q(x)$ for all $q(x)$ of degree ≤ $m/2$) should be fast enough. [Remember, we shouldn't have to test more than O(m) possibilities before we find success.] -There is a fancier way [PDF] to test polynomials for irreducibility over GF(2). A necessary condition for a binary polynomial $p(x)$ of degree $m$ to be irreducible over $GF(2)$ is that: -$$x^{2^m} = x \mod p(x)$$ -In fact Gauss showed for prime q that $x^{q^m} - x$ is precisely the product -of all monic irreducible polynomials over $GF(q)$ whose degrees divide -$m$. [From this he deduced the count of monic irreducible polynomials of -degree exactly $m$ is asymptotically $q^m/m$ as $m \rightarrow \infty$.] -For $q = 2$ it follows that if $p(x)$ is irreducible of degree $m$, -it divides $x^{2^m} - x$ over $GF(2)$, i.e. the congruence above. -Rubin (1980) proposed a necessary and sufficient test for irreducibility, -combining the above with some additional steps to rule out the possibility -that $p(x)$ might be the product of some irreducible factors whose degrees -properly divide $m$. [While the degrees of the irreducible factors would -naturally sum to $m$, having all the irreducible factors' degrees divide -$m$ would be somewhat special, unless of course there is only one factor.] -The additional "sufficiency" steps are to check for each prime factor -$d$ of $m$ that: -$$GCD(x^{2^{m/d}} - x, p(x)) = 1$$ -That is, if $p(x)$ were to have an irreducible factor of degree $k$ properly -dividing $m$, it would crop up when taking the gcd of $x^{2^{m/d}} - x$ and -$p(x)$ if $k$ divides $m/d$. -Since then a lot of ingenuity has been applied to efficiently doing these -steps. Of course the necessary condition lends itself to repeated squaring, -computing $x^{2^{k+1}} = (x^{2^k})^2$ mod $p(x)$, for $k$ up to $m-1$. We -can take advantage here of the fact that the multiplication we're doing -is a squaring, and of the sparsity of our $p(x)$ as a pentanomial -when doing reduction mod $p(x)$. -As the report by Brent of work with Zimmermann (linked above) points out, -this repeated squaring gives, with each (fairly inexpensive) step, linear -progress toward the "exponent of the exponent" $m$. There is also a way -to progress farther with greater computational effort by modular -composition. -That is, suppose we've already arrived at: -$$f(x) = x^{2^k} \mod p(x)$$ -and -$$g(x) = x^{2^j} \mod p(x)$$ -Then: -$$f(g(x)) = x^{2^{k+j}} \mod p(x)$$ -Thus composition of two polynomials $f(x)$ and $g(x)$ mod $p(x)$ can -replace a number of repeated squaring steps. But composition mod $p(x)$ -is more expensive than squaring, or even than multiplication generally, -mod $p(x)$. So as Brent points out, the practical advantage of using -modular composition lies at the final stage(s) of evaluating the -necessary condition. E.g. at the end one modular composition might -replace $m/2$ repeated squarings. -As far as the "sufficiency" conditions go, Gao and Panario outlined -an improvement over a naive implementation of Rubin's tests in this -1997 paper [PDF], basically sequencing the gcd computations in -an expedient order.<|endoftext|> -TITLE: Geometric argument that operators on $\mathbb{R}^3$ have an eigenvalue?
-QUESTION [11 upvotes]: This question came up when trying to find a $3\times3$ real matrix $A$ such that - -$Ax$ is nonzero for nonzero $x$ -$Ax$ is orthogonal to $x$ for any $x$ in $\mathbb{R}^3$ - -We know such a matrix cannot exist because $A$ must have a real eigenvalue (thus there is some $x$ such that either $Ax = 0$ or $Ax$ is parallel to $x$). -However, - -Is there a nice, purely geometric way to justify that every operator on $\mathbb{R}^3 $ has a (real) eigenvalue? - -To clarify: I'm looking for an intuitive way to visualize why an eigenvalue must exist in this case. In particular, no polynomials and no determinants are allowed! - -REPLY [20 votes]: If $A\colon\mathbb{R}^3\to\mathbb{R}^3$ were a linear map with no eigenvectors, then $x\mapsto Ax/\Vert Ax\Vert$ would give a map on the unit sphere with no fixed points and without taking any point to its antipode, contradicting the hairy ball theorem. - -REPLY [3 votes]: Just using linear algebra but no determinants, the closest to what you ask that I know of uses the fact that polynomials of degree 3 with real coefficients have real roots: -Let's argue by contradiction, and assume that $A$ has no (real) eigenvalues. First, I claim that any $v$ is contained in a proper $A$-invariant subspace of ${\mathbb R}^3$. This is clear if $v=0$. Otherwise, consider $v,Av,A^2v,A^3v$. They are linearly dependent, so there are real coefficients $a,b,c,d$ not all zero with $(aA^3+bA^2+cA+dI)v=0$. -Now we examine the polynomial $p(x)=ax^3+bx^2+cx+d$. If $a\ne0$, $p$ has a real root $r$, and we can write $p(x)=(x-r)(ax^2+ex+f)$ for some real numbers $e,f$. This gives us $(A-rI)(aA^2+eA+fI)v=0$, but $A-rI$ is invertible (since $r$ is not an eigenvalue of $A$), so in fact $(aA^2+eA+fI)v=0$. If $a=0$, then directly we have $(bA^2+cA+dI)v=0$. -This shows that $v$ is contained in a proper $A$-invariant subspace (consider the span $S$ of $v,Av$, and note that the argument above shows that either $A^2v$ is in this span, from which it follows that $S$ is $A$-invariant, or else the coefficient of $A^2$ above is 0, and in fact $v$ is an eigenvector, a contradiction). -So we may assume that any non-zero $v$ is in an $A$-invariant plane $P_v$. Now, if $w$ is a vector not in $P_v$, then any vector in $P_v\cap P_w$ is mapped by $A$ to another vector in the same line, so $P_v\cap P_w$ is an $A$-invariant line, i.e., $A$ has a real eigenvalue after all. -(Note that I presented the argument as a contradiction for brevity, but it can be rearranged as a direct proof.) -David Milovich found a nice way of extending this argument so we have a short nice "determinant free" proof that $n\times n$ matrices with real entries, $n$ odd, admit a real eigenvalue; see this blog post of mine. -I was interested in this because this is the base case of a nice inductive argument (that actually can be traced back to one of Gauss' first proofs of the fundamental theorem of algebra) that allows us to show that any square matrix with real coefficients admits a (perhaps complex) eigenvalue, from which we can deduce the fundamental theorem of algebra. I refer to this in the post above, but it comes from "The fundamental theorem of algebra and linear algebra" by Harm Derksen, American Mathematical Monthly, 110 (7) (2003), 620-623. (The issue is that in that paper, the odd-dimensional case is done by appealing to determinants.)<|endoftext|> -TITLE: If $g^{-1} \circ f \circ g$ is $C^\infty$ whenever $f$ is $C^\infty$, must $g$ be $C^\infty$?
-QUESTION [13 upvotes]: Suppose that $g$ is a bijection on the real line, and $g^{-1} \circ f \circ g$ is a $C^\infty$ function whenever $f$ is $C^\infty$. It seems howlingly obvious that this can only happen if $g$ is itself $C^\infty$. But I can't figure out how to prove it. -Can you help me, Internets? -(Context: I want to show that in a manifold of dimension at least 2, the facts about which unparameterized curves are smooth suffice to determine the differential structure. I think I can sort of see how to prove this provided I can rely on the claim above, but that's where I'm stuck. However, if anyone here knows of a proof of the geometric claim, pointers would be most welcome.) - -REPLY [15 votes]: This is a result due to Floris Takens: - -Let $\Phi \colon M_1 \to M_2$ be a bijection between two smooth $n$-manifolds such that $\lambda \colon M_2 \to M_2$ is a diffeomorphism iff $\Phi^{-1} \circ \lambda \circ \Phi$ is a diffeomorphism. Then $\Phi$ is a diffeomorphism. - -It can be found in Characterization of a differentiable structure by its group of diffeomorphisms (Math Reviews number: MR552032). I remember that I found an online version of it (but don't remember where off the top of my head, probably via MathSciNet). I don't have it in front of me, but I do recall that the argument was fairly involved. (I found it for a fairly similar reason: I wanted to show that the only Frölicher space with endomorphism monoid $C^\infty(\mathbb{R},\mathbb{R})$ was $\mathbb{R}$ itself, this is Proposition 8.4 of Comparative Smootheology.) - -Update: After reading the comments, I realised that the quoted theorem is not quite what is wanted. Actually, what you need is Proposition 8.4 of Comparative Smootheology, which uses Takens' result at the key stage but nonetheless is about a page long in the proving. The result in the paper is: - -The only Frölicher structures on $\mathbb{R}$ whose endomorphism monoid contains $C^\infty(\mathbb{R},\mathbb{R})$ are the standard, the discrete, and the indiscrete structures. In particular, the only Frölicher structure on $\mathbb{R}$ whose endomorphism monoid is precisely $C^\infty(\mathbb{R},\mathbb{R})$ is the standard structure. - -You can almost replace "Frölicher structure" by "smooth structure" in the above, except that there's no such thing as the "indiscrete smooth structure". The discrete smooth/Frölicher structure on $\mathbb{R}$ views $\mathbb{R}$ as a load of disjoint points: a zero-dimensional manifold. This is characterised by the fact that the only smooth functions $\mathbb{R} \to \mathbb{R}_{disc}$ are the constant functions (here the unadorned $\mathbb{R}$ is the usual manifold $\mathbb{R}$). The indiscrete structure is the opposite: the only smooth functions $\mathbb{R}_{indisc} \to \mathbb{R}$ are the constant functions. So we could rephrase the above result as: - -The only smooth structures on $\mathbb{R}$ whose endomorphism monoid contains $C^\infty(\mathbb{R},\mathbb{R})$ are the standard and the discrete structures. In particular, the only smooth structure on $\mathbb{R}$ whose endomorphism monoid is precisely $C^\infty(\mathbb{R},\mathbb{R})$ is the standard structure. - -Let us connect that to your question. You ask: - -If $g \colon \mathbb{R} \to \mathbb{R}$ is a bijection such that $g^{-1} \circ f \circ g$ is $C^\infty$ whenever $f$ is $C^\infty$, must $g$ be $C^\infty$? 
First, we can use $g$ to put a smooth structure on $\mathbb{R}$ by post-composing $g$ with charts (or pre-composing with $g^{-1}$ if your charts go from your manifold). Let us write this as $\mathbb{R}_g$. For clarity, let us write $\mathbb{R}_s$ for the standard smooth structure on $\mathbb{R}$. Then $g \colon \mathbb{R}_s \to \mathbb{R}_g$ is a diffeomorphism, by construction. Given a smooth function $f \colon \mathbb{R}_s \to \mathbb{R}_s$ we get a smooth function $f_g \colon \mathbb{R}_g \to \mathbb{R}_g$ by $g \circ f \circ g^{-1}$. Conversely, given a smooth function $h \colon \mathbb{R}_g \to \mathbb{R}_g$, $g^{-1} \circ h \circ g$ is smooth on $\mathbb{R}_s$ and thus is $C^\infty$. Hence the endomorphism monoid of $\mathbb{R}_g$, $C^\infty(\mathbb{R}_g,\mathbb{R}_g)$, is $g C^\infty(\mathbb{R}_s,\mathbb{R}_s) g^{-1}$. -The condition imposed is that $g^{-1} \circ f \circ g \in C^{\infty}(\mathbb{R},\mathbb{R})$ for all $f \in C^\infty(\mathbb{R},\mathbb{R})$. The $\mathbb{R}$s here are $\mathbb{R}_s$ in our notation. Translated into a single statement (rather than one for each $f \in C^\infty(\mathbb{R},\mathbb{R})$), this is: $g^{-1} C^\infty(\mathbb{R}_s,\mathbb{R}_s) g \subseteq C^\infty(\mathbb{R}_s,\mathbb{R}_s)$. Now here's the sneaky bit; the inclusion here is going the wrong way. So we simply apply $g \circ - \circ g^{-1}$ to both sides to get $C^\infty(\mathbb{R}_s,\mathbb{R}_s) \subseteq g C^\infty(\mathbb{R}_s,\mathbb{R}_s) g^{-1}$. -Now we apply my result to deduce that $\mathbb{R}_g$ must be either the discrete smooth structure (which it isn't) or the standard one: $\mathbb{R}_g = \mathbb{R}_s$. But then $g$ is a diffeomorphism $\mathbb{R}_s \to \mathbb{R}_g = \mathbb{R}_s$ and hence is $C^\infty$.<|endoftext|> -TITLE: Why does this converge to $\pi/4$? -QUESTION [22 upvotes]: The infinite series... -$\pi/4 = 1 - 1/3 + 1/5 - 1/7 ...$ -...is very intriguing to me and seems like a crazy coincidence (its relationship to $\pi$). Is it actually crazy or does it have an easy-to-explain, logical reasoning behind it that would make it seem not so magical? - -REPLY [6 votes]: The infinite series $$\pi/4 = 1-1/3+1/5-1/7+ \ ...$$ can be established by finding the Taylor series -\begin{equation} -f(x) = \sum_{k=0}^\infty \frac{1}{k!} f^{(k)}(a) (x-a)^k -\end{equation} -of $\arctan(x)$ for $x \in [-1,1]$ around $a = 0$ and applying the result at $x = 1$. The finite geometric sum formula -\begin{equation} -\sum_{k=0}^n q^k = \frac{1-q^{n+1}}{1-q}, \ \ q \in \mathbb{C}, \ q \neq 1 -\end{equation} -is applied to find a Taylor-form series. The uniqueness of the Taylor polynomial establishes the uniqueness of the Taylor series. Note that we hence do not need to calculate all derivatives of $\arctan(x)$.
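-(A quick numerical sanity check, added here as a sketch and not part of the original answer: the partial sums of the claimed series really do settle on $\pi/4$. The alternating series bound says the error after $N$ terms is at most $\frac{1}{2N+1}$.)
-    from math import pi
-
-    # Partial sum of 1 - 1/3 + 1/5 - 1/7 + ... with N = 10^6 terms
-    s = sum((-1)**k / (2*k + 1) for k in range(1_000_000))
-    print(s, pi/4)   # agree to about six decimal places
-With that reassurance, on to the derivation.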
We calculate -\begin{eqnarray} -\arctan(t) & = & \arctan(t) - \arctan(0) = \bigg\vert_0^t \arctan(x) = \int_0^t \frac{1}{1+x^2} dx \\ -& = & \int_0^t \Big(\frac{1-(-x^2)^{n+1}}{1-(-x^2)} + \frac{(-x^2)^{n+1}}{1-(-x^2)} \Big) dx \\ -& = & \int_0^t \frac{1-(-x^2)^{n+1}}{1-(-x^2)} dx + \int_0^t \frac{(-x^2)^{n+1}}{1-(-x^2)} dx \\ -& = & \int_0^t \sum_{k=0}^n (-x^2)^k dx + \int_0^t \frac{(-x^2)^{n+1}}{1+x^2} dx \\ -& = & \sum_{k=0}^n \int_0^t (-x^2)^k dx + \int_0^t \frac{(-x^2)^{n+1}}{1+x^2} dx \\ -& = & \sum_{k=0}^n \int_0^t ((-1)x^2)^k dx + \int_0^t \frac{((-1)x^2)^{n+1}}{1+x^2} dx \\ -& = & \sum_{k=0}^n \int_0^t (-1)^k (x^2)^k dx + \int_0^t \frac{(-1)^{n+1}(x^2)^{n+1}}{1+x^2} dx \\ -& = & \sum_{k=0}^n \int_0^t (-1)^k x^{2k} dx + \int_0^t \frac{(-1)^{n+1}x^{2(n+1)}}{1+x^2} dx \\ -& = & \sum_{k=0}^n (-1)^k \int_0^t x^{2k} dx + \int_0^t \frac{(-1)^{n+1}x^{2n+2}}{1+x^2} dx \\ -& = & \sum_{k=0}^n (-1)^k \bigg\vert_0^t \frac{x^{2k+1}}{2k+1} + \int_0^1 \frac{(-1)^{n+1}(tx)^{2n+2}}{1+(tx)^2} t dx \\ -& = & \sum_{k=0}^n (-1)^k \frac{t^{2k+1}}{2k+1} + \int_0^1 \frac{(-1)^{n+1} t^{2n+2} x^{2n+2}}{1+(tx)^2} t dx \\ -& = & \sum_{k=0}^n \frac{(-1)^k}{2k+1} t^{2k+1} + \int_0^1 \frac{(-1)^{n+1} t^{2n+3} x^{2n+2}}{1+(tx)^2} dx , -\end{eqnarray} -where $t \in \mathbb{R}$ and $n \in \mathbb{N}$. Note that $-x^2 \neq 1$ for every $x \in \mathbb{R}$. Hence we can apply the finite geometric sum formula for every $x \in \mathbb{R}$, that allows us to calculate the Taylor polynomial for every $t \in \mathbb{R}$. Assume now $t \in [-1,1]$. To obtain the limit function we calculate -\begin{eqnarray} -\Bigg| \arctan(t) & - & \sum_{k=0}^n \frac{(-1)^k}{2k+1} t^{2k+1} \Bigg| = \Bigg| \int_0^1 \frac{(-1)^{n+1}t^{2n+3}x^{2n+2}}{1+(tx)^2} dx \Bigg| \\ -& \leq & \int_0^1 \Bigg| \frac{(-1)^{n+1}t^{2n+3}x^{2n+2}}{1+(tx)^2} \Bigg| dx \\ -& \leq & \int_0^1 \frac{|-1|^{n+1}|t|^{2n+3}|x|^{2n+2}}{|1+(tx)^2|} dx \\ -& = & \int_0^1 \frac{1^{n+1} |t|^{2n+3} x^{2n+2}}{1+(tx)^2} dx \leq \int_0^1 \frac{|t|^{2n+3} x^{2n+2}}{1} dx \\ -& = & \int_0^1 |t|^{2n+3} x^{2n+2} dx = |t|^{2n+3} \int_0^1 x^{2n+2} dx \\ -& = & |t|^{2n+3} \bigg\vert_0^1 \frac{1}{2n+3} x^{2n+3} = \frac{|t|^{2n+3}}{2n+3} \leq \frac{1^{2n+3}}{2n+3} \\ -& = & \frac{1}{2n+3} \rightarrow 0, -\end{eqnarray} -as $n \rightarrow \infty$. Hence -\begin{eqnarray} -\arctan(x) & = & \lim_{n \rightarrow \infty} \sum_{k=0}^n \frac{(-1)^k}{2k+1} x^{2k+1} = \sum_{k=0}^\infty \frac{(-1)^k}{2k+1} x^{2k+1} -\end{eqnarray} -for $x \in [-1,1]$. Now inserting $x = 1$ into the series expression of $\arctan(x)$ we obtain -\begin{eqnarray} -\pi/4 & = & \arctan(1) = \sum_{k=0}^\infty \frac{(-1)^k}{2k+1} 1^{2k+1} = \sum_{k=0}^\infty \frac{(-1)^k}{2k+1} \\ -& = & 1 - 1/3 + 1/5 - 1/7 + \ ... \ , -\end{eqnarray} -that is the desired result. I hope that this was what you were searching for.<|endoftext|> -TITLE: Algebraic versus topological line bundles -QUESTION [20 upvotes]: Let $X$ be a CW complex. The (isomorphism classes of) complex line bundles on $X$ are classified by the homotopy classes of maps $X \to \mathbb{CP}^\infty$, that is by the elements of $H^2(X, \mathbb{Z})$. -It is also true that the tensor product of line bundles corresponds to adding cohomology classes. It follows that if $n \in \mathbb{N}$, then the line bundles of $\mathbb{CP}^n$ are generated by the tensor powers of the tautological line bundle, or, equivalently, by the tensor powers of the sheaf typically denoted $\mathcal{O}(1)$ in algebraic geometry (because the dual of $\mathcal{O}(1)$ is the tautological bundle). 
-It is also true that the isomorphism classes of algebraic line bundles on $\mathbb{CP}^n$ form a group isomorphic to $\mathbb{Z}$, given by the powers of $\mathcal{O}(1)$, as one can see by studying the Weil divisor class group. It follows that the isomorphism classes of line bundles are the same in both the algebraic and the topological category. -The above observation is also true for affine $n$-space, because the topological line bundles are trivial ($\mathbb{C}^n$ being contractible), and the algebraic ones are as well (the polynomial ring being a UFD). -To what extent is this true in general? - -REPLY [4 votes]: Dear Akhil, I have nothing to add to Matt E's masterful survey on the algebraic/analytic comparison. However, since you ask about topological line bundles, you have to modify his answer in the following way. -You must replace everywhere $\mathcal O$ by $\mathcal C$, the sheaf of continuous functions. Now things are very easy: since $\mathcal C$ is soft (fine if you prefer), it is acyclic and so the map -$Pic^{top}(X) \to H^2(X, \mathbb Z)$ is an isomorphism: a topological line bundle is classified by its Chern class, which lives in the second cohomology group of the space. To put it dramatically: the continuous Jacobian is trivial! -For example, topological line bundles on a compact Riemann surface are classified by $\mathbb Z$, in stark contrast to the huge Picard variety classifying its algebraic = analytic line bundles. So to answer your question "To what extent is this true in general?" [natural bijection between algebraic and topological line bundles] I would answer, just for the pleasure of using the anglicism: "once in a blue moon".<|endoftext|> -TITLE: Set Theoretic Definition of Numbers -QUESTION [51 upvotes]: I am reading the book by Goldrei on Classic Set Theory. My question is more of a clarification. It is on whether we are overloading symbols in some cases. For instance, when we define $2$ as a natural number, we define $$2_{\mathbb{N}} = \{\emptyset,\{\emptyset\} \}$$ When we define $2$ as an integer, $2_{\mathbb{Z}}$ is an equivalence class of ordered pairs $$2_{\mathbb{Z}} = \{(n+_{\mathbb{N}}2_{\mathbb{N}},n):n \in \mathbb{N}\}$$ Similarly, when we define $2$ as a rational number, $2_{\mathbb{Q}}$ is an equivalence class of ordered pairs $$2_{\mathbb{Q}} = \{(a \times_{\mathbb{Z}} 2_{\mathbb{Z}},a):a \in \mathbb{Z}\backslash\{0\}\}$$ and as a real number we define it as the left Dedekind cut of rationals less than $2_{\mathbb{Q}}$, i.e. $$2_{\mathbb{R}} = \{q \in \mathbb{Q}: q <_{\mathbb{Q}} 2_{\mathbb{Q}}\}$$ -The clarification is: each of the above are different objects, right? So when we say $2$, it depends on the context? Also, if the above is true, is it correct or incorrect to say that "The set of natural numbers is a subset of the reals"? Should we take the statement with a pinch of salt and understand it accordingly? - -REPLY [5 votes]: In addition to the other answers, it's also noteworthy that there is a number system (more-or-less) integrating N, Z, Q and R (and a lot more, but not C!).
-The surreal numbers take the basic idea from Dedekind cuts, assigning to each number a so-called left set ('smaller') and right set ('larger') of numbers (subject to certain rules), bootstrapping the whole process from the empty set and ending up with the reals and weird infinitesimals like $\frac{1}{\sqrt{\omega - \pi}}$!<|endoftext|> -TITLE: Generators of a free group -QUESTION [6 upvotes]: If G is a free group generated by n elements, is it possible to find an isomorphism of G with a free group generated by n-1 (or fewer) elements? - -REPLY [13 votes]: Here is another approach. Let $G$ be a free group on $m$ generators, and let $H$ be a free group on $n$ generators. There are exactly $2^m$ homomorphisms from $G$ to a group of order two, since each generator can be mapped in two ways. Likewise, there are $2^n$ homomorphisms from $H$ to a group of order two. -If $G$ and $H$ are isomorphic, then they have the same number of homomorphisms to a group of order two. Therefore $2^m = 2^n$, which implies $m=n$. - -REPLY [4 votes]: Dave R's answer highlights a more general principle: by the Yoneda lemma, an object $X$ in a category is determined up to isomorphism by the behavior of the functor $F_X = \text{Hom}(X, -)$, so to show that two objects $X, Y$ are not isomorphic it suffices to show that the corresponding functors $F_X, F_Y$ are not isomorphic. (In particular it suffices to show the existence of an object $Z$ such that $\text{Hom}(X, Z)$ has a different size from $\text{Hom}(Y, Z)$.) -The free groups $F_n$ represent some special functors $\text{Grp} \to \text{Set}$: namely, $\text{Hom}(F_n, G)$ is precisely the set $G^n$ of $n$-tuples of elements of $G$. This is a manifestation of the adjunction between the free group functor $\text{Set} \to \text{Grp}$ and the underlying set functor $\text{Grp} \to \text{Set}$, and by setting $G$ to any nontrivial finite group (Dave R's answer uses $\mathbb{Z}/2\mathbb{Z}$) it is not hard to see that these functors are all nonisomorphic. -More generally, I believe one can say the following. Let $F : D \to C$ and $G : C \to D$ be an adjunction, hence -$$\text{Hom}_C(FX, Y) \simeq \text{Hom}_D(X, GY)$$ -and suppose that the functor $G$ is essentially surjective. (In this case $G$ is the forgetful functor $\text{Grp} \to \text{Set}$ and $F$ is the free group functor $\text{Set} \to \text{Grp}$; then it is a classical result that $G$ is essentially surjective.) Then by another application of the Yoneda lemma, I believe the functors $\text{Hom}_C(FX_1, -)$ and $\text{Hom}_C(FX_2, -)$ are isomorphic if and only if the objects $X_1, X_2$ are isomorphic in $D$. Can anyone confirm this?<|endoftext|> -TITLE: What is the meaning of the third derivative of a function at a point -QUESTION [80 upvotes]: (Originally asked on MO by AJAY.) -What is the geometric, physical, or other meaning of the third derivative of a function at a point? -If you have interesting things to say about the meaning of the first and second derivatives, please do so. - -REPLY [10 votes]: An intuitive complement to Arturo Magidin's answer: -A good way to intuitively grasp the jerk (hence the third derivative of the position function) is to remember the last time you took a plane and realize that the following "equivalences" hold: - -No acceleration = constant speed = feels like when sitting in your chair at work = first derivative of the position function is zero.
-Acceleration = speed increases = feels like someone is pushing you toward the back of your seat = second derivative is positive. -Increasing acceleration = the pace at which your speed increases gets higher and higher = feels like the guy who is pushing you toward the back of your seat is pushing harder and harder = jerk or third derivative is positive. - -In a plane: -1) Right before take-off, the plane is still, with no acceleration; the derivatives of the position function are zero. -2) Now the plane starts moving; you are not still anymore, and the first derivative of the position function is positive. -3) Not only are you moving, but the plane brutally accelerates. As a result of the acceleration, you feel like someone is pushing you toward the back of your seat: the second derivative of the position function is positive. -4) Quickly after the engines are on, not only do you feel like someone is pushing you toward the back of your seat but, in addition, it feels like this imaginary person is pushing harder and harder. This is because you accelerate more and more (the jerk is positive). During the first 2 seconds you went from 0 km/h to, say, 20 km/h, and during the 2 following ones, you went from, say, 20 km/h to 60 km/h: the third derivative of the position function is positive. -5) After some time, the plane still accelerates, but at a diminishing rate. It feels like the imaginary guy pushing you toward the back of your seat starts to release the pressure. He is still pushing (you would need a greater effort to stand up from your seat than if you were sitting in your office chair), but less and less intensely. The rate at which you accelerate is diminishing, hence the third derivative is negative. However, you are still accelerating, so the second derivative is still positive. -6) Your plane eventually reaches its cruising altitude and maintains a constant speed of, say, 800 km/h. So now, you are not accelerating at all, and the second and third derivatives of the position function are zero. Only the first derivative remains positive. -7) When you land, the process is reversed. It feels like someone is pushing you in the back and you need the seatbelt to prevent you from falling forward. When it feels like the imaginary guy pushes you in the back stronger and stronger, then the jerk is negative, hence the third derivative of the position function is negative too.<|endoftext|> -TITLE: an example of a continuous function whose Fourier series diverges at a dense set of points -QUESTION [6 upvotes]: Please give me a link to a reference for an example of a continuous function whose Fourier series diverges at a dense set of points (given by du Bois-Reymond). I couldn't find this in Wikipedia. - -REPLY [8 votes]: Kolmogorov improved his result to a Fourier series diverging everywhere. Original papers, in French: -Kolmogorov, A. N.: Une série de Fourier-Lebesgue divergente presque partout, Fund. Math., 4, 324-328 (1923). -Kolmogorov, A. N.: Une série de Fourier-Lebesgue divergente partout, Comptes Rendus, 183, 1327-1328 (1926).<|endoftext|> -TITLE: Series absolute convergence, proof -QUESTION [6 upvotes]: I had this problem on my last exam in calculus; I still don't know how to solve it, so I would appreciate your help. -Let $a_{ij} \in \mathbb{R}$ for $i,j \in \mathbb{N}$. Suppose that for every $j$ the series $S_j = \sum_{i=1}^{\infty}{a_{ij}}$ is absolutely convergent and that for every $i$ the finite limit (limit $\not= \pm\infty$) $c_i=\lim_{j\to\infty}{a_{ij}}$ exists.
-(a) Prove that if there exists an absolutely convergent series $\sum_{i=1}^{\infty}{b_i}$ such that $|a_{ij}| \leq |b_i|$ for every $i,j$, then $S_j \to S = \sum_{i=1}^{\infty}{c_i}$ as $j \to \infty$.
-(b) Is the statement in (a) true without the assumption that a series $\sum{}b_i$ with the properties given in (a) exists?
-REPLY [5 votes]: First let us look at part (b) to extract some ideas about proving part (a).
-Consider the double sequence $a_{ij} = \delta_{ij}$, that is, $a_{ij} = 1$ if $i = j$ and $a_{ij} = 0$ otherwise. Then $S_j = 1$ for any $j$. On the other hand, $c_i = 0$ for any $i$. So $S = \sum c_i = 0 \neq \lim S_j$. So without the dominating sequence $b_i$ the statement is false.
-(Like Pete says, this is another reflection of something in the continuous case, namely the principle of weak convergence and also Fatou's lemma.)
-Now we prove part (a). Fix $\epsilon > 0$. Let $N$ be chosen large enough such that $\sum_{i = N}^\infty |b_i| < \epsilon/4 $; this $N$ exists since $(b_i)$ is absolutely convergent. Consider the partial sums $S_{j,N} = \sum_{i = 1}^{N-1} a_{ij}$. We can then pick $M$ sufficiently large such that for all $j > M$ and $i < N$, $|a_{ij} - c_i| < \epsilon / (4N)$.
-So in particular, $|S_{j,N} - S_N| < \epsilon / 4$ for $j > M$, where $S_N = \sum_{i = 1}^{N-1} c_i$.
-Now, by the assumption of convergence and the domination by $(b_i)$, you have $|c_i| \leq |b_i|$. So in particular
-$$ |S - S_N| = | \sum_{N}^{\infty} c_i | \leq \sum_{N}^\infty |b_i| \leq \epsilon/4 $$
-We also have
-$$ |S_N - S_{j,N} | < \epsilon / 4, \quad j > M $$
-(where $M$ implicitly depends on $N$, and hence on $\epsilon$) and
-$$ |S_j - S_{j,N} | < \epsilon / 4 $$
-So we have, applying the triangle inequality, that for every fixed $\epsilon > 0$ we can pick $M$ sufficiently large such that
-$$ |S_j - S| < \epsilon \qquad \forall j > M$$
-Q.E.D.<|endoftext|>
-TITLE: Suggest me the path to learn Maths
-QUESTION [5 votes]: I am a Software Engineer. With an interest in computers I have chosen this path. But I don't know anything about higher maths. I want to learn from the basics up to some level (which would help me in my field, like identifying solutions to problems, designing algorithms, designing shapes and some animation on them). So, please suggest where I need to start. Please suggest/list out the concepts that I need to go through.
-Thank you very much
-REPLY [4 votes]: The most useful mathematical areas for a programmer are
-trigonometry and linear algebra, especially to draw and recognise shapes, for ray tracing, animations and so on,
-combinatorics, particularly to cleverly enumerate sets, estimate running times of simple loops and recursions, etc.,
-some elementary group theory and particularly permutation groups, for algorithms that involve tracing out graphs (particularly trees) and also for clever enumeration, as above.
-There are lots of introductory books on each of the above subjects, although maybe other people can comment on which ones are best suited for non-mathematicians. Note that the three areas are fairly independent, at least at the beginning, and you can learn them in any order or in parallel.
-Edit: I forgot the possibly best way to get started: once you know the definition and some basic facts about groups, you should read the book Indra's Pearls. It is a very beautiful book that explains how to draw certain types of fractals with the computer.
It introduces all the relevant mathematics pretty much from zero and it provides the actual programs in pseudo-code, ready for you to implement in your favourite language. The mathematics and the algorithms it introduces are very relevant to other situations, but since you are particularly interested in graphics, you will hardly find a better way to get started! There is even a computer graphics artist who bases some of his work on that book.<|endoftext|>
-TITLE: Cutting a unit square into smaller squares
-QUESTION [17 upvotes]: My algebra professor gave me this puzzle a while back. I'm pretty sure I've found the right solution; nonetheless, I wanted to share it and see if you come up with anything really nice or unexpected.
-Prove that if you take a unit square and cut it into a finite number of smaller squares (in any way you can think of), the side lengths of the smaller squares are all rational.
-P.S. The first tag was my professor's hint.
-[Edit] Just to be clear, every piece must actually be a square (e.g. no gluing two triangles into a square).
-REPLY [6 votes]: Since your professor's hint was to use linear algebra, the intended proof is probably that of Hadwiger - employing a Hamel basis of $\:\mathbb R\:$ over $\:\mathbb Q\:$ to construct additive "area" functions in order to deduce a contradiction (as in Yuval's answer). Related results were proved by Dehn circa 1900 by different complicated methods. Although Hamel's work on Hamel bases occurred shortly later in 1905, it was not until much later, in the 1950's, that Hadwiger and his students noticed that Hamel bases could be employed to greatly simplify Dehn's proofs. For a nice exposition see Freiling; Rinne: Tiling a square with similar rectangles. It's worth remarking that use of the axiom of choice in the construction of a Hamel basis can easily be eliminated for applications of this type; e.g. see the discussion in section 2 of Feshchenko et al.: Dissecting a brick into bars.<|endoftext|>
-TITLE: Basic Taylor expansion question
-QUESTION [11 upvotes]: I seem to have a misunderstanding of how to work with a Taylor series.
-Suppose I want to write the Taylor expansion of $f(x)=x e^x$ of degree $n$ around $0$.
-I see two ways:
-1) Find the $n$th derivative of $f(x)$; this is quite easy: $f^{(n)} (x)=n e^x+x e^x$.
-And from the formula I get:
-$$f(x)=\sum_{k=1}^{n} \frac{x^k}{(k-1)!} + R_{n}(x).$$
-2) Since the Taylor expansion of $e^x$ is already known, it's also possible to do this:
-$$f(x)=xe^x=x\left(\sum_{k=0}^{n-1} \frac{x^k}{k!} + R_{n-1}(x)\right)=\sum_{k=0}^{n-1} \frac{x^{k+1}}{k!} + xR_{n-1}(x)$$
-But how do I interpret $xR_{n-1}(x)$?
-I feel like I'm missing something fundamental about the meaning of $R_{n}(x)$.
-REPLY [22 votes]: Edited and rewritten, hopefully to make it clearer.
-There is something that the notation does not show and which you need to keep straight (this may be the source of your confusion). $R_n(x)$ is the "remainder" at $x$: the error between the actual value of the function and the value you get from evaluating the Taylor polynomial of degree $n$ instead. That is, the $n$th remainder is equal to
-$$f(x) - \sum_{k=0}^n \frac{f^{(k)}(0)}{k!}x^k$$
-(I'm assuming the polynomial is being expanded around $a=0$).
-So the remainder really needs three pieces of information to make it precise: the value of $n$, the value of $x$, and the function $f$ in question. It would be best to write it as $R(n,x,f)$, but since the function $f$ is usually clear from context, we ignore it.
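-(As a quick numeric illustration of the three-argument point of view, here is a small sketch; it assumes SymPy is available, and the helper name R is mine, mirroring the notation above. It also checks, in passing, the identity $R(n+1,x,xe^x) = x\,R(n,x,e^x)$ that will come out of the discussion below.)
-    import sympy as sp
-
-    X = sp.symbols('x')
-
-    def R(n, x0, f):
-        # f(x0) minus the degree-n Taylor polynomial of f about 0, evaluated at x0
-        taylor_poly = sum(f.diff(X, k).subs(X, 0) / sp.factorial(k) * x0**k
-                          for k in range(n + 1))
-        return sp.simplify(f.subs(X, x0) - taylor_poly)
-
-    x0 = sp.Rational(1, 2)
-    for n in range(1, 6):
-        # each difference simplifies to 0
-        print(n, sp.simplify(R(n + 1, x0, X * sp.exp(X)) - x0 * R(n, x0, sp.exp(X))))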
But I will use $R(n,x,f)$ in what follows. So notice that in your question, in (1) what you have is $R(n,x,xe^x)$, while in (2) what you have is $xR(n-1,x,e^x)$. And so the question is what is the relation between the two expressions.
-Before addressing this in more detail, let me take a slight detour to Taylor series.
-Say we have the Taylor series for $f$,
-$$\sum_{k=0}^{\infty}\frac{f^{(k)}(0)}{k!}x^k.$$
-We know this power series converges inside the radius of convergence (which can be found using, say, the Ratio Test); for $f(x)=e^x$, the radius of convergence is infinite, so the series converges for all values of $x$.
-However, there is still the question of whether the value of the series at a given $x$ equals the value of the function at that point. This is where the remainders come in. The Taylor series for $f$ converges to $f(x)$ at the point $x_0$ if and only if $R(n,x_0,f)\to 0$ as $n\to\infty$.
-In the case of $f(x)=e^x$, one can show that the Taylor series not only converges everywhere, but also that it converges to $e^x$. For example, this can be done using the Cauchy estimate for the remainder: given $r\gt 0$, and $x$ in $(-r,r)$, if $M_n$ is a number such that $|f^{(n+1)}(x)|\leq M_n$ for all $x$ in $(-r,r)$, then
-$$|R(n,x,f)|\leq \frac{M_nr^{n+1}}{(n+1)!}.$$
-For $f(x) = e^x$, you can take $M_n=e^r$ (or even $3^r$), so you get an exponential divided by a factorial, and that goes to $0$ as $n\to\infty$. This holds for any $x$ (by changing the $r$), so that $R(n,x,e^x)\to 0$ as $n\to\infty$ for all $x$. So the Taylor series for $e^x$ converges to $e^x$ at every $x$. That is, for each $x$,
-$$e^x = \sum_{k=0}^{\infty}\frac{x^k}{k!} = 1 + x + \frac{x^2}{2!} + \cdots + \frac{x^n}{n!}+\cdots$$
-in the sense that the value of the limit on the right hand side is exactly equal to $e^x$.
-The "Taylor series" for $x$ is also very easy: it's just $x$ itself (not even a real series). It also has infinite radius of convergence, and $R(n,x,x) = 0$ for any $n\geq 1$.
-It is a theorem that if the Taylor series for $f$ converges to $f$ in $(-r,r)$, and the Taylor series for $g$ converges to $g$ in $(-R,R)$, then the product of the series will converge to $fg$ in $(-\min(r,R),\min(r,R))$ (that is, on the interval where they both converge). For $e^x$ and $x$, this tells you that indeed you have that
-$$xe^x = x\left(\sum_{k=0}^{\infty}\frac{x^k}{k!}\right) = \sum_{k=0}^{\infty}\frac{x^{k+1}}{k!} = \sum_{k=1}^{\infty}\frac{x^k}{(k-1)!}.$$
-Because the series converges, and converges to $xe^x$, that means that if you write
-$$xe^x = \sum_{k=1}^n \frac{x^k}{(k-1)!} + R(n,x,xe^x)$$
-then you have that
-$$R(n,x,xe^x) = \sum_{k=n+1}^{\infty} \frac{x^k}{(k-1)!}$$
-(in the sense that the value of $R(n,x,xe^x)$ is the limit of the partial sums of that series) and moreover that
-$$\sum_{k=n+1}^{\infty}\frac{x^k}{(k-1)!} \to 0\text{ as }n\to\infty.$$
-What you are doing in (2) is essentially to deal with the Taylor polynomials instead of the series as I did above. We have that the Taylor series for $e^x$ converges to $e^x$ at every $x$, so that if we write
-$$e^x = \sum_{k=0}^{n}\frac{x^k}{k!} + R(n,x,e^x)$$
-then
-$$R(n,x,e^x) = \sum_{k=n+1}^{\infty}\frac{x^k}{k!}$$
-and $R(n,x,e^x)\to 0$ as $n\to \infty$.
-Multiplying the expression for $e^x$ by $x$, we get
-$$xe^x = x\left(\sum_{k=0}^n\frac{x^k}{k!} + R(n,x,e^x)\right) = \sum_{k=0}^n\frac{x^{k+1}}{k!}+xR(n,x,e^x).$$
-Note, however, that this time $xR(n,x,e^x)$ is giving the $(n+1)$st remainder, since the polynomial we have is of degree $n+1$.
So, equating this with the expression we had before, this suggests that we should have
-$$R(n+1,x,xe^x) = xR(n,x,e^x).$$
-And indeed, this is what we find when we express them as series and limits:
-\begin{align}
-xR(n,x,e^x) &= x\left(\sum_{k=n+1}^{\infty}\frac{x^k}{k!}\right)\\
-&=x\left(\lim_{m\to\infty}\sum_{k=n+1}^m\frac{x^k}{k!}\right)\\
-&= \lim_{m\to\infty}\left(x\sum_{k=n+1}^m \frac{x^k}{k!}\right)\\
-&= \lim_{m\to\infty}\sum_{k=n+1}^m \frac{x^{k+1}}{k!}\\
-&=\lim_{m\to\infty}\sum_{k=n+2}^{m+1}\frac{x^k}{(k-1)!}\\
-&= \lim_{m\to\infty}\sum_{k=n+2}^{m}\frac{x^k}{(k-1)!}\\
-&= R(n+1,x,xe^x),
-\end{align}
-where dropping the last term of the sum in the second-to-last step does not change the limit.
-What this tells you is that the $(n+1)$st remainder at $x$ for the function $xe^x$ equals $x$ times the $n$th remainder at $x$ for the function $e^x$. So your equations both make sense and they are telling you correct things, once you put back the necessary information into the remainder.<|endoftext|>
-TITLE: Why are knot invariants best organized as polynomials?
-QUESTION [13 upvotes]: Does anyone have a good explanation for why knot invariants tend to be well organized as polynomials? What exactly is going on, and why don't we often see polynomial invariants for classifying other geometric objects? For instance, why don't we just say that the Alexander invariants are a finite set of numbers? Presumably, the organizing of them into polynomials shows off some inherent geometric properties of knot description and combination which is easily encapsulated in the multiplication and addition of polynomials. I would be happy (and not surprised) if this also helps me understand why Lie algebra representations also often arise, especially in more recently discovered invariants.
-REPLY [4 votes]: The only polynomial invariants of knots that I can think of are the Alexander polynomial on one hand, and the Jones polynomial and its generalizations (HOMFLYPT et al.) on the other (ignoring things like the 2-loop polynomial). It's clear why the Alexander polynomial should be a polynomial: it's the generating function for torsion numbers of the knot, as explained in Rolfsen, and the Alexander module is finitely presented.
-Why the Jones polynomial should be a polynomial is a much more interesting question, addressed in the paper Is the Jones polynomial of a knot really a polynomial? by Garoufalidis and Le. From the TQFT perspective, we would have expected quantum invariants to be power series, but actually for knots the Jones polynomial is a polynomial, which is quite interesting.
-So I suppose that I don't know the ultimate reason that knots have interesting polynomial invariants. I feel that there has got to be a wonderful secret hiding here! I would guess that it ultimately has to do with the rich local algebraic structure of an algebra over a modular operad generated by crossings, and maybe trivalent vertices and other stuff.<|endoftext|>
-TITLE: How to check if a polytope is a smooth Fano polytope?
-QUESTION [7 votes]: Question:
-We say that a convex lattice polytope $P\subset \mathbb{R}^d$ is a smooth Fano polytope if:
-The origin is contained in the interior of $P$
-The vertices of every facet of $P$ are a $\mathbb{Z}$-basis of $\mathbb{Z}^d$
-Now, suppose we have a set $V=\{v_1,\ldots,v_n\}\subset \mathbb{Z}^d$. What is a (preferably efficient) algorithm for deciding if the convex hull $\text{conv}(V)$ is a smooth Fano polytope?
-Motivation:
-I've read a paper that gives an algorithm to classify all smooth Fano polytopes given the dimension $d$ as input (An algorithm for the classification of smooth Fano polytopes by Mikkel Øbro), and while trying to implement said algorithm I discovered I don't know how to solve this question.
-For those who are curious, I want to use the algorithm to help me gain some intuition about toric varieties, and use it to compute known invariants and test conjectures.
-REPLY [2 votes]: After you have found the facets of $P$, you can check if $d$ points form a $\mathbb{Z}$-basis by calculating their determinant, which has to be $\pm 1$.
-(The determinant is the volume of the fundamental domain and is the inverse of the density of lattice points. A sublattice is the integer lattice if and only if the density is the same.)<|endoftext|>
-TITLE: Different formulations of Class Field Theory
-QUESTION [13 upvotes]: I was reading up on class field theory, and I have a question. On wiki (http://en.wikipedia.org/wiki/Artin_reciprocity), one formulation is that there's some modulus for which $I^c_K/i(K_{c,1})Nm_{L/K}(I^c_L)$ is isomorphic to $Gal(L/K)$.
-Another formulation on the same page is:
-$C_K/N_{L/K}(C_L)$ is isomorphic to $Gal(L/K)$.
-(where $C_{number\,field}$ is the idele class group of that number field).
-How does one relate these two formulations? Is it true that for some modulus, $c$, $I^c_K$ is $C_K$? I don't really see how this fits into one picture.
-REPLY [16 votes]: If you read the initial section or two of the chapter by Tate in Cassels and Fröhlich, he gives a nice explanation of how to pass from the classical formulation in terms of generalized ideal class groups w.r.t. a modulus to the more modern formulation in terms of idele class groups. As Tate explains, the two formulations are indeed equivalent, but it is not quite as simple as saying that $I^c_K = C_K$. Here is a sketch of the equivalence (of course it is the same as in Akhil Mathew's answer, just slightly more detailed):
-Since $I_K^c$ has already been taken to denote the ideals prime to $c$ (at least, this is how I interpret your notation), let me use $J_K$ to denote the ideles for $K$. Then we can consider the subgroup $J_K^c$ of the ideles whose entries are all $1$ at any finite place dividing $c$, and at any infinite place.
-Then there is a natural surjection $J_K^c \to I_K^c$ given by sending any element $(a_{\wp})$ of the former to the ideal $\prod_{\wp} \wp^{v_{\wp}(a_{\wp})}$ of the latter (where the product is over finite places, i.e. prime ideals, $\wp$).
-Now one can show that $K^{\times} J_K^c$ is dense in $J_K$, so the image of $J_K^c$ is dense in $C_K$. Since $N_{L/K}(C_L)$ is open in $C_K$, we see that $J_K^c$ surjects onto $C_K/N_{L/K}(C_L)$.
-Now one checks that this map factors through the surjection $J_K^c \to I_K^c$ described in the preceding paragraph, and in fact induces an isomorphism $I_K^c/i(K_{c,1}) N_{L/K}(I_L^c) \buildrel \sim \over \longrightarrow C_K/N_{L/K}(C_L)$, as required.
-In practice, suppose you want to compute the Artin map on an element of $J_K$: the algorithm is that you first multiply by a principal idele so that the resulting element is in $J_K^c$ times $N_{L/K}(C_L)$.
(You may not know exactly what this group is, but it's not hard to at least identify an open subgroup of it: for example, at any complex infinite place $v$ the norm map is surjective, at any real place $v$ the image of the norm map at least contains the positive reals, and at any finite place $\wp$ the image of the norm map will contain elements which are congruent to $1$ modulo the power of $\wp$ dividing the relevant modulus $c$.)
-Now the Artin map on $J_K^c$ factors through the surjection $J_K^c \to I_K^c$, and is computed on the target using Frobenius elements.
-Indeed, this was the argument via which local class field theory was originally proved; one took a local extension, embedded it into a global context (so that the original local situation was realized as $L_{\wp}/K_{\wp}$ for some abelian extension of number fields $L/K$), and then defined the Artin map via the above computation (which means concretely that one passes from the possibly ramified situation at $\wp$ to a consideration just at the unramified primes, where everything is easily understood just in terms of ideals and Frobenius elements).
-Of course, one then had to check that the resulting local Artin map was well-defined independent of the choice of "global context".
-Nowadays, one can define the local Artin maps at all places (unramified or ramified) first. However, in generalizations to the non-abelian situation (i.e. local and global Langlands) one generally uses the old-fashioned technique of proving certain global results first, and then establishing the precise local results by passing to a well-chosen global context and reducing to a calculation at unramified primes. (This is a bit of an oversimplification, but I think it is correct in spirit.) So (if one has an eventual aim of understanding modern algebraic number theory and the Langlands program) it is well worth understanding the passage between the idelic and ideal-theoretic view-points on class field theory, and practicing how to use the algorithm described above.<|endoftext|>
-TITLE: Zebra groups and counting stripes
-QUESTION [9 upvotes]: How many stripes can you paint on a $2$-group of fixed size?
-A group of order $2^ap^b$ is solvable, by Burnside's theorem, so its chief factors are either abelian $2$-groups or abelian $p$-groups. Such a group is a zebra group if its chief factors in any chief series alternate between the $2$ and the $p$. In other words, if you take a chain of normal subgroups $$1 = N_1 ⊲ N_2 ⊲ \cdots ⊲ N_n = G$$ of maximal length, then the quotient groups $N_{i+1}/N_i$ alternate between being abelian $2$-groups (the stripes) and abelian $p$-groups (the background).
-If we fix $a$, say $a=8$, then how many stripes can a zebra group have?
-Obviously no more than $8$ $2$-stripes, but with a little work one can see that it can have no more than $4$ $2$-stripes. Unfortunately, I'm having trouble getting even $3$ stripes.
-REPLY [4 votes]: 2 is the maximum number of stripes possible on a zebra group with a=8. S stripes require a ≥ C⋅9^S for large S.
-The number of stripes S of a zebra group gives bounds on the derived length D of a zebra group: 2S - 1 ≤ D ≤ 2S + 1. The action of G/H on a chief factor H/K must be irreducible and faithful, and so G/H embeds in GL(H/K) = GL(n,q) for q in {2,p} the prime dividing the order q^n of H/K. However, for large n, the maximal derived length of a solvable subgroup of GL(n,q) is about log(n-2)/log(9); in other words, n ≥ 9^D.
In particular, as S increases, the "a" from $2^a p^b$ increases exponentially.
-For small n, the maximal derived lengths of soluble subgroups of GL(n,2) are: 2, 3, 4, 4. The maximal derived lengths of irreducible soluble subgroups of GL(n,2) appear to be 2, 2, 3, 2, 6, 2, 5, 4, 4. Hence the minimum dimension for derived length D=2 is n=2, for D=3 it is n=4, for D=4 it is n=6, for D=5 it is n=6, for D=6 it is n=6, and for D≥7 it is n≥11.
-The top stripe has order at least 2^1. By the time H/K is the second stripe, G/H has derived length 2, and so H/K has order at least 2^2. By the time H/K is the third stripe, G/H has derived length 4, and so H/K has order at least 2^6. Since 1+2+6 = 9 > 8, this shows there is no three-striped zebra group with a Sylow 2-subgroup of order 2^8.
-Three stripes is attainable at a=9. Four stripes is not attainable for a≤14 and is attainable for a=27. Roughly speaking, S stripes require a ≥ C⋅9^S (though my proof needs S really large).<|endoftext|>
-TITLE: How many cubic curves are there?
-QUESTION [30 upvotes]: It is well-known that there is only one "kind" of line, and that there are three "kinds" of quadratic curves (the nature of which depends on the sign of a so-called "discriminant").
-It is noteworthy that many of the named cubic curves look rather similar: the folium of Descartes, the trisectrix of Maclaurin, the (right) strophoid, and the Tschirnhausen cubic look very similar in form; the semicubical parabola and the cissoid of Diocles resemble each other as well.
-I have deliberately placed the word "kind" in quotes since there does not seem to me to be an intuitive way of defining the term, so an answer to my question might have to define "kind" rigorously in the context of cubic curves. (An algebraic invariant, for instance... it is a pity that there does not seem to be an analogue of "eccentricity" for cubics!)
-Here, it is noted that Newton classified cubics into 72 "kinds", and Plücker after him described 219 "kinds".
-So, how does one algebraically distinguish one cubic curve from another, and with a rigorous definition of "kind", how many cubics are there?
-REPLY [14 votes]: In John Stillwell's Mathematics and its History it is observed that Euler criticized Newton's classification for lacking a general principle, but that a closer examination of Newton's work reveals one. In fact his work gives a general classification into 5 types, depicted on page 112 of that book.
-I believe the difference in the real and complex cases is the fact that the "discriminant" locus of singular cubics has real codimension one in the real case, hence separates the space of smooth cubics into distinct connected components, and these may be the types sought for. By Ehresmann's theorem, the members of a connected family of compact manifolds all have the same topological type. Thus curves on different connected components of the complement of the discriminant locus can have different homeomorphism types.
-At least one (two?) of Newton's types is also singular and could represent the general (and special) point of the discriminant locus. I am not an expert.<|endoftext|>
-TITLE: Probability that a random permutation has no fixed point among the first $k$ elements
-QUESTION [12 upvotes]: Is it true that $\frac1{n!} \int_0^\infty x^{n-k} (x-1)^k e^{-x}\,dx \approx e^{-k/n}$ when $k$ and $n$ are large integers with $k \le n$?
-This quantity is the probability that a random permutation of $n$ elements does not fix any of the first $k$ elements.
-REPLY [7 votes]: Update: This argument only holds for some cases. See italicized additions below.
-Let $S_{n,k}$ denote the number of permutations in which the first $k$ elements are not fixed. I published an expository paper on these numbers earlier this year. See "Deranged Exams" (College Mathematics Journal, 41 (3): 197-202, 2010). Aravind's formula is in the paper, as are several others involving $S_{n,k}$ and related numbers.
-Theorem 7 (which I also mention in this recent math.SE question) is relevant to this question. It's
-$$S_{n+k,k} = \sum_{j=0}^n \binom{n}{j} D_{k+j},$$
-where $D_n$ is the number of derangements on $n$ elements. See the paper for a simple combinatorial proof of this.
-Since $D_n$ grows as $n!$ via $D_n = \frac{n!}{e} + O(1)$ (see Wikipedia's page on the derangement numbers), if $k$ is much larger than $n$ then the dominant terms in the probability $\frac{S_{n+k,k}}{(n+k)!}$ are the $j = n$ and $j = n-1$ terms from the Theorem 7 expression. Thus we have
-$$\frac{S_{n+k,k}}{(n+k)!} \approx \frac{D_{n+k} + n D_{n+k-1}}{(n+k)!} \approx \frac{1}{e}\left(1 + \frac{n}{n+k}\right) \approx e^{-1} e^{\frac{n}{n+k}} = e^\frac{-k}{n+k},$$
-where the second-to-last step uses the first two terms in the Maclaurin series expansion for $e^x$.
-Again, this argument holds only for (in my notation) $k$ much larger than $n$.<|endoftext|>
-TITLE: Multiplicative inverses of formal series with non-negative coefficients
-QUESTION [16 votes]: What are the formal series $f$ with non-negative integer coefficients and constant term equal to $1$ whose multiplicative inverse $1/f$ has all coefficients, apart from finitely many, non-positive?
-In fact, assume the series converges in some disk if you want...
-REPLY [6 votes]: For example, $f(x) = (1 - a x)/(1 - b x)$ with $b > a > 0$ has all coefficients nonnegative while $1/f(x)$ has all coefficients nonpositive after the constant term.<|endoftext|>
-TITLE: How to express the whole part $\lfloor x \rfloor$ as analytical function or Taylor/Fourier series?
-QUESTION [5 votes]: And how to express $\{ x \} = x - \lfloor x \rfloor$ as a function of $\sin(x)$ and $\operatorname{sign}(x)$?
-REPLY [15 votes]: To express ⌊x⌋ in terms of a Fourier series for $x \in \mathbb{R}\setminus\mathbb{Z}$, let f(x) := x - ⌊x⌋, which is easier to compute the Fourier series for. Hence, start by showing that f(x) is periodic: given any integer $n \in \mathbb{Z}$,
-$$\begin{align}f(x) &= x - ⌊x⌋ \\&= x + n - n - ⌊x⌋\\
-&= x + n - ⌊x + n⌋ \\&= f(x+1\times n)\end{align}$$
-That is, f is periodic with fundamental period $T_0 = 1$, and $\omega_0 = \frac{2\pi}{T_0} = 2\pi$. In particular, $f(x)=x$ for $0 \leq x < 1$.
-The periodic function $f(x)$ can be expressed by the trigonometric Fourier series,
-$$\begin{align}f(x) &= a_0 + \sum_{m=1}^{\infty} (a_{m}\cos(m\omega_0 x) + b_{m}\sin(m\omega_0 x))\\
- &= a_0 + \sum_{m=1}^{\infty} (a_{m}\cos(2\pi mx) + b_{m}\sin(2\pi m x))\end{align}$$
-where the coefficients are calculated as follows:
-$$\begin{align}a_0 &= \frac{1}{T_0}\int_{}f(x)dx = \int_{0}^{1}x dx = \frac{1}{2}
-\\a_m &= \frac{2}{T_0}\int_{}f(x)\cos(m 2\pi x)dx = 0
-\\b_m &= \frac{2}{T_0}\int_{}f(x)\sin(m 2\pi x)dx = -\frac{1}{\pi m}\end{align}$$
-Hint. Use integration by parts...
-Plot of $f(x) = a_0 + \sum_{m=1}^{10} b_m\sin(2\pi m x)$ (summation from m = 1 to 10).
-Now that we have obtained the Fourier series representation of f(x) = x - ⌊x⌋, we get
-⌊x⌋ $ = x - f(x) = x - 0.5 + \frac{1}{\pi}\sum_{m=1}^{\infty} \frac{\sin(2\pi m x)}{m}$.
-A plot of ⌊x⌋ (using a finite number of terms as above)
-Notice also that this is valid only for $x \in \mathbb{R}\setminus\mathbb{Z}$.
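-(Numerically, the truncated series indeed tracks ⌊x⌋ away from the integers; a small sketch, assuming NumPy is available, with a helper name of my own:)
-    import numpy as np
-
-    def floor_series(x, terms=2000):
-        # x - 1/2 + (1/pi) * sum_{m=1}^{terms} sin(2*pi*m*x)/m
-        m = np.arange(1, terms + 1)
-        return x - 0.5 + np.sin(2 * np.pi * np.outer(x, m)) @ (1 / m) / np.pi
-
-    x = np.array([0.5, 1.25, 2.75, -0.5, 3.1])
-    print(floor_series(x))  # close to [0, 1, 2, -1, 3]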
The restriction is needed because, although ⌊x⌋ is continuous on $\mathbb{R}\setminus\mathbb{Z}$, it has a jump of one unit at each integer.
-⌊x⌋ can also be represented as a summation of shifted unit step functions u(x) = Sign(x)/2 + 0.5; in particular,
-$$⌊x⌋ = \sum_{m=1}^{\infty}(u(x-m)-u(-x-m+1))$$<|endoftext|>
-TITLE: Average projected area of an ellipsoid
-QUESTION [5 votes]: Consider an ellipsoid of semi-axes a, b, c (possibly prolate, b=c). I am interested in the "shadow" of this solid onto a distant plane, in a given direction d=(k,l,m) orthogonal to that plane. By shadow I mean the projected area onto the plane: each point on the surface of the ellipsoid is translated in the same direction until it intersects with a plane normal to it; the shadow is defined by the envelope of the intersection points.
-First question: Is the projected curve an ellipse? (And what is its equation in terms of a, b, c and the direction vector d?)
-Second question: What is the mean area of this shadow when averaged over all orientations of the ellipsoid (or, equivalently, the plane of projection)?
-I'm guessing this problem has been solved in the past; any references would be very welcome.
-Thanks,
-baptiste
-REPLY [3 votes]: I know this question has been answered long ago, but I wanted to provide you with a decent paper (albeit astrophysical in nature), since it does deal with the projections of tri-axial ellipsoids (with arbitrary semi-axes and orientation) onto a plane.
-Binggeli et al. 1980.
-In summary:
-The answer to your first question is yes. Quantities constant on ellipsoids (e.g. isodensities (same mass) or isophotes (same light)) project to ellipses.
-The perceived axis ratio (minor to major) of your new ellipse will be the following:
-$$Q = \sqrt{\frac{j+l - \sqrt{(j-l)^{2}+4k^{2}}}{j+l+\sqrt{(j-l)^{2}+4k^{2}}}} $$
-where j, k, and l are geometric functions dependent upon the intrinsic axis ratios and the orientation angles with respect to the line of sight of your ellipsoid:
-$$j = q^{2}\sin^{2}\theta+p^{2}\sin^{2}\phi\cos^{2}\theta+\cos^{2}\phi\cos^{2}\theta $$
-$$l = p^{2}\cos^{2}\phi+\sin^{2}\phi $$
-$$k=(1-p^{2})\sin\phi\cos\phi\cos\theta $$
-Lots of transcription here - I'd definitely double-check this in the paper. p is the intermediate-to-major axis ratio, q is the minor-to-major axis ratio, and theta and phi define the observer's orientation relative to the intrinsic coordinate system of the ellipsoid (meaning the coordinate system is aligned along the semi-axes).
-A few other notes to avoid confusion with the notation: 1) Prolate spheroids mean p=q, and 2) Oblate spheroids mean p=1, with q less than p in value.
-Sorry that this is a little more physics-y than math-y. I understand this is a mathematics stack exchange site. I hope this helps!<|endoftext|>
-TITLE: What is the mathematical symbol for the unique values in a vector?
-QUESTION [5 votes]: I am looking for a symbol to represent the operation of taking unique values from a vector. So, say the symbol was $\theta$:
-$v = [0, 0, 0, 1, 1, 1, 3, 1, 2, 0]$
-$\theta(v) = [0, 1, 3, 2]$
-Or is this something that just isn't defined?
-Thanks
-EDIT
-Thanks all for your help here. Your answers and comments have led me to realise that doing this operation on a vector is not really what I want to do; using a set makes much more sense logically.
-(Background: I have many coordinates $\boldsymbol{p}_i = [x_i, y_i, z_i]$ in 3D space, which I then perform a rounding function on so they are equally sampled.
However, there will be many duplicate points at each new coordinate, so I remove the duplicated $\boldsymbol{p}$ points to get the final set of points. So it makes sense to define a set $\boldsymbol{P} = \{\boldsymbol{p}_1, \ldots, \boldsymbol{p}_i, \ldots, \boldsymbol{p}_n\}$.)
-REPLY [2 votes]: There is no standard notation for the set of coefficients of a vector. Indeed, there's not even any standard notation for the set of coefficients of a polynomial (which would be useful when taking the content, i.e. the gcd / ideal generated by the coefficients).<|endoftext|>
-TITLE: What is the second principle of finite induction?
-QUESTION [22 upvotes]: I understand the principle of finite induction, but my book then mentions that there is a variant of the first where requirement b is changed to
-If $k$ is a positive integer such that $1,2, \dots, k$ belong to $S$, then $k + 1$ must also be in $S$.
-The sample problem is proving the inequality about the Lucas numbers, $l_n < (7/4)^n$.
-I understand the algebra of the proof, but I do not understand how it uses the second principle or how it is different from the first principle.
-The proof for the Lucas numbers:
-$$
-a_1 = 1 < \left(\frac{7}{4}\right)^1 = \frac{7}{4}\text{ and }a_2 = 3 \lt \left(\frac{7}{4}\right)^2 = \frac{49}{16}.$$
-Choose an integer $k\geq 3$ and assume that the inequality is valid for $n = 1,2,\ldots,k$. Then
-$$a_{k-1} \lt \left(\frac{7}{4}\right)^{k-1}\text{ and }a_{k-2} \lt \left(\frac{7}{4}\right)^{k-2},$$
-so
-\begin{align*}
-a_k &= a_{k-1} + a_{k-2}\\
-&\lt \left(\frac{7}{4}\right)^{k-1} + \left(\frac{7}{4}\right)^{k-2}\\
-&= \left(\frac{7}{4}\right)^{k-2}\left(\frac{7}{4} + 1\right)\\
-&= \left(\frac{7}{4}\right)^{k-2}\left(\frac{11}{4}\right)\\
-&\lt \left(\frac{7}{4}\right)^{k-2}\left(\frac{7}{4}\right)^2\\
-&= \left(\frac{7}{4}\right)^k
-\end{align*}
-The responses are all helpful, but could someone point out what the difference is when you go about actually using strong induction instead of weak induction? (You could use the Lucas number proof or something else.)
-REPLY [7 votes]: There is a common misconception that strong induction is somehow "stronger" than normal induction, but it isn't really. Strong induction is just the special case of normal induction in which the induction hypothesis $P(n)$ takes the form "for all $m < n$, $Q(m)$". Thus strong induction is actually a limited form of the principle of induction, because not all induction hypotheses are of that special form. On the other hand, any proof by strong induction can be trivially rephrased as a proof by "weak" induction.
-One reason for the terminological difficulty is that the only place that people talk about "strong induction" is in introductory courses. There, "use strong induction" can be a hint about what sort of induction hypothesis to choose. In these classes, the principle of induction is stated somewhat informally, so it may not be obvious what counts as a legal induction hypothesis in the first place.
-In contexts where the principle of induction is stated formally, e.g. Peano arithmetic, nobody talks about strong induction at all. The axiom scheme for induction in Peano arithmetic simply includes all axioms of the form
-$$
-\left(P(0) \land (\forall n)(P(n) \to P(n+1))\right) \to (\forall n)P(n)
-$$
-where $P$ is any formula of Peano arithmetic.
This includes formulas of the form $(\forall m)(m < n \to Q(m))$ as well as all other formulas.<|endoftext|>
-TITLE: Seating friends around a dinner table
-QUESTION [14 upvotes]: This problem came from a Putnam problem solving seminar.
-If each person in a group of n people is a friend of at least half the people in the group, then show that it is possible to seat the n people in a circle so that everyone sits next to friends only.
-My idea was to use induction on $n$; if $n$ is odd we can remove a person, note that the remaining $n-1$ people all have at least $(n-1)/2$ friends left, use our inductive hypothesis to seat them, and then use the pigeonhole principle to seat the last person.
-Unfortunately, this doesn't work when $n$ is even because after removing a person, some of the remaining people might have fewer than $(n-1)/2$ friends left. In fact, my friend who actually participated in the seminar said they had the same issue, but didn't address it because they ran out of time.
-Is this sort of induction a reasonable approach? If so, how would we deal with the case when $n$ is even? If not, what's a better way to think about the problem?
-P.S. I'm not sure of the best tags for this question, so please feel free to re-tag if necessary.
-REPLY [2 votes]: http://www.math.uri.edu/~eaton/GT4F03.pdf has a simple proof (Theorem 6).<|endoftext|>
-TITLE: Isoperimetric inequality implies Wirtinger's inequality
-QUESTION [13 upvotes]: Let $C: x=x(t), y=y(t), a\le t\le b$ be a $C^1$ closed curve (not necessarily simple). The isoperimetric inequality says that
- $$ A\le \frac{\ell^2}{4\pi},$$ where $$A=\left|\int_C y(t)x'(t) dt\right|$$ is the area enclosed by $C$, and
-$\ell=\int_a^b \sqrt{(x'(t))^2+(y'(t))^2} dt$ is the arc length of $C$.
-My question is how to use this theorem to prove Wirtinger's theorem: If $f(t)$ is a $T$-periodic $C^1$ real-valued function such that
-$\int_0^T f(t) dt=0,$ then $$\int_0^T |f(t)|^2 dt\le \frac{T^2}{4\pi^2}\int_0^T |f'(t)|^2 dt.$$
-REPLY [7 votes]: First, note that if we re-parametrize via $t = ks$, we have, by the chain rule, $\frac{d}{ds}f = k (\frac{d}{dt}f)\circ t$. Choose $k = T/(2\pi)$; then the change of variables shows that it is sufficient to prove the claim for the period being $2\pi$. That is, if we can show for any $2\pi$-periodic function with mean 0 that
-$$ \int_0^{2\pi} f^2 ds \leq \int_0^{2\pi} |f'|^2 ds $$
-we'll be done by a re-scaling argument.
-Since $f$ has mean 0, you can write $f = F'$ for $F$ another $2\pi$-periodic function. So the isoperimetric inequality implies
-$$ \left|\int_0^{2\pi} f F' ds \right| \leq \frac{1}{4\pi} \left( \int_0^{2\pi} \sqrt{ (F')^2 + (f')^2 } ds \right)^2 $$
-or
-$$ \int_0^{2\pi} f^2 ds \leq \frac{1}{4\pi} \left( \int_0^{2\pi} \sqrt{f^2 + (f')^2} ds \right)^2 $$
-Now use Hölder's inequality on the finite interval $[0,2\pi]$; we get
-$$ \int_0^{2\pi} \sqrt{f^2 + (f')^2}\, ds \leq \sqrt{2\pi} \left( \int_0^{2\pi} \left(f^2 + (f')^2\right) ds \right)^{1/2} $$
-So we get
-$$ \int_0^{2\pi} f^2 ds \leq \frac{1}{2}\int_0^{2\pi} f^2 + (f')^2 ds $$
-which, subtracting $\frac12 \int f^2$ from both sides, yields the desired inequality.
-Now, a short remark on why it is necessary to first use the scaling argument. The principle is the following: Wirtinger's inequality, as discussed in the first paragraph above, is scale invariant: changing the scale of the parameter $t$ changes the terms $f^2$ and $T^2 (f')^2$ equally.
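-(As an aside, the $2\pi$-periodic inequality just proven is easy to sanity-check numerically. A minimal sketch, assuming NumPy, using random mean-zero trigonometric polynomials:)
-    import numpy as np
-
-    rng = np.random.default_rng(1)
-    t = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
-    for _ in range(5):
-        a = rng.normal(size=5)  # cosine coefficients
-        b = rng.normal(size=5)  # sine coefficients
-        f = sum(a[k - 1] * np.cos(k * t) + b[k - 1] * np.sin(k * t) for k in range(1, 6))
-        fp = sum(k * (b[k - 1] * np.cos(k * t) - a[k - 1] * np.sin(k * t)) for k in range(1, 6))
-        # the means approximate (1/2pi) times the integrals over [0, 2pi]
-        assert np.mean(f ** 2) <= np.mean(fp ** 2) + 1e-9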
-The isoperimetric inequality, however, is not scale invariant in the same way: using the ansatz where $x$ corresponds to $F$ and $y$ corresponds to $F' = f$, you see that a change of parametrization $t \to ks$ will leave $x$ the same while changing $y$ by a multiplicative factor.
-In particular, this means that, depending on the scale, the inequality may not be sharp. In other words, the changing of scale corresponds, morally speaking, to changing the $x$ and $y$ directions in different proportions. So if we start out with a circle, which is a maximizer of the isoperimetric inequality, after this funny reparametrisation which scales $x$ and $y$ differently, we end up with an ellipse, which no longer is a maximizer.
-The change of scale in the first paragraph allows us to "use a version of the isoperimetric inequality as close to the circle version as possible". In other words, by isolating the scale you can most efficiently use the isoperimetric inequality, which then allows for a simple proof of Wirtinger's inequality.<|endoftext|>
-TITLE: Effects of condensing a random variable to only 2 possible values
-QUESTION [6 upvotes]: $X$ is a random variable, which is not constant. $E[X]=0$. $E[X^4] \leq 2(E[X^2])^2$.
-Let $Y$ be given by: $P(Y=E[X|X \geq 0]) = P(X \geq 0)$ and $P(Y=E[X|X \lt 0]) = P(X \lt 0)$.
-Do we necessarily have $E[Y^4] \leq 2(E[Y^2])^2$?
-REPLY [3 votes]: No. Here is a counterexample. Define $X$ as follows. ${\rm P}(X=a) = p_1$, ${\rm P}(X=0) = p_2$, and ${\rm P}(X=\frac{{ - ap_1 }}{{1 - p_1 - p_2 }}) = 1-p_1-p_2$,
-where $a$ is a positive constant. Then, ${\rm E}(X)=0$. Denote ${\rm E}(X^4)-2 [{\rm E}(X^2)]^2$ by $\xi$. Then,
-$$
-\xi = a^4 p_1 + \Big(\frac{{ap_1 }}{{1 - p_1 - p_2 }}\Big)^4 (1 - p_1 - p_2 ) - 2\Big[a^2 p_1 + \Big(\frac{{ap_1 }}{{1 - p_1 - p_2 }}\Big)^2 (1 - p_1 - p_2 )\Big]^2.
-$$
-To find ${\rm E}(Y^4)-2 [{\rm E}(Y^2)]^2$, which we denote by $\eta$, we first find
-$$
-{\rm E}[X|X \ge 0] = a{\rm P}(X = a|X \ge 0) = a\frac{{{\rm P}(X = a)}}{{{\rm P}(X \ge 0)}} = \frac{{ap_1 }}{{p_1 + p_2 }}
-$$
-and
-$$
-{\rm E}[X|X < 0] = \frac{{ - ap_1 }}{{1 - p_1 - p_2 }}.
-$$
-Hence, by definition, ${\rm P}(Y = \frac{{ap_1 }}{{p_1 + p_2 }}) = p_1 + p_2 $ and ${\rm P}(Y = \frac{{ - ap_1 }}{{1 - p_1 - p_2 }}) = 1 - p_1 - p_2$.
-Thus,
-$$
-{\rm E}(Y^i) = \Big(\frac{{ap_1 }}{{p_1 + p_2 }}\Big)^i (p_1 + p_2) + \Big(\frac{{ap_1 }}{{1 - p_1 - p_2 }}\Big)^i (1 - p_1 - p_2),
-$$
-from which we get an explicit expression for $\eta$. Now, to furnish a counterexample, it suffices to find a triple $(a,p_1,p_2)$ for which $\xi \leq 0$ and $\eta > 0$. This is most easily done using a computer. Here is a concrete example.
-Letting $a=4$, $p_1=0.36$, and $p_2=0.4$, we have $-ap_1/(1-p_1-p_2) = -6$. Here,
-$$
-a^4 p_1 + \Big(\frac{{ap_1 }}{{1 - p_1 - p_2 }}\Big)^4 (1 - p_1 - p_2 ) = 403.2
-$$
-and
-$$
-2 \Big[a^2 p_1 + \Big(\frac{{ap_1 }}{{1 - p_1 - p_2 }}\Big)^2 (1 - p_1 - p_2 )\Big]^2 = 414.72,
-$$
-so ${\rm E}(X^4) \leq 2 [{\rm E}(X^2)]^2$; on the other hand,
-$$
-\Big(\frac{{ap_1 }}{{p_1 + p_2 }}\Big)^4 (p_1 + p_2) + \Big(\frac{{ap_1 }}{{1 - p_1 - p_2 }}\Big)^4 (1 - p_1 - p_2) \approx 320.835
-$$
-and
-$$
-2\Big[\Big(\frac{{ap_1 }}{{p_1 + p_2 }}\Big)^2 (p_1 + p_2) + \Big(\frac{{ap_1 }}{{1 - p_1 - p_2 }}\Big)^2 (1 - p_1 - p_2)\Big]^2 \approx 258.482,
-$$
-so ${\rm E}(Y^4) > 2 [{\rm E}(Y^2)]^2$.<|endoftext|>
-TITLE: inscribed simplex.
-QUESTION [5 votes]: Suppose I have a simplex with $(n+1)$ vertices inscribed in a hypersphere of diameter $d$.
I have a point $x$ inside this simplex; is it true that the distance from $x$ to the vertex nearest to $x$ is not greater than $\frac{d}{2}$?
-REPLY [7 votes]: Center the hypersphere at the origin, and let $h$ be the (non-negative!) distance between the point $P$ and the center. We can position our hypersphere so that $P$ lies on the $x$ axis with $x$-coordinate $-h$. I'll write $r$ for $d/2$, the radius of the hypersphere.
-Consider the case $n=3$.
-Suppose that the distance from a vertex, $V(x,y,z)$, to $P$ is strictly greater than $r$. Then
-$$\text{dist}(V,P)^2=(x+h)^2+y^2+z^2>r^2$$
-But $V$ is on the (hyper)sphere, so
-$$x^2 + y^2 + z^2 = r^2$$
-Therefore,
-$$\begin{align}
-2hx+h^2 &> 0 \\
-\Rightarrow h(x+\frac{h}{2}) &> 0
-\end{align}$$
-We see, then, that our non-negative $h$ must be strictly positive, and that $x > -\frac{h}{2} > -h$.
-Now, if ALL vertices are further than $r$ away from $P$, then the above shows that $P$ is separated from those vertices --and the simplex they determine-- by the (hyper)plane $x=-\frac{h}{2}$. The point cannot lie inside the simplex.
-Consequently, for a point within the simplex, the distance to at least one vertex (in particular, the closest) must be no greater than $r$. In the case where $P$ lies at the center of the hypersphere, the distance to any vertex is exactly $r$, so this upper bound on the minimum distance cannot be improved.<|endoftext|>
-TITLE: An Eisenstein-like irreducibility criterion
-QUESTION [12 upvotes]: I could use some help with proving the following irreducibility criterion. (It came up in class and got me interested.)
-Let p be a prime. For an integer $n = p^k n_0$, where p doesn't divide $n_0$, set $e_p(n) = k$. Let $f(x) = a_n x^n + \cdots + a_1 x + a_0$ be a polynomial with integer coefficients. If:
-$e_p(a_n) = 0$,
-$e_p(a_i) \geq n - i$, where $i = 1, 2, \ldots, n-1$,
-$e_p(a_0) = n - 1$,
-then f is irreducible over the rationals.
-Reducing mod p and mimicking the proof of Eisenstein's criterion doesn't cut it (I think). I also tried playing with reduction mod $p^k$, but got stuck since $Z_{p^k}[X]$ is not a UFD.
-Also, does this criterion have a name?
-REPLY [6 votes]: Apply Eisenstein's criterion to ${1 \over p^{n-1}}x^nf({p \over x})$.<|endoftext|>
-TITLE: Do we have a proof of the infiniteness?
-QUESTION [8 upvotes]: Crossposted on Mathoverflow.
-Given a natural number $a$, are there infinitely many natural numbers not of the form $anm \pm m \pm n$, with $n, m$ positive natural numbers?
-I give a proof that for $a=6$ the question is equivalent to the twin prime conjecture, so it is known that we don't have any proof. But what about other values of $a$?
-There are infinitely many twin primes if and only if there are infinitely many natural numbers that are not of the form $6nm \pm n \pm m$.
-Proof: Every number that is not a multiple of $2$ or $3$ is of the form $6N\pm 1$. So the only pairs of numbers differing by $2$ with neither member divisible by $2$ or $3$ are $(6N-1,6N+1)$ for any $N$. Now are there infinitely many such prime pairs (twin primes)?
-If the number $6N-1$ is prime, it should not be written as a product of some numbers $6n+1,6m-1$ for any $n,m > 0$. So $(6n+1)(6m-1)=6(6nm-n+m)-1$, which means that $N$ should not be of the form $6nm-n+m$ for any $n,m>0$.
-Similarly, if $6N+1$ is a prime, it should not be a product of some numbers $(6n-1)(6m-1) =6(6nm-n-m)+1$, or $(6n+1)(6m+1) =6(6nm+n+m)+1$. This means that we have a prime couple of the form $(6N-1,6N+1)$ if and only if $N$ is not of the form $6nm \pm n \pm m$ for any $n,m$.
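-(A brute-force check of this equivalence for small $N$; a sketch assuming SymPy's isprime is available, with a helper name of my own:)
-    from sympy import isprime
-
-    def representable(N, a=6):
-        # Is N = a*n*m + s*n + t*m for some n, m >= 1 and signs s, t in {+1, -1}?
-        for n in range(1, N + 1):
-            for m in range(1, N + 1):
-                base = a * n * m
-                if base - n - m > N:
-                    break  # the smallest variant already exceeds N; larger m only grows
-                if N in (base + n + m, base + n - m, base - n + m, base - n - m):
-                    return True
-        return False
-
-    for N in range(1, 200):
-        twin = isprime(6 * N - 1) and isprime(6 * N + 1)
-        assert twin == (not representable(N))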
-
-REPLY [8 votes]: Added: Unfortunately, the answer below as originally written was wrong. It answered a related, but easier, question. I have added some bracketed remarks to point out where the blunder happens, and appended some remarks about the expected answer to the actual question.
-It might help to explicitly reformulate the question. (This is implicit in the proof that the question is equivalent to the twin prime conjecture for $a = 6$, but it seems worthwhile to make it explicit.)
-If $k = a m n \pm m \pm n,$ then
-$a k \pm 1 = (a m \pm 1) (a n \pm 1).$
-If $a = 2$ then we can write $1 = (2\cdot 1 - 1)$, and so for any $k$, we have $2 k - 1 = (2 \cdot 1 - 1)(2 k -1)$, and (as the OP noted) every integer $k$ can be written in the desired form. Thus we suppose from now on that $a > 2$.
-The problem is then equivalent to asking if we can find infinitely many numbers congruent to $\pm 1 \bmod a$ which have no proper factors congruent to $\pm 1 \bmod a$. [Added: This is wrong. The problem is rather to find infinitely many multiples $a k$ of $a$ so that both $a k +1 $ and $a k - 1$ have no proper factors congruent to $\pm 1 \bmod a$.]
-This shows that the answer to the question depends a lot on whether or not every residue class mod $a$ which is coprime to $a$ is congruent to $\pm 1 \bmod a$, i.e. on whether or not $\varphi(a) = 2$. [Added: It is not so clear to me now what difference the condition on $\varphi(a)$ makes, if any.]
-If $\varphi(a) = 2$, then we are simply asking whether we can find infinitely many numbers coprime to $a$, which differ by $2$, and which have no proper factors, i.e. we are asking for infinitely many twin primes. This is a well-known open problem!
-On the other hand, suppose that $\varphi(a) > 2$ (which in particular is the case if $a > 6$). We may then choose $b$ and $c$ so that $b, c \not\equiv \pm 1 \bmod a$, and $b c \equiv 1 \bmod a$. By Dirichlet's theorem, we may choose infinitely many primes $p$ and $q$ such that $p \equiv b \bmod a$ and $q \equiv c \bmod a$.
-For any such $p$ and $q$, the product $pq \equiv 1 \bmod a$, but by construction has no proper factors congruent to $\pm 1 \bmod a$. So if we set $k = (p q - 1)/a$, then $k$ cannot be written in the given form. This gives infinitely many $k$, as desired.
-[Added: Rather, this gives infinitely many $k$ that are not of the form $a m n + m + n$ or $a m n - m - n$, but doesn't address whether these can be written in the form $a m n - m + n$, since we don't have any control over $ak - 1 = pq - 2$.]
-Added:
-It is not clear to me how to gain control over the factors of $p q - 2$, other than to assume it is prime. So one way to solve the problem would be to find infinitely many primes $r \equiv -1 \bmod a,$ such that $r + 2$ is either prime, or is of the form $r + 2 = p q,$ where $p,q \not\equiv \pm 1 \bmod a$.
-This is then related to Chen's theorem, indeed is a strengthening of it. Thus we are led to MO Scribe's question on MO, which was inspired by the initial posting of the present question there.
-There is no doubt that this result (on the existence of such primes $r$) is true. (Indeed, standard conjectures predict that there should be infinitely many twin primes $r, r+2$ with $r\equiv -1 \bmod a$.) But it is not clear (to me) whether or not it is in reach of current methods; that is the thrust of MO Scribe's question.
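-(Primes $r$ of this kind are easy to find numerically, which supports the expectation expressed below; a sketch assuming SymPy, with a hypothetical helper name of my own:)
-    from sympy import isprime, factorint
-
-    def good_r(a, limit):
-        # primes r = -1 (mod a) with r + 2 prime, or r + 2 = p*q
-        # where neither p nor q is congruent to +-1 (mod a)
-        found = []
-        for r in range(a - 1, limit, a):
-            if not isprime(r):
-                continue
-            factors = [p for p, e in factorint(r + 2).items() for _ in range(e)]
-            if len(factors) == 1 or (len(factors) == 2 and
-                    all(p % a not in (1, a - 1) for p in factors)):
-                found.append(r)
-        return found
-
-    print(good_r(8, 500))  # e.g. [7, 23, 31, 71, ...]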
-So what is the conclusion? Well, there is no doubt that one should be able to find infinitely many such $k$, since it follows from standard conjectures on twin primes satisfying congruence conditions. On the other hand, proving this may be tricky, since it seems to require results that are at the edge of what is currently possible via sieving techniques.<|endoftext|>
-TITLE: Dedekind Cuts versus Cauchy Sequences
-QUESTION [11 upvotes]: Are there any advantages or disadvantages in defining a real number in the following ways:
-Definition 1
-A real number is an object of the form $\lim\limits_{n \to \infty} a_n$ where $(a_n)_{n=1}^{\infty}$ is a Cauchy sequence of rational numbers.
-Definition 2 A real number is a cut in $\mathbb{Q}$.
-REPLY [8 votes]: In the real numbers case, the two constructions give isomorphic results, as kahen said in his answer.
-If you look at this fact closely enough, you realize that this is because the backbone of the real line is actually the integers, which have no accumulation points within themselves; that is, if a sequence of integers converges, it has to be constant from some point on.
-This fact, while it seems unrelated, helps us avoid the following case:
-Consider some dense linear order (like the rationals) only bigger. For example, put two copies of the rationals one on top of the other; of course this can be embedded back into the rationals themselves, but consider the Dedekind closure of this order. It would have a point which is the exact point at which you switch from the first copy of the rationals to the second one. (This might not be such a good example; if you have a better example - this post is CW for this very reason.)
-So now let us try and prove something (together; I got stuck halfway through, and although the intuition is clear to me, I'm uncertain how to put it into words - and this might as well be complete rubbish too.)
-Theorem: Suppose $F$ is an ordered field (so it has characteristic 0, and we can assume $\mathbb{Q}\subseteq F$ without loss of generality) such that its Cauchy completion (denoted by $C(F)$) and Dedekind completion (denoted by $D(F)$) both form a field - and it is the same field (namely there is a ring isomorphism between them).
-Then $F\subseteq\mathbb{R}$.
-Proof: Suppose not. Since $\mathbb{Q}\subseteq F$ then $\mathbb{R}\subseteq C(F)$ (and from our assumption, $\mathbb{R}\subseteq D(F)$ as well - they are isomorphic). Now let $a\in C(F)\setminus\mathbb{R}$ (without loss of generality take $a>0$); that is, $a$ is not a Cauchy limit of rational sequences, meaning every sequence $a_n\to a$ has only finitely many rational elements, so again without loss of generality $a_n$ is not rational for all $n$. Since $D(F)$ is an ordered field, so is $C(F)$. If there were $r\in\mathbb{R}$ such that $a<r$, then $a$ would be squeezed between real numbers and hence would be a Cauchy limit of rationals; so $a>r$ for any real number $r$.
-Once more, we use the isomorphism: $a$ defines a cut, which means the set $A = \{x\in F\mid x>q,\ \forall q\in\mathbb{Q}\}$ is non-empty. Take $z$ to be the cut defined as the number between $\mathbb{Q}$ and $A$ (usually denoted by $z=\langle \mathbb{Q}\big| A\rangle$). Take some $\epsilon>0$; then $z-\epsilon$ is already smaller than some rational number, and therefore can be expressed as a limit of rational numbers, ergo $z-\epsilon\in\mathbb{R}$.
-Take $z_n \to z$ some Cauchy sequence; since $z$ cannot be expressed as a real number, we can assume $z_n$ is not rational for all $n$, and therefore even if we take real numbers we cannot express $z$ as a Cauchy limit of such real numbers. However, for all $\epsilon>0$ we can express $z-\epsilon$ as a Cauchy limit of real numbers, so we can choose some sequence that would converge to $z$ Cauchy-wise, which is a contradiction.
-I am quite certain that I made a big mess in this proof, and would like it very much if someone would clean it up a bit. It should probably be easier to prove it by some model-theoretic method, or maybe by using equivalent topologies of order and metric. I'm not sure.
-That being said, I completely resent the whole idea that we base metric spaces on the real line and then say there are no metric-complete ordered fields except $\mathbb{R}$.<|endoftext|>
-TITLE: Is every Lebesgue measurable function on $\mathbb{R}$ the pointwise limit of continuous functions?
-QUESTION [54 upvotes]: I know that if $f$ is a Lebesgue measurable function on $[a,b]$ then there exists a continuous function $g$ such that $|f(x)-g(x)|< \epsilon$ for all $x\in [a,b]\setminus P$, where the measure of $P$ is less than $\epsilon$.
-This seems to imply that every Lebesgue measurable function on $\mathbb{R}$ is the pointwise limit of continuous functions. Is this correct?
-REPLY [4 votes]: This is a comment adding to the discussion following the selected answer, but it's a long comment, so I'm putting it here.
-OP asked this second question: "can I conclude that every Lebesgue measurable function is the pointwise limit of continuous functions a.e.?"
-Remark 0. A measurable function defined on the whole real line can be transformed into one that is defined on just the open interval (0,1), by mapping the domain $\mathbb R$ to the new domain (0,1). Therefore we only need to consider measurable functions defined on intervals.
-Remark 1. Given a sequence of functions $f_n$ on $I = [a,b]$ such that it gets closer and closer to $f$ in the sense that $|f_n(x) - f(x)| < \frac{1}{n}$ holds for all $x$ on $I$ minus a set of measure $< \frac{1}{n}$, it does NOT follow that $f$ is the a.e. pointwise limit of $f_n$.
-Remark 2. Given a sequence of functions $f_n$ on $I = [a,b]$ such that it gets closer and closer to $f$ in the sense that $|f_n(x) - f(x)| < 2^{-n}$ holds for all $x$ on $I$ minus a set of measure $< 2^{-n}$, it DOES follow that $f$ is the a.e. pointwise limit of $f_n$. This is an easy consequence of the Borel-Cantelli lemma.
-Borel-Cantelli lemma on $\mathbb R$: If $E_n$ is a sequence of (measurable) subsets of $\mathbb R$ with rapidly decreasing measure in the sense that $\sum_n \lambda(E_n) < \infty$, then for all $x$ except on a null set, $x$ belongs to $E_n$ for only finitely many values of $n$.
-Proof: By abuse of notation, write $E_n$ also for its indicator function, and consider the function $\sum E_n$. The integral of this function is finite, therefore the function is a.e. finite.
-To prove Remark 2, just set $E_n$ to be the exception set of measure $< 2^{-n}$.
-See the Wikipedia article on convergence in measure.
-Remark 3. If the sequence of functions $f_n$ is such that $|| f_n - f ||_1 < 2^{-n}$, then it also follows that $f$ is the a.e. pointwise limit of $f_n$. (Proof: To show that the measure of $E_n$ = $\{x \in I : |f_n(x)-f(x)| \ge \epsilon \}$ is rapidly decreasing, use Markov's inequality.) Now you see there's a pattern. It's that fast convergence implies a.e. pointwise convergence.
-Remark 4. One might say that Remark 3 answers the second question only for $L^1$ functions, but any measurable function can be transformed into a bounded function by transforming the codomain $(-\infty, +\infty)$ to the bounded interval $(-1,1)$, and the second problem is invariant under this transform.
-Remark 5. If we define $f_n$ to be the convolution of $f$ with the indicator function of $[-\frac{1}{n}, +\frac{1}{n}]$ times $\frac{n}{2}$, then $f_n$ is a sequence of continuous functions converging to $f$ a.e. if $f$ is integrable. See Lebesgue differentiation theorem.
-Remark 6.
-Remark 6. The second principle from Littlewood's three principles of real analysis says that any measurable function on $I$ is approximately continuous, and Luzin's theorem is an instance of that principle, but I have always felt that other instances, such as "any measurable function on $I$ can be approximated by continuous functions in the sense of convergence in measure" or "any measurable $L^1$ function on $I$ can be approximated by continuous functions in the sense of $L^1$ distance", are better instances because they are easier to work with. Easier to remember as well.<|endoftext|>
-TITLE: Why is the remainder function $R_{n}(x)$ decreasing?
-QUESTION [8 upvotes]: When solving questions like these:
-
-Let $f(x)$ be a real function. Find $f(0.1)$ using its Taylor expansion such that the error is less than $10^{-3}$. Find the lowest degree of Taylor polynomial needed.
-
-(if someone can rephrase the question so it's more clear that would be great. I didn't quite nail the translation to English)
-We were taught that usually the process of solving this is finding some degree that works. Then, depending on how 'far' you are from the error term, you start trying lower degrees. When some degree doesn't work anymore, you say the one before it is the minimal one.
-I was wondering, though, why the implicit assumption that the remainder function is decreasing? i.e. if degree $k$ doesn't suffice, $1, 2, ..., k-2, k-1$ won't work either. Our professor said that we can rely on this because most functions we're dealing with comply with this property. Why is that?
-
-REPLY [5 votes]: I presume you have a theorem like this:
-
-Suppose $f \in C^{n+1}(I)$ for some open interval $I$. Let $a,b \in I$ and $M$ be numbers such that $|f^{(n+1)}(t)| \leq M$ for all $t$ between $a$ and $b$. Then we have that
- $$ |R_n f(b)| \leq \frac{M}{(n+1)!} |b-a|^{n+1}.$$
-
-Obviously you'll want $|b-a| < 1$, which you can almost always arrange. Now if $f$ is analytic (and thus equal to its own Taylor series), we can use the third of these alternate characterizations to see that the approximations get better. This should give you a good idea of how the approximations might get worse.
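-To make this concrete, here is a worked instance of the sort of exercise in the question (the specific numbers are mine, not the OP's): take $f(x)=e^x$, $a=0$, $b=0.1$. On $[0,0.1]$ we may take $M = e^{0.1} < 1.2$, so $$|R_n f(0.1)| \le \frac{1.2\,(0.1)^{n+1}}{(n+1)!}.$$ For $n=1$ the bound is $0.006$, which is not small enough, while for $n=2$ it is $2\times 10^{-4} < 10^{-3}$; so degree $2$ suffices, and since the bound already fails at $n=1$, degree $2$ is what the procedure described above returns as the minimal degree.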
<|endoftext|>
-TITLE: Sample Standard Deviation vs. Population Standard Deviation
-QUESTION [54 upvotes]: I have an HP 50g graphing calculator and I am using it to calculate the standard deviation of some data. In the statistics calculation there is a type which can have two values:
-Sample
-Population
-I didn't change it, but I kept getting the wrong results for the standard deviation. When I changed it to "Population" type, I started getting correct results!
-Why is that? As far as I know, there is only one type of standard deviation, which is to calculate the root-mean-square of the deviations from the mean!
-Did I miss something?
-
-REPLY [100 votes]: There are, in fact, two different formulas for standard deviation here: the population standard deviation $\sigma$ and the sample standard deviation $s$.
-If $x_1, x_2, \ldots, x_N$ denote all $N$ values from a population, then the (population) standard deviation is
-$$\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^N (x_i - \mu)^2},$$
-where $\mu$ is the mean of the population.
-If $x_1, x_2, \ldots, x_N$ denote $N$ values from a sample, however, then the (sample) standard deviation is
-$$s = \sqrt{\frac{1}{N-1} \sum_{i=1}^N (x_i - \bar{x})^2},$$
-where $\bar{x}$ is the mean of the sample. (For instance, for the data $1, 2, 3$ these give $\sigma = \sqrt{2/3} \approx 0.82$ but $s = 1$, which is exactly the sort of discrepancy the calculator was showing.)
-The reason for the change in formula with the sample is this: When you're calculating $s$ you are normally using $s^2$ (the sample variance) to estimate $\sigma^2$ (the population variance). The problem, though, is that if you don't know $\sigma$ you generally don't know the population mean $\mu$, either, and so you have to use $\bar{x}$ in the place in the formula where you normally would use $\mu$. Doing so introduces a slight bias into the calculation: Since $\bar{x}$ is calculated from the sample, the values of $x_i$ are on average closer to $\bar{x}$ than they would be to $\mu$, and so the sum of squares $\sum_{i=1}^N (x_i - \bar{x})^2$ turns out to be smaller on average than $\sum_{i=1}^N (x_i - \mu)^2$. It just so happens that that bias can be corrected by dividing by $N-1$ instead of $N$. (Proving this is a standard exercise in an advanced undergraduate or beginning graduate course in statistical theory.) The technical term here is that $s^2$ (because of the division by $N-1$) is an unbiased estimator of $\sigma^2$.
-Another way to think about it is that with a sample you have $N$ independent pieces of information. However, since $\bar{x}$ is the average of those $N$ pieces, if you know $x_1 - \bar{x}, x_2 - \bar{x}, \ldots, x_{N-1} - \bar{x}$, you can figure out what $x_N - \bar{x}$ is. So when you're squaring and adding up the residuals $x_i - \bar{x}$, there are really only $N-1$ independent pieces of information there. So in that sense perhaps dividing by $N-1$ rather than $N$ makes sense. The technical term here is that there are $N-1$ degrees of freedom in the residuals $x_i - \bar{x}$.
-For more information, see Wikipedia's article on the sample standard deviation.<|endoftext|>
-TITLE: Integrating a $k$-dimensional (multivariate) Gaussian over a convex $k$-polytope
-QUESTION [5 upvotes]: What is the integral of a $k$-dimensional (multivariate) Gaussian over a convex $k$-polytope?
-Here I am specifically interested in $k\in\{2,3\}$, but insight on the general problem would also be appreciated. I am guessing that there is no closed form in the general case, but what if we restrict the number of vertices of the polytope to at most $k+1$? If no closed form exists even under this constraint, are there efficient ways to approximate the integral?
-
-REPLY [2 votes]: So you want to compute the following integral:
-\begin{align*}
-\frac{1}{(2\pi)^{k/2}|\Sigma|^{1/2}}\int_{V_k}\exp\left(-\frac{1}{2}(x-a)^T\Sigma^{-1}(x-a)\right)dx
-\end{align*}
-where $V_k\subset \mathbb{R}^k$ is a $k$-polytope, $a\in \mathbb{R}^k$ is the mean and $\Sigma$ is the $k\times k$ covariance matrix of a $k$-variate normal vector.
-My approach would be doing a change of variables $y=\Sigma^{-1/2}(x-a)$ and then integrating one variable at a time, since the integrand will then be a product of one-variable functions. If $\Sigma^{-1/2}(V_k-a)$ is a $k$-cube then the integration is straightforward. For the general case it will not be easy. Take $k=2$, a normal with zero mean and unit covariance matrix, and $V_2$ a triangle with vertices at $(0,0)$, $(1,0)$, $(0,1)$. Then your integral will be
-\begin{align}
-\frac{1}{2\pi}\int_{V_2}e^{-\frac{x_1^2}{2}}e^{-\frac{x_2^2}{2}}dx_1dx_2&=\frac{1}{2\pi}\int_{0}^{1}e^{-\frac{x_1^2}{2}}\left(\int_{0}^{1-x_1}e^{-\frac{x_2^2}{2}}dx_2\right)dx_1\\
-&=\frac{1}{\sqrt{2\pi}}\int_0^1e^{-\frac{x_1^2}{2}}(\Phi(1-x_1)-\Phi(0))dx_1
-\end{align}
-which I do not think has a closed formula. (I might be wrong.)
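-(A quick numerical sanity check, in the same Mathematica idiom used elsewhere on this site; the snippet is mine:
-NIntegrate[Exp[-(x1^2 + x2^2)/2]/(2 Pi), {x1, 0, 1}, {x2, 0, 1 - x1}]
-which evaluates the triangle integral above numerically.)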
-As for approximation, I do not see why standard numerical integral solving methods would not work.<|endoftext|>
-TITLE: A type of stochastic jump process
-QUESTION [6 upvotes]: Let $X \geq 1$ be an integer r.v. with $E[X]=\mu$. Let $X_i$ be a sequence of iid rvs with the distribution of $X$. On the integer line, we start at $0$, and want to know the expected position after we first cross $K$, which is some fixed integer. Each next position is determined by adding $X_i$ to the previous position. So the question is, if we stop this process after the first time $\tau$ for which $Y_{\tau}=\sum_{i=1}^{\tau}X_i > K$, that is, after the first time it crosses $K$, then what is $E[Y_{\tau}-K]$? Can we get a bound of $O(\mu)$?
-
-REPLY [6 votes]: Let $\tau = \min \{ n \geq 1:X_1 + \cdots + X_n > K \}$. Then $\tau$ is an integer-valued random variable, bounded from above by $K+1$ (since $X_i \geq 1$). Note that $\tau = n$ if and only if $\sum\nolimits_{i = 1}^{n - 1} {X_i } \le K$ and $\sum\nolimits_{i = 1}^{n} {X_i } > K$. Thus, the event $\lbrace \tau = n \rbrace$ depends only on the values $X_1,\ldots,X_n$. So, by definition, $\tau$ is a stopping time with respect to the sequence $X_1,X_2,\ldots$. Now, $X_1,X_2,\ldots$ are i.i.d. with finite expectation $\mu$, and $\tau$ is a stopping time for them. Moreover, ${\rm E}(\tau) < \infty$ since $\tau \leq K+1$. Hence, by Wald's identity,
-$$
-{\rm E}\bigg(\sum\limits_{i = 1}^\tau {X_i } \bigg) = {\rm E}(\tau )\mu \leq (K+1)\mu.
-$$
-So if we put $Y_\tau = \sum\nolimits_{i = 1}^\tau {X_i }$, we get
-$$
-{\rm E}(Y_\tau - K) = {\rm E}(Y_\tau) - K \leq (K+1)\mu - K.
-$$
-EDIT:
-Since $\tau \geq 1$, we have
-$$
-\mu - K \leq {\rm E}(Y_\tau - K) \leq (K+1)\mu - K.
-$$
-As we have seen above, the problem reduces to calculating ${\rm E}(\tau)$. Put $S_n = \sum\nolimits_{i = 1}^n {X_i }$ ($S_0 = 0$).
-Note that
-$$
-{\rm P}(\tau = n) = {\rm P}(S_{n - 1} \le K,S_n > K) = {\rm P}(S_{n - 1} \le K) - {\rm P}(S_n \le K).
-$$
-Hence,
-$$
-{\rm E}(\tau) = \sum\limits_{n = 1}^{K + 1} {n{\rm P}(\tau = n)} = \sum\limits_{n = 1}^{K + 1} n[{\rm P}(S_{n - 1} \le K) - {\rm P}(S_n \le K)] = \sum\limits_{n = 0}^K {{\rm P}(S_n \le K)}.
-$$
-So, we can write
-$$
-{\rm E}(\tau) = 1 + \sum\limits_{n = 1}^\infty {{\rm P}(S_n \le K)} = 1 + \sum\limits_{n = 1}^\infty {F^{(n)}(K)},
-$$
-where $F^{(n)}$ is the distribution function of $S_n$. For $t>0$ real, define $m(t) = \sum\nolimits_{n = 1}^\infty {F^{(n)}(t)}$.
-From the theory of renewal processes, we know that $m(t) = {\rm E}(N_t)$, where $\lbrace N_t:t \geq 0 \rbrace$ is a renewal process with inter-arrival times distributed according to the distribution of $X$. $m(t)$ is called the renewal function. It may be worth noting that by the Elementary Renewal Theorem,
-$$
-\mathop {\lim }\limits_{t \to \infty } \frac{{m(t)}}{t} = \frac{1}{\mu }.
-$$
-Returning to our original setting, we have
-$$
-{\rm E}(Y_\tau ) = {\rm E}(\tau )\mu = (1 + m(K))\mu.
-$$
-So, the problem reduces to calculating $m(K)$.
-Finally, here is a useful link concerning renewal theory, which is very relevant to this answer.
-http://www.postech.ac.kr/class/ie272/ie666_temp/Renewal.pdf
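-As a sanity check on these formulas (a toy case of my own, not from the original answer): if $X \equiv 1$ deterministically, then $\mu = 1$ and the renewal function is $m(K) = K$, so ${\rm E}(\tau) = 1 + m(K) = K+1$ and ${\rm E}(Y_\tau - K) = (1 + m(K))\mu - K = 1$, which is exactly right: the walk lands on $K+1$ at its first crossing, and the upper bound $(K+1)\mu - K$ is attained.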
<|endoftext|>
-TITLE: how can we prove that $\max(f(n),g(n)) = \Theta(f(n)+g(n))$?
-QUESTION [6 upvotes]: How can we prove that $\max(f(n),g(n)) = \Theta(f(n)+g(n))$?
-The big $O$ direction is simple, since $\max(f(n),g(n)) \leq f(n)+g(n)$.
-edit: where $f(n)$ and $g(n)$ are asymptotically nonnegative functions.
-
-REPLY [2 votes]: Hint: $2 + 100 \le 100 + 100$.
-For the sake of completeness:
-
- Use the fact that $\max(f,g) \le f + g \le 2\max(f,g)$ when $f,g$ are non-negative.<|endoftext|>
-TITLE: what is the process for solving this absolute value inequality: $4-\left|\frac{5y}{3}+4\right| > \frac25$?
-QUESTION [5 upvotes]: I keep trying to solve it but I seem to be making an error somewhere:
- $$4-\left|\frac{5y}{3}+4\right| > \frac25$$
-
-REPLY [4 votes]: $4-\left|\frac{5y}{3}+4\right| > \frac25$ can be rewritten as $-\left|\frac{5y}{3}+4\right| > \frac{-18}{5}$, or $\left|\frac{5y}{3}+4\right| < \frac{18}{5}$. From here, represent the expression as $\frac{-18}{5} < \frac{5y}{3}+4 < \frac{18}{5}$ and proceed to solve it using arithmetic. (Carrying the arithmetic out: subtracting $4$ gives $-\frac{38}{5} < \frac{5y}{3} < -\frac{2}{5}$, and multiplying by $\frac{3}{5}$ gives $-\frac{114}{25} < y < -\frac{6}{25}$.)<|endoftext|>
-TITLE: Two points on circle resulting in 5 equal regions
-QUESTION [6 upvotes]: What values of $Z_1$ and $Z_2$ make the five regions of the unit circle, shown below, equal in area? $\overline{Z_1}$ and $\overline{Z_2}$ are conjugates of $Z_1$ and $Z_2$; in other words they lie directly across the real axis from their counterparts.
-
-REPLY [7 votes]: The problem is overdetermined/impossible. The values of $Z_1$ and $Z_2$ are forced, as in Ross's answer, but the segments with area $\frac{2\pi}{5}$ are not divided into regions of equal area.
-Let $0<\alpha<\pi$ be the solution to $\frac{\alpha-\sin\alpha}{2}=\frac{\pi}{5}$ (so $\alpha\approx 2.11314$). $\arg(Z_2)=\pi-\frac{\alpha}{2}$.
-Let $0<\beta<\pi$ be the solution to $\frac{\beta-\sin\beta}{2}=\frac{2\pi}{5}$ (so $\beta\approx 2.8248$). $\arg(\overline{Z_1})=\arg(Z_2)-\beta=\pi-\frac{\alpha}{2}-\beta$.
-Any point on the line segment joining $Z_2$ and $\overline{Z_1}$ is a linear combination of these two points, $t\cdot Z_2+(1-t)\overline{Z_1}$ with $0<t<1$.<|endoftext|>
-TITLE: Does the method for solving exact DEs generalize like this?
-QUESTION [9 upvotes]: Differential equations of the form $M\,dx+N\,dy=0$ such that $\frac{\partial M}{\partial y}=\frac{\partial N}{\partial x}$ are said to be exact, because the left-hand side of our equation is the exact differential of some function $f$, and the DE has a solution of the form $f(x,y)=c$, where $c$ is a constant. Standard textbooks give us a procedure for solving such equations that essentially amounts to using the identity $f=\int\frac{\partial f}{\partial x}\: dx+\int(\frac{\partial f}{\partial y}-\frac{\partial}{\partial y}\int\frac{\partial f}{\partial x}\: dx)\: dy$. Specifically, we integrate $M$ (respectively $N$, &c.) with respect to $x$ to get $f=\int\frac{\partial f}{\partial x}\: dx+C(y)$, and then differentiate this with respect to $y$ and do algebra to find $C'(y)$, which we can then integrate to find $C(y)$, which we can substitute back into our expression for $f$.
-It occurred to me that this technique should generalize---that is, when instead of $M\,dx+N\,dy=0$ such that $\frac{\partial M}{\partial y}=\frac{\partial N}{\partial x}$, we have $\sum_{i}M_{i}\,dx_{i}=0$ where the $M_{i}$ satisfy the criteria for being an exact differential, then we can iteratively apply the same procedure to solve the equation. And so I came up with the following---
-Theorem (?). A differential equation of the form $\sum_{i=1}^{n}\frac{\partial f}{\partial x_{i}}\,dx_{i}=0$ for some function $f$ has an implicit solution of the form $f=\sum_{i=1}^{n}a_{i}=c$ where $a_{0}=0$, $a_{i}=\int\left(\frac{\partial f}{\partial x_{i}}-\frac{\partial}{\partial x_{i}}\left(\sum_{j=1}^{i-1}a_{j}\right)\right)\: dx_{i}$ for $i\in\mathbb{N}_{+}$, and $c$ is an arbitrary constant.
-Proof (?).
By induction on the number of variables $n$. The theorem is true for $n=1$ because $f=\int\,(\frac{\partial f}{\partial x}-0)\,dx$ by the fundamental theorem of calculus.
-To complete the induction, we need to show that if $f(x_{1},...,x_{n})=\sum_{i=1}^{n}a_{i}=c$
-is a solution to $\sum_{i=1}^{n}\frac{\partial f}{\partial x_{i}}\,dx_{i}=0$, then $f(x_{1},...,x_{n+1})=\sum_{i=1}^{n+1}a_{i}=c$ is a solution to $\sum_{i=1}^{n+1}\frac{\partial f}{\partial x_{i}}\,dx_{i}=0$. Think of $f(x_{1},...,x_{n})$ as being the "special case" of $f(x_{1},...,x_{n+1})$ where $x_{n+1}$ is being "treated like a constant." (My thinking here is not nearly as precise as it should be and I'm overloading the function name $f$, but hopefully you understand the intuition to which I am appealing.) So we can say that $f(x_{1},...,x_{n+1})=\sum_{i=1}^{n}a_{i}+C(x_{n+1})$.
-Then we proceed analogously as in the two-variable case. Differentiating by $x_{n+1}$ and applying algebra yields $C'(x_{n+1})=\frac{\partial f}{\partial x_{n+1}}-\frac{\partial}{\partial x_{n+1}}\left(\sum_{i=1}^{n}a_{i}\right)$, and then integrating with respect to $x_{n+1}$ yields $C(x_{n+1})=\int\left(\frac{\partial f}{\partial x_{n+1}}-\frac{\partial}{\partial x_{n+1}}\left(\sum_{i=1}^{n}a_{i}\right)\right)\: dx_{n+1}$.
-Then substituting into our earlier expression for $f(x_{1},...,x_{n+1})$ we get $f(x_{1},...,x_{n+1})=\sum_{i=1}^{n}a_{i}+\int\left(\frac{\partial f}{\partial x_{n+1}}-\frac{\partial}{\partial x_{n+1}}\left(\sum_{i=1}^{n}a_{i}\right)\right)\: dx_{n+1}$, which by virtue of the definition of $a_{i}$ is equivalent to $f(x_{1},...,x_{n+1})=\sum_{i=1}^{n+1}a_{i}$. But by the principle of induction, this is quod erat demonstrandum.
-Example. In the $n=3$ case (and naming our variables $x$, $y$, and $z$), we get $f(x,y,z)=\int\frac{\partial f}{\partial x}\: dx+\int(\frac{\partial f}{\partial y}-\frac{\partial}{\partial y}\int\frac{\partial f}{\partial x}\: dx)\: dy$
-$+\int(\frac{\partial f}{\partial z}-\frac{\partial}{\partial z}(\int\frac{\partial f}{\partial x}\: dx+\int(\frac{\partial f}{\partial y}-\frac{\partial}{\partial y}\int\frac{\partial f}{\partial x}\: dx)\: dy))\, dz$, which is seen to be an identity by performing the integrations. What's going on is that $a_{1}=\int\frac{\partial f}{\partial x}\: dx=f$, and all following terms are zero by design, e.g., $a_{2}=\int\left(\frac{\partial f}{\partial y}-\frac{\partial f}{\partial y}\right)dy=0$.
-So my question is (and I offer my sincerest apologies if this is not an appropriate question, or if my poor exposition has rendered my intentions virtually unreadable): does this seem basically correct, or am I doing something wrong? And how do I fix my sloppy reasoning in the inductive step ("special case" ... "treated like a constant")? I would be much obliged for any input.
-
-REPLY [7 votes]: The idea is essentially correct, but what you wrote needs some editing. You have independently rediscovered some beautiful mathematics! Bravo!
-Your Theorem (as stated) has a much easier proof, but in your proof, you address an important point not mentioned in the theorem statement. There are two key ideas you really need to separate. Once you do that, I think you will see how to clean up your arguments.
-In the following, I assume that you are familiar with differential forms and the exterior derivative, so that expressions like
-$$ df = \sum \frac{\partial f}{\partial x_i} d x_i$$ seem reasonable to you, but also that you know something about what $d( Adx + Bdy + Cdz)$ is.
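-(For readers who want the reminder, the standard formula is $$d(A\,dx+B\,dy+C\,dz)=\left(\tfrac{\partial C}{\partial y}-\tfrac{\partial B}{\partial z}\right)dy\wedge dz+\left(\tfrac{\partial A}{\partial z}-\tfrac{\partial C}{\partial x}\right)dz\wedge dx+\left(\tfrac{\partial B}{\partial x}-\tfrac{\partial A}{\partial y}\right)dx\wedge dy,$$ whose coefficients are exactly the components of $\nabla\times(A,B,C)$; nothing beyond this is needed below.)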
If not, let me know in a comment, and I will gladly rewrite this in $\mathbb R^3$ using the curl operator ($\nabla \times$).
-First, about the proof of the theorem you stated. You need to think about what you mean by the differential equation $$A dx + Bdy = 0.$$ Obviously, you can "divide through" by $dx$ and then get an ordinary differential equation written in the normal way. Another way of seeing it is to say that you are looking for a parametrized curve $( x(t), y(t) )$ such that the tangent vector $(x'(t), y'(t))$ satisfies
-$$ 0 = A d x + B dy = A x'(t) dt + B y'(t) dt = (Ax'(t) + By'(t)) dt.$$
-In other words, we want a curve $(x(t), y(t))$ so that its tangent vector is orthogonal to the vector $(A(x,y), B(x,y))$. If you happen to know that $(A, B) = \nabla f$ for some function $f$ (i.e. $df = Adx + Bdy$) then you have implicitly defined solutions given by $f = c$ for any constant $c$, since $\nabla f$ is orthogonal to level sets of $f$!
-In higher dimensions, this picture generalizes perfectly. The solution to an equation given by $$ Adx + B dy + Cdz = 0$$ is going to be a surface in $\mathbb R^3$. If the differential form $Adx + Bdy + Cdz = df$ then by the same reasoning as above, level sets of $f$ will give you surfaces that satisfy this equation.
-Now for the important point you address in your proof, but which you do not mention in the theorem statement (but you do mention in the text above the theorem statement). This is something called the Poincaré Lemma. A special case of this says that in $\mathbb R^n$, if you have a differential 1-form $\alpha$ and $d\alpha = 0$ then $\alpha = df$ for some function $f$. You have all the key ideas of the proof of this "lemma" lurking in your proof. (It's called a lemma, but it is a key result in differential geometry and differential topology.)
-(The Poincaré Lemma does not hold if you work in a space with "holes". E.g. consider the 1-form given by $$\alpha := -\frac{y}{x^2+y^2} dx + \frac{x}{x^2+y^2} dy.$$
-This has $d \alpha = 0$, but you can't find a function so that $\alpha = df$.)
-More generally, you can define a (partial) differential equation in $\mathbb R^n$ by taking $k$ differential 1-forms $\alpha_i, i=1\dots k$, and asking to find a surface $S$ with the property that every vector $v$ tangent to $S$ satisfies $\alpha_i(v) = 0$ for $i=1 \dots k$. The condition for a solution to exist is given by the Frobenius integrability theorem, and the special case when $k=1$ is the requirement that $\alpha_1 \wedge d\alpha_1 = 0$ (satisfied in particular when $d\alpha_1 = 0$). This is generally a topic you will see in a differential geometry class.<|endoftext|>
-TITLE: A general formula for $\sum (k-1)(k-2)(k-3)$?
-QUESTION [6 upvotes]: What is a "simpler" formula for
-$$\sum_{3}^{n} \frac{(k-1)(k-2)(k-3)}{6}$$
-
-REPLY [2 votes]: Perhaps a messy and boring way: we can use the generating function.
-$$\sum_{k=3}^{n}\frac{(k-1)(k-2)(k-3)}{6}=\frac{1}{6}\sum_{k=2}^{n-1}k(k-1)(k-2)$$
-In addition, the generating function for $k(k-1)(k-2)$ is $x^3\left(\frac{1}{1-x}\right)^{(3)}.$
-Hence, the sum is the coefficient of $x^{n-1}$ in $\frac{1}{6}\frac{1}{1-x}x^3\left(\frac{1}{1-x}\right)^{(3)}=\frac{x^3}{(1-x)^5}$, which is
-$$\binom{n-1-3+5-1}{5-1}=\binom{n}{4},\quad n\geqslant3.$$
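-(A quicker route to the same closed form, for comparison: since $\frac{(k-1)(k-2)(k-3)}{6}=\binom{k-1}{3}$, the hockey-stick identity gives $$\sum_{k=3}^{n}\binom{k-1}{3}=\binom{n}{4}$$ directly.)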
<|endoftext|>
-TITLE: The arithmetic-geometric mean for symmetric positive definite matrices
-QUESTION [21 upvotes]: A while back, I wanted to see if the notion of the arithmetic-geometric mean could be extended to a pair of symmetric positive definite matrices. (I considered positive definite matrices only since the notion of the matrix square root is a bit intricate for other kinds of matrices.)
-I expected that some complications would arise since, unlike scalar multiplication, matrix multiplication is noncommutative. Another complication would be that the product of two symmetric matrices need not be symmetric (though the positive definiteness is retained, so one can still speak of a principal matrix square root).
-By analogy with the scalar AGM, I considered the iteration
-$$\mathbf A_0=\mathbf A \; ; \; \mathbf B_0=\mathbf B$$ $$\mathbf A_{i+1}=\frac12(\mathbf A_i+\mathbf B_i) \; ; \; \mathbf B_{i+1}=\sqrt{\mathbf A_i \mathbf B_i}$$
-I cranked up a short Mathematica routine:
-matAGM[u_, v_] := First[FixedPoint[
- {Apply[Plus, #]/2, MatrixPower[Apply[Dot, #], 1/2]} &, {u, v}]] /;
- MatrixQ[u, InexactNumberQ] && MatrixQ[v, InexactNumberQ]
-
-and decided to try it out on randomly generated SPD matrices.
-(A numerical note: Mathematica uses the numerically stable Schur decomposition in computing matrix functions like the matrix square root.)
-I found that for all of the randomly generated pairs of SPD matrices I tried, the process was convergent (though the rate of convergence is apparently not as fast as the scalar AGM). As expected, the order matters: matAGM[A, B] and matAGM[B, A] are usually not equal (and both results are unsymmetric) unless A and B commute (for the special case of diagonal A and B, the result is the diagonal matrix whose entries are the arithmetic-geometric means of the corresponding entries of the pair.)
-I now have three questions:
-
-How do I prove or disprove that this process converges for any pair of SPD matrices? If it is convergent, what is the rate of convergence?
-Is there any relationship between matAGM[A, B] and matAGM[B, A] if the two matrices A and B do not commute?
-Is there any relationship between this matrix arithmetic-geometric mean and the usual scalar arithmetic-geometric mean? Would, say, arithmetic-geometric means of the eigenvalues of the two matrices have anything to do with this?
-
-
-(added 8/12/2011)
-More digging around has me convinced that I should indeed be considering the formulation of the geometric mean by Pusz and Woronowicz:
-$$\mathbf A\#\mathbf B=\mathbf A^{1/2}(\mathbf A^{-1/2}\mathbf B\mathbf A^{-1/2})^{1/2}\mathbf A^{1/2}$$
-as more natural; the proof of convergence is then simplified, as shown in the article Willie linked to. However, I'm still wondering why the original "unnatural" formulation seems to be convergent (or else, I'd like to see a pair of SPD matrices that cause trouble for the unnatural iteration). I am also interested in how elliptic integrals might crop up in here, just as they did for the scalar version of the AGM.
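-(For anyone wanting to replicate the experiment, a small test harness in the same spirit; the helper name randSPD is my own:
-randSPD[n_] := With[{m = RandomReal[{-1, 1}, {n, n}]}, m.Transpose[m] + IdentityMatrix[n]]
-matAGM[randSPD[3], randSPD[3]]
-The output of randSPD is symmetric positive definite by construction, and swapping the two arguments of matAGM exhibits the order dependence described above.)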
-REPLY [2 votes]: You may want to take a look at exercise 6 on page 223 of Borwein and Borwein. (I am guessing anyone who asks about the AGM is familiar with this book.) There they discuss a matrix version of the AGM and ask one to prove a connection between it and an elliptic integral.
-They also give a paper by Stickel from 1985: "Fast computation of Matrix Exponentials and Logarithms" as a reference. The journal name is Analysis.<|endoftext|>
-TITLE: Do the maximal eigenvalues of $(X^TX)^{-1}$ increase when the number of columns of $X$ increases?
-QUESTION [7 upvotes]: Suppose we have a square real $n\times n$ matrix $X=[x_1,...,x_n]$, where $x_i$ is the $i$-th column of the matrix.
-Now define $X_k=[x_1,..,x_k]$, i.e. the columns of $X_k$ are the first $k$ columns of the matrix $X$. Define $\lambda_k$ as the maximal eigenvalue of $(X_k^TX_k)^{-1}$.
-Is it possible to prove that $\lambda_1\le \lambda_2\le ... \le \lambda_n$?
-If $X$ is orthogonal, then the answer is yes. Maybe this holds only for certain matrices? Any pointers would be greatly appreciated.
-
-REPLY [5 votes]: The answer is Yes:
-Assume that $X_{k+1}^TX_{k+1}$ is invertible. One can check that its lowest absolute eigenvalue is given by
-$$\left\vert{\frac1{\lambda_{k+1}}}\right\vert=\min\limits_{y\in{\mathbb R}^{k+1},\,\left\lVert y\right\rVert_2=1}\left\lVert{X_{k+1}y}\right\rVert_2^2\,.$$
-It holds that
-$$\min\limits_{y\in{\mathbb R}^{k+1},\,\left\lVert y\right\rVert_2=1}\left\lVert{X_{k+1}y}\right\rVert_2^2\leq\min\limits_{y\in{\mathbb R}^{k},\,\left\lVert y\right\rVert_2=1}\left\lVert{X_{k}y}\right\rVert_2^2\,,$$
-since any unit vector $y\in\mathbb R^k$ can be padded with a final zero coordinate to give a unit vector $\tilde y\in\mathbb R^{k+1}$ with $X_{k+1}\tilde y=X_k y$.
-This completes the proof.<|endoftext|>
-TITLE: When can you switch the order of limits?
-QUESTION [207 upvotes]: Suppose you have a double sequence $\displaystyle a_{nm}$. What are sufficient conditions for you to be able to say that $\displaystyle \lim_{n\to \infty}\,\lim_{m\to \infty}{a_{nm}} = \lim_{m\to \infty}\,\lim_{n\to \infty}{a_{nm}}$? Bonus points for necessary and sufficient conditions.
-For an example of a sequence where this is not the case, consider $\displaystyle a_{nm}=\left(\frac{1}{n}\right)^{\frac{1}{m}}$. $\displaystyle \lim_{n\to \infty}\,\lim_{m\to \infty}{a_{nm}}=\lim_{n\to \infty}{\left(\frac{1}{n}\right)^0}=\lim_{n\to \infty}{1}=1$, but $\displaystyle \lim_{m\to \infty}\,\lim_{n\to \infty}{a_{nm}}=\lim_{m\to \infty}{0^{\frac{1}{m}}}=\lim_{m\to \infty}{0}=0$.
-
-REPLY [2 votes]: The following is a minor variation on Arturo Magidin's answer, but in my experience it is more widely used.
-Proposition: Assume
-$$
-\lim_{m \to \infty} \lim_{n \to \infty} a_{mn}
-= \lim_{m \to \infty} a_m = a
-\qquad\text{and}\qquad
-\lim_{m \to \infty} a_{mn} = a_n
-$$
-i.e. all limits exist and converge to their indicated value. We further assume that the single-variable limits converge uniformly in the other variable, i.e. for every $\varepsilon>0$ there exists $M > 0$ such that $|a_{mn} - a_{n}| < \varepsilon$ for all $m > M$ and all $n$, and likewise for $a_m$. Then $\displaystyle \lim_{n \to \infty} \lim_{m \to \infty} a_{mn}$ exists and it holds
-$$
-\lim_{n \to \infty} \lim_{m \to \infty} a_{mn}
-=
-\lim_{m \to \infty} \lim_{n \to \infty} a_{mn}
-.
-$$
-Proof: For any $\varepsilon > 0$, we can find $N > 0$ such that $|a_{mn} - a_m| < \tfrac{\varepsilon}{3}$ for all $n > N$ and all $m$. Similarly, we can find $M>0$ such that both $|a_n - a_{mn}| < \tfrac{\varepsilon}{3}$ for all $n$ and $|a_m - a| < \tfrac{\varepsilon}{3}$ holds for all $m > M$. Thus, it holds
-$$
-|a_n - a| \leq
-|a_n - a_{mn}| + |a_{mn} - a_{m}| + |a_{m} - a|
-< \varepsilon
-$$
-for all $n > N$ and $m > M$. Since the left-hand side does not depend on $m$, this shows $|a_n - a| < \varepsilon$ for all $n > N$; that is, $\lim_{n\to\infty}\lim_{m\to\infty}a_{mn} = a$, as claimed.<|endoftext|>
-TITLE: Zeros of a holomorphic function
-QUESTION [10 upvotes]: Suppose $\Omega$ is a bounded domain in the plane whose boundary consists of $m+1$ disjoint analytic simple closed curves.
-Let $f$ be holomorphic and nonconstant on a neighborhood of the closure of $\Omega$ such that
-$$|f(z)|=1$$ for all $z$ in the boundary of $\Omega$.
-If $m=0$, then the maximum principle applied to $f$ and $1/f$ implies that $f$ has at least one zero in $\Omega$.
-What about the general case? I believe that $f$ must have at least $m+1$ zeros in $\Omega$, but I'm not able to prove it...
-Thank you - -REPLY [3 votes]: Here's my attempt at an explanation. -Since the closure of $\Omega$ is compact, so is its image under $f$. By the Open Mapping Theorem, $f(\Omega)$ is open, so the boundary of that image must be $f(\partial \Omega)$, the image of the boundary of $\Omega$. We know that $f(\partial \Omega)$ is in the unit circle, so $f(\Omega)$ must be the unit disk. -Suppose $\gamma_j$ is one of the interior boundary curves of $\Omega$, oriented positively (i.e. counterclockwise). Thus as you travel around $\gamma_j$ in the forward direction, your right hand is in $\Omega$ and your left hand is in a "hole" in $\Omega$. Now suppose your friend travels on $f(\gamma_j)$ (which is on the unit circle) as you go around $\gamma_j$, so the friend is at $f(z)$ when you are at $z$. By the conformal property of analytic functions, your friend's right hand must also be in $f(\Omega)$. Thus as you go around $\gamma_j$ counterclockwise, your friend is going around the unit circle clockwise. When you get back to your starting point, your friend must also get back to his starting point, having gone at least once clockwise (i.e. in the "negative" direction) around the unit circle. -On the other hand (so to speak), if $\gamma_1$ is the outer boundary curve of $\Omega$, oriented counterclockwise, as you travel around $\gamma_1$ counterclockwise your left hand and your friend's are in $\Omega$ and $f(\Omega)$ respectively, so your friend is traveling counterclockwise around the unit circle.<|endoftext|> -TITLE: Exercise 1.6.3 from Alon & Spencer's *The Probabilistic Method*: prove that $Pr[|X-Y| \leq 2] \leq 3 Pr[|X-Y| \leq 1]$ for i.i.d. real RVs $X$ and $Y$ -QUESTION [31 upvotes]: Doing a little reading over the break (The Probabilistic Method by Alon and Spencer); can't come up with the solution for this seemingly simple (and perhaps even a little surprising?) result: - -(A-S 1.6.3) Prove that for every two independent identically distributed real random variables $X$ and $Y$, - $$Pr[|X-Y| \leq 2] \leq 3 Pr[|X-Y| \leq 1].$$ - -REPLY [7 votes]: You may read the paper "The 123 theorem and its extensions" by Noga Alon and Raphael Yuster.<|endoftext|> -TITLE: Homology of the Empty set -QUESTION [11 upvotes]: I am under the impression that the standard convention for the homology (singular) of the empty set is 0 in all non negative degrees and $\mathbb{Z}$ in degree $-1$. I have no problem with this convention, I am just curious what role it plays. Most conventions helps something sensible remain true in a particular case, what is this convention doing? -Doe this convention depend on which version of cohomology we use? Is there a different convention for Cech or what have cohomology? - -REPLY [5 votes]: Here's how I think of it. -With simplicial homology, we start by considering the set of all continuous maps from the $n$-simplex into our space $X$, $\{\sigma:\Delta^n\to X\mid\sigma\text{ continuous}\}$. Then we form chains of these by taking the free abelian group $C_n:=\mathbb{Z}\{C_n(X)\}$. These are the terms of our chain complex along with the boundary maps. -But what about for $n<0$? Hmm... that doesn't seem to make sense here so I guess let's just say that $C_n(X)=0$ for $n<0$. But wait! What about $n=-1$? -The $n$-simplex is the convex hull of $n+1$ (affinely independent) points in $\mathbb{R}^{n+1}$ so the $(-1)$-simplex is the convex hull of 0 points, i.e. the empty set. So $\Delta^{-1}=\varnothing$. 
Thus, $\{\sigma:\Delta^{-1}\to X\mid \sigma\text{ continuous}\}=\{\text{the empty function}\}=\{\varnothing\}$. Then $C_{-1}(X)=\mathbb{Z}\{\{\varnothing\}\}\cong \mathbb{Z}$.
-This is where the extra $\mathbb{Z}$ in dimension $-1$ really comes from for reduced homology. However, for $n<-1$ the definition of simplex really doesn't make sense, so we'd better keep those terms of the chain complex zero.
-Poor guy... Everybody always forgets about the empty set, and especially his alias "the empty function" :(
-TLDR: The definition of a simplex means that the empty set is a $(-1)$-dimensional simplex<|endoftext|>
-TITLE: famous space curves in geometry history?
-QUESTION [7 upvotes]: For a university assignment I have to visualize some curves in 3-dimensional space.
-Until now I've implemented Bézier, helix and conical spiral.
-Could you give me some advice about some famous curves in geometry history?
-
-REPLY [6 votes]: Though it is not 3D, the Clothoid or Cornu Spiral is an amazing curve (up to normalization it is traced by the Fresnel integrals $x(t)=\int_0^t\cos(s^2)\,ds$, $y(t)=\int_0^t\sin(s^2)\,ds$). It surely can be made 3D by adding a simple extra parameter $z(t)=t$. It has infinite length but converges to two points in the plane. It has several applications in optics and road engineering, for example. And it looks quite nice:
-
-I found a 3D plot too:<|endoftext|>
-TITLE: Convergence tests for improper multiple integrals
-QUESTION [9 upvotes]: For improper single integrals with positive integrands of the first, second or mixed type there are the comparison and the limit tests to determine their convergence or divergence. There is also the absolute convergence theorem.
-For multiple integrals I know that the comparison test can be used as well.
-Question: But can the limit or other tests be generalized to multiple integrals? Could you provide some references?
-
-Added: I have thought of Riemann integration only.
-Example: When we evaluate (Proof 1 of this and Apostol's article) this improper double integral $$\int_{0}^{1}\int_{0}^{1}\left(\dfrac{1}{1-xy}\right) \mathrm{d}x\mathrm{d}y,$$ we conclude that it is finite.
-Question 2: Which test should we apply to know in advance that it converges?
-
-REPLY [11 votes]: Introduction
-The following response attempts to address two aspects I perceive to be of interest in this question: one concerns whether there are some relatively rote or standard procedures for evaluating the convergence of certain kinds of improper multivariate integrals; the other is whether there is some simple intuition behind the subject. To keep the discussion from becoming too abstract, I will use the double integral (introduced in the question) as a running example.
-Synopsis
-Many integrals that are improper due to singular behavior of their integrand can be analyzed, both rigorously and intuitively, with a simple comparison. The idea is that any singularity of the integrand that doesn't blow up too fast compared to the codimension of the manifold of singular points can be "smoothed over" by the integral, provided the domain of integration doesn't contact the singular points too "tightly." It remains to make these ideas precise.
-Analysis
-When the domain of integration is relatively compact, as the one in the example is, the problems with convergence occur only at the possible singularities of the integrand within the closure of the domain, which in this case is the isolated point $(x,y) = (1,1)$. However, it is evident that if the domain of integration were to be expanded, any zero of the curve $1 - x y = 0$ could be a singularity.
In general, many singularities occur this way: an integrand $f(\bf{x})$ is locally of the form $g(||h(\bf{x})||)$ where $h: \mathbb{R}^n \to \mathbb{R}^k$ and $g(r) = r^{-s} u(r)$ for some function $u$ that is bounded in some nonnegative neighborhood of $0$. In this case $h(\bf{x}) = 1 - x y$ and $g(r) = r^{-1}$, so $s=1$.
-When $h$ is differentiable at $\bf{0}$ with nonsingular derivative there, principles of Calculus invite us to linearize the situation and geometric ideas suggest a simple form for the linearization. Specifically, the Implicit Function Theorem guarantees that local coordinates can be found near such a singularity in which the derivative of $h$ is in the form $\left( \bf{1}_{k \times k} ; \bf{0}_{k \times n-k} \right)$. The singularity itself can be translated to the origin where, to first order, the zeros of $h$ locally correspond with the vector subspace generated by the last $n-k$ coordinate axes. The effect on the integral is to introduce a factor given by the determinant of the Jacobian, which is locally bounded and so does not affect the convergence. In this example we can explicitly make these changes of coordinates by translating the singularity to $\bf{0}$, computing the gradient of $h$ there, and rotating it to point along the first axis. This amounts to the change of variables $(u, v) = (2 - x - y,\; y - x)$, in which $h(u, v) = u - \frac{1}{4}(u^2 - v^2)$, equal to $u$ to first order. Within a small neighborhood of the origin, the domain of integration becomes a 90-degree wedge, the points with $|v| \le u$, and the zeros of $h$ coincide with the $v$ axis to first order.
-Let's consider the case where the domain of integration locally contains an isolated point of the singular set $H$ (the zeros of $h$), which we translate to $\bf{0}$. Estimate the integral near this singularity by adopting spherical coordinates there. To do this, we need a closer look at the domain of integration in spherical coordinates. Any $\epsilon \gt 0$ determines the set of all possible unit direction vectors from $\bf{0}$ toward elements of the domain of integration that lie within a distance $\epsilon$ of $\bf{0}$. In the example, this corresponds to the set of points on the unit circle with angles between $-\pi/4$ and $\pi/4$. Suppose that for sufficiently small $\epsilon$ the closure (in the unit sphere) of this set of direction vectors does not include any of the tangent vectors of $H$ at $\bf{0}$. Then an easy estimate shows that the angular part of the integral in spherical coordinates is bounded. This reduces the task of integration to estimating the radial part. Because the volume element in spherical coordinates $(\rho, \Omega)$ is $\rho^{n-1} d\rho d\Omega$, the radial integrand is proportional to $\rho^{n-1} g(\rho)$. This is the key calculation: it shows how integration in $n$ dimensions can "cancel" a factor of $\rho^{1-n}$ in the integrand.
-We're reduced to evaluating a 1D integral, improper at $0$, whose integrand behaves like $\rho^{n-1} g(\rho)$. By virtue of the assumptions about $g$, this is bounded above by a multiple of $\rho^{n-1-s}$. Provided $n-1-s \gt -1$, this converges (at a rate proportional to $\rho^{n-s}$). In the example, $n=2$, $s=1$, so convergence is assured.
-Generalizations
-When $n-1-s \le -1$, the behavior of the original integral depends on exactly how the domain "pinches out" as it approaches the singularity, so more detailed analysis is needed.
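-(To illustrate the borderline case with numbers of my own: take $n=2$ and $g(\rho)=\rho^{-2}$, so $s=2$ and $n-1-s=-1$. The radial integrand behaves like $\rho^{-1}$, whose integral diverges logarithmically over any fixed wedge; convergence is then possible only if the angular measure of the domain shrinks as $\rho\to 0$, which is exactly the "pinching out" referred to above.)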
The spirit of the exercise doesn't change, though: with Calculus we linearize the situation (relying on simple estimates to take care of the second-order remainder term), we use spherical coordinates centered at an isolated singularity in the domain (or more generally, cylindrical coordinates near a more complicated singularity) to perform the integration, and we estimate the rate of growth of the integral near the singularity in terms of a radial component and a contribution from a spherical angle. The behavior of the integral over the spherical angle as we approach the singularity is determined by the shape of the domain near the singularity, so often some careful (and interesting) geometric analysis is needed. This approach is characteristic of the proofs of many theorems in Several Complex Variables, such as the Edge of the Wedge Theorem (if I recall correctly--I'm reaching back to memories now a quarter century old). One good reference is Steven Krantz's book on Function Theory of SCV.
-Summary
-The behavior of the integral of $(1-x y)^{-1}$ near $(1,1)$ can be analyzed by adopting polar coordinates $(\rho, \theta)$ centered at $(1,1)$, observing that the domain here forms a wedge whose point contacts the singular set $1 - x y = 0$ "transversely," noticing that the radial behavior of the integrand is $\rho^{-1}$, remembering that the area element in polar coordinates has a $\rho^1 d \rho$ term, and noting that the integral of $\rho^{-1} \rho^1 d \rho$ converges at $0$. Indeed, what originally looked like a singularity has entirely disappeared. This procedure generalizes to multidimensional integrals subject to fairly mild restrictions on the nature of the singularities of their integrands and on the geometry of the domain of integration near those singularities.<|endoftext|>
-TITLE: Reverse Taylor Series
-QUESTION [5 upvotes]: Almost everyone is familiar with the famous Taylor Series:
-$ f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x-a)^n $
-which, if it converges at more than one point, will converge in some interval about $a$. Has anyone considered the "Reverse" Taylor Series:
-$ g(x) = \sum_{n=0}^\infty \frac{f^{(n)}(x-a)}{n!} a^n $
-I call it reverse, because it's what you get by symbolically taking $a \rightarrow (x-a)$. You might be saying that there is absolutely no reason to believe that this series should converge to $f(x)$, but there are two big examples where it does.
-For $e^x$:
-$g(x) = \sum_{n=0}^\infty \frac{e^{(x-a)}}{n!} a^n = \frac{e^x}{e^a} \sum_{n=0}^\infty \frac{a^n}{n!} = e^x $
-For $x^k$:
-$g(x) = \sum_{n=0}^k \frac{k(k-1)...(k-n+1)(x-a)^{k-n} a^n}{n!} = \sum_{n=0}^k {k \choose n} (x-a)^{k-n} a^n = ((x-a) + a)^k $
-I have yet to find a counter-example for when the Reverse Taylor Series does not give back the original function for an analytic function, and I also have yet to think of a way to prove that the Reverse Taylor Series should converge for a given function. Does anyone have any ideas?
-
-REPLY [7 votes]: Your two series are the same thing pointwise when they converge. $g(x)$ is the Taylor series for $f$ expanded around the point $x-a$. So it will converge as long as the Taylor series for $f$ at $x-a$ has a radius of convergence of at least $|a|$.
-However, in $f(x)$, moving $x$ around may make the Taylor series fail to converge due to your $x$ moving out of the radius of convergence. Whereas in $g(x)$, moving $x$ around will make the series fail to converge due to moving the point $x - a$ somewhere with a radius of convergence smaller than $|a|$.
That is, once you fix $a$, in the Taylor series for $f$ around $a$, the convergence radius is fixed, independent of $x$. But in the series $g(x)$, the convergence of the series depends on how the radius of convergence at $x-a$ (which depends on $x$) compares with $|a|$. So the fact that the function $g(x)$ converges at more than one point is insufficient to guarantee that $g(x)$ converges in a small interval about $a$.
-
-REPLY [5 votes]: The general idea:
-Put $y=x-a$.
-Then,
-$$
-g(x) = \sum\limits_{n = 0}^\infty {\frac{{f^{(n)} (y)}}{{n!}}(x - y)^n } = f(x).
-$$<|endoftext|>
-TITLE: Learning algebra and harmonic analysis
-QUESTION [25 upvotes]: I've revised my question a bit in response to the (very helpful) advice so far--
-I have an engineering background but am interested in learning abstract harmonic analysis. My interest is rather unstructured; the Fourier transform sparked my interest in mathematics, and I've been following it since I first learned of it as a sort of guide through mathematics. I've decided it's time I delved into the abstract, and from my readings so far it's become clear to me that this will require a sojourn into a swathe of unfamiliar mathematics. My background in classical analysis, linear algebra, and probability is rather good (probably the rough equivalent of a 1st year american grad student), and so I'm familiar as well with point-set topology. I'm also familiar with measure theory (via probability, and some real analysis). I have essentially no abstract algebra.
-I would be interested in three bits of advice:
-
-What algebra is essential for the canon of harmonic analysis? Perhaps this is a vacuous question (i.e., the answer is as simple as "just google" or "open a book")? It strikes me that techniques or background might work their way in unnamed. I'm hoping to prevent the sinking feeling that I don't know something I should.
-Specific examples (theorems, objects, counterexamples) which illustrate important applications of harmonic analysis, especially in relation to algebra. I'm not looking for explanations, just statements of what's thought to be important. I'm hoping that this information will help me sift through the different branches of harmonic analysis, especially where I am most unfamiliar with the language.
-Canonical/important texts which I might find useful. I'm especially interested in those good for self study (well-edited with many examples and end-of-chapter exercises, and ideally a conversational tone with lengthy remarks and some historical content).
-
-REPLY [18 votes]: Let me suggest that you look at the book "An introduction to topology and modern analysis", by Simmons. It covers concepts of point set topology that you presumably already know, but also gives a fairly concise, but quite readable, introduction to abstract algebra. It then brings these together in its final sections into a discussion of Banach algebras and related topics, culminating in a proof of the Gelfand--Naimark theorem. [See below for a brief remark about this theorem, and the other theorems mentioned in the subsequent paragraphs.]
-Since you say that you are interested in abstract harmonic analysis, the Gelfand--Naimark theorem is a good place to start. (For example, it is not so far to go from there to the abstract form of Wiener's Tauberian theorem.)
-Note: I am interpreting abstract harmonic analysis to mean something like harmonic analysis on locally compact abelian groups (and related topics).
-Simmons also has exercises, I think.
-When I was studying this stuff, the next place I went to after Simmons was Naimark's tome Normed rings. (There are various editions, and some of the later ones might be called Normed algebras instead, if I'm not misremembering. They are translated from Russian, so the slightly unusual, and changing, name may be an artefact of this; I'm not sure. In any case, they are basically about the theory of Banach algebras and its applications to abstract harmonic analysis.)
-This is a place where one can read about various group rings of topological groups, Haar measure, the general form of Wiener's Tauberian theorem, and other concepts of abstract harmonic analysis. It is essentially too long to read from start to finish, but in my experience one can dip into it in bits and pieces, and having a firm understanding of the material from Simmons helps a lot.
-Naimark's book is a monograph, not a textbook as such, and although it has many historical comments and illustrative examples (although the examples are often at a theoretically fairly high level), I don't remember it as having exercises. But in any case, I am not suggesting it as a first port of call, but as somewhere to go after you have some basics under your belt.
-There is also a book by Loomis, An introduction to abstract harmonic analysis, which also treats Haar measure, various group rings, and so on. If I remember correctly it is less condensed than Naimark and also less comprehensive. My memory is that I preferred Naimark, but probably for idiosyncratic reasons. I don't remember whether Loomis's book has exercises.
-All the books I'm mentioning are probably out of print, so I'm also assuming that you have access to a university library or something similar. (Any decent such library should have them.)
-Finally, some fundamental results that I would recommend aiming for, which combine algebra and analysis nicely:
-
-Gelfand's generalization of Wiener's theorem (that if $f$ is a nowhere zero continuous periodic function whose Fourier series is absolutely convergent, then the Fourier series of $1/f$ is also absolutely convergent).
-the Gelfand--Naimark theorem (identifying certain commutative Banach algebras with extra structure as being algebras of continuous functions on a compact topological space; it is a beautiful generalization of the classical spectral theorems for matrices).
-The generalization of Wiener's Tauberian theorem to arbitrary commutative locally compact groups. (Wiener's original theorem says that if $f$ is an $L^1$-function on the real line whose Fourier transform is nowhere zero, then the translates of $f$ span a dense subspace of $L^1$.)
-More abstract, but basic to the previous example and lots of other things, is the existence of Haar measure for any locally compact group.
-
-As already mentioned, the first two results are in Simmons, and are easier, but already involve a very nice interplay between analysis and algebra. (The general framework is that of Banach algebras, which combine the analysis of Banach spaces with the algebra of rings, ideals, and so on.)
-The second two results are in Naimark, and Haar measure is also in Loomis (and many other places) (and the general form of Wiener may be in Loomis too; I forget now).<|endoftext|>
-TITLE: Is the natural log of n rational?
-QUESTION [7 upvotes]: It's famously known that the natural log of 2 is irrational.
-How about the natural log of other numbers? Is it known/unknown whether these are rational?
-Obviously ln(1) is 0, and ln(2^n) is n*ln(2) (and is thus rational iff ln(2) is rational), but how about other cases?
-
-REPLY [7 votes]: We can also use a non-simple continued fraction expansion of $\displaystyle e^{2x/y}$ to prove the irrationality of $\displaystyle e^{2x/y}$ when $\displaystyle x,y$ are positive integers. Thus if $\displaystyle \log n = x/y$, then $\displaystyle e^{2x/y} = n^2 $ is rational, contradicting the irrationality of $\displaystyle e^{2x/y}$.
-Incidentally, the first proof of irrationality of $\pi$ by Lambert used a continued fraction expansion (of $\tan x$, I believe).
-The expansion we use:
-$$e^{2x/y} = 1+\cfrac{2x}{y-x+\cfrac{x^2}{3y+\cfrac{x^2}{5y+\cfrac{x^2}{7y+\ddots}}}}$$
-and the theorem we use to prove irrationality is quoted in the wiki page for Generalized Continued Fractions here: Conditions of Irrationality.
-By this theorem, it is enough that for all sufficiently large positive integers $\displaystyle m$ we have that $\displaystyle (2m+1)y \gt x^2$, which is true for fixed $\displaystyle x,y$.<|endoftext|>
-TITLE: why is the following thing a projection operator?
-QUESTION [7 upvotes]: Let $T: E \rightarrow E$ be an endomorphism of a finite-dimensional vector space, and let $S$ be a circle in the complex plane that does not intersect any eigenvalues of $T$. Now let $Q = \frac{1}{2\pi i} \int_S (z-T)^{-1} \, dz$.
-Why is $Q$ a projection operator?
-The motivation behind this question is that the above situation occurs in a proof of Bott's periodicity theorem, but it's not clear to me that $Q$ is a projection...
-
-REPLY [4 votes]: If $A$ is any Banach algebra (such as the algebra of endomorphisms of a finite dimensional complex vector space), then for each subset $\Omega$ of the complex plane and each element $T$ of $A$ whose spectrum is contained in $\Omega$, holomorphic functional calculus yields a homomorphism $f\mapsto f(T)$ from the algebra of functions holomorphic in an open set containing $\Omega$ (identified if they agree on some neighborhood of $\Omega$) into $A$. Since the function $f:(\mathbb{C}\setminus S)\to\mathbb{C}$ defined by $f(w)=\frac{1}{2\pi i}\int_S(z-w)^{-1}dz$ takes on only the values $0$ and $1$ (it gives the winding number of $S$ about $w$), $f$ is idempotent (i.e., $f(w)^2=f(w)$ for all $w\in\mathbb{C}\setminus S$), and thus $f(T)$ is an idempotent element of $A$ for each $T$ whose spectrum is disjoint from $S$.
-
-REPLY [3 votes]: Show that the integral depends continuously on $T$, and show that $Q^2=Q$ when $T$ is diagonalizable, by finding how $Q$ changes when you change $T$ to a similar matrix, and then reducing to the one-dimensional case. Then use the fact that diagonalizable matrices are dense in the space of all matrices, and that $Q^2$ and $Q$ are continuous functions of $T$.<|endoftext|>
-TITLE: Splitting a club set
-QUESTION [5 upvotes]: Suppose $\kappa$ is an uncountable cardinal. Then $S\subseteq\kappa$ is club (CLosed UnBounded) if $S$ is unbounded in $\kappa$ and is a closed subset of $\kappa$ under the order topology. My question is the following: suppose $S\subseteq\kappa$ is club, and $S=S_1\cup S_2$. Is there necessarily some club set $C$ such that either $C\subseteq S_1$ or $C\subseteq S_2$?
-(Note that, since the intersection of two clubs is again club, this question amounts to asking whether, for $\kappa$ an infinite cardinal, the set $\{S\subseteq\kappa: S$ is a superset of some club set $\}$ is an ultrafilter.)
-
-REPLY [4 votes]: The answer is no, this is never an ultrafilter. This uses choice.
We assume that the cofinality of $\kappa$ is bigger than $\omega$; otherwise $C$ has a cofinal subset of order type $\omega$ (which is automatically club), and this can be split into two disjoint cofinal (therefore club) sets.
-If the cofinality of $\kappa$ is larger than $\omega_1$, simply note that for any regular $\alpha<\kappa$, $\{\beta<\kappa:\beta$ has cofinality $\alpha\}$ is stationary, i.e., it meets every club (consider the $\alpha$-th member of the increasing enumeration of the club). These sets are disjoint for different $\alpha$, and if $\kappa>\omega_1$, there are at least two regular $\alpha$ below $\kappa$, namely $\omega$ and $\omega_1$. This shows that we can split $C$ into disjoint stationary sets (and therefore neither is club, since a stationary set must meet every club).
-All I've used so far is that $\omega_1$ is regular.
-To show that clubs on ordinals of cofinality $\omega_1$ can be split into disjoint stationary sets requires more choice. In fact, it is consistent that the club filter on $\omega_1$ is an ultrafilter. This is the case, for example, in models where the axiom of determinacy holds. (But it is strictly weaker than determinacy. A measurable suffices in consistency strength.)
-The typical argument from choice uses the existence of Ulam matrices. See definition 12 and the following paragraph in this blog entry. An $\omega\times\omega_1$ Ulam matrix gives us that every stationary subset of $\omega_1$, in particular every club, can be split into $\omega_1$ disjoint stationary sets. So the club filter is not an ultrafilter, and we are done.
-
-Ulam's argument also shows that any stationary subset of $\kappa^+$ can be split into $\kappa^+$ stationary sets, and Solovay proved the same for any regular $\lambda$, not just a successor. This, of course, gives the result that no club filter is an ultrafilter, but I wanted to point out that only in cofinality $\omega_1$ do we need that much choice to carry out the argument.
-
-REPLY [2 votes]: If you are familiar with the ideas of clubs and stationary sets, then there is a good measure-theoretic intuition around them:
-Clubs are of measure $1$. They are almost everywhere.
-Non-stationary sets are of measure $0$ and are almost nowhere.
-Stationary sets are of positive measure.
-Now to your question: look at the unit interval, and take some subset of measure $1$; you can split it into two positive-measure sets, neither of which is of measure $1$, namely by intersecting it with $[0,\frac{1}{2})$ and with $[\frac{1}{2},1]$, and you have two disjoint subsets of positive measure.
-Now it is important to say that this is just the intuition at heart, and since Andres gave a good answer with a formal backbone I decided to add this one as well.<|endoftext|>
-TITLE: Intuition for complex eigenvalues
-QUESTION [23 upvotes]: The eigenvalues of a rotation matrix are complex numbers. I understand that they cannot be real numbers because when you rotate something no direction stays the same.
-My question
-What is the intuition that the eigenvalues are complex? Why do they exist at all for rotation matrices? I mean, it is not the case that every time a calculation is not possible the result is complex (dividing by 0 is not possible at all - the result is not real, but it is not complex either!). The complex numbers seem to cover some middle ground here, but I don't understand how and why they come into play - there don't seem to be any square roots taken from negative numbers...
-
-REPLY [10 votes]: Heuristically, I suppose we could see this as being because the standard action of the complex numbers on $V = \mathbb R^{2n}$ is by rotation. That is, if $(e_1, \ldots, e_{2n})$ is a basis for $V$, then we define multiplication by $i$ as
-$ i e_{2k-1} = e_{2k}, \quad i e_{2k} = - e_{2k-1},$ for $k = 1, \ldots, n$,
-so multiplying a vector $v$ by a complex number $\lambda$ will correspond to a scaling by a real number together with a rotation.
-Now, if we have a rotation $A$ on the space $V$ and we want to find a line $l$ "invariant" under $A$, then we can try to look for a complex number $\lambda$ such that the rotation of $l$ under $A$ is equivalent to the action of $\lambda$ on $l$. Thus, we can try to look for complex eigenvalues $\lambda$ of $A$.
-This line of "reasoning" completely breaks down in the odd-dimensional case, because we can't define a complex structure on odd-dimensional spaces, but it might give a hint as to why we'd look for complex eigenvalues at all. Then it becomes a matter of algebra to figure out that it actually works, and that it does so in a vector space of any finite dimension.
-Finally, for the existence, as Alex already pointed out, we look for eigenvalues by finding roots of polynomials. All polynomials admit a root over the complex numbers, which translates into the existence of a complex eigenvalue.
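-To see square roots of negative numbers actually appear (the point the question raises), the $2\times 2$ computation suffices: for the rotation $R_\theta=\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}$, the characteristic polynomial is $$\lambda^2-2\cos\theta\,\lambda+1=0,$$ whose discriminant $4\cos^2\theta-4=-4\sin^2\theta$ is negative whenever $\sin\theta\neq 0$, and the quadratic formula then gives $\lambda=\cos\theta\pm i\sin\theta=e^{\pm i\theta}$. So square roots of negative numbers do enter, just one step into the computation.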
-Cusp forms of a specific type are rapidly decreasing (1), so essentially bounded. A convergent (in $L^2$) sequence of cusp forms of a specific type converges weakly to a distribution of the same type (it is necessary to use distributions to talk about the $\mathcal Z$-type of a non-smooth function); hence the limit is actually a smooth function (2), and the sequence in fact converges to a cusp form (3). So we can deduce that the space of cusp forms of a given type is finite dimensional. -This is not much more than an abbreviated transcription of the argument in Borel's "Automorphic forms on $SL_2({\mathbb R})$". The theorem is originally from Harish-Chandra, in "Automorphic forms on semisimple Lie groups". - -The theorem I think Matt E refers to is the much stronger assertion that "irreducible unitary representations of reductive groups over local fields are admissible". This implies that irreducible unitary representations of adele groups are admissible, which in turn implies that subrepresentations of the space of cusp forms are admissible, and hence that automorphic representations are admissible, since they are built up from cuspidal representations in ways that preserve admissibility (parabolic induction and subquotients). This theorem is provable in representation-theoretic terms (though it is surprisingly algebraic: the main tricks are about algebras). -It is important because it lets us factor automorphic representations over primes. It is not hard to show that irreducible admissible representations of adele groups factor over primes, but it wasn't proven that automorphic representations were admissible until it was proven that irreducible unitary representations were.<|endoftext|> -TITLE: On Dirichlet series and critical strips -QUESTION [23 upvotes]: (I'll keep this one short) -Given a Dirichlet series -$$g(s)=\sum_{k=1}^\infty\frac{c_k}{k^s}$$ -where $c_k\in\mathbb R$ and $c_k \neq 0$ (i.e., the coefficients are a sequence of arbitrary nonzero real numbers), and assuming that $g(s)$ can be analytically continued, does it follow that $g(s)$ possesses a critical strip containing its nontrivial zeroes? -If this does not generally hold, what restrictions should there be on the $c_k$ for $g(s)$ to possess a critical strip? -(My attempts at searching bring too much stuff on Riemann $\zeta$, with only a quick mention of general Dirichlet series; pointers to the literature would be appreciated.) - -REPLY [28 votes]: In the comments section to Willie Wong's answer, the following Dirichlet series came up: the Riemann $\zeta$-function, Dirichlet $L$-functions, and Ramanujan's series $\sum_{n \geq 1}\tau(n) n^{-s}$, where $\tau(n)$ is the coefficient of $q^n$ in $\Delta(q) = q\prod_{n=1}^{\infty} (1-q^n)^{24}$. -First note that the $\zeta$-function is a special case of a Dirichlet $L$-function (it is the $L$-function of the trivial character). -Now what is it that Dirichlet $L$-functions and Ramanujan's series have in common? Well, they are all automorphic $L$-functions.
-An automorphic form (for the group $\mathrm{GL}_n$ over $\mathbb Q$; there are generalizations where $\mathbb Q$ is replaced by an arbitrary number field $F$ and $\mathrm{GL}_n$ is replaced by an arbitrary reductive group, but to simplify the explanations, I will focus just on the simplest level of generality here) is a function on the product $\mathrm{GL}_n(\mathbb R)\times \mathrm{GL}_n(\mathbb Z/N\mathbb Z)$ for some integer $N \geq 1$ which -is - -invariant under the natural (diagonal) action of $\mathrm{GL}_n(\mathbb Z)$; -grows moderately at infinity with respect to the $\mathrm{GL}_n(\mathbb R)$-coordinates; -satisfies a suitable differential equation in the $\mathrm{GL}_n(\mathbb R)$-coordinates. - -Rather than explaining the generalities in more detail (they can be found in many places), I think it's better to illustrate them: -E.g. Dirichlet characters arise in the case $n = 1$: they are defined as functions -on $(\mathbb Z/N\mathbb Z)^{\times} =: \mathrm{GL}_1(\mathbb Z/N\mathbb Z)$, -and so we can make them into functions on $\mathrm{GL}_1(\mathbb R)\times -\mathrm{GL}_1(\mathbb Z/N\mathbb Z)$ by defining them to be trivial on the -$\mathbb R^{\times}$-coordinate. -E.g. If $f(\tau)$ is a holomorphic modular form of weight $k$ and level one (where $\tau$ is an upper half-plane variable as usual), we can make $f$ into a function on -$\mathrm{GL}_2(\mathbb R)$ by first identifying this matrix group with the collection of bases of $\mathbb R^2$, then identifying $\mathbb R^2$ with $\mathbb C$, -and then defining, for any $\mathbb R$-basis $\omega_1,\omega_2$ of $\mathbb C$, -$f(\omega_1,\omega_2) := \omega_2^{-k} f(\omega_1/\omega_2)$. (This presumes -that $\omega_1/\omega_2$ is in the upper half-plane rather than the lower, -for simplicity.) Thus we get a function of the required kind (with $N = 1$). -The usual modularity condition becomes invariance under $\mathrm{GL}_2(\mathbb Z)$. The moderate growth condition becomes the condition that the Fourier expansion of $f$ involves only non-negative powers of $e^{2 \pi i \tau}$. The differential equation is the Cauchy--Riemann equation expressing holomorphy of $f$. -Higher level modular forms will involve values of $N$ that are $> 1$. -E.g. Maass forms are similar to the preceding example, except that now the -differential equation expresses that a Maass form is an eigenvector of the Laplacian. -For any fixed $n$ and fixed $N$, we have Hecke operators acting on the space -of automorphic forms, labelled by primes $p$ not dividing $N$, and so we can -consider Hecke eigenforms. In the case of Dirichlet characters, the fact that they are characters of $(\mathbb Z/N\mathbb Z)^{\times}$ (rather than just arbitrary functions) can be reinterpreted as saying that they are Hecke eigenforms. -Of course, Ramanujan's $\Delta$ is well-known to be a Hecke eigenform of weight $12$ and level $1$. -Given an automorphic Hecke eigenform we can use the Hecke eigenvalues to make -an Euler product Dirichlet series, which will give Dirichlet $L$-functions -for Dirichlet characters, and Ramanujan's Dirichlet series for $\Delta$. -(In the Dirichlet character case, if a prime $p$ divides the conductor $N$, we just have a trivial factor in the Euler product for that prime; when $n > 1$, -and $N > 1$, it is a bit more of a battle to figure out what Euler factors -to put in at the primes dividing $N$, but it can be done.) -Actually, it is better to restrict to cuspidal automorphic Hecke eigenforms. -Cuspidal is a vacuous condition when $n = 1$ (i.e.
in that case we agree to call -everything cuspidal), and when $n > 1$ we replace "moderate growth at infinity" by "rapid decay at infinity", suitably interpreted. I'll assume that all my eigenforms are cuspidal -from now on. (E.g. $\Delta$ is cuspidal.) -In this way we get a natural class of $L$-functions which have: - -meromorphic continuation to the whole complex plane, which is in fact -holomorphic with the sole exception of Riemann's $\zeta$. -Functional equation with completely understood $\Gamma$-factors. E.g. for a -weight $k$ modular form of level one, if the $p$th Hecke eigenvalue is $a_p$, -then the $L$-function is $\prod_p (1 - a_p p^{-s} + p^{k - 1 - 2s})^{-1},$ and the functional equation relates $s$ and $k - s$. For $\Delta,$ I've already noted that $k = 12$. (In general the functional equation relates the $L$-series of an automorphic eigenform with $L$-series of its "complex conjugate" suitably understood, just as in the case of Dirichlet characters that are not necessarily real valued.) -Conjecturally, they should all satisfy the analogue of RH, i.e. all non-trivial zeroes should lie on the critical line, in the centre of the critical strip. - -Note incidentally that it is easy to change the apparent form of the functional equation. E.g. if we make a change of variable $s \mapsto s + 11/2$ in Ramanujan's series, then the functional equation will become $s \mapsto 1 - s$ -rather than $s \mapsto 12 - s$, and the critical line will be $\Re s = 1/2$, just as in the $\zeta$-function case. -All cuspidal automorphic $L$-functions can be renormalized in a similar way, so that the symmetry of the functional equation is $s \mapsto 1 - s$. This is called unitary normalization, and is common in the automorphic forms literature. -Up to rescaling, there are only countably many automorphic eigenforms altogether (just because, once we fix the level $N$ and the (appropriately generalized) weight, the space of automorphic forms is finite dimensional) and so altogether we are talking about a very special class of just countably many Dirichlet series, but these seem to be the ones that naturally generalize $\zeta(s)$. -By the way, this general point of view is due to Langlands, and forms a part of the general Langlands program. -Another point of view was given by Selberg, which focuses more on capturing the analytic properties necessary for getting good properties of a Dirichlet series, rather than beginning from a conceptual construction (as in the automorphic point of view). Namely, he introduced the Selberg class of Dirichlet series. Note that his axioms include an Euler product, analytic continuation, and a functional equation. -My sense is, though, that people expect the Selberg class of Dirichlet series to more-or-less coincide with the class of automorphic $L$-functions, so I think it is just two points of view on the same question: Langlands is showing how to construct "good" Dirichlet series, and Selberg is writing down the properties a "good" Dirichlet series should satisfy. It turns out that "good" Dirichlet series are so special, though, that however you try to pick them out, you seem to end up with the same very special collection, namely the automorphic ones.<|endoftext|> -TITLE: Stirling-type formula for the logarithmic derivative of the Gamma function -QUESTION [8 upvotes]: How may one go about proving -$\displaystyle\frac{\Gamma'(s)}{\Gamma(s)}=O(\log|s|)$, -(away from the poles) directly?
By a direct proof, I mean not to go through the usual Stirling formula with the exact error term. The use of a rough form of Stirling's formula is welcome. - -REPLY [7 votes]: You can use the product representation of $\Gamma(z)$, take logs and differentiate the resulting series. -$$\Gamma(z) = \dfrac{e^{-\gamma z}}{z} \prod_{n=1}^{\infty} \left(1 + \dfrac{z}{n}\right)^{-1} \ e^{\frac{z}{n}}$$ -You can find more information here: Polygamma Function and Digamma Function.<|endoftext|> -TITLE: Does the integral test work on higher dimensions? -QUESTION [9 upvotes]: The integral test of convergence states that, if $f:[1,+\infty)\to[0,+\infty)$ is a monotonically decreasing nonnegative function, then the series $\sum_1^\infty f(n)$ converges iff $\int_1^\infty f(n) dn$ is finite. -Is the high-dimensional generalization also true? That is, given $f:[1,+\infty)^N \to[0,+\infty)$, and $f(\dotsc,n_i,\dotsc) \ge f(\dotsc,n_i+\epsilon,\dotsc)$ for all $1\le i\le N$ and $n_i\in[1,+\infty)$ and $\epsilon>0$, then the sum -$$ \sum_{n_1=1}^\infty \cdots \sum_{n_N=1}^\infty f(n_1,\dotsc,n_N) $$ -converges iff the multiple integral -$$ \int_1^\infty \cdots \int_1^\infty f(n_1,\dotsc,n_N) dn_1 \dotsm dn_N $$ -is finite. -(This is just for checking if my answer over physics.SE is reasonable.) - -REPLY [6 votes]: It isn't true in general, but the direction you used in your physics.SE answer is. That is, if the sum converges, then the integral does too. The decreasing hypothesis implies that the maximum value of $f$ on the cube $[n_1,n_1+1]\times\cdots\times[n_N,n_N+1]$ is $f(n_1,\ldots,n_N)$, so that the integral over that cube is less than or equal to $f(n_1,\ldots,n_N)$. Adding up the integrals over all such cubes yields the result. -The problem with the other direction is that the function may drop off to zero in some directions rapidly enough to make the integral converge, while staying too big in another direction for the sum to converge. For example, $N=2$, $f(x,y)=1$ if $x=1$, $f(x,y)=0$ otherwise. -With an additional "shift" you can go from integral convergence to sum convergence. The decreasing hypothesis also implies that the minimum value of $f$ on the cube $[n_1,n_1+1]\times\cdots\times[n_N,n_N+1]$ is $f(n_1+1,\ldots,n_N+1)$, so that the integral over that cube is greater than or equal to $f(n_1+1,\ldots,n_N+1)$. By adding up the integrals over all such cubes, this implies that -$$ -\sum_{n_1=2}^\infty \cdots \sum_{n_N=2}^\infty f(n_1,\ldots,n_N) -\leq -\int_1^\infty \cdots \int_1^\infty f(x_1,\ldots,x_N) dx_1 \dotsm dx_N. -$$<|endoftext|> -TITLE: Spreading points in the unit interval to maximize the product of pairwise distances -QUESTION [20 upvotes]: This is prompted by question 15312, but moved to the reals. It must have been solved already. Pick $n$ points $x_i \in [0,1]$ to maximize $\prod_{i < j} |x_i - x_j|$. A little playing shows you don't want them evenly distributed; they need to push out to the ends. With four points, Alpha says to use $\{0,\frac{1}{2}\pm\frac{1}{2\sqrt{5}},1\}$ and with five, $\{0,\frac{1}{2}-\frac{\sqrt{\frac{3}{7}}}{2},\frac{1}{2},\frac{1}{2}+\frac{\sqrt{\frac{3}{7}}}{2},1\}$ - -REPLY [17 votes]: These points are known as Fekete points. A general Fekete problem is to maximize the product -$$\max_{z_1,...,z_n\in E}\prod\limits_{1\leq i < j \leq n}|z_i-z_j|$$ -where $E\subset \mathbb C$. -In the case $E=[-1,1]$, there is a unique solution and the corresponding points coincide with the zeros of $(1-x^2)P'_{n-1}(x)$, where $P_{n-1}$ is the Legendre polynomial of degree $n-1$.
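-This characterization is easy to check numerically; here is a short Python sketch (an illustration using numpy's Legendre class -- the affine map from $[-1,1]$ to $[0,1]$ scales every pairwise distance by the same factor $1/2$, so it carries the maximizer onto the question's interval):
-    import numpy as np
-    from numpy.polynomial.legendre import Legendre
-
-    def fekete_points(n):
-        # zeros of (1 - x^2) * P'_{n-1}(x): the endpoints -1 and 1
-        # together with the critical points of the Legendre polynomial
-        crit = Legendre.basis(n - 1).deriv().roots()
-        return np.sort(np.concatenate(([-1.0, 1.0], crit)))
-
-    for n in (4, 5):
-        print(n, (fekete_points(n) + 1) / 2)   # rescaled to [0, 1]
-For $n=4$ and $n=5$ this reproduces exactly the configurations quoted in the question.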
-I cannot give a precise reference at the moment, but this can probably be found in Szegő's book on orthogonal polynomials.<|endoftext|> -TITLE: Finding standard ellipse characteristics from specific ellipse parametrisation -QUESTION [6 upvotes]: I have found the following ellipse representation $(x,y)=(x_0\cos(\theta+d/2),y_0\cos(\theta-d/2))$, $\theta \in [0,2\pi]$. This is a contour of a bivariate normal distribution with unequal variances and correlation $\rho=\cos(d)$. I know that this is a rotated ellipse with centre $(0,0)$. How do I find the lengths of the major and minor axes, and the angle between the $x$-axis and the major axis? - -REPLY [7 votes]: This formula rescales a standard ellipse $(\cos(\theta + d/2), \cos(\theta - d/2))$ (inscribed within the unit square) by the diagonal matrix $(x_0, y_0)$. By symmetry, the values $\theta = 0$ and $\theta = \pi/2$ correspond to vertices of this standard ellipse, allowing us to find their coordinates (and thus the lengths of the semi-axes), whence we easily deduce its equation is $x^2 + y^2 - 2 \rho x y = 1 - \rho^2$. Applying the diagonal matrix gives the conventional implicit form -$$\left(\frac{x}{x_0}\right)^2 + \left(\frac{y}{y_0}\right)^2 - 2 \rho \frac{x}{x_0} \frac{y}{y_0} = 1 - \rho^2 \text{.}$$ -From here you can look up anything you want. - -REPLY [6 votes]: The semi-major axis will be the distance from the origin to the farthest point on the ellipse. So $r^2=x_0^2\cos^2(\theta+d/2)+y_0^2\cos^2(\theta-d/2)$. As $r^2$ is a monotonic function of $r$, you can maximize it instead, which simplifies things a bit. The angle between the $X$ axis and the major axis will then be the arctangent of $y/x$ at this maximum point. The semi-minor axis will be the distance from the origin to the closest point on the ellipse, and will be $\pi/2$ away from the semi-major axis.<|endoftext|> -TITLE: A helical cycloid? -QUESTION [5 upvotes]: While combing around my notes looking for other possible examples for this question, I chanced upon another one of my unsolved problems: -Cycloidal curves are curves generated by a circle rolling upon a plane or space curve. It's not too hard to derive the required parametric equations if the fixed curve is a plane curve, but I've had some trouble deriving the corresponding expression for space curves. -More specifically, here is the particular problem I was concerned with: consider a (cylindrical) helix: -$$\begin{align*}x&=a\cos\;t\\y&=a\sin\;t\\z&=ct\end{align*}$$ -and imagine a circle of radius $r$ whose plane is always perpendicular to the x-y plane rolling on the helix, starting at the point $(a,0,0)$ ($t=0$). Imagine a point in the plane of the circle at a distance $hr$ from the center. What are the parametric equations for the locus of the point? -The two obvious pieces of information I have are that the center of the circle also traces a helix, whose parametric equation differs from the original by a vertical shift of $r$ (per Tony, that was an erroneous assumption), and that the expression for the arclength of the helix, $s=\sqrt{a^2+c^2}t$, should figure into the final parametric equations. Otherwise, I'm not sure how to start. -How does one derive parametric equations for the "helical cycloid"? - -The physical model I had in mind was a screw ramp winding around a cylinder. Supposing that there was a car that needed to go to the top of the cylinder by driving on the ramp, and supposing that a spot is placed on one of the car's wheels, what are the equations for the locus of the spot?
- -REPLY [3 votes]: I like to think of helices on cylinders as images of lines under planar curling, so here's an approach incorporating that idea. -Imagine the circle rolling up a line drawn in the plane, with $P$ the point of tangency along the line and $Q$ a distinguished point on the circumference of the circle. (So, $Q$ traces out a cycloid in the plane.) Now, imagine the plane is made of thin paper, but the circle is made of stiff cardboard. If we curl the plane into a cylinder but the circle remains flat, the plane of the circle will be tangent to the cylinder along the (vertical) line passing through the point $P$. The point $P$ will follow a helix, and the point $Q$ --which lies in the plane of the circle-- traces out the helical cycloid in space. While the path that $P$ (and $Q$) takes is decidedly different after curling than before, the displacement vector between $P$ and $Q$ in the plane of the circle is the same throughout. -Before curling, if the circle of radius $r$ rolls along a horizontal track in the $uv$-plane, then we have $P=(r t, 0)$, and the point $Q$ traces out the standard cycloid: -$$\begin{align} -u &= r ( t - \sin t ) \\ -v &= r ( 1 - \cos t ) -\end{align}$$ -The displacement vector from $P$ to $Q$ at time $t$ is given by -$$d := PQ = [m, n] = r[-\sin t, 1 - \cos t ] = - 2 r \sin\frac{t}{2} \; [\cos\frac{t}{2},-\sin\frac{t}{2}]$$ -Rotating the plane through an angle, say, $\theta := \rm{atan\frac{c}{a}}$, and writing $b$ for $\sqrt{a^2+c^2}$, the circle's point of tangency along the tilted track is given by ... -$$P_2 = (r t \cos\theta,r t \sin\theta) = \left( \frac{r t a}{b}, \frac{r t c}{b}\right)$$ -... and the displacement vector, $d_2$, to the point $Q_2$ tracing out the tilted cycloid is given by ... -$$\begin{align} -d_2 :&= [m \cos\theta - n \sin \theta, m \sin\theta + n \cos \theta ] \\ -&= \frac{1}{b} [a m - c n, c m + a n] \\ -&= -\frac{2r}{b}\sin\frac{t}{2}\left[\cos\left(\theta-\frac{t}{2}\right), \sin\left(\theta - \frac{t}{2} \right) \right] \\ -\end{align}$$ -Now the fun part: Curl the $(u,v)$-plane around a cylinder of radius $a$, such that the $uv$-origin aligns with the $xyz$-point $(a,0,0)$ and the $v$-axis runs parallel to the $z$ axis. The curled, tilted track will coincide with a helix. The horizontal distance travelled by $P_2$ in the plane becomes the length of horizontal circular arc travelled by $P_3$ around the cylinder; upon dividing by the radius, $a$, of the cylinder, this length becomes the angular "distance" --$s := \frac{rt}{b}$-- travelled by $P_3$; the vertical distances match. Therefore: -$$P_3 = \left(a\cos s, a\sin s, \frac{rtc}{b} \right)=(a\cos s,a\sin s,c s)$$ -As for the displacement vector: The $u$ direction of the tangent plane coincides with the horizontal vector tangent to the cylinder at $P_3$; the $v$ direction coincides with the $z$ direction. 
Thus, the transformation from $uv$-coordinates to $xyz$-coordinates is given by -$$[1,0]\to[-\sin s, \cos s, 0]\hspace{0.5in}[0,1]\to[0,0,1]$$ -The image, $d_3$, of the displacement vector $d_2$, then, is -$$d_3 = -\frac{2r}{b}\sin\frac{t}{2}\left[-\sin s \cos\left(\theta-\frac{t}{2}\right), \cos s \cos\left(\theta-\frac{t}{2}\right), \sin\left(\theta - \frac{t}{2} \right) \right] $$ -and the path of $Q_3$, which traces the helical cycloid, is given by -$$ -Q_3 = P_3 + d_3 = \left\{ -\begin{align} -x &= a \cos s &+ \frac{2 r}{b} \sin s \sin\frac{bs}{2r} \cos\left(\theta-\frac{bs}{2r}\right)\\ -y &= a \sin s &- \frac{2 r}{b} \cos s \sin\frac{bs}{2r} \cos\left(\theta-\frac{bs}{2r}\right)\\ -z &= c s &- \frac{2 r}{b} \sin\frac{b s}{2r}\sin\left(\theta-\frac{bs}{2r}\right) -\end{align} -\right. -$$ -Here's a picture with $a=c=1$ and $r=1/2$: - -Note: The above does not simply curl the planar cycloid around the cylinder. Since -$$x^2 + y^2 = a^2 + \frac{4r^2}{b^2} \sin^2\frac{bs}{2r} \cos^2\left(\theta-\frac{bs}{2r}\right) \ge a^2$$ -we see that most of the helical cycloid lies outside the surface of the cylinder.<|endoftext|> -TITLE: How to show that for any abelian group $G$, $\text{Hom}(\mathbb{Z},G)$ is isomorphic to $G$ -QUESTION [8 upvotes]: How do I show that $\text{Hom}(\mathbb{Z},G)$ and $G$ are isomorphic? -The method suggested by the book is to define a map $k$: $\text{Hom}(\mathbb{Z},G) \to G$ by $f \mapsto f(1)$. -I am stuck on how exactly to show that it is an isomorphism. Could anyone shed some light on this? Is $f(1)$ an element of $G$? -(For those who wanted to know, this is not homework) - -REPLY [9 votes]: Any homomorphism $f$ from $\mathbb{Z}$ to $G$ is determined by its value at 1: indeed $f(n) = f(1)^n$ for any $n\in \mathbb{Z}$, since $f$ is a group homomorphism (I am using multiplicative notation for the group operation on $G$). So it is natural to consider the map suggested by the book: given $f\in \text{Hom}(\mathbb{Z},G)$, evaluate it at 1: $k:f\mapsto f(1)$. It sends $f$ to an element of $G$, namely $f(1)$. -To check that this is a group homomorphism, you need to understand how $\text{Hom}(\mathbb{Z},G)$ is a group in the first place. That's very simple: the group operation is pointwise multiplication: the product of two homs $f$ and $g$ is given by $fg(n) = f(n)g(n)$ (to explain what $fg$ is you need to evaluate it on an arbitrary input, since you are describing a homomorphism). So now, you should have no difficulties verifying that $k$ respects the group structure.<|endoftext|> -TITLE: Optimal algorithm for finding the odd sphere with a balance scale -QUESTION [10 upvotes]: Say we have $N$ spheres indexed as $1,2,3,\dotsc, N$ such that all of them have identical weight apart from one, and we don't know if that one is heavier or lighter. We have to determine which sphere has the odd weight using just a balance scale. -We could solve this problem by weighing repeatedly, but I am interested in a solution involving weighing as few times as possible, so my question is what is the optimal algorithm for this task? - -REPLY [8 votes]: For a complete treatment, we have to consider all the combinations with -some variant parameters. The base problem: given $n$ balls, one being -"deviant" (not the same weight as the $n-1$ others), how to find the -deviant ball in at most $w$ weighs? Or, similarly, what is the maximum -number $n$ of balls for which the deviant ball can always be identified -in at most $w$ weighs? -The variant parameters are: - -The balls may be "marked".
A ball which is "marked heavy" may be -deviant by being heavier, but not lighter. We can consider a variant -problem where all balls are individually marked heavy or light -(independently of each other). The problem in which you know a -priori that the deviant ball is heavier is a subcase of the "marked -balls" problem (i.e. it is the "marked balls" problem where all balls -are marked "heavy"). -You may be asked to identify the deviant ball and to give its -deviance direction (i.e. you also need to find out whether the -deviant ball is heavier or lighter). -Possibly, an extra "standard" ball is given, guaranteed to be -non-deviant. - -We'll first see the "marked balls" problem because it is an essential -step of the full treatment. -Marked balls -First, some important notes: - -With marked balls, identifying the deviant ball implies identifying -the deviance, automatically. -If you have only one marked ball, then the problem is solved with no -weigh at all: if there is only one suspect, then it is the culprit. -If you have two marked balls and they bear distinct marks (one is -marked heavy, the other is marked light), then the problem is -unsolvable -- unless you have a standard ball available, in which -case you just have to make a weigh between that standard ball and one -of the potentially deviant balls. -Each weigh may yield only 3 different results, so with $w$ weighs you -may reach at most $3^w$ distinct conclusions. Thus, the problem -cannot be solved (reliably) if $n \gt 3^w$. - -It so happens that the "marked balls" problem can be solved for all $n$ -up to that $3^w$ limit (assuming the presence of a standard ball if -$n = 2$). This is demonstrated with the following recurrence: - -With $w = 0$ (no weigh at all), you may indeed solve the problem with -$n = 3^0 = 1$ marked ball. -With $w = 1$ and two marked balls with the same mark, just weigh one -against the other; if they have distinct marks, use the extra -standard ball, as explained above. -If $w = 1$ and three marked balls, then two of them (at least) have -the same mark. Weigh one against the other; this yields the result. -If $w \gt 1$ and $3^{w-1} \lt n \leq 3^w$, then you can assemble two -sets of balls of size $\lceil n/3 \rceil$, taking care to put the same -number of "heavy" marks in both sets (which implies that you also put -the same number of "light" marks in both sets). Then weigh one set -against the other. If the balance tilts to the left, then the deviant -ball is one of "heavy" balls from the left scale or one of the -"light" balls from the right scale. If the weigh result is balanced, -then the deviant ball is in the set of balls which you put in neither -set. Either way, you end up with at most $3^{w-1}$ suspect balls, -$w-1$ weighs, and at least one ball which is definitely non-deviant, -i.e. a "standard ball". Recurrence applies. - -Unmarked balls -Consider $n$ unmarked balls. Your first weigh will result in either a -balanced result, or an unbalanced result. If the result is balanced, -then the problem is reduced to that of $w-1$ allowed weighs with all the -balls that did not take part into the first weigh; and the balls used -in the first weigh are now known to be all "standard balls". If the -result is unbalanced, then all balls involved in the weigh are suspect -but can all be marked, so this brings us back to the problem of marked -balls (and the balls which were not used in the first weigh are now -known to be standard). 
-Let's call $f(w)$ the maximum number of unmarked balls which can be -processed in $w$ weighs, assuming that an extra standard ball is -available. For now, we suppose that we want to identify both the ball -and its deviance. -$f(1) = 1$. Indeed, with only one weigh, you get only three conclusions, -so you cannot process two balls, because two balls mean four -possible conclusions (first ball is heavy, first ball is light, second -ball is heavy, second ball is light). With one ball, you just weigh it -against the extra standard ball. -If $w \gt 1$ and you have $n \gt f(w-1)$ balls, then isolate $f(w-1)$ -balls, and split the remainder into two subsets of equal size (if there -is an odd number of remaining balls, add the standard ball). Do a weigh -between these two subsets. If a balanced result is obtained, then -recurrence applies (you have $f(w-1)$ unmarked balls and $w-1$ weighs). -If an unbalanced result is obtained, then all the balls involved in the -first weigh are now "marked" (except of course the extra standard ball, -if it was added). This leads us to the following relation: -$f(w) = f(w-1) + 3^{w-1}$. -Thus, $f(w) = (3^w - 1)/2$ (you can verify it fulfills the recurrence -relation and the starting point $f(1) = 1$). -What if you don't have an extra standard ball to begin with? Then -you will not be able to make the first weigh with $3^{w-1}$ balls, since -that's an odd integer, and a weigh involves an even number of balls. So -you have to use only $3^{w-1}-1$ balls. However, after this first weigh, -you have one or several "standard balls", so you are back to the previous -problem. Thus, unavailability of an extra standard ball means just -decrementing by one the maximum number of processable balls. -What if you are not interested in identifying the actual deviance -direction, but just in finding out which ball is deviant? Then all of -the above still holds, except for the starting point. If you call $g(w)$ -the maximum number of unmarked balls which you can process under these -conditions, then $g(1) = 2$: with two balls, just weigh one against the -extra standard ball; with three balls, you have to include two suspect -balls in the first weigh, but you will not know which is the culprit -since they are unmarked. It follows that $g(w) = f(w) + 1$ for all $w$. -Conclusion -If you are allowed $w$ weighs, then you can find the deviant ball and -its deviance among a maximum of $(3^w-3)/2$ balls. If you are not -interested in the deviance direction but only in identifying the deviant -ball, then you can process one extra ball. If a "standard" extra ball is -available, then you can process one extra ball. These two "one extra -ball" increments are cumulative; thus, with 3 weighs, you can process up -to 12, 13 or 14 balls, depending on whether you have an extra standard -ball, and whether you are interested in the deviance direction. -Extra conditions: if no "standard" extra ball is provided, then the -problem is unsolvable if: - -there are one or two balls and you want the deviance direction; -there are exactly two balls and you are not interested in the -deviance direction. - -Apart from these two edge cases, any number of balls no greater than the -maximum can be processed (there is no "hole").<|endoftext|> -TITLE: Approximating the volume of the Jacobian of a hyperelliptic curve -QUESTION [8 upvotes]: For an abelian variety $A_{/\mathbb{Q}}$, its volume $vol(A(\mathbb{R}))$ appears in the conjectured Birch Swinnerton-Dyer formula for the L-series at 1.
-I am having trouble in understanding the size of this volume, as a function of the (minimal) equations defining the variety. Hence the question: -Say we have a hyperelliptic curve given by $$y^2 = x^{2g+1}+a_{2g}x^{2g}+\ldots +a_0$$ -for simplicity, assume that $a_i\approx M > 0$ for all $i$. What is a good crude estimate for the volume of the Jacobian? -Note: for an elliptic curve this is easy, since there is only one differential and hence only one simple integral. - -REPLY [2 votes]: I assumed above that all coefficients are positive, but this isn't the easiest to compute with, so apply the isomorphism $x \mapsto -x$, and now the coefficients are alternating. The volume is unchanged. -A basis for the differentials is $\{\omega_i : 1\le i \le g\}$, where -$$\omega_i = x^{i-1}\frac{dx}{2y}$$ -For $M$ large, there is a positive root $\alpha_1$ of size $M$, and the other $2g$ roots are small. We now need to figure out a basis for the real homology (loops fixed by complex conjugation). There are $g-1$ independent loops $\gamma_{j+1}$, the $j$-th loop being from $\alpha_{2j}$ to $\alpha_{2j+1}$ and back around the "hole" (for example, if all roots are real, these are simply the paths where the polynomial is positive). One more independent loop is from $\alpha_1$ to $\infty$ and back (being real, this is the easiest to understand). -The volume of the jacobian of the curve is $|\det((\int_{\gamma_j} \omega_i)_{ij})|$. So let's approximate these integrals. -For any $j>1$, since the roots involved are $O(1)$, the loop $\gamma_j$ is close to the origin. In this vicinity the root $\alpha_1$ contributes to $\int_{\gamma_j} \omega_i$ a factor of $M^{-1/2}$, and if we factor it out (recall that we only want a crude approximation), we are left with an integral close to the origin over a function that is relatively small. Hence $\int_{\gamma_j} \omega_i \approx M^{-1/2}$. Note that this is the same for any $i$ since close to the origin $x^{i-1}$ doesn't add much. -For $j=1$, the integral can be approximated as -$$\int_{\gamma_1} \omega_i = 2\int_M^\infty \frac{x^{i-1} dx}{2y} \approx \int_M^{2M} \frac{x^{i-1} dx}{\sqrt{x^{2g+1}+\cdots+a_0}}$$ -The small roots each give the denominator a factor of $M^{1/2}$, so we can approximate further to arrive at: -$$M^{-g} \int_M^{2M} \frac{x^{i-1} dx}{\sqrt{x-M}} \approx M^{1/2+i-1-g}$$ -We have approximated all entries of the matrix. The top row's largest coefficient is of size $M^{-1/2}$, and hence the largest product of a generalised diagonal is $(M^{-1/2})^g$, since the entries in the other rows are also of size $M^{-1/2}$. -The crude approximation for the volume of the jacobian of a hyperelliptic curve given by a polynomial as above with positive coefficients of size $M$, a large number, is: $$Vol(J(\mathbb{R})) \approx M^{-g/2}$$<|endoftext|> -TITLE: What are polar co-ordinates? -QUESTION [11 upvotes]: This is a very basic question that I feel I should know the answer to but haven't been able to think through clearly. -In linear algebra, we learn that the basis of a finite dimensional vector space can be thought of as 'co-ordinates' of that space. And we model what we intuitively understand as the euclidean plane, using the vector space $\mathbb{R^2}$ equipped with the standard inner product and metric etc. The underlying space is taken to be independent of the choice of basis, that is we understand that properties that are inherent to the space are those that will be invariant under change of basis.
-Now, $\mathbb{R^2}$ comes with a canonical basis: this can be understood as saying that given any arbitrary two dimensional vector space and any basis, the vectors $v_i$ of the basis map to $e_i$ under the co-ordinate map. Since we have also introduced inner products and thus a notion of parallel, the intuitive picture we now have of the co-ordinate grid is a criss-cross of lines, and the 'co-ordinates' are called, imaginatively, 'rectangular co-ordinates'. -In school, we also learn about polar 'co-ordinates' of the plane. The associated picture is of concentric circles and rays fanning out of the origin. -However, these 'co-ordinates' do not fit within the 'Basic Linear Algebra' framework (since among other things, $0$ has no unique representation and the functions that change the variables are not linear). One way of seeing the transformation of the 'rulings' of the plane is to consider $\mathbb{R^2}$ as $\mathbb{C}$ and the change as the map $\exp: \mathbb C \to \mathbb C$. - -What is the framework in which the notion of 'co-ordinates' subsumes both these pictures (likewise for cylindrical, spherical, etc in dimension(?) three). My second and connected question: is there a linear algebraic connection for using two numbers to represent points in polar co-ordinates, i.e. is it because the vector space dimension of $\mathbb{E^2}$ is two? My third question is: am I confusing different concepts of 'dimension' here? - -Added: Thanks for all the replies. They are all great but given my continuing dissatisfaction, either I have not really understood the answers or haven't been able to communicate the question properly (or perhaps and quite likely, I don't know what I want to ask). -Now, when we think of a (topological) manifold, we think of some object that is locally euclidean, i.e., we already have a 'handle' on $\mathbb{E^n}$ and want to use our knowledge of being able to do things, such as calculus, on the new object. In my question, I am looking at ways we can 'handle' $\mathbb{E^2}$ itself, so invoking manifolds seems a bit like putting the cart before the horse. I want to say something like this: The point of polar co-ordinates is to represent the plane by looking at it as $S^1\times \mathbb{R}$, where $S^1$ is a basic object like $\mathbb R$. - -REPLY [2 votes]: Another question implicit in what you ask is why use other co-ordinate systems. One reason is that the form of many equations is dramatically simplified in other co-ordinate systems. For example, the rectangular representation of the curve defined by the polar coordinate equation $r = 1 + \cos(2\theta)$ would be much harder to understand than its polar form. $r = 3$ is a circle of radius $3$ in polar coordinates. For modeling certain phenomena, polar coordinates make things much easier, including work involving orbit calculations. -http://en.wikipedia.org/wiki/Orbit_equation<|endoftext|> -TITLE: Meaning of $\mathbb{R}[x]$ -QUESTION [6 upvotes]: I ran into this expression in a paper I was reading, and I'm confused about part of the meaning. Here $u$ and $v$ are two polynomials. -$$u, v \in \mathbb{R}[x]$$ -I'm not really familiar with the usage of $[x]$ here, but if it means "nearest integer", then isn't this expression equivalent to simply: -$$u, v \in \mathbb{Z}$$ - -REPLY [5 votes]: Generally, if $\rm\,R \subset S\,$ are rings and $\rm\,s\in S\,$ then $\rm\,R[s]\,$ denotes the ring-adjunction of $\rm\,s\,$ to $\rm\,R,\,$ i.e.
the smallest subring of $\rm\,S\,$ containing both $\rm\,R\,$ and $\rm\,s\,.\,$ Equivalently $\,\rm R[s]$ is the image of $\rm\,R[x]\,$ under the evaluation map $\rm\,x\mapsto s,\,$ i.e. elements of $\rm\,S\,$ writable as a polynomial in $\rm\,s\,$ with coefficients in $\rm\,R.\,$ -Similarly, if $\rm\,F \subset E\,$ are fields and $\rm\,\alpha\in E\,$ then $\rm\,F(\alpha)\,$ denotes the field-adjunction of $\rm\,\alpha\,$ to $\rm\,F,\,$ i.e. the smallest subfield of $\rm\,E\,$ containing both $\rm\,F\,$ and $\rm\,\alpha.$ -The notation for the polynomial ring $\rm\,R[x]\,$ is the special case where $\rm\,x\,$ is transcendental over $\rm\,R\ $ (an "indeterminate" in old-fashioned language),$\ $ i.e. $\rm\, x\,$ isn't a root of any polynomial with coefficients in $\rm\,R\,$. One may view $\rm\,R[x]\,$ as the most general ring obtained by adjoining to $\rm\,R\,$ a universal (or generic) element $\rm\,x,\,$ in the sense that any other adjunction $\rm\,R[s]\,$ is a ring-image of $\rm\,R[x]\,$ under the evaluation homomorphism $\rm\, x\to s\,.\ $ -For example, if $\rm\,R \subset S\,$ are fields then $\rm\,R[s]\cong R[x]/(f(x))\,$ where $\rm\,f(x)= \,$ minimal polynomial of $\rm\,s\,$ over $\rm\,R.\,$ Essentially this serves to faithfully ring-theoretically model $\rm\,s\,$ as a "generic" root $\rm\,x\,$ of the minimal polynomial $\rm\,f(x)\,$ for $\rm\,s\,.\,$ -Polynomial rings may be characterized by the existence and uniqueness of such evaluation maps ("universal mapping property"), e.g. see any textbook on Universal Algebra, e.g. Bergman.<|endoftext|> -TITLE: Backwards epsilon -QUESTION [39 upvotes]: What does the $\ni$ (backwards element of) symbol mean? It doesn't appear in the Wikipedia list of mathematical symbols, and a Google search for "backwards element of" or "backwards epsilon" turns up contradictory (or unreliable) information. -It seems it can mean both "such that", or "contains as an element". Is this correct, and if so, which is the more common usage? - -REPLY [6 votes]: The fact that $\ni$ is $\in$ written in the opposite direction should also support the idea of those who think that this symbol means "contains as an element".<|endoftext|> -TITLE: Combinatorial proof of a Fibonacci identity: $n F_1 + (n-1)F_2 + \cdots + F_n = F_{n+4} - n - 3.$ -QUESTION [25 upvotes]: Does anyone know a combinatorial proof of the following identity, where $F_n$ is the $n$th Fibonacci number? -$$n F_1 + (n-1)F_2 + \cdots + F_n = F_{n+4} - n - 3$$ -It's not in the place I thought it most likely to appear: Benjamin and Quinn's Proofs That Really Count. In fact, this may be a hard problem, as they say the similar identity -$$ F_1 + 2F_2 + \cdots + nF_n = (n+1)F_{n+2} - F_{n+4} +2$$ -is "in need of a combinatorial proof." -For reference, here (from Benjamin and Quinn's text) are several combinatorial interpretations of the Fibonacci numbers. - -REPLY [21 votes]: Recall that $F_{n+1}$ is the number of ways to tile a board of length $n$ with tiles of length $1$ and $2$. So $F_{n+4}$ is the number of ways to tile a board of length $n+3$ with tiles of length $1$ and $2$. Note that exactly $n+3$ such tilings use at most one tile of length $2$, so $F_{n+4} - (n+3)$ such tilings use at least two tiles of length $2$. -Given such a tiling, look at where the second-to-last tile of length $2$ is used.
The part after this tile is a tiling of some section of length $k+1$ where exactly one tile of length $2$ is used (which can be done in $k$ ways), and the part before this tile is a tiling of the remaining portion of length $n-k$ (which can be done in $F_{n-k+1}$ ways). Sum over $k$. -(The bigger lesson to take away here is that convolution is much easier to deal with than Hadamard product. Also, since the bijection I described above preserves the number of tiles of each type, the identity can be upgraded to an identity of $q$-Fibonacci numbers.)<|endoftext|> -TITLE: How to get the connectedness theorem from the quasi-finite version of ZMT? -QUESTION [6 upvotes]: Let $f: X \to Y$ be a proper morphism of noetherian schemes. If the natural map $\mathcal{O}_Y \to f_*(\mathcal{O}_X)$ is an isomorphism, then a version of Zariski's main theorem states that the fibers $X_y, y \in Y$ are all connected. (The case listed as "Zariski's main theorem" in Hartshorne is the case of $f$ birational and $Y$ normal.) This may be proved via the formal function theorem. -The fancier version of ZMT (EGA IV-8) is that a quasifinite separated morphism of finite presentation between quasicompact, quasiseparated schemes factors as the composite of an open immersion and a finite morphism. -Is there a direct way to deduce the connectedness theorem from the more general ZMT? - -REPLY [4 votes]: I'm not sure that this is directly possible. If memory serves, Zariski's original version of his main theorem showed something like the following: if $f: X \to Y$ is birational with $Y$ normal, and if $y \in Y$ is a point where $f^{-1}$ is not defined (I think this is what Zariski calls a fundamental point), then each component of $f^{-1}(y)$ is positive dimensional. This result can be proved using Grothendieck's form of ZMT, I think: if $f^{-1}(y)$ contains an isolated point, we can choose a n.h. of this point such that the restriction of $f$ to this n.h. has finite fibres, hence is an isomorphism (here we are using Grothendieck's ZMT together with normality of $Y$), and so in fact -$f^{-1}$ can be defined at $y$ after all. (This is quite possibly not quite correct, but hopefully is not totally bogus either, both in the argument and in the claim to some historical accuracy. Also, I think that if you look e.g. in the commutative algebra part of the stacks project a version of ZMT along these lines is discussed; at least, there are results in which the hypothesis of an isolated point in the fibre plays a prominent role.) -Zariski's connectedness theorem came quite a bit later, in his monograph on formal functions, and the techniques were quite a bit more involved. (They were a precursor to formal scheme techniques, in which one can complete in some directions but not in others, as opposed to earlier techniques with complete local rings, in which one completes in every direction around a point at once.) -A quick glance over Mumford's discussion of ZMT in the Red Book (which is always a good place to go to for learning ZMT intuition) suggests that I'm not blundering here. (Grothendieck's formulation of ZMT is what he calls version IV of ZMT, while the connectedness theorem is his version V, and he singles out version V as being "more global" than the other versions, and doesn't discuss any implications from the other versions to this one.) -[On the other hand, this contradicts the wikipedia entry, which suggests that Grothendieck's formulation does imply the connectedness theorem. (But doesn't quite say how.) 
There are plenty of people at Harvard you could ask for clarification on this point, of course ... .]<|endoftext|> -TITLE: Sum of rational numbers -QUESTION [6 upvotes]: The sum of a finite number of rational numbers is of course a rational number, but the sum of an infinite number of rational numbers might be an irrational number. Can someone give me some intuition why this sum might be irrational? I just "don't feel it." - -REPLY [5 votes]: I have thought about this too, as I am also a student of mathematics. -We know that $e$ is an irrational number, and its value is -$$e = 1 + \frac{1}{1} + \frac{1}{2!} + \frac{1}{3!} + \ldots = 1 + \frac{1}{1} + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} + \ldots = 2.7182\ldots$$ -Every term of this series is rational, yet the sum is irrational. So a sum of infinitely many rational numbers may be irrational.<|endoftext|> -TITLE: Is $a_N=\int_1^N x^{-a}e^{ix} dx$ a bounded sequence? -QUESTION [5 upvotes]: Let $0 -TITLE: Does the Langlands program preserve CFT's distinction between local and global theories? -QUESTION [5 upvotes]: This question is vaguely related to: Different formulations of Class Field Theory -As I said there, I'm currently learning class field theory. For some motivation, I've also read a little about Langlands. Everything I've read about that program seems to be a generalization of the global formulation of CFT. Is the local language (and theorems) preserved as well? Are they equivalent obviously, with difficulty, or only conjecturally? - -REPLY [9 votes]: There is a local aspect to the Langlands program, known as the local Langlands correspondence. In fact, Langlands conjectured the existence of such a correspondence for each local field $F$ and each reductive group $G$ over $F$. -He proved his local conjecture when $F$ is archimedean and $G$ is arbitrary. The case when $G = \mathrm{GL}_n$ and $F$ is non-archimedean is now solved (by Laumon, Rapoport, and Stuhler in the function field case, and by Harris and Taylor in the case of $p$-adic fields). There are many results known for other groups $G$ as well, but the full local conjectures for arbitrary $G$ are not yet settled, as far as I know. -In the $\mathrm{GL}_n$ case, the correspondence, roughly speaking, gives a bijection between (typically infinite dimensional!) irreducible representations of the group $\mathrm{GL}_n(F)$ and $n$-dimensional representations of the Galois group of $F$. (So, unlike the abelian case, there is no isomorphism of groups, but rather a certain bijection between certain kinds of representations of rather different groups.) -The case of general $G$ is more involved to state, and indeed, it is not so easy to find the precise conjecture in the literature. In any case, it involves many complications, the most significant of which is probably so-called endoscopy. Note that Ngo won the Fields medal this summer for his work on this topic. -Just as with local CFT, the motivations for the local conjecture come from the global theory. On the one hand, representations of matrix groups over local fields arise naturally in the theory of automorphic forms (this is a generalization/reformulation of the classical theory of Hecke operators in the theory of modular forms), and, on the other hand, (at least certain) automorphic Hecke eigenforms are supposed to be related to representations of Galois groups of global fields by global reciprocity laws. Restricting these global Galois representations to decomposition groups, one should recover the conjectural local correspondence.
(And as I noted in my answer to your earlier question, this is in fact the mechanism by which those local correspondences are normally constructed, in the cases when they can be constructed.) -Finally, I'm not sure if I understand your last question about equivalence correctly, but if you are asking whether the existence of the global Langlands correspondence (whatever exactly that means) should follow from the local correspondence, the answer is no. Consider the case of CFT: knowing all the local Artin maps lets you write down the global Artin map, but you still have to prove the global Artin reciprocity law. Similarly, knowing the local Langlands correspondence allows one to write down a candidate for the global correspondence, but to prove that this candidate actually does the job is another, even more difficult, matter. -(As one example: before the modularity theorem for elliptic curves was proved, people knew how to write down the -candidate $q$-expansion that should be the weight $2$ modular form attached to an elliptic curve over $\mathbb Q$; this was because the relevant local issues were all completely understood. The problem was then to prove that this actually was a weight $2$ modular form; this was a global issue, which was completely open until Wiles and Taylor, Breuil, Conrad and Diamond solved it.)<|endoftext|> -TITLE: How to understand compactness? -QUESTION [19 upvotes]: How can one understand compactness in a topological space in an intuitive way? - -REPLY [7 votes]: The way to understand compactness is to see it in action. As you learn more, you'll see more and more situations in which compactness is useful, even fundamental. With the accumulation of evidence, like geological layers, you will construct understanding. Then one day you'll come across a new way of using compactness, a new angle, and then you will see that in fact you had only understood part of it... and this should keep going. -I am a firm believer that asking for understanding and, much worse, for intuitive understanding of things when one more or less has just encountered them is not the correct way to reach understanding.<|endoftext|> -TITLE: What are some examples of $\text{Isom}(M)$ and $\text{Conf}(M)$? -QUESTION [9 upvotes]: Edit: -Since I did not quite get the responses I would have liked when I asked this question four months ago, let me reformulate it slightly: - -What are some examples of $\text{Isom}(M)$ and $\text{Conf}(M)$? - -For example, Aaron mentions in his (very helpful) answer that $\text{Isom}(\mathbb{S}^n) \cong O(n+1)$. What about hyperbolic n-space? Or the n-torus? The more examples the better. -For precision, I am hoping that we can express $\text{Isom}(M)$ or $\text{Conf}(M)$ as some sort of "recognizable" Lie group, by which I mean a product, quotient, and/or connected sum of linear groups, euclidean spaces, or spheres. - -Original Question: (What are some examples of automorphism groups of manifolds which turn out to be Lie groups?) -I recently read that the group of diffeomorphisms of a smooth manifold which preserve some sort of geometric structure (e.g. Riemannian structure, conformal structure, etc.) frequently turns out to be a Lie group. What are some examples of this? -I've read about the euclidean group $E(n)$, which consists of the isometries of $\mathbb{R}^n$, and also about the conformal automorphisms of the complex plane, upper half plane, and unit disc. What others are there? Do the groups Diff(M), Iso(M), etc. frequently turn out to be something recognizable?
What are the isometry groups or conformal groups of n-spheres, say, or some common 2-manifolds? - -REPLY [3 votes]: $\text{Isom}(\mathbb{S}^n) \cong O(n+1)$ -$\text{Isom}(\mathbb{R}^n) \cong E(n)$ -$\text{Isom}(\mathbb{H}^n) \cong O(n,1)$ -$\text{Conf}(\mathbb{S}^2) \cong \text{PGL}(2,\mathbb{C}) \cong \text{PSL}(2,\mathbb{C})$ -$\text{Conf}(U) \cong \text{PSL}(2,\mathbb{C})$ -Sources: "Riemannian Manifolds: An Introduction to Curvature" (Lee) (Chapter 3), and my own understanding of Wikipedia (and my complex analysis lecture notes).<|endoftext|> -TITLE: Combinatorial Identity $(n-r) \binom{n+r-1}{r} \binom{n}{r} = n \binom{n+r-1}{2r} \binom{2r}{r}$ -QUESTION [8 upvotes]: Show that $(n-r) \binom{n+r-1}{r} \binom{n}{r} = n \binom{n+r-1}{2r} \binom{2r}{r}$. -In the LHS $\binom{n+r-1}{r}$ counts the number of ways of selecting $r$ objects from a set of size $n$ where order is not significant and repetitions are allowed. So you have $n$ people; you form $r$ teams, select $r$ captains, and select $(n-r)$ players. -The RHS divides up a team into 2 sets? - -REPLY [5 votes]: Let $S$ be a set of $n+r-1$ elements. Both sides count the number of ways to select two disjoint sets $A,B\subseteq S$ of size $r$ and possibly an element $c\in S \setminus B$. -We first observe that $(n-r)\binom{n}{r}=n\binom{n-1}{r}$ as both sides count the number of ways to form a team of size $r+1$ with a captain out of $n$ people. -Applying the above to the original LHS we get $n\binom{n-1}{r}\binom{n+r-1}{r}$, which corresponds to selecting $A$ ($r$ out of $n+r-1$), then $B$ ($r$ out of the remaining $n-1$) and $c$ ($n-1$ choices in $S\setminus B$ and one option of not choosing $c$). -The RHS argument goes as follows: choose $2r$ elements for both $A$ and $B$, then choose $r$ of them to make $B$. Then, as before, there are $n$ options of choosing $c\in S\setminus B$ or none at all.<|endoftext|> -TITLE: Combinatorial Proof of $\binom{\binom{n}{2}}{2} = 3 \binom{n}{3}+ 3 \binom{n}{4}$ for $n \geq 4$ -QUESTION [14 upvotes]: For $n \geq 4$, show that $\binom{\binom{n}{2}}{2} = 3 \binom{n}{3}+ 3 \binom{n}{4}$. -LHS: So we have a set of $\binom{n}{2}$ elements, and we are choosing a $2$ element subset. -RHS: We are choosing a $3$ element subset and a $4$ element subset (each from a set of $n$ elements). But we multiply by $3$ by the multiplication principle for some reason. - -REPLY [14 votes]: LHS: The $\binom{n}{2}$ is the number of pairs you can form from $n$ distinct elements, so the LHS counts the number of ways to choose two distinct pairs. -RHS: Notice that you can choose two pairs that have a common element (but only one). If the two pairs are disjoint, then you need to choose four elements and then ask how you pair them. If the pairs have a common element, then you need to choose only three elements and then choose which is the common element.<|endoftext|> -TITLE: What is the shortest string that contains all permutations of an alphabet? -QUESTION [50 upvotes]: What is the shortest string $S$ over an alphabet of size $n$, such that every permutation of the alphabet is a substring of $S$? -Edit 2019-10-17: -This is called a superpermutation, and there is a wikipedia page that keeps track of the best results. Turns out the problem is still open. - -REPLY [9 votes]: I researched this question 20 years ago and found the length of the shortest string containing all the permutations of $n$ objects to be as stated in http://www.notatt.com/permutations.pdf.
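-(For very small alphabets this kind of exhaustive search is easy to reproduce. The following Python sketch is only an illustration of the idea -- it is not the program described below -- doing iterative deepening on the target length and pruning with the fact that each appended character can complete at most one new permutation.)
-    from itertools import permutations
-
-    def min_superperm_len(n):
-        # length of a shortest string over {1..n} containing every
-        # permutation as a substring; practical only for n <= 3
-        alpha = ''.join(str(i) for i in range(1, n + 1))
-        targets = {''.join(p) for p in permutations(alpha)}
-
-        def extend(s, missing, budget):
-            if not missing:
-                return True                # all permutations seen
-            if budget < len(missing):      # <= 1 new permutation per char
-                return False
-            for c in alpha:
-                t = s + c
-                tail = t[-n:]
-                rest = missing - {tail} if tail in missing else missing
-                if extend(t, rest, budget - 1):
-                    return True
-            return False
-
-        length = n                         # trivial lower bound
-        while True:
-            # first symbol may be fixed by relabeling
-            if extend(alpha[0], targets - {alpha[0]}, length - 1):
-                return length
-            length += 1
-
-    print(min_superperm_len(2))   # 3, e.g. "121"
-    print(min_superperm_len(3))   # 9, e.g. "123121321"
-(Already for $n = 4$, where the minimal length is $33$, this naive search is hopeless.)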
We created a computer algorithm to generate all possible strings containing all permutations of n objects and proved this minimal length through brute force for alphabets up to 11 objects. We never could find a proof that our algorithm generated the shortest strings for any n and I would love for someone to pick this subject up. I've found that most mathematicians disregard this topic as already done when in fact, upon close examination, it has not been proven. If anyone knows of such a proof, please pass it along. You can find our paper at Minimal Superpermutations, Ashlock D., and J. Tillotson, Congressus Numerantium 93(1993), 91-98.<|endoftext|> -TITLE: Making Change for a Dollar (and other number partitioning problems) -QUESTION [32 upvotes]: I was trying to solve a problem similar to the "how many ways are there to make change for a dollar" problem. I ran across a site that said I could use a generating function similar to the one quoted below: - -The answer to our problem (293) is the - coefficient of $x^{100}$ in the reciprocal - of the following: -$(1-x)(1-x^5)(1-x^{10})(1-x^{25})(1-x^{50})(1-x^{100})$ - -But I must be missing something, as I can't figure out how they get from that to $293$. Any help on this would be appreciated. - -REPLY [4 votes]: For a closed-form formula for the question (given $n$ cents, how many ways can I make change using pennies, nickels, dimes, and quarters?), see either: -Making change of n cents by William Gasarch -http://arxiv.org/abs/1406.5213 -This just uses recurrences. -OR -Graham-Knuth-Patashnik in their book Concrete Mathematics got a closed form using generating functions. (Actually they did pennies, nickels, dimes, quarters, half-dollars.) An online exposition of this is at -http://www.cs.umd.edu/~gasarch/BLOGPAPERS/knuthchange.pdf<|endoftext|> -TITLE: Riddle (simple arithmetic problem/illusion) -QUESTION [5 upvotes]: I'm not sure how well known this "riddle" is but here it goes. -3 people go to a restaurant, each buys food worth 10.00. When they're done, they give 30.00 to the waitress. She gives the money to the manager, manager says they paid too much, gives 5.00 back to the waitress. She comes back to the table, gives a dollar back to each person and then puts the remaining 2.00 in her pocket for tip. -So now, each person has paid 9.00. 9.00 x 3 = 27.00. Plus the 2.00 in the waitress's pocket is 29.00. What happened to the 30th dollar? -So what's the issue with this way of calculating (since if you work it backward it works fine), that doesn't give us the correct result? - -REPLY [10 votes]: I am not sure how 'mathematical' the riddle part of this story is. :) Each guest paid \$9 because together they paid \$30 and got back \$3, for a net \$27. Of those \$27, the manager got \$25, and the waitress got the other \$2 -- there is no sense/reason in adding \$27 to \$2 -- though you might have a gut feeling that you are headed in the 'right direction' of getting the initial \$30 that way. But take a look -- there really is no extra dollar left, that's all it should be -- \$25+\$2. In other words, each guest paid 9 dollars with the tip included into that. -Another, possibly more insightful way to look at things: -The guests initially paid 30 dollars. The waitress returned them 3 dollars. So the guests ended up paying \$27 (not thirty!). Of those 27 dollars, 2 dollars were pocketed as a tip by the waitress, so you could also say that they paid \$8.333... each and then added a two dollar tip.
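-(The bookkeeping can be spelled out in a few lines of Python -- nothing beyond the arithmetic above:)
-    paid = 3 * 10        # the guests hand over $30
-    refund = 3 * 1       # $3 comes back, $1 per guest
-    tip = 2              # kept by the waitress
-    meal = 25            # kept by the manager
-
-    net = paid - refund             # 27: what the guests actually paid
-    assert net == meal + tip        # 27 = 25 + 2, the sensible split
-    assert paid == net + refund     # 30 = 27 + 3, where the $30 went
-    print(net + tip)                # 29: the riddle's meaningless sum
-Adding the tip to the net payment double-counts it; the decomposition that accounts for the \$30 is $27+3$, not $27+2$.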
I think it's a question of order of operations (and using it with respect to appropriate quantities) that could be confusing here. -Punchline: the waitress' 2 dollars were part of 27 dollars paid by the customers. They saved the other 3 dollars of the initial \$30 because of what the manager said, so if you are wondering 'what happened to \$30?' it's more of 27+3, or 25+2+3, where \$25 is what they paid without a tip, \$2 is the tip, and \$3 is the amount they saved.<|endoftext|> -TITLE: invariance of dimension under diffeomorphism of real subspaces -QUESTION [5 upvotes]: This is a problem from Arnold's book on ODEs I cannot solve. -Prove that if $f:U\to V$ is a diffeomorphism, then the Euclidean spaces with the domains $U$ and $V$ as subsets have the same dimension. -Hint. Use the implicit function theorem. -Thanks. - -REPLY [6 votes]: Thanks for the comments. After giving it more thought I was able to solve it on my own. Here's what I did. -Suppose $U\subset \mathbb{R}^{n+m},V\subset \mathbb{R}^m$ and $n>0$ (if $n<0$ just consider $f^{-1}$ instead of $f$ in what follows). We have -$f'=\left( -\begin{array}{cccccc} - \frac{\partial f_1}{\partial x_1} & \ldots & \frac{\partial f_1}{\partial x_n} & \frac{\partial f_1}{\partial y_1} & \ldots & \frac{\partial f_1}{\partial y_m} \\ - \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ - \frac{\partial f_m}{\partial x_1} & \ldots & \frac{\partial f_m}{\partial x_n} & \frac{\partial f_m}{\partial y_1} & \ldots & \frac{\partial f_m}{\partial y_m} -\end{array} -\right)=\left( -\begin{array}{cc} - \frac{\partial \left(f_1,\ldots ,f_m\right)}{\partial \left(x_1,\ldots ,x_n\right)} & \frac{\partial \left(f_1,\ldots ,f_m\right)}{\partial \left(y_1,\ldots ,y_m\right)} -\end{array} -\right)$ -$\left(f^{-1}\right)'=\left( -\begin{array}{ccc} - \frac{\partial x_1}{\partial f_1} & \ldots & \frac{\partial x_1}{\partial f_m} \\ - \ldots & \ldots & \ldots \\ - \frac{\partial x_n}{\partial f_1} & \ldots & \frac{\partial x_n}{\partial f_m} \\ - \frac{\partial y_1}{\partial f_1} & \ldots & \frac{\partial y_1}{\partial f_m} \\ - \ldots & \ldots & \ldots \\ - \frac{\partial y_m}{\partial f_1} & \ldots & \frac{\partial y_m}{\partial f_m} -\end{array} -\right)=\left( -\begin{array}{c} - \frac{\partial \left(x_1,\ldots ,x_n\right)}{\partial \left(f_1,\ldots ,f_m\right)} \\ - \frac{\partial \left(y_1,\ldots ,y_m\right)}{\partial \left(f_1,\ldots ,f_m\right)} -\end{array} -\right)$ -$\left(f^{-1}\right)'\cdot f'=\left( -\begin{array}{cc} - \frac{\partial \left(x_1,\ldots ,x_n\right)}{\partial \left(f_1,\ldots ,f_m\right)}\cdot \frac{\partial \left(f_1,\ldots ,f_m\right)}{\partial \left(x_1,\ldots ,x_n\right)} & \frac{\partial \left(x_1,\ldots ,x_n\right)}{\partial \left(f_1,\ldots ,f_m\right)}\cdot \frac{\partial \left(f_1,\ldots ,f_m\right)}{\partial \left(y_1,\ldots ,y_m\right)} \\ - \frac{\partial \left(y_1,\ldots ,y_m\right)}{\partial \left(f_1,\ldots ,f_m\right)}\cdot \frac{\partial \left(f_1,\ldots ,f_m\right)}{\partial \left(x_1,\ldots ,x_n\right)} & \frac{\partial \left(y_1,\ldots ,y_m\right)}{\partial \left(f_1,\ldots ,f_m\right)}\cdot \frac{\partial \left(f_1,\ldots ,f_m\right)}{\partial \left(y_1,\ldots ,y_m\right)} -\end{array} -\right)=\left( -\begin{array}{cc} - I_{(n\times n)} & 0 \\ - 0 & I_{(m\times m)} -\end{array} -\right)=I_{(n+m\times n+m)}$ -It follows that -$\frac{\partial \left(y_1,\ldots ,y_m\right)}{\partial \left(f_1,\ldots ,f_m\right)}\cdot \frac{\partial \left(f_1,\ldots ,f_m\right)}{\partial \left(y_1,\ldots ,y_m\right)}=I_{(m\times 
m)}$
-Since the Jacobian matrix $\frac{\partial \left(f_1,\ldots ,f_m\right)}{\partial \left(y_1,\ldots ,y_m\right)}$ is not singular, we can apply the implicit function theorem. It follows that there is a function $g:A\to B$, where $A\subset \mathbb{R}^n,B\subset \mathbb{R}^m$ are open subsets and $A\times B\subset U$, such that $f(x,g(x))=\text{const}$ for all $x\in A$. It is then clear that $f$ cannot be bijective.<|endoftext|>
-TITLE: Arithmetic on $[0,\infty]$: is $0 \cdot \infty = 0$ the only reasonable choice?
-QUESTION [8 upvotes]: On page 18 of Rudin's Real and Complex analysis he defines $0 \cdot \infty = 0$ and says that "with this definition the commutative, associative, and distributive laws hold in $[0,\infty]$ without any restriction".
-What is not clear to me is whether the quoted statement is a justification of the definition or just a consequence. Wouldn't the commutative, associative, and distributive laws also hold if we define $0 \cdot \infty = \infty$?
-
-REPLY [8 votes]: It may seem strange to define $0\cdot\infty=0$. However, one verifies without difficulty that with this definition the commutative, associative, and distributive laws hold in $[0,\infty]$ without any restriction.
-
-The way this is worded leads naturally to your question, as though Rudin were implying that this is the main justification for defining $0\cdot\infty$ in this way. Rather, I see this as a bonus after making the convention consistent with what happens when integrating the $0$ function or integrating over a space of measure $0$, as KCd's comment indicates.
-If you ask yourself what the possibilities are, you can start by supposing that $0\cdot\infty=x$ for some $x\in[0,\infty]$, and apply the distributive law to see that $x=2x$, so that $x=0$ or $x=\infty$. You can then verify that with either convention the commutative, associative, and distributive laws will hold, so something more is needed to motivate the choice. Such a choice will always depend on context, and in some cases it won't be a good idea to even define $0\cdot\infty$. However, I am not aware of a mathematical context in which the convention $0\cdot\infty=\infty$ is useful.<|endoftext|>
-TITLE: Permutation Identity and Sum
-QUESTION [5 upvotes]: Show that $\displaystyle 1+ \sum\limits_{k=1}^{n} k \cdot k! = (n+1)!$
-RHS: This is the number of permutations of an $n+1$ element set. We can rewrite this as $n!(n+1)$.
-LHS: It seems that the $k \cdot k!$ has a similar form to $(n+1)! = (n+1)n!$ Also we can write $1 = 0!$ I think the multiplication principle is being used here (e.g. the permutations of a $k$ element set multiplied by $k$).
-Note that a combinatorial proof is wanted (not an algebraic one).
-
-REPLY [7 votes]: Here is a brief description of one combinatorial way to approach this. Suppose we are permuting $\{1,2,3,\ldots,n+1\}$. One permutation is $P=(1,2,3,\ldots,n+1)$. Any other permutation has a first position from the left which differs from $P$.
-Let $S_1$ be the set of permutations that first differ from $P$ in position $n$. Let $S_2$ be the set of permutations that first differ from $P$ in position $n-1$. Continue this until we get to $S_n$, which is the set of permutations that first differ from $P$ in position $1$.
-When we count the size of $S_i$, we can convince ourselves that it is $(i+1)!-i! = i \cdot i!$. This is the part I'm skipping over.
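-(Not part of the combinatorial argument -- just a quick Python sanity check of the identity itself, added for anyone who wants to see the numbers:)
-
-from math import factorial
-
-# Check 1 + sum_{k=1}^{n} k * k! == (n+1)! for small n.
-for n in range(1, 10):
-    lhs = 1 + sum(k * factorial(k) for k in range(1, n + 1))
-    assert lhs == factorial(n + 1), n
-print("identity holds for n = 1..9")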
Also counting $P$ then gives us the left hand side.<|endoftext|>
-TITLE: Number of even and odd subsets
-QUESTION [5 upvotes]: Suppose we have the following two identities:
-
-$\displaystyle \sum_{k=0}^{n} \binom{n}{k} = 2^n$
-$\displaystyle \sum_{k=0}^{n} (-1)^{k} \binom{n}{k} = 0$
-
-The first says that the number of subsets of an $n$-set is $2^n$. The second says that the number of subsets of even size equals the number of subsets of odd size (of an $n$-set). Thus there are $2^{n-1}$ subsets of even size and $2^{n-1}$ subsets of odd size?
-To combinatorially prove the second identity, let $A$ be a $k$-subset of $[n]$. Then note whether $k$ is odd or even?
-
-REPLY [3 votes]: From another point of view you have:
-$${\left( {1 + 1} \right)^n} =2^n= \sum\limits_{k = 0}^n {n \choose k}{{1^{n - k}}{1^k}}=\sum\limits_{k = 0}^n {n \choose k} $$
-and
-$${\left( {1 - 1} \right)^n} = 0 = \sum\limits_{k = 0}^n {n \choose k}{{1^{n - k}}{{\left( { - 1} \right)}^k}} = \sum\limits_{k = 0}^n {\left( { - 1} \right)}^k{n \choose k}$$<|endoftext|>
-TITLE: Is $[0;1]$ equinumerous to $\mathbb{R}^{\omega}$?
-QUESTION [6 upvotes]: This article on Wikipedia states the following:
-
-Cardinal arithmetic can be used to show not only that the number of points in a real number line is equal to the number of points in any segment of that line, but that this is equal to the number of points on a plane and, indeed, in any finite-dimensional space.
-
-I was wondering why it is the case for finite-dimensional spaces, but not for $\mathbb{R}^{\omega}$. Can't a PMI proof of a bijection between $[0;1]$ and
-$\mathbb{R}^{\omega}$ be established, or would that proof be showing that one can establish a bijection between $[0;1]$ and $\mathbb{R}^n$ for any given $n\in \mathbb{N}$ but not $\mathbb{R}^{\omega}$? Is there something special about infinite-dimensional $\mathbb{R}^{\omega}$ that makes it have more points than $\mathbb{R}$ or any segment on $\mathbb{R}$? Thanks a lot.
-
-REPLY [12 votes]: Whoever told you that $[0, 1]$ is smaller than $\mathbb{R}^{\omega}$ is wrong; they both have the same cardinality as $\{ 0, 1 \}^{\omega} = \{ 0, 1 \}^{\omega \cdot \omega}$. By the way, something about your post suggests to me that you are not convinced that this set has a rigorous definition, and it does: $\mathbb{R}^{\omega}$ is precisely the set of functions from $\omega$ to $\mathbb{R}$.
-There are several other ways one might imagine putting infinitely many copies of $\mathbb{R}$ together. The above is the infinite direct product; there is also the infinite direct sum $\bigoplus_{n=1}^{\infty} \mathbb{R}$, which consists of all sequences of real numbers which are eventually zero; unlike the infinite direct product, the infinite direct sum is countable-dimensional. And also one can consider the space $c_0$ of sequences of real numbers which converge to zero; this is not countable-dimensional but has nice topological properties.
-
-REPLY [4 votes]: The cleanest way of seeing the bijection from $\mathbb{R}^{\omega}$ to $\mathbb{R}$ is probably by taking advantage of your favorite bijection from $\mathbb{R}$ to $\left[0,1\right]$, then treating the numbers in that range as a sequence of decimals, 'stacking' them to get a $\omega\times\omega$ grid of digits, and using a diagonal sweep to compose them into a single decimal; so, for instance, if the first three elements of your sequence of reals were .314159, .271828, and .161803, then take the first digit of the first, then the second digit of the first and the first digit of the second, then the third digit of the first, the second digit of the second, and the first digit of the third, continuing to interleave this way: .312471... It's easy to convince yourself that this is a bijection; you can go the other way by writing the digits of your single number one by one diagonally into a grid, and then taking the result as a series of reals.<|endoftext|>
-TITLE: Why is the Möbius strip not orientable?
-QUESTION [82 upvotes]: I am trying to understand the notion of an orientable manifold.
-Let M be a smooth n-manifold. We say that M is orientable if and only if there exists an atlas $A = \{(U_{\alpha}, \phi_{\alpha})\}$ such that $\textrm{det}(J(\phi_{\alpha} \circ \phi_{\beta}^{-1}))> 0$ (where defined). My question is:
-Using this definition of orientation, how can one prove that the Möbius strip is not orientable?
-Thank you!
-
-REPLY [17 votes]: A simple answer that IMO is easy to justify using your definition of orientation goes like this.
-Given any manifold $M$ and a point $p \in M$ there is a homomorphism $O : \pi_1(M,p) \to \mathbb Z_2$ and the idea is this: if $\phi : [0,1] \to M$ is a path such that $\phi(0)=\phi(1)=p$, given any basis for the tangent space to $M$ at $p$, $T_pM$ you can parallel transport that basis along the path, and you'll get a second basis for the tangent space at $\phi(1)=p$, $T_pM$. And you can ask, is the change-of-basis map from your 1st to your 2nd basis for $T_pM$ orientation-preserving -- i.e. is the determinant of that linear transformation positive? If it is, define $O(\phi)=0$; if the determinant is negative, define $O(\phi)=1$.
-Fact: the path-component of $p$ in the manifold $M$ is orientable if and only if $O$ is the zero function, $O=0$. You prove it by cutting your path $\phi$ into small segments and comparing orientations within charts -- the key analytical step is the intermediate value theorem, using that determinant is a continuous function of matrices.
-Of course, in this discussion "parallel transport" assumes a Riemann metric but you don't really need a Riemann metric for this argument to work. The parallel transport of vectors along a path $\phi$ simply means continuously-varying vectors such that the vector corresponding to $t \in [0,1]$ is always tangent to the manifold, i.e. elements of $T_{\phi(t)} M$. And of course if you're transporting $n$ vectors you demand that these $n$ vectors always form a basis for $T_{\phi(t)}M$.
-And in the case of the Moebius band, given any concrete model of the Moebius band you transport a basis along any path that goes once around the band and $O(\phi)=1$.<|endoftext|>
-TITLE: How should I think about what it means for a manifold to be orientable?
-QUESTION [22 upvotes]: Let M be a smooth manifold.
We say that M is orientable if and only if there exists an atlas $A = \{(U_{\alpha}, \phi_{\alpha})\}$ such that for all $\alpha, \beta$, $\textrm{det}(J(\phi_{\alpha} \circ \phi_{\beta}^{-1})) > 0$ (where defined). I'm struggling to understand the reason this definition is made. My question is: What is the intuitive reason for this definition of an orientable manifold?
-Thank you!
-
-REPLY [5 votes]: I think it is very hard to get a better answer than any of the above. But there is a perspective that has not been mentioned, at least not explicitly. First, I would like to say that I happen to agree with Mariano's answer.
-The body of your question is much more explicit than your title, so I will address the question in your title since it seems others have addressed the one in the body of your question. I like to think of orientations, or in fact most structures on a manifold, in terms of the tangent bundle on the manifold in question. The tangent bundle on a compact n dimensional manifold $M$ is classified (up to isomorphism) by a homotopy class of maps $M \to BO(n)$. If we can lift this map over the map $BSO(n) \to BO(n)$ then the bundle is orientable, and hence so is the manifold. Here the notion of orientability, our structure, is really a fact about the tangent bundle. We can talk about almost complex structures or spin structures in a similar way. The map $BSO(n) \to BO(n)$ is induced by the inclusion $SO(n) \to O(n)$.
-Another way is to look at $\Lambda^n TM$, the top exterior power of the tangent bundle of M. Then an orientation is a nowhere zero continuous section of this bundle. Similarly, you can think of an orientation as a choice of generator of the top dimensional integral homology or cohomology. These are all probably less helpful than the above answers, but these are the ways I like to think about what an orientation is.
-Please let me know if you want me to expand on some of these.<|endoftext|>
-TITLE: Continuity of a function at an isolated point
-QUESTION [8 upvotes]: Suppose $c$ is an isolated point in the domain $D$ of a function $f$.
-In the delta neighbourhood of $c$, does the function $f$ have the value $f(c)$?
-
-REPLY [15 votes]: You can also see this is true using the topological definition of continuity at a point: a function is continuous at a point $x$ if for any neighborhood $V$ of $f(x)$ there is a neighborhood $U$ of $x$ such that $f(U)$ is contained in $V$. For an isolated point, you can take the neighborhood consisting of just the point $c$, so its image $f(c)$ will obviously be contained in $V$, as $V$ is a neighborhood of $f(c)$.
-
-REPLY [7 votes]: I see now that the comments above provide essentially the answer, with whatever definition of continuity you have. The following ties everything together.
-Let's use as the definition of continuity $\lim_{x \rightarrow c}\,f(x) = f(c)$. Expand: For all $\varepsilon > 0$ there exists $\delta > 0$ such that whenever $0 < |x-c| < \delta$ it is true that $|f(x)-f(c)| < \varepsilon$. When $\delta$ is small enough, there are no points $x$ which work, so the part after the "such that" is vacuously true.<|endoftext|>
-TITLE: Distribute a fixed number of points "uniformly" inside a polygon
-QUESTION [12 upvotes]: I have a polygon in 2D (defined by a series of vertices $V$ with coordinates). The polygon can be convex or concave. I have a fixed number $n$ of points I can put inside the polygon.
-The question is, how can I distribute these fixed points as uniformly as possible inside the polygon?
-The motivation for this question is I want to create a mesh generator, and I want all the triangular elements $E$ (each defined by a list of vertices $V$) to look good, with angles that are neither too small nor too large. In order to control the granularity of the mesh I am thinking about using the number of fixed points $n$ as the controlling parameter. The fixed points are used to control the vertices of the triangular elements.
-Is there any algorithm for this?
-
-REPLY [4 votes]: I came across this problem as well and have solved it by using K-means clustering.
-For a given polygon, I generate N random cluster centers in the polygon's shape. For each center, I find the cluster of pixels in the polygon that are closest to this center (this is a Voronoi fragmentation of your shape). Iteratively move the centers to their respective cluster's center of gravity.
-Note that this is a heuristic that doesn't guarantee to find the optimum distribution; when the iterations stabilize, you have found a local minimum that serves as an approximation of ideally uniformly distributed points in your polygon shape.
-Here's an example result of 100 points uniformly distributed over a cloud shape and their respective (Voronoi) areas of influence.<|endoftext|>
-TITLE: Representing IF ... THEN ... ELSE ... in math notation
-QUESTION [35 upvotes]: How do I correctly represent the following pseudocode in math notation?
-EDIT1: Formula expanded.
-EDIT2: Clarification.
-(a,b) represents a line segment on a 1D line. a <= b for each segment. The division shown below is done as per the following T-SQL code (which I suppose could be represented as a function in the formula?):
-Input: @a1 real, @b1 real, @an real, @bn real
-DECLARE @Result real
-
--- segment 1 starts at or before segment n: the gap is @an - @b1
-if @a1 <= @an begin
- SET @Result = @an - @b1
-
- if @Result <= 0 RETURN 0   -- segments touch or overlap
-
- RETURN @Result / @an
-end
-
--- otherwise segment n comes first: the gap is @a1 - @bn
-SET @Result = @a1 - @bn
-
-if @Result <= 0 RETURN 0
-
-RETURN @Result / @a1
-
-Formula:
-if m = 1 then
-  if (a,b)_1 intersects (a,b)_n then
-    r = 1
-  else if (a,b)_1 < (a,b)_n then
-    r = (a,b)_1 / (a,b)_n
-  else
-    r = (a,b)_n / (a,b)_1
-else if m = 2 then
-  if (a,b)_1 intersects (a,b)_n then
-    r = 1
-  else if (a,b)_1 < (a,b)_n then
-    r = (a,b)_1 / (a,b)_n
-  else
-    r = (a,b)_n / (a,b)_1
-
-The m = 2 block is shown as being the same as the m = 1 one for simplicity's sake.
-The divisions are against the two points that are closest to each other, unless the segments intersect, at which point r = 1.
-
-REPLY [7 votes]: This was a rejected edit on the accepted answer so I'm posting it as a new answer instead. I just wanted to point out that "If $\varphi$ then $\psi$, else $\tau$" is equivalent to $(\varphi\wedge\psi)\vee(\neg\varphi\wedge\tau)$. Since $P \to Q$ is equivalent to $\neg P \vee Q$, we can expand $(\varphi\rightarrow\psi)\wedge(\neg\varphi\rightarrow\tau)$ as follows:
-$\begin{align*}
- (\varphi\rightarrow\psi)\wedge(\neg\varphi\rightarrow\tau) &\iff (\neg\varphi\vee\psi)\wedge(\varphi\vee\tau) \\
- &\iff \left((\neg\varphi\vee\psi)\wedge\varphi\right)\vee\left((\neg\varphi\vee\psi)\wedge\tau\right) \\
- &\iff (\varphi\wedge\psi)\vee(\neg\varphi\wedge\tau)\vee(\psi\wedge\tau)
-\end{align*}$
-The last term, $(\psi\wedge\tau)$, is redundant. This can be corroborated with a truth table but it should be intuitive as the first two terms cover all cases due to the presence of $\varphi$ and $\neg\varphi$.
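-(For the skeptical, here is the brute-force truth-table check, as a small Python illustration added here, not part of the original post:)
-
-from itertools import product
-
-# Compare "if p then q else t" with (p and q) or (not p and t),
-# and with the version that keeps the redundant third term (q and t).
-for p, q, t in product([False, True], repeat=3):
-    if_then_else = q if p else t
-    two_terms = (p and q) or (not p and t)
-    three_terms = two_terms or (q and t)
-    assert if_then_else == two_terms == three_terms
-print("equivalent on all 8 assignments")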
Thus, the concept of "If $\varphi$ then $\psi$, else $\tau$" is mathematically equivalent to the sentential logic formula $(\varphi\wedge\psi)\vee(\neg\varphi\wedge\tau)$.<|endoftext|> -TITLE: Equidistributed sequence and Riemann integrable function -QUESTION [7 upvotes]: Let $f$ be a function of period 1, Riemann integrable on [0,1]. Let $\xi_n$ be a sequence -which is equidistributed in $[0,1)$. -(a) Is it true that $$\frac{1}{N}\sum\limits_{n=1}^N f(x+\xi_n)$$ converges to the constant $\int_0^1 f(y) dy$ -for each $x\in \mathbb{R}$ as $N\to \infty$ ? -(b) If so, is the convergence uniform over all $x$? - -REPLY [2 votes]: The answer to both questions (a) and (b) is yes, and is a consequence of the fact that Riemann integrable functions can be approximated from above and below by step functions. The proof follows in a fairly straightforward manner once you realize that this allows you to reduce the problem to that of indicator functions of intervals. To be precise, if $f\colon[0,1]\to\mathbb{R}$ is Riemann integrable then, for each fixed $\epsilon > 0$, there are step functions $g\le f\le h$ with $\int_0^1(h(y)-g(y))\,dy < \epsilon$. -$$ -\begin{align} -\frac1N\sum_{n=1}^Nf(\xi_n)-\int_0^1 f(y)\,dy&\le\frac1N\sum_{n=1}^Nh(\xi_n)-\int_0^1 h(y)\,dy+\int_0^1(h(y)-f(y))\,dy\\ -&\le\frac1N\sum_{n=1}^Nh(\xi_n)-\int_0^1 h(y)\,dy+\epsilon. -\end{align} -$$ -Similarly, the reverse inequality holds with $g$ replacing $h$ and $-\epsilon$ replacing $\epsilon$ on the right hand side. So, convergence for step functions implies convergence for Riemann integrable functions. Also, replacing $\xi_n$ by $x+\xi_n$ (mod 1) above, the problem of uniform convergence over all $x$ is also reduced to that of step functions. Linearity further reduces the problem to that of indicator functions of intervals in $(0,1]$. -However, if $f=1_A$ is an indicator function of an interval $A\subseteq(0,1]$ then the limit $\frac1N\sum_{n=1}^Nf(x+\xi_n)=\int_0^1f(y)\,dy$ follows by definition of equidistribution. -We can also obtain uniform convergence (in $x$) as follows: note that $y\mapsto f(x+y)=1_{\{y\in A-x{\rm\ (mod\ 1)}\}}$ is again an indicator function of an interval (taken mod 1). Convergence is uniform simultaneously on all intervals. For any $M > 0$ consider the finite collection of intervals $B_i=((i-1)/M,i/M]$. Then, any interval $A\subseteq(0,1]$ can be sandwiched between unions of the $B_i$. That is, $\cup_{i\in I}B_i\subseteq A\subseteq\cup_{j\in J}B_j$ for $I,J\subseteq\{1,2,\ldots,M\}$ with $I\subseteq J$ and $J\setminus I$ containing no more than two elements. -$$ -\frac1N\sum_{n=1}^N 1_A(\xi_n)-\int_0^1 1_A(y)\,dy\le\sum_{i\in J}\left(\frac1N\sum_{n=1}^N 1_{B_i}(\xi_n)-\int_0^1 1_{B_i}(y)\,dy\right) + 2/M. -$$ -The reverse inequality also holds if we make the sum run over $i\in I$ and replace $2/M$ by $-2/M$ on the right hand side. So, -$$ -\left\vert\frac1N\sum_{n=1}^N 1_A(\xi_n)-\int_0^1 1_A(y)\,dy\right\vert\le\sum_{i=1}^M\left\vert\frac1N\sum_{n=1}^N 1_{B_i}(\xi_n)-\int_0^1 1_{B_i}(y)\,dy\right\vert+2/M. -$$ -For any $\epsilon > 0$ we can choose $2/M < \epsilon/2$ and, using equidistribution, $N$ can be chosen large enough that the first term on the right hand side is less than $\epsilon/2$. So, the left hand side can be made less than $\epsilon$ by choosing $N$ large enough. 
This choice was independent of $A$ so, in fact, convergence is uniform on the set of all intervals $A$.<|endoftext|>
-TITLE: Simple example of uncountable ordinal
-QUESTION [15 upvotes]: Can you make a simple example of an uncountable ordinal? By simple I mean that it is easy to prove that the ordinal is uncountable. I know that the set of all the countable ordinals is an uncountable ordinal, but the only proof that I know is quite complicated.
-
-REPLY [15 votes]: Here is another way of arguing which is slightly different: Consider the set $X={\mathcal P}({\mathbb N}\times{\mathbb N})$ of all subsets of ${\mathbb N}^2$. Given $E\subseteq {\mathbb N}\times{\mathbb N}$, let $A_E$ be the set of all numbers that appear in either the domain or range of $E$ (this is sometimes called the field of $E$). If it happens that $E$ is a well-ordering of $A_E$, let $\alpha_E$ be the unique ordinal that is order isomorphic to $(A_E,E)$. Otherwise, let $\alpha_E=0$. Then $\{\alpha_E\mid E\in X\}$ is a set (is the image of $X$ under the map $E\mapsto\alpha_E$) and it is obvious that it consists precisely of the countable ordinals. One easily sees then that it is itself an ordinal, and uncountable (since no ordinal belongs to itself). This is $\omega_1$.<|endoftext|>
-TITLE: What's stopping me from choosing the nth Eilenberg Mac Lane space to be the following simplicial abelian group?
-QUESTION [5 upvotes]: Given an abelian group $X$, let $F_n(X)$ denote the simplicial abelian group defined as follows:
-$F_n(X)_j=0$ for all $j<n$.<|endoftext|>
-TITLE: Does the ratio of consecutive terms converge for all linear recursions?
-QUESTION [6 upvotes]: Does $f(n+1)/f(n)$ converge as $n\rightarrow\infty$ for $f(n)$ defined by a linear recursion, for all linear recursions?
-
-REPLY [3 votes]: As Ross's counterexample points out, the answer, of course, is no in general. However, there are broad special cases where the answer is yes, which all more or less go back to the Perron-Frobenius theorem. In particular if the recurrence is combinatorial in the sense that it counts the number of words of length $n$ in a regular language, then subject to some reasonable assumptions about the language the limit you are looking at will exist.<|endoftext|>
-TITLE: Volumes of n-balls: what is so special about n=5?
-QUESTION [80 upvotes]: The volume of an $n$-dimensional ball of radius $1$ is given by the classical formula
-$$V_n=\frac{\pi^{n/2}}{\Gamma(n/2+1)}.$$
-For small values of $n$, we have
-$$V_1=2$$
-$$V_2\approx 3.14$$
-$$V_3\approx 4.18$$
-$$V_4\approx 4.93$$
-$$V_5\approx 5.26$$
-$$V_6\approx 5.16$$
-$$V_7\approx 4.72$$
-It is not difficult to prove that $V_n$ assumes its maximal value when $n=5$.
-Question. Is there any non-analytic (i.e. geometric, probabilistic, combinatorial...) demonstration of this fact? What is so special about $n=5$?
-I also have a similar question concerning the $n$-dimensional volume $S_n$ ("surface area") of a unit $n$-sphere. Why is the maximum of $S_n$ attained at $n=7$ from a geometric point of view?
-
-note: the question has also been asked on MathOverflow for those curious about other answers.
-
-REPLY [8 votes]: (n+1)-ball as a sum of layered n-balls
-The following view may help to gain some intuition:
-An (n+1)-ball can be viewed as a sum of many stacked slices. The slices are n-balls with radius $r(x)=\sqrt{1-x^2}$ and thickness $dx$.
The 'volume' of a slice is $V_n r^n dx$:
-$$\begin{array}{rcl}
-V_{n+1} &=& \int_{-1}^1 V_n r(x)^n dx \\
-&=& V_{n} \int_{-1}^1 \left(\sqrt{1-x^2}\right)^{n} dx \\
-&=& V_n \int_{0}^1 t^{-\frac{1}{2}}\left(1-t\right)^\frac{n}{2} dt \\
-&=& V_n B\left(\frac{1}{2},\frac{n+2}{2}\right)
-\end{array}$$
-The image (not reproduced here) is an example for 3-dimensions: the inside of the sphere can be seen as being composed of layers of circular disks.
-Decreasing size of integrand
-Comparison with cylinders and extrusion: We could see this summation as a sort of extrusion but with a decreasing radius. If, instead, the radius is kept the same then we would extrude a cylinder and the volume (=surface times height) gets multiplied by two.
-But instead of the cylinder, the radius scales like $\sqrt{1-x^2}$ and the volume will be multiplied with a factor that is less than 2. How much exactly is determined by the integral $\int_{-1}^1 (\sqrt{1-x^2})^{n-1} dx$, which is the area under the curves $(\sqrt{1-x^2})^{n-1}$.
-The integrand term $(\sqrt{1-x^2})^{n-1}$ will decrease for higher $n$ and the multiplication factor of the volume when making the (n+1)-ball out of n-balls is decreasing.
-Peak at $n=5$
-Now we can see how and why the peak at $n=5$ occurs.
-The size of the n-sphere is made dimensionless by comparing with the size of a hypercube of side $r$, which has volume $r^n$.
-
-The volume of the n+1 hypercube is a multiple of the volume of the n hypercube by a constant factor $$V_{(n+1)-cube} = V_{n-cube} \times r$$
-The volume of the n+1 hypersphere is a multiple of the volume of the n hypersphere by a decreasing factor
-$$V_{(n+1)-sphere} = V_{n-sphere} \times r B\left( \frac{1}{2}, \frac{n+2}{2} \right)$$
-
-The relative growth of the hypersphere in comparison to the relative growth of the hypercube is continuously decreasing. But, initially this sphere might be considered as having a higher 'growth factor' (which starts at 2 for $n=0$) in comparison to the hypercube (which has a constant 1).
-As others have noted, the choice of a hypercube of volume $r^n$ is artificial and the peak is a virtual effect. One could also compare, for instance, with a hypercube of volume $(2r)^n$. This cube doesn't grow with rate $r$ but with rate $2r$. In this case the sphere does not initially grow faster and there is no peak.
-What then, is special?
-We could say that the sphere is special in the fact that, no matter with what hypercube you are comparing it to, eventually its size will be smaller for sufficiently large $n$ (if it wasn't already at the start). The multiplication factor of the sphere is a decreasing factor, and the multiplication factor of a hypercube is constant. So maybe, the fact that the peak occurs at $n=5$ is not so special (it is an arbitrary point), but the fact that there is a peak might be considered special.<|endoftext|>
-TITLE: Axiom of Choice Examples
-QUESTION [18 upvotes]: In the Wikipedia article, two examples are given which use / do not use the axiom of choice. They are:
-
-Given an infinite collection of pairs of socks, one needs AC to pick one sock out of each pair.
-Given an infinite collection of pairs of shoes, one shoe can be specified without AC by choosing the left one.
-
-Aren't these equivalent examples (just with different objects)? Why can't one just choose the left sock in (i) (so that AC is not needed)?
-
-REPLY [11 votes]: In both examples you are given an infinite family of sets of size 2, and a choice function picks an element of each set in the family. In the case of the sets of shoes, each set comes with an ordering (left, right), and so we can define a choice function explicitly. In the case of pairs of socks, this is not the case: Of course, given any pair, we can assign an ordering to it so we can select one of the two socks. However, there is no obvious way of uniformly doing this for all pairs at the same time. This means (at least intuitively) that there is no way of defining a choice function. Its existence can only be granted by applying the axiom of choice.
-There are several variants of this example. One that may be useful to think about is the following: One can show explicitly that if $A_n$ is a set of reals and $|A_n|=2$ for each $n\in{\mathbb N}$, then $\bigcup_n A_n$ is a (finite or infinite) countable set. However, it is consistent with the axioms of set theory except choice that there is a sequence $(A_n\mid n\in{\mathbb N})$ of sets, each $|A_n|=2$, and yet $\bigcup_n A_n$ is not countable. Although the construction of the model where this happens is technical, the point is that this formalizes the intuition that there is no "explicit" way of choosing a sock from each pair, simultaneously, and that any way of doing so is essentially non-constructive.
-For more on the set theoretic versions of these collections of socks (Russell cardinals), see here.<|endoftext|>
-TITLE: How many Borel-measurable functions from $\mathbb{R}$ to $\mathbb{R}$ are there?
-QUESTION [5 upvotes]: How many Borel-measurable functions from $\mathbb{R}$ to $\mathbb{R}$ are there? The motivation is from this answer of mine on MathOverflow.
-
-REPLY [9 votes]: Sune, there are as many Borel-measurable functions as there are reals.
-It is easy to see that a function is Borel-measurable iff the preimage of each open interval with rational end-points is a Borel set. Moreover, the function is completely determined by the sequence of pairs $(I,B)$ where $I$ varies over the intervals and $B$ is the preimage of $I$.
-There are countably many possible $I$ and to each corresponds one of ${\mathfrak c}=|{\mathbb R}|$ many possible Borel sets. The total number of Borel functions is then bounded above by $|{\mathbb R}^{\mathbb N}|={\mathfrak c}$. Since each constant function is Borel-measurable, ${\mathfrak c}$ is also a lower bound.<|endoftext|>
-TITLE: Name of the math movie
-QUESTION [16 upvotes]: Does anyone remember a movie about four mathematicians studying infinity, who ended up committing suicide? It was aired online somewhere, and I can't find it (it's a documentary, if that helps).
-
-REPLY [19 votes]: Are you talking about Dangerous Knowledge by BBC? Though it's not quite about four mathematicians (at least one of the people was a physicist), and not all of them were studying infinity directly. That's the only one that comes to my mind as I watched it a couple of weeks ago.
-
-REPLY [9 votes]: Dangerous Knowledge it is!
-Cantor - Infinity
-Boltzmann - Entropy
-Gödel and Turing - Logic
-http://video.google.com/videoplay?docid=-5122859998068380459#<|endoftext|>
-TITLE: Interpretation of ideals as unitless subsets
-QUESTION [5 upvotes]: One way to prove that a field $K$ has no ideals except the entire field and the trivial ideal is to note the fact that every nonzero element $x$ has an inverse. By the definition of an ideal, if $x$ is in the ideal then $x^{-1}x$ is because $x^{-1} \in K$.
But now we have that 1 is in the ideal, and so again by the definition of an ideal we have that every element is in the ideal. Therefore it is either the entire field or trivial.
-However, this works for any would-be ideal that has a unit; hence my question. I don't see how this coheres particularly with the idea that ideals are generalizations of things like "multiple of $n$", or that we use them to form quotient rings.
-Can someone please explain whether this has a deeper meaning or if it's not really important? I think it might have something to do with what is written in the "motivation" section in the Wikipedia article for ideals but I'm not really sure.
-Edit: I do realize that not all subsets without a unit are ideals. Sorry for the confusion.
-
-REPLY [5 votes]: One way to think of an ideal is as the set of multiples of the (possibly non-existent) g.c.d. of all the elements that it contains. (I am thinking here of the case of a commutative ring with $1$, so that distinctions between left, right, and two-sided don't matter.)
-In the integers, for example, any set of elements has a g.c.d. with all the reasonable properties that you could want, and furthermore if $\{a_i\}_{i \in I}$ is a set of integers,
-then the ideal generated by the $a_i$ is in fact principal, and its generator is a
-g.c.d. of this collection. In a more general ring, g.c.d.s don't necessarily exist, or even if they do, they don't have all the properties that they do in the integers. So rather than trying to work with g.c.d.s, we can introduce ideals, which generally have better properties (or, rather, do many of the same jobs in more general rings that g.c.d.s and their set of multiples do in the context of the integers).
-With this in mind, one sees (a) why ideals are the natural kernels of quotient maps: think about the case of the integers, where working modulo $n$ means setting all multiples of $n$ equal to zero; (b) why an ideal with a unit will be the trivial ideal (i.e. the whole ring): because the g.c.d. of any set containing a unit will have to be $1$.<|endoftext|>
-TITLE: For what functions $f(x)$ is $f(x)f(y)$ convex?
-QUESTION [17 upvotes]: For which functions $f\colon [0,1] \to [0,1]$ is the function $g(x,y)=f(x)f(y)$ convex over $(x,y) \in [0,1]\times [0,1]$? Is there a nice characterization of such functions $f$?
-The obvious examples are exponentials of the form $e^{ax+b}$ and their convex combinations. Anything else?
-EDIT: This is a simple observation summarizing the status of this question so far. The class of such functions $f$ includes all log-convex functions, and is included in the class of convex functions. So now, the question becomes: are there any functions $f$ that are not log-convex yet $g(x,y)=f(x)f(y)$ is convex?
-EDIT: Jonas Meyer observed that, by setting $x=y$, the determinant of the Hessian of $g(x,y)$ is positive if and only if $f$ is log-convex. This resolves the problem for twice continuously differentiable $f$. Namely: if $f$ is $C^2$, then $g(x,y)$ is convex if and only if $f$ is log-convex.
-
-REPLY [8 votes]: Suppose $f$ is $C^2$. First of all, because $g$ is convex in each variable, it follows that $f$ is convex, and hence $f''\geq0$.
I did not initially have Slowsolver's insight that log convexity would be a criterion to look for, but naïvely checking for positivity of the Hessian of $g$ leads to the inequalities
-$$f''(x)f(y)+f(x)f''(y)\geq0$$
-and
-$$f'(x)^2f'(y)^2\leq f''(x)f(x)f''(y)f(y)$$
-for all $x$ and $y$, coming from the fact that a real symmetric $2$-by-$2$ matrix is positive semidefinite if and only if its trace and determinant are nonnegative. The first inequality follows from nonnegativity of $f$ and $f''$. The second inequality is equivalent to $f'^2\leq f''f$. To see the equivalence in one direction, just set $x=y$ and take square roots; in the other direction, multiply the inequalities at $x$ and $y$. Since $\log(f)''=\frac{f''f-f'^2}{f^2}$, this condition is equivalent to $\log(f)''\geq0$, meaning that $\log(f)$ is convex.<|endoftext|>
-TITLE: Why is $\tau(n) \equiv \sigma_{11}(n) \pmod{691}$?
-QUESTION [14 upvotes]: If $n$ is a natural number, let $\displaystyle \sigma_{11}(n) = \sum_{d \mid n} d^{11}$.
-The modular form $\Delta$ is defined by $\displaystyle \Delta(q) = q \prod_{n=1}^{\infty}(1 - q^n)^{24}$.
-Write $\tau(n)$ for the coefficient of $q^{n}$ in $\Delta(q)$.
-I would like to know why $\tau(n) \equiv \sigma_{11}(n) \pmod{691}$. I think the proof may be somewhat difficult, so even just an outline of the argument would be much appreciated.
-Thank you!
-
-REPLY [17 votes]: The proof is not at all obvious if you begin simply with the formula
-$$\Delta(q) = q \prod_{n=1}^{\infty} (1-q^n)^{24}.$$ However, as Derek Jennings explains in his answer, if you use the (absolutely crucial!) fact that $\Delta$ is a cusp form of weight twelve and level one, the proof is actually not very difficult.
-As Derek explains, the ring of modular forms of level one is generated by two $q$-expansions, namely
-$$E_4 := 1 + 240 \sum_{n = 1}^{\infty} \sigma_3(n) q^n,$$
-which has weight 4,
-and
-$$E_6 := 1 - 504 \sum_{n=1}^{\infty} \sigma_5(n) q^n,$$
-which has weight 6. (The coefficients $240$ and $-504$ come from Bernoulli numbers, as Derek explains, but we don't need that at the moment.)
-Now we see that we can make two monomials of weight 12 from these, namely
-$E_4^3$ and $E_6^2$. How do we get $\Delta$? Well, the constant term of $\Delta$ vanishes, while $E_4^3$ and $E_6^2$ have constant term $1$, so
-$\Delta$ must be proportional to $E_4^3 - E_6^2$. Since the coefficient
-of $q$ in $\Delta$ is $1$ (i.e. $\tau(1) = 1$) while the coefficient
-of $q$ in $E_4^3 - E_6^2$ is $1728,$ we find that
-$$E_4^3 - E_6^2 = 1728 \Delta.$$
-It is useful to note that we can also use $E_4^3$ (say) and $\Delta$ as a basis for the weight $12$ modular forms. In fact, they are a very convenient basis, because if $f$ is any weight $12$ modular form, with constant term $a_0$, then we can subtract off $a_0 E_4^3$ to get rid of the constant term of $f$, and then
-$f - a_0 E_4^3$ must be a multiple of $\Delta$.
-To go further, we have to introduce another fact, also noted by Derek Jennings, namely that
-there is a weight 12 modular form
-$$E_{12} = 1 + \dfrac{65520}{691} \sum_{n=1}^{\infty} \sigma_{11}(n) q^n.$$
-In fact, it is (for me) easier to work with
-$$691 E_{12} = 691 + 65520\sum_{n=1}^{\infty} \sigma_{11}(n) q^n,$$
-which has integer coefficients.
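-(As a quick numerical sanity check of the $q$-expansions so far, here is a rough Python sketch of my own, not part of the original argument; it verifies $E_4^3 - E_6^2 = 1728 \Delta$ coefficient by coefficient:)
-
-N = 10  # number of q-coefficients to compare
-
-def sigma(k, n):
-    # sum of k-th powers of the divisors of n
-    return sum(d**k for d in range(1, n + 1) if n % d == 0)
-
-def mult(a, b):
-    # product of two power series truncated to N coefficients
-    c = [0] * N
-    for i, ai in enumerate(a):
-        for j, bj in enumerate(b):
-            if i + j < N:
-                c[i + j] += ai * bj
-    return c
-
-E4 = [1] + [240 * sigma(3, n) for n in range(1, N)]
-E6 = [1] + [-504 * sigma(5, n) for n in range(1, N)]
-
-# Delta = q * prod_{n >= 1} (1 - q^n)^24, truncated to N coefficients
-Delta = [0, 1] + [0] * (N - 2)
-for n in range(1, N):
-    for _ in range(24):
-        Delta = [Delta[i] - (Delta[i - n] if i >= n else 0) for i in range(N)]
-
-lhs = [a - b for a, b in zip(mult(mult(E4, E4), E4), mult(E6, E6))]
-assert lhs == [1728 * d for d in Delta]
-print("E4^3 - E6^2 = 1728*Delta up to q^9")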
-
-Now we apply the above procedure to write $691 E_{12}$ in terms of $E_4^3$ and $\Delta$, to find that
-$$691 E_{12} = 691 E_4^3 + (65520 - 691\cdot 720) \Delta.$$
-Now all the $q$-expansions in this formula have integral coefficients, and so
-what we find, looking at the coefficient of $q^n$, is that
-$65520\sigma_{11}(n) \equiv 65520\tau(n) \pmod{691}$ for each $n \geq 1$.
-Dividing by $65520$ (which is coprime to $691$) gives the desired formula.
-The usual way this is summarized is to say that
-$$\dfrac{691}{65520} E_{12} = \dfrac{691}{65520} + \sum_{n=1}^{\infty} \sigma_{11}(n) q^n$$
-is normalized (i.e. has coefficient of $q$ equal to $1$) and is a cuspform modulo $691$ (i.e. its constant term vanishes mod $691$). This forces it (as we have just seen) to be congruent to $\Delta$ modulo $691$.
-Finally, let me note that by far the best place to read about the theory of modular forms used here is Serre's beautiful book A course in arithmetic.
-Added: Since I was editing this anyway, let me add a cultural remark, namely that the study of congruences between Eisenstein series and cuspforms,
-of which the congruence between $\dfrac{691}{65520} E_{12}$ and $\Delta$ considered above is the first example, is a central topic in modern number theory. It lies at the heart of Mazur's determination of the possible torsion subgroups of elliptic curves over $\mathbb Q$, and is the basic method via which Ribet proved his "converse to Herbrand's theorem" result giving a criterion for non-triviality of various $p$-power subgroups of the class group of the cyclotomic field $\mathbb Q(\zeta_p)$.<|endoftext|>
-TITLE: How do I integrate the following? $\int{\frac{(1+x^{2})\mathrm dx}{(1-x^{2})\sqrt{1+x^{4}}}}$
-QUESTION [44 upvotes]: $$\int{\frac{1+x^2}{(1-x^2)\sqrt{1+x^4}}}\mathrm dx$$
-This was a Calc 2 problem for extra credit (we have done hyperbolic trig functions too, if that helps) and I didn't get it (don't think anyone did) -- how would you go about it?
-
-REPLY [7 votes]: What a surprise!
-Surfing the net, I found an almost identical question on
-"hard integral"
-$$
-\displaystyle \int \frac{x^2 - 1}{(x^2 + 1) \sqrt{x^4 + 1}} \, dx
-$$
-from June 19, 2008.<|endoftext|>
-TITLE: Solve $x^3 \equiv 1 \pmod p$ for $x$
-QUESTION [12 upvotes]: How can I find a solution for $x^3 \equiv 1 \pmod p$ ($p$ a prime) efficiently?
-Trivial root is $x_1 = 1$. I need to find other roots $x_2, x_3$.
-
-REPLY [13 votes]: The integers modulo $p$ form a field. Since $x^3 - 1 = (x-1)(x^2+x+1)$, the problem is equivalent to solving $x^2+x+1\equiv 0 \pmod{p}$. For $p\neq 2$, the usual quadratic formula works; so you would need to find $y$ such that $y^2\equiv -3\pmod{p}$. If $p=2$, then $x^2+x+1=0$ has no solutions, and if $p=3$, then $x^3-1 = (x-1)^3$, so again there is no root other than $x=1$.
-Let's consider the other primes, $p\gt 3$.
-Using quadratic reciprocity, if $p\equiv 1\pmod{4}$, then $-1$ is a square modulo $p$ and we have:
-$$\left(\frac{-3}{p}\right) = \left(\frac{-1}{p}\right)\left(\frac{3}{p}\right) = \left(\frac{p}{3}\right).$$
-So if $p\equiv 1 \pmod{3}$ (hence $p\equiv 1 \pmod{12}$) then $-3$ is a square modulo $p$; if $p\equiv 2\pmod{3}$ (so $p\equiv 5\pmod{12}$), then $-3$ is not a square modulo $p$, so there is no other solution.
-If $p\equiv 3\pmod{4}$, then $-1$ is not a square modulo $p$, and we have: -$$\left(\frac{-3}{p}\right)=\left(\frac{-1}{p}\right)\left(\frac{3}{p}\right) = \left(\frac{p}{3}\right)$$ -(since $\left(\frac{3}{p}\right) = -\left(\frac{p}{3}\right)$); again, if $p\equiv 1\pmod{3}$ (so $p\equiv 7\pmod{12}$) then $-3$ is a square modulo $p$; and if $p\equiv 2\pmod{3}$ (so $p\equiv 11\pmod{12}$) then $-3$ is not a square modulo $p$. -In summary, there are roots of $x^3-1$ modulo $p$ other than $x\equiv 1\pmod{p}$ if and only if $p\equiv 1\pmod{6}$. If $p=6k+1$ is such a prime, then the two roots of $x^3-1$ other than $x=1$ are precisely the roots of $x^2+x+1$, which are -$$\frac{-1\pm\sqrt{-3}}{2} = -3k(-1\pm a),$$ -where $a$ is an integer such that $a^2\equiv -3\pmod{p}$. This means you still need to figure out the square root, which can be done using any of the methods suggested by Bill Dubuque, such as Tonelli's algorithm (say, in Shanks' version as described in Wikipedia). -Added: As Alex Bartel notes in comments and in his own answer, one can avoid the use of quadratic reciprocity above. Since the units of $\mathbb{Z}/p\mathbb{Z}$ form a cyclic group of order $p-1$, there are nontrivial solutions to $x^3=1$ if and only if $3$ divides the order, that is, if and only if $p\equiv 1\pmod{3}$. From there, it is an easy jump to solutions if and only if $p\equiv 1 \pmod{6}$, since otherwise we would have $p\equiv 4\pmod{6}$ which is impossible since $p$ is prime. Once one establishes that, the quadratic formula can be applied safe in the knowledge that these primes must have $-3$ as a quadratic residue.<|endoftext|> -TITLE: Area preserving transformations -QUESTION [6 upvotes]: Suppose $A$ is a linear transformation from $R^3$ to $R^3$ and $|det(A)| = 1$. I know that $A$ is volume preserving, but is it also area preserving? For example, if $a$ and $b$ are two vectors in $R^3$ that span a parallelogram, is the area of this parallelogram equal to the area of the paralellogram spanned by $A(a)$ and $A(b)$? -Thank you! - -REPLY [8 votes]: Matrices of the form $\begin{pmatrix}X & 0\\\\0 & \text{det}(X)^{-1}\end{pmatrix}$ with $X$ any invertible 2 by 2 matrix with determinant not equal to $\pm1$ give a host of counter examples: consider the action of such a matrix on a parallelogram in the subspace $\langle(1,0,0),(0,1,0)\rangle$.<|endoftext|> -TITLE: When is an elliptic integral expressible in terms of elementary functions? -QUESTION [31 upvotes]: After seeing this recent question asking how to calculate the following integral -$$ \int \frac{1 + x^2}{(1 - x^2) \sqrt{1 + x^4}} \, dx $$ -and some of the comments that suggested that it was an elliptic integral, I tried reading a little bit on the Wikipedia article about elliptic integrals. -It seems that the point is that most elliptic integrals cannot be expressed in terms of elementary functions. The Wikipedia article defines an elliptic integral as an integral of the form -$$\int R \left( x, \sqrt{ P(x) } \right ) \, dx$$ -where $R(x, y)$ is a rational function and $P(x)$ is a polynomial of degree $3$ or $4$ with no repeated roots. -Now, the article does mention in its introductory section that two exceptions in which the elliptic integrals can be expressed in terms of elementary functions are when the polynomial $P(x)$ has repeated roots or when the rational function $R(x, y)$ does not contain odd powers of $y$. 
-In the example in question we have $P(x) = 1 + x^4$ and
-$$R(x, y) = \frac{1 + x^2}{(1 - x^2)y}$$
-so certainly it does not correspond to the two exceptions mentioned before. Thus I have a couple of questions about this:
-
-1) What are the conditions for an elliptic integral (as defined in the Wikipedia article) to be expressible in terms of elementary functions? More specifically, are the two above-cited conditions the only exceptions or are there any others which may explain why the above integral is expressible in terms of elementary functions?
-2) Depending on the answer to my first question, why is it that the above "elliptic integral" can be expressed in terms of elementary functions?
-
-Note: I'm not sure but I suppose that some conditions must be put on the rational function $R(x, y)$ so as to avoid trivial cases, but I don't want to speculate.
-Thank you very much in advance.
-
-REPLY [8 votes]: I might be coming at this too late to interest anyone, but let me build on Matt E's answer. Suppose that we have an elliptic curve $E$ given as $y^2 = P(x)$ and a differential form $\eta = f(x, \sqrt{P(x)}) dx/\sqrt{P(x)}$ on $E$. We would like to know whether or not there is a map $\phi: E \to \mathbb{P}^1$, and a differential form $\omega$ on $\mathbb{P}^1$, such that $\eta = \phi^* \omega$. I'll explain how to solve this, using the current problem as an example.
-Recall that $dx/\sqrt{P(x)}$ has no zeroes or poles on $E$. So the poles of $\eta$ are precisely those of $f(x, \sqrt{P(x)})$. In our case, $f$ has poles at the points where $1-x^2 =0$, namely the four points $(x,y) = (\pm 1, \pm \sqrt{2})$. If $\eta = \phi^* \omega$, then the poles of $\eta$ will occur at precisely the preimages of the poles of $\omega$. Moreover, if $\phi(a) = b$, with $\phi$ ramified of order $e$ at $a$, then the residue of $\phi^* \omega$ at $a$ is going to be $e$ times the residue of $\omega$ at $b$. So we can guess which poles of $\eta$ come from the same pole of $\omega$ by seeing which ones have residues which are in positive rational ratios.
-In this case, the residue is $1/ \sqrt{2}$ at $(1, \sqrt{2})$ and $(-1, - \sqrt{2})$, and is
- $- 1/ \sqrt{2}$ at $(1, -\sqrt{2})$ and $(-1, \sqrt{2})$. So the most obvious guess is that $\phi(1, \sqrt{2}) = \phi(-1, -\sqrt{2})$ and $\phi(-1, \sqrt{2}) = \phi(1, -\sqrt{2})$, with branching of equal orders at these points. If I were going to write a careful algorithm, I'd have to consider other possibilities, but I'll just try this possibility.
-So, we would like to know whether or not there is a rational function $\phi$ on $E$, of degree $2$, with $\phi(1, \sqrt{2}) = \phi(-1, -\sqrt{2})$ and $\phi(-1, \sqrt{2}) = \phi(1, -\sqrt{2})$, and with branching of some order $e$ at all these points? In other words, does $e (1, \sqrt{2}) + e (-1, -\sqrt{2}) = e (-1, \sqrt{2}) + e (1, -\sqrt{2})$ in the group law of $E$? Note that if you just choose $4$ random points on an elliptic curve, there would be no relations between them in the group law, consistent with the fact that there are usually no elementary solutions to elliptic integrals.
-In this case, we win! Notice that the line $y= \sqrt{2} x$ is tangent to $E$ at the points $(1, \sqrt{2})$ and $(-1, -\sqrt{2})$. Similarly, $y = -\sqrt{2} x$ is tangent to $E$ at the other two points. So
-$2 \cdot (1, \sqrt{2}) + 2 \cdot (-1, -\sqrt{2}) = 2 \cdot (-1, \sqrt{2}) + 2 \cdot (1, -\sqrt{2}).$
-Explicitly, the map $\phi$ should be $x \mapsto (\sqrt{x^4+1} - \sqrt{2} \cdot x)/(\sqrt{x^4+1} + \sqrt{2} \cdot x)$.
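-(One can make a computer check this map right away. The following SymPy sketch is my own, just a sanity check; it pulls back the form $du/(2\sqrt{2}\,u)$, which is the $\omega$ identified below, along $\phi$ and compares with the original integrand. The two agree up to an overall sign:)
-
-import sympy as sp
-
-x = sp.symbols('x', positive=True)
-s = sp.sqrt(x**4 + 1)
-phi = (s - sp.sqrt(2)*x) / (s + sp.sqrt(2)*x)
-
-# pullback of du/(2*sqrt(2)*u) along u = phi(x)
-pullback = sp.diff(phi, x) / (2*sp.sqrt(2)*phi)
-integrand = (1 + x**2) / ((1 - x**2)*sp.sqrt(1 + x**4))
-
-print(sp.simplify(pullback / integrand))  # should print -1
-# numeric spot-check at x = 1/2:
-print((pullback / integrand).subs(x, sp.Rational(1, 2)).evalf())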
We know that $\omega$ should be a form which has poles of residue $1/(2\sqrt{2})$ at $0$ and $\infty$, so it should be $du/(2\sqrt{2}\,u)$.
-Ask your favorite computer algebra system to pull back $du/(2\sqrt{2}\,u)$ along $x \mapsto (\sqrt{x^4+1} - \sqrt{2} \cdot x)/(\sqrt{x^4+1} + \sqrt{2} \cdot x)$. It will whine a lot but, if you make it keep expanding and factoring, you will get the original integrand.<|endoftext|>
-TITLE: Status of The Triangle Book
-QUESTION [18 upvotes]: I am interested in finding out about the current status of the planned book: The Triangle Book by John H. Conway and Steve Sigur. I understand that Steve Sigur died some time back. I got no reply from Prof. Conway; is there someone who knows the fate of this project? Is it abandoned or is it still under preparation? Here is some information about the planned book.
-
-REPLY [3 votes]: The Triangle Book is on hold (Conway died this year) but there are people trying to revive it, trust me.
-Colm Mulcahy<|endoftext|>
-TITLE: Simplicity of $A_n$
-QUESTION [45 upvotes]: I have seen two proofs of the simplicity of $A_n,~ n \geq 5$ (Dummit & Foote, Hungerford). But neither of them is such that it 'sticks' in the head (at least my head). In a sense, I still do not have the feeling that I know why they are simple and why it should be 5 and not any other number (perhaps this is only because 3-cycles become conjugate in $A_n$ after $n$ becomes greater than 4).
-
-What is the most illuminating proof of the simplicity of $A_n,~ n \geq 5$ that you know?
-
-REPLY [4 votes]: Many proofs of this theorem involve a trick, which is not so natural to think of, or remember (I feel).
-But the following is a Lemma, which is very easy to state, easy to prove, and immediately implies simplicity of $A_n$. This I saw as an exercise in Wilson's Finite Simple Groups.
-Lemma: Let $n\geq 5$. Then every non-trivial conjugacy class in $A_n$ contains at least $n$ elements.
-Proof: Consider a non-identity $\sigma\in A_n$. Two cases arise: $\sigma$ contains at least one cycle of length $\geq 3$ or all cycles have length $2$.
-Key: if $\sigma$ takes $i$ to $j$ (i.e. $\sigma(i)=j$) then $\tau\sigma\tau^{-1}$ takes $\tau(i)$ to $\tau(j)$:
-$$\tau\sigma\tau^{-1}(\tau(i))=\tau(j).$$
-(A) $\sigma=(123\cdots)\cdots$.
-Let $\tau=(234)$. Then $$\tau\sigma\tau^{-1}=(134\cdots)\cdots$$
-This is a conjugate of $\sigma$ and is clearly different from $\sigma$.
-Similarly, taking $\tau=(23k)$ with $k=4,5,\cdots,n$, we get $n-3$ conjugates of $\sigma$ different from $\sigma$ (so, counting $\sigma$ itself, $n-2$ elements so far). Then, slightly changing the $\tau$'s to $\tau=(2k3)$ with $k=4,5$ (since $n$ is at least five), we get two more conjugates of $\sigma$. This case is complete.
-(B) $\sigma=(12)(34)\cdots$
-Again for $\tau=(234), (235), \cdots, (23n)$, we get $\tau\sigma\tau^{-1}$ equal to
-$$ (13)(42), (13)(54), \cdots (13)(n4);$$
-these $n-3$ distinct conjugates of $\sigma$ together with $\sigma$ give $n-2$ elements in the conjugacy class of $\sigma$. You may try to obtain two more, just by slight modification [as in Case A]. Q.E.D.
-Proof of Theorem: We assume simplicity of $A_5$ is proved. Let $n>5$ and $N$ a normal subgroup of $A_n$. Then $N\cap A_{n-1}$ is normal in $A_{n-1}$ ($A_{n-1}$ is the subgroup of permutations fixing $n$ in $A_n$). By induction, $N\cap A_{n-1}=A_{n-1}$ or $N\cap A_{n-1}=1$. In the first case, $A_{n-1}\subseteq N$, and hence $N$ contains a $3$-cycle. This implies that $N=A_n$. In the second case, i.e.
$N\cap A_{n-1}=1$, we get $|N|\leq n$ (since
-$NA_{n-1}\leq A_n$ hence $\frac{|N|\cdot|A_{n-1}|}{|N\cap A_{n-1}|}\leq |A_n|$).
-Thus $N$ is a normal subgroup of order $\leq n$; it follows that any non-identity element of $N$ would lie in a conjugacy class of size $< n$ contained in $N$, which contradicts the Lemma, so $N=1$.<|endoftext|>
-TITLE: Compactness on models with bounded finite size
-QUESTION [7 upvotes]: I am aware that compactness fails on finite models, but the common counter-example uses models of arbitrarily big finite size. So if we bound the size, what results can we get?
-Assume we have an infinite set of sentences $\Sigma$, of a language of first order logic $\mathcal{L}$. If there is a natural number $n$ such that every finite $S\subset\Sigma$ has a model of size $\leq n$, then does $\Sigma$ have a finite model? How about a model of size $\leq n$?
-If the language is finite or there is a $k$ such that every function has at most $k$ arguments and every relation is at most $k$-place, we have finitely many structures of size $\leq n$ of $\mathcal{L}$, let's call them $N_1,\ldots,N_m$. If for every $i\leq m$ there exists a finite $S_i\subset\Sigma$ such that $N_i$ doesn't satisfy $S_i$, then $\bigcup_{i\leq m}S_i$ wouldn't have a model of size $\leq n$, and thus we get that one of the models satisfies every sentence of $\Sigma$. Is this argument correct? And what about the general case, where we don't restrict the language whatsoever and the structures are infinitely many?
-
-REPLY [4 votes]: Your argument is correct. In fact, we can axiomatize any finite model, meaning that if ${\mathcal M}=(M,\dots)$ is a model (in a language of arbitrary size) then there is a set of sentences $\Sigma$ such that any model of $\Sigma$ is isomorphic to ${\mathcal M}$:
-Say $M=\{a_1,\dots,a_n\}$.
-First you say that the universe has size $|M|=n$, with a sentence $\tau$ such as "there are $x_1,\dots,x_n$ which are pairwise different and such that each $y$ is equal to one of them."
-For each relational symbol $R$, consider the formula
-
-"there are $x_1,\dots,x_n$ pairwise different and such that $\bigwedge_{R^{\mathcal M}(a_{i_1},\dots,a_{i_k})}R(x_{i_1},\dots,x_{i_k})$ and such that $\bigwedge_{R^{\mathcal M}(a_{i_1},\dots,a_{i_k})\mbox{ fails}}\lnot R(x_{i_1},\dots,x_{i_k})$",
-
-where the first big conjunction ranges over all tuples of elements of $M$ that are in the interpretation of $R$, and the second runs over all tuples that are not in that interpretation. Given any model of this formula of size $n$, there is a bijection between this model and $M$ such that the interpretation of $R$ in this model is just the image under the bijection of the interpretation of $R$ in ${\mathcal M}$. Let $\phi_R$ be the formula such that the sentence we just wrote is $\exists \vec x\phi_R$.
-Similarly, there is a sentence $\exists\vec x\psi_f$ describing completely each function $f^{\mathcal M}$, in the same sense as the sentence above completely describes $R^{\mathcal M}$.
-$\Sigma$ consists of $\tau$, and the following formulas:
-For any finitely many constant symbols $c_1,\dots,c_j$, any finitely many relational symbols $R_1,\dots,R_s$, and any finitely many function symbols $f_1,\dots,f_k$, say that $c_l^{\mathcal M}=a_{i_l}$ for each $l$, the sentence
-
-"$\exists \vec x(\bigwedge_{a=1}^s\phi_{R_a}\land\bigwedge_{b=1}^k\psi_{f_b}\land \bigwedge_{l=1}^j c_l=x_{i_l})$".
-
-Note that if the language is finite, $\Sigma$ is finite as well. But the theory just described uniquely characterizes ${\mathcal M}$ even if the language is infinite.
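-(A small concrete illustration, added here and not from the original answer: for the two-element linear order ${\mathcal M}=(\{a_1,a_2\},\leq)$ with $a_1\leq a_2$, the theory $\Sigma$ amounts to the single sentence $\exists x_1\exists x_2\,(x_1\neq x_2\wedge\forall y\,(y=x_1\vee y=x_2)\wedge x_1\leq x_1\wedge x_1\leq x_2\wedge x_2\leq x_2\wedge\lnot\, x_2\leq x_1)$, and any model of this sentence is isomorphic to ${\mathcal M}$.)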
-This result is usually presented as a consequence of Beth's definability theorem or of Svenonius's theorem; see for example W. Hodges, Model Theory, Cambridge University Press, (1993), Chapter 10. -(I guess this says that there is no useful compactness result for finite models, since the only version that would hold would simply give us a model that we already knew we had.) - -REPLY [4 votes]: For the general case, without any restriction on the language, the Compactness theorem does hold for models having a specified finite size. To see this, argue as follows: suppose that every finite subtheory of $\Sigma$ has a model of finite size at most $n$. Let $\Sigma^+$ be the theory obtained by adding the axiom "there are at most $n$ elements." Notice that every finite subtheory of $\Sigma^+$ is satisfiable. Thus, by the ordinary Compactness theorem, $\Sigma^+$ is satisfiable. Hence, $\Sigma$ has a model of size at most $n$, as desired.<|endoftext|> -TITLE: What is the minimum value of $a$ such that $x^a \geq \ln(x)$ for all $x > 0$? -QUESTION [10 upvotes]: This is probably just elementary, but I don't know how to do it. I would like to find the minimum value of $a$ such that $x^a \geq \ln(x)$ for all $x > 0$. Numerically, I have found that this minimum value lies between 0.365 and 0.37 (i.e., $x^{0.37} > \ln(x)$ holds for all $x > 0$, but $x^{0.365} \geq \ln(x)$ does not). Is there any analytical way to find out exactly this minimum value? -EDIT: Based on the received answers, I finally came up with my own one as follows. -Consider the function $f(x) = x^a - \ln(x).$ This function is convex in $x$, and hence achieves its unique minimum at the $x^*$ such that $f'(x^*) = 0.$ Solving that equation yields $$f_{\mathrm{min}} = \min\limits_{x>0} f(x) = \frac{\ln(a)+1}{a}.$$ -Now, by letting $f_{\mathrm{min}} = 0$, we get the desired value $a^* = 1/e.$ -Thanks, everyone, for the answers! - -REPLY [7 votes]: $a = \max_{x>0} \frac{\ln(\ln(x))}{\ln(x)} = 1/e$. - -REPLY [5 votes]: Consider the minimum of the function $f(x)=x^{1/e}-\ln(x)$. -EDIT: As a second step, verify that $x^a < \ln(x)$ at $x=(1/a)^{1/a}$, for any $0<a<1/e$.<|endoftext|> -TITLE: How do I compute Gaussian curvature in cylindrical coordinates? -QUESTION [6 upvotes]: I just asked this question on ask.metafilter, and it was suggested that I ask here. Though I'm talking about coding something up, this question is about the math behind it, not the implementation. -We have done analysis in the past where we've computed an approximation for Gaussian curvature of a surface in Cartesian coordinates. -What we've been doing for Cartesian (in MATLAB) is -[fu,fv] = gradient(Z) - -[fuu, fuv] = gradient(fu) - -[fvu,fvv] = gradient(fv) - -GC = (fuu.*fvv - fuv.*fuv)./(1 + fu.^2 + fv.^2).^2 - -So now I have a surface that I'm modeling in cylindrical coordinates, and I can do the same thing as above for $r$ as a function of $\theta$ and $z$. The problem is that it's only taking into account the change in $r$, not the fact that there is curvature inherent in it being a cylinder. -Looking on Wolfram (equations 27, 32 and 37 and thereabouts), it seems like there's a centripetal component that I don't know how to apply. Dividing by the (constant?) radius doesn't seem like it would work, so I think I'm missing something. -Any help would be appreciated, either explaining how to modify these equations to work correctly, or some other approximation that has worked for you. -Thank you.
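-For reference, the Cartesian computation above in a minimal numpy sketch (not from the original post; it assumes a uniformly spaced grid, and the function name is illustrative): -import numpy as np - -def gaussian_curvature(Z): -    # K = (f_uu*f_vv - f_uv^2) / (1 + f_u^2 + f_v^2)^2 for the graph z = f(u, v), -    # with derivatives approximated by finite differences on a unit grid. -    fu, fv = np.gradient(Z) -    fuu, fuv = np.gradient(fu) -    _, fvv = np.gradient(fv) -    return (fuu * fvv - fuv**2) / (1.0 + fu**2 + fv**2)**2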
- -REPLY [7 votes]: Your cylindrical coordinate surface $r(\theta,z)$ in Cartesian coordinates is -$$\begin{align*}x&=r(\theta,z)\cos\;\theta\\y&=r(\theta,z)\sin\;\theta\\z&=z\end{align*}$$ -which now allows you to apply the usual Gaussian curvature formula. In particular, you should get the expression -$$K=-\frac{r^3\frac{\partial^2 r}{\partial z^2}+r^2\left(\left(\frac{\partial^2 r}{\partial \theta\partial z}\right)^2-\frac{\partial^2 r}{\partial z^2}\frac{\partial^2 r}{\partial \theta^2}\right)+2r\frac{\partial r}{\partial \theta}\left(\frac{\partial^2 r}{\partial z^2}\frac{\partial r}{\partial \theta}-\frac{\partial r}{\partial z}\frac{\partial^2 r}{\partial \theta\partial z}\right)+\left(\frac{\partial r}{\partial \theta}\frac{\partial r}{\partial z}\right)^2}{\left(r^2+\left(r\frac{\partial r}{\partial z}\right)^2+\left(\frac{\partial r}{\partial \theta}\right)^2\right)^2}$$ - -For completeness, if you have $z$ as a function of $r$ and $\theta$, your Cartesian parametrization is -$$\begin{align*}x&=r\cos\;\theta\\y&=r\sin\;\theta\\z&=z(r,\theta)\end{align*}$$ -and the corresponding Gaussian curvature expression is -$$K=\frac{r^2\frac{\partial^2 z}{\partial r^2}\left(\frac{\partial^2 z}{\partial \theta^2}+r\frac{\partial z}{\partial r}\right)-\left(\frac{\partial z}{\partial \theta}-r\frac{\partial^2 z}{\partial r\partial \theta}\right)^2}{\left(r^2\left(\left(\frac{\partial z}{\partial r}\right)^2+1\right)+\left(\frac{\partial z}{\partial \theta}\right)^2\right)^2}$$ -I will leave the derivation of the Gaussian curvature expression for -$$\begin{align*}r&=f(u,v)\\\theta&=g(u,v)\\z&=h(u,v)\end{align*}$$ -to the interested reader.<|endoftext|> -TITLE: Measure-preserving transformations and invariant functions -QUESTION [8 upvotes]: Let $\tau: E \to E$ be a measure-preserving transformation of the measure space $(E, \mathcal{E}, \mu)$, i.e. $\mu(\tau^{-1}(A)) = \mu(A)$ for all $A \in \mathcal{E}$. Let $\mathcal{E}_\tau = \{ A \in \mathcal{E} : \tau^{-1}(A) = A \}$. In my lecture notes, it is claimed that a measurable function $f$ is invariant (i.e. $f \circ \tau = f$) if and only if it is measurable with respect to $\mathcal{E}_\tau$. -This is evidently false: Let $E = \{ 0, 1 \}$, and let $\mathcal{E} = \mathcal{P}(E)$ be the power set. Let $\mu$ be the uniform distribution. Let $\tau$ act on $E$ by transposing 0 and 1. Let $f: (E, \mathcal{E}) \to (E, \mathcal{E}_\tau)$ be the identity map on $E$. Clearly, $f$ is measurable and not invariant. Yet it's also measurable with respect to $\mathcal{E}_\tau$: the preimage of every measurable set in its codomain is in $\mathcal{E}_\tau$, by construction. -Have I misinterpreted the claim? If not, is there a similar claim which is true, e.g. by fixing the codomain of $f$? - -REPLY [3 votes]: I suspect that measurable function in the statement should be understood to be -real-valued measurable function, or something similar. -More generally, if each point $y$ in the codomain $Y$ of $f$ is measurable, and if $f$ is measurable w.r.t. $\mathcal E_{\tau}$, then -we see that $f^{-1}(y)$ should be invariant under $\tau$, which is to say that -$f\circ\tau = f$.
-The problem with your counter-example is that the points of $E$ are not in -your particular $\mathcal E_{\tau}$, and so the preceding argument breaks down -(as you implicitly noted!).<|endoftext|> -TITLE: On nonintersecting loxodromes -QUESTION [33 upvotes]: The (spherical) loxodrome, or the rhumb line, is the curve of constant bearing on the sphere; that is, it is the spherical curve that cuts the meridians of the sphere at a constant angle. A more picturesque way of putting it is that if one wants to travel from one point of a (spherical) globe to the antipodal point (say, from the North Pole to the South Pole) in a fixed direction, the path one would be taking on the globe would be a loxodrome. -For a unit sphere, the loxodrome that cuts the meridians at an angle $\varphi\in\left(0,90^\circ\right]$ is given by -$$\begin{align*}x&=\mathrm{sech}(t\cot\;\varphi)\cos\;t\\y&=\mathrm{sech}(t\cot\;\varphi)\sin\;t\\z&=\tanh(t\cot\;\varphi)\end{align*}$$ -While playing around with loxodromes, I noted that for certain values of $\varphi$, one can orient two identical loxodromes such that they do not intersect (that is, one can position two ships such that if both take similar loxodromic paths, they can never collide). Here for instance are two loxodromes whose constant angle $\varphi$ is $60^\circ$, oriented such that they do not cross each other: - -On the other hand, for the (extreme!) case of $\varphi=90^{\circ}$, the two loxodromes degenerate to great circles, and it is well known that two great circles must always intersect (at two antipodal points). -Less extreme, but seemingly difficult, would be the problem of positioning two 80° loxodromes such that they do not intersect: - - -This brings me to my first question: - -1) For what values of $\varphi$ does it become impossible to orient two loxodromes such that they do not cross each other? - -For simplicity, one can of course fix one of the two loxodromes to go from the North Pole to the South Pole, and try to orient the other loxodrome so that it does not cross the fixed loxodrome. - -That's the simpler version of my actual problem. Some experimentation seems to indicate that it is not possible to orient three loxodromes such that they do not cross each other. So... - -2) Is it true that for all (admissible) values of $\varphi$, one cannot position three loxodromes such that none of them cross each other? - -I've tried a bit of searching around to see if the problem has been previously considered, but I have not had any luck. Any pointers to the literature will be appreciated. - -REPLY [22 votes]: Updated answer to (2): Three non-intersecting $60^\circ$ loxodromes. - -The axes are coplanar and inclined at $120^\circ$ to each other. This image shows that symmetry better: - -And here's the Mercator projection: - -My approach was to plot one loxodrome such that its Mercator projection is a (black) line through the origin. Then, I tilted the spherical loxodrome "toward the camera"; that is, I rotated the sphere about the horizontal axis to get new, curvy (red) projections. - -From tilt-angles $108^\circ$ to $143^\circ$, the "curve" lies between parts of the "line", indicating a range of red loxodromes that don't intersect the black one. - -For a certain sub-range ($108^\circ$ to about $125^\circ$), a third (blue) non-intersecting loxodrome can be added by rotating the red one about the Mercator origin. Here's an image from the end of that range, where red and blue are tangent. - -That's the end of the illustrated intro. 
Now for some equations ... -Starting with your parameterization of the loxodrome, then tilting via angle $\theta$, gives this parameterization of the Mercator projection: -$$\begin{align} -u &= \rm{atan}\left( \frac{\sin t}{\cos t \cos \theta + \sinh\left(t \cot\phi\right) \sin \theta }\right) \\ -v &= \rm{atanh}\left( \frac{-\cos t \sin\theta + \sinh\left(t \cot\phi\right) \cos\theta }{\cosh\left(t \cot\phi\right)}\right) -\end{align} -$$ -A tilted loxodrome crosses into the range of (possible) non-intersection with the un-tilted loxodrome when the "top" of the outer loop about its tilted north pole meets the Mercator origin. (The nature of loxodromes guarantees that the two loxodromes will be tangent there.) The point on the loop corresponds to $t=\pi$, for which $u$ is already zero; for $v$ to vanish, we must have -$$0 = -\cos\pi \sin\theta + \sinh\left(\pi \cot\phi\right) \cos\theta = \sin\theta+\sinh\left(\pi\cot\phi \right) \cos\theta$$ -so -$$\tan\theta = -\sinh\left(\pi\cot\phi\right)$$ -Consequently, appropriately adjusting the "branch" of $\rm{atan}$, the range begins at -$$\theta_0 := \pi - \rm{atan}\left(\sinh\left(\pi\cot\phi\right)\right)$$ -The range of (possible) non-intersection ends when the loop around the tilted loxodrome's south pole brushes against the un-tilted loxodrome. This is when the point corresponding to $t=-\pi$ has $v = \pi\cot\phi$ (matching the upper-right point of the "straight" loxodrome projection). So, the range ends at -$$\theta_1 := 2\;\rm{atan}\left( \sinh\left(\pi\cot\phi\right) \right)$$ -I write "range of possible non-intersection", because that range collapses when $\theta_0 = \theta_1$. This gives us a critical loxodrome angle: -$$\phi_{*} = \rm{atan}\left(\frac{\pi}{ \rm{asinh}\left( \tan\frac{\pi}{3} \right) } \right) \approx 67.2565^\circ$$ - -You cannot arrive at two non-intersecting loxodromes with $\phi > \phi_{*}$ --in particular, with $\phi=80^\circ$-- by tilting one relative to the other in the way I've described. -Here's $\phi = 80^\circ$: - -Of course, "the way I've described" lacks generality. In addition to "vertical" tilts, one should also consider "lateral" spins (horizontal shifts in the Mercator projection). I'll leave that, and a full investigation of the three-loxodrome scenario, as an exercise.<|endoftext|> -TITLE: What is the Shortest possible formula to find the intersection between a set of two ranges of number -QUESTION [7 upvotes]: How to find the intersection between two ranges of numbers. -Let me explain the question with an example, -{2,3} {3,8} would result in 0 -{2,5} {3,8} would result in 2 -{3,6} {3,8} would result in 3 -{4,5} {3,8} would result in 1 -{4,5} {3,8} would result in 1 -{8,9} {3,8} would result in 0 - -Note: this could easily be done with a series of ifs in a computer application; I am hungry for a mathematical solution! - -REPLY [11 votes]: If I understood you correctly, you have $\{x,x+1,\ldots,x+n\}$ and $\{y,y+1,\ldots,y+k\}$ with $x,y$ integers, and (judging by your examples) you want the length of the overlap of the intervals $[x,x+n]$ and $[y,y+k]$ (which is one less than the number of common integers, when the ranges meet). -Well, take: $$\max\{0,\min\{y+k,x+n\}-\max\{x,y\}\}$$ and that should be it.<|endoftext|> -TITLE: Differential Form on a Riemann Surface -QUESTION [10 upvotes]: The following problem is basically from Miranda's "Algebraic Curves and Riemann Surfaces", which I am reading on my own; if there are any rules against posting textbook problems, my apologies!
-Let $X$ be a smooth projective curve defined by the homogeneous polynomial $F(x,y,z)=0$, with $\deg F = d \geq 3$. Let $f(x,y) = F(x,y,1)$. Show that if $p(u,v)$ is a polynomial of degree at most $d-3$, then $p(u,v) \frac{du}{\partial f/ \partial v}$ defines a holomorphic 1-form on the compact Riemann surface X. If $X$ is not smooth, but has nodes, then this form is a holomorphic 1-form on the resolution. -I see that this is a holomorphic $1$-form on the affine curve defined by $f$, since the charts are just projection to the $x$ or $y$ coordinate; in the former case the form is evidently holomorphic and in the latter case the form transforms to $p(u,v) \frac{dv}{\partial f/ \partial u}$. However, I'm a bit confused as to the computations involved in checking this on the other affine curves, and what extra argument is needed for the nodes case. - -REPLY [5 votes]: You have two questions: how to change variables, and how to handle nodes. The case of changing variables, for hyperelliptic curves without nodes, is in chapter III.5.5 of Shafarevich's Basic algebraic geometry. The general case is on page 105 of Phillip Griffiths' Introduction to algebraic curves (China notes). The discussion of the nodes there needs supplementing with the argument from page 98, i.e. a proof that the derivative vanishes simply on both the separate branches lying over the node, rather than at the point in the plane, as suggested there in a footnote. Basically a fraction is holomorphic if the numerator vanishes as much as the denominator. Here the equation for the curve vanishes twice at the node so the derivative in the denominator vanishes once and is canceled by the vanishing of the adjoint polynomial. But you still have to finesse the point raised in Griffiths' footnote as mentioned above about the order of vanishing of the pullback to the normalization. Warm up on some specific examples.<|endoftext|> -TITLE: Generalization of a ring? -QUESTION [17 upvotes]: I've just started learning about rings. Rings are one additive abelian group strung together, through the associative law, with another structured operation. -Couldn't we continue stringing together operations in this manner (a multi-operation associative law)? Would what I'm thinking be encompassed by the ring definition through something I'm missing? If not, is this done and, if so, do useful objects come out of it? - -REPLY [11 votes]: Yes, any algebraic structure that has multiple operations will have laws like the distributive law that intertwine the operations. For otherwise the operations would not interact in any way and the structure could be studied as two independent structures with the non-interacting operations. For example, if we dropped the distributive law from the ring axioms then we'd simply have a set with a given abelian group structure and given monoid structure with no connection between the two structures. It is the distributive law that ties together these two structures and leads to the rich structure that is unique to rings - structure above and beyond the constituent structure of the additive group and multiplicative monoid. -One can observe the key role played by the distributive law even in the simplest results on rings. For example, consider the proof of the law of signs $\rm\ (-A)\:(-B) = A\:B\:.\ $ One simple proof is to observe that both terms are additive inverses of $\rm\ (-A)\:B\ $ hence they are equal by uniqueness of inverses. But to verify that they are inverse requires applying the distributive law. 
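-Spelled out, the verification is nothing but distributivity plus uniqueness of additive inverses: -$$0\cdot B = (0+0)\:B = 0\cdot B + 0\cdot B \;\Rightarrow\; 0\cdot B = 0, \quad\text{and likewise}\quad A\cdot 0 = 0,$$ -$$A\:B + (-A)\:B = (A+(-A))\:B = 0\cdot B = 0 \;\Rightarrow\; (-A)\:B = -(A\:B),$$ -$$A\:B + A\:(-B) = A\:(B+(-B)) = A\cdot 0 = 0 \;\Rightarrow\; A\:(-B) = -(A\:B),$$ -hence $(-A)(-B) = -(A\:(-B)) = -(-(A\:B)) = A\:B$.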
Similarly, any theorem that is a truly ring-theoretic result (i.e. is not merely a result about abelian groups or monoids) must employ the distributive law in its proof (though perhaps obscured in some remote lemma). -Analogous remarks hold true for any algebraic structure with multiple operations, e.g. lattices with their intertwining absorption law $\rm\ X = X \vee (X \wedge Y)\ $ and its dual, or distributive lattices (e.g. Boolean algebras) with their distributive law $\rm\ X \vee (Y \wedge Z) = (X\vee Y)\wedge (X\vee Z)\ $ and its dual or, more generally, the important modular law $\rm\ (X\wedge Y)\vee(Y\wedge Z) = Y \wedge ((X\wedge Y)\vee Z)\:.\ $ -The common properties of algebraic structures are studied in universal algebra (or general algebra). For example, one major theme is the study of the role played by properties of the lattices of congruences, e.g. congruence lattices of lattices are distributive, and congruence lattices of groups and rings are modular. These properties play fundamental roles in the theories of these structures.<|endoftext|> -TITLE: How to union many polygons efficiently -QUESTION [18 upvotes]: I've asked this question at SO, but the only answer I got is a non-answer as far as I can tell, so I would like to try my luck here. -Basically, I'm looking for a better-than-naive algorithm for the construction of polygons out of the union of many polygons, each with a list of vertices $V$. The naive algorithm to find the union polygon(s) goes like this: - -First take two polygons, union them, - and take another polygon, union it - with the union of the two polygons, - and repeat this process until every - single piece is considered. Then I will run - through the union polygon list and - check whether there are still some - polygons that can be combined, and I will - repeat this step until a - satisfactory result is achieved. - -Is there a smarter algorithm? -For this purpose, you can imagine each polygon as a jigsaw puzzle piece; when you complete them you will get a nice picture. But the catch is that a small portion (say <5%) of the jigsaw is missing, and you are still required to form a picture as complete as possible; that's the polygon (or polygons) -- maybe with holes -- that I want to form. -Note: I'm not asking about how to union two polygons, but rather -- given that I know how to union two polygons -- how to union $n$ polygons (where $n \gg 2$) in an efficient manner. -Also, all the polygons can share edges, and a polygon's edge can be shared by one or many other polygons' edges. Polygons don't overlap one another. - -REPLY [10 votes]: Martin Davis describes an approach on his blog which he calls "Cascading Union". -The approach is to traverse a spatial index like an R-tree, to union polygons that are likely to overlap or touch, which gets rid of a lot of internal vertices. The naive approach might not reduce the number of vertices at all between two iterations... -Martin Davis's description (snippet): - -This can be thought of as a post-order traversal of a tree, where the - union is performed at each interior node. If the tree structure is - determined by the spatial proximity of the input polygons, and there - is overlap or adjacency in the input dataset, this algorithm can be - quite efficient. This is because union operations higher in the tree - are faster, since linework is "merged away" from the lower levels.
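-A minimal sketch of that tree-style reduction, assuming nothing beyond the pairwise union primitive the question already grants (union2 and the sort key are illustrative names, not a specific library's API): -def cascaded_union(polygons, union2, key): -    # Sort so that list neighbors are likely spatial neighbors -- a crude -    # stand-in for a real spatial index such as an R-tree. -    level = sorted(polygons, key=key) -    while len(level) > 1: -        merged = [] -        # Union adjacent pairs; shared linework is merged away early. -        for i in range(0, len(level) - 1, 2): -            merged.append(union2(level[i], level[i + 1])) -        if len(level) % 2: -            merged.append(level[-1])  # carry the unpaired polygon upward -        level = merged -    return level[0] - -Each pass halves the list, so there are $O(\log n)$ levels, and the expensive unions near the root operate on geometry whose interior vertices were already removed lower down.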
- - -Complexity -I don't know the exact complexity of the algorithm, but it could be similar to that of a sweep line algorithm, since the complexity depends on the number of vertices remaining at each step. -See also the full description of the Cascading Union algorithm on Martin Davis's blog.<|endoftext|> -TITLE: Infinite product representation of a function in terms of its non-trivial zeroes? -QUESTION [8 upvotes]: From Wikipedia's Weierstrass Factorization Theorem, I learned that every entire function can be represented as a product involving its zeroes. Examples are the sine and cosine function. The Riemann zeta function, however, is not entire. -Let us assume the Riemann Hypothesis. Can $\zeta(s)$ be represented by an infinite product involving both its trivial zeroes at $s=-2n$ (for $n \in \mathbb{N}$) and its non-trivial zeroes at $s=\frac{1}{2} + i t$? -Thanks, Max - -REPLY [13 votes]: The standard approach to this problem is to write -$\xi(s) = \dfrac{s(s-1)}{2}\pi^{-s/2}\Gamma(s/2)\zeta(s).$ This function is entire, and has zeroes precisely at the non-trivial zeroes of $\zeta(s)$. It also has a slow enough rate of growth that it can be written as a product over its zeroes: -$$\xi(s) = \xi(0) \prod_{\rho}(1-\dfrac{s}{\rho}),$$ -where the product is over zeroes of $\xi(s)$, i.e. over non-trivial zeroes of $\zeta(s)$. (Here I am following more-or-less the notation in Edwards's book Riemann's Zeta Function, which is a good reference for this sort of thing.) -[Edit: Also, to ensure convergence, the product should be taken over "matching pairs" of zeroes, i.e. the factors for a pair of zeroes of the form $\rho$ and $1-\rho$ should be combined.] -We can then write -$$\zeta(s) = \dfrac{2\xi(0)}{s(s-1)}\pi^{s/2}\dfrac{1}{\Gamma(s/2)}\prod_{\rho}(1-\dfrac{s}{\rho}).$$ -If we now replace $\dfrac{1}{\Gamma(s/2)}$ by its Weierstrass product, we get -a product formula for $\zeta(s)$, namely -$$\zeta(s) = \dfrac{\xi(0)}{s-1} (\pi e^{\gamma})^{s/2}\prod_{n=1}^{\infty}(1 +\dfrac{s}{2n})e^{-s/2n} -\prod_{\rho}(1-\dfrac{s}{\rho}).$$ -(Here $\gamma$ is Euler's constant.) -Note that we can now compute $\xi(0)$, because we know that the value of $\zeta(s)$ at $s = 0$ is equal to $-1/2$. We find that $\xi(0) = 1/2$, -and so -$$\zeta(s) = \dfrac{1}{2(s-1)}(\pi e^{\gamma})^{s/2}\prod_{n=1}^{\infty}(1+\dfrac{s}{2n})e^{-s/2n}\prod_{\rho}(1-\dfrac{s}{\rho}).$$ -An added cultural remark: Riemann's explicit formula for the prime counting function is obtained by taking a Fourier transform of the logarithm of this formula. Combined with the fact that all the non-trivial zeroes $\rho$ have real part $< 1$ (proved by Hadamard and de la Vallée Poussin), this gives the prime number theorem. - -REPLY [3 votes]: (Would be a comment, but I don't have the reputation.) -The Riemann zeta function is certainly not entire: it has a simple pole at $s = 1$. It is, however, meromorphic.<|endoftext|> -TITLE: Sine values being rational -QUESTION [8 upvotes]: Can $$\sin r\pi $$ be rational if $r$ is irrational? Either a direct or existence proof is fine. - -REPLY [12 votes]: As J. M. said, Niven's theorem does it. There is some $r$ such that $\sin (r\pi) = \frac{1}{3}$. As $\sin (r\pi)$ is rational and not $0, \pm1, \pm \frac{1}{2}$, $r$ is not rational.<|endoftext|> -TITLE: sylow subgroup of a subgroup -QUESTION [5 upvotes]: Let $p$ be a prime and $H$ a subgroup of a finite group $G$. Let $P$ be a Sylow $p$-subgroup of $G$. Prove that there exists $g\in G$ such that $H\cap gPg^{-1}$ is a Sylow $p$-subgroup of $H$.
-I have no idea how to do this, any hints? -Note: Originally it was unclear if the problem was for possibly infinite groups or just finite ones. However, since the definition of $p$-Sylow subgroup being used is that it is a $p$-subgroup such that the index and the order are relatively prime, the definition only applies to finite groups. - -REPLY [8 votes]: Let $G$ be the direct product of countably many copies of the dihedral group $D$ of order 6 (or, if you prefer, $D$ is the symmetric group $S_3$). -We can construct a Sylow $2$-subgroup of $G$ by choosing Sylow $2$-subgroups of each of the direct factors of $G$, and taking their direct product. Since $D$ has three Sylow $2$-subgroups, $G$ has uncountably many Sylow $2$-subgroups, so they cannot all be conjugate in the countable group $G$. -If we let $P$ and $H$ be non-conjugate Sylow $2$-subgroups of $G$, then there is no $g \in G$ such that $H \cap gPg^{-1} \in {\rm Syl}_2(H)$.<|endoftext|> -TITLE: Prime powers, patterns similar to $\lbrace 0,1,0,2,0,1,0,3\ldots \rbrace$ and formulas for $\sigma_k(n)$ -QUESTION [43 upvotes]: Some time ago, when decomposing the natural numbers, $\mathbb{N}$, into prime powers, I noticed a pattern in their powers. Taking, for example, the numbers $\lbrace 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16 \rbrace$ and factorizing them, we get -$$\begin{align} 1&=2^0\times 3^0\times 5^0\times 7^0\times 11^0\times 13^0\times\ldots \\ 2&=2^1\times 3^0\times 5^0\times 7^0\times 11^0\times 13^0\times\ldots \\3&=2^0\times 3^1\times 5^0\times 7^0\times 11^0\times 13^0\times\ldots \\ 4&=2^2\times 3^0\times 5^0\times 7^0\times 11^0\times 13^0\times\ldots \\ 5&=2^0\times 3^0\times 5^1\times 7^0\times 11^0\times 13^0\times\ldots \\ 6&=2^1\times 3^1\times 5^0\times 7^0\times 11^0\times 13^0\times\ldots \\ 7&=2^0\times 3^0\times 5^0\times 7^1\times 11^0\times 13^0\times\ldots \\ 8&=2^3\times 3^0\times 5^0\times 7^0\times 11^0\times 13^0\times\ldots \\ 9&=2^0\times 3^2\times 5^0\times 7^0\times 11^0\times 13^0\times\ldots \\ 10&=2^1\times 3^0\times 5^1\times 7^0\times 11^0\times 13^0\times\ldots \\ 11&=2^0\times 3^0\times 5^0\times 7^0\times 11^1\times 13^0\times\ldots \\ 12&=2^2\times 3^1\times 5^0\times 7^0\times 11^0\times 13^0\times\ldots \\ 13&=2^0\times 3^0\times 5^0\times 7^0\times 11^0\times 13^1\times\ldots \\ 14&=2^1\times 3^0\times 5^0\times 7^1\times 11^0\times 13^0\times\ldots \\ 15&=2^0\times 3^1\times 5^1\times 7^0\times 11^0\times 13^0\times\ldots \\ 16&=2^4\times 3^0\times 5^0\times 7^0\times 11^0\times 13^0\times\ldots \\\end{align}$$ -Now if we look at the powers of $2$ we will notice that they are $$\lbrace f_2(n)\rbrace=\lbrace 0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4\rbrace$$ and for the powers of $3$ we have $$\lbrace f_3(n)\rbrace=\lbrace 0,0,1,0,0,1,0,0,2,0,0,1,0,0,1,0\rbrace$$ -This, of course, is a well known fact. -Since then I wondered if there was a formula for $f_2(n)$ or $f_3(n)$ or $f_p(n)$, with $p\in \mathbb{P}$. It seemed impossible but I was able to devise the suitable formulas.
They are -$$\displaystyle\begin{align} f_2(n)=\sum_{r=1}^{\infty}\frac{r}{{2^{r+1}}}\sum_{k=0}^{2^{r+1}-1}\cos\left( \frac{2k\pi(n+2^{r})}{2^{r+1}} \right)\end{align}$$ -and for the general case we have -$$\displaystyle f_p(n)=\sum_{r=1}^{\infty}\frac{r}{p^{r+1}}\sum_{j=1}^{p-1}\left(\sum_{k=0}^{p^{r+1}-1}\cos\left( \frac{2k\pi(n+(p-j)p^{r})}{p^{r+1}} \right)\right)$$ -If one cares to analyse the formula for $f_p(n)$ it can be concluded that it need not be restricted to the prime numbers, so that we have $f_m(n)$, $m \in \mathbb{N}$, and similar patterns for $\lbrace f_m(n)\rbrace$ will result. Now, the wonderful thing is that we can express the arithmetical divisor functions $\sigma_k(n)$ in terms of $f_m(n)$ as follows -$$\displaystyle \sigma_a(n)=1+\sum_{m=2}^{\infty}\sum_{r=1}^{\infty}\frac{m^{a}}{m^{r+1}}\sum_{j=1}^{m-1}\left(\sum_{k=0}^{m^{r+1}-1}\cos\left( \frac{2k\pi(n+(m-j)m^{r})}{m^{r+1}} \right)\right)$$ -And, if we consider the divisor summatory function, $D(n)$, as -$$D(n)=\sum_{m \leq n}d(m)$$ -with $$d(n)=\sigma_{0}(n)=\sum_{d|n}1$$ -we can express $D(n)$ as -$$D(n)=\sum_{m=2}^{\infty}\sum_{r=1}^{\infty}\frac{r}{m^{r+1}}\sum_{j=1}^{m-1}\left(\sum_{k=0}^{m^{r+1}-1}\cos\left( \frac{2k\pi(2^{n}+(m-j)m^{r})}{m^{r+1}} \right)\right)$$ -Now, we know that $d(n)$ and $D(n)$ are related to the Riemann zeta-function by -$$\zeta^{2}(z)=\sum_{n=1}^{\infty}\frac{d(n)}{n^{z}}$$ -and -$$\zeta^{2}(z)=z\int_{1}^{\infty}\frac{D(x)}{x^{z+1}}dx$$ -Now, my questions - -What can we say about the convergence of $f_m(z)$, $\sigma_a(z)$ and $D(z)$ with $z \in \mathbb{C}$? We can see that they converge for $z \in \mathbb{N}$. -I think that $\sigma_a(z)$ and $D(z)$ are only curiosities and aren't interesting in the context of the Riemann zeta-function because they are hard to compute. What do you think? -Are the formulas $f_m(z)$, $\sigma_a(z)$ and $D(z)$ original? I think they are. I'd like to know if anyone has found something like this before. I've posted this as an answer to this post some time ago. -Finally, is this interesting enough to publish somewhere? I'm just an amateur... - -To conclude, I'd like to apologise for presenting all these formulas without showing how I got them, but you can consider this previous post of mine and the question Greatest power of two dividing an integer, Difficult Infinite Sum and On the 61-st, the 62-nd, and the 63-rd Smarandache's problem page 38. -And now a challenge: can you present a formula for the characteristic function of the prime numbers? -EDIT: -I'm answering my challenge and leaving another. Considering that the characteristic function of the primes, $u(n)$, is given by -$$ -\begin{equation} -u(n)=\begin{cases} -&1\;\;\;\text{ if } n \in \mathbb{P} \\ -&0\;\;\;\text{ if } n \notin \mathbb{P} -\end{cases} -\end{equation} -$$ -I have found that $u(n)$ is given by the following formula -$$ -\begin{equation} -u(n)=\prod_{m=2}^{\infty}\;\;\prod _{r=1}^{\infty} \left\{1-\frac{1}{m^{r+1}} \sum _{j=1}^{m-1}\;\;\;\sum _{k=0}^{m^{r+1}-1} \cos\left(2 k \pi \cdot\frac{n-m+(m-j) m^r }{m^{r+1}}\right)\right\} -\end{equation} -$$ -Now, in the same spirit, what is a formula for the prime counting function, $\pi(x)$? - -REPLY [10 votes]: These formulas are convoluted, and look to me like probably useless representations of the p-adic order of an integer.
All you seem to have done is taken advantage of the fact that: $$1_{p^j\mid n}=\frac{1}{p^j}\sum_{t=0}^{p^j-1}e^{2\pi i t\frac{n}{p^j}}$$ And then rewritten it without the imaginary part of each root of unity, so you got a sum of cosines, and then summed over the powers $j$, so that it counted the multiplicity of $p$ in a given integer, which you gave as a complex double sum. In addition I doubt this is of any use in a computational context, since: $$p^{v_p(n)}=\gcd(p^{\lfloor \log_p(n) \rfloor},n)$$ which can be calculated very fast using the Euclidean algorithm. -Also, just to show you that it is not hard to obtain results about this function: $$\frac{\zeta(s)}{p^s-1}=\sum_{n=1}^\infty\frac{v_p(n)}{n^s}=\sum_{n=1}^\infty\frac{f_p(n)}{n^s}$$ -$$\sum_{k=1}^n v_p(k)=\sum_{k=1}^nf_p(k)=\sum_{j=1}^{\lfloor \log_p(n) \rfloor}\lfloor \frac{n}{p^j} \rfloor$$ -$$\sum_{k=1}^np^{v_p(k)}=\sum_{k=1}^np^{f_p(k)}=n+(1-\frac{1}{p})\sum_{j=1}^{\lfloor \log_p(n) \rfloor}p^j\lfloor \frac{n}{p^j} \rfloor$$ -And in general if we define for a fixed prime $p$ and an arbitrary function $g$ the function $\delta_p$: $$ -\delta_p(k) \stackrel{}{=} -\begin{cases} -g(j)-g(j-1) & \text{if } k = p^j \text{ with } j\ge 1, \\ -g(0) & \text{if } k=1 \\ -0 & \text{otherwise} -\end{cases} -$$ -so that $\sum_{d\mid k}\delta_p(d)=g(v_p(k))=g(f_p(k))$, -then we have for arbitrary functions $g$ and $h$: -$$\sum_{k=1}^ng(f_p(k))h(k)=\sum_{k=1}^n(\sum_{d\mid k}\delta_p(d))h(k)=\sum_{k=1}^n\delta_p(k)\sum_{m=1}^{\lfloor n/k\rfloor}h(mk)$$ -$$=g(0)\sum_{m=1}^nh(m)+\sum_{j=1}^{\lfloor \log_p(n) \rfloor}(g(j)-g(j-1))\sum_{m=1}^{\lfloor n/p^j \rfloor}h(mp^j)$$ Here a sum with upper index $n$ has been turned into a sum with an upper index of $O(\ln(n))$, thus allowing any sort of sum involving the p-adic order (or, as you are referring to it, the function $f_p$) to be calculated exponentially faster than the sums you use just to represent one value of $f_p$. -So I wouldn't say your formulas are original, nor would I say they are very practical. Though for whatever my opinion is worth (probably not much), I would say that despite this, it is still great you are exploring and finding new things that interest you.<|endoftext|> -TITLE: How does (wikipedia's axiomatization of) intuitionistic logic prove $p \rightarrow p$? -QUESTION [5 upvotes]: I'm looking at Wikipedia's article on intuitionistic logic, and I can't figure out how it would prove $(p \rightarrow p)$. -Does it prove $(p \rightarrow p)$? If yes, how? -If no, is there a correct (or reasonable, if there is no "correct") axiomatization of intuitionistic logic available anywhere online? - -REPLY [8 votes]: 1. $p→(p→p)$ (THEN-1) -2. $p→((p→p)→p)$ (THEN-1) -3. $(p→((p→p)→p))→((p→(p→p))→(p→p))$ (THEN-2) -4. $(p→(p→p))→(p→p)$ (MP from lines 2,3) -5. $(p→p)$ (MP from lines 1,4)<|endoftext|> -TITLE: What is the origin of the term "Differentiable"? -QUESTION [6 upvotes]: I was wondering today why the word differentiable is used for describing functions that have a derivative. -Perhaps because originally one considered finite differences? But that seems somewhat not right, because roughly speaking a derivative measures not the difference $f(x+h)-f(x)$, but rather the ratio $(f(x+h)-f(x))/h$. -So, could people here shed light on why we use "differentiable"? Any pointers to academic / historical / etymological explanations are also welcome. Thanks!
- -REPLY [10 votes]: From the Earliest Known Uses of Some of the Words of Mathematics webpage: - -DIFFERENTIAL CALCULUS. The term calculus differentialis was introduced by Leibniz in 1684 in Acta Eruditorum 3. Before introducing this term, he used the expression methodus tangentium directa (Struik, page 271). The OED has a nice quotation from Joseph Raphson's Mathematical Dictionary of 1702: "A different way....passes....in France under the Name of Leibnitz's [sic] Differential Calculus, or Calculus of Differences."<|endoftext|> -TITLE: The word problem for finite groups -QUESTION [13 upvotes]: The word problem for finite groups is decidable. Is it obvious that this is true? -In particular, I'm not entirely sure about what it means for the problem to be decidable (in this case---I think I understand what decidable means in general). I assume it means that we are given a fixed group G (do we have to assume this?), but is the generating set (the letters) also fixed? -To decide the word problem for the group of symmetries of a square, with rotation $r$ and reflection $t$ as the generators, I would first find a canonical form for the group elements $1,r,r^2,r^3,t,tr,tr^2,tr^3$, and then note that, using the relations $r^4=1$ and $rt=tr^3$, any word can be reduced to something on my list. However, this seems like a lot of work, and it's not obvious to me what should be done in the case of an arbitrary finite group. - -REPLY [2 votes]: The easiest way to see that a finite group has solvable word problem is to notice that the solution is not required to be uniform over all finite groups. Given a finite group (presentation) there is an algorithm that takes that group (presentation) as input but ignores it. The algorithm then uses the group table, which is hard-coded into the algorithm, to reduce the word. The algorithm reduces the word by always replacing the product of the first two elements with a single element until there's just one element remaining.<|endoftext|> -TITLE: Why not write the solutions of a cubic this way? -QUESTION [8 upvotes]: For the solution of the cubic equation $x^3 + px + q = 0$ Cardano wrote it as: -$$\sqrt[3]{-\frac{q}{2} + \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}}+\sqrt[3]{-\frac{q}{2} - \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}}.$$ -but this is ambiguous because it does not tell you which cube roots to match up. Why don't people write it this way today: -$$\sqrt[3]{-\frac{q}{2} + \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}}+\frac{-p}{3\sqrt[3]{-\frac{q}{2} + \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}}}$$ -which is unambiguous. - -REPLY [10 votes]: Since no one has posted an answer and since my comment is a sort of tangential answer (and relevant to another question): -In UCSMP Precalculus and Discrete Mathematics, 3rd edition, p553 (in the "exploration" question), the cubic formula is given in a form analogous to what you describe (though it is for the general monic cubic, not the depressed cubic). It is given that way for the reason you describe—in particular, because of the way that most calculators and computer algebra systems define the principal root, the "traditional" way of writing the formula does not always yield correct results when computing blindly with technology. The formula as printed in the first printing run of PDM is actually missing a term, though it should be correct in subsequent printing runs.
The correct formula reads: -Let $$A=\frac{\sqrt[3]{-2p^3+9pq-27r+3\sqrt{3}\sqrt{-p^2q^2+4q^3+4p^3r-18pqr+27r^2}}}{3\sqrt[3]{2}}$$ and $$B=\frac{-p^2+3q}{9A}.$$ Then, $$x_1=-\frac{p}{3}+A-B,$$ $$x_2=-\frac{p}{3}+\frac{-1-i\sqrt{3}}{2}A-\frac{-1+i\sqrt{3}}{2}B,$$ and $$x_3=-\frac{p}{3}+\frac{-1+i\sqrt{3}}{2}A-\frac{-1-i\sqrt{3}}{2}B$$ are the solutions to $$x^3+px^2+qx+r=0.$$<|endoftext|> -TITLE: arc-arc intersection, arcs specified by endpoints and height -QUESTION [11 upvotes]: I need to compute the intersection(s) between two circular arcs. Each arc is specified by its endpoints and its height. The height is the perpendicular distance from the chord connecting the endpoints to the middle of the arc. I use this representation because it is numerically robust for very slightly bent arcs, as well as straight line segments, for which the height is zero. In these cases, representing an arc using the center of its circle could lead to the center being far far away from the endpoints of the arc, and hence, numerically unstable. -My question at the highest level is how I would go about computing the intersection points, given that the centers of the circles of the arcs cannot necessarily be computed robustly. At a lower level, I am wondering if there is a parameterization of an arc using only the information I have stated above (which does not include the circle center). Of course, keep in mind numerical robustness is my principal concern here; otherwise I would just do the naive thing and compute the circle center for all non-linear arcs and hope for the best. -Edit: Formula for computing center of circle of arcs: -Suppose the chord length is $2t$, and the height is $h$. The distance from the chord to the circle center is $c$, so that $r=h+c$. Then it follows that $c=(t^2-h^2)/(2h)$, which breaks down when $h$ is very small. Computing the location of the circle center is some simple vector arithmetic using the chord vector and its perpendicular. - -REPLY [5 votes]: Have you considered finding the intersections using an implicit form for the circles, $$\frac{x^2}{r^2} + \frac{y^2}{r^2} + ax + by + c = 0?$$ This representation doesn't have any coefficients that diverge as the circle approaches a straight line. To find intersections, you'll have to solve a quadratic equation whose leading coefficient could be zero or arbitrarily close to it, but the alternative form of the quadratic formula should be able to deal with that robustly. -You'll then have to do some jiggery-pokery to figure out whether the intersection points lie within the arcs. If the arc's bending angle is smaller than $\pi$, a projection onto the line joining the endpoints will suffice. -(Disclaimer: While all of this feels like it should work, I haven't analyzed it in any detail. Also, there could still be a problem when the circle is close to a line and you want the longer arc. But I can't imagine that's a case that would turn up in any practical application.) -Update: For a concrete example, here is the equation for a circular arc passing through the three points $(0,0)$, $(0.5, h)$, and $(1,0)$: $$\kappa^2 x^2 + \kappa^2 y^2 - \kappa^2 x - 2\eta y = 0,$$ where $$\begin{align}\kappa &= \frac{8h}{4h^2 + 1}, \\ \eta &= \frac{8h(4h^2-1)}{(4h^2+1)^2}.\end{align}$$ As you can see, the coefficients remain bounded as $h \to 0$. -Update 2: Wait, that equation becomes trivial if $h = 0$, which is bad. We really want something like $x^2/r + y^2/r + ax + by + c,$ i.e. multiply the previous expression through by $r$.
Then for the same example, our equation becomes $$\kappa x^2 + \kappa y^2 - \kappa x - 2\eta' y = 0,$$ where $\eta' = (4h^2-1)/(4h^2+1)$. Here are some explicit values. -$h = 1/2$: $$2 x^2 + 2 y^2 - 2 x = 0,$$ $h = 0.01$: $$0.07997 x^2 + 0.07997 y^2 - 0.07997 x + 1.998 y = 0,$$ $h = 0$: $$2 y = 0.$$ -By the way, in this format, the linear terms will always be simply $-2(x_0/r)x$ and $-2(y_0/r)y$, where the center of the circle is at $(x_0,y_0)$. As the center goes to infinity but the endpoints remain fixed, these coefficients remain bounded and nonzero (i.e. not both zero).<|endoftext|> -TITLE: How is addition defined? -QUESTION [20 upvotes]: I've been reading On Numbers and Games and I noticed that Conway defines addition in his number system in terms of addition. Similarly in the analysis and logic books that I've read (I'm sure that this is not true of all such books) how addition works is assumed. From what I understand the traditional method of building the number system begins with the natural numbers (and zero) -$0:=|\emptyset|$ -$1:=|\{\emptyset\}|$ -$2:=|\{\emptyset,\{\emptyset\}\}|$ -and so forth. In this construction addition could(?) be defined as the disjoint union of the sets associated with the two numbers. Then the integers could be defined via additive inverses and so forth. Is this the ideal way to do it, though? Is there a more elegant method? - -REPLY [7 votes]: An extremely elegant way of defining numbers I want to mention does not use sets but lambda calculus, i.e. functions and function application. The system used there is called Church encoding. -The idea is the following: Two, for example, means doing something twice. -More precisely, when we have some operation (a function) and a value, we apply this function twice to this value. In lambda notation -$$ 2 \equiv \lambda f\,x \mapsto f (f\,x) $$ -So generally, any number $n$ is defined as a function that takes another function and returns its $n$th iterate. -Now we can simply define addition in terms of function composition. -We first apply the function $m$ times, then $n$ more times, and thus we get a total of $n+m$ applications. -$$ n + m \equiv \lambda f\,x \mapsto n f (m f x)$$ -For example, we end up with -$$ 2 + 3 \equiv \lambda f\,x \mapsto f (f (f (f (f\,x)))) \equiv 5 $$<|endoftext|> -TITLE: When is a function satisfying the Cauchy-Riemann equations holomorphic? -QUESTION [30 upvotes]: It is, of course, one of the first results in basic complex analysis that a holomorphic function satisfies the Cauchy-Riemann equations when considered as a differentiable two-variable real function. I have always seen the converse as: if $f$ is continuously differentiable as a function from $U \subset \mathbb{R}^2$ to $\mathbb{R}^2$ and satisfies the Cauchy-Riemann equations, then it is holomorphic (see e.g. Stein and Shakarchi, or Wikipedia). Why is the $C^1$ condition necessary? I don't see where this comes into the proof below. -Assume that $u(x,y)$ and $v(x,y)$ are continuously differentiable and satisfy the Cauchy-Riemann equations. Let $h=h_1 + h_2i$. Then -\begin{equation*} -u(x+h_1, y+h_2) - u(x,y) = \frac{\partial u}{\partial x} h_1 + \frac{\partial u}{\partial y}h_2 + o(|h|) -\end{equation*} -and -\begin{equation*} -v(x+h_1, y+h_2) - v(x,y) = \frac{\partial v}{\partial x} h_1 + \frac{\partial v}{\partial y} h_2 + o(|h|). -\end{equation*} -Multiplying the second equation by $i$ and adding the two together gives -\begin{align*} -(u+iv)(z+h)-(u+iv)(z) &= \frac{\partial u}{\partial x} h_1 + i \frac{\partial v}{\partial x} h_1 + \frac{\partial u}{\partial y} h_2 + i \frac{\partial v}{\partial y} h_2 + o(|h|)\\ - &= \left( \frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x} \right) (h_1+i h_2) + o(|h|). -\end{align*} -Now dividing by $h$ gives us the desired result. -Does there exist a differentiable but not $C^1$ function $f: U \rightarrow \mathbb{R}^2$ which satisfies the Cauchy-Riemann equations and does NOT correspond to a complex-differentiable function? - -REPLY [3 votes]: Thinking of the Cauchy-Riemann operator as an elliptic partial differential operator, the basic elliptic regularity result implies that any distribution satisfying the C-R equation is a holomorphic function. For example, locally integrable suffices. This result was used in Gunning's "Riemann Surfaces", for example, in the discussion of Serre duality.<|endoftext|> -TITLE: Spivak's Calculus exercise. Chapter 10, Problem 27 -QUESTION [7 upvotes]: Suppose that $f$ is differentiable at - 0, and that $f(0) = 0$. Prove that - $f(x) = xg(x)$ for some function $g$ - which is continuous at 0. - -This is a problem from Spivak's Calculus, namely problem 27 of Chapter 10. (This is not homework, but rather self-study.) I am not sure how to go about this proof. The hint given in the text is to consider that $g(x)$ can be written as $f(x)/x$, but this puzzles me, because then continuity of $g$ at 0 says that $\lim_{x \to 0} g(x) = g(0) = f(0)/0 = 0/0$. - -REPLY [7 votes]: What is $\displaystyle \lim_{x \to 0} \frac{f(x) - f(0)}{x-0}$?<|endoftext|> -TITLE: Permutation with Duplicates -QUESTION [10 upvotes]: I could swear I had a formula for this years ago in school, but I'm having trouble tracking it down. The problem: -I have $3$ red balls and $3$ black balls in a basket. I draw them out one at a time. How many different sequences of six balls can I get, i.e. -rbrbrb -rrrbbb -bbbrrr -etc... - -I'm looking for a general formula for $n$ red and $m$ black. This is not homework, just my aging brain trying to recover an abstract formula. - -REPLY [11 votes]: Place the $n+m$ balls in a row; then pick which $m$ you want to be black. You have $\binom{n+m}{m}=\frac{(n+m)!}{n!m!}$ possible ways of doing it, so that is the formula in this case. -Here's an alternative way of thinking about it: place the $n$ red in a row; now you need to decide where to insert the $m$ black. What you want to do is choose the locations of the $m$ black balls; there are $n+1$ possible locations (before all the red balls, in the $n-1$ spaces between red balls, and after all the red balls). You want to pick those allowing repetitions, and without regard to the order in which you pick them (what matters is how many times you pick each gap). -The number of ways in which you can select $r$ items from $k$ possibilities, allowing repetitions and without regard to order (combinations with repetitions) is $\binom{k+r-1}{r}$. -So here, selecting $m$ out of $n+1$ possibilities gives $\binom{n+m}{m} = \frac{(n+m)!}{m!n!}$, same as before. -To see a derivation of the formula, see for example Wikipedia's article on combinations.
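-A quick brute-force check of that count for the $3+3$ example (a throwaway snippet, not from the original answer): -from itertools import permutations -from math import comb - -n, m = 3, 3  # red and black balls -sequences = {''.join(p) for p in permutations('r' * n + 'b' * m)} -print(len(sequences), comb(n + m, m))  # both print 20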
This is sometimes also called the "stars and bars problem", so you can find a proof on the corresponding page.<|endoftext|> -TITLE: Find points along a Bézier curve that are equal distance from one another -QUESTION [12 upvotes]: I'm trying to figure out a generic way of determining a series of points on a Bézier curve where all the points are the same distance from their neighboring points. By distance I mean direct distance between the points, not distance along the curve. See the image below - -I've written a program that will solve this in an iterative fashion, but I'm wondering if there is a direct solution I could use. -My program starts by defining a circle of some initial radius (R) centered on the start point of the curve. It then intersects this circle with the curve to find the second point (which is R distance away from the start point). It then continues along the curve in this way finding points until it reaches the end of the curve. In most cases the distance between the last intersection point and the end point of the curve will not be equal to R. The program then uses that difference to calculate a new value of R to try, and repeats the process. - -REPLY [4 votes]: So you want the algorithm to "evenly divide" some given Bézier curve into a sequence of n points, such that traveling along the Bézier curve you hit each point in the same order, and such that the direct Euclidean distance from each point to the next point (the chord distance, not the arc distance along the Bézier curve) is the same? -I suspect that you might converge on the desired points a little faster if you pick n "t" values along the curve, and nudge all of them on every iteration -- -rather than trying to fix one and going on to the next, and then throwing them all away and starting over every time you pick a new R value. -Say, for example, you want to divide the curve up into n = 100 points. -The first point is going to stay fixed at $t_0$=0 and the last point is going to stay fixed at $t_{100}$=1. I'd probably start out at equally spaced t values, -$t_0=0; t_1=1/100; t_2=2/100; ...; t_{99}=99/100; t_{100} = 1.$ -At each iteration, calculate the distances between neighboring points, then nudge 98 of the t values (being careful not to cross them) to make the distances more equal, -sooner or later converging on practically equal distances. -Off the top of my head, I seem to recall 2 methods for nudging those intermediate t values. -Let me call the easier-to-understand method "the Babylonian method", and the faster-converging method "the false-position method". (Is there a better name for these techniques?) -the Babylonian method -Nudge each point closer to halfway between its neighbors. -$d_{7,8}$ = distance( B($t_7$), B($t_8$) ) -$r_8$ = (1/2) ( $d_{8,9} - d_{7,8}$ ) / ( $d_{7,8} + d_{8,9}$ ) -Because distances are always positive, the ratio values $r$ are always in the range of -1/2 to +1/2 -- zero if $B(t_8)$ has equal distances from $B(t_7)$ and $B(t_9)$, -1/2 if $t_8$ needs to move "halfway" back to $t_7$, +1/2 if $t_8$ needs to be moved "halfway" to $t_9$. -If $r_8$ is positive, -$newt_8 = t_8 + r_8(t_9 - t_8)$ -If $r_8$ is negative, -$newt_8 = t_8 + r_8(t_8 - t_7)$ -the false position method -Each iteration, calculate the accumulated straight-line distance from the start point to the current location of each point B($t_0$) ... B($t_{100}$). -Nudge each point closer to its "proper" fraction of the distance along that entire accumulated distance.
-For example, say (after a few iterations) we find that the accumulated distance from the start point $B(t_0)$ to point $B(t_8)$ is $c_8$ = 6.9, while the accumulated distance to the next point $B(t_9)$ is $c_9$ = 11.5, and the accumulated straight-line distance from the start to the end is $c_{100}$ = 101.4. -We need to make $t_8$ larger and $t_9$ smaller. -Using the false position method to estimate new values for all 98 "t" values, we get -error8L = 8/100 - $c_8 / c_{100}$ -error8R = $c_9 / c_{100}$ - 8/100 -If error8R is zero, it may help to push t8 all the way to t9. If error8L and error8R are both positive and approximately equal, it may help to push t8 halfway to t9. -$r_8$ = error8L / ( error8R + error8L ) -The r value is the proportion of the distance to travel from t8 to t9 -- zero if t8 doesn't need to change this iteration, +1 if t8 needs to be moved all the way to t9. -$newt_8$ = $t_8$ + $r_8$($t_9$ - $t_8$). -If the r value along the segment from the current point to the next point is negative, we need to re-calculate a positive r value along the segment from the previous point to the current point, as we do for $t_9$ in this example: -error9L = 9/100 - $c_8 / c_{100}$ -error9R = $c_9 / c_{100}$ - 9/100 -$r_9$ = error9R / ( error9R + error9L ) -$newt_9$ = $t_9$ - $r_9$($t_9$ - $t_8$). -One way to ensure we never cross the points is to clip each r value (whether in the forward or backward direction) to some maximum value slightly less than 1/2. -In our example, we need to slide $t_8$ a fraction we estimate as $r_8$ = 0.26 of the way "forward" from $t_8$ to $t_9$, -while we need to slide $t_9$ a fraction we estimate as $r_9$ = 0.51 (perhaps clipped to 0.499) of the way "backward" from $t_9$ to $t_8$. -Alas, I've pulled these progressive refinement methods out of some dim memories. -Perhaps something to do with building sine/cosine tables in one-degree increments? -I hope some kind commenter will remind me of the actual name of these methods, if (as is likely) I've mis-remembered the name or some crucial detail.<|endoftext|> -TITLE: Show that product of primes, $\prod_{k=1}^{\pi(n)} p_k < 4^n$ -QUESTION [32 upvotes]: This is an interesting problem my friend has been working on for a while now (I just saw it an hour ago, but could not come up with anything substantial besides some PMI attempts). -Here's the full problem: -Let $x_{1}, x_{2}, x_{3}, \cdots x_{y}$ be all the primes that are less than a given $n>1,n \in \mathbb{N}$. -Prove that $$x_{1}x_{2}x_{3}\cdots x_{y} < 4^n$$ -Any ideas very much appreciated! - -REPLY [11 votes]: I'd just like to point out that this argument is in Hardy and Wright, An Introduction to the Theory of Numbers, with the slight difference that they avoid the use of the floor and ceiling functions, and finish off (quite nicely, in my opinion) with induction. -I'll type it here, to save you looking it up.
-Theorem: $\theta(n) < 2n \log 2$ for all $ n \ge 1,$ where $$\theta(x) = \log \prod_{p \le x} p.$$ -Let $M = { 2m+1 \choose m},$ an integer, which occurs twice in the expansion of $(1+1)^{2m+1}$ and so $2M < 2^{2m+1}.$ Thus $M< 2^{2m}.$ -If $m+1 < p \le 2m+1,$ $p$ divides $M$ since it divides the numerator, but not the denominator, of $ { 2m+1 \choose m } = \frac{(2m+1)(2m)\ldots(m+2)}{m!}.$ -Hence -$$\left( \prod_{m+1 < p \le 2m+1} p \right) | M $$ -and -$$ \theta(2m+1) - \theta(m+1) = \sum_{m+1 < p \le 2m+1} \log p \le \log M < 2m \log 2.$$ -The theorem is clear for $n=1$ and $n=2,$ so suppose that it is true for all $n \le N-1.$ If $N$ is even, we have -$$ \theta(N)= \theta(N-1) < 2(N-1) \log 2 < 2N \log 2.$$ -If $N$ is odd, $N=2m+1 $ say, we have -$$\begin{align} -\theta(N)=\theta(2m+1) &=\theta(2m+1)-\theta(m+1)+\theta(m+1) \\ -&< 2m \log 2 +2(m+1) \log 2 \\ -&=2(2m+1) \log 2 = 2N \log 2, -\end{align}$$ -since $m+1 < N.$ Hence the theorem is true for $n=N$ and the result follows by induction. -EDIT: It turns out that this proof was discovered by Erdős and another mathematician, Kalmár, independently and almost simultaneously, in 1939. See Reflections, Ramanujan and I, by Paul Erdős.<|endoftext|> -TITLE: Normal ultrafilters and Stationary sets -QUESTION [5 upvotes]: If $\kappa$ is a measurable cardinal, and $\mathcal{U}$ is a normal ultrafilter which is $\kappa$-complete, then $\mathcal{U}$ extends the club filter (i.e. every club is a member of $\mathcal{U}$). -One result is that all the sets in the ultrafilter are stationary. -Now, given a stationary set $S$ such that its complement in $\kappa$ is also stationary (i.e. there is no club subset in $S$), we can choose either to include $S$ or $\kappa\setminus S$ in an extension of an ultrafilter. -I am stuck on showing that I can have a normal ultrafilter with either $S$ or $\kappa\setminus S$ for every stationary $S$. -(I have to prove something else, but it really reduces to this claim.) -Edit: -Since the above statement is false, I will just give the original question: -I have to show that there can be a normal ultrafilter such that the set of measurable cardinals below $\kappa$ is not in the ultrafilter. -If there aren't stationarily many of them, then clearly it's true. And since the limit of $\omega$ measurable cardinals is singular (ergo non-measurable), the only nontrivial case is when there is a stationary set of measurable cardinals. -I prefer hints over partial solutions, and partial solutions over complete solutions. Thanks. - -REPLY [5 votes]: Asaf, what you are saying is false. In $L[\mu]$, for example, there is a unique normal ultrafilter. (Unless I misunderstand. You are saying that for every $S$ stationary-costationary, you can find a normal $U$ with $S\in U$, right? Obviously, any normal $U$ satisfies that either $S$ or $\kappa\setminus S$ is in $U$, which is what you wrote.) - -Edit : This is a nice homework problem. There are several different solutions, and you may want to study them afterward. Here is a way of thinking about this; it is perhaps not the most efficient, but it is very useful. Suppose $U$ on $\kappa$ is normal and concentrates on measurables. Then $\kappa$ is measurable in $M$, where $j:V\to M$ is given by $U$. So there is a normal $U'$ on $\kappa$ in $M$, and $U'$ really is normal (in $V$, not just in $M$). Let $k:V\to N$ be given by $U'$. What can you say about the sizes of $j(\kappa)$ vs $k(\kappa)$?
-Here is another approach: Use induction, use that you are at a measurable (so you have a normal $U$) and "integrate" the measures on small cardinals witnessing the claim. Check that the resulting measure is as you want.
-
-Let me add a couple of details to the first approach (I know it is more elaborate than the other one, but the payoff is worth the effort):
-If $j:V\to M$ is the ultrapower embedding by a normal measure $U$ on $\kappa$, then ${}^\kappa M\subset M$ and from this it is easy to check that if $M\models$"$U'$ is a normal measure on $\kappa$", then $U'$ is a normal measure on $\kappa$ in $V$.
-Now, suppose that $U$ concentrates on measurables. Since the identity represents $\kappa$ in $M$, it follows that $\kappa$ is measurable in $M$, and there is $U'$ as mentioned.
-Essentially, we want to iterate this process: Form the embedding $k:V\to N$ given by $U'$, and if $U'$ concentrates on measurables, then we get $U''$ in $N$ and form $l:V\to P$, etc. We would like this process to stop after finitely many times. For this, we want to associate to a normal measure $U$ on $\kappa$ an ordinal $\alpha_U$ in such a way that if $U'$ is in the ultrapower by $U$, then $\alpha_{U'}<\alpha_U$.
-The easiest way to do this is to set $\alpha_U=j(\kappa)$. If $U'\in M$, we can form in $M$ the ultrapower embedding by $U'$, call $k'$ the result, so $k':M\to M'$, and recall we called $k:V\to N$ the ultrapower of $V$ by $U'$. The point is that $k'(\kappa)=k(\kappa)$, because ${}^\kappa M\subset M$, so all functions $f:\kappa\to\kappa$ are in $M$, and this and $U'$ is all we need to compute the value of the embedding at $\kappa$.
-But $M\models$"$j(\kappa)$ is inaccessible, while $k'(\kappa)<(2^\kappa)^+$", so $k'(\kappa)<j(\kappa)$; that is, $\alpha_{U'}=k(\kappa)=k'(\kappa)<j(\kappa)=\alpha_U$, so the process must terminate after finitely many steps.<|endoftext|>
-TITLE: How to Compare powers without calculating?
-QUESTION [9 upvotes]: Is there any rule for powers so that I can compare which one is greater without actually calculating? For example
-54^53 and 53^54
-23^26 and 26^23
-3^4 and 4^3 (very simple but how without actually calculating)
-
-REPLY [11 votes]: If $a\gt b\gt e$, then $b^a\gt a^b$. To see this, take logs: you want to compare $a \ln b$ with $b \ln a$, or equivalently (dividing both by $ab$) $\frac{\ln b}{b}$ with $\frac{\ln a}{a}$. The function $\frac{\ln x}{x}$ is strictly decreasing for $x>e$, so $\frac{\ln b}{b}>\frac{\ln a}{a}$, i.e. $a\ln b>b\ln a$, and exponentiating gives $b^a\gt a^b$.<|endoftext|>
-TITLE: Pullback-stability for epimorphisms
-QUESTION [8 upvotes]: In category theory, you see the idea of a class of epimorphisms being stable under pullback. For example, in a regular category, the class of regular epimorphisms is closed under pullback. Every place I've seen the notion of pullback-stability, it's always a part of a bigger definition, such as regular category, or Grothendieck topology. Is there some bigger significance to the idea? Is there a theory for pullback-stable classes of epimorphisms?
-
-REPLY [6 votes]: There is a weaker notion than 'class of maps stable under pullback', and that is a coverage, which is a class of maps that are stable under weak pullback - this is like a pullback for which one only requires existence, not the universal property (i.e. being terminal among all cones). This is all you need to define sheaves. Note that you don't need to work with just epimorphisms, and you don't need your category to have any a priori limits.
-Coverages tend to be given by a very small amount of data, as their closures under all the usual operations (composition, isomorphisms, taking sieves) give rise to an equivalent site structure, but a coverage is the minimum you need to specify.
For example, the category of manifolds has a coverage where the covering families are good open covers (every open set in the cover is diffeomorphic to some $\mathbb{R}^n$, and so are all their finite intersections). This gives a site equivalent to the one where covering families are collections of jointly surjective submersions. The first is a lot less data!<|endoftext|>
-TITLE: Does the sum of reciprocals of primes converge?
-QUESTION [20 upvotes]: Is this series known to converge, and if so, what does it converge to (if known)?
-Where $p_n$ is the $n$th prime, and $p_1 = 2$,
-$$\sum\limits_{n=1}^\infty \frac{1}{p_n}$$
-
-REPLY [8 votes]: Let's start with three lemmas:
-
-Suppose $A\subseteq\{1,2,3,\ldots\}$ and $\sum\limits_{n\in A} \dfrac 1 n < \infty$. Then $\sum\limits_{n\in B} \dfrac 1 n <\infty$ where $B$ is the closure of $A$ under multiplication.
-
-The closure of the set of primes under multiplication is all of $\{1,2,3,\ldots\}$.
-
-$\sum\limits_{n=1}^\infty \dfrac 1 n = \infty$.
-
-
-The second lemma is obvious. The third has a number of well-known simple proofs. Here is one of those:
-\begin{align}
-& \frac 1 1 + \frac 1 2 + \frac 1 3 + \frac 1 4 + \frac 1 5 + \frac 1 6 + \cdots \tag 1 \\[10pt]
-= {} &\left(\frac 1 1 + \frac 1 2\right) + \left(\frac 1 3 + \frac 1 4\right) + \left(\frac 1 5 + \frac 1 6\right) + \cdots \\[10pt]
-\ge {} & \left(\frac 1 2 + \frac 1 2 \right) + \left( \frac 1 4 + \frac 1 4 \right) + \left( \frac 1 6 + \frac 1 6 \right) + \cdots \tag 2 \\[10pt]
-= {} & \frac 1 1 + \frac 1 2 + \frac 1 3 + \cdots
-\end{align}
-The inequality on line $(2)$ is strict if the sum on line $(1)$ is finite, and that leads us to a contradiction. ${}\qquad\blacksquare$
-The proof of lemma 1 is most of the work; here it is (we may assume $1\notin A$, and the last inequality uses $-\log(1-t)\le 2t$ for $0\le t\le\frac 1 2$):
-\begin{align}
-& \sum_{n\in B} \frac 1 n \le \overbrace{\sum_{\begin{smallmatrix} C\subseteq A \\[2pt] C \text{ is finite} \end{smallmatrix}} \prod_{k\in C} \frac 1 k = \prod_{a\in A} \sum_{x=0}^\infty \frac 1 {a^x}}^\text{factoring -- see below} = \prod_{a\in A} \frac 1 {1-\frac 1 a} \\[10pt]
-= {} & \exp \sum_{a\in A} - \log\left( 1 - \frac 1 a\right) \le \exp \sum_{a\in A} \frac 2 a < \infty.
-\end{align}
-(As "Pipicito" points out in a comment below, some members of the set $B$ may occur more than once in the sum below and that is why $\text{“}{\le}\text{''}$ rather than $\text{“}{=}\text{''}$ should appear in the first step above.)
-Here's the factorization in more detail: Let $A=\{a_1,a_2,a_3,\ldots\}$. Then the product to the right of $\text{“}{=}\text{''}$ under the $\overbrace{\text{overbrace}}$ above is
-\begin{align}
-& \left( 1 + \frac 1 {a_1} + \frac 1 {a_1^2} + \frac 1 {a_1^3} + \cdots \right) \\
-\times {} & \left( 1 + \frac 1 {a_2} + \frac 1 {a_2^2} + \frac 1 {a_2^3} + \cdots \right) \\
-\times {} & \left( 1 + \frac 1 {a_3} + \frac 1 {a_3^2} + \frac 1 {a_3^3} + \cdots \right) \\
-\times {} & \quad \cdots \cdots \\
-\vdots~
-\end{align}
-When you expand the product, you multiply a term from the first factor, a term from the second factor, a term from the third factor, etc., but all except finitely many of those are $1$. The reason all but finitely many are $1$ is that if you multiply infinitely many non-$1$s, then the product is $0$, since it's a product of infinitely many positive numbers less than $1/2$.
Then you add up all possible such finite products, and that gives you the sum to the left of $\text{“}=\text{''}$ under the $\overbrace{\text{overbrace}}$ above.<|endoftext|>
-TITLE: Density of zeros of a power series over the reals
-QUESTION [8 upvotes]: Let $f(x) = \sum_{i =0}^\infty a_i x^i$ be a power series which converges for all real $x$. Assume that $f(x)$ is not identically zero. I'm interested in the density of the zeros of $f(x)$. Let $Z$ be the set of zeros of $f(x)$. Which of the following claims about density of $Z$ are true?
-Claim 1: $Z$ is nowhere dense.
-Claim 2: $Z$ is countable.
-Claim 3: For any $a,b \in \mathbb{R}$ , $Z \cap [a,b]$ is finite.
-I believe (correct me if I'm wrong) that claim 3 implies the other two. I suspect all three claims are true.
-I suspect that the answers to these questions are well-known, though I was not able to find an obvious reference. Can anyone suggest a reference with a nice treatment of these questions?
-
-REPLY [11 votes]: Claim 3 is true, and it does imply the other 2. It is enough to consider analytic functions, i.e. functions that have a power series expansion in some interval centered at each real number, which in particular holds if you have an everywhere convergent power series. If $[a,b]$ had infinitely many zeros of $f$, then there would be a limit point $c$ of these zeros. By continuity $f(c)=0$. Let $n\gt0$ be the smallest positive integer such that $f^{(n)}(c)\neq0$ (using the assumption that $f$ is not identically $0$). Then $f$ has power series expansion
-$$\begin{align*}
-f(x)&=\sum_{k=0}^\infty\frac{f^{(k)}(c)}{k!}(x-c)^k
-=\sum_{k=n}^\infty\frac{f^{(k)}(c)}{k!}(x-c)^k\\
- &=(x-c)^n\sum_{k=n}^\infty\frac{f^{(k)}(c)}{k!}(x-c)^{k-n}=(x-c)^ng(x),
-\end{align*}$$
-where $g(x)$ is a continuous function such that $g(c)\neq0$, and hence $g(x)\neq0$ in some open interval $I$ containing $c$. So the only zero of $f$ in $I$ is $c$, contradicting the fact that $c$ is a limit point of the set of zeros. Hence, unless $f$ is identically zero, the limit point $c$ cannot exist.
-For reference you can read any good text on complex analysis. If you prefer to stick to the real case, there is the book A primer of real analytic functions by Krantz and Parks.
-
-REPLY [4 votes]: All three claims are true. Claim 3 $\Rightarrow$ Claim 2 and Claim 1. The zeros of an analytic function are isolated. This implies Claim 3.<|endoftext|>
-TITLE: What is the intuition for the point-set topology definition of continuity?
-QUESTION [44 upvotes]: Let $X$ and $Y$ be topological spaces. A function $f: X \rightarrow Y$ is defined as continuous if for each open set $U \subset Y$, $f^{-1}(U)$ is open in $X$. This definition makes sense to me when $X$ and $Y$ are metric spaces - it is equivalent to the usual $\epsilon-\delta$ definition. But why is this a good definition when $X$ and $Y$ are not metric spaces?
-How should we think about this definition intuitively?
-
-REPLY [4 votes]: Intuitively:
-
-Continuous maps are exactly those maps that preserve (in the forward direction) the notion of "closeness": A map $f : X \to Y$ is continuous iff points "close" to each other in $X$ are always sent to points that are (once again) "close" to each other in $Y.$
-
-I now explain what this means exactly and why the most intuitive way of thinking of continuity is actually through its characterization in terms of closed sets${}^{1}$.
In short, it allows you to define continuous maps as being exactly those maps that preserve a certain property in the forward direction.${}^{2}$ Let me introduce some non-standard (i.e. my own made-up, but sensible) definitions:
-
-Say that a point $y$ is close to a set $S$ if $y \in \overline{S}.$
-Say that a set $R$ is close to a set $S$ if $R \subseteq \overline{S}$ (i.e. if $R$ is contained in the closure of $S$).
-
-With these definitions, a subset is closed if and only if it contains every point/subset that is close to it (so the terminology "being close" intuitively describes "being closed"). Recall how the closure operator characterizes continuity:
-Theorem (non-intuitive statement): A map $f : X \to Y$ is continuous if and only if for all subsets $A \subseteq X$, $f\left( \overline{A} \right) \subseteq \overline{f(A)}$.
-This can be restated as:
-Theorem (intuitive statement): A map $f : X \to Y$ is continuous if and only if for all subsets $A \subseteq X$, $f$ maps points that are close to $A$ to points that are close to $f(A)$.
-You can replace the word "points" above with the word "sets" and the resulting statement will still be true. Thus continuous maps are exactly those that preserve (in the forward direction) the notion of "closeness" in $X$.
-If this interpretation is valid then you might expect the following characterization to also be valid.
-Continuity at a given point $x \in X$: $f$ is continuous at $x$ if and only if whenever $x$ is close to a subset $A \subseteq X,$ then $f(x)$ is close to $f(A).$
-You can check that this characterization of continuity at a point does actually hold. It also follows immediately from the above two characterizations that $f$ is continuous if and only if it is continuous at every point of its domain.
-
-You can actually (essentially) define the category of topological spaces using only the closure operator: see Kuratowski closure axioms (technically, this category is equivalent to the category of topological spaces). This justifies thinking of topological spaces in terms of "closeness" rather than open subsets.
-
-This is in contrast to the open set definition of continuity, which defines continuous maps as being those that preserve a certain property (i.e. openness of subsets) in the backwards direction.<|endoftext|>
-TITLE: Ramanujan's First Letter to Hardy and the Number of $3$-Smooth Integers
-QUESTION [30 upvotes]: A positive integer is $B$-smooth if and only if all of its prime divisors are less than or equal to a positive real $B$. For example, the $3$-smooth integers are of the form $2^{a} 3^{b}$ with non-negative exponents $a$ and $b$, and those integers less than or equal to $20$ are $\{1,2,3,4,6,8,9,12,16,18\}$.
-In Ramanujan's first letter to G. H. Hardy, Ramanujan emphatically quotes (without proof) his result on the number of $3$-smooth integers less than or equal to $N > 1$,
-\begin{eqnarray}
-\frac{\log 2 N \ \log 3N}{2 \log 2 \ \log 3}.
-\end{eqnarray}
-This is an amazingly accurate approximation, as it differs from the exact value by less than 3 for the first $2^{1000} \approx 1.07 \times 10^{301}$ integers, as shown by Pillai.
-Question: Knowing full well that Ramanujan only gave proofs of his own claims while working in England, I wonder if a proof of this particular estimate appears somewhere in the literature. Is this problem still open? If not, what is a reference discussing its proof?
-Thanks!
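-Edit: purely as a numerical illustration (this is my own quick check, not anything from the letter itself), here is a short Python sketch that counts the $3$-smooth integers up to $N$ and compares the count with Ramanujan's expression:
-
-    import math
-
-    def count_3_smooth(N):
-        # exact count of integers 2^a * 3^b <= N, using only integer arithmetic
-        count = 0
-        p3 = 1
-        while p3 <= N:          # p3 runs over the powers of 3
-            p2 = p3
-            while p2 <= N:      # p2 runs over 2^a * p3
-                count += 1
-                p2 *= 2
-            p3 *= 3
-        return count
-
-    def ramanujan_estimate(N):
-        # log(2N) * log(3N) / (2 * log(2) * log(3))
-        return math.log(2*N) * math.log(3*N) / (2 * math.log(2) * math.log(3))
-
-    for N in [20, 10**3, 10**6, 10**9]:
-        print(N, count_3_smooth(N), round(ramanujan_estimate(N), 2))
-
-For $N = 20$ this returns the count $10$, matching the list above, and the estimate is already within a tenth of the true value.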
-
-REPLY [16 votes]: For any reference requirement related to Ramanujan, it is always a good idea to check the series of volumes, titled Ramanujan's notebooks, compiled and annotated by Bruce C. Berndt and others.
-This is a special case of Entry 15 in Ramanujan's second notebook, which is about numbers of the form $\displaystyle a^p b^q$.
-A reference to this can be found here: Ramanujan's notebooks, Volume 4.
-(I suggest you read page 66 onwards)
-A snapshot (from the google books link itself):
-
-This volume talks about the "proof" Ramanujan gave (pages 68 and 69), provides references to Hardy's book (which apparently has a whole chapter on this) and also mentions a paper by Hardy and Littlewood which deals with it.
-De Bruijn has considered the number of $y$-smooth numbers in this paper: On the number of positive integers less than x and free of prime factors greater than y.
-The number of $y$-smooth numbers $\le x$ is apparently now known in the literature as the de Bruijn function: $\psi(x,y)$.
-A closely related function is the de Bruijn-Dickman function.
-There is also a survey by Hildebrand and Tenenbaum which should be helpful.<|endoftext|>
-TITLE: Proofs that every natural number is a sum of four squares.
-QUESTION [22 upvotes]: I am planning to write a little note detailing several proofs of Lagrange's theorem that every natural number can be written as the sum of four perfect squares. I know of three different proofs so far:
-
-a completely elementary proof by descent.
-a proof via Minkowski's theorem and lattices.
-Jacobi's proof via modular forms.
-
-Can anybody think of any more nice, relatively elementary proofs of this result? Thanks in advance.
-
-REPLY [10 votes]: Perhaps the most beautiful solution is by way of Aubry's lemma - which employs a geometric variant of the Euclidean algorithm to turn a rational representation into an integral representation. This is the same technique that leads to the reflective generation of primitive Pythagorean triples and the associated ternary tree structure. Aubry's results are, in fact, very special cases of general results of Wall, Vinberg, Scharlau et al. on reflective lattices, i.e. arithmetic groups of isometries generated by reflections in hyperplanes. Generally reflections generate the orthogonal group of Lorentzian quadratic forms in dim < 10. See my MO post here for further remarks and references.
-In my opinion, the results in this area are some of the most beautiful results in elementary number theory. Strangely enough, for all this beauty they appear to be little known. For example, over a decade ago when I mentioned to John Conway the connection between Aubry's work and Cassels and Pfister he was not aware of this (R.K. Guy told me that the presentation of the PPT ternary tree in their Book of Numbers (1996) is based on a lecture he heard by an undergraduate, Richard Vogeler, at an MAA section at Brigham Young Univ. on 89-04-07.) Also Pfister apparently was not aware of Aubry's work when he generalized Cassels' result to arbitrary quadratic forms, founding the modern algebraic theory of quadratic forms ("Pfister forms"). Someday I hope to write something on the bizarre history of this beautiful circle of ideas so I would be grateful to hear from anyone who may know further details.<|endoftext|>
-TITLE: Does $\sum \limits_{n=1}^\infty\frac{\sin n}{n}(1+\frac{1}{2}+\cdots+\frac{1}{n})$ converge (absolutely)?
-QUESTION [13 upvotes]: I've had no luck with this one. None of the convergence tests pop into mind.
-I tried looking at it in this form $\sum \sin n\frac{1+\frac{1}{2}+\cdots+\frac{1}{n}}{n}$ and applying Dirichlet's test. I know that $\frac{1+\frac{1}{2}+\cdots+\frac{1}{n}}{n} \to 0$ but I'm not sure if it's decreasing.
-Regarding absolute convergence, I tried:
-$$|\sin n\frac{1+\frac{1}{2}+\cdots+\frac{1}{n}}{n}|\geq \sin^2 n\frac{1+\frac{1}{2}+\cdots+\frac{1}{n}}{n}=$$
-$$=\frac{1}{2}\frac{1+\frac{1}{2}+\cdots+\frac{1}{n}}{n}-\frac{1}{2}\cos 2n\frac{1+\frac{1}{2}+\cdots+\frac{1}{n}}{n}$$
-But again I'm stuck with $\cos 2n\frac{1+\frac{1}{2}+\cdots+\frac{1}{n}}{n}$.
-Assuming it converges, I've then shown that $\sum \sin n\frac{1+\frac{1}{2}+\cdots+\frac{1}{n}}{n}$ doesn't converge absolutely.
-
-REPLY [16 votes]: The sequence $$a_n=\frac{1+\frac{1}{2}+\cdots +\frac{1}{n}}{n}$$ is decreasing: writing $H_n = 1+\frac{1}{2}+\cdots+\frac{1}{n}$, the inequality $a_{n+1}<a_n$ is equivalent to $nH_{n+1}<(n+1)H_n$, that is, to $\frac{n}{n+1}<H_n$, which clearly holds since $H_n\geq 1$. Since $a_n$ decreases to $0$ and the partial sums of $\sum\sin n$ are bounded, Dirichlet's test gives convergence.<|endoftext|>
-TITLE: Combinatorics Olympiad problem - Sort out a schedule
-QUESTION [13 upvotes]: Interesting problem from a $2000$ St. Petersburg school olympiad.
-There are $109$ soldiers in a camp. Every night three of them go on watch patrol. Prove that it can be arranged so that after a while, every pair of soldiers has shared watch exactly three times.
-I think I'm missing the key insight here.
-
-REPLY [7 votes]: I believe the following does the trick and that the $109$ is really a red herring, and that the problem has a solution for each odd
-$n \ge 3.$
-Suppose $n=2m+1$ and consider the triples $(i,i+j,i+2j)$ where $i=1,2,3,\ldots,(2m+1)$ and $j=1,2,3,\ldots, m.$ We can construct a
-$(2m+1) \times m $ array of triples with $(i,i+j,i+2j)$ in the $i$th row and $j$th column. We work modulo $2m+1.$
-For example here's a solution for $n=9.$
-$$\begin{array} {cccc}
- (1,2,3) & (1,3,5) & (1,4,7) & (1,5,9) \\
- (2,3,4) & (2,4,6) & (2,5,8) & (2,6,1) \\
- (3,4,5) & (3,5,7) & (3,6,9) & (3,7,2) \\
- (4,5,6) & (4,6,8) & (4,7,1) & (4,8,3) \\
- (5,6,7) & (5,7,9) & (5,8,2) & (5,9,4) \\
- (6,7,8) & (6,8,1) & (6,9,3) & (6,1,5) \\
- (7,8,9) & (7,9,2) & (7,1,4) & (7,2,6) \\
- (8,9,1) & (8,1,3) & (8,2,5) & (8,3,7) \\
- (9,1,2) & (9,2,4) & (9,3,6) & (9,4,8)
-\end{array}$$
-The number of triples = $m(2m+1)= { 2m+1 \choose 2 } = $ the number of distinct pairs of integers in $S= \{ 1,2,3,\ldots,2m+1 \} $ and all triples are unique, where the order of the elements is taken into account. (For suppose $(x,y,z)$ is in our array, then $x=i,y=i+j$ and $z=i+2j$ for some $i,j$, and thus if $x<y$ it is in the $x$th row and $(y-x)$th column, while if $x>y$ it is in the $x$th row and $(y-x + 2m+1)$th column. So its position is uniquely determined by its elements.)
-Moreover, if $(x,y,z)=(i,i+j,i+2j)$ is in the array then none of $(x,z,y),(z,y,x)$ and $(y,x,z)$ can be in the array, for if one of the positions of the elements is fixed the latter three triples cannot obey the construction rule, that its elements increase by $j,$ since we know that $x,$ $y$ and $z$ increase by $j.$
-Now consider a pair of integers $(a,b),$ which appears in a given triple, $(a,b,c)$. There cannot exist another triple in the array $(a,b,d),$ say, with $c \ne d$ since $c$ is uniquely determined by $a$ and $b.$
-The pair $(a,b)$ cannot appear in more than three triples, in whatever order (we've already noted that we cannot have both $(a,b,c)$ and $(b,a,c),$ hence the three and not six), since the remaining element is uniquely determined by $a$ and $b.$ However, it cannot appear in less than three triples since the number of triples equals the number of pairs of distinct integers in
-$S,$ so this would mean that another pair must appear in more than three triples, a contradiction.
-Thus we have a solution for any odd $n \ge 3.$
-So a solution for $n=11$ is:
-$$\begin{array} {ccccc}
- (1,2,3) & (1,3,5) & (1,4,7) & (1,5,9) & (1,6,11) \\
- (2,3,4) & (2,4,6) & (2,5,8) & (2,6,10) & (2,7,1)\\
- (3,4,5) & (3,5,7) & (3,6,9) & (3,7,11) & (3,8,2) \\
- (4,5,6) & (4,6,8) & (4,7,10) & (4,8,1) & (4,9,3) \\
- (5,6,7) & (5,7,9) & (5,8,11) & (5,9,2) & (5,10,4) \\
- (6,7,8) & (6,8,10) & (6,9,1) & (6,10,3) & (6,11,5) \\
- (7,8,9) & (7,9,11) & (7,10,2) & (7,11,4) & (7,1,6) \\
- (8,9,10) & (8,10,1) & (8,11,3) & (8,1,5) & (8,2,7) \\
- (9,10,11) & (9,11,2) & (9,1,4) & (9,2,6) & (9,3,8) \\
- (10,11,1) & (10,1,3) & (10,2,5) & (10,3,7) & (10,4,9) \\
- (11,1,2) & (11,2,4) & (11,3,6) & (11,4,8) & (11,5,10)
-
-\end{array}$$
-Note that I'm not using Steiner triples, as $11 \ne 6k+1$ or $6k+3.$<|endoftext|>
-TITLE: Why $9$ & $11$ are special in divisibility tests using decimal digit sums? (casting out nines & elevens)
-QUESTION [68 upvotes]: I don't know if this is a well-known fact, but I have observed that every number, no matter how large, that is evenly divisible by $9$, will equal $9$ if you add all the digits it is made from until there is $1$ digit.
-A quick example of what I mean:
-
-$9*99 = 891$
-$8+9+1 = 18$
-$1+8 = 9$
-
-This works even with really long numbers like $4376331$
-Why is that? This doesn't work with any other number. Similarly for $11$ and alternating digit sums.
-
-REPLY [2 votes]: The "specialness" of $9$ stems from the fact that $10 \equiv 1 \pmod{9}$.
-So, in our base-10 numeral system in which an integer can be expressed as
-$$ d_{k}d_{k-1} \ldots d_{1}d_{0} = d_{k}10^{k} + d_{k-1}10^{k-1} + \ldots + d_{1}10 + d_{0}$$
-$$ \equiv d_{k}(1) + d_{k-1}(1) + \ldots + d_{1}(1) + d_{0} \pmod{9},$$
-all we need to do is to evaluate the sum $d_{k} + d_{k-1} + \ldots + d_{1} + d_{0}$ and check to see if it is divisible by $9$ in order to determine if the given integer is divisible by $9$.
-Finally, note that a similar test with analogous justification also exists for determining if a given integer is divisible by $3$, as $10 \equiv 1 \pmod{3}$.<|endoftext|>
-TITLE: Equivalent statements of the Axiom of Choice
-QUESTION [15 upvotes]: As a little project for myself this winter break, I'm trying to go through as much of Enderton's Elements of Set Theory as I can. I hit a snag trying to show two forms of the Axiom of Choice are equivalent. This is exercise 31 on page 55.
-The first form is:
-
-For any relation $R$ there is a function $G\subseteq R$ with $\text{dom}\ G=\text{dom}\ R$.
-
-and the second form is:
-
-For any set $I$ and any function $H$ with domain $I$, if $H(i)\neq\emptyset$ for all $i\in I$, then $\times_{i\in I}H(i)\neq\emptyset$.
-
-Here is what I have so far:
-Assume the first form. Take any set $I$ and let $H$ be a function with domain $I$ such that $H(i)\neq\emptyset$ for all $i\in I$. This function $H$ is a relation, so by the Axiom of Choice, there exists a function $G\subseteq H$ such that $\text{dom}\ G=\text{dom}\ H=I$. Since $\text{dom}\ G=I$, for each $i\in I$, there exists some $G(i)$ such that $(i,G(i))\in G$. But since $G\subseteq H$, $(i,G(i))=(j,H(j))$ for some $j\in I$. Since these are ordered pairs, $i=j$ and $G(i)=H(j)$? I suppose I want to be able to show that for all $i\in I$, I can have $G$ "choose" some element $G(i)\in H(i)$, and thus $G\in\times_{i\in I}H(i)$, showing that $\times_{i\in I}H(i)\neq\emptyset$, but I don't see how the first form allows one to do that. Instead, all I see is that $G(i)=H(i)$.
-Conversely, I assume the second form. I take any relation $R$, and denote $\text{dom}\ R=I$. Let $H$ be any function with domain $I$. Now if $H(i)\neq\emptyset$ for all $i$, then $\times_{i\in I}H(i)\neq\emptyset$, so then I could take some $f\in\times_{i\in I}H(i)$, so by definition, $\text{dom} f=I$, and for all $i$, $f(i)\in H(i)$. If it is the case that $(i,H(i))\in R$, then $f\subseteq R$, and the first form would be proven. Again, I suppose I want $H$ to be a function that, for each $i\in I$, $H$ takes the value of exactly one $y_i$ such that $iRy_i$, but again, I don't see how the assumed axiom allows one to do this. -Can anyone explain how to get around these two issues? Thank you. - -REPLY [8 votes]: Assume the first form. Let $I$ be any set and let $H$ be any function such that $\text{dom}\ H=I$, and $H(i)\neq\emptyset$ for all $i\in I$. Define a relation $R\subseteq I\times\bigcup_{i\in I}H(i)$ by -$$ -\langle i,x\rangle\in R\Leftrightarrow x\in H(i). -$$ -By assumption, there exists a function $G\subseteq R$ with $\text{dom}\ G=\text{dom}\ R=I$, as for each $i\in I$, $i\in\text{dom}\ R$ since $H(i)$ is nonempty. So for all $\langle i,G(i)\rangle\in G$, $\langle i,G(i)\rangle\in R$, and thus by the definition of $R$, $G(i)\in H(i)$. It follows that $G\in\prod_{i\in I}H(i)$, so $\prod_{i\in I}H(i)\neq\emptyset$. Thus the second form follows from the first. -Conversely, let $R$ be any relation, and denote $\text{dom}\ R=I$. Define a function -$$ -H\colon I\to\mathscr{P}(\text{ran}\ R)\colon i\mapsto H(i):=\{x\in\text{ran}\ R\ |\ iRx\}. -$$ -In particular, $H$ is a function with domain $I$, and $H(i)\neq\emptyset$ for all $i\in I$. So by the second form, $\prod_{i\in I}H(i)\neq\emptyset$, so take $G\in\prod_{i\in I}H(i)$. Hence $\text{dom}\ G=I$, and for all $i\in I$, $G(i)\in H(i)$. Also, for any $\langle i,G(i)\rangle\in G$, $G(i)\in H(i)\subseteq\text{ran}\ R$, and so $\langle i,G(i)\rangle\in R$, so $G\subseteq R$. Hence the two statements of the Axiom of Choice are equivalent.<|endoftext|> -TITLE: Numerically solving ODEs — how to estimate the solution between the nodes? -QUESTION [5 upvotes]: I have heard about a lot of fancy numerical methods for solving ODEs. I know that there are methods that (assuming sufficient smoothness) asymptotically give a low error, like the Runge-Kutta methods. These estimate the solution in a set of points $t_0$, $t_1$, etc. But what if I want to have a function that is close to the correct solution everywhere, not just in a discrete set of points? -I can extend the numerical solution to a piecewise linear function. This will be a continuous function and it will converge to the correct solution if the step-size goes to zero. -But the error estimate will be poor in most places unless I use a very small step-size, which rather defeats the purpose of using a high-order method. So how does one go about estimating the solution in practice between the $t_i$? - -REPLY [3 votes]: Remember that whatever method you use for solving $y^{\prime}=f(t,y)$, be it Runge-Kutta, Bulirsch-Stoer (extrapolative), Gear/Adams multistep, or fancier methods, one always has a triple of values $(t_i,y_i,y_i^{\prime})$ ($y_i^{\prime}=f(t_i,y_i)$) at each evaluation point. Thus, one can always do cubic Hermite interpolation across the points $(t_i,y_i)$ and $(t_{i+1},y_{i+1})$ (note that I am not assuming that the evaluation points are equispaced, as is often the case when doing adaptive solving). 
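-To make this concrete, here is a minimal sketch of cubic Hermite interpolation between two such triples (this is my own illustrative Python, with made-up names, not code from any particular solver):
-
-    import math
-
-    def hermite_eval(t, t0, y0, dy0, t1, y1, dy1):
-        # Cubic Hermite interpolant matching y and y' at t0 and t1,
-        # evaluated at t (with t0 <= t <= t1).
-        h = t1 - t0
-        s = (t - t0) / h                 # normalized coordinate in [0, 1]
-        h00 = (1 + 2*s) * (1 - s)**2     # basis weight for y0
-        h10 = s * (1 - s)**2             # basis weight for h*dy0
-        h01 = s**2 * (3 - 2*s)           # basis weight for y1
-        h11 = s**2 * (s - 1)             # basis weight for h*dy1
-        return h00*y0 + h10*h*dy0 + h01*y1 + h11*h*dy1
-
-    # quick check against y' = cos(t), y = sin(t) on the step [0, 0.5]
-    approx = hermite_eval(0.25, 0.0, 0.0, 1.0, 0.5, math.sin(0.5), math.cos(0.5))
-    print(approx, math.sin(0.25))   # the two values agree to roughly 4e-5
-
-The point is that the solver already supplies the slopes $y_i^{\prime}=f(t_i,y_i)$ at no extra cost, so this interpolant requires no additional function evaluations.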
If the underlying method is at most third order accurate, cubic Hermite is a good choice.
-Now, the upshot is that modern implementations of DE solvers always support so-called "dense output"; briefly, a method of $p$th order "accuracy" (in quotes since "high order doesn't always imply high accuracy" ;) ) has with it an associated $p$th order interpolating function. To use my favorite example, the $(4,5)$ adaptive Runge-Kutta solver based on the Dormand-Prince coefficients has with it an associated fifth order interpolating function. The special properties inherent in the coefficients allow the existence of an associated interpolating function with the same order of "accuracy"; in general, not all Runge-Kutta coefficients will have an associated "nice" interpolating function (but again, one can always do cubic Hermite).
-I could say more, but the books by Hairer/Nørsett/Wanner have an extensive discussion on dense output (and they say it better than what I can hope to say), as well as usable routines (also available on the authors' website). You would do well to study them.<|endoftext|>
-TITLE: If $\gcd(a,b)=d$, then $\gcd(ac,bc)=cd$?
-QUESTION [10 upvotes]: Let $A$ be an integral domain and $a,b,c\in A$. If $d$ is a greatest common divisor of $a$ and $b$, is it true that $cd$ is a greatest common divisor of $ca$ and $cb$? I know it is true if $A$ is a UFD, but I can't think of a counterexample in the general situation.
-
-REPLY [12 votes]: Here is the best that one can say for arbitrary integral domains:
-LEMMA $\rm\ \ (a,b)\ =\ (ac,bc)/c\quad$ if $\rm\ (ac,bc)\ $ exists.
-Proof $\rm\quad d\ |\ a,b\ \iff\ dc\ |\ ac,bc\ \iff\ dc\ |\ (ac,bc)\ \iff\ d|(ac,bc)/c$
-Generally $\rm\ (ac,bc)\ $ need not exist, as is most insightfully viewed as failure of
-EUCLID'S LEMMA $\rm\quad a\ |\ bc\ $ and $\rm\ (a,b)=1\ \Rightarrow\ a\ |\ c\quad$ if $\rm\ (ac,bc)\ $ exists.
-Proof $\ \ $ If $\rm\ (ac,bc)\ $ exists then $\rm\ a\ |\ ac,bc\ \Rightarrow\ a\ |\ (ac,bc) = (a,b)\:c = c\ $ by the Lemma.
-Therefore if $\rm\: a,b,c\: $ fail to satisfy the Euclid Lemma $\Rightarrow\:$,
-namely if $\rm\ a\ |\ bc\ $ and $\rm\ (a,b) = 1\ $ but $\rm\ a\nmid c\:$, then one immediately deduces that the gcd $\rm\ (ac,bc)\ $ fails to exist. For the special case where $\rm\:a\:$ is an atom (i.e. irreducible), the implication reduces to: atom $\Rightarrow$ prime. So it suffices to find a nonprime atom
-in order to exhibit a pair of elements whose gcd fails to exist. This task is a bit simpler, e.g. for $\rm\ \omega = 1 + \sqrt{-5}\ \in\ \mathbb Z[\sqrt{-5}]\ $ we have that the atom $\rm\: 2\ |\ \omega'\: \omega = 6\:,\:$ but $\rm\ 2\nmid \omega',\:\omega\:,\:$ so $\rm\:2\:$ is not prime. Therefore we deduce that the gcd $\rm\: (2\:\omega,\ \omega'\:\omega)\ =\ (2+2\sqrt{-5},\:6)\ $ fails to exist in $\rm\ \mathbb Z[\sqrt{-5}]\:$.
-Note that if the gcd $\rm\: (ac,bc)\ $ fails to exist then this implies that the ideal $\rm\ (ac,bc)\ $ is not principal. Therefore we've constructively deduced that the failure of Euclid's lemma immediately yields both a nonexistent gcd and a nonprincipal ideal.
-That the $\Rightarrow$ in Euclid's lemma implies that Atoms are Prime $\rm(:= AP)$ is denoted $\rm\ D\ \Rightarrow AP\ $ in the list of domains closely related to GCD domains in my post here. There you will find links to further literature on domains closely related to GCD domains. See especially the referenced comprehensive survey by D.D. Anderson: GCD domains, Gauss' lemma, and contents of polynomials, 2000.
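-(As a contrast with the $\rm\,\mathbb Z[\sqrt{-5}]\,$ failure above: in $\mathbb Z$, where gcds always exist, the Lemma's identity $\rm\,(ac,bc) = c\,(a,b)\,$ is easy to test numerically. A quick Python sanity check of my own, not part of the argument:)
-
-    from math import gcd
-    from random import randint
-
-    # spot-check the distributive law gcd(a*c, b*c) == c * gcd(a, b) in Z
-    for _ in range(1000):
-        a, b, c = randint(1, 10**6), randint(1, 10**6), randint(1, 10**6)
-        assert gcd(a*c, b*c) == c * gcd(a, b)
-    print("gcd(ac, bc) = c * gcd(a, b) held on all random samples")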
-See also my post here for the general universal definitions of $\rm GCD,\: LCM$ and for further remarks on how such $\iff$ definitions enable slick proofs, and see here for another simple example of such.<|endoftext|>
-TITLE: Learning schemes
-QUESTION [18 upvotes]: Could someone suggest how to learn some basic theory of schemes? I have two books on algebraic geometry, namely "Diophantine Geometry" by Hindry and Silverman and "Algebraic geometry and arithmetic curves" by Qing Liu. I have had difficulties proving the equivalence of many of the definitions. For example, Hindry and Silverman define an affine variety to be an irreducible algebraic subset of $\mathbb{A}^n$ with respect to the Zariski topology. On the other hand, Liu defines an affine variety to be the affine scheme associated to a finitely generated algebra over a field.
-
-REPLY [18 votes]: I have found Kenji Ueno's book Algebraic Geometry 1: From Algebraic Varieties to Schemes to be quite satisfying in introducing the basic theory of schemes. Well, to be fair, this is only the first in a series of three books on the subject by the same author. So this first volume basically just develops the definitions of an affine scheme first and then of a scheme in general by "pasting" together affine schemes. It does not go into cohomology and more advanced stuff, which is the subject of the other two books.
-However, what I really like is that he motivates very carefully the passage from the definition of an affine algebraic variety as an irreducible algebraic set in an affine space $\mathbb{A}_{k}^{n}$ to the definition of an affine variety using schemes, which is where you are having some trouble. What he does is that he starts by doing some algebraic geometry in the classic sense, that is, over an algebraically closed field $k$, in the first chapter of the book.
-Then the author proves that there is a correspondence between the points in an algebraic set $V$ and the maximal ideals of its associated coordinate ring $k[V]$, where a point $(a_1, \dots , a_n) \in V$ corresponds to the maximal ideal of $k[V]$ determined by the ideal $(x_1 - a_1 , \dots , x_n - a_n) \subset k[x_1, \dots, x_n]$, that is, a correspondence between the points in $V$ and the "points" in the maximal spectrum $\text{max-Spec}(k[V])$ of the coordinate ring $k[V \, ]$.
-Then Ueno goes on to define an affine algebraic variety as a pair $(V, k[V \, ] )$ where $V$ is an algebraic set. But he then makes the argument that one can go a little bit further and consider the pair $( \text{max-Spec}(R), R )$ where $R$ is a $k$ algebra. But here Ueno argues that if the original intention was to study the sets of solutions of polynomial equations, then where is the geometry and where are the equations hidden if an algebraic variety is defined as this pair $( \text{max-Spec}(R), R )$?
-The interesting thing is that if the $k$ algebra $R$ is finitely generated over $k$ then
-$$ R \simeq \frac{ k[x_1 , \dots , x_n] }{I}$$
-so that as a consequence
-$$ \text{max-Spec}(R) = V(I)$$
-so that again you'll have some equations (this is all done and explained in detail in the book). So then the author (re)defines an algebraic variety over an algebraically closed field $k$ (remember that he is doing everything in the classic sense) as a pair $( \text{max-Spec}(R), R )$, where $R$ is a finitely generated $k$ algebra.
-And then at the end of the first chapter the author motivates the need for a more general theory, for example having in mind the needs of number theory: because everything was done in the context of an algebraically closed field, the arguments don't work for the fields (and rings) of interest in number theory. In particular, it is noted how an extension of the definitions to include these cases would need to take into account not only the set of maximal ideals, but the set of all prime ideals.
-Then chapter two develops first some properties of this set of prime ideals, or prime spectrum of a ring, making it into a topological space with the Zariski topology... and then defines the necessary things in order to be able to define an affine scheme and a scheme (I mean, the concepts of a sheaf of rings, a ringed space, etc).
-It is not a short story of course, but again I prefer this type of approach at first, rather than having to deal with an unmotivated (and difficult) definition that strives for great generality but gives me no idea of where it comes from or what its purpose is.
-Note that the book that Arturo recommended is great also but it assumes you already know some algebraic geometry and its level is higher than Ueno's book.
-You should take a look at it and see if you like it; the book has a fair number of examples and some exercises interspersed within the text also. You'll have to study from other sources as well but I believe that this book does a pretty good job at motivating the abstract definitions.
-I hope this helps at least a little.
-
-REPLY [13 votes]: I am also currently learning about sheaves and schemes, and I'm finding Ravi Vakil's notes to be very helpful:
-http://math216.wordpress.com/<|endoftext|>
-TITLE: A problem with minimizing a function
-QUESTION [11 upvotes]: I have the following cost function:
-$\mbox{BSP Cost}=\sum_{i=1}^{\frac{n}{G}}G^{2}\left\lceil \frac{i}{p}\right\rceil +g\left(p\right)\sum_{i=1}^{\frac{n}{G}}Gi+l\left(p\right)\frac{n}{G}$
-I would like to minimize it by choosing an appropriate $G$ (i.e., $G$ is a function of $p$ and $n$). I have simplified it to the following form:
-$\mbox{BSP Cost}=\frac{Gn}{2p}+\frac{n^{2}}{2p}+g\left(p\right)\cdot\left(\frac{n}{2G}+\frac{n^{2}}{2G^{2}}\right)+l\left(p\right)\frac{n}{G}\to\min$
-To find the minimum, I took the derivative $\frac{d}{dG}$ of the cost function and set it equal to zero. I got this:
-$\frac{nG^{3}-g\left(p\right)npG-g\left(p\right)n^{2}p-2l\left(p\right)npG}{2pG^{3}}=0$
-$nG^{3}-g\left(p\right)npG-g\left(p\right)np-2l\left(p\right)npG=0$
-And I'm not sure how to proceed from this point. Can you help me find a function for G(n,p)? If you see any mistakes in the process above, please tell me.
-EDIT: it might also be important to mention that all the variables and functions (g,n,p,l,G) are positive.
-EDIT 2: a rough approximation formula will do!
-EDIT 3: Here's an approximation of g,l:
-$g\left(p\right)=-0.858p^{3}+12.31p^{2}-47.12p+79.67$
-$l\left(p\right)=670.9p^{2}+2815p-2763$
-
-REPLY [2 votes]: Assuming $l(p)$ and $g(p)$ are known, I get the same for the derivative. You can simplify it by dividing out $n$. You now have a cubic in $G(n,p)$ (though it seems not to be a function of $n$), which can be solved by the usual formula for given $n$ and $p$, but that is a mess.
Alternately, you can write it as $G^2=g(p)p(1+\frac{1}{G})+2l(p)p$. If a numeric solution is acceptable, this should converge nicely: start by evaluating the RHS with the $\frac{1}{G}$ term neglected, find $G^{2}$ (and so $G$), plug this $G$ into the RHS and iterate to convergence.<|endoftext|>
-TITLE: Solving a tough limit
-QUESTION [5 upvotes]: I am trying to verify for which $z \in \mathbb{R}$ the series $\sum _{n=1}^{\infty } \left(1-\cos \left(\frac{1}{n}\right)\right)^z$ converges. The only test that was successful for me is the Kummer Test which gave the apparently correct result that it converges if $z > \frac{1}{2}$.
-To get there I use the fact that the series $\sum _{n=1}^{\infty } a_n$ converges (when the limit exists) if $\lim_{n\to \infty } \, n \left(\frac{a_n}{a_{n+1}}-1\right) > 1$
-Using Mathematica I get $\lim_{n\to \infty } \, n \left(\frac{a_n}{a_{n+1}}-1\right)=\lim_{n\to \infty } \, n \left(\left(1-\cos \left(\frac{1}{n}\right)\right)^z \left(1-\cos - \left(\frac{1}{n+1}\right)\right)^{-z}-1\right) = 2z$ and therefore $z > \frac{1}{2}$.
-Start reading here if you are only interested in the problem and not how I got there:
-Now I try to understand how to get that
-$\lim_{n\to \infty } \, n \left(\left(1-\cos \left(\frac{1}{n}\right)\right)^z \left(1-\cos - \left(\frac{1}{n+1}\right)\right)^{-z}-1\right)=\lim_{x\to 0} \, \frac{-1+(1-\cos (x))^z \left(1-\cos - \left(\frac{x}{1+x}\right)\right)^{-z}}{x} = 2z$
-All my attempts to show this in an elementary way failed, and that's why I hope someone has an idea how to verify the result. It's not obvious to me why the limit comes out this way.
-
-REPLY [2 votes]: The key is to use the Taylor series approximation $$\cos x \approx 1 - \frac{x^2}{2},$$ which is good for small $x$. That applies to both $x$ and $x/(1+x)$, and so $$\frac{1-\cos x}{1-\cos \frac{x}{1+x}} \approx (1+x)^2.$$ Substituting this into your expression, we get that the expression is $$\approx \frac{(1+x)^{2z}-1}{x} \approx \frac{(1+2zx)-1}{x} = 2z.$$ This reasoning can be made formal using big-O (or little-O) notation, but I leave that for you to ponder.<|endoftext|>
-TITLE: Designing an Irrational Numbers Wall Clock
-QUESTION [52 upvotes]: A friend sent me a link to this item today, which is billed as an "Irrational Numbers Wall Clock."
-
-There is at least one possible mistake in it, as it is not known whether $\gamma$ is irrational.
-Anyway, this started me wondering about how to improve the design of this clock from a mathematical standpoint. Here's my formulation of the problem:
-
-Find 12 numbers that have been proved to be irrational to place around the outside of a clock.
-Each of eleven of the numbers must approximate as closely as possible one of the integers from 1 through 11. The 12th can either be just smaller than 12 or just larger than 0.
-The numbers must have expressions that are as simple as possible (in the spirit of - or even more simple than - those in the clock given in the picture here). Thus, for example, no infinite sums, no infinite products, and no continued fractions. Famous constants and transcendental functions evaluated at small integers encouraged.
-Expressions should be as varied as possible. Better answers would include at least one use of trig functions, logarithms, roots, and famous constants.
-
-Obviously, goals 2, 3, and 4 act against each other. And, as Jonas Meyer points out, "as closely as possible" and "as simple as possible" are not well-defined. That is intentional.
I am afraid that if I tried to define those precisely I would preclude some answers that I might otherwise consider good. Thus, in addition to the mathematics, there's a sizable artistic component that goes into what would be considered a good answer. Hence the "soft-question" tag. I'm really curious as to what the math.SE community comes up with and then what it judges (via upvoting) to be the best answers, subject to these not-entirely-well-defined constraints.
-Note that the designer of the clock given here was not trying to approximate the integers on a clock as closely as possible.
-Finally, it's currently New Year's Day in my time zone. Perhaps a time-related question is appropriate. :)
-Note: There is now a community wiki answer that people can edit if they just want to add a few suggestions rather than all twelve.
-
-REPLY [5 votes]: Some that I like:
-
-I'm rather fond of $3 \approx \log 20$. Every so often I find myself taking large powers of $e$ in my head; knowing this is helpful.
-$7 \approx 12 \log_2 (3/2)$. This expresses the fact that, in music, a fifth is seven-twelfths of an octave.
-$6 \approx \log (\pi^4 + \pi^5)$; I've actually seen T-shirts claiming that $\pi^4 + \pi^5 = e^6$.
-$\pi + \pi^2 \approx 13$, although unless we're in an Orwell book your clocks have no thirteen.
-
-The wikipedia article on mathematical coincidences may have more ideas.<|endoftext|>
-TITLE: Book/article/tutorial as an introduction to Cardinality
-QUESTION [7 upvotes]: I study CS, but in the first semester I have a lot of mathematics. Of course, there is an introduction to set theory and logic. Recently, we had lectures about cardinality, different kinds of infinity, countable sets etc. I know the basics, but I don't understand many of the ideas as well as I would like to.
-Well, I am looking for a book/article/tutorial/etc. which gives a good explanation of cardinality, countable sets, the idea of different kinds of infinity, and the cardinality of sets of different functions. Right now I only need sources which cover the basics, but I don't mind if you post something at a higher level (introductory material has greater priority, though).
-PS: I was looking for similar questions on this site; I found, for example, [1] and [2], but they aren't quite what I'm looking for.
-I would appreciate your help.
-[1] Cardinality of set of real continuous functions
-[2] What Does it Really Mean to Have Different Kinds of Infinities?
-
-REPLY [2 votes]: There is Set Theory, by K. Kuratowski and A. Mostowski.
-It's dry as dirt and yet for some reason I loved reading that book.
-If it's available in your library, I'd take it home and see if it works out.<|endoftext|>
-TITLE: Is $\ p_n^{\pi(n)} < 4^n$ where $p_n$ is the largest prime $\leq n$?
-QUESTION [9 upvotes]: Is $\ p_n^{\pi(n)} < 4^n$ where $p_n$ is the largest prime $\leq n$, and $\pi(n)$ is the prime counting function? Using the PNT it seems that asymptotically $\ p_n^{\pi(n)} \leq x^n$ for any $x$ with $e \leq x$.
-
-REPLY [7 votes]: Using $$\pi(x) \le 1.25066 \frac{x}{\log x}$$ for all $x>1$ (from Rosser and Schoenfeld), you have
-$$(p_n)^{\pi(n)} \le e^{1.25066 n} < 3.5^n$$ for all $n\ge 2$.<|endoftext|>
-TITLE: Non-existence of a surjection $\aleph_n \to \aleph_{n+1}$, without the axiom of choice
-QUESTION [11 upvotes]: Firstly, let's establish what exactly I mean by these symbols. Let $\omega_0 = \{ 0, 1, 2, \ldots \}$, where $0, 1, 2, \ldots$ are the usual von Neumann representations of the natural numbers. Let $n$ be a finite natural number.
For each $n$, define $\omega_{n+1} = \sup S$, where $S$ is the image of the set $R$ under the function-class mapping well-orders to their von Neumann ordinal, and $R \subset \mathcal{P}(\omega_n \times \omega_n)$ is the set of all well-orders on $\omega_n$. We define $\aleph_n = \omega_n$.
-Unless I'm mistaken, this establishes the existence of $\omega_n$ for all $n \in \mathbb{N}$ as sets under the axioms of ZF. It's straightforward to see that there is no injection $\omega_{n+1} \to \omega_n$, as that would establish (via pullback) that $|\omega_{n+1}| \le \aleph_n$, and this is a contradiction as $\omega_{n+1}$ is strictly greater than all ordinals in $S$. This in turn implies, by the axiom of choice, that there is no surjection $\omega_n \to \omega_{n+1}$, and the conclusion that there is no surjection $\aleph_n \to \aleph_{n+1}$ follows.
-My question is: Can this be done, using my definitions above, without the axiom of choice? I'm willing to accept reasonable alternative definitions, provided that they don't render the conclusion tautological.
-(This is a self-imposed extension to a homework problem: Should I tag with homework?)
-
-REPLY [10 votes]: You've made a very big mess, I think. The way you defined cardinal numbers is very awkward, so to say. In my eyes, anyway.
-Let us review the construction of ordinals:
-
-$0 = \emptyset$
-$\alpha+1 = \alpha \cup \{\alpha\}$
-At limit stages, $\delta = \bigcup_{\beta<\delta} \beta$
-
-Now we define initial ordinals as ordinal numbers which cannot be bijected with any smaller ordinal. For example $\omega$ (the set of natural numbers) is such an ordinal, while $\omega+1, \omega+\omega, \omega\cdot\omega$ are not initial ordinals.
-Under the axiom of choice, every set is well-orderable, and therefore we can choose the initial ordinal out of each equivalence class as a representative. This is the usual notion of $\aleph$ numbers under the axiom of choice.
-Without assuming choice, the cardinal system is not well-ordered and can behave very strangely.
-Regardless of that, when you are only dealing with ordinals you don't need choice because there is a canonical choice function (take the minimal element). So even without the axiom of choice it is true that $\omega_\alpha$ has no bijection with $\omega_\beta$ for $\alpha\not=\beta$.
-The idea behind aleph numbers, as far as I see it, is that they give a well-ordering of cardinalities (not necessarily all cardinalities, though) and as such the notion holds just fine even when not assuming choice. However, in the case you don't have the axiom of choice to help you out, $2^{\aleph_0}$ might not be well-orderable and thus won't be represented by an ordinal, and therefore won't be represented by an $\aleph$ number; the same goes for multiplication. It is equivalent to the axiom of choice that for every infinite set $|X| = |X\times X|$.
-One last remark: you said that the construction you gave implies the existence of $\aleph_n$ for every natural number $n$, while in fact it gives you $\aleph_\alpha$ for every ordinal $\alpha$ and not just for the natural numbers.<|endoftext|>
-TITLE: Find all subrings of $\mathbb{Z}^2$
-QUESTION [15 upvotes]: This may be a simple question:
-
-Find all subrings of $\mathbb{Z}^2$.
-
-REPLY [19 votes]: It proves instructive to highlight that the argument sketched in Qiaochu's answer is actually a special case of a general relationship between subalgebras and congruences.
Generalizing the way that one defines congruences for the ring of integers modulo $\rm\,m,\,$ one defines a congruence $\equiv$ as an equivalence relation on an algebra $\rm A $ that is compatible with the operations on $\rm A $. For example, for rings such compatibility means that if $\rm a'\equiv a,\ b'\equiv b\,$ then $\rm\, a'+b'\,\equiv\, a+b\ $ and similarly for all other operations. Now, viewing the equivalence relation $\equiv$ as a subset of $\rm\ A\times A\,,\, $ this compatibility condition is a closure condition: $ $ if $\rm\, (a',a),\,(b',b)\in\;\equiv\,$ then $\rm\, (a'+b',a+b)\in\; \equiv\,,\, $ i.e. $\,\equiv\,$ is closed under addition in $\rm\, A\times A\,$. Thus an equivalence relation on $\rm A $ is compatible with the operations of $\rm A $ iff it forms a subalgebra of $\rm\,A^2$.
-Returning to the case at hand, where $\rm A $ is the ring $\,\mathbb Z,\, $ let $\rm\,S\,$ be a subring of $\rm\,\mathbb Z\times\mathbb Z\,.\, $ By the above, to show that $\rm\,S\,$ is a congruence we need only show that it is an equivalence relation. Firstly, since here $\rm\ (1,1)\in S\ $ generates the full diagonal $\rm\ (1,1)\ \mathbb Z,\, $ we deduce that $\rm\,S\,$ is a reflexive relation. Secondly, $\rm\,S\,$ is symmetric since if $\rm\, (a,b)\in S\ $ then also in $\rm\,S\,$ is $\rm\ (a+b,a+b)-(a,b) = (b,a).\, $ Thirdly, $\rm\,S\,$ is transitive since if $\rm\, (a,b),\ (b,c)\in S\, $ then so too is $\rm\, (a,b)\,+\,(c,c)-(c,b) =(a,c).\, $ Finally,$\,$ for every ring $\rm\,R,\, $ a congruence $\equiv$ is uniquely determined by the congruence class of $\,0,\,$ since $\rm\ a\equiv b\iff a-b\equiv 0.\, $ But the congruence class of $\,0\,$ has the structure of an ideal since $\rm\ a,b\equiv 0\ \Rightarrow\ a+b\equiv 0\ $ and $\rm\ ac\equiv 0,\,$ for all $\rm\,c \in R\,$ (see this answer for more on such ideal-determined algebras, i.e. where congruences are determined by a single equivalence class).
-This explains - from general principles - the connection observed in Qiaochu's post between subalgebras of $\,\mathbb Z\times\mathbb Z\,$ and ideals of $\rm\mathbb Z.\, $ Note that while the connections between congruences and subalgebras of the square, and congruences and ideals hold true for every ring $\rm\,R,\,$ the rest of the above argument doesn't follow since generally $\rm\,(1,1)\,$ doesn't generate the full diagonal $\rm\,(1,1)\, R.\,$ Indeed, it generates only the diagonal of the characteristic subring (the image of $\,\mathbb Z\,$ in $\rm\,R)$.
-The above argument shows that in order to verify that a subalgebra of $\rm\,R\times R\,$ is a ring congruence it suffices to show that the subalgebra contains the diagonal (i.e. it is reflexive), since this implies the other equivalence relation properties (symmetry and transitivity). This leads to the following
-Theorem $\ $ The following are equivalent for a ring $\rm\,R\,$ and set $\rm\ S\subset R\times R$
-$\rm(1)\quad S\ $ is a congruence on $\rm\,R\,$
-$\rm(2)\quad S\ $ is a subalgebra of $\rm\,R\times R\,$ and $\rm\,S \supset (1,1)\, R$
-$\rm(3)\quad I\, :=\, \{\, r\in R\, :\ (r,0)\in S \,\}\ $ is an ideal in $\rm\,R\,$
-Proof $\rm\ (1\Rightarrow 2)\ $ follows the same way as sketched above. $\rm\ (1\!\!\iff\!\! 3)$ is well-known.
-$\rm (2\Rightarrow 3)\quad i,\,j\in I\ \Rightarrow\ (i,0),\,(j,0)\in S\ \Rightarrow\ (i,0)+(j,0)=(i+j,0)\in S\ \Rightarrow\ i+j\in I $
-Also $\rm\; r\in R,\, j\in I\ \Rightarrow\ (r,r),\,(j,0)\in S\ \Rightarrow\ (r,r)\, *\, (j,0) = (r *\, j, 0)\in S\ \Rightarrow\ r\, *\,j \in I$
-For further details see any good textbook on universal algebra, e.g. Burris and Sankappanavar's textbook A Course in Universal Algebra or George Bergman's An Invitation to General Algebra and Universal Constructions.<|endoftext|>
-TITLE: Difference between lattice and complete lattice
-QUESTION [21 upvotes]: The definition of a lattice requires that any two elements have a LUB and a GLB, while a complete lattice extends this: every subset should have a LUB and a GLB. But by induction, it is possible to show that if any two elements have a LUB and a GLB, then every subset should have a LUB and a GLB. I read somewhere that the difference comes from infinite sets: in that case it is possible that a set together with some partial order is a lattice but not a complete lattice. Can someone please elaborate with an example?
-regards
-
-REPLY [28 votes]: Regular induction ("holds for $1$" and "if it holds for $k$ then it holds for $k+1$") only gives you that the result holds for every natural number $n$; it does not let you go beyond the finite numbers. For example, you can prove by induction that there are natural numbers that require $n$ digits to write down in base $10$ for every $n$, but this does not mean that there are natural numbers that require an infinite number of digits to write down in base $10$. "For all $n$" is not the same as "for all sizes, finite or infinite".
-(There is a kind of induction that would allow you to prove something for all sizes, not just finite. This is called transfinite induction. You prove the result holds for $1$, and that whenever it holds for all $m\lt k$, then it also holds for $k$ (or, you prove it holds for $1$, that if it holds for an ordinal/cardinal $\alpha$ then it holds for $\alpha+1$, and that if it holds for all ordinals/cardinals strictly smaller than $\gamma$, then it holds for $\gamma$). However you would be unable to do such a proof with lattices, because it is false).
-So, if you have a lattice, then any nonempty finite subset has a least upper bound and a greatest lower bound, by induction. Even if you have a $0$ and a $1$ (a minimum and a maximum element) so that every set has an upper and a lower bound, you still don't get that every set has a least upper bound. For example, take $P = \mathbb{Q}\cup\{-\infty,\infty\}$, with the usual order among rationals, $-\infty\leq q\leq \infty$ for all $q\in\mathbb{Q}$. This is a lattice, with operations $a\wedge b = \min\{a,b\}$ and $a\vee b = \max\{a,b\}$ (since it is a totally ordered set). Every finite subset has a least upper bound (the maximum) and a greatest lower bound (the minimum). But it is precisely the absence of suprema and infima for general sets that stops it from being a complete lattice: the set $\{q\in\mathbb{Q}\mid q^2\lt 2\}$ has no least upper bound and no greatest lower bound.
-
-REPLY [4 votes]: Normal induction doesn't "reach" infinity. So proving something for finite sets doesn't prove it for infinite sets.
-Explicitly: the integers $\mathbb{Z}$ form a lattice, but it is incomplete, since there is no greatest (or least) integer.<|endoftext|>
-TITLE: Hitting times of reversible Markov chain with known steady state probabilities
-QUESTION [5 upvotes]: Consider a reversible Markov chain $X_n$ whose steady state distribution is known. Can we find the expected hitting time of a subset $A$ of the states, starting from some state $i$? Additionally, you can assume the chain has no self-transitions.
-
-REPLY [4 votes]: The steady state distribution is not enough to determine the mean hitting time $E(T)$ of a given subset $A$. To see this in a simple case, assume there are $2n$ states and consider the simple random walks on the discrete circle $C_{2n}$ and on the complete graph $K_{2n}$. By symmetry, the steady state distribution is uniform in both cases.
-Choose $A=\{j\}$ and $i$ at distance $n$ from $j$ in $C_{2n}$. Then, $T$ for $C_{2n}$ is distributed like the first hitting time of $\pm n$ by a standard symmetric random walk on the discrete line starting from $0$, hence $E_{C_{2n}}(T)=n^2$. On the other hand, $T$ for $K_{2n}$ is distributed like the time of first success in an i.i.d. sequence of trials with probability of success $1/(2n-1)$ at each trial, hence $E_{K_{2n}}(T)=2n-1$. For every $n\ge2$, $E_{C_{2n}}(T)\ne E_{K_{2n}}(T)$.<|endoftext|>
-TITLE: How are norms different from absolute values?
-QUESTION [22 upvotes]: Hopefully without getting too complicated, how is a norm different from an absolute value?
-In context, I am trying to understand the relative stability of an algorithm, using the inequality
-$\frac{|f(x_0)- \tilde{f}(\epsilon, x_0)|}{|f(x_0)|} \leq \sigma_{rel} ||\epsilon|| + o(||\epsilon||)$
-in which $\sigma_{rel}$ describes the relative stability,
-and $||\epsilon||$ is the maximum rounding error of an elementary function: $||\epsilon|| = \mbox{max} |\epsilon_i|, i=1,...n$, $\epsilon = (\epsilon_1,...,\epsilon_n)$.
-After reading some of the wikipedia article on norms, I could not take away much other than that a norm is positive and measures a length. I am confused about the statement that it is a function...
-Any help trying to understand this would be great, thanks!
-
-REPLY [39 votes]: The absolute value is a particular instance of a norm. Or perhaps, you can think of norms as functions $\mathbf{V}\to\mathbb{R}$ where $\mathbf{V}$ is a vector space over a field $\mathbf{F}$, and "absolute values" are "norms on the base field".
-The absolute value is a function $|\>|\colon\mathbb{R}\to[0,\infty)$; given any real number $r$, you get a nonnegative real number that we write $|r|$, and which satisfies the following properties:
-
-$|x|\geq 0$ for all $x\in\mathbb{R}$; $|x|=0$ if and only if $x=0$.
-$|rx| = |r||x|$ for all $r,x$.
-$|x+y| \leq |x|+|y|$.
-
-It's a real valued function of a real variable, because it takes a real number as an input, and gives a (nonnegative) real number as an output. It's just that instead of calling the function, say, $f$, and writing the output as $f(x)$, we call the function "$|\>|$" and write the output as $|x|$.
-Similarly, we have the modulus function for complex numbers, $|\>|\colon\mathbb{C}\to[0,\infty)$, defined by $|z| = \sqrt{z\overline{z}}$, and which also satisfies the three conditions above.
-If $\mathbf{V}$ is a vector space over $\mathbb{R}$, then a norm on $\mathbf{V}$ is a function $||\>||\colon \mathbf{V}\to[0,\infty)$ that generalizes the absolute value; it must satisfy:
-$||\mathbf{x}|| \geq 0$ for all $\mathbf{x}\in\mathbf{V}$; $||\mathbf{x}||=0$ if and only if $\mathbf{x}=\mathbf{0}$.
-$||r\mathbf{x}|| = |r|\,||\mathbf{x}||$ for all $r\in\mathbb{R}$, all $\mathbf{x}\in\mathbf{V}$.
-$||\mathbf{x}+\mathbf{y}|| \leq ||\mathbf{x}|| + ||\mathbf{y}||$ for all $\mathbf{x},\mathbf{y}\in\mathbf{V}$.
-Again, this is a function: given any vector $\mathbf{x}$ in the domain, $||\mathbf{x}||$ is the output of the function (a nonnegative real number).
-In particular, if you view $\mathbb{R}$ as a vector space over itself, then the absolute value gives a norm on $\mathbb{R}$.
-For vector spaces over $\mathbb{C}$, you replace the absolute value of $r$ in the second property with the modulus of $r$.
-The norm you describe in your post, $||\epsilon||=\max|\epsilon_i|$, is a particular norm that can be placed on $\mathbb{R}^n$; there are many norms that can be defined on $\mathbb{R}^n$.
-The notion of norm on a vector space can be defined over any field that is contained in $\mathbb{C}$, by restricting the modulus to that field.
-Added. One can also extend the notion of norm by starting with any field $\mathbf{F}$, such as the rationals, simply asking for an "absolute value" function $|\>|\colon \mathbf{F}\to [0,\infty)$ that satisfies properties 1, 2, and 3 above. Then you extend the notion of norm for any vector space over that field. It is common to refer to such "absolute values" as "norms" so as not to confuse them with the usual absolute value. One classical example, mentioned by Asaf, is the $p$-adic norms on the rationals. First, fix a prime $p$; given an integer $a$, we define the $p$-order of $a$ to be the largest power of $p$ that divides $a$: that is, $\mathrm{ord}_p(a) = n$ if and only if $p^n$ divides $a$ and $p^{n+1}$ does not divide $a$. We formally set $\mathrm{ord}_p(0)=\infty$. We then extend this to the rationals: given a rational $\frac{a}{b}$, we let $\mathrm{o}_p(\frac{a}{b}) = \mathrm{ord}_p(a) - \mathrm{ord}_p(b)$. Finally, we define the $p$-adic norm on the rationals, $||\>||_p\colon\mathbb{Q}\to[0,\infty)$, by $||\frac{a}{b}||_p = p^{-o_p(a/b)}$. (You can use bases other than $p$; they amount to what are called "equivalent norms".)
-Here you really want to think of $||\>||_p$ as a kind of "non-standard absolute value" on the rationals; it leads to a lot of interesting mathematics, beginning with the $p$-adic numbers, if you use $||\>||_p$ to define "Cauchy sequences" instead of $|\>|$ and do the same construction that leads to the reals in the latter case.<|endoftext|>
-TITLE: Applications of rings without identity
-QUESTION [28 upvotes]: Many courses and books assume that rings have an identity. They say there is not much loss in generality in doing so, as rings studied usually have an identity or can be embedded in a ring with an identity. What then are the major applications of rings without an identity occurring naturally in mathematics?
-REPLY [37 votes]: The most common example of rings without identity occurs in functional analysis, when one considers rings of functions. A typical example is to consider the ring of all functions of compact support on a non-compact space.
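-(A toy discrete analogue of this, finitely supported functions on $\mathbb{N}$ with pointwise operations, can be played with in a few lines of Python; this is purely illustrative, with functions encoded as dictionaries from support points to nonzero values:)
-
-def add(f, g):
-    h = {k: f.get(k, 0) + g.get(k, 0) for k in sorted(set(f) | set(g))}
-    return {k: v for k, v in h.items() if v != 0}
-
-def mul(f, g):
-    # pointwise product; the support shrinks to the intersection
-    return {k: f[k] * g[k] for k in sorted(set(f) & set(g)) if f[k] * g[k] != 0}
-
-f = {0: 2, 3: 5}
-g = {3: 1, 7: 4}
-print(add(f, g), mul(f, g))  # {0: 2, 3: 6, 7: 4} {3: 5}
-
-# sums and products of finitely supported functions are again finitely
-# supported, but a multiplicative identity would have to equal 1 at every
-# point of the infinite space, so it cannot have finite support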
-Obviously, as these rings of functions are very important in $C^*$-algebras and in studying the properties of the space, knowledge about rings without identity is very important for studying these spaces.
-Infinite direct sums of rings with unity are not rings with unity, which can also be fairly annoying.
-It is true that one can always embed a ring (as an ideal, even) into a ring with identity. The most common such embedding is the Dorroh embedding, in which we start with a ring $R$, and consider the ring with underlying set $\mathbb{Z}\times R$ and operations given by $(n,a)+(m,b) = (n+m,a+b)$ and $(n,a)(m,b) = (nm, nb+ma+ab)$. It is not hard to verify that $r\mapsto (0,r)$ embeds $R$ into the Dorroh extension as an ideal. You can preserve the characteristic of $R$ if necessary: if $R$ is of characteristic $n$, then replace $\mathbb{Z}$ with $\mathbb{Z}/n\mathbb{Z}$ in the construction. The extension has other nice properties (ideals of $R$ remain ideals of the extension, for example).
-(Luckily, I am currently going over a thesis about embedding rings as ideals into rings with identity, so I can give you some other classical results.)
-However, the Dorroh extension does not preserve all ring properties that may be of interest in $R$. For example, a ring is entire if it has no nonzero zero divisors; a ring is prime if whenever $A$ and $B$ are ideals and $AB=0$, then either $A=0$ or $B=0$ (that is, "prime" is the ideal version of "entire"; an entire ring is necessarily prime). For example, if you perform the Dorroh extension on $\mathbb{Z}$ itself (perhaps not realizing it already had a $1$) then $(1,-1)(0,r)=(0,0)$ even though $\mathbb{Z}$ is entire. There are nontrivial examples of this situation as well. Another property not necessarily preserved by the Dorroh extension is being semiprime.
-There are other standard embeddings of rings into rings with identity, such as the Szendrei extension (a quotient of the Dorroh extension). But even so there are ring-theoretic properties that may be very hard to maintain in these kinds of embeddings. Among the more difficult ones are simplicity (if $R$ is simple, can we embed $R$ into a simple ring with identity? Yes; Anne Vakarietis, a student of a colleague, just finished putting together the pieces for this in her dissertation). It's known that every commutative $n$-root ring (a ring in which every element has an $n$th root) can be embedded in a commutative $n$-root ring with identity, but it is not known if this is possible for noncommutative rings. Likewise, it is not known if every semiprimary ring can be embedded in a semiprimary ring with identity.
-And worse, there are some properties that we know cannot be respected by such embeddings. For example, Fuchs and Rangaswamy proved that not every $\pi$-regular ring can be embedded as an ideal in a $\pi$-regular ring with identity (a ring is $\pi$-regular if every element is $n$-regular for some natural number $n$; an element $x$ is $n$-regular if there exists some $y$ such that $x^nyx^n=x^n$; this is a generalization of von Neumann regularity).
-So, in summary: yes, rings without identity arise very naturally, and as such they show up when investigating other mathematical objects. And while it is true that one can always embed a ring without identity as an ideal into a ring with identity, this may not be a good thing from the point of view of studying some ring-theoretic properties of these rings.<|endoftext|>
-TITLE: The product of a paracompact space and a compact space is paracompact.
-(Why?)
-QUESTION [8 upvotes]: A paracompact space is a space in which every open cover has a locally finite refinement.
-A compact space is a space in which every open cover has a finite subcover.
-Why must the product of a compact and a paracompact space be paracompact?
-I really have very little intuition about how to go about this question, so any hints or a proof would be greatly appreciated.
-REPLY [7 votes]: The key idea is indeed to use the "tube lemma", as d.t. did in his solution, but there is a small gap there.
-(This proof does not assume prior knowledge of the tube lemma.)
-Let $X$ be paracompact and $Y$ be compact. Let $\mathcal{A}$ be an open cover of $X\times Y$.
-(Tube lemma part) First fix $x\in X$, and for each $y \in Y$, find $A\in\mathcal{A}$ and a basis element $U\times V$ such that $(x,y)\in U\times V\subseteq A$. As $y$ ranges over $Y$, these various $U\times V$ cover $\{x\}\times Y$, which is compact. Thus there exist finitely many $U_1\times V_1 \subseteq A_1,\dots,U_n\times V_n\subseteq A_n$ that cover $\{x\}\times Y$. Let $U_x = U_1\cap \dots \cap U_n$. For later use, let $\mathcal{A}_x=\{A_1,...,A_n\}$.
-Now, $\{U_x\}_{x\in X}$ is an open cover of $X$. Using paracompactness, there is a locally finite open refinement $\mathcal{B}$ that covers $X$. For purposes of notation, suppose $B_i,i\in I$ are the elements of $\mathcal{B}$. Using the refinement property, for each $i\in I$, pick $x_i\in X$ such that $B_i\subseteq U_{x_i}$.
-Consider the open refinement $\mathcal{C}$ of $\mathcal{A}$ given by
-$$\mathcal{C_{x_i}}:=\{A\cap (B_i\times Y)\}_{A\in \mathcal{A}_{x_i}},\quad \mathcal{C}:=\bigcup_{i\in I}\mathcal{C}_{x_i}$$
-To prove that this is a cover, consider any $(x,y)\in X\times Y$. First, $x$ is in some $B_i$. Since $\mathcal{C}_{x_i}$ covers $B_i \times Y$, $(x,y)$ is covered by $\mathcal{C}$.
-To prove that it is locally finite, consider any $(x,y)\in X\times Y$. First, there exists an open neighbourhood $U\subseteq X$ of $x$ that intersects only finitely many elements of $\mathcal{B}$, say $B_1,...,B_m$. Then $U\times Y$ is the desired neighbourhood of $(x,y)$ that only intersects finitely many elements of $\mathcal{C}$, as it can only intersect elements from $\mathcal{C}_{x_1},...,\mathcal{C}_{x_m}$, each of which is a finite collection.<|endoftext|>
-TITLE: On sort-of-linear functions: does $f(x+y) = f(x) + f(y)$ imply $f(\alpha x) = \alpha f(x)$?
-QUESTION [12 upvotes]: Background
-A function $ f: \mathbb{R}^n \to \mathbb{R} \ $ is linear if it satisfies
-$$ f(x+y) = f(x) + f(y) \ \text {, and} \tag 1 \label 1 $$
-$$ f(\alpha x) = \alpha f(x) \tag 2 \label 2 $$
-for all $ x,y \in \mathbb{R}^n $ and all $ \alpha \in \mathbb{R} $.
-A function satisfying only \eqref{2} is not necessarily linear. For example* $ f: \mathbb{R}^2 \to \mathbb{R} \ $ defined by $ f(x) = |x| \ $ (where $ |x| \ $ is the $ L^2 $ norm) satisfies \eqref{2} but is not linear. However, a function satisfying \eqref{1} does satisfy a weaker version of \eqref{2}, namely
-$$ f(ax)=af(x) \tag {2b} \label {2b} $$
-for all $ a \in \mathbb{Q} $.
-*Edit: As pointed out in the comments, this example doesn't quite work, since $|ax|=|a||x|$.
-It's relatively straightforward to show that, under the extra hypothesis that $ f $ is continuous, \eqref{2b} implies \eqref{2}.
I want to say that continuity is a necessary condition for \eqref{1} to imply \eqref{2}, or at least (worst) there is some extra hypothesis required (possibly weaker than continuity), but I'm not sure how to show it. -My question is therefore two-fold: - - -Is continuity a necessary condition for \eqref{1} to imply \eqref{2} and how could I go about proving it. -What are some examples (if there are any) of a function satisfying \eqref{1} but not \eqref{2}? - - -This can be stated in a slightly more general context as follows: -Suppose $ V\ $ is a vector space over $ \mathbb{R}\ $ and $ f: V \rightarrow \mathbb{R}\ $ satisfies -$$ f(x+y) = f(x)+f(y) \tag {1'} $$ -for all $ x,y \in V $. - -Under what conditions is $ f\ $ a vector space homomorphism? - - -The reason I believe continuity is necessary is because of the similarity to the fact that $ x^{\alpha} x^{\beta} = x^{\alpha + \beta} $ for all $ \alpha,\beta \in \mathbb{R} $. Irrational powers can be defined either via continuity (i.e. if $ \alpha \ $ is irrational, then $ x^{\alpha}:= \lim_{q\to \alpha} x^q \ $ where q takes on only rational values) or by using the exponential and natural logarithm functions, and either way proving the desired identity boils down to continuity. -I have come up with one example that satisfies (something similar to) \eqref{1} and not \eqref{2}, but it doesn't quite fit the bill: -$ \ $ Define $ \phi : \mathbb{Q}\left(\sqrt{2}\right) \to \mathbb{Q} \ $ defined by $ \phi\left(a+b\sqrt{2}\right) = a+b $. Then $ \phi(x+y) = \phi(x)+\phi(y) \ $ but if $ \alpha=c+d\sqrt{2} \ $ then $ \phi\Big(\alpha\left(a+b\sqrt{2}\right)\Big) = ac+2bd + ad+bc \neq \alpha \ \phi\left(a+b\sqrt{2}\right) $. -$ \ $ The problem is that even though $ \mathbb{Q}\left(\sqrt{2}\right) \ $ is a vector space over $ \mathbb{Q} $, the $ \alpha \ $ is coming from $ \mathbb{Q}\left(\sqrt{2}\right) \ $ instead of the base field $ \mathbb{Q} $. - -REPLY [8 votes]: It is not true that $|ax|=a|x|$; the correct identity is $|ax|=|a||x|$. -Whether or not adding the hypothesis of continuity is necessary for additive functions to be linear depends on the axiom of choice. Using a Hamel basis $B$ for $\mathbb{R}^n$ over $\mathbb{Q}$ together with one of its countable subsets $A=\{x_1,x_2,\ldots\}$, you can construct a discontinuous $\mathbb{Q}$ linear map from $\mathbb{R}^n$ to $\mathbb{R}$ by taking the unique $\mathbb{Q}$ linear extension of the function $f:B\to\mathbb{R}$ such that $f(x_k)=k|x_k|$ and $f(B\setminus A)=\{0\}$. Since $\mathbb{R}$ linear maps between finite dimensional real vector spaces are continuous, such a map cannot be linear. However, it is consistent with ZF that all additive functions from $\mathbb{R}^n$ to $\mathbb{R}$ are continuous (I am however not knowledgeable in the set theoretic background needed to show this).<|endoftext|> -TITLE: function is smooth iff the composition with any smooth curve is again smooth -QUESTION [7 upvotes]: I'm stuck on the following part of a proof: -Let $\phi: \mathbb R^m \to \mathbb R^n$ be a function such that $\gamma'(t) := \phi(\gamma(t))$ is smooth for every smooth function $\gamma: \mathbb R \to \mathbb R^m$. -I want to show that $\phi$ is smooth under these assumptions. -Could someone give me a pointer? -Thanks in advance! -S.L. - -REPLY [6 votes]: This was proved by Jan Boman in the paper "Differentiability of a function and of its compositions with functions of one variable", Math. Scand. 20 (1967), 249-268. 
-(The theorem as stated is for the case $n=1$, but that is no problem, as Jason DeVito already mentioned in a comment.) Here's an online version, and here's the MathSciNet link. According to the article and review, it had been an unpublished conjecture of Rådström.<|endoftext|>
-TITLE: Why doesn't Hom commute with taking stalks?
-QUESTION [27 upvotes]: I have been learning about sheaves and am thinking about the following problem. Let $F$ and $G$ be sheaves, say of abelian groups, on a space $X$. The sheaf $Hom(F, G)$ is defined by $Hom(F, G)(U)=Mor(F|_U, G|_U)$. Given a point $p \in X$ and an open set $U$ containing $p$, a morphism $\varphi: F|_U \rightarrow G|_U$ induces a homomorphism on stalks $\phi: F_p \rightarrow G_p$, which is an element of $Hom(F_p, G_p)$. Thus, by the universal property of direct limits, we have a homomorphism from $Hom(F, G)_p$ to $Hom(F_p, G_p)$. However, this is not in general injective or surjective. Why not? An example or a hint leading towards an example would be much appreciated. I have thought about this for some simple sheaves (such as skyscraper sheaves), but it seems to be true in those cases.
-I am also interested in a more general answer if there is one, i.e. something category theoretic about Hom and direct limits.
-REPLY [2 votes]: Here is a somewhat abstract explanation giving a "deeper reason" for why Hom doesn't commute with taking stalks. In categorical logic, it is well known that in general, only so-called geometric constructions commute with taking inverse image under geometric morphisms. Calculating the stalk at a point is an example of such a process of taking an inverse image.
-Anton's answer shows that the Hom construction is not geometric in general.
-But there is a very general situation in which $\mathcal{H}om(F, \cdot)$ is geometric. Namely, it suffices for $F$ to be an $\mathcal{O}_X$-module on a ringed space $X$ which is of finite presentation around $x \in X$, i.e. such that there is a short exact sequence
-$$ \mathcal{O}_X^n \longrightarrow \mathcal{O}_X^m \longrightarrow F \longrightarrow 0 $$
-on an open neighbourhood of $x$. (You can use the constant sheaf $\underline{\mathbb{Z}}$ as the structure sheaf $\mathcal{O}_X$ if you want to stay in the setting of sheaves of abelian groups.) This is because in this case, $\mathcal{H}om(F, G)$ is canonically isomorphic to
-$$\left\{ x \in G^m \,\middle|\, \sum_i a_{ij} x_i = 0 \in G, j = 1,\ldots,n \right\},$$
-where $A = (a_{ij}) \in \mathcal{O}_X^{m \times n}$ is the presentation matrix and $G$ is an arbitrary $\mathcal{O}_X$-module, and this construction is patently geometric.<|endoftext|>
-TITLE: Show for prime numbers of the form $p=4n+1$, $x=(2n)!$ solves the congruence $x^2\equiv-1 \pmod p$. $p$ is therefore not a Gaussian prime.
-QUESTION [7 upvotes]: I need to show that for prime numbers of the form $p=4n+1$, $x=(2n)!$ solves the congruence $x^2 \equiv-1\pmod p$.
-I then need to show this implies $p$ isn't a Gaussian prime.
-I have started to solve this using Wilson's theorem that a number $z$ is prime iff $(z-1)!\equiv-1\pmod z$. Therefore the endpoint of my proof should be that $(p-1)!\equiv-1\pmod p$.
-As $p$ is of the form $p=4n+1$, I only need to prove that $(4n)!$ is congruent to $-1$ modulo $p$.
-Here is my working so far: Starting with the congruence $x^2 \equiv -1\pmod p$:
-$$\eqalign{x^2 \equiv -1\pmod p&\implies
 x^2 + 1 = kp
\implies (x-i)(x+i) = kp\cr
&\implies ((2n)!-i)((2n)!+i)=kp
\implies (4n)! + 1 =kp\cr}$$
-This is where I start to run out of any ideas that seem to get me anywhere.
-Any tips would be greatly appreciated!
-REPLY [3 votes]: I think the part about $p$ not being a Gaussian prime has not been addressed. You have arrived at $p\mid(x+i)(x-i)$ (with $x=(2n)!$). Now $p$ divides neither $x+i$ nor $x-i$, since $${x\pm i\over p}={x\over p}\pm{1\over p}i$$ is not a Gaussian integer. But the Gaussian integers are a unique factorization domain, and in such a domain a prime that divides a product divides (at least) one of the terms being multiplied. Thus, $p$ can't be a prime in the Gaussians.<|endoftext|>
-TITLE: Does $M_n^{-1}$ converge for a sequence of growing matrices $M_n$?
-QUESTION [5 upvotes]: $M_n$ is an $n\times n$ matrix with $M_{n+1}=\begin{pmatrix}M_n & a_n \\ b_n^T & c_n\end{pmatrix}$ and $a_n, b_n, c_n \to 0$ for $n\to \infty$. Is this sufficient to state $$ \lim_{n\to\infty}(M_n^{-1}) = (\lim_{n\to\infty}M_n)^{-1} ?$$
-REPLY [2 votes]: Let $\Omega(\mathbb{R}^{n\times n})=\{ X\in\mathbb{R}^{n\times n}: X^{-1}\text{ exists} \}$ and $\Omega(\mathbb{R}^{\mathbb{N}\times \mathbb{N}}) = \bigcup_{n\in\mathbb{N}}\Omega(\mathbb{R}^{n\times n})$, i.e.
-$$
-\Omega(\mathbb{R}^{\mathbb{N}\times \mathbb{N}})
-=
-\left\{
-X : \mbox{ there is } n_0\in\mathbb{N} \mbox{ such that } X\in\Omega(\mathbb{R}^{n_0\times n_0})
-\right\}
-$$
-Note that $\Omega(\mathbb{R}^{n\times n})$ is an open subset of the vector space $\mathbb{R}^{n\times n}$ with the usual matrix sum and scalar product.
-Suppose that $M_n\in\Omega(\mathbb{R}^{p_n\times p_n})$ and $c_n\in\Omega(\mathbb{R}^{q_n\times q_n})$, where $p_n+q_n=n$ for all $n\in\mathbb{N}$.
-Proposition. For every positive $k\in\mathbb{N}$ the map
-$$
-\Omega(\mathbb{R}^{k\times k})\ni X\mapsto X^{-1}\in\Omega(\mathbb{R}^{k\times k})
-$$ is continuous.
-Proof. To see this, use the block matrix inversion formula
-$$
-\begin{pmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{A}^{-1}+\mathbf{A}^{-1}\mathbf{B}(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1}\mathbf{CA}^{-1} & -\mathbf{A}^{-1}\mathbf{B}(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1} \\ -(\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1}\mathbf{CA}^{-1} & (\mathbf{D}-\mathbf{CA}^{-1}\mathbf{B})^{-1} \end{pmatrix}
-$$
-and induction on $k$.
-Proposition. If the sizes eventually stabilize at some $n_0\in\mathbb{N}$ (so that $M_n\in\mathbb{R}^{n_0\times n_0}$ for all large $n$), $\lim_{n\to\infty}M_n$ exists, and $\lim_{n\to\infty}M_n\in\Omega(\mathbb{R}^{\mathbb{N}\times\mathbb{N}})$, then
-$$
-\lim_{n\to \infty}( M_n)^{-1}= \big(\lim_{n\to \infty} M_n \big)^{-1}.
-$$
-Proof. Similarly, use the block matrix inversion formula and induction on $n=p_n+q_n$.<|endoftext|>
-TITLE: When are $n$ and $2n$ both sums of two squares?
-QUESTION [9 upvotes]: Given a natural number $n$, can we completely characterize when $n$ and $2n$ are each a sum of two squares?
-For example: $446,382,709=(13010)^{2}+(16647)^{2}$ and $892,765,418=2(446,382,709)=(3637)^{2}+(29657)^{2}$
-Do these imply, perhaps, that 446,382,709 is the hypotenuse of a primitive Pythagorean triple?
-(I found this question on a slip of paper while cleaning my office.
It turns out that a trivial algebraic identity resolves the question...beyond Euler's characterization of when a natural number is the sum of two squares. Is the question nontrivial if we replace $2n$ by $3n$ above?) -EDIT: The modification is trivial, too. This is my fault, as I didn't think about my question before posting it. (This was a problem found on a slip of paper in my office. I posted it because it looked like a fun problem for students. This is certainly not the best forum for such things!!!) - -REPLY [19 votes]: A natural is the sum of two squares iff every prime of the form $\rm\:4\:k+3$ occurs to even power in its prime factorization (iff the same is true for $\rm\:2\:n\:$). Note $\rm\ n = x^2 + y^2\ \Rightarrow\ 2\:n = (x+y)^2 + (x-y)^2$ which arises by compositions of forms, or, in linear form, with the norm $\rm\ N(a+b\ i)\ =\ a^2 + b^2 $ -$$\rm 2\ (x^2+y^2)\ =\ N(1+i)\ N(x-y\ i)\ =\ N((1+i)\ (x-y\ i))\ =\ N(x+y + (x-y)\ i) $$ - -REPLY [14 votes]: $2(a^2 + b^2) = (a + b)^2 + (a - b)^2$. And, of course, the classification of numbers which are the sums of two squares is well-known (it is necessary and sufficient that if $p \equiv 3 \mod 4$, then the largest $k$ such that $p^k | n$ is even).<|endoftext|> -TITLE: The cardinality of a countable union of countable sets, without the axiom of choice -QUESTION [23 upvotes]: One of my homework questions was to prove, from the axioms of ZF only, that a countable union of countable sets does not have cardinality $\aleph_2$. My solution shows that it does not have cardinality $\aleph_n$, where $n$ is any non-zero ordinal (not necessarily finite). I have a sneaking suspicion that my solution is actually invalid, but I can't find any reference which invalidates my conclusion. -I have read that it is provable in ZF that there are no cardinals $\kappa$ such that $\aleph_0 < \kappa < \aleph_1$, but I believe the conclusion of my proof does not preclude the possibility that the cardinality is incomparable to $\aleph_1$ or some such. -I think the weakest point in my solution is where I claim that the supremum of a countable set of countable ordinals is again countable. This is true, of course, but it sounds uncomfortably close to the claim "a countable union of countable sets is countable", which is well-known to be unprovable in ZF. Can anybody confirm that the ordinal version is provable in ZF though? If not, I think I can weaken the claim to "the supremum of any set of countable ordinals is at most $\omega_1$", and this establishes the weaker result that the cardinality of a countable union of countable sets is not $\aleph_n$ for any ordinal $n \ge 2$. - -REPLY [26 votes]: The ordinal $\omega_1$ can (consistently) be a countable union of countable ordinals, i.e., the supremum of a countable set of countable ordinals. -This consistency result was one of the first found with forcing. It was announced in S. Feferman-A. Levy, "Independence results in set theory by Cohen's method II", Notices Amer Math Soc., 10, (1963) 593. -The result is that it is consistent with $\mathsf{ZF}$ that ${\mathbb R}$ is a countable union of countable sets. From this, it follows easily that $\omega_1$ also has this property. A proof can be found in Jech's book on the Axiom of Choice. -The problem is that, as you suspect, "The supremum of a countable set of countable ordinals" cannot be proved without some choice to be countable. 
-The issue is that although we know that each countable ordinal is in bijection with $\omega$, there is no uniform way of picking for each countable ordinal one such bijection. Now, you need this to run the usual proof that a countable union of countable sets is countable.
-In fact, things can be worse: Gitik showed that it is consistent with $\mathsf{ZF}$ that every infinite (well-ordered) cardinal has cofinality $\omega$. ("All uncountable cardinals can be singular", Israel J. Math, 35, (1980) 61-88.)
-On the other hand, one can check that a countable union of countable sets of ordinals must have size at most $\omega_1$ (which is essentially what your HW is asking to verify). So, in Gitik's model, $\omega_2$ is a countable union of countable unions of countable ordinals, but not a countable union of countable ordinals.
-Let me add two comments about other things you say in your question: You write "I have read that it is provable in $\mathsf{ZF}$ that there are no cardinals $\kappa$ such that $\aleph_0<\kappa<\aleph_1$". This is true, but it is stronger than that: By definition $\aleph_1$ is the first ordinal that is not countable, so of course there are no cardinals in between $\aleph_0$ and $\aleph_1$. Similarly, there are no cardinals between any (well-ordered) cardinal $\kappa$ and its successor $\kappa^+$, by definition.
-It is true, however, that a countable union of countable sets need not be comparable with $\aleph_1$ without choice. In fact, we can have a non-well-orderable set that can be written as a countable union of sets of size 2.
-Edit, Jun 24/16: To see that a countable union of countable sets of ordinals cannot equal $\omega_2$, we check more generally that if $\kappa$ is a (well-ordered) cardinal, then a union of $\kappa$ many sets, each of size at most $\kappa$, cannot have size $\kappa^{++}$: Let $S_0,S_1,\dots,S_\beta,\dots$, $\beta<\kappa$, be sets of ordinals, each of size at most $\kappa$. Let $S$ be their union and let $\alpha={\rm ot}(S)$, the order type of $S$. Similarly, let $o_\iota={\rm ot}(S_\iota)$ for each $\iota<\kappa$. Each $o_\iota$ is an ordinal below $\kappa^+$ and therefore $o=\sup_\iota o_\iota\le\kappa^+$. Use this to define a surjection $f$ from $\kappa\times o$ onto $\alpha$, from which it follows that there is an injection from $\alpha$ into $\kappa\times o$, and therefore an injection from $\alpha$ into $\kappa^+$: Given $(\iota,\beta)\in \kappa\times o$, define $f(\iota,\beta)=0$ unless $\beta<o_\iota$, in which case we let $f(\iota,\beta)$ be the image of the $\beta$-th element of $S_\iota$ under the order isomorphism between $S$ and $\alpha$; since every element of $S$ is the $\beta$-th element of some $S_\iota$, this $f$ is indeed onto $\alpha$.<|endoftext|>
-TITLE: Probability of obtaining triangle when choosing $3$ points from $3\times3$ array
-QUESTION [8 upvotes]: $9$ points are placed in a $3\times3$ array. If $3$ points are randomly selected, what is the probability that they are the vertices of a triangle?
-REPLY [26 votes]: Any $3$ points would form a triangle unless they are collinear. By considering horizontal, vertical and diagonal lines, we see that there are exactly $8$ cases of collinearity. Now there are $\binom{9}{3}=84$ ways to choose $3$ points out of $9$. Hence the probability is $\frac{84-8}{84}=\frac{19}{21}$.<|endoftext|>
-TITLE: maximum area of three circles
-QUESTION [12 upvotes]: Hi, I am new here and have a calculus question that came up at work.
-Suppose you have a $4' \times 8'$ piece of plywood. You need 3 circular pieces, all of equal diameter. What is the maximum size of circles you can cut from this piece of material?
-I would have expected I could write a function for the area of the 3 circles in terms of $x$ and $y$, then differentiate it, find a point of maxima/minima and go from there.
-My coworker did cut three $33''$ circles and that solved the real-world problem. But my passion would be to find the mathematical answer to this. I hope that my new stackexchange.com friends have the same passion, and can help me find the answer to this in general terms.
-What I mean by that is: someone says I have a piece of material $Q$ units by $2Q$ units, what are the three circles of maximum size?... I hope you understand what I am asking.
-I am looking to be a friend and contributor
-BD
-REPLY [4 votes]: Let $Q:=[-2,2]\times[-1,1]$ be the given rectangle and let $r_0$ be the radius computed by Isaac for this rectangle. I shall prove that 3 circles of radius $r>r_0$ cannot be placed into $Q$ without overlap. The midpoints of these circles would have to lie in the smaller rectangle $Q':=[-2+r, 2-r]\times[-1+r,1-r]$. The left half $Q'_-$ of $Q'$ has a diameter $<2r_0<2r$ (by definition of $r_0$). It follows that $Q'_-$ can contain at most one center of non-overlapping circles of radius $r$, and the same is true for the right half $Q'_+$.<|endoftext|>
-TITLE: What is the Tor functor?
-QUESTION [84 upvotes]: I'm doing the exercises in "Introduction to Commutative Algebra" by Atiyah & MacDonald. In chapter two, exercises 24-26 assume knowledge of the Tor functor.
-I have tried Googling the term, but I don't find any readable sources. Wikipedia's explanation uses the term "take the homology", which I don't understand (yet).
-Are there any good explanations of what the Tor functor is available online, not assuming any knowledge about homology?
-The first exercise:
-"If $M$ is an $A$-module, TFAE:
-1) $M$ is flat
-2) $\operatorname{Tor}_n^A (M,N)=0$ for all $n>0$ and all $A$-modules $N$.
-3) $\operatorname{Tor}_1^A (M,N)=0$ for all $A$-modules $N$."
-Thanks in advance.
-REPLY [44 votes]: You will be a lot more motivated to learn about Tor once you observe closely how horribly the tensor product behaves.
-Let us look at the simplest example possible. Consider the ring $R=\mathbb C[x,y]$ and the ideal $I=(x,y)$. These are about the most well-understood objects, right? What is the tensor product $I\otimes_RI$? This is quite nasty, it has torsion: the element $u = x\otimes y - y\otimes x$ is non-zero, but $xu=yu=0$!
-Tor gives you a black box to understand this kind of thing. Take the short exact sequence $0 \to I \to R \to R/I \to 0$ and tensor it with $I$ to get:
-$$0 \to \text{Tor}_1(R/I,I) \to I\otimes I \to I \to I/I^2 \to 0$$
-from which you can extract:
-$$0 \to \text{Tor}_1(R/I,I) \to I\otimes I \to I^2 \to 0$$
-But $\text{Tor}_1(R/I,I) = \text{Tor}_2(R/I, R/I) = \mathbb C$ by standard homological algebra. So now everything fits nicely: the map from $I\otimes I \to I^2$ takes $f\otimes g$ to $fg$, and the kernel is generated by the element $u$, which is killed by $I$, so it is isomorphic to $R/I \cong \mathbb C$.
-To summarize: the tensor product, despite being a fundamental operation, is actually quite bad, and Tor helps you to understand it.<|endoftext|>
-TITLE: How to simplify nested cubic radicals $\sqrt[3]{a+b\sqrt c}$
-QUESTION [9 upvotes]: While trying to answer this question, I got stuck showing that
-$$\sqrt[3]{26+15\sqrt{3}}=2+\sqrt{3}$$
-The identity is easy to show if you already know the $2+\sqrt{3}$ part; just cube the thing. If you don't know this, however, I am unsure how one would proceed.
-That got me thinking ...
-If you have some quadratic surd $a+b\sqrt{c}$, where $a$, $b$, and $c$ are integers, and $c$ is not a perfect square, how do you find out if that surd is the cube of some other surd, i.e. how to simplify nested cubic radicals of the form
-$$\sqrt[3]{a+b\sqrt c}$$
-REPLY [3 votes]: Here is a method outlined on page 52 under Algebra: Surds in Carr's Synopsis, the book from which Ramanujan taught himself most of his mathematical skills.
-On page 73 in Miscellaneous Equations and Solutions, the author gives an example of how to solve a cubic equation of the form described above.
-And lastly, on page 53, it is claimed that a different and arguably more generalized method can detect any $\sqrt[n]{A\pm B}$ for odd $n$ such that $A=a\sqrt x$ and $B=b\sqrt y$ for some $\{a,b,x,y\}\subset \mathbb Z^+$, with a supplied example, for which at least one of $x$ and $y$ is square-free.<|endoftext|>
-TITLE: What is the point of logarithms? How are they used?
-QUESTION [27 upvotes]: Why do you need logarithms?
-In what situations do you use them?
-REPLY [2 votes]: Logarithms are primarily used for two things:
-i) Representation of large numbers. For example, the number underlying pH (a measure related to the concentration of hydrogen ions) can be very large (up to 10 digits). To allow easier representation of these numbers, logarithms are used.
-For example, let's say this number for a substance is $10000000000$.
-This can be written as $10^{10}$.
-Or let the corresponding number for another substance be $1000000$. This can be written as $10^6$. Note the base is always the same, but the exponent is unique. Therefore the log of the number can be used to identify the substance. For example, the first substance can be represented as $\log$ $10000000000$ or $10$ and the second substance can be represented as $\log$ $1000000$ or $6$. Note $6$ and $10$ are much easier to deal with.
-But what if you're not a chemist? How would you use logs?
-ii) Algebra.
-Let's say you have the equation $316 = 10^x$. How would you solve for $x$? You could find the log of $316$, which is approximately $2.5$. The equation would then be $10^{2.5} = 10^x$. Therefore $x$ is $2.5$. Logs are therefore extremely useful when solving for exponents.
-Note that although I have restricted my examples to log base 10 for simplicity, logs can exist in other bases. For example $\log_2 32$ (log to the base 2 of 32) is $5$ since $2^5= 32$. Other important log bases include the natural log, which is commonly used in advanced mathematics.
-What other applications do logs have? Logs have a variety of real-life applications, such as calculating half-lives and exponential growth/decay. In fact, the inverse of an exponential function is a logarithmic function!<|endoftext|>
-TITLE: Understanding Gödel's Incompleteness Theorem
-QUESTION [56 upvotes]: I am trying very hard to understand Gödel's Incompleteness Theorem. I am really interested in what it says about axiomatic languages, but I have some questions:
-Gödel's theorem is proved based on arithmetic and its four operators: is all mathematics derived from these four operators (×, +, -, ÷)?
-Operations such as log(x) and sin(x) are indeed atomic operations, like those of arithmetic, but aren't there infinitely many such operators that have inverses (that is, + and - are "inverse" operations, × and ÷ are inverse operations)?
-To me it seems as though making a statement about the limitations of provability given 4 arbitrary operators is absurd, but that probably highlights a gap in my understanding, given that he proved this in 1931 and it's unlikely that I have found a counter-argument.
-As a follow-up remark, why the obsession with arithmetic operators? They probably seem "fundamental" to us as humans, but to me they all seem to be derived from four possible graphical arrangements of numbers (if we consider four sides to a digit), and fundamentally derived from addition.
-[][] o [][] addition and, its inverse, subtraction
-[][]
-[][] multiplication (iterative addition) and, its inverse, division
-There must be operators that are consistent on the natural numbers that we certainly aren't aware of, no?
-Please excuse my ignorance, I am hoping I haven't offended any real mathematicians with this posting.
-edit: I think I am understanding this a lot more, and I think my main difficulty in understanding this was that:
-There are statements that are true that are unprovable.
-Seemed like an impossible statement. It does, however, make sense to me at the moment in the context of an axiomatic language with a limited number of axioms. Ultimately, suggesting that there are statements that are true and expressible in the language, but are unprovable in the language (because of the limited set of axioms), is what I believe to be the point of the proof -- is this correct?
-REPLY [7 votes]: This is a personal recommendation.
-Peter Smith's "An Introduction to Gödel's Theorems" is very readable and self-contained.
-I believe that to fully appreciate Gödel's theorems it is a good idea to learn some computability theory first (Smith does include some of this background in his book, but a bit more perspective doesn't hurt).
-Two very good sources for beginners are Cutland's "Computability" and Boolos and Jeffrey's "Computability and Logic". There's a forthcoming book from Enderton (who unfortunately recently passed away), "Computability Theory", which I have been able to read and highly recommend too. For a quick (and unusually clear) introduction to computability theory you can take a look at Enderton's notes, which, by the way, are an excerpt from his book:
-http://www.math.ucla.edu/~hbe/computability.pdf
-I hope this helps (it did for me).
-Edit: Let me also recommend a very nice book by Melvin Fitting, Incompleteness in the Land of Sets. As the title suggests, the author shows us how to discuss incompleteness in its most natural place: the land of hereditarily finite sets.<|endoftext|>
-TITLE: Parity of Perfect Matchings
-QUESTION [8 upvotes]: Source: Lovasz, Plummer - Matching Theory
-I had a question related to the number of perfect matchings in a graph. While going through the 8th chapter, on Determinants and Matchings, in the text I stumbled across the first exercise problem. So far, I have been unable to solve either of the directions stated in the problem, which I present below.
-A graph $G$ has an even number of perfect matchings if and only if $\exists S \subseteq V(G); (S \neq \emptyset)$ such that all vertices in $V(G)$ are adjacent to an even number of vertices in $S$.
-At the moment, all I can see is that finding the determinant of $A(G)$ over $F_2$ reveals the parity of the number of perfect matchings. But, owing to my limited linear algebra background, I cannot see anything beyond. I would be really glad if someone could help me with this query.
-Thanks!
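-(For concreteness, the determinant-mod-2 observation is easy to spot-check with a short script; this is purely illustrative. A single edge has one perfect matching, while a 4-cycle has two:)
-
-def det_mod2(A):
-    # Gaussian elimination over F_2; A is a list of rows of 0/1 entries
-    A = [row[:] for row in A]
-    n = len(A)
-    for c in range(n):
-        p = next((r for r in range(c, n) if A[r][c]), None)
-        if p is None:
-            return 0  # singular mod 2
-        A[c], A[p] = A[p], A[c]
-        for r in range(n):
-            if r != c and A[r][c]:
-                A[r] = [x ^ y for x, y in zip(A[r], A[c])]
-    return 1
-
-edge = [[0, 1], [1, 0]]  # one perfect matching -> odd
-c4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]  # two -> even
-print(det_mod2(edge), det_mod2(c4))  # 1 0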
-EDIT The purpose of the current bounty is to get a combinatorial answer to the question if possible. I will remove this note after
-(i) there is a combinatorial answer, or
-(ii) the bounty period expires
-whichever comes first.
-REPLY [2 votes]: The answer can be found by considering an element in the kernel of the adjacency matrix over $F_2$; see the comments.
-Note also that although the determinant of the adjacency matrix modulo 2 gives the parity of the number of matchings, it is not at all true that the determinant of the adjacency matrix can often be used to determine the number of matchings.<|endoftext|>
-TITLE: Linear Algebra, cube & dimensions > 3
-QUESTION [12 upvotes]: I have found an interesting problem in Gilbert Strang's book, "Introduction to Linear Algebra" (3rd edition):
-How many corners does a cube have in 4 dimensions? How many faces? How many edges? A typical corner is $(0,0,1,0)$
-I have found the answer for corners:
-We know that a corner is $(x_1,x_2,x_3,x_4)$. For every $x_i$ we can use either $1$ or $0$. We can do this in $2 \cdot 2 \cdot 2 \cdot 2 = 2^4 = 16$ ways.
-The same method can be used for the general problem of a cube in $n$ dimensions (I suppose):
-Let's say we have an $n$-dimensional cube (I assume that the length of an edge is $1$, but it can be some $a$, where $a \in \mathbb{R}$ [1]). Here, a corner of this cube looks like this: $(x_1,x_2, \ldots , x_n)$. For every $x_i$ there are $2$ possibilities: $x_i = 0$ or $x_i = 1$ ($x_i = a$ in general). So, this cube has $2^n$ corners.
-It was pretty simple, I think. But now, there are also faces and edges. To be honest, I do not know how to find the answer in these cases.
-I know that the solution for this problem is:
-A four-dimensional cube has $2^4 = 16$ corners and $2 \cdot 4 = 8$ three-dimensional sides and $24$ two-dimensional faces and $32$ one-dimensional edges.
-Could you somehow explain to me how to figure out this solution? I have found the solution for corners by myself, using Linear Algebra methods & language. Could you show me how to find the number of edges and faces, using Linear Algebra methods?
-Is there another method to find these numbers? (I suppose that the answer to this question is positive.)
-I am also interested in articles/textbooks/etc. about space dimensions; if you know some interesting references on that, share with me (and the community).
-As I wrote: I am interested in mathematical explanations (in particular using Linear Algebra methods/language, but other methods may be also interesting) and some intuitions (how to find the solution using imagination etc. [2]).
-Thank you for help.
-[1] I am not sure of this assumption, because:
-(a) I am not sure how edges (and faces) behave in $n$ dimensions
-(b) I am not sure how I should think about distance in $n$ dimensions. I mean, I know that my intuition may play tricks here
-[2] I am not asking how to imagine a $4$-dimensional cube, but I think that there is a way to find the solution using reasoning, not only Linear Algebra.
-Addition
-My definition of face (there was a comment about that) is the same as the definition here: http://en.wikipedia.org/wiki/Face_(geometry), especially:
-In geometry, a face of a polyhedron is any of the polygons that make up its boundaries.
-REPLY [2 votes]: This is the 27th problem in Exercises 1.1 of the book Introduction to Linear Algebra, by Gilbert Strang.
-I hope my answer will be helpful for future readers.
-Question 1: How many corners does a cube have in 4 dimensions?
-Solution:
-A corner of a cube in $N$ dimension(s) can be written as $(x_1, x_2, \cdots, x_N)$ and for each $x$, the choice is either $1$ or $0$. Then the number of corners is equal to $2^N$. Therefore, the number of corners in 4 dimensions is $16$.
-Question 2: How many edges does a cube have in 4 dimensions?
-Solution:
-Notice that there are two useful invariants.
-Each edge is shared by exactly $2$ corners.
-Each corner is connected to $N$ other corners.
-Then the number of edges of a cube in $N$ dimensions is $2^N \cdot N \cdot \frac{1}{2} = 2^{N-1}\cdot N$.
-Therefore, the number of edges is $2^{4-1} \cdot 4 = 32$.
-Question 3: How many 3D faces does a cube have in 4 dimensions?
-Solution: Let's first consider the question, "How many 2D faces does a cube have in 3 dimensions?" The answer is $3 * 2 = 6$: $3$ for the 3 dimensions and $2$ for the positive and negative directions.
-Now, by similar reasoning, the number of 3D faces is $4*2=8$.
-Question 4: How many 2D faces does a cube have in 4 dimensions?
-Solution:
-Notice that there are two useful invariants.
-Starting from a corner, the number of 2D faces spanned by pairs of edges at that corner is $C_{N}^{2}$.
-Each face is shared by $4$ corners.
-Then the number of 2D faces of a cube in $N$ dimensions is $C_{N}^{2} \cdot 2^{N} \cdot \frac{1}{4} = C_{N}^{2} \cdot 2^{N-2}$.
-Therefore, the number of 2D faces of a cube in 4 dimensions is $6 \cdot 16/4=24$.<|endoftext|>
-TITLE: Universal Chord Theorem
-QUESTION [49 upvotes]: Let $f \in C[0,1]$ and $f(0)=f(1)$.
-How do we prove $\exists a \in [0,1/2]$ such that $f(a)=f(a+1/2)$?
-In fact, for every positive integer $n$, there is some $a$, such that $f(a) = f(a+\frac{1}{n})$.
-For any other non-zero real $r$ (i.e. not of the form $\frac{1}{n}$), there is a continuous function $f \in C[0,1]$, such that $f(0) = f(1)$ and $f(a) \neq f(a+r)$ for any $a$.
-This is called the Universal Chord Theorem and is due to Paul Lévy.
-Note: the accepted answer answers only the first question, so please read the other answers too, and also this answer by Arturo to a different question: https://math.stackexchange.com/a/113471/1102
-This is being repurposed in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions.
-and here: List of abstract duplicates.
-REPLY [5 votes]: I first encountered this result in the book Van Rooij, Schikhof: A Second Course on Real Functions.
-I will copy here the text of Exercise 9.P.
-Let $0<\alpha<1$. Let $f\colon[0,1]\to\mathbb R$, $f(0)=f(1)$.
-(i) Show that if $\alpha$ is one of the numbers $\frac12,\frac13,\frac14,\dots$ and if $f$ is continuous, then the graph of $f$ has a horizontal chord of length $\alpha$, i.e., there exist $s,t\in[0,1]$ with $f(s)=f(t)$ and $|s-t|=\alpha$.
-(ii) The proof you gave probably relies on Darboux continuity. Prove, however, that the given continuity condition on $f$ may not be weakened to Darboux continuity. (Take $\alpha:=\frac12$ and start with a function on $(0,\frac12]$ that maps every subinterval of $(0,\frac12]$ onto $\mathbb R$.)
-(iii) Now let $\alpha\notin\{\frac12,\frac13,\dots\}$. Define a continuous function on $[0,1]$ with $f(0)=f(1)$ whose graph has no horizontal chord of length $\alpha$. (Choose $f$ such that $f(x+\alpha)=f(x)+1$ for $x\in[0,1-\alpha]$.)
-For this question only the first and the third parts are relevant. And the first part has already been solved in other answers.
-Let us spell out in detail the construction following the hint from the third part.
-(Although the hint given there already gives quite a good idea of how to proceed.) This is slightly different from the examples given in other answers.
-We want to have $f(x+\alpha)=f(x)+1$. Notice that this also implies $f(x+k\alpha)=f(x)+k$.
-If we define the function on the interval $[0,\alpha]$, then the above condition determines the function $f$ uniquely on the rest of the interval $[0,1]$.
-We want to have $f(0)=0$ and $f(1)=0$.
-Let us denote $n=\left\lfloor\frac1\alpha\right\rfloor$, i.e., $n$ is the largest integer such that $n\alpha<1$.
-Then we have $1-n\alpha\in(0,\alpha)$. We need to choose $f(1-n\alpha)=-n$ in order to get $f(1)=0$.
-(Notice that this cannot be done if $n\alpha=1$. It is possible only if $0<1-n\alpha<\alpha$, since the values $f(0)$ and $f(\alpha)$ are already prescribed.)
-Then any continuous function defined as above (i.e., with the prescribed values at the points $0$, $1-n\alpha$, $\alpha$, and extended from $[0,\alpha]$ to the whole interval using $f(x+\alpha)=f(x)+1$) satisfies the required conditions.
-Such a function, for a specific choice of $\alpha$, is illustrated in this picture:
-For comparison, a plot of Lévy's function mentioned in Aryabhata's answer, for the same value of $\alpha$, can be checked on WolframAlpha.<|endoftext|>
-TITLE: Minkowski's inequality
-QUESTION [6 upvotes]: Minkowski's inequality says the following: For every sequence of scalars $a = (a_i)$ and $b = (b_i)$, and for $1 \leq p \leq \infty$ we have: $||a+b||_{p} \leq ||a||_{p}+ ||b||_{p}$. Note that $||x||_{p} = \left(\smash{\sum\limits_{i=1}^{\infty}} |x_i|^{p}\right)^{1/p}$. This is how I tried proving it:
-\begin{align*}
-||a+b||_p^{p} &= \sum |a_k+b_k|^{p}\\
-&\leq \sum(|a_k|+|b_k|)^{p}\\
-&= \sum(|a_k|+|b_k|)^{p-1}|a_k|+ \sum(|a_k|+|b_k|)^{p-1}|b_k|.
-\end{align*}
-From here, how would you proceed? I know that you need to use Hölder's inequality. So maybe we can bound both the sums on the RHS since they are products.
-REPLY [6 votes]: Hölder's inequality would say that
-$$\sum |x_ky_k| \leq \left(\sum |x_k|^r\right)^{1/r}\left(\sum|y_k|^s\right)^{1/s}$$
-where $\frac{1}{r}+\frac{1}{s}=1$.
-Apply Hölder's twice, once to each sum, using $x_k = a_k$, $y_k = (|a_k|+|b_k|)^{p-1}$ in one, and similarly in the other, with $r=p$ and $\frac{1}{s}=1-\frac{1}{p}$.<|endoftext|>
-TITLE: Norms on C[0, 1] inducing the same topology as the sup norm
-QUESTION [28 upvotes]: This is an old homework problem of mine that I was never able to solve. The solution may or may not involve the Baire category theorem, which I am terrible at applying.
-Let $C[0, 1]$ denote the space of continuous functions $[0, 1] \to \mathbb{R}$. Suppose $|| \cdot ||$ is a norm on $C[0, 1]$ with respect to which the evaluation functions $f \mapsto f(x), x \in [0, 1]$ are continuous. Show that the topology induced by $|| \cdot ||$ is the same as the usual topology induced by the sup norm.
-It is straightforward to show that any sequence converging uniformly converges with respect to $|| \cdot ||$. I am stuck on proving the converse; I cannot seem to figure out how to use the assumption that $|| \cdot ||$ is a norm.
-Edit: Zhen Lin informs me that $|| \cdot ||$ is supposed to be complete. That should make the problem statement true now!
-REPLY [25 votes]: Here's $\DeclareMathOperator{\ev}{ev}$ the answer if $(C[0,1], \|\cdot\|)$ is assumed to be complete. Consider the family $\mathcal{F} = \{\ev_{x}\}_{x \in [0,1]}$ of continuous linear functionals on $(C[0,1],\|\cdot\|)$.
-For each $f \in C[0,1]$ we have $\sup_{x \in [0,1]} |\ev_{x}(f)| \leq \|f\|_{\infty}$, so the family $\mathcal{F}$ is pointwise bounded. By the uniform boundedness principle the family is uniformly bounded, that is to say $\sup_{x \in [0,1]} \|\ev_{x}\| \leq M$ for some constant $M$. On the other hand $|f(x)| = |\ev_{x}(f)| \leq \|\ev_{x}\|\,\|f\| \leq M \|f\|$ gives $\|f\|_{\infty} = \sup_{x \in [0,1]} |f(x)| \leq M\|f\|$, so the identity $(C[0,1], \|\cdot\|) \to (C[0,1],\|\cdot\|_{\infty})$ has norm at most $M$. Since both spaces are complete, we may apply the open mapping theorem in order to conclude that its inverse is also continuous. In other words, the norms $\|\cdot\|$ and $\|\cdot\|_{\infty}$ are equivalent.
-Edit. Here's an example that shows that completeness of the norm is necessary:
-Choose a discontinuous linear functional $\varphi: (C[0,1],\|\cdot\|_{\infty}) \to \mathbb{R}$ and define a norm on $C[0,1]$ by
-\[
-\|f\| = \|f\|_{\infty} + |\varphi(f)|.
-\]
-Since $\|f\|_{\infty} \leq \|f\|$, we have that the identity $(C[0,1],\|\cdot\|) \to (C[0,1],\|\cdot\|_{\infty})$ is continuous. Since the evaluation functionals are continuous with respect to the sup-norm, they are also continuous with respect to the norm $\|\cdot\|$. But as $\varphi$ is discontinuous, there is a sequence $f_{n}$ with $\|f_{n}\|_{\infty} = 1$ and $|\varphi(f_{n})| \to \infty$, hence the norms cannot be equivalent. Of course, $\|\cdot\|$ cannot be complete because the last sentence would be in contradiction to the open mapping theorem.
-Edit 2.
-I forgot to argue why the topologies in the above counterexample are not the same. This is obvious: The functional $\varphi$ is continuous with respect to $\|\cdot\|$ but it isn't continuous with respect to $\|\cdot\|_{\infty}$.<|endoftext|>
-TITLE: Geometric / Visual explanation that the average height of a random binary tree of given size $n$ is asymptotically $2\sqrt{\pi n}$
-QUESTION [5 upvotes]: I just finished reading the proof that the average height of a random binary tree of given size $n$ is asymptotically $2\sqrt{\pi n}$.
-I'm now searching for an intuitive, or geometric, or visual proof of this asymptotic equivalence. In other words, I'm trying to find an intuitive argument that shows that this number grows like $\sqrt{n}$.
-Any ideas?
-Thanks!
-REPLY [7 votes]: Here's one way to at least guess the growth rate. A random binary tree of size $n$ is the same thing (plus or minus one) as a random walk on the non-negative integers of length $n$ starting and ending at $0$, with the height of the tree corresponding to the farthest the walk goes from $0$. The order of magnitude of the height should behave like the order of magnitude of the related problem of looking at the ending position of a random walk on the integers of length $n$ starting at $0$. (I don't know how to justify this rigorously but it seems pretty plausible to me.)
-But this question is much easier, since now it's a sum of independent Bernoulli random variables. The expected end position is $0$, but it is straightforward to calculate that the variance in the end position is $n$, so the standard deviation is $\sqrt{n}$. Of course, since we're dealing with a sum of independent Bernoulli random variables, we can be much more precise because the central limit theorem applies. The point is that this probabilistic reasoning tells us what the right growth rate to expect is: it's also the same rate that crops up in Brownian motion.<|endoftext|>
-TITLE: Traditional axes in 3d Mathematica plots?
-QUESTION [19 upvotes]: Is there any way to tell Mathematica 7 to use "traditional" axes rather than boxing a three-dimensional graph? That is, rather than the default view produced by -Plot3D[Exp[-x^2 - y^2], {x, -2, 2}, {y, -2, 2},Boxed->False], - -I would like three "axis arrows" to emanate from the origin. - -REPLY [12 votes]: In the end, I ended up writing my own arrow routine, which produces scalable arrowheads and scalable labels: - -axes[x_, y_, z_, f_, a_] := - Graphics3D[ - Join[{Arrowheads[a]}, - Arrow[{{0, 0, 0}, #}] & /@ {{x, 0, 0}, {0, y, 0}, {0, 0, - z}}, { - Text[Style["x", FontSize -> Scaled[f]], {0.9*x, 0.1*y, 0.1*z}], - Text[Style["y", FontSize -> Scaled[f]], {0.1 x, 0.9*y, 0.1*z}], - Text[Style["z", FontSize -> Scaled[f]], {0.1*x, 0.1*y, 0.9*z}]}]] - -The arguments are the x, y, and z positions of the x, y, and z arrows, respectively, f is the font scale (try about 0.05), and a is the arrowhead scale (about 0.05 should do it). This is combined with ordinary 3D graphics using Show[], as in - -Show[Plot3D[Exp[-x^2 - y^2], {x, -2, 2}, {y, -2, 2}, Boxed -> False, - PlotStyle -> Opacity[0.7], Mesh -> 4, Axes -> None], - axes[2.5, 2.5, 1.5, 0.05, 0.02], - PlotRange -> {{-3, 3}, {-3, 3}, {0, 1.5}}] - -The resulting plot is<|endoftext|> -TITLE: Generalizations of the number theory concepts of "even" and "odd"? -QUESTION [11 upvotes]: One of the very first number theory concepts introduced to students -- even before primeness, divisibility, etc. -- is the idea that a natural number can either be "even" (that is, evenly divisible by 2) or "odd" (all other numbers). For all intents and purposes, at that time, even and odd numbers were evenly distributed and of the same density in the natural numbers. -There always seemed to be something inherently "special" about a number's evenness or oddness, besides the trivial one that members of one class could be divided equally into two groups and the other cannot. One would memorize addition and multiplication tables of evenness and oddness (an odd number will remain odd when added to ANY even number). You could not iterate through all of the even numbers through multiplication alone, while you could for the odds. -I've heard the chess board being described as possessing evenness and oddness. For example, dark squares and light squares can represent either even or odd. A diagonal move can be an "even" move and a side-to-side move is an "odd" one, and moves are represented as additions to a square. -In this way, a knight's move is an "odd" move (odd+odd+even), and when added to an odd square will yield an even square; when added to an even square will yield an odd square (odd + odd = even, odd + even = odd) -A bishop's move can be considered always even, so once a bishop is on an odd square, it can only ever move onto other odd squares. Likewise for bishops on even squares. -Are there any more generalizations of this concept to math? Is it meaningful to talk of even or odd matrices, or even or odd vectors or vector spaces? -I've heard the concept applied to functions (even or odd functions), but I don't know if they are related to this by anything other than their name. - -REPLY [16 votes]: Yes. The generalization is provided by modular arithmetic. The properties you are observing all come from the fact that taking the remainder modulo $n$ respects addition and multiplication, and this generalizes to any $n$. More generally in abstract algebra we study rings and their ideals for the same reasons. 
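-(For instance, the even/odd addition and multiplication tables one memorizes are exactly the tables of arithmetic modulo $2$, and the same recipe works for any modulus; a small illustrative Python snippet:)
-
-n = 2  # try 3, 4, ... for the generalizations
-residues = range(n)
-print([[(a + b) % n for b in residues] for a in residues])  # addition table
-print([[(a * b) % n for b in residues] for a in residues])  # multiplication table
-# for n = 2 this prints [[0, 1], [1, 0]] and [[0, 0], [0, 1]]:
-# with 0 = "even" and 1 = "odd", even + odd = odd, odd * odd = odd, etc.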
-The notion of evenness and oddness of functions is closely related, but it is somewhat hard to explain exactly why. The key point is that there is a certain group, the cyclic group $C_2$ of order $2$, which is behind both concepts. For now, note that the product of two even functions is even, the product of an even and an odd function is odd, and the product of two odd functions is even, so even and odd functions under multiplication behave exactly the same way as even and odd numbers under addition.
-There are also huge generalizations depending on exactly what you're looking at, so it's hard to give a complete list here. You mentioned chessboards; there is a more general construction here, but it is somewhat hard to explain and there are no good elementary references that I know of. Once you learn some modular arithmetic, here is the modular arithmetic explanation of the chessboard idea: you can assign integer coordinates $(x, y)$ to each square (for example the coordinate of the lower left corner), and then you partition them into black or white squares depending on whether $x + y$ is even or odd; that is, depending on the value of $x + y \bmod 2$. Then given two points $(a, b)$ and $(c, d)$ you can consider the difference $c + d - a - b \bmod 2$, and constraints on this difference translate to constraints on the movement of certain pieces. This idea can be used, for example, to prove that certain chessboards (with pieces cut out of them) cannot be tiled with $1 \times 2$ or $2 \times 1$ tiles because these tiles must cover both a white square and a black square. Of course there are generalizations with $2$ replaced by a larger modulus and larger tiles.
-As for matrices and vectors, let's just say that there are a lot of things this could mean, and none of them are straightforward generalizations of the above concept.
-REPLY [8 votes]: One example that immediately comes to mind is permutations.
-There is a concept of the parity of a permutation, which corresponds to the parity of the number of swaps needed to get it back to the original position. This same concept is sometimes discussed in terms of the "sign" of a permutation, which is either 1 or -1, depending on whether the permutation is even or odd.
-Permutations are related to matrices, as they show up in the definition of the determinant; in fact, the sign of the permutation is used there, unlike in the definition of the permanent.
-Some configurations of the famous fifteen puzzle are shown to be unsolvable by considering the parity of the permutations involved.
-Look at this page: Parity of Permutation, which also talks about generalizations of this concept here: Generalizations to Coxeter Groups.<|endoftext|>
-TITLE: Do values attached to integers have implicit parentheses?
-QUESTION [24 upvotes]: Given $5x/30x^2$ I was wondering which is the correct equivalent form.
-According to BEDMAS this expression is equivalent to
-$5*\cfrac{x}{30}*x^2$
-but, intuitively, I believe that it could also look like:
-$\cfrac{5x}{30x^2}$
-I asked this question on MathOverflow (which was "Off-topic" and closed) and was told it was ambiguous. I was wondering what the convention was or if such a convention exists. According to Wikipedia the order of operations can be different based on the mnemonic used.
-REPLY [3 votes]: In-line math expressions can be ambiguous when working with implicit multiplication.
-The "ambiguous" answer that MathOverflow gave you is the best one.
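-(One way to see the left-to-right convention concretely: in a programming language every multiplication must be written out, so there is nothing left to interpretation. A quick illustrative Python check, with an arbitrary nonzero value for x:)
-
-x = 3.0
-print(5 * x / 30 * x**2)    # left to right: ((5*x)/30)*x**2 = x**3/6 -> 4.5
-print(5 * x / (30 * x**2))  # the other reading needs explicit parentheses -> 1/(6*x) ~ 0.0556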
-If you asked that of Wolfram Alpha, the result would have been $$5\times x/30\times x^{2} = x^{3}/6$$
-But in the note http://mathworld.wolfram.com/Solidus.html , Wolfram indicates that with a/bc, textbooks often mean a/(bc), but that they will interpret it as (a/b)*c
-[That note seems not to be known by lots of Wolfram Alpha users...]
-[The same can be said for each calculator: you have to check how they will interpret it by typing 6/2(3) and checking if the answer is 1 or 9]
-Historically, you can read this document from 1917:
-http://www.jstor.org/stable/2972726
-For the author, the usage is that 1/u*v = (1/u)v but 1/uv = 1/(uv)
-And he indicates that the way the rule is written (or rewritten by some authors) leads to confusion.
-The original author 'Chrystal', who wrote the 'base' of the current order of operations, never wrote something like a/bc or a/b*c but always a/(bc) or (a/b)c [with parentheses!!]
-So from both an historical document and a 'modern' calculating tool, you have the information that the confusion/ambiguity is known:
-With implicit multiplication, some see a grouping intention, and some do not:
-'ab = (a × b)' [then check if parentheses can be removed] or only 'ab = a × b'
-So the best way to prevent confusion is to follow the 'rule' indicated by Wolfram Alpha:
-"Parentheses should always be used when delineating compound denominators"
-So when you have an expression like a/bc, it is up to you to consider how you will work with the implicit multiplication, and rewrite it accordingly: (a/b)c or a/(bc), to get the answer/result you want.
-Or from your expression:
-5x/30x² = (5x/30)*x² = ... = x^3/6
-or
-5x/30x² = 5x/(30x²) = ... = 1/(6x)<|endoftext|>
-TITLE: Group representations over p-adic vector spaces
-QUESTION [8 upvotes]: Recently I have found a need to learn more about p-adic group representations over a p-adic vector space. Generally, this motivates a study of representations $\left( V, \rho \right)$
-for some group $G$ where $V$ is a vector space over $\mathbb{Q}_p$. Since I'm only familiar
-with the theory for representations acting on a complex vector space, I was hoping for references where I could find more information.
-This may seem like a question suited more for Google; however, I don't seem to have enough knowledge to prompt Google in the correct direction.
-
-REPLY [19 votes]: There are a few basic situations to consider that I know about.
-The first case is when the group $G$ is compact. Then if we are given a
-continuous rep. $\rho: G \to GL_n(\mathbb Q_p)$, the image is compact,
-so lies in a maximal compact subgroup of $GL_n(\mathbb Q_p)$. Any such
-is conjugate to $GL_n(\mathbb Z_p)$, so after changing basis, we may
-assume that $\rho: G \to GL_n(\mathbb Z_p)$. (Another way to phrase this
-is that $G$ must leave some $\mathbb Z_p$-lattice invariant, and there are
-lots of ways to prove this, without having to mention the concept of "maximal
-compact", if that makes you at all nervous.)
-At this level of generality, there is not that much more to say. But there
-are various subcases of interest in which one can say more.
-E.g. if $G$ is a Galois group, then there is an enormous literature about $p$-adic Galois representations of Galois groups. If this is the case you are interested in, you might want to ask another more specific question about it.
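-(A quick sketch of one way to prove the lattice claim above, under the same hypotheses: let $L_0 = \mathbb Z_p^n$ be the standard lattice. Since $GL_n(\mathbb Z_p)$ is open in $GL_n(\mathbb Q_p)$ and $\rho$ is continuous, $H = \rho^{-1}(GL_n(\mathbb Z_p))$ is an open subgroup of the compact group $G$, hence of finite index. If $g_1, \ldots, g_r$ are coset representatives, then
-$$L = \rho(g_1)L_0 + \cdots + \rho(g_r)L_0$$
-is a finite sum of lattices, hence a lattice, and it is $G$-stable by construction.)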
-There are two other basic subcases, but before introducing them, I have
-to mention
-a basic fact about $GL_n(\mathbb Z_p)$, namely that there is a quotient map
-$GL_n(\mathbb Z_p) \to GL_n(\mathbb F_p)$, whose kernel, which I'll denote
-by $K(1)$, is easily seen to be
-a pro-p-group (i.e. a projective limit of $p$-groups). (The reason for the
-$(1)$ is that we could also consider $K(n)$, the kernel of the "reduction modulo
-$p^n$" map, for each $n\geq 1$.)
-Here now are two other interesting subcases of the compact case.
-The first is when the group $G$ is profinite, and is virtually pro-prime-to-$p$, i.e. contains an open subgroup that is the projective limit of finite groups of order prime to $p$.
-Examples of such $G$ are $GL_n(\mathbb Z_{\ell})$.
-In this case, let $H$ be the open subgroup that is pro-prime-to-$p$.
-Since $K(1)$ is pro-$p$, we see that $\rho(H)$ and $K(1)$ must have trivial
-intersection, and so $\rho(H)$ injects into $GL_n(\mathbb F_p)$.
-Thus $\rho(H)$ is finite, and hence $\rho(G)$ is finite too. Thus
-in this case, the continuous $\rho$s all factor through some finite
-quotient of $G$, and we reduce to standard representation theory (i.e. rep'n theory of finite groups over a field of char. $0$).
-The second, and more interesting, case is when $G$ is virtually pro-$p$, i.e. contains
-an open subgroup $H$ which is pro-$p$. (E.g. if $G$ is itself $GL_n(\mathbb Z_p)$, or a closed subgroup thereof.) In this case the theory is more genuinely $p$-adic, i.e. it doesn't just reduce to the classical rep'n theory of finite
-groups. A good place to learn about some aspects of this is Lazard's opus Groupes analytiques $p$-adiques, in Publ. Math. IHES vol. 26 (1965), where
-among other things he studies the continuous cohomology of such representations
-(when the group $G$ is $p$-adic analytic, e.g. a matrix group), explains the relationship to Lie theory and Lie algebra cohomology, and proves various Poincare duality-type results for the cohomology.
-More recent discussions of Lazard's work and related ideas can be found in
-some of the literature surrounding non-abelian Iwasawa theory, e.g. Venjakob's article On the structure theory of the Iwasawa algebra of a $p$-adic Lie group in J. Eur. Math. Soc. vol. 4 (2002).
-Finally, let me mention that
-if $G$ is not compact, but is just locally compact, e.g. $GL_n(\mathbb Q_{\ell})$, then there usually won't be many interesting finite-dimensional representations in which $G$ preserves a lattice, and so it is natural to consider $p$-adic Banach space representations in which $G$ preserves a lattice
-instead.
-If $G$ contains a profinite open subgroup that is pro-prime-to-$p$,
-then this theory is not so novel — one can see for example Vigneras's article Banach $\ell$-adic representations of $p$-adic groups in Astérisque vol. 330 (2010).
-On the other hand, if $G$ contains a pro-$p$ open subgroup, then the theory is much more involved, and is the subject of a lot of recent work, especially by people thinking about $p$-adic Langlands. You can see some of the papers of Schneider and Teitelbaum, and of Breuil and Colmez, as well as some of the papers on my web-page. (I'm Emerton at Chicago.)<|endoftext|>
-TITLE: Every regular, T_1 space is a Urysohn space
-QUESTION [6 upvotes]: Definition:
-A space $X$ is a Urysohn space iff whenever $x \neq y$ in $X$ there are nhoods $U$ of $x$ and $V$ of $y$ such that $\overline{U} \cap \overline{V} = \emptyset$.
-I want to show that every regular, T_1 space is Urysohn.
-My attempt:
-Let $x,y \in X$ be distinct points. Now consider the open set $X \setminus \{y\}$. By regularity we can find an open set $U$ such that $x \in U \subseteq \overline{U} \subseteq X \setminus \{y\}$. Now similarly by considering the open set $X \setminus \{x\}$ and using regularity again we can find an open set $V$ such that $y \in V \subseteq \overline{V} \subseteq X \setminus \{x\}$.
-Now from here I don't think we can conclude that $U$ and $V$ have disjoint closures. Can we? I'm stuck here; can you please help?
-
-REPLY [4 votes]: Suppose $X$ is $T_1$ and regular.
-Take two different points $x,y\in X$. Consider, as you did, $X\setminus\{y\}$ as an open neighbourhood of $x$; by regularity there exists $U$ with $x\in U\subseteq\bar{U}\subseteq X\setminus\{y\}$.
-Consider now, instead of your choice, $X\setminus\bar{U}$. It is an open neighbourhood of $y$ and from regularity you have $y\in V\subseteq\bar{V}\subseteq X\setminus\bar{U}$.
-$\bar{U}\cap\bar{V}=\emptyset$ as needed.<|endoftext|>
-TITLE: Expressing a root of a polynomial as a rational function of another root
-QUESTION [26 upvotes]: Is there an easy way to tell how many roots $f(x)$ has in $\Bbb{Q}[x]/(f)$ given the coefficients of the polynomial $f$ in $\Bbb{Q}[x]$?
-Is there an easy way to find the roots as rational expressions in $x$?
-
-The easiest example is a pure quadratic: $X^2 + 7$ for instance. If $A$ is a root, then so is $-A$. Good ole $\pm\sqrt{-7}$.
-If the Galois group is abelian (like for any quadratic), then all of the roots can be expressed as polynomials in a given root. However, I am not sure how to tell by looking at the polynomial if its Galois group is abelian, and even if it is, I am not sure how to find those rational expressions for the other roots.
-It might help to see some non-Abelian (non-Galois) examples:
-If $A$ is a root of $X^6 + 2X^4 - 8$, then $-A$ is also a root, but its other $4$ roots cannot be expressed as rational functions of $A$ (assuming I still understand Galois theory).
-
-Is there some easy way (not asking a CAS to calculate the Galois group) to see the other $4$ roots of $X^6 + 2X^4 - 8$ cannot be expressed as rational functions of $A$?
-
-This one had the nice feature that it was a function of $X^2$, so it was easy to find two roots. For $X^6 - 2X^5 + 3X^3 - 2X - 1$, I still have not found its other root (even using a CAS).
-
-If $A$ is a root of $X^6 - 2X^5 + 3X^3 - 2X - 1$, then what is a rational expression in $A$ for another root?
-
-
-This all first came up with the polynomial $x^4-4x^2+2$, where several distinct ad hoc arguments each sufficed, but I had no real understanding of how to even tell if my ad hoc arguments were worth trying on other polynomials. If it helps, the roots are $A$, $-A$, $A^3-3A$, and $3A-A^3$.
-The context is hand calculations and reasonable degrees (say $\leq 10$), though I am not opposed to having a polynomial evaluation oracle that computes $f(g(x)) \mod f(x)$ in $1$ second (so "try this finite and not too big list of possible roots" is ok).
-
-If someone is interested, I am curious what the normalizer of a point stabilizer in the Galois group actually means in terms of Galois theory. The index of the point stabilizer in its normalizer is the number of roots of $f$ in $\Bbb{Q}[x]/(f)$, but I'm not sure if it really means anything useful.
-
-REPLY [3 votes]: I haven't found a stellar way to do this by hand, but it is now easy to do with Pari/GP. The basic idea is that you just factor f over Q[x]/(f).
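-For concreteness, here is a small self-contained sketch (pure Python, ad hoc names) of the mod-p trick spelled out in the next paragraph, run on the example $x^4 - 4x^2 + 2$ from the question:
-
-def mul_mod(a, b, f, p=None):
-    # multiply the polynomials a*b, then reduce mod the monic polynomial f
-    # (and mod p if given); polynomials are coefficient lists, constant first
-    res = [0] * (len(a) + len(b) - 1)
-    for i, ai in enumerate(a):
-        for j, bj in enumerate(b):
-            res[i + j] += ai * bj
-    n = len(f) - 1
-    for i in range(len(res) - 1, n - 1, -1):  # kill terms of degree >= deg f
-        c = res[i]
-        for j in range(n + 1):
-            res[i - n + j] -= c * f[j]
-    res = res[:n] + [0] * (n - len(res[:n]))  # pad to fixed length
-    return [c % p for c in res] if p else res
-
-def pow_mod(base, e, f, p):
-    # square-and-multiply exponentiation mod (f, p)
-    result = [1]
-    while e:
-        if e & 1:
-            result = mul_mod(result, base, f, p)
-        base = mul_mod(base, base, f, p)
-        e >>= 1
-    return result
-
-f = [2, 0, -4, 0, 1]          # x^4 - 4x^2 + 2, constant term first
-p = 11                        # f stays irreducible mod 11
-g = pow_mod([0, 1], p, f, p)  # x^11 mod (f, 11) is another root mod p
-g = [c if c <= p // 2 else c - p for c in g]  # lift to least absolute value
-print(g)                      # [0, 3, 0, -1], i.e. the root 3x - x^3
-
-check = [0] * (len(f) - 1)    # verify f(g(x)) = 0 mod f(x) over the integers
-for c in reversed(f):         # Horner's rule
-    check = mul_mod(check, g, f)
-    check[0] += c
-print(all(c == 0 for c in check))  # True, so g really is a root in Q[x]/(f)
-
-The final exact check over the integers is what makes the mod-p guess trustworthy.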
-Often this is easy to do: find some prime p such that {x,x^p,x^(p^2),…} has exactly deg(f) distinct residues mod (f,p). Choose p larger than twice the largest coefficient appearing in the (unknown) answer. Replace a mod p with the integer of smallest absolute value congruent to a mod p for each of the coefficients of x^(p^i) mod (f,p). Check that the formula works. I had to take p=31 in the particular 6th-degree case, so this is not exactly great for by-hand exams.
-There are more refined versions of factoring using Hensel lifting or combining several primes, both of which allow smaller primes to be used (and for it to work in general). The details of one (or two) algorithms are in section 3.6.2 of Henri Cohen's textbook A Course in Computational Algebraic Number Theory, and some others are also in the source code to Pari/GP (with references).<|endoftext|>
-TITLE: Seminorms and norms
-QUESTION [5 upvotes]: Suppose we have the following lemma:
-Lemma If $E_0 \hookrightarrow E$, and $E_0$ is a closed subspace then $E/E_0$ is a normed space and for $[x] \in E/E_0$ its norm is given by $||[x]|| = \text{inf}_{y \in E_0} ||x-y||$.
-What is the intuition behind this norm besides that it "works"? Does it just measure how close $x$ and $y$ are to being equivalent? Also, does one usually try to introduce some norm on a space $E$ but find out that it doesn't work? Thus one introduces it as a seminorm and then as a norm on $E/E_0$? In other words, is a seminorm on $E$ just a means to get a norm on $E/E_0$? Why can't one just introduce a norm on $E/E_0$ directly?
-
-REPLY [13 votes]: Your question seems to be superimposing several concepts/issues at once, and perhaps it will help to separate them.
-First, if $E$ is a semi-normed space and $E_0$ is a subspace then we define
-the semi-norm on $E/E_0$ according to the formula you wrote down. If you look at what's written, the coset $[x]$ is fixed, while $y$ is varying, so there is no particular $y$ of which it makes sense to ask "how close $x$ and $y$ are to being equivalent". Rather, you are measuring the distance of the point $x$ from the subspace $E_0$. Note that this depends only on $[x]$ (i.e. if we translate $x$ by an element of $E_0$, the distance to $E_0$ doesn't change). This is the most natural semi-norm to place on the quotient $E/E_0$: the distance of the coset $[x]$ from the $0$ in $E/E_0$ is measured by considering the distance of the representative $x$ from the $0$ coset thought of as a subset of $E$, i.e. the distance of $x$ from $E_0$. If $E_0$ is in fact closed in $E$ (with respect to the topology defined by the semi-norm on $E$), then one sees that the semi-norm on $E/E_0$ is actually a norm (and conversely).
-Your second question (which got tangled up with your first, but is really a separate question, I think) is why do we sometimes consider semi-norms rather than norms. There are some situations in which $E$ naturally comes with a seminorm which is not a norm. We can then consider the subspace $E_0$ consisting of all elements of semi-norm equal to $0$, which will be a closed subspace (in the topology defined by the semi-norm). (In fact, $E_0$ is precisely the closure of the point $0$.) Now we can apply the above discussion to this particular choice of $E_0$, and so we get a norm on the quotient
-$E/E_0$.
-This procedure is applied for example to construct the $L^p$ spaces in measure theory.
If we define $E$ to be the space of all measurable real-valued functions on some measure space $X$ for which $\int |f|^p d\mu$ is finite (for some $p \geq 1$), then
-$(\int |f|^p d\mu)^{1/p}$ is a semi-norm on $E$. One finds that $E_0$ is precisely the subspace of $E$ consisting of functions which vanish a.e., and
-$L^p(X)$ is defined to be the quotient $E/E_0$.
-So the reason for not introducing $E/E_0$ directly in this case is that one first has to talk about functions, and then the elements of $E/E_0$ are certain equivalence classes of functions which it's not really possible to introduce directly, without first going via the functions themselves and then introducing the equivalence relation. And when we write down the "$L^p$-norm" on functions, which is what we naturally can write down, it turns out not to be a norm, but just a semi-norm. Passing to $E/E_0$ then gives us an actual normed space, rather than just a semi-normed one.
-As for why we pass from $E$ to $E/E_0$ at all, this is mainly convenience. From the point of view of analysis, there is essentially no difference between working in $E$ and in $E/E_0$; if any two functions in $E$ lie in the same $E_0$-coset (i.e. they coincide a.e.) then you can't really tell the difference between them analytically. So it's just convenient to identify them, and to pass to their common coset in $E/E_0$. (A general principle is that it's simpler to work in Hausdorff spaces, like $E/E_0$, than in non-Hausdorff ones, like $E$ itself, and so when nothing is lost by passing from $E$ to $E/E_0$, it's easiest just to do so.)
-Finally, are seminorms purely an intermediate step on the way to norms, as in the example of $L^p$-spaces? The answer is no. The reason is that in
-some situations one has whole families of semi-norms on a space (which are not necessarily themselves norms), which define an interesting or important topology.
-E.g. consider the space $E$ of all continuous functions on $\mathbb R^n$. This doesn't have a natural norm: the obvious norm to consider on continuous functions is the sup norm, but since $\mathbb R^n$ is not compact, a continuous function need not be bounded, and so the sup norm is not defined on arbitrary continuous functions.
-But, if $K$ is a compact subset of $\mathbb R^n$, then we can define a semi-norm
-$|| \quad ||_K$ on $E$ as $|| f ||_K =$ the sup of $|f|$ on $K$. Note that this is not a norm: there can be continuous functions on $\mathbb R^n$ which are not identically zero, but vanish at every point of the particular compact set $K$.
-But we can use this whole family of semi-norms to define a topology: we define a base of open sets by defining, for each $K$ and each $\epsilon >0$, a n.h.
-$B_{K,\epsilon}(f)$ of the function $f$ to be the set of $g$ such that
-$|| f - g||_K < \epsilon.$ This does form a basis of neighbourhoods of $f$,
-because given $K,\epsilon$ and $K',\epsilon',$ if we choose $K'' = K \cup K'$
-and $\epsilon''$ to be the min of $\epsilon$ and $\epsilon'$,
-then $$B_{K'',\epsilon''}(f) \subset B_{K,\epsilon}(f) \cap B_{K',\epsilon'}(f).$$
-This is a Hausdorff topology on $E$ (basically because a non-zero function
-must be non-zero when restricted to some compact set, so not all these semi-norms can vanish on a non-zero function), making $E$ a so-called Frechet space.
-So here the semi-norms are appearing in their own right, not just as an intermediate step to getting a normed space.
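-(A standard side remark: countably many of these semi-norms already suffice, say $|| \cdot ||_{K_m}$ for the closed balls $K_m$ of radius $m$ centered at the origin, since every compact $K \subset \mathbb R^n$ is contained in some $K_m$. One can then check that
-$$d(f,g) = \sum_{m=1}^{\infty} 2^{-m} \, \frac{|| f - g ||_{K_m}}{1 + || f - g ||_{K_m}}$$
-is a translation-invariant metric defining the same topology, which is one way to see that this Frechet space is metrizable even though its topology is not given by any single norm.)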
-So the passage from semi-norms to norms that you ask about is not really about getting rid of semi-norms and replacing them by norms at all: it is rather about getting rid of non-Hausdorff spaces and replacing them by Hausdorff ones. It just happens that if your topology is defined by a single semi-norm that is not a norm, then it will be non-Hausdorff, and when you make it Hausdorff (by taking the quotient by the closure of $0$) the resulting space has its topology defined by a norm. But in general, there are lots of Hausdorff spaces whose topology is defined not by a norm, but rather by a family of semi-norms. (In applications, these spaces appear most often in the foundational aspects of distribution theory.)<|endoftext|>
-TITLE: Could somebody elaborate "dimensional space" and "hyperplane"?
-QUESTION [12 upvotes]: I am reading a text related to SVM, and the mathematical language is giving me a bit of a hard time.
-
-Here training vectors xi
- are mapped into a higher (maybe
- infinite) dimensional space by the
- function $\theta$. SVM finds a linear
- separating hyperplane with the maximal
- margin in this higher dimensional
- space.
-
-I do not understand the term "dimensional space" in this case. Drawing on a paper is 2D. We are living in 3D space. In Mathematics, when we say "higher dimensional space", what are we actually implying?
-Another term, "hyperplane", is also giving me a bit of a hard time to understand. Is it simply just a 2D plane? I try to search for its definition, and most of the time, I get a term that leads to many more terms (and more confusion). Frankly, mathematical language is difficult for me.
-Could somebody simplify and relate "dimension" and "hyperplane" in the text above in an easier way to understand?
-Thank you very much.
-
-REPLY [5 votes]: Karl, higher dimensions are defined so as to be analogous to 2-D and 3-D. This means analytical definitions of orthogonality, distance, and so on must be preserved.
-To "visualise" 4-D and up, you can start by visualisation exercises with 4-cubes or think about arrays in programming. array[i][j][k][l] is a four-dimensional array, for example. If you took all of the possible combinations of 0 and 1 in that array -- for instance [0,1,0,0], [1,1,0,0], [0,1,1,1] and so on -- you would have a data structure that's equivalent to a 4-cube or tesseract. You can generate these in R using the combn() function.
-Here's another example: imagine 4 stock prices progressing in parallel, second by second. Record all the seconds for a single trading day and you have a day-long 4-D vector. That's four-dimensional data. I could have looked at 500 stock prices progressing in parallel and that would be 500-dimensional data.
-The Titanic data set in R is another accessible example of 4-D data (age, sex, class, survival).
-As for hyperplanes, think about this:
-
-a line splits a circle
-a plane splits a sphere
-a hyperplane splits a ... whatever comes next.
-
-Squiggly or blobby higher-dimensional shapes are called "manifolds", by the way.
-Below are examples of functions that map data to higher dimensions.
-
-$f(x,y) = (x, x, x, y, y)$. $f$ is like copying and pasting whole columns in a table or array. $f$ maps 2 dimensions to 5.
-$g(x,y) = (x, \ x+1, \ x^2, \ x \cdot y, \ y^3 + y^5 - y^4 + 55)$. $g$ maps 2 dimensions to 5.
-$h(a, b, c, d, e) = (a + b + c + d + e, \ a \cdot b \cdot c \cdot d \cdot e)$. $h$ maps 5 dimensions to 2.
-Here is another function $j$ that maps 5 dimensions to 2: $j(a, b, c, d, e) = (a, e)$. $j$ is like erasing the middle columns of your data. In math books this may be called projection.
-Here is one, $\theta$, that's used in a common SVM example: separating an inner ring from an outer ring. $\theta(x,y) = (x^2, \ \sqrt{2} \cdot x \cdot y, \ y^2)$ (that's mapping how many to how many dimensions?)
-Here is one more function just to demonstrate that letters, numbers, or Greek letters can be used: $\gamma_{\eta}(a, b, c, d, e) = (0, 5)$. This one has a complicated name but it maps any 5-dimensional input to the same 2-D point, which is a "trivial" thing to do.
-
-As you can see, you just count the commas inside the parentheses to figure out how many dimensions you've got (plus one).
-Hope this helps.<|endoftext|>
-TITLE: Understanding an integral from page 15 of Titchmarsh's book "The theory of the Riemann Zeta function"
-QUESTION [7 upvotes]: In Titchmarsh's book "The theory of the Riemann Zeta function" pg. 15
-where the functional equation of the zeta function is being derived,
-I couldn't understand this part:
-$$\frac{s}{\pi} \sum_{n=1}^{\infty} \frac{(2n\pi)^s}{n} \int_{0}^{\infty} \frac{\sin y}{y^{s+1}} dy = \frac{s}{\pi} (2\pi)^s \{-\Gamma(-s)\}\sin\frac{1}{2}s\pi\zeta(1-s)$$
-I could not digest Titchmarsh's reasoning.
-Can anyone explain this please?
-Thanks,
-
-REPLY [3 votes]: I do not intend to post it, but for those who are interested the proof of the tough part of this question, namely the classic result that
-$$ \Gamma(s) \sin \left( \frac{\pi s}{2} \right) = \int_0^\infty y^{s-1} \sin y \textrm{ d}y $$
-where $-1 < Re(s) < 1 $
-can be found in "Topics in Analytic Number Theory," by Hans Rademacher (in chapter 6).<|endoftext|>
-TITLE: $2\cdot\int_0^\infty \frac{a-u^2}{\left( u^2+\frac{a^2}{b-a}\right) \left(u^2+\frac{b^2}{b-a} \right) \sqrt{\cdots } }\mathrm {d}u $
-QUESTION [5 upvotes]: At the moment I am trying to reproduce the results of a paper.
-There, it turns out that a specific physical problem is mapped onto an integral to be calculated:
-$$I(\Theta; a, b) = 2\cdot\int_0^\infty \frac{a-u^2}{\left( u^2+\frac{a^2}{b-a}\right) \left(u^2+\frac{b^2}{b-a} \right) \sqrt{a - u^2 -\frac{a}{1-a/b}\sin^2(\Theta) } }\mathrm {d}u \equiv \int_0^\infty f(u)du$$
-where I took the liberty of replacing $\epsilon_d\rightarrow a$ and $\epsilon_m \rightarrow -b$ in contrast to the paper, and one can assume that both
-$a,b > 0$ and $b > a$.
-Somehow, Mathematica manages to calculate the numerical values of this integral, with some warning messages due to the pole at
-$$u_p = \sqrt{a -\frac{a}{1-a/b}\sin^2(\Theta)}$$
-Also, Mathematica can calculate the indefinite integral, both for $\Theta = 0$ (which is a special case to compare results) and in the general case. Nevertheless, I am not able to use the result since it is indefinite in all cases for $u\rightarrow \infty$.
-So, I am asking for some advice on how to calculate the integral at hand for given constants $a$ and $b$.
-The special case of $\Theta = 0$ might already be worth a look since it is much easier to calculate than the general case.
-In the meantime I tried something like a Cauchy principal value integration around $u_p$ using the parametrization $u_\delta (\varphi) = u_p -\delta e^{\mathrm{i}\varphi}$ along a half circle $C_\delta$ interpreting $u^2$ as $\bar{u}u$.
Then,
-$$I_\delta = \int_0^\pi f(u_\delta(\varphi))\delta d\varphi$$
-is the integral around $C_\delta$, which turned out to vanish for $\delta\rightarrow 0$, such that the whole integral should be given in terms of a principal value one. However, I am not sure if my result is correct.
-Please, if my question is not stated correctly, or anything is obvious, don't hesitate to give me some advice.
-Thank you in advance.
-Sincerely
-Robert
-
-REPLY [2 votes]: I am answering my own question here since I hope that it can save some time for somebody.
-To compare my results with those of the paper I calculated a complex quantity called the reflection coefficient $r = |r|e^{\mathrm{i}\varphi}$. The absolute value $|r|$ tells you roughly how much of some field gets back-reflected in some domain at the boundary to another one, depending on several parameters. Its phase $\varphi$ is also of great importance since it will be useful for effects like interference for linear phenomena.
-My results were simply different from those presented in the given paper since I did not care for the convention used for the calculation of the phase. Taking the modulus of $-\varphi$ with respect to $\pi$, I was able to reconstruct all given data.
-I apologize for any inconvenience. Furthermore, an analytic calculation of the integral at hand is only feasible for the special case of $\Theta = 0$ (using Mathematica) since the results blow up quickly.
-Greetings
-Robert<|endoftext|>
-TITLE: Proving a fourier series identity
-QUESTION [5 upvotes]: I studied fourier series as an undergrad and grad. student in EE but did not fully grasp the concepts. Now that I am involved in medical imaging (MRI) understanding the basics of fourier series and transforms is very important and I am frustrated at my level of understanding. For example, a problem in a MRI physics book asks:
-Prove the following fourier series identity: $$\sum_{n=-\infty}^\infty \exp(2i\pi n a) = \sum_{m=-\infty}^\infty \delta(a-m)$$.
-The author gives a hint: "consider integrations over small intervals that either include or exclude the region where the argument of one of the delta functions vanishes".
-My approach was to write out the left side of the equation for n = -5 to 5. I end up with the value 11. Something's wrong for sure. Any help is greatly appreciated!
-Thank you
--Dave
-
-REPLY [3 votes]: First off, there's a typo in the left-hand side: the $n$th term of the series is given by $\exp(2\pi i n a)$.
-What you're being asked to prove is equivalent to the Poisson summation formula. Since Akhil Mathew already proved this as an answer, I'll just try to give some intuition.
-You have to be careful when saying $\sum_{n = -\infty}^{\infty}\exp(2\pi i n a) = \lim_{m \rightarrow \infty} \sum_{n = -m}^{m} \exp(2\pi i n a)$ since the partial sums only converge as distributions and the terms don't go to zero for most $a$. Similarly one has the distributional limit
-$$ \sum_{n = -\infty}^{\infty} \exp (2\pi i n a) = \lim_{r \rightarrow 1-} \sum_{n = -\infty}^{\infty} r^{|n|}\exp (2\pi i n a)$$
-The right-hand sum can be written as the sum of two geometric series which can be explicitly summed to $P_r(2\pi a)$, where $P_r(\theta)$ is the famous Poisson kernel (see http://en.wikipedia.org/wiki/Poisson_kernel). The explicit formula is
-$$P_r(2\pi a) = {1 - r^2 \over 1 - 2r\cos(2\pi a) + r^2}$$
-Note that $P_r(2\pi a)$ has period $1$.
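-(For completeness, here are the two geometric series summed out, writing $\theta = 2\pi a$:
-$$\sum_{n = -\infty}^{\infty} r^{|n|} e^{i n \theta} = \sum_{n = 0}^{\infty} (re^{i\theta})^n + \sum_{n = 1}^{\infty} (re^{-i\theta})^n = \frac{1}{1-re^{i\theta}} + \frac{re^{-i\theta}}{1-re^{-i\theta}} = \frac{1 - r^2}{1 - 2r\cos\theta + r^2},$$
-where the last step uses the common denominator $(1-re^{i\theta})(1-re^{-i\theta}) = 1 - 2r\cos\theta + r^2$.)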
It can be shown that for $r$ near $1$, $P_r(2\pi a)$ is the sum of bump functions surrounding each integer, and as $r$ goes to $1$ from below, this sum of bump functions converges (in the sense of distributions) to the sum of delta functions at all integers, which is exactly your right-hand side.
-Incidentally, if you tried to do the analogous thing with the distributional limit $\lim_{m \rightarrow \infty} \sum_{n = -m}^{m} \exp (2\pi i n a)$ in place of the above, you'd end up dealing with the Dirichlet kernel instead of the Poisson kernel. One can still show analogous convergence, but it's trickier. This is related to how showing pointwise convergence of Fourier series tends to be harder than showing Abel convergence.<|endoftext|>
-TITLE: Orthonormal Eigenbasis
-QUESTION [6 upvotes]: I am a little apprehensive to ask this question because I have a feeling it's a "duh" question but I guess that's the beauty of sites like this (anonymity):
-I need to find an orthonormal eigenbasis for the $2 \times 2$ matrix $\left(\begin{array}{cc}1&1\\ -1&1\end{array}\right)$. I calculated that the eigenvalues were $x=0$ and $x=2$ and the corresponding eigenvectors were
-$E(0) = \mathrm{span}\left(\begin{array}{r}-1\\1\end{array}\right)$ and $E(2) = \mathrm{span}\left(\begin{array}{c}1\\1\end{array}\right)$. Therefore, an orthonormal eigenbasis would be:
-$$\frac{1}{\sqrt{2}}\left(\begin{array}{r}-1\\1\end{array}\right),
-\frac{1}{\sqrt{2}}\left(\begin{array}{c}1\\1\end{array}\right).$$
-Here's my question: could the eigenvector for $E(0)$ have been $\mathrm{span}\left(\begin{array}{r}1\\-1\end{array}\right)$?? This would make the final answer $\frac{1}{\sqrt{2}}\left(\begin{array}{r}1\\-1\end{array}\right), \frac{1}{\sqrt{2}}\left(\begin{array}{c}1\\1\end{array}\right)$. Is one answer more correct than the other (or are they both wrong)?
-Thanks!
-
-REPLY [4 votes]: There is no such thing as the eigenvector of a matrix, or the orthonormal basis of eigenvectors. There are usually many choices.
-Remember that an eigenvector $\mathbf{v}$ of eigenvalue $\lambda$ is a nonzero vector $\mathbf{v}$ such that $T\mathbf{v}=\lambda\mathbf{v}$. That means that if you take any nonzero multiple of $\mathbf{v}$, say $\alpha\mathbf{v}$, then we will have
-$$T(\alpha\mathbf{v}) = \alpha T\mathbf{v} = \alpha(\lambda\mathbf{v}) = \alpha\lambda\mathbf{v}=\lambda(\alpha\mathbf{v}),$$
-so $\alpha\mathbf{v}$ is also an eigenvector corresponding to $\lambda$. More generally, if $\mathbf{v}_1,\ldots,\mathbf{v}_k$ are all eigenvectors of $\lambda$, then any nonzero linear combination $\alpha_1\mathbf{v}_1+\cdots+\alpha_k\mathbf{v}_k\neq \mathbf{0}$ is also an eigenvector corresponding to $\lambda$.
-So, of course, since $\left(\begin{array}{r}-1\\1\end{array}\right)$ is an eigenvector (corresponding to $x=0$), then so is $\alpha\left(\begin{array}{r}-1\\1\end{array}\right)$ for any $\alpha\neq 0$, in particular, for $\alpha=-1$ as you take.
-Now, a set of vectors $\mathbf{w}_1,\ldots,\mathbf{w}_k$ is orthogonal if and only if $\langle \mathbf{w}_i,\mathbf{w}_j\rangle = 0$ if $i\neq j$. If you have an orthogonal set, and you replace, say, $\mathbf{w}_i$ by $\alpha\mathbf{w}_i$ with $\alpha$ any scalar, then the result is still an orthogonal set: because $\langle\mathbf{w}_k,\mathbf{w}_j\rangle=0$ if $k\neq j$ and neither is equal to $i$, and for $j\neq i$, we have
-$$\langle \alpha\mathbf{w}_i,\mathbf{w}_j\rangle = \alpha\langle\mathbf{w}_i,\mathbf{w}_j\rangle = \alpha 0 = 0$$
-by the properties of the inner product.
As a consequence, if you take an orthogonal set, and you take any scalars $\alpha_1,\ldots,\alpha_k$, then $\alpha_1\mathbf{w}_1,\ldots,\alpha_k\mathbf{w}_k$ is also an orthogonal set.
-A vector $\mathbf{n}$ is normal if $||\mathbf{n}||=1$. If $\alpha$ is any scalar, then $||\alpha\mathbf{n}|| = |\alpha|\,||\mathbf{n}|| = |\alpha|$. So if you multiply any normal vector $\mathbf{n}$ by a scalar $\alpha$ of absolute value $1$ (or of complex norm $1$), then the vector $\alpha\mathbf{n}$ is also a normal vector.
-A set of vectors is orthonormal if it is both orthogonal, and every vector is normal. By the above, if you have a set of orthonormal vectors, and you multiply each vector by a scalar of absolute value $1$, then the resulting set is also orthonormal.
-In summary: you have an orthonormal set of two eigenvectors. You multiply one of them by $-1$; this does not affect the fact that the two are eigenvectors. The set was orthogonal, so multiplying one of them by a scalar does not affect the fact that the set is orthogonal. And the vectors were normal, and you multiplied one by a scalar of absolute value $1$, so the resulting vectors are still normal. So you still have an orthonormal set of two eigenvectors. I leave it to you to verify that if you have a linearly independent set, and you multiply each vector by a nonzero scalar, the result is still linearly independent.<|endoftext|>
-TITLE: Completion of a Noetherian ring R at the ideal $ (a_1,\ldots,a_n)$
-QUESTION [5 upvotes]: How can we prove that if $R$ is a commutative Noetherian ring, $\mathfrak{m} = (a_1,\ldots,a_n)$ is an ideal, then the completion of $R$ at $\mathfrak{m}$ is isomorphic to $R[[x_1,\ldots,x_n]]/(x_1-a_1,\ldots,x_n-a_n)$?
-
-REPLY [3 votes]: Here is an argument (maybe the same as the one that Mariano sketched, but with a little more detail):
-Let $S = R[[x_1,\ldots,x_n]]$, and let $T = R[[x_1,\ldots,x_n]]/(x_1-a_1,\ldots,x_n-a_n).$ The ring $S$ is Noetherian (since $R$ is), and is complete with respect to the ideal $(x_1,\ldots,x_n)$. By Artin--Rees (for example) the finitely generated $S$-module $T$ is also complete with respect to this ideal.
-Thus $T=\varprojlim T/(x_1,\ldots,x_n)^i = \varprojlim R/(a_1,\ldots,a_n)^i.$
-Thus $T$ is the completion of $R$ with respect to $(a_1,\ldots,a_n)$, as claimed.<|endoftext|>
-TITLE: Example of a ring with $x^3=x$ for all $x$
-QUESTION [5 upvotes]: A ring $R$ is a Boolean ring if $x^2=x$ for all $x\in R$. By the Stone representation theorem
-a Boolean ring is isomorphic to a subring of the power set ring of some set.
-My question is what is an example of a ring $R$ with $x^3=x$ for all $x\in R$ that is
-not a Boolean ring? (Obviously every Boolean ring satisfies this condition.)
-
-REPLY [2 votes]: Actually, not only would $\mathbb{Z}_3$ work, but it's the only solution that's an integral domain not of characteristic 2 (since, in such a case, $x^3-x=0\,\Rightarrow\,x\in \{0,1,-1\}$).
-Another solution would be $R:=\mathbb{Z}_3[\mathbb{Z}_2]$, the group ring of $\mathbb{Z}_2$ over $\mathbb{Z}_3$ (i.e. the ring of "polynomials" with coefficients in $\mathbb{Z}_3$, except that exponents are in $\mathbb{Z}_2$). To see why, take an element $f(x)\in R$ with
-$f(x)=a_0+a_1 x^{b_1} + \cdots + a_n x^{b_n}$
-with $a_i\in \mathbb{Z}_3$ and $b_j \in \mathbb{Z}_2$. Since $R$ is a ring of characteristic 3, the Freshman's Dream implies that
-$(f(x))^3=a_0^3+a_1^3 x^{3b_1} + \cdots + a_n^3 x^{3b_n} = a_0 + a_1 x^{b_1} + \cdots + a_n x^{b_n}=f(x)$.
-In fact, by the same argument, if you're given any ring $T$ of characteristic 3 with $x^3=x$ for all $x\in T$, then $T[\mathbb{Z}_2]$ satisfies this property as well.
-I can't think of another class of examples off the top of my head, but I'd be surprised if Boolean rings and the class of examples above were the only examples of rings of this type.
-EDIT: Yes, the solution of modding out a free algebra by an appropriate ideal would also work nicely.<|endoftext|>
-TITLE: Need good material on multifractal analysis
-QUESTION [7 upvotes]: I'm searching for some good reading material on multifractal analysis. Preferably something accessible that doesn't put too much stress on mathematical proofs but rather on applications. As long as it gives a good review of the status of the field, the interesting results and applications, I would be happy. Also, if anything related to multifractal analysis and statistics or time series comes up, I'll take it as well.
-Books, papers, internet pages, videos, etc... accepted!
-EDIT: Since the question has been bumped, I decided to put a bounty on it. But I also want to make a bit more precise what I'm looking for.
-I have always had the impression when encountering the multifractal techniques that people are able to compute a whole bunch of numbers with some nice and fancy formulas. But I have always missed an "understanding" of what the numbers mean. Why is it useful to do a multifractal analysis of fluid flow? Of species abundance distributions? Etc... I feel like the technique is purely descriptive with little theory backing up the connection with some deeper underlying structures. But that may just be due to my limited understanding of the field and that is precisely why I ask for pointers to where I can look for this.
-
-REPLY [4 votes]: OK, I'm going to hijack this thread even though there's an answer as I haven't found any quality, localized information about multifractals.
-As mentioned in the comments, I first heard about multifractals from a Google Tech Talk by Rogene M. Eichler West, which can be found, without sound, on YouTube, called "Multifractals: Theory, Algorithms, & Applications". Unfortunately Google Video got discontinued after Google bought YouTube and I can't find the original video that had the sound included.
-I still do not understand on a deep level what multifractals are doing, why they are better than other methods, or how they do it, but from what I understand the idea is to generalize the concept of spectrum to include functions that have a scale symmetry, where the scale symmetry can be on many different scales (thus multi-fractal, instead of just being fractal). Just as the Fourier spectrum constructs a profile of the translation invariances of a function, the multifractal spectrum gives information about the scale invariances of a function.
-The general methodology seems to be, for a given function $f(t)$:
-
-Find the Hölder exponent, $h(t)$, as a function of time, $t$
-Find the singularity spectrum, $D(\alpha)$
-
-Where $D(\alpha) \stackrel{def}{=} D_F\{x : h(x) = \alpha\}$, and $D_F\{\cdot\}$ is the (Hausdorff?) dimension of a point-set.
-I believe the idea is that for chaotic/fractal/discontinuous functions, at any point they can be characterized, locally, by the largest term of their Taylor expansion, and the Hölder exponent is a way to characterize this. Once you have the function, $h(t)$, characterizing the Hölder exponent, you use that to construct the singularity spectrum.
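-To make the dimension $D_F\{\cdot\}$ above a little more concrete, here is a minimal sketch (Python with NumPy, assuming it is available; box-counting is used as a computable stand-in for the Hausdorff dimension):
-
-import numpy as np
-from itertools import product
-
-def box_counting_dimension(points, epsilons):
-    # points: (N, d) array; count occupied grid boxes at each scale eps,
-    # then fit the slope of log N(eps) against log(1/eps)
-    counts = []
-    for eps in epsilons:
-        boxes = set(map(tuple, np.floor(points / eps).astype(int).tolist()))
-        counts.append(len(boxes))
-    slope, _ = np.polyfit(np.log(1.0 / np.asarray(epsilons)), np.log(counts), 1)
-    return slope
-
-# sanity check on a depth-10 middle-thirds Cantor set,
-# whose true dimension is log 2 / log 3 = 0.6309...
-pts = np.array([[sum(2 * d * 3.0 ** (-k - 1) for k, d in enumerate(digits))]
-                for digits in product((0, 1), repeat=10)])
-print(box_counting_dimension(pts, [3.0 ** (-j) for j in range(1, 8)]))
-
-Estimating $h(t)$ pointwise and then the full $D(\alpha)$ is subtler (see below), but this is the kind of computation hiding inside $D_F$.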
I believe the singularity spectrum is a synonym for the multi-fractal spectrum.
-From what I can tell, the specifics of how to calculate $h(t)$ and $D(\alpha)$ in practice vary from approximating them outright by their definition to using wavelets to approximate the Hölder exponent and then using a Legendre transform to approximate the multifractal spectrum.
-From what I understand, $D(\alpha)$ tends to be (or is always?) concave. I have only the vaguest notion of why this is so. How one relates wavelet transforms to finding the Hölder exponent, how one uses the Legendre transform to find the multi-fractal spectrum, why the multi-fractal spectrum should be concave, and what kind of intuitive feeling one should get about a function from viewing the spectrum, amongst many others, are all things I still have no idea about.
-The multiplicative cascade seems to be a canonical example of a multifractal process.
-Online, "A Brief Overview of Multifractal Time Series" gives a terse run-through of multifractals. They claim to be able to tell a healthy heart from one that is suffering from congestive heart failure (see here).
-Here are some slides giving a brief overview of multifractals. Near the end of the slides, they give a wavelet transform of the Devil's staircase function and talk a bit about using the Wavelet Transform Modulus Maxima Method (WTMM), which appears to be a standard tool when doing this type of analysis (anyone have any good links for this?).
-Looking around, I found Wavelets in physics by J. C. van den Berg, which has this section web-accessible for a definition of the singularity spectrum.
-Rudolf H. Riedi seems to have a few papers out there that describe multifractal processes. Here are a few:
-
-"Multifractal Processes"
-"Introduction to Multifractals"
-Along with Jacques L. Véhel, "TCP traffic is multifractal: a numerical study."
-
-While focused on finance, Laurent Calvet and Adlai Fisher have a lot of introduction to terminology in "Multifractality in asset returns: Theory and evidence".
-And of course Mandelbrot, along with other authors, has many papers, some of which are:
-
-"Large Deviations and the Distribution of Price Changes"
-"A Multifractal Model of Asset Returns"
- -REPLY [2 votes]: You can use it to count the number of isomorphism classes of representations of a quiver over a finite field; Burnside's lemma was used for this purpose by Kac and Stanley (see Root Systems, Representations of Quivers and Invariant Theory by Victor G. Kac).<|endoftext|> -TITLE: Existence of open set in product topology -QUESTION [7 upvotes]: Let $X$ be a compact topological space and $Y$ a Hausdorff space. Let $C \subseteq Y$ be closed in $Y$ and $U$ an open set in $X \times Y$ which contains $X \times C$. Prove there exists an open set $V \subseteq Y$ such that $X \times C \subseteq X \times V \subseteq U$. -Here's what I tried. -Let $b \in C$ fixed. For each $x \in X$ find open sets $U_{x}$ and $V_{x}$ in $X$, $Y$ respectively containing $x$ and $b$. Then the colecction $\{U_{x}: x \in X\}$ covers $X$ so by compactness of $X$ , we have $X \subseteq \bigcup_{i=1}^{n} U_{i}$. From here how to obtain the open set $V$ ?, if we take $V$ as the finite intersection of the V_{i} then this won't satisfy $X \times V \subseteq U$. Also I don't see where to use the hypothesis that $C$ is closed. -I don't think the above approach works. Can you please help? Thanks. - -REPLY [5 votes]: Closedness of $C$ is irrelevant. Here's an argument: -Lemma. - -Let $f: X \to Y$ be a (not necessarily continuous) function between topological spaces which is closed, i.e., maps closed sets to closed sets. If $A \subset Y$ is arbitrary and $U \subset X$ is open such that $f^{-1}(A) \subset U$ then there is $V \supset A$ open such that $f^{-1}(V) \subset U$. -If $X$ is compact and $Y$ is Hausdorff then the projection $\pi: X \times Y \to Y$ is closed. - -Since $\pi^{-1}(C) = X \times C \subset U$, combining 1. and 2. with $A = C$ yields $V \supset C$ open such that $\pi^{-1}(V) = X \times V \subset U$. -Proof of 1: Since $f$ is closed, the set $V = Y \smallsetminus f(X \smallsetminus U)$ is open. Since $f^{-1}(A) \subset U$ we have $A \subset V$. Moreover, $X \smallsetminus U \subset f^{-1}(f(X \smallsetminus U))$ yields $f^{-1}(V) = X \smallsetminus f^{-1}(f(X \smallsetminus U)) \subset X \smallsetminus (X \smallsetminus U) = U$. -Proof of 2: Let $F \subset X \times Y$ be closed and let $y \in Y \smallsetminus \pi(F)$ be arbitrary. Observe that $F \cap (X \times \{y\}) = \emptyset$. Since $(X \times Y) \smallsetminus F$ is open and contains $X \times \{y\}$, the definition of the product topology yields for each $x \in X$ open sets $U_{x} \subset X$ and $V_{x} \subset Y$ such that $(x,y) \in U_{x} \times V_{x} \subset (X \times Y) \smallsetminus F$. Since $X \times \{y\}$ is compact, there are $x_{1},\ldots,x_{n}$ such that $X \times \{y\} \subset (U_{x_{1}} \times V_{1}) \cup \cdots \cup (U_{x_{n}} \times V_{x_{n}}) =: W$. By construction $W \subset (X \times Y) \smallsetminus F$ and the open set $V := V_{x_{1}} \cap \cdots \cap V_{x_{n}}$ contains $y$ and satisfies $\pi^{-1}(V) \subset W$, hence $V \cap \pi(F) = \emptyset$, hence $Y \smallsetminus \pi(F)$ is open.<|endoftext|> -TITLE: Continuous bijections from the open unit disc to itself - existence of fixed points -QUESTION [5 upvotes]: I'm wondering about the following: -Let $f:D \mapsto D$ be a continuous real-valued bijection from the open unit disc $\{(x,y): x^2 + y^2 <1\}$ to itself. Does f necessarily have a fixed point? -I am aware that without the bijective property, it is not necessarily true - indeed, I have constructed a counterexample without any trouble. However, I suspect with bijectivity it may be the case. 
I'm aware of the Brouwer Fixed Point Theorem and I imagine these two are intricately linked. However, i'm not certain where the bijectivity comes in - I believe we can argue something along the lines f now necessarily maps boundary to boundary - something about how if $x^2+y^2 \to 1$, $\|f(x,y)\| \to 1$ maybe. However, how does this help? Even if we could definitely define a limit to f(x,y) along the whole boundary and apply Brouwer, we can't guarantee the fixed points aren't all on the boundary anyway. -Conversely however, I still can't construct a counterexample. Could anyone help me finish this off please? Thanks! - -REPLY [3 votes]: Let $D$ be open disk with center $0$ of radius $1$ in complex plane. Consider holomorphic automorphism of $D$ given by formula -$z \mapsto \frac{z+a}{1+\bar a z}$ with $a\in D$. -Does it have fixed point if $a\neq 0$ ? [You must solve $z=\frac{z+a}{1+\bar a z}$] -You can also biholomorphically send $D$ to right half plane $H$ of $\mathbb C$ by -$z \mapsto \frac{1+z}{1-z}$ and then apply Chris Eagle comment to $H$.<|endoftext|> -TITLE: Prove: the intersection of Fibonacci sequence and Mersenne sequence is just $\{1,3\}$ -QUESTION [8 upvotes]: $$\frac{{{\varphi ^n} - {{(1 - \varphi )}^n}}}{{\sqrt 5 }} = {2^m} - 1 .$$ -Here $\varphi = \frac{{1 + \sqrt 5 }}{2}$ . This integer equation has no solution for $n>3$ and $m>2$. How to prove? - -REPLY [11 votes]: We need to find when $F_n+1$ is a power of 2. Almost every value of $n$ can be eliminated by considering the Pisano period. In particular, we can deduce that: - -$F_n+1 \equiv 0 \pmod {16}$ if and only if $n \equiv 22 \pmod {24}$ and -$F_n+1 \equiv 0 \pmod 9$ if $n \equiv 22 \pmod {24}$. - -This leaves the few small cases already listed.<|endoftext|> -TITLE: Why is Peano arithmetic undecidable? -QUESTION [21 upvotes]: I read that Presburger arithmetic is decidable while Peano arithmetic is undecidable. Peano arithmetic extends Presburger arithmetic just with the addition of the multiplication operator. Can someone please give me the 'intuitive' idea behind this? -Or probably a formula in Peano arithmetic that cannot be proved. Does it have something to do with the self reference paradox in Gödel's Incompleteness Theorem? - -REPLY [2 votes]: @hardmath: -"It's a good point, that undecidability of a formal first-order theory implies incompleteness but not the converse" - that is simply not true. -The theorem (that is actually an easy consequence of Post's theorem) we have here is: -For any FO-theory T, if T is recursively axiomatizable and complete, then it is decidable. -This does not mean that an undecidable theory is necessarily incomplete - it can be complete, but lack recursive axiomatization. That is e.g. the case of ${Th(N, +, \times, 0, 1)}$. Moreover, there are theories which are both recursive and decidable, but incomplete, e.g. $T=\{0=0\}$.<|endoftext|> -TITLE: Are there any countable Hausdorff connected spaces? -QUESTION [17 upvotes]: Do countable Hausdorff connected topological spaces exist? - -REPLY [5 votes]: $\pi$-Base is an online encyclopedia of topological spaces inspired by Steen and Seebach's Counterexamples in Topology. It lists the following countable, connected, Hausdorff spaces. You can learn more about any of them by visiting the search result. -Gustin's Sequence Space -Irrational Slope Topology -Prime Integer Topology -Relatively Prime Integer Topology -Roy's Lattice Space<|endoftext|> -TITLE: Exist domains in complex plane with only trivial automorphisms? 
-QUESTION [9 upvotes]: Does exist open domain in $\mathbb C$ who has only identity for holomorphic automorphism? -Related question: does exist open domain in $\mathbb C$ so that every holomorphic automorphism has fixed point? -These questions were inspired by (much easier!) question by mathmos6. - -REPLY [9 votes]: Steven Krantz has studied the automorphism group of domains, and has authored or coauthored some papers and surveys on the subject. In particular, Chapter 12 of his recent book "Geometric Function Theory" deals precisely with this topic and provides additional references. (It is a very nice book, by the way!) -Here are some general comments, they all come from this Chapter: - -The automorphism group of a domain is a Lie Group (this is proved in Kobayashi's "Hyperbolic manifolds and holomorphic mappings"), so it makes sense to talk about its dimension. -The automorphism group of the disk is 3-dimensional, and 3 is the largest dimension possible. -The only bounded domain whose automorphism group is 1-dimensional is the annulus. -A domain with "very many holes" is 0-dimensional. -The automorphism group of a domain with at least 2 but only finitely many holes is finite. - -This last case, which includes your question, was studied by M. Heins in two papers, - -"A note on a theorem of Radó concerning the $(1,m)$ conformal maps of a multiply connected region into itself", Bulletin of the American Mathematical Society 47 (1941), 128-130. -"On the number of 1-1 directly conformal maps which a multiply-connected plane region of finite connectivity $p$ ($>2$) admits onto itself", Bulletin of the American Mathematical Society 52 (1946), 454-457. - -Heins found sharp bounds $N(k)$ for the size of the automorphism group of a domain $\Omega_k$ with precisely $k\ge 2$ holes: - -$N(k)=2k$ if $k\ne 4,6,8,12,20$; -$N(4)=12$; -$N(6)=N(8)=24$; -$N(12)=N(20)=60$. - -In an exercise, Krantz describes a domain with trivial automorphism group: Start with the "box" $\{\zeta\in{\mathbb C}\mid|{\rm Re}\zeta|<2,|{\rm Im}\zeta|<2\}$, remove the four closed disks of radius $0.1$ and centered at $\pm1\pm i$, and "perturb one of the holes" by 0.1. -Finally, let me quote a paragraph at the end of the chapter: - -A very interesting open problem is to determine which finite groups arise as the automorphism groups of planar domains (there are some results for finitely connected regions). It is known that if $G$ is a compact Lie group, then there is some smoothly bounded domain in some ${\mathbb C}^n$ with automorphism group equal to $G$. But it is difficult to say how large $n$ must be in terms of elementary properties of the group $G$. - -Krantz closes by mentioning two references for the above: - -Bedford-Dadok, "Bounded domains with prescribed group of automorphisms", Comment. Math. Helv. 62 (1987), 561-572. -Saerens-Zame, "The isometry groups of manifolds and the automorphism groups of domains", Trans. Amer. Math. Soc. 301 (1987), 413-429.<|endoftext|> -TITLE: The primitive spectrum of a unital ring -QUESTION [5 upvotes]: I'm trying to investigate the notion of primitive spectrum and its so-called Jacobson or hull-kernel topology, but I can only find references which define it for C*-algebras: see the Wikipedia page "Spectrum of a C*-algebra" for the definition I'm talking about. It seems like this definition would make sense for any (let's stick with unital) ring whatsoever, so I suspect the problem is that in full generality we don't actually get a topology. 
-So here's what I want to ask: can you help me think of an example of a unital ring for which the Jacobson topology on its primitive spectrum is not actually a topology? Or even better, does anyone know of general conditions under which the primitive spectrum has a natural topology? Also, when the Jacobson topology is defined, is the primitive spectrum always quasi-compact? -By way of motivation, certain complex algebras have come up in my research in representation theory. Explaining what they are would take me far afield, but they do have the following nice property. In their paper "Extensions of representations of $p$-adic nilpotent groups," S. Gelfand and D. Kazhdan call a complex unital algebra $A$ quasi-finite provided that it has a filtration $A_0 \subset A_1 \subset \cdots \subset A$ by finite-dimensional semisimple subalgebras $A_k$, and simple modules for each $A_k$ are finite-dimensional. - -REPLY [7 votes]: You can do this for an arbitrary ring (with or without unit). Jacobson's original article can be found here (JSTOR, needs a university subscription). -I cannot do better than to simply quote C. Chevalley's Math Review (MathSciNet, needs a university subscription): - -A (two-sided) ideal $\frak J$ in a ring $\frak A$ is called primitive if $0$ - is the only $x\in\frak A$ such that ${\frak A}x\subset\frak J$. Let $S$ be - the set of primitive ideals in $\frak A$; a topology is introduced in $S$ - by defining the closure of a subset $M$ of $S$ to be the set of primitive - ideals which contain the intersection of all ideals in $M$. The space - defined in this way is a $T_0$-space, but generally not $T_1$ (except when - $\frak A$ is commutative). If $\frak A$ has a unit element, $S$ is compact - (that is, bicompact). The space $S$ is not changed if $\frak A$ is replaced - by $\frak A/\frak R$, where $\frak R$ is the radical of $\frak A$. If - $\frak A$ is semisimple and has a unit element, a necessary and sufficient - condition for $S$ to be disconnected is that there should exist a - decomposition of $\frak A$ as the direct sum of two two-sided ideals not - equal to $\{0\}$. It is shown that, given any totally disconnected compact - space $S$, it is possible to construct a ring $\frak A$ whose space of - primitive ideals is homeomorphic with $S$: the elements of $S$ are certain - mappings of $S$ into a field (not necessarily commutative) which can be - selected arbitrarily.<|endoftext|> -TITLE: What is $\frac{d}{dx}\left(\frac{dx}{dt}\right)$? -QUESTION [9 upvotes]: This question was inspired by the Lagrange equation, $\frac{\partial L}{\partial q} - \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} = 0$. What happens if the partial derivatives are replaced by total derivatives, leading to a situation where a function's derivative with respect to one variable is differentiated by the original function? - -REPLY [2 votes]: $\frac{\partial L}{\partial \dot{q}}$ may be understood as the generalized momentum of the system (e.g. $\frac{d}{dv}\left(\frac{1}{2}mv^2\right) = mv$, the derivative of kinetic energy with respect to velocity is momentum). Then $\frac{d}{dt}\frac{\partial L}{\partial \dot{q}}$ is the time derivative of the generalized momentum. Likewise the $\frac{\partial L}{\partial q}$ term behaves like a generalized force. This begins to look like Newton's second law, but for generalized coordinates.<|endoftext|> -TITLE: Rings of matrices -QUESTION [5 upvotes]: Let $ A\in {\mathbb{F} }^{n\times n} $ be a fixed matrix. 
The set of all matrices that commute with A forms a subring of ${\mathbb{F} }^{n\times n}$. -Is any subring of ${\mathbb{F} }^{n\times n }$ (which contains the identity) of the above form? -Thanks. - -REPLY [5 votes]: Not if $n\gt 1$. Consider the subring of all scalar multiples of the identity matrix (this is isomorphic to $\mathbb{F}$ itself). -Since the collection of all matrices that commute with $A$ always includes $A$ itself, if the subring of all scalar multiples of the identity were of the form $C(A)$ for some $A$, then $A$ would necessarily be a scalar multiple of the identity; but scalar multiples of the identity are central in $\mathbb{F}^{n\times n}$, so the centralizer of such an $A$ would contain more than just the scalar multiples of the identity. -You might ask more generally whether every subring of $\mathbb{F}^{n\times n}$ of the form $C(S)$ for some $S\subseteq \mathbb{F}^{n\times n}$, where $C(S) = \{M\in\mathbb{F}^{n\times n}\mid MA=AM\text{ for all }A\in S\}$. But even here the answer is still negative; though my example no longer works in this setting (taking $S=\mathbb{F}^{n\times n}$ gives the scalar multiples of the identity), Mariano's example still does. Note that $S\subseteq C(C(S))$; if $T$ is the set of upper triangular matrices, then considering the matrix $E_{ij}$ that has a $1$ in the $(i,j)$ entry, $i\leq j$, and zeros elsewhere, you have that $E_{ij}A$ is the matrix that has zeros everywhere except the $i$th row, where it has the $j$th row of $A$; while $AE_{ij}$ is the matrix that has zeros everywhere except the $j$th column, where it has the $i$th column of $A$. Thus, if $A\in C(T)$, then $A$ must be a scalar multiple of the identity; this means that if $T=C(S)$ for some $S$, then $S$ must be contained in the scalar multiples of the identity, and once again we obtain a contradiction. - -REPLY [3 votes]: Let $T$ be subring of $M_2(\mathbb F)$ of upper triangular matrices. Is there an $A\in M_2(\mathbb F)$ such that $T$ is the set of matrices which commute with $A$?<|endoftext|> -TITLE: Algebra without Zorn's lemma -QUESTION [6 upvotes]: One can't get too far in abstract algebra before encountering Zorn's Lemma. For example, it is used in the proof that every nonzero ring has a maximal ideal. However, it seems that if we restrict our focus to Noetherian rings, we can often avoid Zorn's lemma. How far could a development of the theory for just Noetherian rings go? When do non-Noetherian rings come up in an essential way for which there is no Noetherian analog? For example, Artin's proof that every field has an algebraic closure uses Zorn's lemma. Is there a proof of this theorem (or some Zorn-less version of this theorem) that avoids it? - -REPLY [6 votes]: The proof of existence and uniqueness of algebraic closures goes through assuming only the ultrafilter lemma, which is strictly weaker than AC; see this MO question. Exactly how strong this assumption is relative to other well-known forms of AC appears to be unknown. I don't know what "Noetherian version of this theorem" means, since every field is Noetherian.<|endoftext|> -TITLE: Evaluation of the sum $\sum_{k = 0}^{\lfloor a/b \rfloor} \left \lfloor \frac{a - kb}{c} \right \rfloor$ -QUESTION [12 upvotes]: Let $a, b$ and $c$ be positive integers. 
Recall that the greatest common divisor (gcd) function has the following representation: -\begin{eqnarray} -\textbf{gcd}(b,c) = 2 \sum_{k = 1}^{c- 1} \left \lfloor \frac{kb}{c} \right \rfloor + b + c - bc -\end{eqnarray} -as shown by Polezzi, where $\lfloor \cdot \rfloor$ denotes the floor function. In trying to generalize the formula I came across the following summation -\begin{eqnarray} -\sum_{k = 0}^{\lfloor a/b \rfloor} \left \lfloor \frac{a - kb}{c} \right \rfloor. -\end{eqnarray} -I can prove the following identity for real $x$, -\begin{eqnarray} -\sum_{k = 0}^{c-1} \left \lceil \frac{x - kb}{c} \right \rceil = d \left \lceil \frac{x}{d} \right \rceil - \frac{(b-1)(c-1)}{2} - \frac{d-1}{2}, -\end{eqnarray} -where $\lceil \cdot \rceil$ denotes the ceiling function and $d = \text{gcd}(b,c)$. (Note that in the first summation the upper index is in general independent of $c$.) Ideas or reference suggestions are certainly appreciated. Thanks in advance! -Update I can prove the identity -\begin{eqnarray} -\sum_{k = 0}^{c-1} \left \lfloor \frac{x - kb}{c} \right \rfloor = d \left \lfloor \frac{x}{d} \right \rfloor - \frac{(b+1)(c-1)}{2} + \frac{d-1}{2}, -\end{eqnarray} -where $d = \text{gcd}(b,c)$. There is another identity which might be useful. If $n = c \ell +r$ with $0 \leq r < c$, then -\begin{eqnarray} -\sum_{k = 1}^{n} \left \lfloor \frac{k}{c} \right \rfloor = c \binom{\ell}{2} + (r + 1) \ell. -\end{eqnarray} -Update 2 Ok, so I can prove that for real $x, y > 0$, -\begin{eqnarray} -\sum_{k = 0}^{\lfloor y \rfloor} \left \lfloor x + \frac{k}{y} \right \rfloor = \lfloor xy + (\lceil y \rceil - y) \lfloor x + 1 \rfloor \rfloor + \chi_{\mathbb{N}}(y)(\lfloor x \rfloor + 1), -\end{eqnarray} -where $\chi_{\mathbb{N}}$ denotes the characteristic function of the positive integers. My original problem (and a nice generalization of it) will be in hand if I can evaluate the following minor generalization: For real $x, y > 0$ and $n \in \mathbb{Z}_{\geq 0}$, -\begin{eqnarray} -\sum_{k = 0}^{n} \left \lfloor x + \frac{k}{y} \right \rfloor. -\end{eqnarray} -Again, any help is certainly appreciated! - -REPLY [6 votes]: I would like to summarise and extend the results of my previous answer in a new answer as I prefer to keep the original in its current form to prevent it from turning into my magnum opus. -Theorem 1: -For positive integers $a, b$ and $c$ where $ d=\text{gcd}(b,c), $ $u=c/d,$ -$t= \lfloor a/b \rfloor $ and $ a \equiv \lambda \textrm{ mod } d, $ where -$ 0 \le \lambda < d,$ and $ u \, | \, (t+1) $ we have -$$ \sum_{k=0}^{t} \left \lfloor \frac{a - kb}{c} \right \rfloor = -\frac{t+1}{c} \left \lbrace a - \frac{tb}{2} - \frac{c-d}{2} - \lambda \right \rbrace . $$ -So far, the above theorem and its proof are included in my previous answer. The rest is new. -Theorem 2: -Along with the definitions in theorem 1, let $ a \equiv r \textrm{ mod } c, $ where $ 0 \le r < c $ and $ u \, | \, t $ then -$$ \sum_{k=0}^{t} \left \lfloor \frac{a - kb}{c} \right \rfloor = -\frac{t+1}{c} \left \lbrace a - \frac{tb}{2} \right \rbrace -- \frac{1}{c} \left \lfloor \frac{t+1}{u} \right \rfloor -\left \lbrace \frac{u(c-d)}{2} + \lambda u \right \rbrace - \frac{r}{c} . $$ -For example, with $a=1019,b=33$ and $c=55$ we have $d=\text{gcd}(33,55)=11,$ -$u=c/d=55/11=5,$ $t= \lfloor 1019/33 \rfloor = 30$ and $ 1019 \equiv 29 \textrm{ mod } 55, $ and -$ 1019 \equiv 7 \textrm{ mod } 11. 
$ Hence $ r = 29 $ and $\lambda = 7.$
-Note that the condition $ u \, | \, t $ is satisfied, and so theorem 2 gives
-$$ \sum_{k=0}^{30} \left \lfloor \frac{1019 - 33k}{55} \right \rfloor =
-\frac{31}{55} \left \lbrace 1019 - \frac{ 30 \cdot 33}{2} \right \rbrace
-- \frac{6}{55} \left \lbrace \frac{5(55-11)}{2} + 7 \cdot 5 \right \rbrace - \frac{29}{55} = 279.$$
-This is easily verified with WolframAlpha, or similar.
-Both of these theorems are special cases ($ \mu=0$ and $\mu=1$) of the general result:
-Theorem 3: Along with the previous definitions in theorems 1 and 2, let
-$ t+1 \equiv \mu \textrm{ mod } u,$ where $ 0 \le \mu < u $ then
-$$ \sum_{k=0}^{t} \left \lfloor \frac{a - kb}{c} \right \rfloor =
-\frac{t+1}{c} \left \lbrace a - \frac{tb}{2} \right \rbrace
-- \frac{1}{c} \left \lfloor \frac{t+1}{u} \right \rfloor
-\left \lbrace \frac{u(c-d)}{2} + \lambda u \right \rbrace $$
-$$ - \frac{1}{c} \sum_{k=0}^{\mu - 1} \left \lbrace
-r+k \left \lbrace \left( \left \lfloor \frac{b}{c} \right \rfloor + 1 \right) c - b
-\right \rbrace \textrm{ mod } c \right \rbrace .$$
-There are $ \mu $ terms in the last sum, so this is understood to be zero when $ \mu =0.$ The terms in the final summation of the equation are all $ \ge 0 $ and reduced modulo $c.$
-A sketch of the proof runs as follows. On adding up the $t+1$ equations at the start of my previous answer, we add up $ \lfloor (t+1)/u \rfloor $ “complete” sets of residues modulo $c$
-(not to be confused with a complete residue system modulo $c$) that are congruent to $ \lambda $ modulo $d.$ The term $ \sum_{k=0}^{\mu - 1} \lbrace r+k \lbrace \cdots \rbrace
-\textrm{ mod } c \rbrace $ (equivalent, of course, to $ \sum_{k=0}^{\mu - 1} \lbrace (r-kb) \textrm{ mod } c \rbrace $ ) is the sum of the “left over residues.”
-Rearranging the equation obtained from the summation proves the theorem.
-Note: In the text $ \lbrace \cdot \rbrace $ is only used for readability and does not indicate fractional part (I find doubled-up curved brackets awkward to read).
-Just for fun, here is what we get when we put $ \mu = 2 $ into theorem 3.
-Theorem 2(b): Along with the previous definitions, since $ \mu = 2 $ our condition here is that $ u \, | \, (t-1) .$ Now for $ r \ge b $ we obtain
-$$ \sum_{k=0}^{t} \left \lfloor \frac{a - kb}{c} \right \rfloor =
-\frac{t+1}{c} \left \lbrace a - \frac{tb}{2} \right \rbrace
-- \frac{1}{c} \left \lfloor \frac{t+1}{u} \right \rfloor
-\left \lbrace \frac{u(c-d)}{2} + \lambda u \right \rbrace - \frac{2r-b}{c} . \quad (1) $$
-For example, with $a=1037,b=33$ and $c=55$ we have $d=\text{gcd}(33,55)=11,$
-$u=c/d=55/11=5,$ $t= \lfloor 1037/33 \rfloor = 31$ and $ 1037 \equiv 47 \textrm{ mod } 55, $ and
-$ 1037 \equiv 3 \textrm{ mod } 11. $ Hence $ r = 47 $ and $\lambda = 3.$
-Note that the condition $ u \, | \, (t-1) $ is satisfied, and so theorem 2(b) gives
-$$ \sum_{k=0}^{31} \left \lfloor \frac{1037 - 33k}{55} \right \rfloor =
-\frac{32}{55} \left \lbrace 1037 - \frac{ 31 \cdot 33}{2} \right \rbrace
-- \frac{6}{55} \left \lbrace \frac{5(55-11)}{2} + 3 \cdot 5 \right \rbrace - \frac{2 \cdot 47 - 33}{55} = 291.$$
-This is easily verified with WolframAlpha, or similar.
-For $ r < b $ the final term in $(1)$ would be $ - \frac{2r+c-b}{c}.$
-(To be continued... time and enthusiasm permitting :-) )<|endoftext|>
-TITLE: How far are the $p$-adic numbers from being algebraically closed?
-QUESTION [48 upvotes]: A few days ago I was recalling some facts about the $p$-adic numbers, for example the fact that the $p$-adic metric is an ultrametric implies very strongly that there is no order on $\mathbb{Q}_p$, as any number in the interior of an open ball is in fact its center.
-I know that if you take the completion of the algebraic closure of the $p$-adic completion you get something which is isomorphic to $\mathbb{C}$ (this result was very surprising until I studied model theory, then it became obvious).
-Furthermore, if the algebraic closure is an extension of dimension $2$ then the field is orderable, or even real closed. Either way, it implies that the $p$-adic numbers don't have this property.
-So I was thinking, is there a $p$-adic number whose square equals $2$? $3$? $2011$? For which prime numbers $p$? How far down the rabbit hole of algebraic numbers can you go inside the $p$-adic numbers? Are there general results connecting the choice (or rather properties) of $p$ to the "amount" of algebraic closure it gives?

-REPLY [14 votes]: Suppose that $K$ is an algebraic number field, i.e. a finite extension of $\mathbb Q$. It has a ring of integers $\mathcal O_K$ (the integral closure of $\mathbb Z$ in $K$). Suppose that there is a prime ideal $\wp \subset \mathcal O_K$ such that:

-$p \in \wp,$ but $p \not\in \wp^2$.
-The order of $\mathcal O_K/\wp = p.$ (Note that (1) implies in particular that
-$\wp \cap \mathbb Z = p \mathbb Z$, so that $\mathcal O_K/\wp$ is an extension of $\mathbb Z/p\mathbb Z$. We are now requiring that it in fact be the trivial extension.)

-Then the number field $K$ embeds into $\mathbb Q_p$. The converse also holds.
-So if you want to know whether you can solve the equation $f(x) = 0$ in $\mathbb Q_p$ (where $f(x)$ is some irreducible polynomial in $\mathbb Q[x]$), then set
-$K = \mathbb Q[x]/f(x)$ and apply this criterion.
-This is easiest to do when
-$f(x)$ has integral coefficients, and remains separable when reduced mod $p$
-(something that you can check by computing the discriminant and seeing whether
-or not it is divisible by $p$),
-because in this case the criterion is equivalent to asking that $f(x)$ have a root mod $p$.
-Incidentally, there are many $f(x)$ that satisfy this criterion
-(because, among other
-things, the algebraic closure of $\mathbb Q$ in $\mathbb Q_p$ has infinite degree over $\mathbb Q$), but there are also many $f(x)$ that don't.<|endoftext|>
-TITLE: solving a differential equation involving $\frac{y-x^2}{\sin y-x}$
-QUESTION [5 upvotes]: I'm trying to find the general solution to
-$$\frac{\text{d}y}{\text{d}x} = \frac{y-x^2}{\sin y-x}$$
-Any ideas would be greatly appreciated.
-Thanks!

-REPLY [8 votes]: Your equation is exact once you write it as $$f(x,y)\,\mathrm d x+g(x,y)\,\mathrm d y=0.$$ Find a potential, and voilà. I'll leave you the fun of doing that; the general solution is implicitly defined by the equation $$\frac{x^3}{3}-xy-\cos y=c$$ with $c$ a constant.<|endoftext|>
-TITLE: Arbitrary non-integer power of a matrix
-QUESTION [8 upvotes]: Does there exist a notion of a non-integer power of a matrix? This seems to be accessible via semigroup theory, yet I have not seen an actual definition so far.
-I am not too firm at this right now, but I am curious. Can you give me a sketch of the definition and provide some introductory information?

-REPLY [4 votes]: There are several techniques for extending scalar functions to matrices.
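-For a concrete illustration: when the matrix is diagonalizable, one route is to apply the scalar power to the eigenvalues. Here is a minimal numerical sketch in Python (the test matrix and the NumPy/SciPy calls are my own illustrative choices, not part of the original answer; scipy.linalg.fractional_matrix_power is a library routine built on these ideas):
-
-import numpy as np
-from scipy.linalg import fractional_matrix_power
-
-# A symmetric positive-definite matrix, so its eigenvalues are real and positive
-A = np.array([[2.0, 1.0],
-              [1.0, 2.0]])
-
-# Route 1: eigendecomposition A = V diag(w) V^(-1), so A^t = V diag(w^t) V^(-1)
-w, V = np.linalg.eig(A)
-A_half = V @ np.diag(w ** 0.5) @ np.linalg.inv(V)
-
-# Route 2: a library routine for the same computation
-A_half2 = fractional_matrix_power(A, 0.5)
-
-print(np.allclose(A_half @ A_half, A))  # True: the square root squares back to A
-print(np.allclose(A_half, A_half2))     # True: the two routes agree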
-Wikipedia mentions techniques based on power series, eigendecomposition, Jordan decomposition, Cauchy integral, and more.<|endoftext|>
-TITLE: Finding all linear operators $L: C([0,1]) \to C([0,1])$ which satisfy 2 conditions
-QUESTION [7 upvotes]: As above, I'm trying to find all linear operators $L: C([0,1]) \to C([0,1])$ which satisfy the following 2 conditions:

-I) $Lf \, \geq \, 0$ for all non-negative $f\in C([0,1])$.
-II) $Lf = f$ for $f(x)= 1$, $f(x)=x$, and $f(x)=x^2$.

-I'm honestly not sure where to start here - I'm struggling to use these conditions to pare down the class of linear operators which could satisfy the conditions significantly. Could anyone help me get a result out of this? Thank you!

-REPLY [5 votes]: The answer is: $L$ must be the identity of $C([0,1])$.
-Let $f: [0,1] \to \mathbb{R}$ be continuous. We want to prove that $Lf = f$.
-Since $[0,1]$ is compact, $f$ is uniformly continuous, so for fixed $\varepsilon \gt 0$ we can choose $\delta$ such that $|x - y| \leq \sqrt{\delta}$ implies $|f(x) - f(y)| \lt \varepsilon$. Observe that
-\[
-|f(x) - f(y)| \leq \varepsilon + \frac{2\|f\|}{\delta}(x - y)^{2}
-\]
-for all $x,y \in [0,1]$.
-If $|x - y| \leq \sqrt{\delta}$ this is clear and otherwise we have $(x - y)^{2} \gt \delta$, so $\varepsilon + \frac{2\|f\|}{\delta}(x - y)^{2} > \varepsilon + 2\|f\| \gt |f(x) - f(y)|$.
-In other words, for $C = \frac{2\|f\|}{\delta}$ we have
-\[
--\varepsilon - C(x - y)^{2} \leq f(x) - f(y) \leq \varepsilon + C(x-y)^{2}.
-\]
-Keep $y$ fixed (so we regard $y$ and $f(y)$ as constants), consider the three parts of these inequalities as functions of $x$ and apply $L$. Using that $L$ is monotone and that $L(ax^{2} + bx + c) = ax^{2} + bx + c$ by hypothesis, we get
-\[
--\varepsilon - C(x - y)^{2} \leq (Lf)(x) - f(y) \leq \varepsilon + C(x-y)^{2} \qquad \text{for all $x,y \in [0,1]$}.
-\]
-In particular, setting $x = y$ yields $|Lf(y) - f(y)| \leq \varepsilon$. Since $\varepsilon$ and $y$ were arbitrary, we conclude $Lf(y) = f(y)$ for all $y \in [0,1]$.

-The argument given here may be strengthened with only little effort:
-Korovkin's Theorem.
-Let $L_{n}: C([0,1]) \to C([0,1])$ be positive operators such that $\|L_{n}g_{i} - g_{i}\|_{\infty} \xrightarrow{n \to \infty} 0$ for $g_{i}(x) = x^{i}$, $i = 0,1,2$. Then $\|L_{n}f - f\|_{\infty} \to 0$ for all $f \in C([0,1])$.
-Its proof (as well as the argument above) is a variant of the usual proof of the Weierstrass approximation theorem using Bernstein polynomials. One may take
-\[
-L_{n}f(x) = \sum_{k = 0}^{n} \begin{pmatrix} n \\ k \end{pmatrix}x^{k}(1-x)^{n-k} f(k/n),
-\]
-in Korovkin's theorem and verify directly that $L_n g_{i} \to g_{i}$ for $i = 0,1,2$, so Korovkin's theorem yields the Weierstrass approximation theorem.

-REPLY [3 votes]: Here are some hints:

-You don't have much choice for $L$.
-If $f: [0,1] \to \mathbb{R}$ is continuous then it is uniformly continuous, so for all $\varepsilon > 0$ you can choose $\delta > 0$ such that $|x - y| < \sqrt{\delta}$ implies $|f(x) - f(y)| < \varepsilon$. This gives
-\[
-|f(x) - f(y)| < \varepsilon + C (x - y)^{2} \quad \text{for ALL $x,y \in [0,1]$}
-\]
-with $C = \frac{2\|f\|_{\infty}}{\delta}$.
-Use this to estimate $f(x) - f(y)$ from above and below by a quadratic polynomial in $x$ (view $y$ and $f(y)$ as constants).
-If $p(x) = ax^{2} + bx + c$ then $Lp = p$ and if $g \leq h$ then $Lg \leq Lh$.<|endoftext|>
-TITLE: Unique factorization domain that is not a Principal ideal domain
-QUESTION [13 upvotes]: Let $c$ be an integer, not necessarily positive and not a square. Let $R=\mathbb{Z}[\sqrt{c}]$
-denote the set of numbers of the form $$a+b\sqrt{c}, a,b \in \mathbb{Z}.$$
-Then $R$ is a subring of $\mathbb{C}$ under the usual addition and multiplication.
-My question is: if $R$ is a UFD (unique factorization domain), does it follow that
-it is also a PID (principal ideal domain)?

-REPLY [14 votes]: Yes, because such quadratic number rings are easily shown to have dimension at most one (i.e. every nonzero prime ideal is maximal). But $\rm PID\:$s are precisely the $\rm UFD\:$s having dimension $\le 1\:.\ $ Below is a sketch of a proof of this and closely related results.
-THEOREM $\rm\ \ \ TFAE\ $ for a $\rm UFD\ D$
-$1)\ $ prime ideals are maximal if nonzero
-$2)\ $ prime ideals are principal
-$3)\ $ maximal ideals are principal
-$4)\ \rm\ gcd(a,b) = 1\ \Rightarrow\ (a,b) = 1$
-$5)\ $ $\rm D$ is Bezout
-$6)\ $ $\rm D$ is a $\rm PID$
-Proof $\ $ (sketch of $1 \Rightarrow 2 \Rightarrow 3 \Rightarrow 4 \Rightarrow 5 \Rightarrow 6 \Rightarrow 1$)
-$1\Rightarrow 2)$ $\rm\ \ P\supset (p)\ \Rightarrow\ P = (p)$
-$2\Rightarrow 3)$ $\ \: $ Clear.
-$3\Rightarrow 4)$ $\ \ \rm (a,b) \subsetneq P = (p)\ $ so $\rm\ (a,b) = 1$
-$4\Rightarrow 5)$ $\ \ \rm c = \gcd(a,b)\ \Rightarrow\ (a,b) = c\ (a/c,b/c) = (c)$
-$5\Rightarrow 6)$ $\ \ \rm 0 \ne I \subset D\:$ Bezout is generated by an elt with the least number of prime factors
-$6\Rightarrow 1)$ $\ \ \rm P \supset (p),\ a \not\in (p)\ \Rightarrow\ (a,p) = (1)\ \Rightarrow\ P = (p)$<|endoftext|>
-TITLE: Historical textbook on group theory/algebra
-QUESTION [16 upvotes]: Recently I have started reading about some of the history of mathematics in order to better understand things.
-A lot of ideas in algebra come from trying to understand the problem of finding solutions to polynomials in terms of radicals, which is solved by the Abel-Ruffini theorem and Galois theory.
-I was wondering if there's a textbook (or history book which emphasizes the mathematics) that goes vaguely in chronological order or explicitly presents theorems and concepts in their historical context.
-Alternatively, if you think it would be better to attempt to read the original papers (Abel's famous quote about reading the masters comes to mind), such as the works of the mathematicians mentioned in this wikipedia article, how would I go about doing so?
-EDIT: while researching some of the recommended books, I found this interesting list (pdf) of references.

-The inspiration for this question came from asking myself "why would someone bother/think of defining a normal subgroup in the first place?" (although I already know the answer) and hence I am asking about Galois theory, but really this question works for any area of mathematics and perhaps someone should open a community wiki question for that.

-REPLY [2 votes]: See also the books A History of Abstract Algebra and Episodes in the History of Modern Algebra (1800-1950).<|endoftext|>
-TITLE: Topology of the power set
-QUESTION [5 upvotes]: Does anyone know a non-trivial (that is, one that we cannot define on every set) topology defined on the power set of an uncountable set?

-REPLY [9 votes]: I think you're asking whether power objects exist in the category of topological spaces.
I think the answer is not always: although we have a subobject classifier $\Omega$ (namely the 2-point space with the indiscrete topology), the exponential object $\Omega^X$ is not guaranteed to exist. The obstruction is the requirement that for every continuous function $f: A \times X \to \Omega$ there must be a unique continuous function $\tilde{f}: A \to \Omega^X$, such that $\tilde{f}(a)(x) = f(a, x)$, and vice-versa. When $X$ is nice enough, e.g. locally compact and Hausdorff, then $\Omega^X$ does exist and has the compact-open topology.<|endoftext|>
-TITLE: Conditions for a smooth manifold to admit the structure of a Lie group
-QUESTION [21 upvotes]: As we know, a Lie group is a special smooth manifold. I want to find some geometric property which is only satisfied by Lie groups. I only found one property: parallelizability. Can you show me something more?

-REPLY [4 votes]: The cohomology of the manifold has to be a Hopf Algebra. More generally, the manifold has to be an H-space. This implies the unit sphere bundle is fibrewise homotopy-trivial (see comments below). Earlier I thought manifold + H-space implied triviality of the tangent bundle but that's not clear.
-A manifold that's an H-space is called an "H-manifold" in the literature. Sometimes this is ambiguous since if H is a group, "H-manifold" could mean "manifold with an action of the group H" so beware when searching the literature.
-I believe there is a well-known obstruction theory for H-manifolds to be Lie groups and many algebraic topologists know a pile of examples off the top of their heads. Unfortunately, if I ever did know such a pile of examples I've forgotten!<|endoftext|>
-TITLE: Is there a classification of all finite indecomposable p-groups?
-QUESTION [7 upvotes]: Is there a classification of all finite indecomposable p-groups?
-(something like: those are exactly the quaternions, primary cyclic groups, some dihedral groups, etc.).

-REPLY [10 votes]: I'll give some examples of indecomposable p-groups, a survey of small orders, and an asymptotic description. As Derek Holt points out, most p-groups are directly indecomposable, and perhaps he has a cleaner way of describing the third section.

-Some indecomposable finite p-groups:

-cyclic groups
-p-groups of maximal class (with nilpotency class $n$ and order $p^{n+1}$), including:

-dihedral groups (of order at least 8)
-quaternion groups
-semi-dihedral groups

-extra-special groups with a center of order $p$ and elementary abelian quotient
-Any group with a cyclic center


-Every group of order $p$ is directly indecomposable (being cyclic).
-One out of two groups of order $p^2$ is directly indecomposable (the cyclic one).
-Three out of five groups of order $p^3$ are directly indecomposable (the cyclic one, and the two non-abelian ones which are both extra-special and maximal class).
-Eight out of fourteen groups of order 16 and nine out of fifteen groups of order $p^4$ for $p$ odd are directly indecomposable (all but two with cyclic center).
-34 out of 51 of order 32, 49 out of 67 of order 243, 59 out of 77 of order 3125, and then a steady pattern out of $61 + 2p + 2(3,p-1) + (4,p-1)$.
-While the groups are organized into families, even the number of families becomes unmanageable after a point and so a different technique is needed for larger $n$.

-Where do decomposable p-groups come from? They are direct products of smaller p-groups.
However, there are about $p^{2n^3/27}$ groups of order $p^n$, and so taking direct products of groups of order $p^i$ with groups of order $p^{n-i}$ for $1 \leq i \leq n/2$ gives a somewhat complicated sum, but one which is (for large $n$) less than $1/p^n$ times as big as $p^{2n^3/27}$. In other words, a vanishingly small fraction of p-groups are decomposable, and the rest are indecomposable.<|endoftext|>
-TITLE: Intuitive explanation of Left invariant Vector Field
-QUESTION [33 upvotes]: Intuitively, what is meant by a left invariant vector field on a manifold?

-REPLY [29 votes]: To talk about left invariance, you probably want to assume your manifold is a Lie group, so that the vector field is left invariant under (the derivative of) the group action. Intuitively, this means that the vector field is entirely determined by the vector at the unit element of the Lie group. Given any other element of the Lie group, say $g$, the vector at $g$ has to be $(l_g)_*(X_0)$, where $(l_g)_*$ is the derivative of left-multiplication by $g$ and $X_0$ is the vector at the unit element. So the Lie group action allows you to take a single vector and distribute it out over the manifold in a smooth, nonvanishing way.
-The simplest example is Euclidean space $\mathbb R^n$ regarded as an abelian Lie group under addition. In this case, a left invariant vector field is simply a constant vector field in the usual calculus sense. All the vectors point in the same direction. The map $(l_g)_*$ identifies the tangent spaces at each point in the usual "parallel transport" way.<|endoftext|>
-TITLE: Is the group isomorphism problem decidable for abelian groups?
-QUESTION [10 upvotes]: According to Wikipedia the group isomorphism problem is an undecidable problem.
-When we restrict to (countable) abelian groups does it become decidable or does it remain undecidable?
-In case it becomes decidable I'd love to have a reference to an algorithm deciding it.

-REPLY [8 votes]: If you are considering the notion of decidability in the sense of Turing decidability, as is usual in computability theory, then it doesn't make immediate sense to ask whether the isomorphism problem for arbitrary countable groups is decidable or not, since the "input" for an instance of this problem would consist of two infinite objects, the groups in some form of presentation. This takes one outside the context of decidability with Turing machines, which work with finite input and output.
-The problem does make sense for finitely presented groups, since here one can be given two such presentations. In the abelian case, the isomorphism problem is decidable. In the non-abelian case, it is not decidable---a result following from the non-decidability of the word problem, since one cannot even determine the special case of whether a given presentation is a presentation of the trivial group.
-If you consider computably presented groups, by taking as inputs two programs that will enumerate the relations, then you will not be able to decide in finite time any nontrivial property of the sets they enumerate. This is a consequence of Rice's theorem. For example, you will not be able to decide whether the programs will enumerate no relations or all relations, and so the isomorphism problem will not be decidable in this context.
-Nevertheless, by adopting a more set-theoretic rather than computational perspective, one could inquire what is the descriptive-set-theoretic complexity of the set of pairs of isomorphic countable abelian groups.
It clearly has complexity at most $\Sigma^1_1$, that is, lightface analytic, since two groups are isomorphic iff there is an isomorphism, and I believe that it is likely $\Sigma^1_1$-complete, since the countable abelian groups can code quite a bit of information, but I would have to check with my descriptive set-theoretic colleagues.
-There is also the subject of Borel equivalence relation theory, which considers the complexity of equivalence relations under Borel reducibility, and in general, the isomorphism relation for finitely generated groups (not necessarily abelian) is a Borel relation in the relevant space, but quite high in complexity. The isomorphism relation of arbitrary countable groups is much higher still, and as I've said, I think it remains high even just for countable abelian groups.<|endoftext|>
-TITLE: Choosing seats for guests
-QUESTION [5 upvotes]: You have a circular table with $N$ seats. $K$ bellicose guests are going to visit your house; of course you don't want them to sit beside each other. As the host, you want to find out how many ways there are to choose $K$ seats such that none of them is adjacent to another.
-I noticed that there is a solution other than $0$ if $N \ge 2K$, but I am not sure how to approach the rest.
-EDIT: Only $K$ bellicose guests are visiting; no friendly guests are there, and the remaining $N-K$ seats will be vacant.
-A possible mathematical translation of this problem: choosing $K$ candidate points from a circle of $N$ indistinguishable points such that there is at least one vacant point between adjacent candidate points.

-REPLY [4 votes]: Choose a seat $S$, and a bellicose guest $B$. Sit $B$ in $S$, and tell them not to move, whether they like it or not. That done, there are $(K-1)!$ ways of ordering the remaining bellicose guests clockwise around the table, and $F!$ ways of ordering the friendly guests (here I am using leonbloy's notation $F = N - K$). For each such ordering, we have to choose a pattern of the form $f...b...f...b...f$. This pattern:
-1. starts and ends with $f$ (so that $B$ is isolated);
-2. contains $(K-1)$ $b$'s and $F$ $f$'s; and
-3. contains no two adjacent $b$'s.
-But the number of such patterns is the same as the number of patterns that
-1. start with $f$; and
-2. contain $(K-1)$ $b$'s and $(F-K+1)$ $f$'s.
-(To see this, just replace each instance of $bf$ in the original pattern by $b$.) The number of such patterns is the binomial coefficient $\binom{F-1}{K-1}$. So we end up with:
-$(K-1)!F!\binom{F-1}{K-1} = \frac{F!(F-1)!}{(F-K)!}$
-This is the number of seating arrangements with guest $B$ in seat $S$. Multiply by $N$ to get the total number.
-Edit Reading the question more carefully, it asks for the number of (what I call here) patterns, not the number of seatings. For each pattern, there are $K!F!$ seatings, so the answer is
-$N\frac{F!(F-1)!}{(F-K)!}/(K!F!) = \frac{N(F-1)!}{(F-K)!K!}$<|endoftext|>
-TITLE: An approximation of an integral
-QUESTION [6 upvotes]: Is there any good way to approximate the following integral?
-$$\int_0^{0.5}\frac{x^2}{\sqrt{2\pi}\sigma}\cdot \exp\left(-\frac{(x^2-\mu)^2}{2\sigma^2}\right)\mathrm dx$$
-$\mu$ is between $0$ and $0.25$; the problem is in $\sigma$, which is always positive but can be arbitrarily small.
-I was trying to expand it using Taylor series, but the terms look more or less like $\pm a_n\cdot\frac{x^{2n+3}}{\sigma^{2n}}$, and those can be arbitrarily large, so the error is significant.
-
-REPLY [11 votes]: A standard way to get a good approximation for integrals that "look" Gaussian is to evaluate the Taylor series of the logarithms of their integrands through second order, expanding around the point of maximum value thus (continuing with @Ross Millikan's substitution):
-$$\eqalign{
- &\log\left(\sqrt{y}\cdot \exp\left(-\frac{(y-\mu )^2}{2\sigma ^2}\right)\right) \cr
-= &\frac{-\mu ^2-\sigma ^2+\mu \sqrt{\mu ^2+2 \sigma ^2}+2 \sigma ^2 \log\left[\frac{1}{2} \left(\mu +\sqrt{\mu ^2+2 \sigma ^2}\right)\right]}{4 \sigma ^2} \cr
-+ &\left(-\frac{1}{2 \sigma ^2}-\frac{1}{\left(\mu +\sqrt{\mu ^2+2 \sigma ^2}\right)^2}\right) \left(y-\frac{1}{2} \left(\mu +\sqrt{\mu ^2+2 \sigma ^2}\right)\right)^2 \cr
-+ &O\left[y-\frac{1}{2} \left(\mu +\sqrt{\mu ^2+2 \sigma ^2}\right)\right]^3 \cr
-\equiv &\log(C) - (y - \nu)^2/(2\tau^2)\text{,}
-}$$
-say, with the parameters $C$, $\nu$, and $\tau$ depending on $\mu$ and $\sigma$ as you can see. The resulting integral now is a Gaussian, which can be computed (or approximated or looked up) in the usual ways. The approximation is superb for small $\sigma$ or large $\mu$ and still ok otherwise.
-The plot shows the original integrand in red (dashed), this approximation in blue, and the simpler approximation afforded by replacing $\sqrt{y} \to \sqrt{\mu}$ in gold, for $\sigma = \mu = 1/20$.

-(Added)
-Mathematica tells us the integral, when taken to $\infty$, can be expressed as a linear combination of modified Bessel Functions $I_\nu$ of orders $\nu = -1/4, 1/4, 3/4, 5/4$ with common argument $\mu^2/(4 \sigma^2)$. From the Taylor expansion we can see that when both $\mu$ and $\sigma$ are small w.r.t. $1/2$--specifically, $(1/4-\mu)/\sigma \gg 3$--the error made by including the entire right tail will be very small. (With a little algebra and some simple estimates we can even get good explicit bounds on the error as a function of $\mu$ and $\sigma$.) There are many ways to compute or approximate Bessel functions, including polynomial approximations. From looking at graphs of the integrand, it appears that the cases where the Bessel function approximation works extremely well more or less complement the cases where the preceding "saddlepoint approximation" works extremely well.<|endoftext|>
-TITLE: Where can I find a time scale (or anything similar) listing the main discoveries and achievements in mathematics?
-QUESTION [7 upvotes]: I am currently preparing for my next physics exam, and I got curious whether there may be on the Net some sort of time scale of mathematical discoveries, so that I could compare discoveries and achievements in physics with the mathematical instruments available at the time.
-While there is a large abundance of such graphs for computer science and physics, maths-related ones are quite hard to find.
-Thanks.

-REPLY [4 votes]: Wikipedia has such a list at this article.<|endoftext|>
-TITLE: Lagrange's theorem
-QUESTION [7 upvotes]: I don't understand left cosets and Lagrange's theorem... I'm working on some project, so please write anything that could help me understand them better.

-REPLY [20 votes]: Suppose $G$ is a group, and $H$ is any subgroup of $G$. I would like to try to define something similar to "congruence modulo $n$" for integers, but for this arbitrary group $G$ and subgroup $H$.
-In the integers, we say that $a\equiv b\pmod{n}$ if and only if $a-b$ is an element of the subgroup $n\mathbb{Z}$. So we are going to try something similar for $H$ and $G$, taking into account that $G$ need not be abelian.
-I'm not going to assume the group is finite until I say so explicitly.
-Definition. Let $G$ be a group and let $H$ be a subgroup. If $x,y\in G$, we say that $x$ and $y$ are congruent on the right modulo $H$ if and only if $xy^{-1}\in H$. We write this $x\equiv_H y$.
-Proposition. $\equiv_H$ is an equivalence relation on $G$.
-Proof. We need to show that the relation is reflexive, symmetric, and transitive. Let $x\in G$. Then $xx^{-1} = e\in H$ (since $H$ is a subgroup, hence contains $e$), so $x\equiv_H x$.
-Now let $x,y\in G$, and suppose that $x\equiv_H y$; we want to show that $y\equiv_H x$ holds as well. Since $x\equiv_H y$, then $xy^{-1}\in H$. Since $H$ is a subgroup, it is closed under taking inverses, so $(xy^{-1})^{-1} = (y^{-1})^{-1}x^{-1} = yx^{-1}\in H$. By definition, this means that $y\equiv_H x$, as desired.
-Finally, suppose that $x,y,z\in G$ and that $x\equiv_H y$ and $y\equiv_Hz$. We want to show that $x\equiv_H z$. The first congruence implies $xy^{-1}\in H$; the second that $yz^{-1}\in H$. Since $H$ is a subgroup, it is closed under products, so $(xy^{-1})(yz^{-1})=xz^{-1}\in H$, hence $x\equiv_H z$, as desired. QED
-Notice that the three basic properties of a subgroup are precisely what is needed for the three basic properties of an equivalence relation: that $H$ contains the identity is used for reflexivity; that $H$ is closed under inverses is used for symmetry; and that $H$ is closed under products is used for transitivity.
-Now, since $\equiv_H$ is an equivalence relation on $G$, by the Fundamental Theorem of Equivalence Relations, $\equiv_H$ induces a partition on $G$; that is, $G$ is broken up into disjoint parts, one for each equivalence class. Our next goal is to figure out if we have some description of the equivalence classes that is independent of the equivalence relation (this is very useful in general). Indeed, we do:
-Theorem. Let $G$ be a group and let $H$ be a subgroup; let $x\in G$. The equivalence class of $x$ under the relation $\equiv_H$ is equal to the set $Hx = \{hx\mid h\in H\}$. That is,
-$$[x]_H = \{y\in G\mid x\equiv_Hy\} = \{hx\mid h\in H\} = Hx.$$
-Proof. We need to show that each element in $Hx$ is in $[x]_H$, and that each element in $[x]_H$ is in $Hx$.
-Let $z\in Hx$; that means that $z = hx$ for some $h\in H$. Then
-$$xz^{-1} = x(hx)^{-1} = x(x^{-1}h^{-1}) = h^{-1}\in H,$$
-so by definition we have that $x\equiv_H z$. Thus, if $z\in Hx$, then $z\in[x]_H$. So $Hx\subseteq[x]_H$.
-Conversely, let $y\in [x]_H$. Then $x\equiv_H y$, so $xy^{-1}\in H$. Thus, there exists $h\in H$ such that $xy^{-1}=h$. Multiplying on the right by $y$ we get $x=hy$, and multiplying on the left by $h^{-1}$ we get $h^{-1}x = y$. Since $h^{-1}\in H$, we get $y = h^{-1}x \in Hx$. Thus, if $y\in[x]_H$ then $y\in Hx$, so $[x]_H\subseteq Hx$.
-Putting the two inclusions together, we conclude that $[x]_H = Hx$ for each $x\in G$. QED
-Corollary. Let $G$ be a group, $H$ a subgroup. Then:

-$Hx$ is nonempty for each $x\in G$.
-$\displaystyle G = \mathop{\cup}_{x\in G} Hx$.
-For all $x,y\in G$, if $Hx\cap Hy\neq\emptyset$, then $Hx=Hy$.

-Proof. This follows from the fact that since the sets of the form $Hx$ are exactly the equivalence classes of the equivalence relation $\equiv_H$, they form a partition of $G$. QED
-Corollary. Let $G$ be a group, $H$ a subgroup. For all $x,y\in G$, $x\equiv_H y$ if and only if $Hx = Hy$.
-We give the sets of the form $Hx$ a name:
-Definition. Let $G$ be a group, $H$ a subgroup, and $x\in G$.
The set
-$$Hx = \{ hx\mid h\in H\}$$
-is called the right coset of $x$ modulo $H$.
-"Coset" is short for "congruence set", because the right coset is exactly the set of all things that are congruent on the right to $x$ modulo $H$.
-In general, when you have an equivalence relation, the equivalence classes can have different sizes. But not so with this equivalence relation: because the classes are obtained by taking all possible products with elements of $H$, all equivalence classes are bijectable.
-Theorem. Let $G$ be a group and $H$ a subgroup. Let $x,y\in G$ be any two elements. Then there is a bijection between the sets $Hx$ and $Hy$, given by
-\begin{align*}
-\psi\colon Hx &\to Hy\\
-hx&\mapsto hy
-\end{align*}
-Proof. To show that $\psi$ is one-to-one, let $hx,h'x\in Hx$ be such that $\psi(hx)=\psi(h'x)$. Then $hy = h'y$. Multiplying on the right by $y^{-1}$, we get $h=h'$, so $hx=h'x$. Thus, $\psi$ is one-to-one.
-To show $\psi$ is onto, let $hy\in Hy$. Then $hx\in Hx$, and $\psi(hx)=hy$. Thus, $\psi$ is one-to-one and onto, so it is a bijection. QED
-Now, let's assume that $G$ is finite, and $H$ is a subgroup. Then the equivalence relation $\equiv_H$ partitions $G$ into equivalence classes; each equivalence class is of the form $Hx$ for some $x\in G$, and by the previous theorem, they are all bijectable with one another, so they all have the same size, $k$. If there are $m$ distinct equivalence classes, each of size $k$, and they partition $G$, then the size of $G$ is the sum of the sizes of the distinct equivalence classes; that is, $|G|=mk$.
-But what is $k$? $k$ is the size of the equivalence classes; they are all the same size. In particular, the equivalence class $He$ has size $k$. But what is $He$? Well,
-$$He = \{ he\mid h\in H\} = \{h\mid h\in H\} = H.$$
-That is, $He = H$, so that $H$ itself has size $k$.
-Since $|G|=mk = m|H|$, that means that $|H|$ divides $|G|$. That is:
-Corollary [Lagrange's Theorem]. If $G$ is a finite group, and $H$ is a subgroup, then the size of $H$ divides the size of $G$.
-You might ask if you can define a "congruence on the left"; yes, you can. We say $x$ and $y$ are congruent on the left modulo $H$ if $x^{-1}y\in H$, and we write $x{}_H\equiv y$. If we proceed as above, then we get that the equivalence class of $x$ modulo $H$ on the left is $xH$ (instead of $Hx$); the process is completely analogous; these are called left cosets of $H$. It turns out that there is a bijection between left and right cosets. One is tempted to define the bijection by sending the coset $xH$ to the coset $Hx$, but it turns out that this is not well defined (you can have $xH = yH$, but also have $Hx\neq Hy$). However, mapping $xH$ to the coset $Hx^{-1}$ works out (I'll leave it to you to work it out), so that the number of left cosets and the number of right cosets of $H$ in $G$ are the same, and the size of any left coset is the same as the size of any right coset of $H$. In general, the equivalence relation "congruent on the left modulo $H$" is not the same as the equivalence relation "congruent on the right modulo $H$"; the case when they are the same is interesting in its own right, and corresponds to the case when $H$ is a normal subgroup, which has a lot of important consequences all of its own that you will no doubt discover soon.<|endoftext|>
-TITLE: Useful sufficient conditions for a topological space to be the underlying space of a topological group?
-QUESTION [29 upvotes]: Here is a question that I have had in my head for a little while and was recently reminded of.
-
-Let $X$ be a (nonempty!) topological space. What are useful (or even nontrivial) sufficient conditions for $X$ to admit a group law compatible with its topology?

-There are many necessary conditions: for instance $X$ must be completely regular. In particular, if it is a Kolmogorov space it must be a Urysohn space. It must be homogeneous in the sense that the self-homeomorphism group $\operatorname{Aut}(X)$ acts transitively. In particular $\operatorname{Aut}(X)$ must act transitively on the connected components of $X$.
-It is rather clear though that these conditions are nowhere near sufficient even for very nice topological spaces. A recent question on this site calls attention to this: even for the class of smooth manifolds there is a sequence of successively more intricate necessary conditions but (apparently) no good sufficient conditions.
-What about for other classes of spaces? Here are two examples: it suffices for $X$ to be discrete (i.e., every set is open) or indiscrete (i.e., only $\varnothing$ and $X$ are open). These are, of course, completely trivial. Can anyone do any better than this? For instance:

-If $X$ is totally disconnected and compact (for me this implies Hausdorff), must it admit the structure of a topological group?

-Added: as some have rightly pointed out, the above necessary condition of homogeneity is not implied by my assumptions. So let us assume this as well. Note that assuming second-countability as well forces the space to be (finite and discrete or) homeomorphic to the Cantor set, which is well-known to carry a topological group structure, so I don't want to assume that.

-REPLY [11 votes]: In this short paper we have a counterexample for an even weaker statement (it admits no left topological group structure). There are also consistent examples (under CH) that are S-spaces etc. Note that any first countable compact zero-dimensional homogeneous space that is not metrisable is a counterexample, as a first countable topological group is metrisable.<|endoftext|>
-TITLE: Nonzero $f \in C([0, 1])$ for which $\int_0^1 f(x)x^n dx = 0$ for all $n$
-QUESTION [35 upvotes]: As the title says, I'm wondering if there is a continuous function such that $f$ is nonzero on $[0, 1]$, and for which $\int_0^1 f(x)x^n dx = 0$ for all $n \geq 1$. I am trying to solve a problem proving that if (on $C([0, 1])$) $\int_0^1 f(x)x^n dx = 0$ for all $n \geq 0$, then $f$ must be identically zero. I presume then we do require the $n=0$ case to hold too, otherwise it wouldn't be part of the statement. Is there any function which is not identically zero which satisfies $\int_0^1 f(x)x^n dx = 0$ for all $n \geq 1$?
-The statement I am attempting to prove is homework, but this is just idle curiosity (though I will tag it as homework anyway since it is related). Thank you!

-REPLY [9 votes]: There is a proof, using the Weierstrass approximation theorem, that if $f$ is continuous then $f$ is necessarily zero!
-Classical Weierstrass's Theorem: If $f$ is a continuous real valued function on $[a, b]$, then there exists a sequence of polynomials $p_n$ such that
-$$ \lim_{n\rightarrow+\infty}p_n(x)=f(x)$$ uniformly on $[a, b]$.
-By the assumptions of the problem and the linearity of the integral
-it is easy to see that
-$$
-\int_{0}^1f(x)p(x)dx=0
-$$
-for all polynomials $p(x)$ in $C([0,1])$.
Now just apply the theorem!<|endoftext|>
-TITLE: Trouble with a problem involving Rouché's Theorem
-QUESTION [7 upvotes]: The problem is from Marden, the first section:
-The polynomial $g(z) = z^n + b_1 z^{n-1} + ... + b_n$ has at least $m+1$ zeros in an arbitrary neighborhood of a point $z = c$ if $|g^{(k)}(c)| \leq \epsilon$ for $k = 0,1,...,m$ and for $\epsilon$ sufficiently small and positive.
-There is a hint provided: use Rouché's Theorem.
-I can prove the result in the special case of $c = 0$, because then I can bound each of the relevant $b_j$ by $\epsilon$. Unfortunately I don't see a way to extend this to the general case.
-I would appreciate some help on working toward a solution. I've been stumped since lunch on this one.

-REPLY [3 votes]: Since $g(z)$ is a polynomial of degree $n$, so is its Taylor series about the point $z=c$. Note that $g^{(n)}(z) = n!$, so that the coefficient of $(z-c)^n$ in this series is $1$:
-$$
-g(z) = g(c) + g'(c)(z-c) + \cdots + \frac{g^{(n-1)}(c)}{(n-1)!} (z-c)^{n-1} + (z-c)^n.
-$$
-Suppose that, for some $m \in \{0,1,2,\cdots,n-1\}$ and $\epsilon > 0$, we know that
-$$
-\left|g^{(k)}(c)\right| \leq \epsilon
-$$
-for all $0 \leq k \leq m$. It then follows from the triangle inequality that the head of the polynomial satisfies
-$$
-\begin{align}
-\left|g(c) + g'(c)(z-c) + \cdots + \frac{g^{(m)}(c)}{m!} (z-c)^{m}\right| &\leq \epsilon \sum_{k=0}^{m} \frac{|z-c|^k}{k!} \\
-&< \epsilon e^{|z-c|}.
-\tag{1}
-\end{align}
-$$
-The rest of the polynomial
-$$
-h(z) = \frac{g^{(m+1)}(c)}{(m+1)!} (z-c)^{m+1} + \cdots + \frac{g^{(n-1)}(c)}{(n-1)!} (z-c)^{n-1} + (z-c)^n
-$$
-has a zero of multiplicity at least $m+1$ at $z=c$, and, since the zeros of analytic functions are isolated, this is the only zero of $h(z)$ in the closed disk $|z-c| \leq \delta$ for all $\delta > 0$ small enough. The circle $|z-c| = \delta$ is compact and $h(z) \neq 0$ there, so we can also find a $\lambda > 0$ such that
-$$
-|h(z)| \geq \lambda > 0 \qquad \text{on}\,\,\, |z-c| = \delta.
-\tag{2}
-$$
-Now, if we choose
-$$
-\epsilon \leq \lambda e^{-\delta}
-$$
-then we can deduce from $(1)$ and $(2)$ that
-$$
-\begin{align}
-\left|g(c) + g'(c)(z-c) + \cdots + \frac{g^{(m)}(c)}{m!} (z-c)^{m}\right| &< \epsilon e^{\delta} \\
-&\leq \lambda \\
-&\leq |h(z)|
-\end{align}
-$$
-on the circle $|z-c| = \delta$.
-We may now apply Rouché's Theorem to conclude that for any fixed $\delta > 0$ we can find an $\epsilon > 0$ small enough so that the polynomial $g(z)$ has at least $m+1$ zeros in the disk $|z-c| < \delta$.<|endoftext|>
-TITLE: Fibonacci[n]-1 is always composite for n>6. Why?
-QUESTION [16 upvotes]: In[11]:= Select[Table[Fibonacci[n], {n, 1, 10000}], PrimeQ[# - 1] &]
-Out[11]= {3, 8}
-Edit: Fibonacci[n]-1 is always composite for n>6. Why?
- $$\sum\limits_{i = 0}^n {{F_i}} = {F_{n + 2}} - 1$$
-In[16]:= Select[Table[Fibonacci[n], {n, 1, 10000}], PrimeQ[# + 1] &]
-Out[16]= {1, 1, 2}
-Fibonacci[n]+1 is always composite for n>3. Why?

-REPLY [11 votes]: Wikipedia cites the following source for the assertion that no sufficiently large Fibonacci number is one greater or one less than a prime:
-Ross Honsberger, Mathematical Gems III (AMS Dolciani Mathematical Expositions No. 9), 1985, ISBN 0-88385-318-3, p. 133.
-Not sure if that reference has a proof.<|endoftext|>
-TITLE: Does the equation $x^4+y^4+1 = z^2$ have a non-trivial solution?
-QUESTION [35 upvotes]: The background of this question is this: Fermat proved that the equation,
-$$x^4+y^4 = z^2$$
-has no solution in the positive integers.
If we consider the near-miss,
-$$x^4+y^4-1 = z^2$$
-then this has plenty (in fact, an infinity, as it can be solved by a Pell equation). But J. Cullen, by exhaustive search, found that the other near-miss,
-$$x^4+y^4+1 = z^2$$
-has none with $0 < x,y < 10^6$.
-Does the third equation really have none at all, or are the solutions just enormous?

-REPLY [4 votes]: I tried the heuristic from the Hardy-Littlewood circle method for this equation. The heuristic suggests that the number of solutions within the range $\max\{\vert x\vert,\vert y\vert,\vert z\vert\}$<|endoftext|>
-TITLE: Stirling numbers of the second kind on Multiset
-QUESTION [11 upvotes]: Stirling numbers of the second kind $S(n, k)$ count the number of ways to partition a set of $n$ elements into $k$ nonempty subsets. What if there were duplicate elements in the set? That is, the set is a multiset?

-REPLY [2 votes]: By way of enrichment here is a solution using Power Group
-Enumeration in polynomial form. This is not the most efficient
-algorithm, and if we are just after the count we can in fact do better
-by enumerating all such set partitions recursively, which is also
-included here. The PGE algorithm has a certain chance of actually
-leading to a closed form expression rather than the output of some
-procedure, however.
-The algorithm is the same as presented at the following MSE link
-I and at this link
-MSE link II (edge
-colorings of the cube). We require the cycle indices of the
-permutation groups acting on the slots and on the values being
-distributed into them. For a multiset
-$$A_1^{q_1} \times A_2^{q_2} \times \cdots \times A_p^{q_p}$$
-with $q_k$ the multiplicities, the slots consist of $Q= \sum_r q_r$
-distinct slots, into which we distribute the identifiers of the
-multisets where that slot is to reside. We may imagine $q_r$ slots of
-type $r$ forming a block, with the blocks being adjacent and
-containing only slots of the same type. Now with a block forming a
-multiset the group acting on these slots is the symmetric group
-$S_{q_r}$ with cycle index $Z(S_{q_r}).$ Therefore the cycle index of
-the slot permutation group (all blocks) is $$\prod_r Z(S_{q_r}).$$ On
-the other hand the multiset identifiers are permuted by the symmetric
-group $S_Q$ with cycle index $Z(S_Q).$ These data suffice to run the
-algorithm. Observe that when we cover (consult links for the details
-of this procedure) the cycles of a slot permutation with a cycle from
-a multiset identifier permutation we must record the identifiers used
-in the covering in a generating function. We then obtain the desired
-result by replacing the indeterminates in the terms of this generating
-function (once computed) with $v^m$ where $m$ is the number of
-indeterminates in the term. This yields the generating function by the
-number of multisets for all $k$ at once.
-We present some examples. When all $q_r$ are one we are working with
-$p$ distinct items and should obtain Stirling numbers. (Stirling
-numbers exemplify the complexity of this problem as the PGE
-routine is faster here and takes less memory than the recursive
-enumeration routine. This is because the slot permutation group in
-this case contains only one element. The computation was done at this
-MSE link III.)
-Indeed we find - -> add(v^k*stirling2(3,k), k=1..3); - 3 2 - v + 3 v + v - -> MSETS([1,1,1]); - 3 2 - v + 3 v + v -> add(v^k*stirling2(7,k), k=1..7); - 7 6 5 4 3 2 - v + 21 v + 140 v + 350 v + 301 v + 63 v + v - -> MSETS([seq(1, w=1..7)]); - 7 6 5 4 3 2 - v + 21 v + 140 v + 350 v + 301 v + 63 v + v - -> add(v^k*stirling2(10,k), k=1..10); - 10 9 8 7 6 5 -v + 45 v + 750 v + 5880 v + 22827 v + 42525 v - - 4 3 2 - + 34105 v + 9330 v + 511 v + v - -> MSETS([seq(1, w=1..10)]); - 10 9 8 7 6 5 -v + 45 v + 750 v + 5880 v + 22827 v + 42525 v - - 4 3 2 - + 34105 v + 9330 v + 511 v + v - -On the other hand when $p=1$ we have $q_1$ like items and should get -the values of the partition function $p_k(n).$ Here we find once again - -> add(v^k*part(3,k), k=1..3); - 3 2 - v + v + v - -> MSETS([3]); - 3 2 - v + v + v - -> add(v^k*part(7,k), k=1..7); - 7 6 5 4 3 2 - v + v + 2 v + 3 v + 4 v + 3 v + v - -> MSETS([7]); - 7 6 5 4 3 2 - v + v + 2 v + 3 v + 4 v + 3 v + v - -> add(v^k*part(10,k), k=1..10); - 10 9 8 7 6 5 4 3 2 -v + v + 2 v + 3 v + 5 v + 7 v + 9 v + 8 v + 5 v + v - -> MSETS([10]); - 10 9 8 7 6 5 4 3 2 -v + v + 2 v + 3 v + 5 v + 7 v + 9 v + 8 v + 5 v + v - -Now of course it gets difficult and interesting when we have less -structure in the size and number of the source multiset. Here are a -few last examples: - -> ENUM([2,3,4]); - 9 8 7 6 5 4 3 2 -v + 6 v + 26 v + 75 v + 151 v + 191 v + 126 v + 29 v - - + v - -> MSETS([2,3,4]); - 9 8 7 6 5 4 3 2 -v + 6 v + 26 v + 75 v + 151 v + 191 v + 126 v + 29 v - - + v - -> ENUM([3,4,5]); - 12 11 10 9 8 7 6 -v + 6 v + 30 v + 111 v + 328 v + 757 v + 1331 v - - 5 4 3 2 - + 1652 v + 1264 v + 474 v + 59 v + v - -> MSETS([3,4,5]); - 12 11 10 9 8 7 6 -v + 6 v + 30 v + 111 v + 328 v + 757 v + 1331 v - - 5 4 3 2 - + 1652 v + 1264 v + 474 v + 59 v + v - -> ENUM([2,2,3,3]); - 10 9 8 7 6 5 4 -v + 10 v + 63 v + 257 v + 704 v + 1223 v + 1204 v - - 3 2 - + 536 v + 71 v + v - -> MSETS([2,2,3,3]); - 10 9 8 7 6 5 4 -v + 10 v + 63 v + 257 v + 704 v + 1223 v + 1204 v - - 3 2 - + 536 v + 71 v + v - -The source code is quite compact, with the answer being about half and -the rest for testing and verification. 
-
-with(combinat);

-pet_cycleind_symm :=
-proc(n)
-option remember;

-    if n=0 then return 1; fi;

-    expand(1/n*add(a[l]*pet_cycleind_symm(n-l), l=1..n));
-end;

-MSETS :=
-proc(mset)
-option remember;
-local idx_slots, idx_sets, res, term_a, term_b,
-    v_a, v_b, inst_a, inst_b, len_a, len_b, p, q;

-    idx_slots :=
-    expand(mul(pet_cycleind_symm(item), item in mset));

-    if not(type(idx_slots, `+`)) then
-        idx_slots := [idx_slots];
-    fi;

-    idx_sets :=
-    pet_cycleind_symm(add(item, item in mset));

-    if not(type(idx_sets, `+`)) then
-        idx_sets := [idx_sets];
-    fi;

-    res := 0;

-    for term_a in idx_slots do
-        for term_b in idx_sets do
-            p := 1;

-            for v_a in indets(term_a) do
-                len_a := op(1, v_a);
-                inst_a := degree(term_a, v_a);

-                q := 0;

-                for v_b in indets(term_b) do
-                    len_b := op(1, v_b);
-                    inst_b := degree(term_b, v_b);

-                    if len_a mod len_b = 0 then
-                        q := q +
-                        len_b*
-                        add(mul(B[len_b,
-                                  len_b*cyc_num+cyc_pos],
-                                cyc_pos=1..len_b),
-                            cyc_num=0..inst_b-1);
-                    fi;
-                od;

-                p := p*q^inst_a;
-            od;

-            res := res +
-            lcoeff(term_a)*lcoeff(term_b)*p;
-        od;
-    od;

-    map(term -> lcoeff(term)*v^nops(indets(term)),
-        expand(res));
-end;

-part :=
-proc(n, k)
-option remember;
-local res;

-    if n=0 or k=0 then
-        return `if`(n=0 and k=0, 1, 0);
-    fi;

-    if k=1 then return 1 fi;

-    res := 0;
-    if n >= 1 then
-        res := res + part(n-1, k-1);
-    fi;
-    if n >= k then
-        res := res + part(n-k, k);
-    fi;

-    res;
-end;

-ENUM :=
-proc(mset)
-option remember;
-local total, flat, recurse, msets, res;

-    total := add(item, item in mset);
-    flat :=
-    [seq(seq(p, q=1..mset[p]), p=1..nops(mset))];

-    msets := table(); res := 0;

-    recurse :=
-    proc(idx, len, part)
-    local pos, cur, sorted;

-        if idx > total then
-            sorted :=
-            sort([seq(part[pos], pos=1..len)]);

-            if not(type(msets[sorted], `integer`)) then
-                res := res + v^len;
-                msets[sorted] := 1;
-            fi;

-            return;
-        fi;

-        cur := A[flat[idx]];
-        for pos to len do
-            part[pos] := part[pos]*cur;
-            recurse(idx+1, len, part);
-            part[pos] := part[pos]/cur;
-        od;

-        part[len+1] := part[len+1]*cur;
-        recurse(idx+1, len+1, part);
-        part[len+1] := part[len+1]/cur;
-    end;

-    recurse(1, 0, Array([seq(1, q=1..total)]));
-    res;
-end;<|endoftext|>
-TITLE: Determine the set of values of $\exp(1/z)$ for $0<|z|<r$
-QUESTION: For $r>0$, let $A=\{w:w=\exp(1/z), 0<|z|<r\}$; determine $A$.

-REPLY: Fix $r > 0$. First, we'll prove that $A \subset \mathbb{C} - \{0\}$, then we'll show that $\mathbb{C} - \{0\} \subset A$.

-Let's show that $A \subset \mathbb{C} - \{0\}$. This amounts to showing that $0$ does not belong to $A$. Given any $w, z$ such that $w = \exp(1/z)$ and $0 < |z| < r$, we can write: $1/z = a + ib$ (with $a, b \in \mathbb{R}$). Then $|w| = e^a|e^{ib}| > 0$. Thus, $0$ is not in $A$.
-Now, let's show that any non-zero complex number $x$ can be written as $x = \exp(1/z)$ with $0 < |z| < r$. This will demonstrate that $\mathbb{C} - \{0\} \subset A$. For that, we pick $x \neq 0$. We write $x$ in exponential form ($\rho, \theta \in \mathbb{R}$):
-\begin{eqnarray}
- x & = & \rho e^{i\theta}\\
- & = & \exp(\ln(\rho) + i\theta)
-\end{eqnarray}
-Let's pick $k \in \mathbb{N}$ such that $|\ln(\rho) + i(\theta + 2k\pi)| > 1/r$. Then $z = \frac{1}{\ln(\rho) + i(\theta + 2k\pi)}$ satisfies $0 < |z| < r$ and we have $x = \exp(1/z)$. Thus $x$ belongs to $A$.

-We have shown that $A$ both contains and is a subset of $\mathbb{C} - \{0\}$.
Thus $A = \mathbb{C} - \{0\}$<|endoftext|>
-TITLE: Integral curves of the gradient
-QUESTION [14 upvotes]: Let $f : M \rightarrow \mathbb{R}$ be a differentiable function defined on a Riemannian manifold. Assume that $| \mathrm{grad}f | = 1$ over all $M$. Show that the integral curves of $\mathrm{grad}f$ are geodesics.

-REPLY [4 votes]: Another strategy: define coordinates $(x^1, \ldots, x^{n-1}, y)$ at points $p \in M$ so that $(x^1, \ldots, x^{n-1})$ are local slice coordinates for the level sets $\{q \in M : f(q) = f(p)\}$, and $y \in \mathbb R$ satisfies $f(\gamma_p(y)) = f(q)$, where $\gamma_p$ is the integral curve of $\mathrm{grad} f$ starting at $p$. In these coordinates, $\mathrm{grad} f = \partial_y$, and $g(\partial_{i}, \partial_y) \equiv 0$, where $\partial_i := \partial_{x^i}$. Furthermore, $g(\partial_y, \partial_y) = |\mathrm{grad}f|^2 \equiv 1$. We'll prove $D_t \dot\gamma(t) \equiv 0$ in these coordinates for every integral curve $\gamma$ of $\mathrm{grad}f$.
-By symmetry of the Levi-Civita connection $\nabla$ and commutativity of coordinate vector fields, $\nabla_{\partial_i} \partial_y = \nabla_{\partial_y} \partial_i$. So, since $\nabla$ is compatible with the Riemannian metric, for $i = 1, \ldots, n-1$,
-\begin{align*}
-\left\langle D_t \dot\gamma(t), \partial_i\right\rangle &= \frac d{dt} \left\langle \dot\gamma(t), \partial_i\right\rangle - \left\langle \dot\gamma(t), D_t \partial_i \right\rangle = \frac{d}{dt} \langle \partial_y, \partial_i \rangle - \left\langle \partial_y, \nabla_{\partial_y} \partial_i \right\rangle = -\left\langle \partial_y, \nabla_{\partial_i} \partial_y \right\rangle \\
-&= \left\langle \nabla_{\partial_i} \partial_y, \partial_y \right\rangle - \partial_i \left\langle \partial_y, \partial_y \right\rangle = \left\langle \nabla_{\partial_i} \partial_y, \partial_y \right\rangle = -\left\langle D_t \dot\gamma(t), \partial_i\right\rangle.
-\end{align*}
-So $\left\langle D_t \dot\gamma(t), \partial_i\right\rangle = 0$ for $i = 1, \ldots, n-1$. Furthermore,
-$$
-\left\langle D_t \dot\gamma(t), \partial_y\right\rangle = \left\langle D_t \partial_y, \partial_y\right\rangle=\frac d{dt} \left\langle \partial_y, \partial_y \right\rangle - \left\langle \partial_y, D_t \partial_y \right\rangle = - \left\langle \partial_y, D_t \partial_y \right\rangle.
-$$
-So, $\left\langle D_t \dot\gamma(t), \partial_y\right\rangle = 0$, whence $\left\langle D_t \dot\gamma(t), X\right\rangle \equiv 0$ for every vector field $X$ over $M$. So $D_t \dot\gamma(t) \equiv 0$, so $\gamma(t)$ is geodesic.<|endoftext|>
-TITLE: Count of elements in $\Bbb{Z}_7[x]/(3x^2+2x)$
-QUESTION [5 upvotes]: Hi, I have some problems figuring out how to count the elements of $\Bbb{Z}_7[x]/(3x^2+2x)$. I think only polynomials that are coprime to $3x^2+2x$ ($\gcd=1$) belong there. I think it is so because, as far as I know, for example in every $\Bbb{Z}_m$ with $m$ prime, the count of such elements equals $\phi(m)$. But I really don't know how to get these polynomials in an efficient way.

-REPLY [4 votes]: Sima: Working with polynomials with coefficients in ${\mathbb Z}_7$ (but the same is true for any field), any polynomial $p$, when divided by a polynomial $q$ of degree larger than $0$, produces a remainder $r$ that is a polynomial of degree strictly less than that of $q$. If $q(x)=3x^2+2x$, the remainder $r$ is then a polynomial of degree 1 or less, i.e., it has the form $ax+b$ where $a,b$ are elements of ${\mathbb Z}_7$.
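-(For instance, long division in ${\mathbb Z}_7[x]$ gives $x^3 = (5x+6)(3x^2+2x) + 2x$, using $3^{-1}=5$ in ${\mathbb Z}_7$; so the remainder of $x^3$ is $2x$, and $x^3$ is identified with $2x$ in the quotient.)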
-
-Two elements of ${\mathbb Z}_7[x]$ are identified in the quotient by $3x^2+2x$ iff they have the same remainder, so the elements of ${\mathbb Z}_7[x]/(3x^2+2x)$ are in correspondence with the remainders that, by the paragraph above, are precisely the linear polynomials $ax+b$. There are 7 possibilities for $a$ and 7 for $b$, for a total of $7^2=49$ elements.
-
-REPLY [4 votes]: HINT $\ $ Use the division algorithm in $\rm\ \mathbb Z_7[x]\ $ to show that every polynomial $\rm\ f(x)\ \in\ \mathbb Z_7[x]\ $ is congruent $\rm\ mod\ \ 3\ x^2 + 2\ x\ $ to a unique polynomial of degree $\:\le 1\:$, viz. $\rm\ f(x)\ \ mod\ \ 3\ x^2 + 2\ x\:,$ analogous to the fact that every element of $\rm\ \mathbb{Z}/m\ $ has a unique representative in $\rm\:\{0,1,\cdots,\:m-1\}\:.$
-The analogous result holds true over any ring if the leading coefficient of the polynomial is a unit, i.e. $\rm\ |R[x]/(f(x))|\ =\ |R|^n\ $ for any $\rm\ f(x) \in R[x]\ $ having degree $\rm\:n\:$ and unit leading coefficient. The hypothesis on the leading coefficient guarantees that one can divide by $\rm\:f(x)\:$ with unique remainder. Indeed, the standard high-school long division algorithm clearly works, and if there were two unequal remainders of degree $\rm < n$ then their difference would be divisible by $\rm\:f\:,$ which is impossible since multiples of $\rm\:f\:$ have degree $\ge n$ (else the leading coefficient of $\rm\:f\:$ is a zero-divisor, not a unit).<|endoftext|>
-TITLE: Formal System and Formal Logical System
-QUESTION [16 upvotes]: I was reading the Wikipedia article for Mathematical_logic. When reaching Formal_logical_systems, I was curious about its definition and clicked into its own article Logical_system, which redirected me to the article for Formal_system, where I found the definition for formal system:
-
-In formal logic, a formal system (also called a logical calculus) consists of a formal language and a set of inference rules, used to derive (to conclude) an expression from one or more other premises that are antecedently supposed (axioms) or derived (theorems). The axioms and rules may be called a deductive apparatus. A formal system may be formulated and studied for its intrinsic properties, or it may be intended as a description (i.e. a model) of external phenomena.
-
-and the definition for logical system:
-
-A logical system or, for short, logic, is a formal system together with a form of semantics, usually in the form of model-theoretic interpretation, which assigns truth values to sentences of the formal language, that is, formulae that contain no free variables. A logic is sound if all sentences that can be derived are true in the interpretation, and complete if, conversely, all true sentences can be derived.
-
-From self-study I had thought that logical system and formal system were the same concept, but here it says a logical system is a formal system plus semantics/interpretation. Isn't it that in a logical system we don't care about the semantics/meaning of propositions, and that this is the reason the subject is called formal logic? But why does it say here that a logical system has semantics/interpretation? How should one understand "semantics", and are logical system and formal system the same?
-Is a logical system a special example of a formal system, in the same sense that Lambda calculus is a special example of a formal system, as in examples of formal systems?
Or is a logical system a formal system with additional structure called "semantics", in the same sense that a topological vector space is a vector space with additional topological structure?
-Thanks and regards!
-
-REPLY [15 votes]: We definitely care that formal logic has a formal semantics. Here's a list of reasons why. More simply, we need to know very precisely what a formalism means. Without this precise definition, there's a lot of wasted time arguing about what things mean.
-To answer your other questions, consider the simplest, most accessible case: propositional logic. We have atomic propositions, represented by simple symbols like P and Q, and then formulas built up from the usual connectives like and ($\wedge$), or ($\vee$), and not ($\neg$). Together with the proof theory which governs how we do deductions, we now have a formal system by your definition.
-The semantics is built by introducing a separate meta-level, totally outside everything in the previous paragraph. In that level we define two truth values, true and false. In this level we also define a function which maps each propositional symbol to a truth value. Let's call this function I. Here's one possible version of I: {(P,true),(Q,false)}. I is the interpretation function. Finally, we define the usual semantics of the connectives: for example, and ($\wedge$) means both its arguments must be true.
-Voila! We have now sketchily defined the semantics of propositional logic. If it sounds like truth tables, it is: each row of a truth table corresponds to one possible interpretation function. Here's a nice writeup.
-The nice thing is that all modern formal semantic theories look like the propositional case, just higher-powered: they all have a separate meta-level to define the semantics and an interpretation function to map from the object layer of formulas to the meta-level containing the semantics (roughly speaking).
-So now we can quickly answer your questions: a logical system has "semantics/interpretation" because we need to define a separate meta-level and an interpretation function to let us interpret the meanings of well-formed formulas.
-A formal system and a logical system are not the same because we can define a formal system that has no formal semantics, say, a context-free grammar or a well-defined card game. Finally, there is additional structure to define the semantics of a logical system, but on a totally separate meta-level.<|endoftext|>
-TITLE: In-place inversion of large matrices
-QUESTION [11 upvotes]: In Solving very large matrices in "pieces" a way is shown to do matrix inversion in pieces. Is it possible to apply the method in-place?
-I am referring to the answer in the referenced question that starts with:
-$\textbf{A}=\begin{pmatrix}\textbf{E}&\textbf{F}\\\\ \textbf{G}&\textbf{H}\end{pmatrix}$
-And computes the inverse via:
-$\textbf{A}^{-1}=\begin{pmatrix}\textbf{E}^{-1}+\textbf{E}^{-1}\textbf{F}\textbf{S}^{-1}\textbf{G}\textbf{E}^{-1}&-\textbf{E}^{-1}\textbf{F}\textbf{S}^{-1}\\\\ -\textbf{S}^{-1}\textbf{G}\textbf{E}^{-1}&\textbf{S}^{-1}\end{pmatrix}$, where $\textbf{S} = \textbf{H}-\textbf{G}\textbf{E}^{-1}\textbf{F}$
-Edit 05.10.2017:
-Terence Tao makes use of the same identity in a recent blog post: https://terrytao.wordpress.com/2017/09/16/inverting-the-schur-complement-and-large-dimensional-gelfand-tsetlin-patterns/ , and puts it in relation to a "Schur complement".
-
-REPLY [9 votes]: We suppose an (m+n) by (m+n) matrix $\textbf{A}$ with block partition:
-$\textbf{A}=\begin{pmatrix}\textbf{E}&\textbf{F}\\\\ \textbf{G}&\textbf{H}\end{pmatrix}$
-A sequence of steps can be outlined, invoking recursively an in-place matrix inversion and other steps involving in-place matrix multiplication.
-Some but not all of these matrix multiplication operations would seem to require additional memory to be "temporarily" allocated in between recursive calls to the in-place matrix inversion. At bottom we will argue that the needed additional memory is linear, e.g. of size m+n.
-1. The first step is to recursively invert the matrix $\textbf{E}$ in place:
-$\begin{pmatrix}\textbf{E}^{-1}&\textbf{F}\\\\ \textbf{G}&\textbf{H}\end{pmatrix}$
-Of course to make that work requires the leading principal minor $\textbf{E}$ to be nonsingular.
-2. The next step is to multiply $\textbf{E}^{-1}$ times block submatrix $\textbf{F}$ and negate the result:
-$\begin{pmatrix}\textbf{E}^{-1}&-\textbf{E}^{-1}\textbf{F}\\\\ \textbf{G}&\textbf{H}\end{pmatrix}$
-Note that the matrix multiplication and overwriting in this step can be performed one column of $\textbf{F}$ at a time, something we'll revisit in assessing memory requirements.
-3. Next multiply $\textbf{G}$ times the previous result $-\textbf{E}^{-1}\textbf{F}$ and add it to the existing block submatrix $\textbf{H}$:
-$\begin{pmatrix}\textbf{E}^{-1}&-\textbf{E}^{-1}\textbf{F}\\\\ \textbf{G}&\textbf{H}-\textbf{G}\textbf{E}^{-1}\textbf{F}\end{pmatrix}$
-This step can also be performed one column at a time, and because the results are accumulated in the existing block submatrix $\textbf{H}$, no additional memory is needed.
-Note that what has now overwritten $\textbf{H}$ is $\textbf{S}=\textbf{H}-\textbf{G}\textbf{E}^{-1}\textbf{F}$.
-4. & 5. The next two steps can be done in either order. We want to multiply block submatrix $\textbf{G}$ on the right by $\textbf{E}^{-1}$, and we also want to recursively invert in-place the previous result $\textbf{S}$. After doing both we have:
-$\begin{pmatrix}\textbf{E}^{-1}&-\textbf{E}^{-1}\textbf{F}\\\\ \textbf{G}\textbf{E}^{-1}&\textbf{S}^{-1}\end{pmatrix}$
-Note that the matrix multiplication and overwriting of $\textbf{G}$ can be done one row at a time.
-6. Next we should multiply these last two results, overwriting $\textbf{G}\textbf{E}^{-1}$ with $\textbf{S}^{-1}\textbf{G}\textbf{E}^{-1}$ and negating that block:
-$\begin{pmatrix}\textbf{E}^{-1}&-\textbf{E}^{-1}\textbf{F}\\\\ -\textbf{S}^{-1}\textbf{G}\textbf{E}^{-1}&\textbf{S}^{-1}\end{pmatrix}$
-7. We are on the home stretch now! Multiply the two off-diagonal blocks and add that to the diagonal block containing $\textbf{E}^{-1}$:
-$\begin{pmatrix}\textbf{E}^{-1}+\textbf{E}^{-1}\textbf{F}\textbf{S}^{-1}\textbf{G}\textbf{E}^{-1}&-\textbf{E}^{-1}\textbf{F}\\\\ -\textbf{S}^{-1}\textbf{G}\textbf{E}^{-1}&\textbf{S}^{-1}\end{pmatrix}$
-8. Finally we multiply the block containing $-\textbf{E}^{-1}\textbf{F}$ on the right by $\textbf{S}^{-1}$, something that can be done one row at a time:
-$\textbf{A}^{-1}=\begin{pmatrix}\textbf{E}^{-1}+\textbf{E}^{-1}\textbf{F}\textbf{S}^{-1}\textbf{G}\textbf{E}^{-1}&-\textbf{E}^{-1}\textbf{F}\textbf{S}^{-1}\\\\ -\textbf{S}^{-1}\textbf{G}\textbf{E}^{-1}&\textbf{S}^{-1}\end{pmatrix}$
-Requirements for additional memory:
-A temporary need for additional memory arises when we want to do matrix multiplication and store the result back into the location of one of the two factors.
-Such a need arises in step 2.
when forming $\textbf{E}^{-1}\textbf{F}$, in step 4. (or 5.) when forming $\textbf{G}\textbf{E}^{-1}$, in step 6. when forming $\textbf{S}^{-1}\textbf{G}\textbf{E}^{-1}$, and in step 8. when forming $-\textbf{E}^{-1}\textbf{F}\textbf{S}^{-1}$.
-Of course other "temporary" allocations are hidden in the recursive calls to in-place inversion in steps 1 and 4 & 5, when the matrices $\textbf{E}$ and $\textbf{S}$ are inverted.
-In each case the allocated memory can be freed (or reused) after one column or one row of a required matrix multiplication is performed, because the overwriting can be done one column or one row at a time following its computation. The overhead for such allocations is limited to size m+n, or even max(m,n), i.e. a linear overhead in the size of $\textbf{A}$.<|endoftext|>
-TITLE: History of "Show that $44\dots 88 \dots 9$ is a perfect square"
-QUESTION [35 upvotes]: The problem
-
-Show that the sequence, $49, 4489, 444889, \dots$, gotten by inserting the digits $48$ in the middle of the previous number (all in base $10$), consists only of perfect squares.
-
-has become a classic. For some reason, I got curious as to who actually discovered this.
-After doing some research, it seems this problem was used in the Stanford University Competitive Exam in Mathematics for high school seniors, in the year 1964. One pdf which has this is here: http://www.computing-wisdom.com/jstor/stanford-exam.pdf. It appears as problem 64.2.
-The pdf also mentions that there is a booklet which gives references to previous appearances of the problems from the above exam, but that seemed to be a dead-end regarding this particular problem.
-
-Does anyone know who originally discovered this little gem? More interestingly, is it known how this was discovered?
-
-Update 1
-On some more research I found this book from 1903: Algebra Part II by E.M. Langley and S.R.N. Bradly, which has this as an exercise on page 180. The question seems to have been phrased in such a way as to claim ownership, and it also tells how it was discovered. I guess we just need confirmation now.
-Update 2
-More digging reveals this German book by Dr. Hermann Schubert, Mathematische Mussestunden, which has this on page 24. The book was published in 1900, but the preface seems to be dated earlier. If someone can read German, perhaps they can read it and see what the book claims about the origins of this problem. The book seems to have a list of references at the beginning.
-
-REPLY [16 votes]: UPDATE:
-This fact appeared as a problem in the October 1, 1889 issue of the Journal de Mathématiques Elémentaires, page 160. The problem is attributed to F. Briganti of the Ecole industrielle de Fermo.
-
-I don't yet have the rep to leave a comment so I'll leave this as an answer...
-This fact appears in a note of M. C.-A. Laisant in 1892. See here, page 77. He remarks that "cette remarque, paraît-il, a été faite depuis longtemps" (this remark, it seems, was made a long time ago). No references are given. Later in the same volume, in the minutes of the February 27, 1892 meeting of the Société Philomathique de Paris, this fact is referred to as a "fait connu" - a known fact - so perhaps this was already known at the time.
-So it seems that you will have to dig much farther back to find the original source.
-
-(The page numbers in this Google Books version don't seem to work, but just search for 4489 and you should find the two references to this fact.)<|endoftext|>
-TITLE: Proving there is no natural number which is both even and odd
-QUESTION [8 upvotes]: I've run into a small problem while working through Enderton's Elements of Set Theory. I'm doing the following problem:
-
-Call a natural number even if it has the form $2\cdot m$ for some $m$. Call it odd if it has the form $(2\cdot p)+1$ for some $p$. Show that each natural number is either even or odd, but never both.
-
-I've shown most of this, and along the way I've derived many of the results found in Arturo Magidin's great post on addition, so any of the theorems there may be used. It is the 'never both' part with which I'm having trouble. This is some of what I have:
-Let
-$$
-B=\{n\in\omega\ |\neg(\exists m(n=2\cdot m)\wedge\exists p(n=2\cdot p+1))\},
-$$
-the set of all natural numbers that are not both even and odd. Since $2\cdot 0=0$, $0$ is even. Also $0$ is not odd, for if $0=2\cdot p+1$, then $0=(2\cdot p)^+=\sigma(2\cdot p)$, but then $0\in\text{ran}\ \sigma$, contrary to the first Peano postulate. Hence $0\in B$. Suppose $k\in B$. Suppose $k$ is odd but not even, so $k=2\cdot p+1$ for some $p$. Earlier work of mine shows that $k^+$ is even. However, $k^+$ is not odd, for if $k^+=2\cdot m+1$ for some $m$, then since the successor function $\sigma$ is injective, we have
-$$
-k^+=2\cdot m+1=(2\cdot m)^+\implies k=2\cdot m
-$$
-contrary to the fact that $k$ is not even.
-Now suppose $k$ is even, but not odd. I have been able to show that $k^+$ is odd, but I can't figure a way to show that $k^+$ is not even. I suppose it must be simple, but I'm just not seeing it. Could someone explain this little part? Thank you.
-
-REPLY [3 votes]: HINT $\ $ Here's the inductive step: $\rm\ 2m \ne 2n+1\ \Rightarrow\ 2m+1 \ne 2(n+1)$<|endoftext|>
-TITLE: $\pi_{1}({\mathbb R}^{2} - {\mathbb Q}^{2})$ is uncountable
-QUESTION [43 upvotes]: Question: Show that $\pi_{1}({\mathbb R}^{2} - {\mathbb Q}^{2})$ is uncountable.
-Motivation: This is one of those problems that I saw in Hatcher and felt I should be able to do, but couldn't quite get there.
-What I Can Do: There are proofs of this being path connected (though, I'm not exactly in love with any of those proofs) and this tells us we can let any point be our base-point. Now, let $p$ be some point in ${\mathbb R}^{2} - {\mathbb Q}^{2}$ and let's let this be our base-point. We can take one path from $p$ to $q$ and a second one from $q$ to $p$, and it's not hard to show that if these paths are different then there is at least one rational point on the "inside" of the loop they form. Since there are uncountably many $q$, this would seem to imply uncountably many different elements of the fundamental group; the problem I'm having is showing that two loops like we've described are actually different! For example, a loop starting at $p$ and passing through $q$ should be different from a loop starting at $p$ and passing through $q'$, with none of these points the same, for at least an uncountable number of elements $q'$. Is there some construction I should be using to show these elements of the fundamental group are different?
-
-REPLY [41 votes]: In case anyone doesn't like the arguments from uncountability, here's a more concrete argument:
-Fix a base-point $(p,p)$ for an arbitrary $p\in\mathbb{R}-\mathbb{Q}$.
For any $q\in\mathbb{R}-\mathbb{Q}$, $q\not=p$, consider the loop $L_q$ consisting of the following line segments: $(p,p)\rightarrow(p,q)\rightarrow(q,q)\rightarrow(q,p)\rightarrow(p,p)$
-We claim that for any $q_1\not=q_2$ the loops $L_{q_1}$ and $L_{q_2}$ are not homotopic in ${\mathbb R}^{2} - {\mathbb Q}^{2}$: one can always choose a rational $r$ so that the removed point $(r,r)$ is enclosed by exactly one of the two loops (for instance, if $p<q_1<q_2$, take any rational $r$ with $q_1<r<q_2$), and then the two loops have different winding numbers around $(r,r)$, so no homotopy avoiding $(r,r)$ can join them. Since $\mathbb{R}-\mathbb{Q}$ is uncountable, the classes $[L_q]$ give uncountably many distinct elements of $\pi_{1}({\mathbb R}^{2} - {\mathbb Q}^{2})$.<|endoftext|>
-TITLE: What are the subgroups of $\operatorname{Alt}(2p)$ containing the normalizer of a Sylow $p$-subgroup?
-QUESTION [10 upvotes]: Is it easy to describe them?
-
-$G = \operatorname{Alt}(2p)$,
-$P$ a Sylow $p$-subgroup generated by $(1,2,...,p)$ and $(p+1,p+2,...,2p)$,
-$N = N_G(P)$,
-$M = \{g \in \operatorname{Alt}(2p) : g \text{ takes every orbit of }P\;\text{ to some orbit of }P\;\}$
-
-
-Is it true that the unique subgroup strictly between $N$ and $G$ is $M$, unless $p=2$ or $p=3$ (where $N=M$ is already maximal)? If not, then what does happen?
-
-I think it is clear that $N ≤ M < G$. I am fine with $M$ being maximal in $G$. I don't really understand why $N$ would be maximal in $M$, or why $N$ is contained in a unique maximal subgroup.
-If the symmetric group is easier to work in, and the answer is close, I am fine with that case as well.
-I am trying to understand Bender type ideas in this easy but slightly exotic case, so I'd prefer elementary proofs, but any exposé of Bender's ideas is a welcome (but not necessary) addition.
-
-REPLY [8 votes]: The proof seems quite long, and requires the classification of finite simple groups. Perhaps the most surprising part to me is that the similar question in prime degree $p$ (rather than degree $2p$ as asked) still requires the classification to handle general primes, and requires quite a bit of modular representation theory to handle primes of special forms. At any rate, here is what I've had time to write up.
-Itô's conjecture and Wielandt's problem
-
-Is the normalizer $F$ of a Sylow $p$-subgroup of the symmetric group on $p$ points maximal?
-
-This seemed to be true. The first results along these lines were Galois's investigations on groups of prime degree, followed by Mathieu's discovery of his sporadic groups (especially of prime degree). Burnside used ordinary character theory to show that a transitive group of prime degree was either a subgroup of $F$ or insoluble. Brauer (1943) applied his ideas of modular character theory to give some bounds. Itô (1958+) began to get detailed information on groups of prime degree when the primes were of special form. Wielandt's 1964 textbook (Part V, especially §31) gathered together Burnside, Schur, and to some extent Brauer's work and gave representation theoretic proofs of several of these results. Neumann (1972) used modular representation theory to show that any overgroup is triply transitive (see Huppert–Blackburn XII.10.9). Chillag (1977) showed that if the overgroup inverted an element of order $p−1$ in the normalizer, then it was the entire symmetric group. Finally after the CFSG was declared complete, lists of $2$-transitive groups became available, a published list being clearly presented in Kantor (1985).
-List of $2$-transitive permutation groups
-Such groups divide into two sorts by Galois: those with elementary abelian socle and those with simple socle.
-The ones with simple socle $N$ are almost simple and have degree $v$ as in the following list: - -$v ≥ 5$, $N = \operatorname{Alt}(v)$ -$v = (q^d−1)/(q−1)$, $N = \operatorname{PSL}(d,q)$, $q$ a prime power, $(d,q) ≠ (2,2), (2,3)$, and two representations if $d ≥ 3$ -$v = q^3+1$, $N = \operatorname{PSU}(3,q)$, $q$ a prime power, $q > 2$ -$v = q^2+1$, $N = \operatorname{Sz}(q)$, $q=2^{2e+1}$, $e$ a positive integer -$v = q^3+1$, $N = \operatorname{Ree}(q)$, $q=3^{2e+1}$, $e$ a positive integer -$v = 2^{2n−1} ± 2^{n−1}$, $N = \operatorname{Sp}(2n,2)$, $n ≥ 3$ -$v = 11$, $N = \operatorname{PSL}(2,11)$, two representations -$v \in \{ 11, 12, 22, 23, 24 \}$, $N = \operatorname{Mathieu}(v)$, and two representations for $v=12$ -$v = 12$, $N = \operatorname{Mathieu}(11)$ -$v = 15$, $N = \operatorname{Alt}(7)$, two representations -$v = 176$, $N = \operatorname{HS}$, two representations -$v = 276$, $N = \operatorname{Co}3$ - -The ones with abelian socle of order $v = p^d$ are all contained in $\operatorname{AΓL}(d,p)$. The subgroup $G_0$ of permutations fixing the $0$ vector comes from a relatively short list, and is a complement to the regular normal subgroup $V$ of order $v$. These should be on the list of transitive finite linear groups: - -$G ≤ \operatorname{AΓL}(1,v)$ is solvable, $G_0$ is cyclic of order dividing $v−1$ -$\operatorname{SL}(n,q) ⊴ G_0$, $q^n = p^d$ -$\operatorname{Sp}(n,q) ⊴ G_0$, $q^n = p^d$ -$G_2(q)' ⊴ G_0$, $q^6 = p^d$, $q$ even -$G_0 \in \{ \operatorname{Alt}(6), \operatorname{Alt}(7) \}$, $v = 2^4$ -$\operatorname{SL}(2,q) ⊴ G_0$, $q \in \{3,5\}$, and either $d=2$, $p \in \{ 5, 7, 11, 19, 23, 29, \text{ or }59 \}$, or $d=4, p=3$. -five more examples with $d=4, p=3$ ($G_0$ a semi-direct product of the quaternion-type extra-special group of order $32$ with one of the five transitive subgroups of $\operatorname{Sym}(5)$) -$\operatorname{SL}(2,13) = G_0$, $d=6$, $p=3$. - -Sifting the list (not done) -Now one has to check which of these groups actually have degree $2p$ (or just $p$). The second half of the list is easily dealt with: $2p$ is not a prime power unless it is $4$, and $p$ is exactly a prime, so the only possibility is $\operatorname{AΓL}(1,p) = \operatorname{AGL}(1,p)$, the normalizer of the Sylow $p$-subgroup. -The first half of the list is extremely difficult to analyze as it brings up questions similar to Fermat primes. However, we can also use that these groups have to contain the normalizer in $\operatorname{Alt}(2p)$ of a Sylow $p$-subgroup. There are six infinite families (alt, psl, unitary, suzuki, ree, symplectic) and a few sporadics. -The alternating group is not a maximal subgroup of itself. -In the case $2.\operatorname{PΓL}(d,q) = \operatorname{Aut}(\operatorname{PSL}(d,q))$, one would have $(q^d−1)/(q−1)=2p$. This number is even, so $q$ must be odd, and $d$ must be even. If $p$ divides $q−1$, then $p$ divides $|N|$ at least $d−1$ times. If $p$ divides the order of the socle $N$ only once, then $p$ must divide $|\operatorname{Out}(N)| = (q−1,d)⋅f⋅2$. If $p$ divides $f$, then $p$ divides $q−1$ by Fermat's little theorem, however, then $p$ divides $|N|$ at least $d−1$ times, so $d = 2$ and $2p = q+1$, so $q=2p-1$, but $q$ is also a $p$'th power. Since $q^p−2p+1$ is $0$ at $p=0$ and its derivative in $p$ is positive if $q ≥ 9$ (and since there are no small counterexamples), this is a contradiction. If $p$ divides $(q−1,d)$, then $d≥p$, and so $p$ divides $|N|$ at least $p−1$ times, which is too many since we can assume $p ≥ 3$. 
Hence $p$ divides $|N|$ exactly twice, and $[G:N]$ is coprime to $p$. If $p ≥ d$, then a Sylow $p$-subgroup of $N$ is abelian and semi-simple, so contained in a torus. In other words, the order $k$ of $q$ mod $p$ satisfies $d/3 < k ≤ d/2$. A permutation matrix effects the orbit swap, so we only need to see if all $(p−1)/2$ multiplication automorphisms from the normalizer of the Sylow $p$-subgroup in $\operatorname{Alt}(p)$ are contained in $G$. Hence we have to look at the normalizer of the torus…
-The unitary and Ree cases are impossible as $q^3+1$ is divisible by $2$, $(q+1)/2$, and $q^2-q+1$, so is not twice a prime. Suzuki has odd degree, and symplectic cannot occur: $2$ divides $v$ only once, so $n=2$ is out of bounds.
-Now the only sporadic possible is $M_{22}$, but its automorphism group is only divisible by $11$ once, so it cannot occur.
-Bibliography
-
-Brauer (1943) MR0008237
-Ito (1960) MR0117283
-
-and MR0124389 for larger index
-and his 1963 MR0147535 where he raises the bar and begins the attack on his more general conjecture by proving multiple transitivity, which Neumann then simplifies and extends
-and his 1967 MR0224697 where he handles Fermat primes
-
-Neumann (1972) MR0313369
-
-and its application to Itô's conjecture MR0352227
-
-Chillag (1977) MR0437622
-Kantor (1985) MR0773556<|endoftext|>
-TITLE: A way to determine the ideal number of maximum iterations for an arbitrary zoom level in a Mandelbrot fractal
-QUESTION [22 upvotes]: I've created a JavaScript-based fractal drawer which you can see here:
-http://jsfiddle.net/xfF3f/12/
-As you're probably all aware, a Mandelbrot Set is created by iterating over pixels as though they were coordinates on the real and imaginary axes. Each pixel then has a real and an imaginary part (like all complex numbers) which can be fed into an iterative loop of its own:
-$ z = z^2 + c $
-Where both $z$ and $c$ are complex numbers, $z$ starts at $0$, and $c$ is the value of our pixel. If you run this a bunch of times, the squared modulus of $z$ ($|z|^2$) will either stay below a limit you set (6 in my fractal drawer) or it will go above the limit. If it goes above the limit, you break the iteration and consider that pixel to be outside of the set. When this happens, you color that pixel a certain color depending on how big $|z|$ has gotten and how many iterations it took to determine that it's outside the set. (See more on the formula used in my drawer here: Continuous coloring of a Mandelbrot fractal)
-So given all this, we can say that a higher max iteration value will tell you with greater precision whether or not a point is in the set. It will also take more time to run because it's doing more calculations per pixel. There are also other visual factors...
-If you start at a full zoom out and run the plot with a maxIterations value of 50, 100, and 300, you get this:
-
-So you can see that while the detail of the edge of the set does get better as you increase the maxIterations value, the pixels outside the set are almost all red. At this zoom level, I'd say something like 50 iterations would be an ideal balance of color variation and edge detail.
-Now if you zoom in to some arbitrary level keeping 50 maxIterations, you will begin to see something like this:
-
-The detail is horrible and the colors are also a bit homogeneous.
So let's see what happens if we keep the same zoom level and change the maxIterations number to 80, 120, 250, 500, 1000, and 2000 (remember, the coordinates and zoom are exactly the same in all images, the only difference is the maxIterations value):
-
-As usual, increasing the maxIterations value too much leaves most of the points outside of the set red. Here I'd say something between the second (120 maxIterations) and third (250 maxIterations) is more or less ideal.
-This is all relatively simple to do one image at a time with your eye and some tinkering, but this would be very difficult to do if I were to create a zoom like this: http://vimeo.com/1908224. I'd need some method of finding something like an ideal maxIterations value depending on the zoom level.
-So after all of this, my question is: is there some such method? If not, where might I start to look in order to figure this out for myself? Am I thinking about this wrong? Is there a more obvious solution that I'm missing?
-Thanks in advance!
-
-REPLY [6 votes]: As this graph shows, the behaviour of normalized iteration counts of points near the Mandelbrot set varies widely, indicating that attempts at a formula based on scale factors are doomed to fail.
-
-In any case the ideal number of iterations is infinite. For views with no interior regions visible it is preferable to structure computations such that they can be incremental without a fixed iteration limit, as all pixels will escape eventually. For views with interior regions visible one needs a limit, but this can be set dynamically by considering the behaviour of the pixels - maybe keep doubling the limit until no more pixels have escaped. Interior checking can help speed this up.
-How to colour the iterations once you have calculated them is an aesthetic matter that has been addressed by the accepted answer. I do also recommend distance estimation as a way to make filaments uniformly visible.<|endoftext|>
-TITLE: Relation of Brownian Motion to Helmholtz Equation
-QUESTION [15 upvotes]: One can obtain solutions to the Laplace equation
-$$\Delta\psi(x) = 0$$
-or even for the Poisson equation $\Delta\psi(x)=\varphi(x)$ in a Dirichlet boundary value problem using a random-walk approach, see e.g. Introduction to Brownian Motion.
-Now, already this fascinating concept is not really clear to me. Nevertheless, it would be worthwhile to go into the details if such a connection also exists for the Helmholtz equation
-$$\Delta\psi(x) +k^2\psi(x)= 0$$
-Hence my question:
-Can we use some random walk to calculate solutions of the Helmholtz equation numerically?
-It would also be interesting if this is still true for open systems where the boundary conditions are different from those in the Dirichlet case and for which $k$ is now a domainwise constant function.
-Thank you in advance
-Robert
-
-REPLY [14 votes]: The general form for the infinitesimal generator of a continuous diffusion in $\mathbb{R}^n$ is
-$$
-Af(x) = \frac12\sum_{ij}a_{ij}\frac{\partial^2 f(x)}{\partial x_i\partial x_j}+\sum_ib_i\frac{\partial f(x)}{\partial x_i}-cf(x).\qquad{\rm(1)}
-$$
-Here, $a_{ij}$ is a positive-definite and symmetric $n\times n$ matrix, $b_i$ is a vector and $c$ is a non-negative scalar, with the coefficients $a,b,c$ functions of position $x$. Such operators are said to be semi-elliptic second order differential operators. The case with $c=0$ is the most common - being the generator of a Markov process (or semigroup). However, the $c > 0$ case does occur, and is then a generator of a submarkovian semigroup.
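-Before interpreting the coefficients, a minimal simulation sketch may make (1) concrete (this is an added illustration in Python, one-dimensional, with made-up function names; here $a=s^2$):
-
-import math, random
-
-def simulate_killed(x0, b, s, c, dt=1e-2, t_max=1.0):
-    # Run dX = b(X) dt + s(X) dW, killed at rate c(X); return final X or None.
-    x, t = x0, 0.0
-    while t < t_max:
-        if random.random() < c(x) * dt:          # killed: jump to the cemetery
-            return None
-        x += b(x) * dt + s(x) * math.sqrt(dt) * random.gauss(0.0, 1.0)
-        t += dt
-    return x
-
-# Brownian motion (b = 0, s = 1) killed at constant rate c = 2: the fraction
-# of paths surviving to t = 1 should be about exp(-2), roughly 0.135.
-random.seed(1)
-paths = [simulate_killed(0.0, lambda x: 0.0, lambda x: 1.0, lambda x: 2.0)
-         for _ in range(5000)]
-print(sum(x is not None for x in paths) / len(paths))
-
-In the small-$dt$ limit the generator of this scheme is exactly (1) with $a=s^2$, and the role of each coefficient is spelled out next.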
-
-The coefficients $a,b,c$ can be understood as follows: $a$ gives the covariance matrix of the motion over small time intervals (i.e., the level of noise). $b$ gives the mean over small time intervals (the drift) and $c$ is the rate at which the process is "killed". To be precise, a process $X_t$ can be modeled formally by adding an additional state $\it\Delta$ called the cemetery. So, we represent the state space for the killed diffusion as $\mathbb{R}^n\cup\{{\it\Delta}\}$. In a small time interval $\delta t$, the process has probability $c\delta t$ of being killed, in which case it jumps to the cemetery, and stays there. So, $X_t={\it\Delta}$ for all $t\ge\tau$ with $\tau$ being the (random) time when the process is killed. The terminology I am using here is taken from Revuz & Yor (Continuous Martingales and Brownian Motion), and will vary between different authors.
-Anyway, getting back to PDEs. Suppose we want to solve the PDE $A\psi(x)=0$ on an open domain $U\subseteq\mathbb{R}^n$ with boundary condition $\psi(x)=\psi_0(x)$ for $x$ on the boundary of $U$ ($\partial U$, say). You can do the following. Simulate the process $X_t$ with initial condition $X_0=x$. Wait until the first time $T$ at which it hits the boundary and, when this occurs (if the process doesn't get killed first, i.e., $T < \tau$), take the expected value.
-$$
-\psi(x)=\mathbb{E}_x\left[\psi_0(X_T)1_{\{T < \tau\}}\right].\qquad\qquad{\rm(2)}
-$$
-Then $\psi$ satisfies the PDE $A\psi=0$.
-This is all very general. Getting back to the Helmholtz equation, we can let $a_{ij}$ be the identity matrix and $b_i=0$, and $c$ is a constant. In that case our generator becomes $A\psi=\frac12\Delta\psi-c\psi$. [Edit: This is not quite the same as the Helmholtz equation, which has $c=-k^2$, because here we have $c > 0$. There is a sign difference which changes the behaviour of the solutions. See below] The process then is the following: run a Brownian motion starting from $x$ until the first time $T$ it hits the boundary. Decide if it has not been killed, which has probability $e^{-cT}$ conditional on $T$. If it hasn't, take the value $\psi_0(X_T)$. Finally, take the average of this process (e.g., using Monte Carlo). There is one practical issue here though. Throwing away all the paths on which the process gets killed is a bit wasteful, so you would simply multiply by the probability of not being killed on each path, rather than actually discarding them. That is, you simulate a regular Brownian motion, and then calculate
-$$
-\psi(x)=\mathbb{E}_x\left[\psi_0(X_T)e^{-cT}\right].\qquad\qquad{\rm(3)}
-$$
-We can even go the whole hog and solve $A\psi(x)=\varphi(x)$ for general $x$-dependent coefficients and source term $\varphi$,
-$$
-\begin{align}
-\psi(x)&=\mathbb{E}_x\left[1_{\{T<\tau\}}\psi_0(X_T)-\int_0^{T\wedge\tau}\varphi(X_t)\,dt\right]\\
-&=\mathbb{E}_x\left[e^{-\int_0^Tc(\hat X_s)\,ds}\psi_0(\hat X_T)-\int_0^Te^{-\int_0^tc(\hat X_s)\,ds}\varphi(\hat X_t)\,dt\right].
-\end{align}\qquad{\rm(4)}
-$$
-Here, $X$ is the process killed at (state-dependent) rate $c$, and I'm using $\hat X$ for the process without killing, which requires multiplying by the survival probabilities $e^{-\int c(\hat X_s)\,ds}$ instead.
-One other area in which you have a '$-cf$' term in the PDE governing diffusions is in finance, and it occurs in two different, but closely related ways.
Prices of financial assets are frequently modeled as diffusions (even, jump-diffusions), and the value of a financial derivative would be expressed as the expected value of its future value - under a so-called "risk-neutral measure" or "martingale measure" (which are just special probability measures). However, you need to take interest rates into account. If the rate is $r$, then you would multiply the future (time $t$) value by $e^{-rt}$ before taking the expected value, which is effectively the same as adding a $-rf(x)$ term to the generator. And, as in the general case above, $r$ can be a function of the market state. The second main way (which occurs to me) in which such terms appear in finance is due to credit risk. If a counterparty has probability $r\,dt$ of defaulting in any time interval of length $dt$, then you would have a $-rf(x)$ term occurring in the diffusion. This is more in line with the "killing" idea discussed above, but behaves in much the same way as interest rates.
-Finally, I'll mention that the PDE in the more general time-dependent situation is of the form $\partial f/\partial t + Af =0$, where $f$ is the expected value of some function of the process at a future time (and not necessarily the first time it hits the boundary). As mentioned in the other answer this is sometimes known as the Feynman-Kac formula, generally by physicists, and also as the Kolmogorov backward equation by mathematicians. Actually, the backward equation in the Wikipedia link doesn't have the $-cf$ term, but it would in the more general case of diffusions with killing. The adjoint PDE applying to probability densities is known as the Fokker-Planck equation by physicists and the Kolmogorov forward equation to mathematicians.
-
-Edit: As mentioned above, what we have here does not quite correspond to the Helmholtz equation, because of the sign of $c$, and the behaviour of the solutions does change depending on whether $c$ is positive or negative. In the probabilistic interpretation, $c > 0$ is the rate at which the process is killed. Looking at (3) and (4), we can see that solutions to the PDE will decay exponentially as we move further from the boundary. Furthermore, if the values on the boundary are non-negative, then $\psi$ has to be non-negative everywhere. The probabilistic method naturally leads to $\psi(x)$ being a positive linear combination of its boundary values (i.e., an integral with respect to a measure on the boundary). On the other hand, the Helmholtz equation has oscillating wavelike solutions. The values of $\psi(x)$ can exceed its values on the boundary and, even if $\psi\vert_{\partial U}$ is positive, it is possible for $\psi(x)$ to go negative inside the domain. So, it is not a positive linear combination of its boundary values. We could just try using a negative $c$ in (3) and (4) but, for the reasons just mentioned, this cannot work in general. What happens is that $e^{\vert c\vert T}$ is not integrable. To get around this, it is possible to transform the Helmholtz equation so that the zeroth order coefficient $-c$ is positive. We can make a substitution such as $\psi(x)=\tilde\psi(x)e^{ikS(x)}$, where $S$ is any solution to $\Vert\nabla S\Vert=1$. Then, the Helmholtz equation becomes,
-$$
-\frac12\Delta\tilde\psi + ik\nabla S\cdot\nabla\tilde\psi + \frac{ik}{2}(\Delta S)\tilde\psi=0.
-$$
-So we have a zeroth order term of the form $-\tilde c\tilde\psi$ for $\tilde c=-ik\Delta S/2$. This is imaginary, so does not make sense as a "killing" rate any more.
However, as its real component is nonnegative (zero here), equations such as (3) and (4) above give bounded and well-defined expressions for $\tilde\psi$. A Google search gives the following paper which uses such a substitution: Novel solutions of the Helmholtz equation and their application to diffraction. I expect that the papers linked to in the other answer use similar techniques to transform the equation into a form which can be handled by the probabilistic method - although I have not had a chance to look at them yet, and they are not freely accessible.<|endoftext|>
-TITLE: One line Proof of the Prime Number Theorem
-QUESTION [5 upvotes]: Whenever I am not doing anything, I generally happen to see pages of some good Mathematical Institutes in India, so as to know more about the faculty members and see what they are working on.
-While doing so, on one of the faculty webpages I found this statement.
-
-The prime number theorem has a one-line proof: The Riemann zeta function does not vanish on the one line "real part of $s$ equals 1".
-
-Can anyone explain more about this proof? Is he assuming the Riemann Hypothesis? I am curious to know.
-
-REPLY [15 votes]: The assertion you made about $\zeta(s)$ does not assume the Riemann hypothesis and can, in fact, be proved reasonably quickly using reasonably elementary manipulations of trigonometric functions. The actual deduction of the prime number theorem from this is not trivial, and can be done (edit: as Matt E observes below, this is not the only -- and, indeed, not the original -- way) by some sort of "Tauberian" theorem: in these notes by Ash (which are based off a short paper by Donald Newman), the argument is to express $\zeta'(s)/\zeta(s)$ as the "Mellin transform" (basically, a variant of the Laplace transform) of partial sums of the von Mangoldt function, denoted $\psi$. The point of the "Tauberian" theorem is that you can deduce properties about the original function from the Mellin transform: since we have some information about the Mellin transform (namely, because $\zeta$ is zero-free on the line $Re(s)=1$), we can deduce some information about $\psi$. This is enough to imply the prime number theorem.<|endoftext|>
-TITLE: Homology Group of Quotient Space
-QUESTION [6 upvotes]: Let $X$ be the quotient space of $S^2$ under the identifications $x\sim-x$ for $x$ in the equator $S^1$. Compute the homology groups $H_i(X)$. Do the same for $S^3$ with antipodal points of the equator $S^2 \subset S^3$ identified.
-This is probably related to cellular homology. Thanks.
-
-REPLY [4 votes]: Let $X=S^2/\mathord\sim$ and, letting $S^2_+\subseteq S^2$ be the upper closed hemisphere in the sphere, let $X_+=S^2_+/\mathord\sim$ be the quotient of $S^2_+$ by the restricted equivalence relation. Now consider the long exact sequence in reduced homology for the pair $(X,X_+)$.
-Using excision &c, show that the relative homology of $(X,X_+)$ is the same as that of the result of collapsing $X_+$ to a point, so that you get a $2$-sphere. On the other hand, $X_+$ is a projective plane, so you also know its homology. Now use the long exact sequence.<|endoftext|>
-TITLE: Partial fraction integration
-QUESTION [6 upvotes]: So I want to find all antiderivatives of
-$\frac{x}{x^3-1}$
-Since the numerator is of a lesser degree than the denominator, partial fractions are to be used instead of long division.
-
-I've started by doing:
-$\frac{x}{x^3-1} = \frac{A}{x^2+x+1} + \frac{B}{x-1}$
-Hence,
-$x = A(x-1)+B(x^2+x+1)$
-Then setting $x = 1$ to get $3B = 1 \Rightarrow B = \frac{1}{3}$
-So I have $\int\left(\frac{A}{x^2+x+1}+\frac{1}{3(x-1)}\right)dx$
-However, I'm not sure how to continue from here.
-
-REPLY [12 votes]: Like most techniques of integration, the idea of partial fractions is to reduce a difficult integral to integrals that are, if not easy, at least doable. The three types of integrals that show up when doing partial fractions are:
-
-$\displaystyle \int\frac{1}{(ax+b)^n}\,dx$, with $a,b$ constants, $a\neq 0$, and $n\geq 1$.
-$\displaystyle \int\frac{x}{(ax^2+bx+c)^n}\,dx$, with $n\geq 1$, $ax^2+bx+c$ irreducible quadratic.
-$\displaystyle \int\frac{1}{(ax^2+bx+c)^n}\,dx$ with $n\geq 1$, $ax^2+bx+c$ irreducible quadratic.
-
-The first type is easy to solve: do a substitution $u=ax+b$ and go at it.
-The second type is a bit more involved. By doing a substitution $u=ax^2+bx+c$, we get
-\begin{align*}
-\int\frac{x}{(ax^2+bx+c)^n}\,dx &= \frac{1}{2a}\int \frac{2ax\,dx}{(ax^2+bx+c)^n}\\
- &= \frac{1}{2a}\left(\int\frac{2ax+b}{(ax^2+bx+c)^n}\,dx - \int\frac{b}{(ax^2+bx+c)^n}\,dx\right)\\
-&= \frac{1}{2a}\int\frac{du}{u^n} - \frac{b}{2a}\int\frac{dx}{(ax^2+bx+c)^n}.
-\end{align*}
-The first integral can be done; the second reduces to the third type mentioned above.
-And so we come down to the third type. When $n=1$, the simplest thing to do is to complete the square; factoring out $a$ we may assume we have $x^2+Bx+C$. Completing the square, you get $(x+\frac{B}{2})^2 + (C - \frac{B^2}{4})$. Because we are assuming that the original quadratic is irreducible, that means that $B^2 - 4C\lt 0$, so that $C-\frac{B^2}{4}\gt 0$. Substituting $u=x+\frac{B}{2}$ turns this into a fraction of the form $\frac{1}{u^2+r^2}$; factor out $r^2$, do another substitution, and you can turn it into a fraction of the form $\frac{1}{w^2+1}$. But $\int\frac{dw}{w^2+1}$ is an easy integral: you have an immediate antiderivative for it.
-So, modulo a bunch of algebra and some substitution, you can solve $\int\frac{1}{ax^2+bx+c}\,dx$ with $ax^2+bx+c$ irreducible quadratic: complete the square, do some substitutions, and turn it into $\int\frac{1}{w^2+1}\,dw$.
-What if you have $\int\frac{dx}{(ax^2+bx+c)^n}$ with $n\gt 1$, $ax^2+bx+c$ irreducible quadratic? Those are more complicated, but not too bad; you can still complete the square and do a bit of algebra, so that you bring it to the form
-$$\int \frac{du}{(u^2+r^2)^n}$$
-for some positive $r$. Then one can use the reduction formula (obtained by doing integration by parts):
-$$\int\frac{du}{(u^2+r^2)^n} = \frac{1}{2r^2(n-1)}\left(\frac{u}{(u^2+r^2)^{n-1}} + (2n-3)\int\frac{du}{(u^2+r^2)^{n-1}}\right)$$
-and continuing this way you will eventually end in the integral with denominator $u^2+r^2$, which we know how to do.<|endoftext|>
-TITLE: Coordinate free proof that curvature is the "square" of the connection
-QUESTION [14 upvotes]: Here's the setup. Consider a vector bundle $E$ over a manifold $M$ and let $\Omega^*(M, E)$ denote the space of $E$-valued differential forms (i.e. the space of sections of the vector bundle $\bigwedge^* T^*M \otimes E$). Let $\nabla$ be an affine connection on $E$, i.e. a map $\Omega^0(M, E) \to \Omega^1(M, E)$ satisfying the Leibniz rule. For each vector field $X$ there is a contraction operator $\iota(X)$ on $\Omega^*(M,E)$, and we form $\nabla_X = \iota(X) \nabla$.
-
-Recall that the curvature of $\nabla$ is defined to be $F(X,Y) = [\nabla_X, \nabla_Y] - \nabla_{[X,Y]}$. It is well known that $F$ is actually an $End(E)$-valued 2-form. $\Omega^2(M, End(E))$ acts on $\Omega^*(M, E)$ in the standard way, and it is also well known that $\nabla^2: \Omega^k(M,E) \to \Omega^{k+2}(M, E)$ is given by the action of $F$.
-My question is about the proof that "$\nabla^2 = F$" in the sense described above. The textbook proofs all involve trivializing $E$ and doing a coordinate calculation; I don't mind trivializing $E$, but for my own nefarious (and perhaps misguided) purposes I would like to avoid coordinates. Here is what I mean.
-If $E$ is trivial then $\nabla = d + \omega$ where $\omega$ is an $End(E)$-valued 1-form. So $F(X,Y) = [\iota(X)(d+\omega), \iota(Y)(d + \omega)] - \iota([X,Y])(d + \omega)$, and it seems to me that one should be able to carry out the argument using only Cartan's homotopy formulas. There are complications which arise from the fact that the $d$ operator and the contraction operators are different for $\Omega^*(M, End(E))$ versus $\Omega^*(M,E)$, but nevertheless I can get tantalizingly close. I can show my progress if it would help you help me.
-In any event, I've been at it for quite some time now and I've decided it's time to seek assistance. So I'm wondering if anybody knows either how to do this, a reason why it can't be done, or a place where I could look it up. Thanks in advance!
-
-REPLY [6 votes]: Let $\sigma \in \Omega^n(M,E)$ and $X$, $Y$, $Z_1, \ldots, Z_n$ be vector fields, then:
-$(\nabla^2\sigma)(X,Y,Z_1,\ldots,Z_n) = \nabla_X(\nabla(\sigma))(Y,Z_1,\ldots,Z_n)$
-$= \nabla_X(\nabla_Y(\sigma))(Z_1,\ldots,Z_n) - \nabla\sigma(d_X(Y))(Z_1,\ldots,Z_n)$
-$= \nabla_X(\nabla_Y(\sigma))(Z_1,\ldots,Z_n) - (\nabla_{d_X(Y)}\sigma)(Z_1,\ldots,Z_n)$
-$= -\nabla_Y(\nabla_X(\sigma))(Z_1,\ldots,Z_n) + (\nabla_{d_Y(X)}\sigma)(Z_1,\ldots,Z_n)$
-$= \frac{1}{2}(\nabla_X(\nabla_Y(\sigma)) - \nabla_Y(\nabla_X(\sigma)))(Z_1,\ldots,Z_n) - \frac{1}{2}(\nabla_{d_X(Y)-d_Y(X)}\sigma)(Z_1,\ldots,Z_n)$
-$= \frac{1}{2}(([\nabla_X, \nabla_Y])(\sigma))(Z_1,\ldots,Z_n) - \frac{1}{2}(\nabla_{[X,Y]}\sigma)(Z_1,\ldots,Z_n)$<|endoftext|>
-TITLE: What can we say about $f$ if $\int_0^1 f(x)p(x)dx=0$ for all polynomials $p$?
-QUESTION [14 upvotes]: This question was motivated by another question in this site.
-As explained in that problem (and its answers), if $\displaystyle f$ is continuous on $\displaystyle [0,1]$ and $\displaystyle \int_0^1 f(x)p(x)dx=0$ for all polynomials $\displaystyle p$, then $\displaystyle f$ is zero everywhere.
-Suppose we remove the restriction that $\displaystyle f$ is continuous.
-Can we conclude from $\displaystyle f\in L^1([0,1])$ that $\displaystyle f$ is zero almost everywhere?
-(This should be terribly standard. My apologies, I am rusty of late.)
-
-REPLY [13 votes]: Assuming you're using Lebesgue integrals, to even make the statement that $\int_0^1 f(x)p(x)\,dx = 0$ for all polynomials $p(x)$, you are forcing $f(x)$ to be in $L^1$. This can be seen by setting $p(x) = 1$; in order for the statement $\int_0^1 f(x)\,dx = 0$ to be well-defined the positive and negative parts $f^+(x)$ and $f^-(x)$ have to integrate to the same finite value.
-So suppose $f(x)$ is some $L^1$ function with $\int_0^1 f(x)p(x)\,dx = 0$ for all polynomials $p(x)$.
By the Stone-Weierstrass theorem you can uniformly approximate any continuous function by polynomials, so taking limits you also have $\int_0^1 f(x)g(x)\,dx = 0$ for every continuous $g(x)$. In particular it is true for $g(x) = \cos(2\pi nx)$ and $g(x) = \sin(2\pi nx)$ for any $n$. This means each Fourier coefficient of $f(x)$ is zero. By the uniqueness theorem for Fourier coefficients, this means $f(x) = 0$ a.e.<|endoftext|>
-TITLE: What is complete induction, by example? $4(9^n) + 3(2^n)$ is divisible by 7 for all $n>0$
-QUESTION [14 upvotes]: So, I've been revising for an exam and I came up against the question "prove $4(9^n) + 3(2^n)$ is divisible by 7 for all $n>0$".
-Now, I know how to do this. If I assume the expression is divisible by $7$ for $n=k$, then I need to show that it is divisible by $7$ for $n=k+1$. The easiest way to do this is to write $n_k$ for the value of the expression at $n=k$ and $n_{k+1}$ for its value at $n=k+1$; then $7|n_{k+1}-n_{k}$ and $7|n_{k}$ together imply $7|n_{k+1}$.
-So, without further ado, $4(9^k)9 - 4(9^k) + 3(2^k)2 - 3(2^k) = 8\cdot4(9^k) + 3\cdot2^k = 8(4(9^k) + 3(2^k)) - 7\cdot 3(2^k)$. As required. Now clearly for $n=0$ the original expression equals $7$, so it is divisible for all $n\geq 0$.
-My question is, how would I go about proving this via complete induction? I asked because "proof by strong induction also accepted" was mentioned in the mark scheme. Now according to Wikipedia, my first assumption is that not only is the claim true for $n=k$ but so is the claim for $n=k-1$ and so on down to $n=0$. How do I improve my expression of that and how do I go from there to show a proof using this technique?
-Edit: The build up to the question is on the topic of induction, so that's why I proved it that way, but Yuval Filmus has pointed out that if we are simply asked to prove it, the fact that $9 \equiv 2 \pmod 7$ means the proof is trivial.
-
-REPLY [4 votes]: A simple powerful way to prove by complete induction that $\rm\ f(n) = 4\cdot 9^n + 3\cdot 2^n \equiv 0\ \ (mod\ 7)\:$ is as follows: Put $\rm\ S\ f(n) = f(n+1)\:.\ $ Note $\rm\ S-9\ $ kills $\rm\ 9^n,\ $ and $\rm\ S-2\ $ kills $\rm\:2^n,\, $ therefore $\rm (S-9)\ (S-2)\ $ kills $\rm\:f(n),\, $ i.e. $\rm\ f(n)\ $ satisfies $\rm\ f(n+2) - 11\ f(n+1) + 18\ f(n) = 0.\, $ Now since $\rm\ 0\equiv f(0)\equiv f(1),\, $ using the recurrence and complete induction shows $\rm\, f(n)\equiv 0\, $ for all $\rm\ n \in \mathbb N$.
-An analogous complete induction proves that a solution of a monic linear recurrence is determined uniquely by its initial conditions - the uniqueness theorem for linear difference equations. Generally uniqueness theorems provide very powerful tools for proving equalities. See some of my other posts for further examples of such.
-This is closely related to inductive proofs of the recursion theorem, which justifies the use of recursive definitions. For a nice introduction see Henkin: On mathematical induction.<|endoftext|>
-TITLE: Relationship between Riemannian Exponential Map and Lie Exponential Map
-QUESTION [14 upvotes]: It is well known that for a matrix Lie group the Lie exponential map is $e ^z$. This maps a tangent vector $z$ at the identity to a group element.
-On the other hand the general Riemannian exponential map centered at point $x$ is given by $\exp _x \triangle$ which maps a tangent vector $\triangle$ at point $x$ (not necessarily the identity element) to a group element.
-Is there a relationship between these two exponential maps?
-For example, is the formula below correct? If so, are there any conditions involved?
-
-$\exp _x \triangle = xe ^{x^{-1}\triangle}$
-
-REPLY [14 votes]: Notice that you need to pick a metric on a Lie group for the "general Riemannian exponential map" to be defined.
-If you happen to pick an invariant metric on a Lie group, then every geodesic is (locally) a translate of a 1-parameter subgroup (so essentially both exponentials are the same thing).
-I don't know what happens if there are no invariant metrics.<|endoftext|>
-TITLE: Group where every element is order 2
-QUESTION [40 upvotes]: Let $G$ be a group where every non-identity element has order 2.
-If $|G|$ is finite then $G$ is isomorphic to the direct product $\mathbb{Z}_{2} \times \mathbb{Z}_{2} \times \ldots \times \mathbb{Z}_{2}$.
-Is the analogous result, $G= \mathbb{Z}_{2} \times \mathbb{Z}_{2} \times \ldots$, true for the case where $|G|$ is infinite?
-
-REPLY [39 votes]: Perhaps the best way to look at the problem is to establish the following more precise result:
-For a group $G$, the following are equivalent:
-(i) Every non-identity element of $G$ has order $2$.
-(ii) $G$ is commutative, and there is a unique $\mathbb{Z}/2\mathbb{Z}$-vector space structure on $G$ with the group operation as addition.
-I guess you probably already know how to show that if every nonidentity element has order $2$, $G$ is commutative: for all $x,y \in G$, $e = (xy)^2 = xyxy$. Multiplying on the left by $x$ and on the right by $y$ gives $xy = yx$.
-Having established the commutativity, it is convenient to write the group law additively. Then there is only one possible $\mathbb{Z}/2\mathbb{Z}$-vector space structure on $G$, since it remains to define a scalar multiplication and of course we need $0 \cdot x = 0, \ 1 \cdot x = x$ for all $x \in G$. But you should check that this actually works: i.e., defines a $\mathbb{Z}/2\mathbb{Z}$-vector space structure, just by checking the axioms: the key point is that for all $x \in G$, $(1+1)x = x + x = 0 = 0x$.
-So now your question is equivalent to: is every $\mathbb{Z}/2\mathbb{Z}$ vector space isomorphic to a product of copies of $\mathbb{Z}/2\mathbb{Z}$? Well, the only invariant of a vector space is its dimension. It is clear that every finite-dimensional vector space is of this form. Every infinite dimensional space is isomorphic to a direct sum $\bigoplus_{i \in I} \mathbb{Z}/2\mathbb{Z}$, the distinction being that in a direct sum, every element has only finitely many nonzero entries. (In other words, the allowable linear combinations of basis elements are finite linear combinations.) Moreover, for any infinite index set $I$, the direct sum $\bigoplus_{i \in I} \mathbb{Z}/2\mathbb{Z}$ has dimension $I$ and also cardinality $I$.
-Finally, it is not possible for a direct product of two-element sets to have countably infinite cardinality: if $I$ is infinite, it is at least countable, and then the infinite direct product has the same cardinality as the real numbers (think of binary expansions). So the answer to your question is "yes" for direct sums, but "no" for direct products.
-
-REPLY [17 votes]: If every non-identity element of $G$ has order 2, then the group is abelian. Notationally, it helps to write the group operation additively, with identity $0$. In that case, you can view the group as a vector space over the field with two elements, $F_2=\mathbb{Z}/2\mathbb{Z}$. Every vector space has a basis, so
-$$
-G\cong\bigoplus_{i\in I}F_2
-$$
-where $I$ has the cardinality of a basis of $G$.
The point here is that representing a vector space by a basis corresponds to a direct sum rather than the direct product, because you can only take finite linear combinations of basis elements. (And, as pointed out in Andres' answer, it is not possible to represent all such groups as direct products.)
-
-REPLY [9 votes]: Not really. The direct sum $\bigoplus_{n\in{\mathbb N}}{\mathbb Z}_2$ is a counterexample. More generally, take $G=\prod_{n=1}^\infty{\mathbb Z}_2$. This set has size $|{\mathbb R}|$. It is easy to see that, given any element of this group, there is a countable subgroup of $G$ that contains this element. Much more pathological counterexamples are also possible, of course.<|endoftext|>
-TITLE: Diffeomorphic, group-isomorphic Lie groups that are not isomorphic as Lie groups
-QUESTION [20 upvotes]: Do there exist two Lie groups which are diffeomorphic as smooth manifolds, have isomorphic group structures, yet are not isomorphic as Lie groups?
-Of course, for this to happen, any diffeomorphism would fail to preserve the group structure, and any group isomorphism would either fail to be smooth, or its inverse would fail to be smooth.
-I have no reason for asking this other than curiosity. (In particular, this is not a problem I found out of a textbook.)
-Related Question: Are there topological groups that are homeomorphic and have isomorphic group structures, yet are not isomorphic as topological groups?
-
-REPLY [6 votes]: The examples (from the mathoverflow link) in the accepted answer are not quite Lie groups (at least not with the standard definition): they are not second countable. However, there are examples of simply connected nilpotent Lie groups which are isomorphic as abstract groups but not as Lie groups, see here. These groups are both diffeomorphic to ${\mathbb C}^7$. At the same time, if you restrict to the class of semisimple Lie groups then an abstract isomorphism implies the existence of an isomorphism as Lie groups. (Although, the given abstract isomorphism may fail to be continuous.)<|endoftext|>
-TITLE: Is there "essentially only 1" Jordan arc in the plane?
-QUESTION [12 upvotes]: Let $p : [0,1] \to \mathbb{R}^2$ be continuous and injective.
-
-Does there always exist a continuous function $f : [0,1]\times \mathbb{R}^2 \to \mathbb{R}^2$ such that
-For all $x\in \mathbb{R}^2$, $f(0,x) = x$
-
-and
-
-For all $t\in [0,1]$, $x\mapsto f(t,x)$ is a homeomorphism
-
-and
-
-For all $s\in [0,1]$, $f(1,p(s)) = \langle s,0\rangle$
-?
-
-REPLY [16 votes]: The function $f$ that you want is called an "ambient isotopy," and your question is whether all arcs in the plane are ambient isotopic. The answer is yes, although I'm not sure of a good reference. I believe that one can construct the ambient isotopy by hand using the 2D Schönflies theorem. The idea is to connect the endpoints of your Jordan arc by an arc, using the fact that the complement is path-connected, and apply the Schönflies theorem to the result.
-The analogous result for arcs in $\mathbb R^3$ is false. There are "wild" arcs which are not ambiently isotopic to a standard arc. See for example this article. There are also wild spheres in $\mathbb R^3$, such as Alexander's horned sphere.
-Edit: I asked this question on MathOverflow, which got more definitive answers here.<|endoftext|>
-TITLE: Minimal non-cyclic groups other than the Klein four groups
-QUESTION [10 upvotes]: Hi everyone. I am afraid that my question is too trivial. But here it is.
The Klein four group is the first counterexample to the statement: "If all proper subgroups of a group are cyclic, then the group is cyclic." I am looking for other examples, if any. Are there? -Thanks in advance. - -REPLY [6 votes]: Another collection of examples, not yet mentioned, are the Prüfer groups (also known as quasicyclic groups); the Prüfer $p$-group is sometimes denoted $\mathbb{Z}_{p^{\infty}}$. They are infinite, but every proper subgroup is finite (and every nontrivial quotient is isomorphic to the original group). Here are three descriptions: - -For a fixed prime $p$, let $\mathbb{Z}_{p^{\infty}}$ be the group of all $p^k$-th complex roots of unity for all $k\geq 0$, with the group operation being multiplication. It is not hard to show that every proper subgroup is generated by a primitive $p^n$-th complex root of unity for some $n$, hence is cyclic. -As an alternative description, consider the subgroup of the additive group $\mathbb{Q}/\mathbb{Z}$ that consists of all classes represented by a fraction whose denominator is a power of $p$. -A final description: consider the collection of groups $\{\mathbb{Z}/p^n\mathbb{Z}\}_{n\in\mathbb{N}}$, with injections $i_n\colon\mathbb{Z}/p^n\mathbb{Z}\to\mathbb{Z}/p^{n+1}\mathbb{Z}$ given by $i_n(a+p^n\mathbb{Z}) = pa+p^{n+1}\mathbb{Z}$. Then $\mathbb{Z}_{p^{\infty}}$ is the direct limit $\displaystyle\lim_{\longrightarrow}\mathbb{Z}/p^n\mathbb{Z}$ of this direct system. - -The additive group $\mathbb{Q}$ is almost such a group, in that every finitely generated subgroup is cyclic, but of course it contains noncyclic proper subgroups (e.g., the pre-image of the Prüfer group for some $p$).<|endoftext|> -TITLE: Differentiable+Not monotone -QUESTION [10 upvotes]: Is there a real function that is differentiable at every point but nowhere monotone? - -REPLY [12 votes]: Yes. See for example "Everywhere Differentiable, Nowhere Monotone, Functions" by Y. Katznelson and Karl Stromberg.<|endoftext|> -TITLE: Monotone+continuous but not differentiable -QUESTION [26 upvotes]: Is there a continuous and monotone function that's nowhere differentiable? - -REPLY [27 votes]: No. Even without the assumption of continuity, a monotone function on $\mathbb{R}$ is differentiable except on a set of measure $0$ (and it can have only countably many discontinuities). This is mentioned on Wikipedia, and proofs can be found in books on measure theory such as Royden or Wheeden and Zygmund. You can read the details in the latter book at this link. - -REPLY [13 votes]: Even any function of bounded variation is differentiable almost everywhere.<|endoftext|> -TITLE: Stochastic interpretation of Einstein equations -QUESTION [21 upvotes]: Einstein's theory of gravitation, general relativity, is a purely geometric theory. -In a recent question I wanted to know what the relation of Brownian motion to the Helmholtz equation is and got a very thorough answer from George Lowther. -He pointed out that there is, roughly speaking, a very general relation of semi-elliptic second order differential operators of the form -$$Af = \frac12 a^{ij}f_{,ij} + b^i f_{,i} - cf = 0$$ -to a "killed" Brownian motion. (I used the summation convention and $,i = \frac{\partial}{\partial x^i}$.) -Now, the Einstein field equations -$$R_{\mu\nu}-\frac12 g_{\mu\nu}R = \frac{8\pi G}{c^4}T_{\mu\nu}$$ -are coupled hyperbolic-elliptic partial differential equations (I dropped the cosmological constant here).
Can we somehow adapt the relation of a random process to this kind of equation, or - -Is there a way to interpret the Einstein equations stochastically? - -REPLY [6 votes]: The link between the Einstein equations and a stochastic process can be achieved through Ricci flow. I can state the idea very easily in the 2d case, while for higher dimensions things may become quite involved. The idea is that a stochastic process satisfies a diffusion equation -$$\partial_tP=\Delta_2P$$ -and one can write down the solution through a Wiener integral -$$P=\int[dx(t)]e^{-\frac{1}{2}\int_0^td\tau{\dot x}^2(\tau)}.$$ -When one extends this to a generic two-dimensional manifold, the diffusion equation, when applied to the metric, is that of the Ricci flow, as one just has the Laplacian replaced by the Beltrami operator applied to the metric. Then, the fixed point of this Ricci flow is just the Einstein equations for the two-dimensional manifold at hand. I have given some considerations about this, founded on a theorem by Baer and Pfaeffle (see here). -The exciting idea behind this is that a Ricci flow could always be derived from a stochastic process underlying a manifold. I think that this is material yet to be studied.<|endoftext|> -TITLE: analytic function on $\mathbb{C}$ such that $f\Bigl(\frac{1}{2n}\Bigr)=f\Bigl(\frac{1}{2n+1}\Bigr)=\frac{1}{2n}$ -QUESTION [5 upvotes]: I was working out some problems from an entrance test paper, and this problem looked a bit difficult. - -Does there exist an analytic function on $\mathbb{C}$ such that $$f\biggl(\frac{1}{2n}\biggr)=f\biggl(\frac{1}{2n+1}\biggr)=\frac{1}{2n} \quad \ \text{for all} \ n\geq 1?$$ - -I did apply $f$ repeatedly to get some more insights about the problem, but couldn't get anything from it. So how do I solve this one? - -REPLY [12 votes]: Suppose there is such an analytic $f(z)$, and let $g(z) = f(z) - z$. Then $g(z)$ is analytic as well. But $g(z)$ has a sequence of zeros accumulating at $0$ (in particular, $g(1/2n) = 0$), and the zeros of any nonconstant analytic function are isolated. Therefore $g(z) = 0$ identically, i.e. $f(z) = z$; but then $f\bigl(\frac{1}{2n+1}\bigr) = \frac{1}{2n+1} \neq \frac{1}{2n}$, a contradiction. So there is no such analytic function.<|endoftext|> -TITLE: Why is the space of surjective operators open? -QUESTION [12 upvotes]: Suppose $E$ and $F$ are given Banach spaces. Let $A : E \to F$ be a continuous surjective linear map. Why is there a small ball around $A$ in the operator topology, such that all elements in this ball are surjective? - -REPLY [16 votes]: A bounded linear operator $A: E \to F$, where $E$ and $F$ are Banach spaces, is surjective if and only if there is $c > 0$ such that for all $\phi \in F^*$, -$\|A^* \phi\| \ge c \|\phi\|$ (see e.g. Rudin, "Functional Analysis", Theorem 4.15). -Now if $A$ is such an operator, so is $B$ for $\|A-B\| < c$, since - $$\|B^* \phi\| \ge \|A^* \phi\| - \|A^* - B^*\| \|\phi\| \ge (c - \|A-B\|) \|\phi\|$$<|endoftext|> -TITLE: What is the intuition behind Gauss sums? -QUESTION [20 upvotes]: Let $ \chi $ be a character on the field $ F_p $, and fix some $a \in F_p $. -We define a Gauss sum to be: -$g_a (\chi) = \sum_{t\in F_p}\chi(t)\zeta^{at}$ where $\zeta$ is a primitive $p^{th}$ root of unity. -What is the intuition behind this definition? - -REPLY [19 votes]: When you say $\chi$ is a character of the field $F_p$, you really mean it is a character of the group $F_p^\times$.
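(As a concrete warm-up, the definition is easy to evaluate numerically; here is a small sketch of my own, for the quadratic character mod $7$, where the Gauss sum should have absolute value $\sqrt{7}$.)

    import cmath

    p = 7

    def chi(t):
        """Legendre symbol mod p (the quadratic character), with chi(0) = 0."""
        if t % p == 0:
            return 0
        return 1 if pow(t, (p - 1) // 2, p) == 1 else -1

    zeta = cmath.exp(2j * cmath.pi / p)          # a primitive p-th root of unity

    def g(a):
        return sum(chi(t) * zeta ** (a * t) for t in range(p))

    print(g(1))               # approximately i * sqrt(7), since 7 = 3 mod 4
    print(abs(g(1)) ** 2)     # approximately 7, i.e. |g_1(chi)|^2 = p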
Any (multiplicative) character $\chi$ on $F_p^\times$ can be extended to a function on $F_p$ by setting $\chi(0) = 0$, and with this convention $\chi$ as a function on $F_p$ is totally multiplicative. Fixing a choice of nontrivial $p$th root of unity $\zeta$, any function $f \colon F_p \rightarrow {\mathbf C}$ has a Fourier transform ${\mathcal F}f \colon F_p \rightarrow {\mathbf C}$ given by -$({\mathcal F}f)(a) = \sum_{t \in F_p} f(t)\overline{\zeta^{at}}$. So the Gauss sum $g_a(\chi)$ is essentially the Fourier transform at $a$ of the function $\chi$ (as a function on $F_p$). For more on Fourier transforms of functions on a finite abelian group, see Section 4 (starting at Definition 4.4) of -https://kconrad.math.uconn.edu/blurbs/grouptheory/charthy.pdf -Another intuition (besides the idea that a Gauss sum of a character is basically the Fourier transform of that character, viewed as a function on the additive group $F_p$) is that a Gauss sum is a discrete analogue of the Gamma function. See pp. 56--58 of Koblitz's book "$p$-adic Analysis: A Short Course on Recent Work" for a table illustrating this analogy (including the idea that a Jacobi sum is like the Beta function).<|endoftext|> -TITLE: Distinctness is maintained after adding some element to all sets -QUESTION [12 upvotes]: Let $S=\{S_1,S_2,\ldots,S_n\}$ be a set of $n$ distinct subsets with $S_i \subseteq \{1,\ldots,n\}$ for $i=1,\ldots, n$. Show that there exists $k \in \{1,\ldots,n\}$ such that the sets $S_i \cup \{k\}$, $i=1,\ldots,n$, are still distinct. -I found this in an old problem sheet of mine about sets and graph theory. Is there an elegant solution to this problem? - -REPLY [9 votes]: This result follows from Bondy's theorem (in fact, it is equivalent), which states that: -Given $\displaystyle n$ distinct sets $\displaystyle S_1, S_2, \dots, S_n$, each a subset of $\displaystyle \{1,2, \dots, n\}$, there is a set $\displaystyle A \subset \{1,2, \dots, n\}$, with $\displaystyle |A| \leq n-1$ such that the sets $\displaystyle S_i \cap A$ are all distinct. -Pick a $\displaystyle k \notin A$. Then we have that if $\displaystyle S_i \cup \{k\} = S_j \cup \{k\}$, then $\displaystyle (S_i \cup \{k\}) \cap A = (S_j \cup \{k\}) \cap A$. Since $\displaystyle k \notin A$, it follows that $\displaystyle S_i \cap A = S_j \cap A$, contradicting the result of Bondy's theorem. -You can find a short proof and a sketch of an elegant linear algebra proof (originally due to Babai and Frankl) of Bondy's theorem, in the excellent book, Extremal Combinatorics by Stasys Jukna.<|endoftext|> -TITLE: What are the differences and relations of Haar integrals, Lebesgue integrals, Riemann integrals? -QUESTION [13 upvotes]: Are Riemann integrals special cases of Haar integrals? Why do we need the invariance property under some group action in the definition of Haar integrals? For example, if we have a group of real matrices $G$ and we have an inner product $\langle \cdot , \cdot \rangle: G \times G \to \mathbb{R}$ such that for any $A, B$ in $G$ and any $C$ in the subgroup $O:=\{C\in G: C^{T}C=I\}$ we have $\langle A, B\rangle=\langle C^{T}AC,C^{T}BC\rangle$, then we can define an integral $\int_{G} f d\mu$ over $G$. Why do we need the property $\langle A, B\rangle =\langle C^{T}AC,C^{T}BC\rangle$? Thank you very much. -Edit: I am taking a course on random matrices. The course is related to the lecture notes: http://arxiv.org/abs/0801.1858. On page 2, equation (2.4), I think that the integral is a Haar integral. I don't know why we need (2.5) in the paper. Thank you very much.
- -REPLY [8 votes]: Lebesgue integration, in the sense of integration with respect to Lebesgue measure on $\mathbb{R}^n$, is a special case of Haar integration, because Lebesgue measure is the Haar measure on the abelian group $\mathbb{R}^n$ with addition and the usual topology (normalized so that the measure of a unit $n$-cube is $1$). Riemann integration is closely related to Lebesgue integration, but a fundamental difference is that the Riemann integral is not directly related to a measure, and as a result there are far fewer general results applicable. For example, a pointwise limit of uniformly bounded Riemann integrable functions on a bounded interval need not be Riemann integrable, even if the convergence is monotone. -Lebesgue integration can be seen as a completion of Riemann integration. If $(R)\int f$ denotes the Riemann integral of a continuous function with compact support on $\mathbb{R}^n$, then $f\mapsto(R)\int f$ is a positive linear functional on $C_c(\mathbb{R}^n)$, and the Riesz representation theorem yields a positive Borel measure $\mu$ on $\mathbb{R}^n$ such that $(R)\int f=\int fd\mu$ for all $f\in C_c(\mathbb{R}^n)$. The completion of $\mu$ is Lebesgue measure. For (proper) Riemann integrable functions, the Riemann and the Lebesgue integral are the same, but there are many functions that are "nice" from the measure theoretic perspective but not Riemann integrable, such as the characteristic function of the rationals on $\mathbb{R}$. On the other hand, there are functions whose improper Riemann integral exists but whose Lebesgue integral does not exist due to the positive and negative parts not being integrable, such as $\frac{\sin(x)}{x}$ on $\mathbb{R}$. In such cases, you could define an "improper" Lebesgue integral by taking limits as the domain increases to obtain the same result as with improper Riemann integrals, so there is no real loss in only considering Lebesgue integrals. -I recommend reading pages 195-197 of Körner's A companion to analysis for a nice discussion of the motivation of going beyond Riemann integration.<|endoftext|> -TITLE: Given an infinite number of monkeys and an infinite amount of time, would one of them write Hamlet? -QUESTION [269 upvotes]: Of course, we've all heard the colloquialism "If a bunch of monkeys pound on a typewriter, eventually one of them will write Hamlet." -I have a (not very mathematically intelligent) friend who presented it as if it were a mathematical fact, which got me thinking... Is this really true? Of course, I've learned that dealing with infinity can be tricky, but my intuition says that time is countably infinite while the number of works the monkeys could produce is uncountably infinite. Therefore, it isn't necessarily given that the monkeys would write Hamlet. -Could someone who's better at this kind of math than me tell me if this is correct? Or is there more to it than I'm thinking? - -REPLY [4 votes]: NOTE: By probability, I mean the chance of it happening per iteration, and starting with a new page each time. -Let's take a step back, shall we? (Not too many, because there's a cliff behind you.) Let's think of what the probability is of producing the following randomly: - -A - -Assuming there are only 26 characters (A-Z, uppercase), the probability would be $\frac{1}{26}$. -What about this: - -AA - -It'd be $(\frac{1}{26})^2$. This: - -AAA - -It'd be $(\frac{1}{26})^3$. And this: - -XKCD - -It'd be $(\frac{0}{26})^4$. [Just kidding, it's: $(\frac{1}{26})^4$]. 
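(If you want to play with how fast these chances shrink, and how surely a match still arrives given enough keystrokes, here is a tiny sketch; the block-by-block model is a simplification of mine, not part of the original argument.)

    # Simplification: chop the monkey's output into independent blocks of c
    # keystrokes and ask for at least one block that matches the target exactly.
    c = 6                        # e.g. the six letters of "HAMLET"
    p = (1 / 26) ** c            # chance that a single block matches
    for n in (10**6, 10**9, 10**12):
        print(n, 1 - (1 - p) ** n)   # chance of at least one match in n blocks
    # The probability tends to 1 as n grows: with unlimited typing, a match
    # is probabilistically certain (the Borel-Cantelli flavor of the claim).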
-So, for every character we add to the quote, the probability will be $(1/26)^c$, where $c$ represents the number of characters. -Basically, it would be a probability of $(\frac{1}{26*2+12})^c$ since the characters used could be: A-Z, a-z, .!?,;: "'/() Of course, there could be more characters, but that's just an example. :)<|endoftext|> -TITLE: Smooth complete intersection counterexample -QUESTION [8 upvotes]: Does anyone know of a nice example of a non-singular complete intersection $X$ (say in $\mathbb{P}^n_k$, maybe even $k=\overline{k}$, char($k$)=0) such that it cannot be written as $E\cap H$ where $E$ is a non-singular complete intersection and $H$ is a hypersurface (or maybe this can always be done)? -For instance, it is often the case that one might want to use an induction argument for non-singular complete intersections, and you know that you can write $X=E\cap H$, but if the argument needs to use non-singularity then we don't necessarily know that $E$ satisfies this. - -REPLY [16 votes]: (1) If you ask that $E$ and $H$ both be nonsingular, the answer is no. In $\mathbb{P}^3$, let $Q$ be a degree $2$ hypersurface with a singularity at one point $p$ (the cone on a smooth conic). Let $C$ be a generic smooth cubic; so $C$ does not pass through $p$ and is transverse to $Q$. Then $X:=C \cap Q$ is a degree $6$, genus $4$, smooth curve, embedded by its canonical linear system. -If we want to write $X$ as $A \cap B$, for hypersurfaces $A$ and $B$, then we must have $(\deg A)(\deg B) = 6$. Since $X$ does not lie in any plane, one of $A$ and $B$ must have degree $2$. There is only one degree two hypersurface containing $X$, namely $Q$, and $Q$ is not smooth. -In fact, I can give a smaller example. In $\mathbb{P}^2$, take a smooth cubic $C$ and intersect it with $Q$, the union of two lines, where the crossing point of the lines is not in $C$. You get $6$ points, and the same logic applies. -(2) If you work in $\mathbb{P}^n$, over a characteristic zero field, and only ask that $E$ be smooth (and not $H$), the answer is yes. Let $X = \{ f_1 = f_2 = \cdots = f_r =0 \}$ with $\deg f_i = d_i$, and reorder the terms such that $d_1 \geq d_2 \geq \cdots \geq d_r$. Choose $h_{ij}$ a polynomial of degree $d_i - d_j$, and otherwise generic. Define $g_i = f_i + \sum_{j>i} h_{ij} f_j$. Clearly, the $g_i$ and $f_i$ generate the same homogeneous ideal. We claim that, for generic values of the $h_{ij}$, every intersection $\{ g_1 = g_2 = \cdots = g_s = 0 \}$ is smooth, for $s \leq r$. -Proof: By induction on $s$. When $s=0$, the claim is trivial. By induction, fix $h_{ij}$ for $i$<|endoftext|> -TITLE: Field Extensions -QUESTION [7 upvotes]: Let $L/K$ be a finite extension and $f(x)\in K[x]$ a non-linear irreducible polynomial. Prove that if $\mathrm{gcd}\left( \mathrm{deg}(f) , \left[ L:K \right] \right)=1$ then $f(x)$ has no roots in $L$. - -Added: (Solution based on the answer below) -Suppose $f(x)$ has a root in $L$, namely $\alpha$, and consider the extension $K(\alpha)/K$. Since $f$ is irreducible we have that $[K(\alpha) : K] = \mathrm{deg}(f) > 1$. On the other hand we have that $[L:K]=[L:K(\alpha)][K(\alpha):K]$. Then $[L:K]=[L:K(\alpha)](\mathrm{deg}(f))$, so $\mathrm{deg}(f)$ divides $[L:K]$; but this is impossible since $\mathrm{gcd}\left( \mathrm{deg}(f) , \left[ L:K \right] \right)=1$ and $\mathrm{deg}(f) > 1$. - -REPLY [8 votes]: What do you know about the degree $[K(\alpha):K]$ of an extension when $\alpha$ is a root of an irreducible polynomial $g(x)\in K[x]$?
-What do you know about the degrees $[L:K]$, $[K:F]$, and $[L:F]$ of extensions when you have a tower $F\subset K\subset L$?<|endoftext|> -TITLE: What is the property where f(g(x)) = g(f(x))? -QUESTION [9 upvotes]: What is the property where f(g(x)) = g(f(x))? - -REPLY [15 votes]: Besides being called (composition) commutative, it is sometimes also said that such functions are permutable, e.g. see here. As an example, a classic result of Ritt shows that permutable polynomials are, up to conjugation by a linear polynomial, either both powers of x, both iterates of the same polynomial, or both Chebychev polynomials.<|endoftext|> -TITLE: How to prove that this kind of differential form exists on an algebraic curve? -QUESTION [5 upvotes]: The following is a problem in Miranda's Algebraic Curves and Riemann Surfaces. -Given any algebraic curve $X$ and a point $p \in X$, show that there is a meromorphic $1$-form $\omega$ on $X$ whose Laurent series at $p$ looks like $dz/z^n$ for $n > 1$, and which has no other poles on $X$. -The point of this is as a step towards the proof that the Mittag-Leffler problem can be solved for $X$. - -REPLY [2 votes]: This is equivalent to showing that $$h^0(\Omega[n\cdot p]) > 0$$ for all n > 1. -By Riemann-Roch, we have $$h^0(O_X [-n\cdot p]) - h^0(\Omega[n\cdot p]) = -n + 1 - g$$ Since the degree of the divisor is negative, $$h^0(O_X [-n\cdot p]) = 0$$ so, when rearranging, we get: $$h^0(\Omega[n\cdot p]) = n + g - 1$$ If n > 1, then n + g - 1 > 0 for all g. In fact the same computation gives $h^0(\Omega[(n-1)\cdot p]) = n + g - 2$, so the dimension strictly increases when the allowed pole order goes from $n-1$ to $n$; hence some form has a pole at $p$ of exact order $n$. Its residue vanishes (the residues of a meromorphic $1$-form sum to zero, and $p$ is the only pole), so after rescaling, and subtracting forms of lower pole order constructed inductively, the Laurent tail at $p$ is exactly $dz/z^n$.<|endoftext|> -TITLE: Modules $M$ such that the automorphism of $M \otimes M \otimes M$ induced by the permutation $(123)$ is the identity -QUESTION [6 upvotes]: I've been struggling with the following problem for a couple of days and I don't seem to get any further: -Let $R$ be a commutative ring. I would like to get (something like) a classification of all finitely generated $R$-modules $M$ that satisfy the following condition: -When we look at $M \otimes_R M \otimes_R M$, the permutation (123) induces an automorphism of $M \otimes_R M \otimes_R M$ by sending $a \otimes b \otimes c$ to $c \otimes a \otimes b$. I demand that this automorphism be the identity map. In other words, in $M \otimes_R M \otimes_R M$, the elements $a \otimes b \otimes c, c \otimes a \otimes b, b \otimes c \otimes a$ should all be the same. -If $R$ is a field, it is easy to see that the only non-trivial finitely generated $R$-modules (i.e. finite-dimensional vector spaces) that satisfy this condition are the 1-dimensional ones. Furthermore, one sees more generally that all cyclic modules satisfy the condition, too. Up to now I've neither come up with an example of a non-cyclic module that satisfies the condition, nor was I able to prove that all modules that satisfy this condition must be cyclic. -Can somebody help with this matter? - -REPLY [9 votes]: It is not true that a module that satisfies the above condition is cyclic. For instance, it is possible that $M \neq 0$ but $M \otimes M$ (and thus higher tensor powers) are zero. Consider for instance $M = \mathbb{Q}/\mathbb{Z}$ over the integers. Then $$M \otimes M = \mathbb{Q}/\mathbb{Z} \otimes \mathbb{Q}/\mathbb{Z} = 0$$ -since $\mathbb{Q}/\mathbb{Z}$ is both divisible and torsion, and a divisible group tensored with a torsion group is zero (if $ny = 0$, write $x = nx'$, so $x \otimes y = x' \otimes ny = 0$). -Now, suppose $M$ is finitely generated. The above example was not (and, in fact, the above phenomenon can't happen for a finitely generated module).
Then it is easy to see that if $M$ satisfies your condition, so does the base-change $M \otimes_R R'$ for $R'$ any $R$-algebra (considered as an $R'$-module, that is!) because base extension is a monoidal functor with respect to the tensor product. -It follows that for any prime ideal $\mathfrak{p}$, $M \otimes k(\mathfrak{p})$ (for $k(\mathfrak{p})$ the residue field, i.e. the quotient field of the residue ring) satisfies this property, so from what you have proved about vector spaces, it has rank at most one. Thus all the fibers of $M$ are of rank at most one. -So at least $M$ is "close" to being cyclic. -More generally, the above observation on base-change reduces the question to the case of a local ring, because to check that two things are equal, it suffices to check on all the localizations. By Nakayama's lemma and your observation for a field, it follows that over a local ring any module satisfying your property is in fact cyclic. For local rings, we thus have an if and only if statement. -Putting the above observations all together, we find: - - -A module satisfies your property if and only if the localization at each prime is cyclic (as a module over the localized ring).<|endoftext|> -TITLE: How many possible combinations are there in Hua Rong Dao? -QUESTION [6 upvotes]: How many possible combinations are there in Hua Rong Dao? -Hua Rong Dao is a Chinese sliding puzzle game, also called Daughter in a Box in Japan. You can see a picture here and an explanation here. -The puzzle is a $4 \times 5$ grid (four squares wide, five tall) with these pieces: - -$2 \times 2$ square ($1$ piece) -$1\times 2$ vertical ($4$ pieces) -$2 \times 1$ horizontal ($1$ piece) -$1 \times 1$ square ($4$ pieces) - -Though traditionally each type of piece will have different pictures, you can treat each of the $1\times 2$'s as identical and each of the $1\times 1$'s as identical. -The goal is to slide around the pieces (not removing them) until the $2 \times 2$ "general" goes from the middle top to the middle bottom (where it may slide out of the border). -I'm not concerned in this question with the solution, but more curious about the number of combinations. Naively, I can come up with an upper bound like this: -Place each piece on the board, ignoring overlaps. -The $2\times2$ can go in any of $3 \cdot 4 = 12$ squares -The $1\times2$ (vertical) can go in any of $4 \cdot 4 = 16$ squares -The $2\times1$ (horizontal) can go in any of $3 \cdot 5 = 15$ squares -The $1 \times 1$ can go in any of $4\cdot 5 = 20$ squares -If you place the pieces one at a time, subtracting out the used squares: -place the $2 \times 2$: $12$ options -place each of the $1 \times 2$'s: $\dfrac{(16 - 4) (16 - 6) (16 - 8) (16 - 10)}{ 4!}$ options -place the $2 \times 1$: $15 - 6$ options -place the $1 \times 1$'s: ${ {20-14} \choose 4} = 15$ options -Multiplied together this works out to $388{,}800$. -Is there any way I might be able to narrow this down further? The two obvious things not taken into account are blocked pieces (a $1 \times 2$ or $2 \times 1$ piece will not fit into two separated squares) and the fact that not all possibilities might be accessible when sliding from the starting position. -Update: -I realized that the puzzle is bilaterally symmetrical, so if you just care about meaningful differences between positions, you can divide by two. - -REPLY [2 votes]: A straightforward search yields the figure of $4392$ for all but the $1 \times 1$ stones. The former fill $14$ out of $20$ squares, so there are $\binom{6}{4} = 15$ possibilities to place the latter.
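(For anyone who wants to reproduce that count, here is a minimal brute-force sketch, with board encoding and names of my own, that enumerates non-overlapping placements of the $2\times2$, the four vertical pieces, and the horizontal piece; it should print the $4392$ quoted above.)

    from itertools import combinations

    W, H = 4, 5  # board: 4 columns wide, 5 rows tall

    def mask(cells):
        m = 0
        for x, y in cells:
            m |= 1 << (y * W + x)
        return m

    big   = [mask([(x, y), (x+1, y), (x, y+1), (x+1, y+1)])
             for x in range(W-1) for y in range(H-1)]                        # 12 placements
    vert  = [mask([(x, y), (x, y+1)]) for x in range(W) for y in range(H-1)]  # 16 placements
    horiz = [mask([(x, y), (x+1, y)]) for x in range(W-1) for y in range(H)]  # 15 placements

    count = 0
    for b in big:
        for h in horiz:
            if b & h:
                continue
            free = [v for v in vert if not v & (b | h)]
            for combo in combinations(free, 4):   # the 4 verticals are identical
                used, ok = b | h, True
                for v in combo:
                    if used & v:
                        ok = False
                        break
                    used |= v
                if ok:
                    count += 1
    print(count)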
In total, we get $$4392 \times 15 = 65880.$$ These can all be generated, and one can in principle calculate the number of connected components in the resulting graph, where the edges correspond to movements of pieces. -Edit: There are 898 different connected components. There are 25955 configurations reachable from the initial state.<|endoftext|> -TITLE: Is there a way to rotate the graph of a function? -QUESTION [39 upvotes]: Assuming I have the graph of a function $f(x)$, is there a function $f_1(f(x))$ that will give me a rotated version of the graph of that function? -For example if I plot $\sin(x)$ I will get a sine wave which straddles the $x$-axis; can I apply a function to $\sin(x)$ to yield a wave that straddles the line that would result from $y = 2x$? - -REPLY [6 votes]: For common functions, it is very easy. $f(x)$ rotated by $\phi$ can be calculated from $(x+f(x)\cdot i)(\cos(\phi)+\sin(\phi)\cdot i)$, treating complex numbers as coordinates. Let's, however, replace $x$ with $t$, just to reduce confusion. -$(t+f(t)\cdot i)(\cos(\phi)+\sin(\phi)\cdot i) = t\cos(\phi)-f(t)\sin(\phi)+t\sin(\phi)\cdot i+f(t)\cdot \cos(\phi)\cdot i$ -In parametric form, that's: -$X=t\cos(\phi)-f(t)\sin(\phi)$ -$Y=t\sin(\phi)+f(t)\cos(\phi)$ -To convert that to a function, we find $t$ as a function of $x$ and plug that into $Y$ as a function of $t$. -This is possible with some equations, such as $f(t)=t^2$ or $f(t)=\dfrac 1t$. However, with the sine function, it's not very easy. In fact, there is no closed-form function for the rotation of a sine function. However, you can represent it as an infinite polynomial. -The parametric form of this graph would be -$X=\dfrac{t-2\sin(t)}{\sqrt5}$ -$Y=\dfrac{2t+\sin(t)}{\sqrt5}$ -To approximate a polynomial $y$-as-a-function-of-$x$ formula, we find the coefficients for each part of this formula. -The $x^0$ coefficient is the $y$-intercept divided by $0!$: ($y$ when $x$ is zero)/$0!$ -The $x^1$ coefficient is the $y$-intercept of the derivative divided by $1!$: $((y$ when $x$ is $0.00001)-(y$ when $x$ is $0))/0.00001/1!$ -The $x^2$ coefficient is the $y$-intercept of the second derivative divided by $2!$: $((y$ when $x$ is $0.00002)-2*(y$ when $x$ is $0.00001)+(y$ when $x$ is $0))/0.00001/0.00001/2!$ -The $x^3$ coefficient is the $y$-intercept of the third derivative divided by $3!$: $((y$ when $x$ is $0.00003)-3*(y$ when $x$ is $0.00002)+3*(y$ when $x$ is $0.00001)-(y$ when $x$ is $0))/0.00001/0.00001/0.00001/3!$ -In case you haven't noticed, I'm using Pascal's triangle in this calculation. -I hope this helps!<|endoftext|> -TITLE: Why should I avoid the Frobenius Norm? -QUESTION [9 upvotes]: I vaguely remember the Frobenius matrix norm -( ${||A||}_F = \sqrt{\sum_{i,j} a_{i,j}^2}$ ) was somehow considered unsuitable for numerical analysis applications. I only remember, however, that it was not a subordinate matrix norm, but only because it did not take the identity matrix to $1$. It seems this latter problem could be solved with a rescaling, though. I don't remember my numerical analysis text considering this norm any further after introducing this fact, which seemed to be its death-knell for some reason. -The question, then: for fixed $n$, when looking at $n \times n$ matrices, are there any weird gotchas, deficiencies, oddities, etc, when using the (possibly rescaled) Frobenius norm? For example, is there some weird series of matrices $A_i$ such that the Frobenius norm of the $A_i$ approaches zero while the $\ell_2$-subordinate norm does not converge to zero?
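(A quick random test, a numpy sketch of my own and purely illustrative, finds no such sequence; the spectral norm always comes out at or below the Frobenius norm:)

    import numpy as np

    rng = np.random.default_rng(0)
    for n in (2, 4, 8):
        A = rng.standard_normal((n, n))
        fro = np.linalg.norm(A, 'fro')   # sqrt of the sum of squared entries
        spec = np.linalg.norm(A, 2)      # largest singular value of A
        print(n, spec <= fro, round(spec, 3), round(fro, 3))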
(It seems like that can not happen because the $\ell_2$ norm is the square root of the largest eigenvalue of $A^*A$, and thus bounded from above by the Frobenius norm...) - -REPLY [5 votes]: As Joel Tropp puts it: - -Frobenius norm error bounds are typically vacuous. - -"An Introduction to Matrix Concentration Inequalities," (arxiv), page 84 -The reason is explained there, and also in the Appendix of this paper. Essentially, noise shows up as a long tail of singular values that are individually much smaller than the leading singular value, but when summed up, may exceed the leading singular value. The squared Frobenius norm is the sum of the squares of the singular values, and hence you are just measuring noise--the signal has little effect. The spectral norm, on the other hand, is just the leading singular value--and hence it is measuring the actual signal.<|endoftext|> -TITLE: "On the consequences of an exact de Bruijn Function", or "If Ramanujan had more time..." -QUESTION [7 upvotes]: In this question on Math.SE, I asked about Ramanujan's (ridiculously close) approximation for counting the number of 3-smooth integers less than or equal to a given positive integer $N$, namely, -\begin{eqnarray} -\frac{\log 2 N \ \log 3 N}{2 \log 2 \ \log 3}. -\end{eqnarray} -In his posthumously published notebook, Ramanujan generalized this formula to similarly counting numbers of the form $b_1^{r_1} b_{2}^{r_{2}}$, where $b_1, b_2 > 1$ are (not necessarily distinct) natural numbers and $r_1, r_2 \geq 0$: -\begin{eqnarray} -\frac{\log b_1 N \ \log b_2 N}{2 \log b_1 \ \log b_2}, -\end{eqnarray} -where a factor of $\frac{1}{2}$ is to be added if $N$ is of the prescribed form. Both of these problems can be understood as counting the number of non-negative integer solutions of the Diophantine inequality $(\log b_1) x_1 + (\log b_2) x_2 \leq \log N$, which also counts the number of $\mathbb{Z}$-lattice points in the $\log N$ dilate of the $2$-polytope $\mathcal{P} = \textbf{conv}(\mathbf{0}, (\log b_{1})^{-1} \mathbf{e}_1, (\log b_2)^{-1} \mathbf{e}_{2})$. Here, $\{ \mathbf{e}_{i}\}$ is the standard basis of $\mathbb{R}^{n}$. Unfortunately, counting lattice points in real polytopes is hard. -My question, now, is how important a contribution to mathematics would it be if one had an exact formula for counting $y$-smooth integers less than or equal to a real $x > 0$, i.e., an exact formula for the de Bruijn function $\Psi(x,y)$? This would, at the same time, translate into an exact formula for counting lattice points of real polytopes of the form $\textbf{conv}(\mathbf{0}, a_{1} \mathbf{e}_{1}, \dots, a_{n} \mathbf{e}_{n})$ with $a_{i} \in \mathbb{R}_{> 0}$. -I'm aware of the review articles by Pomerance, Granville, Hildebrand and Tenenbaum, which each deal with various estimates of the de Bruijn function and its usefulness in the Quadratic Sieve method of factorization and applications to cryptography, and even Waring's Problem. However, none of these review articles deal with the consequences of having an exact formula.
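(For what it's worth, $\Psi(x,y)$ is trivial to evaluate by brute force for small parameters; a throwaway sketch of mine, if only to watch Ramanujan's approximation at work:)

    import math

    def is_smooth(n, y):
        """True if every prime factor of n is at most y (trial division)."""
        d = 2
        while d * d <= n:
            while n % d == 0:
                if d > y:
                    return False
                n //= d
            d += 1
        return n <= y   # the leftover factor is 1 or a prime

    def psi(x, y):
        return sum(1 for n in range(1, x + 1) if is_smooth(n, y))

    x = 100
    print(psi(x, 3))   # exact count of 3-smooth n <= 100: 20
    print(math.log(2 * x) * math.log(3 * x) / (2 * math.log(2) * math.log(3)))
    # Ramanujan's approximation, about 19.8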
-(Probably not the answer you were expecting, though.)<|endoftext|> -TITLE: Alternatives to show that $|\mathbb{R}|>|\mathbb{Z}|$ -QUESTION [6 upvotes]: Cantor's Diagonal Argument is the standard proof of this theorem. However, there must be other proofs; what are some of them? -I am asking this because whenever I think of this question, I immediately think of Cantor's argument, which kills the possibility of other interesting finds. - -REPLY [2 votes]: I read about this proof in Raymond Smullyan's book "Satan, Cantor and Infinity". The proof is due to William Zwicker (according to the book) and involves something called a hypergame. I'll first mention the hypergame paradox (because it's fun and can give some insight into the proof) and then write the proof. -Let's call a game with turns a normal game if it always ends after finitely many turns. For example tic-tac-toe and checkers are both normal games. Now imagine the following game which I will call hypergame: In the first turn the first player names a normal game. In the second turn the second player plays the first move of the normal game named, and then the players continue playing that game in turns. -The paradox arises from the question: Is the hypergame a normal game? Assume that it is, and assume that we try to play hypergame: I go first and say "Let's play hypergame" (I can do that since it's a normal game). Then it's your turn and you can say "Let's play hypergame" and so forth. Thus the hypergame cannot be normal, since if it is we can start a hypergame that doesn't end after finitely many turns. But if the hypergame isn't normal then every hypergame will end after finitely many turns (since every game I will name will end in finitely many moves and our game will last that number of turns plus one), which will make the hypergame normal! -Now the proof: Assume that we have a function $f:\mathbb{N}\to\mathcal{P}(\mathbb{N})$, and let's say that $f$ sends $n$ to $X_n$. We define a path of numbers as a finite or infinite sequence of numbers such that an element $n$ of the path is either the last (and the path is finite) or its successor in the path belongs to $X_n$. For example take a number $n$. If $m\in X_n$ then a successor of $n$ in a path can be $m$. If $X_n$ is empty then $n$ is always the last element of a path. Now we call an element $n\in\mathbb{N}$ normal if every path that begins with $n$ is finite. -Finally take the set $Z$ that contains exactly all the normal elements of $\mathbb{N}$. It's easy to see that there doesn't exist an $n_0$ such that $Z=X_{n_0}$, which would mean that $f$ is not onto. The argument is essentially the same as the one I used above: Assume that $Z=X_{n_0}$. If $n_0\in Z$ then there is an infinite path, namely $(n_0,n_0,n_0,\ldots)$, but $Z$ is the set of all normal elements so that's impossible. If $n_0\notin Z$ then there is an infinite path that begins with $n_0$ (by the definition of $Z$). Let's call it $(n_0,m,k,\ldots)$. But then there is an infinite path that begins with $m$ (take the aforementioned path and remove $n_0$ from the start). This is again impossible since $m\in Z$ (by the definition of a path) and every element of $Z$ is normal. -This proof uses self-reference but doesn't seem to use the diagonal lemma. I don't know if it's hidden somewhere in the proof (though it looks to me like it isn't). - -REPLY [2 votes]: Cantor had several other proofs before coming up with the diagonalization argument. (See http://en.wikipedia.org/wiki/Cantor%27s_first_uncountability_proof for one version.
Or https://mathoverflow.net/questions/23953/earliest-diagonal-proof-of-the-uncountability-of-the-reals.) -I also read a recent account of this in a Math Monthly article. (I think a version of that article is at http://www.math.jhu.edu/~wright/Cantor_Pick_Phi.pdf.) -The "Nested Intervals" proof is really very nice. Assume the reals are countable and so arrange them as a sequence. Then use that ordering to create intervals $[a_n, b_n]$ where $a_{n-1} < a_n < b_n < b_{n-1}$; a point in the intersection of all these nested intervals can then be shown to be missed by the sequence.<|endoftext|> -TITLE: If $g\geq2$ is an integer, then $\sum\limits_{n=0}^{\infty} \frac{1}{g^{n^{2}}} $ and $ \sum\limits_{n=0}^{\infty} \frac{1}{g^{n!}}$ are irrational -QUESTION [7 upvotes]: How do we show that if $g \geq 2$ is an integer, then the two series $$\sum\limits_{n=0}^{\infty} \frac{1}{g^{n^{2}}} \quad \ \text{and} \ \sum\limits_{n=0}^{\infty} \frac{1}{g^{n!}}$$ both converge to irrational numbers? -Well, I tried to see what happens if they converge to a rational, but couldn't get anything out of it. - -REPLY [9 votes]: Just for the record: the usual proof of irrationality is as follows (say for the first sum). Suppose that the sum is $p/q$. Multiply by $qg^{n^2}$, where $n$ is "big enough". We get that $$\sum_{m > n} \frac{q}{g^{m^2-n^2}}$$ is an integer. However, the latter is bounded by the geometric series $$q \sum_{t > 0} g^{-Nt} = \frac{q}{g^N} \cdot \frac{1}{1-g^{-N}} = \frac{q}{g^N-1},$$ where $N = (n+1)^2 - n^2 = 2n+1$. When $n$ is big enough, $q/(g^N-1) < 1$, contradiction. -This proof works for any series $$\sum_{n=1}^\infty \frac{1}{\prod_{i=1}^n a_i}$$ where the non-zero integers $a_i$ satisfy $|a_i| \rightarrow \infty$.<|endoftext|> -TITLE: Categorification of $\pi$? -QUESTION [24 upvotes]: Is there a categorification of $\pi$? -I have to admit that this is a very vague question. Somehow it is motivated by this recent MO question, which made me stare at some digits and somehow forget my animosity toward this branch of mathematics, wondering if there is a connection to the branches I love so much. -$\pi$ is the area of the unit circle, so perhaps we have to categorify the unit circle (using the projective line?) and the area of such an object. Areas are values of integrals, and there are some kinds of integrals in category theory (ends), but this is really just a wild guess. -For some great examples of categorification see this list on MO, and for the meaning of categorification see this MO question or that article by Baez/Dolan. Inspired by the answers of Todd Trimble, we may consider a categorified sine function -$$\sin(X) = \sum_{n \geq 0} (-1)^{\otimes n} X^{\otimes (2n+1)} / (2n+1)!$$ -in any complete symmetric monoidal category (if we can make sense of $-1$). -Edit: Perhaps $(-1)$ should be the universal invertible object $\mathcal{L}$ such that the symmetry on $\mathcal{L} \otimes \mathcal{L}$ equals $-1$. In the theory of $k$-linear cocomplete symmetric monoidal categories, this is the category of super vector spaces over $k$, i.e. $\mathbb{Z}/2$-graded vector spaces, with a twisted symmetry. Here, $\mathcal{L}$ is $1$-dimensional concentrated in degree $1$. Thus, we have a sine function for super vector spaces, namely $\sin(V) = \oplus_{n \geq 0} \mathcal{L}^{\otimes n} \otimes V^{\otimes (2n+1)} / \Sigma_{2n+1}$. What does it look like, and can we extract something which resembles $\pi$? - -REPLY [12 votes]: One approach to questions of this general nature is to find a groupoid whose groupoid cardinality is equal to the number in question; this is a form of groupoidification.
For example, the groupoid cardinality of the groupoid $\text{FinSet}_0$ of finite sets and bijections is $\sum \frac{1}{n!} = e$, so this is a reasonable categorification of $e$; in fact arguably this is "the" categorification of $e$ and goes a long way towards explaining its prevalence in mathematics. Similarly, for any finite set $X$, the groupoid cardinality of the groupoid of $X$-colored finite sets and color-preserving bijections is $\sum \frac{|X|^n}{n!} = e^{|X|}$. Note that this groupoid is the coproduct of $|X|$ copies of $\text{FinSet}_0$; see also this math.SE question. -In TWF188 John Baez mentions that he looked into the problem of finding a "natural" groupoid whose cardinality is $\pi$, but that he (and possibly some collaborators) weren't able to come up with any nice examples. So possibly this is the wrong direction to go in the particular case of $\pi$.<|endoftext|> -TITLE: limsup and liminf of a sequence of subsets of a set -QUESTION [8 upvotes]: I am confused when reading Wikipedia's article on limsup and liminf of a sequence of subsets of a set $X$. - -It says there are two different ways to define them, but first gives what is common to the two. Quoted: - -There are two common ways to define the limit of sequences of sets. In both cases: -The sequence accumulates around sets of points rather than single points themselves. That is, because each element of the sequence is itself a set, there exist accumulation sets that are somehow nearby to infinitely many elements of the sequence. -The supremum/superior/outer limit is a set that joins these accumulation sets together. That is, it is the union of all of the accumulation sets. When ordering by set inclusion, the supremum limit is the least upper bound on the set of accumulation points because it contains each of them. Hence, it is the supremum of the limit points. -The infimum/inferior/inner limit is a set where all of these accumulation sets meet. That is, it is the intersection of all of the accumulation sets. When ordering by set inclusion, the infimum limit is the greatest lower bound on the set of accumulation points because it is contained in each of them. Hence, it is the infimum of the limit points. -The difference between the two definitions involves how the topology (i.e., how to quantify separation) is defined. In fact, the second definition is identical to the first when the discrete metric is used to induce the topology on $X$. - -Because it mentions that a sequence of subsets of a set $X$ accumulates around some accumulation subsets of $X$, is there some topology on the power set of the set for this accumulation to make sense? What kind of topology is that? Is it induced from some structure on the set $X$? Is it possible to use mathematical symbols to formalize what is meant by "supremum/superior/outer limit" and "infimum/inferior/inner limit"? -If I understand correctly, here is the first way to define limsup/liminf of a sequence of subsets. Quoted: - -General set convergence -In this case, a sequence of sets approaches a limiting set when the elements of each member of the sequence approach the elements of the limiting set. In particular, if $\{X_n\}$ is a sequence of subsets of $X$, then: -$\limsup X_n$, which is also called the outer limit, consists of those elements which are limits of points in $X_n$ taken from (countably) infinitely many n.
That is, $x \in \limsup X_n$ if and only if there exists a sequence of points $x_k$ and a subsequence $\{X_{n_k}\}$ of $\{X_n\}$ such that $x_k \in X_{n_k}$ and $x_k \rightarrow x$ as $k \rightarrow \infty$. -$\liminf X_n$, which is also called the inner limit, consists of those elements which are limits of points in $X_n$ for all but finitely many n (i.e., cofinitely many n). That is, $x \in \liminf X_n$ if and only if there exists a sequence of points $\{x_k\}$ such that $x_k \in X_k$ and $x_k \rightarrow x$ as $k \rightarrow \infty$. - -So I think for this definition, $X$ is required to be a topological space. This definition is expressed in terms of convergence of a sequence of points in $X$ with respect to the topology of $X$. Referring back to what is common to the two ways of defining them, I wonder how to explain what an "accumulation set" is in this definition, and with respect to what topology it accumulates; i.e., how does the definition here fit into the aforementioned common description of the two ways? -It says there are two ways to define the limit of a sequence of subsets of a set $X$. But there seems to be just one in the article, as quoted in 2. So I was wondering: what is the second way it refers to? -Before you give your answer, here is my thought/guess (which has actually been written in the article, but not in a way saying it is the second one). Please correct me. -In an arbitrary complete lattice, by viewing meet as inf and join as sup, the limsup of a sequence of points $\{x_n\}$ is defined as: $$\limsup \, x_n = \inf_{n \geq 0} \left(\sup_{m \geq n} \, x_m\right) = \mathop{\wedge}\limits_{n \geq 0}\left( \mathop{\vee}\limits_{m \geq n} \, x_m\right) $$ and liminf is defined similarly. -The power set of any set is a complete lattice with union and intersection being join and meet, so the liminf and limsup of a sequence of subsets can be defined in the same way. I was wondering if this is the other way the article tries to introduce? If it is, then this second way of definition does not require $X$ to be a topological space. So how does this second way fit into what is common for the two ways in Part 1, which seems to require some kind of topology on the power set of $X$? -I understand this way of definition can be shown to be equivalent to a special case of the first way in my part 2, when the topology on $X$ is induced by the discrete metric. This is another reason that makes me doubt it is the second way, because I guess the second way should at least not be equivalent to a special case of the first way. -Can the two ways of definition fit into any definition for the general cases? In the general cases, limsup/liminf is defined for a sequence of points in a set with some structure. Can limsup/liminf of a sequence of subsets of a set be viewed as limsup/liminf of a sequence of "points"? If not, must a sequence of subsets in some cases be treated just as a sequence of subsets, and not as a sequence of "points"? -EDIT: @Arturo: In the last part of your reply to another question, did you try to explain how limsup/liminf of a sequence of points can be viewed as limsup/liminf of a sequence of subsets? I actually want to understand in the opposite direction: -Here is a post with my current knowledge about limsup/liminf of a sequence of points in a set.
For limsup/liminf of a sequence of subsets of any set $X$, defined in terms of union and intersection of subsets of $X$ as in part 3, it can be viewed as limsup/liminf of a sequence of points in a complete lattice, by viewing the power set of $X$ as a complete lattice. But for limsup/liminf of a sequence of subsets of any set defined in part 2, when $X$ is a topological space, I was wondering if there is some way to view it as limsup/liminf of a sequence of points in some set. - -It would also be great if you have other approaches to understanding all the ways of defining limsup/liminf of a sequence of subsets, other than the approach in Wikipedia. -Thanks and regards! - -REPLY [4 votes]: First, you might also want to take a look at this answer to a similar question. -Okay: the first description assumes that there is some sort of notion of "accumulation point" at work in the set $X$, as you surmise; this may be derived from a topology. -The second description talks about limit points, but you can apply it to any set by endowing the set with the discrete topology (every subset is open, every subset is closed). If you do that, then the definition is the usual definition of limit superior of a sequence of sets: it is the collection of all points that are in infinitely many of the terms of the sequence, while the limit inferior is the collection of all points that are in all sufficiently large terms of the sequence. -The "second way" of defining it is in terms of unions and intersections. If $\{X_n\}_{n\in\mathbb{N}}$ is a family of sets, then -\begin{align*} -\limsup_{n\in\mathbb{N}} X_n &= \bigcap_{n=1}^{\infty}\left(\bigcup_{j=n}^{\infty} X_j\right)\\ -\liminf_{n\in\mathbb{N}} X_n &= \bigcup_{n=1}^{\infty}\left(\bigcap_{j=n}^{\infty} X_j\right). -\end{align*} -This coincides with the notion of the limit superior being the set of all limit points of infinitely many terms in the sequence, under the discrete topology; and the limit inferior being the set of all limit points of all sufficiently large-indexed terms of the sequence (again, under the discrete topology). -The notion of "accumulation point" in the first description is more informal. If you are working with a topological space, then it is limit points as described above, and by "accumulation set" you should read "set of all limit points". -For your third point, in order to be able to talk about joins and meets you need to have some sort of complete lattice order on your set, so that you can talk about those infinite meets and infinite joins; this is the case, for instance, in the real numbers; appropriately interpreted, you do get essentially the definition you propose, though you need to tweak it a bit in order to actually get what the actual definition is (see the other answer quoted above); you don't actually work with the points themselves, but with a slightly different set determined by the points. -I think that the previous answer linked to answers essentially your fourth point, of how to interpret limit superior and limit inferior of a sequence of points as a special case of limit superior and limit inferior of sets; but if this is not the case, point it out and I'll try to answer it de novo.<|endoftext|> -TITLE: How does one characterize surfaces with constant nonzero Gaussian and mean curvature -QUESTION [5 upvotes]: I know that for any surface, the Gaussian curvature $K$ and mean curvature $H$ satisfy the inequality $H^2 \geq K$, and the sphere is a surface where that inequality becomes an equation.
Thus, the sphere has both constant Gaussian and mean curvature. -Are there other surfaces whose Gaussian and mean curvatures are constant and nonzero? - -REPLY [2 votes]: The only surfaces in Euclidean space with $K$ and $H$ both constant are: planes, spheres and right circular cylinders. This appears as an exercise in Struik's book, and a proof is given in "Curves and Surfaces", Montiel-Ros, Graduate Studies in Mathematics, vol. 69, AMS, 2009.<|endoftext|> -TITLE: Why is $\pi $ equal to $3.14159...$? -QUESTION [68 upvotes]: Wait before you dismiss this as a crank question :) -A friend of mine teaches school kids, and the book she uses states something to the following effect: - -If you divide the circumference of any circle by its diameter, you get the same number, and this number is an irrational number which starts off as $3.14159... .$ - -One of the smarter kids in class now has the following doubt: - -Why is this number equal to $3.14159....$? Why is it not some other irrational number? - -My friend is in a fix as to how to answer this in a sensible manner. Could you help us with this? -I have the following idea about how to answer this: Show that the ratio must be greater than $3$. Now show that it must be less than $3.5$. Then show that it must be greater than $3.1$. And so on. -The trouble with this is that I don't know of any easy way of doing this, which would also be accessible to a school student. -Could you point me to some approximation argument of this form? - -REPLY [6 votes]: First off, notice that the kernel of your question is obscured by focusing on the decimal digits of an irrational (indeed, transcendental) number. We could ask the same kind of question about a plain vanilla positive integer. For example, why is it that 1729 ("Hardy's taxicab number") is the smallest positive integer representable as the sum of two cubes in two different ways? Why isn't it some other number instead? Why is it this specific number? However, this new perspective puts us on the road to the answer to your question. Consider this question: Why is it that, for convex polyhedra, the number of vertices minus the number of edges plus the number of faces always equals 2? However, by stereographic projection, this is equivalent to asking why it is that, for connected planar graphs, the number of vertices minus the number of edges plus the number of regions (counting the "infinite" region as one of the regions) always equals 2. However, the proof for the case of connected planar graphs typically involves dismantling the graph in a canonical manner that reduces to a triangle, for which the value of 2 is "obvious". This brings us to the answer to your question: it is not a matter of mathematics, or even of metamathematics, but a question of psychology. The more steeped you are in Mathematics, the more obvious things seem to you. For example, for Euler, who had, among other things, committed many primes to memory, the value of Hardy's taxicab number would perhaps seem self-evident, as it did to Ramanujan. To an Infinite Intelligence (God), everything would be self-evident, but for finite minds there always comes a boundary where they wonder why a certain value is what it is. Nor is Mathematics unique in that regard. In Physics they still haven't figured out why the so-called "fine structure constant" is the value that it is (approximately 1/137), and the reasons for its value are the subject of ongoing debate.
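(Concretely, though, the digits themselves can be pinned down just as the poster suggested: Archimedes squeezed the circumference ratio between the half-perimeters of inscribed and circumscribed regular polygons, doubling the number of sides each step. A small numerical sketch of that recurrence, my own illustration:)

    import math

    # For the unit circle: s = half-perimeter of the inscribed n-gon (s < pi),
    # t = half-perimeter of the circumscribed n-gon (pi < t). Start at a hexagon.
    n, s, t = 6, 3.0, 2 * math.sqrt(3)
    for _ in range(5):
        t = 2 * s * t / (s + t)     # circumscribed 2n-gon (harmonic mean)
        s = math.sqrt(s * t)        # inscribed 2n-gon (geometric mean)
        n *= 2
        print(n, s, t)              # always s < pi < t, and the gap shrinks
    # n = 96 gives 3.1410... < pi < 3.1427..., Archimedes' famous bounds.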
-Anyway, perspective might be gained and retained by bearing in mind the following famous joke: -Mathematics is really Psychology. -Psychology is really Biology. -Biology is really Chemistry. -Chemistry is really Physics. -Physics is really Mathematics.<|endoftext|> -TITLE: Prerequisites on Probability Theory -QUESTION [14 upvotes]: Please answer as many questions as you can. -What are the topics one should know before delving into probability theory? (Please recommend any books you know on those topics too.) I think there is set theory, but set theory is a large topic in itself. Does probability need only a little bit of set theory? Also, there is combinatorics, but again combinatorics is big in itself. And is it a good idea to know all topics such as set theory and combinatorics to fully understand probability theory? Or is it enough just to read those topics on the fly? -Thanks - -REPLY [15 votes]: Depending on how deeply you want to explore the field, you will need more or less. -If you want a basic introduction, then some basic set theory (what a set is and elementary set operations), combinatorics (knowing different ways of counting, the inclusion-exclusion principle) and calculus (knowing derivatives and integrals) will suffice. This could get you through a basic text in probability. -If you want more serious stuff, I would study measure theory (which serves as the foundation of probability through Kolmogorov's axioms), a thorough knowledge of analysis that goes beyond just knowing calculus, maybe even some functional analysis, combinatorics and generally some discrete mathematics (like working with difference equations). -This will allow you to follow a solid introductory course on probability. After that, it depends a lot on what related branches you want to explore. If you want to study Markov chains, a good knowledge of linear algebra is a must. If you want to delve deeper into statistics (like hypothesis testing and such) more analysis will do you good, etc... - -REPLY [6 votes]: It depends which kind of probability theory you're interested in. An introductory course on probability theory can either dwell on discrete probability or continuous probability. -Discrete probability, which deals with discrete events (e.g. the probability that if you throw a die it comes up $6$ ten times in a row), only really needs elementary combinatorics. From set theory you need to know the definitions of basic concepts, and from combinatorics you need to know the likes of the binomial coefficient and its properties. -A little more is needed to understand Poisson random variables, namely Stirling's approximation, which is a topic you don't really learn anywhere; this is why these courses often just give the definition, which requires you to know the Taylor expansion of $e^x$. But this topic in its entirety is not necessarily covered. -Continuous probability deals with things like the normal distribution and the central limit theorem - distributions which may take "continuous" values (e.g. every real value rather than only integral values). Sometimes it is given as an addendum to a discrete probability course. To understand continuous probability you will need to know basic calculus (the kind you get from a first course, and then some). -Introductory courses don't usually cover multivariate Gaussians, but these require some linear algebra. -Summarizing, you will need to be confident about some fairly basic topics.
Besides some familiarity with basic concepts, it's also best to have some "mathematical maturity", although not too much of it is actually needed in an introductory course. - -REPLY [6 votes]: For elementary probability theory you can look into these books: - -A First Course in Probability by Sheldon Ross -An Introduction to Probability Theory and Its Applications, by W. Feller. -Introduction to Probability and Measure by K. R. Parthasarathy. - -The first two books provide a very good introduction to the subject. Moreover, it would be nice if you know some basic calculus and set theory, because you may need them when you study the distribution functions of various random variables. -The last book I have added is a really nice book. It's available in an Indian edition, but I am not sure about its availability abroad. - -REPLY [3 votes]: Combinatorics is a very large subject but one set of counting problems that is very helpful for discrete probability is counting the number of inequivalent ways to place balls in boxes. Fundamental cases include when the balls are considered distinguishable or indistinguishable and the boxes are considered distinguishable or indistinguishable. Fred Roberts' book Applied Combinatorics does a nice job with this material.<|endoftext|> -TITLE: Chicken-Egg problem with Fubini's theorem -QUESTION [10 upvotes]: Fubini's theorem states that if you have a double integral over a function $f(x,y)$, then you can compute the integral as an iterated integral, if $f(x,y)$ is in $\mathcal{L}^1$. But to find out if $f$ is in $\mathcal{L}^1$ you need to compute the double integral. -What am I missing? The examples I found all apply Fubini's theorem without checking that $f(x,y)$ is in $\mathcal{L}^1$. Many thanks for any clarification! - -REPLY [6 votes]: The comment made by Qiaochu Yuan answered my question but because it's a comment I can't accept it and close this question. -So hereby I declare this question as solved. Thanks to all for your help. -Here is the comment: - -You don't have to compute the double integral; you just have to bound it. For example if both of the underlying measure spaces are finite and f is bounded then this is automatically true. Similarly if you are computing an integral over R^2 it suffices to bound f on a sequence of compact subsets of R^2, say concentric circles or unit squares.<|endoftext|> -TITLE: How do you mathematically round a number? -QUESTION [11 upvotes]: How does someone mathematically round a number to its nearest integer? -For example 1.2 would round down to 1 and 1.7 would round up to 2 - -REPLY [9 votes]: This may not answer the original question, but when I came across this post I was looking for a more mathematical approach (rather than using a defined floor/ceil function).
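(For readers who want to run the idea before reading the derivation: here is the same modulo trick as a short Python sketch, mirroring the formulas worked out below, with function names of my own invention.)

    def floor_(x):
        return x - x % 1            # Floor(x) = x - (x mod 1)

    def ceil_(x):
        return x + (-x) % 1         # adds 1 - (x mod 1), or 0 if x is whole

    def round_half_up(x):
        return floor_(x + 0.5)      # 1.2 -> 1.0, 1.5 -> 2.0, 1.7 -> 2.0

    print(floor_(1.7), ceil_(1.2), ceil_(3.0), round_half_up(1.5))
    # 1.0 2.0 3.0 2.0  (Python's % already returns nonnegative remainders)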
-TITLE: How do you mathematically round a number?
-QUESTION [11 upvotes]: How does someone mathematically round a number to its nearest integer?
-For example, 1.2 would round down to 1 and 1.7 would round up to 2.
-
-REPLY [9 votes]: This may not answer the original question, but when I came across this post I was looking for a more mathematical approach (rather than using a predefined floor/ceil function). I ended up using modulo to define a floor, a ceiling, and a "round up from half".
-Fractions
-To floor round after dividing a numerator $n$ by a denominator $d$:
-$$Floor\left(\frac nd\right) = \frac nd - \frac{n \bmod d}{d}$$
-To ceiling round after dividing a numerator $n$ by a denominator $d$ (the outer $\bmod$ makes the adjustment vanish when $d$ divides $n$):
-$$Ceil\left(\frac nd\right) = \frac nd + \frac{(d - n \bmod d) \bmod d}{d}$$
-Decimals
-To floor round a decimal $x$:
-$$Floor\left(x\right) = x - \left( x \bmod 1 \right)$$
-To ceiling round a decimal $x$ (again, the outer $\bmod$ leaves whole numbers unchanged):
-$$Ceil\left(x\right) = x + \big(1 - \left(x \bmod 1\right)\big) \bmod 1$$
-To round a decimal $x$, rounding up from 0.5:
-$$UpFromHalf\left(x\right) = Floor\left(x + 0.5\right)$$
-How it works:
-The goal is to remove the decimal part of the divided number. The modulo operation gives the remaining numerator; dividing it by the denominator gives the decimal part of the quotient (always less than 1).
-For floor rounding we eliminate the decimal part by subtracting it from the quotient.
-For ceiling rounding we figure out the amount that, when added to the quotient, raises it to the next whole number (zero if the quotient is already whole).
-Expectations (floor rounding):
-A: $2/3 \Longrightarrow 0$, B: $3/3 \Longrightarrow 1$, C: $10/3 \Longrightarrow 3$
-Dividing:
-A: $\frac23 = 0.\overline6$, B: $\frac33 = 1$, C: $\frac{10}{3} = 3.\overline3$
-Mod:
-A: $2 \bmod 3 = 2$, B: $3 \bmod 3 = 0$, C: $10 \bmod 3 = 1$
-Mod result divided:
-A: $\frac{2 \bmod 3}{3} = 0.\overline6$, B: $\frac{3 \bmod 3}{3} = 0$, C: $\frac{10 \bmod 3}{3} = 0.\overline3$
-Subtract out the remaining decimal:
-A: $\frac 23 - \frac{2 \bmod 3}{3} = 0$, B: $\frac 33 - \frac{3 \bmod 3}{3} = 1$, C: $\frac{10}{3} - \frac{10 \bmod 3}{3} = 3$
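-Here is a minimal Python sketch of these formulas (an illustration only; Python's % already returns a remainder in $[0, d)$ for positive $d$, matching the convention above, and the usual floating-point caveats apply):
-    # Floor, ceiling, and "round up from half" via modulo,
-    # mirroring the formulas above (assumes a positive modulus).
-    def floor_div(n, d):
-        # Floor(n/d) = n/d - (n mod d)/d
-        return n / d - (n % d) / d
-
-    def ceil_div(n, d):
-        # Ceil(n/d) = n/d + ((d - n mod d) mod d)/d
-        return n / d + ((d - n % d) % d) / d
-
-    def floor_dec(x):
-        # Floor(x) = x - (x mod 1)
-        return x - x % 1
-
-    def ceil_dec(x):
-        # Ceil(x) = x + ((1 - (x mod 1)) mod 1)
-        return x + (1 - x % 1) % 1
-
-    def up_from_half(x):
-        # UpFromHalf(x) = Floor(x + 0.5)
-        return floor_dec(x + 0.5)
-
-    print(floor_div(10, 3), ceil_div(10, 3))  # 3.0 4.0
-    print(floor_dec(1.2), up_from_half(1.7))  # 1.0 2.0
-In practice one would reach for math.floor and math.ceil, but the point is that the modulo identities above are all that is needed.<|endoftext|>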
-TITLE: Uncountable subset with uncountable complement, without the Axiom of Choice
-QUESTION [13 upvotes]: Let $X$ be a set and consider the collection $\mathcal{A}(X)$ of countable or cocountable subsets of $X$, that is, $E \in \mathcal{A}(X)$ if $E$ is countable or $X-E$ is countable. If $X$ is countable, then $\mathcal{A}(X)$ coincides with the power set $\mathcal{P}(X)$ of $X$. Now suppose that $X$ is uncountable. Assuming the axiom of choice, we can conclude that $\mathcal{A}(X) \ne \mathcal{P}(X)$, since $|X| = |X| + |X|$. So the question is:
-
-Can we prove in ZF that $\mathcal{A}(X) \ne \mathcal{P}(X)$ for every uncountable set $X$?
-
-I'm assuming that a set $X$ is uncountable if there is no injective function $f : X \rightarrow \mathbb{N}$.
-
-REPLY [10 votes]: I should add to Andres' answer with a minor variation:
-It is consistent with ZF without the axiom of choice that there is a set $X$ such that every subset of $X$ is countable or co-countable. These sets are called $\aleph_1$-amorphous (and sometimes quasi-minimal). They make peculiar counterexamples to some propositions.
-If $X$ is $\aleph_1$-amorphous then $\mathcal P(X)=\mathcal A(X)$. Note that this is the "maximal" counterexample that we can produce, since anything larger would have a subset which is neither countable nor co-countable.
-One remark is that amorphous sets are vacuously $\aleph_1$-amorphous, since every subset is finite (ergo countable) or co-finite (ergo co-countable). However, it is possible to have an $\aleph_1$-amorphous set and still retain the Principle of Dependent Choice, a choice principle which, amongst other things, implies that every infinite set has a countable subset. In such a model the $\aleph_1$-amorphous set is "non-degenerate": it has a countably infinite subset.