TITLE: Inverse of cosh(x) QUESTION [7 upvotes]: My goal is to find the inverse of $y=\cosh(x)$. Therefore: $$x=\cosh(y)=\frac{e^y+e^{-y}}{2}=\frac{e^{2y}+1}{2e^y}$$ If we define $k=e^y$ then: $$k^2-2xk+1=0$$ $$k=e^{y}=x\pm\sqrt{x^2-1}$$ $$y=\ln(x\pm\sqrt{x^2-1})=\cosh^{-1}(x)$$ However, apparently: $\cosh^{-1}(x)=\ln(x+\sqrt{x^2-1})$ is right, but NOT $\cosh^{-1}(x)=\ln(x-\sqrt{x^2-1})$. What step did I miss? REPLY [8 votes]: The function $\cosh$ is even, so formally speaking it does not have an inverse, for basically the same reason that the function $g(t)=t^2$ does not have an inverse. But if we restrict the domain of $\cosh$ suitably, then there is an inverse. The usual definition of $\cosh^{-1}x$ is that it is the non-negative number whose $\cosh$ is $x$. Note that for $x\gt 1$, we have $$x-\sqrt{x^2-1}=\frac{1}{x+\sqrt{x^2-1}}\lt 1,$$ and therefore $\ln(x-\sqrt{x^2-1})\lt 0$, whereas we were looking for the non-negative $y$ which would satisfy the inverse equation. Thus, $y=\ln(x-\sqrt{x^2-1})$ is not the non-negative number whose $\cosh$ is $x$. Remark: If one defined $\cosh^{-1}(x)$ as the non-positive number whose $\cosh$ is $x$, then the answer $\ln(x-\sqrt{x^2-1})$ would be the right one.<|endoftext|> TITLE: I don't understand what a "free group" is! QUESTION [33 upvotes]: My lecture note glosses over it really, introduces it and says "well it intuitively makes sense" but I say, nope it doesn't. A free group on generators $x_1,...,x_m,x_1^{-1},...,x_m^{-1}$ is a group whose elements are words in the symbols $x_1,...,x_m,x_1^{-1},...,x_m^{-1}$ subject to the group axioms. The group operation is concatenation. What do I not understand? Well, to start with, where's the identity? The operation, say I denote it $*$, is $x_1 * x_2=x_1x_2$ yes? How is the identity defined? I mean, $e*x_1=ex_1$ because it's "concatenation" so I cannot conveniently say $e*x_1=x_1$ and ignore the fact I need to "concatenate" it. These are apparently words, symbols not numbers. The inverse doesn't make sense either: $x_1*x_1^{-1}=x_1x_1^{-1}$ and period. Not $x_1*x_1^{-1}=e$. I mean, I don't even know what $e$ is supposed to be in this supposed group object, so I am left puzzled. I don't see any mathematics here; concatenation, in other words, is just "lining up the symbols in order." It's not like $1 \times 2 \times 10=20$ but $1 \times 2 \times 20=1220$. And another problem. Doesn't the free group have order infinity? It can't be finite, can it? Because, say I start with $x_1,...,x_m,x_1^{-1},...,x_m^{-1}$ but it must be closed under concatenation. Well, $x_1*x_2=x_1x_2$ already causes an issue because clearly we just created a new element. A new word $x_1x_2$. Continuing this way, we keep adding the newly created words and reach infinity. And before someone directs me to it, no, wikipedia's page on free groups didn't help me understand this either. This bizarre notion is more confusing and incomprehensible than ever. Does anyone know the answers to my questions? REPLY [2 votes]: Most important is the property of a free group: In a free group there are no relationships between elements other than those that you can prove by the axioms of a group. For example, $aa^{-1} = 1$ follows from the group axioms and is true in every group, and therefore in a free group as well.
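To see concretely how the identity and inverses work, it helps to know that the elements of the free group are really reduced words: the identity $e$ is the empty word, the operation is concatenation followed by cancelling every adjacent pair $x_ix_i^{-1}$ or $x_i^{-1}x_i$, and two words count as the same element when they reduce to the same reduced word. Here is a small illustrative sketch of that convention (my own code and naming, not taken from the question or from any library):

def reduce_word(word):
    # Cancel adjacent pairs like (i, +1)(i, -1) until none remain.
    out = []
    for letter in word:
        if out and out[-1][0] == letter[0] and out[-1][1] == -letter[1]:
            out.pop()              # a generator next to its own inverse cancels
        else:
            out.append(letter)
    return tuple(out)

def multiply(w1, w2):
    # The group operation: concatenate, then reduce.
    return reduce_word(w1 + w2)

e      = ()                        # the identity is the empty word
x1     = ((1, +1),)                # the word "x_1"
x2     = ((2, +1),)                # the word "x_2"
x1_inv = ((1, -1),)                # the word "x_1^{-1}"

print(multiply(e, x1) == x1)       # True:  e * x_1 = x_1
print(multiply(x1, x1_inv) == e)   # True:  x_1 * x_1^{-1} = e
print(multiply(x1, x2))            # ((1, 1), (2, 1)), i.e. the new word x_1 x_2

So $e * x_1 = x_1$ and $x_1 * x_1^{-1} = e$ hold by the reduced-word convention, while $x_1 * x_2 = x_1x_2$ really is a new element, which is exactly why a free group on at least one generator is infinite.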
$ab = 1$ is possible and will be true in some groups, but it doesn't follow from the group axioms, and therefore isn't true in a free group unless $a$ and $b$ were formed from the generators in such a way that in the product $ab$ all the multiplications cancel each other out (that is, $ab$ contains a generator followed by its inverse or vice versa; you can remove it, then you again have a generator followed by its inverse, and so on until you are left with $1$). Nothing like $\mathbb{Z}_{12}$, where $5 + 7 = 0$ (or $(1+1+1+1+1)+(1+1+1+1+1+1+1) = 0$), can happen in a free group.<|endoftext|> TITLE: Uniformly Most Powerful Test for a Uniform Sample QUESTION [5 upvotes]: Let $X_{1}, \dots, X_{n}$ be a sample from $U(0,\theta), \theta > 0$ (uniform distribution). Show that the test $\phi_{1}(x_{1},\dots,x_{n})=\begin{cases} 1 &\mbox{if } \max(x_{1},\dots,x_{n}) > \theta_{0} \quad \text{or} \quad \max(x_{1},\dots,x_{n}) \leq \alpha^{1/n}\theta_{0}\\ 0 & \mbox{else } \end{cases}$ is the UMP (uniformly most powerful) test of size $\alpha$ for testing $H_{0}:\theta = \theta_{0}$ against $H_{1}:\theta \neq \theta_{0}$. I know that the statistic $T$ given by $T(x_{1},\dots,x_{n})=\max(x_{1},\dots,x_{n})$ is such that $U(0,\theta)$ has the MLR (monotone likelihood ratio) property in $T$. Then, by the Karlin-Rubin Theorem we get a test for $H_{0}: \theta \leq \theta_{0}$ against $H_{1}: \theta > \theta_{0}$, of the form $\phi(x)=1$ if $x > x_{0}$ and $0$ if $x < x_{0}$, for $x_{0}$ chosen such that $E_{\theta_{0}}(\phi(x))=\alpha$. However, the Karlin-Rubin Theorem does not give a UMP test of the form specified in the problem. What would be a way to approach this problem? I'm completely lost. Thanks for the help! REPLY [3 votes]: Given $\theta$, the probability that $\max(X_{1},\dots,X_{n}) \le m$ is $\left(\frac{m}{\theta}\right)^n$ when $0 \le m \le \theta$, so the density of the maximum is $n\frac{m^{n-1}}{\theta^n} I_{[0 \le m \le \theta]}$. So the likelihood function for $\theta$ given $\max(x_{1},\dots,x_{n})$ is proportional to $L(\theta) = \frac{1}{\theta^n} I_{[ \theta\ge \max(x_{1},\dots,x_{n})] }$, which is constant in the maximum observation apart from the indicator function: $L(\theta)=0$ when $\theta\lt \max(x_{1},\dots,x_{n})$, while $L(\theta)$ is a decreasing function of $\theta$ when $\max(x_{1},\dots,x_{n}) \le \theta\lt \infty$, so using the Karlin–Rubin theorem you could reject $H_0$ either when $\theta_0 \lt \max(x_{1},\dots,x_{n})$ or when $\max(x_{1},\dots,x_{n}) \le m_0$, for $m_0$ chosen so that $\Pr(\max(X_{1},\dots,X_{n}) \le m_0 \mid \theta_0) = \alpha$, i.e. $\left(\frac{m_0}{\theta_0}\right)^n = \alpha$. This makes the rejection regions $\theta_0 \lt \max(x_{1},\dots,x_{n})$ or $\theta_0 \gt \alpha^{-1/n}\max(x_{1},\dots,x_{n})$. If you prefer, you can express these as $\max(x_{1},\dots,x_{n}) \gt \theta_0$ or $\max(x_{1},\dots,x_{n}) \lt \alpha^{1/n} \theta_0$.<|endoftext|> TITLE: Variance converging to zero implies weak convergence to delta measure? QUESTION [6 upvotes]: I have the following question: Suppose that I have a sequence of random variables $X_n$ such that all moments exist and are finite. I have that $E[X_n]\to a$, where $a$ is a finite number, and also $\text{Var}(X_n)\to 0$. We may also assume that a density with respect to the Lebesgue measure exists. Does this imply that $X_n\to_w \delta_a$ weakly? Thanks a lot, any ideas on conditions are welcomed! This is true for the Gaussian distribution, but I wondered to what extent we can transfer it. Literature is also welcomed!
REPLY [5 votes]: Intuitively: the mean is almost constant, and the variance is smaller and smaller, hence we are not too far away from the mean with high probability. More formally, we have $$\mathbb E\left[\left(X_n-a\right)^2\right]= \mathbb E\left[X_n^2\right]-2a\mathbb E\left[X_n\right]+a^2 =\operatorname{Var}\left(X_n\right)+\mathbb E\left[X_n\right]^2 +a^2-2a\mathbb E\left[X_n\right].$$ By assumption, $\operatorname{Var}\left(X_n\right)\to 0$ and $\mathbb E\left[X_n\right]^2 +a^2-2a\mathbb E\left[X_n\right]\to a^2+a^2-2a^2=0$, hence $X_n\to a$ in $\mathbb L^2$, hence in probability, hence in distribution.<|endoftext|> TITLE: Efficiently evaluating $\int x^{4}e^{-x}dx$ QUESTION [25 upvotes]: The integral I am trying to compute is this: $$\int x^{4}e^{-x}dx$$ I got the right answer but I had to integrate by parts multiple times. Only thing is it took a long time to do the computations. I was wondering whether there are any more efficient ways of computing this integral or is integration by parts the only way to do this question? Edit: This question is similar to the question linked but slightly different because in the other question they are asking for any method to integrate the function, which included integration by parts. In this question I acknowledge that integration by parts is a method that can be used to evaluate the integral but am looking for the most efficient way. This question has also generated different responses than the question linked such as the tabular method. REPLY [13 votes]: Here is a nice little trick to integrate it without using partial integration. $$ \int x^4 e^{-x} \,\mathrm dx = \left. \frac{\mathrm d^4}{\mathrm d \alpha^4}\int e^{-\alpha x} \,\mathrm dx \right|_{\alpha=1} = \left.- \frac{\mathrm d^4}{\mathrm d \alpha^4} \frac{1}{\alpha} e^{-\alpha x}\right|_{\alpha=1} $$ The idea is to introduce a variable $\alpha$ in the exponent and write the $x^4$ term as the fourth derivative with respect to $\alpha$. This is especially helpful when you want to calculate the definite integral $\int_0^\infty$ because in this case the differentiation greatly simplifies. $$ \int\limits_0^\infty x^n e^{-x} \,\mathrm dx = (-1)^n \left. \frac{\mathrm d^n}{\mathrm d \alpha^n} \frac{1}{\alpha} \right|_{\alpha=1} = n!\stackrel{n=4}{=} 24 $$<|endoftext|> TITLE: There are apparently $3072$ ways to draw this flower. But why? QUESTION [177 upvotes]: This picture was in my friend's math book: Below the picture it says: There are $3072$ ways to draw this flower, starting from the center of the petals, without lifting the pen. I know it's based on combinatorics, but I don't know how to show that there are actually $3072$ ways to do this. I'd be glad if someone showed how to show that there are exactly $3072$ ways to draw this flower, starting from the center of the petals, without lifting the pen (assuming that $3072$ is the correct amount). REPLY [12 votes]: Others have already solved the problem. I just wanted to add that, properly speaking, this problem and/or its solution belong to a branch of math called graph theory. Formally: Proposition. The following undirected graph has $3072$ directed Eulerian trails starting at $x$ and ending at $z$. We could be even more precise by describing the graph in more formal terms, of course, rather than just drawing a picture. This can be done by giving a symmetric function $\{x,y,z\}^2 \rightarrow \mathbb{N}$ that tells us how many edges go between any two vertices.
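Since the picture is not reproduced here, the following is only a sketch under an assumption: I read the flower as the multigraph with four petal loops at $x$, one stem edge from $x$ to $y$, two leaf loops at $y$, and one stem edge from $y$ to $z$, and I count each loop as drawable in two directions (clockwise or counterclockwise). Under that reading, a brute-force backtracking count (code and names are mine) does return $3072$:

def count_trails(edges, start, end):
    # Count directed Eulerian trails from start to end in an undirected multigraph.
    # A loop is counted twice, once per drawing direction.
    used = [False] * len(edges)
    def go(v):
        if all(used):
            return 1 if v == end else 0
        total = 0
        for i, (a, b) in enumerate(edges):
            if used[i]:
                continue
            if a == v and b == v:          # a loop at the current vertex
                used[i] = True
                total += 2 * go(v)
                used[i] = False
            elif a == v or b == v:         # an ordinary edge leaving the current vertex
                used[i] = True
                total += go(b if a == v else a)
                used[i] = False
        return total
    return go(start)

flower = [('x', 'x')] * 4 + [('x', 'y')] + [('y', 'y')] * 2 + [('y', 'z')]
print(count_trails(flower, 'x', 'z'))      # 3072 under the reading described above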
In this particular case, our function is<|endoftext|> TITLE: Algebraic shortcut to find $a^n + b ^n$? QUESTION [5 upvotes]: Recently, I found this problem online: Given $a+b=1$ and $a^2+b^2=2$, find $a^7+b^7$. Although I could've solved it by substituting the first equation into the second and then using the quadratic formula; the way the question was set up, I suspected that there was a shortcut. However, I couldn't find the solution to the problem on that website, so I'm asking here: If you are given $a+b$ and $a^2+b^2$, is there a shortcut to finding $a^n+b^n$? REPLY [12 votes]: Note that $$a^{n+1}+b^{n+1}=(a^n+b^n)(a+b)-ab(a^{n-1}+b^{n-1}).\tag{1}$$ Since $(a+b)^2=1$, and $a^2+b^2=2$, we have $2ab=-1$ and therefore $ab=-\frac{1}{2}$. It follows from (1) that $$a^{n+1}+b^{n+1}=(a^n+b^n)(1)+\frac{1}{2}(a^{n-1}+b^{n-1}).$$ Let $f(k)=a^k+b^k$. We have obtained the recurrence $$f(n+1)=f(n)+\frac{1}{2}f(n-1).$$ Using this recurrence, we can compute our way to $f(7)$ fairly quickly. Remarks: $1.$ The same strategy can be used for any given $a+b$ and $a^2+b^2$. $2.$ To go a little faster, we could use $a^{n+2}+b^{n+2}=(a^n+b^n)(a^2+b^2)-a^2b^2(a^{n-2}+b^{n-2})$. $3.$ If we are in a real hurry, and $n$ is not small, there are various tricks to speed up computation. For example $$a^{2k}+b^{2k}=(a^k+b^k)^2-2(ab)^k.$$ So we can take "giant" steps.<|endoftext|> TITLE: Proving a binomial coefficient identity QUESTION [5 upvotes]: I'm having some trouble with the following proof: $$\sum^k_{a=0} {{n}\choose{a}}{{m}\choose{k-a}} = {{n+m}\choose{k}}$$ I'm trying to prove this to learn a couple of things about the Pascal's triangle. I don't know where to start on the proof. I have tried expanding both sides using the binomial coefficient property but have had no luck. REPLY [4 votes]: The RHS counts the number of ways of selecting a committee of $k$ people from $n$ women and $m$ men. The expression $$\binom{n}{a}\binom{m}{k - a}$$ counts the number of ways of selecting a committee of $k$ people consisting of $a$ women and $k - a$ men selected from selected from a group consisting of $n$ women and $m$ men. Thus, the summation on the LHS counts all the ways of selecting a committee of $k$ people can be selected from $n$ women and $m$ men.<|endoftext|> TITLE: What are the projections of a commutative C* algebra? QUESTION [9 upvotes]: I am aware that the commutative C* algebra is $C_0(X)$ for some nice space $X$ but I cannot figure out what the projections should be. The natural candidates (indicator functions on nice subsets of $X$ don't work because they are not continuous). REPLY [7 votes]: You're on the right track: you want to look at indicator functions. If $f$ is a function such that $f^2=f$, can you prove it must be an indicator function? Now of course, you're right that most indicator functions aren't continuous. Can you describe the indicator functions which are? And of those, which vanish at infinity? Answers to these questions can be found below: If $f^2=f$, then $f(x)^2=f(x)$ for all $x$, so $f(x)$ must always be either $0$ or $1$. This means $f$ is the indicator function of the set $S=\{x:f(x)=1\}$. The indicator function of a set $S\subseteq X$ is continuous iff $S$ is both closed and open, and additionally vanishes at infinity iff $S$ is compact. 
So the projections of $C_0(X)$ are the indicator functions of subsets of $X$ which are both compact and open.<|endoftext|> TITLE: Definite Integral $\int_0^1 \left \{\frac{1}{x^\frac{1}{6}} \right\}\, dx$ QUESTION [6 upvotes]: The curly brackets mean 'FractionalPart' which, I believe, is defined as {${x}$}$=x-\lfloor x \rfloor$ where $x \in \mathbb{R}$. My best approximation so far is: .182657 , however, I suspect there is a closed form expression for this definite integral. My approximation was attained by using a lower bound of .00000001 (a bit to the right of 0) with a graphing program. REPLY [4 votes]: Let $g_m =\int_0^1 \{x^{-1/m}\} dx $. I get $g(m) =\dfrac{m}{m-1}-\zeta(m) $. Here's how. I want to partition $[0, 1]$ into intervals over which $\{x^{-1/m}\} $ is between two consecutive integers, and then sum the integral over these intervals. So, for each positive integer $n$, I want $n = x^{-1/m} $ or $x =\frac1{n^m} $. Let $\begin{array}\\ g_{m, n} &=\int_{\frac1{(n+1)^m}}^{\frac1{n^m}} \{x^{-1/m}\} dx\\ &=\int_{\frac1{(n+1)^m}}^{\frac1{n^m}} (x^{-1/m}-n) dx\\ &=\int_{\frac1{(n+1)^m}}^{\frac1{n^m}} x^{-1/m}dx -n(\frac1{n^m}-\frac1{(n+1)^m})\\ &=\dfrac{x^{1-1/m}}{1-1/m}\big|_{\frac1{(n+1)^m}}^{\frac1{n^m}} -(\frac{n}{n^{m}}-\frac{n}{(n+1)^{m}})\\ &=\dfrac{m}{m-1}x^{(m-1)/m}\big|_{\frac1{(n+1)^m}}^{\frac1{n^m}} -(\frac{1}{n^{m-1}}-\frac{n+1-1}{(n+1)^{m}})\\ &=\dfrac{m}{m-1}(\frac1{n^{m-1}}-\frac1{(n+1)^{m-1}}) -(\frac{1}{n^{m-1}}-\frac{1}{(n+1)^{m-1}}+\frac{1}{(n+1)^{m}})\\ &=(\dfrac{m}{m-1}-1)(\frac1{n^{m-1}}-\frac1{(n+1)^{m-1}}) -\frac{1}{(n+1)^{m}}\\ &=\dfrac{1}{m-1}(\frac1{n^{m-1}}-\frac1{(n+1)^{m-1}}) -\frac{1}{(n+1)^{m}}\\ \end{array} $ Therefore $\begin{array}\\ g_m &=\sum_{n=1}^{\infty} g_{m, n}\\ &=\sum_{n=1}^{\infty} (\dfrac{1}{m-1}(\frac1{n^{m-1}}-\frac1{(n+1)^{m-1}}) -\frac{1}{(n+1)^{m}})\\ &=\dfrac{1}{m-1}\sum_{n=1}^{\infty} (\frac1{n^{m-1}}-\frac1{(n+1)^{m-1}}) -\sum_{n=1}^{\infty}\frac{1}{(n+1)^{m}})\\ &=\dfrac{1}{m-1}-(\zeta(m)-1)\\ &=\dfrac{m}{m-1}-\zeta(m)\\ \end{array} $ For $m=6$, Wolfy says this is $0.182656938015... $, so this has a good chance of being correct,<|endoftext|> TITLE: Show that $\Bbb Q(\sqrt[p]{a}, \omega )=\Bbb Q(\sqrt[p]{a}+ \omega )$ QUESTION [5 upvotes]: Show that $\Bbb Q(\sqrt[p]{a}, \omega )=\Bbb Q(\sqrt[p]{a}+ \omega )$, where $\omega=e^{2\pi i/p}$ and $a$ is prime. For simplicity let's call the left hand field $K$, and the right hand field $R$. I know that the strategy is to show inclusion both ways to get the inequality. I also know that $K$ is the splitting field of the polynomial of $\sqrt[p]{x}-a$, I have proven this. I can show that $R \subseteq K$ since I know that $\omega, \sqrt[p]{a} \in K$ implying that $\omega+\sqrt[p]{a} \in K$ implying $R \subseteq K$. I have trouble showing the opposite inclusion. I know the goal is to show that $\omega, \sqrt[p]{a} \in R$ knowing that $\omega+\sqrt[p]{a} \in R$. I tried multiple times to show this, mainly playing around with different polynomial and using the binomial theorem but I can't figure anything out for this. Any hints or help to show this is appreciated. Thanks in advance REPLY [2 votes]: This is a classical primitive-element argument. Let $v=\sqrt[p]{a}$ and $u=v+\omega$. The minimal polynomial of $v$ is $V=X^p-a$ while the minimal polynomial of $\omega$ is $W=X^{p-1}+X^{p-2}+\ldots +1$. Next, consider the pertubated polynomial $F=W(u-X)$. The roots of $F$ are $u-\omega=v,u-\omega^2,u-\omega^3,\ldots,u-\omega^{p-1}$. 
The roots of $V$ on the other hand are $v,v\omega,v\omega^2,\ldots,v\omega^{p-1}$. If $r$ is any common root to $F$ and $W$, we can write $r=u-\omega^i=v\omega^j$ for some indices $i$ and $j$ between $1$ and $p-1$. It follows that $v(1-\omega^j)=\omega-\omega^i$, and if $j\neq 1$ we could write $v=\frac{\omega-\omega^i}{1-\omega^j}\in{\mathbb Q}[\omega]$ which is impossible because $v$ has degree $p$ while $\omega$ has degree $p-1$. So $j=1$ and $r=v$. We have therefore shown that the only common root to $F$ and $V$ is $v$. So the GCD of those two polynomials is $X-v$. But since those two polynomials have coefficients in ${\mathbb Q}[u]$, so does their GCD (this is the "invariance of GCD" property and follows from the Euclidean algorithm). So $v\in{\mathbb Q}[u]$ which is exactly what you need.<|endoftext|> TITLE: Show that $f$ is either injective or a constant function. QUESTION [5 upvotes]: Let $\Omega$ be a domain in $\mathbb{C}$ and let $\{f_n\}_{n \in \mathbb{N}}$ be a sequence of injective functions that converge in $O(\Omega)$ to $f$ . Show that $f$ is either injective or a constant function. How does the conclusion change if, instead of a domain, we allow $\Omega$ to be an arbitrary open set ? I know that $f$ is holomorphic as an almost uniform limit. But I dont know how to proceed. REPLY [4 votes]: Assume that the limit function $f$ is neither constant nor injective. Then $f$ takes some value $a$ in disjoint disks $B_1, B_2 \subset \Omega$. As in Number of roots of a sequence of a uniformly convergent holomorphic functions implies an upper bound for the number of roots of their limit it follows from Rouché's theorem that there are $n_1, n_2$ such that for $n \ge n_j$, $f_n$ takes the value $a$ in $B_j$ (at least once). For $n \ge \max(n_1, n_2)$ this is a contradiction to $f_n$ being injective. Rouché's theorem is applied separately to the two disks in $\Omega$, so the same conclusion holds if $\Omega$ is a (not necessarily connected) open set.<|endoftext|> TITLE: None of $3,5,7$ can divide $r^4+1$ QUESTION [9 upvotes]: Let $n=r^4+1$ for some $r$. Show that none of $3,5,$ and $7$ can divide $n$. I am thinking to use a corollary that "each prime divisor p of an integer of the form $(2m)^4+1$ has the form $8k+1$", but I failed. Anyone can give some hint? REPLY [2 votes]: Check for $r=3k,3k+1,3k+2$ Then check for $r=5k,5k+1,5k+2,5k+3,5k+4$ Then check for $r=7k,7k+1,7k+2,7k+3,7k+4,7k+5,7k+6$ It is the easiest way but lengthy.<|endoftext|> TITLE: Math for blind people... QUESTION [20 upvotes]: What happens if some blind person want to study math? Is there some "braille alphabet" for mathematical symbols? Are there math books, at least for undergraduate students, written for blind people? REPLY [2 votes]: There exists a certain variation-or rather "enrichment"-of the Braille Alphabet, named Nemeth Braille, after its' creator, Abraham Nemeth, which is also using the standard six-dot Braille cells to create mathematical symbols. I am not sure on whether it is exhaustive-that is, if all mathmematical expressions can be written by making use of its symbols-but I am pretty certain that is sufficient for a full undergraduate course. This pdf file contains a full version-I believe it is the latest version-of the Nemeth Code. 
As an example, of typical mathematical statements written in Nemeth Braille check this (taken from the afformentioned file): In general, learning by audio lectures can also help, while the presence of many on-line material, such as in various youtube sources, even though they are not specifically made for the blind or visually impaired people, can be a great educational asset. The Soviet mathematician Lev Pontryagin, mentioned in the comments, is a fine example of someone who studied and contributed greatly to mathematics while being blind since he was 14. Also, one can mention, with the danger of sounding..sacrilegeous, the example of Euler, whose productivity was not in the least affected after losing his eyesight-actually one could argue that it was increased.<|endoftext|> TITLE: Drawing cards without replacement: all kings before any jacks QUESTION [5 upvotes]: This is a question that came up with my friends while playing a card drinking game: You draw cards without replacement from a standard 52-card shuffled deck. What is the probability that you draw all of the kings before drawing any of the jacks? I was thinking of a solution along the lines of combinations, but I'm not sure if that's the way to go. Simulation tells me the answer is ~1.4%, but I don't know how to get to this answer. Disclaimer: This is not a homework question. I'm really just curious. REPLY [7 votes]: The only thing that matters is the relative placement of the two critical types of card. There are $\binom{8}{4}$ equally likely ways to choose the places occupied by the Kings. Only one of these is "favourable." Thus the probability that all the Kings come before any Jack is $\dfrac{1}{\binom{8}{4}}$. This is approximately $0.0143$, quite close to the number obtained in the simulation.<|endoftext|> TITLE: How to know whether an Ordinary Differential Equation is Chaotic? QUESTION [19 upvotes]: Assuming we have an ordinary differential equation (ODE) such as Lorenz system: $$ \dot x=\sigma(y-x)\\ \dot y=\gamma x-y-xz\\ \dot z=xy-bz $$ where $$ \sigma = 10\\ \gamma = 28\\ b = \frac{8}{3}\\ x(0)=10\\ y(0)=1\\ z(0)=1 $$ This system is known to be chaotic because of its behavior [1], [2]. However, we normally judge about a system by the output results plot. But how can I judge about a system whether it is chaotic or not just by looking at its formulation in state space representation without plotting it? Or if there is no way for 100% judging, at least is there any way to guess it? REPLY [9 votes]: An introduction to the Lorenz system can be found in [1,2]. If there is no general tool to prove that a continuous dynamical system is chaotic, there are at least several tools to prove that a system is not chaotic (see e.g. [3]). Here is a short non-exhaustive list of features which allow a first-order autonomous ODE system $$ \dot{X} = F(X) \, \qquad\text{where}\qquad X\in\mathbb{R}^n \;\text{and}\; F \in C^1 $$ to be chaotic: $F$ is nonlinear the phase space dimension $n$ is equal to $3$ or larger (consequence of the Poincaré-Bendixson theorem) there is at least one eigenvalue of the Jacobian matrix $\partial F/\partial X$ evaluated at the equilibria of the system that has a non-negative real part (consequence of the Hartman-Grobman theorem) There are several case-dependent methods for the analysis of chaos. In the case of periodically forced Hamiltonian systems, a dedicated tool is Melnikov's method.<|endoftext|> TITLE: Extension of Vector field along a curve always exists? 
QUESTION [8 upvotes]: Let $c:I\to M$ be a $C^{\infty}$ curve on smooth manifold $M$ of dimension $n$ and $X:I\to TM$ be a vector field along $c$, $$\forall t\in I\qquad X(t)\in T_{c(t)}M.$$ does there exist a vector field $\bar{X}:M \to TM$ such that $X=\bar{X}\circ c$? REPLY [9 votes]: As @Bebop showed, globally it is not always possible to find the extension of a vector field along $c$, but (as you might guess) $\textit{locally}$ we can. We can do this by the help of Constant Rank Theorem provided that the curve is regular ($c'(t) \neq 0$ so that the curve $c$ is an immersion). By this theorem, we can find an adapted coordinates for the curve. And using this coordinate chart we can build the local extension for $X$. Construction : By constant rank theorem, $\forall t_0 \in I$ we can always find a small interval $J$ contain $t_0$ and a chart $(U,\varphi)$ contain $c(t_0)$ s.t the curve $c : I \rightarrow M$ has the following representation $$ c(t) = (t,0,\dots,0) \qquad \forall t \in J$$ The vector field along $c$, $X : I \rightarrow TM$ has representation on $J\subset I$ $$ X(t) = X^j(t) \frac{\partial}{\partial x^j} \Bigg|_{(t,0,\dots,0)}$$ where $c(J)\subset U$. Because $c(J)$ is a closed subset contained in $U$ then by Extension Lemma for Smooth Function (also in Lee's Smooth Manifold) we can define the component functions $\tilde{X}^j : U \rightarrow \mathbb{R}$ of a local vector field $\tilde{X} : U \rightarrow TM$ $$ \tilde{X} (x^1,\dots,x^m) = \tilde{X}^j (x^1,\dots,x^m) \frac{\partial}{\partial x^j} \Bigg|_{(x^1,\dots,x^m)} $$ such that the restriction to $c(J)$ in the chart $(U,\varphi)$ is $$ \tilde{X}^j |_{c(J)} (x^1,\dots,x^m) = \tilde{X}^j (t,0,\dots,0) := X^j(t)$$ equal to the component functions of the vector field along curve $X^j(t)$. Therefore by construction $\tilde{X} \circ c = X$ on $J\subset I$.<|endoftext|> TITLE: Order of integration for multiple can be easily swapped if limits are constants, right? QUESTION [6 upvotes]: The order of integration can easily be swapped if the limits are constants, right? $$\int_{a}^{b}\int_{c}^{d}f(x,y)dydx=\int_{c}^{d}\int_{a}^{b}f(x,y)dxdy$$ It only gets hard if the limits are functions of each other, right? $$\int_{a}^{b}\int_{c(x)}^{d(x)}f(x,y)dydx \neq \int_{c(x)}^{d(x)}\int_{a}^{b}f(x,y)dxdy$$ Sorry for the potentially trivial answer. Just doing a reality check over here. REPLY [4 votes]: This is true in a very general sense, by Fubini's theorem. In $\mathbb{R}^2$, the conditions for Fubini's theorem boil down to: The integral of the absolute value over the specified range must be finite, in any one of the three senses that: we integrate with respect to $x$ and then $y$ we integrate with respect to $y$ and then $x$ we integrate simultaneously over the entire surface If this holds, then the values of all three of the above integrals are equal. This is usually true over a compact (i.e. bounded and closed) range, because we're usually integrating continuous functions over those ranges, and continuous functions on compact sets are necessarily bounded in modulus.<|endoftext|> TITLE: Continuity of Derivative at a point. QUESTION [11 upvotes]: Is it possible that derivative of a function exists at a point but derivative does not exist in neighbourhood of that point. If this happens then how is it possible. I feel that if derivative exists at a point then the left hand derivative is equal to the right hand derivative so derivative should exist in neighbourhood of that point. 
REPLY [3 votes]: You ask about existence of a derivative in a single point, but in title you say continuity. As for existence, a derivative $f'(a)$ of a real function $f(x)$ at point $x=a$ is defined as a limit $$\lim_{h\to 0} \frac{f(a+h)-f(a)}h$$ The existence (and the value) of the limit determines a derivative at the chosen point, independent on the existence of the limit in any neighborhood of $a$. As others show, there exist functions which are differentiable at a single point only. However if you ask for continuity, it requires the derivative to be defined (exist) in some neighbourhood of $a$, so that a limit of the derivative exists: $$\lim_{x\to a}f'(x)$$ Then you can ask if a derivative is continuous at $a$. And there are functions (examples given in other answers) with a derivative discontinuous at some point, although existing in a neighborhood of that point.<|endoftext|> TITLE: The Zariski closure of a constructible set is the same as the standard closure? QUESTION [8 upvotes]: Question: Let $X$ be an affine variety over $\Bbb C$, and let $Y\subseteq X$ be a constructible set (i.e. $Y$ is a finite union of locally closed sets). Is it true that the Zariski closure of $Y$ is the same as the closure of $Y$ in the standard Euclidean topology inherited from the inclusion $X\subseteq\Bbb C^n$? In this question I asked whether these two closures were the same when $Y$ is the orbit of an algebraic group action. Then, this answer says that the answer is yes because orbits are constructible sets. However, I don't know a proof of the fact that these orbit closures are the same for constructible sets. If $\bar{Y}^E$ denotes the euclidean closure and $\bar{Y}^Z$ the Zariski one, then it is clear that $$\bar{Y}^E\subseteq \bar{Y}^Z$$ since the Euclidean topology is finer than the Zariski topology. But for the converse, I am clueless. REPLY [5 votes]: Yes it is true. You probably know that a non-empty Zariski-open set $U\subseteq X$ is Zariski-dense, and it is a standard fact that $U$ is also Euclidean-dense. (See this mathoverflow answer for a slick proof of that fact.) Assuming this, the proof of your claim is simple: Wlog, $Y$ is locally closed because the closure of a finite union is the union of the closures. Then, by definition, $Y$ is Zariski-open in $\bar{Y}^Z$ and hence $Y$ is Euclidean-dense in $\bar{Y}^Z$, i.e. $\bar{Y}^E\cap\bar{Y}^Z=\bar{Y}^Z$. Since $\bar{Y}^E\subseteq\bar{Y}^Z$, this concludes the proof.<|endoftext|> TITLE: Second derivative of Kullback–Leibler divergence QUESTION [6 upvotes]: The Kullback-Leibler divergence is defined here. I have to find the second derivative of $\textrm{KL}(p(s, \theta)||\mu(s) q(\theta, s))$ regarding $p(s, \theta)$, where $p(s, \theta)$ is a joint probability (and therefore, $\frac{1}{p(s, \theta)} \geq 0$), and $\frac{\partial^2 p(s, \theta)}{\partial p(s,\theta)} = 0$. By definition, $\textrm{KL}(p(s, \theta)||\mu(s) q(\theta, s)) = \sum\limits_{i} P(i) \log \frac{P(i)}{Q(i)}$, where $Q(s,\theta)=\mu(s) q(\theta, s)$. The final result of the derivation must show that the second derivative of $\textrm{KL}(p(s, \theta)||\mu(s) q(\theta, s))$ is non-negative. 
My colleague scribbled these steps before leaving: $ \frac{\partial^2 \sum\limits_{i} p(s, \theta) \log \frac{p(s,\theta)}{Q(s, \theta)}}{\partial p(s,\theta)|s,\theta} = \\ P(s,\theta) \log P(s,\theta)-P(s,\theta) \log Q(s,\theta) = \\ \log P(s,\theta) * \frac{1}{P(s,\theta)} = \\ \log P(s,\theta) - \log Q(s,\theta) = \\ \frac{1}{P(s,\theta)} \geq 0.$ But this was really fast, and his handwriting isn't the best. Therefore, there might be some details wrong (and he might also have made a mistake). I'm guessing in the second line, some of terms must be derivated (as if it were a first derivation of a multiplication $(fg)' = f'g + fg'$), but the minus sign doesn't correspond and we are doing a second derivation. I'm also assuming he omitted the $\sum$. On the third line, I guess the second half of the equation equals $0$ and he derived the $\log$ into $\frac{1}{P(s,\theta)}$, but why is there a $\log$ still? On the fourth line there are other $\log$s, which I have no idea where they came from, but if we derive everything, we reach the final line, which we know is positive, and the proof ends. I tried to reach the solution on my own, like this $ \frac{\partial^2 \sum\limits_{i} P(i) \log \frac{P(i)}{Q(i)}} {\partial P(i)} = \sum\limits_{i} P'' \log \frac{P}{Q} + 2 P' \log '\frac{P}{Q} + P \log '' \frac{P}{Q} = \sum\limits_{i} 0 + 2\frac{Q}{P} + P \frac{PQ'-QP'}{P^2} = \sum\limits_{i} 2\frac{Q}{P} + \frac{PQ'-QP'}{P} = \sum\limits_{i} 2\frac{Q}{P} - \frac{Q}{P} = \sum\limits_{i} \frac{Q}{P} $ But I can't really move from here to the final solution. What did I do wrong? What was my colleague doing? REPLY [6 votes]: Found it. I was missing the fact that $\log(\frac{A}{B})=\log(A)-\log(B)$. From there, we can easily do \begin{align} & \!\!\!\!\!\!\!\!\frac{\partial^2 \textrm{KL}(p(s, \theta)||\mu(s) q(\theta|s))}{\partial p(s,\theta)^2} = \\ & = \frac{\partial^2 p(s, \theta) \log \frac{p(s,\theta)}{Q(s,\theta)}}{\partial p(s,\theta)^2} = \nonumber\\ & = \frac{\partial^2 p(s, \theta) \log p(s,\theta) - p(s, \theta) \log Q(s,\theta)}{\partial p(s,\theta)^2} = \nonumber\\ & = \frac{\partial^2 p(s, \theta) \log p(s,\theta)}{\partial p(s,\theta)^2} - \frac{\partial^2 p(s, \theta) \log Q(s,\theta)}{\partial p(s,\theta)^2} = \nonumber\\ & = \frac{\partial^2 p(s, \theta)}{\partial p(s,\theta)^2} \log p(s,\theta) + 2 \frac{\partial p(s, \theta) \log p(s,\theta)}{\partial p(s,\theta)} + p(s, \theta) \frac{\partial^2 \log p(s,\theta)}{\partial p(s,\theta)^2} - 0 = \nonumber\\ & = 0 + \frac{2}{p(s,\theta)} + p(s, \theta) \frac{\partial \frac{1}{p(s,\theta)}}{\partial p(s,\theta)} = \nonumber\\ & = \frac{2}{p(s,\theta)} - \frac{1}{p(s,\theta)} = \nonumber\\ & = \frac{1}{p(s,\theta)} \geq 0 \nonumber\\ \end{align}<|endoftext|> TITLE: Proof of this inequality QUESTION [6 upvotes]: I have a finite sequence of positive numbers $(a_i)_1^n$ for which: $a_1>a_n$, $a_j\geq a_{j+1}\geq\cdots\geq a_n$ for some $j\in\{2,\ldots,n-1\}$, $a_1>a_2>\cdots>a_{j-1}$, $a_j\geq a_i$ for all $1\leq i\leq n$. I conjecture that: $$(a_n+a_2+\cdots+a_j)\left(\sum_{i=j+1}^{n-1}{\frac{a_i^2}{(a_1+\cdots+a_i)(a_n+a_2+\cdots + a_i)}}+\frac{a_1+a_n}{a_1+\cdots+a_n}\right) \geq (a_n+a_1+\cdots+a_j)\left(\sum_{i=j+1}^{n-1}{\frac{a_i^2}{(a_1+\cdots+a_i)(a_n+a_1+\cdots + a_i)}}+\frac{a_n}{a_1+\cdots+a_n}\right).$$ I have an unappealing brute-force proof when $n\in\{3,4,5\}$ but I can't prove it in general. 
I have tried calculus to no avail, and it doesn't seem like a good fit for any of the standard inequalities. Does this look even remotely similar to anything already done? I appreciate that it's a rather ugly inequality but some suggestions would be greatly appreciated! The proof when $n=3$ is outlined below. By condition 2. we know that $j=2$ so that \begin{align*} (a_3+a_2)\left(\frac{a_1+a_3}{a_1+a_2+a_3}\right)-(a_3+a_1+a_2)\left(\frac{a_3}{a_1+a_2+a_3}\right)=\frac{a_2(a_1-a_3)}{a_1+a_2+a_3}>0 \end{align*} where we have used condition 1, which states that $a_1>a_3$. The proofs when $n=4$ and $n=5$ are likewise, only uglier. When $n=5$ the trick is to find common denominators then simply pair each negative term with some larger positive term. It's horrible, but it works. Perhaps a general proof would involve a similar argument but more formalised? If a full proof can't be found then I'd be happy for a proof in the special case where $j=2$ and $j=3$. REPLY [3 votes]: Your conjecture fails for $n \ge 7$ (and maybe for some smaller $n$). The following results are from programming your inequality in Sage and testing a variety of patterned sequences.

1. First family of counterexamples
Let $S(a,b) = [a, a-1, a-2, ..., 1, b, b-1, b-2,..., 1]$, of length $a+b$, for positive integers $a,b$. For any $a \ge 2$, there is some $b^* \gt a$ such that for any $b \ge b^*$, $S(a, b)$ is a counterexample. (The quantity $\ \text{LHS}-\text{RHS}\ $ is a decreasing function of $b$, and falls below $0$ at $b=b^*$.) Such counterexamples include $S(2, b\ge 11), S(3, b\ge 14), S(4, b\ge 16), S(5,b\ge 19)$, and so on. (I haven't determined a formula for $b^*$.) The smallest counterexample of this form appears to be $$S(2,11) = [2, 1, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]$$ for which $$\text{LHS} = \frac{79582008868974649241}{15735265132809166560} = 5.0575... < 5.0608... = \frac{1185342437701}{234217526928} = \text{RHS} $$

2. Second (shorter) family of counterexamples
Another family of counterexamples is $[2, b, b, b, b, b, 1]$ with $b \ge 5$. The smallest counterexample of this form appears to be $[2, 5, 5, 5, 5, 5, 1]$, for which $$\text{LHS} = \frac{3515219}{1225224} = 2.869... < 2.881... = \frac{30446902}{10567557} = \text{RHS} $$

Here's the core of the program (note the indexing adjustments due to Python lists being $0$-based):

def lhs(L,j):
    n = len(L)
    tot = (L[0] + L[n-1])/sum(L)
    for i in [j+1..n-2]:
        tot += L[i]^2 / ( sum(L[0:i+1])*( L[n-1] + sum(L[1:i+1]) ) )
    return (L[n-1] + sum(L[1:j+1]))*tot

def rhs(L,j):
    n = len(L)
    tot = L[n-1]/sum(L)
    for i in [j+1..n-2]:
        tot += L[i]^2 / ( sum(L[0:i+1])*( L[n-1] + sum(L[0:i+1]) ) )
    return (L[n-1] + sum(L[0:j+1]))*tot

for b in [3..8]:
    L = [2,b,b,b,b,b,1]; left = lhs(L,1); right = rhs(L,1)
    print b, left.n(), right.n(), (left-right).n()

> 3 1.96695167577521 1.92603768780239 0.0409139879728115
> 4 2.41522132314971 2.40469223123685 0.0105290919128582
> 5 2.86904190580661 2.88116752055371 -0.0121256147470981
> 6 3.32586148147269 3.35536963036963 -0.0295081488969435
> 7 3.78448484495198 3.82769776396051 -0.0432129190085339
> 8 4.24427752155089 4.29855086580553 -0.0542733442546420<|endoftext|> TITLE: infimum in inequalities QUESTION [5 upvotes]: I am just starting on analysis and I have a question about the concept of infimum: Consider this scenario: There is a metric space $(X,d)$ and a set $A \subseteq X$, hence for any $x,y \in X$ and $a\in A$ $ d(x,a) \leq d(x,y) +d(y,a) \; \forall a \in A$ .... this is true since $d$ is a valid metric.
Now can I justify the following statement: $\inf_{a\in A} d(x,a) \leq d(x,y) + \inf_{a \in A} d(y,a)$ Basically $ d(x,A) \leq d(x,y) + d(y,A) $? Is is it OK to take the infimum on both sides of the inequality? How to argue that doing so is right? the '$a\in A$' which minimises (weakly speaking here) d(x,a) need not be the same '$a\in A$' which minimises d(y,a)? P.S: This question is in the backdrop of proving that the function d(x,A) is continuous REPLY [4 votes]: Let $x,y$ be fixed members of the metric space and suppose $A$ is a subset of the metric space. Let $D = \{ d(x,a) \in \mathbb{R} : a \in A \}$ and let $E = \{ d(x,y) + d(y,a) \in \mathbb{R} : a \in A\}.$ Let arbitrary $e \in E$ be given. Then $e = d(x,y) + d(y,a)$ for some $a \in A$. Thus, there is some $d \in D$ (namely, $d(x,a)$) such that $d(x,a) \leq e.$ By definition of $\inf,$ $\inf D \leq d$ and therefore $\inf D \leq e.$ Since $e$ was arbitrary, this is true $\forall e \in E$. This shows that $\inf D$ is a lower bound for $E.$ Again, by definition of $\inf,$ this must mean that $\inf D \leq \inf E,$ as required. Yes, your assertion is ok. Apologies for my earlier comment which was misleading because these two propositions are different: $\forall b \in B, \exists a \in A : a \leq b$ $a \in A, b \in B \Rightarrow a \leq b$<|endoftext|> TITLE: Parallel transport along a closed geodesic QUESTION [5 upvotes]: It do Carmo, in exercise 9.4, it is claimed that parallel transport along a closed geodesic in an even-dimensional orientable manifold "leaves a vector orthogonal to the geodesic invariant." So, let $\gamma:[0,a]\to M$ be this geodesic, with $\gamma(0)=\gamma(a)=p$, and let $V\in T_pM$ be orthogonal to $\gamma'(0)$. Let $P_t$ be the parallel transport along the geodesic. From Picard-Lindelöf, it is clear that $P_a\gamma'(0)=\gamma'(a)$. Let $V(t):=P_t V$. I believe the claim is that $V=P_aV$. It is clear that $\langle V(a),\gamma'(0)\rangle=0$, so $V(a)$ lies in the same $\gamma'(0)$-orthogonal subspace ($T_pM^\bot$) as $V$. Let $E_1,\dotsc,E_{n-1}$ be an orthonormal basis of this subspace. Extend these via parallel transport along the geodesic. Then $E_1(a),\dotsc,E_{n-1}(a):=P_aE_1,\dotsc,P_aE_{n-1}$ is also an orthonormal frame of $T_pM^\bot$. Since $P_t$ preserves orientation (I have already proved this), there is an $O\in\mathrm{SO}(n-1)$ such that $E_i(a)=\sum_{j=1}^{n-1}O_i{}^jE_j$. At this point I get stuck. I see no reason for $O$ to be the identity or what $M$ being even-dimensional has to do with anything. How do I continue? REPLY [2 votes]: You've misinterpreted the statement - you to need to show it leaves one such vector invariant, not all such vectors. (You can reasonably easily produce a counterexample to the stronger statement by constructing a flat $R^3$ bundle over $S^1$ with a twist.) Thus the problem boils down to showing that every $O \in SO(n-1)$ has a fixed point, which is true exactly when $n-1$ is odd. If you're not familiar with this fact, you can establish it by studying the eigenvalues of $O$, remembering that any complex ones must come in conjugate pairs and that their product (the determinant) is positive.<|endoftext|> TITLE: Show that orthogonal matrices are diagonalizable QUESTION [5 upvotes]: I want to prove that all orthogonal matrices are diagonalizable over $C$. I know that a matrix is orthogonal if $Q^TQ = QQ^T = I$ and $Q^T = Q^{-1}$, and that a matrix $A$ is diagonalizable if $A = PDP^{-1}$ where $D$ is a diagonal matrix. How can I start this proof? 
REPLY [4 votes]: Note that if $Q$ is orthogonal then $Q$ is normal, because \begin{equation*} Q Q^T = Q^T Q = I. \end{equation*} So the spectral theorem implies that $Q$ is diagonalizable.<|endoftext|> TITLE: Let $\int_{- \infty}^{\infty} f(x) dx =1$. Then show that $ \int_{- \infty}^{\infty} \frac{1}{1+ f(x)} dx = \infty.$ QUESTION [9 upvotes]: Let $f : \mathbb{R} \to [ 0, \infty)$ be a measurable function. If $\int_{- \infty}^{\infty} f(x) dx =1$. Then I want show that $ \int_{- \infty}^{\infty} \frac{1}{1+ f(x)} dx = \infty.$ Any help will be appreciated. Thank you in advance. REPLY [11 votes]: $$\int_{-c}^c \frac{1}{1+f(x)}\, dx = \int_{-c}^c \frac{1+f(x)}{1+f(x)} \, dx - \int_{-c}^c \frac{f(x)}{1+f(x)} \, dx \\ = 2c - \int_{-c}^c \frac{f(x)}{1+f(x)} \, dx \\ \geqslant 2c - \int_{-c}^c f(x) \, dx$$ REPLY [6 votes]: The set $E=\{x\,:\,f(x)>1\}$ has finite measure since otherwise $\int_{\mathbb{R}} f\ge \int_E f \ge |E| = \infty$. So $\mathbb{R}\backslash E$ has infinite measure and thus we get $$\int_{-\infty}^\infty \frac1{1+f(x)} dx \ge \int_{\mathbb{R}\backslash E} \frac{1}{2} dx = \infty$$ because $f(x)\le 1$ for $x\in \mathbb{R}\backslash E$.<|endoftext|> TITLE: Volume of the Intersection of Ten Cylinders QUESTION [6 upvotes]: I'm in Calculus 2, and we were first given the problem to find the intersection of two perpendicular cylinders of equal radius. This breaks down into eight times the volume of a quarter circle (with radius r) with perpendicular square cross sections. $$V=8\int_0^r \sqrt{r^2-x^2}^2dx=8\int(r^2-x^2)dx=8\left[ r^2x - \frac{1}{3}x^3 \right]^{r}_{0}=\frac{16}{3}r^3$$ After this question on the problem set, my teacher has written "Aren't you glad I didn't have you find the intersection of ten cylinders?" Assuming the ten cylinders intersect in an equal way, like the faces of an icosahedron, I assume this would make some sort of curvy-face icosahedron. My question is two parts Can I find the volume using a Calculus II base of knowledge (including a bit of multivar)? What is the volume of the intersection of ten cylinders of equally radius equally spaced? Edit: The question should be so that the axis of each cylinder is perpendicular to the face of an icosahedron- because this is 10 pairs of parallel sides, that should be ten cylinders. Edit 2: Question 1 is answered: No, but maybe. (That wasn't the important part anyway) Question 2 is still hanging, as I'd like to see the methodology involved, I'll restate the problem with my current understanding of it. Ten cylinders, each of radius r intersect along the lines that are perpendicular to the faces of a regular icosahedron at the center of each face. What is the volume of the intersection? I have created rather crude pictures with my limited Geogebra knowledge: REPLY [7 votes]: The picture below illustrates what one will get if one intersect ten infinite long cylinders of unit radius, whose axes are aligned along the ten diagonals of a dodecahedron, against each other. $\hspace1in$ The resulting figure is very complicated. It consists of $180$ quadrilateral faces and each cylinder contribute $18$ faces. Faces coming from same cylinder has been colored with same color. For example, all the red faces lie on a cylinder whose axis is pointing along the $(-1,1,1)$ direction. The $18$ faces from any cylinder fall into two groups. Up to mirror reflection, $12$ of them are congruent to each other. The remaining $6$ faces are congruent to each other directly. 
If one study the figure carefully, one will notice the quadrilaterals arrange themselves into $12$ pentagons. Each pentagon carries $15$ quadrilaterals and these pentagons forming the faces of a dodecahedron. As a "dodecahedron", one vertex $U$ of it is lying along the direction $(-1,1,1)$ and another nearby one $V$ is lying along the direction $(0,\phi, \phi^{-1})$ where $\phi$ is the golden ratio. To simplify analysis, choose a new coordinate system such that $U$ lies along the $z$-axis and $V$ in the $yz$-plane. i.e. $$\begin{array}{rcl} (x,y,z)_U^{old} = \sqrt{\frac38} (-1,1,1) &\mapsto& (x,y,z)_U = \frac{3}{\sqrt{8}}(0,0,1)\\ (x,y,z)_V^{old} = \sqrt{\frac38} ( 0,\phi,\phi^{-1}) &\mapsto& (x,y,z)_V = \frac{3}{\sqrt{8}}(0,\frac23,\frac{\sqrt{5}}{3})\\ \end{array} $$ If one "zoom in" the figure from the direction of new +ve $x$-axis and perform an orthographic projection to the new $yz$-plane, one see something like below: $\hspace1in$ The $18$ red faces now lies along the equator. The cylinder holding them becomes $$\mathcal{C} \stackrel{def}{=} \{ (x,y,z) : x^2 + y^2 = 1 \}.$$ Furthermore, the $18$ red faces can be viewed as the union of $12$ non-simple polygons. Each of them is congruent to either the non-simple polygon $\mathcal{P}$ with vertices $AHDIGDF$ (the one highlighted by a white border) or its mirror image. To compute the volume of the intersection, we first need to figure out the area of $\mathcal{P}$. As shown in figure above, we can break $\mathcal{P}$ into $6$ right angled triangles: $$\mathcal{P} = \triangle ABF \cup \triangle BDF \cup \triangle AHC \cup \triangle HDC \cup \triangle DEG \cup \triangle DIE$$ It turns out it is not that hard to compute the area of these sort of right angled triangle on a cylindrical surface. Let me use $\triangle ABF$ on $\mathcal{C}$ as an example. First, the curve $AF$ lies on the intersection of two cylinders. The axes of these two cylinders are pointing along the direction $OU$ and $OV$ respectively ($O = (0,0,0)$ is the origin, right behind $A$ in above figure). From above figure, it is easy to see $AF$ lies on the plane equal distance between $U$ and $V$. Let $\alpha = \angle BAF$ and $\beta = \angle VOU$. The slope of $AF$ with respect to the equator is then given by $$\tan\alpha = \cot\frac{\beta}{2} = \frac{1+\cos\beta}{\sin\beta} = \sqrt{\frac{1 + \cos\beta}{1 - \cos\beta}} = \sqrt{\frac{3+\sqrt{5}}{3-\sqrt{5}}} = \frac{3+\sqrt{5}}{2} = \phi^2$$ The point $F$ is one of the vertex of the dodecahedra, it is not hard to see $\;z_F = \frac{3}{\sqrt{8}}\cdot \frac13 = \frac{1}{\sqrt{8}}$. We can parametrize $AF$ by the map $$ [0,\theta_F] \ni \theta\; \mapsto\; (x,y,z) = (\cos\theta,\sin\theta,\tan\alpha\sin\theta ) \in \mathcal{C} \quad\text{ where }\quad \tan\alpha\sin\theta_F = z_F $$ With this parametrization, the area of the $\triangle ABF$ on $\mathcal{C}$ is given by: $$\int_0^{\theta_F} \tan\alpha \sin\theta d\theta = \tan\alpha - \tan\alpha \cos\theta_F = \tan\alpha - \sqrt{\tan\alpha^2 - z_F^2} = \phi^2 - \sqrt{\phi^4 - \frac18 } $$ As one can see from this example, given the slope $k$ and height $h$ of such a right angled triangle, its area on the cylinder can be computed using following function: $$A(k,h) = k - \sqrt{k^2 - h^2}$$ Since we are dealing with cylinders with unit radius, the volume of the cone span by $O$ and such a right angled triangle is simply $\frac13 A(k,h)$. By brute force, one can work out the slopes and heights of remaining $5$ right angled triangles. 
To summarize, we have: $$ \begin{cases} \tan\angle BAF = \phi^2,\\ \tan\angle HAB = \frac{1}{\phi^2},\\ \tan\angle FDB = \tan\angle IDE = \sqrt{2},\\ \tan\angle CDH = \tan\angle EDG = \frac{1}{\sqrt{2}} \end{cases} \quad\text{ and }\quad \begin{cases} |z_F| = \frac{1}{\sqrt{8}},\\ |z_G| = |z_H| = \frac{1}{4\phi^2}\\ |z_I| = \frac{1}{2\phi^2} \end{cases} $$ From this, we find the volume of the intersection is given by $$\verb/Volume/ = \frac{10 \times 12}{3}\left[ \begin{align} & A\left(\phi^2,\frac{1}{\sqrt{8}}\right) + A\left(\sqrt{2},\frac{1}{\sqrt{8}}\right) + A\left(\frac{1}{\phi^2},\frac{1}{4\phi^2}\right)\\ + & 2 A\left(\frac{1}{\sqrt{2}},\frac{1}{4\phi^2}\right) + A\left(\sqrt{2},\frac{1}{2\phi^2}\right) \end{align} \right] $$ With help of a CAS, one can simplify this to $$\begin{align} \verb/Volume/ &= 5\left(24 + 24 \sqrt{2} + \sqrt{3} - 4\sqrt{6} - 7\sqrt{15} - 4\sqrt{30}\right)\\ &\approx 4.277158048659416687225951566030890254054503016349939576882... \end{align} $$ which is about $2\%$ larger than the volume of unit sphere.<|endoftext|> TITLE: Understanding a derivation of the SVD QUESTION [6 upvotes]: Here's an attempt to motivate the SVD. Let $A \in \mathbb R^{m \times n}$. It's natural to ask, in what direction does $A$ have the most "impact". In other words, for which unit vector $v$ is $\| A v \|_2$ the largest? Denote this unit vector as $v_1$. Let $\sigma_1 = \| A v_1 \|_2$, and define $u_1$ by $A v_1 = \sigma_1 u_1$. Next, we would like to know in what direction orthogonal to $v_1$ does $A$ have the most "impact"? In other words, for which unit vector $v \perp v_1$ is $\| A v \|_2$ the largest? Denote this unit vector as $v_2$. Let $\sigma_2 = \| A v_2 \|_2$, and define $u_2$ by $A v_2 = \sigma_2 u_2$. Question: Are the vectors $u_1$ and $u_2$ guaranteed to be orthogonal? If so, is there an easy proof for this fact, or a viewpoint that makes this obvious? REPLY [3 votes]: Here's my attempt at an intuitive explanation of the fact that $u_1$ and $u_2$ are guaranteed to be orthogonal, based on the answer given by @user1952009. Throughout this answer, $\| \cdot \|$ will denote the $\ell_2$-norm. Assume that $v_1$ is a maximizer of $\|Av\|$ subject to the constraint that $\|v\| = 1$. Assume also that $v_2 \perp v_1$. Claim: Under these assumptions, $A v_2 \perp A v_1$. Explanation: It's possible to look at this in a way that makes it intuitive or even "obvious". If $Av_2$ were not orthogonal to $A v_1$, then it seems like we could improve $v_1$ by adding $\epsilon v_2$ to it, for a sufficiently tiny value of $\epsilon$. When we perturb $v_1$ a tiny bit in the direction of $v_2$, then the norm of $v_1$ does not change (to first order, at least). [A satellite in circular orbit moves locally in a straight line, and its distance from the center of the Earth is constant.] However, we cannot say the same for the norm of $Av_1$, because $A v_1$ is perturbed in the direction of $A v_2$, and $A v_1$ and $A v_2$ are not orthogonal. The growth in $\| A v_1 \|$ is non-negligible. Again: when $v_1$ is perturbed in the direction $v_2$, the change in norm is negligible (so the norm is still $1$). But, $A v_1$ is perturbed in the direction $A v_2$, and the change in norm is non-negligible (so the norm can increase). Suppose you're standing 1 kilometer from the origin and you want to take a step in order to increase the magnitude of your displacement vector from the origin. In which direction should you move? 
Is it better to move in a direction orthogonal to your displacement vector, or parallel to it? If you step in a direction orthogonal to your displacement vector, then the change in the magnitude of your displacement vector is negligible. However, if you step in a direction parallel to your displacement vector, then the change in magnitude of your displacement vector is significant. Finally, let's convert this intuition into a rigorous proof. To get a rigorous proof, we have to face the fact that $v_1 + \epsilon v_2$ does not actually have norm $1$ when $\epsilon \neq 0$, even if $\epsilon$ is tiny. We can fix this by taking our perturbed version of $v_1$ to be \begin{equation} \tag{$\spadesuit$} \tilde v(\epsilon) = \sqrt{1 - \epsilon^2} \, v_1 + \epsilon v_2. \end{equation} The vector $\tilde v(\epsilon)$ really does have norm $1$. We are assuming (for a contradiction) that $A v_2$ is not orthogonal to $A v_1$. This implies that $A v_2 = c A v_1 + w$, for some $c \neq 0$ and $w \perp A v_1$. It follows that \begin{align} \| A \tilde v(\epsilon) \|_2^2 &= \| (\sqrt{1 - \epsilon^2} + c \epsilon) A v_1 + \epsilon w \|^2 \\ &= (\underbrace{\sqrt{1 - \epsilon^2} + c \epsilon}_{>1 \text{ if }\epsilon \text{ is small enough}})^2 \| A v_1 \|^2 + \epsilon^2 \| w \|^2. \end{align} This shows that $v_1$ is not a true maximizer of $\| A v \|$ subject to the constraint $\| v\|_1$. We have arrived at a contradiction. The point of the intuitive discussion was to explain how we might think of perturbing $v_1$ as in equation ($\spadesuit$), and why we would expect this perturbation of $v_1$ to be an improvement on $v_1$.<|endoftext|> TITLE: Coordinate free Geometric Algebra vs. Linear Algebra QUESTION [5 upvotes]: I think I know what coordinate free means. But I never found in ANY text a good explanation of it or something like: This is the problem solved with coordinates and this is the problem solved without coordinates, etc. Since the philosophy of GA is that everything should be coordinate free, I would like to see an example of something that can be done in GA without coordinates but you have to use coordinates with usual linear algebra. To be specific, in GA you can make something like this in, let's say $\mathbb{G_3}$: $c=(bab)I$ The vector $a$ is rotated about the vector $b$, and then you take the dual plane of it. Is there an efficient way to do this without coordinates in linear algebra? This is just an example that I made up spontaneously; maybe there are better examples. REPLY [5 votes]: In linear algebra and geometric algebra both, you can talk about "transformations," "bilinear forms" and "vector spaces" all without referring to coordinates. They are all abstract ideas. It is only when you begin to identify $V=F^n$ for some $n$ and field $F$, transformations with matrices, bilinear forms with Gram matrices, etc., that you start to get coordinates. So, one example of a theorem of linear algebra that you could solve without coordinates is this: If $A^2=A$, then $V=Im A\oplus Ker A$. You need never refer to bases and coordinates, you only need to use the abstract properties of $V$ and $A$. Another one is Given a unit vector $v$, the transformation $x\mapsto x-2v(v^\ast x)$ is a reflection in the plane normal to $v$. In fact, in the example you gave, you said everything entirely in terms of abstract vectors and geometric notions, never referring once to coordinates. Whether or not you prove what you are doing works using coordinates is another matter. 
It is nice to be able to prove things in a coordinate-free way sometimes, but it is not always desirable. For one thing, the coordinate definition of "determinant" is much less complicated than the coordinate free version.<|endoftext|> TITLE: Catalan's constant and $\int_{0}^{2 \pi} \int_{0}^{2 \pi} \ln(\cos^{2} \theta + \cos^{2} \phi) ~d \theta~ d \phi$ QUESTION [9 upvotes]: According to my book (The Nature of computation, page 691): $$\int_{0}^{2 \pi} \int_{0}^{2 \pi} \ln(\cos^{2} \theta + \cos^{2} \phi) ~d \theta ~d \phi= 16 \pi^2 \left(\frac{C}{\pi}- \frac{\ln2}{2}\right),$$ where $C$ is Catalan's constant. I have tried to derive this expression by looking at integral representations of $C$, but I have not been able to perform the integral. Any help? Thank you. REPLY [9 votes]: Here is one possible reduction that leads to the answer. Step 1. Let $I$ denote the integral. As in @Takahiro Waki's computation, we utilize several trigonometric identities to write \begin{align*} I &= \int_{0}^{2\pi}\int_{0}^{2\pi} \log\left( 1 + \frac{\cos2\theta + \cos2\phi}{2} \right) \, \mathrm{d}\theta\mathrm{d}\phi \\ &= \int_{0}^{2\pi}\int_{0}^{2\pi} \log( 1 + \cos(\theta+\phi)\cos(\theta-\phi)) \, \mathrm{d}\theta\mathrm{d}\phi. \end{align*} Now observe that $(\theta, \phi) \to (\theta-\phi, \theta+\phi) =: (x, y)$, as mapping $(\Bbb{R}/2\pi\Bbb{Z})^2 \to (\Bbb{R}/2\pi\Bbb{Z})^2$ is a 2-1 covering with $\mathrm{d}\theta\mathrm{d}\phi = \frac{1}{2}\mathrm{d}x\mathrm{d}y$. This gives $$ I = \int_{0}^{2\pi}\int_{0}^{2\pi} \log( 1 + \cos x \cos y ) \, \mathrm{d}x \mathrm{d}y. \tag{1} $$ (See Addendum for a more direct proof.) Step 2. Applying the McLaurin expansion of the function $z \mapsto \log(1+z)$ to $\text{(1)}$ and integrating term by term, we get $$ I = -\sum_{n=1}^{\infty} \frac{1}{2n} \left( \int_{0}^{2\pi} \cos^{2n} x \, \mathrm{d}x \right)^2 = -2\pi \sum_{n=1}^{\infty} \frac{\Gamma(n+\frac{1}{2})^2}{n!^2 n}. \tag{2} $$ In order to make a further simplification, notice that for complex $z$ with $|z| < 1$, we have \begin{align*} \sum_{n=0}^{\infty} (-1)^n \frac{\Gamma(n+\frac{1}{2})}{n!} z^n &= \frac{\sqrt{\pi}}{\sqrt{1+z}}, \tag{3} \\ \sum_{n=1}^{\infty} (-1)^n \frac{\Gamma(n+\frac{1}{2})}{n!n} z^n &= -2\sqrt{\pi} \log\left( \frac{1+\sqrt{1+z}}{2} \right) \tag{4}. \end{align*} Thus it follows from Parseval's identity that $$I = 2\pi \int_{-\pi}^{\pi} \log\left( \frac{1+\sqrt{1+e^{i\theta}}}{2} \right) \, \frac{\mathrm{d}\theta}{\sqrt{1 + e^{-i\theta}}}. \tag{5} $$ Now we observe that for $ |\theta| < \pi$, we obtain $\sqrt{1 + e^{-i\theta}} = e^{-i\theta/2}\sqrt{1 + e^{i\theta}}$. Using this, we simplify the above expression as $$ I = 2\pi \int_{-\pi}^{\pi} \log\left( \frac{1+\sqrt{1+e^{i\theta}}}{2} \right) \frac{e^{i\theta/2}}{\sqrt{1 + e^{i\theta}}} \, \mathrm{d}\theta. $$ Finally, applying the substitution $u = ie^{i\theta/2}$ and replacing the resulting semicircular contour by the linear segment $[-1, 1]$, we find that $$ I = 4\pi \int_{-1}^{1} \log\left(\frac{1+\sqrt{1-u^2}}{2}\right) \, \frac{\mathrm{d}u}{\sqrt{1-u^2}}. \tag{6} $$ Step 3. It remains to compute $\text{(6)}$. Applying the substitution $u = \sin \theta$, we have $$ I = 8\pi \int_{0}^{\pi/2} \log\left(\frac{1+\cos\theta}{2}\right) \, \mathrm{d}\theta = 16\pi \int_{0}^{\pi/2} \log\cos(\theta/2) \, \mathrm{d}\theta. 
$$ The final integral is not hard to compute by using $$ \log\left|\cos(\theta/2)\right| = \Re\log(1+e^{i\theta}) - \log 2 = -\log 2 + \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n} \cos(n\theta), $$ and the result is $$ I = 16\pi \left( C - \frac{\pi}{2} \log 2 \right) $$ as corrected by @JeanMarie. Addendum. Here we collect some arguments which clarifies some steps of the main computation. Equation (1). Notice that the transform $(x, y) = (\theta-\phi, \theta+\phi)$ maps the square $[0, 2\pi]^2$ to a square $\mathcal{D}$ with vertices $(0, 0)$, $(2\pi, 2\pi)$, $(-2\pi, 2\pi)$ and $(0, 4\pi)$. Now split this square into four non-overlapping parts $$ \mathcal{D} = \mathcal{D}_1 \cup \mathcal{D}_2 \cup \mathcal{D}_3 \cup \mathcal{D}_4, $$ where $\mathcal{D}_1$ is the right triangle formed by 3 vertices $(0, 0)$, $(2\pi, 2\pi)$ and $(0, 2\pi)$. $\mathcal{D}_2$ is the right triangle formed by 3 vertices $(0, 0)$, $(-2\pi, 2\pi)$ and $(0, 2\pi)$. $\mathcal{D}_3$ is the right triangle formed by 3 vertices $(0, 4\pi)$, $(2\pi, 2\pi)$ and $(0, 2\pi)$. $\mathcal{D}_4$ is the right triangle formed by 3 vertices $(0, 4\pi)$, $(-2\pi, 2\pi)$ and $(0, 2\pi)$. Then by translating each pieces appropriately and reassembling, we find that $\mathcal{D}_1 \cup ( (2\pi, -2\pi) + \mathcal{D}_4) = [0, 2\pi]^2$, $((2\pi, 0) + \mathcal{D}_2) \cup ( (0, -2\pi) + \mathcal{D}_3) = [0, 2\pi]^2$. Thus utilizing the $2\pi$-periodicity of both $\cos x$ and $\cos y$ we can write \begin{align*} I &= \frac{1}{2} \iint_{\mathcal{D}} \log( 1 + \cos x \cos y ) \, \mathrm{d}x \mathrm{d}y \\ &= \frac{1}{2} \sum_{i=1}^{4} \iint_{\mathcal{D}_i} \log( 1 + \cos x \cos y ) \, \mathrm{d}x \mathrm{d}y \\ &= 2 \times \frac{1}{2} \iint_{[0, 2\pi]^2} \log( 1 + \cos x \cos y ) \, \mathrm{d}x \mathrm{d}y. \end{align*} This proves $\text{(1)}$. Equation (2). This is a simple consequence of the following beta function identity $$ 2\int_{0}^{\pi/2} \cos^{2s-1}\theta \sin^{2t-1}\theta \, \mathrm{d}\theta = \beta(s, t) = \frac{\Gamma(s)\Gamma(t)}{\Gamma(s+t)}, \quad \Re(s), \Re(t) > 0. $$ Equation (3) and (4). By the generalized binomial theorem, we get $$ \frac{1}{\sqrt{1+z}} = \sum_{n=0}^{\infty} \binom{-1/2}{n} z^n, \quad |z| < 1. $$ Now by expanding the binomial coefficient, we find that \begin{align*} \binom{-1/2}{n} &= \frac{(-\frac{1}{2})(-\frac{1}{2}-1)\cdots(-\frac{1}{2}-n+1)}{n!} \\ &= (-1)^n \frac{(n-\frac{1}{2})\cdots(1-\frac{1}{2})}{n!} = (-1)^n \frac{\Gamma(n+\frac{1}{2})}{n!\Gamma(\frac{1}{2})} = (-1)^n \frac{\Gamma(n+\frac{1}{2})}{n!\sqrt{\pi}}. \end{align*} Plugging this back to the binomial series proves $\text{(3)}$. In order to prove $\text{(4)}$, notice that both sides of $\text{(4)}$ define analytic functions on $|z| < 1$ with value zero at $z = 0$ and that their derivatives coincide: $$ -2\sqrt{\pi} \frac{\mathrm{d}}{\mathrm{d}z} \log\left( \frac{1+\sqrt{1+z}}{2} \right) = \frac{\sqrt{\pi}}{z}\left( \frac{1}{\sqrt{1+z}} - 1 \right) = \sum_{n=1}^{\infty} (-1)^n \frac{\Gamma(n+\frac{1}{2})}{n!} z^{n-1}. $$ This proves that $\text{(4)}$ is true. Equation (5). Let $0 < r < 1$. Then using the absolute convergence we can rearrange the sum to write \begin{align*} &2\pi \log\left( \frac{1+\sqrt{1+re^{i\theta}}}{2} \right)\frac{1}{\sqrt{1+re^{-i\theta}}} \\ &\hspace{9em} = -\sum_{\substack{m \geq 0 \\ n \geq 1}} (-1)^{m+n} \frac{\Gamma(m+\frac{1}{2})\Gamma(n+\frac{1}{2})}{m!n!n} r^{m+n} e^{i\theta(m-n)}. \end{align*} Now let us integrate both sides with respect to $\theta$ on $[0, 2\pi]$. 
Since the right-hand side converges uniformly, we can integrate term by term to get $$ 2\pi \int_{0}^{2\pi} \log\left( \frac{1+\sqrt{1+re^{i\theta}}}{2} \right)\frac{\mathrm{d}\theta}{\sqrt{1+re^{-i\theta}}} = -2\pi \sum_{n=1}^{\infty} \frac{\Gamma(n+\frac{1}{2})^2}{n!^2 n} r^{2n}. $$ As we take limit as $r \uparrow 1$, the left-hand side converges to the left-hand side of $\text{(5)}$ by the dominated convergence theorem. (It is dominated by $C \left| \theta - \pi \right|^{-1/2}$ for some constant $C > 0$.) On the other hand, the right-hand side converges by the monotone convergence theorem to $I$. Therefore $\text{(5)}$ follows.<|endoftext|> TITLE: Divisibility of $1^{101} + 2^{101} + 3^{101}+ 4^{101}+\cdots+2016^{101}$ QUESTION [8 upvotes]: $1^{101} + 2^{101} + 3^{101}+ 4^{101}+\cdots+2016^{101}$ is divisible by which of the following? $(A)$ $2014$ $(B)$ $2015$ $(C)$ $2016$ $(D)$ $2017$ Could someone share the approach to deal with such type of questions? REPLY [4 votes]: 1) $$1^{101}+2016^{101}=(1+2016)A_1=2017A_1$$ $$2^{101}+2015^{101}=(2+2015)A_2=2017A_2$$ ... $$1008^{101}+1009^{101}=(1008+1009)A_{1008}=2017A_{1008}$$ 2) $$1^{101}+2015^{101}=(1+2015)B_1=2016B_1$$ $$2^{101}+2014^{101}=(2+2014)B_2=2016B_2$$ ... $$2016^{101}=2016B$$ But $$2016 \not |1013^{101}$$ Similarly, for $2015$ and $2014$ Answer: $2017$<|endoftext|> TITLE: Evaluating an indefinite integral using complex analysis QUESTION [6 upvotes]: Using tools from complex analysis, I have to prove that $$ \int_0^{\infty} \frac{\ln x}{(x^2 + 1)^2}\,dx = - \frac{\pi}{4}.$$ But I'm not really sure where I should start. Any help would be appreciated. REPLY [3 votes]: Let $C$ be the classical key-hole contour and $I$ be the integral $$\begin{align}I&=\oint_C \frac{\log^2(z)}{(z^2+1)^2}\,dz\\\\ &=\int_0^R \frac{\log^2(x)}{(x^2+1)^2}\,dx+\int_R^0 \frac{(\log(x)+i2\pi)^2}{(x^2+1)^2}\,dx+\int_0^{2\pi}\frac{\log^2(Re^{i\phi})}{(R^2e^{i2\phi}+1)^2}iRe^{i\phi}\,d\phi\\\\ &=-i4\pi\int_0^R\frac{\log(x)}{(x^2+1)^2}\,dx+\int_0^R\frac{4\pi^2}{(x^2+1)^2}\,dx+\int_0^{2\pi}\frac{\log^2(Re^{i\phi})}{(R^2e^{i2\phi}+1)^2}iRe^{i\phi}\,d\phi \tag 1 \end{align}$$ As $R\to \infty$ the first integral on the right-hand side of $(1)$ approaches $-4\pi i$ times the integral of interest, the second integral approaches $\pi^3$, and the third approaches $0$. In addition, from the residue theorem, we have $$\begin{align} I&=2\pi i \text{Res}\left(\frac{\log^2(z)}{(z^2+1)^2}, z=\pm i \right)\\\\ &=2\pi i\left(\left(-\frac{\pi}{4}+i\frac{\pi^2}{16}\right)+\left(\frac{3\pi}{4}-i\frac{9\pi^2}{16}\right)\right)\\\\ &=\pi^3+i\pi^2 \tag 2 \end{align}$$ Setting $(1)$ equal to $(2)$, we find $$\bbox[5px,border:2px solid #C0A000]{\int_0^\infty\frac{\log(x)}{(x^2+1)^2}\,dx=-\frac{\pi}{4}}$$ as was to be shown!<|endoftext|> TITLE: What polytopes can be induced by a norm? QUESTION [6 upvotes]: Let $\|\cdot\|:\mathbb{R}^n\to\mathbb{R}_{\ge0}$ be a norm and define $B_{\|\cdot\|}(0,1):=\{x\in\mathbb{R}^n\mid\|x\|\le1\}$ to be the unit ball. For which regular $n$-dimensional polytopes (relative to the Euclidean norm resp. the Euclidean product) does there exist a norm such that $B_{\|\cdot\|}(0,1)$ is equal to the polytope given? For $n=3$ we can easily obtain a cube (with the max norm) and an octahedron (with $\|\cdot\|_1$), but can we find a norm whose unit ball is a tetrahedron? What other forms are possible? I came up with this question purely out of curiosity, but I am quite clueless how to approach it. So any thoughts are appreciated.
REPLY [4 votes]: Let $X \subseteq \mathbb{R}^n$ be any compact convex set that is symmetric about the origin and contains an open neighbourhood of $0$. Then we can define the Minkowski functional $p_X : \mathbb{R}^n \to \mathbb{R}_{\geq 0}$ by $$ p_X(y) = \inf\big\{\lambda \in \mathbb{R}_{> 0} \: : \: \lambda^{-1}y\in X\big\}. $$ One easily shows that $p_X$ is a well-defined norm on $\mathbb{R}^n$ and that $X$ is precisely the closed unit ball for this norm. (Here you use that any open neighbourhood of $0$ is absorbing, so that there always exists some $\lambda > 0$ such that $\lambda^{-1}y \in X$ holds.) Conversely, let $\lVert\:\cdot\:\rVert$ be any norm on $\mathbb{R}^n$, then one can prove that the closed unit ball with respect to this norm is compact, convex and symmetric about the origin and contains an open neighbourhood of $0$. (Here you need that all norms on $\mathbb{R}^n$ are equivalent, i.e. induce the same topology.) This gives us a bijective correspondence between centrally symmetric bodies in $\mathbb{R}^n$ and norms on $\mathbb{R}^n$. To answer your question: basically, any form is possible, as long as there is no obvious reason why it shouldn't be. Specifically, any polytope can be realised as (a translation of) the closed unit ball of some norm if and only if it meets the following criteria: It is symmetric about its center; It is convex; It is full-dimensional (not contained in some affine hyperplane); It is bounded. Here I implicitly use the following properties that are intuitively clear but nevertheless require some proof. Proposition 1. A convex set $X \subseteq \mathbb{R}^n$ has empty interior if and only if it is contained in some affine hyperplane. A proof of this proposition can be found on Mathematics Stack Exchange and elsewhere on the internet. Proposition 2. Let $X \subseteq \mathbb{R}^n$ be a set that meets all four of the above criteria. Then $X$ contains an open neighbourhood of its center. Sketch of proof. For this we use that the interior of a convex set is again convex. Assume without loss of generality that the center of $X$ is the origin (translate if necessary). Since $X$ contains an open neighbourhood of some $x\in X$, by symmetry it also contains an open neighbourhood of $-x$ (we can reflect the entire open neighbourhood in the origin). Thus, $x$ and $-x$ are interior points. Since $\text{Int}(X)$ is convex, it follows that $0\in\text{Int}(X)$ holds as well.$\hspace{2mm}\blacksquare$ Finally, I have used the following assumption: Assumption. Polytopes are already closed to begin with (by most definitions they are).<|endoftext|> TITLE: All possible ways to order numbers in an array with decreasing rows and columns QUESTION [5 upvotes]: Given positive integer numbers $1,2,...,N\cdot M$. How many ways are there to order them in an $N\times M$ array given that the values decrease in each row from left to right and in each column from top to bottom? For small arrays one can just count but I don't find a general rule. Thanks for any help. REPLY [3 votes]: This is the number of standard Young tableaux for a Young diagram with $N$ rows and $M$ columns. By the hook length formula, this is $$ \frac{(NM)!}{\prod_{i=1}^M\prod_{j=1}^N(i+j-1)}\;. $$ This is OEIS sequence A060854. That entry gives the alternative formula $$ (NM)!\prod_{k=0}^{N-1}\frac{k!}{(M+k)!}\;. $$<|endoftext|> TITLE: Limits by Products and Equalizers QUESTION [6 upvotes]: The category $Cat$ of small categories is complete. 
Could you spell out with details the construction of the limit of a functor $F : J \to Cat$ by products and equalizers? (Mac Lane, Categories for the Working Mathematician, Chapter V, Section 2, Theorem 2) REPLY [8 votes]: The limit of a functor $F :B \to C$ can be constructed as the equalizer of $$s,t :\prod_A FA \longrightarrow \prod_{f : A \to B}FB$$ where $s$ and $t$ are the unique morphisms defined by $$ \pi_{(f : A \to B)}s =\pi_B\\ \pi_{(f : A \to B)}t = (Ff)\pi_A $$ and the universal cone is given by composing with the projections of $\displaystyle\prod\limits_A FA$. In case $C = Cat$, note that $FA$ is a category for all $A \in B$, and $s$, $t$, and $Ff$ are functors. The functor $s$ is defined on an object $(X_A)_A \in \displaystyle\prod\limits_A FA$ as $$s((X_A)_A) = (X_B)_{f : A \to B}$$ that is, the $(f : A \to B)$-component of $s((X_A)_A)$ is $X_B$. The functor $t$ is defined on $(X_A)_A$ as $$t((X_A)_A) = ((Ff)(X_A))_{f : A \to B}$$ On the morphisms of the product category $\displaystyle\prod\limits_A FA$ (these are families of morphisms $g_A : X_A \to Y_A$ in $FA$ for each $A \in B$), the functors $s$ and $t$ are defined by the same formulas: $$s((g_A)_A) = (g_B)_{f:A\to B}$$ and $$t((g_A)_A) = ((Ff)(g_A))_{f:A\to B}$$ Finally, note that the limit appears as the subcategory of $\displaystyle\prod\limits_A FA$ consisting of objects and morphisms equalized by $s$ and $t$. Explicitly, an object of the limit $L$ is $(X_A)_A \in \displaystyle\prod\limits_A FA$ such that for all $f : A \to B$ we have $X_B = (Ff)(X_A)$, and a morphism between objects $(X_A)_A$ and $(Y_A)_A$ in $L$ is a family of morphisms $(g_A : X_A \to Y_A)_A$ in $\displaystyle\prod\limits_A FA$ such that $g_B = (Ff)(g_A)$ for all $f : A \to B$.<|endoftext|> TITLE: Direct proof of Bolzano-Weierstrass using Axiom of Completeness QUESTION [5 upvotes]: The author of my intro analysis text has an exercise to give a proof of Bolzano-Weierstrass using axiom of completeness directly. Let $(a_n)$ be a bounded sequence, and define the set $$S=\{x \in \mathbb{R} : x < a_n \text{ for infinitely many terms } a_n\}.$$ By the Axiom of Completeness, $\sup S$ exists, and I claim that for any $\epsilon > 0$, there must be infinitely many $a_n$ such that $\sup S - \epsilon < a_n < \sup S + \epsilon$. (If there were only finitely many $a_n$ in that interval, then $\sup S + \frac{\epsilon}{2} \in S$, contradicting $\sup S$ as an upper bound.) However, I don't know how to pinpoint a single subsequence $(a_{n_k})$ such that all such elements with $k \geq \text{ some } N$ are in this interval for all $\epsilon$. REPLY [8 votes]: Since $(a_n)$ is bounded, $S$ is nonempty and bounded above. So by AoC there exists a least upper bound $s=\sup S$. Consider $s-\frac1k$ and $s+\frac1k$ where $k$ is an arbitrary but fixed natural number. Since any number smaller than $s$ is not an upper bound of $S$, $\exists (s'\in S)(s-\frac1k < s')$. Observing the property which defines $S$: by transitivity of $<$, $s-\frac1k$ also has the property, i.e. $s-\frac1k < a_n$ for infinitely many terms $a_n$. Applying similar reasoning to $s+\frac1k$ we can see that $s+\frac1k \notin S$, so there are none or only finitely many terms $a_n$ satisfying $s+\frac1k < a_n$; this is the same as saying that all but finitely many terms $a_n$ satisfy $a_n \le s+\frac1k$. Combining these two parts we get: for all $k\in \mathbb N$, there are infinitely many terms of $(a_n)$ satisfying $s-\frac1k < a_n \le s+\frac1k$. The last statement gives us a hint of how to build a subsequence of $(a_n)$. For every different $k\in \mathbb N$, we can pick a term from the infinitely many terms that satisfy that inequality.
For example we can pick $a_{n_1}$ from $\{a_n : s-1 < a_n \le s+1\}$, then pick $a_{n_2}$ from $\{a_n : s-\frac12 < a_n \le s+\frac12\}$, and in general pick $a_{n_{k+1}}$ from $\{a_n : s-\frac1{k+1} < a_n \le s+\frac1{k+1}\}$, each time requiring $n_{k+1}>n_k$ to make sure this is indeed a subsequence, with no repetition and no going backward. (This can always be done because for every pick we have infinitely many terms in hand.) Then we need to check whether this subsequence $(a_{n_k})$ converges to something. By intuition this should be $\sup S$. To satisfy the inequality $|a_{n_k}-s|<\epsilon$ for every $\epsilon >0$, choose $K>\frac1\epsilon$. If $k\ge K$ then $\frac1k < \epsilon$ which implies $$s-\epsilon < s-\frac1k < a_{n_k} \le s+\frac1k < s+\epsilon$$ So the B-W theorem has been proved using AoC.<|endoftext|> TITLE: Number of integral solutions to a linear inequality QUESTION [5 upvotes]: I am trying to show the following identity: Let $k,n \in \mathbb{N}$. Then $$ \text{card}\{x \in \mathbb{Z}^n: \sum_{i=1}^n |x_i| \leq k\} =\sum_{i=0}^n 2^{n-i} {n \choose i}{k \choose n-i}. $$ My attempt: Let $A=\{x \in \mathbb{Z}^n: \sum_{i=1}^n |x_i| \leq k\}$. Let $0\leq i \leq n$. Let $x_1,\ldots,x_{n-i}\geq 1$ be such that $x_1+\ldots+x_{n-i} \leq k$. Then it is easy to see that $|A|=\sum_{i=0}^n 2^{n-i} {n \choose i} |\text{no. of positive integral solutions to } x_1+\ldots+x_{n-i} \leq k| $. However, I am getting the number of positive integral solutions to $x_1+\ldots+x_{n-i} \leq k$ as not equal to ${k \choose n-i}$. Can anyone help me? REPLY [3 votes]: Let $t=k-\left(|x_1|+\cdots+|x_n|\right)$, so we want to find the number of solutions of $|x_1|+\cdots+|x_n|+t=k$ where the $x_i$ and $t$ are integers and $t\ge0$. For each $i$ with $0\le i\le n$, we can 1) choose $i$ of the terms $x_1,\cdots,x_n$ to be 0 in $\binom{n}{i}$ ways 2) choose the signs of the remaining $n-i$ terms in $2^{n-i}$ ways 3) If we let $y_1,\cdots,y_{n-i}$ be the terms of the form $|x_j|$ chosen to be nonzero, $\hspace{.2 in}$we must find the number of solutions of $y_1+\cdots+y_{n-i}+t=k$ with $y_j>0$ for each $j$ and $t\ge0$. $\hspace{.2 in}$Letting $y_{n-i+1}=t+1$ gives $\hspace{.2 in}$$y_1+\cdots+y_{n-i+1}=k+1\;$ with $y_j>0$ for all $j$; so there are $\binom{k}{n-i}$ solutions. This gives a total of $\displaystyle\sum_{i=0}^n 2^{n-i}\binom{n}{i}\binom{k}{n-i}$ solutions.<|endoftext|> TITLE: Riesz Representation Theorem in Linear Algebra QUESTION [6 upvotes]: Let $\mathbb{V}$ be a finite dimensional inner product space and $\alpha : \mathbb{V} \rightarrow \mathbb{R}$ a linear functional. Prove that there is a unique vector $\overrightarrow v_{0} \in \mathbb{V}$ such that $\alpha(\overrightarrow v)=\langle\overrightarrow v,\overrightarrow v_{0}\rangle$ for all $\overrightarrow v \in \mathbb{V}$. My approach: I suppose that there exists another vector $\overrightarrow w_{0} \in \mathbb{V}$ that satisfies the same property. We get $\langle\overrightarrow v,\overrightarrow v_{0}-\overrightarrow w_{0}\rangle=0$ and I need to show that $\overrightarrow v_{0}=\overrightarrow w_{0}$ somehow. Any tips on how to do that? I tried taking an orthonormal basis for $\mathbb{V}$ but that didn't help in the end. REPLY [8 votes]: You get $$\langle v,\,v_0-w_0\rangle=0\;\;\color{red}{\forall\,v\in V}\iff v_0-w_0=0$$ as zero is the only vector which is orthogonal to the whole space.
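Before the existence argument below, here is a minimal numerical sketch of what that construction produces in $\mathbb{R}^3$ with the standard inner product and the standard basis (the functional $\alpha$ is an arbitrary example):

```python
import numpy as np

def alpha(v):
    # an arbitrary example of a linear functional on R^3
    return 2.0 * v[0] - 3.0 * v[1] + 1.0 * v[2]

e = np.eye(3)                                      # standard orthonormal basis u_1, u_2, u_3
v0 = sum(alpha(e[k]) * e[k] for k in range(3))     # v_0 = sum_k alpha(u_k) u_k

v = np.array([0.7, -1.2, 3.4])                     # any test vector
print(np.dot(v, v0), alpha(v))                     # the two numbers coincide
```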
Existence: Choose an orthonormal basis $\;\{u_1,...,u_n\}\;$ of $\;V\;$ , and let $$v_0:=\sum_{k=1}^n\alpha(u_k)u_k\in V\;\;\implies\;\;\forall\,v=\sum_{i=1}^n a_iu_i\in V:$$ $$\langle v_0,v\rangle=\sum_{i,k=1}^n\alpha(u_k)a_i\langle u_k,u_i\rangle=\sum_{k=1}^n\alpha(u_k)a_k\stackrel{\text{linearity of}\;\alpha}=\alpha\left(\sum_{k=1}^n a_ku_k\right)=\alpha(v)$$<|endoftext|> TITLE: Is there a way to update the inverse of a sum of two matrices following a rescaling of one of them? QUESTION [5 upvotes]: Suppose I have two matrices $A$ and $B$ (let's assume that both $A$ and $B$ are invertible, as is their sum), and a scalar $g$. I am interested in the matrix $$M^{-1} = (A + gB)^{-1}$$ I am aware of various expressions for computing this inverse in general, but I am interested in whether, if I calculate $M^{-1}$ for some value of $g$, is there a way to quickly update $M^{-1}$ following an update to the value of $g$? I am specifically interested in whether this can be done without performing any additional inversions after updating $g$, i.e. if I can just store $A$, $B$, $A^{-1}$, $B^{-1}$ (or some factorizations of them) and the previous value of $g$ in memory, and then somehow update $M^{-1}$ as a function of these? I've just found this, which suggests a solution if $B=I$, but I fear I may be out of luck for the more general case where $B\neq I$. I would also potentially be interested in solutions which rely on sparsity of either $A$ or $B$, as I may have some cases in which this is true. REPLY [2 votes]: Ok, so for completeness: per the comment from deb above, we can do: $$(A+gB)^{-1} = B^{-1}(AB^{-1} + gI)^{-1}$$ and let $$AB^{-1}=PDP^{-1}$$ where $P$ gives the eigenvectors and $D$ the eigenvalues of $C=AB^{-1}$, which allows us to apply this previous answer to get $$(A+gB)^{-1} =B^{-1}P(D + gI)^{-1}P^{-1}$$ $P$ and $B^{-1}P$ can be pre-computed once, and $(D+gI)^{-1}$ is easy to invert quickly because it is diagonal. Neat!<|endoftext|> TITLE: Intuition about the second isomorphism theorem QUESTION [26 upvotes]: In group theory we have the second isomorphism theorem which can be stated as follows: Let $G$ be a group and let $S$ be a subgroup of $G$ and $N$ a normal subgroup of $G$, then: The product $SN$ is a subgroup of $G$. The intersection $S\cap N$ is a normal subgroup of $S$. The quotient groups $SN/N$ and $S/(S\cap N)$ are isomorphic. Now, I've seen this theorem for some time now and I still couldn't grasp much intuition about it. I mean, it certainly is one important result, because as I've seen it is highlighted as one of the three isomorphism theorems. The first isomorphism theorem has a much more direct intuition though. We have groups $G$ and $H$ and a homomorphism $f:G\to H$. If this $f$ is not injective we can quotient out what is stopping it from being injective and lift it to $G/\ker f$ as one isomorphism onto its image. Is there some nice interpretation like that for the second isomorphism theorem? How should we really understand this theorem? REPLY [24 votes]: I assume you are having intuitive difficulties with the third statement of the theorem. Let me try and give an intuitive explanation. Every element of $SN$ is of the form $sn$ with $s \in S$ and $n \in N$. Now in $SN/N$ the $n$'s get 'killed' in the sense that in this group $\overline{sn}=\overline{s}$ for $s \in S$ and $n \in N$. However, we are not left with a group that is isomorphic with $S$, because if $s \in N$, that is if $s \in S \cap N$, then $s$ is also the identity in $SN/N$.
So, we are left with $S$, but with the remaining part of $N$ completely filtered out, that is $$\frac{SN}{N} \cong \frac{S}{S \cap N}$$<|endoftext|> TITLE: Proving that $\pi(2x) < 2 \pi(x) $ QUESTION [7 upvotes]: In our analytic number theory class we were given the following problem as homework: prove rigorously that for large $x$ the number of primes in $(1,x]$ exceeds that in $(x,2x]$. In class we proved the prime number theorem, and then proceeded to prove several results such as $\pi(x) = Li(x) +O(x^\theta \ln x)$ and the explicit formula for $\psi_1(x)$. This is clearly quite intuitive but I'm lost as to what I can use to prove the result. Any help is greatly appreciated. REPLY [6 votes]: It can be shown that for $x \ge 11$, $\pi(2x) < 2\pi(x)$. For $x=1$, it is not true since $2\pi(1) = 0$ but $\pi(2)=1$. For $x \in \{2,4,10\}$, $2\pi(x)=\pi(2x)$. Pierre Dusart showed that for $x \ge 60184$ $$\frac{x}{\ln{x}-1} < \pi(x) < \frac{x}{\ln {x}-1.1}$$ For the proof, see Theorem 6.9 here. For $a \ge 2$, $\ln ax - 1.1 = \ln a + \ln x - 1.1 \ge \ln 2 - 1.1 + \ln x > \ln x - 0.41 > \ln x - 1$ So, it follows that for $x \ge 60184, a \ge 2$: $$\pi(ax) < \frac{ax}{\log{ax}-1.1} < \frac{ax}{\log x - 1} < a\pi(x)$$ By brute force, it can be shown that in all cases where $x < 60184$, $\pi(2x) < 2\pi(x)$. The Java code I used to verify this was attached as an image.<|endoftext|> TITLE: Diffeomorphism group of product manifold QUESTION [6 upvotes]: For a given differentiable manifold $M$, the diffeomorphism group $\mathrm{Diff}\left( M \right)$ of $M$ is the group of all $C^\infty$ diffeomorphisms of $M$ to itself. Consider a product manifold of the form $M \times N$. My question is: is $\mathrm{Diff}\left( M \times N\right) \cong \mathrm{Diff}\left(M\right) \times \mathrm{Diff}\left( N\right)$? My (physicist's) intuition is no, for consider $\mathbb{R}^2 \cong \mathbb{R} \times \mathbb{R}$. It seems like $\mathrm{Diff}\left(\mathbb{R}\right) \times \mathrm{Diff}\left( \mathbb{R}\right)$ on $\mathbb{R}^2$ can give smooth coordinate transformations along two 'axes', but it can't give 'twists' etc., as could $\mathrm{Diff}\left(\mathbb{R}^2\right)$. Am I right? And, if so, is anyone able to help make my intuition more precise? Thanks for any help! REPLY [4 votes]: As the other comment mentions, the answer is no. For example, even if you want to consider only those diffeomorphisms which preserve one of the projections $X \times Y \to X$ we have the space of maps $X \to \text{Diff}(Y)$. (i.e. fiber transformations) The homotopy groups of $\text{Diff}(X\times Y)$ may not even be the same as those of $\text{Diff}(X)\times \text{Diff}(Y)$. See, for example, the Gluck twist: https://www.jstor.org/stable/1993581?seq=1#page_scan_tab_contents<|endoftext|> TITLE: Prove that the value of the constant $C$ must be $1$ QUESTION [5 upvotes]: After proving the prime number theorem in class, our professor directs us to a remark by Legendre that for large values of $x$, $\pi(x)$ is approximately equal to $$ \frac{x}{\log x - B}. $$ (This is from Ingham's Distribution of prime numbers, page 2). He then gave the following problem: prove that there is exactly one constant $B$ such that $$ \left|\pi(x)-\frac{x}{\ln x -B}\right| = O\left(\frac{x}{\ln^3 x}\right), $$ and that value is $1$. Where do I start? REPLY [2 votes]: Since this question followed a proof of the prime number theorem I would conjecture that de la Vallée-Poussin's proof was the one presented.
One result of this method is following estimate of the error in the estimate $\;\pi(x)\sim\operatorname{Li}(x)$ as presented in Edwards' excellent "Riemann's zeta function" : ($c\,$ is a positive constant) $$\tag{1}\left|\frac{\pi(x)-\operatorname{Li}(x)}{x/(\log\,x)^2}\right|\le \frac{\operatorname{Li}(x)}{x}\frac{(\log\,x)^2}{\exp(\sqrt{c\;\log\,x})}$$ Since the logarithmic integral $\operatorname{Li}(x)$ admits the asymptotic expansion : $$\tag{2}\operatorname{Li}(x)\sim \frac x{\log x}+\frac {1!\,x}{(\log x)^2}+\frac {2!\,x}{(\log x)^3}$$ (as we may easily deduce by repetitive integration by parts of $\;\displaystyle \operatorname{Li}(x):=\operatorname{Li}(2)+\int_2^x\frac{dt}{\log t})\;$ we deduce that we have not only $\;\displaystyle\pi(x)\sim \frac x{\log x}\;$ from the PNT but also : $$\tag{3}\pi(x)\sim \frac x{\log x}+\frac x{(\log x)^2},\quad x\to \infty$$ From $(1)$ the difference between $\pi(x)$ and $\operatorname{Li}(x)$ can indeed be neglected compared to $\dfrac x{(\log x)^2}$ (since the RHS of $(1)$ goes to $0$ as $\,x\to \infty$) but in fact also compared to $\dfrac x{(\log x)^3}$ and so on because $\;\log((\log x)^n)=n\,\log \log x$ "can't beat" $\sqrt{c\;\log\,x}$ for very large $x$. From $\;\displaystyle\pi(x)\sim \frac x{A\,\log(x)-B}\sim \frac x{A\,\log\,x}\;\frac{1}{\large{1-\frac B{A\,\log x}}}\sim \frac x{A\,\log\,x}+\frac {B\;x}{(A\,\log\,x)^2}\;$ we deduce that $\boxed{A=1}$ (the P.N.T.) and $\boxed{B=1}$ as wished. We obtained in fact the more general (see Edwards p.$87$ for some consequences by Littlewood) : $$\tag{4}\pi(x)\sim \frac x{\log x}+\frac {1!\;x}{(\log x)^2}+\frac {2!\;x}{(\log x)^3}+\cdots+\frac {(n-1)!\;x}{(\log x)^n},\quad x\to \infty$$<|endoftext|> TITLE: Eigenvalues of two related symmetric matrices QUESTION [8 upvotes]: What are the eigenvalues of following $n \times n$ matrices $A$ and $B$? $A=\begin{bmatrix} 0 & 0 & 1 & 1 & 1 & 1 & 1 & \cdots & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 & 0 & \cdots & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 & 0 & \cdots & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 & 1 & 0 & \cdots & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & \ddots & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots & 1 & 0 \end{bmatrix} $ Where, $a_{ij}=1$,if $i=j-1$ or $i=j+1$ for all $i=3,4,5...n-1$ $a_{13}=a_{14}=...=a_{1n-1}=1$, $a_{31}=a_{41}=...=a_{(n-1)1}=1$ $a_{23}=a_{n(n-1)}=1$ As it is symmertic matrix all of its eigenvalues are real. $B=\begin{bmatrix} 0 & 0 & 1 & 1 & 1 & 1 & 1 & \cdots & 1 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 & 0 & \cdots & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 & 0 & \cdots & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 & 1 & 0 & \cdots & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & \ddots & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots & 1 & 0 \end{bmatrix} $ Where, $b_{ij}=1$,if $i=j-1$ or $i=j+1$ for all $i=3,4,5...n-1$ $b_{13}=b_{14}=...=b_{1n}=1$, $b_{31}=b_{41}=...=a_{(n-1)1}=1$ $b_{23}=b_{n(n-1)}=b_{n1}=1$ As it is symmertic matrix all of its eigenvalues are real. REPLY [4 votes]: EDIT: I've fixed my work and computed a numerically verified closed-form solution. 
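For reference, a minimal numerical sketch of this kind of check, which builds the question's $A$ and $B$ directly from the stated index rules and computes their eigenvalues (the size $n=8$ is an arbitrary choice):

```python
import numpy as np

def build(n, join_one_to_n=False):
    """Matrix from the question: a path 2-3-...-n, with vertex 1
    joined to 3,...,n-1 (matrix A) or to 3,...,n (matrix B)."""
    M = np.zeros((n, n))
    for i in range(2, n):                 # 1-based i = 2,...,n-1: entries (i, i+1) and (i+1, i)
        M[i - 1, i] = M[i, i - 1] = 1.0
    last = n if join_one_to_n else n - 1
    for j in range(3, last + 1):          # 1-based j: entries (1, j) and (j, 1)
        M[0, j - 1] = M[j - 1, 0] = 1.0
    return M

n = 8
A, B = build(n), build(n, join_one_to_n=True)
print(np.round(np.linalg.eigvalsh(A), 4))   # real eigenvalues, since A is symmetric
print(np.round(np.linalg.eigvalsh(B), 4))
```

The output can be compared against the closed forms derived below.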
Define the $n \times n$ matrices $$ \renewcommand{\arraystretch}{1.38} A_n(x) = \begin{bmatrix} -x & 0 & 1 & \cdots & 1 & 0 \\ 0 & -x & 1 \\ 1 & 1 & -x & 1 \\ \vdots & & \ddots & \ddots & \ddots \\ 1 & & & 1 & -x & 1 \\ 0 & & & & 1 & -x \end{bmatrix}, \qquad n \geq 4, $$ and $$ \renewcommand{\arraystretch}{1.38} B_n(x) = \begin{bmatrix} -x & 0 & 1 & \cdots &\cdots & 1 \\ 0 & -x & 1 \\ 1 & 1 & -x & 1 & \\ \vdots & & \ddots & \ddots & \ddots \\ & & & 1 & -x & 1 \\ 1 & & & & 1 & -x \end{bmatrix}, \qquad n \geq 3. $$ We'll keep defining matrices in this fashion. We also always define the corresponding sequences $a_n(x) := \det A_n(x)$, $b_n(x) = \det B_n(x)$, etc. Your desired characteristic polynomials (up to sign) are $a_n(x)$ and $b_n(x)$, and your desired eigenvalues are the roots of those polynomials. Going forward, we'll often omit $x$-dependence, writing for example $a_n$ and $A_n$. The common idea is this: to evaluate $\det A_n(x)$, we do cofactor expansion and eventually are left with minors equal to matrices we recognize, e.g. $B_n(x)$. Hence we can set up some type of recursion for the $a_n$, $b_n$, etc., which we solve via generating functions. There's a ton of computations here (sorry for the long post, but I think it's cool that we can actually write down a closed-form answer), so I will only display the most relevant/tricky parts. These are: recognizing previously defined matrices when doing cofactor expansion, and being very careful about the $n$ for which our relations are well-defined. (The bold warning is where I made my previous mistake. Pay careful attention to side notes like "$n \geq 5$" and make sure you see why they are what they are.) If you're unfamiliar with the use of generating functions to solve recurrences, I'd be happy to show you one of them. I omitted all such computations to save space because they are more standard. Here goes. Recursion for $a_n$: by cofactor expansion up the right column of $A_n$, we have $$ \det A_n = - x \det B_{n-1} - \underbrace{\left| \begin{matrix} -x & 0 & 1 & \cdots & 1 & 1 \\ 0 & -x & 1 \\ 1 & 1 & -x & \ddots & \\ \vdots & & \ddots & \ddots & 1 \\ 1 & & & 1 & -x & 1 \\ & & & & & & 1 \end{matrix} \right|}_{(n-1)\times(n-1)}, $$ and evaluating the remaining determinant via cofactor expansion along the last column gives $$ a_n = -x b_{n-1} - b_{n-2}, \qquad n \geq 3. $$ Recursion for $b_n$: by cofactor expansion up the right column of $B_n$, \begin{align*} \det B_n &= - x \det B_{n-1} - \det C_{n-1} + (-1)^{n-1} \det D_{n-1} \\ \iff b_n &= - x b_{n-1} - c_{n-1} + (-1)^{n-1} d_{n-1}, \qquad n \geq 5 \text{ (for $C_{n-1}$ to be well-defined)}, \end{align*} where $$ \renewcommand{\arraystretch}{1.38} C_n(x) = \begin{bmatrix} -x & 0 & 1 & \cdots & \cdots & 1 \\ 0 & -x & 1 \\ 1 & 1 & -x & 1 \\ \vdots & & \ddots & \ddots & \ddots & \\ & & & 1 & - x & 1 \\ 1 & & & 0 & 0 & 1 \end{bmatrix}, \qquad n \geq 5, \qquad \renewcommand{\arraystretch}{1.38} D_n(x) = \begin{bmatrix} 0 & -x & 1 & \\ 1 & 1 & -x & 1 \\ \vdots & & \ddots & \ddots & \ddots \\ \vdots & & & 1 & -x & 1 \\ \vdots & & & & 1 & -x \\ 1 & & & & & 1 \end{bmatrix}, \qquad n \geq 3. $$ Recursion for $c_n$: by cofactor expansion along the bottom row, \begin{align*} \det C_n &= \det B_{n-1} + (-1)^{n-1} \det D_{n-1}^T \\ c_n &= b_{n-1} + (-1)^{n-1} d_{n-1}, \qquad n \geq 5 \text{ (for $C_{n}$ to be well-defined)} \end{align*} since determinants are invariant under matrix transpose. 
Recursion for $d_n$: by cofactor expansion along the bottom row, \begin{align*} \det D_n &= \det D_{n-1} + (-1)^{n-1} \det E_{n-1}, \\ d_n &= d_{n-1} + (-1)^{n-1} e_{n-1}, \qquad n \geq 4 \text{ (for $D_{n-1}$ to be well-defined)} \end{align*} where $$ \renewcommand{\arraystretch}{1.38} E_n(x) = \begin{bmatrix} -x & 1 & \\ 1 & -x & 1 \\ & \ddots & \ddots & \ddots \\ & & 1 & -x & 1 \\ & & & 1 & -x \\ \end{bmatrix}. $$ Recursion for $e_n$: finally our hard work has paid off with a simple tri-diagonal matrix. I won't go into the details for this (it's the same type of argument we've been preparing to make in the above), but the wikipedia page on tri-diagonal matrices will tell you that $$ e_n = - x e_{n-1} - e_{n-2}, \qquad (e_{-1}, e_0) = (0, 1) $$ which can be solved in closed form via generating functions (I used mathematica): $$ 2^{n+1} \sqrt{x^2-4} \cdot e_n(x) = x \left[\left(-\sqrt{x^2-4}-x\right)^n-\left(\sqrt{x^2-4}-x\right)^n\right]+\sqrt{x^2-4} \left[\left(-\sqrt{x^2-4}-x\right)^n+\left(\sqrt{x^2-4}-x\right)^n\right]. $$ Now with this closed form for $e_n(x)$ we start back-solving into our previously found recursions. Recursion for $d_n$: from $d_n = d_{n-1} + (-1)^{n-1} e_{n-1}$ for $n \geq 4$ and the initial condition $d_3 = x^2 + x - 1$ (computed by hand from $D_3$), we get $$ 2^{n+1} (x-2) \sqrt{x^2-4} \cdot d_n(x) = \left( x + \sqrt{x^2-4} \right)^n \left( x - 2 + \sqrt{x^2-4} \right) - \left( x - \sqrt{x^2-4} \right)^n \left( x - 2 - \sqrt{x^2-4} \right) + 2^{n+1} (1-x) \sqrt{x^2-4}. $$ Recursion for $b_n$: combining our recursions for $c_n$ and $b_n$, we find $$ b_n = - x b_{n-1} - b_{n-2} + (-1)^{n-1} \left[ d_{n-1} + d_{n-2} \right], \qquad n \geq 6, $$ and hence we need the initial conditions $(b_4, b_5) = (x^4-4 x^2-2 x+1,-x^5+6 x^3+4 x^2-3 x-2)$, computed by hand from $B_n$. This gives \begin{align*} 2^{n+1} (x-2)^2 \sqrt{x^2-4} \cdot b_n(x) = \left(-\sqrt{x^2-4}-x\right)^n \left[ (x-2) \left((x-2)^2-2 n\right) + \left(x^2 -2x + 2\right)\sqrt{x^2-4} \right] + \left(\sqrt{x^2-4}-x\right)^n \left[ - (x-2) \left((x-2)^2-2 n\right) + \left(x^2 -2 x + 2\right)\sqrt{x^2-4} \right] + (-1)^n 2^{n+2} (1-x) \sqrt{x^2-4}. \end{align*} This determines our recursion for $a_n$, and solves your problem. (You will have to compute the first few values of $a_n$ and $b_n$ by hand.) I did this using Mathematica and verified everything I've written above for $n \leq 50$.<|endoftext|> TITLE: Are $(C[0,1],d_\infty)$ and $(C[0,1],d_1)$ homeomorphic? QUESTION [5 upvotes]: Two metric spaces are said to be homeomorphic if there is a bijection f between them such that $f$ and $f^{-1}$ are both continuous. Consider $C[0,1]$ with metrics: $d_\infty (f,g)=\max_{x\in [0,1]}|f(x)-g(x)|$ $d_1(f,g)=\int_0^1|f(x)-g(x)|dx$ We already know that the identity map $(C[0,1],d_1)→(C[0,1],d_∞)$ is not continuous (Prove that the identity map $(C[0,1],d_1) \rightarrow (C[0,1],d_\infty)$ is not continuous). Does this imply $(C[0,1],d_∞)$ and $(C[0,1],d_1)$ are not homeomorphic? Or could you find a bijection which is continuous in both direction? Any help is appreciated. REPLY [2 votes]: No, they are not. Confirming a conjecture of Banach, Victor Klee proved that if there is a complete metric on a normed (or, more generally, metrizable topological vector) space inducing the norm (vector space) topology then the norm (uniformity induced by the vector space topology) is complete. This can be seen, e.g., in Köthe's book topological Vector Spaces I, page 165. 
Clearly, $d_1$ is not complete.<|endoftext|> TITLE: If X is independent to Y and Z, does it imply that X is independent to YZ ? QUESTION [5 upvotes]: After years of mathematics, I am struggling with this simple question. If we have 3 r.v. $X,Y,Z$ and we have $X$ independent to $Y$ and to $Z$, then do we have that $X$ is also independent to $YZ$ ? At first sight, I thought that if $X$ is independent to $Y$ and $Z$, it is also independent to the sigma-algebra generated by $Y$ and to $Z$ and hence $YZ$ but the example below made me confused : https://en.wikipedia.org/wiki/Pairwise_independence If someone can make this clear... Thank you very much in advance ! REPLY [8 votes]: Short answer: No, $X \perp Y , X \perp Z$ doesn't imply $X \perp YZ$ Let's say you do an experiment where you choose a number randomly from: {1,2,3,4,6,7,8,9} Let X = 1 if: (Your chosen number is even AND less than five) OR (your chosen number is odd AND greater than 5) and 0 otherwise Let Y = 1 if your chosen number is even, 0 otherwise Let Z = 1 if your chosen number is greater than 5, 0 otherwise Now we know X = 1 with probability - 0.5 (2,4,7,9) and 0 with probability 0.5 (1,3,6,8) If we know Y = 1, X is still 1 with p = 0.5 (2,4) and 0 with p = 0.5 (6,8) If we know Y = 0, X is still 1 with p = 0.5 (7,9) and 0 with p = 0.5 (1,3) If we know Z = 1, X is still 1 with p = 0.5 (7,9) and 0 with p = 0.5 (6,8) If we know Z = 0, X is still 1 with p = 0.5 (2,4) and 0 with p = 0.5 (1,3) So X is independent of Y and X is independent of Z. But knowing if a number is even AND knowing if it's greater than 5 (Y & Z), makes us know X with certainty. e.g. Y = 1, Z = 1, then YZ = 1, X has to be 0 with probability 1 (as X is 0 if the number is an even number > 5) $P(X = 0 | Y ) = P(X = 0) = 0.5$ (same for X = 1) $P(X = 0 | Z ) = P(X = 0) = 0.5$ (same for X = 1) But $P(X = 0 | YZ = 1) = 1 \neq P(X=0) $ (similar for X = 1)<|endoftext|> TITLE: How to solve a PDE with a Dirac Delta and what does the PDE means? QUESTION [6 upvotes]: If I have a PDE $ \Delta u= \delta(0)$ on some bounded domain in $\mathbb{R}^2$ with smooth boundary with some nice enough boundary condition. What is the solution of the PDE? And what is the PDE mean in term of integral? Because I know $\delta(0)$ is not a function. Thank you REPLY [2 votes]: While the Dirac $\delta$ is not a classical pointwise-valued function, it can be manipulated legitimately as though it were, in many situations, especially linear differential questions. As a distribution, it can be differentiated. Sometimes derivatives of classical, pointwise-valued functions really are distributions. For example, the second derivative of $|x|$ on the real line really is $2\delta$... not just in some fantasy sense, but in the sense that this conclusion behaves correctly with respect to integration by parts and such. In two or more dimensions, the "Green's function" is harder to understand, for more than one reason, in contrast to the derivation of it for second-order ODEs on an interval (Sturm-Liouville problems). It is not obvious, for example, that in two dimensions the Laplacian applied to $\log |x|$ is a constant multiple of $\delta$ (although this has been known for a long time).<|endoftext|> TITLE: Asymptotic estimate for the sum $\sum_{n\leq x} 2^{\omega(n)}$ QUESTION [5 upvotes]: How to find an estimate for the sum $\sum_{n\leq x} 2^{\omega(n)}$, where $\omega(n)$ is the number of distinct prime factors of $n$. 
Since $2^{\omega(n)}$ is multiplicative, computing its value at prime power, we see that $2^{\omega(n)}=\sum_{d\mid n}\mu^2(d)$. Then \begin{align} \sum_{n\leq x}2^{\omega(n)}&=\sum_{n\leq x}\sum_{d\mid n}\mu^2(d)=\sum_{d\leq x}\mu^2(d)\sum_{d\mid n} 1\\ &=\sum_{d\leq x}\mu^2(d)\left\lfloor \frac{x}{d}\right\rfloor=\sum_{d\leq x}\mu^2(d)(\frac{x}{d}+O(1))\\ &=x\sum_{d\leq x}\frac{\mu^2(d)}{d}+O(\sum_{d\leq x}\mu^2(d)) \end{align} I get stuck here, the series $\sum_{n=1}^\infty\frac{\mu^2(n)}{n}$ is not convergent, I don't know how to estimate the first term. REPLY [6 votes]: It is not difficult to see that $$\sum_{d\leq x}\mu^{2}\left(d\right)=x\frac{6}{\pi^{2}}+O\left(\sqrt{x}\right)\tag{1}$$ (for a proof see here) so by Abel's summation we have $$\sum_{d\leq x}\frac{\mu^{2}\left(d\right)}{d}=\frac{\sum_{d\leq x}\mu^{2}\left(d\right)}{x}+\int_{1}^{x}\frac{\sum_{d\leq t}\mu^{2}\left(d\right)}{t^{2}}dt $$ hence using $(1)$ we have $$ \begin{align*} \sum_{d\leq x}\frac{\mu^{2}\left(d\right)}{d}= & \frac{6}{\pi^{2}}+O\left(\frac{1}{\sqrt{x}}\right)+\frac{6}{\pi^{2}}\int_{1}^{x}\frac{1}{t}dt+O\left(\int_{1}^{x}\frac{1}{t^{3/2}}dt\right)\\ = & \frac{6}{\pi^{2}}\log\left(x\right)+O\left(1\right) \end{align*} $$ hence $$\sum_{n\leq x}2^{\omega\left(n\right)}=\frac{6}{\pi^{2}}x\log\left(x\right)+O\left(x\right).$$<|endoftext|> TITLE: Nested root integral $\int_0^1 \frac{dx}{\sqrt{x+\sqrt{x+\sqrt{x}}}}$ QUESTION [34 upvotes]: The bigger goal is to find the antiderivative: $$\int \frac{dx}{\sqrt{x+\sqrt{x+\sqrt{x}}}}~~~~~(*)$$ But I can settle for the definite integral in $(0,1)$. Motivation: $$\int \frac{dx}{\sqrt{x+\sqrt{x}}}=2\sqrt{x+\sqrt{x}}-\ln (1+2\sqrt{x}+2\sqrt{x+\sqrt{x}})+C$$ This integral is easy to solve by using the following substitution: $$x=u^4$$ $$\sqrt{x+\sqrt{x}}=u\sqrt{1+u^2}$$ Now consider the integral $(*)$. If we take $x=u^8$, we get the integral: $$(*)=\int \frac{8u^6du}{\sqrt{u^6+\sqrt{1+u^4}}}$$ Still seems bad, and Mathematica can't solve it (or the definite integral either). Another way I tried is by the following substitutions: $$\sqrt{x+\sqrt{x}}=\frac{y}{2}$$ $$(*)=\int \frac{y(\sqrt{1+y^2}-1)dy}{\sqrt{1+y^2} \sqrt{2+2y+y^2-2\sqrt{1+y^2}}}$$ $$y=\sinh t$$ $$(*)=\int \frac{\sinh t(\cosh t-1)dt}{\sqrt{(\cosh t-1)^2+2\sinh t}}$$ Believe it or not, Mathematica actually solves this integral, but the resulting expression is so long and complicated, it seems useless (and by long I mean three times the size of my screen). What do you think, is there a reasonable closed form solution for this integral? Or at least, the definite integral: $$\int_0^1 \frac{dx}{\sqrt{x+\sqrt{x+\sqrt{x}}}}=\int_0^{\sinh^{-1} \sqrt{8}} \frac{\sinh t(\cosh t-1)dt}{\sqrt{(\cosh t-1)^2+2\sinh t}}$$ Edit: $$\int \frac{\sinh t(\cosh t-1)dt}{\sqrt{(\cosh t-1)^2+2\sinh t}}=\sqrt{(\cosh t-1)^2+2\sinh t}-\int \frac{\cosh t~dt}{\sqrt{(\cosh t-1)^2+2\sinh t}}$$ Now let's make another substitution: $$e^t=v$$ $$\int \frac{\cosh t~dt}{\sqrt{(\cosh t-1)^2+2\sinh t}}=\int \frac{(v^2+1)~dv}{v\sqrt{v-1}\sqrt{v^3+v^2+7v-1}}$$ Now I see the connection to elliptic integrals (which Mathematica gives as part of the answer). We just probably need to factor: $$v^3+v^2+7v-1$$ The limits $x \in (0,1)$ will become $v \in (1,3+\sqrt{8})$. 
We can also make another change of variable, leaving only one radical and getting somewhat better behaved function (finite everywhere on the real line): $$z=\sqrt{v-1}$$ $$\int \frac{(v^2+1)~dv}{v\sqrt{v-1}\sqrt{v^3+v^2+7v-1}}=2\int \frac{(z^4+2z^2+2)~dz}{(z^2+1)\sqrt{z^6+4z^4+12z^2+8}}=$$ $$=2\int \frac{dz}{(z^2+1)\sqrt{z^6+4z^4+12z^2+8}}+2\int \frac{(z^2+1)~dz}{\sqrt{z^6+4z^4+12z^2+8}}$$ REPLY [18 votes]: $$\color{brown}{\textbf{Simple example ($\mathrm{k=2}$).}}$$ Let us consider the integral $$N_2(x) = \int \dfrac{\mathrm dx}{\sqrt{x+\sqrt x}}.$$ Substitution $$y = \sqrt{1+\frac1{\sqrt x}},\quad x=\dfrac1{(y^2-1)^2},\quad \mathrm dx = -\dfrac {4y}{(y^2-1)^3}\,\mathrm dy,\quad x>0,\quad y>1, \tag{i1}$$ gives $$\sqrt{x+\sqrt x} = \sqrt{\dfrac1{(y^2-1)^2}+\dfrac1{y^2-1}} = \dfrac y{y^2-1},\tag{i2}$$ so $$N_2(x)=Q_2\left(\sqrt{1+\frac1{\sqrt x}}\right),\tag{i3}$$ where $$Q_2(y) = -4\int\dfrac{\mathrm dy}{(y^2-1)^2} = \int\dfrac{-4+4y^2}{(y^2-1)^2}\,\mathrm dy - \int \dfrac{4y^2}{(y^2-1)^2}\mathrm dy$$ $$ = 4\int\dfrac{\mathrm dy}{y^2-1} + 2\int y\,\mathrm d\left(\dfrac1{y^2-1}\right) = \dfrac{2y}{y^2-1}+ 2\int\dfrac{\mathrm dy}{y^2-1},$$ $$Q_2(y) = \dfrac {2y}{y^2-1} + \log\left(\frac{y-1}{y+1}\right) + \mathrm{const}.\tag{i4}$$ Taking in account $(\mathrm{i2})$ and $(\mathrm{i4}),$ solution $(\mathrm{i3})$ can be written as $$N_2(x) = 2\sqrt{x + \sqrt x} + \ln\dfrac{\sqrt{1+\frac1{\sqrt x}}-1}{\sqrt{1+\frac1{\sqrt x}}+1}+\mathrm{const} = 2\sqrt{x + \sqrt x} + \ln\dfrac{\sqrt{1+\sqrt x}-\sqrt[4]x}{\sqrt{1+\sqrt x}+\sqrt[4]x}+\mathrm{const},$$ or, eliminating the numerator, $$\boxed{N_2(x) = 2\sqrt{x + \sqrt x} - \ln\left(1 + 2\sqrt x + 2\sqrt{x+\sqrt x}\right) + \mathrm{const}}.$$ $$\color{brown}{\textbf{OP antiderivative ($\mathrm{k=3}$).}}$$ $\color{brown}{\underline{\textrm{Transformation to single radical.}}}$ Substitution. Taking in account $(\mathrm i1)-(\mathrm i4),$ can be written $$N_3(x) = \int \dfrac{\mathrm dx}{\sqrt{x+\sqrt {x+\sqrt x}}} = Q_3\left(\sqrt{1+\frac1{\sqrt x}}\right),\tag{1}$$ where $$\begin{align} &\sqrt{x+\sqrt{x+\sqrt x}} = \sqrt{\dfrac1{(y^2-1)^2}+\dfrac y{y^2-1}},\\[4pt] &Q_3(y) = \int\dfrac{-\dfrac {4y}{(y^2-1)^3}\,\mathrm dy}{\sqrt{\dfrac1{(y^2-1)^2}+\dfrac y{y^2-1}}} = \int\dfrac{-\dfrac {4y}{(y^2-1)^3}-\dfrac{y^2+1}{(y^2-1)^2}}{\sqrt{\dfrac1{(y^2-1)^2}+\dfrac y{y^2-1}}}\,\mathrm dy + \int\dfrac{\dfrac{y^2+1}{(y^2-1)^2}\,\mathrm dy }{\sqrt{\dfrac1{(y^2-1)^2}+\dfrac y{y^2-1}}}\\[4pt] & = \sqrt{\dfrac1{(y^2-1)^2}+\dfrac y{y^2-1}} + \int \dfrac{y^2+1}{y^2-1} \dfrac{\mathrm dy}{\sqrt{y^3-y+1}}. \end{align}$$ Then $$N_3(x) = 2\sqrt{x+\sqrt {x+\sqrt x}} +Q_{30}\left(\sqrt{1+\frac1{\sqrt x}}\right),\tag2$$ where $$Q_{30}(y) = \int \dfrac{y^2+1}{y^2-1} \dfrac{\mathrm dy}{\sqrt{y^3-y+1}}.\tag3$$ In this way, substitution $(\mathrm i1)$ has transformed given nested root integral to elliptic type. Algebraic transformations. Denote $$\begin{cases} P_3(y) = y^3-y+1\\[4pt] r = \sqrt[3]{\dfrac{9+\sqrt{69}}{18}} + \sqrt[3]{\dfrac{9-\sqrt{69}}{18}}\approx 1.32471\,79572\,44746\\[4pt] \lambda = \sqrt{3r^2-1} = \dfrac12\sqrt{4+\sqrt[3]{800+96\sqrt69}+\sqrt[3]{800-96\sqrt69}} \approx 2.06509\,87866\,78274\\[4pt] w = \dfrac12+\dfrac34\dfrac r\lambda\approx 0.98110\,94143\,9836556,\\[4pt] p = r+\lambda \approx 3.38981\,67439\,23020. 
\end{cases}\tag4$$ Then \begin{align} &P_3(-r)=-\left(\dfrac{9+\sqrt{69}}{18}+\dfrac{9-\sqrt{69}}{18}\right) -3\left(\sqrt[3]{\dfrac{9+\sqrt{69}}{18}} \cdot \sqrt[3]{\dfrac{9-\sqrt{69}}{18}}\right)\\ &\times\left(\sqrt[3]{\dfrac{9+\sqrt{69}}{18}} + \sqrt[3]{\dfrac{9-\sqrt{69}}{18}}\right) +\sqrt[3]{\dfrac{9+\sqrt{69}}{18}} + \sqrt[3]{\dfrac{9-\sqrt{69}}{18}}+1\\ &=-3\sqrt[3]{\dfrac{12}{324}}\left(\sqrt[3]{\dfrac{9+\sqrt{69}}{18}} + \sqrt[3]{\dfrac{9-\sqrt{69}}{18}}\right)+\sqrt[3]{\dfrac{9+\sqrt{69}}{18}} +\sqrt[3]{\dfrac{9-\sqrt{69}}{18}}=0, \end{align} i.e. $(-r)$ is the root of $P_3(y),$ $$r^3=r+1\tag5.$$ According to the Bezou theorem, $P_3(y)$ allows the decomposition in the form of $$P_3(y) = y^3-y+1 = (y+r)(y^2-ry+r^2-1) = (y+r)\big((y+r)^2-3r(y+r)+3r^2-1\big),$$ or $$P_3(y) = (y+r)\big((y+r)^2-3r(y+r)+\lambda^2).\tag6$$ On the other hand, for any constant $p$ can be obtained the fractional decomposition of $$R_3(y) = \dfrac{y^2+1}{y^2-1} = A + B\dfrac{y+p}{y-1} + C\dfrac{y+p}{y+1},$$ where $$B = \lim\limits_{y\to1}\dfrac{y-1}{y+p}R_3(y) = \dfrac1{p+1},$$ $$C = \lim\limits_{y\to-1}\dfrac{y+1}{y+p}R_3(y) = -\dfrac1{p-1},$$ $$A = \lim\limits_{y\to\infty}\left(R_3(y)-B\dfrac{y+p}{y-1} - C\dfrac{y+p}{y+1}\right) = \dfrac{p^2+1}{p^2-1}.$$ Therefore, $$\begin{align} &Q_{30}(y) = \dfrac{p^2+1}{p^2-1}I_0(y) + \dfrac1{p+1}I_1(y,p+1) - \dfrac1{p-1} I_1(y,p-1),\tag7\\[4pt] &\text{where}\\[4pt] &\begin{cases} I_0(y) = \int\dfrac{\mathrm dy}{\sqrt{y^3-y+1}}\\ I_1(y,q) = \int\dfrac{y+p}{y+p-q}\dfrac{\mathrm dy}{\sqrt{y^3-y+1}}\\ \end{cases}\tag8 \end{align}$$ $\color{brown}{\underline{\textrm{The first elliptic integral.}}}$ Substitution $(\mathrm i1)$ transforms the interval $x\in(0,1)$ to the interval $y\in(\infty,\sqrt2).$ Should be found the real-valued antiderivative for this interval. Substitution $$y=g(t) = -r+\lambda\tan^2t,\quad \mathrm dy = 2\lambda\dfrac{\tan t}{\cos^2t}\,\mathrm dt, \quad t=\arctan\sqrt{\dfrac{y+r}\lambda}\tag{f1}$$ for the presentation $(6)$ gives \begin{align} &I_0(g(t)) = \int\dfrac{2\lambda\dfrac{\tan t}{\cos^2t}}{\sqrt{\lambda\tan^2t\left(\lambda^2\tan^4t -3r\lambda\tan^2t+\lambda^2\right)}}\mathrm dt\\[4pt] &=\int\dfrac{2\mathrm dt}{\sqrt{\lambda(\cos^2t+\sin^2t)^2-(2\lambda+3r)\sin^2t\cos^2t}}\\[4pt] &=\dfrac1{\sqrt \lambda}\int \dfrac{\mathrm d(2t)}{\sqrt{1-\dfrac{2\lambda+3r}{4\lambda}\sin^2 2t}} = \dfrac1{\sqrt \lambda}\mathrm F(2t\ |\ w)+\mathrm{const}, \end{align} where $$\quad F(\varphi\ |\ w) = \int\limits_0^\varphi \dfrac{\mathrm d\varphi}{\sqrt{1-w\sin^2\varphi}}\tag{f3}$$ is the elliptic integral of the first kind. Assuming the domain of $x\in(0,\infty)$ and taking in account $(\mathrm f1),$ one can get $$\left(y\in(1,\infty)\right)\wedge(y+r\in(\lambda,\infty)) \Rightarrow t\in\left(\dfrac\pi4,\dfrac\pi2\right),$$ $$\sin^2 2t = \dfrac{4\tan^2 t}{(1+\tan^2 t)^2},$$ then $$\sin^2 2t = \sin^2(\pi-2t) = \dfrac{4\lambda(y+r)}{(y+r+\lambda)^2},\quad \cos 2t = -\cos(\pi-2t) = \dfrac{\lambda - y - r}{\lambda+y+r}\tag{f4}$$ $$\pi - 2t = 2\arctan\sqrt{\dfrac\lambda{y+r}} = \arccos\dfrac{y+r-\lambda}{y+r+\lambda}.\tag{f5}$$ At the same time, $$\mathrm F(\pi-\varphi\ |\ w) = \mathrm F\left(\dfrac\pi2\ \bigg|\ w\right) - \mathrm F(\varphi\ |\ w).\tag{f6}$$ Therefore, $$\color{green}{\mathbf{\dfrac{p^2+1}{p^2-1}I_0(y) = -\dfrac1{\sqrt \lambda}\dfrac{(\lambda+r)^2+1}{(\lambda+r)^2-1} F\left(\arccos\dfrac{y+r-\lambda}{y+r+\lambda}\ \Bigg|\ w\right) + const. 
\tag{f7}}}$$ Note that the choice $\mathrm{const}=0$ provides the condition $I_0(\infty)= 0.$ Substitution $(\mathrm f1)$ effectively transforms the expression under the radical and provides simple form of solution for $I_0(y)$. Effective integration of $I_1(y,q)$ looks more serious problem. Decomposition in the form $(7),(8)$ for choosen value $(4)$ of parameter $p$ allows to simplify integration. $\color{brown}{\underline{\textrm{The second elliptic integral.}}}$ From $(4),(8)$ should $$I_1(y,q) = \int\dfrac{y+r+\lambda}{y+r+\lambda-q}\dfrac{\mathrm dy}{\sqrt{y^3-y+1}}.\tag{s1}$$ Applying substitution $(\mathrm f1)$ and taking in account $(4),$ one can get \begin{align} &I_1(g(t),q)=\int \dfrac{\lambda(1+\tan^2 t)}{\lambda(1+\tan^2 t) -q}\cdot \dfrac{2\mathrm dt}{\sqrt \lambda\sqrt{1-w\sin^2 2t}} =\int \dfrac1{\lambda-q\cos^2 t}\cdot \dfrac{\mathrm 2\sqrt \lambda\,\mathrm dt}{\sqrt{1-w\sin^2 2t}}\\[4pt] &= \int \dfrac1{2\lambda-q-q\cos 2t}\cdot \dfrac{4\sqrt \lambda\,\mathrm dt}{\sqrt{1-w\sin^2 2t}} = \int \dfrac{2\lambda-q+q\cos 2t}{(2\lambda-q)^2-q^2\cos^2 2t}\cdot \dfrac{4\sqrt \lambda\,\mathrm dt}{\sqrt{1-w\sin^2 2t}}\\[4pt] &= -\int \dfrac{2\lambda-q+q\cos 2t}{4\lambda(q-\lambda)-q^2\sin^2 2t}\cdot \dfrac{4\sqrt \lambda\,\mathrm dt}{\sqrt{1-w\sin^2 2t}}\\[4pt] &= -\int \dfrac{2\lambda-q-q\cos(\pi-2t)}{4\lambda(q-\lambda)-q^2\sin^2(\pi-2t)}\cdot \dfrac{4\sqrt \lambda\,\mathrm dt}{\sqrt{1-w\sin^2(\pi-2t)}}\\[4pt] &= \int \dfrac{\dfrac{2\lambda-q}{2\sqrt \lambda(q-\lambda)} -\dfrac{q\cos(\pi-2t)}{2\sqrt \lambda(q-\lambda)}} {1-\dfrac{q^2}{4\lambda(q-\lambda)}\sin^2(\pi-2t)}\cdot \dfrac{\mathrm d(\pi-2t)}{\sqrt{1-w\sin^2(\pi-2t)}}, \end{align} $$\begin{align} &\mathbf{\dfrac1q I_1(g(t),q)= A(q)\Pi\left(u(q),\pi-2t\ |\ w \right) -B(q)P\left(u(q),\pi-2t\ |\ w \right) + const}, \end{align}\tag{s2}$$ where $$\begin{cases} A(\lambda+r\pm1)=\dfrac1{2\sqrt\lambda}\dfrac{1\pm(r-\lambda)}{(r\pm1)(\lambda+r\pm1)} \approx \dbinom{0.00885\,15573\,11770}{-0.78032\,04227\,56824}\\[4pt] B(\lambda+r\pm1) = \dfrac1{2\sqrt\lambda(r\pm1)}\approx\dbinom{0.14966\,81250\,93454}{1.07150\,27311\,21398}\\[4pt] u(\lambda+r\pm1)=\dfrac{(\lambda+r\pm1)^2}{4\lambda(r\pm1)}\approx\dbinom{1.00350\,99620\,67355}{2.12922\,75171\,98979}, \end{cases}\tag{s3}$$ $$\Pi(u,\varphi\ |\ w) = \int\limits_0^\varphi \dfrac{\mathrm d\varphi}{(1-u\sin^2\varphi)\sqrt{1-w\sin^2\varphi}}\tag{s4}$$ is the elliptic integral of the third kind, and $$\mathrm P(u,\varphi\ |\ w) = \int\limits_0^\varphi \dfrac{\cos\varphi\,\mathrm d\varphi}{(1-u\sin^2\varphi)\sqrt{1-w\sin^2\varphi}} =\dfrac1{2\sqrt{u-w}}\ln\left|\dfrac{\sqrt{u-w}+\sqrt{\csc^2\varphi-w}} {\sqrt{u-w}-\sqrt{\csc^2\varphi-w}}\right|\tag{s5}$$ (see also Wolfram Alpha test) can be considered as "pseudo-elliptic" integral, linked with known rational integral in the form of $$\int \dfrac{\mathrm ds}{(as^2+b)\sqrt{fs^2+g}}= {\small \begin{cases} \dfrac 1{\sqrt{b(ag-bf)}}\arctan \dfrac{s\sqrt{ag-bf}}{\sqrt{b(fs^2+g)}},\quad\text{if}\quad ag-bf > 0\\[4pt] \dfrac1{2\sqrt{b(bf-ag)}}\ln \left|\dfrac{\sqrt{b(fs^2+g)}+s\sqrt{bf-ag}}{\sqrt{b(fs^2+g)}-s\sqrt{bf-ag}}\right| \quad\text{if}\quad ag-bf < 0. \end{cases}}\tag{s6} $$ $\color{brown}{\underline{\textrm{Transformations of the second integral solution.}}}$ Solving of the convergency problem. From $(4),(\mathrm s3)$ should $w < 1 < u(q),$ and the first inequality provides real-valued final expression of $(\mathrm s3)$. On the other hand, the second inequality presents hyperbolic case of $\Pi-$function. 
This leads to the convergency problem, because the denominator equals to zero if $u(q)\sin^2(\pi-2t) = 1.$ Taking in account $(\mathrm f4)$ and $(\mathrm s3),$ one can get \begin{align} &u(p+1)\sin^2(\pi-2t) = \dfrac{y+r}{(\lambda+y+r)^2}\left(\dfrac{1+r}{(\lambda+1+r)^2}\right)^{-1}\in(0,1),\\[4pt] &u(p-1)\sin^2(\pi-2t) =\dfrac{y+r}{(\lambda+y+r)^2}\left(\dfrac{r-1}{(\lambda+r-1)^2}\right)^{-1}\in(0,3.897), \end{align} $$u(p+1)\sin^2(\pi-2t)\bigg|_{y=1}=1,\tag{t1}$$ $$u(p-1)\sin^2(\pi-2t)\bigg|_{y=\dfrac{2r^2+r-1}{r+1}\approx 11.8} = 1.\tag{t2}$$ To avoid this problem, the integral $(\mathrm s2)$ should be transformed. Let $$u(q)v(q)=w,\quad b=\sqrt{(u(q)-1)(1-v(q))},\tag{t3}$$ then $$\dfrac1{1-u\sin^2 \varphi}+\dfrac1{1-v\sin^2\varphi} = 1+\dfrac{1-w\sin^4\varphi}{1-(u+v)\sin^2\varphi+w\sin^4\varphi},$$ \begin{align} &\dfrac {\mathrm d}{\mathrm d\varphi}\left(\ln\left(\sqrt{1-w\sin^2 \varphi}+b\tan\varphi\right) - \ln\left(\sqrt{1-w\sin^2\varphi}-b\tan\varphi\right)\right)\\[4pt] &=\dfrac{2b(w\sin^2\varphi\tan^2\varphi-\sec^2\varphi)}{\sqrt{1-w\sin^2\varphi}(b^2\tan^2\varphi+w\sin^2\varphi-1)}\\[4pt] &=\dfrac{2b(1-w\sin^4\varphi)}{\sqrt{1-w\sin^2\varphi} (-b^2\sin^2\varphi+(1-w\sin^2\varphi)(1-\sin^2\varphi))}\\[4pt] &=\dfrac{2b(1-w\sin^4\varphi)} {\sqrt{1-w\sin^2\varphi}(1-(b^2+w+1)\sin^2\varphi+w\sin^4\varphi)}\\[4pt] &=\dfrac{2b(1-w\sin^4\varphi)} {\sqrt{1-w\sin^2\varphi}(1-(u+v)\sin^2\varphi+w\sin^4\varphi)}. \end{align} So $$\mathbf{\mathrm \Pi(u(q),\varphi,w) = \dfrac1{2b}\ln \dfrac{\sqrt{\csc^2\varphi-w}\cos\varphi+b}{\sqrt{\csc^2\varphi-w}\cos\varphi-b} +\mathrm F(\varphi,w) - \mathrm \Pi(v(q),\varphi,w),\tag{t4}}$$ wherein $$v(\lambda+r\pm1) = \dfrac{(3r+2\lambda)(r\pm1)}{(\lambda+r\pm1)^2}\approx\dbinom{0.97767\,78023\,97853}{0.46078\,18593\,70775}.\tag{t5}$$ Simplifications. Firstly, the factor near $\mathrm F(\varphi,w)$ is \begin{align} &-\dfrac1{\sqrt\lambda}\dfrac{p^2+1}{p^2-1}+A(p+1) - A(p-1)\\[4pt] &= -\dfrac1{\sqrt \lambda((\lambda+r)^2-1)}\bigg((\lambda+r)^2+1+\dfrac{(\lambda+r-1)(r-\lambda+1)}{2(r+1)}+\dfrac{(\lambda+r+1)(\lambda-r+1)}{2(r-1)}\bigg)\\[4pt] &= -\dfrac1{\sqrt \lambda((\lambda+r)^2-1)}\bigg((\lambda+r)^2+1+\dfrac{(r^2-(\lambda-1)^2)(r-1)+((\lambda+1)^2-r^2)(r+1)}{2(r^2-1)}\bigg)\\[4pt] &= -\dfrac1{\sqrt \lambda((\lambda+r)^2-1)}\bigg((\lambda+r)^2+1 +\dfrac{-r^2+2\lambda r+\lambda^2+1}{r^2-1}\bigg)\\[4pt] &= -\dfrac1{\sqrt\lambda((\lambda+r)^2-1)}\bigg((\lambda+r)^2-1 +\dfrac{r^2+2\lambda r+\lambda^2-1}{r^2-1}\bigg)\\[4pt] &= -\dfrac1{\sqrt\lambda((\lambda+r)^2-1)}((\lambda+r)^2-1+r((r+\lambda)^2-1)) = -\dfrac{r+1}{\sqrt\lambda}. 
\end{align} At the second, applying formulas $(\mathrm f4),(\mathrm s3),(\mathrm t1),(\mathrm t5),$ one can get: $$\begin{cases} u(p\pm1)-1 = \dfrac{(r\pm1+\lambda)^2}{4\lambda(r\pm1)}-1 = \dfrac{(r\pm1-\lambda)^2}{4\lambda(r\pm1)},\\[4pt] 1-v(p\pm1) = \dfrac{u(p\pm1)-w}{u(p\pm1)} =\dfrac1{(r\pm1)(\lambda+r\pm1)^2},\\[4pt] b = \sqrt{(u-1)(1-v)} = \dfrac1{2\sqrt\lambda(r\pm1)}\dfrac{1\pm(r-\lambda)} {\lambda+r\pm1} = A(q), \end{cases}\tag{t6}$$ so the factor near the logarithm in formulas $(\mathrm t4)$ is $\frac12.$ Besides, \begin{align} &\csc^2(\pi-2t) - w = \dfrac{(y+r+\lambda)^2}{4\lambda(y+r)}-\dfrac12 - \dfrac{3r}{4\lambda} = \dfrac{(y+r)^2 + \lambda^2-3r(y+r)}{4\lambda(y+r)} = \dfrac{y^3-y+1}{4\lambda(y+r)^2}, \end{align} then $$2\sqrt\lambda\sqrt{\csc^2\varphi-w}\cos\varphi = \dfrac{\sqrt{y^3-y+1}}{y+r}\dfrac{y+r-\lambda}{y+r+\lambda} = \sqrt{y^3-y+1}\left(\dfrac2{y+r+\lambda}-\dfrac1{y+r}\right).\tag{t7}$$ At last, \begin{align} &u(p\pm1)-w = \dfrac{(\lambda+r\pm1)^2}{4\lambda(r\pm1)}-\dfrac12-\dfrac{3r}{4\lambda}\\[4pt] &= \dfrac{3r^2-1+2\lambda(r\pm1)+(r\pm1)^2-2\lambda(r\pm1)-3r(r\pm1)}{4\lambda(r\pm1)} =\dfrac{r^2\mp r}{4\lambda(r\pm1)} =\dfrac{r^3-r}{4\lambda(r\pm1)^2},\\[4pt] \end{align} $$\begin{cases} u(p\pm1)-w = \dfrac1{4\lambda(r\pm1)^2}\\[4pt] \sqrt{\dfrac{\csc^2(\pi-2t) - w}{u(p\pm1)-w}} = \dfrac{y\pm1}{y+r}\sqrt{y^2-y+1}\\[4pt] \dfrac{B(p\pm1)}{2\sqrt{u(p\pm1)-w}} =\dfrac12\dfrac{2\sqrt \lambda(r\pm1)}{2\sqrt \lambda(r\pm1)} = \dfrac12,\\[4pt] \end{cases}\tag{t8}$$ so $$\mathbf{B(p\pm1)\mathrm P\left(u(p\pm1),\arccos\dfrac{y+r-\lambda}{y+r+\lambda}\ \Big|\ w \right) = \dfrac12\ln\left|\ \dfrac {y+r + (r\pm1)\sqrt{y^3-y+1\phantom{\Big|}}} {y+r - (r\pm1)\sqrt{y^3-y+1\phantom{\Big|}}}\ \right|. \tag{t9}}$$ Closed form of the antiderivative. 
Finally, the closed form of antiderivative can be presented by the formula $(2),$ where $$\color{green}{\boxed{\mathbf{\begin{align} &Q_{30}(\infty)-Q_{30}(y) = \dfrac{r+1}{\sqrt\lambda} \mathrm F\left(\arccos\dfrac{y+r-\lambda}{y+r+\lambda}\ \Bigg|\ w\right)\\[4pt] &+\dfrac12\ln\left|\dfrac{\dfrac{\sqrt{y^3-y+1}}{y+r}\dfrac{y+r-\lambda}{y+r+\lambda}+C_1} {\dfrac{\sqrt{y^3-y+1}}{y+r}\dfrac{y+r-\lambda}{y+r+\lambda}-C_1}\right| +\dfrac12\ln\left|\dfrac{\dfrac{\sqrt{y^3-y+1}}{y+r}\dfrac{y+r-\lambda}{y+r+\lambda}+C_2} {\dfrac{\sqrt{y^3-y+1}}{y+r}\dfrac{y+r-\lambda}{y+r+\lambda}-C_2}\right|\\[4pt] &-\dfrac{C_1}{2\sqrt\lambda}\Pi\left(\dfrac{(3r+2\lambda)(r+1)}{(\lambda+r+1)^2},\arccos\dfrac{y+r-\lambda}{y+r+\lambda}\ \Bigg|\ w \right)\\[4pt] &-\dfrac{C_2}{2\sqrt\lambda}\Pi\left(\dfrac{(3r+2\lambda)(r-1)}{(\lambda+r-1)^2}, \arccos\dfrac{y+r-\lambda}{y+r+\lambda}\ \Bigg|\ w \right)\\[4pt] & +\dfrac12\ln\left| \dfrac {y + r + (r+1)\sqrt{y^3-y+1\phantom{\Big|}}} {y + r - (r+1)\sqrt{y^3-y+1\phantom{\Big|}}}\right| -\dfrac12\ln\left|\dfrac {y + r + (r-1)\sqrt{y^3-y+1\phantom{\Big|}}} {y + r - (r-1)\sqrt{y^3-y+1\phantom{\Big|}}}\right|,\\[4pt] &C_1 = \dfrac{r-\lambda+1}{(r+1)(r+\lambda+1)},\quad C_2 = \dfrac{\lambda-r+1}{(r-1)(r+\lambda-1)}, \end{align}}}\tag{*}}$$ wherein the parameters $r,\,\lambda,\,w$ are given by $(4),$ and $Q_{30}(\infty) = 0.$ Since \begin{align} &\int_{\sqrt2}^\infty \dfrac {y^2+1}{y^2-1} \dfrac{\mathrm dy}{\sqrt{y^3-y+1}} =Q_{30}(\infty) - Q_{30}(\sqrt2) \end{align} $\approx4.01502\,83666\,96210\,78140$ (term1) $+0.33133\,29488\,62457\,95551$ (term2) $+0.03549\,45749\,48612\,01694$ (term3) $-0.13827\,66191\,08129\,76996$ (term4) $-2.80594\,91779\,29332\,44409$ (term5) $+0.80579\,17865\,34765\,72804$ (term6+term7) $\approx 2.24342\,18800\,04584\,26784,$ then $$\int_0^1 \dfrac{\mathrm dx}{\sqrt{x+\sqrt{x+\sqrt x}}} = 2\sqrt{1+\sqrt2}-(Q_{30}(\infty)-Q_{30}(\sqrt2)) \approx 0.86412\,60680\,55497.$$ This result corresponds with the numeric calculations and confirms the obtained formulas for the antiderivative.<|endoftext|> TITLE: How many points to prove a trigonometric identity? QUESTION [9 upvotes]: I am taking a trigonometric identity from another post, arbitrarily. $$\frac{2\sec\theta +3\tan\theta+5\sin\theta-7\cos\theta+5}{2\tan\theta +3\sec\theta+5\cos\theta+7\sin\theta+8}=\frac{1-\cos\theta}{\sin\theta}.$$ Besides the usual approach by reworking/simplifying the expressions using elementary identities, one could use a "lazy" approach, by evaluating both members for several $\theta$ and checking equality. This works for polynomials, if you probe them at $d+1$ points, where $d$ is the degree. Can we derive general rules about the number of equalities required to guarantee that trigonometric expressions of a certain complexity are indeed identical ? REPLY [5 votes]: If the numerator and denominator of an trigonometric expression (supposed to equal $0$) are trigonometric polynomials of degree at most $k$ then the fraction is a rational function of $u = \exp(i \theta)$ of degree at most $2k$, and $4k+1$ distinct angles would be needed. This matches the number of coefficients (minus 1 for scaling) needed to write a ratio of two trigonometric polynomials. I don't see how one could make exact comparison of the values at so many distinct points without having a separate proof of the identity. 
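In practice, a floating-point spot check at a few angles is still a useful first filter. Here is a minimal sketch using the identity quoted in the question (the sample angles are arbitrary, chosen away from the zeros of $\sin\theta$ and $\cos\theta$):

```python
from math import sin, cos, tan

def sec(t):
    return 1.0 / cos(t)

def lhs(t):
    num = 2*sec(t) + 3*tan(t) + 5*sin(t) - 7*cos(t) + 5
    den = 2*tan(t) + 3*sec(t) + 5*cos(t) + 7*sin(t) + 8
    return num / den

def rhs(t):
    return (1 - cos(t)) / sin(t)

# sample angles away from the obvious singularities
for t in (0.3, 0.7, 1.1, 2.0, 2.5, 3.0):
    print(t, lhs(t) - rhs(t))
```

The printed differences are at the level of rounding error, which is consistent with the identity but of course proves nothing by itself.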
For numerical evaluation the question of accuracy needed is similar to that for rational functions of the same degree and size of coefficients, but maybe there is some simplification from knowing $|u|=1$.<|endoftext|> TITLE: Is $\Gamma(\alpha, 0, x)$ log-concave as a function of $x$ (for fixed $\alpha$)? QUESTION [5 upvotes]: The incomplete Gamma function is defined as: $$\Gamma(\alpha, 0, x) = \int_0^x t^{\alpha - 1} e^{-t} dt$$ Let $\alpha \ge 1$ be a fixed number. Is $\Gamma(\alpha, 0, x)$ log-concave, as a function of $x$? Assume that $x\ge 0$. P.S. A function is log-concave if its logarithm is concave. REPLY [2 votes]: According to your definition, $$ \frac{d}{dx}\Gamma(\alpha,0,x) = x^{\alpha -1}e^{-x},\qquad \frac{d^2}{dx^2}\Gamma(\alpha,0,x) = x^{\alpha-2}e^{-x}(\alpha-1-x).$$ Let us just set $f(x)=\Gamma(\alpha,0,x)$ in order to have a more efficient notation. We may show that $f$ is a log-concave function by showing that $\frac{d^2}{dx^2}\log f(x)<0$, i.e. $$ f''(x)\,f(x) \leq f'(x)^2 $$ that is equivalent to: $$ x^{\alpha-2}e^{-x}(\alpha-1-x)\int_{0}^{x}t^{\alpha-1}e^{-t}\,dt \leq x^{2\alpha-2}e^{-2x} $$ or to: $$ (\alpha-1-x)\int_{0}^{x}t^{\alpha-1}e^{-t}\,dt \leq x^{\alpha}e^{-x} $$ or to: $$ (\alpha-1-x)\int_{0}^{1}(1-u)^{\alpha-1}e^{xu}\,du \leq 1\tag{1}$$ that is trivial if $x\geq (\alpha-1)$, and is a simple consequence of $(1-u)\leq e^{-u}$ otherwise. Hence yes, $f$ is a log-concave function. Interestingly, $g(\alpha)=\Gamma(\alpha,0,x)$ is a log-convex function with respect to $\alpha$ due to the Cauchy-Schwarz inequality, since $\frac{d^k}{d\alpha^k}\Gamma(\alpha,0,x)=\int_{0}^{x}\left(\log t\right)^k t^{\alpha-1}e^{-t}\,dt$.<|endoftext|> TITLE: The infinite integral of $\frac{\sin x}{x}$ using complex analysis QUESTION [7 upvotes]: The problem I came across is the evaluation of $$\int_0^\infty\frac{\sin x}{x}\,dx$$ I chose the function $f(z) = \dfrac{e^{iz}}{z}$ and took a contour of $[\varepsilon , R ] + [R , R+iy] + [-R+iy , R+iy] + [-R,-R+iy]+[-R, -\varepsilon]$. The problem is how do I continue now to find integrals on each of these segments? REPLY [11 votes]: Take the function $\;f(z)=\frac{e^{iz}}z\;$ and define the (positive) indented semicircle $$\gamma_r:=\{z\in\Bbb C\;:\;z=re^{it}\;,\;\;0\le t\le \pi\,,\,\,r>0\}$$ Now, for big $\;R\in\Bbb R_+\;$ and very small $\;\epsilon>0\;$, take the contour: $$C:=[-R,-\epsilon]\cup(-\gamma_\epsilon)\cup[\epsilon,R]\cup\gamma_R$$ We're going to use the lemma, and in particular its corollary, that appears in the first and most upvoted answer here. Observe that $\;f\;$ is analytic on the contour and within the domain enclosed by it, so by Cauchy's Theorem its integral equals zero. Also $$\text{Res}_{z=0}(f)=\lim_{z\to 0} zf(z)=e^0=1$$ and thus by the lemma $$\lim_{\epsilon\to0}\int_{\gamma_\epsilon}f(z)\,dz=\pi i$$ and also $$\left|\int_{\gamma_R}f(z)\,dz\right|\le\frac{\pi R e^{-R\cdot\text{Im}\,z}}{R}\xrightarrow[R\to\infty]{}0\,,\,\text{since}\;\;\text{Im}\,(z)>0\;\;\text{on}\;\;\gamma_R$$ So we get: $$0=\oint_C f(z)\,dz=\int_{-R}^{-\epsilon} f(x)\,dx-\int_{\gamma_\epsilon} f(z)\,dz+\int_\epsilon^R f(x)\,dx+\int_{\gamma_R}f(z)\,dz\implies$$ $$0=\lim_{R\to\infty,\,\epsilon\to0}\oint_C f(z)dz=\int_{-\infty}^\infty\frac{e^{ix}}xdx-\pi i$$ and now just compare imaginary parts and divide by two since the real function is even.<|endoftext|> TITLE: Is the fixed point set of an action a submanifold? QUESTION [10 upvotes]: Let $M$ be a differentiable manifold, and $G$ a Lie group acting smoothly on $M$.
Under which condition - if any - is the set of fixed points of the action a submanifold of $M$? My thoughts so far: if we could determine this set as the preimage of some non-critical value of a smooth map, we would be done. In the easy case $M=\mathbb R^n$, one can consider the function $g\cdot x - x$, with $g\in G$ fixed and $x\in M$ varying, and then pick the preimage of zero. In any case, this gives a larger set than the set of all fixed points: an intersection would be needed, and then who knows what may be happening? I'm particularly interested in compact Lie groups, and I tried looking for counterexamples, but I can't find any. I have the feeling it shouldn't be too hard to find a set of fixed points self-intersecting in some point (I'm thinking of something like the axes in $\mathbb R^2$), but I probably shouldn't be trusting feelings. References, hints, generic intuitions are very welcome! REPLY [15 votes]: Let $x \in M$ be a fixed point of a $G$-action, $G$ acting properly on $M$ (meaning the map $G \times M \to M \times M$, $(g,m) \mapsto (m,gm)$, is proper). We'll use the following tubular neighborhood theorem: If $M$ is a smooth proper $G$-manifold without boundary, and $N$ is a closed $G$-invariant submanifold, then there is an open neighborhood $U \supset N$ which is $G$-equivariantly diffeomorphic to the total space of a $G$-vector bundle over $N$ (that is, a vector bundle over $N$ such that the action of $G$ on $N$ extends to a fiberwise linear action of $G$ on the bundle.) See proposition 4.5 here. Now, let's kill the fly. Since $x$ is a fixed point, it's a $G$-invariant submanifold. A $G$-vector bundle over it is the same thing as a vector space with a linear action of $G$. The fixed point set of this action is then necessarily a vector subspace. Transporting this back to the neighborhood of $x$, we see that locally near $x$, the fixed point set forms a submanifold, as desired. (Indeed, the set of points $x$ such that the isotropy group of the action at $x$ contains some fixed closed subgroup $H \subset G$ forms a submanifold by the exact same argument; of course, the example above corresponds to the case $H=G$.)<|endoftext|> TITLE: On the extension of the solution to a nonlinear ODE QUESTION [7 upvotes]: Consider the nonlinear ODE $$x' = (x^2 - e^{2t})f(t, x)$$ with $f$ continuous. Prove that for any $\tau > 0$, if $|x_0|$ is sufficiently small, the solution $x(t)$ to the ODE above can be extended to $\tau \leq t < \infty$. My general sketch: 1) If I can show that for any compact subset $\Omega \subset \mathbb{R}^n $, the solution $x(t)$ hits the boundary of $\Omega$, then I can keep extending the solution to infinity; 2) Equivalently, I was trying to show that the integral $$\int_{x_0}^{x_0 + \alpha} \frac{dx}{F(t, x)}$$ diverges, where $F(t, x) = (x^2 - e^{2t})f(t, x)$; 3) The third way I was thinking is that I can use the fact that if $|F(t, x)| \leq K|x_0|$, then the maximum interval of existence is $(-\infty, \infty)$. But I can't seem to bound it. REPLY [4 votes]: The idea is to use the equation, and a variation on Grönwall's inequality, to show that the solutions $x(t)$ that start near $0$ remain bounded for all $t$. Not sure what to make of $\tau$. The way the problem is stated, it must have come within a context in which referring to $\tau$ is significant. Let $x_0$ be such that $|x_0|<1$; then the solution $x(t)$ of the ODE that starts at $x_0$ satisfies $|x(t)| < 1 \le e^t$ for all small $t \ge 0$. I claim that $|x(t)| < e^t$ for all $t$.
If that were not the case, there would be $T>0$ such that $|x(t)| < e^t$ for all $t \in [0,T)$ and $|x(T)| = e^T$. So far we have only used the continuity of $x$. Let's use the equation. For $t \in [0,T)$, $$|x'(t)| = (e^{2t} - x^2)|f(t,x)| \le (e^{2T}-x^2)M_T$$ where $M_T = sup\{|f(t,u)| | 0 \le t \le T\text{ and } |u| \le e^t \}$, therefore $$| \int_{0}^{t} \frac{x'(s)}{e^{2T}-x^2}ds| \le M_Tt.$$ The integral can be simplified as follows: $$\int_{0}^{t} \frac{x'(s)}{e^{2T}-x^2}ds = \int_{x_0}^{x(t)} \frac{1}{e^{2T}-x^2}dx = \frac{1}{e^T} ln(\frac{1+u}{1-u}){\LARGE|}_{e^{-T}x_0}^{e^{-T}x(t)} $$ so: $$| ln(\frac{e^T + x(t)}{e^T - x(t)}) | \le | ln(\frac{e^T + x_0}{e^T - x_0}) | + e^TM_Tt$$ By our choice of $T$, as $t \nearrow T$, $x(t) \rightarrow \pm e^T$, in either case, $| ln(\frac{e^T + x(t)}{e^T - x(t)}) | \rightarrow +\infty$, which contradicts the previous inequality. $\blacksquare$<|endoftext|> TITLE: Looking for function $f$ such that $f'<0$ and $(xf)'>0$ QUESTION [7 upvotes]: I'm looking for a function $f(x)$ with the following properties for $x\ge 0$: $$0\le f(x)\le 1$$ $$f(0)=1$$ $$f'(x)\le 0$$ $$f(x)+xf'(x)\ge 0$$ $$\lim_{x\to\infty} xf(x)=L$$ where $L$ is a positive constant. Essentially I want $xf(x)$ to initially approximate $x$ and then level off at $L$. Candidates include: $$f(x)={1\over 1+x/L}$$ $$f(x)={\tanh (x/L) \over x/L}$$ but neither of these have an extra parameter that allows me to control how quickly $xf(x)$ approaches $L$ while keeping the initial slope of $xf(x)$ equal to $1$. This is what I would like, ideally. The motivation for this: $f(x)$ can be thought of as an efficiency with $x$ the input and $xf(x)$ the output. The system is perfectly efficient at zero input and decreases in efficiency with increasing input, but you never get less output from more input. Thanks for your responses. REPLY [2 votes]: $$f(x)=\frac{1}{2x}\left(1-\frac{1}{(x+1)^2}\right)=\frac{x+2}{2(1+x)^2} $$ fulfills the given constraints: $f$ is bounded between $0$ and $1$ on $\mathbb{R}^+$, $f(0)=1$, $f$ is decreasing, $xf$ is increasing and $L=\frac{1}{2}$. A parametric family is given by: $$ f_a(x) = \frac{a^3}{2x}\left(\frac{1}{a^2}-\frac{1}{(x+a)^2}\right)=\frac{a(2a+x)}{2(x+a)^2} $$ with $\color{red}{L=\frac{a}{2}}$ for any $a>0$.<|endoftext|> TITLE: Surface area from indicator function QUESTION [12 upvotes]: I know that the volume and the surface area of a sphere of radius $R$ are related by a derivative: $$V(R)=\frac{4}{3}\pi R^3$$ $$A(R)=4\pi R^2=\frac{\partial V(R)}{\partial R}$$ I am asking if an analogous relation, in the sense that it allows to know the value of the surface from the value of the volume, exists for the indicator functions. I know the indicator function of a set $\Omega\in\mathbb{R}^n $ and $\vec{x}\in\mathbb{R}^n$ is a generic point : $$ \chi_{\Omega}(\vec{x})= \begin{cases} \hfill 1 \text{ if } \vec{x}\in \Omega \\ \hfill 0 \text{ if } \vec{x}\notin \Omega \\ \end{cases} $$ the volume of $\Omega$ is easily computed: $$V(\Omega)=\iiint_{\mathbb{R}^n} \chi_{\Omega}(\vec{x})d\vec{x} $$ Is it possible to compute the value of the surface area $A(\Omega)$ from the knowledge of $\chi(\Omega)$? Taking the derivative of $\chi_{\Omega}(\vec{x})$ I expect to have something related to the delta function. From an intuitive point of view, I expect the integral: \begin{equation} \iiint_{\mathbb{R}^3} ||\nabla \chi_{\Omega}(\vec{x})|| d\vec{x} \tag{*}\label{*} \end{equation} to be related to the surface area and this makes me think about a certain relationship. 
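As a rough numerical sanity check of this intuition (only a sketch: I use the 2-D analogue with a mollified indicator of the unit disk, and the grid spacing and the logistic smoothing width below are arbitrary choices), summing $||\nabla \chi||$ over a grid does reproduce the perimeter $2\pi$:

```python
import numpy as np

# Mollified indicator of the unit disk; grid size, domain and the
# logistic smoothing width eps are ad-hoc choices for this sketch.
N, R, eps = 801, 1.0, 0.05
x = np.linspace(-2, 2, N)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)
chi = 1.0 / (1.0 + np.exp((r - R) / eps))    # ~1 inside, ~0 outside

gy, gx = np.gradient(chi, h)                 # numerical gradient
integral = np.sum(np.hypot(gx, gy)) * h**2   # discretized integral of |grad chi|

print(integral, 2 * np.pi * R)               # both close to 6.2832
```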
I also had a look online and in the book ''Shapes and Geometries: Metrics, Analysis, Differential Calculus, and Optimization'', but I have not found anything which solves my problem directly. I have also thought of using the divergence theorem, but that would mean finding a field $\vec{F}$ whose divergence is $\chi$, and this is the contrary of what I am looking for by analogy (something which allows me to compute the area from the derivative (gradient) of the volume). Is my "intuition" correct, and if yes, could you give me a detailed answer or/and a good book/reference which attacks that problem directly? ---------------EDIT--------------- I reasoned a bit more on my question and I think I have found something. In particular, https://en.wikipedia.org/wiki/Surface_area reminded me that ''While for piecewise smooth surfaces there is a unique natural notion of surface area, if a surface is very irregular, or rough, then it may not be possible to assign an area to it at all.'' Then, assuming we deal with a volume $\Omega \in \mathbb{R}^n$ whose boundary $\partial \Omega$ is regular enough to have a well defined surface area, I reasoned as follows: the indicator function is used to compute the surface area approximately, by implicitly assuming it to be smooth and calculating its derivatives (which are nonvanishing only on the boundary, assumed smooth). This post Smooth approximation of characteristic function of a bounded open set gave me the idea: we can regard the indicator function $\chi_{\Omega}(\vec{x})$ as the limit of its convolutions with the following sequence of functions: \begin{equation} f_n(\vec{x})=\frac{n^3}{\pi^{\frac{3}{2}}}e^{-(n{\vec{x}})^2} \end{equation} which have integral $1$ and approach the Dirac delta function as $n\to \infty$. The convolution $\chi_{\Omega}*f_n$ is smooth $\forall n$ since $f_n$ is smooth, and it converges everywhere to $\chi_{\Omega}$: \begin{equation} [\chi_{\Omega}*f_n](\vec{x})=\int_{\mathbb R^3}\chi_{\Omega}(\vec{y})f_n(\vec{x}-\vec{y})d\vec{y} \end{equation} \begin{equation} \nabla^k_{\vec{x}}[\chi_{\Omega}*f_n](\vec{x})=\int_{\mathbb R^3}\chi_{\Omega}(\vec{y})\nabla^k_{\vec{x}}f_n(\vec{x}-\vec{y})d\vec{y} \end{equation} Therefore, using this formalism, we can define the implicit equation for the surface as: \begin{equation} h_n(\vec{x})=[\chi_{\Omega}*f_n](\vec{x})-0.5 \end{equation} \begin{equation} \chi_{\Omega}(\vec{x})=\theta(h_n(\vec{x})) \tag{**}\label{**} \end{equation} Given a 3D surface defined implicitly by $h_n(x,y,z)=0$, the unit normal vector to it is defined by: \begin{equation} \hat{N}_n=\frac{\nabla h_n}{||\nabla h_n||} \end{equation} For finite $n$, the vector field $\hat{N}_n$ defined here is continuous and differentiable, hence we can apply the divergence theorem using $\hat{N}_n$ as a vector field: \begin{equation} \iiint_V( \nabla\cdot\hat{N_n}) \;\text{d}\tau=\iint_{\partial V} (\hat{N_n}\cdot\hat{N_n})\;\text{dS}=\iint_{\partial V} \text{dS}= A \tag{***}\label{***} \end{equation} Therefore we are able to compute the surface area by integrating over the volume the divergence of the vector field defined by the normal to the surface.
The vector field $\hat{N}_n$ defined here is continuous and differentiable in the region around the border of V for finite $n$, but as $n\to\infty$ it becomes ill defined Therefore, up to now I think that my method allows to have an approximate estimation of the area of the surface for $n$ finite, but in the limir $n\to\infty$ we have that the vector field $\hat{N}_n$ becomes ill defined and so I cannot say anything about the convergence of the area to the real value... I am now trying to show that \ref{***} becomes \ref{*} in the limit $n\to\infty$...intuitively this seems possible... Recalling \ref{*}, we have that, using \ref{**}: \begin{equation} \nabla \chi_{\Omega}(\vec{x})=\delta(h_n(\vec{x}))\nabla h_n(\vec{x}) \end{equation} Hence \ref{*} becomes: \begin{equation} \iiint_{\mathbb{R}^3} \delta(h_n(\vec{x})) ||\nabla h_n(\vec{x})|| d\vec{x} \end{equation} Now, using the coarea formula from geometric measure theory (https://en.wikipedia.org/wiki/Dirac_delta_function): $$\int_{\mathbf{R}^n} f(\mathbf{x}) \, \delta(g(\mathbf{x})) \, d\mathbf{x} = \int_{g^{-1}(0)}\frac{f(\mathbf{x})}{|\mathbf{\nabla}g|}\,d\sigma(\mathbf{x}) $$ we have: \begin{equation} \iiint_{\mathbb{R}^3} \delta(h_n(\vec{x})) ||\nabla h_n(\vec{x})|| d\vec{x}=\iint_{h_n^{-1}(0)} \frac{||\nabla h_n(\vec{x})||}{||\nabla h_n(\vec{x})||}dS=\iint_{h_n^{-1}(0)} dS \end{equation} Therefore I have proved that \ref{*} is a good definition of the surface area. Now the question is how well \ref{***} approximates the area REPLY [3 votes]: It's always risky to answer "no" to open-ended "is it possible"-type questions. That said, in the case of using the volume formula for a family of regions to deduce surface area (the way the area of a sphere of radius $r$ is the derivative with respect to $r$ of the volume of a ball of radius $r$), the answer is probably "no": Think, for example, of a non-spheroidal ellipsoid with semi-axes $a$, $b$, and $c$. Its volume is $\frac{4}{3}\pi abc$, but its surface area is a non-elementary function of $a$, $b$, and $c$. If I understand what you're getting at, my answer to Why is the derivative of a circle's area its perimeter (and similarly for spheres)? is related, and may be of interest.<|endoftext|> TITLE: Proving $(A\times B)^- = A^-\times B^-$ (closure of cartesian product) QUESTION [5 upvotes]: My proof, for: $$(A\times B)^- = A^-\times B^-$$ using the metric $$d''((a_1,a_2),(b_1,b_2)) = max\{d_1(a_1,b_1),d_2(a_2,b_2)\}$$ $\rightarrow$ Well, if $a = (a_1,a_2)\in (A\times B)^-$ then: $$d''((a_1,a_2), A\times B) = 0 \implies d''((a_1,a_2),(a_a,a_b))=0 \implies d_1(a_1,a_a)=0, d_2(a_2,a_b)=0$$ for all $a_a\in A$ and $a_b\in B\implies a_1\in A^-$ and $a_2\in B^-$, therefore $a\in (A\times B)^-$ $\leftarrow$ $$a\in A^- \times B^ \implies (a_1,a_2)\in A^- \times B^- \implies a_1\in A^-, a_2\in B^-\implies d_1(a_1,a_a)=0 \ \ \forall a_a\in A, \\d_2(a_2,a_b)=0 \ \ \forall a_b\in B \implies d''((a_1,a_2),(a_a,a_b))=0, \ \ \forall a_a\in A, a_b\in B\implies d''(a,A\times B)=0\implies a\in (A\times B)^-$$ REPLY [3 votes]: The distance of a point to a set is defined as $d(a,B) = \inf\{d(a,b)\mid b\in B\}$. So you cannot conclude that from $d(a,B) = 0$ we have $d(a,b) = 0$ for all $b\in B$. All you can say is that for every $\varepsilon>0$ there exists $b\in B$ such that $d(a,b)<\varepsilon$. Use the property: $a\in \overline A$ if and only if for all $\varepsilon>0$ there exists $b\in A$ such that $d(a,b)<\varepsilon$. So your proof should go like this: Let $(a,b)\in \overline{A\times B}$. 
We need to show $a\in \overline A$ and $b\in \overline B$. So let $\varepsilon>0$ be arbitrary. Then there exists $(a',b')\in A\times B$ such that $d''((a,b),(a',b')) = \max\{d_1(a,a'), d_2(b,b')\} < \varepsilon$. Hence $d_1(a,a')<\varepsilon$ and $d_2(b,b')<\varepsilon$. Since $\varepsilon>0$ was arbitrary, we conclude $a\in \overline A$ and $b\in \overline B$, i. e. $(a,b)\in \overline A\times \overline B$. Conversely, let $(a,b)\in \overline A\times \overline B$. Let $\varepsilon>0$ be arbitrary. Since $a\in \overline A$, there exists $a'\in A$ such that $d_1(a,a')<\varepsilon$. Since $b\in \overline B$, there exists $b'\in B$ such that $d_2(b,b')<\varepsilon$. But then we have $$ d''((a,b),(a',b')) = \max\{d_1(a,a'), d_2(b,b')\} < \varepsilon. $$ Since $\varepsilon>0$ was arbitrary, we conclude $(a,b)\in \overline{A\times B}$. Notice, that the inclusion $\overline{A\times B}\subseteq \overline A\times \overline B$ is actually almost trivial, because $\overline A\times \overline B$ is closed (its complement being $((X\setminus A)\times Y)\cup (X\times (Y\setminus B))$, a union of open sets; here $X\supseteq A$ and $Y\supseteq B$ are the total spaces) and $\overline {A\times B}$ is the smallest closed set containing $A\times B$ (i. e. the intersection of all closed sets containing $A\times B$).<|endoftext|> TITLE: Let $f(z)$ be analytic in the unit disc, QUESTION [9 upvotes]: Let $f(z)$ be analytic in the unit disc, $Rf(z) \ge 1$, and $f(0)>0$. Show that $$f(0) \cdot \frac{1-|z|}{1+|z|} \le |f(z)| \le f(0) \cdot \frac{1+|z|}{1-|z|} \quad z \in D $$ This looks like something I would have to prove using cases. I tired to throw some numbers in there to see what was going on but I am still not seeing how to prove this. I think I am missing or forgetting a concept to use. I have attempted the Schwartz lemma as suggested below and assumed $f(0)=1$ I am not sure if this is the right path to take? REPLY [7 votes]: The linear fraction $g(w)=\frac{1-w}{1+w}$ maps the right half plane to the unit disk such that $g(1)=0$. By applying Schwarz's lemma to the function $w\mapsto g(f(w))$, we have $|g(f(z))|\le |z|$. The function $g^{-1}(w)=\frac{1-w}{1+w}$ maps the disk $|w|\le|z|$ to the closed disk with diameter $\left(\frac{1-|z|}{1+|z|},\frac{1+|z|}{1-|z|}\right)$, so $f(z)=g^{-1}(g(f(z))$ lies in that disk. Obviously, the disk lies in $\frac{1-|z|}{1+|z|}\le|w|\le \frac{1+|z|}{1-|z|}$.<|endoftext|> TITLE: Showing reducibility of a polynomial in a Discrete Valuation Ring QUESTION [6 upvotes]: Let $R$ be a complete discrete valuation ring with uniformiser $\pi$. I would like to show that a polynomial $f$ in $R[X]$ is reducible. Does it suffice to show that $f$ is reducible in $\frac{R}{\pi^i}[X]$ for all $i\in\mathbb{Z}_{\geq{1}}$? Thoughts: I think it does because $R$ complete means it is the inverse limit of the $\frac{R}{\pi^i}$'s. Moreover each factorisation of $f$ in $\frac{R}{\pi^i}[X]$ passes down to $j\leq i$. But I kind of need to pass down from $i$ equal to infinity to make this work. I feel like I've seen something like this before somewhere else as well where it was made to work but I can't remember the argument... The example I'm interested in is when $R$ is a $\mathbb{Z}_p$ and $\pi=p$ where $p$ is a rational prime. REPLY [4 votes]: If $R/(\pi)$ is a finite field, then $R$ is compact for the $\pi$-adic topology, which gives an easy proof of this. Denote $R_n[X]$ the set of polynomials of degree at most $n$ with coefficients in $R$. 
You can give it the product $\pi$-adic topology (two polynomials are close when all their coefficients are close) and it is also compact. Your hypothesis says that you have factorisations $f = P_kQ_k \pmod {\pi^k}$, that is, you have an infinite sequence $(P_k,Q_k)$ such that $\lim (P_k Q_k) = f$ and $1 \le \deg P_k, \deg Q_k < \deg f$, and so $(P_k, Q_k)$ are in the compact space $R_{\deg f-1}[X]^2$. Therefore the sequence $(P_k,Q_k)$ has one (maybe several) limit point $(P,Q)$. And since the multiplication map is continuous, you must have $PQ = f$, and $\deg P, \deg Q < \deg f$ so this is a nontrivial factorisation. Concretely, you can just look at your sequence $(P_k,Q_k)$, first select an infinite subsequence where all the $(P_k,Q_k) \pmod \pi$ agree to some $(p_1,q_1) \in (R/\pi)[X]^2$, then select an infinite subsequence where all the $(P_k,Q_k) \pmod {\pi^2}$ agree to some $(p_2,q_2) \in (R/\pi^2) [X]^2$, and so on. The family $(p_l,q_l)$ you get is compatible, and defines an element $(p,q)$ in the projective limit $R[X]^2$<|endoftext|> TITLE: How to prove that $\frac{\Gamma(3/2)}{3\Gamma(11/6)}=\frac{\sqrt{3}\,\Gamma(1/3)^2}{10\pi\sqrt[3]{2}}$? QUESTION [5 upvotes]: My task is to show that: $$\frac{\Gamma(3/2)}{3\Gamma(11/6)}=\frac{\sqrt{3}\,\Gamma(1/3)^2}{10\pi\sqrt[3]{2}}.$$ I'm trying to show this using properties of the gamma function, but it seams every step I take doesn't take me in the right direction. Any suggestions? REPLY [3 votes]: From the functional equation $$\frac12\sqrt{\pi}=\frac12\Gamma\left(\frac12\right)=\Gamma\left(\frac32\right)$$ Again from the functional equation, $$\Gamma\left(\frac{11}6\right)=\frac56\Gamma\left(\frac56\right)$$ From the duplication formula, $$\Gamma\left(\frac13\right)\Gamma\left(\frac56\right)=\sqrt[3]2\sqrt{\pi}\Gamma\left(\frac23\right)$$ And then from the reflection formula, $$\Gamma\left(\frac13\right)\Gamma\left(\frac23\right)=\frac{2\pi}{\sqrt3}$$ Multiply all those equations together to get $$\frac12\sqrt{\pi}\left(\Gamma\left(\frac13\right)\right)^2\Gamma\left(\frac{11}6\right)\Gamma\left(\frac56\right)\Gamma\left(\frac23\right)=\frac{10\pi^{\frac32}\sqrt[3]2}{6\sqrt3}\Gamma\left(\frac32\right)\Gamma\left(\frac56\right)\Gamma\left(\frac23\right)$$ Simplify, and that should do it.<|endoftext|> TITLE: Is basis change ever useful in practical linear algebra? QUESTION [34 upvotes]: In layman's terms, why would anyone ever want to change basis? Do eigenvalues have to do with changing basis? REPLY [5 votes]: Structural dynamics with the Finite Element Method Computing the dynamic response of a structure is done with modal analysis. The original vector space is displacements and rotations in each node. These are unsuited to dynamic response. The system of equations is therefore transformed into a new vector space where each element in the vector space represents a vibration mode. Wikipedia has some nice animations of vibration modes, ordered from lowest to highest frequency: Each mode (eigenvector) is the vibration to a specific frequency (eigenvalue). When these have been found, we can compute interesting stuff like the structural response to an earthquake of a certain frequency spectrum. Afterwards, the basis is changed back to the original to obtain forces and displacements in each node.<|endoftext|> TITLE: Does $\operatorname{card}(X) < \operatorname{card}(Y)$ imply $\operatorname{card}(X^2) < \operatorname{card}(Y^2)$ without choice? 
QUESTION [9 upvotes]: I looked to see if this question was already posted, but did not find anything. Please let me know if this is a duplicate. Assume $X, Y$ are infinite sets such that there is an injection $X \to Y$ but no injection $Y \to X$. Using the axiom of choice, we have that $\operatorname{card}(X) = \operatorname{card}(X^2)$ and similarly for $Y$, so under the hypothesis there is no injection $Y \times Y \to X \times X$. I really have two questions relating to the title: 1) How much (if any) choice do we need to prove that there is no injection $Y \times Y \to X \times X$? And for my own interest, as I could not come up with an argument: 2) Assuming choice, is there a direct argument to prove that there is no injection $Y \times Y \to X \times X$ without relying on cardinalities of the squares? It is definitely possible that the answer to 1) already answers 2). Perhaps this is really a trivial question, but I have thought about it for a little while and have not made any progress. REPLY [3 votes]: As to your question (1), the answer is: all of AC is needed. Lemma: if $f:A\times B\to C\cup D$ is injective, $C\cap D=\varnothing$, and at least one of $A,B,C,D$ is well ordered, we have $|A|\leq|C|\vee|B|\leq|D|$. (By symmetry, one has $|A|\leq|D|\vee|B|\leq|C|$ as well!). Proof: By symmetry, we may assume either $A$ or $C$ can be well ordered. Let $R$ well order $A$. If $\forall_{b\in B}\exists_{a\in A}f(a,b)\in D$, then $b\mapsto f(a,b)$ gives an injection $B\to D$, where for $b\in B$ we take $a$ to be the $R$-minimal $x\in A$ such that $f(x,b)\in D$. If not, it follows that $\exists_{b\in B}\forall_{a\in A}f(a,b)\in C$, and in that case the assignment $a\mapsto f(a,b)$ injects $A$ into $C$ for such a $b$. Now let $S$ well order $C$. If $\forall_{a\in A}\exists_{b\in B}f(a,b)\in C$, we get an injection $A\to C$ by letting $a\mapsto$ the $S$-minimal $c\in C$ for which there exists a $b\in B$ with $f(a,b)=c$. And if $\exists_{a\in A}\forall_{b\in B}f(a,b)\in D$, then for such an $a$ the prescription $b\mapsto f(a,b)$ defines an injection $B\to D$. Hence $|A|\leq|C|\vee|B|\leq|D|$ in both situations. Reversing the roles of $C$ and $D$ in the $R$ wo $A$ case, and of $A$ and $B$ in the $S$ wo $C$ case, we find that, equally, $|A|\leq|D|\vee|B|\leq|C|$, $\Box$. EDIT: the condition $C\cap D=\varnothing$ was never used in the proof, so it can be discarded. Proposition: $(\forall_{X,Y}(|X|<|Y|\Rightarrow|X|^{2}<|Y|^{2}))\Rightarrow AC$ Proof: assuming the antecedent, we show that every set can be well ordered. (As is well known, the well ordering principle is equivalent to the AC.) Let $A$ be a set which we wish to well-order. We can assume $|A|\geq\aleph_{0}$ (else replace $A$ with $A\cup\mathbb{N}$; a well ordering for that set will induce one for $A$). Put $B:=A^{\mathbb{N}}$. Then $|A|\leq|B|$ and $|B|^{2}=|B|$. We now have to become slightly technical, and consider $C:=\Gamma(B)$ where $\Gamma$ is Hartogs' function; that is, $C$ is the set of all ordinals $\alpha$ that have an injection $\alpha\to B$. The facts that $\Gamma(B)$ is indeed a set, and that it is the smallest ordinal that does not allow an injection $\to B$, can be found in most text books on set theory. Write $\kappa:=|B|$ and $\mu:=|C|$. Since both are $\geq\aleph_{0}$, one has $\kappa+\mu\leq\kappa.\mu$ (indeed, we find that $\kappa.\mu=(\kappa+1).(\mu+1)=\kappa.\mu+\kappa+\mu+1\geq\kappa+\mu$). Let us assume $\kappa+\mu=\kappa.\mu$. As C is an ordinal, the Lemmma applies, giving $\kappa\leq\mu\vee\mu\leq\kappa$. 
The latter option is impossible, as no injection $C\to B$ can exist. So $\kappa\leq\mu$, that is, an injection $B\to C$ exists, and as $C$ is well ordered, $B$ inherits a well ordering from $C$ via this injection. But $|A|\leq|B|$, so that $A$ inherits a well ordering from the one on $B$ via an injection $A\to B$. Now assume $\kappa+\mu<\kappa.\mu$. By our hypothesis, this yields $(\kappa+\mu)^{2}<(\kappa.\mu)^{2}=\kappa^{2}.\mu^{2}=\kappa.\mu$. The latter equality follows from the fact that $|B|^{2}=|B|$, as we have already remarked, while $|C|^{2}=|C|$ since $C$ is an infinite ordinal. However, one has: $(\kappa+\mu)^{2}=\kappa^{2}+2\kappa.\mu+\mu^{2}=\kappa+\kappa.\mu+\mu$; again, $2|C|=|C|$ since $C$ is an infinite ordinal. Thus $(\kappa+\mu)^{2}=\kappa+\kappa.\mu+\mu=\kappa+\kappa.\mu+\mu+1=(\kappa+1)(\mu+1)=\kappa.\mu$, in contradiction with $(\kappa+\mu)^{2}<\kappa.\mu$, $\Box$.<|endoftext|> TITLE: Different arrows in set theory: $\rightarrow$ and $\mapsto$ QUESTION [26 upvotes]: Can someone explain the difference between symbols: $\rightarrow$ and $\mapsto$ Thanks. REPLY [12 votes]: In programming parlance, $\to$ is part of a type signature, while $\mapsto$ is part of a function definition. The expression $$x \mapsto \operatorname{floor}(1/x)$$ denotes the function that takes in a number and spits out the floor of its reciprocal. There are many different type signatures that can be consistently assigned to this function. If you drop in numbers between $0$ and $1$, the function will spit out positive integers, so $$(0, 1) \to \mathbb{N}$$ is one valid type signature. As M. Vinay noted, it's not unusual to combine these notations in a function definition. For example, I could declare a function $g$ with the definition and type signature above by writing $$\begin{align*} g \colon (0,1) & \to \mathbb{N} \\ x & \mapsto \operatorname{floor}(1/x). \end{align*}$$<|endoftext|> TITLE: How do I find the limit of this function? QUESTION [8 upvotes]: This is a question from my calculus text. It says $$\lim\limits_{x\to1}\left(\frac{p}{1-x^p}-\frac{q}{1-x^q}\right)$$ where $p$ and $q$ are natural numbers. I know this is an infinity-infinity indeterminant form which can be converted to a $0/0$ form. I tried substituting $x=1+h$ where $h\to0$. But it is not working. What should I do? REPLY [2 votes]: You can rewrite the function in the form $$ \frac{p(1-x^q)-q(1-x^p)}{(1-x^p)(1-x^q)} $$ We assume $p>q>1$ just to do the next computation. Consider $f(x)=qx^p-px^q+p-q$, so $f'(x)=pqx^{p-1}-pqx^{q-1}$ and $$ f''(x)=pq(p-1)x^{p-2}-pq(q-1)x^{q-2} $$ Hence $f'(1)=0$ and $f''(1)=pq(p-q)$. Observe that the same happens with no condition on $p$ and $q$ (provided they're nonzero). Therefore $$ f(x)=\frac{pq(p-q)}{2}(x-1)^2+g(x)(x-1)^3 $$ for some polynomial $g$. Note also that $$ x^p-1=p(x-1)+g_p(x)(x-1)^2 $$ for some polynomial $g_p$, so your limit is $$ \lim_{x\to 1} \frac{\frac{pq(p-q)}{2}(x-1)^2+g(x)(x-1)^3} {(p(x-1)+g_p(x)(x-1)^2)(q(x-1)+g_q(x)(x-1)^2)}=\frac{p-q}{2} $$ Note that actually the hypothesis that $p$ and $q$ are integer is not really used. The above are the Taylor expansions at $1$, where $g$, $g_p$ and $g_q$ are not necessarily polynomials, but the result is the same.<|endoftext|> TITLE: Finding $\int \frac{dx}{a+b \cos x}$ without Weierstrass substitution. QUESTION [13 upvotes]: I saw somewhere on Math Stack that there was a way of finding integrals in the form $$\int \frac{dx}{a+b \cos x}$$ without using Weierstrass substitution, which is the usual technique. 
When $a,b=1$ we can just multiply the numerator and denominator by $1-\cos x$ and that solves the problem nicely. But I remember that the technique I saw was a nice way of evaluating these even when $a,b\neq 1$. REPLY [6 votes]: Kepler found the substitution when he was trying to solve the equation $$\ell=mr^2\frac{d\nu}{dt}=\text{constant}$$ where $\ell$ is the orbital angular momentum, $m$ is the mass of the orbiting body, the true anomaly $\nu$ is the angle in the orbit past periapsis, $t$ is the time, and $r$ is the distance to the attractor. This is Kepler's second law, the law of areas equivalent to conservation of angular momentum. Then Kepler's first law, the law of trajectory, is $$r=\frac{a(1-e^2)}{1+e\cos\nu}$$ where $a$ and $e$ are the semimajor axis and eccentricity of the ellipse. So to get $\nu(t)$, you need to solve the integral $$\int\frac{d\nu}{(1+e\cos\nu)^2}$$ So as to relate the area swept out by a line segment joining the orbiting body to the attractor Kepler drew a little picture The attractor is at the focus of the ellipse at $O$ which is the origin of coordinates, the point of periapsis is at $P$, the center of the ellipse is at $C$, the orbiting body is at $Q$, having traversed the blue area since periapsis and now at a true anomaly of $\nu$. To perform the integral given above, Kepler blew up the picture by a factor of $1/\sqrt{1-e^2}$ in the $y$-direction to turn the ellipse into a circle. The orbiting body has moved up to $Q^{\prime}$ at height $$y=\frac{a\sqrt{1-e^2}\sin\nu}{1+e\cos\nu}$$But still $$x=\frac{a(1-e^2)\cos\nu}{1+e\cos\nu}$$ Now he could get the area of the blue region because sector $CPQ^{\prime}$ of the circle centered at $C$, at $-ae$ on the $x$-axis and radius $a$ has area $$\frac12a^2E$$ where $E$ is the eccentric anomaly and triangle $COQ^{\prime}$ has area $$\frac12ae\cdot\frac{a\sqrt{1-e^2}\sin\nu}{1+e\cos\nu}=\frac12a^2e\sin E$$ so the area of blue sector $OPQ^{\prime}$ is $$\frac12a^2(E-e\sin E)$$ and then we can go back and find the area of sector $OPQ$ of the original ellipse as $$\frac12a^2\sqrt{1-e^2}(E-e\sin E)$$ So if doing an integral with a factor of $\frac1{1+e\cos\nu}$ via the eccentric anomaly was good enough for Kepler, surely it's good enough for us. In the original integer, $$\int\frac{dx}{a+b\cos x}=\frac1a\int\frac{dx}{1+\frac ba\cos x}=\frac1a\int\frac{d\nu}{1+\left|\frac ba\right|\cos\nu}$$ where $\nu=x$ is $ab>0$ or $x+\pi$ if $ab<0$. Now we see that $e=\left|\frac ba\right|$, and we can use the eccentric anomaly, $$\sin E=\frac{\sqrt{1-e^2}\sin\nu}{1+e\cos\nu}$$ $$\cos E=\frac{\cos\nu+e}{1+e\cos\nu}$$ $$d E=\frac{\sqrt{1-e^2}}{1+e\cos\nu}d\nu$$ and the integral reads $$\begin{align}\int\frac{dx}{a+b\cos x}&=\frac1a\int\frac{d\nu}{1+e\cos\nu}=\frac12\frac1{\sqrt{1-e^2}}\int dE\\ &=\frac1a\frac1{\sqrt{1-e^2}}E+C=\frac{\text{sgn}\,a}{\sqrt{a^2-b^2}}\sin^{-1}\left(\frac{\sqrt{a^2-b^2}\sin\nu}{|a|+|b|\cos\nu}\right)+C\\&=\frac{1}{\sqrt{a^2-b^2}}\sin^{-1}\left(\frac{\sqrt{a^2-b^2}\sin x}{a+b\cos x}\right)+C\end{align}$$ Of course it's a different story if $\left|\frac ba\right|\ge1$, where we get an unbound orbit, but that's a story for another bedtime.<|endoftext|> TITLE: Solve $x^2 = 2^n + 3^n + 6^n$ over positive integers. QUESTION [6 upvotes]: Solve $x^2 = 2^n + 3^n + 6^n$ over positive integers. I have found the solution $(x, n) = (7, 2)$. I have tried all $n$'s till $6$ and no other seem to be there. Taking $\pmod{10}$, I have been able to prove that if $4|n$ that this proposition does not hold. 
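A quick brute-force search also supports this (just a Python sketch; the cut-off of $500$ is an arbitrary choice), since only $n=2$, $x=7$ shows up:

```python
from math import isqrt

# Look for n with 2^n + 3^n + 6^n a perfect square; prints only "2 7"
for n in range(1, 501):
    s = 2**n + 3**n + 6**n
    x = isqrt(s)
    if x * x == s:
        print(n, x)
```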
Can you give me some hints on how to proceed with this problem? Thanks. REPLY [3 votes]: If $n=2k+1\ge 3$, then $x^2\equiv 3\mod 4$. If $n=2k\ge 4$, then $(x+2^k)(x-2^k)=x^2-2^{2k}=6^{2k}+3^{2k}=3^{2k}(1+2^{2k})$. We have $\gcd (x-2^k, x+2^k)=1$, then $x+2^k\ge 3^{2k} \Rightarrow x-2^k\ge 3^{2k}-2^{2k+1}>2^{2k}+1$.<|endoftext|> TITLE: Prove that the determinant of an invertible matrix $A$ is equal to $±1$ when all of the entries of $A$ and $A^{−1}$ are integers. QUESTION [6 upvotes]: Prove that the determinant of an invertible matrix $A$ is equal to $±1$ when all of the entries of $A$ and $A^{−1}$ are integers. I can explain the answer but would like help translating it into a concise proof with equations. My explanation: The fact that $\det(A) = ±1$ implies that when we perform Gaussian elimination on $A$, we never have to multiply rows by scalars. This means that for each column, the pivot entry is created by the previous column’s row operations and can be brought into place by swapping rows. (And the first column must already contain a $1$). Therefore, we never need to multiply by a non-integral value to perform Gaussian elimination. REPLY [10 votes]: Let $A\in\mathbb Z^{n\times n}$ such that $A^{-1}\in\mathbb Z^{n\times n}$. Note that the determinant of an integer matrix is an integer, so $\det\colon\mathbb Z^{n\times n}\to \mathbb Z$. Now, $1=\det(\mathbb I)=\det(A\cdot A^{-1})=\det(A)\cdot\det(A^{-1})$. Since both $\det(A)$ and $\det(A^{-1})$ are integers, they can only be $1$ or $-1$.<|endoftext|> TITLE: Complexity of Gaussian Process algorithms is $\mathcal{O}(n^3)$ QUESTION [8 upvotes]: It is often quoted that the complexity of Gaussian Process algorithms is $\mathcal{O}(n^3)$ due to the need to invert an $n \times n$ matrix, where $n$ is the number of data points. But as far as I can find online, matrix multiplication is also $\mathcal{O}(n^3)$. So would the complexity of GP algorithms still be $\mathcal{O}(n^3)$ if we no longer needed to invert matrices? Or have I missed something? REPLY [5 votes]: Matrix multiplication is indeed of complexity $\mathcal{O}(n^3)$ (well, for schoolbook implementations). It is cubic when we multiply two matrices together. However, in my personal opinion, it is quite rare to perform this operation in machine learning algorithms, hence perhaps that is why we rarely hear people blaming the complexity of algorithms on matrix multiplication. A more common operation is to perform matrix-vector multiplication, which has complexity $\mathcal{O}(n^2)$. In contrast, computing $A^{-1}$ alone would cost us $\mathcal{O}(n^3)$. If you no longer need to invert the matrix, the final complexity will depend on what you replace this step with. The complexity stays the same if you replace this step with another $\mathcal{O}(n^3)$ procedure. If you replace it with a cheaper procedure, then the complexity reduces.<|endoftext|> TITLE: How to prove that my polynomial has distinct roots? QUESTION [5 upvotes]: I want to prove that the polynomial $$ f_p(x) = x^{2p+2} - abx^{2p} - 2x^{p+1} +1 $$ has distinct roots. Here $a$, $b$ are positive real numbers and $p>0$ is an odd integer. How can I prove that this polynomial has distinct roots for arbitrary $a$, $b$ and $p$? Thanks in advance. REPLY [9 votes]: Let $c$ denote $ab$.
Note that \begin{equation*} f(x) = (x^{p+1}-1)^2 - cx^{2p} \end{equation*} and \begin{equation*} f'(x) = 2(p+1)x^p(x^{p+1}-1) - 2pcx^{2p-1} \end{equation*} Then, $f(x) = 0 \iff$ \begin{equation*} c = \dfrac{(x^{p+1}-1)^2}{x^{2p}} = \varphi(x)\ (\text{say}) \end{equation*} and $f'(x) = 0 \iff$ \begin{align*} c & = \dfrac{(p+1)x^p(x^{p+1}-1)}{px^{2p-1}} = \dfrac{(p+1)x}{p}\dfrac{x^{p+1}-1}{x^p} \iff\\ c & = \left(\dfrac{p+1}{p}\right)x \sqrt{\varphi(x)}. \end{align*} Thus, $f(x)$ and $f'(x)$ vanish for the same $x$ if and only if for some root $x$ of $f(x)$, \begin{align*} c & = \left(\dfrac{p+1}{p}\right)x \sqrt c \iff\\ x & = \dfrac{p\sqrt c}{p+1}. \end{align*} Thus, for every $p$ and $c = ab$, if such an $x$ is a root, it is a multiple root. Now, when does such a root $x$ exist? Let $x = t$ be one such. Then $c = \left(\dfrac{p+1}{p} \right)^2 t^2$. Then, since $f(t) = 0$, we have (from the original form of the equation): \begin{align*} t^{2p+2} - \left(\dfrac{p+1}{p}\right)^2 t^{2p + 2} - 2t^{p+1} + 1 = 0\\ -\dfrac{(2p + 1)}{p^2}t^{2p + 2} - 2t^{p+1} + 1 = 0\\ (2p + 1)t^{2(p + 1)} + 2p^2 t^{p + 1} - p^2 = 0. \end{align*} When treated as a quadratic equation in $t^{p+1}$, the discriminant is \begin{equation*} 4p^4 + 4p^2(2p + 1) = 4p^2(p + 1)^2, \end{equation*} and therefore, the solutions are \begin{equation*} t^{p+1} = -p, \dfrac{p}{2p + 1}. \end{equation*} That is, \begin{equation*} t = (-p)^{\frac 1 {p + 1}}, \left(\dfrac p {2p + 1} \right)^{\frac 1 {p + 1}}. \end{equation*} But substituting the same $c$ in $f'(t) = 0$, we get \begin{align*} & 2(p+1)t^p(t^{p+1}-1)-2p\left(\dfrac{p+1}{p}\right)^2t^{2p+1}=0\\ & p(t^{p+1}-1)-(p+1)t^{p+1}=0 \implies\\ & t = (-p)^{\frac{1}{p+1}} \end{align*} Thus, only the first of the previous two solutions satisfies both equations. Then, $c = \left( \dfrac{p+1}{p} \right)^2 t^2$ gives us \begin{equation*} \boxed{c= \dfrac{(p+1)^2}{(-p)^{\frac{2p}{p+1}}}}. \end{equation*} Thus, the equation has multiple roots exactly when $c$ and $p$ are related as above. Note that for odd values of $p$, $c$ will be a real number if and only if $p$ is of the form $4k + 1$, and then, $c < 0$. If, as stated in the question, $c = ab$ is a positive real number, the equation will have distinct roots. Example For $p = 1$, $f(x) = x^4 - cx^2 - 2x^2 + 1$ and $f'(x) = 4x^3 - 2cx - 4x$. Then, $f(x) = 0$ and $f'(x) = 0$ imply that $c = \left(\dfrac{x^2 - 1}{x}\right)^2$ and $c = 2x\left(\dfrac{x^2-1}{x}\right)$ respectively. Thus, if $x$ is a multiple root, then $x = \dfrac{\sqrt c}{2}$. Taking $t$ to be such a root, so that $c = 4t^2$, and substituting in $f(t) = 0$, we get \begin{align*} t^4 - 4t^4 - 2t^2 + 1 = 0\\ 3t^4 + 2t^2 - 1 = 0. \end{align*} Thus, $t^2 = -1, \dfrac 1 3$, of which only the first one satisfies $f'(t)=0$. Thus, $c = -4$. For $c = 4$, $x^4 + 2x^2 + 1 = 0$ has roots $\pm i, \pm i$.<|endoftext|> TITLE: Is $\mathbb{Q}^2 \cup \mathbb{I}^2 $ disconnected? QUESTION [5 upvotes]: My intuition says that it is but I'm not entirely sure. I thought about using the projection map since it is continuous and surjective and also because I know that the rationals and irrationals are disconnected in the reals. However, there isn't a fixed set to be projected in this case so I figure that not to be route to take in this case. Something is telling me that disconnectedness follows straight from the definition although I'm not sure how. Any help is appreciated. REPLY [5 votes]: $\mathbb{Q}^2 \cup \mathbb{I}^2$ is connected. 
Every line of finite, nonzero rational slope passing through $(0,0)$ is contained in $\mathbb{Q}^2 \cup \mathbb{I}^2$, because if the quotient of two real numbers is a nonzero rational number then either both are rational or both are irrational. The union of these lines is path connected. Thus $\mathbb{Q}^2 \cup \mathbb{I}^2$ contains a dense connected subset, so it is connected.<|endoftext|> TITLE: Proving a result in infinite products: $\prod (1+a_n)$ converges (to a non zero element) iff the series $\sum a_n$ converges QUESTION [6 upvotes]: We assume that $\sum |a_n|^{2}$ converges, then I want to conclude that $\prod (1+a_n)$ converges to a non zero element $\iff$ the series $\sum a_n$ converges. My attempt If $\prod (1+a_n)$ converges to a non zero element then we can write $$\prod (1+a_n)= \prod \exp(\ln(1+a_n))= \exp(\sum \ln (1+a_n)) \le \exp(\sum |\ln (1+a_n)|)$$ Then using that $\sum |a_n|^{2}$ converges we can choose $n$ such that $|a_n|^2 < \frac{1}{4}$ and we know that $|ln(1+z)|\le 2 |z|$ if $|z|< \frac{1}{2}$ using the series expansion, we get that $$\exp(\sum |\ln (1+a_n)|) \le \exp(\sum 2|a_n|)$$ For the converse I want to use the result that says that if $\sum |a_n|$ converges then $\prod (1+a_n)$ converges so since we have that $\sum |a_n|^{2}$ converges then $\sum |a_n|$ converges and $\prod (1+a_n)$ does too. Questions But from here I don't know if I am right, how to conclude and solve the converse part to say that we have a non zero limit, and another thing Can someone provide explicit examples of a sequence of complex numbers $\{a_n\}$ such that $\sum a_n$ converges but $\prod (1+a_n)$ diverges and the other way around (This is $\prod (1+a_n)$ converges but $\sum a_n$ diverges )? Thanks a lot in advance. REPLY [4 votes]: If $\sum |a_n|^2$ converges, then $|a_n| \to 0$ and there is a positive integer $N$ such that $|a_n| < 1/2$ for all $n > N$. We then have $$| \log(1 + a_n) - a_n| = \left|\sum_{k=2}^\infty (-1)^{k+1}\frac{a_n^k}{k} \right| \leqslant \sum_{k=2}^\infty \frac{|a_n|^k}{k} \leqslant \frac1{2}|a_n|^2 \sum_{k=0}^\infty |a_n|^k \\ = \frac{|a_n|^2}{2(1 - |a_n|)} < |a_n|^2 $$ Hence, the series $\sum (\log(1+a_n) - a_n)$ is absolutely convergent. Therefore, $\sum a_n$ and $\sum \log(1+a_n)$ converge or diverge together , which implies that $\sum a_n$ and $\prod (1+a_n)$ converge or diverge together<|endoftext|> TITLE: Let $G$ be a Lie group and $H$ be a closed subgroup of $G$. Show that if $H$ and homogeneous space $\frac{G}{H}$ are connected, then $G$ is connected. QUESTION [5 upvotes]: Let $G$ ge a Lie group and $H$ be a closed subgroup of $G$. Show that if $H$ and homogenuous space $\frac{G}{H}$ are connected, then $G$ is connected. Remark: For this proof I use the following theorem: Theorem: Let $G$ be a Lie group and $M$ smooth manifold, suppose that $G$ acts transitively (smooth) on $M$. Let $M_{\alpha}$ any connected component of $M$ and $G^{0}$ the connected component of identity in $G$, then $G^{0}$ acts transitively (smooth) on $M_{\alpha}$, furthermore, $M_{\alpha} \cong \frac{G^{0}}{G^{0}_{p}}$ for $p\in M_{\alpha}$, where $G^{0}_{p}$ isotropy group of $p$. Considering $M=\frac{G}{H}=M_{\alpha}$ because $\frac{G}{H}$ is connected, we have that $G$ acts transitively on $G/H$ because $G/H$ is a homogeneous space, then, by the previous theorem $G^{0}$ acts transitively on $G/H$. Furthermore, as $H$ is connected closed subgroup of $G$, then $H\subseteq G^{0}$, therefore $H$ acts transitively on $\frac{G}{H}$. 
If you read the proof of this theorem, you will note that the action can be taken to be: $$\begin{array}{rcl} G\times \frac{G}{H}&\rightarrow& \frac{G}{H} \\ (g,g^{*}H) &\mapsto & gg^{*}H \end{array}$$ Then, by the following theorem: Theorem: Let $G$ be a Lie group, $M$ a smooth manifold, and $G\times M \rightarrow M$ a transitive smooth action; then $G_{p}$ (the isotropy group) is a closed subgroup of $G$ and $M \cong \frac{G}{G_{p}}$ for each $p\in M$. we have $\frac{G}{H} \cong \frac{H}{H_{p}}$, where taking $p=eH$ we have: $$H_{p}=\left\{h\in H : heH=eH\right\}=H.$$ Therefore, $\frac{G}{H} \cong \frac{H}{H}$, then $G\cong H$, and as $H$ is connected, then $G$ is connected. My questions: I need to know if my proof has a mistake and if any of you have another proof. REPLY [4 votes]: Your argument has a significant issue in the paragraph after the theorem you quote: $H$ almost never acts transitively on $G/H$ by any action, and the induced action of $H$ on $G/H$ coming from the natural action of $G$ on $G/H$ is transitive iff $G = H$, so is very rare indeed. In other words, by stating $H$ acts transitively on $G/H$, it automatically follows that $G = H$, so $G$ is connected if $H$ is. Here's a sketch of a proof. Consider the projection $\pi:G\rightarrow G/H$ and note that $\pi$ is an open map. Now, assume $H$ is connected, $G/H$ is connected, but $G$ is disconnected. Pick nonempty open sets $U$ and $V$ witnessing the fact that $G$ is disconnected. Because $\pi$ is an open map, $\pi(U)$ and $\pi(V)$ are nonempty open subsets of $G/H$. Since $G/H$ is connected, we must have $\pi(U)\cap \pi(V)\neq \emptyset$ so pick $x\in \pi(U)\cap \pi(V)$. Being in the intersection means that $\pi(y) = \pi(z) = x$ for some $y\in U$ and $z\in V$. Since $\pi(y) = \pi(z)$, $y = zh$ for some $h\in H$. Now, show that the relatively open subsets $U\cap yH$ and $V\cap yH$ of $yH$ disconnect $yH$. Since $yH$ is homeomorphic to $H$, it follows that $H$ is disconnected as well, contradiction.<|endoftext|> TITLE: Closed form of $\int_0^{\pi/2} \frac{\arctan^2 (\sin^2 \theta)}{\sin^2 \theta}\,d\theta$ QUESTION [13 upvotes]: I'm trying to evaluate the closed form of: $$I =\int_0^{\pi/2} \frac{\arctan^2 (\sin^2 \theta)}{\sin^2 \theta}\,d\theta$$ So far I've tried introducing parameters, $\displaystyle I(a,b) = \int_0^{\pi/2} \frac{\arctan (a\sin^2 \theta)\arctan (b\sin^2 \theta)}{\sin^2 \theta}\,d\theta$ but that doesn't lead to an integral I can manage. Expanding the series for $\arctan^2 x$ leads to the sum: $$I = \frac{\pi}{2}\sum\limits_{n=0}^{\infty}\frac{(-1)^n}{(n+1)4^{2n+1}}\binom{4n+2}{2n+1}\left(\sum\limits_{k=0}^{n}\frac{1}{2k+1}\right)$$ and using $\displaystyle \int_0^1 x^{n-\frac{1}{2}}\log (1-x)\,dx = \frac{-2\log 2 + 2\sum\limits_{k=0}^{n}\dfrac{1}{2k+1}}{n+\frac{1}{2}}$ leads to an even uglier integral: $\displaystyle \Im \int_{0}^{1} \frac{1}{1+\sqrt{1-i\sqrt{x}}}\frac{\log(1-x)}{x}\,dx$ among others. I got the non-square version, which seems to have a nice closed form $\displaystyle \int_0^{\pi/2} \frac{\arctan (\sin^2 \theta)}{\sin^2 \theta}\,d\theta = \frac{\pi}{\sqrt{2}\sqrt{1+\sqrt{2}}}$ but the squared version seems difficult. Any help is appreciated. (P.S. - I'm not familiar with Hypergeometric identities, so it would be very helpful if a proof or a reference to a proof was provided, should the need arise to use them.) REPLY [19 votes]: Let $I$ denote the integral.
Then, using integration by parts we can write $$I=\int_{0}^{\pi\over 2}\frac{\left(\tan^{-1}(\sin^2 x) \right)^2}{\sin^2 x}dx = 4\int_0^{\frac{\pi}{2}}\frac{\cos^2 x\tan^{-1}(\sin^2 x)}{1+\sin^4 x}dx$$ The main idea of this evaluation is to use differentiation under the integral sign. Let us introduce the parameter $\alpha$: $$f(\alpha)=\int_0^{\frac{\pi}{2}}\frac{\cos^2 x\tan^{-1}(\alpha \sin^2 x)}{1+\sin^4 x}dx$$ Taking derivative inside the integral, \begin{align*} f'(\alpha) &= \int_0^{\pi\over 2}\frac{\cos^2 x}{1+\sin^4 x}\cdot\frac{\sin^2 x}{1+\alpha^2 \sin^4 x}dx \\ &= \frac{1}{1-\alpha^2}\int_0^{\pi\over 2}\frac{\cos^2 x\sin^2 x}{1+\sin^4 x}dx-\frac{\alpha^2}{1-\alpha^2}\int_0^{\pi\over 2}\frac{\cos^2 x\sin^2 x}{1+\alpha^2 \sin^4 x}dx \quad (\text{Partial Fractions}) \end{align*} Let $g(\alpha)=\int_0^{\pi\over 2}\frac{\cos^2 x\sin^2 x}{1+\alpha^2 \sin^4 x}dx$. Then, $$\frac{I}{4}=f(1)=\int_0^1\frac{g(1)-\alpha^2 g(\alpha)}{1-\alpha^2}d\alpha \tag{1}$$ Evaluation of $g(\alpha)$ \begin{align*} g(\alpha) &= \int_0^{\frac{\pi}{2}}\frac{\cos^2 x\sin^2 x}{1+\alpha^2 \sin^4 x}dx \\ &= \int_0^\infty \frac{t^2}{\left(t^4(1+\alpha^2)+2t^2+1 \right)(t^2+1)}dt \quad (t=\tan x)\\ &= -\frac{1}{\alpha^2}\int_0^\infty\frac{1}{1+t^2}dt+\frac{1}{\alpha^2}\int_0^\infty\frac{1+(1+\alpha^2)t^2}{(1+\alpha^2)t^4+2t^2+1}dt \\ &= -\frac{\pi}{2\alpha^2}+\frac{\pi \sqrt{1+\sqrt{1+\alpha^2}}}{2\sqrt{2}\alpha^2}\tag{2} \end{align*} That last integral was evaluated using an application of the residue theorem. Substitute this into equation (1) to get $$I=\pi\sqrt{2}\int_0^1\frac{\sqrt{1+\sqrt{2}}-\sqrt{1+\sqrt{1+\alpha^2}}}{1- \alpha ^2 }d\alpha \tag{3}$$ Evaluation of integral (3) Luckily, integral (3) has a nice elementary anti-derivative. \begin{align*} &\; \int \frac{\sqrt{1+\sqrt{2}}-\sqrt{1+\sqrt{1+\alpha^2}}}{1-\alpha^2}d\alpha \\ &= \sqrt{1+\sqrt{2}}\tanh^{-1}(\alpha) -\int \frac{\sqrt{1+\sqrt{1+\alpha^2}}}{1-\alpha^2} d\alpha \\ &= \sqrt{1+\sqrt{2}}\tanh^{-1}(\alpha)-\int \frac{t\sqrt{1+t}}{(2-t^2)\sqrt{t^2-1}}dt\quad (t=\sqrt{1+\alpha^2}) \\ &= \sqrt{1+\sqrt{2}}\tanh^{-1}(\alpha)-\int \frac{t}{(2-t^2)\sqrt{t-1}}dt \\ &=\sqrt{1+\sqrt{2}}\tanh^{-1}(\alpha)- 2\int \frac{u^2+1}{2-(u^2+1)^2}du\quad (u=\sqrt{t-1}) \\ &=\sqrt{1+\sqrt{2}}\tanh^{-1}(\alpha)- 2\int \frac{u^2+1}{(\sqrt{2}-1-u^2)(\sqrt{2}+1+u^2)}du \\ &= \sqrt{1+\sqrt{2}}\tanh^{-1}(\alpha)-\int\frac{du}{\sqrt{2}-1-u^2}+\int\frac{du}{\sqrt{2}+1+u^2} \\ &=\sqrt{1+\sqrt{2}}\tanh^{-1}(\alpha)-\sqrt{\sqrt{2}+1}\tanh^{-1}\left( u \sqrt{\sqrt{2}+1}\right)+\sqrt{\sqrt{2}-1}\tan^{-1}\left(u\sqrt{\sqrt{2}-1} \right) +C\\ &= \sqrt{1+\sqrt{2}}\tanh^{-1}(\alpha)-\sqrt{\sqrt{2}+1}\tanh^{-1}\left( \sqrt{\sqrt{1+\alpha^2}-1} \sqrt{\sqrt{2}+1}\right)\\ &\quad +\sqrt{\sqrt{2} -1}\tan^{-1}\left(\sqrt{\sqrt{1+\alpha^2}-1}\sqrt{\sqrt{2}-1} \right) +C \end{align*} Therefore, the integral is \begin{align*} I &= \pi \sqrt{2} \lim_{\alpha\to 1}\Bigg\{ \sqrt{1+\sqrt{2}}\tanh^{-1}(\alpha)-\sqrt{\sqrt{2}+1}\tanh^{-1}\left( \sqrt{\sqrt{1+\alpha^2}-1} \sqrt{\sqrt{2}+1}\right)\\ &\quad +\sqrt{\sqrt{2} -1}\tan^{-1}\left(\sqrt{\sqrt{1+\alpha^2}-1}\sqrt{\sqrt{2}-1} \right) \Bigg\} \\ &= \pi\sqrt{2}\left\{\frac{\sqrt{\sqrt{2}+1}}{2}\log\left(\frac{\sqrt{2}+2}{4} \right) +\sqrt{\sqrt{2}-1}\tan^{-1}\left(\sqrt{2}-1 \right)\right\} \\ &= \pi\sqrt{2}\left\{\frac{\sqrt{\sqrt{2}+1}}{2}\log\left(\frac{\sqrt{2}+2}{4} \right) +\frac{\pi}{8}\sqrt{\sqrt{2}-1}\right\} \\ &= \color{maroon}{\pi \log\left( \frac{2+\sqrt{2}}{4}\right) 
\sqrt{\frac{\sqrt{2}+1}{2}}+\frac{\pi^2}{4}\sqrt{\frac{\sqrt{2}-1}{2}}}\approx 0.576335 \end{align*}<|endoftext|> TITLE: Alternative "Fibonacci" sequences and ratio convergence QUESTION [10 upvotes]: So the well known Fibonacci sequence is $$ F=\{1,1,2,3,5,8,13,21,\ldots\} $$ where $f_1=f_2=1$ and $f_k=f_{k-1}+f_{k-2}$ for $k>2$. The ratio of $f_k:f_{k-1}$ approaches the Golden Ratio the further you go: $$\lim_{k \rightarrow \infty} \frac{f_k}{f_{k-1}} =\phi \approx 1.618$$ Let's define a class of similar sequences $F_n$ where each $f_k$ is the sum of the previous $n$ numbers, $f_k=f_{k-1} + f_{k-2} + \dots + f_{k-n}$ so that the traditional Fibonacci sequence would be $F_2$ but we can talk about alternatives such as $$F_3 = \{1,1,1,3,5,9,17,\dots \}$$ where we initialized the values $f_1$ through $f_3$ to be $1$ and we can show that in this case $$ \lim_{k \rightarrow \infty} \frac{f_k}{f_{k-1}} \approx 1.839286755 $$ The following table gives some convergences for various values of $n$: $$ \begin{matrix} F_n & \text{Converges to} \\ \hline F_2 & \phi \\ F_3 & 1.839286755 \\ F_4 & 1.927561975 \\ F_5 & 1.965948237 \\ F_{6} & 1.983582843 \\ F_{10} & 1.999018626 \end{matrix} $$ Just by inspection, it seems that the convergence values are converging toward $2$ as $n \rightarrow \infty$. So my primary question is: What is the proof that the convergence converges to 2 (assuming it does). REPLY [5 votes]: From the standard theory of linear recurrence,s $F_n$ is the positive real root of the equation $z^n = z^{n-1} + z^{n-2} + \cdots + z + 1$. Multiplying both sides by $1-z$ and rearranging, you get that $F_n$ is the positive real root of $f_n(z) = z^{n+1} - 2z^n + 1 = 0$ that is not $z = 1$. (By Descartes' rule of signs (https://en.wikipedia.org/wiki/Descartes%27_rule_of_signs), there are either 0 or 2 positive real roots of this polynomial; we know there is at least 1 , so there must be 2.) Now, we have $$ f_n(2)= 2^{n+1} - 2 \cdot 2^{n} + 1 = 1 $$ and $$f_n(2 - 1/2^{n-1}) = (2 - 1/2^{n-1})^{n+1} - 2 \cdot (2-1/2^{n-1})^n + 1. $$ We aim to show that $f_n(2-1/2^{n-1}) < 0$; then $f_n$ must have a root between $2 - 1/2^{n-1}$ and $2$. Factoring out powers of 2, we get $$f_n(2 - 1/2^{n-1}) = 2^{n+1} (1-1/2^n)^{n+1} - 2^{n+1} (1-1/2^n)^n + 1.$$ Factoring out $2^{n+1} (1-1/2^n)^n$ from the first two terms gives $$f_n(2 - 1/2^{n-1}) = 2^{n+1} (1-1/2^n)^n (-1/2^n) + 1$$ or, finally, $$f_n(2 - 1/2^{n-1}) = 1-2(1-1/2^n)^n. $$ So the result that $f_n$ has a root in the interval $[2-1/2^{n-1}, 2]$, for $n$ sufficiently large, follows from the fact that $(1-1/2^n)^n > 1/2$ for $n$ sufficiently large. Since $\lim_{n \to \infty} (1-1/2^n)^n = 1$ (use for example L'Hopital's rule) this follows. Thus $F_n \in [2 - 1/2^{n-1}, 2]$ and the desired result follows, for example, from the squeeze theorem.<|endoftext|> TITLE: Integral of $\frac{1}{x\sqrt{x^2-1}}$ QUESTION [7 upvotes]: I am very confused by this. I know that the derivative of $\text{arcsec}(x)$ is $\dfrac{1}{|x|\sqrt{x^2-1}}$. However, if you plug in the integral of $\dfrac{1}{x\sqrt{x^2-1}}$ into wolfram alpha it gives some other answer with an inverse tangent: $$ \int \dfrac{1}{x\sqrt{x^2-1}}dx = - \tan^{-1}\Bigg(\frac{1}{\sqrt{x^2-1}} \Bigg) +C $$ I was just wondering why this is, or why wolfram is giving something totally different. Are they equivalent? REPLY [5 votes]: The two answers are equivalent. Remember that $\sin$ and $\tan$, from a trigonometric point of view, are just ratios of sides of a right angled triangle. 
The $\tfrac{1}{\sqrt{x^2-1}}$ in the wolfram result looks suspiciously like an application of Pythagoras's theorem, doesn't it? You can do the calculation yourself.<|endoftext|> TITLE: Product of all numbers in a given interval $[n,m]$ QUESTION [14 upvotes]: Introduction I was wondering if, you define some interval $[n,m]$ which contains all numbers between $n$ and $m$ (consider either $\mathbb Q$ or $\mathbb R$), what would be the product of all those numbers? One of my instant guesses was that'll tend to infinity. Clearly, it would be neat to assume $a\in[n,m], a\ne 0$ since the product will be $0$ otherwise. In addition, $n,m\ge0$ since multiplying infinitely many negative numbers is undefined. Q: Is it correct to claim that interval type $[a\gt0,b\gt a]$ in its product of all of its infinitely many real numbers, tends to $0$ no matter how big $b$ is, since you can include numbers infinitely close to $0$? A: It seems it can either tend to $0$, $\infty$ or a finite value $x$, which is analyzed below. Interval of type $[a\gt1,b\gt a]$, would tend to infinity in all cases since we never decrease in value no matter what number we pick to multiply next. Symmetrical expression to work multiply the numbers Imagine taking a interval $[n,m]$, and taking the numbers for each $k$ iteration, as averages of previous numbers. Like this: $$n,m$$ $$n,\frac{n+m}{2},m$$ $$n,\frac{3n+m}{4},\frac{n+m}{2},\frac{n+3m}{4},m$$ $$\dots$$ Which would for $k$ iterations as $k$ tends towards $\infty$, eventually contain all numbers in that interval. The product of all those numbers for $k$ iterations can be written as: $$ \prod ^{2^k} _{x=0} \frac{(2^k - x)n+xm}{2^k} $$ If we now take an interval $[n,m]$ as $[2,3]$ for example, we have: For $k=4$ it is: $5.11501\dots \times 10^6$ For $k=8$ it is: $3.24661\dots \times 10^{101}$ For $k=10$ it is: $7.56107\dots \times 10^{404}$ And we can say it clearly tends towards $\infty$ itself. And if we pick an interval containing numbers less than 1, we can see it can tend towards $0$; For example: $[0.1,2],k=14$ then: $1.3926\dots \times 10^{-1062}$ But If we have a greater number, we can still tend towards infinity; For example: $[0.1,3],k=14$ then: $8.6083\dots \times 10^{1535}$ But then If we choose $[0.1,2.3639556]$ then for $k=14$ we will get: $1.000756\dots$ Seems it tends to a bit over $1$? Q: Does that mean we can by choosing the right ratio, or the right values in the interval, make it tend towards any value? A: But then If we increase the $k$, we start tending to either $\infty$ or $0$ very rapidly; for $k=16$ I had to decrease the $m$ a bit to $2.3638689$ for it to be roughly $1.0076\dots$ This brings up the question; Can we actually make the product tend to a finite value for the case where $k$ approaches $\infty$ by choosing the right ratio of $n$ and $m$ in which $(n \lt 1)$ , $(m\gt 1)$? Calculating values for finite cases? 
I've now tried to find the $m$ for $[0.1,m]$ so that the product of the interval ($x$) comes out to $1$: For $k=2$, $m\approx 3.10745$ for it to be $x\approx 1.00001$ For $k=10$, $m= 2.36569706$ for it to be $x= 1$ For $k=14$, $m\approx 2.363955478620$ for it to be $x\approx 1.00000001474698$ For $k=15$, $m\approx 2.363897553300$ for it to be $x\approx 1.00000169779107$ For $k=16$, $m\approx 2.363868593513$ for it to be $x\approx 1.00000000497575$ $$\dots$$ As you can see, for bigger $k$ we get more precision on the digits of $m$. This brings up another question; how would one compute $m$ with the most precision for a chosen $n$, $0<n<1$, so that the product tends to a prescribed finite value? Summarising what the numerics suggest so far: when $m>n>1$ the result tends to $\infty$, and when $0<n<1$ and $m>1$ the result can either tend to $0$, $\infty$ or a finite value $x$, which can be calculated by using a symmetric expression I used in my approach. Better Calculations I use WolframAlpha while increasing $k$, but it only runs up to $20$ since bigger $k$ exceeds the computational time allowed. When applying calculations proposed by String in his answer (using the Newton-Raphson method), and feeding them to WolframAlpha, the most precision we get is (for $x=1$): $n=\frac{1}{2}$, $m\approx 1.6030164899169670747912206652529016572070546450201637$ $n=\frac{1}{3}$, $m\approx 1.8699324270643973162008471123760292568887231725646945$ $n=\frac{1}{4}$, $m\approx 2.0245006935913233776633741232407065465057011574013821$ $n=\frac{1}{5}$, $m\approx 2.1267864647345386651244066521323021196320323179442441$ $$\dots$$ The $m$ seems to tend to $e$ as $n\to 0$. I might get back to this if I get a chance, but I doubt it'll be soon enough. REPLY [4 votes]: There is no such thing as a product of uncountably many numbers (where not most of them are $=1$ or some are $=0$). Compare to sums: Even with countably many summands, we do not speak of sums, but of series (even though we suggestively use the same symbol $\sum$ for both). Those have very different properties from sums: A sum of rationals is always defined, is always a rational, and does not depend on the order of summation. On the other hand, a series of rationals may fail to converge, or converge to an irrational number, or converge to different values if we change the order of the terms. So to repeat: Even a "sum" of countably many numbers is not really a sum. A "sum" of uncountably many (non-zero) summands is even more horrible: For any such beast there must exist some $\epsilon>0$ such that uncountably many terms are $>\epsilon$, or uncountably many terms are $<-\epsilon$; already their contribution is (positive or negative) infinite. It is not an easy task, especially for an arbitrary index set, to assign any meaning to this. The same argument holds for products (albeit with some extra considerations). That being said, you gave a specific definition of an expression $$P(a,b)=\lim_{n\to\infty}P(a,b;2^n),$$ where $$P(a,b;N)=\prod_{j=0}^N\frac{(N-j)a+jb}{N}, $$ which we shall investigate (and better forget that we want to call this "product of all numbers in $[a,b]$"). Consider first the case $0<a<b$; intervals of negative numbers can be related to this case by a symmetry of the form $P(a,b;2^n)=P(-b,-a;2^n)$. (Though for me this fun result is more a hint that the definition of $P$ is not perfect)<|endoftext|> TITLE: How can I know the analytic continuation exists in certain cases? QUESTION [5 upvotes]: As pointed out in Does the analytic continuation always exists? we know it doesn't always exist.
But: take the $\Gamma$ function: the first definition everyone meets is the integral one: $$ z\mapsto\int_{0}^{+\infty}t^{z-1}e^{-t}\,dt $$ which defines a holomorphic function on the half plane $\{\Re z>0\}$. Moreover we immediately get the functional equation: $$ \Gamma(z+1)=z\Gamma(z)\;,\;\;\;\forall\; \Re z>0. $$ This equation is used to extend the function to the whole complex plane (minus the non-positive integers)... but: WHY CAN WE DO THIS?! We know that there is a holomorphic function $\Gamma$ which can be expressed as the integral on that half plane. Why are we allowed to write $$ \Gamma\left(\frac12\right)=-\frac12\Gamma\left(-\frac12\right) $$ for example? LHS is defined, RHS, NOT!!! But where's the problem? Simply let's define $\Gamma\left(-\frac12\right)$ in such a way... but why can we do this? How can I know that this function I named $\Gamma$ which is holomorphic on the above half plane admits an extension? REPLY [3 votes]: The following are citations from the classic Applied and Computational Complex Analysis, Vol. I by P. Henrici. Chapter 3: Analytic Continuation provides a thorough treatment of the theme. Here we look at two aspects, which should help to clarify the situation. First we take a look at when two functions $f(z)$ and $g(z)$ are analytic continuations of each other. Theorem 3.2d: (Fundamental lemma on analytic continuation) Let $Q$ be a set with point of accumulation $q$ and let $R$ and $S$ be two regions such that their intersection contains $Q$ and $q$ and is connected. If $f$ is analytic on $R$, $g$ is analytic on $S$, and $f(z)=g(z)$ for $z\in Q$, then $f(z)=g(z)$ throughout $R\cap S$ and $f$ and $g$ are analytic continuations of each other. We observe that we need at least a set $Q$ with an accumulation point where the analytic functions $f$ and $g$ have to coincide. This set is part of the intersection of the two regions $R$ and $S$ where $f$ and $g$ are defined. Finally we conclude that throughout $R\cap S$ the functions coincide. The second aspect sheds some light on functional relationships in connection with analytic continuation. We can read in Section 3.2.5: Analytic Continuation by Exploiting Functional Relationships Occasionally an analytic continuation of a function $f$ can be obtained by making use of a special functional relationship satisfied by $f$. Naturally this method is restricted to those functions for which such relationships are known. He continues with Example 15; it seems that P. Henrici had precisely a user with OP's question in mind. Example 15: Let the function $g$ possess the following properties: (a) $g$ is analytic in the right half-plane: $R:\Re (z)>0$ (b) For all $z\in R, zg(z)=g(z+1)$ We assert that $g$ can be continued analytically into the whole complex plane with the exception of the points $z=0,-1,-2,\ldots$. We first continue $g$ into $S:\Re (z)>-1,z\neq 0$. For $z\in S$, let $f$ be defined by \begin{align*} f(z):=\frac{1}{z}g(z+1) \end{align*} For $z\in S$, $\Re(z+1)>0$. Hence by virtue of (a) $f$ is analytic on $S$. In view of (b), $f$ agrees with $g$ on the set $R$. Since $S$ is a region, $f$ represents the analytic continuation of $g$ from $R$ to $S$. We note that $f$ satisfies the functional relation $f(z+1)=zf(z)$ on the whole set $S$. Denoting the extended function again by $g$, we may use the same method to continue $g$ analytically into the set $\Re(z)>-2,z\neq 0,-1$, and thus step by step into the region $z\neq 0,-1,-2,\ldots$.
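To see this step-by-step continuation in action, here is a small numerical sketch (my own illustration, with ad hoc function names; the integral is truncated and evaluated by a simple trapezoidal rule, which is accurate enough once $\Re z\ge 1$): we push the argument to the right with $\Gamma(z)=\Gamma(z+1)/z$ until the integral definition applies, and then divide back out.

```python
import numpy as np

def gamma_halfplane(z, t_max=60.0, n=400_000):
    # Integral definition; used here only for Re z >= 1, where the integrand
    # is bounded near t = 0.  Truncating at t_max is harmless because of the
    # e^{-t} decay.
    t = np.linspace(1e-9, t_max, n)
    y = t ** (z - 1) * np.exp(-t)
    h = t[1] - t[0]
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))   # composite trapezoidal rule

def gamma_continued(z):
    # Step-by-step continuation as in Example 15: divide out z, z+1, ...
    # until the argument lies in {Re w >= 1}, then use the integral there.
    z = complex(z)
    factor = 1.0
    while z.real < 1:
        factor *= z          # Gamma(z) = Gamma(z+1) / z
        z += 1
    return gamma_halfplane(z) / factor

print(gamma_continued(0.5))    # ~  1.7725 =  sqrt(pi)
print(gamma_continued(-0.5))   # ~ -3.5449 = -2 sqrt(pi)
print(gamma_continued(-1.5))   # ~  2.3633 =  4 sqrt(pi) / 3
```

Exactly as in the argument above, only the functional equation and the half-plane integral are used; the values at $-\tfrac12$ and $-\tfrac32$ come out as forced by $\Gamma(z+1)=z\Gamma(z)$.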
Of course this example addresses the Gamma Function $\Gamma(z)$ which is treated in detail in chapter 8, vol. 2. He then continues with further methods of analytic continuation, such as the principle of continuous continuation and the symmetry principle.<|endoftext|> TITLE: The classifying space of open covers of a manifold QUESTION [5 upvotes]: Let $M$ be a manifold of dimension $d$ and let $\mathsf{Disk}_{/M}$ be the category of open subsets of $M$ that are diffeomorphic to $\mathbb{R}^d$ with morphisms given by inclusions. Let $\mathrm{B} \mathsf{Disk}_{/M}$ be the classifying space of this category. How do I prove that $\mathrm{B} \mathsf{Disk}_{/M}$ is homotopy equivalent to $M$? Intuitively, $\mathrm{B} \mathsf{Disk}_{/M}$ should be some thickening of $M$. I think this might follow from Quillen's theorem A. In order to apply it, I would first need to exhibit $M$ as the classifying space of some more tractable category. I thought about $M \simeq \left\lvert \operatorname{Sing} M \right\rvert$, but $\operatorname{Sing} M$ is not the nerve of a category. The second barycentric subdivision of $\operatorname{Sing} M$ is a category, but it is pretty complicated. Alternatively, we might try a direct argument. If I'm not mistaken, we have $$\mathrm{B} \mathsf{Disk}_{/M} \simeq \operatorname*{colim}_{[n] \to \mathsf{Disk}_{/M}} \Delta^n.$$ On the other hand, $$M \cong \operatorname*{colim}_{\mathsf{Disk}_{/M}} \mathbb{R}^d \simeq \operatorname*{hocolim}_{[0] \to \mathsf{Disk}_{/M}} \Delta^0.$$ So one could try to prove that the index categories $\mathsf{Disk}_{/M}^{\mathbf{\Delta}}$ and $\mathsf{Disk}_{/M} \cong \mathsf{Disk}_{/M}^{[0]}$ are sufficiently similar. I don't quite know what this means precisely, though. I'll also be happy if someone could provide a reference for the proof - it certainly ought to be well-known. Edit: Writing $M \simeq \operatorname*{hocolim}_{[0] \to \mathsf{Disk}_{/M}} \Delta^0$ is a little duplicitous. In the first colimit for $\mathrm{B}\mathsf{Disk}_{/M}$, the gluing data is specified by the simplicial identities in $\mathbf{\Delta}$, but in the second homotopy colimit, the gluing data is not visible at all in the expression - it depends on how all the various $\mathbb{R}^d$'s intersect - and is obscured by writing $\mathbb{R}^d \simeq \Delta^0$, which superficially cannot intersect meaningfully. In other words, this makes the (possible) comparison less obvious. REPLY [4 votes]: I think I have at least a sketch of an argument. Recall, every manifold admits a locally finite good open cover $\mathcal{U}$. Let us write $\mathcal{U}$ also for the poset category of nonempty finite intersections generated by elements of $\mathcal{U}$ ordered by inclusion. The nerve theorem in this case should apply and we deduce that $\mathrm{B}\mathcal{U} \simeq M$. There is an inclusion of categories $\mathcal{U} \hookrightarrow \mathsf{Disk}_{/M}$. We want to apply Quillen's theorem A to this to show $\mathrm{B}\mathcal{U} \to \mathrm{B}\mathsf{Disk}_{/M}$ is a homotopy equivalence, and then we would be done. To apply the theorem, we need to check that for each $\mathbb{R}^d$ in $\mathsf{Disk}_{/M}$, the comma category $\mathcal{U}_{\mathbb{R}^d/} := \mathcal{U} \times_{\mathsf{Disk}_{/M}} \mathsf{Disk}_{\mathbb{R}^d/ /M}$ is empty or contractible. But $\mathcal{U}_{\mathbb{R}^d/}$ has an initial object, namely the intersection of all open sets in the cover containing $\mathbb{R}^d$, hence it is contractible if not empty. 
This completes the argument.<|endoftext|> TITLE: Subgroups of the Semi-Direct Product $\mathbb{Z}/7\mathbb{Z} \rtimes (\mathbb{Z}/7\mathbb{Z})^{\times}$ QUESTION [5 upvotes]: I want to list all the subgroups of the semi-direct product $\mathbb{Z}/7\mathbb{Z} \rtimes (\mathbb{Z}/7\mathbb{Z})^{\times}$, under the homomorphism $\theta: (\mathbb{Z}/7\mathbb{Z})^{\times} \rightarrow \mathrm{Aut}(\mathbb{Z}/7\mathbb{Z})$, $\theta: a \mapsto \theta_{a}$ where $\theta_{a}(i)=ai$. Until now, I know that the subgroups of $(\mathbb{Z}/7\mathbb{Z})^{\times}$ will be of orders $1, 2, 3$ or $6$ and moreover they will be unique (similarly, the cyclic group with $7$ elements only has the trivial subgroups). I was thinking that the subgroups of the semi-direct product would be semi-direct products of the subgroups of $\mathbb{Z}/7\mathbb{Z}$ and $(\mathbb{Z}/7\mathbb{Z})^{\times}$. Is my claim correct? If not, what would be a way to compute those subgroups? REPLY [2 votes]: Let $H$ be a subgroup of the group $G$ of order $42$ in question. If $7$ divides $|H|$, then, since $G$ contains normal subgroup $P$ of order $7$ which is Sylow-$7$ subgroup, $H$ will contain the normal subgroup $P$. Then $H/P$ is subgroup of $G/P\cong \mathbb{Z}_7^{\times}$; so possible orders of $H/P$ are $1,2,3,6$ and it is unique (in $\mathbb{Z}_7^{\times}$), according to which we will get unique subgroups of $G$ of order $7,14,21,42$. If $7$ does not divide $|H|$ then $|H|$ will be $1,2,3$ or $6$, $H$ will be conjugate to subgroup of $\mathbb{Z}_7^{\times}$; this is because of the following: if $|H|=1$ then it is obvious. If $|H|=6$, then $H$ is complement of a normal Hall subgroup $\mathbb{Z}_7$ and by Schur-Zassenhaus, the complements of a normal Hall subgroup are conjugate. If $|H|=2$ or $3$ then $H$ will be Sylow-$2$ or Sylow-$3$ subgroup, and $\mathbb{Z}_7^{\times}$ contains Sylow-$2$ and Sylow-$3$ subgroup, hence $H$ will be conjugate to a subgroup of $\mathbb{Z}_7^{\times}$.<|endoftext|> TITLE: Why do division algebras always have a number of dimensions which is a power of $2$? QUESTION [7 upvotes]: Why do number systems always have a number of dimensions which is a power of $2$? Real numbers: $2^0 = 1$ dimension. Complex numbers: $2^1 = 2$ dimensions. Quaternions: $2^2 = 4$ dimensions. Octonions: $2^3 = 8$ dimensions. Sedenions: $2^4 = 16$ dimensions. REPLY [12 votes]: They don't. Here is a 9-dimensional associative non-commutative division algebra (over $\Bbb{Q}$): $$ D=\left\{\left(\begin{array}{ccc} x_1&\sigma(x_2)&\sigma^2(x_3)\\ 2x_3&\sigma(x_1)&\sigma^2(x_2)\\ 2x_2&2\sigma(x_3)&\sigma^2(x_1) \end{array}\right)\bigg\vert\ x_1,x_2,x_3\in E\right\}, $$ where $E=\Bbb{Q}(\cos2\pi/7)$ and $\sigma$ is the automorphism defined by $\sigma(\cos2\pi/7)=\cos4\pi/7$. Only over the reals are we so constrained. Topology makes a huge difference. Or, more precisely, the fact that odd degree polynomials with real coefficients always have a real zero.<|endoftext|> TITLE: Imaginary exponent of Fourier transforms QUESTION [5 upvotes]: I'm reading about Fourier transforms. I'm curious why the imaginary unit is needed in exponent. Why not instead define it as: $$ \hat f(t)=\int_xe^{-tx}f(x) \, dx $$ I'm looking at the proofs of some of the basic properties and I don't see why the above definition wouldn't suffice. REPLY [6 votes]: There's a general framework that the Fourier transform fits into using Pontryagin duality and studying the characters of a locally compact abelian group, such as $\mathbb{R}$. 
The characters of $\mathbb{R}$ are exactly the maps $x \mapsto e^{itx}$, which is where the complex factor comes from. This has all sorts of wonderful consequences, like the fact that the Fourier transform is unitary and that we have inversion and Plancherel. Alternatively, for a somewhat silly reason: The integral $\int e^{-tx} f(x) \, dx$ will not converge for too many functions because it is very poorly behaved when $t$ and $x$ have opposite signs. Hence you lose Plancherel (in any sense), together with the fact that $\langle f, g \rangle = \langle \hat{f}, \hat{g} \rangle$, definition on all of $L^2$, and so on. It doesn't even converge for all Schwarz functions, so this is an issue. If you restrict to $x \ge 0$, then you've defined the Laplace transform.<|endoftext|> TITLE: Limit of Euler's Totient function QUESTION [5 upvotes]: Clearly if $p$ is prime, the sequence $\frac{\phi(p)}{p} \rightarrow 1$. In general, however, if $s_n \in S \subseteq \mathbb{N}$, we are not even guaranteed of the existence of: $\displaystyle \lim_{n \to \infty} \frac{\phi(s_n)}{s_n}$. My question is this: Does there exist an infinite sequence $S \subseteq \mathbb{N}$ such that $\displaystyle \lim_{n \to \infty} \frac{\phi(s_n)}{s_n}=1$ and at most finitely many $s_i$ are prime? My intuition tells me no, but I'm not sure. Having the above limit exist for some sequence alone is quite a strong statement, so having it equal one and contain finitely many primes is pretty restrictive. Admittedly, that isn't an argument and I've been having trouble finding one. Any thoughts would be appreciated. EDIT: It just occurred to me that if $s_i=p_i^2$ for prime $p_i$, then the above holds. Reformulating, is there such an $S$ with finitely many prime powers? REPLY [8 votes]: Certainly. Let $\{p_i\}_{i\in\mathbb{N}}$ enumerate the prime numbers, and take $s_n=p_np_{n+1}$. We have $$\lim_{n\rightarrow\infty}\frac{\phi(p_np_{n+1})}{p_np_{n+1}}=\lim_{n\rightarrow\infty}\frac{p_np_{n+1}-p_n-p_{n+1}+1}{p_np_{n+1}}=1.$$<|endoftext|> TITLE: Why is wolfram alpha plotting this differently? QUESTION [6 upvotes]: I have an equation for a cylinder as $x^2+(y-b)^2=a^2$ for some $a$ and $b$. so I just plugged in $b=2$ and $a=1$ and tried to plot it using wolfram alpha, and the 3D plot looked like half a cylinder, like this. Why am I not getting a 3D plot for a cylinder instead? REPLY [3 votes]: Here is a parametrization using Mathematica. As @Patrick Stevens pointed out, you cannot plot this in Wolfram Alpha (yet).<|endoftext|> TITLE: Dense Subspaces: Intersection QUESTION [6 upvotes]: Hilbert Space: $\mathcal{H}$ Dense Subspaces: $$\mathcal{D},\mathcal{D}'\leq\mathcal{H}:\quad\overline{\mathcal{D}},\overline{\mathcal{D}'}=\mathcal{H}\not\Rightarrow\mathcal{D}\cap\mathcal{D}'\neq\{0\}$$ (Counterexample?) REPLY [9 votes]: Let $\mathcal{H}=L^2[0,1]$. Let $\mathcal{D}$ be the subspace of polynomials on $[0,1]$. Let $\mathcal{D}'$ be the subspace generated by $\{ \sin(n\pi x) \}_{n=1}^{\infty}$. Both are dense, and they have nothing in common except the $0$ vector.<|endoftext|> TITLE: Ring with spectrum homeomorphic to a given topological space QUESTION [5 upvotes]: I would like to ask whether given a topological space $X$, we can find a commutative ring with unity $R$ such that $\operatorname{Spec} R$ (together with the Zariski topology) is homeomorphic to $X$. Since the spectrum is a compact space, this is obviously only possible if $X$ is compact. Furthermore, from this answer we obtain that for spectra, $T_1$ already implies Hausdorff. 
How many more restrictions must we impose? Can we give a characterisation of when a topological space is a spectrum of a ring? REPLY [7 votes]: A topological space which is homeomorphic to the spectrum of a ring is called a spectral space. Spectral spaces were characterized intrinsically by Melvin Hochster in his thesis: Theorem (Hochster): Let $X$ be a topological space. Then $X$ is spectral iff it satisfies the following conditions: $X$ is sober. $X$ is compact. If $U,V\subseteq X$ are compact open sets, then $U\cap V$ is also compact. The compact open subsets of $X$ form a basis for the topology of $X$. It is not hard to show that every spectral space satisfies these conditions (note that the compact open subsets of $\operatorname{Spec} A$ are just the finite unions of distinguished open sets). The reverse direction is much more difficult; see Theorem 6 of this paper of Hochster's for details.<|endoftext|> TITLE: Probability of three dice falling in the same quadrant of a box QUESTION [8 upvotes]: This is surely really basic for most people here but it's tripping me up. You get a box and draw lines to split it up into 4 parts. I got asked what the probability was that when rolling three dice, all three dices would end up in the same quadrant. My first take on this was a 1/4 chance of die 1 in quadrant x a 1/4 chance of die 2 in same quadrant x a 1/4 chance of die 3 in same quadrant x => 1/4*1/4*1/4 = 1/64 chance My second take on this was that the first die doesn't matter at all so all that's left is a 1/4 chance of die 2 in same quadrant a 1/4 chance of die 3 in same quadrant => 1/4*1/4 = 1/16 chance But I have been given a solution where all possible combinations are drawn out and as there are 20 possible combinations, the odds are 1/20. What is correct (if any) and why? REPLY [10 votes]: The first take and the second take are the same. The point is , in take 1, what happens is you are inherently fixing the quadrant $x$ in which you want the dice to fall. In truth the dice could fall in any of the four quadrants, but they all have to fall in the same one. Thus, $\frac{1}{64}*4 =\frac{1}{16}$ is the right answer without doubt. As for the third answer, you may tell the solution giver: It's quite simple. At the end of the roll, let $x_i$ be the number of dice present in quadrant $i$, $i=1,2,3,4$. In the end $x_1+x_2+x_3+x_4=3$, and each of these numbers $x_i$ is between $0$ and $3$. How many combinations of $x_i$ are possible?$\binom{6}{3}=20$. But the combinations are not equiprobable : in fact they are distributed multinomially. REPLY [3 votes]: Assuming equal size of each box (more precisely equal probability of ending up in each of the boxes) the solution is 1/16. Your second take is correct. Alternatively take your first take (which gives the solution for a specific box out of the four boxes) and multiply by four.<|endoftext|> TITLE: Covariance of two random vectors QUESTION [12 upvotes]: IF I have $X,Y_1,...,Y_n$ iid then how do I calculate: cov $\left [\begin{pmatrix}X\\.\\.\\.\\X \end{pmatrix}, \begin{pmatrix}Y_1\\.\\.\\.\\Y_n \end{pmatrix}\right]$? REPLY [14 votes]: This is known as the cross-covariance between vectors, and is defined by $$ \text{cov}[\boldsymbol{X},\boldsymbol{Y}] = \text{E}[(\boldsymbol{X}-\boldsymbol{\mu_X})(\boldsymbol{Y}-\boldsymbol{\mu_Y})^\text{T}] $$ where $$ \boldsymbol{\mu_X} = \text{E}[\boldsymbol{X}]\\ \boldsymbol{\mu_Y} = \text{E}[\boldsymbol{Y}] $$ In your case, because all the components of $\boldsymbol{X}$ are the same, things simplify greatly. 
$$ \boldsymbol{X} = X \left[ \begin{array}{c}1\\1\\\vdots\\1\end{array} \right], \;\; \boldsymbol{\mu_X} = \mu_X \left[ \begin{array}{c}1\\1\\\vdots\\1\end{array} \right] $$ Where $\mu_X=\text{E}[X]$. Then $$ \boldsymbol{X}-\boldsymbol{\mu_X} = (X-\mu_X) \left[ \begin{array}{c}1\\1\\\vdots\\1\end{array} \right] $$ Now $$ (\boldsymbol{X}-\boldsymbol{\mu_X})(\boldsymbol{Y}-\boldsymbol{\mu_Y})^\text{T} = (X-\mu_X) \left[ \begin{array}{c}1\\1\\\vdots\\1\end{array} \right]\left[ \begin{array}{cccc}Y_1-\mu_1&Y_2-\mu_2&\cdots&Y_n-\mu_n\end{array} \right] $$ where $\mu_m=\text{E}[Y_m]$ for $m\in[1,2,\cdots,n]$. Expanding out that matrix product we have $$ (\boldsymbol{X}-\boldsymbol{\mu_X})(\boldsymbol{Y}-\boldsymbol{\mu_Y})^\text{T} = (X-\mu_X)\left[ \begin{array}{cccc} Y_1-\mu_1&Y_2-\mu_2&\cdots&Y_n-\mu_n\\ Y_1-\mu_1&Y_2-\mu_2&\cdots&Y_n-\mu_n\\ \vdots&\vdots&\ddots&\vdots\\ Y_1-\mu_1&Y_2-\mu_2&\cdots&Y_n-\mu_n \end{array} \right] $$ Taking that scalar inside the matrix, we see it multiplies each entry in the matrix. Then taking the expectation of the result finally gives $$ \text{E}[(\boldsymbol{X}-\boldsymbol{\mu_X})(\boldsymbol{Y}-\boldsymbol{\mu_Y})^\text{T}] = \left[ \begin{array}{cccc} \text{E}[(X-\mu_X)(Y_1-\mu_1)]&\text{E}[(X-\mu_X)(Y_2-\mu_2)]&\cdots&\text{E}[(X-\mu_X)(Y_n-\mu_n)]\\ \text{E}[(X-\mu_X)(Y_1-\mu_1)]&\text{E}[(X-\mu_X)(Y_2-\mu_2)]&\cdots&\text{E}[(X-\mu_X)(Y_n-\mu_n)]\\ \vdots&\vdots&\ddots&\vdots\\ \text{E}[(X-\mu_X)(Y_1-\mu_1)]&\text{E}[(X-\mu_X)(Y_2-\mu_2)]&\cdots&\text{E}[(X-\mu_X)(Y_n-\mu_n)] \end{array} \right] $$ $$ = \left[ \begin{array}{cccc} \text{cov}(X,Y_1)&\text{cov}(X,Y_2)&\cdots&\text{cov}(X,Y_n)\\ \text{cov}(X,Y_1)&\text{cov}(X,Y_2)&\cdots&\text{cov}(X,Y_n)\\ \vdots&\vdots&\ddots&\vdots\\ \text{cov}(X,Y_1)&\text{cov}(X,Y_2)&\cdots&\text{cov}(X,Y_n) \end{array} \right] $$ Now we are at the answer: you specified all the variables to be identically distributed and independent. Independent variables have covariance $0$. SO, you get the all zeros matrix for your answer $$ \text{cov}(\boldsymbol{X},\boldsymbol{Y})=\text{E}[(\boldsymbol{X}-\boldsymbol{\mu_X})(\boldsymbol{Y}-\boldsymbol{\mu_Y})^\text{T}] = \left[ \begin{array}{cccc} 0&0&\cdots&0\\ 0&0&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&0\\ \end{array} \right] $$<|endoftext|> TITLE: A basic inequality: $a-b\leq |a|+|b|$ QUESTION [8 upvotes]: Do we have the following inequality: $$a-b\leq |a|+|b|$$ I have considered $4$ cases: $a\leq0,b\leq0$ $a\leq0,b>0$ $a>0,b\leq0$ $a>0,b>0$ and see this inequality is true. However I want to make sure about that. REPLY [4 votes]: Right. Observe that $a\leq |a|$ and that $(-b)\leq |(-b)|=|b|.$ Adding, we have $a-b=a+(-b)\leq |a|+|(-b)|=|a|+|b|.$<|endoftext|> TITLE: Personal notebooks of a Fields medalist QUESTION [6 upvotes]: I once read that some Fields medalist published all of his personal handwritten notebooks, and that they are freely available somewhere on the net. I can't remember whose mathematician it was, so I can't find the notes anymore. Do someone knows who it is and where to find the notes? I remember browsing the notes, I find it quite fascinating that we can follow a little bit his flow of thought. I think it is of great pedagogical value. REPLY [6 votes]: Daniel Quillen (Fields Medal 1978). Here is the link to his notebooks.<|endoftext|> TITLE: How to solve $A\tan\theta-B\sin\theta=1$ QUESTION [6 upvotes]: I was wondering if it is possible to solve $$A\tan\theta-B\sin\theta=1$$ for $\theta$, where $A>0,B>0$ are real constants. 
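To make the numerical route concrete, here is the kind of thing I have in mind (a minimal sketch using SciPy's bracketing root finder; the left-hand side tends to $0$ as $\theta\to0^+$ and to $+\infty$ as $\theta\to\pi/2^-$, so $A\tan\theta-B\sin\theta-1$ always changes sign on $(0,\pi/2)$ and a root is bracketed there):

```python
import numpy as np
from scipy.optimize import brentq

def solve_theta(A, B):
    # A*tan(theta) - B*sin(theta) - 1 is about -1 near theta = 0 and tends to
    # +infinity as theta -> pi/2, so Brent's method always finds a sign change.
    f = lambda theta: A * np.tan(theta) - B * np.sin(theta) - 1.0
    return brentq(f, 1e-12, np.pi / 2 - 1e-9)

theta = solve_theta(A=2.0, B=3.0)
print(theta, 2.0 * np.tan(theta) - 3.0 * np.sin(theta))   # second number ~ 1
```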
For sure this can be straightforwardly implemented numerically, but maybe an alternative exists :)... REPLY [2 votes]: Like any other quartic equation, there is a classical way to solve it explicitly in terms of radicals; but this is generally very messy. It is practically much easier, and no less accurate in the final analysis, to solve it by a numerical method such as Newton--Raphson.<|endoftext|> TITLE: Representable homology classes on smooth manifolds QUESTION [5 upvotes]: Let $X$ be a closed (compact without boundary) smooth manifold. We can consider its singular homology $H_*(X,\mathbb{Z})$. Let $H_{k}(X,\mathbb{Z})$ be the $k$-th singular homology group of $X$ and let us consider $[\alpha]\in H_{k}(X,\mathbb{Z})$. I have seen in various places that it is sometimes possible to "represent" $[\alpha]$ as a smooth $k$-dimensional submanifold of $X$. How does this correspondence between elements in $H_{k}(X,\mathbb{Z})$ and $k$-dimensional submanifolds of $X$ exactly works? When exactly elements in $H_{k}(X,\mathbb{Z})$ can be represented by $k$-dimensional submanifolds? How is this related to intersection theory, the cup product in cohomology, Poincare duality and the intersection form of $X$ (when $X$ is even-dimensional)? Does this correspondence work for other homology theories? Thanks. REPLY [7 votes]: Fundamental Classes Let $Y$ be a connected, closed, orientable $k$-dimensional manifold, then $H_k(Y; \mathbb{Z}) \cong \mathbb{Z}$. The choice of a generator of $H_k(Y; \mathbb{Z})$ is equivalent to a choice of orientation. One often denotes the generator by $[Y]$; we call this a fundamental class of $Y$. If we consider $Y$ with it's other orientation, the corresponding fundamental class is $-[Y]$. If we drop the orientability hypothesis, $H_k(Y; \mathbb{Z}) = 0$ so the above construction fails. However, $H_k(Y; \mathbb{Z}_2) \cong \mathbb{Z}_2$, so we define the $\mathbb{Z}_2$-fundamental class to be the generator of $H_k(Y; \mathbb{Z}_2)$. We can also define the fundamental class if $Y$ is only compact instead of closed (i.e. $Y$ has boundary). In that case, we use the fact that $H_k(Y, \partial Y; \mathbb{Z}) \cong \mathbb{Z}$ in the orientable case and $H_k(Y, \partial Y; \mathbb{Z}_2) \cong \mathbb{Z}_2$ in the non-orientable case to define fundamental classes as relative homology classes. All of the above is discussed in detail on the Manifold Atlas Project's fundamental class page. Submanifolds and Homology Now suppose $Y$ is a connected, closed, orientable $k$-dimensional submanifold of $X$ and let $i : Y \to X$ denote the inclusion, then $i_* : H_k(Y; \mathbb{Z}) \to H_k(X; \mathbb{Z})$, in particular $i_*[Y] \in H_k(X; \mathbb{Z})$. Note, the element $i_*[Y]$ might be zero. For example, $S^1$ can be realised as a submanifold of $S^2$ via the equator, but $i_*[S^1] \in H_1(S^2; \mathbb{Z}) = 0$. We can create homology classes on $X$ from different types of submanifolds as above using the different notions of fundamental class. If $Y$ is a non-orientable submanifold of $X$, using the $\mathbb{Z}_2$-fundamental class, $i_*[Y] \in H_k(X; \mathbb{Z}_2)$. If $X$ is a manifold with boundary and $Y$ is an orientable submanifold with boundary (so that $\partial Y \subseteq \partial X$), $i_*[Y, \partial Y] \in H_k(X, \partial X; \mathbb{Z})$. If $X$ is a manifold with boundary and $Y$ is a non-orientable submanifold with boundary, using the $\mathbb{Z}_2$-fundamental class, $i_*[Y, \partial Y] \in H_k(X, \partial X; \mathbb{Z}_2)$. 
Realisable Classes and The Steenrod Problem An element $\alpha \in H_k(X; \mathbb{Z})$ is said to be realisable if there is a $k$-dimensional connected, closed, orientable $k$-dimensional submanifold $Y$ such that $\alpha = i_*[Y]$. The Steenrod Problem was whether every class is realisable. Thom showed in $1954$ that this is not true in general. However, it is true if $0 \leq k \leq 6$. What is true however is that for any class $\alpha$, there is an integer $N$ such that $N\alpha$ is realisable. We could ask the analogue of Steenrod's Problem for $\mathbb{Z}_2$ coefficients: is every class in $H_k(X; \mathbb{Z}_2)$ realisable? The answer is yes! I don't know about the analogue of Steenrod's Problem for manifolds with boundary.<|endoftext|> TITLE: Show that a convex polygon is contained within the largest circle determined by three consecutive vertices QUESTION [10 upvotes]: Given a convex $n$-gon. The circumcircle is constructed for every triple of consecutive vertices of the polygon. We get the $n$ circles. Select the circle with the largest radius. Prove that the circle contains the polygon. My work so far: $n=3 -$ triangle - obviously. $n=4 -$ If $\angle B = \max \left\{A,B,C,D \right\}$ then $ABCD \in \omega_{ABC}$ $n\ge 5$. I need help here. REPLY [3 votes]: The crux of the argument is contained in the following proposition. Proposition 1 Let $p$ and $q$ be points in the plane, $H$ be one half-space separated by the line through $p$ and $q$, and $D$ be a disk with $p$ and $q$ on its boundary and centered in $H$. If point $r\in H$ forms a circumcircle with $p$ and $q$ that has a smaller radius than that of $D$, then $r\in D$. See the configuration in this proposition in the figure below. Any circle through $p$ and $q$ has a center on the line orthogonal to the line through $p$ and $q$ and passing halfway between those points. For that circle to have a radius smaller than $D$ and be centered in $H$, the center must be on dotted segment between $c$ (the center of $D$) and the midpoint between $p$ and $q$. And we can see that the portion of any such circle that is in $H$ (the dotted circle in the figure) is contained in $D$. Next, we establish that ``adding vertices'' to a polygon only increases the size of the largest circumradius formed by consecutive vertices. Proposition 2 Let $P$ be a polygon with vertices in clockwise order $v_0$, ..., $v_{n-1}$ and let $P'$ be a polygon with vertices $v_0$, ..., $v_{n-1}$, $v'$ (where $v'$ is not necessarily listed in clockwise order). The largest circumradius of three consecutive vertices of $P'$ is at least as large as that for $P$. Assume the largest circumradius for $P$ is formed by vertices $v_{i-1}$, $v_i$, $v_{i+1}$. If $v'$ does not occur between vertices $v_{i-1}$ and $v_{i+1}$, then those vertices remain consecutive and thus the largest circumradius associated with $P'$ is at least as large as $P$. So we focus on the case that $v'$ causes $v_{i-1}$, $v_i$ and $v_{i+1}$ to no longer be consecutive. Without loss of generality, suppose $v'$ lies between $v_i$ and $v_{i+1}$. Now there are two cases each providing a larger circumradius of a new set of consecutive vertices: If $v'$ lies inside the circumcircle of $v_{i-1}$, $v_i$, $v_{i+1}$, then the circumradius of $v_i$, $v'$, $v_{i+1}$ is larger. If $v'$ lies outside the circumcircle of $v_{i-1}$, $v_i$, $v_{i+1}$, then the circumradius of $v_{i-1}$, $v_i$, $v'$ is larger. Now we can proceed to the main result. 
Let $P$ be a convex polygon with vertices $v_0$, $v_1$, ..., $v_n$ ordered in clockwise order. Without loss of generality, assume that $v_0$, $v_1$, $v_2$ form the largest circumcircle of any three consecutive vertices of the polygon. The proof follows by induction, evaluating the sequence of polygons $P_i$ with vertices $v_0$, ..., $v_i$. The base case holds trivially for a single triangle $P_2$. The inductive step involves adding a single new vertex $v_i$ to the polygon and checking that vertex lies in the circumcircle of $v_0$, $v_1$, $v_2$. Proposition 2 ensures that $v_0$, $v_1$, $v_2$ provides the largest circumradius for each polygon, i.e., the largest circumradius for $P_i$ cannot be realized by a different set of consecutive vertices. We complete the result by applying the Proposition 1 with $p=v_0$, $q=v_1$, $r=v_i$, and $D$ being the circumcircle of $v_0$, $v_1$, $v_2$. Convexity of the polygon ensures the assumptions of the proposition. Specifically, $v_2$ and $r$ must lie in the same half-space and the center of $D$ lies in the same half-space as $v_2$.<|endoftext|> TITLE: Regarding the axiom $2^\kappa = 2^{\kappa^+}$ for regular cardinals $\kappa$, and its relationship to a couple of other axioms. QUESTION [6 upvotes]: (Take ZFC as background.) The following two statements both follow from GCH: ICF. Injective continuum function. The continuum function (i.e. $\kappa \mapsto 2^\kappa)$ is injective. NJA. No jumping axiom. For all infinite cardinals $\kappa$ and $\nu$, we have: $$\kappa < 2^\nu \rightarrow 2^\kappa \leq 2^\nu.$$ To see that NJA follows from GCH, assume $\kappa < 2^\nu$. Hence $\kappa < \nu^+$. So $\kappa \leq \nu$. Thus $2^\kappa \leq 2^\nu$. In fact, the converse holds too; we can actually prove GCH from the above two axioms. To see this, notice that GCH can be interpreted as saying that from $\kappa < 2^\nu$, we can derive $\kappa \leq \nu$. So assume $\kappa < 2^\nu$. Then from NJA, we deduce $2^\kappa \leq 2^\nu$. So from ICF, it follows that $\kappa \leq \nu$, as required. Another interesting axiom that seems to be related to NJA is: BA. Beth axiom. For all infinite cardinals $\kappa$, there exists an ordinal $\alpha$ such that $2^\kappa = \beth_\alpha$. I haven't been able to puzzle out whether or not NJA and BA imply each other, so: Question 0. Is there a relationship between NJA and BA? Okay. My motivation for factoring GCH as ICF+NJA is that I'm interested in axioms for set theory that determine the structure of the cardinal numbers (like GCH), but which don't trivialize the cardinal characteristics of the continuum (unlike GCH). One approach to find such axioms is to look for statements that prove the truth of NJA, but which don't prove ICF. An obvious first axiom to consider is: DCF. Degenerate continuum function. For all infinite cardinal numbers $\kappa$, the following hold. If $\kappa$ is regular, then $2^\kappa = 2^{\kappa^+}$. $2^\kappa$ is regular. This clearly refutes ICF. For example, under DCF, we can prove that $$2^{\aleph_0} = 2^{\aleph_1} = 2^{\aleph_2} = \cdots$$ and even that $$2^{\aleph_0} = 2^{\aleph_\omega}.$$ But, I haven't been able to puzzle out whether DCF implies either and/or both of NJA or BA. Question 1. Does DCF imply either and/or both of NJA or BA? REPLY [6 votes]: First of all, NJA implies BA. To see this, take a $\kappa$ and consider whether it is a beth number or not. If $\kappa=\beth_\alpha$ then of course $2^\kappa=\beth_{\alpha+1}$. 
Otherwise we can fit $\kappa$ between two beth numbers, $\beth_\alpha<\kappa<\beth_{\alpha+1}$, whence NJA implies that $2^\kappa=\beth_{\alpha+1}$. On the other hand, the implication is not reversible: simply look at a model where $2^{\omega}=\omega_2, 2^{\omega_1}=2^{\omega_2}=\omega_3$ and GCH holds above. Here BA holds, since $2^{\omega_1}=\beth_2$, but NJA clearly fails at $\omega_1<2^{\omega}$. For your second question, DCF does not imply BA (and therefore also not NJA). To get this, start with a model where $2^{\kappa}=\kappa^{+\omega+1}$ for every regular $\kappa$ (we can get this by Easton's theorem). This model is easily seen to satisfy DCF. Now force over this model to get $2^\omega=\aleph_{\omega\cdot 2+1}$ and $2^{\aleph_{\omega+1}}=\aleph_{\omega\cdot 2+2}$. In the end we stil have DCF (we only changed exponentiation on two blocks of $\omega$ many cardinals) as well as $\beth_1=\aleph_{\omega\cdot 2+1}$ and $\beth_2=\aleph_{\omega\cdot 3+1}$. But if we look at $\kappa=\aleph_{\omega+1}$ the cardinal $2^\kappa=\aleph_{\omega\cdot 2+2}$ is not a beth number, so BA fails.<|endoftext|> TITLE: Units of $\mathbb Z[X]/(X^n+1)$? QUESTION [5 upvotes]: What are the units of the cyclotomic ring $\mathbb Z[X]/(X^n+1)$, with $n$ being a power of $2$? I am starting to think that the set $\{\pm X^k,k=0,\dots,n-1\}$ contains all units, is that so ? REPLY [3 votes]: It is not true. Dirichlet's theorem says that the unit group of a number field $K$ is finitely generated of rank $r_1+r_2-1$ where $r_1$ and $r_2$ are the number of real (resp. half the number of complex) embeddings of $K$. The field $\mathbb Q(\zeta)$, $\zeta^n+1=0$, with $n>1$ a power of $2$, has no real embeddings and it has exactly $n$ complex embeddings. Thus the group of units of its ring of integers (which is $\mathbb Z[\zeta]$) has rank $n/2-1$. In particular, it's not a finite group for $n>2$. What is true is that you described entirely the torsion in the group of units. But there are lots of other units generally.<|endoftext|> TITLE: Betti numbers of complex "sphere" QUESTION [7 upvotes]: Let $X$ be the set of solutions to $x_1^2+\ldots+x_n^2=1$ in $\mathbb{C}^n$. This has real dimension $2(n-1)$, but since $X$ is an affine algebraic variety, the only possible non-zero topological Betti numbers of $X$ are $b_0,\ldots,b_{n-1}$. What is the top Betti number $b_{n-1}$? The real sphere $S^{n-1}$ embeds in $X$, determining a class in $H_{n-1}(X,\mathbb{Q})$. I wonder if this class actually spans the homology group. I have seem some results on Betti number for projective hypersurfaces, but not for affine ones. REPLY [3 votes]: Here is the answer I wanted to write, but since Qiaochu posted his first I'll make mine Community Wiki and use his notation. Recall that the tangent bundle to the sphere $S^{n-1}\subset \mathbb R^n$ is the submanifold $TS^{n-1}\subset S^{n-1}\times \mathbb R^n$ consisting of pairs $(u,v) \in S^{n-1}\times \mathbb R^n$ with $u\bullet v=0$ (usual scalar product in $\mathbb R^n$). The amazing result is that we have a diffeomorphism $$X\stackrel {\sim}{\to} TS^{n-1}: x=a+ib\mapsto (u=\frac {a}{||a||},v=b)$$ whose inverse diffeomorphism is $TS^{n-1}\stackrel {\sim}{\to} X:(u,v)\mapsto x+iy=(\sqrt {1+||v||^2}){u}+iv$. 
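(If you want to convince yourself that these formulas do what I claim, here is a quick numerical check on random points — just an illustration of mine, not needed for the argument: it verifies that $(u,v)\mapsto x$ lands on the complex quadric $\sum_j x_j^2=1$ and that the two maps invert each other.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# A random point of T S^{n-1}: a unit vector u and a tangent vector v with u.v = 0.
u = rng.normal(size=n)
u /= np.linalg.norm(u)
v = rng.normal(size=n)
v -= (v @ u) * u

# (u, v) |-> x = a + i b, which should satisfy sum_j x_j^2 = 1.
x = np.sqrt(1 + v @ v) * u + 1j * v
print(np.allclose(np.sum(x ** 2), 1.0))                      # True

# x = a + i b |-> (a/||a||, b) and back again: we should recover x.
a, b = x.real, x.imag
u2, v2 = a / np.linalg.norm(a), b
print(np.allclose(np.sqrt(1 + v2 @ v2) * u2 + 1j * v2, x))   # True
```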
Like all vector bundles $TS^{n-1}$ is homotopic to its base space $S^{n-1}$, and thus $X$ is homotopic to $S^{n-1}$ too, so that finally $b_{n-1}(X)=1$ for $n\geq2$ and $b_0(X)=2$ for $n=1$.<|endoftext|> TITLE: How to solve the integral $\int\limits_0^a {\frac{{\sqrt {{a^2} - {x^2}} }}{{b - x}}} \mathop{\mathrm{d}x}\\$? QUESTION [5 upvotes]: I have seen this integral: $$\int\limits_0^a {\frac{{\sqrt {{a^2} - {x^2}} }}{{b - x}}} \mathop{\mathrm{d}x}\\$$ In this integral: $a$ and $b$ are constants. I have try with two ways, but failed: $u = \sqrt {{a^2} - {x^2}}$ $x = a\sin t$ It seems that they are not true-way to solve this integral. Any suggestion for solving this integral? Now, I am not a student; so, this is not my exercise. Sorry about my English. If my question is not clear, please comment below this question. REPLY [2 votes]: First off I would let $x=-a\cos\theta$. Then $$\begin{align}\int_0^a\frac{\sqrt{a^2-x^2}}{b-x}dx&=\int_{\frac{\pi}2}^{\pi}\frac{a^2\sin^2\theta}{b+a\cos\theta}d\theta\\ &=\int_{\frac{\pi}2}^{\pi}\frac{a^2(1-\cos^2\theta)}{b+a\cos\theta}d\theta\\ &=\int_{\frac{\pi}2}^{\pi}\left(-a\cos\theta+b+\frac{a^2-b^2}{b+a\cos\theta}\right)d\theta\end{align}$$ Now that last term may look a little intimidating, but I just did it this morning, so we get$$\begin{align}\int_0^a\frac{\sqrt{a^2-x^2}}{b-x}dx&=\left[-a\sin\theta+b\theta+\frac{a^2-b^2}{\sqrt{b^2-a^2}}\cos^{-1}\left(\frac{b\cos\theta+a}{b+a\cos\theta}\right)\right]_{\frac{\pi}2}^{\pi}\\ &=a+b\frac{\pi}2+\sqrt{b^2-a^2}\left(\cos^{-1}\left(\frac ab\right)-\pi\right)\end{align}$$<|endoftext|> TITLE: Topological book which covers applications in the Medical Field (Medicine/Bacteria/Cancer/Viruses) QUESTION [6 upvotes]: To get to the point I'm looking for a book on Topology that covers specifically its uses in the medical field. I've seen a lot of book requests in Topology, but they are all about learning topology or engineering based applications of topology, which are great topics, but not what I'm looking for. I would love a book that specifically deals with Cancer, Medicine, Viruses, and/or Bacteria. Really anything that has a medical aspect to it. I've taken one topology course so even if there is no book that has what I'm looking for if you could offer a book that covers concepts/theorems that are used in applications in the medical field that would be appreciated too. My professor recommended me Topology Now and I believe Topology and it's Applications, but they don't seem to cover anything medical related. I have not got my hands on a physical copy of either book so I'm just basing that previous sentence on synopses that I've read online. Thanks for any thoughts, ideas, or recommendations you may have to offer! I'm also heading to the library later today to look, so I might have some specific titles to ask for recommendations of too. Thanks. REPLY [2 votes]: If you've already studied some topology, you might consider Computational Topology: An Introduction by Edelsbrunner & Harer. It will help with understanding topological data analysis and has a few biological applications in its final chapter. See also Simplicial Models and Topological Inference in Biological Systems by Nanda & Sazdanovic. Here is the abstract from the latter: This article is a user’s guide to algebraic topological methods for data analysis with a particular focus on applications to datasets arising in experimental biology. 
We begin with the combinatorics and geometry of simplicial complexes and outline the standard techniques for imposing filtered simplicial structures on a general class of datasets. From these structures, one computes topological statistics of the original data via the algebraic theory of (persistent) homology. These statistics are shown to be computable and robust measures of the shape underlying a dataset. Finally, we showcase some appealing instances of topology-driven inference in biological settings – from the detection of a new type of breast cancer to the analysis of various neural structures. For applications to neuroscience, see Two’s company, three (or more) is a simplex: Algebraic-topological tools for understanding higher-order structure in neural data by Giusti, Ghrist and Bassett. For the basics of knot theory, try either The Knot Book by Adams or the brief introduction in Chapter 12 of Introduction to Topology: Pure and Applied by Adams & Franzosa. A useful resource is http://appliedtopology.org/. In fact, the latest entry at the time of writing is Funding opportunities for interdisciplinary research from the Center for Topology of Cancer Evolution and Heterogeneity.<|endoftext|> TITLE: Probability Problem Book: Graduate Level QUESTION [5 upvotes]: I'm looking for a problem book in the style of Zhang's Linear Algebra: Challenging Problems for Students to prepare for a probability qualifying exam. In particular, the desirable source must have: Graudate level difficulty (for a "just OK" school). Complete, detailed solutions to problems. Comprehensive coverage of probability fundamentals. To clarify "comprehensive coverage," here is a subset of the topics: Measure-theoretic foundations (Lebesgue construction, Dynkin/$\pi$-$\lambda$, DC/MCT/FL, Fubini/Tonelli, Radon-Nikodym, etc.) Conditional probabilities/expectation Convergences (weak, strong, probability, distribution, Borel-Cantelli, etc.) Laws of large numbers (weak, strong) Central limit theorems Discrete time stochastic processes (martingales, branching processes, Doob decomposition, Markov, recurrence, etc.) Continuous time stochastic processes Brownian Motion (Wiener measure [which always makes me giggle], tightness, strong Markov, etc.) For greater detail, the following textbooks were used for the associated classes: Folland Chapters 1-3 Durrett Chapters 1-6 Karatzas and Shreve Chapters 1-2 I care more about being adept at solving probability problems (which I currently am not) than about passing this particular exam, so good references that do not necessarily cover some of this material are still desirable, where "good" means the inclusion of clear, well-written solutions to provide feedback. REPLY [2 votes]: "One Thousand Exercises in Probability" by Grimmett and Stirzaker is a possible suggestion, though INMHO not as promising as it sounds. Some pros and some cons: pros: covers many areas and $1000$ exercises are a $1000$ exercises! Graduate level: check $\checkmark$ cons: does not have the, let's say standard, exercises in each subject, so INMHO it serves better as a complement to a textbook that covers - through examples or exercises - the basics (or more) in every subject. (My) conclusion: It will certainly help you but keep looking around. For books with clear, well-written solutions, you could also check Hoel Port Stone "Introduction to Probability Theory" and Bertsekas, Tsitsiklis "Introduction to Probability 2nd Edition". The solutions may also be found online (for sure for the second one). 
But these cover more basic subjects.<|endoftext|> TITLE: The Gamma function has no zeros QUESTION [5 upvotes]: How can I prove the Gamma function has no zeros in its holomorphy domain $\Bbb C\setminus\Bbb Z_{\le0}$ using only its integral definition $\Gamma(z)=\int_0^{+\infty}t^{z-1}e^{-t}\,dt$ valid when $\Re z>0$ and the functional equation $\Gamma(z+1)=z\Gamma(z)$? From the integral definition we can find easily the holomorphic extension; thus it would be enough to prove that $\Gamma\neq0$ in $\{\Re z>0\}$, thus using the integral form. But I can't prove this either. Can someone help me? EDIT: This question is not a duplicate because all the solutions given use more "advanced" tools. Here I'm asking to prove that Gamma has no zeros using ONLY its integral representation. REPLY [3 votes]: Assume that $\Gamma(\alpha) = 0$ for some $\Re(\alpha) > 0$. Then for any $s \geq 0$, the substitution $ t = (1+s)x$ gives $$ 0 = \frac{\Gamma(\alpha)}{(1+s)^{\alpha}} = \int_{0}^{\infty} x^{\alpha-1} e^{-x} e^{-sx} \, \mathrm{d}x. \tag{1}$$ This is already enough to give a contradiction since the right-hand side is the Laplace transform of $x \mapsto x^{\alpha-1} e^{-x}$ and hence cannot be identically zero. If we avoid the use of the Laplace transform, we can still derive a contradiction. Let $0 < \sigma < \Re(\alpha)$. For this $\sigma$, we know that $\Gamma(\sigma) > 0$. Then multiply both sides of $\text{(1)}$ by $s^{\sigma-1}/\Gamma(\sigma)$ and integrate w.r.t. $s$ on $[0, \infty)$. By Fubini's theorem, this yields \begin{align*} 0 & = \int_{0}^{\infty} x^{\alpha-1} e^{-x} \left( \frac{1}{\Gamma(\sigma)} \int_{0}^{\infty} s^{\sigma-1} e^{-sx} \, \mathrm{d}s \right) \, \mathrm{d}x \\ &= \int_{0}^{\infty} x^{\alpha-\sigma-1} e^{-x} \, \mathrm{d}x \\ &= \Gamma(\alpha-\sigma). \end{align*} This shows that $\Gamma(z) = 0$ along the line segment joining $i\Im(\alpha)$ and $\alpha$. Then the identity theorem tells us that $\Gamma(z)$ is identically zero for $\Re(z) > 0$, which is impossible.<|endoftext|> TITLE: Is the category of categories a topos? QUESTION [8 upvotes]: The 2-category of small categories is the archetypal example of a 2-topos (whatever that is). But what about the 1-category of small categories? Is it a topos? A 2-topos? Something else? REPLY [5 votes]: I thought I would add here an elementary proof that $\mathbf{Cat}$ has no subobject classifier. (This would be as a complement to the other answers which redirect to an MO post, which further redirect to literature references.) (I'm also posting this as a community wiki since it has minimal changes from my answer to the corresponding question for the category of groupoids, so it didn't seem appropriate to get additional reputation points from this copy.) So, first note that the object functor $\operatorname{Ob} : \mathbf{Cat} \to \mathbf{Set}$ is representable by the category $1$ with one object and one (identity) morphism. Similarly, the arrows functor $\operatorname{Arr} : \mathbf{Cat} \to \mathbf{Set}$ is representable by the category $A$ with two objects $s,t$, and three morphisms (one identity morphism $s \to s$, one morphism $s \to t$, one identity morphism $t \to t$) with the unique composition. In particular, given any monomorphism $F : C \to D$ in $\mathbf{Cat}$, this implies that $\operatorname{Ob}(F)$ and $\operatorname{Arr}(F)$ are injective functions, from which it is straightforward to conclude that $F$ is a composition of an isomorphism from $C$ to a subcategory of $D$ with the inclusion functor into $D$.
This implies that $\mathbf{Cat}$ is well-powered, with $\operatorname{Sub}(C)$ being the set of subcategories of $C$. Now, suppose that $\mathbf{Cat}$ had a subobject classifier $\Omega$. Then we would have to have: $$\operatorname{Ob}(\Omega) \simeq \operatorname{Hom}(1, \Omega) \simeq \operatorname{Sub}(1) = \{ 1, \emptyset \}$$ and $$\operatorname{Arr}(\Omega) \simeq \operatorname{Hom}(A, \Omega) \simeq \operatorname{Sub}(A) = \{ A, A_d, \{ s \}, \{ t \}, \emptyset \}.$$ Here $A_d$ represents the subcategory with objects $s$ and $t$, and only the identity morphisms. Also, the source and target morphisms $s, t : \operatorname{Arr} \to \operatorname{Ob}$ are induced by the functors $1 \to A$ corresponding to $s, t \in \operatorname{Ob}(A)$, respectively. From this, we can see that: $$ A \in \operatorname{Hom}_{\Omega}(1, 1) \\ A_d \in \operatorname{Hom}_{\Omega}(1, 1) \\ \{ s \} \in \operatorname{Hom}_{\Omega}(1, \emptyset) \\ \{ t \} \in \operatorname{Hom}_{\Omega}(\emptyset, 1) \\ \emptyset \in \operatorname{Hom}_{\Omega}(\emptyset, \emptyset).$$ We now consider the category corresponding to the poset $\{ 0, 1, 2 \}$ with the induced order, and the subcategory corresponding to the poset $\{ 0, 2 \}$. We then want to find a morphism $F : \{ 0, 1, 2 \} \to \Omega$ such that $\{ 0, 2 \}$ is the pullback of this morphism and $\top : 1 \to \Omega$ (which must correspond to the object 1 of $\Omega$). We must have $F(0) = 1, F(1) = \emptyset, F(2) = 0$ so $F(0 \le 1) = \{ s \}$, $F(1 \le 2) = \{ t \}$. Thus, $F(0 \le 2) = \{ t \} \circ \{ s \}$. But if we repeat the same argument with the subcategory $\{ 0, 2 \}$ with no morphism $0 \to 2$, we will find that for this subobject also with corresponding functor $F' : \{ 0, 1, 2 \} \to \Omega$, $F' = F$. This contradicts the fact that $F'$ and $F$ must give different pullbacks. Note that the construction of $\Omega$ above gives a perfectly good subobject classifier for the category of directed graphs. (In fact, this category is equivalent to the category of functors $AOst \to \mathbf{Set}$ where $AOst$ is the category with two objects $A,O$ generated by two morphisms $s, t : A \to O$, so it is a topos.) Given a subgraph $G'$ of $G$, to get the corresponding morphism $G \to \Omega$, we map objects of $G$ to 1 if they are in $G'$ and to $\emptyset$ otherwise. And for edges in $G$, if the source or target is not in $G'$ then the edge also cannot be in $G'$ and correspondingly, there is exactly one element of the $\operatorname{Hom}_{\Omega}$ set; otherwise, if the source and target are in $G'$, then we send that edge to $A$ if the edge is in $G'$, and to $A_d$ otherwise. What the argument above says is essentially: if we try to extend this to categories, then the problem we get is that if neither $f$ nor $g$ is in the subcategory and neither is the common target of $f$ and source of $g$, but the source of $f$ and the target of $g$ are, then that is not enough information to determine whether $g \circ f$ is in the subcategory. However, if there were a subcategory classifier $\Omega$, it would end up having to agree with the subgraph classfier at the graph level, and then the composition law within that $\Omega$ would have to determine that $g \circ f$ is either always in the subcategory or never in it. 
(And we could make a similar argument for the case where the common target of $f$ and source of $g$ is in the subcategory.)<|endoftext|> TITLE: Lipschitz implies bounded gradient QUESTION [15 upvotes]: Assume $f:\mathbb{R}^n \to \mathbb{R}$ is convex, and $L$-Lipschitz, so $|f(x)-f(y)|\leq L\|x-y\|$. I would like to show that $\|\nabla f(x)\|\leq L$. In one dimension this is a straightforward consequence of the fact that convexity implies $f(y)-f(x)\geq f'(x) (y-x), \forall x, y \in \mathbb{R}$, but I'm having trouble translating this to several variables (in particular, Cauchy-Schwarz is working on the opposite direction!). If it helps, I don't mind assuming the domain of $f$ is a compact $[a,b]^n$. REPLY [10 votes]: Here's how to do it if $f$ is defined on all of $\mathbb{R}^n$: If $\nabla f(x)=0$, we're done, otherwise, take $y=x+\nabla f(x)$. Then by convexity and Lipschitzness we have $$L\|\nabla f(x)\| =L \|x-y\|\geq | f(y)-f(x)|\geq \left|\left<\nabla f(x),\nabla f(x)\right>\right|=\|\nabla f(x)\|^2$$ which gives $\|\nabla f(x)\|\leq L$. If $f$ is defined on a convex set and $x$ is an interior point, the same argument but with $y=x+\eta\nabla f(x)$ for some small $\eta>0$ gives us the same bound. The annoying case is when $x$ is a point where the gradient is defined but moving in the direction of $\nabla f(x)$ takes us out of the convex set $K$. This can happen when $x$ is on the boundary of $K$ and yet we can approach it from each coordinate direction from within $K$ to get a one-sided partial derivative. Then, by convexity of the set, observe that either $\nabla f(x)$ or $-\nabla f(x)$ points in a direction where there are points from $K$. If it's $\nabla f(x)$, we already saw how to deal with it. So suppose that for all $\eta>0$ small enought, $x-\eta\nabla f(x)\in K$. Then we use the coordinate-free formulation of the gradient, which says that $$\lim_{\|x-y\|\to0} \frac{f(y)-f(x)-\left<\nabla f(x),y-x\right>}{\|y-x\|} =0$$ which in our case gives $$ \lim_{\eta\to 0} \frac{f(x-\eta\nabla f(x))-f(x) +\eta\|\nabla f(x)\|^2}{\eta\|\nabla f(x)\|} = 0$$ and thus $$ \|\nabla f(x)\| = \lim_{\eta\to0}\frac{f(x-\eta\nabla f(x))-f(x)}{\eta\|\nabla f(x)\|}$$ and this is a limit of quantities each of which is $\leq L$.<|endoftext|> TITLE: Real Analysis, Folland Problem 6.1.2 $L^p$ spaces QUESTION [6 upvotes]: Background Information: In this chapter we work on a fixed measure space $(X,M,\mu)$. If $f$ is measurable on $X$ and $0 < p < \infty$, we define $$\|f\|_{L^p} = \left[\int |f|^p d\mu\right]^{1/p}$$ and we define $$L^p(X,M,\mu) = \{f:X\rightarrow \mathbb{C}: f \ \text{is measurable and} \ \|f\|_p <\infty\}$$ Holder's Inequality - If $p > 1$ and $\frac{1}{p} + \frac{1}{q} = 1$ then $$\|fg\|_{L^1} \leq \|f\|_{L^p}\|g\|_{L^q}$$ Theorem 6.8 a.) If $f$ and $g$ are measurable functions on $X$, then $\|fg\|_{L^1}\leq\|f\|_{L^1}\|g\|_{L^\infty}$. If $f\in L^1$ and $g\in L^{\infty}$, $\|fg\|_{L^1} = \|f\|_{L^1}\|g\|_{L^\infty}$ if and only if $|g(x)| = \|g\|_{L^\infty}$ a.e. on the set where $f(x)\neq 0$. b.) $\|\cdot\|_{L^\infty}$ is a norm on $L^{\infty}$. c.) $\|f_n - f\|_{L^\infty}$ if and only if there exists $E\in M$ such that $\mu(E^c) = 0$ and $f_n\rightarrow f$ uniformly on $E$. d.) $L^{\infty}$ is a Banach space. e.) The simple functions are dense in $L^{\infty}$ Attempted proof a.): Let $(X,M,\mu)$ be a fixed measure space. Suppose $f$ and $g$ are measurable functions on $X$. 
Let $p\in\{1,\infty\}$ from Holder's inequality it follows that $$\|fg\|_{L^1}\leq\|f\|_{L^1}\|g\|_{L^\infty}$$ I am not sure if this is rigourous enough and I am not sure how to prove the remaining parts. Attempted proof b.) We have $$L^{\infty} = L^{\infty}(X,M,\mu) = \{f:X\rightarrow \mathbb{C}: f \ \text{is measurable and } \|f\|_{L^\infty} < \infty\}$$ It is obvious that $\|f\|_{L^\infty} = 0$ iff $f = 0$ a.e. and $\|f\|_{L^\infty} = |c|\|f\|_{L^\infty}$ so now we just need to show that the triangle inequlaity holds in order to prove the claim that $\|\cdot\|_{L^\infty}$ is a norm in $L^{\infty}$. Let $p = \infty$, by Minkowski's inequality it follows that $$\|f + g\|_{L^\infty} \leq \|f\|_{L^\infty} + \|g\|_{L^\infty}$$ thus we are done. Attempted proof c.) If $\|f_n - g\|_{L^{\infty}}$ then $f_n\rightarrow f$ in $L^{\infty}$. Now for $\epsilon > 0$ there exists an $N$ such that $$E_n = \bigcup_{m\geq N}\{|f_m - f| > \frac{1}{b}\}$$ Take $E = \bigcup_{1}^{\infty}E_n$ then clearly $\mu(E^c) = 0$. Now we need to show that $f_n\rightarrow f$ uniformly on $E$ which I am not exactly sure how to do. I know these are "simple" to prove, but I am pretty lost on how to begin with each of these. I just read the section yesterday so maybe I just need to digest it more. Any help would be great, thanks. REPLY [4 votes]: Assuming you're using the essential supremum norm, here's an answer to part b) and part c). Here is a link to wikipedia's page on the essential supremum and below the proof of part c) is a link to a proof that $(L^\infty, \|\cdot\|_\infty)$ is a Banach space. Proof of b): Relying on the fact that a linear combination of essentially bounded and integrable functions is again essentially bounded and integrable (hence $L^\infty$ is a vector space) we shall only verify that $ess\, \sup |f|$ is indeed a norm on $L^\infty$ and the metric induced by this norm is complete. We first prove the essential supremum of the absolute value of an essentially bounded and integrable functions satisfies the three properties of a norm. Lemma (for you to prove, ask for more details if necessary): It is important to note that for any $f\in L^\infty$, we have $|f(x)|\leq \|f\|_\infty$ a.e. on $X$. First suppose $\|f\|_\infty=0$. Then if $\epsilon >0$ there exists $C\geq 0$ such that $|f(x)|\leq C$ almost everywhere on $X$ and $0\leq C< \epsilon$ since $\epsilon$ cannot be a lower bound of the set $\hat{U}_f=\{C\geq 0: |f(x)|\leq C \text{ a.e. on } X\}$. But then we have $|f(x)|\leq C< \epsilon$ a.e. on $X$ so that $|f(x)|<\epsilon$ a.e. on $X$ which in turn implies $f(x)=0$ a.e. $X$ (since $\epsilon >0$ was arbitrary), so the only set where $f$ is not zero is a set of a measure zero, so by the definition of $L^\infty$ (remember elements of $L^\infty$ aren't functions, but classes of functions, where two functions in a given class differ only on sets of measure zero, but are identical else where) we see that $f(x)\equiv 0$, i.e. the class of functions of which are zero almost everywhere. Thus $\|f\|_\infty=0$ implies $f=0$. The other direction is less work; if $f(x)=0$ almost everywhere on $X$ then it follows that $0=\inf \{C\geq 0: |f(x)| \leq C \text{ a.e. on }X\}$, so $\|f\|_\infty=0$. Thus, we've proved $\|f\|_\infty=0$ if and only if $f=0$. Now let $\alpha \in \mathbb{C}$. We wish to show $\|\alpha f\|_\infty = |\alpha | \cdot \| f\|_\infty$ or in other words, $|\alpha | ess\, \sup |f|=ess\, \sup |\alpha f|$. We may assume $\alpha \neq 0$ else it is trivial. 
Now note, that $$|\alpha f(x)|\leq |\alpha ||f(x)|\leq |\alpha |\cdot \|f\|_\infty \text{ a.e. on } X$$ hence by definition of $ess\, \sup |\alpha f|$ we have $$\|\alpha f\|_\infty \leq |\alpha |\cdot \|f\|_\infty. $$ On the other hand, we know $$|\alpha f(x)|\leq \|\alpha f\|_\infty \text{ a.e. on } X$$ which implies $$| f(x)|\leq \frac{\|\alpha f\|_\infty}{|\alpha|} \text{ a.e. on } X$$ so that by the definition of $ess \, \sup |f|$, we have $$\|f\|_\infty \leq \frac{\|\alpha f\|_\infty}{|\alpha|}$$ and hence $$|\alpha|\|f\|_\infty \leq \|\alpha f\|_\infty$$ this together with the first inequality we proved shows that $|\alpha | \cdot \| f\|_\infty = \|\alpha f\|_\infty$. Now it remains to prove the triangle inequality, i.e. for $f,g\in L^\infty$ (again keeping in mind these are classes of functions), we wish to show, $$\|f+g\|_\infty \leq \|f\|_\infty + \|g\|_\infty.$$ Now, we know that $|f(x)|\leq \|f\|_\infty$ and $|g(x)|\leq \|g\|_\infty$ almost everywhere on $X$, so that $$|f(x)+g(x)|\leq |f(x)|+|g(x)|\leq \|f\|_\infty+ \|g\|_\infty \text{ a.e. on } X$$ and since the right hand side is just some real nonnegative number, it follows by the definition of $ess \, \sup |f+g|$ that $$\|f+g\|_\infty \leq \|f\|_\infty + \|g\|_\infty.$$ Part c): Convergence in $L^\infty$ is uniform convergence almost everywhere. More precisely $$\lim_{n\to \infty} \|f_n-f\|_\infty=0 \iff \exists \, E\in \Sigma \text{ such that } \mu(E^c)=0 \text{ and } (f_n)\to f \text{ uniformly on } E$$ Proof: If $\exists \, E\in \Sigma \text{ such that } \mu(E^c)=0 \text{ and } (f_n)\to f \text{ uniformly on } E$ then, we know that for any $\epsilon >0$ there exists $N\in \mathbb{N}$ such that $$|f_n(x)-f(x)|< \epsilon \text{ for all } n\geq N \text{ and all } x\in E.$$ Now from this we wish to show $\lim_{n\to \infty} \|f_n-f\|_\infty=0$. By the above inequality however, we have for any $n\geq N$, $$|f_n(x)-f(x)|< \epsilon \text{ a.e. on } X$$ since $\mu(E^c)=0$ and $E^c$ is at most where the uniform convergence doesn't hold. Thus by the definition of $ess\, \sup |f_n-f|$ we have $$\|f_n-f\|_\infty <\epsilon$$ for all $n\geq N$. Thus $f_n \to f$ as $n\to \infty$ (in the $L^\infty$ metric). The converse will take a little more work. We assume $\lim_{n\to \infty} \|f_n-f\|_\infty=0$, i.e. for any $\epsilon >0$ there exists $N'\in \mathbb{N}$ such that $n\geq N'$ implies $\|f_n-f\|_\infty <\epsilon$. This implies that for any $n\geq N'$, $$|f_n(x)-f(x)|\leq \|f_n-f\|_\infty <\epsilon$$ a.e. on $X$. Let $M_n=\|f_n-f\|_\infty $ for each $n\geq N'$ and $$A_n=\{x\in X: |f_n(x)-f(x)|> M_n\}.$$ Since the above inequality holds almost everywhere on $X$, it is clear that $\mu(A_n)=0$. Let $A=\cup_{n\geq N} A_n$. Then $\mu(A)=0$ since a countable union of sets with measure zero is itself measurable and with measure zero. Finally let $E=A^c$ and it follows that this is the set where $f_n$ converges uniformly to $f$ and $\mu(E^c)=0$, completing the proof. For a proof of d), see L^infinity is a Banach space Let me know if you have any questions/see a typo I missed/etc... I'll try to update soon with hints for parts a) and e) but I must run now... I know e) has an analogous theorem for $L^p$ spaces so the proof should not be greatly different... Edit 1: Part a) Here is a proof of the first claim in a), i.e. if $f$ and $g$ are measurable and $g$ is essentially bounded then $\|fg\|_1\leq \|f\|_1\|g\|_\infty$. Proof: Recall the above lemma that says $|g(x)|\leq \|g\|_\infty$ a.e. 
on $X$ for any essentially bounded and measurable function $g: X\to \mathbb{C}$. Thus, $$|f(x)||g(x)|\leq |f(x)|\cdot \|g\|_\infty \text{ a.e. on } X.$$ Upon integrating over all of $X$, implicitly using the fact that if $h$ is an integrable function and $A\subset X$ is a set of measure zero then $\int_A h =0$, we obtain, $$\int_X |f(x)g(x)|d\mu \leq \|g\|_\infty \int_X |f(x)|d\mu$$ i.e. $\|fg\|_1\leq \|f\|_1\|g\|_\infty$. Edit 2: Now for the second claim: $\|fg\|_1=\|f\|_1\cdot \|g\|_\infty$ if and only if $|g(x)|=\|g\|_\infty$ on the set where $f(x)\neq 0$. Proof: Let $A=\{x\in X: f(x)\neq 0\}$. Suppose $|g(x)|=\|g\|_\infty$ for every $x\in A$. Then, $$\|fg\|_1=\int_X |f(x)||g(x)|d\mu=\|g\|_\infty \cdot \int_A |f(x)|d\mu = \|g\|_\infty \|f\|_1$$ since the integral is zero everywhere besides $A$, because $f(x)$ is zero on $X\setminus A$. I still can't figure out the converse...<|endoftext|> TITLE: Examine convergence of $\sum_{n=1}^{\infty}(\sqrt[n]{a} - \frac{\sqrt[n]{b}+\sqrt[n]{c}}{2})$ QUESTION [7 upvotes]: How to examine convergence of $\sum_{n=1}^{\infty}(\sqrt[n]{a} - \frac{\sqrt[n]{b}+\sqrt[n]{c}}{2})$ for $a, b, c> 0$ using Taylor's theorem? REPLY [3 votes]: You have, using Taylor's polynomial, $$ a^{1/n}=e^{\frac1n\,\log a}=1+\frac1n\log a+\frac{e^{c_n}}{2n^2}\,\log^2 a, $$ where $c_n$ lies between $0$ and $\frac1n\log a$ (Lagrange remainder), and similarly for $b$ and $c$. Hence $$\sqrt[n]{a} - \frac{\sqrt[n]{b}+\sqrt[n]{c}}{2}=\frac1n\left(\log a-\frac{\log b+\log c}{2}\right)+O\!\left(\frac1{n^2}\right),$$ and since the $O(1/n^2)$ part is absolutely summable, the series converges if and only if $\log a=\frac{\log b+\log c}{2}$, that is, if and only if $a^2=bc$; otherwise it diverges by comparison with the harmonic series.<|endoftext|> TITLE: Showing $\sum_{i = 1}^n\frac1{i(i+1)} = 1-\frac1{n+1}$ without induction? QUESTION [7 upvotes]: I oversaw a high-school mathematics test the other day, and one of the problems was the following: Show, using induction or other means, that $$\sum_{i = 1}^n\frac1{i(i+1)} = 1-\frac1{n+1}$$ The induction proof is very standard, where the induction step relies on the fact that $\frac{1}{n+1} + \frac{1}{n(n+1)} = \frac{1}{n}$, and I'm sure it's been answered on this site before. However, I got intrigued by the "or other means" part of the question. I don't know whether the teacher who wrote the test even considered any alternative solutions (he may just have written it so that if anyone has a crazy idea that works out, then they should get a full score for it), but I tried to find one. For instance, we may do the following telescope-ish argument: $$ \frac{1}{1(1+1)} + \frac{1}{2(2+1)} + \cdots + \frac{1}{(n-1)n} + \frac{1}{n(n+1)} + \frac{1}{n+1}\\ = \frac{1}{1(1+1)} + \frac{1}{2(2+1)} + \cdots + \frac{1}{(n-1)n} + \frac1n\\ \vdots\\ = \frac{1}{1(1+1)} + \frac12\\ = 1 $$ However, I feel that this is just an induction proof in disguise (hidden in the vertical dots). If one uses the mechanics of the induction proof to check whether the formula is true for a specific $n$, then one certainly does the exact same calculations as I have done here. Is there a proof of this fact that clearly does not use induction (or at least hides it better)? The more elementary the better, and the ultimate goal would be to do it within the syllabus of the students taking the test (or at least not far from it). For reference, they should be familiar with the summation formula of arithmetic and geometric series and their derivations (so techniques resembling those would be well within specifications). If there is a solution using calculus, then the students should be able to integrate elementary trigonometric functions, as well as exponential functions, logarithms and rational functions. They are familiar with integration by parts, substitution and partial fractions. I welcome more advanced solutions as well, of course.
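For what it's worth, the identity itself is easy to confirm numerically before looking for a proof — a minimal Python sketch using exact rational arithmetic (the variable names are only illustrative):

from fractions import Fraction

for n in (1, 2, 5, 10, 50):
    s = sum(Fraction(1, i * (i + 1)) for i in range(1, n + 1))
    print(n, s, s == 1 - Fraction(1, n + 1))   # the last column is always True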
REPLY [6 votes]: For a proof of intermediate level, note that for each $k=1,2....,n$ we have $$\int_k^{k+1} \frac{1}{t^2}\ dt=\frac{1}{k(k+1)}.$$ Adding these integrals for $k$ from $1$ to $n$ then gives $$\int_1^{n+1}\frac{1}{t^2}\ dt=1-\frac{1}{n+1}.$$ In this approach the "telescoping" may be said to hide in the additivity of the integral over the disjoint subintervals of $[1,n+1].$<|endoftext|> TITLE: Why $|x|$ is not rational expression? QUESTION [11 upvotes]: I'm 9th grade student, and my teacher said that $|x|$ is not rational expression ( expression like $\frac{p(x)}{q(x)}$ s.t $p(x)$ and $q(x)\neq 0$ are polynomial) but he didn't have convincing reason. One of my friends said that it is provable by use of differential, but I don't know about calculus. Is there any proof for this fact, without use of calculus? REPLY [14 votes]: If there were polynomials $p$ and $q$ such that $|x|=\frac{p(x)}{q(x)}$ for all $x\in\Bbb R$, then $\frac{p(x)}{q(x)}$ would be defined for all $x\in\Bbb R$, and $q(x)$ could never be $0$. Moreover, we’d have $p(x)=|x|q(x)$ for all $x\in\Bbb R$. If $x\ge 0$, this means that $p(x)=xq(x)$, and if $x<0$ it means that $p(x)=-xq(x)$. Let $r(x)=p(x)-xq(x)$; this is certainly a polynomial, and $$r(x)=\begin{cases} 0,&\text{if }x\ge 0\\ -2xq(x),&\text{if }x<0\;. \end{cases}$$ I’m going to assume that you know that a polynomial of degree $n\ge 1$ has at most $n$ real roots, though you’ve probably never seen a proof. Our supposed polynomial $r(x)$ evidently has infinitely many real roots, since each non-negative real is a root, so it must be the constant function $r(x)\equiv 0$. But then $-2xq(x)=0$ for each $x<0$, and it follows that $q(x)=0$ for each $x<0$. We saw at the beginning that this is impossible: $q(x)$ can never be $0$. This contradiction shows that in fact no such polynomials $p$ and $q$ exist, and $|x|$ is not a rational function.<|endoftext|> TITLE: Euler Product formula for Riemann zeta function proof QUESTION [9 upvotes]: In class we introduced Reimann Zeta function $$ \zeta (x)=\sum_{n=1}^{+\infty} \frac{1}{n^x} $$ And we proved its domain was $D=(1,+\infty)$ Now Euler proved that $$ \zeta(x)=\prod_{p\text{ prime}}\frac{1}{1-p^{-x}} $$ By saying $$ \zeta(x)=1+\frac{1}{2^x}+\frac{1}{3^x}+... \\ \zeta(x)(\frac{1}{2^x})=\frac{1}{2^x}+\frac{1}{4^x}+... \\ \zeta(x)(1-\frac{1}{2^x})=1+\frac{1}{3^x}+\frac{1}{5^x}+... $$ And so on for every prime number. However this proof isn't a 'rigorous proof' as my professor says. Why is that and how would one prove this rigorously? Any reference would be helpful. I have seen on wikipedia that to make the proof rigorous we need to observe $\mathfrak{R}(x)>1$ Is that the real part of x or something else? REPLY [5 votes]: Eulers formula for the Zeta function is, $$ \prod_{p \in \mathbb{P}}^{p \le A} \frac1{1- p^{-s}} = \prod_{p \in \mathbb{P}} (\sum_{k=0}^{\infty} p^{-ks}) $$ Infinite sums and products may depend on order. Finite do not. Consider the finite products, $$ \prod_{p \in \mathbb{P}}^{p \le A} \frac{1- p^{-(K+1)s}}{1- p^{-s}} = \prod_{p \in \mathbb{P}}^{p \le A} \sum_{k=0}^{K} p^{-ks} $$ Product over a sum is the sum over the cartesian products of the products. $$ \prod_{a \in A} \sum_{b \in B_a} t_{a,b} = \sum_{c \in \prod_{a \in A} B_a} \prod_{a \in A}t_{a,c_a} $$ where the product of sets $\prod_{a \in A} B_a$ is taken to be a cartesian product. $$ \prod_{p \in \mathbb{P}}^{p \le A} \sum_{k=0}^{K} p^{-ks} = \sum_{v \in \prod_{p \in \mathbb{P}}^{p \le A} \{ 0 .. 
K\}} \prod_{p \in P}^{p \le A}p^{-v_ps} = \sum_{v \in \prod_{p \in \mathbb{P}}^{p \le A} \{ 0 .. K\}} (\prod_{p \in P}^{p \le A}p^{-v_p})^{-s}$$ Change the summing variable using, $$ \sum_{v \in V } g(f(v)) = \sum_{w \in \{f(v) : v \in V \} } g(w)$$ which is valid only if f is one to one. This is true by Fundamental theorem of arithmetic, as every number has a unique factorization. This gives, $$ \prod_{k=0}^{K} \sum_{p \in \mathbb{P}}^{p \le A} p^{-ks} = \sum_{n \in \{\prod_{p \in P}^{p \le A}p^{-v_p} : v \in \prod_{p \in \mathbb{P}}^{p \le A} \{ 0 .. K\} \} } n^{-s} $$ Define Q by, $$ Q(A, K) = \{\prod_{p \in P}^{p \le A}p^{-v_p} : v \in \prod_{p \in \mathbb{P}}^{p \le A} \{ 0 .. K\} \}$$ No natural number may have a factor greater than itself. Also the highest power it can have for a prime factor is $N = 2^K$, giving $K = \log_2(N)$. Every number from 1..N must have a unique factorization, and that factorization must be constructed by Q. $$ \{1 .. N\} \subset Q(N, log_2(N)) $$ The positive natural numbers are given by, $$ \lim_{A \to \infty, K \to \infty}Q(A,K) = \mathbb{N+} = \{\prod_{p \in P}p^{-v_p} : v \in \prod_{p \in \mathbb{P}} \{ 0 .. \infty\} \}$$ Then, $$ \prod_{p \in \mathbb{P}}^{p \le A} \frac{1- p^{-(K+1)s}}{1- p^{-s}} = \prod_{p \in \mathbb{P}}^{p \le A} (\sum_{k=0}^{K} p^{-ks}) = \sum_{n \in Q(A, K)} n^{-s} $$ Taking limits, $$ \lim_{A \to \infty, K \to \infty} \prod_{p \in \mathbb{P}}^{p \le A} \frac{1- p^{-(K+1)s}}{1- p^{-s}} = \prod_{p \in \mathbb{P}} \frac1{1- p^{-s}} $$ $$ \lim_{A \to \infty, K \to \infty} \sum_{n \in Q(A, K)} n^{-s} = \sum_{n \in \mathbb{N^+}} n^{-s} $$ The order of summation is not prescribed by the sum. Infinite sums can have different values depending on order. However, $$ \sum_{1..n}^{\infty} n^{-s} $$ is absolutely convergent for $\Re(s) > 1$, which guarantees that the sum will converge to the same limit irrespective of order. So $$ \prod_{p \in \mathbb{P}} \frac1{1- p^{-s}} = \sum_{n \in \mathbb{N^+}} n^{-s} = \sum_{n = 1}^{\infty} n^{-s} = \zeta(s) $$ An alternative approach compares the limit with, $\zeta(s)$. Consider, $$ \prod_{p \in \mathbb{P}} \frac1{1- p^{-s}}- \zeta(s) $$ Then, $$ \lim_{A \to \infty, K \to \infty} \prod_{p \in \mathbb{P}}^{p \le A} \frac{1- p^{-(K+1)s}}{1- p^{-s}} - \lim_{N \to \infty} \sum_{n = 1}^{N} n^{-s} $$ Or, $$ \lim_{A \to \infty, K \to \infty} \sum_{n \in Q(A, K)} n^{-s} - \lim_{N \to \infty} \sum_{n = 1}^{N} n^{-s} $$ So, $$ \lim_{N \to \infty} \sum_{n \in (Q(N, \log_2(N)) - \{1 .. N\})} n^{-s} $$ $s$ may be complex, $ s = u + it $ $$ \lim_{N \to \infty} \sum_{n \in (Q(N, \log_2(N)) - \{1 .. N\})} n^{-u} e^{-it \ln(n)}$$ The magnitude is, $$ \lim_{N \to \infty} | \sum_{n \in (Q(N, \log_2(N)) - \{1 .. N\})} n^{-s} | <= \lim_{N \to \infty} \sum_{n \in (Q(N, \log_2(N)) - \{1 .. N\})} n^{-u}$$ $$ <= \lim_{N \to \infty}\sum_{n=N+1}^{\infty} n^{-u} = 0 $$ As $\sum_{n=0}^{\infty} n^{-u}$ converges absolutely for $ u > 1 $ So if $s = u + it \wedge u > 1$, $$ \prod_{p \in \mathbb{P}} \frac1{1- p^{-s}} = \zeta(s) $$<|endoftext|> TITLE: Is the fundamental group of a retract a subgroup of the original space? QUESTION [5 upvotes]: Let $X$ be a topological space and $A$ a retract of $X$. Is the fundamental group of $A$ a subgroup of the fundamental group of $X$? REPLY [7 votes]: Let $i$ be the inclusion and $r$ the retraction, so that $r\circ i$ is the identity. Then $(r\circ i)_*=r_*\circ i_*$ is the identity. Thus $r_*$ is surjective and $i_*$ is injective. 
From the latter fact we deduce that the fundamental group of the retract is isomorphic to a subgroup of the fundamental group of the larger space.<|endoftext|> TITLE: Find all continuous functions over reals such that $f(x)+f(y) = f(x+y)-xy-1$ for all $x,y \in \mathbb{R}$ QUESTION [9 upvotes]: Find all continuous functions over reals such that $f(x)+f(y) = f(x+y)-xy-1$ for all $x,y \in \mathbb{R}$. I saw first that $f(0) = -1$ but then I am struggling to see how to get a formula for $f(x)$. If I do $x = 0$ we get $f(0) + f(x) = f(x)-1$ which doesn't really help. Is there a better way to get a formula for $f(x)$ here? REPLY [2 votes]: Since you found $f(0)+f(0)=f(0)-0-1$so that $f(0)=-1$, if you hold $y$ constant and take the derivative with respect to $x$, you get $$f^{\prime}(x)=f^{\prime}(x+y)-y$$ Then rearrange to $$f^{\prime\prime}(x)=\lim_{y\rightarrow0}\frac{f^{\prime}(x+y)-f^{\prime}(x)}{y}=\lim_{y\rightarrow0}1=1$$ So $f(x)=\frac12x^2+C_1x+C_2$. Since $f(0)=C_2=-1$, we are down to $f(x)=\frac12x^2+C_1x-1$, and since this works in the original equation, we are done. EDIT: OK, so here's a solution that doesn't require twice differentiable functions. Rewrite the recurrence relation as $$\frac{f(x+y)-f(x)}y=\frac{f(y)-f(0)}y+x$$ Given that we know $f(0)=-1$. Then $$\begin{align}\lim_{y\rightarrow0}\frac{f(x+y)-f(x)}y&=\lim_{y\rightarrow0}\frac{f(y)-f(0)}y+\lim_{y\rightarrow0}x\\ &=f^{\prime}(x)=f^{\prime}(0)+x\end{align}$$ On integration we get $$f(x)=\frac12x^2+f^{\prime}(0)x+C_3$$ And recall that we knew $$-1=f(0)=C_3$$ So we are back to verifying that $$f(x)=\frac12x^2+f^{\prime}(0)x-1$$ works for all values of $f^{\prime}(0)$.<|endoftext|> TITLE: Do I understand the Chevalley Restriction Theorem correctly? QUESTION [5 upvotes]: Let $G$ be a complex semisimple Lie group with Lie algebra $\frak g$, and let $\frak h$ be a Cartan subalgebra with Weyl group $W$. The Chevalley Restriction Theorem states that the restriction map $\Bbb C[{\frak g}]\to\Bbb C[{\frak h}]$ induces an isomorphism of graded $\Bbb C$-algebras $$\Bbb C[{\frak g}]^G\to\Bbb C[{\frak h}]^W.$$ What is the inverse of this map? Using root-space decomposition, there is a projection $$\pi:{\frak g}={\frak h}\oplus\bigoplus_{\alpha\in\Phi}{\frak g}_\alpha\to {\frak h}.$$ Thus, we have a map $$\pi^*:\Bbb C[{\frak h}]\to\Bbb C[{\frak g}].$$ Is the restriction of this to $\Bbb C[{\frak h}]^W$ the inverse of the Chevalley restriction map? REPLY [4 votes]: No. Look at the matrix $\left(\begin{smallmatrix}0 & -1 \\ 1 & 0\end{smallmatrix}\right) \in \mathfrak{g}=\mathfrak{sl}(2)$. This is a sum of two elements of two root spaces using the standard Cartan. But the trace of the square is not zero, and trace of the square is an invariant polynomial.<|endoftext|> TITLE: What is the subword complexity function of this infinite word? QUESTION [8 upvotes]: Let $w_{0}$ denote the finite word $01$ in the free monoid $\{ 0, 1 \}^{\ast}$, and for $i \in \mathbb{N}$ define $w_{i}$ as the word obtained by adjoining the first $\left\lfloor \frac{\ell(w_{i-1})}{2} \right\rfloor$ entries in $w_{i-1}$ to the right of $w_{i-1}$. We thus have that: \begin{align*} w_{0} & = 01 \\ w_{1} & = 010 \\ w_{2} & = 0100 \\ w_{3} & = 010001 \\ w_{4} & = 010001010 \\ & \text{etc.} \end{align*} Let $$ w = 0100010100100010001010001010010001010010000100010100100010001010100 \ldots$$ denote the infinite binary word obtained in the limit, with respect to the sequence $(w_{i} : i \in \mathbb{N}_{0})$. 
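The construction is easy to reproduce programmatically — a minimal Python sketch (the function name is only illustrative):

def word_prefix(iterations):
    # w_0 = "01"; each step appends the first floor(len/2) symbols of the current word.
    w = "01"
    for _ in range(iterations):
        w = w + w[:len(w) // 2]
    return w

print(word_prefix(10)[:67])
# 0100010100100010001010001010010001010010000100010100100010001010100

Ten iterations already give a word of length $94$, whose first $67$ symbols are exactly the prefix of $w$ displayed above.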
Since the construction of this infinite word is very simple and natural, it is surprising that the integer sequence $$(0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, \ldots)$$ given by the consecutive entries in $w$ is not currently in the On-Line Encyclopedia of Integer Sequences (OEIS). Recall that the subword complexity function $\sigma_{v} = \sigma : \mathbb{N} \to \mathbb{N}$ of an infinite word $v$ is the function on $\mathbb{N}$ that maps $n \in \mathbb{N}$ to the number of distinct factors of $v$ of length $n$. Given the simple definition of the binary word $w$, it is natural to ask: what is $\sigma_{w}$? It is not obvious to me how to find a closed-form evaluation of the sequence $$(\sigma_{w}(n) )_{n \in \mathbb{N}} = (2, 3, 5, 8, 12, \ldots),$$ since proving a statement of the form $\sigma_{w}(n) = m$ for fixed $n \in \mathbb{N}$ (where $m \in \mathbb{N}$) appears to be nontrivial in general. However, for certain 'small' values of $n \in \mathbb{N}$, the evaluation of $\sigma_{w}(n)$ is relatively trivial. For example, using induction, it is easily seen that $\sigma_{w}(2)=3$. It is also natural to ask: What is the abelian complexity function of $w$? REPLY [2 votes]: Not a full answer, but probably a useful reference. You mention that the integer sequence $$(0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, \ldots)$$ given by the consecutive entries in $w$ is not currently in OEIS. However, the sequence of positions of $1$ in this sequence, namely: $$ 1, 5, 7, 10, 14, 18, 20, 24, 26, 29, 33, 35, \dotsm $$ appears as A020942, First column of 3rd-order Zeckendorf array, in OEIS. Quoting OEIS Any number $n$ has unique representation as a sum of terms from $\{1, 2, 3, 4, 6, 9, 13, 19, ...\}^{*}$ such that no two terms are adjacent or pen-adjacent; e.g., $7=6+1$. Sequence gives all $n$ where that representation involves $1$. $^{(*)}\scriptsize\text{ These terms obey the recurrence equation $a(n) = a(n-1) + a(n-3)$.}$ Two references are given [1] Larry Ericksen and Peter G. Anderson, Patterns in differences between rows in k-Zeckendorf arrays, The Fibonacci Quarterly 50, February 2012. [2] C. Kimberling, The Zeckendorf array equals the Wythoff array, Fibonacci Quarterly 33 (1995) 3-8. Reference [1] gives interesting connections with the sequence of words $w_k$ on a $k$-letter alphabet defined as follows: \begin{align} w_0 &= a_0,\\ w_1 &= a_0a_1,\\ &\ \vdots\\ w_{k-1} &= a_0a_1 \dotsm a_{k-1} \end{align} and $w_i = w_{i-1}w_{i-k}$ for $i \geqslant k$. This makes this article a reasonable approach towards the solution of your problem.<|endoftext|> TITLE: $R = \mathbb{Z}[\sqrt{-41}]$, show that 3 is irreducible but not prime in $R$ QUESTION [7 upvotes]: I'm asked to show that 3 is irreducible but not prime in $R = \mathbb{Z}[\sqrt{-41}]$. And if $R$ is a Euclidean domain. To show that it's not prime I have $(1 + \sqrt{-41})(1 - \sqrt{-41}) = 42 = (3)(14)$. I get that 3 divides 14 but how do I show that 3 does not divide either $(1 + \sqrt{-41})$, $(1-\sqrt{-41})$ to show that it's not prime? $x \mid 42$ but $x$ does not divide 3? To show that it's irreducible how am I showing that either $(1 + \sqrt{-41})$, $(1 - \sqrt{-41})$ is a unit since clearly $(1 + \sqrt{-41})(1 - \sqrt{-41}) = 42$? REPLY [4 votes]: The answers given already are all very complete and good. 
I wanted to give a slightly different perspective on the fact that $3$ is not prime; this perspective is alluded to in Robert Soupe's very nice answer. If $3 \in R:= \mathbb{Z}[\sqrt{-41}]$ is prime, then $3R$ is a prime ideal of $R$. In particular, this means that $R/3R$ is an integral domain. Since $R$ can be expressed as the quotient $\mathbb{Z}[X]/\langle X^{2}+41 \rangle$ and we can interchange the order in which we take quotients, we see that $$R/3R \cong \mathbb{Z}[X]/\langle X^{2}+41, 3 \rangle \cong (\mathbb{Z}/3\mathbb{Z})[X]/\langle X^{2}+2 \rangle$$ Of course, $X^{2}+2$ is reducible over $\mathbb{Z}/3\mathbb{Z}$, since it factors as $(X+1)(X+2)$. The images of $X+1$ and $X+2$ in the quotient ring $(\mathbb{Z}/3\mathbb{Z})[X]/\langle X^{2}+2 \rangle$ are thus zero divisors, so $R/3R$ is not a domain, and therefore $3$ is not a prime element of $R$.<|endoftext|> TITLE: How are the average rate of change and the instantaneous rate of change related for ƒ(x) = 2x + 5? QUESTION [5 upvotes]: How are the average rate of change and the instantaneous rate of change related for ƒ(x) = 2x + 5 ? Should I figure out what is similar between them to solve this question? I don't understand how they correlate REPLY [4 votes]: The average rate of change is defined as: $r_{avg} = \frac{f(b)-f(a)}{b-a}$, over some interval $[a,b]$. The instantaneous rate of change is defined as the slope of $f$ evaluated at a single point, e.g.: $f'(c)$ where $c$ is a predefined point. In this case, because $f(x)=2x+5$, we have: $r_{avg}=\frac{(2b+5)-(2a+5)}{b-a}$=$\frac{2b-2a}{b-a}$=$2$ And you will also note notice that the slope of $f(x)$ is...$2$! Addendum: The reason that the average rate of change equals the instantaneous is because the slope is constant in this function. In general, instantaneous change is defined as the derivative of the function evaluated at a single point. The average change is the same as always (pick two points, and divide the difference in the function values by the difference in the $x$ values). But, if the curve looks like $y=x^2$, this won't be the case. The slope at a single point anywhere along the curve is different, and the average rate of change will not always be the same as the instantaneous, though it can for certain points. (See "Mean Value Theorem")<|endoftext|> TITLE: How does the cardinality of the set of all probability measure on a set $X$ change according to the cardinality of $X$? QUESTION [8 upvotes]: I was wondering concerning the following problem: Take $X$ as a parameter space endowed with its Borel $\sigma$-algebra. What is the cardinality of $\Delta (X)$, understood as the set of all probability measure over $X$? [When I write that $X$ is a parameter space, I am thinking about the cardinality of $X$ as finite, countable or uncountable] If $X$ is a doubleton, we should already have $| \Delta (X) | = \mathfrak{c}$. What happens if we move on? Also, how does this relate to the cardinality of the Borel sets? Any feedback is most welcome. Thank you for your time. REPLY [3 votes]: This is only a partial answer (however, a complete question does not seem to be possible here). Since you're mentioning Borel $\sigma$-algebra, I presume that $X$ is a topological space. Extending Ross Millikan's comment, If $X$ has a countable base $B$, then $|\Delta(X)|=\mathfrak{c}$ unless the only non-empty open set is $X$. 
Indeed, in this case the measure is completely determined by its values on sets of the form $A_1\cap\dots\cap A_k$, $A_i\in B$ (since it is a $\pi$-system generating the Borel $\sigma$-algebra). But there are only countably many such sets. So the cardinality is at most $\mathfrak c$. On the other hand, it is at least $\mathfrak c$ by OP. In particular, for any separable metric space with at least two elements, $|\Delta(X)|=\mathfrak c$. What can be said besides that, is an interesting question. Say, if $X$ is a non-separable metric space, then, obviously, $|\Delta(X)|\ge |X|$ (considering delta measures on singletons), and also $|\Delta(X)|\ge \mathfrak c$ (which would follow from the previous if we accept CH). However, it is hard to bound $|\Delta(X)|$ from above. A helpful idea is that there cannot be an uncountable collection of disjoint sets of positive probability. In particular, the cardinality of set of all discrete probability measures is at most $(X\times \mathbb R)^{\aleph_0}$ (to each such measure we associate a sequence of points from $X$ and their corresponding weights), which is $\max(\mathfrak c, |X|)$. Unfortunately, we cannot go much beyond this with such argument. We could write that each point from $X$ has a countable base of neighborhoods, and in total they make a base $B$ of topology. But the above argument will fail, since $B$ might not generate the whole Borel $\sigma$-algebra (since it is uncountable). It might still be that the values of this probability measure are completely determined by its values on some countable subfamily of $B$ (since, as I wrote, there can't be an uncountable disjoint family of sets with positive measure), but I am far from being sure in this.<|endoftext|> TITLE: A strange integral having to do with the sophomore's dream: QUESTION [45 upvotes]: I recently noticed that this really weird equation actually carries a closed form! $$\int_0^1 \left(\frac{x^x}{(1-x)^{1-x}}-\frac{(1-x)^{1-x}}{x^x}\right)\text{d}x=0$$ I honestly do not know how to prove this amazing result! I do not know nearly enough about the sophomore's dream integral properties to answer this question, which I have been trying to apply here. (If possible, please stay with real methods, as I do not know contour integration yet) REPLY [3 votes]: As property of definite integrals it is known that for any function integrand $f(x)$ $$\int_0^a{f(x)}\text{d}x = \int_0^a{f(a-x)}\text{d}x$$ Geometrically this means the area under the curve when flipped about the line $ x= \dfrac{a}{2}$ as a rigid figure cannot change. The property can be stated symbolically as in one particular generalisation where ${p} $ is a constant: $$\int_0^1 \frac{f(x)^p}{f(1-x)^p}\text{d}x=\int_0^1 \frac{f(1-x)^p}{f(x)^p}\text{d}x.$$ For p = 1 you can write the integrand also as a product in another example. $$ \int_0^{\pi}{\cos^5 \phi \sin \phi }\, d \phi = \int_0^{\pi}{\sin ^5 \phi \cos \phi }\, d \phi $$ I am sure the generalisation takes a fanciful attention.Earlier days I would imagine a "curved roof trapezoid " and flip it without change of area under the roof.<|endoftext|> TITLE: There is no function from $\mathbb{R} \to (0, \infty)$ satisfying $f'(x)=f(f(x))$ QUESTION [6 upvotes]: Problem: Prove that there is no differentiable function from $\mathbb{R} \to (0, \infty)$ satisfying $f'(x)=f(f(x))$. I could not make much progress, except for observing that any derivatives (any order, whenever they exist) are always positive. Please help. Even hints are appreciated. 
REPLY [2 votes]: Since $f$ outputs positive numbers, $f'$ must be positive. Since $f'=f\circ f$, then $f''$ exists, and $f''=\left(f'\circ f\right)\cdot f'$, and you can see that $f''$ is also positive. With $f$, $f'$, and $f''$ positive on $\mathbb{R}$, $f$ must have a horizontal asymptote $y=c\geq0$ as $x\to-\infty$ and $\lim_{x\to-\infty}f'(x)=0$. (Formal proof below.) Then $$f(c)=\lim_{x\to-\infty}f(f(x))=\lim_{x\to-\infty}f'(x)=0$$ and it is not permitted that $f(c)=0$. With $f$ and $f'$ positive, $f$ is bounded below and increasing. This guarantees $c$ exist, and gives the existence of the horizontal asymptote. But it isn't automatic that $\lim_{x\to-\infty}f'(x)$ exists. For example, maybe the curve is flat as it moves left, and occasionally, for very brief periods, takes on a very steep slope. This is why (as Greg Martin notes in the comments) we must make use of $f''$ being positive. The logic is the same with $f'$ as was with $f$. Since $f'$ and $f''$ are positive, then $f'$ has a left asymptote at $y=d\geq0$. But $d$ must equal $0$ since otherwise $f$ wouldn't be bounded below.<|endoftext|> TITLE: Function with no roots QUESTION [28 upvotes]: Given a non-constant function $f(x)$, is it possible for it to have no zeroes (neither real nor complex)? Say for example, $f(x)=\cos x-2$, does a complex solution exist for this because for real $x$, $\cos x$ belongs to $[-1, 1]$? Or, say $f(x)=e^x$, does a complex zero exist for this because $e^x \gt0$ for real $x$? REPLY [2 votes]: The $f(x)>0$ for real $x$ condition is mostly irrelevant. If a continuous function $f(x)$ with no real roots is defined on the whole real line, then it necessarily must satisfy $f(x) >0$ for all real $x$ or $f(x)<0$ for all real $x$, as otherwise it would have a real root by the intermediate value theorem. However, this doesn't imply that it had no complex roots. As an example, $x^2 +1>0$ for all real $x$, but it has the complex roots $\{i,-i\}$. But $e^x$, as commenters have pointed out, is also positive for all real $x$ and has no complex roots. And if a function is not continuous or not defined on the whole real line, it may not satisfy that condition and yet have no complex roots. For instance, $\frac{1}{x}$ takes on both positive and negative real values, but had no roots anywhere in the complex plane (but it has a pole at 0, so the intermediate valid theorem doesn't hold).<|endoftext|> TITLE: Prove that $W \cup S^1$ is connected in the subspace topology of $\mathbb{R^2}$ QUESTION [5 upvotes]: I want to solve the following question: Prove that the union of $W$ and the unit circle $S^1$ is connected in the subspace topology of $\mathbb{R^2}$ where $W=\{(x, y) \in \mathbb{R^2} | x=(1-e^{-t})cost, y=(1-e^{-t})sint, t \geq0\}$ I know that a connected topological space, $X$ is connected if it does not split into open disjoint non-empty subsets. Also, for a space $(Y, \tau)$ the subspace topology on $X \subset Y$ is $\tau|_X=\{U \cap X : U \in \tau\}$. How can the union be connected, since you are taking the union of non-empty subsets? REPLY [2 votes]: $W$ is connected (since it is a path). Note that $\overline{W}$ is connected, since it is the closure of a connected set. But $\overline{W}=W \cup S^1$.<|endoftext|> TITLE: How to calculate intersection and union of probabilities? 
QUESTION [6 upvotes]: lets say I have a switch A with 3 legs, each leg has 0.8 chance to be connected (and then electricity will flow), we need only 1 leg connected for A to transfer the electricity (so sorry I didn't explain it that well I'm having hard time to translate this problem) So I calculated the chance of A to transfer electricity by doing $1-(0.2)^3$ which is $124\over125$ which I think is true. The problem is I wanted to say that A will transfer elecricity only if 1 of his legs will be connected so its like saying $0.8 + 0.8 + 0.8$ which is obviously wrong(over $1$) so I used the weird formula that says to do like this: $0.8 + 0.8 + 0.8 - (0.8)^2 -(0.8)^2 - (0.8)^2 + (0.8)^3 = {124\over125}$ too. My only problem is that I didnt understand why I had to use that formula and why I could multiple probabilities for the intersection but couldn't just sum them for the union. Thanks in advance REPLY [11 votes]: The Product Rule applies to events which are independent. Only then is the probability of the intersection equal to the product of the probabilities of the events. $\mathsf P(A\cap B) ~=~ \mathsf P(A)~\mathsf P(B)~$ only when events $A$ and $B$ are independent. (When dealing with more than two events we require mutual independence.) Otherwise conditional probability must be used: $\mathsf P(A\cap B)~=~\mathsf P(A)~\mathsf P(B\mid A)\\\qquad\qquad~=~\mathsf P(A\mid B)~\mathsf P(B)$ The Addition Rule applies only when the events are mutually exclusive (also known as disjoint).   Only then is the probability of the union equal to the sum of probabilities of the event. $\mathsf P(A\cup B)~=~\mathsf P(A)+\mathsf P(B)$ Otherwise if the events are not disjoint (ie they have common outcomes) then we would be over measuring and must exclude the measure of the intersection. $\mathsf P(A\cup B)~=~\mathsf P(A)+\mathsf P(B) - \mathsf P(A\cap B)$ When dealing with more than two events, the principle of inclusion and exclusion is required $\begin{align}\mathsf P(A\cup B\cup C)~=~&\mathsf P(A)+\mathsf P(B)+\mathsf P(C) - \mathsf P(A\cap B)-\mathsf P(A\cap C)-\mathsf P(B\cap C)+\mathsf P(A\cap B\cap C)\end{align}$ ... and so on. $\Box$<|endoftext|> TITLE: Why is a differential equation a submanifold of a jet bundle? QUESTION [7 upvotes]: I'm reading The geometry of jet bundles by D.J Saunders and struggle with the definition of a differential equation 6.2.23. on page 203. First of all, Saunders introduces a differential operator determined by a bundle map: Let $(E, \pi, M)$ and $(H, \rho, M)$ be bundles and let $(f, id_M)$ a bundle morphism between the jet-bundle $J^k(\pi)$ and $H$. This means, that $f: J^k(\pi) \to H$ is a smooth map that respects the projections. The differential operator determined f is the map $D_f: \Gamma_{loc} (\pi) \to \Gamma_{loc}(\rho)$ taking a local section $\phi$ of $\pi$ to the local section $D_f(\phi)$ of $\rho$ with $D_f(\phi) (p) = f(j^k_p \phi)$. For a differential operator $D_f$ determined by $f$ and a local section $\chi$ of $\rho$ he than defines the differential equation determindes by $D_f$ and $\chi$ to be the submanifold $$S_{f; \chi} = \lbrace j^k_p \phi: f(j^k_p \phi) = \chi (p) \rbrace \subset J^k \pi $$ I can see, why the definition of the set $S_{f; \chi}$ defines a differential equation in a rather intuitive way. Now my question is: Why is this set in deed a submanifold? I guess there is a nice widly known theorem on the core of the theory on fibre bundles which I unfortunatly do not know. 
So if anybody could point out a reference to such a theorem, I would be grateful! REPLY [3 votes]: Let $\bar \chi : J^k \pi \to H$ be the composition of $\chi$ with the projection of $J^k \pi$; i.e. $\bar \chi (\xi_p) = \chi(p)$. Then since $$S_{f;\chi} = \{\xi_p \in J^k \pi : f(\xi_p) = \bar \chi(\xi_p) \},$$ a slight generalization of the regular value theorem tells us that it is a submanifold if $Df - D\bar\chi$ is surjective everywhere in $S_{f;\chi}$. I believe you need some additional assumption to make this true - otherwise standard examples of non-manifold level sets should easily extend to this setting. Since $\bar\chi$ is constant in the vertical direction, it suffices for the restriction of $Df$ to the vertical bundle to be surjective; so you probably want to require this of your differential operator.<|endoftext|> TITLE: Reference for category of Lie algebras? QUESTION [5 upvotes]: Are there any references which deal with categorical aspects of Lie algebras? I'm looking for constructions like kernels, products, coproducts (limits and colimits in general) etc. My goal is to get a better understanding of the category of Lie algebras (using category theory machinery). Thanks. REPLY [8 votes]: This is the sort of thing you should work out for yourself as an exercise. Limits are computed as in vector spaces / sets, so the interesting question is how to compute colimits. This reduces to computing coproducts and coequalizers. Coproducts are given by a version of the free product (look up free Lie algebras to get a sense of how this behaves), while coequalizers are given by quotienting by a suitable ideal. It's useful to observe that taking universal enveloping algebras is a left adjoint, so preserves colimits, and also useful to think of Lie algebras as analogous to groups.<|endoftext|> TITLE: How to prove Liouville's theorem for subharmonic functions QUESTION [6 upvotes]: I noticed this post and this paper, which gives a version of Liouville's theorem for subharmonic functions and the reference of its proof, but I think there must be an easier proof for the following version of Liouville's theorem with a stronger condition. A subharmonic function that is bounded above on the complex plane $\mathbb C$ must be constant I think we may need to use the fact that the maximum of a subharmonic function cannot be achieved in the interior of its domain unless the function is constant(MVP). But how do we prove that a bounded-above subharmonic function on the complex plane $\mathbb C$ can achieve its maximum at a certain point of $\mathbb C$?(Maybe we don't need to use MVP for proof) Thanks in advance! REPLY [9 votes]: If $v$ is subharmonic in the complex plane $\Bbb C$ then $$ \tag 1 v(z) \le \frac{\log r_2 - \log |z|}{\log r_2 - \log r_1} M(r_1, v) + \frac{\log |z| - \log r_1}{\log r_2 - \log r_1} M(r_2, v) $$ for $0 < r_1 < |z| < r_2$, where $$ M(r, v) := \max \{ v(z) : |z| = r \} \quad . $$ That is the "Hadamard three-circle theorem" for subharmonic functions, and follows from the fact that the right-hand side of $(1)$ is a harmonic function which dominates $v$ on the boundary of the annulus $\{ z : r_1 < |z| < r_2 \}$ . (Remark: It follows from $(1)$ that $M(r, v)$ is a convex function of $\log r$.) Now assume that $v(z) \le K$ for all $z \in \Bbb C$. Then $M(r_2, v) \le K$, and $r_2 \to \infty$ in the inequality $(1)$ gives $$ \tag 2 v(z) \le M(r_1, v) $$ for $0 < r_1 < |z|$. It follows that $$ v(z) \le \limsup_{r_1 \to 0} M(r_1, v) = v(0) $$ because $v$ is upper semi-continuous. 
Thus $v$ has a maximum at $z=0$ and therefore is constant. Remark: As noted in the comments, the condition “$v$ is bounded above” can be relaxed to $$\liminf_{r \to \infty} \frac{M(r, v)}{\log r} = 0 $$ which is still sufficient to conclude $(2)$ from $(1)$.<|endoftext|> TITLE: Existence of a "basis" for the symmetric positive definite matrices QUESTION [6 upvotes]: Let $P_{\text{sym}}$ denote the convex cone of the symmetric positive definite real matrices (of size $n \times n$). Question: Is there a finite subset $B$ of matrices in $P_{\text{sym}}$ such that every matrix in $P_{\text{sym}}$ is a (non-zero) conic combination of elements of $B$? Note: It is not possible for such an expression to be unique; Assuming otherwise, we have a finite set $\{P_1,...,P_k \} \subseteq P_{\text{sym}}$ such that any matrix $P \in P_{\text{sym}}$ can be expressed uniquely as: $P=\sum_{i=1}^k a_iP_i$ where all $a_i \ge 0$ and $(a_1,...,a_k)\neq \bar 0$. This would imply $P_{\text{sym}}$ is homeomorphic to the set $D_k:= \{(a_1,...,a_k) \in \mathbb{R}^k | a_i \ge 0 \}$. (via the map $(a_1,...,a_k) \to \sum_{i=1}^k a_iP_i$). However, it is known that $P_{\text{sym}}$ is homeomorphic to $\mathbb{R}^{n(n+1)/2}$, so $\mathbb{R}^{n(n+1)/2} \cong D_k$. For $k \neq \frac{n(n+1)}{2}$ this is impossible, since this would imply an open subset of $\mathbb{R}^k$ is homeomorphic to an open subset of $\mathbb{R}^{n(n+1)/2}$, contradicting the invariance of domain. For $k=\frac{n(n+1)}{2}$, this is also impossible; Invariance of domain implies the only subsets of $\mathbb{R}^n$ which are homeomorphic to it are open, and $D_k$ is not open. REPLY [10 votes]: Some pictures to help the intuition: For $n=2$, any symmetric matrix can be written as $P=\begin{bmatrix}a+b & c \\ c & a-b\end{bmatrix}$. $P$ is positive definite if and only if $a>0$ and $a^2 > b^2 + c^2$. This is the interior of a right circular cone in $abc$-space. For $n=3$, consider the affine subspace of symmetric matrices $P=\begin{bmatrix}1 & d & e \\ d & 1 & f \\ e & f & 1\end{bmatrix}$. $P$ is positive definite if and only if $d,e,f \in (-1,1)$ and $\det P = 1 - d^2 - e^2 - f^2 + 2def > 0$. This is the interior of the middle tetrahedron-like piece of Cayley's cubic surface. Both these regions have curved boundaries, and so cannot be generated by conic combinations of a finite set of points.<|endoftext|> TITLE: Weak convergence $\iff$ strong convergence in finite dimensional space QUESTION [8 upvotes]: I am seeking a proof of the following claim. Weak convergence $\implies$ strong convergence in a finite-dimensional normed linear space. Thank you. REPLY [2 votes]: It is sufficient to prove that if $\mathrm{dim}(E) < \infty$ then $\sigma(E,E') = \mathcal{T}_E$ where $\sigma(E,E')$ is weak topology and $\mathcal{T}_E$ strong topology, or topology induced by norm on $E$. Equivalently every open $\mathcal{T}_E$-neighborhood of origin $B_E(\epsilon)$ it's also an open $\sigma(E,E')$-neighborhood. Let $x=(x_1,...,x_n)$, then $\left \| x \right \|_E:=\max_{1 \leq j \leq n} |x_j|$ defines a norm on $E$ (I can take this norm, since in finite dimensions are all equivalent), and \begin{align*} \displaystyle B_E(\epsilon)&=\lbrace x \in E : \left \| x \right \|_E < \epsilon \rbrace = \lbrace x \in E : |x_i| < \epsilon , \forall i=1,...,n \rbrace = \bigcap_{i=1}^n B_{\mathbb{K}}(\epsilon) \end{align*} where this intersection is by definition a $\sigma(E,E')$-neighborhood open. 
Note that weak convergence is, essentially by definition, convergence with respect to the weak topology.<|endoftext|> TITLE: Different functions or same functions QUESTION [5 upvotes]: I have a question in my booklet: are $f(x) = \frac{x}{x}$ and $f(x) = 1$ different or not, and why or why not? I can only think that the functions are different because the second one is a constant function irrespective of the value of $x$. But on evaluating the first function we will always get $1$, and the first function is undefined at $x = 0$ while the second function is defined at $x = 0$. Is there any other way of thinking about it mathematically? My instructor never covered anything related to this. Kindly help. REPLY [3 votes]: It depends on how you phrase the question. If you say Let $f,g : \mathbb{Q}\setminus\{0\} \to \mathbb{Q}$, $$f(x) = \frac{x}{x}, \qquad g(x)=1$$ then $f$ and $g$ are in fact equal. This is also true if you replace the rational numbers $\mathbb{Q}$ with the reals, or with complex numbers... as long as you do it consistently for both functions. However, because $g$ doesn't actually use its argument, there's no reason to assume its domain should be $\mathbb{Q}\setminus\{0\}$. I could write Let $g : \{1, 2, (0,5), i,\mathrm{cucumber}\} \to \mathbb{R}$, $$g(x)=1$$ or, more reasonably, just Let $g : \mathbb{Q} \to \mathbb{Q}$, $$g(x)=1.$$ Then, $g$ would be a completely different kind of object from $f$, and depending on your philosophy they would either be nonequal or it wouldn't even make sense to ask whether they're equal.<|endoftext|> TITLE: Determine whether a polygon is convex based on its vertices. QUESTION [5 upvotes]: We have a polygon $A_1A_2\ldots A_k \subset \Bbb{R^2}$ with the coordinates: $$A_1 = (x_1, y_1)$$ $$A_2 = (x_2, y_2)$$ $$\vdots$$ $$A_k = (x_k, y_k)$$ Is there any way to determine whether this polygon is convex or concave? REPLY [4 votes]: Rewritten on 2018-03-26. We only need to rely on the following four statements: (1) The sign of the 2D analog of the vector cross product indicates whether the second vector is counterclockwise (positive) or clockwise (negative) with respect to the first vector, in a standard right-handed coordinate system. (2) If the 2D analog of the vector cross product is zero, the two vectors are collinear. (3) If the 2D analog of the vector cross product between consecutive pairs of edge vectors in a polygon has differing signs (ignoring zeroes, as if they had no sign), the polygon must be concave. (4) If we examine the signs of the $x$ and $y$ components of the edge vectors (again ignoring zeroes as if they had no sign), consecutively along the polygon, as a circular list, there must be exactly two sign changes, or the polygon is concave. Statements 1, 2 and 3 are known from basic vector algebra. Statements 3 and 4 combined are equivalent to calculating the angle between each pair of consecutive edges in the polygon, and verifying that they are all in the same orientation (counterclockwise or clockwise), and that the sum of the angles is 360° (so that we can correctly detect self-intersecting polygons), except that we only consider four separate directions (the four quadrants in a standard coordinate system).
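Here is a compact Python sketch of the same idea — orientation consistency of the consecutive-edge cross products (statement 3) plus the two-sign-change check on each coordinate (statement 4). The function names and tolerance are only illustrative, and edge cases are handled less carefully than in the detailed pseudocode that follows.

def _sign_changes(values, eps):
    # Count sign changes in a circular sequence, ignoring (near-)zero entries.
    signs = [1 if v > eps else -1 for v in values if abs(v) > eps]
    if not signs:
        return 0
    return sum(1 for a, b in zip(signs, signs[1:] + signs[:1]) if a != b)

def is_convex(points, eps=1e-12):
    n = len(points)
    if n < 3:
        return False
    # Edge vectors, taken cyclically around the polygon.
    edges = [(points[(i + 1) % n][0] - points[i][0],
              points[(i + 1) % n][1] - points[i][1]) for i in range(n)]
    orientation = 0.0
    for i in range(n):
        bx, by = edges[i - 1]          # edge entering vertex i ("before")
        ax, ay = edges[i]              # edge leaving vertex i ("after")
        w = bx * ay - ax * by          # 2D cross product of consecutive edges
        if abs(w) > eps:
            if orientation == 0.0:
                orientation = w
            elif orientation * w < 0:  # the polygon turns both ways: concave
                return False
    # Each coordinate of the edge vectors must change sign exactly twice.
    return (_sign_changes([e[0] for e in edges], eps) == 2 and
            _sign_changes([e[1] for e in edges], eps) == 2)

print(is_convex([(0, 0), (2, 0), (2, 2), (0, 2)]))           # True  (square)
print(is_convex([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))   # False (dart)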
In pseudocode, the testing algorithm is as follows: Function isconvex(vertexlist): If (the number of vertices in 'vertexlist' < 3), Then Return FALSE End If Let wSign = 0 # First nonzero orientation (positive or negative) Let xSign = 0 Let xFirstSign = 0 # Sign of first nonzero edge vector x Let xFlips = 0 # Number of sign changes in x Let ySign = 0 Let yFirstSign = 0 # Sign of first nonzero edge vector y Let yFlips = 0 # Number of sign changes in y Let curr = vertexlist[N-1] # Second-to-last vertex Let next = vertexlist[N] # Last vertex For v in vertexlist: # Each vertex, in order Let prev = curr # Previous vertex Let curr = next # Current vertex Let next = v # Next vertex # Previous edge vector ("before"): Let bx = curr.x - prev.x Let by = curr.y - prev.y # Next edge vector ("after"): Let ax = next.x - curr.x Let ay = next.y - curr.y # Calculate sign flips using the next edge vector ("after"), # recording the first sign. If ax > 0, Then If xSign == 0, Then xFirstSign = +1 Else If xSign < 0, Then xFlips = xFlips + 1 End If xSign = +1 Else If ax < 0, Then If xSign == 0, Then xFirstSign = -1 Else If xSign > 0, Then xFlips = xFlips + 1 End If xSign = -1 End If If xFlips > 2, Then Return FALSE End If If ay > 0, Then If ySign == 0, Then yFirstSign = +1 Else If ySign < 0, Then yFlips = yFlips + 1 End If ySign = +1 Else If ay < 0, Then If ySign == 0, Then yFirstSign = -1 Else If ySign > 0, Then yFlips = yFlips + 1 End If ySign = -1 End If If yFlips > 2, Then Return FALSE End If # Find out the orientation of this pair of edges, # and ensure it does not differ from previous ones. w = bx*ay - ax*by If (wSign == 0) and (w != 0), Then wSign = w Else If (wSign > 0) and (w < 0), Then Return FALSE Else If (wSign < 0) and (w > 0), Then Return FALSE End If End For # Final/wraparound sign flips: If (xSign != 0) and (xFirstSign != 0) and (xSign != xFirstSign), Then xFlips = xFlips + 1 End If If (ySign != 0) and (yFirstSign != 0) and (ySign != yFirstSign), Then yFlips = yFlips + 1 End If # Concave polygons have two sign flips along each axis. If (xFlips != 2) or (yFlips != 2), Then Return FALSE End If # This is a convex polygon. Return TRUE End Function where != is the not-equal operator. This approach considers vertex lists with less than three vertices concave, but as this is determined by the very first If clause, you can change it to suit yourself. Degenerate polygons with at least three points, so either all the same point, or collinear, are considered convex by this approach. Note that because this implementation does only two multiplications, and a number of additions, subtractions, and comparisons per polygon edge, it should be extremely efficient approach in most programming languages. In a practical implementation, you might wish to replace > 0 with > eps, < 0 with < eps, x == 0 with x >= -eps && x <= eps, and x != 0 with x < -eps || x > eps, to account for rounding errors with floating-point numbers. I'd use an eps about one quarter to one sixteenth of the smallest meaningful change in either $x$ or $y$ coordinates.<|endoftext|> TITLE: Partial derivatives inverse question QUESTION [19 upvotes]: If $ \partial u/\partial v=a $, then $ \partial v/\partial u=1/a$? 
REPLY [28 votes]: Functions of a Single Variable For functions of one variable, if $y=f(x)$ is strictly monotone and differentiable on an interval, and $f'(x)\ne 0$ in that interval, then the inverse function $x=f^{-1}(y)$ is also strictly monotone and differentiable in the corresponding interval and $$\bbox[5px,border:2px solid #C0A000]{\frac{dx}{dy}=\frac{1}{\frac{dy}{dx}}}\tag 1$$ EXAMPLE: Suppose $y=\sin(x)$ for $x\in (-\pi/2,\pi,2)$. Note that the sine function is monotone and differentiable on $(-\pi/2,\pi/2)$ with $\frac{dy}{dx}=\cos(x)$ and $\cos(x)\ne 0$. The inverse function, call it $x=\arcsin(y)$ for $y\in (-1,1)$, is therefore monotone and its derivative is $$\frac{dx}{dy}=\frac{1}{\cos(x)}=\frac{1}{\sqrt{1-y^2}}$$ Therefore, we have $\frac{d\,\arcsin(y)}{dy}=\frac{1}{\sqrt{1-y^2}}$. Functions of a Two Variables The relationship in $(1)$ does not apply, in general, to functions of more than one variable. As an example, examine the transformation of Cartesian coordinates $(x,y)$ to polar coordinates $(\rho,\phi)$ as given by $$\begin{align} \rho &=\sqrt{x^2+y^2}\\\\ \phi &=\operatorname{arctan2}(y,x) \end{align}$$ and $$\begin{align} x&=\rho \cos(\phi)\\\\ y&=\rho \sin(\phi) \end{align}$$ We examine the relationship between $\frac{\partial \rho }{\partial x}$ and $\frac{\partial x}{\partial \rho}$ to see if $(1)$ holds. Note that $$\begin{align} \frac{\partial \rho }{\partial x}&=\frac{x}{\rho}\\\\ & =\cos(\phi)\\\\ &=\frac{\partial x}{\partial \rho} \end{align}$$ Therefore, $\frac{\partial \rho }{\partial x}\ne \frac{1}{\frac{\partial x}{\partial \rho}}$ and $(1)$ does not hold (unless $y=0$). Instead of the relationship $(1)$ holding, we have instead $$\begin{equation} \begin{pmatrix} \frac{\partial x}{\partial \rho} & \frac{\partial x}{\partial \phi} \\ \frac{\partial y}{\partial \rho} & \frac{\partial y}{\partial \phi} \end{pmatrix} \begin{pmatrix} \frac{\partial \rho}{\partial x} & \frac{\partial \rho}{\partial y} \\ \frac{\partial \phi}{\partial x} & \frac{\partial \phi}{\partial y} \end{pmatrix}=\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \end{equation}$$ whereupon matrix inversion becomes $$\begin{equation} \begin{pmatrix} \frac{\partial x}{\partial \rho} & \frac{\partial x}{\partial \phi} \\ \frac{\partial y}{\partial \rho} & \frac{\partial y}{\partial \phi} \end{pmatrix} =\begin{pmatrix} \frac{\partial \rho}{\partial x} & \frac{\partial \rho}{\partial y} \\ \frac{\partial \phi}{\partial x} & \frac{\partial \phi}{\partial y} \end{pmatrix}^{-1} \tag 2\end{equation}$$ Note that $(2)$ is the analog of $(1)$ and applies whenever a transformation and its inverse exists and is prescribed by differentiable functions. Moreover, it can be generalized to any number of variables.<|endoftext|> TITLE: Probability that a Wiener process is negative at 2 given that it was positive at 1 QUESTION [6 upvotes]: Let $W_t$ be a standard Wiener process, i.e., with $W_0=0$. If $W_1>0$, what is the probability that $W_2<0$? This is my attempt: we want to determine the conditional probability $$\mathbb P(W_2 | W_1>0) = \frac{\mathbb P(W_2<0 \cap W_1>0)}{\mathbb P(W_1>0)}.$$ The denominator is easily computed equal to $1/2$. It remains to find $\mathbb P(W_2<0 \cap W_1>0)$. 
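A quick Monte Carlo estimate — a minimal NumPy sketch simulating the independent pieces $W_1$ and $W_2-W_1$ directly — suggests what the conditional probability should come out to:

import numpy as np

rng = np.random.default_rng(0)
n = 10**6
w1 = rng.standard_normal(n)            # W_1 ~ N(0, 1)
w2 = w1 + rng.standard_normal(n)       # W_2 = W_1 + (W_2 - W_1), independent increment
print(np.mean(w2[w1 > 0] < 0))         # approximately 0.25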
Now, it is a well known fact that $(W_1, W_2)$ is jointly normally distributed, so we can write $$ \mathbb P(W_2<0 \cap W_1>0) = \mathbb P((W_1, W_2)\in ((0, \infty) \times (-\infty, 0)).$$ Also, it is not difficult to determine the covariance matrix of the random vector $(W_1, W_2)$, so the joint normal density can be completely determined. However, from this point on I don't know how to proceed. It seems an integration on the IVth quadrant of this density is necessary, but I was wondering if there is a less pedantic, more intelligent method. REPLY [3 votes]: I think I finally found a very simple and nice solution, which uses almost no calculation. Here it is. First, we can write $W_2 = (W_2-W_1) + W_1$, and if we write $X=W_2-W_1$, $Y=W_1$, from the data of the problem we see that $X$ and $Y$ are both standard normal random variables, and independent with resepct to each other. Therefore, the problem asks for the computation of the conditional probability $$\mathbb P(X+Y<0 | Y>0) = \frac{\mathbb P(X+Y<0\cap Y>0)}{\mathbb P(Y>0)}.$$ Now, ${\mathbb P(Y>0)}=1/2$ is immediate since $Y$ is standard normal. To compute the probability $\mathbb P(X+Y<0\cap Y>0)$, let us argue as follows: consider the joint density of $(X, Y)$ on a cartesian diagram labeled $(X,Y)$. Therefore, we are asking for the measure of the set obtained as the intersection of the sets $(X+Y<0\cap Y>0)$. This set corresponds to the lower half of the IInd quadrant on the Cartesian plane, and its measure is thus $1/8$. It follows that the measure of $\mathbb P(X+Y<0 | Y>0)$ is $\frac{1/8}{1/2}$, which is thus equal to $1/4$.<|endoftext|> TITLE: Real analysis for a non-mathematician. QUESTION [17 upvotes]: I'm currently in an engineering program, so most of my mathematical education has been applied in nature (multivariable calculus, ODEs, PDEs, probability). The only real "theory"-based courses I've taken have been abstract algebra$^1$ and a proof-based differential equations class. I'm looking to expand my mathematical horizons before graduate school, and I figured the two places that might be interesting for me (as well as useful!) are real analysis and differential geometry. I don't really have the space in my course schedule to take either of these, and the former is listed as a prerequisite for the latter at my university. I know that Rudin's Principles of Mathematical Analysis is the crème de la crème for real analysis texts, but I've started reading a pdf of it and it's not only extremely dense (which I'm not quite used to), but I also don't think I have the mathematical maturity to grasp it. I was wondering if anyone knows of any good texts where I can learn real analysis without presupposing a great deal of mathematical maturity. My foundations in calculus are quite strong, but my knowledge is that real analysis is only tangentially related. As a side question, I also want to ask if real analysis is really a prerequisite for differential geometry (I'm skeptical). $^1$We covered what you would normally find in an undergraduate algebra class and also touched on a bit of Galois theory, but to be completely honest I wasn't all that comfortable with the few Galois theory lectures we had, anyway. REPLY [2 votes]: I'm surprised no-one has mentioned Pugh's Real Mathematical Analysis. 
He has a whole chapter on formalism in math, specifically in analysis, where he guides you through thinking in inequalities and limits, and also is known for emphasizing pictures and geometric intuition, which is crucial for internalizing any proof. Highly recommend it -- very palatable for someone new to rigorous mathematics.<|endoftext|> TITLE: Convergence of Riemann sums for improper integrals QUESTION [12 upvotes]: I was considering whether or not the limit of Riemann sums converges to the value of an improper integral on a bounded interval. This appears to be true in some cases when the sum avoids points where the function is not defined. For example, the right-hand Riemann sum for $1/\sqrt{x}$ converges $$\lim \frac1{n}\sum_{i = 1}^n \sqrt{n/i} = \int_0^1 \frac{dx}{\sqrt{x}}.$$ This does not work, however, for the function $\sin(1/x)/x$ even though the improper integral is finite. I’m sure this has something to do with the function being monotone, but I am not able to find a proof. REPLY [8 votes]: We can prove that sequences of right- or left-hand Riemann sums will converge for a monotone function with a convergent improper integral. Suppose WLOG $f:(0,1] \to \mathbb{R}$ is nonnegative and decreasing. Suppose further that there is a singularity at $x =0$ but $f$ is Riemann integrable on $[c,1]$ for $c > 0$ and the improper integral is convergent: $$\lim_{c \to 0+}\int_c^1 f(x) \, dx = \int_0^1 f(x) \, dx.$$ Take a uniform partition $P_n = (0,1/n, 2/n, \ldots, (n-1)/n,1).$ Since $f$ is decreasing we have $$\frac1{n}f\left(\frac{k}{n}\right) \geqslant \int_{k/n}^{(k+1)/n}f(x) \, dx \geqslant \frac1{n}f\left(\frac{k+1}{n}\right), $$ and summing over $k = 1,2, \ldots, n-1$ $$\frac1{n}\sum_{k=1}^{n-1}f\left(\frac{k}{n}\right) \geqslant \int_{1/n}^{1}f(x) \, dx \geqslant \frac1{n}\sum_{k=2}^nf\left(\frac{k}{n}\right). $$ Hence, $$ \int_{1/n}^{1}f(x) \, dx +\frac{1}{n}f(1) \leqslant \frac1{n}\sum_{k=1}^{n}f\left(\frac{k}{n}\right) \leqslant \int_{1/n}^{1}f(x) \, dx+ \frac{1}{n}f \left(\frac{1}{n} \right).$$ Note that as $n \to \infty$ we have $f(1) /n \to 0$ and since the improper integral is convergent, $$\lim_{n \to \infty} \int_{1/n}^{1}f(x) \, dx = \int_0^1 f(x) \, dx, \\ \lim_{n \to \infty}\frac{1}{n}f \left(\frac{1}{n} \right) = 0.$$ The second limit follows from monotonicity and the Cauchy criterion which implies that for any $\epsilon > 0$ and all $n$ sufficiently large $$0 \leqslant \frac{1}{n}f \left(\frac{1}{n} \right) \leqslant 2\int_{1/2n}^{1/n}f(x) \, dx < \epsilon.$$ By the squeeze theorem we have $$\lim_{n \to \infty}\frac1{n}\sum_{k=1}^nf\left(\frac{k}{n}\right) = \int_0^1 f(x) \, dx.$$ This proof can be generalized for non-uniform partitions. For oscillatory functions like $g(x) = \sin(1/x)/x$, the failure of the sequence of right-hand Riemann sums to converge is, non-monotonicity notwithstanding, related to non-convergence as $n \to \infty$ of $$ \frac{1}{n}g \left(\frac{1}{n} \right) = \sin n. $$ This particular case appears to have been covered nicely by @Daniel Fischer in Improper integrals and right-hand Riemann sums<|endoftext|> TITLE: identify the topological type obtained by gluing sides of the hexagon QUESTION [5 upvotes]: Identify the topological type obtained by gluing sides of the hexagon as shown in the picture below Clearly the boundary is encoded by the word $abcb^{-1}a^{-1}c$ I do not understand how the surface is glued together - could you help me, please? 
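One invariant computation that narrows it down (using the standard rules for reading a closed surface off an edge word): the identifications forced by $a$, $b$ and $c$ glue the six corners of the hexagon into two vertex classes, so the quotient has $V = 2$, $E = 3$, $F = 1$ and Euler characteristic $\chi = V - E + F = 0$; moreover the letter $c$ occurs twice with the same exponent, so the surface is non-orientable. A closed non-orientable surface with $\chi = 0$ has only one possible homeomorphism type, which already pins down the answer.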
Edit: This answer is a Klein bottle (see answer below) Similar problem: see Is this octogon topologically equivalent to the Klein Bottle? REPLY [3 votes]: You should get the Klein bottle: By combining arrows $a$ and $b$ into a single red arrow, and arrows $a^{-1}$ and $b^{-1}$ into another red arrow, we get the first figure below. Now we must glue the red sides together, and blue sides together, so that the arrows line up.<|endoftext|> TITLE: When are differentials actually useful? QUESTION [7 upvotes]: I think of differentials as a way to approximate $\Delta y$ in a function $y = f(x)$ for a certain $\Delta x$. The way I understood it, the derivative itself is not a ratio because you can't get $\frac{dy}{dx}$ by taking the ratio of the limits of the numerator and denominator separately. However, once you do have $\frac{dy}{dx}$, you can then think of $dx$ as $\Delta x$ and of $dy$ as the change in $y$ for a slope $\frac{dy}{dx}$ over a certain $\Delta x$. My problem is that I don't know why differentials are useful. The examples I saw are along the lines of approximating the max error in a sphere's volume if we know that the radius we're given (let's say 20) has a possible error of let's say 0.01. In this kind of example, it seems to me we're better off computing $V(20.01) - V(20)$, instead of $\Delta V \approx dV = V' \cdot \Delta x$. In the first case at least we get an exact maximum error instead of an approximation. So my question is: When are differentials actually useful? Are they ever anything more than at best a time saver over hand computations? Are they useful at all in today's world with Wolfram Alpha and other strong computational engines around? REPLY [2 votes]: Optimization, function analysis, computations (computers need a general approach unlike humans), financial and economical applications, engineering, simplifying expression.... the list is literally endless.<|endoftext|> TITLE: Does the operation of completion preserve injectivity? QUESTION [6 upvotes]: It seems to me I saw a counterexample somewhere, but I can't find it, can anybody help me? Let $\varphi:X\to Y$ be a linear continuous map of locally convex spaces, and $\widetilde{\varphi}:\widetilde{X}\to\widetilde{Y}$ the corresponding linear continuous map of their completions. If $\varphi:X\to Y$ is injective, is $\widetilde{\varphi}:\widetilde{X}\to\widetilde{Y}$ injective as well? I think, the answer must be "no", so another question is Under which conditions the injectivity of $\varphi:X\to Y$ implies the injectivity of $\widetilde{\varphi}:\widetilde{X}\to\widetilde{Y}$? REPLY [3 votes]: Here is a way of getting lots counterexamples. Let $\widetilde{X}$ be your favorite infinite-dimensional complete space, let $X\subset \widetilde{X}$ be a dense subspace, let $v\in\widetilde{X}\setminus X$, and let $L$ be the span of $v$. Then we can take $Y=\widetilde{X}/L$, and the quotient map $\widetilde{\varphi}:\widetilde{X}\to Y$ is not injective but its restriction to $X$ is. I don't know of any useful conditions under which the answer is yes.<|endoftext|> TITLE: Sum of cube roots of a quadratic QUESTION [5 upvotes]: If $a$ and $b$ are the roots of $x^2 -5x + 8 = 0$. How do I find $\sqrt[3]{a} + \sqrt[3]{b}$ without finding the roots? I know how to evaluate $\sqrt[2]{a} + \sqrt[2]{b}$ by squaring and subbing for $a+b$ and $ab$ via sum and product of roots. But for this question, if I cube $\sqrt[3]{a} + \sqrt[3]{b}$ I'm left with radicals which are difficult to resolve, e.g. 
$\sqrt[3]{a^2b}$ How should I go about approaching this problem? Edit: I've also tried letting $\sqrt[3]a+\sqrt[3]b=m$, which makes $a+b+3m\sqrt[3]{ab}=m^3$ (by rising everything to the power of $3$ and then substituting $\sqrt[3]a+\sqrt[3]b=m$ again), if that is of any help. REPLY [6 votes]: The solutions of the quadratic equation are not real numbers, so $\sqrt[3]{a}$ and $\sqrt[3]{b}$ are not canonically determined; therefore, the sum can take up to 9 different values. Since $ab = 8$, once you choose one of the three possible values for $\sqrt[3]{a}$, you may want to consider only the value $\sqrt[3]{b} = 2/\sqrt[3]{a}$ for the second root. In that case, let $S = \sqrt[3]{a} + \sqrt[3]{b}$. Then $S^3 = a+b + 3\sqrt[3]{ab} S$, so $S$ is a solution of the cubic equation $S^3 = 6S+5 \Longleftrightarrow (S+1)(S^2-S-5) = 0$. The solutions are $S_1=-1$, $S_2=\frac{1}{2}(1-\sqrt{21})$, and $S_3 = \frac{1}{2}(1+\sqrt{21})$.<|endoftext|> TITLE: The Marginal Distribution of a Multinomial QUESTION [15 upvotes]: The binomial distribution is generalized by the multinomial distribution, which follows: \begin{align} f(x_1,\ldots,x_k;n,p_1,\ldots,p_k) & {} = \Pr(X_1 = x_1\mbox{ and }\dots\mbox{ and }X_k = x_k) \\ \\ & {} = \begin{cases} { \displaystyle {n! \over x_1!\cdots x_k!}p_1^{x_1}\cdots p_k^{x_k}}, \quad & \mbox{when } \sum_{i=1}^k x_i=n \\ \\ 0 & \mbox{otherwise,} \end{cases} \end{align} In particular, the "three"nomial distribution follows: $${n! \over x_1! x_2!(n-x_1-x_2)!}p_1^{x_1}p_2^{x_2}p_3^{n-x_1-x_2}$$ I am not able to show why the marginal probability of this distribution, with respect to either $x_1$ or $x_2$ follows $b(n, p_1)$ or $b(n, p_2)$, respectively. Please help! REPLY [14 votes]: The simplest way is "combinatorial/probabilistic." Recall that the "three-nomial" distribution measures the probability of $x_1$ Type 1 events, $x_2$ Type 2 events, and $n-x_1-x_2$ Type 3 events, when an experiment is repeated independently $n$ times, with probabilities of "success" respectively equal to $p_1$, $p_2$, and $p_3$, where $p_1+p_2+p_3=1$. The probability of $x_1$ Type 1 events is therefore $$\binom{n}{x_1}p_1^{x_1}(p_2+p_3)^{n-x_1}.\tag{1}$$ It follows that the marginal distribution of $X_1$ is binomial. If we really wish to sum, by the Binomial Theorem the probability (1) is equal to $$\binom{n}{x_1}p_1^{x_1}\sum_{x_2=0}^{n-x_1}\binom{n-x_1}{x_2}p_2^{x_2}p_3^{n-x_1-x_2}.$$ This is precisely the same as the result of summing over all (appropriate) $x_2$ the probability that $X_2=x_2$ and $X_3=n-x_1-x_2$. If we want to save writing down (1) until the very end, we can just write the sum argument backwards.<|endoftext|> TITLE: Scalar multiplication as a special form of matrix multiplication QUESTION [5 upvotes]: Question What do we gain or lose, conceptually, if we consider scalar multiplication as a special form of matrix multiplication? Background The question bothers me since I have been reading about dilations and scaling of geometrical objects in Paul Lockhart's book "Measurement". Geometrically, dilation is a transformation that stretches an object in one dimension by a certain factor. Analogously, the linear transformation $$ \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \lambda \end{pmatrix} \cdot \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} $$ "stretches" the third component by the factor $\lambda$. Scaling is a geometric transformation that stretches an object in all dimensions by a certain factor. 
Analogously, the linear transformation $$ \begin{pmatrix} \lambda & 0 & 0 \\ 0 & \lambda & 0 \\ 0 & 0 & \lambda \end{pmatrix} \cdot \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} $$ "stretches" all three components by the factor $\lambda$. This, however, can be written more succinctly using scalar multiplication: $$ \lambda \cdot \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}. $$ In fact, every scalar multiplication can be expressed as a multiplication with a special matrix, and it turns out to be a mere shortcut. On the face of it, this observation is not very spectacular; however, it raises interesting philosophical and conceptual questions as to the foundations of linear algebra. For example, if scalar multiplication is only a nice-to-have shortcut, then isn't it in fact superfluous conceptually? Currently, scalar multiplication is taught as if it was a distinct concept, independent of matrix multiplication. What would change if we got rid of this shortcut? What could alternative axioms of vector spaces and moduls look like? What about linear transformations? What is easier, what is harder$-$not to write down, but conceptually? I know that this topic is very broad, but I would like to collect opinions, ideas, examples. REPLY [2 votes]: Note: Some discussion with Siddharth has revealed a flaw in my earlier answer, which I've corrected now (with a complete reversal of conclusion). The set of all endomorphisms of an Abelian group $(V, +)$ forms a ring $\operatorname{End}(G)$ with pointwise addition as the ring addition and composition of mappings as the ring multiplication. If there exists a subring $L \subseteq \operatorname{End}(G)$ with center $F = \operatorname{Z}(L)$, such that $F$ is a field and $F \cap \operatorname{End}(G) = (1)$, the ideal generated by the identity endomorphism, then $V$ is a vector space over the field $F$ with $L$ as its ring of linear operators. Why does this work? [Edit: I'm not sure the following is entirely correct. I'll have to work on it further and update the answer]. In one direction (vector space $\to$ above characterization), it's obvious that the linear operators of a vector space $V$ do form a subring of the endomorphism ring of the Abelian group $(V, +)$, and that the subset of "scalar" operators form a subfield of this ring. As shown here (beautifully), the scalar operators are exactly the central elements of the ring of linear operators. Conversely, it's not hard to show that the Abelian group $(V, +)$ forms a (traditional) vector space over the field $F$ defined above, in a manner that makes $L$ (isomorphic to) the ring of all linear operators of this space.<|endoftext|> TITLE: Why doesn't this definition of natural numbers hold up in axiomatic set theory? QUESTION [15 upvotes]: I was reading about older definitions of the natural numbers on Wikipedia here (in retrospect, not the best place to learn mathematics) and came across the following definition for the natural numbers: (paraphrased) Let $\sigma$ be a function such that for every set $A$, $\sigma(A) := \{ x \cup \{ y \} \mid x \in A \wedge y \notin x \} $. Then $\sigma(A)$ is the set obtained by adding any new element to all elements of $A$. Then define $0 := \{ \emptyset \}$, $1 := \sigma(0)$, $2 := \sigma(1)$ et cetera. The way I understood this definition is that the natural number $n$ is "defined" as the set of all sets with exactly $n$ elements. 
This sounded straightforward to me, until I read the next paragraph: This definition works in naive set theory, type theory, and in set theories that grew out of type theory, such as New Foundations and related systems. But it does not work in the axiomatic set theory ZFC and related systems, because in such systems the equivalence classes under equinumerosity are "too large" to be sets. For that matter, there is no universal set V in ZFC, under pain of the Russell paradox. Why exactly doesn't this definition work in ZFC? I don't fully understand how the sets in this definition are "too large". Is part of the problem just that there is no "universal set" to pick the element $y$ from? I tried to do some more reading to find my answer, but the material was way out of my depth. (I am only familiar with the basics of set theory, Russell Paradox, Cantor diagonal argument, and not much more. ) So I apologize in advance if this is a really simple question... REPLY [16 votes]: Notice that, under that definition, we have $$ \begin{split} 1 = \sigma(0) &= \sigma(\{\emptyset\}) = \{x \cup \{y\} : x \in \{\emptyset\} \land y \not \in x\} \\ &= \{ \emptyset \cup \{y\} : \text{y is a set}\}\\ & = \{ \{y\} : y \text{ is a set}\}\\ &= \{ z : |z| = 1\}. \end{split} $$ In ZFC, the collection of all sets ("$V$") does not form a set, so the definition breaks down already at stage $1$. If $\sigma(0)$ was a set then $V = \{y : \{y\} \in \sigma(0)\}$ would also be a set. So that is really the technical difficulty. Frege and Russell proposed that the number $1$ could be defined to consist of all $1$-element sets, but that collection of sets is not itself a set in ZFC. The usual way of describing why $V$ is not a set is that it is "too large"; this sense of "largeness" is one of the more common ways of motivating the ZFC axioms, so the Wikipedia author alluded to it. The idea of "largeness" is really an allusion to the "cumulative hierarchy" vision of set theory. Unfortunately, the cumulative hierarchy is hard to describe in one sentence, because it depends already on the notion of ordinal. But the idea is that we can form a collection of sets stage by stage, so that the powerset of each set is formed at the next stage after the set is formed, and so that all the members of each set are formed at stages strictly before the set itself is formed. One way to understand the ZFC axioms is that they are only trying to describe the sets that are formed via this process. But $V$ cannot be formed at any stage, because it would have to already contain its powerset, but the powerset ought to be formed at the next stage. So the claim that $V$ is too "large" really means that $V$ could not be formed at any stage of the process. Back to defining the numbers. We can imagine that two sets have the same cardinality if there is a bijection between them. This is an equivalence relation, so it ought to have equivalence classes. And the equivalence class of $\{\emptyset\}$ will consist of every set that has exactly one element. That is the idea behind the definition above. But these equivalences classes are not sets in the cumulative hierarchy, so ZFC has trouble with them. The way that we usually circumvent this kind of problem in ZFC is to select a "particular" representative from each equivalence class. Then, instead of referring to the entire equivalence class, we refer just to that representative. The most commonly use set of representatives in ZFC are the von Neumann ordinals. 
So we have $$ \begin{split} 0 &= \emptyset\\ 1 &= \{\emptyset\} = \{0\}\\ 2 &= \{\emptyset, \{\emptyset\}\} = \{0,1\}\\ 3 &= \{0,1,2\} \end{split} $$ and so on. This is not really much different than the definition due to Frege and Russell, as you can see.<|endoftext|> TITLE: Is there a binary operation satisfying these conditions? QUESTION [6 upvotes]: Last night I started to read some book that has to do with applications of groups in physics and the question came in my mind about the existence of some structure, which I define in this way: Suppose that we have set $S$ which has at least countably infinite number of elements which is equipped with the binary operation $*$ and that we have: 1) $\forall x,y \in S$ we have $x*y \in S$ 2) There exist one and only one $e \in S$ such that we have $x*e=e*x=x$, for every $x \in S$. 3) For every $x \in S$ there exist $l \in S$ and $r \in S$ such that we have $l*x=e$ and $x*r=e$ and $l \neq r$. So this structure is similar to the group in that that it satisfies closure axiom and it has unique identity element but it is different from group in that that every element has left and right inverse which do not coincide and we do not assume associativity. Since I know that there are a lot of structures in mathematics if this structure exists it is probably not something new, but I do not know. And now the question: Does structure defined in this way exist? REPLY [6 votes]: As posted in the comments, if you insist that rule 3 is valid for $x=e$, the answer is "No." (See details in the comment by Travis.) However, if you are willing to exclude $x=e$ from rule 3, here's an example of such a structure on $S=\{0, 1, 2, 3, ....\}$: Define $0*n = n*0 = n$, $1*2 = 2*3 = 3*1 = 0$, $4*5 = 5*6 = 6*4 = 0$, and so on, and define the operation $*$ on the other pairs as you wish.<|endoftext|> TITLE: What is the optimal path between $2$ fixed points around an invisible obstructing wall? QUESTION [50 upvotes]: Every day you walk from point A to point B, which are $3$ miles apart. There is a $50$% chance each walk that there is an invisible wall somewhere strictly between the two points (never at A or B). The wall extends $1$ mile in each direction perpendicular to the line segment (direct path) between A and B, and its position can be at any random location between the two points. That is, it can be any distance between A and B such as $0.1$ miles away from A, $1.5$ miles away, $2.9$ miles away.... You don't know if the wall is present or where it is until you walk into it. You must walk around the wall if present as there is no other way to circumvent it. Assume the wall is negligible thickness (like a force field), all ground is perfectly flat, and the y coordinates at both A and B are $0$ (although I don't think the optimal path will change much if they weren't). What strategy minimizes the average expected walk distance between A and B? How do we know for certain this strategy yields the shortest average walking distance? To get the $100$ bounty points, I would like a convincing argument from you as to why you feel your solution is optimal and cannot be beaten. For those of you using computer simulation, anything reasonably close to optimal is a candidate for the bounty. Can anyone beat the optimal solutions submitted so far? There are still a few days to try. Even a slight beat (if not due to roundoff error or some other error) would help prove previous solutions are non-optimal. 
You can just use the table of $31$ points (x-y coordinates) and compute that path length and then if you can find better then I will accept that answer. I think by Sunday night I may have to award the bounty otherwise it may get auto-awarded. REPLY [13 votes]: I think a have a plausible solution. If we follow the curve $y=y(x)$ on a wall day, then we travel a distance of $\int_0^u\sqrt{1+\left(y^{\prime}(x)\right)^2}dx$ before hitting the wall at $u$. Then we have to go $w-y(u)$ to go around the wall of half-width $w$, and then a further $\sqrt{w^2+(B-u)^2}$ to get to our goal at $x=B$. Assuming uniform distribution of wall placements $u$, the average distance on a wall day is $$\frac1B\int_0^B\left[\int_0^u\sqrt{1+\left(y^{\prime}(x)\right)^2}dx+w-y(u)+\sqrt{w^2+(B-u)^2}\right]du$$ On a non-wall day, our average distance is the uninterrupted path length, $$\int_0^B\sqrt{1+\left(y^{\prime}(x)\right)^2}dx=\frac1B\int_0^BB\sqrt{1+\left(y^{\prime}(u)\right)^2}du$$ Since half of the days are wall days, the average distance is $$\frac1{2B}\int_0^B\left[\int_0^u\sqrt{1+\left(y^{\prime}(x)\right)^2}+w-y(u)+\sqrt{w^2+(B-u)^2}+B\sqrt{1+\left(y^{\prime}(u)\right)^2}\right]du$$ Now, we can change the order of integration of that double integral to get $$\begin{align}\int_0^B\int_0^u\sqrt{1+\left(y^{\prime}(x)\right)^2}dx\,du&=\int_0^B\sqrt{1+\left(y^{\prime}(x)\right)^2}\int_x^Bdu\,dx\\ &=\int_0^B(B-x)\sqrt{1+\left(y^{\prime}(x)\right)^2}dx\end{align}$$ So now our average distance reads $$\bar s=\frac1{2B}\int_0^B\left[(2B-x)\sqrt{1+\left(y^{\prime}(x)\right)^2}dx+w-y(x)+\sqrt{w^2+(B-x)^2}\right]dx$$ Now we want to vary the optimal solution by a small adjustment, $\delta y$. Expanding that first square root for small $\delta y$, $$\sqrt{1+\left(y^{\prime}+\delta y^{\prime}\right)^2}\approx\sqrt{1+\left(y^{\prime}\right)^2+2y^{\prime}\delta y^{\prime}}\approx\sqrt{1+\left(y^{\prime}\right)^2}+\frac{y^{\prime}}{\sqrt{1+\left(y^{\prime}\right)^2}}\delta y^{\prime}$$ Integrating by parts, $$\begin{align}\int_0^B(2B-x)\frac{y^{\prime}}{\sqrt{1+\left(y^{\prime}\right)^2}}\delta y^{\prime}dx&=\left.(2B-x)\frac{y^{\prime}}{\sqrt{1+\left(y^{\prime}\right)^2}}\delta y\right|_0^B-\\ &\int_0^B\delta y\left\{-\frac{y^{\prime}}{\sqrt{1+\left(y^{\prime}\right)^2}}+(2B-x)\frac{y^{\prime\prime}}{\sqrt{1+\left(y^{\prime}\right)^2}}-(2B-x)\frac{\left(y^{\prime}\right)^2y^{\prime\prime}}{\left(1+\left(y^{\prime}\right)^2\right)^{\frac32}}\right\}dx\\ &=\int_0^B\delta y\cdot\frac{y^{\prime}+\left(y^{\prime}\right)^3-(2B-x)y^{\prime\prime}}{\left(1+\left(y^{\prime}\right)^2\right)^{\frac32}}dx\end{align}$$ The integrated term above drops out because we assume that $\delta y(0)=\delta y(B)=0$ So combining with the $-\delta y$ from the big integral, we have the condition $$\int_0^B\left[\delta y\cdot\frac{y^{\prime}+\left(y^{\prime}\right)^3-(2B-x)y^{\prime\prime}}{\left(1+\left(y^{\prime}\right)^2\right)^{\frac32}}-\delta y\right]dx=0$$ for arbitrary small adjustments $\delta y$ so it follows that $$\frac{y^{\prime}+\left(y^{\prime}\right)^3-(2B-x)y^{\prime\prime}}{\left(1+\left(y^{\prime}\right)^2\right)^{\frac32}}-1=0$$ This is a variables separable differential equation for $y^{\prime}$ and we are going to write it as $$\frac{dx}{2B-x}=\frac{dy^{\prime}}{y^{\prime}\left(1+\left(y^{\prime}\right)^2\right)-\left(1+\left(y^{\prime}\right)^2\right)^{\frac32}}$$ Letting $y^{\prime}=\sinh\theta$, we have $$\frac{dx}{2B-x}=-\left(\frac{\sinh\theta}{\cosh\theta}+1\right)d\theta$$ And we have an integral: 
$$-\ln(2B-x)=-\ln(\cosh\theta)-\theta-C_1$$ $$2B-x=C\sqrt{1+\left(y^{\prime}\right)^2}\left(y^{\prime}+\sqrt{1+\left(y^{\prime}\right)^2}\right)$$ EDIT: At this point I had initially missed that the above differential equation was capable of routine solution. First multiply out the right hand side and make a substitution: $$v=\frac{2B-x}C=y^{\prime}\sqrt{1+\left(y^{\prime}\right)^2}+1+\left(y^{\prime}\right)^2$$ Then isolate the radical and square out: $$(v-1)^2-2(v-1)\left(y^{\prime}\right)^2+\left(y^{\prime}\right)^4=\left(y^{\prime}\right)^2+\left(y^{\prime}\right)^4$$ Then we can simplify a little to $$(v-1)^2=(2v-1)\left(y^{\prime}\right)^2$$ Now we will make the final substitution of $\xi=2v-1$ so that $v=\frac{\xi+1}2$ and $$\frac{(v-1)^2}{2v-1}=\frac{(\xi-1)^2}{4\xi}=\left(y^{\prime}\right)^2$$ Taking square roots, $$\frac{\xi-1}{2\sqrt{\xi}}=y^{\prime}=\frac{dy}{dx}=\frac{dy}{d\xi}\frac{d\xi}{dv}\frac{dv}{dx}=-\frac2C\frac{dy}{d\xi}$$ We can integrate to $$\frac13\xi^{\frac32}-\xi^{\frac12}=-\frac2Cy+C_1$$ At $x=B$, $y=0$, $\xi=\xi_f=\frac{2B}C-1$ and $\frac13\xi_f^{\frac32}-\xi_f^{\frac12}=C_1$ so now the solution in terms of $\xi$ is $$\frac13\xi^{\frac32}-\xi^{\frac12}-\frac13\xi_f^{\frac32}+\xi_f^{\frac12}=-\frac2Cy$$ At $x=0$, $y=0$, $\xi=\xi_0=\frac{4B}C-1=2\xi_f+1$ and $$\frac13\xi_0^{\frac32}-\xi_0^{\frac12}-\frac13\xi_f^{\frac32}+\xi_f^{\frac12}=0$$ Isolating like radicals and squaring out, $$(2\xi_f+1)^3-6(2\xi_f+1)^2+9(2\xi_f+1)=\xi_f^3-6\xi_f^2+9\xi_f$$ Simplifying, $$7\xi_f^3-6\xi_f^2-9\xi_f+4=(\xi_f+1)(7\xi_f^2-13\xi_f+4)=0$$ With solution set $$\xi_f\in\left\{0,\frac{13+\sqrt{57}}{14},\frac{13-\sqrt{57}}{14}\right\}\cap(0,1)=\left\{\frac{13-\sqrt{57}}{14}\right\}$$ Since $$\xi_f=\frac{13-\sqrt{57}}{14}=\frac{2B}C-1$$ $$C=\frac{(27+\sqrt{57})B}{24}$$ $$\xi_0=2\xi_f+1=\frac{20-\sqrt{57}}7$$ $$x=2B-Cv=2B-C\frac{(\xi+1)}2=\left(\frac{69-\sqrt{57}}{48}-\frac{27+\sqrt{57}}{48}\xi\right)B$$ We can find the initial slope $$y^{\prime}(0)=\frac{\xi_0-1}{2\sqrt{\xi_0}}=\frac{\xi_f}{\sqrt{\xi_0}}=\frac{13-\sqrt{57}}{98}\sqrt{20+\sqrt{57}}$$ The path reaches the farthest from the straight line when $y^{\prime}=0$, $\xi=1$, so $$x=\frac{21-\sqrt{57}}{24}B$$ $$y=\frac C2\left(\frac13\xi_f^{\frac32}-\xi_f^{\frac12}-\frac13+1\right)=\left(\frac{27+\sqrt{57}}{72}-\frac{15+\sqrt{57}}{36}\sqrt{\frac{13-\sqrt{57}}{14}}\right)B$$ On a non-wall day, the path length is $$\begin{align}s_{\text{min}}&=\int_0^B\sqrt{1+\left(y^{\prime}\right)^2}dx=\int_{\xi_0}^{\xi_f}\sqrt{1+\frac{(\xi-1)^2}{4\xi}}\left(-\frac C2\right)d\xi\\ &=-\frac C2\int_{\xi_0}^{\xi_f}\frac{\xi+1}{2\sqrt{\xi}}d\xi=-\frac C2\left[\frac13\xi^{\frac32}+\xi^{\frac12}\right]_{\xi_0}^{\xi_f}\\ &=-\frac{(27+\sqrt{57})B}{144}\left[\xi_f^{\frac32}+3\xi_f^{\frac12}-\xi_0^{\frac32}-3\xi_0^{\frac12}\right]\\ &=\frac{75+\sqrt{57}}{72}\sqrt{\frac{20-\sqrt{57}}7}-\frac{51+\sqrt{57}}{72}\sqrt{\frac{13-\sqrt{57}}{14}}\end{align}$$ The mean path length is $$\begin{align}\bar s&=\frac1{2B}\int_0^B\left[(2B-x)\sqrt{1+\left(y^{\prime}\right)^2}+w-y+\sqrt{w^2+(B-x)^2}\right]dx\end{align}$$ We already have the invariant part of the integral at hand so we only need $$\begin{align}\frac1{2B}\int_0^B\left[(2B-x)\sqrt{1+\left(y^{\prime}\right)^2}-y\right]dx&=\frac1{2B}\int_{\xi_0}^{\xi_f}\left[\frac C2(\xi+1)\frac{(\xi+1)}{2\sqrt{\xi}}+\frac C6\left(\xi^{\frac32}-3\xi^{\frac12}-\xi_f^{\frac32}+3\xi_f^{\frac12}\right)\right]\left(-\frac C2d\xi\right)\\ 
&=\frac{C^2}{48B}\int_{\xi_0}^{\xi_f}\left[-5\xi^{\frac32}-3\xi^{-\frac12}+2\xi_f^{\frac32}-6\xi_f^{\frac12}\right]d\xi\\ &=\frac{C^2}{48B}\left[-2\xi^{\frac52}-6\xi^{\frac12}+2\xi_f^{\frac32}\xi-6\xi_f^{\frac12}\xi\right]_{\xi_0}^{\xi_f}\\ &=\frac{C^2}{48B}\left[-6\xi_f^{\frac12}-6\xi_f^{\frac32}+2\xi_0^{\frac52}+6\xi_0^{\frac12}-2\xi_f^{\frac32}\xi_0+6\xi_f^{\frac12}\xi_0\right]\\ &=\frac B{24^2}\left[\sqrt{\frac{20-\sqrt{57}}7}(299+\sqrt{57})+\sqrt{\frac{13-\sqrt{57}}{14}}(1+3\sqrt{57})\right]\end{align}$$ After simplification. So overall the mean length of the optimal path is $$\begin{align}\bar s&=\frac B{24^2}\left[\sqrt{\frac{20-\sqrt{57}}7}(299+\sqrt{57})+\sqrt{\frac{13-\sqrt{57}}{14}}(1+3\sqrt{57})\right]+\\ &\frac14\sqrt{B^2+w^2}+\frac{w^2}{4B}\ln\left(\frac{B+\sqrt{B^2+w^2}}w\right)+\frac w2\end{align}$$ For comparison, our analytical results are $y^{\prime}(0)=0.291906063555883$, $x_{\text{max}}=1.681270695591156$, $y_{\text{max}}=0.267103189302944$, minimum path length $s_{\text{min}}=3.065013667336774$, and mean path length $\bar s=3.648267343901591$. END OF ANALYTICAL EDITS We can set the value of $C$ by applying initial conditions for $y^{\prime}(0)$ and so we have a relation between $y^{\prime}$ and $x$: $$f(y^{\prime})=\sqrt{1+\left(y^{\prime}\right)^2}\left(y^{\prime}+\sqrt{1+\left(y^{\prime}\right)^2}\right)-\frac{2B-x}C=0$$ We can differentiate this to set up Newton's method for solving for $y^{\prime}$ given $x$: $$f^{\prime}\left(y^{\prime}\right)=\frac{\left(y^{\prime}+\sqrt{1+\left(y^{\prime}\right)^2}\right)^2}{\sqrt{1+\left(y^{\prime}\right)^2}}+\frac1C$$ Then we can solve the differential equation, keeping track of the extra length $$v^{\prime}(x)=\frac1{2B}\left[(2B-x)\sqrt{1+\left(y^{\prime}\right)^2}-y\right]$$ that weren't taken into account by integrating those 'constant' terms $$\frac1{2B}\int_0^B\left(w+\sqrt{w^2+(B-x)^2}\right)dx=\frac14\sqrt{B^2+w^2}+\frac{w^2}{4B}\ln\left(\frac{B+\sqrt{B^2+w^2}}w\right)+\frac w2$$ Then we mess around with that initial slope $y^{\prime}(0)$ until our trajectory reaches $(B,0)$ and we have a solution. The derivative function: % f.m function yprime = f(t,y); global B C y0p yp = y0p; v = y(2); x = t; err = 1; tol = 1.0e-6; while abs(err) > tol, g = sqrt(1+yp^2)*(yp+sqrt(1+yp^2))-(2*B-x)/C; gp = (yp+sqrt(1+yp^2))^2/sqrt(1+yp^2)+1/C; err = g/gp; yp = yp-err; end y0p = yp; yprime = [yp; ((2*B-x)*sqrt(1+yp^2)-y(1))/(2*B); sqrt(1+yp^2)]; The program: % Wall.m global B C y0p B = 3; y0p = 0.29190595656; C = 2*B/(sqrt(1+y0p^2)*(y0p+sqrt(1+y0p^2))); w = 1; y0 = [0 0 0]; xspan = [0 B]; options = odeset('AbsTol',1.0e-12); [t,y] = ode45(@f,xspan,y0,options); plot(t,y(:,1)); xlabel('x'); ylabel('y'); format long; err = y(end,1) meanlen = y(end,2)+1/4*sqrt(B^2+w^2)+w^2/(4*B)*log((B+sqrt(B^2+w^2))/w)+w/2 minlen = y(end,3) format; The trajectory: The final result for initial slope was $y^{\prime}(0)=0.29190595656$ and the optimal length was $3.648267344782407$. On a non-wall day, the path length was $3.065013635851723$. EDIT: By popular demand I have included a Matlab program that simulates the walk for a given polygonal path. I hope the comments and your thinking about the problem will permit you to write a similar program. Here is the subroutine that does the simulation. 
% get_lengths.m % get_lengths.m simulates the walk between points A and B % inputs: % nPoints = number of vertices of the polygonal path % x = array of the nPoints x-coordinates of the vertices % y = array of the nPoints y-coordinates of the vertices % B = distance between points A and B % w = half-width of wall % nWall = number of wall positions to be simulated % nWall must be big enough that the distance between % simulation points is less than smallest distance % between to consecutive values in array x % outputs: % meanlen = mean path length on wall day as determined % by simulation % minlen = path length on non-wall day function [meanlen,minlen] = get_lengths(nPoints,x,y,B,w,nWall); % Initially we haven't gone anywhere meanlen = 0; minlen = 0; current_y = y(1); current_x = x(1); nextx = 2; % index of next path vertex % find dimensions of first triangle in path base = x(2)-x(1); altitude = y(2)-y(1); hypotenuse = sqrt(base^2+altitude^2); % length of step in wall position dx = B/(nWall-1); % simulation loop for k = 1:nWall, xWall = (k-1)*dx; % Next wall position % Now the tricky part: we have to determine whether % the next wall placement will go beyond the next % path vertex if xWall <= x(nextx), % Find steps in path length and y using similar triangles xstep = xWall-current_x; minlen = minlen+xstep*hypotenuse/base; current_y = current_y+xstep*altitude/base; current_x = xWall; % get length of path after we hit the wall meanlen = meanlen+minlen+(w-current_y)+sqrt((B-current_x)^2+w^2); else % We have to update triangle because the next wall placement % is past the next vertex % get path length to next vertex % Step to next vertex xstep = x(nextx)-current_x; minlen = minlen+xstep*hypotenuse/base; % update triangle base = x(nextx+1)-x(nextx); altitude = y(nextx+1)-y(nextx); hypotenuse = sqrt(base^2+altitude^2); % Step to next wall position xstep = xWall-x(nextx); minlen = minlen+xstep*hypotenuse/base; current_y = y(nextx)+xstep*altitude/base; current_x = xWall; % get length of path after we hit the wall meanlen = meanlen+minlen+(w-current_y)+sqrt((B-current_x)^2+w^2); % update vertex index nextx = nextx+1; end end % turn sum of simulation lengths into mean meanlen = meanlen/nWall; Here is a test program that runs a couple of sample paths % sample.m % sample.m -- tests get_lengths with a sample set of points B = 3; % length of gauntlet normally 3 miles w = 1; % half-width of wall, normally 1 mile nPoints = 7; % number of vertices nWall = 1000; % number of wall positions % here are the x-coordinate of the vertices x = [0.0 0.5 1.0 1.5 2.0 2.5 3.0]; % here are the x-coordinate of the vertices y = [0.0000 0.1283 0.2182 0.2634 0.2547 0.1769 0.0000]; % Simulate! [meanlen, minlen] = get_lengths(nPoints,x,y,B,w,nWall); % print out results fprintf('Results for optimal path\n'); fprintf('Average wall day length = %17.15f\n',meanlen); fprintf('Average non-wall day length = %17.15f\n',minlen); fprintf('Average overall length = %17.15f\n', (meanlen+minlen)/2); % simulate another set of wall parameters y = [0.0000 0.1260 0.2150 0.2652 0.2711 0.2178 0.0000]; % Simulate! 
[meanlen, minlen] = get_lengths(nPoints,x,y,B,w,nWall); % print out results fprintf('Results for another path\n'); fprintf('Average wall day length = %17.15f\n',meanlen); fprintf('Average non-wall day length = %17.15f\n',minlen); fprintf('Average overall length = %17.15f\n', (meanlen+minlen)/2); Output of the test program Results for optimal path Average wall day length = 4.237123943909202 Average non-wall day length = 3.062718632179695 Average overall length = 3.649921288044449 Results for another path Average wall day length = 4.228281363535175 Average non-wall day length = 3.074249982789312 Average overall length = 3.651265673162244 Enjoy! EDIT: Since my program might be awkward for others to adapt, I have provided a program that computes optimal polygonal paths. It starts with a $3$-point path and varies the free point using Golden Section Search until it finds an optimal path. Then it subdivides each interval of the path, creating a $5$-point path. It makes repeated sweeps through the $3$ free points of the path until the changes are small enough. Then it subdivides again until a $65$-point path is optimized. Here is the new program: % wiggle.m % wiggle.m attempts to arrive at an optimal polygonal solution to % the wall problem. It starts with a 3-point solution and wiggles % free point up an down until it is optimally placed. Then it % places points halfway between each pair of adjacent points and % then wiggles the free points, back to front, until each is % optimal for the other point locations and repeats until the % changes from sweep to sweep are insignificant. Then more intermediate % points are added... clear all; close all; B = 3; % Length of gauntlet w = 1; % Wall half-width % Next two parameters increase accuracy, but also running time! nWall = 4000; % Number of wall positions to simulate tol = 0.25e-6; % Precision of y-values fid = fopen('wiggle.txt','w'); % Open output file for writing phi = (sqrt(5)+1)/2; % Golden ratio x = [0 B]; % initial x-values y = [0 0]; % initial guess for y-values npts = length(y); % Current number of points legends = []; % Create legends for paths nptsmax = 65; % Maximum number of points % Main loop to generate path for npts points while npts < nptsmax, % Subdivide the intervals npts = 2*npts-1; % Create x-values and approximate y-values for new subdivision xtemp = zeros(1,npts); yold = zeros(1,npts); for k = 2:2:npts-1, xtemp(k) = (x(k/2)+x(k/2+1))/2; xtemp(k+1) = x(k/2+1); yold(k) = (y(k/2)+y(k/2+1))/2; yold(k+1) = y(k/2+1); end x = xtemp; y = yold; maxerr = 1; % Each trip through this loop optimizes each point, back to front while maxerr > tol, % Each trip through this loop optimizes a single point for n = npts-1:-1:2, ytest = y; % Sample vector for current path % Guess range containing minimum if npts == 3, y1 = 0.0; y3 = 0.5; else y1 = min([y(n-1) y(n+1)]); y3 = max([y(n-1) y(n) y(n+1)]); end % Third point for golden section search y2 = y1+(y3-y1)/phi; % Find lengths for all 3 candidate points ytest(n) = y1; [u,v] = get_lengths(length(ytest),x,ytest,B,w,nWall); L1 = (u+v)/2; ytest(n) = y2; [u,v] = get_lengths(length(ytest),x,ytest,B,w,nWall); L2 = (u+v)/2; ytest(n) = y3; [u,v] = get_lengths(length(ytest),x,ytest,B,w,nWall); L3 = (u+v)/2; % It's really difficult to bracket a minimum. % If we fail the first time, we creep along for % a little while, hoping for success. 
Good for npts <= 65 :) nerr = 0; while (L2-L1)*(L3-L2) > 0, y4 = y3+(y3-y2)/phi; ytest(n) = y4; [u,v] = get_lengths(length(ytest),x,ytest,B,w,nWall); L4 = (u+v)/2; y1 = y2; L1 = L2; y2 = y3; L2 = L3; y3 = y4; L3 = L4; nerr = nerr+1; if nerr > 10, error('error stop'); end end % Now we have bracketed a minimum successfully % Begin golden section search. Stop when bracket % is narrow enough. while abs(y3-y1) > tol, % Get fourth point and length y4 = y3-(y2-y1); ytest(n) = y4; [u,v] = get_lengths(length(ytest),x,ytest,B,w,nWall); L4 = (u+v)/2; % Update bracketing set if L4 < L2, y3 = y2; L3 = L2; y2 = y4; L2 = L4; else y1 = y3; L1 = L3; y3 = y4; L3 = L4; end end % Record our new optimal point y(n) = y2; end % Find maximum change in y-values maxerr = max(abs(y-yold)); yold = y; end % Save optimal length to file fprintf(fid,'N = %d, L = %10.8f\n', npts, L2); % Only plot early curves and final curve if (npts <= 9) || npts >= nptsmax, % Add legend entry and plot legends = [legends {['N = ' num2str(npts)]}]; plot(x,y); hold on; end end % Print legend legend(legends,'Location','south'); % Print labels xlabel('x'); ylabel('y'); title('Optimal Polygonal Paths'); fclose(fid); Here are the lengths of the optimal paths found: $$\begin{array}{rr}\text{Points}&\text{Length}\\ \hline 3&3.66067701\\ 5&3.65156744\\ 9&3.64914214\\ 17&3.64852287\\ 33&3.64836712\\ 65&3.64832813\\ \end{array}$$ And of course a graphical depiction of the paths: This was counterintuitive to me in that the paths with fewer points lay entirely below the paths with more points.<|endoftext|> TITLE: Conjectured primality test for $F_n(28)=28^{2^n}+1$ QUESTION [6 upvotes]: How to prove that following conjecture is true ? Definition Let $P_m(x)=2^{-m}\cdot \left(\left(x-\sqrt{x^2-4}\right)^{m}+\left(x+\sqrt{x^2-4}\right)^{m}\right)$ , where $m$ and $x$ are nonnegative integers . Conjecture Let $F_n(28)=28^{2^n}+1$ with $n\ge 2$ . Let $S_i=P_{28}(S_{i-1})$ with $S_0=P_{14}(P_{14}(8))$ thus , $F_n(28)$ is prime iff $S_{2^n-2} \equiv 0 \pmod{F_n(28)}$ PARI/GP implementation of test T28(n)= { my(s=Mod(2*polchebyshev(14,1,polchebyshev(14,1,4)),28^2^n+1)); for(i=1,2^n-2, s=2*polchebyshev(28,1,s/2)); s==0 } Searching for counterexample (PARI/GP) CE28(x,y)= { for(n=x,y, N=28^2^n+1; my(s=Mod(2*polchebyshev(14,1,polchebyshev(14,1,4)),N)); for(i=1,2^n-2, s=2*polchebyshev(28,1,s/2)); if(s==0 && !isprime(N),print(n))) } P.S. There is Inkeri's primality test for Fermat numbers in the literature which is very similar to this test , but there is no freely available proof of his test . REPLY [4 votes]: This is a partial answer. This answer proves the following : $$\text{if $F_n(28)$ is prime, then $S_{2^n-2}\equiv 0\pmod{F_n(28)}$}$$ Proof : First of all, we prove by induction that $$S_i=a^{28^i\times 14^2}+b^{28^i\times 14^2}\tag1$$ where $a=4-\sqrt{15},b=4+\sqrt{15}$ with $ab=1$. 
Proof for $(1)$: $$\begin{align}S_0&=P_{14}(P_{14}(8))\\&=P_{14}\left(2^{-14}\left(\left(8-\sqrt{60}\right)^{14}+\left(8+\sqrt{60}\right)^{14}\right)\right)\\&=P_{14}(a^{14}+b^{14})\\&=2^{-14}\left(\left(a^{14}+b^{14}-\sqrt{(a^{14}+b^{14})^2-4}\right)^{14}+\left(a^{14}+b^{14}+\sqrt{(a^{14}+b^{14})^2-4}\right)^{14}\right)\\&=2^{-14}\left(\left(a^{14}+b^{14}-\sqrt{(b^{14}-a^{14})^2}\right)^{14}+\left(a^{14}+b^{14}+\sqrt{(b^{14}-a^{14})^2}\right)^{14}\right)\\&=2^{-14}\left(\left(2a^{14}\right)^{14}+\left(2b^{14}\right)^{14}\right)\\&=a^{28^0\times{14}^{2}}+b^{28^0\times {14}^2}\end{align}$$ Supposing that $(1)$ holds for $i$ gives $$\begin{align}S_{i+1}&=P_{28}(S_{i})\\&=P_{28}(a^{28^i\times{14}^{2}}+b^{28^i\times{14}^2})\\&=2^{-28}\left(\left(a^{28^i\times{14}^2}+b^{28^i\times{14}^2}-\sqrt{(a^{28^i\times{14}^{2}}+b^{28^i\times{14}^2})^2-4}\right)^{28}+\left(a^{28^i\times{14}^2}+b^{28^i\times{14}^2}+\sqrt{(a^{28^i\times{14}^{2}}+b^{28^i\times{14}^2})^2-4}\right)^{28}\right)\\&=2^{-28}\left(\left(a^{28^i\times{14}^2}+b^{28^i\times{14}^2}-\sqrt{(b^{28^i\times{14}^2}-a^{28^i\times{14}^2})^2}\right)^{28}+\left(a^{28^i\times{14}^2}+b^{28^i\times{14}^2}+\sqrt{(b^{28^i\times{14}^2}-a^{28^i\times{14}^2})^2}\right)^{28}\right)\\&=2^{-28}\left((2a^{28^i\times{14}^2})^{28}+(2b^{28^i\times{14}^2})^{28}\right)\\&=a^{28^{i+1}\times 14^2}+b^{28^{i+1}\times 14^2}\qquad\blacksquare\end{align}$$ Let $N=28^{2^n}+1$. Then, from $(1)$, we have $$S_{2^n-2}=a^{28^{2^n-2}\times 14^2}+b^{28^{2^n-2}\times 14^2}=a^{(N-1)/4}+b^{(N-1)/4}$$ Now, in order to prove that $S_{2^n-2}\equiv 0\pmod N$, it is sufficient to prove that $S_{2^n-2}^2\equiv 0\pmod N$, i.e. $$a^{(N-1)/2}+b^{(N-1)/2}\equiv -2\pmod N.$$ We use $$a^{(N+3)/2}+b^{(N+3)/2}=(a+b)(a^{(N+1)/2}+b^{(N+1)/2})-(a^{(N-1)/2}+b^{(N-1)/2})\tag2$$ Using that $$\sqrt{4\pm\sqrt{15}}=\frac{\sqrt{10}\pm\sqrt 6}{2}$$ we have $$\begin{align}&2^{N+1}(a^{(N+1)/2}+b^{(N+1)/2})\\&=\left(\sqrt{10}-\sqrt 6\right)^{N+1}+\left(\sqrt{10}+\sqrt 6\right)^{N+1}\\&=\sum_{i=0}^{N+1}\binom{N+1}{i}(\sqrt{10})^i((-\sqrt 6)^{N+1-i}+(\sqrt 6)^{N+1-i})\\&=\sum_{j=0}^{(N+1)/2}\binom{N+1}{2j}(\sqrt{10})^{2j}\cdot 2(\sqrt{6})^{N+1-2j}\\&\equiv 2\cdot 6^{(N+1)/2}+10^{(N+1)/2}\cdot 2\qquad\pmod N\\&\equiv 2\cdot (-6)+(-10)\cdot 2\qquad\pmod N\\&\equiv -32\qquad\pmod N\end{align}$$ (because $6^{(N-1)/2}\equiv -1\equiv 10^{(N-1)/2}\pmod N$) from which we have $$a^{(N+1)/2}+b^{(N+1)/2}\equiv -8\qquad\pmod N$$ since $2^{N+1}\equiv 4\pmod N$ is coprime to $N$. 
Also, $$\small\begin{align}&2^{N+3}(a^{(N+3)/2}+b^{(N+3)/2})\\&=\left(\sqrt{10}-\sqrt 6\right)^{N+3}+\left(\sqrt{10}+\sqrt 6\right)^{N+3}\\&=\sum_{i=0}^{N+3}\binom{N+3}{i}(\sqrt{10})^i((-\sqrt 6)^{N+3-i}+(\sqrt 6)^{N+3-i})\\&=\sum_{j=0}^{(N+3)/2}\binom{N+3}{2j}(\sqrt{10})^{2j}\cdot 2(\sqrt 6)^{N+3-2j}\\&\equiv 2\cdot 6^{(N+3)/2}+\frac{(N+3)(N+2)}{2}\cdot 10\cdot 2\cdot 6^{(N+1)/2}+\frac{(N+3)(N+2)}{2}\cdot 10^{(N+1)/2}\cdot 2\cdot 6+10^{(N+3)/2}\cdot 2\quad\pmod N\\&\equiv 2\cdot 6^{(N+3)/2}+6\cdot 10\cdot 6^{(N+1)/2}+6\cdot 10^{(N+1)/2}\cdot 6+10^{(N+3)/2}\cdot 2\qquad\pmod N\\&\equiv 2(-6^2)+60(-6)+6\cdot (-10)\cdot 6+(-10^2)\cdot 2\qquad\pmod N\\&\equiv -992\qquad\pmod N\end{align}$$ from which we have $$a^{(N+3)/2}+b^{(N+3)/2}\equiv -62\qquad\pmod N$$ Hence, from $(2)$, $$\begin{align}a^{(N-1)/2}+b^{(N-1)/2}&=8(a^{(N+1)/2}+b^{(N+1)/2})-(a^{(N+3)/2}+b^{(N+3)/2})\\&\equiv 8(-8)-(-62)\qquad\pmod N\\&\equiv -2\qquad\pmod N\end{align}$$ Thus, $$\begin{align}S_{2^n-2}^2&=(a^{(N-1)/4}+b^{(N-1)/4})^2\\&=a^{(N-1)/2}+b^{(N-1)/2}+2\\&\equiv 0\qquad\pmod N\end{align}$$ from which $S_{2^n-2}\equiv 0\pmod{F_n(28)}$ follows.<|endoftext|> TITLE: Approximate spectral decomposition QUESTION [11 upvotes]: I am interested in effective computations for finding approximate spectral decompositions in some suitable format. Namely, let $A: H \rightarrow H$ be a Hermitian operator on an $n$-dimensional Hilbert space $H$ with the spectrum $\{\lambda_1, \ldots, \lambda_m\}$, $m \leq n$. Then, $A$ can be decomposed as: $$ A = \sum_{i=1}^{m}\lambda_i P_i,$$ where $P_i$, $i=1,\ldots, m$, are orthogonal projections with $P_i P_j = 0$, $i \neq j$, onto the eigenspaces $H_i = \ker \{ \lambda_i I - A \}$ such that: $$ H= \displaystyle \underset{i=1}{\overset{m}{\oplus}} H_i.$$ In an approximate format, the theorem can be stated as follows (p. 380): for any $\varepsilon > 0$, there exist projections $P_i$, $i=1, \ldots, n$, with $P_i P_j = 0$, $i \neq j$, and real numbers $\alpha_1, \ldots \alpha_n$ such that $\big|\big| A - \displaystyle \sum_{i=1}^{n} \alpha_i P_i \big|\big| \leq \varepsilon$. What about the approximate eigenspaces? A particular example is this article, but it addresses exact spectral decomposition at the cost of additional input (cardinality of spectrum). REPLY [2 votes]: This is not an answer, and don't take it as such. From a theoretical point of view, you have $A=\sum_{k=1}^{n}\lambda_k P_k$ where the $P_k$ are orthogonal projections. If you start iterating in $A$, or perform various functions of $A$, the problem is that $f(A)=\sum_{k=1}^{n}f(\lambda_k)P_k$. You can't get down below the level of a projection. You can form functions of $A$ in such a way that $f_k(A)=P_k$, but that's the smallest granularity. Anything else starts with a vector and, even then, the best you can do is $P_kv$ for some vector $v$. If you know where an eigenvalue is located, then you can use Complex Analysis to assert that $$ P_{k} = \frac{1}{2\pi i}\oint_{|\lambda-\lambda_k|=\delta}(\lambda I-A)^{-1}d\lambda. $$ where the circular contour is positively oriented, centered at $\lambda_k$ and of radius $\delta > 0$ chosen small enough that there are no other eigenvalues on or inside the circle of radius $\delta$ centered at $\lambda_k$.
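As a quick numerical illustration of this contour-integral representation, here is a minimal sketch in Python/NumPy: it approximates the integral by the trapezoidal rule on the circle and compares the result with the exact eigenprojection. The test matrix, the eigenvalue index, the radius $\delta$ and the number of quadrature nodes are all arbitrary illustrative choices, not anything taken from the question.

import numpy as np

# Hermitian test matrix with well-separated eigenvalues 1, 2, 5, 9 (arbitrary choice)
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = Q @ np.diag([1.0, 2.0, 5.0, 9.0]) @ Q.T

w, V = np.linalg.eigh(A)
k = 1                                   # target the eigenvalue 2.0
P_exact = np.outer(V[:, k], V[:, k])    # exact orthogonal projection onto ker(w[k]*I - A)

# P_k = (1/(2*pi*i)) * integral of (lambda*I - A)^{-1} over the circle |lambda - w[k]| = delta
delta, m = 0.5, 400                     # radius and number of quadrature nodes
P_num = np.zeros((4, 4), dtype=complex)
for t in np.linspace(0.0, 2.0*np.pi, m, endpoint=False):
    lam = w[k] + delta*np.exp(1j*t)              # point on the contour
    dlam = 1j*delta*np.exp(1j*t)*(2.0*np.pi/m)   # d(lambda) for this node
    P_num += np.linalg.solve(lam*np.eye(4) - A, np.eye(4)) * dlam
P_num /= 2j*np.pi

print(np.max(np.abs(P_num - P_exact)))  # agrees to roughly machine precision

Since the integrand is periodic in the angle, the trapezoidal rule converges very quickly here, and the same loop returns the sum of the eigenprojections whenever several eigenvalues happen to lie inside the circle, which is the perturbation behavior discussed next.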
What's nice about such a representation is this: if you perturb $A$ so that the eigenvalues don't change by much, even if there are bifurcations in the eigenvalues, so long as the eigenvalues do not cross the boundary of your circle of integration, then $P_k$ remains a projection--it is the projection onto the direct sum of the eigenspaces associated with all eigenvalues in the circle. This tells you something about what happens under perturbations. What's nice about this is that you don't really care whether the eigenvalues are very close or whether they're exactly on top of each other--this integral will give you the sum of all the projections associated with eigenvalues in the circle. And the sum of all such projections is an orthogonal projection. I suspect from your point of view of computability, this may be interesting, but I don't know. If you know the positions of eigenvalues to a certain accuracy, then you could theoretically perform several such integrations in order to obtain $\lambda_1,\cdots,\lambda_k$ and contour-integration-derived projections $P_l$ such that $\|\sum_{l=1} \lambda_l P_l-A\| < \epsilon$. The projection $P_l$ would be the sum of all the orthogonal projections for eigenvalues within a certain tolerance of $\lambda_l$. This approximation would in the operator norm, with the exact difference being $$ \sum_{l}\sum_{\{m : |\lambda_m-\lambda_l| < \epsilon\}}\lambda_m Q_m -\sum_{l}\sum_{\{ m : |\lambda_m-\lambda_l| < \epsilon\} }\lambda_l Q_m \\ = \sum_{l}\sum_{\{m : |\lambda_m-\lambda_l|\}}(\lambda_m-\lambda_l)Q_m $$ The projection $P_l$ is then given by $$ P_l = \sum_{\{ m : |\lambda_m-\lambda_l| \}}Q_m. $$ These are orthogonal projections, and $P_l P_l' = 0$ if $l\ne l'$.<|endoftext|> TITLE: If $f$ is entire and $\lim_\limits{z\to\infty} \frac{f(z)}{z} = 0$ show that $f$ is constant QUESTION [14 upvotes]: I'm learning Complex Analysis and need to verify my work to this problem since my textbook does not provide any solution: If $f$ is entire and $\lim_\limits{z\to\infty} \dfrac{f(z)}{z} = 0$ show that $f$ is constant. My work and thoughts: From the $\varepsilon$ — $\delta$ definition of the limit we have that $$\forall{\varepsilon} > 0, \exists{n_0} \in \mathbb{N} : \forall{\left|z\right|} \geq n: \left| \frac{f(z)}{z} \right| < \varepsilon \iff \frac{\left| f(z) \right|}{\left| z \right|} < \varepsilon\iff \left| f(z) \right| < \varepsilon \left| z \right|.$$ Now let $C_R = \{z \in \mathbb{C} : \left| z \right| = R \}$. For every $\left| z \right| < R$, by Cauchy's integral formula for derivatives we have that $$ \left| f'(z) \right| = \frac{1}{2 \pi } \left| \int_{|\zeta|=R} \frac{f(\zeta)}{(\zeta - z)^2} \, d\zeta \right|= \frac{1}{2 \pi } \left| \int_{0}^{2\pi} \frac{f(\zeta)}{(\zeta - z)^2} \, \zeta'(t) dt \right| \le$$ $$\le \frac{1}{2\pi} \frac{\varepsilon \left| z \right|}{(R - \left| z \right|)^2} 2\pi R = \frac{\varepsilon \left| z \right|}{(R - \left| z \right|)^2} R.$$ Thus, letting $R \rightarrow \infty$ yields the desired result, that is $$\left| f'(z) \right| \leq \lim_{R \to \infty} \frac{\varepsilon \left| z \right|}{(R - \left| z \right|)^2} R = 0 \implies f(z) = c \;\; \text{with} \; c \in \mathbb{C}.$$ Is my work correct? Are there parts of the proof that need improvements? I'm also looking for other (possibly quicker) solutions using the "big guns" theorems. The only one I'm familiar with is Picard's little theorem but it doesn't apply here. 
REPLY [5 votes]: As @Dr.Mv said, we have that $\lim_{z\rightarrow\infty}f'(z)=0$, which means that $f'$ is bounded. By Liouville's theorem $f'$ is constant and, since $\lim_{z\rightarrow\infty}f'(z)=0$, $f'=0$. Thus, $f$ is constant.<|endoftext|> TITLE: Calculate $\operatorname{RHom}$ in the derived category of graded $\mathbb{C}[x]$-Modules QUESTION [5 upvotes]: I was trying to do the following exercise. Consider the category of graded $\mathbb{C}[x]$-Modules; it is clear that we can regard $\mathbb{C}[x]$ as a graded module setting $\operatorname{deg}(x)=1$. Given the module $M=\mathbb{C}[x]/(x)$ I want to show that $\operatorname{RHom}(M,\mathbb{C}[x](-1)[1])\cong M$ where $\mathbb{C}[x](-1)$ is the shift of the grading of $\mathbb{C}[x]$ by $-1$. First of all $\operatorname{RHom}: \mathcal{D}(\mathcal{A}) \to \mathcal{D}(\mathcal{Ab})$ where $\mathcal{A}$ is the category of graded $\mathbb{C}[x]$-Modules. I think I should show that $RHom(M,\mathbb{C}[x](-1)[1])$ is a complex $B^{\bullet}$ with the property that $H^{n}(B^{\bullet})=0$ for every $n$ except $n=0$. To do this I could replace $\mathbb{C}[x](-1)$ by a graded injective resolution $I^{\bullet}$ (I have no idea of how to find this) and then take $\operatorname{Hom}(M,I^{\bullet}[1])$. Now some questions: How do I find $I^{\bullet}$? Should I try to find a short exact sequence to compute this instead of the resolution? What's the point of shifting $I^{\bullet}$? What's the point of shifting $\mathbb{C}[x]$? REPLY [3 votes]: First it is easy to see that $\operatorname{Hom}(M,\mathbb{C}[x](-1))\cong 0$ since $M$ has only elements in degree $0$ and these must be sent to $\mathbb{C}[x]_{-1}=0$. There is an easy projective resolution of $M$ given by: $$0 \to \mathbb{C}[x](-1) \xrightarrow{d_1} \mathbb{C}[x] \xrightarrow{d_0} M \to 0$$ where $d_1(p(x))=xp(x)$ and $d_0$ is the quotient map. This resolution is also a short exact sequence, so we can get a long exact sequence: $$ 0 \to \operatorname{Hom}(M,\mathbb{C}[x](-1)) \to \operatorname{Hom}(\mathbb{C}[x],\mathbb{C}[x](-1)) \to \operatorname{Hom}(\mathbb{C}[x](-1),\mathbb{C}[x](-1))\to\\ \to\operatorname{Ext}^{1}(M,\mathbb{C}[x](-1)) \to \operatorname{Ext}^{1}(\mathbb{C}[x],\mathbb{C}[x](-1)) \to \operatorname{Ext}^{1}(\mathbb{C}[x](-1),\mathbb{C}[x](-1)) \to 0$$ Clearly, since $\mathbb{C}[x]$ and $\mathbb{C}[x](-1)$ are free, the last two terms must vanish. Also, $\operatorname{Hom}(\mathbb{C}[x],\mathbb{C}[x](-1))$ must be zero for degree reasons, so finally $$\operatorname{Hom}(\mathbb{C}[x](-1),\mathbb{C}[x](-1))\cong \operatorname{Ext}^{1}(M,\mathbb{C}[x](-1)) \cong \mathbb{C} \cong M$$<|endoftext|> TITLE: What is the *exact* consistency strength of $MK$? QUESTION [7 upvotes]: It's well known that the existence of an inaccessible cardinal proves $Con(MK)$. Joel Hamkins has a nice blog post (http://jdh.hamkins.org/km-implies-conzfc/) that explains what you get out of $MK$, and in particular that $MK \vdash Con_n(ZFC)$ for every $n$ [here $Con_n(ZFC)$ is the iteration of the consistency sentence $n$-many times, e.g. $Con_4(ZFC) =_{df} Con(Con(Con(Con(ZFC))))$]. Question: Is the exact consistency strength of $MK$ known? REPLY [8 votes]: Thanks for your kind words about my blog post. Let me try to answer your question. To describe the consistency strength of a theory or assertion, we should compare the consistency of that theory or assertion to that of other more familiar or landmark theories or assertions.
For example, the consistency strength of ZFC plus the continuum hypothesis is ZFC itself; the consistency strength of ZF+DC+all sets are Lebesgue measurable is the same as ZFC + there is an inaccessible cardinal. We compare our given theory or statement to a landmark theory. The issue with your question, then, is that KM is itself such a landmark theory. The exact consistency strength of KM is: KM itself. Your question is a little like asking, "What is the exact consistency strength of ZFC plus an inaccessible cardinal?" The answer would be: ZFC plus an inaccessible cardinal. But naturally such an answer will not satisfy. Perhaps you could explain what kind of answer you were seeking? Meanwhile, it is possible to explain how the strength of KM relates to other large cardinals. It doesn't line up exactly with any of the usual large cardinals. (Although I view KM itself as a kind of large cardinal axiom.) Lower bounds. The post on my blog to which you link explains that KM is strictly stronger than ZFC in consistency strength, and the argument given there shows that KM implies that there is a proper class club of cardinals $\kappa$ with $V_\kappa\prec V$. Thus, these are all worldly cardinals, and by elementarity each of them will be a limit of worldly cardinals. So one can begin to climb the degrees of worldliness, in the style of hyperMahloness, and see that there is a stationary proper class of hyperworldly cardinals, hyper-hyper worldly cardinals, and so on. Another lower bound is provided by my recent paper V. Gitman, J. D. Hamkins, Open determinacy for class games, in review. Namely, we prove that the principle of clopen determinacy for proper class games is equivalent over GBC to the principle ETR of elementary transfinite recursion, which allows transfinite recursion over proper class well-founded relations, which are not necessarily set theory. That principle also gives the truth predicate, which leads to the worldly cardinals as in the previous paragraph. These theories are strictly weaker than GBC + $\Pi^1_1$-comprehension, which is also strictly weaker than KM. Upper bound. Meanwhile, KM is strictly weaker than ZFC + there is an inaccessible cardinal. This is simply because if $\kappa$ is an inaccessible cardinal, then $\langle V_\kappa,\in,V_{\kappa+1}\rangle$ is a model of KM, and so from an inaccessible cardinal we can deduce $\text{Con}(KM)$ and $\text{Con}(KM+\text{Con}(KM))$ and more, iterating many times. Equiconsistencies. It turns out that KM is equiconsistent with a natural strengthening of KM denoted by $\text{KM}^+$, which includes the class-choice principle: $\forall x\exists X\ \varphi(x,X)\to\exists Y\forall x\ \varphi(x,Y_x)$. The axiom says that if for every set $x$ there is a class $X$ with a certain property, then you can find a class $Y\subset V\times V$, whose slices $Y_x$ serve as witnesses. Gitman, Johnstone and I have proved that this assertion is not provable in KM itself, but one can construct a model of the stronger theory from any model of KM, so they are equiconsistent.<|endoftext|> TITLE: Proving continuity and monotonicity of $t\mapsto t^x, t>0$ with minimal assumptions. QUESTION [11 upvotes]: I'm trying to prove that The function $t\mapsto t^x,\, x\in \Bbb R,\, t>0$ is continuous and monotonic. Suppose $+, \cdot\,:\Bbb R^2\to \Bbb R$ (addition and multiplication) have already been defined (via the standard dedekind cuts construction). If $a\in \Bbb Z$, we define $t^a=t^{a-1}\cdot t$, $t^{1}=t$. 
If $r=\frac a b\in \Bbb Q$, we define $t^r=\sup \{x: x^b<t^a\}$. If $t>1$, we define $t^x=\sup\{t^r: r<x,\ r\in \Bbb Q\}$. REPLY: 1. The construction of $t^x$ for $t>1$ and $x \in \Bbb R$ If $n = 0$, define $t^n := 1$; if $n \in \Bbb N \setminus \{ 0 \}$, define $t^n := t^{n-1} t$. If $n \in \Bbb Z \setminus \Bbb N$, define $t^n := \left( \frac 1 t \right) ^{-n}$. Assume now that $t>1$. If $n \in \Bbb N \setminus \{ 0 \}$, define $t^{\frac 1 n} := \sup \{ x \mid x^n \le t \}$. Notice that $\{ x \mid x^n \le t \} \ne \emptyset$ because it surely contains $1$. Notice, too, that $t>1$ implies $t^2 > t$ and $t^2$ is an upper bound for this set, which implies that the $\sup$ is finite. If $t<1$ replace $\sup$ by $\inf$ in the above. If $t=1$ then things are clear. Assume now $t > 1$. Since $t ^{\frac 1 n} = \sup \{ x \mid x^n \le t \}$, let $(x_j) \subset \{ x \mid x^n \le t \}$ be any sequence such that $x_j \to t^{\frac 1 n}$ (there always exist sequences in a set tending to the $\sup$ or $\inf$ of that set). Then $x_j ^n \le t$, so passing to the limit produces $\left( t ^{\frac 1 n} \right) ^n \le t$. It is easy to prove that $\sup \{ x \mid x^n \le t \} = \inf \{ x \mid x^n \ge t \}$. Taking, then, $(x_j) \subset \{ x \mid x^n \ge t \}$ with $x_j \to t^{\frac 1 n}$, one gets that $x_j ^n \ge t$ so passing to the limit implies $\left( t ^{\frac 1 n} \right) ^n \ge t$. Since we have obtained inequalities in both directions, it follows that $\left( t ^{\frac 1 n} \right) ^n = t$. If $0 < t < 1$ the same argument works with $\inf$ in place of $\sup$. Next, let us check that $(st)^{\frac 1 n} = s^{\frac 1 n} t^{\frac 1 n}$ for $s,t>0$. If $st > 1$, then either $s>1$ or $t>1$. To make a choice, let's assume $t>1$, the proof being identical if $s>1$. We have $$\left( st \right) ^{\frac 1 n} = \sup \{ x \mid x^n \le st \} = \sup \left\{ x \left| x^n \le \left( s ^{\frac 1 n} \right) ^n t \right. \right\} = \sup \left\{ x \left| \left( \frac x {s ^{\frac 1 n}} \right) ^n \le t \right. \right\} = \dots$$ and, if you make the change of variable $x = y \ s ^{\frac 1 n}$, the above continues as $$\dots = \sup \left\{ y \ s ^{\frac 1 n} \Big| y^n \le t \right\} = s ^{\frac 1 n} \sup \{ y \mid y^n \le t \} = s ^{\frac 1 n} t ^{\frac 1 n} .$$ If $st = 1$ then $t = \frac 1 s$ and things are easy. If $st < 1$ then at least one of them is $<1$. To make a choice, assume $t<1$ and use the same idea as above, but replace $\sup$ by $\inf$. If $a, b \in \Bbb N \setminus \{0\}$, let us prove by induction on $a$ that $(t^a) ^{\frac 1 b} = \left( t ^{\frac 1 b} \right) ^a$. If $a=1$ then you just have the definition. If $a>1$ then $\left( t ^{\frac 1 b} \right) ^a = \left( t ^{\frac 1 b} \right) ^{a-1} \ \left( t ^{\frac 1 b} \right) = (t^{a-1}) ^{\frac 1 b} \ t^{\frac 1 b} = (t^{a-1} t) ^{\frac 1 b} = (t^a) ^{\frac 1 b}$. If $a,b>0$ with $\gcd (a,b) = 1$, define $t^{\frac a b} := (t^a) ^{\frac 1 b} = \left( t^ {\frac 1 b} \right) ^a$. If $\gcd (a,b) = d$, let $a' = \frac a d, \ b' = \frac b d$ and define $t^{\frac a b} := t^{\frac {a'} {b'}}$. Then, if $q \in \Bbb Q _+$, $t^q$ does not depend on how you represent $q$ as a fraction. If $q \in \Bbb Q, q<0$, then define $t^q = \left( \frac 1 t \right)^{-q}$. If $t>1$ and $x>0$ or $t<1$ and $x<0$ then define $t^x := \sup \{ t^q \mid q \in \Bbb Q, \ q \le x\} $. If $t>1$ and $x<0$ or $t<1$ and $x>0$ then use $\inf$ instead of $\sup$. If $t=1$ or $x=0$ then things are trivial. 2. The monotonicity of $t \mapsto t^x$ We shall assume in the first part of the proof that $x \ge 0$. Let $n \in \Bbb N$ and $1 < s < t$. Let us show by induction on $n$ that $s^n < t^n$. If $n=1$ then this is obvious. If $n>1$ then $s^n = s^{n-1} s < t^{n-1} s < t^{n-1} t = t^n$.
So far, we have proved that the power function is strictly increasing for non-zero natural powers. Suppose that $s ^{\frac 1 n} > t ^{\frac 1 n}$. Using the above paragraph, we obtain that $\left( s ^{\frac 1 n} \right) ^n > \left( t ^{\frac 1 n} \right) ^n$. Using the fact that $\left( t ^{\frac 1 n} \right) ^n = t$, this becomes $s>t$, which contradicts the choice $s<t$. Hence $s^{\frac 1 n} \le t^{\frac 1 n}$, and since raising to the $n$-th power shows that equality would force $s=t$, in fact $s^{\frac 1 n} < t^{\frac 1 n}$. Now let $q = \frac a b \in \Bbb Q$ with $a,b>0$. Then, chaining the result obtained so far, $$s^q = s^{\frac a b} = (s^a) ^{\frac 1 b} < (t^a) ^{\frac 1 b} = t^{\frac a b} = t^q .$$ This shows that the power function is strictly increasing for strictly positive rational powers. Finally, $$s^x = \sup \{ s^q \mid q\in \Bbb Q, \ q \le x \} \le \sup \{ t^q \mid q\in \Bbb Q, \ q \le x \} = t^x ,$$ so $t \mapsto t^x$ is increasing for $x \ge 0$ on $(1, \infty)$. If $0 < s < t < 1$, the same argument, with $\inf$ in place of $\sup$, applies.<|endoftext|> TITLE: Limits of integrals QUESTION [12 upvotes]: How would you show that if $f : [0,1] \rightarrow \Bbb R$ is continuous, then $$\lim_{n\rightarrow \infty}\int_0^1\int_0^1 \cdots \int_0^1 f\left( \frac{x_1+x_2+\cdots+x_n}{n} \right)~dx_1~dx_2\cdots dx_n = f\left( \frac{1}{2} \right) $$ and $$\lim_{n\rightarrow \infty}\int_0^1\int_0^1 \cdots \int_0^1 f((x_1 x_2 \cdots x_n)^{\frac{1}{n}})~dx_1~dx_2\cdots dx_n = f\left(\frac{1}{e}\right) $$ REPLY [2 votes]: For the second problem, recall that Weierstrass tells us that polynomials are dense in $C[0,1].$ So it's enough to prove the result for polynomials, and for this it's enough to prove it for each $x^k.$ For $x^k$ the $n$-fold integral works out nicely to be $1/(1+k/n)^n.$ This has limit $1/e^k = (1/e)^k,$ which is the desired answer for this function. Weierstrass can also be used for the first problem as well, although it's not quite as simple.<|endoftext|> TITLE: Giving a specific example of a positive sequence increasing to 1 and with its partial products having a positive limit QUESTION [6 upvotes]: In my real analysis class I was asked this, which got me stuck: Is there an example of a sequence of real positive numbers increasing to the limit 1 $ \{ a_n \}_{n=1}^{\infty} $ such that the partial products $ a_1 , a_1a_2,a_1a_2a_3,... $ converge to a positive limit? I thought about it and thought it might be true because I could not disprove it generally, but I cannot come up with an example. Would someone please be able to provide an example, if any? Thanks to all helpers. REPLY [14 votes]: Take any positive decreasing sequence $x_1 , x_2, \dots$ that converges to a positive value $x$; to make the ratios below increasing, choose it log-convex, i.e. $x_{n+1}^2 \le x_n x_{n+2}$ (for instance $x_n = x + \frac 1 n$). Define $a_n = \frac{x_{n+1}}{x_n}$. Then $a_n$ increases to $1$ and $$a_1 a_2 \dots a_n = \frac{x_{n+1}}{x_1}\to \frac{x}{x_1}>0$$<|endoftext|> TITLE: Proof by induction, 1 · 1! + 2 · 2! + ... + n · n! = (n + 1)! − 1 QUESTION [9 upvotes]: So I'm supposed to prove that $$1 · 1! + 2 · 2! + \dots + n · n! = (n + 1)! − 1$$ using induction. What I've done Basic Step: Let $n=1$, $$1\cdot1! = 1\cdot1 = 1 = (n+1)!-1 = 2!-1 = 2-1 = 1$$ Induction Step: Assume $f(k) = 1\cdot1! + 2\cdot2! + \dots + k\cdot k! = (k+1)!-1$ \begin{align} F(k+1) &= 1\cdot1! + 2\cdot2! + \dots + k\cdot k! + (k+1)\cdot(k+1)!\\ &= (k+1)!\ - 1 + (k+1)\cdot(k+1)!\\ &= (k+1)!\cdot((k+1) - 1) = (k+1)!\cdot(k) \end{align} I think I'm supposed to make $(k+1)!\cdot k = ((k+1)+1)!+1 = (k+2)!-1$ but I'm not sure how to get there. REPLY [5 votes]: Bernard's answer highlights the key algebraic step, but I thought I might mention something that I have found useful when dealing with induction problems: whenever you have an induction problem like this that involves a sum, rewrite the sum using $\Sigma$-notation.
It makes everything more concise and easier to manipulate: \begin{align} \sum_{i=1}^{k+1}i\cdot i!&=\sum_{i=1}^k i\cdot i!+(k+1)(k+1)!\tag{by definition}\\[1em] &= [(k+1)!-1]+(k+1)(k+1)!\tag{induction hyp.}\\[1em] &= (k+1)![1+(k+1)]-1\tag{rearrange}\\[1em] &= (k+1)![k+2]-1\tag{simplify}\\[1em] &= (k+2)!-1.\tag{by definition} \end{align}<|endoftext|> TITLE: If a polynomial ring is a free module over some subring, is that subring itself a polynomial ring? QUESTION [8 upvotes]: Suppose I have a graded polynomial ring $k[x_1,\ldots,x_n]$ on homogeneous generators, where $k$ is a field and the $x_i$ indeterminates, and further that I have a homogeneous graded subring $A$ such that $k[x_1,\ldots,x_n]$ is made a free $A$-module by the inclusion, $k[x_1,\ldots,x_n]$ is finite over $A$. It follows, as pointed out in comments, that then $A$ contains another polynomial ring on $n$ variables. I'd like this to imply $A$ is a polynomial ring too, but is it? REPLY [4 votes]: The answer to my question is in the affirmative. I've now found two proofs. The result is apparently old and originally due to Macaulay. One is on pp. 155 and 171 of Larry Smith's book Polynomial Invariants of Finite Groups. It uses some homological algebra to prove first that without the finiteness assumption, the polynomial ring is free over $A$ if and only if $A$ is a polynomial ring on a regular sequence of generators. More than this, finiteness of the polynomial ring over $A$ shows this sequence can have length no less than $n$, and regularity that it can have length no more than $n$. It is also shown that given a sequence $f_1,\ldots,f_n \in k[x_1,\ldots,x_n]$, it is regular if and only if the $f_i$ are algebraically independent and $k[x_1,\ldots,x_n]$ is a finite $k[f_1,\ldots,f_n]$-module. The second is in Richard Kane's Reflection Groups and Invariant Theory. Starting on p. 195, he wants to show that if a polynomial ring $S(V)$ (symmetric algebra on a vector space $V$) is finite over $R = S(V)^G$ the invariants of some $G$-action by pseudo-reflections on $V$ and $S$ is a finite, free $R$-module, then $R$ is a polynomial algebra. He carries this through with a generators-and-relations level approach involving formal partial derivatives and Euler's theorem on homogeneous functions to leverage a hypothetical algebraic relation on the generators of $R$ into an $S$-linear and then an $R$-linear relation among these same generators and contradict minimality. But nothing in his approach uses anything other than that $S$ is a polynomial ring and there is some generator of $R$ of order relatively prime to the characteristic of $k$.<|endoftext|> TITLE: Quotient of compact metric space is metrizable (when Hausdorff)? QUESTION [6 upvotes]: Apparently it's a standard result that, although not every (Hausdorff) quotient of a metric space is metrizable, it always is metrizable when the space being quotiented is compact. Alas, I can't find a proof - neither in textbooks nor one of my own. Can anyone give a hint, or a reference (or a proof, of course)? Question 2: Is every quotient of a compact metric space (i.e. whether this quotient is Hausdorff or not) second-countable? REPLY [5 votes]: Here is a somewhat high-tech proof for the first question that I'm fond of. First, note that if $K$ is a compact Hausdorff space, then $K$ is metrizable iff the space $C(K)$ of continuous real-valued functions on $K$ is separable (with respect to the sup norm). 
Indeed, $K$ is metrizable iff $K$ embeds in the space $[0,1]^\mathbb{N}$, and $K$ embeds in $[0,1]^\mathbb{N}$ iff there are countably many elements of $C(K)$ that separate points of $K$. If $C(K)$ is separable, you can find countably many such elements by taking a countable dense subset. Conversely, if countably many functions separate points, then by the Stone-Weierstrass theorem the set of all polynomials in these functions with rational coefficients is dense in $C(K)$, so $C(K)$ is separable. Now we simply observe that if $X$ is a compact metric space and $K$ is a Hausdorff quotient of $X$, the quotient map $X\to K$ induces an isometric embedding $C(K)\to C(X)$. Since $X$ is metrizable, $C(X)$ is separable. Since any subspace of a separable metric space is separable, it follows that $C(K)$ is separable, and hence $K$ is metrizable.<|endoftext|> TITLE: "There is no set containing everything"? QUESTION [7 upvotes]: I was reading this question regarding codomains, and I found something interesting in User134824's answer: "On the other hand, owing to the set-theoretic fact that "there is no set containing everything," it's not possible to pick a single universal codomain for functions." Why is it impossible to have a set containing everything? Why can't we define $U=\mathbb{R} \cup \mathbb{C} \cup ....$ (all possible sets)? P.S. This is a soft question, so I am looking for intuitive, non-technical answers; I do not know any set theory. REPLY [2 votes]: The problem via Cantor's paradox has already been noted. It is also the case that the most common set theories prove the existence of "the set of all $x\in A$ such that $x\notin x$". If $A$ is the universe, then there is a set $R$ containing every set that is not a member of itself; but $R\in R \iff R\notin R$, which is a paradox (Russell's, specifically). More trivially, common set theories accept the Axiom of Foundation, which implies that no set can be a member of itself. But a set containing every set must have itself as a member. There are, as someone mentions in the post linked to in the comments, consistent set theories with universal sets, but these theories must reject each of Foundation, the existence of $\{x: x\in A \wedge x\notin x\}$ for all $A$, and Cantor's theorem that $A < \mathcal{P}(A)$. The consequences of axiom systems that disprove these in favor of the existence of a universe can be counterintuitive or cumbersome; Currying a binary function might not work in $\mathsf{NFU}$, or complementation might not work in $\mathsf{GPK}$.<|endoftext|> TITLE: Understanding an example of "for all" and "for some" usage in statements. QUESTION [6 upvotes]: I'm reading "Analysis I" by Tao and reviewing an appendix chapter on logic. In there he gives an example on how "for all x" is usually much stronger than just saying "for some x": "$6<2x<4$ for all $3<x<2$".<|endoftext|> TITLE: $\aleph_1$ almost sure events that almost never all hold QUESTION [6 upvotes]: This recent question sparked my curiosity. Is there a family of events $(E_k)_{k \in I}$ such that$\def\pp{\mathbb{P}}$ $\pp(E_k) = 1$ for any $k \in I$ but $\pp( \bigcap_{k \in I} E_k ) = 0$? Clearly any such family must be uncountable, and my intuition tells me that it exists, so I proceeded to try constructing one. Let $X \sim U([0,1])$, and let $E_r = ( X \ne r )$ for each $r \in [0,1]$. Then $\pp(E_r) = 1$ for any $r \in [0,1]$, but $\pp( \bigcap_{r \in [0,1]} E_r ) = 0$. So now my question is, is there a family of size $\aleph_1$ (that ZFC proves has the above property)?
At first I meant my question to be general, where one is allowed to choose any maximal probability measure (non-negative, total measure 1, countable additivity, cannot be extended) based on which all the events are defined. But Eric's comment suggests that the question may be non-trivial even for the Lebesgue measure on a Euclidean space. So I'm now interested in both variants. =) REPLY [4 votes]: Let me just address the question on $\mathbb{R}$. One way to ask this question is: Is there a family of $\aleph_1$-many subsets of $\mathbb{R}$, each of which is Lebesgue null, whose union is all of $\mathbb{R}$? (I'm interpreting the sets in this phrasing as the complements of the events in the question, to match with language used elsewhere.) It turns out (IIRC) that this is independent of ZFC! If $\vert\mathbb{R}\vert=\aleph_1$ (the Continuum Hypothesis), then the answer is clearly "yes": just take each set to be a singleton! If $\vert\mathbb{R}\vert>\aleph_1$, the answer is not obviously no; and indeed if I remember correctly, it is consistent that $\vert\mathbb{R}\vert=\aleph_2$ and $\mathbb{R}$ is the union of $\aleph_1$-many null sets. A related question is: What is the smallest number of null sets, whose union is non-null? (Here "null" refers to the usual Lebesgue measure on $\mathbb{R}$.) This number, $\mathfrak{m}$, is a cardinal characteristic of the continuum (see https://en.wikipedia.org/wiki/Cardinal_characteristic_of_the_continuum#non.28N.29 and http://www.math.lsa.umich.edu/~ablass/hbk.pdf); it is provably uncountable, and provably at most $2^{\aleph_0}$, but consistently we can have $$\aleph_1<\mathfrak{m}<2^{\aleph_0}.$$<|endoftext|> TITLE: How much classical geometry must a geometer know? QUESTION [9 upvotes]: From my reading Wikipedia, I understand there are several branches of classical geometry (if the ordering is off, or I'm missing a few things, let me know): Absolute Euclidean Non-Euclidean Spherical Hyperbolic Projective Affine Transformational Inversive These can be studied from: Coordinate geometry (aka algebraic) view Axiomatic (aka synthetic) view Analytic view The combination of these branches and points of view generate many possible avenues of study. How many of these are useful for a modern geometer to know? By useful, I mean fairly likely to lend intuitive insights into their work (This may seem vague without specifying what kind of work they do. But since most geometers aren't working in classical geometry, I guess the work of any such geometer could be considered relevant to the question). How deeply must they learn? I've got a good high school level knowledge of math, and some college level math, and I have an interest in pure mathematics for a career. EDIT: Added tags relating to modern branches of geometry, so that these kinds of geometers see this question. REPLY [8 votes]: If you are planning to do research in modern geometry, pretty much nothing on your list is relevant. Instead: If you are planning to work in algebraic geometry, you should learn commutative algebra or/and complex differential geometry. If you are planning to work in Riemannian geometry, most likely you should spend time learning differential equations, functional analysis, Sobolev spaces, etc. I know, this sounds a bit sad, but the reality is that you have only that much time. I do not mean that learning "classical" geometry is waste of time, not at all, but you have to prioritize...<|endoftext|> TITLE: Prove a function is constant. 
(complex analysis) QUESTION [5 upvotes]: Suppose function $f$ is holomorphic on $\mathbb C-\{0\}$ and satisfies $$|f(z)| \le \sqrt {|z|} + \frac{1}{\sqrt {|z|}}.$$ Prove that $f$ is a constant function. I think it's related to Laurent series representation and the ML inequality but I have no idea how to prove it. Can anyone give me some hints? Thanks in advance! REPLY [7 votes]: Let $$ f(z) = \sum_{n=-\infty}^\infty a_n z^n $$ be the Laurent series for $f$ in $\Bbb C \setminus \{ 0 \}$. The coefficients $a_n, n \in \Bbb Z,$ can be computed as an integral (compare https://en.wikipedia.org/wiki/Laurent_series) $$ a_n = \frac{1}{2 \pi i} \int_\gamma \frac{f(z)}{z^{n+1}} \, dz $$ where $\gamma$ is any closed curve which surrounds $z=0$ exactly once. Taking $\gamma$ as a circle with radius $r > 0$ gives the estimate $$ |a_n| \le \frac{1}{r^n} \max \{ |f(z)| : |z| = r \} \le \frac{ \sqrt {r} + \frac{1}{\sqrt {r}}}{r^n} \, . $$ For fixed $n \ge 1$ and $r \to \infty$ the RHS tends to zero, and the same is true if $n \le -1$ and $r \to 0$. Therefore $a_n = 0$ for all $n \ne 0$. Alternatively, you can argue as suggested in the comments. $g(z) = z f(z)$ has a removable singularity at $z=0$ and is therefore an entire function. From $$ |g(z)| \le r^{1/2} + r^{3/2} $$ the usual estimates of the derivative with the Cauchy integral formula show that $g$ is a polynomial of degree at most one. So we have $$ f(z) = \frac{a z + b}{z} $$ for some constants $a, b$, and from the growth restriction for $r \to 0$ it follows that $b = 0$ must hold.<|endoftext|> TITLE: Most efficient method for computing Singular Value Decomposition of a triangular matrix? QUESTION [14 upvotes]: There are several methods available for computing the SVD of a general matrix. I am interested in knowing about the best approach which could be used for computing the SVD of an upper triangular matrix. Please suggest an algorithm which could be optimized for this special case of matrices. REPLY [3 votes]: Standard algorithms for computing the SVD factorization of an $m \times n$ matrix $A$ proceed in two steps: (1) bidiagonalization of $A$; (2) SVD factorization of the bidiagonal matrix using the QR method or divide-and-conquer. The bidiagonalization procedure cannot utilize the triangular structure of $A$, and therefore standard SVD solvers cannot be optimized for triangular matrices. However, the Jacobi method can be optimized for triangular matrices; see Drmač, Zlatko, and Krešimir Veselić, "New fast and accurate Jacobi SVD algorithm. II.", SIAM Journal on Matrix Analysis and Applications 29.4 (2008): 1343-1362; also LAPACK Working Note 170. To my knowledge, this optimization was not implemented and it is hard to say if this method would be faster than methods for general matrices. The Jacobi method is slower than divide-and-conquer for general matrices, so it may be still slower for triangular matrices.<|endoftext|> TITLE: Difference between Increasing and Monotone increasing function QUESTION [18 upvotes]: I have some confusion about the difference between a monotone increasing function and an increasing function. For example $$f(x)=x^3$$ is monotone increasing, i.e., if $$x_2 \gt x_1$$ then $$f(x_2) \gt f(x_1)$$ and some books give such functions as Strictly Increasing functions. But if $$f(x)= \begin{cases} x & x\leq 1 \\ 1 & 1\leq x\leq 2\\ x-1 & 2\leq x \end{cases} $$ Is this function Monotone increasing? REPLY [2 votes]: Let $y=f(x)$ be a differentiable function on an interval $(a,b)$.
If for any two points $x_1,x_2 \in (a,b)$ such that $x_1 \lt x_2$, there holds the inequality $f(x_1) \leq f(x_2)$, the function is called increasing on this interval. If there holds the inequality $f(x_1) \lt f(x_2)$, the function is called strictly increasing on the interval.<|endoftext|> TITLE: How important are inequalities? QUESTION [7 upvotes]: When reading the prefaces of many books devoted to the theory of inequalities, I found one thing repeatedly stated: Inequalities are used in all branches of mathematics. But seriously, how important are they? Having finished a standard freshman course in calculus, I have hardly ever used even the most renowned inequalities like the Cauchy-Schwarz inequality. I know that this is due to the fact that I have not yet delved into the field of more advanced mathematics, so I would like to know just how important they are. While these inequalities are usually concerned with a finite set of numbers, I guess they must be generalised to fit into subjects like analysis. Can you provide some examples to illustrate how inequalities are used in more advanced mathematics? REPLY [3 votes]: I think they're most important because of limits. I'm sure you've done limits in your calculus class. Limits are extremely important in maths - they're not just used to define derivatives and integrals. There's a whole branch of mathematics called analysis which deals with limits. Sometime in the 18th century, mathematicians tried to understand calculus more rigorously and they came up with a formal definition of a limit. $\lim \limits_{n \rightarrow \infty} a_n = a \iff \forall \varepsilon > 0, \exists N \in \mathbb{N} \ s.t. \ \forall n \ge N, \ |a_n - a| < \varepsilon $ (In simple words, if you have a sequence of numbers, say $ \dfrac{1}{2}, \dfrac{3}{4}, \dfrac{7}{8}, \dfrac{15}{16}...$, which tends to a number (1 in the previous example), then after a certain point ($N$) all terms are within a certain range ($\varepsilon$) of the limit.) Because of this, analysis is all about inequalities - including the triangle inequality and the Cauchy-Schwarz inequality. They're very useful. There are other places such as in Computer Science, when you define the order of growth of an algorithm (Big-O notation), and Operations Research, where you use them to put certain constraints on maximisation/minimisation problems, e.g. find the best portfolio of investments, given that you may at most invest $1000. (The last one requires a good knowledge of theoretical probability - which is basically analysis and inequalities.)
that our universe of discourse is not empty. If the original post wasn't clear enough, my question is: "I have constructed a set, e.g. $\{\{v_0\},\{v_0,v_1\}\}$. How to prove that it exists? Just checking the axioms doesn't seem to be enough, since the axioms of existence and infinity explicitly say that the sets they describe exist, and if not contradicting axioms would be enough to justify a set's existence, then we could simply build the sets described by these axioms, see that they don't contradict anything and there would be no need to list these two axioms." REPLY [2 votes]: In the usual form of the axioms of ZFC, only the Axiom of Infinity and the Axiom of Existence ($\exists x\;(x=x)$) begin with " $\exists$ ". All the other axioms begin " $\forall$ ". We cannot deduce a statement beginning with "$\exists$ " from a collection of axioms that all begin " $\forall$ " because it is a sub-system of the system $\neg\exists x\; S(x)$ for every formula $S(x)$. And this system is consistent, for the simple reason that it is satisfied if the universe of our discourse has nothing in it. It is desirable to separate Infinity from the other axioms, and not desirable to explore the insides of nothingness, so we add Existence.<|endoftext|> TITLE: What is duality argument for the operator on $L^p-$ spaces? QUESTION [7 upvotes]: Suppose that the operator $T: L^{p}(\mathbb R) \to L^{p}(\mathbb R)$ (say for instance, some nice convolution operator) is bounded for $1\leq p \leq 2.$ At various places we see that (for instance, in the proof that the Hilbert transform is bounded) once boundedness is known for $1<p\leq 2$, the case $p>2$ follows by a duality argument. My Vague Question is: What is the standard duality argument in these kind of situations? Can you illustrate some specific example? REPLY [5 votes]: $\newcommand\ip[2]{\left\langle #1,#2\right\rangle}$ Leaving out many technical details, just illustrating what that "duality argument" is: Let's define $$\ip fg=\int fg.$$ Suppose that $K$ is a kernel and define $\tilde K(t)=K(-t)$. Then, assuming you can justify the Fubini (typically by restricting to some nice subspace of $L^p$), you see that $$\ip{\tilde K*f}g=\ip f{K*g}.$$ Suppose we've shown that $$||K*f||_p\le c||f||_p.$$ It follows that $||\tilde K*f||_p\le c||f||_p$. So for $f\in L^p$ and $g\in L^{p'}$ we have $$\left|\ip f{K*g}\right|=\left|\ip{\tilde K*f} g\right|\le c||f||_p||g||_{p'},$$which shows that $$||K*g||_{p'}\le c||g||_{p'}.$$<|endoftext|> TITLE: Is the restriction of a finite map of affine varieties also finite? QUESTION [7 upvotes]: If $f:X\to Y$ is a dominant (i.e. $f(X)$ is dense in $Y$) regular map of affine varieties, then $f$ is called a finite map if $k[X]$ is integral over $f^*k[Y]$. My question is: if $Z\subset X$ is a closed subset of $X$, then how to show the restriction $f|_Z: Z\to \overline{f(Z)}$ is still a finite map? (Note: This result is supposed to prove the fact that "a finite map takes closed sets to closed sets" (cf. page 60, Basic Algebraic Geometry 1, by Shafarevich), so please do not use the latter fact in the answer.) REPLY [9 votes]: Regular maps $f:X\to Y$ of affine varieties correspond bijectively to morphisms of $k$-algebras $\phi: k[Y]\to k[X]$. The map $f$ is dominant iff $\phi$ is injective, and is (by definition) finite iff $\phi$ makes $k[X]$ a finite $k[Y]$-module, i.e. if $k[X]$ is a module of finite type over $\phi(k[Y])$. [The definition in Shafarevich is equivalent but confuses the issue with irrelevant integrality conditions.]
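For orientation, a quick illustrative pair of examples (a sketch, not needed for the argument): the projection of a parabola is a finite dominant map, while the projection of a hyperbola is dominant but not finite. Concretely, $$X = V(y-x^2)\subset \mathbb A^2 \xrightarrow{(x,y)\mapsto y} \mathbb A^1, \qquad \phi: k[t]\to k[X]\cong k[x],\ t\mapsto x^2,$$ makes $k[x]$ a free $k[t]$-module with basis $\{1,x\}$, so this dominant map is finite; whereas $$X' = V(xy-1)\subset \mathbb A^2 \xrightarrow{(x,y)\mapsto x} \mathbb A^1, \qquad \phi: k[t]\to k[X']\cong k[x,x^{-1}],\ t\mapsto x,$$ is dominant, but $k[x,x^{-1}]$ is not a finite $k[x]$-module, so this map is not finite (and indeed its image $\mathbb A^1\setminus\{0\}$ is not closed).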
For any subset $Z\subset X$ the induced morphism $\bar \phi:k[Y]/\phi ^{-1}(I(Z))\to k[X]/I(Z) $ is also injective and module-finite so that the corresponding geometric restriction map map $\operatorname {res}(f): Z\to \overline {f(Z)}$ is dominant and finite, just as $f$ was. NB Notice that, in conformity with your request, I have never mentioned closedness of any map.<|endoftext|> TITLE: How to prove $p^2 \mid \binom {2p} {p }-2$ for prime $p$? QUESTION [8 upvotes]: How to prove $p^2 \mid \binom {2p} {p } -2$ for prime $p$? I have a hint: for $1 \le i \le p-1$, $p \mid \binom p i$. I cannot even start the proof. Please help. REPLY [7 votes]: Note that: $$\binom{2p}p=\sum_{i\mathop=0}^p\binom pi\binom p{p-i}=\sum_{i\mathop=0}^p\binom pi^2$$ In fact, this formula can be generalized: $$\binom{a+b}r=\sum_{i\mathop=0}^r\binom ai\binom b{r-i}$$ The proof can be derived by considering the coefficient of $x^r$ in the expansion of $(1+x)^{a+b}$ and $(1+x)^a(1+x)^b$ respectively, which would be the same. Now, from your hint, since $p$ is actually a prime, we have: $$p^2|\tbinom pi^2$$ where $1\le i\le p-1$. Therefore: $$\sum_{i\mathop=0}^p\binom pi^2=\dbinom p0^2+\sum_{i\mathop=1}^{p-1}\binom pi^2+\binom pp^2\equiv\dbinom p0^2+0+\dbinom pp^2=2\mbox{ (mod p}^2\mbox{)}$$<|endoftext|> TITLE: p-Norm with p $\to$ infinity QUESTION [6 upvotes]: I have to show that: for all vectors $v\in \Bbb R^n$: $\lim_{p\to \infty}||v||_p = \max_{1\le i \le n}|v_i|$ with the $||\cdot ||_p$ norm defined as $$ ||\cdot ||_p: (v_1, \dots ,v_n) \to (\sum^n_{i=i} |v_i|^p)^{1/p} $$ I think I once read something about mixing the root and the same power with the power going to infinity but i can't really remember anything concrete. Any Ideas? Thanks in advance REPLY [4 votes]: As a norm is completely described by its unit ball, let us see the way unit balls of $||.||_p$ converge. See (classical) pictures below of the unit balls of $||.||_1 (square), ||.||_2 (circle), ||.||_3$ and $||.||_9$ in $\mathbb{R}^2$. These balls are getting more and more "square" as $p$ increases, the limit square being described by equation $\max(|x|,|y|)=1$, providing a geometric intuition about the way the limit is obtained. see https://en.wikipedia.org/wiki/Lp_space.<|endoftext|> TITLE: What is an intuition behind conjugate permutations? QUESTION [10 upvotes]: I know the definition of conjugate permutations. $$\exists p \quad p^{-1} \alpha p=\beta$$ So the $\alpha$ and $\beta$ is a pair of conjugate permutations. But can anybody can give some concise, vivid example to describe conjugate permutations? To help me understand the relation of $\alpha$ and $\beta$ intuitively? I'm a newbie in group-theory. First time here. If there is anything unsuitable in my specification, do tell me please. REPLY [8 votes]: A little more explicitly: Let's say you have the permutation $\alpha$ with $\alpha(1) = 3$, $\alpha(2) = 1$, $\alpha(3) = 2$. Let's say your are afraid of numbers, so to make this permutation friendly you use letters instead. You relabel $1$ as $a$, $2$ as $b$, $3$ as $c$ to get a new permutation $\beta$ of the letters $\{a,b,c\}$. We get $\beta(a) = c$, $\beta(b) = a$, $\beta(c) = b$. To figure out more carefully what is going on, define $p:\{a,b,c\} \rightarrow \{1,2,3\}$ by $p(a) = 1,p(b) = 2, p(c) = 3$, so that $p$ is the "translator" between your sets of symbols. You want to know where $\alpha$ should do in this translated alphabet. To find out what should happen to $a$, first translate: $p(a) = 1$. Then apply $\alpha$: $\alpha(1) = 3$. 
Finally, translate back using $p^{-1}$: $p^{-1}(3) = c$. So the "translated" version of $\alpha$ is the map $\beta = p^{-1} \alpha p$ with $\beta(a) = c$. We get $$\beta(a) = p^{-1}\alpha p(a) = p^{-1} \alpha(1) = p^{-1}(3) = c,$$ $$\beta(b) = p^{-1} \alpha p(b) = p^{-1} \alpha(2) = p^{-1} (1) = a,$$ $$\beta(c) = p^{-1} \alpha p(c) = p^{-1} \alpha (3) = p^{-1} (2) = b$$ which is what we want. Here we're using two sets of symbols, $\{1,2,3\}$ and $\{a,b,c\}$, but we might as well relabel using the same set of symbols $\{1,2,3\}$ and $\{1,2,3\}$. The same idea works, and then you have the notion of conjugacy in $S_n$: if $\beta = p^{-1} \alpha p$, then $\beta$ is just given by taking $\alpha$ and relabeling the symbols using $p$ as a "translation".<|endoftext|> TITLE: In a $\triangle ABC,a^2+b^2+c^2=ac+ab\sqrt3$,then the triangle is QUESTION [7 upvotes]: In a $\triangle ABC,a^2+b^2+c^2=ac+ab\sqrt3$, then the triangle is $(A)$ equilateral $(B)$ isosceles $(C)$ right angled $(D)$ none of these The given condition is $a^2+b^2+c^2=ac+ab\sqrt3$. Using the sine rule, $a=2R\sin A,b=2R\sin B,c=2R\sin C$, we get $\sin^2A+\sin^2B+\sin^2C=\sin A\sin C+\sin A\sin B\sqrt3$ I am stuck here. REPLY [5 votes]: $$a^2+b^2+c^2=ac+ab\sqrt3$$ The above equation can be re-written as $$\frac{a^2}{4}-ac+c^2+\frac{3a^2}{4}-ab\sqrt3+b^2=0$$ which is $$\left(\frac{a}{2}-c\right)^2+\left(\frac{\sqrt3 a}{2}-b\right)^2 = 0$$ which implies that $\frac{a}{2} = c$ and $\frac{\sqrt3 a}{2}= b$. Based on this it can be concluded that the ratio of the sides is $1:\sqrt{3}:2$, which gives a right-angled triangle. I hope this was helpful.<|endoftext|> TITLE: Is there any function continuous in $R$ and differentiable in rational numbers with zero derivative? QUESTION [7 upvotes]: I'm looking for a function continuous in $R$ and differentiable in all rational numbers with derivative $0$, but not the constant function. And there is the same question about irrational numbers. Can anyone help? REPLY [6 votes]: An example of a strictly increasing function with derivative $0$ at rationals is Minkowski's question mark function, which is defined as follows: given $x$ whose continued fraction representation is $[a_0, a_1, a_2, ... ]$, let $?(x) = a_0 - 2 \sum \limits_{n=1}^{\infty} \frac{(-1)^n}{2^{a_1+\cdot \cdot \cdot + a_n}}$. You can prove that this function's derivative is $0$ over the rationals, but yet it is strictly increasing, so there is no interval on which it is constant. (PS: continued fraction: $x = a_0 + \frac{1}{a_1+\frac{1}{a_2+\frac{1}{...}}} $ where $a_0 \in Z$, and $a_1,a_2 > 0$ integers. You have existence and uniqueness of this decomposition (and the list $(a_0,a_1,...)$ is finite iff $x$ is rational).) Concerning your second question, if $f$ is continuous, differentiable at irrational numbers and with derivative $0$ at irrational numbers, then $f$ is constant. More generally, if $f$ is continuous and its derivative exists and is $0$ at the complement of a countable set, then you can prove that $f$ is constant. To prove this, first fix $\varepsilon > 0$. If $f'(x) = 0$, then there exists $\delta > 0$ such that $\forall s \in [x-\delta, x+\delta],\ \mid f(s) - f(x) \mid < \varepsilon \mid s - x \mid$. So the variation of $f$ over any subinterval of $[x - \delta, x+\delta]$ is less than $2\varepsilon$ times the length of the subinterval (*). Let us denote by $(x_n)$ the (countable) other points where the derivative is not defined or not $0$.
Using continuity, for all $n$ there exists $\delta$ such that $\forall s\in [x_n - \delta, x_n + \delta],\ \mid f(s) - f(x_n) \mid < \frac{\varepsilon}{2^{n}}$. Hence, the variation over all those intervals, as $n$ ranges over the integers, is less than $2 \sum \limits_{n\in\mathbb{N}} \frac{\varepsilon}{2^n} = 4\varepsilon$. To be clearer, we can define, for all $x$, $\delta(x)$ to be the $\delta$ mentioned in the first case or the second, depending on $x$ (hence for all $x$, $[x-\delta(x), x+\delta(x)]$ is a neighborhood of $x$ over which the variation of $f$ is tiny). Then let us fix $a<c$: the intervals $[x-\delta(x), x+\delta(x)]$ cover $[a,c]$, so by compactness finitely many of them do; chaining these finitely many intervals from $a$ to $c$ shows that $\mid f(c)-f(a)\mid$ is bounded by a constant multiple of $\varepsilon$ (the constant depending on $c-a$ but not on $\varepsilon$), and since $\varepsilon$ was arbitrary, $f(a)=f(c)$: $f$ is constant.<|endoftext|> TITLE: Why is establishing absolute consistency of ZFC impossible? QUESTION [22 upvotes]: Why is establishing the absolute consistency of ZFC impossible? What are the fundamental limitations that prohibit us from coming up with a proof? EDIT: This post seems to make the most sense. In short: if we were to come up with a mathematical proof of the consistency of ZFC, we would be able to mimic that proof inside ZFC. Ergo, if ZFC is consistent, there can be no proof that it is. REPLY [8 votes]: To add to Carl Mummert's answer... Confusing point 0: No formal system can be shown to be absolutely consistent. Yes you read that right. Perhaps you might say, how about the formal system consisting of just first-order logic and not a single axiom, namely the pure identity theory, or maybe even without equality? Sorry, but that doesn't work. How do you state the claim that this system is absolutely consistent? You would need to work in a meta-system that is powerful enough to reason about sentences that are provable over the formal system. To do so, your meta-system already needs string manipulation capability, which turns out to be more or less equivalent to first-order PA. Hence before you can even assert that any formal system is consistent, you need to believe the consistency of the meta-system, which is likely going to be essentially PA, which then means that you do not ever have absolute consistency. Of course, if you assume that PA is consistent, then you have some other options, but even then it's not so simple. For example you may try defining "absolutely consistent" as "provably consistent within PA as the meta-system". This flops on its face because then you cannot even claim that PA itself is absolutely consistent, otherwise it contradicts Gödel's incompleteness theorem. So ok you try again, this time defining "T is absolutely consistent" to mean "PA proves that Con(PA) implies Con(T)". Alright this is better; at least we can now show that PA is absolutely consistent, though it is trivial! However, note that even with the assumption of consistency of some formal system such as PA, as above, you are still working in some meta-system, and by doing so you are already accepting the consistency of that meta-system without being able to affirm it non-circularly. Therefore any attempt to define any truly absolute notion of consistency is doomed from the start.<|endoftext|> TITLE: 4 cards are drawn from a pack without replacement. What is the probability of getting all 4 from different suits? QUESTION [21 upvotes]: 4 cards are drawn from a pack without replacement. What is the probability of getting all 4 from different suits? Here's how I tried to solve it: For the first draw, we have 52 cards, and we have to pick one suit. So, the probability for this is $\frac{13}{52}$. For the second draw, only 51 cards are left. The second suit has to be selected, so there are 13 cards from that suit.
The probability is $\frac{13}{51}$. Similarly, the third and fourth draw have probabilities $\frac{13}{50}$ and $\frac{13}{49}$ respectively. Since the draws are independent, the total probability becomes $$\frac{13}{52} \times \frac{13}{51} \times \frac{13}{50} \times \frac{13}{49}$$ But my book says the answer is $\frac{{13\choose 1} \times {13 \choose 1} \times {13\choose1} \times {13\choose1}}{52 \choose 4}$. My answer differs by a factor of $4!$. What did I do wrong? REPLY [14 votes]: You have to multiply your answer with $4!$, because there are $4!$ ways in which you can choose the order of the $4$ suits.<|endoftext|> TITLE: topology related to Binary space QUESTION [5 upvotes]: I ran upon this topology based on binary space, perhaps using obscure terminology, but I am curious what it is and its properties. Let binary space be the set of strings of $0,1$'s, and let $S$ be the set of all functions that map the binary space to a set of two elements, $\{0,1\}$. It's like it decides a true or false for every binary string. For the topology $\mathcal{T}$, let a basic set be $U_{V,f}$, where any $g$ in this set must satisfy $g(x)=f(x)$ for all $x$ in $V$, and $V$ is a finite subset of $B$. Is this space $(S,\mathcal{T})$ Hausdorff? Compact? Any related material is welcome. Thanks REPLY [6 votes]: The set of finite binary sequences $B$ is denoted $\{0,1\}^{<\omega}$. Your set $S$ is the set of functions $f:\{0,1\}^{<\omega}\to \{0,1\}$. That is, $$S=\{0,1\}^{\{0,1\}^{<\omega}}.$$ You defined a basic open set around $f\in S$ to be the set of all $g\in S$ which agree with $f$ on a given finite subset of ${\{0,1\}^{<\omega}}$. This is exactly how basic open sets are defined in the product topology. As the product of compact Hausdorff spaces (in your case $\{0,1\}$) is compact Hausdorff in the product topology, $S$ is compact Hausdorff. Do not get hung up on the index set $\{0,1\}^{<\omega}$. You could replace it with any other countably infinite set (such as $\omega$) and get the same space.<|endoftext|> TITLE: Any subgroup that contains the subgroup generated by all commutators is normal. QUESTION [5 upvotes]: First, I was asked to show that if $G$ is a group and $G'$ is generated by $\{xyx^{-1}y^{-1}|x,y\in G\}$, then $G'\trianglelefteq G$ and $G/G'$ is Abelian. This was not too difficult to show. The second part of the question said if $G$ is a group and $H\supseteq G'$, where $G'$ is as in the last part, then $H\trianglelefteq G$ and $G/H$ is Abelian. I'm not sure of the best way to approach this problem. Any help would be appreciated. REPLY [2 votes]: Hint for normality: for all $g \in G$ and $h \in H$, $g^{-1}hg=[g,h^{-1}]h$.<|endoftext|> TITLE: Alternating series test for complex series QUESTION [8 upvotes]: I want to show that we can continue Riemann's zeta function to Re$(s)>0$, $s\neq 1$ by the following formula \begin{align} (1-2^{1-s})\zeta(s)&=\left(1-2\frac{1}{2^s}\right)\left(\frac1{1^s}+\frac1{2^s}+\ldots \right) \\ &=\frac1{1^s}+\frac1{2^s}+\ldots -2\left(\frac1{2^s}+\frac1{4^s}+\ldots \right)\\ &=\frac1{1^s}-\frac1{2^s}+\frac1{3^s}-\frac1{4^s}+\ldots \\ &=\sum_{n=1}^\infty (-1)^{n-1}\frac1{n^s}. \end{align} In order to do that, I need to show that the series converges for Re$(s)>0$, except $s=\frac{2k\pi i}{\ln 2}+1$, $k\in \mathbb{Z}$, which are removable singularities. Any ideas on how to do that? REPLY [6 votes]: Here, I thought it might be instructive to present an approach that uses a generalization of Dirichlet's Test and that has wider applicability. 
To that end we proceed. Let $s\in \mathbb{C}$. The Dirichlet Eta function, $\eta(s)$, as represented by the series $$\eta(s)=\sum_{n=1}^\infty \frac{(-1)^{n-1}}{n^s} \tag 1$$ is easily seen to converge for $\text{Re}(s)>1$. If $s$ is purely real, then Dirichlet's test guarantees that the series representation converges for $\text{Re}(s)=s>0$. If $s$ is not purely real, then Dirichlet's test is inapplicable since the term $\frac{1}{n^s}$ is not real and monotonically decreasing. It is not obvious, therefore, that the series in $(1)$ converges when $\text{Re}(s)>0$ for general complex values of $s$. There is a generalization of Dirichlet's test to which we can appeal and show the convergence of $(1)$ whenever $\text{Re}(s)>0$. The Generalized Dirichlet Test states that if $a_n$ and $b_n$ are, in general, complex sequences, then the series of their products, $\sum_{n=1}^\infty a_nb_n$, converges under the following three conditions: 1. there exists a number $M$, independent of $N$, such that the partial sums of $b_n$ satisfy $$\left|\sum_{n=1}^N b_n\right|\le M;$$ 2. the terms $a_n$ tend to zero as $n\to \infty$; 3. the sequence $a_n$ is of bounded variation with $\sum_{n=1}^\infty |a_{n+1}-a_n| \le L <\infty$. Let $a_n=\frac{1}{n^s}$ and $b_n=(-1)^{n-1}$, $\text{Re}(s)>0$. Clearly conditions $1$ and $2$ are satisfied. To show that $3$ is satisfied, we note that for fixed $s$ with $\text{Re}(s)>0$ $$\begin{align} \sum_{n=1}^\infty \left|\frac{1}{(n+1)^s}-\frac{1}{n^s}\right|& \le \int_1^\infty \left|\frac{d}{dt}\left(\frac{1}{t^s}\right)\right|\,dt\\\\ &=\int_1^\infty \left|\frac{-s}{t^{s+1}}\right|\,dt\\\\ &=|s|\int_1^\infty \frac{1}{t^{1+\text{Re}(s)}}\,dt\\\\ &=\frac{|s|}{\text{Re}(s)} \end{align}$$ where we recognized that $\sum_{n=1}^\infty \left|\frac{1}{(n+1)^s}-\frac{1}{n^s}\right|$ represented the sum of the lengths of secant lines of the parametric curve $\frac{1}{t^s}$, $t\in [1,\infty)$. Therefore, by the Generalized Dirichlet Test, the series $$\eta(s) =\sum_{n=1}^\infty \frac{(-1)^{n-1}}{n^s}$$ converges when $\text{Re}(s)>0$. And we are done!<|endoftext|> TITLE: Prove that trees have at least two vertices of degree one QUESTION [11 upvotes]: Prove that every tree with $n\geq 2$ vertices has at least two vertices of degree one. What I tried: Suppose that there are fewer than two vertices of degree one. So we can split into two cases. Case one: there is no vertex of degree one. Since we know that every tree has $n-1$ edges, the total degree of any tree has to be $2(n-1)$. But in this case, since no vertex has degree $1$, every vertex has degree at least $2$, and since there are $n$ vertices, the total degree is $\geq 2n$, which is a contradiction. Case two: there is only one vertex of degree one. Similarly the total degree of any tree has to be $2(n-1)$. Then there are $(n-1)$ vertices which have degree $\geq 2$ while only one vertex has degree one. Thus, summing up to find the total degree, we have that the total degree of the vertices is $\geq 2(n-1)+1=2n-1$, which is also a contradiction. This thus proves the statement for both cases. Is my proof correct? Could anyone explain better? REPLY [2 votes]: Proposition: Any nontrivial tree must have at least two vertices of degree one (sometimes called leaves). Here's a proof by contradiction: Let $T$ be a nontrivial tree, that is, one with at least two vertices.
In fact, let's draw one, just for fun:

          o (v1)
         / \
    (v0) o   o (v2)
            /
           o (v3)

It might be easier to look at the tree like this:

    o (v0 = u)
    |
    o (v1)
    |
    o (v2)
    |
    o (v3 = v)

Of note: I labeled $v_0$ as $u$ and $v_3$ as $v$. $u$ represents the beginning of the tree while $v$ represents the end. Let $P$ be a path in $T$ of the longest possible length. It's alright if your tree has more than one path of equal length, so long as we choose one with the longest possible length. As mentioned above, $u$ and $v$ represent the beginning and end vertices of $P$. So our $P$ is $u, v_1, v_2, v$. Next, suppose, contrary to the proposition, that there are NOT two vertices of degree one. This brings us to why we care about $P$. The longest path, in any tree, contains a start and end vertex of degree one. Both ends of $P$ must be vertices with degree one, satisfying the "at least two vertices of degree one" part of the original proposition. But we're supposing that there are NOT two vertices of degree one. That means either $u$ or $v$ needs to be adjacent to one more vertex in $T$. Let's pick $u$, the start vertex, and we will call the adjacent vertex $u'$. Which vertex is $u'$? $u$ is already adjacent to a vertex on the path $P$ ($v_1$). If $u'$ were another vertex on the path $P$, then we'd get a cycle. I've illustrated this below:

      o (v0 = u)
     / |
    |  o (v1)
    |  |
     \ |
       o (v2 = u')
       |
       o (v3 = v)

We cannot have cycles in a tree, since a tree is an undirected acyclic graph. So this can't be. However, if we think $u'$ is a vertex adjacent to $u$, but not on the path $P$, then we will contradict our previous choice of $P$ as longest path since there would be a vertex preceding $u$.<|endoftext|> TITLE: Which is easier to integrate? QUESTION [5 upvotes]: My calculus teacher gave us this problem in class: Which is easier to integrate? $$\int \sin^{100}x\cos x dx$$ or $$\int \sin^{50}xdx$$ By easier, I assume the teacher means which integral would take less work. I'm unsure of how to approach this problem because of the relatively large exponents. I would guess the second because it has smaller exponents but I'm not sure. REPLY [2 votes]: Which would you prefer to do? Let $u=\sin x$ and $du=\cos x\,dx$, transforming the integral into $\displaystyle \int u^{100} du$. Or use the reduction formula: $\displaystyle\int \sin^n x dx=\frac{-\sin^{n-1}x \cos x}{n}+\frac{n-1}{n}\int \sin^{n-2}x dx$ for $n=50$? I found the formula here. Of course, both can be done, but the reduction formula is incredibly tedious to use. The power rule of integration is much simpler.<|endoftext|> TITLE: What's the point of allowing only quantification of variables in first-order logic. QUESTION [11 upvotes]: In first-order languages, ${\forall}$ is allowed to quantify only over variables, so that ${\forall}v(P)$, where $v$ is some variable and $P$ is a WFF, is the only kind of WFF concerning universal quantifiers allowed in such languages. Why is that? No texts I read on this subject (which due to the limitations I have due to the place I live have only been sites like Wikipedia, Proofwiki or Wolfram MathWorld) provided an explanation why quantifying over WFFs is not allowed. I don't see any advantages to this and it leads to annoying problems like ZFC being an infinite list of axioms, which don't avoid the concept of ${\forall}P(Q)$, $P$ and $Q$ being WFFs, just phrase it in another way. So what's the justification of not quantifying over WFFs? What are the advantages of such an approach? REPLY [2 votes]: You can actually have formulas like $\forall P.
\textbf{Apply}(P,x)$ (where $\textbf{Apply}$ is your predicate symbol that you want to denote the application of the predicate $P$ to $x$) in a first-order logic with two types of variables. The problem is there can be nothing in the rules implying that the variable $P$ must range over wffs, strings, sets or numbers: it will depend on your model.<|endoftext|> TITLE: How can I get f(x) from its Maclaurin series? QUESTION [8 upvotes]: I know how to get a Maclaurin series when $f(x)$ is given. I have to find $\sum_{k=0}^{\infty}\frac{f^{(k)}(0)}{k!}x^k$. But how can I get $f(x)$ from its Taylor series? The problem is $$f(x) = \sum_{n=0}^{\infty} C_n x^n,$$ where $C_n$ is a Catalan number defined by $C_n = \frac{1}{n+1}\binom{2n}{n}$. How can I get $f(x)$? REPLY [6 votes]: Here is a different approach based upon the Lagrange inversion theorem. We follow the paper Lagrange Inversion: when and how by R. Sprugnoli et al. It is convenient to apply formula G6 from the paper, which states that if there are functions $F(u)$ and $\phi(u)$, so that \begin{align*} C_n=[u^n]F(u)\phi(u)^n\tag{1} \end{align*} the following is valid \begin{align*} f(x)&=\sum_{n=0}^{\infty}C_nx^n\\ &=\sum_{n=0}^{\infty}[u^n]F(u)\phi(u)^nx^n\\ &=\left.\frac{F(u)}{1-x\phi^{\prime}(u)}\right|_{u=x\phi(u)} \end{align*} We obtain \begin{align*} f(x)&=\sum_{n=0}^{\infty}C_nx^n\\ &=\sum_{n=0}^{\infty}\frac{1}{n+1}\binom{2n}{n}x^n\\ &=\sum_{n=0}^{\infty}\left(\binom{2n}{n}-\binom{2n}{n-1}\right)x^n\\ &=\sum_{n=0}^{\infty}\left([t^n](1+t)^{2n}-[t^{n-1}](1+t)^{2n}\right)x^n\tag{2}\\ &=\sum_{n=0}^{\infty}\left([t^n](1+t)^{2n}-[t^{n}]t(1+t)^{2n}\right)x^n\tag{3}\\ &=\left.\frac{1}{1-2x(1+u)}\right|_{u=x(1+u)^2}-\left.\frac{u}{1-2x(1+u)}\right|_{u=x(1+u)^2}\\ &=\frac{1}{1-\frac{2u}{1+u}}-\frac{u}{1-\frac{2u}{1+u}}\\ &=1+u \end{align*} Since $u=x(1+u)^2$, we get \begin{align*} u(x)&=\frac{1}{2x}\left(1-2x\pm\sqrt{1-4x}\right)\\ &=\frac{1}{2x}\left(1\pm\sqrt{1-4x}\right)-1 \end{align*} Comment: In (2) we use the coefficient-of operator $[t^n]$ to denote the coefficient of $t^n$ in a series. This way we can write \begin{align*} [t^k](1+t)^{n}=\binom{n}{k} \end{align*} In (3) we use the rule $[t^{n-k}]A(t)=[t^n]t^kA(t)$. We can set according to (1) \begin{align*} \phi(t)=(1+t)^2 \end{align*} and $F(t)=1$ resp. $F(t)=t$ for the terms. We select the solution $u(x)$ which can be expanded as a power series and obtain \begin{align*} f(x)&=\sum_{n=0}^{\infty}C_nx^n =1+u\\ &=\frac{1}{2x}\left(1-\sqrt{1-4x}\right) \end{align*} Note: Another variant of the Lagrange inversion theorem which derives the generating function of Catalan numbers from a different point of view is provided in this answer.<|endoftext|> TITLE: Computing (the ring structure of) $\mathrm{Ext}^\bullet_R(k,k)$ for $R=k[x]/(x^2)$ QUESTION [7 upvotes]: Let $k$ be some field (say of characteristic zero, if it matters) and define $$R=k[x]/(x^2).$$ I want to compute $$\mathrm{Ext}^\bullet_R(k,k)$$ and, in particular, the ring structure on it (though I think I can do this part if I can compute the Ext modules). I know that we can think of elements of $\mathrm{Ext}^m_R(k,k)$ as length $m+2$ exact sequences of the form $$0\to k\to X_m\to\ldots\to X_1\to k\to0$$ modulo some sensible equivalence relation, and that we can also think of it as $$\mathrm{Ext}^m_R(k,k) = H^m(\mathrm{Hom}_{R\hbox{-}\mathsf{mod}}(P_\bullet,k)) = H^m(\mathrm{Hom}_{R\hbox{-}\mathsf{mod}}(k,I^\bullet))$$ for some projective (or injective) resolution $P_\bullet$ (or $I^\bullet$, respectively) of $k$.
However, when it comes to the hands-on part of actually computing this, I hit a mental block. Since $R$ is a local ring (with maximal ideal $(x)$) we know that projective modules are exactly the free modules, and so computing a projective resolution will probably be easiest...? I would appreciate hints and partial answers over explicit answers (though it's likely I might have to ask for more hints if I still struggle...). At this stage I'll take whatever I can get. Edit: Here is a partial answer, all that remains is the question of the ring structure. Note that $k\cong R/(x)$ and so we have an epimorphism $\pi\colon R\twoheadrightarrow k$ given by $x\mapsto0$ (the quotient map). If we write $R=k[\varepsilon]$ where $\varepsilon$ is such that $\varepsilon^2=0$ then we obtain the following free resolution of $k$: $$\ldots\xrightarrow{\cdot\varepsilon}k[\varepsilon]\xrightarrow{\cdot\varepsilon}k[\varepsilon]\twoheadrightarrow k\to0.$$ Now any morphism $k[\varepsilon]\to k$ must send $\varepsilon$ to some element $\eta\in k$ such that $\eta^2=0$. But $k$ is a field, and so we are forced to choose $\eta=0$. This means that any such morphism is determined entirely by where it send $1\in k$, and it can send it to any $x\in k$. Thus $$\mathrm{Hom}_{R\hbox{-}\mathsf{mod}}(k[\varepsilon],k)\cong k.$$ So taking $\mathrm{Hom}_{R\hbox{-}\mathsf{mod}}(-,k)$ of the free resolution gives us the sequence $$0\to k\xrightarrow{\cdot0}k\xrightarrow{\cdot0}\ldots$$ which has homology $H_n=\ker d_n/\mathrm{im}\,d_{n+1}=k/0\cong k$ for all $n\geqslant0$. Thus $$\mathrm{Ext}^\bullet_R(k,k)\cong\bigoplus_{n\geqslant0}k$$ So my question now is about the ring structure of $\mathrm{Ext}^\bullet_R(k,k)$, and also about thinking of $\mathrm{Ext}$ as being extensions of $k$ by $k$. Unless I'm wrong, this means that we should be able to construct, taking $n=1$, short exact sequences $$0\to k\hookrightarrow X\twoheadrightarrow k\to0$$ and the collection of all such sequences should be isomorphic to $k$. The first thing that sprang to mind was to take $X=R$ and the epimorphism multiplication by $x\varepsilon$ for $x\in k$, but then I struggle to find a monomorphism into $R$ with the right kernel, and also taking $x=0$ means that the map fails to be an epimorphism. What is the correct choice of $\,\,\to X\to\,\,$? How can we compute explicitly the ring structure on $\mathrm{Ext}^\bullet_R(k,k)$? Edit 2: Following the ideas in the comments, I'm trying to explicitly spell out the following isomorphism, but I'm struggling to understand how the quotients are realised on both sides (i.e. the equivalence relations): I feel like the right-hand side should just be chain maps modulo homotopy equivalence, even though the $\mathrm{Hom}$ complex is just of maps of chains. I'm pretty certain that the lifts $\hat{f}_\bullet$ that we construct are in fact chain maps. REPLY [3 votes]: To compute the ring structure in the Yoneda ring of $A=\mathbb k[x]/(x^2)$ you can take several paths. First, note that $A$ is Koszul, so its Yoneda ring is its Koszul dual. the Koszul dual to $A$ is simply $\mathbb k[x]$, a polynomial ring in one variable. This can be read off quite directly from the bar construction $BA$ of $A$, since it has basis elements $t^n = [t\vert\cdots \vert t]$ ($n$ copies of $t$) which are all cycles, and the coproduct in $BA$ is simply given by $\Delta(t^n) = \sum_{i+j=n} t^i\otimes t^j$. Then $(BA)^\vee$ computes $\operatorname{Ext}_A(\mathbb k,\mathbb k)$ and $\Delta^\vee = \,\smile$, so you're done. 
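To see this cup/Yoneda product concretely with the $2$-periodic resolution written down in the question, here is a minimal sketch (the lift $\tilde f$ below is just one possible choice, and $x_m$ denotes the generator of $\operatorname{Ext}^m$): with $P_i = R$ for all $i$ and every differential equal to multiplication by $\varepsilon$, the generator $x_m \in \operatorname{Ext}^m_R(\mathbb k,\mathbb k)$ is represented by the cocycle $f\colon P_m = R \to \mathbb k$, $1 \mapsto 1$. One lift of $f$ to a chain map $\tilde f\colon P_{m+\bullet}\to P_\bullet$ takes every component $\tilde f_i$ to be $\mathrm{id}_R$, since $d\circ \tilde f_i = \varepsilon\cdot(-) = \tilde f_{i-1}\circ d$ and $\pi\circ\tilde f_0 = f$ for the augmentation $\pi\colon P_0\to\mathbb k$. If $x_n$ is represented by $g\colon P_n\to\mathbb k$, the Yoneda product is $$x_n \cdot x_m = [\,g\circ \tilde f_n\,] = [\,1\mapsto 1\,] = x_{m+n},$$ so $\operatorname{Ext}^\bullet_R(\mathbb k,\mathbb k)\cong \mathbb k[y]$ with $y$ in degree $1$, matching the Koszul-dual description above.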
Put in another way, you can consider a model of this algebra in the category of dg algebras. One such model, which is the minimal model, is the free algebra $TX$ on generators $x_i$ of degree $i$ and differential given on generators by $dx_{n+1} = \sum_{i+j=n} (-1)^i x_ix_j$. The map $TX\to A$ sends $x_0\to x$ and kills all other generators. It is known then that (since the model is minimal) $X^\vee$ identifies canonically with $\operatorname{Ext}_A(\mathbb k,\mathbb k)$ and that the cup product is dual to the quadratic part of the differential $d_2 : X\to X\otimes X$, which in this case is just $d$ (since $A$ is Koszul). So again you recover that $x^i\smile x^j = x^{i+j}$ where the Koszul sign rule gives you the absence of signs.<|endoftext|> TITLE: Does $\sum^{\infty}_{n=1}\frac{\ln{n}}{n^{1.1}}$ converge or diverge? QUESTION [5 upvotes]: Does $$\sum^{\infty}_{n=1}\frac{\ln{n}}{n^{1.1}}$$ converge or diverge? I think the basic comparison works but I have a hard time finding a comparer. Could someone suggest one? REPLY [2 votes]: One has, by the integral convergence test and an integration by parts: $$ 0<\sum^{\infty}_{n=1}\frac{\ln{n}}{n^{1.1}}\leq \int^{\infty}_{1}\frac{\ln{x}}{x^{1.1}}dx=\left[-\frac{10\ln x}{x^{0.1}}\right]^{\infty}_{1}+10\int^{\infty}_{1}\frac{dx}{x^{1.1}}=0+100=100 $$ so the series is convergent.<|endoftext|> TITLE: Obtaining the Rodrigues formula QUESTION [5 upvotes]: On $\mathfrak{so}(3)$, the algebra of $3 \times 3$ skew-symmetric matrices, define the Lie bracket $[A,B]=AB-BA$. Consider the exponential map $$EXP: \mathfrak{so}(3) \to SO(3).$$ We have the $\mathfrak{so}(3)$ matrix $$A=\begin{bmatrix} 0 & -c & b \\c & 0 & -a\\ -b & a & 0\end{bmatrix}$$ Upon letting $\theta=\sqrt{a^2 + b^2 +c^2}$, show that we obtain the identity (which is the Rodrigues formula) $$EXP (A)=I_3 + \frac{\sin \theta}{\theta} A+ \frac{1-\cos \theta}{\theta^2} A^2$$ I am not sure how we get the expressions $$A^{2n}=(-1)^{n+1} \theta^{2(n+1)}\begin{bmatrix} -(b^2+c^2) & ab & ac \\ab & -(a^2+c^2) & bc\\ ac & bc & -(a^2+b^2)\end{bmatrix}$$ and $$A^{2n+1}=(-1)^n \theta^{2n}\begin{bmatrix} 0 & -c & b \\c & 0 & -a\\ -b & a & 0\end{bmatrix}$$ I understand that you look at the powers of $A$: $A^2$, $A^3$ and so on. I also understand that you get $A^{2n}$ for even powers and $A^{2n+1}$ for odd powers. I work out $$A^2= \begin{bmatrix} -b^2-c^2 & ab & ac \\ab & -a^2-c^2 & bc\\ ac & bc & -a^2-b^2\end{bmatrix}$$ $$A^3= \begin{bmatrix} 0 & a^2c-c(-b^2-c^2) & -a^2b+b(-b^2-c^2) \\-b^2 c+c(-a^2-c^2) & 0 & ab^2-a(-a^2-c^2)\\ bc^2-b(-a^2-b^2) & -ac^2+a(-a^2-b^2) & 0\end{bmatrix}$$ From $A^2$ how do you get the expression for $A^{2n}$? From $A^3$ how do you get the expression for $A^{2n+1}$? For example how is $\theta$ incorporated? REPLY [6 votes]: The "Rodrigues formula" will be easier to prove with $A/\theta=B$; thus the result we have to prove is of the form: $$Exp(\theta B)=I_3 + (\sin \theta) B+ (1-\cos \theta) B^2 \ \ \ (0)$$ by taking this definition: $$B:=\begin{bmatrix} 0 & -c & b \\c & 0 & -a\\ -b & a & 0\end{bmatrix} \ \ (1a) \ \ \text{with} \ \ \sqrt{a^2+b^2+c^2}=1 \ \ \ (1b)$$ (your presentation has its own merits, but its drawback is that it is un-natural for an angle to have the dimension of a length). Using (1a) and (1b), the characteristic polynomial of $B$ is found to be $$\det(B-\lambda I_3)=-\lambda^3-\lambda$$ whose roots (eigenvalues of $B$) are $\{0,i,-i\}$.
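Before going further, a quick numerical check of both the characteristic polynomial and the target identity (0) can be reassuring. This is a minimal Python sketch using numpy and scipy; the particular values of $a,b,c$ (a point on the unit sphere) and of $\theta$ are arbitrary test choices:

    import numpy as np
    from scipy.linalg import expm

    a, b, c = 2/7, 3/7, 6/7          # satisfies a^2 + b^2 + c^2 = 1
    B = np.array([[0.0,  -c,   b],
                  [  c, 0.0,  -a],
                  [ -b,   a, 0.0]])

    # np.poly returns the monic characteristic polynomial coefficients,
    # here ~[1, 0, 1, 0], i.e. lambda^3 + lambda = -det(B - lambda I) up to sign and rounding.
    print(np.round(np.poly(B), 10))

    theta = 1.234
    lhs = expm(theta * B)
    rhs = np.eye(3) + np.sin(theta) * B + (1 - np.cos(theta)) * (B @ B)
    print(np.allclose(lhs, rhs))     # True: identity (0) holds for this sample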
As a consequence, Cayley-Hamilton's theorem gives $$B^3=-B \ \ \ \text{and, consequently} \ \ \ B^4=-B^2, B^5=-B^3=B, \ \ (2) \ \ \text{etc.}$$ (cycling: $-B, -B^2, B, B^2, -B...$). Use now the definition $$Exp(\theta B)=\sum_{k=0}^{\infty}\dfrac{1}{k!}\theta^k B^k \ \ \ (3)$$ Transform (3) by using the different relationships (2). Gather the terms into three groups: the first one with $I_3$ alone, the second group with terms $\alpha_kB$, and the third group with terms $\beta_kB^2$. It is easy to see that $\sum_{k=0}^{\infty} \alpha_k=\sin \theta$ and $\sum_{k=0}^{\infty} \beta_k=1-\cos \theta$, proving (0). Remark: A similar, simpler, computation exists in 2D. Here it is. Let $J=\begin{bmatrix}0&-1\\1&0 \end{bmatrix}$ (representing a $\pi/2$ rotation, with eigenvalues $i$ and $-i$). A basic relationship for $J$ is $J^2=-I$, thus $J^3=-J$, etc. Proceeding in the same way as we have done above, one obtains $$Exp(\theta J)=(\cos\theta)I_2+(\sin\theta)J$$ which is nothing else than a matrix version of the classical $e^{i\theta}=(\cos\theta)+(\sin\theta)i$ but can also be seen and proved as the particular case of (0) with $a=0, b=0, c=1.$<|endoftext|> TITLE: Comparing Two Statements of the Rank Theorem QUESTION [9 upvotes]: I don't think this is a duplicate, even though a similar question appears here. Let $m\geq n$ and let $F:\mathbb R^n\to \mathbb R^m$ be a $\mathcal C'$ mapping s.t. rank $F'(x)=r\leq n$ for all $x\in E\subseteq \mathbb R^n$. Fix $a\in E$, set $A=F'(a)$, and let $P$ be a projection in $\mathbb R^m$ onto $Y_1=$ range of $A$. Set $Y_2=\ker P.$ Then the claim is that there are open sets $U\subseteq E$ and $V$ in $\mathbb R^n$ s.t. $a\in U$; and there is a bijective $\mathcal C'$ map $H:V\to U$ s.t. $\tag1 F(Hx)=Ax+\varphi (Ax)$ where $\varphi$ is a $\mathcal C'$ function mapping $A(V)$ into $Y_2$. This is, of course, a somewhat abbreviated version of the statement that appears in blue Rudin. The proof is not hard, but it seems rather abstruse and uninformative, compared to the version I first learned, which follows my comments and question here. $H$ seems to be simply a change of coordinates (diffeomorphism), but I can't find an easy geometric interpretation of $\varphi$. It is easy to show that $P$ restricted to $F(U)$ is a bijection onto $A(V)$ but this is immediate from the other version below. In fact, it seems a lot easier to understand the idea in the following, different (?) version of the Rank Theorem: Suppose $U, V$ are open in $\mathbb R^n,\mathbb R^m$, resp., and let $F:U\to V$ be a $\mathcal C'$ map s.t. $F'(x)$ has rank $r$ for all $x\in U$. Then, there exist open sets $U_1\subseteq \mathbb R^n$, $V_1\subseteq \mathbb R^m$, and diffeomorphisms $\varphi:U_1\to U_0$ and $\psi:V_1\to V_0$ s.t. for fixed $a\in U$, $\varphi (a)=0;\ \psi (F(a))=0$ and $\tag2 \psi\circ F\circ \varphi ^{-1} (x_1,x_2,\cdots ,x_r,x_{r+1},\cdots x_n)=(x_1,\cdots ,x_r,0,0,\cdots ,0)$. This formulation is simpler and more intuitive. Are the two versions equivalent? If not, how is Rudin's "better" than the other? REPLY [5 votes]: The formulation of the rank theorem in Rudin is indeed quite confusing. However it is not quite so bad once the interpretation of the theorem is fleshed out. Rank Theorem (Rudin): Suppose $m, n, r$ are nonnegative integers, $m > r$, $n > r$, $\mathbf{F}$ is a $C^1$-mapping of an open set $E\subseteq \mathbb{R}^n$ into $\mathbb{R}^m$, and $\mathbf{F}'(\mathbf{x})$ has rank $r$ for every $\mathbf{x}\in E$.
Fix $\mathbf{a} \in E$, put $A = \mathbf{F}'(\mathbf{a})$, let $Y_1$ be the range of $A$, and let $P$ be a projection in $\mathbb{R}^m$ whose range is $Y_1$. Let $Y_2$ be the null space of $P$. Then there are open sets $U$ and $V$ in $\mathbb{R}^n$, with $\mathbf{a} \in U$, $U \subseteq E$, and there is a $1$-$1$ $C^1$-mapping $\mathbf{H}$ of $V$ onto $U$ (whose inverse is also of class $C^1$) such that $$\mathbf{F}(\mathbf{H}(\mathbf{x})) = A\mathbf{x} + \varphi(A\mathbf{x})\qquad (\mathbf{x}\in V)$$ where $\varphi$ is a $C^1$-mapping of the open set $A(V)\subseteq Y_1$ into $Y_2$. Let's contrast this with some other standard textbook formulations of the rank theorem, this one from Lee's Introduction to Smooth Manifolds, for example. Rank Theorem (Lee): Suppose $M$ and $N$ are smooth manifolds of dimensions $m$ and $n$, respectively, and $F:N\rightarrow M$ is a smooth map with constant rank $r$. For each $p \in N$ there exist smooth charts $(U,\psi)$ for $N$ centered at $p$ and $(V,\phi)$ for $M$ centered at $F(p)$ such that $F(U)\subseteq V$, in which $F$ has a coordinate representation of the form $$\hat{F}(x^1,\cdots x^r,x^{r+1},\cdots, x^n) = (x^1,\cdots,x^r,0,\cdots ,0),$$ where $\hat{F} = \phi\circ F \circ \psi^{-1}: \mathbb{R}^n \rightarrow \mathbb{R}^m$. To make the comparison more transparent, let me disregard the $C^1$ condition in Rudin's version of the theorem and just assume everything is smooth if necessary. We also don't need the full manifold formulation of Lee's version, so we will just take $M = \mathbb{R}^m$ and $N = \mathbb{R}^n$. The key content of the rank theorem is that the local image of a rank $r$ map is an $r$-dimensional manifold. The difference between the two versions is that Lee's rank theorem fully flattens out the resulting manifold by mapping it to the linear subspace $(x^1,\cdots x^r,0,\cdots 0)$. He accomplishes this by transforming both the domain and the codomain. On the other hand, Rudin never touches the codomain. He instead chooses coordinates on the domain so that the resulting image submanifold is given as an orthogonal decomposition (more precisely, the decomposition will be orthogonal if $P$ is chosen to be an orthogonal projection, otherwise it will be oblique) into the tangent plane at a point $\mathbf{a}$ (the $A\mathbf{x}$ term), plus deviations orthogonal to this tangent plane (the $\varphi(A\mathbf{x})$ term). Therefore you can view Rudin's version as sort of intermediate, where the image submanifold of $\mathbf{F}$ is straightened out partially along the tangent plane, but not quite all the way. It follows that to go from Rudin's version to Lee's version, all we need to do is to fully "straighten out" this manifold. The map that does this is $f(\mathbf{x}) = \mathbf{x} - \varphi(P\mathbf{x})$. Intuitively, this map removes the deviations orthogonal to the tangent plane. Then we have $$f(\mathbf{F}(\mathbf{H}(\mathbf{x}))) = f(A\mathbf{x} + \varphi(A\mathbf{x})) = A\mathbf{x} + \varphi(A\mathbf{x}) - \varphi(A\mathbf{x}) = A\mathbf{x},$$ where we use the fact that $PA\mathbf{x}=A\mathbf{x}$ and $P\varphi(A\mathbf{x}) = \mathbf{0}$. Note that $f$ is in fact a smooth diffeomorphism, with inverse $f^{-1}(\mathbf{x}) = \mathbf{x} + \varphi(P\mathbf{x})$. Since $A$ is of rank $r$, there exist invertible matrices $Q_1$ and $Q_2$ such that $Q_1AQ_2 = I_r$, where $I_r$ is the matrix which is zero everywhere except the first $r$ diagonal entries, which are $1$. Let $\mathbf{x} = Q_2\mathbf{y}$.
Then $$Q_1f(\mathbf{F}(\mathbf{H}(Q_2\mathbf{y})) = I_r\mathbf{y}.$$ Therefore we can take the coordinate charts $\psi$ and $\phi$ in Lee's formulation to be $\phi = \mathbf{H}\circ Q_2$ and $\psi^{-1} = Q_1\circ f$. Since we have the relations between the coordinate charts in Lee's version and the various maps in Rudin's version, we can easily go between the two formulations. These relations also makes precise the missing steps it takes to fully "straighten out" the image manifold.<|endoftext|> TITLE: Frobenius Norm and Relation to Eigenvalues QUESTION [10 upvotes]: I've been working on this problem, and I think that I almost have the solution, but I'm not quite there. Suppose that $A \in M_n(\mathbb C)$ has $n$ distinct eigenvalues $\lambda_1... \lambda_n$. Show that $$\sqrt{\sum _{j=1}^{n} \left | {\lambda_j} \right |^2 } \leq \left \| A \right \|_F\,.$$ I tried using the Schur decomposition of $A$ and got that $\left \| A \right \|_F = \sqrt{TT^*}$, where $A=QTQ^*$ with $Q$ unitary and $T$ triangular, but I'm not sure how to relate this back to eigenvalues and where the inequality comes from. REPLY [14 votes]: You are in the right way. The corresponding Schur decomposition is $A = Q U Q^*$, where $Q$ is unitary and $U$ is an upper triangular matrix, whose diagonal corresponds to the set of eigenvalues of $A$ (because $A$ and $U$ are similar). Now because Frobenius norm is invariant under unitary matrix multiplication: $$||QA||_F = \sqrt{\text{tr}((QA)^*(QA))} = \sqrt{\text{tr}(A^*Q^* QA)} = \sqrt{\text{tr}(A^*A)} = ||A||_F$$ (the same remains for multiplication of $Q$ on the right) then we could write: $$||A||_F = ||Q U Q^*||_F = ||U||_F \rightarrow \sqrt{\sum_{j=1}^n |\lambda_j|^2} \leq ||A||_F$$ directly proves your statement. Note: The inequality comes from the definition of the Frobenius norm: The sum of the square of all entries in the matrix. Since $U$ contains the eigenvalues on his diagonal, the term in the left has to be less or equal to the sum over all entries, because $U$ could have non zero entries over his diagonal.<|endoftext|> TITLE: Minimize $\mbox{trace}(AX)$ over $X$ with a positive semidefinite $X$ QUESTION [5 upvotes]: I want to minimize $\mbox{trace}(AX)$ over $X$, under the constraint that $X$ is positive semidefinite. I guess the solution should be bounded only for a positive semidefinite $A$, and it's zero, or the solution should be minus infinity. If this is correct, can anyone tell me why? or if it is wrong, please tell me the correct solution. Thank you very much!! REPLY [3 votes]: If $\mathrm X \in \mathbb R^{n \times n}$ is symmetric and positive definite, then there is a matrix $\mathrm Y \in \mathbb R^{r \times n}$ such that $$\mathrm X = \mathrm Y^T \mathrm Y$$ If $\mathrm A \in \mathbb R^{n \times n}$ is also symmetric, then it has an eigendecomposition of the form $\mathrm A = \mathrm Q \Lambda \mathrm Q^T$, where $\mathrm Q$ is orthogonal. Hence, $$\mbox{tr} (\mathrm A \mathrm X) = \mbox{tr} (\mathrm A \mathrm Y^T \mathrm Y) = \mbox{tr} (\mathrm Y \mathrm A \mathrm Y^T) = \mbox{tr} (\mathrm Y \mathrm Q \Lambda \mathrm Q^T \mathrm Y^T) = \mbox{tr} (\mathrm Y \mathrm Q \Lambda (\mathrm Y \mathrm Q)^T)$$ Let $\mathrm Z := \mathrm Y \mathrm Q$. 
Hence, $$\begin{array}{rl} \mbox{tr} (\mathrm Y \mathrm Q \Lambda (\mathrm Y \mathrm Q)^T) = \mbox{tr} (\mathrm Z \Lambda \mathrm Z^T) &= \mbox{tr} \left(\displaystyle\sum_{k=1}^n \lambda_k (\mathrm A) \mathrm z_k \mathrm z_k^T\right)\\\\ &= \displaystyle\sum_{k=1}^n \lambda_k (\mathrm A) \, \mbox{tr} \left(\mathrm z_k \mathrm z_k^T\right)\\\\ &= \displaystyle\sum_{k=1}^n \lambda_k (\mathrm A) \|\mathrm z_k\|_2^2\\\\ &\geq \lambda_{\min} (\mathrm A) \displaystyle\sum_{k=1}^n \|\mathrm z_k\|_2^2\\\\ &= \lambda_{\min} (\mathrm A) \, \mbox{tr} (\mathrm Z^T \mathrm Z)\end{array}$$ where $$\mbox{tr} (\mathrm Z^T \mathrm Z) = \mbox{tr} (\mathrm Q^T \mathrm Y^T \mathrm Y \mathrm Q) = \mbox{tr} (\mathrm Q^T \mathrm X \mathrm Q) = \mbox{tr} (\mathrm X)$$ Thus, $$\mbox{tr} (\mathrm A \mathrm X) \geq \lambda_{\min} (\mathrm A) \, \mbox{tr} (\mathrm X)$$ If $\lambda_{\min} (\mathrm A) < 0$, we can make $\mbox{tr} (\mathrm A \mathrm X)$ arbitrarily large and negative, i.e., there is no (finite) minimum. If $\mathrm A$ is positive semidefinite, then $\mbox{tr} (\mathrm A \mathrm X) \geq 0$, i.e., the minimum is zero.<|endoftext|> TITLE: Prove $\frac{a_n}{S_n^2} \leq \frac{1}{S_{n-1}}-\frac{1}{S_n}$ for partials sums of a divergent series QUESTION [7 upvotes]: Let $(a_n)$ be a sequence of non-negative numbers such that $a_1 > 0$ and $\sum a_n$ diverges. Let $S_n = \sum_{k=1}^n a_k$. Prove that, for all $n \geq 2$, $$\frac{a_n}{S_n^2} \leq \frac{1}{S_{n-1}}-\frac{1}{S_n}$$ How would I start this proof? I've just been staring at it and am very stuck. All i know so far is that $S_n-S_{n-1}=a_n$. Where does the inequality come from? REPLY [2 votes]: $$\frac{a_n}{S_n^2} \leq \frac{1}{S_{n-1}}-\frac{1}{S_n} = \frac{S_n - S_{n-1}}{S_{n-1} S_n} = \frac{a_n}{S_{n-1} S_n}$$ Multiply both sides by $q=S_n/a_n$: $$\frac 1{S_n} \leq \frac 1{S_{n-1}}$$ For given properties of $(a_n)$ the value of $q$ is postive, so the inequality does not change the direction. For the same reason we can multiply now by both denominators: $$S_{n-1} \leq S_n$$ and after subtraction $$0 \leq S_n - S_{n-1}$$ which is $$0 \leq a_n$$<|endoftext|> TITLE: Question about $\aleph$-fixed point QUESTION [5 upvotes]: I am working through a proof on cardinals I found and can't reason some of the steps. The proposition is that there is an $\aleph$-fixed point, i.e. there is an ordinal $\alpha$ (which is necessarily a cardinal), so that $\aleph_{\alpha} = \alpha$. The proof goes as follows: Let $\alpha_{0} = \aleph_{0}$ (or any other cardinal), $\alpha_{n + 1} = \aleph_{\alpha_{n}}$, and $\alpha = \sup \{ \alpha_{n} \mid n \in \omega \}$. Now if $\alpha = \alpha_{n}$ for some $n$, then $\alpha = \alpha_{n+1} = \aleph_{\alpha_{n}} = \aleph_{\alpha}$. Otherwise $\alpha$ is a limit ordinal and we have that $\aleph_{\alpha} = \sup \{ \aleph_{\xi} \mid \xi < \alpha\} = \sup\{ \aleph_{\alpha_{n}} \mid n \in \omega \} = \sup \{\alpha_{n + 1} \mid n \in \omega \} = \alpha$. Now the limit case makes sense to me, but why on earth can we state that if $\alpha = \alpha_{n+1}$, then $$ \alpha = \alpha_{n+1} = \aleph_{\alpha_{n}} = \aleph_{\alpha}. $$ I suppose the main issue I am having is why does $\alpha = \alpha_{n}$ imply that $\alpha = \alpha_{n+1}$. After that, it is really just a matter of applying definitions. REPLY [2 votes]: $\alpha\mapsto \aleph_{\alpha}$ is a normal ordinal function: it's (strictly) increasing, and it's "continuous at limit ordinals", in the sense that $$ \aleph_{\lambda} = \sup_{\alpha < \lambda} \aleph_{\alpha}. 
\tag{limit $\lambda$} $$ Every normal ordinal function $f$ has fixed points, in fact unboundedly many if its domain is unbounded. Suppose $\gamma \in dom(f)$. Let $\alpha_0 = \gamma$ and $\alpha_{n+1} = f(\alpha_n)$; let $\alpha = \sup_n f(\alpha_n)$. Then $\alpha$ is a limit, as $(\alpha_n)_{n<\omega}$ is a strictly increasing sequence with limit $\alpha$. By definition of a normal function at limits, $f(\alpha) = \sup_{\xi < \alpha} f(\xi) = \sup_n f(\alpha_n) = \sup_n \alpha_{n+1} = \alpha$.<|endoftext|> TITLE: image of adjoint equals orthogonal complement of kernel QUESTION [18 upvotes]: Let $T:V\to W$ be a linear map of finite-dimensional spaces. Then $${\rm im}(T^{\textstyle*})=({\rm ker}\,T)^\perp\ .\tag{$*$}$$ I can prove this as follows: $${\rm ker}(T^{\textstyle*})=({\rm im}\,T)^\perp\tag{$**$}$$ is quite easy, and we know $T^{\textstyle*}{}^{\textstyle*}=T$ and $W^{\perp\perp}=W$, so $${\rm im}(T^{\textstyle*})=({\rm im}\,T^{\textstyle*})^{\perp\perp}=({\rm ker}\,T^{\textstyle*}{}^{\textstyle*})^\perp=({\rm ker}\,T)^\perp\ .$$ However, I would be interested in a "direct" proof of $(*)$. It's fairly easy to show ${\rm LHS}\subseteq{\rm RHS}$. For the converse I have tried obvious things but seem to be going round in circles. Also, any insights as to why $(**)$ is harder than $(*)$ - if in fact it is :) Edit. To clarify, I am considering the adjoint defined in terms of an inner product, $$\langle\,T({\bf v})\mid{\bf w}\,\rangle =\langle\,{\bf v}\mid T^{\textstyle*}({\bf w})\,\rangle\ .$$ REPLY [12 votes]: Let ${\bf v}\in{\rm im}(T^\ast)$, then ${\bf v}=T^\ast({\bf w})$ for some ${\bf w}\in W$. Now, given ${\bf u}\in\ker T$, we see that $T({\bf u})={\bf 0}$ and therefore $$\langle {\bf u}\mid{\bf v}\rangle =\langle {\bf u}\mid T^\ast({\bf w})\rangle =\langle T({\bf u})\mid {\bf w}\rangle =\langle {\bf 0}\mid{\bf w}\rangle =0.$$ That is, ${\bf v}\in(\ker T)^\perp$. Conversely, if ${\bf v}\notin{\rm im}(T^\ast)$, then there exists an ${\bf v}'\in{\rm im}(T^\ast)^\perp$ such that $\langle {\bf v}\mid{\bf v}'\rangle\ne 0$. In fact, we have ${\bf v}'\in\ker T$ because $T^\ast T({\bf v}')\in{\rm im}(T^\ast)$, which implies $$\langle T({\bf v}')\mid T({\bf v}')\rangle =\langle {\bf v}'\mid T^\ast T({\bf v}')\rangle=0\quad\Longrightarrow\quad T({\bf v}')={\bf 0}.$$ Therefore ${\bf v}\notin(\ker T)^\perp$, which completes the proof.<|endoftext|> TITLE: Is $\sqrt{z}$ an analytic function? QUESTION [6 upvotes]: I know hat $\sqrt{z}$ is a multivalued function with a branch point at $z=0$, but it can be expanded (I think) as a Taylor series that will converge, meaning is should in theory be called analytic. Is it common practice to call such a function analytic or not? REPLY [7 votes]: Hint: If $z\neq 0$ and $r=|z|$ and $\arg(z)=\theta$, then $z=r(\cos(\theta)+i\sin(\theta))$. Hence $\sqrt{r}e^{\frac{i\theta}{2}}$ is a square root of $z$. Using this branch of $\sqrt{z}$, you can show that $\sqrt{z}$ is not analytic by showing that $\int_C \sqrt{z}\mathrm{d}z\neq 0$ where $C$ is the unit circle. If I remember correctly, this is an exercise from Foundations of Analysis.<|endoftext|> TITLE: Every $\mathcal{C}^1$ manifold can be made smooth? QUESTION [8 upvotes]: I heard of a theorem saying that each $\mathcal{C}^k$-manifold with $k\geq 1$ can be made into a smooth manifold, i.e. $\mathcal{C}^{\infty}$ (by restriction of the atlas). However, I cannot find this theorem anywhere. Can anyone point me in the write direction (book, paper, webpage, ...) and/or give me the proof? 
REPLY [8 votes]: This result can be found in Hirsch's Differential Topology. More precisely, Theorem $2.9$ of section $2$, chapter $2$, which I have reproduced below. Theorem: Let $\alpha$ be a $C^r$ differential structure on a manifold $M$, $r \geq 1$. For every $s$, $r < s \leq \infty$, there exists a compatible $C^s$ differential structure $\beta \subset \alpha$, and $\beta$ is unique up to $C^s$ diffeomorphism.<|endoftext|> TITLE: Groups which can not occur as automorphism group of a group QUESTION [10 upvotes]: Consider the following natural question: Given a finite group $H$, does there exist a finite group $G$ such that $\mathrm{Aut}(G)\cong H$? In short, does every finite group occur as the automorphism group (of some group)? The answer is NO. For example, $H=\mathbb{Z}_3$. Another counter-example is $H=Q_8$, the quaternion group of order $8$ (I leave their verification to the interested reader). Note that $\mathbb{Z}_4$ occurs as the automorphism group of $\mathbb{Z}_5$. This raises the following two questions to me: Among finite abelian groups, which groups can occur as automorphism groups? Is there any other non-abelian finite group which can not occur as an automorphism group? Perhaps the first question is easy to answer since the structure of automorphism groups of finite abelian groups is well known; I don't know its complete answer. For the second question, I would be happy to see if there is an infinite family of counterexamples. REPLY [8 votes]: There are several results addressing this question. MacHale proved in $1983$, in his article Some Finite Groups Which Are Rarely Automorphism Groups: Theorem: There is no group $G$ such that $Aut(G)$ is abelian of order $p^5$, $p^6$ or $p^7$ for a prime $p$. Theorem: There is no group $G$ such that $Aut(G)$ has order $p^4$, for an odd prime $p$. More recently, Ban and Yu showed: Theorem: There is no group $G$ such that $Aut(G)$ is an abelian $p$-group of order $n<p^{12}$, where $p>2$ is a prime, but there is one with $|Aut(G)|=p^{12}$. For the abelian case: There is no group $G$ such that $Aut(G)\cong C_n$ for $n>1$ odd, see here. A full classification of abelian groups occurring as automorphism groups of finite groups is still not known.<|endoftext|> TITLE: Inverse of the sum of an invertible matrix with known Cholesky decomposition and a diagonal matrix QUESTION [13 upvotes]: I want to ask a question about invertible matrices. Suppose there is an $n\times n$ symmetric and invertible matrix $M$, and we know its Cholesky decomposition as $M=LL'$. Then do we have an efficient way to calculate $(M+D)^{-1}$, where $D=diag(d_1,...,d_n)$ with positive diagonal entries, by taking the information of $M=LL'$ rather than calculating from scratch with $M+D$ directly? Or what about the special case $D=dI_n$? Thanks a lot! REPLY [17 votes]: At the present time there is no known algorithm for efficiently performing high rank diagonal updates to Cholesky or LU factorizations, even in the case where the update matrix is a multiple of the identity. Such an algorithm is highly desirable in a wide variety of applications, and if one were discovered it would be a major breakthrough in numerical linear algebra. The following related math.stackexchange and scicomp.stackexchange threads are worth looking into: Cholesky of Matrix plus Identity, and Can diagonal plus fixed symmetric linear systems be solved in quadratic time after precomputation?, as well as the following links noted by Kirill in the comments of the above-noted math.stackexchange thread: [1], [2], [3], [4], [5].
However, if you are willing to consider other types of matrix decompositions such as the (generalized) eigenvalue decomposition, (generalized) Schur decomposition, or (generalized) singular value decomposition, then there are efficient algorithms for performing updates based on precomputation, as long as the update is of the form: $$M \rightarrow M + d B_0,$$ where $B_0$ is a general fixed matrix that can be involved with the precomputation, and $d$ is a scalar that is not known at the precomputation stage, but rather is updated on the fly. Efficient update algorithms for the case where the matrix $B_0$ changes are not currently known (even in the diagonal case). It turns out that there is no essential difference if the update matrix $B_0$ is diagonal or not, though it does matter if it is the identity. Here I mention and summarize the results, then below discuss each case in more detail. Updates for symmetric $M$ and $B_0$ can be done efficiently after precomputing an eigenvalue decomposition, whereas in the nonsymmetric case the Schur decomposition must be used. If $B_0$ is the identity one can use the standard versions of the decompositions listed above, whereas if $B_0$ is not the identity, the generalized versions are required. For situations where the matrices naturally arise in the form $M=A^TA$ and $B_0=R^TR$ (e.g., updates to a regularization parameter in regularized least squares problems), one can work directly with the factors $A$ and $R$ by precomputing a generalized SVD decomposition, thereby never forming the squared systems, which could be much larger if the factor matrices are rectangular. Finally, if the update is low-rank (e.g., $B_0$ is diagonal but only contains a few nonzero diagonal elements), one can perform updates to a solver based on any factorization (LU, Cholesky, whatever) with the Woodbury formula. A summary of which decompositions can be used for certain cases is shown in the following tables. The numbers reference more detailed discussion below. $$\begin{array}{c|c|c} & \text{update }= d I & \text{update} = d B_0\\ \hline M \text{ and } B_0 \text{ symmetric} & \text{eigenvalue decomposition}~(1.) & \text{generalized eigenvalue decomposition}~(2.)\\ \hline M \text{ and/or } B_0 \text{ nonsymmetric}& \text{Schur decomposition}~(3.) & \text{generalized Schur decomposition}~(4.) \end{array}$$ and $$\begin{array}{c|c} M=A^TA ~\text{ and } ~B_0=R^TR & \text{generalized SVD}~(5.)\\ \hline B_0 \text{ is low rank} & \text{Woodbury formula}~(6.) \end{array}$$ Details for specific cases: (Symmetric $M$, $B_0=I$) Let $QDQ^T$ be the eigenvalue decomposition of $M$. The inverse of the updated version can be written as: $$(M + dI)^{-1} = Q(D + dI)^{-1}Q^T.$$ (Symmetric $M$ and $B_0$) Let $$B_0 U = M U \Lambda$$ be the factorization associated with the generalized eigenvalue problem for $B_0$ and $M$. It turns out (see link in previous sentence) that this $U$ simultaneously diagonalizes $M$ and $B_0$, in the sense that $U^T B_0 U = \Lambda$ and $U^T M U = I$, so you can write $$M+dB_0 = U^{-T}U^T(M + d B_0)UU^{-1} = U^{-T}(I + d \Lambda)U^{-1}.$$ The inverse of the updated matrix is then: $$(M+dB_0)^{-1} = U (I + d \Lambda)^{-1} U^T.$$ (Nonsymmetric $M$, $B_0=I$) Use the Schur decomposition as described in Jack Poulson's answer on scicomp. (Nonsymmetric $M$ and $B_0$) Let $$M=Q S Z^T, \quad B_0 = Q T Z^T$$ be the generalized Schur decomposition of $M$ and $B_0$ (also sometimes referred to as the QZ decomposition). Here $Q,Z$ are orthogonal, and $S,T$ are upper triangular. 
Then the update takes the form, $$M + d B_0 = Q(S + d T)Z^T,$$ with the inverse being: $$(M + d B_0)^{-1} = Z(S + d T)^{-1}Q^T.$$ Since the sum of upper triangular matrices is upper triangular, one can perform solves for such an updated system by triangular backsubstitution. ($M=A^TA$ and $B_0=R^TR$) Use the generalized SVD. The way to do this for matrix updates is described as an example in Section 3 of Van Loan's original paper: Van Loan, Charles F. "Generalizing the singular value decomposition." SIAM Journal on Numerical Analysis 13.1 (1976): 76-83. ($B_0$ is low rank) Use the Woodbury formula.<|endoftext|> TITLE: On a proof that the metric volume form is parallel wrt to the Levi-Civita connection QUESTION [5 upvotes]: In the context of (semi-)Riemannian geometry, the following fact is well-known: if a (semi-)Riemannian manifold $(M,g)$ is oriented, then the unique volume form $\epsilon = \mathrm{vol}_g$, induced by the metric together with the orientation, is parallel with respect to the Levi-Civita connection $\nabla$. That is, $$ \nabla \epsilon = 0$$ or, using indices, $\nabla_b \epsilon _{a_1 \cdots a_n}=0$ where $n = \mathrm{dim}(M)$. I am aware of a few different ways of proving this result and most make good sense to me. My problem is that I can't seem to completely follow the logic in one particular proof which I found in Robert M. Wald's "General Relativity" (Appendix B, page 432). The argument there goes as follows: $\epsilon$ is uniquely specified by the choice of orientation together with the condition $$ \epsilon^{a_1 \cdots a_n} \epsilon_{a_1 \cdots a_n} = (-1)^s n! $$ where $s$ is the number of negative eigenvalues of $g$ (so $s=0$ for a Riemannian metric) and indices are raised and lowered using $g$. Taking covariant derivatives, since the RHS is constant, one has $$ 0 = \nabla_b (\epsilon^{a_1 \cdots a_n} \epsilon_{a_1 \cdots a_n}) = (\nabla_b \epsilon^{a_1 \cdots a_n}) \epsilon_{a_1 \cdots a_n} + \epsilon^{a_1 \cdots a_n} \nabla_b \epsilon_{a_1 \cdots a_n} = 2\epsilon^{a_1 \cdots a_n} \nabla_b \epsilon_{a_1 \cdots a_n}$$ using the fact that the metric is parallel with respect to $g$ in the last step. So far so good, we have obtained that $\epsilon^{a_1 \cdots a_n} \nabla_b \epsilon_{a_1 \cdots a_n} = 0$. Question. But then it is argued that this, "in turn, implies that $\nabla_b \epsilon_{a_1 \cdots a_n}=0$ since $\epsilon_{a_1 \cdots a_n}$ is totally antisymmetric in its last $n$ indices and $\epsilon^{a_1 \cdots a_n}$ is non-vanishing". Can anyone expand on the logic of how this implication works? I have tried to interpret this by thinking of $\epsilon^{a_1 \cdots a_n}$ and of $\nabla_j \epsilon_{a_1 \cdots a_n}$, for each fixed $j$, as analogous to two antisymmetric matrices $A$ and $B$ respectively, and the desired statement is then something like $$ \mathrm{Tr}(A^TB) = 0 \ \Longrightarrow \ B = 0,$$ but I can't seem to get very far with this reasoning. Thanks in advance for your help! REPLY [3 votes]: Since $\epsilon$ and $\nabla_b \epsilon$ (for fixed $b$) are both antisymmetric, the only non-zero terms in the sum $$\epsilon^{a_1 \cdots a_n} \nabla_b \epsilon_{a_1 \cdots a_n}$$ will be those where $a_1\cdots a_n$ is a permutation of $1 \cdots n$. Moreover, if we permute the indices back to this order in each term, the signs that each $\epsilon$ pick up will be the same, so they will always cancel. Thus all the terms are in fact the same; i.e. 
$$\epsilon^{a_1 \cdots a_n} \nabla_b \epsilon_{a_1 \cdots a_n} = n!\ \epsilon^{1\cdots n} \nabla_b \epsilon_{1 \cdots n}.$$ The RHS is now a genuine product, and we know $\epsilon^{1 \cdots n}$ is non-zero (since it is a non-degenerate volume form); so for the product to be zero we must have $\nabla_b \epsilon_{1\cdots n} = 0.$<|endoftext|> TITLE: Evaluation of $\int \frac{1-\sin x}{(1+\sin x)\cos x}dx$ QUESTION [5 upvotes]: Evaluate $$I=\int \frac{(1-\sin x) dx}{(1+\sin x)\cos x}$$ I tried in the following way: $$1-\sin x=1-\cos\left(\frac{\pi}{2}-x\right)=2 \sin^2\left(\frac{\pi}{4}-\frac{x}{2}\right)$$ Similarly $$1+\sin x=2 \cos^2\left(\frac{\pi}{4}-\frac{x}{2}\right)$$ So $$I=\int \tan^2\left(\frac{\pi}{4}-\frac{x}{2}\right) \sec x \: dx=\int \sec^2\left(\frac{\pi}{4}-\frac{x}{2}\right) \sec x \: dx-\int \sec x \:dx$$ If $$J=\int \sec^2\left(\frac{\pi}{4}-\frac{x}{2}\right) \sec x \: dx$$ Applying parts for $J$ we get $$J=-2\sec x \tan\left(\frac{\pi}{4}-\frac{x}{2}\right)+2\int \sec x \tan x \tan\left(\frac{\pi}{4}-\frac{x}{2}\right)dx $$ But i am clueless from here REPLY [6 votes]: Multiply both numerator and denominator by $\;1+\sin x\;$ , so you get ( observe that $\;\cos x=(1+\sin x)'\;$): $$\int\frac{\cos x}{(1+\sin x)^2}dx=-\frac1{1+\sin x}+K$$<|endoftext|> TITLE: Localization Preserves Euclidean Domains QUESTION [6 upvotes]: I'm wanting to prove that given a ring $A$ (by "ring" I mean a commutative ring with identity) and a multiplicative subset $S \subset A$: if $A$ is an Euclidean Domain, and $0 \notin S$ then $S^{-1}A$ (localization of A at S) is also an Euclidean Domain. I'm trying to produce an Euclidean Function in $S^{-1}A$ using the Euclidean Function $N:A \rightarrow \mathbb{N}$, that I already have from $A$ but I'm having trouble trying to define it in a way that works and verifies the properties an Euclidean Function must verify. Does any one mind giving me hints? I don't really want a solution.. I would like to work it myself. Thanks in advance. :) REPLY [7 votes]: In wikipedia's language, we may assume that $N$ satisfies $N(a)\le N(ab)$ for $a,b\in A$. Let us denote the candidate function for the localization by $N_S\colon (S^{-1}A)\setminus\{0\}\to\mathbb N$. We will also replace $S$ by its saturation, i.e. by $S_{\mathrm{sat}}:=\{ a\in A \mid \exists b\in A: ab\in S\}$. Notice that $S_{\mathrm{sat}}^{-1}A=S^{-1}A$ because for any $a\in S_{\mathrm{sat}}$, we have $a^{-1}=\frac{b}{s}\in S^{-1}A$ where $b\in A$ and $s\in S$ are such that $s= ab$. Hence, assume henceforth that $S$ is saturated in the sense that for any $a\in A$, if there exists some $b\in A$ with $ab\in S$, then we have $a\in S$. Hint: First, note that you may assume $N_S(s)=1$ for all $s\in S$. Indeed, for any $a\in S^{-1}A$, you have $N_S(s)\le N_S(\frac as\cdot s)=N_S(a)$. Hence, $N_S(s)$ must be minimal. Argue similarly that $N_S(\frac 1s)=1$ for $s\in S$. Now use the fact that $A$ is a unique factorization domain. Full spoiler, hover for reveal: We first note that an Element $s\in S$ can not have any prime factor in $A\setminus S$. Indeed, let $s=s_1\cdots s_n$ be the prime factors of $s$. Then, $s_1\in S$ and $s_2\cdots s_n\in S$ because $S$ is saturated. Proceed by induction. For $\frac{ta}{s}\in S^{-1}A$, with $t,s\in S$ and $a$ not divisible by any element of $S$, let $N_S\left(\frac{ta}{s}\right):=N(a)$. This is well-defined because if $\frac{t_1a_1}{s_1}=\frac{t_2a_2}{s_2}$, then $s_1t_2a_2=s_2t_1a_1$. Since $a_1$ is not divisible by any element of $S$, it contains no prime factor in $S$. 
Since $s_2t_1\in S$, it contains no prime factor in $A\setminus S$. This argument symmetrically works for $s_1t_2\cdot a_1$ and it follows that $s_1t_2=s_2t_1$ and (more importantly) $a_1=a_2$. Now we prove that $N_S$ yields a degree function turning $S^{-1}A$ into a Euclidean ring. Given $\frac{t_1a_1}{s_1},\frac{t_2a_2}{s_2}\in S^{-1}A$, we perform division with remainder $s_2t_1a_1 = qa_2 + r$ such that either $r=0$ or $N(r)<N(a_2)$.<|endoftext|> TITLE: $f_1,...,f_n$ be linear functionals on a real vector space $V$, then is there a norm on $V$ which makes every $f_i$ continuous? QUESTION [5 upvotes]: Let $V$ be a real vector space, $f_1,...,f_n$ be linear functionals on $V$; then does there exist a norm on $V$ with respect to which each of the $f_i$ is continuous? And what if we have infinitely many, linearly independent, such functionals? REPLY [3 votes]: The answer for infinitely many functionals is no; there's a counterexample here. For finitely many functionals it must be yes... Right. First, there is a norm on any real vector space $X$, for example if $B$ is a (Hamel) basis define $$\left\vert\left\vert\sum_{b\in B}c_b b\right\vert\right\vert=\sum_{b\in B}|c_b|.$$ Now if $||\cdot||$ is a norm on $X$, define a new norm by $$|||x|||=||x||+\sum_{j=1}^n|f_j(x)|.$$<|endoftext|> TITLE: Find the latus rectum of the Parabola QUESTION [9 upvotes]: Let $y=3x-8$ be the equation of the tangent at the point $(7,13)$ lying on a parabola whose focus is at $(-1,-1)$. Evaluate the length of the latus rectum of the parabola. I got this question in my weekly test. I tried to assume the general equation of the parabola and solve the system of equations to calculate the coefficients with the help of these given conditions. But this way it becomes very lengthy and tedious. Can anyone provide an elegant solution? Thanks. REPLY [5 votes]: Denote: $F$: the focus $(-1,-1)$; $P$: the point on the parabola $(7,13)$; $M$: the foot of the perpendicular from the focus $F$ to the tangent at $P$; $V$: the vertex of the parabola. Then a standard property of the parabola (Prop 125 in Askwith - A Course of Pure Geometry) states that $FM^2 = FP\cdot FV$. We easily obtain $FM = \sqrt {10}$ and $FP = 2 \sqrt{65}$. Then $FV = \sqrt{\frac{5}{13}}$. The length of the latus rectum is $4\,FV = 4 \sqrt{\frac{5}{13}}$.<|endoftext|> TITLE: Prove $x^5+10x^3+ax^2+bx+c=0$ has no more than four real roots QUESTION [5 upvotes]: I'm trying to prove that $$f(x) = x^5+10x^3+ax^2+bx+c=0$$ cannot have more than four real roots, no matter the values of the real numbers $a,b,c$. My attempt: $f'(x) = 5x^4+30x^2+2ax+b =0$ and $ f''(x) = 20x^3+60x+2a$; now $f(x)$ is differentiable and continuous on the whole real line, but from here I want to use the intermediate value theorem; however, I don't know how to apply it to $f$. REPLY [10 votes]: $f'''(x)=60x^2+60$ has no real roots $\implies f''$ has at most $1$ real root $\implies f'$ has at most $2$ real roots $\implies f$ has at most $3$ real roots. This argument uses Rolle's theorem at each step.<|endoftext|> TITLE: Rational sum of the $p$-adic series QUESTION [6 upvotes]: Koblitz states (as one of the exercises in chapter 2) that whenever we are given an integer $k > 0$ and a prime $p$, the series $$ f(p, k) = \sum_{n=0}^\infty n^kp^n $$ converges in $\mathbb Q_p$ and its limit is even rational (that is, in its $p$-adic expansion we can find a place from which the same finite sequence of digits repeats over and over). Can we find an explicit formula for $f(p, k)$ or maybe fix some $p_0$ and then efficiently compute the value of $f(p_0, k)$?
Here are some initial terms: $\begin{array}{c|ccccccc} p/k &1&2&3&4 & 5 & 6 & 7 & 8 & 9 & 10 \\\hline 2 &2& -6& 26& -150& 1082& -9366& 94586& -1091670&14174522& -204495126 \\ 3 &\frac{3}{4}&-\frac{3}{2}&\frac{33}{8}&-15&\frac{273}{4}&-\frac{1491}{4}&\frac{38001}{16}&-17295&\frac{566733}{4}&-\frac{2579313}{2} \\ 5 &\frac{5}{16}&-\frac{15}{32}&\frac{115}{128}&-\frac{285}{128}&\frac{3535}{512}&-\frac{26355}{1024}&?&-\frac{1139685}{2048}&?&? \\ 7 &\frac{7}{36}&-\frac{7}{27}&\frac{91}{216}&-\frac{70}{81}&\frac{2149}{972}&-\frac{3311}{486}&\frac{285929}{11664}&-\frac{220430}{2187}&?&? \end{array}$ REPLY [2 votes]: To expand the hint given by Bruno Joyal I would like to point out that this is a special case of Hurwitz-Lerch zeta function. To evaluate it for any $k$ it's enough to transform a property of Euler polynomials proven by Frobenius (from OEIS) to get $$\sum_{n=0}^\infty n^k p^n = p \sum_{n=1}^k n!\left\{\begin{matrix} k \\ n \end{matrix}\right\} (p-1)^{-n-1}.$$<|endoftext|> TITLE: Interesting integral: $I=\int_0^1 \int_0^1 \log\left( \cos(\pi x)^2 + \cos(\pi y)^2 \right)dxdy$ QUESTION [11 upvotes]: I've stumbles across the following integral when doing some combinatorial work: $$ I=\int_0^1 \int_0^1 \log\left( \cos(\pi x)^2 + \cos(\pi y)^2 \right)dxdy$$ After plugging this into Mathematica, it outputs: $$ I=\frac{4C}{\pi}-\log(4)$$ where $C$ is Catalan's constant $\left( C= \frac{1}{1^2}-\frac{1}{3^2}+\frac{1}{5^2}-\ldots \right)$. I have messed around with this integral yet I cannot figure out how it got that result. Anyone have any ideas? REPLY [14 votes]: The calculations will be simplified if we know that $\cos(\pi x) \ge 0$ and $\cos(\pi y) \ge 0$. To achieve that we observe that $$ \int_{0}^{1}\int_{0}^{1}\log(\cos^{2}(\pi x)+\cos^{2}(\pi y))\, dxdy = 4\int_{0}^{1/2}\int_{0}^{1/2}\log(\cos^{2}(\pi x)+\cos^{2}(\pi y))\, dxdy . $$ We study $$ f(s) = \int_{0}^{1/2}\int_{0}^{1/2}\log(\cos^{2}(\pi x)+s\cos^{2}(\pi y))\, dxdy $$ and are interested in $4f(1)$. Since $$ f(1)-f(0) = \int_{0}^{1}f'(s)\, ds $$ we are ready if we can determine the integral and $f(0)$. We get that \begin{gather*} f(0) = \int_{0}^{1/2}\int_{0}^{1/2}\log(\cos^{2}(\pi x))\, dxdy = \int_{0}^{1/2}\log(\cos(\pi x))\, dx \\[2ex]= \int_{0}^{1/2}\log(\sin(\pi x))\, dx = \dfrac{1}{2}\int_{0}^{1}\log(\sin(\pi x))\, dx = \dfrac{1}{2}\int_{0}^{1/2}\log(\sin(\pi 2z))2\, dz\\[2ex] = \int_{0}^{1/2}\log 2\, dz + \int_{0}^{1/2}\log(\sin(\pi z))\, dz +\int_{0}^{1/2}\log(\cos(\pi z))\, dz\\[2ex] = \dfrac{1}{2}\log 2 + f(0)+f(0). \end{gather*} Consequently$f(0) = -\dfrac{1}{2}\log 2$. We proceed to $$ f'(s) = \int_{0}^{1/2}\int_{0}^{1/2}\dfrac{\cos^{2}(\pi y)}{\cos^{2}(\pi x) +s\cos^{2}(\pi y)}\, dxdy. $$ The integral with respect to $x$ can be evaluated via a standard substitution $t= \tan\dfrac{z}{2}$. \begin{gather*} \int_{0}^{1/2}\dfrac{\cos^{2}(\pi y)}{\cos^{2}(\pi x) +s\cos^{2}(\pi y)}\, dx = \int_{0}^{1/2}\dfrac{2\cos^{2}(\pi y)}{1+\cos(2\pi x) + 2s\cos^{2}(\pi y)}\, dx\\[2ex]= \dfrac{1}{\pi}\int_{0}^{\pi}\dfrac{\cos^{2}(\pi y)}{1+\cos(z) + 2s\cos^{2}(\pi y)}\, dz = \dfrac{1}{\pi}\int_{0}^{\infty}\dfrac{\cos^{2}(\pi y)}{1+t^{2} +1-t^{2} + 2(1+t^{2})s\cos^{2}(\pi y)}2\, dt \\[2ex] = \dfrac{1}{\pi}\int_{0}^{\infty}\dfrac{\cos^{2}(\pi y)}{1 + s\cos^{2}(\pi y)+ st^{2}\cos^{2}(\pi y)}\, dt\\[2ex] = \dfrac{1}{\pi}\left[\dfrac{\cos(\pi y)}{\sqrt{s}\sqrt{1+s\cos^{2}(\pi y)}}\arctan\left(\sqrt{\dfrac{s}{1+s\cos^{2}(\pi y)}}\cos(\pi y)t\right)\right]_{0}^{\infty} \\[2ex]= \dfrac{\cos(\pi y)}{2\sqrt{s}\sqrt{1+s\cos^{2}(\pi y)}}. 
\end{gather*} It remains to integrate with respect to $y$. \begin{gather*} \int_{0}^{1/2}\dfrac{\cos(\pi y)}{2\sqrt{s}\sqrt{1+s\cos^{2}(\pi y)}}\, dy = \int_{0}^{1/2}\dfrac{\cos(\pi y)}{2\sqrt{s}\sqrt{1+s -s\sin^{2}(\pi y)}}\, dy\\[2ex] = \left[\dfrac{1}{2\pi s}\arcsin\left(\sqrt{\dfrac{s}{1+s}}\sin(\pi y)\right)\right]_{0}^{1/2} = \dfrac{1}{2\pi s}\arcsin\sqrt{\dfrac{s}{1+s}} = \dfrac{1}{2\pi s}\arctan\sqrt{s}. \end{gather*} Finally we return to \begin{gather*} f(1)-f(0) = \int_{0}^{1}f'(s)\, ds = \int_{0}^{1}\dfrac{1}{2\pi s}\arctan\sqrt{s}\, ds\\[2ex] = \int_{0}^{1}\dfrac{1}{2\pi u^{2}}\arctan(u)2u\, du = \int_{0}^{1}\dfrac{1}{\pi u}\arctan(u)\, du = \dfrac{C}{\pi}. \end{gather*} Since we already know $f(0)$ we conclude that $$ \int_{0}^{1}\int_{0}^{1}\log(\cos^{2}(\pi x)+\cos^{2}(\pi y))\, dxdy = \dfrac{4C}{\pi} -2\log 2 = \dfrac{4C}{\pi} - \log 4. $$<|endoftext|> TITLE: Can every partially ordered set (POSET) take the form of a directed acyclic graph (DAG)? QUESTION [9 upvotes]: A POSET (partially ordered set) is a set $S$ on the elements of which we have established a partial order relation ($\leq$), i.e. a relation which is: reflexive: $x\leq x$ for every $x$ in $S$; anti-symmetric: $x \leq y \wedge y \leq x \Rightarrow x=y$; transitive: $x\leq y, y\leq z \Rightarrow x\leq z$. My question is whether every POSET can take the form of a DAG (Directed Acyclic Graph) if we view its elements as the nodes and the relation itself as the edge set. REPLY [7 votes]: YES. Every POSET can take the form of a DAG. However, in order to obtain a less cluttered graph, you'd better avoid drawing every single edge. You can omit the edges that can be inferred from the reflexive and transitive properties. Moreover you can arrange the nodes in order to orient every edge upward and omit the arrow tips. In this way you obtain a Hasse diagram: it is just a stripped-down DAG.<|endoftext|> TITLE: Complex manifold with subvarieties but no submanifolds QUESTION [15 upvotes]: Note, I have now asked this question on MathOverflow. There are examples of compact complex manifolds with no positive-dimensional compact complex submanifolds. For example, generic tori of dimension greater than one have no compact complex submanifolds. The proof of this fact, see this answer for example, shows that these tori also have no positive-dimensional analytic subvarieties either (because analytic subvarieties also have a fundamental class). My question is whether the non-existence of compact submanifolds always implies the non-existence of subvarieties. Does there exist a compact complex manifold which has positive-dimensional analytic subvarieties, but no positive-dimensional compact complex submanifolds? Note, any such example is necessarily non-projective. REPLY [2 votes]: This question has been answered on MathOverflow. I've replicated inkspot's accepted answer below. There are surfaces of type $VII_0$ on which the only subvariety is a nodal rational curve (I. Nakamura, Invent. math. 78, 393-443 (1984), Theorem 1.7, with $n=0$). REPLY [2 votes]: The theorem that inkspot refers to in his answer is originally from Inoue's paper New Surfaces with No Meromorphic Functions, II, which seems like a more complete reference for this question. In particular, Inoue gives an explicit example of a compact complex surface which has an analytic subvariety but no compact complex submanifolds. If $x$ is a real quadratic irrationality (i.e. a real irrational solution of a real quadratic equation), denote its conjugate by $x'$.
Let $M(x)$ be the free $\mathbb{Z}$-module generated by $1$ and $x$, then set $U(x) = \{\alpha \in \mathbb{Q}(x) \mid \alpha > 0, \alpha\cdot M(x) = M(x)\}$ and $U^+(x) = \{\alpha \in U(x) \mid \alpha\cdot\alpha' > 0\}$. Both $U(x)$ and $U^+(x)$ are infinite cyclic groups and $[U(x) : U^+(x)] = 1$ or $2$. If $\omega$ is a real quadratic irrationality such that $\omega > 1 > \omega' > 0$, then $\omega$ is a purely periodic modified continued fraction; that is, $\omega = [[\overline{n_0, n_1, \dots, n_{r-1}}]]$ where $n_i \geq 2$ for all $i$, $n_j \geq 3$ for at least one $j$, and $r$ is the smallest period. For every such $\omega$, Inoue constructs a compact complex surface $S_{\omega}$ which is now known as an Inoue-Hirzebruch surface. There are compact subvarieties $C$ and $D$ of $S_{\omega}$ with irreducible components $C_0, \dots, C_{r-1}$ and $D_0, \dots, D_{s-1}$ respectively; here $s$ is the smallest period of the modified continued fraction expansion of another element $\omega^*$ related to $\omega$ (alternatively, $s$ can be determined from the modified continued fraction expansion of $\frac{1}{\omega}$). When $r \geq 2$, $C$ is a cycle of non-singular rational curves, and when $r = 1$, $C$ is a rational curve with one ordinary double point. Proposition $5.4$ shows that $C_0, \dots, C_{r-1}, D_0, \dots, D_{s-1}$ are the only irreducible curves in $S_{\omega}$. In the case where $[U(\omega) : U^+(\omega)] = 2$, we have $r = s$. Furthermore, there is an involution $\iota$ such that $\iota(C_i) = D_i$ for $i = 0, \dots, r - 1$. The quotient of $S_{\omega}$ by $\iota$ is denoted $\hat{S}_{\omega}$ and is now known as a half Inoue surface. Note that the images of $C_0, \dots, C_{r-1}$ are the only irreducible curves in $\hat{S}_{\omega}$. If we can find a real quadratic irrationality $\omega$ such that $\omega > 1 > \omega' > 0$, $r = 1$, and $[U(\omega) : U^+(\omega)] = 2$, then $\hat{S}_{\omega}$ is a compact complex surface containing a unique curve, namely a rational curve with one ordinary double point. In particular, it provides an example of a compact complex manifold with a subvariety but no compact complex submanifolds. One such $\omega$ was given in the paper. Example. Take $\omega = (3 + \sqrt{5})/2$. Then $[U(\omega) : U^+(\omega)] = 2$ and $\alpha_0 =\ \text{a generator of}\ U(\omega) = (1 + \sqrt{5})/2$, $\alpha = \alpha_0^2 = (3 + \sqrt{5})/2$, $\omega = [[\overline{3}]]$, $r = 1$. In this case, $b_2(\hat{S}_{\omega}) = 1$ and $\hat{S}_{\omega}$ contains exactly one curve $\hat{C}$. $\hat{C}$ is a rational curve with one ordinary double point and $(\hat{C})^2 = -1$. For those interested in the details, in addition to Inoue's paper, it may also be worth reading the earlier paper Hilbert modular surfaces by Hirzebruch. As mentioned in his paper, Inoue used some methods from Hirzebruch's paper (which gives some indication of why the resulting surfaces are jointly named).<|endoftext|> TITLE: Function analytic in each variable does not imply jointly analytic QUESTION [8 upvotes]: I have heard that a function $f: \mathbb R^2 \to \mathbb R$ can be analytic in each variable (i.e. $f(x,y_0) = \sum_{n=0}^{\infty} a_n x^n, \forall x \in \mathbb R$, and the same for $y$) without being jointly analytic (i.e. $f(x,y) = \sum_{i,j=0}^{\infty} a_i b_j x^i y^j, \forall x,y \in \mathbb R$). Is there some standard example of such a function? 
REPLY [6 votes]: For completeness, I add a jointly $C^\infty$ example, taken from separate vs joint real analyticity: $$f(x,y) = xy\exp\left(-\frac{1}{x^2+y^2}\right),\qquad f(0,0)=0 \tag1$$ The restriction to each coordinate axis is identically zero. The restriction to other lines parallel to coordinate axes is real analytic since it's a composition of real analytic functions. But $f$ is not jointly analytic, since it decays faster than any polynomial at $(0,0)$: $$\lim_{(x,y)\to (0,0)} (x^2+y^2)^{-N} f(x,y)=0 \qquad \forall N$$ (A function that is representable by a power series cannot do this, since it's asymptotic to the sum of the lowest-degree nonzero terms of the series.) A relevant result was proved by J. Siciak in A characterization of analytic functions of n real variables, Studia Mathematica 35:3 (1970), 293-297: if a jointly $C^\infty$ function is real-analytic on every line segment, then it is jointly real-analytic. The function (1) fails the condition of Siciak's theorem, since it is not real-analytic on the line $y=x$.<|endoftext|> TITLE: Notation for equivalent equations QUESTION [5 upvotes]: What is the notation for showing that equations are equivalent by rearranging terms? For example, for the arc length formula: $$s=r\theta$$ I sometimes solve for $r$ and write it as $$r=\frac{s}{\theta}$$ When I show my work, I usually write this relationship like this: $$s=r\theta\Rightarrow r=\frac{s}{\theta}$$ Is this the correct way to write it or is there a better way? Up to this point, my teachers haven't cared, but I know in college and later in high school it will definitely matter. REPLY [5 votes]: Per request by GamrCorps, I post my original comment here. What you have written says that the truth of the expression on the left implies the truth of the expression on the right. It is correct. If you want equivalence you can use the double arrow that shows how the truth of either implies the truth of the other.<|endoftext|> TITLE: Is there an explicit irrational number which is not known to be either algebraic or transcendental? QUESTION [15 upvotes]: There are many numbers which are not able to be classified as being rational, algebraic irrational, or transcendental. Is there an explicit number which is known to be irrational but not known to be either algebraic or transcendental? REPLY [6 votes]: The most famous have been answered. Let us be a little less constructive. At least one of $\zeta(5)$, $\zeta(7)$, $\zeta(9)$, $\zeta(11)$ is irrational, a result due to V. V. Zudilin, Communications of Moscow Mathematical Society (2001), and their true nature (algebraic and transcendental) seems unknown at the present time. This result improves the irrationality of one of the nine numbers $\zeta(5)$, $\zeta(7)$, $\ldots$ $\zeta(21)$.<|endoftext|> TITLE: A String Tied Around The Earth QUESTION [6 upvotes]: Say you're standing on the equator and you have a string below you tied around the equator (40,075 km) that is the length of the equator + 1 meter (40,075.001 km). What is the maximum height you can you lift the string off the ground? Can you create a function of both circumference of the circle (earth) and string to output the distance between the two if pulled tight? Assumptions: For illustration, the result would be pulled from a single point, making a triangle until it met with the earth, in which it would follow the curvature of the earth. Similar to a snow-cone or O> The string does not stretch The earth can be assumed to be a perfect sphere REPLY [2 votes]: Draw a picture. 
Let $P$ be the peak of the stretched string, and let $C$ be the centre of the Earth. Let $G$ be the point on the Earth's surface which is on the line $PC$, and let $T$ and $T'$ be the points of tangency of the string with the Equator after the string has been pushed upwards to full height $PG$. Let $r$ be the radius of the Earth, and let $\epsilon r$ be the excess amount of string in addition to $2\pi r$. Note that $\epsilon$, in your example, is very small. Let $\theta=\angle TCG$. Note that $\theta$ is small. The key equation is $$\epsilon r=2r\tan\theta -2r\theta.$$ This holds because $2r\tan\theta$ is the sum of the lengths of the two tangent segments $PT$ and $PT'$, while $2 r\theta$ is the amount of string saved because it no longer covers the minor arc $TGT'$. The difference is equal to the amount $\epsilon r$ of extra string we have available. Using the fact that $\tan\theta\approx \theta +\frac{\theta^3}{3}$, we get $$\theta\approx \left(\frac{3\epsilon}{2}\right)^{1/3}.$$ Now that we know $\theta$, we can find the height $PG$ of the peak of the string. From the diagram, we can see that $PG=r(\sec\theta-1)$. Since $\theta$ is small, this is well approximated by $\frac{r\theta^2}{2}$, and we end up with $$PG\approx \frac{r}{2}\left(\frac{3\epsilon}{2}\right)^{2/3}.$$<|endoftext|> TITLE: Principal Minors of $B(AB)^{-1}A$ and Cauchy-Binet Terms QUESTION [5 upvotes]: I am looking for a proof for the following conjecture. I think the result follows from applying a generalization of the Cauchy-Binet formula to the matrix $\mathbf{M}$ defined below. I've tested it as much as I could using Mathematica and am convinced it is true, but I haven't been able to prove it. Any help will be much appreciated. Setup Suppose $\mathbf{A}$ and $\mathbf{B}$ are $(n \times m)$ and $(m \times n)$ matrices respectively, with $n<m$, and suppose that $\mathbf{A}\mathbf{B}$ is invertible, with $\Delta\equiv\det(\mathbf{A}\mathbf{B})$. Define the $(m\times m)$ matrix $$ \mathbf{M}\equiv\mathbf{B}(\mathbf{A}\mathbf{B})^{-1}\mathbf{A}, $$ and note that $\mathbf{M}^2=\mathbf{M}$ and $\operatorname{rank}(\mathbf{M})=n$. Let $K\equiv\{1,\dots,m\}$, let $P(K)$ be the collection of non-empty subsets of $K$, and let $K_n$ be the collection of subsets of $K$ with exactly $n$ elements. For $k\in P(K)$, let $\mathbf{M}_{k}$ denote the principal submatrix of $\mathbf{M}$ whose rows and columns are indexed by $k$, let $\mathbf{A}_{ck}$ denote the submatrix of $\mathbf{A}$ formed by the columns indexed by $k$, and let $\mathbf{B}_{rk}$ denote the submatrix of $\mathbf{B}$ formed by the rows indexed by $k$ (and similarly for combined subscripts such as $\mathbf{B}_{rk,cj}$). Conjecture: for every $k\in P(K)$, $$ \det(\mathbf{M}_{k})=\frac{1}{\Delta}\sum_{j\in \{ i \in K_n : k \subset i \}}\det(\mathbf{B}_{rj})\det(\mathbf{A}_{cj}). $$ All principal minors of $\mathbf{M}$ of order greater than $n$ are equal to zero, since $\operatorname{rank}(\mathbf{M})=n$. Attempts at the proof using the suggestions by @darijgrinberg Fix $k \in P(K)$ and let $v\equiv |k|$, i.e. the number of elements in $k$. First suppose that $v \gt n$, so that $\det(\mathbf{M}_k)=0$, since $\operatorname{rank}(\mathbf{M}_k)\le\operatorname{rank}(\mathbf{M})=n<v$, and $\{ i \in K_n : k \subset i \} = \emptyset$, so that the conjecture holds trivially. Next, suppose that $v \le n$. What follows is tentative. Working with the definition of $\mathbf{M}_{k}$: By definition, $$ \mathbf{M}_{k}=\mathbf{B}_{rk}(\mathbf{A}\mathbf{B})^{-1}\mathbf{A}_{ck} $$ where $\mathbf{B}_{rk}$ and $(\mathbf{A}\mathbf{B})^{-1}\mathbf{A}_{ck}$ are $(v \times n)$ and $(n \times v)$ matrices respectively.
Let $L\equiv \{1,\dots,n\}$ and $L_v$ be the set of subsets of $L$ with $v$ elements, and from the Cauchy-Binet formula we have that $$ \det(\mathbf{M}_{k})=\sum_{j \in L_v}\det\left(\mathbf{B}_{rk,cj}\right)\det\left((\mathbf{A}\mathbf{B})_{rj}^{-1}\mathbf{A}_{ck}\right), \tag1$$ and $$ \det\left((\mathbf{A}\mathbf{B})_{rj}^{-1}\mathbf{A}_{ck}\right)=\sum_{i \in L_v}\det\left((\mathbf{A}\mathbf{B})_{rj,ci}^{-1}\right)\det\left(\mathbf{A}_{ri,ck}\right).$$ So that $$ \det(\mathbf{M}_{k})=\sum_{i,j \in L_v}\det\left((\mathbf{A}\mathbf{B})_{rj,ci}^{-1}\right)\det\left(\mathbf{A}_{ri,ck}\right)\det\left(\mathbf{B}_{rk,cj}\right),$$ and since $$ (\mathbf{A}\mathbf{B})^{-1}=\frac{1}{\Delta}\operatorname{adj}(\mathbf{A}\mathbf{B}) \implies (\mathbf{A}\mathbf{B})_{rj,ci}^{-1}=\frac{1}{\Delta}\operatorname{adj}(\mathbf{A}\mathbf{B})_{rj,ci} $$ it follows that $$ \det(\mathbf{M}_{k})=\frac{1}{\Delta^v} \sum_{i,j \in L_v}\det\left(\operatorname{adj}(\mathbf{A}\mathbf{B})_{rj,ci}\right)\det\left(\mathbf{A}_{ri,ck}\right)\det\left(\mathbf{B}_{rk,cj}\right).$$ Next, for each $i,j \in L_{v}$ define $i'\equiv \{1,\dots,n\}\setminus i$ and $j'\equiv \{1,\dots,n\}\setminus j$, then it follows from Jacobi's theorem that $$ \det(\operatorname{adj}(\mathbf{A}\mathbf{B})_{rj,ci})=(-1)^{\sigma_{ij}}\det(((\mathbf{A}\mathbf{B})^{\top})_{rj',ci'})\Delta^{v-1}=(-1)^{\sigma_{ij}}\det(\mathbf{A}_{rj'}\mathbf{B}_{ci'}) $$ where $$ \sigma_{ij} \equiv i_{v+1}'+\cdots+i_{n}'+j_{v+1}'+\cdots+j_{n}' $$ and, therefore, $$ \det(\mathbf{M}_{k})=\frac{1}{\Delta}\sum_{i,j \in L_{v}}(-1)^{\sigma_{ij}}\det(\mathbf{A}_{rj'}\mathbf{B}_{ci'})\det(\mathbf{A}_{ri,ck})\det(\mathbf{B}_{rk,cj}) $$ So far I haven't been able to proceed from here. Working with the conjecture equation: Notice that for any $j \in K_{n}$, $$ \det((\mathbf{A}\mathbf{B})^{-1}\mathbf{A}_{cj})=\frac{1}{\Delta}\det(\mathbf{A}_{cj}), $$ so that the conjecture can be rewritten as $$ \det(\mathbf{M}_{k}) = \sum_{j\in \{ i \in K_n : k \subset i \}}\det(\mathbf{B}_{rj})\det((\mathbf{A}\mathbf{B})^{-1}\mathbf{A}_{cj}), \tag2$$ which looks similar equation $(1)$, rewritten here for convenience $$ \det(\mathbf{M}_{k})=\sum_{j \in L_v}\det\left(\mathbf{B}_{rk,cj}\right)\det\left((\mathbf{A}\mathbf{B})_{rj}^{-1}\mathbf{A}_{ck}\right). $$ I am not sure how to deal with the difference in the indexes between the two equations. My guess is that I will need to use a Laplace expansion to equation $(2)$. REPLY [2 votes]: So, I think I have a proof now. It is nowhere as simple as I believed when I wrote the comments. I shall use a few relatively standard facts without much of a proof; I hope you know them (if not, let me know and I'll expand). First, let me introduce my notations (which are sometimes different from yours): Fix a commutative ring $\mathbf{k}$. Let $\mathbb{N}=\left\{ 0,1,2,\ldots\right\} $. For each $n\in\mathbb{N}$, let $\left[ n\right] $ denote the $n$-element set $\left\{ 1,2,\ldots,n\right\} $. For each $n\in\mathbb{N}$, let $I_{n}\in\mathbf{k}^{n\times n}$ be the $n\times n$ identity matrix. For each $n\in\mathbb{N}$ and each $\left( a_{1} ,a_{2},\ldots,a_{n}\right) \in\mathbf{k}^{n}$, let $\operatorname*{diag} \left( a_{1},a_{2},\ldots,a_{n}\right) \in\mathbf{k}^{n\times n}$ be the diagonal $n\times n$-matrix whose diagonal entries are $a_{1},a_{2} ,\ldots,a_{n}$. 
I shall use the standard abbreviation $\sum\limits_{U\subseteq V}$ (when $V$ is a set) for $\sum\limits_{U\in\mathcal{P}\left( V\right) }$ (where $\mathcal{P}\left( V\right) $ denotes the power set of $V$). For any $n\in\mathbb{N}$ and $m\in\mathbb{N}$ and any $n\times m$-matrix $A\in\mathbf{k}^{n\times m}$, we define the following submatrices of $A$: If $S=\left\{ s_{1} TITLE: False Proof that $\sqrt{4}$ is Irrational QUESTION [10 upvotes]: Everyone with any basic knowledge of number theory knows the classic proof of the irrationality of $\sqrt{2}$. Curious about generalizations using elementary methods, I looked up the irrationality of $\sqrt{3}$, and found the following: Say $ \sqrt{3} $ is rational. Then $\sqrt{3}$ can be represented as $\frac{a}{b}$, where a and b have no common factors. So $3 = \frac{a^2}{b^2}$ and $3b^2 = a^2$. Now $a^2$ must be divisible by $3$, but then so must $a $ (fundamental theorem of arithmetic). So we have $3b^2 = (3k)^2$ and $3b^2 = 9k^2$ or even $b^2 = 3k^2 $ and now we have a contradiction. Such a proof follows the same basic logic as the proof for $\sqrt{2}$, except for using the fundamental theorem of arithmetic to replace and generalize the trivial fact that $n$ is even if $n^2$ is even. However, when I apply this proof format to $\sqrt{4} $ (which is clearly an integer and thus rational) I get the following: Say $ \sqrt{4} $ is rational. Then $\sqrt{4}$ can be represented as $\frac{a}{b}$, where a and b have no common factors. So $4 = \frac{a^2}{b^2}$ and $4b^2 = a^2$. Now $a^2$ must be divisible by $4$, but then so must $a $ (fundamental theorem of arithmetic). So we have $4b^2 = (4k)^2$ and $4b^2 = 16k^2$ or even $b^2 = 4k^2 $, which implies that $b=4n$ by the fundamental theorem. Now we have a contradiction (since can note that both $a$ and $b$ are divisible by $4$ and we assumed they were coprime) This proof is clearly false, yet I fail to see where it differs. Where does it do so? REPLY [4 votes]: Your error is stating that if $a^2$ is divisible by 4 so must $a$ be. The fundimental theorem states if a prime $p $ divides $ab $ then $p $ must divide $a $ or $p$ must divide $b $. That is true because $p $ is indivisable. But if $p $ is composite it doesn't hold. $p$ could equal $jk $ and $j$ could divide $a $ and $k $ divide $b $. Example: 3 divides 4 times 9 so 3 either divides 4 or 3 divides 9 because 3 is prime. But 6 divides 4 times 9 but 6 neither divides 4 nor 9 but instead 3 divides 9 while 2 divides 4 so 6=2 times 3 divide 4 times 9. So 4 divides $a^2$ means $2*2$ divides $a*a$ so 2 divides $a $ is all you can conclude with certainty. ... because 4 is not prime. Actually because 4 is not square free. All of its prime factors must divide into $a$ but the square powers can be distributed among (and are) distributed among the square powers of $a$.<|endoftext|> TITLE: A pill bottle with large and small pills QUESTION [7 upvotes]: Alright here's the exact question: A bottle initially contains $48$ large pills and $76$ small pills. Each day a patient randomly chooses one of the pills. If a small pill is chosen, it is eaten. If a large pill is chosen, the pill is broken in two; one part is eaten and the other part is returned to the bottle, and is now considered to be a small pill. Let $X$ be the number of small pills in the bottle after the last large pill has been chosen and its smaller half is returned. Find $\operatorname{E}(X)$. Now here is my thought process so far: $X_i$ = the time at which the $i$th large pill is chosen and then broken. 
Then $X = \sum_{i = 1}^{48}X_i$, so $\operatorname{E}(X) = \sum_{i = 1}^{48}\operatorname{E}(X_i)$. From this I gather that $X\sim \operatorname{Geo}(\frac{48-i+1}{76+i-1})$, but I have no idea how I would actually go about computing such a ridiculous amount of geometric random variables. REPLY [5 votes]: The process is not well-defined since you didn't specify a distribution. It stands to reason that a patient randomly choosing a pill out of a bottle is more likely to pick a large one than a small one. For this answer, I'll assume that you meant to imply that the pills are chosen independently and with uniform distribution. Let there be $k$ large and $n$ small pills. A small pill survives until the end if it is chosen after all large pills. For the $n$ original small pills, there are $k$ large pills they need to survive, and the probability for this is $\frac1{k+1}$ by symmetry. By linearity of expectation, this yields a contribution $\frac n{k+1}$ to the expected number of surviving small pills. Then we also have to take into account the small pills that are produced during the process. The small pill that is produced when the $j$-th to the last large pill is broken needs to survive $j-1$ large pills, with probability $\frac1j$, for a contribution of $$ \sum_{j=1}^k\frac1j=H_k\;, $$ where $H_k$ is the $k$-th harmonic number. In total, the expected number of small pills is $$ \frac n{k+1}+H_k\;, $$ which in your case with $k=48$ and $n=76$ is $$ \frac{76}{49}+H_{48}=\frac{18624692152821783046631}{3099044504245996706400}\approx6.01\;. $$ Your approach is wrong in two respects; $X$ is the number of surviving small pills, not a time; and the sum of times at which the large pills are chosen has no significance. If you wanted to pursue this approach, you'd need the sum of the times it takes to choose the next large pill after the previous large pill. This is the approach taken in the coupon collector's problem, but I doubt that it would prove fruitful in this case.<|endoftext|> TITLE: Primes of the form $(2p)^{2}+1$, $p$ prime, have $h^{2}+1$ as a prime divisor? QUESTION [6 upvotes]: I'm an undergraduate student and I usually ask questions here about things I'm struggling with in my academical mathematical studies, but this particular question is actually more like a curiosity. More specifically, I'm playing around with that "famous" problem of creating a sequence that only generates prime numbers. I know that many people have tried this and it is a very hard problem, but this is part of mathematics, right? I mean, attacking hard problems as a (yet) naive student at the very least to make you realize how difficult they actually are and hopefully motivates for further study on the issue. Feel free to say that I'm wasting my time, though. So, to the question: I realized that the numbers $N$ of the form $N=(2p)^{2}+1$, where $p$ is prime, are often prime numbers (at least for small $p)$. This is sort of justified by the fact that it will never by divisible by a prime $q\equiv 3\pmod 4$, since it would imply that $-1$ is a quadratic residue mod $q$. Hence, all its prime divisors are $\equiv 1\pmod 4$, and in particular, all of its divisors are of this form, since $1\cdot 1\equiv 1\pmod 4$. Now, I thought of that famous result that says that any prime $\equiv 1\pmod 4$ can be written as a sum of two squares of integers, and hence such a number $N$ would be of the form (in prime factorization) \begin{equation} N=(a^{2}+b^{2})(c^{2}+d^{2})\dots(x^{2}+y^{2}). 
\end{equation} My question (for now, since I've just started these investigations), is: Is it true that, when $N$ is not prime, one of these prime factors will always be of the form $h^{2}+1$? This is happening in my particular examples. I guess this is a hard question, and I'd appreciate any efforts, or references. Thanks. REPLY [3 votes]: Some examples where no prime factor is of the form $h^2 + 1$:
 p      4p^2+1     prime factors
 17        1157    13 . 89
 23        2117    29 . 73
 43        7397    13 . 569
 97       37637    61 . 617
 107      45797    41 . 1117
 113      51077    13 . 3929
 127      64517    149 . 433
 137      75077    193 . 389
 167     111557    281 . 397
 173     119717    13 . 9209
 227     206117    53 . 3889
 263     276677    337 . 821
 277     306917    13 . 23609
 283     320357    457 . 701
 307     376997    277 . 1361
 313     391877    29 . 13513
 347     481637    13 . 37049
 353     498437    41 . 12157
 383     586757    29 . 20233
 397     630437    229 . 2753
 433     749957    13 . 57689
 443     784997    181 . 4337
 463     857477    61 . 14057
 467     872357    41 . 21277
 487     948677    29 . 32713
 503    1012037    13 . 77849
 523    1094117    193 . 5669
 557    1240997    29 . 42793
 577    1331717    317 . 4201
 607    1473797    13 . 73 . 1553
 613    1503077    509 . 2953
 617    1522757    421 . 3617
 643    1653797    181 . 9137
 673    1811717    29 . 62473
 727    2114117    53 . 113 . 353
 757    2292197    53 . 61 . 709
 787    2477477    97 . 25541
 823    2709317    13 . 208409
 853    2910437    73 . 39869
 857    2937797    1489 . 1973
 863    2979077    53 . 56209
 877    3076517    41 . 75037
 907    3290597    89 . 36973
 953    3632837    13 . 113 . 2473
 977    3818117    229 . 16673
 997    3976037    13 . 305849
 1093   4778597    233 . 20509
 1097   4813637    1721 . 2797
 1117   4990757    269 . 18553
 1123   5044517    41 . 61 . 2017
 1153   5317637    13 . 97 . 4217
 1223   5982917    1153 . 5189
 1237   6120677    109 . 233 . 241
 1283   6584357    13 . 137 . 3697
<|endoftext|> TITLE: Let $\nu$ be a signed measure, then E is $\nu$-null iff $|\nu|(E)=0$ QUESTION [6 upvotes]: I am having trouble with understanding the proof of the forward direction. In all the proofs I have seen it's always the case that the following fact is implied. Taking a Hahn Decomposition $X=P\cup N$. $\nu(E)=0 \Rightarrow \nu(E\cap P)=0$ since $E\cap P\subset E$. However why is this true? An example, proving $E$ is $\nu$-null iff $|\nu| (E)=0$. I can only deduce $0=\nu(E)=\nu(E\cap P)+\nu (E\cap N)$ and $\nu(E\cap P)\geq 0$ since it is a subset of $P$, but $\nu (E\cap N)$ could still be negative. Alternatively, are there any other proofs? REPLY [5 votes]: Your reasoning is right. Your problem seems to be with the definition of a null set for signed measures. Let $\nu$ be a signed measure on the $\sigma$-algebra $\Sigma$; then the definition of a $\nu$-null set is: Definition: $E$ is a $\nu$-null set if for all $A\in \Sigma$ with $A\subseteq E$ we have $\nu(A)=0$. Note that if $E$ is a $\nu$-null set then $\nu(E) =0$. But the implication does not work in the reverse direction. That is to say: we may have $\nu(E) =0$ without $E$ being a $\nu$-null set. Let us prove that if $E$ is a $\nu$-null set then $|\nu|(E)=0$.
Proof: Take a Hahn decomposition $X=P\cup N$. Since $E$ is a $\nu$-null set and $E\cap P\subset E$, we have $\nu(E\cap P)=0$. Since $E$ is a $\nu$-null set and $E\cap N\subset E$, we have $\nu(E\cap N)=0$. So we have $$|\nu|(E) = \nu(E\cap P) - \nu(E\cap N)=0.$$ Remark: The implication $$\nu(E)=0 \Rightarrow \nu(E\cap P)=0 \textrm{ since } E\cap P\subset E$$ is actually wrong. The correct statement is $$E \textrm{ is a } \nu\textrm{-null set } \Rightarrow \nu(E\cap P)=0 \textrm{ since } E\cap P\subset E$$<|endoftext|> TITLE: Is a metric space on the positive real numbers not complete? QUESTION [5 upvotes]: Say we have a metric space $(\mathbb{R}^+, d)$ where the distance function is $d(x,y) = |x - y| + | 1/x - 1/y |$. Then I argue that this metric space is not complete: If we look at the Cauchy sequence $1/n$, which is contained in the metric space, we see that the limit of the sequence $\lim_{n\to \infty} \frac{1}{n} = 0$ is not in the metric space. Hence, the metric space is not complete. Am I doing something wrong or is this a valid argument/proof? REPLY [11 votes]: Milo Brandt has shown what is wrong with your argument. In fact $\langle\Bbb R^+,d\rangle$ is complete. Here’s a hint outlining how you might prove that. HINT: Suppose that $\sigma=\langle x_n:n\in\Bbb N\rangle$ in $\langle\Bbb R^+,d\rangle$ is $d$-Cauchy. Show that $\sigma$ is Cauchy in the Euclidean metric, and conclude that $\sigma$ converges to some $x\in\Bbb R$ in the Euclidean metric. Show that $x\ge 0$. Show that if $x$ were $0$, $\sigma$ wouldn’t be Cauchy after all. Show that if $x>0$, $\sigma$ converges to $x$ in the metric $d$ as well as in the Euclidean metric. Put the pieces together to conclude that $\langle\Bbb R^+,d\rangle$ is complete.<|endoftext|> TITLE: Does every non-Archimedean absolute value satisfy the ultrametric inequality? QUESTION [8 upvotes]: The Archimedean property occurs in various areas of mathematics; for instance it is defined for ordered groups, ordered fields, partially ordered vector spaces and normed fields. In each of these contexts it is roughly the following property: Archimedean property. For any two (strictly) positive elements $x$ and $y$ there is some $n\in\mathbb{N}$ such that $n \cdot x$ exceeds $y$. This definition might not be adequate in each of the mentioned contexts, but at least it conveys the general idea. Indeed, in the context of normed fields we have the following definition (paraphrasing the definition given on Wikipedia): Definition. Let $F$ be a field with an absolute value $\left|\:\cdot\:\right|$, that is, a function $\left|\:\cdot\:\right| : F \to \mathbb{R}_{\geq 0}$ satisfying the following properties: $|x| = 0$ if and only if $x = 0$; For all $x,y\in F$ we have $|xy| = |x|\cdot |y|$; For all $x,y\in F$ we have $|x + y| \leq |x| + |y|$. Then $F$ is said to be Archimedean if for any non-zero $x\in F$ there exists some $n\in\mathbb{N}$ such that $$ \big|\:\underbrace{x + \cdots + x}_{n\ \text{times}}\:\big| > 1. $$ An absolute value that does not satisfy this property is called non-Archimedean. However, in the literature the term non-Archimedean absolute value is usually used as a synonym for an absolute value which satisfies the ultrametric inequality: For any $x,y\in F$ we have $|x + y| \leq \max(|x|,|y|)$. It is not so hard to see that an ultrametric absolute value can never be Archimedean: one easily proves that $|1| = 1$ holds, and then we find $|1 + 1| \leq 1$, followed by $|1 + 1 + 1| \leq 1$ and so on (repeatedly using the ultrametric inequality).
It is however not so clear to me that any non-Archimedean absolute value must necessarily satisfy the ultrametric inequality. Is this always true? Or is it only true for certain fields, say $\mathbb{Q}$, that happen to be the most common fields in the study of absolute values on fields? REPLY [7 votes]: Indeed, a non-Archimedean absolute value automatically satisfies the ultrametric inequality (as pointed out by Robert Israel). In my original question, I used a slightly different formulation of the Archimedean property (and the referenced lecture notes might not be online forever), so here is a full proof. Proposition. Let $F$ be a field and let $|\cdot|$ be a non-Archimedean absolute value. Then $|\cdot|$ satisfies the ultrametric inequality. Proof. Since $|\cdot|$ is non-Archimedean, we may choose some non-zero $x\in F$ such that $$ \big|\:\underbrace{x + \cdots + x}_{n\ \text{times}}\:\big| \leq 1,\tag*{for all $n\in\mathbb{N}$.} $$ We may interpret any element of $\mathbb{N}$ (or $\mathbb{Z}$, for that matter) as an element of $F$ by identifying it with its image under the natural ring homomorphism $\mathbb{Z} \to F$. Then the above becomes $$ |n|\cdot |x| = |n\cdot x| \leq 1,\tag*{for all $n\in\mathbb{N}$.} $$ Since $x$ is non-zero by assumption, we have $|x| \neq 0$, hence $$ |n| \leq \frac{1}{|x|},\tag*{for all $n\in\mathbb{N}$.} $$ Now let $y,z\in F$ be given. By the binomial theorem, for all $k\in\mathbb{N}$ we have $$ (y + z)^k \: = \: \sum_{j=0}^k \binom{k}{j} y^j z^{k-j}, $$ hence $$ |y+z|^k \: = \: |(y+z)^k| \: = \: \left|\sum_{j=0}^k \binom{k}{j} y^j z^{k-j}\right| \: \leq \: \sum_{j=0}^k \frac{|y|^j\cdot |z|^{k-j}}{|x|} \: \leq \: \frac{k+1}{|x|}\cdot \max(|y|,|z|)^k. $$ Equivalently: for all $k\in\mathbb{Z}_{> 0}$ we have $$ |y+z| \: \leq \: \sqrt[k]{\frac{k+1}{|x|}}\cdot \max(|y|,|z|). $$ As $k$ increases, the factor $\sqrt[k]{\frac{k+1}{|x|}}$ converges to one, so we have $$ |y + z| \: \leq \: \lim_{k\to\infty} \sqrt[k]{\frac{k+1}{|x|}}\cdot \max(|y|,|z|) \: = \: \max(|y|,|z|).\tag*{$\Box$} $$ This peculiar little trick is now standard in the literature. It is also used in many textbooks, for instance:
W. Schikhof, Ultrametric Calculus: An Introduction to p-Adic Analysis, Cambridge Studies in Advanced Mathematics. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511623844 (Lemma 8.2).
Paulo Ribenboim, The Theory of Classical Valuations, Springer Monographs in Mathematics (section 1.2, fact E).
Antonio J. Engler & Alexander Prestel, Valued Fields, Springer Monographs in Mathematics (proposition 1.1.1).
Pierre Antoine Grillet, Abstract Algebra, Second Edition, Springer Graduate Texts in Mathematics 242 (chapter VI, proposition 3.2).
Alain M. Robert, A Course in p-adic Analysis, Springer Graduate Texts in Mathematics 198 (chapter 2, section 1.6, first theorem).<|endoftext|> TITLE: Can you define linear maps between vector spaces over different fields? QUESTION [8 upvotes]: My book defines linear maps between vector spaces with a chosen fixed field, but can you define linear maps between vector spaces over different fields? Is there any reason to restrict attention to fixed fields? REPLY [7 votes]: You could do something like this: Suppose $V$ is a vector space over $F$ and $W$ is a vector space over $K$. A linear transformation $T: V \to W$ in your sense would have to satisfy $T(\lambda v)= \lambda T(v)$, but the problem is that $\lambda\notin K$.
We can fix this by assuming a ring homomorphism $g: F \to K$ and then asking that $T(\lambda v)= g(\lambda) T(v)$. This will work, but it is not anything more general: Indeed, since $F$ is a field, $g$ must be injective. Therefore, $F$ can be seen as a subfield of $K$ and $W$ can be seen as a vector space over $F$, so there is nothing new.<|endoftext|> TITLE: Can some mathematical objects exist without set theory? QUESTION [8 upvotes]: I'm a beginner in undergraduate mathematics, and am studying subjects like real analysis and abstract algebra. It seems that most mathematical objects hinge on set theory. A group is a set equipped with special properties, so is a ring, a field, and a vector space; a function is a subset of $ \mathsf{A} \times \mathsf{B} $, and a relation is a subset of $ \mathsf{A} \times \mathsf{A} $, and so on. But without set theory, will these objects be ill-defined? I've spent a month learning about the construction of $ \mathbb{R} $, as well as proving things like $ \ a + 0 =a \ $ by studying Dedekind Cuts. Even a cut requires the notion of a set. I've realised that the intuition I had from childhood about what real numbers are is sloppy; I think I know how they work and what they are, but if you push me to explain what they are rigorously, I'm unable to do so. So if mathematical objects cannot be explained in terms of sets, they are at best hazy things with no clear definition. But how did mathematicians come to realise that by studying special types of sets with special properties, wonderful things will emerge? How did they also avoid studying arbitrary boring sets? Finally, I've read a few accounts about groups being highly geometric objects, with many exquisite "creatures" waiting to be studied, but what is geometric about a set with special properties? Likewise, I came into real analysis thinking that I can finally prove why $ \ a+0 = a \ $. Indeed I can, but not without a cost; I've been rather self-conscious about obvious things I once took for granted if I don't have a definition of them in terms of sets. Edit: My question differs from Zev's link because I want to know if mathematical objects hinge on sets (in other words, they exist outside of set theory), not whether there are mathematical objects which cannot be described by set theory. REPLY [3 votes]: Ultimately your question, as expressed in your Edit, is a philosophical one, not a mathematical one. If you're a formalist, then no, the objects of consideration have no independent existence, and, visualize what we may, only the formal systems, marks on paper and screens, really exist. If you're a Platonist, however, mathematical objects actually exist in the same sense that tables and chairs do, and axiom systems merely characterize classes of such abstract entities. It's a question of which comes first, the theory or the (alleged) objects: do the axioms of, say, group theory "bring groups into existence" (existence at least in the minds of mathematicians)? or do they and did they exist independently, and the group axioms merely capture group-ness? Were groups created or discovered? There's no theorem that answers these questions, and there's no "right" answer; it's more a matter of disposition and belief.<|endoftext|> TITLE: Proving $n - \frac{_{2}^{n}\textrm{C}}{2} + \frac{_{3}^{n}\textrm{C}}{3} - ...= 1 + \frac{1}{2} +...+ \frac{1}{n}$ QUESTION [7 upvotes]: Prove that $n - \frac{_{2}^{n}\textrm{C}}{2} + \frac{_{3}^{n}\textrm{C}}{3} - ... 
+ (-1)^{n+1}\frac{_{n}^{n}\textrm{C}}{n} = 1 + \frac{1}{2} + \frac{1}{3} +...+ \frac{1}{n}$ I am not able to prove this. Please help! REPLY [2 votes]: @Olivier Oloa's proof is so brilliant! Even so, we can still prove this by induction on $n$. For $n=1$ the result is clear. Now, suppose that the result holds for some $n\in\mathbb{N}$; then for $n+1$ we have \begin{align} \sum_{k=1}^{n+1}\frac{(-1)^{k+1}}{k}{n+1\choose k} &=\sum_{k=1}^n\frac{(-1)^{k+1}}{k}\cdot\frac{n+1}{n-k+1}{n\choose k}+\frac{(-1)^{n+2}}{n+1}\\ &=\sum_{k=1}^n\frac{(-1)^{k+1}}{k}{n\choose k} +\sum_{k=1}^n\frac{(-1)^{k+1}}{n-k+1}{n\choose k} -\frac{(-1)^{n+1}}{n+1}\\ &=\sum_{k=1}^n\frac{1}{k} -\frac{1}{n+1}\sum_{k=1}^n\frac{(-1)^{k}(n+1)}{n-k+1}{n\choose k} -\frac{(-1)^{n+1}}{n+1}\\ &=\sum_{k=1}^n\frac{1}{k} -\frac{1}{n+1}\sum_{k=0}^{n+1}(-1)^{k}{n+1\choose k} +\frac{1}{n+1}\\ &=\sum_{k=1}^n\frac{1}{k}-\frac{(1-1)^{n+1}}{n+1}+\frac{1}{n+1}\\ &=\sum_{k=1}^{n+1}\frac{1}{k}. \end{align} This completes the proof.<|endoftext|> TITLE: Up-to-date Matrix Cookbook QUESTION [6 upvotes]: My copy of the Matrix Cookbook is dated November 15, 2012, and is the newest copy I've been able to find. Identities may not change over time, but the approach to an error-free presentation can be asymptotic, and some topics may be missing. Where can I find an up-to-date copy? The address "matrixcookbook.com" listed in the 2012 book is defunct, the email in the book doesn't work, the 2302.dk website seems to be out of commission, and I haven't found personal sites for the authors. REPLY [3 votes]: I'm making good progress---360 equations---on compiling a matrix mathematics reference Matrix Forensics here. Crucially, the LaTeX source is available so that others can contribute to it over time. The compiled PDF is available here.<|endoftext|> TITLE: Trying to prove that $\lim_{N \rightarrow \infty} \frac{1}{N} \Sigma_{n=1}^N f(n\alpha) = \int_0^1 f(x) dx$ QUESTION [5 upvotes]: Let $\alpha$ be an irrational number. Let $f: \mathbb{R} \rightarrow \mathbb{C}$ be a continuous periodic function with period 1. Show that $\lim_{N \rightarrow \infty} \frac{1}{N} \Sigma_{n=1}^N f(n\alpha) = \int_0^1 f(x)\,dx$. The beginning (but probably not the end) of my confusion with this problem has to do with the irrational inputs. Why would that be necessary? Any help is appreciated! REPLY [2 votes]: You can use the equidistribution theorem to conclude that the sequence $\{n\alpha\},n=1,2,\dots$ is equidistributed on the unit interval $[0,1]$ (note that we need the condition that $\alpha$ is irrational), where $\{n\alpha\}:=n\alpha-\lfloor n\alpha\rfloor$ is the fractional part of $n\alpha$. Then apply the Riemann integral criterion for equidistribution (see here).<|endoftext|> TITLE: When are homology groups the trivial group? QUESTION [5 upvotes]: I've noticed that all the spaces $X$ whose (singular) homology I've computed or seen computed have $H_n(X)=0$ whenever $n$ is greater than the dimension of $X$. So I have the following conjecture: Conjecture. Suppose $n$ is the least integer such that a space $X$ may be embedded into $\mathbb{R}^n$. Then for all $m >n$ we have $H_m(X)=0$. Is this true? REPLY [8 votes]: This is false, and somewhat surprising. There is a counterexample due to Milnor and Barratt. The space is a union of a nested sequence of spheres embedded in $\mathbb{R}^3$ and it has infinitely many non-zero singular homology groups. The nesting of the spheres is similar to that in the Hawaiian earring.
However, the result is true if one imposes local niceness on the space $X$, e.g. if $X$ is a subcomplex of an $n$-dimensional CW-complex.<|endoftext|> TITLE: Joint distribution of $X$ with itself QUESTION [6 upvotes]: I have a weird question about probability and density functions: Let's take a random variable $X$ whose p.d.f. exists; let's denote it $f_{X}\left(x\right)$. Does the joint probability density $f_{X,X}\left(x,x\right)$ exist? Clearly it's not continuous, but I wanted to "check" that the marginal of $X$ ($f_{X}\left(x\right)$) would be the integral of this joint distribution... Can you give me more insight about it? Thanks, Romain REPLY [6 votes]: You can define the probability distribution of $(X,X)$ but there is no density with respect to the Lebesgue measure of $\mathbb{R}^2$, because $(X,X)$ is supported by $\Delta=\{(x,x),\, x\in\mathbb{R}\}$, whose Lebesgue measure in $\mathbb{R}^2$ is $0$. So the pdf of $(X,X)$ does not exist; it's a degenerate distribution. However, you can calculate its cdf, which always exists. $$P(X\leq x,X\leq y)=P(X\leq\min(x,y))=F_X(\min(x,y))$$ where $F_X$ denotes the cdf of $X$.<|endoftext|> TITLE: Relations between center (fundamental group) and (co)root and weight lattices for Lie groups QUESTION [9 upvotes]: I would like to find some explanation or reference for the following facts, provided they are correct, and clarify some of the assumptions. Denote by $G$ a (perhaps semisimple compact connected) Lie group: (1) its center is given by the quotient of the weight lattice by the root lattice, i.e. $Z(G)=\Lambda_\text{weight} / \Lambda_\text{root}$; (1a) if moreover $G$ is (perhaps simple and simply connected) of ADE type, its center is $Z(G)=\Lambda^*_\text{root} /\Lambda_\text{root}$, where $*$ denotes the dual lattice; (2) its first homotopy group is given by the quotient of the co-weight lattice by the co-root lattice, i.e. $\pi_1(G)=\Lambda_\text{weight}^\vee / \Lambda_\text{root}^\vee$. REPLY [14 votes]: To see such relations more clearly, one should first of all avoid identifying the weight space $\Lambda\otimes\Bbb R$ with its dual space. Even in the semisimple case, an invariant inner product is only determined up to a scalar factor, and although the Killing form provides a distinguished choice, it is really not that special, nor well behaved under direct sum decompositions. Let $G$ be a compact connected semisimple Lie group with Cartan subgroup $H$, and let $\def\Hom{\operatorname{Hom}}\Lambda=\Hom(H,\Bbb C^\times)$ be its character lattice. With $\def\h{\mathfrak h}\h$ the Lie algebra of $H$, each $\lambda\in\Lambda$ has a derivative $D\lambda:\h\to\Bbb C$ with purely imaginary image, and by mapping $\lambda$ to $\def\1{2\pi\mathbf i}{D\lambda\over\1}:\h\to\Bbb R$, we can embed $\Lambda$ into the dual vector space $\h^*$. For all $h\in\h$ one has $\lambda(\exp(h))=\exp(\1\langle\lambda,h\rangle)$ (the pairing indicates that the second $\lambda$ is being interpreted as a linear form on $\h$). Therefore $\ker(\exp:\h\to H)$ is the set of $h\in\h$ on which all linear forms in $\Lambda\subset\h^*$ take integral values; this is a lattice in$~\h$ that I shall denote by $\Lambda^\vee$. We have $H\cong\h/\Lambda^\vee$. The conjugation action of $G$ on itself induces (as its derivative at the identity) the adjoint action $\def\g{\mathfrak g}G\to GL(\g)$, a linear representation of $G$ on its Lie algebra$~\g$. By connectedness of$~G$, an element $g\in G$ is central whenever its adjoint action is trivial.
The centre $\def\ZG{\mathrm Z(G)}\ZG$ is contained in $H$. Restricting the adjoint action to$~H$, the space $\g$ decomposes into the direct sum of $\h$ (the set of fixed points, or weight space for the zero weight) and of the one-dimensional root-subspaces. The characters associated to the root subspaces are the roots, which form a finite subset$~\Phi$ of$~\Lambda$. It follows that $\ZG=\bigcap_{\alpha\in\Phi}\ker(\alpha)$ (where the kernel is of $\alpha$ as element of $\Hom(H,\Bbb C^\times)$). The $\Bbb Z$-span $\langle\Phi\rangle$ of $\Phi$ is a sub-lattice of$~\Lambda$ that has full rank, and therefore finite index (only at this point is it relevant that $G$ is semisimple; until now the weaker hypothesis of reductive suffices). It is already spanned by a chosen subset of simple roots. In the correspondence $H\cong\h/\Lambda^\vee$, a subset $\ker(\alpha)\subset H$ corresponds to the set of classes of $h\in\h$ with $\langle\alpha,h\rangle\in\Bbb Z$. Therefore $\ZG$ corresponds naturally to $\langle\Phi\rangle^\vee/\Lambda^\vee$, where $\langle\Phi\rangle^\vee$ is the dual lattice to the root lattice$~\langle\Phi\rangle^\vee$, the set of $h\in\h$ for which $\langle\alpha,h\rangle\in\Bbb Z$ for all simple roots$~\alpha$, and therefore for all $\alpha\in\langle\Phi\rangle$. This dual lattice is a sub-lattice of$~\h$ that contains $\Lambda^\vee$ as a finite index sub-lattice, so $\ZG$ is a finite Abelian group. To get back to the original question, one sees that the quotient $\Lambda/\langle\Phi\rangle$ is naturally isomorphic not to $\ZG$, but to its dual group $\widehat{\ZG}=\Hom(\ZG,\Bbb C^\times)$. Every finite Abelian group is isomorphic to its dual group (by the structure theorem), but not naturally so. So the first statement is not wrong, it is just not the best way to represent $\ZG$ as a quotient of lattices. For the fundamental group $\pi_1(G)$ one needs to consider the universal cover $\widetilde G$ of $G$, which is a simply connected group. As usual for universal covers, the fundamental group $\pi_1(G)=\pi_1(G,e)$ is isomorphic the fibre of the covering $\widetilde G\to G$ over the identity $e$ of$~G$ (by associating to a loop in$~G$ the endpoint of its lift to$~\widetilde G$). Since the inverse image of $\ZG$ is $\mathrm Z(\widetilde G)$, the latter centre contains the mentioned fibre (which is the inverse image of $\{e\}$). So $\pi_1(G,e)$ can be seen a subgroup of $\mathrm Z(\widetilde G)$, namely the kernel of the natural surjection $\mathrm Z(\widetilde G)\to\ZG$. But by the above $\mathrm Z(\widetilde G)$ is isomorphic to the quotient of lattices $\langle\Phi\rangle^\vee/\widetilde\Lambda{}^\vee$, where $\widetilde\Lambda{}^\vee$ is the dual lattice to the character lattice $\widetilde\Lambda$ of$~\widetilde G$, the lattice generated by the fundamental weights. But we have $\widetilde\Lambda{}^\vee=\langle\Phi^\vee\rangle$, the lattice generated by the (simple) coroots $\alpha^\vee$ (and not to be confused with the dual lattice $\langle\Phi\rangle^\vee$ of the root lattice). Then $\pi_1(G)$ gets naturally identified with the kernel of the projection $\langle\Phi\rangle^\vee/\langle\Phi^\vee\rangle\to\langle\Phi\rangle^\vee/\Lambda{}^\vee$, which is $\Lambda{}^\vee/\langle\Phi^\vee\rangle$. This gives you your second point, but you should not write the coroot lattice as $\Lambda_{\rm root}^\vee$: this would suggest the larger lattice $\langle\Phi\rangle^\vee$ rather than $\langle\Phi^\vee\rangle$. Your point 1a. 
seems just confused to me; the only way it could be justified is by somehow making $\Lambda_{\rm root}^\vee$ denote $\Lambda$, but any attempt to do so (under rather strong hypothesis) seems pointless, as $\Lambda$ is quite clear.<|endoftext|> TITLE: Motivating the Cross-Ratio and 'the ratio of ratio's' in $\mathbb{R}\mathbb{P}^2$ QUESTION [6 upvotes]: Trying to come across the idea of the cross ratio naturally by thinking about the projective plane $\mathbb{R} \mathbb{P}^2$, using ideas from Brannan's Geometry book: given 4 collinear points $A,B,C,D$ we note (for the vectors respresenting these points) that $$C = aA + bB$$ $$D = cA + dB$$ so that the cross ratio is defined as $(b/a)/(d/c)$. Why is this obvious, and why is it obvious it should be a projective invariant without big calculations? My guess is that you take $$C = aA + bB = a[A + (b/a)B] \sim A + (b/a)B$$ and $$D \sim A + (d/c)B$$ so that, for some reason, we want to define the discrepancy between these terms, i.e. we want to find the $\lambda$ that would turn $(b/a)$ into $(d/c)$: $$(b/a) = \lambda (d/c)$$ so that $$\lambda = (b/a)/(d/c)$$ tells us that, given two generators of a line ($A$ and $B$) we have a third point specified by the ratio $b/a$ and a fourth point specified by the ratio $d/c$ but because we just consider the factor that turns, for our given starting generators, a third point into a fourth point we expect it to be preserved when we project between lines: This is the best I can do to make the statement that the ratio of ratio's (Stillwell's 4 Pillar's book) is preserved under projections, because it seems to be saying you use this term to turn a third point into a fourth point given two staring points. Still not 100% clear on why it should also make sense when you start taking projections from points in perspective :\ Any thoughts? Any pictures to make it nicer? Any ideas on making the ratio of ratio's, along with the cross-ratio, more obvious? There are some great answers on here, such as https://math.stackexchange.com/a/1023055/82615 or https://math.stackexchange.com/a/627396/82615 but I have not come away with a child-like grasp of this yet, lets hope it can be achieved! REPLY [2 votes]: I'm sure I've seen this phenomenon before: one starts with some quantity defined in terms of coordinates and asks how anyone would think of that and how it could be natural. This is too "up close." For example, is it better to understand the determinant of a transformation on a real vector space as a change in volume, or is it more valuable to dig into a hideous mess of addition and multiplication of entries in matrices? (and why is this computation invariant under similarity transformations?!) Better to forget about the matrix and focus on the transformation and the single quantity (the determinant). The change of perspective that fixes this is to go a little more abstract and realize that the 'mysterious' definition in terms of coordinates is just a computational corollary of the abstract picture. Try this Here's another version of the definition of the cross-ratio: Let $A,B,C,D$ be coplanar in $\Bbb R^3$ such that $\langle A\rangle, \langle B\rangle, \langle C\rangle, \langle D\rangle$ are distinct subspaces, and $A+B=C$. Then there is a unique $\lambda\in R$ such that $\langle D\rangle =\langle A+\lambda B\rangle $. This scalar $\lambda$ is called the cross ratio of the $4$-tuple $(A, B; C, D)$. 
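As a concrete illustration of the coordinate-free definition just given, here is a small numerical sketch (an editorial addition, not part of the original answer; the particular vectors, the value $\lambda=3$, and the matrix $T$ below are arbitrary choices). It recovers $\lambda$ from four such vectors by solving $D=\alpha A+\beta B$ and taking $\lambda=\beta/\alpha$, and it previews the invariance under invertible linear maps discussed next.

```python
import numpy as np

# Recover the cross ratio lam from coplanar vectors A, B, C = A + B, D,
# where <D> = <A + lam*B>: solve D = alpha*A + beta*B and take beta/alpha.
def cross_ratio(A, B, D):
    coeffs, *_ = np.linalg.lstsq(np.column_stack([A, B]), D, rcond=None)
    alpha, beta = coeffs
    return beta / alpha

A = np.array([1.0, 0.0, 2.0])
B = np.array([0.0, 1.0, -1.0])
lam = 3.0
C = A + B                      # the normalisation required by the definition
D = A + lam * B                # so <D> = <A + lam*B>

print(cross_ratio(A, B, D))    # recovers 3.0

# An invertible linear map T preserves the relation T(C) = T(A) + T(B),
# and the recovered value is unchanged, matching the invariance argument below.
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
print(cross_ratio(T @ A, T @ B, T @ D))    # still 3.0
```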
By identifying these lines as points of the projective plane, we have defined the cross-ratio for collinear projective points. (I may have misremembered the position of $\lambda$ and accidentally adopted a nonstandard version of the cross-ratio: I'll have to check later.) We also know that since $PGL(3,\Bbb R)\cong GL(3, \Bbb R)/\Bbb R^\times$, by deciding upon a line at infinity, you can identify every projective transformation as a linear transformation on $\Bbb R^3$ working on homogeneous coordinates. If $T$ is this transformation, then by linearity $T(A)+T(B)=T(A+B)=T(C)$, and if $\alpha D= A+\lambda B$, then $\alpha T(D)=T(A+\lambda B)=T(A)+\lambda T(B)$, so that the cross ratio of $(T(A),T(B);T(C),T(D))$ is also $\lambda$. Thus $\lambda$ is invariant under the action of the projective group. Conclusion: From here, you get to the familiar definition with ratios of four numbers by fixing a homogeneous coordinate system and doing computations within the framework of homogeneous coordinates. As I mentioned at the outset, I'm not sure there is any real intuitive value in trying to make rationalizations of the quotient of quotients directly (beyond what you've already said). It's probably best to grok the single, coordinate-free quantity rather than the method used to find the quantity, just as one can understand the determinant of a transformation in terms of volume change rather than some weird computation with matrix entries.<|endoftext|> TITLE: Solve this problem on functions QUESTION [5 upvotes]: Let $f$ be a bijection from the set of non-negative integers to itself. Show that there exist integers $a$, $b$, $c$ such that $a < b < c$ and $f(a)+f(c)=2f(b)$. I don't know how to approach this problem. I think I'll have to show some kind of contradiction. But, I haven't managed to do anything fruitful yet. Any help would be appreciated. REPLY [2 votes]: We take $a$ such that $f(a) = 0$. Ad absurdum, we suppose: $\forall b>a, \forall c >b, f(c) \neq 2f(b)$ (*). As $f$ is a bijection, $2f(b) \in f(\mathbb{N})$; but $f(b) \neq 0$, so $2f(b) \neq f(b)$, and (*) rules out every $c > b$. Hence $2f(b) \notin f([b, +\infty))$. Therefore: $\forall b>a, \exists c < b, f(c) = 2f(b)$. By induction it is then clear that $\forall b>a, \exists k>0, \exists C \leqslant a, f(C) = 2^k f(b)$ (base case: $a+1$; inductive step on $b$). From that you can quickly deduce a contradiction, using that $f$ is a bijection (so $\{f(0),\dots,f(a)\}$ would have to be an infinite set). We conclude that: $\exists b>a, \exists c>b, 0 + f(c) = 2f(b)$.<|endoftext|> TITLE: Perfect Pairing, non-degeneracy and dimension. QUESTION [5 upvotes]: On this wikipedia entry https://en.wikipedia.org/wiki/Bilinear_form#Different_spaces it tells us that if $B: V \times W \to K$ is a bilinear map, then "In finite dimensions, [a perfect pairing] is equivalent to the pairing being nondegenerate (the spaces necessarily having the same dimensions)." My question is why does non-degeneracy imply that the induced linear map from $V$ to $W^*$ is an isomorphism? And why do $V$ and $W$ necessarily have the same dimension? REPLY [4 votes]: You need non-degeneracy in both arguments. For example, let $\phi\in V^\ast$ be non-zero and define $B\colon K\times V\to K, \,B(\lambda,v)=\phi(\lambda v)$. Then $B(\lambda,v)=0$ for all $v\in V$ implies $\lambda=0$, but $B$ is not a perfect pairing if $\dim(V)\geq 2$. Now assume that $B$ is non-degenerate in both arguments. Denote by $A\colon V\to W^\ast$ the operator given by $Av(w)=B(v,w)$ for $v\in V$, $w\in W$.
This operator is injective since $B$ is non-degenerate in the first argument: $Av=0$ iff $Av(w)=0$ for all $w\in W$ iff $B(v,w)=0$. In particluar, $\dim V\leq \dim W^\ast=\dim W$. Using non-degeneracy in the second argument, one obtains the converse inequality $\dim W\leq \dim V^\ast=\dim V$. Thus, $\dim V=\dim W$. Now basic linear algebra yields that $A$ is an isomorphism.<|endoftext|> TITLE: Can I compare real and complex eigenvalues? QUESTION [21 upvotes]: I'm calculating the eigenvalues of the matrix $\begin{pmatrix} 2 &0 &0& 1\\ 0 &1& 0& 1\\ 0 &0& 3& 1\\ -1 &0 &0 &1\end{pmatrix}$, which are $1$,$3$, $\frac{3}{2}+\sqrt{3}i$ and $\frac{3}{2}-\sqrt{3}i$. I wish to recognize the biggest and smallest of these. But how can I compare real and complex numbers? REPLY [3 votes]: Suppose that there was an order on $\mathbb{C}$ compatible with the natural order on $\mathbb{R}$. Then either $i>0$ or $i<0$. Assume that $i>0$, then multiplying this inequality by $i$ we find that $-1=i^2>0$, but that's a contradiction. Thus $i<0$. But then multiplying by $i$ we get $-1=i^2>0$ (the inequality flipped since we multiplied by a negative number). In both cases we arrive at a contradiction. Hence there's no order on $\mathbb{C}$ compatible with $\mathbb{R}$.<|endoftext|> TITLE: Equivalent definition of metric compatibility for a connection: what does $\nabla g$ mean? QUESTION [10 upvotes]: So the definition I know for metric compatibility is: $$Xg(Y,Z)=g(\nabla_XY,Z)+g(Y,\nabla_XZ),$$ which make sense, as $g(Y,Z)$ is a smooth function from the manifold to reals and we think of $X$ as a derivation. So now I read this apparent equivalent definition that says $\nabla g=0$. Can someone explain what this means? How can I do $\nabla$ of $g$ I thought $g_p$ is an element of $T_p^*M\otimes T_p^*M$ at every point. Furthermore after you explain the meaning of this can you show me that these two definitions are indeed equivalent? REPLY [17 votes]: It seems you are missing some necessary background, namely, how to extend a connection on $TM$ to all tensor bundles. I will summarise this construction. Given a connection \begin{align*} \nabla : \Gamma(TM) \times \Gamma(TM) &\to \Gamma(TM)\\ (X, Y) &\mapsto \nabla_XY \end{align*} on $TM$, there is an associated connection (which I will also denote $\nabla$) on $T^*M$ given by \begin{align*} \nabla : \Gamma(TM) \times \Gamma(T^*M) &\to \Gamma(T^*M)\\ (X, \alpha) &\mapsto \nabla_X\alpha \end{align*} where $(\nabla_X\alpha)(Y) := X(\alpha(Y)) - \alpha(\nabla_XY)$. With this definition, together with the definition $\nabla_Xf = Xf$ for a smooth function $f$, we see that the following identity holds: $$\nabla_X(\alpha(Y)) = (\nabla_X\alpha)(Y) + \alpha(\nabla_XY).$$ More generally, given a $(p, q)$-tensor $T$, we implicitly define the covariant derivative $\nabla_XT$, which is again a $(p, q)$-tensor, by the following equation: \begin{align*} \nabla_X(T(Y_1, \dots, Y_p, \alpha_1, \dots, \alpha_q)) =&\ (\nabla_XT)(Y_1, \dots, Y_p, \alpha_1, \dots, \alpha_q)\\ &+ \sum_{i=1}^pT(Y_1, \dots, \nabla_XY_i, \dots, Y_p, \alpha_1, \dots, \alpha_q)\\ &+ \sum_{j=1}^qT(Y_1, \dots, Y_p, \alpha_1, \dots, \nabla_X\alpha_j, \dots, \alpha_q).\qquad (\ast) \end{align*} One could instead consider the covariant derivative of $T$ as a $(p+1, q)$-tensor $\nabla T$ given by $$(\nabla T)(X, Y_1, \dots, Y_p, \alpha_1, \dots, \alpha_q) := (\nabla_XT)(Y_1, \dots, Y_p, \alpha_1, \dots, \alpha_q).$$ Now, $g$ is a $(2, 0)$-tensor. 
So if $Y$ and $Z$ are vector fields, $g(Y, Z)$ is a smooth function and hence $$\nabla_X(g(Y, Z)) = X(g(Y, Z)).$$ On the other hand, by $(\ast)$, $$\nabla_X(g(Y, Z)) = (\nabla_Xg)(Y, Z) + g(\nabla_XY, Z) + g(Y, \nabla_XZ).$$ Using these two equations, we see that $$(\nabla g)(X, Y, Z) = (\nabla_X g)(Y, Z) = X(g(Y, Z)) - g(\nabla_XY, Z) - g(Y, \nabla_XZ).$$ So $\nabla$ is compatible with the metric $g$ if and only if $(\nabla g)(X, Y, Z) = 0$ for all vector fields $X, Y, Z$ (i.e. $\nabla g = 0$).<|endoftext|> TITLE: Chern class of ideal sheaf QUESTION [7 upvotes]: Let $X$ be a smooth projective surface. Let $Z$ be a $0$-dimensional subscheme of length $l$. Suppose $I_Z$ is the ideal sheaf of $Z$. Then it is claimed that $c_1(I_Z) = 0$ and $c_2(I_Z) = l$. (1) Why is this true? (Here the Chern class is about a sheaf rather than vector bundles. Hence, we use a resolution of $I_Z$ by vector bundles to define its Chern classes. I want to use $0 \to I_Z \to \mathcal{O}_X \to i_*\mathcal{O}_Z \to 0$ to compute Chern classes. This means that at least I have to show that $c_2(i_*\mathcal{O}_Z) = -l$, which seems very weird to me because $i_*\mathcal{O}_Z$ is supported in dimension $0$.) (2) Besides, I want to know how to define the length of a zero-dimensional scheme (may be reduced)? Edit: I found that the post https://mathoverflow.net/questions/106148/chern-classes-of-ideal-sheaf-of-an-analytic-subset is related to this problem, but I am still looking for an elementary proof (just using the HRR theorem). REPLY [8 votes]: I just realized that this is a consequence of Grothendieck–Riemann–Roch. Let $i: Z \to X$ be the closed embedding; by GRR, we have $$ch(i_! \mathcal{O}_Z) td(X) = i_*(ch(\mathcal{O}_Z) td(Z)),$$ where $i_!\mathcal{O}_Z = i_*\mathcal{O}_Z$ because $i$ is an affine morphism and $R^ji_*\mathcal{O}_Z = 0$, for $j >0$. The left hand side of the formula is $$[0 + c_1(i_*\mathcal{O}_Z) + \frac 1 2 (c_1^2(i_*\mathcal{O}_Z) - 2c_2(i_*\mathcal{O}_Z))][1 + \frac 1 2 c_1(X) + \frac 1 {12} (c_1^2(X)+ c_2(X))],$$ while the right hand side is just $l \cdot 1$. By comparing the coefficients, we have $c_1(i_*\mathcal{O}_Z) = 0$ and $c_2(i_*\mathcal{O}_Z) = -l$. Finally, by using $0 \to I_Z \to \mathcal{O}_X \to i_*\mathcal{O}_Z \to 0$, and $$1 = c_t(\mathcal{O}_X) = c_t(I_Z)\cdot c_t(i_*\mathcal{O}_Z),$$ one can find that $c_1(I_Z)=0, c_2(I_Z) = l$.<|endoftext|> TITLE: What is an intuitive explanation of the combinations formula? QUESTION [9 upvotes]: I perfectly understand the permutations formula, i.e. if you have $n$ things, how many ways can you rearrange them if taken $k$ at a time (or if you have $k$ slots)? So you draw the following tree, and the formula comes out naturally. $$^nP_k = \frac{n!}{(n-k)!}$$ I get that it is including all of the duplicates, too. And to get rid of them we use the combinations formula: $$^nC_k = \frac{n!}{k!(n-k)!}$$ Can somebody explain why we have to divide by the "number of ways to re-arrange the slots" to get rid of the duplicates? I understand that it works and the division is used to scale a number down but I am not getting that "aha moment" of why we do it. Can somebody explain it? REPLY [2 votes]: Suppose I have a bag full of $n$ balls, and I want to choose $k$ of them. I pull $k$ balls out and put them to the side, in a nice line. I then tip the balls left in the bag out and arrange them in a nice line. In total I have $n!$ ways to arrange all of my balls, $k!$ ways to arrange the first line, and $(n-k)!$ ways to arrange the second line.
Therefore the number of ways to choose $k$ balls is $\frac{n!}{k!(n-k)!}$: The number of ways I can have $n$ items, divided by the number of ways I can take $k$ and the number of ways I can have $(n-k)$ remaining items arranged.<|endoftext|> TITLE: If $x+y+z=3k$, where $x, y, z, k$ are integers, prove that $x!y!z! \geq (k!)^3$ QUESTION [5 upvotes]: If $x+y+z=3k$, where $x, y, z, k$ are integers, prove that $x!y!z! \geq (k!)^3$ Well I was able to prove this intuitively, but what i need is a rigorous mathematical proof. I shall explain my intuitive proof. Let us take $L=x!y!z!$ at at $x=y=z=k$. Thus, $L=(k!)^3$. Now, to attain any other case, you would have to multiply by a number greater than $k$ and divide by a number less that $k$. Which in short means we would have to multiply by a number greater than $1$. This would imply that it's magnitude would become greater than $L$. Thus using this algorithm, all cases can be obtained and they would all be greater than or equal to $L$. Thus, $L$ is the lowest possible value. Well, this is easy to explain. The problem would be to write it down as a proper mathematical proof. Please help me achieve that... REPLY [2 votes]: Much more is true. If $(x_i)$ is a set of $n$ reals with $x_i \ge 1$, then $\left(\frac1{n}\sum x_i\right)! \le (\prod x_i!)^{1/n} $, or $\left(\frac1{n}\sum x_i\right)!^n \le \prod x_i! $. This is because the factorial function is log-convex. Here is the proof. Jensen's inequality states that if $f$ is a convex function ($f''(x) \ge 0$) then $f(\frac1{n}\sum x_i) \le \frac1{n}\sum f(x_i) $. The Gamma function and the factorial function are log-convex - their logs are convex. See here for a typical discussion: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.456.1448&rep=rep1&type=pdf Letting $f(x) =\log(x!) $, $\log((\frac1{n}\sum x_i)!) \le \frac1{n}\sum \log(x_i!) $. Exponentiating, $(\frac1{n}\sum x_i)! \le (\prod x_i!)^{1/n} $.<|endoftext|> TITLE: Prove partial derivatives exist, but not all directional derivatives exists. QUESTION [5 upvotes]: During my analysis course my teacher explained the difference between partial derivatives and directional derivatives using the notion that a partial derivatives looks at the function as approaching a point along the axes (in case of of the plane), and a directional derivative as approaching a point from any direction in the plane. He also explained that the existence of directional derivatives is a stronger notion than the existence of partial derivatives exists: if all directional derivatives exist, then the partial derivatives exist too. I am to show (not necessarily prove) a case where the partial derivatives exist, but not all directional derivatives exist (hence, f is not differentiable). REPLY [11 votes]: Consider for instance $$f(x,y)=\begin{cases} \frac{xy}{x^2+y^2}&\text{if }(x,y)\neq(0,0),\\ 0&\text{otherwise.}\end{cases}$$ The partial derivatives exist everywhere. Away from the origin this is clear. At the origin we need to calculate the partials from first principles: $$\frac{f(0+h,0)-f(0,0)}h=\frac{0-0}h=0$$ so letting $h\to0$ we find $f_x(0,0)=0$. By symmetry we also have $f_y(0,0)=0$. However, not all directional derivatives exist at the origin. For example, let $\mathbf v=(\frac1{\sqrt{2}},\frac1{\sqrt{2}})$. Then $$\frac{f(\mathbf0+t\mathbf v)-f(\mathbf0)}{t}=\frac{\frac{t^2}{t^2+t^2}-0}t=\frac1{2t}$$ which does not have a limit as $t\to0$.<|endoftext|> TITLE: If there are $74$ heads and $196$ legs, how many horses and humans are there? 
QUESTION [39 upvotes]: I was going through some problems when I arrived at this question, which I couldn't solve. Does anyone know the answer to this question? One day, a person went to a horse racing area. Instead of counting the number of humans and horses, he counted $74$ heads and $196$ legs. How many humans and horses were there? REPLY [42 votes]: Just in case you don't like algebra and don't think in terms of centaurs, here's yet another approach (very close to the centaur version but omitting the mythology). Suppose for a moment that all 74 heads belong to humans. Then there would be $2\times 74=148$ legs. That's 48 legs short of the specified number 196. To get that many extra legs, we have to replace some of the humans with horses. Every time we replace a human with a horse, we gain two legs, so we should do $\frac{48}2=24$ such replacements. We started with 74 humans, and we need to replace 24 of them with horses, so at the end we have 24 horses and $74-24=50$ humans.<|endoftext|> TITLE: Number of vertices of a random convex polygon QUESTION [5 upvotes]: Take $n>2$ random points, chosen independently with uniform probability on $[0,1]\times[0,1]$. What is the probability $P(n,k)$ that the convex hull of these points is a polygon with exactly $k$ vertices? Then things get really tricky! REPLY [6 votes]: This is a very broad question. $P(n,n)$ and $P(n,3)$ are known, and this allows $P(5,4)$ to be deduced. I'm not aware that any other values are known. For $k=n$, the answer is given in the dissertation Probability that $n$ random points are in convex position (Pavel Valtr, 1994): $$ P(n,n)=\left(\frac{\binom{2n-2}{n-1}}{n!}\right)^2\;. $$ For $k=3$, the event that the convex hull consists of three points is the union of the disjoint events that one of the $\binom n3$ triangles formed by $3$ of the $n$ points contains the remaining $n-3$ points. The probability for each of these events is the $(n-3)$-rd power of the probability that a point lies in the triangle formed by three points, which is the $(n-3)$-rd moment of the area of the triangle. These moments are given at MathWorld, and multiplying them by $\binom n3$ yields $$ P(n,3)=2^{5-n}\frac{(n-1)H_{n-2}+1}{n(n-1)^2}\;, $$ where $H_n$ is the $n$-th harmonic number. From $P(5,3)+P(5,4)+P(5,5)=1$ we can also deduce $P(5,4)=\frac59$. Here are the values for $n\le5$: \begin{array}{c|ccc} n\setminus k&3&4&5\\\hline 3&1\\ 4&\frac{11}{36}&\frac{25}{36}\\ 5&\frac5{48}&\frac59&\frac{49}{144}\\ \end{array} You can read more about them at Expected size of subset forming convex polygon and at How is the number of points in the convex hull of five random points distributed?, where the distribution $P(5,k)$ is derived in a different way.<|endoftext|> TITLE: Question about limit of cosimplicial diagram associated with a sheaf QUESTION [5 upvotes]: Let $F$ be a presheaf of sets on the usual topological site. Let $\left\{U_i\rightarrow U\right\}$ be a covering of $U$. On the second page of Hypercovers and Simplicial Presheaves the authors write that the equalizer of $$\prod_iFU_i\rightrightarrows\prod_{i,j}FU_{ij}$$ is in fact the limit of the entire cosimplicial diagram (degeneracy maps omitted) $$\prod_iFU_i\rightrightarrows\prod_{i,j}FU_{ij}\substack{\longrightarrow \\ \longrightarrow \\ \longrightarrow}\prod_{ijk}F(U_{ijk})\substack{\longrightarrow \\ \longrightarrow \\ \longrightarrow \\ \longrightarrow}\cdots$$ I'm trying to work this out using the concrete description of limits in the category of sets, but I'm confused and stuck at the very beginning.
I would like some guidance in this verification as well as geometric explanations of why we can truncate above level 1 and ignore degeneracy maps. REPLY [4 votes]: Here's a very concrete argument. A limit of the cosimplicial object is an $X$ which comes a priori with maps to each of the cosimplicial levels, but since $0$ is weakly initial in $\Delta$, only needs to be given a map $f:X\to \prod FU_i$, from which the other maps are determined. Now $f$ is equalized by the two face maps, so $f$ factors through the equalizer $Y$ of the truncated diagram above. On the other hand, try to define a cone with tip $Y$ over the whole cosimplicial diagram starting with $g_0=g:Y\to \prod FU_i$ using, say, the zeroth face maps $d_0^i$. So we get maps $g_i$ to each level of the cosimplicial diagram. We need to show each face map composes with $g_i$ to get $g_{i+1}$. This happens by a double induction on $k$ and $i$ because of the cosimplicial identity $d_k^i \circ d_{k-1}^{i-1}=d_{k-1}^i\circ d_{k-1}^{i-1}$. Now the degeneracy maps are handled because $s^i_kg^i=s^i_kd^{i-1}_kg^{i-1}=g^{i-1}$, using the cosimplicial identities again and the cone over the face maps. Thus $g$ induces a cone of $Y$ over the cosimplicial diagram, yielding a factorization of $g$ through $f$. These two factorization so combine in the usual way to show $g$ displays $Y$ as a limit of the whole cosimplicial diagram. All this was pure algebra of simplicial sets. Geometrically, the point is simply that the restriction of a section over $A$ to $A\cap B\cap C$ is the restriction of its restriction to $A\cap B$ or $A\cap C$, so if you have some sections which are consistent on twofold intersections, these last restrictions agree with the restriction of that over $B$ to $A\cap B, C$ to $A\cap C$ etc, so they're guaranteed to agree on threefold and higher intersections. In (higher) stack theory, the story no longer goes through: you choose isomorphisms between sections over twofold intersections, which must commute when restricted to threefold intersections, so you need a diagram truncated one level higher (or for higher stacks, you might need the entire diagram.)<|endoftext|> TITLE: What is the most general/abstract way to think about Tensors QUESTION [5 upvotes]: In their most general and abstract definitions as Mathematical Objects : A Scalar is an element of a field used to define Vector Spaces A Vector is an element of a Vector Space. Since a Scalar is a Tensor of rank-0 and a Vector is a Tensor of rank-1, then what Space are Tensors an element of? Can you even think of Tensors abstractly as elements of a Mathematical Space? REPLY [4 votes]: Rank one tensors, on a vector space $V$ over the scalar field $\Bbb F$, are linear maps $$V\to\Bbb F$$ and $$V^*\to\Bbb F,$$ where $V^*$ is the dual space of $V$. Rank two tensors are bilinear maps $$V\times V\to\Bbb F,$$ $$V^*\times V\to\Bbb F,$$ $$V^*\times V^*\to\Bbb F.$$ Rank three tensors are trilinear maps $$V\times V\times V\to\Bbb F,$$ $$V^*\times V\times V\to\Bbb F,$$ $$V^*\times V^*\times V\to\Bbb F,$$ $$V^*\times V^*\times V^*\to\Bbb F,$$ and so on.<|endoftext|> TITLE: Why is the image of the implicit function in the implicit function theorem not open? QUESTION [6 upvotes]: We have a continuously differentiable function $f$ from $\mathbb{R}^{n+m}$ to $\mathbb{R}^n$, and we find a continuously differentiable function $g$ which maps points from $\mathbb{R}^m$ into $\mathbb{R}^n$ (this function defines $x \in \mathbb{R}^n$ "implicitly" in terms of $y \in \mathbb{R}^m $). 
What regularity properties must the image of $g$ have as a direct result of the proof of the implicit function theorem (if any) in order to allow meaningful differentiable geometry (e.g. manifolds) to "happen"? In the proof of the Inverse (not implicit) function theorem, we need to care about the properties of the image of $f$. Yet for some reason the properties of the image of $g$ in the Implicit (not inverse) function theorem don't matter? Why is that? What properties, if any, of $g$ in the implicit function theorem, then, are important for the definitions of manifolds and tangent spaces and bundles? This question seems to address a related issue, but the discussion seems to assume that the image of g is open. However, Rudin does not mention this in his proof, nor does it seem to necessarily follow from his proof (we only show the existence of two open sets in $\mathbb{R}^{n+m}$ and one in $\mathbb{R}^m$, but I'm asking about the image of the latter as a set in $\mathbb{R}^n$). EDIT: "Isn't this unsatisfactory? How can we use the conclusions of the implicit function theorem if the implicit function is neither a homeomorphism nor a diffeomorphism?" --- I realize now that it can't be a diffeomorphism except for the case where $n=m$, because otherwise we would have a contradiction. Context: This is in some sense the sequel to a question I asked here. In that question I asked why we wanted the image of the continuously differentiable function $f$ to be open. The answer seemed to come down to having a sufficiently meaningful/useful definition of derivative apply to the inverse function so that we had in particular continuity of the inverse function and a homeomorphism. That having the image be open is necessary for this seems somewhat unclear, but appears to be justifiable using Brouwer's invariance of domain theorem. However, in Rudin's proof of the implicit function theorem, it does not seem like it is necessary that the image of the "implicit function" $g$ be open. Why don't we have the same problem for the implicit function theorem? Wouldn't we need the image to be open for the inverse image of some set under $g$ to be a manifold? (Assuming it is supposed to somehow be a manifold?) REPLY [5 votes]: The function $g$ is much less important than you seem to think it is. You don't actually care about $g$; the function you care about is $f$, and you are just using $g$ to get a simple description of what $f$ looks like. In particular, you never ask for the inverse image of a set under $g$ to be a manifold; you ask for the inverse image of a set under $f$ to be a manifold. For instance, when you take the inverse image of a single point $p$ under $f$, the implicit function theorem tells you that you get the graph of a differentiable function $g$, and the graph of any differentiable function is always a differentiable submanifold. Indeed, we know the graph is a differentiable submanifold because we can parametrize it by the function $\mathbb{R}^n\to\mathbb{R}^{n+m}$ sending $x$ to $(x,g(x))$. This has nothing to do with any special regularity properties of $g$ besides just being a function and being differentiable. Note that it is a highly nontrivial statement that there even exists such a function $g$: it says that locally, once you choose the first $n$ coordinates of a point of $\mathbb{R}^{n+m}$, there is always exactly one way to choose the last $m$ coordinates to get a point which $f$ sends to $p$. 
The content of the implicit function theorem isn't that the function $g$ is super-nice, but merely that it exists at all. It is $f$ which you want to be super-nice, and the mere existence of $g$ is all you need to conclude useful facts about $f$ (e.g., that $f^{-1}(\{p\})$ is a manifold).<|endoftext|> TITLE: Pairwise independence implies independence for normal random variables QUESTION [8 upvotes]: I'm reading a book on Brownian Motion. In the proof of the existence of such a random function (Wiener, 1923), the following is stated: Indeed, all increments $B(d)-B(d-2^{-n})$, for $d\in \mathcal{D}_n\setminus \{0\}$, are independent. To see this it suffices to show that they are pairwise independent, as the vector of these increments is Gaussian. The last part of this quote is the claim that pairwise independent normal variables from a Gaussian family are independent. Could anyone provide/direct me to a proof of this claim? Thanks! REPLY [9 votes]: Actually it is rather straightforward: Assume you have a Gaussian vector (so the joint distribution follows a Gaussian distribution) $$ \mathbf X=(X_1,X_2,\ldots,X_n)\sim N(\mathbf \mu,\Sigma) $$ where $\Sigma\in M^{n\times n}(\mathbb R)$ describes the covariance matrix. The density function looks like $$ f_{\mathbf X}(x_1,\ldots,x_n) = \frac{1}{\sqrt{(2\pi)^{n}\lvert\Sigma\rvert}} \exp\left(-\frac{1}{2}({\mathbf X}-{\mathbf\mu})^\mathrm{T}{\Sigma}^{-1}({\mathbf X}-{\mathbf\mu}) \right). $$ Now, we have (mutual) independence iff the density function factorizes. Since the random variables are pairwise independent (uncorrelated suffices), the covariance matrix is a diagonal matrix $$ \Sigma=\mathrm{diag}(\sigma_1^2,\ldots,\sigma_n^2) \text{ and }\mathbf \mu=(m_1,\ldots,m_n). $$ Since the above holds, the density indeed factorizes and we get $$ f_{\mathbf X}(x_1,\ldots,x_n)=\prod_{i=1}^n\frac{1}{\sqrt{2\pi{\sigma_i}^2}}\exp\bigg(-\frac{{(x_i-m_i)}^2}{2{\sigma_i}^2}\bigg) $$ and we have mutually independent Gaussians.<|endoftext|> TITLE: Local Truncation Error of Implicit Euler QUESTION [5 upvotes]: The LTE of an implicit Euler method is $O(h^2)$ because the method has order $O(h)$, but I'm not sure where to get started in proving this arithmetically. Any help would be appreciated. Thank you! REPLY [5 votes]: Part one. Before you read further, remember this golden rule: Compute error by expanding in Taylor series. Unfortunately, there are two competing definitions of truncation error. Your textbook (or class) appears to use one of them. I'll address this issue later. Your textbook/class definition of $\tau_{n+1}$ is $$h \tau_{n+1} := LHS - RHS$$ assuming that the exact solution $y$ is used. Here by $LHS$ and $RHS$, I mean the left-hand side and right-hand side of the finite-difference method. This gives you the first equation they have, which is $$ h \tau_{n+1} = y_{n+1} - y_n - h f(t_{n+1}, y_{n+1})$$ From here, you have to decide what you want to expand in Taylor series. Because $f(t_{n+1}, y_{n+1}) = y'(t_{n+1})$, it makes sense that we should expand a Taylor series around the point $t_{n+1}$. Since $y_{n+1} = y(t_{n+1})$, there's nothing to expand around $t_{n+1}$, so it follows that we should be expanding $y_n$ around $t_{n+1}$. Then, by Taylor's theorem, $$y(t_n) = y(t_{n+1}) + y'(t_{n+1})(t_{n} - t_{n+1}) + \frac{1}{2}y''(\xi) (t_n - t_{n+1})^2 $$ for some $\xi \in (t_n, t_{n+1})$. This simplifies to $$y_n = y_{n+1} - h f(t_{n+1}, y_{n+1}) + \frac{h^2}{2} y''(\xi) $$ Plug this into your expression for $\tau_{n+1}$ to obtain the result they wanted.
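As a quick numerical sanity check of the definition just discussed (an editorial aside, not from the original answer; the test problem $y'=\lambda y$ and all numerical values are my own choices), one can evaluate the residual $h\tau_{n+1}=y(t_{n+1})-y(t_n)-hf(t_{n+1},y(t_{n+1}))$ along the exact solution and watch it shrink like $h^2$, i.e. $\tau_{n+1}=O(h)$:

```python
import numpy as np

# Residual of the implicit Euler formula along the exact solution of y' = lam*y.
# From the Taylor expansion above, h*tau_{n+1} = -(h^2/2) * y''(xi), so the
# ratio h*tau / h^2 should approach the constant -(lam^2/2) * y(t) as h -> 0.
lam, t = -2.0, 0.3
y = lambda s: np.exp(lam * s)          # exact solution with y(0) = 1
f = lambda s, v: lam * v

for h in [0.1, 0.05, 0.025, 0.0125]:
    h_tau = y(t + h) - y(t) - h * f(t + h, y(t + h))
    print(f"h = {h:7.4f}   h*tau = {h_tau:+.3e}   h*tau/h^2 = {h_tau / h**2:+.4f}")
```

The printed ratio settles near a constant, which is exactly the $O(h^2)$ behaviour of $h\tau_{n+1}$ claimed in the question.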
Part two. This is an optional part of my answer, because of a pet peeve I have. The answer I gave above assumes that the truncation error is defined in the way that you gave in your original question. However, this is not how the truncation error should be defined, in my opinion. The correct definition should be $$ h\tau_{n+1} = y_{n+1} - \Phi(t_n, y_n),$$ where $\Phi(t_n, y_n)$ is the approximate value of $y_{n+1}$ obtained assuming we have the exact value of the true solution $y_n = y(t_n)$. For an explicit method, this definition gives the same truncation error computation as with the first definition. However, for an implicit method, the results differ, though to lowest order the two definitions will still coincide. Using this second (and, in my opinion, better) definition, we have $$ h \tau_{n+1} = y_{n+1} - y_n - h f(t_{n+1}, \Phi(t_n, y_n))$$ Applying Taylor series as before to $y_n$, we obtain $$ h \tau_{n+1} = h f(t_{n+1}, y_{n+1}) - h f(t_{n+1}, \Phi(t_n, y_n)) - \frac{h^2}{2} y''(\xi)$$ We then expand $f(t_{n+1}, \Phi(t_n, y_n))$ around the point $(t_{n+1}, y_{n+1})$, which yields \begin{align*} h \tau_{n+1} & = h f_y(t_{n+1}, \psi) (y_{n+1} - \Phi(t_n, y_n)) - \frac{h^2}{2} y''(\xi) \\ & = h^2 f_y(t_{n+1}, \psi) \tau_{n+1} - \frac{h^2}{2} y''(\xi), \end{align*} where $\psi$ is some unknown value between $y_{n+1}$ and $\Phi(t_n, y_n)$, and $f_y$ indicates the partial derivative of $f$ with respect to the second variable. Solving for $\tau$ yields $$ \tau_{n+1} = - \left(\frac{h}{2}\right) \frac{ y''(\xi) }{1 - h f_y(t_{n+1}, \psi)} $$ To lowest order in $h$ this expression is still the same as the previous result for $\tau_{n+1}$ using the other definition, since we can expand the denominator via geometric series and the first terms coincide.<|endoftext|> TITLE: What equation produces this curve? QUESTION [23 upvotes]: I'm working on an engineering project, and I'd like to be able to input an equation into my CAD software, rather than drawing a spline. The spline is pretty simple - a gentle curve which begins and ends horizontal. Is there a simple equation for this curve? Or perhaps two equations, one for each half? I can also work with parametric equations, if necessary. REPLY [3 votes]: I am a professional electrical engineer (according to the gubbermint of California), so I will give you an engineer's answer that is sure to be offensive to the math people here. Download a program called "Curve Expert". Just get the basic program, not the professional version. Input your data, and you will get a better fit than you want. From your graph, it could be a sinusoid, a polynomial, an inverse hyperbolic sine, a rational function, something built from the cumulative normal distribution, or some kind of logistic function. Curve Expert will help you decide what is best. You will probably fall in love with it and buy the license when the trial period expires. *** disclaimer: I have used Curve Expert for years, but have no financial, romantic, political, or other interest in it.<|endoftext|> TITLE: Which surface is homotopy equivalent to $\Bbb{R}^4$ minus the planes $x=y=0$, $z=w=0$? QUESTION [6 upvotes]: In completing an exercise I have shown that $\Bbb{R}^3$ minus the $x$-, $y$-, and $z$-axes is homotopy equivalent to the cube graph $Q_3$.
To visualize this, $\Bbb{R}^3-0$ is homotopy equivalent to $S^2$; then, remove the $x$-axis and we have a space equivalent to "a cylinder"; finally, remove the $y$- and $z$-axes and we have a punctured cylinder: expanding the punctures gives the graph. The next part of the question asks me to consider $\Bbb{R}^4$ minus the planes $x=y=0$, $z=w=0$. I tried a similar approach; however, $\Bbb{R}^4$ is much more difficult to imagine. Start with $\Bbb{R}^4-0\simeq S^3$. I think that removing the plane $x=y=0$ gives a surface in $\Bbb{R}^4$ somewhat like a cylinder but "enclosing" the $x=y=0$ plane. I have no idea how to make the next cut/puncture, however. I cannot think how to remove $z=w=0$ without leaving the resulting surface disconnected. Another idea I had was to try to construct something analogous to $Q_3$. Where $Q_3$ had eight vertices, one corresponding to each octant, perhaps my surface in $\Bbb{R}^4$ will have sixteen edges with thirty-two surfaces corresponding to the adjacent orthants (sharing a hyperplane). REPLY [4 votes]: $\newcommand{\Reals}{\mathbf{R}}\newcommand{\Cpx}{\mathbf{C}}$To come at Steve D's comment from a slightly different angle, it may help to view the complement of orthogonal coordinate planes in $\Reals^{4}$ as $\Cpx^{2}$ with the "axes" $\Cpx \times \{0\}$ and $\{0\} \times \Cpx$ removed. What remains is $\Cpx^{\times} \times \Cpx^{\times}$.<|endoftext|> TITLE: Isolating $y$ in $\sin(xy)=\cos(xy)$ QUESTION [6 upvotes]: Given $\sin(xy)=\cos(xy)$, what is the best way to isolate $y$? Since $\sin(\frac{\pi}{4}) = \cos(\frac{\pi}{4})$, it would seem intuitive to say that $xy=\frac{\pi}{4}$ and thus that $y=\frac{\pi}{4x}$. Is this the correct approach, or am I missing something important? REPLY [12 votes]: $$\sin(xy)=\cos(xy)\iff xy= \frac{\pi}{4}+k\pi,\quad k\in \mathbb Z$$ Thus, for $x\neq 0$, $$\color{red}{y=\frac{(4k+1)\pi}{4x}}$$
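A quick numerical spot-check of this solution family (an editorial addition, not part of the original answer; the sampled values of $x$ and $k$ are arbitrary):

```python
import numpy as np

# For x != 0 and integer k, y = (4k+1)*pi/(4x) gives xy = pi/4 + k*pi,
# which is exactly where sine and cosine agree.
for x in [0.7, 2.0, -3.5]:
    for k in [-2, 0, 1, 5]:
        y = (4 * k + 1) * np.pi / (4 * x)
        assert np.isclose(np.sin(x * y), np.cos(x * y))
print("sin(xy) = cos(xy) holds for every sampled (x, k)")
```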